Cisco SDA
LTRCRS-2810
Speakers:
Larissa Overbey and Derek Huckaby
Scenario
In this lab activity, you will implement the following in Cisco DNA Center SD-Access:
Executive Summary
Digitization is transforming business in every industry – requiring every company to be an IT
company. Cisco’s Digital Network Architecture (DNA) is an open, software-driven architecture built
on a set of design principles with the objective of providing:
- Insights and Actions to drive faster business innovation
- Automation and Assurance to lower IT costs and complexity while meeting business and
user expectations
- Security and Compliance to reduce risk as the organization continues to expand and grow
SD-Access Benefits
SD-Access Overview
SD-Access Lab Work Flow
The lab guide begins with the assumption that the switching underlay is provisioned and that network and DNA Advantage licensing is installed. The lab environment runs DNA Center version 1.1 with SD-Access version 2.1.0.64153 and ISE version 2.3. For simplicity, this validation guide walks users through a University campus deployment scenario.
1. The lab begins with a quick overview of the SD-Access apps, tools and getting familiar with
DNA Center.
2. DNA Center will be used to discover the underlay devices and view them using the Discovery and Inventory tools; the Topology tool will then be used to display the devices.
3. Next, sites will be created using the Design app, where common attributes, resources, and
credentials are defined for re-use during various DNA Center work flows.
4. Then there is a quick overview of the Policy app, verification of the DNA Center and ISE
integration.
5. Virtual Networks are then created in the Policy app, creating network-level segmentation.
During this step, the scalable groups learned from ISE will be associated with the virtual
networks, creating micro-level segmentation.
6. Next, the Policy Administration process is used to create centralized security policies for the
University scenario. The policies are verified using ISE.
7. Returning to DNA Center, the discovered devices will be provisioned to a site.
8. Next, the overlay fabric will be provisioned.
9. To wrap up, the end hosts will be onboarded.
10. Windows clients will be modified to use Dot1x authentication to join the overlay.
11. IP connectivity and security policy enforcement will be tested.
Once complete, participants can move to the additional labs in this series for Border External
Connectivity and SD-Wireless.
Lab Topology
The SD-Access Lab Guide is based on the following topology, which utilizes a Catalyst 6800 for the
Border and a Catalyst 3850 as a Control Plane and two additional Catalyst 3850s for the Edges. An
ASR1000X is used as the Fusion router to reach the WLC and shared services. A WLC 5520 and a
Wave2 AP are used to demonstrate SD-Wireless.
Physical Topology
Logical Topology
Accessing the lab
To access the lab, students will need to use a Remote Desktop connection to a pod-specific jump
host. The jump host is used to allow remote access to all lab devices within a given pod. Please use
the following table to access your Pod. (The Instructor will assign you a Pod number.)
The table below provides the access information for the devices within a given pod.
IP Address Common name User Password
Welcome to DNA Center
user: “admin”
password: “Dna@labrocks”
Once logged in, you will see the DNA Center dashboard.
To view the DNA Center version click on the gear at the top right and select About DNA
Center.
The DNA Center main screen is divided into two main areas, “Applications” and “Tools”,
which contain the primary applications for creating and managing an SD-Access
deployment.
Have a look at the other links within DNA Center. For example, the System Settings pages control how the DNA Center system is integrated with other platforms, users, and applications, as well as system backup and restore.
While using DNA Center, you can easily transition from any application back to the home
page by clicking the Applications (Apps) button located at the top right of every window or
the Cisco DNA Center logo on the top left.
DNA Center to ISE Integration
The Cisco Platform Exchange Grid (pxGrid) is a multivendor, cross-platform network system that
pulls together different parts of an IT infrastructure. Cisco pxGrid provides an API which is secured
via an SSL certificate system. DNA Center has automated the certificate process to allow users to
simply and easily integrate DNA Center to ISE in a secure manner.
From the gear menu, select System Settings, which opens the System 360 tab by default. Locate the External Network Services: Identity Service Engine panel and select "Configure settings".
This will bring you to the Settings - Authentication and Policy Servers screen under the Settings tab. Click the Add button to open the right side panel for adding an AAA/ISE Server.
Populate the AAA/ISE server 1 details. Be sure to select the Cisco ISE slider to show additional fields. Use the following details to assist with field population.
Click Apply.
At the bottom of the page a pop-up will appear to confirm the ISE settings
have been added.
Use ISE to validate the connection is Online and
has subscribers. To do this, log into ISE as the
admin user.
user: “admin”
password: “Dna@labrocks”
Once authenticated, go to Administration -> pxGrid Services. On this page a new client will
appear as “Pending” and Total Pending (1).
Note: The Integration process typically takes 2-5 minutes to complete. If you log into ISE quickly you may
need to refresh the page to see the new subscriber appear.
After a moment the client will change to an “Online” status with 3 Subscribers.
There will also be a green connection bar at the bottom of the page
Return to DNA Center and verify the “System Settings” shows the AAA/ISE Server as
“ACTIVE”.
Congratulations! DNA Center is now successfully integrated with the Identity Services Engine.
Discovery, Inventory and Topology
Before creating a Discovery profile and running it, please take a moment to look at the underlay configuration of the Catalyst switches, specifically the C6807 and C3850-2 devices. You can access the devices by opening Putty.exe from the desktop. You will see console access for each device in your PuTTY session.
There should not be any VRF or LISP configuration, as this is pushed from DNA Center to the devices during provisioning, after discovery and design.
C3850-2#sho run vrf
ISIS is the underlay routing protocol, so its configuration can be seen and verified.
C3850-2#sho run | sec isis
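As a rough illustration only (the NET, interface list, and IS-IS options shown here are placeholders and will differ in your pod), the output resembles something like:
router isis
 net 49.0000.1921.6800.0003.00
 is-type level-2-only
 metric-style wide
 nsf cisco
!
interface Loopback0
 ip router isis
!
interface GigabitEthernet1/0/13
 ip router isis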
Using the Discovery Name “SDA Lab”, type in the Catalyst 6807 loopback address
192.168.0.1 in the “IP Address” field. Also, change the Preferred Management IP
to “Use Loopback”.
Note: The IP address could be any L3 interface or Loopback on any switch that DNA Center can access.
Click on Add Credentials to expand the side panel for adding device credentials
Use the table below to populate the CLI and SNMPv2c credentials.
Once populated, the credentials will appear in the center of the page.
The last step is to enable Telnet as a discovery protocol for any legacy devices not
configured to support SSH. To do this, scroll down the page and open the “Advanced”
section. Once opened, select the Telnet protocol.
Click on “Start” in the upper right-hand corner. To view the discovered devices, simply click
on the devices found icon. This will also open a panel on the right, to display the devices
and their discovery status.
In a moment, the screen will display devices appearing on the right as they are discovered.
Note: At this point you can proceed to the Design section. This will allow you to continue the lab while the
Discover process is working. If you do, be sure to return to this section before proceeding to Policy.
By default, DNA Center will make a few changes to the discovered devices. Please have
another look at one of the access switch configurations to see the changes made.
C3850-2#show archive config differences flash:underlay system:running-config
!Contextual Config Diffs:
+device-tracking tracking
+device-tracking policy IPDT_MAX_10
+limit address-count 10
+no protocol udp
+tracking enable
interface GigabitEthernet1/0/2
+device-tracking attach-policy IPDT_MAX_10
<..snip..>
interface GigabitEthernet1/0/45
+device-tracking attach-policy IPDT_MAX_10
interface GigabitEthernet1/0/46
+device-tracking attach-policy IPDT_MAX_10
<..snip..>
interface FortyGigabitEthernet1/1/1
+device-tracking attach-policy IPDT_MAX_10
interface FortyGigabitEthernet1/1/2
+device-tracking attach-policy IPDT_MAX_10
line con 0
+length 0
IPDT (IP Device Tracking) keeps track of connected hosts using unicast ARP probes. It is used by multiple services; for SD-Access, it supports Cisco TrustSec, MAB, and the 802.1x session manager.
Note: Device tracking is not added to ports with end hosts physically attached. In this lab port G1/0/1 and
G1/0/11 have hosts connected to them.
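If you want to see what IPDT learns once hosts begin authenticating, a quick check on an Edge switch is shown below (a hedged example; the database will be empty until end hosts are onboarded later in the lab):
C3850-2#show device-tracking policy IPDT_MAX_10
C3850-2#show device-tracking database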
Please save the Border, Control Plane, and Edge node configurations for future
comparisons.
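One simple way to do this is to copy the running configuration to flash with a descriptive filename so it can be compared later with show archive config differences. The filename below is only an example; later sections will ask you to repeat this with names such as "site", "provision", and "fabric".
C3850-2#copy running-config flash:discovery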
Inventory App and Device Roles
Once the devices are discovered, use the Apps button to open the Inventory app to
view the lab devices.
Within the Inventory page all devices should be Reachable and Managed. In some cases the status "In Progress" will appear, which is fine; it indicates DNA Center is actively updating information about the device.
Device Roles are used to position devices in the DNA Center Topology maps. Use the
Layout setting dots to add the Device Role column to the table and Apply.
Using the Device Role column, verify the lab devices have the expected role. These are
auto-discovered, but typically need to be adjusted slightly for a given network topology.
In the example below the Catalyst 6807 has an "ACCESS" role, because it has end hosts locally attached; however, it should have a "CORE" role. To correct it, simply click on the role and select the appropriate one, in this case "CORE". Again, this selection only affects topology map positioning. It does not add or modify any configuration with regard to the device role selected.
Note: The following table shows the mapping order for each role type:
Viewing the Lab Topology
Once you have verified the Device Roles are correct, use the Topology application to view the EFT lab network.
Use the Apps button to open the Topology app to view the lab devices. You can use
the zoom feature in the lower right to zoom in on the map.
From here you can see the Topology app groups devices based on role within the map. To expand the devices, click on them and then un-pin them from each other.
Once un-pinned, the details of the device will be shown in the right panel. Clicking any
blank area in the map will close the right panel and center the map.
Notice the filter window in the upper right allows users to search or filter for devices, which is helpful in large topology maps.
Feel free to interact with the Topology map further to get a feel for it.
Once done please continue to the Design section.
DESIGN
Get Started using DNA Center Design
DNA Center provides a robust Design application to allow customers of every size and scale to
easily define their physical Sites and common resources. This is implemented using a hierarchical
format for intuitive use, while removing the need to redefine the same resource in multiple places
when provisioning devices.
Note: The browser used to configure DNA Center must have Internet connectivity for the maps to appear.
Use Add Site to create a new site called California, and then click Add.
Next, create another site by clicking the gear icon next to California and selecting Add Site.
Ensure the Parent node is California. Name the site San Jose, and then click Create.
Open California, and then next to San Jose select the gear to add a building where the network devices will reside. Let's use SJC05 as the building name.
For the Address, type in “325 E Tasman”. As you enter the address, you will see the Design App
narrowing down the known addresses to the one you enter. When you see 325 E Tasman Dr
appear in the window below, select it. The benefit of selecting a known address is that you will not
have to look up and provide the longitude and latitude coordinates, as they are automatically provided.
Expand San Jose and once more use the gear icon next to SJC05 to add a building floor. Use floor name "SJC05-1" and select Indoor High Ceiling for Type.
Modify the width to 300. You will notice, based on the image uploaded, the length will
change. Modify the height to 15. Click Add.
Once loaded you should see something similar to the screen below:
Note: The Floor plan extends into the Network Hierarchy panel on the left. Mouse over the light gray frame (called out below) and slide right to expose the site hierarchy once again.
Defining Network Settings
Network services
DNA Center allows you to save common resources and settings with Design’s Network Setting
application. As we described earlier, this allows information pertaining to the enterprise to be stored
so it can be reused throughout DNA Center.
From the Network Hierarchy page, you can get to Network Settings using the menu bar.
Once opened, you will see a list of default settings, which are typical in every network environment.
Production SD-Access deployments will require AAA, DHCP, and DNS servers to be configured.
For Assurance, SYSLOG, SNMP, and Netflow can be configured using the DNA Center’s IP.
Use the Add Servers button to add AAA, Netflow and NTP servers to DNA Center.
For AAA Server, select both “Network” and “Client/Endpoint” check boxes. This will add two
new rows of settings respectively. While DNA Center supports network clients using
TACACS and Clients/Endpoints using RADIUS, in this lab we are using RADIUS for both.
Under Network, select the Server type as “ISE” and use the pull down to select the
integrated ISE server IP Address (172.26.204.121).
Also under Network, select the Protocol as “RADIUS” and use the pull down to select the
ISE server IP (172.26.204.121).
Configure the remaining servers according to the table below.
Server     IP Address
DHCP       10.172.99.11
DNS        10.172.99.251 (domain: cisco.com)
SYSLOG     172.26.205.100
SNMP       172.26.205.100
Netflow    172.26.205.100 (port: 2055)
NTP        172.26.204.254
Select PST for the time zone and provide a message of the day for the lab devices. When finished, store the settings by clicking "Save" at the bottom of the page.
Popups will appear at the bottom of the page to indicate the settings are saving and another one when the save is successful.
Device Credentials
The next step is to define site device credentials. Using the menu bar, select Device Credentials.
From here you will see the credentials defined previously in the Discovery Tool. If you would like to
add credentials for additional devices to test out the UI, you can at this time.
Any additional credentials will be accessible in the Discovery tool if there are more devices to discover in the future.
Creating IP Pools
DNA Center supports both manually entering IP address allotments and integrating with IPAM solutions, such as Infoblox, to learn of existing IP addresses already in use.
For validation, we will simply manually define the IP Address Pools we require. Using the
menu bar, select IP Address Pools.
Using the table below create the four IP Pools to be used for wired validation. You are
welcome to add additional IP Pools as well, but please add these four to work with the
validation in the latter part of the guide.
When complete, the DNA Center IP Address Pools for Global should look similar to the page below:
Image Management (Do Not Attempt in This Lab)
The Image Management area allows users to roll out new device images to help standardize device images within the network.
Note: The actual upgrading of images is not supported in this specific lab. The current images are already
installed. This section is provided strictly as reference for those interested in the workflow.
Clicking on Image Management from the menu bar will allow you to see the images running
on the previously discovered devices.
Click the Import Image/SMU button to load the ISO for IOS version 16.6.1, which can be
found in the Documents folder.
Once selected, Import the IOS file.
Once the IOS is imported, you can scroll down to the Catalyst 3850-1 device which is
running the older EFT image. Use the pull down to select the IOS 16.6.1 image as shown:
Note: You may need to refresh the page to see the image has been updated as Golden
Select the Provision Application from the top of the page.
Once on the Provision page you will see which devices are running an outdated image, and you are allowed to select the device you want to update.
For this lab, please select the C3850-1.cisco.com.
On the next screen, simply click Update and acknowledge the switch will reload.
After this point, the upgrade will begin. Notice a green status message will appear at the bottom of
the screen to confirm the upgrade has started
Note: The upgrade can take a while and DNAC takes even longer to be updated. You can verify the progress
and completion by accessing the switch console from Putty. You may need to refresh the screen on
DNAC to view progress on the UI.
After some time, the C3850-1 switch will be upgraded. Please check back after the Policy section to
see that it has completed.
Network Profiles
Network profiles have been added to allow site specific customization of network device
configurations.
Feel free to look around in this area. The example screen captures below show how network profiles
can be created through templates, and illustrate the workflow for creating router profiles.
This completes the Design section of the SD-Access Lab.
POLICY
SD-Access Scalable Group Tag Creation
User groups can be associated with a Scalable Group Tag (SGT). SGTs can be carried
throughout the network and are the basis for access policy enforcement under Cisco DNA. Follow
the steps below to define the SGTs for “Research”.
Once it loads, you will see an informative dashboard for Policy and a history of policy
related actions. Click on Registry from the Policy home page.
Here you will see all the default scalable group tags pushed from ISE.
Click on Add Groups.
This will launch a new tab in your browser that will connect to ISE. From here click on the
Add button above the table header.
Give the new SGT a name of “Research”, choose a different icon, and add a description
of your choice. Click Submit at the bottom of the page to save the new group.
Return to DNA Center and refresh the table. Go to the second page to verify the
“Research” SGT has been learned by DNA Center.
SD-Access Network Segmentation
The Policy app supports creating and managing Virtual Networks, Policy Administration, Contracts
and Scalable Groups using the Registry. Most users will want to set up their SD-Access Policy
(Virtual Networks and Contracts) before doing any SD-Access Provisioning. In this section, we will
segment the overlay network using the DNA Center Policy app. This process virtualizes the overlay
network into multiple self-contained Virtual Networks. By default, any network device or user within
the Virtual Network is permitted to communicate with other users and devices in the same Virtual
Network. To enable communication between Virtual Networks, traffic must leave the Fabric Border
and then return, typically traversing a firewall or fusion router.
This validation will simulate deploying SD-Access in a University. This allows us to show SD-Access
virtualization and segmentation between well understood groups and entities.
This will modify the window layout so that a new Virtual Network can be defined. Create a Virtual Network; for this example the guide will use Campus.
Before saving, populate the newly created Virtual Network with Groups. This will allow you to further segment the traffic within a given Virtual Network.
Once the SGTs have been moved, save the new Virtual Network.
Repeat the steps above to create a new Virtual Network called Guest and move the Guests and Network_Services groups into it. Here is what it should look like once you are done:
SD-Access Policy Administration
Once SD-Access has been segmented into virtual networks, we can define the security policies.
DNA Center will allow you to deny or explicitly allow traffic between Groups within Virtual Networks.
In the following steps, we will show how SD-Access will be provisioned to establish security policies
with just a few clicks within DNA Center.
From the Policy -> Virtual Networks page, use the menu to select Policy Administration.
Here you will see the Scalable Groups that were added to DNA Center from ISE.
These are ISE’s default scalable groups and a few created for this lab.
Use the Policy Access Table at the beginning of this section to create the policies. The first policy
“DenyEmployees” denies traffic sourced by Employees and destined for PCI_Servers or Students.
First define the Policy Name as “DenyEmployees”. Next, select Employees and drag it to
the Source box on the right-hand side. Then, drag PCI_Servers and Students to the
Destination box on the right-hand side.
Confirm the policy is correct then click on “Save”.
Access the TrustSec Matrix on ISE by clicking on Advanced Options. This will launch a
window in your browser, arriving at the ISE TrustSec Matrix.
Note: Please be patient, as it takes a few moments to build the TrustSec Policy Matrix.
You will see the recently created DenyEmployees policy in Red. Note this is a “Deny IP” policy.
You may need to choose a different View option to see all of the policies that have been transferred
from DNA Center to ISE. We recommend that you try View > Condensed with SGACL Names, which
removes the SGT value from the table headers.
Here is the table with the remaining policies for your convenience:
Apply a Layer-4 Custom Contract
We will now add Layer-4 policies using the table below:
To allow certain applications, a custom contract is required. This section will show you the steps to create the new contract and then walk you through applying it to the Faculty and PCI Servers groups.
Navigate to the Contracts screen within the Policy app to create a custom contract.
Here you will see the default Layer-3 permit and deny contracts. Click “ Add Contract” on
the top right of this page to open the custom contract dialog.
Within the dialog, provide the name “SecureAccessOnly” and the description “Allow SSH,
HTTPS Implicit Deny” for the new contract.
Select Permit from the Actions. In the Classifier, type ssh to select the SSH TCP Port 22
protocol. Be sure to type this in lowercase. Click Add.
Do the same for selecting and permitting the “https” protocol. Again, this needs to be in
lowercase.
Note: If you have a blank entry, the Contract will not be saved. Delete the blank entry before saving, or your settings will be lost.
Saving takes you back to the main contracts page where you can verify the "SecureAccessOnly" contract was indeed added.
Return to the Policy Administration page and create a new policy. Name the Policy
“RestrictFaculty” and drag the Faculty group to the Source box.
Click on “Add Contract”.
Upon completion DNA Center returns you to the Policy Administration page where you can
verify the saved policy now resides in the policy table.
Once again use the “Advanced Options” button to return to the ISE TrustSec Policy Matrix.
In the new view, you should clearly see new blue filled cells indicating Layer-4 ACEs have
been applied.
A better way to see the information is to double click within a blue cell. This will display the
Security Group ACL that was applied by DNA Center.
To view the details of the Security Group ACL simply go to "Work Centers" -> "TrustSec" -> "Components" -> "Security Group ACLs".
Using the workflows described in this section, complete deploying the Layer-4 custom contracts and policies.
When complete you will see the ISE TrustSec Policy Matrix similar to the one below:
PROVISION
SD-Access Overlay Provisioning
When opening the Provision app, you will be placed on the Devices page. From here you can begin
the provisioning process by selecting devices and associating them to the sites previously created
during the Design phase.
Select all the devices that will become the Fabric Border (C6807), Control Plane (C3850-1)
and Fabric Edges (C3850-2 and C3850-3). Do not select the Fusion Router (ASR1001X) or
the Intermediate Node (C4500).
Once the devices are selected, use the “Selected Devices” pull down to “Add/Remove Site”.
Type in the name of the site where the devices are deployed, "SJC05". Note the name will auto-complete from the list of sites defined previously in the Design application.
Upon adding the devices to the site, the Device Inventory page will reappear and popups
will appear indicating the devices are being added to the site.
When a device is added to the site, DNA Center updates its internal database to associate the SYSLOG, SNMP, and Netflow servers and configures the switches with this information.
Use the show archive command to view the changes DNA Center has made to the devices
added to the site.
Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename “site”.
Provision Devices to the Site
As mentioned in the previous section, the common workflow will be to simply provision a device to a
site. This single workflow step is made up of several components, allowing DNA Center to configure
all of the network settings to the devices according to the Design for the site.
As before, select the devices of a given type that were previously added to the site and then select "Provision" from the "Actions" pulldown.
Note: You can only simultaneously provision devices of the same Device Type.
Here you can see the step we did separately is actually part of the Provisioning workflow. Verify all the devices are in the correct site, and click Next.
The “Configuration” step is blank at present so please skip past it by clicking “Next”.
The “Advanced Configuration” page is where templates are applied. Select the “conlen0”
template and click Next to continue.
On the Summary page, one can review the site specific changes DNA Center will push to
the selected devices. Please make sure these are accurate before clicking on “Deploy” at
the bottom of the page to continue. Notice the C6807 device does not have the template
“conlen0” applied as expected.
DNA Center allows users to deploy devices immediately, or to schedule the deployment to
sometime in the future. For the lab, select Run Now as shown below and Apply.
DNA Center will redirect you back to the devices page. As before, status messages will be
displayed on the Inventory screen as devices start and complete provisioning.
Starting
Completed
Note: SD-Access only requires the fabric nodes to be provisioned to a site. Although other devices such as
intermediate nodes and fusion routers can be provisioned, they are not required to be.
When a device is provisioned, DNA Center updates its internal database to include AAA, Dot1x, and Cisco TrustSec settings and then configures the devices to enable a secure communication channel with the device from this point forward.
Use the show archive command to view the changes DNA Center has made to the devices
added to the site.
Please save the Border, Control Plane, and Edge node configurations for future comparisons using the filename "provision".
Create and Manage Fabrics
Once DNA Center has provisioned the SD-Access devices to sites, the SD-Access fabric can be
created.
Use the menu to select “Fabric”. After a momentary delay, you will be taken to a new page
for creating and managing SD-Access fabrics.
Click the add button to open a right panel for creating fabrics.
Select “Campus” and provide a name for the fabric. The guide will use a fabric name
“University”, but any name can be provided. Be sure to click “Add” to create the fabric.
A message will appear in the lower right corner of the page to indicate a new fabric is created, and the new fabric will appear on the Provisioning -> Fabric page.
Provisioning the Fabric
Click on the new “University” fabric to open it. This will bring up a topology of the network
by default.
Note, clicking the white box will display the network in tabular form, if desired.
Click the green box with a directional arrow to rotate the topology horizontally.
After clicking in the marked area above, you will see the topology rotate horizontally as shown below:
Note: If you see groups, or if devices are not in a similar topology, click on the group or out-of-place object and select Device Role. Change this accordingly and the graph will automatically update, rearranging the devices based on the assigned role.
Holding the shift key and the left mouse button, use your mouse to highlight the C3850-2 and C3850-3 access switches that will be provisioned as Edge nodes.
When you release the left button, you will be able to add both switches to the fabric. By
default, any devices added to the fabric will be considered Edge nodes.
Border and Control Plane nodes must be explicitly defined. Click the C3850-1
and select “Add as CP”.
The last step is to select a border and configure it. Click the C6807 and
select “Add as Border”.
This will open a dialog box where you can set the Border
Parameters.
Use the values below for the configuration:
• Border Handoff:
o Layer 3
o VRF-Lite
o Infrastructure(192.168.254.0/24)
Upon clicking "Add Interfaces", a dialog will open where you can specify the SDA Fabric border external connectivity parameters. DNA Center will use these parameters to automate the border handoff to the fusion router.
Scroll through the configuration and click “Add” at the bottom of the pane.
Click Save on top right of the screen.
Note: The save process takes a few minutes, typically less than five, so be patient.
After the devices have been provisioned to the Fabric, the nodes are represented with blue
icons.
As soon as the “Save” button is clicked at the top left of the page, DNA Center will push
Virtual Networks and Control Plane (LISP) configurations to the fabric nodes. Log in to each
node to see the newly added configurations. You should see the following new configuration
entries.
router lisp
locator-set rloc_5d4bb509-dfe0-4ec4-a3d4-9d963e27a141
IPv4-interface Loopback0 priority 10 weight 10
auto-discover-rlocs
exit
!
map-server nmr non-site-ttl 1440
eid-table default instance-id 4097   <-- LISP instance for INFRA_VN (aka the GRT)
remote-rloc-probe on-route-change
exit
!
Notice the Map Server and Resolver on the Border point to the C3850-1 CP’s loopback0 interface.
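For example, on the C6807 you should see lines similar to the following. This is a sketch based on the Edge node output shown later in this guide; the authentication key and locator details will differ in your pod.
router lisp
 service ipv4
  encapsulation vxlan
  itr map-resolver 192.168.100.100
  etr map-server 192.168.100.100 key uci
  etr map-server 192.168.100.100 proxy-reply
  etr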
Control Plane (C3850-1):
Jan 23 18:25:39.455: %SYS-5-CONFIG_I: Configured from console by cisco on vty0 (172.26.205.100)
Jan 23 18:25:45.528: %SYS-5-CONFIG_I: Configured from console by cisco on vty0 (172.26.205.100)
router lisp
locator-table default
service ipv4
encapsulation vxlan
sgt
map-server
map-resolver
exit-service-ipv4
!
service ethernet
map-server
map-resolver
exit-service-ethernet
!
instance-id 4097
service ipv4
eid-table default
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
instance-id 4098
service ipv4
eid-table vrf DEFAULT_VN
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
instance-id 4099
service ipv4
eid-table vrf Campus
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
instance-id 4100
service ipv4
eid-table vrf Guest
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
map-server nmr non-site-ttl 1440
site site_uci
description map-server configured from apic-em
authentication-key uci
eid-record instance-id 4097 0.0.0.0/0 accept-more-specifics
eid-record instance-id 4098 0.0.0.0/0 accept-more-specifics
eid-record instance-id 4099 0.0.0.0/0 accept-more-specifics
eid-record instance-id 4100 0.0.0.0/0 accept-more-specifics
exit-site
!
Notice the iBGP configuration automated by DNA Center to redistribute LISP routes from the CP to the Fabric Border (C6807).
Fabric Edge Nodes (C3850-2/C3850-3):
C3850-2#show run | section lisp
router lisp
locator-table default
locator-set rloc_a1cd1e6e-2ed6-41ae-8db8-ccc12f56c3fd
IPv4-interface Loopback0 priority 10 weight 10
exit-locator-set
!
locator default-set rloc_a1cd1e6e-2ed6-41ae-8db8-ccc12f56c3fd
service ipv4
encapsulation vxlan
map-cache-limit 25000
database-mapping limit dynamic 5000
itr map-resolver 192.168.100.100
etr map-server 192.168.100.100 key uci
etr map-server 192.168.100.100 proxy-reply
etr
sgt
proxy-itr 192.168.120.2
exit-service-ipv4
!
service ethernet
map-cache-limit 25000
database-mapping limit dynamic 5000
itr map-resolver 192.168.100.100
itr
etr map-server 192.168.100.100 key uci
etr map-server 192.168.100.100 proxy-reply
etr
exit-service-ethernet
!
instance-id 4097
remote-rloc-probe on-route-change
service ipv4
eid-table default
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!
instance-id 4098
remote-rloc-probe on-route-change
service ipv4
eid-table vrf DEFAULT_VN
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!
instance-id 4099
remote-rloc-probe on-route-change
service ipv4
eid-table vrf Campus
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!
instance-id 4100
remote-rloc-probe on-route-change
service ipv4
eid-table vrf Guest
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!
Please save the Border, Control Plane, and Edge node configurations for future comparisons using the filename "fabric".
Note: In this lab topology, DNA Center and ISE are connected directly to the Fabric Border (C6807).
Prior to the DNA Center border provisioning step, we had configured the ISIS underlay between the C6807 and the Fusion Router, so that the Control Plane (3850-1) and DNA Center have access to the Fusion Router and the segments connected behind it (DHCP server, WLC, etc.). This is why DNA Center was able to discover the Fusion Router initially when you did the network discovery.
When DNA Center provisioned the border external interfaces, it replaced the ISIS configuration, which makes both the Fusion Router and the WLC unreachable from DNA Center.
We will be addressing this issue in the Border Handoff Section of this guide.
This is not a concern for this guide and you may proceed with Host Onboarding and Endpoint Validation.
SD-Access End Host Provisioning
Once the overlay is provisioned, it needs IP address pools to be added to enable hosts to
communicate within the fabric. When an IP pool is configured in SD-Access, DNA Center
immediately connects to each edge node to create the appropriate SVI (switch virtual interface) to
allow the hosts to communicate.
In addition, an Anycast gateway is applied to all edge nodes. This is an essential element of SD-
Access as it allows hosts to easily roam to any edge node with no additional provisioning.
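For reference, a hedged sketch of what an anycast gateway SVI typically looks like on an Edge node once a pool is assigned. The VLAN number and virtual MAC address below are illustrative placeholders; DNA Center generates the actual values (the gateway address and DHCP helper correspond to the Production pool and DHCP server defined earlier in this guide):
interface Vlan1021
 description Configured from apic-em
 mac-address 0000.0c9f.f45c
 vrf forwarding Campus
 ip address 172.16.101.254 255.255.255.0
 ip helper-address 10.172.99.11
 no ip redirects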
Click on the “Host Onboarding” at the top of the Fabric topology screen to start applying
authentication and the IP networking for onboarding host devices.
Select Closed Authentication. Click Save, and then "Run Now" to continue.
Note: Be sure to click Save before you move on to the next step.
Click the Campus Virtual Network box and add the Production and Staff IP Pools for
Data traffic. Click Update when finished, selecting “Run Now” and “Apply” to continue
Once the syslog messages are observed, show the new running configuration of the interfaces. Notice the Dot1xClosed authentication policy is pushed down to all host interfaces.
Once applied, DNA Center will push the IP Pools to the Campus Virtual Network. This
requires updates to be made to the Border, CP, and Edge nodes. Log in to each node to
see the newly added configurations.
Control Plane Node (C3850-1):
C3850-1#show archive config differences flash:fabric system:running-config
!Contextual Config Diffs:
+interface Loopback1021
+description Loopback Map Server
+vrf forwarding Campus
+ip address 172.16.101.254 255.255.255.255
+interface Loopback1022
+description Loopback Map Server
+vrf forwarding Campus
+ip address 172.16.201.254 255.255.255.255
router lisp
site site_uci
+eid-record instance-id 4099 172.16.101.0/24 accept-more-specifics
+eid-record instance-id 4099 172.16.201.0/24 accept-more-specifics
router bgp 65002
address-family ipv4 vrf Campus
+network 172.16.101.254 mask 255.255.255.255
+network 172.16.201.254 mask 255.255.255.255
+aggregate-address 172.16.201.0 255.255.255.0 summary-only
Click the Guest Virtual Network box and add the WiredGuest IP Pool for Data traffic. Click
Update and Run Now to Apply the change.
Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename “wiredcomplete”.
Now the IP address pools have been assigned to the VN and configured on the devices. The
interfaces have also been configured with 802.1x closed authentication mode. Now you’re ready to
onboard your endpoints!
VALIDATION
Optional Guide
End Point Validation
All hosts are powered down by default. Please Power them on using the vSphere client as needed
for Validation.
To verify the end host connectivity, you will use the vSphere client to connect to your pod ESX server
and launch consoles for the end host VMs. From the consoles you will first ping the default gateway,
and then the other end hosts.
The Tiny Core Linux systems are authenticated using MAB, so let’s start with them.
From the jump host use the vSphere client to connect to your ESXi host.
(Instructor / Dna@labrocks)
Expand the ESX host for your Pod to display all the VMs associated to the Pod.
Pods 1-10 are servers .71 – .80 Pods 11-20 are servers .104 – .113
Here you will find several hosts. Please verify the P-## matches your Pod number. Open
console windows to access TCL-Guest and TCL-PCI-Server VMs by selecting them and
right clicking.
Once the console opens, please log in and open a terminal from the menu bar at the bottom
of the desktop.
Type ifconfig. Note the HWaddr (MAC address) for eth0. You will use this to add the
endpoint into ISE. Use route -n to verify the default gateway IP address.
Start a ping test to the default gateway. Keep the ping test running while you complete the
remaining steps in this section.
For the PCI_Server (red) use 172.16.101.254 as the default gateway.
For the Guest (green) use 172.16.250.254 as the default gateway.
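For example, on the TCL-PCI-Server console the checks might look like this (the prompt and MAC address will differ per pod; the gateway shown is the PCI_Server gateway above):
tc@box:~$ ifconfig eth0
tc@box:~$ route -n
tc@box:~$ ping 172.16.101.254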
Access ISE from your browser and go to Administration -> Identity Management -> Groups -> Endpoint Identity Groups.
From here you should see PCI_Server and GuestEndpoints Endpoint Identity Groups
amongst the default Endpoint Identity Groups. Click into each one and remove any existing
MAC addresses.
Now, the next step is to add your Pod's PCI_Server VM MAC and the Guest VM MAC to the PCI_Server and GuestEndpoints Groups above.
Rather than manually typing in the MAC, you can do this by clicking through the Context
Visibility screens.
Home -> Context Visibility -> Endpoints
From here you will see a list of MAC Addresses. By default 10 rows are shown per page.
You should modify this to 25 to make it easier to find the VM MAC addresses.
Search the MAC Address column for your PCI_Server and click on the MAC Address
Once you find a MAC address matching the
PCI_Servers, click on the MAC address.
This will bring up a new page for the specific Endpoint.
Click on Edit icon between the refresh and trash icons to the right of the MAC Address to
bring up the Edit Endpoint dialog.
Here you will see a session with the Endpoint ID and Identity of your PCI_Server MAC.
Hover to the right of the “Show CoA Actions” and click the target icon when it appears. This
brings up an option list of actions to take. Select “Session termination with port bounces”.
Repeat the steps above for the Guest VM using the GuestEndpoints Identity Group.
Verify IP Connectivity for MAB Hosts
To verify the end host connectivity, you will use the vSphere client to connect to your pod ESX server
and launch consoles for the end host VMs. From the consoles you will first ping the default gateway,
and then the other end hosts.
The Tiny Core Linux systems are authenticated using MAB, so let’s start with them.
Login to each of the 3850’s and find the associated interface that the Guest Linux and
PCI_Server endpoints are connected to. You will see this on the topology at the beginning
of this guide.
Since the endpoints are already connected to the switch, issue a shut / no shut on the interface these endpoints are connected to in order to trigger a new authentication request. An example is below:
Use ifconfig to verify the VM’s IP address and route -n to verify the default gateway. Check
to see if the hosts can connect to their default gateway using ping.
Connect to the Control Plane (C3850-1) and verify the hosts can be seen within the
control plane
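A hedged example of the commands you might use for this check on the Control Plane node (instance-id 4099 corresponds to the Campus Virtual Network in this guide; your registrations will differ):
C3850-1#show lisp site
C3850-1#show lisp site instance-id 4099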
Note: You will not be able to ping between the Guest and PCI Server as they are in separate Virtual Networks,
and we have not configured the Fusion router to leak routes between the Virtual Networks. We will do
this in the last exercise of this lab.
The Windows supplicant has been preconfigured so you will only need to log into the network from
the PC-Wired hosts. The following steps will walk you through the network authentication process.
Using the vSphere client, select your Pod’s PC-Wired-2 VM, Power it on and open a
console. You will need to login with the credentials: (admin/ dnalab1)
Once logged in, use the command window to ping the default gateway.
PC-Wired-2 gateway 172.16.101.254 and PC-Wired-3 gateway 172.16.201.254.
It may take a few seconds to authenticate the user. If you run into problems, please check
the switch and make sure the user account has authenticated.
Local Policies:
Idle timeout: 65536 sec
Server Policies:
Vlan Group: Vlan: 1022
Security Policy: None
Security Status: Link Unsecured
SGT Value: 4
You can also validate the host has appeared in the Control Plane by checking the lisp site
table on the Control Plane node (C3850-1).
In the wired lab we deployed an SD-Access fabric with a decentralized Border (Catalyst 6807) and Control Plane (Catalyst 3850-1), utilizing two Catalyst 3850s as Edge nodes. Once deployed, the fabric was validated using MAB and Dot1x end hosts to confirm intra-Virtual Network connectivity.
The next step is to configure the BGP handoff from the Border to the ASR Fusion router.
Once complete, end hosts will be able to communicate across Virtual Networks and reach shared
services such as DHCP, DNS, etc.
Once completed, the end host will route through the SD-Access fabric to the border, then leave the
fabric to traverse the Fusion router. The Fusion router will leak the traffic to the other VRF and then
send it to the border, where it will be re-encapsulated into the SD-Access fabric and routed to the
destination end host. See the illustrations of this process below.
Verify the external handoff configuration pushed by DNA Center by returning to the
Provision -> Fabric Topology screen. Select the Border and View Info. In the right panel
the Border information will appear. Click on the interface to expose the Virtual Network to
SVI associations.
Copy Campus and Guest VRFs from Border to the Fusion Router. Copying is required,
because the RD and RT must exactly match, and they are auto generated numbers from
DNA Center.
Note: If you have followed the Wired lab guide exactly you will see the rd/rt numbering shown below, however,
if you deviated the number will likely be different and thus we highly recommend you do a show run on
the Border and copy paste the VRF definitions to the Fusion router (ASR).
EXAMPLE ONLY
C6807#sho run
Building configuration...
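As an illustration only of what you are copying, the VRF definitions have the following shape. The RD and route-target values below are placeholders; use the exact values from your Border's running configuration.
vrf definition Campus
 rd 1:4099
 !
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family
!
vrf definition Guest
 rd 1:4100
 !
 address-family ipv4
  route-target export 1:4100
  route-target import 1:4100
 exit-address-family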
On the Fusion router configure three layer-3 sub-interfaces on the link connecting the
border router (t0/0/1) so the Virtual Networks can be carried to the Fusion router.
interface TenGigabitEthernet0/0/1.3001
description fusion to Border_GRT
encapsulation dot1Q 3001
ip address 192.168.254.2 255.255.255.252
no shutdown
interface TenGigabitEthernet0/0/1.3002
description fusion to Campus_VRF
encapsulation dot1Q 3002
vrf forwarding Campus
ip address 192.168.254.6 255.255.255.252
no shutdown
interface TenGigabitEthernet0/0/1.3003
description fusion to Guest_VRF
encapsulation dot1Q 3003
vrf forwarding Guest
ip address 192.168.254.10 255.255.255.252
no shutdown
Fusion-ASR#ping 192.168.254.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Verify the routes now exist in both Campus and Guest VRFs.
C6807:
Fusion-ASR:
conf t
!
address-family ipv4 vrf Campus
neighbor 192.168.254.5 remote-as 65002
neighbor 192.168.254.5 update-source TenGigabitEthernet0/0/1.3002
neighbor 192.168.254.5 activate
exit-address-family
!
address-family ipv4 vrf Guest
neighbor 192.168.254.9 remote-as 65002
neighbor 192.168.254.9 update-source TenGigabitEthernet0/0/1.3003
neighbor 192.168.254.9 activate
exit-address-family
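Once the neighbors are configured, you can confirm the per-VRF sessions establish and prefixes are exchanged. A hedged example of the kind of checks to run (output will vary):
Fusion-ASR#show bgp vpnv4 unicast vrf Campus summary
Fusion-ASR#show ip route vrf Campus bgp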
Note: 10.172.99.11 is the DHCP server and 10.172.120.0/24 is the WLC network which we are
advertising to the Border (C6807) via the INFRA_VN (global routing table) using BGP.
Verify DHCP server and WLC subnets are learned on the Fabric Border
Verify the Virtual Network IP Pools and specific routes are learned on the Fusion router.
Note: We have introduced two new networks that make up the shared services in the lab. 10.172.99.0 is for DHCP, whereas 10.172.120.0 is for the WLC. These could have been on the same network, but for clarity two separate networks have been used.
Fusion-ASR:
Note: There is no need to leak Shared Networks routes to INFRA_VN, as INFRA_VN is nothing but GRT itself
Verify the local ASR routes for the DHCP server and WLC are added to the Campus routing
table.
Fusion-ASR:
C6807:
Border (C6807 – Initial Underlay)

C6807#show running-config interface TenGigabitEthernet 1/11
Building configuration...
Current configuration : 148 bytes
!
interface TenGigabitEthernet1/11
 description ASR1K TE0/0/0 underlay
 dampening
 mtu 9100
 ip address 192.168.0.21 255.255.255.252
 no ip redirects
 no ip proxy-arp
 ip router isis
 logging event link-status
 load-interval 30
 carrier-delay msec 0
 bfd interval 300 min_rx 300 multiplier 3
 no bfd echo
end

Fusion Router (ASR 1001-X)

Fusion-ASR#show running-config interface TenGigabitEthernet 0/0/1
Building configuration...
Current configuration : 317 bytes
!
interface TenGigabitEthernet0/0/1
 description c6807 t1/11
 mtu 9100
 ip address 192.168.0.22 255.255.255.252
 no ip redirects
 no ip proxy-arp
 ip nat inside
 ip router isis
 logging event link-status
 load-interval 30
 carrier-delay msec 0
 bfd interval 300 min_rx 300 multiplier 3
 no bfd echo
 cdp enable
end

Border (C6807 – After Border Handoff Provisioning)

C6807#show running-config interface TenGigabitEthernet 1/11
Building configuration...
Current configuration : 148 bytes
!
interface TenGigabitEthernet1/11
 description ASR1K TE0/0/1 underlay
 switchport
 switchport mode trunk
 mtu 9216
 logging event link-status
end

C6807#show ip int brief | include Vlan3
Vlan3001   192.168.254.1   YES manual up   up
Vlan3002   192.168.254.5   YES manual up   up
Vlan3003   192.168.254.9   YES manual up   up

interface Vlan3001
 description vrf interface to External router
 ip address 192.168.254.1 255.255.255.252
 no ip redirects
 ip route-cache same-interface
 platform lisp-enable
!
interface Vlan3002
 description vrf interface to External router
 vrf forwarding Campus
 ip address 192.168.254.5 255.255.255.252
 no ip redirects
 ip route-cache same-interface
 platform lisp-enable
!
interface Vlan3003
 description vrf interface to External router
 vrf forwarding Guest
 ip address 192.168.254.9 255.255.255.252
 no ip redirects
 ip route-cache same-interface
 platform lisp-enable
Notice in the initial underlay the ISIS configuration is applied to the Border and Fusion router. During
the border external handoff configuration in DNA Center, we specified TenGigabitEthernet1/11 as the
external interface and selected the following Virtual Networks to be extended to Fusion Router via
VRF-Lite: Campus, Guest, INFRA_VN. This caused DNA Center to change the configuration.
To correct this, advertise the DNA Center and Jumphost subnets and the Control Plane loopback via BGP to allow the Fusion Router to learn these management addresses and thus restore connectivity.
conf t
Note: 172.26.204.0/24 is advertised to allow the lab Jumphost to access the WLC 5520 GUI.
172.26.205.0/24 is advertised to allow external devices to respond to DNA Center.
192.168.100.100/32 is the Control Plane Loopback0. This is a host route that the C6807 learned via ISIS.
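A hedged sketch of what this advertisement could look like on the Border, using the prefixes from the note above (verify the prefixes and BGP AS number against your pod before applying):
C6807(config)#router bgp 65002
C6807(config-router)#address-family ipv4
C6807(config-router-af)#network 172.26.204.0 mask 255.255.255.0
C6807(config-router-af)#network 172.26.205.0 mask 255.255.255.0
C6807(config-router-af)#network 192.168.100.100 mask 255.255.255.255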
After a few seconds you will see the fabric underlay learned on the Fusion router.
Fusion-ASR (before advertising the management networks):
      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C        10.172.99.0/24 is directly connected, Loopback99
L        10.172.99.11/32 is directly connected, Loopback99
C        10.172.120.0/24 is directly connected, GigabitEthernet0/0/5
L        10.172.120.254/32 is directly connected, GigabitEthernet0/0/5
      101.0.0.0/32 is subnetted, 1 subnets
C        101.101.101.101 is directly connected, Loopback1
      192.168.0.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.0.20/30 is directly connected, TenGigabitEthernet0/0/1
L        192.168.0.22/32 is directly connected, TenGigabitEthernet0/0/1
      192.168.105.0/32 is subnetted, 1 subnets
C        192.168.105.1 is directly connected, Loopback0
      192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.254.0/30 is directly connected, TenGigabitEthernet0/0/1.3001
L        192.168.254.2/32 is directly connected, TenGigabitEthernet0/0/1.3001

Fusion-ASR (after):
      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C        10.172.99.0/24 is directly connected, Loopback99
L        10.172.99.11/32 is directly connected, Loopback99
C        10.172.120.0/24 is directly connected, GigabitEthernet0/0/5
L        10.172.120.254/32 is directly connected, GigabitEthernet0/0/5
      101.0.0.0/32 is subnetted, 1 subnets
C        101.101.101.101 is directly connected, Loopback1
      172.26.0.0/24 is subnetted, 2 subnets
B        172.26.204.0 [20/0] via 192.168.254.1, 00:00:51
B        172.26.205.0 [20/0] via 192.168.254.1, 00:00:21
      192.168.0.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.0.20/30 is directly connected, TenGigabitEthernet0/0/1
L        192.168.0.22/32 is directly connected, TenGigabitEthernet0/0/1
      192.168.100.0/32 is subnetted, 1 subnets
B        192.168.100.100 [20/20] via 192.168.254.1, 00:00:21
      192.168.105.0/32 is subnetted, 1 subnets
C        192.168.105.1 is directly connected, Loopback0
      192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.254.0/30 is directly connected, TenGigabitEthernet0/0/1.3001
L        192.168.254.2/32 is directly connected, TenGigabitEthernet0/0/1.3001
Fusion-ASR#
Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename “bordercomplete”.
To recap, at this point in the lab the underlay devices (switches, routers, and wireless LAN controllers (WLCs)) have been discovered, sites created, discovered devices provisioned to a site, and the fabric provisioned during various DNA Center work flows. Wired end hosts are successfully onboarded and IP connectivity between virtual networks and to shared services has been validated.
Note: These are manual steps today, which will be automated in future releases
2. From DNA Center, add the WLC to the SD-Access fabric domain.
3. Within Design, create a wireless network which will configure wireless SSIDs mapped to a
site as part of a network profile.
4. Use the Provision app to associate the WLC to a site and provision it with site specific
settings.
5. From within the Provision’s Fabric Topology, add the WLC device to the fabric domain.
8. Complete the process by provisioning the Access Points: add them to a site and provision them to make them Fabric enabled.
9. Validate a wireless client can connect to the network and reach clients within the Fabric as
well as shared resources outside the Fabric.
SD-Wireless Highlights:
• SD-Wireless requires Layer-2 LISP support, as wireless client MAC addresses are used as EIDs
• WLC registers wireless client MAC addresses with SGT and VN information with the Fabric
Control-Plane
• The Virtual Network information is mapped to a VLAN on the Fabric Edge nodes
• The WLC is responsible for roaming, and updates the Fabric Control Plane as roams occur.
• A fabric-enabled WLC needs to be physically located at the same site as its APs, as the maximum supported latency is 20 ms.
SD-Wireless Communications
Before you proceed with the lab, keep some important considerations in mind:
• Admin user configures a pool in DNA Center to be dedicated to APs. The AP is plugged in
and powers up. The Fabric Edge discovers the AP via CDP and applies an auto-smart port
on the AP interface, which maps the interface to the right VLAN (VLAN mapped to
INFRA_VN IP Pool).
• AP gets an IP address from the same IP Pool.
• Specific to this lab, Border will redistribute WLC route from BGP to ISIS and FE’s will learn
the WLC route in the GRT.
• The AP initiates a CAPWAP tunnel to the WLC using DHCP Option 43.
• AP-to-WLC CAPWAP traffic travels in the underlay.
• When the FE receives a CAPWAP packet from the AP, the FE finds a match in the RIB and the packet is forwarded with no VXLAN encapsulation.
The AP and Wireless client IP Pools need to have access to the shared services, as we did for the
wired hosts in the Border lab. Since SD-Access has already been deployed with BGP handoff to a
Fusion router, it is trivial to add the necessary route leaking to allow the Wireless network to reach
shared resources and other external fabric destinations.
In the lab we will use the IP Pool 172.16.112.0/24 for APs in the INFRA_VN and the 172.16.222.0/24
IP Pool in the Campus Virtual Network for wireless clients. Again, the use of separate IP Pools for wireless is for lab purposes; technically the wired and wireless clients can share the same IP Pool.
Distribute the AP and Wireless client networks to BGP for the fusion router to learn. Also
redistribute the DHCP and WLC networks to the ISIS underlay using route-maps.
Border
C6807:
conf t
router bgp 65002
 address-family ipv4
  network 172.16.112.254 mask 255.255.255.255
 exit-address-family
 !
 address-family ipv4 vrf Campus
  network 172.16.222.254 mask 255.255.255.255
 exit-address-family
!
router isis
 redistribute bgp 65002 route-map global_shared_net metric-type external
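To confirm the redistribution, a Fabric Edge should now see the shared-services subnets in its global routing table via ISIS. A hedged example check (routes and metrics will vary):
C3850-2#show ip route 10.172.120.0
C3850-2#show ip route 10.172.99.0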
Creating IP Pools
Please configure IP pools for both AP and wireless client subnets as per your network addressing
scheme. Additionally, ensure you have DHCP option 43 defined for the AP IP Pool scope to enable
APs to register to the WLC.
Using the Design menu bar Network Settings select IP Address Pools and define the IP
address Pools we require.
When complete, the DNA Center IP Address Pools for Global should look very close to the following
page:
Create SSID “SDAPod<P>” where <P> is your Pod number. Verify Type is Voice & Data,
Security is WPA2 Enterprise with Adaptive options. Click Next to continue.
Note: The SD-Wireless SSID should be building-wide, as there is no seamless roaming between fabric and non-fabric SSIDs.
Once the SSID is created, the SSID will be configured with security and the associated Profile.
Begin by selecting the WLC by checking the box to the left of it. Once selected, choose the "Provision" action.
Assign it to the location where WLC is physically located, in this lab use SJC05.
Click Next to continue.
Note: If you fail to add a floor at this step, the SD-Wireless deployment will fail when the AP is onboarded, as it
will be onboarded at a floor level. Thus a WLC which manages that floor must be deployed first.
Review the configuration – System Details, Global Settings for AAA, DHCP, DNS servers,
SSID and Managed Sites that will be pushed as part of WLC provisioning from DNA Center.
Click Deploy. And then Run Now in the following dialog to make the changes.
Click “Advanced” on the top right of the main page to get to the screens used to provision
the SD-Wireless solution.
First review the configuration DNA Center automated when the WLC was provisioned to the
site. Verify the Security -> RADIUS -> Authentication screen shows the ISE configured.
Note: The Profile Name is the name given in DNA Center underscore F underscore a unique identifier. This
ensures DNA Center can uniquely track profiles in large scale environments.
Locate the WLC-5520 and select "Add to Fabric". Click Save after the wireless controller is added to the Fabric, then Run Now and Apply to provision.
The WLC will change color from grey to blue once it is added to the Fabric, as shown in the example below.
Verify the CLI changes DNA Center has automated when assigning IP Pools to the VNs.
Note there are no changes to the Border or Edge nodes when provisioning a fabric-enabled WLC.
Click on the “Host Onboarding” at the top of this screen to start enabling the IP pools for
AP and client devices.
Click on the INFRA_VN Virtual Network, and a window opens with the configured address pools.
Select the APs address pool (172.16.112.0/24). Click on Update to complete the
provisioning. In the subsequent dialog box presented, select Run Now and click Apply.
Note: For the INFRA_VN, AP Provision Pool and Layer-2 Extension are enabled by default for all Pools. This
simplifies the AP Pool deployment and ensures the correct settings for AP devices.
The AP Provision Pool automatically pushes a configuration macro to all Fabric Edge switches to automate the bring-up of APs. When the AP connects to the switch, it will be recognized through CDP (Cisco Discovery Protocol), at which time a macro will be applied to the port, automatically assigning the physical port to the right VLAN. The CDP-applied macro is pushed only if the port is configured with the "No Authentication" template, so make sure all AP connection ports are configured with "No Authentication".
The Layer-2 Extension setting enables L2 LISP and associates a L2 VNID to this pool so that the
Ethernet frames can be carried end to end within the fabric.
Next provision an IP Pool specifically for Wireless clients. Open the Campus Virtual
Network and apply the Wifi pool 172.16.222.0/24 and select traffic type Data. Click Update
and then Run Now, Apply in the following dialog.
Border (C6807)
C6807#show archive config differences disk0:bordercomplete system:running-config
!Contextual Config Diffs:
+interface Loopback1024
+description Loopback Border
+ip address 172.16.112.254 255.255.255.255
+interface Loopback1025
+description Loopback Border
+vrf forwarding Campus
+ip address 172.16.222.254 255.255.255.255
router lisp
eid-table default instance-id 4097
+map-cache 172.16.112.0/24 map-request
router bgp 65002
address-family ipv4
+network 172.16.112.254 mask 255.255.255.255
address-family ipv4 vrf Campus
+network 172.16.222.254 mask 255.255.255.255
Below the Virtual Networks section is the Wireless SSID section of the Host Onboarding
page. Verify the Pod specific SSID is displayed and select the Wifi pool
(Campus:172.16.222.0) and click Save and Run Now in the following dialog.
Since the default auth policy is Dot1xClosed, we will need to manually change the switch port the AP is connected to so the AP can connect. Scroll down the Host Onboarding page and select C3850-3 port g1/0/20. Change the Address Pool to 172_16_112_0-INFRA and change the Authentication to No Authentication. Be sure to click Save and Run Now when done.
After saving the change, the screen will update and the port will show the change from the default profile, making it simple to visualize non-default ports and their different configuration.
Before you can proceed with the AP provisioning, you will need to review the interface configurations where the APs are connected. In this lab, we have two APs connected, one on each Fabric Edge, as shown below. Notice the Authentication templates are provisioned by DNA Center as part of the steps you did in the Wired Lab.
Default Port configuration After Manual Port Change
Note: You might run into scenarios where none of the APs are discovered even after some time. In this scenario, you need to select the WLC and select Resync from the Actions menu to manually force a retry.
After the Re-Sync completes the APs will be visible on the Fabric Devices page.
Select AP from Device list and “Add to Site” as shown in the example below.
Click Next and choose the Low Radio Frequency in the Configure screen
After a few minutes, the AP will be shown as Successfully provisioned in the Devices table.
From this tabbed screen select the Advanced Tab and note the AP Group Name displayed
Proceed to the WLANs -> Advanced -> AP Groups page and select the AP Group name
identified in the previous step.
Returning to the WLANs page, the WLAN ID for the SD-Wireless SSID will now be shown
to have Admin Status Enabled.
Edit the security settings under WLANs and verify the AAA server status
To verify the wireless host connectivity, you will use the vSphere client to connect to your pod ESX
server and launch consoles for the wireless host VM. From the console you will configure the client
to connect to the network and first ping the default gateway, and then the other wired end hosts.
From the jump host use the vSphere client to connect to your ESXi host.
(Instructor / Dna@labrocks)
Expand the ESX host for your Pod to display all the VMs associated to the Pod.
Pods 1-10 are servers .71 – .80 Pods 11-20 are servers .104 – .113
Here you will find several hosts. Please verify the P-## matches your Pod number.
Open console windows to access the PC-Wireless VM by selecting it and right clicking.
You can also ping the Wired hosts PC-Wired2 172.16.101.100 and PC-Wired3
172.16.201.100
Verify client Fabric status and the SGT tag pushed to the WLC from ISE based on the
authorization rule by clicking on the MAC address and scrolling down the page to the Fabric
Properties and Security Information sections.
Great Job!!
We sincerely hope you have enjoyed the
experience with SD-Access and DNA Center.