
Cisco SD-Access- Hands-on Lab

LTRCRS-2810

Speakers:
Larissa Overbey and Derek Huckaby

Learning Objectives


Upon completion of this lab, you will be able to understand:
• How to design a site for fabric setup
• SD-Access policy segmentation
• Fabric node roles and application
• How to Deploy SD-Access for Wired and Wireless devices

Scenario
In this lab activity, you will implement the following in DNA Center SD-Access:

• ISE Integration
• Network Discovery
• Designing the Network
• Creating a Centralized Policy
• Provisioning Devices to a Site
• Provisioning devices for a SD-Access Fabric
• Deploying a SD-Access Fabric
• Host Onboarding for wired endpoints
• Wired Validation
• Border Handoff to interconnect the Fabric to external networks and shared services
• Inter Virtual Network routing including shared services
• Validation of external connectivity
• SD-Wireless provisioning
• Provisioning a WLC in a SD-Access Fabric
• Provisioning Fabric enabled Access Points
• Host Onboarding for wireless endpoints
Table of Contents

Executive Summary
  Software Defined Access
  SD-Access Benefits
SD-Access Overview
  SD-Access Capabilities supported in DNA Center 1.1
  SD-Access Lab Work Flow
  Lab Topology
    Physical Topology
    Logical Topology
  Accessing the lab
Welcome to DNA Center
DNA Center to ISE Integration
Discovery, Inventory and Topology
  Discover the SD-Access Underlay
  Inventory App and Device Roles
  Viewing the Lab Topology
Get Started using DNA Center Design
  Creating Sites and Buildings
  Defining Network Settings
    Network services
    Device Credentials
    Creating IP Pools
  Image Management (Do Not Attempt in This Lab)
  Network Profiles
SD-Access Scalable Group Tag Creation
  Define Scalable Group Tag for User Group
SD-Access Network Segmentation
  SD-Access Policy Administration
    Apply a Layer-3 Contract
    Apply a Layer-4 Custom Contract
SD-Access Overlay Provisioning
  Add Devices to a Site
  Provision Devices to the Site
  Create and Manage Fabrics
  Provisioning the Fabric
SD-Access End Host Provisioning
End Point Validation
  Verify IP Connectivity for MAB Hosts (Optional)
  Verify IP Connectivity for MAB Hosts
  Verify IP Connectivity for Dot1x Hosts
SD-Access Border Validation Overview
  SD-Access Lab Work Flow
SD-Access Inter-Virtual Network Routing
  BGP Handoff between Border and Fusion ASR router
    Create L3 connectivity between border and Fusion Router
    Extend the Virtual Networks from the Border to the Fusion router
    Use VRF leaking to share the routes on the Fusion router and distribute them to the Border
    Verify Inter-Virtual Network routing
Restoring IP Connectivity to the Underlay
SD-Access Wireless Validation Overview
  Wireless Validation Work Flow
  SD-Access Wireless Architecture
    SD-Wireless Wireless LAN Controller (WLC)
    SD-Wireless Access Points (APs)
    SD-Wireless Communications
Border Handoff for Wireless Networks
  Border
Use DNA Center Design to Provision IP Pools
  Creating IP Pools
  Creating Wireless SSIDs
  Managing Network Profiles
Wireless LAN Controller Provisioning
  Access the WLC to Verify Site Provisioning
  Adding the WLC to the Fabric
AP and Wireless Client Host Onboarding
Provisioning the AP
Wireless Validation
  Enable the Wireless client to Connect
Executive Summary
Digitization is transforming business in every industry – requiring every company to be an IT
company. Cisco’s Digital Network Architecture (DNA) is an open, software-driven architecture built
on a set of design principles with the objective of providing:
- Insights and Actions to drive faster business innovation
- Automation and Assurance to lower IT costs and complexity while meeting business and
user expectations
- Security and Compliance to reduce risk as the organization continues to expand and grow

Software Defined Access


Cisco’s revolutionary Software Defined Access (SD-Access) solution provides policy-based
automation from the edge to the cloud. Secure segmentation for users and things is enabled through
a network fabric, drastically simplifying and scaling operations while providing complete visibility and
delivering new services quickly. By automating day-to-day tasks such as configuration, provisioning
and troubleshooting, SD-Access reduces the time it takes to adapt the network, improves issue
resolutions and reduces security breach impacts. This results in significant CapEx and OpEx savings
for the business.

SD-Access Benefits

SD-Access Overview

SD-Access Capabilities supported in DNA Center 1.1


SD-Access is Cisco’s next generation Enterprise Networking access solution, designed to offer
integrated security, segmentation, and elastic service roll-outs via a Fabric based infrastructure and
an outstanding GUI experience for automated network provisioning via the new DNA Center
application.

SD-Access offers the following primary features in version 1.1:

SD-Access Lab Work Flow
The Lab Guide begins assuming the switching underlay is provisioned and network and DNA
Advantage licensing is installed. DNA Center version 1.1 is installed, running SD-Access version
2.1.0.64153, and ISE v2.3 is installed in the lab environment. For simplicity, this validation guide will
walk users through a University Campus deployment scenario.

1. The lab begins with a quick overview of the SD-Access apps and tools, and getting familiar with
DNA Center.
2. DNA Center will be used to discover underlay devices and view them using the Discovery and
Inventory tools; the Topology tool will then be used to display the devices.
3. Next, sites will be created using the Design app, where common attributes, resources, and
credentials are defined for re-use during various DNA Center work flows.
4. Then there is a quick overview of the Policy app and verification of the DNA Center and ISE
integration.
5. Virtual Networks are then created in the Policy app, creating network-level segmentation.
During this step, the scalable groups learned from ISE will be associated to the virtual
networks, creating micro-level segmentation.
6. Next, the Policy Administration process is used to create centralized security policies for the
University scenario. The policies are verified using ISE.
7. Returning to DNA Center, the discovered devices will be provisioned to a site.
8. Next, the overlay fabric will be provisioned.
9. To wrap up, the end hosts will be onboarded.
10. Windows clients will be modified to use Dot1x authentication to join the overlay.
11. IP connectivity and security policy enforcement will be tested.

Once complete, participants can move to the additional labs in this series for Border External
Connectivity and SD-Wireless.

Lab Topology
The SD-Access Lab Guide is based on the following topology, which utilizes a Catalyst 6800 for the
Border, a Catalyst 3850 as the Control Plane, and two additional Catalyst 3850s for the Edges. An
ASR1000X is used as the Fusion router to reach the WLC and shared services. A WLC 5520 and a
Wave 2 AP are used to demonstrate SD-Wireless.

Physical Topology

Logical Topology

Accessing the lab
To access the lab, students will need to use a Remote Desktop connection to a pod-specific jump
host. The jump host is used to allow remote access to all lab devices within a given pod. Please use
the following table to access your Pod. (The Instructor will assign you a Pod number.)

Pod RDP IP Address JumpHost User Password


Pod1 128.107.90.91 JumpHost 1 admin dnalab1
Pod2 128.107.90.92 JumpHost 2 admin dnalab1
Pod3 128.107.90.93 JumpHost 3 admin dnalab1
Pod4 128.107.90.94 JumpHost 4 admin dnalab1
Pod5 128.107.90.95 JumpHost 5 admin dnalab1
Pod6 128.107.90.96 JumpHost 6 admin dnalab1
Pod7 128.107.90.97 JumpHost 7 admin dnalab1
Pod8 128.107.90.98 JumpHost 8 admin dnalab1
Pod9 128.107.90.99 JumpHost 9 admin dnalab1
Pod10 128.107.90.100 JumpHost 10 admin dnalab1
Pod11 128.107.90.114 JumpHost 11 admin dnalab1
Pod12 128.107.90.115 JumpHost 12 admin dnalab1
Pod13 128.107.90.116 JumpHost 13 admin dnalab1
Pod14 128.107.90.117 JumpHost 14 admin dnalab1
Pod15 128.107.90.118 JumpHost 15 admin dnalab1
Pod16 128.107.90.119 JumpHost 16 admin dnalab1
Pod17 128.107.90.120 JumpHost 17 admin dnalab1
Pod18 128.107.90.121 JumpHost 18 admin dnalab1
Pod19 128.107.90.122 JumpHost 19 admin dnalab1
Pod20 128.107.90.123 JumpHost 20 admin dnalab1

The table below provides the access information for the devices within a given pod.
IP Address Common name User Password

https://172.26.205.100/ DNAC admin Dna@labrocks


https://172.26.204.121/ ISE admin Dna@labrocks
http://10.172.120.2 WLC cisco cisco
192.168.100.100 C3850-1 cisco cisco
192.168.120.2 C3850-2 cisco cisco
192.168.120.3 C3850-3 cisco cisco
192.168.110.1 C4503 cisco cisco
192.168.100.1 C6807 cisco cisco
192.168.105.1 Fusion-ASR cisco cisco
172.16.112.<dhcp> AP cisco cisco
172.16.101.100 PC-Wired-2 admin dnalab1
172.16.201.100 PC-Wired-3 admin dnalab1
172.16.222.<dhcp> PC-Wireless admin dnalab1
172.16.101.201 PCI-Server -- --
172.16.250.25 Guest-Linux -- --
128.107.90.81 vCenter Instructor Dna@labrocks

Welcome to DNA Center

Please sign in to DNA Center using the management IP address assigned during installation and
use the admin credentials given during install.

user: "admin"
password: "Dna@labrocks"

Note: DNA Center's SSL certificate may not be automatically accepted by client browsers. If this
occurs, please use the advanced settings to allow the connection.

Once logged in, you will see the DNA Center dashboard.

To view the DNA Center version, click on the gear at the top right and select About DNA
Center.

The DNA Center main screen is divided into two main areas, “Applications” and “Tools”,
which contain the primary applications for creating and managing an SD-Access
deployment.

Have a look at the other links within DNA Center. For example, the System Settings pages
control how the DNA Center system is integrated with other platforms, users, and applications,
as well as system backup and restore.

While using DNA Center, you can easily transition from any application back to the home
page by clicking the Applications (Apps) button located at the top right of every window or
the Cisco DNA Center logo on the top left.

When you first boot DNA Center, be sure to look at the System Settings → App Management
page to verify all the applications are in the running state.

DNA Center to ISE Integration
The Cisco Platform Exchange Grid (pxGrid) is a multivendor, cross-platform network system that
pulls together different parts of an IT infrastructure. Cisco pxGrid provides an API which is secured
via an SSL certificate system. DNA Center has automated the certificate process to allow users to
simply and easily integrate DNA Center to ISE in a secure manner.

Open the System Settings page (via the gear menu), which opens the System 360 tab by default.
Locate the External Network Services: Identity Service Engine panel and select "Configure settings".

This will bring you to the Settings - Authentication and Policy Servers screen under the
Settings tab. Click the add (+) button to open the right-side panel for adding an AAA/ISE Server.

Populate the AAA/ISE server 1 details. Be sure to enable the Cisco ISE slider to show
additional fields. Use the following details to assist with field population.

a) Enter the ISE IP address: 172.26.204.121
b) Provide the shared secret "dnalab1" that will be used between DNA Center and ISE.
c) Toggle the Cisco ISE server selection to On.
d) Populate the username and password fields with the ISE admin credentials:
   "admin" / "Dna@labrocks"
e) Provide the FQDN of the ISE server: "sda-ise.cisco.com"
f) Type in "dnacenter" as the Subscriber Name.
g) An SSH key should not be provided.
h) Click Apply.

Afterwards, ISE will appear in the table with a status of "In Progress".

At the bottom of the page a pop-up will appear to confirm the ISE settings have been added.

Use ISE to validate the connection is Online and
has subscribers. To do this, log into ISE as the
admin user.

user: “admin”
password: “Dna@labrocks”

Once authenticated, go to Administration → pxGrid Services. On this page a new client will
appear as "Pending" and Total Pending (1).

Note: The integration process typically takes 2-5 minutes to complete. If you log into ISE quickly, you may
need to refresh the page to see the new subscriber appear.

Select the new client and click Approve

After a moment the client will change to an “Online” status with 3 Subscribers.

There will also be a green connection bar at the bottom of the page

Return to DNA Center and verify the “System Settings” shows the AAA/ISE Server as
“ACTIVE”.

Congratulations! DNA Center is now successfully integrated with the Identity Services Engine.

Discovery, Inventory and Topology

Discover the SD-Access Underlay


In DNA Center, the Discovery tool is used to find existing underlay devices using CDP or IP address
ranges. When defining a discovery profile, users are able to leverage the credentials defined in the
Design App (shown in a later section).
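Because Discovery can walk the topology over CDP from a single seed device, it can be useful to
first confirm CDP adjacencies from that seed switch. These are standard IOS commands; the exact
output will vary by pod:

C6807#show cdp neighbors
C6807#show cdp neighbors detail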

Please go to “Discovery” from the home page.

Before creating a Discovery profile and running it, please take a moment to look at the
underlay configuration of the Catalyst switches, specifically the C6807 and C3850-2
devices. You can access the devices by opening Putty.exe from the desktop; each device's
console session is available in your Putty window.

There should not be any VRF or LISP configuration, as this is pushed from DNAC to the
devices during provisioning, after discovery and design.
C3850-2#sho run vrf

C3850-2#sho run | sec lisp

IS-IS is the underlay routing protocol, so its configuration can be seen and verified.
C3850-2#sho run | sec isis
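For reference, the underlay IS-IS configuration will look roughly like the sketch below. This is a
generic illustration; the NET, interface names, and options shown are assumptions, not this pod's
exact configuration:

router isis
 net 49.0000.1921.6800.1001.00
 is-type level-2-only
 metric-style wide
 nsf ietf
!
interface Loopback0
 ip router isis
!
interface TenGigabitEthernet1/0/1
 ip router isis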

Using the Discovery Name “SDA Lab”, type in the Catalyst 6807 loopback address
192.168.0.1 in the “IP Address” field. Also, change the Preferred Management IP
to “Use Loopback”.

Note: The IP address could be any L3 interface or Loopback on any switch that DNA Center can access.

Click on Add Credentials to expand the side panel for adding device credentials

Use the table below to populate the CLI and SNMPv2c credentials.

Add Credentials    Value
CLI Username       cisco
CLI Password       cisco
Enable Password    cisco
SNMPv2 RO          public
SNMPv2 RW          private

Be sure to check the "Save as Global" box before saving. This will push the credentials into
the Design application for later use, if needed.

Once populated, the credentials will appear in the center of the page.

The last step is to enable Telnet as a discovery protocol for any legacy devices not
configured to support SSH. To do this, scroll down the page and open the “Advanced”
section. Once opened, select the Telnet protocol.

Click on “Start” in the upper right-hand corner. To view the discovered devices, simply click
on the devices found icon. This will also open a panel on the right, to display the devices
and their discovery status.

In a moment, the screen will display devices appearing on the right as they are discovered.

Note: At this point you can proceed to the Design section. This will allow you to continue the lab while the
Discover process is working. If you do, be sure to return to this section before proceeding to Policy.
By default, DNA Center will make a few changes to the discovered devices. Please have
another look at one of the access switch configurations to see the changes made.

C3850-2#show archive config differences flash:underlay system:running-config
!Contextual Config Diffs:
+device-tracking tracking
+device-tracking policy IPDT_MAX_10
+limit address-count 10
+no protocol udp
+tracking enable

+crypto pki trustpoint TP-self-signed-1978819505
+enrollment selfsigned
+subject-name cn=IOS-Self-Signed-Certificate-1978819505   <-- New RSA keys are created
+revocation-check none
+rsakeypair TP-self-signed-1978819505

+crypto pki trustpoint 128.107.88.241                     <-- Secure connection to DNA Center
+enrollment mode ra                                           using the interface 1 IP address
+enrollment terminal                                          as the certificate name
+usage ssl-client
+revocation-check crl

+crypto pki certificate chain TP-self-signed-1978819505


+certificate self-signed 01
+308202E0 308201C8 A0030201 02020900 CD2FEFCE B023FF6D 300D0609 2A864886
<..snip..>
+B46DED6E CF5FB784 C039B6D2 7B354644 CBFAC28B
+ quit

+crypto pki certificate chain 128.107.88.241


+certificate ca 00E8A2F5EB01BBC1C6
+308202E0 308201C8 A0030201 02020900 E8A2F5EB 01BBC1C6 300D0609 2A864886
<..snip..>
+3FFDF6E3 01129299 2BD06DE1 2098B70B 9B1C66A2 DB9BBD0C A6EF8F92 28C5302E 2BE96EA1
+ quit

interface GigabitEthernet1/0/2
+device-tracking attach-policy IPDT_MAX_10
<..snip..>
interface GigabitEthernet1/0/45
+device-tracking attach-policy IPDT_MAX_10
interface GigabitEthernet1/0/46
+device-tracking attach-policy IPDT_MAX_10
<..snip..>
interface FortyGigabitEthernet1/1/1
+device-tracking attach-policy IPDT_MAX_10
interface FortyGigabitEthernet1/1/2
+device-tracking attach-policy IPDT_MAX_10

+ip http client source-interface Loopback0

line con 0
+length 0

IPDT (IP Device Tracking) is used to keep track of connected hosts via unicast ARP probes.
Multiple services rely on it; for SD-Access, it supports Cisco TrustSec, MAB, and the 802.1x
session manager.

Note: Device tracking is not added to ports with end hosts physically attached. In this lab, ports G1/0/1 and
G1/0/11 have hosts connected to them.
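To inspect the device-tracking state directly, the standard IOS-XE show commands below can be
run on an access switch; the exact output will vary:

C3850-2#show device-tracking policy IPDT_MAX_10
C3850-2#show device-tracking database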

Please save the Border, Control Plane, and Edge node configurations for future
comparisons.

C3850-1#copy run discovery

Inventory App and Device Roles

Once the devices are discovered, use the Apps button to open the Inventory app to
view the lab devices.

Within the Inventory page all devices should be Reachable and Managed. In some cases
a status of "In Progress" will appear; this is fine, as it indicates DNA Center is actively
updating information regarding the device.

Device Roles are used to position devices in the DNA Center Topology maps. Use the
Layout setting dots to add the Device Role column to the table and Apply.

Using the Device Role column, verify the lab devices have the expected role. Roles are
auto-discovered, but typically need to be adjusted slightly for a given network topology.
In the example below, the Catalyst 6807 has an "ACCESS" role because it has end hosts
locally attached; however, it should have a "CORE" role. To correct it, simply click on the
role and select the appropriate one, in this case "CORE". Again, this selection only affects
topology map positioning. It does not add or modify any configuration with regard to the
device role selected.

Note: The following table shows the mapping order for each role type:

Device Role       Topology Position
BORDER ROUTER     Top Most
CORE              Top
DISTRIBUTION      Bottom
ACCESS            Bottom Most
UNKNOWN           To the Side

This is how the lab should be categorized when finished:

Viewing the Lab Topology
Once you have verified the Device Roles are correct, use the Topology application to view the
lab network.

Use the Apps button to open the Topology app to view the lab devices. You can use
the zoom feature in the lower right to zoom in on the map.

From here you can see the Topology app groups devices based on role within the map. To expand
the devices, click on them and then un-pin them from each other.

Once un-pinned, the details of the device will be shown in the right panel. Clicking any
blank area in the map will close the right panel and center the map.

Refresh the window, and the map resets to the collapsed view. Use the "View Options" in the
upper left to expand.

Notice the filter window in the upper right allows users to search or filter for devices, which is
helpful in large topology maps.

See the example where a portion of the hostname is searched ("3850-"): DNA Center shows all
matches and dynamically adjusts as more characters are typed.

From here, if a device is clicked, the map zooms to the device and displays the device
information as shown on the next page.

Feel free to interact with the Topology map further to get a feel for it.
Once done, please continue to the Design section.

DESIGN

Get Started using DNA Center Design
DNA Center provides a robust Design application to allow customers of every size and scale to
easily define their physical Sites and common resources. This is implemented using a hierarchical
format for intuitive use, while removing the need to redefine the same resource in multiple places
when provisioning devices.

Creating Sites and Buildings


To begin, select the Design app to open it. Once there, you will see a world map within a
frame and a site hierarchy on the left hand side. Add Site is used to create new sites
manually or to import them from a CSV file.

Note: The browser used to configure DNA Center must have Internet connectivity for the maps to appear.

Use Add Site to create a new site called California, and then click Add.

Next, create another Site by clicking on the next to California. Select add site.
Ensure the Parent node is California. Name the site San Jose, and then click Create.

Open California, and then next to San Jose select the gear to add a building where the
network devices will reside. Use SJC05 as the building name.

For the Address, type in "325 E Tasman". As you enter the address, you will see the Design App
narrowing down the known addresses as you type. When you see 325 E Tasman Dr appear in the
window below, select it. The benefit of selecting a known address is that you will not have to look
up and provide the longitude and latitude coordinates, as they are provided automatically.

Expand San Jose and once more use the gear next to SJC05 to add a building floor. Use the
floor name "SJC05-1" and select Indoor High Ceiling for Type.

Any DXF, DWG, PNG, JPG, or GIF file can be loaded as a floorplan. Select Upload file.

Navigate to "My Documents/floorplans". Once there, select the SJC05_1 floorplan.
Next click Open to complete the process.

Modify the width to 300. You will notice, based on the image uploaded, the length will
change. Modify the height to 15. Click Add.

Once loaded you should see something similar to the screen below:

Note: The floor plan extends into the Network Hierarchy panel on the left. Mouse over the light gray frame
(called out below) and slide right to expose the site hierarchy once again.
Defining Network Settings

Network services
DNA Center allows you to save common resources and settings with Design’s Network Setting
application. As we described earlier, this allows information pertaining to the enterprise to be stored
so it can be reused throughout DNA Center.

From the Network Hierarchy page, you can get to Network Settings using the menu bar.
Once opened, you will see a list of default settings, which are typical in every network environment.
Production SD-Access deployments will require AAA, DHCP, and DNS servers to be configured.
For Assurance, SYSLOG, SNMP, and Netflow can be configured using DNA Center's IP address.

Use the Add Servers button to add AAA, Netflow and NTP servers to DNA Center.

For the AAA Server, select both the "Network" and "Client/Endpoint" check boxes. This will add
two new rows of settings. While DNA Center supports TACACS for network clients and RADIUS
for Clients/Endpoints, in this lab we are using RADIUS for both.

Under Network, select the Server type as “ISE” and use the pull down to select the
integrated ISE server IP Address (172.26.204.121).

Also under Network, select the Protocol as “RADIUS” and use the pull down to select the
ISE server IP (172.26.204.121).

Repeat steps 4 and 5 above under the “Client/Endpoint” section.


Configure the remaining servers according to the table below.

Server    IP Address
DHCP      10.172.99.11
DNS       cisco.com (domain) / 10.172.99.251
SYSLOG    172.26.205.100
SNMP      172.26.205.100
Netflow   172.26.205.100, port: 2055
NTP       172.26.204.254

Select PST for the time zone and provide a message of the day for the lab devices. When
finished, store the settings by clicking "Save" at the bottom of the page.

Popups will appear at the bottom of the page to indicate the settings are saving, and another for
when the save is successful.

Device Credentials
The next step is to define site device credentials. Using the menu bar, select Device Credentials.
From here you will see the credentials defined previously in the Discovery Tool. If you would like to
add credentials for additional devices to test out the UI, you can do so at this time.

Any additional credentials will be accessible in the Discovery tool if there are more devices to
discover in the future.

Creating IP Pools
DNA Center supports both manually entering IP address allotments and integrating with
IPAM solutions, such as Infoblox, to learn of existing IP addresses already in use.

For validation, we will simply manually define the IP Address Pools we require. Using the
menu bar, select IP Address Pools.

Click on Add to open a dialog for creating new IP Pools.

Note: The Overlapping check box should not be checked. Overlapping allows users to identify overlapping
subnets within their network, enabling these addresses to be used in multiple places that would
otherwise be denied.

Using the table below, create the four IP Pools to be used for wired validation. You are
welcome to add additional IP Pools as well, but please add these four to work with the
validation in the latter part of the guide.

Name            IP Subnet          Gateway          DHCP Server   DNS Server
Production      172.16.101.0/24    172.16.101.254   10.172.99.11  10.172.99.251
Staff           172.16.201.0/24    172.16.201.254   10.172.99.11  10.172.99.251
WiredGuest      172.16.250.0/24    172.16.250.254   10.172.99.11  10.172.99.251
Infrastructure  192.168.254.0/24   192.168.254.254  N/A           N/A
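For context, each pool defined here is later bound to a Virtual Network during Host Onboarding, at
which point DNA Center renders it as an anycast gateway SVI on the fabric edge switches. A hedged
sketch of roughly what the Production pool becomes; the VLAN number and LISP dynamic-EID name
are illustrative, not the exact names DNA Center generates:

interface Vlan1021
 vrf forwarding Campus
 ip address 172.16.101.254 255.255.255.0
 ip helper-address 10.172.99.11
 lisp mobility Production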

When complete, the DNA Center IP Address Pools for Global should look similar to the page below:

Image Management (Do Not Attempt in This Lab)
The Image Management area allows users to roll out new device images to help standardize
device images within the network.

Note: The actual upgrading of images is not supported in this specific lab. The current images are already
installed. This section is provided strictly as reference for those interested in the workflow.

Clicking on Image Management from the menu bar will allow you to see the images running
on the previously discovered devices.

Click the Import Image/SMU button to load the ISO for IOS version 16.6.1, which can be
found in the Documents folder.

Once selected, Import the IOS file.

If an error occurs, you can use Show Tasks to view the details of the import function.

Once the IOS is imported, you can scroll down to the Catalyst 3850-1 device, which is
running the older EFT image. Use the pull down to select the IOS 16.6.1 image as shown:

Once selected, mark the image as Golden.

Note: You may need to refresh the page to see the image has been updated as Golden

Select the Provision Application from the top of the page.

Once on the Provision page you will see which devices are running an outdated image, and
you are allowed to select the device you want to update.
For this lab, please select C3850-1.cisco.com.

Then use the pull down to select Update OS image

On the next screen, simply click Update and acknowledge the switch will reload.

After this point, the upgrade will begin. Notice a green status message will appear at the bottom of
the screen to confirm the upgrade has started.

Note: The upgrade can take a while, and DNAC takes even longer to reflect the update. You can verify the
progress and completion by accessing the switch console from Putty. You may need to refresh the
screen on DNAC to view progress in the UI.

After some time, the C3850-1 switch will be upgraded. Please check back after the Policy section to
see that it has completed.

Network Profiles
Network profiles have been added to allow site-specific customization of network device
configurations.

Feel free to look around in this area. The example screen captures below show how network profiles
can be created through templates, and illustrate the workflow for creating router profiles.

This completes the Design section of the SD-Access Lab.

If you previously skipped ahead while Discovery was working, please return to the Inventory
section and complete it before proceeding in the Lab.

POLICY

SD-Access Scalable Group Tag Creation

Define Scalable Group Tag for User Group

User groups can be associated with a Scalable Group Tag (SGT). SGTs can be carried
throughout the network and are the basis for access policy enforcement under Cisco DNA. Follow
the steps below to define the SGT for "Research".

Use the Apps button to access the Policy home page.

Once it loads, you will see an informative dashboard for Policy and a history of policy
related actions. Click on Registry from the Policy home page.

Here you will see all the default scalable group tags pushed from ISE.
Click on Add Groups.

This will launch a new tab in your browser that will connect to ISE. From here click on the
Add button above the table header.

Give the new SGT a name of “Research”, choose a different icon, and add a description
of your choice. Click Submit at the bottom of the page to save the new group.

Return to DNA Center and refresh the table. Go to the second page to verify the
“Research” SGT has been learned by DNA Center.

SD-Access Network Segmentation
The Policy app supports creating and managing Virtual Networks, Policy Administration, Contracts
and Scalable Groups using the Registry. Most users will want to set up their SD-Access Policy
(Virtual Networks and Contracts) before doing any SD-Access Provisioning. In this section, we will
segment the overlay network using the DNA Center Policy app. This process virtualizes the overlay
network into multiple self-contained Virtual Networks. By default, any network device or user within
the Virtual Network is permitted to communicate with other users and devices in the same Virtual
Network. To enable communication between Virtual Networks, traffic must leave the Fabric Border
and then return, typically traversing a firewall or fusion router.

This validation will simulate deploying SD-Access in a University. This allows us to show SD-Access
virtualization and segmentation between well understood groups and entities.
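Each Virtual Network created in this section is ultimately instantiated as a VRF on the fabric
devices, as you will see during Overlay Provisioning. A minimal sketch of the kind of VRF definition
involved; the route distinguisher value is an illustrative assumption:

vrf definition Campus
 rd 1:4099
 !
 address-family ipv4
 exit-address-family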

Navigate to the Policy -> Virtual Network screen.

• At the top, we have the ability to create additional Virtual Networks.


• The Default Virtual Network has numerous SGT Groups, which were populated from ISE
when DNA Center was integrated with ISE, as well as the Research SGT recently created.

Click the + (in the left column) to add a new Virtual Network.

This will modify the window layout so that a new Virtual Network can be defined. Create a
Virtual Network; for this example, the guide will use Campus.

Before saving, populate the newly created Virtual Network with Groups. This will allow you to further
segment the traffic within a given Virtual Network.

Select the following groups: Employees, Faculty, PCI_Servers, Production_Servers, and
Students. Then drag and drop them into the Campus Virtual Network. Once you have all 5
groups in the Virtual Network, save the changes.

Once the SGTs have been moved, save the new Virtual Network.

Repeat the steps above to create a new Virtual Network called Guest and move the Guests
and Network_Services groups into it. Here is what it should look like once you are done:
SD-Access Policy Administration
Once SD-Access has been segmented into virtual networks, we can define the security policies.
DNA Center will allow you to deny or explicitly allow traffic between Groups within Virtual Networks.

In the following steps, we will show how SD-Access will be provisioned to establish security policies
with just a few clicks within DNA Center.

Source        Contract   Destination    Policy Name
Employees     deny       PCI_Servers    DenyEmployees
Employees     deny       Students       DenyEmployees
PCI_Servers   deny       Prod_Servers   DenyPCIServers
PCI_Servers   deny       Employees      DenyPCIServers
PCI_Servers   deny       Students       DenyPCIServers
Students      deny       Employees      DenyStudents
Students      deny       PCI_Servers    DenyStudents

Apply a Layer-3 Contract

From the Policy → Virtual Networks page, use the menu to select Policy Administration.

Click on "Add Policy" to add a new policy.

Here you will see the Scalable Groups that were added to DNA Center from ISE.
These are ISE’s default scalable groups and a few created for this lab.

Use the Policy Access Table at the beginning of this section to create the policies. The first policy
“DenyEmployees” denies traffic sourced by Employees and destined for PCI_Servers or Students.

First define the Policy Name as “DenyEmployees”. Next, select Employees and drag it to
the Source box on the right-hand side. Then, drag PCI_Servers and Students to the
Destination box on the right-hand side.

You will see a corresponding "S" and "D" highlighted on the associated Scalable Groups
indicating their policy assignment.

Click on "Add Contract". Select "deny" in the Contracts field and click "OK".

Confirm the policy is correct, then click on "Save".

Access the TrustSec Matrix on ISE by clicking on Advanced Options. This will launch a
window in your browser, arriving at the ISE TrustSec Matrix.

Note: Please be patient, as it takes a few moments to build the TrustSec Policy Matrix.

You will see the recently created DenyEmployees policy in Red. Note this is a “Deny IP” policy.

You may need to choose a different View option to see all of the policies that have been transferred
from DNA Center to ISE. We recommend that you try View > Condensed with SGACL Names, which
removes the SGT value from the table headers.

Repeat steps 2-6 for the remaining Layer-3 policies.

Here is the table with the remaining policies for your convenience:

Source        Contract   Destination    Policy Name
PCI_Servers   deny       Prod_Servers   DenyPCIServers
PCI_Servers   deny       Employees      DenyPCIServers
PCI_Servers   deny       Students       DenyPCIServers
Students      deny       Employees      DenyStudents
Students      deny       PCI_Servers    DenyStudents

Apply a Layer-4 Custom Contract
We will now add Layer-4 policies using the table below:

Source        Protocol           Contract   Contract Name      Destination    Policy Name
Faculty       ssh, https         allow      SecureAccessOnly   Employees      RestrictFaculty
Faculty       ssh, https         allow      SecureAccessOnly   Faculty        RestrictFaculty
Faculty       ssh, https         allow      SecureAccessOnly   PCI_Servers    RestrictFaculty
Faculty       ssh, https         allow      SecureAccessOnly   Prod_Servers   RestrictFaculty
Faculty       ssh, https         allow      SecureAccessOnly   Students       RestrictFaculty
PCI_Servers   ssh, https, sftp   allow      SecureXfer         Faculty        RestrictPCIServers

To allow certain applications, a custom contract is required. This section will show you the steps to
create the new contract and then walk you through applying it to the Faculty and PCI_Servers
groups.
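For orientation, a custom contract like SecureAccessOnly is ultimately rendered by ISE as a
Security Group ACL. A hedged sketch of the expected SGACL content, written in ISE's SGACL
syntax and assuming SSH and HTTPS are permitted with an implicit deny:

permit tcp dst eq 22
permit tcp dst eq 443
deny ip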

Navigate to the Contracts screen within the Policy app to create a custom contract.

Here you will see the default Layer-3 permit and deny contracts. Click "Add Contract" on
the top right of this page to open the custom contract dialog.

Within the dialog, provide the name “SecureAccessOnly” and the description “Allow SSH,
HTTPS Implicit Deny” for the new contract.

Select Permit from the Actions. In the Classifier, type ssh to select the SSH TCP Port 22
protocol. Be sure to type this in lowercase. Click Add.
Do the same for selecting and permitting the “https” protocol. Again, this needs to be in
lowercase.

Click on Save at the bottom of the dialog.

Note: If there is a blank entry, the Contract will not be saved. Delete the blank entry, or your settings will
be lost.

Saving takes you back to the main contracts page, where you can verify the "SecureAccessOnly"
contract was indeed added.

Source    Protocol     Contract   Contract Name      Destination    Policy Name
Faculty   ssh, https   allow      SecureAccessOnly   Employees      RestrictFaculty
Faculty   ssh, https   allow      SecureAccessOnly   Faculty        RestrictFaculty
Faculty   ssh, https   allow      SecureAccessOnly   PCI_Servers    RestrictFaculty
Faculty   ssh, https   allow      SecureAccessOnly   Prod_Servers   RestrictFaculty
Faculty   ssh, https   allow      SecureAccessOnly   Students       RestrictFaculty

Return to the Policy Administration page and create a new policy. Name the Policy
“RestrictFaculty” and drag the Faculty group to the Source box.

Select Employees, Faculty, PCI_Servers, Students, and Production_Servers. Drag all 5
groups to the Destination window in a single action.

Click on “Add Contract”.

Select the radio button for "SecureAccessOnly" from the Contract options and click on "OK".

Confirm the policy is correct and click on “Save”.

Upon completion, DNA Center returns you to the Policy Administration page where you can
verify the saved policy now resides in the policy table.

Once again use the “Advanced Options” button to return to the ISE TrustSec Policy Matrix.

In the new view, you should clearly see new blue filled cells indicating Layer-4 ACEs have
been applied.

By hovering over the white reticle in a blue cell, you will see a popup dialog summarizing the
Layer-4 ACEs defined in ISE via DNA Center.

Use the scroll bar at the bottom to reveal all the ACEs and the Port to which they are applied.

A better way to see the information is to double click within a blue cell. This will display the
Security Group ACL that was applied by DNA Center.

To view the details of the Security Group ACL simply go to "Work Centers" → "TrustSec" →
"Components" → "Security Group ACLs".

Double click the Security Group ACL name to display the edit window where the specific
permits and denies are displayed.
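Once the fabric devices have been provisioned later in this lab and download policy from ISE, the
same contracts can also be inspected from a switch CLI using standard TrustSec show commands;
output will vary:

C3850-2#show cts role-based permissions
C3850-2#show cts rbacl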

Using the workflows described in this section, complete deploying the remaining Layer-4 custom
contract and policy:

Source        Protocol           Contract   Contract Name   Destination   Policy Name
PCI_Servers   ssh, https, sftp   allow      SecureXfer      Faculty       RestrictPCIServers

When complete, you will see an ISE TrustSec Policy Matrix similar to the one below:

This completes the Policy section of the SD-Access Lab.

PROVISION

SD-Access Overlay Provisioning

Add Devices to a Site


This step is called out as a separate action to highlight the changes DNA Center makes to the
network devices as they are associated to a site. DNA Center includes this initial step as part of
device provisioning to simplify the overall workflow. Be aware this also applies to non-fabric
deployments, as DNA Center can push the site configuration specifics to devices that aren't being
used in a fabric.

Use the top menu to select the Provision app.

When opening the Provision app, you will be placed on the Devices page. From here you can begin
the provisioning process by selecting devices and associating them to the sites previously created
during the Design phase.

Select all the devices that will become the Fabric Border (C6807), Control Plane (C3850-1)
and Fabric Edges (C3850-2 and C3850-3). Do not select the Fusion Router (ASR1001X) or
the Intermediate Node (C4500).


Once the devices are selected, use the “Selected Devices” pull down to “Add/Remove Site”.

Type in the name of the site where the devices are deployed, "SJC05". Note, the name will
auto-complete from the list of sites defined previously in the Design application.

Check the "Apply to All" box and click Assign.

Upon adding the devices to the site, the Device Inventory page will reappear and popups
will appear indicating the devices are being added to the site.

When a device is added to the site, DNA Center updates its internal database to associate the
SYSLOG, SNMP, and Netflow servers and configures the switches with this information.
Use the show archive command to view the changes DNA Center has made to the devices
added to the site.

C3850-1#show archive config differences flash:discovery system:running-config


!Contextual Config Diffs:
+flow exporter 172.26.205.100                      <-- Send Netflow to DNA Center Assurance
+destination 172.26.205.100

+logging host 172.26.205.100                       <-- Send SYSLOGs to DNA Center Assurance

+snmp-server enable traps snmp authentication linkdown linkup coldstart warmstart
+snmp-server enable traps flowmon
+snmp-server enable traps entity-perf throughput-notif
+snmp-server enable traps call-home message-send-fail server-fail
+snmp-server enable traps tty
<..snip..>
+snmp-server enable traps mpls vpn
+snmp-server enable traps mpls rfc vpn
+snmp-server enable traps vrfmib vrf-up vrf-down vnet-trunk-up vnet-trunk-down
+snmp-server host 172.26.205.100 public            <-- Send SNMP notifications to DNA Center Assurance

Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename “site”.

C3850-1#copy run site

Provision Devices to the Site
As mentioned in the previous section, the common workflow will be to simply provision a device to a
site. This single workflow step is made up of several components, allowing DNA Center to configure
all of the network settings to the devices according to the Design for the site.

As before, select the devices of a given type that were previously added to the site and then
select "Provision" from the "Actions" pulldown.

Note: You can only simultaneously provision devices of the same Device Type.

Here you can see the step we did separately is actually part of the Provisioning workflow.
Verify all the devices are in the correct site, and click Next.

The "Configuration" step is blank at present, so please skip past it by clicking "Next".

The “Advanced Configuration” page is where templates are applied. Select the “conlen0”
template and click Next to continue.

On the Summary page, one can review the site-specific changes DNA Center will push to
the selected devices. Please make sure these are accurate before clicking "Deploy" at
the bottom of the page to continue. Notice the C6807 device does not have the "conlen0"
template applied, as expected.

DNA Center allows users to deploy devices immediately, or to schedule the deployment for a
time in the future. For the lab, select Run Now as shown below and Apply.

DNA Center will redirect you back to the devices page. As before, status messages will be
displayed on the Inventory screen as devices start and complete provisioning.

Starting

Completed

Note: SD-Access only requires the fabric nodes to be provisioned to a site. Although other devices such as
intermediate nodes and fusion routers can be provisioned, they are not required to be.

When a device is provisioned, DNA Center updates its internal database to include AAA, Dot1x,
and Cisco TrustSec, and then configures the devices to enable a secure communication channel
with each device from this point forward.

Use the show archive command to view the changes DNA Center has made to the devices
added to the site.

C3850-1#show archive config differences flash:site system:running-config


!Contextual Config Diffs:

+aaa group server radius dnac-group


+server name dnac-radius_172.26.204.121
+ip radius source-interface Loopback0
+aaa authentication login default group dnac-group local
+aaa authentication enable default enable
+aaa authentication dot1x default group dnac-group
+aaa authorization exec default group dnac-group local
+aaa authorization network default group dnac-group
+aaa authorization network dnac-cts-list group dnac-group
+aaa accounting dot1x default start-stop group dnac-group
+aaa server radius dynamic-author
+client 172.26.204.121 server-key dnalab1
+ip name-server 10.172.99.251
+cts authorization list dnac-cts-list
+dot1x system-auth-control
+ip http secure-active-session-modules none
+ip http max-connections 16
+ip http active-session-modules none
+ip access-list extended ACL_WEBAUTH_REDIRECT
+deny ip any host 172.26.204.121
+permit tcp any any eq www
+permit tcp any any eq 443
+permit tcp any any eq 8443
+permit udp any any eq domain
+permit udp any eq bootpc any eq bootps
+ip radius source-interface Loopback0
+radius-server attribute 6 on-for-login-auth
+radius-server attribute 6 support-multiple
+radius-server attribute 8 include-in-access-req
+radius-server attribute 25 access-request include
+radius-server dead-criteria time 5 tries 3
+radius-server deadtime 30
+radius server dnac-radius_172.26.204.121
+address ipv4 172.26.204.121 auth-port 1812 acct-port 1813
+timeout 2
+retransmit 1
+pac key dnalab1
+cts role-based enforcement
+banner motd ^CEnjoying the SD-Access Lab^C
line con 0
+length 0
+ntp server 172.26.204.254
-aaa authentication login default local
-aaa authorization exec default local
-ntp server 192.168.100.1

C3850-1#show cts pac


AID: E693BB73C263C76B3FD049D68BD26EBD
PAC-Info:
PAC-type = Cisco Trustsec
AID: E693BB73C263C76B3FD049D68BD26EBD
I-ID: FCW1915D0N3
A-ID-Info: Identity Services Engine
Credential Lifetime: 17:19:16 UTC Mon Apr 23 2018
PAC-Opaque:
000200B80003000100040010E693BB73C263C76B3FD049D68BD26EBD0006009C0003010022F01CA3D52F96FE75FAA66D17CBDC1D0000001
35A66250E00093A80FC383E0EBCF8F5D697E2FC43EFBEC8AA335C524617620C97F8F04FE63EB9EB19DD7669914A2D4376AA75498C0C3F20
FD5A8F69F62058FA7D9578C6906803CF357EF22191DC84C8F60F2448F898D68526E9179BF645A888C7265208F0FE7A360914E924538397A
3B452A4F429E1F6421934D63758E30338192699A2BD
Refresh timer is set for 12w4d

C3850-1#show cts environment-data


CTS Environment Data
====================
Current state = COMPLETE
Last status = Successful
Local Device SGT:
SGT tag = 0-00:Unknown
Server List Info:
Installed list: CTSServerList1-0001, 1 server(s):
Server: 172.26.204.121, port 1812, A-ID E693BB73C263C76B3FD049D68BD26EBD
Status = ALIVE
auto-test = FALSE, keywrap-enable = FALSE, idle-time = 60 mins, deadtime = 20 secs
Multicast Group SGT Table:
Security Group Name Table:
0-31:Unknown
2-31:TrustSec_Devices
3-31:Network_Services
4-31:Employees
5-31:Contractors
6-31:Guests
7-31:Production_Users
8-31:Developers
9-31:Auditors
10-31:Point_of_Sale_Systems
11-31:Production_Servers
12-31:Development_Servers
13-31:Test_Servers
14-31:PCI_Servers
15-31:BYOD
16-31:Students
17-31:Faculty
18-31:Research
255-31:Quarantined_Systems
Environment Data Lifetime = 86400 secs
Last update time = 17:19:34 UTC Tue Jan 23 2018
Env-data expires in 0:23:49:21 (dd:hr:mm:sec)
Env-data refreshes in 0:23:49:21 (dd:hr:mm:sec)
Cache data applied = NONE
State Machine is running

Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename "provision".

C3850-1#copy run provision

Create and Manage Fabrics
Once DNA Center has provisioned the SD-Access devices to sites, the SD-Access fabric can be
created.

Use the menu to select "Fabric". After a momentary delay, you will be taken to a new page
for creating and managing SD-Access fabrics.
Click the + to open a right panel for creating fabrics.

Select "Campus" and provide a name for the fabric. The guide will use the fabric name
"University", but any name can be provided. Be sure to click "Add" to create the fabric.

A message will appear in the lower right corner of the page to indicate a new fabric is
created, and the new fabric will appear on the Provisioning → Fabric page.

Provisioning the Fabric
Click on the new “University” fabric to open it. This will bring up a topology of the network
by default.

Note, clicking the white box will display the network in tabular form, if desired.

Click the green box with a directional arrow to rotate the topology horizontally.

After clicking in the marked area above, you will see the topology rotate horizontally as shown below:

Note: If you see groups, or if devices are not in a similar topology, click on the group or out-of-place object
and select Device Role. Change this accordingly and the graph will automatically update, rearranging
the devices based on the assigned role.

Holding the shift key and the left mouse button, use your mouse to highlight the C3850-2
and C3850-3 access switches, which will be provisioned as Edge nodes.

When you release the left button, you will be able to add both switches to the fabric. By
default, any devices added to the fabric will be considered Edge nodes.

Once added to the fabric, devices become outlined in blue to indicate they are planned to be
added to the fabric but have not yet been deployed within it.

Border and Control Plane nodes must be explicitly defined. Click the C3850-1
and select “Add as CP”.

The last step is to select a border and configure it. Click the C6807 and
select “Add as Border”.

This will open a dialog box where you can set the Border Parameters.
Use the values below for configuration:

• Border to “Rest of Company (Internal)”

• Routing Protocol: BGP

• Local AS Number: 65002

• Border Handoff:
o Layer 3
o VRF-Lite
o Infrastructure(192.168.254.0/24)

Upon clicking "Add Interfaces", a dialog will open where you can specify the SDA Fabric
border external connectivity parameters. DNA Center will use these parameters to automate
the border handoff to the fusion router.

Use the values below to complete the border setup:

• External Interface: TenGigabitEthernet1/11

• Remote AS Number: 65001

• Virtual Networks to select:
o Campus
o INFRA_VN
o Guest

Click on the “Save” button to complete the border
settings and return to the initial dialog.

Scroll through the configuration and click “Add” at the bottom of the pane.

Click Save on top right of the screen.

Note: The save process takes a few minutes, typically less than five, so be patient.

Notice there is the opportunity to run now or to schedule the configuration changes, as seen
before. Select “Run Now” and click “Apply” to create the Fabric.

After the devices have been provisioned to the Fabric, the nodes are represented with blue
icons.

As soon as the “Save” button is clicked at the top left of the page, DNA Center will push
Virtual Networks and Control Plane (LISP) configurations to the fabric nodes. Log in to each
node to see the newly added configurations. You should see the following new configuration
entries.

Border Console messages from DNA Center’s configuration


Jan 23 18:26:41.727: %SYS-5-CONFIG_I: Configured from console by cisco on vty0
(172.26.205.100)
Jan 23 18:26:43.567: %SYS-5-CONFIG_I: Configured from console by cisco on vty0
(172.26.205.100)

Border Node (C6807):

C6807#show run | section lisp

router lisp
locator-set rloc_5d4bb509-dfe0-4ec4-a3d4-9d963e27a141
IPv4-interface Loopback0 priority 10 weight 10
auto-discover-rlocs
exit
!
map-server nmr non-site-ttl 1440
eid-table default instance-id 4097     ! LISP instance for INFRA_VN (aka GRT)
remote-rloc-probe on-route-change
exit
!

eid-table vrf Campus instance-id 4099  ! LISP instance for Campus VRF
remote-rloc-probe on-route-change
ipv4 route-import map-cache bgp 65002
ipv4 route-import database bgp 65002 route-map database locator-set rloc_5d4bb509-dfe0-4ec4-a3d4-9d963e27a141
exit
!
eid-table vrf Guest instance-id 4100   ! LISP instance for Guest VRF
remote-rloc-probe on-route-change
ipv4 route-import map-cache bgp 65002
ipv4 route-import database bgp 65002 route-map database locator-set rloc_5d4bb509-dfe0-4ec4-a3d4-9d963e27a141
exit
!
encapsulation vxlan
ipv4 locator reachability exclude-default
ipv4 map-cache-limit 25000
ipv4 database-mapping limit dynamic 5000
ipv4 itr map-resolver 192.168.100.100          ! Overlay Control Plane configuration
ipv4 etr map-server 192.168.100.100 key uci
ipv4 etr map-server 192.168.100.100 proxy-reply
ipv4 etr
ipv4 sgt
ipv4 proxy-itr 192.168.100.1                   ! Overlay Border configuration
exit

Notice the Map Server and Resolver on the Border point to the C3850-1 CP’s loopback0 interface.

C3850-1#show ip int brief | include 192


Te1/0/1 192.168.0.9 YES NVRAM up up
Loopback0 192.168.100.100 YES NVRAM up up

Control Plane (C3850-1):
Jan 23 18:25:39.455: %SYS-5-CONFIG_I: Configured from console by cisco on vty0 (172.26.205.100)
Jan 23 18:25:45.528: %SYS-5-CONFIG_I: Configured from console by cisco on vty0 (172.26.205.100)

C3850-1#show run | section lisp

router lisp
locator-table default
service ipv4
encapsulation vxlan
sgt
map-server
map-resolver
exit-service-ipv4
!
service ethernet
map-server
map-resolver
exit-service-ethernet
!
instance-id 4097
service ipv4
eid-table default
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
instance-id 4098
service ipv4
eid-table vrf DEFAULT_VN
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
instance-id 4099
service ipv4
eid-table vrf Campus
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
instance-id 4100
service ipv4
eid-table vrf Guest
route-export site-registrations
distance site-registrations 250
exit-service-ipv4
!
exit-instance-id
!
map-server nmr non-site-ttl 1440

site site_uci
description map-server configured from apic-em
authentication-key uci
eid-record instance-id 4097 0.0.0.0/0 accept-more-specifics
eid-record instance-id 4098 0.0.0.0/0 accept-more-specifics
eid-record instance-id 4099 0.0.0.0/0 accept-more-specifics
eid-record instance-id 4100 0.0.0.0/0 accept-more-specifics
exit-site
!

Notice the iBGP configuration automated by DNA Center to redistribute LISP routes from CP to
Fabric Border (C6807)

C3850-1#sho run | sec bgp


router bgp 65002
bgp router-id interface Loopback0
bgp log-neighbor-changes
neighbor 192.168.100.1 remote-as 65002
neighbor 192.168.100.1 update-source Loopback0
!
address-family ipv4
redistribute lisp metric 10          ! Redistribute GRT routes to Border
neighbor 192.168.100.1 activate
neighbor 192.168.100.1 send-community both
neighbor 192.168.100.1 weight 65535
neighbor 192.168.100.1 route-map tag out
exit-address-family
!
address-family vpnv4
neighbor 192.168.100.1 activate
neighbor 192.168.100.1 send-community both
neighbor 192.168.100.1 route-map tag out
exit-address-family
!
address-family ipv4 vrf Campus
redistribute lisp metric 10          ! Redistribute Campus VN routes to Border
exit-address-family
!
address-family ipv4 vrf DEFAULT_VN
redistribute lisp metric 10
exit-address-family
!
address-family ipv4 vrf Guest
redistribute lisp metric 10          ! Redistribute Guest VN routes to Border
exit-address-family

C3850-1#show ip bgp ipv4 unicast summary


BGP router identifier 192.168.100.100, local AS number 65002
BGP table version is 1, main routing table version 1

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.100.1 4 65002 115 116 1 0 0 01:42:13 0

Fabric Edge Nodes (C3850-2/C3850-3):
C3850-2#show run | section lisp

router lisp
locator-table default
locator-set rloc_a1cd1e6e-2ed6-41ae-8db8-ccc12f56c3fd
IPv4-interface Loopback0 priority 10 weight 10
exit-locator-set
!
locator default-set rloc_a1cd1e6e-2ed6-41ae-8db8-ccc12f56c3fd
service ipv4
encapsulation vxlan
map-cache-limit 25000
database-mapping limit dynamic 5000
itr map-resolver 192.168.100.100
etr map-server 192.168.100.100 key uci
etr map-server 192.168.100.100 proxy-reply
etr
sgt
proxy-itr 192.168.120.2
exit-service-ipv4
!
service ethernet
map-cache-limit 25000
database-mapping limit dynamic 5000
itr map-resolver 192.168.100.100
itr
etr map-server 192.168.100.100 key uci
etr map-server 192.168.100.100 proxy-reply
etr
exit-service-ethernet
!
instance-id 4097
remote-rloc-probe on-route-change
service ipv4
eid-table default
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!
instance-id 4098
remote-rloc-probe on-route-change
service ipv4
eid-table vrf DEFAULT_VN
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!
instance-id 4099
remote-rloc-probe on-route-change
service ipv4
eid-table vrf Campus
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!
instance-id 4100
remote-rloc-probe on-route-change
service ipv4
eid-table vrf Guest
map-cache 0.0.0.0/0 map-request
exit-service-ipv4
!
exit-instance-id
!

Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename “fabric”.

C3850-1#copy run fabric

Note: In this lab topology, DNA Center and ISE are connected directly to the Fabric Border (C6807).

Prior to the DNA Center border provisioning step, we had configured an ISIS underlay between the C6807 and the
Fusion Router, so that the Control Plane (C3850-1) and DNA Center have access to the Fusion Router and the
segments connected behind it (DHCP server, WLC, etc.). This is why DNA Center was able to discover the Fusion
Router initially when you performed the network discovery.

When DNA Center provisions the border external interfaces, it replaces this ISIS configuration, leaving
both the Fusion Router and the WLC unreachable from DNA Center.

We will address this issue in the Border Handoff section of this guide.
It is not a concern at this point, and you may proceed with Host Onboarding and Endpoint Validation.

SD-Access End Host Provisioning
Once the overlay is provisioned, IP address pools must be added so that hosts can
communicate within the fabric. When an IP pool is configured in SD-Access, DNA Center
immediately connects to each edge node to create the appropriate SVI (switch virtual interface) to
allow the hosts to communicate.

In addition, an Anycast gateway is applied to all edge nodes. This is an essential element of SD-
Access, as it allows hosts to roam to any edge node with no additional provisioning: every edge node
presents the same gateway IP and virtual MAC for a given subnet.
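
For example, after host onboarding (completed later in this section) you can confirm the anycast behavior by comparing the gateway SVI on both edge nodes. A sketch based on the Vlan1021 values shown in the Edge Node diff later in this guide; both switches should report an identical virtual MAC and IP:

C3850-2#show run interface Vlan1021 | include mac-address|ip address
 mac-address 0000.0c9f.f45c
 ip address 172.16.101.254 255.255.255.0

C3850-3#show run interface Vlan1021 | include mac-address|ip address
 mac-address 0000.0c9f.f45c
 ip address 172.16.101.254 255.255.255.0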

Click on the “Host Onboarding” at the top of the Fabric topology screen to start applying
authentication and the IP networking for onboarding host devices.

Select Closed Authentication. Click Save, and then “Run Now” to continue.

Note: Be sure to click Save before you move on to the next step.

Click the Campus Virtual Network box and add the Production and Staff IP Pools for
Data traffic. Click Update when finished, then select “Run Now” and “Apply” to continue.

Once the syslog messages are observed, show the new running configuration of the
interfaces. Notice the Dot1x Closed authentication policy has been pushed down to all host
interfaces.

C3850-2#sho run | begin interface Giga

interface GigabitEthernet0/0
 vrf forwarding Mgmt-vrf
 no ip address
 speed 1000
 negotiation auto
!
interface GigabitEthernet1/0/1
 description Connected to AP
 switchport mode access
 switchport voice vlan 4000
 authentication control-direction in
 authentication event server dead action authorize vlan 3999
 authentication event server dead action authorize voice
 authentication host-mode multi-auth
 authentication order dot1x mab
 authentication priority dot1x mab
 authentication port-control auto
 authentication periodic
 authentication timer reauthenticate server
 authentication timer inactivity server dynamic
 mab
 dot1x pae authenticator
 dot1x timeout tx-period 10
 spanning-tree portfast
!
interface GigabitEthernet1/0/2
 switchport mode access
 switchport voice vlan 4000
 device-tracking attach-policy IPDT_MAX_10
 shutdown
 authentication control-direction in
 authentication event server dead action authorize vlan 3999
 authentication event server dead action authorize voice
 authentication host-mode multi-auth
 authentication order dot1x mab
 authentication priority dot1x mab
 authentication port-control auto
 authentication periodic
 authentication timer reauthenticate server
 authentication timer inactivity server dynamic
 mab
 dot1x pae authenticator
 dot1x timeout tx-period 10
 spanning-tree portfast
!
interface GigabitEthernet1/0/3
 description Client Wired-2
 switchport mode access
 switchport voice vlan 4000
 device-tracking attach-policy IPDT_MAX_10
 authentication control-direction in
 authentication event server dead action authorize vlan 3999
 authentication event server dead action authorize voice
 authentication host-mode multi-auth
 authentication order dot1x mab
 authentication priority dot1x mab
 authentication port-control auto
 authentication periodic
 authentication timer reauthenticate server
 authentication timer inactivity server dynamic
 mab
 dot1x pae authenticator
 dot1x timeout tx-period 10
 spanning-tree portfast

Once applied, DNA Center will push the IP Pools to the Campus Virtual Network. This
requires updates to the Border, CP, and Edge nodes. Log in to each node to see the newly
added configurations.

Border Node (C6807):


C6807#show archive config differences disk0:fabric system:running-config
!Contextual Config Diffs:
+interface Loopback1021
+description Loopback Border
+vrf forwarding Campus
+ip address 172.16.101.254 255.255.255.255
+interface Loopback1022
+description Loopback Border
+vrf forwarding Campus
+ip address 172.16.201.254 255.255.255.255
router bgp 65002
address-family ipv4 vrf Campus
+network 172.16.101.254 mask 255.255.255.255
+network 172.16.201.254 mask 255.255.255.255

Control Plane Node (C3850-1):
C3850-1#show archive config differences flash:fabric system:running-config
!Contextual Config Diffs:
+interface Loopback1021
+description Loopback Map Server
+vrf forwarding Campus
+ip address 172.16.101.254 255.255.255.255
+interface Loopback1022
+description Loopback Map Server
+vrf forwarding Campus
+ip address 172.16.201.254 255.255.255.255
router lisp
site site_uci
+eid-record instance-id 4099 172.16.101.0/24 accept-more-specifics
+eid-record instance-id 4099 172.16.201.0/24 accept-more-specifics
router bgp 65002
address-family ipv4 vrf Campus
+network 172.16.101.254 mask 255.255.255.255
+network 172.16.201.254 mask 255.255.255.255
+aggregate-address 172.16.201.0 255.255.255.0 summary-only

Edge Node (C3850-2):


C3850-2#show archive config differences flash:fabric system:running-config
!Contextual Config Diffs:
+interface Vlan1021
+description Configured from apic-em
+mac-address 0000.0c9f.f45c
+vrf forwarding Campus
+ip address 172.16.101.254 255.255.255.0
+ip helper-address 10.172.99.11
+no ip redirects
+ip local-proxy-arp
+ip route-cache same-interface
+no lisp mobility liveness test
+lisp mobility 172_16_101_0-Campus
+interface Vlan1022
+description Configured from apic-em
+mac-address 0000.0c9f.f45d
+vrf forwarding Campus
+ip address 172.16.201.254 255.255.255.0
+ip helper-address 10.172.99.11
+no ip redirects
+ip local-proxy-arp
+ip route-cache same-interface
+no lisp mobility liveness test
+lisp mobility 172_16_201_0-Campus
router lisp
instance-id 4099
+dynamic-eid 172_16_101_0-Campus
+database-mapping 172.16.101.0/24 locator-set rloc_2e3022a3-a712-483d-ac45-6265a6224172
+exit-dynamic-eid
+dynamic-eid 172_16_201_0-Campus
+database-mapping 172.16.201.0/24 locator-set rloc_2e3022a3-a712-483d-ac45-6265a6224172
+exit-dynamic-eid
+cts role-based enforcement vlan-list 1021-1022

Click the Guest Virtual Network box and add the WiredGuest IP Pool for Data traffic. Click
Update and Run Now to Apply the change.

Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename “wiredcomplete”.

C3850-1#copy run wiredcomplete

Now the IP address pools have been assigned to the VN and configured on the devices. The
interfaces have also been configured with 802.1x closed authentication mode. Now you’re ready to
onboard your endpoints!

VALIDATION
Optional Guide

End Point Validation
All hosts are powered down by default. Please Power them on using the vSphere client as needed
for Validation.

Verify IP Connectivity for MAB Hosts (Optional)

To verify the end host connectivity, you will use the vSphere client to connect to your pod ESX server
and launch consoles for the end host VMs. From the consoles you will first ping the default gateway,
and then the other end hosts.

The Tiny Core Linux systems are authenticated using MAB, so let’s start with them.

From the jump host use the vSphere client to connect to your ESXi host.
(Instructor / Dna@labrocks)

Expand the ESX host for your Pod to display all the VMs associated to the Pod.
Pods 1-10 are servers .71-.80; Pods 11-20 are servers .104-.113.

Here you will find several hosts. Verify the P-## matches your Pod number. Open
console windows for the TCL-Guest and TCL-PCI-Server VMs by selecting them and
right-clicking.

Once the console opens, please log in and open a terminal from the menu bar at the bottom
of the desktop.

Type ifconfig. Note the HWaddr (MAC address) for eth0. You will use this to add the
endpoint into ISE. Use route -n to verify the default gateway IP address.
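
An illustrative transcript of both checks; the MAC address and prompt below are examples only, and your VM will show different values:

tc@box:~$ ifconfig eth0 | grep HWaddr
eth0      Link encap:Ethernet  HWaddr 00:50:56:96:CD:0F

tc@box:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.101.254  0.0.0.0         UG    0      0        0 eth0
172.16.101.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0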

Start a ping test to the default gateway. Keep the ping test running while you complete the
remaining steps in this section.
For the PCI_Server (red), use 172.16.101.254 as the default gateway.
For the Guest (green), use 172.16.250.254 as the default gateway.

Access ISE from your browser and go to Administration → Identity Management → Groups
→ Endpoint Identity Groups.

From here you should see PCI_Server and GuestEndpoints Endpoint Identity Groups
amongst the default Endpoint Identity Groups. Click into each one and remove any existing
MAC addresses.

Now, the next step is to add your Pod’s PCI_Server VM MAC and Guest VM MAC to the
PCI_Server and GuestEndpoints groups above.
Rather than manually typing in the MAC, you can do this by clicking through the Context
Visibility screens: Home → Context Visibility → Endpoints

From here you will see a list of MAC Addresses. By default 10 rows are shown per page.
You should modify this to 25 to make it easier to find the VM MAC addresses.

Search the MAC Address column for your PCI_Server. Once you find the matching
MAC address, click on it. This will bring up a new page for the specific Endpoint.

Note: In some cases the subsequent page appears to be blank. If this occurs, simply refresh the page
and re-select the MAC address.

Click on the Edit icon, between the refresh and trash icons to the right of the MAC Address, to
bring up the Edit Endpoint dialog.

Select “Static Group Assignment”, then select “PCI_Servers” from the pull-down list and save.

Now, to retry the authentication, go to Operations → Live Sessions.

Here you will see a session with the Endpoint ID and Identity of your PCI_Server MAC.

Hover to the right of “Show CoA Actions” and click the target icon when it appears. This
brings up a list of actions to take. Select “Session termination with port bounce”.

Once complete, a success box will appear at the bottom of the browser window.

Repeat the steps above for the Guest VM using the GuestEndpoints Identity Group.

Verify IP Connectivity for MAB Hosts
As before, use the vSphere consoles for the end host VMs to test connectivity, starting with the
Tiny Core Linux (MAB) hosts.

Log in to each of the 3850s and find the interfaces that the Guest Linux and
PCI_Server endpoints are connected to. You can see this on the topology at the beginning
of this guide.

Since the endpoints are already connected to the switch, issue a shut / no shut on the
interfaces these endpoints are connected to, to trigger a new authentication request. An
example is below:

C3850-2(config)#int gig 1/0/3
C3850-2(config-if)#shut
C3850-2(config-if)#no shut

Use ifconfig to verify the VM’s IP address and route -n to verify the default gateway. Check
to see if the hosts can connect to their default gateway using ping.
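
A quick way to confirm that MAB succeeded on the switch is the same session command used later for Dot1x. A sketch; the MAC address, session identifiers, and VLAN will differ in your pod:

C3850-2#show authentication sessions interface g1/0/3 details
            Interface:  GigabitEthernet1/0/3
          MAC Address:  0050.5696.xxxx
               Status:  Authorized

Method status list:
       Method           State
       mab              Authc Success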

Connect to the Control Plane (C3850-1) and verify the hosts can be seen within the
control plane:

C3850-1#sho lisp site


LISP Site Registration Information
* = Some locators are down or unreachable
# = Some registrations are sourced by reliable transport

Site Name   Last       Up    Who Last         Inst   EID Prefix
            Register          Registered      ID
site_uci never no -- 4097 0.0.0.0/0
never no -- 4098 0.0.0.0/0
never no -- 4098 172.16.101.0/24
00:05:08 yes# 192.168.120.2 4098 172.16.101.201/32
never no -- 4098 172.16.201.0/24
never no -- 4099 0.0.0.0/0
never no -- 4099 172.16.250.0/24
00:02:12 yes# 192.168.120.3 4099 172.16.250.25/32

Note: You will not be able to ping between the Guest and PCI Server as they are in separate Virtual Networks,
and we have not configured the Fusion router to leak routes between the Virtual Networks. We will do
this in the last exercise of this lab.



Verify IP Connectivity for Dot1x Hosts
We will use the Windows supplicant to verify Dot1x authentication. The PC-Wired-2 and
PC-Wired-3 VMs represent end hosts connected to C3850-2 (Edge node 1) and C3850-3 (Edge
node 2), respectively. It is important to note the VMs have static IP addresses, so Faculty
and Students must use PC-Wired-2, and Employees must use PC-Wired-3. If you mix this up, you will
not have IP connectivity because the host IP will not match the assigned VLAN.

User Group   Username   Password   VM
Employee     Emily      dnalab1    PC-Wired-3
Employee     Ethan      dnalab1    PC-Wired-3
Faculty      Faith      dnalab1    PC-Wired-2
Faculty      Fred       dnalab1    PC-Wired-2
Student      Sam        dnalab1    PC-Wired-2
Student      Stacy      dnalab1    PC-Wired-2

The Windows supplicant has been preconfigured so you will only need to log into the network from
the PC-Wired hosts. The following steps will walk you through the network authentication process.

Using the vSphere client, select your Pod’s PC-Wired-2 VM, Power it on and open a
console. You will need to login with the credentials: (admin/ dnalab1)

Once open, click on the wired networking icon in the system tray and open the Network and
Sharing Center.

Select the “Change adapter settings” on the left of the window.

Right click the LAB adapter and enable it.



As the interface comes up, a popup dialog box will appear asking for additional authentication.
Click on the message to be prompted for credentials.

Now all that is left is to provide the supplicant with credentials for PC-Wired-2. Be sure to
use a Faculty or Student account:

Example: faith / dnalab1

Repeat the steps in this section to bring up the PC-Wired-3 host, logging in as either ethan
or emily. Again, both share the same password “dnalab1”.

Once logged in, use the command window to ping the default gateway:
172.16.101.254 for PC-Wired-2 and 172.16.201.254 for PC-Wired-3.

It may take a few seconds to authenticate the user. If you run into problems, please check
the switch and make sure the user account has authenticated.



C3850-3#show authentication sessions interface g1/0/1 details
Interface: GigabitEthernet1/0/1
IIF-ID: 0x1075D904
MAC Address: 0050.5696.cd0f
IPv6 Address: Unknown
IPv4 Address: 172.16.201.100
User-Name: ethan
Status: Authorized
Domain: DATA
Oper host mode: multi-auth
Oper control dir: in
Session timeout: N/A
Common Session ID: C0A8020600000019CB6A46E5
Acct Session ID: 0x00000016
Handle: 0xc6000005
Current Policy: POLICY_Gi1/0/1

Local Policies:
Idle timeout: 65536 sec

Server Policies:
Vlan Group: Vlan: 1022
Security Policy: None
Security Status: Link Unsecured
SGT Value: 4

Method status list:


Method State
dot1x Authc Success

You can also validate the host has appeared in the Control Plane by checking the lisp site
table on the Control Plane node (C3850-1).

C3850-1#sho lisp site


LISP Site Registration Information
* = Some locators are down or unreachable
# = Some registrations are sourced by reliable transport

Site Name   Last       Up    Who Last         Inst   EID Prefix
            Register          Registered      ID
site_uci never no -- 4097 0.0.0.0/0
never no -- 4098 0.0.0.0/0
never no -- 4098 172.16.101.0/24
00:13:39 yes# 192.168.120.2 4098 172.16.101.201/32
never no -- 4098 172.16.201.0/24
00:01:17 yes# 192.168.120.3 4098 172.16.201.100/32
never no -- 4099 0.0.0.0/0
never no -- 4099 172.16.250.0/24
00:10:43 yes# 192.168.120.3 4099 172.16.250.25/32



Verify PC-Wired-2 can ping the default gateway 172.16.101.254.

Verify emily or ethan can ping faith or fred.

Can emily or ethan ping the PCI server 172.16.101.201?



SD-Access Border Validation Overview

SD-Access Lab Work Flow


This guide will build upon the Wired lab to show inter-Virtual Network and shared services routing.

In the wired lab we deployed an SD-Access fabric with a decentralized Border (Catalyst 6807) and
Control Plane (Catalyst C3850-1), utilizing two Catalyst 3850s as Edge nodes. Once deployed,
the fabric was validated using MAB and Dot1x end hosts to confirm intra-Virtual Network
connectivity.

The next step is to configure the BGP handoff from the Border to the ASR Fusion router.

This is comprised of the following tasks:


1. Establish IP connectivity between Border and the Fusion router for the Virtual Networks.
DNA Center automated the Border interface configuration when the fabric was deployed.
Thus all the remains is to configure the Fusion interface to match for IP connectivity.
2. Use BGP to extend Virtual Networks to the Fusion Router. DNAC Center automated the
Border side BGP configuration. The BGP configuration on Fusion Router needs to manually
be configured.
3. Configure router leaking to distribute Virtual Network and shared services routes within the
SD-Access fabric.
4. Validate Inter-Virtual Network routing as well as access to share services

Once complete, end hosts will be able to communicate across Virtual Networks and reach shared
services such as DHCP, DNS, etc.



SD-Access Inter-Virtual Network Routing
Once we have confirmed intra-Virtual Network connectivity and routing, we can move to the next
task of inter-Virtual Network routing. In this example, we will first establish VRF-Lite between the
Border and Fusion Router. Then we will show how to externally connect the fabric to shared services.

Once complete, the clients can connect to the Fusion router (in this case an ASR-1001X) and the
DHCP server. This section also sets the stage for adding SD-Wireless.

BGP Handoff between Border and Fusion ASR Router


To allow hosts in each Virtual Network to communicate with each other we will:

• First create the L3 connectivity between the Border and the Fusion Router
• Use BGP to extend the VRFs from the Border to the Fusion router (ASR)
• Use VRF leaking to share the routes between the VRFs on the Fusion router
• Finally, distribute the routes between the VRFs down to the Border

Once completed, the end host will route through the SD-Access fabric to the border, then leave the
fabric to traverse the Fusion router. The Fusion router will leak the traffic to the other VRF and then
send it to the border, where it will be re-encapsulated into the SD-Access fabric and routed to the
destination end host. See the illustrations of this process below.

The illustrations depict four stages: (1) after the SD-Access fabric is applied, (2) extending the
Virtual Networks to the Fusion Router, (3) leaking routes on the Fusion Router, and (4) sharing
routes to the Border without creating a routing loop.



Create L3 connectivity between border and Fusion Router
The first task is to allow IP connectivity from the Border to the Fusion router for
each Virtual Network that requires external connectivity. Since DNA Center has
automatically configured the Border, the next step is to view that configuration
and then configure the Fusion router to enable IP connectivity. To do this we
will use L3 sub-interfaces on the ASR Fusion router.

Verify the external handoff configuration pushed by DNA Center by returning to the
Provision → Fabric Topology screen. Select the Border and View Info. The Border
information will appear in the right panel. Click on the interface to expose the Virtual Network to
SVI associations.

Please see the logical representation below, for clarity only.



(OPTIONAL) To review the Border configuration pushed by DNA Center use the following
commands:

C6807#show ip interface brief | include Vlan3


Vlan3001 192.168.254.1 YES manual up up
Vlan3002 192.168.254.5 YES manual up up
Vlan3003 192.168.254.9 YES manual up up

C6807#show run interface Vlan3001


description vrf interface to External router
ip address 192.168.254.1 255.255.255.252
no ip redirects
ip route-cache same-interface
platform lisp-enable
end

C6807#show run interface Vlan3002


description vrf interface to External router
vrf forwarding Campus
ip address 192.168.254.5 255.255.255.252
no ip redirects
ip route-cache same-interface
platform lisp-enable
end

C6807#show run interface Vlan3003


description vrf interface to External router
vrf forwarding Guest
ip address 192.168.254.9 255.255.255.252
no ip redirects
ip route-cache same-interface
platform lisp-enable
end

C6807#show run interface TenGigabitEthernet1/11


description ASR1K TE0/0/1 underaly
switchport
switchport mode trunk
mtu 9216
logging event link-status
end



The next step is to create the L3 sub-interfaces on the Fusion router; however, there is a catch-22:
the L3 sub-interfaces must be associated with VRFs, but the VRFs do not yet exist on
the Fusion router. To simplify the configuration steps, we recommend defining the VRFs first and
then configuring the L3 sub-interfaces.

Copy the Campus and Guest VRFs from the Border to the Fusion Router. Copying is required
because the RD and RT values must match exactly, and they are auto-generated by
DNA Center.

Note: If you have followed the Wired lab guide exactly, you will see the rd/rt numbering shown below; however,
if you deviated, the numbers will likely be different, and thus we highly recommend you do a show run on
the Border and copy/paste the VRF definitions to the Fusion router (ASR).

COPY THE AUTO-PROVISIONED VRF DEFINITION FROM THE BORDER CONFIG

EXAMPLE ONLY

C6807#sho run
Building configuration...

Current configuration : 25943 bytes


<..snip..>
boot-end-marker
!
!
vrf definition Campus
rd 1:4099
!
address-family ipv4
route-target export 1:4099
route-target import 1:4099
exit-address-family
!
vrf definition DEFAULT_VN
rd 1:4098
!
address-family ipv4
route-target export 1:4098
route-target import 1:4098
exit-address-family
!
vrf definition Guest
rd 1:4100
!
address-family ipv4
route-target export 1:4100
route-target import 1:4100
exit-address-family
!



PASTE THE AUTO-PROVISIONED VRF DEFINITION FROM THE BORDER CONFIG

On the Fusion router, configure three Layer 3 sub-interfaces on the link connecting the
Border (Te0/0/1) so the Virtual Networks can be carried to the Fusion router.

Fusion-ASR:

Note: Make sure the VRFs and VLAN numbers align with the Border.

conf t

interface TenGigabitEthernet0/0/1.3001
description fusion to Border_GRT
encapsulation dot1Q 3001
ip address 192.168.254.2 255.255.255.252
no shutdown

interface TenGigabitEthernet0/0/1.3002
description fusion to Campus_VRF
encapsulation dot1Q 3002
vrf forwarding Campus
ip address 192.168.254.6 255.255.255.252
no shutdown

interface TenGigabitEthernet0/0/1.3003
description fusion to Guest_VRF
encapsulation dot1Q 3003
vrf forwarding Guest
ip address 192.168.254.10 255.255.255.252
no shutdown

Verify IP connectivity between the Fusion router and Border

Fusion-ASR#sho ip int br | i 300


Te0/0/1.3001 192.168.254.2 YES manual up up
Te0/0/1.3002 192.168.254.6 YES manual up up
Te0/0/1.3003 192.168.254.10 YES manual up up

Fusion-ASR#ping 192.168.254.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

Fusion-ASR#ping vrf Campus 192.168.254.5


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

Fusion-ASR#ping vrf Guest 192.168.254.9


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.9, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/31/133 ms

show ip route vrf <Campus | Guest> can also be used.



Extend the Virtual Networks from the Border to the Fusion router
We will use BGP in the lab to showcase how this would work in a
production environment. As with the interface configuration,
DNA Center has also fully automated the Border BGP handoff
configuration. Verify this is working as expected and then
configure the Fusion router to extend the Fabric VNs.

Verify the routes now exist in both Campus and Guest VRFs.

C6807:

C6807#show ip route vrf Campus


Routing Table: Campus
<..snip..>
Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks


B 172.16.101.0/24 [200/0] via 192.168.100.100, 19:51:50
C 172.16.101.254/32 is directly connected, Loopback1021
B 172.16.201.0/24 [200/0] via 192.168.100.100, 19:51:25
C 172.16.201.254/32 is directly connected, Loopback1022
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.4/30 is directly connected, Vlan3002
L 192.168.254.5/32 is directly connected, Vlan3002

C6807#show ip route vrf Guest


Routing Table: Guest
<..snip..>
Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks


B 172.16.250.0/24 [200/0] via 192.168.100.100, 18:36:58
C 172.16.250.254/32 is directly connected, Loopback1023
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.8/30 is directly connected, Vlan3003
L 192.168.254.9/32 is directly connected, Vlan3003



On the Fusion router, create a BGP process and define an address family for each Virtual
Network, as was done on the Border.

Fusion-ASR:

conf t

router bgp 65001


bgp log-neighbor-changes
neighbor 192.168.254.1 remote-as 65002
neighbor 192.168.254.1 update-source TenGigabitEthernet0/0/1.3001
!
address-family ipv4
network 10.172.99.11 mask 255.255.255.255
network 10.172.120.0 mask 255.255.255.0
redistribute connected                                                ! INFRA_VN / GRT
neighbor 192.168.254.1 activate
exit-address-family

!
address-family ipv4 vrf Campus
neighbor 192.168.254.5 remote-as 65002
neighbor 192.168.254.5 update-source TenGigabitEthernet0/0/1.3002    ! Campus VRF
neighbor 192.168.254.5 activate
exit-address-family
!
address-family ipv4 vrf Guest
neighbor 192.168.254.9 remote-as 65002
neighbor 192.168.254.9 update-source TenGigabitEthernet0/0/1.3003    ! Guest VRF
neighbor 192.168.254.9 activate
exit-address-family

Note: 10.172.99.11 is the DHCP server and 10.172.120.0/24 is the WLC network, which we are
advertising to the Border (C6807) via the INFRA_VN (global routing table) using BGP.

Verify DHCP server and WLC subnets are learned on the Fabric Border

C6807#show ip route bgp


<..snip..>
Gateway of last resort is 172.26.204.100 to network 0.0.0.0

10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks


B 10.172.99.0/24 [20/0] via 192.168.254.2, 01:43:16
B 10.172.120.0/24 [20/0] via 192.168.254.2, 01:43:16
107.0.0.0/32 is subnetted, 1 subnets
B 107.107.107.107 [20/0] via 192.168.254.2, 01:43:16
192.168.0.0/24 is variably subnetted, 7 subnets, 2 masks
B 192.168.0.20/30 [20/0] via 192.168.254.2, 01:43:16
192.168.105.0/32 is subnetted, 1 subnets
B 192.168.105.1 [20/0] via 192.168.254.2, 01:43:16



Verify BGP neighborship is established between the Border and Fusion router.

INFRA_VN / GRT:

Fusion-ASR#show ip bgp ipv4 unicast summary

BGP router identifier 192.168.105.1, local AS number 65001


BGP table version is 1, main routing table version 1
6 network entries using 1488 bytes of memory
6 path entries using 816 bytes of memory
2/2 BGP path/bestpath attribute entries using 560 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
2 BGP extended community entries using 48 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 2936 total bytes of memory
BGP activity 12/0 prefixes, 12/0 paths, scan interval 60 secs

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.254.1 4 65002 109 108 12 0 0 01:33:56 3

Fusion-ASR#show ip bgp vpnv4 vrf Campus summary

BGP router identifier 192.168.105.1, local AS number 65001


BGP table version is 7, main routing table version 7
4 network entries using 1024 bytes of memory
4 path entries using 544 bytes of memory
6/4 BGP path/bestpath attribute entries using 1776 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
2 BGP extended community entries using 48 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 3416 total bytes of memory
BGP activity 15/0 prefixes, 15/0 paths, scan interval 60 secs

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.254.5 4 65002 110 107 7 0 0 01:35:14 4

Fusion-ASR#show ip bgp vpnv4 vrf Guest summary

BGP router identifier 192.168.105.1, local AS number 65001


BGP table version is 7, main routing table version 7
2 network entries using 512 bytes of memory
2 path entries using 272 bytes of memory
6/4 BGP path/bestpath attribute entries using 1776 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
2 BGP extended community entries using 48 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 2632 total bytes of memory
BGP activity 15/0 prefixes, 15/0 paths, scan interval 60 secs

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.254.9 4 65002 110 109 7 0 0 01:35:29 2

Verify the Virtual Network IP Pools and specific routes are learned on the Fusion router.

Fusion-ASR#show ip route vrf Campus


Routing Table: Campus
<..snip..>

Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks


B 172.16.101.0/24 [20/0] via 192.168.254.5, 01:38:16
B 172.16.101.254/32 [20/0] via 192.168.254.5, 01:38:16
B 172.16.201.0/24 [20/0] via 192.168.254.5, 01:38:16
B 172.16.201.254/32 [20/0] via 192.168.254.5, 01:38:16
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.4/30 is directly connected, TenGigabitEthernet0/0/1.3002
L 192.168.254.6/32 is directly connected, TenGigabitEthernet0/0/1.3002

Fusion-ASR#show ip route vrf Guest

Routing Table: Guest


<..snip..>

Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks


B 172.16.250.0/24 [20/0] via 192.168.254.9, 01:38:30
B 172.16.250.254/32 [20/0] via 192.168.254.9, 01:38:30
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.8/30 is directly connected, TenGigabitEthernet0/0/1.3003
L 192.168.254.10/32 is directly connected, TenGigabitEthernet0/0/1.3003



Use VRF leaking to share the routes on the Fusion router and distribute
them to the Border
Route-maps are used to select the routes that should be
leaked between Virtual Networks. Importing and exporting with these maps
enables the VRF route leaking required on the Fusion Router.

Note: We have introduced two new networks that make up the shared services in
the lab: 10.172.99.0/24 is for DHCP, whereas 10.172.120.0/24 is for the WLC.
These could have been on the same network, but for clarity two separate
networks are used.

Fusion-ASR:

ip prefix-list campus_networks seq 5 permit 172.16.101.0/24


ip prefix-list campus_networks seq 10 permit 172.16.201.0/24
!
ip prefix-list guest_networks seq 10 permit 172.16.250.0/24
!
ip prefix-list shared_networks seq 5 permit 10.172.99.0/24
ip prefix-list shared_networks seq 10 permit 10.172.120.0/24

route-map campus_nets permit 10


match ip address prefix-list campus_networks
!
route-map global_shared_net permit 10
match ip address prefix-list shared_networks
!
route-map guest_nets permit 10
match ip address prefix-list guest_networks

vrf definition Campus


rd 1:4099
!
address-family ipv4
import ipv4 unicast map global_shared_net
export ipv4 unicast map campus_nets
route-target export 1:4099
route-target import 1:4099
exit-address-family

vrf definition Guest


rd 1:4100
!
address-family ipv4
import ipv4 unicast map global_shared_net
export ipv4 unicast map guest_nets
route-target export 1:4100
route-target import 1:4100
exit-address-family

Note: There is no need to leak the shared services routes into INFRA_VN, as INFRA_VN is simply the GRT itself.



After a few minutes BGP will propagate the routes accordingly. (Please be patient.)

Verify the local ASR routes for the DHCP server and WLC are added to the Campus routing
table.

Fusion-ASR:

Fusion-ASR#show ip route vrf Campus

Routing Table: Campus


<..snip..>

Gateway of last resort is not set

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks


B 10.172.99.0/24 is directly connected, 00:05:32, Loopback99
L 10.172.99.11/32 is directly connected, Loopback99
B 10.172.120.0/24 is directly connected, 00:05:32, GigabitEthernet0/0/5
L 10.172.120.254/32 is directly connected, GigabitEthernet0/0/5
172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks
B 172.16.101.0/24 [20/0] via 192.168.254.5, 02:21:57
B 172.16.101.254/32 [20/0] via 192.168.254.5, 02:21:57
B 172.16.201.0/24 [20/0] via 192.168.254.5, 02:21:57
B 172.16.201.254/32 [20/0] via 192.168.254.5, 02:21:57
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.4/30 is directly connected, TenGigabitEthernet0/0/1.3002
L 192.168.254.6/32 is directly connected, TenGigabitEthernet0/0/1.3002

Fusion-ASR#show ip route vrf Guest

Routing Table: Guest


<..snip..>

Gateway of last resort is not set

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks


B 10.172.99.0/24 is directly connected, 00:06:00, Loopback99
L 10.172.99.11/32 is directly connected, Loopback99
B 10.172.120.0/24 is directly connected, 00:06:00, GigabitEthernet0/0/5
L 10.172.120.254/32 is directly connected, GigabitEthernet0/0/5
172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
B 172.16.250.0/24 [20/0] via 192.168.254.9, 02:23:31
B 172.16.250.254/32 [20/0] via 192.168.254.9, 02:23:31
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.8/30 is directly connected, TenGigabitEthernet0/0/1.3003
L 192.168.254.10/32 is directly connected, TenGigabitEthernet0/0/1.3003



Verify the routes were distributed to the Border.

C6807:

C6807#show ip route vrf Campus

Routing Table: Campus


<..snip..>

Gateway of last resort is not set

10.0.0.0/24 is subnetted, 2 subnets


B 10.172.99.0 [20/0] via 192.168.254.6, 00:09:20
B 10.172.120.0 [20/0] via 192.168.254.6, 00:09:20
172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks
B 172.16.101.0/24 [200/0] via 192.168.100.100, 03:25:58
C 172.16.101.254/32 is directly connected, Loopback1021
B 172.16.201.0/24 [200/0] via 192.168.100.100, 03:25:58
C 172.16.201.254/32 is directly connected, Loopback1022
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.4/30 is directly connected, Vlan3002
L 192.168.254.5/32 is directly connected, Vlan3002

C6807#show ip route vrf Guest

Routing Table: Guest


<..snip..>

Gateway of last resort is not set

10.0.0.0/24 is subnetted, 2 subnets


B 10.172.99.0 [20/0] via 192.168.254.10, 00:08:27
B 10.172.120.0 [20/0] via 192.168.254.10, 00:08:27
172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
B 172.16.250.0/24 [200/0] via 192.168.100.100, 03:26:05
C 172.16.250.254/32 is directly connected, Loopback1023
192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.254.8/30 is directly connected, Vlan3003
L 192.168.254.9/32 is directly connected, Vlan3003



Verify Inter-Virtual Network routing
Ping from PC-Wired-2 (172.16.101.100, Auth: fred/faith) to Fusion-ASR Lo99 (10.172.99.11).

Ping from PC-Wired-3 (172.16.201.100, Auth: ethan/emily) to Fusion-ASR g0/0/5 (10.172.120.254).

Verify the clients can also ping the WLC (10.172.120.2).

Please do not skip the validation steps. They must be working before you proceed further.
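
An illustrative success case from PC-Wired-2; the reply times and TTL are examples only and will vary:

C:\Users\admin>ping 10.172.99.11

Pinging 10.172.99.11 with 32 bytes of data:
Reply from 10.172.99.11: bytes=32 time=2ms TTL=252
Reply from 10.172.99.11: bytes=32 time=1ms TTL=252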



Restoring IP Connectivity to the Underlay
Recall that in this lab, DNA Center and ISE are connected directly to the Border (C6807). The lab began
with a manual underlay applied to allow DNA Center to discover all of the lab devices. When
the Border handoff was provisioned by DNA Center, this effectively broke the underlay
above the Border. Thus, for this specific lab topology, you must perform the configurations below
before you can proceed with the rest of the lab.

Border (C6807 – Initial Underlay):

C6807#show running-config interface TenGigabitEthernet 1/11
Building configuration...

Current configuration : 148 bytes
!
interface TenGigabitEthernet1/11
 description ASR1K TE0/0/0 underlay
 dampening
 mtu 9100
 ip address 192.168.0.21 255.255.255.252
 no ip redirects
 no ip proxy-arp
 ip router isis
 logging event link-status
 load-interval 30
 carrier-delay msec 0
 bfd interval 300 min_rx 300 multiplier 3
 no bfd echo
end

Fusion Router (ASR 1001-X):

Fusion-ASR#show running-config interface TenGigabitEthernet 0/0/1
Building configuration...

Current configuration : 317 bytes
!
interface TenGigabitEthernet0/0/1
 description c6807 t1/11
 mtu 9100
 ip address 192.168.0.22 255.255.255.252
 no ip redirects
 no ip proxy-arp
 ip nat inside
 ip router isis
 logging event link-status
 load-interval 30
 carrier-delay msec 0
 bfd interval 300 min_rx 300 multiplier 3
 no bfd echo
 cdp enable
end

Border (C6807 – After Border Handoff Provisioning):

C6807#show running-config interface TenGigabitEthernet 1/11
Building configuration...

Current configuration : 148 bytes
!
interface TenGigabitEthernet1/11
 description ASR1K TE0/0/1 underaly
 switchport
 switchport mode trunk
 mtu 9216
 logging event link-status
end

C6807#show ip int brief | include Vlan3
Vlan3001     192.168.254.1    YES manual up    up
Vlan3002     192.168.254.5    YES manual up    up
Vlan3003     192.168.254.9    YES manual up    up

interface Vlan3001
 description vrf interface to External router
 ip address 192.168.254.1 255.255.255.252
 no ip redirects
 ip route-cache same-interface
 platform lisp-enable

interface Vlan3002
 description vrf interface to External router
 vrf forwarding Campus
 ip address 192.168.254.5 255.255.255.252
 no ip redirects
 ip route-cache same-interface
 platform lisp-enable

interface Vlan3003
 description vrf interface to External router
 vrf forwarding Guest
 ip address 192.168.254.9 255.255.255.252
 no ip redirects
 ip route-cache same-interface
 platform lisp-enable

Notice that in the initial underlay the ISIS configuration is applied to the Border and Fusion router. During
the border external handoff configuration in DNA Center, we specified TenGigabitEthernet1/11 as the
external interface and selected the following Virtual Networks to be extended to the Fusion Router via
VRF-Lite: Campus, Guest, and INFRA_VN. This caused DNA Center to change the configuration.



The net result of this change is that DNA Center loses connectivity to the Fusion Router and the
WLC. To verify, go back to the DNA Center Inventory; you will see that both devices are
Unreachable.

To correct this, advertise the DNA Center and Jumphost subnets and the Control Plane loopback
via BGP, to allow the Fusion Router to learn these management addresses and thus restore
connectivity.

Fabric Border (C6807)

conf t

router bgp 65002


address-family ipv4
network 172.26.204.0 mask 255.255.255.0
network 172.26.205.0 mask 255.255.255.0
network 192.168.100.100 mask 255.255.255.255

Note: 172.26.204.0/24 is advertised to allow the lab Jumphost to access the WLC 5520 GUI.
172.26.205.0/24 is advertised to allow external devices to respond to DNA Center.
192.168.100.100/32 is the Control Plane Loopback0; this is a host route that the C6807 learned via ISIS.

After a few seconds you will see the fabric underlay learned on the Fusion router.

Before advertising the routes:

Fusion-ASR#sho ip route
<..snip..>
Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C        10.172.99.0/24 is directly connected, Loopback99
L        10.172.99.11/32 is directly connected, Loopback99
C        10.172.120.0/24 is directly connected, GigabitEthernet0/0/5
L        10.172.120.254/32 is directly connected, GigabitEthernet0/0/5
      101.0.0.0/32 is subnetted, 1 subnets
C        101.101.101.101 is directly connected, Loopback1
      192.168.0.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.0.20/30 is directly connected, TenGigabitEthernet0/0/1
L        192.168.0.22/32 is directly connected, TenGigabitEthernet0/0/1
      192.168.105.0/32 is subnetted, 1 subnets
C        192.168.105.1 is directly connected, Loopback0
      192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.254.0/30 is directly connected, TenGigabitEthernet0/0/1.3001
L        192.168.254.2/32 is directly connected, TenGigabitEthernet0/0/1.3001

After advertising the routes:

Fusion-ASR#sho ip route
<..snip..>
Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C        10.172.99.0/24 is directly connected, Loopback99
L        10.172.99.11/32 is directly connected, Loopback99
C        10.172.120.0/24 is directly connected, GigabitEthernet0/0/5
L        10.172.120.254/32 is directly connected, GigabitEthernet0/0/5
      101.0.0.0/32 is subnetted, 1 subnets
C        101.101.101.101 is directly connected, Loopback1
      172.26.0.0/24 is subnetted, 2 subnets
B        172.26.204.0 [20/0] via 192.168.254.1, 00:00:51
B        172.26.205.0 [20/0] via 192.168.254.1, 00:00:21
      192.168.0.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.0.20/30 is directly connected, TenGigabitEthernet0/0/1
L        192.168.0.22/32 is directly connected, TenGigabitEthernet0/0/1
      192.168.100.0/32 is subnetted, 1 subnets
B        192.168.100.100 [20/20] via 192.168.254.1, 00:00:21
      192.168.105.0/32 is subnetted, 1 subnets
C        192.168.105.1 is directly connected, Loopback0
      192.168.254.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.254.0/30 is directly connected, TenGigabitEthernet0/0/1.3001
L        192.168.254.2/32 is directly connected, TenGigabitEthernet0/0/1.3001
Fusion-ASR#



Now you will notice that the Fusion Router and WLC are reachable and managed by DNA Center.
DNA Center automatically re-syncs every 25 minutes by default. If your Inventory has not
updated, you can select the Fusion-ASR and WLC 5520 devices and manually re-sync them using
the Actions pulldown above the table.

Please save the Border, Control Plane, and Edge node configurations for future
comparisons using the filename “bordercomplete”.

C3850-1#copy run bordercomplete



SD-Access Wireless Validation Overview

Wireless Validation Work Flow


This guide builds upon the Wired and Border labs. Before SD-Wireless can be installed, the SD-
Access fabric must be deployed and external connectivity must exist for shared services, i.e. DHCP,
DNS, and IP management (DDI), and for the wireless LAN controller (WLC).

To recap, at this point in the lab the underlay devices (switches, routers, and wireless LAN
controllers) have been discovered, sites have been created, the discovered devices have been
provisioned to a site, and the fabric has been provisioned during the various DNA Center work flows.
Wired end hosts have been successfully onboarded, and IP connectivity between Virtual Networks
and to shared services has been validated.

To add SD-Wireless to the SD-Access environment:


1. Start by setting up any additional networking on the Border, Control Plane and Fusion router
to support the Access Points (AP) and Clients as they first connect to the network.

Note: These are manual steps today, which will be automated in future releases

2. From DNA Center, add the WLC to the SD-Access fabric domain.

3. Within Design, create a wireless network which will configure wireless SSIDs mapped to a
site as part of a network profile.

4. Use the Provision app to associate the WLC to a site and provision it with site specific
settings.

5. From within the Provision’s Fabric Topology, add the WLC device to the fabric domain.

6. Assign IP address pools for AP and wireless clients.

7. Associate the SSID to the wireless client IP Pool.

8. Complete the process by provisioning the Access Points: add them to a site and provision
them to make them Fabric enabled.

9. Validate a wireless client can connect to the network and reach clients within the Fabric as
well as shared resources outside the Fabric.



SD-Access Wireless Architecture

SD-Wireless Wireless LAN Controller (WLC)


Fabric-enabled WLCs are LISP enabled to integrate with the Fabric Control Plane. The WLC
continues to be the wireless control plane, responsible for AP image/config, Radio Resource
Management, and client session management as well as roaming functions. The wireless data plane
is now integrated into the fabric via the access points.

SD-Wireless Highlights:

• SD-Wireless requires Layer-2 LISP support, as wireless client MAC addresses are used as EIDs
• The WLC registers wireless client MAC addresses, with SGT and VN information, with the Fabric
Control Plane
• The Virtual Network information is mapped to a VLAN on the Fabric Edge nodes
• The WLC is responsible for roaming, and updates the Fabric Control Plane as roams occur
• A fabric-enabled WLC must be physically located at the same site as its APs, as the
maximum supported latency is 20 ms

SD-Wireless Access Points (APs)


Fabric-enabled APs are local mode APs which integrate with the VXLAN data plane. An AP must be
directly connected to a Fabric Edge switch.
For fabric-enabled SSIDs, APs forward client traffic, based on the forwarding table programmed by the
WLC, to the Fabric Edge node that connects the AP. The AP is responsible for converting 802.11
traffic to 802.3 and encapsulating it into VXLAN, encoding the VN and SGT of the client.

SD-Wireless Communications

• WLC ↔ AP: WLC and AP communication is via CAPWAP, the same as with CUWN

• AP ↔ Switch: Data traffic is switched between the AP and the Edge switch over VXLAN

• WLC ↔ CP: The wireless LAN controller communicates with the CP using LISP

• DNA Center ↔ WLC: In the initial release, DNA Center uses SSH to configure the WLC

• Switch ↔ CP: The fabric-enabled switches communicate with the CP using LISP



SD-Wireless Considerations

Before you proceed with the lab, there are some important considerations to keep in mind:

• The AP is part of the Fabric Overlay.
• The AP belongs to INFRA_VRF, which is essentially the Global Routing Table.
• The WLC is connected directly to the Border or outside the Fabric.
• The WLC must reside in the Global Routing Table.
• Inter-VRF leaking is not required for APs to join the WLC.
• The WLC route should be present in each Fabric node’s global routing table: the route to the WLC’s
IP address should be redistributed into the underlay IGP. Typically the Border is
responsible for redistributing the WLC route into the underlay.
• The interface port on the Fabric Edge where the AP is connected should not have any 802.1x or other
authentication policies applied. For example, in this lab you have provisioned Closed
Authentication, so you need to manually assign the IP pool defined for INFRA_VN with No
Authentication for the AP to connect to the WLC.

AP to WLC Traffic Flow

• An admin user configures a pool in DNA Center dedicated to APs. The AP is plugged in
and powers up. The Fabric Edge discovers the AP via CDP and applies an auto-smartport
macro to the AP interface, which maps the interface to the right VLAN (the VLAN mapped to
the INFRA_VN IP Pool).
• The AP gets an IP address from that IP Pool.
• Specific to this lab, the Border will redistribute the WLC route from BGP to ISIS, and the FEs will
learn the WLC route in the GRT.
• The AP initiates a CAPWAP tunnel to the WLC using DHCP Option 43.
• AP-to-WLC CAPWAP traffic travels in the underlay.
• When the FE receives a CAPWAP packet from the AP, the FE finds a match in the RIB and the
packet is forwarded with no VXLAN encapsulation.
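
Once the Border redistribution configured in the Border Handoff for Wireless Networks section below is in place, a sanity check on an edge node might look like the sketch here (the output is truncated and illustrative):

C3850-2#show ip route 10.172.120.2
Routing entry for 10.172.120.0/24
  Known via "isis", distance 115, ...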

WLC to AP Traffic Flow

• The WLC-to-AP CAPWAP traffic travels in the overlay.
• The WLC registers the AP subnet in the CP.
• The Border exports the AP local EID space from the CP to the RIB, and also imports the AP RIB
routes into the LISP map-cache.
• When the Border receives a CAPWAP packet from the WLC, a LISP lookup happens and the
traffic is sent to the FE with VXLAN encapsulation.



Border Handoff for Wireless Networks
For SD-Wireless we need two IP Pools. This is not inclusive of the WLC subnet (10.172.120.0/24),
which is already pre-configured in this lab. The first pool is used for the APs. This can be a relatively
large IP Pool, as all fabric-enabled APs need to reside in it; for the lab we will use
172.16.112.0/24 (in production we could see larger subnets). The second is an IP Pool for wireless
clients, used for the SSID (172.16.222.0/24). While SD-Access supports wired and
wireless clients in the same IP Pool, for this lab we will give wireless clients their own IP Pool to keep
the design as simple and clear as possible.

The AP and wireless client IP Pools need access to the shared services, as with the
wired hosts in the Border lab. Since SD-Access has already been deployed with a BGP handoff to a
Fusion router, it is trivial to add the necessary route leaking to allow the wireless networks to reach
shared resources and other destinations external to the fabric.

In the lab we will use the IP Pool 172.16.112.0/24 for APs in the INFRA_VN and the 172.16.222.0/24
IP Pool in the Campus Virtual Network for wireless clients. Again, the use of separate IP Pools for
wireless is for lab purposes; technically the wired and wireless clients can share the same IP Pool.
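
On the Fusion side, one plausible way to leak the new wireless client pool is a sketch only, under the assumption that the campus_networks prefix list and route-maps from the Border lab are reused (the AP pool lives in INFRA_VN, i.e. the GRT, and needs no leaking); check whether your lab guide covers this step elsewhere before applying it:

Fusion-ASR:

ip prefix-list campus_networks seq 15 permit 172.16.222.0/24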

Distribute the AP and wireless client networks into BGP for the Fusion router to learn. Also
redistribute the DHCP and WLC networks into the ISIS underlay using route-maps.

Border

C6807:

conf t

router bgp 65002
 address-family ipv4
  network 172.16.112.254 mask 255.255.255.255   ! Distribute AP network in BGP
 address-family ipv4 vrf Campus
  network 172.16.222.254 mask 255.255.255.255   ! Distribute Wireless network in BGP

ip prefix-list shared_networks seq 5 permit 10.172.99.0/24
ip prefix-list shared_networks seq 10 permit 10.172.120.0/24

route-map global_shared_net permit 10
 match ip address prefix-list shared_networks

router isis
 redistribute bgp 65002 route-map global_shared_net metric-type external   ! Redistribute DHCP and WLC networks in the Underlay



Verify the Border configuration additions. Ensure the commands configured above are present in the output below.

C6807#show run | sec bgp


<..snip..>
router bgp 65002
bgp router-id interface Loopback0
bgp log-neighbor-changes
neighbor 192.168.100.100 remote-as 65002
neighbor 192.168.100.100 update-source Loopback0
neighbor 192.168.254.2 remote-as 65001
neighbor 192.168.254.2 update-source Vlan3001
!
address-family ipv4
network 172.16.112.254 mask 255.255.255.255
network 172.26.204.0 mask 255.255.255.0
network 172.26.205.0 mask 255.255.255.0
network 192.168.100.100 mask 255.255.255.255
neighbor 192.168.100.100 activate
neighbor 192.168.100.100 weight 65535
neighbor 192.168.254.2 activate
neighbor 192.168.254.2 weight 65535
exit-address-family
!
address-family vpnv4
neighbor 192.168.100.100 activate
neighbor 192.168.100.100 send-community both
exit-address-family
!
address-family ipv4 vrf Campus
network 172.16.101.254 mask 255.255.255.255
network 172.16.201.254 mask 255.255.255.255
network 172.16.222.254 mask 255.255.255.255
neighbor 192.168.254.10 remote-as 65001
neighbor 192.168.254.10 update-source Vlan3003
neighbor 192.168.254.10 activate
neighbor 192.168.254.10 weight 65535
exit-address-family

C6807#show run | inc shared


description Pod1-shared-svc
redistribute bgp 65002 route-map global_shared_net metric-type external
ip prefix-list shared_networks seq 5 permit 10.172.99.0/24
ip prefix-list shared_networks seq 10 permit 10.172.120.0/24
route-map global_shared_net permit 10
match ip address prefix-list shared_networks

C6807#show run | sec isis


<..snip..>
router isis
net 49.0000.0000.0100.0001.00
is-type level-2-only
domain-password ciscodna
ispf level-2
metric-style transition
log-adjacency-changes
redistribute bgp 65002 route-map global_shared_net metric-type external
bfd all-interfaces
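
As an optional, hedged spot check on the Border itself, confirm the AP and wireless client networks
were actually injected into BGP (the VRF name Campus and the prefixes below match this lab's
addressing):

C6807#show ip bgp 172.16.112.254
C6807#show ip bgp vpnv4 vrf Campus 172.16.222.254

Both lookups should return a locally sourced network entry; an empty result usually means the
network statements were typed under the wrong address-family.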



Use DNA Center Design to Provision IP Pools

Creating IP Pools
Configure IP pools for both the AP and wireless client subnets as per your network addressing
scheme. Additionally, ensure DHCP option 43 is defined in the AP IP Pool scope so that the
APs can register to the WLC.
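
For reference, below is a minimal sketch of what the AP scope could look like if it were hosted on a
Cisco IOS DHCP server; in this lab the DHCP server (10.172.99.11) is external, so the equivalent
Option 43 setting is made there, and the pool name AP_POOL is purely illustrative. Option 43 for
Cisco lightweight APs is a TLV: type 0xf1, length 0x04 per controller, then each WLC management
IP in hex (10.172.120.2 = 0a.ac.78.02):

ip dhcp pool AP_POOL
 network 172.16.112.0 255.255.255.0
 default-router 172.16.112.254
 dns-server 10.172.99.11
 ! f1 = type 241, 04 = 4 bytes (one WLC), 0aac7802 = 10.172.120.2
 option 43 hex f104.0aac.7802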

Using the Design menu bar, select Network Settings -> IP Address Pools and define the IP
Address Pools we require.

APs: 172.16.112.0/24, GW: 172.16.112.254, assign DHCP and DNS.

Wifi: 172.16.222.0/24, GW: 172.16.222.254, assign DHCP and DNS.

When complete, the DNA Center IP Address Pools for Global should look very close to the following
example:



Creating Wireless SSIDs
Creating an SSID is a two-step process. First, create a wireless network by choosing the type of
network and assigning a security type; second, create a Profile. Under this Profile, add the sites
where you want this SSID to broadcast. See the steps below to follow the workflow.

Under Network Settings, click on the Wireless tab.

Click on Add and Enterprise SSID.

Create SSID “SDAPod<P>” where <P> is your Pod number. Verify Type is Voice & Data and
Security is WPA2 Enterprise with the Adaptive option. Click Next to continue.



Create a wireless Profile “SDAlab”. Under this Profile, add the sites where this SSID will
broadcast (SJC05). Click Finish to build the profile.

Note: An SD-Wireless SSID should be deployed site-wide, as there is no seamless roaming between fabric and
non-fabric SSIDs.

Once the SSID is created, it is shown with its security type and the associated Profile.

Managing Network Profiles


Still under Design, click on the Network Profiles tab. You will see the Profile (created above)
populated here. Click on Assign Site and select SJC05.
Wireless LAN Controller Provisioning
From the home menu select Provision. From here you can begin the provisioning process by
selecting the Wireless LAN Controller and associating it to the site previously created during the
Design phase (SJC05).

Begin by selecting the WLC by checking the box to the left of it. Once selected, choose
the action “Provision”.

Assign it to the location where the WLC is physically located; in this lab use SJC05.
Click Next to continue.



Once the WLC is assigned a site, verify the Managed AP location SJC05, add the first
floor as well (SJC05/1), and click Next.

Note: If you fail to add a floor at this step, the SD-Wireless deployment will fail when the AP is onboarded, as the
AP is onboarded at the floor level. Thus a WLC which manages that floor must be deployed first.



Click through the Advanced Configuration screen, as no templates are applied to the WLC
in this lab.

Review the configuration – System Details, Global Settings for AAA, DHCP and DNS servers,
SSID, and Managed Sites – that will be pushed as part of WLC provisioning from DNA Center.
Click Deploy, and then Run Now in the following dialog to make the changes.



Access the WLC to Verify Site Provisioning
Use a web browser to log into the WLC at http://10.172.120.2
Username: cisco
Password: cisco

Click “Advanced” on the top right of the main page to get to the screens used to provision
the SD-Wireless solution.

First review the configuration DNA Center automated when the WLC was provisioned to the
site. Verify the Security -> RADIUS -> Authentication screen shows the ISE server configured.



Also confirm DNA Center deployed the WLAN wireless profile on the WLANs -> WLANs page. In
this lab we deploy an SSID to a single site, and thus there is a single WLAN ID of 17. When you
deploy an SD-Wireless SSID to multiple sites, a corresponding WLAN ID (>17) is
created for each site. Notice that the WLAN ID is administratively disabled initially.

Note: The Profile Name is the name given in DNA Center, followed by “_F_” and a unique identifier. This
ensures DNA Center can uniquely track profiles in large scale environments.

Adding the WLC to the Fabric


Return to DNA Center and navigate to the Fabric Topology page.
Provision -> Fabric -> “University”.

Locate the WLC-5520 and select “Add to Fabric”. Click Save after the wireless controller
is added to the Fabric, then Run Now and Apply to provision.

The WLC will change color from grey to blue once it is added to the Fabric, as shown in the
example below.
Verify the CLI changes DNA Center has automated when adding the WLC to the Fabric.

Control Plane (C3850-1)


C3850-1#show archive config differences flash:bordercomplete system:running-config
!Contextual Config Diffs:
router lisp
+locator-set WLC
+10.172.120.2
+exit-locator-set
+map-server session passive-open WLC

Note there are no changes to the Border or Edge nodes when provisioning a fabric-enabled WLC.
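
As an optional, hedged check from the Control Plane CLI, the WLC's passive-open map-server
session can be inspected once the controller has joined the fabric:

C3850-1#show lisp session

An established session from 10.172.120.2 indicates the WLC is registered with the Control Plane.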



AP and Wireless Client Host Onboarding
In this section we will provision the AP and Wireless client IP Pools so the fabric will support the new
endpoints as they join the network.

Click on “Host Onboarding” at the top of the screen to start enabling the IP pools for
AP and client devices.

Click on the INFRA_VN Virtual Network, and a window opens with the configured address pools.
Select the APs address pool (172.16.112.0/24). Click Update to complete the
provisioning. In the subsequent dialog box, select Run Now and click Apply.

Note: For the INFRA_VN, AP Provision Pool and Layer-2 Extension are enabled by default for all Pools. This
simplifies the AP Pool deployment and ensures the correct settings for AP devices.

The AP Provision Pool setting automatically pushes a configuration macro to all Fabric Edge switches
to automate the bring-up of APs. When an AP connects to the switch, it is recognized through
CDP (Cisco Discovery Protocol), at which time a macro is applied to the port, automatically
assigning the physical port to the right VLAN. The CDP-triggered macro is pushed only if the port is
configured with the “No Authentication” template, so make sure all AP connection ports are
configured with “No Authentication”.

The Layer-2 Extension setting enables L2 LISP and associates an L2 VNID to this pool so that
Ethernet frames can be carried end to end within the fabric.
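
If you later want to see the Layer-2 Extension in action on a Fabric Edge, a hedged check is to
query the L2 LISP Ethernet database once an endpoint has attached; the instance-ids (8188 for the
AP pool, 8189 for the Wifi pool) are taken from the configuration diffs shown later in this section:

C3850-2#show lisp instance-id 8188 ethernet database

Attached MAC addresses should appear as database-mapping entries against the switch's RLOC.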

Next, provision an IP Pool specifically for wireless clients. Open the Campus Virtual
Network, apply the Wifi pool 172.16.222.0/24, and select traffic type Data. Click Update
and then Run Now, Apply in the following dialog.



Verify the CLI changes DNA Center has automated when assigning the IP Pools to the VNs.

Note: The AP Pool provisioning corresponds to the 172.16.112.x / INFRA_VN lines below (instance-ids 4097
and 8188); the wireless client pool corresponds to the 172.16.222.x / Campus lines (instance-ids 4099 and 8189).

Border (C6807)
C6807#show archive config differences disk0:bordercomplete system:running-config
!Contextual Config Diffs:
+interface Loopback1024
+description Loopback Border
+ip address 172.16.112.254 255.255.255.255
+interface Loopback1025
+description Loopback Border
+vrf forwarding Campus
+ip address 172.16.222.254 255.255.255.255
router lisp
eid-table default instance-id 4097
+map-cache 172.16.112.0/24 map-request
router bgp 65002
address-family ipv4
+network 172.16.112.254 mask 255.255.255.255
address-family ipv4 vrf Campus
+network 172.16.222.254 mask 255.255.255.255

Control Plane (C3850-1)


C3850-1#show archive config differences flash:bordercomplete system:running-config
!Contextual Config Diffs:
+interface Loopback1024
+description Loopback Map Server
+ip address 172.16.112.254 255.255.255.255
+interface Loopback1025
+description Loopback Map Server
+vrf forwarding Campus
+ip address 172.16.222.254 255.255.255.255
router lisp
+locator-set WLC
+10.172.120.2
+exit-locator-set
+map-server session passive-open WLC
site site_uci
+eid-record instance-id 4097 172.16.112.0/24 accept-more-specifics
+eid-record instance-id 4099 172.16.222.0/24 accept-more-specifics
+eid-record instance-id 8188 any-mac
+eid-record instance-id 8189 any-mac
router bgp 65002
address-family ipv4
+network 172.16.112.254 mask 255.255.255.255
+aggregate-address 172.16.112.0 255.255.255.0 summary-only
address-family ipv4 vrf Campus
+network 172.16.222.254 mask 255.255.255.255
+aggregate-address 172.16.222.0 255.255.255.0 summary-only



Edge (C3850-2 or C3850-3)
C3850-2#show archive config differences flash:bordercomplete system:running-config
!Contextual Config Diffs:

+ip dhcp snooping vlan 1021-1025


+vlan 1024
+name 172_16_112_0-INFRA_VN
+vlan 1025
+name 172_16_222_0-Campus
+interface Vlan1024
+description Configured from apic-em
+mac-address 0000.0c9f.f45f
+ip address 172.16.112.254 255.255.255.0
+ip helper-address 10.172.99.11
+no ip redirects
+ip route-cache same-interface
+no lisp mobility liveness test
+lisp mobility 172_16_112_0-INFRA_VN
+interface Vlan1025
+description Configured from apic-em
+mac-address 0000.0c9f.f460
+vrf forwarding Campus
+ip address 172.16.222.254 255.255.255.0
+ip helper-address 10.172.99.11
+no ip redirects
+ip route-cache same-interface
+no lisp mobility liveness test
+lisp mobility 172_16_222_0-Campus
router lisp
instance-id 4097
+dynamic-eid 172_16_112_0-INFRA_VN
+database-mapping 172.16.112.0/24 locator-set rloc_2e3022a3-a712-483d-ac45-
6265a6224172
+exit-dynamic-eid
instance-id 4099
+dynamic-eid 172_16_222_0-Campus
+database-mapping 172.16.222.0/24 locator-set rloc_2e3022a3-a712-483d-ac45-
6265a6224172
+exit-dynamic-eid
+instance-id 8188
+remote-rloc-probe on-route-change
+service ethernet
+eid-table vlan 1024
+database-mapping mac locator-set rloc_2e3022a3-a712-483d-ac45-6265a6224172
+exit-service-ethernet
+exit-instance-id
+instance-id 8189
+remote-rloc-probe on-route-change
+service ethernet
+eid-table vlan 1025
+database-mapping mac locator-set rloc_2e3022a3-a712-483d-ac45-6265a6224172
+exit-service-ethernet
+exit-instance-id
+cts role-based enforcement vlan-list 1021-1025
-ip dhcp snooping vlan 1021-1023
-cts role-based enforcement vlan-list 1021-1023
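
Once an AP is actually connected to an edge port, a couple of hedged checks on that Fabric Edge
can confirm onboarding against the configuration above: the AP should hold a DHCP snooping
binding in VLAN 1024 and be registered as a dynamic EID in instance 4097.

C3850-2#show ip dhcp snooping binding
C3850-2#show lisp instance-id 4097 ipv4 database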

Below the Virtual Networks section is the Wireless SSID section of the Host Onboarding
page. Verify the Pod-specific SSID is displayed, select the Wifi pool
(Campus: 172.16.222.0), click Save, and then Run Now in the following dialog.
Since the default authentication policy is Dot1xClosed, we need to manually change the switch
port the AP is connected to before the AP can connect. Scroll down the Host Onboarding page
and select C3850-3 port g1/0/20. Change the Address Pool to 172_16_112_0-INFRA_VN
and change the Authentication to No Authentication. Be sure to click Save and Run Now
when done.

After saving the change, the screen updates and the port is shown as changed from the
default profile, making it simple to visualize non-default ports and their different configuration.

Before you can proceed with the AP provisioning, you will need to review the interface configurations
where the APs are connected. In this lab, we have two APs connected, one on each Fabric Edge, as
shown below. Notice the Authentication templates provisioned by DNA Center as part of the steps
you did in the Wired lab.
Default Port configuration (C3850-3: Gig 1/0/20)

C3850-3#show run interface Gi1/0/20
Building configuration...

Current configuration : 604 bytes
!
interface GigabitEthernet1/0/20
 description AP2802i
 switchport mode access
 switchport voice vlan 4000
 authentication control-direction in
 authentication event server dead action authorize vlan 3999
 authentication event server dead action authorize voice
 authentication host-mode multi-auth
 authentication order dot1x mab
 authentication priority dot1x mab
 authentication port-control auto
 authentication periodic
 authentication timer reauthenticate server
 authentication timer inactivity server dynamic
 mab
 dot1x pae authenticator
 dot1x timeout tx-period 10
 spanning-tree portfast
end

After Manual Port Change (C3850-3: Gig 1/0/20)

C3850-3#show running-config interface Gi1/0/20
Building configuration...

Current configuration : 155 bytes
!
interface GigabitEthernet1/0/20
 description AP2802i
 switchport access vlan 1024
 switchport mode access
 load-interval 30
 spanning-tree portfast
end
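
With the port moved to No Authentication, a hedged way to confirm the edge actually sees the AP
(CDP discovery is what triggers the automatic macro) is:

C3850-3#show cdp neighbors GigabitEthernet1/0/20 detail

The output should identify the AP2802i by platform and, once DHCP completes, show the address
it obtained from the AP pool.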



Provisioning the AP
Once the AP boots, it will obtain an IP address from DHCP and, by way of Option 43, the WLC IP
address. It will then establish a CAPWAP connection to the WLC and receive its Fabric
provisioning. After rebooting again, it will be seen in the Provision > Devices >
Inventory page.

Note: You might run into scenarios where none of the APs are discovered even after some time. In this
scenario, select the WLC and choose Resync from the Actions menu to manually force
a retry.

Select OK for the configuration dialog



You will notice that the Resync is in progress. Please be patient while DNA Center completes the resync process.

After the Resync completes, the APs will be visible on the Fabric Devices page.

Select the AP from the Device list and “Add to Site” as shown in the example below.

Assign the AP to the site floor SJC05/1.

Click Next and choose the Low Radio Frequency profile in the Configure screen.



Review the summary, click Deploy, and then Run Now, Apply.

Confirm to reboot the AP.

Note: At this point the AP is rebooting to become Fabric-aware.

After a few minutes, the AP will be shown as Successfully Provisioned in the Devices table.
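
If you prefer the WLC CLI over the GUI for this check, a couple of hedged AireOS commands
confirm the AP has joined and, on fabric-capable releases, its fabric status:

(Cisco Controller) >show ap summary
(Cisco Controller) >show fabric summary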



As part of AP provisioning, some configuration is pushed to the WLC.
Due to the reuse of these lab pods, you could see multiple AP Groups in the next step. To
identify the correct AP Group for your lab session, navigate to the WIRELESS -> ALL APs
page and click on the one AP listed.

From this tabbed screen, select the Advanced tab and note the AP Group Name displayed.

Proceed to the WLANs -> Advanced -> AP Groups page and select the AP Group name
identified in the previous step.



Select the APs tab to view the APs associated to this group. You can cross-reference the
AP name shown with the one in DNA Center’s Provision -> Device list if necessary.

Returning to the WLANs page, the WLAN ID for the SD-Wireless SSID will now show
Admin Status Enabled.

Edit the security settings under WLANs and verify the AAA server status.



From the drop-down options, select the ISE IP address for both the Authentication and Accounting
Servers.



Wireless Validation

To verify wireless host connectivity, you will use the vSphere client to connect to your pod’s ESX
server and launch a console for the wireless host VM. From the console you will configure the client
to connect to the network, first ping the default gateway, and then the other wired end hosts.

From the jump host, use the vSphere client to connect to your ESXi host.
(Instructor / Dna@labrocks)

Expand the ESX host for your Pod to display all the VMs associated with the Pod.
Pods 1-10 are servers .71 – .80; Pods 11-20 are servers .104 – .113.

Here you will find several hosts. Please verify the P-## matches your Pod number.

Power on the PC-Wireless VM by selecting it and choosing Power On.

Open a console window to access the PC-Wireless VM by right-clicking it.



Enable the Wireless Client to Connect
Using the vSphere client, select your Pod’s PC-Wireless VM, power it on, and open a
console. Log in with the credentials: (admin / dnalab1)

Once open, click on the wireless networking icon in the system tray and open the
available SSID panel. Click on the SSID for your Pod and connect.

When prompted for credentials, use either faith or fred to log in:

User Group   Username   Password
Employee     emily      dnalab1
Employee     ethan      dnalab1
Faculty      faith      dnalab1
Faculty      fred       dnalab1
Student      sam        dnalab1
Student      stacy      dnalab1



Once you have completed this step, use the command window to ping the default
gateway for PC-Wireless. Use “ipconfig” to verify an IP address from the Wifi Pool was
assigned.

From the command window, ping the default gateway (172.16.222.254).

You can also ping the wired hosts PC-Wired2 (172.16.101.100) and PC-Wired3
(172.16.201.100).
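
A minimal sketch of the whole validation sequence from the PC-Wireless command window, using
this lab's addresses:

ipconfig
ping 172.16.222.254
ping 172.16.101.100
ping 172.16.201.100

The ipconfig output should report an address in 172.16.222.0/24 before the pings are attempted.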



Clients are now ready to connect to the fabric-enabled wireless SSID. You can go to your WLC and
check connected client details using the MONITOR -> Clients page.
You will see the user you connected as under User Name, and you can use “ipconfig /all” in the
PC-Wireless command window to verify the MAC addresses are the same.

Verify the client Fabric status and the SGT tag pushed to the WLC from ISE based on the
authorization rule by clicking on the MAC address and scrolling down the page to the Fabric
Properties and Security Information sections.
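
Equivalently, a hedged check from the AireOS CLI, substituting the client MAC address reported by
“ipconfig /all”:

(Cisco Controller) >show client detail <client-mac>

On fabric-capable releases, the output includes the client's Fabric status and the applied SGT.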



This completes the SD-Access Lab.

Great Job!!

We sincerely hope you have enjoyed the experience with SD-Access and DNA Center.

Please let us know what you thought of it!
