CCIE DataCenter Labs v3.0 - Deploy - Second-Release v1.0

www.passdatacenterlabs.com LAB 1 RELEASE Lab 1: 20-Aug-2021

CCIE Data Center v3.0 Real Labs


Deploy Module

www.chinesedumps.com 1 www.fastracklabs.com

Lab Workbook Policy

1. We strongly discourage sharing of the workbook; hence the workbooks are mapped to your laptop/desktop
MAC address. If one tries to open the workbook on a desktop or laptop other than the one with the
registered MAC address, the account will be locked and we will not unlock it for any reason.

2. The workbook does not have print access; kindly do not request that print access be enabled. However,
you will have perpetual access to the workbooks you have purchased.

3. Free updates are provided for up to 120 days from the date of purchase; after that, you need to renew
your account to access the latest updates. You will, however, continue to have access to your existing
workbooks. If you pass the lab within 120 days, you are not eligible for further updates.

4. If you wish to renew your subscription/account, you need to renew within 120 days, before the account
expires. After 120 days you can still renew your account, but the renewal will be treated as a new
purchase. We therefore encourage you to renew within 120 days of the purchase.

5. The renewal cost is 999 USD if you pay within 120 days; if you fail to renew in time, the cost will be
equivalent to a new purchase. (The renewal price can be changed at any time without informing the
client.)

6. Every workbook is uniquely identified for each user with hidden words. If you share your workbook
with others and the system detects the share, the account will be banned and we will not entertain any
explanation of any sort.

7. For any queries regarding questions/solutions, you can contact us by email at
[email protected] or on Skype at [email protected]. Response time for any query
is 24 hours.

8. We require your Cisco ID and official email ID for security purposes. We do not sell without these
details. We perform background verification of the details provided, so please give us your correct Cisco
ID and official email ID.

9. The workbooks are in secured PDF format and are delivered via email within 24 hours after payment is
received.

10. A license is provided for one device only, and we do not issue the license again if the device crashes
or company security policies prevent its use. Please install the license on your device cautiously, as it
will not be provided again.

www.chinesedumps.com 2 www.fastracklabs.com
www.passdatacenterlabs.com LAB 1 RELEASE Lab 1:20-Aug-2021

11. We support devices running Windows, macOS, Android and iOS only.

12. We do not provide refunds under any circumstances once the product is sold.

13. This policy is in effect from 23 November 2016 and applies with immediate effect to new clients and
new renewals. Old clients will continue under the old policy until their accounts expire.

14. If there is an update, you will receive it automatically at your registered email ID.

15. The Design Module will be provided only 3 days before the CCIE exam.

16. For any future updates, you can check our 'updates' page.

17. Labs are always published in phases. For example, if there is a new lab, we publish it as First, Second,
Third ... up to the Final release.

18. Clients who have purchased our workbooks and services and wish to attempt the lab need to consult
our experts before their CCIE lab.


CCIE Deploy Guidelines

Before starting, please read the below guidelines:

1. At the request of CCITea’s executive team, you are replicating CCITea’s new Collaboration
solution to make sure services, features and resiliency all work as designed.
2. Implementation knowledge and troubleshooting techniques are expected.
3. Read the entire exam and confirm the working order of all devices first. During the exam, if any device
is locked or inaccessible for any reason, you must recover it. When finishing the exam, ensure
that all devices are accessible to the grading proctor. Any device that is not accessible for grading
cannot be marked and will cost you substantial points.
4. Points are awarded for working configurations only. Please verify all your work.
5. Do not change configurations on interfaces marked “DO NOT CHANGE”; doing so will lead to
connectivity loss which you must recover on your own.
6. All servers and PCs run on VMware, which has been thoroughly tested to support all lab
exam questions.
7. All UC appliances use the login username “administrator” and password “cciecollab”. Some
applications are preconfigured; refer to the exam questions for details. All PCs use the login username
“administrator” and password “cciecollab”.
8. All application web administration pages (CUCM, CUC, UCCX, IM&P) must be accessed from
remote desktop (RDP) sessions on the HQ PC, Site B PC, and Site C PC.
9. Any GUI access to IOS devices and modules must be initiated from the Site PCs, such as HQ PC1,
HQ PC2, Site B PC, and Site C PC.
10. Thoroughly read the “PSTN Numbers and Dialing Instructions” to understand how to initiate calls
from the PSTN to the corresponding site DID numbers.
11. Please note that some devices in the topology diagrams and numbering tables may not be part
of the exam requirements, which vary between exams.


Main Topology


Device Info:

DC1-N5K-1 : 10.1.1.61 admin/Cisco!123


DC1-N5K-2 : 10.1.1.62 admin/Cisco!123
DC1-N7K-1 : 10.1.1.71 admin/Cisco!123
DC1-N7K-2 : 10.1.1.72 admin/Cisco!123
DC1-N9K-1 : 10.1.1.74 admin/Cisco!123


1.1 Welcome

Read the relevant resources and requirements carefully and complete the configuration.

Note: There is no specific requirement.


1.2 Configure VPC

In preparation for providing uplink connectivity for the server team, you have been asked to configure
and correct two virtual port channels to the fabric interconnects:

* Use port channel ID 100 for FI A and port channel ID 200 for FI B. Use N5K-1/2 ports 1/8-9 to create the
vPC. Trunk only the required VLANs. Use interface VLAN 10 as needed.

* Provide end-to-end connectivity across the vPC. One of the devices will be elected as the primary
device, and the other will become the secondary device.

* Enable the function that allows the gateway on the local vPC device to forward traffic normally when a
Layer 3 packet with the destination MAC address of the vPC peer is sent to the local device.

* Improve the convergence time of Layer 3 traffic.

* The N5Ks are configured with loopback address 10.56.72.1 as the VXLAN source address.

*Refer to the topology diagram as required.


Solution:

N5K1
feature vpc

vpc domain 10
peer-switch
peer-keepalive destination 10.1.1.62 source 10.1.1.61 vrf management
peer-gateway
ip arp synchronize

int po10
vpc peer-link
int po100
vpc 100
int po200
vpc 200

vpc nve peer-link-vlan 10

int lo0
ip address 10.56.72.1/24 secondary

N5K2
feature vpc

vpc domain 10
peer-switch
peer-keepalive destination 10.1.1.61 source 10.1.1.62 vrf management
peer-gateway
ip arp synchronize

int po10
vpc peer-link
int po100
vpc 100
int po200
vpc 200

vpc nve peer-link-vlan 10

int lo0
ip address 10.56.72.1/24 secondary

Verification

N5K1-2
show vpc


1.3 Configure BGP

Configure BGP on the N7K and N5K switches and enable advertisement of the L2VPN EVPN address family.


Solution:

N7K1-2
router bgp 65501

neighbor 10.0.5.91 remote-as 65501


update-source lo0
address-family l2vpn evpn
send-community both
route-reflector-client

neighbor 10.0.5.0/24 remote-as 65501


update-source lo0
address-family l2vpn evpn
send-community both
route-reflector-client

N5K1-2
router bgp 65501

neighbor 10.0.7.1 remote-as 65501


update-source lo0
address-family l2vpn evpn
send-community both

neighbor 10.0.7.2 remote-as 65501


update-source lo0
address-family l2vpn evpn
send-community both

vrf TENANT1
address-family ipv4 unicast
advertise l2vpn evpn
redistribute direct route-map TAG

Verification

N7K1-2, N5K1-2
show bgp l2vpn evpn summary


1.4 Configure a redundant RP

The multicast tree is currently built through DC1-N7K-2. Configure redundancy for BUM replication
traffic by configuring a backup phantom RP candidate on DC1-N7K-1 using interface Loopback 254.
Ensure multicast is configured consistently across all the Nexus switches to support VXLAN between
DC1-N5K-1, DC1-N5K-2, DC1-N7K-1 and DC1-N7K-2.


Solution:

N7K1
ip pim rp-address 10.0.10.1 group-list 225.0.0.0/24 bidir

int lo0
ip pim sparse-mode

int lo254
no sh
ip add 10.0.10.2/28
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
ip pim sparse-mode

int e3/4
ip pim sparse-mode
int e3/5
ip pim sparse-mode
int e3/8
ip pim sparse-mode

N7K2
ip pim rp-address 10.0.10.1 group-list 225.0.0.0/24 bidir

int lo0
ip pim sparse-mode

int lo254
no sh
ip address 10.0.10.3/29
ip ospf network point-to-point
ip router ospf UNDERLAY area 0.0.0.0
ip pim sparse-mode

int e3/12
ip pim sparse-mode

int e3/13
ip pim sparse-mode

int e3/18

ip pim sparse-mode

N5K1
ip pim rp-address 10.0.10.1 group-list 225.0.0.0/24 bidir

int lo0
ip pim sparse-mode

int vlan10
ip pim sparse-mode

int e1/4
ip pim sparse-mode

int e1/12
ip pim sparse-mode

N5K2
ip pim rp-address 10.0.10.1 group-list 225.0.0.0/24 bidir

int lo0
ip pim sparse-mode

int vlan10
ip pim sparse-mode

int e1/5
ip pim sparse-mode

int e1/13
ip pim sparse-mode

Verification

N7K1-2, N5K1-2
show ip mroute
show ip pim neighbor


1.5 Configure VxLAN EVPN

As referenced at the beginning of the section, use the routed network as the underlay. Loopback0 should
be used as the source interface and EVPN should be the control plane. Ensure that all EVPN Virtual
Instances (EVIs) use a route distinguisher that contains the Loopback0 IP address, and route targets with
the value and format AS#:VNID. Do not statically configure any values. Use the following VNIs:

Network     VNID    M-Cast Group

Web (101)   10101   225.0.0.101
App (102)   10102   225.0.0.102
DB (103)    10103   225.0.0.103


Solution:

N7K1-2
system bridge-domain 101-103,2005
vni 10101-10103,50000
bridge-domain 101-103,2005
member vni 10101-10103,50000

int nve1
no shut
source-interface lo0
host-reachability protocol bgp
member vni 10101 mcast-group 225.0.0.101
member vni 10102 mcast-group 225.0.0.102
member vni 10103 mcast-group 225.0.0.103
member vni 50000 associate-vrf

vrf context TENANT1


vni 50000
rd auto
address-family ipv4 unicast
route-target both auto
route-target both auto evpn

N5K1-2
int nve1
no shut
source-interface lo0
host-reachability protocol bgp
member vni 10101 mcast-group 225.0.0.101
member vni 10102 mcast-group 225.0.0.102
member vni 10103 mcast-group 225.0.0.103
member vni 50000 associate-vrf

vrf context TENANT1


vni 50000
rd auto
address-family ipv4 unicast
route-target both auto
route-target both auto evpn


vlan 101
vn-segment 10101
vlan 102
vn-segment 10102
vlan 103
vn-segment 10103
vlan 2005
vn-segment 50000

evpn
vni 10101 l2
rd auto
route-target both auto

vni 10102 l2
rd auto
route-target both auto

vni 10103 l2
rd auto
route-target both auto

Verification

N7K1-2
show bridge-domain
show nve vni

N5K1-2
show nve vni
show nve peers


1.6 Configure VTEP and any gateway

Configure all-active first-hop gateways on the Nexus 5000s for the servers to utilize. Ensure that all
N5Ks are always active and use the first-hop gateway MAC address 20-20-00-00-10-10. Put the newly
created Layer 3 interfaces into VRF "TENANT1". The routing uses a unified tag; the value is 4082018.
Refer to the chart below for the IP addresses to use:

Network VIP Subnet


VLAN 101/VNID 10101 10.2.1.1 /24
VLAN 102/VNID 10102 10.2.2.1 /24
VLAN 103/VNID 10103 10.2.3.1 /24


Solution:

N5K1-2
fabric forwarding anycast-gateway-mac 2020.0000.1010

int vlan 101


vrf member TENANT1
no sh
mtu 9192
ip add 10.2.1.1/24 tag 4082018
fabric forwarding mode anycast-gateway

int vlan 102


vrf member TENANT1
no sh
mtu 9192
ip add 10.2.2.1/24 tag 4082018
fabric forwarding mode anycast-gateway

int vlan 103


vrf member TENANT1
no sh
mtu 9192
ip add 10.2.3.1/24 tag 4082018
fabric forwarding mode anycast-gateway

Verification

N7K1-2
show bgp l2vpn evpn


1.7 Correct the N9K Configuration

Interface loopback0 on N9K1 is down. Correct the existing configuration to ensure that the NVE function
works normally.

*Do not delete the existing configuration.


Solution:

N9K1
(BGP)
router bgp 65501
address-family l2vpn evpn

neighbor 10.0.7.1
remote-as 65501
update-source lo0
address-family l2vpn evpn
send-community
send-community extended
route-reflector-client

neighbor 10.0.7.2
remote-as 65501
update-source lo0
address-family l2vpn evpn
send-community
send-community extended
route-reflector-client

(PIM)
ip pim rp-address 10.0.10.1 group-list 225.0.0.0/24 bidir

int lo0
ip pim sparse-mode

int e1/5
ip pim sparse-mode

int e1/6
ip pim sparse-mode

(EVPN)
vlan 2005
vn-segment 50000

vrf context TENANT1


vni 50000
rd auto

address-family ipv4 unicast


route-target both auto
route-target both auto evpn

int nve1
no shut
source-interface lo0
host-reachability protocol bgp
member vni 50000 associate-vrf

Verification

N9K1
show nve vni
show nve peers
show ip mroute
show ip pim neighbor
show bgp l2vpn evpn


Section 2 UCS Infrastructure Configuration

Activity Objective:
- Finish Section 2 Practice.

Required Resources:
These are the resources and equipment required to complete this activity:

FI A-B
Chassis 5108
N5K3-4

2.1 UCS Discovery

- Discover all attached compute devices
- Connections between the chassis and the FIs should be aggregated

Step 1: On FI A/B, configure Ethernet server ports 1-4
Step 2: Configure policies
Step 3: Acknowledge the chassis
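For reference, the server-port step above can also be done from the UCS Manager CLI. This is only a sketch, assuming the server ports are ports 1-4 on fixed module 1 of each FI; the equivalent GUI path is Equipment -> Fabric Interconnect -> Ethernet Ports.

UCS-A# scope eth-server
UCS-A /eth-server # scope fabric a
UCS-A /eth-server/fabric # create interface 1 1
UCS-A /eth-server/fabric # create interface 1 2
UCS-A /eth-server/fabric # create interface 1 3
UCS-A /eth-server/fabric # create interface 1 4
UCS-A /eth-server/fabric # commit-buffer

Repeat under "scope fabric b" for FI B; discovery then proceeds once the chassis is acknowledged.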

2.2 Create VLANs

- Configure port channels from each Fabric Interconnect to N5K-1 and N5K-2
- You will also be creating the VLANs listed in the table below
- Group them in such a way that you can use them per network
- Ensure the configuration is in place to maximize the number of VLANs in the domain and to minimize
the CPU utilization caused by VLAN configuration

VLAN Name   VLAN ID   Fabric Visibility

vMotion     100       Dual
Web         101       Dual
App         102       Dual
DB          103       Dual
ESXi-MGMT   104       Dual
ACI-DCI     105       Dual

Step 1: VLAN Port Count Optimization
Step 2: On FI A/B, configure Ethernet uplink ports 24-25
Step 3: FI-A - create port channels
Step 4: FI-B - create port channels
Step 5: Create VLANs and a VLAN group
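As a hedged CLI sketch of the VLAN-creation step (the VLAN group itself is easiest to build in the GUI under LAN -> LAN Cloud -> VLAN Groups; the commands below only create the global VLANs and assume default sharing):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan vMotion 100
UCS-A /eth-uplink/vlan # exit
UCS-A /eth-uplink # create vlan Web 101
UCS-A /eth-uplink/vlan # exit
UCS-A /eth-uplink # create vlan App 102
UCS-A /eth-uplink/vlan # exit
UCS-A /eth-uplink # create vlan DB 103
UCS-A /eth-uplink/vlan # exit
UCS-A /eth-uplink # create vlan ESXi-MGMT 104
UCS-A /eth-uplink/vlan # exit
UCS-A /eth-uplink # create vlan ACI-DCI 105
UCS-A /eth-uplink/vlan # exit
UCS-A /eth-uplink # commit-buffer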

2.3 Create and Implement a Storage Profile

You're tasked with preparing the storage configuration so you can carve a virtual drive into different
LUNs and assign roles:
- Create a disk group policy called "CCIE-DG" and configure it so that it provides disk redundancy with
two disks and uses a third as a spare
- Create a profile called "CCIE-SP" and create a Boot LUN with a size of 10 GB. Configure the Boot LUN
so that it utilizes the entire available disk group

Step 1: Create the Disk Group Policy on UCSM

Step 2: Create the Storage Profile on UCSM

2.4 UCS Resource Pools

Create new resource pools for UUID, MAC, WWNN and WWPN.
Use the information in the table below.

Resource Pool Name   Pool Size

CCIE-UUID   10
CCIE-MAC    10
CCIE-WWPN   10
CCIE-WWNN   10

Step 1: Create the new resource pools on UCSM.
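As an illustration of one pool, a MAC pool can be created from the UCSM CLI as shown below. This is a sketch only; the MAC block range shown is an assumption, not a lab requirement:

UCS-A# scope org /
UCS-A /org # create mac-pool CCIE-MAC
UCS-A /org/mac-pool # create block 00:25:B5:00:00:01 00:25:B5:00:00:0A
UCS-A /org/mac-pool/block # exit
UCS-A /org/mac-pool # exit
UCS-A /org # commit-buffer

The UUID, WWPN and WWNN pools follow the same pattern with "create uuid-suffix-pool" and "create wwn-pool" (purpose port-wwn-assignment or node-wwn-assignment) respectively.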

2.5 Create vNIC Templates

Create one vNIC template on each Fabric Interconnect, named "vNIC-A" and "vNIC-B", with the following
configuration:
- Use the normal MTU
- Set no redundancy
- Make sure that any vNICs created from these templates update when you make changes to the templates
- Select the necessary VLANs
- Use the previously created resource pools and policies
- Without creating new policies, ensure that the vNICs are configured to send and receive LLDP frames
- Without creating new policies, ensure that if the FI uplinks go down, the vNICs remain in an UP state
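A rough UCSM CLI sketch of the vNIC-A template follows (hedged: the LLDP and link-state behavior comes from the preconfigured lab policies, which are not named here, so only the basic template settings are shown; the same settings can be applied in the GUI under LAN -> Policies -> vNIC Templates):

UCS-A# scope org /
UCS-A /org # create vnic-templ vNIC-A
UCS-A /org/vnic-templ # set fabric a
UCS-A /org/vnic-templ # set type updating-template
UCS-A /org/vnic-templ # set mtu 1500
UCS-A /org/vnic-templ # commit-buffer

Repeat with "set fabric b" for vNIC-B, then assign the required VLANs and the existing network control policy to each template.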

2.6 Create and Implement Service Profile Features


You're tasked with creating a service profile template "CCIE-SP-Template" and then deploying the
Service Profile to your UCS domain in preparation for the server team to install a VMware operating
system on local disks. The requirements for the Service Profile are below. Use your own naming
convention where one isn't provided to you below.
- The Service Profile Template should use your previously created resource pools.
- The Service Profile Template should require the user to acknowledge any applied profile changes.

- Service profiles cloned from the template should be dynamically linked, allowing changes to the
template to propagate to the Service Profiles.
- The service profile should leverage your previously created vNIC templates.
- Create a Local Disk Configuration Policy called "CCIE-local", and ensure the following:
- Assume there are two local disks. Configure them so that if one disk fails, the OS will still be
functional.
- Ensure that if another Service Profile is associated with a different policy, the data on the disks is
not removed.
- Without creating new policies, ensure that all vMedia transfers from the KVM are encrypted

Step 1: Check the KVM configuration

Step 2: Create the Service Profile Template on UCSM and verify the configuration
Step 3: Create the storage policy
Step 4: Create the vNICs & check
Step 5: Create & check the vHBA setup
Step 6: Check the Maintenance Policy
Step 7: Check the KVM Policy

2.7 Boot Server

The Service Profile you create from the template should be named "CCIE-SP1" and deployed to the UCS
compute node with the blade 1/1 serial number. The OS is VMware.
Step 1: Check the Service Profiles created from the template
Step 2: Change the Service Profile association

2.8 Optimize SAN Boot



You must get a blade to boot from a Fibre Channel storage array using the service profile named
"CCIE-SP2". Some of the configuration is in place, but you notice that the blade is still unable to boot.
Make any necessary changes to the Cisco UCS Manager and Cisco Nexus 5000 configurations to ensure
that the blade boots up successfully. If completed properly, the LUN should be visible in the KVM when
the VIC option ROM loads, and the blade should boot the vSphere operating system from the storage
device. Do not create any new policies; if configuration must be modified, modify only the existing
policies assigned to the profile.
CCIE-SP2 should be associated with Chassis-1/Blade-2.
primary 50:01:43:80:01:33:ff:b2
secondary 50:01:43:80:01:33:ff:b0

Storage Object       Value

Boot LUN ID          0
Fabric A zone name   DC_vsan_500
Fabric B zone name   DC_vsan_600
Zoneset names        DC_vsan_500, DC_vsan_600

Step 1: Create CCIE-SP2 from the template

Step 2: Check the FI-A/B FC ports and configure the N5K ports
DC3-N5K-3(config)# sh feature | i npiv
DC3-N5K-3(config)# vsan database
DC3-N5K-3(config-vsan-db)# vsan 500 name vsan500
DC3-N5K-3(config-vsan-db)# vsan 500 interface fc2/1
DC3-N5K-3(config-vsan-db)# vsan 500 interface fc2/2

DC3-N5K-3(config-vsan-db)# exit
DC3-N5K-3(config)# int fc2/1
DC3-N5K-3(config-if)# no sh
DC3-N5K-3(config-if)# switchport trunk mode off
DC3-N5K-3(config-if)# switchport mode F
DC3-N5K-3(config)# int fc2/2
DC3-N5K-3(config-if)# no sh

DC3-N5K-4(config)#sh feature | i npiv


DC3-N5K-4(config)# vsan database
DC3-N5K-4(config-vsan-db)# vsan 600 name vsan600
DC3-N5K-4(config-vsan-db)# vsan 600 interface fc2/1
DC3-N5K-4(config-vsan-db)# vsan 600 interface fc2/2
DC3-N5K-4(config-vsan-db)# exit
DC3-N5K-4(config)# int fc2/1
DC3-N5K-4(config-if)# no sh
DC3-N5K-4(config-if)# switchport trunk mode off
DC3-N5K-4(config-if)# switchport mode F
DC3-N5K-4(config)# int fc2/2
DC3-N5K-4(config-if)# no sh

Step 3: Configure the N5K zones and zonesets


DC3-N5K-3(config)# sh fcns database
DC3-N5K-3(config)# device-alias database
DC3-N5K-3(config-device-alias-db)# device-alias name storage pwwn (HP)
DC3-N5K-3(config-device-alias-db)# device-alias name FI-A pwwn (Cisco)
DC3-N5K-3(config-device-alias-db)# device-alias name server pwwn (FC0)
DC3-N5K-3(config-device-alias-db)# exit
DC3-N5K-3(config)# device-alias commit

DC3-N5K-3(config)# zone name DC_vsan_500 vsan 500


DC3-N5K-3(config-zone)# member device-alias storage

DC3-N5K-3(config-zone)# member device-alias FI-A


DC3-N5K-3(config-zone)# member device-alias server
DC3-N5K-3(config-zone)# exit

DC3-N5K-3(config)# zoneset name DC_vsan_500 vsan 500

DC3-N5K-3(config-zoneset)# member DC_vsan_500
DC3-N5K-3(config-zoneset)# exit

DC3-N5K-3(config)# zoneset activate name DC_vsan_500 vsan 500

Zoneset activation initiated. check zone status

DC3-N5K-3(config)# show zoneset active vsan 500

DC3-N5K-4(config)# sh fcns database


DC3-N5K-4(config)# device-alias database
DC3-N5K-4(config-device-alias-db)# device-alias name storage pwwn (HP)
DC3-N5K-4(config-device-alias-db)# device-alias name FI-B pwwn (Cisco)
DC3-N5K-4(config-device-alias-db)# device-alias name server pwwn (FC1)
DC3-N5K-4(config-device-alias-db)# exit
DC3-N5K-4(config)# device-alias commit

DC3-N5K-4(config)# zone name DC_vsan_600 vsan 600


DC3-N5K-4(config-zone)# member device-alias storage
DC3-N5K-4(config-zone)# member device-alias FI-B
DC3-N5K-4(config-zone)# member device-alias server
DC3-N5K-4(config-zone)# exit

DC3-N5K-4(config)# zoneset name DC_vsan_600 vsan 600

DC3-N5K-4(config-zoneset)# member DC_vsan_600
DC3-N5K-4(config-zoneset)# exit

DC3-N5K-4(config)# zoneset activate name DC_vsan_600 vsan 600

Zoneset activation initiated. check zone status

DC3-N5K-4(config)# show zoneset active vsan 600

Step 4: Configure SAN boot

Step 5: Bind the server


Section 3 ACI Infrastructure Configuration

Activity Objective:

- Finish Section 3 Practice

Required Resources:

These are the resources and equipment required to complete this activity:
- APIC
- ACI fabric (1-2-3 spines, 4-6 leaves)
- AC, FEX
- Student VM

Don't delete the existing CFR or any policies, such as EPGs, Tenants, etc., or you will fail according to
the CCIE Exam Policy.

Device Info

Device IP Username Password


APIC GUI 10.1.1.51 admin Cisco!123
APIC CLI 10.1.1.51 admin Cisco!123
vCenter 10.1.1.215 [email protected] Cisco!123
DC2-N7K-1 10.1.1.81 admin Cisco!123
DC1-N9K-1 10.1.1.74 admin Cisco!123
BB-SW 10.37.37.36 admin Cisco!123


3.1 Access Policies Configuration

Diagram

SRV3(aci-db2) ---FEX101(N2K-3)---LEAF-7(Node 201) ---SPINE-4(Node2001)


SRV3(legacy2) ---FEX102(N2K-4)---LEAF-8(Node 202) ---SPINE-4(Node2001)

Configure the access policies that are needed to discover FEX 102. FEX102 must be discovered so that
the legacy2 application is learned as an endpoint on Node 202.


Solution:

Step 1 - Check the leaf interfaces

Fabric -> Inventory -> Pod 2 -> leaf-8 (Node-202) -> Interfaces -> eth1/43-44

Step 2 - Check the FEX

Fabric -> Access Policies -> Switches -> Leaf Switches
-> Profiles -> Switch202 -> click and check the names

Fabric -> Access Policies -> Interfaces -> Leaf Interfaces

-> Profiles -> Switch202_ifselector103 -> FexCard103
-> click and check the type: range, FEX Profile: Switch202_FexP103, FEX ID: 103,
Interfaces: 1/43-44

Step 3 - Configure the interface for vCenter

Fabric -> Access Policies -> Interfaces -> Leaf Interfaces
-> Profiles -> Switch202_FexP103 -> FEX Policy Group -> Fex103_eth_1_1
Policy Group: Fex103, Interfaces: 1/1

Fabric -> Access Policies -> Interfaces -> Leaf Interfaces -> Policy Groups
-> Leaf Access Port -> Fex103
-> Change the Link Level Policy: 1G-Auto,
CDP Policy: CDP-Enabled, LLDP Policy: LLDP-Enabled,
Attached Entity Profile: SRV5_AAEP -> Submit

Step 4 - Configure legacy2 for vCenter

Tenants -> ALL TENANTS -> Xandar-rackXX -> Xandar-rackXX -> Application Profiles
-> Nova -> Application EPGs -> legacy2 -> Domains (VMs and Bare Metals)
-> right-click 'Add VMM Domain Association'
-> Select VMM Domain Profile: CCIE-DVS
-> Resolution Immediacy: On Demand and click Submit

Connect to vCenter:
Globe icon -> CCIE-DVS -> Networks -> Xandar-rackXX|Nova|legacy2 -> Configure
-> Policies -> EDIT -> Teaming and failover -> move uplink2 to the top -> click OK

Click the Hosts and Clusters icon -> legacy2-rackXX -> Edit Settings
-> Check the MAC address
------------
Verification
------------
apic# attach leaf-8 -> type password

leaf-8# show fex
leaf-8# show lldp neighbors
leaf-8# show endpoint detail (confirm the MAC address)
Legacy2-rack37 web console:
[root@legacy2~] # ping 10.0.3.10


3.2 Tenant Policy

Ensure that each endpoint listed here is learned, and address any EPG learning issues along the way.
Note: a leaf may need to receive traffic before it learns the application as an endpoint.

-Bridge Domain A (L3, GW on ACI)


aci-web, aci-app, aci-db

-Bridge Domain B (L2 only, no GW)


legacy 1, legacy 2


Solution:

Step 1 - Config bridge domain A


Tenants -> Xandar-rackXX -> Networking -> Bridge Domains -> Xandar -> Policy -> General
-> Change IP Data-plane Learning to 'yes', uncheck the 'Limit IP Learning to Subnet' box and click
the 'Submit' button

Tenants -> Xandar-rackXX -> Networking -> Bridge Domains -> Xandar -> Policy -> L3 Configurations
-> The 'Unicast Routing' box should be checked.

Tenants -> Xandar-rack37 -> VRFs -> Xandar -> Policy


-> Scroll down and check 'IP Data-plane Learning': Enabled

System -> System Settings -> Endpoint Controls -> IP Aging

-> Change Administrative State: Enabled and Submit

Step 2 - Ping GW: ping 192.168.3.1


Open aci-app-rack XX Web Console->
[root@legacy2~]# ping 192.168.3.1

V center -> Host and Cluster -> CCIE DC -> 10.1.1.115 -> rack XX
-> open these three: aci-app-rack XX, aci-db-rack XX and aci-web-rack XX -> Ping 192.168.3.1
Leaf-3#show endpoint detail
192.168.3.30: db
192.168.3.20: app
192.168.3.10: web

Step 3 - Config bridge domain B


Tenants -> Xandar-rackXX -> Networking -> Bridge Domains -> legacy -> Policy -> General
-> Change 'L2 Unknown Unicast': Flood, 'IP Data-plane Learning': Yes,
uncheck the 'Limit IP Learning to Subnet' box and click the Submit button

Tenants -> Xandar-rackXX -> Networking -> Bridge Domains -> legacy -> Policy -> L3 Configurations
-> Uncheck the 'Unicast Routing' box and Submit

Step 4 - Ping the other legacy VMs: ping 10.0.3.10 and 10.0.3.20

vCenter -> Hosts and Clusters -> CCIE DC -> 10.1.1.115 -> rackXX
-> open legacy1-rackXX -> ping 10.0.3.10 and 10.0.3.20
[root@legacy1 ~]# ping 10.0.3.10
[root@legacy1 ~]# ping 10.0.3.20

Leaf-3#show endpoint detail



824/Xandar-rack XX : Xandar Legacy1 192.168.3.20


Leaf-8#show endpoint detail


3.3 Virtual Networking

The freshly migrated aci-db2 endpoint is not reachable in the VMM domain. Validate that the second
pod's VMM deployment settings are optimized for dynamic deployment to take full advantage of the
functionality provided by VMM integration. Also, ensure that the configuration tied to the 'aci-db2' VM
allows for the expected learning under the 'database' EPG. Do not create any new policies.

Step 1 - Configure the VMM domains

Tenants -> Xandar-rackXX -> Application Profiles -> Nova -> Application EPGs
-> database -> right-click Domains (VMs and Bare Metals) and select 'Add VMM Domain Association'
-> Choose 'VMM Domain Profile': CCIE-DVS -> Resolution Immediacy: On Demand and click Submit

vCenter -> Hosts and Clusters icon -> CCIE DC -> 10.1.1.113 -> rackXX
-> open aci-db2-rackXX -> Edit Settings -> Network Adapter 1: quarantine
-> Click Browse, change to Xandar-rackXX|Nova|database and click OK -> ping 192.168.3.1
[root@DB2 ~]# ping 192.168.3.1

leaf-7# show endpoint detail
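The VMM domain association above can also be expressed in the APIC NX-OS-style CLI; this is a sketch only, assuming the tenant, application profile and EPG names shown in the GUI path:

apic1# configure
apic1(config)# tenant Xandar-rackXX
apic1(config-tenant)# application Nova
apic1(config-tenant-app)# epg database
apic1(config-tenant-app-epg)# vmware-domain member CCIE-DVS
apic1(config-tenant-app-epg-domain)# exit

The re-mapping of the aci-db2 VM's network adapter from the quarantine port group to the EPG's port group still has to be done in vCenter.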

3.4 L3 Out
Configure iBGP on top of the existing OSPF configuration.
Make sure that subnets that originate in Cisco ACI can be advertised into the existing environment,
and vice versa. Ensure that external routes can be redistributed throughout the ACI fabric.

Step 1 - Configure the BGP route reflector


System -> System Settings -> BGP Route Reflector -> Policy
-> Change Autonomous System Number to '65502' and make sure Route Reflector Nodes are correct:

Pod ID Node ID Node Name


1 1001 spine-1
1 1003 spine-3
2 2001 spine-4
-> click submit

Step 2 - Config L3 out


Tenants -> ALL TENANTS -> Common -> Networking -> L3Outs -> iBGP_over_OSPF-rack XX
-> Policy -> Main -> Enable BGP/EIGRP/OSPF: Check 'BGP' box and click Submit button.

Tenants -> ALL TENANTS -> Common -> Networking -> L3Outs -> iBGP_over_OSPF-rack XX
-> Logical Node Profiles -> Leaf1 -> Policy -> BGP Peer Connectivity
-> Click the icon and select Loopback -> Type '7.7.7.1' into 'Peer Address' and '65502' into
'Remote Autonomous System Number' -> click Submit button.

Tenants -> ALL TENANTS -> Common -> Networking -> L3Outs -> iBGP_over_OSPF-rack XX
-> Logical Node Profiles -> Leaf4 -> Policy -> BGP Peer Connectivity
-> Click the icon and select Loopback -> Type '7.7.7.1' into 'Peer Address' and '65502' into
'Remote Autonomous System Number' -> click Submit button.

Tenants -> ALL TENANTS -> Common -> Networking -> L3Outs -> iBGP_over_OSPF-rack XX
-> Logical Node Profiles -> Leaf1 -> Configured Nodes -> topology/pod-1/node-101
-> BGP for VRF common: rack XX -> Neighbors -> Check that '7.7.7.1' is there in the Established state

leaf-4#show vrf
leaf-4#show ip ospf neighbors vrf common: rack XX
leaf-4#show bgp ipv4 unicast summary vrf common: rack XX

BB-SW:
Core_SW_C4948E-F#show vrf
Core_SW_C4948E-F#show ip ospf neighbor
Core_SW_C4948E-F#show bgp ipv4 unicast summary
Core_SW_C4948E-F#show bgp all summary

leaf-1#show vrf
common: rack XX
leaf-1#show ip ospf neighbors vrf common: rack XX
leaf-1#show bgp ipv4 unicast summary vrf common: rack XX

Step 3 - Config ExtEPG


Tenants -> ALL TENANTS -> Common -> Networking -> L3Outs -> iBGP_over_OSPF-rack XX
-> External EPGs -> L3EPG -> Policy -> General -> Subnets -> Click +(create)
-> Type '0.0.0.0/0' into IP Address
-> Click the box of 'Shared Route Control Subnet' and 'Aggregate Shared Routes' -> Click Submit

leaf-1#show ip route vrf common: rack XX


leaf-1#show bgp ipv4 unicast summary vrf common: rack XX
leaf-4#show ip route vrf common: rack XX
leaf-4#show bgp ipv4 unicast summary vrf common: rack XX


3.5 Inter-Fabric Connectivity - Host-Based Routes

Advertise only specific host routes, as opposed to the entire BD subnet, in the Xandar BD. The only
endpoint(s) that should have Data Center 1 reachability today are those serving
http/https web sockets. No other host routes should be advertised, nor should the entire BD subnet be
advertised.
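In the object model, the GUI subnet created in Step 1 below is an fvSubnet object on the EPG, and the checkboxes map to its `scope` and `ctrl` flags. This is a sketch only; the attribute values are assumed from the ACI object model rather than taken from this lab.

```python
def epg_subnet_payload(host_ip):
    """Host-route subnet on an EPG: advertised externally and shared
    between VRFs, with the default SVI gateway suppressed."""
    return {
        "fvSubnet": {
            "attributes": {
                "ip": host_ip,                 # /32 host route
                "scope": "public,shared",      # Advertised Externally + Shared Between VRFs
                "ctrl": "no-default-gateway",  # No Default SVI Gateway
            }
        }
    }


subnet = epg_subnet_payload("192.168.3.10/32")
print(subnet)
```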

Step 1 - Config subnet


Check the IP address of aci-web-rack XX -> web console
[root@Web ~]# ifconfig
eth0: 192.168.3.10

Tenants -> ALL TENANTS -> Xandar-rack XX -> Application Profiles -> Nova -> Application EPGs
-> Web -> Right click on Subnets -> Create EPG Subnet
-> Type 192.168.3.10/32 into 'Default Gateway IP'
- Uncheck Scope: 'Private to VRF'
- Check Scope: 'Advertised Externally', 'Shared Between VRFs', and 'No Default SVI Gateway'
- Change 'Type Behind Subnet' to 'EP Reachability' and type 192.168.3.1 into 'Next Hop IP Address'
- click submit

Step 2 - Associate L3 Out


Tenants -> ALL TENANTS -> Xandar-rack XX -> Networking -> Bridge Domains -> Xandar
-> Policy -> L3 Configurations -> Click create icon in 'Associate L3 Outs:'
and Select 'iBGP_over_OSPF-rack XX' -> Click Update.


3.6 Multicast - Multi-Pod BUM Traffic

Troubleshoot ACI and the IPN to get Layer 2 flood connectivity between the legacy1 VM in Pod1 and the
legacy2 VM in Pod2.

Step 1 - Check IPN mroute


- In the exam, the IPN device is N7K-IPN
DC1-N9K1-1#show vrf
- check MPOD-IPN
DC1-N9K1-1#show ip int bri vrf MPOD-IPN
- lo4: 44.1.1.1
DC1-N9K1-1(config)# int lo4
DC1-N9K1-1(config-if)# ip pim sparse-mode
DC1-N9K1-1(config-if)# exit
DC1-N9K1-1(config)# int e 1/49.4
DC1-N9K1-1(config-if)# ip pim sparse-mode
DC1-N9K1-1(config-if)# exit
DC1-N9K1-1(config)# int e 1/50.4
DC1-N9K1-1(config-if)# ip pim sparse-mode
DC1-N9K1-1(config-if)# exit
DC1-N9K1-1(config)# vrf context MPOD-IPN
DC1-N9K1-1(config-vrf)# ip pim rp-address 44.1.1.1 group-list 225.0.0.0/8 bidir
DC1-N9K1-1(config-vrf)# ip pim rp-address 44.1.1.1 group-list 239.0.0.0/8 bidir
DC1-N9K1-1(config-vrf)# exit
- check MPOD-IPN
DC1-N9K1-1(config)# show run pim

vrf context MPOD-IPN


ip pim rp-address 44.1.1.1 group-list 225.0.0.0/8 bidir
ip pim rp-address 44.1.1.1 group-list 239.0.0.0/8 bidir
ip pim ssm range 232.0.0.0/8

interface loopback4
ip pim sparse-mode
interface Ethernet1/49.4
ip pim sparse-mode
interface Ethernet1/50.4
ip pim sparse-mode

DC1-N9K1-1#show ip mroute vrf MPOD-IPN


Step 2 - Check BD multicast


Tenants -> ALL TENANTS -> Xandar-rack XX -> Networking -> Bridge Domains
-> legacy -> Policy -> Advanced/Troubleshooting -> Check 'Multicast Address: 225.1.47.0'

DC1-N9K1-1(config)# show ip mroute vrf MPOD-IPN


- (*, 225.1.47.0/32)

Step 3 - Check spine gipo


spine-4# show ip igmp gipo joins
spine-3# show ip igmp gipo joins

Step 4 - Test
Tenants -> ALL TENANTS -> Xandar-rack XX -> Application Profiles -> Nova -> Application EPGs
-> legacy1 -> Right click on 'Contracts' and select 'Add Provided Contract'
-> Click arrow button in Contract and select 'default' -> click Submit

legacy1 -> Right click on Contracts and select Add 'Consumed Contract'
-> Click arrow button in Contract and select 'default' -> click Submit

legacy2-rackXX - ping 10.0.3.10 and 10.0.3.20


[root@legacy2 ~]# ping 10.0.3.10
[root@legacy2 ~]# ping 10.0.3.20


3.7 Fabric Monitoring and Collector Policy

Ensure that all expected flows are allowed between these EPGs, and ensure that future use of these
contracts does not allow unintentional route leaking.

Note that ICMP has been allowed for testing purposes.

Diagram

Common\iBGP_over_OSPF <-80/443- Xandar:Nova\Web <-5000- Nova\Business_application <-27017- Nova\Database
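Scripted, the web_ext filter built in Step 1 below is a vzFilter object with one vzEntry per TCP port. This is a hedged sketch: the class and attribute names are as commonly seen in the ACI object model, not copied from this lab's configuration.

```python
def tcp_entry(name, port):
    """One vzEntry matching TCP traffic to a single destination port."""
    return {
        "vzEntry": {
            "attributes": {
                "name": name,
                "etherT": "ip",
                "prot": "tcp",
                "dFromPort": str(port),
                "dToPort": str(port),
            }
        }
    }


def web_ext_filter():
    """The http/https filter used by the Web_to_external contract."""
    return {
        "vzFilter": {
            "attributes": {"name": "web_ext"},
            "children": [tcp_entry("http", 80), tcp_entry("https", 443)],
        }
    }


flt = web_ext_filter()
print(flt)
```

The contract itself (vzBrCP with scope 'global') would then reference this filter from its subject.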

Step 1 - Create contract


- In the exam, the contracts may be pre-configured, but you need to check the filters and contract subjects

Tenants -> ALL TENANTS -> Common -> Contracts


-> Right click on 'Standard' and Select 'Create Contract'
-> Type 'Name: Web_to_external-rack XX', select 'Scope: Global'
-> Click + button next to 'Subjects:'
-> It will bring Create Contract Subject Window
-> Type -> 'Name: web_to_external_subject'
-> Click + button next to Filters
-> Click + button Under Name
-> It will bring Create Filter Window
-> Type Name: web_ext
-> click + button next to Entries:
Name: http, EtherType: IP, IP protocol: tcp, Destination Port/Range 80/80
-> click update -> click + button
Name: https, EtherType: IP, IP protocol: tcp, Destination Port/Range 443/443
-> click Submit -> Bring back to Create Contract Subject -> click update
-> Click + button next to Filters
-> Select icmp common under the name and click OK
-> Submit

Tenants -> ALL TENANTS -> Xandar-rack XX -> Contracts


-> Right click on Standard and select Create Contract
-> Type -> Name: web_to_app
-> Select 'Scope: VRF'
-> click + button next to Subjects:
-> Click + button next to Filters
-> Click + button Under Name


-> It will bring Create Filter Window


-> Entries: click + button
Name: 5000, EtherType: IP, IP protocol: tcp, Destination Port/Range 5000/5000
-> click Submit -> Bring back to Create Contract Subject -> click update
-> Click + button next to Filters
-> Select icmp common under the name and click OK
-> Submit

-> Right click on Standard and select Create Contract


-> Type -> Name: app_to_db
-> Select 'Scope: VRF'
-> Click create contract subject
-> Click + button next to Filters
-> Click + button Under Name
-> Entries: click + button
Name: 27017, EtherType: IP, IP protocol: tcp, Destination Port/Range 27017/27017
-> click Submit -> Bring back to Create Contract Subject -> click update
-> Click + button next to Filters
-> Select icmp common under the name and click OK
-> Submit

Step 2 - Associate contract


Tenants -> ALL TENANTS -> Xandar-rack XX -> Application Profiles -> Nova-> Application EPGs
-> Web -> Right click on 'Contracts' and select 'Add Provided Contract'
-> Select Contract: web_to_external-rack XX -> Click Submit

Tenants -> ALL TENANTS -> Xandar-rack XX -> Application Profiles -> Nova-> Application EPGs
-> Web -> Right click on 'Contracts' and select 'Add Consumed Contract'
-> Select Contract: web_to_app -> Click Submit

Tenants -> ALL TENANTS -> Xandar-rack XX -> Application Profiles -> Nova-> Application EPGs
-> app-> Right click on 'Contracts' and select 'Add Provided Contract'
-> Select Contract: web_to_app -> Click Submit

Tenants -> ALL TENANTS -> Xandar-rack XX -> Application Profiles -> Nova-> Application EPGs
-> app -> Right click on 'Contracts' and select 'Add Consumed Contract'
-> Select Contract: app_to_db -> Click Submit

Tenants -> ALL TENANTS -> Xandar-rack XX > Application Profiles -> Nova-> Application EPGs
-> database -> Right click on 'Contracts' and select 'Add Provided Contract'
-> Select Contract: app_to_db -> Click Submit

Tenants -> ALL TENANTS -> Common -> Networking -> L3Outs -> iBGP_over_OSPF-rack XX

-> External EPGs -> L3EPG -> Policy -> Contracts -> Consumed Contracts -> Click + button
-> Select common/web_to_external-rack XX under Name -> click Update

Step 3 - Test

- DB2 ping app: [root@DB2 ~]# ping 192.168.3.20
- app ping web: [root@app ~]# ping 192.168.3.10
- check route: leaf-3# show ip route vrf xandar-rackXX:xandar

- connect BB-SW
Core_SW_C4948E-F# show ip route vrf rackXX
B 192.168.3.10 [200/0] via 1.1.1.1, 00:01:47

Script

* Activity Objective: Finish Section 3 Practice

Required Resources:
These are the resources and equipment required to complete this activity:

- APIC
- ACI fabric (1-2 spines, 4 leaves)
- Scripting VM
- NXOS
- DCNM

3.8 Create Large-Scale APP Deployment


In preparation for the setup of 100 web servers that run different applications,
create a dedicated application profile for each web EPG.

- Create a tenant called 'Scripting' and, in this tenant, create a BD called 'BD'.
- Create 100 application profiles (App1, App2, App3, App4, ... App100).
- In each application profile, create an EPG called 'Web'.

To make this task easier, you can log in to the system scripting-vm/10.1.1.220 with
user ccie01 and password Cisco!123 in vCenter.

The following scripts are available for you:

Python: python/make-tenant-policies.py
You can execute the script by doing

cd python
./make-tenant-policies.py
(please note a backup python script is available -> make-tenant-policies-ORIGINAL.py)

Ansible:
ansible/playbook.yaml
You can execute the script by doing
cd ansible
ansible-playbook -i inventory.yaml playbook.yaml
(please note a backup playbook is available -> playbook-ORIGINAL.yaml)
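If both provided scripts are broken beyond repair, the whole tenant can also be rebuilt as a single POST to uni.json. The sketch below shows what make-tenant-policies.py presumably constructs; the payload shape is my own, not copied from the actual script.

```python
def scripting_tenant_payload(n_apps=100):
    """Tenant 'Scripting' with BD 'BD' and App1..App100, each holding a
    'Web' EPG bound to that BD."""
    apps = [
        {
            "fvAp": {
                "attributes": {"name": f"App{i}"},
                "children": [
                    {
                        "fvAEPg": {
                            "attributes": {"name": "Web"},
                            "children": [
                                # Bind each Web EPG to the shared BD
                                {"fvRsBd": {"attributes": {"tnFvBDName": "BD"}}}
                            ],
                        }
                    }
                ],
            }
        }
        for i in range(1, n_apps + 1)
    ]
    return {
        "fvTenant": {
            "attributes": {"name": "Scripting"},
            "children": [{"fvBD": {"attributes": {"name": "BD"}}}] + apps,
        }
    }


tenant = scripting_tenant_payload()
print(len(tenant["fvTenant"]["children"]))  # 101: the BD plus 100 app profiles
```

A single POST of this body to /api/mo/uni.json (after aaaLogin) would create everything at once, which is also why deleting the one tenant cleans the whole exercise up.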

Solution:
Script:SSH Scripting VM

Step 1: Use SecureCRT to connect to the Scripting VM


[ccie01@ansible ~]$

Step 2: python
[ccie01@ansible ~]$ cd /python/
[ccie01@ansible python]$
[ccie01@ansible python]$ ll
total 8
-rwxrwxrwx 1 root root 198 Mar 22 12:44 conf.json
-rwxrwxrwx 1 root root 2625 Mar 22 12:44 make-tenant-policies.py
[ccie01@ansible python]$
[ccie01@ansible python]$ python3 make-tenant-policies.py

.
.
.
2021-03-23 16:44:29,725 |INFO|Response code :<Response [200]>
2021-03-23 16:44:29,726 |INFO|Successfully deployed configuration change
[ccie01@ansible python]$

APIC - Tenants - ALL TENANTS - Check Scripting: EPGs: 100, Health Score: Healthy
APIC - Tenants - Scripting - Application Profiles (100)
Scripting - Application Profiles (100) - App100 - Application EPGs - Web
- Bridge Domain: BD
APIC - Tenants - Scripting - Networking - Bridge Domains - BD - VRF: Scripting
APIC - Tenants - ALL TENANTS - right click on Scripting - Click Delete

Step 3: ansible
[ccie01@ansible python]$ cd /ansible/

[ccie01@ansible ansible]$
[ccie01@ansible ansible]$ ll
total 8
-rwxrwxrwx 1 root root 5112 Mar 22 14:01 playbook.yml
[ccie01@ansible ansible]$ ansible-playbook playbook.yml
changed: [10.1.1.51] => (item={ 'ap': 'App90'})
.
.
changed: [10.1.1.51] => (item={ 'ap': 'App100'})
PLAY RECAP *********************************************
10.1.1.51: ok=5 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[ccie01@ansible ansible]$

APIC - Tenants - ALL TENANTS - Check Scripting: EPGs: 100, Health Score: Healthy
APIC - Tenants - Scripting - Application Profiles (100)
Scripting - Application Profiles (100) - App100 - Application EPGs - Web
- Bridge Domain: BD
APIC - Tenants - Scripting - Networking - Bridge Domains - BD - VRF: Scripting
APIC - Tenants - ALL TENANTS - right click on Scripting - Click Delete


VERY IMPORTANT NOTE: The complete lab will be provided by 15-Sep, and for DC side 1
you can book the rack from the end of August at http://ccierack.rentals

