2 Design and Deployment Guide for the SD-WAN EVPN Interconnection Solution
Figure 2-1 Design and deployment process for the EVPN Interconnection Solution
NOTE
Before deploying a multi-campus network for an enterprise, you need to install iMaster
NCE-Campus and connect it to the network. For details about the design and deployment
of iMaster NCE-Campus, see the iMaster NCE-Campus installation guide. This document
describes the design and deployment of a multi-campus network for an enterprise.
Sites communicate with each other based on the planned overlay topology
model. The EVPN Interconnection Solution involves two site roles: RR site and
edge site.
Whether a site is an edge site or an RR site depends on the device role configured when
egress CPEs are added to the site. If the role of the egress CPE is set to Gateway+RR, the
site is an RR site. If no device of the Gateway+RR role exists at the site, the site is an edge
site.
An edge site can establish IBGP peer relationships with two RRs. The two RRs back
up each other. Multiple RRs can be deployed under a tenant and are fully meshed
on the control plane. That is, a control channel is set up between any two RRs to
directly communicate with each other.
● RR site: Plan the edge sites that function as RR sites. Generally, stable edge
sites with high CPE performance and a large number of WAN links are used
as RR sites.
● Edge site: Plan the RR sites to which each edge site is connected. Generally,
edge sites are connected to RR sites that are physically close to the edge sites
and have good network connectivity. An edge site can connect to a maximum
of two RR sites, and a maximum of eight RR sites can be configured for a
tenant. If an edge site is not connected to any RR site, the edge site does not
participate in overlay networking and service deployment.
Configure a routing domain and determine whether to enable IPSec encryption for
the routing domain. The Internet and MPLS routing domains are provided by
default. If these routing domains cannot meet your requirements, create other
routing domains as required.
● Routing domain: A routing domain defines whether routes between different
transport networks are reachable. That is, physical links of different transport
networks that belong to the same routing domain are reachable to each
other. Generally, if the transport networks that are of the same type and are
provided by different carriers can communicate with each other, they are
defined in the same routing domain. For example, the Internet of carrier A
and that of carrier B can be defined in the same routing domain.
● Encryption: Configure whether to use IPSec to encrypt data transmitted over
tunnels. Generally, for network security purposes, encryption is required.
Transport Network
A transport network defines the information about the physical network between
the site and the WAN. The following lists the data to be planned for each
transport network. The defined transport network name can be directly referenced
when physical links are specified for site WAN links and policies.
● Transport network: The transport network defines the type of a physical link
on the WAN side of a site and is determined by the type of a WAN access
network provided by carriers. Generally, a type of network provided by a
carrier is defined as a transport network. For example, the Internet of carrier
A is defined as a transport network, and the Internet of carrier B is defined as
another transport network.
● Routing domain: Select the routing domain corresponding to each transport
network.
By default, the system provides the following transport networks: Internet,
Internet1, MPLS, and MPLS1. You can create other transport networks as needed.
IPSec Encryption Parameters
You need to set the following IPSec encryption parameters for a transport network
on which the encryption function is enabled:
● Protocol: Currently, only ESP is supported.
● Authentication algorithm: Select the authentication algorithm, which can be
SHA2-256 or SM3.
● Encryption algorithm: Specify the link encryption mode. The AES128, AES256,
and SM4 algorithms are supported. If the authentication algorithm is SM3,
the encryption algorithm can only be SM4. If the authentication algorithm is
SHA2-256, the AES256 encryption algorithm is recommended. This is because
the key length of AES256 is 256 bits, having a higher security level than
AES128.
● Life Time: Plan the global IPSec SA lifetime. The value is in the range from 60
to 43200, in minutes. When an IPSec SA is established through dynamic IKE
negotiation, you can configure the SA lifetime to update the SA in real time.
This reduces the risk of SA cracking and enhances security. When the IPSec SA
is about to expire, IPSec peers negotiate a new IPSec SA through IKE. After
the new IPSec SA is negotiated, the peers immediately use the new IPSec SA
to protect communication.
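The parameter constraints above can be captured in a short planning check. The following Python sketch is illustrative only, with hypothetical function names; it encodes just the combinations and value ranges stated above.

    # Illustrative sketch: encodes the IPSec planning constraints described above.
    ALLOWED_ENCRYPTION = {
        "SM3": {"SM4"},                    # SM3 authentication requires SM4 encryption
        "SHA2-256": {"AES128", "AES256"},  # AES256 is recommended with SHA2-256
    }

    def check_ipsec_plan(auth_alg, enc_alg, lifetime_min):
        """Return a list of planning problems for an IPSec encryption profile."""
        problems = []
        if auth_alg not in ALLOWED_ENCRYPTION:
            problems.append("authentication algorithm must be SHA2-256 or SM3")
        elif enc_alg not in ALLOWED_ENCRYPTION[auth_alg]:
            problems.append(enc_alg + " cannot be combined with " + auth_alg)
        if not 60 <= lifetime_min <= 43200:
            problems.append("IPSec SA lifetime must be 60-43200 minutes")
        return problems

    print(check_ipsec_plan("SM3", "AES256", 1440))       # ['AES256 cannot be combined with SM3']
    print(check_ipsec_plan("SHA2-256", "AES256", 1440))  # []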
● Switchover period: A link switchover is performed only when the switchover
period elapses and the link quality still does not meet service requirements.
The value range is from 1 to 65535, in seconds. The default value is 5.
● Statistics period: This parameter specifies the interval for checking the link
quality. The value is in the range from 1 to 65535 and must be less than or
equal to the switchover period.
● Flapping suppression period: Unstable network link quality may result in
frequent link switchovers at the sites where an intelligent traffic steering
policy is applied. To prevent this situation, the system requires that services be
transmitted on a new link for at least one flapping suppression period before
the services are switched back from the new link to the original link. The
value range is from 2 to 131070, and the default value is 30 seconds. The
value must be at least twice the switchover period.
● Maximum bandwidth utilization: This parameter is used for load balancing-
based intelligent traffic steering. If the service traffic of a link reaches the
maximum bandwidth utilization, traffic can be load balanced. The default
maximum bandwidth usage is 95%, and the value range is from 50% to
100%.
● Symmetric routing: In load balancing scenarios, the service receiving site
forwards services based on the route selection result of the sending site,
without proactively selecting a route. Symmetric routing is enabled by default.
Tenants can disable symmetric routing. After symmetric routing is disabled,
devices at both ends select routes based on route selection rules.
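The relationships between these link quality parameters can be checked with a small sketch. The helper below is hypothetical (not an iMaster NCE-Campus API) and only restates the value ranges and dependencies described above; the argument defaults are example values.

    # Minimal sketch of the dependencies between the link quality parameters.
    def check_steering_params(switchover=5, statistics=5, flap_suppression=30, max_util=95):
        problems = []
        if not 1 <= switchover <= 65535:
            problems.append("switchover period must be 1-65535 seconds")
        if not 1 <= statistics <= 65535 or statistics > switchover:
            problems.append("statistics period must be 1-65535 seconds and <= switchover period")
        if not 2 <= flap_suppression <= 131070 or flap_suppression < 2 * switchover:
            problems.append("flapping suppression period must be 2-131070 seconds and >= 2 x switchover period")
        if not 50 <= max_util <= 100:
            problems.append("maximum bandwidth utilization must be 50%-100%")
        return problems

    print(check_steering_params())               # [] - the example values are consistent
    print(check_steering_params(switchover=20))  # flags the 30-second flapping suppression period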
NTP
You can centrally configure NTP for all sites, which is disabled by default.
Routing
An overlay network is established between sites through EVPN tunnels. Sites
exchange routes on the overlay network by establishing IBGP peer relationships.
By default, the BGP AS number of a site is 65001, which can be modified.
● Routing protocol: Only BGP is supported.
● AS number: The default value is 65001. Generally, you do not need to change
the value. If the default value cannot be used due to reasons such as a
conflict with the BGP AS number planned for an existing device on the
network, use another value in the range from 1 to 65534. A value in the
range from 64512 to 65534 is recommended.
● Community attribute pool: A community pool is a resource management pool
where community attributes are configured. Currently, it is mainly used for
WAN IBGP, RR management, Internet access, mutual communication, area
management, and multi-tenant IWG.
● Dual-gateway interconnection protocol: For a dual-gateway site, the two CPEs
need to exchange routing information through a routing protocol, which can
be OSPF or IBGP.
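As a quick illustration of the AS number guidance, the hypothetical helper below flags values outside the allowed range and notes whether the recommended private range is used.

    # Illustrative sketch of the AS number guidance above.
    def check_overlay_as(as_number):
        if not 1 <= as_number <= 65534:
            return "invalid: the AS number must be in the range 1-65534"
        if 64512 <= as_number <= 65534:
            return "ok: within the recommended range 64512-65534"
        return "usable, but a value in the range 64512-65534 is recommended"

    print(check_overlay_as(65001))  # the default value falls in the recommended range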
IPv4 Pool
When iMaster NCE-Campus automatically orchestrates services such as overlay
tunnels, overlay WAN routes, and site Internet access, IP addresses need to be
allocated. Plan address pools based on the network scale. The number of required
addresses increases with the number of sites. For details about the relationship
between them, see Table 2-1.
Table 2-1 Mapping between the mask length and the network scale

Network Scale/Number of Sites    Recommended Configuration (Single Network Segment)
2-10                             /23
11-30                            /22
31-60                            /21
61-120                           /20
121-250                          /19
251-500                          /18
501-1000                         /17
1000+                            /16
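The Table 2-1 mapping can be expressed as a simple lookup, for example when scripting the address pool planning. The sketch below is illustrative only and simply restates the table.

    # Sketch of the Table 2-1 mapping as a lookup helper.
    def recommended_prefix_length(site_count):
        """Return the recommended single-segment mask length for the given number of sites."""
        for max_sites, prefix in [(10, 23), (30, 22), (60, 21), (120, 20),
                                  (250, 19), (500, 18), (1000, 17)]:
            if site_count <= max_sites:
                return prefix
        return 16  # more than 1000 sites

    print(recommended_prefix_length(45))    # 21 -> plan a /21 address pool
    print(recommended_prefix_length(1200))  # 16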
DNS Server
Plan a DNS server used for network access. If the DNS servers used by multiple
sites are different, you can group these DNS servers. When configuring LAN
services on the overlay network, you can reference different DNS server group
names to specify the DNS servers used for network access.
● DNS server group name: Plan the DNS server group and specify the group
name, for example, DNS_Server1.
● DNS server IP address: Plan the IP addresses of the DNS servers in each group.
Port Configuration
● DTLS server port: A CPE registers with the RR through DTLS. The default DTLS
server port is 55100. You can change the port number to a value in the range
from 10000 to 65535.
● STUN server port: CPEs support Session Traversal Utilities for NAT (STUN)
and can communicate with the RR through NAT devices. When the RR
functions as the STUN server, the default STUN server port is 3478. You can
change the port number to a value in the range from 1024 to 65535.
● Connection Source Port: If there are special requirements for the source port
of the STUN client, you can set Scanning Start Port, Scanning Times, and
Scanning Increment to specify the source port.
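The port ranges above can be verified with a small planning check. The function below is hypothetical and only encodes the ranges stated in this section.

    # Hypothetical check of the DTLS and STUN port planning rules.
    def check_ports(dtls_port=55100, stun_port=3478):
        problems = []
        if not 10000 <= dtls_port <= 65535:
            problems.append("DTLS server port must be in the range 10000-65535")
        if not 1024 <= stun_port <= 65535:
            problems.append("STUN server port must be in the range 1024-65535")
        return problems

    print(check_ports())                # [] - the defaults 55100 and 3478 are valid
    print(check_ports(dtls_port=8000))  # flags the DTLS server port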
– Device connecting to multiple egress links
Each device connects to multiple egress links. When one egress link fails,
traffic is switched to another egress link. This solution offers high
reliability but requires more egress links, resulting in high deployment costs.
– Device connecting to only one egress link
Each device has only one egress link and connects to another device
through the interlink. A device monitors the egress network connected to
it and notifies the other device of the monitoring result. When a fault
occurs on an egress link, the device notifies the other device. The other
device then adjusts packet forwarding policies based on the link status,
preventing traffic from being sent to the faulty uplink. This solution
provides basic egress link reliability at low deployment costs and is
recommended for small- and medium-sized campus networks.
WAN links are the basis for building an EVPN interconnection network. To prevent
repeated configuration of parameters for each site, configuration information such
as the number of gateways and WAN-side links is abstracted into a WAN link
template. The site WAN model is configured through the WAN link template.
Therefore, after creating sites, you need to plan a WAN link template for each site.
If multiple sites have the same WAN-side configurations, including the gateway
type, WAN link, and interconnection link between two gateways, the same WAN
link template can be used.
Table 2-2 lists the default WAN link templates provided by iMaster NCE-Campus.
If the default WAN link templates can meet your requirements, skip this step.
Otherwise, customize a WAN link template based on the site requirements. A
maximum of 10 WAN links can be configured for a single gateway, and a
maximum of 20 WAN links can be configured for dual gateways.
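The per-template limits above can be expressed as a simple validation, for example before importing a planned template. The helper names below are illustrative only.

    # Illustrative check of the WAN link limits for a WAN link template.
    def max_wan_links(gateway_type):
        return 10 if gateway_type == "single" else 20  # single gateway: 10, dual gateways: 20

    def wan_link_count_is_valid(gateway_type, link_count):
        return 1 <= link_count <= max_wan_links(gateway_type)

    print(wan_link_count_is_valid("single", 12))  # False - exceeds the 10-link limit
    print(wan_link_count_is_valid("dual", 12))    # True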
● Template name
The template name is a string of 1 to 64 characters.
● Gateway type at a site
A single gateway or dual gateways can be deployed at a specified site. For
sites with high reliability requirements, dual gateways can be deployed. If the
gateway service traffic is small and low requirements are imposed on
reliability, a single gateway can be deployed.
● Multiple sub-interfaces
Specify whether to enable multiple sub-interfaces on the device. After this
function is enabled, a maximum of 10 sub-interfaces can be created for a
single gateway, and a maximum of 20 sub-interfaces can be created for dual
gateways.
● WAN link at a site
– WAN link name: After specifying the gateway type, plan the number of
WAN links and specify a name for each link. The name can contain
information such as the network type and network provider.
– Device and interface: Specify the gateway and interface to which each
WAN link connects.
– Sub-interface: Specify whether to enable the sub-interface of the device.
– Overlay tunnel: Specify whether to enable the overlay tunnel function. If
this function is enabled, an overlay tunnel is created on the WAN link.
– Number: Specify the sub-interface number. This parameter is available
only after Sub-interface is enabled.
– Transport network: Specify the WAN-side network to be connected, which
depends on the transport network created in Global Configuration.
▪ Single link: Specify the interfaces for interconnecting with each other
on the CPEs.
– Layer 2 link
After planning the model and network of a site, you need to plan the WAN link
parameters of the site and connect the site to the WAN network before
establishing the overlay network and configuring services. The WAN interface on
the CPE used by the site and the WAN link to be connected have been specified in
the WAN link template. This section describes how to plan the IP address and
interface parameters of the WAN interface. You must configure WAN links before
deploying a site.
● ZTP mode: The URL-, USB-, and DHCP-based deployment modes are
supported. The system selects an orchestration scheme based on the
deployment mode.
– During URL- or USB-based deployment, iMaster NCE-Campus generates a
deployment file and sends information such as IP address and VPN
information of WAN interfaces to the CPEs through the deployment file.
After a CPE registers with iMaster NCE-Campus, iMaster NCE-Campus
delivers information such as the public IP address and interface rate to
the CPE.
– During DHCP-based deployment, iMaster NCE-Campus does not need to
generate a deployment file. After a CPE registers with iMaster NCE-
Campus, iMaster NCE-Campus needs to re-deliver the IP address and VPN
information of WAN interfaces to the CPE.
● Link name: Specify the name of the current WAN link. If a WAN link is created
using the default WAN link template, the link name is Internet or MPLS. If a
WAN link is created using a customized WAN link template, the link name is
the one specified when the WAN link template is created.
● Transport network: type of the transport network to which the WAN link
belongs.
● Device: device where the WAN link resides.
● Interface: type and number of a physical interface or virtual interface used by
the current link. If iMaster NCE-Campus is deployed on the LAN side of a DC,
multiple physical interfaces and one virtual interface can be configured. The
physical interfaces are used to connect iMaster NCE-Campus to sites, and the
virtual interface is used to transmit overlay traffic. The VN instance of the
physical interfaces must be the same as that of the virtual interface.
NOTE
1. Ensure that the physical interfaces are Layer 3 interfaces. If an interface is not a
Layer 3 interface, switch the interface to a Layer 3 interface. Otherwise, the
configuration fails to be delivered.
2. If a virtual interface is configured, the overlay tunnel cannot be enabled for the
physical interface links in the same underlay VN as the virtual interface.
3. If a loopback interface is configured as the WAN link interface type, the bandwidth
trend of the overlay traffic links inside a site and between sites and the application
bandwidth usage trend are displayed as 0. This is because the uplink and downlink
bandwidths of the loopback interface cannot be set.
4. If the interface type of links is E1-IMA (ATM), Ima-group, or Serial, ZTP is not
supported, and the deployment can be performed only on the CLI.
● Sub-interface: whether to use a sub-interface. Currently, only dot1q sub-
interfaces are supported.
● Interface description
You can centrally plan the WAN links of a site and describe the CPE and site
to which the interface belongs. The deployment email can contain the
interface description so that deployment personnel can determine whether
the site is the same as the planned site based on the interface description
during deployment.
● NAT traversal
If a NAT device is deployed between the site on a private network and the
WAN side, enable the NAT traversal function to set up overlay tunnels with
other sites and RRs.
● MTU: By default, the MTU of a serial interface is 1488 bytes, and the MTU of
Ethernet links, LTE links, xDSL links, E1-IMA links, G.SHDSL links, and IMA
group links is 1500 bytes. Adjust the MTU based on the link type. For
example, for a PPPoE link, set the MTU to 1492 because the PPPoE header is
added before the IP packet.
When the CPE forwards data packets, the data packet length and MTU are
compared at the IP layer. If a data packet is longer than the MTU, the data
packet needs to be fragmented at the IP layer. After fragmentation, the
packet length is shorter than the MTU. If the MTU is too small, the
transmission efficiency decreases due to a large number of fragments. If the
MTU is too large, packets on the network may be discarded.
● MSS: The default value is 1200. To prevent TCP packets from being
fragmented, you must configure a proper MSS based on the MTU. To properly
transmit a packet, ensure that the MSS value plus all the header lengths (TCP
header and IP header) does not exceed the MTU. For example, the default
MTU of an Ethernet interface is 1500 bytes. To prevent packets from being
fragmented, set the MSS to a value equal to or smaller than 1460 bytes (1500
- 20 - 20). In the preceding formula, the two 20s indicate the minimum length
of the TCP header and IP header, respectively. It is recommended that you set
the MSS to 1200 bytes.
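The MSS rule above is simple arithmetic. The following worked example assumes the minimum 20-byte IP header and 20-byte TCP header; the recommended value of 1200 bytes is simply a conservative choice below these limits.

    # Worked example of the MSS calculation (minimum TCP/IP header sizes assumed).
    def max_mss(mtu, ip_header=20, tcp_header=20):
        return mtu - ip_header - tcp_header

    print(max_mss(1500))  # 1460 - Ethernet interface with the default MTU
    print(max_mss(1492))  # 1452 - PPPoE link, whose MTU is reduced by the PPPoE header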
site. If the CPE has insufficient LAN-side interfaces, an access switch can be
connected to the CPE in VLAN trunk mode.
If dual CPEs are deployed, VRRP is generally deployed on the CPEs so that a CPE
failure does not affect the LAN. The VRRP virtual IP address is used as the gateway
address of the network to transparently provide the redundancy function. Multiple
switches can be deployed on the LAN side to form a stack. If two CPEs are
deployed at a site, they can be interconnected directly or through the LAN. If the
two CPEs are directly interconnected, the interconnection links can be added to an
Eth-Trunk.
In VRRP redundancy mode, the master device forwards service packets. However,
in the actual environment, service packets may need to be transmitted through
the link on the backup device. In this case, the master device needs to forward
service packets to the backup device first. Therefore, an interconnection link needs
to be set up between the master and backup devices to forward service packets
between them.
For a large site, the site network has a complex structure and complex network
facilities (for example, Layer 3 core devices). Therefore, the egress routers must
support the direct connection and dual-homing networking for interconnecting
with Layer 3 devices. BGP, OSPF, and static routing are supported.
NOTE
If multiple VNs are planned for service isolation between departments, you need
to plan LAN-side configurations for sites in each VN and use Layer 3 interfaces or
VLANs to connect to LAN-side devices. The same Layer 3 interface, Layer 3 sub-
interface, or VLAN cannot be configured for the same site in different VNs. That is,
you need to plan different Layer 3 interfaces, Layer 3 sub-interfaces, or VLANs for
different VNs when planning configurations for sites to interconnect with LAN
devices in different VNs.
WAN-side gateway:
● Site name: Specify the name of the site where the LAN-side device is to be
interconnected.
● Gateway: Specify the CPE to be configured at a site, especially at a site where
two CPEs are deployed.
● Interconnection with the LAN side through Layer 3 interfaces or sub-interfaces
– Interface: Specify the interface on the CPE for connecting to the LAN-side
device. Ensure that the interface is a Layer 3 interface.
If an Eth-Trunk is used, you need to plan the following items:
▪ Eth-Trunk ID: The Eth-Trunk 0 interface is reserved for the two
gateways. You cannot create an Eth-Trunk interface with ID 0 on the
two gateways.
▪ Domain name: Plan the domain name suffix sent from the DHCP
server to the DHCP client.
▪ Lease time: By default, the lease time is one day. In locations where
clients often move and stay online for a short period of time, for
example, in cafes, Internet bars, and airports, plan a short lease time
to ensure that IP addresses are released quickly after the clients go
offline. In locations where clients seldom move and stay online for a
long period of time, for example, in office areas of an enterprise,
plan a long lease time to prevent system resources from being
occupied by frequent lease or address renewals.
▪ DNS server: Specify the DNS server group name to plan the DNS
server used by the DHCP client. The DNS server group is planned in
Global Parameters. For details, see the description of the DNS server
in "Data Planning and Design" in 2.2.1.2 Global Configuration.
After the DNS server group name is specified, the DHCP server sends
the IP addresses in the specified DNS server group to DHCP clients
when assigning IP addresses to the DHCP clients.
▪ NetBIOS node type (Option 46): Plan the NetBIOS node type for
DHCP clients. The options include: B-Node (node in broadcast
mode), P-Node (node in peer-to-peer mode), M-Node (node in
mixed mode), and H-Node (node in hybrid mode).
▪ NetBIOS server (Option 44): Plan the NetBIOS server address for
DHCP clients.
▪ Static binding: Plan IP addresses for DHCP clients that need to use
fixed IP addresses. For example, if a server functions as a DHCP client
to apply for an IP address from the DHCP server and needs to use a
fixed IP address to ensure stability, select an IP address from the
address pool and bind the IP address to the MAC address of the
server. The DHCP server then assigns a fixed IP address to the server
based on the MAC address.
– DHCP relay agent: If the DHCP relay agent mode is selected, plan the
DHCP server address for the DHCP relay agent. You can specify a
maximum of eight DHCP servers.
● VRRP: If two gateways are deployed at a site, VRRP can be configured. LAN
users access the WAN network through the master device by default. When
the master device fails, services are automatically switched to the backup device.
If the WLAN is deployed on the LAN side, STAs connect to the WLAN Fat AP-
capable CPE to access the network.
Table 2-3 lists the items for configuring the WLAN Fat AP function on the CPE.
Table 2-3 Items for configuring the WLAN Fat AP function on the CPE
● VN: If multiple VNs are configured, select a VN and plan the WLAN Fat AP
configurations for the sites in the VN.
● Device: Plan this item only when the device supports the WLAN Fat AP
function.
● SSID: Plan the service set identifier (SSID). SSIDs in different VNs must be
unique.
● Frequency band: Plan the frequency band, which can be 2.4 GHz or 5 GHz. If
2.4 GHz is used, the transmit power level and channel can be configured.
– Transmit power level: Transmit power levels 0 to 12 are available. The
configured transmit power level must be supported by radios, which is
subject to the laws and regulations and channels of the corresponding
country or region.
– Channel: Channels 1 to 13 are available. The available channels and the
maximum transmit power of radio signals in the channels vary according
to countries and regions. Radio signals in different channels may have
different signal strengths. For details about the country codes and
channels compliance table, maximum transmit power of each channel,
channel number, and mapping between channels and frequencies, search
for "Country Codes & Channels Compliance" at the Huawei enterprise
technical support website https://support.huawei.com/enterprise. In the
search result, N/A indicates that the corresponding channel is not
supported by a country or region.
● VLAN ID: Configure the service VLAN ID. WLAN service packets are
encapsulated with the VLAN ID and forwarded to the WLAN service
processing module of the CPE. VLAN IDs in different VNs must be unique
and cannot conflict with those that have been configured.
● Security authentication: STAs that access the wireless network can be
authenticated using two modes: WPA1+PSK and WPA2+PSK. Select WPA1 or
WPA2 based on the security requirements and terminal encryption support,
and plan the PSK.
● Maximum number of access STAs: Set the maximum number of STAs that
can access the CPE through the WLAN Fat AP concurrently.
● Transmit power level: Configure the power level for a radio. A larger value
indicates a higher power level and a lower transmit power of a radio. The
default value is 0, indicating full power. This parameter is valid only when
the frequency band is set to 2.4 GHz.
● Channel: Specify the working channel for a radio. The channel is selected
based on the country code and radio mode. The default working bandwidth
is 20 MHz. This parameter is valid only when the frequency band is set to
2.4 GHz.
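For the 2.4 GHz radio items in Table 2-3, the value ranges can be checked as follows. The function is a hypothetical sketch; the channels actually permitted depend on the country code of the device.

    # Hypothetical validation of the 2.4 GHz radio parameters in Table 2-3.
    def check_radio(frequency_band, power_level=0, channel=1):
        problems = []
        if frequency_band not in ("2.4 GHz", "5 GHz"):
            problems.append("frequency band must be 2.4 GHz or 5 GHz")
        if frequency_band == "2.4 GHz":
            if not 0 <= power_level <= 12:
                problems.append("transmit power level must be 0-12 (0 indicates full power)")
            if not 1 <= channel <= 13:
                problems.append("channel must be 1-13")
        return problems

    print(check_radio("2.4 GHz", power_level=0, channel=6))   # []
    print(check_radio("2.4 GHz", power_level=15, channel=6))  # flags the power level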
Function Description
After a site CPE connects to a WAN, the CPE must have reachable underlay
network (physical network) routes to the PE, so that an overlay network can be
normally established to forward services. BGP, OSPF, or static routes can be used
based on WAN access requirements.
Application Scenarios
One EVPN network can be configured with one or more types of underlay routes
based on network requirements.
● BGP route
If an MPLS VPN network is connected and BGP dynamic routing is used, the
CPE typically needs to use BGP to exchange routing information with the PE.
iMaster NCE-Campus can configure route filtering rules based on IP network
segments to control the advertisement and receiving of BGP routes.
● OSPF route
If a Layer 2 WAN network is used, OSPF routes can be used to exchange
routes. This can be implemented by creating OSPF processes. iMaster NCE-
Campus can configure the OSPF priority and control the advertisement and
receiving of routes through the blacklist and whitelist route filtering policies.
● Static route
Static routes are applicable to many scenarios, for example, Internet access,
wireless network access using the LTE link, and using blackhole routes to
prevent routing loops.
Static routes do not involve protocol interaction and cannot detect faults on
indirectly connected links of the WAN. This may cause service interruption. To
prevent this problem, you can track the IP address of a WAN network and use
an NQA test instance to detect the IP address. If the detection fails, the
system considers that the WAN network is faulty and automatically selects
another backup link for forwarding.
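The fault-detection behavior for static routes can be pictured as follows. This is a conceptual sketch only, not the device implementation: the probe result stands in for an NQA test instance, and the addresses are documentation examples.

    # Conceptual sketch: an NQA-style probe result drives the static route choice.
    def select_next_hop(tracked_address_reachable, primary, backup):
        """Use the primary next hop while the tracked WAN address answers probes."""
        return primary if tracked_address_reachable else backup

    print(select_next_hop(True, "203.0.113.1", "198.51.100.1"))   # primary link is used
    print(select_next_hop(False, "203.0.113.1", "198.51.100.1"))  # traffic falls back to the backup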
NOTE
● Policy configuration: Plan service policies for sites in each VN if multiple VNs
are configured. For details, see 2.2.2.1 Internet Access, 2.2.2.2 Interworking
with Legacy Sites, and 2.2.2.3 Application Experience Optimization Policy.
Topology
Based on users' service requirements, the EVPN Interconnection Solution supports
the following typical overlay topology models for inter-site interconnection. Table
2-4 lists the site roles in different topology modes.
● Branch site: In the hub-spoke networking, all user sites except the hub site
are called branch sites.
a. Match conditions: Specify the IP prefix list and site list to be matched.
The relationship between the IP prefix list and site list is AND. That is, a
rule is matched only when both the IP prefix list and site list are matched.
b. Action after matching: The action can be permit or deny. When a rule is
matched, the specified action is performed. If the action is permit, you
can specify the next-hop site.
c. Attach sites: You can specify the sites to which a customized topology
policy is applied.
Based on the enterprise WAN service and internal management requirements, the
enterprise WAN has multiple topology models. The single-layer and hierarchical
network models are supported.
For large-scale enterprise networks that have a large service scale but a small
number of centrally distributed sites, the single-layer network model can also
be used, as shown in Figure 2-5.
Figure 2-5 Application scenario for large enterprises with a small number of
branches
Figure 2-6 Application scenario for large enterprises with a large number of
branches
Topology Implementation
On iMaster NCE-Campus, you can specify the topology model between sites.
iMaster NCE-Campus then generates the corresponding network model based on
the topology model, converts the network model into BGP routing policies, and
delivers the policies to the RR. The RR controls the route sending and receiving of
different sites based on the routing policy delivered by iMaster NCE-Campus. In
this way, the sites can communicate with each other based on the specified
topology model.
– Area name: Specify the name of each leaf area, for example, Area1.
– Area overlay topology: Specify the topology (hub-spoke or full-mesh) for
each area. Different overlay topologies can be specified for areas.
▪ Hub-spoke
○ Hub site: Similar to that in the hub-spoke mode in the single-
layer network model, the hub site needs to be specified in the
hierarchical network model.
○ Border site: The hub site functions as a border site, and no
border site needs to be specified again.
▪ Full-mesh networking
○ Redirect site: You can specify whether to configure a redirect site
based on actual requirements.
○ Border site: If an area using the full-mesh mode needs to
interconnect with other areas, you need to specify the border
site. The border site must be able to communicate with border
sites in other areas and have good network connectivity.
1. Overlay LAN route: Static, OSPF, and BGP routes are supported. Currently,
LAN-side routes are manually configured by customers based on the
connection mode on the LAN side.
2. Interworking link route: OSPF is used for the interconnection between VNs on
the overlay and underlay networks. This route is automatically orchestrated
and configured by the system if Site-to-Internet or Site-to-Legacy Site is
enabled. This route will not be enabled if Site-to-Internet or Site-to-Legacy
Site is disabled.
3. Interconnection link route: OSPF or IBGP is used to exchange routes between
two CPEs in dual-CPE scenarios. The routes are automatically orchestrated
and configured by the system and do not need to be manually configured.
4. Overlay WAN route: BGP is used to advertise routes on the overlay network.
The routes are automatically orchestrated and configured by the system and
do not need to be manually configured.
5. Underlay WAN route: OSPF, BGP, and static routes are supported. The routes
are manually configured by customers based on the access conditions on the
WAN side.
Overlay Route
Overlay routes refer to the routes at the overlay network layer on the EVPN
interconnection network and are classified into WAN-side and LAN-side routes.
● Overlay WAN route
To enable sites on the EVPN interconnection network to communicate with
each other on the overlay network, configure overlay WAN routes. Based on
the topology model of the overlay network, iMaster NCE-Campus
automatically orchestrates overlay WAN routes. You only need to configure
the blacklist and whitelist policies on the WAN side of the overlay network to
filter overlay routes in the receive and transmit directions.
● Overlay LAN route
To enable the CPE at each site to communicate with the LAN, configure
overlay LAN routes. This ensures that services on the LAN side run properly.
For a large site, the network has a complex structure (hierarchical structure
and multi-network design) and complex network facilities (large number of
routers and switches). In Layer 3 interconnection scenarios, CPEs can establish
Layer 3 connections to the LAN through static or dynamic routes.
▪ Default route cost: Plan the default route cost for advertising the
default route. The default value is 1.
▪ Hello packet interval: Plan the interval for sending Hello packets on
an interface, in seconds. The default value is 10. Hello packets are
periodically exchanged between OSPF interfaces to establish and maintain
neighbor relationships. A smaller interval enables faster detection of
network topology changes but incurs more protocol overhead. The
interval must be the same as that of the neighbor.
▪ Cost: Plan the OSPF cost for the interface. By default, OSPF
automatically calculates the cost based on the interface bandwidth.
Load balancing can be performed among several LAN-side routes
with the same protocol type, cost, and destination address. You can
change the interface costs to change the load balancing mode to the
active/standby mode according to the actual networking.
– Route importing: Import routes discovered by other routing protocols to
enrich OSPF routing information. When OSPF imports external routes,
you can set the cost of imported routes.
▪ Cost: Plan the cost of the imported route. The default value is 1. You
can change the cost to determine whether load balancing is achieved
for multiple routes destined for a network segment.
– Route filtering (supported only by AR routers): You can plan the following
parameters to use the blacklist and whitelist for route filtering to control
the advertisement and receiving of OSPF routes:
▪ IP address prefix list: Plan the IP address prefixes in the blacklist and
whitelist for filtering. You can specify the destination IP address/mask
and mask range for filtering. Multiple network segments can be
configured.
● BGP
– Site: Plan the site where overlay LAN routes need to be configured.
– Advanced Settings
– Peer IP address: Plan the IPv4 address of the peer. The IPv4 address can
be the IP address of an interface that is directly connected to the peer or
the IP address of a loopback interface of the reachable peer.
– Peer AS: Specify the AS number of the peer device. The BGP AS number
must be the same as that of the peer device. Otherwise, the BGP peer
relationship cannot be established.
– Local AS: Configure the local end to establish a connection with a
specified peer by using a fake AS number. By default, the local end uses
the actual AS number to establish a connection.
– Keepalive time: Specify the BGP keepalive time, in seconds. The default
value is 60.
– Holdtime: Specify the BGP hold time, in seconds. The default value is 180.
The hold time must be at least three times the keepalive time (see the
sketch after this list).
▪ If a short keepalive time and hold time are set, BGP can detect a link
fault quickly. This speeds up BGP network convergence, but increases
the number of keepalive messages on the network and the load on
devices, and consumes more network bandwidth.
▪ If a long keepalive time and hold time are set, the number of
keepalive messages on the network and the load on devices are
reduced, and less network bandwidth is consumed. However, if the
keepalive time is too long, BGP cannot detect link status changes in a
timely manner, which slows down BGP network convergence and may
cause many packets to be lost.
– MD5 encryption: Specify whether to use MD5 authentication between
BGP peers. If MD5 encryption is used, a ciphertext password must be
specified. The MD5 authentication configuration and the ciphertext
password must be the same as the BGP configuration of the peer device.
Otherwise, the BGP peer relationship fails to be established.
– Routing policy: You can configure route filtering to control the
advertisement and receiving of BGP routes. This parameter is available
only to AR routers.
▪ IP address prefix list: Plan the IP address prefixes in the blacklist and
whitelist for filtering. You can specify the destination IP address/mask
and mask range for filtering. Multiple network segments can be
configured.
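The timer and AS number rules above can be checked with the following sketch (hypothetical helper names, not product APIs).

    # Illustrative planning checks for the BGP peer parameters above.
    def check_bgp_timers(keepalive=60, holdtime=180):
        problems = []
        if holdtime < 3 * keepalive:
            problems.append("hold time should be at least three times the keepalive time")
        return problems

    def peer_as_matches(configured_peer_as, peer_actual_as):
        # The configured peer AS must equal the AS number the peer really uses;
        # otherwise the BGP peer relationship cannot be established.
        return configured_peer_as == peer_actual_as

    print(check_bgp_timers())             # [] - the defaults 60/180 satisfy the rule
    print(check_bgp_timers(60, 120))      # flags the hold time
    print(peer_as_matches(65001, 65002))  # False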
The LAN-side service planning and deployment of the headquarters campus and branch
campus are the same as those in the single-campus scenario. For details, see the [Huawei
CloudCampus Solution] Small- and Medium-Sized Campus Network Deployment Guide,
[Huawei CloudCampus Solution] Large- and Medium-Sized Campus Network Deployment
Guide (Virtualization Scenario), and [Huawei CloudCampus Solution] Large- and Medium-
Sized Campus Network Deployment Guide (Non-virtualization).
Functions
The EVPN Interconnection Solution provides the following Internet access modes:
● Local Internet access: The Internet access traffic of a site is routed from the
local Internet link to the Internet. In local Internet access mode, NAT in Easy
IP mode is provided. You can determine whether to enable the NAT function
based on the outbound interface. After NAT is enabled, the system uses the IP
address of the outbound interface as the public IP address after NAT is
performed and translates the IP address of the traffic passing through the
interface.
Local Internet access is applicable to small-scale enterprises or scenarios
where centralized security control is not required for Internet access traffic
and links for accessing the Internet are available on the WAN side.
● Centralized Internet access: All sites in an enterprise access the Internet
through a centralized Internet gateway.
Centralized Internet access is applicable to scenarios where the site does not
have links for accessing the Internet or where Internet access traffic needs to
be centrally controlled. In this mode, a centralized Internet gateway is
configured. Traffic from other sites is forwarded to the centralized Internet
gateway through the overlay network to access the Internet.
● Hybrid Internet access: The system allows some applications to access the
Internet in local Internet access mode and other applications to access the
Internet in centralized Internet access mode. If local Internet access with the
default policies (Policy is set to All) is used and centralized Internet access is
enabled, local Internet access is preferred. If the local link is faulty, the
centralized Internet access mode is used. In hybrid Internet access mode, the
NAT function in Easy IP mode can also be enabled on the outbound interface
for local Internet access.
Hybrid Internet access is applicable to scenarios where Internet access traffic
needs to be managed centrally but the traffic of specified services (such as
Office 365) is routed out from the local site to minimize the access delay.
NAT ALG
Generally, a site uses a private IP address to access the Internet through the
gateway. To implement this, the NAT function needs to be enabled to translate a
private IP address into a public IP address. NAT translates only addresses in IP
packet headers and ports in TCP/UDP headers. For some special protocols such as
FTP, IP addresses or port numbers may be contained in the data field of the
protocol packets. NAT cannot translate such IP addresses or port numbers. A good
way to solve the NAT issue for these special protocols is to use the application
level gateway (ALG) function.
Currently, the following protocols support the NAT ALG function: DNS, FTP, SIP,
PPTP, and RTSP.
▪ All: All Internet access traffic from the LAN side of a site is routed
out by matching the WAN-side routes.
– Enable VAS: If the Internet access traffic of a site needs to be
diverted to ensure service security, you need to enable the VAS and
configure the VAS connection. For details, see "Third-party security device
connected through VAS" in 2.2.2.4.2 Firewall. If you switch on Enable
VAS, Policy must be set to All.
● Hybrid Internet access
– Centralized Internet gateway: For details, see Centralized Internet
access.
– Local Internet access: For details, see Local Internet access. The
following parameters for hybrid Internet access are different from those
for local Internet access:
traffic from the EVPN site to the underlay MPLS network. Users at the EVPN site
can directly communicate with users at the legacy site through the underlay
network. This scenario is suitable for small- and medium-sized enterprises that
build EVPN interconnection networks by themselves without a dedicated IWG. The
network model is simple, and configuration and maintenance are easy.
In this scenario, the following traffic models can be used for mutual access,
depending on service requirements:
● Distributed local access
This model can be used if all EVPN sites can access legacy underlay MPLS
network sites through local breakout. In this model, traffic of each site can be
offloaded locally.
● Centralized access
If some EVPN sites cannot access legacy sites through local breakout, you can
select one site that can communicate with the legacy sites as the centralized
access site. Traffic from other sites is sent to the centralized access site
through overlay tunnels, and then forwarded to the legacy sites through local
breakout.
● Hybrid access
If a centralized access site is deployed on the EVPN network, it can provide
the centralized access function for the sites that cannot access the legacy
network through local breakout. In addition, the distributed access function
can be configured for sites that support local breakout. Then traffic of these
distributed sites is preferentially forwarded to the legacy underlay MPLS sites
through local breakout. If the local link for accessing the MPLS network is
faulty, traffic can be transmitted to the centralized access site through the
overlay tunnel of other links, and then forwarded to the legacy site through
the centralized access site. This improves transmission reliability for traffic.
Figure 2-9 shows the hybrid access mode. EVPN Site3 and EVPN Site1
communicate with the MPLS network. EVPN Site3 functions as the IWG. EVPN
Site2 communicates with the MPLS network through EVPN Site3. The hybrid
access mode is configured for EVPN Site1. Traffic destined for the legacy
MPLS sites is preferentially forwarded to the underlay MPLS through local
breakout. If the MPLS link is faulty, the Internet link is used to communicate
with legacy sites through EVPN Site3.
▪ If the links have the same priority, they work in load balancing mode.
– Bandwidth allocation: Specify the proportion of local breakout traffic to
the available bandwidth that has been allocated to overlay services in the
VN. If the available bandwidth for overlay services accounts for 30% of
the total bandwidth for the VN and 10% of the bandwidth is allocated to
the local breakout traffic, the available bandwidth of the local breakout
traffic accounts for 3% of the total bandwidth of the WAN link. That is, if
the total bandwidth of the interface is 100 Mbit/s, the bandwidth for
local breakout traffic is 3 Mbit/s.
● Local access
– Site: Plan the site that uses the local access mode to communicate with
legacy sites.
– IGW: Specify whether the IGW functions as the gateway for legacy sites
to access the Internet. If legacy sites access the Internet through the IGW,
you need to enable the IGW function of the sites.
– WAN link: Specify the WAN link name in a WAN link template to select
the WAN link used for MPLS network access. For sites using the same
WAN link template, only the same WAN link can be used for Internet
access.
– Link priority: Plan the priority of a WAN link. If multiple WAN links are
available for MPLS network access, you can configure the link priorities
so that the WAN links can work in active/standby mode. The link priority
is in the range from 1 to 3. A smaller value indicates a higher priority.
You can configure multiple links to work in active/standby mode or load
balancing mode by configuring the priority.
▪ If the links have the same priority, they work in load balancing mode.
– Bandwidth allocation: Specify the proportion of local breakout traffic to
the available bandwidth that has been allocated to overlay services in the
VN. If the available bandwidth for overlay services accounts for 30% of
the total bandwidth for the VN and 10% of the bandwidth is allocated to
the local breakout traffic, the available bandwidth of the local breakout
traffic accounts for 3% of the total bandwidth of the WAN link. That is, if
the total bandwidth of the interface is 100 Mbit/s, the bandwidth for
local breakout traffic is 3 Mbit/s.
● Hybrid access
– Centralized access: For details, see Centralized access.
– Local access: For details, see Local access.
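The bandwidth-allocation arithmetic described above (for both centralized and local access) is a straightforward product of shares, as the worked example below shows. The function name is illustrative only.

    # Worked example of the local breakout bandwidth calculation from the text.
    def breakout_bandwidth_mbps(link_total_mbps, overlay_percent, breakout_percent):
        """Bandwidth available to local breakout traffic on the WAN link, in Mbit/s."""
        return link_total_mbps * overlay_percent * breakout_percent / 10000

    # 30% of the link is available to overlay services in the VN, and 10% of that
    # share is allocated to local breakout traffic:
    print(breakout_bandwidth_mbps(100, 30, 10))  # 3.0 -> 3 Mbit/s on a 100 Mbit/s interface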
and security. Service policies can be applied in subsequent service processes only
after applications are identified.
Figure 2-10 shows the application identification feature in the EVPN
Interconnection Solution. The following modes are supported: first packet
inspection (FPI) and service awareness (SA).
● FPI
FPI can identify the application type at the first data flow of an application. It
can quickly identify applications and is mainly used for SaaS applications with
fixed destination addresses or port numbers.
● SA
SA performs deep packet analysis and accurately identifies common
applications based on the characteristics in application payloads.
When a packet reaches the application identification module, FPI is performed. If
an application can be identified through the first packet, SA is no longer
performed. If the application fails to be identified, SA is performed.
For the FPI and SA, the FPI signature database and SA signature database are
preconfigured on CPEs. The CPEs can identify common applications based on the
application definition (port, signature, and behavior) in the signature database. In
addition, the FPI and SA also support customized applications, so that users can
customize special applications.
FPI
FPI is realized by matching the first packet through 3-tuple information of the
packet, or the SA cache. The application is matched based on L3-L4 information of
the packet. Therefore, if multiple applications have the same L3-L4 information,
the applications may be incorrectly identified. In addition, the FPI process is
simple, so the processing performance of FPI is higher than that of SA.
Application Scenario
1. NAT must be configured on the path through which the application passes. If
the application cannot be identified, it may be discarded after traffic steering
because NAT is not configured for SYN and ACK packets.
2. With the pervasive use of cloud services, customers want to send SaaS and trusted
network traffic directly from branches to the Internet instead of forwarding
data through DCs. This improves the bandwidth utilization and reduces the
transmission delay and costs.
3. When enterprises use their own applications or applications that run on the
Internet, the Internet traffic is known and trusted, but other HTTP/HTTPS
traffic is unknown or suspicious. If the specific application cannot be identified
through the first data packet, all HTTP/HTTPS traffic must be sent to the
Internet or to the security web gateway or headquarters for further check
through the enterprise firewall and IDS/IPS resources.
Applications can be identified in user-defined mode or through the pre-defined FPI
signature database.
● User-defined mode: Applications can be customized in 3-tuple mode. When 3-
tuple information is predefined, the source and destination do not need to be
specified. The system first matches an application based on the destination IP
address, destination port number, and protocol type. If no match is found, the
system matches the application based on the source IP address, source port
number, and protocol type. If no match is found, the system matches the
application based on SA. Some common applications are predefined in the
system based on the protocol number, port number, or domain name.
● FPI signature database: The FPI function works in association with DNS. When
a client initiates a page access request, a DNS request is sent, requesting to
access the specific IP address. The DNS server sends back a DNS response
packet. When the packet traverses the CPE, the CPE parses it to obtain the IP
address. The application ID, port number, and protocol number are queried in
the FPI signature database based on the URL. The triplet information is then
associated with the IP address, and a DNS association entry is generated.
After receiving the DNS response packet, the client requests to access the
application. Then, when the packet traverses the CPE, the application is
identified based on the DNS association entry.
NOTE
FPI based on the domain name is not supported in the web proxy scenario. To use the FPI
function, add domain names to the proxy access exception list.
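The matching order for user-defined FPI rules can be sketched as follows. The rule tables and field names are illustrative only; they are not the CPE data structures.

    # Sketch of the user-defined 3-tuple matching order: destination first, then source.
    def identify_first_packet(packet, dst_rules, src_rules):
        dst_key = (packet["dst_ip"], packet["dst_port"], packet["protocol"])
        if dst_key in dst_rules:
            return dst_rules[dst_key]
        src_key = (packet["src_ip"], packet["src_port"], packet["protocol"])
        if src_key in src_rules:
            return src_rules[src_key]
        return None  # not identified by FPI; SA identification is performed next

    dst_rules = {("198.51.100.10", 443, "TCP"): "Custom-ERP"}
    packet = {"dst_ip": "198.51.100.10", "dst_port": 443, "protocol": "TCP",
              "src_ip": "192.0.2.5", "src_port": 51514}
    print(identify_first_packet(packet, dst_rules, {}))  # 'Custom-ERP'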
SA
Different applications use different protocols, each with its own characteristics,
called signatures, which can be a specific port, a character string, or a bit
sequence. SA determines an application by detecting characteristic codes in data
packets. Signatures of some protocols are contained in multiple packets, and
therefore the device must collect and analyze multiple packets to identify the
protocol type. The system analyzes service flows passing through a device, and
compares the analysis result with the signature database loaded to the device. It
identifies an application by detecting signatures in data packets, and implements
refined policy control based on the identification result.
Applications can be identified in customized mode or through the SA signature
database predefined on the CPE.
● User-defined mode: Applications are identified based on URLs or keywords.
On the CPE, rules can be created through triplet, keywords, or both triplet and
keywords. The triplet refers to the server IP address, protocol type, and port
number. The keywords are signatures of a data packet or a data flow
corresponding to the application and uniquely identify the application.
● SA signature database: Applications are identified based on the SA signature
database. The SA signature database can have 500+ or 6000+ records,
depending on the device type. The SA signature database can be upgraded
through Huawei Security Center Platform. The SA signature database needs to
be updated frequently because applications on the live network change
rapidly. Otherwise, some applications may fail to be identified.
● Application group
You can select an application group in a traffic classifier to identify
applications. Only an application group can be selected. Applications that are
not added to the application group are not displayed. You cannot select only
some applications in an application group. You need to plan application
groups properly.
– SA signature database: The SA signature database can have 500+ or
6000+ records, depending on the device type. The SA signature database
can be upgraded through Huawei Security Center Platform.
– AND
▪ If the traffic classifier does not contain any ACL rule, packets match
the traffic classifier only when they match all the non-ACL rules.
– OR: Packets match the traffic classifier if they match one or more rules in
the classifier.
● L3 ACL: Define multiple ACL rules. Packets that meet specified conditions are
allowed to pass.
– Priority: Specify the priority of an ACL rule. Packets preferentially match
the Layer 3 ACL rule with a higher priority.
– Source IP address: Plan the source IP address of packets matching an ACL
rule. If no source IP address is specified, packets with any source IP
address are allowed to pass.
– Destination IP address: Plan the destination IP address of packets
matching an ACL rule. If no destination IP address is specified, packets
with any destination IP address are allowed to pass.
– DSCP: Specify the Differentiated Services Code Point (DSCP) of packets
matching an ACL rule.
– Protocol: Specify the protocol type of packets matching an ACL rule.
– Source port: Specify the source port of the UDP or TCP packets matching
an ACL rule. This parameter is valid only when the protocol of packets is
TCP or UDP. If no source port is specified, TCP or UDP packets with any
source port are matched.
– Destination port: Specify the destination port of the UDP or TCP packets
matching an ACL rule. This parameter is valid only when the protocol of
packets is TCP or UDP. If no destination port is specified, TCP or UDP
packets with any destination port are matched.
● Application: Select an application group that matches packets.
You can select an application group in a traffic classifier to identify
applications. Only an application group can be selected. Applications that are
not added to the application group are not displayed. You cannot select only
some applications in an application group. You need to plan application
groups properly.
● Advanced settings: Take effect only on policies on inbound interfaces.
– VLAN ID: Specify the start outer VLAN ID and end outer VLAN ID of
packets to be matched.
– 802.1p: Specify the 802.1p priority of packets to be matched.
– Source MAC address: Specify the source MAC address of packets to be
matched.
– Destination MAC address: Specify the destination MAC address of packets
to be matched.
– L2 protocol: Specify the Layer 2 protocol type of packets to be matched.
● Time type
– Periodic time range: Define a periodic time range based on days or
weeks. The associated traffic policy takes effect at an interval of one day
or week. For example, if the time range of a traffic policy is 08:00-12:00
every day or on every Monday, the traffic policy takes effect at
08:00-12:00 every day or on every Monday.
– Absolute time range: Define a time range from YYYY/MM/DD hh:mm:ss
to YYYY/MM/DD hh:mm:ss. The associated traffic policy takes effect only
within this period.
● Start time: Specify the time when the traffic policy starts to take effect.
● End time: Specify the time when the traffic policy stops taking effect.
Intelligent Traffic Steering
● VN: Plan service policies for sites in each VN if multiple VNs are configured.
You need to first select the VN for which the policy needs to be configured.
● Traffic classifier: Select a traffic classifier to specify traffic to which intelligent
traffic steering needs to be applied.
NOTE
Intelligent traffic steering does not support the traffic classifier with advanced settings
or operation type being set to OR.
● Policy priority: Set the priority of an intelligent traffic steering policy. For the
same traffic, the intelligent traffic steering policy with the highest priority is
preferentially matched.
● Switchover condition: Switchover conditions refer to the delay, jitter, and
packet loss rate of a link. When the traffic or application quality does not
meet the conditions, traffic or applications are switched to another link.
By default, switchover conditions of voice, real-time video, low-delay data,
and large-capacity data services are defined. You can also set the delay, jitter,
and packet loss rate to customize switchover conditions.
● Transport network priority: Set the primary and secondary transport networks.
– Primary transport network: Configure multiple transport networks as
primary transport networks. A maximum of eight transport networks can
be configured.
▪ For transport networks with the same priority, you are advised to set
Policy between TN to Loadbalance.
▪ Discard: If the traffic does not meet the conditions, packets are
discarded.
▪ ECMP: If the traffic does not meet the conditions, packets are
forwarded continuously.
– Switchover mode: Specify whether traffic can be switched back to the
original link if the quality of the original link recovers after link
switchover occurs. The default value is Pre-emptive.
The link switchover consists of the switchover between primary transport
networks with different priorities and the switchover between primary
and secondary transport networks.
This parameter can be set for high-priority applications only when the
bandwidth of the primary link on which high-priority applications are
located is sufficient.
● Advanced settings: Set bandwidth conditions list, priority, and other
parameters. The system determines whether to switch traffic to another link
based on the current bandwidth usage, application priority, and switchover
threshold, and then determines the application traffic to be switched based
on the application priority.
– Switch upper/lower limit: Select links to transmit traffic based on the
bandwidth usage in addition to delay, jitter, and packet loss rate.
▪ If the link bandwidth usage is lower than the switch lower limit, all
application traffic, including new application traffic, is forwarded
through the current transport network.
▪ If the link bandwidth usage is greater than the switch lower limit and
lower than the switch upper limit, only the existing application traffic
is forwarded through the current transport network, and new
application traffic cannot be transmitted.
▪ If the link bandwidth usage is greater than the switch upper limit,
some existing application traffic is switched to another transport
network for transmission, and new application traffic cannot be
transmitted.
It is recommended that this parameter be configured only when the
bandwidth is sufficient. A sketch of this switchover logic is provided after
this parameter list.
– Bandwidth conditions list: Configure bandwidth switchover conditions for
a transport network by specifying the link bandwidth (bandwidth upper/
lower limit) and application bandwidth (maximum/minimum bandwidth).
– Priority: Specify the application priority. The default value is 8. Configure
multiple intelligent traffic steering policies to match different applications
and configure different application priorities to implement application-
based traffic steering based on application priorities.
– Gateway prioritization: After the IGW service is enabled, if two CPEs that
belong to different transport networks need to communicate with each
other through the IGW, you can enable the gateway prioritization
function. By default, this function is disabled. Overlay tunnels are
established between sites based on the configured topology so that the
sites can directly communicate with each other. After gateway
prioritization is enabled, CPEs preferentially communicate with each other
through the gateway by default and learn the routes of other sites
through the gateway.
● Effective time template: Select an effective time template to specify the time
range for the intelligent traffic steering policy to take effect.
● Site: Select a site with which an intelligent traffic steering policy is to be
associated. The policy takes effect only on the selected site.
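The following Python sketch illustrates how the switchover conditions and the switch upper/lower limits described above interact when a link is selected. The threshold values, metric names, and default action are assumptions chosen for illustration only; this is not the controller's or CPE's actual implementation.

from dataclasses import dataclass

@dataclass
class LinkState:
    name: str
    priority: int          # lower value = preferred transport network (assumption)
    delay_ms: float
    jitter_ms: float
    loss_pct: float
    bw_usage_pct: float    # current bandwidth usage of the link

# Assumed switchover condition, e.g. for a voice-like service class.
SLA = {"delay_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}
SWITCH_LOWER, SWITCH_UPPER = 70, 90   # assumed switch lower/upper limits (%)

def meets_sla(link: LinkState) -> bool:
    return (link.delay_ms <= SLA["delay_ms"]
            and link.jitter_ms <= SLA["jitter_ms"]
            and link.loss_pct <= SLA["loss_pct"])

def classify_bandwidth(link: LinkState) -> str:
    """Map bandwidth usage to the three cases described in the advanced settings."""
    if link.bw_usage_pct < SWITCH_LOWER:
        return "all"            # existing and new application traffic allowed
    if link.bw_usage_pct <= SWITCH_UPPER:
        return "existing_only"  # only existing traffic stays on this link
    return "offload"            # some existing traffic is moved to another link

def pick_link(links: list[LinkState], new_flow: bool) -> LinkState | None:
    """Pick the highest-priority link that meets the SLA and the bandwidth rule."""
    for link in sorted(links, key=lambda l: l.priority):
        if not meets_sla(link):
            continue
        state = classify_bandwidth(link)
        if new_flow and state != "all":
            continue
        if not new_flow and state == "offload":
            continue
        return link
    return None   # no link qualifies; the policy then decides (e.g. Discard or ECMP)

if __name__ == "__main__":
    links = [
        LinkState("MPLS", 1, delay_ms=40, jitter_ms=5, loss_pct=0.1, bw_usage_pct=95),
        LinkState("Internet", 2, delay_ms=80, jitter_ms=20, loss_pct=0.5, bw_usage_pct=40),
    ]
    chosen = pick_link(links, new_flow=True)
    print("New flow steered to:", chosen.name if chosen else "no qualified link")

In the real solution, the switchover mode (pre-emptive or not) and the application priority further decide which existing flows are moved when the switch upper limit is exceeded; those aspects are not modeled here.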
2.2.2.3.3 QoS
QoS is a mainstream function that implements differentiated services. Data
packets are classified into different priorities or multiple CoSs through traffic
classification. These priorities and CoSs are the prerequisite and basis of the
DiffServ model. Different traffic policies can be configured based on packet
priorities and CoSs to provide different services.
● Queue priority
Traffic classification is used to specify different QoS priorities for services.
Based on QoS priorities, services are forwarded through queues with different
priorities for differentiated QoS. If bandwidth resources are insufficient, the
forwarding bandwidth of high-priority services is preferentially guaranteed.
● Traffic policing
Traffic policing controls traffic by monitoring the bandwidth occupied by
service traffic, and discards excess traffic to limit the bandwidth within a
proper range, ensuring appropriate bandwidth resource allocation.
● Traffic shaping
Traffic shaping is a measure to adjust the traffic rate sent from an interface. If
traffic congestion occurs due to burst traffic, traffic shaping is performed to
ensure that irregular traffic is transmitted at an even rate, preventing traffic
congestion on the network.
● Bandwidth allocation
HQoS uses multi-level queues to implement bandwidth allocation between
VNs and within a VN. The bandwidth of a physical link is divided into
bandwidths of multiple logical links, and the bandwidth of each logical link is
used by different VNs. The bandwidth of the logical link used by each VN can
specify bandwidths of the overlay network and the local breakout network.
The bandwidth of the overlay network is used for communication between
the hub site, aggregation site, and branch site. The bandwidth of the local
breakout network is used for local access to the Internet or interconnection
between local and legacy sites.
● DSCP re-marking
– After the DSCP re-marking function is configured on the LAN interface,
the DSCP value in the IP header of a packet entering the CPE is modified.
If the packet enters the overlay tunnel for forwarding, the DSCP value in
the outer IP packet header is copied from the DSCP value in the inner IP
packet header by default. As a result, the DSCP values in both the inner and
outer IP packet headers carry the re-marked value. Based on these re-marked
values, traffic policies
can be deployed on the WAN-side overlay network to implement service
management and scheduling.
– If the DSCP re-marking function is configured on the WAN interface, the
DSCP value in the IP header of a packet sent by the outbound interface
on the underlay network is modified. If the IP packet header of the
overlay tunnel is added to the packet, only the DSCP value in the outer IP
packet header is modified. As a result, the DSCP values in the inner and outer
IP packet headers may differ, and the outer DSCP value is the re-marked
value.
– If the DSCP re-marking function is configured on both LAN and WAN
interfaces, the DSCP value in the IP header of a packet entering the CPE
is modified. If the packet is sent through the outbound interface on the
underlay network, the DSCP value in the outer IP packet header is
modified again. As a result, for a packet encapsulated in an overlay tunnel,
the DSCP value in the inner IP packet header is re-marked on the LAN
interface and the DSCP value in the outer IP packet header is re-marked on
the WAN interface. For a local breakout packet, the DSCP value in the IP
packet header is re-marked on the WAN interface (see the sketch below).
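As a plausibility check of the three re-marking cases above, the following sketch models how the inner and outer DSCP values of a packet end up depending on where re-marking is configured. The packet model and the DSCP values (46, 26) are illustrative assumptions, not values mandated by the solution.

def dscp_after_remark(original_dscp: int,
                      lan_remark: int | None,
                      wan_remark: int | None,
                      overlay: bool = True) -> dict:
    """Return the resulting DSCP values after LAN- and/or WAN-side re-marking.

    - LAN re-marking changes the DSCP of the packet entering the CPE.
    - On overlay encapsulation, the outer DSCP is copied from the inner DSCP.
    - WAN re-marking changes only the outer DSCP of the encapsulated packet
      (or the single DSCP of a local breakout packet).
    """
    inner = lan_remark if lan_remark is not None else original_dscp
    if not overlay:
        # Local breakout: only one IP header exists.
        return {"dscp": wan_remark if wan_remark is not None else inner}
    outer = inner                      # copied on encapsulation by default
    if wan_remark is not None:
        outer = wan_remark
    return {"inner": inner, "outer": outer}

# LAN only: both headers carry the LAN re-marked value.
print(dscp_after_remark(0, lan_remark=46, wan_remark=None))   # {'inner': 46, 'outer': 46}
# WAN only: the inner header keeps its value, only the outer header is re-marked.
print(dscp_after_remark(0, lan_remark=None, wan_remark=26))   # {'inner': 0, 'outer': 26}
# LAN + WAN: inner from the LAN interface, outer from the WAN interface.
print(dscp_after_remark(0, lan_remark=46, wan_remark=26))     # {'inner': 46, 'outer': 26}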
You can create VN QoS groups and add multiple VNs to a VN QoS group.
Bandwidth allocation and QoS policy configuration can be performed based on VN
QoS groups. All VNs in a VN QoS group share bandwidth resources of the group.
Bandwidth Allocation
● Traffic distribution policy name: Specify the name of a policy. Multiple traffic
distribution policies can be configured.
● VN bandwidth: Specify the bandwidth ratio of each VN or VN QoS group. For
example, the bandwidth ratios of VN1, VN2, VN_Group1, and remaining
bandwidth can be set to 30%, 20%, 30%, and 20%, respectively (see the
sketch after this list).
● Local breakout bandwidth ratio: Plan the local breakout bandwidth ratio if a
site accesses the Internet or communicates with a traditional site. For details,
see Bandwidth Allocation in "Data Planning and Design" in 2.2.2.1 Internet
Access and 2.2.2.2 Interworking with Legacy Sites.
● Site: Plan a site where the traffic distribution policy is applied and specify
different traffic distribution policies for different sites. One traffic distribution
policy can only be applied to one site.
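To make the ratio planning above concrete, the following sketch computes the bandwidth each VN or VN QoS group would receive on a link of a given size. The 100 Mbit/s link size and the ratios reused from the example are assumptions for illustration only.

def allocate_bandwidth(link_mbps: float, ratios: dict[str, float]) -> dict[str, float]:
    """Split a physical link's bandwidth among VNs/VN QoS groups by percentage."""
    if sum(ratios.values()) > 100:
        raise ValueError("the configured ratios exceed 100% of the link")
    return {name: link_mbps * pct / 100 for name, pct in ratios.items()}

# Ratios from the example: VN1 30%, VN2 20%, VN_Group1 30%, remaining 20%.
print(allocate_bandwidth(100, {"VN1": 30, "VN2": 20, "VN_Group1": 30, "remaining": 20}))
# {'VN1': 30.0, 'VN2': 20.0, 'VN_Group1': 30.0, 'remaining': 20.0}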
Before configuring a traffic policy, you need to create policy behavior templates,
including the redirection and QoS policy templates. QoS policy templates are
classified into WAN policy behavior templates and LAN policy behavior templates
on different interfaces based on their functions.
For details, see Data Planning and Design in 2.2.2.3.2 Intelligent Traffic Steering.
QoS
● Overlay network QoS: The same QoS policy is configured for one or more
sites in each VN. Different QoS policies can be deployed for the same site in
different VNs.
● Common QoS: QoS policies are configured based on CPEs at a site to specify
whether to apply a QoS policy to the LAN side (inbound direction) or WAN
side (outbound direction) of the CPE and specify to which interface the policy
is applied.
Table 2-8 Key configuration items of a QoS policy for the overlay network
● VN/VN QoS Group: Plan service policies for sites in each VN if multiple VNs are
configured. You need to first select the VN for which the policy needs to be
configured.
● Policy priority: Set the QoS policy priority. For the same traffic, the QoS policy
with the highest priority is preferentially matched.
● LAN policy behavior
  – Re-mark DSCP: Set the DSCP value. The IP DSCP value of traffic entering the
    CPE will be changed to this value.
  – Enable Statistic: To view packet statistics collected before a traffic policy is
    applied, enable LAN-side traffic statistics collection and view the statistics on
    the CPE.
● WAN policy behavior in the outbound direction
  – Queue priority: Set the priority of the queue into which traffic is to be placed.
    The value can be Highest (LLQ queue), High (EF queue), or Medium (AF
    queue). After traffic is placed into a queue, the guaranteed bandwidth of the
    queue can be assured for the traffic. Traffic that does not match the preceding
    policy enters the BE queue (with a low priority).
    The guaranteed bandwidth can be set to a specific bandwidth value or a
    percentage. The percentage is set based on the available bandwidth of a
    department (VN). If the guaranteed bandwidth is set to a specific bandwidth
    value, the value cannot exceed the available bandwidth.
    For example, if the bandwidth of a WAN interface is 100 Mbit/s and the
    bandwidth available to VN1 is 50 Mbit/s, value 20% of this parameter
    indicates that packets matching the traffic classifier can occupy 10 Mbit/s
    bandwidth (50 Mbit/s x 20%).
  – Limit bandwidth: This parameter can be set to a specific bandwidth value or a
    percentage. If this parameter is set and traffic exceeds the specified value,
    excess traffic is cached and sent later (when traffic shaping is configured) or
    immediately discarded (when traffic policing is configured). The percentage is
    set based on the available bandwidth of a department (VN). If this parameter
    is set to a specific bandwidth value, the value cannot exceed the available
    bandwidth. Theoretically, the bandwidth limit must be greater than the
    guaranteed bandwidth.
  – Re-mark DSCP: Specify the DSCP priority. The CPE changes the DSCP priority
    in the outer IP header to this value. For details, see DSCP re-marking.
● WAN policy behavior in the inbound direction
  – Cir Limit bandwidth: Limit the average rate of packets that can flow into a
    WAN interface. This parameter can be set to a specific bandwidth value or a
    percentage. The percentage is set based on the available bandwidth of a
    department (VN). The available bandwidth cannot be exceeded when this
    parameter is set to a specific bandwidth value.
  – Pir Limit bandwidth: Limit the peak rate of packets that can flow into a WAN
    interface. You can set this parameter to a specific bandwidth value or a
    percentage. The percentage is set based on the available bandwidth of a
    department (VN). If this parameter is set to a specific bandwidth value, the
    value cannot exceed the available bandwidth.
● Effective time template: Select an effective time template to specify the time
range for the QoS policy to take effect.
The key configuration items of a common QoS policy are as follows:
● Policy priority: Set the QoS policy priority. For the same traffic, the QoS policy
with the highest priority is preferentially matched.
● Device: Select the device for which the QoS policy needs to be configured.
● LAN Interface Name: Select the interface for which the QoS policy needs to be
configured.
● WAN Interface Name: Select the interface for which the QoS policy needs to be
configured.
● Limit bandwidth: This parameter can be set to a specific bandwidth value or a
percentage. If this parameter is set and traffic exceeds the specified value, excess
traffic is cached and sent later (when traffic shaping is configured) or
immediately discarded (when traffic policing is configured). The percentage is set
based on the available bandwidth of a department (VN). If this parameter is set
to a specific bandwidth value, the value cannot exceed the available bandwidth.
Theoretically, the bandwidth limit must be greater than the guaranteed
bandwidth.
● Re-mark DSCP: Specify the DSCP priority. The CPE changes the DSCP priority in
the outer IP header to this value. For details, see DSCP re-marking.
● Effective time template: Select an effective time template to specify the time
range for the QoS policy to take effect.
NOTE
A common QoS policy cannot be configured together with a bandwidth allocation policy or
an overlay network QoS policy. That is, if a bandwidth allocation policy or overlay network
QoS policy is configured, a common QoS policy cannot be configured. If a common QoS
policy is configured, bandwidth allocation and overlay network QoS policies cannot be
configured.
Dynamic NAT
In this type of NAT, the source IP address (a private IP address) in a packet header
is translated into a public IP address on an outbound interface to which the NAT
policy is applied.
Dynamic NAT can be implemented in three modes: Easy IP, Port Address
Translation (PAT), and No-Port Address Translation (No-PAT).
● In Easy IP mode, the IP address of the WAN interface on the router is used as
the translated public IP address. This mode is applicable to scenarios where a
small number of hosts are deployed on the intranet and the outbound
interface obtains a temporary public IP address through dial-up so that
intranet hosts can access the Internet. Easy IP translates multiple private IP
address and port number combinations into the single public IP address of
the outbound interface, using different port numbers, so that multiple users
on the private network can access the Internet (see the sketch after this list).
● PAT is similar to Easy IP. The difference between PAT and Easy IP lies in that
PAT specifies a public IP address pool and translates multiple combinations of
IP addresses and port numbers into the fixed public IP addresses and port
numbers in the public IP address pool. This mode is applicable to scenarios
where there are many hosts on the intranet, the outbound interfaces have
fixed public IP addresses, and multiple public IP addresses are available for
hosts on the intranet to access the Internet.
● No-PAT specifies a public IP address pool and maps one private IP address
into one public IP address, without translation of the TCP or UDP port
number. Different from Easy IP and PAT, No-PAT does not allow multiple
private network users to use the same public IP address to access the Internet.
Instead, intranet users must use different public IP addresses to access the
Internet. Therefore, this mode is seldom used.
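The difference between the three dynamic NAT modes can be illustrated with a small sketch of the resulting translation entries. The addresses, port ranges, and address pool below are assumptions chosen only to show the mapping behavior, not values from this guide.

import itertools

PRIVATE_FLOWS = [("10.1.1.10", 1024), ("10.1.1.11", 1024), ("10.1.1.12", 2048)]

def easy_ip(flows, wan_ip="203.0.113.1"):
    """Easy IP: all flows share the WAN interface address, distinguished by port."""
    ports = itertools.count(10000)
    return {f: (wan_ip, next(ports)) for f in flows}

def pat(flows, pool=("198.51.100.1", "198.51.100.2")):
    """PAT: flows share the addresses of a public address pool, distinguished by port."""
    ports = itertools.count(10000)
    return {f: (pool[i % len(pool)], next(ports)) for i, f in enumerate(flows)}

def no_pat(flows, pool=("198.51.100.1", "198.51.100.2", "198.51.100.3")):
    """No-PAT: one private address maps to one public address, port unchanged."""
    hosts = sorted({ip for ip, _ in flows})
    if len(hosts) > len(pool):
        raise ValueError("not enough public addresses for No-PAT")
    host_map = dict(zip(hosts, pool))
    return {(ip, port): (host_map[ip], port) for ip, port in flows}

for mode in (easy_ip, pat, no_pat):
    print(mode.__name__, mode(PRIVATE_FLOWS))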
Static NAT
In this type of NAT, the IP address of an intranet host is statically bound to one
public IP address, which can be used only by this intranet host to access the
Internet. Static NAT supports two modes: protocol translation and address
translation.
Application Scenarios
NAT policies are typically used in the following scenarios:
● Intranet users access the Internet.
When an intranet user accesses the Internet, the source IP address is a private
IP address and the destination IP address is a public IP address. On the
outbound interface of the CPE, the LAN-side private IP address needs to be
translated into a public IP address. Generally, a dynamic NAT policy is used to
translate the source private IP address into a public IP address. In response
packets from the public network, the public destination IP address is
translated into a private IP address based on NAT entries generated in the
outbound direction. The response packets then are sent to intranet users.
Application Optimization
Huawei SD-WAN Solution uses forward error correction (FEC) technology to
perform optimization for scenarios with voice or video packet loss. FEC uses a
proxy to obtain data flows with the specified 5-tuple information, adds verification
information to packets, and performs verification at the receive end. If packet loss
or packet damage occurs on the network, the system attempts to restore packets
by decoding the verification information.
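The recovery principle can be illustrated with a minimal XOR-parity sketch: the sender appends one redundancy packet per block, and the receiver can rebuild any single lost packet of that block. This is a simplified model for illustration only; the actual FEC scheme, block sizes, and encoding used by the solution are not specified here.

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(block: list[bytes]) -> list[bytes]:
    """Sender side: append one XOR parity packet to a block of equal-size packets."""
    parity = reduce(xor_bytes, block)
    return block + [parity]

def recover(received: list) -> list:
    """Receiver side: rebuild a single missing packet (None) from the rest plus parity."""
    missing = [i for i, pkt in enumerate(received) if pkt is None]
    if len(missing) > 1:
        raise ValueError("XOR parity can recover at most one lost packet per block")
    if missing:
        present = [pkt for pkt in received if pkt is not None]
        received[missing[0]] = reduce(xor_bytes, present)
    return received[:-1]          # strip the parity packet before delivery

if __name__ == "__main__":
    block = [b"VID1", b"VID2", b"VID3"]
    sent = add_parity(block)
    sent[1] = None                # simulate loss of the second packet on the WAN
    print(recover(sent))          # [b'VID1', b'VID2', b'VID3']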
FEC is typically used in video surveillance and video conferencing scenarios.
● Video surveillance
Video surveillance is widely used. For example, for surveillance, storage, and
analysis purposes, a large number of cameras are deployed in cities and
connected to data centers, and chain enterprises use video surveillance to
send data back to the headquarters. In such scenarios, traffic of multiple sites
is aggregated to the same site, and only upstream traffic is involved. Packet
loss may occur on intermediate networks with unguaranteed quality, such as
the Internet, and will cause issues such as artifacts and frame freezing,
affecting video quality. In this case, WAN optimization can be deployed at
egresses of camera areas and data centers to ensure video quality.
● Video conferencing
Multiple sites (branches or headquarters) of an enterprise connect to the
servers in a data center. The conference server in the data center forwards
traffic between multiple sites connected to the same data center. In this
scenario, traffic is transmitted from sites to the data center and from the data
center to each site. Packet loss may occur on intermediate networks with
unguaranteed quality, such as the Internet, and will cause issues such as
artifacts and frame freezing, affecting video quality. In this case, WAN
optimization can be deployed at the egress of sites where video terminals are
deployed and the egress of the data center to mitigate packet loss and ensure
video quality.
Function Description
To control the traffic entering a CPE, you can configure ACL rules, classify packets
based on packet information including the source IP address, destination IP
address, source port number, destination port number, and application
information, and then filter packets that match the ACL rules.
In the EVPN Interconnection Solution, the ACL-based traffic filtering function is
implemented through ACL policies. Currently, ACL policies can be deployed on the
WAN or LAN interfaces of a CPE to control the traffic entering it. You can define
the priority of each ACL policy and set related parameters including the filtering
action (permit/deny) and validity time range.
Application Scenarios
ACL rules can be used to accurately identify packets on the network, and ACL
policies can be used to control the traffic entering the CPE and filter specific
traffic.
Figure 2-11 shows the typical application scenario of ACL-based traffic filtering.
● An ACL policy is deployed on the WAN side (1) to prevent specific traffic of
external networks from entering the CPE and the internal network.
● An ACL policy is deployed on the LAN side (2) to block specific traffic
accessing external networks. In addition, an ACL policy can be deployed
independently on each virtual network.
● Policy priority: If multiple ACL policies are applied to a site, the CPE matches the
received packets against the traffic classifiers in these ACL policies in
descending order of priority. If a match is found, the CPE performs traffic
filtering. If no match is found, the CPE continues to match the traffic classifier
in the next ACL policy.
● Interface: The value is LAN, indicating that the ACL policy of the overlay
network is applied to LAN interfaces. You do not need to specify interfaces. By
default, LAN interfaces (including Layer 3 interfaces, sub-interfaces, and
VLANIF interfaces) on the overlay network are included.
● Traffic filtering: Specify the action for traffic, which can be permit or deny.
– Deny: Packets matching a traffic classifier are not allowed to be
forwarded.
– Permit: Packets matching a traffic classifier are forwarded.
● Traffic direction: Specify whether the ACL policy takes effect on the traffic in
the inbound or outbound direction of an interface. Generally, the ACL policy
applied on a LAN-side interface takes effect on the traffic in the inbound
direction of the interface.
For a LAN-side interface, inbound traffic refers to traffic that enters a CPE
from an intranet host, and outbound traffic refers to traffic that is sent from a
CPE to an intranet host.
● Effective time template: Specify the time range in which the ACL policy takes
effect. If no time range is specified, the ACL policy takes effect at any time.
For details about the effective time template, see the description in "Data
Planning and Design" in 2.2.2.3.2 Intelligent Traffic Steering.
Planning of sites and interfaces to which the ACL policy is applied:
Specify the site where the ACL policy is to be applied. You can specify one or all
LAN-side interfaces of the selected site for which the ACL policy takes effect.
ACL Policy on the WAN Side (Underlay Network)
Planning of ACL policy configuration parameters:
● Policy name: Specify the name of an ACL policy, for example,
test_bj_acl_class2.
● Traffic classifier: Plan a traffic classification rule, make a traffic classifier, and
apply the ACL policy to packets that match the traffic classification rule.
You can define local Internet access services by specifying the source and
destination IPv4/IPv6 addresses, and TCP or UDP source and destination port
numbers, or by matching the application group, VLAN ID, 802.1p priority,
source and destination MAC addresses, and Layer 2 protocol type. For details
about the traffic classifier, see the description in "Data Planning and Design"
in 2.2.2.3.2 Intelligent Traffic Steering.
● Policy priority: Specify the priority of the ACL policy. The value is in the range
from 1 to 5000 with the recommended step of 10.
If multiple ACL policies are applied to a site, the CPE matches the received
packets against the traffic classifiers in these ACL policies based on the
descending order of priority. If a match is found, the CPE performs traffic
filtering. If no match is found, the CPE continues to match the traffic classifier
in the next ACL policy (see the sketch after this list).
● Interface: The value is WAN, indicating that the ACL policy of the underlay
network is applied only to WAN interfaces.
● Traffic filtering: Specify the action for traffic, which can be permit or deny.
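The priority-based matching described above (highest priority first, first matching classifier decides permit or deny) can be sketched as follows. The packet fields, classifier form, priority ordering, and default behavior shown are simplifying assumptions for illustration.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class AclPolicy:
    name: str
    priority: int                 # 1-5000; here we assume a lower value means higher priority
    src_net: str
    dst_net: str
    action: str                   # "permit" or "deny"

def filter_packet(src_ip: str, dst_ip: str, policies: list) -> str:
    """Match the packet against ACL policies from highest to lowest priority."""
    for policy in sorted(policies, key=lambda p: p.priority):
        if (ip_address(src_ip) in ip_network(policy.src_net)
                and ip_address(dst_ip) in ip_network(policy.dst_net)):
            return policy.action          # first match decides the action
    return "permit"                       # assumed default when no policy matches

policies = [
    AclPolicy("test_bj_acl_class2", 10, "10.1.0.0/16", "0.0.0.0/0", "deny"),
    AclPolicy("allow_rest",         20, "0.0.0.0/0",   "0.0.0.0/0", "permit"),
]
print(filter_packet("10.1.2.3", "8.8.8.8", policies))   # deny
print(filter_packet("10.2.2.3", "8.8.8.8", policies))   # permit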
2.2.2.4.2 Firewall
The EVPN Interconnection Solution supports two firewall deployment schemes:
CPE's built-in firewall and third-party security devices connected through VAS.
CPE's built-in firewall
The firewall function provided by the CPE logically separates an internal network
from an external network to protect the internal network from unauthorized
access.
The firewall function involves the following two concepts:
● Security zone
A security zone consists of a single interface or a group of interfaces, and the
networks connected to these interfaces have the same security attributes.
Each security zone has a globally unique security priority.
● Interzone
Any two security zones constitute an interzone, and packets flow between
two security zones directionally (inbound and outbound). Inbound indicates
that packets are transmitted from a low-priority security zone to a high-
priority security zone, while outbound indicates that packets are transmitted
from a high-priority security zone to a low-priority security zone.
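The inbound/outbound definition above depends only on the relative security priorities of the two zones. A minimal sketch, with assumed priority values:

# Assumed zone priorities; each security zone has a globally unique priority.
ZONE_PRIORITY = {"trust": 85, "untrust": 5}

def interzone_direction(src_zone: str, dst_zone: str) -> str:
    """Inbound: low-priority zone to high-priority zone; outbound: the reverse."""
    if ZONE_PRIORITY[src_zone] < ZONE_PRIORITY[dst_zone]:
        return "inbound"
    return "outbound"

print(interzone_direction("untrust", "trust"))   # inbound
print(interzone_direction("trust", "untrust"))   # outbound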
In the EVPN Interconnection Solution, the firewall function is implemented
through security policies, which are applied to the interzone. Firewall security
policies are deployed on CPEs to ensure security for Internet access services of
enterprise users and protect the internal network from unauthorized access. In
addition, the CPE provides the application specific packet filter (ASPF) function to
detect application-layer and transport-layer protocol information and dynamically
determine whether to allow packets to enter the internal network. The firewall
security policy and the ASPF function work together to provide more
comprehensive, service-based security protection for the internal network of
enterprises.
The built-in firewall of the CPE is generally used in the local Internet access
scenario. The Internet access traffic of a site is directly transmitted from the local
CPE to the Internet. The firewall function is deployed on the local CPE to ensure
security of Internet access services.
NOTE
If a site has only MPLS links for Internet access and legacy network access, the firewall
function does not take effect.
Traffic from a branch site enters the CPE at the headquarters from the WAN side.
The CPE at the headquarters diverts the traffic to the firewall (ingress) according
to the configured route. After security protection is performed on the firewall, the
traffic is sent back to the CPE (egress) and then forwarded to the Internet through
the CPE. The return path is in a reverse order. That is, service traffic from the
Internet is forwarded by the CPE to the firewall and then forwarded back to the
CPE. The CPE then forwards the traffic to branch sites on the WAN side.
Table 2-10 Key configuration items of the built-in firewall on the CPE
● Policy name: Specify the name of a security policy. The value can contain only
letters, digits, underscores (_), and hyphens (-).
● Internet-to-user flow list
  – Priority: Specify the priority of an ACL rule. You can define multiple ACL rules.
    Packets preferentially match a rule with the highest priority. If the rule is
    matched, the action defined in the rule is executed.
  – Action:
    ▪ Permit: Inbound packets that match the ACL rule are allowed to pass
      through.
    ▪ Deny: Inbound packets that match the ACL rule are denied.
  – Protocol: Plan the protocol type of packets matching an ACL rule. The
    protocol can be TCP, UDP, ICMP, GRE, IGMP, IP, IPinIP, or OSPF. You can also
    set the protocol number to specify other protocols.
● User-to-Internet flow list
  – Priority: Specify the priority of an ACL rule. You can define multiple ACL rules.
    Packets preferentially match a rule with the highest priority. If the rule is
    matched, the action defined in the rule is executed.
  – Action:
    ▪ Permit: Outbound packets that match the ACL rule are allowed to pass
      through.
    ▪ Deny: Outbound packets that match the ACL rule are denied.
  – Protocol: Plan the protocol type of packets matching an ACL rule. The
    protocol can be TCP, UDP, ICMP, GRE, IGMP, IP, IPinIP, or OSPF. You can also
    set the protocol number to specify other protocols.
Table 2-11 Key configuration items for connecting to a third-party security device
(VAS connection)
● Direction: Two connections need to be configured for a site device: one in the
ingress direction and the other in the egress direction.
  – Ingress: Internet access traffic is sent from a site device to the ingress
    interface of a third-party security device.
  – Egress: A third-party security device processes and forwards the traffic sent
    from the Internet to the site device through the egress interface.
● Trust mode: Plan the type of a firewall security domain to which the interface is
added. You can add an interface to the trust or untrust zone of the firewall. By
default, the trust zone is used. If the untrust zone is configured, packets received
by the interface need to be forwarded to the security policy module for
processing.
● Static route: Static routes and OSPF routes can be configured in the ingress and
egress directions. The following parameters need to be configured for static
routes:
  – Destination network segment and mask: destination network segment and
    mask of a static route.
  – Next hop type: type of the next hop for a static route. Currently, the next hop
    of a static route can only be set to an IP address.
  – IP address: IP address of the next hop.
  – Priority: preference of a static route. A smaller value indicates a higher
    preference.
  – Detection: whether to associate a static route with an NQA test instance.
  – Target: If a static route is associated with an NQA test instance, only ICMP
    test instances can be used to check whether there are reachable routes
    between the source and destination. This parameter specifies the destination
    address of an NQA test instance.
  A route configured in the ingress direction is used to divert Internet access
  traffic to a third-party security device. The destination network segment can be
  set to 0.0.0.0, and the next hop can be set to the IP address of the ingress
  interface on the third-party security device. For the return path, a static route
  needs to be configured in the egress direction, with the destination network
  segment being the address segment of a branch site in the VN and the next hop
  being the IP address of the egress interface on the third-party security device
  (see the sketch after this table).
● OSPF route: Static routes and OSPF routes can be configured in the ingress and
egress directions. The following parameters need to be configured for OSPF
routes:
  – Process ID: ID of an OSPF process. The process ID of an OSPF route used for
    VNF interconnection is in the range from 60000 to 61000.
  – Default route advertisement flag: whether to advertise the default route to
    common OSPF areas. After this function is enabled, the device constantly
    advertises the default OSPF route. By default, this function is disabled in the
    ingress direction and is enabled in the egress direction. You are advised to use
    the default settings.
  – Area ID: ID of an OSPF area.
  – Authentication mode: If authentication is enabled, a neighbor relationship can
    be established only when OSPF packets pass authentication. The following
    authentication modes are supported:
    ▪ None: Authentication is not performed on OSPF packets.
    ▪ Simple: A password needs to be configured.
    ▪ Cryptographic: The encryption mode (MD5, HMAC-MD5, or HMAC-SHA256),
      key, and password need to be configured. For security purposes, the
      cryptographic mode using HMAC-SHA256 is recommended.
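As an illustration of the static-route plan described in the table, the following sketch lists the routes that would be configured in the ingress and egress directions. The branch subnet and the security device interface addresses are hypothetical placeholders, not values from this guide.

# Hypothetical addressing for the VAS connection.
SECURITY_DEVICE_INGRESS_IP = "172.16.1.2"   # ingress interface of the third-party device
SECURITY_DEVICE_EGRESS_IP = "172.16.2.2"    # egress interface of the third-party device
BRANCH_VN_SUBNET = "192.168.10.0/24"        # address segment of a branch site in the VN

static_routes = {
    # Ingress direction: divert Internet access traffic to the security device.
    "ingress": [{"destination": "0.0.0.0/0", "next_hop": SECURITY_DEVICE_INGRESS_IP}],
    # Egress direction: return path toward the branch site through the security device.
    "egress": [{"destination": BRANCH_VN_SUBNET, "next_hop": SECURITY_DEVICE_EGRESS_IP}],
}

for direction, routes in static_routes.items():
    for route in routes:
        print(f"{direction}: {route['destination']} via {route['next_hop']}")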
The rules table of a redirect policy contains the following parameters:
● Policy priority: Priority of a rule in the redirect policy. You can configure multiple
rules with different priorities in a single redirect policy. The value is in the range
from 1 to 5000, where a lower value indicates a higher priority.
● Traffic Classifier Template: Traffic classifier template used to match packets that
need to be redirected. You can either select an existing traffic classifier template
from the drop-down list or create a new one. In addition, you can view details
about the traffic classifier template.
● WAN link: WAN link of a CPE. A WAN link can carry an active and a standby
GRE tunnel. When the active GRE tunnel fails, traffic is switched to the standby
GRE tunnel.
● Active GRE tunnel
  – Tunnel source IP address: You can create tunnel interfaces on the CPE to
    establish GRE tunnels to connect the CPE to a cloud security gateway.
  – GRE Key: Key of the active GRE tunnel. You can configure a GRE tunnel key on
    both ends of a GRE tunnel to enhance security. This ensures that a device
    accepts only packets sent from valid tunnel interfaces and discards invalid
    packets. This parameter value must be the same as that of the security cloud
    gateway.
2.2.2.4.5 IPS
Function Description
The intrusion prevention system (IPS) is a security mechanism that detects
intrusion behavior (such as buffer overflow attacks, Trojan horses, and worms) by
analyzing network traffic and terminates intrusion behavior in real time through
certain responses. IPS protects enterprise information systems and network
architectures against intrusions.
The IPS signature database is preconfigured on the CPE and defines common
intrusion behaviors. The IPS compares packets against the signature database. If a
match is found, the CPE considers it as an intrusion behavior and takes
corresponding prevention measures.
In the EVPN Interconnection Solution, the IPS is implemented through security
policies, which are applied to the interzone. IPS security policies are deployed on
CPEs to implement security protection for Internet access services of enterprise
users and block a variety of intrusion behaviors from the Internet.
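Conceptually, the IPS compares traffic against the preconfigured signature database and blocks matches. The following sketch reduces this to simple byte-pattern matching; the signatures and action names are illustrative assumptions, not entries of the real signature database.

# Illustrative signature database: pattern -> threat name (assumed examples).
SIGNATURES = {
    b"\x90\x90\x90\x90": "possible buffer overflow (NOP sled)",
    b"TROJAN-BEACON":    "Trojan horse beacon",
}

def inspect(payload: bytes) -> tuple:
    """Return (action, reason) for a packet payload: block on signature match."""
    for pattern, threat in SIGNATURES.items():
        if pattern in payload:
            return "block", threat
    return "forward", "no signature matched"

print(inspect(b"GET /index.html HTTP/1.1"))          # ('forward', 'no signature matched')
print(inspect(b"....TROJAN-BEACON....payload...."))  # ('block', 'Trojan horse beacon')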
Application Scenarios
In the EVPN Interconnection Solution, the IPS function is mainly used in the Site-
to-Internet scenario, that is, to implement security protection for Internet access
services, as shown in Figure 2-15.
Function Description
URL filtering regulates online behaviors by controlling URLs that users can access
and permitting or denying users' access to some web resources.
The CPE allows or denies user access to a URL or a type of URLs based on the pre-
defined categories and blacklist/whitelist. The CPE extracts the URL field from an
HTTP request packet and matches the value of this field against the blacklist/
whitelist or predefined categories. If a match is found, the CPE processes the HTTP
request packet according to the configured response action.
In the EVPN Interconnection Solution, URL filtering is implemented through
security policies, and the security policies are applied to the interzone. URL
filtering security policies are deployed on CPEs to control URLs accessed by
enterprise users.
Application Scenarios
In the EVPN Interconnection Solution, URL filtering can be applied in Site-to-
Legacy Site, Site-to-EVPN Site, and Site-to-Internet scenarios, as shown in Figure
2-16.
● In the Site-to-Legacy Site scenario (1), URL filtering is deployed on the CPE to
regulate users' online behaviors by controlling URLs used by users to access
the legacy site.
● In the Site-to-EVPN Site scenario (2), URL filtering is deployed on the CPE to
regulate users' online behaviors by controlling URLs used by users to access
the EVPN site.
● In the Site-to-Internet scenario (3), URL filtering is deployed on the CPE to
regulate users' online behaviors by controlling URLs used by users to access
the Internet.
● Blacklist/Whitelist: Add a URL filtering list (see the sketch after this list).
– If a blacklist is added, URLs not in the blacklist can be accessed.
– If a whitelist is added, only URLs in the whitelist can be accessed.
● Enable pre-defined URL category: Specify the filtering level for the predefined
category and use the predefined category template of the system to perform
URL filtering. You can use the filtering level defined by the system or
customize the action for each predefined category template.
– Predefined URL filter level: The system defines high, medium, and low
filtering levels, and configures an initial action for each predefined URL
category according to each level. A high level indicates a strict action for
URL categories, for example, the device blocks requests matching porn,
P2P download, and video categories. A low level indicates a loose action
for URL categories, for example, the device blocks requests matching
porn categories only.
– Customized: Customize actions for each URL category. This method is
applicable to scenarios where URL categories need to be restricted.
● Site: Specify the site where the URL filtering policy is applied.
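The filtering behavior described above (blacklist/whitelist plus the predefined-category action) can be sketched as follows. The category names, filter levels, host names, and the order in which the checks are evaluated are simplifying assumptions for illustration only.

# Illustrative configuration; category assignments and filter levels are assumptions.
WHITELIST = {"intranet.example.com"}
BLACKLIST = {"bad.example.org"}
URL_CATEGORY = {"videos.example.net": "video", "p2p.example.net": "P2P download"}
# A "high" filter level blocks more categories than a "low" one.
LEVEL_BLOCKED = {"high": {"porn", "P2P download", "video"}, "low": {"porn"}}

def url_filter(host: str, use_whitelist: bool, level: str = "high") -> str:
    """Return 'permit' or 'deny' for a requested host (check order is assumed)."""
    if use_whitelist:
        return "permit" if host in WHITELIST else "deny"   # only whitelisted URLs pass
    if host in BLACKLIST:
        return "deny"                                       # blacklisted URLs are blocked
    category = URL_CATEGORY.get(host)
    if category in LEVEL_BLOCKED[level]:
        return "deny"                                       # blocked by predefined category
    return "permit"

print(url_filter("videos.example.net", use_whitelist=False, level="high"))  # deny
print(url_filter("videos.example.net", use_whitelist=False, level="low"))   # permit
print(url_filter("intranet.example.com", use_whitelist=True))               # permit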
The CPE sends a query request to the registration query center to obtain the
IP address and port number of iMaster NCE-Campus, implementing the
plug-and-play function.
3. The deployment engineer completes the deployment and checks whether the
deployment is successful onsite.
Email-based Deployment
Email-based deployment is also called URL-based deployment. After the network
administrator completes the ZTP configuration on iMaster NCE-Campus, iMaster
NCE-Campus automatically generates a deployment email. The URL parameters in
the deployment email carry the deployment information, and the deployment
email is sent to a specified deployment mailbox. After receiving the deployment
email, the deployment engineer clicks the URL in the email to start the
deployment process. Subsequently, devices automatically complete the
deployment.
Email-based deployment applies to the scenario where the ESN is not bound to
the CPE and is automatically recorded on iMaster NCE-Campus after deployment.
When a CPE is allocated to a site on iMaster NCE-Campus, only the CPE model is
specified but the ESN of the CPE is not specified. In this case, iMaster NCE-Campus
automatically allocates a token to the CPE when generating a deployment email
of the site. When the deployment engineer deploys the CPE, the CPE sends the
token, ESN, and other registration information to iMaster NCE-Campus for
registration. iMaster NCE-Campus then associates the CPE with the ESN based on
the token to complete the registration of the CPE that is not bound to the ESN.
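The token-based association between a CPE and its site can be sketched as follows. This is only a conceptual model of the registration exchange; message formats, field names, the URL, and the controller API are assumptions, not the actual iMaster NCE-Campus interface.

import secrets

class Controller:
    """Conceptual model of the controller side of email-based (URL-based) deployment."""

    def __init__(self):
        self.site_tokens = {}     # token -> site name (ESN not yet known)
        self.registered = {}      # site name -> ESN

    def generate_deployment_email(self, site: str) -> str:
        token = secrets.token_hex(8)          # token embedded in the deployment URL
        self.site_tokens[token] = site
        return f"https://controller.example.com/deploy?token={token}"

    def register(self, token: str, esn: str) -> bool:
        site = self.site_tokens.pop(token, None)
        if site is None:
            return False                      # unknown or already used token
        self.registered[site] = esn           # bind the reported ESN to the site
        return True

controller = Controller()
url = controller.generate_deployment_email("Branch-Beijing")
token = url.split("token=")[1]
print(controller.register(token, esn="2102351234567890"))   # True
print(controller.registered)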
USB-based Deployment
During the USB-based deployment, after the network administrator completes the
ZTP configuration on iMaster NCE-Campus, iMaster NCE-Campus automatically
generates the ZTP file that records the CPE deployment configuration. Then, the
deployment engineer uses a tool to generate a configuration file and imports the
configuration file to a USB flash drive for USB-based deployment.
NOTE
During batch deployment using a USB flash drive, the ESN of the CPE that is distributed to
the site must be the same as the ESN of the CPE configured on iMaster NCE-Campus.
Otherwise, the deployment may fail.
DHCP-based Deployment
During DHCP-based deployment, the network administrator configures ZTP for a
site on iMaster NCE-Campus, and allocates the IP address, gateway, southbound
IP address of iMaster NCE-Campus, and port number to the WAN-side interface of
the CPE on the DHCP server. The WAN interface used for deployment must request
an IP address from the DHCP server through DHCP. When allocating an IP address to
the CPE, the DHCP server sends iMaster NCE-Campus information to the CPE
through DHCP Option messages. After obtaining the IP address and accessing the
underlay network, the CPE automatically registers with iMaster NCE-Campus to
complete the deployment.
Figure 2-21 shows the process of deployment through the registration query
center.
NTP can be configured independently for each site in the following sequence:
external clock source > parent site > branch site.
NAT Traversal
When the EVPN interconnection network is set up, CPEs at sites may be on
different private networks. NAT devices are deployed on the WAN side to translate
private IP addresses into public IP addresses so that sites can properly access the
public network. However, when BGP is used to exchange routing information
between sites to set up an overlay tunnel, private IP addresses instead of public IP
addresses are contained in packets. As a result, tunnel establishment is affected.
Session Traversal Utilities for NAT (STUN) can effectively solve the problem.
STUN uses the client/server model and consists of the STUN server and STUN
clients. Figure 2-22 shows the typical STUN networking of the EVPN
interconnection network.
● STUN client: An edge site functions as a STUN client. It sends STUN binding
requests and receives STUN binding responses.
● STUN server: The RR functions as the STUN server. It sends STUN binding
responses and receives STUN binding requests.
Through packet exchange with a STUN server, a STUN client can detect a NAT
device and determine the IP address and port number allocated by the NAT
device. After a data channel is established between STUN clients, an overlay
tunnel can be established between sites.
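The principle is that the STUN client compares the public address returned by the STUN server with its own local address; if they differ, a NAT device is on the path, and the returned address and port can be used for overlay tunnel establishment. The following sketch models only this comparison; it is not an implementation of the STUN protocol or of the RR behavior, and the addresses are placeholders.

from dataclasses import dataclass

@dataclass
class BindingResponse:
    mapped_ip: str      # address and port as seen by the STUN server (the RR)
    mapped_port: int

def detect_nat(local_ip: str, local_port: int, resp: BindingResponse) -> dict:
    """Compare the local and server-reflexive addresses to detect a NAT device."""
    behind_nat = (resp.mapped_ip, resp.mapped_port) != (local_ip, local_port)
    return {
        "behind_nat": behind_nat,
        # Address and port to advertise for overlay tunnel establishment.
        "tunnel_endpoint": (resp.mapped_ip, resp.mapped_port),
    }

# Edge site CPE with a private WAN address; the RR sees the NAT-translated address.
print(detect_nat("10.0.0.5", 3478, BindingResponse("198.51.100.7", 54321)))
# {'behind_nat': True, 'tunnel_endpoint': ('198.51.100.7', 54321)}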
● DHCP server:
During DHCP-based deployment, when configuring the DHCP server, in
addition to the information such as the IP address and gateway allocated to
the WAN interface of the CPE, the network administrator needs to configure
fields of Option 148 to transmit the southbound IP address and port number
of iMaster NCE-Campus to the CPE (see the sketch after this list).
The value of Option 148 is in the following format:
agilemode=AGILEMODE;agilemanage-mode=AGILEMANAGE-
MODE;agilemanage-domain=AGILEMANAGE-DOMAIN;agilemanage-
port=AGILEMANAGE-PORT;
● WAN interface: Be aware that the WAN interfaces used for DHCP-based
deployment must be Layer 3 interfaces in factory default settings and cannot
be Layer 2 interfaces whose working mode is changed to Layer 3 mode. For
details, see 4.4.4.1.4 DHCP-based Deployment.
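For illustration, the following sketch builds and parses an Option 148 value in the format shown above. The field values kept here are the documented placeholders plus an assumed controller address (192.0.2.10) and port (10020); they are not values prescribed by this guide and must be replaced with the values planned for your deployment.

def build_option_148(fields: dict) -> str:
    """Serialize the Option 148 key=value fields into the documented string format."""
    return "".join(f"{key}={value};" for key, value in fields.items())

def parse_option_148(value: str) -> dict:
    """Parse an Option 148 string back into a dictionary of fields."""
    pairs = [item for item in value.split(";") if item]
    return dict(pair.split("=", 1) for pair in pairs)

# Placeholder values only; the real values come from your iMaster NCE-Campus planning.
option_148 = build_option_148({
    "agilemode": "AGILEMODE",
    "agilemanage-mode": "AGILEMANAGE-MODE",
    "agilemanage-domain": "192.0.2.10",      # southbound IP address of the controller (assumed)
    "agilemanage-port": "10020",             # southbound port number (assumed)
})
print(option_148)
print(parse_option_148(option_148))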
The following parameters are set for NTP clock synchronization at a site:
● Time zone
This parameter indicates the time zone to which a site gateway belongs. If
DST is observed in the time zone, you can choose whether to apply DST rules
to the time zone.
● NTP authentication
This parameter is optional and indicates whether to enable NTP
authentication when the gateway at a specified site functions as an NTP
server. If NTP authentication is enabled, you need to set the authentication
password and authentication ID. If the gateway at a specified site functions as
an NTP client, the authentication password and authentication ID must be the
same as those at the parent site of the NTP server. Otherwise, the
authentication fails and NTP clock synchronization fails.
● NTP client mode
Deployment Tasks
● Task: Create sites and add devices to the sites.
  Description: Create a site, add devices, and set device roles as planned.
  Procedure:
  1. 4.4.3.1 Creating a Site
  2. 4.4.3.2 Adding Devices
● Task: Configure the underlay network.
  Description: Configure ZTP. In the EVPN Interconnection Solution, you need to
  configure WAN links and activate sites before site deployment. For details about
  the planning of the WAN link template and parameters, see 2.2.1.3.1 Site WAN
  Model.
  Procedure:
  1. Set the WAN link template for the site by referring to 4.4.3.13.2 Configuring a
     WAN-side Site Template.
  2. Set ZTP deployment parameters for CPEs by referring to 4.4.3.13.4
     Configuring the Network Access Mode for a Site.
  3. Set NTP parameters for the site by referring to 4.4.3.13.5 Configuring NTP.
  4. Configure the template for sending emails during deployment by referring to
     4.4.3.13.7 (Optional) Customizing an Email Template.
  5. Choose Zero Touch Provision > ZTP and click Send Email or Download ZTP
     File to activate the site.
● Task: Connect to the RR.
  Description: Associate the edge site with an RR site based on the planned
  network model.
  Procedure: For details, see 4.4.3.13.8 Associating an Edge Site with an RR Site.
2.4 Deployment
Context
Huawei EVPN Interconnection Solution provides the following ZTP methods for
deploying egress devices on the WAN side:
For details about the deployment modes and planning, see 2.2.3.1 Deployment
Planning for Egress Devices on the WAN Side.
Deployment Tasks
Task Procedure
NOTE
After a CPE is deployed, the site to which the CPE belongs cannot be changed. To change
the site to which the CPE belongs, perform the following operations:
1. Delete the CPE from the old site.
2. Restore the factory default settings for the CPE.
3. Add the CPE to the new site, deploy the CPE, and bring it online.
Context
When an AR router is deployed at the network egress, it is recommended that the
AR be used as the gateway to assign IP addresses to LAN-side devices. In addition,
DHCP Option 148 needs to be set so that LAN-side devices can register with
iMaster NCE-Campus and be deployed. For details about the deployment modes
for LAN-side devices, see 2.2.3.2 Deployment Plan for Devices on the LAN Side.
Deployment Tasks
● Task: Enable the site to access the Internet.
  Description: To ensure that LAN-side devices can automatically register with
  iMaster NCE-Campus and go online during site deployment and that they can
  access the public network, enable the Internet access function for the site in the
  management VN.
  Procedure: Enable Internet access for the site to be deployed by referring to
  Configuring an Internet Access Policy for a Site.
Deployment Tasks
● Task: Create service VNs.
  Description: With the multi-VN function, the EVPN Interconnection Solution
  isolates services of multiple departments under a single tenant. An independent
  VN is configured for each service department. For details, see 2.2.1.5 VN Service
  Isolation.
  Procedure: Create VNs for different departments based on the service planning
  by referring to 4.4.11.2.1 Creating VNs in LAN-WAN Interconnection Scenario.