AXI Bridge for PCI Express Gen3 Subsystem v3.0
Product Guide (PG194)
Vivado Design Suite
Chapter 2: Overview
    Feature Summary
    Unsupported Features
    Limitations
    Licensing and Ordering
Appendix B: Debugging
    Finding Help on Xilinx.com
    Debug Tools
    Hardware Debug
    Additional Debug Information
    Interface Debug
Chapter 1
Introduction
The Xilinx® AXI Bridge for PCI Express Gen3 Subsystem is available for UltraScale™ and
Virtex®-7 XT devices. The Xilinx DMA/Bridge Subsystem for PCI Express® in AXI Bridge mode is
available for UltraScale+™ devices. These cores bridge AXI4 and PCI Express interfaces.
Features
• AXI Bridge for PCI Express Gen3 supports UltraScale™ architecture and Virtex®-7 XT FPGA
Gen3 Integrated Blocks for PCI Express®
• DMA/Bridge Subsystem for PCI Express core in AXI Bridge mode supports UltraScale+
Integrated Blocks for PCI Express
• AXI Bridge for PCI Express Gen3 supports Maximum Payload Size (MPS) up to 512 bytes
• DMA/Bridge Subsystem for PCI Express core in AXI Bridge mode supports Maximum Payload
Size (MPS) up to 1024 bytes
• Multiple-vector Message Signaled Interrupts (MSI)
• MSI-X interrupt support
• Legacy interrupt support
• Memory-mapped AXI4 access to PCIe® space
• PCIe access to memory-mapped AXI4 space
• Tracks and manages Transaction Layer Packet (TLP) completion processing
• Detects and indicates error conditions with interrupts
• Optimal AXI4 pipeline support for enhanced performance
• Compliant with the Arm® AMBA® AXI4 specification
• Supports up to six PCIe 32-bit or three 64-bit PCIe Base Address Registers (BARs) as Endpoint
• Supports up to two PCIe 32-bit or a single PCIe 64-bit BAR as Root Port
IP Facts
LogiCORE™ IP Facts Table
Core Specifics
Supported Device Family: AXI Bridge for PCIe Gen3: UltraScale, Virtex-7 XT (1); DMA/Bridge Subsystem for PCIe in AXI Bridge mode: UltraScale+
Supported User Interfaces: AXI4
Resources: Performance and Resource Utilization web page

Provided with Core
Design Files: Verilog
Example Design: Verilog
Test Bench: Verilog
Constraints File: Xilinx Design Constraints (XDC)
Simulation Model: Not Provided
Supported S/W Driver: Root Port Driver

Tested Design Flows (2)
Design Entry: Vivado® Design Suite
Simulation: For supported simulators, see the Xilinx Design Tools: Release Notes Guide
Synthesis: Vivado synthesis

Support
Release Notes and Known Issues: Master Answer Record 61898
All Vivado IP Change Logs: Master Vivado IP Change Logs 72775
Xilinx Support web page
Notes:
1. Except for XC7VX485T, XC7V585T, and XC7V2000T, all Virtex-7 devices are supported.
2. For the supported versions of the tools, see the Xilinx Design Tools: Release Notes Guide.
Chapter 2
Overview
Xilinx® provides the following two cores for PCI Express® AXI bridging:

• The AXI Bridge for PCI Express Gen3 core supports UltraScale™ and Virtex®-7 XT devices.
• The DMA/Bridge Subsystem for PCI Express core, when configured in AXI Bridge mode, supports UltraScale+™ devices.
This document is applicable for both the AXI Bridge for PCI Express Gen3 core, and the DMA/
Bridge Subsystem for PCI Express core in AXI Bridge functional mode. See the DMA/Bridge
Subsystem for PCI Express Product Guide (PG195) for the DMA/Bridge Subsystem for PCI Express
core in DMA functional mode. When something applies to both cores together, this document
refers to the core as the Bridge core. Otherwise, the specific core name is used.
The AXI Bridge for PCI Express Gen3 is designed for the Vivado® IP integrator in the Vivado®
Design Suite. The AXI Bridge for PCI Express Gen3 provides an interface between an AXI4 user
logic interface and PCI Express® using the Integrated Block for PCI Express. The Bridge core
provides the translation level between the AXI4 embedded system to the PCI Express system.
The Bridge core translates AXI4 memory reads and writes to PCIe® Transaction Layer Packets
(TLPs) and translates PCIe memory read and write request TLPs to AXI4 interface
commands.
Figure 1: High-Level Bridge Architecture for the PCI Express Gen3 Architecture

[Figure: The memory-mapped AXI4 to AXI4-Stream Bridge comprises the AXI4-Lite Register Block (S_AXI(L)_CTL interface), the Slave Bridge (memory-mapped AXI4 slave interface, S_AXI(B)), and the Master Bridge (memory-mapped AXI4 master interface, M_AXI(B)), all connected through PCIe Rx/Tx paths to the PCIe Gen3 Solution IP.]
Feature Summary
The Bridge core is an interface between the AXI4 bus and PCI Express®. The core contains the
memory mapped AXI4 to AXI4-Stream Bridge and the AXI4-Stream Enhanced Interface Block for
PCIe. The memory-mapped AXI4 to AXI4-Stream Bridge contains a register block and two
functional half bridges, referred to as the Slave Bridge and Master Bridge. The slave bridge
connects to the AXI4 Interconnect as a slave device to handle any issued AXI4 master read or
write requests. The master bridge connects to the AXI4 interconnect as a master to process the
PCIe generated read or write TLPs. The core uses a set of interrupts to detect and flag error
conditions.
The Bridge core supports both Root Port and Endpoint configurations.
• When configured as an Endpoint, the Bridge core supports up to six 32-bit or three 64-bit
PCIe Base Address Registers (BARs).
• When configured as a Root Port, the core supports up to two 32-bit or a single 64-bit PCIe
BAR.
Unsupported Features
The following features are not supported in the Bridge core.
• Tandem Configuration solutions, which include Tandem PROM, Tandem PCIe, Tandem with
Field Updates, and DFX over PCIe, are not supported for Virtex®-7 XT devices.
• Tandem Configuration is not yet supported for Bridge mode in UltraScale+™ devices.
• ASPM is not supported in AXI Bridge Root Port mode.
Limitations
1. For this subsystem, the bridge master and bridge slave cannot achieve more than 128 Gb/s.
2. The Bridge is compliant with all MPS and MRRS settings; however, all traffic initiated by
the Bridge is limited to a maximum of 256 bytes.
3. AXI address width is limited to 48 bits.
TX                          | RX
MRd32                       | MRd32
MRd64                       | MRd64
MWr32                       | MWr32
MWr64                       | MWr64
Msg(INT/Error)              | Msg(SSPL,INT,Error)
Cpl                         | Cpl
CplD                        | CplD
Cfg Type0/1 (For Root Port) |
PCIe Capability
Only the following PCIe capabilities are supported because of the AXI4 specification:
• 1 PF
• MSI
• MSI-X
• PM
• Advanced error reporting (AER)
Others
AXI Slave
• Only supports the INCR burst type. Other types result in the Slave Illegal Burst (SIB) interrupt.
• No memory type support (AxCACHE)
• No protection type support (AxPROT)
• No lock type support (AxLOCK)
• No non-contiguous byte enable support (WSTRB)
AXI Master
Licensing and Ordering
Information about this and other Xilinx modules is available at the Xilinx Intellectual Property
page. For information on pricing and availability of other Xilinx modules and tools, contact your
local Xilinx sales representative.
Chapter 3
Product Specification
The Register block contains registers used in the Bridge core for dynamically mapping the AXI4
memory-mapped (MM) address range, provided using the AXIBAR parameters, to the PCIe®
address range.
The slave bridge provides termination of memory-mapped AXI4 transactions from an AXI master
device (such as a processor). The slave bridge provides a way to translate addresses that are
mapped within the AXI4 memory mapped address domain to the domain addresses for PCIe.
Write transactions to the Slave Bridge are converted into one or more MemWr TLPs, depending
on the configured Max Payload Size setting, which are passed to the integrated block for PCI
Express. The slave bridge in AXI Bridge for PCI Express Gen3 core can support up to eight active
AXI4 Write requests. The slave bridge in the DMA/Bridge Subsystem for PCIe in AXI Bridge
mode core can support up to 32 active AXI4 Write requests. When a remote AXI master initiates
a read transaction to the slave bridge, the read address and qualifiers are captured and a MemRd
request TLP is passed to the core and a completion timeout timer is started. Completions
received through the core are correlated with pending read requests and read data is returned to
the AXI master. The slave bridge in AXI Bridge for PCI Express Gen3 core can support up to eight
active AXI4 Read requests with pending completions. The slave bridge in the DMA/Bridge
Subsystem for PCIe in AXI Bridge mode core can support up to 32 active AXI4 Read requests
with pending completions.
The master bridge processes both PCIe MemWr and MemRd request TLPs received from the
Integrated Block for PCI Express and provides a means to translate addresses that are mapped
within the address for PCIe domain to the memory mapped AXI4 address domain. Each PCIe
MemWr request TLP header is used to create an address and qualifiers for the memory mapped
AXI4 bus and the associated write data is passed to the addressed memory mapped AXI4 Slave.
The Master Bridge in the AXI Bridge for PCI Express Gen3 core can support up to eight active
PCIe MemWr request TLPs. The Master Bridge in the DMA/Bridge Subsystem for PCIe in AXI
Bridge mode core can support up to 32 active PCIe MemWr request TLPs.
Each PCIe MemRd request TLP header is used to create an address and qualifiers for the memory-
mapped AXI4 bus. Read data is collected from the addressed memory mapped AXI4 slave and
used to generate completion TLPs which are then passed to the integrated block for PCI Express.
The Master Bridge in AXI Bridge for PCI Express Gen3 core can support up to eight active PCIe
MemRd request TLPs with pending completions. The Master Bridge in the DMA/Bridge
Subsystem for PCIe core in AXI Bridge mode can support up to 32 active PCIe MemRd request
TLPs with pending completions for improved AXI4 pipelining performance.
The instantiated AXI4-Stream Enhanced PCIe block contains submodules including the
Requester/Completer interfaces to the AXI bridge and the Register block. The Register block
contains the status, control, interrupt registers, and the AXI4-Lite interface.
Standards
The Bridge core is compliant with the AMBA AXI Protocol Specification and the PCI-SIG
Specifications.
For PCIe® requests with lengths greater than 1 Dword, the size of the data burst on the Master
AXI interface will always equal the width of the AXI data bus even when the request received
from the PCIe link is shorter than the AXI bus width.
The slave AXI WSTRB signal can be used to facilitate data alignment to an address boundary.
WSTRB may be 0 at the beginning of a valid data cycle, and the bridge appropriately calculates
an offset to the given address. However, the valid data identified by WSTRB must be
contiguous from the first byte enable to the last byte enable.
All transactions initiated at the Slave Bridge interface are modified and metered by the IP as
necessary. The Slave Bridge interface conforms to the AXI4 specification and allows bursts of up
to 4 KB, and the IP splits the transaction automatically according to the PCIe Max Read Request
Size (MRRS), Max Payload Size (MPS), and Read Completion Boundary (RCB). As a result of this
operation, one request in the AXI domain may result in multiple requests in the PCIe domain, and
the IP adjusts the number of issued PCIe requests accordingly to avoid oversubscribing the
available Completion buffer.
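As a rough illustration of this splitting rule, the following C sketch models how a single AXI write burst is carried as multiple MemWr TLPs, each of which may not exceed the MPS or cross an MPS-aligned boundary. It is an illustrative model under those assumptions, not the IP's exact segmentation logic.

#include <stdio.h>

/* Illustrative model only: one AXI burst becomes several MemWr TLPs,
 * none larger than the MPS and none crossing an MPS-aligned boundary
 * (which also guarantees no 4 KB boundary crossing for MPS <= 4 KB). */
static unsigned memwr_tlp_count(unsigned long long addr,
                                unsigned burst_bytes,
                                unsigned mps_bytes)
{
    unsigned count = 0;
    while (burst_bytes != 0) {
        unsigned to_boundary = mps_bytes - (unsigned)(addr % mps_bytes);
        unsigned chunk = (burst_bytes < to_boundary) ? burst_bytes : to_boundary;
        addr += chunk;
        burst_bytes -= chunk;
        count++;
    }
    return count;
}

int main(void)
{
    /* A 4 KB AXI burst with MPS = 256 bytes yields 16 MemWr TLPs. */
    printf("%u\n", memwr_tlp_count(0x12340000ULL, 4096, 256));
    return 0;
}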
• The bresp to the remote (requesting) AXI4 master device for a write to a remote PCIe device
is not issued until the MemWr TLP transmission is guaranteed to be sent on the PCIe link
before any subsequent TX-transfers.
• If the Relaxed Ordering bit is not set within the TLP header, then a remote PCIe device read to a
remote AXI slave is not permitted to pass any previous remote PCIe device writes to a remote
AXI slave received by the Bridge core. The AXI read address phase is held until the previous
AXI write transactions have completed and bresp has been received for the AXI write
transactions. If the Relaxed Ordering attribute bit is set within the TLP header, then the
remote PCIe device read is permitted to pass.
• Read completion data received from a remote PCIe device are not permitted to pass any
remote PCIe device writes to a remote AXI slave received by the Bridge core prior to the read
completion data. The bresp for the AXI write(s) must be received before the completion data
is presented on the AXI read data channel.
Note: The transaction ordering rules for PCIe might have an impact on data throughput in heavy
bidirectional traffic.
• Aperture_Base_Address_n provides the low address where AXI BAR n starts and will be
regarded as address offset 0x0 when the address is translated.
• Aperture_High_Address_n is the high address of the last valid byte address of AXI BAR
n. (For more details on how the address gets translated, see Address Translation.)
When a packet is sent to the core (outgoing PCIe packets), the packet must have an address that
is in the range of Aperture_Base_Address_n and Aperture_High_Address_n. Any
packet that is received by the core that has an address outside of this range will be responded to
with a SLVERR. When the IP integrator is used, these parameters are derived from the Address
Editor tab within the IP integrator. The Address Editor sets the AXI Interconnect as well as the
core so the address range matches, and the packet is routed to the core only when the packet
has an address within the valid range.
Address Translation
The address space for PCIe® is different from the AXI address space. Accessing one address space
from another requires an address translation process. On the AXI side, the bridge
supports mapping to PCIe on up to six 32-bit or 64-bit AXI base address registers (BARs).
• Example 1 (32-bit PCIe Address Mapping) demonstrates how to set up three AXI BARs and
translate the AXI address to a 32-bit address for PCIe.
• Example 2 (64-bit PCIe Address Mapping) demonstrates how to set up three AXI BARs and
translate the AXI address to a 64-bit address for PCIe.
• Example 3 demonstrates how to set up two 64-bit PCIe BARs and translate the address for
PCIe to an AXI address.
• Example 4 demonstrates how to set up a combination of two 32-bit AXI BARs and two 64 bit
AXI BARs, and translate the AXI address to an address for PCIe.
Example 1

In this example, where C_AXIBAR_NUM=3, the following assignments for each range are made:

AXI_ADDR_WIDTH=48
C_AXIBAR_0=0x00000000_12340000
C_AXI_HIGHADDR_0=0x00000000_1234FFFF (64 KB)
C_AXIBAR2PCIEBAR_0=0x00000000_56710000 (Bits 63-32 are zero in order to produce a 32-bit PCIe TLP. Bits 15-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 16 bits are invalid translation values.)
C_AXIBAR_1=0x00000000_ABCDE000
C_AXI_HIGHADDR_1=0x00000000_ABCDFFFF (8 KB)
C_AXIBAR2PCIEBAR_1=0x00000000_FEDC0000 (Bits 63-32 are zero in order to produce a 32-bit PCIe TLP. Bits 12-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 13 bits are invalid translation values.)
C_AXIBAR_2=0x00000000_FE000000
C_AXI_HIGHADDR_2=0x00000000_FFFFFFFF (32 MB)
C_AXIBAR2PCIEBAR_2=0x00000000_40000000 (Bits 63-32 are zero in order to produce a 32-bit PCIe TLP. Bits 24-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 25 bits are invalid translation values.)
• Accessing the Bridge AXIBAR_0 with address 0x0000_12340ABC on the AXI bus yields
0x56710ABC on the bus for PCIe.

The translation works as follows: the AXI AWADDR (0x00000000_12340ABC) minus
C_AXIBAR_0 (0x00000000_12340000) gives the intermediate address 0x0ABC, which is
combined with C_AXIBAR2PCIEBAR_0 (0x00000000_56710000) to produce 0x56710ABC.

• Accessing the Bridge AXIBAR_1 with address 0x0000_ABCDF123 on the AXI bus yields
0xFEDC1123 on the bus for PCIe.
• Accessing the Bridge AXIBAR_2 with address 0x0000_FFFEDCBA on the AXI bus yields
0x41FEDCBA on the bus for PCIe.
Example 2

In this example, where C_AXIBAR_NUM=3, the following assignments for each range are made:

AXI_ADDR_WIDTH=48
C_AXIBAR_0=0x00000000_12340000
C_AXI_HIGHADDR_0=0x00000000_1234FFFF (64 KB)
C_AXIBAR2PCIEBAR_0=0x50000000_56710000 (Bits 63-32 are non-zero in order to produce a 64-bit PCIe TLP. Bits 15-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 16 bits are invalid translation values.)
C_AXIBAR_1=0x00000000_ABCDE000
C_AXI_HIGHADDR_1=0x00000000_ABCDFFFF (8 KB)
C_AXIBAR2PCIEBAR_1=0x60000000_FEDC0000 (Bits 63-32 are non-zero in order to produce a 64-bit PCIe TLP. Bits 12-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 13 bits are invalid translation values.)
C_AXIBAR_2=0x00000000_FE000000
C_AXI_HIGHADDR_2=0x00000000_FFFFFFFF (32 MB)
C_AXIBAR2PCIEBAR_2=0x70000000_40000000 (Bits 63-32 are non-zero in order to produce a 64-bit PCIe TLP. Bits 24-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 25 bits are invalid translation values.)
Example 3
This example shows the generic settings to set up two independent BARs for PCIe® and address
translation of addresses for PCIe to a remote AXI address space. This setting of BARs for PCIe
does not depend on the AXI BARs within the bridge.
In this example, where C_PCIEBAR_NUM=2, the following range assignments are made:

AXI_ADDR_WIDTH=48
C_PCIEBAR2AXIBAR_0=0x00000000_12340000 (The bits covered by the BAR aperture must be zero. Non-zero values in these ranges are invalid.)
• Accessing the Bridge PCIe BAR_0 with address 0x20000000_ABCDFFF4 on the bus for PCIe
yields 0x0000_12347FF4 on the AXI bus.

With C_PCIEBAR2AXIBAR_0 = 0x00000000_12340000 and PF0_BAR0_APERTURE_SIZE = 0x12
(the least significant 15 bits (14:0) provide the window), the PCIe write address
0x20000000_ABCDFFF4 within PCIe BAR 0 (set by the Root Complex to 0x20000000_ABCD8000)
gives the intermediate address 0x7FF4, which is added to C_PCIEBAR2AXIBAR_0 to yield
0x0000_12347FF4.

• Accessing the Bridge PCIe BAR_1 with address 0xA0000000_1235FEDC on the bus for PCIe
yields 0x0000_FE35FEDC on the AXI bus.
Example 4
This example shows the generic settings of four AXI BARs and the address translation of AXI
addresses to remote 32-bit and 64-bit addresses for PCIe®. This setting of AXI BARs does not
depend on the BARs for PCIe within the Bridge.
In this example, where the number of AXI BARs is 4, the following assignments for each range
are made:
Aperture_Base_Address_0 =0x00000000_12340000
Aperture_High_Address_0 =0x00000000_1234FFFF (64 KB)
AXI_to_PCIe_Translation_0=0x00000000_56710000 (Bits 63-32 are zero to produce a 32-bit PCIe TLP. Bits 15-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 16 bits are invalid translation values.)
Aperture_Base_Address_1 =0x00000000_ABCDE000
Aperture_High_Address_1 =0x00000000_ABCDFFFF (8 KB)
AXI_to_PCIe_Translation_1=0x50000000_FEDC0000 (Bits 63-32 are non-zero to produce a 64-bit PCIe TLP. Bits 12-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 13 bits are invalid translation values.)
Aperture_Base_Address_2 =0x00000000_FE000000
Aperture_High_Address_2 =0x00000000_FFFFFFFF (32 MB)
AXI_to_PCIe_Translation_2=0x00000000_40000000 (Bits 63-32 are zero to produce a 32-bit PCIe TLP. Bits 24-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 25 bits are invalid translation values.)
Aperture_Base_Address_3 =0x00000000_00000000
Aperture_High_Address_3 =0x00000000_00000FFF (4 KB)
AXI_to_PCIe_Translation_3=0x60000000_87654000 (Bits 63-32 are non-zero to produce a 64-bit PCIe TLP. Bits 11-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 12 bits are invalid translation values.)
• Accessing the Bridge AXI BAR_0 with address 0x0000_12340ABC on the AXI bus yields
0x56710ABC on the bus for PCIe.
• Accessing the Bridge AXI BAR_1 with address 0x0000_ABCDF123 on the AXI bus yields
0x50000000_FEDC1123 on the bus for PCIe.
• Accessing the Bridge AXI BAR_2 with address 0x0000_FFFEDCBA on the AXI bus yields
0x41FEDCBA on the bus for PCIe.
• Accessing the Bridge AXI BAR_3 with address 0x0000_00000071 on the AXI bus yields
0x60000000_87654071 on the bus for PCIe.
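The translation rule shared by Examples 1 through 4 can be summarized in a short C sketch: the offset of the AXI address within its BAR aperture (the intermediate address) is combined with the translation value. The function below assumes the base address is aligned to the aperture size and that the aperture size is a power of two, as in the examples above; it is an illustration, not IP source code.

#include <stdint.h>
#include <stdio.h>

/* Intermediate address = AXI address offset within the AXI BAR aperture;
 * the PCIe address is the translation value with that offset filled in. */
static uint64_t axi_to_pcie(uint64_t axi_addr, uint64_t bar_base,
                            uint64_t bar_high, uint64_t translation)
{
    uint64_t aperture = bar_high - bar_base + 1;  /* power of two */
    uint64_t offset   = axi_addr & (aperture - 1);
    return translation | offset;
}

int main(void)
{
    /* Example 4, AXI BAR_2: 0x0000_FFFEDCBA -> 0x41FEDCBA */
    printf("0x%llx\n", (unsigned long long)
           axi_to_pcie(0xFFFEDCBAULL, 0xFE000000ULL,
                       0xFFFFFFFFULL, 0x40000000ULL));
    return 0;
}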
Addressing Checks
When setting the following parameters for PCIe® address mapping, C_PCIEBAR2AXIBAR_n and
PF0_BARn_APERTURE_SIZE, be sure these are set to allow for the addressing space on the AXI
system. For example, the following setting is illegal and results in an invalid AXI address.
C_PCIEBAR2AXIBAR_n=0x00000000_FFFFF000
PF0_BARn_APERTURE_SIZE=0x06 (8 KB)
For an 8 KB BAR, the lower 13 bits must be zero. As a result, the C_PCIEBAR2AXIBAR_n
value should be modified to 0x00000000_FFFFE000. Also, check for a larger value on
PF0_BARn_APERTURE_SIZE compared to the value assigned to the C_PCIEBAR2AXIBAR_n
parameter. An example parameter setting follows.
C_PCIEBAR2AXIBAR_n=0xFFFF_E000
PF0_BARn_APERTURE_SIZE=0x0D (1 MB)
To keep the AXIBAR upper address bits as 0xFFFF_E000 (to reference bits [31:13]), the
PF0_BARn_APERTURE_SIZE parameter must be set to 0x06 (8 KB).
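The rule above can be captured in a small check: the translation value must have zeros in every bit covered by the BAR aperture. The sketch below expresses the aperture as a byte count (for example, 8 KB for PF0_BARn_APERTURE_SIZE = 0x06 above); it is a minimal illustration, not part of the IP.

#include <stdint.h>
#include <stdio.h>

/* The low bits of C_PCIEBAR2AXIBAR_n that fall inside the BAR aperture
 * must be zero, or the translated AXI address is invalid. */
static int translation_is_legal(uint64_t pciebar2axibar,
                                uint64_t aperture_bytes)
{
    return (pciebar2axibar & (aperture_bytes - 1)) == 0;
}

int main(void)
{
    printf("%d\n", translation_is_legal(0xFFFFF000ULL, 8 * 1024)); /* 0: illegal */
    printf("%d\n", translation_is_legal(0xFFFFE000ULL, 8 * 1024)); /* 1: legal  */
    return 0;
}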
Malformed TLP
The integrated block for PCI Express® detects a malformed TLP. For the IP configured as an
Endpoint core, a malformed TLP results in a fatal error message being sent upstream if error
reporting is enabled in the Device Control register.
Abnormal Conditions
This section describes how the Slave side and Master side (see the following tables) of the Bridge
core handle abnormal conditions.
Unexpected Completion
When the slave bridge receives a completion TLP, it matches the header RequesterID and Tag to
the outstanding RequesterID and Tag. A match failure indicates the TLP is an Unexpected
Completion which results in the completion TLP being discarded and a Slave Unexpected
Completion (SUC) interrupt strobe being asserted. Normal operation then continues.
Unsupported Request
A device for PCIe might not be capable of satisfying a specific read request. For example, if the
read request targets an unsupported address for PCIe, the completer returns a completion TLP
with a completion status of 0b001 - Unsupported Request. A completion TLP returned with a
completion status of Reserved must also be treated as an Unsupported Request, according to
the PCI Express Base Specification v3.0. When the slave bridge
receives an unsupported request response, the Slave Unsupported Request (SUR) interrupt is
asserted and the DECERR response is asserted with arbitrary data on the AXI4 memory mapped
bus.
Completion Timeout
A Completion Timeout occurs when a completion (Cpl) or completion with data (CplD) TLP is not
returned after an AXI to PCIe memory read request, or after a PCIe Configuration Read/Write
request. For PCIe Configuration Read/Write requests, completions must complete within the
C_COMP_TIMEOUT parameter selected value from the time the request is issued. For PCIe
Memory Read requests, completions must complete within the value set in the Device Control 2
register in the PCIe Configuration Space register. When a completion timeout occurs, an OKAY
response is asserted with all 1s data on the memory mapped AXI4 bus.
Error Poison
An Error Poison occurs when the completion TLP EP bit is set, indicating that there is poisoned
data in the payload. When the slave bridge detects the poisoned packet, the Slave Error Poison
(SEP) interrupt is asserted and the SLVERR response is asserted with arbitrary data on the
memory mapped AXI4 bus.
Completer Abort
A Completer Abort occurs when the completion TLP completion status is 0b100 - Completer
Abort. This indicates that the completer has encountered a state in which it was unable to
complete the transaction. When the slave bridge receives a completer abort response, the Slave
Completer Abort (SCA) interrupt is asserted and the SLVERR response is asserted with arbitrary
data on the memory mapped AXI4 bus.
Max Payload Size for PCIe, Max Read Request Size or 4K Page Violated
When the master bridge receives a SLVERR response from the addressed AXI slave, the request
is discarded and the Master SLVERR (MSE) interrupt is asserted. If the request was non-posted, a
completion packet with the Completion Status = Completer Abort (CA) is returned on the bus for
PCIe.
Completion Packets
When the MAX_READ_REQUEST_SIZE is greater than the MAX_PAYLOAD_SIZE, a read request
for PCIe can ask for more data than the master bridge can insert into a single completion packet.
When this situation occurs, multiple completion packets are generated up to
MAX_PAYLOAD_SIZE, with the Read Completion Boundary (RCB) observed.
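As a rough model of this behavior, the sketch below counts the CplD TLPs needed for a read request when MRRS exceeds MPS. It ignores RCB-induced extra splits for unaligned start addresses, so it gives a lower bound rather than the IP's exact segmentation.

/* Minimal model: a MemRd of req_bytes is answered with ceil(req/MPS)
 * completion TLPs; RCB alignment can add further splits not modeled here. */
static unsigned cpld_count(unsigned req_bytes, unsigned mps_bytes)
{
    return (req_bytes + mps_bytes - 1) / mps_bytes;
}
/* Example: a 4096-byte read with MPS = 256 needs at least 16 CplD TLPs. */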
Poison Bit
When the poison bit is set in a transaction layer packet (TLP) header, the payload following the
header is corrupted. When the master bridge receives a memory request TLP with the poison bit
set, it discards the TLP and asserts the Master Error Poison (MEP) interrupt strobe.
When the master bridge receives a write request with Length = 0x1, FirstBE = 0x00, and
LastBE = 0x00, the request has no effect.
When a Hot Reset is received by the Bridge core, the link goes down and the PCI Configuration
Space must be reconfigured.
Initiated AXI4 write transactions that have not yet completed on the AXI4 bus when the link
goes down have a SLVERR response given and the write data is discarded. Initiated AXI4 read
transactions that have not yet completed on the AXI4 bus when the link goes down have a
SLVERR response given, with arbitrary read data returned.
Any MemWr TLPs for PCIe that have been received, but the associated AXI4 write transaction has
not started when the link goes down, are discarded.
Endpoint
When configured to support Endpoint functionality, the Bridge core fully supports Endpoint
operation as supported by the underlying block. There are a few details that need special
consideration. The following subsections contain information and design considerations about
Endpoint support.
Interrupts
Interrupt capabilities are provided by the underlying PCI Express® solution IP. For additional
information, see the Virtex-7 FPGA Integrated Block for PCI Express LogiCORE IP Product Guide
(PG023), the UltraScale Devices Gen3 Integrated Block for PCI Express LogiCORE IP Product Guide
(PG156), and the UltraScale+ Devices Integrated Block for PCI Express LogiCORE IP Product Guide
(PG213).
Multiple interrupt modes can be configured during IP configuration, however only one interrupt
mode is used at runtime. If multiple interrupt modes are enabled by the host after PCI bus
enumeration at runtime, the core uses the MSI interrupt over Legacy interrupt. Both MSI and
Legacy interrupt modes are sent using the same int_msi_* interface, and the core
automatically picks the best available interrupt mode at runtime. MSI-X is implemented
externally to the core and uses a separate cfg_interrupt_msix_* interface. The core does
not prevent the use of MSI-X at any time even when other interrupt modes are enabled;
however, if MSI-X is enabled, Xilinx recommends using MSI-X exclusively over the other
interrupt modes, even if they are all enabled at runtime.
Legacy Interrupts
Asserting intx_msi_request when legacy interrupts are enabled causes the IP to issue a
legacy interrupt over PCIe. After intx_msi_request is asserted, it must remain asserted
until the intx_msi_grant signal is asserted and the interrupt has been serviced and cleared by
the host. The intx_msi_grant assertion indicates the requested interrupt has been sent on
the PCIe block. The Message sent is based on the value of the PF0_INTERRUPT_PIN parameter,
which is configurable during core customization in the Vivado Design Suite. Keeping
intx_msi_request asserted ensures the interrupt pending register within the IP remains
asserted when queried by the host's Interrupt Service Routine (ISR) to determine the source of
interrupts. You must implement a mechanism in the user application to know when the interrupt
routine has been serviced. This detection can be done in many different ways depending on your
application and your use of this interrupt pin. This typically involves a register (or array of
registers) implemented in the user application that is cleared, read, or modified by the host
software when an interrupt is serviced.
This figure shows only the handshake between intx_msi_request and intx_msi_grant.
The user application might not clear or service the interrupt immediately, in which case, you must
keep intx_msi_request asserted past intx_msi_grant.
Related Information
Bridge Parameters
MSI Interrupts
Asserting intx_msi_request causes the generation of an MSI interrupt if MSI is enabled.
The MSI vector number being used must not exceed the number of MSI vectors being enabled
by the host. This information can be queried from the msi_vector_width[2:0] signal after
the host has enumerated and enabled MSI interrupts at runtime. The encoding of the
msi_vector_width[2:0] signal is shown in the following table.
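The table is not reproduced in this excerpt. As a hedged sketch, the encoding is assumed here to follow the standard MSI Multiple Message Enable convention, where a value w grants 2^w vectors; verify this against the table in the full guide before relying on it.

/* Assumed standard MSI encoding: msi_vector_width = w enables 2^w vectors
 * (0 -> 1 vector, 1 -> 2, 2 -> 4, 3 -> 8, 4 -> 16, 5 -> 32). */
static unsigned msi_vectors_enabled(unsigned msi_vector_width)
{
    return (msi_vector_width <= 5u) ? (1u << msi_vector_width) : 0u;
}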
MSI-X Interrupts
The core supports the MSI-X interrupt and its signaling. The MSI-X vector table and the MSI-X
Pending Bit Array need to be implemented as part of the user logic, by claiming a BAR aperture.
The External MSI-X interrupts mode is enabled when you set the MSI-X Implementation Location
option to External in the PCIe Misc Tab.
To send an MSI-X interrupt, the user logic must use the cfg_interrupt_msix_* interface instead of the
intx_msi_* interface. The signaling requirement is the same as defined in the UltraScale Devices
Gen3 Integrated Block for PCIe core as shown below.
Multiple interrupt modes can be configured during IP configuration, however only one interrupt
mode is used at runtime. If multiple interrupt modes are enabled by the host after PCI bus
enumeration at runtime, MSI-X interrupt takes precedence over MSI interrupt, and MSI interrupt
takes precedence over Legacy interrupt. All of these interrupt modes are sent using the same
usr_irq_* interface and the core automatically picks the best available interrupt mode at
runtime.
Legacy Interrupts
Asserting one or more bits of usr_irq_req when legacy interrupts are enabled causes the IP to
issue a legacy interrupt over PCIe. Multiple bits may be asserted simultaneously but each bit
must remain asserted until the corresponding usr_irq_ack bit has been asserted. After a
usr_irq_req bit is asserted, it must remain asserted until the corresponding usr_irq_ack bit
is asserted and the interrupt has been serviced and cleared by the Host. The usr_irq_ack
assertion indicates the requested interrupt has been sent on the PCIe block. This ensures the
interrupt pending register within the IP remains asserted when queried by the Host's Interrupt
Service Routine (ISR) to determine the source of interrupts. You must implement a mechanism in
the user application to know when the interrupt routine has been serviced. This detection can be
done in many different ways depending on your application and your use of this interrupt pin.
This typically involves a register (or array of registers) implemented in the user application that is
cleared, read, or modified by the Host software when an interrupt is serviced.
After the usr_irq_req bit is deasserted, it cannot be reasserted until the corresponding
usr_irq_ack bit has been asserted for a second time. This indicates the deassertion message
for the legacy interrupt has been sent over PCIe. After a second usr_irq_ack occurred, the
usr_irq_req wire can be reasserted to generate another legacy interrupt.
The usr_irq_req bit can be mapped to legacy interrupt INTA, INTB, INTC, INTD through the
configuration registers. The following figure shows the legacy interrupts.
This figure shows only the handshake between usr_irq_req and usr_irq_ack. The user
application might not clear or service the interrupt immediately, in which case, you must keep
usr_irq_req asserted past usr_irq_ack.
Asserting one or more bits of usr_irq_req causes the generation of an MSI or MSI-X interrupt
if MSI or MSI-X is enabled. If both MSI and MSI-X capabilities are enabled, an MSI-X interrupt is
generated. The Internal MSI-X interrupts mode is enabled when you set the MSI-X
Implementation Location option to Internal in the PCIe Misc Tab.
After a usr_irq_req bit is asserted, it must remain asserted until the corresponding
usr_irq_ack bit is asserted and the interrupt has been serviced and cleared by the Host. The
usr_irq_ack assertion indicates the requested interrupt has been sent on the PCIe block. This
will ensure the interrupt pending register within the IP remains asserted when queried by the
Host's Interrupt Service Routine (ISR) to determine the source of interrupts. You must implement
a mechanism in the user application to know when the interrupt routine has been serviced. This
detection can be done in many different ways depending on your application and your use of this
interrupt pin. This typically involves a register (or array of registers) implemented in the user
application that is cleared, read, or modified by the Host software when an Interrupt is serviced.
Configuration registers are available to map usr_irq_req and DMA interrupts to MSI or MSI-X
vectors. For MSI-X support, there is also a vector table and PBA table. The following figure shows
the MSI interrupt.
This figure shows only the handshake between usr_irq_req and usr_irq_ack. Your
application might not clear or service the interrupt immediately, in which case, you must keep
usr_irq_req asserted past usr_irq_ack.
Root Port
When configured to support Root Port functionality, the Bridge core fully supports Root Port
operation as supported by the underlying block. There are a few details that need special
consideration. The following subsections contain information and design considerations about
Root Port support.
When an ECAM access is attempted to a bus number that is in the range defined by the
secondary bus number and subordinate bus number range (not including secondary bus number),
then Type 1 configuration transactions are generated. The primary, secondary and subordinate
bus numbers are written and updated by Root Port software to the Type 1 PCI Configuration
Header of the Bridge core in the enumeration procedure.
When an ECAM access is attempted to a bus number that is outside the range defined by the
secondary bus number and subordinate bus number, the bridge does not generate a
configuration request and signals a SLVERR response on the AXI4 bus.
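Combining this with the secondary-bus behavior described later in the Memory Map chapter, the request-type selection can be sketched as follows; this is an illustration of the stated rules, not IP source code.

typedef enum { CFG_NONE, CFG_TYPE0, CFG_TYPE1 } cfg_req_t;

/* Secondary bus -> Type 0; (secondary, subordinate] -> Type 1;
 * anything else -> no request, and the bridge responds with SLVERR. */
static cfg_req_t ecam_request_type(unsigned bus, unsigned secondary,
                                   unsigned subordinate)
{
    if (bus == secondary)
        return CFG_TYPE0;
    if (bus > secondary && bus <= subordinate)
        return CFG_TYPE1;
    return CFG_NONE;
}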
When an Unsupported Request (UR) response is received for a configuration read request, all
ones are returned on the AXI4 bus to signify that a device does not exist at the requested device
address. It is the responsibility of the software to ensure configuration write requests are not
performed to device addresses that do not exist. However, the Bridge core asserts SLVERR
response on the AXI4 bus when a configuration write request is performed on device addresses
that do not exist or a UR response is received.
During core customization in the Vivado® Design Suite, when no BAR is enabled, the Root Port
passes all received packets to the user application without address translation or address
filtering.
When a BAR is enabled, by default the BAR address starts at 0x0000_0000 unless programmed
separately. Any packet received from the PCIe® link that hits a BAR is translated according to the
PCIe-to-AXI Address Translation rules.
Note: The IP must not receive any TLPs outside of the PCIe BAR range from the PCIe link when the RP BAR is
enabled. If this rule cannot be enforced, it is recommended to disable the PCIe BAR and perform address
filtering and/or translation outside of the IP.
The Root Port BAR customization options in the Vivado Design Suite are found in the PCIe BARs
Tab.
Related Information
Receiving Interrupts
In Root Port mode, you can choose one of two ways to handle incoming interrupts:
• Legacy Interrupt FIFO mode: Legacy Interrupt FIFO mode is the default. It is available in
earlier Bridge IP variants and versions, and will continue to be available. Legacy Interrupt FIFO
mode is geared towards compatibility for legacy designs.
• Interrupt Decode mode: Interrupt Decode mode is available in the CPM AXI Bridge. Interrupt
Decode mode can be used to mitigate the Interrupt FIFO overflow condition, which can occur
in a design that receives interrupts at a high rate, and it avoids the performance penalty
incurred when that condition occurs.
If you are customizing and generating the core in the Vivado® IP integrator, replace get_ips
with get_bd_cells.
INTx Interrupt
When an INTx interrupt is received, the user application must follow this procedure to service
the interrupt (a C sketch of this sequence follows the steps):
1. Optional: Write 0 to the Interrupt Mask register bit [16] to deassert the interrupt_out pin
while interrupt is being serviced.
2. Read the Root Port Status/Control Register bit [18] to check if it is not empty.
3. Read the Root Port Status/Control Register bit [19] to check if it has overflowed.
4. If the interrupt FIFO is not empty, read Root Port Interrupt FIFO Read Register 1 to check
which interrupt line is serviced, and whether this is an Assertion or Deassertion message.
5. Write 1 to the Root Port Interrupt FIFO Read Register 1 bit [31] to remove the interrupt that
the user has just read from the FIFO.
6. Repeat from step 2 to step 5 until the FIFO is indicated as empty.
7. If at any time during this process the FIFO is indicated as overflowed (status from step 3), the
user application must check any interrupt line that has not been serviced to check for any
pending interrupt on that line. Failure to do this before continuing may leave some interrupt
line unserviced.
8. Write 1 to the Interrupt Decode register bit [16] to clear the INTx interrupt bit.
9. If step 1 is executed, write 1 to the Interrupt Mask register bit [16] to re-enable the
interrupt_out pin for future INTx interrupt.
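A minimal C sketch of this procedure follows. The Interrupt Decode (0x138) and Root Port Status/Control (0x148) offsets are taken from the Memory Map chapter of this guide; INT_MASK_OFS and FIFO_RD1_OFS are hypothetical placeholders, so substitute the real offsets from the register map.

#include <stdint.h>

/* BRIDGE_BASE is the AXI4-Lite base address assigned to the Bridge. */
extern uintptr_t BRIDGE_BASE;
#define REG32(ofs) (*(volatile uint32_t *)(BRIDGE_BASE + (ofs)))

#define INT_DECODE_OFS 0x138u  /* Interrupt Decode register */
#define RP_STATUS_OFS  0x148u  /* Root Port Status/Control register */
#define INT_MASK_OFS   0x13Cu  /* hypothetical: Interrupt Mask register */
#define FIFO_RD1_OFS   0x158u  /* hypothetical: Root Port Interrupt FIFO Read 1 */

void service_intx_fifo(void)
{
    REG32(INT_MASK_OFS) &= ~(1u << 16);             /* step 1 (optional) */
    for (;;) {
        uint32_t status = REG32(RP_STATUS_OFS);
        int not_empty   = (status >> 18) & 1;       /* step 2 */
        int overflowed  = (status >> 19) & 1;       /* step 3 */
        if (!not_empty)
            break;
        uint32_t entry = REG32(FIFO_RD1_OFS);       /* step 4 */
        (void)entry;        /* decode line and assert/deassert type here */
        REG32(FIFO_RD1_OFS) = 1u << 31;             /* step 5: pop FIFO */
        if (overflowed) {
            /* step 7: poll every line that might still be pending */
        }
    }
    REG32(INT_DECODE_OFS) = 1u << 16;               /* step 8: clear INTx */
    REG32(INT_MASK_OFS) |= 1u << 16;                /* step 9: re-enable */
}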
MSI Interrupt
The IP will decode the MSI interrupt based on the value programmed in Root Port MSI Base
Register 1 and Root Port MSI Base Register 2. Any Memory Write TLP received from the link
with an address that falls within a 4 KB window from the base address programmed in those
registers will be treated as an MSI interrupt, and will not be forwarded to the M_AXI(B) interface.
When an MSI interrupt is received, the Interrupt Decode register bit[17] will be set. If the
Interrupt Mask register bit[17] is also set, the interrupt_out pin is asserted. After receiving
this interrupt, the user application must follow the following procedure to service the interrupt:
1. Optional: Write 0 to the Interrupt Mask register bit [17] to deassert the interrupt_out pin
while the interrupt is being serviced.
2. Read the Root Port Status/Control Register bit [18] to check if it is not empty.
3. Read the Root Port Status/Control Register bit [19] to check if it has overflowed.
4. If the interrupt FIFO is not empty, read the Root Port Interrupt FIFO Read Register 2 to check
MSI Message Data from the received MSI interrupt. This is used by the user application to
determine the interrupt vector number and can also be used to determine the source of the
interrupt.
5. Write 1 to the Root Port Interrupt FIFO Read Register 1 bit [31] to remove the interrupt the
user has just read from the FIFO.
6. Repeat from step 2 until the FIFO is indicated as empty.
7. If at any time during this process, the FIFO was indicated as overflowed (status from step 3),
the user application must check any unserviced interrupt vectors to check for any pending
interrupts on that line. Failure to do this before continuing can leave some interrupt vector
unserviced.
8. Write 1 to the Interrupt Decode Register bit [17] to clear the MSI interrupt bit.
9. If step 1 was executed, write 1 to the Interrupt Mask Register bit [17] to re-enable the
interrupt_out pin for future MSI interrupts.
MSI-X Interrupt
All MSI-X interrupts must be decoded by the user application externally to the IP. To do this, set
all Endpoints to use an MSI-X address that falls outside of the range of the 4 KB window
from the base address programmed in the Root Port MSI Base Register 1 and Root Port MSI Base
Register 2. All MSI-X interrupts will be forwarded to the M_AXI(B) interface.
All TLPs forwarded to M_AXI(B) interface are subject to the PCIe-to-AXI Address translation.
INTx Interrupt
When an INTx interrupt is received in Interrupt Decode mode, the user application must follow
this procedure to service the interrupt:
1. Optional: Write 0 to the Interrupt Decode 2 Mask register to deassert an interrupt line while
the interrupt is being serviced.
2. Read the Root Port Interrupt Decode 2 register to check which interrupt line is currently
asserted.
3. Repeat step 2 until all interrupt lines are deasserted. The interrupt line is automatically
cleared when the IP receives the INTx Deassert Message corresponding to that interrupt line.
4. If step 1 was executed, write 1 to the Interrupt Decode 2 Mask register to re-enable an
interrupt line for future INTx interrupt.
MSI Interrupt
The IP decodes the MSI interrupt based on the value programmed in Root Port MSI Base
Register 1 and Root Port MSI Base Register 2. Any Memory Write TLPs received from the link
with an address that falls within the 4 KB window from the base address programmed in those
registers will be treated as an MSI interrupt, and will not be forwarded to the M_AXI(B) interface.
Note: MSI Message Data [5:0] will always be decoded as MSI Message vector regardless of how many
vectors are enabled at your Endpoint.
When an MSI interrupt is received, a bit in the Root Port MSI Interrupt Decode 1 or Root Port
MSI Interrupt Decode 2 register is set. If the corresponding bit in the Root Port MSI Interrupt
Decode 1 or 2 Mask register is also set, the interrupt_out_msi_vec* pins are asserted.
interrupt_out_msi_vec0to31 corresponds to MSI vector 0 - 31, and
interrupt_out_msi_vec32to63 corresponds to MSI vector 32 - 63. After receiving this
interrupt, the user application must follow this procedure to service the interrupt:
1. Optional: Write 0 to the Root Port MSI Interrupt Decode 1 or 2 Mask register to deassert the
interrupt_out_msi_vec* pins while the interrupt is being serviced.
2. Read the Root Port MSI Interrupt Decode 1 or 2 register to check which interrupt vector is
asserted.
3. Write 1 to the Root Port MSI Interrupt Decode 1 or 2 register to clear the MSI interrupt bit.
4. If step 1 was executed, write 1 to the Root Port MSI Interrupt Decode 1 or 2 Mask register
bit to re-enable the interrupt_out_msi_vec* pins for future MSI interrupts.
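A minimal sketch of this Decode-mode MSI sequence follows, reusing the REG32 helper and base address from the earlier sketch. MSI_DEC1_OFS and MSI_DEC1_MASK_OFS are hypothetical placeholders for the Root Port MSI Interrupt Decode 1 register and its mask; take the real offsets from the register map.

#define MSI_DEC1_OFS      0x160u  /* hypothetical offset */
#define MSI_DEC1_MASK_OFS 0x164u  /* hypothetical offset */

void service_msi_decode1(void)
{
    REG32(MSI_DEC1_MASK_OFS) = 0;              /* step 1 (optional) */
    uint32_t vectors = REG32(MSI_DEC1_OFS);    /* step 2: vectors 0-31 */
    /* ... service each vector whose bit is set ... */
    REG32(MSI_DEC1_OFS) = vectors;             /* step 3: write 1 to clear */
    REG32(MSI_DEC1_MASK_OFS) = ~0u;            /* step 4: re-enable */
}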
MSI-X Interrupt
All MSI-X interrupts must be decoded by the user application externally to the IP. To do this, the
user application must set all Endpoints to use an MSI-X address that falls outside of the range of
the 4 KB window from the base address programmed in the Root Port MSI Base Register 1 and
the Root Port MSI Base Register 2. All MSI-X interrupts are forwarded to the M_AXI(B) interface.
All TLPs forwarded to M_AXI(B) interface are subject to PCIe-to-AXI Address translation.
Port Descriptions
The interface signals for the core are described in the following tables.
Global Signals
The interface signals for the Bridge are described in the following table.
Table 15: DMA/Bridge Subsystem for PCIe in Bridge Mode Interrupt Signals
PCIe Interface
Table 16: PCIe Interface Signals
Bridge Parameters
Because many features in the Bridge core design can be parameterized, you can uniquely tailor
the implementation of the core using only the resources required for the desired functionality.
This approach also achieves the best possible performance with the lowest resource usage.
The parameters defined for the Bridge are shown in the following table.
Memory Map
There are three distinct register spaces, which are described in this section: the Bridge Register
Memory Map, the Enhanced Configuration Access Memory Map, and the DMA/Bridge
Subsystem for PCIe Register Memory Map.
In the AXI Bridge for PCI Express Gen3 IP, only the Bridge Register Memory Map and the
Enhanced Configuration Access Memory Map are used. In the DMA/Bridge Subsystem for PCI
Express® (in AXI Bridge mode), all three register spaces are used. These registers are described in
more detail in the following section. During reset, all registers return to their default values.
In AXI Bridge for PCI Express Gen3 IP, all registers are accessed through the AXI4-Lite Control
Interface and are offset from the AXI Base Address assigned to this interface (C_BASEADDR
parameter).
In DMA/Bridge Subsystem for PCI Express in AXI Bridge mode, all registers are accessed through
the AXI4-Lite Control Interface and are offset from the AXI base address assigned to this
interface. C_BASEADDR parameter has been deprecated in this particular IP and the IP is no
longer doing address filtering for the AXI4-Lite Control interface. However, if AXI Interconnect IP
is used in the design, AXI Crossbar IP within it will do address filtering based on the AXI Address
assigned to each interface. The DMA/Bridge Subsystem for PCI Express in AXI Bridge mode uses
bit 28 of the address bus to select between each register space:
• If the AXI4-Lite Slave address bit 28 is set to 0, access is directed to the Bridge Register
Memory Map and Enhanced Configuration Access Memory Map.
• If the AXI4-Lite Slave address bit 28 is set to 1, access is directed to the DMA/Bridge
Subsystem for PCIe Register Memory Map.
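This selection can be captured in a one-line decode, as a minimal sketch:

/* Bit 28 of the AXI4-Lite address selects the register space:
 * 0 -> Bridge/ECAM registers, 1 -> DMA/Bridge Subsystem registers. */
static int is_dma_register_space(unsigned int axil_addr)
{
    return (axil_addr >> 28) & 1;
}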
Note: Registers that are marked as Reserved may appear writable or have a value other than 0. If your
application compares data after registers are written, ensure that it ignores the values returned in the
reserved bit positions.
The Bridge registers are mapped within the PCIe® Extended Configuration Space memory
region. The AXI Bridge VSEC registers are only accessible from the Bridge IP AXI4-Lite Control
Interface; therefore, at the start of the PCIe Extended Configuration Space (offset 0x100), the
capability header points to address 0x128 as the next capability pointer. When the PCIe
Extended Configuration Space is accessed from the PCIe link, the next capability pointer field
points to the next enabled PCIe capability instead of the AXI Bridge VSEC register.
Follow this sequence to clear the Correctable, Non-Fatal, and Fatal bits:
1. Clear the Root Port Error FIFO (0x154) by performing first a read, followed by write-back of
the same register.
2. Read Root Port Status/Control Register (0x148) bit 16, and ensure that the Error FIFO is
empty.
Note: If the error FIFO is still not empty, repeat step 1 and step 2 until the Error FIFO is empty.
3. Write to the Interrupt Decode Register (0x138) with 1 to the appropriate error bit to clear it.
IMPORTANT! An asserted bit in the Interrupt Decode register does not cause the interrupt line to assert
unless the corresponding bit in the Interrupt Mask register is also set.
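Using the offsets given above (Root Port Error FIFO 0x154, Root Port Status/Control 0x148, Interrupt Decode 0x138) and the REG32 helper from the earlier sketch, the sequence can be expressed as follows. The polarity of Status/Control bit 16 is assumed here to read 1 when the Error FIFO is empty; confirm it against the register description.

void clear_rp_error(unsigned error_bit)
{
    do {
        uint32_t e = REG32(0x154);   /* step 1: read the Error FIFO ...   */
        REG32(0x154) = e;            /* ... then write it back to pop it  */
    } while (((REG32(0x148) >> 16) & 1) == 0);  /* step 2: until empty */
    REG32(0x138) = 1u << error_bit;  /* step 3: write 1 to clear the bit */
}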
Note: Reads are non-destructive. Removing the message from the FIFO requires a write to either this
register or the Root Port Interrupt FIFO Read 2 register. The write value is ignored.
Reads are non-destructive. Removing the message from the FIFO requires a write to either this
register or the Root Port Interrupt FIFO Read 1 register (write value is ignored).
The Root Port Interrupt Decode 2 Register reads from this location return INTx interrupt source
status. Data from each read follows the format shown in the following table. For non-Root Port
cores, reads return 0.
The Root Port Interrupt Decode 2 Mask register controls whether an INTx interrupt is checked by
Interrupt Decode bit 16 and also forwarded to interrupt_out in Interrupt Decode mode. The
Root Port Interrupt Decode 2 Mask Register initializes to all zeros. The following table describes
the register bits.
Table 42: AXI Base Address Translation Configuration Register Bit Definitions

The same register pair is defined for each AXI BAR n, where n is 0 through 5. All fields have core access R/W.

Bits 31-0, Lower Address n:
  Reset value: C_AXIBAR2PCIEBAR_n(31 to 0)
  Description: To create the address for PCIe, this is the value substituted for the least significant 32 bits of the AXI address.

Bits 31-0, Upper Address n:
  Reset value: C_AXIBAR2PCIEBAR_n(63 to 32) if C_AXIBAR2PCIEBAR_n is 64 bits; 0x00000000 if C_AXIBAR2PCIEBAR_n is 32 bits
  Description: To create the address for PCIe, this is the value substituted for the most significant 32 bits of the AXI address.
user_int_enmask, bits [NUM_USR_INT-1:0]
  Access type: RW
  Default: {NUM_USR_INT}'b1 in AXI Bridge functional mode; 'h0 in DMA functional mode
  Description: User Interrupt Enable Mask.
    0: Prevents an interrupt from being generated when the user interrupt source is asserted.
    1: Generates an interrupt on the rising edge of the user interrupt source. If the Enable Mask is set and the source is already set, a user interrupt is also generated.
If MSI is enabled, this register specifies the MSI or MSI-X vector number of the MSI. In Legacy
interrupts, only the two LSBs of each field should be used to map to INTA, B, C, or D.
The address breakdown is defined in the following table. ECAM is used in conjunction with the
Bridge Register Memory Map in both the AXI Bridge for PCIe Gen3 core and the DMA/Bridge
Subsystem for PCIe in AXI Bridge mode core. The DMA/Bridge Subsystem for PCIe Register
Memory Map does not have ECAM functionality.
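The address breakdown table is not reproduced in this excerpt. As a hedged sketch, the standard ECAM layout places the bus number in bits [27:20], the device in [19:15], the function in [14:12], and the register offset in [11:0]; confirm against the table in the full guide.

#include <stdint.h>

/* Standard ECAM offset composition (assumed layout; see lead-in). */
static uint32_t ecam_offset(unsigned bus, unsigned dev,
                            unsigned fn, unsigned reg)
{
    return ((uint32_t)bus << 20) | ((uint32_t)dev << 15) |
           ((uint32_t)fn  << 12) | ((uint32_t)reg & 0xFFFu);
}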
When an ECAM access is attempted to the primary bus number, which defaults as bus 0 from
reset, then access to the type 1 PCI Configuration Header of the integrated block in the
Enhanced Interface for PCIe is performed. When an ECAM access is attempted to the secondary
bus number, then type 0 configuration transactions are generated. When an ECAM access is
attempted to a bus number that is in the range defined by the secondary bus number and
subordinate bus number (not including the secondary bus number), then type 1 configuration
transactions are generated. The primary, secondary, and subordinate bus numbers are written by
Root Port software to the type 1 PCI Configuration Header of the Enhanced Interface for PCIe in
the beginning of the enumeration procedure.
When an ECAM access is attempted to a bus number that is outside the secondary bus number
and subordinate bus number range, the bridge does not generate a configuration request and
signals a SLVERR response on the AXI4-Lite bus. When the Bridge is configured for EP
(PL_UPSTREAM_FACING = TRUE), the underlying Integrated Block configuration space and the
core memory map are available at the beginning of the memory space. The memory space looks
like a simple PCI Express® configuration space. When the Bridge is configured for RC
(PL_UPSTREAM_FACING = FALSE), the same is true, but it also looks like an ECAM access to
primary bus, Device 0, Function 0.
When the Bridge core is configured as a Root Port, the reads and writes of the local ECAM are
Bus 0. Because the FPGA only has a single Integrated Block for PCIe core, all local ECAM
operations to Bus 0 return the ECAM data for Device 0, Function 0.
Configuration write accesses across the PCI Express bus are non-posted writes and block the
AXI4-Lite interface while they are in progress. Because of this, system software is not able to
service an interrupt if one were to occur. However, abnormal terminations of configuration
transactions can still generate interrupts. ECAM read transactions block subsequent
Requester read TLPs until the configuration read completions packet is returned to allow unique
identification of the completion packet.
Chapter 4
Shared Logic
The AXI Bridge for PCI Express Gen3 and the DMA/Bridge Subsystem for PCIe in AXI
Bridge mode support the Shared Logic and Shared Clocking features that are available in the
PCIe subcore IPs. Note that each device family contains different Shared Logic and/or Shared Clocking
features that can be supported based on the IP configuration. More details about these features
can be found in the “Designing with the Core” chapter in the following documents.
• For the Virtex-7 XT device, see Virtex-7 FPGA Integrated Block for PCI Express LogiCORE IP
Product Guide (PG023).
• For UltraScale devices, see UltraScale Devices Gen3 Integrated Block for PCI Express LogiCORE IP
Product Guide (PG156).
• For UltraScale+ devices, see UltraScale+ Devices Integrated Block for PCI Express LogiCORE IP
Product Guide (PG213).
Clocking
The reference clock input is used to generate the internal clocks used by the core and the output
clock. Note that the reference clock is refclk in Virtex®-7 devices, and sys_clk_gt in
UltraScale™ and UltraScale+™ devices. This clock must be provided at the reference clock
frequency selected in the Vivado® Integrated Design Environment (IDE) during IP generation.
This port should be driven by the PCI Express edge connector clock pins through an
IBUFDS_GTE2 (Virtex-7), IBUFDS_GTE3 (UltraScale), or IBUFDS_GTE4 (UltraScale+) primitive.
The axi_aclk output is the clock used for all AXI interfaces and should drive all corresponding
AXI Interconnect aclk signals as well as the axi_ctl_aclk input port when using the older
version of the AXI Bridge for PCI Express Gen3 core with the axi_ctl_aclk port available
externally.
Note: The axi_aclk output should not be used as the system clock for your design. The axi_aclk is
not a free-running clock output. As noted, axi_aclk may not be present at all times.
For additional information about how the source of the aclk clock might impact your designs,
see the 7 Series FPGAs Clocking Resources User Guide (UG472), or the UltraScale Architecture
Clocking Resources User Guide (UG572).
The following figure shows the clocking diagram for the core in an UltraScale™ device.
The following figure shows the clocking diagram for the core in an UltraScale+™ device.
Table 58: Clock Frequencies and Interface Widths Supported For Various Configurations
Resets
For endpoint configurations, the sys_rst_n signal should be driven by the PCI Express® edge
connector reset (perstn). This serves as the reset for the PCI Express interface.
The axi_aresetn output is the AXI reset signal, synchronous with the clock provided on the
axi_aclk output. This reset should drive all corresponding AXI Interconnect aresetn signals.
The following figure shows the Endpoint system reset connection for the core in an UltraScale™
device.
The following figure shows the Endpoint system reset connection for the core in an UltraScale+
device.
For Root Port configurations, the sys_rst_n signal is generated internally by the user design
and drives the PCI Express slot connector reset (perstn).
The following figure shows the Root Port system reset connection for the core in an UltraScale
device.
The following figure shows the Root Port system reset connection for the core in an UltraScale+
device.
The DMA/Bridge Subsystem for PCIe in AXI Bridge mode provides an optional
dma_bridge_resetn input pin, which allows you to reset all internal Bridge engines and
registers as well as all AXI peripherals driven by the axi_aresetn and axi_ctl_aresetn pins.
When the following parameter is set, dma_bridge_resetn does not need to be asserted
during the initial link-up operation because this is done automatically by the IP. You must
terminate all transactions before asserting this pin. When asserted, the pin must be held
asserted for a duration at least equal to the Completion Timeout value (typically 50 ms) to clear
any pending transfers that might be queued in the data path. To set this parameter, type the
following command at the Tcl command line:
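A minimal sketch of the command follows, assuming the property is CONFIG.soft_reset_en and the IP instance is named xdma_0 (both names are assumptions; substitute your own instance name, and use get_bd_cells instead of get_ips in an IP integrator design).
# Assumption: CONFIG.soft_reset_en enables the optional dma_bridge_resetn
# behavior; xdma_0 is a placeholder instance name.
set_property CONFIG.soft_reset_en true [get_ips xdma_0]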
For PCIe® requests with lengths greater than 1 Dword, the size of the data burst on the Master
AXI interface will always equal the width of the AXI data bus even when the request received
from the PCIe link is shorter than the AXI bus width.
The slave AXI wstrb signal can be used to facilitate data alignment to an address boundary. The
wstrb value can be 0 at the beginning of a valid data cycle, and the bridge calculates the
appropriate offset to the given address. However, the valid data identified by wstrb must be
contiguous from the first byte enable to the last byte enable.
All transactions initiated at the Slave Bridge interface are modified and metered by the IP as
necessary. The Slave Bridge interface conforms to the AXI4 specification and allows burst sizes
up to 4 KB, and the IP splits the transaction automatically according to the PCIe Max Read
Request Size (MRRS), Max Payload Size (MPS), and Read Completion Boundary (RCB). As a result
of this operation, one request in the AXI domain may result in multiple requests in the PCIe
domain, and the IP adjusts the number of issued PCIe requests accordingly to avoid
oversubscribing the available Completion buffer.
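As a worked illustration of this splitting (assuming nothing beyond the rule just described), one 4 KB AXI read burst with a 512-byte MRRS results in eight PCIe read requests:
# Number of PCIe read requests generated from one AXI read burst
set burst_bytes 4096   ;# AXI4 burst length in bytes (up to 4 KB)
set mrrs        512    ;# PCIe Max Read Request Size in bytes
puts [expr {($burst_bytes + $mrrs - 1) / $mrrs}]   ;# prints 8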
• The bresp to the remote (requesting) AXI4 master device for a write to a remote PCIe device
is not issued until the MemWr TLP transmission is guaranteed to be sent on the PCIe link
before any subsequent TX-transfers.
• If the Relaxed Ordering bit is not set within the TLP header, then a remote PCIe device read to a
remote AXI slave is not permitted to pass any previous remote PCIe device writes to a remote
AXI slave received by the Bridge core. The AXI read address phase is held until the previous
AXI write transactions have completed and bresp has been received for the AXI write
transactions. If the Relaxed Ordering attribute bit is set within the TLP header, then the
remote PCIe device read is permitted to pass.
• Read completion data received from a remote PCIe device are not permitted to pass any
remote PCIe device writes to a remote AXI slave received by the Bridge core prior to the read
completion data. The bresp for the AXI write(s) must be received before the completion data
is presented on the AXI read data channel.
Note: The transaction ordering rules for PCIe might have an impact on data throughput in heavy
bidirectional traffic.
• Aperture_Base_Address_n provides the low address where AXI BAR n starts and will be
regarded as address offset 0x0 when the address is translated.
• Aperture_High_Address_n is the high address of the last valid byte address of AXI BAR
n. (For more details on how the address gets translated, see Address Translation.)
When a packet is sent to the core (outgoing PCIe packets), the packet must have an address that
is in the range of Aperture_Base_Address_n and Aperture_High_Address_n. Any
packet that is received by the core that has an address outside of this range will be responded to
with a SLVERR. When the IP integrator is used, these parameters are derived from the Address
Editor tab within the IP integrator. The Address Editor sets the AXI Interconnect as well as the
core so the address range matches, and the packet is routed to the core only when the packet
has an address within the valid range.
Address Translation
The address space for PCIe® is different than the AXI address space. To access one address space
from another address space requires an address translation process. On the AXI side, the bridge
supports mapping to PCIe on up to six 32-bit or 64-bit AXI base address registers (BARs).
• Example 1 (32-bit PCIe Address Mapping) demonstrates how to set up three AXI BARs and
translate the AXI address to a 32-bit address for PCIe.
• Example 2 (64-bit PCIe Address Mapping) demonstrates how to set up three AXI BARs and
translate the AXI address to a 64-bit address for PCIe.
• Example 3 demonstrates how to set up two 64-bit PCIe BARs and translate the address for
PCIe to an AXI address.
• Example 4 demonstrates how to set up a combination of two 32-bit AXI BARs and two 64 bit
AXI BARs, and translate the AXI address to an address for PCIe.
Example 1
In this example, where C_AXIBAR_NUM=3, the following assignments for each range are made:
AXI_ADDR_WIDTH=48
C_AXIBAR_0=0x00000000_12340000
C_AXI_HIGHADDR_0=0x00000000_1234FFFF (64 Kbytes)
C_AXIBAR2PCIEBAR_0=0x00000000_56710000 (Bits 63-32 are zero in order to produce a 32-bit PCIe TLP. Bits 15-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 16 bits are invalid translation values.)
C_AXIBAR_1=0x00000000_ABCDE000
C_AXI_HIGHADDR_1=0x00000000_ABCDFFFF (8 Kbytes)
C_AXIBAR2PCIEBAR_1=0x00000000_FEDC0000 (Bits 63-32 are zero in order to produce a 32-bit PCIe TLP. Bits 12-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 13 bits are invalid translation values.)
C_AXIBAR_2=0x00000000_FE000000
C_AXI_HIGHADDR_2=0x00000000_FFFFFFFF (32 Mbytes)
C_AXIBAR2PCIEBAR_2=0x00000000_40000000 (Bits 63-32 are zero in order to produce a 32-bit PCIe TLP. Bits 24-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 25 bits are invalid translation values.)
• Accessing the Bridge AXIBAR_0 with address 0x0000_12340ABC on the AXI bus yields
0x56710ABC on the bus for PCIe.
Figure: AXI BAR_0 address translation (AXI AWADDR 0x00000000_12340ABC minus C_AXIBAR_0 0x00000000_12340000 gives the intermediate address 0x0ABC, which is added to C_AXIBAR2PCIEBAR_0 0x00000000_56710000).
• Accessing the Bridge AXIBAR_1 with address 0x0000_ABCDF123 on the AXI bus yields
0xFEDC1123 on the bus for PCIe.
• Accessing the Bridge AXIBAR_2 with address 0x0000_FFFEDCBA on the AXI bus yields
0x41FEDCBA on the bus for PCIe.
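The translation arithmetic in these examples can be checked with a few lines of Tcl. This sketch reproduces the AXIBAR_0 access above; the variable names are illustrative only:
# PCIe address = C_AXIBAR2PCIEBAR_n + (AXI address - C_AXIBAR_n)
set c_axibar_0         0x0000000012340000
set c_axibar2pciebar_0 0x0000000056710000
set axi_awaddr         0x0000000012340ABC
set offset    [expr {$axi_awaddr - $c_axibar_0}]
set pcie_addr [expr {$c_axibar2pciebar_0 + $offset}]
puts [format "0x%08X" $pcie_addr]   ;# prints 0x56710ABC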
Example 2
In this example, where C_AXIBAR_NUM=3, the following assignments for each range are made:
AXI_ADDR_WIDTH=48
C_AXIBAR_0=0x00000000_12340000
C_AXI_HIGHADDR_0=0x00000000_1234FFFF (64 Kbytes)
C_AXIBAR2PCIEBAR_0=0x50000000_56710000 (Bits 63-32 are non-zero in order to produce a 64-bit PCIe TLP. Bits 15-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 16 bits are invalid translation values.)
C_AXIBAR_1=0x00000000_ABCDE000
C_AXI_HIGHADDR_1=0x00000000_ABCDFFFF (8 Kbytes)
C_AXIBAR2PCIEBAR_1=0x60000000_FEDC0000 (Bits 63-32 are non-zero in order to produce a 64-bit PCIe TLP. Bits 12-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 13 bits are invalid translation values.)
C_AXIBAR_2=0x00000000_FE000000
C_AXI_HIGHADDR_2=0x00000000_FFFFFFFF (32 Mbytes)
C_AXIBAR2PCIEBAR_2=0x70000000_40000000 (Bits 63-32 are non-zero in order to produce a 64-bit PCIe TLP. Bits 24-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 25 bits are invalid translation values.)
Example 3
This example shows the generic settings to set up two independent BARs for PCIe® and address
translation of addresses for PCIe to a remote AXI address space. This setting of BARs for PCIe
does not depend on the AXI BARs within the bridge.
In this example, where C_PCIEBAR_NUM=2, the following range assignments are made.
AXI_ADDR_WIDTH=48
BAR 0 is set to 0x20000000_ABCD8000 by the Root Port. (Because this is a 64-bit BAR, PCIe BAR1 is disabled.)
PF0_BAR0_APERTURE_SIZE=0x08 (32 Kbytes)
C_PCIEBAR2AXIBAR_0=0x00000000_12340000 (Because the AXI address is 48 bits wide, bits 63-48 should be zero. Based on the PCIe BAR size, bits 14-0 should be zero. Non-zero values in these ranges are invalid.)
BAR 2 is set to 0xA0000000_12000000 by the Root Port. (Because this is a 64-bit BAR, PCIe BAR3 is disabled.)
PF0_BAR2_APERTURE_SIZE=0x12 (32 Mbytes)
C_PCIEBAR2AXIBAR_2=0x00000000_FE000000 (Because the AXI address is 48 bits wide, bits 63-48 should be zero. Based on the PCIe BAR size, bits 24-0 should be zero. Non-zero values in these ranges are invalid.)
• Accessing the Bridge AXIBAR_0 with address 0x20000000_ABCDFFF4 on the bus for PCIe
yields 0x0000_12347FF4 on the AXI bus.
Figure: PCIe BAR 0 address translation (PCIe write address 0x20000000_ABCDFFF4 minus the Root Port-assigned PCIe BAR 0 base 0x20000000_ABCD8000 gives the intermediate address 0x7FF4 within the 32 KB window set by PF0_BAR0_APERTURE_SIZE, which is added to C_PCIEBAR2AXIBAR_0 0x00000000_12340000).
• Accessing Bridge AXIBAR_2 with address 0xA00000001235FEDC on the bus for PCIe yields
0x0000_FE35FEDC on the AXI bus.
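The reverse (PCIe-to-AXI) computation for the BAR 0 access above can be sketched the same way; the variable names are illustrative only:
# AXI address = C_PCIEBAR2AXIBAR_n + (PCIe address - PCIe BAR base)
set pcie_bar_0         0x20000000ABCD8000   ;# programmed by the Root Port
set c_pciebar2axibar_0 0x0000000012340000
set pcie_wr_addr       0x20000000ABCDFFF4
set offset   [expr {$pcie_wr_addr - $pcie_bar_0}]
set axi_addr [expr {$c_pciebar2axibar_0 + $offset}]
puts [format "0x%012llX" $axi_addr]   ;# prints 0x000012347FF4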
Example 4
This example shows the generic settings of four AXI BARs and address translation of AXI
addresses to remote 32-bit and 64-bit addresses for PCIe®. These AXI BAR settings do not
depend on the BARs for PCIe within the Bridge.
In this example, where the number of AXI BARs is 4, the following assignments for each range
are made:
Aperture_Base_Address_0 =0x00000000_12340000
Aperture_High_Address_0 =0x00000000_1234FFFF (64 KB)
AXI_to_PCIe_Translation_0=0x00000000_56710000 (Bits 63-32 are zero to produce a 32-bit PCIe TLP. Bits 15-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 16 bits are invalid translation values.)
Aperture_Base_Address_1 =0x00000000_ABCDE000
Aperture_High_Address_1 =0x00000000_ABCDFFFF (8 KB)
AXI_to_PCIe_Translation_1=0x50000000_FEDC0000 (Bits 63-32 are non-zero to produce a 64-bit PCIe TLP. Bits 12-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 13 bits are invalid translation values.)
Aperture_Base_Address_2 =0x00000000_FE000000
Aperture_High_Address_2 =0x00000000_FFFFFFFF (32 MB)
AXI_to_PCIe_Translation_2=0x00000000_40000000 (Bits 63-32 are zero to produce a 32-bit PCIe TLP. Bits 24-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 25 bits are invalid translation values.)
Aperture_Base_Address_3 =0x00000000_00000000
Aperture_High_Address_3 =0x00000000_00000FFF (4 KB)
AXI_to_PCIe_Translation_3=0x60000000_87654000 (Bits 63-32 are non-zero to produce a 64-bit PCIe TLP. Bits 11-0 must be zero based on the AXI BAR aperture size. Non-zero values in the lower 12 bits are invalid translation values.)
• Accessing the Bridge AXI BAR_0 with address 0x0000_12340ABC on the AXI bus yields
0x56710ABC on the bus for PCIe.
• Accessing the Bridge AXI BAR_1 with address 0x0000_ABCDF123 on the AXI bus yields
0x50000000FEDC1123 on the bus for PCIe.
• Accessing the Bridge AXI BAR_2 with address 0x0000_FFFEDCBA on the AXI bus yields
0x41FEDCBA on the bus for PCIe.
• Accessing the Bridge AXI BAR_3 with address 0x0000_00000071 on the AXI bus yields
0x6000000087654071 on the bus for PCIe.
Addressing Checks
When setting the following parameters for PCIe® address mapping, C_PCIEBAR2AXIBAR_n and
PF0_BARn_APERTURE_SIZE, be sure these are set to allow for the addressing space on the AXI
system. For example, the following setting is illegal and results in an invalid AXI address.
C_PCIEBAR2AXIBAR_n=0x00000000_FFFFF000
PF0_BARn_APERTURE_SIZE=0x06 (8 KB)
For an 8 kilobyte BAR, the lower 13 bits must be zero. As a result, the C_PCIEBAR2AXIBAR_n
value should be modified to 0x00000000_FFFFE000. Also check for a PF0_BARn_APERTURE_SIZE
value that is larger than the alignment implied by the C_PCIEBAR2AXIBAR_n parameter. An
example parameter setting follows.
C_PCIEBAR2AXIBAR_n=0xFFFF_E000
PF0_BARn_APERTURE_SIZE=0x0D (1 MB)
To keep the AXIBAR upper address bits as 0xFFFF_E000 (to reference bits [31:13]), the
PF0_BARn_APERTURE_SIZE parameter must be set to 0x06 (8 KB).
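This alignment rule can be checked mechanically. The following sketch uses the aperture-size encoding observed in this section (code n encodes an aperture of 2^(n+7) bytes, for example 0x06 for 8 KB and 0x0D for 1 MB); the proc name is illustrative only:
# Check that a translation value is aligned to the BAR aperture size
proc check_bar_alignment {translation aperture_code} {
    set aperture_bytes [expr {1 << ($aperture_code + 7)}]
    if {$translation & ($aperture_bytes - 1)} {
        puts [format "invalid: 0x%08X is not %d-byte aligned" $translation $aperture_bytes]
        return 0
    }
    puts "translation value is legal for this aperture"
    return 1
}
check_bar_alignment 0xFFFFF000 0x06   ;# invalid: bit 12 is non-zero
check_bar_alignment 0xFFFFE000 0x06   ;# legal: lower 13 bits are zero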
Malformed TLP
The integrated block for PCI Express® detects a malformed TLP. For the IP configured as an
Endpoint core, a malformed TLP results in a fatal error message being sent upstream if error
reporting is enabled in the Device Control register.
Abnormal Conditions
This section describes how the Slave side and Master side (see the following tables) of the Bridge
core handle abnormal conditions.
Unexpected Completion
When the slave bridge receives a completion TLP, it matches the header RequesterID and Tag to
the outstanding RequesterID and Tag. A match failure indicates the TLP is an Unexpected
Completion which results in the completion TLP being discarded and a Slave Unexpected
Completion (SUC) interrupt strobe being asserted. Normal operation then continues.
Unsupported Request
A device for PCIe might not be capable of satisfying a specific read request. For example, if the
read request targets an unsupported address for PCIe, the completer returns a completion TLP
with a completion status of 0b001 - Unsupported Request. A completion TLP returned with a
Reserved completion status must be treated as an Unsupported Request status, according to
the PCI Express Base Specification v3.0. When the slave bridge
receives an unsupported request response, the Slave Unsupported Request (SUR) interrupt is
asserted and the DECERR response is asserted with arbitrary data on the AXI4 memory mapped
bus.
Completion Timeout
A Completion Timeout occurs when a completion (Cpl) or completion with data (CplD) TLP is not
returned after an AXI to PCIe memory read request, or after a PCIe Configuration Read/Write
request. Completions must be received within the value set in the Device Control 2 register in
the PCIe Configuration Space register. When a completion timeout occurs for a PCIe memory
read request, the AXI Bridge for PCIe Gen3 IP gives a SLVERR response, while the DMA/Bridge
Subsystem for PCIe in AXI Bridge mode IP gives an OKAY response with all-zeros data on the
AXI4 memory mapped bus.
Completer Abort
A Completer Abort occurs when the completion TLP completion status is 0b100 - Completer
Abort. This indicates that the completer has encountered a state in which it was unable to
complete the transaction. When the slave bridge receives a completer abort response, the Slave
Completer Abort (SCA) interrupt is asserted and the SLVERR response is asserted with arbitrary
data on the memory mapped AXI4 bus.
Max Payload Size for PCIe, Max Read Request Size or 4K Page
Violated
When the master bridge receives a SLVERR response from the addressed AXI slave, the request
is discarded and the Master SLVERR (MSE) interrupt is asserted. If the request was non-posted, a
completion packet with the Completion Status = Completer Abort (CA) is returned on the bus for
PCIe.
Completion Packets
When the MAX_READ_REQUEST_SIZE is greater than the MAX_PAYLOAD_SIZE, a read request
for PCIe can ask for more data than the master bridge can insert into a single completion packet.
When this situation occurs, multiple completion packets are generated up to
MAX_PAYLOAD_SIZE, with the Read Completion Boundary (RCB) observed.
Poison Bit
When the poison bit is set in a transaction layer packet (TLP) header, the payload following the
header is corrupted. When the master bridge receives a memory request TLP with the poison bit
set, it discards the TLP and asserts the Master Error Poison (MEP) interrupt strobe.
Zero-Length Request
When the master bridge receives a write request with the Length = 0x1, FirstBE = 0x00, and
LastBE = 0x00 there is no effect.
Link Down Behavior
When a Hot Reset is received by the Bridge core, the link goes down and the PCI Configuration
Space must be reconfigured.
Initiated AXI4 write transactions that have not yet completed on the AXI4 bus when the link
goes down have a SLVERR response given and the write data is discarded. Initiated AXI4 read
transactions that have not yet completed on the AXI4 bus when the link goes down have a
SLVERR response given, with arbitrary read data returned.
Any MemWr TLPs for PCIe that have been received, but the associated AXI4 write transaction has
not started when the link goes down, are discarded.
Endpoint
When configured to support Endpoint functionality, the Bridge core fully supports Endpoint
operation as supported by the underlying block. There are a few details that need special
consideration. The following subsections contain information and design considerations about
Endpoint support.
Interrupts
Interrupt capabilities are provided by the underlying PCI Express® solution IP. For additional
information, see the Virtex-7 FPGA Integrated Block for PCI Express LogiCORE IP Product Guide
(PG023), the UltraScale Devices Gen3 Integrated Block for PCI Express LogiCORE IP Product Guide
(PG156), and the UltraScale+ Devices Integrated Block for PCI Express LogiCORE IP Product Guide
(PG213).
Multiple interrupt modes can be configured during IP configuration; however, only one interrupt
mode is used at runtime. If multiple interrupt modes are enabled by the host after PCI bus
enumeration at runtime, the core uses the MSI interrupt in preference to the Legacy interrupt.
Both MSI and Legacy interrupt modes are sent using the same intx_msi_* interface, and the
core automatically picks the best available interrupt mode at runtime. MSI-X is implemented
externally to the core and uses a separate cfg_interrupt_msix_* interface. The core does
not prevent the use of MSI-X at any time, even when other interrupt modes are enabled;
however, Xilinx recommends using the MSI-X interrupt exclusively when it is enabled, even if
the other interrupt modes are also enabled at runtime.
Legacy Interrupts
Asserting intx_msi_request when legacy interrupts are enabled causes the IP to issue a
legacy interrupt over PCIe. After intx_msi_request is asserted, it must remain asserted
until the intx_msi_grant signal is asserted and the interrupt has been serviced and cleared by
the host. The intx_msi_grant assertion indicates the requested interrupt has been sent on
the PCIe block. The Message sent is based on the value of the PF0_INTERRUPT_PIN parameter,
which is configurable during core customization in the Vivado Design Suite. Keeping
intx_msi_request asserted ensures the interrupt pending register within the IP remains
asserted when queried by the host's Interrupt Service Routine (ISR) to determine the source of
interrupts. You must implement a mechanism in the user application to know when the interrupt
routine has been serviced. This detection can be done in many different ways depending on your
application and your use of this interrupt pin. This typically involves a register (or array of
registers) implemented in the user application that is cleared, read, or modified by the host
software when an interrupt is serviced.
This figure shows only the handshake between intx_msi_request and intx_msi_grant.
The user application might not clear or service the interrupt immediately, in which case, you must
keep intx_msi_request asserted past intx_msi_grant.
Related Information
Bridge Parameters
MSI Interrupts
Asserting intx_msi_request causes the generation of an MSI interrupt if MSI is enabled.
The MSI vector number used must not exceed the number of MSI vectors enabled by the host.
This information can be queried from the msi_vector_width[2:0] signal after the host has
enumerated and enabled MSI interrupts at runtime. The encoding of the
msi_vector_width[2:0] signal is shown in the following table; it follows the standard MSI
Multiple Message Enable encoding, where the number of enabled vectors is
2^msi_vector_width (from 1 vector at 0b000 up to 32 vectors at 0b101).
MSI-X Interrupts
The core supports the MSI-X interrupt and its signaling. The MSI-X vector table and the MSI-X
Pending Bit Array need to be implemented as part of the user logic, by claiming a BAR aperture.
The External MSI-X interrupts mode is enabled when you set the MSI-X Implementation Location
option to External in the PCIe Misc Tab.
To send MSI-X interrupt, user logic must use cfg_interrupt_msix_* interface instead of the
intx_msi_* interface. The signaling requirement is the same as defined in the UltraScale Devices
Gen3 Integrated Block for PCIe core as shown below.
Multiple interrupt modes can be configured during IP configuration, however only one interrupt
mode is used at runtime. If multiple interrupt modes are enabled by the host after PCI bus
enumeration at runtime, MSI-X interrupt takes precedence over MSI interrupt, and MSI interrupt
takes precedence over Legacy interrupt. All of these interrupt modes are sent using the same
usr_irq_* interface and the core automatically picks the best available interrupt mode at
runtime.
Legacy Interrupts
Asserting one or more bits of usr_irq_req when legacy interrupts are enabled causes the IP to
issue a legacy interrupt over PCIe. Multiple bits may be asserted simultaneously but each bit
must remain asserted until the corresponding usr_irq_ack bit has been asserted. After a
usr_irq_req bit is asserted, it must remain asserted until the corresponding usr_irq_ack bit
is asserted and the interrupt has been serviced and cleared by the Host. The usr_irq_ack
assertion indicates the requested interrupt has been sent on the PCIe block. This ensures the
interrupt pending register within the IP remains asserted when queried by the Host's Interrupt
Service Routine (ISR) to determine the source of interrupts. You must implement a mechanism in
the user application to know when the interrupt routine has been serviced. This detection can be
done in many different ways depending on your application and your use of this interrupt pin.
This typically involves a register (or array of registers) implemented in the user application that is
cleared, read, or modified by the Host software when an interrupt is serviced.
After the usr_irq_req bit is deasserted, it cannot be reasserted until the corresponding
usr_irq_ack bit has been asserted for a second time. This indicates the deassertion message
for the legacy interrupt has been sent over PCIe. After a second usr_irq_ack occurred, the
usr_irq_req wire can be reasserted to generate another legacy interrupt.
The usr_irq_req bit can be mapped to legacy interrupt INTA, INTB, INTC, INTD through the
configuration registers. The following figure shows the legacy interrupts.
This figure shows only the handshake between usr_irq_req and usr_irq_ack. The user
application might not clear or service the interrupt immediately, in which case, you must keep
usr_irq_req asserted past usr_irq_ack.
MSI and MSI-X Interrupts
Asserting one or more bits of usr_irq_req causes the generation of an MSI or MSI-X interrupt
if MSI or MSI-X is enabled. If both MSI and MSI-X capabilities are enabled, an MSI-X interrupt is
generated. The Internal MSI-X interrupts mode is enabled when you set the MSI-X
Implementation Location option to Internal in the PCIe Misc Tab.
After a usr_irq_req bit is asserted, it must remain asserted until the corresponding
usr_irq_ack bit is asserted and the interrupt has been serviced and cleared by the Host. The
usr_irq_ack assertion indicates the requested interrupt has been sent on the PCIe block. This
ensures the interrupt pending register within the IP remains asserted when queried by the
Host's Interrupt Service Routine (ISR) to determine the source of interrupts. You must implement
a mechanism in the user application to know when the interrupt routine has been serviced. This
detection can be done in many different ways depending on your application and your use of this
interrupt pin. This typically involves a register (or array of registers) implemented in the user
application that is cleared, read, or modified by the Host software when an Interrupt is serviced.
Configuration registers are available to map usr_irq_req and DMA interrupts to MSI or MSI-X
vectors. For MSI-X support, there is also a vector table and PBA table. The following figure shows
the MSI interrupt.
This figure shows only the handshake between usr_irq_req and usr_irq_ack. Your
application might not clear or service the interrupt immediately, in which case, you must keep
usr_irq_req asserted past usr_irq_ack.
Root Port
When configured to support Root Port functionality, the Bridge core fully supports Root Port
operation as supported by the underlying block. There are a few details that need special
consideration. The following subsections contain information and design considerations about
Root Port support.
When an ECAM access is attempted to a bus number that is within the range defined by the
secondary bus number and the subordinate bus number (not including the secondary bus
number), Type 1 configuration transactions are generated. The primary, secondary, and
subordinate bus numbers are written and updated by the Root Port software to the Type 1 PCI
Configuration Header of the Bridge core during the enumeration procedure.
When an ECAM access is attempted to a bus number that is outside the range defined by the
secondary bus number and the subordinate bus number, the bridge does not generate a
configuration request and signals a SLVERR response on the AXI4 bus.
When an Unsupported Request (UR) response is received for a configuration read request, all
ones are returned on the AXI4 bus to signify that a device does not exist at the requested device
address. It is the responsibility of the software to ensure configuration write requests are not
performed to device addresses that do not exist. However, the Bridge core asserts SLVERR
response on the AXI4 bus when a configuration write request is performed on device addresses
that do not exist or a UR response is received.
During core customization in the Vivado® Design Suite, when there is no BAR enabled, RP
passes all received packets to the user application without address translation or address
filtering.
When BAR is enabled, by default the BAR address starts at 0x0000_0000 unless programmed
separately. Any packet received from the PCIe® link that hits a BAR is translated according to the
PCIE-to-AXI Address Translation rules.
Note: The IP must not receive any TLPs outside of the PCIe BAR range from the PCIe link when the RP BAR
is enabled. If this rule cannot be enforced, it is recommended that the PCIe BAR be disabled and that
address filtering and/or translation be performed outside the IP.
The Root Port BAR customization options in the Vivado Design Suite are found in the PCIe BARs
Tab.
Related Information
Receiving Interrupts
In Root Port mode, you can choose one of two ways to handle incoming interrupts:
• Legacy Interrupt FIFO mode: This is the default mode. It is available in earlier Bridge IP
variants and versions, and continues to be available. Legacy Interrupt FIFO mode is geared
toward compatibility with legacy designs.
• Interrupt Decode mode: This mode is available in the CPM AXI Bridge. Interrupt Decode
mode can be used to mitigate the Interrupt FIFO overflow condition, which can occur in a
design that receives interrupts at a high rate, and avoids the performance penalty incurred
when that condition occurs.
If you are customizing and generating the core in the Vivado® IP integrator, replace get_ips
with get_bd_cells.
After receiving an INTx interrupt in Legacy Interrupt FIFO mode, the user application must
follow this procedure to service the interrupt (a minimal Tcl sketch of the loop follows the list):
1. Optional: Write 0 to the Interrupt Mask register bit [16] to deassert the interrupt_out pin
while the interrupt is being serviced.
2. Read the Root Port Status/Control Register bit [18] to check if it is not empty.
3. Read the Root Port Status/Control Register bit [19] to check if it has overflowed.
4. If the interrupt FIFO is not empty, read Root Port Interrupt FIFO Read Register 1 to check
which interrupt line is serviced, and whether this is an Assertion or Deassertion message.
5. Write 1 to the Root Port Interrupt FIFO Read Register 1 bit [31] to remove the interrupt that
the user has just read from the FIFO.
6. Repeat from step 2 to step 5 until the FIFO is indicated as empty.
7. If at any time during this process the FIFO is indicated as overflowed (status from step 3), the
user application must check every interrupt line that has not been serviced for pending
interrupts. Failure to do this before continuing may leave some interrupt lines unserviced.
8. Write 1 to the Interrupt Decode register bit [16] to clear the INTx interrupt bit.
9. If step 1 is executed, write 1 to the Interrupt Mask register bit [16] to re-enable the
interrupt_out pin for future INTx interrupt.
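A minimal Tcl sketch of this drain loop follows. The axi_read/axi_write helpers and the register offsets are placeholders (stubbed here so the sketch runs stand-alone); replace them with real 32-bit accesses to the bridge register space at the offsets given in the register map of this guide.
# Stub register space so the sketch runs stand-alone
array set REGS {}
proc axi_read  {addr} {global REGS; expr {[info exists REGS($addr)] ? $REGS($addr) : 0}}
proc axi_write {addr value} {global REGS; set REGS($addr) $value}

set INT_MASK   0x13C   ;# placeholder offset: Interrupt Mask register
set INT_DECODE 0x138   ;# placeholder offset: Interrupt Decode register
set RP_STATUS  0x148   ;# placeholder offset: Root Port Status/Control register
set RP_FIFO_R1 0x158   ;# placeholder offset: Root Port Interrupt FIFO Read Register 1

# Step 1 (optional): mask bit [16] to deassert interrupt_out while servicing
axi_write $INT_MASK [expr {[axi_read $INT_MASK] & ~(1 << 16)}]
while {1} {
    set status     [axi_read $RP_STATUS]
    set not_empty  [expr {($status >> 18) & 1}]    ;# step 2: FIFO not empty?
    set overflowed [expr {($status >> 19) & 1}]    ;# step 3: FIFO overflowed?
    if {!$not_empty} { break }
    set entry [axi_read $RP_FIFO_R1]               ;# step 4: INTx line and assert/deassert
    axi_write $RP_FIFO_R1 [expr {1 << 31}]         ;# step 5: pop the serviced entry
    if {$overflowed} {
        # step 7: on overflow, poll every interrupt line for pending interrupts
    }
}
axi_write $INT_DECODE [expr {1 << 16}]             ;# step 8: clear the INTx interrupt bit
axi_write $INT_MASK [expr {[axi_read $INT_MASK] | (1 << 16)}]   ;# step 9: unmask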
MSI Interrupt
The IP decodes MSI interrupts based on the value programmed in Root Port MSI Base
Register 1 and Root Port MSI Base Register 2. Any Memory Write TLP received from the link
with an address that falls within a 4 KB window from the base address programmed in those
registers is treated as an MSI interrupt and is not forwarded to the M_AXI(B) interface.
When an MSI interrupt is received, the Interrupt Decode register bit [17] is set. If the
Interrupt Mask register bit [17] is also set, the interrupt_out pin is asserted. After receiving
this interrupt, the user application must use the following procedure to service the interrupt:
1. Optional: Write 0 to the Interrupt Mask register bit [17] to deassert the interrupt_out pin
while the interrupt is being serviced.
2. Read the Root Port Status/Control Register bit [18] to check if it is not empty.
3. Read the Root Port Status/Control Register bit [19] to check if it has overflowed.
4. If the interrupt FIFO is not empty, read the Root Port Interrupt FIFO Read Register 2 to check
MSI Message Data from the received MSI interrupt. This is used by the user application to
determine the interrupt vector number and can also be used to determine the source of the
interrupt.
5. Write 1 to the Root Port Interrupt FIFO Read Register 1 bit [31] to remove the interrupt that
the user has just read from the FIFO.
6. Repeat from step 2 until the FIFO is indicated as empty.
7. If at any time during this process the FIFO was indicated as overflowed (status from step 3),
the user application must check any unserviced interrupt vectors for pending interrupts.
Failure to do this before continuing can leave some interrupt vectors unserviced.
8. Write 1 to the Interrupt Decode Register bit [17] to clear the MSI interrupt bit.
9. If step 1 was executed, write 1 to the Interrupt Mask Register bit [17] to re-enable the
interrupt_out pin for future MSI interrupts.
MSI-X Interrupt
All MSI-X interrupts must be decoded by the user application externally to the IP. To do this, set
all Endpoints to use an MSI-X address that falls outside of the 4 KB window from the base
address programmed in the Root Port MSI Base Register 1 and Root Port MSI Base Register 2.
All MSI-X interrupts are forwarded to the M_AXI(B) interface.
All TLPs forwarded to M_AXI(B) interface are subject to the PCIe-to-AXI Address translation.
In Interrupt Decode mode, use the following procedure to service an INTx interrupt:
1. Optional: Write 0 to the Interrupt Decode 2 Mask register to deassert an interrupt line while
the interrupt is being serviced.
2. Read the Root Port Interrupt Decode 2 register to check which interrupt line is currently
asserted.
3. Repeat step 2 until all interrupt lines are deasserted. The interrupt line is automatically
cleared when the IP receives the INTx Deassert Message corresponding to that interrupt line.
4. If step 1 was executed, write 1 to the Interrupt Decode 2 Mask register to re-enable an
interrupt line for future INTx interrupt.
MSI Interrupt
The IP decodes the MSI interrupt based on the value programmed in Root Port MSI Base
Register 1 and Root Port MSI Base Register 2. Any Memory Write TLP received from the link
with an address that falls within the 4 KB window from the base address programmed in those
registers is treated as an MSI interrupt and is not forwarded to the M_AXI(B) interface.
Note: MSI Message Data [5:0] is always decoded as the MSI Message vector, regardless of how many
vectors are enabled at your Endpoint.
When an MSI interrupt is received, the corresponding bit in the Root Port MSI Interrupt
Decode 1 or Root Port MSI Interrupt Decode 2 register is set. If the corresponding bit in the
Root Port MSI Interrupt Decode 1 or 2 Mask register is also set, the interrupt_out_msi_vec* pins
are asserted. interrupt_out_msi_vec0to31 corresponds to MSI vectors 0 to 31, and
interrupt_out_msi_vec32to63 corresponds to MSI vectors 32 to 63. After receiving this
interrupt, the user application must follow this procedure to service the interrupt:
1. Optional: Write 0 to the Root Port MSI Interrupt Decode 1 or 2 Mask register to deassert the
interrupt_out_msi_vec* pins while the interrupt is being serviced.
2. Read the Root Port MSI Interrupt Decode 1 or 2 register to check which interrupt vector is
asserted.
3. Write 1 to the Root Port MSI Interrupt Decode 1 or 2 register to clear the MSI interrupt bit.
4. If step 1 was executed, write 1 to the Root Port MSI Interrupt Decode 1 or 2 Mask register
bit to re-enable the interrupt_out_msi_vec* pins for future MSI interrupts.
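A compact Tcl sketch of this write-1-to-clear sequence follows, with the same caveats as the earlier FIFO-mode sketch (the helpers are stubbed and the offsets are placeholders for the documented register map):
proc axi_read  {addr} {global REGS; expr {[info exists REGS($addr)] ? $REGS($addr) : 0}}
proc axi_write {addr value} {global REGS; set REGS($addr) $value}

set MSI_DECODE_1      0x160   ;# placeholder offset: Root Port MSI Interrupt Decode 1
set MSI_DECODE_1_MASK 0x164   ;# placeholder offset: Root Port MSI Interrupt Decode 1 Mask

axi_write $MSI_DECODE_1_MASK 0x00000000     ;# step 1 (optional): mask interrupt_out_msi_vec*
set pending [axi_read $MSI_DECODE_1]        ;# step 2: one bit per asserted MSI vector
axi_write $MSI_DECODE_1 $pending            ;# step 3: write 1s back to clear the vectors
axi_write $MSI_DECODE_1_MASK 0xFFFFFFFF     ;# step 4: re-enable the vector pins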
MSI-X Interrupt
All MSI-X interrupts must be decoded by the user application externally to the IP. To do this, the
user application must set all Endpoints to use an MSI-X address that falls outside of the 4 KB
window from the base address programmed in the Root Port MSI Base Register 1 and the Root
Port MSI Base Register 2. All MSI-X interrupts are forwarded to the M_AXI(B) interface.
All TLPs forwarded to the M_AXI(B) interface are subject to PCIe-to-AXI address translation.
Tandem Configuration
Tandem Configuration utilizes a two-stage methodology that enables the IP to meet the
configuration time requirements indicated in the PCI Express Specification. Multiple use cases
are supported with this technology:
• Tandem PROM: Load the single two-stage bitstream from the flash.
• Tandem PCIe: Load the first stage bitstream from flash, and deliver the second stage bitstream
over the PCIe link to the MCAP.
• Tandem (PCIe) with Field Updates: After a Tandem PROM (UltraScale only) or Tandem PCIe
(UltraScale or UltraScale+) initial configuration, update the entire user design while the PCIe
link remains active. The update region (floorplan) and design structure are predefined, and Tcl
scripts are provided.
• Tandem + Dynamic Function eXchange: This is a more general case of Tandem Configuration
followed by Dynamic Function eXchange (DFX) of any size or number of dynamic regions.
• Dynamic Function eXchange over PCIe: This is a standard configuration followed by DFX,
using the PCIe / MCAP as the delivery path of partial bitstreams.
To enable any of these capabilities, select the appropriate option when customizing the core in
the Basic tab.
Tandem Configuration features are available for the Bridge core for all supported UltraScale and
UltraScale+ devices.
For complete information about Tandem Configuration, including supported devices, required
PCIe block locations, design flow examples, requirements, restrictions and other considerations,
see Tandem Configuration in the UltraScale Devices Gen3 Integrated Block for PCI Express LogiCORE
IP Product Guide (PG156) and UltraScale+ Devices Integrated Block for PCI Express LogiCORE IP
Product Guide (PG213). For information on Dynamic Function eXchange, see the Vivado Design
Suite User Guide: Dynamic Function eXchange (UG909).
Chapter 5
Design Flow Steps
• Vivado Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994)
• Vivado Design Suite User Guide: Designing with IP (UG896)
• Vivado Design Suite User Guide: Getting Started (UG910)
• Vivado Design Suite User Guide: Logic Simulation (UG900)
If you are customizing and generating the core in the Vivado® IP integrator, see the Vivado Design
Suite User Guide: Designing IP Subsystems using IP Integrator (UG994) for detailed information. IP
integrator might auto-compute certain configuration values when validating or generating the
design. To check whether the values do change, see the description of the parameter in this
chapter. To view the parameter value you can run the validate_bd_design command in the
Tcl Console.
You can customize the IP for use in your design by specifying values for the various parameters
associated with the IP core in the Customize IP dialog box.
For details, see the Vivado Design Suite User Guide: Designing with IP (UG896) and the Vivado
Design Suite User Guide: Getting Started (UG910).
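If you prefer a scripted flow, the customization can also be driven from the Tcl console. The snippet below is a sketch: the IP name axi_pcie3 matches the instance paths used in the constraints later in this guide, but CONFIG property names vary by core and version, so list them before relying on any of them.
# Create an instance of the bridge IP (name/version per your Vivado catalog)
create_ip -name axi_pcie3 -vendor xilinx.com -library ip -module_name axi_pcie3_0
# Inspect the configurable properties before setting any of them
list_property [get_ips axi_pcie3_0] CONFIG.*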
Figures in this chapter are illustrations of the Vivado Integrated Design Environment (IDE). This
layout might vary from the current version.
• For Virtex®-7 XT and UltraScale™ devices, see AXI Bridge for PCIe Gen3.
• For UltraScale+™ devices, see DMA/AXI Bridge Subsystem for PCI Express in AXI Bridge
Mode.
In the Basic Mode tab, the additional parameters available in Advanced Mode are applicable to
UltraScale™ architecture devices only.
• Component Name: Base name of the output files generated for the core. The name must
begin with a letter and can be composed of these characters: a to z, 0 to 9, and “_.”
Note: The name cannot be the same as a core module name; for example, "axi_pcie" is a reserved name.
• Mode: Allows you to select the Basic or Advanced mode of the core configuration.
• Device / Port Type: Indicates the PCI Express logical device type.
• PCIe Block Location: Selects from the available integrated blocks to enable generation of
location-specific constraint files and pinouts. This selection is used in the default example
design scripts.
• Enable GT Quad Selection: This parameter is used to enable the device/package migration.
Applicable to UltraScale™ devices only.
• Number of Lanes: The core requires the selection of the initial lane width. Wider lane-width
cores can train down to smaller lane widths but consume more FPGA resources.
• Maximum Link Speed: Indicates the maximum link speed supported by the design. Higher
link-speed cores are capable of training to lower link speeds but run at a higher clock frequency.
• AXI Address Width: Indicates the AXI address width for the S_AXI and M_AXI interfaces, but
does not affect the address width of the S_AXI_CTL interface.
• AXI Data Width: Indicates the AXI data width for the S_AXI and M_AXI interfaces, but does
not affect the data width of the S_AXI_CTL interface.
• AXI Clock Frequency: Indicates the clock frequency that will be generated on the axi_aclk
output. All AXI interfaces and a majority of the core outputs are synchronous to this clock.
• Enable AXI Slave Interface: Allows the slave bridge to be enabled or disabled as desired by
the system design. If only the master bridge is used the slave bridge can be disabled to
conserve FPGA resources.
• Enable AXI Master Interface: Allows the master bridge to be enabled or disabled as desired by
the system design. If only the slave bridge is used, the master bridge can be disabled to
conserve FPGA resources.
• Reference Clock Frequency: Selects the frequency of the reference clock provided on the
refclk reference clock input. This reference clock input corresponds to refclk for
Virtex®-7 devices and sys_clk_gt for UltraScale™ devices.
• Enable Pipe Simulation: When selected, this option generates the core that can be simulated
with PIPE interfaces connected.
• Enable GT Channel DRP Ports: When checked, enables the GT channel DRP interface.
• Enable PCIe DRP Ports: When checked, enables the PCIe DRP interface.
• Tandem Mode: For supported devices only, this option allows you to choose the Tandem
Configuration mode: None, Tandem PROM, or Tandem PCIe.
• Enable External STARTUP primitive: When checked, generates the STARTUP primitive
external to the IP.
• Use the dedicated PERST routing resources: Enables sys_rst dedicated routing for
applicable UltraScale PCIe locations. This option is not applicable for Virtex-7 XT and
UltraScale+ devices.
• System Reset polarity: Sets the polarity of sys_rst to ACTIVE_HIGH or ACTIVE_LOW.
• The values of 250 MHz and 500 MHz are available for selection for speed grade -2 or -3
and link width other than x8. For this configuration, this parameter is available when
Advanced mode is selected.
• For speed grades -2 or -3 and link width of x8, this parameter defaults to 500 MHz and is
not available for selection.
• For -1 speed grades (-1, -1L, -1LV, -1H, and -1HV) and link width other than x8, this
parameter defaults to 250 MHz and is not available for selection.
• For all other configurations, this parameter defaults to 250 MHz and is not available for
selection.
Note: When a -1, -1L, -1LV, -1H, or -1HV speed grade is selected together with non-production parts of
XCKU060 (ES2), XCKU115 (ES2), or VU440 (ES2), this parameter defaults to 250 MHz and is not
available for selection.
PCIe ID Tab
The PCIe® Identity parameters are shown in the following figure. These settings customize the IP
initial values and device class code.
ID Initial Values
• Vendor ID: Identifies the manufacturer of the device or application. Valid identifiers are
assigned by the PCI Special Interest Group to guarantee that each identifier is unique. The
default value, 10EEh, is the Vendor ID for Xilinx. Enter your vendor identification number
here. FFFFh is reserved.
• Device ID: A unique identifier for the application; the default value, which depends on the
configuration selected, is 70<link speed><link width>h. This field can be any value; change
this value for the application. The default value is derived from:
1. The device family (9 for UltraScale+, 8 for UltraScale, and 7 for 7 series devices),
2. EP or RP mode,
3. Link width, and
4. Link speed.
If any of the above values are changed, the Device ID value will be re-evaluated, replacing the
previous set value.
It is always recommended that the link width, speed and Device Port type be changed first
and then the Device ID value.
• Revision ID: Indicates the revision of the device or application; an extension of the Device ID.
The default value is 00h; enter values appropriate for the application.
• Subsystem Vendor ID: Further qualifies the manufacturer of the device or application. Enter a
Subsystem Vendor ID here; the default value is 10EEh. Typically, this value is the same as
Vendor ID. Setting the value to 0000h can cause compliance testing issues.
• Subsystem ID: Further qualifies the manufacturer of the device or application. This value is
typically the same as the Device ID; the default value depends on the lane width and link
speed selected. Setting the value to 0000h can cause compliance testing issues.
Class Code
The Class Code identifies the general function of a device and is divided into three byte-sized
fields. The Vivado IDE allows you to enter the 24-bit value manually (the default, with the Enter
Class Code Manually checkbox selected) or to use the Class Code Look-up Assistant to populate
the field. Deselect the checkbox to enable the Class Code Assistant.
• Base Class: Broadly identifies the type of function performed by the device.
The Class Code Look-up Assistant provides the Base Class, Sub-Class and Interface values for a
selected general function of a device. This Look-up Assistant tool only displays the three values
for a selected function. You must enter the values in Class Code for these values to be translated
into device settings.
The Bridge core in Endpoint configuration supports up to six 32-bit BARs or three 64-bit BARs.
The AXI Bridge for PCI Express in Root Port configuration supports up to two 32-bit BARs or one
64-bit BAR.
BARs can be one of two sizes. BARs 0, 2, and 4 can be either 32-bit or 64-bit addressable. BARs
1, 3, and 5 can only be 32-bit addressable and are disabled if the previous BAR is enabled as 64-
bit.
• 32-bit BARs: The address space can be as small as 4 kilobytes or as large as 2 gigabytes. Used
for Memory or I/O.
• 64-bit BARs: The address space can be as small as 4 kilobytes or as large as 256 gigabytes.
Used for Memory only.
• Checkbox: Select the checkbox to enable the BAR. Deselect the checkbox to disable the BAR.
• Type: BARs can be Memory apertures only. Memory BARs can be either 64-bit or 32-bit.
Prefetch is enabled for 64-bit BARs and not enabled for 32-bit BARs. When a BAR is set as
64-bit, it uses the next BAR for the extended address space, making the next BAR inaccessible.
• Size: The available Size range depends on the PCIe Device/Port Type and the Type of BAR
selected. The following table lists the available BAR size ranges.
• Prefetchable: Identifies the ability of the memory space to be prefetched. This can only be
enabled for 64-bit addressable BARs.
• Value: The value assigned to BAR.
• PCIe to AXI Translation: This text field should be set to the appropriate value to perform the
translation from the PCI Express base address to the desired AXI Base Address.
For more information about managing the Base Address Register settings, see Managing Base
Address Register Settings.
Memory indicates that the address space is defined as a memory aperture. The base address
register only responds to commands that access the specified address space.
For best results, disable unused base address registers to conserve system resources. A base
address register is disabled by deselecting unused BARs in the Vivado IDE.
• Legacy Interrupt Settings: Indicates the usage of Legacy interrupts. The Bridge core
implements INTA only.
• MSI Capabilities: Indicates that the MSI capability structure exists. Cannot be enabled when
the MSI-X Capability Structure is enabled.
• MSI-X Capabilities: Indicates that the MSI-X capability structure exists. Cannot be enabled
when the MSI Capability Structure is enabled.
• Table Offset: Specifies the offset from the Base Address Register that points to the base of
the MSI-X Table.
• BAR Indicator: Indicates the Base Address Register in the Configuration Space used to map
the function in the MSI-X Table onto memory space. For a 64-bit Base Address Register, this
indicates the lower DWORD.
• PBA Offset: Specifies the offset from the Base Address Register that points to the base of the
MSI-X PBA.
• PBA BAR Indicator: Indicates the Base Address Register in the Configuration Space used to
map the function in the MSI-X PBA onto Memory Space.
Indicates the number of Message Signaled Interrupt (MSI) vectors that this endpoint can request.
Indicates the completion timeout value for incoming completions due to outstanding memory
read requests. This option is deprecated and has no effect; the completion timeout is now
controlled by the Device Control 2 register in the PCIe Configuration Space.
• Number of BARs: Indicates the number of AXI BARs enabled. The BARs are enabled
sequentially.
• Aperture Base Address: Sets the base address for the address range of BAR. You should edit
this parameter to fit design requirements.
In the Vivado® IDE, this parameter is handled in the Address Manager, and does not appear in
the Core Customization dialog box.
• Aperture High Address: Sets the upper address threshold for the address range of BAR. You
should edit this parameter to fit design requirements.
In the Vivado IDE, this parameter is handled in the Address Manager, and does not appear in
the Core Customization dialog box.
• AXI to PCIe Translation: Configures the translation mapping between AXI and PCI Express
address space. You should edit this parameter to fit design requirements.
• S AXI ID WIDTH: Sets the ID width for the AXI Slave Interface.
Multiple IDs are not supported for the AXI Master interface. Therefore, ID signals are not
available on the AXI Master interface.
• AXI Master outstanding write transactions: Indicates the number of outstanding write
transactions that are allowable for the AXI Master interface.
• AXI Master outstanding read transactions: Indicates the number of outstanding read
transactions that are allowable for the AXI Master interface.
• AXI Slave outstanding write transactions: Indicates the number of outstanding write
transactions that are allowable for the AXI Slave interface.
• AXI Slave outstanding read transactions: Indicates the number of outstanding read
transactions that are allowable for the AXI Slave interface.
GT Settings Tab
The following figure shows the GT Settings tab.
• Form factor driven Insertion loss adjustment: Indicates the insertion loss profile options.
Three options are provided:
1. Chip-to-Chip (5 dB)
2. Add-in Card (15 dB)
3. Backplane (20 dB)
The default is Add-in Card (15 dB). It is not advisable to change the default value unless
recommended by Xilinx® Technical Support.
• Receiver Detect: Indicates the type of Receiver Detect: Default or Falling Edge. This
parameter is available in the GT Settings Tab when Advanced mode is selected. This
parameter is available only for Production devices. When the Falling Edge option is selected,
the GT Channel DRP Parameter in the Basic tab (in Advanced mode) is disabled. For more
information on the receiver falling edge detect, see the applicable GT User Guide for your
targeted device (7 Series FPGAs GTX/GTH Transceivers User Guide (UG476) or UltraScale
Architecture GTH Transceivers User Guide (UG576)).
• Enable JTAG Debugger: When selected, the JTAG debugger option of the base IP is set to
true, which enables the debugging capability of the subcore through the JTAG interface. For
more details, see Virtex-7 FPGA Integrated Block for PCI Express LogiCORE IP Product Guide
(PG023), and UltraScale Devices Gen3 Integrated Block for PCI Express LogiCORE IP Product Guide
(PG156).
• Enable In System IBERT: This debug option is used to view the eye diagram of the serial link
at the desired link speed. This option is available for UltraScale devices only, and is not
supported for 7 series devices. For more details, see UltraScale Devices Gen3 Integrated Block
for PCI Express LogiCORE IP Product Guide (PG156).
• Enable Descrambler for Gen3 Mode: This debug option integrates an encrypted version of the
descrambler module inside the PCIe core, which is used to descramble the PIPE data to/from
the PCIe integrated block in Gen3 link speed mode. This provides hardware-only support for
debugging on the board.
Basic Tab
The Basic tab options for the AXI Bridge mode (Functional Mode option) are shown in the
following figure.
• Device / Port Type: Select either a PCI Express Endpoint device or the Root Port of a PCI
Express® Root Complex.
• Enable AXI Slave Interface: This interface is selected by default; you can deselect it.
• Enable AXI Master Interface: This interface is selected by default; you can deselect it.
All other options are the same as those for the DMA Subsystem mode. For a description of these
options, see the “Basic Tab” options in Chapter 4: “Design Flow Steps” in the DMA/Bridge
Subsystem for PCI Express Product Guide (PG195).
PCIe ID Tab
For a description of these options, see PCIe ID Tab.
Figure 44: PCIe BARs Tab for AXI Bridge Functional Mode
The Bridge core in Endpoint configuration supports up to six 32-bit BARs or three 64-bit BARs.
The AXI Bridge for PCI Express® in Root Port configuration supports up to two 32-bit BARs or
one 64-bit BAR.
BARs can be one of two sizes. BARs 0, 2, and 4 can be either 32-bit or 64-bit addressable. BARs
1, 3, and 5 can only be 32-bit addressable and are disabled if the previous BAR is enabled as 64-
bit.
• 32-bit BARs: The address space can be as small as 4 kilobytes or as large as 2 gigabytes. Used
for Memory or I/O.
• 64-bit BARs: The address space can be as small as 4 kilobytes or as large as 256 gigabytes.
Used for Memory only.
• Checkbox: Select the checkbox to enable the BAR. Deselect the checkbox to disable the BAR.
• Type: BARs can be Memory apertures only. Memory BARs can be either 64-bit or 32-bit.
Prefetch is enabled for 64-bit BARs and not enabled for 32-bit BARs. When a BAR is set as
64-bit, it uses the next BAR for the extended address space, making the next BAR inaccessible.
• Size: The available Size range depends on the PCIe Device/Port Type and the Type of BAR
selected. The following table lists the available BAR size ranges.
• Prefetchable: Identifies the ability of the memory space to be prefetched. This can only be
enabled for 64-bit addressable BARs.
• PCIe to AXI Translation: This text field should be set to the appropriate value to perform the
translation from the PCI Express base address to the desired AXI Base Address.
Memory indicates that the address space is defined as a memory aperture. The base address
register only responds to commands that access the specified address space. If the MSI-X
Capability Structure is enabled and the MSI-X Internal implementation location is selected,
there is a 64 KB reserved address space in one of the enabled PCIe BARs. See the BAR Indicator
option in the PCIe Misc Tab.
For best results, disable unused base address registers to conserve system resources. A base
address register is disabled by deselecting unused BARs in the Vivado® IDE.
Figure 45: PCIe Miscellaneous Tab for AXI Bridge Functional Mode
• Bar Indicator: This is the space allocated for interrupt processing registers that are shared
between the AXI Bridge and AXI DMA. This register space includes an MSI-X table; when
MSI-X Internal is selected, a 64 KB address space from the BAR indicated here is reserved
for the MSI-X table. Depending on the PCIe BAR selection, this register space can be allocated
to any selected BAR space (BAR0 to BAR5). This option is valid only when the MSI-X Internal
option is selected. For all other interrupt options there is no allocated space.
• Legacy Interrupt Settings: Select one of the Legacy Interrupts: INTA, INTB, INTC, or INTD.
• MSI Capabilities: By default, MSI Capabilities is enabled, and 1 vector is enabled. You can
choose up to 16 vectors. In general, Linux uses only 1 vector for MSI. This option can be
disabled.
• MSI-X Capabilities: Select a MSI-X event. For more information, see MSI-X Vector Table and
PBA (0x8).
• MSI-X Implementation Location: For MSI-X, there are two options: Internal and External.
When Internal is selected, the MSI-X table is internal to the IP and the table can be accessed
depending on the BAR Indicator selection. When External is selected, the MSI-X related ports
are brought out of the IP and you are responsible for implementing the table outside the IP.
In this case, Bar Indicator is not used.
• Config Extended Interface: The PCIe extended interface can be selected for more
configuration space. When the Configuration Extend Interface is selected, you are responsible
for adding logic to extend the interface to make it work properly.
Figure 46: AXI BARs Tab for AXI Bridge Functional Mode
• Number of BARs: Indicates the number of AXI BARs enabled. The BARs are enabled
sequentially.
• Aperture Base Address: Sets the base address for the address range of BAR. You should edit
this parameter to fit design requirements.
• Aperture High Address: Sets the upper address threshold for the address range of BAR. You
should edit this parameter to fit design requirements.
• AXI to PCIe Translation: Configures the translation mapping between AXI and PCI Express®
address space. You should edit this parameter to fit design requirements.
Figure 47: AXI Misc Tab for AXI Bridge Functional Mode
• Slave AXI ID Width: Sets the ID width for the AXI Slave Interface.
GT Settings Tab
For a description of these options, see GT Settings Tab.
Output Generation
For details, see the Vivado Design Suite User Guide: Designing with IP (UG896).
For information regarding the example design, see Example Design Output Structure.
Related Information
Required Constraints
The Bridge core requires a clock period constraint for the reference clock input that agrees with
the REF_CLK_FREQ parameter setting. In addition, pin-placement (LOC) constraints are needed
that are board/part/package specific.
See Placement Constraints for more details on the constraint paths for FPGA architectures.
Additional information on clocking can be found in the Xilinx® Solution Center for PCI Express
(see Solution Centers).
Note: The reference clock input is refclk in Virtex®-7 devices and sys_clk_gt in UltraScale devices.
System Integration
A typical embedded system including the Bridge core is shown in the following figure. Some
additional components to this system in the Vivado® IP integrator can include a MicroBlaze™
processor or Zynq® device Arm® processor peripheral that needs to communicate with PCI
Express® (in addition to the AXI4-Lite register port on the PCIe bridge). The AXI Interconnect
provides this capability and performs the necessary conversions for the various AXI ports that
might be connected to the AXI Interconnect IP (see AXI to AXI Connector Data Sheet (DS803)).
The Bridge core can be configured with each port connection for an AXI Vivado IP integrator
system topology. When instantiating the core, ensure the following bus interface tags are
defined.
BUS_INTERFACE M_AXI
BUS_INTERFACE S_AXI
BUS_INTERFACE S_AXI_CTL
Placement Constraints
The Bridge core provides a Xilinx® design constraint (XDC) file for all supported PCIe, Part, and
Package permutations. You can find the generated XDC file in the Sources tab of the Vivado®
IDE after generating the IP in the Customize IP dialog box.
For some design platforms, it might be necessary to manually place and constrain the underlying
blocks of the Bridge core. The modules that might require a LOC constraint are described in
Location Constraints.
Location Constraints
This section highlights the LOC constraints to be specified in the XDC file for the Bridge core for
design implementations.
For placement/path information on the integrated block for PCIe® itself, use the following
constraint:
# 7 Series Constraint
set_property LOC PCIE_X*Y* [get_cells {axi_pcie3_0_i/inst/pcie3_ip_i/inst/pcie_top_i/pcie_7vx_i/PCIE_3_0_i}]
# UltraScale Constraint
set_property LOC PCIE_X*Y* [get_cells {axi_pcie3_0_i/inst/pcie3_ip_i/inst/pcie3_uscale_top_inst/pcie3_uscale_wrapper_inst/PCIE_3_1_inst}]
For placement/path information of the GTH transceivers, use the following constraint:
# 7 Series Constraint
set_property LOC GTHE2_CHANNEL_X*Y* [get_cells {axi_pcie3_0_i/inst/pcie3_ip_i/inst/gt_top_i/pipe_wrapper_i/pipe_lane[0].gt_wrapper_i/gth_channel.gthe2_channel_i}]
# UltraScale Constraint
set_property LOC GTHE3_CHANNEL_X*Y* [get_cells {axi_pcie3_0_i/inst/pcie3_ip_i/inst/gt_top_i/gt_wizard.gtwizard_top_i/axi_pcie3_0_pcie3_ip_gt_i/inst/gen_gtwizard_gthe3_top.axi_pcie3_0_pcie3_ip_gt_gtwizard_gthe3_inst/gen_gtwizard_gthe3.gen_channel_container[1].gen_enabled_channel.gthe3_channel_wrapper_inst/channel_inst/gthe3_channel_gen.gen_gthe3_channel_inst[0].GTHE3_CHANNEL_PRIM_INST}]
For placement/path constraints of the input PCIe differential clock source (using the example
provided in System Integration), use the following constraint:
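A representative sketch follows, assuming a differential reference clock port pair named
sys_clk_p/sys_clk_n and placeholder pin sites; replace both pins with the sites for your board
and package:
# Hypothetical example: pin placement for the PCIe differential reference clock pair.
set_property PACKAGE_PIN AB8 [get_ports sys_clk_p]
set_property PACKAGE_PIN AB7 [get_ports sys_clk_n]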
Clock Frequencies
The AXI Memory Mapped to PCI Express® Bridge supports reference clock frequencies of 100
MHz, 125 MHz, and 250 MHz and is configurable within the Vivado® IDE.
Clock Management
Clock management is covered in the core clocking section.
Related Information
Clocking
Clock Placement
For details, see Placement Constraints.
Banking
This section is not applicable for this IP core.
Transceiver Placement
The Transceiver primitives adjacent to the PCIe hard block should be used to aid in the place and
route of the solution IP. The adjacent Transceiver banks one above or one below the desired PCIe
hard block can also be used. Transceivers outside this range are not likely to meet the timing
requirements for the PCI Express® Solution IP and should not be used.
Simulation
For comprehensive information about Vivado® simulation components, as well as information
about using supported third party tools, see the Vivado Design Suite User Guide: Logic Simulation
(UG900).
Note: For cores targeting 7 series or Zynq-7000 devices, UNIFAST libraries are not supported. Xilinx® IP is
tested and qualified with UNISIM libraries only.
Post-Synthesis/Post-Implementation Netlist Simulation
The DMA/Bridge Subsystem core for UltraScale+™ devices does not support post-synthesis/
post-implementation netlist functional simulations. The AXI Bridge for PCIe® core supports post-
synthesis/post-implementation netlist functional simulations. However, some configurations do
not support this feature in this release. See the following table for the configuration support of
netlist functional simulations.
Post-synthesis/implementation netlist timing simulations are not supported for any of the
configurations in this release.
Chapter 6
Example Design
This chapter contains information about the example design provided with the generation of the
IP in the Vivado® Design Suite.
• Root Port Model: a test bench that generates, consumes, and checks PCI Express® bus traffic
• AXI Block RAM Controller
The following figure shows the DMA/Bridge Subsystem for PCIe IP in Root Port configuration.
Figure: Root Port configuration (an AXI VIP in master mode and an AXI_CTL stimulus generator
drive the s_axi and s_axi_ctl interfaces of the AXI PCIe bridge in RP mode, which links to an
Endpoint PCIe model).
The following figure shows the DMA/Bridge Subsystem for PCIe IP in Endpoint configuration.
Figure: Endpoint configuration (AXI4 interfaces on the user side of the bridge in Endpoint
mode, with a PCIe link to the Root Port).
• In the PCIE:Basics tab, the example design supports only an Endpoint (EP) device.
• In the AXI:BARS tab, the Base Address, High Address, and AXI to PCIe® Translation default
values are used.
The following figure illustrates the simulation design provided with the Bridge core.
Figure: Simulation design for the Bridge core (AXI4 interface on the user side and a PCIe link
to the Root Port model).
• An example Verilog HDL or VHDL wrapper (instantiates the cores and example design).
• A customizable demonstration test bench to simulate the example design.
The following table provides a description of the contents of the example design directories.
• project_1/axi_pcie3_example: Contains all example design files.
• project_1/axi_pcie3_example/axi_pcie3_example.srcs/sources_1/imports/example_design/:
Contains the top module for the example design, xilinx_axi_pcie3_ep.v.
• project_1/axi_pcie3_example/axi_pcie3_example.srcs/sources_1/ip/axi_pcie3: Contains the
XDC file based on the device selected, all design files and subcores used in axi_pcie, and the
top modules for simulation and synthesis.
• project_1/axi_pcie3_example/axi_pcie3_example.srcs/sources_1/ip/axi_bram_ctrl_0:
Contains the block RAM controller files used in the example design.
• project_1/axi_pcie3_example/axi_pcie3_example.srcs/sim_1/imports/simulation/dsport:
Contains all Root Port files, including the cgator and PIO files.
• project_1/axi_pcie3_example/axi_pcie3_example.srcs/sim_1/imports/simulation/functional:
Contains the test bench file.
• project_1/axi_pcie3_example/axi_pcie3_example.srcs/constrs_1/imports/example_design:
Contains the example design XDC file.
Chapter 7
Test Bench
This chapter contains information about the test benches provided in the Vivado® Design Suite
environment.
Source code for the Root Port Model is included to provide a starting point for your test
bench. All the significant work for initializing the configuration space, creating TLP
transactions, generating TLP logs, and providing an interface for creating and verifying tests
is complete, allowing you to dedicate your efforts to verifying the correct functionality of
the design rather than developing an Endpoint core test bench infrastructure.
• Test Programming Interface (TPI), which allows you to stimulate the Endpoint device for the
model.
• Example tests that illustrate how to use the test program TPI.
The following figure illustrates the Root Port Model coupled with the PIO design.
Figure: Root Port Model coupled with the PIO design (the Root Port Model comprises the
usrapp_com, usrapp_rx, usrapp_tx, and dsport blocks plus the TPI for PCI Express test program;
it produces output logs and connects to the Endpoint DUT with its BRAM controller).
Architecture
The Root Port Model consists of the following blocks, illustrated in the previous figure:
dsport, usrapp_tx, usrapp_rx, and usrapp_com.
The usrapp_tx and usrapp_rx blocks interface with the dsport block for transmission and
reception of TLPs to/from the Endpoint Design Under Test (DUT). The Endpoint DUT consists of
the Endpoint for AXI-PCIe® and the Block RAM controller design (displayed) or customer design.
The usrapp_tx block sends TLPs to the dsport block for transmission across the PCI Express
Link to the Endpoint DUT. In turn, the Endpoint DUT device transmits TLPs across the PCI
Express Link to the dsport block, which are subsequently passed to the usrapp_rx block. The
dsport and core are responsible for the data link layer and physical link layer processing when
communicating across the PCI Express logic. Both usrapp_tx and usrapp_rx utilize the
usrapp_com block for shared functions, for example, TLP processing and log file outputting.
Transaction sequences or test programs are initiated by the usrapp_tx block to stimulate the
logic interface of the Endpoint device. TLP responses from the Endpoint device are received by
the usrapp_rx block. Communication between the usrapp_tx and usrapp_rx blocks allow
the usrapp_tx block to verify correct behavior and act accordingly when the usrapp_rx block
has received TLPs from the Endpoint device.
Figure: Root Port DUT simulation topology (AXI and AXI_CTL stimulus generators drive the s_axi
and s_axi_ctl interfaces of the AXI-PCIe Gen3 core in RP mode, the m_axi interface connects to
an AXI BRAM controller, and the PCIe Gen3 link connects to an Endpoint model).
Architecture
The Endpoint model consists of the following blocks: pio_rx_engine, pio_tx_engine, and ep.
The pio_rx_engine and pio_tx_engine blocks interface with the ep block for reception
and transmission of TLPs from/to the Root Port Design Under Test (DUT). The Root Port DUT
consists of the core configured as a Root Port and the Block RAM controller along with s_axi
and s_axi_ctl models to drive traffic on s_axi and s_axi_ctl.
Related Information
Example Design
In Endpoint configuration, the slave AXI4 interface is typically driven by a DMA module or a
packet generator, such as an AXI exerciser. The master AXI4 interface is used to receive
packets from the host, such as a DMA configuration file or PIO accesses, and connects directly
to a memory module, such as a MIG-based memory controller. The slave AXI4-Lite interface is
not typically used in an Endpoint configuration, but can be used to check Bridge/Link status
registers. The user interrupt input signals can be driven by an interrupt controller to signal
the host about important events at the Endpoint side, such as a DMA transfer completion or an
error event.
The following figure shows an example of where the Bridge cores can be used.
Appendix A
Upgrading
This appendix contains information about migrating from other cores to this core, and
upgrading the core to the most recent version of the IP.
Parameter Changes
The following parameters have changed from the AXI PCIe Gen2 core to the AXI Bridge for PCIe
Gen3 core.
AXI PCIe Gen2 Parameter: AXI Bridge for PCIe Gen3 Parameter
• C_PCIEBAR_LEN_0: PF0_BAR0_APERTURE_SIZE
• C_PCIEBAR_LEN_1: PF0_BAR1_APERTURE_SIZE
• C_PCIEBAR_LEN_2: PF0_BAR2_APERTURE_SIZE
• (no Gen2 equivalent): PF0_BAR3_APERTURE_SIZE
• (no Gen2 equivalent): PF0_BAR4_APERTURE_SIZE
• (no Gen2 equivalent): PF0_BAR5_APERTURE_SIZE
• C_SUPPORTS_NARROW_BURST: C_S_AXI_SUPPORTS_NARROW_BURST (see note 1)
• C_NO_OF_LANES: PL_LINK_CAP_MAX_LINK_WIDTH
• C_DEVICE_ID: PF0_DEVICE_ID
• C_VENDOR_ID: PF0_VENDOR_ID
• C_CLASS_CODE: PF0_CLASS_CODE
• C_REV_ID: PF0_REVISION_ID
• C_REF_CLK_FREQ: REF_CLK_FREQ
Notes:
1. The core supports Narrow Burst starting in 2016.4.
Parameter Changes
The following parameters have changed from the AXI Bridge for PCIe Gen3 core to the DMA/
Bridge Subsystem for PCIe in AXI Bridge mode.
Port Changes
The following ports have changed from the AXI Bridge for PCIe Gen3 core to the DMA/
Bridge Subsystem for PCIe in AXI Bridge mode.
Parameter Changes
The following table shows the new parameter added to the core in the current release.
Port Changes
The following table shows the new port added to the core in the current release.
The following table shows the cfg_ext_if signals which are available when the CFG_EXT_IF
parameter is set to true in the AXI Bridge for PCIe Gen3 only.
Appendix B
Debugging
This appendix provides information for using the resources available on the Xilinx® Support
website, debug tools, and other step-by-step processes for debugging designs that use the AXI
Bridge for PCIe core.
Documentation
This product guide is the main document associated with the core. This guide, along with
documentation related to all products that aid in the design process, can be found on the Xilinx
Support web page or by using the Xilinx Documentation Navigator.
Download the Xilinx Documentation Navigator from the Downloads page. For more information
about this tool and the features available, see the online help after installation.
Solution Centers
See the Xilinx Solution Centers for support on devices, software tools, and intellectual property
at all stages of the design cycle. Topics include design assistance, advisories, and troubleshooting
tips.
The PCI Express® Solution Center is located at Xilinx Solution Center for PCI Express. Extensive
debugging collateral is available in AR: 56802.
Answer Records
Answer Records include information about commonly encountered problems, helpful information
on how to resolve these problems, and any known issues with a Xilinx product. Answer Records
are created and maintained daily, ensuring that users have access to the most accurate
information available.
Answer Records for this core can be located by using the Search Support box on the main Xilinx
support web page. To maximize your search results, use keywords such as the product name, the
tool message(s), and a summary of the issue encountered.
Technical Support
Xilinx provides technical support on the Xilinx Community Forums for this LogiCORE™ IP product
when used as described in the product documentation. Xilinx cannot guarantee timing,
functionality, or support if you do any of the following:
• Implement the solution in devices that are not defined in the documentation.
• Customize the solution beyond that allowed in the product documentation.
• Change any section of the design labeled DO NOT MODIFY.
Debug Tools
There are many tools available to address Bridge design issues. It is important to know which
tools are useful for debugging various situations.
The Vivado logic analyzer is used to interact with the logic debug cores, including the
Integrated Logic Analyzer (ILA) and Virtual Input/Output (VIO) cores.
See Vivado Design Suite User Guide: Programming and Debugging (UG908).
Reference Boards
Various Xilinx development boards support the core. These boards can be used to prototype
designs and establish that the core can communicate with the system.
Third-Party Tools
This section describes third-party software tools that can be useful in debugging.
LSPCI (Linux)
LSPCI is available on Linux platforms and allows you to view the PCI Express® device
configuration space. LSPCI is usually found in the /sbin directory. LSPCI displays a list of
devices on the PCI buses in the system. See the LSPCI manual for all command options. Some
useful commands for debugging include:
• lspci -x -d [<vendor>]:[<device>]
This displays the first 64 bytes of configuration space in hexadecimal form for the device with
vendor and device ID specified (omit the -d option to display information for all devices). The
default Vendor/Device ID for Xilinx cores is 10EE:6012. Here is a sample of a read of the
configuration space of a Xilinx device:
> lspci -x -d 10EE:6012
81:00.0 Memory controller: Xilinx Corporation: Unknown device 6012
00: ee 10 12 60 07 00 10 00 00 00 80 05 10 00 00 00
10: 00 00 80 fa 00 00 00 00 00 00 00 00 00 00 00 00
20: 00 00 00 00 00 00 00 00 00 00 00 00 ee 10 6f 50
30: 00 00 00 00 40 00 00 00 00 00 00 00 05 01 00 00
Included in this section of the configuration space are the Device ID, Vendor ID, Class Code,
Status and Command, and Base Address Registers.
• lspci -k
Shows kernel drivers handling each device and kernel modules capable of handling it (works
with kernel 2.6 or later).
Hardware Debug
Transceiver Debug
IMPORTANT! The ports in the Transceiver Control and Status Interface must be driven in accordance
with the appropriate GT user guide. Driving the input signals listed below incorrectly can result in
unpredictable behavior of the IP core.
The following table describes the ports used to debug transceiver related issues when a 7 series
device is targeted.
The following table describes the ports used to debug transceiver related issues when an
UltraScale™ device is targeted.
See UltraScale+ Devices Integrated Block for PCI Express LogiCORE IP Product Guide (PG213) for
the Transceiver Debug interface ports of UltraScale+ devices when the DMA/Bridge Subsystem is
enabled.
For more information, see:
• Virtex-7 FPGA Integrated Block for PCI Express LogiCORE IP Product Guide (PG023).
• UltraScale Devices Gen3 Integrated Block for PCI Express LogiCORE IP Product Guide (PG156).
• UltraScale+ Devices Integrated Block for PCI Express LogiCORE IP Product Guide (PG213).
Interface Debug
AXI4-Lite Interfaces
Read from a register that does not have all 0s as a default to verify that the interface is
functional. Output s_axi_arready asserts when the read address is valid, and output
s_axi_rvalid asserts when the read data/response is valid. If the interface is unresponsive,
ensure that the following conditions are met.
• For older versions of the AXI Bridge for PCIe Gen3 core with the axi_ctl_aclk port,
ensure the axi_ctl_aclk and axi_ctl_aclk_out pins are connected to the design and
are pulsing out of the IP.
• The interface is not being held in reset, and axi_aresetn is an active-Low reset.
• Ensure that the main core clocks are toggling and that the enables are also asserted.
• Has a simulation been run? Verify in simulation and/or a Vivado® Design Suite debug feature
capture that the waveform is correct for accessing the AXI4-Lite interface.
Appendix C
XVC-over-PCIe Debug
XVC-over-PCIe should be used to perform FPGA debug remotely using the Vivado Design Suite
debug feature when JTAG debug is not available. This is commonly used for data center
applications where the FPGA is connected to a PCIe Host system without any other connections
to the hardware device.
Using debug over XVC requires software, driver, and FPGA hardware design components. Since
there is an FPGA hardware design component to XVC-over-PCIe debug, you cannot perform
debug until the FPGA is already loaded with an FPGA hardware design that implements XVC-
over-PCIe and the PCIe link to the Host PC is established. This is normally accomplished by
loading an XVC-over-PCIe enabled design into the configuration flash on the board prior to
inserting the card into the data center location. Because debug using XVC-over-PCIe depends on
the PCIe communication channel, it should not be used to debug PCIe link-related issues.
IMPORTANT! XVC only provides connectivity to the debug cores within the FPGA. It does not provide the
ability to program the device or access device JTAG and configuration registers. These operations can be
performed through other standard Xilinx interfaces or peripherals such as the PCIe MCAP VSEC and
HWICAP IP.
Overview
The main components that enable XVC-over-PCIe debug are as follows:
• An FPGA hardware design containing the Debug Bridge IP
• An XVC-enabled PCIe driver on the Host PC
• An XVC-Server software application
These components are provided as a reference on how to create XVC connectivity for Xilinx
FPGA designs. These three components are shown in the following figure and connect to the
Vivado Design Suite debug feature through a TCP/IP socket.
Figure: XVC-over-PCIe debug components (the Vivado Design Suite debug feature runs on a local
or remote host and connects over TCP/IP to the XVC-Server and driver running on the Host PC,
which communicates over PCIe with the debug logic running in the Xilinx FPGA).
The Debug Bridge IP, when configured for From PCIe to BSCAN or From AXI to BSCAN,
provides a connection point for the Xilinx® debug network from either the PCIe Extended
Capability or AXI4-Lite interfaces respectively. Vivado tool automation connects this instance of
the Debug Bridge to the Xilinx debug cores found in the design rather than connecting them to
the JTAG BSCAN interface. There are design trade-offs to connecting the debug bridge to the
PCIe Extended Configuration Space or AXI4-Lite. The following sections describe the
implementation considerations and register map for both implementations.
The PCIe Extended Configuration Interface uses PCIe configuration transactions rather than PCIe
memory BAR transactions. While PCIe configuration transactions are much slower, they do not
interfere with PCIe memory BAR transactions at the PCIe IP boundary. This allows for separate
data and debug communication paths within the FPGA. This is ideal if you expect to debug the
datapath. Even if the datapath becomes corrupt or halted, the PCIe Extended Configuration
Interface can remain operational to perform debug. The following figure describes the
connectivity between the PCIe IP and the Debug Bridge IP to implement the PCIe-XVC-VSEC.
Note: Although the previous figure shows the UltraScale+™ Devices Integrated Block for PCIe IP, other
PCIe IP (that is, the UltraScale™ Devices Integrated Block for PCIe, AXI Bridge for PCIe, or PCIe DMA IP)
can be used interchangeably in this diagram.
Note: Although the previous figure shows the PCIe DMA IP, any AXI-enabled PCIe IP can be used
interchangeably in this diagram.
The AXI-XVC implementation allows for higher speed transactions. However, XVC debug traffic
passes through the same PCIe ports and interconnect as other PCIe control path traffic, making it
more difficult to debug transactions along this path. As a result, AXI-XVC debug should be used
to debug a specific peripheral or a different AXI network rather than datapaths that overlap
with the AXI-XVC debug communication path.
The PCIe-XVC-VSEC and AXI-XVC have a slightly different register map that must be taken into
account when designing XVC drivers and software. The register maps in the following tables
show the byte-offset from the base address.
• The PCIe-XVC-VSEC base address must fall within the valid range of the PCIe Extended
Configuration space. This is specified in the Debug Bridge IP configuration.
• The base address of an AXI-XVC Debug Bridge is the offset for the Debug Bridge IP peripheral
that was specified in the Vivado Address Editor.
The following tables describe the register map for the Debug Bridge IP as an offset from the base
address when configured for the From PCIe-Ext to BSCAN or From AXI to BSCAN modes.
Register map when configured for From PCIe-Ext to BSCAN (byte offsets from the VSEC base
address):
• 0x00 PCIe Ext Capability Header: PCIe defined fields for VSEC use. (Read Only)
• 0x04 PCIe VSEC Header: PCIe defined fields for VSEC use. (Read Only)
• 0x08 XVC Version Register: IP version and capabilities information. (Read Only)
• 0x0C XVC Shift Length Register: Shift length. (Read Write)
• 0x10 XVC TMS Register: TMS data. (Read Write)
• 0x14 XVC TDIO Register: TDO/TDI data. (Read Write)
• 0x18 XVC Control Register: General control register. (Read Write)
• 0x1C XVC Status Register: General status register. (Read Only)
Register map when configured for From AXI to BSCAN (byte offsets from the peripheral base
address):
• 0x00 XVC Shift Length Register: Shift length. (Read Write)
• 0x04 XVC TMS Register: TMS data. (Read Write)
• 0x08 XVC TDI Register: TDI data. (Read Write)
• 0x0C XVC TDO Register: TDO data. (Read Only)
• 0x10 XVC Control Register: General control register. (Read Write)
• 0x14 XVC Status Register: General status register. (Read Only)
• 0x18 XVC Version Register: IP version and capabilities information. (Read Only)
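To make the AXI-XVC layout concrete, the following minimal C sketch mirrors the offsets in the
table above; the struct name and the assumption of a 32-bit MMIO mapping are illustrative, not
part of the product documentation.
#include <stdint.h>

/* Hypothetical view of the AXI-XVC Debug Bridge register block.
 * Offsets follow the From AXI to BSCAN register map above. */
typedef struct {
    volatile uint32_t shift_length; /* 0x00: XVC Shift Length Register (RW) */
    volatile uint32_t tms;          /* 0x04: XVC TMS Register (RW) */
    volatile uint32_t tdi;          /* 0x08: XVC TDI Register (RW) */
    volatile uint32_t tdo;          /* 0x0C: XVC TDO Register (RO) */
    volatile uint32_t control;      /* 0x10: XVC Control Register (RW) */
    volatile uint32_t status;       /* 0x14: XVC Status Register (RO) */
    volatile uint32_t version;      /* 0x18: XVC Version Register (RO) */
} axi_xvc_regs;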
PCIe Ext Capability Header fields (bit location, field, description, initial value, type):
• 15:0 PCIe Extended Capability ID: This field is a PCI-SIG defined ID number that indicates
the nature and format of the Extended Capability. The Extended Capability ID for a VSEC is
0x000B. Initial value: 0x000B. (Read Only)
• 19:16 Capability Version: This field is a PCI-SIG defined version number that indicates the
version of the capability structure present. Must be 0x1 for this version of the
specification. Initial value: 0x1. (Read Only)
• 31:20 Next Capability Offset: This field is passed in from the user and contains the offset
to the next PCI Express Capability structure, or 0x000 if no other items exist in the linked
list of capabilities. For Extended Capabilities implemented in the PCIe extended
configuration space, this value must always be within the valid range of the PCIe Extended
Configuration space. Initial value: 0x000. (Read Only)
PCIe VSEC Header fields (bit location, field, description, initial value, type):
• 15:0 VSEC ID: This field is the ID value that can be used to identify the PCIe-XVC-VSEC and
is specific to the Vendor ID (0x10EE for Xilinx). Initial value: 0x0008. (Read Only)
• 19:16 VSEC Rev: This field is the Revision ID value that can be used to identify the
PCIe-XVC-VSEC revision. Initial value: 0x0. (Read Only)
• 31:20 VSEC Length: This field indicates the number of bytes in the entire PCIe-XVC-VSEC
structure, including the PCIe Ext Capability Header and PCIe VSEC Header registers. Initial
value: 0x020. (Read Only)
• XVC Shift Length Register: Sets the scan chain shift length within the debug scan chain.
• XVC TMS Register: Sets the TMS data within the debug scan chain.
• XVC TDIO/TDI/TDO Registers: Used for TDO/TDI data access. When using the PCIe-XVC-VSEC, TDO
and TDI are combined into a single TDIO register. When using AXI-XVC, they are implemented as
two separate registers.
When operating in PCIe-XVC-VSEC mode, the driver will initiate PCIe configuration transactions
to interface with the FPGA debug network. When operating in AXI-XVC mode, the driver will
initiate 32-bit PCIe Memory BAR transactions to interface with the FPGA debug network. By
default, the driver will attempt to discover the PCIe-XVC-VSEC and use AXI-XVC if the PCIe-
XVC-VSEC is not found in the PCIe configuration extended capability linked list.
The driver is provided in the data directory of the Vivado installation as a .zip file. This .zip
file should be copied to the Host PC connected through PCIe to the Xilinx FPGA and extracted
for use. README.txt files have been included; review these files for instructions on installing
and running the XVC drivers and software.
To add the BSCAN interface to the Reconfigurable Partition definition the appropriate ports and
port attributes should be added to the Reconfigurable Partition definition. The sample Verilog
provided below can be used as a template for adding the BSCAN interface to the port
declaration.
...
// BSCAN interface definition and attributes.
// This interface should be added to the DFX module definition
// and left unconnected in the DFX module instantiation.
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN drck" *)
(* DEBUG="true" *)
input S_BSCAN_drck,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN shift" *)
(* DEBUG="true" *)
input S_BSCAN_shift,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN tdi" *)
(* DEBUG="true" *)
input S_BSCAN_tdi,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN update" *)
(* DEBUG="true" *)
input S_BSCAN_update,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN sel" *)
(* DEBUG="true" *)
input S_BSCAN_sel,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN tdo" *)
(* DEBUG="true" *)
output S_BSCAN_tdo,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN tms" *)
(* DEBUG="true" *)
input S_BSCAN_tms,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN tck" *)
(* DEBUG="true" *)
input S_BSCAN_tck,
(* X_INTERFACE_INFO = "xilinx.com:interface:bscan:1.0 S_BSCAN runtest" *)
(* DEBUG="true" *)
input S_BSCAN_runtest,
When link_design is run, the exposed ports are connected to the static portion of the debug
network through tool automation. The ILAs are also connected to the debug network as required
by the design. There might also be an additional dbg_hub cell that is added at the top level of
the design. For Tandem PCIe with Field Updates designs, the dbg_hub and tool inserted clock
buffer(s) must be added to the appropriate design partition. The following is an example of the
Tcl commands that can be run after opt_design to associate the dbg_hub primitives with the
appropriate design partitions.
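A minimal sketch of such commands, assuming the static partition is represented by a pblock
named pblock_static (both the cell filter and the pblock name are placeholders for your
design):
# Hypothetical example: after opt_design, collect the tool-inserted dbg_hub cells
# and assign them, with any associated clock buffers, to the chosen partition pblock.
set dbg_hub_cells [get_cells -hierarchical -filter {NAME =~ "*dbg_hub*"}]
add_cells_to_pblock [get_pblocks pblock_static] $dbg_hub_cells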
The PCIe-XVC-VSEC can be added to the UltraScale+™ PCIe example design by selecting the
following options.
Note: Although the previous figure shows the UltraScale+ Devices Integrated Block for PCIe IP, the
example design hierarchy is the same for other PCIe IPs.
9. Double-click the Debug Bridge IP identified as xvc_vsec to view the configuration options
for this IP. Make note of the following configuration parameters because they will be used to
configure the driver.
• PCIe XVC VSEC ID (default 0x0008)
• PCIe XVC VSEC Rev ID (default 0x0)
IMPORTANT! Do not modify these parameter values when using a Xilinx Vendor ID or provided XVC
drivers and software. These values are used to detect the XVC extended capability. (See the PCIe
specification for additional details.)
10. In the Flow Navigator, click Generate Bitstream to generate a bitstream for the example
design project. This bitstream will then be loaded onto the FPGA board to enable XVC
debug over PCIe.
After the XVC-over-PCIe hardware design has been completed, an appropriate XVC-enabled
PCIe driver and the associated XVC-Server software application can be used to connect the
Vivado Design Suite to the PCIe-connected FPGA. Vivado can connect to an XVC-Server
application that is running locally on the same machine or remotely on another machine using a
TCP/IP socket.
System Bring-Up
The first step is to program the FPGA and power on the system such that the PCIe link is
detected by the host system. This can be accomplished by either:
• Programming the design file into the flash present on the FPGA board, or
• Programming the device directly via JTAG.
If the card is powered by the Host PC, the system needs to be powered on to perform this
JTAG programming, and then restarted to allow the PCIe link to enumerate. After the
system is up and running, you can use the Linux lspci utility to list the details for the
FPGA-based PCIe device.
The XVC driver and software are provided as a ZIP file included with the Vivado Design Suite
installation.
1. Copy the ZIP file from the Vivado install directory to the FPGA connected Host PC and
extract (unzip) its contents. This file is located at the following path within the Vivado
installation directory.
XVC Driver and SW Path: …/data/xicom/driver/pcie/xvc_pcie.zip
The README.txt files within the driver_* and xvcserver directories describe how to
compile, install, and run the XVC drivers and software, and are summarized in the following
steps. Perform the following steps after the driver and software files have been copied to the
Host PC and you are logged in as a user with root permissions.
2. Modify the variables within the driver_*/xvc_pcie_user_config.h file to match your
hardware design and IP settings. Consider modifying the following variables:
• PCIE_VENDOR_ID: The PCIe Vendor ID defined in the PCIe® IP customization.
• PCIE_DEVICE_ID: The PCIe Device ID defined in the PCIe® IP customization.
• Config_space: Allows for the selection between using a PCIe-XVC-VSEC or an AXI-XVC
peripheral. The default value of AUTO first attempts to discover the PCIe-XVC-VSEC, then
attempts to connect to an AXI-XVC peripheral if the PCIe-XVC-VSEC is not found. A value
of CONFIG or BAR can be used to explicitly select between PCIe®-XVC-VSEC and AXI-
XVC implementations, as desired.
• config_vsec_id: The PCIe XVC VSEC ID (default 0x0008) defined in the Debug Bridge IP
when the Bridge Type is configured for From PCIE to BSCAN. This value is only used for
detection of the PCIe®-XVC-VSEC.
• config_vsec_rev: The PCIe XVC VSEC Rev ID (default 0x0) defined in the Debug Bridge IP
when the Bridge Type is configured for From PCIe to BSCAN. This value is only used for
detection of the PCIe-XVC-VSEC.
• bar_index: The PCIe BAR index that should be used to access the Debug Bridge IP when
the Bridge Type is configured for From AXI to BSCAN. This BAR index is specified as a
combination of the PCIe IP customization and the addressable AXI peripherals in your
system design. This value is only used for detection of an AXI-XVC peripheral.
• bar_offset: PCIe BAR Offset that should be used to access the Debug Bridge IP when the
Bridge Type is configured for From AXI to BSCAN. This BAR offset is specified as a
combination of the PCIe IP customization and the addressable AXI peripherals in your
system design. This value is only used for detection of an AXI-XVC peripheral.
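As an illustration of step 2, a hypothetical sketch of the edits follows; the variable names
come from the list above, while the macro style and the values are placeholder assumptions
about a particular design:
/* Hypothetical sketch of xvc_pcie_user_config.h edits; exact types and
 * defaults depend on the file shipped with your Vivado installation. */
#define PCIE_VENDOR_ID   0x10EE    /* Vendor ID set in the PCIe IP customization */
#define PCIE_DEVICE_ID   0x6012    /* Device ID set in the PCIe IP customization */
#define CONFIG_SPACE     "AUTO"    /* "AUTO", "CONFIG" (PCIe-XVC-VSEC), or "BAR" (AXI-XVC) */
#define CONFIG_VSEC_ID   0x0008    /* PCIe XVC VSEC ID from the Debug Bridge IP */
#define CONFIG_VSEC_REV  0x0       /* PCIe XVC VSEC Rev ID from the Debug Bridge IP */
#define BAR_INDEX        0x0       /* PCIe BAR index of the AXI-XVC Debug Bridge */
#define BAR_OFFSET       0x0       /* Offset of the Debug Bridge within that BAR */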
3. Move the source files to the directory of your choice. For example, use:
/home/username/xil_xvc or /usr/local/src/xil_xvc
4. Make sure you have root permissions and change to the directory containing the driver files.
# cd ./driver_*/
5. Compile and install the kernel driver module; after installation, the module resides at:
/lib/modules/[KERNEL_VERSION]/kernel/drivers/pci/pcie/Xilinx/xil_xvc_driver.ko
6. Run the depmod command to pick up newly installed kernel modules:
# depmod -a
If you run the dmesg command, you will see the following message:
kernel: xil_xvc_driver: Starting…
Note: You can also use insmod on the kernel object file to load the module:
# insmod xil_xvc_driver.ko
However, this is not recommended unless necessary for compatibility with older kernels.
9. The resulting character file, /dev/xil_xvc/cfg_ioc0, is owned by user root and group
root, and it will need to have permissions of 660. Change permissions on this file if it does
not allow the application to interact with the driver.
# chmod 660 /dev/xil_xvc/cfg_ioc0
You should see various successful tests of differing lengths, followed by the following
message:
"XVC PCIE Driver Verified Successfully!"
1. Make sure the firewall settings on the system expose the port that will be used to connect to
the Vivado Design Suite. For this example, port 10200 is used.
2. Make note of the host name or IP address. The host name and port number will be required
to connect Vivado to the xvcserver application. See the OS help pages for information
regarding the firewall port settings for your OS.
3. Move the source files to the directory of your choice. For example, use:
/home/username/xil_xvc or /usr/local/src/xil_xvc
4. Change to the directory containing the application source files:
# cd ./xvcserver/
After the Vivado Design Suite has connected to the XVC-server application you should see
the following message from the XVC-server.
Enable verbose by setting VERBOSE env var.
Opening /dev/xil_xvc/cfg_ioc0
8. Select the newly added XVC target from the Hardware Targets table, and click Next.
9. Click Finish.
10. In the Hardware Device Properties panel, select the debug bridge target, and assign the
appropriate probes .ltx file.
Vivado now recognizes your debug cores and debug signals, and you can debug your design
through the Vivado hardware tools interface using the standard debug approach.
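The same connection can also be scripted from the Vivado Tcl console; a minimal sketch,
assuming the XVC-Server is reachable at localhost:10200 (host and port are placeholders):
# Hypothetical example: connect the Vivado hardware manager to an XVC-Server target.
open_hw_manager
connect_hw_server
open_hw_target -xvc_url localhost:10200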
With this setup, you can debug Xilinx FPGA designs through the PCIe connection rather than
JTAG, using the Xilinx Virtual Cable technology. You can terminate the connection by closing
the hardware server from Vivado using the right-click menu. If the PCIe connection is lost or
the XVC-Server application stops running, the connection to the FPGA and associated debug
cores will also be lost.
For DFX designs, it is important to terminate the connection during DFX operations. During a
DFX operation where debug cores are present inside the dynamic region, a portion of the debug
tree is expected to be reprogrammed. Vivado debug tools should not be actively communicating
with the FPGA through XVC during a DFX operation.
Appendix D
CCIX Interface
This appendix covers the following aspects of the CCIX interface:
• Supported Configurations
• Port Descriptions
• CCIX Activation/Deactivation
• CCIX-VC1 Credits
• Example Figures
Supported Configurations
The following table lists the link widths, the required core clock frequencies, and the speed
grades.
Port Descriptions
CCIX Core Interfaces
In addition to status, control and AXI4-Stream interfaces, the core has two CCIX interfaces used
to transfer and receive transactions.
Alternatively, the CCIX application can wait to accumulate the needed amount of credits before
transmitting a TLP packet, in order to avoid deasserting valid mid-packet.
• 2 is_sop0_ptr (1 bit): Indicates the position of the first byte of the first TLP starting in
this beat: 0: Byte 0; 1: Byte 16.
• 3 is_sop1_ptr (1 bit): Indicates the position of the first byte of the second TLP starting in
this beat: 0: Reserved; 1: Byte 16.
• 5:4 is_sop0_ptr[1:0] (2 bits): Indicates the position of the first byte of the first TLP
starting in this beat: 00: Byte 0; 01: Byte 16; 10: Byte 32; 11: Byte 48.
• 7:6 is_sop1_ptr[1:0] (2 bits): Indicates the position of the first byte of the second TLP
starting in this beat: 00: Reserved; 01: Byte 16; 10: Byte 32; 11: Byte 48.
• 9:8 is_sop2_ptr[1:0] (2 bits): Indicates the position of the first byte of the third TLP
starting in this beat: 00: Reserved; 01: Reserved; 10: Byte 32; 11: Byte 48.
• 11:10 is_sop3_ptr[1:0] (2 bits): Indicates the position of the first byte of the fourth TLP
starting in this beat: 00: Reserved; 01: Reserved; 10: Reserved; 11: Byte 48.
• 15:12 is_eop[3:0] (4 bits): Signals that a TLP is ending in this beat. These outputs are set
in the final beat of a TLP. The encodings are as follows:
○ 0000: No TLPs ending in this beat.
○ 0001: A single TLP is ending in this beat. is_eop0_ptr[3:0] provides the offset of the last
Dword of this TLP.
○ 0011: Two TLPs are ending in this beat. is_eop0_ptr[3:0] and is_eop1_ptr[3:0] provide the
offsets of the last Dwords of the first and second TLPs, respectively.
○ 0111: Three TLPs are ending in this beat. is_eop0_ptr[3:0], is_eop1_ptr[3:0], and
is_eop2_ptr[3:0] provide the offsets of the last Dwords of the first, second, and third
TLPs, respectively.
○ 1111: Four TLPs are ending in this beat. is_eop0_ptr[3:0], is_eop1_ptr[3:0],
is_eop2_ptr[3:0], and is_eop3_ptr[3:0] provide the offsets of the last Dwords of the first,
second, third, and fourth TLPs, respectively.
○ All other values: Reserved.
• 19:16 discontinue[3:0] (4 bits): This signal can be asserted by the core during a transfer if
it has detected an error in the data being transferred and needs to abort the packet. The user
logic nullifies the corresponding TLP on the link to avoid data corruption. The core can
assert this signal in any beat of a TLP; it can either terminate the packet prematurely in the
cycle where the error was signaled, or continue until all bytes of the payload are delivered
to the core. The discontinue signal can be asserted only when s_axis_ccix_rx_tvalid is High.
• 23:20 is_eop0_ptr[3:0] (4 bits): Indicates the offset of the last Dword of the first TLP
ending in this beat. This output is valid when is_eop[0] is asserted.
• 27:24 is_eop1_ptr[3:0] (4 bits): Indicates the offset of the last Dword of the second TLP
ending in this beat. This output is valid when is_eop[1] is asserted.
Example Figures
The following figure shows how the CCIX RX core interface transfers data based on available
credits from the user application. The interface deasserts ccix_rx_valid for a TLP that is
in the process of being presented when no credit is available. The core takes one user
clock cycle to restart the TLP presentation after the user application sends a credit.
The following figure shows the requirements of the user application on the Transmit CCIX core
interface, when the ccix_tx_valid signal is deasserted during presentation of a TLP packet
(option 1).
The following figure shows option 2, where the user accumulates enough credits before
transmitting the TLP packet, which ensures that the user does not need to deassert
ccix_tx_valid during the presentation of the TLP.
CCIX-VC1 Credits
The following table shows the credits information for CCIX core interface 256 bits and 512 bits.
CCIX Activation/Deactivation
Four states define how a CCIX link moves from deactivation to activation, and activation to
deactivation.
Note: When moving from activation to deactivation, it is important that the credit exchange is
carefully controlled to avoid loss of data or credits in the CCIX link. The CCIX link must be
in the quiesce state.
The following figure shows the four state transitions that are based on a pair of interface signals
used as a concatenation:
Figure: CCIX state transitions based on the concatenated signal pair: Stop (00), Activate (10),
Run (11), and Deactivate (01).
Appendix E
Xilinx Resources
For support resources such as Answers, Documentation, Downloads, and Forums, see Xilinx
Support.
Xilinx Design Hubs provide links to documentation organized by design tasks and other topics,
which you can use to learn key concepts and address frequently asked questions. To access the
Design Hubs:
Note: For more information on DocNav, see the Documentation Navigator page on the Xilinx website.
References
This section provides links to supplemental material useful to this document:
Revision History
The following table shows the revision history for this document.
Copyright
© Copyright 2014–2022 Advanced Micro Devices, Inc. Xilinx, the Xilinx logo, Alveo, Artix,
Kintex, Kria, Spartan, Versal, Vitis, Virtex, Vivado, Zynq, and other designated brands included
herein are trademarks of Xilinx in the United States and other countries. AMBA, AMBA Designer,
Arm, ARM1176JZ-S, CoreSight, Cortex, PrimeCell, Mali, and MPCore are trademarks of Arm
Limited in the EU and other countries. PCI, PCIe, and PCI Express are trademarks of PCI-SIG and
used under license. All other trademarks are the property of their respective owners.