Best Practices for SRX Series Chassis Cluster Management
Modified: 2015-11-30
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
http://www.juniper.net/support/eula.html. By downloading, installing or using such software, you agree to the terms and conditions of
that EULA.
This document provides best practices and methods for monitoring high-end SRX® Series
chassis clusters using instrumentation available in the Junos operating system
(Junos OS), such as SNMP, the NETCONF XML management protocol, and syslog. This
document applies to all high-end SRX Series Services Gateways.
A chassis cluster provides high availability through redundancy. Before you begin
managing an SRX Series chassis cluster, you need a basic understanding of how the
cluster is formed and how it works. Key characteristics of a chassis cluster include:
• A resilient system architecture, with a single active control plane for the entire cluster
and multiple Packet Forwarding Engines. This architecture presents a single device
view of the cluster.
• Monitoring of physical interfaces, with failover if the failure parameters cross a configured
threshold.
To form a chassis cluster, a pair of the same model of supported SRX Series devices is
combined to act as a single system that enforces the same overall security. Chassis
cluster formation depends on the model. For SRX3400 and SRX3600 chassis clusters,
the location and type of Services Processing Cards (SPCs), I/O cards (IOCs), and Network
Processing Cards (NPCs) must match in the two devices. For SRX5600 and SRX5800
chassis clusters, the placement and type of SPCs must match in the two devices.
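For reference, clustering is enabled with an operational-mode command on each device,
followed by a reboot. The following is a minimal sketch; the cluster ID, node IDs, and
host prompts are illustrative values:

user@srx-a> set chassis cluster cluster-id 1 node 0 reboot
user@srx-b> set chassis cluster cluster-id 1 node 1 reboot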
Hardware Requirements
The following hardware components are required to support chassis clusters:
Software Requirements
The following software components are required to support chassis clusters:
• Junos OS Release 10.1R2 or later for SRX Series branch device virtual chassis
management and in-band management
For more information about how to form clusters, see the Junos OS Security Configuration
Guide.
The data plane software operates in active/active mode. In a chassis cluster, session
information is updated as traffic traverses either device, and this information is transmitted
between the nodes over the fabric link to guarantee that established sessions are not
dropped when a failover occurs. In active/active mode, it is possible for traffic to ingress
the cluster on one node and egress the cluster from the other node.
When a device joins a cluster, it becomes a node of that cluster. With the exception of
unique node settings and management IP addresses, nodes in a cluster share the same
configuration.
You can deploy up to 15 chassis clusters in a Layer 2 domain. Clusters and nodes are
identified in the following ways:
Chassis clustering of interfaces and services is provided through redundancy groups and
primacy within groups. A redundancy group is an abstract construct that includes and
manages a collection of objects. A redundancy group contains objects on both nodes. At
any time, a redundancy group is primary on one node and backup on the other. When a
redundancy group is said to be primary on a node, its objects on that node are active. See
the Junos OS Security Configuration Guide for detailed information about redundancy
groups. In Junos OS Services Redundancy Protocol (JSRP) clustering, a redundancy group
is a concept similar to a virtual security device (VSD) in Juniper Networks ScreenOS
Software, and a redundant Ethernet interface is similar to a virtual security interface
(VSI): each node has an interface in the redundancy group, and only one interface is
active at a time. Redundancy group 0 is always for the control plane, while redundancy
groups 1 and higher are always for the data plane ports.
Active/passive chassis cluster mode is the most common type of chassis cluster firewall
deployment and consists of two firewall members of a cluster. One actively provides
routing, firewall, NAT, virtual private network (VPN), and security services, along with
maintaining control of the chassis cluster. The other firewall passively maintains its state
for cluster failover capabilities should the active firewall become inactive.
SRX Series devices support the active/active chassis cluster mode for environments in
which you want to maintain traffic on both chassis cluster members whenever possible.
In an SRX Series device active/active deployment, only the data plane is in active/active
mode, while the control plane is actually in active/passive mode. This allows one control
plane to control both chassis members as a single logical device, and in case of control
plane failure, the control plane can fail over to the other unit. This also means that the
data plane can fail over independently of the control plane. Active/active mode also
allows for ingress interfaces to be on one cluster member, with the egress interface on
the other. When this happens, the data traffic must pass through the data fabric to go
to the other cluster member and out of the egress interface. This is known as Z mode.
Active/active mode also allows each node to have local interfaces that are not shared
across the cluster on failover but exist only on a single chassis. These interfaces are
often used in conjunction with dynamic routing protocols that fail traffic over to the
other cluster member if needed. Figure 1 on page 4 shows two SRX5800 devices in a
cluster.
[Figure 1: Two SRX5800 devices in a cluster. A fiber-optic cable connects control port 1
(fpc 6, port 1) on one node to control port 1 (fpc 18, port 1) on the other.]
To effectively manage the SRX clusters, network management applications must do the
following:
Figure 2 on page 5 shows the SRX Series high-end devices configuration for out-of-band
management and administration.
[Figure 2: Out-of-band management of an SRX Series high-end cluster. The network
management software reaches the fxp0/reth interfaces of the primary and secondary
nodes through a backup router; control and data links connect the two nodes.]
Figure 3 on page 6 shows the SRX Series branch devices configuration for out-of-band
management and administration.
[Figure 3: Out-of-band management of an SRX Series branch cluster. The network
management software reaches the fxp0/reth interfaces of the primary and secondary
nodes through a backup router; control and data links connect the two nodes.]
groups {
    node0 {
        system {
            ...                          # node0 system configuration elided in this excerpt
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 172.19.100.164/24;
                    }
                }
            }
        }
    }
    node1 {
        system {
            host-name SRX3400-2;
            backup-router 172.19.100.1 destination 10.0.0.0/8;
            services {
                outbound-ssh {
                    client nm-10.200.0.1 {
                        device-id F007CC;
                        secret "$9$kPFn9ApOIEAtvWXxdVfTQzCt0BIESrIR-VsYoa9At0Rh"; ## SECRET-DATA
                        services netconf;
                        10.200.0.1 port 7804;
                    }
                }
            }
        }
    }
}
apply-groups "${node}";                  # applies the matching node group on each node
interfaces {
    fxp0 {
        unit 0 {
            family inet {
                filter {
                    input protect;
                }
                address 172.19.100.166/24 {
                    master-only;
                }
            }
        }
    }
}
{primary:node0}[edit]
snmp {
    v3 {
        ...                              # USM user and VACM access configuration elided in this excerpt
            security-level authentication {
                read-view all;
            }
        ...
        target-address petserver {
            address 116.197.178.20;
            tag-list router1;
            routing-instance MGMT_10;
            target-parameters test;
        }
        target-parameters test {
            parameters {
                message-processing-model v3;
                security-model usm;
                security-level authentication;
                security-name juniper;
            }
            notify-filter filter1;
        }
        notify server {
            type trap;
            tag router1;
        }
        notify-filter filter1 {
            oid .1 include;
        }
    }
}
{primary:node0}[edit]
firewall {
    filter protect {
        ...                              # earlier terms (permitting management traffic such as
        ...                              # SSH and Telnet) elided in this excerpt; the truncated
        ...                              # term ends with the following action
            then accept;
        }
        term permit-icmp {
            from {
                protocol icmp;
                icmp-type [ echo-reply echo-request ];
            }
            then accept;
        }
        term permit-ntp {
            from {
                source-address {
                    149.20.68.16/32;
                }
                protocol udp;
                port ntp;
            }
            then accept;
        }
        term permit-ospf {
            from {
                protocol ospf;
            }
            then accept;
        }
        term permit-snmp {
            from {
                source-address {
                    10.200.0.0/24;
                }
                protocol udp;
                port [ snmp snmptrap ];
            }
            then accept;
        }
        term deny-and-count {
            from {
                source-address {
                    0.0.0.0/0;
                }
            }
            then {
                count denied;
                reject tcp-reset;
            }
        }
    }
}
Explanation of Configuration
• The best way to connect to an SRX Series chassis cluster is through the fxp0
out-of-band management interface. Assign IP addresses to the management ports on
both the primary and secondary nodes using configuration groups, as shown previously.
• Use a master-only IP address across the cluster. This way, you can query a single IP
address, and that address always belongs to the node that is primary for redundancy
group 0. If you are not using a master-only IPv4 address, each node IP address must
be added and monitored. Secondary node monitoring is limited, as detailed in this topic.
• With the fxp0 interface configuration previously shown, the management IPv4 address
on the fxp0 interface of the secondary node in a chassis cluster is not reachable. The
secondary node routing subsystem is not running. The fxp0 interface is reachable by
hosts that are on the same subnet as the management IPv4 address. If the host is on
a different subnet than the management IPv4 address, then communication fails. This
is an expected behavior and works as designed. The secondary cluster member’s
Routing Engine is not operational until failover. The routing protocol process does not
work in the secondary node when the primary node is active. When management
access is needed, the backup-router configuration statement can be used.
With the backup-router statement, the secondary node can be accessed from an
external subnet for management purposes. Due to a system limitation, do not configure
the destination address specified in the backup-router statement as ‘0.0.0.0/0’ or ‘::/0’;
the destination mask must be a nonzero value. Multiple destinations can be included if your management
IP address range is not contiguous. In this example, backup router 172.19.100.1 is
reachable through the fxp0 interface, and the destination network management system
IPv4 address is 10.200.0.1. The network management address is reachable through
the backup router. For the backup router to reach the network management system,
include the destination subnet in the backup router configuration.
• We recommend using different SNMP engine IDs for each node (see the configuration
sketch after this list). This is because SNMPv3 uses the SNMP engine boots value for
authentication of the protocol data units (PDUs), and the SNMP engine boots value
differs between the nodes. SNMPv3 might fail after a switchover when the SNMP
engine boots value does not match the expected value. Most protocol stacks
resynchronize if the SNMP engine IDs are different.
• Keep other SNMP configurations, such as the SNMP communities, trap-groups, and
so on, common between the nodes as shown in the sample configuration.
NOTE: SNMP traps are sent only from the primary node. This includes
events and failures detected on the secondary node. The secondary node
never sends SNMP traps or alerts. Use the clients restriction options
to limit SNMP access to the required clients only. Use SNMPv3 for
encryption and authentication.
• Syslog messages should be sent from both nodes separately as the log messages are
node specific.
• You can restrict access to the device using firewall filters. The previous sample
configuration restricts SSH, SNMP, and Telnet to the 10.0.0.0/8 network, allows ICMP,
NTP, OSPF, and SNMP traffic, and denies all other traffic. This filter is applied to the
fxp0 interface.
• You can also use security zones to restrict the traffic. For more information, see the
Junos OS Security Configuration Guide.
• The factory default configuration for the SRX100, SRX210, and SRX240 devices
automatically enables Layer 2 Ethernet switching. Because Layer 2 Ethernet switching
is not supported in chassis cluster mode, for these devices, if you use the factory default
configuration, you must delete the Ethernet switching configuration before you enable
chassis clustering.
• Use syslog with caution because excessive logging can cause cluster instability. In
particular, data plane logs should never be sent through syslog on SRX Series branch devices.
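Per-node SNMP engine IDs can be set inside the node configuration groups so that each
node advertises a distinct engine ID. The following is a minimal sketch; the local engine
ID strings are illustrative values, not required names:

groups {
    node0 {
        snmp {
            engine-id {
                local node0-engine;      # illustrative value
            }
        }
    }
    node1 {
        snmp {
            engine-id {
                local node1-engine;      # illustrative value
            }
        }
    }
}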
If the reth interface belongs to redundancy group 1+, then the TCP connection to the
management station is seamlessly transitioned to the new primary. But if redundancy
group 0 failover occurs and the Routing Engine switches over to a new node, then
connectivity is lost for all sessions for a couple of seconds.
When enabling a chassis cluster in an SRX Series Services Gateway, the same model
device is used to provide control plane redundancy as shown in Figure 4 on page 13.
[Figure 4: Chassis cluster architecture. The control plane daemons are active on one
node at a time, while the node 0 and node 1 forwarding planes are connected through
the fab0 and fab1 fabric interfaces.]
Similar to a device with two Routing Engines, the control plane of an SRX Series cluster
operates in an active/passive mode with only one node actively managing the control
plane at any given time. Because of this, the forwarding plane always directs all traffic
sent to the control plane (also referred to as host-inbound traffic) to the cluster’s primary
node. This traffic includes (but is not limited to):
• Traffic for the routing processes, such as BGP, OSPF, IS-IS, RIP, and PIM traffic
• Traffic directed to management processes, such as SSH, Telnet, SNMP, and NETCONF
This behavior applies only to host-inbound traffic. Through traffic (that is, traffic forwarded
by the cluster, but not destined to any of the cluster’s interfaces) can be processed by
either node, based on the cluster’s configuration.
Because the forwarding plane always directs host-inbound traffic to the primary node,
the fxp0 interface is the exception that provides an independent connection to each
node, regardless of the status of the control plane. Traffic sent to the fxp0 interface is
not processed by the forwarding plane; it is sent directly to the Junos OS kernel, thus
providing a way to connect to the control plane of a node, even on the secondary node.
This topic explains how to manage a chassis cluster through the primary node without
requiring the use of the fxp0 interfaces, that is, in-band management. This is particularly
needed for SRX Series branch devices since the typical deployment for these devices is
such that there is no management network available to monitor the remote branch office.
Before Junos OS Release 10.1R2, the management of an SRX Series branch chassis
cluster required connectivity to the control plane of both members of the cluster, thereby
requiring access to the fxp0 interface of each node. In Junos OS Release 10.1R2 and later,
SRX Series branch devices can be managed remotely using the reth interfaces or the
Layer 3 interfaces.
Managing SRX Series Branch Chassis Clusters Through the Primary Node
Accessing the primary node of a cluster is as easy as establishing a connection to any of
the node’s interfaces (other than the fxp0 interface). Layer 3 and reth interfaces always
direct the traffic to the primary node, whichever node that is. Both deployment scenarios
are common and are depicted in Figure 5 on page 15 and Figure 6 on page 16.
In both cases, establishing a connection to any of the local addresses connects to the
primary node. To be precise, you are connected to the primary node of redundancy group
0. For example, you can connect to the primary node even when the reth interface, a
member of the redundancy group 1, is active in a different node (the same applies to
Layer 3 interfaces, even if they physically reside in the backup node). You can use SSH,
Telnet, SNMP, or the NETCONF XML management protocol to monitor the SRX chassis
cluster.
Figure 5 on page 15 shows an example of an SRX Series branch device being managed
over a reth interface. This model can be used for SRX Series high-end devices as well,
using Junos OS Release 10.4 or later.
[Figure 5: In-band management of an SRX cluster over a reth interface. reth1.0 is the
redundant Ethernet interface connected to the Internet, and reth0.0 is the redundant
Ethernet interface connected to the trusted network through EX Series switches.]
Figure 6 on page 16 shows physical connections for in-band management using a Layer
3 interface.
[Figure 6: In-band management of an SRX cluster using a Layer 3 interface. The
ge-0/0/0.0 interface connects to the Internet, and the reth0.0 redundant Ethernet
interface connects to the trusted network through EX Series switches.]
Table 1 on page 16 lists the advantages and disadvantages of using different interfaces.
Table 1: fxp0 Interface Versus Transit or reth Interface for Management

fxp0 interface:
• Using the fxp0 interface with a master-only IP address allows access to all routing
instances and virtual routers within the system. The fxp0 interface can only be part of
the inet.0 routing table; because inet.0 is part of the default routing instance, it can be
used to access data for all routing instances and virtual routers.
• The fxp0 interface with a master-only IP address can be used to manage the device
even after a failover, and we highly recommend this.
• Managing through the fxp0 interface requires two IP addresses, one per node. This
also means that a switch needs to be present to connect to the cluster nodes using
the fxp0 interface.
• SRX Series branch device clusters with a non-Ethernet link (ADSL, T1/E1) cannot be
managed using the fxp0 interface.

Transit or reth interface:
• A transit or reth interface has access only to the data of the routing instance or virtual
router it belongs to. If it belongs to the default routing instance, it has access to all
routing instances.
• Transit interfaces lose connectivity after a failover (or when the device hosting the
interface goes down or is disabled), unless they are part of a reth group.
• The reth interface does not need two IP addresses, and no switch is required to connect
to the SRX Series chassis cluster. Transit interfaces on each node, if used for
management, need explicit IP addresses on each interface; because these are transit
interfaces, the addresses also carry traffic other than management traffic.
• SRX Series branch devices with a non-Ethernet link can be managed using a reth or
transit interface.
SSH or Telnet for CLI Access – Recommended only for manual configuration and monitoring
of a single cluster.

Junos OS XML Management Protocol – An XML-based interface that can run over Telnet,
SSH, and SSL, and a precursor to the NETCONF XML management protocol. It provides
access to the Junos OS XML APIs for all configuration and operational commands that can
be entered using the CLI. We recommend this method for accessing operational information.
It can run over a NETCONF XML management protocol session as well.

NETCONF XML Management Protocol – The IETF-defined standard XML interface for
configuration. We recommend using it to configure the device. This session can also be used
to run Junos OS XML Management Protocol remote procedure calls (RPCs).

SNMP – From an SRX Series chassis cluster point of view, SNMP views the two nodes within
the cluster as a single system. There is only one SNMP process running on the master Routing
Engine. At initialization time, the protocol master indicates which SNMP process (snmpd)
should be active based on the Routing Engine master configuration. The passive Routing
Engine has no snmpd running. Therefore, only the primary node responds to SNMP queries
and sends traps at any point in time. The secondary node can be queried directly, but it has
limited MIB support, as detailed in “Retrieving Chassis Inventory and Interfaces” on page 21.
The secondary node does not send SNMP traps. SNMP requests to the secondary node can
be sent using the fxp0 interface IP address on the secondary node or the reth interface IP
address.

Syslogs – Standard system log messages can be sent to an external syslog server. Note that
both the primary and secondary nodes can send syslog messages. We recommend that you
configure both the primary and secondary nodes to send syslog messages separately.

Security Log Messages (SPU) – AppTrack, an application tracking tool, provides statistics
for analyzing bandwidth usage of your network. When enabled, AppTrack collects byte,
packet, and duration statistics for application flows in the specified zone. By default, when
each session closes, AppTrack generates a message that provides the byte and packet counts
and duration of the session, and sends the message to the host device. AppTrack messages
are similar to session log messages and use syslog or structured syslog formats. The message
also includes an application field for the session. If AppTrack identifies a custom-defined
application and returns an appropriate name, the custom application name is included in
the log message. Note that application identification has to be configured for this to occur.
See the Junos OS Security Configuration Guide for details on configuring and using
application identification and tracking.

J-Web – All Junos OS devices provide a graphical user interface for configuration and
administration. This interface can be used for administering individual devices.
Following are some best practices for chassis clusters for SRX Series devices.
Using BFD
The Bidirectional Forwarding Detection (BFD) protocol is a simple hello mechanism that
detects failures in a network. Hello packets are sent at a specified, regular interval. A
neighbor failure is detected when the router stops receiving a reply after a specified
interval. BFD works with a wide variety of network environments and topologies. BFD
failure detection times are shorter than RIP detection times, providing faster reaction
times to various kinds of failures in the network. These timers are also adaptive. For
example, a timer can adapt to a higher value if the adjacency fails, or a neighbor can
negotiate a higher value for a timer than the one configured. BFD liveness detection
can be configured between the two nodes of an SRX Series chassis cluster using the
local interfaces (not the fxp0 IP addresses) on each node. This way, BFD continuously
monitors the connectivity between the two nodes of the cluster; when a network issue
arises between the nodes, BFD session-down SNMP traps are sent, indicating the
problem.
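One way to set this up is with static /32 routes between node-local addresses, with
BFD liveness detection attached. The following is a minimal sketch; the 10.1.1.1 and
10.1.1.2 addresses and the timer values are illustrative assumptions, not values from
this document:

routing-options {
    static {
        route 10.1.1.2/32 {
            next-hop 10.1.1.2;
            bfd-liveness-detection {
                minimum-interval 1000;   # milliseconds; example value
                multiplier 3;            # missed-hello multiplier; example value
            }
        }
    }
}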
Using IP Monitoring
IP monitoring allows path and next-hop validation through the existing network
infrastructure using the Internet Control Message Protocol (ICMP). Upon detection of
a failure, a failover to the other node is executed in an attempt to prevent downtime.
The following sample monitors IP address 128.249.34.1 through the reth0.34 interface
for redundancy group 1:
chassis {
cluster {
reth-count 6;
redundancy-group 0 {
node 0 priority 129;
node 1 priority 128;
}
redundancy-group 1 {
node 0 priority 129;
node 1 priority 128;
interface-monitor {
ge-0/0/0 weight 255;
ge-8/0/0 weight 255;
}
ip-monitoring {
global-weight 255;
global-threshold 0;
family {
inet {
128.249.34.1 {
weight 10;
interface reth0.34 secondary-ip-address 128.249.34.202;
}
}
}
}
}
}
}
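After such a configuration is committed, the state of the monitored addresses can be
verified from the CLI; a quick sketch (output omitted):

{primary:node0}
user@host> show chassis cluster ip-monitoring status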
Three main types of graceful restart are available on Juniper Networks routing platforms:
• Graceful restart for aggregate and static routes and for routing protocols—Provides
protection for aggregate and static routes and for BGP, End System-to-Intermediate
System (ES-IS), IS-IS, OSPF, RIP, next-generation RIP (RIPng), and Protocol
Independent Multicast (PIM) sparse mode routing protocols.
• Graceful restart for MPLS-related protocols—Provides protection for LDP, RSVP, circuit
cross-connect (CCC), and translational cross-connect (TCC).
• Graceful restart for virtual private networks (VPNs)—Provides protection for Layer 2
and Layer 3 VPNs.
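Graceful restart is enabled globally under the routing-options hierarchy
(protocol-specific helper modes are generally on by default in Junos OS). A minimal
sketch:

routing-options {
    graceful-restart;
}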
SRX Series chassis cluster inventory and interface information is gathered to monitor
the hardware components and the interfaces on the cluster. The primary node contains
information about the secondary node components and interfaces.
Using the Junos OS XML Management Protocol or NETCONF XML Management Protocol
• Use the get-chassis-inventory remote procedure call (RPC) to get the chassis inventory.
This RPC reports components on both the primary and secondary nodes. For more
information, see “Managing SRX Series Chassis Clusters Using RPCs” on page 85.
• Use the get-interface-information RPC to get the interfaces inventory. This RPC reports
information about the interfaces on the secondary node except for the fxp0 interface.
See the Junos XML API Operational Developer Reference for details about using the RPCs
and their responses.
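To see the exact RPC tag that corresponds to a CLI command, append the display xml
rpc filter to the command; a quick sketch:

user@host> show chassis inventory | display xml rpc
<rpc-reply xmlns:junos="...">
    <rpc>
        <get-chassis-inventory/>
    </rpc>
</rpc-reply>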
Using SNMP
• Use the jnx-chas-defines MIB to understand the SRX Series chassis structure and
modeling. This MIB is not for querying. It is only used to understand the chassis cluster
modeling.
jnxProductLineSRX3600 OBJECT IDENTIFIER ::= { jnxProductLine 34 }
jnxProductNameSRX3600 OBJECT IDENTIFIER ::= { jnxProductName 34 }
jnxProductModelSRX3600 OBJECT IDENTIFIER ::= { jnxProductModel 34 }
jnxProductVariationSRX3600 OBJECT IDENTIFIER ::= { jnxProductVariation 34 }
jnxChassisSRX3600 OBJECT IDENTIFIER ::= { jnxChassis 34 }
• Top of MIB – Use the top-level objects to show chassis details such as the jnxBoxClass,
jnxBoxDescr, jnxBoxSerialNo, jnxBoxRevision, and jnxBoxInstalled MIB objects.
• jnxLedTable – Use to check the LED status of the components. This table reports the
LED status of the primary node only.
• jnxFilledTable – Use to show the empty/filled status of the containers in the device
containers table.
• jnxOperatingTable – Use to show the operating status of the subjects in the box
contents table.
• jnxRedundancyTable – Use to show redundancy details on both nodes. Note that
currently this table reports only on the Routing Engines, and both Routing Engines are
reported as the master of their respective nodes. Do not use this table to determine
the active and backup status.
• jnxFruTable – Use to show the field-replaceable units (FRUs) in the chassis. Note that
even the empty slots are reported.
NOTE: The jnx-chassis MIB is not supported on SRX Series branch devices
in cluster mode. It is supported on standalone SRX Series branch devices.
• ifTable—Use to show all the interfaces on the cluster. Note that except for the fxp0
interface on the secondary node, all interfaces of the secondary node are reported by
the primary node.
To determine if the SRX Series device is configured in a cluster, use the following methods.
We recommend using the master-only IP address from the management station to
perform the operations suggested.
RPC:
<rpc>
    <get-chassis-cluster-status>
    </get-chassis-cluster-status>
</rpc>
Response: See “Managing SRX Series Chassis Clusters Using RPCs” on page 85.
Use the get-chassis-inventory remote procedure call (RPC) to get the inventory of the
chassis for both the primary and secondary nodes. This identifies two nodes as part of
a multi-routing-engine-item. See “Managing SRX Series Chassis Clusters Using RPCs”
on page 85 for sample output of the RPC. The following output shows only the relevant
tags.
RPC:<rpc><get-chassis-inventory/></rpc>
<multi-routing-engine-item>
<re-name>node0</re-name>
#Node 0 Items
</multi-routing-engine-item>
<multi-routing-engine-item>
<re-name>node1</re-name>
#Node 1 Items
</multi-routing-engine-item>
Using SNMP
We recommend that you use the master-only IP address to do SNMP polling. After a
switchover, the management system continues to use the master-only IP address to
manage the cluster. If a master-only IP address is not used, only the primary node responds
to the jnx-chassis MIB queries. The primary node includes components from the secondary
node as well. The secondary node does not respond to the jnx-chassis MIB queries.
NOTE: There are no MIBs to identify the primary and secondary nodes. The
only method to identify the primary and secondary nodes using SNMP is to
send queries for the jnx-chassis MIB objects to both IP addresses; only the
primary responds. If you use a master-only IP address, the active primary
responds. Another option is to walk the jnxLedTable MIB, which returns
data only for the primary node.
The following sample shows two Routing Engines and two nodes, node 0 and node 1,
present on the device.
The jnx-chassis MIB is not supported on SRX Series branch devices in cluster mode. It is
supported on standalone SRX Series branch devices.
• get-config – Use to show the node0 and node1 fxp0 interface and the reth interface
configuration to identify the IP addresses used by the primary and secondary nodes.
• get-interface-information – Use to show the interfaces and basic details. Use the
interface-address tag to identify the IP addresses for the fxp0 and reth interfaces. Using
this remote procedure call (RPC), all interfaces are reported, including the addresses
on the secondary node, except for the fxp0 interface on the secondary node. The
following sample shows the fxp0 interface on the primary node:
<physical-interface>
<name>
fxp0
</name>
<admin-status>
up
</admin-status>
<oper-status>
up
</oper-status>
<logical-interface>
<name>
fxp0.0
</name>
<admin-status>
up
</admin-status>
<oper-status>
up
</oper-status>
<filter-information>
</filter-information>
<address-family>
<address-family-name>
inet
</address-family-name>
<interface-address>
<ifa-local junos:emit="emit">
10.204.131.37/18
</ifa-local>
</interface-address>
</address-family>
</logical-interface>
</physical-interface>
</interface-information>
Use the ifTable MIB table to get the ifIndex MIB object of the fxp0 interface and the reth
interface on the primary node. Use the ipAddrTable MIB table to determine the IP address
of the interfaces. The following is a sample showing the fxp0 interface on the active
primary node. Note that the ifTable MIB table reports all interfaces on the secondary
node, except for the fxp0 interface on the secondary node.
{primary:node0}
user@host> show snmp mib walk ifTable | grep fxp0
ifDescr.1 = fxp0
ifDescr.13 = fxp0.0
For SNMP communication directly with the secondary node, the IP address of the
secondary node should be predetermined and preconfigured on the management system.
Querying the ifTable MIB table directly on the secondary node returns only the fxp0
interface and a few private interface details on the secondary node, and no other interfaces
are reported. All other interfaces are reported by the primary node itself. Use the ifTable
MIB table and the ipAddrTable MIB table as previously shown to directly query the
secondary node to find the fxp0 interface details such as the ifAdminStatus and
ifOperStatus MIB objects on the secondary node.
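For illustration, the same objects can be checked from the CLI of the secondary node,
matching what a direct SNMP query of its fxp0 address would return; a quick sketch
(output omitted):

{secondary:node1}
user@host> show snmp mib walk ifAdminStatus
user@host> show snmp mib walk ifOperStatus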
To monitor the cluster, you need to discover the redundancy groups. When you initialize
a device in chassis cluster mode, the system creates a redundancy group referred to in
this topic as redundancy group 0. Redundancy group 0 manages the primacy and failover
between the Routing Engines on each node of the cluster. As is the case for all redundancy
groups, redundancy group 0 can be primary on only one node at a time. The node on
which redundancy group 0 is primary determines which Routing Engine is active in the
cluster. A node is considered the primary node of the cluster if its Routing Engine is the
active one. You can configure one or more redundancy groups numbered 1 through 128,
referred to in this section as redundancy group x. The maximum number of redundancy
groups is equal to the number of redundant Ethernet interfaces + 1 that you configure.
Each redundancy group x acts as an independent unit of failover and is primary on only
one node at a time. There are no MIBs available to retrieve this information.
Using the Junos OS XML Management Protocol or NETCONF XML Management Protocol
Use the get-configuration remote procedure call (RPC) to get the redundancy
configuration and the redundancy groups present on the device. This provides the
redundancy groups configured.
<rpc>
<get-configuration inherit="inherit" database="committed">
<configuration>
<chassis>
<cluster>
</cluster>
</chassis>
</configuration>
</get-configuration>
</rpc>
Response:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:junos="http://xml.juniper.net/junos/10.4I0/junos">
<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm"
junos:commit-seconds="1277806450" junos:commit-localtime="2010-06-29 03:14:10
PDT" junos:commit-user="regress">
<chassis>
<cluster>
<reth-count>10</reth-count>
<control-ports>
<fpc>4</fpc>
<port>0</port>
</control-ports>
<control-ports>
<fpc>10</fpc>
<port>0</port>
</control-ports>
<redundancy-group>
<name>0</name>
<node>
<name>0</name>
<priority>254</priority>
</node>
<node>
<name>1</name>
<priority>1</priority>
</node>
</redundancy-group>
<redundancy-group>
<name>1</name>
<node>
<name>0</name>
<priority>100</priority>
</node>
<node>
<name>1</name>
<priority>1</priority>
</node>
</redundancy-group>
</cluster>
</chassis>
</configuration>
</rpc-reply>
]]>]]>
• Use the get-chassis-cluster-interfaces remote procedure call (RPC) to obtain the reth
interface details. The following sample output (truncated at the beginning) shows the
control link, fabric link, and configured reth interfaces:
<control-link-interface-name>em0</control-link-interface-name>
<control-link-interface-status>Up</control-link-interface-status>
</control-information>
<control-information>
<control-link-interface-index>1</control-link-interface-index>
<control-link-interface-name>em1</control-link-interface-name>
<control-link-interface-status>Down</control-link-interface-status>
</control-information>
</control-link-interfaces>
<dataplane-interface-status>Up</dataplane-interface-status>
<dataplane-interfaces>
<fabric-information>
<fabric-interface-index>0</fabric-interface-index>
<fabric-child-interface-name>ge-6/0/15</fabric-child-interface-name>
<fabric-child-interface-status>Up</fabric-child-interface-status>
<fabric-interface-index>0</fabric-interface-index>
</fabric-information>
<fabric-information>
<fabric-interface-index>1</fabric-interface-index>
<fabric-child-interface-name>ge-19/0/15</fabric-child-interface-name>
<fabric-child-interface-status>Up</fabric-child-interface-status>
<fabric-interface-index>1</fabric-interface-index>
</fabric-information>
</dataplane-interfaces>
<reth>
<reth-name>reth0</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth1</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>Not
configured</redundancy-group-id-for-reth>
<reth-name>reth2</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth3</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>Not
configured</redundancy-group-id-for-reth>
<reth-name>reth4</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth5</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth6</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth7</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth8</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth9</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth10</reth-name>
<reth-status>Up</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth11</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth12</reth-name>
<reth-status>Down</reth-status>
<redundancy-group-id-for-reth>Not
configured</redundancy-group-id-for-reth>
<reth-name>reth13</reth-name>
<reth-status>Up</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth14</reth-name>
<reth-status>Up</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth15</reth-name>
<reth-status>Up</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
<reth-name>reth16</reth-name>
<reth-status>Up</reth-status>
<redundancy-group-id-for-reth>1</redundancy-group-id-for-reth>
</reth>
<interface-monitoring>
</interface-monitoring>
</chassis-cluster-interface-statistics>
<cli>
<banner>{secondary:node0}</banner>
</cli>
</rpc-reply>
Control interfaces:
Index Interface Status
0 em0 Up
1 em1 Down
Fabric interfaces:
Name Child-interface Status
fab0 ge-6/0/15 Up
fab0
fab1 ge-19/0/15 Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down 1
reth1 Down Not configured
reth2 Down 1
reth3 Down Not configured
reth4 Down 1
reth5 Down 1
reth6 Down 1
reth7 Down 1
reth8 Down 1
reth9 Down 1
reth10 Up 1
reth11 Down 1
reth12 Down Not configured
reth13 Up 1
reth14 Up 1
reth15 Up 1
reth16 Up 1
{secondary:node0}
• Use the get-interface-information remote procedure call (RPC) to show reth interface
details and to identify the reth interfaces on the device. This RPC also shows which
Gigabit Ethernet or Fast Ethernet interfaces belong to which reth interface as shown
in the following sample output:
<rpc>
<get-interface-information>
<terse/>
<interface-name>reth0</interface-name>
</get-interface-information>
</rpc><rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:junos="http://xml.juniper.net/junos/10.4I0/junos">
<interface-information
xmlns="http://xml.juniper.net/junos/10.4I0/junos-interface" junos:style="terse">
<physical-interface>
<name>
reth0
</name>
<admin-status>
up
</admin-status>
<oper-status>
up
</oper-status>
<logical-interface>
<name>
reth0.0
</name>
<admin-status>
up
</admin-status>
<oper-status>
up
</oper-status>
<filter-information>
</filter-information>
<address-family>
<address-family-name>
inet
</address-family-name>
<interface-address>
<ifa-local junos:emit="emit">
192.168.29.2/24
</ifa-local>
</interface-address>
</address-family>
<address-family>
<address-family-name>
multiservice
</address-family-name>
</address-family>
</logical-interface>
</physical-interface>
</interface-information>
Next, identify the member interfaces that belong to this reth interface. The following
output shows only the relevant information:
<rpc>
<get-interface-information>
<terse/>
</get-interface-information>
</rpc><rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:junos="http://xml.juniper.net/junos/10.4I0/junos">
<interface-information
xmlns="http://xml.juniper.net/junos/10.4I0/junos-interface" junos:style="terse">
<physical-interface>
<name>
ge-5/1/1
</name>
<admin-status>
up
</admin-status>
<oper-status>
up
</oper-status>
<logical-interface>
<name>
ge-5/1/1.0
</name>
<admin-status>
up
</admin-status>
<oper-status>
up
</oper-status>
<filter-information>
</filter-information>
<address-family>
<address-family-name>
aenet
</address-family-name>
<ae-bundle-name>
reth0.0
</ae-bundle-name>
</address-family>
</logical-interface>
</physical-interface>
</interface-information>
In the sample output, the ae-bundle-name tag identifies the reth interface it belongs
to.
Using SNMP
• Use the ifStackStatus MIB table to map the reth interface to the underlying interfaces
on the primary and secondary nodes. The reth interface is the high layer, and the
individual interfaces from both nodes show up as lower layer indexes.
{primary:node0}
user@host> show interfaces terse | grep reth0
Find the index of each interface in the ifTable. The following output shows the
indexes of the interfaces required in this example:
{primary:node0}
user@host> show snmp mib walk ifDescr | grep reth0
ifDescr.503 = reth0.0
ifDescr.528 = reth0
Now, search for the index of reth0 in the ifStackStatus table. In the following sample
output, index 503 (reth0.0) is the higher-layer index, and indexes 522 and 552 are the
lower-layer indexes, representing ge-5/1/1.0 and ge-11/1/1.0, respectively.
{primary:node0}
user@host> show snmp mib walk ifStackStatus | grep 503
ifStackStatus.0.503 = 1
ifStackStatus.503.522 = 1
ifStackStatus.503.552 = 1
{primary:node0}
user@host> show snmp mib walk ifDescr | grep 522
ifDescr.522 = ge-5/1/1.0
{primary:node0}
user@host> show snmp mib walk ifDescr | grep 552
ifDescr.552 = ge-11/1/1.0
Use the get-configuration remote procedure call (RPC) to get the control port configuration
as shown in the following sample output.
<rpc>
<get-configuration inherit="inherit" database="committed">
<configuration>
<chassis>
<cluster>
</cluster>
</chassis>
</configuration>
</get-configuration>
</rpc>
<rpc>
<get-chassis-cluster-data-plane-interfaces>
</get-chassis-cluster-data-plane-interfaces>
</rpc><rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:junos="http://xml.juniper.net/junos/10.4I0/junos">
<chassis-cluster-dataplane-interfaces>
<fabric-interface-index>0</fabric-interface-index>
<fabric-information>
<child-interface-name>xe-5/0/0</child-interface-name>
<child-interface-status>up</child-interface-status>
</fabric-information>
<fabric-interface-index>1</fabric-interface-index>
<fabric-information>
<child-interface-name>xe-11/0/0</child-interface-name>
<child-interface-status>up</child-interface-status>
</fabric-information>
</chassis-cluster-dataplane-interfaces>
Using SNMP
The ifTable MIB table reports fabric (fab) interfaces and the link interfaces. However,
the relationship between the underlying interfaces and fabric interfaces cannot be
determined using SNMP.
Commit scripts give you better control over how your devices are configured and can
be used to enforce custom configuration rules.
This topic provides information about the options available for monitoring chassis
components such as FPCs, PICs, and Routing Engines for data such as operating state,
CPU, and memory.
NOTE: The jnx-chassis MIB is not supported for SRX Series branch devices
in cluster mode. However, it is supported for standalone SRX Series branch
devices. Therefore, we recommend using options other than SNMP for chassis
monitoring of SRX Series branch devices.
Using Junos OS XML RPCs:
• For the temperature of sensors, fan speed, and status of each component, use the
get-environment-information remote procedure call (RPC).
• For temperature thresholds for the hardware components of each element, use the
get-temperature-threshold-information RPC.
• For Routing Engine status, CPU, and memory, use the get-route-engine-information
RPC. This RPC provides 1-, 5-, and 15-minute load averages.
• For FPC status, temperature, CPU, and memory, use the get-fpc-information RPC.
• Use the get-pic-detail RPC with the fpc-slot and pic-slot arguments to get the PIC
status.

Using SNMP:
• Use the jnxOperatingTable MIB table for temperature, fan speed, and so on. Use the
jnxOperatingState MIB object to get the status of a component; if the component is
a FRU, also use the jnxFruState MIB object, which reports states such as offline, online,
and empty. Use the jnxOperatingTemp MIB object for the temperature of sensors.
• Note the following about the objects available for monitoring in the jnxOperatingTable
MIB table:
• No MIB is available for temperature thresholds.
• For the Routing Engine, use the jnxOperatingCPU, jnxOperatingTemp,
jnxOperatingMemory, jnxOperatingISR, and jnxOperatingBuffer MIB objects under
container index 9.
• Look at the jnxRedundancyTable for redundancy status monitoring. It gives data
only for the last 5 seconds.
• For the FPCs, look at the objects in the jnxOperatingTable and jnxFruTable MIB
tables under container index 7 for temperature, CPU, and memory utilization.
• For the PICs (including SPU/SPC card flows), look at the objects in the
jnxOperatingTable and jnxFruTable MIB tables under container index 8, as in the
following sample output, for temperature, CPU, and memory utilization.
jnxOperatingDescr.8
jnxOperatingDescr.8.5.1.0 = node0 PIC: SPU Cp-Flow @ 4/0/*
jnxOperatingDescr.8.5.2.0 = node0 PIC: SPU Flow @ 4/1/*
jnxOperatingDescr.8.6.1.0 = node0 PIC: 4x 10GE XFP @ 5/0/*
jnxOperatingDescr.8.6.2.0 = node0 PIC: 16x 1GE TX @ 5/1/*
jnxOperatingDescr.8.11.1.0 = node1 PIC: SPU Cp-Flow @ 4/0/*
jnxOperatingDescr.8.11.2.0 = node1 PIC: SPU Flow @ 4/1/*
jnxOperatingDescr.8.12.1.0 = node1 PIC: 4x 10GE XFP @ 5/0/*
jnxOperatingDescr.8.12.2.0 = node1 PIC: 16x 1GE TX @ 5/1/*
Accounting Profiles
• Use a Routing Engine accounting profile to get the master Routing Engine statistics in comma separated value (CSV) format.
Configure the routing-engine-profile under the [edit accounting-options] hierarchy level. The collection interval fields and
filename can be configured per your requirements. We recommend transferring the file directly to a management system
using the Junos OS transfer options provided under the [edit accounting-options] hierarchy level. Note that only the primary
node master Routing Engine statistics are available.
The Routing Engine accounting profile is stored in the /var/log directory by default. The following is a sample of an accounting
profile:
• Use a MIB accounting profile for any other MIBs listed in the SNMP MIB column to get results in a CSV format. You can select
the MIB objects, collection interval, and so on.
Using Junos OS XML RPCs:
• Use the get-chassis-cluster-statistics remote procedure call (RPC) to get the cluster
statistics, including the control plane, fabric, and data plane statistics.
• If you want to monitor data plane and control plane statistics separately, you can use
the get-chassis-cluster-control-plane-statistics and
get-chassis-cluster-data-plane-statistics RPCs, respectively.
Using SNMP: not available. The Utility MIB can be used to provide this data through
Junos OS operation scripts.

Use the get-chassis-cluster-status remote procedure call (RPC) to get chassis cluster
information as shown in the following request. SNMP instrumentation is not available
for this data either; the Utility MIB with Junos OS operation scripts can be used instead.
<rpc>
<get-chassis-cluster-status>
<redundancy-group>1</redundancy-group>
</get-chassis-cluster-status>
</rpc>
Interface Statistics
You can use the methods listed in Table 7 on page 38 to get interface statistics including
the reth and fabric interfaces. Note that you can poll the reth interface statistics and then
use the information to determine the redundancy group status, because the inactive
reth link shows 0 output packets per second (output-pps).
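For example, using the ifXTable index of reth0 found earlier (index 528 in the ifDescr
sample; your index will differ), the output octet counter can be polled directly and
sampled over time to derive the rate; a quick sketch:

{primary:node0}
user@host> show snmp mib get ifHCOutOctets.528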
Using Junos OS XML RPCs:
• Use the get-interface-information remote procedure call (RPC) with the extensive tag
to get information such as interface statistics, CoS statistics, and traffic statistics. This
works for all interfaces, including reth interfaces and fabric interfaces, on the primary
node and the secondary node, except the fxp0 interface on the secondary node.
• Use the relationship between the reth and underlying interfaces to determine the
statistics for the member physical interfaces.
• The fxp0 interface on the secondary node can be queried directly using the IP address
of the fxp0 interface on the secondary node.

Using SNMP, the following MIB tables provide interface statistics:
• ifTable – standard MIB II interface statistics
• ifXTable – standard MIB II high-capacity interface statistics
• JUNIPER-IF-MIB – Juniper extensions to the interface entries
• JUNIPER-JS-IF-EXT-MIB – used to monitor the entries pertaining to the security
management of the interface
• For secondary node fxp0 interface details, directly query the secondary node (optional).
Accounting Profiles
• Use Interface accounting profiles for interface statistics in CSV format collected at regular intervals.
• Use MIB accounting profiles for any MIBs collected at regular intervals with output in CSV format.
• Use Class usage profiles for source class and destination class usage.
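As an illustration, an interface accounting profile that samples reth0 counters to a local
CSV file might look like the following minimal sketch; the profile and file names are
illustrative, and the intervals are example values:

accounting-options {
    file if_stats {
        size 1m;
        transfer-interval 15;
    }
    interface-profile if_profile {
        interval 5;
        file if_stats;
        fields {
            input-bytes;
            output-bytes;
            input-packets;
            output-packets;
        }
    }
}
interfaces {
    reth0 {
        accounting-profile if_profile;
    }
}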
Use the methods listed in “Junos OS XML RPC Instrumentation for SPU Monitoring” on
page 38 and “SNMP MIB Instrumentation for SPU Monitoring” on page 40 to get SPU
monitoring data.
• Use the get-flow-session-information remote procedure call (RPC) to get SPU
monitoring data such as total sessions, current sessions, and maximum sessions per node.
<rpc>
<get-flow-session-information>
<summary/>
</get-flow-session-information>
</rpc>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:junos="http://xml.juniper.net/junos/10.4D0/junos">
<multi-routing-engine-results>
<multi-routing-engine-item>
<re-name>node0</re-name>
<flow-session-information xmlns="http://xml.juniper.net/
junos/10.4D0/junos-flow">
<flow-fpc-pic-id> on FPC4 PIC0:</flow-fpc-pic-id>
</flow-session-information>
<flow-session-summary-information xmlns="http://
xml.juniper.net/junos/10.4D0/junos-flow">
<active-unicast-sessions>0</active-unicast-sessions>
<active-multicast-sessions>0</active-multicast-sessions>
<failed-sessions>0</failed-sessions>
<active-sessions>0</active-sessions>
<active-session-valid>0</active-session-valid>
<active-session-pending>0</active-session-pending>
<active-session-invalidated>0</active-session-invalidated>
<active-session-other>0</active-session-other>
<max-sessions>524288</max-sessions>
</flow-session-summary-information>
<flow-session-information xmlns="http://xml.juniper.net/
junos/10.4D0/junos-flow"></flow-session-information>
<flow-session-information xmlns="http://xml.juniper.net/
junos/10.4D0/junos-flow">
<flow-fpc-pic-id> on FPC4 PIC1:</flow-fpc-pic-id>
</flow-session-information>
<flow-session-summary-information xmlns="http://
xml.juniper.net/junos/10.4D0/junos-flow">
<active-unicast-sessions>0</active-unicast-sessions>
<active-multicast-sessions>0</active-multicast-sessions>
<failed-sessions>0</failed-sessions>
<active-sessions>0</active-sessions>
<active-session-valid>0</active-session-valid>
<active-session-pending>0</active-session-pending>
<active-session-invalidated>0</active-session-invalidated>
<active-session-other>0</active-session-other>
<max-sessions>1048576</max-sessions>
</flow-session-summary-information>
<flow-session-information xmlns="http://
xml.juniper.net/junos/10.4D0/junos-flow">
</flow-session-information>
</multi-routing-engine-item>
<multi-routing-engine-item>
<re-name>node1</re-name>
<flow-session-information xmlns="http://xml.juniper.net
/junos/10.4D0/junos-flow">
<flow-fpc-pic-id> on FPC4 PIC0:</flow-fpc-pic-id>
</flow-session-information>
<flow-session-summary-information xmlns="http://
xml.juniper.net/junos/10.4D0/junos-flow">
<active-unicast-sessions>0</active-unicast-sessions>
<active-multicast-sessions>0</active-multicast-sessions>
<failed-sessions>0</failed-sessions>
<active-sessions>0</active-sessions>
<active-session-valid>0</active-session-valid>
<active-session-pending>0</active-session-pending>
<active-session-invalidated>0</active-session-invalidated>
<active-session-other>0</active-session-other>
<max-sessions>524288</max-sessions>
</flow-session-summary-information>
<flow-session-information xmlns="http://
xml.juniper.net/junos/10.4D0/junos-flow">
</flow-session-information>
<flow-session-information xmlns="http://
xml.juniper.net/junos/10.4D0/junos-flow">
<flow-fpc-pic-id> on FPC4 PIC1:</flow-fpc-pic-id>
</flow-session-information>
<flow-session-summary-information xmlns="http://
xml.juniper.net/junos/10.4D0/junos-flow">
<active-unicast-sessions>0</active-unicast-sessions>
<active-multicast-sessions>0</active-multicast-sessions>
<failed-sessions>0</failed-sessions>
<active-sessions>0</active-sessions>
<active-session-valid>0</active-session-valid>
<active-session-pending>0</active-session-pending>
<active-session-invalidated>0</active-session-invalidated>
<active-session-other>0</active-session-other>
<max-sessions>1048576</max-sessions>
</flow-session-summary-information>
<flow-session-information xmlns="http://xml.juniper.
net/junos/10.4D0/junos-flow"></flow-session-information>
</multi-routing-engine-item>
</multi-routing-engine-results>
</rpc-reply>
jnxJsSPUMonitoringMIB
jnxJsSPUMonitoringFPCIndex.16 = 4
jnxJsSPUMonitoringFPCIndex.17 = 4
jnxJsSPUMonitoringFPCIndex.40 = 4
jnxJsSPUMonitoringFPCIndex.41 = 4
jnxJsSPUMonitoringSPUIndex.16 = 0
jnxJsSPUMonitoringSPUIndex.17 = 1
jnxJsSPUMonitoringSPUIndex.40 = 0
jnxJsSPUMonitoringSPUIndex.41 = 1
jnxJsSPUMonitoringCPUUsage.16 = 0
jnxJsSPUMonitoringCPUUsage.17 = 0
jnxJsSPUMonitoringCPUUsage.40 = 0
jnxJsSPUMonitoringCPUUsage.41 = 0
jnxJsSPUMonitoringMemoryUsage.16 = 70
jnxJsSPUMonitoringMemoryUsage.17 = 73
jnxJsSPUMonitoringMemoryUsage.40 = 70
jnxJsSPUMonitoringMemoryUsage.41 = 73
jnxJsSPUMonitoringCurrentFlowSession.16 = 0
jnxJsSPUMonitoringCurrentFlowSession.17 = 0
jnxJsSPUMonitoringCurrentFlowSession.40 = 0
jnxJsSPUMonitoringCurrentFlowSession.41 = 0
jnxJsSPUMonitoringMaxFlowSession.16 = 524288
jnxJsSPUMonitoringMaxFlowSession.17 = 1048576
jnxJsSPUMonitoringMaxFlowSession.40 = 524288
jnxJsSPUMonitoringMaxFlowSession.41 = 1048576
jnxJsSPUMonitoringCurrentCPSession.16 = 0
jnxJsSPUMonitoringCurrentCPSession.17 = 0
jnxJsSPUMonitoringCurrentCPSession.40 = 0
jnxJsSPUMonitoringCurrentCPSession.41 = 0
jnxJsSPUMonitoringMaxCPSession.16 = 2359296
jnxJsSPUMonitoringMaxCPSession.17 = 0
jnxJsSPUMonitoringMaxCPSession.40 = 2359296
jnxJsSPUMonitoringMaxCPSession.41 = 0
jnxJsSPUMonitoringNodeIndex.16 = 0
jnxJsSPUMonitoringNodeIndex.17 = 0
jnxJsSPUMonitoringNodeIndex.40 = 1
jnxJsSPUMonitoringNodeIndex.41 = 1
jnxJsSPUMonitoringNodeDescr.16 = node0
jnxJsSPUMonitoringNodeDescr.17 = node0
jnxJsSPUMonitoringNodeDescr.40 = node1
jnxJsSPUMonitoringNodeDescr.41 = node1
jnxJsSPUMonitoringCurrentTotalSession.0 =
jnxJsSPUMonitoringMaxTotalSession.0 = 1572864
NOTE:
• Junos OS versions prior to Junos OS Release 9.6 only return local node
data for this MIB. To support a chassis cluster, Junos OS Release 9.6 and
later support a jnxJsSPUMonitoringNodeIndex index and a
jnxJsSPUMonitoringNodeDescr field in the table. Therefore, in chassis
cluster mode, Junos OS Release 9.6 and later return SPU monitoring
data of both the primary and secondary nodes.
• SRX Series branch devices have a virtualized data plane across the data cores of the
cluster. Therefore, they are reported as one SPU with an index of 0.
Security Features
Following is a summary of Junos OS XML remote procedure calls (RPCs) and SNMP MIBs
related to security features that are supported on SRX Series devices.
The RPCs and MIBs might not be directly comparable to each other. One might provide
more or less information than the other. Use the following information to determine
which instrumentation to use.
<get-services-ipsec-statistics-information> – JUNIPER-JS-IPSEC-VPN
<get-ike-security-associations> – JUNIPER-IPSEC-FLOW-MONITOR
<get-service-nat-pool-information>
<get-firewall-filter-information>
<get-firewall-information>
<get-firewall-log-information>
<get-firewall-prefix-action-information>
<get-flow-table-statistics-information>
<get-aaa-subscriber-statistics>
<get-aaa-subscriber-table>
<get-idp-application-system-cache>
<get-idp-counter-information>
<get-idp-detail-status-information>
<get-idp-memory-information>
<get-idp-policy-template-information>
<get-idp-predefined-attack-filters>
<get-idp-predefined-attack-groups>
<get-idp-predefined-attacks>
<get-idp-recent-security-package-information>
<get-idp-security-package-information>
<get-idp-ssl-key-information>
<get-idp-ssl-session-cache-information>
<get-idp-status-information>
<get-idp-subscriber-policy-list>
RMON
Junos OS supports the remote monitoring (RMON) MIB (RFC 2819). RMON can be used
to send alerts for MIB variables when upper and lower thresholds are crossed. This can
be used for various MIB variables. Some good examples are interface statistics monitoring
and Routing Engine CPU monitoring.
The following configuration snippet shows RMON configuration for monitoring a Routing
Engine on node 0 of a cluster and for monitoring octets out of interface index 2000:
rmon {
alarm 100 {
interval 5;
variable jnxOperatingCPU.9.1.0.0;
sample-type absolute-value;
request-type get-request;
rising-threshold 90;
falling-threshold 80;
rising-event-index 100;
falling-event-index 100;
}
event 100 {
type log-and-trap;
community petblr;
}
alarm 10 {
interval 60;
variable ifHCInOctets.2000;
sample-type delta-value;
request-type get-request;
startup-alarm rising-alarm;
rising-threshold 100000;
falling-threshold 0;
rising-event-index 10;
falling-event-index 10;
}
event 10 {
type log-and-trap;
community test;
}
}
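Once committed, the configured alarm and event state can be inspected from the CLI;
a quick sketch (output omitted):

user@host> show snmp rmon alarms
user@host> show snmp rmon events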
You can use SNMP traps and system log messages for fault monitoring of SRX Series
chassis clusters.
SNMP Traps
Table 9 on page 45 lists the SNMP traps supported on SRX Series devices. Note that only
the primary node sends SNMP traps. For details of each trap, see the Network Management
Administration Guide for Routing Devices, MIB Reference for SRX1400, SRX3400, and
SRX3600 Services Gateways, and MIB Reference for SRX5600 and SRX5800 Services
Gateways.
jnxPingIngressStddevThresholdExceeded
Trap OID: 1.3.6.1.4.1.2636.4.9.0.8
Category: Remote operations
Supported on: All Junos OS devices except EX Series and high-end SRX Series devices
Varbinds:
1. pingCtlTargetAddressType - .1.3.6.1.2.1.80.1.2.1.3
2. pingCtlTargetAddress - .1.3.6.1.2.1.80.1.2.1.4
3. pingResultsOperStatus - .1.3.6.1.2.1.80.1.3.1.1
4. pingResultsIpTargetAddressType - .1.3.6.1.2.1.80.1.3.1.2
5. pingResultsIpTargetAddress - .1.3.6.1.2.1.80.1.3.1.3
6. jnxPingResultsMinIngressUs - .1.3.6.1.4.1.2636.3.7.1.3.1.13
7. jnxPingResultsMaxIngressUs - .1.3.6.1.4.1.2636.3.7.1.3.1.14
8. jnxPingResultsAvgIngressUs - .1.3.6.1.4.1.2636.3.7.1.3.1.15
9. pingResultsProbeResponses - .1.3.6.1.2.1.80.1.3.1.7
10. pingResultsSentProbes - .1.3.6.1.2.1.80.1.3.1.8
11. pingResultsRttSumOfSquares - .1.3.6.1.2.1.80.1.3.1.9
12. pingResultsLastGoodProbe - .1.3.6.1.2.1.80.1.3.1.10
13. jnxPingResultsStddevIngressUs - .1.3.6.1.4.1.2636.3.7.1.3.1.16
14. jnxPingCtlIngressStddevThreshold - .1.3.6.1.4.1.2636.3.7.1.2.1.14
NOTE: If the fxp0 interface fails on the backup Routing Engine, no traps are
sent. Use the system logging (syslog) feature to monitor the secondary node's
fxp0 interface by logging a link down message.
jnxSyslog Trap
The following sample shows the jnxSyslog trap configuration for a ui_commit_progress
(configuration commit in progress) event.
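A minimal sketch of one way to generate this trap, assuming an event policy named commit-progress-trap; an event policy whose then clause includes the raise-trap action generates a jnxSyslog trap for the matched event:
event-options {
    policy commit-progress-trap {
        events ui_commit_progress;
        then {
            raise-trap;
        }
    }
}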
A switchover can be detected by means of the failover trap (generated for both manual
and automatic failovers) or by polling the chassis cluster status.
Failover Trap
The trap message can help you troubleshoot failovers. It contains the following
information:
A cluster node can be in any one of these states at any given instant: hold, primary,
secondary-hold, secondary, ineligible, and disabled. Traps are generated for the following
state transitions (only a transition from the hold state does not trigger a trap):
A transition can be triggered due to events such as interface monitoring, SPU monitoring,
failures, and manual failovers.
Event triggering is applicable for all redundancy groups including RG0, RG1, and so on.
All redundancy group failover events trigger the same trap, and the actual group can be
identified by examining the jnxJsChClusterSwitchoverInfoRedundancyGroup parameter
in the trap varbind.
The trap is forwarded over the control link if the outgoing interface is on a node different
from the node of the Routing Engine that generates the trap. The following are sample
traps for manual and automatic failovers. Note that the traps are generated by the current
primary node before the failover occurs.
NOTE: A failover in any redundancy group (RG) other than redundancy group
0 does not make the other node the primary node.
In the following example, node 0 is the primary node in RG0, while it is the secondary
node in RG1. Node 0 remains the primary node for the cluster. Only when the failover
happens on node 1 in RG0 does node 1 become the primary node for the cluster. So even
if a switchover happens on other groups, the primary node should be queried for all
statistics and data as previously mentioned.
user@host> show chassis cluster status
Cluster ID: 12
Node            Priority    Status            Preempt    Manual failover

Redundancy group: 0 , Failover count: 3
    node0       255         primary           no         yes
    node1       1           secondary-hold    no         yes

Redundancy group: 1 , Failover count: 4
    node0       100         secondary         no         yes
    node1       255         primary           no         yes
• After a failover, LinkUp traps are sent for all the interfaces that come up on the new
primary node.
Managing and Monitoring a Chassis Cluster Using Operational and Event Scripts
Junos OS operation (op) scripts automate network and router management and
troubleshooting. Op scripts can perform any function available through the remote
procedure calls (RPCs) supported by either of the two application programming interfaces
(APIs): the Junos OS Extensible Markup Language (XML) API and the Junos OS XML
Management Protocol API. Scripts are written in the Extensible Stylesheet Language
Transformations (XSLT) or Stylesheet Language Alternative Syntax (SLAX) scripting
languages.
For example, an op script can reconfigure the routing platform to avoid or work around
known problems in the Junos OS software.
Junos OS event scripts automate network and router management and troubleshooting.
These are operational scripts triggered by event policies.
An example of a jnxEvent trap configuration follows. In the example, an event policy
invokes the ev-syslog-trap.slax event script to raise a jnxEventTrap whenever an alarm
is triggered on the device. The enclosing event-options stanza shown here is a
representative sketch; the policy name and matched event are assumptions:
event-options {
    policy raise-trap-on-alarm {
        events SYSTEM;
        then {
            event-script ev-syslog-trap.slax {
                arguments {
                    event SYSTEM;
                    message "{$$.message}";
                }
            }
        }
    }
}
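For the policy to run, ev-syslog-trap.slax must reside in /var/db/scripts/event on the device and be enabled under the event-options hierarchy:
set event-options event-script file ev-syslog-trap.slax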
The following trap is sent when a link on the device is brought down, which sets an alarm.
The data in these MIB tables can be populated using hidden CLI commands, which are
also accessible from an op script through the jcs:invoke remote procedure call (RPC)
API. One example is using the jnxUtil MIB to record power readings, which are not
otherwise available through SNMP: with a simple event script, you can read the power
output every minute and populate the jnxUtil MIB. Similarly, you can write op scripts or
event scripts that populate a variety of data of different types. For sample scripts and
usage of the utility MIB, see Utility MIB Examples.
For example, use the show security flow session command to:
• Show the security attributes associated with a flow, for example, the policies that
apply to traffic belonging to that flow.
• Display the session timeout value, when the session became active, how long it has
been active, and if there is active traffic on the session.
For detailed information about this command, see the Junos OS CLI Reference.
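The following invocations are representative; the filter values shown are arbitrary examples:
user@host> show security flow session
user@host> show security flow session summary
user@host> show security flow session destination-port 443 protocol tcp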
Session information can also be logged if a related policy configuration includes the
logging option. Session logging infrastructure logs the session log messages when a
session is created, closed, denied, or rejected. In the SRX3000 and SRX5000 lines, the
log messages are streamed directly to an external syslog server/repository, bypassing
the Routing Engine. The SRX Series devices support both traditional and structured
syslog. The SRX3000 and SRX5000 lines support 1000 log messages per second, and
the management station must be equipped to handle this volume. See the Junos OS
Security Configuration Guide for configuration examples and details about these logs.
The logs are available through the management interface of both the primary and
secondary nodes. Ensure that the external server receiving these log messages is reachable
by both nodes.
The high-end SRX Series devices have a distributed processing architecture that processes
traffic as well as generates log messages. In the SRX Series devices, the firewall processes
the traffic sessions on each of the SPUs in the chassis. After each session is created, it
is processed by the same SPU in the chassis, which is also the SPU that generates the
log message.
The standard method of generating log messages is to have each SPU generate the
message as a UDP syslog message and send it directly out the data plane to the syslog
server. The SRX Series devices can log extremely high rates of traffic. They can log up to
750 MB per second of log messages, which surpasses the limits of the control plane.
Therefore, we do not recommend logging messages to the control plane, except under
certain circumstances.
For SRX Series branch devices running Junos OS Release 9.6 and later and high-end SRX
Series devices running Junos OS Release 10.0 and later, the devices can log messages
to the control plane at a limited maximum rate (1000 log messages per second) rather
than logging to the data plane. If the log messages are sent through the data plane using
syslog, a syslog collector—such as the Juniper Security Threat Response Manager
(STRM)—must be used to collect the logs for viewing, reporting, and alerting. In SRX
Series branch devices running Junos OS Release 9.6 and later and high-end SRX Series
devices running Junos OS Release 10.0 and later, the devices can only send log messages
to the data plane or the control plane, but not to both at the same time.
There are two supported formats for system log messages: structured and standard.
Structured syslog is generally preferred because it prefixes each field with its name. For
instance, the source IP address field appears as source-address="10.102.110.52" rather
than as the bare IP address 10.102.110.52. In the following command, the format sd-syslog
option configures structured syslog, whereas the format syslog option configures
standard syslog.
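For example, to select structured syslog:
user@host# set security log format sd-syslog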
The syslog source address can be any arbitrary IP address. It does not have to be an
IP address that is assigned to the device. Rather, this IP address is used on the syslog
collector to identify the syslog source. The best practice is to configure the source
address as the IP address of the interface that the traffic is sent out on.
The system log stream identifies the destination IP address that the syslog messages
are sent to. On the high-end SRX Series devices running Junos OS Release 9.5 and
later, up to two syslog streams can be defined (all messages are sent to the syslog
streams). Note that you must give a name to the stream. This name is arbitrary, but
it is a best practice to use the name of the syslog collector for easy identification in
the configuration.
You can also define the UDP port to which the log messages are sent. By default, log
messages are sent to UDP port 1514.
To configure the system log server IP address and specify the UDP port number:
user@host# set security log stream name host ip-address port port
Configuring High-End SRX Series Device Data Plane Logging to the Control Plane
If the management station cannot receive log messages from the data plane, configure
the device to send messages through the management connection. When logging to the
control plane, the SRX Series device can also send these syslog messages out the fxp0
interface. If event logging is configured, all log messages from the data plane go to the
control plane.
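Event mode logging is enabled with a single statement:
user@host# set security log mode event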
It might be necessary to rate-limit the event log messages from the data plane to the
control plane due to limited resources on the control plane to process high volumes
of log messages. This is especially applicable if the control plane is busy processing
dynamic routing protocols such as BGP or large-scale routing implementations. The
following command rate-limits the log messages so that they do not overwhelm the
control plane. Log messages that are rate-limited are discarded. A best practice for
high-end SRX Series devices is to log no more than 1000 log messages per second
to the control plane.
user@host# set security log mode event event-rate rate
Here, rate is the maximum number of log messages per second.
Configuring SRX Series Branch Devices to Send Traffic Log Messages Through the Data Plane
The SRX Series branch device traffic log messages can be sent through the data plane
security logs in stream mode. Note that this is possible only using stream mode. The
following is a sample configuration and log output.
Configuration
set security log mode stream
set security log format sd-syslog
set security log source-address 10.204.225.164
set security log stream vmware-server severity debug
set security log stream vmware-server host 10.204.225.218
In this case, the SRX Series device traffic log messages are sent to an external syslog
server through the data plane. This ensures that the Routing Engine is not a bottleneck
for logging and is not impacted during excessive logging. In addition to traffic log
messages, the control plane log messages sent to the Routing Engine are written to a
file in flash memory. The following is a sample configuration to enable this type of logging.
Configuration
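A minimal sketch, assuming the control plane log messages are written to the standard messages file in flash (the file name and the any/any facility and severity are placeholders to adapt):
set system syslog file messages any any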
Syslog (self logs)—This configuration can be customized as required for self logging.
Configuration
To configure the syslog server to receive log messages from the SRX Series device,
define which syslog hosts receive the streams along with which facilities and severities
to send. Note that multiple facilities and priorities can be configured to send multiple
log message types. To send all message types, specify the any option for the facility
and severity.
The source IP address of the syslog stream is needed because the SRX Series device
can send the syslog message with any address. The same IP address should be used
regardless of which interface is selected.
Sometimes an administrator might want to filter the log messages that are sent to
the syslog server. Log filtering can be specified with the match statement. In this
example, only logs defined in the match statement regular expression (IDP) are sent
to the syslog server.
user@host# set system syslog host syslog server facility severity match IDP
In this configuration:
Use the match statement regular expression to send traffic log messages only. These
log messages are sent directly to the syslog server without writing them to flash memory.
This configuration does not send log messages normally sent to the Routing Engine to
the syslog server. However, it is possible to create a separate file and write control plane
log messages to a file on the Routing Engine as shown.
Configuration
set system syslog host 10.204.225.218 any any
set system syslog host 10.204.225.218 match RT_FLOW_SESSION
set system syslog file messages any any
Sample log messages:
The following configuration sends both traffic and control log messages to the syslog
server, but might overwhelm the syslog server and cause cluster instability. We do not
recommend using this configuration.
Configuration
set system syslog host 10.204.225.218 any any
set system syslog file messages any any
Security log event mode is the default mode on SRX Series branch devices, but it is not
advisable for these devices; we recommend changing the default behavior to stream mode.
NOTE: Extensive logging on local flash can have an undesired impact on the
device such as instability on the control plane.
Sending Data Plane Log Messages with an IP Address in the Same Subnet as the fxp0 Interface
You might want to deploy fault management and performance management applications
and systems such as Juniper Networks Security Threat Response Manager (STRM).
STRM collects log messages through the management network and is connected through
the fxp0 interface. The fault management and performance management applications
manage the SRX Series device through the fxp0 interface, but the SRX Series device also
needs to send the data plane log messages to STRM on the same network. For instance,
if the rate of log messages will be greater than 1000 log messages per second, then
logging to the control plane is not supported. The issue is that two interfaces in the
same virtual router cannot be in the same subnet, and the fxp0 interface cannot be
moved out of the default routing instance (inet.0).
To work around these issues, place a data plane interface in a virtual router other than
the default virtual router inet.0, and place a route in the inet.0 routing table to route traffic
to STRM through that virtual router. The following configuration example shows how to
do this.
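In the sketch below, the routing instance name Logging and the interface ge-0/0/7 match the note that follows; the IP addresses (an fxp0 subnet of 10.3.3.0/24 with the STRM collector at 10.3.3.3) are assumptions for illustration:
set interfaces ge-0/0/7 unit 0 family inet address 10.3.3.2/24
set routing-instances Logging instance-type virtual-router
set routing-instances Logging interface ge-0/0/7.0
set routing-options static route 10.3.3.3/32 next-table Logging.inet.0
set security log stream STRM host 10.3.3.3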
In this example, the data plane interface ge-0/0/7 is placed in the virtual router Logging,
and a static route in inet.0 directs log traffic to STRM through that virtual router.
NOTE: AppA is now able to manage the ge-0/0/7 interface since AppA
manages the device using the fxp0 interface in the default routing instance.
To do this, AppA must use the Logging@<snmp-community-string-name>
community string format to access the ge-0/0/7 interface data using SNMP.
AppTrack, an application tracking tool, provides statistics for analyzing bandwidth usage
of your network. When enabled, AppTrack collects byte, packet, and duration statistics
for application flows in the specified zone. By default, when each session closes, AppTrack
generates a message that provides the byte and packet counts and the duration of the
session, and sends it to the host device. An AppTrack message is similar to session log
messages and uses syslog or structured syslog formats. The message also includes an
application field for the session. If AppTrack identifies a custom-defined application and
returns an appropriate name, the custom application name is included in the log message.
Management stations can subscribe to receive log messages for application tracking.
An SRX Series device can support a high volume of these log messages (minimum 1000
log messages per second). Management stations should be able to handle this volume.
The log messages are available through the management interface of both the primary
and secondary nodes. Additional care should be taken that the external server receiving
these log messages is reachable by both nodes. See the Junos OS Security Configuration
Guide for information about configuring application identification and tracking.
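AppTrack is enabled per security zone. A minimal sketch, assuming a zone named trust:
user@host# set security zones security-zone trust application-tracking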
This topic provides details related to managing SRX Series chassis clusters using remote
procedure calls (RPCs).
<rpc><get-chassis-inventory/></rpc>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:junos="http://xml.juniper.net/junos/10.4I0/junos">
<multi-routing-engine-results>
<multi-routing-engine-item>
<re-name>node0</re-name>
<chassis-inventory xmlns="http://xml.juniper.net/junos/10.4I0/junos-chassis">
<chassis junos:style="inventory">
<name>Chassis</name>
<serial-number>JN1146479AGB</serial-number>
<description>SRX 5600</description>
<chassis-module>
<name>Midplane</name>
<version>REV 01</version>
<part-number>710-024804</part-number>
<serial-number>ABAA1033</serial-number>
<description>SRX 5600 Midplane</description>
<model-number>SRX5600-MP-A</model-number>
</chassis-module>
<chassis-module>
<name>FPM Board</name>
<version>REV 01</version>
<part-number>710-024631</part-number>
<serial-number>XD8881</serial-number>
<description>Front Panel Display</description>
<model-number>SRX5600-CRAFT-A</model-number>
</chassis-module>
<chassis-module>
<name>PEM 0</name>
<version>Rev 03</version>
<part-number>740-023485</part-number>
<serial-number>QCS0852H02U</serial-number>
<description>PS 1.2-1.7kW; 100-240V AC in</description>
<model-number>SRX5600-PWR-AC-A</model-number>
</chassis-module>
<chassis-module>
<name>PEM 1</name>
<version>Rev 03</version>
<part-number>740-023485</part-number>
<serial-number>QCS0852H01P</serial-number>
<description>PS 1.2-1.7kW; 100-240V AC in</description>
<model-number>SRX5600-PWR-AC-A</model-number>
</chassis-module>
<chassis-module>
<name>Routing Engine 0</name>
<version>REV 03</version>
<part-number>740-023530</part-number>
<serial-number>9009008946</serial-number>
<description>RE-S-1300</description>
<model-number>SRX5K-RE-13-20-A</model-number>
</chassis-module>
<chassis-module>
<name>CB 0</name>
<version>REV 03</version>
<part-number>710-024802</part-number>
<serial-number>WX5291</serial-number>
<description>SRX5k SCB</description>
<model-number>SRX5K-SCB-A</model-number>
</chassis-module>
<chassis-module>
<name>FPC 4</name>
<version>REV 01</version>
<part-number>750-023996</part-number>
<serial-number>WW0754</serial-number>
<description>SRX5k SPC</description>
<model-number>SRX5K-SPC-2-10-40</model-number>
<chassis-sub-module>
<name>CPU</name>
<version>REV 02</version>
<part-number>710-024633</part-number>
<serial-number>WY0854</serial-number>
<description>SRX5k DPC PMB</description>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 0</name>
<part-number>BUILTIN</part-number>
<serial-number>BUILTIN</serial-number>
<description>SPU Cp-Flow</description>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 1</name>
<part-number>BUILTIN</part-number>
<serial-number>BUILTIN</serial-number>
<description>SPU Flow</description>
</chassis-sub-module>
</chassis-module>
<chassis-module>
<name>FPC 5</name>
<version>REV 07</version>
<part-number>750-027945</part-number>
<serial-number>XH9092</serial-number>
<description>SRX5k FIOC</description>
<chassis-sub-module>
<name>CPU</name>
<version>REV 03</version>
<part-number>710-024633</part-number>
<serial-number>XH8755</serial-number>
<description>SRX5k DPC PMB</description>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 0</name>
<version>REV 05</version>
<part-number>750-021378</part-number>
<serial-number>XH3698</serial-number>
<description>4x 10GE XFP</description>
<model-number>SRX-IOC-4XGE-XFP</model-number>
<chassis-sub-sub-module>
<name>Xcvr 0</name>
<version>REV 02</version>
<part-number>740-011571</part-number>
<serial-number>C850XJ02G</serial-number>
<description>XFP-10G-SR</description>
</chassis-sub-sub-module>
<chassis-sub-sub-module>
<name>Xcvr 1</name>
<version>REV 02</version>
<part-number>740-011571</part-number>
<serial-number>C850XJ01T</serial-number>
<description>XFP-10G-SR</description>
</chassis-sub-sub-module>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 1</name>
<version>REV 03</version>
<part-number>750-027491</part-number>
<serial-number>XH8525</serial-number>
<description>16x 1GE TX</description>
</chassis-sub-module>
</chassis-module>
<chassis-module>
<name>Fan Tray</name>
<description>Left Fan Tray</description>
<model-number>SRX5600-FAN</model-number>
</chassis-module>
</chassis>
</chassis-inventory>
</multi-routing-engine-item>
<multi-routing-engine-item>
<re-name>node1</re-name>
<chassis-inventory xmlns="http://xml.juniper.net/junos/10.4I0/junos-chassis">
<chassis junos:style="inventory">
<name>Chassis</name>
<serial-number>JN11471C4AGB</serial-number>
<description>SRX 5600</description>
<chassis-module>
<name>Midplane</name>
<version>REV 01</version>
<part-number>710-024804</part-number>
<serial-number>ABAA4537</serial-number>
<description>SRX 5600 Midplane</description>
<model-number>SRX5600-MP-A</model-number>
</chassis-module>
<chassis-module>
<name>FPM Board</name>
<version>REV 01</version>
<part-number>710-024631</part-number>
<serial-number>XD8876</serial-number>
<description>Front Panel Display</description>
<model-number>SRX5600-CRAFT-A</model-number>
</chassis-module>
<chassis-module>
<name>PEM 0</name>
<version>Rev 03</version>
<part-number>740-023485</part-number>
<serial-number>QCS0901H015</serial-number>
<description>PS 1.2-1.7kW; 100-240V AC in</description>
<model-number>SRX5600-PWR-AC-A</model-number>
</chassis-module>
<chassis-module>
<name>PEM 1</name>
<version>Rev 03</version>
<part-number>740-023485</part-number>
<serial-number>QCS0901H011</serial-number>
<description>PS 1.2-1.7kW; 100-240V AC in</description>
<model-number>SRX5600-PWR-AC-A</model-number>
</chassis-module>
<chassis-module>
<name>Routing Engine 0</name>
<version>REV 03</version>
<part-number>740-023530</part-number>
<serial-number>9009020065</serial-number>
<description>RE-S-1300</description>
<model-number>SRX5K-RE-13-20-A</model-number>
</chassis-module>
<chassis-module>
<name>CB 0</name>
<version>REV 03</version>
<part-number>710-024802</part-number>
<serial-number>XH7224</serial-number>
<description>SRX5k SCB</description>
<model-number>SRX5K-SCB-A</model-number>
</chassis-module>
<chassis-module>
<name>FPC 4</name>
<version>REV 01</version>
<part-number>750-023996</part-number>
<serial-number>WY2679</serial-number>
<description>SRX5k SPC</description>
<model-number>SRX5K-SPC-2-10-40</model-number>
<chassis-sub-module>
<name>CPU</name>
<version>REV 02</version>
<part-number>710-024633</part-number>
<serial-number>WY3712</serial-number>
<description>SRX5k DPC PMB</description>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 0</name>
<part-number>BUILTIN</part-number>
<serial-number>BUILTIN</serial-number>
<description>SPU Cp-Flow</description>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 1</name>
<part-number>BUILTIN</part-number>
<serial-number>BUILTIN</serial-number>
<description>SPU Flow</description>
</chassis-sub-module>
</chassis-module>
<chassis-module>
<name>FPC 5</name>
<version>REV 07</version>
<part-number>750-027945</part-number>
<serial-number>XH9087</serial-number>
<description>SRX5k FIOC</description>
<chassis-sub-module>
<name>CPU</name>
<version>REV 03</version>
<part-number>710-024633</part-number>
<serial-number>XH8765</serial-number>
<description>SRX5k DPC PMB</description>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 0</name>
<version>REV 05</version>
<part-number>750-021378</part-number>
<serial-number>XH3692</serial-number>
<description>4x 10GE XFP</description>
<model-number>SRX-IOC-4XGE-XFP</model-number>
<chassis-sub-sub-module>
<name>Xcvr 0</name>
<version>REV 02</version>
<part-number>740-011571</part-number>
<serial-number>C850XJ05M</serial-number>
<description>XFP-10G-SR</description>
</chassis-sub-sub-module>
<chassis-sub-sub-module>
<name>Xcvr 1</name>
<version>REV 02</version>
<part-number>740-011571</part-number>
<serial-number>C850XJ05F</serial-number>
<description>XFP-10G-SR</description>
</chassis-sub-sub-module>
</chassis-sub-module>
<chassis-sub-module>
<name>PIC 1</name>
<version>REV 03</version>
<part-number>750-027491</part-number>
<serial-number>XH8521</serial-number>
<description>16x 1GE TX</description>
</chassis-sub-module>
</chassis-module>
<chassis-module>
<name>Fan Tray</name>
<description>Left Fan Tray</description>
<model-number>SRX5600-FAN</model-number>
</chassis-module>
</chassis>
</chassis-inventory>
</multi-routing-engine-item>
</multi-routing-engine-results>
</rpc-reply>
This topic provides details related to managing SRX Series chassis clusters using SNMP.
SNMP MIB Walk of the jnxOperating MIB Table with Secondary Node Details
user@host> show snmp mib walk jnxOperatingDescr | match node1
jnxOperatingDescr.1.2.0.0 = node1 midplane
jnxOperatingDescr.2.4.0.0 = node1 PEM 1
jnxOperatingDescr.4.2.0.0 = node1 Fan Tray
jnxOperatingDescr.4.2.1.0 = node1 Fan 1
jnxOperatingDescr.4.2.2.0 = node1 Fan 2
jnxOperatingDescr.4.2.3.0 = node1 Fan 3
jnxOperatingDescr.4.2.4.0 = node1 Fan 4
jnxOperatingDescr.7.9.0.0 = node1 FPC: SRX3k SFB 12GE @ 0/*/*
jnxOperatingDescr.7.10.0.0 = node1 FPC: SRX3k 16xGE TX @ 1/*/*
jnxOperatingDescr.7.11.0.0 = node1 FPC: SRX3k 2x10GE XFP @ 2/*/*
jnxOperatingDescr.7.14.0.0 = node1 FPC: SRX3k SPC @ 5/*/*
jnxOperatingDescr.7.15.0.0 = node1 FPC: SRX3k NPC @ 6/*/*
jnxOperatingDescr.8.9.1.0 = node1 PIC: 8x 1GE-TX 4x 1GE-SFP @ 0/0/*
jnxOperatingDescr.8.10.1.0 = node1 PIC: 16x 1GE-TX @ 1/0/*
jnxOperatingDescr.8.11.1.0 = node1 PIC: 2x 10GE-XFP @ 2/0/*
jnxOperatingDescr.8.14.1.0 = node1 PIC: SPU Cp-Flow @ 5/0/*
jnxOperatingDescr.8.15.1.0 = node1 PIC: NPC PIC @ 6/0/*
jnxOperatingDescr.9.3.0.0 = node1 Routing Engine 0
jnxOperatingDescr.10.2.1.0 = node1 FPM Board
jnxOperatingDescr.12.3.0.0 = node1 CB 0
The following walk shows the operating state of each component; a jnxOperatingState
value of 2 indicates that the component is running:
root@SRX3400-1> show snmp mib walk jnxOperatingState
jnxOperatingState.1.1.0.0 = 2
jnxOperatingState.1.2.0.0 = 2
jnxOperatingState.2.2.0.0 = 2
jnxOperatingState.2.4.0.0 = 2
jnxOperatingState.4.1.0.0 = 2
jnxOperatingState.4.1.1.0 = 2
jnxOperatingState.4.1.2.0 = 2
jnxOperatingState.4.1.3.0 = 2
jnxOperatingState.4.1.4.0 = 2
jnxOperatingState.4.2.0.0 = 2
jnxOperatingState.4.2.1.0 = 2
jnxOperatingState.4.2.2.0 = 2
jnxOperatingState.4.2.3.0 = 2
jnxOperatingState.4.2.4.0 = 2
jnxOperatingState.7.1.0.0 = 2
jnxOperatingState.7.2.0.0 = 2
jnxOperatingState.7.3.0.0 = 2
jnxOperatingState.7.6.0.0 = 2
jnxOperatingState.7.7.0.0 = 2
jnxOperatingState.7.9.0.0 = 2
jnxOperatingState.7.10.0.0 = 2
jnxOperatingState.7.11.0.0 = 2
jnxOperatingState.7.14.0.0 = 2
jnxOperatingState.7.15.0.0 = 2
jnxOperatingState.8.1.1.0 = 2
jnxOperatingState.8.2.1.0 = 2
jnxOperatingState.8.3.1.0 = 2
jnxOperatingState.8.6.1.0 = 2
jnxOperatingState.8.7.1.0 = 2
jnxOperatingState.8.9.1.0 = 2
jnxOperatingState.8.10.1.0 = 2
jnxOperatingState.8.11.1.0 = 2
jnxOperatingState.8.14.1.0 = 2
jnxOperatingState.8.15.1.0 = 2
jnxOperatingState.9.1.0.0 = 2
jnxOperatingState.9.3.0.0 = 2
jnxOperatingState.10.1.1.0 = 2
jnxOperatingState.10.2.1.0 = 2
jnxOperatingState.12.1.0.0 = 2
jnxOperatingState.12.3.0.0 = 2
The following sample shows an SNMP MIB walk of the interface counters for the reth0.0
interface (ifIndex 537):
{primary:node0}
ifName.537 = reth0.0
ifInMulticastPkts.537 = 0
ifInBroadcastPkts.537 = 0
ifOutMulticastPkts.537 = 0
ifOutBroadcastPkts.537 = 0
ifHCInOctets.537 = 17667077627
ifHCInUcastPkts.537 = 17183691748
ifHCInMulticastPkts.537 = 0
ifHCInBroadcastPkts.537 = 0
ifHCOutOctets.537 = 9790270
ifHCOutUcastPkts.537 = 227798
ifHCOutMulticastPkts.537 = 0
ifHCOutBroadcastPkts.537 = 0
ifLinkUpDownTrapEnable.537 = 1
ifHighSpeed.537 = 10000
ifPromiscuousMode.537 = 2
ifConnectorPresent.537 = 2
ifAlias.537
ifCounterDiscontinuityTime.537 = 0
The following is the ev-syslog-trap.slax event script that generates the jnxEventTrap
SNMP trap.
version 1.0;
ns junos = "http://xml.juniper.net/junos/*/junos";
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";
param $event;
param $message;
match / {
/*
* The trapm utility wants the following characters in the value to be escaped:
* '[', ']', ' ', '=', and ','
*/
var $event-escaped = {
call escape-string($text = $event, $vec = '[] =,');
}
var $message-escaped = {
call escape-string($text = $message, $vec = '[] =,');
}
<op-script-results> {
var $rpc = <request-snmp-spoof-trap> {
<trap> "jnxEventTrap";
<variable-bindings> "jnxEventTrapDescr[0]='Event-Trap' , "
_ "jnxEventAvAttribute[1]='event' , "
_ "jnxEventAvValue[1]='" _ $event-escaped _ "' , "
_ "jnxEventAvAttribute[2]='message' , "
if (jcs:empty($vec)) {
expr $text;
} else {
var $index = 1;
var $from = substring($vec, $index, 1);
var $changed-value = {
call replace-string($text, $from) {
with $to = {
expr "\\";
expr $from;
}
}
}
call escape-string($text = $changed-value, $vec = substring($vec, $index + 1));
}
}
/* Replace every occurrence of $from in $text with $to. */
template replace-string ($text, $from, $to) {
if (contains($text, $from)) {
var $before = substring-before($text, $from);
var $after = substring-after($text, $from);
expr $before;
expr $to;
call replace-string($text = $after, $from, $to);
} else {
expr $text;
}
}
This topic presents examples of show command output and SNMP MIB walk results
using the utility MIB for power readings on a Junos OS device.
user@host> show chassis environment pem
PEM 0 status:
  State                     Online
  Temperature               OK
  AC Input:                 OK
  DC Output        Voltage   Current   Power   Load
                   50        12        600     35
PEM 1 status:
  State                     Online
  Temperature               OK
  AC Input:                 OK
  DC Output        Voltage   Current   Power   Load
                   50        13        650     38
PEM 2 status:
  State                     Present
PEM 3 status:
  State                     Present
In the following example, the index is PEM<pem-number><type-of-reading>; for example,
PEM0dc-current is the DC current reading of PEM 0.
SNMP MIB Walk Results for the jnx-utility MIB Populated with Power Readings
user@host> show snmp mib walk jnxUtil ascii
jnxUtilStringValue."PEM0dc-current" = 12
jnxUtilStringValue."PEM0dc-load" = 35
jnxUtilStringValue."PEM0dc-power" = 600
jnxUtilStringValue."PEM0dc-voltage" = 50
jnxUtilStringValue."PEM1dc-current" = 13
jnxUtilStringValue."PEM1dc-load" = 38
jnxUtilStringValue."PEM1dc-power" = 650
jnxUtilStringValue."PEM1dc-voltage" = 50
jnxUtilStringTime."PEM0dc-current" = 07 d9 09 15 0a 10 2d 00 2b 00 00
jnxUtilStringTime."PEM0dc-load" = 07 d9 09 15 0a 10 2d 00 2b 00 00
jnxUtilStringTime."PEM0dc-power" = 07 d9 09 15 0a 10 2d 00 2b 00 00
jnxUtilStringTime."PEM0dc-voltage" = 07 d9 09 15 0a 10 2d 00 2b 00 00
jnxUtilStringTime."PEM1dc-current" = 07 d9 09 15 0a 10 2d 00 2b 00 00
jnxUtilStringTime."PEM1dc-load" = 07 d9 09 15 0a 10 2d 00 2b 00 00
jnxUtilStringTime."PEM1dc-power" = 07 d9 09 15 0a 10 2d 00 2b 00 00
jnxUtilStringTime."PEM1dc-voltage" = 07 d9 09 15 0a 10 2d 00 2b 00 00
Sample Script
version 1.0;
ns junos = "http://xml.juniper.net/junos/*/junos";
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";
ns ext = "http://xmlsoft.org/XSLT/namespace";
import "../import/junos.xsl";
match / {
<op-script-results> {
/* Retrieve PEM environment data (equivalent to "show chassis environment pem") */
var $command="get-environment-pem-information";
var $pem = jcs:invoke($command);
/* Record each DC reading of every PEM in the utility MIB */
for-each ($pem/environment-component-item)
{
var $pemslot = substring-after(name, " ");
for-each (./dc-information/dc-detail/*)
{
var $info=name();
var $valueofinfo = .;
call snmp_set($instance = "PEM" _ $pemslot _ $info, $value = $valueofinfo);
}
}
}
}
template snmp_set($instance, $value = "0", $type = "string" ) {
/* Sketch: populate the utility MIB using the request-snmp-utility-mib-set RPC */
var $rpc = <request-snmp-utility-mib-set> {
<instance> $instance;
<object-type> $type;
<object-value> $value;
}
var $res = jcs:invoke($rpc);
}
The following event-options configuration generates a 1-min event every 60 seconds
and runs the power.slax event script each time:
generate-event {
1-min time-interval 60;
}
policy powerUtil {
events 1-min;
then {
event-script power.slax;
}
}
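To activate the script, place it in /var/db/scripts/event on the device and enable it under the event-options hierarchy:
set event-options event-script file power.slax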
}