Imagine you have a network, whether it's a LAN or a vast enterprise-level network spread across different locations. Now, picture yourself wanting to monitor and analyze the data flow within that network. That's where network telemetry comes into play.
Network telemetry is a group of techniques that help you better understand what's happening within a network. It's like watching the network's pulse to keep track of its health and performance.
Read on to learn more about the network telemetry landscape.
A subset of telemetry, network telemetry is the collection, measurement and analysis of data related to the behavior and performance of a network. It involves gathering information about routers, switches, servers and applications to gain insights into how they function and how data moves through them.
To achieve this, network telemetry employs different methods. One common approach is network monitoring tools that capture and analyze traffic data. These tools provide information about network bandwidth, latency, packet loss, and other performance metrics.
Telemetry also relies on protocols like SNMP (Simple Network Management Protocol) and NetFlow that enable data collection from routers, switches and other network devices. This data can then be processed and visualized to reveal how the network is behaving.
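For instance, here's a minimal sketch of an SNMP pull using the open-source pysnmp library. The device address, community string and interface OID are placeholders, and a production collector would poll many objects on a schedule; NetFlow or streaming telemetry collection would look quite different.

```python
# Minimal SNMP GET sketch using pysnmp (pip install pysnmp).
# The device IP, "public" community string and OID below are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),                     # SNMPv2c
        UdpTransportTarget(("192.0.2.1", 161)),                 # device IP and SNMP port
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.10.1")),   # IF-MIB ifInOctets.1
    )
)

if error_indication:
    print(f"SNMP error: {error_indication}")
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")  # e.g. bytes received on interface 1
```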
With network telemetry, you can detect and address network bottlenecks, security threats or anomalies that might impact the network's efficiency. It’ll help you make informed decisions, optimize network resources, and ensure a smooth and reliable network experience for users.
(It’s important to note the differences between monitoring, observability & telemetry.)
Machine learning helps analyze network data and automate network operations. But no single data source or technique can meet all the needs of telemetry data, so you should combine multiple sources and approaches.
A telemetry framework helps organize these sources and integrate the different approaches, making it easier to combine data for different applications. It also simplifies interfaces and makes the overall system more flexible.
The network telemetry framework has four modules. Each module has three components for data configuration, encoding, and instrumentation. The framework uses uniform data mechanisms and types, making it easy to manage and locate data in the system.
The network telemetry framework's top-level modules fall into four categories:
1) The Management plane includes protocols like SNMP and syslog, through which network elements interact with a network management system (NMS). This telemetry must address data subscription, structured data, high-speed transport and congestion avoidance to ensure efficient automatic network operation. (A small syslog-parsing sketch follows this list.)
2) Control plane telemetry monitors the health of different network control protocols. It helps to detect, localize, and predict network issues. This method also allows for real-time and detailed network optimization.
3) Forwarding plane telemetry depends on the data that the network devices can provide. Ensuring that data meets quality, quantity, and timing standards can be challenging for devices in the network's data plane, where the data originates.
4) External data telemetry treats events outside the network, detected by hardware or software, as an essential data source. Collecting and correlating data from these external sources comes with its own challenges.
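As a small illustration of management-plane data, here's a sketch that parses a BSD-style (RFC 3164) syslog line into structured fields. The sample message is made up; a real collector would receive messages like this over the network from many devices.

```python
import re

# Parse a classic BSD-style syslog line into priority, timestamp, host and message.
SYSLOG_PATTERN = re.compile(
    r"^<(?P<pri>\d+)>"                       # priority = facility * 8 + severity
    r"(?P<timestamp>\w{3}\s+\d+\s[\d:]+)\s"  # e.g. "Jun  5 14:32:01"
    r"(?P<host>\S+)\s"
    r"(?P<message>.*)$"
)

# Illustrative sample line, not from any specific vendor.
line = "<187>Jun  5 14:32:01 edge-router-1 %LINK-3-UPDOWN: Interface Gi0/1, changed state to down"

match = SYSLOG_PATTERN.match(line)
if match:
    facility, severity = divmod(int(match.group("pri")), 8)
    print(f"host={match.group('host')} facility={facility} severity={severity}")
    print(f"message={match.group('message')}")
```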
Each plane's telemetry module has five different parts.
You can acquire network data through subscription (push) and query (pull):
Data can be pulled whenever needed, but pushing the data is more efficient and can reduce latency.
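Here's a conceptual sketch of the difference, using a hypothetical Device class as a stand-in for a real network element. In practice, pull is often SNMP polling, while push is a streaming telemetry subscription (such as gNMI) or flow export.

```python
import time
from typing import Callable

class Device:
    """Hypothetical network element used only to illustrate pull vs. push."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[dict], None]] = []

    def get_counters(self) -> dict:
        """Pull: the collector asks for data whenever it needs it."""
        return {"if_in_octets": 123456, "if_out_octets": 654321}

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        """Push: register a callback; the device streams updates as they occur."""
        self._subscribers.append(callback)

    def emit(self, sample: dict) -> None:
        # In a real device this fires as soon as new data is produced,
        # avoiding polling overhead and reducing reporting latency.
        for callback in self._subscribers:
            callback(sample)

device = Device()

# Pull model: poll on a fixed interval.
for _ in range(3):
    print("pulled:", device.get_counters())
    time.sleep(1)

# Push model: the device delivers updates to the subscriber as they happen.
device.subscribe(lambda sample: print("pushed:", sample))
device.emit({"if_in_octets": 123789, "if_out_octets": 654999})
```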
The framework's versatility allows it to function effectively across a wide range of systems. But particular challenges can arise when gathering and examining data from multiple domains, so plan and map your mechanisms carefully to get accurate and reliable results.
(See how network and application monitoring differ.)
As networks become more automated, new requirements are placed on existing network telemetry techniques. Each stage builds on the techniques of the previous one and adds its own requirements.
Here are the four stages of network telemetry applications:
In the first stage, the telemetry data source and its type are determined at design time, and the network operator's flexibility is limited to configuring how that data is used.
In the second stage, it's possible to program or configure telemetry data on the fly without disrupting the network's operation. This permits a balance to be struck between resource conservation, performance, flexibility, and coverage.
In the third stage, you can tailor and adjust telemetry data in real time to meet network operations' visibility needs. Modifications are made frequently at this stage, based on real-time feedback. Some tasks are automated, but human operators are still required to make decisions.
In the fourth stage, human operators no longer intervene in the telemetry except to generate reports. An intelligent network operation engine automatically requests telemetry data, analyzes it, and updates network operations through closed control loops.
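Here's a rough sketch of what such a closed loop looks like in code. The functions collect_telemetry(), detect_anomaly() and apply_remediation() are hypothetical placeholders for a real collector, analytics engine and orchestration API.

```python
import time

def collect_telemetry() -> dict:
    # Placeholder: in practice this would pull or receive metrics from devices.
    return {"link_utilization_pct": 96.0, "packet_loss_pct": 1.8}

def detect_anomaly(sample: dict) -> bool:
    # Simple threshold check; real engines would use richer analytics or ML.
    return sample["link_utilization_pct"] > 90 or sample["packet_loss_pct"] > 1.0

def apply_remediation(sample: dict) -> None:
    # Placeholder action, e.g. shifting traffic to an alternate path.
    print(f"remediating based on {sample}")

# Closed loop: collect -> analyze -> act, with no human in the loop.
for _ in range(3):
    sample = collect_telemetry()
    if detect_anomaly(sample):
        apply_remediation(sample)
    time.sleep(1)
```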
(No matter the stage, all that data needs to be protected, that’s where network security comes in.)
Telemetry protocols and standards ensure data is sent and received correctly between devices and systems. They help keep data accurate in monitoring, research, and automation.
Notable examples include SNMP and NetFlow, covered above, as well as OpenTelemetry:
OpenTelemetry is an observability framework for managing and exporting telemetry data like traces, metrics, and logs. It helps analyze software performance and is open-source. It's not the same as network telemetry — but it can collect data from network devices.
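As a small taste, here's a minimal sketch that uses the OpenTelemetry Python SDK to record a counter metric and print it to the console. The instrument name and attributes are illustrative, and a real setup would export to a backend instead of stdout.

```python
# Requires: pip install opentelemetry-sdk
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export collected metrics to stdout every 5 seconds (and on shutdown).
reader = PeriodicExportingMetricReader(ConsoleMetricExporter(), export_interval_millis=5000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("network.telemetry.example")
rx_bytes = meter.create_counter("interface.rx.bytes", unit="By", description="Bytes received")

# Record a sample observation, tagged with the interface it came from.
rx_bytes.add(1500, {"interface": "Gi0/1"})
```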
Network telemetry is like having a network detective gathering data and clues about the network's behavior and performance. It empowers network administrators to stay on top of their game, maintaining a robust and efficient network infrastructure.
So, whether you're a network guru or just dipping your toes into the networking world, network telemetry is invaluable for managing and optimizing your network.