The Truth About Micro-Segmentation: It's Not About the Network (Part 1)

Never confuse a marketecture with an architecture. A marketecture is a simplified representation of a company’s technology, designed to show what a product can do.

By contrast, architectures represent the overall components, interrelationships and operations of products. They show how things really work. And the devil is in the details, especially when it comes to micro-segmentation.

Since VMware introduced the concept of micro-segmentation for data center security about three years ago, the security and networking industries have been racing to introduce competing technologies to reduce the lateral spread of bad actors in the data center and cloud. One of the key insights of VMware’s NSX team – which my team fully subscribes to – is that traditional networking technology presents a boatload of limitations to implementing micro-segmentation at scale. As they noted in August 2014:

The idea of using network micro-segmentation to limit lateral traffic isn’t new, but until recently, it was never feasible. Even if you blanketed your data center with a legion of hardware firewalls, there’d be no way to operationalize them, and the costs would be astronomical. Until now. 

A purely network-centric approach to micro-segmentation cannot operate at scale or deliver complete data center and cloud security. Traffic steering, service chaining and proprietary network operations fall apart in today’s dynamic, distributed and heterogeneous computing environments. The proprietary network chokepoint/enforcement-point model re-introduces the complexity of client-server technology into the cloud world. When a vendor talks about throughput in the context of security in today’s hybrid cloud world, they are reaching for the past, not the future.

The big “aha” for security and networking teams is not that segmentation will support better data center hygiene – they already knew that. What is different is that network segmentation is related to security segmentation but is not the same thing. While network segmentation and security segmentation both introduce forms of isolation, they were built for different purposes:

● Network segmentation was designed initially to create smaller networks (subnets) to reduce performance problems such as Layer 2 broadcast storms, and only later to create isolation. It is built on top of IP addressing.

● Security segmentation focuses on the policy model of applications – should applications and application components be allowed to communicate? – and is built on top of data/workload tagging.

A great example of this is the failure of network technology to allow a server to live in multiple dimensions: can a database serve two different applications that live on different network segments?
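To make the distinction concrete, here is a minimal sketch contrasting a subnet-based rule with a tag-based rule for the shared-database question above. The subnets, labels and workloads are hypothetical, not drawn from any particular product.

```python
# Minimal sketch of label-based (security) segmentation vs. subnet-based
# (network) segmentation. All addresses, labels and workloads are hypothetical.

from ipaddress import ip_address, ip_network

# Network segmentation: the rule can only reason about addresses.
SUBNET_RULES = [
    (ip_network("10.1.0.0/24"), ip_network("10.2.0.0/24")),  # app-A web -> db segment
]

# Security segmentation: the rule reasons about what the workloads *are*.
LABEL_RULES = [
    ({"role": "web", "app": "A"}, {"role": "db"}),
    ({"role": "web", "app": "B"}, {"role": "db"}),
]

WORKLOADS = {
    "10.1.0.5": {"role": "web", "app": "A"},
    "10.3.0.7": {"role": "web", "app": "B"},   # lives on a different segment
    "10.2.0.9": {"role": "db"},                # one database, two consumers
}

def allowed_by_subnet(src_ip: str, dst_ip: str) -> bool:
    """The network view: only the listed (src, dst) subnet pairs are permitted."""
    return any(
        ip_address(src_ip) in s and ip_address(dst_ip) in d
        for s, d in SUBNET_RULES
    )

def allowed_by_labels(src_ip: str, dst_ip: str) -> bool:
    """The security view: allow the flow if a rule's label selectors match both ends."""
    src, dst = WORKLOADS[src_ip], WORKLOADS[dst_ip]
    return any(
        all(src.get(k) == v for k, v in s.items()) and
        all(dst.get(k) == v for k, v in d.items())
        for s, d in LABEL_RULES
    )

# App B's web tier sits on a subnet the network rule never anticipated.
print(allowed_by_subnet("10.3.0.7", "10.2.0.9"))  # False
print(allowed_by_labels("10.3.0.7", "10.2.0.9"))  # True
```

The label rules let the database serve both applications regardless of which segment each web tier lives on; the subnet rule would have to be duplicated and maintained for every address change.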

Don’t Sell Me Micro When You Mean Macro

The original segmentation model for the data center was the network security perimeter firewall. Because it was a single chokepoint that could process a blacklist model at line speed, it has been manifested in hardware devices with ever-increasing levels of capacity and throughput. Network devices do a good job of coarse-grained macro-segmentation, not only for the perimeter but for well-defined zones that provide environmental separation within relatively static and well-defined boundaries.


Where networking fails – and this includes the network stack in the hypervisor or containers – is where you need the more granular security segmentation of micro-segmentation. As you move to ring-fence applications, tiers of applications or individual workloads, the network and hypervisor models both lack the context and the flexibility to do the job. What happens if an application spans several data centers? Would you hairpin traffic back to an enforcement point? Even worse, what happens when an organization has dozens and dozens of data centers that support a single application?

While the network- and hypervisor-centric versions of micro-segmentation do a fine job of macro-segmentation (i.e., environmental segmentation), they become complex and operationally stultified when you move to true micro-segmentation.

This becomes self-evident as we move to the need to segment processes or individual ports at the workload or container level. How would you use a network to segment across 10,000 dynamic ports in Active Directory? How does it work at the container level?

Said simply: networks are great for macro-segmentation, but software-centric approaches are required for micro-segmentation. 

The dynamic and distributed data center/cloud world is leaving the client-server network-centric security model behind. It’s easy to change a marketecture but nearly impossible to change an architecture.

In part 2 of Micro-segmentation Misdirection, I will cover the difference between Visibility and Application Dependency Mapping for micro-segmentation.

The Truth About Micro-Segmentation (Part 2)

“Why cover the same ground again? It goes against my grain to repeat a tale told once, and told so clearly.” ― Homer, The Odyssey

For the past few decades, visibility has been the Odyssey of security professionals. The saying, “You can't protect what you can’t see” has launched a thousand security startups, most to fatally founder on irrelevance or poor execution.

In the data center and cloud security world, the role of visibility resurfaces with redoubled effort. Most data centers are built on the “hard exterior, soft chewy interior” school of network security – a firewalled perimeter around an open inside. With the increasing spread of attacks inside the data center and cloud – malware, insider threats, or simply application or communication vulnerabilities exploited by bad actors – there is a growing focus on segmentation as a core data center strategy.

Gartner Distinguished Analyst and VP Greg Young has suggested:

“[Security and risk management leaders] should also consider redesigning their assets and moving different assets into more secure locations, or segmenting to add floodwalls between parts of their organization. Adding these obstacles will make it more challenging for hackers to penetrate an organization.”

However, strong microsegmentation approaches cannot be implemented unless IT Operations and Security have clear visibility into how their applications are communicating, so that they can determine quickly what should be communicating.

This requires going beyond traditional network visibility to understand how application dependencies actually work. The current parlance for this capability is Application Dependency Mapping. It cannot be produced, however, if you only see what happens on the network. The applications and the hosts they sit on must be included in a live, continuous map. You need real-time understanding of both vectors to build a cybersecurity approach that protects your data center.
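As a rough illustration of what a live map means in practice, the following sketch folds observed flows into a continuously updated dependency map keyed by host, port and process. The record fields and example values are assumptions for illustration, not a vendor schema.

```python
# Minimal sketch of an application dependency map built from flow records.
# Field names and example hosts are illustrative assumptions.

from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_host: str
    dst_host: str
    dst_port: int
    process: str   # process observed answering on dst_port

# dependency_map[(dst_host, dst_port, process)] -> set of hosts that talk to it
dependency_map: defaultdict = defaultdict(set)

def observe(flow: Flow) -> None:
    """Fold a single observed flow into the live map (called continuously)."""
    dependency_map[(flow.dst_host, flow.dst_port, flow.process)].add(flow.src_host)

observe(Flow("web-01", "db-01", 5432, "postgres"))
observe(Flow("web-02", "db-01", 5432, "postgres"))

for (host, port, proc), callers in dependency_map.items():
    print(f"{host}:{port} ({proc}) <- {sorted(callers)}")
```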

The Neat & Clean World and The Real World

Traditionally, application dependency maps are built manually, one network flow and one server at a time. This approach is nearly unworkable in the largest data center and cloud environments.

The marketectures offer symbolic icon views to describe a perfect, three-tier application (caveat emptor: the graphic below is provided by my employer):


When you get past the marketing side of things, there is a strong movement in the industry to use D3 JavaScript diagrams to create stronger visibility into application and network environments.

Automated data collection and new visualization tools do a better job of creating the map. However, most vendors still offer simple, stylized views of application dependencies, which are not particularly useful at scale. Two newer examples include:

• D3 “chord diagrams,” which can show directed relationships among a set of entities (a minimal data-preparation sketch follows this list).

• “Sunburst” diagrams, which go a step further to show relationships as well as application groups.
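For teams building their own views, here is a minimal sketch of the data preparation such a diagram needs. The group names and flow counts are made up for illustration; the resulting JSON would be handed to d3.chord() on the front end.

```python
# Minimal sketch: aggregate observed flows into the square matrix a D3 chord
# diagram consumes. Group names and flow counts are hypothetical.

import json

groups = ["web", "app", "db", "cache"]
index = {name: i for i, name in enumerate(groups)}

# (source group, destination group, observed flow count)
observed_flows = [("web", "app", 120), ("app", "db", 80), ("app", "cache", 45)]

matrix = [[0] * len(groups) for _ in groups]
for src, dst, count in observed_flows:
    matrix[index[src]][index[dst]] += count

# d3.chord() takes the square matrix; the groups list supplies the labels.
print(json.dumps({"groups": groups, "matrix": matrix}, indent=2))
```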

Beyond the Neatly Presented View of Your Application Communications 

In reality, most data centers – or even individual applications – do not remotely resemble the earlier diagrams. An actual Application Dependency Map looks more like this:


How many workloads/servers do you think this ADM involves? (Answer below.)*

To bring visibility and application dependency mapping to microsegmentation, both must have these five properties:

• Work at scale, up to tens of thousands of workloads and hundreds of thousands of other objects in the map, including laptops

• Be precise enough to support a large whitelist model 

• Adapt to changes in the environment

• Work in all environments and infrastructures as applications change or migrate to the cloud

• Provide the intelligence and automation to eliminate the manual model, which becomes nearly impossible at scale.

From a security perspective, to understand application dependencies you need to understand not only the flows and servers but also the ports and underlying processes. Most servers will have dozens of open, and hence vulnerable, ports. More significantly, these maps must be living systems, not one-time snapshots; otherwise it is impossible to keep up with the dynamic and distributed nature of today’s cloud and microservices architectures.
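For the host-side half of that picture, here is a minimal sketch, using the third-party psutil package, of the kind of port-and-process telemetry an agent might collect. It is illustrative only and may require elevated privileges on some platforms.

```python
# Minimal sketch of host-level telemetry for an ADM: which processes are
# listening on which ports. Requires the third-party psutil package.

import psutil

def listening_services() -> list[tuple[int, str]]:
    """Return (port, process_name) pairs for every listening inet socket."""
    services = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.pid is not None:
            try:
                services.append((conn.laddr.port, psutil.Process(conn.pid).name()))
            except psutil.NoSuchProcess:
                continue  # the process exited between the two calls
    return sorted(set(services))

if __name__ == "__main__":
    for port, name in listening_services():
        print(f"port {port}: {name}")
```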

The big change is that, rather than the Visio world much of the networking world originated from, we are moving into a metadata world. Powerful and accurate microsegmentation requires the ability to ingest metadata from CMDBs and spreadsheets, or to derive it from real-time observation of traffic flows. Moreover, the system needs to suggest microsegmentation rules based on observed behavior. This means application dependency mapping and segmentation are two sides of a single system, and IT and network teams should be wary of gluing together two separate systems.
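As a hedged illustration of that “two sides of a single system” point, the following sketch combines ingested metadata with observed flows to suggest label-level whitelist rules. The hosts, labels and CMDB fields are invented for the example.

```python
# Minimal sketch of rule suggestion: combine workload metadata (e.g., from a
# CMDB export) with observed flows to propose label-level whitelist rules.
# All hosts, labels and ports are hypothetical.

from collections import Counter

cmdb_labels = {                      # host -> labels ingested as metadata
    "web-01": {"app": "billing", "tier": "web"},
    "web-02": {"app": "billing", "tier": "web"},
    "db-01":  {"app": "billing", "tier": "db"},
}

observed_flows = [("web-01", "db-01", 5432), ("web-02", "db-01", 5432)]

def suggest_rules(flows, labels):
    """Collapse host-to-host flows into (src tier, dst tier, port) suggestions."""
    seen = Counter(
        (labels[s]["tier"], labels[d]["tier"], port) for s, d, port in flows
    )
    return [
        {"from_tier": src, "to_tier": dst, "port": port, "observed": n}
        for (src, dst, port), n in seen.items()
    ]

print(suggest_rules(observed_flows, cmdb_labels))
# -> [{'from_tier': 'web', 'to_tier': 'db', 'port': 5432, 'observed': 2}]
```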

In part III of this series I will conclude by reviewing the tradeoffs of various microsegmentation approaches.

*Less than 75!

The Truth About Micro-Segmentation: Healthy Heterogeneity (Part 3)

“Civilization is a progress from indefinite, incoherent homogeneity toward a definite, coherent heterogeneity.” ― Herbert Spencer

In my prior two posts, I covered the proverbial “bait and switch” of macrosegmentation for microsegmentation and the differences between visibility and application dependency mapping in security. In this concluding post, I want to focus on the dangers of “monoculture” in security and computing operations.

Today’s data center and cloud environments have hatched more healthy innovation than we can remember in a generation. For computing formats, we are in a post-virtualization era, where containers and “serverless” formats are rapidly gaining ground and are forced to run alongside legacy approaches. The maturation of the API economy has resulted in applications that are more dynamic and distributed, spanning multiple data centers and clouds. To wit, in the past year, I have seen single distributed applications that run within and across over 35 data centers.

So why would anyone choose a data center and cloud microsegmentation solution that is boat-anchored to those fixtures of the client-server era, the network and the hypervisor?

Today, there are three types of segmentation architectures: network-centric, hypervisor-centric and distributed (e.g., host-centric).

Let’s take a look at each one and review the puts and takes of each approach.

In the Network

Segmentation, whether it is performed in a switch or a firewall, was designed during the era of static workloads and excels at “North/South” traffic flows, where “big iron” plays an important role for throughput and lookup capacity. High-capacity firewalls are terrific at filtering inbound traffic with granular flow analysis, and they prove useful for clustered storage and aging legacy computing platforms such as the IBM AS/400.

The challenges of this model are its reliance on proprietary and operationally complex hardware, where replacing hardware can prove daunting from a cost or availability perspective. Hardware solutions do not translate to cloud environments such as Amazon Web Services or Microsoft Azure, and the “virtualized” versions of hardware solutions – i.e., running on a virtual machine – have serious throughput limitations and create fragility through service-chaining and traffic-steering challenges. When can service chaining be a bulletproof approach?

Data centers increasingly must be optimized for lateral traffic (approximately 80% of all DC traffic) as speed and agility become the most important drivers for IT and Security teams. In the new IT economy, applications are the new profit centers while infrastructure remains a cost center.

In the Hypervisor

Segmentation in the hypervisor was designed to filter traffic through hypervisor-attached firewalls or, increasingly, network virtualization platforms such as VMware NSX. Each hypervisor has visibility into traffic flows and can enforce security policies locally.

The benefits of such an approach include visibility into overlay software-defined networks and the ability to stop out-of-policy traffic before it hits the physical network. In homogeneous environments, hypervisor-centric segmentation can provide programmable APIs that eliminate some of the manual work associated with firewall management.

The limitations however are well understood:

• Poor support for legacy servers, NAS, bare-metal, containerized or public cloud workloads

• Limited support for heterogeneous virtualization environments

• No knowledge of processes or services initiating traffic

• Additional hypervisor overhead

• Potential scale limitations, including Application Dependency Mapping 

Distributed Segmentation

The newest form of microsegmentation is derived from an overlay approach that is decoupled from the infrastructure yet takes advantage of packet filtering in the operating system (e.g., the Windows Filtering Platform, or iptables and the Berkeley Packet Filter in Linux) or in other devices (Layer 4 firewalls in load balancers, ACLs in network switches). This approach lets security professionals craft policy centrally and distribute enforcement for scale.
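A minimal sketch of what “craft policy centrally, distribute enforcement” might look like appears below. The policy, labels and addresses are hypothetical, and a real agent would also handle rule ordering, connection tracking, IPv6 and Windows Filtering Platform equivalents.

```python
# Minimal sketch: render a centrally defined, label-based rule into the
# iptables commands a single host would apply locally. All names are invented.

policy = {"allow": [{"src_label": "web", "dst_label": "db", "port": 5432}]}

inventory = {
    "web": ["10.1.0.5", "10.1.0.6"],
    "db":  ["10.2.0.9"],
}

def render_for_host(host_label: str) -> list[str]:
    """Generate the inbound iptables rules a host with this label should apply."""
    cmds = []
    for rule in policy["allow"]:
        if rule["dst_label"] != host_label:
            continue
        for src_ip in inventory[rule["src_label"]]:
            cmds.append(
                f"iptables -A INPUT -p tcp -s {src_ip} "
                f"--dport {rule['port']} -j ACCEPT"
            )
        # Default-deny the port for everyone else (whitelist model).
        cmds.append(f"iptables -A INPUT -p tcp --dport {rule['port']} -j DROP")
    return cmds

print("\n".join(render_for_host("db")))
```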

The benefits of this approach include: 

• Complete application visibility, regardless of underlying infrastructure

• Insight into processes and services establishing connections

• Bare-metal, VMs & containers; On-prem and/or in the cloud

• Stops out-of-policy traffic before it hits the physical network

• Integration into heterogeneous environments

• Programmable APIs

Limitations include a weaker case for smaller, single-vendor environments with 100% virtualized workloads, and perceived concerns about agents installed for telemetry collection.

If your world is increasingly distributed, dynamic, heterogeneous and hybrid, the architectural choice is clear. 


These three posts originally appeared in SecurityWeek


