In spite of the billions of dollars invested to prevent external threats from penetrating data centers, breaches are a fact of life. In the wake of some highly publicized and costly breaches in which perimeter defenses clearly failed, organizations are now turning their attention to detecting and thwarting attacks in progress, or at least minimizing the havoc they can cause.
And therein lies an enormous challenge. Server and network virtualization, combined with ever-increasing traffic, network speeds, and server density, have created a visibility gap. Administrators simply cannot “see” what is going on deep in their data centers, and sophisticated attackers can go undetected for extended periods of time.
Mind the gap
A recent Forrester report describes the “low and slow” attack method characteristic of so many breaches today: “This is a very systematic and precise attack in which the attackers go after the network, then the applications, and then the data, covering all traces of their presence as they penetrate the different parts of the environment.”
In some of the high-profile attacks of the past few years, hackers gained access to corporate networks with stolen credentials, then helped themselves to millions of credit card numbers and customer data files over a leisurely period of weeks or months before being discovered. System operators were ill-equipped to protect that data from a live breach or mitigate the impact (and enormous costs) of those attacks.
This points to an escalating need for data centers and cloud infrastructures to deploy security measures at the application and process level. VLAN separation, which can limit unapproved communication between servers, is useful but no longer enough. Every machine, virtual or physical, must be configured to allow only the essential incoming and outgoing traffic. The absence of application-level security policies, or policies too broad to enforce consistently, leaves vulnerabilities that low-and-slow attackers will inevitably discover and exploit, enabling them to move laterally between machines undetected.
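The allow-only-essential-traffic principle can be pictured as a default-deny allowlist per machine. A minimal sketch, with entirely hypothetical tier names and ports, might look like this:

```python
# Per-machine allowlist: every flow not explicitly listed is denied.
# Tier names, ports, and protocols below are illustrative only.
ALLOWED_FLOWS = {
    # (direction, remote group, port, protocol)
    ("in",  "web-tier", 8080, "tcp"),  # accept requests from the web tier
    ("out", "db-tier",  5432, "tcp"),  # talk only to the database tier
    ("out", "dns",        53, "udp"),  # plus essential infrastructure services
}

def is_allowed(direction, remote_group, port, proto):
    """Default-deny: a flow is permitted only if explicitly allowlisted."""
    return (direction, remote_group, port, proto) in ALLOWED_FLOWS
```

Under such a scheme, an attacker who compromises one machine cannot open an arbitrary connection to another; anything outside the allowlist is simply refused.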
Consequently, micro-segmentation has emerged as a state-of-the-art technique for establishing and enforcing stringent security policies at a very granular level, around individual applications or groups of applications within the data center environment. A number of players in the data center space have developed frameworks that employ distributed firewalls to enforce micro-segmentation policies.
Still, implementation of those policies in the first place remains a challenge. How can administrators determine which applications are allowed to communicate with each other and block connections from unknown, unauthorized origins? How can they then actually establish and enforce policies at the application level for thousands of existing servers?
First, implementing micro-segmentation security policies requires deep visibility down to the process level in order to identify applications, recognize the relationships between them and understand both network and application flows. Process-level visibility allows security administrators to identify servers with similar roles and shared responsibilities so they can be easily grouped for the purpose of establishing security policies.
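To make the grouping step concrete, here is a rough sketch of how servers with matching process-level flow signatures could be clustered into roles. All server names, processes, and flows are hypothetical, and real discovery tools work from live telemetry rather than a static list:

```python
from collections import defaultdict

# Hypothetical process-level observations: (server, process, peer group, port).
flows = [
    ("srv-01", "nginx",    "app-tier", 8080),
    ("srv-02", "nginx",    "app-tier", 8080),
    ("srv-03", "postgres", "app-tier", 5432),
]

# Servers whose processes and flow patterns match share a role signature,
# so they can be grouped under a common security policy.
roles = defaultdict(set)
for server, process, peer, port in flows:
    roles[(process, peer, port)].add(server)
```

Here srv-01 and srv-02 fall into one group (same process, same peers) while srv-03 forms another, which is exactly the kind of role discovery that makes policy-setting tractable.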
At first blush, this may seem a daunting task, and it is likely the first impediment to effective micro-segmentation. However, with the aid of graphic visualization tools that enable administrators to automatically discover and accurately map their data center applications and the communication processes between them, implementing an effective micro-segmentation strategy becomes far more manageable.
Once administrators have gained this depth of visibility, they can begin to filter and organize applications into groups for the purpose of setting common security policies – for example, all applications related to a particular workflow or business function. Micro-segmentation policies can then be created, tested and refined as needed for each defined group.
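Once groups are defined, the policies themselves reduce to explicit rules about which group may reach which. A minimal sketch, assuming invented group names for a billing workflow:

```python
# Hypothetical group-level policies for one business workflow.
# Any connection without a matching rule is denied by default.
policies = {
    "billing-web": [{"to": "billing-app", "port": 8443, "proto": "tcp"}],
    "billing-app": [{"to": "billing-db",  "port": 5432, "proto": "tcp"}],
}

def evaluate(src_group, dst_group, port, proto):
    """Return 'allow' only if an explicit rule covers this connection."""
    for rule in policies.get(src_group, []):
        if (rule["to"], rule["port"], rule["proto"]) == (dst_group, port, proto):
            return "allow"
    return "deny"
```

Policies in this form are easy to test and refine per group: tightening the billing workflow means editing only its rules, without touching any other segment.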
It’s important to note that micro-segmentation has two interrelated benefits. Not only can it prevent the hostile takeover of applications and processes, but it can also alert operators to the presence of an intruder. An attempted or blocked connection is a signal of a possible attack in progress that at a minimum warrants investigation. For this to be truly effective, however, system administrators must be able to monitor every port and all east-west traffic – VM-to-VM, app-to-app, process-to-process – in order to quickly identify policy violations.
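The detection benefit follows naturally: every blocked east-west connection is itself a signal. As a rough sketch over hypothetical connection events, surfacing those blocks for investigation could look like:

```python
# Hypothetical east-west connection events: (src, dst, port, verdict).
events = [
    ("web-01", "app-01", 8080, "allow"),
    ("app-01", "db-01",  5432, "allow"),
    ("web-01", "db-01",  5432, "block"),  # web tier should never reach the DB directly
]

# Each blocked east-west connection warrants investigation,
# since it may indicate lateral movement in progress.
alerts = [(src, dst, port) for src, dst, port, verdict in events
          if verdict == "block"]
```

In this toy example, the web server's direct attempt on the database port is exactly the kind of policy violation that would flag a possible attack in progress.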
As micro-segmentation strategies continue to mature, conventional breach detection and analysis methods, which involve tediously sifting through voluminous SIEM reports and data logs after the fact, are giving way to automated methods. As a result, it is now possible to monitor all east-west traffic for anomalies and to detect, investigate and respond to threats inside the data center in real time.
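One simple form of such automation is baselining: learn which east-west flows are normal, then flag anything new. This is a deliberately minimal sketch with hypothetical flow data, not a description of any vendor's detection engine:

```python
# Hypothetical baseline of east-west flows seen during a learning period.
baseline = {("web-01", "app-01"), ("app-01", "db-01")}

def anomalies(observed):
    """Flag any flow not present in the learned baseline."""
    return [flow for flow in observed if flow not in baseline]

# ("app-01", "web-02") never appeared during learning, so it is flagged.
new_flows = anomalies([("web-01", "app-01"), ("app-01", "web-02")])
```

Real systems add statistics and context on top of this, but the core idea of comparing live east-west traffic against a learned model is what enables detection in real time rather than in a post-mortem log review.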
Technologies exist today that make micro-segmentation security policies practical to implement and manage systematically, closing what has historically been a yawning gap in the data center security infrastructure.
Dave Burton is vice president of Marketing at GuardiCore.