Hardening defenses against hackers

With notable security breaches at several major retailers, social media companies, and even the US Federal Reserve, CIOs and CTOs are under pressure to harden their networks against a tide of cyber-attacks that seems only to be getting worse. The proliferation of mobile devices, BYOD and the cloud is changing the way people work and collaborate, and it adds further complexity to the network and security monitoring challenge.

Data threats go far beyond the traditional foes everyone knows. Newer dangers include Advanced Persistent Threats (APTs) and so-called “man-in-the-middle” attacks that intercept traffic passing between hosts running Internet Protocol version 6 (IPv6) and older routers still running IPv4.

In the rush to secure every vulnerable network portal, CIOs and security managers should not overlook the critical need for 100 percent network visibility. This requires a comprehensive monitoring strategy implemented with techniques that ensure all data is available, at line rate, to all monitoring tools at all times. For enterprises that have yet to consider this critical fact — and the research indicates many companies aren’t even trying to analyze all the traffic moving through their networks — it’s time to sit up and take notice.

Intelligent networks
What you don’t know, and more importantly what you can’t monitor, is the Achilles’ heel of any network security strategy. In many cases, the information speeding into networks at upwards of 40G is simply too much for tools installed in the 1G era to handle. But with an intelligent network monitoring strategy, built on the right monitoring architecture, taps and packet manipulation techniques, you can keep those 1G tools working in these expanding networks.
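
To make that concrete, here is a rough, purely illustrative calculation in Python. The reduction ratios and port capacity below are assumptions for the sake of the example, not vendor figures; the point is simply that filtering and slicing, followed by load balancing, can bring a 40G aggregate within reach of 1G tool ports.

```python
# Back-of-envelope sketch: can 1G-era tools keep up with a 40G link?
# All ratios below are illustrative assumptions, not vendor figures.

LINK_GBPS = 40.0        # aggregate traffic tapped from the production network
FILTER_KEEP = 0.20      # assume filters forward only 20% of traffic to the tools
SLICE_RATIO = 0.25      # assume packet slicing keeps headers only (~25% of bytes)
TOOL_PORT_GBPS = 1.0    # capacity of a single legacy 1G tool port

reduced_stream = LINK_GBPS * FILTER_KEEP * SLICE_RATIO   # 2.0 Gbps
ports_needed = -(-reduced_stream // TOOL_PORT_GBPS)      # ceiling division

print(f"Reduced stream: {reduced_stream:.1f} Gbps")
print(f"1G tool ports needed once load-balanced: {int(ports_needed)}")
```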

I’ll give you an example. Consider the case of a large financial services institution with responsibilities to shareholders and millions of customers. The company’s credibility rests on its ability to maintain a secure network. To ensure that security, the organization needs to apply five or six tools to the critical data flowing through its network: perhaps an Intrusion Prevention System (IPS), an Intrusion Detection System (IDS), a Data Loss Prevention (DLP) tool and other tools that watch for signs of hacking or troublesome domains.

In such a case, one of the concerns is how to ensure all those tools get simultaneous access to the network at multiple monitoring points, in real time and without dropping packets.

It’s done by rethinking the network monitoring architecture. Data center infrastructure traditionally consists of network switches that provide one or two Switched Port Analyzer (SPAN) ports, which can be used to send copies of production traffic to a monitoring tool. If you have six or more security tools that need to access the data, two ports simply won’t do. The problem is known as SPAN port contention, and it’s one of the biggest challenges we face in a security environment.
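
A quick sketch makes the contention arithmetic plain; the tool list, number of monitoring points and SPAN port count below are hypothetical examples, not a reference design.

```python
# Illustrative check of SPAN port contention: how many copies of the traffic
# do the tools need versus how many SPAN feeds the switches can offer?
SECURITY_TOOLS = ["IPS", "IDS", "DLP", "forensics", "APM", "threat-intel"]
SPAN_PORTS_PER_SWITCH = 2
MONITORING_POINTS = 3            # e.g. core, distribution and DMZ switches

feeds_needed = len(SECURITY_TOOLS) * MONITORING_POINTS        # 18 feeds
feeds_available = SPAN_PORTS_PER_SWITCH * MONITORING_POINTS   # 6 feeds

print(f"Feeds needed: {feeds_needed}, SPAN feeds available: {feeds_available}")
```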

Further, SPAN ports won’t catch everything that crosses your network, leaving blind spots in your visibility. In heavily loaded networks, if you configure too much traffic to be replicated by a SPAN port, you are likely to lose packets as the port tries to keep up.

Network taps that sit inline between switches, by contrast, provide access to all of the data, without the possibility of dropped packets, regardless of bandwidth. Total visibility is possible only through the proper use of network taps and the right switching architecture, combined with advanced monitoring techniques that ensure all data is available, at line rate, to all monitoring tools at all times.

After achieving access to all that data, it’s important to take other steps to ensure that your tools are not overwhelmed.

An intelligent network monitoring switch will support complete packet manipulation and modification, including aggregation, filtering, packet slicing/stripping, deduplication and network load balancing, to reduce the combined stream to fit the available bandwidth.
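
To illustrate the load-balancing piece, the sketch below hashes each flow’s 5-tuple so that every packet of a given session lands on the same tool port. The port names are hypothetical, and a production switch would typically use a symmetric hash so both directions of a conversation reach the same tool.

```python
# Minimal sketch of flow-aware load balancing: packets from the same 5-tuple
# always land on the same tool port, so each 1G tool sees whole flows.
import hashlib

TOOL_PORTS = ["ips-1", "ips-2", "ids-1", "dlp-1"]   # hypothetical egress ports

def tool_port_for(src_ip, dst_ip, src_port, dst_port, proto):
    """Pick an egress tool port by hashing the flow's 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha1(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(TOOL_PORTS)
    return TOOL_PORTS[index]

print(tool_port_for("10.0.0.5", "192.0.2.10", 49152, 443, "tcp"))
```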

Multi-stage filtering offers fine-grained flexibility in filtering rules and pinpoint accuracy, allowing users to specify exactly which packets are delivered to each egress port on the switch and eliminating the risk of oversubscribing ports and dropping packets.
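
The sketch below shows the idea in miniature: early stages discard traffic nobody needs, and a final first-match rule table decides which egress port receives each packet. The VLANs, field names and port assignments are invented for the example.

```python
# Illustrative multi-stage filtering: each stage narrows the traffic, and the
# final stage picks the egress (tool) port. All rules here are hypothetical.

STAGE_1 = lambda pkt: pkt["vlan"] in (10, 20)          # only monitored VLANs
STAGE_2 = lambda pkt: pkt["proto"] in ("tcp", "udp")   # drop non-TCP/UDP noise

EGRESS_RULES = [   # final stage: first matching rule wins
    (lambda pkt: pkt["dst_port"] == 443, "dlp-1"),
    (lambda pkt: pkt["dst_port"] in (80, 8080), "ips-1"),
    (lambda pkt: True, "ids-1"),                       # catch-all
]

def route(pkt):
    """Return the egress port for a packet, or None if it is filtered out."""
    if not (STAGE_1(pkt) and STAGE_2(pkt)):
        return None
    for match, port in EGRESS_RULES:
        if match(pkt):
            return port

print(route({"vlan": 10, "proto": "tcp", "dst_port": 443}))  # -> "dlp-1"
```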

Many critical security monitoring tasks are concerned only with the data contained in the packet header. Packet slicing discards payload data before it is sent to monitoring tools, reducing overall data volume, increasing tool performance, enhancing network visibility and saving scarce budget resources.
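
Conceptually, packet slicing is just truncation at a chosen snap length, as in this minimal sketch. The 64-byte figure is an illustrative choice; real deployments pick a length that preserves whichever headers their tools need.

```python
# Minimal sketch of packet slicing: keep the leading header bytes of each
# frame and discard the payload before forwarding to monitoring tools.

SNAP_LEN = 64  # bytes to keep per packet (roughly the headers); illustrative

def slice_packet(frame: bytes) -> bytes:
    """Truncate a raw frame to SNAP_LEN bytes."""
    return frame[:SNAP_LEN]

frame = bytes(1500)                 # a full-size dummy frame
print(len(slice_packet(frame)))     # -> 64
```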

The need to eliminate duplicate packets has become fundamentally important for both security and network performance monitoring. With up to 50 percent of network monitoring traffic being duplicates, implementing a packet deduplication function in the network monitoring switch is foundational to keeping security monitoring tools efficient.
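
The sketch below captures the core of deduplication: hash each packet and drop it if an identical copy was seen within a short window. A real monitoring switch would typically mask mutable header fields such as TTL and checksums before hashing; the window length here is illustrative.

```python
# Minimal sketch of packet deduplication: drop a packet if an identical copy
# was already seen within a short time window.
import hashlib
import time

WINDOW_SECONDS = 0.05       # dedup windows are typically tens of milliseconds
_seen = {}                  # packet hash -> timestamp last seen

def is_duplicate(frame: bytes, now=None) -> bool:
    now = time.monotonic() if now is None else now
    key = hashlib.sha1(frame).digest()
    last = _seen.get(key)
    _seen[key] = now
    return last is not None and (now - last) <= WINDOW_SECONDS

pkt = b"\x00" * 60
print(is_duplicate(pkt))    # False (first copy)
print(is_duplicate(pkt))    # True  (duplicate within the window)
```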

As you can see, the fight for full network visibility depends on many factors. But with some strategic thought and planning, and by employing the latest intelligent network monitoring technology and techniques, it is possible to harden your defenses.

The first step is to open an internal discussion on network monitoring switch architecture to ensure that your company’s tools have 100-percent visibility. Any company that lacks complete visibility leaves itself vulnerable.

The views expressed in this article are those of the author, not DatacenterDynamics FOCUS

[Image: APCON’s Paul Ginn]
