There is no doubt that 2017 was a big year for enterprise cloud computing. According to Gartner, 90 percent of global enterprises are currently using some sort of cloud service. Multicloud, however, is shaping up to be the next step in building truly dynamic infrastructures. Running workloads dynamically across multiple cloud providers lets enterprises place each workload on the infrastructure where it runs best. In fact, the same Gartner study reports that 70 percent of enterprises are planning multicloud deployments by 2019, up from 10 percent today.
But are enterprises ready for the security challenges that multicloud architectures present? Applications spread across diverse cloud providers are notoriously difficult to gain visibility into. Each cloud provider has its own technology biases, unique cloud services, and management interfaces, which makes it difficult to build an integrated view of what is happening. The result is that enterprises may not really know whether their security policies are consistently applied to workloads running across multiple cloud providers.
Enterprises could simply trust that cloud providers are protecting their data, but they are reluctant to take on that risk given the very public fallout associated with security breaches. In addition, a lack of understanding or proof of compliance is enough to fail most audits. Ignorance is not an acceptable defense, especially for enterprises that have the resources to mitigate these risks.
Paradoxically, enterprises are responsible for data security in multicloud environments, yet most do not have the visibility or control to ensure data is 100 percent protected. There are practical approaches, however. Here are four steps that enterprises can take to get a better handle on their multicloud infrastructure:
1. Get your hands on packet-level monitoring data
Enterprises absolutely need access to cloud packet data. The data available from cloud providers is not yet at the level IT managers are used to in the data center. For example, you can get metrics about your cloud instances, but generally not the actual packets themselves. In addition, the metrics may be less granular than their data-center equivalents, or available only for a limited period of time. There may be no easy way to build the customized dashboards you need to pinpoint network and application performance issues. These limitations make it more difficult and time-consuming to identify and resolve security and performance issues.
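To make "packet-level data" concrete, here is a minimal sketch of parsing a raw capture in the standard pcap file format, which is what a cloud traffic-mirroring or tap feature would typically hand you. The capture bytes below are synthetic, built in-line for illustration; a real capture would come from the provider's mirroring feature or a virtual tap appliance.

```python
import struct

# Little-endian pcap global header: magic, version 2.4, timezone 0,
# sigfigs 0, snaplen 65535, link type 1 (Ethernet).
PCAP_GLOBAL_HDR = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

def make_record(ts_sec, payload):
    """Build one pcap packet record: 16-byte per-packet header + payload."""
    hdr = struct.pack("<IIII", ts_sec, 0, len(payload), len(payload))
    return hdr + payload

def parse_pcap(data):
    """Return (timestamp, payload) tuples from a little-endian pcap buffer."""
    magic, = struct.unpack_from("<I", data, 0)
    assert magic == 0xA1B2C3D4, "unexpected pcap magic / byte order"
    offset = 24  # skip the global header
    packets = []
    while offset + 16 <= len(data):
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack_from("<IIII", data, offset)
        offset += 16
        packets.append((ts_sec, data[offset:offset + incl_len]))
        offset += incl_len
    return packets

# Synthetic capture: two records with dummy IPv4-like payloads.
capture = (PCAP_GLOBAL_HDR
           + make_record(1514764800, b"\x45\x00" + b"\x00" * 18)
           + make_record(1514764801, b"\x45\x00" + b"\x00" * 38))
packets = parse_pcap(capture)
print(len(packets))  # 2
```

Once you can get captures like this out of each cloud, the same parsing and analysis tooling applies everywhere, which is exactly the consistency the following steps build on.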
2. Treat it like it came from your data center
Once available, enterprises need to feed cloud packet data into existing IT service management (ITSM) solutions where it can be centrally monitored alongside other systems management data. This allows enterprises to seamlessly monitor performance, availability and security of workloads—regardless of the underlying infrastructure—while providing a baseline from which to begin policy enforcement. This central monitoring and policy enforcement will ensure that the enterprise has control over the security posture of its own data and that policies are applied on all workloads consistently—whether workloads run in the data center, on a single cloud provider’s infrastructure or across multicloud architectures.
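One way to picture this step is a thin normalization layer that maps each provider's records into a single schema before they reach the central monitoring pipeline. This is a hedged sketch: the input field names ("srcaddr", "src_ip", and so on) and provider labels are illustrative placeholders, not any vendor's actual log format.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """One normalized traffic record, regardless of which cloud produced it."""
    src: str
    dst: str
    dst_port: int
    bytes: int
    source_cloud: str

def from_provider_a(raw: dict) -> FlowRecord:
    # Hypothetical field names for illustration only.
    return FlowRecord(raw["srcaddr"], raw["dstaddr"], raw["dstport"],
                      raw["bytes"], "provider-a")

def from_provider_b(raw: dict) -> FlowRecord:
    # A second, differently shaped hypothetical format.
    return FlowRecord(raw["src_ip"], raw["dest_ip"], raw["dest_port"],
                      raw["octets"], "provider-b")

records = [
    from_provider_a({"srcaddr": "10.0.0.5", "dstaddr": "10.0.1.9",
                     "dstport": 443, "bytes": 1200}),
    from_provider_b({"src_ip": "172.16.0.7", "dest_ip": "10.0.1.9",
                     "dest_port": 443, "octets": 900}),
]

# Once normalized, every record can flow through the same dashboards,
# alerts, and policy checks, whatever its origin.
total = sum(r.bytes for r in records)
print(total)  # 2100
```

The design point is that policy enforcement only has to be written once, against `FlowRecord`, rather than once per provider format.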
3. Understand context and apply intelligent policies
Like all monitoring data, cloud packet data needs to be put into the proper context so it can be analyzed. To determine whether a packet is good or bad, it needs to be fed into the appropriate monitoring, compliance, analytics and security appliances, where it can be converted into actionable information. CRM data is dealt with differently than HR documentation in the data center, so why would an enterprise treat them identically just because they came from the cloud? Intelligence at the network packet level can identify and divert data according to existing policies. The result is more robust and intelligent security, improved network performance, and better allocation of resources.
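The "identify and divert" idea can be sketched as a small rule table that routes classified traffic to the right downstream tool. The categories, flow fields, and tool names below are invented placeholders for illustration; a production system would classify from packet contents rather than pre-labeled dictionaries.

```python
# Ordered policy table: first matching predicate wins.
POLICIES = [
    (lambda f: f.get("app") == "crm", "dlp-inspection"),       # customer data
    (lambda f: f.get("app") == "hr", "compliance-archive"),    # HR documents
    (lambda f: f.get("dst_port") == 22, "security-analytics"), # admin traffic
]

def route(flow: dict) -> str:
    """Return the tool a flow should be diverted to, defaulting to baseline monitoring."""
    for predicate, tool in POLICIES:
        if predicate(flow):
            return tool
    return "performance-monitoring"

print(route({"app": "crm", "dst_port": 443}))  # dlp-inspection
print(route({"app": "web", "dst_port": 443}))  # performance-monitoring
```

Because the same table applies to traffic from every cloud, CRM packets get CRM treatment wherever the workload happens to run.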
4. Rely on your own testing procedures
Let’s be honest: you trust your own tests more than anyone else’s. Cloud providers do their best, but they have to serve the mass of users, not your individual needs. It is critical that enterprises constantly test the performance, availability and, most importantly, the security of their workloads running in multicloud environments. Skipping this testing could lead to non-compliance or, worse, a security breach. Testing once provides a snapshot of confidence; continuous re-testing keeps that confidence current.
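Continuous re-testing can be as simple as re-running the same security assertions against an up-to-date inventory of workloads across every cloud. This sketch assumes a workload inventory shaped as plain dictionaries; the check functions and inventory fields are illustrative, and a real version would run on a schedule and pull state from each provider's API.

```python
def no_open_admin_ports(workload):
    """Fail if SSH (22) or RDP (3389) is exposed."""
    return 22 not in workload["open_ports"] and 3389 not in workload["open_ports"]

def encryption_enabled(workload):
    """Fail if the workload's storage is not encrypted."""
    return workload["encrypted"]

CHECKS = [no_open_admin_ports, encryption_enabled]

def run_checks(inventory):
    """Return (workload name, failed check name) pairs across all clouds."""
    failures = []
    for name, workload in inventory.items():
        for check in CHECKS:
            if not check(workload):
                failures.append((name, check.__name__))
    return failures

# Hypothetical inventory spanning two providers.
inventory = {
    "cloud-a/web-1": {"open_ports": [443], "encrypted": True},
    "cloud-b/db-1": {"open_ports": [22, 5432], "encrypted": False},
}
print(run_checks(inventory))
```

Run unchanged on every cycle, the same checks confirm that yesterday's compliant posture is still today's, which is the "continuous reinforcement" the step describes.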
Enterprises will end up fully embracing multicloud architectures in 2018 as users continue to demand always-optimized experiences. The ability to move workloads across clouds enables this optimization, leading to powerful experiences. However, security remains a major concern in multicloud adoption. Enterprises can resolve this by implementing the same packet-level network visibility they employ in their private networks. Seamless access to cloud packet data provides the freedom to route information into any security, monitoring and testing tools, where it can be parsed and analyzed. Achieving 100 percent security is possible in a multicloud environment. It just takes planning and vigilant execution.
Jeff Harris is vice-president of Product Portfolio at Keysight Technologies