As virtualised data centres have begun to mature, eyes are now turning towards software-defined infrastructure (SDI), which is set to enable improved automation, easier deployment and far greater scalability of workloads. These capabilities will be essential in enabling businesses to rapidly deploy and support the growing array of applications and platforms that end-users have come to rely on. Gartner clearly thinks SDI is a big deal, forecasting that by 2020 these technologies will be critical for three-quarters of Global 2000 enterprises that have implemented DevOps methodologies.

However, we’re still very much in the early days of this technology revolution. 451 Group recently reported that just one in five enterprises has deployed SDI to date, but forecasts that 68% will increase spending in this area in 2016. Further forecasts from Markets and Markets estimate that the global market for these technologies will be worth $77.18 billion by 2020. All of this stacks up to a pretty clear indication that SDI is on a strong upward curve and is set to become much more widespread in the next few years.

Software-defined challenges

Most IT departments seem to be sold on the benefits that SDI can offer, whether it’s the ability to introduce more automation, reduce manual provisioning or make better use of their data centre infrastructure.

However, as with any new IT revolution, there are hurdles to overcome and pitfalls to avoid on the road to adoption. Since SDI technology is still relatively immature, the majority of businesses have yet to identify how they will account for the increased complexity it introduces to data centre management.

As well as the complications of running virtualised or cloud environments, you have to account for the additional layer of complexity introduced by the SDI controller. With cloud, you were handing over the reins to a technology partner or service provider and putting your trust in them. With SDI, you’re putting your trust in a piece of technology that’s calling all the shots, so it’s pretty important that you can keep a close eye on the behaviour of that SDI controller.
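As an illustration of what keeping an eye on the controller might involve, here is a minimal sketch that consumes a stream of controller actions and keeps an auditable record of them. The event shapes and the `controller_events` source are hypothetical; a real deployment would read from the controller’s own event bus or audit API, which varies by vendor.

```python
import json
import time
from collections import deque

# Hypothetical stream of SDI controller actions; in practice these would
# arrive from the controller's event bus or audit API (vendor-specific).
controller_events = [
    {"action": "provision_vm", "target": "web-07", "ts": time.time()},
    {"action": "migrate_vm", "target": "db-02", "ts": time.time() + 5},
    {"action": "decommission_vm", "target": "web-03", "ts": time.time() + 9},
]

audit_log = deque(maxlen=10_000)  # bounded in-memory audit trail

def record(event):
    """Keep an auditable record of each controller decision, so later
    investigations can ask: what did the controller change, and when?"""
    audit_log.append(event)
    print(json.dumps(event))

for event in controller_events:
    record(event)
```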

The agility and flexibility created by SDI are, of course, essential advantages driving business adoption, but they come at a price. The speed with which new services can be launched, and the constantly changing nature of dynamic, self-optimising data centre environments, make it extremely difficult for IT teams to keep track of the impact that infrastructure has on the performance of IT services.

As such, IT teams could suddenly realise they don’t have sufficient visibility into the data centre to understand what’s causing problems, making it much harder to guarantee high-quality application performance and optimise the experience for end-users. Needless to say, that’s a big problem, and one that businesses have to get a handle on if they are to reap the benefits of SDI.

Navigating the software-defined labyrinth

The first thing to realise is that traditional approaches to data centre management won’t work in the software-defined world. Monitoring infrastructure components alone won’t be enough to identify the impact that underlying systems health is having on IT service quality. In the digital business world, every user and customer is essential, so IT departments must be able to monitor, in real time, the transaction path each of them follows across the data centre. Without this visibility, business teams will be blind to the user experience of their applications until customers start to complain about problems, or abandon an ecommerce site to shop elsewhere. By that point, the damage has already been done.
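To make that idea concrete, here is a minimal sketch of transaction-path tracking, assuming a simple service chain: each request is tagged with a trace ID where it enters, and every component it touches records itself against that ID. The service names and data structures here are illustrative assumptions, not any specific product’s model.

```python
import time
import uuid

# In-memory record of hops per transaction; a real monitoring system would
# stream these records to a central collector instead.
hops = {}

def start_transaction():
    """Assign a trace ID where the request enters (e.g. the load balancer)."""
    trace_id = uuid.uuid4().hex
    hops[trace_id] = []
    return trace_id

def record_hop(trace_id, service):
    """Each component the request passes through records itself, so the
    full path can be reassembled in real time."""
    hops[trace_id].append((time.time(), service))

# Simulated journey across a hypothetical service chain.
tid = start_transaction()
for service in ("edge-lb", "web-frontend", "auth-service", "orders-db"):
    record_hop(tid, service)

print(" -> ".join(service for _, service in hops[tid]))
```

Because the path is rebuilt from what each hop reports, rather than from a static map of the infrastructure, it stays accurate even as the SDI controller moves workloads around.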

In dynamic data centre environments, this transaction path is constantly changing, so the only effective way to stay in control is with monitoring systems designed to automatically discover and track the end-user’s journey. It’s a bit like using a giant magnet to instantly draw the needle out of the haystack, regardless of how often it changes position. These systems must also be able to correlate real-time user experience indicators with the changes taking place in the SDI environment, so that IT teams can pinpoint the root cause of any service degradation and speed up the resolution process.

For example, if IT teams can see that users started to experience unacceptable lag when the SDI controller decommissioned a virtualised server, they can take preventative action to stop it happening again. These capabilities will give businesses the ability to deal with problems before the user has even noticed, rather than getting lost in the labyrinth of an ever-morphing software-defined data centre.
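A minimal sketch of that correlation step, under illustrative assumptions: flag response-time degradations and look back over a short window for SDI controller changes that immediately preceded them. The threshold, window and event shapes are invented for the example, not a real monitoring product’s logic.

```python
# User-experience samples: (timestamp in seconds, response time in seconds).
response_samples = [(0, 0.2), (10, 0.2), (20, 1.8), (30, 2.1)]

# SDI controller change events: (timestamp in seconds, description).
changes = [(5, "rebalance web tier"), (18, "decommission vm web-03")]

THRESHOLD = 1.0  # response time treated as unacceptable lag
WINDOW = 10      # how far back to look for a suspect change

for ts, rt in response_samples:
    if rt <= THRESHOLD:
        continue
    # Degradation detected: list the changes that landed just before it.
    suspects = [desc for cts, desc in changes if ts - WINDOW <= cts <= ts]
    print(f"t={ts}s: response {rt}s exceeds {THRESHOLD}s; "
          f"recent changes: {suspects or 'none'}")
```

Run against this sample data, the degradation at t=20s is linked to the server decommissioned at t=18s, which is exactly the kind of answer an IT team needs before users start complaining.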

A software-defined future

We’re still only in the early stages of the hype, but as we’ve seen with cloud and server virtualisation before it, the rollout of SDI technology will rocket once early adopters start to prove that it works. The key to ensuring the benefits are realised quickly and fully is being prepared to address the challenges and complexity that emerge as IT environments undergo the transition. There doesn’t have to be a trade-off between efficiency gains and control. If businesses equip their teams to maintain complete visibility into the impact that data centre infrastructure has on the user experience, they stand to reap the streamlined operations and improved business functions that SDI promises far sooner.

Michael Allen is the vice president of Solutions at Dynatrace.