Overprovisioning has long been considered a necessary element of data center architecture to ensure performance, availability and capacity. But with rack space growing scarce and mounting pressure on data centers to reduce power consumption, overprovisioning can no longer be considered a best practice.

Performance is mission-critical for many business applications, and downtime costs businesses real money. To guard against performance and availability problems, organizations have overprovisioned data center resources such as servers and storage.

The growth of virtualization has exacerbated the problem of overprovisioning, particularly for storage. Server virtualization addressed server sprawl and made the compute side of the data center more efficient, but in the process it created storage sprawl and made storage far less efficient. Storage built for the physical-server era struggles to deliver reliable application performance, and troubleshooting performance issues or pinpointing bottlenecks grows increasingly difficult.

As a result, many organizations deploy extra storage by estimating worst-case performance requirements and then sizing systems by the number of disk drives needed to deliver that performance. The problem is that most storage systems deployed to support virtualized environments run out of performance headroom when only 20-30% of their capacity is in use. That’s a ton of wasted space and a lot of overspending on storage capacity that will never be used.
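To make that arithmetic concrete, here is a minimal sketch of performance-based sizing; every figure in it (workload IOPS, per-drive IOPS, per-drive capacity) is an assumption chosen for illustration, not a measurement:

```python
# Illustrative sizing sketch; all workload and per-drive figures are assumptions.
WORKLOAD_IOPS = 30_000       # assumed worst-case random IOPS across the VMs
CAPACITY_NEEDED_TB = 50      # assumed capacity the data actually requires

HDD_IOPS = 180               # assumed random IOPS for one 10K RPM disk
HDD_CAPACITY_TB = 1.2        # assumed usable capacity per drive, in TB

# Sizing by performance: drives needed just to hit the IOPS target.
drives_for_perf = -(-WORKLOAD_IOPS // HDD_IOPS)        # ceiling division -> 167
provisioned_tb = drives_for_perf * HDD_CAPACITY_TB     # ~200 TB provisioned

utilization = CAPACITY_NEEDED_TB / provisioned_tb
print(f"Drives needed for performance: {drives_for_perf}")
print(f"Provisioned: {provisioned_tb:.0f} TB for {CAPACITY_NEEDED_TB} TB of data")
print(f"Capacity utilization: {utilization:.0%}")      # ~25%, inside the 20-30% range
```

With these assumed numbers, performance dictates 167 drives and roughly 200 TB of capacity for only 50 TB of data, leaving about three-quarters of the purchased capacity stranded.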

In many organizations, the tremendous cost of storage overprovisioning has deterred IT from virtualizing more of the data center and reaping the full benefits of server virtualization.

While overprovisioning might be perceived as necessary to guarantee performance, the true cost is far more than extra disk drives. Excess capacity consumes rack space, power, cooling and other data center support infrastructure. At some point, organizations risk running out of rack space or hitting power limits. The number of data centers built around the world is expected to double by 2016, yet even that increase will not keep pace with the predicted growth in the capacity they must manage. Many data centers are already running into capacity constraints, and while organizations look to acquire or build new facilities, many have not yet optimized the efficiency of their existing infrastructure.

Alongside data center growth comes an inevitable surge in demand for power. While data centers and servers are becoming more energy efficient, they will always be massive power consumers, with unavoidable heat and CO2 emissions affecting the environment. And because space for data centers is scarce, facilities must be packed ever more densely with IT equipment, so power demand per rack is growing very fast.

To quantify this growth: according to the DatacenterDynamics Industry Census for 2012, data centers increased their power usage by 63% in one year, and in the UK alone the data center market has a power demand of 2.85 GW, estimated to rise to 3.15 GW by 2016. The green agenda is gaining importance, driven by the need to cut costs for cooling, space and power, along with the threat of regulation and taxes. Overprovisioning is plainly counterproductive to the goal of becoming more energy efficient.

Minimizing power usage results in cooler-running data centers, a smaller carbon footprint, longer lives for capital equipment and lower monthly power bills: all environmentally and fiscally friendly outcomes. A data center’s power consumption can be optimized by driving up IT infrastructure utilization and replacing the grossly overprovisioned parts of the infrastructure with more efficient alternatives.

How to achieve increased utilization?
Let’s start with the part of the infrastructure that poses the number one performance and complexity problem in today’s data centers – storage. The increasingly virtualized data center needs a purpose-built storage solution that can support a large number of VMs without wasting capacity. Flash-based storage with intelligent provisioning dramatically increases virtual machine storage utilization by improving VM density and reducing the number of physical devices needed to meet performance requirements. Flash is a much better fit for virtualized workloads because it handles their random I/O patterns far more efficiently than legacy disk-based storage ever can.
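As a rough sketch of why flash changes the sizing math, the example below reuses the assumed workload from the earlier sketch and compares how many devices each media type needs; the per-device figures are again illustrative assumptions, not vendor specifications:

```python
# Same assumed workload as before: 30,000 random IOPS, 50 TB of data.
WORKLOAD_IOPS = 30_000
CAPACITY_NEEDED_TB = 50

# Assumed per-device figures, for illustration only.
devices = {
    "10K HDD":  {"iops": 180,    "capacity_tb": 1.2},
    "SATA SSD": {"iops": 50_000, "capacity_tb": 1.9},
}

for name, d in devices.items():
    by_perf = -(-WORKLOAD_IOPS // d["iops"])             # devices to meet IOPS
    by_cap = int(-(-CAPACITY_NEEDED_TB // d["capacity_tb"]))  # devices to meet capacity
    needed = max(by_perf, by_cap)                        # the binding constraint wins
    print(f"{name}: {needed} devices (perf needs {by_perf}, capacity needs {by_cap})")
```

Under these assumptions the HDD system is performance-bound at 167 drives with most of its capacity stranded, while the flash system is capacity-bound at 27 devices, so every device purchased is actually storing data.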

But flash alone is not enough, because inefficient data management can still waste storage, even on flash-based systems. What is needed alongside flash is “VM-aware” functionality: data management designed for virtualized environments that allocates performance to each VM and manages data on a per-VM basis. “VM-aware” flash-based storage is a key way to combat overprovisioning: it decreases energy consumption and delivers the required storage performance in a small footprint, saving space, power and cooling in the data center and allowing the business to scale efficiently.
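To illustrate what per-VM performance management could look like, here is a hypothetical sketch; the class names and the min/max IOPS policy model are invented for illustration and do not describe any particular product’s API:

```python
# Hypothetical sketch of per-VM performance policies; not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class VMPolicy:
    name: str
    min_iops: int   # guaranteed floor, so noisy neighbors can't starve this VM
    max_iops: int   # ceiling, so this VM can't monopolize the array

class VMAwareArray:
    def __init__(self, total_iops: int):
        self.total_iops = total_iops
        self.policies: dict[str, VMPolicy] = {}

    def register(self, policy: VMPolicy) -> None:
        # Refuse to promise more guaranteed IOPS than the array can deliver.
        reserved = sum(p.min_iops for p in self.policies.values())
        if reserved + policy.min_iops > self.total_iops:
            raise ValueError("guaranteed IOPS would exceed array capability")
        self.policies[policy.name] = policy

# Usage: performance is reserved per VM instead of overprovisioned array-wide.
array = VMAwareArray(total_iops=100_000)
array.register(VMPolicy("sql-prod", min_iops=20_000, max_iops=40_000))
array.register(VMPolicy("web-frontend", min_iops=5_000, max_iops=15_000))
```

The point of the sketch is the accounting model: because performance is reserved and capped per VM, the array can be run close to its real capability instead of being sized for an aggregate worst case.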