
The data deluge: How to keep afloat with virtualisation


Businesses now have to store an unprecedented amount of information. 1.8 zettabytes of data was generated in 2011 alone, and the total is expected to reach 35 zettabytes by 2020 - enough to fill approximately 70bn desktop hard drives.

The most pressing concern is where to store all this data. Unfortunately, simply moving the data from a server room to a data centre only addresses capacity: it doesn't help organisations effectively manage the data, or the infrastructure it sits on.

Effectively storing and accessing data has become a problem for every business. Organisations such as financial institutions must keep their data under the watchful eye of strict regulatory bodies with specific demands on how data is stored, managed and shared. Other companies, such as major retailers and firms that rely on CRM (customer relationship management) systems, need to access their data at will, for a variety of business needs.

These new demands mean organisations are concerned not only about where their data is sitting (whether on a local server, in a private data centre or hosted externally), but also that it is resilient and effectively managed. While data storage has certainly advanced from the days of filing cabinets, many of the challenges remain the same. Whether stored on paper, on a physical server or on a virtual server, data can’t just be filed away - businesses must think about the problems this will cause further down the line.

Getting a handle on data

Moving data to a data centre environment may help businesses reduce the costs of on-premise architecture and management, but this data must remain continually available if organisations are to avoid losing access to one of their most treasured resources at the exact time they need it most. Power outages, extreme weather and a host of other unpredictable events can destroy business-critical data in one fell swoop and take months or years to recover from. It is no longer enough to simply move business data from an on-premise server room to an offsite data centre: businesses must ensure that all locations are equally protected and managed.

Virtualisation has been a great help to organisations dealing with explosive data growth, allowing quick provisioning and scaling of technology infrastructure. It also provides an opportunity to accelerate and enhance backup and recovery processes. Yet while virtualisation is well on its way to becoming the IT norm, many businesses still use legacy backup tools that are not suited to their growing virtual infrastructures. Virtualisation can ensure that data centre infrastructure makes full use of server capacity, giving businesses unprecedented flexibility and scalability, but virtual machines do not conform to the same rules as physical servers, and putting an effective data recovery policy in place can be a drawn-out and troublesome process.

Backup tools designed for virtual environments can help businesses recover from an outage and return to operational status as quickly as possible. By working to virtualisation's strengths instead of treating it as physical infrastructure, these tools can recover business data in a matter of hours or even minutes, rather than days. While this alone is a significant improvement for any organisation, virtualisation also makes it easier for data centre managers to replicate their infrastructure and data, essentially creating a parallel data centre environment that can quickly take over if one site fails. As server and data availability become of core importance to more businesses, we will undoubtedly see replication of virtual environments become the standard.
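The replication-and-failover pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the `Site`, `replicate` and `active_site` names, the snapshot dictionaries and the site names are all assumptions made for the example.

```python
class Site:
    """A data centre site holding replicated VM snapshots (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.snapshots = {}   # vm name -> latest snapshot timestamp
        self.healthy = True

def replicate(primary, secondary, vm_names):
    """Copy the latest snapshot of each VM from the primary to the secondary site."""
    for vm in vm_names:
        if vm in primary.snapshots:
            secondary.snapshots[vm] = primary.snapshots[vm]

def active_site(primary, secondary):
    """Serve from the primary while it is healthy; fail over otherwise."""
    return primary if primary.healthy else secondary

# One replication cycle, then a simulated outage at the primary site.
primary = Site("london")
secondary = Site("dublin")
primary.snapshots = {"crm-db": "2015-08-25T14:00", "web-01": "2015-08-25T14:05"}
replicate(primary, secondary, ["crm-db", "web-01"])
primary.healthy = False
print(active_site(primary, secondary).name)  # dublin
```

Real replication products copy changed disk blocks continuously rather than whole snapshots on demand, but the shape is the same: keep the secondary site's copy current, and redirect to it the moment the primary fails.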

Once organisations have ensured their systems are safe and being backed up, the next step is ensuring that they can bring those systems and data back should the worst happen.

Testing the waters

Backup is all well and good, but it doesn't amount to much if systems and data cannot be recovered successfully.

Unfortunately, most organisations don't know whether their systems will come back online exactly as expected until they attempt a recovery. Testing every backup is quite simply too time-consuming, resource-hungry and expensive; instead, organisations accept the risk that a certain percentage of backups will be unrecoverable.

According to a recent study on the virtualisation market conducted by Vanson Bourne, failed recoveries of virtual servers cost businesses an average of around £150,000 a year: a worrying figure for any CIO.

Thankfully, another benefit of virtualisation is that computing resources can be used more effectively. Resources from across the virtual infrastructure that would otherwise remain "idle" can be used to create testing environments, allowing organisations to test backups without interrupting their ongoing work. These resources can be drawn from the same virtual infrastructure that data is backed up to, eliminating the need for further capex. This process delivers instant peace of mind that the organisation's data is safe.
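The verification loop behind this idea is simple: restore each backup into an isolated sandbox, run a health check, and record the result. The sketch below assumes this shape only; `restore_to_sandbox`, `health_check` and the dictionary-based backup format are hypothetical stand-ins, not a real backup product's API.

```python
def restore_to_sandbox(backup):
    """Stand-in for restoring a backup into an isolated test VM."""
    return {"name": backup["vm"], "data": backup["data"]}

def health_check(vm):
    """A trivial check: the restored VM must contain its expected data."""
    return vm["data"] is not None and len(vm["data"]) > 0

def verify_backups(backups):
    """Restore each backup into a sandbox and report which are recoverable."""
    results = {}
    for backup in backups:
        vm = restore_to_sandbox(backup)
        results[backup["vm"]] = health_check(vm)
    return results

backups = [
    {"vm": "crm-db", "data": ["record-1", "record-2"]},
    {"vm": "web-01", "data": []},  # an empty restore should fail the check
]
print(verify_backups(backups))  # {'crm-db': True, 'web-01': False}
```

In practice the sandbox is a fenced-off network on spare virtual infrastructure, and the health checks boot the restored VM and probe its services, but the principle is the same: every backup is proven recoverable before it is ever needed.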

Virtual environments also have advantages when recovering specific items from the sea of available data: a task that becomes increasingly important as that sea swells. This is especially true where only a few individual records are needed from a gargantuan database for a specific purpose, such as eDiscovery.

Conventionally, once data has been backed up, it is not easy to recover individual items without restoring the whole backup. But by virtualising the processes and components associated with recovery, these barriers can be removed. As a result, businesses and data centre managers can ensure they deal only with the items they need to recover, drastically reducing recovery time and cost.
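Item-level recovery boils down to presenting the backup as a browsable, read-only view and pulling out only the records requested. The sketch below is illustrative under that assumption: the plain-dictionary "backup image", `open_backup_read_only` and `recover_items` are invented names, whereas real products mount the backup image directly from storage.

```python
def open_backup_read_only(backup_image):
    """Present the backup as a read-only view without a full restore.

    A copy of the dictionary stands in for mounting the backup image."""
    return dict(backup_image)

def recover_items(backup_image, wanted_ids):
    """Recover only the requested records from the backup."""
    view = open_backup_read_only(backup_image)
    return {rid: view[rid] for rid in wanted_ids if rid in view}

# e.g. an eDiscovery request needs one invoice out of the whole backup.
backup = {"inv-001": "1200", "inv-002": "380", "inv-003": "4950"}
print(recover_items(backup, ["inv-002"]))  # {'inv-002': '380'}
```

The cost saving follows directly: only the wanted records cross the network and touch production storage, instead of the entire backup.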

The big picture

As technology becomes ever more core to businesses and more information needs to be managed, a robust data recovery process is becoming more important if businesses are to remain resilient and retain an intelligent overview of their data. To realise these tangible benefits, however, businesses and their data centre environments need to be able to provision the necessary resources and have the right tools on hand to deliver such resilience well into the future.


Related images

  • Veeam Software's Doug Hazelman

