

The data deluge: How to keep afloat with virtualisation


Businesses now have to store an unprecedented amount of information. An estimated 1.8 zettabytes of data was generated in 2011 alone, and the total is expected to reach 35 zettabytes by 2020 - enough to fill approximately 70 billion desktop hard drives.
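As a rough sanity check on those numbers (the drive size is our assumption, not a figure from the research), a few lines of Python show what capacity per drive the comparison implies:

    # Rough check of the "70 billion desktop hard drives" comparison.
    # The implied 500 GB drive size is illustrative; the article quotes no drive size.
    ZETTABYTE = 10**21                 # bytes, using the decimal definition storage vendors quote
    projected_data = 35 * ZETTABYTE    # 35 zettabytes projected by 2020
    drives = 70 * 10**9                # "approximately 70 billion desktop hard drives"

    bytes_per_drive = projected_data / drives
    print(f"{bytes_per_drive / 10**9:.0f} GB per drive")   # prints: 500 GB per drive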

The most pressing concern is where to store all this data. Unfortunately, simply moving data from a server room to a data centre only addresses capacity: it does not help organisations manage the data effectively, or the infrastructure it sits on.

Effectively storing and accessing data has become a problem for every business. Organisations such as financial institutions must keep their data under the watchful eye of strict regulatory bodies with specific demands on how data is stored, managed and shared. Other companies, such as major retailers and firms that rely on customer relationship management (CRM) systems, need to access their data at will for a variety of business needs.

These new demands mean organisations are concerned not only about where their data is sitting (whether on a local server, in a private data centre or hosted externally), but also about whether it is resilient and effectively managed. While data storage has certainly advanced from the days of filing cabinets, many of the challenges remain the same. Whether stored on paper, on a physical server or on a virtual server, data can't simply be filed away and forgotten - businesses must think about the problems that will surface further down the line.

Getting a handle on data

Moving data to a data centre environment may help businesses reduce the costs of on-premises architecture and management, but this data must remain continuously available if organisations are to avoid losing access to one of their most treasured resources at the exact moment they need it most. Power outages, extreme weather and a host of other unpredictable events can destroy business-critical data in one fell swoop and take months or years to recover from. It is no longer enough to simply move business data from an on-premises server room to an offsite data centre: businesses must ensure that all locations are equally protected and managed.

Virtualisation has been a great help to organisations dealing with explosive data growth, allowing quick provisioning and scaling of technology infrastructure. It also provides an opportunity to accelerate and enhance backup and recovery processes. Yet while virtualisation is well on its way to becoming the IT norm, many businesses still use legacy backup tools that are not suited to their growing virtual infrastructures. Virtualisation can ensure that data centre infrastructure makes full use of server capacity, giving businesses unprecedented flexibility and scalability, but virtual machines do not conform to the same rules as physical servers, and putting an effective data recovery policy in place can be a drawn-out and troublesome process.

Backup tools designed for virtual environments can help businesses recover in the event of an outage and get back to operational status as quickly and effectively as possible. By working to virtualisation's strengths instead of treating it as physical infrastructure, businesses can recover data in a matter of hours or even minutes, rather than days. While this alone is a significant improvement for any organisation, virtualisation also makes it easier for data centre managers to replicate their infrastructure and data: essentially creating a parallel data centre environment that can quickly take over if one site fails. As server and data availability become of core importance to more businesses, we will undoubtedly see replication of virtual environments become the standard.
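To illustrate the principle only - this is a sketch, not a description of any particular vendor's product, and every function name here is a hypothetical placeholder - a replication-and-failover loop between a primary and a standby site might look something like this:

    import time

    # Hypothetical helpers: stand-ins for whatever replication and
    # orchestration tooling an organisation actually uses.
    def replicate_vm(vm_name: str, target_site: str) -> None:
        """Ship the latest changes for one VM to the standby site."""
        print(f"replicating {vm_name} -> {target_site}")

    def site_is_healthy(site: str) -> bool:
        """Return False if the site has failed (power cut, flood, network loss...)."""
        return True

    def promote_replica(vm_name: str, site: str) -> None:
        """Bring the replica online so the standby site takes over."""
        print(f"failing over {vm_name} to {site}")

    PRIMARY, STANDBY = "datacentre-a", "datacentre-b"
    PROTECTED_VMS = ["crm-db", "file-server", "mail"]

    def protection_cycle() -> None:
        """One pass of the replicate-or-failover loop."""
        if site_is_healthy(PRIMARY):
            for vm in PROTECTED_VMS:
                replicate_vm(vm, STANDBY)      # keep the parallel environment current
        else:
            for vm in PROTECTED_VMS:
                promote_replica(vm, STANDBY)   # standby environment takes over

    if __name__ == "__main__":
        for _ in range(3):                     # a real scheduler would run this continuously
            protection_cycle()
            time.sleep(1)                      # illustrative interval only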

Once organisations have ensured their systems are safe and being backed up, the next step is ensuring that they can bring those systems and data back if the worst happens.

Testing the waters

Backup is all well and good, but it doesn't amount to much if systems and data cannot be recovered successfully.

Unfortunately, most organisations don't know whether their systems will come back online exactly as expected until they attempt a recovery. This is because testing every backup is quite simply too time-consuming, resource-hungry and expensive; instead, organisations accept the risk that a certain percentage of backups will be unrecoverable.

According to a recent study of the virtualisation market conducted by Vanson Bourne, failed recoveries of virtual servers cost businesses an average of around £150,000 a year, a worrying figure for any CIO.

Thankfully, another benefit of virtualisation is that computing resources can be used more effectively. Resources from across the virtual infrastructure that would otherwise sit idle can be used to create testing environments, allowing organisations to verify backups without interrupting day-to-day work. These resources can be drawn from the same virtual infrastructure the data is backed up on, eliminating the need for further capital expenditure. This process delivers peace of mind that the organisation's data really is safe.
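A minimal sketch of that verification loop, assuming hypothetical restore, boot and health-check helpers rather than any specific product's API, might look like this:

    # Sketch of automated backup verification using otherwise-idle virtual resources.
    # All helper functions are hypothetical placeholders, not a real product API.

    def restore_to_isolated_lab(backup_id: str) -> str:
        """Restore the backup as a VM on an isolated, fenced-off test network."""
        return f"lab-vm-{backup_id}"

    def boot(vm: str) -> None:
        """Power on the restored VM inside the lab environment."""
        print(f"booting {vm}")

    def responds_to_ping(vm: str) -> bool:
        """Basic health check; real tests might also probe application ports."""
        return True

    def discard(vm: str) -> None:
        """Tear the test VM down and release the idle resources it borrowed."""
        print(f"discarding {vm}")

    def verify_backup(backup_id: str) -> bool:
        vm = restore_to_isolated_lab(backup_id)
        try:
            boot(vm)
            return responds_to_ping(vm)     # recoverable, not just backed up
        finally:
            discard(vm)

    if __name__ == "__main__":
        for backup_id in ["crm-db-2015-05-01", "mail-2015-05-01"]:
            print(backup_id, "OK" if verify_backup(backup_id) else "FAILED")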

Virtual environments also have advantages when it comes to recovering specific items from the sea of available data: a task that becomes increasingly important as that sea swells. This is especially true where only a few individual records are needed from a gargantuan database for a specific purpose, such as in cases of eDiscovery.

Conventionally, once data has been backed up, it is not easy to recover individual items without restoring the whole backup. By virtualising the processes and components associated with recovery, these barriers can be removed. As a result, businesses and data centre managers can deal only with the items they need to recover, drastically reducing recovery time and cost.
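As an illustration (the article does not prescribe a tool), one way to pull a single file out of a backed-up virtual disk without restoring the whole machine is to mount the disk image read-only with the open-source guestmount utility; the image and file paths below are hypothetical:

    # Illustrative item-level recovery: mount a backed-up VM disk image read-only
    # and copy out a single file, instead of restoring the entire machine.
    # Paths are hypothetical examples; guestmount ships with the libguestfs project.
    import shutil
    import subprocess

    BACKUP_IMAGE = "/backups/crm-db-2015-05-01.qcow2"     # hypothetical backup location
    MOUNT_POINT = "/mnt/backup"
    WANTED_FILE = "var/lib/exports/contract-1234.pdf"     # the one item actually needed

    # Mount the image read-only; -i asks guestmount to inspect the image for an OS.
    subprocess.run(["guestmount", "-a", BACKUP_IMAGE, "-i", "--ro", MOUNT_POINT], check=True)
    try:
        shutil.copy(f"{MOUNT_POINT}/{WANTED_FILE}", "/tmp/contract-1234.pdf")
    finally:
        subprocess.run(["guestunmount", MOUNT_POINT], check=True)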

The big picture

As technology becomes ever more central to businesses and more information needs to be managed, we see a robust data recovery process becoming ever more important if businesses are to remain resilient and retain an intelligent overview of their business data. However, in order to realise these tangible benefits, businesses and their data centre environments need to be able to provision the necessary resources and have the right tools on hand to deliver such resilience well into the future.

[Image: Veeam Software's Doug Hazelman]
