Before you can start to improve any aspect of your storage resource, you need to understand – accurately and fully – what assets you have. Conducting an effective and comprehensive asset discovery exercise is often complicated by several factors.
Industry best practice recommends that IT departments deploy a multi-vendor strategy to reap the full range of advantages, including value for money, service availability and protection. Enterprise organisations typically have a plethora of tools and diverse support skills to cater for the differing products, but the scale of their challenge is correspondingly larger and more complex to manage. SMB customers typically have less access to the right tools and often build a strong relationship with just one or two storage vendors; however, this can reduce competition across the estate.
While all storage devices come with built-in monitoring tools, each vendor's tool produces different information, in different formats, using different terminology. As soon as you introduce a manual process to combine the disparate outputs into one uniform report, you increase the risk of error, resulting in inaccurate, and often incomplete, information.
Add to this staff changes, the scale and speed of data growth and dynamic changes in the environment, and it’s easy to see why many IT departments fall at the first hurdle of storage efficiency.
As well as mitigating risk and meeting regulatory requirements, transparency of an organisation's storage resource delivers many benefits around supply and demand, waste reduction and cost optimisation. Accurate and detailed data enables system managers to fully exploit all cost-saving technologies and helps to stretch ever-tighter budgets.
Capacity optimization
More and more businesses are now citing their business intelligence – obtained from interrogating and analysing increasing amounts of structured and unstructured data – as their driver for growth and competitive differentiation. Often businesses know there’s a value in collecting this data but don’t necessarily know how they’re going to use it, so they get into a ‘collect everything and store everything’ cycle, with much of the data languishing unused.
Ballooning data sets usually mean bigger storage environments and the inevitable complexity that often comes with them. Complexity usually means more time and effort to manage the storage effectively, adding to the cost and reducing the ROI.
This financial burden needs to be offset against the business value of the data, so there’s a real incentive to rein in these costs. Making effective use of technologies around data compression and deduplication will help, and these savings can be further augmented with good data backup and archiving strategies.
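As a rough illustration of how these technologies offset the cost of growing data sets, the arithmetic below shows how deduplication and compression ratios translate into effective capacity. The ratios are hypothetical examples for the sketch, not vendor figures, and real-world results vary heavily with the data.

```python
# Rough illustration of how deduplication and compression ratios
# translate into effective capacity. The ratios below are
# hypothetical examples, not vendor figures.

def effective_capacity_tb(raw_tb, dedup_ratio, compression_ratio):
    """Logical data that fits on raw_tb of physical storage."""
    return raw_tb * dedup_ratio * compression_ratio

raw = 100.0  # 100 TB of physical capacity
logical = effective_capacity_tb(raw, dedup_ratio=3.0, compression_ratio=1.5)
print(f"{raw:.0f} TB physical holds ~{logical:.0f} TB of logical data")
```

At an assumed 3:1 deduplication ratio and 1.5:1 compression ratio, 100 TB of physical capacity holds roughly 450 TB of logical data, which is why these features can materially change purchasing requirements.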
Ensuring that the right data is placed on the right storage at the right time relies on accurate, real-time analytics to help you make smarter storage decisions around allocation and governance, provisioning, backup, replication and archiving.
Underutilized assets
Storage underutilization is prevalent in all but the most efficient environments. Failing to fully exploit and properly manage multiple terabytes of storage space is costing businesses thousands of pounds in unused capacity, wasted electricity, technical resources and floor space.
While some underutilization is essential to allow headroom for growth, it pays to keep this at the optimum level, balancing that safety margin against wastage.
Technologies such as thin provisioning, storage resource management, capacity reclamation and storage virtualization are designed to help improve utilisation – but only if they're properly and fully deployed. A lack of transparency into the storage environment is often one of the main obstacles to realising the full benefits promised by these tools.
Sometimes the efficacy of these technologies themselves is below par. We often find that SRM tools, for example, fail to discover large pockets of reclaimable storage. One recent client had undertaken a large server decommissioning program spread across multiple internal and external support teams. While the decommissioning itself was a success, the associated storage resources were overlooked, significantly reducing the ROI of the engagement. Standard SRM tools typically do not recognise this situation, so the storage remains mapped to a non-existent host rather than being returned to the pool for future LUN assignment.
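The check that catches this kind of orphaned storage can be sketched as a simple cross-reference of LUN-to-host mappings against an inventory of active hosts. The data structures and names below are illustrative only, not the API of any particular SRM product.

```python
# Sketch: flag LUNs still mapped to hosts that no longer exist,
# i.e. capacity a decommissioning program left behind. The host
# and LUN names are hypothetical.

def find_orphaned_luns(lun_map, active_hosts):
    """Return the subset of LUNs mapped to non-existent hosts."""
    return {lun: host for lun, host in lun_map.items()
            if host not in active_hosts}

lun_map = {
    "lun-001": "web-01",
    "lun-002": "db-02",
    "lun-003": "app-99",   # host decommissioned last quarter
}
active_hosts = {"web-01", "db-02"}

orphans = find_orphaned_luns(lun_map, active_hosts)
print(orphans)  # candidates for reclamation and future LUN assignment
```

In practice the two inputs come from different systems (the array's masking tables and the CMDB or hypervisor inventory), which is exactly why the mismatch goes unnoticed without a deliberate reconciliation step.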
It's widely accepted that the cost per gigabyte of storage is falling – the real costs come from the skilled staff needed to manage it, rising energy costs to power and cool the kit, and significant real estate costs to house it.
Understanding your storage utilisation with a comprehensive assessment will reveal where the greatest efficiency opportunities lie and where to focus your attention. Even a small percentage increase in utilisation rates could be saving you a big chunk of your IT budget.
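A back-of-envelope calculation shows what even a modest utilisation improvement is worth. The all-in cost per terabyte per year below is a hypothetical figure standing in for power, cooling, floor space and staff time, not a quoted price.

```python
# Back-of-envelope sketch: the annual saving from raising utilisation,
# i.e. holding the same used data in a smaller physical footprint.
# cost_per_tb_year is a hypothetical all-in figure, not a quote.

def annual_saving(total_tb, util_before, util_after, cost_per_tb_year):
    """Capacity released by the utilisation improvement, priced per year."""
    used_tb = total_tb * util_before          # data actually stored
    needed_after = used_tb / util_after       # footprint at the new rate
    return (total_tb - needed_after) * cost_per_tb_year

saving = annual_saving(total_tb=500, util_before=0.45, util_after=0.55,
                       cost_per_tb_year=200.0)
print(f"~£{saving:,.0f} per year")
```

On these illustrative numbers, lifting utilisation from 45% to 55% across a 500 TB estate frees roughly 90 TB, around £18,000 a year at £200 per terabyte.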
Continual assessment
Accurate and comprehensive assessment and monitoring of the storage environment are key to keeping efficiency levels high and costs low. Continually assessing the environment brings compounded benefits that really drive forward improvements to IT service delivery.
By running repeat assessments, you can stop problems spreading and avoid creating bigger issues that, left unchecked, would start to unpick all the work that's been done to achieve operational efficiency. Regular reporting allows system managers to spot weaknesses in internal procedures that may be contributing to storage inefficiencies.
With seemingly no end to the amount of data most organisations are collecting and storing, simply understanding what you've got today no longer suffices. To be truly competitive, businesses need to look ahead and plan today for tomorrow's demands.
Plan for growth
Knowing how your storage environment is being deployed is fundamental to maintaining performance and efficiency today. It’s also a prerequisite for confident buying decisions that will ensure storage provision is aligned to the business’s IT needs of the future.
Adopting effective procedures for regular reporting and assessment enables you to build scalable infrastructures. By building in robust management policies, your system admin team can spend less time fire-fighting and more time working on valuable planning activities.
Planning for growth leads to better buying decisions, helping you make the most of limited budgets by ensuring you get the best value-for-money from vendors. Generating reliable trend reports avoids the last-minute storage purchases that usually cost significantly more than planned purchases.
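The kind of trend report described above can be sketched as a simple linear extrapolation of recent capacity readings to a purchase-trigger threshold. Real estates warrant more robust forecasting than a straight line, and the readings here are illustrative.

```python
# Sketch: extrapolate average monthly growth from recent capacity
# readings to estimate remaining headroom, so purchases can be
# planned rather than made at the last minute. Figures are illustrative.

def months_until_full(monthly_used_tb, capacity_tb):
    """Average monthly growth extended to the capacity ceiling."""
    growth = (monthly_used_tb[-1] - monthly_used_tb[0]) / (len(monthly_used_tb) - 1)
    headroom = capacity_tb - monthly_used_tb[-1]
    return headroom / growth

readings = [300, 312, 321, 335, 348, 360]   # TB used over the last six months
print(f"~{months_until_full(readings, 500):.0f} months of headroom left")
```

Even a forecast this crude turns "buy when we run out" into a procurement lead time, which is where the negotiating leverage with vendors comes from.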
Predicting growth requirements must also take into account end-of-life and end-of-service-life timeframes, and allow for planned upgrades to modern, more efficient devices.
The opinions expressed in the article above are those of the author and do not reflect those of Datacenter Dynamics, its employees or affiliates.