When it comes to mission-critical applications, resiliency is the foundation upon which data centers are designed and built. From modernization and retrofit projects to energy efficiency upgrades and new facilities, a multitude of factors drive decision making, and many of them come down to cost.
What is increasingly concerning, however, is that design, especially from a procurement or tender perspective, is becoming an afterthought. Rather than being based on effectiveness, or on whether the specified infrastructure will be fit for purpose, decisions now hinge on cost, with contracts awarded on budget rather than expertise.
Don’t get me wrong: this is not a gripe about whether a project is awarded to one company over another. This is about the bigger picture, in which data center consultants who should have been involved from inception are instead called in to fix a poorly specified environment, one that should have been streamlined, optimized and designed around the business requirements from the outset.
While one project manager may see that a specific power, cooling or IT component ticks the box from a capital expenditure (CapEx) perspective, those of us chosen to install the system will immediately see whether or not it will meet the user demands, or leave them open to failures.
In essence, when focusing solely on cost, key requirements such as resilience, integration, and continuity take a back seat, while a short-term focus on CapEx comes to the fore, meaning the end-user faces a lottery of sorts. It becomes a guessing game: will the data center come in under budget using components from several different manufacturers, or will it be fit for purpose for the duration of its lifecycle?
The CapEx-created monster
What’s clear is that data centers have indeed been instrumental throughout the pandemic and will continue to be as digital transformation gathers pace. Yet what’s interesting is that outages are occurring with disturbing frequency and are becoming both more damaging and expensive. According to the Uptime Institute’s 10th annual data center survey, one third of participants admitted to experiencing a major outage in the last 12 months, and a further one in six said it had cost more than $1m.
The fact is that data center short-sightedness can’t continue and something has to change, but does it begin at the design and installation phase, or do we need to disrupt and improve the traditional procurement process? If based solely on cost, the winning bid for a project may look very attractive on paper, but in reality there is a very real danger that the customer will be left with the data center equivalent of Frankenstein’s monster.
Here, one might encounter power distribution systems, uninterruptible power supplies (UPS), racks, physical security and environmental monitoring from different vendors. Some may work, others may not, and many will have no way to control or manage them from a centralized platform.
This situation is further exacerbated when project managers have to work with multiple vendors during the procurement cycle, each with their own opinion on what success looks like. Some may not even have the required expertise, recommending power and cooling systems that aren’t optimized for the room layout, leaving users with a fragmented approach to IT and data center design that works inefficiently and requires significant amounts of costly ongoing maintenance.
Long-term, this also creates greater challenges for both the business and the IT Manager. So, how can we solve this problem?
Integrated designs work better
Experience tells me that many end-users know what they want to achieve from their mission-critical systems, but need some guidance on achieving reliability and long-term business outcomes, and help setting expectations around cost or return on investment (ROI).
Overall, our goal is to meet the customer’s demands while reducing complexity from design through installation and operation. A streamlined, fully integrated approach is essential to achieving this objective. Companies that take a holistic approach from the start and ensure their data centers are fit for purpose will enjoy significantly better business outcomes.
But what does this look like? End-user requirements will vary from business to business, but in the case of an enterprise organization a standardized, modular and pre-integrated approach will often allow the infrastructure to be deployed as needed, while enabling resource-constrained budgets to be used more effectively. Furthermore, a pre-engineered and modular data center can reduce design and installation time, in addition to mitigating failures by removing defects and ensuring the components work together seamlessly from the start.
Many modular systems will also support an N+1 configuration for increased redundancy, greater fault tolerance and concurrent maintainability. This also means that designs can be validated or certified to ISO or Uptime standards for increased compliance, security and safety. This, I believe, is critical to reassure users that their hardware choices are fully compatible, all the more so when reliability and uptime are the key requirements.
Enlisting a specialist helps guarantee reliability
A final trap that many companies fall into is failing to enlist specialist help. Project managers may have an IT generalist onboard who can specify the cost to build an on-premises system. A specialist, however, knows how the pieces of the data center jigsaw should be joined together to successfully align with a company’s business goals, its budget and its risk profile.
A specialist can ensure an integrated approach to deployment from the very start. This does not simply mean one vendor supplying all the critical infrastructure components, piecing them together and handing the project over. We are talking about a fully joined-up approach to data center design, specification, installation, monitoring, management and servicing.
It begins with understanding the business needs and specifying a solution that’s truly fit for purpose. If done correctly, it will feature the optimum equipment for the budget, meet or exceed the design criteria, and provide a system that offers data-driven insights via next-generation data center infrastructure management (DCIM) software.
Collaboration and customer communication are also critical to success, and an expert consultancy that understands your business can also look to add value to your five-year IT strategy, ensuring the data center is future-proofed. For example, they can consider your current power and cooling requirements and create a flexible, modular solution that can scale up or down in a cost-effective way, while being operationally efficient and sustainable.
In an environment where the speed and quality of IT transformation has never been so important, it is crucial that a holistic, pre-integrated and end-to-end approach is taken to data center deployment. From proposal and specification through to lifecycle design, installation and operation, both collaboration and open communication have never been more essential to achieve mission-critical reliability.