The technology refresh cycle was an easy standard to follow for some years in the data center. You could order a shipment of servers to replace the old ones every three years or so, and keep an eye on when you’d need to upgrade to the newest version of the operating system. The budget was usually there, and so was the staff to do the work. But that cycle was very slow, and required big cash outlays.

Today, that cycle seems slower still against the speed of business demands. Tight budgets rarely allow for that kind of big spending, and refreshes no longer happen every few years. There are many more infrastructure components than in the past, and each of these independent components comes with its own refresh cycle.

When combined, all of these refresh cycles merge into a near-continuous cycle of defining requirements, researching solutions, vetting vendors, acquiring solutions, deploying and configuring technology, and training administrators on the new implementation. In fact, 22% of respondents to 451 Research’s Voices of the Enterprise: Servers & Converged Infrastructure report cite refresh projects for servers as a continual process, while over 15% say the same for storage infrastructure, and another 22% for network infrastructure.

The larger market shows this playing out, too. The top server hardware and storage vendors are all posting revenue declines, according to IDC, as enterprise IT shops move away from the old model of buying and then re-buying servers and storage on prescribed schedules.

Tech refresh is a state of mind

The old refresh cycle worked well before the virtualization era, but it’s ill-suited to the new reality. Virtualization changed the way IT teams look at servers: suddenly, hardware plays the supporting role, while the hypervisor software is the real star of the show. More than a decade on, virtualization is widespread in organizations of all sizes and budgets.

Technology refreshes mean something different than they used to. Today’s refresh cycles have to work with today’s IT budgets, which are flat or down for many IT teams.

They also need to happen seamlessly, without downtime. And while virtualization has made data migration less daunting, businesses still have to worry about a complex stack of storage, networking, and servers that many companies don’t have the expertise, staff, or funds to maintain.

Tech refreshes and new tools are an essential part of IT. Now, though, tech refreshes include a wide swath of devices and hardware, and it can be tricky to figure out which new tools are truly advantageous to the business and which are just distracting. As refreshes spread to multiple fronts, IT teams have to approach the idea of a refresh differently. The next round of tech refreshes should allow for gradual, continuous upgrades to fit into an always-on, virtualized world.

IT administrators should look at hyperconverged infrastructure when faced with their next tech refresh cycle. According to the same 451 Research study, IT refresh is the leading driver for deployment of converged infrastructure. Hyperconvergence’s scalability means it’s not a rip-and-replace proposition. IT teams can instead start small and build at the pace that’s right for them.

Hyperconvergence also offers, for the first time, a way to shop for multiple data center components at once, rather than choosing a server, then a backup solution, then a storage solution, and so on. And the components of hyperconverged infrastructure represent the latest technology innovations in one place. Hyperconverged infrastructure consolidates all IT infrastructure components below the hypervisor into one streamlined solution, and its building-block approach enables the infrastructure to scale as the business does.

Hyperconvergence is part of a new way of thinking about infrastructure lifecycles and refreshes. It’s a product of the post-virtualization era, architected for virtualization rather than retrofitted for virtualized workloads. This approach makes all the difference in moving toward a data center that’s responsive to the business, high-performing, and always current.