When HP started down the road to becoming the world’s volume leader in multi-processor blade server production in 2001, it did so with the explicitly stated goal of improving utilization and cooling at the same time. Increased server density, reduced floor space and (hopefully) optimized ventilation paved the way, in just a few years, for single racks to attain the processing power once attributed to supercomputers. BladeSystem architecture, said HP a decade ago, “is a flexible infrastructure that makes the computing, network and storage resources easy to install and arrange. It creates a general-purpose infrastructure that accommodates your changing business needs.”
Blades made it feasible for mass-produced, PC-style processors to pervade the data center in huge quantities, driving down computing costs per megaflop. The new architecture gave AMD its comeback chance. With AMD64 architecture in its Opteron processors, AMD held Intel at bay for years, until Intel could strike back with its 45 nm process.
That, as they say, was then. Today’s processors are pushing the envelope of the laws of physics once again, and the envelope is pushing back. With the high rack densities required by today’s cloud architectures, servers generate tremendous heat.
Now that it costs more to cool a server than to power it in the first place, and now that cloud architectures are demanding more from servers than ever before, HP has begun looking into whether blade architectures have run their course.
“What we’ve done is, we’ve actually made the rack the enclosure,” announces HP’s John Gromala in an interview with DatacenterDynamics FOCUS.
He’s referring to a high-performance computing architecture called Apollo. If that name sounds familiar (beyond the obvious reference to the space program, back when it accomplished great things), it’s because HP has had that brand in its back pocket for decades. During the 1980s, when workstations represented the apex of computing development, two brands dominated the space: Sun and Apollo. After HP acquired Apollo Computer in 1989, Sun wiped the floor with the rest of the workstation market, and the Apollo brand faded into obscurity in a few short years.
But Apollo had already changed the world by introducing custom-built chassis with the fastest microprocessors to the realm of high-performance computing. In a different way, the new Apollo may do the same thing today.
Gromala tells FOCUS that Apollo will optimize the power and cooling components over the entire rack, moving processing power and storage to separate modules. “We’re optimizing at that larger scale with a single power system for a rack, whether it’s one power supply or two in a redundant fashion,” he says. “Same thing with networking and manageability as well.”
Reviving a brand
Each of the first two server models to carry the revived HP Apollo brand will try a unique approach to cooling. The smaller Apollo 6000 is designed for faster integration into existing air-cooled datacenters. “It’s a new approach, in that we designed it at rack scale. In other words,” Gromala explains, “we didn’t design a single server that we would install in a rack; we actually disaggregated the pieces of the server. The computing pieces are still inside a common chassis that gets plugged in, but we have shared power, shared management, and shared networking inside the rack, at full rack scale.”
The benefits, he claims, come from improved efficiency, including the consolidation of power supplies into single connections. Data center managers can connect exactly the amount of power they need for the complete rack, and can configure that power for basic connectivity, or for N+1 or N+N redundancy, depending on workload requirements.
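To make the redundancy options concrete, here is a minimal sketch of how the count of shared supplies scales under each mode; the function name, rack load and supply rating are illustrative assumptions, not HP tooling or published figures.

```python
# Minimal sketch (illustrative only, not HP tooling): how the number of
# shared supplies in a rack-scale power system changes with redundancy mode.
import math

def psus_required(load_watts: float, psu_watts: float, mode: str = "basic") -> int:
    """Return how many shared power supplies the rack needs."""
    n = math.ceil(load_watts / psu_watts)   # supplies needed just to carry the load
    if mode == "basic":
        return n                            # no redundancy
    if mode == "n+1":
        return n + 1                        # one spare covers any single failure
    if mode == "n+n":
        return 2 * n                        # a full mirror set of supplies
    raise ValueError(f"unknown redundancy mode: {mode}")

# Hypothetical example: a 14 kW rack fed by 2,650 W shared supplies.
for mode in ("basic", "n+1", "n+n"):
    print(mode, psus_required(14_000, 2_650, mode))   # basic: 6, n+1: 7, n+n: 12
```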
If the Apollo 6000 design can be likened to a Saturn IB rocket, the Apollo 8000 is the Saturn V. The 8000 emerged from Peregrine, a joint project between HP and the US Department of Energy’s National Renewable Energy Laboratory (the name being yet another great trademark from HP’s back pocket). The impetus for the Peregrine experiment, as Gromala describes it, was the realization “that a glass of water has more cooling capacity than a room full of air.”
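Taken per unit volume, that comparison is easy to check with textbook values; the figures below are generic physics, not anything HP has published.

```python
# Back-of-envelope check, per unit volume, using textbook room-temperature
# values (generic physics, not HP figures).
water = 1000 * 4_180      # J/(m^3*K): density (kg/m^3) x specific heat (J/(kg*K))
air   = 1.2 * 1_005       # J/(m^3*K)

print(f"water stores ~{water / air:,.0f}x more heat per unit volume than air")
# -> roughly 3,500x, which is the basic case for bringing liquid to the rack
```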
For decades, water was spurned as an unreliable, even unsafe, way to cool data centers. It has been tried, but because of the notoriously testy relationship between water and electricity, it was largely abandoned. It’s nice that HP wants to return to its heritage, but why dig this one up?
“The difference here is, HP is implementing it in a way that is safer and easier to deploy than water cooling has ever been,” says Gromala. He describes a technology called dry-disconnect, which replaces the traditional heat sinks with heat pipes: thin, two-ply copper tubes that seal about 1 ml of alcohol under vacuum. These pipes move heat toward the cool end of the system far more efficiently than a solid heat sink, the alcohol evaporating at the hot end and condensing, releasing its heat, at the cool end. That cool end, in this case, is a thermal bus bar on the inside frame, which connects to a water wall, so the water cooling takes place at the center of the rack, where it’s most effective.
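As a rough sense of scale, assuming textbook properties for ethanol and a 100 W processor load (neither figure comes from HP), very little working fluid is needed to keep up with a chip’s heat output:

```python
# Rough scale check with textbook ethanol figures and an assumed 100 W
# processor load (neither number comes from HP).
charge_ml   = 1.0      # working-fluid charge quoted in the article
density     = 0.79     # g/ml, ethanol
latent_heat = 900      # J/g, approx. heat of vaporization of ethanol

joules_per_cycle = charge_ml * density * latent_heat     # ~711 J per full evaporation
cycles_per_sec   = 100 / joules_per_cycle                 # to move 100 W continuously

print(f"{joules_per_cycle:.0f} J absorbed per evaporation of the charge")
print(f"{cycles_per_sec:.2f} evaporate/condense cycles per second, "
      f"i.e. one roughly every {1 / cycles_per_sec:.0f} seconds")
```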
“This allows us to do a 100% water-cooled system,” he explains, “but in a safe and efficient way where we’re never mixing water in with the actual electronics inside the rack.”
Apollo is not in a position to replace HP’s ProLiant or BladeSystem architecture any time soon. But Gromala does describe Apollo as a response to what the company perceives as an impending revolution.
“We realized we’re moving from an era where people are using general purpose servers, to one where they’re buying the right compute for the right workload with the right economics,” he tells FOCUS. “And HP has the ability to deliver that every time, because we have a broad portfolio, and we focus very much on each of the workloads we’re delivering.”