HP has announced a purpose-built “Compute” platform, based around new versions of Intel processors and its existing servers. It shows a company experimenting with every conceivable value proposition, seeking out what cloud data center customers will buy.
Two months after producing a bare-bones scale-out architecture, followed almost immediately by a rack-scale custom design, HP has announced it is bundling new versions of its Apollo, Integrity Superdome, and ProLiant servers into a series of special, purpose-built configurations that it calls the HP Compute platform.
Bringing compute closer to data
“In the world of today, we’re trying to bring compute closer to where the data is,” said Vineeth Ram, HP’s vice president of product marketing for servers, in an interview with DatacenterDynamics. In the past, data had to be filtered and “on-ramped” into a relational repository before it could be analyzed. Today, analytics — one of the most important jobs a server-side application can perform — can be applied to data while it is being collected.
For many organizations, moving compute to where the data is means bringing more local storage within reach of the processors. For HP, pulling that off takes a private cloud stack.
The cloud has been a challenge for HP in recent months. In May 2014, HP launched its Helion public cloud platform, with the intention of integrating public cloud capacity directly into its server products, enabling hybrid clouds out-of-the-box. To better integrate cloud management functions, last September it purchased Eucalyptus Systems and appointed Eucalyptus’ chief — the illustrious Marten Mickos — as head of Helion.
Mickos had been an outspoken critic of OpenStack, the cloud infrastructure platform upon which much of the rest of the industry had standardized. His appointment as head of Helion made even ardent HP supporters wonder what the company’s strategy was.
Apparently, it wasn’t only the supporters who were doing the wondering. Five months later, Mickos was reassigned.
HP then proceeded to build new server lines, and new features into existing ones, that leveraged Helion. In March, in response to the demands of major private cloud customers such as Facebook, it rolled out a new and intentionally undifferentiated line of “scale-out” servers called Cloudline. Then, placing a bet on the opposite side of the table, HP rolled out a rack-scale architecture under the Helion brand, which appeared to position Helion at the opposite pole, conceptually, from Facebook and Open Compute.
That made someone at The New York Times think that perhaps HP was changing its tune with Helion, moving the brand away from the public cloud. The paper published an interview with HP senior vice president Bill Hilf that appeared to confirm as much. Then Hilf made public statements saying the Times had gotten it wrong.
A way out and a way up
Now, with the Compute platform, HP is trying a simultaneous but completely different value proposition for hardware that could find its place in private clouds: certain parts of the data center are best suited to “scale-out” systems, and others to “scale-up” ones. The Compute platform focuses on a select group of buildouts that can be matched to specific classes of workloads.
“Scale-out capability is about leveraging the right amount of capacity you want, and then adding as you go,” explained HP’s Ram. “We’re innovating in how we scale out, so we have very flexible configurations for mixing-and-matching processing and storage capability, as well as adding them as you go forward. In a scale-up environment, typically you scale processing, storage, and network independently of each other.”
HP perceives “scale-up” and “scale-out” as two alternate goals for different functions of the data center. It’s presenting the servers in its Compute platform as “enterprise bridges” (with apologies to Star Trek) to these two destinations, with each model taking a different, “purpose-built” route.
For the most distinctive new entrant in the series, the Apollo 4530, HP started out with a 4U space and built into that space (unusually) three server nodes. Why three? Because in a Hadoop cluster, each block of data is kept in three copies by default.
“It’s purpose-built for big data and analytics,” Ram tells us. A single chassis can be equipped with as much as 360 TB of storage, meaning that a single 42U rack can max out at 3.6 petabytes.
“We’re trying to take advantage of the way Hadoop is architected,” states Joseph George, HP’s executive director of Big Data solutions. “What it does is take that data and replicate it three times. We found that if we’re able to do that in one chassis, there are other performance benefits that we can get.”
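The arithmetic behind those figures is straightforward. The back-of-the-envelope sketch below uses only the numbers quoted above plus Hadoop’s default replication factor of three; it is an illustration, not an HP sizing guide.

```python
# Back-of-the-envelope capacity math for a rack of Apollo 4530 chassis,
# using the figures quoted in the article; not an HP sizing guide.

CHASSIS_HEIGHT_U = 4      # each Apollo 4530 occupies 4U
RACK_HEIGHT_U = 42        # standard rack
CHASSIS_RAW_TB = 360      # maximum storage per chassis, per HP
HDFS_REPLICATION = 3      # Hadoop's default block replication factor

chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U     # 10 chassis
rack_raw_pb = chassis_per_rack * CHASSIS_RAW_TB / 1000   # 3.6 PB raw
rack_usable_pb = rack_raw_pb / HDFS_REPLICATION          # ~1.2 PB of unique data

print(f"{chassis_per_rack} chassis/rack -> {rack_raw_pb:.1f} PB raw, "
      f"~{rack_usable_pb:.1f} PB unique data at {HDFS_REPLICATION}x replication")
```

In other words, a rack holding 3.6 PB of raw capacity stores roughly 1.2 PB of unique data once HDFS has made its three copies.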
Enterprises don’t own as much real estate as the major cloud providers, of course. For these customers, the problem with assuming that every server should be a unit with x processors, y memory, and z storage is that scaling up z to solve a storage problem can create bottlenecks with respect to x.
In publishing its Big Data Reference Architecture, says George, HP is exploring ways to let customers scale up certain components to solve specific problems, without having to scale everything else along with them.
“If I just need more capacity in the storage layer, why am I adding more compute power?” he asked rhetorically. “So we actually disaggregated those things, creating what we call an asymmetric architecture.”
With this asymmetric system, he says, a customer can devote an Apollo 4530 to Hadoop’s HDFS file system, then couple that system with a high compute-density Moonshot server. “With the amount of performance you get for two racks’ worth,” says George, “we were able to get it down to one rack. But now customers can be more intelligent about how they scale their data. If you’ve got bottlenecks in terms of compute size, you just enhance it with a few more Moonshots.”
“We have the ability now to position the Apollo family as sort of the storage piece,” explains Vineeth Ram, “and the Moonshot as the processing piece. The way Hadoop runs, we can run YARN — the processing side — on the Moonshot server. That gives us the processing capability, while the file system and HBase — the open-source, distributed, non-relational database — run on a cluster of density-optimized servers. What you’re getting, when you scale, is density-optimized storage and density-optimized processing.”
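To make that disaggregation concrete, here is a minimal sketch of the split Ram describes: HDFS and HBase daemons pinned to storage-dense nodes, YARN pinned to compute-dense nodes, with each tier scaled on its own. The host-group names, node counts, and the scale() helper are hypothetical illustrations, not HP tooling.

```python
# Illustrative model of the "asymmetric" split Ram describes: storage-heavy
# nodes carry the HDFS/HBase layer, compute-dense nodes carry YARN.
# Host-group names and node counts below are hypothetical, not HP specs.

from dataclasses import dataclass

@dataclass
class HostGroup:
    name: str
    roles: list   # Hadoop daemons assigned to this group
    nodes: int    # current node count

storage = HostGroup("apollo-4530", ["HDFS DataNode", "HBase RegionServer"], nodes=3)
compute = HostGroup("moonshot",    ["YARN NodeManager"],                    nodes=45)

def scale(bottleneck: str) -> HostGroup:
    """Add capacity only where the bottleneck is, instead of growing both tiers."""
    group = storage if bottleneck == "storage" else compute
    group.nodes += 1
    return group

# If HDFS is filling up, grow only the storage tier; if YARN queues back up,
# grow only the compute tier.
scale("storage")
scale("compute")
print(storage, compute, sep="\n")
```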
Fine-tuning
By comparison with the 4530, the Apollo 4510 dices up the 4U chassis in a very different way: as a single server node, but with up to 68 LFF disk drives, maximizing storage capacity. The “purpose-built” workload HP has in mind here is high-volume object storage — not data bound for Hadoop-style analytics, but large pools of objects.
“Think of virtualized storage: It’s a cost-efficient, direct-attached storage with simplified management,” explains Vineeth Ram. “You’re not talking about a heck of a lot of processing. The Apollo 4510 has a single server, but the rest of it is all storage. It’s about fine-tuning that storage with that processing capability, and what you need there.”
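For a rough sense of what 68 LFF bays mean in practice, the short calculation below assumes a 6 TB drive; the drive size is an assumption for illustration, not an HP specification.

```python
# Rough raw-capacity estimate for a fully populated Apollo 4510.
# The per-drive capacity is an assumption for illustration, not an HP spec.

LFF_BAYS = 68            # drive bays per 4510 chassis, per HP
ASSUMED_DRIVE_TB = 6     # hypothetical LFF drive size

raw_tb = LFF_BAYS * ASSUMED_DRIVE_TB
print(f"~{raw_tb} TB raw per chassis with {ASSUMED_DRIVE_TB} TB drives")  # ~408 TB
```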
Meanwhile, HP is unveiling its Apollo 2000 as what it calls the “enterprise bridge to scale-out.” Ram describes this as a kind of “anchor” system, designed for the traditional customer who is accustomed to installing the more typical racks and blades, offering space savings for enterprises that are just now starting a scale-out strategy.
For in-memory database management — specifically with SAP HANA — HP is offering under the Compute platform brand its Integrity Superdome X server, which actually is not new. Ram suggests high availability, high reliability, disaster tolerance, and other resilience features become more important when dealing with more conventional online transaction processing (OLTP).
Vineeth Ram remarks that Superdome X provides up to 16 processor sockets and 12 TB of memory, providing plenty of room to scale up.
Then, for managing more conventional, structured SQL databases, as well as more intensive HANA workloads, the Compute platform offers the new ProLiant DL580 Gen9, which builds on the Gen8 by supporting, for the first time, up to four of Intel’s Xeon E7 v3 processors with up to 18 cores apiece.
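A quick bit of arithmetic shows what that ceiling amounts to; the two-threads-per-core figure assumes Hyper-Threading is enabled on the E7 v3, and the numbers are an illustration rather than a benchmark.

```python
# Core-count arithmetic for a fully populated DL580 Gen9, from the figures above.
# The 2-threads-per-core figure assumes Hyper-Threading is enabled on the E7 v3.

SOCKETS = 4
CORES_PER_SOCKET = 18
THREADS_PER_CORE = 2

cores = SOCKETS * CORES_PER_SOCKET    # 72 physical cores
threads = cores * THREADS_PER_CORE    # 144 hardware threads

print(f"{cores} cores, {threads} threads in one 4-socket DL580 Gen9")
```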
“Here, it’s a scale-up system, which is about more memory and more performance,” says Ram. “And we’ve got HP’s Smart Memory, and we have innovations that run on top of industry standards — DDR4 memory — to drive better performance from a workload perspective.”
Cloud dynamics have altered — and, some may argue, fractured — the perspectives of data center managers. On one side of the argument, scalability demands consistent granularity; on the other, truly scaling to match growing workloads, rather than merely forestalling overflowing ones, requires fine-tuning.
You’re hearing every possible argument. Perhaps for the first time, you’re hearing them all from one vendor.