The question of using proprietary hardware versus open, or “white box,” hardware used to be a fairly straightforward one. Customers chose proprietary systems for a number of reasons, but the primary ones were either hardware that solved a business problem no other vendor could address, or the support and reputation of an established vendor in a very competitive marketplace.

But we’ve come a very long way from the days of “no one ever got fired for buying IBM.” Vendor reputation, while still important, has taken a back seat to cost, both capital and operational. And while ROI has always played a major role in product selection, the factors used to calculate ROI are now weighted differently than in the past. While there is no question that every business has its own specific requirements, the approach businesses now take to hardware investment has significantly reshaped the hardware vendor landscape. And these changes have opened the door to white box hardware in a way that was never before possible.


Commodity products

Originally, white box hardware most often meant server or desktop systems assembled from commodity components rather than name-brand products. As data centers became significant purchasers of x86 server hardware built to spec, white box equipment became a common sight, often interspersed among racks of name-brand servers and server chassis.

This became even more common as declining hardware costs turned entire servers into plug-and-play components in the enterprise data center: they were either swapped out whenever there was a hardware failure, or simply left in place with their workloads moved to another device, to be replaced only when the server refresh cycle reached its scheduled date.

Original design manufacturers (ODMs), represented by well-known manufacturers like Foxconn and Quanta, together easily broke into the top five in overall server shipments, capturing almost 20 percent of the server hardware market. While the top two or three server hardware vendors, such as Dell and HP, remain at the top of the heap, the writing is on the wall, at least for the server market.

Getting to bare metal

But the bottom line was that while white box server hardware (which also co-opted the term “bare metal servers”) became more commonplace, it didn’t really disrupt the data center in any way. For operators, the availability of white box systems mainly had the effect of causing name-brand vendors to lower their prices and/or add features at competitive price points.

Analysts pointed to the emergence of ODMs in the server hardware business as an inevitability; the value that name-brand server vendors added to their hardware most often took the form of management hardware/software combinations and service. Operating systems were dominated by Windows Server and various flavors of Linux, but the hardware components that made a good server were fundamentally commodity items themselves. There was rarely enough proprietary hardware in a brand-name server to make ODM alternatives non-competitive.

Facebook’s Backpack switch (Image: Facebook)

Time for a switch

For a while it seemed that was the limit of open hardware, but the next step soon presented itself: the network switch became the next target of the ODMs. The standard model for network switching had been to purchase a switch from HP, Cisco, Arista, or another vendor, and then run that vendor’s proprietary software on its proprietary hardware.

But ODMs were already in the switch business, in many cases already manufacturing the hardware for those brand-name vendors. So these ODMs decided that what was good for server hardware would also work for network switching gear. They set to work with off-the-shelf silicon from the major switching-fabric vendors, such as Broadcom, Mellanox, and Intel, rather than the custom ASICs that define most high-end proprietary switching gear.

Major switch vendors are aware of this and in many cases offer their own lower-tier product lines built around the same off-the-shelf silicon. These product lines lack the tight integration between hardware and software that the high-end products from these vendors provide, but in many cases they deliver a significant percentage of the performance and reliability that the major vendors promise with their top-tier products.


Bare metal switches became practical with the release of the Open Network Install Environment (ONIE) bootloader, which allows users to install the network switching software environment of their choice on bare metal hardware. Cumulus Networks developed ONIE and contributed it to the Open Compute Project. Once the hardware choice is made from the products available from ODMs, buyers can then select their switch operating system of choice, with vendors such as Pica8 specializing in ODM hardware support with their PicOS, alongside Broadcom’s FastPath, Cumulus’s own Cumulus Linux, and others.
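To make that workflow concrete, the sketch below stands up a plain HTTP server hosting a network operating system installer image for a switch that has booted into ONIE. This is a minimal sketch under stated assumptions, not a vendor-prescribed procedure: the directory, file name, and port are illustrative, and in real deployments the switch is typically pointed at the image via DHCP options or by invoking ONIE’s onie-nos-install command with the image URL.

# Minimal sketch: host a NOS installer image over HTTP so an ONIE-booted
# switch can fetch it. The directory and port are illustrative assumptions,
# not part of the ONIE specification.
import http.server
import socketserver

IMAGE_DIR = "/srv/onie-images"  # assumed directory holding installer images
PORT = 8000                     # assumed port; any reachable HTTP port works

class InstallerHandler(http.server.SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Restrict serving to the image directory (Python 3.7+).
        super().__init__(*args, directory=IMAGE_DIR, **kwargs)

    def log_message(self, fmt, *args):
        # Print each request so you can confirm the switch pulled its image.
        print("ONIE fetch from %s: %s" % (self.client_address[0], fmt % args))

if __name__ == "__main__":
    with socketserver.TCPServer(("", PORT), InstallerHandler) as httpd:
        print("Serving NOS installer images on port %d" % PORT)
        httpd.serve_forever()

From the switch side, the ONIE install environment would then fetch the image, for example with something along the lines of onie-nos-install http://<server>:8000/<installer-image>; the exact invocation and image naming vary by ONIE release and NOS vendor.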

Many options

Understanding that many customers want the advantages of a tested white box solution, many software and hardware players in the switch-gear ODM market offer packages that pair standardized hardware with a tested, pre-installed commercial operating system, such as those mentioned above. To add a little confusion, top-tier vendors such as Dell and HP offer their own white box options, with both ODM-manufactured and branded hardware available with the buyer’s choice of operating system. And to go a step farther, major vendors are also reselling lower-tier vendors’ open hardware, pre-packaged and ready to roll.

The final piece of the current puzzle is the Open Compute Project. Vendors are making their hardware designs available as contributions to the project, and both ODMs and major vendors are building hardware to OCP specifications. These specifications currently cover everything from racks to servers to network switches. And with the ongoing adoption of SDN, SDDC, and open software and management tools, these standards are becoming more attractive to data center operators.


Cheaper and simpler

Commoditizing all the infrastructure components has obvious advantages in these days of hyperscale data centers. It reduces the cost of the hardware that enables massive scale-out, allows operators to run their operating system of choice, or even multiple operating systems to meet the needs of their customers, and takes away a large part of the decision process around investing in new or next-generation hardware environments.

Organizations doing HPC, massive Big Data analysis, or other specialized computing will still have cutting-edge hardware and software available to them from server and switching vendors, but more run-of-the-mill operations, such as a hyperscale cloud data center, can depend on the availability of open solutions to give them better control over their operational expenses.