Open Compute Project (OCP), the open source hardware project initiated by Facebook, is responding to online criticism alleging that its testing regime is inadequate.
The OCP shares open source hardware designs that can be used to build cheaper data center hardware, because they eliminate gratuitous differentiation by vendors. But products built to shared designs need to be tested, and when an online source branded OCP's testing “a complete and utter joke”, an argument opened up. In response, the OCP's founding director has argued that the organization is working transparently to deliver reliable specifications.
A complete and utter joke?
“The bottom line of this programme simply seems to be building out cheap systems and data centres that have the lowest cost of ownership over their lifetime,” an anonymous test engineer told The Register. “So, basically, they’re looking for cheap engineering, components, testing, manufacturing, low power consumption and low thermal generating systems. Quality does not seem to be a metric of consideration.”
The article claimed that the two testing labs set up to certify OCP equipment were both defunct. One lab at the University of Texas at San Antonio (UTSA), set up in January 2014, now has a blank web presence, and another at the Industrial Technology Research Institute (ITRI) in Taiwan also appears to be unreachable.
The comments have flushed out evidence of discussion behind the scenes: “I sit on the OCP C&I committee and I have pushed for more rigorous testing,” said Barbara Aichinger, of FuturePlus Systems. “Initially OCP wanted tier 1 quality at a tier 3 price and they would do this by standardizing the HW [hardware] and using volume. However to get the tier 1 quality you have to adopt the tier 1 Validation strategy. This has not happened. I would encourage the anonymous test engineer to join me in the battle to bring tier 1 validation to OCP servers.”
But OCP’s stance is that heavy-duty testing is unnecessary for the hardware's intended purpose: large web-scale applications.
“Facebook / Rackspace / Google / Amazon could all lose hundreds of servers / racks a day and you would never notice,” said the founding executive director of OCP, Cole Crawford, in a blog at DCD. “If your workload requires fault tolerance, OCP probably isn’t right for you and you’d most likely benefit from insanely expensive certifications that ensure your hardware will last. If you’ve designed your environment to abstract away the hardware I’d argue there’s no better choice on the market today than OCP.”
The two test labs, it seems, were part of an early stage of the Compliance and Interoperability (C&I) program, which is now more self-service, says Crawford: “C&I matured with the tools we were given and we created OCP Certification with two trademarks to assist those who could self validate (OCP Ready) and those that wanted help certifying and ensuring industry standard metrics were performed (OCP Certified).”
Pushing the idea to its extreme, Crawford says: “In a future world maybe entire cloud environments are run on private VLANs distributed across a million mobile phones. Do I need to certify that a mobile phone works? If I can lose 10,000 phones and not care I’d argue that I don’t.”