Many storage vendors go on about how their systems are built from “commodity” hardware with all the “value add” being in their software stack, implying this is a good thing and buyers should prefer their product because of it. But just what is the argument between commodity and custom about?
 
Unlike the computer industry, which once argued over x86 versus non-x86 and then Linux versus vendor OSes, the storage industry tends to frame the question differently: proprietary-hardware-based systems versus off-the-shelf x86 servers with most functionality implemented in software. In the flash storage space, the question is usually solid-state drives (SSDs) versus non-SSD designs.
 
The commodity camp perspective 
The theory behind most of the commodity camp’s arguments runs like this: if you build your system as software running on Intel CPUs, it will get faster and faster without you having to do anything. If you buy all the hardware components of your system “off the shelf,” you don’t have to spend time and money developing hardware. And if your chosen vendor sells enough of what you are buying to other people, you can ride the coattails of their volumes to the land of ever-decreasing costs.
 
The “commodity” SSDs make up most of the cost of a flash storage array, and today’s MLC flash requires a great deal of specialized skill and knowledge to make it function properly. Since the flash chip vendors make their own SSDs, their prices can’t be beaten and they have all the knowledge, so why try to compete with them?
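 
To give a flavour of what that specialized skill and knowledge involves, consider wear-leveling: MLC flash blocks survive only a limited number of program/erase cycles, so a controller must spread writes across physical blocks instead of rewriting hot data in place. Below is a deliberately minimal Python sketch of the idea, not real SSD firmware; the endurance figure and block count are hypothetical placeholders, and real controllers must also handle ECC, garbage collection, read disturb and power-loss safety.

    # Toy wear-leveling sketch. All figures are hypothetical placeholders.
    MAX_PE_CYCLES = 3000          # assumed MLC endurance budget per block

    class ToyFlash:
        def __init__(self, num_blocks):
            self.erase_counts = [0] * num_blocks   # wear per physical block
            self.mapping = {}                      # logical -> physical block

        def write(self, logical_block):
            # Steer every write to the least-worn physical block rather
            # than rewriting the current location in place.
            physical = min(range(len(self.erase_counts)),
                           key=lambda b: self.erase_counts[b])
            if self.erase_counts[physical] >= MAX_PE_CYCLES:
                raise RuntimeError("flash worn out")
            self.erase_counts[physical] += 1       # erase-before-write cost
            self.mapping[logical_block] = physical

    flash = ToyFlash(num_blocks=8)
    for _ in range(10_000):
        flash.write(logical_block=0)   # a pathologically hot logical block
    print(flash.erase_counts)          # wear is even: 1,250 cycles per block

Without the least-worn selection, the hot block would burn through a single physical block’s entire cycle budget almost immediately; spreading the wear is what keeps the device alive, and it is exactly the kind of logic the flash vendors have deep, hard-won knowledge of.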
 
The custom camp perspective 
Those in the custom camp argue that they must always be racing to stay ahead of Intel, of the volume server vendors who build those “off-the-shelf” components, and of the flash fabs.
 
Sounds like a good reason to go with a software-based system running on “off-the-shelf” hardware, doesn’t it? Certainly that is what a lot of vendors (though perhaps they are better called integrators) would like you to think. But let’s examine whether those assumptions really hold up under scrutiny.
 
The first and greatest failing of the mantra that commodity is always better than custom is the assumption that the problem you seek to solve is a commodity problem. The functionality in industry-standard, off-the-shelf parts is designed to give them the best available cost-per-performance, or cost per metric of choice, for a certain class of problems. If your problem is not in that class, if different functionality is needed to solve it, then all the benefits of commodity parts may be lost.
 
A perfect example of this is the evolution of the supercomputer from the early days of the Cray-1, which was designed for absolute maximum performance without any other consideration. It was fully custom everything: hardware, interfaces, packaging, OS. And it was the best in the world.
 
But over time, the supercomputer business matured. Performance remained the first goal; to be clear, in the computer industry the latest and greatest anything is always meant to be faster than what came before it, so that much is simply assumed of any new product. But now people also care about other things: space, power, cooling, uptime, ease of use, expandability and, oh yeah, cost.
 
Now, flash storage isn’t supercomputing, but neither is it general-purpose computing or standard disk storage. It is memory storage. If you want flash storage that delivers on that promise, you don’t want it built from commodity computing parts. You want it built from memory storage parts.
 
There is a lot of interest these days in software-defined storage. It is certainly possible to take a general-purpose x86 server from one original equipment manufacturer, plug in a disk shelf from a second, fill it with SSDs from a third (built on flash controllers from a fourth and flash chips from a fifth), and then add some software. What you get might be storage, but it will not be Tier-1 (maybe not even Tier-2), and it will certainly not be memory storage.
 
Supercomputers are not built from off-the-shelf servers for the same reason: your standard server is not optimized for the task, and for storage it falls short in the same way. It is the wrong form factor, has the wrong interfaces, isn’t serviceable without removing the whole chassis or disconnecting all its network cables, lacks a way to mirror to its partner without the overhead of a network protocol, and lacks many, many other things that make a product a storage product.
 
Accessing memory chips through a disk shelf, over a disk protocol, running RAID in disk-sector-sized chunks on memory, managed by controllers optimized to service a handful of streams rather than hundreds or thousands of independent ones, and liable to corrupt your data when the power suddenly goes out: all of this sounded better when all you were told was the price of the MLC version, or before you calculated just how many racks of the product you would need to meet your performance or capacity needs.
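 
To make that last calculation concrete, here is a back-of-envelope sizing sketch in Python. Every figure in it (per-array IOPS, usable capacity, rack units) is a hypothetical placeholder rather than a measurement of any real product; the point is only that modest sustained per-box performance multiplies into racks surprisingly quickly.

    # Back-of-envelope rack sizing. All figures below are hypothetical
    # placeholders chosen to show the shape of the calculation.
    import math

    target_iops = 1_000_000         # workload requirement: 1M random IOPS
    target_capacity_tb = 500        # workload requirement: 500 TB usable

    array_iops = 100_000            # assumed sustained IOPS per commodity array
    array_capacity_tb = 40          # assumed usable TB per array after RAID
    array_rack_units = 4            # assumed 4U chassis per array
    rack_units_per_rack = 42

    arrays_for_iops = math.ceil(target_iops / array_iops)                    # 10
    arrays_for_capacity = math.ceil(target_capacity_tb / array_capacity_tb)  # 13
    arrays_needed = max(arrays_for_iops, arrays_for_capacity)                # 13

    racks = math.ceil(arrays_needed * array_rack_units / rack_units_per_rack)
    print(f"{arrays_needed} arrays across {racks} rack(s)")   # 13 arrays, 2 racks

Run the same arithmetic with sustained rather than datasheet figures, and the rack count, power and floor space, not the per-gigabyte price of the MLC flash, often become the numbers that decide the purchase.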
 
The differences between the best-designed memory storage and disk storage masquerading as memory storage are many. Base your purchasing decision on what a product can actually deliver to your business, not on the marketing lingo used to describe it.
 
Jon Bennett is a co-founder and the CTO of hardware and technology at Violin Memory.
 
Views in the above article do not necessarily represent the views of DatacenterDynamics FOCUS