Flash storage: commodity hardware versus custom-designed

Many storage vendors go on about how their systems are built from “commodity” hardware with all the “value add” being in their software stack, implying this is a good thing and buyers should prefer their product because of it. But just what is the argument between commodity and custom about?
Unlike the computer industry, which once argued over x86 versus non-x86 and then Linux versus vendor OSes, the storage industry tends to frame the question differently: proprietary hardware versus off-the-shelf x86 servers with most of the functionality implemented in software. In the flash storage space, the question is usually solid state drives (SSDs) versus non-SSD designs.
The commodity camp perspective 
The theory behind most of the commodity camp arguments tends to run like this: if you build your system as software running on Intel CPUs, then it will get faster and faster without you having to do anything. And if you buy all the hardware components of your system “off the shelf,” then you don’t have to spend time and money developing the hardware. If your chosen vendor sells enough of what you are buying to other people, then you can ride the coattails of other people’s volumes to the land of ever-decreasing costs.
The “commodity” SSDs make up most of the cost of a flash storage array, and today’s MLC flash requires a lot of specialized skill and knowledge to ensure it functions properly. Since the flash chip vendors make their own SSDs, their prices can’t be beaten and they have all the knowledge, so why try to compete with them?
The custom camp perspective 
Those in the custom camp argue that we must always be racing to stay ahead of Intel, the volume server vendors who build the “off-the-shelf” components, and the flash fabs.
Sounds like a good reason to go with a software-based system running on “off-the-shelf” hardware, doesn’t it? Certainly that is what a lot of vendors, though perhaps they are better called integrators, would like you to think. But let’s examine whether those assumptions really hold up under scrutiny.
The first and greatest failing of the mantra that commodity is always better than custom is the assumption that the problem you seek to solve is a commodity problem. The functionality in industry-standard, off-the-shelf parts is designed to give them the best available cost per unit of performance, or per whatever metric matters, for a certain class of problems. If your problem is not in that class, if different functionality is needed to solve it, then all the benefits of commodity parts may be lost.
A perfect example of this is the evolution of the supercomputer from the early days of the Cray-1, which was designed for absolute maximum performance without any other consideration. It was fully custom everything: hardware, interfaces, packaging, OS, and it was the best in the world.
But over time, the supercomputer business matured. Performance was the first goal, and let’s be clear: in the computer industry, the goal of the latest and greatest anything is always to be faster than what came before it; that much is simply assumed of any new product. But now people care about other things, like space, power, cooling, uptime, ease of use, expandability and, oh yeah, cost.
Now, flash storage isn’t supercomputing, but it isn’t general-purpose computing or standard disk storage either. It is memory storage. If you want flash storage that delivers on that promise, you don’t want it built from commodity computing parts; you want it built from memory storage parts.
There is a lot of interest these days in software-defined storage. While it is possible to take a general-purpose x86 server from one original equipment manufacturer, plug in a disk shelf from a second, fill it with SSDs from a third, using flash controllers from a fourth and flash chips from a fifth, and then add some software, what you get might be storage, but it will not be Tier-1 (maybe not even Tier-2) and it will certainly not be memory storage.
Supercomputers are not built from off-the-shelf servers, for the same reason. Your standard server is not optimized for storage: it’s the wrong form factor, has the wrong interfaces, isn’t serviceable without pulling the whole chassis and disconnecting all its network cables, lacks a way to mirror to its pair without the overhead of a network protocol, and lacks many, many other things that make a product a storage product.
Memory chips accessed through a disk shelf, over a disk protocol, running RAID in disk-sector-sized chunks on memory, managed by controllers optimized to service a handful of streams rather than hundreds or thousands of independent ones, and liable to corrupt your data when the power goes out suddenly: all of that sounded better when all you were told was the price of the MLC version, or before you calculated just how many racks of the product you would need to meet your performance or capacity needs.
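To make that last point concrete, here is a minimal back-of-the-envelope sketch (not from the article; every per-shelf figure is a hypothetical placeholder) of how the rack count falls out once you size from performance and capacity targets rather than from the headline price of the flash.

```python
# Back-of-the-envelope rack sizing for a commodity SSD-shelf array.
# All performance and capacity figures below are hypothetical
# placeholders, not measurements of any real product.
import math

def racks_needed(target_iops, target_tb,
                 iops_per_shelf, usable_tb_per_shelf,
                 shelves_per_rack):
    """Racks required to meet both the IOPS and the capacity target."""
    shelves_for_iops = math.ceil(target_iops / iops_per_shelf)
    shelves_for_capacity = math.ceil(target_tb / usable_tb_per_shelf)
    shelves = max(shelves_for_iops, shelves_for_capacity)
    return math.ceil(shelves / shelves_per_rack)

# Illustrative workload: 1M sustained mixed IOPS and 200 TB usable,
# served by shelves assumed to deliver 50k mixed IOPS and 20 TB each,
# packed 10 shelves to a rack.
print(racks_needed(1_000_000, 200, 50_000, 20, 10))  # -> 2 racks, set by IOPS
```

The only point of the sketch is that, once the commodity controllers become the bottleneck, it is often the performance target rather than the raw capacity (or the per-gigabyte chip price) that dictates how many racks you end up buying.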
The differences between the best designed memory storage and disk storage masquerading as memory storage are many. Base your purchasing decision on what it can actually deliver to your business, not what marketing lingo is used to describe it.
Jon Bennett is a co-founder of Violin Memory and its CTO of hardware and technology.
Views in the above article do not necessarily represent the views of DatacenterDynamics FOCUS
