
Flash storage: commodity hardware versus custom-designed

Many storage vendors go on about how their systems are built from “commodity” hardware with all the “value add” being in their software stack, implying this is a good thing and buyers should prefer their product because of it. But just what is the argument between commodity and custom about?
 
Unlike the computer industry, which once argued over x86 versus non-x86 and then Linux versus vendor OSes, the storage industry tends to frame its question differently: proprietary hardware versus off-the-shelf x86 servers with most of the functionality implemented in software. In the flash storage space, the question is usually Solid State Drives (SSDs) versus non-SSD designs.
 
The commodity camp perspective 
The theory behind most of the commodity camp arguments runs like this: if you build your system as software running on Intel CPUs, it will get faster and faster without you having to do anything. If you buy all the hardware components of your system “off the shelf,” you don’t have to spend time and money developing the hardware. And if your chosen vendor sells enough of what you are buying to other people, you can ride the coattails of other people’s volumes to the land of ever-decreasing costs.
 
The “commodity” SSDs make up most of the cost of a flash storage array, and today’s MLC flash requires a lot of specialized skill and knowledge to ensure it functions properly. Since the flash chip vendors make their own SSDs, their prices can’t be beat and they have all the knowledge, so why try to compete with them?
 
The custom camp perspective 
Those in the custom camp face a standing challenge: they must always be racing to stay ahead of Intel, of the volume server vendors who build the “off-the-shelf” components, and of the flash fabs.
 
Sounds like a good reason to go with a software-based system running on “off-the-shelf” hardware, doesn’t it? Certainly that is what a lot of vendors (though perhaps they are better called integrators) would like you to think. But let’s examine whether those assumptions really hold up under scrutiny.
 
The first and greatest failing of the mantra that commodity is always better than custom is the assumption that the problem you seek to solve is a commodity problem. The functionality in industry-standard, off-the-shelf parts is designed to give them the best available cost per unit of performance, or cost per metric-of-choice, for a certain class of problems. If your problem is not in that class, and different functionality is needed to solve it, then all the benefits of commodity parts may be lost.
 
A perfect example is the evolution of the supercomputer from the early days of the Cray-1, which was designed for absolute maximum performance without any other consideration. It was fully custom everything: hardware, interfaces, packaging, OS. And it was the best in the world.
 
But over time, the supercomputer business matured. Performance was the first goal (in the computer industry, the latest and greatest anything is always expected to be faster than what came before it; that is simply assumed of any new product). Now, though, people care about other things: space, power, cooling, uptime, ease of use, expandability and, oh yeah, cost.
 
Now, flash storage isn’t supercomputing, but it isn’t general-purpose computing, or standard disk storage. It is memory storage. If you want flash storage that delivers on that promise, you don’t want it built from commodity computing parts. You want it built from memory storage parts.
 
There is a lot of interest these days in software-defined storage. It is possible to take a general-purpose x86 server from one original equipment manufacturer, plug in a disk shelf from a second, fill it with SSDs from a third, built on flash controllers from a fourth and flash chips from a fifth, and then add some software. What you get might be storage, but it will not be Tier 1 (maybe not even Tier 2), and it will certainly not be memory storage.
 
Supercomputers are not built from off-the-shelf servers for the same reason. A standard server is not optimized for storage. It is the wrong form factor, has the wrong interfaces, cannot be serviced without removing the whole chassis or disconnecting all its network plugs, has no way to mirror to its partner without the overhead of a network protocol, and lacks many, many other things that make a product a storage product.
 
Consider what the commodity approach actually delivers: memory chips accessed through a disk shelf, over a disk protocol, running RAID in disk-sector-sized chunks on memory, managed by controllers optimized for servicing a handful of streams rather than hundreds or thousands of independent ones, and liable to corrupt your data when the power fails suddenly. All of that sounded better when all you were told was the price of the MLC version, or before you calculated just how many racks of the product you would need to meet your performance or capacity requirements.
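To make the “disk-sector-sized chunks” point concrete, here is a minimal, illustrative sketch. All the sizes below are assumptions chosen as typical orders of magnitude, not figures from any vendor or product: NAND flash can only program whole pages and erase whole blocks, so writes issued at disk-sector granularity force the media to do far more work than the host asked for.

```python
# Illustrative sketch (assumed, order-of-magnitude sizes, not vendor data):
# why disk-sector-granularity writes inflate the work NAND flash must do.

SECTOR_BYTES = 512                    # classic disk-protocol write granularity
PAGE_BYTES = 16 * 1024                # assumed NAND program granularity
ERASE_BLOCK_BYTES = 4 * 1024 * 1024   # assumed NAND erase granularity

def worst_case_write_amplification(write_bytes: int) -> float:
    """Bytes the media must program per byte the host wrote, in the
    worst case where each write is rounded up to whole flash pages."""
    pages_touched = max(1, -(-write_bytes // PAGE_BYTES))  # ceiling division
    return (pages_touched * PAGE_BYTES) / write_bytes

# A single 512 B "disk sector" update still programs a whole 16 KiB page:
print(worst_case_write_amplification(SECTOR_BYTES))   # 32.0

# And thousands of logically unrelated sectors share one erase block,
# so scattered overwrites entangle data that the host never touched:
print(ERASE_BLOCK_BYTES // SECTOR_BYTES)              # 8192
```

Real flash translation layers mitigate this with remapping and garbage collection, but the mismatch in granularity is exactly why controllers designed for disk access patterns struggle when asked to behave like memory.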
 
The differences between well-designed memory storage and disk storage masquerading as memory storage are many. Base your purchasing decision on what a product can actually deliver to your business, not on the marketing lingo used to describe it.
 
Jon Bennett is a co-founder and CTO of hardware and technology at Violin Memory
 
Views in the above article do not necessarily represent the views of DatacenterDynamics FOCUS
