Flash storage: commodity hardware versus custom-designed

Many storage vendors go on about how their systems are built from “commodity” hardware with all the “value add” being in their software stack, implying this is a good thing and buyers should prefer their product because of it. But just what is the argument between commodity and custom about?
 
Unlike the computer industry, which once argued over x86 versus non-x86 and then Linux versus vendor OSes, the storage industry tends to frame the question differently: proprietary hardware versus off-the-shelf x86 servers with most of the functionality implemented in software. In the flash storage space, the question is usually solid state drives (SSDs) versus non-SSD designs.
 
The commodity camp perspective 
The theory behind most of the commodity camp arguments runs like this: if you build your system as software running on Intel CPUs, it will get faster and faster without you having to do anything. If you buy all the hardware components of your system “off the shelf,” you don’t have to spend time and money developing hardware. And if your chosen vendor sells enough of what you are buying to other people, you can ride the coattails of other people’s volumes to the land of ever-decreasing costs.
 
“Commodity” SSDs make up most of the cost of a flash storage array, and today’s MLC flash requires a great deal of specialized skill and knowledge to ensure it functions properly. Since the flash chip vendors make their own SSDs, their prices can’t be beaten and they have all the knowledge, so why try to compete with them? A minimal endurance calculation, sketched below, shows why that knowledge matters.
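The sketch estimates how long a drive’s flash lasts under a given write load. The P/E cycle count, write amplification and workload figures are illustrative assumptions chosen for the example, not any particular drive’s ratings.

```python
# Illustrative MLC endurance arithmetic. Every figure below is an
# assumption chosen for illustration, not any specific drive's rating.

capacity_tb = 1.0             # raw flash capacity (TB)
pe_cycles = 3_000             # program/erase cycles an MLC cell typically endures
write_amplification = 3.0     # physical writes per logical write (workload-dependent)
host_writes_tb_per_day = 5.0  # logical writes the host issues per day (TB)

total_write_budget_tb = capacity_tb * pe_cycles  # total physical writes the flash can absorb
physical_writes_per_day = host_writes_tb_per_day * write_amplification
lifetime_days = total_write_budget_tb / physical_writes_per_day

print(f"Estimated flash lifetime: {lifetime_days:.0f} days "
      f"(~{lifetime_days / 365:.1f} years)")
```

The point of the exercise: controller firmware that halves write amplification roughly doubles the life of the same flash, which is exactly the kind of specialized knowledge at issue here.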
 
The custom camp perspective 
Those in the custom camp, the argument goes, must always be racing to stay ahead of Intel, of the volume server vendors who build the “off-the-shelf” components, and of the flash fabs.
 
Sounds like a good reason to go with a software-based system running on “off-the-shelf” hardware, doesn’t it? Certainly that is what a lot of vendors – perhaps better called integrators – would like you to think. But let’s examine whether those assumptions really hold up under scrutiny.
 
The first and greatest failing of the mantra that commodity is always better than custom is the assumption that the problem you seek to solve is a commodity problem. The functionality in industry-standard, off-the-shelf parts is designed to give them the best available cost-per-performance, or cost per metric-of-choice, for a certain class of problems. If your problem is not in that class – if different functionality is needed to solve it – then all the benefits of commodity parts may be lost.
 
A perfect example of this is the evolution of the supercomputer from the early days of the Cray-1, which was designed for absolute maximum performance without any other consideration. It was fully custom everything – hardware, interfaces, packaging, OS – and it was the best in the world.
 
But over time, the supercomputer business matured. Performance was still the first goal – and let’s be clear: in the computer industry, the latest and greatest anything is always meant to be faster than what came before it; that is simply assumed of any new product. But now people also care about other things: space, power, cooling, uptime, ease of use, expandability and, oh yeah, cost.
 
Now, flash storage isn’t supercomputing, but neither is it general-purpose computing or standard disk storage. It is memory storage. If you want flash storage that delivers on that promise, you don’t want it built from commodity computing parts; you want it built from memory storage parts.
 
There is a lot of interest these days in software-defined storage. While it is possible to take a general-purpose x86 server from one original equipment manufacturer, plug in a disk shelf from a second, fill it with SSDs from a third – built around flash controllers from a fourth and flash chips from a fifth – and then add some software, what you get might be storage, but it will not be Tier-1 (maybe not even Tier-2), and it will certainly not be memory storage.
 
Supercomputers are not built from off-the-shelf servers for the same reason: your standard server is not optimized for the job. For storage, it is the wrong form factor, has the wrong interfaces, isn’t serviceable without pulling the whole chassis and disconnecting all its network cables, lacks a way to mirror to its partner without the overhead of a network protocol, and lacks many, many other things that make a product a storage product.
 
Accessing memory chips through a disk shelf, over a disk protocol, running RAID in disk-sector-sized chunks on memory, managed by controllers optimized for servicing a handful of streams rather than hundreds or thousands of independent ones – and which may corrupt your data when the power suddenly fails – sounded better when all you were told was the price of the MLC version, or before you calculated just how many racks of the product you would need to meet your performance or capacity needs, as the sketch below illustrates.
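To make that last point concrete, here is a minimal back-of-envelope sizing sketch. All of the figures in it – per-SSD IOPS, usable capacity per drive, drives per shelf, shelves per rack – are hypothetical placeholders for illustration, not measurements of any real product.

```python
import math

# Back-of-envelope rack sizing for a commodity-SSD array.
# Every figure below is a hypothetical placeholder, not a vendor spec.

target_iops = 1_000_000          # sustained random-I/O target
target_capacity_tb = 500         # usable capacity target (TB)

sustained_iops_per_ssd = 20_000  # sustained, not datasheet-burst, IOPS
usable_tb_per_ssd = 0.8          # usable TB per SSD after RAID and over-provisioning
ssds_per_shelf = 24
shelves_per_rack = 10

ssds_for_iops = math.ceil(target_iops / sustained_iops_per_ssd)
ssds_for_capacity = math.ceil(target_capacity_tb / usable_tb_per_ssd)
ssds_needed = max(ssds_for_iops, ssds_for_capacity)  # sized by the tighter constraint

racks_needed = math.ceil(ssds_needed / (ssds_per_shelf * shelves_per_rack))
print(f"SSDs needed: {ssds_needed}, racks needed: {racks_needed}")
```

Run the numbers with sustained rather than burst figures and the rack count – and with it the power, cooling and floor-space bill – can grow surprisingly fast.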
 
The differences between the best-designed memory storage and disk storage masquerading as memory storage are many. Base your purchasing decision on what a product can actually deliver to your business, not on the marketing lingo used to describe it.
 
Jon Bennett is a co-founder and CTO of hardware and technology at Violin Memory
 
Views in the above article do not necessarily represent the views of DatacenterDynamics FOCUS

