Flash storage: commodity hardware versus custom-designed

Many storage vendors go on about how their systems are built from “commodity” hardware with all the “value add” in their software stack, implying this is a good thing and that buyers should prefer their products because of it. But what is the argument between commodity and custom really about?
 
Unlike the computer industry, which once argued over x86 versus non-x86 and then Linux versus vendor operating systems, the storage industry tends to frame the question differently: proprietary hardware versus off-the-shelf x86 servers with most functionality implemented in software. In the flash storage space, the question is usually solid state drives (SSDs) versus non-SSD designs.
 
The commodity camp perspective 
The theory behind most of the commodity camp’s arguments runs like this: if you build your system as software running on Intel CPUs, it will get faster and faster without you having to do anything. If you buy all the hardware components of your system “off the shelf,” you don’t have to spend time and money developing hardware. And if your chosen vendor sells enough of what you are buying to other people, you can ride the coattails of other people’s volumes to the land of ever-decreasing costs.
 
The “commodity” SSDs make up most of the cost of a flash storage array, and today’s MLC flash requires a lot of specialized skill and knowledge to ensure it functions properly. Since the flash chip vendors make their own SSDs, their prices can’t be beaten and they have all the knowledge, so why try to compete with them?
 
The custom camp perspective 
Those in the custom camp, so the argument goes, must always be racing to stay ahead of Intel, of the volume server vendors who build the “off-the-shelf” components, and of the flash fabs.
 
Sounds like a good reason to go with a software-based system running on “off-the-shelf” hardware, doesn’t it? Certainly that is what a lot of vendors would like you to think, although perhaps they are better called integrators. But let’s examine whether those assumptions really hold up under scrutiny.
 
The first and greatest failing of the mantra that commodity is always better than custom is the assumption that the problem you seek to solve is a commodity problem. The functionality in industry-standard, off-the-shelf parts is designed to give them the best available cost per unit of performance, or per metric of choice, for a certain class of problems. If your problem is not in that class, if solving it requires different functionality, then all the benefits of commodity parts may be lost.
 
A perfect example of this is the evolution of the supercomputer from the early days of the Cray-1, which was designed for absolute maximum performance without any other consideration. It was custom everything: hardware, interfaces, packaging, operating system, and it was the best in the world.
 
But over time, the supercomputer business matured. Performance was still the first goal, and let’s be clear: in the computer industry, the goal of the latest and greatest anything is always to be faster than what came before it, so that is simply assumed of any new product. But now people care about other things, like space, power, cooling, uptime, ease of use, expandability and, oh yeah, cost.
 
Now, flash storage isn’t supercomputing, but neither is it general-purpose computing or standard disk storage. It is memory storage. If you want flash storage that delivers on that promise, you don’t want it built from commodity computing parts. You want it built from memory storage parts.
 
There is a lot of interest these days in software-defined storage. It is possible to take a general-purpose x86 server from one original equipment manufacturer, plug in a disk shelf from a second, fill it with SSDs from a third (built on flash controllers from a fourth and flash chips from a fifth), and then add some software. What you get might be storage, but it will not be Tier 1 (maybe not even Tier 2), and it will certainly not be memory storage.
 
Supercomputers are not built from off-the-shelf servers for the same reason. Your standard server is not optimized for storage. It is the wrong form factor, has the wrong interfaces, isn’t serviceable without removing the whole chassis or disconnecting all its network cables, lacks a way to mirror to its partner without the overhead of a network protocol, and lacks many, many other things that make a product a storage product.
 
Accessing memory chips through a disk shelf, over a disk protocol, running RAID in disk-sector-sized chunks on memory, with controllers optimized to service a handful of streams rather than hundreds or thousands of independent ones, and liable to corrupt your data when the power goes out suddenly: all of that sounded better when all you were told was the price of the MLC version, or before you calculated just how many racks of the product you would need to meet your performance or capacity needs.
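To make the sector-size mismatch concrete, here is a minimal back-of-the-envelope sketch in Python. The sizes are illustrative assumptions (512-byte sectors, 16 KB flash pages), not the specifications of any particular product, and real flash translation layers mitigate some of this overhead:

```python
# Illustrative sketch only: how disk-sector-sized updates inflate the data
# actually written to NAND flash. All sizes are assumptions for the example.

SECTOR = 512              # assumed host write size from a disk-style RAID layer
FLASH_PAGE = 16 * 1024    # assumed NAND program (write) granularity

def naive_write_amplification(update_size: int) -> float:
    """Worst case: every small update forces a read-modify-write of at least
    one whole flash page, so the flash writes far more data than the host sent."""
    pages_touched = -(-update_size // FLASH_PAGE)   # ceiling division
    return pages_touched * FLASH_PAGE / update_size

if __name__ == "__main__":
    for size in (SECTOR, 4 * 1024, FLASH_PAGE):
        factor = naive_write_amplification(size)
        print(f"{size:>6} B host write -> roughly {factor:.0f}x that much data written to flash")
```

The exact figures matter less than the shape of the problem: sector-oriented RAID treats flash as if it were a disk, and a controller designed for memory storage exists precisely to avoid paying that kind of penalty on every small write.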
 
The differences between the best-designed memory storage and disk storage masquerading as memory storage are many. Base your purchasing decision on what a product can actually deliver to your business, not on the marketing lingo used to describe it.
 
Jon Bennett is a co-founder of Violin Memory and its CTO of hardware and technology
 
Views in the above article do not necessarily represent the views of DatacenterDynamics FOCUS

