

MSPs and the storage struggle


Large data storage firms can offer prices around $0.20 per gigabyte, which suits a market that only wants to consume low-tech, low-cost storage pools, with access and recovery charges scaled to make the service slightly more viable. But it is a long way from the more hybrid services that many managed service providers (MSPs) operate.

The upside for vendors and data center providers in engaging with these MSPs is that they consume the very products they are selling and keep the market moving. The downside for MSPs in running their own infrastructure, rather than consuming a public cloud version, is that a refresh cycle for every type of equipment is always on the horizon.

[Image: hard drive. Source: Andrey Eremin / Thinkstock]

At Vissensa, we recently replaced our legacy storage systems in response to innovations coming to market and the declining market status of our incumbent vendor.

We approached the project with some important parameters and, given the blank sheet of paper, our findings were both interesting and surprising.

Scalability and resilience

MSPs need to meet SLAs (service level agreements), so they must be able to add, remove and maintain storage systems without disrupting clients. They need hot-swappable, modular storage, with disk controllers that can be configured as active/active, active/passive or standby. Surprisingly, these features were missing from even some of the better-known storage vendors' product lines.

Systems must support mixed mode use, where some clients want different features to others. In a shared cloud, multiple clients are separated virtually on the same storage pools and servers, so any solution should allow the MSP to configure individual clients’ needs.

The easiest way to achieve this is to have a solution with many features that can be turned off for the clients that don’t require them, rather than not have those features at all.
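The per-client toggling described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the names `TenantProfile` and `FEATURES` are invented, and a real multi-tenant array would enforce these flags at the controller level.

```python
# Hypothetical sketch: per-tenant feature flags on a shared storage pool.
# The array supports everything; each client only pays for what it turns on.
from dataclasses import dataclass, field

FEATURES = {"dedup", "compression", "encryption", "auto_tiering", "snapshots"}

@dataclass
class TenantProfile:
    name: str
    enabled: set = field(default_factory=set)

    def enable(self, feature: str) -> None:
        if feature not in FEATURES:
            raise ValueError(f"unknown feature: {feature}")
        self.enabled.add(feature)

    def uses(self, feature: str) -> bool:
        return feature in self.enabled

# Two tenants on the same array, with different feature sets.
vdi_client = TenantProfile("vdi-client")
vdi_client.enable("auto_tiering")

archive_client = TenantProfile("archive-client")
archive_client.enable("dedup")
archive_client.enable("compression")
```

The point is the shape of the model: one feature-rich platform, with capability expressed per client rather than baked into separate product lines.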

Feature rich, financially viable

Storage systems must have baseline features, such as de-duplication and compression, so MSPs can get the best out of the asset and reduce overall consumption, cutting cost and shortening the payback period. This helps justify the capital investment. An MSP will also have to be able to see what resources a client is using and what return those resources are generating.
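To see why de-duplication reduces consumption, consider a minimal block-level sketch: identical blocks are stored once and referenced by their content hash. Real arrays do this inline with far more engineering; this only shows the accounting effect, and the function name and block sizes are illustrative.

```python
# Minimal illustration of block-level de-duplication: identical blocks are
# stored once; the logical layout is a list of hash references.
import hashlib

def dedup(blocks: list[bytes]) -> tuple[dict, list[str]]:
    store: dict[str, bytes] = {}   # hash -> unique block actually stored
    refs: list[str] = []           # logical layout as hash references
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refs.append(digest)
    return store, refs

# Four 4 KB blocks written by clients, three of them identical.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
store, refs = dedup(blocks)

logical = len(blocks) * 4096                     # what clients think they wrote
physical = sum(len(b) for b in store.values())   # what the array actually stores
```

Here the client sees 16 KB of data while the array holds 8 KB, and it is exactly this logical-versus-physical gap that a usable reporting layer must expose per client.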

Some vendors still don’t have this functionality, and some of those that do have very ungainly ways of achieving the desired result, such as having to store and re-write data.

Another important feature that MSPs need in their kitbag is storage tiering and provisioning: storage that can allocate hot, fast disk (flash) alongside slower, lower-cost SAS or SATA disk to cater for the different client workloads presented to it. For example, fast flash can be used for virtual desktop infrastructure (VDI) provisioning and large analytical tasks, while low-cost SAS storage can accommodate less intensive workloads such as DaaS (desktop as a service) and self-provisioning, i.e. virtual private clouds (VPCs). The storage array is bombarded with these types of workload request each day, and the real measure of the equipment’s performance is how intelligently it can apply this function and automatically move workloads between the tiers.
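Stripped to its bare idea, an auto-tiering policy is a placement decision driven by observed I/O heat. The sketch below is a deliberately crude assumption: real arrays track heat per extent and rebalance continuously, and the threshold and pool names here are invented for illustration.

```python
# Hedged sketch of an auto-tiering decision: volumes whose I/O rate exceeds
# a threshold are placed on flash; the rest stay on cheaper SAS/SATA.
def place_tier(iops_by_volume: dict[str, int], hot_threshold: int = 1000) -> dict[str, str]:
    return {
        vol: "flash" if iops >= hot_threshold else "sas"
        for vol, iops in iops_by_volume.items()
    }

# Workloads from the article: VDI is hot, DaaS and VPC self-provisioning are not.
placement = place_tier({"vdi-pool": 5000, "daas-pool": 200, "vpc-pool": 50})
```

The intelligence the article asks for is in everything this sketch omits: measuring heat at fine granularity, predicting it, and migrating data between tiers without disturbing the client.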

For some vendors, the ink was only just dry on the roadmap plans for this functionality, whereas the market clearly needs it now.

We also found that the products vary widely in how configurable their disk pools are, including hot and cold storage. Some didn’t offer this fundamental ability at all, and the differences in configurability make it harder to keep multiple vendors in the stable.

Fragmented and restrictive

We found that each vendor has a different take on including certain types of functionality, depending on the legacy of its equipment. Some could include features or turn them off; others had features hard-coded. This makes for a very inflexible hybrid model for anyone trying to map workload to functionality, and it is very unwelcome in today’s storage market.

Finally, this kind of big investment has to be commercially viable. It doesn’t take a rocket scientist to realise that you can’t commercially purchase, house, operate, support and maintain this infrastructure for $0.20 per GB unless you’re subsidising it with something else. You also can’t keep switching and swapping technology, so you need to examine where the market is going.
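A back-of-envelope calculation makes the point concrete. Every figure below is an assumption invented for illustration (array cost, amortisation period, opex, utilisation); the only number from the article is the $0.20/GB/month price.

```python
# Illustrative viability check at commodity pricing. All cost figures are
# assumed, not real vendor or colo numbers.
price_per_gb_month = 0.20
usable_tb = 100                      # assumed usable capacity of one array
revenue_month = price_per_gb_month * usable_tb * 1000   # at 100% utilisation

hw_amortised = 150_000 / 36          # assumed array cost over 36 months
opex = 8_000                         # assumed colo, power, support, staff share
utilisation = 0.6                    # arrays are rarely fully sold

margin = revenue_month * utilisation - (hw_amortised + opex)
```

Under these assumptions the monthly margin comes out slightly negative, which is the article's point: at commodity prices an independent MSP needs either very high utilisation or higher-value services layered on top.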

Ten years ago we were all familiar with the capabilities of SATA/SAS storage. As fast disk, or flash, became available to cater for intensive I/O and near-memory performance, vendors modified their arrays into more hybrid storage in which SATA, SAS and flash co-exist. However, many of these arrays were retro-fitted to provide the flash enhancements the market now looks for.

Today, pure flash arrays exist that have been designed and tuned from the ground up, brought to market mainly by storage technology start-ups such as SolidFire (later purchased by NetApp) and XtremIO (snapped up by EMC).

As this market is moving fast, we also looked at the commercial risk of doing business with certain vendors (who will still be there in five years’ time?). For instance, Whiptail burst onto the storage scene only to be bought by Cisco and subsumed into Cisco’s UCS strategy as the Invicta arrays, which Cisco later dropped. If you bought in early, are you now a reluctant Cisco customer?

You must choose wisely, as this market is in serious flux. Innovative start-ups are being gobbled up by established vendors, who either side-line their own older technology, stranding customers’ sunk investments, or cherry-pick the best bits of the new technology and dump the rest.

One indicator of a storage vendor’s viability is how well it collaborates with third parties such as independent software vendors (ISVs) to increase software and hardware interoperability, and enable solutions such as backup and recovery, encryption and desktop services. 

Vendors are lagging

Our conclusion is that vendors are still catching up with the needs of the MSP market. Some are more mature than others, and many are still catering for the mass low-tech opportunities while suggesting that their technology can handle more demanding workloads. On the commercial front, some vendors still force the client to purchase wasteful blocks of storage which in many cases will not map to any business requirement.

The MSP has to overcome these business and technology obstacles while still competing against other MSPs and the commodity storage market. Functionality, flexibility and features such as auto-tiering, compression and encryption add value, so independent MSPs can differentiate themselves and offer clients genuine choice.

Steve Groom is CEO of Vissensa, an MSP


