Is Software-defined Storage key to a Successful Private Cloud?

(Originally posted April 17, 2013 on my domain blog; reposted here for completeness.)

A recent Enterprise Management Associates (EMA™) research report suggests it is.

I’ve been on the topic of software-defined storage for three posts now – one with my perspective, one covering a multi-vendor round table at Storage Networking World, and now on an intriguing bit of research.

Earlier this year, IBM sponsored an EMA research project, Demystifying Cloud. The project was intended to collect lessons learned from organizations of all sizes that had completed at least the first stage of their initial private cloud deployment, and then use that data to provide guidance to organizations considering the purchase of cloud technologies. Along the way, EMA discovered what most folks would not have predicted — the critical role of storage for companies of any size and vertical when planning and implementing a private cloud.

EMA Senior Analyst Torsten Volk did a masterful job conducting the primary research and then distilling his findings into clear, understandable conclusions. The full report is a highly recommended read. To whet your appetite, take a look at a storage-focused excerpt.

The vast majority of folks Torsten spoke with in conducting his research had been directly involved in the private cloud decision process – influencers, evaluators, and the decision makers, both technical and financial. I got a kick out of Torsten’s description of the ‘cloud’ that these respondents are hoping to achieve in their projects – “…transition from a traditional infrastructure and resource-oriented approach to enterprise IT—let’s call it IT-as-a-Nuisance or Cost Center if you will—toward a business service focused definition of IT.”

The top two strategic goals identified for private cloud were probably expected – “less application downtime” and “shortened resource provisioning times”. It was numbers 3 and 4 on the list that grabbed my attention – “easy storage provisioning” and “storage tiering”. Let’s think about those for a moment…

One topic that gets discussed a lot in storage circles is the negative effect that widespread use of server hypervisors has on the storage infrastructure – and if anything, cloud is characterized by widespread use of server hypervisors. Server hypervisors have been blamed for unexpected bumps in storage capacity growth and are notorious for creating very dense, non-sequential I/O patterns that cause performance problems in traditional storage infrastructure. EMA’s research noted that as organizations progressed further into their private cloud deployments, their use of flash storage increased, and that a remarkably large number of organizations had added capacity to deal with performance issues. The researchers also noted, however, that “Simply throwing more spindles or Solid State Drives (SSDs) at a storage performance problem constitutes significant waste in situations where there is still plenty of storage capacity.”

Enter software-defined storage and storage tiering. For some expanded thoughts, see the Flash storage ‘everywhere’ paragraph in my last post. My view is that the rise of private cloud, the widespread use of server hypervisors, the increase in flash deployments, and the need for a software-defined storage layer to manage storage tiering are all interconnected. IBM’s recent announcement of a $1 billion investment in flash and the ensuing #flashahead traffic on Twitter are testament that this interconnection has created a real technical and business need that vendors are willing to sink significant effort into solving.
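To make the tiering idea concrete, here is a minimal, purely illustrative sketch of a heat-based tiering policy. The class, threshold, and tier names are my own invention for illustration – real software-defined storage layers operate on sub-volume extents with far more sophisticated heuristics – but the core idea is the same: measure I/O heat, then place hot data on flash and cold data on spindles instead of buying capacity to solve a performance problem.

```python
from collections import defaultdict

class TieringPolicy:
    """Toy heat-based tiering: hot extents go to flash, cold to spindles."""

    def __init__(self, hot_threshold=100):
        self.io_counts = defaultdict(int)   # extent id -> I/Os this window
        self.hot_threshold = hot_threshold  # I/Os per window to count as "hot"

    def record_io(self, extent_id, count=1):
        self.io_counts[extent_id] += count

    def plan_migrations(self):
        """Return (extent, target_tier) pairs for the next migration window."""
        plan = []
        for extent, ios in self.io_counts.items():
            target = "flash" if ios >= self.hot_threshold else "spindle"
            plan.append((extent, target))
        self.io_counts.clear()  # start a fresh measurement window
        return plan

policy = TieringPolicy(hot_threshold=100)
policy.record_io("extent-7", 250)  # dense, random VM I/O lands here
policy.record_io("extent-9", 3)    # cold, rarely touched data
print(policy.plan_migrations())    # extent-7 -> flash, extent-9 -> spindle
```

The point of the sketch: the decision is made in software, above the hardware, which is why a hardware-independent storage layer is the natural home for it.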

On the provisioning side, it has always been interesting to me how much attention is paid to the rapid deployment of workloads on virtual server farms, and until now how little has been paid to the rapid provisioning of the storage that houses the data those workloads require. What the EMA research uncovered is that organizations well along in their private cloud deployments have had the ah-ha moment for storage. A majority now see provisioning and management of storage as a bottleneck, leading to storage automation being the highest-ranked integration point for private cloud – ahead of things like OS Image Management, Server Automation, Network Automation, Application Performance Management, and Single Sign-On. To get a workload rapidly deployed, it needs compute resources (virtual machines), storage resources (virtual storage), and network resources (virtual networks). Clouds don’t deal in physical infrastructure; it’s entirely too rigid – or as Torsten called it, IT-as-a-Nuisance. For all the reasons discussed in my previous two posts, a software-defined storage layer is required to complete the datacenter transition to cloud. My view is corroborated by EMA’s respondents: every one of those who reported adopting a hardware-independent storage hypervisor said that their storage hypervisor software was “important,” “very important,” or “critical” to their cloud deployment.
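As a thought experiment on what “storage automation as an integration point” means, here is a toy sketch that provisions compute, storage, and network together in a single call. Every name and parameter here is hypothetical – no vendor’s actual API is implied – but it captures the shift: storage becomes one automated step in workload deployment rather than a separate manual ticket.

```python
def provision_workload(name, vcpus, mem_gb, storage_gb, tier):
    """Toy cloud deployment: one request yields all three resource types."""
    vm = {"name": name, "vcpus": vcpus, "mem_gb": mem_gb}
    # Without storage automation, this next step is a manual ticket and
    # becomes the bottleneck EMA's respondents described.
    volume = {"workload": name, "size_gb": storage_gb, "tier": tier}
    network = {"workload": name, "segment": "tenant-default"}
    return {"vm": vm, "volume": volume, "network": network}

stack = provision_workload("web-01", vcpus=4, mem_gb=16,
                           storage_gb=200, tier="flash")
print(sorted(stack))  # compute, storage, and network provisioned together
```

A software-defined storage layer is what makes the `volume` step as programmable as the `vm` step already is on a virtual server farm.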

Join the conversation! Share your point of view here. Follow me on Twitter @RonRiffe and the industry conversation under #SoftwareDefinedStorage
