Last Monday, EMC announced ViPR as its new software-defined storage platform. Almost simultaneously, Chuck Hollis described it as ‘Breathtaking’ in his usually excellent blog. I must admit, one thing I routinely find breathtaking about EMC is their approach to marketing. They have a knack for taking unexceptional technology (or, as in this case, combinations of technology and theories about the future) and spinning an extraordinarily compelling story. With all seriousness and without tongue in cheek… Nicely done, EMC!
Chuck’s blog described ViPR in three parts. To a heritage EMC customer, these three concepts may seem revolutionary because, to date, EMC hasn’t successfully offered this sort of technology. However, for clients of IBM, Hitachi, or other smaller vendors, the environment EMC hopes to create with ViPR will seem familiar because, in large part, it’s been evolving for years. Let’s look at the three parts one at a time.
The first ViPR idea Chuck describes is to help create “a better control plane for existing storage arrays: EMC and others”. To be clear, EMC is just getting started with ViPR, so initially the ‘others’ include only NetApp, but you can expect the list to expand if ViPR matures. Chuck is describing a software virtualization layer that discovers existing physical storage arrays and allows administrators to construct virtual storage arrays as abstractions across the multiple units. The ‘better control plane’ comes when the virtual array capabilities are surfaced via a storage service catalog that describes things like snaps, replication, remote sites, etc. Administrators are then able to request these services, in turn driving an orchestrated set of provisioning steps.

IBM clients over the last decade have come to understand that this first idea is extraordinarily powerful. Today, the IBM SmartCloud Virtual Storage Center helps clients create a software-defined abstraction layer over existing physical arrays from EMC and LOTS of others. Regardless of the brand, tier, or capability of your existing physical arrays, the virtual arrays are capable of snaps, replication, stretching a virtual volume across two physical sites at distance to facilitate active-active datacenters, thin provisioning, real-time compression, transparent data mobility, and more. Administrators can describe named collections of services for different workloads — “here are the services ‘Database’ workloads need, and here is a different set of services ‘E-mail’ workloads need” — greatly simplifying provisioning. If you need help understanding your unique data and its needs, IBM has developed consulting services to assist. Once service levels are defined and named, administrators simply specify a) what service level they need, b) how much capacity they need in that service level, and c) what machine needs access.
Requests kick off an orchestrated workflow that performs all the mundane tasks of creating virtual volumes with the right services, provisioning the remote replication relationships if needed, zoning the SAN and masking the virtual volumes for secure access, configuring the host multi-pathing for access resiliency, etc. Requests can be made by administrators via a visually intuitive GUI, or programmatically via REST APIs, an OpenStack Cinder plug-in, or deep integration with VMware vSphere Storage APIs. SmartCloud Virtual Storage Center also meters client capacity usage by service level. CIOs can effectively manage these and other IT costs with IBM SmartCloud Cost Management.
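To make the idea concrete, here is a minimal sketch of what a programmatic provisioning request might look like. This is not the actual SmartCloud Virtual Storage Center API — the catalog entries, field names, and service levels below are hypothetical — but it shows the shape of the three inputs an administrator supplies: service level, capacity, and which machine needs access.

```python
import json

# Hypothetical service catalog: named service levels mapping to
# storage capabilities (names and capabilities are illustrative only).
SERVICE_CATALOG = {
    "Database": {"snapshots": True, "sync_replication": True, "compression": False},
    "E-mail":   {"snapshots": True, "sync_replication": False, "compression": True},
}

def build_provisioning_request(service_level, capacity_gb, host_wwpn):
    """Assemble the three administrator inputs: (a) a named service
    level, (b) the capacity needed, and (c) the host that needs access."""
    if service_level not in SERVICE_CATALOG:
        raise ValueError(f"unknown service level: {service_level}")
    return {
        "service_level": service_level,
        "services": SERVICE_CATALOG[service_level],
        "capacity_gb": capacity_gb,
        "host_access": [host_wwpn],
    }

# The orchestrator would take a request like this and carve the virtual
# volume, set up replication, zone the SAN, and mask the volume to the host.
request = build_provisioning_request("Database", 500, "10:00:00:05:1e:7a:b2:c4")
print(json.dumps(request, indent=2))
```

The point of the abstraction is that the administrator never names a physical array; the catalog entry carries the capabilities, and the orchestration layer decides where the volume lands.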
The second ViPR idea Chuck describes is ‘changing how data is presented depending on a given application’s access needs’. What he is describing is a storage approach that layers access methods. In the case of ViPR, Chuck describes NFS as the base method, with other methods layered on top, like object-over-NFS or HDFS-over-NFS. The SmartCloud Virtual Storage Center implements block storage as the base layer. Related offerings that use the same block code stack, like the IBM Storwize V7000 Unified and the IBM SONAS, offer file-over-block and are looking forward to adding other object methods. This area is evolving rapidly, and I agree with Chuck’s speculation that storing a piece of data once and accessing it through multiple methods could be important in the future.
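The layering concept is easier to see with a toy example. The sketch below is purely illustrative — it is not how ViPR, SONAS, or any shipping product implements object-over-file — but it shows the core idea: an object-style put/get interface layered on an ordinary file system (such as an NFS mount), so the same bytes remain reachable both as an object and as a file.

```python
import hashlib
import tempfile
from pathlib import Path

class ObjectOverFile:
    """Toy object interface layered on a plain file system: each object
    key maps deterministically to a file path, so data written once is
    accessible through both the object method and the file method."""

    def __init__(self, root):
        self.root = Path(root)

    def _path(self, key):
        # Hash the key to spread objects across subdirectories,
        # avoiding one enormous flat directory.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return self.root / digest[:2] / digest

    def put(self, key, data):
        path = self._path(key)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)
        return path  # a real file path, usable by file-based tools too

    def get(self, key):
        return self._path(key).read_bytes()

# Store once, read back through the object interface.
store = ObjectOverFile(tempfile.mkdtemp())
file_path = store.put("reports/q1.csv", b"region,revenue\nwest,42\n")
assert store.get("reports/q1.csv") == file_path.read_bytes()
```

Real implementations add metadata, versioning, and multi-protocol locking on top, but the principle — one copy of the data, multiple access methods over it — is the same one Chuck is speculating about.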
The third ViPR idea Chuck describes is ‘Storage Services For Cloud Applications’. In his blog, he’s wrestling with a great question. A decade ago, ‘server virtualization’ was a budding young concept. Today it is foundational to the way we do IT. CIOs have long since made their decisions on server virtualization and are now working to complete the virtual datacenter. We’ve found that with the servers handled, virtualizing the storage infrastructure is the focus in 2013. The question Chuck is wrestling with is “Is ViPR a modern interpretation of what we now mean when we say ‘storage virtualization’?” It’s certainly EMC’s modern interpretation; the company has tried to virtualize physical storage before, with Invista (circa 2005) and VPLEX (circa 2010). At IBM, we started virtualizing storage in 2003. Today, that software stack and its ecosystem of integration with applications, server hypervisors, orchestrators, cloud stacks, and cost managers is implemented in thousands of datacenters. If nothing else, we’ve stayed focused on growing what works. In recent posts, I have explored how the industry defines software-defined storage, and whether it is a key to a successful private cloud. If EMC breaks tradition and sticks with ViPR for the long term, the words they are using in their marketing demonstrate they understand what ViPR needs to become if it wants to be a complete offering. However, as CIOs make decisions on software-defining their storage in 2013, I think they’ll find that the IBM SmartCloud Virtual Storage Center is already accomplishing for storage what server hypervisors have accomplished for servers.