I’m just returning from the SNW Spring conference in Orlando. It seemed sparsely attended, but my 5-foot-tall wife of almost 28 years has always told me that dynamite comes in small packages (I believe her!).
As I noted in my last post, I was in Orlando to participate in a round table discussion on storage hypervisors hosted by ESG Senior Analyst Mark Peters. I was joined by Claus Mikkelsen – Chief Scientist at Hitachi Data Systems, Mark Davis – CEO of Virsto (now a VMware company), and George Teixeira – CEO of DataCore. Conspicuously missing from the conversation both at this SNW and at a similar round table held during the SNW Fall 2012 conference was any representation from EMC. More on that in a moment.
The session this time drew a crowd roughly three times the size of the Fall 2012 installment – a completely full room. And the level of audience participation in questioning the panel members further demonstrated just how much the industry conversation is accelerating. I was pleased to see that most of the discussion focused on use cases for what was interchangeably referred to as storage virtualization, storage hypervisors, and software-defined storage. Following are a few of the use cases that were probed.
Data migration was noted as an early and enduring use case for software-defined storage. Today’s physical disk arrays are capable of housing many TBs of data, often from MANY simultaneous business applications. When one of these physical disk arrays reaches the end of its useful life (the lease is about to terminate), the process of emptying the data from that old disk array onto a newer, more modern one can be time-consuming. The difficult part isn’t the volume of data; it’s the number of application disruptions that have to be scheduled to make the data available for moving. And if you happen to be switching physical disk array vendors, that can mean additional work on each of the host machines accessing the data to ensure the correct drivers are installed. Clients we have worked with tell us the process can take months. That’s not only hard on the storage administration team, it’s also wasteful because a) you have to bring in the new target array months ahead of time and b) both it and the source array remain only partially used during those months as the data is migrated. The economic value of solving this data migration problem is an early use case that has fueled solutions like IBM SAN Volume Controller (SVC), Hitachi Virtual Storage Platform, and DataCore SANsymphony-V. Each of these is designed to provide the basic mechanics of storage virtualization and mobility across most any physical disk array you might choose – all without disruption of any kind to the business applications that are accessing the data.
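To make the mechanics concrete, here is a minimal sketch of the indirection idea behind non-disruptive migration. This is purely illustrative – the class and method names are hypothetical, not any actual SVC, Hitachi, or DataCore API. The point is that hosts address a stable virtual volume while the virtualization layer swaps the physical backing underneath it.

```python
# Illustrative sketch (hypothetical names, not a real product API):
# a storage virtualization layer keeps a mapping from virtual volumes
# to physical arrays. Migration copies blocks in the background and
# then retargets the mapping -- hosts never see a disruption.

class Array:
    """A stand-in for a physical disk array from some vendor."""
    def __init__(self, vendor):
        self.vendor = vendor
        self.blocks = {}          # block id -> data

    def read(self, block):
        return self.blocks.get(block)


class VirtualVolume:
    """What the host actually mounts; its identity never changes."""
    def __init__(self, name, backing_array):
        self.name = name
        self.backing = backing_array   # current physical array

    def read(self, block):
        # I/O is forwarded to whichever array currently backs the volume.
        return self.backing.read(block)


class StorageVirtualizer:
    def __init__(self):
        self.volumes = {}

    def create(self, name, array):
        self.volumes[name] = VirtualVolume(name, array)
        return self.volumes[name]

    def migrate(self, name, new_array):
        """Copy blocks old -> new, then retarget the mapping.
        Applications keep doing I/O against the virtual volume the
        whole time; no host-side reconfiguration is required."""
        vol = self.volumes[name]
        for blk, data in vol.backing.blocks.items():
            new_array.blocks[blk] = data
        vol.backing = new_array        # the only 'switch' that occurs
```

Because the host only ever knows the virtual volume, swapping `vendor-A` hardware for `vendor-B` hardware is invisible to the application – which is exactly what makes the vendor-competition dynamic discussed below possible.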
A quick side comment. While the data migration use case carries a strong economic benefit for IT managers (transparent migration from old to new disk arrays), it can just as easily be used to migrate from old to new disk array ‘vendors’. For the IT manager, this has the potential for even greater economic benefit, because it creates the very real threat of competition among physical disk array vendors – driving cost down and service up. But for an incumbent disk array vendor, there’s not a lot of built-in motivation to introduce their client to such a technology. At SNW this week, it was suggested that this dynamic may be responsible for the relatively low awareness and deployment of storage virtualization technologies. Incumbent vendors are happy to keep their clients in the dark about software-defined storage and data migration use cases. Interestingly, almost 10 years after these technologies were first introduced, EMC (whose market share makes them the most frequent incumbent physical disk array vendor) is still only talking about this topic in the shadows of ‘small NDA sessions’. See Chuck’s Blog from earlier this week.
Flash storage ‘everywhere’ was identified as a more recent, and perhaps more powerful, use case. SNW drew a strong contingent of storage industry analysts from firms like IDC, ESG, Evaluator Group, Silverton Consulting and Mesabi Group. A consistent theme from the analysts I spoke with, as well as from the panel discussion, is that data- and performance-hungry workloads are driving unusually rapid adoption of flash storage. Early deployments were as simple as adding a new ‘flash’ disk type to existing physical disk arrays, but now flash is showing up ‘everywhere’ in the data path, from the server on down. The frontier now is the efficient management of this relatively expensive real estate, whether it is deployed in disk arrays, in purpose-built drawers, or in servers. Flash is simply too expensive to park whole storage volumes on, because a lot of what gets stored isn’t frequently accessed and would be better kept on something slower and less expensive. This is where the basic mechanics of storage virtualization and mobility from the data migration use case come in. At IBM, we’ve evolved the original SVC capabilities to couple those basic mechanics with analytics and automation that guide how and when to employ them most efficiently. The evolved offering, SmartCloud Virtual Storage Center, was introduced last year. Consider this scenario. You are an IT manager who has invested in two tiers of physical disk arrays. You have also added a third disk technology – a purpose-built flash drawer (perhaps an IBM TMS RamSan). You have gathered all that physical capacity and put it under the management of a software-defined storage layer like SmartCloud Virtual Storage Center. All of your application data is stored in virtual volumes that SmartCloud Virtual Storage Center can move at will across any of the physical disk arrays or flash storage. Knowing which volumes to move, when, and where to move them is where SmartCloud Virtual Storage Center excels.
Here’s an example. Let’s suppose there is a particular database-driven workload that is only active during month-end processing. The analytics engine in SmartCloud Virtual Storage Center can discover this and create a pattern of sorts that has this volume living in a hybrid pool of tier-1 and flash storage during month end and on tier-2 storage the rest of the month. In preparation for month end, the volume can be transparently staged into the hybrid pool (we call it an EasyTier pool), at which point more real-time analytics take over, identifying which blocks inside the database are being accessed most. Only those blocks are actually staged into flash, leaving the less frequently used blocks on tier-1 spinning disks. Do you see the efficiency? The icing on the cake comes when all this data is compressed in real time by the storage hypervisor. This kind of intelligent analytics – directing the mechanics of mobility from a software-defined layer – is critical to economically deploying flash.
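The block-level decision described above boils down to a simple idea: track which blocks are hot, then promote only those onto the scarce flash capacity. Here is a hedged sketch of that idea – the names are hypothetical and the logic is deliberately simplified (a real tiering engine like EasyTier also decays old samples, batches moves, and weighs migration cost), but it shows the shape of the decision.

```python
# Hypothetical sketch of block-level tiering analytics: count accesses
# per block over a monitoring window, then place only the hottest
# blocks on flash and leave everything else on spinning disk.
# Not a real EasyTier interface -- purely illustrative.

from collections import Counter

class TieringAnalytics:
    def __init__(self, flash_capacity_blocks):
        self.heat = Counter()                    # block id -> access count
        self.flash_capacity = flash_capacity_blocks

    def record_io(self, block):
        # A production engine would also age out old samples so that
        # last month's hot spots don't pin flash forever.
        self.heat[block] += 1

    def plan_placement(self):
        """Return (flash_blocks, disk_blocks) for the next interval.

        Blocks are ranked by access count; only as many as fit in
        flash are promoted, the rest stay on slower, cheaper disk."""
        ranked = [blk for blk, _ in self.heat.most_common()]
        flash = set(ranked[:self.flash_capacity])
        disk = set(ranked[self.flash_capacity:])
        return flash, disk
```

In the month-end scenario, the database’s index and hottest tables would dominate the access counts and land on flash, while the rarely read history blocks stay on tier-1 disk – the efficiency the example above describes.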
Commoditization of physical disk capacity. Yiiiikes!!! One of the more insightful observations offered by panel members, including VMware, was that if you follow the intent of a software-defined storage layer to its conclusion, it leads to a commoditization of physical disk capacity prices. From a client perspective, this is welcome news, and really, it’s economically required to keep storage viable. Think about it: data is already growing at a faster pace than disk vendors’ ability to improve areal density (the primary driver behind reduced cost), and the rate of data growth is only increasing. Intelligence, analytics, efficiency, mobility… in a software-defined storage layer will increase in value, freeing IT managers to shift, en masse, toward much lower cost storage capacity.
Another quick side comment. With EMC still lurking in the shadows on this conversation and VMware agreeing with the ultimate end state, it seems the two still have some internal issues to resolve. I don’t fault them. It’s a sobering thought for any vendor who has a substantial business in physical disk capacity. But at least for the two disk vendors represented on this week’s SNW panel, we are actively engaged in helping clients achieve the necessary end goal.
The conversation continues. Check out the blog by Kate Davis at HP, “How do you define software-defined storage?”
Join the conversation! Share your point of view here. Follow me on Twitter @RonRiffe and the industry conversation under #SoftwareDefinedStorage.