I’ve spent a good bit of time this week talking to clients, business partners, managed service providers and IBMers about their perspectives on IBM Edge. One of the strengths they point out is the diversity of programming. My experience at the conference has included main tent sessions and sessions at Executive Edge, Technical Edge, the MSP Summit and, today, Winning Edge. Winning Edge is a sales training boot camp exclusively for IBM Specialty Business Partners. It’s advertised mostly by word of mouth and on IBM PartnerWorld. Unlike other areas of Edge, the Winning Edge sessions and ensuing hallway conversations are focused on, well, winning competitive engagements. As a result, there is a fair amount of talk about the strength of IBM offerings and the weakness of competitive offerings. In other words – “there’s your competitor, go poke ’em in the eye!”
Now, before I go on, here’s my disclaimer. Although I am employed by IBM, my perspectives are my own and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management. Enough said?
Back in September of last year, IBM acquired Butterfly Software. This little company brought a tectonic shift in the way I and our customers think about the value of storage software. As has been repeated over and over this week at Edge, it’s all about economics. Butterfly has developed what they call an Analysis Engine Report (AER) that follows a straightforward thought process.
- Using a very lightweight collector, gather real data about the existing storage infrastructure at a potential customer.
- Using that data, explain in good detail the as-is effectiveness of the environment and what costs will look like in five years’ time if the customer continues on the current approach.
- Show what a transformed storage infrastructure would look like compared to the as-is approach and, more importantly, what future costs could look like compared to continuing as-is.
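To make the economics concrete, here’s a minimal sketch of the kind of five-year projection such a report might compute. Everything here is an invented placeholder for illustration (the growth rate, cost per terabyte, and efficiency factor are not actual AER inputs or outputs, and the real analysis is surely far more detailed):

```python
# Hypothetical sketch of an as-is vs. transformed cost projection.
# All numbers are invented placeholders, not real AER data.

def project_costs(start_tb, annual_growth, cost_per_tb, efficiency=0.0, years=5):
    """Return a list of yearly storage costs. An efficiency factor > 0
    models a transformed infrastructure that needs proportionally less spend."""
    costs = []
    capacity = start_tb
    for _ in range(years):
        costs.append(capacity * cost_per_tb * (1.0 - efficiency))
        capacity *= 1.0 + annual_growth   # data keeps growing either way
    return costs

# As-is: 500 TB today, 35% annual growth, $2,000/TB effective cost.
as_is = project_costs(start_tb=500, annual_growth=0.35, cost_per_tb=2000)

# Transformed: same growth, but a 40% efficiency gain (placeholder figure).
transformed = project_costs(start_tb=500, annual_growth=0.35,
                            cost_per_tb=2000, efficiency=0.40)

five_year_savings = 1.0 - sum(transformed) / sum(as_is)
```

The point of the exercise is the side-by-side view: the same growth curve priced two ways, so the customer sees projected five-year spend rather than a single-year snapshot.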
Butterfly has two flavors of AERs, one for primary storage infrastructure and one for copy data (or backup) infrastructure. They have analyzed some 850 different infrastructures scattered across every industry in most parts of the world, comprising over 2 exabytes of data. In all that analysis, they have discovered some remarkable things about IBM’s ability to transform the economic future of storage for its clients. (Editorial comment: the results probably have something to do with why IBM acquired the company.)
- When compared to as-is physical storage environments, transforming to a software-defined storage environment with IBM SmartCloud Virtual Storage Center (built on a Storwize family software-defined layer) is, on average, 63% more efficient. That’s the average; your results may vary. As an example, in my post on Tuesday I talked about LPL Financial, who followed the recommendations of a Butterfly Storage AER and saved an astounding 47% in total infrastructure costs.
- When compared to as-is competitive backup environments, transforming to an IBM Tivoli Storage Manager (TSM) approach is, on average, 38% more efficient. Again, your results may vary. For example, looking just at the mass of Backup AER results from as-is Symantec NetBackup environments, transforming to TSM was 45% more efficient. For those who had CommVault Simpana, the Backup AER results showed TSM to be 54% more economically efficient. EMC NetWorker? The transformed TSM approach was 45% less expensive. There’s data by industry and for many other competitive backup approaches, but you get the picture. Choosing a TSM approach saves money.
IBM software-defined storage vs EMC VPLEX
First, I have to say that I wish EMC would focus. Trying to figure out which horse they are riding for software-defined storage and virtualization makes it hard for sellers to figure out what they’ll be competing against. Maybe that’s the point. In my post EMC ViPR: Breathtaking but not Breakthrough, I talked about how EMC is openly asking the question, “Is ViPR a modern interpretation of what we now mean when we say ‘storage virtualization’?” It’s certainly EMC’s modern interpretation, having tried before to virtualize physical storage with Invista (circa 2005) and VPLEX (circa 2010). This week at Edge I heard an industry analyst refer to “ViPR-ware” (say it fast, you’ll get the pun), so for the moment, there’s still competitive talk about VPLEX.
In the hallway, I bumped into an IBM Business Partner whose firm has already helped 80 different clients implement active-active datacenters using the Storwize family software-defined storage layer in a stretched-cluster configuration built on the IBM SAN Volume Controller engine. Eighty clients, one Business Partner. I wonder if that’s more than the sum total of EMC VPLEX deployments in the world. Anyway, back to the larger conversation. At Winning Edge, a few observations were made about EMC’s approach to VPLEX that are worth noting.
- VPLEX is missing nearly every important storage service IT managers need: snapshots, mirroring, thin provisioning, compression, automated tiering; the list goes on. As a result, VPLEX depends on those capabilities being delivered somewhere else, either in an underlying physical disk array or in bolt-on software or appliances like EMC RecoverPoint. Because of that dependence, the mobility of VPLEX virtual volumes is limited. Think about it: if a workload stores its data on a VPLEX virtual volume and depends on services like automated tiering or thin provisioning to keep storage costs down, then that virtual volume can only live on a physical array that provides those services. It’s as if you were using VMware for a virtual machine but were told you couldn’t vMotion from a physical Dell server over to an HP server because your workload depended on some capability of the Dell server that the HP server couldn’t deliver. That scenario simply doesn’t happen (and IT managers wouldn’t tolerate it if it did) because VMware provides the required services in the hypervisor and doesn’t depend on anything from the underlying physical servers. VPLEX hasn’t gotten there yet.
- Workload migration with VPLEX is a manual pain. Let’s think about what happens in a VPLEX environment when you need to move a workload off a certain piece of physical hardware. Why would you be moving? Everyday reasons: resolving a performance issue, getting off an array that is failing, or simply unloading a device that is being replaced. Assuming the IT manager has accepted the first issue above (that tying the capabilities of a virtual volume to a piece of physical hardware is okay) and has a replacement array handy that happens to exactly match the set of capabilities available on the array being vacated, what’s the process for getting the workload moved? Well, for each and every virtual volume you want to move:
- A new physical LUN has to be created on the target array that looks exactly like the source LUN on the array being replaced.
- The virtual volume to be moved has to be mirrored between the two arrays.
- Once in sync, the mirror has to be broken and the old physical volume taken offline.
Today’s large arrays can have lots of LUNs on them, meaning the above procedure would have to be executed lots of times. How often do new arrays come and go in your datacenter? How often do you experience a performance incident? How often do repetitive procedures like this work flawlessly over and over and over? This is not something most customers I talk with would attempt. VPLEX hasn’t yet matured to the point where analytics, not people with procedures, drive virtual volume movement to avoid array performance issues, and unloading a physical array is a simple command.
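To see why the repetition stings, here’s a tiny self-contained simulation of the three per-volume steps described above. The classes and helper names are invented for illustration; they are not real VPLEX objects or CLI commands. The point is simply that evacuating an array means running the whole create/mirror/break sequence once per LUN:

```python
# Toy model of per-volume migration: every LUN needs its own
# create -> mirror -> break-and-retire cycle. Names are hypothetical.

class Array:
    def __init__(self, name):
        self.name = name
        self.luns = {}                    # lun_id -> size in GB

def migrate_volume(lun_id, source, target):
    size = source.luns[lun_id]
    target.luns[lun_id] = size            # 1. create a matching LUN on the target
    in_sync = target.luns[lun_id] == size # 2. mirror until in sync (simulated)
    assert in_sync
    del source.luns[lun_id]               # 3. break the mirror, retire the old LUN

old_array = Array("old_array")
new_array = Array("new_array")
for i in range(300):                      # a large array: hundreds of LUNs
    old_array.luns[f"lun{i}"] = 100

# Unloading the array means repeating the entire sequence per LUN:
for lun_id in list(old_array.luns):
    migrate_volume(lun_id, old_array, new_array)
```

Three hundred LUNs means three hundred chances for a typo, a mismatched LUN definition, or a mirror that never syncs, which is exactly why a repetitive manual procedure doesn’t scale the way an analytics-driven one could.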
Who is your favorite competitor to poke at? Get your stick out and leave a comment!