Backup redesign: A top priority for IT managers (Part 3)

This is the conclusion of a three-part conversation with Dr. Xin Wang, product manager for IBM Tivoli Storage Manager (TSM). In part 1, Xin discussed what she has learned about the challenges IT managers are facing that are leading to the backup redesign movement. In part 2, she began discussing her near-term plans for TSM. Let’s conclude the conversation:

The Line: It seems like there is lots to watch for in the VMware space. The final observation you made was about administrators. Say a little more about that.

Xin: The administration job is changing. Nobody has time to do anything they don’t need to be doing. We introduced a completely new approach to backup administration earlier this year with the IBM Tivoli Storage Manager Operations Center. Soon we plan to make the administrator’s job even easier.

  • Deployment of a new backup server instance can be a task that requires expertise and customization. So we’re planning to remove the guesswork with blueprints for different-sized configurations.
  • Auto deployment and configuration. This includes daily care and feeding of the backup repository, provisioning protection for a new workload (client), monitoring status and activities, redriving failures and so on. We’re expanding the Operations Center.

The Line: Xin, it sounds like the near future holds some exciting possibilities for IT managers as they redesign their backup environments. Is there anything else you would like to mention that I missed?

Xin: Actually, yes. There’s one more really important thing. Whether an IT manager is sticking with traditional backup methods or redesigning with snapshots, VMware integration or one of the best practice blueprints, oftentimes there is still a need to move some of those copies somewhere else for safekeeping. Think vaulting, offsite storage or disaster recovery. This can take time, use up network resources and result in a really large repository.

Since the days when TSM pioneered the idea of incremental forever backup, we’ve been leading the industry in data reduction to minimize strain on the environment. It’s one of the things that drive the economic savings we show in Butterfly studies. Soon we are planning some enhancements to our native client-side and server-side deduplication that will improve ingest capacity on the order of 10x. That’s ten times as much deduplicated data each day! We plan to fold this capability into our new blueprints so IT managers can get the benefit right out of the box.
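
Deduplication works by storing each unique chunk of data only once, so daily ingest is dominated by what actually changed. TSM’s real engine is far more sophisticated than this, but a minimal content-hash sketch in Python (chunk sizes and data are purely illustrative) shows the idea:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking for simplicity; real engines often use variable-size chunks

def deduplicate(data: bytes, store: dict) -> int:
    """Store only chunks whose hash is not already in the repository.
    Returns the number of bytes actually ingested (new chunks only)."""
    new_bytes = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:        # duplicate chunks cost nothing to ingest
            store[digest] = chunk
            new_bytes += len(chunk)
    return new_bytes

store = {}
day1 = b"".join(bytes([i]) * CHUNK_SIZE for i in range(10))  # 10 distinct chunks
day2 = day1[:9 * CHUNK_SIZE] + bytes([99]) * CHUNK_SIZE      # only one chunk changed
sent1 = deduplicate(day1, store)   # full first backup: 40,960 bytes ingested
sent2 = deduplicate(day2, store)   # next day: only 4,096 bytes ingested
```

Here the second day’s backup stores only the one changed chunk, which is why dedup ingest capacity, rather than raw network speed, becomes the limiting factor at scale.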

The Line: Nice! Xin, thank you for taking the time to share your insights with my readers.

If you have questions for Xin, please join the conversation by leaving a comment below.

Backup redesign: A top priority for IT managers (Part 2)

This is part 2 of a three-part conversation with Dr. Xin Wang, product manager for IBM Tivoli Storage Manager (TSM). In part 1, Xin discussed what she has learned about the challenges IT managers are facing that are leading to the backup redesign movement. Let’s continue the conversation:

The Line: So, first you mentioned that data is so big the old style of backup can’t keep up.

Xin: That’s right. For a long time, the primary method of capturing copies of data was to load a backup agent on a server, grab the copy and transfer it across some kind of network to a backup server for safe keeping. But data is getting too big for that. So today, we are helping clients redesign with a tool we call IBM Tivoli Storage FlashCopy Manager (FCM). The point of FCM is to bridge between the application that owns the data and the storage system that is housing the data so that IT managers can simply make snapshots where the data lives.

The Line: That may be new to some clients, but it seems things like FCM have been around for a while. What’s new?

Xin: You’re right. Snapshots have been around for a while and so has FCM. But the economic driver for redesigning backup environments so they use snapshots hasn’t been as strong as it is today.

The Line: Okay. How about the technical side? Snapshots are done differently on different storage devices. What storage infrastructures does FCM work with?

Xin: A lot is being made about the benefits of software defining your storage environment. One of the many benefits of that approach is that there is only one interface for snapshotting regardless of your choice in physical disk hardware. Move a workload from one kind of array to another, or even change vendors altogether, and your snapshot interface still stays the same. So:

IBM FlashCopy Manager in a Software Defined Storage environment

If an IT manager chooses to continue working with native physical hardware, the support list is more specific.

  • For applications running on Windows, FCM still supports most any physical disk.
  • For applications running on Linux or Unix, FCM works with IBM and NetApp disk arrays.
  • Soon we plan to offer an open application programming interface (API) that will enable plugins for other physical disk arrays from vendors like EMC and Hitachi.

The Line: Nice! So let’s move to your second point, that VMware is causing IT managers to redesign their backup environment.

Xin: VMware has caused IT managers to redesign a lot of things. So much so that a lot of IT managers I talk with have had to prioritize. In many cases, backup redesign has been one of those things that has been put off until now.

The Line: What do you mean “until now”?

Xin: Again, it’s about economics. The number of workloads and amount of data in these VMware estates is reaching critical mass. IT managers who have plodded along simply loading a backup agent in each guest machine are feeling pain. We are helping IT managers redesign with a tool custom-built for VMware: IBM TSM for Virtual Environments (TSM for VE). TSM for VE is already nicely integrated with the VMware vStorage APIs for Data Protection (VADP), meaning its full set of snapshot features is available to IT managers without deploying agents in every VM guest. Soon we plan to add some new and really powerful integration.

  • When restoring a virtual machine, the virtual machine disk (VMDK) will be instantly available for access the moment the restore starts. Verification can start right away, with no waiting around for data to be moved anywhere.
  • There’s new integration with the vCloud API to back up, manage and recover a vApp in vCloud Director.
  • When running workloads like Microsoft SQL Server or Exchange on VMware, there can be object or item level recovery without the need for an agent to back up the application VM.
  • There’s a richer set of administrative and reporting capabilities in the vCenter plug-in.
  • For FCM users, there is instant recovery of an entire VMware data store and coexistence with VMware Site Recovery Manager (SRM).

Check back soon for part 3 of the interview in which Xin finishes sharing her near-term plans for TSM and adds some closing thoughts. If you have questions for Xin, please join the conversation by leaving a comment below.

Backup redesign: A top priority for IT managers (Part 1)

Backup redesign continues to be toward the top of most analysts’ lists for 2013 IT priorities. I’ve talked a lot about some of the catalysts behind this trend like data growth, big data, VMware and software defined storage. With IT managers redesigning, the incumbent enterprise backup vendors have a lot of motivation to offer innovative solutions that are a bit ahead of the times. The leaders have all placed strategic bets on what the winning formula will be. I discussed these bets in my post “Forrester’s take on enterprise backup and recovery.”

For its part, IBM is being quick about helping IT managers redesign. The help starts with a clear understanding of the economic benefit a redesign can bring. After all, in today’s environment few IT managers make technology moves simply for the sake of technology. Storage is about economics. I discuss this more fully in my post “Does trying to find a better economic approach to storage give you ‘Butterflies’?” But there is still efficient technology that enables these economic savings, and the person in IBM who is ultimately responsible for the technology in IBM Tivoli Storage Manager (TSM) is the product manager, Dr. Xin Wang.

Recently I spoke with Xin about the important shifts IT managers are facing and how she is helping IT managers reimagine backup.

The Line: Xin, I’m going to start with the “Dr.” part of your title. Should folks call you the Backup Doctor?

Xin: (laughing) Well, I don’t know about that. I’m actually a doctor of Applied Physics. One thing that drove me to earn a PhD and has moved me ever since is that I love to learn. I started my career in IBM hard disk drive research, spent some time as a storage software developer and development manager, and have now been working with backup clients as a product manager for several years.

The Line: Wow, I could probably do an entire post just on your career. But let’s stay focused. What have you learned about the challenges IT managers are facing and this whole backup redesign movement?

Xin: It’s interesting. The challenges aren’t secret but they carry big implications for backup. Data is growing like crazy; that’s no secret. But it is now so big that the old method of loading an agent on a server to collect and copy backup data over a network to a tape isn’t keeping up. So IT managers are redesigning.

And what about servers? Servers aren’t servers anymore. Thanks to VMware, they are virtual machines that come, go and move around in a hurry. Traditional backup is too rigid. So IT managers are redesigning.

Administrators are changing too. The generation of backup admins who grew up tuning the environment is giving way to a new generation of backup, VMware and cloud admins who need much more intuitive and automated management tools. And so IT managers are redesigning. (Editorial comment: I discussed the change in administration in my post “Do IT managers really ‘manage’ storage anymore?”)

The Line: Okay, I think I’m seeing your trend. IT managers are redesigning. And it seems like you’ve got a clear idea of why. Can we take your list one at a time? I think my readers would be interested in what you are doing with TSM in each of these areas.

Xin: Sure, that makes sense.

Check back for part 2 of the interview in which Xin shares her near-term plans for TSM. If you have questions for Xin, please join the conversation by leaving a comment below.

Does trying to find a better economic approach to storage give you “Butterflies”? (Part 2)

This is the conclusion of a two-part conversation with Liam Devine, the global post-sales face of Butterfly. In Part 1, we talked about Butterfly’s unique approach to storage infrastructure analytics and how Butterfly came to be an IBM company.

The Line: It’s been a couple of years since 2011, and you have had the opportunity to both analyze a lot of data and have a number of conversations with financial decision makers. What have you found to be the most compelling talking points?

Liam: The most compelling stuff comes from the data. We’ve analyzed hundreds of different infrastructures in most every conceivable configuration and have discovered some extraordinary things about software defining storage and IBM’s approach to backup.

  • When compared to an as-is physical storage environment, transforming to a software-defined storage environment with something like IBM SmartCloud Virtual Storage Center can be, on average, 63% more efficient. That’s the average; your results may vary. (Editorial comment: in one of my posts from IBM Edge 2013 I talked about LPL Financial, who followed the recommendations of a Butterfly Storage AER and saved an astounding 47% in total infrastructure. Listen to Chris Peek, Senior Vice President of IT at LPL Financial.)
  • When compared to as-is competitive backup environments, transforming to an IBM Tivoli Storage Manager approach can be, on average, 38% more efficient. Again, your results may vary. [Modified: For example, when we segment just the mass of Backup AER results from as-is Symantec NetBackup or CommVault Simpana or EMC NetWorker environments, each shows that transformation to a TSM approach produces different, and in these cases somewhat stronger, economic savings.] We’ve got data by industry and for many other competitive backup approaches, but you get the picture. Choosing a TSM approach can save money.

The Line: For my readers, the Butterfly team had discovered most of these trends before IBM acquired them. As I noted above, that had a lot to do with IBM’s interest in the company. [Modified: Now that IBM owns Butterfly, they have been quick to add legal disclaimers around anything that might be construed as a competitive claim*.]

Now Liam, switching back to you. Butterfly has been part of IBM for about 11 months. How has the transition been?

Liam: Very successful and pretty much as I had expected. We had a few minor technical hiccups in switching infrastructure (freeware and open source components to a more IBM standard architecture), as you would expect, but those hiccups are behind us now. The global IBM sales force and Business Partner community has created a lot more demand for our analytics, so we are busy scaling out our delivery capability. The good news is that we’re meeting our business goals.

The Line: Can you give us an idea of what you and the team are working on next?

Liam: Right, well we’re working on a couple of important things. First is an automated “self-service” AER generation model that will enable us to scale out further still and present the AERs as a service to IBM and its Business Partners. And second, as you can imagine, the data-driven AER reports are causing a lot of IT managers to rethink their infrastructure and begin transitioning to a new software defined approach. We are continuing to refine our migration automation to assist clients with the transition, especially between backup approaches.

The Line: Before ending, I have to ask about your Twitter handle @keydellkop. What’s the story?

Liam: Hmmm, it’s a bit of a strange explanation, I’m afraid; it’s more a play on words. I see much of life as a set of confused circumstances that can be placed into an ultimate order. This reminds me of the Keystone Kops. On that theme, I reside in an area called Keydell in the south of England, and being a manic Liverpool supporter, you get The Kop (the famous Liverpool stand at Anfield stadium). Hence @KeydellKop. All tweets are my own, covering such subjects as Information Protection, Liverpool Football Club and life in general, with a smidge of humor thrown in where appropriate.

The Line: Liam, thank you for spending a few minutes with me and sharing your story with my readers.

If you have questions for Liam, please join the conversation below.

* Backup Analysis Engine Reports from more than 1.5 exabytes of data analyzed by Butterfly Software. Savings are the average of individual customer Analysis Engine Reports from Butterfly Software, May 2013, n=450. The savings include cumulative 36-month hardware, hardware maintenance, and electrical power savings. Excludes one-time TSM migration cost. All client examples cited or described are presented as illustrations of the manner in which some clients have used IBM products and the results they have achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions. Contact IBM to see what we can do for you.

Does trying to find a better economic approach to storage give you “Butterflies”? (Part 1)

Recently there has been a lot of talk coming out of IBM about the economics of storage. In fact, all of my top 5 observations from IBM Edge 2013 had something to do with economics. Sure, technology advancements are still important, but increasingly what CIOs are chasing is a clear understanding of the economic benefits a new technology approach can bring.

Late last year IBM acquired Butterfly Software, a small company in the United Kingdom that had developed some BIG thoughts around communicating the economic benefits brought by certain approaches to storage. Butterfly has developed what they call an Analysis Engine Report (AER) that follows a straightforward thought process.

  1. Using a very lightweight collector, gather real data about the existing storage infrastructure at a potential customer.
  2. Using that data, explain in good detail what the as-is effectiveness of the environment is and what costs will look like in five years’ time if the customer continues on the current approach.
  3. Show what a transformed storage infrastructure would look like compared to the as-is approach, and more importantly what future costs could look like compared to continuing as-is.
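
The projection step can be illustrated with back-of-envelope arithmetic. The growth rate and cost figures below are invented for illustration; they are not Butterfly’s model or data:

```python
def five_year_outlook(current_tb, annual_growth, cost_per_tb_year):
    """Project yearly storage cost if capacity grows at a constant rate."""
    costs = []
    tb = current_tb
    for _ in range(5):
        costs.append(tb * cost_per_tb_year)
        tb *= 1 + annual_growth
    return costs

# Assumed inputs: 500 TB today, 40% annual growth, $1,000/TB-year as-is
as_is = five_year_outlook(500, 0.40, 1000)
# Assume the transformed environment (dedup, thin provisioning, etc.) costs $600/TB-year
transformed = five_year_outlook(500, 0.40, 600)
savings = 1 - sum(transformed) / sum(as_is)
print(f"{savings:.0%}")  # 40%
```

Even this toy version shows why the five-year view matters: at 40 percent annual growth, most of the cumulative cost lands in the later years, so a per-terabyte efficiency gain compounds into a large absolute saving.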

Butterfly has two flavors of AERs, one for primary storage infrastructure and one for copy data (or backup) infrastructure. They have analyzed some 850 different infrastructures scattered across every industry in most parts of the world and comprising over 2 exabytes of data. In all that analysis, they have discovered some remarkable things about IBM’s ability to transform the economic future of storage for its clients. (Editorial comment: the results probably have something to do with why IBM acquired the company.)

Butterfly AER

I was able to catch up with the global post-sales face of Butterfly, Liam Devine, to talk about the company and where he sees the storage economics conversation going (see if you can hear his distinctly British accent come through).

The Line: Liam, let’s start with a little background for my readers. You’ve been a systems and storage manager, consulted for some pretty impressive companies in finance and healthcare and even spent a little time at vendors like NEC and EMC.

Liam: That’s right. I’ve had the pleasure of holding numerous IT roles in a variety of interesting companies for some 14 years before moving over to The Dark Side, or vendor land, where I have been for the past 12 years. The majority of that time was spent at EMC in two stints, first supporting financial customers and second supporting Electronic Data Systems (EDS, now HP Enterprise Services).

The Line: Okay, so rewind us back to 2011. What was the motivation for joining Butterfly Software?

Liam: Everything is becoming software defined. Compute is ahead of storage, but storage is accelerating quickly. The reasons are rooted in economics. I became aware of this Butterfly company that was creating unique analytics to help communicate the economic value of shifting from traditional storage infrastructure approaches to more software-oriented approaches. Once I had spoken to the founders and understood their strategic vision encompassing both primary storage infrastructure and data, there was nowhere else I wanted to be.

Check back soon for Part 2 of the interview as Liam shares some of the extraordinary savings that Butterfly analytics have uncovered.

For corporate BYOD, does Sync/Share equal Backup?

Last December while attending the 2012 Gartner Data Center Conference in Las Vegas, I listened to an insightful presentation by analysts Sheila Childs and Pushan Rinnen on the bring-your-own-device (BYOD) phenomenon. They were particularly focused on issues related to protecting an organization’s data in a BYOD world (more on why in a moment). One scenario that captured my attention went something like this.

It’s my device. I had it before I brought it to work, and I was using Dropbox or iCloud to sync and share all my files. Now, my device has work data on it too. My security-conscious CIO doesn’t want work data shared on those public services. But I’m accustomed to, and almost dependent on, my sync and share capability, and my organization hasn’t yet given us a private alternative.

Now, in my role as a technology strategist I spend a good bit of time helping to plan our investments. With the speed at which mobile and social technologies are sweeping through organizations, I have to admit the case that Sheila, Pushan and other Gartner analysts made that week for the rapidly emerging data protection crisis in BYOD sync and share was compelling. It occurred to me that credible vendors who were able to solve the problem in short order would be in high demand. That was eight months ago.

Fast forward seven months

In July, Forrester analysts Ted Schadler and Rob Koplowitz published The Forrester Wave™: File Sync And Share Platforms, Q3 2013 in a quest to uncover those credible vendors. I liked the way they characterized the problem. “Employees’ need to synchronize files grew from a whisper to a scream over the past few years. . . . The scream will grow louder as the number of tablets will triple to 905 million by 2017 to join the billions of computers and smartphones used for work.” The report evaluated and scored 16 of the most significant solution providers against 26 criteria. Among the leaders was IBM SmartCloud Connections. You can see the complete list of leaders here.

Change is here

The interesting thing that most folks miss in the sync and share conversation is that it’s about more than just syncing and sharing. As BYOD smartphones and tablets begin to proliferate in the workplace, document management will shift from email attachments and file servers into social collaboration. Forrester points to a further social shift from casual partner collaboration to compliant workflow in regulated industries.
That kind of data is important – and the reason that the Gartner analysts were focused on the data protection issues of this BYOD world. Organizations today have well-matured processes for protecting data on file servers and email systems, usually with an enterprise backup product. I commented on this set of tools in my post on Forrester’s Take on Enterprise Backup and Recovery. But as corporate information is relocated from file servers and email systems to sync and share systems, Gartner had an unmistakable reminder for its customers: Consumer File Sync/Share Is Not Backup.

I agree! The good news is that IBM has taken the time to ensure its enterprise backup product, IBM Tivoli Storage Manager Suite for Unified Recovery, protects synched and shared files in IBM Connections with all the same efficiency as it does file servers, email systems and most any other data important to an organization.

What is your organization doing with file sync and share? How are you protecting that information?

Forrester’s Take on Enterprise Backup and Recovery

Recently, Forrester published The Forrester Wave™: Enterprise Backup And Recovery Software, Q2 2013. I wasn’t surprised by their suggestion that “CommVault [Simpana 10.0], EMC [Avamar 7.0 and NetWorker 10.1], IBM [TSM 6.4], and Symantec [NetBackup 7.5] lead the pack. It’s a tight four-horse race for the top honors — [they] all scored high on strategy and current offerings.” These are the four vendors that are always pushing and shoving on each other in analyst comparisons. The thing that caught my attention in this report was the expert job analyst Rachel Dines did in peeling back a complex market space to uncover some important strategic observations about each vendor.

After having participated in the IBM response to the investigation Forrester did, I have to give a shout out to Rachel for being thorough. Like most analyst studies, the Forrester Wave was backed by a detailed questionnaire. But Rachel went one step further, requiring an exceptionally thorough live demonstration. You can watch the somewhat raw one-hour TSM demonstration that IBM did.

Forrester’s punch lines

“CommVault excels with an integrated platform.” I’ve been watching CommVault for years and agree with Rachel’s punch line. CommVault established their position in the market by unifying such disciplines as backup and archive in a single interface and targeting jack-of-all-trades administrators in small and medium businesses. In recent years, they have carried that unified focus into the upper end of medium businesses and a healthy number of enterprises.

“EMC focuses on hardware and software integration.”  EMC is continuing to do what I think I would do if I were in their shoes. In most segments, their disk array business enjoys leading market share. So when it comes to software, they tend towards clothing their disk install base. It’s the “Would you like fries with that hamburger?” model. A vendor-centric closed loop strategy like that can result in some nice solutions but some customers will weigh the value vs. the circular lock-in it creates.

“Symantec reinvents itself and refines focus.” I’ve also been watching the Symantec backup business for years, and I think Rachel sums up what they need at the moment – some reinvention and solid execution. When Symantec acquired VERITAS in 2005, they got some good technology. The downside of the merger was that VERITAS’ laser focus got watered down and the portfolio has struggled.

“IBM simplifies management, focuses on cloud.” Sometimes I wish I had a crystal ball to look into the future, but who has one of those? Maybe a magic eight ball will have to do. Rachel points out that “…many firms are weary of the constant battle with their backup software and are looking for a change.” The battle is with deploying agents, managing backup windows, resolving failed backups, upgrading and patching software, and performing restores.
Magic eight ball, when will IT managers just give up? IBM shares Rachel’s point-of-view that IT managers are already giving up in places – running old versions of software, letting maintenance contracts lapse, and even stopping backups on certain systems.
Magic eight ball, what are IT managers going to do instead? There is a lot being said about cloud backup, but mostly in the context of where backup copies should be stored. Everything on premise for quick restore, everything off premise in the public cloud for cost efficiency, or some hybrid of the two? I think the storage location will work itself out as IT managers balance economics and recovery time objectives. Hybrid storage models will ultimately prevail. But that doesn’t address the real question.
Magic eight ball, if IT managers are giving up on traditional management of backup systems, what are they going to do instead? Rachel picked up on two important strategic bets IBM is making with TSM.

  1. The generation of storage scientists that have managed backup, and all storage for that matter, is aging (like me). I talked about this phenomenon in my post Do IT managers really “manage” storage anymore? So, for those firms who choose to continue the battle on their own, IBM, with a lot of help from its clients, is addressing the problem with an entirely new approach to storage administration.
  2. Rachel also observed that there are a good number of IT managers who are tired of the battle. This is where I differ from the pundits on cloud and backup. In this context, I think cloud is less about a storage location and more about a management model. Do I want to continue doing backup on my own (after all, as Rachel points out, “it is painful, slow, and expensive, and seems to provide zero strategic business value”), or do I want to outsource it to a management provider in the cloud? IBM is spending a lot of time working with managed service providers who will take over all the day-to-day headaches of meeting backup service level agreements (SLAs), in most any combination of on-premise, public cloud, or hybrid storage configurations you want. Here are just a few.

Watch the LiveStream of Cobalt Iron CEO Richard Spurlock as he talks about the synergies between these two strategic bets.

At IBM, this entirely new approach to administration and focus on cloud as a management model are being coupled with a technology suite that Rachel says brings “…strengths in deduplication, manageability (due to significant improvements in TSM Operations Center), continuity, and restore features” and “…excels in extremely large and complex environments…”. Magic eight ball, do you know of an IT manager who would say “gee, my environment isn’t complex at all. It’s just downright simple”?

Whether you intend to continue the battle on your own or let a cloud service provider take over the headache, it’s worth looking into a software suite that excels at the kind of environment you have.

Final Thoughts – Edge 2013 Top 5 Observations

IBM’s second annual installment of the Edge conference was a huge success, and a great improvement over the inaugural year. Here are my top five observations from the week.

  1. There is a confluence of cultural changes driving big data and breakneck I/O rates.
    1. In my post on Edge 2013 Day 1 I recalled a story by IBM’s Stephen Leonard about the data trails we are all creating, wider by the day and as unique to an individual as fingerprints and DNA. He noted that these data trails are also being created by man-made things like roads, railways, cities, and supply chains as well as nature-made things like rivers, wind, and cattle.
    2. On Tuesday, IBM’s Tom Rosamilia used a New York Times article titled How Companies Learn Your Secrets to describe how big data is being used to target marketing to a demographic of one. His observation was that we are developing an expectation that we’ll be reached out to individually and that this is feeding marketers’ desire to analyze more and more data.
    3. IBM’s Clod Barrera made an obvious point that I just hadn’t thought about before. We are instrumenting everything – Stephen’s data trails from above. And we are processing that big data with analytics – Tom’s observation. Clod noted that as machines begin collecting data and passing it to other machines for analysis, the rate of input and output (I/O) operations is increasing at a skyrocketing pace.
  2. Storage decisions are about economics
    1. IBM’s Ambuj Goyal first made the point, and it was carried throughout the conference. The client conversations IBM is having seem to always be some combination of four themes that hit on the economics of IT, of the business being up and running, and of risk.
      1. Keep me on the technical innovation curve… cost effectively
      2. Get me up and running… fast
      3. Keep me up and running… securely
      4. Help me mitigate risk… economically

      As I noted in my coverage of Edge 2013 Day 2, Intel’s Kim Stevenson summed it up best when she said “Organizations don’t buy technology, they buy benefits.”

  3. Software defined storage (SDS) is about improving storage economics
    1. IBM’s Jamie Thomas, in talking about software defined environments (SDE), commented on how much of what is being done in creating patterns for workloads and virtualized infrastructure is about improving labor effectiveness and costs.
    2. Back in April I posted on a round table discussion I participated in at Storage Networking World – SNW Spring 2013 recap – on the topic of storage hypervisors, what the industry is now more commonly referring to as software defined storage. Toward the end of that post I commented on the coming commoditization of physical disk capacity. This week at Edge, IBM’s Ambuj Goyal said he would be happy for SDS to help commoditize storage hardware, IBM’s and everybody else’s. It’s about economics.
    3. In my post on Edge 2013 Day 4 I described IBM’s relatively recent acquisition of Butterfly Software. The motivation for the acquisition was all about effective communication of storage economics to a happily listening potential client base.
  4. Flash is about improving the economics of I/Os
    1. IBM's Ed Walsh described an interesting set of downstream effects from flash storage. Compared to spinning disk, flash uses less power, cooling, and floor space. That seems fairly well understood. But have you thought about the following:
      1. When I/O response time is faster, CPUs spend less time waiting.
      2. When CPUs spend less time waiting for I/Os, you need fewer cores.
      3. When you have fewer cores, the environmental impact of your processor farm is reduced.
      4. When you have fewer cores, you pay less in software licenses.
    2. Setting all of Ed’s downstream effects aside, Clod Barrera predicted that by the end of 2013, the cost per usable gigabyte in a flash system will be better than that of a high-end 15K RPM spinning disk.
    3. Several IBM speakers and customer testimonials focused on the economically powerful use cases for combining SDS + flash. This one didn’t surprise me, but it was confirming of comments I made on Flash storage ‘everywhere’ back in April.
  5. Tivoli Storage Manager and the announcement of its Operations Center are about improving backup administration economics. In my coverage of Edge 2013 Day 4, in the section on Butterfly AERs, I noted that analysis of some 850 different infrastructures scattered across every industry in most parts of the world and comprising over 2 exabytes of data showed that, when compared to as-is competitive backup environments, transforming to an IBM Tivoli Storage Manager approach is, on average, 38% more efficient. It’s about economics!

(Bonus) Cloud is about improving the economics of consumption. In my coverage of Edge 2013 Day 3 and the Managed Service provider (MSP) Summit I shared an example of how traditional value-added resellers are transforming as their small and medium business customers begin to consume technology as cloud services. This is much more than a technology fad. Like the other examples discussed above, the shift to cloud is about economics.
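To put rough numbers on Ed Walsh's chain of downstream flash effects (item 4 above), here's a back-of-envelope sketch. The workload size and I/O-wait fractions are illustrative assumptions of mine, not IBM figures:

```python
import math

def cores_needed(compute_cores: float, io_wait_fraction: float) -> int:
    """Cores required when each core loses a fraction of its time to I/O wait."""
    return math.ceil(compute_cores / (1.0 - io_wait_fraction))

# Hypothetical workload needing 48 cores of pure compute time:
disk_cores = cores_needed(48, 0.35)   # spinning disk: assume ~35% I/O wait
flash_cores = cores_needed(48, 0.05)  # flash: assume ~5% I/O wait

print(disk_cores, flash_cores)
```

With those assumed wait fractions, the same workload needs roughly a third fewer cores on flash, which is exactly where the power, cooling and software license savings in points 1 through 4 come from.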

That’s my top 5 and a bonus. What did you find most interesting at IBM Edge 2013? Leave a comment.

I’ll see you next year at Edge 2014!

Edge 2013 Day 4 – Poke in the eye

I’ve spent a good bit of time this week talking to clients, business partners, managed service providers and IBMers about their perspective on IBM Edge. One of the strengths they point out is the diversity of programming. My experience at the conference has included main tent sessions and sessions at Executive Edge, Technical Edge, the MSP Summit and, today, Winning Edge. Winning Edge is a sales training boot camp exclusively for IBM Specialty Business Partners. It’s advertised mostly by word-of-mouth and on IBM PartnerWorld. Unlike other areas of Edge, the Winning Edge sessions and ensuing hallway conversations are focused on, well, winning competitive engagements. As a result, there is a fair amount of talk about the strength of IBM offerings and the weakness of competitive offerings. In other words – “there’s your competitor, go poke ’em in the eye!”

Now, before I go on, here’s my disclaimer. Although I am employed by IBM, my perspectives are my own and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management.  Enough said?

Butterfly AERs

Back in September last year, IBM acquired Butterfly Software. This little company brought a tectonic shift in the way I and our customers think about the value of storage software. As has been repeated over and over this week at Edge, it’s all about economics. Butterfly has developed what they call an Analysis Engine Report (AER) that follows a straightforward thought process.

  1. Using a very lightweight collector, gather real data about the existing storage infrastructure at a potential customer.
  2. Using that data, explain in good detail what the as-is effectiveness of the environment is and what costs will look like in five years’ time if the customer continues on the current approach.
  3. Show what a transformed storage infrastructure would look like compared to the as-is approach, and more importantly what future costs could look like compared to continuing as-is.
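The three-step thought process above can be sketched as a toy cost projection. Everything here – the growth rate, the unit cost and the savings percentage – is an illustrative assumption of mine, not Butterfly's actual model:

```python
def five_year_cost(tb_now: float, annual_growth: float, cost_per_tb_year: float) -> float:
    """Cumulative five-year cost with compounding capacity growth (step 2)."""
    total, tb = 0.0, tb_now
    for _ in range(5):
        total += tb * cost_per_tb_year
        tb *= 1 + annual_growth
    return total

# Step 1 stands in for the lightweight collector: assume 500 TB today,
# growing 25% a year, at a notional $1,000 per TB per year to operate.
as_is = five_year_cost(500, 0.25, 1000)

# Step 3: compare with a transformed environment, e.g. 38% more efficient.
transformed = as_is * (1 - 0.38)
print(round(as_is), round(transformed))
```

The point of a real AER, of course, is that step 1 uses actual collected data rather than assumed growth rates and unit costs.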

Butterfly has two flavors of AERs, one for primary storage infrastructure and one for copy data (or backup) infrastructure. They have analyzed some 850 different infrastructures scattered across every industry in most parts of the world and comprising over 2 exabytes of data. In all that analysis, they have discovered some remarkable things about IBM’s ability to transform the economic future of storage for its clients. (Editorial comment: the results probably have something to do with why IBM acquired the company).

  • When compared to as-is physical storage environments, transforming to a software-defined storage environment with IBM SmartCloud Virtual Storage Center (built on a Storwize family software-defined layer) makes the economic outlook, on average, 63% more efficient. That’s the average; your results may vary. As an example, in my post on Tuesday I talked about LPL Financial, who followed the recommendations of a Butterfly Storage AER and saved an astounding 47% in total infrastructure.
  • When compared to as-is competitive backup environments, transforming to an IBM Tivoli Storage Manager approach is, on average, 38% more efficient. Again, your results may vary. For example, when you look just at the mass of Backup AER results from as-is Symantec NetBackup environments, transforming to IBM TSM was 45% more efficient. For those who had CommVault Simpana, the Backup AER results showed TSM to be 54% more economically efficient. EMC NetWorker? The transformed TSM approach was 45% less expensive. There’s data by industry and for many other competitive backup approaches, but you get the picture. Choosing a TSM approach saves money.

IBM software-defined storage vs EMC VPLEX

First I have to say that I wish EMC would focus. Trying to figure out which horse they are on for software defined storage and virtualization makes it hard for sellers to figure out what they’ll be competing against. Maybe that’s the point. In my post on EMC ViPR: Breathtaking but not Breakthrough, I talked about how EMC is openly asking the question “Is ViPR a modern interpretation of what we now mean when we say ‘storage virtualization’?” It’s certainly EMC’s modern interpretation having tried before to virtualize physical storage with Invista (circa 2005) and VPLEX (circa 2010). This week at Edge I heard an industry analyst refer to “ViPR-ware” (say it fast, you’ll get the pun) so for the moment, there’s still competitive talk about VPLEX.

In the hallway, I bumped into an IBM Business Partner whose firm has already helped 80 different clients implement active-active datacenters with the Storwize family software-defined storage layer in a stretched-cluster configuration using the IBM SAN Volume Controller engine. Eighty clients, one Business Partner. I wonder if that’s more than the sum total of EMC VPLEX deployments in the world? Anyway, back to the larger conversation. At Winning Edge there were a few observations made about EMC’s approach to VPLEX that are worth noting.

  1. VPLEX is missing almost every important storage service IT managers need: snapshots, mirroring, thin provisioning, compression, automated tiering – the list goes on. As a result, VPLEX depends on those capabilities being delivered somewhere else, either in an underlying physical disk array or in bolt-on software or appliances like EMC RecoverPoint. Because of that dependence, the mobility of VPLEX virtual volumes is limited. Think about it: if a workload stores its data on a VPLEX virtual volume and that data expects services like automated tiering or thin provisioning to keep storage costs down, then the virtual volume is limited in its mobility because it must be stored on a physical array that provides those services. It’s as if you were using VMware for a virtual machine but were told you couldn’t vMotion from a physical Dell server over to an HP server because your workload depended on some capability of the Dell server that the HP server couldn’t deliver. That scenario simply doesn’t happen (and IT managers wouldn’t tolerate it if it did) because VMware provides the required services in the hypervisor and doesn’t depend on anything from the underlying physical servers. VPLEX hasn’t gotten there yet.
  2. Workload migration with VPLEX is a manual pain. Let’s think about what happens in a VPLEX environment when you need to move a workload off a certain piece of physical hardware. Why would you be moving? Everyday reasons: resolving a performance issue, getting off an array that is failing, or simply unloading a device that is being replaced. Assuming the IT manager has convinced themselves that issue #1 above is okay (that tying the capability of a virtual volume to some piece of physical hardware is acceptable), and that they have a replacement array handy that happens to exactly match the set of capabilities available on the array being vacated, what’s the process for getting the workload moved? Well, for each and every virtual volume you want to move:
  • A new physical LUN has to be created on the target array that looks exactly like the source LUN on the array being replaced.
  • The virtual volume to be moved has to be mirrored between the two arrays.
  • Once in sync, the mirror has to be broken and the old physical volume taken offline.

Today’s large arrays can have LOTS of LUNs on them, meaning the above procedure would have to be executed LOTS of times. How often do new arrays come and go in your datacenter? How often do you experience a performance incident? How often do repetitive procedures like this work flawlessly over and over and over? This is not something that most customers I talk with would attempt. VPLEX hasn’t yet matured to the point where analytics, not people with procedures, drive virtual volume movement to avoid array performance issues, and where unloading a physical array is a simple command.
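To see why the repetition matters, here's an illustrative loop over that three-step procedure. The function is a hypothetical stand-in, not an EMC API; the point is simply that the work scales linearly with the number of LUNs:

```python
def evacuate_array(volumes, target_array):
    """Repeat the three manual migration steps once per virtual volume."""
    steps = []
    for vol in volumes:
        steps.append(f"create matching LUN for {vol} on {target_array}")
        steps.append(f"mirror {vol} to the new LUN and wait for sync")
        steps.append(f"break the mirror and retire {vol}'s old LUN")
    return steps

# A modest 500-LUN array turns into 1,500 manual, must-not-fail steps:
steps = evacuate_array([f"vol{i:03d}" for i in range(500)], "array-B")
print(len(steps))
```

Each of those steps is a human action in a change window, which is exactly the gap between "procedure" and "simple command."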

Who is your favorite competitor to poke at? Get your stick out and leave a comment!

Edge 2013 Day 3 – Clouds over the horizon

Today I ventured from Executive Edge into Technical Edge and the Managed Service Provider (MSP) Summit.

At Technical Edge, I checked in on two of the hotter subject areas. One was VMware data protection and the other was active-active data centers.

  1. In so many cases, years ago when pockets of VMware began to spring up in datacenters, IT teams weren’t yet well coordinated around this new idea of server virtualization. So, along with pockets of VMware came pockets of tactical recovery tools. Today, VMware is mainstream to the business, and IT managers are reducing risk by turning to more dependable, serious data protection techniques. The standing-room-only attendance in this session was testament to the client interest in IBM Tivoli Storage Manager for Virtual Environments (TSM for VE) and its close cousin, IBM FlashCopy Manager (FCM). Earlier this year, IBM released version 6.4 of TSM for VE, a data protection tool deeply integrated with the VMware vStorage APIs for Data Protection and administered through VMware vCenter. IBM has taken these standard integration points and added its own efficiency spin with snapshots and its unique incremental-forever data storage approach, giving VMware administrators what they want (seamless VMware operation) and storage administrators what they want (hyper efficiency in cost). Check out my post VMware backup for the iPOD generation for more information. FCM, the close cousin, takes the integration one step further by adding snapshot assist from physical IBM and NetApp hardware as well as from the Storwize family software-defined storage layer. In the latter case, because it’s software-defined storage, the snapshot assist works regardless of what physical storage clients happen to choose.
  2. One of the more interesting use cases bubbling up in software-defined environments is the active-active datacenter. Because the virtual resources in a software-defined environment are not tied down to a physical piece of equipment, they are mobile. The first evolution was to move resources around in a single datacenter – virtual machines moving from one physical server to another, and virtual storage moving from one physical array (or tier, or vendor) to another. The active-active datacenter takes the notion of virtual mobility one step further, giving the ability to duplicate or move a virtual resource from one physical infrastructure to another at distance. In the case of the Storwize family software-defined storage layer, this capability is referred to as the stretched-cluster configuration using the IBM SAN Volume Controller engine. This session discussed the use of stretched-cluster with VMware vMotion, IBM PowerVM Live Partition Mobility, and Oracle Real Application Clusters (RAC) to transparently move workloads and their associated storage between active-active datacenters up to 300 km apart. IBM has already helped upwards of 300 customers implement active-active datacenters with software-defined stretched-cluster configurations, and judging from the interest at Edge, the number is going to grow quickly.

Over in the MSP Summit the conversation was much less technical in nature. Hundreds of existing and emerging MSPs were gathered to talk about trends and business models. Some observations that captured my attention:

  • The traditional value-added reseller (VAR) business is evolving. Their traditional small and medium business (SMB) customers are being pressured by their boards and CEOs to do more with less. As they strive to meet those requirements, SMBs are among the early adopters who look to service contracts with trusted partners as a replacement for buying and operating their own infrastructure.
  • IBM is actively orchestrating connections between independent software vendors (ISVs) and service providers through its PartnerWorld program. It used to be that the connections IBM focused on were within a geographical area shared by ISVs and a certain set of VARs. Today, driven by cloud, the connections can reach over the horizon.
  • One example is in the area of cloud backup services. FrontSafe is an MSP in Denmark. They have used the cloud delivery model to offer backup services based on IBM Tivoli Storage Manager (TSM) to thousands of SMB customers in their country, including more than 2,000 dentist offices. They are also an ISV who offers their portal software to other MSPs, helping them build cloud backup service businesses. Working with IBM, FrontSafe has now connected with MSPs in 14 countries. Companies like iSanity in South Africa and Cobalt Iron in the United States have joined FrontSafe in an ever-widening group of MSPs who are successfully reaching new customers with TSM-based cloud backup services.

Looking forward to day 4 of Edge!