IBM Spectrum Protect – Crash Diet for Your Data Protection Budget

My career in storage started back in the late 1980s when the IT world revolved around the computer system and everything else was considered a sub-system (I guess in some ways that made me a sub-administrator). The discipline of managing storage assets was just taking hold and the first order of business was to ensure all the corporate data was protected. Data that fed mainframe applications topped the list for most organizations, but data associated with mission-critical client-server workloads was growing rapidly. It was into this world that the great-great-grandfather of IBM Spectrum Protect was born.

Trivia question
Leave a comment to play: Who can name the complete family lineage of IBM Spectrum Protect? Bonus points for expanding all the acronyms.

The world of IT has evolved a lot since then. Data is no longer a sub-thought, it is central – the new currency of business. The race to simply get all the important data protected is largely over. Spectrum Protect is now a highly evolved one-stop family that IT managers use to do that job. It is tightly integrated with workloads like databases, email systems and ERP applications; with the hypervisors they run in; with the file systems and storage devices they store their data on; and with the data capture tools that surround them such as snapshot and replication. It also includes advanced data reduction techniques like deduplication and compression.  Check out the live demo!

The question of simply ensuring your important data can be protected has been answered. The question now for most of the clients I talk to is just how efficiently the job of data protection can be done. They want to minimize the budget for data copies so they can shift investment to new business growth initiatives.

A few years ago IBM acquired Butterfly Software, a small company in the United Kingdom that had developed some BIG thoughts around communicating the economic benefits brought by certain approaches to storage. Butterfly had developed what they called an Analysis Engine Report (AER) that followed a straightforward thought process.

  1. Using a very lightweight collector, gather real data about the existing storage infrastructure at a potential customer.
  2. Using that data, explain in good detail what the as-is effectiveness of the environment is and what costs will look like in five years’ time if the customer continues on the current approach.
  3. Show what a transformed storage infrastructure would look like compared to the as-is approach, and more importantly what future costs could look like compared to continuing as-is (a toy cost-projection sketch follows this list).
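
To make that thought process concrete, here is a toy sketch of the kind of five-year projection an AER produces. Every number in it is an illustrative assumption of mine (the growth rate, the cost per terabyte, the data-reduction factor), not Butterfly’s actual model:

```python
# Illustrative five-year cost projection in the spirit of an AER.
# All figures are invented assumptions, not Butterfly's actual model.

def project_costs(start_tb, annual_growth, cost_per_tb, reduction=1.0, years=5):
    """Year-by-year storage cost, applying a data-reduction factor."""
    costs, tb = [], start_tb
    for _ in range(years):
        costs.append(tb * reduction * cost_per_tb)
        tb *= 1 + annual_growth
    return costs

as_is = project_costs(start_tb=500, annual_growth=0.35, cost_per_tb=300)
# Assume the transformed environment stores ~38% less data overall
# thanks to dedup, compression and incremental-forever capture.
transformed = project_costs(start_tb=500, annual_growth=0.35, cost_per_tb=300,
                            reduction=0.62)

print(f"As-is 5-year total:       ${sum(as_is):,.0f}")
print(f"Transformed 5-year total: ${sum(transformed):,.0f}")
print(f"Projected savings:        {1 - sum(transformed) / sum(as_is):.0%}")
```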

Using the Butterfly technology, IBM has partnered with clients to analyze thousands of different infrastructures scattered across every industry in most parts of the world and comprising exabytes of data. In all that analysis, our clients have discovered some remarkable things about software-defining storage and IBM’s ability to help transform the economic future of storage. One area of specialty for Butterfly is backup environments.

When compared to as-is competitive backup environments, transforming to an IBM Spectrum Protect approach can be, on average, 38% more efficient. Of course your results may vary. For example, when we look at just the mass of results from as-is Symantec NetBackup or CommVault Simpana or EMC NetWorker environments, each shows that transforming to a Spectrum Protect approach produces different, and in these three cases at least, somewhat stronger economic savings. We’ve got data by industry and for many other competitive backup approaches but you get the picture. Upgrading a backup environment to IBM Spectrum Protect is like a crash diet for your data protection budget.

The best way to see for yourself is to contact IBM or an IBM Business Partner and ask for a Butterfly Backup AER study.

Join the conversation with #IBMStorage and #softwaredefined

Backup redesign: A top priority for IT managers (Part 3)

This is the conclusion of a three-part conversation with Dr. Xin Wang, product manager for IBM Tivoli Storage Manager (TSM). In part 1, Xin discussed what she has learned about the challenges IT managers are facing leading to the backup redesign movement. In part 2, she began discussing her near term plans for TSM. Let’s conclude the conversation:

The Line: It seems like there is lots to watch for in the VMware space. The final observation you made was about administrators. Say a little bit there.

Xin: The administration job is changing. Nobody has time to do anything they don’t need to be doing. We introduced a completely new approach to backup administration earlier this year with the IBM Tivoli Storage Manager Operations Center. Soon we plan to make the administrator’s job even easier.

  • Deployment of a new backup server instance can be a task that requires expertise and customization. So we’re planning to remove the guesswork with blueprints for different-sized configurations (a hypothetical sizing sketch follows this list).
  • Automated deployment and configuration, plus the daily care and feeding of the backup repository: provisioning the protection of a new workload (client), monitoring status and activities, redriving failures and so on. We’re expanding the Operations Center to handle these tasks.
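
Here is a hedged sketch of what blueprint-driven sizing might look like. The size names, ingest thresholds and resource figures are hypothetical stand-ins of mine, not the actual TSM blueprint numbers:

```python
# Hypothetical blueprint picker -- the sizes and thresholds are
# illustrative stand-ins, not the published TSM blueprint figures.
BLUEPRINTS = {
    "small":  {"max_daily_ingest_tb": 4,  "db_disk_gb": 1000, "memory_gb": 64},
    "medium": {"max_daily_ingest_tb": 8,  "db_disk_gb": 2000, "memory_gb": 128},
    "large":  {"max_daily_ingest_tb": 20, "db_disk_gb": 4000, "memory_gb": 256},
}

def pick_blueprint(daily_ingest_tb):
    """Choose the smallest blueprint that covers the expected daily ingest."""
    for name, spec in BLUEPRINTS.items():
        if daily_ingest_tb <= spec["max_daily_ingest_tb"]:
            return name, spec
    raise ValueError("Workload exceeds the largest blueprint; split the load.")

name, spec = pick_blueprint(daily_ingest_tb=6)
print(f"Deploy a '{name}' server instance: {spec}")
```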

The Line: Xin, it sounds like the near future holds some exciting possibilities for IT managers as they redesign their backup environments. Is there anything else you would like to mention that I missed?

Xin: Actually, yes. There’s one more really important thing. Whether an IT manager is sticking with traditional backup methods or redesigning with snapshots, VMware integration or one of the best practice blueprints, oftentimes there is still a need to move some of those copies somewhere else for safekeeping. Think vaulting, offsite storage or disaster recovery. This can take time, use up network resources and result in a really large repository.

Since the days when TSM pioneered the idea of incremental forever backup, we’ve been leading the industry in data reduction to minimize strain on the environment. It’s one of the things that drive the economic savings we show in Butterfly studies. Soon we are planning some enhancements to our native client-side and server-side deduplication that will improve ingest capacity on the order of 10x. That’s ten times the deduplicated data each day! We plan to fold this capability into our new blueprints so IT managers can get the benefit right out of the box.
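
For readers who want a feel for what deduplication actually does, here is a minimal sketch of the client-side idea: fingerprint each chunk of a file and ship only the chunks the server has never seen. It assumes fixed-size chunks for simplicity; TSM’s real implementation is more sophisticated (variable-size chunking, among other things):

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # fixed 256 KiB chunks, purely for illustration

def dedup_ingest(path, seen_digests, chunk_store):
    """Fingerprint each chunk; send only chunks the server has never seen."""
    recipe = []      # ordered digests needed to reassemble the file later
    new_bytes = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in seen_digests:
                seen_digests.add(digest)
                chunk_store[digest] = chunk   # only unique data crosses the wire
                new_bytes += len(chunk)
            recipe.append(digest)
    return recipe, new_bytes
```

On a typical re-backup, most digests are already known, so very little new data moves; that is where both the network savings and the ingest-scaling challenge come from.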

The Line: Nice! Xin, thank you for taking the time to share your insights with my readers.

If you have questions for Xin, please join the conversation by leaving a comment below.

Backup redesign: A top priority for IT managers (Part 2)

This is part 2 of a three-part conversation with Dr. Xin Wang, product manager for IBM Tivoli Storage Manager (TSM).  In part 1, Xin discussed what she has learned about the challenges IT managers are facing leading to the backup redesign movement. Let’s continue the conversation:

The Line: So, first you mentioned that data is so big the old style of backup can’t keep up.

Xin: That’s right. For a long time, the primary method of capturing copies of data was to load a backup agent on a server, grab the copy and transfer it across some kind of network to a backup server for safekeeping. But data is getting too big for that. So today, we are helping clients redesign with a tool we call IBM Tivoli Storage FlashCopy Manager (FCM). The point of FCM is to bridge between the application that owns the data and the storage system that is housing the data so that IT managers can simply make snapshots where the data lives.
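
Conceptually, that bridge is a short choreography: freeze the application into a consistent state, trigger a snapshot on the storage that holds the data, then thaw the application. A generic sketch of the idea follows; the object and method names are hypothetical, not FCM’s actual API:

```python
from contextlib import contextmanager

@contextmanager
def quiesced(app):
    """Hold the application consistent only as long as the snapshot needs."""
    app.freeze_io()      # hypothetical call: flush buffers, suspend writes
    try:
        yield
    finally:
        app.thaw_io()    # hypothetical call: resume normal operation

def protect(app, array, volume):
    # The snapshot is taken where the data lives -- no bulk copy of the
    # data over a network to a backup server.
    with quiesced(app):
        return array.create_snapshot(volume)   # hypothetical array call
```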

The Line: That may be new to some clients, but it seems things like FCM have been around for a while. What’s new?

Xin: You’re right. Snapshots have been around for a while and so has FCM. But the economic driver for redesigning backup environments so they use snapshots hasn’t been as strong as it is today.

The Line: Okay. How about the technical side? Snapshots are done differently on different storage devices. What storage infrastructures does FCM work with?

Xin: A lot is being made of the benefits of software defining your storage environment. One of the many benefits of that approach is that there is only one interface for snapshotting regardless of your choice of physical disk hardware. Move a workload from one kind of array to another, or even change vendors altogether, and your snapshot interface still stays the same. So:

IBM FlashCopy Manager in a Software Defined Storage environment
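
In code terms, that single interface amounts to one snapshot contract with the vendor-specific details hidden behind it. A hedged sketch (the class and method names are mine, not an IBM API):

```python
from abc import ABC, abstractmethod

class SnapshotProvider(ABC):
    """One snapshot contract, regardless of the physical disk underneath."""
    @abstractmethod
    def snapshot(self, volume: str) -> str: ...

class SVCProvider(SnapshotProvider):
    def snapshot(self, volume: str) -> str:
        return f"flashcopy:{volume}"     # would drive IBM FlashCopy here

class NetAppProvider(SnapshotProvider):
    def snapshot(self, volume: str) -> str:
        return f"netapp-snap:{volume}"   # would drive NetApp snapshots here

def backup(provider: SnapshotProvider, volume: str) -> str:
    # Callers never change when the hardware (or the vendor) underneath does.
    return provider.snapshot(volume)

print(backup(SVCProvider(), "vol01"))
```

Swap the provider and the calling code stays put, which is exactly the stability Xin is describing.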

If an IT manager chooses to continue working with native physical hardware, the support list is more specific.

  • For applications running on Windows, FCM still supports most any physical disk.
  • For applications running on Linux or Unix, FCM works with IBM and NetApp disk arrays.
  • Soon we plan to offer an open application programming interface (API) that will enable plugins for other physical disk arrays from vendors like EMC and Hitachi.

The Line: Nice! So let’s move to your second point, that VMware is causing IT managers to redesign their backup environment.

Xin: VMware has caused IT managers to redesign a lot of things. So much so that a lot of IT managers I talk with have had to prioritize. In many cases, backup redesign has been one of those things that has been put off until now.

The Line: What do you mean “until now”?

Xin: Again, it’s about economics. The number of workloads and amount of data in these VMware estates is reaching critical mass. IT managers who have plodded along simply loading a backup agent in each guest machine are feeling pain. We are helping IT managers redesign with a tool custom-built for VMware: IBM TSM for Virtual Environments (TSM for VE). TSM for VE is already nicely integrated with the VMware vStorage APIs for Data Protection (VADP), meaning its full set of snapshot features is available to IT managers without deploying agents in every VM guest. Soon we plan to add some new and really powerful integration.

  • When restoring a virtual machine, the virtual machine disk (VMDK) will be instantly available for access the moment the restore starts. Verification can start right away, with no waiting around for data to be moved anywhere.
  • There’s new integration with the vCloud API to back up, manage and recover a vApp in vCloud Director.
  • When running workloads like Microsoft SQL Server or Exchange on VMware, there can be object or item level recovery without the need for an agent to back up the application VM.
  • There’s a richer set of administrative and reporting capabilities in the vCenter plug-in.
  • For FCM users, there is instant recovery of an entire VMware data store and coexistence with VMware Site Recovery Manager (SRM).

Check back soon for part 3 of the interview in which Xin finishes sharing her near term plans for TSM and adds some closing thoughts. If you have questions for Xin, please join the conversation by leaving a comment below.

Backup redesign: A top priority for IT managers (Part 1)

Backup redesign continues to be toward the top of most analysts’ lists for 2013 IT priorities. I’ve talked a lot about some of the catalysts behind this trend like data growth, big data, VMware and software defined storage. With IT managers redesigning, the incumbent enterprise backup vendors have a lot of motivation to offer innovative solutions that are a bit ahead of the times. The leaders have all placed strategic bets on what the winning formula will be. I discussed these bets in my post “Forrester’s take on enterprise backup and recovery.”

For its part, IBM is being quick about helping IT managers redesign. The help starts with a clear understanding of the economic benefit a redesign can bring. After all, in today’s environment few IT managers make technology moves simply for the sake of technology. Storage is about economics. I discuss this more fully in my post “Does trying to find a better economic approach to storage give you ‘Butterflies’?” But there is still efficient technology that enables these economic savings, and the person in IBM who is ultimately responsible for the technology in IBM Tivoli Storage Manager (TSM) is the product manager, Dr. Xin Wang.

Recently I spoke with Xin about the important shifts IT managers are facing and how she is helping IT managers reimagine backup.

The Line: Xin, I’m going to start with the “Dr.” part of your title. Should folks call you the Backup Doctor?

Xin: (laughing) Well, I don’t know about that. I’m actually a doctor of Applied Physics. One thing that drove me to earn a PhD and has moved me ever since is that I love to learn. I started my career in IBM hard disk drive research, spent some time as a storage software developer and development manager, and have now been working with backup clients as a product manager for several years.

The Line: Wow, I could probably do an entire post just on your career. But let’s stay focused. What have you learned about the challenges IT managers are facing and this whole backup redesign movement?

Xin: It’s interesting. The challenges aren’t secret but they carry big implications for backup. Data is growing like crazy; that’s no secret. But it is now so big that the old method of loading an agent on a server to collect and copy backup data over a network to a tape isn’t keeping up. So IT managers are redesigning.

And what about servers? Servers aren’t servers anymore. Thanks to VMware, they are virtual machines that come, go and move around in a hurry. Traditional backup is too rigid. So IT managers are redesigning.

Administrators are changing too. The generation of backup admins who grew up tuning the environment is giving way to a new generation of backup, VMware and cloud admins who need much more intuitive and automated management tools. And so IT managers are redesigning. (Editorial comment: I discussed the change in administration in my post “Do IT managers really ‘manage’ storage anymore?”)

The Line: Okay, I think I’m seeing your trend. IT managers are redesigning. And it seems like you’ve got a clear idea of why. Can we take your list one at a time? I think my readers would be interested in what you are doing with TSM in each of these areas.

Xin: Sure, that makes sense.

Check back for part 2 of the interview in which Xin shares her near term plans for TSM. If you have questions for Xin, please join the conversation by leaving a comment below.

Does trying to find a better economic approach to storage give you “Butterflies”? (Part 2)

This is the conclusion of a two-part conversation with Liam Devine, the global post-sales face of Butterfly. In Part 1, we talked about Butterfly’s unique approach to storage infrastructure analytics and how Butterfly came to be an IBM company.

The Line: It’s been a couple of years since 2011, and you have had the opportunity to both analyze a lot of data and have a number of conversations with financial decision makers. What have you found to be the most compelling talking points?

Liam: The most compelling stuff comes from the data. We’ve analyzed hundreds of different infrastructures in most every conceivable configuration and have discovered some extraordinary things about software defining storage and IBM’s approach to backup.

  • When compared to an as-is physical storage environment, transforming to a software-defined storage environment with something like IBM SmartCloud Virtual Storage Center can be, on average, 63% more efficient. That’s the average; your results may vary. (Editorial comment: in one of my posts from IBM Edge 2013 I talked about LPL Financial, who followed the recommendations of a Butterfly Storage AER and saved an astounding 47% in total infrastructure costs. Listen to Chris Peek, Senior Vice President of IT at LPL Financial.)
  • When compared to as-is competitive backup environments, transforming to an IBM Tivoli Storage Manager approach can be, on average, 38% more efficient. Again, your results may vary. [Modified: For example, when we segment just the mass of Backup AER results from as-is Symantec NetBackup or CommVault Simpana or EMC NetWorker environments, each shows that transformation to a TSM approach produces different, and in these cases at least, somewhat stronger economic savings.] We’ve got data by industry and for many other competitive backup approaches but you get the picture. Choosing a TSM approach can save money.

The Line: For my readers, the Butterfly team had discovered most of these trends before IBM acquired them. As I noted above, that had a lot to do with IBM’s interest in the company. [Modified: Now that IBM owns Butterfly, they have been quick to add legal disclaimers around anything that might be construed as a competitive claim*.]

Now Liam, switching back to you. Butterfly has been part of IBM for about 11 months. How has the transition been?

Liam: Very successful and pretty much as I had expected. We had a few minor technical hiccups in switching infrastructure (freeware and open source components to a more IBM standard architecture), as you would expect, but those hiccups are behind us now. The global IBM sales force and Business Partner community have created a lot more demand for our analytics, so we are busy scaling out our delivery capability. The good news is that we’re meeting our business goals.

The Line: Can you give us an idea of what you and the team are working on next?

Liam: Right, well we’re working on a couple of important things. First is an automated “self-service” AER generation model that will enable us to scale out further still and present the AERs as a service to IBM and its Business Partners. And second, as you can imagine, the data-driven AER reports are causing a lot of IT managers to rethink their infrastructure and begin transitioning to a new software defined approach. We are continuing to refine our migration automation to assist clients with the transition, especially between backup approaches.

The Line: Before ending, I have to ask about your Twitter handle @keydellkop. What’s the story?

Liam: Hmmm, it’s a bit of a strange explanation, I’m afraid; it’s more a play on words. I see much of life as a set of confused circumstances that can be placed into an ultimate order. This reminds me of the Keystone Kops. On that theme, I reside in an area called Keydell in the south of England, and being a manic Liverpool supporter, you get The Kop (the famous Liverpool stand at the Anfield stadium) – hence @KeydellKop. All tweets are my own, covering such subjects as information protection, Liverpool Football Club and life in general, with a smidge of humor thrown in where appropriate.

The Line: Liam, thank you for spending a few minutes with me and sharing your story with my readers.

If you have questions for Liam, please join the conversation below.

* Backup Analysis Engine Reports from more than 1.5 exabytes of data analyzed by Butterfly Software. Savings are the average of individual customer Analysis Engine Reports from Butterfly Software, May 2013, n=450. The savings include cumulative 36-month hardware, hardware maintenance, and electrical power savings. Excludes one-time TSM migration cost. All client examples cited or described are presented as illustrations of the manner in which some clients have used IBM products and the results they have achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions. Contact IBM to see what we can do for you.

Does trying to find a better economic approach to storage give you “Butterflies”? (Part 1)

Recently there has been a lot of talk coming out of IBM about the economics of storage. In fact, all of my top 5 observations from IBM Edge 2013 had something to do with economics. Sure, technology advancements are still important, but increasingly what CIOs are chasing is a clear understanding of the economic benefits a new technology approach can bring.

Late last year IBM acquired Butterfly Software, a small company in the United Kingdom that had developed some BIG thoughts around communicating the economic benefits brought by certain approaches to storage. Butterfly has developed what they call an Analysis Engine Report (AER) that follows a straightforward thought process.

  1. Using a very lightweight collector, gather real data about the existing storage infrastructure at a potential customer.
  2. Using that data, explain in good detail what the as-is effectiveness of the environment is and what costs will look like in five years’ time if the customer continues on the current approach.
  3. Show what a transformed storage infrastructure would look like compared to the as-is approach, and more importantly what future costs could look like compared to continuing as-is.

Butterfly has two flavors of AERs, one for primary storage infrastructure and one for copy data (or backup) infrastructure. They have analyzed some 850 different infrastructures scattered across every industry in most parts of the world and comprising over 2 exabytes of data. In all that analysis, they have discovered some remarkable things about IBM’s ability to transform the economic future of storage for its clients. (Editorial comment: the results probably have something to do with why IBM acquired the company.)

Butterfly AER

I was able to catch up with the global post-sales face of Butterfly, Liam Devine, to talk about the company and where he sees the storage economics conversation going (see if you can hear his distinctly British accent come through).

The Line: Liam, let’s start with a little background for my readers. You’ve been a systems and storage manager, consulted for some pretty impressive companies in finance and healthcare and even spent a little time at vendors like NEC and EMC.

Liam: That’s right. I’ve had the pleasure of holding numerous IT roles in a variety of interesting companies for some 14 years before moving over to the Dark Side, or vendor land, where I have been for the past 12 years. The majority of that time was spent at EMC in two stints: first supporting financial customers and then supporting Electronic Data Systems (EDS, now HP Enterprise Services).

The Line: Okay, so rewind us back to 2011. What was the motivation for joining Butterfly Software?

Liam: Everything is becoming software defined. Compute is ahead of storage, but storage is accelerating quickly. The reasons are rooted in economics. I became aware of this Butterfly company, which was creating unique analytics to help communicate the economic value of shifting from traditional storage infrastructure approaches to more software oriented approaches. Once I had spoken to the founders and understood their strategic vision encompassing both primary storage infrastructure and data, there was nowhere else I wanted to be.

Check back soon for Part 2 of the interview as Liam shares some of the extraordinary savings that Butterfly analytics have uncovered.

For corporate BYOD, does Sync/Share equal Backup?

Last December while attending the 2012 Gartner Data Center Conference in Las Vegas, I listened to an insightful presentation by analysts Sheila Childs and Pushan Rinnen on the bring-your-own-device (BYOD) phenomenon. They were particularly focused on issues related to protecting an organization’s data in a BYOD world (more on why in a moment). One scenario that captured my attention went something like this.

It’s my device. I had it before I brought it to work and I was using Dropbox or iCloud to sync and share all my files. Now, my device has work data on it too. My security-conscious CIO doesn’t want work data shared on those public services. But I’m accustomed to, and almost dependent on, my sync and share capability, and my organization hasn’t yet given us a private alternative.

Now, in my role as a technology strategist I spend a good bit of time helping to plan our investments. With the speed at which mobile and social technologies are sweeping through organizations, I have to admit the case that Sheila, Pushan and other Gartner analysts made that week for the rapidly emerging data protection crisis in BYOD sync and share was compelling. It occurred to me that credible vendors who were able to solve the problem in short order would be in high demand. That was eight months ago.

Fast forward seven months

In July, Forrester analysts Ted Schadler and Rob Koplowitz published The Forrester Wave™: File Sync And Share Platforms, Q3 2013 in a quest to uncover those credible vendors. I liked the way they characterized the problem. “Employees’ need to synchronize files grew from a whisper to a scream over the past few years. . . . The scream will grow louder as the number of tablets will triple to 905 million by 2017 to join the billions of computers and smartphones used for work.” The report evaluated and scored 16 of the most significant solution providers against 26 criteria. Among the leaders was IBM SmartCloud Connections. You can see the complete list of leaders here.

Change is here

The interesting thing that most folks miss in the sync and share conversation is that it’s about more than just syncing and sharing. As BYOD smartphones and tablets proliferate in the workplace, document management will shift from email attachments and file servers into social collaboration. Forrester points to a further social shift from casual partner collaboration to compliant workflow in regulated industries.

That kind of data is important – and the reason that the Gartner analysts were focused on the data protection issues of this BYOD world. Organizations today have well-matured processes for protecting data on file servers and email systems, usually with an enterprise backup product. I commented on this set of tools in my post on Forrester’s Take on Enterprise Backup and Recovery. But as corporate information is relocated from file servers and email systems to sync and share systems, Gartner had an unmistakable reminder for its customers: Consumer File Sync/Share Is Not Backup.

I agree! The good news is that IBM has taken the time to ensure its enterprise backup product, IBM Tivoli Storage Manager Suite for Unified Recovery, protects synched and shared files in IBM Connections with all the same efficiency as it does file servers, email systems and most any other data important to an organization.

What is your organization doing with file sync and share? How are you protecting that information?

IBM at VMworld 2013: Making heroes of VMware admins and SQL Server DBAs

In the modern datacenter, there’s a lot of shifting going on when it comes to traditional storage management responsibilities. What used to be the domain of a central storage and backup administration team has been thrown up for grabs as server virtualization and software defined everything have entered the scene. I hinted at this a bit in my post Do IT managers really “manage” storage anymore? But let’s consider a practical example that’s quite common with clients I speak to. If you are going to VMworld 2013, plan on attending the IBM TSM for VE hands-on lab to get more details.

Microsoft SQL Server is the foundation for a lot of applications that are critical to business operation – meaning CIOs and IT managers are interested in its recoverability. Those same CIOs and IT managers are also interested in the recoverability of their VMware estates, the software defined compute (SDC) platform that houses those databases. For many clients, the problem is that these two domains are tightly guarded by two independent superheroes, and neither is specifically trained in storage.

Superhero #1: The database administrator (DBA)

Most DBAs that I’ve known have an almost personal connection with their databases. They care for them as they would their own children. The thought of leaving one unprotected (without a backup) equates to dereliction of duty. Ignoring the idea that it takes a village to raise a child (or in this case that there may be other members of the IT administration village like VMware admins and backup admins), SQL Server DBAs will often work alone with the backup tools Microsoft provides to ensure their databases are protected. Good for the SQL Server, but not so much for the surrounding infrastructure. For databases running on VMware, routine full backups even with periodic differential backups can consume a LOT of disk space and virtual compute resources, and also contribute to the I/O blender effect.

Superhero #2: The VMware administrator

TSM for VE in VMware vSphere web client

VMware administrators can be just as focused on their domain as DBAs are. Their attention is on being able to recover persistent or critical virtual machine (VM) images, regardless of what app happens to be riding along. VMware has done a nice job of creating and supporting an industry of tightly integrated backup providers. These tools can get at the VMware data through a set of vStorage APIs for Data Protection (VADP), and VMware administrators can manage them through vCenter plug-ins. But few VMware admins are completely aware of all the workloads that run on their VMs and even less aware of the unique recovery needs of all those workloads. It’s just hard to keep up.

Common ground exists

One tool that bridges the gap is IBM Tivoli Storage Manager for Virtual Environments (TSM for VE). Nicely integrated with both VADP and SQL Server, TSM for VE can bring together VMware administrators and the DBAs in ways that would make any IT manager smile. Here are two of the more common approaches.

We can each do our own thing – together

As noted above, SQL Server DBAs take full backups sprinkled with differentials. Even though this approach can tax server and storage resources, and contribute to the I/O blender effect, it is in the DBA comfort zone. When the app is running on a VMware virtual machine, the DBA has the option of storing those backups on disk storage associated with the VM. It’s a nice thing to do because it allows the VMware admin to stay within his comfort zone too. Using vCenter to drive a VADP-integrated snapshot tool like TSM for VE, the VMware admin can capture a complete copy of the virtual machine, along with the SQL Server backups the DBA created. Since the likely use of such a snapshot would be to recover the VM and then recover the database from its backup, there’s really no reason to include the source SQL Server database or logs in the snapshot. With TSM for VE, the VMware admin can exclude the source SQL Server database from being redundantly backed up, adding to an already formidable set of built-in efficiency techniques (with TSM for VE, snapshots are taken incrementally – forever – and can be deduplicated and compressed). It’s a good compromise solution, letting each admin stay in his or her comfort zone. But it can be better.

We can join forces and do something really great

With TSM for VE, VMware admins and SQL Server DBAs can put their heads together and choose to do something really great. For the DBA, it’s an exercise in less-is-more. The DBA stops doing her own backups. No more full or differential copies of the database. No more taxing resource usage on the VM. No more I/O blender effect. Just, no more. How? Well, with a VMware VADP-integrated backup tool like TSM for VE, the snapshot of the VM is accompanied by a freeze and thaw of the SQL Server database (techno-speak for putting the database in a consistent state), just like what happens when a backup is independently initiated by a DBA. And with TSM for VE, as soon as the TSM server confirms that it has successfully stored the consistent snapshot in a safe, physically separate place, it will connect back with the SQL Server to truncate the database logs.
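
For the curious, that freeze and thaw is exposed in the standard vSphere snapshot API as quiescing. Here is a minimal pyVmomi sketch of an application-quiesced snapshot; the connection details and VM name are placeholders, and the offload and log truncation that TSM for VE performs afterward are only noted in comments:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.com",    # placeholder connection details
                  user="administrator", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sqlserver-vm")  # hypothetical VM

    # quiesce=True asks VMware Tools (VSS on Windows) to freeze the guest --
    # including SQL Server -- so the snapshot captures a consistent database.
    task = vm.CreateSnapshot_Task(name="tsm-backup",
                                  description="application-consistent backup",
                                  memory=False, quiesce=True)
    # A real backup tool would wait on the task, offload the snapshot data to
    # the backup server, delete the snapshot, then truncate the database logs.
finally:
    Disconnect(si)
```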

In addition to the less-is-more benefits above, think about the differences in restore with these two scenarios. When the DBA and VMware admin simply coexist, each doing their own thing, restoring the SQL Server database includes steps for restoring:

  • The VM snapshot to get the database backups in place
  • The full database backup
  • The subsequent differential backups

By comparison, when the DBA and the VMware admin join forces with TSM for VE, the steps are dramatically simplified. Restoring the snapshot equates to restoring a consistent copy of the database.  And remember, because these snapshots are highly efficient, they can be taken quite frequently.

Superhero indeed!

Going to VMworld 2013? Come visit IBM on the Solutions Exchange floor at booth #1545.

IBM at VMworld 2013: Optimization, Backup, …

VMworld 2013 is just around the corner and at IBM, we’re gearing up for a great set of conversations with our joint clients. As you’re planning your agenda, here are a couple of things worth looking into.

Virtualization Optimization

IBM has a lot of expertise to share when it comes to optimizing virtual environments. A few weeks ago in my post Outside The Line — an Interview on Virtualization Optimization, I was able to catch up with several of the experts who are leading this work. At VMworld, IBM will be showcasing these solutions on the Solutions Exchange floor at booth #1545.

VMware Backup

TSM for VE in VMware vSphere web client

IBM Tivoli Storage Manager for Virtual Environments (TSM for VE) is one of the most efficient backup integrations built on the VMware vStorage APIs for Data Protection (VADP). I offered some quick insights in my post VMware backup for the iPOD generation. At VMworld 2013, you’ll have an opportunity to take a test drive in the TSM for VE hands-on lab.

Are you going to VMworld? What are you most looking forward to?

Outside The Line — an Interview on Virtualization Optimization

In my role, I get to spend a lot of time with customers and I also do a lot of reading. Over the years, I’ve developed a good bit of depth on most storage topics but one of the things I love is the pace of innovation going on all around me. There’s never a shortage of new things to learn. Occasionally I run across something particularly interesting that I want to talk about, but it’s a bit outside my area of direct expertise. This is one of those topics. To help me out, I’m going to reach outside The Line to get insights from real experts. Please let me know what you think of the format.

Several months ago IDC published an interesting Market Analysis Perspective: Worldwide Enterprise Servers, 2012 — Technology Market that uncovered something quite contrary to conventional wisdom when it comes to virtualized server environments. Server virtualization, or software defined compute (SDC) as it is coming to be known, promised to control server sprawl and return balance to the portion of the IT budget allocated to servers. The IDC research confirms that the controlled-server-sprawl part of the promise has largely been realized. Since 2000, the worldwide spend on x86 server hardware has actually declined from $70B to about $56B. Equally as important, environmental spending on power and cooling has leveled off too. The revelation that surprised most folks was the dramatic expansion in spend on management. Since 2000, spend on management has more than tripled, reaching $171B, and now accounts for 68% of IT spend on x86 infrastructure.

These days the measuring stick for server virtualization seems to be around VM density, or the number of virtual machines that can be supported on a single physical server. The theory has been that as VM density increases, management costs decline. Most IT managers I talk to can point to fairly good and increasing VM density in their environments. So what’s causing associated management costs to increase so much and what can IT managers do to improve the situation?

Recently I sat down with a group of product managers who are directing IBM SmartCloud work in the area of optimizing the management and administration of VMware environments. Collectively, these guys are thinking about everything from virtual image construction and patching security vulnerabilities to planning capacity, charging for usage and recovering from failures.

The Line: Guys, let’s talk for a minute about this IDC research. Do you really see the cost of management and administration becoming a central issue in SDC environments today?

Tom Recchia: Absolutely. Virtualization has introduced a pile of new challenges. Instead of server sprawl, we now talk about image sprawl. IT managers have to think about virtual data protection, performance, security compliance and cost management for all these images. Virtual machines can become dormant, leading to out-of-date software versions, security issues, and just paying for things you aren’t using. The clients I talk with bought into virtualization to improve costs and agility, but they are realizing management challenges are slowing service delivery and constricting their return on investment.

Brooks York: I agree. Virtualization has certainly helped optimize the use of physical resources. But the management improvements for the guest operating systems (OS) haven’t been nearly as dramatic. On the flip side, virtualization has made it much easier to create new OS instances, leading to VM sprawl. IT managers are ending up with a lot more OS instances, each of which isn’t much easier to manage than it used to be. That’s leading to higher overall management costs.

The Line:  Say more about what you are learning from your client conversations. What kinds of specific issues are causing this rise in cost?

Alex Peay: Here’s one concrete example. Virtualization makes it easy to create a new image designed to deliver a service. Increasingly, IT managers are coupling that capability with a self-service portal. All of a sudden, the customers of IT are off and running creating new images almost at-will. New terms like image sprawl and image drift enter the vocabulary. Pretty soon you have unneeded images, duplicate resources, increased maintenance, rising storage costs for wasted space and a general lack of understanding of what all is in the environment. While clients have gained control of their physical environments they have lost control of their virtual environments.

Brooks: I often hear that the image sprawl is also creating issues with software licensing costs and with OS or system administrator costs. Remember, all those images have an operating system and some application software, and they need to be administered by somebody.

Mike Gare: That’s one of those double-edged swords that comes with the flexibility of virtualization, isn’t it? Clients create all kinds of VM templates and disk images to support self-service provisioning. These things have to be properly managed or organizations risk virus outbreaks, hacking and the potential loss of intellectual property. With virtualization and these self-service portals it’s relatively easy to spin up a new VM from a template or clone in less than an hour. That’s cool from the rapid service delivery perspective, but IT managers still need to concern themselves with patching, securing and updating the applications on each VM in order to address potential security gaps. This is especially true as virtualization moves from development and test environments, where VMs are often on separate networks and only live for a few hours or days, to production environments where VMs are more persistent.

Kohi Easwarachandran: Image sprawl hits data protection too. An organization’s data needs to be protected regardless of whether it is being operated on by an application on a physical server or a virtual machine. Before virtualization, the dominant method for protecting data was to have a backup agent on each OS image. Unfortunately, many IT managers carried this model forward with virtualization, loading a backup agent on each VM. As VMs proliferate so do backup agents and, well, the task of managing and troubleshooting this style of backup in a virtualized environment can be quite draining and expensive.

The Line:  So, increasing VM density is good from a hardware efficiency point-of-view, but the point you guys are making is that the things on those images – operating systems, application software, backup agents, patches, etc. – all still have to be tracked and managed by someone, and that’s contributing to the rising costs. Tom, what about managing those costs? It seems that with SDC, IT managers need to shift from planning budgets around physical servers to planning budgets around application instances that are packed together on physical servers as VM density increases. What impact are you seeing from that shift?

Tom: Over the years, most IT managers had developed tools for measuring usage of physical infrastructure. But it’s more difficult to manually keep track of VM usage. Fortunately there is cost management software that will automatically collect virtualized environment usage and even apply rates. I’ve seen this automation reduce finance staff labor costs up to 25%. Maybe the bigger thing, though, is that it can let that finance staff show the multiple users of a virtual environment what they are consuming.

The Line: So it’s not just the cost IT managers incur for managing the VMware environment, it’s also a matter of how to account for who is consuming what in the virtual environment?

Tom: That’s right. This is a really important thought. When one application ran on one server and it was owned by one business line, tracking costs was fairly straightforward. With increasing VM density, tracking costs in a shared virtual environment is harder. But it’s still an incredibly powerful tool in the quest toward a culture of efficiency and freeing up unused resources. The next step can be actually charging end users for the resources or the services they use. This sets up IT to be a service provider rather than just a cost center.
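
(Editorial aside: the mechanics Tom describes boil down to metering usage and applying rates. Here is a toy sketch of that calculation; the rates and usage records are invented for illustration:)

```python
# Toy chargeback: price each consumer's metered usage against published rates.
# All rates and usage records are invented for illustration.
RATES = {"vcpu_hours": 0.05, "ram_gb_hours": 0.01, "storage_gb_month": 0.10}

usage = [
    {"owner": "marketing", "vcpu_hours": 1440, "ram_gb_hours": 5760, "storage_gb_month": 200},
    {"owner": "finance",   "vcpu_hours": 720,  "ram_gb_hours": 2880, "storage_gb_month": 500},
]

def invoice(record):
    """Sum metered usage times the rate for each metric."""
    return sum(record[metric] * rate for metric, rate in RATES.items())

for record in usage:
    print(f"{record['owner']:>10}: ${invoice(record):,.2f}")
```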

The Line: Okay, I think we’re getting a good picture of the problem. So what are you guys cooking up that can help IT managers improve the situation?

Alex: One thing we are working on is providing insight through analytics. Today, we are able to run real-time analytics against a virtual environment and show IT managers where they have duplication in programs or files and even trace the ancestry of a specific image. This information helps determine which images to keep, which images to toss and when something new really needs to be created.

Brooks: Alex is right. Analytics is where a lot of our effort is. Beyond simply reporting the news about what’s going on in an environment, we are deriving insight about the inner workings of the environment to help optimize both the physical and virtual sides of the datacenter. That drops costs.

Mike: Datacenters are being transformed in more ways than just virtual servers. Most IT managers that are concerned with virtualization are also dealing with cloud, mobile devices and BYOD (bring your own device). So, we’re opening the aperture a bit, taking many of the capabilities we have developed for virtual machine management and applying them to other endpoints like mobile devices and cloud.

Kohi:  For virtual data protection, we are helping clients move away from the old unsustainable approach of having a backup agent in every VM. The new approach snapshots whole VM estates, only captures incremental changes – forever – and deduplicates what it captures. We’ve just sidestepped the entire image sprawl issue and created something that is more efficient than the physical environment used to be.

The Line: Tom, Alex, Brooks, Mike, Kohi… thank you for spending a few minutes with me and sharing insights on where you are taking IBM SmartCloud offerings for virtualization optimization.

If you want to join the conversation or have a question for any of the team, please leave a comment below.