Software Defined Storage Use Case – Block SAN Storage for Traditional Workloads

In my last post, IBM Spectrum Storage Suite – Revolutionizing How IT Managers License Software Defined Storage, I introduced a simple and predictable licensing model for most all the storage needs an IT manager might have. That’s a pretty big concept if you think about all the storage use cases an IT manager has to deal with.

  • Block SAN storage for traditional workloads
  • File storage for analytic or Big Data workloads
  • Object storage for cloud and mobile workloads
  • Scale-out block storage for VMware datastores
  • Storage for housing archive or backup copies

Just to name a few… The idea behind software defined storage is that an IT manager optimizes storage hardware capacity purchases for performance, environmentals (like power and space consumption), and cost. Then he ‘software defines’ that capacity into something useful – something that meets the needs of whatever particular use case he is trying to deal with. But is that really possible under a single, predictable software defined storage license? The best way I can think of to answer the question is to look at several of the most common use cases we see with our clients.

Perhaps the most widely deployed enterprise use case today is block SAN storage for the traditional workloads that all our businesses are built on – databases, email systems, ERP, customer relationship management and the like. Most IT managers know exactly what kind of storage capabilities they need to deploy for this use case.

Here’s the thing: this use case has been evolving for years and most IT managers already have it deployed. The problem isn’t that the capabilities don’t exist. The problem is that the capabilities are most often tied directly to a specific piece of hardware. If you like a particular capability, the only way to get it is to buy that vendor’s hardware. It’s a hardware-defined model, and you are locked in. With IBM Spectrum Storage, IBM has securely unboxed those capabilities from the hardware. The idea of software defined changes everything. With the software unboxed from the hardware, you really are free to choose whatever hardware you want, from whatever tier and most any vendor you like. And since the software stays the same even while the hardware changes, you don’t pay an operational or procedural tax when you make those changes.

All of the capabilities needed for this block SAN storage use case can be accomplished with one IBM Spectrum Storage Suite software license. This may be the most widely deployed use case today, but it’s not the fastest growing one. In my next posts, I’ll continue looking at the wide variety of use cases that are covered by the simple, predictable IBM Spectrum Storage Suite software defined storage license.

Are you interested in taking the first step with software defined storage? Contact your IBM Business Partner or sales representative. And join the conversation with #IBMStorage and #softwaredefined.

Backup redesign: A top priority for IT managers (Part 3)

This is the conclusion of a three-part conversation with Dr. Xin Wang, product manager for IBM Tivoli Storage Manager (TSM). In part 1, Xin discussed what she has learned about the challenges IT managers are facing that are driving the backup redesign movement. In part 2, she began discussing her near-term plans for TSM. Let’s conclude the conversation:

The Line: It seems like there is lots to watch for in the VMware space. The final observation you made was about administrators. Say a little bit there.

Xin: The administration job is changing. Nobody has time to do anything they don’t need to be doing. We introduced a completely new approach to backup administration earlier this year with the IBM Tivoli Storage Manager Operations Center. Soon we plan to make the administrator’s job even easier.

  • Deployment of a new backup server instance can be a task that requires expertise and customization, so we’re planning to remove the guesswork with blueprints for different-sized configurations.
  • Automated deployment and configuration. Day-to-day administration includes caring for and feeding the backup repository, provisioning protection for a new workload (client), monitoring status and activities, redriving failures and so on. We’re expanding the Operations Center to automate more of these tasks.

The Line: Xin, it sounds like the near future holds some exciting possibilities for IT managers as they redesign their backup environments. Is there anything else you would like to mention that I missed?

Xin: Actually, yes. There’s one more really important thing. Whether an IT manager is sticking with traditional backup methods or redesigning with snapshots, VMware integration or one of the best practice blueprints, oftentimes there is still a need to move some of those copies somewhere else for safekeeping. Think vaulting, offsite storage or disaster recovery. This can take time, use up network resources and result in a really large repository.

Since the days when TSM pioneered the idea of incremental forever backup, we’ve been leading the industry in data reduction to minimize strain on the environment. It’s one of the things that drives the economic savings we show in Butterfly studies. Soon we are planning some enhancements to our native client-side and server-side deduplication that will improve ingest capacity on the order of 10x. That’s ten times as much deduplicated data each day! We plan to fold this capability into our new blueprints so IT managers can get the benefit right out of the box.

The Line: Nice! Xin, thank you for taking the time to share your insights with my readers.

If you have questions for Xin, please join the conversation by leaving a comment below.

Backup redesign: A top priority for IT managers (Part 2)

This is part 2 of a three-part conversation with Dr. Xin Wang, product manager for IBM Tivoli Storage Manager (TSM). In part 1, Xin discussed what she has learned about the challenges IT managers are facing that are driving the backup redesign movement. Let’s continue the conversation:

The Line: So, first you mentioned that data is so big the old style of backup can’t keep up.

Xin: That’s right. For a long time, the primary method of capturing copies of data was to load a backup agent on a server, grab the copy and transfer it across some kind of network to a backup server for safekeeping. But data is getting too big for that. So today, we are helping clients redesign with a tool we call IBM Tivoli Storage FlashCopy Manager (FCM). The point of FCM is to bridge between the application that owns the data and the storage system that is housing the data so that IT managers can simply make snapshots where the data lives.

The Line: That may be new to some clients, but it seems things like FCM have been around for a while. What’s new?

Xin: You’re right. Snapshots have been around for a while and so has FCM. But the economic driver for redesigning backup environments so they use snapshots hasn’t been as strong as it is today.

The Line: Okay. How about the technical side? Snapshots are done differently on different storage devices. What storage infrastructures does FCM work with?

Xin: A lot is being made about the benefits of software defining your storage environment. One of the many benefits of that approach is that there is only one interface for snapshotting regardless of your choice in physical disk hardware. Move a workload from one kind of array to another, or even change vendors altogether, and your snapshot interface still stays the same.

IBM FlashCopy Manager in a Software Defined Storage environment

If an IT manager chooses to continue working with native physical hardware, the support list is more specific.

  • For applications running on Windows, FCM still supports most any physical disk.
  • For applications running on Linux or Unix, FCM works with IBM and NetApp disk arrays.
  • Soon we plan to offer an open application programming interface (API) that will enable plugins for other physical disk arrays from vendors like EMC and Hitachi.
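To picture what “one interface for snapshotting regardless of the disk hardware” means in practice, here is a minimal Python sketch. The class and method names below are purely illustrative stand-ins, not the actual FCM API:

```python
from abc import ABC, abstractmethod

class SnapshotProvider(ABC):
    """One snapshot interface, many possible hardware back ends."""
    @abstractmethod
    def snapshot(self, volume: str) -> str:
        ...

class IBMArray(SnapshotProvider):
    def snapshot(self, volume: str) -> str:
        # A real deployment would call the array's FlashCopy mechanism here.
        return f"ibm-snap:{volume}"

class NetAppArray(SnapshotProvider):
    def snapshot(self, volume: str) -> str:
        # A real deployment would call the array's native snapshot mechanism.
        return f"netapp-snap:{volume}"

def protect(provider: SnapshotProvider, volume: str) -> str:
    # The caller's code is identical no matter which array sits underneath.
    return provider.snapshot(volume)
```

The point is that the `protect` call never changes when the array underneath does: that is the software defined idea in miniature.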

The Line: Nice! So let’s move to your second point, that VMware is causing IT managers to redesign their backup environment.

Xin: VMware has caused IT managers to redesign a lot of things. So much so that a lot of IT managers I talk with have had to prioritize. In many cases, backup redesign has been one of those things that has been put off until now.

The Line: What do you mean “until now”?

Xin: Again, it’s about economics. The number of workloads and amount of data in these VMware estates is reaching critical mass. IT managers who have plodded along simply loading a backup agent in each guest machine are feeling pain. We are helping IT managers redesign with a tool custom built for VMware: IBM TSM for Virtual Environments (TSM for VE). TSM for VE is already nicely integrated with the VMware API for Data Protection (VADP), meaning its full set of snapshot features is available to IT managers without deploying agents in every VM guest. Soon we plan to add some new and really powerful integration.

  • When restoring a virtual machine, the virtual machine disk (VMDK) will be instantly available for access the moment the restore starts. Verification can start right away, with no waiting around for data to be moved anywhere.
  • There’s new integration with the vCloud API to back up, manage and recover a vApp in vCloud Director.
  • When running workloads like Microsoft SQL Server or Exchange on VMware, there can be object or item level recovery without the need for an agent to back up the application VM.
  • There’s a richer set of administrative and reporting capabilities in the vCenter plug-in.
  • For FCM users, there is instant recovery of an entire VMware data store and coexistence with VMware Site Recovery Manager (SRM).

Check back soon for part 3 of the interview in which Xin finishes sharing her near term plans for TSM and adds some closing thoughts. If you have questions for Xin, please join the conversation by leaving a comment below.

Backup redesign: A top priority for IT managers (Part 1)

Backup redesign continues to be toward the top of most analysts’ lists for 2013 IT priorities. I’ve talked a lot about some of the catalysts behind this trend like data growth, big data, VMware and software defined storage. With IT managers redesigning, the incumbent enterprise backup vendors have a lot of motivation to offer innovative solutions that are a bit ahead of the times. The leaders have all placed strategic bets on what the winning formula will be. I discussed these bets in my post “Forrester’s take on enterprise backup and recovery.”

For its part, IBM is being quick about helping IT managers redesign. The help starts with a clear understanding of the economic benefit a redesign can bring. After all, in today’s environment few IT managers make technology moves simply for the sake of technology. Storage is about economics. I discuss this more fully in my post “Does trying to find a better economic approach to storage give you ‘Butterflies’?” But there is still efficient technology that enables these economic savings, and the person in IBM who is ultimately responsible for the technology in IBM Tivoli Storage Manager (TSM) is the product manager, Dr. Xin Wang.

Recently I spoke with Xin about the important shifts IT managers are facing and how she is helping IT managers reimagine backup.

The Line: Xin, I’m going to start with the “Dr.” part of your title. Should folks call you the Backup Doctor?

Xin: (laughing) Well, I don’t know about that. I’m actually a doctor of Applied Physics. One thing that drove me to earn a PhD and has moved me ever since is that I love to learn. I started my career in IBM hard disk drive research, spent some time as a storage software developer and development manager, and have now been working with backup clients as a product manager for several years.

The Line: Wow, I could probably do an entire post just on your career. But let’s stay focused. What have you learned about the challenges IT managers are facing and this whole backup redesign movement?

Xin: It’s interesting. The challenges aren’t secret but they carry big implications for backup. Data is growing like crazy; that’s no secret. But it is now so big that the old method of loading an agent on a server to collect and copy backup data over a network to a tape isn’t keeping up. So IT managers are redesigning.

And what about servers? Servers aren’t servers anymore. Thanks to VMware, they are virtual machines that come, go and move around in a hurry. Traditional backup is too rigid. So IT managers are redesigning.

Administrators are changing too. The generation of backup admins who grew up tuning the environment is giving way to a new generation of backup, VMware and cloud admins who need much more intuitive and automated management tools. And so IT managers are redesigning. (Editorial comment: I discussed the change in administration in my post “Do IT managers really ‘manage’ storage anymore?”)

The Line: Okay, I think I’m seeing your trend. IT managers are redesigning. And it seems like you’ve got a clear idea of why. Can we take your list one at a time? I think my readers would be interested in what you are doing with TSM in each of these areas.

Xin: Sure, that makes sense.

Check back for part 2 of the interview in which Xin shares her near term plans for TSM. If you have questions for Xin, please join the conversation by leaving a comment below.

IBM at VMworld 2013: Making heroes of VMware admins and SQL Server DBAs

In the modern datacenter, there’s a lot of shifting going on when it comes to traditional storage management responsibilities. What used to be the domain of a central storage and backup administration team has been thrown up for grabs as server virtualization and software defined everything have entered the scene. I hinted at this a bit in my post Do IT managers really “manage” storage anymore? But let’s consider a practical example that’s quite common with clients I speak to. If you are going to VMworld 2013, plan on attending the IBM TSM for VE hands-on lab to get more details.

Microsoft SQL Server is the foundation for a lot of applications that are critical to business operation – meaning CIOs and IT managers are interested in its recoverability. Those same CIOs and IT managers are also interested in the recoverability of their VMware estates, the software defined compute (SDC) platform that houses those databases. For many clients, the problem is that these two domains are tightly guarded by two independent superheroes, and neither is specifically trained in storage.

Superhero #1: The database administrator (DBA)

Most DBAs that I’ve known have an almost personal connection with their databases. They care for them as they would their own children. The thought of leaving one unprotected (without a backup) equates to dereliction of duty. Ignoring the idea that it takes a village to raise a child (or in this case that there may be other members of the IT administration village like VMware admins and backup admins), SQL Server DBAs will often work alone with the backup tools Microsoft provides to ensure their databases are protected. Good for the SQL Server, but not so much for the surrounding infrastructure. For databases running on VMware, routine full backups even with periodic differential backups can consume a LOT of disk space and virtual compute resources, and also contribute to the I/O blender effect.

Superhero #2: The VMware administrator

TSM for VE in VMware vSphere web client

VMware administrators can be just as focused on their domain as DBAs are. Their attention is on being able to recover persistent or critical virtual machine (VM) images, regardless of what app happens to be riding along. VMware has done a nice job of creating and supporting an industry of tightly integrated backup providers. These tools can get at the VMware data through a set of vStorage APIs for Data Protection (VADP), and VMware administrators can manage them through vCenter plug-ins. But few VMware admins are completely aware of all the workloads that run on their VMs, and they are even less aware of the unique recovery needs of all those workloads. It’s just hard to keep up.

Common ground exists

One tool that bridges the gap is IBM Tivoli Storage Manager for Virtual Environments (TSM for VE). Nicely integrated with both VADP and SQL Server, TSM for VE can bring together VMware administrators and the DBAs in ways that would make any IT manager smile. Here are two of the more common approaches.

We can each do our own thing – together

As noted above, SQL Server DBAs take full backups sprinkled with differentials. Even though this approach can tax server and storage resources, and contribute to the I/O blender effect, it is in the DBA comfort zone. When the app is running on a VMware virtual machine, the DBA has the option of storing those backups on disk storage associated with the VM. It’s a nice thing to do because it allows the VMware admin to stay within his comfort zone too. Using vCenter to drive a VADP-integrated snapshot tool like TSM for VE, the VMware admin can capture a complete copy of the virtual machine, along with the SQL Server backups the DBA created. Since the likely use of such a snapshot would be to recover the VM and then recover the database from its backup, there’s really no reason to include the source SQL Server database or logs in the snapshot. With TSM for VE, the VMware admin can exclude the source SQL Server database from being redundantly backed up, adding to an already formidable set of built-in efficiency techniques (with TSM for VE, snapshots are taken incrementally – forever, and can be deduplicated and compressed). It’s a good compromise solution, letting each admin stay in his or her comfort zone. But it can be better.

We can join forces and do something really great

With TSM for VE, VMware admins and SQL Server DBAs can put their heads together and choose to do something really great. For the DBA, it’s an exercise in less-is-more. The DBA stops doing her own backups. No more full or differential copies of the database. No more taxing resource usage on the VM. No more I/O blender effect. Just, no more. How? Well, with a VMware VADP-integrated backup tool like TSM for VE, the snapshot of the VM is accompanied by a freeze and thaw of the SQL Server database (techno-speak for putting the database in a consistent state), just like what happens when a backup is independently initiated by a DBA. And with TSM for VE, as soon as the TSM server confirms that it has successfully stored the consistent snapshot in a safe, physically separate place, it will connect back with the SQL Server to truncate the database logs.
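For readers who like to see the moving parts, the freeze, snapshot, store, truncate sequence described above can be sketched roughly as follows. This is illustrative pseudocode with made-up object names, not the real TSM for VE implementation:

```python
def consistent_vm_backup(db, vm, backup_server) -> bool:
    """Illustrative flow only: quiesce the database, snapshot the VM,
    confirm the copy is safely stored, and only then truncate the logs.
    The db, vm and backup_server objects are hypothetical stand-ins."""
    db.freeze()                        # put the database in a consistent state
    try:
        snap = vm.snapshot()           # capture the whole virtual machine
    finally:
        db.thaw()                      # resume normal I/O right away
    if backup_server.store(snap):      # copy lands in a separate, safe place
        db.truncate_logs()             # safe to reclaim log space now
        return True
    return False
```

The ordering is the whole story here: the logs are truncated only after the server confirms the consistent snapshot is stored, which is exactly why the DBA can stop taking separate backups.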

In addition to the less-is-more benefits above, think about the differences in restore with these two scenarios. When the DBA and VMware admin simply coexist, each doing their own thing, restoring the SQL Server database includes steps for restoring:

  • The VM snapshot to get the database backups in place
  • The full database backup
  • The subsequent differential backups

By comparison, when the DBA and the VMware admin join forces with TSM for VE, the steps are dramatically simplified. Restoring the snapshot equates to restoring a consistent copy of the database.  And remember, because these snapshots are highly efficient, they can be taken quite frequently.

Superhero indeed!

Going to VMworld 2013? Come visit IBM on the Solutions Exchange floor at booth #1545.

IBM at VMworld 2013: Optimization, Backup, …

VMworld 2013 is just around the corner and at IBM, we’re gearing up for a great set of conversations with our joint clients. As you’re planning your agenda, here are a couple of things worth looking into.

Virtualization Optimization

IBM has a lot of expertise to share when it comes to optimizing virtual environments. A few weeks ago in my Outside the Line – an interview on Virtualization optimization post, I was able to catch up with several of the experts who are leading this work. At VMworld, IBM will be showcasing these solutions on the Solutions Exchange floor at booth #1545.

VMware Backup

TSM for VE in VMware vSphere web client

IBM Tivoli Storage Manager for Virtual Environments (TSM for VE) is one of the most efficient backup integrations built on the VMware vStorage APIs for Data Protection (VADP). I offered some quick insights in my post VMware backup for the iPOD generation. At VMworld 2013, you’ll have an opportunity to take a test drive in the TSM for VE hands-on lab.

Are you going to VMworld? What are you most looking forward to?

Outside The Line — an Interview on Virtualization Optimization

In my role, I get to spend a lot of time with customers and I also do a lot of reading. Over the years, I’ve developed a good bit of depth on most storage topics but one of the things I love is the pace of innovation going on all around me. There’s never a shortage of new things to learn. Occasionally I run across something particularly interesting that I want to talk about, but it’s a bit outside my area of direct expertise. This is one of those topics. To help me out, I’m going to reach outside The Line to get insights from real experts. Please let me know what you think of the format.

Several months ago IDC published an interesting Market Analysis Perspective: Worldwide Enterprise Servers, 2012 — Technology Market that uncovered something quite contrary to conventional wisdom when it comes to virtualized server environments. Server virtualization, or software defined compute (SDC) as it is coming to be known, promised to control server sprawl and return balance to the portion of the IT budget allocated to servers. The IDC research confirms that the controlled-server-sprawl part of the promise has largely been realized. Since 2000, the worldwide spend on x86 servers has actually declined from $70B to about $56B. Equally important, environmental spending on power and cooling has leveled off too. The revelation that surprised most folks was the dramatic expansion in spend on management. Since 2000, spend on management has more than tripled, reaching $171B, and it now accounts for 68% of IT spend on x86 infrastructure.

These days the measuring stick for server virtualization seems to be around VM density, or the number of virtual machines that can be supported on a single physical server. The theory has been that as VM density increases, management costs decline. Most IT managers I talk to can point to fairly good and increasing VM density in their environments. So what’s causing associated management costs to increase so much and what can IT managers do to improve the situation?

Recently I sat down with a group of product managers who are directing IBM SmartCloud work in the area of optimizing the management and administration of VMware environments. Collectively, these guys are thinking about everything from virtual image construction and patching security vulnerabilities to planning capacity, charging for usage and recovering from failures.

The Line: Guys, let’s talk for a minute about this IDC research. Do you really see the cost of management and administration becoming a central issue in SDC environments today?

Tom Recchia: Absolutely. Virtualization has introduced a pile of new challenges. Instead of server sprawl, we now talk about image sprawl. IT managers have to think about virtual data protection, performance, security compliance and cost management for all these images. Virtual machines can become dormant, leading to out-of-date software versions, security issues, and just paying for things you aren’t using. The clients I talk with bought into virtualization to improve costs and agility, but they are realizing management challenges are slowing service delivery and constricting their return on investment.

Brooks York: I agree. Virtualization has certainly helped optimize the use of physical resources. But the management improvements for the guest operating systems (OS) haven’t been nearly as dramatic. On the flip side, virtualization has made it much easier to create new OS instances, leading to VM sprawl. IT managers are ending up with a lot more OS instances, none of which is much easier to manage than it used to be. That’s leading to higher overall management costs.

The Line:  Say more about what you are learning from your client conversations. What kinds of specific issues are causing this rise in cost?

Alex Peay: Here’s one concrete example. Virtualization makes it easy to create a new image designed to deliver a service. Increasingly, IT managers are coupling that capability with a self-service portal. All of a sudden, the customers of IT are off and running creating new images almost at-will. New terms like image sprawl and image drift enter the vocabulary. Pretty soon you have unneeded images, duplicate resources, increased maintenance, rising storage costs for wasted space and a general lack of understanding of what all is in the environment. While clients have gained control of their physical environments they have lost control of their virtual environments.

Brooks: I often hear that the image sprawl is also creating issues with software licensing costs and with OS or system administrator costs. Remember, all those images have an operating system and some application software, and they need to be administered by somebody.

Mike Gare: That’s one of those double-edged swords that comes with the flexibility of virtualization, isn’t it? Clients create all kinds of VM templates and disk images to support self-service provisioning. These things have to be properly managed or organizations risk virus outbreaks, hacking and the potential loss of intellectual property. With virtualization and these self-service portals, it’s relatively easy to spin up a new VM from a template or clone in less than an hour. That’s cool from the rapid service delivery perspective, but IT managers still need to concern themselves with patching, securing and updating the applications on each VM in order to address potential security gaps. This is especially true as virtualization moves from development and test environments, where VMs are often on separate networks and only live for a few hours or days, to production environments where VMs are more persistent.

Kohi Easwarachandran: Image sprawl hits data protection too. An organization’s data needs to be protected regardless of whether it is being operated on by an application on a physical server or a virtual machine. Before virtualization, the dominant method for protecting data was to have a backup agent on each OS image. Unfortunately, many IT managers carried this model forward with virtualization, loading a backup agent on each VM. As VMs proliferate, so do backup agents, and, well, the task of managing and troubleshooting this style of backup in a virtualized environment can be quite draining and expensive.

The Line:  So, increasing VM density is good from a hardware efficiency point-of-view, but the point you guys are making is that the things on those images – operating systems, application software, backup agents, patches, etc. – all still have to be tracked and managed by someone, and that’s contributing to the rising costs. Tom, what about managing those costs? It seems that with SDC, IT managers need to shift from planning budgets around physical servers to planning budgets around application instances that are packed together on physical servers as VM density increases. What impact are you seeing from that shift?

Tom: Over the years, most IT managers had developed tools for measuring usage of physical infrastructure. But it’s more difficult to manually keep track of VM usage. Fortunately, there is cost management software that will automatically collect virtualized environment usage and even apply rates. I’ve seen this automation reduce finance staff labor costs by up to 25%. Maybe the bigger thing, though, is that it can let that finance staff show the multiple users of a virtual environment what they are consuming.

The Line: So it’s not just the cost IT managers incur for managing the VMware environment, it’s also a matter of how to account for who is consuming what in the virtual environment?

Tom: That’s right. This is a really important thought. When one application ran on one server and it was owned by one business line, tracking costs was sort of straightforward. With increasing VM density, tracking costs in a shared virtual environment is harder. But it’s still an incredibly powerful tool in the quest toward a culture of efficiency and freeing up unused resources. The next step can be actually charging end users for the resources or the services they use. This sets up IT to be a service provider rather than just a cost center.
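The usage-metering idea Tom describes boils down to multiplying collected usage records by a rate card and rolling the results up by consumer. A toy Python sketch of that showback calculation (the record fields and rate names are hypothetical, not a real product schema):

```python
def showback(usage: list, rates: dict) -> dict:
    """Toy chargeback/showback: sum each department's metered quantity
    times the rate for that resource. Purely illustrative."""
    bills: dict = {}
    for rec in usage:  # e.g. {"dept": "sales", "resource": "vcpu_hours", "qty": 10}
        cost = rec["qty"] * rates[rec["resource"]]
        bills[rec["dept"]] = bills.get(rec["dept"], 0) + cost
    return bills
```

A real cost management product automates the collection of those usage records; the arithmetic that turns them into a per-department view is this simple.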

The Line: Okay, I think we’re getting a good picture of the problem. So what are you guys cooking up that can help IT managers improve the situation?

Alex: One thing we are working on is providing insight through analytics. Today, we are able to run real-time analytics against a virtual environment and show IT managers where they have duplication in programs or files and even trace the ancestry of a specific image. This information helps determine which images to keep, which images to toss and when something new really needs to be created.

Brooks: Alex is right. Analytics is where a lot of our effort is. Beyond simply reporting the news about what’s going on in an environment, we are deriving insight about the inner workings of the environment to help optimize both the physical and virtual sides of the datacenter. That drops costs.

Mike: Datacenters are being transformed in more ways than just virtual servers. Most IT managers that are concerned with virtualization are also dealing with cloud, mobile devices and BYOD (bring your own device). So, we’re opening the aperture a bit, taking many of the capabilities we have developed for virtual machine management and applying them to other endpoints like mobile devices and cloud.

Kohi: For virtual data protection, we are helping clients move away from the old unsustainable approach of having a backup agent in every VM. The new approach snapshots whole VM estates, only captures incremental changes – forever – and deduplicates what it captures. We’ve just sidestepped the entire image sprawl issue and created something that is more efficient than the physical environment used to be.
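The combination of incremental-forever capture and deduplication that Kohi describes can be illustrated with a toy example. This hash-keyed chunk store is just a sketch of the general idea, not how the TSM repository actually works:

```python
import hashlib

def chunk(data: bytes, size: int = 4) -> list:
    """Split data into fixed-size chunks (real systems use smarter chunking)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

class DedupStore:
    """Toy backup repository: stores each unique chunk exactly once."""
    def __init__(self):
        self.chunks = {}

    def ingest(self, data: bytes) -> int:
        """Store only chunks not seen before; return bytes actually written."""
        written = 0
        for c in chunk(data):
            h = hashlib.sha256(c).hexdigest()
            if h not in self.chunks:   # incremental forever: new data only
                self.chunks[h] = c
                written += len(c)
        return written
```

The first ingest of a VM writes everything; a later ingest of a mostly unchanged VM writes only what changed, which is why the approach scales where agent-per-VM full backups don’t.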

The Line: Tom, Alex, Brooks, Mike, Kohi… thank you for spending a few minutes with me and sharing insights on where you are taking IBM SmartCloud offerings for virtualization optimization.

If you want to join the conversation or have a question for any of the team, please leave a comment below.  

VMware backup for the iPOD generation

Last week I explored the question, Do IT managers really “manage” storage anymore? In that post I talked about a shift I am seeing with a lot of the clients I work with, from specialized storage scientist administrators to a new breed of virtual environment, converged infrastructure and cloud admins I call the iPod generation. These folks have been brought up with a whole different level of administrative expectation. They have no desire or expertise to deal with unique knobs and dials on each type of device the way a storage scientist does when he tunes the environment. Instead, they value learning an outcome-based interface approach once and then using it for everything. These are the guys and gals who say “I like the Apple interface across my laptop, tablet and phone”.

Interestingly, there is another environment where the iPod generation flourishes. They do well as jack-of-all-trades administrators in small and medium businesses (SMBs). If for no other reason than scale, IT administrators in SMBs tend to be responsible for compute and storage and backup and networks and workloads and so on. The simple fact that there is a lot of variety can severely impact the productivity of these administrators. You can imagine the value they would find in a single interface that could span the infrastructure.

My view is that this is one reason virtual datacenters are rising rapidly in SMBs. The recent IDC Worldwide SMB 2013 Predictions research agrees, pointing to the growing importance of virtualization for servers and storage, with 2013 being a key year for expansion beyond the midmarket into larger small businesses. To highlight the benefits SMB administrators could enjoy, I’ll illustrate a specific use case with a virtual server environment using VMware vSphere.

An administrator working with VMware has the option of administering all virtual compute resources from the VMware vCenter interface. But, as mentioned above, there is a lot more to the daily care and feeding of an SMB datacenter than just virtual servers. Fortunately for the administrator, VMware has done a good job of opening up the interface for vCenter plug-ins from third party providers to add many of the capabilities the jack-of-all-trades administrator needs to routinely access. Backup capabilities are a good example.

IBM, as a third party provider, purpose built an exceptionally efficient backup tool uniquely for virtual server environments. It’s called Tivoli Storage Manager for Virtual Environments (TSM for VE).

TSM for VE in VMware vSphere web client

The snapshot and disk-based backup capabilities of TSM for VE include deduplication and incremental-forever technology for extreme efficiency, keeping software and infrastructure costs down. For the administrator, IBM has fully integrated the TSM for VE interface into VMware vCenter.

Now, for backup at least, the jack-of-all-trades administrator who grew up in the iPod generation can perform all their virtual server and backup administration from the same interface.

What do you think? Are you, or do you work with, an SMB IT administrator? Is jack-of-all-trades or iPod generation a fair characterization of who you are?