Outside The Line — an Interview on Virtualization Optimization

In my role, I get to spend a lot of time with customers and I also do a lot of reading. Over the years, I’ve developed a good bit of depth on most storage topics, but one of the things I love is the pace of innovation going on all around me. There’s never a shortage of new things to learn. Occasionally I run across something particularly interesting that I want to talk about, but it’s a bit outside my area of direct expertise. This is one of those topics. To help me out, I’m going to reach outside The Line to get insights from real experts. Please let me know what you think of the format.

Several months ago, IDC published an interesting Market Analysis Perspective, Worldwide Enterprise Servers, 2012 — Technology Market, that uncovered something quite contrary to conventional wisdom about virtualized server environments. Server virtualization, or software defined compute (SDC) as it is coming to be known, promised to control server sprawl and return balance to the portion of the IT budget allocated to servers. The IDC research confirms that the controlled server sprawl part of the promise has largely been realized. Since 2000, the worldwide spend on x86 servers has actually declined from $70B to about $56B. Equally important, environmental spending on power and cooling has leveled off too. The revelation that surprised most folks was the dramatic expansion in spend on management. Since 2000, spend on management has more than tripled, reaching $171B, and now accounts for 68% of IT spend on x86 infrastructure.

These days the measuring stick for server virtualization seems to be around VM density, or the number of virtual machines that can be supported on a single physical server. The theory has been that as VM density increases, management costs decline. Most IT managers I talk to can point to fairly good and increasing VM density in their environments. So what’s causing associated management costs to increase so much and what can IT managers do to improve the situation?

Recently I sat down with a group of product managers who are directing IBM SmartCloud work in the area of optimizing the management and administration of VMware environments. Collectively, these guys are thinking about everything from virtual image construction and patching security vulnerabilities to planning capacity, charging for usage and recovering from failures.

The Line: Guys, let’s talk for a minute about this IDC research. Do you really see the cost of management and administration becoming a central issue in SDC environments today?

Tom Recchia: Absolutely. Virtualization has introduced a pile of new challenges. Instead of server sprawl, we now talk about image sprawl. IT managers have to think about virtual data protection, performance, security compliance and cost management for all these images. Virtual machines can become dormant, leading to out-of-date software versions, security issues, and paying for things you aren’t using. The clients I talk with bought into virtualization to improve costs and agility, but they are realizing management challenges are slowing service delivery and constricting their return on investment.

Brooks York: I agree. Virtualization has certainly helped optimize the use of physical resources. But the management improvements for the guest operating systems (OS) haven’t been nearly as dramatic. On the flip side, virtualization has made it much easier to create new OS instances, leading to VM sprawl. IT managers are ending up with a lot more OS instances, none of which is much easier to manage than it used to be. That’s leading to higher overall management costs.

The Line:  Say more about what you are learning from your client conversations. What kinds of specific issues are causing this rise in cost?

Alex Peay: Here’s one concrete example. Virtualization makes it easy to create a new image designed to deliver a service. Increasingly, IT managers are coupling that capability with a self-service portal. All of a sudden, the customers of IT are off and running, creating new images almost at will. New terms like image sprawl and image drift enter the vocabulary. Pretty soon you have unneeded images, duplicate resources, increased maintenance, rising storage costs for wasted space and a general lack of understanding of what is actually in the environment. While clients have gained control of their physical environments, they have lost control of their virtual environments.

Brooks: I often hear that the image sprawl is also creating issues with software licensing costs and with OS or system administrator costs. Remember, all those images have an operating system and some application software, and they need to be administered by somebody.

Mike Gare: That’s one of those double-edged swords that comes with the flexibility of virtualization, isn’t it? Clients create all kinds of VM templates and disk images to support self-service provisioning. These things have to be properly managed or organizations risk virus outbreaks, hacking and the potential loss of intellectual property. With virtualization and these self-service portals, it’s relatively easy to spin up a new VM from a template or clone in less than an hour. That’s cool from the rapid service delivery perspective, but IT managers still need to concern themselves with patching, securing, and updating the applications on each VM in order to address potential security gaps. This is especially true as virtualization moves from development and test environments, where VMs are often on separate networks and only live for a few hours or days, to production environments where VMs are more persistent.

Kohi Easwarachandran: Image sprawl hits data protection too. An organization’s data needs to be protected regardless of whether it is being operated on by an application on a physical server or a virtual machine. Before virtualization, the dominant method for protecting data was to have a backup agent on each OS image. Unfortunately, many IT managers carried this model forward with virtualization, loading a backup agent on each VM. As VMs proliferate, so do backup agents and, well, the task of managing and troubleshooting this style of backup in a virtualized environment can be quite draining and expensive.

The Line:  So, increasing VM density is good from a hardware efficiency point-of-view, but the point you guys are making is that the things on those images – operating systems, application software, backup agents, patches, etc. – all still have to be tracked and managed by someone, and that’s contributing to the rising costs. Tom, what about managing those costs? It seems that with SDC, IT managers need to shift from planning budgets around physical servers to planning budgets around application instances that are packed together on physical servers as VM density increases. What impact are you seeing from that shift?

Tom: Over the years, most IT managers developed tools for measuring usage of physical infrastructure. But it’s much more difficult to manually keep track of VM usage. Fortunately, there is cost management software that will automatically collect virtualized environment usage and even apply rates. I’ve seen this automation reduce finance staff labor costs by up to 25%. Maybe the bigger thing, though, is that it can let that finance staff show the multiple users of a virtual environment what they are consuming.

The Line: So it’s not just the cost IT managers incur for managing the VMware environment, it’s also a matter of how to account for who is consuming what in the virtual environment?

Tom: That’s right. This is a really important point. When one application ran on one server and it was owned by one business line, tracking costs was fairly straightforward. With increasing VM density, tracking costs in a shared virtual environment is harder. But it’s still an incredibly powerful tool in the quest toward a culture of efficiency and freeing up unused resources. The next step can be actually charging end users for the resources or the services they use. This sets up IT to be a service provider rather than just a cost center.
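As a rough illustration of the showback idea Tom describes, here is a minimal sketch in Python. The data model, metric names and rates are all hypothetical (this is not the IBM SmartCloud Cost Management API): metered usage records are rolled up per business line and priced with per-unit rates.

```python
# Hypothetical per-unit rates for metered resources
RATES = {"cpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_days": 0.002}

# (business line, metric, quantity) tuples, as a metering agent might emit
usage_records = [
    ("marketing", "cpu_hours", 1200),
    ("marketing", "gb_ram_hours", 4800),
    ("finance", "cpu_hours", 300),
    ("finance", "gb_storage_days", 9000),
]

def showback(records, rates):
    """Roll usage up by business line and apply per-unit rates."""
    report = {}
    for owner, metric, qty in records:
        report[owner] = report.get(owner, 0.0) + qty * rates[metric]
    return report

# Per-owner totals the finance staff could show back to each business line
print(showback(usage_records, RATES))
```

Charging (rather than just showing) these amounts is then a billing-policy decision on top of the same report.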

The Line: Okay, I think we’re getting a good picture of the problem. So what are you guys cooking up that can help IT managers improve the situation?

Alex: One thing we are working on is providing insight through analytics. Today, we are able to run real-time analytics against a virtual environment and show IT managers where they have duplication in programs or files and even trace the ancestry of a specific image. This information helps determine which images to keep, which to toss and when something new really needs to be created.

Brooks: Alex is right. Analytics is where a lot of our effort is. Beyond simply reporting the news about what’s going on in an environment, we are deriving insight about the inner workings of the environment to help optimize both the physical and virtual side of the datacenter. That drops costs.
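The duplicate-file analysis Alex mentions can be sketched with content hashing. This is an editorial illustration of the general technique, not the actual IBM analytics: each file in each image is hashed, so identical content shows up under the same digest even when it lives in different images.

```python
import hashlib
from collections import defaultdict

def find_duplicates(images):
    """images: {image_name: {path: content_bytes}}.
    Returns digests that appear in more than one place."""
    by_digest = defaultdict(list)
    for image, files in images.items():
        for path, data in files.items():
            digest = hashlib.sha256(data).hexdigest()
            by_digest[digest].append((image, path))
    return {d: locs for d, locs in by_digest.items() if len(locs) > 1}

# Two hypothetical images cloned from the same template
images = {
    "web-v1": {"/bin/app": b"v1-binary", "/etc/conf": b"shared-config"},
    "web-v2": {"/bin/app": b"v2-binary", "/etc/conf": b"shared-config"},
}
dupes = find_duplicates(images)
# The shared /etc/conf file is flagged as duplicated across both images
```

Tracing image ancestry works similarly in spirit: the more content two images share, the more likely one was cloned from the other.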

Mike: Datacenters are being transformed in more ways than just virtual servers. Most IT managers that are concerned with virtualization are also dealing with cloud, mobile devices and BYOD (bring your own device). So, we’re opening the aperture a bit, taking many of the capabilities we have developed for virtual machine management and applying them to other endpoints like mobile devices and cloud.

Kohi: For virtual data protection, we are helping clients move away from the old, unsustainable approach of having a backup agent in every VM. The new approach snapshots whole VM estates, captures only incremental changes – forever – and deduplicates what it captures. We’ve sidestepped the entire image sprawl issue and created something that is more efficient than the physical environment used to be.
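The incremental-forever, deduplicating approach Kohi describes can be sketched in a few lines. This is an illustration of the general idea, not any specific product’s format: a VM disk is split into fixed-size blocks, each unique block is stored once in a shared store, and a backup is recorded as just a list of block digests.

```python
import hashlib

BLOCK = 4   # tiny block size for the demo; real systems use KB/MB blocks
store = {}  # digest -> block bytes, shared across all VMs and all backups

def backup(disk_bytes):
    """Record a backup as a list of block digests; store only new blocks."""
    digests = []
    for i in range(0, len(disk_bytes), BLOCK):
        block = disk_bytes[i:i + BLOCK]
        d = hashlib.sha256(block).hexdigest()
        store.setdefault(d, block)  # unchanged blocks cost nothing extra
        digests.append(d)
    return digests

def restore(digests):
    """Reassemble a full disk image from its digest list."""
    return b"".join(store[d] for d in digests)

full = backup(b"AAAABBBBCCCC")  # first backup stores 3 unique blocks
incr = backup(b"AAAABBBBDDDD")  # only the changed DDDD block is stored
assert restore(incr) == b"AAAABBBBDDDD"
```

Every backup here is logically a full backup (restorable on its own), yet each one after the first costs only the changed blocks, which is what makes the approach sustainable as VM counts grow.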

The Line: Tom, Alex, Brooks, Mike, Kohi… thank you for spending a few minutes with me and sharing insights on where you are taking IBM SmartCloud offerings for virtualization optimization.

If you want to join the conversation or have a question for any of the team, please leave a comment below.  

7 Replies to “Outside The Line — an Interview on Virtualization Optimization”

  1. I agree with Tom, tracking VM usage is not possible with any of the available tools. I actually did a research paper on this too, but most of my discussion was on the cost management part, which is directly proportional to this.

    I myself use Replicon project costing software, which gives me reports on usage and costs. Worth a try.


    1. Peter, thanks for the reply. For determining the cost to deliver cloud services, collecting resource usage is the foundation. You can then allocate your expenses based on resource usage to determine rates. Finally, collecting the service usage and applying the resource usage rates allows you to bill. Virtualization Optimization features IBM SmartCloud Cost Management, which collects VM, network, storage, DB and application usage, provides the associated cost management for those resources, and tracks service usage for showback/chargeback. – Tom

