A while back, I asked an IT manager in Europe what tools he uses to manage storage. His response changed the way I think about our mission as a supplier. He leaned back in his chair and with a grin on his face said, “Manage storage? I don’t manage storage. I herd storage.” As the conversation progressed, I listened to a story that has become quite familiar to most folks who are involved in the care and feeding of storage infrastructure:
- A constant coming and going of physical disk arrays
- Headaches associated with provisioning, scheduling data migration and the associated application outages
- A never-ending series of reactionary events around backup, performance, replication and so on
- Disaster recovery tests to perform
And this IT manager was trying to do all that inside the constraints of a multi-vendor infrastructure that his CFO liked because it gave them bargaining power in their hardware infrastructure purchases. Sounds like a new job description – storage herder.
Today’s storage management crisis
In the time since that conversation, I’ve come to realize that IT is in a real crisis when it comes to storage administration. There is a generation of admins who grew up in a world of fast, but not overwhelming data growth, a world where a server was a server, a disk was a disk and managing them required a fairly good understanding of how all of it physically worked. These ‘storage scientists’ make great use of all the knobs and dials that we vendors expose to them in order to tune and tweak their environment. They take great time and great pride in “managing” storage. I know, because before coming to IBM, I spent the first 10 years of my career as one of those storage scientists. The IT manager I met with in Europe was one of them too. The crisis is that the world that trained these experts is rapidly evaporating.
Today’s world is marked by out-of-control, overwhelming data growth. Analysts have tried to describe the pace using words like avalanche, explosion and tidal wave. You see the mental imagery. Whatever you call it, the reality is that data is growing faster than hardware vendors’ ability to increase the areal density of disk drives, meaning that from here on out, there is going to be more storage capacity coming into the data center than going out of it.
Today’s world is also virtual. Words like software-defined have worked their way into common vocabulary. Servers aren’t servers; they are virtual machines that are elastic in horsepower and mobile. Tapes aren’t tapes; they are a deduplicated, replicated figment of the imagination that is stored on disk. And increasingly, disks aren’t disks; they are thin-provisioned, compressed virtual volumes that are replicated, snapshotted and mobile from tier to tier, vendor to vendor and site to site. Virtualization has also dramatically increased the pace. There is no longer a built-in physical governor on the speed of provisioning new workloads – no cables to pull and no physical infrastructure to power up. Workloads, and the data they work on, can be built up and torn down almost at the speed of thought. And managing global data availability (disaster avoidance, recovery, discovery) for all this data is converging into a single idea. In this world, an administrator who attempts to manage as a storage scientist simply can’t keep up. He becomes a storage herder.
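To make the “disks aren’t disks” point concrete, here is a deliberately simplified sketch (not any vendor’s implementation) of the core idea behind thin provisioning: a volume advertises a large logical size, but physical blocks are consumed only when data is actually written.

```python
# Illustrative sketch only: thin provisioning in miniature.
# A volume presents a large logical capacity but allocates
# physical space lazily, on first write to each block.
BLOCK_SIZE = 4096  # bytes per block (a common choice)

class ThinVolume:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # advertised capacity
        self.allocated = {}                   # logical block number -> data

    def write(self, block_no, data):
        if block_no >= self.logical_blocks:
            raise IndexError("write past end of volume")
        self.allocated[block_no] = data       # physical space consumed here

    def read(self, block_no):
        # Unwritten blocks read back as zeros without ever consuming space.
        return self.allocated.get(block_no, b"\x00" * BLOCK_SIZE)

    def physical_bytes(self):
        return len(self.allocated) * BLOCK_SIZE

vol = ThinVolume(logical_blocks=1_000_000)    # ~4 GB advertised
vol.write(0, b"x" * BLOCK_SIZE)
vol.write(42, b"y" * BLOCK_SIZE)
print(vol.physical_bytes())                   # 8192: only two blocks consumed
```

The gap between advertised and consumed capacity is exactly what lets administrators provision volumes “at the speed of thought” without pulling cables first.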
A new generation of storage admins
Another shift I am seeing with a lot of the clients I work with is that the traditional storage administrators – the storage scientist generation – aren’t the only people with storage responsibilities any more. They are being joined by a new breed of virtual environment, converged infrastructure and cloud admins who are responsible – but not nearly as prepared – for managing storage. I call them the iPod generation. These folks have been brought up with a whole different level of administrative expectation. They want to pick an outcome and have the “system” just take care of the details. They have no desire or expertise to deal with knobs and dials to tune an environment. And they value learning an interface approach once and then using it everywhere. These are the guys and gals who say “I like the ‘Apple’ interface across my laptop, tablet and phone” or “the ‘Google’ interface across my tablet, phone and glasses”. In IT, these folks want to learn the interface and use it for all things storage regardless of what physical infrastructure vendor the CFO might be able to get a better deal from at the moment. I guess you could say that they want to deal in hardware-agnostic service level agreements.
So, IT has the storage scientists who, using knobs and dials, simply can’t keep up. They are scientists reduced to herders. And you can imagine the chaos if you were to hand the iPod generation a different command-line interface (CLI) for each vendor and type of storage device in the infrastructure.
An entirely new approach
IBM, with a lot of help from its clients, is addressing the problem with an entirely new approach to storage administration. Under the umbrella of IBM Design Thinking we are bringing together visually intuitive graphics and deep automation to create a storage administration approach that is so clear and straightforward that overall productivity climbs considerably.
One of the things I find most intriguing about this new approach to storage administration is that, for the first time in my career, social interaction was leveraged to engage clients at every phase of the development process. We call it transparent development. Through IBM Service Management Connect (which is powered by IBM Connections software) the designers and developers at IBM engaged hundreds of clients, administrators, and business partners to understand needs, share designs, test implementations, get feedback, and ultimately create something that hasn’t been seen before in competitive offerings. You can get a feel for the public side of this interaction in the Tivoli Storage Operations Center community.
The thing that has stopped our clients in their tracks is that, because this is a software-defined approach to storage, what we’ve designed works equally well regardless of the physical storage infrastructure they choose. Choose infrastructure from EMC, NetApp, Dell, IBM, HP, Brocade, Cisco, Oracle StorageTek – it really doesn’t matter. CFOs and IT managers have complete flexibility to get their best deal on hardware without affecting capability and productivity one bit. Sounds interesting, huh?
Today’s IT manager is concerned with two basic types of data.
- First is the data that supports the applications that run the business. This data is characterized by the need for speed, mobility, accessibility and resiliency. The software-defined storage layer for this primary data takes care of things like provisioning (from the storage capacity, to the replication relationships, to the storage networks that tie it all together), auto-tiering, performance analytics and problem isolation, snapshotting and replication. The clients we work with who choose IBM SmartCloud Virtual Storage Center as the software-defined layer for managing primary data enjoy dramatically improved productivity and common capabilities regardless of their choice in physical storage infrastructure.
- Second is the copy data – the copies of primary data that are maintained for backups, for archives, for development and testing, for future analytics and so on.
In sheer volume, copy data is now larger and growing faster than primary data, but it also has a significantly different set of needs. Copy data has the need for hyper-efficiency, remote protection and discoverability. The software layer for this copy data takes care of things like capturing, deduplicating, compressing, encrypting, vaulting and inventorying. Regardless of their chosen storage hardware infrastructure, the software layer for managing copy data is IBM Tivoli Storage Manager (TSM). The dramatically improved productivity is coming soon from the completely new TSM Operations Center.
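The hyper-efficiency requirement for copy data rests largely on deduplication. A minimal sketch of the general technique (content-addressed chunk storage, not the TSM implementation) shows why repeated backups of similar data can be stored so compactly:

```python
# Illustrative sketch only: content-addressed deduplication.
# Identical chunks are stored once and referenced by their hash,
# so repeated backups of similar data consume little new space.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}  # sha256 digest -> unique chunk bytes

    def put(self, data, chunk_size=4):
        """Split data into fixed-size chunks, store each unique chunk
        once, and return the recipe (digest list) for retrieval."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store only if new
            recipe.append(digest)
        return recipe

    def get(self, recipe):
        # Reassemble the original bytes from the stored unique chunks.
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
r1 = store.put(b"ABCDABCDABCD")  # three identical 4-byte chunks
r2 = store.put(b"ABCDXYZW")      # one chunk already stored, one new
print(len(store.chunks))         # 2 unique chunks for 20 bytes of input
```

Real deduplicating stores use variable-size, content-defined chunking and much larger chunks, but the accounting principle is the same: the second and subsequent copies of a chunk cost only a reference.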
Two visually intuitive and deeply automated interfaces – one for primary data and one for copy data – regardless of your choice in physical storage infrastructure. This is one of the use cases that clients we work with point to as making the idea of software-defined storage come alive.
I was recently asked, “As IT managers shift toward a software-defined approach to managing storage, how well do the storage scientist skills adapt?” It’s a good question. I would like to hear from you. What do you think?