Software Defined Storage Use Case – Block SAN Storage for Traditional Workloads

In my last post, IBM Spectrum Storage Suite – Revolutionizing How IT Managers License Software Defined Storage, I introduced a simple and predictable licensing model for nearly all the storage needs an IT manager might have. That’s a pretty big concept if you think about all the storage use cases an IT manager has to deal with.

  • Block SAN storage for traditional workloads
  • File storage for analytic or Big Data workloads
  • Object storage for cloud and mobile workloads
  • Scale-out block storage for VMware datastores
  • Storage for housing archive or backup copies

Just to name a few… The idea behind software defined storage is that an IT manager optimizes storage hardware capacity purchases for performance, environmentals (like power and space consumption), and cost. Then he ‘software defines’ that capacity into something useful – something that meets the needs of whatever particular use case he is trying to deal with. But is that really possible under a single, predictable software defined storage license? The best way I can think of to answer the question is to look at several of the most common use cases we see with our clients.

Perhaps the most widely deployed enterprise use case today is block SAN storage for the traditional workloads that all our businesses are built on – databases, email systems, ERP, customer relationship management and the like. Most IT managers know exactly what kind of storage capabilities they need to deploy for this use case. It’s stuff like thin provisioning, snapshots, synchronous and asynchronous replication, tiering, compression, and multi-path access.

Here’s the thing… This use case has been evolving for years and most IT managers already have it deployed. The problem isn’t that the capabilities don’t exist. The problem is that the capabilities are most often tied directly to a specific piece of hardware. If you like a particular capability, the only way to get it is to buy that vendor’s hardware. It’s a hardware-defined model, and you’re locked in. With IBM Spectrum Storage, IBM has securely unboxed those capabilities from the hardware. The idea of software defined changes everything. With the software unboxed from the hardware, you really are free to choose whatever hardware vendor or tier you like. And since the software stays the same even while the hardware changes, you don’t pay any operational or procedural tax when you make those changes.

All of the capabilities mentioned above for addressing this Block SAN storage for traditional workloads use case can be accomplished with one IBM Spectrum Storage Suite software license. This may be the most widely deployed use case today, but it’s not the fastest growing use case. In my next posts, I’ll continue looking at the wide variety of use cases that are covered by the simple, predictable IBM Spectrum Storage Suite software defined storage license.

Are you interested in taking the first step with software defined storage? Contact your IBM Business Partner or sales representative. And join the conversation with #IBMStorage and #softwaredefined.

IBM Spectrum Accelerate – Enterprise Storage in Minutes

For years, enterprise datacenters have been dominated by traditional disk arrays, things IDC’s Worldwide Quarterly Disk Storage Systems Tracker would categorize as High-end and Midrange disk. Now, I don’t have anything against this kind of storage; in fact, my company makes some of the most competitive offerings in these categories. But my last two posts, Introducing IBM Spectrum Storage – Inside Perspective and IBM Spectrum Scale – Built for Efficiency, Optimized for Speed, have centered on the idea that a shift is being forced by the scale, performance, and cost requirements of workloads like cloud, analytic, mobile, social, and Big Data. The storage industry needs a new way of doing things that utilizes much lower cost common building blocks with almost unlimited increments in scalability. I’m going to stay on that theme one more time as we talk about IBM Spectrum Accelerate, software defined storage that can help you construct enterprise storage in minutes.

The kind of datacenters being built by both enterprises and the public cloud pioneers for newer workloads are filled with much lower cost common building blocks. Imagine fields of identical computers each with some internal disk capacity. Cloud management software based on OpenStack dynamically schedules workloads onto hypervisors across this field of capacity. Workloads appear, storage is allocated, virtual networks are configured, workloads are dynamically scaled and relocated for optimization, and then they disappear when all the work is complete. This environment is hostile to the relative rigidity of traditional SAN storage. And yet it depends on many of the enterprise storage capabilities that SANs have matured over the years.

The genius of IBM Spectrum Accelerate is that it takes the complete set of enterprise storage capabilities available on one of the industry’s most competitive High-end disk arrays and enables IT managers to leverage it on common building block hardware. You see, if you were to pull back the covers on the IBM XIV storage system, you would discover that the forward-thinking engineers who architected it understood the demands that these newer workloads would bring. Inside the XIV you would find common building blocks – computers with disks – and intelligent software managing them. IBM is the first company in the industry to extract intelligence directly from its traditional storage hardware products, enabling clients to use it as software.

Think about the scenario. An IT manager has created a common building block physical infrastructure for new generation workloads. No SAN, no High-end or Midrange disk arrays. Just a field of low-cost, common building block computers – some with internal disk capacity – and a hypervisor. Workloads are being deployed in virtual machines, but these workloads need enterprise storage services. So the IT manager deploys IBM Spectrum Accelerate software into virtual machines on some of those common building blocks that have internal disk capacity. What happens next is the stuff legends are made of.

IBM Spectrum Accelerate forms those common building blocks and their disks into an iSCSI storage grid. Virtual volumes are spread across all the common building blocks so that all resources are used evenly, including memory in the servers, which is formed into a distributed cache. For robustness, each logical partition is stored in at least two copies on separate building blocks, so that if a part of a disk drive, an entire disk drive, or an entire building block fails, the data is still available. Since all resources in the grid are evenly utilized, all resources are engaged in quickly getting the system back to redundancy following a failure. If, after the initial deployment, the IT manager wants to scale capacity, Spectrum Accelerate software can be deployed in virtual machines on additional building blocks. When the new building block joins the grid, data is automatically redistributed to maintain optimal use of resources. And this software defined storage system, deployed in minutes, includes all the enterprise storage capabilities IT managers have come to expect – thin provisioning, writable snapshots, synchronous and asynchronous replication with consistency groups, and failover/failback.
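
To make the grid behavior a little more concrete, here is a minimal Python sketch of the general principle – logical partitions spread pseudo-randomly across building blocks in two copies, with a redistribution when a new block joins. The class, numbers, and node names are my own illustration, not the actual Spectrum Accelerate placement algorithm, which is far more sophisticated.

```python
import random

class ToyGrid:
    """Toy model of an even-distribution storage grid (not the real algorithm)."""

    def __init__(self, nodes, partitions=1000):
        self.nodes = list(nodes)
        self.partitions = partitions
        self.placement = {}  # partition id -> (primary node, secondary node)
        self._distribute()

    def _distribute(self):
        # Spread every logical partition across the grid, keeping the two
        # copies on different building blocks so one failure loses no data.
        for p in range(self.partitions):
            primary, secondary = random.sample(self.nodes, 2)
            self.placement[p] = (primary, secondary)

    def add_node(self, node):
        # When a new building block joins, rebalance so every node carries a
        # roughly even share. (A real system would move only the minimum
        # necessary data; this toy simply reshuffles everything.)
        self.nodes.append(node)
        self._distribute()

    def load(self):
        counts = {n: 0 for n in self.nodes}
        for primary, secondary in self.placement.values():
            counts[primary] += 1
            counts[secondary] += 1
        return counts

grid = ToyGrid(["block-1", "block-2", "block-3"])
print(grid.load())       # roughly even partition counts across three blocks
grid.add_node("block-4")
print(grid.load())       # still roughly even after the new block joins
```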

If you are an IT manager, consider the rapid flexibility and potential cost benefits of this software defined approach to constructing enterprise storage.

And join the conversation with #IBMStorage and #softwaredefined.

IBM on Software Defined Storage

I routinely follow a number of blogs by storage industry thought leaders. Among them is a usually insightful blog by EMC’s Chuck Hollis. Last Friday I read his post titled Software-Defined Storage – Where Are We? As Chuck described, the post was intended to explore “Where are the flags being planted? Is there any consistency in the perspectives? How do various vendor views stack up? And what might we see in the future?” The questions themselves captured my attention. First, they are great questions that everyone who is watching this space should want answered. Second, I wanted to see which vendors EMC was interested in comparing itself with. Notably missing from Chuck’s list was IBM, a vendor that has both a lot to say and a lot to offer on the subject of software defined.

I thought Chuck did a nice job in the sections of his post on Basic [Software Defined Storage] SDS Concepts and Towards a Superset of Characteristics. My only critique would be that he didn’t acknowledge some of the forward-leaning work being done in the space. For example, in the area of concepts he rightly observed of the past that “there is little consensus on what is software-defined storage, and what isn’t” but he failed to acknowledge the important work by the team at IDC in providing the industry with an unbiased nomenclature and taxonomy for software-based storage. See my post from a couple of months back on How do you define Software-defined Storage? Chuck also suggested that “the required technology isn’t quite there yet — but there are all signs that it’s coming along very quickly. By next year, there should be several good products in the marketplace to concretely evaluate.” That may be true for EMC and the rest of the vendors he chose to talk about, but by the end of this post I hope you will understand that when it comes to IBM, Chuck’s statement is several years behind.

The aim of software-defined

Software defined storage isn’t an end unto itself. It is a necessary piece in the evolution to a software defined environment (SDE), also referred to as a software defined datacenter (SDDC). I like IDC’s definition of what this is: “a loosely coupled set of software components that seek to virtualize and federate datacenter-wide hardware resources such as storage, compute, and network resources and eventually virtualize facilities-centric resources as well. The goal for a software-defined datacenter is to tie together these various disparate resources in the datacenter and make the datacenter available in the form of an integrated service…” IBM is one of the few vendors working in all the areas of software defined, and Jamie Thomas, Vice President and General Manager of Software Defined Systems, heads the division that coordinates that work.

Jamie thinks about SDE from the perspective of workloads and patterns of expertise that can help simplify operations, reducing labor costs and improving security. A software defined environment is also more responsive and adaptive as workloads expand from today’s enterprise applications to mobile, social, big data analytics and cloud. Her view is that open source and standards communities are crucial to the long-term viability of SDE. IBM’s work in software defined compute with the Open Virtualization Alliance and oVirt, our work in SDN with OpenDaylight, and our work in cloud with OpenStack are helping propel the construction of software defined environments.

IBM’s work in software defined storage

The words have morphed over time. What VMware did for Intel servers has been referred to as a hypervisor, as virtualization, and now is being called software defined compute to line up with the rest of the SDE vocabulary. The foundation of a software defined environment is, well, software that offers a full suite of services and federates physical infrastructure together to provide the basic commodity. In the case of VMware, the commodity is Intel megahertz. In the case of SDS, the commodity is terabytes.

IBM clients first began using these capabilities in 2003 with the IBM SAN Volume Controller software drawing its compute horsepower from commodity Intel processors and managing terabytes provided by federated disk arrays. That software base has since been renamed to the Storwize family software platform and given an expanded set of commodity engines to run on. Today, there are federating systems with no storage capacity of their own, systems with internal solid-state drives to speed the input/output (I/O) of other federated storage, and systems that carry their own serial attached SCSI (SAS) disk and flash capacity to augment other federated capacity. There are entry models, midrange models, enterprise models and even models that are embedded in the IBM PureSystems family converged infrastructure. For a more complete description of the suite of services offered, the breadth of physical storage that can be federated, and the I/O performance that can be enjoyed, see my post Has IBM created a software-defined storage platform? Over the last decade, this software platform has been referred to as virtualization, as a storage hypervisor, and now with a total capacity under Storwize software management on its way to an exabyte, we call it SDS v1.0.

SDS v2.0

SDS v2.0 came along early in 2012 with the introduction of IBM SmartCloud Virtual Storage Center (VSC). Building on the successful base of the Storwize family software platform, VSC added a number of important capabilities.

  • Service catalog: Administrators organize the suite of VSC storage services into named patterns – catalog entries. Patterns describe workload needs in terms of capacity efficiency, I/O performance, access resilience, and data protection. For example, a pattern for ‘Database’ might describe needs that translate to compressed, thin provisioned capacity on a hybrid flash and SAS pool, with a single direction synchronous mirror and load-balanced multi-path access. The beauty of the service catalog is that requestors (application owners or orchestrators as we’ll see shortly) don’t need to concern themselves with the details. They just need to know they need ‘Database’ capacity. (There’s a toy sketch of this idea right after this list.)
  • Programmable means of requesting services: VSC includes APIs that surface the service catalog patterns to portals and orchestrators. The questions that must be answered are quite simple. How much capacity do you need? In what service level do you need it? Who needs access? From there, storage-centric orchestration takes over and performs all the low-level, mundane tasks of satisfying the request. And it works on a wide variety of physical storage infrastructure. The VSC APIs have been consumed by an end-user accessible portal, SmartCloud Storage Access, and by higher level SDE orchestrators like SmartCloud Orchestrator.
  • Metering for usage-based chargeback: Service levels and capacity usage are metered in VSC. Metering information is made available to usage and cost managers like SmartCloud Cost Management so that individual consumers may be shown, or charged for, their consumption. Because VSC meters service levels as well as usage, higher prices can be established for higher levels of SDS service. Remember IBM’s perspective: we are building out an SDE, of which SDS is a necessary part. SmartCloud Cost Management follows that model, providing insight into the full spectrum of virtualized and physical assets.
  • Management information and analytics: When the challenges of day-to-day operations happen (and they do happen most every day), administrators need straightforward information surrounded by visually intuitive graphics and analytic-driven automation to speed decision making and problem resolution. Last year we introduced just this approach with SmartCloud Virtual Storage Center. I discussed it more thoroughly in my post Do IT managers really “manage” storage anymore? If you watch the news, you’ll know that IBM is leading a transformation toward cognitive computing. We’re not there yet with the management of SDS, but consider this scenario. You are an IT manager who has invested in two tiers of physical disk arrays, probably from different vendors. You have also added a third storage technology – a purpose-built flash drawer. You have gathered all that physical capacity and put it under the management of a software defined storage layer like the SmartCloud Virtual Storage Center. All of your workloads store their data in virtual volumes that SmartCloud Virtual Storage Center can move at will across any of the physical disk arrays or flash storage. Knowing which volumes to move, when, and where to move them is where SmartCloud Virtual Storage Center excels. Here’s an example. Let’s suppose there is a particular database workload that is only active during month-end processing. The analytics in SmartCloud Virtual Storage Center can discover this and create a pattern of sorts that has this volume living in a hybrid pool of tier-1 and flash storage during month end and on tier-2 storage the rest of the month. In preparation for month end, the volume can be transparently staged into the hybrid pool (we call it an EasyTier pool), at which point more real-time analytics take over, identifying which blocks inside the database are being accessed most. Only these are actually staged into flash, leaving the lesser-utilized blocks on tier-1 spinning disks. Can you see the efficiency? (The second sketch after this list shows this idea in toy form.)
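
To make the service catalog and programmable request ideas a little more concrete (the sketch referenced in the first bullet), here is a toy Python illustration. The pattern fields, pool names, and the request_capacity function are hypothetical names of my own, not the actual VSC API; the point is simply that a requestor asks for ‘Database’ capacity, the pattern carries the low-level details, and every request leaves behind a metering record a chargeback tool could consume.

```python
# Hypothetical illustration of a storage service catalog; not the real VSC API.
CATALOG = {
    "Database": {
        "pool": "hybrid-flash-sas",          # performance / efficiency choices
        "thin_provisioned": True,
        "compressed": True,
        "replication": "sync-single-direction",
        "multipath": "load-balanced",
    },
    "Archive": {
        "pool": "nearline-sas",
        "thin_provisioned": True,
        "compressed": True,
        "replication": "none",
        "multipath": "failover-only",
    },
}

USAGE_LOG = []  # metering records for usage-based chargeback

def request_capacity(pattern_name, size_gb, owner):
    """Answer the three questions: how much, in what service level, for whom."""
    pattern = CATALOG[pattern_name]
    # A real orchestrator would now create the volume, configure replication,
    # zone the SAN, and map the host; here we just record what was requested.
    volume = {"owner": owner, "size_gb": size_gb, **pattern}
    USAGE_LOG.append({"owner": owner, "pattern": pattern_name, "size_gb": size_gb})
    return volume

vol = request_capacity("Database", size_gb=500, owner="app-team-1")
print(vol["pool"])   # hybrid-flash-sas, chosen by the pattern, not the requestor
print(USAGE_LOG)     # metering data a cost manager could turn into chargeback
```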
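And here is an equally simplified sketch of the tiering scenario in the last bullet: watch per-volume activity through the month and place a volume in the flash-backed hybrid pool only for the days it runs hot. The threshold, pool names, and functions are my own assumptions for illustration – the real analytics work down at the block level and are far richer than this.

```python
# Toy tiering heuristic; the real EasyTier analytics are block-level and richer.
from collections import defaultdict

IO_HISTORY = defaultdict(list)   # volume name -> list of (day_of_month, iops)

def record_activity(volume, day_of_month, iops):
    IO_HISTORY[volume].append((day_of_month, iops))

def plan_tier(volume, day_of_month, hot_iops=5000):
    """Place a volume on the hybrid flash pool only for the days it runs hot."""
    hot_days = {day for day, iops in IO_HISTORY[volume] if iops >= hot_iops}
    return "hybrid-flash-pool" if day_of_month in hot_days else "tier2-pool"

# A month-end database: quiet most of the month, busy on days 28-31.
for day in range(1, 28):
    record_activity("finance-db", day, iops=300)
for day in range(28, 32):
    record_activity("finance-db", day, iops=20000)

print(plan_tier("finance-db", day_of_month=15))   # tier2-pool
print(plan_tier("finance-db", day_of_month=29))   # hybrid-flash-pool
```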

SDS v3.0

So where are we?

  • SDS v1.0 delivered. Software that offers a full suite of services and federates physical infrastructure.
  • SDS v2.0 delivered. A service catalog with a programmable means of accessing services, a portal and SDE cloud orchestration integration. Metering for usage-based chargeback and management information with analytics.

Where do we go from here? At IBM we’re busy opening up the Storwize family software platform for industry innovation, helping VSC become even more aware of application patterns, and progressing the notion of cognitive, analytic-driven decision making in SDS. Watch this space!

Users of IBM SDS speak

More than just theory and a point of view, IBM SDS is helping real customers. At the recent IBM Edge conference there were over 75 client testimonials shared, many of them about the benefits realized from using IBM SDS. I covered several of them in my post on Edge Day 2.

One of the coolest stories came earlier in the year at the IBM Pulse conference from IBM’s internal IT operations. IBM’s CIO manages 100 petabytes of data and, by leveraging SmartCloud Virtual Storage Center, was able to reduce costs by 50% with no impact to performance.

Did this help clarify IBM’s position in SDS?

Edge 2013 Day 2 – “I could have cried!”

In my post yesterday I mentioned that we heard the first of over 75 client testimonials being shared at IBM Edge 2013. Today, the client stories came fast and furious. Several caught my attention.

Sprint is a US telecommunications firm that has 90% of its 16 petabytes of SAN storage capacity under the control of software-defined storage – specifically the Storwize family software running on IBM SAN Volume Controller engines. Because of the flexibility of software-defined storage, Sprint was able to seamlessly introduce IBM FlashSystem capacity as a new MicroLatency tier and transparently move a call center workload to the new flash storage. The results were impressive: 45x faster access to customer records. That’s right, a 4,400% improvement!

eBay is both the world’s largest online marketplace and a company that offers solutions to help foster merchant growth. They are serious about open, collaborative solutions in their datacenters. When it comes to cloud, they use OpenStack. eBay implemented IBM XIV storage with its OpenStack Cinder driver integration and is now able to guarantee storage service levels to its internal customers.

Ricoh is a global technology company specializing in office imaging equipment, production print solutions, document management systems and IT services. All of their physical storage capacity is under the control of a Storwize family software-defined layer inside the IBM SmartCloud Virtual Storage Center. This enabled extreme efficiency, saving them 125 TB of real capacity, and tiering delivered a 40% cost reduction. As the Ricoh speaker left the stage, the IBM host asked an unscripted question: “Can you imagine running your IT without software-defined storage virtualization?” to which Ricoh responded, “No! It would be catastrophic.”

LPL Financial is the largest independent broker-dealer in the US. Their physical storage infrastructure was multi-vendor, isolated in islands, underutilized, and offered little administrative visibility. The inflexible nature of physical storage had isolated workloads on certain disk arrays even though excess capacity might exist elsewhere in the datacenter. LPL implemented SmartCloud Virtual Storage Center (built on a Storwize family software-defined layer) for their most problematic areas in just three months – three months! The seamless workload mobility provided by this software-defined storage approach solved issues like performance incident resolution, islands of waste, and the headaches associated with retiring old physical arrays. The quote of the day came from Chris Peek, Senior Vice President of Production Engineering at LPL Financial: “It was so good I could have cried!” LPL continued by building a new datacenter with a 100% software-defined storage infrastructure using SmartCloud Virtual Storage Center. Using software layer capabilities like tiering, thin provisioning and real-time compression, they were able to save an astounding 47% in total infrastructure.

Arkivum is a public archive cloud provider in Europe. They use tape media with the IBM Linear Tape File System as the foundation of an archive service that economically offers a 100% guarantee for 25 years. The thing that struck me is that in a storage industry that speaks in terms of five-nines, Arkivum is combining cloud and tape with a 100% guarantee.

There were others too. Kroger is a grocery chain in the US. They implemented IBM FlashSystem and reduced latency tenfold for their Oracle Retail platform. And CloudAccess.net is a cloud service provider that needed to drive 400,000 I/Os per second. They replaced a bank of disk drives with a single IBM FlashSystem drawer at one-tenth the cost.

I have to say that all the focus on client outcomes is refreshing. Sure, Edge has plenty of discussion around IBM’s strategy and the innovative technology being announced this week. But I agree with Kim Stevenson, Chief Information Officer at Intel, who said “Organizations don’t buy technology, they buy benefits.”

I’m sure there were other client stories shared in sessions that I missed. Share your favorite outcomes below. Leave a comment!

Edge 2013 Day 1 – Clouds and floods of Flash

After using Rainy days and sunshine to describe Day 0, Cloud and Flash seemed natural for Day 1. Not sure where the weather will lead tomorrow.

Day 1 of IBM Edge was action packed. With 65 new and refreshed products being announced and the first of over 75 client testimonials being shared, there was a lot of information to consume.

Here are the highlights that caught my attention.

Stephen Leonard is IBM’s General Manager for Global Markets and a current resident of the UK. He described his experience traveling to the US for Edge. Sitting at breakfast in his home, he used his mobile phone to select a seat on the airplane, check in and get his boarding pass. Leaving his house, the satnav in his car downloaded real-time traffic data for the London area and optimized his route to the airport. In the airport terminal he read a British newspaper on his tablet, and because he had been looking at real estate in the US the day before, the digital newspaper presented him with advertisements for realtors in Connecticut.

  • We are creating wider and wider trails of data that are as unique to an individual as fingerprints and DNA.
  • These data trails are also being created by man-made things like roads, railways, cities, and supply chains, as well as by natural things like rivers, wind, and cattle.
  • The world is being shaped by Big Data.
  • But today’s datacenters aren’t made for this kind of world.
  • He referenced an analyst study (I admit I didn’t catch the source) that suggested that in the last two decades the cost of IT administration has grown from less than one-third of the IT budget to over two-thirds, making investment in innovation difficult.

Bernie Meyerson is an IBM Fellow and Vice President of Innovation.

  • We will reach the density limits of silicon in 7-10 years.
  • The limits of magnetic recording are also approaching quickly.
  • Bernie showed the now-famous IBM Research video A Boy and His Atom to make the point that it’s important to know the physical limits of current technology and when they are coming, because massive investment is needed to come up with what’s next.
  • IBM spends about $6B annually in R&D and has been #1 in patent production for the last 20 years. Last year alone, IBM was awarded 6,478 patents.

Kim Stevenson is the Chief Information Officer at Intel.

  • We live in a sharing economy. We share pictures on Instagram, music on Pandora, bicycles with Citi Bike, cars with Zipcar, and vacation rentals with Airbnb.
  • As we move forward, most IT will be delivered in a shared model. Public, private, and hybrid clouds.
  • Stephen Leonard added IBM’s point of view that important innovation will also be shared. He pointed to IBM’s strong involvement in and support of Linux, Eclipse, Apache, and now OpenStack and Hadoop.

Ed Walsh is IBM’s Vice President of Storage Systems Marketing and Strategy.

  • You know about virtualizing servers and the benefits that led you and your peers to adopt it broadly for your compute infrastructure. Imagine if you could achieve the same benefits by virtualizing your storage infrastructure.
  • This is the promise of software-defined storage (SDS). The good news is that it is here today.
  • SDS v1.0 is virtualization of physical storage infrastructure, regardless of your choice in hardware vendor. IBM was delivering this in the Storwize family software platform before the industry started calling it SDS.
  • SDS v2.0 is making that platform open and extensible, kicking off an era of industry innovation. This is also here today.
  • SDS v3.0 adds analytic- and application-driven patterns – hints provided to the SDS platform through open APIs – enabling it to adapt and optimize services to the workload.
  • I understand from other attendees that there was standing room only in two of the Technical Edge sessions in this area. Performance optimization expert and Master Inventor Barry Whyte discussed the history of the Storwize family software platform and product strategist Jason Davison talked about today’s SmartCloud Virtual Storage Center packaging of that technology. If you were in either of those sessions, please leave a comment below with your perspective.

TSM Operations Center

  • The much-anticipated Tivoli Storage Manager Operations Center was announced today.
  • Reports were that the session with product manager Xin Wang on the TSM Operations Center was also standing room only. If you were in that session, please leave a comment below with your perspective.