Elegant flexibility – IBM Spectrum Virtualize and the SAN Volume Controller


Strategic storage choices are more about the software than the hardware it happens to be running on at the moment. Do you agree?

In my post Change in the storage industry is coming – and it’s good! I shared why I think a trend toward radical simplification is starting in the storage industry. IBM started the trend, and others are sure to follow in bids to remain competitive. I’m continuing the discussion of what IBM has done. For other available posts in the series, look here.

With its February 2020 announcement of Storage Made Simple for Hybrid Multicloud, IBM introduced a strategic storage software platform, IBM Spectrum Virtualize, and an AI-infused management service, IBM Storage Insights, that operate on a family of enterprise storage systems spanning from entry to midrange to high-end, all engineered with the best from IBM Enterprise Design Thinking(1) to deliver great customer experiences. Spectrum Virtualize and Storage Insights can also be applied to over 500 heterogeneous on-premises storage systems and multiple public clouds to give IT managers consistency in storage operations across their hybrid multicloud. 

Discussions about storage systems and their capabilities are often compartmentalized – here’s one storage system and what it does, here’s another storage system and what it does differently, and so on. The radically simple idea IBM has introduced is that, with Spectrum Virtualize and Storage Insights, all the IBM FlashSystem storage from entry enterprise through high-end enterprise, all the other block storage you might have from other vendors, and the block storage you might use on multiple public clouds can behave in a consistent manner. It’s an elegant approach. To keep these posts consumable in length, I’m going to introduce one of the platforms and a few of the consistent software capabilities in each post. A good place to start is with the IBM SAN Volume Controller (SVC).

Meet the IBM SAN Volume Controller with Spectrum Virtualize

Now in its tenth generation, SVC has been bringing the power of a strategic software foundation to over 500 on-premises storage systems from a wide mix of vendors, and clients have been deploying Spectrum Virtualize on it for over a decade. If you are interested in modernizing the storage you already have, this is a good starting point.

Like all storage based on Spectrum Virtualize…

These systems make your storage simple to scale.

One of the enemies of storage efficiency is stranded pools of capacity.  And an enemy of operational efficiency is a heterogeneous storage infrastructure. SAN Volume Controller helps defeat both enemies.

A single SAN Volume Controller cluster can combine up to 32 petabytes of storage capacity from over 500 different heterogeneous storage systems into a single pool of capacity. That capacity is presented as a single system, has a single point of management, and, importantly, a single set of APIs and procedures. Think about that for a moment. You have the freedom to maintain a heterogeneous storage infrastructure, creating a little healthy competition between vendors, without the operational headaches that usually follow. And you can apply advanced features like encryption, data reduction, enhanced high availability, and connection to the cloud across all your storage, even if it’s an older model that predates that sort of capability. Then, when it comes time to add more capacity or shift your vendor strategy, no big deal. Just insert the new and remove the old. Your data can be migrated without disruption and your operations remain consistent. It’s that simple.
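If you’re curious what “a single set of APIs and procedures” looks like in practice, here’s a rough sketch of a vendor-swap migration driven from the Spectrum Virtualize command line over SSH. Treat it as illustrative only: the cluster address and object names are invented, and the command spellings (lsmdiskgrp, migratevdisk) are from memory, so check them against the CLI reference for your code level.

```python
import subprocess

SVC = "admin@svc-cluster.example.com"  # hypothetical cluster address

def svc(cmd: str) -> str:
    """Run one Spectrum Virtualize CLI command over SSH and return its output."""
    result = subprocess.run(["ssh", SVC, cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout

# One view of every pool, no matter which vendor's array backs it.
print(svc("lsmdiskgrp -delim ,"))

# Non-disruptively migrate a volume off an old array's pool onto a new one.
# Hosts keep doing I/O to the same volume for the entire move.
svc("migratevdisk -mdiskgrp NEW_VENDOR_POOL -vdisk app_volume_01")
```

The point isn’t the two commands; it’s that the same two commands work whether the capacity underneath came from IBM or any of the other 500-plus supported systems.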

SAN Volume Controller makes your storage simple to scale

These systems make your storage simply operationally resilient.

Ensuring your data stays available to your applications is the primary function of storage. When you are operating storage with Spectrum Virtualize, you’re covered. In fact, for the most stringent requirements, IBM guarantees it.

Regardless of your choice in storage hardware vendors, Spectrum Virtualize offers a consistent approach to traditional 2-site and 3-site replication configurations using your choice of synchronous or asynchronous data communication. There is also IBM HyperSwap. When it is properly deployed by IBM Lab Services, IBM guarantees 100% data availability. That’s not five-9s (99.999%) or six-9s (99.9999%); it’s 100% availability.
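To put those nines in perspective, here’s a quick back-of-the-envelope calculation – plain Python, nothing product-specific – showing the downtime per year each availability level implies:

```python
# Downtime per year implied by common availability levels.
HOURS_PER_YEAR = 24 * 365

for label, availability in [
    ("five-9s (99.999%)", 0.99999),
    ("six-9s (99.9999%)", 0.999999),
    ("guaranteed (100%)", 1.0),
]:
    downtime_min = (1 - availability) * HOURS_PER_YEAR * 60
    print(f"{label}: ~{downtime_min:.1f} minutes of downtime per year")
```

Five-9s still allows a bit over five minutes of downtime a year, and six-9s about thirty seconds. The guarantee is that you get neither.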

Finally, Spectrum Virtualize running on the SVC offers unique enhanced high-availability (HA) configurations that deliver zero-impact failover across two, three, or four sites, plus the possibility of fifth-site replication that can include public cloud.

SAN Volume Controller makes your storage simply operationally resilient

Check back for the next post where I’ll introduce FlashSystem high-end enterprise storage and a few more of the Spectrum Virtualize capabilities found across the hybrid multicloud. 

What do you think? Are strategic storage choices more about the software than the hardware it happens to be running on at the moment? Join the conversation – leave a comment!

(1) IBM Enterprise Design Thinking is a best practice that IBM teaches and certifies practitioners on across the industry. It is the method we use to ensure our offerings deliver great experiences to our customers.

Software Defined Storage Use Case – Block SAN Storage for Traditional Workloads

In my last post, IBM Spectrum Storage Suite – Revolutionizing How IT Managers License Software Defined Storage, I introduced a simple and predictable licensing model for most all the storage needs an IT manager might have. That’s a pretty big concept if you think about all the storage use cases an IT manager has to deal with.

  • Block SAN storage for traditional workloads
  • File storage for analytic or Big Data workloads
  • Object storage for cloud and mobile workloads
  • Scale-out block storage for VMware datastores
  • Storage for housing archive or backup copies

Just to name a few… The idea behind software defined storage is that an IT manager optimizes storage hardware capacity purchases for performance, environmentals (like power and space consumption), and cost. Then he ‘software defines’ that capacity into something useful – something that meets the needs of whatever particular use case he is trying to deal with. But is that really possible under a single, predictable software defined storage license? The best way I can think of to answer the question is to look at several of the most common use cases we see with our clients.

Perhaps the most widely deployed enterprise use case today is block SAN storage for the traditional workloads that all our businesses are built on – databases, email systems, ERP, customer relationship management and the like. Most IT managers know exactly what kind of storage capabilities they need to deploy for this use case. It’s stuff like:

  • Snapshots and synchronous or asynchronous replication
  • Storage tiering and non-disruptive data migration
  • Thin provisioning and real-time compression
  • Multi-site high availability with failover and failback

Here’s the thing… This use case has been evolving for years and most IT managers have it deployed. The problem isn’t that the capabilities don’t exist. The problem is that the capabilities are most often tied directly to a specific piece of hardware. If you like a particular capability, the only way to get it is to buy that vendor’s hardware. It’s a hardware-defined model, and you’re locked in. With IBM Spectrum Storage, IBM has securely unboxed those capabilities with software defined storage, and you have complete flexibility to pick whatever hardware vendor or tier you like. The idea of software defined changes everything. With the software securely unboxed from the hardware, you really are free to choose whatever hardware you want from most any vendor you like. And since the software can stay the same even while the hardware is changing, you don’t pay any operational or procedural tax when you make those changes.

All of the capabilities mentioned above for addressing this Block SAN storage for traditional workloads use case can be accomplished with one IBM Spectrum Storage Suite software license. This may be the most widely deployed use case today, but it’s not the fastest growing use case. In my next posts, I’ll continue looking at the wide variety of use cases that are covered by the simple, predictable IBM Spectrum Storage Suite software defined storage license.

Are you interested in taking the first step with software defined storage? Contact your IBM Business Partner or sales representative. And join the conversation with #IBMStorage and #softwaredefined.

IBM Spectrum Storage Suite – Revolutionizing How IT Managers License Software Defined Storage

About a year ago IBM shook up the software defined storage world with IBM Spectrum Storage. It was the industry’s first complete family of software defined storage offerings that could drive cost efficiency and total management across most all the needs an IT manager might have:


  • Software defined storage for traditional workloads like database, email and ERP systems as well as new-gen workloads like analytic, mobile, Big Data, cloud and Cognitive business.
  • Capabilities needed for primary data as well as backup or archive copies.
  • Access via block, file and object protocols.
  • Operating on heterogeneous storage hardware from most any vendor – traditional SAN storage infrastructure as well as newer scale out infrastructure with storage-rich servers.
  • …all nicely integrated with a consistent set of interfaces and vocabulary.

 

In a world where much of this type of capability had been tied to some piece of hardware – hardware defined storage – IBM had securely unboxed storage and forever changed storage economics. In the last year, over 2,000 brand new clients have started with IBM Spectrum Storage. Through those conversations, we’ve learned two important things about how enterprises are approaching software defined storage – things that have led us to revolutionize how software defined storage is licensed.

  1. CFOs are exasperated with the unpredictability of storage and storage software licensing. IT managers generally have a good feel for what type of capabilities they need to accomplish the use cases they want. But have you ever paused to think about how confusing it must be for them to figure out exactly how much software they need to license? Think about it from the perspective of someone not using IBM Spectrum Storage.
    • SAN virtualization software can be licensed with a per-frame base plus a per-TB-managed charge
    • Storage resource management software can be licensed per system and per tiered TB
    • Backup software can be licensed per operating platform or application, plus a per-TB charge
    • Data stores for virtual machines can be licensed per server and per VM
    • Deduplication software can be licensed per system
    • …and the list can go on. Even if the IT manager can figure out how much of this stuff he wants at a given point in time, how is a CFO supposed to predict future costs?
  2. IT managers are dealing with transition in storage infrastructure. What software they need can shift rapidly. Most IT managers reading this post are responsible for estates of SAN storage. For the traditional workloads that our businesses are built on, this has been the dominant storage infrastructure approach for years. But there is a transition well underway. Who hasn’t been impacted by newer mobile and cloud workloads? What company or organization isn’t looking to make better use of Big Data and analytic workloads? These applications are being built to take advantage of a different kind of storage infrastructure, one that is characterized by direct-attach JBODs and fields of servers with lots of internal capacity, and really none of it connected to a SAN. IDC data suggests that IT managers are now purchasing more TBs of this type of storage than they are SAN storage, and the trend isn’t likely to ever reverse. In many enterprises, the capacity mix is shifting from traditional SAN to newer storage-rich servers. The transition presents a challenge in storage software licensing. When physical infrastructure shifts from big SAN-attached storage systems to scale-out storage-rich servers, the types of capabilities needed don’t change dramatically, but the specific vendor software packages change a lot. Instead of a virtualizer for block SAN storage, an IT manager might need to shift toward offerings that software define his field of storage-rich servers into something useful like performance-optimized file storage for analytics, an integrated VMware block data store, or cost-optimized object storage for backup and archive. This is all new and the rate of change is unknown – and that presents a challenge. Flexibility is paramount.

With IBM Spectrum Storage Suite, IT managers now have a simple and predictable licensing model for the entire IBM Spectrum Storage family. Its straightforward per-TB pricing relates costs to the amount of storage capacity being software-defined, regardless of use case. That makes it easy for IT managers to grow and transition how they use storage, and for CFOs to predict costs. And with the Suite, clients can save up to 40% compared with licensing all capabilities separately.
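Here’s a toy illustration of why the per-TB model is predictable. Every number below is made up – the per-TB rate, the point-product rates, and the capacity – and only the structure of the calculation reflects the licensing model:

```python
# Hypothetical numbers for illustration only.
capacity_tb = 500            # total capacity being software defined
suite_rate_per_tb = 100      # flat per-TB suite rate (invented)

# A-la-carte point products, each with its own metric (rates invented).
a_la_carte_rates = {
    "SAN virtualization (per TB managed)": 40,
    "Storage resource management (per tiered TB)": 45,
    "Backup (per TB protected)": 55,
    "VM data store (normalized to per TB)": 30,
}

suite_cost = suite_rate_per_tb * capacity_tb
separate_cost = sum(rate * capacity_tb for rate in a_la_carte_rates.values())

print(f"Suite:      ${suite_cost:,}")   # one rate x capacity, any use case
print(f"Separately: ${separate_cost:,}")
print(f"Savings:    {1 - suite_cost / separate_cost:.0%}")
```

The CFO’s forecast collapses to one question – how many TBs will we software define next year? – instead of a handful of different metrics across point products.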

Consider a typical enterprise starting with its existing SAN storage infrastructure but rapidly growing a new kind of infrastructure for new workloads. In most datacenters this transition is coming, but few really understand how fast or exactly which use cases will emerge. There’s going to be some experimentation and rapid change.

Attempting to navigate the next few years using à la carte licenses of point products from multiple vendors is going to be difficult. In fact, CFOs are going to push back against the unpredictability and may ration what software can be licensed. That can slow down innovation. IBM Spectrum Storage Suite offers cost predictability and frees IT managers to exploit any IBM Spectrum Storage capability required to get the job done.

Let’s suppose you are an IT manager at the front end of this picture. You’ve deployed block SAN storage for traditional workloads as your first IBM Spectrum Storage Suite use case. Now you want to explore another use case. Well, with IBM Spectrum Storage Suite you already own entitlement to all capabilities in the IBM Spectrum Storage family, so you are free to download any of the software you like. To help you quickly adopt the additional use cases your business may need, IBM Spectrum Storage Suite licensing lets you perform extended tests in an evaluation sandbox without additional charge. So go ahead, experiment with your next use case. Prove it, become familiar with it, and pay for it when it’s deployed for productive use.

Are you interested in taking the first step with software defined storage? Contact your IBM Business Partner or sales representative. And join the conversation with #IBMStorage and #softwaredefined.

How software defined is changing storage economics

About a year ago, IBM’s bold move into software defined storage changed how IT looks at storage. The introduction of IBM Spectrum Storage incorporated more than 700 patents and was backed by plans to invest more than $1 billion over the next five years toward propelling the use of software defined storage in any form – as software, appliance or cloud service.

To better understand why this is redefining the economics of storage and helping IT optimize cost, performance and security, consider some of what IBM Spectrum Storage is bringing to the IT landscape.

Introducing…
In February, IBM introduced the IBM Spectrum Storage family, the first comprehensive family of software defined storage offerings that can centrally manage yottabytes of data on more than 300 different storage devices.
(Related: Introducing IBM Spectrum Storage – Inside Perspective)

Efficiency and speed
As workloads like cloud, analytic, mobile, social, and Big Data began affecting scale, performance and cost requirements, it became clear that traditional IT infrastructures had to change. Software defined can be used with common building block storage to construct file and object systems built for efficiency and optimized for speed.
(Related: IBM Spectrum Scale – Built for Efficiency, Optimized for Speed)

Implemented in minutes
The same dynamic is also at work in the block storage that dominates today’s enterprise datacenters. The genius of software defined storage is that the complete set of enterprise storage capabilities available on high end arrays is now available for IT managers to leverage on common building block hardware.
(Related: IBM Spectrum Accelerate – Enterprise Storage in Minutes)

SAN storage efficiency doubled
Despite the rapid growth in newer cloud, analytic, mobile, social and big data workloads, more than half of the worldwide spend is still on traditional SAN storage, the choice for more traditional workloads like transaction systems, email, supply chain, HR and virtual servers. Software defined storage can help IT managers gain a great deal of efficiency in this part of the data center.
(Related: IBM Spectrum Virtualize – Traditional SAN Storage at Twice the Efficiency)

Intelligent analytics
Whether it’s the traditional workloads we have all grown up with or the new generation workloads, it’s well understood that there is a mismatch between data growth and the budgets that are allocated to deal with the problem. Software defined storage can provide IT managers with intelligent analytics for managing storage.
(Related: IBM Spectrum Control – Intelligent Analytics for Managing Storage)

Dramatic cost reduction
A looming question for many IT managers is just how efficiently the job of data protection can be done. They want to minimize the budget for data copies so they can shift investment to new business growth initiatives. Software defined storage can be, on average, 38 percent more efficient.
(Related: IBM Spectrum Protect – Crash Diet for Your Data Protection Budget)

Ultra-low cost and flexibility
Data growth is being fueled by new workloads and their seemingly insatiable need for data to process. But in many enterprises, even more of the data growth is the result of simply keeping data around – stuff like regulatory archives you have to keep and asset archives you just want to keep. Software defined storage can balance the convenience of online access with ultra-low cost and flexibility.
(Related: IBM Spectrum Archive – Ultra-low Cost Storage for Retaining Data)

How is software defined changing your approach to storage? Connect with me on Twitter at @RonRiffe, and join the conversation with #IBMStorage and #SoftwareDefined.

Originally posted June 1, 2015 on the IBM SmarterComputing blog

IBM Spectrum Control – Intelligent Analytics for Managing Storage

There is a lot that gets said about the huge data growth IT managers face both from traditional workloads we have all grown up with and from new generation workloads like mobile, social, big data and analytics. It’s well understood that there is a mismatch between this growth and the budgets that are allocated to deal with the problem. For years the dominant conversation has been on lowering the cost of the raw capacity and on packing it with as much data as possible. There’s certainly a lot to be gained from lower-cost physical infrastructure, tiering, and technologies like thin provisioning, compression and deduplication. But as petabyte storage farms become commonplace and workloads become more sensitive to service levels, the job of balancing efficiency with performance, and performance with workload requirements becomes much more intense. That’s where IBM Spectrum Control excels – providing IT managers with intelligent analytics for managing storage.

Cost reduction and optimization

Most modern storage systems include tools that offer administrators a view of what’s going on under their covers – health of the system, performance of the system, utilization of the system, and so on. Two challenges arise for administrators responsible for storage estates of any consequential size.

  1. Consolidating a view of all storage: Datacenter storage is often a mix of vendors, tiers, and protocols like block and file. Tools included with individual storage systems don’t offer a broad enough perspective. Imagine being able to scan the complete environment and quickly identify capacity that isn’t yet allocated to a workload, or capacity that is allocated but hasn’t seen any I/O activity in the last month (see the sketch after this list). You could reallocate this idle capacity to maximize utilization. Imagine in that same scan identifying capacity that is working hard for you – and being able to forecast its future growth. No more guesswork. Imagine being able to monitor performance regardless of what storage tier or vendor you chose. You could apply that historical usage knowledge to tiering decisions and reduce cost.
  2. Managing consumer-oriented service levels: Businesses are interested in service levels for applications and business lines, not for pieces of hardware on the datacenter floor. Health, capacity, utilization, tiering… these are certainly all interesting at the storage system level, but their importance to the business is highlighted in more consumer-centric groupings like applications or business lines. Imagine being able to manage storage service levels – regardless of what hardware is in use – for an application like SAP or a business line like Corporate Accounting. When the financial quarter close is running (as it is at IBM at the time of this writing), you could show how the storage infrastructure associated with that business line is behaving.
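Here’s a toy sketch of the first idea, the idle-capacity scan. The volume records are invented stand-ins for what a consolidated, multi-vendor view would actually report:

```python
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    array: str                # any vendor's array
    capacity_tb: float
    allocated: bool           # assigned to a workload?
    days_since_last_io: int   # from performance history

# Invented inventory standing in for a scan of the complete environment.
inventory = [
    Volume("erp_data",   "VendorA-7000", 12.0, True,  0),
    Volume("old_lun_42", "VendorB-900",   8.0, True,  45),
    Volume("spare_03",   "VendorA-7000",  4.0, False, 999),
]

# Flag capacity that is unallocated, or allocated but idle for a month.
idle = [v for v in inventory
        if not v.allocated or v.days_since_last_io > 30]

reclaimable = sum(v.capacity_tb for v in idle)
print(f"Reclaimable: {reclaimable} TB ({', '.join(v.name for v in idle)})")
```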

We think this level of cost reduction and optimization should be quickly available to all storage administrators – whether they have deployed software defined storage like IBM Spectrum Virtualize or IBM Spectrum Scale, or are still operating with traditional hardware-centric arrays. That’s why we’re making IBM Spectrum Control available as a Software-as-a-Service offering called Storage Insights. Take a look and learn about the beta.

Intelligent Analytics

Clients who prefer to deploy Spectrum Control software on premises have the added opportunity to exploit advanced analytics for optimizing cost (one of my personal favorites). Here’s the scenario.

Suppose you are one of those IT managers I described above who are tasked with using multiple tiers of storage to balance efficiency with performance, and performance with workload requirements.  You’ve got a substantial storage estate so the prospects seem overwhelming. You’ve deployed Spectrum Control and you’re about to experience the value of analytics first hand.

For this example, let’s suppose you have three pools of tier-1 storage and one pool of tier-2 storage that you want to analyze for re-tiering. Spectrum Control discovers that one of the volumes in the tier-2 pool is overutilized. If it is moved to a tier-1 pool with sufficient performance capacity, the performance of the volume can be improved.

The performance of the target pools is then analyzed and recommendations are generated. The recommendations involve up-tiering the volume from the tier-2 pool to a tier-1 pool. You can review the recommendations or leverage the transparent data migration capability of Spectrum Virtualize to automatically move the volume to the tier-1 pool. Using a similar analysis, Spectrum Control can make recommendations to down-tier underutilized volumes that are occupying more expensive storage than is necessary. A single tier analysis can result in multiple volume movements in which volumes are moved to both lower and higher storage tiers. You can also schedule analysis tasks to run at specified intervals so you regularly monitor opportunities for re-tiering.

Another form of optimization is balancing. Pools in the same tier can have both low and high activity levels, but your goal might be to keep all the pools in a given tier close to the same utilization. Spectrum Control can identify the average utilization for a tier and the specific pools that deviate from that utilization by, say, more than 10%. By analyzing pools on the same tier, Spectrum Control identifies opportunities to move volumes and optimize overall utilization of your storage assets. Again, when used in concert with Spectrum Virtualize, these volumes can be moved transparently.

That’s intelligent analytics for managing storage. If you are an IT manager responsible for making the most of your storage investment, consider IBM Spectrum Control and Storage Insights.
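The balancing rule at the end of that walkthrough is simple enough to sketch. This isn’t Spectrum Control’s actual algorithm – just the more-than-10%-deviation idea from the text, expressed in Python with invented utilization numbers:

```python
# Invented pool utilizations (%) for a single storage tier.
tier1_pools = {"pool_a": 82.0, "pool_b": 41.0, "pool_c": 60.0}

DEVIATION_LIMIT = 10.0  # flag pools more than 10 points off the tier average

average = sum(tier1_pools.values()) / len(tier1_pools)

for pool, util in sorted(tier1_pools.items()):
    deviation = util - average
    if abs(deviation) > DEVIATION_LIMIT:
        action = "move volumes OFF" if deviation > 0 else "move volumes ONTO"
        print(f"{pool}: {util:.0f}% vs tier average {average:.0f}% -> {action}")
```

Everything flagged becomes a candidate volume movement, and when Spectrum Virtualize sits underneath, those movements can happen transparently.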

And join the conversation with #IBMStorage and #softwaredefined.

IBM Spectrum Virtualize – Traditional SAN Storage at Twice the Efficiency

There’s a great deal of buzz these days around newer cloud, mobile, social, analytic and Big Data workloads whose storage requirements are causing IT managers to re-think storage infrastructure. But what about the more traditional workloads like transaction systems, email, supply chain, HR, and virtual servers? According to the most recent IDC Worldwide Quarterly Disk Storage Systems Tracker, IT managers continue to spend more on traditional SAN storage – the infrastructure of choice for these workloads – than on NAS, IP SAN and direct-attach arrays combined. With over half of the worldwide spend on external storage in play, there’s a lot of efficiency to be gained. IBM Spectrum Virtualize is software defined storage that can deliver twice the efficiency on existing SAN infrastructure.

It can be argued that IBM Spectrum Virtualize is the product that gave birth to the software defined storage movement. I started working with this product back in 2003 (at the time it was called IBM SAN Volume Controller), before the term Software Defined Storage had been coined. A couple of years ago, as folks were starting to talk about the concept of software defined, I posed a mostly rhetorical question: Has IBM created a software-defined storage platform? IBM Spectrum Virtualize embodies the concept and is a cornerstone of the IBM Spectrum Storage software defined family.

In every geography of the world, across every industry and size of enterprise, there are IT managers who have built SAN storage infrastructures with EMC, or HP, or Hitachi, or IBM, or any of a number of other vendors, but who have also software defined those infrastructures with IBM Spectrum Virtualize and are enjoying extraordinary benefits. It’s really not very complicated to understand where the savings come from.

When you choose to software define your SAN infrastructure with IBM Spectrum Virtualize:

  • You save money on software licensing costs. SAN storage vendors often charge you for advanced capabilities like snapshot, replication, tiering, and even device drivers. Choosing to software define means you don’t need to pay for these with your SAN storage.
  • You pack more data onto the physical disks you own. SAN disk arrays have boundaries. They are individually wrapped in sheet metal, meaning workloads attached to one array can run out of space while workloads attached to another have an overabundance of capacity. Choosing to software define means the boundaries disappear. Capacity is pooled together and overall utilization is improved. What’s more, IBM Spectrum Virtualize compresses data in real time, allowing you to pack as much as 5x the data in the same capacity footprint. This validation from ESG was published about the time the industry was figuring out the idea of software defined.
  • You can choose lower cost disk arrays. SAN storage comes in various flavors. Generally, the higher priced physical arrays are where the enterprise-class services are found. Many datacenters have an overabundance of these high-end disk arrays simply because their workloads need the more robust services. Choosing to software define decouples the services from the physical storage. With IBM Spectrum Virtualize, you have access to the same enterprise services regardless of what storage tier or vendor you choose for capacity. And Spectrum Virtualize can transparently move data up and down those tiers to keep I/O patterns optimized. In a coming post I’ll talk about the intelligent analytics from IBM Spectrum Control that can help optimize your tiering choices and timing.
  • You can adopt new storage technology more quickly. There’s always something new in SAN storage. More dense disk drives resulting in lower cost per terabyte. Flash storage with much higher throughput. Integrating these new technologies into your SAN and getting your data migrated has been a perennial challenge for IT managers for as long as I’ve been in the storage industry. Choosing to software define eliminates the problem. IBM Spectrum Virtualize can non-disruptively move data across tiers, vendors, and technologies of SAN storage. If you find a new storage type you want to exploit, plug it into your SAN and tell Spectrum Virtualize to move data to it. If you have an old array going off lease, add the replacement to your SAN and tell Spectrum Virtualize to move the data. Software defined means these activities happen without interruption to workloads that are up and running, accessing the data.
  • You can implement ultra-high availability using any type of physical storage. Datacenters with the most demanding availability requirements have traditionally been shackled to the most expensive SAN disk arrays because that’s where the most robust replication and failover/failback capabilities were offered. Choosing to software define means you can implement those capabilities on lower cost physical storage. With IBM Spectrum Virtualize, you can implement multi-site ultra-high-availability configurations, including sites at distance that operate as an active-active pair. The benefit of software defined is that this can be done regardless of your choice in vendor or tier of storage.

If you are an IT manager responsible for traditional SAN infrastructure, consider the common functionality, management, and mobility across heterogeneous storage types that can come from software defining your SAN.

And join the conversation with #IBMStorage and #softwaredefined.

IBM Spectrum Accelerate – Enterprise Storage in Minutes

For years, enterprise datacenters have been dominated by traditional disk arrays, things IDC’s Worldwide Quarterly Disk Storage Systems Tracker would categorize as High-end and Midrange disk. Now, I don’t have anything against this kind of storage, in fact my company makes some of the most competitive offerings in these categories. But my last two posts Introducing IBM Spectrum Storage – Inside Perspective and IBM Spectrum Scale – Built for Efficiency, Optimized for Speed have centered on the idea that a shift is being forced by the scale, performance, and cost requirements of workloads like cloud, analytic, mobile, social, and Big Data. The storage industry needs a new way of doing things that utilizes much lower cost common building blocks with almost unlimited increments in scalability. I’m going to stay on that theme one more time as we talk about IBM Spectrum Accelerate, software defined storage that can help you construct enterprise storage in minutes.

The kind of datacenters being built by both enterprises and the public cloud pioneers for newer workloads are filled with much lower cost common building blocks. Imagine fields of identical computers each with some internal disk capacity. Cloud management software based on OpenStack dynamically schedules workloads onto hypervisors across this field of capacity. Workloads appear, storage is allocated, virtual networks are configured, workloads are dynamically scaled and relocated for optimization, and then they disappear when all the work is complete. This environment is hostile to the relative rigidity of traditional SAN storage. And yet it depends on many of the enterprise storage capabilities that SANs have matured over the years.

The genius of IBM Spectrum Accelerate is that it takes the complete set of enterprise storage capabilities available on one of the industry’s most competitive High-end disk arrays and enables IT managers to leverage it on common building block hardware. You see, if you were to pull back the covers on the IBM XIV storage system, you would discover that the forward-thinking engineers who architected it understood the demands that these newer workloads would bring. Inside the XIV you would find common building blocks – computers with disks – and intelligent software managing them. IBM is the first company in the industry to extract intelligence directly from its traditional storage hardware products, enabling clients to use it as software.

Think about the scenario. An IT manager has created a common building block physical infrastructure for new generation workloads. No SAN, no High-end or Midrange disk arrays. Just a field of low-cost, common building block computers – some with internal disk capacity – and a hypervisor. Workloads are being deployed in virtual machines, but these workloads need enterprise storage services. So the IT manager deploys IBM Spectrum Accelerate software into virtual machines on some of those common building blocks that have internal disk capacity. What happens next is the stuff legends are made of.

IBM Spectrum Accelerate forms those common building blocks and their disks into an iSCSI storage grid. Virtual volumes are spread across all the common building blocks so that all resources are used evenly, including memory in the servers, which is formed into a distributed cache. For robustness, each logical partition is stored in at least two copies on separate building blocks, so that if a part of a disk drive, an entire disk drive, or an entire building block fails, the data is still available. Since all resources in the grid are evenly utilized, all resources are engaged in quickly getting the system back to redundancy following a failure. If, after the initial deployment, the IT manager wants to scale capacity, Spectrum Accelerate software can be deployed in virtual machines on additional building blocks. When the new building block joins the grid, data is automatically redistributed to maintain optimal use of resources. And this software defined storage system, deployed in minutes, includes all the enterprise storage capabilities IT managers have come to expect – thin provisioning, writable snapshots, synchronous and asynchronous replication with consistency groups, and failover/failback.
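Here’s a toy model of that placement rule – partitions spread evenly across the building blocks, with the two copies of any partition always on different blocks. It’s a sketch of the concept, not XIV’s actual distribution algorithm:

```python
import random

def place(num_partitions, nodes):
    """Give each partition a primary and secondary copy on two distinct nodes."""
    return {p: tuple(random.sample(nodes, 2)) for p in range(num_partitions)}

nodes = ["block1", "block2", "block3"]
placement = place(10_000, nodes)

# Evenness check: every building block holds roughly the same number of
# copies, so every block shares the I/O load and the rebuild work after
# a failure.
load = {n: 0 for n in nodes}
for primary, secondary in placement.values():
    load[primary] += 1
    load[secondary] += 1
print("copies per block:", load)

# A new building block joining the grid means re-placing over a larger node
# list - a stand-in for the automatic redistribution described above.
placement = place(10_000, nodes + ["block4"])
```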

If you are an IT manager, consider the rapid flexibility and potential cost benefits of this software defined approach to constructing enterprise storage.

And join the conversation with #IBMStorage and #softwaredefined.

IBM Spectrum Scale – Built For Efficiency, Optimized For Speed

The pioneers in the cloud movement have blazed a new trail when it comes to physical infrastructure. I briefly discussed their Pets vs Cattle approach in my post Introducing IBM Spectrum Storage – Inside Perspective. These guys thought about the scale, performance, and cost requirements that would be driven by workloads like cloud, analytic, mobile, social, and Big Data, and they decided that traditional infrastructures had to change. We needed a new way of doing things that utilized much lower cost common building blocks with almost unlimited increments in scalability. Something built for efficiency and optimized for speed. A software defined storage layer that crafts these common storage building blocks into massively scalable and deeply capable storage systems is IBM Spectrum Scale.

To do the job, this layer of storage software must have a couple of important attributes.

  1. Software defined storage has to cobble together common building blocks into a reliable storage system that quickly delivers data to workloads over a variety of protocols. The technology behind IBM Spectrum Scale began in IBM research over 20 years ago and earned its stripes with some of the most demanding scientific, digital media, and Big Data workloads on the planet. Today, it’s the software defined storage layer behind many of the 500 most powerful computer systems in the world. With the rise of cloud, analytic, mobile, and social workloads, we have been adapting additional file and object protocols to this extreme scale and performance. IBM Spectrum Scale can now deliver data over POSIX, NFS, a Hadoop connector and the OpenStack Swift object interface, with more protocols being added regularly. I’ll talk about block protocols in future posts on IBM Spectrum Virtualize and IBM Spectrum Accelerate.
  2. Software defined storage has to deliver the kind of data management and security capabilities required to keep common building block infrastructures running efficiently. Snapshots, replication and encryption have become table stakes. IT managers now look for distinguishing characteristics like sophisticated tiering policies that can take disk-based data and move it ‘up’ to Flash storage or Flash cache on the computer where the workload is running, and ‘down’ to tape and the cloud. You can find all these capabilities in IBM Spectrum Scale and that’s good, but they are capabilities that have a somewhat single-site feel to them. “My data is here on-premises – make a snapshot, encrypt it, move it to another tier”. The cloud adds a more global dimension to storage. IBM Spectrum Scale capitalizes on this idea by enabling IT managers to connect Spectrum Scale instances across an enterprise or around the world to create a single inventory of data with policies that define the location and flow of files – moving the right data to the right place at the right time [Learn more here]. Importantly, because this global namespace can house something like 9 quintillion files (did I mention high scale?) for varying workloads with different data management needs, IBM Spectrum Scale enables IT managers to apply data management and security policies to individual sets of files – a toy sketch of the idea follows this list.
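Spectrum Scale actually expresses these rules in its own SQL-like policy language, so treat the following as a conceptual sketch only – invented filesets, pool names, and thresholds – of what a “right data, right place, right time” rule does:

```python
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str
    fileset: str              # policies can target individual sets of files
    days_since_access: int
    pool: str

def target_pool(f: FileRecord) -> str:
    """One illustrative rule: cold archive files flow down, hot data flows up."""
    if f.fileset == "media-archive" and f.days_since_access > 90:
        return "tape"          # 'down' to tape or the cloud
    if f.days_since_access < 1:
        return "flash"         # 'up' to Flash where the workload runs
    return f.pool              # otherwise leave the file where it is

files = [
    FileRecord("/gpfs/media/clip1.mov", "media-archive", 200, "disk"),
    FileRecord("/gpfs/scratch/run.log", "scratch", 0, "disk"),
]
for f in files:
    print(f.path, "->", target_pool(f))
```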

As the public cloud pioneers observed, a key is the ability to utilize common building block storage hardware. I’ll give you a practical example. IBM Spectrum Scale is software defined storage available as software, appliance or cloud service. An example appliance is the IBM Elastic Storage Server, which combines Spectrum Scale software with dense, low cost, common building block disk drawers. The Elastic Storage Server also includes IBM Spectrum Scale RAID, a software implementation of declustered RAID where RAID is done at a block level as opposed to a disk level. As disks continue to get larger, traditional RAID begins to struggle and rebuild times can be measured in hours or days. IBM Spectrum Scale RAID measures rebuild times in minutes. Think about the implications for IT managers – a massively scalable and deeply capable storage system without a SAN or RAID controller, just software defined storage and a pile of dense JBOD drawers. There is indeed a big shift taking place in the storage market.
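Why minutes instead of hours? The arithmetic is straightforward: traditional RAID funnels the whole rebuild through one spare disk, while declustered RAID spreads the work across every surviving disk. The drive count and bandwidths below are illustrative assumptions, not Elastic Storage Server specifications:

```python
disk_capacity_tb = 8          # assumed capacity of the failed disk
rebuild_mb_s_per_disk = 50    # assumed sustainable rebuild bandwidth per disk
num_disks = 200               # assumed disks in the declustered array

def hours(tb, mb_per_s):
    return tb * 1e6 / mb_per_s / 3600  # 1 TB = 1e6 MB

# Traditional RAID: the failed disk's contents funnel through one spare.
traditional = hours(disk_capacity_tb, rebuild_mb_s_per_disk)

# Declustered RAID: every surviving disk rebuilds a small slice in parallel.
declustered = hours(disk_capacity_tb, rebuild_mb_s_per_disk * (num_disks - 1))

print(f"traditional: ~{traditional:.0f} hours")        # ~44 hours
print(f"declustered: ~{declustered * 60:.0f} minutes") # ~13 minutes
```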

Join the conversation with #IBMStorage and #softwaredefined.

Introducing IBM Spectrum Storage – Inside Perspective

Every once in a long while there are discoveries that, well, change everything. Stuff like discovering the earth isn’t flat, it’s round. People talked about that possibility for centuries but it wasn’t until the Ferdinand Magellan expedition stepped out and circumnavigated the globe that everything changed. In some ways, today’s announcement of IBM® Spectrum Storage™ has done the same thing for the idea of software defined storage.

A good deal of the change we are experiencing in IT can be traced back to the advent of cloud computing. The pioneers in the cloud movement were public cloud providers like Amazon, RackSpace and SoftLayer. When compared to traditional IT infrastructures used by most private enterprises, these guys approached infrastructure in a completely new way. Microsoft engineer William Baker is credited for famously describing these two differing approaches as Pets vs Cattle. The analogy goes something like this:

In a traditional infrastructure, you think of machines as pets. You give them names and care for their well-being. When they get ill, you nurse them back to health. In a cloud infrastructure, you think of your machines as cattle. You give them numbers because they are all identical, and when one gets ill, you shoot it and get another cow.

The observation public cloud providers made was that cloud infrastructures should use cattle.

BOOM!

From that point, the industry transition was on. Common building block hardware was on the rise. The ideas of availability and recovery, of scale and performance, were being built into software rather than depending on a more expensive physical infrastructure to provide them. Newer workloads for mobile, social, analytics and Big Data were being built with an expectation of software defined cattle infrastructure. For IT managers, it represented a big move that was going to take some re-thinking. For vendors like IBM, whom clients trust to help them through transitions like this, it represented an opportunity to focus.

In 2014, you may have noticed a number of really bold moves IBM made that were all, in some way, tied to this transition. IBM’s System x server division joined Lenovo, accelerating their journey toward becoming the #1 provider of x86 building block hardware. The OpenPOWER Foundation was established, making IBM’s POWER microprocessor architecture and software available to open development. This is helping the industry build advanced server, networking, storage and graphics technology aimed at delivering more choice, control and flexibility to developers of next-generation cloud data centers. And GLOBALFOUNDRIES announced their intention to acquire IBM’s commercial semiconductor technology business, allowing IBM to better focus on fundamental semiconductor research and the development of systems that will be used with the new cloud workloads. Coupled with these moves, IBM has directed its considerable innovation toward areas of client value where we excel, one of those being Software Defined Storage.

The industry has been talking about the idea of software defined storage for a while. For that matter, I’ve been talking about software defined storage for a while. But compared to today, this was all sort of like people talking about the world being round. Analysts proposed concepts, some vendors marketed their visions, there were even a few fairly successful point products…until today. Today, somebody stepped out and actually delivered the first comprehensive family of software defined storage offerings that can centrally manage yottabytes of data on more than 300 different storage devices – whether SAN and NAS “pet” infrastructure or common building block “cattle” infrastructure…  and everything changed.

Already recognized as the #1 provider of software defined storage platforms, IBM’s introduction of IBM Spectrum Storage incorporates more than 700 patents and is backed by plans to invest more than $1 billion over the next five years toward propelling the use of software defined storage in any form – as software, appliance or cloud service. In coming weeks, I’ll be highlighting the members of the IBM Spectrum Storage software defined family.

Join the conversation with #IBMStorage and #softwaredefined.

The Pulse of the Cloud

Those of us who live in the northern hemisphere generally love this time of year. We are on the downhill side of winter and about a month from the beginning of spring. Many of my friends here in the United States are already swinging baseball bats (or at least watching their children swing baseball bats). And at IBM, the smell of Pulse is in the air. Next week in Las Vegas, thousands of our valued clients and trusted partners will engage in a bold discussion on Cloud.

For those who are attending the Pulse Open Cloud Summit on Sunday February 23, or the main Pulse sessions beginning Monday February 24, one speaker of particular interest should be Jamie Thomas, General Manager, IBM Software Defined Systems. Take a moment to follow her on Twitter.

I was able to get advance copies of both Jamie’s main tent keynote Monday morning at 10am (session KEY-2550A) and her Cloud & Software Defined Environments (SDE) track kickoff Monday afternoon at 1pm (session CET-1463A). Jamie is a bold thinker and you should expect to walk away with good perspective on Cloud and SDE.

Jamie’s keynote centers on how cloud is changing the way work gets done. The lines between business leaders, developers and IT operations are blurring as they work in concert to compose new business models in a dynamic cloud. You’ll want to pay special attention as Jamie will be making some big announcements around IBM’s open community participation and a new composable environment to enable developers to rapidly build, deploy, and manage their cloud applications.

The track kickoff lays the groundwork for a thorough week of Cloud and SDE sessions. Jamie will explain the business and technology dynamics that are pushing IT toward a new generation of infrastructure and she’ll give you a rich glimpse into how IBM is working in an open community context to make that infrastructure a reality. Some themes to watch for will be:

  • application aware – a great benefit of clouds and SDE is that they know something about the applications they are servicing
  • automation – SDE is automation for cloud
  • resource smart – you need to make the most effective use of your chosen hardware
  • openness – open source and open standards accelerate innovation and enable flexibility

Whether you are an IT decision maker, an architect, or a practitioner simply interested in the kinds of skills you’ll soon need to master, this session will help you organize your thinking.