
Supporting Multiple Cloud Stacks in the Software-defined Data Center

Posted by Andy Waterhouse

Jan 23, 2014 9:00:00 AM

When we talk about the software-defined data center, we often think of delivering our vision through a single cloud stack or cloud automation platform. A single stack makes it easier to implement, manage and visualize the delivery of an IT service, and provides a clear way of recording everything in a service catalog and CMDB/CMS.

However, for many customers, the ability to use a single stack is simply not a reality.  There can be many reasons for this:

• A multiple hypervisor strategy, using an 'element management' approach
• The integration of a new organization with a different service delivery model, different storage array hardware or stack approach
• Organic business growth within the organization with non-interlocked business units
• A specific application infrastructure project delivery that required the use of a different stack or orchestration approach (such as HP OO, BMC Atrium Orchestrator, vCO, Puppet or scripts) 
The challenge that IT teams face in this situation is that the underlying infrastructure runs very inefficiently and any change to the platform can take days or weeks. Storage array capacity is siloed rather than pooled correctly, so it is not used efficiently. The service delivery process is duplicated, with overlapping or manual tasks used to deliver the service, and each vendor's storage arrays must be managed and interfaced with in a different way. The overall cost of IT is significantly higher than it should be, while the ability to deliver services quickly and effectively to business users is compromised.

The goal of the IT team should be to determine the service delivery requirements of each IT service and then deliver it on the appropriate underlying infrastructure, using as much shared infrastructure as possible in an agile manner. It should also be recognized that many organizations do not effectively deprovision storage once a business service has been decommissioned, which increases costs: additional capacity is frequently purchased when existing storage could instead be returned to the available pools. The reason is the inability to link storage to the service being delivered, especially when services are delivered manually.

This leads to the next challenge when using multiple stacks and multiple orchestration engines: how much effort is required to add new infrastructure (to expand services), and when should 'end of life' infrastructure be decommissioned? Each stack needs to take the changes into account; each orchestration engine needs to be recoded to reflect the infrastructure change. When storage arrays from multiple vendors are used, this problem escalates significantly. At best, it delays the availability of the new infrastructure or leaves the service running on outdated infrastructure that should have been removed. At worst, it leads to yet more inefficiency and cost of service. Given the growth in storage requirements for most organizations, this situation will occur ever more frequently.

The deployment of a true software-defined storage (SDS) platform, such as EMC ViPR, addresses the challenges that are raised above. It can be used as part of a wider Software-Defined Data Center (SDDC) strategy or as a way of resolving the issues around delivering an automated storage platform for today's datacenters.  

Let's look at how SDS can solve the challenge of dealing with multiple cloud stacks. The key issue is that the multiple stacks -- whether they are integrated vendor stacks or other orchestration platforms or scripts -- each access the underlying infrastructure components independently. The SDS approach used by EMC ViPR provides an abstracted view of the underlying hardware and so delivers a single interface into the storage platform, irrespective of the use of different storage arrays (such as EMC and NetApp, with other vendors to follow) delivered as SAN and NAS platforms. When storage is added to the infrastructure, the ViPR platform discovers the array's capabilities and adds its capacity into the virtual arrays and virtual pools from which allocations are made. A virtual array could represent a location, a business unit or a tenant (in a multi-tenant environment) and can span multiple physical arrays. A virtual pool represents a storage service to be delivered and includes the policies that define that service, e.g. the type of disk used, SAN or NAS storage, replication options, and so on.
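The virtual array/virtual pool abstraction can be illustrated with a minimal sketch. This is an illustrative data model only, not the ViPR API; all class and array names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalArray:
    """A vendor array discovered by the SDS controller (hypothetical model)."""
    name: str
    vendor: str
    free_gb: int

@dataclass
class VirtualPool:
    """A storage service definition: policy, not hardware."""
    name: str
    disk_type: str   # e.g. "SSD" or "NL-SAS"
    protocol: str    # "SAN" or "NAS"

@dataclass
class VirtualArray:
    """A location/business-unit/tenant boundary that can span physical arrays."""
    name: str
    arrays: list = field(default_factory=list)

    def free_gb(self):
        # Capacity is pooled across all member arrays, not siloed per array.
        return sum(a.free_gb for a in self.arrays)

# Two vendors' arrays pooled behind one virtual array
london = VirtualArray("london", [
    PhysicalArray("vmax-01", "EMC", 5000),
    PhysicalArray("fas-02", "NetApp", 3000),
])
gold = VirtualPool("gold", disk_type="SSD", protocol="SAN")
print(london.free_gb())  # 8000 -- one pooled figure, not two silos
```

The point of the sketch is that consumers see one pooled capacity figure and a named service level, never the individual vendor arrays behind them.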

Storage is provisioned from the pools that are available, so when additional physical array capacity is added it joins the existing storage services without any recoding of scripts, orchestration workflows or interfaces; it is immediately available for provisioning and use. The provisioning process allocates storage across all arrays, so the pools are used in their entirety rather than on a silo basis, whilst still allocating according to policy. The whole end-to-end provisioning process (LUN to host) is fully automated, so storage is provisioned in minutes rather than days or weeks. Customers can therefore better leverage the strategic investment they have made in their storage arrays.
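The "no recoding when an array is added" property follows from callers naming a service level rather than a physical array. A hedged sketch of that idea (hypothetical names and allocation policy, not ViPR's actual algorithm):

```python
# Hypothetical catalog front-end: requests name a service level ("gold"),
# never a physical array, so adding an array changes data, not code.
arrays = [
    {"name": "vmax-01", "pool": "gold", "free_gb": 500},
    {"name": "fas-02",  "pool": "gold", "free_gb": 300},
]

def provision(pool, size_gb):
    """Allocate from the member array with the most headroom."""
    candidates = [a for a in arrays
                  if a["pool"] == pool and a["free_gb"] >= size_gb]
    if not candidates:
        raise RuntimeError("insufficient pooled capacity in " + pool)
    best = max(candidates, key=lambda a: a["free_gb"])
    best["free_gb"] -= size_gb
    return best["name"]

print(provision("gold", 400))   # lands on vmax-01
arrays.append({"name": "new-03", "pool": "gold", "free_gb": 1000})
print(provision("gold", 800))   # new capacity usable at once: new-03
```

Note that the second request succeeds only because the newly added array joined the pool; nothing in `provision` or in the caller was changed.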

The net effect of this approach is that there is now a single interface to the whole storage platform, using a single storage service catalog. Each cloud stack can continue to access storage services independently, but SDS (ViPR) controls access to all of the underlying storage. This allows all of the siloed storage to be pooled effectively and means that scripts, orchestration workflows and tools no longer need continual updating to reflect changes in the underlying storage platform. For VMware administrators there is an additional benefit: ViPR provides a single VASA interface to all of the underlying arrays, irrespective of whether they are EMC or NetApp arrays. Finally, ViPR also helps to decommission storage services: as part of the deprovisioning process, the storage is returned to the available pools, ready for use again at the next request.
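The deprovisioning benefit rests on one thing the earlier paragraphs flagged as missing in manual delivery: a record linking each volume to the business service that owns it. A minimal sketch of that bookkeeping (illustrative only; service and volume names are made up):

```python
# Illustrative: tracking which service owns which volume makes
# deprovisioning safe -- capacity goes back to the shared pool.
pool_free_gb = 1000
allocations = {}   # volume id -> (service, size_gb)

def provision(service, vol_id, size_gb):
    global pool_free_gb
    if size_gb > pool_free_gb:
        raise RuntimeError("pool exhausted")
    pool_free_gb -= size_gb
    allocations[vol_id] = (service, size_gb)

def decommission(service):
    """Release every volume linked to a retired business service."""
    global pool_free_gb
    for vol_id, (svc, size) in list(allocations.items()):
        if svc == service:
            pool_free_gb += size
            del allocations[vol_id]

provision("payroll", "vol-1", 200)
provision("payroll", "vol-2", 100)
decommission("payroll")
print(pool_free_gb)  # 1000 -- capacity returned, not stranded
```

Without the service-to-volume link, `decommission` has nothing to iterate over, which is exactly why manually delivered storage tends to stay stranded after the service it backed is gone.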

The need for this approach was highlighted to me recently when I met with a finance customer that had been running a datacenter consolidation program following a series of company acquisitions. The result of bringing together the infrastructure of different companies was that they had storage arrays from three different vendors, two different stacks (one cloud and one orchestration platform) and a series of mismatched manual processes. They had moved the various infrastructure platforms into two datacenters but had not changed any of the operational profiles of the platforms. This meant that they had achieved the goal of reducing the number of datacenters, but were still incurring the same ongoing operational costs and process inefficiencies. ViPR will now enable them to share the available storage across their EMC and NetApp arrays (and the third vendor's arrays shortly) and help them transform their operational processes from a broad set of misaligned, manual tasks into a single set of automated actions. This will deliver the reduction in operational cost that they needed and allow them to make more strategic decisions about ongoing capital investment in storage.

It is clear that SDS is a significant enabler for customers that are required to run multiple cloud stacks. The SDS approach enables them to get better leverage from their existing investments while supporting the business need to migrate services over time, increasing agility and reducing operational cost.

Topics: Software Defined Storage, software-defined data center, VMware

About this blog

The future of storage is here. Are you ready for it? This blog offers advice and best practices on how to prepare your data center to become software-defined, from the top storage minds at EMC.

The opinions expressed here are personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC nor does it constitute any official communication of EMC.

 
