Storage Transformation Demands New Thinking (When it Comes to Software-Defined Storage, if it Walks like a Duck and Quacks like a Duck, It Still Might be a Pig!)

Posted by Rich Simmons | Author Info

Jul 21, 2014 9:00:00 AM

We live in interesting times right now in the storage business. What was once considered a “boring” sector of IT is now hot again. New vendors are entering the market at a furious pace, trying to gain position in all-flash, flash attach, and software-defined storage. Meanwhile, traditional storage incumbents are looking to box out the new entrants through different combinations of product re-branding, acquisition, and/or partnerships.

The new entrants are the most fun to watch, in my opinion. Unencumbered by installed bases, legacy technology (or politics!), they are free to try new approaches to the long-standing issues and roadblocks that always emerge as technology matures. Some new players have truly unique and interesting solutions; others have only marketing spin.

Watching some of the traditional storage vendors try to counter these new offerings is generally quite amusing, and in some cases just plain sad. They trot out technology that has been around for years, declaring it to be software-defined, cloud-ready, or whatever they think will make them most relevant. The most common response I see is the re-brand. You know the drill: product XYZ was our storage virtualization/storage OS product for years, but now it’s called product ZYX, and it’s software-defined storage because we dropped the hardware requirement! So it’s now Software-Defined Storage (SDS)?

It all just serves to remind me why I work where I do. One of the great things about working for EMC is the company’s ability to blend the innovation and enthusiasm of a startup with our traditional storage business. My group, the Advanced Software Division, is a great example of this. EMC looked out over the storage landscape some years ago and made a pretty bold bet. They did not choose to re-purpose and re-brand existing technology. Rather, they went outside the box (literally outside the company walls) and hired Amitabh Srivastava to build a new solution from the ground up. Amitabh was building cloud storage in his last gig, so he has been in on this SDS, cloud-ready stuff for a while. EMC listened to our customers tell us they needed a new approach, and that’s what we delivered, starting from scratch to develop a solution that could help customers transition to the next storage generation.
Read More

Topics: Software Defined Storage, 3rd Platform, software-defined data center, HDFS, OpenStack

Looking Back to Get Ahead Using 'Divide and Conquer'

Posted by Mark A. O'Connell | Author Info

May 12, 2014 9:00:00 AM

While my last blogs encouraged taking advantage of new technologies and not being constrained by "how we've always done things", for this blog I'll emphasize the wisdom that can come from looking backwards.

Divide and conquer is an old strategy that has been applied effectively to many situations and problems. In politics, splitting the opposition by offering deals that appeal to individual groups or subsets of it can enable successful policy implementations that would otherwise have been blocked by a united opposition. Militarily, victory can be achieved by avoiding an enemy's combined strength, engaging the enemy repeatedly in smaller battles and whittling down its fighting capacity.

It is not often that politics, warfare, computing, and storage all intersect, but in this case, leveraging an age-old strategy can help us gain insights into today's seemingly intractable problems.
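In computing, the textbook illustration of divide and conquer is a recursive sort: split the problem in half, solve each half independently, then combine the results. The sketch below is a generic Python example of the pattern, not tied to any particular EMC product:

```python
def merge_sort(items):
    """Divide and conquer: split the list, sort each half, merge the results."""
    if len(items) <= 1:               # base case: a short list is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # conquer the left half
    right = merge_sort(items[mid:])   # conquer the right half
    # combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```

Each recursive call works on a problem half the size of its parent, which is exactly the "avoid the combined strength, win the smaller battles" idea applied to data.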
Read More


Don't Wait to Tap Your Big Data Goldmine

Posted by Rich Simmons | Author Info

Mar 17, 2014 9:00:00 AM

It seems like everywhere you turn there is another article, post, or expert pontificating about Big Data as a future storage consideration. But Big Data is already happening: transactions are being processed, devices are sending out location and status updates, and so on. There will be no starter gun going off to announce the beginning of the Big Data Era; it's here now, yet most businesses aren't doing anything with it.

This raises a couple of interesting questions: Where is this Big Data going now in your storage environment? And how does that impact what you may want to do with it later? We were wondering about that here in ASD, and we went looking for some answers. We commissioned a study with The 451 Group to get a handle on what businesses are doing today. The study encompassed corporate IT departments (a majority of respondents at the Manager level or above) at over 100 enterprise companies.

One of the key questions we asked was: Which storage methods do you support today, or plan to support, for your Big Data implementation? In retrospect, the responses were not that surprising if you believe that Big Data is alive and active in the data center today and not some future ideal. The data shows that almost 50% said they store Big Data on an existing SAN, 30% indicated an existing NAS, and 30% said a cloud-based storage platform (multiple selections were accepted).

Question: Which storage methods do you support today or plan to support for your Big Data implementation?

Read More

Topics: Software Defined Storage, HDFS, ViPR, Big Data

Understanding Hadoop, HDFS, and What That Means to Big Data

Posted by Amrita Sadhukhan | Author Info

Jan 27, 2014 9:00:00 AM

Over the past few years, usage of the Internet has increased massively.  People are now accessing email, using social networking sites, writing blogs, and creating websites.  As a result, petabytes of data are generated every moment.  Enterprises today are trying to derive meaningful information from this data and translate it into business value, as well as into features and functionality for their various products.

Huge volumes of a great variety of data, both structured and unstructured, are being generated at an unprecedented velocity and in many respects, that is the easy part!  It is the “gather, filter, derive and translate” part that has most organizations tied up in knots.  This is the genesis of today’s focus on Big Data solutions. 

Previously, we used traditional enterprise architectures consisting of one or more servers, storage arrays, and storage area networks connecting the servers and the storage arrays.  This architecture was built for compute-intensive applications that require a lot of processing cycles but operate mostly on a small subset of application data.
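Hadoop inverts that model: instead of moving large data sets to a few powerful servers, it moves computation out to many nodes that each hold a slice of the data, then combines the partial results. The toy word count below sketches that map-and-reduce flow in plain Python; it is an illustration of the idea only, not the actual Hadoop API:

```python
from collections import defaultdict

def map_phase(chunk):
    # map: each "node" emits (word, 1) for every word in its own chunk of data
    return [(word.lower(), 1) for word in chunk.split()]

def reduce_phase(pairs):
    # shuffle + reduce: group the emitted pairs by key and sum the counts
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Two chunks stand in for data blocks spread across HDFS nodes.
chunks = ["big data is big", "data is generated every moment"]
pairs = [pair for chunk in chunks for pair in map_phase(chunk)]
print(reduce_phase(pairs))  # → {'big': 2, 'data': 2, 'is': 2, 'generated': 1, 'every': 1, 'moment': 1}
```

The key point is that each map call touches only its local chunk, so the work scales out with the number of nodes rather than being bottlenecked on one server reading all the data.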

Read More

Topics: HDFS, HADOOP, Big Data