In any event, it’s hard not to get a little cynical when you look back at all of the whiz-bang data storage industry novelties like “break the mirror”, “information lifecycle management”, “data mining”, “business continuity”, “data deduplication”, “deterministic performance”, “big data” and my favorite, “the storage hypervisor”. Yeah, I’ve seen a lot in my twenty years in this industry, and I get a kick out of how the more things change, the more they stay the same. I wonder if those Berkeley hippies who invented all of these ideas in the ’70s giggle themselves into a stupor when they read the current trade rags, or maybe they sob uncontrollably when they see the staggering dollar amounts the latest VC investors are shelling out for the multitude of new data storage startups. It’s hard to argue when you see the billion-dollar acquisitions that have taken place over the last few years. My guess is that a lot of the same investors who profited from those acquisitions are the ones now doubling down again.
Hold on—I’m getting to that point about time to tier.
So the circus goes on and on, and as we are at VMworld this week in San Francisco, we will see the latest marketing from all of the players deftly pitching their wares to an increasingly well-informed and techno-savvy group of prospects. The current trend is for medium-to-large businesses to move more and more of their mission-critical applications onto dynamic virtualized server platforms, where they will require networked data storage that can reliably deliver dynamic performance to a diverse workload of intermittently spiking data requests. Oh, and yeah, it all has to happen at a price point that still leaves them room to update the rest of their infrastructure too.
The main challenge data storage has faced is applications that demand highly random, write-intensive transactional workloads. The only way we could serve these workhorses in the past was to string together a bunch of 15,000 RPM spindles in a RAID 10. The result was very fast I/O, but at a very high cost and with extremely poor utilization, leaving much of that expensive capacity unused. If you really want to laugh about ineffective use of capacity, I’m sure the Berkeley guys could tell you a funny story about short stroking, but I’m not going to touch that one. Fast forward to the present and you see a number of new products offering an all-SSD model that sounds really cool when you hear things like 250K write IOPS! Reminds me of the old muscle cars that had the exhaust coming up through the hood. Looks cool and goes real fast, but it probably wouldn’t be a very good family car. Today’s data centers need an all-purpose storage solution that can do it all without breaking the ever-shrinking budgets of today’s economy-challenged businesses.
As enterprise workloads start to enter the average medium-to-large data center on the heels of increasing server and desktop virtualization, we are seeing the need for a better approach. Along came auto-tiering, promising to solve this problem of wasted performance or wasted capacity by allowing data to move between different media types within an array. What a great idea! We can now buy only the amount of high-performance media that we need and put everything else on more affordable media. Brilliant! Well, then we realized how it worked in practice. While these arrays continuously monitored and analyzed the data, they only moved it when there was a break in the action, usually at night. The result was no performance boost when the applications needed it most: the hot data had to wait until the next day for access to the faster media, and by then the needs had shifted again. Remember Lucille Ball and the chocolates on the conveyor belt? That is what was happening to those high-I/O requests during the day. Wow, I just made a Lucille Ball reference in a technical blog… I hope the Berkeley guys got it. Sorry about the hippie thing.
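The difference between scheduled and real-time tiering really comes down to how often the heat map gets acted on. As a rough illustration only (the class and logic below are a hypothetical sketch, not Dot Hill’s actual implementation), a heat-based tiering engine boils down to something like this:

```python
# Hypothetical sketch of heat-based auto-tiering: track per-block access
# counts over a monitoring window, then promote the hottest blocks to the
# fast (SSD) tier. This is illustrative only, not any vendor's algorithm.
from collections import Counter

class TieringEngine:
    def __init__(self, fast_tier_capacity):
        self.fast_tier_capacity = fast_tier_capacity  # blocks the SSD tier holds
        self.heat = Counter()   # access counts per block in the current window
        self.fast_tier = set()  # blocks currently promoted to SSD

    def record_io(self, block):
        """Called on every read/write; keeps the heat map current."""
        self.heat[block] += 1

    def rebalance(self):
        """Run at the end of each monitoring window -- every few seconds for
        real-time tiering, or once nightly for scheduled tiering. Promotes
        the hottest blocks; everything else stays on spinning disk."""
        hottest = [b for b, _ in self.heat.most_common(self.fast_tier_capacity)]
        self.fast_tier = set(hottest)
        self.heat.clear()  # start a fresh window

# Simulate one window of I/O: block "a" is hottest, "b" second, "c" coldest.
engine = TieringEngine(fast_tier_capacity=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    engine.record_io(block)
engine.rebalance()
print(sorted(engine.fast_tier))  # ['a', 'b'] -- the two hottest blocks
```

The logic is the same either way; the knob that matters is how often `rebalance()` runs. A nightly run means today’s hot blocks don’t reach fast media until tomorrow, while a seconds-scale window lets promotions track the spikes as they happen.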
Finally, there is a product that actually delivers beyond the smoke and mirrors, and it is available today. The Dot Hill AssuredSAN™ Pro 5000 with RealStor™ software provides a real-time auto-tiering solution that continuously analyzes your data and migrates it every five seconds, giving your spiking applications the performance they need when they need it. All of this from a stable, veteran company with a proven five-nines track record and over 500,000 systems in the field! Dot Hill Systems is going to make a big splash at this year’s VMworld show with a wicked fast, seriously smart, rock-solid and disruptively simple storage solution! It’s unbreakable storage that won’t break the bank! See you at the show!
By: Mike Bettenburg, Dot Hill Channel Sales Manager