Analysts at IDC projected last week that the dollar per gigabyte price barrier for solid state drives (SSDs) should fall by the second half of this year. That’s great news for consumers eager to see flash memory bring down the weight and increase the performance of laptops. But what does it mean for cloud computing and data centers, where many want to bring down energy usage while speeding up web serving?
The idea that SSDs could help data centers become more energy efficient was first codified in 2009, when iSuppli issued an eye-catching prediction that if global data centers switched from hard disk drives (HDDs) to SSDs, 166 gigawatt hours of energy would be saved over a five-year period, or enough to “power an entire country.” SSDs have no moving parts and run far cooler, which means they are quieter and require much less cooling. They also take up less space, reducing the amount of data center real estate needed.
Looking back, iSuppli’s research is useful but somewhat comical when one considers that in 2009 a gigabyte of flash storage cost over $3, compared with less than ten cents for a gigabyte of magnetic hard drive storage.
That significant cost difference matters less for consumer products, because storage is being moved to the cloud. Who needs a massive hard drive on their laptop if they are streaming their music from Spotify or iCloud? So move storage to massive HDD arrays in the cloud, put SSDs on thin clients for consumers, and we’re all done, right? Well, sort of.
Why we need SSDs in the data center
From Samsung and Intel to OCZ and STEC, a lot of companies are betting that SSDs will find their way into the data center. The first move from enterprise SSD makers has been to shift the conversation from cost per gigabyte of storage to IOPS per watt, or IOPS per dollar. IOPS stands for “input/output operations per second” and is a standard performance measurement of computer storage. It measures how quickly a storage device can be written to and read from.
IOPS matters in data centers because if each storage device delivers more operations per second, the number of devices needed to hit a given performance target can drop. If software engineers can figure out a way to make IOPS performance as important a factor as total storage in providing end users with a solid experience, then SSDs will become more and more attractive.
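To make the IOPS-per-watt and IOPS-per-dollar framing concrete, here is a small sketch comparing the two metrics for a hypothetical enterprise SSD and a 15,000 RPM HDD. All of the figures below are illustrative assumptions, not vendor specifications:

```python
# Hypothetical comparison of IOPS/watt and IOPS/dollar.
# The IOPS, wattage, and price figures are assumptions for illustration.

def iops_per_watt(iops, watts):
    """Storage performance delivered per watt of power drawn."""
    return iops / watts

def iops_per_dollar(iops, price):
    """Storage performance delivered per dollar of up-front cost."""
    return iops / price

# Assumed figures for one enterprise SSD and one 15K RPM HDD.
ssd = {"iops": 5000, "watts": 2.5, "price": 700.0}
hdd = {"iops": 200, "watts": 15.0, "price": 250.0}

for name, d in (("SSD", ssd), ("HDD", hdd)):
    print(f"{name}: {iops_per_watt(d['iops'], d['watts']):.0f} IOPS/W, "
          f"{iops_per_dollar(d['iops'], d['price']):.1f} IOPS/$")
```

Even with assumed numbers, the pattern holds: on cost per gigabyte the HDD wins easily, but once the yardstick is performance per watt, the SSD pulls far ahead.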
Samsung, in particular, says that its SM825 SSD delivers 200 times the IOPS per watt of a 15,000 RPM HDD. This type of thinking mirrors what is happening in CPU design, where many efficiency gurus are pushing for performance per watt as a metric rather than pure clock speed. Data center engineers want to know how much performance they will get for the power the data center must expend on a given workload. Samsung has even started benchmarking page views per watt to compare an SSD against HDDs. The SSD still comes out ahead there, but not by the margin seen in a pure IOPS comparison.
And pure performance is why, when I turn on my MacBook Air, it boots so much more quickly than my old MacBook did. A typical 15,000 RPM HDD does about 200 IOPS. An Intel X25-E SSD does about 5,000 IOPS, and companies like Violin Memory say they have flash memory arrays that can do 250,000 IOPS. The speed with which flash storage can be accessed (its IOPS advantage) is what is driving the trend toward hybrid SSD/HDD drives, where the most relevant “hot data” is kept on the SSD.
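The hybrid hot-data idea can be sketched as a toy tiering policy: keep a small, bounded set of blocks on the SSD, promote blocks that are accessed frequently, and serve everything else from the HDD. This is a simplified illustration under assumed capacity and threshold values, not any vendor's actual algorithm:

```python
from collections import Counter

class HybridTier:
    """Toy hot-data tiering: a block accessed at least `threshold` times
    is promoted to a bounded SSD tier; all other blocks stay on the HDD.
    Real hybrid drives use far more sophisticated policies (and demote
    cold data); this sketch only shows the promotion side."""

    def __init__(self, ssd_capacity=4, threshold=3):
        self.ssd = set()          # block IDs currently held on flash
        self.counts = Counter()   # access count per block
        self.ssd_capacity = ssd_capacity
        self.threshold = threshold

    def access(self, block):
        """Record an access and report which tier serves it."""
        self.counts[block] += 1
        if (block not in self.ssd
                and self.counts[block] >= self.threshold
                and len(self.ssd) < self.ssd_capacity):
            self.ssd.add(block)   # promote the now-hot block to SSD
        return "SSD" if block in self.ssd else "HDD"

tier = HybridTier()
for _ in range(3):
    tier.access("index.html")     # becomes hot after three accesses
print(tier.access("index.html"))  # → SSD
print(tier.access("archive.zip")) # → HDD
```

The payoff is that the handful of blocks responsible for most accesses get SSD-class IOPS, while the bulk of the data sits on cheap HDD capacity.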
Another case of mobile design driving data center design?
I spend a lot of time thinking about whether data centers will start to resemble smartphones in design, because smartphones were always designed with energy conservation as a critical factor, and power consumption is becoming a primary concern for data center operators.
One of the concerns I always have about SSDs in the data center is the up-front capital cost. To get the same amount of storage, you have to spend more money up front, even if you recoup some of it on the back end through lower utility bills. But you are also getting far more performance out of those SSDs, even as you sacrifice storage capacity per dollar.
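That trade-off can be made concrete with back-of-the-envelope math: up-front price plus electricity over a service life. The prices, power draws, and electricity rate below are assumptions for illustration only:

```python
# Rough per-drive total-cost-of-ownership sketch: purchase price plus
# electricity over a service life. All inputs are illustrative assumptions.

def tco(unit_price, watts, years, dollars_per_kwh=0.10):
    """Up-front cost plus energy cost for running the device 24/7."""
    hours = years * 365 * 24
    energy_cost = watts / 1000 * hours * dollars_per_kwh
    return unit_price + energy_cost

# Assumed: the SSD costs more up front but draws far less power.
ssd_cost = tco(unit_price=700.0, watts=2.5, years=5)
hdd_cost = tco(unit_price=250.0, watts=15.0, years=5)
print(f"5-year SSD cost: ${ssd_cost:.2f}")
print(f"5-year HDD cost: ${hdd_cost:.2f}")
```

With these assumed numbers, the smaller power bill alone does not close the price gap on a per-drive basis, which is exactly why vendors are pushing the performance-per-watt framing rather than raw cost comparisons.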
The Storage Networking Industry Association (SNIA) estimates that 13 percent of the total power consumption in a data center comes from storage. That is not a massive share, but it is enough to affect Google’s or LinkedIn’s bottom line. And if the performance characteristics of SSDs make a marked difference in consumers’ experiences, going green by putting SSDs in the data center will become an increasingly serious consideration.