Doom! Doom I say!
Classic Nicholas Carr post yesterday. Big sensational headline to grab attention, followed by some specious reasoning.
The headline is “Is the server industry doomed?” Much like the title of his book, Nick Carr isn’t really asking the question; he’s already got the answer. And for Nick the answer is yes.
I couldn’t disagree more. While the server industry is definitely changing, it’s by no means dead, dying or doomed. Let’s walk through Nick’s argument, debunk it, then consider a few other data points as well.
First argument: Virtualization wrings more utilization out of a single server. Thus companies need fewer servers. The server industry is doomed.
It’s indisputable that virtualization dramatically increases server utilization. On average, virtualization can take an x86 server from roughly 15% utilization to 60%. The technology is wildly popular and I think most would agree it’s also reached the mainstream. With a 4x gain in utilization, by Nick’s logic shouldn’t the server market have fallen to ¼ of its 2002 size (when virtualization hit its growth spurt)? In actuality, this past year the server market grew by 5%, capping off eight consecutive quarters of growth. This, on a $50 billion base, is no small feat. In fact, the advent of virtualization seems to be correlated with when the server market resumed its growth.
Let’s be charitable and assume we’ve just been in the early adopter phase of virtualization. Shouldn’t we have at least seen the server market shrink by 10% given some early utilization gains?
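To make the implied arithmetic concrete, here’s a back-of-envelope sketch. It’s purely illustrative: it assumes utilization really did go from 15% to 60% and that the total amount of work stayed fixed, which is exactly the assumption that turns out to be wrong.

```python
# Back-of-envelope sketch of the "virtualization shrinks the server market" logic.
# Assumptions (illustrative only): utilization rises from 15% to 60%, total workload is fixed.

utilization_before = 0.15   # typical x86 utilization without virtualization
utilization_after = 0.60    # typical utilization with virtualization

# If total work is fixed, the number of servers needed scales inversely with utilization.
fleet_fraction_needed = utilization_before / utilization_after
print(f"Servers needed vs. before: {fleet_fraction_needed:.0%}")   # 25%

# What the market actually did on a ~$50B base: grew ~5% last year.
actual_growth = 0.05
print(f"Predicted change: {fleet_fraction_needed - 1:+.0%}, actual: {actual_growth:+.0%}")
```

The gap between the predicted -75% and the observed +5% is the whole point: demand isn’t fixed.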
The reason we aren’t seeing market shrinkage after an increase in server utilization is the same reason we never heard Ross Perot’s “giant sucking sound.” We found a way to do more with less of a particular item. This makes that item more valuable to us because it’s become more productive. Thus we want more of this highly productive thing. This happens all the time when new technologies like virtualization are introduced. For example:
When word processors took off, everyone thought they would crater the employment of administrative assistants. Today there are more administrative assistants working than before because they are so much more productive (i.e. valuable).
When personal computers took off, everyone predicted the paperless office. Instead today offices consume more paper than ever. Why? Because all those electronic files are so darn easy to print!
Why do you think the #1 server vendor (IBM) is also the #1 virtualization software distributor?
Second argument: Computation will move to brand-less collections of chips and drives arrayed in a grid. Clever software will orchestrate all of this. Google showed us this can be done. The server industry is doomed.
This is basically a pitch for “grid computing,” a term IBM coined 5 years ago that has yet to bear fruit in the marketplace. The grid is not virtualization. Virtualization makes one server appear as many. A grid makes many servers act as one, which is also what Google does. The question is, who really needs many servers to act as one? After all, we’ve just said your average application can barely utilize 20% of an Intel processor. What’s the rush to stitch all these processors together to serve a single application? It turns out, almost NO ONE needs to do this at a grid level. If you’re going to simulate nuclear explosions or serve up a billion searches a day, go buy or build a grid. Everyone else can sit tight with their 2-8 processor boxes, which they can now get 80% utilization out of. And if you have an application you think might require a grid, just wait a year longer and Moore’s law will save you the trouble.
Third Argument: Utility grids will supplant “sub-scale” corporate data centers. The server industry is doomed.
Prove it. Show me a spreadsheet that demonstrates how a grid costs materially less than in-house alternatives with a comparable quality of service. I think you’ll find that there’s little evidence that your typical Global 2000 company is sub-scale. Compared to electricity (Nick’s favorite analogy), what are the big, up-front overhead costs in corporate computing that need to be amortized across a user base that’s larger than the corporation? In the case of electricity this is fairly obvious: a power plant costs a ton of dough. By contrast there isn’t much in computing that costs you more than $1 or 2 million up front. This is a trivial amount to amortize away in a large corporation.
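Here’s a rough sketch of that amortization claim. The $2 million figure is the high end of the range above; the headcount and depreciation period are hypothetical numbers plugged in for illustration.

```python
# Rough amortization sketch: how much does a $2M up-front cost matter to a big company?
# Assumptions (hypothetical): 20,000 employees, cost spread over 3 years.

upfront_cost = 2_000_000      # $2M, the high end of the "$1 or 2 million" figure above
employees = 20_000            # hypothetical Global 2000 headcount
years = 3                     # hypothetical depreciation period

cost_per_employee_per_month = upfront_cost / employees / (years * 12)
print(f"${cost_per_employee_per_month:.2f} per employee per month")  # ~$2.78
```

At a couple of dollars per employee per month, there isn’t much left for a utility to amortize away.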
Where are the computing utility economies of scale coming from? Purchasing power? Know-how? Let’s end the hand waving and pragmatically lay this out. Otherwise, let’s drop the topic and wait another decade until there's more substance behind the rhetoric.
I’ll cut Nick Carr a little slack due to his frequent use of the qualifier “may,” but otherwise that post was pretty far off the mark. What’s even more interesting is that today we’re actually witnessing the OPPOSITE of what Nick is predicting. He says it’s all going to commodity hardware run by clever software. In fact the reverse is happening as vendors push software tasks back down into the server layer. Witness:
DataPower (now IBM)
NetScreen (now Juniper)
Or you can read about it in BusinessWeek and the Mercury News.