Source: Adam Lesser
At the GigaOM Structure Conference last week, the main hall was packed for a talk on brawny vs. wimpy cores by Intel's VP of Cloud Infrastructure, Jason Waxman. Waxman opened his presentation by referencing the academic paper wars of the last couple of years, which have pitted Google's Urs Hölzle, who argued that brawny cores like Intel's Xeon chips would prevail, against Facebook and Tilera, which published a paper showing that Tilera's multicore Linux-based processors beat x86 processors on performance-per-watt measurements. But Waxman wasn't really there to take sides. He was there to argue that Intel is repositioning itself as the chipmaker of choice, capable of offering its customers both brawny and wimpy cores.
To step back a moment, remember there was a time when all Intel cared about was proving it had amazing single-threaded performance and the best clock speeds. The company's tone has shifted dramatically over the past couple of years as the way computers are used, exemplified by the explosion of smartphones and tablets, has changed how computing is done on the front end and how data centers function on the back end. In both arenas, power efficiency has become paramount.
And the controversial question on everyone's mind is whether hyperscale data centers like those at Facebook, Google and Amazon would be willing to adopt ARM or other non-x86 multicore processors, in the hope that lower-power servers are better suited to executing a higher volume of similar compute tasks while saving on power costs. In the case of Amazon and Facebook, there's clearly testing going on and a general openness to new processor architectures; even x86 stalwart AMD has indicated it isn't beyond licensing an ARM core to build a better microserver processor.
But in the last six months, Intel has shown greater focus on the low-power end of the game, tacitly accepting that some data center engineers will want highly parallelized processing on lots of wimpy cores, regardless of the inherent software challenges in doing so. In May it introduced its lowest-power single-socket Xeon server build ever, at a reasonable 17 watts, and it has benefited from OEM moves like HP's recent decision to roll out low-power servers built on Intel's brand-new 64-bit Atom Centerton chips, which should use about 12 to 14 watts per server. A typical server build can run over 100 watts.
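The wattage figures above hint at why the trade-off is contentious: a wimpy core loses on raw throughput but can still win on performance per watt if the workload parallelizes well. A minimal sketch of that arithmetic, using the article's wattage figures (a 17-watt low-power build vs. a 100-watt typical build) and purely hypothetical throughput numbers chosen only to illustrate the calculation:

```python
def perf_per_watt(requests_per_sec: float, watts: float) -> float:
    """Throughput delivered per watt consumed."""
    return requests_per_sec / watts

# Hypothetical throughputs: assume the brawny 100 W server handles 4x the
# requests of the wimpy 17 W build on an embarrassingly parallel workload.
brawny = perf_per_watt(requests_per_sec=40_000, watts=100)
wimpy = perf_per_watt(requests_per_sec=10_000, watts=17)

print(f"brawny: {brawny:.0f} req/s per watt")  # 400 req/s per watt
print(f"wimpy:  {wimpy:.0f} req/s per watt")   # ~588 req/s per watt
```

Under these assumed numbers the wimpy build wins on efficiency despite losing on absolute throughput, which is exactly the bet behind low-power server designs; for serial, latency-sensitive tasks the brawny core's 4x throughput advantage would dominate instead.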
SeaMicro, recently acquired by AMD, originally built its low-power servers on Intel Atom chips and later added Xeon chips. CEO Andrew Feldman has often said he went with Atom precisely because he didn't want to ask his customers to start recompiling code, which they would have to do with ARM or other non-x86 processors. And that's the implicit pitch from Intel's Waxman: Intel, too, has wimpy cores that can be optimized. Waxman took the opportunity at Structure to demo a Centerton Atom chip, showing its power efficiency at around 9 watts, and to announce the next generation of Intel Atom chips, called Avoton, a system on a chip (SoC) built on new 22nm fabrication technology.
If there's a message in all this, it's that wimpy cores will have a place in the data center and that Intel likely sees demand on the horizon for lower-power chips. More important is the recognition that there is a variety of compute tasks, some of which really shouldn't run on brawny cores. ARM server startup Calxeda said last week that ARM and x86 could coexist in the data center, with specific cloud-processing tasks offloaded to ARM processors the way graphics processing units (GPUs) from the likes of Nvidia have been paired with CPUs for many years.
The server is being pulled apart and rebuilt because major buyers like Facebook with sophisticated engineering teams want hardware that’s optimal for every compute task. And the companies that win will be the companies that give the customer what they want. Even Intel realizes that.