Thoughts on British ICT, energy & environment, cloud computing and security from Memset's MD
Over the last two years there has been a lot of debate about what the embedded energy of a PC or server is compared with how much power it uses. I have crunched the numbers and believe that the figure for a server is about 1,000,000 Watt-hours (1,000 kWh or 1MWh). Here is how I worked it out, and why it means that you should sweat the desktops but replace the servers.
First, I started with what appears to be the only paper on the subject: “Energy Intensity of Computer Manufacturing: Hybrid Assessment Combining Process and Economic Input-Output Methods” by Eric Williams of the United Nations University in Japan, published in Environmental Science & Technology in 2004.
Unfortunately the paper bundles CRT (old-style monitor) production in with the figures, which really muddies the waters, especially given that CRTs are now redundant technology. However, there is one very nice bit of information embedded in the paper – a table listing the electricity, fossil, and total energy use in computer production. A quick bit of analysis: the total estimated energy cost of production is 6,400MJ, and if we remove the CRT-specific bits we take off roughly 1,700MJ.
So, from the paper a PC’s production is about 4,700MJ, which is 1,300kWh. At a green IT conference at Oxford University last year, Fujitsu gave a great presentation on their new super-green PC fabrication plant, and asserted that their range of green PCs took 730kWh to make (materials, production & distribution). If their numbers are right that is an impressive improvement in 4 years, but Fujitsu have been working hard in this area. Of course, that does also depend on my estimates of what proportion is down to the CRT – I shouldn’t think I’m far off though (I’m good with numbers 😉 ).
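The conversion between the paper’s megajoules and my kilowatt-hours is just unit arithmetic; a quick sketch:

```python
# Unit conversion behind the figures above: 1 kWh = 3.6 MJ.
MJ_PER_KWH = 3.6

total_production_mj = 6400   # Williams' estimate for PC + CRT production
pc_only_mj = 4700            # after removing the CRT-specific energy

print(f"{pc_only_mj / MJ_PER_KWH:.0f} kWh")  # -> "1306 kWh", i.e. the ~1,300 kWh quoted
```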
As an aside, this is very interesting from a recycling point of view. Most PC manufacturers, be it Fujitsu, Dell or IBM, will proudly tell us that less than 2% goes to landfill, but if you think about it, surely the only energy that can be “reclaimed” from manufacture is that of the bulk materials; all the energy of making chips, assembly, PCBs, transport etc. is entirely lost. Therefore, in reality one could at most hope to recover perhaps 800-1,000MJ of the original energy cost (ie. about 20%).
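A quick sanity check on that recycling estimate (the 800-1,000MJ bulk-materials share is my guess, as above):

```python
# How much of the ~4,700 MJ production energy do 800-1,000 MJ of
# reclaimable bulk materials represent?
production_mj = 4700
for recoverable_mj in (800, 1000):
    print(f"{recoverable_mj} MJ -> {recoverable_mj / production_mj:.0%}")
# prints 17% and 21%, i.e. roughly 20%
```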
A server is just a PC with a slightly different set of components (an extra disk & more RAM, but fewer add-in cards like graphics & audio), so I think it is reasonable to assume they are similar. Therefore, I have picked a figure half way between what I deduced from the paper (1,300 kWh) and the only convincing figure I have had from a vendor (730 kWh), and have gone for 1,000 kWh in my estimations.
So what about the fabrication energy vs. utilisation? Well, I think the paper’s 81% fab, 19% use lifetime split is probably no longer very accurate. First, Williams assumes 3 hours of use per day, which is far too low given the number of office PCs out there and the often intensive use of family PCs. Second, I think a 3 year lifetime is too low – most people I know use their PCs much longer (they get passed down / re-used rather than thrown away) – I am inclined to believe Fujitsu’s figure of 6.6 years, for home users at least.
I would not, however, totally disagree with his figure of 128W for PC+screen – the gains we have made in LCD screen efficiency have been outweighed by power-hungry, CPU-intensive machines in recent years, although that trend is now reversing. Fujitsu’s figure was 80W for their “green” PC in full power mode, and an average LCD screen uses about 20W (about half that of a similar CRT).
So, a quick updated estimate (based on an average of PC & home use):
120W * 5 hours/day * 365 * 5 years ~= 1,100 kWh
If we assume LCD screens are as energy intensive as CRTs and go with Eric’s figure of 1,700 kWh for production then the ratio is 61% fab : 39% use.
If we assume that Fujitsu are telling the truth, though, then it is 730kWh in fabrication, plus ~300kWh for a screen (a guesstimate – it is about 465 kWh for a CRT), giving about 1,000kWh of fabrication energy; the embedded and use energies are then almost equal.
If one then does the calculation based on an office PC usage pattern and a 6.6 year lifetime, then even with more energy efficient PCs the ratio is more like 35% fab : 65% use.
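To make the arithmetic behind these three scenarios explicit – note that the 8 hours/day office usage is my assumption; the post only fixes the wattage, lifetimes and fabrication figures:

```python
def fab_use_split(fab_kwh, watts, hours_per_day, years):
    """Return (fab%, use%) of a PC's lifetime energy."""
    use_kwh = watts * hours_per_day * 365 * years / 1000
    total = fab_kwh + use_kwh
    return round(100 * fab_kwh / total), round(100 * use_kwh / total)

# Home PC, Eric Williams' 1,700 kWh production figure:
print(fab_use_split(1700, 120, 5, 5))    # (61, 39)
# Home PC, Fujitsu's ~1,000 kWh (PC + LCD screen):
print(fab_use_split(1000, 120, 5, 5))    # (48, 52) - almost equal
# Office pattern: paper's 1,300 kWh fab, assumed 8 h/day, 6.6 years:
print(fab_use_split(1300, 120, 8, 6.6))  # (36, 64) - i.e. roughly 35:65
```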
Therefore, I think we can conclude that the ratio of production energy to usage energy for a PC (with or without screen – the proportions seem about the same) ranges widely, from something like (35% fab : 65% use) to (70% fab : 30% use), and that the main determining factor is the usage pattern of the PC, which is also the one bit of data we probably have the worst grasp on. Either way, though, you will use less energy overall if you sweat the desktop PCs, as we discussed in the recent BCS Green IT debate.
The situation is very different for a server, however. A typical modern 1U pizza-box server will use 80W when idle and 140W when working hard. Most of the time they are not straining, so call it 100W:
100W * 24 hours/day * 365 * 1.25 PUE ~= 1,100 kWh per year
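The annual figure above as a one-liner (1.25 is the assumed data-centre PUE, i.e. the overhead multiplier for cooling and power distribution):

```python
# Annual energy use of a typical 1U server at an average 100 W draw,
# running 24x7, including a data-centre PUE of 1.25.
def annual_server_kwh(avg_watts, pue):
    return avg_watts * 24 * 365 * pue / 1000

print(annual_server_kwh(100, 1.25))  # 1095.0, i.e. the ~1,100 kWh above
```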
In other words, a server uses about the same amount of energy as was required to create it every single year, and the same amount that a PC with a fairly average usage pattern uses in 5 years.
Because of this it is worthwhile to replace servers with more efficient models on a fairly regular basis. Moore’s Law (that transistor density doubles every 18 months) means that server work capacity per Watt is increasing by a factor of 4 every 3 years. This means that, provided you are using the servers properly (virtualisation etc.) and consolidating onto a smaller number of newer machines, if you replace three 3-year-old servers with one new machine (4:1 consolidation), the new server’s 1,000 kWh embedded energy cost will be paid back by the three you are turning off in only 4 months.
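The consolidation payback works out like this (a sketch using the ~1,100 kWh/year per-server figure from earlier):

```python
# Payback on a 4:1 consolidation: the new server's ~1,000 kWh embedded
# energy is repaid by the energy no longer drawn by the three old boxes
# switched off.
embedded_kwh = 1000
per_server_kwh_per_year = 1095   # 100 W * 24 h * 365 d * 1.25 PUE
servers_turned_off = 3

saving_per_month = servers_turned_off * per_server_kwh_per_year / 12
print(f"{embedded_kwh / saving_per_month:.1f} months")  # -> "3.7 months"
```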
Since I first wrote this, the technology has leapt forwards again. Based on discussions at a recent Digital Europe workshop in Brussels on carbon intensity measurement methodologies for ICT, I now believe that the embedded energy of a typical 1U server is in the region of 600-700kWh (~350kg CO2). For a single quad-core with 2 disks in a 1U chassis, I think an estimate of 650kWh is reasonable.
However, power consumption has also dropped dramatically. The average power used by the current range of Dell R310s with a 2.4GHz quad-core, 24GB of RAM and 2x2TB HDDs is 80 Watts (and that’s them being worked quite hard). They last about five years, but most people would give a server a three year lifetime. Assuming a data centre PUE of about 1.5, that means that over its lifetime such a server uses:
80 Watts * 24 hours/day * 365 days/year * 3 years * 1.5 PUE / 1000 = 3,154 kilowatt-hours (kWh)
The ratio of manufacturing (embedded) energy to use-phase energy in the case of a server is therefore roughly:

650 kWh embedded : 3,150 kWh use – ie. about 17% manufacturing to 83% use.
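The updated server numbers work out like this:

```python
# Lifetime energy of the newer-generation server and the resulting
# embedded : use split.
avg_watts, pue, years = 80, 1.5, 3
embedded_kwh = 650

use_kwh = avg_watts * 24 * 365 * years * pue / 1000
total = use_kwh + embedded_kwh
print(round(use_kwh))  # 3154 kWh over three years
print(f"{embedded_kwh / total:.0%} embedded : {use_kwh / total:.0%} use")  # 17% : 83%
```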
It should be noted that the chap from IBM at the talk said that, according to their calculations, the use phase was more like 95%. I’m trying to find out what sort of server they were using, but I suspect they might have been basing that on the nameplate (ie. maximum rated power) figure, which servers never actually draw in the field, and assuming a hideously bad PUE! Also, as a manufacturer it is in their interests to downplay the fabrication carbon.