Kate's Comment

Thoughts on British ICT, energy & environment, cloud computing and security from Memset's MD

Utility Computing

I will be carrying on with the “Greening the data centre” series soon, but in the interim several people have recently been asking me about the concept of utility computing, which has also been a major theme of recent IT conferences. Despite the attention the concept is receiving there is still a lot of misunderstanding, both about what it is and about why it will be important over the next few years. So, what is utility computing all about?

First of all we need to be clear on what we mean by utility computing: very few organisations are offering true utility computing (i.e. computing resources supplied as a utility, in much the same way as gas, water or electricity), although there are some analogues. Our services can, in some senses, be regarded as utility computing, because we make computing facilities (specifically CPU resource, storage and bandwidth) available in convenient bite-sized chunks and allow customers to upgrade or downgrade easily.

A typical example is one of our Xen-based Miniserver Virtual Machines: a client might initially want just 256MB of RAM and 30GB of disk space, but in time their requirements might grow beyond one machine and onto a cluster of powerful dedicated servers. This approach (letting the client start small and grow their resource allocation as needed) gives them very large cost savings, with no up-front capital expenditure, and is very green: we balance the load across our pool of Miniserver host machines to make efficient use of the available disk and CPU resource (bandwidth is secondary, since bandwidth you don’t use is not really consuming power).
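To make that concrete, here is a minimal sketch in Python of the kind of greedy placement that balances new virtual machines across a pool of hosts. It is illustrative only: the Host class, the capacity figures and the “most free RAM wins” rule are my assumptions for the example, not our actual provisioning code.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    ram_mb: int          # total RAM on the host
    disk_gb: int         # total disk on the host
    used_ram_mb: int = 0
    used_disk_gb: int = 0

    def free_ram(self):
        return self.ram_mb - self.used_ram_mb

    def free_disk(self):
        return self.disk_gb - self.used_disk_gb

def place_vm(hosts, ram_mb, disk_gb):
    """Place a VM on the host with the most free RAM that can also fit its disk."""
    candidates = [h for h in hosts
                  if h.free_ram() >= ram_mb and h.free_disk() >= disk_gb]
    if not candidates:
        return None  # pool is full: time to rack another host
    best = max(candidates, key=lambda h: h.free_ram())
    best.used_ram_mb += ram_mb
    best.used_disk_gb += disk_gb
    return best

pool = [Host("xen01", 8192, 500), Host("xen02", 8192, 500)]
vm_host = place_vm(pool, 256, 30)  # a 256MB/30GB Miniserver
print(vm_host.name)                # -> xen01
```

A real scheduler would also weigh CPU contention and I/O, but the principle (pack new workloads onto whichever host has the most spare capacity) is the same.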

We, however, are progressively moving towards true utility computing. The next step is our deployment of on-demand clusters, where a client might have, say, 10 servers dedicated to his/her application but only need 3 of them at normal loads, so only 3 are powered up most of the time. As demand increases, our in-house management software spots the trend and, ahead of requirement, brings the other nodes in the cluster online. We plan to incentivise our clients to use this system by billing them separately for electricity: if they let us turn off the machines that are only there to cope with load spikes and are normally idle, it costs them less.
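The trend-spotting itself need not be clever to be useful. The sketch below (in Python, with made-up names and figures; our in-house software is of course rather more sophisticated) extrapolates the last two load samples forward one interval and powers up enough nodes, plus a little headroom, to meet the forecast before it arrives.

```python
NODE_CAPACITY = 100.0   # requests/sec one node can comfortably serve (assumed)
HEADROOM = 1            # spare nodes kept powered beyond the forecast
MIN_NODES = 3           # nodes that stay on at all times

def forecast(load_history):
    """Naive linear trend: extrapolate the last step forward one interval."""
    if len(load_history) < 2:
        return load_history[-1]
    return load_history[-1] + (load_history[-1] - load_history[-2])

def nodes_needed(load_history, total_nodes):
    """How many nodes should be powered up for the next interval."""
    predicted = forecast(load_history)
    wanted = int(predicted // NODE_CAPACITY) + 1 + HEADROOM
    return max(MIN_NODES, min(total_nodes, wanted))

# Rising load: 220 -> 260 req/s suggests ~300 next interval, so a 4th and
# 5th node are brought online before the spike actually arrives.
print(nodes_needed([180.0, 220.0, 260.0], total_nodes=10))  # -> 5
```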

Our longer-term vision is to combine the two so that we can fully virtualise customers’ server clusters and dynamically allocate them to machines in our server pool that are not necessarily dedicated to them. That is when you truly get the big cost and energy savings. Imagine us hosting a big online game in the same data centre as a back-office function of a large corporate: during the daytime the back-office function might need 50 servers and the game only 10, but during the night the game might need 50 and the back office only 10. With traditional provisioning you would have at least 100 machines on and running all the time, but with our system you need 60 or fewer. In reality the traditional picture is even worse, since no sane CIO would run his application without some headroom to cope with load spikes, but with utility computing you get that headroom for free: the spikes become mere ripples on top of the pooled baseline, saving you even more cost and carbon. I estimate that if all our UK data centre operations were running in a true utility computing environment we would be able to reduce our power and hardware requirements by a factor of ten.
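The day/night arithmetic is easy to verify: provisioning separately you must buy the sum of the peaks, while sharing a pool you only need the peak of the sum. A toy calculation, with hourly figures invented to match the example above:

```python
# Servers needed per hour over 24 hours (illustrative figures only).
back_office = [50] * 12 + [10] * 12   # busy by day, quiet by night
online_game = [10] * 12 + [50] * 12   # quiet by day, busy by night

separate = max(back_office) + max(online_game)                 # sum of peaks
shared = max(b + g for b, g in zip(back_office, online_game))  # peak of the sum

print(f"separate pools: {separate} servers")  # -> 100
print(f"shared pool:    {shared} servers")    # -> 60
```

The more diverse the workloads you mix, the flatter the combined curve becomes and the bigger the saving.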

There is a catch though: to get the really big savings (in terms of money, energy and hardware) you need to consolidate large numbers of diverse applications (with different load characteristics and different usage patterns) into a small number of big data centres, or at least a small number of big utility computing pools. The problem is that most CIOs are still unsure about the security of virtualisation (for no good reason, I might add), let alone about allowing their applications to “roam” freely across pools of servers, being allocated CPU & disk resources that might moments ago have been used by one of their competitors.

As with most green initiatives, to get the real benefits of utility computing we need to change the way we think and operate at an organisational level – rolling out some shiny new technologies by itself is not enough. In this case we need to lose our outdated attachment to tin and the idea that “this application runs on those boxes there”. Instead we should view CPU time and storage space as facilities to be rented as and when needed, in much the same way as we do with bandwidth. After all, the routers feeding your ‘net connection might have been carrying somebody else’s traffic moments before, but we don’t care – why should we care with servers?
