Running a server in the US costs between $10,000 and $20,000 per year, depending on the size of the company, how efficient its IT operations are, and the level of redundancy required. This cost includes everything from air conditioning and networking to power and rack space, but it is dominated by labor: systems administrators, test engineers, developers, and so on.
Based on these figures, a small application running on six servers appears at first glance to have an annual operating cost between $60,000 and $120,000. Except, of course, that the servers in production are only part of the hardware dedicated to the application. On average, the total number of servers consumed by an application is 2.5 times the number in production. The additional servers are typically underutilized, but they are set aside for staging, testing, development, support and education. So our six-server application really has a total of fifteen servers dedicated to it, and the real annual cost is between $150,000 and $300,000.
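To make the arithmetic concrete, the short sketch below simply restates the figures above in code form; the per-server cost range and the 2.5x multiplier are the estimates from this article, not measured data.

```python
# Rough cost model using the article's estimates (not measured data).
COST_PER_SERVER = (10_000, 20_000)   # annual cost range per server, USD
NONPROD_MULTIPLIER = 2.5             # total servers per production server

def annual_cost_range(production_servers: int) -> tuple[int, int]:
    """Return the (low, high) annual cost for an application."""
    total_servers = int(production_servers * NONPROD_MULTIPLIER)
    return (total_servers * COST_PER_SERVER[0],
            total_servers * COST_PER_SERVER[1])

print(annual_cost_range(6))  # -> (150000, 300000)
```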
To compound things, the decision to use six servers may have been made before the application was developed, when there was no real information about how much usage it would actually see. Our six-server example may fit easily on two servers, or it may be struggling and actually need twelve. Making that change, unfortunately, is another expense.
Utility computing offers three key values that address cost directly. First, the infrastructure for the application can be defined easily online, one time, and then reused again and again, greatly reducing the administration time spent today provisioning and configuring servers, switches and volumes. Second, no spare resources need to be retained for staging, testing, support or education, because they can be deployed at will and used only while actually needed. Third, applications can easily be designed to take advantage of the utility system's ability to scale the resources they run on, so there is no need to over-provision.
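The first two points amount to describing an application's infrastructure once and instantiating environments only while they are needed. The sketch below illustrates the idea with a hypothetical deploy() call; it does not represent any particular utility-computing product's API.

```python
# A minimal sketch of "define once, reuse on demand", assuming a
# hypothetical utility platform exposed through deploy().
from dataclasses import dataclass

@dataclass
class AppInfrastructure:
    web_servers: int
    app_servers: int
    db_servers: int

# The application's infrastructure is described a single time...
six_server_app = AppInfrastructure(web_servers=2, app_servers=3, db_servers=1)

def deploy(template: AppInfrastructure, environment: str) -> None:
    # Hypothetical call: a utility platform would provision these
    # resources and bill for them only while the environment exists.
    total = template.web_servers + template.app_servers + template.db_servers
    print(f"Provisioning {total} servers for {environment}")

# ...and reused whenever an environment is needed, then released.
deploy(six_server_app, "production")
deploy(six_server_app, "staging")   # exists only while testing is under way
```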