What is Utility Computing?

Filed under: Random Thoughts — barmijo — December 1, 2005 @ 12:00 am

Utility computing is a business model that allows the operators of applications to run them without owning or operating computing hardware.

Like the early days of electricity, when companies had to have their own generators, the first ten years of the internet era have required businesses to own and operate servers. Of course, most companies aren't in the IT business, so running a server is no more an integral part of their business than running a generator for their electricity, but there wasn't an alternative. Then, in the midst of the dotcom bubble, several hardware vendors began talking about a new way to purchase resources – utility computing.

In concept, utility computing is simple: rather than purchase and operate servers themselves, businesses subscribe to a computing utility. Just as you use electricity without owning the equipment that generates it, subscribers would use computing resources (CPU, memory, storage and connectivity) without owning hardware. They would build applications precisely as they do today, but be able to run them without concern for how the resources are provided and simply pay for what they've used.

I'll leave it to you to start threads debating whether any of the vendors who helped coin the term utility computing have actually achieved the vision. Whether they have or not, though, doesn't change the fact that the vision is compelling.

Enough marketing hyperbole. For the purposes of this discussion, utility computing is any system with the following characteristics:

  • access to the system is ubiquitous
  • subscribers can begin using the system without expensive hardware or software
  • subscribers can create and run applications of arbitrary size at any time
  • subscribers can use the utility to run any kind of applications they want
  • subscribers can change the amount of resources used by a running application
  • resources are paid for only as they’re used
  • subscribers can terminate service without exorbitant cancellation fees

The rationale for each requirement follows naturally from our experience with traditional utilities like electricity, telephones, water and gas. For instance, requiring a provider's staff to place an application on the system, or to start and stop it, would be much like needing an operator to place a phone call. Yes, it used to be that way, and phone service was expensive because of it.

Some of the implications of these requirements may not be immediately apparent. For instance, in order to allow users to start applications and change the amount of resources they're allowed to use, the basic unit of resource must be generic. A server, a CPU, a MB of storage are all generic enough to be aggregated. Trying to specify CPU model, cache size, front side bus speed, or disk drive interface, on the other hand, creates impediments to service. This isn't a step taken lightly, because most users have spent a great deal of time weighing exactly those options when configuring servers in the past, but consider the implications of carrying that model forward. The number of possible permutations grows geometrically with the number of specifiable parameters, so with just a handful of choices a utility service would have to stock dozens of different configurations, and the probability that the particular resource you need isn't available when your process needs it grows right along with it. Specifying only generic resources allows those resources to be offered efficiently and cheaply, and you can simply purchase more of them at the lower cost.
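
To make the combinatorics concrete, here's a quick back-of-the-envelope sketch in Python. The option counts are invented purely for illustration; the point is only how fast the number of configurations a provider must stock multiplies as subscribers are allowed to pin down more parameters.

    # Illustrative only: the option counts below are made up for this example.
    options = {
        "cpu_model": 4,
        "cache_size": 3,
        "front_side_bus_speed": 3,
        "disk_drive_interface": 2,
    }

    configurations = 1
    for parameter, choices in options.items():
        configurations *= choices
        print(f"let subscribers specify {parameter}: {configurations} configurations to stock")

    # With only generic units (a server, a CPU, a MB of storage) the count stays at one,
    # so the pool can be large, cheap, and almost always available when you ask for more.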

Why do we need Utility Computing?

Filed under: Random Thoughts — barmijo — @ 12:00 am

As an industry, we have moved IT from a back office function supporting accounting and manufacturing at the beginning of the last decade to a C level post today. Even sales and marketing don't have C level posts. However, that rapid growth has created a nightmare of complexity within IT. Simply put, IT is saturated, and as a result applications take too long to get into production for businesses to respond to dynamic market conditions.

Three trends drive this complexity, and they all began during the late 90s. The first, of course, is the internet, which transformed the nature of applications. Instead of hundreds of trained users in the enterprise, we now expect tens of thousands of end users with no familiarity with our systems. Second is the shift from vertically integrated computing platforms (think Sun and IBM) to commodity servers. Large system vendors spent billions ensuring that all their components worked together, and we all leveraged that work. Commodity servers shift the integration effort to IT, and it must be repeated for every application. Last, surprisingly to some, is the move to open source software. While open source provides a great deal of flexibility, and of course costs nothing to license, it invariably requires greater effort to place into service than commercial equivalents.

Utility computing offers a solution to this problem. Infrastructure and software integration work can be packaged and reused in a fashion impossible today. And while the labor savings alone will be staggering, the real difference will come from improving the time to market of applications. Changes to applications can be made and tried in a day instead of weeks or months. IT can become nimble and respond to business conditions in near real-time.

How does Utility Computing reduce time to market?

Filed under: Random Thoughts — barmijo — @ 12:00 am

The most obvious impediment to deployment addressed by utility computing is the need to provision and configure hardware again and again. Depending on the development methodology used, the size of the application, and the level of redundancy desired, a single application may be integrated with hardware as many as six times. Every integration cycle inevitably introduces errors that must be found and corrected. Utility computing eliminates this continuous rebuilding of infrastructure: the needed infrastructure is defined along with the application in a portable format, and the utility system creates that infrastructure dynamically every time the application is started.
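
To make that idea tangible, here's a purely hypothetical sketch, in Python, of what a portable infrastructure definition might look like. None of the field names come from any particular product; a real utility system would define its own format.

    # Hypothetical sketch of a portable infrastructure definition.
    # Field names and values are invented for illustration.
    app_definition = {
        "name": "order-entry",
        "tiers": [
            {"role": "load_balancer", "instances": 1, "cpus": 1, "ram_mb": 512},
            {"role": "web_server",    "instances": 2, "cpus": 1, "ram_mb": 1024},
            {"role": "app_server",    "instances": 2, "cpus": 2, "ram_mb": 2048},
            {"role": "database",      "instances": 1, "cpus": 2, "ram_mb": 4096, "storage_gb": 100},
        ],
        "connections": [
            ("load_balancer", "web_server"),
            ("web_server", "app_server"),
            ("app_server", "database"),
        ],
    }

    def start(definition):
        # A utility system would read the same definition every time the
        # application is started and build the infrastructure on the fly.
        for tier in definition["tiers"]:
            print(f"provisioning {tier['instances']} x {tier['role']}")
        for src, dst in definition["connections"]:
            print(f"connecting {src} -> {dst}")

    start(app_definition)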

However, that's only the tip of the iceberg. Utility computing's more significant impact on delivery will come from subtler changes in workflow, enabled by the fact that application infrastructure is portable and can be instantiated multiple times at minimal cost. Developers can run actual copies of the full application to unit test their code; eliminating simulation environments improves developer efficiency and increases the number of bugs found and fixed by the developers themselves. Test engineers can use the application infrastructure even before code is complete in order to build test suites. And, during the test cycle, multiple copies of the application can be run to shorten the cycle. Scalability and reliability testing become practical too, because spinning up the extra resources is cheap and simple.

Using utility computing enables developers and test engineers to focus on their core skills rather than worrying about hardware. The result is a more productive, streamlined process for taking applications to production.

How does Utility Computing reduce cost?

Filed under: Random Thoughts — barmijo — @ 12:00 am

Running a server in the US costs between $10,000 and $20,000 per year, depending on the size of the company, how efficient it is at IT operations, and the level of redundancy required. This cost includes everything from air conditioning and networking to power and rack space, but is dominated by labor for systems administrators, test engineers, developers, and so on.

Based on these figures, a small application running on six servers at first glance has an annual operating cost between $60,000 and $120,000. Except, of course, that the servers in production are only part of the hardware dedicated to the application. On average, the total number of servers consumed by an application is 2.5 times the number in production. The additional servers are typically underutilized, but are set aside for staging, testing, development, support and education. So our six-server application really has a total of fifteen servers dedicated to it, and the real annual cost is between $150,000 and $300,000.
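
The arithmetic is easy to check; here it is as a small Python snippet using the same figures.

    # The figures above, reproduced as a quick calculation.
    cost_per_server_low, cost_per_server_high = 10_000, 20_000   # USD per year
    production_servers = 6
    total_multiplier = 2.5            # staging, test, development, support, education

    total_servers = int(production_servers * total_multiplier)   # 15 servers
    annual_low = total_servers * cost_per_server_low              # $150,000
    annual_high = total_servers * cost_per_server_high            # $300,000

    print(f"{total_servers} servers -> ${annual_low:,} to ${annual_high:,} per year")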

To compound things, the decision to use six servers may have been made before the application was developed, when there was no real information about how much usage it would actually see. Our six-server example may easily fit on two servers, or may be struggling and in fact need twelve. Making that change, unfortunately, is another expense.

Utility computing offers three key values that address cost directly. First, the infrastructure for the application can be defined easily online, one time, and then reused again and again, greatly reducing the administration time spent today provisioning and configuring servers, switches and volumes. Second, no spare resources need to be retained for staging, testing, support or education, because they can be deployed at will and used only while actually needed. Third, applications can easily be designed to take advantage of the utility system's ability to scale the resources they run on, so there's no need to over-provision.

Is there a difference between Grid and Utility Computing?

Filed under: Random Thoughts — barmijo — @ 12:00 am

Grid computing describes the sharing of computing resources by organizations and is most common with high performance computing applications that require a great deal of CPU power for a short time. The grid itself is the collection of servers and software that together enable participants to access each other’s systems in a secure fashion.

Grid computing differs significantly from utility computing in key respects. First, grids are not open to the general public; you have to sign up and add your own servers to the grid. Further, grid systems require applications to be built specifically for the grid, usually using a set of APIs specific to the grid being used, and as a result the application is no more scalable or portable than if you owned the servers yourself. Lastly, grid systems provide no support for building the infrastructure necessary for web applications and are not designed to host applications 24/7.

High performance computing applications like weather models, integrated circuit simulations and derivative valuations have traits that make them ideal for operation on grids. They have a distinct end to their run, a result, and once the result has been produced the application is terminated until the next time it's needed. Between runs, the computing resources are idle. More importantly, because these applications are computationally intensive, they can be broken down into numerous, almost identical processes that require minimal communication, meaning the more computing resources used, the quicker the result will be generated. And, ironically, the longer the resources will sit idle.

Assume for a moment that you're the head of a crash test simulation project. You've got large data sets from each run that need to be interpreted by engineers before you can run the next simulation. Your budget allows for 50 servers, with which each run takes five days. A grid, however, might allow you to use 500 servers and complete the run in less than a day, giving you more time to plan for the next one. And faster iteration leads to better design. The only cost to you is that you have to let other users of the grid "borrow" your computing resources when they're idle. Therefore, operators of high performance computing applications find grids exceptionally useful.
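
The speedup claim rests on the near-perfect parallelism described above; here's the back-of-the-envelope arithmetic as a tiny Python snippet, assuming the work scales linearly with server count.

    # Illustrative only: assumes the simulation parallelizes almost perfectly,
    # as the embarrassingly parallel workloads described above tend to.
    baseline_servers, baseline_days = 50, 5
    grid_servers = 500

    total_server_days = baseline_servers * baseline_days   # 250 server-days of work
    grid_days = total_server_days / grid_servers            # 0.5 days, i.e. under a day

    print(f"{grid_servers} servers -> roughly {grid_days} days per run")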

Web applications operate differently because they’re transactional in nature. They’re broken down by function into components that are dissimilar and which need to communicate frequently. Applying more resources doesn’t produce results faster but rather allows for processing more transactions. Therefore, operators don’t have “idle” resources to add to the grid. And, because of the frequent communications between components of the application, an additional set of services is required to allow connections to be established and maintained.

Utility computing goes beyond grids and addresses the additional needs of web applications that challenge IT today.

Is there a Network Effect to Utility Computing?

Filed under: Random Thoughts — barmijo — @ 12:00 am

Network effects refer to aspects of a system that give users greater value as the number of users goes up. Phones are a classic example: the more people who own phones, the more people I, as a subscriber, can call. Utility computing has several potential network effects, although some of them depend on the implementation choices made by the owner of the system. The most interesting will be related to the sharing of applications and knowledge.

For instance, although it isn’t explicitly spelled out, for a system to meet our definition of utility computing, the applications that run on it must be portable. This is a complete departure from today’s methodologies where software is more or less glued to hardware. The implication, though, is that for the first time I can easily share entire multi-tier applications as a single entity. No longer is it necessary to deal with dozens of tar files, building servers and a network to bring up a second copy of an application; just copy and run.

Assume for a moment that you're the CIO of a mid-size manufacturer and you've contracted me as an outsourcer to write a semi-custom scheduling application for your firm. You may even have chosen me because you know I've done similar work for other firms. If I'm using traditional methods of development and deployment, we'll negotiate terms, requirements and costs for weeks; I'll write code for sample screens; and perhaps six to eight weeks after the first contact you'll see the first samples of what the application will look like. Utility computing, however, makes it possible for me to share existing applications with you. All I have to do is send you pointers, and you can start copies of applications I've completed for previous clients, even though they span several servers. You can try them in your office, with your staff, at your own pace. Then, during our first meeting, you can let me know what you like and I can take notes on needed changes.

Another possible network effect will be the sharing of operational data on infrastructure software like web servers, database engines, or application servers. Imagine if every copy of software you integrated into your application came equipped with live counters showing how it's operating in every application in which it's being used. The number of copies being run, how many different applications use it, total hours logged, the number of failures, even the number of reported hacks could all be reported, perhaps along with a rating system from past users. Such data would be extremely useful but simply isn't available today. With utility computing, however, all software is run by the system, so such metrics become easy to gather.
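
Purely as a hypothetical illustration, here's what such a per-component record might look like in Python; every field name and number is invented.

    # Hypothetical sketch of the per-component operational record described above.
    from dataclasses import dataclass, field

    @dataclass
    class ComponentStats:
        name: str
        copies_running: int = 0
        distinct_applications: int = 0
        hours_logged: float = 0.0
        failures_reported: int = 0
        hacks_reported: int = 0
        user_ratings: list = field(default_factory=list)

        def average_rating(self):
            return sum(self.user_ratings) / len(self.user_ratings) if self.user_ratings else None

    stats = ComponentStats(
        name="apache-2.0",
        copies_running=1240,
        distinct_applications=310,
        hours_logged=2_750_000,
        failures_reported=18,
        user_ratings=[4, 5, 4],
    )
    print(stats.name, stats.average_rating())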

In the end, utility computing isn't simply about saving money or time – it's about a smarter way of doing things.

Why has true Utility Computing taken so long to achieve?

Filed under: Random Thoughts — barmijo — @ 12:00 am

I've spoken with many people who, when I tell them I'm working on utility computing, scoff that true utility computing is either a pipe dream or a decade away. Unfortunately, these folks aren't alone. InfoWorld declared utility computing "a dream deferred" and you can find many other articles expressing dissatisfaction with vendor offerings so far.

Trust me, I understand and share the frustration. After all, the first time I heard the term utility computing was at an Infiniband Trade Association conference more than five years ago, yet in my humble opinion I still haven't seen a product or service that delivers on the promise. Most utility computing offerings are still little more than a wrapper of services around hosting and outsourcing, with a contract designed to lock in the customer. Hardly a utility. So, what's caused the delay? I believe that until recently we were missing a basic capability necessary to make utility computing a reality.


A fundamental missing building block

Just over five years ago I had the good fortune to help co-found Topspin Communications with a great bunch of compatriots. After the dust settled on the dotcom implosion, we set out to build a switching system that would be the core of a new breed of data centers. Our system would provide a single fabric combining traditional network connections with storage and interprocess connections. On top of the fabric ran a connection system called V-Frame, which dynamically mapped the policies and logical connections defined by the operator onto physical connections. If successful, we hoped to make hardware resources interchangeable through software. We didn't envision it as utility computing, but more simply as a way to greatly simplify the networking of data centers.

While defining Topspin's architecture, we drew thousands of pictures to describe how customers would use the system. A great many of those pictures, though, had to be scrapped because we didn't know how to build the software to implement them. Such situations are common for startups, or at least for mine, but this time there was a definite pattern: the pictures in question all involved migrating live connections. For instance, if a server running Apache has a fan failure, we can assume that within a short time the server will fail, thanks to the very efficient heaters produced by Intel. With our flexible network fabric we hoped we could migrate the connections from that server to another instance of Apache. It's a simple matter to stop an application, move the connection and restart all the software associated with it. However, we wanted to migrate connections live, and despite being able to move a connection through our fabric, the software installed on the end systems wasn't capable of dealing with dynamic migration.

Hence, the root of our problem was that PC server software stacks, whether Linux or Windows, were built on the fundamental assumption that they had physical hardware beneath them and that they had complete control of that hardware. As a result, without a new layer in the operating software stack, connections and software remained bound to hardware. Of course, not all software systems have this limitation; mainframe software has had a built-in assumption of virtualization for many years. Looking back now, I can see that I was afflicted with a disease common to networking professionals: we avoid touching the software on the end systems. Because of this, we're always trying to infer what a packet or connection is and what to do with it based solely upon what we can snoop off the wire. I helped build many systems that successfully did this, such as load balancers and firewalls, but ultimately it's a limited approach. And one getting more limited all the time.

Enter VMware

Virtual machine technology has convinced me that snooping the wire is a dead end. After all, when each wire can have dozens of virtual servers at the end of it, the real network isn't on the wire any longer. Once a packet hits the wire it's old news. VMware was in its infancy when we started Topspin, so we didn't have access to virtual machine technology. If we had, all our scrapped pictures might have made more sense.

In fact, that's precisely what happened in late 2004 when I met with the founders of 3TERA (link). They were investigating building a shared memory system to allow scaling beyond one physical server, what's commonly known as a single system image. Xen 2.0 had just been released and they were testing it, so it wasn't long before our discussions turned to what virtual machines could enable. In fairly short order we were drawing pictures I recognized – they were the same pictures I'd drawn almost half a decade earlier.

Virtual machines as implemented by VMware and Xen provide an excellent abstraction layer between the operating system and the hardware. All the hardware. In fact, the abstraction is so good that a virtual machine can be suspended and restarted, or migrated to different physical resources, with full state integrity. If you're paying close attention you'll note the foregoing is only true if the network connections remain valid, but as I've noted, I'm already comfortable migrating connections live. Therefore, virtual machines were the last building block missing before someone could truly begin building a commodity utility computing service.
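
To illustrate the lifecycle I mean, here's a conceptual sketch in Python; it is not the VMware or Xen API, just the shape of suspend, move, resume.

    # Conceptual sketch only; not a real hypervisor interface.
    # The point is the lifecycle: suspend with full state, move the state, resume elsewhere.
    class VirtualMachine:
        def __init__(self, name):
            self.name = name

        def suspend(self):
            # A real hypervisor would freeze CPU, memory and device state here.
            print(f"{self.name}: suspended, state captured")
            return f"saved state of {self.name}"

    class PhysicalHost:
        def __init__(self, hostname):
            self.hostname = hostname

        def resume(self, vm, saved_state):
            # As long as the network connections remain valid, the guest never notices.
            print(f"{vm.name}: resumed on {self.hostname} from '{saved_state}'")

    vm = VirtualMachine("web-1")
    state = vm.suspend()
    PhysicalHost("host-b").resume(vm, state)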

Of course, virtual machines by themselves don’t enable utility computing, despite what some vendors would have you believe. As a developer you can’t arbitrarily drop a set of virtual machines on a grid and have an application up and running.

Moving forward again

The next step is a system that understands the definition of the infrastructure your application needs and can create that infrastructure dynamically on a grid before starting virtual machines. Such systems are coming sooner than the pundits would have you believe, and utility computing will finally fulfill its promise.
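
Here's a hypothetical sketch in Python of what that missing piece boils down to; nothing in it is any vendor's API and all names are invented: read a portable definition, reserve generic nodes from the grid, wire the connections, and only then boot the virtual machines.

    # Hypothetical orchestration sketch; all names are invented for illustration.
    def deploy(definition, grid_nodes):
        placements = []
        for tier in definition["tiers"]:
            for i in range(tier["instances"]):
                host = grid_nodes.pop()                    # any generic node will do
                placements.append((tier["role"], i, host))
                print(f"reserving {host} for {tier['role']}[{i}]")

        for src, dst in definition["connections"]:
            print(f"creating virtual network {src} -> {dst}")

        for role, i, host in placements:
            print(f"booting virtual machine {role}[{i}] on {host}")

    definition = {
        "tiers": [{"role": "web", "instances": 2}, {"role": "db", "instances": 1}],
        "connections": [("web", "db")],
    }
    deploy(definition, [f"node{n:02d}" for n in range(12, 0, -1)])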

The sources of innovation

Filed under: Random Thoughts — barmijo — @ 12:00 am

You can't lead by listening to your customers! Yes, I do in fact mean exactly that, and as you might expect, I've taken a lot of flak for that position over the years. I know it flies in the face of everything taught in B-school, but nevertheless it's true, and I'll explain a bit about why I believe it.

What spurred me to write about innovation is a recent interview in CIO magazine with Eric von Hippel, head of the Innovation and Entrepreneurship Group at MIT. Mr. von Hippel states in the interview that most companies lack the ability to innovate because they are stuck in a mode of "find a need and fill it" that leads to a fruitless dependency on market research. Instead, he argues, companies should identify "lead" users whose dissatisfaction with current products leads them to modify those products to meet their needs. In his view, identifying these users, empowering them to experiment, and then importing their innovations will enable manufacturers to lead the market. Mr. von Hippel's contention that companies are stuck in a market research rut of listening to their customers struck a chord with me, but I think he has missed a couple of crucial points.

Why does listening fail to lead to innovation?

To understand why listening to your customers fails to generate innovation, consider what your customers are going to tell you. First, I don't care how loyal you think a customer is: he's telling the world what he needs, because having competing vendors benefits him. Therefore, you'll learn exactly what your competitors already know and nothing more. Second, and this is the most important part, customers lack the ability to tell you everything they need. Unless you're selling to IBM or Verizon, your customers simply don't know what's possible based on changes in the underlying technologies. They therefore apply their own understanding of what's possible to your offering and provide feedback accordingly. Modifications based on such input will certainly help you evolve your product, but will seldom lead to true innovation.

Watch, watch, and watch some more!

Instead of simply listening to your customers, I firmly believe you have to carefully watch them. Watch them interact with your products. Watch them interact with your competitors' products. Watch them interact with each other. It's not what customers say, especially to vendors, but what they do that will lead you to innovation. This is especially true when there's a disparity between what customers say and what they do. I've had the chance to work with quite a few professional market researchers and they're never surprised by this. Executives, however, frequently are. I can't count the number of times I've been told "I don't care what the research shows, customer Y told me this morning all we have to do is . . ." At one company this was so prevalent that we gave it a name: "Executive Briefing Center Disease," or EBC for short. This inability to properly weigh discussion against observation is what's really at the root of the problem Mr. von Hippel addresses in the interview. I also happen to believe it's a large part of the reason true innovation often comes from outside the core of an industry, or from small companies started by disenfranchised innovators from larger companies.

One more thing. When you take a break from watching your customers, visit your vendors and watch how their processes work. As a marketing geek, I constantly amazed vendors by taking the trouble to visit them, and as a result I was often treated like royalty. More importantly, though, learning about the underlying technology shifts they were experiencing helped me understand what was possible for my customers.

A brief example

Let me give you an example I experienced first hand. While I was at Bay Networks in 1996, customers were constantly telling us they wanted ATM switches. Of course, for more than a year, vendors had been feeding them data about how fast ATM would be and how Ethernet had reached the end of its life. Not that we're geniuses, but a few of us did a little digging and discovered our customers had no understanding of ATM at all. They were merely parroting what we and all the other large vendors had told them. Meanwhile, our customers didn't know Ethernet could be made to operate at 1Gbps, so they didn't ask for it. It turns out customers wanted speed, not ATM or any of its many features. Executives, though, were convinced the customer was right and plowed millions upon millions into ATM development. The result? ATM died a horrible, ugly, painful death in the enterprise. None of the major vendors invested in building Gigabit Ethernet products. And, as you might expect, some of us left to build Gigabit Ethernet switches.

Evolution vs. Revolution

So far, however, we've only addressed one form of innovation. In his interview, Mr. von Hippel cites backpacks with power strips in them, surgical implements modified by doctors, and even new mud flaps on mountain bikes as innovation. I disagree. I see these as evolution of existing products. My chosen examples of innovation would be Apple's first PC, Xerox's first GUI, VisiCalc's first spreadsheet, the first web browser, Yahoo's first index, and Google's page rank system. None of these could have come from any amount of market research. Users simply had no way to know these things were possible before they were built. No amount of time spent trying to explain your idea to users will be enough, because they simply have no frame of reference to comment on it. In fact, often even the first prototypes aren't enough to convince users. Remember, it took Chester Carlson years to find a manufacturer for his xerography process. These innovations had to be built on faith, given to users, and then iterated quickly. Unlike user-led product evolutions, these types of innovation are engineering-led revolutions. Mr. von Hippel makes no mention of engineering-led innovation in the article, and I think that's unfortunate, because the rest of his points are quite valid.

Back to utility computing

So, how does all of this relate to utility computing? Simple. I believe utility computing is an engineering-led revolutionary change. Having shown our system to almost 100 potential customers, I've found only a small subset who can internalize it enough to venture predictions about how it will affect their business. The rest are just anxious to try it. As a result, I'm constantly reminding our engineers, and at times our investors, much to their dismay, that we absolutely cannot predict all the ways customers will use the new tools we're providing them. To be sure, we're always thinking about how the system will be used, and these use cases help guide us. However, I'll go on record now that at least 80% of our assumptions will prove to be wrong.

That brings us full circle to Mr. von Hippel, because there's one idea he puts forth that's of critical importance: we as vendors need to view our systems as starting points for our users rather than as complete packages. Rather than dictate to them, we need to enable them to use our system in ways we never anticipated, and we need to learn from and internalize that usage.

Will open source thrive in a utility computing market?

Filed under: Random Thoughts — barmijo — @ 12:00 am

Given the title, anyone who knows me is expecting me to predict the demise of open source. I have to admit that’s understandable, because they’ve all heard me rant about the failure of the community to prove the open source model is financially feasible in the long run. Well, they’re in for a surprise.

I’m a fan of open source, having used Linux as an embedded OS several times over the past eight years, from load balancers to switches. When selecting the OS for each of those systems, cost was only a small part of the decision. Linux simply proved to be a better solution each time than the old WindRiver OS we’d used in the early nineties. Windows was never really an option for all the obvious reasons. However, that hasn’t been enough to convince me of open source’s long term viability. There’s a difference, after all, between being a fan and being a believer.

What I've been waiting to see is open source taking the lead in breaking new technological ground. Of course there are thousands of open source projects, hundreds of startups, and even a few IPOs, but that's not the point. The vast majority of what I read about is based on the commoditization of existing technology rather than new innovation. Even Linux exists only because Linus wanted a free alternative to Minix. So, as venture capital poured into open source in the nineties, I remained unconvinced that much was being invested in fundamental innovation. Instead, the vast majority of open source startups are service plays.

When I tell people this, I invariably hear "that's where traditional software companies make their money anyway, so open source just bypasses the early stages when licenses generate significant revenue." Perhaps it's just me, but that seems to be copying what's wrong with proprietary software rather than what's right. As a professional marketer, I say this because, in my experience, firms that gain the majority of their revenue from services tend to let the service organization define the boundaries product development must work within. When that happens, innovation inevitably suffers. As a customer, I say this because I want vendors to feel a need to continuously sell me on using their products rather than thinking of me as an inexhaustible supply of service revenue to be extracted in ever more imaginative and painful ways.

Having said all this, I find it interesting to ponder the potential interactions of open source with utility computing. At first glance I see two effects. First, as the initial utility computing systems are built, open source is likely to provide the base functionality of both the systems and the deployed applications. With no license fees to negotiate for a virtual deployment and a user base interested in pushing the envelope, this seems natural. More interesting, though, is the impact utility computing will have on open source. I've come to the conclusion that utility computing may be just what open source needs. Why?

Of course, it's only my opinion and I'm likely to take some heat for saying it, but I feel most open source software is beyond the capability of typical IT professionals to use. In fact, I've become convinced that's part of the appeal. Can't compile a Linux kernel? You have no business in the community. OK, so I'm exaggerating a bit, but there's little doubt that complexities like compiling kernels limit the scope of open source use. Numerous consultants, as well as companies that package open source software like RedHat and SpikeSource, are built on monetizing that very niche.

Utility computing, however, can eliminate this barrier altogether. To understand why, consider how VMware and Xen are being used in many data centers. Initially popularized for server consolidation, virtualization is now frequently used to create server templates. A server running the appropriate hypervisor can boot any compatible virtual machine image, which eliminates the typical server build cycle. Instead, a boot volume that can be used as a master template is created. When another server of that type is needed, a copy of the boot image is made, the configuration files are updated, and the virtual machine is ready to boot. The whole process takes minutes instead of days. I think it's reasonable to assume that any utility computing system will have to provide similar capabilities, and that a selection of prebuilt images will be part of any service offering. For users of utility computing, therefore, compiling kernels will be a thing of the past, and open source software will become far more accessible than ever before.
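
A minimal sketch of that template workflow in Python, with placeholder paths and settings rather than a real hypervisor interface:

    # Minimal sketch of the clone-from-template workflow; paths and settings are
    # placeholders, not a real hypervisor interface.
    import shutil
    import tempfile
    from pathlib import Path

    base = Path(tempfile.mkdtemp())                  # keep the sketch self-contained
    template = base / "lamp-base.img"                # stands in for the master boot volume
    template.write_bytes(b"...master boot volume...")

    def clone_server(new_name, ip_address):
        image = base / f"{new_name}.img"
        shutil.copyfile(template, image)             # copy the master boot image

        config = base / f"{new_name}.conf"           # per-instance settings (illustrative)
        config.write_text(f"hostname={new_name}\nip={ip_address}\n")

        print(f"{new_name}: image and config ready; hand both to the hypervisor to boot")

    clone_server("web-03", "10.0.0.13")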

Innovators bringing out new services will be drawn to utility computing because it saves on capital, reduces cycle time and eliminates the bother of building and maintaining a data center. Of course, if they haven't already, they'll pick up open source as an offshoot of using utility computing. (For those of you who assume these innovators are already open source users, guess again. Most I've spoken with have no clue how to build a Linux server.) As we all do when building a service, they'll find pieces of functionality unmet and they'll write code to fulfill that need. Code that may easily find its way back into the open source community.

So for the first time I find myself not only being a fan of open source, but (just maybe) a believer.
