I’ve blogged about standards for utility computing a few times before, and I continue to believe it’s too early for a standard.
Standards are a double-edged sword – a trade-off that gains interoperability at the cost of stifling innovation. Once enough experimentation in an area of technology has been conducted that competitors can negotiate agreement on market requirements, a standard can be drafted that delivers interoperability and allows innovation at the next layer. IMHO it’s way too early to make that trade-off. We haven’t adequately explored the possibilities in utility computing.
For instance, OVF is intended to enable the interchange of applications between services. To my knowledge, no vendor other than 3tera has ever demonstrated this ability. Because our customers do it frequently, we can document precisely what an interchange format requires and what services are needed on both ends. OVF, unfortunately, is simply insufficient.
Even if 3tera published our interchange specification tomorrow and all parties signed on to it, that wouldn’t be the end game, because transferring an application between data centers isn’t the end requirement. It’s merely one important step. 3tera’s roadmap includes several major leaps in capabilities that will require significant extension of our current interchange format. And, as I mentioned, that’s where the negative aspect of standards comes into play. To adequately explain the need for those extensions in a standards body, 3tera would be required to cite use cases where they could be needed – in essence, to divulge our product roadmap to our competitors.
That said, if the authors of the OVF draft want input and are willing to embrace capabilities they can’t yet provide, capabilities that will set the stage for years of innovation, then we’ll be happy to participate.