Ship when perfect enough…

Filed under: Random Thoughts, Startups — peternic — June 22, 2011 @ 4:55 pm

As we’re closing in on the 3.0 beta, I found this blog post about the white iPhone.

When do you think is the right time to ship a product? Are the criteria different for consumer products and cloud products (and how)?

How to DevOps – the Flickr experience

Filed under: Cloud Computing, Random Thoughts, Startups — peternic — May 24, 2011 @ 2:24 am

Staying late… browsing and learning… found a gem.

Since one of AppLogic’s traditional audiences is DevOps, we’re frequently asked “what is DevOps?”. While the following doesn’t really provide a definition, it tells us how to do it (and for a definition, there’s always Wikipedia).

In summary, here’s how Flickr does it:

  1. Automated Infrastructure
  2. Shared version control
  3. One step build – code to set of files in one step
  4. One step deploy (see the sketch after this list)
  5. Shared metrics
  6. Use IRC and IM Robots
  7. Culture
    • Shared Run-books and Escalation plans
    • Healthy attitude about failure – plan to respond, not just prevent. Fire drills.
    • No finger-pointing and blame
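
To make items 3 and 4 concrete, here’s a minimal sketch of what “one step” can mean in practice. Nothing here is Flickr-specific; the repository URL, build command and host names are made up for illustration:

    # One-step deploy sketch (Python): version control -> build -> push.
    # Repo URL, hosts and build command below are hypothetical.
    import subprocess

    REPO = "git@example.com:ourapp.git"                 # hypothetical repo
    HOSTS = ["web1.example.com", "web2.example.com"]    # hypothetical servers

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)   # fail loudly; no silent half-deploys

    def deploy():
        run(["git", "clone", "--depth", "1", REPO, "build"])   # shared VCS (2)
        run(["make", "-C", "build"])                           # one-step build (3)
        for host in HOSTS:                                     # one-step deploy (4)
            run(["rsync", "-az", "--delete", "build/out/",
                 host + ":/var/www/app/"])

    if __name__ == "__main__":
        deploy()

The point is that one command, run the same way by dev or ops, takes code from version control to production.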

Good content, great presentation. Check it out at:

10+ Deploys Per Day: Dev and Ops Cooperation at Flickr

(thanks to Jonathan at the Combat Consulting blog for posting the link and summary).

CA 3Tera AppLogic 2.9 release to introduce optimized, self-healing, highly available cloud with a globally federated API

CA 3Tera® AppLogic® 2.9 is the first major new release of AppLogic as part of CA Technologies. With the new ownership of the product come the expected significant improvements for enterprise customers and the service providers catering to them. CA 3Tera AppLogic 2.9 also continues to provide the innovative and category-defining capabilities that have earned AppLogic many awards in the last four years.

CA 3Tera AppLogic 2.9 successfully completed a three-month beta program and will be generally available this month. (A link to the full release notes will be provided here.)

I would like to focus on the key new features and capabilities in this release:

1.  Full High Availability

High availability and full system redundancy are now integrated into every subsystem of the platform, ensuring that storage, networks, compute resources and the control node are all highly available and that the system can recover from the failure of any single element without human intervention. Support for redundant network switches, added in 2.9, completes this functionality.
The high availability support includes three important automated steps to restore operations quickly and efficiently:
• fault detection,
• isolation of the failed component, and
• recovery of the affected application(s).

Once affected applications are restored to operation – typically within minutes of the failure – AppLogic proceeds to rebuild the infrastructure redundancy, readying the system for handling future hardware failures.
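
For a feel of how such a cycle fits together, here is a conceptual sketch; it is not AppLogic’s actual implementation, and the component names and actions are invented:

    # Conceptual detect/isolate/recover loop (illustration only).
    import time

    components = {"server-3": True, "switch-A": True}    # name -> healthy?

    def detect_faults():                                 # 1. fault detection
        return [n for n, healthy in components.items() if not healthy]

    def isolate(name):                                   # 2. isolate the failure
        print(f"fencing {name} so it cannot corrupt shared state")

    def recover(name):                                   # 3. recover the apps
        print(f"restarting {name}'s applications on spare hardware")

    def rebuild_redundancy(name):                        # re-arm for the next failure
        print(f"re-mirroring volumes and re-routing links around {name}")

    def control_loop():
        while True:
            for failed in detect_faults():
                isolate(failed)
                recover(failed)
                rebuild_redundancy(failed)
            time.sleep(5)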

What does this mean for you?

CA 3Tera AppLogic is designed from the ground up to expect hardware failures and recover applications quickly and automatically.

As a result, you get a very resilient and self-healing platform that can keep applications running, ensuring an amazing SLA without requiring emergency fire drills in the data center… using only plain commodity x86 servers and switches. CA 3Tera AppLogic is a unique solution where all redundant capabilities are managed by a single product and through a single user interface, bringing customers simplicity and reliability with zero integration effort.

2.  Network Topology Detection and Path Optimization

CA 3Tera AppLogic automatically discovers the network topology and cabling layout, and ensures full cross-sectional bandwidth between any two servers in the system – all the while maintaining the network redundancy and recovery capability in case of a network component failure. CA 3Tera AppLogic 2.9 removes the majority of network infrastructure bottlenecks and eliminates manual network configuration. It also displays the state of all network infrastructure elements and the current paths used, and allows manual control for troubleshooting and testing. It dynamically detects and adjusts for changes that occur in the system (e.g., re-wiring) and issues alarms when it has detected a failure and performed automated recovery.
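
As a rough illustration of what a topology-aware redundancy check involves (a toy sketch, not the product’s algorithm; the cabling table is invented), model the discovered links as a graph and verify that every pair of servers stays connected after any single element fails:

    # Toy single-failure redundancy check over a discovered topology.
    from itertools import combinations

    links = {                            # node -> directly cabled neighbors
        "server1": {"switchA", "switchB"},
        "server2": {"switchA", "switchB"},
        "switchA": {"server1", "server2"},
        "switchB": {"server1", "server2"},
    }

    def reachable(src, dst, dead):
        """Graph search that ignores the failed element `dead`."""
        seen, frontier = {src}, [src]
        while frontier:
            for nxt in links[frontier.pop()] - seen:
                if nxt == dead:
                    continue
                if nxt == dst:
                    return True
                seen.add(nxt)
                frontier.append(nxt)
        return False

    servers = [n for n in links if n.startswith("server")]
    for a, b in combinations(servers, 2):
        for dead in links:               # simulate each single failure
            if dead in (a, b):
                continue
            assert reachable(a, b, dead), f"{a}<->{b} lost if {dead} fails"
    print("full connectivity survives any single component failure")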

Why is this important?

Tracking down network infrastructure bottlenecks is not an easy task, and doing so in complex composite applications is one of the hardest IT projects. Maintaining full cross-sectional bandwidth in redundant network environments is the key to eliminating such bottlenecks, and traditionally requires labor-intensive and fragile manual configuration.
CA 3Tera AppLogic fully manages network path optimization and saves you from performing manual switch configuration. That means the maximum bandwidth of the network infrastructure is always available to you, and you don’t need to intervene to recover from or handle failures in the networking equipment.

3.  Federated Web Service API for Global Access

CA 3Tera AppLogic 2.9 now comes with a federated web services API. It is based on the familiar AppLogic shell command semantics, provided over a REST-like, simple-to-use transport mechanism. In typical AppLogic style, the API is implemented as an AppLogic composite application, using only standard catalog components.

The API provides programmatic control over nearly all CA 3Tera AppLogic functions, complementing the graphical user interface and the command line shell. For easier integration with Java and PHP/JavaScript applications, the API can provide responses in either XML or JSON format. A special asynchronous request mode is provided to orchestrate commands that may take longer to execute, communicating the result of such operations upon completion in a clear and timely way.

The API can federate any number of CA 3Tera AppLogic clouds through a single API access point; conversely, a single grid can be controlled through multiple API access points for redundancy and access segregation.
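
Until the full release notes are out, here is a hypothetical client sketch; the URL, credentials, command names, parameters and response fields are assumptions for illustration, not the documented interface. It shows the general pattern: issue a shell-style command over HTTP, request JSON, and poll an asynchronous job until it completes:

    # Hypothetical client for the federated web-service API (names invented).
    import time
    import requests

    API = "https://grid.example.com/api"   # hypothetical federated access point
    AUTH = ("apiuser", "secret")           # hypothetical credentials

    def call(command, params=None):
        """Issue one shell-style command, asking for a JSON response."""
        params = dict(params or {}, fmt="json")    # assumed XML/JSON switch
        r = requests.get(f"{API}/{command}", params=params, auth=AUTH)
        r.raise_for_status()
        return r.json()

    # synchronous command against one grid behind the access point
    apps = call("app_list", {"grid": "grid1"})

    # asynchronous mode for a long-running command: start it, then poll the job
    job = call("app_start", {"grid": "grid1", "app": "crm", "async": "1"})
    while True:
        status = call("job_status", {"job": job["job_id"]})
        if status["state"] in ("done", "error"):
            break
        time.sleep(2)
    print(status)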

Why does this matter?

The new API provides the basis for true hybrid cloud implementations. The federation capability allows cloud users to set up API access points that span their own virtual datacenters (on-premises/private cloud), as well as virtual datacenters provided by one or more public cloud providers – all accessed through the same API access point and with the same credentials. Further, the API provides abstraction of the infrastructure location, so that migrating clouds and virtual datacenters – between public and private, and between different service providers – can be performed without impacting control and monitoring software.
The asynchronous semantics for long-running commands provide proper job-specific progress reporting, resulting in client code that is simple and reliable.

4.  IP address enforcement improves customer isolation in shared public cloud environments

Like prior AppLogic versions, release 2.9 continues to strictly enforce the security of network connections between components of composite applications based on the zero-trust network architecture that is unique to CA 3Tera AppLogic. Release 2.9 provides further security by also enforcing the public IP addresses assigned to applications, automatically restricting incoming and outgoing traffic only to the assigned IP addresses.

Unlike virtualization solutions (which, in general, don’t enforce the IP addresses) and other clouds (which typically support only one IP address per VM), CA 3Tera AppLogic 2.9 supports multiple public IP addresses per appliance, allowing greater flexibility without sacrificing security.

Why does this matter?

IP address enforcement significantly improves the isolation between unrelated applications. While very important in on-premises/private cloud environments, this enforcement is essential in multi-tenant environments such as shared public clouds. CA 3Tera AppLogic 2.9 extends the unique zero-trust/zero-configuration network architecture, providing this important layer of security without requiring expensive gear and complicated network switch VLAN setups – in fact, without any configuration whatsoever.

Addressing security is generally a compromise between the level of security and the effort required to maintain it. In a very convenient and error-proof way, CA 3Tera AppLogic gives you the highest level of security with zero configuration effort.

5. Other Features

OVF Support
CA 3Tera AppLogic 2.9 enables importing standards-based OVF VM packages, making it easier to on-board applications from existing virtualization environments. The new image2class utility command expands on the prior releases’ iso2class functionality and allows importing Linux virtual machines packaged in the DMTF-standard OVF format, converting them to standard CA 3Tera AppLogic appliances in the process.

Support for Microsoft Windows 2008 Server
CA 3Tera AppLogic 2.9 now fully supports both 32-bit and 64-bit Microsoft Windows 2008 Server-based appliances. It extends the Windows 2008 support found in the prior release, providing volume management for NTFS6 and signed para-virtualized drivers for improved I/O performance, as well as new Windows 2008 Server templates and appliances.

National Language Support
CA 3Tera AppLogic 2.9 also expands national language support within customer workloads. The appliance kit included in the new release improves compatibility with international versions of Microsoft Windows 2003 and 2008. In addition, a new virtualization setting allows selecting the keyboard language mapping to improve operation with non-US keyboards.

Of course, CA 3Tera AppLogic 2.9 is backward compatible with appliances and applications from prior releases, so upgrading the grid or migrating your workloads onto a new 2.9 grid is trouble-free.

CA 3Tera AppLogic 2.9, like prior releases, is packaged for turnkey installation, providing zero-to-cloud setup in less than 4 hours. To try CA 3Tera AppLogic 2.9 today, please contact your CA account manager or reach us through our web site.

Happy cloud computing,

- Peter

VM stall – why it happens and how to overcome it

Filed under: Cloud Computing, virtualization — peternic — June 9, 2010 @ 7:08 pm

Andi Mann at CIO.com posted a great article on challenges in the adoption of virtualization and proposed a new term – VM stall. It refers to the apparent limit in adoption of virtualization within companies – those who start tend to virtualize the low-hanging fruit, and virtualization efforts seem to stall at around 20-30% of applications. Read his article.

I agree with Andi that virtualization is not as widely adopted as everybody makes it look. By now, most IT shops have done something, so there is a widespread notion of “everybody’s doing it”. Also, it tends to be picked for new projects (which is a good thing).

I don’t think we should set “virtualization goals” – say, at 50%, 90% or 100% – virtualization is not an end in itself. That said, it is an essential enabling technology and I do believe it will become the norm within the next few years – every new server will be virtualized from the start; likely it will either be shipped this way by the hardware manufacturer or the OS will provide the layer by default.

That said, let’s look at the possible reasons for the current stall in adoption. Based on my experience, I would propose three:

1. Not enough value in virtualization alone: Server virtualization as a solution is mostly about server consolidation. As a result, companies tend to consolidate the non-critical apps, the ones that can be packed in small boxes. Once that is done, the value of moving the bigger apps is simply not there — there will be no consolidation benefit. The bigger apps need more resources, not less. Virtualization alone does not help much (and can make things harder in some cases if not properly implemented).

2. Cost: To get most of the virtualization benefits, you end up needing very expensive hardware — SAN for everything, fast storage interconnect, lots of network bandwidth, etc., and lots of software licenses, both for the virtualization and for the management of all the pieces. Many projects simply can’t justify that cost. Not all virtualization products have this characteristic, but the default choice does, so it skews the statistics.

3. Complexity: the mere mention of a “virtualization team” (in IT departments) shows that the technology, in its current incarnation, is not ubiquitous enough. Virtualization was not (and is not) supposed to become yet another silo. All IT professionals should be skilled in virtualization. If the technology is so complex that it requires a separate virtualization team, then we need better technology.

Best regards,
- Peter

PS The solution to the above is in cloud technologies, where virtualization is an enabler of further abstraction and encapsulation. Good cloud technology – like the one we have here at CA|3Tera – achieves simplification and flexibility that we have not had before. This helps overcome the 3 factors I listed above and move adoption beyond the stall limit. Watch this space for further posts on how this happens.

PPS After posting this, I found out that Andi has joined CA Technologies, heading Virtualization product marketing — now that is great synergy!

Mainstream IT Buys into Cloud Computing: CA to Acquire 3Tera – A Message from Barry X Lynn, CEO 3Tera

Filed under: 3tera, AppLogic, Cloud Computing, Customers, Random Thoughts, Utility Computing — bxl — February 24, 2010 @ 7:13 am

We started 3Tera to radically ease the way IT deploys, maintains and scales – MANAGES – applications. Our AppLogic® cloud computing platform provides the foundation of our partners’ orchestration of cloud services for public and private clouds around the world. Today, we’re taking the next step in moving toward making cloud computing mainstream by joining CA.

CA and 3Tera share a common vision for the future of cloud computing, and we are excited about the opportunities that this acquisition will create for our customers, partners and their cloud users.

This is a historic moment in Cloud Computing. The significance of this acquisition is a heck of a lot more than just a land grab in a hot space. We are confident that, as a team, CA and 3Tera will extend our leadership of the cloud computing market.

We are honored, given the plethora of Cloud Computing companies that have emerged in the last few years, that CA has chosen us. We really are!

It would probably be arrogant to suggest that we, in turn, chose CA. So I won’t suggest that. But the fact is, we had many options for the future and this is the one that excited us the most.

Now, there are only two kinds of people thinking about Cloud Computing: those who believe it is the future of information technology and those who are in complete denial.

I’ve been around a long time, probably longer than most of the readers of this post. During this time, I have seen three major paradigm shifts in IT.

For my first 20 years in this game, Moore’s Law was, as it always has been, and still will be for a while, in effect. Computers became exponentially more powerful, faster and cheaper. But, for those 20 years it was big central computers doing everything.

So, the first paradigm shift was away from these big centralized systems to client server or distributed systems. There were those who had the vision that inexpensive work stations and servers, connected over a network, would take on much of the load that the big central computers were processing. And there were also those who were in denial.

The second big shift was the rise of the browser and eCommerce. Some of you may be surprised that I did not say the Internet. The fact is, though, Internet technology was around for years before there was a consumer-based Internet; it was deployed by the government as a way to interconnect various agencies and research institutions, and was known as the ARPANET. The browser put a friendly graphical user interface on top of it, and eCommerce was born.

There were those who had a vision that the Internet would be a common way for businesses and consumers to communicate and become widely used for effecting financial transactions. And there were those who were in denial.

The third shift is Cloud Computing. Computing is pervasive. It is no longer something used and accessed by an elite few. Computing is as much a part of life as telephone, television, electricity, etc.

So, the natural evolution of computing is for it to become a utility that anyone can tap into, like other utilities, consuming only what one needs – no more, no less – but always having enough available capacity when needed.

This is Cloud Computing – the encapsulation of applications as autonomous services, abstracted from infrastructure that its users do not care about, except that it’s available and reliable when needed – services that can be available anytime, anywhere, when called upon.

There are those who believe Cloud is the future and there are those in denial.

Like distributed systems, which became pervasive when the ability to precisely manage networks of servers and work stations became available; and like the Internet, which became pervasive when the ability to manage dynamic web sites securely and with high performance became available; so will go Cloud Computing.

I’ve heard some compare what is going on now to the internet bubble of the ‘90s. I’ve actually heard it referred to as the Cloud bubble. The big difference between the Internet bubble and the Cloud bubble is that today’s economy doesn’t dictate the kind of crazy valuations we saw in the ‘90s (or maybe today’s economy is just more realistic than that of the Internet bubble).

But they have something very significant in common, I believe.

During the Internet bubble, everyone and his brother with a web site, from giant infrastructure companies to retailers of boutique niche products, was perceived to be the future. When the dust settled, though, most couldn’t maintain their value – except for the Internet infrastructure providers, that is. It was not just anyone with an Internet presence. It was mostly those who enabled the Internet – who provided the infrastructure to deal with it – to manage it!

Just as everyone tried to stake a claim to a piece of the Internet in the ‘90s, now there are a gazillion companies with a Cloud presence. When the dust settles, though, the long-term value will be retained for the shareholders of the companies that provide the infrastructure, enabling capabilities and management of Cloud Computing.

CA is a management company. Their mission has always been and remains centered on the management of information technology. Their ability to adapt and manage each generation of technology has enabled them to thrive through all of these shifts.

While there are several management vendors out there, we see most figuring out how to shoehorn customers’ needs into what they already have. But tails can only wag dogs for a short period of time. The big winners will be those who adapt and evolve what they have into real, more than wannabe, Cloud Computing management.

That’s the historic statement. CA has drawn that line in the sand, and we’re thrilled to be part of it.

The leading innovator of IT management technology and the leading innovator of Cloud Computing technology are now one and the same!

Xseed and ScaleUp Team on Global Cloud Computing Framework

Filed under: 3tera, AppLogic, Cloud Computing, Customers, Service Provider — barmijo — February 16, 2010 @ 3:50 am

Almost from the moment we brought out AppLogic a little over three years ago, it was clear that the market for cloud computing would be global. More than half of all registrations have been international, and they come from all over the world: Japan, Australia, England, Spain, UAE, Nigeria, South Africa, Korea, China, Hungary, Russia – you name it.

So it comes as no surprise, then, that two of our most innovative international partners are teaming up to provide solutions. Xseed in Japan and ScaleUp in Germany are working together to create the framework for a globally connected cloud leveraging 3tera’s AppLogic cloud computing platform. We’re looking forward to doing our part, learning from their experiences to build an even better platform for the future.

You can read a bit more at the Cloud Computing Journal: http://bit.ly/ci8LYU.

Cloud Awareness: Are You Smarter Than a Fifth Grader?

Filed under: Cloud Computing, Random Thoughts — bxl — January 18, 2010 @ 5:20 pm

We’ve crossed a very significant chasm.

What chasm is that? you may ask. Enterprises? Mainstream IT? Government? Telcos? Yes, all of those are gaining traction by the second, but we have crossed one that is far more significant.

We received an email last week that asked, “Could you please tell me about cloud computing, what it does, why does it help, and what does your company do with it?”

Good question!  Why is this so significant?  It came from a fifth grader learning about Cloud Computing.

Is the world taking Cloud Computing seriously?  It better be.

Today’s students are tomorrow’s leaders. The fact that fifth graders and elementary school teachers are aware of Cloud Computing, and are learning and teaching it, is probably the most strategic chasm we have ever crossed – or ever will.

And something even better happened as a result. I thought hard about how to tell a fifth grader what the benefits of Cloud Computing are, and I came up with an answer that, in a nutshell, says it all.

Cloud Computing helps people spend more time solving the problems they need to solve, and doing the things they have to do with computing, rather than thinking about the technology.

The Future of Virtualization; or, How I Stopped Worrying How it Relates to Cloud Computing in 2010

I don’t know why, but I am still surprised when I hear the following question. What’s the difference between virtualization and Cloud? To me, it’s like asking the question – What’s the difference between a hammer and carpentry? The latter is a comprehensive craft. The former is one of many tools used by the craftsmen who practice it.

Simple – right? So why does that question occur at all?

It occurs, in my opinion, for two reasons, one right and one not so right.

The first reason is that all of the server virtualization vendors of any significance are also introducing Cloud offerings to the market. So, people are naturally associating the two (and rightfully so, just like one would associate hammers and carpentry). The difference is, though, no one thinks hammers and carpentry are the same thing.

So, the not so right reason – There are Cloud computing laggards out there who would like us to think that virtualization and Cloud are similar because they have embraced virtualization technology and do not want to appear out of step. As a result, there is a ton of noise in the market that is very hard to sort through.

So, how do I suggest one sorts through this noise?

When faced with a potential Cloud solution, ask a few questions about it.

Does it help me provision and deploy virtual machines on demand? If the answer is no, I’d ask why you are even looking at it. But if the answer is yes, remember that just deploying VMs on demand does not a Cloud make.

Does it enable the encapsulation and on demand deployment of multiple VMs as a single entity? If rather than managing VMs, you want to manage frequently used “appliances” that are composed of multiple VMs (e.g. a specific app server, a specific messaging system and a specific database server), can you do it? If the answer is yes, you are on your way to a real Cloud solution.

Does it enable the encapsulation and on demand deployment of whole software stacks (e.g. LAMP, Ruby on Rails, .NET, etc.)? If the answer is yes, you are certainly in the Cloud.

But, do you want more? Does it enable encapsulation and on demand deployment of entire multi-tiered apps? If yes, you have a very powerful Cloud solution.

More? Does it enable the encapsulation of the apps along with everything they need to run – network, storage, infrastructure, configurations, policies, documentation, etc., etc., etc.? If yes, then you have the most complete Cloud solution of all.

So, you might sense a theme here – Encapsulation. Yes. Encapsulation is key, but it is only half of the story. Encapsulation itself results in many benefits, especially operational cost savings and decreased time to market. But encapsulation alone does not make a Cloud. It does not create portability. It does not create the ability, by itself, to deploy anywhere, any time.

What’s the second half of the story? Abstraction. Not only do the most comprehensive Cloud solutions have to provide unlimited granularity of encapsulation, but they must completely abstract what is encapsulated from the physical resources (machines) they run on, so that they can run anytime, anywhere there are available idle resources.
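
A toy data model makes the two halves concrete (the names are invented, not AppLogic’s actual model): encapsulation bundles everything the app needs into one object, and abstraction means that object references no physical machine, so the platform can place it on whatever resources are idle:

    # Toy illustration of encapsulation vs. abstraction (invented names).
    from dataclasses import dataclass, field

    @dataclass
    class VM:
        name: str
        cpu: int
        ram_gb: int

    @dataclass
    class App:
        # Encapsulation: VMs, storage and policies travel as one unit.
        # Note that no physical host appears anywhere in the definition.
        name: str
        vms: list = field(default_factory=list)
        volumes: dict = field(default_factory=dict)    # logical, not LUNs
        policies: dict = field(default_factory=dict)

    class Host:
        def __init__(self, name): self.name = name
        def start(self, vm): print(f"{vm.name} running on {self.name}")

    class Grid:
        def __init__(self, hosts): self.hosts = hosts
        def pick_host(self, vm): return self.hosts[0]  # toy placement policy

    def deploy(app, grid):
        # Abstraction: the platform, not the app, picks the physical machines.
        for vm in app.vms:
            grid.pick_host(vm).start(vm)

    shop = App("shop", vms=[VM("web", 2, 4), VM("db", 4, 16)],
               volumes={"data": "50GB"}, policies={"restart": "auto"})
    deploy(shop, Grid([Host("node1"), Host("node2")]))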

In short, you do not measure a Cloud solution by how it does virtualization. You measure it by the granularity of its encapsulation capabilities and its ability to abstract VMs, stacks, apps/services and entire data centers from the physical resources they run on.

So, what is the future of virtualization and where is it going in 2010?

Virtualization is going the way of the hammer. It will be a necessary commodity for the Cloud, just like the hammer is a necessary commodity for the carpenter.

Now, before all the virtualization vendors get their shorts in a knot and start screaming at me that I am implying that all virtualization is the same, I am not. I acknowledge that some have features others do not, some outperform others, etc. But, can you tell who the best carpenter is only by knowing what brand of hammer he uses?

NAS Replication Appliance Coming in 2.7 Production Release

Filed under: Appliances, AppLogic — barmijo — September 14, 2009 @ 4:18 pm

We’ll be completing the Disaster Recovery Suite of appliances with the release of NASR early next month, included in the production release of 2.7. NASR replicates file storage between two NASR instances, which can be in the same app, in different VPDCs or even in different data centers.

In conjunction with INSSLR and MYSQLR, NASR offers a complete drag-and-drop disaster recovery solution for LAMP stack applications. Future releases will include additional databases and complete stack templates.

673 days, 17 hours, 53 minutes … and counting

Filed under: Random Thoughts — barmijo — August 28, 2009 @ 5:08 pm

Once in a while, as you go about the routines of a normal workday, a number jumps out of the stream of consciousness and catches your attention. That’s exactly what happened today while one of our support engineers was working with a client on a new application. When checking system status he suddenly realized the client’s private cloud had been running continuously for almost two years.

While two years of uptime isn’t earth-shattering in IT, in the realm of cloud computing two years of uninterrupted service is noteworthy. IMHO that’s particularly true in a week that’s seen a couple of private cloud announcements hard on the heels of another recent cloud outage. Private clouds aren’t simply about network addresses; they’re about control – about giving the operations team the ability to affect uptime for their application.

It’s sometimes hard to remember that when we originally introduced the concept of private clouds many folks scoffed. Blog posts declared “If it’s not public, it’s not a cloud!” As this cloud turns two, though, pressure from users has made the need for security and control of operations clear. More vendors are looking to offer private clouds and the resulting competition will produce better services for clients. Next year, as this cloud turns three, I expect we’ll see a much broader set of applications in the cloud as a result.

Just in case you’re wondering if this cloud is an anomaly, the second longest continuously running AppLogic private cloud is at 559 days… and counting.

Longest Continuously Operating Private Cloud, August 2009
