Since we first started offering AppLogic last year, one question users keep asking is how to test application performance. As it turns out, this hasn’t been an easy question to answer . . . until now.
Why has this been a problem? With any virtualization-based system, including AppLogic and our friends at Amazon’s EC2, it’s not quite as simple as starting your app and hitting it with traffic. The reason is that all current hypervisors use a leaky-bucket scheduler: whenever there are spare CPU cycles, they’ll be given to whatever virtual appliances are running. So if an appliance (or image) happens to run on a physical node with no other appliances, it’ll get all available CPU cycles. If another appliance is later started on that same node, the hypervisor will give it its share, but that reduces the cycles available to the first. The effect is that performance isn’t always consistent.
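The effect described above can be sketched with a toy model. This is purely illustrative and not how any real hypervisor is implemented; the `allocate` function, weights, and cycle counts are all mine:

```python
# Toy model of a work-conserving ("leaky bucket") hypervisor scheduler:
# spare cycles are always handed out to whoever is running, so one
# appliance's allocation depends on its neighbours on the node.

def allocate(shares, total_cycles=100):
    """Split total_cycles proportionally among running appliances.

    shares: dict mapping appliance name -> scheduler weight.
    """
    weight_sum = sum(shares.values())
    return {name: total_cycles * w / weight_sum for name, w in shares.items()}

# Appliance A alone on a node gets every cycle...
print(allocate({"A": 1}))           # {'A': 100.0}
# ...but starting appliance B on the same node halves A's allocation.
print(allocate({"A": 1, "B": 1}))   # {'A': 50.0, 'B': 50.0}
```

Run your load test while A is alone and you’ll measure twice the throughput you’d see on a busy node — exactly the inconsistency that makes capacity planning hard.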
In normal operation this isn’t an issue, but when trying to predict operating costs it can be critical. For online services, computing is the largest part of COGS. So if you’re off by a factor of 2 or more in the number of users a given set of resources can support, your pricing spreadsheet is going to have some potentially fatal flaws, leading you to set your pricing at an unprofitable level.
In AppLogic 2.0 we’ve added a simple extension to enable resource capping. AppLogic overrides the hypervisor’s scheduling, capping the resources for your appliances at the level you specify. You can then run your performance tests and know exactly what level of resources is required to support your user base.
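Extending the same toy model shows why capping makes test results predictable. Again, the function and numbers here are my own illustration, not AppLogic’s actual API or syntax:

```python
# Toy model of capped scheduling: each appliance's allocation is
# clamped at its configured cap, even when spare cycles exist.

def allocate_capped(shares, caps, total_cycles=100):
    """Proportional split, but never above each appliance's cap."""
    weight_sum = sum(shares.values())
    return {name: min(total_cycles * w / weight_sum, caps[name])
            for name, w in shares.items()}

# A capped at 40 cycles gets 40 whether it runs alone...
print(allocate_capped({"A": 1}, {"A": 40.0}))
# ...or shares the node -- so a performance test measures the same thing
# either way.
print(allocate_capped({"A": 1, "B": 1}, {"A": 40.0, "B": 40.0}))
```

Because the capped appliance sees the same allocation alone or on a busy node, a load test against it tells you what that resource level will actually support in production.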
Resource capping is a simple little feature that I expect will find a lot of use.