Last Friday I posted our first results from recent monitoring experiments on Amazon EC2 instance types,
where we published a comparison of the average performance of the Amazon EC2 instance types.
Now, three days later, I can add to those observations with a look at how the CPU, disk, and memory performance of our EC2 instances behaves when monitored over several days (in our case between April 1st and April 5th).
The following graphs (taken straight out of PRTG Network Monitor) show the results of our CPU, disk, and memory tests for the five available instance types. Each dot shows the result of one test:
EC2 CPU Performance
EC2 Disk Performance
EC2 Memory Performance
When I looked at the charts I got the impression that our test results for the "m1.small" instance fluctuated a lot more than those for the other instance types (i.e. showed more "variance"). So we measured the range of results (the difference between the maximum and minimum value) and calculated percentages for each test and instance type:
And, yes: our "m1.small" test instance showed the largest range for CPU tests (46%) and memory tests (32%). "c1.xlarge" showed the most constant results (a range of only 8% for the CPU test). A smaller range means that the time needed for the test (or for a real task) is more predictable. Our disk tests were a different story: the ranges were a lot higher for all instance types, and this time "m1.xlarge" showed the strongest fluctuations. We have not yet found a sensible interpretation for these disk results.
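For readers who want to reproduce the range calculation, here is a minimal sketch. How the range is normalized into a percentage (here: relative to the mean) is my assumption, and the sample values below are made up for illustration, not our actual measurements:

```python
def range_percent(samples):
    """Spread of a series of test durations, as a percentage of the mean.

    The 'range' metric is simply max - min; expressing it relative to
    the mean makes instance types with different absolute speeds comparable.
    """
    spread = max(samples) - min(samples)
    return 100.0 * spread / (sum(samples) / len(samples))

# Hypothetical CPU test durations in seconds (not real measurements):
m1_small = [100, 120, 95, 140, 110]   # widely scattered results
c1_xlarge = [40, 42, 41, 43, 40]      # nearly constant results

print(round(range_percent(m1_small)))   # large spread
print(round(range_percent(c1_xlarge)))  # small spread
```

A large percentage here means exactly what the charts suggest: the same task can take noticeably more or less time depending on when you run it.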
Again, my impression is that Amazon's cheapest option "m1.small" is not worth its money compared to "c1.medium" (or the other instance types). "m1.small" costs only half as much as "c1.medium", but in my opinion you get far less than half the service: "m1.small" was 2.5 to 3 times slower than "c1.medium" in our tests, and the fluctuations of our test results were strongest for "m1.small". This backs our results from Friday, where we also found "c1.medium" instances to be the better deal for our needs. My final graph for today shows this clearly: the hourly averages of our CPU load tests varied most for "m1.small":
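To make the price/performance argument concrete, here is a small back-of-the-envelope calculation. The hourly prices are illustrative assumptions (roughly the 2009-era on-demand rates, not quoted from Amazon), and the 2.5x slowdown is taken from our measurements above:

```python
# Illustrative hourly prices in USD (assumed, not quoted from Amazon):
PRICE = {"m1.small": 0.10, "c1.medium": 0.20}

def cost_per_task(instance, task_hours):
    """Total cost of one task that takes task_hours on this instance."""
    return PRICE[instance] * task_hours

# A task that takes 1 hour on c1.medium takes ~2.5 hours on m1.small:
c1 = cost_per_task("c1.medium", 1.0)  # 0.20 USD
m1 = cost_per_task("m1.small", 2.5)   # 0.25 USD
print(m1 > c1)  # the "cheaper" instance costs more per finished task
```

In other words: half the hourly price buys less than half the speed, so the per-task cost of "m1.small" ends up higher, before even considering the larger fluctuations.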
Of course our test programs are only very simple tests. They are no substitute for the application-specific "real world" tests that you should run on the instance types if you are considering moving applications to EC2 or any other cloud hosting service. But even though they cannot measure the performance of a virtual machine or PC exactly, they can still give a hint at the performance, and especially at the performance differences between "platforms".
All our tests were run between April 1st and April 5th, 2009, on Windows-based instances in the EC2 region "US East Coast", availability zone "1c". This table shows the numerical results used for the chart above. Please note that for each sensor we dropped the 10 worst (slowest) results from our range calculations to keep extreme outliers from distorting the results.
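The outlier handling described above can be sketched like this. The cut-off of 10 samples mirrors what we did; the helper name and the example values are my own:

```python
def trimmed_range(samples, drop_worst=10):
    """Range (max - min) after discarding the drop_worst slowest results.

    A few extreme outliers (e.g. a test run that collided with another
    VM's I/O burst) would otherwise dominate the maximum and inflate
    the range far beyond the typical fluctuation.
    """
    kept = sorted(samples)[:len(samples) - drop_worst]
    return max(kept) - min(kept)

# 40 typical durations plus 2 extreme outliers (illustrative values):
durations = list(range(100, 140)) + [500, 600]
print(trimmed_range(durations))  # outliers fall into the dropped top 10
```

Without the trimming, the two extreme values alone would put the range at 500 instead of a value that reflects the typical spread.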