The speedometer in my car maxes out at 150 MPH. I rarely drive at that speed (and the top end may be more optimistic than realistic), but it is certainly nice to have the option to do so when the time and the circumstances are right. Most of the time I am using just a fraction of the power that is available to me.

Many interesting compute workloads follow a similar pattern, with modest demands for continuous compute power and occasional needs for a lot more. Examples of this type of workload include remote desktops, development environments (including build servers), low traffic web sites, and small databases. In many of these cases, long periods of low CPU utilization are punctuated by bursts of full-throttle, pedal to the floor processing that can consume an entire CPU core. Many of these workloads are cost-sensitive as well. Organizations often deploy hundreds or thousands of remote desktops and build environments at a time; saving some money on each deployment can have a significant difference in the overall cost. For low traffic web sites and experiments, the ability to be lean-and-mean can have a profound effect on the overall economic model and potential profitability.

New T2 Instances

Today we are launching new T2 instances for Amazon EC2. The T2 instances will dramatically reduce costs for applications that can benefit from bursts of CPU power. The instances are available in three sizes (micro, small, and medium) with On-Demand prices that start at $0.013 per hour ($9.50 per month). You can also gain access to a pair of t2.micro instances (one running Linux and another running Windows) at no charge via the AWS Free Usage Tier.

The T2 instances are built around a processing allocation model that provides you a generous, assured baseline amount of processing power coupled with the ability to automatically and transparently scale up to a full core when you need more compute power. Your ability to burst is based on the concept of “CPU Credits” that you accumulate during quiet periods and spend when things get busy. You can provision an instance of modest size and cost and still have more than adequate compute power in reserve to handle peak demands for compute power.

If your workload fits into one of the categories that I mentioned above, the T2 instances will provide you with more than ample performance at a very compelling price point. Choose the right size based on your CPU and memory requirements, and the T2 will take very good care of you. In most cases, you can actually choose to ignore the finer details of the burst-based model and simply go about your business. Many applications rarely need even the baseline level of power, and will accumulate CPU Credits for those times when the application needs to burst. Of course, we’re making the details available so that you can make an informed decision as to the best instance type and size for your needs.

The bottom line: you now have a very economical way to start small, meet ever-changing demands for compute power, and take full advantage of the entire range of AWS Services.

Here are the specs (prices are for On-Demand Instances in US East (Northern Virginia) Region):

Name       vCPUs  Baseline Performance  RAM (GiB)  CPU Credits / Hour  Price / Hour (Linux)  Price / Month (Linux)
t2.micro   1      10%                   1.0        6                   $0.013                $9.50
t2.small   1      20%                   2.0        12                  $0.026                $19.00
t2.medium  2      40%                   4.0        24                  $0.052                $38.00

The column labeled “Baseline Performance” indicates the percentage of single core performance of the underlying physical CPU allocated to the instance. For example, a t2.small instance has access to 20% of a single core of an Intel Xeon processor running at 2.5 GHz (up to 3.3 GHz in Turbo mode). A t2.medium has access to 40% of the performance of a single core, which you (or your operating system, to be a bit more precise) can use on one or both cores as dictated by demand.

The column labeled “CPU Credits / Hour” indicates the rate of CPU Credits that the T2 instance receives each hour. CPU Credits accumulate when the instance doesn’t use its baseline allocation of CPU, and are spent when the instance is active. Unused CPU Credits are stored for up to 24 hours.

CPU Credits

As listed in the table above, each T2 instance receives CPU Credits at a rate that is determined by the size of the instance. A CPU Credit provides the performance of a full CPU core for one minute.

For example, a t2.micro instance receives credits continuously at a rate of 6 CPU Credits per hour. This capability provides baseline performance equivalent to 10% of a CPU core. If at any moment the instance does not need the credits it receives, it stores them in its CPU Credit balance for up to 24 hours. If your instance doesn’t use its baseline CPU for 10 hours (let’s say), the t2.micro instance will have accumulated enough credits to run for almost an hour with full core performance (10 hours * 6 CPU Credits / hour = 60 CPU Credits).
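The arithmetic above can be sketched in a few lines of Python. This is a simplified model of the credit mechanics as described in this post, not an AWS API; the function names are illustrative, and the constants are the t2.micro values from the table.

```python
# Simplified model of the T2 CPU Credit mechanics described above.
# Constants are for t2.micro; function names are illustrative only.

EARN_RATE = 6          # CPU Credits earned per hour (t2.micro)
CAP = EARN_RATE * 24   # credits are stored for at most 24 hours -> 144

def accumulate(hours_idle, balance=0):
    """Credits banked after a stretch of idle time, capped at a day's worth."""
    return min(balance + EARN_RATE * hours_idle, CAP)

def full_core_minutes(balance):
    """One CPU Credit buys one minute of a full CPU core."""
    return balance

balance = accumulate(10)              # 10 idle hours at 6 credits/hour
print(balance)                        # 60 credits
print(full_core_minutes(balance))     # about an hour at full-core speed
```

Running the same model with a longer idle period shows the 24-hour cap kicking in: `accumulate(30)` tops out at 144 credits for a t2.micro, matching the ceiling described below.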

Let’s say that you have a business process that needs a burst of CPU power at the beginning and end of the business day in each time zone in your geographic region. By putting this process on a T2 instance, you can handle the compute load at peak times expeditiously and cost-effectively using the CPU Credits that were accumulated during the non-peak times.

As another example, consider a dynamic web site that occasionally enjoys sudden, unpredictable bursts of popularity in response to external news items or inclement weather. Again, hosting the site on a T2 results in a cost-effective solution that includes plenty of capacity to handle these bursts.

When an instance starts to run low on CPU Credits, its performance will gradually return to the baseline level (10% to 40% of a single core, as listed in the table above). This deceleration process takes place over the course of a 15 minute interval in order to provide a smooth and pleasant experience for your users.
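The ramp-down can be pictured with a small sketch. Note that the post only says the transition to baseline is gradual and takes about 15 minutes; the linear shape used here is an assumption for illustration, not documented behavior.

```python
# Illustration of the ramp-down described above: when credits run low,
# performance eases from full-core speed down to the baseline over a
# 15-minute interval. The linear shape is an assumption; only the
# duration and endpoints come from the post.

BASELINE = 0.10   # t2.micro baseline: 10% of one core
RAMP_MIN = 15.0   # minutes over which performance returns to baseline

def performance(minutes_since_depleted):
    """Fraction of a full core available, t minutes after credits run low."""
    if minutes_since_depleted >= RAMP_MIN:
        return BASELINE
    frac = minutes_since_depleted / RAMP_MIN
    return 1.0 - frac * (1.0 - BASELINE)

print(performance(0))    # 1.0 -> still at full-core speed
print(performance(15))   # 0.1 -> settled at the baseline
```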

Credits will continue to accumulate if they aren’t used, until they reach the level which represents an entire day’s worth of baseline accumulation:

t2.micro – 144 (6 CPU Credits / hour * 24 hours)

t2.small – 288 (12 CPU Credits / hour * 24 hours)

t2.medium – 576 (24 CPU Credits / hour * 24 hours)

No further credits accumulate once an instance reaches this level. Workloads that are well suited to T2 instances will generally maintain a positive credit balance. If you find that you are consistently maxing out on credits, you might consider switching to a smaller instance size to reduce your costs.

You can spend accumulated credits in bursts or all at once, as your needs dictate. They are, however, lost across a stop/start cycle or if an instance terminates unexpectedly. You can track the accumulation and expenditure of credits by watching a pair of new CloudWatch metrics that are reported on a per-instance basis:

CPUCreditUsage – Tracks the expenditure of credits over time.

CPUCreditBalance – Tracks the accumulation of credits over time.
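Both metrics live in the standard AWS/EC2 CloudWatch namespace, so they can be read programmatically as well as in the console. Here is a sketch using boto3's `get_metric_statistics` call; it assumes AWS credentials are configured, and the instance ID you pass in is a placeholder.

```python
# Sketch: reading the per-instance CPUCreditBalance metric via CloudWatch.
# The client is passed in so the function can be exercised without live
# AWS access; in practice it would be boto3.client("cloudwatch").
import datetime

def credit_balance_datapoints(cloudwatch, instance_id, hours=6):
    """Fetch recent CPUCreditBalance datapoints for one instance."""
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=hours)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=start,
        EndTime=end,
        Period=300,             # five-minute granularity
        Statistics=["Average"],
    )
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
```

With credentials in place, calling `credit_balance_datapoints(boto3.client("cloudwatch"), "i-...")` returns the same time series that the console graphs below show; swap in `CPUCreditUsage` to track expenditure instead.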

Each newly launched instance is given an initial allocation of CPU Credits to allow it to boot up at full-core speed. This initial credit, in conjunction with the “boot burst” of IOPS provided by the new SSD-backed Elastic Block Store volumes, means that your T2s will be up and running from a standing start more quickly than their predecessors.

T2 in Action

I decided to put a t2.small to the test in a development scenario. My intent was to show how CPU Credits are accumulated and then spent. Please do not think of this as a formal benchmark.

I booted up an instance, created, formatted, attached, and mounted a 100 GB General Purpose (SSD) EBS volume, installed the ncurses-devel and gcc packages, and downloaded the Linux kernel source tree from Kernel.org. I ran menuconfig, accepted all of the defaults, and saved a .config file. I then built the kernel:

As you can see, the entire kernel built in 23 minutes — just enough time for me to run down to the corner and get my lunch from Nosh the Truck. By the time I returned with my fish and chips, the build had completed. I opened up the AWS Management Console and took a look at the CloudWatch metrics to see how the build had affected the CPU Credits. Here’s what I saw:

The orange line denotes my usage of CPU Credits (in minutes) throughout the course of the build. The blue line represents the CPU Credit balance (again, in minutes) for the instance. As you can see, the balance was trending up before I started the build (the instance was idle). I spent credits during the build. However, I had more than enough credits to complete the entire build and the balance never went lower than 15. After the build concluded, credits began to accumulate again, rising from 16 minutes to almost 25 minutes in less than an hour.

Here’s the CloudWatch metric for CPU Load during the build:

And here’s the longer term view. I did a couple more kernel builds (three in parallel, each running make -j 2 in separate copies of the source tree) in an attempt to spend some CPU Credits. As you can see, I had more than enough:

Some Closing Thoughts

My personal experiments have led me to believe that the T2’s are going to be a really nice fit for a very wide variety of use cases. I’m looking forward to hearing your feedback!

Although the comparison is necessarily inexact, it is reasonable to map previous generations of EC2 instances to the T2 instances like this:

t1.micro to t2.micro

m1.small to t2.small

m1.medium to t2.medium

Replacing your previous generation instances with the equivalent T2 instances will give you significantly better performance at under half the cost. If you are planning to do this (I certainly am), don’t forget that the T2 instances do not include any local (instance) storage and that you’ll need to use one or more EBS volumes instead.

The T2 instances use Hardware Virtual Machine (HVM) virtualization in order to get the best possible performance from the underlying CPU, and you will need to use an HVM AMI.

Availability & Pricing

T2 instances are available today in the following AWS Regions and you can launch them now:

US East (Northern Virginia)

US West (Oregon)

EU (Ireland)

Asia Pacific (Singapore)

Asia Pacific (Tokyo)

Asia Pacific (Sydney)

South America (São Paulo)

T2 instances will be launching in the US West (Northern California), China (Beijing), and AWS GovCloud (US) Regions in the near future.

As noted above, pricing starts at $0.013 per hour ($9.50 per month) for On-Demand t2.micro instances in the US East (Northern Virginia) Region. Once again, you can start small, scale as needed, and use the entire range of AWS services at a really sweet price point! For more information, take a look at the EC2 Pricing page.

Some Early Reactions

After a good night’s sleep, I woke up to find some interesting comments online. Here’s a sampling:

A post on the Mainframe 2 blog, titled Great New Infrastructure From AWS, says “The t2.medium instance is disruptive: its cost-performance ratio means that software vendors with products at virtually any price point can now afford to run their application in the cloud and reach users on any device.”

— Jeff;