Amazon today announced a large update to its DynamoDB NoSQL database service that introduces a massively expanded free tier and the ability to store entire JSON-formatted documents as single database items.

Like all of Amazon’s cloud computing services, DynamoDB has always offered a limited free tier for developers who want to try the service. For DynamoDB, though, that tier was pretty restrictive: only 100MB of storage, 10 read capacity units and 5 write capacity units. That’s enough to try the service, but not enough to do any serious work (which, to be fair, is the point of a free tier). Starting today, the limit goes up to a whopping 25GB of storage and enough capacity to perform over 200 million requests per month.

As Amazon CTO Werner Vogels points out, that’s enough to run a gaming app with 15,000 monthly active users or an ad-tech platform that serves 500,000 impressions per month.

By comparison, Google’s NoSQL database service Cloud Datastore offers users 1GB of free storage, while Microsoft’s new JSON-centric DocumentDB service (which is still in preview) only offers a free tier to open source developers.

As Vogels also notes, many new NoSQL and relational databases (including Microsoft’s DocumentDB service) now use JSON-style document models. DynamoDB already allowed developers to store these documents, but they couldn’t directly work with the information inside them. That changes today. With this update, developers can use the AWS SDKs for Java, .NET, Ruby and JavaScript to easily map their JSON data to DynamoDB’s own data types. That turns DynamoDB into a fully featured document store and should make life easier for many developers on the platform.
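To get a feel for what that mapping looks like, here is an illustrative Python sketch of how a parsed JSON value folds onto DynamoDB's typed attribute format (maps, lists, strings, numbers, booleans, nulls). This is not the SDK's actual implementation, just a minimal stand-in to show the shape of the conversion:

```python
import json

def to_dynamodb(value):
    """Map a parsed JSON value onto DynamoDB's typed attribute format."""
    if value is None:
        return {"NULL": True}
    if isinstance(value, bool):  # check bool before int: bool is an int subclass
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}  # DynamoDB transmits numbers as strings
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, list):
        return {"L": [to_dynamodb(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_dynamodb(v) for k, v in value.items()}}
    raise TypeError(f"Unsupported JSON type: {type(value)!r}")

doc = json.loads('{"name": "Alice", "scores": [10, 20], "active": true}')
item = to_dynamodb(doc)
```

The nested `M` (map) and `L` (list) types are what's new here; they let a whole JSON document live inside a single DynamoDB item instead of being flattened or stored as an opaque blob.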

Because these objects can be pretty large, AWS is also raising the maximum item size: developers can now store documents of up to 400KB.
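A simple pre-flight check against that limit might look like the following sketch. Note this is only an approximation: DynamoDB's real accounting counts attribute names and values in its own encoding, not the raw JSON length:

```python
import json

DYNAMODB_MAX_ITEM_BYTES = 400 * 1024  # the new 400KB item size limit

def fits_item_limit(document: dict) -> bool:
    """Rough pre-flight check: approximate the item size by the UTF-8
    length of the serialized JSON. The real limit counts attribute
    names and values as DynamoDB encodes them, so treat this as an
    estimate, not an exact guarantee."""
    size = len(json.dumps(document).encode("utf-8"))
    return size <= DYNAMODB_MAX_ITEM_BYTES

small = {"id": "1", "body": "x" * 1000}          # well under the limit
big = {"id": "2", "body": "x" * (500 * 1024)}    # clearly over the limit
```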

It’s worth noting that while the focus here is on JSON objects, Vogels also points out that these new data types could be used to store HTML and XML documents with the help of a translation layer. You can find a full walkthrough of how all of this works here.
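Such a translation layer can be very thin. As a naive illustration (my own sketch, not anything AWS ships), an XML document can be folded into a JSON-style nested dict before being stored as a DynamoDB document:

```python
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Naive translation layer: fold an XML element into a JSON-style
    dict. Attributes and repeated child tags are ignored for brevity,
    so this only handles the simplest documents."""
    children = list(element)
    if not children:
        return element.text
    return {child.tag: xml_to_dict(child) for child in children}

doc = ET.fromstring("<user><name>Alice</name><city>Seattle</city></user>")
as_json = xml_to_dict(doc)
```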

In addition, AWS now makes it easier to change provisioned throughput, either from the management console or through the API. Before, developers could only double their throughput with each API call. Now, they can go from 10 writes per second to 100,000 with a single click. That should make it significantly easier for applications to react quickly to usage spikes (and scale back down afterwards), which should also reduce costs.
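To see why this matters, a quick back-of-the-envelope calculation (plain Python, no real API calls) shows how many update calls the old doubling rule required to scale from 10 to 100,000 writes per second:

```python
def calls_needed_doubling(current: int, target: int) -> int:
    """Under the old rule (throughput could at most double per update
    call), count how many calls it took to reach the target."""
    calls = 0
    while current < target:
        current *= 2
        calls += 1
    return calls

# Going from 10 to 100,000 writes per second under the old rule:
old_calls = calls_needed_doubling(10, 100_000)  # 14 successive calls
```

Fourteen successive calls, each one waiting for the previous table update to finish, versus a single request today.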

All of these new features are now available in Amazon’s US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo) and EU (Ireland) regions.