I’m a big fan of cloud computing in general and of Amazon Web Services in particular. I honestly believe that in a few years big providers will host all, or almost all, computing and storage resources. When that happens, we won’t have to worry much anymore about downtime, backups, or system administration. DynamoDB is one of the steps toward this future.

DynamoDB is a NoSQL database accessible through a RESTful JSON API. Its design is relatively simple: there are tables, which are basically collections of data structures, or in AWS terminology, “items.”

Every item has a mandatory “hash” key, an optional “range” key, and a number of other optional attributes. For instance, take the example table depts:

+------+--------+---------------------------+
| dept | worker | Attributes                |
+------+--------+---------------------------+
| 205  | Jeff   | job="manager", sex="male" |
| 205  | Bob    | age=43, city="Chicago"    |
| 398  | Alice  | age=27, job="architect"   |
+------+--------+---------------------------+
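The hash and range keys together form an item’s unique address. Here is a minimal plain-Java sketch of that idea (no AWS involved; the class and method names are mine, purely illustrative, not part of any SDK):

```java
import java.util.HashMap;
import java.util.Map;

// A toy model of a DynamoDB table: every item is addressed by its
// hash key ("dept") plus its range key ("worker"); everything else
// is an optional attribute map. All names here are illustrative.
public class ToyTable {
    private final Map<String, Map<String, String>> items = new HashMap<>();

    // The hash and the range concatenated form the full primary key.
    private static String key(int dept, String worker) {
        return dept + "/" + worker;
    }

    public void put(int dept, String worker, Map<String, String> attrs) {
        this.items.put(key(dept, worker), attrs);
    }

    public Map<String, String> get(int dept, String worker) {
        return this.items.get(key(dept, worker));
    }

    public static void main(String[] args) {
        ToyTable depts = new ToyTable();
        depts.put(205, "Jeff", Map.of("job", "manager", "sex", "male"));
        depts.put(398, "Alice", Map.of("age", "27", "job", "architect"));
        // Same hash (205) with a different range would be a different item.
        System.out.println(depts.get(205, "Jeff").get("job")); // manager
    }
}
```

Two items may share the same hash (Jeff and Bob both live in dept 205 above); the range is what keeps them distinct.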

For Java, Amazon provides an SDK that mirrors all RESTful calls as Java methods. The SDK works fine, but it is designed in a purely procedural style.

Let’s say we want to add a new item to the table above. The RESTful putItem call looks, in essence, like this:

putItem:
  tableName: depts
  item:
    dept: 435
    worker: "William"
    job: "programmer"

This is what the Amazon server needs to know in order to create a new item in the table. This is how you’re supposed to make this call through the AWS Java SDK:

PutItemRequest request = new PutItemRequest();
request.setTableName("depts");
Map<String, AttributeValue> attributes = new HashMap<>();
attributes.put("dept", new AttributeValue().withN("435"));
attributes.put("worker", new AttributeValue("William"));
attributes.put("job", new AttributeValue("programmer"));
request.setItem(attributes);
AmazonDynamoDB aws = // instantiate it with credentials
try {
  aws.putItem(request);
} finally {
  aws.shutdown();
}

The above code works fine, but there is one major drawback: it is not object-oriented. It is a perfect example of imperative, procedural programming.

For comparison, let me show what I’ve done with jcabi-dynamo. Here is my code, which does exactly the same thing, but in an object-oriented way:

Region region = // instantiate it with credentials
Table table = region.table("depts");
Item item = table.put(
  new Attributes()
    .with("dept", 435)
    .with("worker", "William")
    .with("job", "programmer")
);

My code is not only shorter, but it also employs encapsulation and separates the responsibilities of classes. The Table class (actually an interface, implemented internally) encapsulates knowledge of the table, while Item encapsulates item details.
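The chained with() calls above work because each call returns a fresh object carrying one more attribute. Here is a minimal sketch of such an immutable fluent builder; the class name Attrs and its internals are hypothetical, not jcabi-dynamo’s actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of an immutable fluent attribute builder, in the spirit
// of jcabi-dynamo's Attributes. Hypothetical implementation.
public final class Attrs {
    private final Map<String, Object> map;

    public Attrs() {
        this(new HashMap<>());
    }

    private Attrs(Map<String, Object> map) {
        this.map = map;
    }

    // Each with() returns a new object; the original stays intact,
    // which is what makes the chained calls safe to share.
    public Attrs with(String name, Object value) {
        Map<String, Object> next = new HashMap<>(this.map);
        next.put(name, value);
        return new Attrs(next);
    }

    public Object get(String name) {
        return this.map.get(name);
    }

    public static void main(String[] args) {
        Attrs attrs = new Attrs()
            .with("dept", 435)
            .with("worker", "William");
        System.out.println(attrs.get("worker")); // William
    }
}
```

Immutability here is a design choice: a half-built attribute set can be passed around and extended without surprising its other holders.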

We can pass an item as an argument to another method, and all DynamoDB-related implementation details will be hidden from it. For example, somewhere later in the code:

void sayHello(Item item) {
  System.out.println("Hello, " + item.get("worker"));
}

In this method, we don’t know anything about DynamoDB or how to deal with its RESTful API. We interact solely with an instance of Item.

By the way, all public entities in jcabi-dynamo are Java interfaces. Thanks to that, you can test and mock the library completely (although I would recommend using DynamoDB Local and writing integration tests instead).
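Because the public entities are interfaces, a unit test can substitute a hand-rolled fake (or a Mockito mock) with no DynamoDB in sight. A sketch of the idea, using a simplified single-method interface of my own as a stand-in (the real Item interface has more methods and checked exceptions):

```java
// A simplified, hypothetical stand-in for jcabi-dynamo's Item
// interface, just to show the mocking idea.
interface SimpleItem {
    String get(String name);
}

public class FakeItemDemo {
    // The code under test knows only the interface.
    static String greeting(SimpleItem item) {
        return "Hello, " + item.get("worker");
    }

    public static void main(String[] args) {
        // A lambda is enough to fake a single-method interface in a test.
        SimpleItem fake = name -> "worker".equals(name) ? "Alice" : null;
        System.out.println(greeting(fake)); // Hello, Alice
    }
}
```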

Let’s consider a more complex example, which would take a page of code with the bare AWS SDK. Say we want to remove from our table all workers who work as architects:

Region region = // instantiate it with credentials
Iterator<Item> workers = region.table("depts").frame()
  .where("job", Conditions.equalTo("architect"))
  .iterator();
while (workers.hasNext()) {
  workers.next();
  workers.remove();
}
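The deletion rides on the standard java.util.Iterator contract: next() must be called before each remove(), and remove() mutates the underlying collection. The same pattern on a plain in-memory list (no DynamoDB; class and method names are mine):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class IteratorRemoveDemo {
    // Removes every entry whose "job" attribute equals the given job,
    // using the standard Iterator contract: next() before remove().
    static List<Map<String, String>> withoutJob(
        List<Map<String, String>> workers, String job) {
        Iterator<Map<String, String>> it = workers.iterator();
        while (it.hasNext()) {
            if (job.equals(it.next().get("job"))) {
                it.remove(); // deletes from the underlying list
            }
        }
        return workers;
    }

    public static void main(String[] args) {
        List<Map<String, String>> workers = new ArrayList<>();
        workers.add(Map.of("worker", "Jeff", "job", "manager"));
        workers.add(Map.of("worker", "Alice", "job", "architect"));
        withoutJob(workers, "architect");
        System.out.println(workers.size()); // 1
    }
}
```

Calling remove() before next() throws IllegalStateException, which is why the loop body advances the iterator first.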

jcabi-dynamo has saved me a lot of lines of code in a few of my projects. You can see it in action in rultor-users.

The library ships as a JAR dependency via Maven Central (get its latest version there):