There are many use cases where auto-expiry of documents works perfectly: maintaining session state, shopping carts, price quotes for travel, and more. In all these cases, applications using Couchbase Server set the expiry time to seconds, minutes, or hours to match user behavior. For items with an expiry, Couchbase Server has built-in intelligence to make the data disappear at the given expiration time. You can read all about TTL and expiration in the Couchbase Server documentation. Most of these systems with expiration also come with use cases where you want to either renew expiring data or notify a downstream system that a set of documents is about to expire. Renewal or notification typically involves a process that looks at upcoming document expirations. Querying this information just got much faster with the addition of global indexes and N1QL.

In the past, you could index this information with map/reduce views. Even though map/reduce views provide powerful indexing, global indexes hold an advantage because views require a scatter-gather query across all data nodes.

Here is why: global indexes partition independently from the data. For example, even though your data may be spread across 20 nodes, a global index can reside on a single node if it fits there. You also don’t have to pick the same hardware for index nodes and data nodes in Couchbase Server, which means you can provision an index node that is “tall” enough to fit the index. You can read more about the differences between map/reduce views and global secondary indexes here.
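As a quick illustration of that independence, the CREATE INDEX statement accepts a WITH clause that pins a global index to specific index nodes. A minimal sketch is below; the hostname is a placeholder, not something from this example:

```sql
-- Place the global index on a single, beefier index node.
-- "index-node1:8091" is a placeholder hostname for illustration.
CREATE INDEX iExpiration ON default(exp_datetime)
USING GSI WITH {"nodes": ["index-node1:8091"]};
```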

Here is how you index and query the expiration information to detect data that is about to expire. I’ll be using .NET in the samples.

Step 1 – Include the expiration in the document body. You won’t need to do this in the future, but today META() does not yet expose the expiration. In the code below, add an exp_datetime attribute to the JSON that records the approximate time when the document will expire. (Remember, this is a distributed system and there is no central clock. Clocks will be slightly off across your client and server nodes, so the exact moment of expiry is hard to pin down.)

```csharp
// ...
cbDoc = new Document<dynamic>
{
    Id = _key,
    Content = new
    {
        a1 = _a1,
        // ...
        exp_datetime = DateTime.UtcNow.AddMilliseconds(30000)
    }
};
// UPSERT
cbDoc.Expiry = 30000;
var upsert = cbBucket.Upsert(cbDoc);
// ...
```

Step 2 – Create an index on expiration time.

```sql
CREATE INDEX iExpiration ON default(exp_datetime) USING GSI;
```

Step 3 – Query data expiring in the next 30 seconds. The following query will return the document IDs and full document values.

```sql
SELECT META(default).id, *
FROM default
WHERE DATE_DIFF_STR(STR_TO_UTC(exp_datetime),
                    MILLIS_TO_UTC(DATE_ADD_MILLIS(NOW_MILLIS(), 30, "second")),
                    "second") < 30
  AND STR_TO_UTC(exp_datetime) IS NOT MISSING;
```

The output will contain all the documents expiring in the next 30 seconds.
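From here, a renewal process can extend the TTL of each returned document with the .NET SDK’s Touch operation. The sketch below is one way to wire it up; the default cluster connection, the loop structure, and the 60-second renewal window are my assumptions, not part of the original example:

```csharp
// Sketch only: assumes a reachable Couchbase cluster and the index/query
// from the steps above. The 60-second renewal window is an arbitrary choice.
using System;
using Couchbase;
using Couchbase.N1QL;

var cluster = new Cluster();
var bucket = cluster.OpenBucket("default");

var request = new QueryRequest(
    "SELECT META(default).id FROM default " +
    "WHERE DATE_DIFF_STR(STR_TO_UTC(exp_datetime), " +
    "MILLIS_TO_UTC(DATE_ADD_MILLIS(NOW_MILLIS(), 30, \"second\")), " +
    "\"second\") < 30 AND STR_TO_UTC(exp_datetime) IS NOT MISSING;");

foreach (var row in bucket.Query<dynamic>(request).Rows)
{
    // Touch resets the TTL without rewriting the document body.
    // Note: this does NOT update the exp_datetime attribute, so a full
    // renewal would re-upsert the document with a fresh exp_datetime.
    bucket.Touch((string)row.id, TimeSpan.FromSeconds(60));
}
```

Keeping exp_datetime in sync with the real TTL is the main caveat of this workaround; it goes away once META() exposes the expiration directly.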

Happy Hacking