The Need

Ah, database migrations. Once you’ve moved to Couchbase, where the JSON document model is much more flexible, you won’t as often need to push every schema change through a change-request pipeline, with the downtime, possible errors, and general unpleasantness that come with being bound to rows and columns.

That said, being on Couchbase does not mean you are always free of needing to transform the data you keep there. It will just be a lot less common.

This is actually an interesting place for Apache Spark!

An Aside …

With the Spark connector and Couchbase 4.0 you have not just one, but four interfaces that are relevant to those using Spark. They’re the K-V interface, the Database Change Protocol (a.k.a. DCP) streaming interface, the N1QL query interface via Spark SQL and the View Query interface.

These can be combined with a host of different data sources from the Spark ecosystem to pull together and manipulate data in a number of ways. For instance, you may want to stream data out of Couchbase via DCP, mix it with an HDFS data source, and put the results back into a different Couchbase bucket.

The Solution …

Taking a simple case, how can we use Spark to write some code to efficiently transform a set of data inside Couchbase?

Imagine the scenario where you have acquired a new set of data about game players in JSON format. They’re all going to be playing your new FizzBuzz game soon, and the profiles have come over from a partner. The incoming profiles all look a bit like this:

```json
{
  "givenname": "Joel",
  "surname": "Smith",
  "email": "joelsmith@g00glemail.com",
  "entitlementtoken": 78238743
}
```

The problem is that the FizzBuzz profiles all look like this:

```json
{
  "fname": "Matt",
  "lname": "Ingenthron",
  "email": "matt@couchbase.com",
  "currentscore": 1000000
}
```

Normally, if you had another shape for the data, you’d add a little bit of logic for mapping at the time of read and write. However, this particular transition is a one-time process and comes with an additional wrinkle. That “entitlementtoken” needs to be looked up from a MySQL database backup you also have. You don’t want to have to provision or maintain a big MySQL deployment to handle launch day traffic, so a one time pre-launch transformation is better.

Streaming the data out, finding those with the ‘shape’ we want and then transforming them with Spark based on a SQL query would be ideal.

First, we need to set up our connection and stream the data out, looking for the shape of the already imported JSON. This will use Couchbase’s DCP interface to stream out the data.

```scala
val ssc = new StreamingContext(sc, Seconds(5))

ssc.couchbaseStream("transformative")
  .filter(_.isInstanceOf[Mutation])
  .map(m => (new String(m.asInstanceOf[Mutation].key),
             new String(m.asInstanceOf[Mutation].content)))
```

A current limitation is that the DStream never stops; as a workaround for this simple case, we can monitor for the point at which no more data is being transformed and shut the stream down ourselves.
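One hypothetical way to implement that monitoring is to record the time of the last observed batch and treat the stream as drained after a quiet period. The `IdleDetector` helper below is an illustration of the idea, not part of the connector; the wiring in the comments assumes the stream and `ssc` from the snippet above.

```scala
import java.util.concurrent.atomic.AtomicLong

// Hypothetical helper: remembers when data was last seen so the driver can
// decide that the one-shot transformation has drained. Not part of the
// Couchbase connector.
class IdleDetector(idleMillis: Long, startMillis: Long = System.currentTimeMillis()) {
  private val lastSeen = new AtomicLong(startMillis)

  // Call this from foreachRDD whenever a non-empty batch arrives.
  def recordActivity(now: Long = System.currentTimeMillis()): Unit =
    lastSeen.set(now)

  // True once no batch has arrived for at least idleMillis.
  def isIdle(now: Long = System.currentTimeMillis()): Boolean =
    now - lastSeen.get() >= idleMillis
}

// In the streaming job this would be wired up roughly as:
//   stream.foreachRDD(rdd => if (!rdd.isEmpty()) detector.recordActivity())
// while a small driver-side loop polls detector.isIdle() and eventually calls
//   ssc.stop(stopSparkContext = false, stopGracefully = true)
```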

Then, on a per-item basis, we need to apply a transformation based on this MySQL lookup. To do so, we’ll need to load the data from MySQL. Assuming the MySQL table looks like so:

```
mysql> describe profiles;
+------------------+-------------+------+-----+---------+-------+
| Field            | Type        | Null | Key | Default | Extra |
+------------------+-------------+------+-----+---------+-------+
| givenname        | varchar(20) | YES  |     | NULL    |       |
| surname          | varchar(20) | YES  |     | NULL    |       |
| email            | varchar(20) | YES  |     | NULL    |       |
| entitlementtoken | int(11)     | YES  |     | NULL    |       |
+------------------+-------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
```
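The `mysqlReader` used in the final pipeline is Spark’s JDBC data source reader. Its construction isn’t shown in the snippets below, but under Spark 1.x it could be configured roughly like this; the URL, driver class, and credentials are placeholder assumptions for a local MySQL instance:

```scala
// Hypothetical setup for the mysqlReader used below; the connection details
// are placeholders for your own environment.
val mysqlReader = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/backup") // placeholder URL
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "profiles")
  .option("user", "spark")                             // placeholder credentials
  .option("password", "secret")
```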

We will want to load the MySQL data as a DataFrame. Since the StreamingContext gives us RDDs to join against, we’ll turn the streamed documents into keyed pairs for a later join inside the stream; Spark 1.6 may make this easier. That conversion looks like this, extracted to a function for readability:

```scala
/** Returns an (email, document) pair so the stream can be joined by email address */
def CreateMappableRdd(s: (String, String)): (String, JsonDocument) = {
  val return_doc = JsonDocument.create(s._1, JsonObject.fromJson(s._2))
  (return_doc.content().getString("email"), return_doc)
}
```

We also need to add the new entitlement token (also extracted):

```scala
/** Returns a JsonDocument enriched with the entitlement token */
def mergeIntoDoc(t: (String, (JsonDocument, Integer))): JsonDocument = {
  val jsonToEnrich = t._2._1.content()
  val entitlementFromJoin = t._2._2
  jsonToEnrich.put("entitlementtoken", entitlementFromJoin)
  t._2._1
}
```

In the end we have a nice fluent description of our transformation modifying the RDDs in flight that need changes. This ultimately writes the transformed data back into Couchbase, overwriting the items using the K-V interface.

```scala
// Load the DataFrame of all of the users from MySQL.
// Note: appending .cache() may make sense here (or not) depending on the amount of data.
val entitlements = mysqlReader.load()
/* loading this:
+---------+-----------+-----------------+----------------+
|givenname|    surname|            email|entitlementtoken|
+---------+-----------+-----------------+----------------+
|     Matt| Ingenthron|   matt@email.com|           11211|
|  Michael|Nitschinger|michael@email.com|           11210|
+---------+-----------+-----------------+----------------+
*/

val entitlementsSansSchema = entitlements.rdd.map[(String, Integer)](f =>
  (f.getAs[String]("email"), f.getAs[Integer]("entitlementtoken")))

val ssc = new StreamingContext(sc, Seconds(5))

ssc.couchbaseStream("transformative")
  .filter(_.isInstanceOf[Mutation])
  .map(m => (new String(m.asInstanceOf[Mutation].key),
             new String(m.asInstanceOf[Mutation].content)))
  .map(s => CreateMappableRdd(s))
  .filter(_._2.content().get("entitlementtoken").eq(null))
  .foreachRDD(rdd => {
    rdd
      .join(entitlementsSansSchema)
      .map(mergeIntoDoc)
      //.foreach(println) // a good place to see the effect
      .saveToCouchbase("transformative")
  })

ssc.start()
ssc.awaitTermination()
```

The full example is in the couchbase-spark-samples repository.

The beauty of this example is that it’s easy to understand what’s happening and fairly trivial to scale. Chances are your own transformation is more complex, but this should give you a sense of what is possible and something to build upon.

There is always room for improvement.

One problem is that the MySQL data could be larger than what I want to load into memory. Spark does account for that by giving you a way to split the DataFrame across partitions; I didn’t need that here and wanted the sample to be readable. The other thing that may help is the ability to reference a SparkContext from within an existing StreamingContext. Spark doesn’t allow that at the moment, for good reasons, but I’d argue that this simple use case of doing a single-record lookup from inside the stream sort of makes sense.
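For reference, Spark’s JDBC data source can do that splitting itself by issuing one range query per partition. A sketch under Spark 1.x follows; the connection details and the assumed token range are placeholders, not values from this example:

```scala
import java.util.Properties

// Hypothetical partitioned load: Spark issues numPartitions range queries on
// entitlementtoken, so no single worker has to hold the entire table.
val props = new Properties()
props.setProperty("user", "spark")       // placeholder credentials
props.setProperty("password", "secret")

val entitlements = sqlContext.read.jdbc(
  "jdbc:mysql://localhost:3306/backup",  // placeholder URL
  "profiles",
  "entitlementtoken",                    // column to partition on
  0L,                                    // assumed lower bound of the token range
  100000000L,                            // assumed upper bound
  8,                                     // number of partitions
  props)
```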

On the Couchbase connector side, the DCP interface is currently classified as volatile and should be considered experimental. Also, the example above is very fast but needs some help to scale. A forthcoming update from my colleague Sergey Avseyev will allow for splitting the DCP streams across Spark workers to parallelize this transformation.

To Conclude

Spark is a great new tool for this kind of transformation. The same techniques can certainly be applied to migrating to Couchbase from a different data source, such as a relational database. The technique can even be expanded with Spark’s machine learning facilities to build a model over data streaming from Couchbase and anticipate results.