This seems like it would work, but there are consistency problems. While effort mutations are strongly ordered within a Kafka partition, we’re still at the mercy of the message publishers to log them to that partition in the correct order. As part two of this series points out, efforts are mutated in our Rails application in parallel (in web transactions, background jobs, etc.) without any synchronization or ordering. Consequently, effort mutations can (and will) be published out of order, which renders the order of logged mutations more or less useless.

There is hope, though. One important property of our system is that for the entire lifecycle of an effort, two important fields are immutable: the segment the effort traversed, and the user whose activity the effort belongs to. Knowing this, we can instead use the effort mutation messages logged by the Rails application only as a “notification” that something about the effort changed. The message doesn’t describe what specifically changed about the effort; we only know something changed and the leaderboard store may need to be updated.

Under that model, we can push the ordering requirement further down into the leaderboards system, where we have more control. If the worker consumes the partition of effort mutations in order, then as it processes each message it can query our canonical effort storage for the latest effort data, and decide at that point what kind of update to apply to leaderboards.

No matter what order the effort mutation messages were logged in, the worker will eventually update leaderboard storage to the correct data.
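As a rough sketch, the worker's handling of a single message might look like the following. All names here (process_mutation, LeaderboardStore, the message shape) are illustrative, not from Strava's production code.

```python
# Illustrative sketch: treat each mutation message purely as a notification,
# then re-read canonical storage to decide what update to apply.

class LeaderboardStore:
    """Stand-in for the leaderboard data store."""
    def __init__(self):
        self.rows = {}

    def upsert(self, effort_id, effort):
        self.rows[effort_id] = effort

    def remove(self, effort_id):
        self.rows.pop(effort_id, None)

def process_mutation(message, canonical_efforts, leaderboards):
    """Handle one effort mutation message from the Kafka partition."""
    effort_id = message["effort_id"]
    # Re-read the latest state; the message itself only says "something changed".
    effort = canonical_efforts.get(effort_id)
    if effort is None:
        # Effort was deleted after the mutation was logged: remove it.
        leaderboards.remove(effort_id)
    else:
        # Effort exists: upsert its latest data into the leaderboard.
        leaderboards.upsert(effort_id, effort)
```

Because the decision is always made against the freshest canonical state, the order in which mutations were logged stops mattering.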

The key takeaway here is that this architecture is eventually consistent, and handles all race conditions and out-of-order logging of effort mutations from the Rails app. This is due to a couple of important properties:

The actual transaction mutating the row in the database for the effort commits before the effort mutation event is logged. A correct leaderboard update can be determined based on the state found in canonical effort storage, for any possible mutation event.

Even if the worker reads an effort row from canonical storage, and then another actor in the system immediately updates the effort, rendering the worker’s read stale and invalid, the worker can still safely apply its now stale update. This is safe because the worker will eventually consume the mutation logged by the actor due to its effort update. On consumption, the worker will then read the final correct effort state, and update leaderboards accordingly.

Idempotency

Before declaring victory with the stream processing system, it’s also important to think about our other stream processing goal of idempotency. What would happen if we replayed a message more than once? Or, since we will be implementing our worker as a Kafka consumer, what happens if it crashes, and then restarts its consumption from an earlier offset in the log? Can leaderboard consistency be maintained?

Thankfully, yes. If we consider each effort mutation as a stateless notification that the effort mutated in some way, then being re-notified of a mutation simply triggers another query for the latest state. Our consumer will still attempt to issue the leaderboard update again, but that update is based on the most recently queried state from our canonical effort storage. In the even more complicated case of our consumer crashing and restarting its processing from an earlier point in the log, the same property still holds: the worker will attempt to issue the leaderboard update again based on the latest data.
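A toy model of that replay safety (the message and store shapes are invented for illustration):

```python
# Replaying messages is harmless because each one re-reads the latest
# canonical state rather than carrying state of its own.

def apply(messages, canonical, leaderboard):
    """Process a sequence of mutation notifications against a leaderboard dict."""
    for msg in messages:
        effort = canonical.get(msg["effort_id"])
        if effort is None:
            leaderboard.pop(msg["effort_id"], None)
        else:
            leaderboard[msg["effort_id"]] = effort
    return leaderboard

canonical = {7: {"elapsed_time": 412}}
log = [{"effort_id": 7}, {"effort_id": 7}]  # the same mutation, logged twice

once = apply(log, canonical, {})
# Replay from offset 0, as a restarted consumer would:
twice = apply(log + log, canonical, {})
assert once == twice  # same final leaderboard state either way
```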

Set Up for Success

The end result of this design is that we have a system which turns a potentially out-of-order series of effort mutations into an ordered sequence of leaderboard updates. As long as sequential processing of a partition occurs, effort updates are idempotent and rewindable, thanks to the strong ordering guarantees of a Kafka partition and the causal relationship of updates in the system.

Understanding the properties of messages in your system can often reveal characteristics you are able to leverage for efficient and accurate processing. In particular, the leaderboards architecture sets us up nicely to use event sourcing and command query responsibility segregation (CQRS) in building later components of the segments leaderboard system. We will discuss those in part four of this series.

Data Store

With our stream processing architecture figured out, it’s time to turn our attention to the data store. To review: in part two of this series we noted the desire for a store that was distributed and highly available, could sustain a high write load, and could store all leaderboards in a homogeneous way.

Influencing our data store decision was the preexisting usage of Cassandra at Strava. We already use Cassandra in a variety of other systems, and the common opinion was that it could be a good candidate for the leaderboards store. On the surface, Cassandra certainly met a lot of the project’s needs — it’s distributed, highly available, able to support a high level of writes, and has a strong emphasis on ordering. Consequently, a lot of our work in choosing the leaderboards data store went into determining if Cassandra would be an ideal fit.

Cassandra

Our confidence with Cassandra mostly hinged on two main questions:

1. Is it possible to construct a suitable data model/table schema using Cassandra?
2. Will Cassandra’s eventual consistency work for our use case?

Table Schema

Modern Cassandra schemas are generally defined in CQL, a SQL-like abstraction over native Cassandra data structures. So it’s best to think of the Cassandra schema in terms of more traditional SQL-like rows and columns.

Given our leaderboards use case, we crafted a schema we felt would work:

CREATE TABLE leaderboards (
    segment_id int,
    leaderboard_type int, // overall, male, female, etc.
    elapsed_time int,
    timestamp timestamp,
    effort_id bigint,
    effort_data blob,
    user_id int
)

Each row in a Cassandra table is keyed by a unique PRIMARY KEY. Choosing a PRIMARY KEY, along with Cassandra data modeling in general, is a complicated topic, but some distilled guidance is best summed up via a blog post by DataStax as:

Rule 1: Spread Data Evenly Around the Cluster. You want every node in the cluster to have roughly the same amount of data. […] Rows are spread around the cluster based on a hash of the partition key, which is the first element of the PRIMARY KEY.

Rule 2: Minimize the Number of Partitions Read. Partitions are groups of rows that share the same partition key. When you issue a read query, you want to read rows from as few partitions as possible.

One final note: within a partition, rows are ordered by the remaining columns of the PRIMARY KEY, known as the clustering key.

Knowing this, we might choose our PRIMARY KEY as:

(segment_id, leaderboard_type, elapsed_time, timestamp, effort_id)

Here, segment_id is the partition key, as it is the first element in the primary key. The remaining columns — leaderboard_type, elapsed_time, timestamp, effort_id — make up the clustering key, which defines the ordering of rows within the partition.

This seems great — with segment_id first in the PRIMARY KEY, all rows for a segment are partitioned onto the same node. In aggregate, this should spread all effort rows evenly around the cluster, with any one leaderboard request contained to a single partition. Within a partition, rows will be ordered by leaderboard_type, and then by elapsed_time — just like we want for an actual leaderboard. This schema also allows us to store all leaderboards in a homogeneous way, another one of the properties we wanted in our data store.
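To see why this clustering order is exactly what a leaderboard needs, here is a small simulation, in plain Python rather than CQL, of rows within a single segment_id partition. The leaderboard_type encodings and effort values are invented for illustration.

```python
# Rows within one segment_id partition, modeled as (leaderboard_type,
# elapsed_time, effort_id) tuples. Cassandra keeps rows sorted by the
# clustering key, so a contiguous scan of one leaderboard_type is
# already in leaderboard order.
OVERALL, FEMALE = 0, 1  # hypothetical leaderboard_type encodings

rows = [
    (FEMALE, 350, 11),
    (OVERALL, 290, 12),
    (OVERALL, 310, 10),
    (FEMALE, 280, 13),
]
rows.sort()  # clustering-key order: leaderboard_type first, then elapsed_time

overall = [r for r in rows if r[0] == OVERALL]
assert [r[2] for r in overall] == [12, 10]  # fastest efforts first
```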

Good for Reads, Bad for Writes

While this schema works well for storing ordered sequences of efforts in a leaderboard, it is actually pretty awful at retrieving the effort for one particular user. This is a problem for our aforementioned worker consuming effort mutations. For any given add or remove message processed, our leaderboard service needs to query the leaderboard store for the user’s current best effort to compare it against the incoming effort. Due to the PRIMARY KEY we chose, we can’t efficiently issue that query. Cassandra only allows a contiguous sequence of rows to be returned, and a particular user’s rows for a segment are interspersed amongst the ordered rows for all users on that segment.

We could maybe adjust the PRIMARY KEY to include the user_id:

(segment_id, leaderboard_type, user_id, elapsed_time, timestamp, effort_id)

This would make it trivial to query for a user’s best effort for that leaderboard_type, but now rows on disk are ordered by user_id, and not by elapsed_time like we want.

So what do we do? We looked at this and evaluated two options.

First, we considered having two separate tables. One table would have user_id in its PRIMARY KEY to allow efficient querying by user. Then, once that table had been updated, we would write the results to a second table, whose PRIMARY KEY was ordered for a leaderboard. This certainly would work, but keeping two tables in sync like this is tricky, and opens us up to a whole host of processing complications, edge cases, and failures that need to be reasoned about and accounted for.

Second, we looked at using a Cassandra feature called secondary indexing. Secondary indexes in Cassandra are a way to add an index on, and thus filter by, a column that is not part of the clustering key. There are many people online who discourage the use of secondary indexing. Their objections are based mostly on the fact that while secondary indexes have plenty of use cases where they seem to solve a problem, due to their implementation they do not actually work that well. Additionally, they will slow down writes, as you need to update both the index and table on each write operation.

However, if you follow the guidance of the Cassandra documentation, understand your use case to ensure it fits, and dutifully benchmark your implementation, secondary indexes can work well. And thankfully, our data and query pattern was a good fit for a secondary index on the user_id column, so we created a test schema, benchmarked it, and determined that it would actually work well for us.

There is still one remaining problem with our Cassandra schema. Even with the index on user_id, determining the ranking of a user on a leaderboard is not straightforward. To answer this question, we need to know the user’s row position, in order, from the top of the leaderboard. There is no CQL operation that we can use to achieve this in an efficient way, so we have to query for all rows in the table and manually determine the user’s ranking in memory. This is not so bad for small leaderboards of a few hundred users, but many leaderboards are in the high tens of thousands of users in size. The cost of moving that amount of data across the network, combined with the computational cost of processing that many results, is likely to be untenable.
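A sketch of that in-memory ranking, where rank_of and the row shape are illustrative, makes the scan cost concrete:

```python
# Rank a user by scanning the whole leaderboard in memory: fine for a few
# hundred rows, painful for tens of thousands. Rows are (elapsed_time,
# user_id) tuples, already in clustering-key (fastest-first) order.
def rank_of(rows, user_id):
    """Linear scan: O(n) rows fetched over the network just to find one rank."""
    for position, (_elapsed, uid) in enumerate(rows, start=1):
        if uid == user_id:
            return position
    return None  # user has no effort on this leaderboard

rows = [(280, 42), (290, 7), (310, 99)]
assert rank_of(rows, 7) == 2
```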

This type of query is not handled well by most data stores, so we’re not really missing a whole lot by using Cassandra, but the situation is certainly less than ideal. To combat this problem, we’ll want to ensure that we both minimize the number of queries of this type and cache leaderboard data in such a way as to make this class of queries more efficient. Stay tuned for part four of this series, where we will discuss this problem and the caching system in more detail.

Consistency Model

The other big consideration when using Cassandra is understanding its consistency model. Cassandra operates under last write wins (LWW) consistency, where each INSERT or UPDATE (really, an UPSERT) is blindly appended to the database, without checking if a duplicate record exists. Then, during reads, if Cassandra encounters two versions of the same row, it chooses the newer one, where “newer” means the row with the latest timestamp.

The “timestamp” a row is written with can be user-defined, but generally is the current time. This may sound like a scary way to do conflict resolution (there are plenty of ways it can break), but it’s incredibly fast and easy to reason about. In any running system you will likely see a small degree of data inconsistency, but that is nearly always a worthwhile tradeoff for the benefits of availability, replication, and performance.
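A minimal model of LWW reconciliation, with an invented row and timestamp representation, shows why it is easy to reason about:

```python
# Last-write-wins: given two versions of the same row, keep the one with the
# newer write timestamp. This models Cassandra's read-time reconciliation.
def resolve(version_a, version_b):
    return version_a if version_a["ts"] >= version_b["ts"] else version_b

old = {"ts": 100, "elapsed_time": 310}
new = {"ts": 200, "elapsed_time": 290}
assert resolve(old, new)["elapsed_time"] == 290  # newer timestamp wins
assert resolve(new, old) == resolve(old, new)    # arrival order is irrelevant
```

The second assertion is the important one: because resolution depends only on timestamps, replicas converge to the same row no matter which version they saw first.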

There are also ways to lower the rate of data inconsistency. Cassandra offers tunable consistency parameters, which you can use to ensure all read and write operations are executed on a quorum number of nodes. Doing so will hurt both read/write latency and availability (as a larger number of machines need to be healthy in order to complete the request); however, you can use this technique to prioritize consistency over availability if the performance hit is manageable for your application. We also run Cassandra configured with 3 replicas of data (a replication factor of 3), allowing us to tolerate the loss of one node and still successfully complete reads and writes on a quorum number of nodes (2). Granted, if we lose two nodes we will stop serving requests that require those nodes to establish a quorum, but losing two nodes is unlikely to happen in isolation, without a more systemic, widespread networking issue affecting many parts of the system.
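The quorum arithmetic behind that claim can be sanity-checked in a few lines; this is standard majority-quorum reasoning, not Strava-specific code:

```python
# With N replicas, a quorum is a majority: floor(N / 2) + 1 nodes. When both
# reads and writes use quorums, R + W > N guarantees every read quorum
# overlaps every write quorum, so reads see the latest quorum write.
def quorum(n):
    return n // 2 + 1

N = 3
R = W = quorum(N)          # 2 of 3 nodes
assert R + W > N           # read and write quorums must overlap
assert N - W == 1          # can lose one node and still complete requests
```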

The leaderboard system also structures its updates to reduce the likelihood of conflicting updates. Since leaderboard updates are partitioned by segment/user, the likelihood of conflicting updates drops dramatically, as no other actor in the system can be reading or writing the same row at the same moment. There are certainly scenarios where this could become false — e.g. a consumer pauses (perhaps due to GC) during message processing, a replacement consumer is started, and then the original consumer resumes processing. Generally these situations are incredibly rare, or at least rare enough that the engineering work and performance hit required to prevent them are not worth the small amount of data inconsistency seen.

After some more benchmarking, testing, and dual writing leaderboard entries to Cassandra, as well as the old system, we eventually determined Cassandra would be a suitable data store for the leaderboards system.

Conclusion

Our leaderboard system is composed of two main components:

1. A Kafka stream processing system, which consumes effort mutation messages logged from the Rails application, fetches the latest effort data from canonical storage, determines an update to apply, and then applies it to leaderboard storage.
2. A Cassandra leaderboard store, made up of a single CQL table keyed by segment and leaderboard type, with a secondary index on user_id to facilitate querying by user.

These two systems work in conjunction as a pipeline to transform mutations of efforts by our Rails application into a denormalized Cassandra index of the best effort for any given segment/user/leaderboard combination. This index can be used to serve leaderboard read requests quickly and efficiently, with some known drawbacks and deficiencies that can be solved through strategic caching.

While this final architecture may seem simple or obvious, arriving at it took many weeks of analysis, prototyping, and benchmarking. This article also skipped over a decent amount of extra functionality and implementation details of the final production system. Some of that detail will be discussed in part four of this series, where we will review accessory systems built in conjunction with leaderboards, such as leaderboard aggregate counters and a caching system, as well as enumerating additional benefits of this architecture.

Continued Reading

For the next article in the “Rebuilding the Segment Leaderboards Infrastructure” series, check out Part 4: Accessory Systems.