MongoDB 3.0 is a major release with long-awaited improvements. The most notable? The optional WiredTiger storage engine. After all, WiredTiger was founded by the people behind Berkeley DB. MongoDB claims WiredTiger write performance is 7-10x faster than that of the default storage engine, MMAPv1. Maybe it is, maybe it isn't. Either way, WiredTiger is better.

So, did MongoDB close the performance gap with Couchbase Server?

Avalon Consulting benchmarked MongoDB and Couchbase Server to find out.

Benchmark Scenario

The benchmark scenario called for strong consistency (both MongoDB and Couchbase Server guarantee strong consistency by default), a balanced workload of 50% reads and 50% updates to reflect both read and write performance across a wide range of use cases, a working set that did not fit into memory, and data replicated for durability and availability. Finally, read and write latency could not exceed 5ms.

Avalon Consulting deployed each database on nine servers – one node per server. After all, deploying 3x the number of MongoDB nodes would require 3x the number of subscriptions in a supported production environment.

Strong Consistency

50% Reads, 50% Updates

Read and Write Latency < 5ms

9 Servers, 9 Nodes – 1 Server per Node

Replicated Data (1 Primary, 2 Secondary)

Data > Memory*

300M Documents

286GB Primary (1x) + 572GB Secondary (2x)

90GB Primary Resident in Memory (32%)



* The working set was the entire data set.
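A quick back-of-the-envelope check of the figures above (a sketch; GB is taken as 10^9 bytes, and the per-document size is simply an average derived from the stated totals):

```python
# Figures from the benchmark summary above
documents = 300_000_000      # 300M documents
primary_bytes = 286e9        # 286GB of primary data
resident_bytes = 90e9        # 90GB of primary data resident in memory

avg_doc_size = primary_bytes / documents             # ~953 bytes per document
resident_pct = resident_bytes / primary_bytes * 100  # ~31.5%, i.e. roughly 32%

print(f"{avg_doc_size:.0f} bytes/doc, {resident_pct:.1f}% resident")
```

Since only about a third of the primary data fits in memory, a large share of reads must touch disk – exactly the regime the scenario targets.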

Methodology

The benchmark was performed on Amazon Web Services with the Yahoo! Cloud Serving Benchmark (YCSB), an open source performance testing framework. Throughput and 95th percentile latency were measured while increasing the number of concurrent clients from 70 in increments of 35 until latency exceeded 5ms.
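The workload described above maps closely onto YCSB's standard Workload A (50% reads, 50% updates). A hypothetical properties file for this scenario might look like the following – the exact parameters Avalon Consulting used are not given here, so the record count is taken from the data-set description and the remaining settings are illustrative YCSB defaults:

```properties
# Hypothetical YCSB configuration approximating the benchmark scenario.
# Not the actual file used by Avalon Consulting.
workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=300000000
readproportion=0.5
updateproportion=0.5
scanproportion=0
insertproportion=0
requestdistribution=zipfian
```

Client concurrency is then varied per run with YCSB's -threads flag (70, 105, 140, and so on) until 95th percentile latency crosses 5ms.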

Results

MongoDB latency exceeded 5ms with 245 concurrent clients at 72K ops/sec.

Couchbase Server throughput was 2.6x higher with 245 concurrent clients at 186K ops/sec.

Couchbase Server latency was less than 5ms with 525 concurrent clients at 298K ops/sec.

Couchbase Server latency exceeded 5ms with 805 concurrent clients at 336K ops/sec.
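The headline comparison at 245 clients can be read straight off these numbers (a quick check using the throughput figures above):

```python
# Throughput at 245 concurrent clients, from the results above (ops/sec)
mongodb_ops = 72_000
couchbase_ops = 186_000

ratio = couchbase_ops / mongodb_ops
print(f"Couchbase throughput advantage: {ratio:.1f}x")  # 2.6x
```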

Conclusion

The problem with MongoDB was not the storage engine (though MMAPv1 didn't do it any favors) – it was sharding. It still is. No storage engine can overcome the limitations of sharding. MongoDB latency was acceptable at first, but not for long. WiredTiger helped with latency, but throughput was still limited by a database not engineered for concurrency. Assuming MongoDB scales linearly, it would have required 3-5x the number of nodes to perform as well as Couchbase Server.
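One plausible reading of the 3-5x figure, under the stated linear-scaling assumption: MongoDB's 9 nodes delivered 72K ops/sec, while Couchbase Server delivered 186K ops/sec at the same client count and 336K ops/sec at its latency limit.

```python
# Node counts MongoDB would need, under a linear-scaling assumption,
# to match Couchbase Server's throughput figures from the results above.
mongodb_nodes = 9
mongodb_ops = 72_000  # ops/sec at 9 nodes

for couchbase_ops in (186_000, 298_000, 336_000):
    factor = couchbase_ops / mongodb_ops
    print(f"{couchbase_ops} ops/sec -> {factor:.1f}x the nodes "
          f"({factor * mongodb_nodes:.0f} nodes)")
```

That works out to roughly 2.6-4.7x the nodes (23 to 42 servers), which is in the same ballpark as the 3-5x estimate.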

You can find all of the details in the complete report.

Discuss on Hacker News