Why ROXIMITY selected MongoDB

At Compose we love to hear what goes into our customers' decision-making, as it teaches us a lot. In this guest article, ROXIMITY's Dusty Candland explains what went into their decision to use, and stick with, MongoDB.

ROXIMITY's business is location-based advertising where we offer a full-featured platform and technology that lets retailers, brands and venues interact with nearby consumers. This is enabled through a combination of innovative beacon hardware, a robust SDK and a suite of targeted mobile messaging and analytics tools. That combination has been put to work by brands like Mondelez, Autotrader and the Brooklyn Nets to drive exceptional results.

When we initially started development of ROXIMITY, I decided to go with MongoDB. There were three reasons for this choice: geospatial support; redundancy and scalability; and the lack of a rigid schema. If you are thinking about MongoDB, these are all still valid reasons for considering it, and our experience should aid your decision making.

Geospatial

Geospatial querying and storage continue to be important to us, and MongoDB's support for them continues to grow. One of the things I liked initially was the simplified support for spherical data, namely using WGS84 for everything. This means everything is stored in, and queried with, longitude and latitude coordinates. That may not be accurate enough for some applications; if you need more accurate projections, or simply need to use different projections, then PostGIS could be a better option. When we began, storage and query support for geospatial data was limited and we had to use a couple of workarounds to get the functionality we needed, but current versions of MongoDB support all kinds of queries and storage options, including points, lines, and polygons.
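To make the longitude-first convention concrete, here is a sketch of the GeoJSON shapes MongoDB's 2dsphere indexing works with. The field and collection names are hypothetical, not ROXIMITY's actual schema; the dictionaries are built in plain Python so the example runs without a database.

```python
# A venue stored as a GeoJSON Point. Note the coordinate order:
# [longitude, latitude], on the WGS84 datum.
venue = {
    "name": "Example Venue",
    "location": {"type": "Point", "coordinates": [-104.9903, 39.7392]},
}

def near_query(lng, lat, max_meters):
    """Build a $near query: documents within max_meters of a point.

    With a 2dsphere index on "location", MongoDB computes the
    distance on the sphere, in meters.
    """
    return {
        "location": {
            "$near": {
                "$geometry": {"type": "Point", "coordinates": [lng, lat]},
                "$maxDistance": max_meters,
            }
        }
    }

query = near_query(-104.9903, 39.7392, 500)
print(query["location"]["$near"]["$maxDistance"])  # 500
```

With a driver such as PyMongo you would first create the index with `db.venues.create_index([("location", "2dsphere")])` and then pass the query dict to `db.venues.find(query)`.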

Redundancy and scalability

Redundancy and scalability were another big reason I chose MongoDB. Replica sets give you redundancy and some additional read scalability, and we've relied heavily on Compose.io to manage the replica set for us.

One thing that caused trouble was MongoDB's global write lock. While this is being addressed with the per-collection locking of MongoDB 3.0 and the WiredTiger engine's optimistic concurrency, it's still good to optimize for fast writes, which often means minimal indexes and non-growing documents. When things do go wrong, it's often not the slowest queries that are causing trouble; they are usually a symptom rather than the problem. We've found that the trouble often comes from a different collection than the one where the slow queries show up.
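One common way to keep documents non-growing is to pre-allocate every field a document will ever hold, so later writes update values in place rather than growing the document (under the MMAPv1 engine, growth past the padding can force the document to be moved on disk). The sketch below shows a pre-allocated hourly counter document; the field names and bucket layout are illustrative, not ROXIMITY's schema.

```python
def preallocated_hour_doc(beacon_id, hour):
    """Create an hourly counter document with one slot per minute.

    All slots start at 0, so the document never changes size as
    counts come in later.
    """
    return {
        "_id": f"{beacon_id}:{hour}",
        "minutes": {str(m): 0 for m in range(60)},
        "total": 0,
    }

# A later increment touches only pre-existing fields, e.g. with PyMongo:
# db.counts.update_one({"_id": doc_id}, {"$inc": {"minutes.17": 1, "total": 1}})

doc = preallocated_hour_doc("beacon-42", "2015-03-01T13")
print(len(doc["minutes"]))  # 60
```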

If we were going to do this again, we would use more databases to partition the system and give us more write locks. Even with WiredTiger, where write locks will be less of an issue, we would use more databases from the beginning.

Lack of schema

The lack of a rigid schema has turned out to be the least important feature of MongoDB for us. While it was initially helpful as we built one application against the database, it quickly became less helpful when other applications started using the same database. Ideally, each application would have its own database, but in practice that is hard to do while retaining the ability to move quickly. In any case, database migration tools make managing the schema much less painful. All in all, I would not make MongoDB's schema-less nature a consideration for or against using MongoDB.
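When several applications share one schema-less database, a common coping pattern is a lazy, in-application migration: fill in defaults for fields that older documents lack, so readers see one logical schema even though stored documents vary. A minimal sketch, with illustrative field names:

```python
import copy

# Defaults for fields that older documents may be missing.
DEFAULTS = {"timezone": "UTC", "tags": []}

def normalize(doc):
    """Return doc with defaults applied for any missing fields.

    deepcopy keeps the shared DEFAULTS (and its mutable list) from
    being modified by callers.
    """
    out = copy.deepcopy(DEFAULTS)
    out.update(doc)
    return out

old = {"_id": 1, "name": "Venue A"}  # written before "timezone" existed
print(normalize(old)["timezone"])  # UTC
```

The same `normalize` step could also write the filled-in document back, migrating records gradually as they are touched.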

Lessons

We've learned some things along the way. By far the biggest is that MongoDB is not great at storing data when you don't know exactly how you want to use that data later on. This is where relational databases have the edge. Conversely, when you do know how you'll use the data and work with that in mind, MongoDB performs well. We knew this going in but we underestimated the effort needed on the reporting and analytics side of things where you do come up with new ways to work with your data.

At a more practical level, if you do go with MongoDB: