Being able to treat the database as a value has made most of our code highly deterministic, and it removes whole classes of potential "oh, this value changed during execution" errors, since a db value is an immutable, consistent snapshot.
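
A sketch of what this looks like in practice (the user-summary function, the attribute names, and the conn/user-id bindings are all hypothetical): capture one db value and thread it through all reads.

```clojure
(require '[datomic.api :as d])

;; Hypothetical pure function: every read goes against the db value
;; it is handed, never against the connection.
(defn user-summary [db user-id]
  {:name        (:user/name (d/entity db user-id))
   :order-count (d/q '[:find (count ?o) .
                       :in $ ?u
                       :where [?o :order/user ?u]]
                     db user-id)})

;; One snapshot for the whole request: every read sees the same
;; consistent view, even while other processes transact concurrently.
(let [db (d/db conn)]
  (user-summary db user-id))
```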

The result of d/transact includes the resulting db value after the write (:db-after), which removes the consistency issues that traditionally occur when reading your own writes through read replicas.
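
A minimal sketch (attribute names are made up; conn is an existing connection): deref the transaction future and read from :db-after, which is guaranteed to contain the write.

```clojure
;; Assumes (require '[datomic.api :as d]) and a live connection conn.
(let [tx-result @(d/transact conn [{:user/email "a@example.com"}])
      db        (:db-after tx-result)]   ; db value that includes this write
  ;; Guaranteed to see the entity we just transacted; no replica lag.
  (d/q '[:find ?e .
         :where [?e :user/email "a@example.com"]]
       db))
```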

tx-data being represented as simple vectors and maps lets us compose transactions elegantly and makes it much easier to programmatically generate transactions that create or modify multiple entities at once.
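
Because transactions are plain data, composing them is ordinary seq manipulation. A sketch, with hypothetical schema and helper names:

```clojure
;; Assumes (require '[datomic.api :as d]) and a live connection conn.
;; Hypothetical helpers returning tx-data fragments (vectors of maps/vectors).
(defn new-user-tx [email]
  [{:db/id (str "user-" email) :user/email email}])

(defn grant-role-tx [user-tempid role]
  [[:db/add user-tempid :user/role role]])

;; Compose fragments into one transaction that creates and modifies
;; several entities atomically. String tempids unify across fragments.
(let [emails  ["a@example.com" "b@example.com"]
      tx-data (mapcat #(concat (new-user-tx %)
                               (grant-role-tx (str "user-" %) :user.role/admin))
                      emails)]
  @(d/transact conn (vec tx-data)))
```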

The transaction report queue is an elegant way to listen for data changes that should trigger notifications, without tight coupling to the point at which the transaction was executed. If our notification process needs to shut down or restart, we can use d/log and d/tx-range to walk through the log again (lazily, no less!) from our last known notification point.
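
Both halves are a few lines each. A sketch, assuming a hypothetical notify! function and a stored last-notified-t basis point:

```clojure
;; Assumes (require '[datomic.api :as d]) and a live connection conn.
;; Live path: block on the transaction report queue and react to new data.
(def tx-queue (d/tx-report-queue conn))

(future
  (loop []
    (let [{:keys [db-after tx-data]} (.take tx-queue)]
      (notify! db-after tx-data)          ; hypothetical notification fn
      (recur))))

;; Catch-up path after a restart: lazily walk the log from the last
;; transaction we know we handled (nil end means "through end of log").
(doseq [{:keys [t data]} (d/tx-range (d/log conn) (inc last-notified-t) nil)]
  (notify! (d/as-of (d/db conn) t) data))
```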

We have used d/as-of and d/since to time-travel: to understand what a user was seeing at a particular time, or what the system saw while it was executing. Because our code expects the database as an argument, we have confirmed both bugs and correct behavior this way in production.
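
A sketch (report-for stands in for any function that takes a db value; attribute names are made up):

```clojure
;; Assumes (require '[datomic.api :as d]) and a live connection conn.
;; What did a user see at a specific moment? Pass an as-of view to the
;; exact same code that runs against the current db in production.
(let [db-then (d/as-of (d/db conn) #inst "2020-05-02T14:14:00")]
  (report-for db-then user-id))           ; hypothetical report fn

;; Or combine the current db with a since view: resolve the user in the
;; full db, but count only orders asserted since the given instant.
(let [db       (d/db conn)
      db-since (d/since db #inst "2020-05-02")]
  (d/q '[:find (count ?o) .
         :in $ $since ?email
         :where [$ ?u :user/email ?email]
                [$since ?o :order/user ?u]]
       db db-since "a@example.com"))
```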

Transaction metadata allows us to easily record who did what, when, how, and why. We can then query against that metadata, and use the corresponding transaction entities as arguments to d/as-of and d/since to understand what happened in the system.
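
A sketch, assuming hypothetical :audit/* attributes in the schema; the reserved "datomic.tx" tempid addresses the transaction entity itself:

```clojure
;; Assumes (require '[datomic.api :as d]) and a live connection conn.
;; Assert audit facts on the transaction entity as part of the write.
@(d/transact conn
   [{:user/email "a@example.com"}
    {:db/id        "datomic.tx"           ; the tx entity being created
     :audit/user   "alice"                ; hypothetical audit attributes
     :audit/reason "self-service signup"}])

;; Later: query for the transaction by its metadata, then look at the
;; world as of that transaction.
(let [db (d/db conn)
      tx (d/q '[:find ?tx .
                :where [?tx :audit/user "alice"]]
              db)]
  (d/as-of db tx))
```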

Testing and developer setup are massively simplified by speculative transactions via d/with and by the in-memory database, which leave us with no external process to manage or set up.
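
A sketch of a typical test setup: an in-memory connection plus d/with for speculative writes that never persist (schema installation elided):

```clojure
;; Assumes (require '[datomic.api :as d]); schema installation elided.
(def uri "datomic:mem://test")            ; no external process needed
(d/create-database uri)
(def conn (d/connect uri))

;; d/with applies tx-data speculatively, returning a new db value
;; without writing anything to storage.
(let [db  (d/db conn)
      db' (:db-after (d/with db [{:user/email "test@example.com"}]))]
  ;; Assert against the speculative value; the real db is untouched.
  (d/q '[:find ?e .
         :where [?e :user/email "test@example.com"]]
       db'))
```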

Database filtering via d/filter provides an elegant way to re-use queries. For instance, we have queries that report on total activity in our system, and we re-use the exact same query with a database filter applied to break the results down by geographical region or to scope them by time (requests created within the past 7 days, the past month, etc.).
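
A sketch of that pattern with a hypothetical :order/region attribute: the report function never changes, only the db value it is handed.

```clojure
;; Assumes (require '[datomic.api :as d]) and a live connection conn.
;; The report query, written once.
(defn total-activity [db]
  (d/q '[:find (count ?o) .
         :where [?o :order/id]]
       db))

;; d/filter takes a predicate of [db datom]; only datoms for which it
;; returns true are visible to queries against the filtered db.
(defn region-db [db region]
  (d/filter db
            (fn [fdb datom]
              (let [e (d/entity fdb (:e datom))]
                (or (nil? (:order/region e))   ; keep non-order datoms
                    (= region (:order/region e)))))))

;; The exact same query, globally and then scoped to one region.
(total-activity (d/db conn))
(total-activity (region-db (d/db conn) :region/eu))
```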

Note that this is somewhat less efficient than writing a separate function that aggregates per region in a single pass, but for us the total dataset is still fairly small, so the extra CPU time is irrelevant. It is more important to us to be able to flexibly filter data out of a query, and the ease of implementation is hard to beat.