You might have heard the story of Stack Overflow switching to their own C# ORM framework because all the existing ones were too slow. Yesterday this framework, named Dapper, was published as an open-source project. Dapper also includes a simple performance benchmark along with results for pretty much all the popular C# ORMs. This caught my attention and I thought: why not implement the same benchmark for the ODB C++ ORM and see how it stacks up against the C# crowd?

In a nutshell, the idea of the benchmark is to measure the time it takes to pull 500 random post objects from the database. The post object here is meant to simulate a Stack Overflow question. Its C++ version is shown below:

#pragma db object
class post
{
public:
  #pragma db id
  unsigned long id;

  std::string text;
  boost::posix_time::ptime creation_date;
  boost::posix_time::ptime last_change_date;

  int counter1;
  int counter2;
  int counter3;
  int counter4;
  int counter5;
  int counter6;
  int counter7;
  int counter8;
  int counter9;
};

Once the above class is compiled with the ODB compiler, the resulting database schema, which is identical to the one used in Dapper’s benchmark, looks like this:

CREATE TABLE post (
  id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  text TEXT NOT NULL,
  creation_date DATETIME,
  last_change_date DATETIME,
  counter1 INT NOT NULL,
  counter2 INT NOT NULL,
  counter3 INT NOT NULL,
  counter4 INT NOT NULL,
  counter5 INT NOT NULL,
  counter6 INT NOT NULL,
  counter7 INT NOT NULL,
  counter8 INT NOT NULL,
  counter9 INT NOT NULL)

First, the ODB benchmark loads 10000 objects into the database:

transaction t (db->begin ());

for (unsigned long i (0); i < total_objects; ++i)
{
  post p;
  p.id = i;
  p.text = text;
  p.creation_date = second_clock::local_time () - time_duration (i, 0, 0);
  p.last_change_date = second_clock::local_time ();
  p.counter1 = i + 1;
  p.counter2 = i + 2;
  p.counter3 = i + 3;
  p.counter4 = i + 4;
  p.counter5 = i + 5;
  p.counter6 = i + 6;
  p.counter7 = i + 7;
  p.counter8 = i + 8;
  p.counter9 = i + 9;
  db->persist (p);
}

t.commit ();

Then it runs the following test function a couple of hundred times while measuring the time taken:

void
test (database& db)
{
  post p;
  transaction t (db.begin ());

  for (unsigned long i (0); i < 500; ++i)
  {
    unsigned long id (rand () % total_objects);
    db.load (id, p);
  }

  t.commit ();
}

I used the MySQL database server to run this benchmark (the C# test uses Microsoft SQL Server). In my case, both the database and the benchmark were running on the same multi-core, 64-bit Linux machine. As you can see on Dapper’s web site, the best performance one can get with C# is 47ms (hand-coded) per 500 iterations, with Dapper coming second at 49ms. ODB does it in 24ms. Half the time of the hand-coded C# version is not bad! As a bonus, I also ran the ODB test with SQLite (I just had to recompile with different options; no source code changes were required). The same benchmark using SQLite takes 7ms (14μs per object)! Now, that is fast.

UPDATE: As a reader pointed out, it would be useful to know the hardware that was used for the benchmarks. Unfortunately, Dapper’s results don’t mention the test hardware. The ODB results are for a Xeon E5520 2.27GHz machine.