There's a lot of FUD out there about the performance of various OO modules, particularly Mouse. So let's set it straight with some benchmarking.

I've chosen to simulate that bugaboo of OO performance in Perl, the simple accessor: the one you're going to call millions of times, and that you'll be sorely tempted to reimplement with a hash or strip the argument checks out of for "performance". To make it a little more realistic, I'm benchmarking both getting and setting, as well as a simple argument check.

Each benchmark gets or sets an integer once via an accessor (except in the case of the plain hash). The object is created outside the benchmark to avoid distortion.

Each accessor validates that the argument is an integer.
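The hand-written ("manual") class isn't shown in the post, but for readers following along, here's a minimal sketch of what such a class typically looks like. The Foo::Manual name and the exact regex used for the integer check are my assumptions, not the benchmark's actual code:

```perl
package Foo::Manual;

use strict;
use warnings;
use Carp;

sub new {
    my $class = shift;
    return bless {}, $class;
}

# A read/write accessor with a simple integer check,
# the hand-rolled style the OO modules are compared against.
sub bar {
    my $self = shift;
    if (@_) {
        my $val = shift;
        croak "bar must be an integer"
            unless defined $val && $val =~ /^-?\d+$/;
        $self->{bar} = $val;
    }
    return $self->{bar};
}

1;
```

This combined getter/setter is the common hand-rolled idiom: the check runs only on set, and a bare hash lookup serves the get.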

I picked the following modules because they are either popular or make performance claims:

Mouse, both XS and pure Perl

Moose

Moo, with and without Sub::Quote

A plain hash

A class written by hand (aka "manual")

I also threw in Object::Tiny and Object::Tiny::XS, even though the comparison isn't really fair: they're read-only and do no argument checks. They sacrifice everything for performance. Let's see if it's worth it.
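To show just how much Object::Tiny sacrifices, here's roughly what a class looks like (a sketch; the Foo::Tiny name is mine). Declaring the field list is the entire class, and the generated accessor is read-only with no checks:

```perl
package Foo::Tiny;

# Generates new() plus a read-only bar() accessor, no validation at all
use Object::Tiny qw(bar);

package main;

# Attributes can only be set at construction time
my $obj = Foo::Tiny->new(bar => 42);
print $obj->bar, "\n";
```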

Finally, I did a hash with no argument checks, just to get an upper bound.

I won't clog this post with the full benchmark code, but here's what the Moose class looks like for example.

package Foo::Moose;
use Moose;
has bar => (is => 'rw', isa => "Int");
__PACKAGE__->meta->make_immutable;
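The harness itself is easy to sketch with the core Benchmark module. This self-contained example (the Foo::Demo stand-in class and the one-second run time are my choices, not the post's actual harness) times a checked accessor against a bare hash, with the object created outside the timed code as described above:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# A stand-in object with a checked accessor, for comparison with a bare hash
package Foo::Demo;
sub new { bless {}, shift }
sub bar {
    my $self = shift;
    if (@_) {
        die "bar must be an integer" unless $_[0] =~ /^-?\d+$/;
        $self->{bar} = shift;
    }
    return $self->{bar};
}

package main;

# Objects created outside the timed code to avoid distortion
my $obj = Foo::Demo->new;
my %hash;

cmpthese(-1, {
    'accessor set' => sub { $obj->bar(42) },
    'hash set'     => sub { $hash{bar} = 42 },
});
```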

Results

And the results, from slowest to fastest getter, with the cheaters at the bottom.

Name              Get     Set
-----------------------------------------
Moo              -11%     -4%
manual             0%      0%
Mouse, no XS       0%    -33%
Moo w/quote_sub    5%    -46%
Moose              8%    -40%
Mouse            145%    468%
manual, no check   0%    153%
Object::Tiny      20%     n/a
Object::Tiny::XS 226%     n/a
hash, no check   279%  1,116%
hash             289%    147%

That's with Perl 5.12.2 on OS X using the latest versions of all those modules as of this writing (Moose 1.24, Mouse 0.91, Moo 0.009007, Object::Tiny 1.08, Object::Tiny::XS 1.01).

The percentages are the % improvement over a hand-written class (manual).

You can get all the data as a CSV.

Conclusions

The go-to module is clearly Mouse with XS. The power of Moose, but faster than everything equivalent, faster than writing it by hand, faster (on setting with equivalent validation) than a raw hash, and no required dependencies. Unless you need the meta capabilities of Moose, there's little reason to use anything else.

If you want a read-only object with no argument checks, use Object::Tiny::XS, hands down.

Object::Tiny, on the other hand, is pretty poky; significantly slower than Mouse with XS. Given its extreme limitations, if you can use XS then there's no point to Object::Tiny.

Moose and pure Perl Mouse stack up about the same and fare well against a hand-written class, though they lose out on setting. You should be getting far more often than setting, so you won't see much performance gain from hand-writing methods.

The big surprise is the newcomer, Moo... but not in a good way. It's "an extremely light-weight, high-performance Moose replacement" but the numbers don't stack up. Absolutely creamed by Mouse with XS, it's the slowest getter of them all with a slight edge in setting.

Another surprise is Moo's quote_sub. This is supposed "to create coderefs that are inlineable, giving us a handy, XS-free speed boost." The Moo docs suggest using it for "isa" type checks, so I did. The numbers don't pan out; it's worse than a regular sub. I'm going to assume it's a bug, and I've filed a report. Curiously, using quote_sub on isa made the getter faster... which makes no sense.
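For reference, a Moo class using quote_sub for the isa check looks roughly like this (a sketch; the class name and the exact check string are my assumptions). The idea behind quote_sub is that the check is supplied as a string of code, which Moo can splice directly into the generated accessor rather than calling out to a separate coderef:

```perl
package Foo::MooQS;

use Moo;
use Sub::Quote qw(quote_sub);

has bar => (
    is  => 'rw',
    # The isa check is a string of code, eligible for inlining
    isa => quote_sub(q{
        die "bar must be an integer"
            unless defined $_[0] && $_[0] =~ /^-?\d+$/;
    }),
);

1;
```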

The final conclusion is this: performance is no longer an excuse for not using OO. Accessors are what will be called most often and they're what tempts programmers to micro optimize and throw out their abstractions. Mouse and Object::Tiny::XS can blow the doors off what you can write by hand. The performance improvements in Mouse show that abstraction not only makes the code cleaner, but it allows radical optimizations which you can have just by upgrading to a new version.

Bias

Every benchmark has its biases. Here are the ones in this one that I can identify.

Of course the version of Perl, operating system and versions of modules all matter. One particularly large gap will be in Mouse. Its XS optimizations are a fairly recent thing, so older versions of Mouse will not perform well.

The benchmark uses a very simple type check, an integer. I chose that because it's what an awful lot of methods are going to use. Because it's so simple and common, Moose, and especially Mouse, have a clear opportunity to optimize for it. A less common or more complicated type check might have impacted performance more. A string might provide different numbers.

Finally, I really like Mouse. :-)

Update

Corrected my statement about Mouse being faster than a raw hash. That only happens when setting with validation. When I originally did the benchmarks they combined setting and getting.

Added a wider conclusion about the role of performance in choosing an OO system.