While memory is a dry topic, many of you will have experienced Blender grinding to a halt (or worse) with big files the workstation can't handle.

Yesterday Beorn was having trouble loading a scene that was taking a lot of memory. After concluding it wasn't a memory leak in Blender, I looked into why Blender would report 75MB in use while the system monitor showed over 700MB.

It turns out this is because the operating system's allocator can't always deal efficiently with an application's memory usage, and the process ends up holding a lot more RAM than it is actively using.

So I tested jemalloc, an open-source drop-in replacement for the operating system's memory allocation calls, used by Firefox, Facebook and FreeBSD according to their site.

I was surprised to find memory usage went down, by up to 1GB in some cases, without noticeable slowdown; jemalloc used less memory in almost every case.

It can also decrease render times in cases where the system starts to use virtual memory.

The first example loads a complex scene and then a blank file; notice there is over a 600MB difference.

The second graph shows rendering the Sintel model. I'd like to have made a few more examples, but I don't have much time right now.

While testing, we found Blender exposed a bug in jemalloc's thread cache. Jason Evans was kind enough to look into the problem for us, fixing it the next day in bugfix version 1.01.

jemalloc is now used on every workstation; if all goes well we may include it with Blender, as Firefox does.

For more info see: http://www.canonware.com/jemalloc/

– Campbell

Notes…

Hoard was also tested, but it didn’t improve memory usage all that much.

Edited: it now reads 'decrease render times'.

Comparing against the default allocator on Linux (a modified ptmalloc), one could argue the problem is caused by bad memory-usage patterns within Blender; from what I have read, the default malloc on Linux is quite good. Nevertheless, jemalloc may prove to be an advantage for us.

For anyone who wants to test on *nix, you don't need to rebuild Blender; just pre-load the library:

LD_PRELOAD=/usr/lib/libjemalloc.so blender.bin