The Garbage Collector is really strange business in Ruby land. For me, it is currently the major performance drain. If you are not aware of its limitations, here's a list:



1. The GC is mark and sweep: it needs to scan the whole heap for each run, so it is directly affected by heap size, O(n).
2. The GC cannot be interrupted, hence all threads must wait for it to finish (a shameful pause on big heaps).
3. The GC marks objects in the objects themselves, destroying any value of copy-on-write.
4. The GC does not (edit: usually) give memory back to the system. What goes in does not (edit: usually) go out.
5. It is a bit on the conservative side, meaning garbage can stay around because the GC is not sure that it really is garbage.
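As a rough illustration of point 1, allocation pressure alone is enough to force full collections, each of which scans the entire heap. A minimal sketch (the allocation count is arbitrary, and exact GC counts vary by Ruby version and settings):

```ruby
# Minimal sketch: churning out short-lived garbage forces mark-and-sweep
# runs, each of which walks the whole heap.
before = GC.count
100_000.times { Object.new }     # allocate lots of short-lived objects
runs = GC.count - before
puts "100k allocations triggered #{runs} GC run(s)"
```

On a heap that has grown large, each of those runs gets proportionally more expensive, which is exactly the O(n) cost described above.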

Needless to say, some of these are being addressed, especially 3 and 5, but the patches have not yet been accepted into the current Ruby release. I believe, though, that they will find their way into 1.8.x, which is being maintained by Engine Yard. The EY guys are working really hard to solve the issues of Ruby as a server platform, which is its most popular use today thanks to Rails.

Alas, my issue today involves Ruby 1.9.1 (it does not affect 1.8.x). See, I have built this toy server to experiment with multi-process applications and some Unix IPC facilities. I made the design a bit modular, to make it easier to test and debug different aspects of the stack. So I have these tcp and http handler modules that extend the connection object (a socket) whenever a connection is accepted. Here's a sample:

```ruby
conn = server_socket.accept
conn.extend HttpHandler
# ...
```

This worked really well, and I was even able to chain handlers to get more stack functionality (a handler simply includes those that it requires). All was great, until I looked at memory usage.
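The chaining works because extend mixes in a module's whole ancestry, not just its own methods. A toy sketch (the TcpHandler/HttpHandler bodies here are made up for illustration, and a plain Object stands in for an accepted socket):

```ruby
# Hypothetical handler chain: HttpHandler pulls in TcpHandler, so extending
# a connection with HttpHandler gives it both layers of the stack.
module TcpHandler
  def read_chunk
    "raw bytes"
  end
end

module HttpHandler
  include TcpHandler               # a handler includes those it requires
  def handle_request
    "HTTP over #{read_chunk}"
  end
end

conn = Object.new                  # stands in for an accepted socket
conn.extend HttpHandler
puts conn.handle_request           # the TcpHandler method is reachable too
```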

I discovered that after showering the server with requests it starts to grow in size. This is acceptable, as it is making way for new objects. But given the way the GC works, it should have allocated enough heap locations after a few of those ab runs. On the contrary, even when I am hitting the same file with ab, the server keeps growing. After 10 or more ab runs (each doing 10,000 requests) it is still consuming more memory. So I suspected there was a leak somewhere. I tested a hello world and found that the increase was very consistent: every 10K requests the process gains 0.1 to 0.2 MB (10 to 20 bytes per request). So I started removing components one after another till I was left with a bare server that only requires socket and reactor.

When I tested that server, the process started to gain memory, then after 3 or 4 ab runs it stabilized. It would no longer increase its allocated memory no matter how many times I ran ab against it. So the next logical move was to re-insert the first level of the stack (the tcp handler module). Once I did that, the issue started appearing again. So the next test was to disable the use of the tcp handler but still decorate my connections with it. The issue still appeared. Since the module does not override Module#extended to do any work when it extends an object, it became clear that extend itself is the guilty party.

Instead of Object#extend, I tried reopening the BasicSocket class and including the required module there. After doing that, the memory usage pattern resembled the bare-bones server: it would increase for a few runs and then remain flat as long as you are hitting the same request.
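The workaround looks roughly like this, a sketch assuming a made-up HttpHandler module: the mixin is paid for once at load time, and every socket class down the hierarchy (TCPServer, TCPSocket, ...) inherits it, so nothing needs to be extended per connection.

```ruby
require 'socket'

# Illustrative handler module; the real one lives in the server's stack.
module HttpHandler
  def handle_request
    "handled"
  end
end

# Reopen BasicSocket once; every socket instance now carries the mixin.
class BasicSocket
  include HttpHandler
end

server = TCPServer.new("127.0.0.1", 0)   # port 0 = pick any free port
puts server.respond_to?(:handle_request) # → true
```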

To isolate the problem further I created this script:

```ruby
# This code is Ruby 1.9.x and above only

@extend = ARGV[0]

module BetterHash
  def blabla
  end
end

unless @extend
  class Hash
    include BetterHash
  end
end

t = Time.now
1_000_000.times do
  s = {}
  s.extend BetterHash if @extend
end
after = Time.now - t
puts "done with #{GC.count} gc runs after #{after} seconds"

sleep # so that it doesn't exit before we check the memory
```

using extend:  351 GC runs, 9.108 seconds, 18.7 MB
using include: 117 GC runs, 0.198 seconds, 2.8 MB

Besides being much faster, the resulting process was much smaller: around 16 MB smaller. I suspect that the leak is around 16 bytes, or a little less, per extend invocation. This means that a server that uses a single extend per request will grow by around 160 KB after every 10K requests. Not that huge, but it will pile up fast if the server is left running for a while under heavy load.



A quick grep through the Rails sources showed that this pattern is used heavily throughout the code. But it is used to extend base classes rather than objects. Hence it will not be invoked on every request, and the effect will be mostly limited to the initial process size (a few bytes, actually). You should avoid using it dynamically at request-serving time, though, till it gets fixed.
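To make the distinction concrete, here is a sketch of the two usages (the Helpers module and Responder class are illustrative, not taken from Rails):

```ruby
module Helpers
  def helper
    :ok
  end
end

# What Rails mostly does: extend a class object once, at load time.
# This pays the extend cost a single time, at boot.
class Responder; end
Responder.extend Helpers
puts Responder.helper              # helper is now a class-level method

# What to avoid on 1.9.1: extending a fresh object on every request,
# e.g. `conn.extend Helpers` inside the accept loop — each call leaks
# a little memory that is never reclaimed.
```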