A few weeks ago I started wondering about 'the test that occasionally and randomly breaks' in a large test suite at my job. The test, called 'overloaded origin', tests a situation where a web server becomes overwhelmed with requests and a proxy server (the code being tested) has to handle the situation gracefully.



The test works by having a dummy web server that can randomly decide to (a) return a normal web page for a request, (b) read the HTTP headers and then do nothing for 30 seconds, or (c) read the HTTP headers, wait 30 seconds and then send a valid response. The proxy server is hit by 5,000 clients simultaneously requesting the same URL.
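The three behaviours can be sketched roughly like this. This is a hypothetical stand-in for the test's origin server, not the suite's actual code: the names are mine, the 30-second stall is shortened so the demo terminates, and the "do nothing" case is approximated with a sleep (the real server presumably held the connection open):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"net/http/httptest"
    	"time"
    )

    // stall stands in for the test's 30-second delay; shortened here so
    // this demo finishes quickly.
    const stall = 10 * time.Millisecond

    // flakyOrigin returns a handler pinned to one of the three behaviours.
    // The real test would pick a behaviour at random per request (e.g.
    // with rand.Intn(3)).
    func flakyOrigin(choice int) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		switch choice {
    		case 0: // (a) return a normal page immediately
    			fmt.Fprint(w, "<html>ok</html>")
    		case 1: // (b) read the headers, then go quiet
    			time.Sleep(stall)
    		case 2: // (c) wait, then send a valid response
    			time.Sleep(stall)
    			fmt.Fprint(w, "<html>late</html>")
    		}
    	}
    }

    // fetch GETs a URL and returns the body as a string.
    func fetch(url string) string {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "error: " + err.Error()
    	}
    	defer resp.Body.Close()
    	b, _ := io.ReadAll(resp.Body)
    	return string(b)
    }

    func main() {
    	fast := httptest.NewServer(flakyOrigin(0))
    	defer fast.Close()
    	slow := httptest.NewServer(flakyOrigin(2))
    	defer slow.Close()

    	fmt.Println(fetch(fast.URL)) // normal page, immediately
    	fmt.Println(fetch(slow.URL)) // valid page, but only after the stall
    }

With thousands of concurrent clients and behaviours chosen at random, each run of the real test exercises a different interleaving, which is exactly why a genuine bug can surface only occasionally.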



And sometimes, every now and again, this test failed.



And like many engineers I'd ignored it for a long time. But it kept worrying me because it must have meant something: computers are deterministic, after all. I was spurred to action by a colleague suggesting that the test be disabled because it was 'flaky'.



It took me two days of continuous work to find out what was wrong, and the answer explained other occasional problems that had been seen with the code. Fixing it made the test suite 100% stable on all platforms. That 'randomly failing test' was really 'a genuine bug in the code'.



But getting to that point was tricky because this was a system-level test with clients, servers, the proxy, and a memcached server in the middle. It turned out that the memcached server was the problem. In the end, I had to implement my own memcached server (a simple one) so that I had complete control over the environment. In doing so, I discovered the root cause of the problem.
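For a sense of scale, a minimal memcached stand-in is not much code. This is a sketch of my own (not the one from the test suite): just enough of memcached's text protocol — 'set' and 'get' — for a client library to talk to, with the demo client wired up in main so the program exercises itself and exits:

    package main

    import (
    	"bufio"
    	"fmt"
    	"io"
    	"net"
    	"strconv"
    	"strings"
    	"sync"
    )

    // miniMemcache is a toy in-memory store speaking a fragment of the
    // memcached text protocol.
    type miniMemcache struct {
    	mu   sync.Mutex
    	data map[string][]byte
    }

    func (m *miniMemcache) serve(c net.Conn) {
    	defer c.Close()
    	r := bufio.NewReader(c)
    	for {
    		line, err := r.ReadString('\n')
    		if err != nil {
    			return
    		}
    		f := strings.Fields(line)
    		if len(f) == 0 {
    			continue
    		}
    		switch f[0] {
    		case "set": // set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    			if len(f) < 5 {
    				fmt.Fprint(c, "ERROR\r\n")
    				continue
    			}
    			n, _ := strconv.Atoi(f[4])
    			buf := make([]byte, n+2) // payload plus trailing \r\n
    			if _, err := io.ReadFull(r, buf); err != nil {
    				return
    			}
    			m.mu.Lock()
    			m.data[f[1]] = buf[:n]
    			m.mu.Unlock()
    			fmt.Fprint(c, "STORED\r\n")
    		case "get": // get <key>\r\n
    			m.mu.Lock()
    			v, ok := m.data[f[1]]
    			m.mu.Unlock()
    			if ok {
    				fmt.Fprintf(c, "VALUE %s 0 %d\r\n%s\r\n", f[1], len(v), v)
    			}
    			fmt.Fprint(c, "END\r\n")
    		default:
    			fmt.Fprint(c, "ERROR\r\n")
    		}
    	}
    }

    func main() {
    	ln, err := net.Listen("tcp", "127.0.0.1:0")
    	if err != nil {
    		panic(err)
    	}
    	m := &miniMemcache{data: make(map[string][]byte)}
    	go func() {
    		for {
    			c, err := ln.Accept()
    			if err != nil {
    				return
    			}
    			go m.serve(c)
    		}
    	}()

    	// Talk to it over the wire, as a memcached client library would.
    	conn, _ := net.Dial("tcp", ln.Addr().String())
    	fmt.Fprint(conn, "set greeting 0 0 5\r\nhello\r\n")
    	fmt.Fprint(conn, "get greeting\r\n")
    	r := bufio.NewReader(conn)
    	for i := 0; i < 4; i++ { // STORED, VALUE, hello, END
    		line, _ := r.ReadString('\n')
    		fmt.Print(line)
    	}
    }

The point of owning the server is that every response — and, crucially, every delay — is now under the test's control.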



The program has a timeout used to stop it waiting for memcached if it doesn't respond quickly (within hundreds of milliseconds). Here are the lines of code that handle the memcached timeout (this is from inside the proxy server being tested).

    var Timeout time.Duration
    Timeout = time.Duration(conf("timeout", 100)) * time.Millisecond

    cache := memcache.New(Servers...)
    cache.Timeout = Timeout * time.Millisecond
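For reference, a Go time.Duration is just a count of nanoseconds, so multiplying an already-scaled Duration by time.Millisecond scales it by a further million. A small sketch (assuming conf("timeout", 100) yields the default 100 here) shows what each of those two multiplications produces:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// First multiplication: 100 (a bare count) becomes 100ms. This is
    	// the intended scaling.
    	timeout := time.Duration(100) * time.Millisecond
    	fmt.Println(timeout) // 100ms

    	// Second multiplication: 100ms (already nanoseconds) times
    	// time.Millisecond (1,000,000) is 100,000 seconds.
    	fmt.Println(timeout * time.Millisecond) // 27h46m40s
    }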