Today we found a bug in the way our connection pool works. The code in question?

public HttpClient GetClient(TimeSpan timeout, OperationCredentials credentials, Func<HttpMessageHandler> handlerFactory)
{
    var key = new HttpClientCacheKey(timeout, credentials);
    var queue = cache.GetOrAdd(key, i => new ConcurrentQueue<Tuple<long, HttpClient>>());
    Tuple<long, HttpClient> client;
    while (queue.TryDequeue(out client))
    {
        // expire clients that have been idle for too long
        if (Stopwatch.GetTimestamp() - client.Item1 >= _maxIdleTimeInStopwatchTicks)
        {
            client.Item2.Dispose();
            continue;
        }
        client.Item2.CancelPendingRequests();
        client.Item2.DefaultRequestHeaders.Clear();
        return client.Item2;
    }
    return new HttpClient(handlerFactory())
    {
        Timeout = timeout
    };
}

public void ReleaseClient(HttpClient client, OperationCredentials credentials)
{
    var key = new HttpClientCacheKey(client.Timeout, credentials);
    var queue = cache.GetOrAdd(key, i => new ConcurrentQueue<Tuple<long, HttpClient>>());
    queue.Enqueue(Tuple.Create(Stopwatch.GetTimestamp(), client));
}

Can you see the bug?

I'll let you think about it for a while.

Done?

Okay, here we go!

The basic idea is that we'll keep a bunch of clients connected, and free them if they have been idle for too long.

The issue is with the use of the concurrent queue. Consider the case of a sudden spike in traffic, where we suddenly need 100 connections. This code will create them, and then return them to the pool as the requests complete.

We then go back to normal levels, handling only 20 concurrent requests, but because we are using a queue, we'll actually cycle all 100 clients through the queue all the time. Each checkout takes the oldest client and each release stamps it with a fresh timestamp, so no client ever sits idle long enough to hit the expiry check, and all 100 stay open.
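The effect is easy to see with a small simulation (Python here, just to model the behavior; the pool sizes and idle limit are made up for illustration). With FIFO checkout, every client's timestamp keeps getting refreshed; with LIFO checkout, the clients at the bottom go stale and can actually be expired.

```python
import collections

MAX_IDLE = 10  # ticks a client may sit unused before it counts as expired


def surviving_clients(order, spike, steady, rounds):
    """Model the pool: `spike` clients created at time 0, then `rounds`
    ticks where only `steady` clients are checked out and returned."""
    # each entry is (last_used_timestamp, client_id)
    pool = collections.deque((0, i) for i in range(spike))
    now = 0
    for _ in range(rounds):
        now += 1
        # check out `steady` clients for this tick
        if order == "queue":
            taken = [pool.popleft() for _ in range(steady)]  # FIFO: oldest first
        else:
            taken = [pool.pop() for _ in range(steady)]      # LIFO: freshest first
        # release them back with a fresh timestamp
        for _, client_id in taken:
            pool.append((now, client_id))
    # count clients that would survive the idle-time check
    return sum(1 for last_used, _ in pool if now - last_used <= MAX_IDLE)


# FIFO cycles all 100 clients, so every one stays "fresh" and open;
# LIFO reuses the same 20, letting the other 80 age out.
print(surviving_clients("queue", spike=100, steady=20, rounds=100))  # 100
print(surviving_clients("stack", spike=100, steady=20, rounds=100))  # 20
```

Swapping the `ConcurrentQueue` for a `ConcurrentStack` changes the reuse pattern from "rotate through everything" to "reuse the hottest few", which is what lets the idle-expiry check in `GetClient` actually reclaim the spike.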