After some two months of coding, Michal Novotný and I are close to having the first "private testing" build of the new and simplified HTTP cache back-end that is stable enough to share.

The two main goals we’ve met are:

Be resilient to crashes and process kills

Get rid of any UI hangs or freezes (a.k.a janks)

We've abandoned the current disk format and now use a separate file for each URL, however small it is. Each file carries self-check hashes to verify its own integrity, so no fsyncs are needed. Everything is asynchronous or fully buffered, and a single background thread handles all I/O: opening, reading, and writing. On Android we write to the context's cache directory, so the cached data are actually treated as such by the system.
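To illustrate the self-check idea, here is a conceptual sketch in Python (not the actual C++ back-end code; the function names and the SHA-1 choice are assumptions for the example). Each entry file stores a hash of its payload, so a torn write left behind by a crash or process kill is detected on the next read and simply treated as a cache miss, with no fsync required:

```python
# Conceptual sketch only: a per-URL cache file that embeds a content hash
# so corruption from a crash or process kill is detected on read.
import hashlib

def write_entry(path, data: bytes):
    # Prepend a 20-byte SHA-1 of the payload; a torn or partial write
    # will fail the check on the next read instead of poisoning the cache.
    digest = hashlib.sha1(data).digest()
    with open(path, "wb") as f:
        f.write(digest + data)

def read_entry(path):
    with open(path, "rb") as f:
        blob = f.read()
    digest, data = blob[:20], blob[20:]
    if hashlib.sha1(data).digest() != digest:
        return None  # treat a corrupted entry as a cache miss
    return data
```

The point of the design is that durability is traded for detectability: the cache never needs to block on fsync, because a damaged entry is cheap to recognize and re-fetch.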

I've performed some first tests using http://janbambas.cz/ as a test page; as I write this post it contains some 460 images. Testing was done on a relatively fast machine, but the important variable is storage speed. I had two extremes available: an SSD and an old, painfully slow microSD card in a USB reader.

Testing with a microSD card:

| First-visit load | Full load | First paint |
| --- | --- | --- |
| mozilla-central | 16s | 7s |
| new back-end | 12s | 4.5s |
| new back-end, separate threads for open/read/write | 10.5s | 3.5s |

| Reload, already cached and warmed | Full load | First paint |
| --- | --- | --- |
| mozilla-central | 7s | 700ms |
| new back-end | 5.5s | 500ms |
| new back-end, separate thread for open/read/write | 5.5s | 500ms |

| Type URL and go, cached and warmed | Full load | First paint |
| --- | --- | --- |
| mozilla-central | 900ms | 900ms |
| new back-end | 400ms | 400ms |

| Type URL and go, cached but not warmed | Full load | First paint |
| --- | --- | --- |
| mozilla-central | 5s | 4.5s |
| new back-end | ~28s | 5-28s |
| new back-end, separate threads for open/read/write *) | ~26s | 5-26s |

*) Here I'm getting unstable results. I'm doing more testing with more concurrent open and read threads; so far it seems there is not much effect, and the jitter in the time measurements is just noise.

I will report on concurrent threaded I/O in more detail in a later post, since I find it quite an interesting space to explore.
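For the curious, the kind of experiment involved can be sketched like this (a hypothetical micro-benchmark in Python, not Firefox code; all names here are illustrative): read the same set of small files with different thread-pool sizes and compare wall-clock times on a given storage device.

```python
# Illustrative micro-benchmark sketch: does reading many small files
# with more I/O threads actually go faster on this storage device?
import time
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    with open(path, "rb") as f:
        return f.read()

def timed_read_all(paths, workers):
    # Read every file on a pool of `workers` threads and time the whole batch.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(read_file, paths))
    return time.monotonic() - start, results
```

On rotational or SD-card storage the answer is far from obvious, since concurrent requests can make seek patterns worse rather than better, which matches the noisy numbers above.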

Clearly, the cold "type and go" test case shows that block-files beat us here. But the big difference is that the UI stays completely jank-free with the new back-end!

Testing on an SSD disk:

The results are not that different between the current and the new back-end; there is only a small regression in the warmed and cold "go to" test cases:

| Type URL and go, cached and warmed | Full load | First paint |
| --- | --- | --- |
| mozilla-central | 220ms | 230ms |
| new back-end | 310ms | 320ms |

| Type URL and go, cached but not warmed | Full load | First paint |
| --- | --- | --- |
| mozilla-central | 600ms | 600ms |
| new back-end | 1100ms | 1100ms |

Having multiple threads seems to have no effect, as far as the precision of my measurements goes.

At this moment I am not sure what causes the regression in both "go to" cases on an SSD, but I believe it's just a matter of some simple optimizations, such as delivering more than the current 4096 bytes per thread loop, or the fact that we don't cache redirects yet, which is a known bug.
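To illustrate the first of those ideas, here is a minimal sketch (the function name and default are assumptions for the example, not the actual cache code): the read loop that feeds data to the consumer can take a tunable chunk size, so large entries need fewer round trips between the I/O thread and the consumer.

```python
# Sketch of the idea behind "deliver more than 4096 bytes per thread loop":
# a read loop with a tunable chunk size, so fewer iterations (and fewer
# hops between the I/O thread and the consumer) are needed per entry.
def deliver_chunks(f, chunk_size=4096):
    """Yield the file-like object's content in chunk_size pieces."""
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        yield chunk
```

Raising the chunk size cuts per-iteration overhead at the cost of slightly larger buffers, which is exactly the kind of cheap tuning hinted at above.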

Still here and want to test it yourself? Test builds can be downloaded from the 'gum' project tree. Disclaimer: the code is very, very experimental at this stage, so use at your own risk!