I came up with an idea a long time ago and never got around to implementing it, and I would like to know whether it is practical, that is, whether it would significantly decrease page-load times in modern browsers. It relies on two observations: related tasks are often completed more quickly when done together in bulk, and the browser could be downloading content for other pages, guided by a statistical model, instead of sitting idle while the user reads. I've pasted below an excerpt from what I originally wrote, which describes the idea.

When people visit websites, I conjecture that a probability function P(q, t), where q is an integer identifying a webpage and t is a non-negative value representing the time of day, can predict the sequence of pages a typical user visits accurately enough to justify requesting and loading the HTML documents in advance. For a given website, let the document that appears to be the "main page" of the site, through which users reach its other sections, be the root of a tree structure. The probability that the user visits this root node can be handled in two ways. If the user allows a background process to start with the operating system and pre-fetch pages (using the process elaborated below) from sites the user frequently opens right after launching the browser, then whether a given site has its pages pre-fetched can be decided by a self-adapting heuristic model based on the user's history (or by manual input). Otherwise, if the user does not want such a process, the value of P at the root node is irrelevant, since pre-fetching only begins after the user visits the site's main page.
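The tree described above might be represented as follows. This is a minimal sketch, assuming Python; the class and field names (`PageNode`, `prob_by_hour`) are illustrative, not from the original write-up, and P(q, t) is modeled as a per-node lookup table keyed by hour of day.

```python
from dataclasses import dataclass, field

@dataclass
class PageNode:
    """A node in the site tree: a page plus the odds of reaching it."""
    url: str
    # P(q, t) as a lookup table: hour of day -> probability that this
    # page is visited next, given that its parent was visited.
    prob_by_hour: dict[int, float] = field(default_factory=dict)
    children: list["PageNode"] = field(default_factory=list)

    def p(self, hour: int) -> float:
        """Probability of visiting this node at the given hour (0 if unseen)."""
        return self.prob_by_hour.get(hour, 0.0)

# Example: the site's main page is the root; one child is visited
# 71/80 of the time at 7 AM (matching the Reddit example below).
root = PageNode("https://reddit.com/")
wtf = PageNode("https://reddit.com/r/WTF", prob_by_hour={7: 71 / 80})
root.children.append(wtf)
```

Logging a browsing session would then amount to walking this tree from the root and updating each visited node's table for the current hour.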

Each child in the tree described earlier is associated with its own probability function P(q, t) (this function can be a lookup table storing time-webpage pairs). Thus, the sequences of pages the user visits over time are logged in this tree structure. For instance, at 7:00 AM there may be a 71/80 chance that I visit the "WTF" section of Reddit after loading that site's main page. Based on the value of P at each node, chains of pages extending to a certain depth from the root whose net probability of being followed, P_c, exceeds a threshold, P_min, are requested as soon as the user visits the site's main page. If one page finishes downloading before another has been processed, a thread pool is used so that another core is assigned the task of parsing the next page in the parse queue. Hopefully, in this manner, a large portion of the pages the user actually clicks could be displayed much more quickly than they would be otherwise.
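The chain-selection and thread-pool steps above can be sketched as follows. This is a hedged, self-contained illustration, not the actual implementation: the tree is a nested dict of url -> (link probability at the current hour, subtree), P_min and the depth limit are passed as parameters, and `fetch` is a stand-in for the real download-and-parse routine.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy tree for one site at one hour of day.
# Each entry: url -> (probability the link is followed, subtree).
tree = {
    "https://reddit.com/r/WTF": (71 / 80, {
        "https://reddit.com/r/WTF/top": (0.5, {}),
    }),
    "https://reddit.com/r/news": (0.1, {}),
}

def chains_to_prefetch(subtree, p_min, max_depth, p_c=1.0, depth=1):
    """Collect URLs on chains whose net probability P_c stays above p_min."""
    urls = []
    if depth > max_depth:
        return urls
    for url, (p, children) in subtree.items():
        p_next = p_c * p
        # Extending a chain can only lower P_c, so prune as soon as
        # the cumulative probability drops below the threshold.
        if p_next >= p_min:
            urls.append(url)
            urls.extend(chains_to_prefetch(children, p_min, max_depth,
                                           p_next, depth + 1))
    return urls

def prefetch(urls, fetch):
    # The thread pool lets one page download while another is parsed.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fetch, urls))

urls = chains_to_prefetch(tree, p_min=0.3, max_depth=3)
pages = prefetch(urls, fetch=lambda u: u)  # dummy fetch for illustration
```

With P_min = 0.3, the 71/80 chain and its 0.5-probability child both survive (net 71/80 and then about 0.44), while the 0.1 link is pruned immediately.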