In a beautiful new paper, Betancourt writes:

The geometric foundations of Hamiltonian Monte Carlo implicitly identify the optimal choice of [tuning] parameters, especially the integration time. I then consider the practical consequences of these principles in both existing algorithms and a new implementation called Exhaustive Hamiltonian Monte Carlo [XMC] before demonstrating the utility of these ideas in some illustrative examples.

The punch line is that, as measured by effective sample size, XMC is about twice as fast as regular NUTS, with the gains coming from better use of the intermediate steps in the Hamiltonian paths. XMC is already implemented in Stan, and it’s one reason that version 2.12 is faster than earlier versions of Stan.
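To see what “intermediate steps in the Hamiltonian paths” means, here is a minimal sketch (not Stan’s actual implementation, and much simpler than XMC or NUTS) of the leapfrog integrator that HMC uses to simulate a trajectory. Every point along the path is a candidate state; samplers differ in how well they exploit those intermediate points rather than just the endpoint. The target density, step size, and step count here are illustrative choices, not values from the paper.

```python
import numpy as np

def grad_log_p(q):
    # Gradient of the log density of a standard normal target
    # (a stand-in for whatever model Stan would be fitting).
    return -q

def leapfrog(q, p, eps, n_steps):
    """Integrate Hamilton's equations with the leapfrog scheme,
    keeping every intermediate (position, momentum) state along
    the trajectory instead of only the endpoint."""
    path = [(q, p)]
    for _ in range(n_steps):
        p = p + 0.5 * eps * grad_log_p(q)   # half step for momentum
        q = q + eps * p                     # full step for position
        p = p + 0.5 * eps * grad_log_p(q)   # half step for momentum
        path.append((q, p))
    return path

def hamiltonian(q, p):
    # Negative log density plus kinetic energy.
    return 0.5 * q**2 + 0.5 * p**2

path = leapfrog(q=1.0, p=0.5, eps=0.1, n_steps=50)
energies = [hamiltonian(q, p) for q, p in path]
# Leapfrog nearly conserves the Hamiltonian along the whole path,
# which is why every intermediate state is a useful candidate:
print(max(energies) - min(energies))
```

The point of the printout: the energy drift across all 51 states is tiny, so the intermediate states are essentially as good as the endpoint, and a sampler that draws from the whole trajectory wastes less of the gradient work it already paid for.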

Mike’s graphs are so pretty that I found it hard to pick just a few for this post.

As I said, this new, more efficient NUTS algorithm is already coded in Stan, and hence it’s already being used in tons of applications, including, for example, Kremp’s open-source and fully reproducible poll aggregator. Bob and Michael are also planning to write it up in pseudocode and put it in the Stan manual.