Before each change to the OpenStack projects is merged into the main tree, unit and integration tests are run on it, and only if they pass is the change merged. We call this "gating". We use Jenkins to run the tests, along with the Gerrit Trigger Plugin to kick them off and to manage the resulting approvals or rejections.

Currently, this process is (mostly) serialized because Jenkins is configured to run only one build of each job at a time. This serialized aspect of trunk gating is desirable: it means that each change is tested exactly as it will eventually be merged into the repository. For example, change A is tested against HEAD, then merged; then change B is tested against HEAD, which now includes change A. If we allowed those jobs to run in parallel, change A might introduce a condition that causes change B to fail, but without testing B against A, we would not detect it until after the change had merged. Strict serialization of testing and merging changes is therefore useful.

However, a problem arises as the tests become longer or the rate of changes increases. If a given test takes, say, one hour (entirely reasonable for some kinds of tests), then the entire project can merge at most 24 changes each day. That is the very definition of un-scalable, and quite inconvenient for developers too, who may have to wait a very long time for their changes to land.

When processor designers hit the wall on how fast a processor could execute instructions, they branched out, so to speak. Taking a page from processor design, I have written a program that performs speculative execution of tests. It constructs a virtual queue of changes based on the order of their approval and runs their jobs in parallel, assuming they will all succeed. If any change fails, any jobs that were run on the assumption that it would succeed are re-run without the problematic change included.
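The queue behavior described above can be sketched as a small simulation. This is a hypothetical illustration, not Zuul's actual implementation; `speculative_merge` and `passes` are invented names, and the real system runs jobs asynchronously rather than in rounds:

```python
def speculative_merge(queue, passes):
    """Simulate speculative gating of a queue of changes.

    queue: change ids in approval order.
    passes: function(change, assumed_state) -> bool, True if tests pass
            when `change` is applied on top of `assumed_state`.
    Returns the list of changes merged, in order.
    """
    merged = []
    remaining = list(queue)
    while remaining:
        # "Launch" every queued change in parallel, each tested on top of
        # the changes ahead of it, which are assumed to succeed.
        results = [passes(change, merged + remaining[:i])
                   for i, change in enumerate(remaining)]
        # Find the first failure; every speculative result behind it was
        # based on a bad assumption and must be invalidated.
        failed_at = next((i for i, ok in enumerate(results) if not ok), None)
        if failed_at is None:
            merged.extend(remaining)
            remaining = []
        else:
            # Merge the leading run of successes, reject the failed change,
            # and re-run everything behind it without that change included.
            merged.extend(remaining[:failed_at])
            remaining = remaining[failed_at + 1:]
    return merged
```

For example, with changes A, B, and C approved in that order, where B fails its tests, the first round merges A, rejects B, and re-runs C on top of A alone, so A and C merge.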
This means that, in the best case, as many changes can be tested and merged in parallel as computing resources allow for testing. And with cloud computing, that isn't much of a hurdle.

Most changes to OpenStack do pass their tests the first time, so planning for the best case is very useful. Other changes we are making, such as executing tests as soon as a patch is uploaded to Gerrit for review, will help provide early feedback to developers so that reviewers (and Jenkins) don't waste time trying to merge changes we know ahead of time will fail.

The program that now drives our execution of tests is called Zuul. It is quite generalized and not at all specific to the OpenStack workflow. In fact, it's so configurable that it doesn't even have the idea of gating programmed into it. With only some YAML configuration, it can be made to run all of the kinds of jobs we've developed during the course of OpenStack development:

- check: tests that run immediately on submission of a patch. No speculative execution is done; all tests simply run in parallel and provide early feedback to developers.
- gate: changes are tested in parallel but in a virtually serialized manner, so that each change is tested exactly as it will be merged. Changes with failed tests don't merge.
- post: jobs that run after a change is merged (e.g., generating a tarball or documentation).
- silent: jobs that should not report feedback (perhaps because they are not yet ready for production use).

Zuul's source code and documentation can be found in the OpenStack infrastructure repositories. It should be easy to use with any project that uses Gerrit and Jenkins. The internal interfaces should be clean enough that if you don't use Jenkins, you can easily plug in another kind of job system (patches welcome!). With a little more trouble, you could probably factor out Gerrit as well.

Development is done just like the rest of the OpenStack project: clone the git repo, commit your change, and run "git review". Visit us in #openstack-infra on Freenode if you want to chat about it.
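As a rough illustration of the kind of YAML configuration involved, a pair of pipeline definitions might look something like the sketch below. This is an assumption-laden example, not taken from Zuul's documentation; the field names and values are approximate, and the Gerrit events, approval categories, and vote values shown are placeholders:

```yaml
# Hypothetical sketch of Zuul pipeline configuration.
pipelines:
  # An independent pipeline: every patchset is tested immediately
  # and in parallel, with no speculative ordering.
  - name: check
    trigger:
      - event: patchset-created
    success:
      verified: 1
    failure:
      verified: -1

  # A dependent pipeline: approved changes are queued in approval
  # order and tested speculatively, as described above.
  - name: gate
    trigger:
      - event: comment-added
        approval:
          - approved: 1
    success:
      verified: 2
      submit: true
    failure:
      verified: -2
```

The key design point is that the gating behavior lives entirely in configuration like this, not in Zuul's code, which is what makes it reusable outside the OpenStack workflow.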