First (A) and second (B) steps of the REVOLVER algorithm to fit the data from a cohort of cancer patients. The first step is an Expectation Maximization, for which we show the optimization gradient in the E and M-steps of the fit. We are interested in repeated trajectories among drivers observed in more than one patient (coloured nodes; see also Supplementary Figure 2). During the fit, we have identified the best model for this patient (left), but the next iteration of the EM might change our best guess for this patient. Here we focus on the trajectory for the gray driver, currently downstream of the green ones in the information transfer. REVOLVER measures the correlation of this tree with the ones fit to the rest of the cohort: in this example, this prediction is supported by only one other model, while three suggest an alternative trajectory initiated by the turquoise driver (central panel). Via w, we define a gradient that can induce a new scoring of the trees by means of a penalized likelihood; the model on the right is the new best (maximum likelihood estimate), since its trajectory is more correlated with the rest of the patients. Notice that we could place the gray mutation in 5 different positions and still obtain the same information transfer; in this case, the position we select is driven entirely by the likelihood (red asterisks). This change is driven by a combination of factors: (i) how much better the "alternative" model explains this patient's data compared with the original model, and (ii) how strong the consensus (information transfer) is for the trajectory of the gray/turquoise drivers. Once we have converged to the EM solution, we can further expand our models (B) with Transfer Learning. Intra-group trajectories for drivers that belong to the same node of the tree cannot be inferred from the data of a single patient. This is the case here for A, B, C and D, which are clonal drivers.
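The penalized re-scoring in the E/M-steps above can be sketched as follows. This is a minimal illustration, not REVOLVER's actual implementation: the function name `penalized_score`, the mixing weight `alpha`, and the representation of w as a dictionary of cohort-wide edge counts are all assumptions made for the example.

```python
# Hypothetical sketch: re-score a patient's candidate trees by mixing the
# per-patient data log-likelihood with cohort support for the tree's edges.
# All names and the edge-count encoding of w are illustrative assumptions.

def penalized_score(tree_edges, log_lik, w, alpha=0.5):
    """Penalized score of a tree: its data log-likelihood plus the
    cohort support (via w) of its driver-to-driver edges."""
    support = sum(w.get(edge, 0.0) for edge in tree_edges)
    return alpha * log_lik + (1 - alpha) * support

# Cohort edge counts: one other model supports green -> gray, three
# support the alternative trajectory initiated by the turquoise driver.
w = {("green", "gray"): 1, ("turquoise", "gray"): 3}

original = penalized_score([("green", "gray")], log_lik=-10.0, w=w)
alternative = penalized_score([("turquoise", "gray")], log_lik=-10.5, w=w)

# The alternative tree becomes the new best guess when its cohort
# support outweighs its slightly worse per-patient likelihood.
best = max([("original", original), ("alternative", alternative)],
           key=lambda t: t[1])
```

With these toy numbers the alternative tree wins despite fitting this patient's data slightly worse, mirroring the trade-off between factors (i) and (ii) described above.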
After correlating the structures of the models, however, we can observe the orderings of A, B, C and D in the rest of the cohort via w. Here, we show a graph representation of w (central panel), and highlight in red the Maximum Likelihood Estimate (MLE) of the driver upstream of each of A, B, C and D (the most frequent parent). We then expand the node of our model to reflect those orderings. Uncertainty is reflected in the structure of the estimated paths: they should form a linear chain of events (assuming that A, B, C and D are all true drivers), but w might not be able to retrieve it. For instance, in this example, we are not sure whether the pink driver is downstream of the green or the turquoise one, and we likewise have no evidence of the ordering between the gray and pink drivers.
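The most-frequent-parent expansion described above can be sketched as follows. Again this is an illustrative toy, not REVOLVER's API: the function `most_frequent_parent` and the edge-count dictionary for w are assumptions, and the driver names are placeholders.

```python
from collections import Counter

# Hypothetical sketch of the Transfer Learning expansion: for each clonal
# driver, take the MLE of its upstream driver, i.e. the most frequent
# parent observed across the cohort via w (edge -> count). Names and the
# encoding of w are illustrative assumptions.

def most_frequent_parent(driver, w):
    """Return the cohort-wide most frequent parent of `driver`,
    or None if the driver is never seen downstream of anything."""
    counts = Counter({p: n for (p, c), n in w.items() if c == driver})
    return counts.most_common(1)[0][0] if counts else None

# Toy cohort edge counts among four clonal drivers.
w = {("A", "B"): 4, ("C", "B"): 1, ("B", "C"): 3,
     ("A", "D"): 2, ("C", "D"): 2}

expansion = {d: most_frequent_parent(d, w) for d in ["B", "C", "D"]}
# Ties (as for D here, equally often under A and C) leave the ordering
# uncertain, which is why the expanded node need not be a linear chain.
```

The tie for driver D illustrates the point made in the caption: w may lack the evidence needed to resolve some orderings, so the expansion can fall short of a single linear chain.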