Markov Logic Networks (MLNs) are a tool for capturing your beliefs about the world and then calculating the likelihood of outcomes based on those beliefs. Since it's the NBA playoffs, let's use MLNs and our beliefs about the NBA to predict the 2016 NBA championship.

Specific Beliefs (Predicates)

To get us started, I know for a fact that the Golden State Warriors won the 2015 championship. In MLN-speak [1], we write this fact as:

champion(2015, Warriors)

Here, champion(...) is called a predicate. A predicate is either true or false depending on its arguments (in this case, the year and the team).

With MLNs, we can also state beliefs that aren't certain. For example, I'm 60% sure the Warriors will win the 2016 championship. In MLN-speak, we write this belief by assigning a probability to the predicate:

0.6 champion(2016, Warriors)

General Beliefs (Rules)

In addition to statements about specific things, we can use MLNs to capture rules about how the world works. For example, in the NBA, we know there's exactly one NBA champion each year. We write this in MLN-speak with two statements:

champion(year, team!)
EXIST team champion(year, team).

The team! in the first statement says that, by definition, there is at most one team for which the predicate champion(year, team) holds. The second statement says that, by definition, there exists at least one championship team each year. Together, these two statements encode that there's exactly one championship team per year.

In reality (and fantasy sports), we also have beliefs that are generally true, but are not hard-and-fast rules, e.g.:

A team that wins the championship will also win the next year

A team with an injured starter won't win the championship

In MLN-speak, we write these rules as:

??? champion(year1, team), year2 = year1 + 1 => champion(year2, team)
??? hasInjuredStarter(year, team) => !champion(year, team)

The => symbol indicates that the predicates on the left imply the predicates on the right. That is, if the left side is true, then the right side is also true (but not necessarily vice versa). The ! symbol in the second statement negates the predicate champion — that team is NOT the champion for year.

But what probabilities should we assign these rules? If we did have probabilities, what would the relative importance of the rules be? Are the Golden State Warriors more likely to win even if Stephen Curry is out with an injured ankle? For rules, instead of assigning probabilities, we assign relative weights. Relative weights allow the MLN to handle multiple, possibly conflicting, rules.
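To see why weights (rather than probabilities) handle conflicting rules, it helps to know that an MLN scores each possible world by summing the weights of the ground rules that hold in that world, exponentiating, and normalizing across worlds. Here is a minimal sketch of that scoring scheme in Python; the two weights and the two candidate worlds are invented purely for illustration:

```python
import math

# Hypothetical weights for two competing rules (illustrative values only).
W_REPEAT = 1.5  # "the previous champion repeats"
W_INJURY = 2.0  # "a team with an injured starter doesn't win"

def world_score(repeats: bool, healthy_winner: bool) -> float:
    """Unnormalized score: exp of the summed weights of the satisfied rules."""
    total = 0.0
    if repeats:
        total += W_REPEAT  # repeat rule is satisfied in this world
    if healthy_winner:
        total += W_INJURY  # injury rule is satisfied in this world
    return math.exp(total)

# Two conflicting worlds: the injured previous champion repeats,
# versus a healthy new champion winning instead.
score_a = world_score(repeats=True, healthy_winner=False)
score_b = world_score(repeats=False, healthy_winner=True)
z = score_a + score_b  # normalizing constant over the candidate worlds
print(round(score_a / z, 3), round(score_b / z, 3))
```

Because the injury rule carries the larger weight, the world where a healthy team wins comes out more probable, even though it violates the repeat rule. No single rule has to be absolute; the weights just tilt the balance.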

Learning Rule Importance (Weights)

If you have good historic evidence (data), a good way to assign the weights is to have the MLN learn the weights automatically from the evidence. The MLN will pick the weights that make the historical outcomes most likely. In our case, it's going to pick weights that made the previous NBA champions the most likely champions according to our rules (upsets be damned).

If the NBA had just four teams (the Heat, the Spurs, the Warriors, and the Cavaliers), the historic evidence for 2015 would be:

hasInjuredStarter(2015, Cavaliers)
!hasInjuredStarter(2015, Heat)
hasInjuredStarter(2015, Spurs)
!hasInjuredStarter(2015, Warriors)
!champion(2015, Cavaliers)
!champion(2015, Heat)
!champion(2015, Spurs)
champion(2015, Warriors)

Learning from the 2013-2015 seasons, the MLN finds the following weights:

-3.7 champion(year1, team), year2 = year1 + 1 => champion(year2, team)
4.2 hasInjuredStarter(year, team) => !champion(year, team)

Here the negative weight for the first rule indicates that our intuition was incorrect — based on the evidence, winning the championship makes a team less likely to win the next year! Indeed, there was no back-to-back champion within the 2013-2015 seasons (though the Heat had won in 2012 and 2013). The sizes of the weights indicate the two rules are roughly equally important, but in opposite directions.

Predicting the 2016 Championship (Inference)

Once we've assigned weights to the rules in our model, we can have the MLN infer (estimate) the probability of uncertain predicates. In our case, we want to know the probability of each team winning the 2016 championship. This is called our query:

champion(2016, team)

Before we query our model, though, we need to provide our beliefs about the likelihood of injury for each team in the 2016 playoffs. We provide these beliefs by adding the predicates, with their probabilities, to the evidence:

0.9 hasInjuredStarter(2016, Warriors)
0.8 hasInjuredStarter(2016, Heat)
0.4 hasInjuredStarter(2016, Spurs)
0.2 hasInjuredStarter(2016, Cavaliers)

The MLN can then infer the probability of each team winning the championship. The inferred probabilities will be consistent with the rules we captured, our beliefs about the probability of injury, and the historical evidence:

0.3500 champion(2016, Cavaliers)
0.3400 champion(2016, Spurs)
0.3000 champion(2016, Heat)
0.0100 champion(2016, Warriors)

The probabilities sum to 100%, which is a good sign. Our model is bearish on the Warriors, giving them only a 1% chance of winning the championship. The low probability is due to the fact that they won in 2015 and we assigned them a high likelihood of injury this year.
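We can sanity-check the ranking with a rough back-of-the-envelope approximation (not Tuffy's actual inference algorithm, which reasons over all possible worlds): score each candidate champion by the weighted rules its world satisfies, treat the injury evidence as an expected penalty scaled by the injury probability, and normalize:

```python
import math

# Learned weights from the model above.
W_REPEAT = -3.7  # previous champion repeats (learned negative!)
W_INJURY = 4.2   # injured starter => not champion

injury_prob = {"Warriors": 0.9, "Heat": 0.8, "Spurs": 0.4, "Cavaliers": 0.2}
prev_champion = "Warriors"

def score(team: str) -> float:
    """Unnormalized score for the world where `team` wins in 2016."""
    total = 0.0
    if team == prev_champion:
        total += W_REPEAT  # repeat rule fires (and its weight is negative)
    # Expected penalty for violating the injury rule, scaled by injury belief.
    total -= W_INJURY * injury_prob[team]
    return math.exp(total)

scores = {t: score(t) for t in injury_prob}
z = sum(scores.values())
probs = {t: round(s / z, 3) for t, s in scores.items()}
print(probs)
```

The exact numbers differ from the MLN's output, since this shortcut skips the full sum over worlds, but the ranking comes out the same: Cavaliers, then Spurs, then Heat, with the Warriors a distant last.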

Next Steps

There are many different directions we could go to improve our model. We could capture our beliefs about anything from matchups to home court advantage to backup players to mascot popularity. In general, you'll want to target the areas that will likely have the largest impact. These will be beliefs that either you haven't accounted for yet, or that your current model is most sensitive to (i.e., rules with large weights).

Summary

Markov Logic Networks (MLNs) are a tool for capturing your beliefs and inferring the likelihood of events based on those beliefs. In this post, we used an MLN to capture our beliefs about the NBA playoffs. We had the MLN learn the relative importance of general rules based on historic evidence, and then inferred the probability of each team winning the 2016 championship.

Footnotes

[1] This post uses the Tuffy syntax for Markov Logic Networks.