In a previous post I looked at how we can describe causal systems mathematically using the framework of causal graphical models. These models let us be precise about how we phrase and answer causal questions of the form:

"What effect does changing $X$ have on $Y$?"

These causal graphical models show us exactly why causality is difficult: if there exist "backdoor paths" (paths through confounding variables, i.e. common causes of both $X$ and $Y$), then any observed correlation between $X$ and $Y$ may be due to these confounding paths rather than a direct causal relationship between $X$ and $Y$.
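To make this concrete, here is a minimal simulation (a sketch of my own, not from the original post) of the simplest confounded structure $X \leftarrow Z \rightarrow Y$: there is no causal edge from $X$ to $Y$ at all, yet the two are strongly correlated because both inherit variation from the common cause $Z$.

```python
import random

random.seed(0)

# Hypothetical simulation: Z is a common cause (confounder) of X and Y.
# There is NO direct causal edge X -> Y, yet X and Y end up correlated,
# purely because of the backdoor path X <- Z -> Y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # X <- Z
y = [zi + random.gauss(0, 1) for zi in z]  # Y <- Z (X plays no role)

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    va = sum((ai - ma) ** 2 for ai in a) / len(a)
    vb = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (va * vb) ** 0.5

print(corr(x, y))  # substantially positive, despite no causal link
```

With these parameters the theoretical correlation is $0.5$ (covariance $1$ over standard deviations $\sqrt{2} \cdot \sqrt{2}$), even though intervening on $X$ would have no effect whatsoever on $Y$.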

In situations where we observe the confounding variables in a causal graphical model, we can overcome this limitation by "adjusting" for the backdoor path (sometimes called "covariate adjustment", "backdoor adjustment", or "controlling for variables"). My first post contained a number of techniques for doing this.
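As a hedged illustration of the simplest such technique (stratification, which I'm adding here as a sketch rather than quoting any particular method from the earlier post): if we observe the confounder $Z$, we can look at the $X$-$Y$ association within each stratum of $Z$. Conditioning on $Z$ blocks the backdoor path, so the spurious correlation disappears.

```python
import random

random.seed(1)

# Sketch of covariate adjustment by stratification on an observed
# binary confounder Z. Marginally X and Y are correlated, but within
# each stratum of Z the association vanishes, because conditioning
# on Z blocks the backdoor path X <- Z -> Y.
n = 10_000
z = [random.randint(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # X <- Z
y = [zi + random.gauss(0, 1) for zi in z]  # Y <- Z (no X -> Y edge)

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    va = sum((ai - ma) ** 2 for ai in a) / len(a)
    vb = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (va * vb) ** 0.5

overall = corr(x, y)  # nonzero: the confounded, unadjusted association
within = [
    corr([xi for xi, zi in zip(x, z) if zi == s],
         [yi for yi, zi in zip(y, z) if zi == s])
    for s in (0, 1)
]
print(overall, within)  # within-stratum correlations are near zero
```

This only works because $Z$ is observed; the rest of this post is about what we can do when it is not.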

This leaves the question: if we do not observe enough variables to adjust for a backdoor path, can we still make causal inferences?

In some situations the answer is yes, and we will examine one of them in this post. As in my previous post, I will be using the causalgraphicalmodels python package for my examples.