Here’s the example I always use to explain this concept: Let’s consider a hypothetical illness, thumb cancer. We have no method to detect the disease other than feeling a lump. From that moment, everyone lives about four years with our best therapy. Therefore, the five-year survival rate for thumb cancer is effectively zero, because within five years of detection, everyone dies.

Now, let’s assume that we develop a new scanner that can detect thumb cancer five years earlier. We prevent no additional deaths, mind you, because our therapy hasn’t improved; people die at exactly the same time they always did, and we’ve only moved the moment of diagnosis. Everyone now dies nine years after detection instead of four. The five-year survival rate is now 100 percent.

But the mortality rate remains unchanged, because the same proportion of people are dying every year. We’ve just moved up the time of diagnosis and potentially subjected people to five more years of therapy, increased health care spending, and caused more side effects. No real improvements were made.
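To make that arithmetic concrete, here is a minimal Python sketch of the lead-time effect. It uses only the made-up numbers from the thumb-cancer example; the mortality figures (deaths per year, population size) are purely illustrative assumptions.

```python
# A minimal sketch of lead-time bias. Therapy is unchanged, so every patient
# dies at the same calendar time; only the moment of diagnosis moves earlier.

YEARS_FROM_LUMP_TO_DEATH = 4   # survival after diagnosis by lump (from the example)
LEAD_TIME = 5                  # years earlier the scanner finds the cancer

def five_year_survival(years_from_diagnosis_to_death: float) -> float:
    """Fraction of patients still alive five years after diagnosis."""
    return 1.0 if years_from_diagnosis_to_death > 5 else 0.0

# Diagnosis by lump: everyone dies four years after diagnosis.
survival_lump = five_year_survival(YEARS_FROM_LUMP_TO_DEATH)

# Diagnosis by scan: same death date, but the clock starts five years earlier,
# so everyone dies nine years after diagnosis.
survival_scan = five_year_survival(YEARS_FROM_LUMP_TO_DEATH + LEAD_TIME)

print(f"5-year survival, lump detection: {survival_lump:.0%}")  # 0%
print(f"5-year survival, scan detection: {survival_scan:.0%}")  # 100%

# Mortality is deaths per population per year, and it doesn't care when the
# diagnosis happened: the same people die in the same years either way.
deaths_per_year = 100     # hypothetical thumb-cancer deaths per year
population = 1_000_000    # hypothetical population at risk
print(f"Mortality rate either way: {deaths_per_year / population:.4%} per year")
```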

But if we just looked at survival rates, we would think we had made a difference. Unfortunately, that happens far too often in international comparisons: the United States often does much more screening than other countries and then justifies it by pointing to improved survival rates.

The second problem with using survival rates is overdiagnosis bias. Let’s say that a certain number of cases of thumb cancer that are detectable by scan never progress to a lump. That means some subclinical cases that would never lead to death are now being counted as diagnoses.

Since they were never dangerous and we’re now picking them up by scan, they inflate our survival rates. But they do nothing for mortality rates, because no fewer people are dying.
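Here is a similarly minimal sketch of the overdiagnosis effect, again with purely hypothetical numbers (100 progressive cases, 100 indolent scan-only cases) chosen only to show the direction of the bias.

```python
# A minimal sketch of overdiagnosis bias. The scan adds indolent cases that
# would never have caused a lump or a death, which inflates five-year
# survival while the number of deaths stays exactly the same.

progressive_cases = 100   # cases that form a lump and prove fatal (hypothetical)
indolent_cases = 100      # scan-only cases that never progress (hypothetical)

# Lump era: every diagnosed case is a progressive one, and none of them
# is still alive at five years.
survival_lump_era = 0 / progressive_cases

# Scan era: indolent cases join the denominator and, since they never die
# of the disease, the five-year survivors too (lead time is ignored here).
diagnosed_scan_era = progressive_cases + indolent_cases
survival_scan_era = indolent_cases / diagnosed_scan_era

print(f"5-year survival, lump era: {survival_lump_era:.0%}")  # 0%
print(f"5-year survival, scan era: {survival_scan_era:.0%}")  # 50%

# Deaths are unchanged: the same 100 progressive cases die either way,
# so the population mortality rate doesn't move.
print(f"Thumb-cancer deaths in either era: {progressive_cases}")
```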

These two factors are important to consider when you compare approaches to cancer care, especially when diagnosis and screening practices differ. For many cancers, we’ve been diagnosing significantly more cases but making little headway on mortality rates.