Official statistics say we are winning the War on Cancer. Cancer incidence rates, mortality rates, and five-year-survival rates have generally been moving in the right direction over the past few decades.

More skeptical people offer an alternate narrative. Cancer incidence and mortality rates are increasing for some cancers. They are decreasing for others, but the credit goes to social factors like smoking cessation and not to medical advances. Survival rates are increasing only because cancers are getting detected earlier. Suppose a certain cancer is untreatable and will kill you in ten years. If it’s always discovered after seven years, the five-year survival rate will be 0%. If it’s always discovered after two years, the five-year survival rate will be 100%. Better screening can shift the proportion of cases discovered after two years rather than seven, and so shift the five-year survival rate, but the same number of people will be dying of cancer as ever.
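The arithmetic of this lead-time effect is simple enough to spell out. This is a toy sketch using the hypothetical numbers above (an untreatable cancer that kills exactly ten years after onset, no matter when it's found); the function name and numbers are mine, not from any real dataset:

```python
# Toy illustration of lead-time bias. Assumption (from the text's hypothetical):
# an untreatable cancer kills every patient exactly 10 years after onset,
# regardless of when it is detected or how it is treated.

def five_year_survival(detection_year, death_year=10):
    """Fraction of patients alive 5 years after diagnosis."""
    years_left_at_diagnosis = death_year - detection_year
    return 1.0 if years_left_at_diagnosis > 5 else 0.0

print(five_year_survival(detection_year=7))  # late detection: dead within 5 years of diagnosis
print(five_year_survival(detection_year=2))  # early detection: alive at the 5-year mark
# Every patient still dies at year 10; only the statistic moved.
```

Nothing about the disease changed between the two calls; only the diagnosis date did.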

This post tries to figure out which narrative is more accurate.

First, incidence of cancer:

This chart doesn’t look good (in both senses of a chart not looking good – seriously, put some pride into your work). Although there’s a positive trend since 2001, it’s overwhelmed by a general worsening since 1975. But this isn’t the right way to look at things: average age has increased since 1975. Since older people are at higher risk of cancer, an older population will show higher cancer rates even if nothing else changes. Also, something has to kill you, so if other causes of death like violent crime or heart disease get better, cancer rates will appear to rise.

Here’s a better graph:

This is adjusted for age. I’ve switched from incidence rates to death rates, which is bad, but I can’t find good age-adjusted incidence data. Also, notice that this graph truncates its y-axis differently from the other one. Still, it shows a similar pattern of adjusted death rates getting worse until 1990 and better thereafter. Why?

Smoking! That graph is just this one plus a 20-to-30-year delay:

Through the first half of the twentieth century, improved tobacco-making technology, increased wealth, and better advertising caused order-of-magnitude increases in smoking. It takes on average a few decades for smoking to cause lung cancer, so there’s a peak in cancer (overwhelmingly driven by lung cancer) with a few-decade delay from the smoking graph. As smoking started to decline, so did lung cancer.

What about the other striking increase on the incidence graph, that of prostate cancer? In the late 1980s, guideline-making bodies suggested that doctors test harder for prostate cancers; doctors followed the recommendation, detected every little tiny irrelevant prostate tumor, and treated patients aggressively for cancers that never would have affected them before they died of something else. In the late 1990s, guideline-making bodies admitted this had been a bad idea, made the opposite recommendation, and doctors stopped diagnosing prostate cancer as often. If you look at incidence rates, that spike is much bigger. I’m not sure why this shows up in death rates at all, but perhaps the treatment itself contributed to mortality, or perhaps coroners were biased to attribute a death to prostate cancer if they knew the cancer was present.

Meanwhile, stomach cancer has declined dramatically; different sources attribute this to improved treatment for the cancer-causing stomach bacterium H. pylori, improved food processing methods, and increased vitamin C. Colon cancer is decreasing because colonoscopies remove more pre-cancerous polyps. Liver cancer increased because of a hepatitis C epidemic. A few other cancers are increasing or declining for similarly diverse reasons.

But overall cancer incidence and death rates increased up to 1990 and have declined thereafter. Pretty much everyone attributes the bulk of the decreasing death rate to improved prevention. If improved cancer treatment is contributing, it’s swamped by the social factors and we can’t see it in these data.

The most common method for measuring the effect of improved cancer treatment is the five-year survival rate – what percent of people survive five years after being diagnosed with cancer? Here are the relevant data (source):

This is the best graph I can find, but it unfortunately leaves out breast cancer, colon cancer, and several other major cancers where we’ve made important advances. It’s from 2008, but the trends shown have continued since then. Note that change in the “All Cancers” category also reflects changing distribution of sites.

That looks like progress. But this is where the early diagnosis concerns come in. They’re best expressed by Welch, Schwartz, and Woloshin, who find that among different types of cancer, secular increases in five-year survival rate are not correlated at all with improvement in the cancer death rate, but they are very correlated with change in the incidence rate. In other words, why are people living longer after being diagnosed with cancer? It can’t be because we’re treating the cancer successfully – if it were, survival gains would be linked to decreases in the number of people dying of cancer. It must instead be because we’re detecting more cases of small cancers too minor or slow-growing to kill people quickly (“lead-time bias” and “length bias”), which shows up as increases in the cancer detection rate.
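The length-bias half of this argument can also be put in toy numbers. Here is a hypothetical sketch (all counts are made up for illustration): a population has some lethal tumors and an equal number of indolent tumors that would never cause symptoms or death. Screening sweeps the indolent tumors into the "diagnosed" pool, so the five-year survival rate jumps while the number of cancer deaths stays exactly the same:

```python
# Hypothetical illustration of length/overdiagnosis bias. Assumed numbers:
# 1000 lethal tumors (every patient dies within 5 years of symptoms) and
# 1000 indolent tumors (never cause symptoms or death).

lethal = 1000
indolent = 1000

# Before screening: only symptomatic (lethal) cancers ever get diagnosed.
diagnosed_before = lethal
survival_before = 0 / diagnosed_before            # nobody survives 5 years

# After screening: indolent tumors get detected and counted too.
diagnosed_after = lethal + indolent
survival_after = indolent / diagnosed_after       # indolent patients all survive

# Mortality is untouched: the same people die either way.
deaths_before = deaths_after = lethal

print(f"5-year survival: {survival_before:.0%} -> {survival_after:.0%}")
print(f"cancer deaths:   {deaths_before} -> {deaths_after}")
```

Survival doubles from 0% to 50% without a single life saved, which is exactly why survival rates can rise while death rates stand still.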

This study does not prove that cancer treatment is not improving. It just shows that five-year-survival-rates do not in and of themselves provide evidence for improving cancer treatment. Any signal from improving cancer treatment is drowned out by the signal from improved detection.

How do we get around this? One possibility is to investigate change in stage-specific survival rates. That is, doctors classify cancers by stage, all the way from very early poorly-developed cancers with good prognosis to very advanced cancers with bad prognosis. A lead-time bias or length bias would show up as cancers being detected at an earlier stage. So if we found that more people were surviving even within each bin of “stage at which the cancer was detected”, this would be strong evidence that cancer treatment really is getting better.

Several groups have looked into this. The best data comes from the government’s national cancer statistics clearinghouse at SEER (source):

Even within each stage, five-year-survival-rate has increased significantly from 1975 to 2012.

Closer investigations of specific cancers are similar. Stage-adjusted survival for cervical cancer and for colon cancer both show most of the modern gains in survival rate persisting.

But maybe stages are too big a bin to serve as a useful proxy. Imagine a study that wanted to prove that having more cars made you happier. They do a survey and find that people with more cars are happier, but someone objects that maybe wealthy people have more cars and wealth makes you happier. Imagine that their response is to separate people into two bins: “poor people” who make below $50K and “rich people” who make more. They find that even within each bin, cars still make you happier. But this is just a problem of too few bins: a person making $10K is still very different from a person making $40K (and likely to have fewer cars). The attempt to remove confounding with bins fails. These cancer studies generally use only a few broad stages; might this be allowing effects from early diagnosis to creep back in?
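The too-few-bins problem from the cars example can be demonstrated directly. This is a hypothetical simulation (my own made-up numbers, not real survey data): income drives both car ownership and happiness, cars have zero causal effect on happiness, and we check whether the spurious cars–happiness correlation survives inside coarse versus fine income bins:

```python
import random
import statistics

random.seed(0)

# Hypothetical simulation of the cars/happiness example. Assumptions:
# income (in $K) causes both car ownership and happiness; cars themselves
# have NO causal effect on happiness.
n = 10_000
income = [random.uniform(0, 100) for _ in range(n)]
cars = [inc / 20 + random.gauss(0, 0.5) for inc in income]
happy = [inc / 10 + random.gauss(0, 1.0) for inc in income]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def within_bin_corr(n_bins):
    """Average cars-happiness correlation inside equal-width income bins."""
    rs = []
    for b in range(n_bins):
        lo, hi = 100 * b / n_bins, 100 * (b + 1) / n_bins
        idx = [i for i in range(n) if lo <= income[i] < hi]
        rs.append(corr([cars[i] for i in idx], [happy[i] for i in idx]))
    return statistics.mean(rs)

# With only 2 coarse bins ("poor"/"rich"), income still varies a lot within
# each bin, so a strong spurious correlation survives. With 20 fine bins,
# it mostly disappears.
print(round(within_bin_corr(2), 2))
print(round(within_bin_corr(20), 2))
```

With two bins the residual correlation stays large even though cars do nothing; with twenty bins it collapses toward zero. The same worry applies to cancer stages: a handful of broad stages may leave plenty of room for early-detection effects inside each bin.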

Elkin, Hudis, Begg & Schrag look into this. They find that within each stage, tumors have gotten smaller since 1975, suggesting that the staging system isn’t capturing everything we care about regarding cancer. But they find that even when adjusted for size, some of the stage-specific modern gains in cancer survival still remain. In particular, decreasing size explains 61% of improved survival in localized-stage breast cancer, and 28% of improved survival in regional-stage breast cancer. Another study on breast cancer does a similar adjustment with other ways of classifying cancer and concludes that “improvements were shown irrespective of tumor size, lymph node status, and ER status” and “the impact of screening was by nature of limited magnitude. The modified treatment strategies implemented by the use of nationwide guidelines seemed to have a major impact on the substantial survival improvements.” Another group does a simulation and finds that it’s implausible that screening-related biases are the entire source of improved survival:

The results from our study suggest that lead-time bias introduced by mammography screening does not explain the survival improvement observed during the recent decades in the Nordic countries. The absolute as well as relative bias was generally small, and much smaller than the observed increase in relative survival between 1964-2003. However, in some settings the absolute bias reached 4.0-5.7 percentage points, on a survival around 68-77%, a difference that many would see as an interesting improvement in survival.

A lot of this work has been done in breast cancer, probably because it’s had a strong push for screening recently. We would expect screening to be even less important in other cancers, but there hasn’t been as much work on it. One exception is Tong et al, who find that changes in tumor stage and size explain only 20% of improved survival rates in colon cancer, but advancements in therapy explain about 71%. Separately, an authoritative-sounding collection of colon cancer experts express their opinion that “it is possible that within-stage migration had some effect on our findings, but it is implausible as the major source of the trends we observe.”

The only contrary data point I can find is this study of laryngeal cancer, which finds worsening stage-specific survival rates for high-stage laryngeal cancer since 1977. However, the study authors note this was the only one of 24 cancer types examined to show decreasing survival rates. They speculate that maybe some kind of change in smoking behavior over this period has changed the nature of laryngeal carcinomas to favor a more aggressive type. They don’t really have any evidence for this, but given how much of an outlier it is, the cause is probably something equally specific to laryngeal cancer, and doesn’t indicate a general failure in cancer treatment.

There could still be unobserved confounders. Stage alone wasn’t enough, and merely adding size to stage might still not be enough. Even the papers that look at a few more esoteric things like receptor status might not be enough. All we can say with certainty is that right now, adjusting for everything we know about and are able to monitor, cancer survival rates still seem to have increased. Tomorrow we might discover new confounders that take that away from us, but right now there is no particular reason to expect that we will.

So: age-adjusted cancer incidence rates and death rates have been going down since 1990, primarily due to better social policies like discouraging smoking. Five-year-survival rates have been gradually improving since at least 1970, on average by maybe about 10% though this depends on severity. Although some of this is confounded by improved screening, this is unlikely to explain more than about 20-50% of the effect. The remainder is probably a real improvement in treatment. Whether or not this level of gradual improvement is enough to represent “winning” the War on Cancer, it at least demonstrates a non-zero amount of progress.

I don’t want to frame this in terms of “here we DEMOLISH the pseudoscientific narrative that cancer progress is weak”. Many of the people I know who critique this research are from an older generation. They remember Nixon assuring them at the very beginning of the War on Cancer that we would have a cure within five years. If they’re really old, maybe they remember victories of that scale over polio and smallpox. If those were their hopes, it’s right for them to feel disappointed. But I come from a generation that doesn’t expect much, and I think the evidence suggests my low expectations have more or less been met.