This is a study whose immediate applications are in the clinic, but it could have follow-on effects in drug discovery and development as well. Writing in Science Translational Medicine, a team from the Moffitt Cancer Center and USF (with analytical help from Bruker) has been trying a new approach in mouse models of cancer. The standard way to treat most forms of cancer, when it comes to chemotherapy, is to hit it as hard as possible. That makes sense, and it’s a natural human impulse, too: here’s something that is in the process of killing the patient, so why wouldn’t you go all out? But in recent years, data on what’s going on inside tumor cell populations has called this approach into some doubt.

Some tumors are obviously heterogeneous, but cell-by-cell sequencing has shown that they’re even more varied than people had thought. This has a lot of functional consequences – metastatic tumors, for example, tend to be formed from cell line mutations of the original tumor that have ended up with weaker cellular adhesion proteins and the like. Along these same lines, a recent study showed that targeting the metabolic anomalies in cancer cells, a popular area of research over the last few years, is also affected by this variability. Inside a given tumor, it turns out, some of the cells are showing Warburg-effect metabolism, but some of them aren’t.

This all means that if you charge in and try to blast the cancer out of a patient, you’re going to end up only blasting some of it out – probably the easier part, in many cases. Oncologists have realized this for a long time, naturally, but there hasn’t been much that could be done about it. This style of therapy still leads to increased survival (well, most of the time), even if the cancer does generally come back in a much less treatable form. But what else is there to do? That’s what this latest study addresses: the authors are deliberately taking an evolutionary approach, trying to fight the tumor population to a long-term draw rather than wipe it off the board. To that end, they start with high doses of chemotherapy, but back off, titrating down so that there’s still a population of treatable cells left, one that will possibly compete with the untreatable ones and keep them from taking over. (The resistant cells are presumably paying a metabolic penalty for being so hard-core.)

This actually seems to work. Using breast cancer xenografts in mice (two different models) and paclitaxel/taxol therapy, the effects are actually very impressive. Up to 80% of the animals ended up being dosed only once every few weeks to keep things stable, with no treatment at all in between. The tumor tissues were monitored by NMR imaging, and this information was used to adjust the dosing schedules. Biopsies show that there seems, over time, to be some vascular normalization taking place, which might help account for the improvements over time.
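The dosing logic described above – hit hard at first, then titrate against the measured tumor burden so that a sensitive population survives to compete with the resistant one – can be caricatured in a few lines. What follows is a toy two-population competition model, not the authors’ actual protocol: every parameter value, and the simple on/off dosing rule, is an assumption made up for illustration.

```python
def simulate(days=60, adaptive=True):
    """Toy sketch of adaptive vs. continuous chemotherapy dosing.

    Two subpopulations share a carrying capacity (normalized to 1):
    drug-sensitive cells S and drug-resistant cells R. The drug kills
    only S; resistance carries a fitness cost (lower growth rate).
    All numbers are illustrative assumptions, not from the paper.
    """
    S, R = 0.74, 0.01          # starting fractions of carrying capacity
    r_s, r_r = 0.30, 0.20      # growth rates: resistance costs fitness
    kill = 0.2                 # per-day kill rate on S at full dose
    target = S + R             # hold total burden near its initial value
    dose = 1.0                 # start with high-dose therapy
    for _ in range(days):
        total = S + R
        # Logistic growth with competition for shared resources;
        # the drug term removes only sensitive cells.
        S = max(S + r_s * S * (1 - total) - kill * dose * S, 0.0)
        R = max(R + r_r * R * (1 - total), 0.0)
        if adaptive:
            # Adaptive rule (stand-in for the imaging feedback): back off
            # when burden dips below target, so sensitive cells regrow and
            # keep suppressing the resistant clone; otherwise dose again.
            dose = 0.0 if (S + R) < target else 1.0
    return S, R

S_adaptive, R_adaptive = simulate(adaptive=True)
S_continuous, R_continuous = simulate(adaptive=False)
```

In a sketch like this, continuous maximum-dose therapy clears the sensitive cells quickly and thereby frees the resistant clone to expand, while the adaptive schedule keeps the total burden (and thus the competition) high and slows the resistant takeover – qualitatively the effect the paper is after.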

Now, there are things here that are going to have to be worked out. Mouse xenograft models are notorious for being poor predictors of clinical efficacy – but that said, these seem to be well-chosen cell lines, and treating such patients with taxol is exactly what you’d do in the clinic. And this evolutionary approach might be more translatable from the model as well (wouldn’t that be nice), since some of the disconnects might be due to that regrowth-of-the-resistant problem. At any rate, this seems very much worth following up on, and I hope that this paper creates as much of a stir in the oncology community as it seems ready to. Doing a human trial like this is going to take some nerve, but the potential benefits are large.

The implications for drug discovery are worth thinking about, too. Traditionally, the framework for drug discovery in this area has been the hit-it-with-everything-you’ve-got paradigm (taxol is certainly part of that). This paper takes a mighty hammer like that compound and adapts it for use in an evolutionary framework by careful dosing (thus the NMR monitoring, which would seem to be essential for this idea to work). Another approach might be more regular dosing of a less ferocious compound, which is not something that we’ve been thinking about much in drug discovery.

Why would you take a sort of medium-effective chemotherapy drug forward? Well, this latest paper might be the answer to that question. I don’t know if that approach would be equivalent to the one used here, but it’s worth thinking about. Alternatively, you might imagine a period of whacking the tumor population into shape a bit with high-dose chemotherapy of the traditional sort (or worse), followed by a switch to something less vicious for maintenance. There’s a lot to think about here, and if you’re doing cancer research, you should definitely set aside some time to think about it.