Way back in 1996 science writer John Horgan published The End of Science, in which he argued that various fields of science were running up against obstacles that would prevent further progress of the magnitude they had previously experienced. One can argue about other fields (please don’t do it here…), but for theoretical high energy physics Horgan had a good case then, one that has become stronger and stronger as time goes on.

A question that I always wondered about was that of what things would look like once the subject reached the endpoint where progress had stopped more or less completely. In the book, Horgan predicted:

A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fret­ting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.

This is now looking rather prescient, and some very recent developments give an indication of what this endpoint looks like.

Another frightening vision of the future of this field, one that recently struck me as all too plausible, has turned up appended to a piece entitled The Twilight of Science’s High Priests, by John Horgan at Scientific American. This is a modified version of a review of books by Hawking and Rees that Horgan wrote for the Wall Street Journal; it attracted a response from Martin Rees, who has this to say about string theory:

On string theory, etc., I’ve been wondering about the possibility that an AI may actually be able to ‘learn’ a particular model and calculate its consequences even if this was too hard for any human mathematician. If it came up with numbers for the physical constants that agreed (or that disagreed) with the real world, would we then be happy to accept its verdict on the theory? I think the answer is probably ‘yes’ — but it’s not as clear-cut as in the case of (say) the 4-colour theorem — in that latter case the program used is transparent, whereas in the case of AI (even existing cases like Alpha Go Zero) the programmer doesn’t understand what the computer does.

This is based on the misconception about string theory that the problem with it is that “the calculations are too hard”. The truth of the matter is that there is no actual theory: no known equations to solve, no real calculation to do. But, with the heavy blanket of hype surrounding machine learning these days, that doesn’t really matter; one can go ahead and set the machines to work. This is becoming an increasingly large industry, see for instance promotional pieces here and here, papers here, here, here and here, and another workshop coming up soon.

For an idea of where this may be going, see Towards an AI Physicist for Unsupervised Learning, by Wu and Tegmark, together with articles about this here and here.

Taking all these developments together, it starts to become clear what the future of this field may look like, and it’s something even Horgan couldn’t have imagined. As the machines supersede humans’ ability to do the kind of thing theorists have been doing for the last twenty years, they will take over this activity, which they can do much better and faster. Biological theorists will be put out to pasture, while the machines perform ever more complex, elaborate and meaningless calculations, for ever and ever.

Update: John Horgan points out to me that he had in fact thought of this: the chapter at the end of his book, “Scientific Theology, or the End of Machine Science”, discusses the possibility of machines taking over science.