Nothing is more banal than talking about the weather. But talking about weather forecasts? Pretty interesting, as it turns out. Who makes them? From what data? And what does it mean for your beach day on Friday, or for New Orleans on Saturday, where a slightly different path for Tropical Storm Barry could have huge consequences? On Wednesday, I called up Andrew Blum, author of The Weather Machine: A Journey Inside the Forecast, to talk about how sophisticated weather models are changing the way we manage everything from baseball games to evacuations. In his previous book, Tubes, Blum explained the infrastructure of the internet. In the current one, he turns his eye toward the satellites, equations, and algorithms—and, yes, humans involved at every step of the way—creating the forecast. Our conversation has been edited and condensed for clarity.

Henry Grabar: I wanted to ask you first about the end of your book. You talk to this TV weatherman in Connecticut, and you seem to suggest that the weatherman as we know him, or her, is going extinct. Why is that?

Andrew Blum: I don’t know if they’re going extinct, but their role is definitely changing, because they have less work to do to write the forecast and more work to do to communicate the impacts. It used to be that they would spend hours deciding whether or not they should say it’s going to be 72 or 76 tomorrow. There’s much less need for that. High, low, it’s going to rain, it’s not going to rain. There’s much less value to add on a daily basis.

So you’re saying that back in the day, people like David Letterman or Al Roker were actually looking at the raw data and thinking about how that was going to translate into rain or sun three days down the road.

That even continues today. The truism that I think is really worth holding onto is that the forecast is only as good as the decisions you can make from it. It’s not about that technical precision—“I nailed my high temperature for tomorrow. I said it was going to be 78, and it was 78.” It’s really about what the consequences are for the way that we use that forecast. That’s definitely where the National Weather Service has been putting its emphasis in recent years.

Paradoxically, it becomes harder to make decisions when you have a better forecast. If you don’t know when thunderstorms are coming, you’re not going to cancel the ballgame. But if you know that the storms are coming at three o’clock, you might want to push up the start time of your tournament to finish before those storms come.

It’s incredibly difficult to make a decision based on the probability of an event, when there are advantages to making that decision sooner but the assurance is going to be stronger later. It helps enormously to know what the rhythm of the forecast is. Take the example from this winter, when [New York City Mayor Bill] de Blasio called a snow day at six o’clock the night before. [Editor’s note: It did not snow.] I knew from Twitter that the most recent weather models were showing a lesser storm. The recognition of where the trend was going wasn’t there.

It’s indicative of the way we are still reliant on meteorologists to mesh the gears between the weather models and decisions we make.

There’s still a temptation to treat the models as the secret ingredients to a good forecast, when in fact they have become much more than that.

Maybe you can give a little background here. When you say models and you talk about how good they’ve gotten, that’s the “Weather Machine” in the title?


What I mean by “weather machine” is the entire global infrastructure, both of observation and prediction. You have three parts to the model. You have the science of it, the physics, the equations. You have the observations—sampling and measuring the global atmosphere. And then you have this computation to put the two together. You need all three parts. You need the supercomputers, you need the satellite system and the weather stations and the instruments and airliners, and then you need the algorithms, the code itself, to take the weather of the present and speed it up into the weather of the future.

How good is this machine now? Or maybe a more effective way to phrase that question, since we have an intuitive sense of how it works now, is: How bad did it use to be, and how much has it improved?

The way meteorologists describe it is that it has been improving, on average, a day a decade. So a five-day forecast today is as good as a four-day forecast 10 years ago, and as good as a one-day forecast 40 years ago. What that means for those of us who’ve been watching forecasts for 20 years is that the 48-hour forecast is now the four-day forecast. That’s a pretty meaningful change. When the iPhone came out, we could trust the weekend forecast on Wednesday, and now we can trust it on Tuesday. This is a pretty rapid rate of improvement and it’s been consistent. And the implication is that it’s going to increase, that the [European Centre for Medium-Range Weather Forecasts], home of the Euro model, the world’s best weather model, is talking about a 14-day forecast for extreme events by 2025.

The implication is that a 10-day forecast would have seemed crazy to somebody in 1970?

A 10-day forecast was meaningless in 1970. It had no skill.

These models that spit out weather predictions—are they just constantly evaluating the value of their past predictions against the way the real weather turned out and fine-tuning their variables to figure out how the future is going to go?

No, no. In fact they work almost the opposite way. They’re actually describing the movement and evolution of the atmosphere. They are not using past weather patterns as a basis for predicting future weather patterns. While humans might be bound by past experience—“A storm like that never happens; it can’t happen now”—the weather models are more than happy to spit one out.


They’re empirical then. They’re just saying, “Well this is how much moisture we have, this is what the wind says, and so this is what it’s going to be.”

Their improvement is not automatic but rather a sort of handmade fine-tuning over years and decades. The reason it’s been so successful is that not only do you get a new data set every single day to test your model against, but you can also develop a new model. You can change the algorithms and run it backward against all your previous weather observations.

It’s not a meat grinder, where the weather of the present goes in and the weather of the future comes out. It’s an ongoing simulation: the simulated atmosphere and the real atmosphere kind of dancing together, trying to get as close to each other as possible.

So you could figure out, like, what the weather was like on top of Mount Everest in 1900 or something like that.

Not quite that far back but close. You can essentially reforecast, using past observations in your present models. D-Day is the great meteorology success story. The reanalysis of the D-Day forecast is fascinating, because given the observations that existed, we could have accurately predicted this six days ahead.

And in reality, they made the decision how fast?

In reality, they made the decision one day ahead. It was just enough. It was a successful one-day forecast. It’s a history-changing example of a forecast that resulted in a correct decision.

Are we on a trajectory toward perfect weather anticipation all the time?

Particularly in parts of the world where you have afternoon thundershowers, the moment and place where they form is very hard to predict. And it might be a five- or 10-mile difference between massive storms and clear skies.

If a large number of people live in a certain space, then suddenly your typical margin of error has to go down a lot.

That’s where the enormous opportunity is at the moment. It’s not just about a perfect prediction for a given point in time. It’s about this convergence of that and being able to make a decision from it. Twenty years ago, school would not be closed for a snowstorm until it started snowing. It just didn’t happen. Now it’s almost always the case that school is closed before it starts snowing.

What is the motivation for this technology to keep improving? Who pays for it, and who profits from it?

Well, the 150-year tradition is for weather prediction to be a public good. More recently, there’s an industry in taking the information provided by government and fine-tuning it for different uses. The shift is when it becomes financially worthwhile for private companies to run their own weather models, to fly their own weather satellites, such that the foundations of the system are potentially fragmented.

There was a big Bloomberg story about how AccuWeather has tried to undercut the National Weather Service, which suggests that maybe for the time being, at least, what’s being provided by the government is so good that private companies have a hard time on their own.

The Trump administration is saying that they are very eager to encourage new types of weather companies and they want to make sure the NWS is not in competition with them. It suggests that the public system needs to be changed to accommodate the need for profitability for the private players.

I’ve also read that the U.S. system is not considered as good as the European model.

Is that a lack of investment here in the United States? Does it matter? Does it change the way we forecast?

Yeah, the European model is unequivocally superior, statistically, to other weather models. And the U.S. model is not even second; it’s more often third or fourth, after the U.K. and Canadian models. That’s specifically for global models.

The reason it’s better, it’s often pointed out, is that their computers are bigger than ours. But that misses the point. The institutional structure of the European forecast is built around improvement. They have a powerful computer, but half of it is used on a daily basis to run the model and half of it is used to improve the model. The entire culture is built around research and operations going back and forth, improving the model continually.

The U.S. is not organized that way, and it shows. There’s this new modeling system called Epic, which is literally crowdsourced improvement. And the idea that crowdsourced improvement is going to compete with the finely honed process of improvement that the European Centre has been developing for 30 years? I don’t think that’s going to happen.

But this is not a Trump thing, right? This is just a story about institutional decline in the United States and a lack of investment in science and research and all that.

The Trump administration has made a lot of noise about literally making American weather modeling great again. That sounds good, but Epic, the project of Neil Jacobs, the current acting administrator, is a very small step in my opinion.

How do these themes we’ve discussed come to bear on the various models that are coming out, that everyone’s seeing on Twitter as we watch this storm bear down on New Orleans?

One thing that has been a great development over the last five years is the extent to which the forecasters describe the ways in which they’re incorporating the models into their public forecasts. Capital Weather Gang, Weather Underground, they’re acknowledging the importance of the models in their thinking.

There’s a phrase I just pulled from the Shreveport NWS office discussion, that the Global Forecast System is now farther east than the European model by a handful of parishes. You know, the unit of measurement in Louisiana is parishes, and we’re talking: Four days from now, the European model is west by a handful of parishes compared to the American model. So that’s a small difference, but that transparency is really important. The next step in that transparency, and we’re partly there, is communicating the rhythm of the model outputs. It’s a big step for meteorologists, because they’re giving up a lot of their own expertise, shifting it from their own skill to the improved skill of the model itself. It’s a big deal.