Above: The variable-mesh MPAS grid can be customized to feature higher resolution where added detail is desired (as illustrated here for North America). Image credit: MPAS.

On June 21, IBM; The Weather Company (TWC), an IBM Business; and the University Corporation for Atmospheric Research (UCAR) announced a partnership to develop and improve global weather prediction models. See the IBM and UCAR news releases. More specifically, the partnership focuses on a new model developed at the National Center for Atmospheric Research (NCAR) and Los Alamos National Laboratory called the Model for Prediction Across Scales, or MPAS.

The primary aim of the partnership is to optimize MPAS to run efficiently on the next generation of IBM supercomputing technologies, with the goal of operational weather prediction at resolutions fine enough to predict individual thunderstorms explicitly. If the optimization is successful, The Weather Company could be among the first to provide truly global prediction of the weather at these so-called “storm-resolving” scales.

Model resolution and computing horsepower

Today, many models—including the NWS High-Resolution Rapid Refresh (HRRR) model, the UK Met Office’s Unified Model, The Weather Company’s DeepThunder (a variation of the Advanced Research Weather Research and Forecasting Model, or WRF-ARW) and many others—already provide storm-scale forecasts of the weather, but only over relatively small patches of the globe (e.g., the continental US in the case of the HRRR). On the other hand, global models such as NOAA’s Global Forecast System (GFS) and the ECMWF model provide global forecasts, but there is typically 3 to 5 times more space between the grid points in these models than in the storm-scale models. This means that thunderstorm evolution in the global models must be inferred rather than explicitly predicted (for instance, by tracking locations of converging low-level winds, unstable air, and high precipitation as indicators of where thunderstorms may occur).

The computational requirements to run a model depend heavily on the resolution of the model: doubling the horizontal resolution requires at least eight times the computing power. And this doesn’t even account for vertical resolution, which should also typically increase as the horizontal resolution increases. Hence, taking a 12-km global model and running it at 3-km resolution (fine enough to begin to predict thunderstorms explicitly) requires at least 64 times more computing horsepower—and some say it should be closer to 100. Today’s global weather prediction models already tax the available supercomputers, and having access to 100 times that computing power is just not yet practical.
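The arithmetic behind those numbers can be sketched in a few lines. This is a rough back-of-the-envelope calculation, not a performance model: halving the grid spacing doubles the work in each of the two horizontal directions and also forces a roughly halved time step, giving a factor of at least 2 × 2 × 2 = 8.

```python
# Rough cost scaling for refining a weather model's horizontal grid.
# Assumption (from the text): halving the grid spacing at least doubles
# the work in each horizontal direction AND halves the time step,
# so cost grows with the cube of the refinement ratio. Vertical
# refinement and physics would push the true factor even higher.

def horizontal_cost_factor(coarse_km: float, fine_km: float) -> float:
    """Minimum compute multiplier for going from coarse to fine spacing."""
    ratio = coarse_km / fine_km      # e.g., 12 km -> 3 km gives a ratio of 4
    return ratio ** 3                # two horizontal dimensions + time step

print(horizontal_cost_factor(12, 3))   # 4**3 = 64, as quoted in the text
```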

In addition, weather prediction models are unable to push the central processing units (CPUs) in today's supercomputers to their theoretical peak performance. There are many reasons for this, but one reason is that the delivery of data to the CPU cannot keep up with the processing being done by the CPU, essentially leaving the CPU idle while it waits for data to arrive. Weather models require complex data that is interdependent across parameters (e.g., wind, temperature, pressure), space, and time, and all that data must be delivered to the CPU for processing.

Just to think about one aspect of the problem: in order to predict the weather at one point, you constantly need to know what’s going on all around it. Thus, every point in a weather model is constantly “talking” to its neighbors. All that talk takes time, and while it happens the CPU cannot do the computations that actually forecast the weather. Next-generation supercomputers can help make that process more efficient, particularly when the model software is built to take optimal advantage of the resources the supercomputer has.
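The neighbor-to-neighbor “talk” can be illustrated with a toy one-dimensional example. This is a hypothetical sketch, not MPAS code: each grid point's new value depends on the point itself and its two neighbors, so fetching neighbor values (especially across processor boundaries in a real, distributed model) is the communication that can leave a CPU waiting.

```python
# Toy 1-D illustration of why grid points must "talk" to neighbors:
# each new value depends on the point itself and its two neighbors
# (a simple diffusion-like stencil). Hypothetical sketch, not MPAS code.

def step(field):
    """One update sweep; interior points average with their neighbors."""
    new = field[:]                      # copy; boundary values held fixed
    for i in range(1, len(field) - 1):
        new[i] = (field[i - 1] + field[i] + field[i + 1]) / 3.0
    return new

temps = [0.0, 0.0, 9.0, 0.0, 0.0]       # a "hot spot" in the middle
temps = step(temps)                     # the spot spreads to its neighbors
print(temps)                            # -> [0.0, 3.0, 3.0, 3.0, 0.0]
```

In a real model this same dependence holds in three dimensions at every time step, which is why the volume of neighbor data that must move between processors grows so quickly with resolution.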

Some studies have shown that today’s models use 5% or less of the horsepower theoretically available in supercomputer CPUs. Improving this efficiency, even just a little, effectively yields free extra computing power. This is one of the primary aims of the IBM/TWC/UCAR partnership. And if it’s successful, some of the first global model predictions at thunderstorm-allowing resolutions start to become practical.

One of the new resources that next-generation supercomputers are starting to incorporate is a technology called graphics processing units, or GPUs. GPUs consist of thousands of small, efficient cores designed to handle many tasks simultaneously. GPUs have shown promise in scaling models to both small and large areas as well as in delivering forecasts for local, regional and global weather. GPUs are not commonly used in weather forecasting today, but in our collaboration we plan to develop a fully GPU-enabled weather forecasting code for use in actual forecasting. See this article by IBM’s James Sexton for more on this aspect of the collaboration.
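The kind of work a GPU accelerates is a uniform calculation applied to every grid point at once. As a loose illustration (using NumPy on the CPU purely to show the programming style, not actual GPU code), a point-by-point grid update can be rewritten as a single bulk array operation, which is exactly the shape of computation that maps well onto a GPU's thousands of cores:

```python
# A simple neighbor-averaging grid update written data-parallel style:
# one arithmetic expression applied to all interior points at once,
# instead of a point-by-point loop. Bulk, uniform operations like this
# are what GPUs execute efficiently. NumPy on the CPU is used here
# purely as an illustration of the style; this is not GPU or MPAS code.
import numpy as np

def step_parallel(field: np.ndarray) -> np.ndarray:
    new = field.copy()                  # boundary values held fixed
    # Shifted views supply each point's left and right neighbors at once.
    new[1:-1] = (field[:-2] + field[1:-1] + field[2:]) / 3.0
    return new

print(step_parallel(np.array([0.0, 0.0, 9.0, 0.0, 0.0])))
```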

Figure 1. A complex pattern of thunderstorm activity at 7:00 pm CDT May 9, 2016 (left) was largely captured by a forecast issued 96 hours earlier by a preliminary version of MPAS (right). Image courtesy NCAR and National Severe Storms Laboratory.

Figure 2. Composite image of radar reflectivity at 1 kilometer above ground level at 1:00 am CDT May 17, 2015 (lower right panel), compared with forecasts issued at various lead times by a preliminary version of the MPAS model. This event, which occurred during the wettest month on record for both Texas and Oklahoma, produced significant flooding and tornadoes. The global MPAS forecast on a variable-resolution mesh appeared to possess some skill in forecasting this event even at the longer lead times of 4 and 5 days. Image credit: MPAS.

Why model thunderstorms on the global scale?

So what’s the big deal about predicting thunderstorms at a global scale? First, and most obvious, is that thunderstorms are important to the day-to-day weather wherever they occur over the world. Hence, predictions of the weather a few hours to days out can become more accurate by explicitly predicting those thunderstorms rather than inferring their behavior. We’ve seen this transformation with some of the regional thunderstorm models mentioned above. The successful and realistic prediction of the severe derecho that rolled through Washington, DC, in June 2012 was one great example. If you want to be able to accurately predict such weather everywhere on the planet, you need a global model. Enabling such global forecasts helps bring the benefits of modern weather forecasting enjoyed in the United States to nations that have been historically underserved by weather technology.

Beyond the local weather, thunderstorms can have significant impacts on weather far and wide. A cluster of thunderstorms in the western equatorial Pacific Ocean can sometimes lead to dramatic changes in the weather over North America a week or two later. More accurate predictions of thunderstorms hours in advance can thus improve the accuracy of weather predictions weeks in advance. The so-called Madden-Julian Oscillation (MJO), which impacts global weather patterns, is fundamentally a large cluster of thunderstorms that circumnavigates the globe near the equator every 30 to 50 days or so. Today’s global models have a very hard time predicting the MJO, and this is thought to be partly because the models are not explicitly predicting its thunderstorms. For this reason, having a global model that explicitly predicts the MJO’s thunderstorms might enable more skillful weather forecasts in other parts of the globe weeks or months in advance.

The private sector’s role in weather modeling

Global weather models require large, expensive supercomputers, and therefore they have traditionally been run only by government weather agencies that can afford them. Decades ago, this was also true of the regional models. However, as pointed out earlier, many private sector and academic institutions now routinely run regional models. The declining cost of computing equipment and the expanding economic opportunities in the use of weather information have catalyzed this change.

Until recently, global models remained largely the domain of government weather agencies, but that too is changing. Several private sector companies routinely run global weather models today. Perhaps most notably, Panasonic has claimed a global modeling capability, based on NOAA’s GFS model, that is as skillful as any of the government models. In this light, the TWC/IBM/UCAR partnership on global modeling should come as no surprise.

As reported by weather.com last year, MPAS was one of the two finalists in NOAA’s 2016 selection of a new global model framework, though NCAR ultimately pulled MPAS out of the selection process. Some members of the weather community have suggested that our MPAS partnership signals a new era of competition in the provision of weather services. At TWC/IBM, we do not see the partnership as competition with the NWS, but rather as a means by which the entire weather enterprise can improve. It’s not our business or our motive to “out-forecast” the NWS, but rather to enable society as a whole to be better served by emerging scientific and technical weather capabilities. Advancements to the open-source version of MPAS that emerge through this partnership will be shared with the community, and it is our hope that the NWS and the entire weather modeling community will benefit, regardless of their model of choice.