Those who follow commercial supercomputing already know that automotive design is a hot area for high performance computing, but IndyCar is taking modeling and simulation in unexpected directions, and it could change both the cars and how drivers approach the sport in the near term.

Until recently, IndyCar used single-vehicle computational fluid dynamics (CFD) simulations and rolled the results of those into the famous driver-in-the-loop simulations that let Indy drivers take their cars for an immersive test drive. By upping the game with large-scale, multi-car CFD, racers can now factor in turbulent flow between leading and trailing cars, an effect with far heavier impact than even a well-seasoned racecar aerodynamics pro like IndyCar's Tino Belli had realized.

And if Tino Belli says a new development could revolutionize both car design and driver advantage, it is wise to listen.

Belli has been a force in the evolution of IndyCar design from an aeronautical engineering perspective for decades. In the mid-1980s he started a career designing racing vehicles before moving into the technical director role at Andretti Autosport twenty years later. As current director of aerodynamic development at IndyCar, he leads development of the most recent aerodynamics toolkit (the Universal AeroKit) and integrates structural, performance, and safety requirements into next-generation vehicles in conjunction with the Automotive Research Center (ARC) near the speedway in Indianapolis.

We will get much deeper into how the enhanced simulations feed the future of racing in a moment, but let's spend a second on the all-important driver-in-the-loop simulator.

From a physical perspective, seasoned drivers have been trained for years to get a feel for certain things, particularly the sensations of leading, trailing, and passing. However, getting that experience is difficult given the limited practice time on the track.

Belli points to the simulator’s value by relating Fernando Alonso’s experience coming from Formula One to do the 500 in 2017 with no Indy experience. “Our drivers practice for two weeks before the race. Alonso would get into the simulator every morning and practice, try different things, then in the afternoon get in a car. The only way he could experience what a car would feel like in the race is in practice, especially latching onto another car and getting a feel since he had no IndyCar experience, then he had to go out in traffic and try to pass. And the veteran drivers will position themselves so they do not help rookies when they come into the series.” In other words, the only way to break into IndyCar is to rely on the simulator—and later comes the real-world skill with all of its challenges.

The key to Alonso's ability to jump into IndyCar was the simulator, but it was single-car only. The real IndyCar revolution lies in added computational power, CFD modeling, and near-real-time machine learning. And so begins the other, more technical side of this story, which we'll bring full circle to racing: the new simulations give an edge to inexperienced (and much younger) drivers, and they change how cars are designed by providing a better understanding of the aerodynamics of leading and trailing.

“The driver in the loop simulator is complex; this is fully turbulent CFD. We have exposed open wheels so our airflow is not as ideal as NASCAR or a sports car where it’s all enclosed and the airflow is smoother. The open wheels mean high turbulence and you have to model everything in fully turbulent CFD, which means a lot of computing power for the solution to converge,” Belli says, noting that this is new to his team, although Dallara, the car design and manufacturing shop in Indianapolis, has its own HPC center for modeling and simulation.

For its CFD simulations, IndyCar, in conjunction with ARC, has developed a proprietary layer for vehicle aerodynamics called Element on top of an OpenFOAM base. When running full-bore, their software can capture around 50 million elements for a single vehicle model. This scale has been tested on eight 32-core nodes but could, like OpenFOAM, scale to even higher fidelity in the future to study slipstream, traffic patterns, and other features. But the team started thinking: what might happen if they added more cars to the simulation to accurately visualize and predict the shifting aerodynamics between leaders and trailers in the traffic stream of a race where speeds top 240 miles per hour?
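To get a feel for the scales involved, it helps to translate the mesh and node counts cited here into per-core workload. The figures below are those from the article; the cells-per-core arithmetic is our own back-of-the-envelope illustration, not anything from Element or OpenFOAM:

```python
# Back-of-the-envelope scaling for the mesh sizes and node counts cited above.
# Numbers come from the article; the cells-per-core arithmetic is illustrative only.

def cells_per_core(total_cells: int, nodes: int, cores_per_node: int = 32) -> float:
    """Average number of mesh cells each core (MPI rank) must handle."""
    return total_cells / (nodes * cores_per_node)

# Single-car model: ~50 million elements on eight 32-core nodes.
single_car = cells_per_core(50_000_000, nodes=8)   # ~195,000 cells per core

# Two-car model (described below): mesh turned down to 5 million elements
# on six 32-core nodes, trading fidelity for tractable runtimes per case.
two_car = cells_per_core(5_000_000, nodes=6)       # ~26,000 cells per core
```

Doubling the surfaces in the domain while holding runtime in check is why, as described next, the team dropped mesh resolution by an order of magnitude for the multi-car runs.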

The team introduced a second car to the OpenFOAM models for a picture of a leading and trailing car and all of the aerodynamics that ensue. “As you can imagine, as you introduce more surfaces, the computational time goes way up, so we turned the resolution of the mesh down to five million elements on the six 32-core nodes and ran those for 10-12 hours each, and used those to understand where any trailing vehicle was relative to the leading car and what aerodynamic metrics are created behind a leading car,” explains Matthew Shaxted, founder of Parallel Works.

Parallel Works toiled on the code side with ARC and IndyCar to add more vehicles to the mix, using cores for simulation from HPC cloud company R Systems, with eventual inference happening on Google's cloud platform, where GPUs accelerate predictive flow in near real time in the simulator.

As Shaxted tells The Next Platform, “Ultimately, we created a workflow that coupled the DAKOTA tool, which is used for design of experiments and optimization, and ran a Monte Carlo study where we sampled a design space that specified where a trailing car was positioned behind a lead. We ran 100 samples at the five million mesh resolution across one hundred individual cases, each with a different trailing position. Those were all run in parallel on the PWRS platform, a joint product between Parallel Works and R Systems for technical computing applications to interface with public and R Systems cloud resources. In this case the MPI parts went to the HPC cloud and the DAKOTA piece and some post-processing was crunched on GPU instances on Google’s cloud.”
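The shape of the workflow Shaxted describes, sample trailing-car positions, run each case independently, then collect aerodynamic metrics, can be sketched in miniature. In this toy version the multi-hour CFD solve is replaced by a placeholder function; the function names, position ranges, and the made-up downforce formula are our own illustration, not Element's or DAKOTA's:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def run_cfd_case(position):
    """Placeholder for one CFD solve. In the real workflow each case is an
    independent 10-12 hour OpenFOAM/Element run on 32-core HPC cloud nodes;
    here we return a toy downforce-loss figure purely for illustration."""
    gap, offset = position
    loss = 1.0 / (1.0 + 0.5 * gap) * max(0.0, 1.0 - abs(offset))
    return {"gap": gap, "offset": offset, "downforce_loss": loss}

def sample_design_space(n_samples, seed=42):
    """Monte Carlo sample of trailing-car positions behind a lead car:
    longitudinal gap (car lengths) and lateral offset (car widths).
    The ranges are assumptions for the sketch, not the study's actual bounds."""
    rng = random.Random(seed)
    return [(rng.uniform(0.5, 20.0), rng.uniform(-1.0, 1.0))
            for _ in range(n_samples)]

if __name__ == "__main__":
    cases = sample_design_space(100)        # 100 cases, as in the study
    with ProcessPoolExecutor() as pool:     # embarrassingly parallel fan-out
        results = list(pool.map(run_cfd_case, cases))
    worst = max(results, key=lambda r: r["downforce_loss"])
    print(f"worst downforce loss at gap={worst['gap']:.1f} car lengths")
```

Because every sampled position is an independent solve, the hundred cases can fan out across cloud nodes with no communication between them, which is what made the PWRS platform a natural fit.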

And this brings us back to the way racing will change in the physical world because of the simulated one.

The system is now able to simulate the aerodynamic effects of twenty cars on the virtual track, something that Belli says came with some unexpected revelations. “One of the great advantages of CFD is the visualizations; you can see the air, but this is hard to see in the wind tunnel, even with the smoke. The most eye-opening part of this was how much effect the trailing car can have on the leading car,” he explains. “A lot of studies in the past have focused on just the leading car not having a trailing car—taking that and examining the wake and drawing conclusions from changes to the aerodynamics and bodywork of the car. It turns out a lot of that is incorrect because the trailing car actually affects the wake of the leading car quite significantly.”

“The acid test is when Scott Dixon or Will Power gets out of the simulator and says it feels like racing in the Indy 500. And at that point we can make some very significant concept changes to the aerodynamics, map those into CFD and use HPC clouds to take it to the driver in the loop simulator and prove we are making improvements before we go anywhere near manufacturing. The tooling costs to make these cars are astronomical and the cars themselves are over a million dollars each. We don’t want to be making mistakes and doing re-work later, it gets very expensive.” – Tino Belli, IndyCar

As a side note, while HPC simulations have replaced physical testing of products in several industries, IndyCar will keep holding onto its wind tunnels, which are used to validate the CFD models for better confidence pre-manufacturing. Belli says the results from CFD, the wind tunnels, and the physical world are, for the most part, all consistent.

“They are consistent and I think that’s why we’re moving into this phase where we’re studying multiple cars on track simultaneously, which can’t be done in a wind tunnel since you can imagine the challenge of positioning two full-size race cars in there, doing a scan where we model the trailing car twenty positions back and creating a full map for the following car. That can’t be done. So our plan is to take the map we create of the trailing cars, feed it into the simulator, and get complete feedback before we go anywhere near manufacturing.”

“Hopefully soon, with the visualizations and these new benefits, the driver in the loop and response surface mapping approach we’ve implemented will let the drivers actually feel what the simulations tell us. You can see it with the output visuals but feeling it is what is important at the end of the day,” Shaxted says.
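Response surface mapping here means turning the sparse set of sampled CFD cases into a continuous map the simulator can query in real time. A minimal pure-Python sketch of the idea, using inverse-distance weighting as a stand-in for whatever interpolation the actual GPU-accelerated pipeline uses (the article does not say), with hypothetical names and values throughout:

```python
def build_response_surface(samples):
    """samples: list of ((gap, offset), metric) pairs from a CFD study.
    Returns a function that estimates the metric at any trailing position
    via inverse-distance weighting. This is an illustrative stand-in for
    the real pipeline's model, whose details the article does not give."""
    def predict(gap, offset):
        num = den = 0.0
        for (g, o), value in samples:
            d2 = (gap - g) ** 2 + (offset - o) ** 2
            if d2 == 0.0:
                return value           # exact hit on a sampled case
            w = 1.0 / d2               # nearer samples dominate the estimate
            num += w * value
            den += w
        return num / den
    return predict

# Usage: query the map at a position that was never explicitly simulated.
surface = build_response_surface([((1.0, 0.0), 0.9), ((5.0, 0.0), 0.4),
                                  ((10.0, 0.5), 0.2)])
estimate = surface(3.0, 0.1)   # interpolated downforce-loss estimate
```

The point is that a hundred 10-12 hour solves happen offline, while each simulator query against the resulting map costs microseconds, which is what makes "feeling" the CFD in the driver-in-the-loop rig feasible.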

And again, this feel, gleaned from simulations that factor in the aerodynamics of twenty cars instead of one, will give new drivers a boost. Drivers who have not spent years training on limited assumptions about drag, downforce, and balance, the three main elements the simulation predicts, have a distinct advantage: there are no old habits to unlearn going in.

Belli says that it would take a very experienced eye to notice the subtle differences in how the drivers maneuver based on this new way of training, but what we might begin to notice is a decrease in the age of IndyCar winners in the next few years. “The ability for a younger driver to become competitive very quickly should be enhanced. That’s what I expect in the next several years as the younger drivers pick up experience without driving much on the track.”