Formula 1 is perfectly named - because its success is entirely dependent on a series of formulas and the computers that solve them.

Many sports are now increasingly data-informed. Football teams do post-match analysis, shot-by-shot data is collected at Wimbledon, and stadiums themselves are kitted out with sensors. Formula 1 differentiates itself by being data-driven at every step of the process: in the factory where the cars are made, before the race, and on-track mid-lap.

Formula 1 hasn’t always been data-driven. The sport has been around since 1950, when the ‘formula’ was simply a set of rules that the cars and drivers had to follow. By the 1990s, teams began putting sensors on the cars to collect data and improve their race strategies. Today, the cars can have as many as 400 sensors on board, collecting vast quantities of data for analysis.

As much as the race is a feat of athleticism and concentration by the drivers, it is also a feat of engineering, computational fluid dynamics, calculation, and scientific thinking. It is a demonstration of what we can achieve when we invest staggering amounts of money into a set goal.

However, in recent years, the Federation Internationale de l’Automobile (FIA) has introduced new regulations at a rapid pace, including a cost cap that limits the amount of money teams can spend. Since then, extremely well-funded teams have had to pull back on investment, and IT choices have never been more important.


Enter the Cost Cap

Prior to the cost cap, F1 teams' spending was limited only by the depth of their owners' pockets, though the cars and testing facilities were still subject to various other stringent requirements.

As a result, the leaderboard has remained mostly consistent throughout the years, with Mercedes, Ferrari, and Red Bull Racing battling it out for the top three spots. In 2019, Mercedes spent $484 million, Ferrari $463m, and Red Bull Racing $445m. The next closest was Renault, at $272m.

The cost cap limited all teams to $145m for 2021, $140m for 2022, and $135m for 2023-2025, and it applies to anything that improves the performance of the car – from engineers’ wages to the materials used, to the IT setup powering the simulations and monitoring the cars on the track.

While still a staggering amount of money, this effectively cut the top teams’ allowance by two-thirds, and forced them to re-budget.

The cost cap is intended to level the playing field (or smooth out the race track), but so far it has not had that effect. Dominic Harlow, the FIA’s head of technical audit, agreed that this was a “valid observation,” but argued that financing isn’t necessarily on par with the value of expertise.

McLaren Technology Centre
– McLaren Racing

“In truth, the engineering of the F1 car is a process. It's something that is built up over time across the board, in terms of the cars, the teams, and the knowledge. It doesn't necessarily follow that if you change the amount of spending in one area then performance is going to be impacted straight away,” explained Harlow.

In other words, it will take time for the results to be reflected on the scoreboard.

While the financial limitations obviously reduce the computational power F1 teams can invest in, teams that violate the spending limits have also been made to pay a price in technology.

In 2021, the new top dog and universal nemesis Red Bull Racing violated its cost cap by 1.6 percent. It was ultimately fined $7 million and had its aerodynamic testing allowance cut by 10 percent, a significant loss for the most compute-heavy element of the sport.

Aerodynamic testing and CFD (computational fluid dynamics) simulations are considered so valuable in the sport that allowances are scaled according to teams’ positions in the final standings. The team that finishes first is allowed 1,400 simulations during each eight-week testing period, while 10th place gets 2,300.

For Red Bull, which finished second in the 2021 constructors' standings, the 10 percent cut saw its allowance dip from 1,500 simulations to 1,350 – still a significant amount of testing, but drastically lower than some of the other teams. Regardless, Red Bull claimed the title in 2022 and remains comfortably at the top of the standings so far in 2023, with more than double the points of second-placed Mercedes.
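
Taken at face value, the allowance appears to scale linearly with finishing position. As a rough illustration only – the linear interpolation and function names below are assumptions based on the figures quoted above, not the FIA's actual regulations – the arithmetic works out like this:

```python
# Illustrative sketch of the sliding-scale CFD allowance described above.
# Assumes a linear scale between the quoted figures (1,400 simulations for
# 1st place, 2,300 for 10th); this is not the FIA's actual formula.

def baseline_allowance(position: int) -> int:
    """CFD simulations per eight-week period for a given constructors' position."""
    first, tenth = 1400, 2300
    step = (tenth - first) / 9          # 100 extra simulations per position
    return int(first + step * (position - 1))

def with_penalty(allowance: int, cut_percent: float) -> int:
    """Apply a percentage reduction, such as Red Bull's 10 percent penalty."""
    return int(allowance * (1 - cut_percent / 100))

red_bull_2021 = baseline_allowance(2)                  # 1,500 (finished 2nd)
red_bull_after_cut = with_penalty(red_bull_2021, 10)   # 1,350
print(red_bull_2021, red_bull_after_cut)
```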

Home and away

IT is essential at every stage of the racing process. For McLaren Racing, this is split into three stages: design, build, and race.

The team starts designing its new cars a year before the season starts – and given the length of the season, this means new designs are being worked on when the current car has only just hit the track.

“A small group of people will start looking at next year's car,” said Ed Green, commercial head of technology at McLaren Racing. “A lot of our work is done inside computer-aided design (CAD). Our engineers will design parts for the new cars, and the design is run through a virtual wind tunnel which is based on computational fluid dynamics (CFD).”

According to Green, that process alone generates upward of 99 petabytes of data, all of which is processed on-premise due to CFD-related regulations.

The amount of time the wind tunnel simulations can be used is also limited by the FIA, meaning that the precision and efficiency of the team's computers are a key advantage.

– McLaren Racing

From this point, the digitally tested parts are 3D printed at a small scale and tested in a physical wind tunnel, where performance is monitored by sensors and data on geometries and air pressures is measured. Provided these results are consistent with the CFD findings, manufacturing begins.

McLaren takes a hybrid approach to its IT: some workloads run on Google Cloud but, according to Green, the team favors on-premise compute, in part for the reduced latency.

“We have around 300 sensors on the car, and over the course of a race weekend where we are driving for around five hours, that will generate a terabyte and a half of data. We analyze that at the track but we are also connected all the way back to HQ (or what we call ‘mission control’).”

Those sensors gather data on every element that could affect the outcome of the race - tire pressure, speed, location on the track, fuel availability and flow, wind speed, heat, and much more.

“There’s a NASA-style environment where the team back home will also analyze the data and support with decision-making for the racer,” explained Green.

With so much of the infrastructure on-premise, and given its value to the race team's performance, Green was unwilling to share more about McLaren’s compute capacity beyond saying the team has around 54 terabytes of memory.

Track-side computing

For the team to analyze and process race data trackside, a portable data center is transported alongside the cars to every race.

“We have a portable data center that’s two racks inside a shock-mounted flight case, and it is one of the only things that travel alongside our cars via plane – everything else is done via sea freight,” said Green.

Those portable data centers have to be extremely flexible, as they face wildly different environments week by week. It could be the usually mild climate of Silverstone in the UK, the 40°C steam room of Singapore, or an abnormally dusty track in India, and McLaren has to set up its data center 23 times over the course of a season.

Green recalls one of his first race weekends, when he entered the data center to find a colleague hoovering the servers.

Trackside, McLaren uses Dell servers and storage, with Cisco switching gear. In total, the team lugs around 140 terabytes of solid-state storage to every race for on-site analysis, which is also relayed back to the factory. Should the connection to “mission control” fail, that Edge compute can make or break the team's performance.

Aston Martin’s data shifted the scoreboard

One of the most notable shifts in recent F1 seasons is the sudden and drastic rise of Aston Martin’s team.

In both 2021 and 2022, Aston Martin finished 7th in the constructors' standings, with several races seeing its drivers place 10th or lower (and, on one occasion in 2021, 20th). But this year has been a true comeback, led by veteran racer and two-time world champion Fernando Alonso, who has stood on the podium six times this year and lifted the team to 3rd overall – and who doesn’t love an underdog?

The change started in 2020, when Canadian businessman Lawrence Stroll took ownership of Aston Martin’s race team. In 2021, the race team partnered with NetApp, and in 2022 it hired a new CIO, Clare Lansley, who was previously the director of digital transformation at Jaguar Land Rover.

“When I joined the team, it was very clear that IT had been somewhat under-invested in, given the heritage of the team,” said Lansley. “Since Stroll bought it and obviously provided some serious investment, we are now in a position to transform the IT, and the very first start was to ensure that the infrastructure was performant, reliable, and secure. So the concept of implementing a data fabric was absolutely fundamental.”

But while this new investment brought with it new opportunities, the team still had to remain within the bounds of the sport’s budget cap. Freight costs around $500 per kilo transported, and given the near-weekly travel involved, this adds up quickly. According to Lansley, it was partly this that clinched NetApp the job.

Everywhere that the cars and the drivers go, a NetApp FlexPod follows.

– NetApp

“For these devices, the fact that they were going to reduce the freight weight and the actual footprint, that they were just smaller than the previous kit, was a massive boost. But they were also simpler to set up. When we arrive at the track, my team is given a concrete shell that is completely bare, so I don't want to run numerous cables. I want something that can effectively plug and play at speed,” explained Lansley.

The FlexPod solution reduced Aston Martin’s trackside compute from multiple racks and 10 to 15 individual pieces of equipment to just a pair of servers: one for processing and storage, and another for redundancy.

During the race, sensors on the cars transmit data to the trackside FlexPod via radio frequency. The FlexPod then uses SnapMirror to take snapshots of that data, saving only the differences between snapshots, which are transmitted to the FlexPod at the Silverstone factory, where the 50-odd engineers start testing and simulating different options for the rest of the race.
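
The “only the differences” approach is, in essence, incremental snapshot replication. As a conceptual sketch only – this is not NetApp SnapMirror code, and the data structures and function names are hypothetical – the idea looks something like this:

```python
# Conceptual sketch of snapshot-based incremental replication, in the spirit
# of the setup described above. Not NetApp SnapMirror code: the structures
# are hypothetical and exist only to illustrate "send only the diff."

def take_snapshot(volume: dict) -> dict:
    """Capture a point-in-time copy of block_id -> content."""
    return dict(volume)

def diff(previous: dict, current: dict) -> dict:
    """Return only the blocks that changed (or appeared) since the last snapshot."""
    return {block: data for block, data in current.items()
            if previous.get(block) != data}

# Trackside volume filling up as telemetry lands
trackside = {"lap_01": "a9f...", "lap_02": "c41..."}
snap_1 = take_snapshot(trackside)

trackside["lap_03"] = "7de..."   # new data arrives during the race
snap_2 = take_snapshot(trackside)

to_send = diff(snap_1, snap_2)   # only lap_03 crosses the link to Silverstone
print(to_send)                   # {'lap_03': '7de...'}
```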

Once that data reaches mission control, simulations, real-time CFD (rCFD) and testing begin. But one notable limitation placed on this process by the FIA is that “the solver part or parts of all rCFDs must only be carried out using a compute resource that contains a set of homogeneous processing units,” and those homogeneous processing units must be CPU cores.

GPUs versus CPUs

The FIA’s Dominic Harlow explained the governing body's decision to allow only CPU-based CFD.

“The decision to use CPUs was based on the discussions we had quite a while back with the teams, independent industry experts, and our own specialists on how to quantify the amount of compute used for a CFD simulation. We came up with a metric that is effectively based around a core hour,” said Harlow.

“For GPUs particularly, it's obviously an enormous number of cores potentially and quite difficult to define a core, similarly for Field Programmable Gate Arrays, or other types of processors that you might use for CFD.

"CPUs are by far the most common and it was the most practical implementation to regulate.”

While you can have many CPUs running in tandem, the nature of CFD, like AI, makes it well suited to GPU-based processing. To understand this, we need to dive deeper into the specific use cases of CPUs and GPUs: CPUs are designed for task parallelism, whereas GPUs are designed for data parallelism – applying the same instruction, or set of instructions, to many data items at once.

This is why GPUs are central to video games - where the instruction set is the same for the character model, the virtual world elements, and all the assets that the gamer will see on their screen.

This data parallelism is also why GPUs are great for artificial intelligence models - after all, the same instructions are applied to huge data sets.

CFD involves breaking a problem down into small blocks. In the case of an F1 car simulation, the air around the car, the ground beneath it, and the car itself are converted into a mesh of tiny cells, and each cell needs to be processed in parallel.
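
To make that distinction concrete, here is a minimal sketch of data parallelism: the same update applied to every interior cell of a toy grid at once, rather than one cell at a time. It uses NumPy on a CPU purely for illustration, and the "physics" is a deliberately simplified smoothing step rather than a real CFD solver – but the "one instruction, many cells" pattern it shows is exactly what GPUs accelerate:

```python
# Toy illustration of data parallelism in a CFD-style update. Each cell of the
# grid receives the same instruction - the pattern GPUs are built for. The
# smoothing step below is a simplification, not a real CFD solver.

import numpy as np

grid = np.random.rand(512, 512)   # pressure-like values on a toy 2D mesh

# Task-parallel mindset: visit cells one at a time (slow in pure Python)
def smooth_looped(g):
    out = g.copy()
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            out[i, j] = 0.25 * (g[i - 1, j] + g[i + 1, j] +
                                g[i, j - 1] + g[i, j + 1])
    return out

# Data-parallel mindset: one instruction applied to every interior cell at once
def smooth_vectorized(g):
    out = g.copy()
    out[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] +
                              g[1:-1, :-2] + g[1:-1, 2:])
    return out

assert np.allclose(smooth_looped(grid), smooth_vectorized(grid))
```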

In a paper presented at the 2017 International Conference on Computational Science in Zurich, Switzerland, researchers found that GPUs could speed up 2D simulations using the HSMAC and SIMPLE algorithms by 58× and 21× respectively in double precision, and by 78× and 32× in single precision, compared to a sequential CPU version.

Harlow agreed that, as GPUs are steadily improving, the anti-GPU ruling could change in the future.

“Where the industry is going now we obviously need to watch very, very carefully because it seems that GPUs are reaching a greater level of maturity, particularly for the applications, and not just as accelerators, but actually as the main processor for the simulation. So watch this space.”

It is also this requirement for homogeneity that keeps some F1 teams from running CFD in the cloud, given the difficulty of quantifying the core hours used and the stringent reporting requirements placed upon them.

Fernando Alonso – Aston Martin

Racing to the cloud

This has not prevented some F1 teams from relying heavily on cloud computing, however. Oracle Cloud Infrastructure (OCI) became Red Bull Racing’s title sponsor as part of a $500m deal in 2022, and the top-of-the-charts team has publicly stated that it uses Oracle for running Monte Carlo simulations as part of its race prep.

At the end of 2022, world champion and Oracle Red Bull Racing driver Max Verstappen said: “Due to all the simulations before the race even starts, it's very easy to adopt a different strategy during the race because everything is there, everything is prepared. I think we definitely had a strategy edge over other teams.”

Monte Carlo simulations use computer algorithms reliant on repeated random sampling. By exploring a vast range of possibilities, that randomness can be used to tackle deterministic problems. For Red Bull, this means applying a variety of surface variables, wind and weather conditions, and possible car issues or choices – any factor that could impact the outcome of a race – and testing them all.
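
As a purely illustrative sketch – the lap times, tire degradation rates, pit-stop loss, and candidate strategies below are invented, not Red Bull's or Oracle's models – a Monte Carlo comparison of two pit strategies might look something like this:

```python
# Invented Monte Carlo race-strategy comparison. All figures (base lap time,
# tire degradation, pit-stop time loss) are made up for illustration only.

import random

LAPS = 57
PIT_LOSS = 21.0   # seconds lost per pit stop (assumed)

def race_time(pit_laps, rng):
    """Total race time for one randomly sampled 'what if' scenario."""
    base = rng.gauss(92.0, 0.3)             # base lap time, in seconds
    degradation = rng.uniform(0.05, 0.12)   # seconds lost per lap of tire age
    tire_age, total = 0, 0.0
    for lap in range(1, LAPS + 1):
        total += base + degradation * tire_age
        tire_age += 1
        if lap in pit_laps:
            total += PIT_LOSS
            tire_age = 0                    # fresh tires after the stop
    return total

def average_race_time(pit_laps, runs=10_000, seed=1):
    """Repeated random sampling: average outcome of a strategy over many scenarios."""
    rng = random.Random(seed)
    return sum(race_time(pit_laps, rng) for _ in range(runs)) / runs

print("one-stop:", average_race_time({28}))
print("two-stop:", average_race_time({19, 38}))
```

In practice, a team would sweep far more variables (weather, safety cars, traffic, tire compounds) across millions of scenarios, which is where the elasticity of cloud capacity becomes valuable.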

This was all done with cloud computing – the team used Oracle Container Engine for Kubernetes to containerize those simulation applications, and then ran the models on high-performance Ampere Arm-based CPUs on OCI.

“The first workloads we moved to OCI were our Monte Carlo simulations for race strategy,” said Matt Cadieux, Red Bull Racing’s CIO, at Oracle CloudWorld in London. “The reason is that our race strategy is mission-critical. It has a big influence on our race results. We were running on an obsolete on-prem cluster, and our models were growing, so we needed more compute capacity – and we also needed to do something that was very affordable in the era of cost caps.”

In this case, the flexibility of cloud computing and the ability of Red Bull to spin up CPUs for short periods of time to conduct these simulations may have won them the title.

Regardless of whether teams take an on-prem or cloud-first approach, the cost cap has proven to be an obstacle that has forced innovation on all fronts. With a sport so reliant on technological innovation, scientific prowess, and out-of-the-box thinking, there is an argument that this limitation only serves to strengthen it.

Forcing the teams to find more financially sustainable ways of achieving performance will not only bolster competition in the long term, but could also change the way this industry, and others, approach technological problems in constrained environments.