Spiceworks recently had the opportunity to meet with trackside engineer Antony Smith of the Caterham F1 team, a super-friendly British chap who found himself here in Austin once again for the U.S. Formula One Grand Prix. In Part 1 of this interview series, Antony explains the crucial role IT plays in the F1 industry today, gives us an inside look at the technologies Caterham F1 uses, and tells us how they manage to pull it off, race after race, at F1 circuits around the globe.

P.S. Interviewing someone at an F1 track during practice laps is no mean feat. We weren't allowed to shoot video, but we've put together some of the audio and images in a one-minute video below to give you a taste of the noise and chaos we experienced at the Circuit of the Americas. I only wish you could also see the huge grins on our faces!

I just saw the movie Rush, and there were definitely no IT pros in F1 in 1976. How has IT changed the industry?

I started in ’95, and at that point, we were taking one laptop and one desktop per car, and that was it. No servers, no nothing. It was pretty simple stuff. In fact, we didn’t have an IT person trackside. We had one IT person at the factory, and that was me. But then it snowballed – the amount of data we were collecting, the amount of reliance on the IT. And now there’s no way back. When the team arrives at a circuit, they’ve got to build the car up to the specifications of that circuit. Without the IT, we can’t do that, let alone run the car during the weekend. We can’t afford any downtime now at all. You can’t afford for anything to go wrong.

You must be a bundle of nerves on race weekend . . .

Yeah, I used to have hair but it’s all gone now! We have one person doing IT trackside, and that’s it. The setup for this race on Sunday started first thing Monday morning. You get the servers up, you get the core systems up, and then you can start thinking about getting cabling in – you’ve got to do everything from square one. There’s a lot of pressure, and there’s no getting out of it with one person doing IT.

And there are things beyond your control. You can have power problems; we’ve had problems with the building, water leaks coming down through the ceilings, temperature problems, odd things. The temperature in Malaysia – we used to run to within one degree of shutting down. So you’re packing the computers with dry ice and everything, just trying to get them cold. It takes it out of you. There aren’t many people who have done it for a long time.

And what if you get hit by a bus??

It almost happened one year. We had bad food poisoning and we had a lot of people down. You just have to drag yourself in and do your job and then just get back to bed. We’re so restrained on numbers. We only get 45 passes, so if we were to take two IT people, that would mean we would have to get rid of somebody else that does another job, like strategy or engineering or something like that.

So, break it down for me. What’s the IT setup here at the circuit?

We’ve got a complete data center. It’s all resilient and all virtualized, and that’s the key for everything. And then on top of that, everybody’s got laptops.

We’ve got 30-odd virtual servers and we’ve got three hosts. We’re using VMware, and virtualization makes a big difference with trackside stuff. If we did it with one server per physical machine, it would cripple us in terms of weight and size and power. Some teams down the pit lane are still doing it that way – still fully physical. They won’t touch virtualization. They’re old-school. But if they have a failure of a machine, what do they do?

If we have a failure of one machine, it would be a problem, but it wouldn’t affect anybody because we’ve got so much over-capacity in terms of processing power. They’re only 1U and we’ve got three. If one fails, we can run absolutely everything on two. And if another one fails, then we can run everything that we need on one server. And we’ve also got a backup server in a different location, which we replicate to. For example, if we had a fire – and teams have had fires – then we haven’t lost everything.

I was really worried to start with because I’ve virtualized before but I’d always kept it within the same box, but it’s been fantastic. We’ve gone completely diskless on the servers, so we boot off SD and then they connect to the SAN. And the VMs are held on the SAN, which gives you the flexibility that you need to be able to move it around from one box to another. But it’s really quite scary to start with because the machine isn’t anywhere. It’s floating. It’s on the SAN, and if you lose the SAN, then you’re in trouble, but the SANs have been bulletproof. They’ve got multiple interface cards, they’re all RAIDed with RAID 15. We’ve got two SANs, one for performance and one for capacity. We don’t do anything clever. One thing that you learn with experience – you’ve got to keep things stupidly simple. You can over-complicate things so easily.
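For readers who want to picture the failover headroom Antony is describing, here’s a rough sketch. The three-host, 30-odd-VM layout comes straight from the interview; the host sizes, VM names, and per-VM demands below are our own illustrative assumptions, not Caterham’s actual figures or tooling.

```python
# Rough sketch of the failover headroom described above: ~30 VMs on three
# hosts, where everything must still run on two hosts and the essential
# systems must still run on one. Host sizes and per-VM demands are
# illustrative assumptions, not Caterham's real figures.

HOST_CORES = 32      # assumed physical cores per 1U host
HOST_RAM_GB = 256    # assumed RAM per host

# (name, vcpus, ram_gb, essential) -- invented example workloads
vms = [
    ("telemetry-db", 4, 32, True),
    ("timing",       2, 8,  True),
    ("file-server",  2, 16, True),
] + [(f"analysis-{i}", 2, 8, False) for i in range(27)]

def fits(surviving_hosts, workloads):
    """Do these VMs fit on the surviving hosts (no oversubscription assumed)?"""
    need_cores = sum(v[1] for v in workloads)
    need_ram   = sum(v[2] for v in workloads)
    return (need_cores <= surviving_hosts * HOST_CORES
            and need_ram <= surviving_hosts * HOST_RAM_GB)

essential = [v for v in vms if v[3]]
print("All 30 VMs on two hosts:  ", fits(2, vms))        # one host lost
print("Essential VMs on one host:", fits(1, essential))  # two hosts lost
```

Because the VMs live on the shared SAN rather than on any one box, this kind of shuffle is just a restart on a surviving host rather than a rebuild.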

What kind of capacity are we talking about, storage-wise?

Trackside, we’ve got 30 TB here – we’ve got to have this year’s data but we also have last year’s and the year before. During a weekend, we generate about 35 GB of data from the car. And then of course on top of that we’ve got video and audio, because we’re capturing all of that. We get 120 hours of audio per weekend and about 40 GB worth of video, because then we can take it back to the UK and analyze it. It’s turned out to be really, really useful.

And how do you get that data from the car?

There is a live telemetry link. Each car sends back about 2 Mb, and it’s streaming back a subset of the data – not the entire thing that we’re collecting because we’re logging something silly like tens of thousands of samples per second, and you can be doing that for two hours and it makes a lot of data. We log that to onboard storage and that’s downloaded when we get back to the garage. But certain key things are streamed back live. I think here at Circuit of the Americas there are seven aerials around the track, and there’s a little stubby aerial on the top of the car just in front of the driver. As the car goes around the circuit, that data’s all relayed from those remote stations back to us over fiber and we pick it up. The rest of the teams all do the same thing.
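As a rough illustration of that split between what’s logged onboard and what’s streamed live, here’s a minimal sketch of fitting a prioritized subset of channels into a fixed link budget. The roughly 2 Mb figure is Antony’s; the channel names, sample rates, and priorities are invented for the example.

```python
# Hedged illustration of the onboard-vs-live split: every channel is logged
# at full rate on the car, but only a prioritized subset is streamed over the
# ~2 Mb telemetry link. Channel names, rates, and priorities are invented.

LINK_BUDGET_BPS = 2_000_000   # ~2 Mbit/s downlink, per the interview

# (channel, samples_per_second, bytes_per_sample, priority) -- illustrative
channels = [
    ("engine_rpm",        1000, 2, 1),
    ("throttle",           500, 2, 1),
    ("gear",               100, 1, 1),
    ("brake_pressure",     500, 2, 2),
    ("tyre_temps",         100, 8, 2),
    ("suspension_travel", 2000, 8, 3),
    ("aero_pressures",   50000, 8, 4),   # high-rate channel, onboard only
]

def plan_stream(channels, budget_bps):
    """Take channels in priority order until the live budget is spent; the
    rest stay on the onboard logger and are downloaded back in the garage."""
    live, onboard_only, used = [], [], 0
    for name, sps, size, _prio in sorted(channels, key=lambda c: c[3]):
        rate_bps = sps * size * 8
        if used + rate_bps <= budget_bps:
            live.append(name)
            used += rate_bps
        else:
            onboard_only.append(name)
    return live, onboard_only, used

live, onboard_only, used = plan_stream(channels, LINK_BUDGET_BPS)
print("Streamed live:", live)
print("Onboard only: ", onboard_only)
print(f"Link usage: {used / 1e6:.2f} Mbit/s of 2.00")
```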

Basically it sounds like you’re orchestrating a data center move for every race. How do you pull that off?

We have a pod – a big cube that’s just the right size to slot into a jumbo jet. So that goes in, and that’s the transport. When we get to the circuit, we take everything out, and the pod then becomes the data center. We take the panels off the side, and there’s all the monitors already built in, so we just put chairs along the side and those are the workstations where people sit. Other teams have got other ways of doing it, but we’ve found this way works. During transit, it’s a big box, and during racing it makes an air-conditioned room.

In the meantime, what’s going on back at the Caterham F1 factory in the UK?

It’s sort of like a mission control back in the UK. It’s not really NASA but there are loads of desks, big screens, and people there can do strategy and performance analysis. They get all the data live, they get all the video feeds that we’ve got, they can see into the garage. All the other teams do the same thing, but it can save you a fortune because you don’t have to bring people out.

They can be connected to the virtual machines here, as if they were here. We use RDP for remote access. It’s just standard RDP, and it works really well. We can be in the UK or in Belgium, say, and we might be ten or twenty milliseconds down the road, or we can be in Australia, and it’s 400 milliseconds away. It’s that much of a difference. And it doesn’t matter how big the pipe is.
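To see why the round-trip time matters so much more than the size of the pipe for an interactive session, here’s a back-of-envelope sketch. The 20 ms and 400 ms figures are Antony’s; the number of round trips per user action and the screen-update size are rough assumptions of ours.

```python
# Back-of-envelope model of why RTT, not bandwidth, dominates an interactive
# remote-desktop session. The 20 ms and 400 ms figures come from the
# interview; round trips per action and update size are rough assumptions.

def perceived_delay_ms(rtt_ms, round_trips=3, update_kb=50, bandwidth_mbps=10):
    """Approximate latency felt per user action: protocol round trips plus
    the time to push the screen update over the link."""
    transfer_ms = (update_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return round_trips * rtt_ms + transfer_ms

for place, rtt in [("UK/Belgium (~20 ms RTT)", 20), ("Australia (~400 ms RTT)", 400)]:
    print(f"{place:25s} -> ~{perceived_delay_ms(rtt):.0f} ms per interaction")

# Doubling the bandwidth barely changes these numbers; the round trips do.
```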

So we’ve got dedicated links back to the factory, but we’ve got to be standalone. We’ve got to be self-contained. You’ve got to run as if there’s nothing there, because you never know when someone might put a digger through one of the cables. It’s a long way back to the UK, and there are a lot of links in that chain that are way, way out of our control. We do rely on the UK – we have IT resources working there at the moment, in fact. But if they were to stop working, it would be a pain, but it wouldn’t stop the car from going ‘round.

What do you have in the way of specialized IT equipment back at the factory?

We have the Dell HPC, which is 180 nodes, around 1,500 cores. We’re going to be shooting that up to 5,000 cores next year, and that’s doing all the aerodynamic work and simulations. It saves us from having to spend so much time in the wind tunnel. Otherwise you’d have to build everything first and test it in the wind tunnel. This way we can just model the designs, find out which ones work, test 20 of them, and then only make two or three, rather than trying to build all 20, which you could never do. It saves us a huge amount of time. It really tightens up the loop of development.

Tell me some of the unique challenges you face supporting an IT infrastructure on an F1 circuit.

Well, we don’t use mil-spec servers – we just use standard Dell office servers and laptops, but we do push the computers to the limits. I think Dell quite likes that, because it means they can say, “If it lasts here, then it’s going to work for you, no matter what you do.” We’ve just replaced our servers after three years, and they’d been around the world about seven times. They can be in a plane and shaken up all the time, hot and cold, hot and cold, they get left out between races, they can be left out on an airport runway for a week. And they don’t go wrong. So, Dell loves it because it’s one of these proving grounds. These are just standard Dell EqualLogic SANs, standard servers, it’s all normal stuff. We take them apart and give them a clean-down and get the dust out, but that’s it.

And the amount of abuse that our laptops get! And yet they just keep on going and coming back for more. You’ve obviously heard, it’s quite noisy around here. If you’re standing next to the car when they start it, that’s about 125 or 130 decibels, and that will kill hard disks. The vibration will physically kill them. If you’re on your laptop and they start the car – blue screen instantly. So everything’s solid state. And they’re not rugged laptops. They’re just normal Latitudes, Precisions. Just normal stuff. And our users are just normal users. They’re not particularly careful. Except if it rains on the grid, we might take them out and put them in a plastic bag! My laptop now is from 2010. I’m still using it. It’s never had a hiccup.

OK, this is slightly off-topic, but I got to see the F1 steering wheel up close during our tour of the garage. What’s up with all those knobs and buttons?? The only thing my steering wheel does is turn the radio up and down.

They’ve got I don’t know how many dials and settings, but it’s almost like a menuing system, where you select one thing to go into a mode, and then you change the parameters. They can change the diff or the fuel-mixture ratios during the race, depending on whether they want performance or overtaking power for the curves. That’s all done through the steering wheel.

It used to be just a little bit of metal with rubber on the outside, but all the display is in the steering wheel now. So you’ve got the shift lights and the timing information, so the driver can actually see how his lap is progressing as he goes ‘round. He can see whether he’s up before he’s finished the lap. It’s got paddles for gearshift, clutch. And if there’s a sensor giving us bad information, sometimes you’ll hear the race engineer on the radio telling the driver to go into such-and-such mode and fail the sensor. And that’s whilst they’re driving around during a race!

By the way, those steering wheels cost $100,000.



[Video: Q&A with F1 IT pro Antony Smith – Spiceworks interviews Antony Smith, trackside engineer for the Caterham F1 team, during practice laps at the Formula One U.S. Grand Prix, held at the Circuit of the Americas in Austin, Texas, in November 2013. Duration: 1:23.]



Thanks for the chat, Antony! SpiceHeads, be sure and check out Part 2 of this series, where Antony tells us how he broke into the biz, shares his favorite IT horror stories, and reveals some of the challenges (and perks) of life in F1. You can also visit Interview Central for more great Q&A articles. Know someone who would make an awesome IT interview? Pitch it to us at contentninjas@spiceworks.com.