Five years ago at the Code Conference, self-driving cars seemed as though they were just around the corner: Google unveiled the project that would later become Waymo, and Uber’s then-CEO Travis Kalanick stirred controversy when he talked about the benefits of replacing human drivers. But in 2019, autonomous vehicle prototypes are a rarity in most cities outside of San Francisco, and humans are still vital to companies like Uber and its first-to-IPO rival Lyft.

That’s because self-driving is a really, really hard technological problem, Ford CTO Ken Washington said on the latest episode of Recode Decode with Kara Swisher. But, very slowly, beginning in 2021, you’re going to start seeing cars with no one in the driver’s seat.

“You may see some earlier ones in 2020, but we believe in taking the time to work with the cities,” Washington said. “If you just put a bunch of autonomous vehicles in the city without designing it to make life better in that city, you’re gonna have an analogous problem to what happened when Ubers first started showing up. People hated them because they’re camping out on the corners, and it made congestion worse, it created additional pollution.”

Ford is currently testing its self-driving cars (still with humans in the front seat as a precaution) in Miami, Washington, Dearborn, Pittsburgh, and multiple places in California. Washington explained that, in order to be ready for regular consumers, these “robo-cars” need to have a pre-existing 3-D scan of every street they might drive on.

“This is not your navigation map, the kind of map that you would use on your cellphone that you pull out and you do a Google Map or an Apple Map,” he said. “It’s actually shooting light beams out in three dimensions off of the roof of the vehicle ... [and] capturing these points and creating a 3-D image of what the world looks like.”

“If you don’t have that part of the map, you’re relying on, in real-time, detecting everything that might happen, and that’s just too hard of a problem,” Washington added, before taking a dig at Tesla’s so-called Autopilot features. “That’s why these vehicles that don’t have LIDAR, that don’t have advanced radar, that haven’t captured a 3-D map, are not self-driving vehicles. Let me just really emphasize that. They’re consumer vehicles with really good driver-assist technology.”

You can listen to Recode Decode wherever you get your podcasts, including Apple Podcasts, Spotify, Google Podcasts, Pocket Casts, and Overcast.

Below, we’ve shared a lightly edited full transcript of Kara’s conversation with Ken.

Kara Swisher: How are you doing?

Ken Washington: I’m great.

Now Ken, you’re gonna tell us how cool all this stuff is now that we talked about the disaster that’s coming. Let’s talk a little bit about ... I wrote a column last week, which did get a crazy amount of commentary. Thousands and thousands of comments on the New York Times. People were for it, people were really, really for it, and what I said in the column is that I really don’t want to own a car again. I wrote a piece in the Wall Street Journal 25 years ago saying, “You will have mobile phones. You will not have landlines. You will not be wired. It will all be wireless.” It was a very good prediction, and then I said, “Now, you’re never gonna own a car,” and, “It will be as quaint as owning a horse.” I think that’s the expression I used.

I was trying to get a discussion going. I mean, obviously you wanna talk about where this is going, but I do truly believe that we’re on the cusp of this because of self-driving and AI and the stuff that are gonna go into transportation. So let’s talk a little bit about cars first, and then we’ll get into the other things that companies like Ford and others are doing, where AI does benefit a company. So let’s talk about where we are with autonomous vehicles right now and where AI fits in.

Well first, I just wanna say, I think you definitely got people’s attention with the article.

Yeah.

I think the response was a reflection of the fact that people love cars, and some people hate cars, but everyone needs to move around. What autonomous vehicles have done for us is given us the potential and the promise of a new way of moving around, and a new way of creating mobility, and a new way of solving real pain points in cities. What I loved about your article was it really shined a light on the fact that in urban centers, where you really need a different model, this new model is beginning to emerge.

Right. I was talking about car ownership, not car driving. I will continue to drive and move in mobile vehicles depending on ... but it was the car ownership and it’s the idea of ownership. Just the way we owned entertainment before, now we really don’t. Just the way we owned records. It’s the same concept, that this is one that’s moving in, and especially in urban areas, where I think 90 percent of the population’s going to be in a megalopolis over the next 25 years, like 90 percent. So those other 10 percent can have their cars, it’s fine, but it’s just what happens when those population ...

So talk about where we are and how AI fits into the idea of where we are with autonomous. Let’s start with autonomous vehicles. We’re right in several stages, right? Three, four, five, we’re in three now, which is?

Well, let me just clarify that.

Okay.

It’s a term that’s often really misused and misunderstood. When you’re talking about autonomous vehicles, particularly in urban cities where you predict 90 percent of people are gonna live, what you really have to think about is an autonomous vehicle that can truly sense the environment completely and can truly take the human out of the loop. So that’s a level-four autonomous vehicle.

Even that level-four autonomous vehicle that can operate in this urban environment, it’s gonna have some boundaries around it. It’s gonna have to have the ability to operate in a city that it’s seen before, so it has to have been mapped. There are gonna be weather restrictions on it, at least with today’s technology, and there are gonna be speed restrictions, because the sensors aren’t perfect and hopefully no one in this room believes that the AI part of the problem has been completely solved. You can’t even review résumés with AI and not screw it up.

Getting the AI right in a vehicle is really hard. It involves a lot of testing and a lot of validation, a lot of data gathering, but most importantly, the AI that we put into our autonomous vehicles is not just machine learning. You don’t just throw a bunch of data and teach a deep neural network how to drive. You build a lot of sophisticated algorithms around that machine learning, and then you also do a lot of complex integration with the vehicle itself.

Right. Talk about the challenges. What are the challenges faced right now by getting to that level of, I guess it’s full autonomy. There’s all kinds of different ways people discuss it, but the full autonomy, where you get it, you call something. It comes to you efficiently, and then you take it somewhere.

We like to use the term “self-driving” for the full autonomous vehicle that you would use in an urban city.

This is without a driver in the front?

Without a driver in the front. And the state that we’re in today is that it’s still a development activity.

Right.

We still use safety drivers who sit in the driver’s seat-

In my experience ... I’ve been in a lot of these cars. The original Google ones that were started before Waymo, I guess, were people who are sitting in a car that’s been tricked up. A car that’s not built specifically to be autonomous, which means they took a regular car and stuck stuff all over it, which you all, everybody ...

Pretty much everyone does that today.

Everyone does that today.

Pretty much everyone does that.

Then, they had their clown car, which had no ... Which I drove and I tried to run over the head of Waymo, but it didn’t work. It had no steering wheel. It had no pedals. It was like a Disney ride, essentially. You got in, and I just was texting and drinking the whole time. You know? No, I wasn’t. No, but it was ... It felt like I was on a ride at Disney World. Between that, there’s something else going on. Talk about where we are in that.

Where we are right now, and the reason you don’t see those kind of cars on the roads today is that I think Google and a lot of other tech companies, many of them out of Silicon Valley, realized just how hard it is to make the car part.

Right.

Where we are right now is we have a, basically a software company that was a startup. It was founded by one of the ex-Google leads, Bryan Salesky, and co-founded by Pete Rander from Uber. They joined forces and created a company. Ford invested in that company. We invested a billion dollars in that company, and they’ve grown that company to a little north of 300 people. They’re working shoulder to shoulder with engineers from Ford to develop the vehicle part, the software part, and all the really messy stuff that has to happen in between, to connect them together.

Then, we’re working with cities, and that’s a really critical part. You can’t just put a vehicle together, and just show up and expect it to actually do something that’s gonna be a good experience. You need to work with cities to understand, “How do vehicles work in this city? What are the behavioral patterns of citizens in this city? What are the pain points of people that wanna move around? What are the congestion patterns like?” Then, you interact with the people in the cities so that you’ll understand what the human needs are.

Then, we’re working on taking all of that information and folding it into creating a business that will work with a self-driving car. That’s where we are today, and we’re in the process of testing. We’re testing today in five cities, preparing for deploying a ...

What are the cities you’re in?

The five cities are Miami, Washington, Dearborn, Pittsburgh, and we just started testing in California.

Why did you pick these cities?

Miami was our first pick because it’s a fairly large city and it had a fairly diverse population. That was really important to us. We wanted to work with a city that had a really favorable political climate, while also a really favorable ... it really had a need. Right? We started talking with the city. We realized that we actually could have a good relationship with the city, and they had a real need for us to solve a pain point in the city.

Which is congestion?

It’s congestion, and there are a lot of underserved communities that don’t even own a car and had no way to get to work. It was a multilingual city, so we knew that we had to solve the diversity challenge and meet that challenge to be effective at scale. So it was a really ideal city for us, and also it’s a place that has really good weather.

We didn’t want to take on too big of a problem at first because, look, a level-four autonomous vehicle is a self-driving car that’s going to operate without a person behind the steering wheel. You’re gonna put your loved ones in this vehicle, and you’re gonna trust that this vehicle is gonna take them where they want to go. It’s a really hard problem.

Yeah, but I’ve driven with my mother, so I’ve already passed that test. I’d take a self-driving vehicle any day of the week, and twice on Sunday. Talk about the AI elements, because people don’t realize it’s both making the car and putting the AI elements ... AI is pulling in so much data, pulling in and understanding it, and then there’s the sensors part of it and the radar. There’s several different pieces of it.

The carmaking is a really important part that people, I think, don’t realize. There are all kinds of cars being made now, but I remember being struck by someone at Uber many years ago telling me, “This self-driving thing’s gonna be easy. Carmaking is trivial,” and I was like, “You’re an idiot.” Like, for so many other reasons, but this one is really stupid. This is a particularly stupid thing to say.

Talk about that. How do you integrate these physical objects, a car, which could be made of anything going forward, and the AI together? What is the challenge of that?

Well, there are many layers to that challenge. The first one that I’ll talk about is the challenge of getting the software to integrate into the hardware, because there are many ways you can do that. We’ve learned through experience that the best way to do that is to design the hardware to have an abstraction layer. That is, don’t design the software in the middle of the vehicle hardware. Make it so that the software team can just work on the software, and then the hardware team can define well-defined interfaces so the software can then integrate in with the vehicle.
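The abstraction-layer idea Washington describes can be sketched in a few lines. This is a hypothetical illustration, not Ford’s actual interface: the driving software codes only against an abstract platform with well-defined methods, and any hardware team’s implementation (or a simulator) plugs in behind it.

```python
from abc import ABC, abstractmethod

class VehiclePlatform(ABC):
    """Hypothetical hardware abstraction layer: the self-driving
    software talks only to this interface, never to model-specific
    vehicle hardware."""

    @abstractmethod
    def set_steering_angle(self, degrees: float) -> None: ...

    @abstractmethod
    def set_throttle(self, fraction: float) -> None: ...

class SimulatedPlatform(VehiclePlatform):
    """A stand-in implementation, e.g. for testing the software stack."""
    def __init__(self):
        self.steering_angle = 0.0
        self.throttle = 0.0

    def set_steering_angle(self, degrees: float) -> None:
        # Clamp to the (made-up) physical limits of this platform.
        self.steering_angle = max(-35.0, min(35.0, degrees))

    def set_throttle(self, fraction: float) -> None:
        self.throttle = max(0.0, min(1.0, fraction))

def follow_lane(platform: VehiclePlatform, lane_offset_m: float) -> None:
    """Software-side logic written against the interface only:
    steer proportionally against the lateral offset from lane center."""
    platform.set_steering_angle(-10.0 * lane_offset_m)
```

The software team iterates on `follow_lane` without touching hardware code; the hardware team can change what sits behind `VehiclePlatform` without breaking it.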

So that’s just one of the first challenges, but the actual design of the vehicle itself is not trivial because the self-driving car is gonna have a different interaction experience with the human that’s gonna use the vehicle. When you ride in a self-driving vehicle, you’re gonna have a different way of approaching the vehicle. You’re gonna have to hail the vehicle. It’s gonna have to know that, “Oh, that’s you that suggested to hail the vehicle,” and, “How do I know that it’s not the person that’s walking behind you and is gonna hop in the car instead?” So there are all kinds of challenges associated with having a way to interact with the person that’s gonna get in and take the ride.

There are design challenges, and one of the things that we’ve been thinking about, and actually implementing, is using AI inside of our engineering teams to help us do the design of the vehicle itself so that we can optimize the solution to some of these challenges.

Meaning what?

Let me give you an example. One of the things that we’ve begun to work with in the AI space is with 3-D printing and designing differently. When you can 3-D print certain parts and certain components on a vehicle, you can design the part to look any way you want, because you can print things that you couldn’t forge or you couldn’t make with some other method.

There are AI-enabled engineering tools that will design the part in the same way that nature would. It’s called “generative design.” We’re working with some of the companies that are on the forefront of this technology, to do generative design of some parts that we may end up putting on both autonomous and our non-autonomous vehicles.

Right, meaning designing for ... How would that ... It knows what people want?

You give it the parameters, and you tell it what engineering constraints it has to meet. It has to be so strong, it has to have this property, has to have this size, and then the AI will actually go in and say, “Well, it needs to have material here, and here, and here.”

Which is what engineers used to pick.

But it doesn’t need to have material everywhere, so it can lightweight the vehicle and it can ensure that it’s gonna have the right properties and the right strength. Then, you can send that to a 3-D printer and it will make the part.

Make that particular part?

Make that particular part.

It could also be used with consumer preferences too, correct?

Absolutely, you can use it for customizing parts, because imagine that you’re the owner of an autonomous vehicle fleet and you’re entering into a ride service. What if you wanted to make your autonomous vehicles customized for your service? You could actually design a customized part that would have an emblem with your company name on it, and you could print that part and put it onto the vehicle. It’s a great example of using AI together with a new technology, with 3-D printing, together with the real opportunity to provide a new service to society. A lot of technologies clashing together to create new opportunities.
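The loop Washington describes (constraints in, material kept only where it carries load) can be caricatured in a few lines. This is a toy sketch, not a real generative-design tool: real tools couple material removal to a physics simulation, whereas here a precomputed per-cell “stress” list stands in for that, and the least-stressed cells are greedily removed while a total strength requirement still holds.

```python
def generative_lightweight(stress, strength_needed):
    """Toy generative-design sketch.

    stress: per-cell load estimate for a fully solid part (made-up proxy
    for a physics simulation). Remove the least-stressed cells first, as
    long as the remaining cells still carry at least `strength_needed`.
    Returns the indices of cells that keep material."""
    cells = sorted(range(len(stress)), key=lambda i: stress[i])  # least-stressed first
    keep = set(range(len(stress)))
    carried = sum(stress)
    for i in cells:
        if carried - stress[i] >= strength_needed:
            keep.remove(i)        # cell not needed: lightweight the part
            carried -= stress[i]
    return sorted(keep)
```

The result is the “material here, and here, and here” answer he describes: the engineer supplies constraints, and the tool decides where material survives.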

When you think about other parts of the AI, it’s also mapping, it’s also understanding cities and geographies. We’ll be having Code in Arizona, and we wanna take people to this one area, but they don’t have it mapped at all in autonomous vehicles. So they can’t really do anything. There are also no sensors in the roads. Some places are putting sensors in roads to be able to do that. There’s all kinds of ways to do that, but they haven’t had enough AI technology deployed there to understand the place. Talk to people about what you have to do. You’ve got to basically map the whole world again, and then it’s also a world that’s changing all the time.

Yeah, so this is a really important point. It’s really important to know that a self-driving car has to operate in a region that you have fully mapped. This is not your navigation map, the kind of map that you would use on your cell phone that you pull out and you do a Google Map or an Apple Map.

This is a map that is developed by driving the space with an autonomous vehicle, that’s got light detection and ranging sensors on it — LIDAR for short. It’s actually shooting light beams out in three dimensions off of the roof of the vehicle if that’s where you’ve got your LIDARs mounted. It’s receiving the light back, and it’s then capturing these points and creating a 3-D image of what the world looks like.

Then, the vehicle uses that static image of what the world looks like and it does some fancy footwork to take off the parked cars and things that could move, because it wants to know, what won’t move? When the car is in self-driving in that region, the LIDARs operate again. Then, they make a comparison of what was there statically and, “What do I see now?”

If something is there that wasn’t there when you were mapping it, that means it could move. It could be a kid on a bike. It could be another person. It could be a pedestrian. It could be another car. It could be a dog. The task is, first of all, knowing that these are things you need to know about; you need to predict them, you need to know their trajectory. If you don’t have that part of the map, you’re relying on, in real-time, detecting everything that might happen, and that’s just too hard of a problem.
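The map-then-compare step he describes can be sketched as a point-cloud difference. This is a toy illustration, not a production pipeline (real systems localize the scan precisely and reason about occlusion); snapping points to coarse voxels stands in for matching a live LIDAR scan against the prior static map, and anything with no counterpart in the map is flagged as potentially moving.

```python
import numpy as np

VOXEL = 0.5  # meters; coarse grid used to match scan points to the map

def voxelize(points):
    """Snap 3-D points (N x 3 array) to integer voxel coordinates."""
    return set(map(tuple, np.floor(np.asarray(points) / VOXEL).astype(int)))

def dynamic_points(prior_map_points, live_scan_points):
    """Return live-scan points with no counterpart in the prior static
    map: candidates for moving objects (pedestrians, cars, dogs)."""
    static = voxelize(prior_map_points)
    live = np.asarray(live_scan_points, dtype=float)
    mask = [tuple(v) not in static
            for v in np.floor(live / VOXEL).astype(int)]
    return live[np.array(mask)]
```

Points that fall in a voxel the mapping pass already saw are treated as static scenery; everything else gets handed to the prediction stack.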

That’s why these vehicles that don’t have LIDAR, that don’t have advanced radar, that haven’t captured a 3-D map, are not self-driving vehicles. Let me just really emphasize that. They’re consumer vehicles with really good driver-assist technology. Some West Coast companies that sell really great electric vehicles, I won’t name them, they’re really great driver-assist technology vehicles, but they’re not self-driving vehicles. Right? In fact, you can prove that, because you can Google and find out that they’ve been tricked. They do all kinds of crazy things when you put things in the environment that they don’t understand, or when the line markings are covered with dirt or gravel or snow, they don’t work. I can go on and on.

Yeah, yeah, don’t. Okay.

But the point is, you gotta have a prior map.

Buy a Ford. I got it.

You gotta have advanced ...

Okay.

No, that’s not the point. No, seriously, that’s not the point. The point is, you gotta be clear about the fact that self-driving involves a lot of complex technology, and you gotta approach the problem with that kind of seriousness. Ford is not the only company doing it that way, but we are doing it that way.

Though, let me be fair, y’all wouldn’t have been in this if they hadn’t started it. Let’s be, this is removed for ...

No, no. I gotta push back on you. I gotta fix that.

I don’t think you would’ve. I don’t recall you saying it, not you.

Ford was the only automaker participating in the first DARPA challenge. We just didn’t go in with our name.

Explain what DARPA is.

The DARPA challenge is a Department of Defense .... DARPA stands for the Defense Advanced Research Projects Agency, and they, every year, put out a big, difficult challenge to the technical community. It’s usually something that they think is too hard to solve. They pour a bunch of money on it and they say, “What teams can come in here and give us a solution to this problem?” Usually, they’re so hard that nobody solves the problem, but you learn a ton. Sometimes, as in the case of self-driving cars, it sparks an entire industry.

The very first DARPA challenge was, they put out the challenge of building a self-driving car, because nobody knew how to do that at the time. Ford was the only automaker that participated in that first challenge. In fact, we bought the first Velodyne LIDAR and bolted it on the top of an F-250. Of course, we didn’t succeed in the trial, and nobody else did either, but we learned a lot. That launched the Ford autonomous vehicle project.

I wanna get back to AI and the bigger question of where AI fits into it. What would you say if you had to estimate, and I know you probably hate this question, when will you see a shift completely from human driving? Because, I think we can all agree, humans are the problem here. Humans driving is the issue.

One person told me something very interesting, I thought it was really smart, it really stuck with me: “When an autonomous vehicle has an accident or has a problem, all the others learn through AI and other technologies. When a human makes a mistake, they make it again,” like I did the other day when I just hit a car. I’ve done that many times. Talk about that. When do you imagine it rolling out?

I don’t think it’s gonna be a step function. It’s gonna be a gradual deployment.

You mean like horses to cars?

Yeah, it’s gonna be kinda like that, and it’s gonna be kinda like when the internet rolled out. Everybody didn’t just suddenly jump on the internet, right? When cell phones came out, you had people walking around with bricks on their shoulder, and every once in a while somebody would show up with one. I think it’s gonna be kinda like that.

You’re gonna see it in some of the easier cities first. Miami and Washington, in the case of Ford, and our competitors have picked their cities. You’re gonna see self-driving cars roll out in fleet deployments for both people movement and package delivery in those easy urban environments that are easy to work with.

When?

You’re gonna start seeing that in 2021.

2021, this is delivering ...

You may see some earlier ones in 2020, but we believe in taking the time to work with the cities, to design the business right so that when you show up, you make it better instead of making it worse. I mean look, this is an optimization problem. If you just put a bunch of autonomous vehicles in the city without designing it to make life better in that city, you’re gonna have an analogous problem to what happened when Ubers first started showing up. People hated them because they’re camping out on the corners, and it made congestion worse, it created additional pollution.

So we’re gonna go into these cities and work to design the solution so it makes the experience better. We’re gonna take 2020 to finish that process, and then go in 2021. I think that’s gonna be the beginning of this slow deployment.

Does that make Ford and your competitors, the others, are you car companies anymore or are you data companies, AI companies? I do understand the car itself is interesting, but these will be fleets of cars, and possibly you will be running the fleets, and not people owning them.

That’s right.

It won’t be that everyone doesn’t have an autonomous vehicle. They will just be in fleets and you will rent them, or they’ll be something like that.

Yeah, I think that’s right. We think of ourselves as a mobility company, which includes being both a carmaker and also a company that has an AI core competency. We have to have an AI core competency, not only because you need AI in order to pull off the self-driving task. But in parallel to slowly and gradually rolling out fully self-driving vehicles to a small set of cities, we’re gonna keep putting more and more AI into the vehicles that people are buying and leasing and renting and using in Lyft and ...

Why is that? To move toward assisted driving?

Well, because humans aren’t very good at driving. Most people are worse drivers than they think they are.

I’m a real bad driver. I know it.

I think a lot of people are.

My children know it, they’re here.

You’re not alone. We’re all flawed humans when it comes to driving, because this is ... It’s actually not that easy to be a great driver-

No, I’m in a constant rage, but go ahead. Go ahead. Sorry. Go ahead. So you have to have AI in the cars? What does that look like?

You need to have assistance, and we know that we can provide more and more driver assistance because the technology is getting cheaper, sensors are getting smaller, compute is getting faster, memory is getting more ….

So how does that look at ... You talked about sort of in five, 10, 20 years. Right now I’ve got a car that beeps at me going backwards, and I can see the backwards. It beeps at me when I don’t have ... It just beeps at me when I don’t have a belt on. It’s not very assistive. It’s an irritating person, really, is what ...

I’m actually glad you said that, and the reason I’m glad you said that is the reason these cars are beeping at you and doing all that stuff is ... We’ve done a terrible job of designing the human experience in the vehicle. And we’ve learned that we’ve got to take a step back and start thinking about that differently.

You’re right. You know, if it spoke to me and said, “Kara, you don’t want to die today. Put that on!” You know? That would be nice.

And everyone’s going to want a personalized experience, right?

Right.

So think about when you first bought ... How many of you have an Alexa device or some kind of smart home device? If you have a smart home device, you start to interact with it and it gets to kind of know you, and you can actually train your voice on it, and, well, why shouldn’t your car have that same kind of interaction with you?

So we’re thinking about building AI into our vehicles so that it can be personalized, so that it can be a better experience for you, so that when you get in your vehicle it’s your sort of oasis of ... It’s yours, right? And even if you don’t own it, if you get into a vehicle that’s shared, there’s no reason why that shared vehicle can’t know who you are, too, because you’ve got your smartphone with you.

It knows you got in. It’s got all the data in the cloud, and so it should be able to say, “Oh, hey, Kara. I see you’re going on a shared ride or you’re on a Lyft or an Uber ride,” and it knows that you don’t like this kind of music and you like this kind of music, and you like the temperature set to 72 or 71. And so AI can totally change how we think about delivering an experience that’s curated for you.
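The lookup he describes is conceptually simple. Here is a minimal sketch, with a hypothetical in-memory dictionary standing in for the cloud-backed preference store, and a device ID standing in for however the car actually recognizes the rider’s phone:

```python
# Hypothetical rider-profile store; a real system would query a cloud
# service keyed on the identity of the phone that hailed the ride.
PROFILES = {
    "kara-phone-id": {"temperature_f": 72, "music": "news radio"},
}

DEFAULTS = {"temperature_f": 70, "music": "off"}

def settings_for(rider_device_id):
    """Merge the rider's stored preferences over the vehicle defaults;
    an unknown rider simply gets the defaults."""
    prefs = PROFILES.get(rider_device_id, {})
    return {**DEFAULTS, **prefs}
```

The point is only that the shared vehicle’s personalization is a profile lookup plus sensible defaults, not anything stored in the car itself.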

Okay, so talk five years out, 10 years out. What could it be? Before we get to fully autonomous, what could be in it? You’re driving a car, it has your temperature. You don’t have to figure out how to ... The hours I’ve spent trying to get the car to talk to the phone to play the music is insane. It does that without a problem, right?

Right.

Never, never. That is never going to happen. But what does it do? What are the other things it does? Puts your seatbelt on properly, does the seat properly, right? That kind of ...

Well, I think we should think more broadly about how the future of smart, AI-enabled vehicles will interact with the rest of the smart world around us. I mean, I love tinkering with technology in my home, so my home has all these smart sensors on it. Well, in five years, your car that’s got AI in it’s going to interact with your home that’s got AI in it. And so when you pull up to your home, the garage door will open, and when you pull in, the garage lights ought to come on and the ... It should unlock the door, but only if you want it to. You could say, “Don’t unlock the door,” or, “Unlock the door.” So it should just be seamless and frictionless, because you’ve got this rolling computer that’s got AI in it with all these sensors on it.

Oh, and another thing is, why can’t this vehicle serve you in other ways if it’s in your life? Let’s say it could be a sentinel for you. It could turn on the lights and warn you if somebody comes around your home. It could ... If you ...

Wasn’t that the plot of Christine? That ended badly for the people, I recall.

Well, look. We can take cues from sci-fi in lots of ways, right?

Right, okay. So it’s your sentinel. It sits out there, and when someone comes along, like a dog, like what?

Well, so this is all part of the iteratively improving part of AI.

So it uses AI to take sensors from around ... It’s already got sensors, so why not let it do other things?

Exactly, exactly. And imagine at a work site, if you’re driving a truck or a commercial vehicle, how it could be an assistant to you. So, look, I think there are lots of opportunities here to stitch AI into a vehicle, but one other thing is, we’re also using AI to help us make the vehicles higher quality.

An example is out of our Silicon Valley lab, and, by the way, I don’t know if your audience knows that we have a presence in Silicon Valley, and it’s been a huge benefit to us, because we’ve met over 1,000 startups. And of those 1,000 startups, one that we met is an AI company that was using AI to detect flaws for a totally different industry, and we began talking to them, and said, “You know what? One of the really hard problems we have is finding quality flaws, wrinkles on seats, when we make them.” I mean, it sounds really silly, but it turns out it’s a hard problem.

Wrinkles on seats?

Yes. It’s actually a hard problem, and we actually had ...

I hate the wrinkly seat of a car.

We literally had people looking at seats going, “Yeah, that one’s wrinkled. Send it back. Nope, that one’s not,” right?

Wow.

And so now we have that company deploying AI to do quality inspection, improving the quality of our seat inspection in the manufacturing process using image recognition and AI.

So, what? It takes pictures of the seats and then it knows a wrinkly seat?

It takes pictures of the seats and learns, and it iterates, and it gets better and smarter, and makes us ...

So it rejects this ...

Yeah.

It just says, “No,” like ...

“No. Yes. Yes. No.”

Yes. Right. Yeah.

It’s a great example of the use of image recognition to improve the manufacturing process. And there are many other examples. The point is that we’re embedding AI as a core competence to enable us to make cars better, but to make the experience of owning a car better, and also driving a car more safely and more smartly, in addition to self-driving.
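In a deployed system like the one he describes, a trained model does the real work, but the underlying idea (score the high-frequency texture of a seat image and flag outliers) can be sketched with a hand-rolled measure. Everything here, the Laplacian-variance score and the threshold, is illustrative, not Ford’s or the startup’s method.

```python
import numpy as np

def wrinkle_score(gray_image):
    """Texture-energy proxy: variance of a discrete Laplacian.
    A smooth seat surface yields a score near zero; wrinkles add
    high-frequency edges and raise it."""
    img = np.asarray(gray_image, dtype=float)
    # 5-point Laplacian over the interior pixels.
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def is_wrinkled(gray_image, threshold=10.0):
    """Accept/reject a seat image: 'Yes' if the texture energy exceeds
    the (made-up) threshold, 'No' otherwise."""
    return wrinkle_score(gray_image) > threshold
```

A learned classifier replaces the fixed threshold in practice, which is what lets the system keep getting “better and smarter” as it sees more seats.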

Talk about that part. Talk about that, driving safely, and what could it do?

So earlier I was downtown talking with another group about the fact that we still have a large number of fatalities in the US and globally with automobiles on the road, and the vast majority of those fatalities are driven by human error. The fatality rate in the US went up in the last few years, and that’s largely driven by both the complexity of traffic on the road and by distracted driving. And so, if you can use AI to detect when a person is distracted — and they’re driving — by having internal image recognition or other bio-cues, you could save lives. And so we’re working on interior ...

And what would it do? Shut down the phone or whack it out of your hand or what?

Well, hopefully nothing quite that intrusive, but you could certainly ...

I think it’s a brilliant idea, but go ahead. You can borrow it.

You can certainly slow their speed down, or you could alert them, or you could vibrate the ...

Report them, to the authorities.

That’s a little more intrusive than we’re thinking.

I know. I’m going to get to the creepy stuff in a minute. But go ahead. So safety, so slowing people down if they’re texting, and ...

That’s right. So you can make the experience safer. We’re also working on the next step in driver assistance technology, because it’s going to be a gradual rollout before you have fully autonomous vehicles. Oh, and by the way, fully autonomous vehicles are not going to be the kind of technology you’re going to go out to your dealer or even to a showroom and buy a personally owned autonomous vehicle any time soon. They’re extremely expensive because of the sensor suite and the kind of compute that’s integrated into the vehicle.

But the kind of technology we can increasingly put on the vehicle can give you really great driver assist technology experiences, because we can now put the kind of sensors similar to the ones that we put on our self-driving cars, just without a prior map and without LIDAR.

Radar is getting really good. Camera technology is getting extremely good. And then the AI software is getting very good. So we can do more than just lane-keeping. You can do lane-centering, lane-following, the kind of thing that’s been available from that West Coast, that battery/electric company, for a while but that’s now rolling out at scale by ...

In human-driven cars.

In human-driven cars.

When you think about all this stuff, some of it sounds great. Some of it sounds extraordinarily creepy.

Yes.

Talk about that, because ... given the previous stuff about where these things go wrong. Some of this sounds great. Some of the stuff they talked about sounds great, you know, weather, climate, things like that. What are the worries you have as the CTO? It could do a lot of things that aren’t so good.

Yeah, that’s right, and that’s why you have to be very intentional about how you treat data and how you treat both the collection and the diversity of data and the care of that data.

Right, like the thing you talked about, the texting. That could go right to the insurance company. That could go right to wherever, the police, things like that.

Exactly right. So it starts with building the trust of an owner to feel good enough about how you’re going to care for their data that they’re willing to give you access to it. And our experience is that trust and that willingness will happen when you can offer something in exchange for that, that’s of value, and then when you don’t break that trust. And so trust is really hard to gain and it’s really easy to lose, and so we’re being very intentional about how we’re caring for the data that we have the honor of managing.

So that’s people’s speed, whether they’re speeding or not, where they’re going, what they’re doing in the car.

All of that is ...

What they might be playing on the ... because, I mean, I was just thinking the other day, the reason I was laughing a second ago is I was driving in San Francisco, of course, and someone was watching one of the movies. And I looked over and it was a porn movie, and I was like, “Whoa. That’s too much.” I was like, “Wow.”

A little TMI there.

I was like, “Whoa, that’s a porn movie over there,” then I went, “San Francisco. It’s fine.” But it was really interesting, and I thought, “Well, I feel intrusive, and yet I’m appalled by these people,” and at the same ... It went on and on, but it was like, they would know that someone was doing that in the car, what they were at. Your car is your oasis, so why should there be sensors and AI telling you what to do and making decisions for you in these cars?

Well, your example is a good example that you can choose what kind of activity you want to do in your own car, because it’s your car. And if you’ve given us access to that data because we’re going to offer you some service, we have the responsibility to not share that with other people or use it for any other purpose other than what we’ve contracted to do with it.

Right, well, I ... It might come as a surprise, but some of these companies are actually sharing data that you didn’t intend for them to share.

I understand that, and that’s why it’s really important to us that when we say our aspiration is to be the most trusted company, that’s what we mean, that we’re not going to share if we’re not supposed to.

Talk about what you think about the ethics around AI. It seems to me that any data that they can suck up they do, in any way they can chop it up and use it. It’s open season. It seems like that. That’s the ethos, is that this is going to be good, and if you just sit quietly you’ll be able to benefit from this.

How do you deal with ... You have an office in Silicon Valley. How do you look at the broader tech industry, which is moving into your businesses? Uber is, Google is, Apple sort of is but isn’t anymore? I can’t tell. But they’re all moving in. Amazon probably is lurking somewhere around.

Yeah, yeah. Well, they definitely are.

They’re lurkers. Yeah.

They definitely are. There’s no doubt about that. I can’t speak for how they manage and treat their data, but we’re very careful about how we treat the data that we have access to.

Well, yeah, but you have to work with them.

We do.

So how do you think about that going forward? Because this is not going to be just a solution for Ford. It’s going to be with Google, with Apple, with the Amazon delivery, with ... You’re not going to get in the delivery business is my guess, for example.

Yeah, so our self-driving vehicle may be a self-driving technology for a fleet delivery service, so we might be in that business, from that point of view.

Yeah.

And we’ll have to work with them and agree that if you’re going to work with us, we have to agree how you’re going to treat the data of our customers. Because if it’s a customer that’s in our car, they’re our customer. And we’ll have to have an arrangement so that you color inside the lines.

What do you think about these issues around the ethics of AI, then? Who decides what gets done? Because this is being done by your company. It’s being done by Google. Everyone’s making these decisions that are private companies and in the interest of shareholders. Like you said, you want to make a business of it.

Right, right. So, I mean, it’s a big question, and I don’t think any one company has the answer to that, which is why we’re working with coalitions of companies. I think the whole mobility industry, the tech companies, the tier-one supply base, automakers, we’ve all got to have much more conversation about that topic. It’s a hard topic, and I’m not going to sit here and try to make up an answer, because I don’t have an answer, because it’s ...

Look, we’re heading into uncharted territory. No one’s built a robo-car before, and no one has deployed an autonomous vehicle at scale, or any scale, in a city where people are riding in it, and they have access to data, and they’re watching movies in the back of the car. This is new stuff, and so we’ve got to have the conversations about, well, where are the boundaries? What’s fair game and what’s not? And how do I exchange access to your data for something that you’re going to say is worth it? I think we’re going to have to go slowly, try some things out, test it out, and see, “Hey, how did that feel?” We’re going to have to measure that, and then build on it and learn from it.

All right. What is the —and then we’re going to do questions from the audience. What is the scariest thing that you’ve seen with the AI that you’ve been, “Oh, wait a minute. That’s not good”? And what’s the totally weirdest, and what’s the coolest?

So I think ... Let me start with the coolest, and actually the coolest might be the scariest, too. So we’re doing some research on something called GANs, G-A-N, which stands for Generative Adversarial Networks.

That’s scary, just the name.

Yeah, even the name is scary, but this is pretty freaky stuff. So using this technology, you can actually take a neural network, an AI algorithm, throw a bunch of data at it, and teach it what Kara Swisher looks like. And after it learns what Kara Swisher looks like, it could then project your image onto ... pick any random person who’s roughly about the same size as you. And then that person can start talking and can deliver a speech, and it will look and sound just like Kara Swisher. And so it’s got ...

Generate it onto this guy, right here?

Yeah, absolutely. This guy. He could be talking and ...

I know. That was the plot of Captain Marvel, but okay. All right.

Using this technology, you could create a digital movie that made it look like you were saying something that you weren’t.

Well, yeah. That’s deepfakes, too. Yeah, yeah, yeah, yeah.

Now, here’s the practical application, and this is why I think it’s cool.

All right. Where’s the cool part? Because it sounds horrible.

Here’s the cool part. The cool part is that using this technology, you can take scenarios of environments, say, city streets. Let’s say you want to do a bunch of autonomous vehicle testing in the city of Miami, like we really do. We go and take a bunch of video of cars running in a bunch of scenarios in the streets of Miami, in daylight on a nice, sunny, good-weather day, and we test vehicles against those images and those videos. Using this technique, I can now project a rainy day onto that same scenario. I can project a snowy day. I can do a foggy day. I can put new people in that environment. I can change the conditions of the road.
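To make the idea concrete, here is a toy, purely illustrative sketch of that augmentation pipeline. A trained GAN generator would learn the sunny-to-rainy and sunny-to-foggy mappings from data; the hand-written `add_fog` and `add_rain` transforms below are hypothetical stand-ins that just show how one sunny clip fans out into multiple test scenarios.

```python
import numpy as np

def add_fog(image, density=0.5):
    """Blend the scene toward a flat gray, as fog would."""
    gray = np.full_like(image, 0.7)
    return (1 - density) * image + density * gray

def add_rain(image, streaks=200, seed=0):
    """Overlay short bright vertical streaks as a crude stand-in for rain."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = image.shape[:2]
    for _ in range(streaks):
        x, y = rng.integers(0, w), rng.integers(0, h - 8)
        out[y:y + 8, x] = 0.9  # one streak
    return out

# One sunny clip (8 frames of 64x64 grayscale) becomes three test scenarios.
sunny = np.random.default_rng(1).random((8, 64, 64))
scenarios = {
    "sunny": sunny,
    "foggy": np.stack([add_fog(f) for f in sunny]),
    "rainy": np.stack([add_rain(f, seed=i) for i, f in enumerate(sunny)]),
}
```

The payoff is the same as he describes: every mile of real sunny footage multiplies into several weather variants for simulation testing.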

I think that’s the holodeck from Star Trek.

It’s kind of like the holodeck, exactly. It’s super cool technology, and it’s a way to amplify the ability to do simulation testing, which is why we’re doing research in that phase.

Because it has to have so many factors in it.

You can’t just test in sunny weather. You have to test in all kinds ... but who wants to go out and test in the snow and the rain, right?

Yeah, who wants to do that?

Right, so ...

Because it would be safer. And what’s the weirdest?

Wow. So I don’t know if it’s the weirdest, but something that I think is really promising: the AI that we’ve been talking about is taking AI and putting it in the car. But you don’t have to stop there. The AI and the sensors and the intelligence that goes into a self-driving car, well, it can be out in the world, too, right? Why just put LIDAR on the top of the roof of cars? What if you stuck LIDARs in every intersection that you wanted to drive in? All of a sudden every car could kind of be a self-driving car, if you could get the data those intersection sensors generate into that car.

So that’s something that we’re doing some sort of early-phase research on is how might you instrument the world so that self-driving could be democratized? We think that’s pretty cool, and it’s kind of weird because it flips the self-driving problem on its ear. And it says, “Well, you don’t have to just build a bunch of robo-cars. You could build sort of robo cities, too.” And that could make life better even if you’re not in a smart car.

Right, which would create that these cars would react to everything, but they’d have things in the cars that would react to it, right?

Absolutely. Yeah.

Right.

Yeah, they’d be able to say, “Oh, I just got this signal. I’m supposed to stop now, or I can turn left, or I have to take this path, not that path.” Oh, and by the way, if you’ve got an array of these kinds of sensors in a city and you’ve got vehicles that have been equipped to react to them, now you can begin to think about, how would you create a society where congestion begins to decrease?

Because congestion’s not going to get any better if you just put a bunch of robo-cars in a city. You’ve got to figure out how do you make them synchronize better, how do they behave differently, how do they optimize? If I drop this person off, what’s the next person I should pick up, and should I pick up that person eight blocks away because I’m going to get a better fare, or should I let somebody else pick that person up because that’s going to reduce congestion?

That’s an optimization problem. Turns out that’s a really hard optimization problem, because you’ve got lots of factors. You’ve got lots of pieces. You’ve got lots of potential paths, and there’s no hope of solving that optimization problem by putting all the potential states in a traditional computer and then crunching the numbers. It just won’t work. There are too many variables. There are too many scenarios.
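His point about combinatorial blowup is easy to demonstrate. A brute-force dispatcher that tries every vehicle-to-rider matching works for a handful of riders, then hits a factorial wall. This is an editorial sketch, not Ford’s algorithm; the coordinates and the Manhattan-distance cost are invented for the example.

```python
from itertools import permutations
from math import factorial

def best_assignment(vehicles, riders):
    """Exhaustively match each vehicle to one rider, minimizing total
    pickup distance. Cost of the search: n! orderings for n riders."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance
    best, best_cost = None, float("inf")
    for order in permutations(riders):
        cost = sum(dist(v, r) for v, r in zip(vehicles, order))
        if cost < best_cost:
            best, best_cost = list(zip(vehicles, order)), cost
    return best, best_cost

vehicles = [(0, 0), (5, 5), (9, 1)]
riders = [(1, 1), (6, 4), (8, 0)]
pairs, total = best_assignment(vehicles, riders)
# Three riders means 6 orderings; ten riders already means 3,628,800.
```

Three riders are trivial, but the search space is `n!`: a city’s worth of riders and vehicles puts exhaustive search permanently out of reach, which is exactly why he frames it as an optimization problem needing a different machine.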

But you know what will work? A quantum computer can solve that problem, seriously. And so we’re talking to the quantum computer companies about factoring that problem in quantum space, so this is the perfect application of quantum computers. So a lot of people say, “Quantum computers, ah, they’re just science projects.” Yes, today they’re science projects, but they’re really good at solving complex optimization problems, and we’re working on trying to factor that into that space so that we can apply a quantum computer to actually solve congestion. It’s a little far out there, but hey, you asked for the weirdest thing.
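For the curious: quantum annealers typically take such problems in QUBO form (quadratic unconstrained binary optimization), which is presumably the “factoring into quantum space” he mentions. Below is a toy classical brute force over a QUBO encoding of a two-pickup dispatch choice; the costs and penalty are invented, and a real annealer samples low-energy states of this same objective rather than enumerating them.

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy of binary vector x under QUBO matrix Q: sum_ij Q[i][j]*x_i*x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Classical baseline: try all 2^n bit vectors and keep the lowest energy."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Toy dispatch choice: pick exactly one of two pickups (penalty P enforces it),
# with pickup costs 3 and 1. Minimizing c0*x0 + c1*x1 + P*(x0 + x1 - 1)^2
# expands, up to a constant, to the QUBO below with P = 10.
Q = [[3 - 10, 2 * 10],
     [0, 1 - 10]]
print(brute_force_qubo(Q))  # picks the cheaper pickup: (0, 1)
```

The brute force is again exponential; the hope he describes is that annealing hardware explores that energy landscape far more efficiently.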

Good. That’s a good weird one.

That’s a weird one.

I thought you were going to say hovercraft, which is what Larry Page always says. But hovercrafts.

Well, I knew that he said that. That’s why I didn’t say it.

Yeah. Okay, good. Are you working on hovercrafts? Whatever.

We’re studying them, yes.

Why?

It’s my job to look over the horizon at all the weird, freaky stuff that might happen one day.

Your thoughts on a hovercraft? No, they’re really making hovercrafts in Silicon Valley.

Look, I mean, they’re ... It’s really not that wacky, right? Because batteries have gotten lighter. The fact that you can fly drones now, it’s just a big drone. And carbon fiber has gotten light enough and strong enough that you can actually make them so that they can fly for several hours, and if you could actually fly one of these things and put three or four people in it and go from San Jose to San Francisco in 15 minutes, the economics actually works out so that it’s cheaper than taking an Uber.

Yeah. And also vertical lift and takeoff vehicles.

Exactly, and it’s quiet. I mean, the reason people don’t do helicopters is because they’re noisy, right? They’re noisy and they don’t play nice with airspace. So, if you’ve got something that’s quiet and electric and can vertically take off from basically any of these underutilized regional airports, and it can fly from San Jose to San Francisco, and you could do the economics and make it work, I think people would pay for it. So, that’s why we’re studying it.

Anything else weird you’re doing? Sounds weird enough.

Well, so, that’s an example of one mode of many potential modes of transportation. We’re looking at a lot of different modes of transportation. We bought a scooter company. Most people don’t know that.

Which one do you have?

We bought Spin. And we like Spin...

It just showed up here.

Yeah. We like Spin a lot because they kind of took the same philosophy that we did for taking vehicles into a city. They took their scooters into a city but only after they talked to the city, which we thought was pretty polite. And so, Spin and Ford really have a common culture and we’re working with them to figure out how do they help us solve the last-mile problem?

Do you like the scooters? I love the scooters.

I love them. I absolutely love them.

Anything scooter.

They’re cool.

They caught me in a video, my boss did, actually, and put it up on the internets. And I wasn’t wearing a helmet. I usually do wear a helmet.

Oh, not good.

Most people do not wear helmets.

Yeah, well, wear a helmet.

I like the scooter.

We want you to wear a helmet.

All right, questions from the audience? There’s lots of them here. Let’s start right here and then we’ll go ...

Audience member: Hi, so, you mentioned a specific West Coast company that was in the news recently for some ...

He meant Tesla.

Ken Washington: Yeah, come on, all right. I was just being funny, it was Tesla.

Audience member: So, Teslas have been changing lanes without...

Elon’s not going to find it funny today on Twitter, but go ahead.

Audience member: So, people realized you could put stickers on the ground and make them change lanes when they shouldn’t, right? That kind of problem isn’t specific to their camera-feed stuff, right? You could do that with LIDAR. That’s more of an AI problem, where you can trick the systems with constructed edge cases that cause them to behave outside of spec, right? And this is a problem that you see also in Internet of Things devices, where they suddenly have a whole lot of attack surfaces that you can attack them from. And they’re really only as secure as the weakest link in the network. So, when you’re designing a car that uses AI and that networks with the home, for example, how do you deal with that?

Ken Washington: You deal with that by not having a single line of defense. Putting stickers on the road and doing other things can trick a path-planning algorithm if that algorithm has been trained on existing images that didn’t have the stickers, and it can also be tricked if you’re only using cameras that look for cues on the road. It’s much harder to trick if, in addition to cameras looking for cues on the road, you’re comparing the world to a prior map, you’re looking for signals from radar, you’re getting geolocation information from GPS, and you’re localizing the vehicle based on bounces off of other objects in 3-D space. That’s exactly the approach we take.

We don’t rely on any one or two or three sets of signals. We do multiple lines of defense. It’s never going to be perfect. But it’s going to be a lot better than just saying, “Oh, I’m looking at lines on the road,” or, “I’m just going to rely on either radar or camera.” You got to have at least three and in some cases four approaches.
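The layered-defense idea he describes can be sketched as a simple vote across independent channels. This is an editorial illustration, not Ford’s actual fusion stack; the channel names and the "stay"/"lane_change" verdicts are hypothetical.

```python
def fuse(detections):
    """Combine independent channel verdicts; act only when a strict
    majority of channels agree, otherwise fail safe."""
    votes = list(detections.values())
    majority = max(set(votes), key=votes.count)
    if votes.count(majority) * 2 > len(votes):
        return majority
    return "stay"  # no consensus: do nothing drastic

# A sticker attack fools the camera, but map, radar, and GPS disagree,
# so the fused verdict ignores the spoofed lane change.
channels = {"camera": "lane_change", "prior_map": "stay",
            "radar": "stay", "gps_localization": "stay"}
print(fuse(channels))  # prints "stay"
```

The point of the sketch is structural: an attacker now has to defeat several independent sensing modalities at once, not just the one reading paint on the road.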

Okay. Another question?

Miriam Vogel: Hi, Miriam Vogel, executive director of EqualAI, and I’d love to build on the last conversation. It sounds like you’ve given a lot of thought to the variations and the complexities, and I’m curious what you’ve done to make sure that your training sets are mindful of drivers and passengers and pedestrians who are not the prototypical coder.

Ken Washington: Yes, so a couple things. First, our algorithms for our self-driving system that Argo is building are not all machine learning-based. It’s a mix of machine learning that’s trained by diverse data sets in the real world and in simulation space, and rule-based algorithms that are based on rules of the road, like “this is a stop sign, this is a yield sign, you’re supposed to turn, do a yielded left, yielded right.” And so, it’s a combination of a deterministic and a learning-based machine learning algorithm.
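That hybrid of learned and rule-based components can be sketched as a learned suggestion capped by hard rules of the road. This is a purely illustrative sketch; `learned_speed`, the sign table, and the numbers are invented for the example, not Argo’s design.

```python
def learned_speed(features):
    """Stand-in for a learned model's speed suggestion, in m/s."""
    return 12.0 + 2.0 * features.get("traffic_flow", 0.0)

# Deterministic rules of the road: each sign imposes a hard speed cap.
RULES = {"stop_sign": 0.0, "yield_sign": 4.0}

def plan_speed(features, signs):
    """The rule-based layer always wins: take the minimum of the model's
    suggestion and every applicable rule cap."""
    suggestion = learned_speed(features)
    caps = [RULES[s] for s in signs if s in RULES]
    return min([suggestion] + caps)

print(plan_speed({"traffic_flow": 1.0}, []))             # model alone: 14.0
print(plan_speed({"traffic_flow": 1.0}, ["stop_sign"]))  # rule wins: 0.0
```

The design choice this illustrates is the one he names: the learned component can be wrong in novel conditions, but a stop sign is a stop sign, so the deterministic layer overrides it unconditionally.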

As far as the diversity of the data set, it all comes down to having test data from multiple cities. And we’re currently testing, as I said earlier, in five cities. And we started with a neighborhood that was ethnically diverse on purpose, for that reason. And went to Washington secondly, again, ethnically diverse, in Washington. And then, other populations in the other three cities. And we’re going to expand and go from there.

And then on top of that, we’re building on ... we’re hoping to leverage the value of using advanced technology like this kind of creepy GAN thing I talked about earlier to further diversify the simulation data that we use to test and validate our data.

Okay, right here.

Karen Friedman: Hi. Well this was totally fascinating. I’m somewhat of a luddite, so, all this stuff, both I find weird, scary and cool. My name’s Karen Friedman, I’m actually a consumer advocate. I work on pension issues. And I work with a lot of truck drivers in the Midwest who already are having their pensions cut because there’s not enough active workers paying into the pension funds. Can you talk a little bit about the impact on jobs? Because as I’m watching the horizon of these self-driving cars, I’m also watching all these people who will no longer have jobs. Taxicab drivers, now Lyft drivers, Uber drivers, truck drivers throughout the country. I’m sure you guys have thought about this. So, I just want your ... what do you think?

Ken Washington: Yeah, well, thanks for the question, Karen. It’s a very important question. And it’s not the first time that an impressive technology has displaced a subset of workers in a particular discipline.

The good news ... I don’t know what this particular form is going to take, but the good news is, history teaches us that every time that happens, the quality of the job that they move to and they get retrained to actually repurpose to, improves.

And so, I hope that this leads to the creation of new economies like assisting the truck ecosystems to do more work and create more value, just as one example.

You may not need as many truck drivers if you deploy autonomous truck solutions at scale. But you may need more workers working in the truck depots. You may need more people supporting the companies who are in the business of deploying these technologies to these companies that are integrating them into the trucks.

I just kind of made those up, but I believe that there will be some form of a new economy created around the promise of new business models that come from having autonomous trucks and autonomous vehicles and autonomous package delivery services.

But Karen, to be fair, I had this discussion with Marc Andreessen and he’s like, “Oh, farming to manufacturing was better for people because there were more jobs.” The fact of the matter is, there was enormous displacement and problems and social problems and fights and terrible ... There’s going to be a terrible toll on a certain group of people. There’s no question. And anybody who tells you different is ...

And in terms of some of the truck staff, they’re not going to just have drivers, they’re going to have robots loading these things. If anyone’s visited any of these, like an Amazon warehouse or anything, they’re going robotic. They have this thing called Kiva that’s amazing. They have all ... they’re going ... everyone’s going robotic and automation in a way that I think is another big trend and this is all governed by AI.

It’s really fascinating. And in fact, workers probably shouldn’t be putting stuff in boxes. That should be a robot. It’s a repetitive job, and it becomes more efficient. Same thing with ... right now, in San Francisco, we have burger flippers, burger companies where people make the burgers. It’s just that people who make burgers are cheaper than the robots right now. But eventually, they won’t be. That kind of thing.

So, it’s going to raise this issue: will they come up with new jobs? And who does that? The problem is, we don’t know. Is it Silicon Valley? Is it the government?

Right. I think the broader issue is what’s going to happen to the middle-class blue-collar worker over the long term? I don’t have the answer for that. But I think it’s a real issue.

Yeah, and years ago I did an interview with Travis Kalanick when he still was CEO of Uber, before he “left.” And he actually was honest about it. And I said, “What’s the problem you face and what is the thing you want to do?” And he actually spoke the truth, which Silicon Valley people tend not to do sometimes.

And he goes, “Well you know, Kara, the real problem is the drivers. Once we get rid of them, it’s a great business. But the drivers are the problem.” And he’s an awful human being, but he was correct. He was telling the truth. He was saying, once we remove the drivers in the equation, the business becomes economically fantastic.

And I was like ... and I was sitting there, going, “Thank you, thank you, thank you, thank you for saying that truthfully.” And the whole room was like, “Huh.” And all of Silicon Valley was like, “Don’t tell them that!” Like, “Don’t say that kind of thing.” But that’s really the truth, probably. Anyway, next question … Right there? And then, right there. We’ll answer just a few more.

Audience member: Hi, thanks so much for this conversation today. I wanted to revisit the issue of congestion. I think it’s a really important issue. Anyone who lives in the city, whether you drive or not, you know it’s an issue. New York just ...

They just passed something today.

Audience member: Just approved, right, congestion pricing in Lower Manhattan. It’s only gotten worse with Uber and Lyft, and it has numerous effects on residents, from public transportation taking longer, so people might take longer to get to work if they use public transportation, to emergency vehicles taking longer to get where they need to go. And that’s not even talking about pollution or climate change effects. So, I wanted to know more about how you think autonomous vehicles might address this problem. At least in the beginning, the introduction of more vehicles on the road, whether they’re autonomous or not, seems like it could make this worse in the short term. How do you think autonomous vehicles could help address this problem, and is there a way they could do so that would take more vehicles off the road?

Ken Washington: So, the fact that the vehicles are autonomous, in and of itself, will not make the problem better. What will is autonomous vehicles deployed smartly into the city, in a way that positions the vehicle after it drops off a person so as to minimize additional movement, in other words, optimizes the routes, so that it’s not a dumb autonomous vehicle in the sense of what ride it chooses to pick up.

So, it’s not just the AI for the driving task. There’s got to be AI in the routing task of which vehicles do I send where and how do I reposition them when they’re not busy moving people? That can reduce congestion, because if you didn’t do that with an AI algorithm, you would be doing it one ride at a time by human ... where the human optimizing it based on their sort of social contract.

But Uber and Lyft have these maps. They’ve been mapping this kind of stuff for a while, correct?

They do. And I think those are examples of how their algorithms are actually helping their drivers optimize their system. But an autonomous vehicle, in a longer-term scenario, could optimize across multiple fleets and not just individual fleets, if you could somehow figure out how to do a contract that way.

But you have no hope of making it better if you don’t think about the problem as an optimization problem and a routing problem.

And that’s what you’re doing with these scooters now, where they should be and where they should be put back once you’ve charged them.

Exactly right. And so, the scooters are a part of this solution as well, because you can offer a person a way that goes somewhere without getting in a car if it’s short enough. So, that’s part of the optimization solution, too.

Or you can do what has happened in Austin, where they’ve dropped 400,000 scooters on a very small city and it’s insane.

Oh, boy.

I love it. So, where’s the next one? Two more.

Trooper Sanders: Hi, Trooper Sanders. So, as you’re thinking about deploying fleets, you have to think about maximizing revenue. What are your thoughts or plans on people who can’t necessarily afford to pay for transport and dealing with the equity and access issues?

Ken Washington: Yeah, that’s a really important point. And so, we’re creating a living laboratory in downtown Detroit to help us figure out how to solve that problem. For those of you who don’t know, we bought the Michigan Central Station, which had unfortunately become sort of the iconic eyesore for the downfall of Detroit. But at one time, it was the grandest train station in the world. And so we bought that with the promise to revitalize it and bring it back to life and make it, again, the sort of centerpiece of the Detroit mobility ecosystem.

And around the train station we bought four other properties and we’re working with the city and we’re in the process now of talking to strategic partners to join us in sorting out that very problem. How do you solve the mobility problem in an inner city, in this case in our own backyard, in our hometown, in a way that gives transportation to the underserved, that revitalizes a community, that figures out how do you tap into the potential of making the streets smart?

So, this idea I talked about earlier in terms of putting sensors on the road and making the city smart, well, we’re going to start with experimenting in Corktown. And we might find a way to offer very affordable mobility to the underserved community there that we can then scale up to the world.

But this isn’t just altruism. This is part of a way for us to actually have a viable business as well, because if you can democratize mobility, you can make a good business. Henry Ford proved that over 100 years ago.

But there’s nothing wrong with altruism. Also, one of the issues is when you start ... all this stuff is being done, let’s underscore, by private companies, a lot of this new stuff. And it takes away from public transport. A lot of these innovations ... you should see the stuff they’re doing in China, it’s insane, around buses, around small cars, around rickshaws, around ... and it’s all private. And so, once private ... it’s like private prisons, private anything, you’re going to get a lot of problems. And so, it’ll take money away from public transportation things, as you know, which are so hard to ... we can complain all we want about our subways, but they are miracles, the way they work right now. And at the same time, they’re not adequate. And so, that’s one of the problems, is they’re all private companies going to be taking over transportation. And they sure aren’t going to go where the money isn’t. So, that’s one of the problems.

Yeah, well, one of the things that we’re going to explore is how can you make these things coexist. And in the case of Detroit, there isn’t a really healthy public transportation system, so we don’t really have that issue. But if you think about taking a solution like the Corktown solution that we’ll be developing over the next several years to a city that does have a healthy subway system, we would want to design the solution so that it amplified that and it could coexist with it and make it better and solve some of the pain points.

Because not everybody wants to take the subway but some people would. So, I think that’s in the category of work-to-do, but you’ve got to start somewhere.

Yeah. I think you know this, Uber’s trying to get into the subway systems to pay for subways with your Uber app. Which is great, but maybe not so much. You know what I mean? You really start to think about it. Okay, last question. Wherever, whoever has it. Right here.

Audience member: How much data sharing is happening between companies? And if there’s an incentive to kind of hoard this data to yourself, does that hold back the industry as a whole in terms of making it safer for consumers?

Ken Washington: Yeah.

That’s been controversial because Facebook was sharing...

Yeah, I mean, look, I’ll just be transparent. There’s basically no data sharing between the companies. Data’s the new oil. We all have our oil wells.

Exactly right.

That’s just the way it is. And that doesn’t mean it’s going to always be that way. At some point, this is going to get to the point where the technology itself is somewhat commoditized. The exception to the rule here is China. And you guys talked about this earlier on the prior discussion. In that case, the data’s all state-owned. And so they’ve got an unfair advantage just because of the way the government works in China.

I think there are going to be some pretty tough decisions and discussions that we’re going to have to have over the course of the next, say, decade, as AI evolves and grows up and begins to be truly adopted and matures in some of these sectors, like the autonomous vehicle sector, smart home, and digital assistants.

But right now, there’s no data sharing. I mean, Amazon’s not sharing their data pool with Google, and Ford’s not sharing our data pool with GM. Nobody’s sharing their data, which is why competitive collaborations are so tough.

And then in China, the fact of the matter is, the reason they’re innovating so much is because they have the ability.

Because they have that. That’s right.

The innovation going on there is amazing.

I just spent a week in China and I came back and my head was about to explode. I mean they’re just going at light speed, so I think we got to learn how to go at that speed, so it’s a powerful question.

Okay. Thank you so much and thank you, everybody. Thank you.
