You may not think things like 4G or multicore processors have anything to do with cars, but you'd be wrong. They have everything to do with what driving will be like in the next five to ten years.

Ars Technica recently sat down with Kaveh Hushyar, a former senior VP at AT&T and current CEO of Telemetria, makers of the DashTop in-car compute appliance, to discuss the ultimate evolution of not just the car, but the automotive experience. The car of the future will be more like a mobile office, and traffic will be a moving mesh of real-time, cloud-connected data sensors. Each car will act as a node on a giant peer-to-peer network.

Read on for a look at how all of this will work.

Ars Technica: Tell me a bit about your background, for our audience.

Kaveh Hushyar: I graduated from Stanford with a master's in industrial engineering about 35 years ago, and since then I've worked at a number of companies inside and outside the U.S. The last 25 years I was at AT&T. I started at Bell Labs, where I looked into automating AT&T's global manufacturing operations, and then I moved into product development [where I worked on] a whole bunch of products, including PBX. Then I moved to the telecommunications side for almost half of my time at AT&T, and that's where I grew from developing the operational support systems for the company all the way to my last position as senior VP of network engineering and planning for AT&T's global network.

In that time, I had responsibility for building the overall network — the backbone and the quad-play network — inclusive of wireline, wireless, data, and video. So that was my overall responsibility at AT&T, and I also come from the old AT&T. SBC bought the old AT&T — "Ma Bell" — and it became the AT&T that it is today.

Ars Technica: So how did you get into the car space from that?

Hushyar: When I retired from AT&T I had intended to just invest in technology, rather than doing any specific kind of job. I did a lot of that [along with some consulting], until I came across this startup. When I looked at what they were doing, I got passionate about it.

Technology today causes a lot of problems when we're driving. I myself have three devices, and I'm juggling them all when I get into the car. It really raises concerns.

When I came across [DashTop], I became really passionate about it, and in the following sense: What does it take to transform the way people drive, making sure that the same environment they have at home in the office is enabled inside of the car while also ensuring the safety of drivers and passengers? This was the element that made me passionate about this.

When I looked at this area, I became convinced that the last piece of technology needed to make all of this work was 4G. And now we know where 4G is and where it's headed, so the technology is ready and it's a matter of making the inside of the car as exposed to the technology as every other aspect of our lives is.

Driving in 2015 and 2020

Ars Technica: Let me take a step back and ask you what it will mean to drive in the year 2020. What is the driving experience in 2020, after these developments have taken hold, and how is it different from the driving experience that I have now?

Hushyar: Let me answer the question in two ways, by talking about two milestones. One milestone is more like 2015 or 2016, and the other milestone is in 2020 and afterward.

I believe in 2020, the car will drive itself. The infrastructure will be in place, and that infrastructure will be very significant and hefty. But in that target environment, you and I don't have to be sitting behind the wheel. In that environment, everyone will be a passenger, and you want to have full connectivity with full access to any media, or any person anywhere via the best videoconferencing available. So you need a rich media experience in the car.

At the same time, there will be a significant amount of safety applications that will be running in the car, making sure that the car is fully protected and is communicating through the infrastructure to other cars. That would be the nature of how I see the driving experience transforming in 10 years plus.

Ford recently added voice activation of smartphone apps to its popular Sync system. Image: Ford

In the 2015 time frame, I see a significant presence of voice enablement technology inside the car. You and I still have to drive, but pretty much all of the applications that are available to us when we're sitting at home or in the office will be available to use through voice enablement. Voice technology will be much more advanced, so that you'll be sitting in the car and you'll be directing music, searching for any point of interest or anything else on the internet, or whatever you normally do at home or in the office. At the same time, there will be a variety of safety applications running in the car.

For example, a sleepy eye detection application will be watching your eyes, and if you're getting a little bit tired it will respond to you. There will be a series of applications for pointing out, for example, if there's an ice patch ahead of you, and on and on. So I would think that in five years, voice enablement will revolutionize the way you and I drive.

And in ten years, there's not going to be a driver. It's going to be a totally different environment.

Processing on the Client Side

Ars Technica: For the voice interface, is that going to be done on the client side, or is it going to be done on the server side like we see with Android, where the client just grabs the voice sample and uploads it to a server that does the recognition remotely?

Hushyar: To be honest, it should be done on the client side. That's the way it should be, and must be, regardless of the communication technology. You want to have as much of the basic infrastructure — and voice would be basic infrastructure — on the client side. However, having said that, there are going to be cases where parts of that processing will have to be done in the cloud and there is no other way. Even today, we're building our technology so that a good portion of it is running on the client, but there are pieces of it running in the cloud. But strategic parts of the technology will have to be client side.
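One way to read the local-first policy described above is a simple dispatch: run the recognizer on the client, and fall back to the cloud only when the local result is low-confidence or out of grammar. The sketch below is an illustration of that split, not Telemetria's implementation; both recognizers are stand-in stubs, and the confidence threshold is an invented number.

```python
# Local-first voice recognition with a cloud fallback (illustrative stubs).

def recognize_local(audio):
    # Placeholder: a real on-device engine would return (text, confidence).
    return ("play music", 0.92) if audio == b"cmd" else ("", 0.1)

def recognize_cloud(audio):
    # Placeholder: a remote service, slower but able to handle open dictation.
    return ("find the nearest gas station", 0.88)

def recognize(audio, threshold=0.75):
    """Prefer the client-side result; go to the cloud only when needed."""
    text, confidence = recognize_local(audio)
    if confidence >= threshold:
        return text, "client"
    return recognize_cloud(audio)[0], "cloud"

print(recognize(b"cmd"))      # → ('play music', 'client')
print(recognize(b"dictate"))  # → ('find the nearest gas station', 'cloud')
```

The design point is the one Hushyar makes: the strategic path stays on the client, and the cloud handles only what the client cannot.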

Ars Technica: For the processing that's going to happen on the client side, will that be Intel or ARM, or some mix?

Hushyar: From where I'm sitting right now, it doesn't matter whether it's ARM-based, or Intel, or whatever. I can take that, and through sound engineering turn it into a crash-proof, cost-effective product that can enable us to run the apps that are critical for us.

And to make another point: I feel very strongly that, when we talk about the smartphone and the environment you want to have in the car to enable the kind of experience we talked about, the smartphone has a long way to go. I'm not saying it's not possible, but it has a long way to go.

We selected Intel and the Atom, but I could've selected some other processors. One of the requirements that I have is the operating range. As long as the operating range is the right range, I see a lot of these processor companies that are very competitive in terms of overall device performance.

Ars Technica: To continue in that vein, you guys are using multicore processors from Intel, right?

Hushyar: Right now as we speak, it's single-core, but we're going to dual-core and then to multiprocessor with dual-core.

Ars Technica: And that's because you need the concurrency to handle all of the different real-time datastreams that you get out of the vehicle. Is that correct?

Hushyar: I have been talking to God knows how many people, and you are the first one who got the essence of it — that concurrency is at the heart of what we're trying to do. Anyone who's trying to do things in this area has to be very sensitive to concurrency issues.

Volvo successfully tested a semi-autonomous "road train" last month, a significant step toward the day when cars drive themselves.

Data Streams, Buses, and Multitasking

Ars Technica: Can you tell us about the different kinds of data streams that you'll get from the vehicle that you're going to be processing in parallel? I know that for myself — I'm not a car buff — I know that there are a lot of sensors in a modern car and that cars can throw off a lot of data, but I can't name but a few things, like tire pressure or whatever. So what kind of data do you get, and what do you want to do with it? How do you filter it and make it meaningful to a driver?

Hushyar: The multitasking environment is going to be dealing with a lot of data — independent and dependent data — that has to be processed in real time. But let's start with one piece at a time.

Since 1996, federal law has required cars to provide an onboard diagnostics interface, along with a set of sensory devices. At that time, the number of sensors was about 90, but now it's more in the range of 400. In any car, these sensors are managed through at least one or two major buses. So from the perspective of getting access to these devices, you need one or two streams of applications that are constantly probing the buses and accessing all those sensors. It's not that every one of the sensors needs something running on the processor to care for it; no, you're accessing the bus and polling the sensors to get the data. And then you can do whatever you want with that data: help manage the speed of the car, or read the temperature of the coolant, project and trend it, and see whether a control limit is going to be exceeded at some point in the future, whether you're going to have a problem with the water temperature in a few minutes or in a few hours.

I'm using these as examples, but there are 400 sensors in the car that you can access and the software will translate that into meaningful information that you can take action on. So there will be one cluster of applications that is just doing that.
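The coolant-temperature trending Hushyar describes can be sketched in a few lines: sample the sensor at a fixed interval, fit a linear trend, and project when the reading will cross a control limit. The readings and the 105 °C limit below are simulated assumptions for illustration; in a real system the samples would come off the diagnostics bus.

```python
# Trend a polled sensor reading and estimate time until a control limit.

def linear_trend(samples):
    """Least-squares slope and intercept for (time, value) samples."""
    n = len(samples)
    sum_t = sum(t for t, _ in samples)
    sum_v = sum(v for _, v in samples)
    sum_tt = sum(t * t for t, _ in samples)
    sum_tv = sum(t * v for t, v in samples)
    slope = (n * sum_tv - sum_t * sum_v) / (n * sum_tt - sum_t ** 2)
    intercept = (sum_v - slope * sum_t) / n
    return slope, intercept

def seconds_until_limit(samples, limit):
    """Project the trend forward; None if the limit is never reached."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None  # reading is flat or falling
    t_cross = (limit - intercept) / slope
    return max(0.0, t_cross - samples[-1][0])

# Simulated coolant readings: one sample every 10 s, rising 0.5 °C per sample.
readings = [(10 * i, 90.0 + 0.5 * i) for i in range(6)]
eta = seconds_until_limit(readings, limit=105.0)
print(f"Coolant will hit 105 °C in about {eta:.0f} s")  # → about 250 s
```

This is the "project and trend it" idea in miniature: raw sensor values become an actionable warning minutes before the limit is reached.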

But then, keep in mind that I mentioned about sleepy eye detection. You need an application that's watching your eyes through a webcam, and it's doing the right thing at the right time. Then you can add in another application to manage your communication, or maybe you're switching from navigating to music, or allowing your SMS to be played. Every one of these could be running concurrently in the car, and the processor has to have the capability to handle it.

There are applications that would send information to roadside assistance automatically if you need help. So there are a variety of applications that are going to be focusing on the driver and on safety.

Now what I haven't mentioned to you is that there are applications that will be playing music from the internet or your smartphone, or whatever. And then you could have people in the backseat watching different video-on-demand streams. These are also applications that are going to be running concurrently.
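The concurrency requirement running through this section can be sketched as a handful of independent tasks sharing one processor without blocking each other. The task bodies below are placeholders (a real system would talk to the CAN bus, a webcam, and a media decoder), and cooperative tasks on one event loop are just one of several ways to get this concurrency.

```python
# Several independent in-car tasks running concurrently on one event loop.
import asyncio

async def poll_sensors(log, cycles=3):
    for i in range(cycles):
        log.append(f"sensors:{i}")
        await asyncio.sleep(0.01)   # wait for the next bus poll

async def watch_driver(log, cycles=3):
    for i in range(cycles):
        log.append(f"eyes:{i}")
        await asyncio.sleep(0.015)  # webcam frame interval

async def stream_media(log, cycles=3):
    for i in range(cycles):
        log.append(f"media:{i}")
        await asyncio.sleep(0.005)  # next audio buffer

async def main():
    log = []
    # None of the three blocks the others; they interleave as they wait.
    await asyncio.gather(poll_sensors(log), watch_driver(log), stream_media(log))
    return log

events = asyncio.run(main())
print(events)
```

On real hardware this is where multiple cores pay off: safety tasks like driver monitoring can be pinned away from best-effort media tasks instead of merely interleaving with them.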

At the same time, there will be applications running in the cloud. The tracking of the vehicle will be cloud-based. Or, if the person is a boomer and is aging, they might subscribe to a cloud application that will be watching to see if they're going in a loop and maybe confused, and will redirect them in the right direction through the onboard navigation.

There are many, many applications that we're bringing to market and that we have on our roadmap when it comes to safety.

Aftermarket and Security

Ars Technica: The buses that you mentioned that you get the data feed from — are those standardized? I'm asking because I'm wondering what the aftermarket picture for this is. Will there be different systems that will be built into shipping cars, and is there going to be a robust aftermarket? Our audience has a lot of people who like to tweak and hack, and I can definitely see this being something that they'll get into.

Hushyar: Pretty much all of the sensory devices dictated by the government (anything that has to do with the exhaust, the heat of the engine, the air temperature, etc.) are managed through the CAN (controller area network) bus. Then there are other buses; for instance, higher-end cars like Mercedes have a multimedia bus, and pretty much all of the multimedia is loaded and managed through that bus. There's also the FlexRay bus, which is a more advanced version of the CAN bus, advanced in the sense of the speed with which you can access the sensors and extract data. Car companies are going to evolve in that direction over time. So these are the buses in the car that you get access to.
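For readers who want a concrete picture of what "reading the CAN bus" means: the bus carries fixed-size frames, each with an identifier, a data length, and up to eight data bytes. The sketch below parses the Linux SocketCAN wire format (`struct can_frame`) from raw bytes; the example frame is synthetic, and a real application would read these bytes from a socket bound to the bus.

```python
# Parse a raw SocketCAN frame: 32-bit ID, length byte, 3 pad bytes, 8 data bytes.
import struct

CAN_FRAME = struct.Struct("<IB3x8s")

def parse_can_frame(raw):
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    # Mask off the flag bits, keep only the bytes the length field covers.
    return {"id": can_id & 0x1FFFFFFF, "data": data[:dlc]}

# Synthetic frame: ID 0x7E8 (a common OBD-II response ID), 3 data bytes.
raw = CAN_FRAME.pack(0x7E8, 3, bytes([0x41, 0x05, 0x5A]))
frame = parse_can_frame(raw)
print(hex(frame["id"]), frame["data"].hex())  # → 0x7e8 41055a
```

The firewall point in the next answer maps directly onto this: reading frames like these is the benign case; writing them back onto the bus is what manufacturers lock down.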

Now, you made a very good point: Security is going to be a massive issue going forward. Of course, the car manufacturers have built a firewall in the cars so that you don't have access to alter or write into those sensors. Just imagine if someone hacked into the bus and got access to the airbag, for example. So there are firewalls that are built in not to allow something like that. But for reading information, we need to have a proper security mechanism in place.

Audi's autonomous TTS climbed to the top of Pikes Peak in September. Photo: Audi

Drive-by-Wire Infrastructure

Ars Technica: One of the things that I'm really interested in is drive-by-wire, and what we need at the infrastructure level to support it. What kinds of things need to be built out for this to happen?

Hushyar: We do have a contractual relationship with one of the major carriers in Europe, and by virtue of that we're getting very much into this area now. And I know that DOT is doing work in this area. I can tell you that when you compare the U.S. vs. Europe, I see Europe as much more advanced in terms of the progress they're making and the resources they're throwing at it.

But first of all, you need what I call a "hard infrastructure" in place. Think of a very robust wireless router that you place every so many feet or miles on the road, and the aim of that is to provide short-range communication, so that car-to-car communication doesn't have to go through the cellular network. Otherwise, we're going to be driving that network to its knees. So you need that kind of hard infrastructure in place.

Then on top of it, you need a soft infrastructure, which is going to be a hierarchical architecture where information will be collected and analyzed at the location where you're driving, and then maybe a mile or a mile and a half ahead of you there will be another agent that is collecting that kind of information — analyzing it, aggregating it, and making a determination as to what aspect of the information is worth being communicated to the rest of the cars. For example, if someone two miles ahead of you has a traction control problem, that has to be analyzed and properly communicated to the right cars that are approaching that environment. So it requires a whole bunch of processing in the cloud, which is all going to be directed and managed by a major service provider or a government agency.
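A toy version of that "soft infrastructure" makes the idea concrete: a roadside agent collects hazard reports, aggregates duplicates into one hazard, and forwards a warning only to the cars approaching it. Positions here are mile markers on a single road, and the two-mile lookback window is an invented threshold for illustration, not anything from the interview.

```python
# A roadside agent deciding which approaching cars should hear a warning.

def cars_to_warn(hazard_mile, cars, lookback=2.0):
    """Warn cars that are behind the hazard and within the lookback window."""
    return [
        car["id"]
        for car in cars
        if 0 < hazard_mile - car["mile"] <= lookback and car["speed_mph"] > 0
    ]

reports = [
    {"car": "A", "mile": 12.1, "event": "traction_loss"},
    {"car": "B", "mile": 12.2, "event": "traction_loss"},
]
# Two nearby reports aggregate into one hazard at their mean position.
hazard = sum(r["mile"] for r in reports) / len(reports)

traffic = [
    {"id": "C", "mile": 10.8, "speed_mph": 60},  # approaching: warn
    {"id": "D", "mile": 12.9, "speed_mph": 55},  # already past: skip
    {"id": "E", "mile": 7.0, "speed_mph": 65},   # too far back: skip
]
print(cars_to_warn(hazard, traffic))  # → ['C']
```

The filtering step is the whole point of the hierarchy: only the cars for which the hazard is actionable ever receive a message, so nothing has to fan out over the cellular network.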

You need a whole set of protocols and applications that are going to be running in the cloud, just to provide the necessary communication from car to car without touching the cellular network.

In that regard, there is a tremendous amount of work that's out there that no one has even scratched the surface of. All I have read so far is more in the area of research and trying to write white papers — I haven't seen it in prototype form yet.

Ars Technica: If all of the cars are essentially intelligent nodes in this peer-to-peer network that you're describing, what about the legacy vehicles, the ones that don't have the hardware and don't participate in this network but nonetheless share the road with the rest of the smart traffic?

Hushyar: This is where a company like mine comes into the picture. Our focus is on the aftermarket. We firmly believe that for you to benefit from this technology, you don't have to buy a new car. And even if you do buy a new car, it will have a long way to go relative to the product that we're going to be introducing into the market with our partner very soon.

So this is an area where there's a lot of room for expansion. And when it comes to the aftermarket, companies like ours have an excellent opportunity to penetrate this.

Ars Technica: Any final points you want to make?

Hushyar: I want to emphasize that we're not bringing a gadget solution to the car. There are plenty of them in the market; many companies are throwing resources at creating a gadget that has to be manually handled by the poor driver. Our focus is on transforming the driving experience, and on doing things fundamentally different inside the car. How do you relieve the driver of the chore that has been created by technology? So I want to highlight the fact that transformation of the driving experience requires a totally different focus — a total understanding of what is going on inside the car, and what the driver is going through with the technology that is there today.

This interview was conducted by Jon Stokes and originally published by Ars Technica on Feb. 3.
