Tuesday, the California DMV held a workshop on how they will write regulations for the operation of robocars in California. They have already held meetings on testing, but the real meat will be in the rules for operation. It was in Sacramento, so I decided to just watch the video feed. (Sadly, remote participants got almost no opportunity to provide feedback to the workshop, so it looks like it's 5 hours of driving if you want to really be heard, at least in this context.)

The event was led by Brian Soublet, assistant chief counsel, and next to him was Bernard Soriano, the deputy director. I think Mr. Soublet did a very good job of understanding many of the issues and leading the discussion. I am also impressed at the efforts Mr. Soriano has made to engage the online community to participate. Because Sacramento is a trek for most interested parties, it means the room will be dominated by those paid to go, and online engagement is a good way to broaden the input received.

As I wrote in my article on advice to governments I believe the best course is to have a light hand today while the technology is still in flux. While it isn't easy to write regulations, it's harder to undo them. There are many problems to be solved, but we really should see first whether the engineers who are working day-in and day-out to solve them can do that job before asking policymakers to force a solution. It's not the role of the government to forbid theoretical risks in advance, but rather to correct demonstrated harms and demonstrated unacceptable risks once it's clear they can't be solved on the ground.

With that in mind, here's some commentary on matters that came up during the session.

How do the police pull over a car?

Well, the law already requires that vehicles pull over when told to by police, as well as pull to the right when any emergency vehicle is passing. With no further action, all car developers will work out ways to notice this -- microphones which know the sound of the sirens, cameras which can see the flashing lights.

Developers might ask for a way to make this problem easier. Perhaps a special sound the police car could make (by holding a smartphone up to their PA microphone for example.) Perhaps the police just reading the licence plate to dispatch and dispatch using an interface provided by the car vendor. Perhaps a radio protocol that can be loaded into an officer's phone. Or something else -- this is not yet the time to solve it.
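To make the radio-protocol idea concrete, here is a minimal sketch of how a pull-over message from an officer's phone might be authenticated by the car. Everything here is an illustrative assumption — the message format, the shared-key scheme, and the function names are invented, and a real system would need proper key management rather than a shared secret:

```python
import hashlib
import hmac
import json
import time

# Placeholder key; a real deployment would use per-agency credentials.
SHARED_KEY = b"demo-key-issued-to-law-enforcement"

def make_pullover_message(plate: str, key: bytes = SHARED_KEY) -> dict:
    """Officer's app builds a signed request naming the target plate."""
    body = {"cmd": "pull_over", "plate": plate, "ts": int(time.time())}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def car_should_comply(msg: dict, my_plate: str, key: bytes = SHARED_KEY,
                      max_age_s: int = 60) -> bool:
    """Car accepts only fresh, correctly signed messages addressed to it."""
    payload = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["sig"]):
        return False  # signature doesn't check out
    if msg["body"]["plate"] != my_plate:
        return False  # addressed to some other vehicle
    return time.time() - msg["body"]["ts"] <= max_age_s  # reject stale replays

msg = make_pullover_message("7ABC123")
print(car_should_comply(msg, "7ABC123"))  # True for a fresh, valid message
```

The point of the sketch is only that the technical problem is small; the hard part is agreeing on a standard, which is exactly why it's premature to mandate one.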

It should be noted that this should be an extremely unlikely event. The officer is not going to pull over the car to have a chat. Rather, they would only want the car to stop because it is driving in an unsafe manner and putting people at risk. This is not impossible, but teams will work so hard on testing their cars that the probability that a police officer would be the first to discover a bug which makes the car drive illegally is very, very low. In fact, not to diminish the police or represent the developers as perfect, but the odds are much greater that the officer is in error. Still, the ability should be there.

It is important to reiterate just how different the driving logic of software will be compared to the thinking of human drivers. Human drivers knowingly break the law all the time, and they get sloppy all the time. Software is not perfect, but if it breaks the law, it will be for very different reasons.

One way this could happen is if a person who has summoned their unmanned vehicle to come to them commands the vehicle remotely to speed. A person in a hurry might do that. And they'll get in a world of trouble if caught. It's much more likely that a vehicle carrying a person would be told to speed by its occupant -- in fact that's a good thing to do -- but in this case the occupant is responsible, and the occupant is already compelled by law to make the car pull over if police signal this.

Who gets the ticket?

This question gets asked a lot. I'm just going to come out and say what none of the car companies are willing to say. If a vehicle breaks the law because of a bug, or because it was programmed deliberately to break the law, the developers of the car should have responsibility. Nobody will say this because you never want to go on record saying you should have responsibility. It can only hurt you to say this, never help you. So we'll fight over this but I can't see it going any other way in the long run.

I say in the long run, because in the early days, the early adopters, keen to get robocars, might be quite willing to sign a contract taking all responsibility onto them. They will do this fully informed -- they are willing to take the risk to be early adopters. This even makes sense -- if you want to be the first to try an experimental technology, you do bear responsibility for that decision, even if the cause of the particular bug that caused trouble had nothing to do with you.

This works for a while, but once you move away from early adopters, the ordinary public will not accept a vehicle that, if it makes a mistake, leaves them on the hook for liability, fines, and demerit points, or, in extremely rare cases, criminal charges.

So the market will sort this out. Let the early adopters sign a contract taking responsibility. But as the market matures, that will fade away.

Once again, a vehicle doing something for which it would get a ticket should again be an extremely unlikely result. The developers would have tested the vehicle extensively to fix any such issues. If their own testing doesn't find it, users of vehicles will notice it and report back to the vendor. It will be very rare for the police to witness such a bug for the first time, but of course not impossible.

How does a vehicle "come to a stop"?

California's statute requires the regulations to govern times when a vehicle must come to a safe stop. And again, car developers will work to make this happen, even if components of their systems fail. Fault tolerance is frequently discussed among developers; it is not something they are unaware of. The question came up of "what if the car fails in a tunnel, with no shoulder?" Chances are, the car's map knows where the shoulders are, and so it will do its best not to stop in the tunnel. But if it has to, it has to. If regulation is needed here, it should be road-specific. Let the state say, as it already does on road signs, if there are specific regulations for specific roads. Better still, it could say so in a database of road-specific regulations.
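A database of road-specific regulations could be quite simple. Here is a hedged sketch of what such a lookup might look like; the schema, segment IDs, and rule names are all invented for illustration:

```python
# Hypothetical machine-readable rules for specific road segments.
# Only segments that differ from the default need an entry.
ROAD_RULES = {
    "I-80:tunnel-km12": {"safe_stop_allowed": False, "reason": "no shoulder"},
    "CA-1:km88":        {"safe_stop_allowed": True, "shoulder_side": "right"},
}

DEFAULT_RULES = {"safe_stop_allowed": True}

def rules_for_segment(segment_id: str) -> dict:
    """Effective rules for a segment, falling back to statewide defaults."""
    merged = dict(DEFAULT_RULES)
    merged.update(ROAD_RULES.get(segment_id, {}))
    return merged

def pick_stop_segment(route: list) -> str:
    """Choose the first upcoming segment where a safe stop is permitted."""
    for seg in route:
        if rules_for_segment(seg)["safe_stop_allowed"]:
            return seg
    return None  # no safe-stop segment ahead; stop in place as last resort

print(pick_stop_segment(["I-80:tunnel-km12", "CA-1:km88"]))  # CA-1:km88
```

A car with this data would prefer to limp past the tunnel before stopping, which is what you'd want it to do anyway.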

Nobody wants to stop in a dangerous place or to cause a big traffic jam, except perhaps New Jersey governors.

Driver's test for cars and operators

The topic came up of having a driver's test for the cars. This would mean a developer would show up at the DMV and a tester would put the car through its paces, as new drivers are tested. This is not, on the surface, a bad idea, but it has a few big problems:

Cars need not handle all roads and conditions in order to enter service on limited roads and conditions, so you might need a different test for each car. If the car does not do streets over 40mph, for example, that's perfectly fine; it can just refuse them. Human drivers can't do that.

You don't want to have to do a new test with every new software revision.

The car might not even handle any streets around the DMV, so testers would need to come to where the car operates. For example, a shuttle that drives around the Google campus would not be able to do anything else, and would need a test just for that situation.
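The "different test for each car" point above can be made concrete: each vehicle could declare its operating envelope and simply refuse routes outside it. This is a hedged sketch under invented names and limits — no real certification scheme works this way yet:

```python
from dataclasses import dataclass

@dataclass
class OperatingEnvelope:
    """Hypothetical declaration of where a vehicle may operate."""
    max_speed_limit_mph: int   # refuse roads posted above this
    allowed_road_types: set    # e.g. {"campus", "surface"}

def route_supported(envelope: OperatingEnvelope, route: list) -> bool:
    """Route is a list of (road_type, posted_limit_mph) segments.

    The vehicle accepts the route only if every segment falls
    inside its declared envelope.
    """
    return all(
        road_type in envelope.allowed_road_types
        and limit <= envelope.max_speed_limit_mph
        for road_type, limit in route
    )

# A campus shuttle that only handles private campus roads up to 25 mph.
campus_shuttle = OperatingEnvelope(25, {"campus"})
print(route_supported(campus_shuttle, [("campus", 15)]))   # True
print(route_supported(campus_shuttle, [("surface", 40)]))  # False
```

Any DMV test would then be a test of the car against its own declared envelope, not against the full driving task — which is why a single standard road test doesn't fit.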

More problematic is the idea of a test for operators of the vehicles. Nevada's law requires you to get an endorsement, but that's just paperwork: you send in $5 and sign to say you understand some basic rules.

Tests for operators present many problems. They would be different for every car, for one thing, until standardization arises a decade down the road. Cars might even differ after a software upgrade. In that case the car will put operators through some training to be sure they know about the differences, but a new test at the DMV is another matter entirely.

Secondly, there is the problem of people who come from out of state. You can't be required to take a new driving test just to pick up a rental car at the airport, or to have it pick you up. This is not just impractical; it's probably not even legal. Perversely, for example, while a Nevada resident has to have the endorsement on their licence to operate a robocar in Nevada, I, with my California licence, do not!

User privacy

John Simpson from Consumer Watchdog expressed a lot of worry over the car allowing the maker (and Google most of all) to track your movements. He wants to be sure people can turn off any reporting of their travels back to Google. While Google has a pretty good history of providing opt-out on location tracking -- they do a lot of location tracking via Android -- it is not the owners of cars who need to be so worried.

In my view, and that of many others, taxi service is the real future of vehicles, and taxi clients are not owners and will not be able to configure the car's privacy settings unless there's a lot of market pressure to do so. Even if they do, and we work out a means of anonymous payment for taxi services, or the erasure of logs, the trips themselves need to be logged to manage the cars. And you can't really disconnect your identity from the logs of a cab that took an anonymous passenger from your house to your office in the morning. It's very difficult.

The law does demand that cars record all data for 30 seconds before an accident. The reason for this is clear, but owners, in this case, might have the right to not have their technology be required to betray them. There are black box recorders in many cars these days that do this, but people have the right to disable them. This only applies in autonomous mode, however.

Markings on the car

Some people think robocars should have some light or indicator required by law to show they are in self-driving mode. I think that's a very poor idea. The vehicle is either safe on the road or it isn't, and there is nothing that other drivers should do differently around it if it's safe. I could see this for testing (like a student driver sign) but that would actually interfere with testing, since the goal of testing is to see how the car performs in real world situations, not ones where people are scared of it.

It's a moot question today -- cars with LIDARs on them are quite obvious -- and I suspect they will always be pretty obvious in the future, even if the sensors get smaller.

Slow Operations

A new issue I'm going to start raising is one of slow operations. A simple reality is that the easiest path to safety in these early phases is just to go more slowly. It's not a perfect path, but there are vehicles like the Navia that one is comfortable seeing operate at 20 km/h but not at 40 km/h.

While there has been much talk of whether the cars might exceed the speed limit (they definitely should if there is an occupant in them commanding this) I have not seen much talk about minimum speeds. We don't want to have super-slow vehicles blocking traffic over the long term, but we might want to decide to legally tolerate it in the introductory phase of this technology, to speed up the benefits that come with time.

Owned vehicles vs. taxis

The hearings were almost entirely about cars that are sold to and operated by individual owners. That's what the car companies all imagine, but another school of thought suggests taxi service is the most interesting market. I touched on the privacy issues there, but we're not talking about this nearly as much as we should.

Taxis need to be unmanned to self-deliver, though they can go slowly on more limited roads to do that. The desire for taxi service pushes unmanned operation up the agenda.

The DMV does license taxis and taxi drivers, but 99% of what they do revolves around ordinary car sales to driver-owners, and licensing those folks. That's going to change.