As this applies to automated cars and certain drivers, it may be the duty of manufacturers to figure out not only where a car’s driver should go, but also where he or she should not go. In some distant future, if the locations of most people can be pinpointed through GPS and other methods, a robot car could tell when a driver is about to violate a restraining order and refuse to travel there. If manufacturers have the data to connect those dots, they arguably should do so when it matters.

And the stakes aren’t only legal; other factors could matter to users of future wired cars. The owner of a shiny new robot car probably wouldn’t appreciate being deliberately driven, at advertisers’ behest, past fast-food restaurants if she’s on a diet, or by a cluster of bars if she’s a recovering alcoholic, or toward maternity stores if she hasn’t publicly revealed her pregnancy.

Perhaps drivers and passengers could instruct cars to avoid certain destinations. Putting aside the question of why we should be imposed upon like this at all, if the car drove to those verboten destinations anyway, that would seem wrong. Recall that in Isaac Asimov’s novels, the second law of robotics is to always obey human orders (where they don’t violate the first law: to not cause or allow harm to humans).

However, resisting humans is a major point of autonomous cars: we humans are often error-prone and reckless, while algorithms and unblinking sensors can physically drive better than we can in most if not all cases. An automated vehicle is designed precisely to disregard our orders when they are imminently risky. That is to say, refusing human orders is sometimes a feature, not a bug. It’s unclear, then, whether opting out of certain destinations (or opting in) is reason enough for cars to comply with those commands.

* * *

Apps themselves are becoming the new killer app. The latest Windows 8 machines mimic the app dashboards of Apple’s iOS and Android mobile phones, and we can expect online applications to be part of future cars, robotic or not. As existing apps on our mobile phones and computers already do, in-car apps will raise a host of legal and ethical dilemmas, privacy foremost among them.

The problem I discussed at the beginning was related to advertising, but advertising itself isn’t the problem. At their best, ads can be helpful video clips or images that educate you about products and solutions you truly might be interested in. At their worst, they’re annoyances that interrupt your concentration while you’re absorbed in an essay, video, podcast, or game. Ads can push you to vote one way, or to buy things you don’t need. They could make you into a worse person, or a better one.

So while advertising gets a lot of criticism, ads seem to be a necessary evil if consumers want to pay as little as possible. That’s neither here nor there in our discussion, though; the real problem is the decision to allow a car to be controlled by third parties, directing the route for an advertiser’s interests and not the car owner’s. Advertising inside a wired car isn’t just about showing you tantalizing stuff; it could be about physically driving you to that stuff. This paradigm shift would make ads even more invasive than critics today might imagine.

More seriously, manufacturers will also need to make hard life-and-death choices in programming autonomous cars, and these decisions should be considered thoughtfully and openly to ensure a responsible product that millions will buy, ride in, and possibly be injured by. That’s all the more reason to focus on ethics, not just on law, as we’re doing at the Center for Automotive Research at Stanford (CARS), in steering the future of transportation in the right direction.