A self-driving car operated by the transportation company Uber stops at a red light in Pittsburgh, Pennsylvania. In just a few years, self-driving robotaxis will share the roads with human drivers in many cities. (Gene J. Puskar/AP)

In just a few years, well-mannered self-driving robotaxis will share the roads with reckless, lawbreaking human drivers. That prospect is causing headaches for the people developing the robotaxis.

Self-driving cars are programmed to drive at the speed limit, which human drivers frequently exceed by 10 to 15 miles per hour. Self-driving cars wouldn't dare cross a double yellow line, but humans do it all the time.

And then there are those odd local traffic customs to which humans quickly adapt.

In Los Angeles and other places, for instance, there's the "California stop," where drivers roll through stop signs if no traffic is crossing. In southwestern Pennsylvania, courteous drivers practice the "Pittsburgh left": when a traffic light turns green, they let one oncoming car turn left in front of them. The same custom exists in Boston, Massachusetts.

“There’s an endless list of these cases where we as humans know the context, we know when to bend the rules and when to break the rules,” said Raj Rajkumar, a computer engineering professor at Carnegie Mellon University who leads the school’s self-driving car research.

Although driverless cars are likely to carry passengers or cargo in limited areas during the next three to five years, experts say it will take many years before robotaxis can coexist with human-piloted vehicles on most side streets, boulevards and freeways. That's because programmers have to figure out human behavior and local traffic customs. And teaching a car to use that knowledge will require massive amounts of data and computing power that remains very expensive.

“Driverless cars are very rule-based, and they don’t understand social graces,” said Missy Cummings, a Duke University professor who studies interactions between humans and self-driving cars.

Driving customs and road conditions are dramatically different across the globe. There are narrow, busy streets in European cities, and a near free-for-all exists in the giant traffic jams of Beijing, China. In India’s capital, New Delhi, luxury cars share poorly marked lanes with bicycles, scooters, trucks and even an occasional cow or elephant.

Then there is the problem of aggressive humans who make dangerous moves such as cutting cars off on freeways or turning left in front of oncoming traffic.

Already there have been cases of human drivers pulling into the path of cars such as Teslas, knowing they will stop because they’re equipped with automatic emergency braking.

"It's hard to program in human stupidity or someone who really tries to game the technology," said John Hanson, spokesman for Toyota's self-driving car unit.

Kathy Winter, vice president of automated driving solutions for Intel, is optimistic that the cars will be able to see and think like humans before 2030.

Cars with sensors designed to assist drivers are already gathering data about road signs, lane lines and human driver behavior. Winter hopes companies developing driverless systems and cars will contribute this information to a giant database.

Artificial intelligence developed by Intel and other companies eventually could access that data and make quick decisions similar to those of humans, Winter said.

Someday self-driving cars will have common sense programmed in, so that they will cross a double-yellow line when needed or speed up and find a gap to enter a freeway. Carnegie Mellon has taught its cars to handle the “Pittsburgh left” by waiting a full second or longer for an intersection to clear before proceeding at a green light.

Still, some people involved in public safety say computerized cars will never be able to think exactly like humans.

“You’ll never be able to make up a person’s ability to perceive what’s the right move at the time, I don’t think,” said New Jersey State Police Sergeant Ed Long.