Driverless cars remain on a slow but steady march toward widespread deployment. This week proved as much.

On Tuesday, Google spinoff Waymo became the first to obtain a driverless testing permit from the California Department of Motor Vehicles (DMV). A 40-strong fleet of fully autonomous Chrysler Pacifica minivans — overseen by remote operators — will drive day and night on city streets, rural roads, and highways around Mountain View, Sunnyvale, Los Altos, Los Altos Hills, and Palo Alto.

The day before, Volkswagen, Intel’s Mobileye division, and car distributor Champion Motors unveiled a plan to launch a commercial autonomous taxi service in Israel next year.

Baidu, not to be outdone, announced several collaborations with automakers on autonomous vehicle technologies on Wednesday at its Open World conference in Beijing. It’s embarking on a two-year project to test self-driving vehicles on Chinese roads, and it’s working with Volvo to produce self-driving electric cars for the Chinese market. Lastly, it said it intends to soon deploy Level 4 autonomous cars — vehicles that can operate without human intervention, but only under specific conditions and in specific locations, as defined by the Society of Automotive Engineers — manufactured by Chinese state-owned FAW Group.

It’s exciting — if expected — technological progress. But I’d be lying if I said the lack of accompanying regulation didn’t give me pause. I don’t count myself among the 60 percent of people who told the Brookings Institution they “weren’t inclined” to ride in self-driving cars, but it’s my belief that such technological leaps are — if unguided by principles — fraught with ethical peril.

Pekka Ala-Pietilä, a former Nokia president and tech entrepreneur who is overseeing the European Union’s effort to develop guiding AI principles, shares that sentiment.

“[We have to] make sure that we do regulate when it’s the right time,” he told Politico this week. “Ethics and competitiveness are intertwined, they’re dovetailed.”

Unfortunately, in the U.S., legislation remains stalled, at least at the Congressional level. More than a year ago, the House unanimously passed the SELF DRIVE Act, which would create a regulatory framework for autonomous vehicles. It has yet to be taken up by the Senate, which this summer tabled a separate bill, the AV START Act, that made its way through committee in November 2017.

Automakers aren’t the ones voicing opposition — on the contrary. GM CEO Mary Barra recently called on Congress to provide a path to deployment for OEMs and manufacturers, and in June, Waymo, Uber, Ford, and others formed the Partnership for Transportation Innovation and Opportunity (PTIO), which seeks to “foster awareness” of driverless vehicle technologies. Rather, regulators and advocacy groups are standing in the way. And to be fair, they’re not unjustified in doing so.

In March, Uber suspended testing of its autonomous Volvo XC90 fleet after one of its cars struck and killed a pedestrian in Tempe, Arizona. Separately, Tesla’s Autopilot driver-assistance system has been blamed for a number of fender benders, including one earlier this year in which a Tesla Model S collided with a parked Culver City fire truck. (Tesla stopped offering “full self-driving capability” on select new models in early October.)

David Friedman, former acting administrator of the National Highway Traffic Safety Administration (NHTSA) and vice president at Consumer Reports, said recently that Congress should direct the NHTSA to implement privacy protections, minimum performance standards, and accessibility rules for self-driving cars, trucks, SUVs, and crossovers.

And Senator Dianne Feinstein (D-CA) said bills such as the AV START Act threaten to loosen the rules on self-driving cars before researchers have had adequate time to study their impact. The RAND Corporation, for one, estimates that autonomous cars will have to rack up 11 billion miles before we’ll have reliable statistics on their safety.

“Until new safety standards are put in place, the interim framework must provide the same level of safety as current standards,” Feinstein and a handful of other senior Democratic Senators wrote in a letter to the Senate Commerce Committee in March. “Self-driving cars should be no more likely to crash than cars currently do, and should provide no less protection to occupants or pedestrians in the event of a crash.”

That’s not to suggest U.S. driverless vehicle policy is at a complete standstill.

In early October, the Department of Transportation, through the NHTSA, issued the third iteration of its voluntary guidelines on the development and deployment of driverless car technology: Automated Vehicles 3.0. In it, the agency proposes new safety standards “to accommodate automated vehicle technologies and the possibility of setting exceptions to certain standards … that are relevant only when human drivers are present.”

And in March, President Donald Trump signed into law a $1.3 trillion spending bill that earmarks $100 million for projects that “test the feasibility and safety” of autonomous cars.

But the changes aren’t coming fast enough. And with some analysts predicting as many as 10 million cars with some form of autonomy on the road by 2020, that’s dangerous.

Max Tegmark, a professor at the Massachusetts Institute of Technology and cofounder of the Future of Life Institute (FLI), said it best in an interview earlier this year:

“You begin to realize how amazing the opportunities are with AI if you do it right, and how much of a bummer it would be if we screw it up … Technology isn’t bad and technology isn’t good; technology is an amplifier of our ability to do stuff. And the more powerful it is, the more good we can do and the more bad we can do.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

P.S. Please enjoy this video of YOLOv3, a real-time object detection algorithm.
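(If you’re curious how detectors like YOLOv3 turn their raw output into clean bounding boxes: they emit many overlapping candidate boxes per object, then prune them with non-maximum suppression. Here’s a minimal sketch of that pruning step — an illustration of the general technique, not YOLOv3’s actual code:)

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it
    beyond the threshold, and repeat with the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two near-duplicate detections of one object, plus a distant one:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the duplicate at index 1 is suppressed
```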

From VB

Alexa gains Reminders API, calendar availability, and integration with Routines

Three productivity-centric features are heading to Alexa: calendar integration with Routines, calendar availability browsing, and a new Reminders API.

Read the full story

Spaceborne Computer brings supercomputing capabilities to ISS astronauts

HP’s Spaceborne Computer will soon supply supercomputing services to NASA astronauts aboard the International Space Station.

Read the full story

Alexa can now talk about the midterm elections

Amazon’s Alexa is teaming up with Ballotpedia and the Associated Press to share information about ballot measures, candidates, and results on Election Day.

Read the full story

Flex Logix unveils neural inferencing engine for AI in datacenters and on the edge

Flex Logix today debuted hardware for fast deployment of AI model inference in datacenters or on the edge, with support for TensorFlow or Caffe.

Read the full story

iRobot partners with Google to improve smart home devices with indoor maps

Google and iRobot will team up to improve smart home devices with the help of spatial mapping data collected by the latter’s robot vacuums.

Read the full story

Starship Technologies launches commercial package delivery service using autonomous robots

Starship Technologies has launched what it claims to be the world’s first commercial autonomous ground-based robotic package delivery service.

Read the full story

Amazon Fire TV Stick 4K review: The best streaming dongle for the money

Amazon’s new Fire TV Stick 4K is one of the best streaming devices on the market.

Read the full story

Autonomous drone startup Airobotics raises $30 million to accelerate U.S. expansion

Israel’s Airobotics today announced a $30 million round of funding as the company continues to build out its U.S. operations.

Read the full story

Beyond VB

Can artificial intelligence help stop religious violence?

Software that mimics human society is being tested to see if it can help prevent religious violence. (via BBC)

Read the full story

Machine-learning algorithm beats 20 lawyers in NDA legal analysis

AI learned from tens of thousands of legal documents. (via TechSpot)

Read the full story

Harvard just put more than 6 million court cases online to give legal AI a boost

After five years of work, nearly 6.5 million US court cases are now available to access for free online. (via MIT Technology Review)

Read the full story

Using artificial intelligence to detect written lies

There’s no foolproof way to know if someone’s verbally telling lies, but scientists have developed a tool that seems remarkably accurate at judging written falsehoods. Using machine learning and text analysis, they’ve been able to identify false robbery reports with such accuracy that the tool is now being rolled out to police stations across Spain. (via Quartz)

Read the full story