Without a doubt, self-driving cars will change and save lives. Interest is high, money—and ink—pours in, and demos are everywhere. But as we’re starting to see with the Google Car, technical, philosophical, and business model questions stand between us and the third transportation revolution.

We’ll start with a hat tip to Google co-founder Larry Page. From multiple sources, we learn that Page has taken an all-or-nothing position on the company’s self-driving car development. Partial automation inevitably leads to a lower level of attention on the part of the driver as the car (mostly) pilots itself. Then, when an exception occurs, when the car is unable to deal with a situation not encoded in its algorithms, the driver must “wake up” and intervene, quickly and accurately. Relying on such instant re-awakening is unsafe. According to some, driver inattention while in autopilot mode is what caused the recent, fatal Tesla crash in Florida.

Instead of putting lives at risk—and jeopardizing the company’s reputation—Page wants a perfect self-driving car, one that would never surprise drivers or passengers. There is, of course, no such thing as perfection. Instead, we have to put numbers on what we’re willing to accept as perfect enough. For this we can turn to traffic-related fatality statistics compiled by the World Health Organization. I’ve extracted numbers for the US, UK, France, and Germany:

[Chart: WHO road-traffic fatality figures for the US, UK, France, and Germany. Credit: Jean-Louis Gassée]

UK drivers and roads seem reasonably safe, the US not so much. German autobahns, some sections without speed limits, look moderately safer than France’s radar-monitored autoroutes.

For self-driving cars, a “perfect enough” goal might be the reduction of fatality numbers by a factor of 100—two orders of magnitude safer than tired, angry, distracted—or drunk—human drivers. For the average US driver who logs 13,000 miles a year (~21,000 km), that translates into a 0.15-in-a-million chance of a fatality for the entire year. (Keep in mind the relevant joke about the consultant who drowned in a river that was only one-foot deep… on average.) The actual odds will vary depending on external factors—weather, traffic density, and the like—but the self-driving algorithms don’t drink or get combative.
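The arithmetic is easy to parameterize. In the sketch below, the fatality rate is an assumed placeholder (roughly the oft-cited US figure per distance driven), not the exact number from the WHO table; the result will shift depending on which denominator—per capita, per vehicle, or per mile—you pick.

```python
# Back-of-envelope annual fatality risk. The rate is an illustrative
# assumption; substitute the WHO figure of your choice.

def annual_fatality_risk(miles_per_year, deaths_per_100m_miles, improvement=1.0):
    """Probability of a road fatality in one year of driving, given a
    fatality rate per 100 million vehicle-miles and an improvement
    factor (100 = two orders of magnitude safer)."""
    return miles_per_year * deaths_per_100m_miles / 100e6 / improvement

# Average US driver (13,000 miles/year), a hypothetical rate of
# 1.1 deaths per 100M vehicle-miles, and the 100x "perfect enough" goal:
risk = annual_fatality_risk(13_000, 1.1, improvement=100)
print(f"{risk * 1e6:.2f} in a million")  # prints "1.43 in a million"
```

With a different choice of rate or denominator the answer moves accordingly, which is the point: “perfect enough” is a number we pick, not one nature hands us.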

The tech fantasy of my childhood described video telephones, automated houses from laundry rooms to kitchens, and flying cars. The fantasy is slowly becoming reality. We now have enthusiastic descriptions of a third transportation revolution in the not-very-distant future (2025) where “private car ownership will all-but end in major US cities.” (We should note that the author, John Zimmer, is the co-founder of struggling ride-sharing company Lyft.) We have announcements from Uber of self-driving cars tested in Pittsburgh, Pennsylvania, and a similar effort by nuTonomy in Singapore.

With all of the “can you top this?” PR that surrounds driving automation, Page’s stance is an admirable injection of thoughtfulness—a sobriety check. The visionary statements and self-driving demos (cue demo jokes) blithely omit the “mere matter of implementation.” What’s the plan, the timeline? What are we going to do with the 235 million cars and trucks on US roads, some expected to last 20 years or more? How will manufacturers negotiate the US Department of Transportation’s Federal Automated Vehicles Policy? Sometimes, the last 5% of a project takes 200% of the time and money.

Then we have another unanswered Google Car question: the path to money.

Personally, I think a company needs one really good idea every ten years, so for a company as rich as Google, a few billion dollars for a new breakthrough looked eminently affordable…for a while. But there is such a thing as too much, such as Google barges and many other puzzling pursuits that fall into the “because we can” category.

In May 2015, Ruth Porat left Morgan Stanley where she was executive VP and chief financial officer to become Alphabet’s CFO. The story is that her appointment had been heavily encouraged by investors who were concerned about Alphabet’s runaway “moonshot” projects. As expected, Porat set out to improve financial discipline and, for many projects, to demand a path to profitability. Highly speculative research, such as the Calico project’s quest to extend human life by 20 to 100 years, doesn’t entail huge financial outlays; but a grand and realistic endeavor such as developing the Google Self-Driving Car will require billions to reach its destination, and raises business model questions as a result.

The first big question: Licensing or product sales? Will Google ever build a car factory and sell Google Cars? Highly unlikely. The company’s forays into modestly priced hardware have done nothing for its bottom line or its reputation (and I realize they’ll try again next month with the rumored Pixel phone). Becoming a car maker, either directly or through a contract manufacturer such as Magna Steyr, Valmet, or a Chinese automaker such as BYD Auto, doesn’t feel consonant with the company’s culture.

The alternative is that Google licenses its self-driving platform, its software “stack,” to automakers. Not an easy sale. Automakers would tell Google they’ve seen this movie already. Twice. First when Microsoft made all the money while PC makers did all the work, and now in Part Deux, where most Android handset makers (with the notable exception of Samsung) lose money.

As a thought experiment, can you see Dieter Zetsche (a.k.a. Dr. Z), chairman of the century-old Daimler-Benz, licensing self-driving technology from Google, especially if the exchange of fluids comprises money and data? German automakers got together and bought Nokia’s mapping operation HERE precisely to avoid depending on Google. Speaking of maps, one unanswered technical question is the precision needed to achieve true self-driving. High-resolution, real-time radar or Lidar may require maps with one-inch precision or better, perhaps even in 3D; we don’t seem to know. Math nerds might also be interested in the recursive delights of SLAM (Simultaneous Localization and Mapping).
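For those math nerds, the recursion can be glimpsed in a toy one-dimensional example: a Kalman filter whose state jointly holds the robot’s position and a single landmark’s position, so every range measurement refines both at once—localization and mapping in the same update. Everything here (the 1-D world, the noise values) is an illustrative assumption, nothing like a production SLAM stack, which tracks thousands of landmarks in 3D.

```python
# Toy 1-D SLAM: a Kalman filter over the joint state [robot, landmark].
# All parameters are illustrative assumptions.

def predict(x, P, u, q):
    """Motion step: the robot moves by u; only its uncertainty grows."""
    x = [x[0] + u, x[1]]
    P = [[P[0][0] + q, P[0][1]],
         [P[1][0],     P[1][1]]]
    return x, P

def update(x, P, z, r):
    """Measurement step: z is the measured range (landmark - robot),
    i.e. H = [-1, 1]. One observation corrects both state entries."""
    y = z - (x[1] - x[0])                              # innovation
    S = P[0][0] - P[0][1] - P[1][0] + P[1][1] + r      # H P Ht + R
    K = [(-P[0][0] + P[0][1]) / S,                     # Kalman gain P Ht / S
         (-P[1][0] + P[1][1]) / S]
    x = [x[0] + K[0] * y, x[1] + K[1] * y]
    P = [[(1 + K[0]) * P[0][0] - K[0] * P[1][0],
          (1 + K[0]) * P[0][1] - K[0] * P[1][1]],
         [K[1] * P[0][0] + (1 - K[1]) * P[1][0],
          K[1] * P[0][1] + (1 - K[1]) * P[1][1]]]      # (I - K H) P
    return x, P

# Robot starts at a known 0; the landmark (truly at 10) is unknown.
x, P = [0.0, 0.0], [[0.01, 0.0], [0.0, 1000.0]]
for step in range(1, 6):            # drive 1 unit per step, noise-free
    x, P = predict(x, P, u=1.0, q=0.1)
    x, P = update(x, P, z=10.0 - step, r=0.5)
print(round(x[0], 2), round(x[1], 2))   # estimates converge near 5 and 10
```

The “recursive delight” is that the robot locates itself against a map it is simultaneously building: its position estimate depends on the landmark estimate, and vice versa, with the covariance matrix P carrying the correlation between the two.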

On the other hand, consider Tesla’s approach: It makes its own hardware/software platform and collects mapping and other data in what it calls “fleet learning.” To paraphrase Elon Musk, the idea is that if a car detects a previously unmapped feature—new roadwork, a fallen tree, a snowdrift—the entire fleet of Tesla cars will benefit from the information. Regarding the quest for “perfect enough,” Musk doesn’t buy Page’s argument that partial automation lowers driver vigilance, making the car more dangerous. Instead, Tesla’s CEO feels it would be immoral to withhold intermediary steps to total automation: “[O]f the over one million auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available.”

We won’t be bored, especially if we pay attention to what products actually do. In that vein, I’m anxious to test my wife’s Tesla with its new 8.0 software version.

This post originally appeared at Monday Note.