The self-driving car industry is blowin’ it.

The definitions of self-driving—from ADAS to SAE automation levels to the inconsistent nomenclature used by the media—are a semantic disaster concealing a vast opportunity. There is no doubt increasing automation will make driving safer, but the safest possible implementation is one that maximizes human capabilities rather than treating them like a cancer.

Automakers are missing the biggest opportunity to profit from saving lives on what is likely to be a long, gentle ascent to Level 4. It requires tossing the insufficient logic behind L2/L3 semi-autonomy and probably even Advanced Driver Assistance Systems (ADAS), and deploying the same hardware and software being developed for L4 as a way to augment human driving.

Though augmented driving represents a clear break from the current crop of semi-autonomous systems, it’s not without precedent. Aircraft are being transformed by automation just as profoundly as cars, but because there is no impetus to move toward pilotless airliners, flight automation systems have been developed to enhance rather than replace human pilots. By following the example set by the commercial aviation sector, automakers can replace the risks inherent to semi-autonomy with the comprehensive assistance of augmented driving.

The Problem Is the Transition Gap

Virtually all criticism of semi-autonomy focuses on transitions, meaning the length and nature of the control handoff from the system to a human operator.

Transitions are not the problem.

The flaw in semi-autonomous driving is inherent: it temporarily substitutes for human skill rather than comprehensively assisting it. The more it improves, the more human skills decline. Even as it improves, every “failure” is attributed to the technology rather than to human ignorance of it. Its perceived limitations discourage rather than encourage adoption of any form of automation, including future iterations like L4, which decreasingly skilled drivers will need most.

Even if someone could “perfect” transitions, the overall safety of partial automation will always remain hostage to the atrophying skills of the humans in the loop. As Captain Chesley “Sully” Sullenberger stated in an interview about automation, driver’s education is “a national disgrace.” Human driving skills, especially in the United States, have never been great, and the recent spike in American road deaths suggests they are in decline well in advance of automation’s rise. If semi-autonomous systems continue to focus on replacing these skills rather than enhancing them, they will contribute to the very problem they are supposed to solve.

The “transition gap” between declining skills and rising automation will always exist, because untrained humans will always place more faith in technology (and in their own skills) than is warranted. This gap is inherent to semi-autonomy because such systems are binary: they are on, or they are off. That they are safer than the average human driver when engaged makes commercializing them a moral imperative, but since they can never improve as quickly as human skill declines, and since the only solution offered by current thinking is L4, they will remain a conceptual dead end, a snake of safety technology eating its own tail until L4 magically becomes ubiquitous at some future date.

That’s nowhere near the best we can do using all the technologies developed along the way.