Tesla Autopilot for Autonomous Driving is like a Submarine for a Cave Rescue
Michael DeKort
Feb 1

I usually stick to technical points where engineering and safety are involved. However, when the root cause involves a person’s ethical and moral fortitude, intentions, and conduct, I feel the subject matter must include personal criticism. This brings me to Elon Musk, a man I used to think highly of until life tested him and he failed over and over, often harming or killing people as a result. But more on that in a bit.

First, my analogy to the Thai cave rescue. Elon Musk sent a mini submarine to help divers rescue the boys trapped in the cave a couple of years ago. As soon as I heard this, having seen the cave video with its narrow, winding passageways, I knew it would not work. The submarine was simply too stiff, something everyone involved realized right away as well, including Vern Unsworth, who helped find the boys. (That turned into the whole “pedo guy” debacle I assume most people have read about.)

It was obvious that something flexible was needed to get around the bends. This event made me think about Elon’s motivation and competence. It seems to me, going back to PayPal, that while he is intelligent, he does not work hard enough to be smart. I believe that is because his ego and extremely fragile sense of self-worth and self-confidence trump his intellect (pun intended). Elon leaps to show people how smart he is before he spends much time vetting the idea. Beyond the cave submarine, his tunnels, the SpaceX barge landings, and his approach to building “Autopilot” are examples.

Regarding the tunnels: compare the grandiose statements to the actual tunnels. They went from huge, safe, and sporting extremely fast pods to small, dangerous (due to air-exchange and escape issues), and fitting only his cars.

As for the SpaceX rocket recovery and barge landings: recovering the rockets and flying them back to Earth is a great idea. But landing on a barge in the middle of the water that has no self-stabilization? Boats have had that capability for some time. It got so bad in one case that a rocket slid off into the ocean. (Yes, I know lots of other people are involved and it is not all Elon. But you live by the sword and you die by it. And I would bet some of his people mentioned these things to him and he dismissed them.)

And finally, Autopilot. He, like pretty much every autonomous vehicle maker, is using public shadow and safety driving, as well as deep learning, for most of the development and testing, relying on human guinea pigs to literally sacrifice their own lives so Elon can get the data he needs to train his neural networks. Those approaches, in addition to using inadequate gaming-level rather than DoD-level simulation, make the whole effort untenable and reckless. It is made far worse by Elon’s assumption that he did not need LiDAR to properly detect and range objects, relying instead on crude radar and non-stereo cameras. That has proven to be a grossly negligent debacle, with up to seven people dead so far. As this has gone on for years, Elon has escalated his lies about the capability of his “full self-driving” and its being “feature ready”. It is clear he knows this effort is fatally flawed but cannot bring himself to admit it and fix it. (More on my issues with the development and testing approach, and on how to do this right, below.)

This brings me back to Elon’s fatal character flaws leading to the injury and death of people. While I do not think he is a sociopath, I believe his need to protect his wounded inner child means that doing the right ethical and moral thing happens only when it does not conflict with his hair-trigger self-defense mechanism. His excessively fragile self-confidence and damaged ego result in a weak moral and ethical core that often betrays his intellect. He would much rather be thought of as cool, smart, and doing the right thing than actually do the right thing and be smart. While he has admitted he was wrong in the past, such as stating he over-relied on robots to build his cars, doing so is extremely difficult for him. In the case of “Autopilot” he would rather people continue to be harmed or die than make the mea culpa. (And I am sure part of this is that it has happened so often, with him not just lying more but tripling down, that he realizes he is facing not just civil suits but probably jail time for reckless endangerment, manslaughter, or worse.) This, along with his vast wealth and huge, cultish fan club, results in a spoiled, petulant, wounded man-child with far too much power, who has no problem using humans, including children, as props, toys, and needless guinea pigs to satisfy his selfish needs. Soon this will result in his downfall. “Autopilot” is his nemesis, because no one can make it work right using the approach I described above. Utter failure is inevitable (not unlike Theranos). With over 750,000 vehicles with “Autopilot” on public roads, the first of many avoidable deaths of a child or a family will happen soon. It has not happened so far purely due to luck. Intention matters when it comes to the safety of others, and Elon very clearly has no intention of reversing course here and every intention of sacrificing as many people as he needs to perpetuate his self-generated myth.
The king is not just starkly naked but incompetent, grossly negligent and a net negative to the human race.

More in my articles here:

Proposal for Successfully Creating an Autonomous Ground or Air Vehicle

· https://medium.com/@imispgh/proposal-for-successfully-creating-an-autonomous-ground-or-air-vehicle-539bb10967b1

Tesla “Autopilot” has killed 3 more people in past month — It will get far worse from here

· https://medium.com/@imispgh/tesla-autopilot-has-killed-3-more-people-in-past-month-it-will-get-far-worse-from-here-dc32f42ae47e

Tesla hits Police Car — How much writing on the wall does NHTSA need?

· https://medium.com/@imispgh/tesla-hits-police-car-how-much-writing-on-the-wall-does-nhtsa-need-8e81e9ab3b9

Autonomous Vehicles Need to Have Accidents to Develop this Technology

Using the Real World is better than Proper Simulation for AV Development — NONSENSE

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

How NHTSA and the NTSB can save themselves and the Driverless Vehicle Industry

· https://medium.com/@imispgh/how-nhtsa-and-the-ntsb-can-save-themselves-and-the-driverless-vehicle-industry-8c6febe0b8ef

NHTSA saved children from going to school in autonomous shuttles and leaves them in danger everywhere else

· https://medium.com/@imispgh/nhtsa-saved-children-from-going-to-school-in-autonomous-shuttles-and-leaves-them-in-danger-4d77e0db731

The Hype of Geofencing for Autonomous Vehicles

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager at Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving SAE Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures (issues Unity will confirm; we are now working together, and we are also working with UAV companies). If not remedied, these issues will lead to false confidence and to performance differences between what the plan believes will happen and what actually happens. If you would like to see a demo or discuss this further, please let me know.