[Photo: Members of the media film machinery inside the Tesla Gigafactory in Sparks, Nevada. Troy Harvey | Bloomberg | Getty Images]

Tesla's lawsuit against former employee Martin Tripp alleges that he hacked computer systems to steal intellectual property, not to harm drivers of the company's cars. But the idea that a malicious insider could successfully tamper with software used in the vehicles' battery testing process is more fodder for the worst-case scenarios lawmakers have raised over self-driving cars.

In May, the House Financial Services Committee discussed how autonomous vehicles could affect the insurance industry. It was the third congressional hearing on the safety of autonomous cars in the past year. A bill introduced in July 2017 would create a "Driving System Cybersecurity Advisory Council" within the Department of Transportation to set standards and controls for testing and deploying self-driving cars. It is one of four bills currently circulating in Congress that address the lack of federal standards regulating the security of the systems that make and operate self-driving cars.

"There are a number of people out there that are somewhat resistant to entrusting their lives with autonomous vehicles," said Rep. Sean Duffy (R-Wis.) at the most recent hearing.

It's not easy to hack a car. But it's much easier for insiders.

The incidents described in CEO Elon Musk's email to employees and in the company's lawsuit against the former employee are jarring because they show how much access insiders have to critical vehicle systems, and how difficult it can be to detect when someone alters code on the machines that test the cars.

Cybersecurity professionals have demonstrated how to hack into the infotainment systems of several vehicle brands over the years. These demonstrations have shown that, while it's fairly easy to break into the computer systems that control dashboard displays, getting deeper into the systems that actually run a vehicle — controlling its steering, acceleration and braking — is much harder. Those deeper computers typically aren't connected to the internet or reachable remotely, so an attacker would generally need physical access to the device. It's even less likely that outside attackers could reach the computers used in vehicle testing.

But insiders have far greater access. Employees may not only have physical access to the critical systems that run manufacturing or program car components; they may also know enough about those systems to write code that can cause meaningful damage to a vehicle.

If Tesla can't weed out a malicious insider, who can?