Computational devices that are vulnerable to cheating are not limited to cars. Consider, for example, voting machines. Just a few months ago, the Virginia State Board of Elections finally decertified the use of a touch-screen voting machine called “AVS WinVote.” It turned out that the password was hard-wired to “admin” — a default password so common that it would be among the first three terms any hacker would try. There were no controls on changes that could be made to the database tallying the votes. If the software fraudulently altered election results, there would be virtually no way of detecting the fraud since everything, including the evidence of the tampering, could be erased.

If software is so smart, and the traces of its tampering so easy to erase, does this mean we have no hope of catching cheaters? Not at all. We simply need to adopt and apply well-known methods for testing computing devices.

First, smart objects must be tested “in the wild,” not just in the lab: under the conditions where they will actually be used, and with methods that don’t alert the device that it’s being tested. For cars, that means putting the emissions detector in the tailpipe of a running vehicle out on the highway. For voting machines that lack an auditable paper trail, it means “parallel testing”: randomly selecting some machines on Election Day and voting on them under observation to check their tallies. It is otherwise too easy for the voting machine software to behave perfectly well on every day of the year except, say, Nov. 8, 2016.
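The parallel-testing idea can be sketched in a few lines. This is a hypothetical illustration, not real election software: `cast_and_record` and `read_tally` stand in for the act of observers casting a known script of test votes and then reading back the machine’s reported totals.

```python
import random

def parallel_test(machine_ids, sample_size, cast_and_record, read_tally):
    """Pull a random sample of machines out of service on Election Day,
    cast a known script of test votes on each under observation, and flag
    any machine whose reported tally differs from what observers saw."""
    suspects = []
    for mid in random.sample(machine_ids, sample_size):
        # Observers record every test vote as it is cast.
        script = {"A": 7, "B": 5, "C": 3}
        cast_and_record(mid, script)
        if read_tally(mid) != script:
            suspects.append(mid)  # tally disagrees with the observed votes
    return suspects
```

The random selection matters: because no machine knows in advance whether it will be tested, software that cheats only on Election Day risks being caught in the act.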

Second, manufacturers must not be allowed to use copyright claims on their software to block research into their systems, as car companies and voting machine manufacturers have repeatedly tried to do. There are proprietary commercial interests at stake, but there are many ways to deal with this obstacle, including creating special commissions with full access to the code under regulatory supervision.

Third, we need to regulate software by its outputs. It’s simply too easy to slip a few lines of malicious code into a modern device, so the public can’t always know whether the device is working properly. But we can check its operation by creating auditable, hard-to-tamper-with logs of how the software is running, logs that regulators can inspect.
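One standard way to make a log hard to tamper with is a hash chain: each entry includes the hash of the entry before it, so editing or deleting any record breaks the chain. The sketch below, a minimal illustration rather than any particular regulator’s scheme, shows the idea with SHA-256.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, event):
    """Append an event whose hash covers the previous entry's hash,
    chaining every record to everything that came before it."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return log

def verify(log):
    """Recompute every hash from scratch; any altered, inserted, or
    deleted entry makes the chain fail to verify."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

In practice the device would also periodically publish the latest hash somewhere it cannot reach back and rewrite, so it cannot simply regenerate the whole chain after tampering.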

None of this is impossible. There is one industry in particular that employs many of these safeguards in an admirable fashion: slot machines in casinos. These machines in some ways present the perfect cheating scenario: they run software designed by the manufacturers, and players have no centralized database of winnings and losses against which to check whether their losses are excessive. Despite these temptations, in many jurisdictions these machines run some of the best-regulated software in the country. The machines are, of course, legally allowed to win slightly more often than they lose, ensuring a tidy profit for the casinos (and tax revenue for local governments) without cheating on the disclosed odds.

It’s a pity that casino software receives better scrutiny than the code running our voting machines, cars and many other vital objects, including medical devices and even our infrastructure. As computation spreads through society, our regulatory systems need to be funded appropriately and updated in their methods, so that keeping our air clean and our elections honest is not a worse gamble than a slot machine.