San Francisco

As a former software engineer, I laughed when I read what the Securities and Exchange Commission might be considering in response to the debacle of Knight Capital’s runaway computerized stock trades: forcing companies to fully test their computer systems before deploying coding changes.

That policy may sound sensible, but if you know anything about computers, it is funny on several counts.

First, it is impossible to fully test any computer system. To think otherwise is to misunderstand what constitutes such a system. It is not a single body of code created entirely by one company. Rather, it is a collection of “modules” plugged into one another. Software modules are purchased from multiple vendors; the programs are proprietary; a purchaser (like Knight Capital) cannot see this code. Each piece of hardware also has its own embedded, inaccessible programming. The resulting system is a tangle of black boxes wired together, communicating through dimly explained “interfaces.” A programmer on one side of an interface can only hope that the programmer on the other side has gotten it right.
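
To make the point concrete, here is a toy sketch in Python. The vendor modules, field names and numbers are entirely invented, not anything Knight Capital actually ran; the sketch only shows how two “working” modules can disagree across an interface.

    # Two hypothetical modules from different vendors, wired together.
    # Neither vendor can read the other's source; each trusts only the
    # interface documentation -- and the documentation disagrees on units.

    def vendor_a_publish_order(symbol, quantity, price_dollars):
        # Vendor A's documentation: "price" is in dollars.
        return {"symbol": symbol, "qty": quantity, "price": price_dollars}

    def vendor_b_execute_order(order):
        # Vendor B's documentation: "price" is in cents.
        # Each module passes its own tests; the bug lives between them.
        dollars = order["price"] / 100
        return f"BUY {order['qty']} {order['symbol']} @ ${dollars:.2f}"

    order = vendor_a_publish_order("XYZ", 100, 10.50)
    print(vendor_b_execute_order(order))  # BUY 100 XYZ @ $0.10 -- silently wrong

Each function, tested in isolation, does exactly what its own documentation promises; only the combined system is wrong.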

Second, there is no such thing as a body of code without bugs. You can test assiduously: first the programmers test, then the quality-assurance engineers; finally, you run the old and new systems in parallel and monitor the results. But no matter. There is always one more bug. Society may want to put its trust in computers, but it should know the facts: a bug, fix it. Another bug, fix it. The “fix” itself may introduce a new bug. And so on.
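
The fix-begets-bug cycle is just as easy to demonstrate. In another toy sketch, with invented function names, version 1 of a routine has an off-by-one error; version 2 repairs it, and a different bug is already waiting.

    # Version 1: meant to return the last n trades; off by one.
    def last_trades_v1(trades, n):
        return trades[-n + 1:]          # bug: always drops one trade

    # Version 2: the "fix." Correct for every case the tests covered,
    # but when n == 0, trades[-0:] is the WHOLE list, not an empty one.
    def last_trades_v2(trades, n):
        return trades[-n:]              # the next bug, already waiting

    trades = [101.2, 101.3, 101.1, 101.4]
    print(last_trades_v2(trades, 2))    # [101.1, 101.4] -- the fix works
    print(last_trades_v2(trades, 0))    # prints the whole list; expected []

A truly correct version needs an explicit guard for the n == 0 case, and that guard, too, would need its own test.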