Comment:

You often hear stories of AI solving problems in very efficient ways that aren't the way they were supposed to be solved. One I heard recently involved a simulated plane landing that was supposed to minimize turbulence. The AI discovered fairly quickly that if it just crashed the plane as hard as it could, the measured "turbulence" was 0, so that was output as the best answer.

In the future, many kinks like that have been ironed out. They had to be, to allow general-purpose humanoid automatons to wander about unsupervised, let alone to interact with human genitalia. In a world of rules and technicalities, though, the ability to think laterally is a survival mechanism that will, inevitably, be selected for.

Fortunately, Zoa understands enough about the real world not to crash planes.

Whether a human can inadvertently order it to crash a plane is another matter.