Most people find it much easier to accept approval than to take the blame. It turns out that we don’t always weasel out of things deliberately – it’s just what human beings do.

This revelation comes from a study published this month by neuroscientists at University College London. Volunteers pressed a button that triggered a sound – a cheer, a note of disgust or something neutral – and then estimated the time that had elapsed between pressing the button and hearing the sound.

Though the elapsed time was always the same, the volunteers who heard applause underestimated it, while those who got a negative reaction grossly overestimated it.

Patrick Haggard, who led the research, interprets this distortion as showing that people feel more “agency” when things go right: they see a direct connection between their action and a positive result but unconsciously distance themselves from things that go wrong. When children and politicians say, “It wasn’t me,” they might not be lying: that could be their perception.

It is an interesting result to apply to people who put science and technology to work. Take the RoboRoach. From November, kids across the US will be able to buy a kit that allows them to feed a steering signal from a smartphone directly into a cockroach’s brain – creating, in effect, a remote-controlled insect.

The inventors seem not to have any ethical qualms about the idea. Rather, they argue that it is a “great way to learn about neuro-technology”. It is certainly a good way to explore how scientists and engineers filter their sense of responsibility. At best, the RoboRoach encourages the oversimplification of neuroscience. The message is that you can make an electronic incursion into brain circuits and take control of actions. In the US, a few neuroscientists are already testifying in court that an image of a small region of the brain filling with blood can be interpreted to mean that an individual wasn’t responsible for a criminal action. If RoboRoach does create a new generation of neuroscientists, we really are in trouble.

There are deeper issues here. The technology for RoboRoach grew out of projects to co-opt insects as mobile sensor units. Researchers have already performed neurosurgery on beetles, grafting in electronics that make them take off and fly to a specific location. Put a camera, a microphone or a temperature sensor on their back and you have a new set of eyes and ears. It’s a wonderful idea, say its developers: cyborg beetles could help us find people trapped in collapsed buildings after earthquakes.

Similarly wonderful – superficially, at least – is the Robo Raven, developed at the University of Maryland. It is a rather beautiful drone that flaps its wings, performs aerobatics and was natural-looking enough in field trials to be mobbed by other birds. “This is just the beginning: the possibilities are virtually endless,” says S K Gupta, the lead researcher on the project. One clear possibility is that the Robo Raven will function as a surveillance drone that is almost undetectable in the natural world.

It has always seemed mystifying that researchers struggle to see the thorny side of their technologies. It’s not just a military issue – Google, Facebook and the NSA all think that they are making the world a better place and that any downsides of their operations are not their fault. Now we know why: they can’t help it.