Would you be more likely to trust a robot to know the right answer than yourself or other human beings?

A recent study led by Anna-Lisa Vollmer at Germany's Bielefeld University has found that children do, and that they'll cave in to robot "peer pressure" at a pretty disturbing rate, even when the robot is clearly wrong.

That has significant implications when children spend a large amount of time in front of a screen, says Dr Mandie Shean, a specialist in education and child resilience.

Participants in the study were asked to match a set of vertical lines on a computer screen by size.

The study looked at a group of 43 children aged between seven and nine years old.

Some completed the task on their own (the control group), while others did so alongside a group of robots.

A group of adults undertook the same activity in similar circumstances.

The point was not to test their eyesight, but to assess their ability to resist pressure to agree with a wrong answer given by a robot.

Adults in the study were able to reject pressure from the robots to give the wrong answer, but 75 per cent of the children caved in.

"It's quite interesting that the children would think the robots have a significant authority voice in their heads," Dr Shean told RN Drive.

She says children have grown up with robots, whereas adults have not; adults therefore have more confidence in rejecting a robot's advice, something the children have not yet developed.


Why does it matter?

Dr Shean says when things come to us through a screen, we tend to think they're correct.

"Just that inability to filter what is right, what is wrong, and that computers are truth and the things that come through them," she said.

The findings are pertinent, given the amount of time children spend on computers at school, with their use of technology often continuing at home.

"[There's] this whole new generation of children that have always had this," said Dr Shean.

She thinks it's important to encourage young people to develop a critical mindset.

"We really need to think about how we help children filter the messages they receive," she said.


Teaching "resilience around robots"

Dr Shean thinks the results shown in the research could be related to development, given the participants were very young: aged between seven and nine years old.

"Certainly as we develop, we should have more confidence in our opinions," she said.

"Although we also need to be thinking about do we think for children too much, and do we allow computers to think for children too much?"

She says we need to help children have confidence in well-founded opinions, but it's not just about confidence.

"Confidence unfounded is a terrible trait, but getting children to be really critical — have that conversation with them," she advised.


"Why do you believe that? What are you basing that on? What's your evidence?

"I teach my students here: disagree with me, and just give me a reason why."

She says having those conversations can help children develop the process of evaluation and give them faith in their own decision-making.

"That's a really important thing for growing up."