Smart assistants could soon come with a 'moral AI' to decide whether to report their owners for breaking the law.

That's the suggestion of academics at the University of Bergen, Norway, who say that household gadgets like the Amazon Echo and Google Home should be enhanced with ethical smart software.

This would let them weigh up whether to report illegal activity to the police, effectively putting millions of people under constant surveillance.



Academics at the University of Bergen, Norway, touted the idea at the ACM conference on Artificial Intelligence, Ethics and Society in Hawaii.

Marija Slavkovik, associate professor in the department of information science and media studies, led the research.

Leon van der Torre and Beishui Liao, professors at the University of Luxembourg and Zhejiang University respectively, also took part.

Dr Slavkovik suggested that digital assistants should possess an ethical awareness that simultaneously represents both the owner and the authorities - or, in the case of a minor, their parents.

Devices would then have an internal 'discussion' about suspect behaviour, weighing up conflicting demands between the law and personal freedoms, before arriving at the 'best' course of action.

That said, there would need to be room for compromise because the world itself is not black and white.

'What we propose is very simple and of course there is so much more to do,' Dr Slavkovik told MailOnline.

'There is [already] an ethical conflict between people in one family, let alone between people and the manufacturer, or shareholders of the manufacturer and programmers.

'If we want to avoid Orwellian outcomes it's important that all stakeholders are identified and have a say, including when machines shouldn't be able to listen in. Right now only the manufacturer decides.'


The concept creates its own moral maze, however: specifically, whether computers should have the ability to make human decisions.

'Humans and human situations are far messier than this method makes out,' Beth Singler from the University of Cambridge told New Scientist.

'Some might want it dealt with within the family, while others may take a hard line and seek police involvement. This disparity is likely to be found in all the groups of people.'

The ethics of the culture the product is launched in would also need to be taken into consideration.


In April 2018, the House of Lords Artificial Intelligence Committee said ethics needed to be put at the centre of the development of AI.

The peers said international safeguards must be established, and warned this was particularly crucial with Britain poised to become a world leader in the controversial technological field.

The committee insisted AI needs to be developed for the common good and that the 'autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence'.

The report also stressed that AI should not be used to diminish the data rights of individuals, and that people 'should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence'.