Image caption: AI programs could inhabit androids, appliances, phones, cars... any machine we interact with

How do we stop intelligent machines from taking over the world and enslaving us all?

Give them emotions.

That's the radical suggestion of Patrick Levy Rosenthal, founder and chief executive of Emoshape, a tech firm that has developed a computer chip that can synthesise 12 human emotions.

"It's logical to conclude that autonomous machines made of electricity and metal will eventually see us as their main competitors for those resources, and try to take control," he says.

This is the dystopian vision of artificial intelligence (AI) run amok that luminaries such as physicist Prof Stephen Hawking, and tech entrepreneurs Bill Gates and Elon Musk, worry about.

But Mr Rosenthal believes this nightmare scenario will be avoided if we create machines that can empathise.

"We can teach them to feel happiness when they perform well, solve problems and receive positive feedback from humans," he says. "This will reduce the threat, because they will always work to achieve human happiness."
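The idea Mr Rosenthal describes — machines that "feel happiness" when they get positive feedback — maps loosely onto reward-based learning. A minimal sketch of that mechanism in Python (all names here are illustrative, not Emoshape's actual system): an agent nudges its preference for each action toward the feedback it receives, and ends up favouring whatever pleases the human.

```python
# Minimal sketch of reward-driven action selection, loosely illustrating
# an agent that "prefers" actions earning positive human feedback.
# Illustrative only - not Emoshape's actual system.

class FeedbackAgent:
    def __init__(self, actions, learning_rate=0.5):
        self.values = {a: 0.0 for a in actions}  # learned "satisfaction" per action
        self.lr = learning_rate

    def best_action(self):
        # Pick the action with the highest learned value
        return max(self.values, key=self.values.get)

    def receive_feedback(self, action, reward):
        # Move the action's value a step toward the reward signal
        self.values[action] += self.lr * (reward - self.values[action])

agent = FeedbackAgent(["tidy_desk", "interrupt_user"])
for _ in range(5):
    agent.receive_feedback("tidy_desk", reward=1.0)       # human approves
    agent.receive_feedback("interrupt_user", reward=-1.0)  # human disapproves
print(agent.best_action())  # prints "tidy_desk"
```

The design choice that matters for Rosenthal's argument is the reward source: if the signal always comes from human approval, the agent's learned preferences track human happiness by construction.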


Machines that can understand human emotion - and express their own emotions - will also be more effective colleagues and helpers, he believes.

By analysing our tone of voice, facial expressions and phrases, computers will become adept at reading our emotional states, and this will help them better understand what we're asking them to do, argues Mr Rosenthal.
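To make the phrase-analysis part concrete — and only that part; real systems also use trained models plus voice and facial cues — here is a deliberately naive keyword-based sketch in Python of guessing an emotional state from text. The keyword lists are invented for illustration and bear no relation to Emoshape's method.

```python
import re

# Toy keyword lists - invented for illustration, not a real lexicon
EMOTION_KEYWORDS = {
    "joy": {"great", "love", "wonderful", "thanks"},
    "anger": {"hate", "broken", "useless", "again"},
    "sadness": {"sorry", "miss", "lost", "alone"},
}

def guess_emotion(text):
    # Split into lowercase words, ignoring punctuation
    words = set(re.findall(r"[a-z']+", text.lower()))
    # Score each emotion by keyword overlap and take the best
    scores = {emo: len(words & kws) for emo, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(guess_emotion("This is great, thanks!"))  # prints "joy"
print(guess_emotion("It's broken again."))      # prints "anger"
print(guess_emotion("Please book a room."))     # prints "neutral"
```

Even this toy version shows why reading emotion helps the machine act on a request: "It's broken again" and "Please book a room" call for very different responses.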

Danger signs

So why do some people think robots and self-learning programs are such a threat?

Even the tech optimists admit that many jobs involving menial or repetitive tasks will be automated. Machines can do a lot of what we do faster, more accurately, and at lower cost. And they don't go off sick, strike, or ask for pay rises.

It's the latest development of the industrial revolution, and could be just as disruptive.



Only recently, mobile phone components manufacturer Foxconn announced that it would replace 60,000 factory workers with robots.

Already some cars are made entirely by robots; warehouses full of goods hum in the darkness, staffed by robots that do not need light to find their way; and companies are increasingly using "chatbots" to deal with customers.

By some estimates, nearly half of all the jobs we do now could be performed by machines in the near future.

Image caption: Marvin the Paranoid Android, from The Hitchhiker's Guide to the Galaxy, was always depressed

And once these intelligent programs, with access to limitless data crunched by increasingly powerful computers, can learn from past mistakes and create new programs autonomously without any human intervention, we could lose control.

AI could become like Frankenstein's monster. That's the fear at least.

Friends not enemies?

But tech evangelists are fond of pointing out that before the machine age, around two-thirds of all jobs were in agriculture. Now, with entire farms capable of being managed by automated robots, the sector accounts for just 2% of jobs.

The point being that we created new jobs - we adapted.

Jaap Zuiderveld, European vice-president for chip maker Nvidia, says: "Every new technology is an opportunity and a threat. But from my point of view, AI is only creating opportunities. Yes, it may replace many jobs, but it could also help humanity cure cancer."

Image caption: Frank Palermo of Virtusa Polaris thinks AI will empower workers "to make better decisions"

And he reminds us that when it comes to the crucial decisions, we should always have the final say.

Frank Palermo, executive vice-president of global digital solutions for Virtusa Polaris, which numbers JPMorgan Chase, AIG and BT among its clients, thinks AI will be for "enabling workers and empowering them to make better decisions".

This benign or "weak AI", as he calls it, will "help workers navigate the working day", organising our calendars, booking meeting rooms, warning us about traffic congestion, and so on.

Chatty chums

And we'll be chatting naturally to these supersmart assistants wherever we happen to be - in cars, offices, homes and via our smartphones.

We'll only need to type on a keyboard when we don't want people to hear what we're saying.

They will be capable of natural conversations but will have huge computing power behind them, tapping into supercomputers such as IBM's Watson or Google's AI platforms, Mr Palermo believes.

Image caption: Microsoft's AI chatbot proved not to be so smart

The big tech companies - Apple, Microsoft, Google, Amazon, Samsung - are all convinced that this is the way we'll be interacting with service providers in the future.

And as menial tasks are automated it will leave us free to concentrate on more valuable activities, like developing better customer relationships or dreaming up new products and services, he argues.

Work will no longer be about sitting behind a desk and screen but "will be more of a natural discussion with your surroundings... much more of an interactive experience".

Image caption: Emoshape believes computer programs will be able to feel human emotions

"I don't have a doom and gloom outlook," he says. "I think the man-plus-machine model will be the template for many years to come. I don't think machines will ever run things by themselves."

Emoshape's Mr Rosenthal agrees, saying: "By 2050, humans will talk more to AI than to other humans. It is like electricity was at the beginning of the 20th Century - soon AI will be everywhere."

And these chatty assistants will have personalities as well as emotional complexity, he believes, imbuing driverless cars, for example, with unique characteristics that we can fall in love with.

But if AI programs develop personalities, emotions and can generate new, improved versions of themselves, does this make them effectively "people" legally speaking? Could they be given rights and obligations?

"In the US, they've already decided that the 'driver' of a driverless car could be the AI, legally speaking. So who is liable if it has a crash?" says Andrew Joint, commercial technology partner at law firm Kemp Little.

At the moment car manufacturers accept the responsibility, but generations of self-learning driverless cars could eventually operate independently, some think. What then?

"And in the workplace, if an autonomous AI program takes discriminatory decisions against employees, you can see how some employers might try to duck their responsibilities and blame the AI," says Mr Joint.

This is another reason why firms will want to keep a tight rein on AI programs they employ, he believes.

Whether you believe AI programs will be chatty, helpful chums or power-mad dictators, one thing is clear: they're going to have a profound effect on the world of work.

Follow Matthew on Twitter @matthew_wall
