Robots designed to have a sense of humour may struggle to understand when and what makes things funny - and this could even lead them to kill, one expert warns.

The inability of artificially intelligent machines to grasp context, timing and tact could have disastrous consequences beyond an ill-timed joke, experts say.

That could lead to a situation where an automaton's software deems killing someone a funny thing to do, it's claimed.


This image shows the humanoid robot 'Alter' on display at the National Museum of Emerging Science and Innovation in Tokyo. Understanding humour may be one of the last things that separates humans from ever smarter machines, computer scientists and linguists say

There are good reasons behind giving artificial intelligence the ability to understand humour, researchers say.

Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany, says it makes machines more relatable, especially if you can get them to understand sarcasm.

It may also aid with automated translations of different languages.

But some experts remain unconvinced about robots being able to understand humour.

'Artificial intelligence will never get jokes like humans do,' said Kiki Hempelmann, a computational linguist who studies humour at Texas A&M University-Commerce.

'In themselves, they have no need for humour. They completely miss context.

'Teaching AI systems humour is dangerous because they may find it where it isn't and they may use it where it's inappropriate.

'Maybe bad AI will start killing people because it thinks it is funny.'

Humour is a complex concept which requires vast amounts of context, something experts say is difficult to build into robots.

Dr Miller added: 'Creative language - and humour in particular - is one of the hardest areas for computational intelligence to grasp.

'It's because it relies so much on real-world knowledge - background knowledge and commonsense knowledge.

'A computer doesn't have these real-world experiences to draw on. It only knows what you tell it and what it draws from.'

Dr Noam Slonim, principal investigator, stands with the IBM Project Debater before a debate between the computer and two humans in San Francisco. Slonim put humour into the programming but in tests it gave a humorous remark at an inappropriate time

Allison Bishop, a Columbia University computer scientist who also performs stand-up comedy, said computer learning looks for patterns, but comedy thrives on things hovering close to a pattern and veering off just a bit to be funny and edgy.

Humour, she said, 'has to skate the edge of being cohesive enough and surprising enough.'

For comedians that's job security. Dr Bishop said her parents were happy when her brother became a full-time comedy writer because it meant he wouldn't be replaced by a machine.

'I like to believe that there is something very innately human about what makes something funny,' Dr Bishop said.

Oregon State University computer scientist Heather Knight created the comedy-performing robot Ginger to help her design machines that better interact with - and especially respond to - humans. She said it turns out people most appreciate a robot's self-effacing humour.

Ginger, which uses human-written jokes and stories, does a bit about Shakespeare and machines, asking, 'If you prick me in my battery pack, do I not bleed alkaline fluid?' in a reference to 'The Merchant of Venice.'

The study of humour and artificial intelligence is a growing field for academics.

Some computers can generate and understand puns without help from humans.

This, computer scientists claim, is because puns are based on different meanings of similar-sounding words.
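The mechanical view of puns described above can be illustrated with a toy sketch. This is purely an illustration (not any research group's actual system), using a tiny hand-made homophone table as an assumption: a program "gets" a pun by spotting a word whose sound-alike could plausibly be swapped into the sentence.

```python
# Toy pun spotter: flags words that have a known homophone, the
# mechanical basis of many puns. The homophone table is a small
# hand-made stand-in for a real pronunciation resource.

HOMOPHONES = {
    "flour": "flower",
    "knead": "need",
    "bred": "bread",
}

def find_pun_candidates(sentence):
    """Return (word, homophone) pairs found in the sentence."""
    words = sentence.lower().replace(",", "").replace(".", "").split()
    return [(w, HOMOPHONES[w]) for w in words if w in HOMOPHONES]

print(find_pun_candidates("Bakers knead the dough daily."))
# prints [('knead', 'need')]
```

A real system would use a pronunciation dictionary and a language model to judge whether the swapped word also fits the context, which is where the 'huge amounts of background' Dr Rayz mentions come in.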

Machines struggle beyond this narrow scope, however, said Purdue University computer scientist Julia Rayz.

'They get them - sort of,' Dr Rayz said. 'Even if we look at puns, most of the puns require huge amounts of background.'

Still, with puns there is something mathematical that computers can grasp, Dr Bishop said.

Dr Rayz has spent 15 years trying to get computers to understand humour but says the results often leave a lot to be desired.

She recalled a time she gave the computer two different groups of sentences. Some were jokes. Some were not.

The computer classified something as a joke that people thought wasn't a joke.

When Dr Rayz asked the computer why it thought it was a joke, its answer made sense technically.

But the material still wasn't funny, nor memorable, she said.
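The kind of experiment Dr Rayz describes can be sketched as a simple text classifier. The snippet below is my own minimal illustration, not her actual setup: a naive Bayes model trained on a handful of hand-labelled sentences, which then judges whether a new sentence looks like a joke. As the anecdote shows, such a model's answer can 'make sense technically' while saying nothing about whether the text is actually funny.

```python
import math
from collections import Counter

# Toy joke/not-joke classifier: bag-of-words naive Bayes with
# add-one smoothing, trained on four hand-labelled sentences.

TRAIN = [
    ("why did the chicken cross the road", "joke"),
    ("i told my computer a joke but it crashed", "joke"),
    ("the meeting is scheduled for three o'clock", "not"),
    ("please submit the report by friday", "not"),
]

def train(data):
    counts = {"joke": Counter(), "not": Counter()}
    totals = Counter()
    for text, label in data:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = len({w for c in counts.values() for w in c})
    scores = {}
    for label in counts:
        # Sum of smoothed log-probabilities for each word.
        scores[label] = sum(
            math.log((counts[label][word] + 1) / (totals[label] + vocab))
            for word in text.split()
        )
    return max(scores, key=scores.get)

counts, totals = train(TRAIN)
print(classify("why did the computer cross the road", counts, totals))
# prints joke
```

The model labels the new sentence a joke only because its words overlap with the joke examples, a pattern-matching judgement with no notion of timing, surprise or funniness.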

IBM has created artificial intelligence that beat opponents in chess and 'Jeopardy!' Its latest attempt, Project Debater, is more difficult because it is based on language and aims to win structured arguments with people, said principal investigator Noam Slonim, a former comedy writer for an Israeli version of 'Saturday Night Live.'

Mr Slonim put humour into the programming, figuring that an occasional one-liner could help in a debate. But it backfired during initial tests when the system made jokes at the wrong time or in the wrong way.

Now, Project Debater is limited to one attempt at humour per debate, and that humour is often self-effacing.

'We know that humour - at least good humour - relies on nuance and on timing,' Mr Slonim said. 'And these are very hard to decipher by an automatic system.'

That's why humour may be key in future Turing Tests - the ultimate test of machine intelligence, in which an independent evaluator tries to tell whether it is interacting with a person or a computer, Mr Slonim said.

There's still 'a very significant gap between what machines can do and what humans are doing,' both in language and humour, Mr Slonim said.