Dealing with chatbots and virtual assistants can be so frustrating that it’s normal for humans to start getting snarky.

Such run-ins would be a little more entertaining if the machines could give some of that sass back. Unfortunately, it’ll be a while before that can happen, since computers don’t really understand sarcasm at all. Researchers from Oregon State University in the US have therefore tried to teach software to do just that, using neural networks.

It’s tricky. Computers have to follow what is being said by whom, the context of the conversation, and often some real-world facts to understand cultural references. Feeding machines single sentences is often ineffective; even humans find it difficult to tell whether an isolated remark is cheeky.

The researchers, therefore, built a system designed to inspect each sentence along with the ones before and after it. The model is made up of several bidirectional long short-term memory networks (BiLSTMs) stitched together, and spotted sarcastic comments about 70 per cent of the time.

“Typical LSTMs read and encode the data – a sentence – from left to right. BiLSTMs will process the sentence in a left to right and right to left manner,” Reza Ghaeini, coauthor of the research on arXiv and a PhD student at Oregon State University, explained to The Register this week.

"The outcome of the BiLSTM for each position is the concatenation of forward and backward encodings of each position. Therefore, now each position contains information about the whole sentence (what is seen before and what will be seen after)."

So, where’s the best place to learn sarcasm? Reddit's message boards, of course. The dataset known as SARC – geddit? – contains hundreds of thousands of sarcastic and non-sarcastic comments and responses.

“It is quite difficult for both machines and humans to distinguish sarcasm without context,” Mikhail Khodak, a graduate student at Princeton who helped compile SARC, previously told El Reg.

"One of the advantages of our corpus is that we provide the text preceding each statement as well as the author of the statement, so algorithms can see whether it is sarcastic in the context of the conversation or in the context of the author’s past statements."

The cheek of it all

First, the sentences in the training dataset are converted into vectors. These vectors are passed to an “attention function,” a fancy term for how neural networks decide which words to focus on.

“The attention function helps us extract relevant information from the comments and responses. When a human reads a pair of sentences, he or she automatically extracts the dependencies that exist in the aforementioned pair. In the deep [learning] model, we need to use a mechanism that simulates this behavior.”
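A minimal dot-product attention sketch (an assumed, simplified form for illustration; the paper's exact attention mechanism may differ): score each word vector against a query, turn the scores into weights with a softmax, and return the weighted sum.

```python
import math

def softmax(scores):
    # subtract the max for numerical stability before exponentiating
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attend(query, word_vectors):
    # score each word against the query, normalise the scores into
    # weights, and blend the word vectors accordingly -- words that
    # match the query dominate the result
    weights = softmax([dot(query, w) for w in word_vectors])
    context = [sum(w * vec[i] for w, vec in zip(weights, word_vectors))
               for i in range(len(query))]
    return context, weights
```

The weights always sum to one, and the words most relevant to the query (here, a vector extracted from the comment) contribute most to the blended representation of the response.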


The model then rereads the previous encoding to try to understand the context before classifying the comment-and-response pair as sarcastic or not. It had an accuracy of 69.45 per cent.

Neural networks act like black boxes, and it’s difficult to understand and interpret their decisions, but the system does seem to pick up on certain words associated with sarcasm, Ghaeini said.

“We show a comment and response pair, where the comment is 'man accidentally shoots himself when concealed weapon goes off in movie theater,' and the response is 'just another responsible gun owner exercising his rights under the 2nd amendment.' It’s a sarcastic response and our model identifies it as sarcastic response as well."

“The word 'responsible' in the response appears to be the key phrase that delivers the sarcastic intent of the response when paired with the phrase 'man accidentally shoots,' we see the highest saliency, suggesting the most significant impact toward the final prediction.”
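One cheap way to approximate the saliency Ghaeini describes (an occlusion-style sketch, not the paper's method, which inspects the trained network itself): drop each word in turn and measure how much a sarcasm scorer's output moves.

```python
def saliency(score_fn, words):
    # occlusion-style saliency: remove each word and measure how much
    # the score changes; big swings mark the words driving the call
    base = score_fn(words)
    return {w: abs(base - score_fn([x for x in words if x != w]))
            for w in set(words)}

# hypothetical stand-in scorer for demonstration only: a real model
# would score the response in the context of its comment
toy_score = lambda ws: float(ws.count("responsible"))

sal = saliency(toy_score, "just another responsible gun owner".split())
```

With this toy scorer, removing "responsible" is the only deletion that shifts the score, mirroring the example above where that word carried the sarcastic intent.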

But don’t expect this model to understand more nuanced snidey jokes in longer conversations, because it only processes a maximum of 200 words for a single comment and 100 for a following response. It has been designed to deal with simple, short interactions on the internet and won’t help your Google Home or Amazon Alexa anytime soon.
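Those limits amount to a simple truncation step at input time, something like the following (the names here are illustrative, not taken from the paper):

```python
MAX_COMMENT_WORDS = 200   # cap reported for a single comment
MAX_RESPONSE_WORDS = 100  # cap reported for the following response

def prepare_pair(comment, response):
    # anything past the cap is simply never seen by the model
    return (comment.split()[:MAX_COMMENT_WORDS],
            response.split()[:MAX_RESPONSE_WORDS])
```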

“Developing an AI system that is capable of communicating with a human is an old and interesting goal of the researchers in this domain,” Ghaeini concluded.

"Recently, dialogue systems have received a lot of attention from researchers. Unfortunately, existing approaches often fail to detect sarcastic user comments in order to provide proper responses. We need to be able to detect sarcastic intent in user's responses in order to provide more realistic dialogue systems." ®