Researchers from the University of Wisconsin-Madison are working with Colin Stokes, assistant to New Yorker cartoon editor Bob Mankoff, to help develop an automated way of judging entries in the magazine’s weekly cartoon caption contest.

“Not too long ago, Stokes would spend eight hours a week trawling through about 5,000 submissions for the magazine’s caption contest,” writes Ben Fox Rubin in CNET. “The 18-year-old contest is wildly popular and winning it is prestigious enough to put on a resume. (OK, not really.) But Stokes was in danger of going, as Mankoff put it, ‘humor blind.’”

Instead, volunteers on a crowdsourcing website are asked to look at as many sample New Yorker cartoons as they care to (well, at least five) and, for each one, categorize the caption as funny, somewhat funny, or not funny. Researchers—as well as the beleaguered editors at the New Yorker, who would like the computer to take over preliminary judging of the more than 5,000 weekly entries—hope that eventually the artificial intelligence (AI) system will learn what humans consider funny.
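In machine-learning terms, those crowdsourced votes amount to labeled training data for a three-way text classifier. The sketch below is purely illustrative—the captions, labels, and word-overlap scoring are invented stand-ins, not the researchers' actual method, which would use far larger data and more sophisticated models:

```python
# Toy sketch of learning "funny / somewhat funny / not funny" labels
# from crowdsourced votes. All captions and labels here are invented;
# this bag-of-words overlap score stands in for a real learned model.
from collections import Counter

LABELS = ["funny", "somewhat funny", "not funny"]

def train(labeled_captions):
    """Count how often each word appears under each label."""
    counts = {label: Counter() for label in LABELS}
    for caption, label in labeled_captions:
        counts[label].update(caption.lower().split())
    return counts

def classify(counts, caption):
    """Pick the label whose vocabulary overlaps the caption most."""
    words = caption.lower().split()
    def score(label):
        total = sum(counts[label].values()) or 1
        return sum(counts[label][w] for w in words) / total
    return max(LABELS, key=score)

# Invented examples standing in for crowdsourced votes.
data = [
    ("my lawyer will hear about this", "funny"),
    ("this meeting could have been an email", "funny"),
    ("a dog in an office", "somewhat funny"),
    ("nice weather we are having", "not funny"),
]
model = train(data)
print(classify(model, "my dog will hear about this email"))
```

With enough labeled examples, even a crude model like this starts to reflect the crowd's taste—which is the point of collecting so many votes.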

FunnyScript

This isn’t the first time that researchers have attempted to use AI to teach a computer humor. There’s a whole fistful of scientific papers and computer programs on the subject, most with unfunny titles such as “Ontology-based view of natural language meaning: the case of humor detection.” (Though one program had some success teaching a computer the appropriate times to add, “That’s what she said.”)

However, many of these previous projects focused on teaching computers about word meanings and puns, and then having the computer generate its own jokes—which were mostly terrible. With the increased computing power available today and the techniques of deep learning, researchers have discovered that it’s easier to teach computers things by simply showing them lots and lots of examples.

Hence the crowdsourcing of New Yorker cartoons. The technique, in fact, isn’t so different from that used in Robert Heinlein’s 1966 science fiction classic The Moon Is a Harsh Mistress, where the sentient computer Mike learns the difference between jokes that are funny once, always funny, or not funny at all by giving humans lists of jokes to categorize.

In fact, this isn’t even the first time that researchers have attempted to teach a computer humor using New Yorker cartoons. The magazine worked on a similar project last year with Microsoft, and by the end of it, the computer reportedly came up with the same answers as New Yorker editors almost two-thirds of the time. Sadly, that wasn’t considered good enough, Rubin writes.

It is also worth noting that, in addition to having a computer science department, the University of Wisconsin-Madison is known as “Our Funny University,” having spawned not only Scott Dikkers, founding editor of The Onion, but also Jim Mallon, producer and director of Mystery Science Theater 3000, among others.

Further applications

There’s more to the whole project, of course, than simply making life easier for New Yorker editors. The point of the algorithms is to surface the funniest captions more quickly rather than waste people’s time showing them unfunny ones, Rubin writes. And because the algorithms can pick the best choice from many options, they could also help create better drugs to fight viruses or design more-secure communication networks, he writes.

In addition, teaching a computer what humans think is funny could help improve human-computer interactions, such as with digital assistants like Siri and Cortana, Rubin writes. “Comedy may be the key to unlocking this emotional intelligence, since humor embodies many of the complexities of lateral thinking, problem-solving and unexpected insight that characterize the mind,” writes Ilya Lebovitch in IQ.

What computers can learn about human humor from cartoons in the New Yorker is another issue, of course. Critics ranging from the Partisan Review in 1937 to Seinfeld in 1998 have complained that New Yorker cartoons are esoteric and, frankly, not very funny. There are, in fact, even websites devoted to explaining the droll nuance in each New Yorker cartoon. So if the goal is to develop AI systems that can relate better to humans, it is debatable whether researchers are using the right source to explain human humor.

On the other hand, perhaps researchers aren’t ready for a computer that learns humor from South Park.