
Does not compute: Why machines aren’t funny


We all know what it’s like to tell a joke that dies horribly.

Right now, across the world, from workplaces to wedding receptions, there are a huge number of unfortunate souls whose funnies have just become faux pas as their jokes crash-landed, confusing the people around them or, even worse, upsetting them. The agony of having misjudged a joke is one of those memories that can haunt you for years, and whenever anybody dares to be funny, they take that high-risk gamble.

Computers mitigate risk in so many aspects of our lives these days, and we allow them to shoulder various burdens when our judgement is found to be wanting.

Humour, however, is not a proven area of computer expertise. Making people laugh is very different to controlling a nuclear power facility. And given that some of our own friends and family cannot be relied upon to be funny, it’s difficult to imagine a scenario where a computer could be trusted to make us laugh.

But that hasn’t stopped people working diligently to try and make that happen. Indeed, it’s seen as one of the ultimate challenges of artificial intelligence (AI).

It’s a little over 25 years since one of the first proposals for a computer humour algorithm appeared in the Russian scientific journal Biofizika, and scientists with linguistic interests have been fascinated by the idea ever since. If a computer can learn which words sound alike, then puns should be a piece of cake, surely? If it’s able to detect sarcasm, that’s got to be a small step towards delivering withering one-liners worthy of Joan Rivers? If we can program it to accurately append the phrase “that’s what she said” to the end of a sentence, might it become a master of double entendre?
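The "words that sound alike" step is the most mechanical part of that pun pipeline, and it's easy to see why it tempts programmers. As a rough sketch, a simplified Soundex encoding (a classic phonetic algorithm; a real pun generator would use a pronunciation dictionary) groups words by how they sound rather than how they are spelled:

```python
def soundex(word: str) -> str:
    """Simplified Soundex: encode a word by its consonant sounds.

    This is a toy illustration of grouping words that sound alike;
    real systems use pronunciation dictionaries, not spelling.
    """
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for letter in letters:
            codes[letter] = digit

    word = word.lower()
    encoded = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:  # skip vowels and repeated sounds
            encoded += digit
        prev = digit
    return (encoded + "000")[:4]  # pad/truncate to the usual four characters

# "pair" and "pear" encode identically -- a candidate pun pair
print(soundex("pair"), soundex("pear"))  # → P600 P600
```

Finding candidate pairs like this really is the piece-of-cake part; knowing which of them could anchor an actual joke is where the trouble starts.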

Over the years, all these things have been achieved with varying degrees of success, but the question remains: were they actually funny? And even if computers could absorb, learn and process an infinite number of jokes, would they ever be able to make us laugh?

Girlfriend in a Korma

There can be a mechanical element to generating funny material. You see those so-called hashtag games on Twitter, where someone will suggest a subject, say, #smithscurry, where songs by The Smiths will be combined with Indian foodstuffs to create amusing amalgamations, e.g. “Girlfriend In A Korma”. Anyone who throws themselves into this endeavour will usually end up with one browser window open with a list of Smiths songs, another with an Indian takeaway menu, and flit between the two looking for matches.
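That flitting between two browser windows is exactly the kind of brute-force matching a program does well. A minimal sketch of the idea (the song list and menu are hand-picked for illustration, and simple string similarity stands in for real phonetic matching) swaps a song-title word for a similar-sounding dish:

```python
from difflib import SequenceMatcher

songs = ["Girlfriend in a Coma", "This Charming Man", "Panic"]
dishes = ["korma", "naan", "bhaji", "keema"]  # hypothetical takeaway menu

def mashups(songs, dishes, threshold=0.6):
    """Swap any song-title word for a dish name that sounds close enough."""
    results = []
    for title in songs:
        for word in title.split():
            for dish in dishes:
                score = SequenceMatcher(None, word.lower(), dish).ratio()
                if score >= threshold and word.lower() != dish:
                    results.append((title.replace(word, dish.title()), score))
    return sorted(results, key=lambda r: -r[1])

for mashup, score in mashups(songs, dishes):
    print(f"{mashup}  ({score:.2f})")
```

Run on these lists it duly produces "Girlfriend in a Korma" — but it produces it with no idea that it has said anything funny.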

One senses that a computer could find those matches more efficiently than we could, but it’s hard to credit it with the necessary instinct to sort the wheat from the chaff, and sense why “Girlfriend In A Korma” is funny and “The Keema Is Dead” isn’t.

An anthology of computer comedy wouldn’t make for great reading. “I like my coffee like I like my war. Cold.” This slightly desperate joke was generated by an algorithm at the University of Edinburgh in 2013, and while you might grudgingly admit that it has the shape of a joke, there’s a fundamental element missing: humour.
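The Edinburgh system filled an "I like my X like I like my Y, Z" template by mining large-scale word co-occurrence statistics for an adjective that commonly modifies two unrelated nouns. A toy version of the same idea, with a tiny hand-picked lexicon standing in for those statistics, shows how mechanical the shape of the joke is:

```python
import random

# Toy lexicon: an ambiguous adjective mapped to two nouns it modifies
# in different senses (hand-picked here; the real system mined these
# pairings from corpus statistics).
lexicon = {
    "cold": ["coffee", "war"],
    "dark": ["chocolate", "humour"],
    "dry": ["wine", "wit"],
}

def template_joke(adjective, nouns):
    """Fill the 'I like my X like I like my Y. Z.' template."""
    noun_a, noun_b = nouns
    return f"I like my {noun_a} like I like my {noun_b}. {adjective.capitalize()}."

adjective = random.choice(list(lexicon))
print(template_joke(adjective, lexicon[adjective]))
```

Everything here is shape: the template guarantees something joke-like comes out, and nothing guarantees it is funny.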

It’s true that some of the world’s most talented stand-ups are able to deliver jokes that are all shape and no substance and still make audiences rock with laughter, but that’s down to a combination of reputation, momentum, presence and timing – things that computers haven’t got around to tackling yet. They’re still trying to work out why “Girlfriend In A Korma” is funny, and you have to feel for them, because we don’t really know either.

Formulaic, not algorithmic

Thinkers from Plato to Pascal to Freud have struggled for millennia to determine why things are funny, and that struggle has produced various theories, ranging from schadenfreude to the disruption of expectation. But there's an inherently joyless aspect to analysing humour, one you can easily observe by leafing through any manual of comedy writing, where comic devices are dissected for readers hoping to learn how to be funny. Rather like a forgotten thought, the harder you look for the secret of humour, the more elusive it seems to become.

“There are a number of basic tools to comedy,” says Joel Morris, an author and comedy writer who has worked extensively in British television and radio. “Sitcoms are almost pieces of engineering,” he says, “these mathematical systems with story wheels that feature characters rising and falling.”

As comedy writers learn their craft, they develop a sense of where the dead ends are and where, as Morris puts it, “the maths doesn’t work”. “Given that there are formulae,” he continues, “I can see why people think that, say, a computer could plot a story. If you tell it that a guy called Geoff is going on holiday with the one man he’d never want to go on holiday with, why shouldn’t it be able to deliver acts two, three and four?”

In the same way, surreal comedy, where completely incongruous elements find themselves juxtaposed, should be meat and drink for a computer program (“It’s the bacon tractor!”) but again, it wouldn’t be funny. “With comedy,” concludes Morris, “you’re looking for a glimpse of humanity.”

There’s no better illustration of this than jokes told by children, jokes that make no sense and aren’t funny in any traditional sense, but in context are faintly hysterical. The @KidsWriteJokes Twitter account delivers a regular stream of these glorious nuggets, e.g.:

Q: What do you call a fish with no tail?
A: A one eyed grape!

However, when a computer delivers something similar, e.g. …

Q: What kind of animal rides a catamaran?
A: A cat!

…we roll our eyes in despair. The former is charming evidence of human nature, because we can remember a time when we, too, almost understood how jokes worked, but not quite. The latter is merely a programming failure.


To hur-hur is human

Perhaps it’s just early days for computer humour. Everyone (and everything) has to start somewhere. The child who reels off undecipherable gags today might be the arena-filling comedian of tomorrow, and who’s to say that computers can’t follow the same path, as neural networks bestow new powers on them and their learning abilities step up a gear?

In an interview with GQ magazine in 2013, Peter McGraw at the University of Colorado was bullish about computers’ prospects: “If we can map the human genome,” he said, “if we can create nuclear energy, we can understand how and why humor arises.” Many academics, such as Julia Taylor Rayz at Purdue Polytechnic in Indiana, have devoted huge amounts of time and energy to ‘modeling and detecting humor’.

Their optimism and sense of possibility seems to align them with advocates of Strong AI, those who believe that there’s nothing intrinsically special about living matter that prevents it from being modelled by a computer. In other words, humour is just inputs, outputs and memory.

Comedy writers argue that humour is inherently human, and that humourlessness has a “robotic” quality. There are plenty of scientists and academics who would agree with them, and that schism is an illustration of the fiercely argued “Hard Problem” of AI: how you might ever be able to model things like consciousness, sentience and self-awareness – things that seem so intrinsic to comedy.

“We have no idea how much sentience has to do with being ensnared in the body we have”, said Sir Nigel Shadbolt, Professor of AI at the University of Southampton, when I interviewed him back in 2015. “We’re building super smart micro-intelligences, but we have no clue about what a general theory of intelligence is.” Or, for that matter, a theory of humour. “We still don’t have a definition,” said Scott Weems, the author of “Ha! The Science of When We Laugh and Why”, to IQ, Intel’s tech culture magazine, a couple of years back. “Ask ten scientists, you’ll get ten different responses.”

Before a computer can even start trying to be funny it needs to be able to think creatively. Thus far, attempts by computers to produce art or music are often intriguing, but at the same time feel somewhat hollow.

“Creativity has always been fascinating,” wrote David Gelernter, professor of computer science at Yale University, in an essay for Frankfurter Allgemeine Zeitung. “[It] doesn’t operate when your focus is high; only when your thoughts have started to drift… We find creative solutions to a problem when it lingers at the back of our minds… No computer will be creative unless it can simulate all the nuances of human emotion.”

But even if a computer could successfully simulate those nuances, there’s still no guarantee that we’d laugh at its jokes. “Jokes are very tribal, they’re a way of marking shared values,” says Joel Morris. “It’s hard to tell a joke if you don’t share a culture, or language. Jokes are clues and signifiers to who you are; the ones that work are effectively saying ‘I’m like you’. Ultimately, it’s the soul of the joke, the truth of the joke that matters. That’s the thing you accept and open yourself up to. We’re very sensitive to a lack of truth, and a computer that’s telling you a joke is really telling you a lie. Because it’s saying ‘I am human too’.”

“R2D2! You know better than to trust a strange computer!”

In the realm of science fiction, robots are gently mocked for their gaucheness and inability to connect emotionally with humans. C3PO not knowing how to talk to Luke Skywalker (despite being a droid trained to meet people) is part of what makes the character charming – the other part is that we don’t find him threatening. By contrast, when we discover in the film Alien that the character Ash, played by Ian Holm, is an android that has been passing itself off as human, it’s a thoroughly traumatic moment. It raises the question of how human we really want machines to be, and why so much effort is being made to blur the lines between the two.

One of the reasons is obvious: unravelling one of life’s great mysteries will always pose a compelling challenge. But there’s a more practical, short-term use for this kind of work: to bond us more closely to the devices and apps we use every day by giving them a warm, friendly tone. “It’s all about making the communication between people and the machine a smooth, compelling interaction,” said Northwestern University professor Kristian Hammond to Wired magazine in 2014.

But while it’s been shown that we appreciate a certain level of politeness in our interactions with computers, there may be a point where the insincerity begins to jar. Automated apologies for delayed trains, for example, don’t feel like apologies, because we know that computers cannot be sorry. Similarly, the quips and witticisms delivered by automated assistants such as Siri or Google Now can be warm and funny, but not because a computer delivered them; we’re appreciating the ingenuity of the human beings who programmed them.

Many find the faux bonhomie displayed by automated assistants to be an irritant, and this erects another significant hurdle for any computer tasked with being funny: it doesn’t know who its audience is. As Morris reminds us, if you get comedy very slightly wrong, it fails utterly. “For example, it’s wrong to tell a joke that punches down to someone in the room,” he says, “and that’s why it can be hard to make jokes on Twitter – because you can’t see the room.”

For a computer about to tell a joke, however, the room is colossal, and largely invisible. The chances of that joke dying are very high indeed. But, crucially, a computer doesn’t experience the same sense of shame that accompanies our own misjudgement of humour. Ultimately, you wonder if computers will be unable to make us laugh, not because they don’t have the jokes, but because they simply don’t care if the joke goes wrong.

Source: https://unbabel.com/blog/artificial-intelligence-machine-funny/
