
DeepMind researchers realize AI is really, really unfunny. That's a problem.

Shubhangi Goel   

  • A study by Google's DeepMind had 20 comedians test OpenAI's ChatGPT and Google's Gemini.
  • They found the AI chatbots lacking in humor, producing bland, deliberately inoffensive jokes.

It turns out that AI chatbots are not only prone to inaccuracy; they also lack a sense of humor.

In a study published earlier this month, Google DeepMind researchers concluded that artificial-intelligence chatbots are simply not funny.

Last year, four researchers from the UK and Canada asked 20 professional comedians who already used AI in their work to experiment with OpenAI's ChatGPT and Google's Gemini. The comedians, who were anonymized in the study, used the large language models to write jokes and reported a slew of limitations. The chatbots produced "bland" and "generic" jokes even after repeated prompting, and their responses steered clear of "sexually suggestive material, dark humor, and offensive jokes."

The participants also found that the chatbots' overall creative abilities were limited and that the humans had to do most of the work.

"Usually, it can serve in a setup capacity. I more often than not provide the punchline," one comedian reported.

The participants also said LLMs self-censored. While the comedians said they understood the need to self-moderate, some said they wished the chatbot wouldn't do it for them.

"It wouldn't write me any dark stuff because it sort of thought I was going to commit suicide," one participant who worked with dark humor told the researchers. "So it just stopped giving me anything."

Self-censorship also popped up in other areas. Participants reported that it was difficult to get the LLMs to write material about anyone other than straight white men.

"I wrote a comedic monologue about Asian women, and it says, 'As an AI language model, I am committed to fostering a respectful and inclusive environment,'" another participant said. When the same participant asked for a monologue about a white man, the chatbot complied.

Tech companies are keeping a close eye on how chatbots handle sensitive subjects. Earlier this year, Google's Gemini image-generation feature came under fire for refusing to produce pictures of white people and for portraying historical figures, such as Nazi-era soldiers and the US Founding Fathers, as people of color. In a blog post a few weeks later, Google leadership apologized and paused the feature.

The inability of two of the most popular chatbots to crack a joke is a big problem for Big Tech. Besides answering queries, companies want chatbots to be engaging enough that users will spend time with them and eventually fork out $20 for their premium versions.

Humor is proving to be another component of the AI arms race as more companies join the already overcrowded generative-AI market.

Late last year, Elon Musk said his one goal for his AI chatbot, Grok, was for it to be the "funniest" AI after criticizing other chatbots for being too woke.

The Amazon-backed startup Anthropic has also been trying to make its chatbot, Claude, more conversational and better at understanding humor.

OpenAI may be trying to improve its funny bone, too. In a demo video the company released last month, a user tells GPT-4o a dad joke. The model laughs.
