Computer, tell me a joke! Can AI have a sense of humor?

Computers are getting better and better at understanding human language. Thanks to methods from Natural Language Processing (NLP) - a branch of artificial intelligence - machines can translate texts on their own, talk to people as Amazon's Alexa does, or answer customers' questions as virtual service chatbots. To this end, computer scientists train models built from artificial neural networks on language data.

Read here how NLP works: "Natural Language Processing makes companies more digital and smarter"

Large language models in particular, such as OpenAI's text generator GPT-3, show that conversations between humans and machines already work reasonably well. These advances have prompted AI research to go one step further: if computers can learn human language, can they also develop a sense of humor?

Humor training is “AI-complete” - so impossible for AI?

Scientists have been trying for some time to train AI models that can generate jokes. Not an easy task. With all its ambiguity, language is already a challenge for machines; humor, which often thrives on exactly that ambiguity, is something of a supreme discipline. Researchers therefore class the “humor training” of AI models as “AI-complete”: the category of the hardest computational problems, whose solution is considered just as difficult as answering the question of what intelligence actually is. A humorous artificial intelligence would need a language model that works at a level comparable to the human language faculty. In practice, that means AI applications cannot develop humor as we know it from humans.

Harvard Business School was not content with that. Led by Michael H. Yeomans, researchers at the university set out to get closer to humorous AI. In one study, they investigated whether humans or machines are better at predicting whether a particular person will find a joke funny. This allowed them to test how closely AI can approximate human judgment - the judgment with which we first assess a joke's humor before we laugh.

AI predicts the humor content of jokes

For the study, 75 pairs of people competed against the computer. Notably, 71% of the participants had known each other for more than five years - in principle an advantage over the machine, since those who know each other well should know better what makes the other person laugh. In the experiment, person A of each pair rated 33 jokes on a scale from “extremely funny” to “not at all funny”. Person B then saw the ratings for four of these jokes and, based on that information, tried to predict how funny their partner would find eight further jokes.

Following the same scheme, the algorithm tried to predict the humor ratings. Instead of analyzing the linguistic structure of the jokes or being trained on joke features, the computer's predictions were based on “collaborative filtering”: the AI evaluated the rating behavior of the test subjects to identify patterns and infer their humor preferences from them. The surprising result: the artificial intelligence's assessment was more accurate than that of the test subjects. The algorithm was correct 61 percent of the time, the humans only 57 percent of the time. Artificial intelligence can, at the very least, pick up on what people tend to find funny.
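To give a sense of how collaborative filtering works, here is a minimal sketch in Python. The ratings matrix, the 1-to-7 scale and all variable names are illustrative assumptions, not the actual data or method from the Harvard Business School study; the idea is simply to predict a person's missing joke ratings from the ratings of people with a similar taste.

```python
import numpy as np

def predict_ratings(ratings, target, known):
    """Predict the target person's joke ratings from similar raters.

    ratings : array of shape (n_people, n_jokes) with funniness scores
    target  : row index of the person whose humor we want to predict
    known   : indices of the jokes the target person has already rated
    """
    # Similarity between the target and every other rater, computed only
    # on the jokes the target has already rated (cosine similarity).
    base = ratings[:, known]
    t = ratings[target, known]
    sims = base @ t / (np.linalg.norm(base, axis=1) * np.linalg.norm(t) + 1e-9)
    sims[target] = 0.0  # the target must not "vote" for themselves

    # Prediction = similarity-weighted average of the other raters' scores.
    weights = np.clip(sims, 0.0, None)[:, None]
    return (weights * ratings).sum(axis=0) / (weights.sum(axis=0) + 1e-9)

# Toy data: 5 raters x 6 jokes, scored from 1 (not funny) to 7 (extremely funny).
# The last rater is the "target"; their last two jokes (set to 0) are unrated.
R = np.array([
    [7, 6, 2, 1, 5, 6],
    [6, 7, 1, 2, 6, 5],
    [2, 1, 7, 6, 2, 1],
    [1, 2, 6, 7, 1, 2],
    [7, 5, 2, 1, 0, 0],
], dtype=float)

predicted = predict_ratings(R, target=4, known=[0, 1, 2, 3])
print(predicted[[4, 5]])  # estimated funniness of the two unseen jokes
```

In this toy example, the first two raters have a taste similar to the target, so the predicted scores for the two unseen jokes come out high. Note that nothing in the code looks at the joke text itself - exactly the point of collaborative filtering as described above.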

The AI in the Harvard Business School study worked on a person-specific basis. To train a joke-generating AI that a broad audience would genuinely find amusing, one prerequisite would have to be met: humanity would have to agree on what is funny and what is not. But there is no general humor formula for the AI to learn. Even if an artificial intelligence restricted itself to a single cultural region, which jokes a person laughs at ultimately depends on their individual perspective.

The thing about ethics

Artificial intelligence and humor also raise ethical questions. “What some people find funny can hurt others,” says our machine translation engineer Andrada Pumnea. Humor is a highly sensitive topic, not least because some people find racist and sexist remarks funny. “And since artificial intelligence learns from examples in the form of data, the output naturally reflects that. Developers have a responsibility to account for possible discrimination and the reinforcement of prejudices.”

As is so often the case, AI needs humans in the loop to correct errors, adapt data sets and train models. With considerable effort, artificial intelligence can produce funny statements to a limited extent. But because humor is too subjective and complex to be captured in general, computer-friendly rules, the results will probably remain rough approximations for the time being. It is also questionable whether an AI with a sense of humor is even needed. “The technology is currently best suited to supporting people in clearly defined areas,” says Andrada.