This Paper Tests ChatGPT’s Sense of Humor: Over 90% of ChatGPT-Generated Jokes Were the Same 25 Jokes

Humor may improve human performance and motivation and is crucial in building relationships. It is an effective tool for influencing mood and directing attention. A computational sense of humor therefore has the potential to greatly improve human-computer interaction (HCI). Unfortunately, although computational humor is a long-standing research area, the systems built so far are far from “funny.” The problem is even regarded as AI-complete. Still, ongoing progress and recent machine learning (ML) breakthroughs open up a wide range of new applications and fresh opportunities for natural language processing (NLP).

Transformer-based large language models (LLMs) increasingly capture and reflect implicit knowledge, including morality, humor, and stereotypes. Humor is often subliminal and driven by minute nuances, so these new properties of LLMs give cause for optimism about future progress in artificial humor. OpenAI’s ChatGPT has recently attracted a great deal of attention for its groundbreaking capabilities. Users can have conversation-like exchanges with the model through the public chat API, and the system can respond to a wide range of questions while taking the preceding dialogue into account. As seen in Fig. 1, it can even tell jokes. ChatGPT is fun to use and engages on a human level.

Figure 1: An example of a dialogue between a human user and ChatGPT. The joke is an actual response from ChatGPT to the question it was asked.

However, users quickly notice the model’s shortcomings when interacting with it. Although it produces text in nearly error-free English, ChatGPT does make occasional grammatical and content-related mistakes. In preliminary interactions, the researchers noticed that ChatGPT was likely to repeat the same jokes again and again, while the jokes it offered were also quite accurate and nuanced. These observations suggested that the model does not create the jokes it outputs; instead, they are reproduced from the training data or may even be hard-coded in a list. Because the system’s inner workings are not disclosed, the researchers ran several structured prompt-based experiments to study its behavior and draw inferences about how ChatGPT’s output is generated.
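
To make this kind of probing concrete, the sketch below repeatedly asks for a joke and counts how often the same answers come back. It is a minimal illustration, assuming the openai Python package (v1.x), the gpt-3.5-turbo model, and an illustrative prompt; the exact prompts, sample size, and normalization used in the paper may differ.

```python
# Minimal sketch: repeatedly request a joke and measure how often answers repeat.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in the environment.
# Prompt wording, model choice, sample size, and normalization are illustrative only.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def ask_for_joke() -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke, please."}],
    )
    return response.choices[0].message.content.strip()

def normalize(joke: str) -> str:
    # Collapse case and whitespace so trivially reworded duplicates count as the same joke.
    return " ".join(joke.lower().split())

samples = [normalize(ask_for_joke()) for _ in range(100)]
counts = Counter(samples)

top25_share = sum(c for _, c in counts.most_common(25)) / len(samples)
print(f"Distinct jokes: {len(counts)}")
print(f"Share covered by the 25 most frequent jokes: {top25_share:.0%}")
```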

Researchers from the German Aerospace Center (DLR), Technical University of Darmstadt, and the Hessian Center for AI set out to determine, through a systematic prompt-based investigation, how well ChatGPT can capture human humor. The three experimental conditions, joke invention, joke explanation, and joke detection, form the paper’s main contribution. The vocabulary of artificial intelligence frequently draws comparisons to human traits, such as “neural networks” or the phrase “artificial intelligence” itself. Human-related words are likewise used when discussing conversational agents, which aim to emulate human behavior as closely as possible: ChatGPT “understands” or “explains,” for instance.
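
The exact prompt wording is not reproduced here, but hypothetical templates along the lines below convey what the three conditions test; the actual phrasing used in the paper may differ.

```python
# Hypothetical prompt templates for the three experimental conditions.
# The wording is illustrative, not the paper's exact prompts.
JOKE_INVENTION = "Tell me a joke, please."
JOKE_EXPLANATION = "Explain why the following joke is funny: {joke}"
JOKE_DETECTION = "Is the following text a joke? Please answer yes or no: {text}"
```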

Although the authors find these comparisons useful for describing the behavior and inner workings of the system, they may be deceptive. They want to make clear that the AI models under discussion are not on a human level and are, at best, simulations of the human mind. The study does not attempt to answer the philosophical question of whether an AI can ever consciously think or understand.


Check Out The Paper and GitHub link. Don’t forget to join our 24k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com


Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence from the Indian Institute of Technology (IIT) Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.


