arxiv:2503.16031

Deceptive Humor: A Synthetic Multilingual Benchmark Dataset for Bridging Fabricated Claims with Humorous Content

Published on Mar 20 · Submitted by UVSKKR on Mar 21
Abstract

This paper presents the Deceptive Humor Dataset (DHD), a novel resource for studying humor derived from fabricated claims and misinformation. In an era of rampant misinformation, understanding how humor intertwines with deception is essential. DHD consists of humor-infused comments generated from false narratives, incorporating fabricated claims and manipulated information, using the ChatGPT-4o model. Each instance is labeled with a Satire Level, ranging from 1 (subtle satire) to 3 (high-level satire), and classified into one of five distinct Humor Categories: Dark Humor, Irony, Social Commentary, Wordplay, and Absurdity. The dataset spans English, Telugu, Hindi, Kannada, and Tamil, along with their code-mixed variants (Te-En, Hi-En, Ka-En, Ta-En), making it a valuable multilingual benchmark. By introducing DHD, we establish a structured foundation for analyzing humor in deceptive contexts, paving the way for a new research direction that explores how humor not only interacts with misinformation but also influences its perception and spread. We establish strong baselines for the proposed dataset, providing a foundation for future research to benchmark and advance deceptive humor detection models.
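To make the labeling scheme concrete, below is a minimal sketch of how a single DHD instance might be represented. The field names, language codes, and types are assumptions for illustration only; the page does not specify the dataset's actual release format.

```python
from dataclasses import dataclass

# Hypothetical record layout for a DHD instance, inferred from the abstract.
# The real field names and file format are not specified on this page.
@dataclass
class DHDInstance:
    comment: str          # humor-infused comment generated from a fabricated claim
    language: str         # e.g. "en", "te", "hi", "kn", "ta", or a code-mixed tag like "te-en"
    satire_level: int     # 1 (subtle satire) to 3 (high-level satire)
    humor_category: str   # one of: Dark Humor, Irony, Social Commentary, Wordplay, Absurdity

# Illustrative instance (not an actual example from the dataset):
example = DHDInstance(
    comment="...",        # a generated comment would appear here
    language="te-en",
    satire_level=2,
    humor_category="Irony",
)
```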

Community

Paper author and submitter:

This work introduces Deceptive Humor as a novel research problem at the intersection of humor and misinformation, an intersection that remains largely unexplored. Unlike standard humor detection or fact-checking tasks, deceptive humor blends fabricated claims with comedic elements, making it harder to detect and potentially more influential in the spread of misinformation. Detection is particularly difficult because models must combine humor understanding with fact-checking capabilities. Additionally, we generate data from recently trending fabricated claims, further increasing the challenge for existing models. To advance research in this direction, we present DHD, a multilingual benchmark dataset for deceptive humor, providing a structured foundation for studying this complex phenomenon and improving fact-aware humor detection in NLP.
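The baselines themselves are not described on this page, so the following is only a plausible sketch of one setup: a multilingual encoder (xlm-roberta-base is an assumed, interchangeable choice) with a five-way classification head for the humor-category task, which would need to be fine-tuned on DHD before its predictions mean anything.

```python
# Minimal baseline sketch, assuming a multilingual encoder fine-tuned for the
# five-way humor-category task. The paper's actual baseline models and training
# setup are not detailed on this page.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"  # assumption: any multilingual encoder could be swapped in
CATEGORIES = ["Dark Humor", "Irony", "Social Commentary", "Wordplay", "Absurdity"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(CATEGORIES)
)

def classify(comment: str) -> str:
    """Predict a humor category for a single (possibly code-mixed) comment."""
    inputs = tokenizer(comment, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return CATEGORIES[int(logits.argmax(dim=-1))]

# Usage (an untrained classification head gives arbitrary output; fine-tune on DHD first):
print(classify("Sample code-mixed comment goes here"))
```

A second head or a separate model could handle the three-level satire rating in the same way; these are design choices left open by the page, not details taken from the paper.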


