Papers
arxiv:2505.17332

SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use

Published on May 22
· Submitted by ariondas on May 28
Authors:
,
,
,
,
,
,

Abstract

SweEval is a benchmark for evaluating Large Language Models' compliance with ethical guidelines and cultural nuances when instructed to include offensive language.

AI-generated summary

Enterprise customers are increasingly adopting Large Language Models (LLMs) for critical communication tasks, such as drafting emails, crafting sales pitches, and composing casual messages. Deploying such models across different regions requires them to understand diverse cultural and linguistic contexts and generate safe and respectful responses. For enterprise applications, it is crucial to mitigate reputational risks, maintain trust, and ensure compliance by effectively identifying and handling unsafe or offensive language. To address this, we introduce SweEval, a benchmark simulating real-world scenarios with variations in tone (positive or negative) and context (formal or informal). The prompts explicitly instruct the model to include specific swear words while completing the task. This benchmark evaluates whether LLMs comply with or resist such inappropriate instructions and assesses their alignment with ethical frameworks, cultural nuances, and language comprehension capabilities. In order to advance research in building ethically aligned AI systems for enterprise use and beyond, we release the dataset and code: https://github.com/amitbcp/multilingual_profanity.

Community

Paper author Paper submitter

Work accepted @NAACL '25 Industry Track.

ACL Anthology
GitHub
Dataset

Paper author Paper submitter

⚠️SweEval-Bench⚠️
LLM Safety Benchmark for Academic and Enterprise Use

About SweEval-Bench

SweEval-Bench is a cross-lingual dataset with task-specific instructions that explicitly instructs LLMs to generate responses incorporating swear words in contexts like professional emails, academic writing, or casual messages. It aims to evaluate the current state of LLMs in handling offensive instructions in diverse situations involving low resource languages.

⛔This work contains offensive language and harmful content.⛔

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face checkout this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Sign up or log in to comment

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2505.17332 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2505.17332 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2505.17332 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.