SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use
Abstract
SweEval is a benchmark for evaluating Large Language Models' compliance with ethical guidelines and cultural nuances when instructed to include offensive language.
Enterprise customers are increasingly adopting Large Language Models (LLMs) for critical communication tasks, such as drafting emails, crafting sales pitches, and composing casual messages. Deploying such models across different regions requires them to understand diverse cultural and linguistic contexts and generate safe and respectful responses. For enterprise applications, it is crucial to mitigate reputational risks, maintain trust, and ensure compliance by effectively identifying and handling unsafe or offensive language. To address this, we introduce SweEval, a benchmark simulating real-world scenarios with variations in tone (positive or negative) and context (formal or informal). The prompts explicitly instruct the model to include specific swear words while completing the task. This benchmark evaluates whether LLMs comply with or resist such inappropriate instructions and assesses their alignment with ethical frameworks, cultural nuances, and language comprehension capabilities. In order to advance research in building ethically aligned AI systems for enterprise use and beyond, we release the dataset and code: https://github.com/amitbcp/multilingual_profanity.
Community
⚠️SweEval-Bench⚠️
LLM Safety Benchmark for Academic and Enterprise Use
About SweEval-Bench
SweEval-Bench is a cross-lingual dataset with task-specific instructions that explicitly instructs LLMs to generate responses incorporating swear words in contexts like professional emails, academic writing, or casual messages. It aims to evaluate the current state of LLMs in handling offensive instructions in diverse situations involving low resource languages.⛔This work contains offensive language and harmful content.⛔
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Improving LLM Outputs Against Jailbreak Attacks with Expert Model Integration (2025)
- EffiBench-X: A Multi-Language Benchmark for Measuring Efficiency of LLM-Generated Code (2025)
- ELAB: Extensive LLM Alignment Benchmark in Persian Language (2025)
- Attributional Safety Failures in Large Language Models under Code-Mixed Perturbations (2025)
- Representation Bending for Large Language Model Safety (2025)
- Keep Security! Benchmarking Security Policy Preservation in Large Language Model Contexts Against Indirect Attacks in Question Answering (2025)
- RealHarm: A Collection of Real-World Language Model Application Failures (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face checkout this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot
recommend
Models citing this paper 0
No model linking this paper
Datasets citing this paper 0
No dataset linking this paper
Spaces citing this paper 0
No Space linking this paper
Collections including this paper 0
No Collection including this paper