PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation
Abstract
Hallucinated outputs from language models pose risks in the medical domain, especially for lay audiences making health-related decisions. Existing factuality evaluation methods, such as entailment- and question-answering (QA)-based metrics, struggle with plain language summary (PLS) generation due to the elaborative explanation phenomenon, which introduces external content (e.g., definitions, background, examples) absent from the source document to enhance comprehension. To address this, we introduce PlainQAFact, a framework trained on a fine-grained, human-annotated dataset, PlainFact, to evaluate the factuality of both source-simplified and elaboratively explained sentences. PlainQAFact first classifies the factuality type and then assesses factuality using a retrieval-augmented QA-based scoring method. Our approach is lightweight and computationally efficient. Empirical results show that existing factuality metrics fail to effectively evaluate factuality in PLS, especially for elaborative explanations, whereas PlainQAFact achieves state-of-the-art performance. We further analyze its effectiveness across external knowledge sources, answer extraction strategies, overlap measures, and document granularity levels, refining its overall factuality assessment.
Community
PlainQAFact is a retrieval-augmented, question-answering (QA)-based framework for evaluating the factuality of biomedical plain language summarization. PlainFact is a high-quality, human-annotated dataset with fine-grained explanation (i.e., added information) annotations.
To use our proposed metric, simply install it via pip install plainqafact.
For more details on how to use our evaluation framework and the benchmark, please refer to the GitHub repo: https://github.com/zhiwenyou103/PlainQAFact
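To make the two-stage design from the abstract concrete, here is a minimal conceptual sketch in Python: each summary sentence is first classified as source-simplified or elaborative, then scored against either the source document or external knowledge. This is not the released implementation; the function names and the toy token-overlap heuristics are placeholders for illustration only, so please refer to the GitHub repo above for the actual API.

```python
# Conceptual sketch of a PlainQAFact-style pipeline -- NOT the actual implementation.
# Stage 1: classify each summary sentence (simplification vs. elaboration).
# Stage 2: score it against the source or external knowledge (stand-in for QA scoring).

from typing import List

def classify_factuality_type(sentence: str, source: str) -> str:
    """Stub classifier: call a sentence a 'simplification' if it overlaps the source,
    otherwise treat it as an 'elaboration' (externally added content)."""
    overlap = set(sentence.lower().split()) & set(source.lower().split())
    return "simplification" if len(overlap) >= 3 else "elaboration"

def qa_based_score(sentence: str, evidence: str) -> float:
    """Stub scorer: token overlap as a crude stand-in for retrieval-augmented QA scoring."""
    sent_tokens = set(sentence.lower().split())
    evidence_tokens = set(evidence.lower().split())
    return len(sent_tokens & evidence_tokens) / max(len(sent_tokens), 1)

def evaluate_summary(summary_sentences: List[str], source: str, knowledge_base: str) -> float:
    """Score simplifications against the source and elaborations against external
    knowledge, then average into an overall factuality score."""
    scores = []
    for sentence in summary_sentences:
        label = classify_factuality_type(sentence, source)
        evidence = source if label == "simplification" else knowledge_base
        scores.append(qa_based_score(sentence, evidence))
    return sum(scores) / len(scores)

source_doc = "Metformin lowers blood glucose mainly by reducing hepatic glucose production."
knowledge = "Metformin is a first-line oral medication for type 2 diabetes."
summary = [
    "Metformin lowers blood glucose by reducing glucose production in the liver.",
    "Metformin is a common medicine for type 2 diabetes.",
]

print(evaluate_summary(summary, source_doc, knowledge))
```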
PlainFact is available on Hugging Face now!
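PlainFact can be pulled with the Hugging Face datasets library. The repository identifier below is a placeholder; check the Hub (or the GitHub README above) for the exact dataset id and available configurations.

```python
# Load PlainFact from the Hugging Face Hub.
# NOTE: "<org>/PlainFact" is a placeholder repo id -- replace it with the actual
# identifier listed on the Hub or in the GitHub README.
from datasets import load_dataset

plainfact = load_dataset("<org>/PlainFact")
print(plainfact)  # inspect splits and annotation fields
```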
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MedBioLM: Optimizing Medical and Biological QA with Fine-Tuned Large Language Models and Retrieval-Augmented Generation (2025)
- MeDiSumQA: Patient-Oriented Question-Answer Generation from Discharge Letters (2025)
- Benchmarking Multimodal RAG through a Chart-based Document Question-Answering Generation Framework (2025)
- MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models (2025)
- Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning (2025)
- FIND: Fine-grained Information Density Guided Adaptive Retrieval-Augmented Generation for Disease Diagnosis (2025)
- Structured Outputs Enable General-Purpose LLMs to be Medical Experts (2025)