arxiv:2504.02810

Generative Evaluation of Complex Reasoning in Large Language Models

Published on Apr 3 · Submitted by pkuHaowei on Apr 9

Abstract

With powerful large language models (LLMs) demonstrating superhuman reasoning capabilities, a critical question arises: Do LLMs genuinely reason, or do they merely recall answers from their extensive, web-scraped training datasets? Publicly released benchmarks inevitably become contaminated once incorporated into subsequent LLM training sets, undermining their reliability as faithful assessments. To address this, we introduce KUMO, a generative evaluation framework designed specifically for assessing reasoning in LLMs. KUMO synergistically combines LLMs with symbolic engines to dynamically produce diverse, multi-turn reasoning tasks that are partially observable and adjustable in difficulty. Through an automated pipeline, KUMO continuously generates novel tasks across open-ended domains, compelling models to demonstrate genuine generalization rather than memorization. We evaluated 23 state-of-the-art LLMs on 5,000 tasks across 100 domains created by KUMO, benchmarking their reasoning abilities against university students. Our findings reveal that many LLMs exceed university-level performance on easy reasoning tasks, while reasoning-scaled LLMs reach university-level performance on complex reasoning challenges. Moreover, LLM performance on KUMO tasks correlates strongly with results on newly released real-world reasoning benchmarks, underscoring KUMO's value as a robust, enduring assessment tool for genuine LLM reasoning capabilities.
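
To make the setup in the abstract concrete (a generator produces partially observable, multi-turn tasks whose difficulty can be dialed up or down, and the model is scored on whether it deduces the hidden answer), here is a minimal sketch. All names (`Task`, `generate_task`, `evaluate`) are illustrative and the toy clue structure stands in for the real symbolic engine; this is not KUMO's actual API.

```python
# Minimal sketch of a generative, multi-turn evaluation loop in the spirit of
# the abstract. All names are illustrative, not KUMO's actual interface.
import random
from dataclasses import dataclass


@dataclass
class Task:
    """A partially observable deduction task produced by a task generator."""
    candidates: list[str]        # possible hidden answers shown to the model
    answer: str                  # ground-truth answer, hidden from the model
    clues: dict[str, str]        # action name -> observation revealed when taken
    max_turns: int = 5           # difficulty knob: fewer turns = harder


def generate_task(domain: str, difficulty: int, rng: random.Random) -> Task:
    """Toy generator: in KUMO an LLM proposes domain content and a symbolic
    engine guarantees solvability; here we just scale the candidate set."""
    candidates = [f"{domain}-option-{i}" for i in range(difficulty + 2)]
    answer = rng.choice(candidates)
    clues = {f"test-{i}": ("consistent" if c == answer else "inconsistent")
             for i, c in enumerate(candidates)}
    return Task(candidates, answer, clues, max_turns=max(2, difficulty))


def evaluate(model_step, task: Task) -> bool:
    """One multi-turn episode: the model either requests a clue ("test-i") or
    commits to a final answer ("answer:<candidate>"); correctness is binary."""
    history = []
    for _ in range(task.max_turns):
        move = model_step(task.candidates, history)
        if move.startswith("answer:"):
            return move.removeprefix("answer:") == task.answer
        history.append((move, task.clues.get(move, "unknown")))
    return False
```

Because tasks are sampled fresh for every evaluation run, a model cannot have seen them during pre-training, which is the contamination argument the abstract makes.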

Community

Paper author · Paper submitter

Excited to present KUMO, a generative evaluation benchmark for LLMs. Unlike static benchmarks, KUMO dynamically generates diverse, multi-turn reasoning tasks with controllable difficulty—avoiding data leakage and ensuring trustworthy evaluation.

📄 Paper: https://arxiv.org/pdf/2504.02810

Why KUMO?
✅ 95%+ correlation with SOTA reasoning benchmarks: synthetic but realistic! (See the correlation sketch after this list.)
✅ Avoids test-set contamination (no risk of pre-training data leaks).
✅ Controllable difficulty & domain diversity for fine-grained evaluation.
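
One way to read the 95%+ correlation claim: collect each model's accuracy on KUMO and on an external reasoning benchmark, then correlate the two score vectors across models. The snippet below is a hedged illustration with placeholder numbers, not the paper's data or code.

```python
# Hedged illustration of the benchmark-correlation computation: per-model
# accuracies on KUMO vs. an external reasoning benchmark. Scores below are
# placeholders, not the paper's results.
from scipy.stats import pearsonr, spearmanr

kumo_scores = {"model-a": 0.62, "model-b": 0.71, "model-c": 0.55, "model-d": 0.48}
external_scores = {"model-a": 0.58, "model-b": 0.69, "model-c": 0.51, "model-d": 0.44}

models = sorted(kumo_scores)
x = [kumo_scores[m] for m in models]
y = [external_scores[m] for m in models]

print("Pearson r:", pearsonr(x, y)[0])      # linear agreement of scores
print("Spearman rho:", spearmanr(x, y)[0])  # agreement of model rankings
```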

Key Findings:
1️⃣ Simple vs. Complex Reasoning: LLMs outperform undergrads on easy tasks, but only deep-thinking models match humans on hard problems.
2️⃣ Universal Difficulty Metric: KUMO can standardize difficulty across benchmarks (LiveBench-Reason ≈ KUMO-Hard).
3️⃣ Domain Matters! Model performance varies widely across fields (medical, gaming, etc.)—knowledge structure is key.
4️⃣ Generalization Challenge: Fine-tuning on expert trajectories fails when KUMO’s tasks evolve, demanding strong OOD/domain/difficulty generalization.

🌐 Beyond KUMO: Generative evaluation is the future! Our earlier work on agent evaluation (https://arxiv.org/pdf/2310.08367) also shows how dynamic benchmarks can transform evaluation into a science.

💡 Join Us! KUMO is open-source with RL-friendly reward signals.
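
Since the post mentions RL-friendly reward signals, here is a hedged sketch of how a KUMO-style task could be exposed through a gym-like interface with a sparse terminal reward. The class and method names are illustrative and reuse the toy `Task` from the sketch above; this is not the released codebase's API.

```python
# Hedged sketch of an "RL-friendly" reward signal for a KUMO-style task,
# reusing the toy Task from the earlier sketch. The gym-like interface is
# illustrative, not the released repository's API.
class KumoLikeEnv:
    def __init__(self, task):
        self.task = task
        self.turns = 0

    def reset(self):
        self.turns = 0
        return {"candidates": self.task.candidates, "observations": []}

    def step(self, action: str):
        """Actions are clue requests ("test-i") or a final "answer:<candidate>".
        Reward is sparse: 1.0 for a correct final answer, 0.0 otherwise."""
        self.turns += 1
        if action.startswith("answer:"):
            correct = action.removeprefix("answer:") == self.task.answer
            return None, (1.0 if correct else 0.0), True, {}
        done = self.turns >= self.task.max_turns
        return self.task.clues.get(action, "unknown"), 0.0, done, {}
```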

Fantastic benchmark!
