Abstract
Pretrained language models can be adapted to operate in sentence space, reason over structured semantic units, and achieve performance competitive with Chain-of-Thought while reducing inference-time FLOPs.
Autoregressive language models (LMs) generate one token at a time, yet human reasoning operates over higher-level abstractions: sentences, propositions, and concepts. This contrast raises a central question: can LMs likewise learn to reason over structured semantic units rather than raw token sequences? In this work, we investigate whether pretrained LMs can be lifted into such abstract reasoning spaces by building on their learned representations. We present a framework that adapts a pretrained token-level LM to operate in sentence space by autoregressively predicting continuous embeddings of next sentences. We explore two embedding paradigms inspired by classical representation learning: (1) semantic embeddings, learned via autoencoding to preserve surface meaning; and (2) contextual embeddings, trained via next-sentence prediction to encode anticipatory structure. We evaluate both under two inference regimes: Discretized, which decodes each predicted embedding into text before re-encoding; and Continuous, which reasons entirely in embedding space for improved efficiency. Across four domains (mathematics, logic, commonsense, and planning), contextual embeddings under continuous inference perform competitively with Chain-of-Thought (CoT) while reducing inference-time FLOPs by roughly half on average. We also present early signs of scalability and modular adaptation. Finally, to visualize latent trajectories, we introduce SentenceLens, a diagnostic tool that decodes intermediate model states into interpretable sentences. Together, our results indicate that pretrained LMs can effectively transition to abstract, structured reasoning within latent embedding spaces.
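To make the contrast between the two inference regimes concrete, here is a minimal sketch of the control flow they imply. This is an illustration under assumptions, not the paper's implementation: `EmbeddingLM`, `encoder`, `decoder`, and all shapes and hyperparameters are hypothetical stand-ins.

```python
# Hypothetical sketch of the two inference regimes described in the abstract.
# Module names, shapes, and step counts are assumptions, not the paper's code.
import torch
import torch.nn as nn


class EmbeddingLM(nn.Module):
    """Stand-in for a token-level LM adapted to predict the embedding of the
    next sentence from the embeddings of preceding sentences (assumed)."""

    def __init__(self, d_model: int = 768, n_layers: int = 4, n_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, d_model)

    def forward(self, sent_embs: torch.Tensor) -> torch.Tensor:
        # sent_embs: (batch, n_sentences, d_model) -> predicted next embedding
        h = self.backbone(sent_embs)
        return self.head(h[:, -1])


def continuous_inference(model, encoder, decoder, prompt_sents, n_steps=8):
    """Continuous regime: reason entirely in embedding space and decode
    text only once, at the end (hence the FLOP savings)."""
    embs = [encoder(s) for s in prompt_sents]           # encode prompt once
    for _ in range(n_steps):
        nxt = model(torch.stack(embs).unsqueeze(0))[0]  # predict next embedding
        embs.append(nxt)                                # feed it back undecoded
    return decoder(embs[-1])                            # decode final answer only


def discretized_inference(model, encoder, decoder, prompt_sents, n_steps=8):
    """Discretized regime: decode every predicted embedding to text and
    re-encode it before the next step (costlier, but each step is text)."""
    sents = list(prompt_sents)
    for _ in range(n_steps):
        embs = torch.stack([encoder(s) for s in sents]).unsqueeze(0)
        sents.append(decoder(model(embs)[0]))
    return sents[-1]
```

In this reading, the two regimes share the same sentence-level predictor and differ only in whether each intermediate embedding passes through the text bottleneck; the continuous loop skips the per-step decode/re-encode round trip entirely.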
Community
In this work, we propose a framework that enables language models to reason at a higher level of abstraction (i.e., the sentence level) in a latent yet interpretable manner.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Pretraining Language Models to Ponder in Continuous Space (2025)
- Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains (2025)
- Reasoning Beyond Language: A Comprehensive Survey on Latent Chain-of-Thought Reasoning (2025)
- System-1.5 Reasoning: Traversal in Language and Latent Spaces with Dynamic Shortcuts (2025)
- Learning Interpretable Representations Leads to Semantically Faithful EEG-to-Text Generation (2025)
- When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners (2025)
- ThinkLess: A Training-Free Inference-Efficient Method for Reducing Reasoning Redundancy (2025)