ReGuLaR: Variational Latent Reasoning Guided by Rendered Chain-of-Thought
Abstract
ReGuLaR introduces a variational auto-encoding framework that compresses reasoning processes into latent space while maintaining performance, using image-rendered explicit reasoning chains as compression guidance.
While Chain-of-Thought (CoT) significantly enhances the performance of Large Language Models (LLMs), explicit reasoning chains introduce substantial computational redundancy. Recent latent reasoning methods attempt to mitigate this by compressing reasoning processes into latent space, but often suffer from severe performance degradation due to the lack of appropriate compression guidance. In this study, we propose Rendered CoT-Guided variational Latent Reasoning (ReGuLaR), a simple yet novel latent learning paradigm that resolves this issue. Fundamentally, we formulate latent reasoning within the Variational Auto-Encoding (VAE) framework, sampling the current latent reasoning state from the posterior distribution conditioned on previous ones. Specifically, when learning this variational latent reasoning model, we render explicit reasoning chains as images, from which we extract dense visual-semantic representations to regularize the posterior distribution, thereby achieving efficient compression with minimal information loss. Extensive experiments demonstrate that ReGuLaR significantly outperforms existing latent reasoning methods in both computational efficiency and reasoning effectiveness, and even surpasses CoT through multi-modal reasoning, providing a new and insightful solution to latent reasoning. Code: https://github.com/FanmengWang/ReGuLaR.
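To make the idea concrete, below is a minimal sketch of a single variational latent reasoning step as described in the abstract: a posterior over the current latent state is conditioned on the previous states and regularized toward a dense visual embedding of the rendered CoT. All module names, dimensions, and the exact form of the regularizer are illustrative assumptions, not taken from the official ReGuLaR codebase linked above.

```python
# Illustrative sketch only: a Gaussian posterior over the current latent
# reasoning state, regularized (via a closed-form KL term) toward a
# projection of the rendered-CoT image features. Shapes, layers, and loss
# weighting are assumptions for clarity, not the authors' implementation.
import torch
import torch.nn as nn


class LatentReasoningStep(nn.Module):
    """One variational reasoning step: sample z_t from a posterior
    conditioned on previous latent states, and pull that posterior
    toward a visual-semantic target extracted from the rendered CoT."""

    def __init__(self, d_model: int = 768, d_latent: int = 64):
        super().__init__()
        # Posterior q(z_t | z_{<t}): predicts mean and log-variance.
        self.posterior = nn.Linear(d_model, 2 * d_latent)
        # Projects the rendered-CoT visual feature into latent space,
        # giving the target that regularizes the posterior.
        self.visual_proj = nn.Linear(d_model, d_latent)

    def forward(self, prev_state: torch.Tensor, visual_feat: torch.Tensor):
        # prev_state:  (batch, d_model) summary of previous latent states z_{<t}
        # visual_feat: (batch, d_model) embedding of the rendered CoT image
        mu, logvar = self.posterior(prev_state).chunk(2, dim=-1)

        # Reparameterization trick: z_t = mu + sigma * eps.
        z_t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

        # KL( N(mu, sigma^2) || N(target, I) ), closed form for Gaussians,
        # encouraging the latent state to stay close to the visual target.
        target = self.visual_proj(visual_feat)
        kl = 0.5 * (torch.exp(logvar) + (mu - target) ** 2 - 1.0 - logvar)
        return z_t, kl.sum(dim=-1).mean()


if __name__ == "__main__":
    step = LatentReasoningStep()
    prev = torch.randn(2, 768)      # stand-in for the running latent state
    vis = torch.randn(2, 768)       # stand-in for rendered-CoT image features
    z, kl_loss = step(prev, vis)
    print(z.shape, kl_loss.item())  # torch.Size([2, 64]) and a scalar loss
```

In a full model this step would be applied autoregressively across reasoning positions, with the sampled z_t fed back into the language model in place of explicit CoT tokens; the KL term here is one plausible way to realize the posterior regularization the abstract describes.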
Community
Introduces ReGuLaR, a variational latent reasoning framework that renders reasoning as images to regularize posterior inference, achieving efficient multimodal reasoning beyond traditional chain of thought.
The following similar papers were recommended by the Semantic Scholar API (automated suggestions from Librarian Bot):
- Render-of-Thought: Rendering Textual Chain-of-Thought as Images for Visual Latent Reasoning (2026)
- Latent Chain-of-Thought as Planning: Decoupling Reasoning from Verbalization (2026)
- Forest Before Trees: Latent Superposition for Efficient Visual Reasoning (2026)
- Chain-of-Thought Compression Should Not Be Blind: V-Skip for Efficient Multimodal Reasoning via Dual-Path Anchoring (2026)
- Interleaved Latent Visual Reasoning with Selective Perceptual Modeling (2025)
- LaViT: Aligning Latent Visual Thoughts for Multi-modal Reasoning (2026)
- Sketch-in-Latents: Eliciting Unified Reasoning in MLLMs (2025)