Efficient Reasoning with Balanced Thinking
arXiv:2603.12372v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) have shown remarkable reasoning capabilities, yet they often suffer from overthinking, expending redundant computational steps on simple problems, or underthinking, failing to explore sufficient reasoning paths despite inherent capabilities. These issues lead to inefficiencies and potential inaccuracies, limiting practical deployment in resource-constrained settings. Existing methods to mitigate overthinking, such as suppressing reflective keywords or adjusting reasoning length, may inadvertently induce underthinking, compromising accuracy. Therefore, we propose ReBalance, a training-free framework that achieves efficient reasoning with balanced thinking. ReBalance leverages confidence as a continuous indicator of reasoning dynamics, identifying overthinking through high confidence variance and underthinking via consistent overconfidence. By aggregating hidden states from a small-scale dataset into reasoning mode prototypes, we compute a steering vector to guide LRMs' reasoning trajectories. A dynamic control function modulates this vector's strength and direction based on real-time confidence, pruning redundancy during overthinking and promoting exploration during underthinking. Extensive experiments conducted on four models ranging from 0.5B to 32B, and across nine benchmarks in math reasoning, general question answering, and coding tasks, demonstrate that ReBalance effectively reduces output redundancy while improving accuracy, offering a general, training-free, and plug-and-play strategy for efficient and robust LRM deployment. Code is available at https://github.com/yu-lin-li/ReBalance.
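The prototype-and-steering idea described in the abstract can be sketched as follows. This is a minimal illustration under assumed conventions (the function names, the mean-based aggregation, and the difference-of-prototypes construction are illustrative assumptions; the paper's exact formulation may differ):

```python
import numpy as np

def steering_vector(efficient_states, redundant_states):
    """Aggregate hidden states from each reasoning mode into prototypes
    and take their difference as a steering direction.

    efficient_states, redundant_states: arrays of shape (n_samples, d_model),
    hidden states collected from a small-scale dataset (assumed setup)."""
    proto_eff = np.mean(efficient_states, axis=0)  # prototype: balanced/concise mode
    proto_red = np.mean(redundant_states, axis=0)  # prototype: redundant/overthinking mode
    v = proto_eff - proto_red                      # direction from redundant toward efficient
    return v / np.linalg.norm(v)                   # unit-normalize so a scalar sets strength

def steer(hidden_state, v, strength):
    """Shift a hidden state along the steering direction. Positive strength
    pushes toward pruning redundancy; negative strength reverses the
    direction to promote exploration."""
    return hidden_state + strength * v
```

Unit-normalizing the vector keeps the intervention magnitude controlled by a single scalar, which is what lets a runtime control signal modulate it cleanly.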
Executive Summary
The article proposes ReBalance, a training-free framework that achieves efficient reasoning with balanced thinking in Large Reasoning Models (LRMs). ReBalance leverages confidence as a continuous indicator of reasoning dynamics to identify overthinking and underthinking, and uses a steering vector to guide LRMs' reasoning trajectories. Extensive experiments demonstrate that ReBalance effectively reduces output redundancy while improving accuracy, offering a general strategy for efficient and robust LRM deployment. The framework has the potential to improve the practical deployment of LRMs in resource-constrained settings.
Key Points
- ▸ ReBalance is a training-free framework for efficient reasoning with balanced thinking
- ▸ It leverages confidence as a continuous indicator of reasoning dynamics
- ▸ The framework computes a steering vector from reasoning mode prototypes and modulates its strength and direction with a dynamic control function to guide LRMs' reasoning trajectories
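The dynamic control described above maps real-time confidence to a steering strength: high confidence variance signals overthinking, while consistent overconfidence signals underthinking. A toy sketch of such a control function is below; the thresholds and the piecewise form are illustrative assumptions, not the paper's actual function:

```python
import numpy as np

def control_strength(token_confidences, var_threshold=0.05,
                     overconf_threshold=0.9, alpha=1.0):
    """Map a window of recent token confidences to a steering strength.
    Thresholds and the piecewise rule are hypothetical placeholders."""
    c = np.asarray(token_confidences, dtype=float)
    if c.var() > var_threshold:        # high confidence variance -> overthinking
        return alpha                   # steer toward pruning redundancy
    if c.min() > overconf_threshold:   # consistent overconfidence -> underthinking
        return -alpha                  # reverse direction to promote exploration
    return 0.0                         # balanced reasoning: leave trajectory unsteered
```

Returning a signed scalar lets the same steering vector serve both failure modes: its sign selects the direction (prune vs. explore) and its magnitude the intervention strength.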
Merits
Efficient Reasoning
ReBalance achieves efficient reasoning by reducing output redundancy and improving accuracy
Training-Free
The framework does not require additional training, making it a plug-and-play strategy for LRM deployment
Generalizability
ReBalance is applicable to various LRM models and tasks, including math reasoning, general question answering, and coding tasks
Demerits
Complexity
Monitoring confidence in real time and applying steering vectors during generation adds machinery to the inference pipeline, which may complicate deployment compared with plain decoding
Scalability
The article evaluates models only up to 32B; the effectiveness of ReBalance on substantially larger models or broader task distributions is not fully explored
Expert Commentary
The article presents a significant contribution to the development of efficient and robust LRM systems. ReBalance's ability to leverage confidence as a continuous indicator of reasoning dynamics and guide LRMs' reasoning trajectories is a novel approach that addresses the limitations of existing methods. The framework's training-free nature and generalizability to various LRM models and tasks make it a promising strategy for practical deployment. However, further research is needed to fully explore the scalability and complexity of ReBalance, as well as its implications for explainability and robustness.
Recommendations
- ✓ Further research should be conducted to explore the scalability and complexity of ReBalance
- ✓ The framework should be tested in real-world applications to evaluate its practical effectiveness and robustness