EmoLLM: Appraisal-Grounded Cognitive-Emotional Co-Reasoning in Large Language Models
arXiv:2603.16553v1 Announce Type: new Abstract: Large language models (LLMs) demonstrate strong cognitive intelligence (IQ), yet many real-world interactions also require emotional intelligence (EQ) to produce responses that are both factually reliable and emotionally appropriate. In settings such as emotional support, technical assistance, and consultation, effective dialogue depends on how situations are appraised with respect to the user's needs, goals, and coping capacity. Inspired by appraisal theory, we propose EmoLLM, an appraisal-grounded framework for IQ/EQ co-reasoning in dialogue. EmoLLM uses an explicit Appraisal Reasoning Graph (ARG) to structure intermediate reasoning over contextual facts, inferred user needs, appraisal dimensions, emotional states, and response strategies before generating a reply. We train EmoLLM in a multi-turn role-play environment with reinforcement learning, where reverse-perspective reasoning provides reward signals based on predicted user-side consequences of responses. Across diverse dialogue settings, EmoLLM improves emotional state outcomes and response quality over strong baselines while preserving strong factual reliability.
Executive Summary
The paper introduces EmoLLM, a framework for integrating cognitive and emotional intelligence in large language models. EmoLLM uses an Appraisal Reasoning Graph to structure intermediate reasoning over contextual facts, inferred user needs, appraisal dimensions, and emotional states before a reply is generated. Trained with reinforcement learning in a multi-turn role-play environment, EmoLLM improves emotional state outcomes and response quality across diverse dialogue settings while maintaining factual reliability.
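To make the Appraisal Reasoning Graph idea concrete, here is a minimal sketch of what such a structure might look like. The node kinds follow the five stages named in the abstract (facts, needs, appraisals, emotional states, strategies), but the schema, field names, and example content are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ARGNode:
    # kind is one of the reasoning stages named in the abstract:
    # "fact", "need", "appraisal", "emotion", or "strategy"
    kind: str
    content: str
    # indices of downstream nodes this node supports (hypothetical edge scheme)
    supports: list = field(default_factory=list)

def build_arg():
    """Build a toy ARG for a single technical-assistance turn."""
    return [
        ARGNode("fact", "User's laptop failed the night before a deadline", [1]),
        ARGNode("need", "Recover files quickly; regain a sense of control", [2]),
        ARGNode("appraisal", "High goal relevance, low perceived coping capacity", [3]),
        ARGNode("emotion", "Anxiety and frustration", [4]),
        ARGNode("strategy", "Validate the stress, then give concrete recovery steps", []),
    ]

def reasoning_chain(nodes):
    """Walk the graph from facts to strategy, collecting each reasoning step."""
    idx, chain = 0, []
    while True:
        node = nodes[idx]
        chain.append(f"{node.kind}: {node.content}")
        if not node.supports:
            return chain
        idx = node.supports[0]

for step in reasoning_chain(build_arg()):
    print(step)
```

The point of the sketch is the ordering constraint: the strategy node is reached only after the model has committed to explicit fact, need, appraisal, and emotion nodes, which is what makes the intermediate reasoning inspectable.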
Key Points
- ▸ EmoLLM integrates cognitive and emotional intelligence in large language models
- ▸ Appraisal Reasoning Graph structures intermediate reasoning for more effective dialogue
- ▸ Reinforcement learning with reverse-perspective reasoning enhances response quality and emotional state outcomes
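The reverse-perspective reward idea in the last bullet can be sketched as follows: a user-side model predicts the emotional and task consequences of a candidate reply, and those predictions are combined into a scalar reward. The keyword heuristic standing in for the learned user simulator, the two-component score, and the weights are all assumptions for illustration; the paper's actual reward design is not given in this summary.

```python
def predicted_user_state(reply: str) -> dict:
    """Stand-in for a learned user-simulator (role-played user perspective).
    Here a crude keyword heuristic; in the real system this would be a model."""
    text = reply.lower()
    supportive = any(w in text for w in ("understand", "sorry", "help"))
    concrete = any(w in text for w in ("step", "try", "first"))
    return {
        "distress_drop": 0.6 if supportive else 0.1,   # predicted emotional relief
        "problem_progress": 0.7 if concrete else 0.2,  # predicted task progress
    }

def reward(reply: str, w_emotion: float = 0.5, w_task: float = 0.5) -> float:
    """Combine predicted user-side consequences into a scalar RL reward."""
    state = predicted_user_state(reply)
    return w_emotion * state["distress_drop"] + w_task * state["problem_progress"]

# A reply that both validates the user and offers a concrete next step
# should outscore a terse, purely task-oriented one.
good = "I understand how stressful this is. First, try booting into recovery mode."
flat = "Reinstall the OS."
assert reward(good) > reward(flat)
```

The design choice this illustrates is that the reward is computed from the user's predicted state rather than from surface features of the assistant's reply, which is what "reverse-perspective" refers to in the abstract.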
Merits
Comprehensive Framework
EmoLLM provides a structured approach to co-reasoning, considering multiple factors for more effective and emotionally appropriate responses.
Demerits
Complexity and Scalability
Constructing an explicit Appraisal Reasoning Graph and training with multi-turn reinforcement learning add overhead at both training and inference time, which could limit scalability and complicate real-world deployment.
Expert Commentary
The EmoLLM framework represents a significant step forward in integrating cognitive and emotional intelligence in large language models. By explicitly considering appraisal dimensions and emotional states, EmoLLM can generate more empathetic and supportive responses. However, the complexity of the framework and the need for high-quality training data may pose challenges for widespread adoption. Further research is needed to explore the applications and limitations of EmoLLM in real-world settings.
Recommendations
- ✓ Further evaluation of EmoLLM in diverse dialogue settings and applications
- ✓ Investigation into the potential for EmoLLM to be adapted for use in multilingual and multicultural contexts