Speech emotions play a crucial role in human-computer interaction, shaping engagement and context-aware communication. Despite recent advances in spoken dialogue systems, a holistic framework for evaluating emotional reasoning is still lacking. To address this, we introduce EMO-Reasoning, a benchmark for assessing emotional coherence in dialogue systems. It leverages a curated dataset generated via text-to-speech to simulate diverse emotional states, overcoming the scarcity of emotional speech data. We further propose the Cross-turn Emotion Reasoning Score to assess emotion transitions in multi-turn dialogues. Evaluating seven dialogue systems through continuous, categorical, and perceptual metrics, we show that our framework effectively detects emotional inconsistencies, providing insights for improving current dialogue systems. By releasing a systematic evaluation benchmark, we aim to advance emotion-aware spoken dialogue modeling toward more natural and adaptive interactions.
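As a rough illustration of what cross-turn emotion scoring can look like, the sketch below compares how much the user's and the system's emotions shift between consecutive turns, using valence-arousal pairs. This is a minimal, hypothetical example only: the representation, the function names, and the scoring rule are assumptions for illustration and are not the paper's Cross-turn Emotion Reasoning Score definition.

```python
# Hypothetical sketch of a cross-turn emotion coherence score.
# Emotions are (valence, arousal) pairs in [-1, 1]; the scoring rule is an
# illustrative assumption, not the metric proposed in the paper.
from dataclasses import dataclass
from math import hypot

@dataclass
class Turn:
    user_emotion: tuple[float, float]    # (valence, arousal) of the user turn
    system_emotion: tuple[float, float]  # (valence, arousal) of the system reply

def emotion_shift(prev: tuple[float, float], curr: tuple[float, float]) -> float:
    """Euclidean distance between consecutive emotion states (0 = no change)."""
    return hypot(curr[0] - prev[0], curr[1] - prev[1])

def cross_turn_score(dialogue: list[Turn]) -> float:
    """Average coherence over turns: the system's emotion shift should
    track the user's emotion shift rather than jump arbitrarily."""
    if len(dialogue) < 2:
        return 1.0
    scores = []
    for prev, curr in zip(dialogue, dialogue[1:]):
        user_shift = emotion_shift(prev.user_emotion, curr.user_emotion)
        sys_shift = emotion_shift(prev.system_emotion, curr.system_emotion)
        # Penalize mismatch between user and system emotion movement;
        # map the mismatch into (0, 1], higher = more coherent.
        scores.append(1.0 / (1.0 + abs(sys_shift - user_shift)))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    dialogue = [
        Turn(user_emotion=(0.6, 0.4), system_emotion=(0.5, 0.3)),    # both calm/positive
        Turn(user_emotion=(-0.7, 0.8), system_emotion=(-0.2, 0.5)),  # user upset, system adapts
        Turn(user_emotion=(-0.3, 0.4), system_emotion=(0.9, 0.9)),   # abrupt, unjustified system jump
    ]
    print(f"cross-turn coherence: {cross_turn_score(dialogue):.3f}")
```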
TODO.
@inproceedings{liu2025emoreasoning,
author = {Liu, Jingwen and Cheng, Kan Jen and Lian, Jiachen and Anand, Akshay and Jain, Rishi and Qiao, Faith and Netzorg, Robin and Chou, Huang-Cheng and Li, Tingle and Lin, Guan-Ting and Anumanchipalli, Gopala},
title = {EMO-Reasoning: Benchmarking Emotional Reasoning Capabilities in Spoken Dialogue Systems},
year = {2025},
booktitle = {2025 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)}
}