
DEAF: A Benchmark for Diagnostic Evaluation of Acoustic Faithfulness in Audio Language Models

arXiv:2603.18048v1

Abstract: Recent Audio Multimodal Large Language Models (Audio MLLMs) demonstrate impressive performance on speech benchmarks, yet it remains unclear whether these models genuinely process acoustic signals or rely on text-based semantic inference. To study this question systematically, we introduce DEAF (Diagnostic Evaluation of Acoustic Faithfulness), a benchmark of over 2,700 conflict stimuli spanning three acoustic dimensions: emotional prosody, background sounds, and speaker identity. We then design a controlled multi-level evaluation framework that progressively increases textual influence, ranging from semantic conflicts in the spoken content to misleading prompts and their combination, allowing us to disentangle content-driven bias from prompt-induced sycophancy. We further introduce diagnostic metrics to quantify model reliance on textual cues over acoustic signals. Our evaluation of seven Audio MLLMs reveals a consistent pattern of text dominance: models are sensitive to acoustic variations, yet their predictions are predominantly driven by textual inputs, revealing a gap between high performance on standard speech benchmarks and genuine acoustic understanding.
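The abstract does not spell out the diagnostic metrics, so the following is a minimal sketch of the general idea rather than the authors' definition. The names ConflictItem and text_dominance_rate are hypothetical, and the formulation (counting how often the model's answer follows the textual cue rather than the acoustic one on conflict stimuli) is an assumption about what such a metric could look like.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ConflictItem:
    audio_label: str   # label supported by the acoustic signal, e.g. angry prosody
    text_label: str    # conflicting label implied by the transcript or prompt
    prediction: str    # the model's answer for this stimulus

def text_dominance_rate(items: List[ConflictItem]) -> float:
    """Fraction of decided conflict items resolved in favor of the textual cue.

    0.0 means fully acoustic-faithful; 1.0 means fully text-dominated.
    Items where the model picks neither label are excluded from the denominator.
    """
    text_wins = sum(it.prediction == it.text_label for it in items)
    audio_wins = sum(it.prediction == it.audio_label for it in items)
    decided = text_wins + audio_wins
    return text_wins / decided if decided else float("nan")

# Three emotional-prosody conflicts; the model follows the text twice.
items = [
    ConflictItem(audio_label="angry",   text_label="happy",   prediction="happy"),
    ConflictItem(audio_label="sad",     text_label="neutral", prediction="sad"),
    ConflictItem(audio_label="fearful", text_label="calm",    prediction="calm"),
]
print(f"text dominance: {text_dominance_rate(items):.2f}")  # prints 0.67
```

A metric of this shape separates "the model cannot hear the difference" from "the model hears it but defers to the text", which is the distinction the paper's findings turn on.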

Executive Summary

This article introduces DEAF, a benchmark for evaluating the acoustic faithfulness of Audio Multimodal Large Language Models (Audio MLLMs). The authors present a controlled, multi-level evaluation framework that measures how far model predictions rely on textual cues rather than acoustic signals. Across seven Audio MLLMs, the study finds a consistent pattern of text dominance, exposing a gap between strong scores on standard speech benchmarks and genuine acoustic understanding. The findings argue for evaluation methods that probe acoustic grounding directly before Audio MLLMs are deployed in settings where prosody, background sound, or speaker identity carries the signal.

Key Points

  • DEAF is a benchmark for evaluating the acoustic faithfulness of Audio MLLMs.
  • The benchmark assesses model reliance on textual cues versus acoustic signals through a controlled multi-level evaluation framework that escalates textual influence from content conflicts to misleading prompts to both combined (see the sketch after this list).
  • The study reveals a consistent pattern of text dominance among seven Audio MLLMs, suggesting a gap between their performance on standard speech benchmarks and genuine acoustic understanding.
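To make the escalation concrete, here is a minimal sketch of how the three conditions might be composed, assuming the paper's levels map to (1) conflicting spoken content, (2) a misleading prompt, and (3) both combined. Level and make_condition are hypothetical names, and the prompt wording is illustrative, not taken from the paper.

```python
from enum import Enum

class Level(Enum):
    """Escalating textual influence, mirroring the abstract's three conditions."""
    CONTENT_CONFLICT = 1   # spoken words contradict the acoustic cue
    MISLEADING_PROMPT = 2  # the question itself asserts the wrong answer
    COMBINED = 3           # both sources of textual pressure at once

def make_condition(level: Level, question: str, misleading_claim: str):
    """Return (prompt_text, use_conflicting_speech) for one evaluation condition.

    use_conflicting_speech selects audio whose spoken content contradicts
    the acoustic cue (e.g. cheerful words delivered with angry prosody).
    """
    if level is Level.CONTENT_CONFLICT:
        return question, True
    if level is Level.MISLEADING_PROMPT:
        return f"{misleading_claim} {question}", False
    return f"{misleading_claim} {question}", True

# Example for an emotional-prosody item:
question = "What emotion does the speaker convey?"
claim = "The speaker is clearly happy."
for level in Level:
    print(level.name, make_condition(level, question, claim))
```

Holding the audio fixed while escalating the textual pressure is what lets the authors attribute failures to content-driven bias, prompt-induced sycophancy, or their interaction.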

Merits

Comprehensive Evaluation Framework

The multi-level design isolates where textual influence enters: over 2,700 conflict stimuli across emotional prosody, background sounds, and speaker identity let the authors separate content-driven bias from prompt-induced sycophancy rather than reporting a single aggregate score.

Insights into Model Behavior

The diagnostic metrics show that models register acoustic variations yet still let textual inputs drive their predictions, distinguishing a decision-level bias toward text from an outright failure of acoustic perception.

Demerits

Limited Model Selection

The study evaluates only seven Audio MLLMs, which may not represent the broader and rapidly growing range of models in this category.

Limited Generalizability

The conflict stimuli cover three acoustic dimensions; the findings may not transfer to other acoustic phenomena or downstream applications, which limits how far the practical conclusions extend.

Expert Commentary

The study makes a significant contribution to the evaluation of Audio MLLMs by showing that strong benchmark scores can coexist with predictions driven largely by text rather than audio. The findings are sobering rather than flattering, and the study's scope limitations (seven models, three acoustic dimensions) should be kept in mind when extrapolating. Still, the results have clear implications for how Audio MLLMs are developed and deployed, and the diagnostic framing is likely to shape future evaluation work in this area.

Recommendations

  • Future evaluations should pair standard benchmarks with conflict-style diagnostics to verify that Audio MLLMs genuinely process acoustic signals rather than inferring answers from text.
  • Researchers should investigate why text dominates, for example whether training data or instruction tuning rewards textual shortcuts, and develop mitigations before deploying Audio MLLMs where prosody, background sound, or speaker identity carries the signal.

Sources

  • DEAF: A Benchmark for Diagnostic Evaluation of Acoustic Faithfulness in Audio Language Models (arXiv:2603.18048v1)