
Brittlebench: Quantifying LLM robustness via prompt sensitivity


arXiv:2603.13285v1. Abstract: Existing evaluation methods largely rely on clean, static benchmarks, which can overestimate true model performance by failing to capture the noise and variability inherent in real-world user inputs. This is especially true for language models, which face human-generated text queries containing mistakes, typos, or alternative phrasings of the same question. In this work, we introduce a theoretical framework for quantifying model sensitivity to prompt variants, or brittleness, which enables us to disentangle data-induced difficulty from prompt-related variability. Using this framework, we design a novel evaluation pipeline, Brittlebench, to holistically evaluate the sensitivity of frontier models. We apply semantics-preserving perturbations to a suite of popular benchmarks and observe model performance degrade by as much as 12%. However, these perturbations do not affect all models equally: even a single perturbation alters the relative ranking of models in 63% of cases, impacting conclusions about comparative model performance. Decomposing the total variance of both state-of-the-art open-weight and commercial models, we find that semantics-preserving input perturbations can account for up to half of the performance variance for a given model. Brittlebench highlights the need for more robust evaluations and models, and allows us to systematically understand model brittleness.
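The abstract's central idea, separating data-induced difficulty from prompt-related variability, can be illustrated with the law of total variance. The sketch below is illustrative only: the character-swap perturbation and the toy score table are assumptions for demonstration, not the paper's actual perturbation suite or pipeline.

```python
import random

random.seed(0)

def typo_perturb(prompt: str) -> str:
    """Swap two adjacent characters -- one simple semantics-preserving noise model."""
    if len(prompt) < 2:
        return prompt
    i = random.randrange(len(prompt) - 1)
    return prompt[:i] + prompt[i + 1] + prompt[i] + prompt[i + 2:]

def variance_decomposition(scores):
    """scores[item][variant] = 0/1 correctness on one prompt variant of one item.

    Splits the (population) variance of all scores into a between-item
    component (data-induced difficulty) and a within-item component
    (prompt sensitivity), which sum exactly to the total variance.
    """
    all_vals = [v for row in scores for v in row]
    n = len(all_vals)
    grand = sum(all_vals) / n
    total = sum((v - grand) ** 2 for v in all_vals) / n
    item_means = [sum(row) / len(row) for row in scores]
    between = sum(len(row) * (m - grand) ** 2
                  for row, m in zip(scores, item_means)) / n
    within = sum(sum((v - m) ** 2 for v in row)
                 for row, m in zip(scores, item_means)) / n
    return between, within, total

# Toy correctness table: 3 benchmark items x 4 prompt variants each.
scores = [[1, 1, 0, 1],
          [0, 0, 0, 0],
          [1, 0, 1, 0]]
b, w, t = variance_decomposition(scores)
# Decomposition is exact: between + within == total.
assert abs((b + w) - t) < 1e-9
```

Here the within-item share `w / t` plays the role of the paper's "prompt-related variability": if it approaches half of the total, prompt phrasing matters about as much as which question was asked.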

Executive Summary

The paper introduces Brittlebench, a framework for evaluating language model robustness via prompt sensitivity. It highlights the limitations of clean, static benchmarks, which often overestimate model performance by not accounting for real-world input variability. The authors show that semantics-preserving perturbations degrade performance by as much as 12%, alter relative model rankings in 63% of cases, and account for up to half of a model's performance variance. The work argues for more robust evaluations and models, enabling a systematic understanding of model brittleness.

Key Points

  • Introduction of Brittlebench, a framework for quantifying language model robustness via prompt sensitivity
  • Performance degradation of up to 12% under semantics-preserving perturbations
  • A single perturbation flips relative model rankings in 63% of cases, and perturbations account for up to half of a model's performance variance
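The ranking-flip statistic in the key points can be made concrete with a short sketch. The model names and accuracies below are hypothetical placeholders; the paper reports this statistic across real benchmarks and models.

```python
def rank_flips(clean: dict, perturbed: dict) -> float:
    """Fraction of model pairs whose relative order changes after a perturbation.

    clean / perturbed map model name -> accuracy on the clean and
    perturbed versions of the same benchmark.
    """
    models = list(clean)
    pairs = [(a, b) for i, a in enumerate(models) for b in models[i + 1:]]
    flips = sum(
        (clean[a] - clean[b]) * (perturbed[a] - perturbed[b]) < 0
        for a, b in pairs
    )
    return flips / len(pairs)

# Hypothetical accuracies before and after one perturbation.
clean = {"model_a": 0.80, "model_b": 0.75, "model_c": 0.70}
perturbed = {"model_a": 0.72, "model_b": 0.74, "model_c": 0.66}
flip_rate = rank_flips(clean, perturbed)  # model_a vs model_b flips: 1 of 3 pairs
```

A sign change in the pairwise accuracy gap is exactly the event that invalidates a "model A beats model B" conclusion drawn from the clean benchmark alone.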

Merits

Novel Evaluation Framework

Brittlebench provides a systematic approach to disentangling data-induced difficulty from prompt-related variability, enabling more robust evaluations and models.

Comprehensive Analysis

The study applies semantics-preserving perturbations across a suite of popular benchmarks and decomposes performance variance for both state-of-the-art open-weight and commercial models, offering a thorough examination of model brittleness.

Demerits

Limited Scope

The study focuses on language models and text-based perturbations, which may limit the generalizability of the findings to other modalities and areas of AI research.

Expert Commentary

Brittlebench marks a meaningful step forward in evaluating language model robustness. By systematically examining sensitivity to prompt variants, and by separating data-induced difficulty from prompt-related variability, researchers can better judge whether a reported score reflects genuine capability or a fortunate phrasing. The finding that a single perturbation can flip comparative rankings has practical implications for leaderboards and procurement decisions alike, and argues for reporting performance over prompt variants rather than single-prompt scores. As the field evolves, prioritizing robustness and transparent evaluation will be essential to the responsible development and deployment of AI systems.

Recommendations

  • Develop and apply Brittlebench to a broader range of AI models and applications
  • Establish interdisciplinary collaborations to address the complex challenges of model robustness and explainability
