Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment

Maria Victoria Carro, David Lagnado

arXiv:2603.13236v1 Abstract: AI-related incidents are becoming increasingly frequent and severe, ranging from safety failures to misuse by malicious actors. In such complex situations, identifying which elements caused an adverse outcome, the problem of cause selection, is a critical first step for establishing liability. This paper investigates folk perceptions of causal responsibility in causal chain structures when AI systems are involved in harmful outcomes. We conduct human experiments to examine judgments of causality, blame, foreseeability, and counterfactual reasoning. Our findings show that: (1) When AI agency was moderate (human sets the goal, AI determines the means) or high (AI sets both the goal and the means), participants attributed greater causal responsibility to the AI. However, under low AI agency (where a human sets both the goal and the means), participants assigned greater causal responsibility to the human despite their temporal distance from the outcome and despite both agents intending it, suggesting an effect of autonomy; (2) When we reversed the roles of human and AI, participants consistently judged the human as more causal, even when both agents performed the same action; (3) The developer, despite being distant in the chain, was judged highly causal, reducing causal attributions to the human user but not to the AI; (4) Decomposing the AI into a large language model and an agentic component showed that the agentic part was judged as more causal in the chain. Overall, our research provides evidence on how people perceive the causal contribution of AI in both misuse and misalignment scenarios, and how these judgments interact with the roles of users and developers, key actors in assigning responsibility. These findings can inform the design of liability frameworks for AI-caused harms and shed light on how intuitive judgments shape social and policy debates surrounding real-world AI-related incidents.

Executive Summary

This study investigates human perceptions of causal responsibility in AI-related incidents, using human experiments to examine judgments of causality, blame, foreseeability, and counterfactual reasoning. The results reveal a complex interplay between AI agency and the roles of users and developers in attributing causal responsibility. The findings highlight the importance of autonomy, of role reversals between human and AI, and of decomposing the AI into its components when explaining causal attributions. The research offers valuable input for the development of liability frameworks for AI-caused harms and informs social and policy debates surrounding AI-related incidents.

Key Points

  • Human participants attributed greater causal responsibility to AI when agency was moderate or high.
  • Low AI agency led participants to assign greater causal responsibility to the human, despite temporal distance and intent.
  • The developer was judged highly causal, reducing attributions to the human user but not the AI.

Merits

Strength in Methodology

The study uses controlled human experiments to investigate folk perceptions of causal responsibility, providing an empirical and replicable approach to understanding lay judgments.

Insight into AI Agency

The research highlights the significance of autonomy in attributing causal responsibility, demonstrating how AI agency can influence human judgments.

Demerits

Limitation in Generalizability

The findings may not generalize to real-world incidents: the controlled vignette-style experiments cannot fully replicate the complexity of actual AI-related harms.

Lack of Contextual Consideration

The research focuses primarily on causal attributions, neglecting the importance of contextual factors, such as the specific AI system's design and functionality, in determining liability.

Expert Commentary

The study's findings have significant implications for the development of liability frameworks for AI-caused harms and for ongoing debates about autonomous systems and liability. However, the limitations in generalizability and contextual coverage should be acknowledged, and further studies are needed to address these gaps. The decomposition of the AI into an agentic component and a large language model is an innovative approach, providing valuable insight into the role of autonomy in causal attribution. Nevertheless, the study's focus on human judgments may overlook the complexities of AI system design and functionality, highlighting the need for interdisciplinary research that combines social-science and technical perspectives.

Recommendations

  • Future studies should investigate the impact of contextual factors, such as AI system design and functionality, on human judgments of causal responsibility.
  • Developers and designers of AI systems should prioritize transparency and explainability in AI decision-making processes to facilitate more informed human judgments of causal responsibility.

Sources

  • arXiv:2603.13236v1 (Maria Victoria Carro, David Lagnado)