Who is responsible? US Public perceptions of AI governance through the lenses of trust and ethics

Prabu David

The governance of artificial intelligence (AI) is an urgent challenge that requires actions from three interdependent stakeholders: individual citizens, technology corporations, and governments. We conducted an online survey (N = 525) of US adults to examine their beliefs about the governance responsibility of these stakeholders as a function of trust and AI ethics. Different dimensions of trust and different ethical concerns were associated with beliefs in governance responsibility of the three stakeholders. Specifically, belief in the governance responsibility of the government was associated with ethical concerns about AI, whereas belief in governance responsibility of corporations was related to both ethical concerns and trust in AI. Belief in governance responsibility of individuals was related to human-centered values of trust in AI and fairness. Overall, the findings point to the need for an interdependent framework in which citizens, corporations, and governments share governance responsibilities, guided by trust and ethics as the guardrails.

Executive Summary

The article 'Who is responsible? US Public perceptions of AI governance through the lenses of trust and ethics' explores the public's views on AI governance responsibilities among citizens, corporations, and governments. Through an online survey of 525 US adults, the study finds that trust and ethical concerns significantly influence perceptions of governance responsibility. Government responsibility is linked to ethical concerns, corporate responsibility to both ethics and trust, and individual responsibility to human-centered values of trust and fairness. The findings advocate for an interdependent governance framework guided by trust and ethics.

Key Points

  • Trust and ethical concerns shape public perceptions of AI governance responsibilities.
  • Government responsibility is primarily associated with ethical concerns about AI.
  • Corporate responsibility is linked to both ethical concerns and trust in AI.
  • Individual responsibility is tied to human-centered values of trust and fairness.
  • An interdependent governance framework is necessary for effective AI governance.

Merits

Comprehensive Approach

The study effectively examines the interplay between trust, ethics, and governance responsibilities, providing a holistic view of public perceptions.

Empirical Evidence

The use of a sizeable survey sample (N = 525) lends empirical weight to the findings, enhancing the study's credibility.

Interdisciplinary Insights

The study bridges legal, ethical, and social science perspectives, offering valuable insights for policymakers, corporations, and researchers.

Demerits

Sample Limitations

The survey sample, while substantial, may not fully represent the diverse demographic and cultural landscape of the US, potentially limiting the generalizability of the findings.

Survey Methodology

The reliance on self-reported data may introduce biases, such as social desirability bias, which could affect the accuracy of the results.

Contextual Factors

The study does not delve deeply into how contextual factors, such as recent AI-related incidents or policy changes, might influence public perceptions.

Expert Commentary

The article provides a timely and insightful examination of public perceptions regarding AI governance, emphasizing the critical roles of trust and ethics. The findings are particularly relevant in the current landscape, where AI technologies are advancing rapidly and their governance remains contentious. The call for an interdependent governance framework is well justified, as it acknowledges the complex interplay among stakeholders. However, the study would benefit from a more nuanced exploration of how cultural, demographic, and contextual factors shape perceptions of governance responsibility, and the reliance on self-reported data introduces potential biases that future research should address. Overall, the article contributes meaningfully to the discourse on AI governance and offers valuable recommendations for policymakers, corporations, and researchers.

Recommendations

  • Future research should employ mixed-methods approaches to triangulate findings and provide a more comprehensive understanding of public perceptions.
  • Policymakers should collaborate with technology corporations and civil society to develop inclusive and adaptive governance frameworks that reflect the interdependent nature of AI governance.
