Ethics and governance of trustworthy medical artificial intelligence

Jie Zhang

Abstract

Background: The growing application of artificial intelligence (AI) in healthcare has brought technological breakthroughs to traditional diagnosis and treatment, but it is accompanied by many risks and challenges. These adverse effects are also regarded as ethical issues; they affect trustworthiness in medical AI and need to be managed through identification, prognosis, and monitoring.

Methods: We adopted a multidisciplinary approach and summarized five factors that influence the trustworthiness of medical AI: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution. We discussed these factors from the perspectives of technology, law, and healthcare stakeholders and institutions. The ethical framework of ethical values, ethical principles, and ethical norms is used to propose corresponding ethical governance countermeasures for trustworthy medical AI at the ethical, legal, and regulatory levels.

Results: Medical data are primarily unstructured and lack uniform, standardized annotation, and data quality directly affects the quality of medical AI algorithm models. Algorithmic bias can distort AI clinical predictions and exacerbate health disparities. The opacity of algorithms undermines patients' and doctors' trust in medical AI, and algorithmic errors or security vulnerabilities can pose significant risks and harm to patients. The involvement of medical AI in clinical practice may threaten doctors' and patients' autonomy and dignity. When accidents occur with medical AI, responsibility attribution is unclear. All these factors affect people's trust in medical AI.

Conclusions: To make medical AI trustworthy, at the ethical level, the value orientation of promoting human health should first and foremost be considered in the top-level design. At the legal level, current medical AI does not have moral status, and humans remain the duty bearers. At the regulatory level, we propose strengthening data quality management, improving algorithm transparency and traceability to reduce algorithmic bias, and regulating and reviewing the whole process of the AI industry to control risks. It is also necessary to encourage multiple parties to discuss and assess AI risks and social impacts, and to strengthen international cooperation and communication.

Executive Summary

This article provides a comprehensive analysis of the ethics and governance of trustworthy medical artificial intelligence (AI). The authors identify five key factors influencing medical AI trustworthiness: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution. They propose an ethical framework to address these issues from ethical, legal, and regulatory perspectives, emphasizing the importance of prioritizing human health, ensuring data quality, and promoting transparency and accountability in medical AI development and deployment. Their recommendations include strengthening data quality management, improving algorithm transparency, and regulating the AI industry. The article offers valuable insight into the complex challenges surrounding medical AI and practical suggestions for promoting trustworthiness and safety in healthcare technology.

Key Points

  • Medical AI requires a multidisciplinary approach to ensure trustworthiness
  • Key factors influencing medical AI trustworthiness include data quality, algorithmic bias, opacity, safety and security, and responsibility attribution
  • An ethical framework is proposed to address these issues from an ethical, legal, and regulatory perspective

Merits

Comprehensive analysis

The article provides a thorough examination of the complex issues surrounding medical AI, considering multiple perspectives and stakeholders.

Practical recommendations

The authors offer concrete suggestions for promoting trustworthiness and safety in medical AI development and deployment.

Demerits

Limited scope

The article focuses primarily on medical AI and may not fully address broader AI-related issues in healthcare.

Overemphasis on technical solutions

The authors may rely too heavily on technical fixes, neglecting the need for systemic and societal changes to address AI-related challenges.

Expert Commentary

This article makes a valuable contribution to the ongoing discussion on the ethics and governance of medical AI. The authors' emphasis on prioritizing human health and promoting transparency and accountability in medical AI development and deployment is well justified. However, their focus on technical solutions may overlook the need for systemic and societal changes to address AI-related challenges. Fully addressing these issues requires a more nuanced approach that considers the complex interplay between technical, legal, and social factors. The article also rightly highlights the need for international cooperation and communication to address the global implications of medical AI. As such, it serves as a timely reminder of the importance of responsible AI development and deployment in healthcare.

Recommendations

  • Researchers and developers should prioritize human-centered design and ethics in medical AI development
  • Regulatory bodies should establish clear guidelines and frameworks for medical AI development and deployment
