Abstract
Informed consent is often argued to be one of the more significant potential problems for the implementation and widespread adoption of artificial intelligence (AI) and machine learning in healthcare decision-making. The concern is whether, and to what degree, patients can understand what contributes to the decision-making process when an algorithm is involved. In this paper, I address what I call the Understanding Objection: the idea that AI systems will undermine the informational criteria required for proper informed consent. I demonstrate that collaboration with clinicians in a human-in-the-loop partnership can alleviate these concerns about understanding, regardless of how one conceptualizes the scope of understanding. Importantly, I argue that the human clinician must be the second reader in the partnership, both to avoid institutional deference to the machine and to best position clinicians as the experts in the process.
Keywords: artificial intelligence, clinical ethics, consent, clinical decision-making, radiology, understanding, human-in-the-loop
Appendices
Acknowledgements
This research was funded in part by a Social Sciences and Humanities Research Council of Canada Doctoral Fellowship 752-2020-2225. The author would also like to acknowledge Daniel Steel for reviewing and commenting on an earlier project that evolved into this analysis.