Abstract
The prospect of including artificial intelligence (AI) in clinical decision-making is an exciting next step for some areas of healthcare. This article analyzes the available kinds of AI systems, focusing on their macro-level characteristics, and examines the strengths and weaknesses of opaque systems and fully explainable systems. Ultimately, the article argues that “grey box” systems, which combine elements of opacity and transparency, ought to be used in healthcare settings.
Keywords: artificial intelligence, clinical decision-making, grey box, explainability, opaque systems
Appendices
Acknowledgements
I want to give special acknowledgement to the Canadian Bioethics Society for hosting the student writing competition that resulted in this version of my paper. In addition, I want to thank the editors and reviewers at the Canadian Journal of Bioethics for their time, consideration, and suggestions. I also want to recognize the suggestions given to me by several members of the UBC Department of Philosophy, especially those from Daniel Steel, Christopher Mole, and Jonathan Ichikawa. This paper draws on research supported by a Social Sciences and Humanities Research Council Doctoral Fellowship.