Abstract
The introduction of ImpactPro to identify patients with complex health needs suggests that bias in healthcare AI, and the harms it causes, stem from historically biased practices that produce biased datasets, from a lack of oversight, and from bias in the practitioners who oversee these systems. To improve these outcomes, healthcare practitioners need to engage in current best practices for anti-bias training.
Keywords: artificial intelligence, machine learning, bias, implicit bias, racism, ImpactPro
Acknowledgements
Thank you to the Waterloo Philosophy Department and in particular Katy Fulfer for her review and help with this paper; to my peer-reviewer Carl Mörch for his suggestions; to Amanda and Sydney, the students at large for the Canadian Bioethics Society, and administrators of the CBS-CJB student essay contest; to my anonymous reviewers; and to Nathalie Brown for her comments and suggestions.