Abstract
This study uses folk theories of the Spotify music recommender system to inform the principles of human-centered explainable AI (HCXAI). The results show that folk theories can reinforce, challenge, and augment these principles, facilitating the development of more transparent and explainable recommender systems for the non-expert, lay public.
Keywords:
- folk theories
- human-centered explainable artificial intelligence (HCXAI)
- recommender systems
- explanations