Abstract
Traditionally, human rights activists gathered evidence about violations of particular individuals' human rights in order to demand that states change their conduct and adopt measures to prevent further violations. Deploying artificial intelligence as part of decision-making processes makes it harder for activists to detect all sources of harm and to demand that states address them. As Abeba Birhane points out, artificial intelligence technology can generate harmful impacts that are difficult to detect or altogether invisible. If harms remain invisible, it is difficult for human rights defenders to document them, and equally challenging to articulate why those harms constitute violations of international human rights law. As a result, it becomes harder for defenders to call on states to take action to safeguard fundamental rights. This article argues that the theoretical framework of media ecology can make harms arising from the deployment of artificial intelligence in decision-making more visible. It demonstrates that media ecology offers human rights activists an additional tool for detecting how the use of artificial intelligence in decision-making can undermine the enjoyment of a human right. The article develops this argument using the right to mental health as a case study and, to contextualise the analysis, focuses on the use of artificial intelligence to screen job candidates.
Keywords:
- media ecology,
- international human rights law,
- harm,
- mental integrity,
- mental well-being,
- mental health,
- artificial intelligence technology,
- decision-making