
Volume 22, Number 4, 2024: Open Issue
This open issue offers six original articles and a special Dialogue section on the surveillance dimensions of “Synthetic Data.” The issue also includes two book reviews.
Cover image: Background face made of particles. (Credit: Designed by Freepik.)
Table of Contents (14 articles)
Articles
-
Squeeveillance: Performing Cuteness to Normalise Surveillance Power
Garfield Benjamin
p. 350–363
Abstract:
Cute videos are everywhere online, and a growing share of them comes from footage captured by doorbell cameras. Amazon’s Ring, and related connected camera devices, introduce new sociotechnical relations into domestic environments. I first outline “squeeveillance” as the affective and performative dimensions of cuteness within surveillance, exploring the Ring surveillant assemblage and why it needs the power of cuteness. I then examine squeeveillance as the use of cuteness in the way Ring operates, using the TV show Ring Nation (2022–present) to discuss the remediation of cute doorbell-camera footage onto other media, before considering the ways in which cuteness is performed as a normalisation of surveillance power. The article draws on theories of cuteness in conjunction with surveillance studies of power relations. In presenting squeeveillance as a lens through which to assess the expanding scope of Ring, I offer a discussion of the interconnected role of surveillance in contemporary domestic and media settings and its relation to current forms of power in surveillant assemblages.
-
Abstracting Injustice
Vincent Huynh-Watkins and Bryce Clayton Newell
p. 364–380
Abstract:
In this paper, we explore how the neorepublican concepts of domination and antipower can contribute to the surveillance studies literature and to a more democratic and participatory approach to technology development and deployment within the criminal justice system. We frame the neorepublican approach as an alternative to the predominant liberal paradigm, arguing that normative surveillance studies scholarship should emphasize the dominating potential of surveillance practices rather than merely trying to limit actual interference in people’s lives. To illustrate, we focus on the use of surveillance technologies that capture images of individuals within the US criminal justice system for recognition and/or identification. Facial or other biometric recognition technologies (FRTs) are increasingly built on artificial intelligence and machine learning algorithms (“AI”). Often seen as faster, more accurate, and less labor-intensive alternatives to human cognition, AI-powered biometric and facial recognition and other image capture technologies have become widely used within public law enforcement agencies around the world. The deployment of these technologies within the US criminal justice system has produced significant forms of injustice, including faulty identification and the subsequent arrest, detention, and incarceration of innocent individuals. These forms of data injustice are often opaque, hidden behind secretive law enforcement practices or commercial secrecy agreements. We draw from neorepublican conceptions of domination and antipower to frame this legal and technological opacity as an abstraction of injustice. We argue that handing important criminal justice decision-making over to code and algorithms designed, owned, and maintained by private interests exacerbates the potential for the public deployment of unjust systems that subject individuals and communities to unwarranted, arbitrary, and uncontrolled state power. Such government interference represents clear forms of data injustice and domination.
-
Data-driven Management and Taylorist Fantasies: A Case Study of Performance Quantification in the Indian IT Services Industry
Thomson Chakramakkil Sathian
p. 381–394
Abstract:
Human capital management (HCM) software applications are being widely used to assess the performance of knowledge workers in various sectors of the Indian economy. The use of data-driven performance management systems is claimed to make performance appraisals fairer and more transparent to the worker, and their supposed objectivity is used to justify their deployment in the workplace as instruments of remote surveillance. This paper presents insights from a case study of a performance management system installed at an Indian IT services organisation following the transition to remote work. Although the system was mobilised around promises of greater accuracy and objectivity in performance appraisals, the paper demonstrates that its techniques of quantification generated information empty of any managerial value. The dichotomous understanding of productivity encoded in the system also entrenched the subjective interests of managers in performance appraisals, creating conflicts of interest in supervision that eroded the interests of the workers. As the system misread and distorted the everyday realities of work, the workers governed by it were compelled to engage in “meta work” (Maggiori 2023) that effectively undermined their productivity. By situating the empirical insights from this case study within the politically fraught context of digital Taylorism, the paper seeks to understand why the performance management system failed to meet the sanguine promises of the digital that are often marshalled around data-driven management. Based on evidence from the Indian IT sector, it argues that the translation of worker activities into egregiously oversimplified productivity data, and the eventual normalization of that data using statistical techniques, enabled the organisation to recast its worker population as a fleet of disposable human capital. This, it further argues, is a strategy deployed not only to control alienated labour but also to protect the IT industry’s ability to arbitrage labour costs amidst the vagaries of informational capitalism.
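For readers unfamiliar with how such systems quantify work, here is a minimal, hypothetical Python sketch of the two moves the abstract critiques: collapsing activity into a dichotomous productive/idle flag, then statistically normalizing the aggregates across the workforce. Every name, number, and threshold is invented for illustration; this is not the system studied in the paper.

```python
# Hypothetical sketch of dichotomous productivity quantification plus
# z-score normalization. Illustrative only; not the studied HCM system.
from statistics import mean, stdev

# Per-minute activity flags for three workers over one hour:
# 1 = input detected, 0 = no input (e.g., reading, thinking, meetings).
activity = {
    "worker_a": [1] * 50 + [0] * 10,   # steady typing
    "worker_b": [1] * 20 + [0] * 40,   # mostly reading a design document
    "worker_c": [1] * 35 + [0] * 25,
}

# Step 1: collapse each worker's hour into a single "productivity" rate.
rates = {w: sum(flags) / len(flags) for w, flags in activity.items()}

# Step 2: z-score normalization across the workforce, which ranks workers
# against one another regardless of what the flags actually measured.
mu, sigma = mean(rates.values()), stdev(rates.values())
scores = {w: (r - mu) / sigma for w, r in rates.items()}

for w, z in scores.items():
    print(f"{w}: rate={rates[w]:.2f}, z={z:+.2f}")
```

The sketch makes the critique visible: the final z-scores rank workers relative to each other while saying nothing about whether "no input" meant idleness or exactly the kind of knowledge work the system cannot see.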
-
Towards a Macro-Level Theoretical Understanding of Police Services’ Acquisition of Risk Technologies
Dallas Hill and Christopher O’Connor
p. 395–409
Abstract:
Most North American police services have rapidly acquired and implemented a range of emerging and disruptive technologies in recent years. This rapid adoption of technologies has left a significant gap in our theoretical understanding of how police make decisions about which technologies to acquire. While existing research has focused on technology’s impact at the organizational level, the macro-level context that shapes technological acquisition by the police is undertheorized. To address this gap in the literature, this article combines theorizing by Ericson and Haggerty (1997) on policing the risk society (PRS) and Zuboff (2019) on surveillance capitalism (SC) to develop a macro-level theoretical framework. We consider technologies acquired by the police to be risk technologies and argue that combining key elements of PRS and SC theorizing offers a macro-level understanding of police decision-making about which technologies to adopt that can complement meso-level organizational theories. While calling for additional empirical research, this article concludes by discussing the potential impacts associated with private-sector involvement in public-sector initiatives and providing directions for future research.
-
“Privacy Is Overrated”: Situating the Privacy-related Beliefs and Practices of Italian Parents with Young Children
Lorenzo Giuseppe Zaffaroni
p. 410–427
Abstract:
The widespread surveillance of everyday family life poses threats to parents’ and children’s right to privacy. Even though considerable research on privacy in families with young children exists, more evidence on the interplay between contextual factors and privacy issues is needed to enrich our understanding of privacy as grounded in everyday family life. To this end, this paper conceptualises privacy as a situated and emergent phenomenon related to family cultures, socioeconomic background, technological imaginaries, and other significant markers of everyday family life. Drawing on qualitative data from a longitudinal research project with parents of children aged zero to eight, the study shows that privacy risks and threats are mostly associated with the interpersonal context; corporate and institutional surveillance are naturalised within notions of convenience or resignation to big-tech corporations. As technological and surveillance imaginaries influence such a complex web of privacy dynamics, this paper advocates for a situated and contextual approach to family privacy and surveillance in times of datafication.
-
Understanding Attitudes Toward Police Surveillance: The Role of Authoritarianism, Fear of Crime, and Private-Sector Surveillance Attitudes
Camille Conrey and Craig Haney
p. 428–447
Abstract:
Public attitudes toward domestic police surveillance have important implications for its political salience and regulation. An increasing number of jurisdictions have sought to regulate law enforcement surveillance, in part due to growing concerns over issues related to privacy, civil liberties, and the potential for bias (Beyea and Kebde 2021; Chivukula and Takemoto 2021; Smyth 2021). This study explores what factors help to predict and shape public attitudes toward police surveillance. Two groups of participants (n = 131 and n = 299) completed measures of authoritarianism, fear of crime, consumer surveillance technology use, and attitudes toward private-sector surveillance (such as surveillance by private companies, employers, or citizens) and police surveillance. Demographic factors (age, race/ethnicity, education level, gender, and political leaning) were also examined. Of these factors, legal authoritarianism, level of interaction with surveillance-related consumer technology, and attitudes toward private-sector surveillance were positively associated with the acceptance of police surveillance.
Dialogue
-
Synthetic Data, Synthetic Media, and Surveillance
-
Critical Provocations for Synthetic Data
Daniel Susser and Jeremy Seeman
p. 453–459
Abstract:
Training artificial intelligence (AI) systems requires vast quantities of data, and AI developers face a variety of barriers to accessing the information they need. Synthetic data has captured researchers’ and industry’s imagination as a potential solution to this problem. While some of the enthusiasm for synthetic data may be warranted, in this short paper we offer a critical counterweight to simplistic narratives that position synthetic data as a cost-free solution to every data-access challenge—provocations highlighting ethical, political, and governance issues the use of synthetic data can create. We question the idea that synthetic data, by its nature, is exempt from privacy and related ethical concerns. We caution that framing synthetic data in binary opposition to “real” measurement data could subtly shift the normative standards to which data collectors and processors are held. And we argue that by promising to divorce data from its constituents—the people it represents and impacts—synthetic data could create new obstacles to democratic data governance.
-
Synthetic Training Data and the Reconfiguration of Surveillant Assemblages
Louis Ravn
p. 460–465
Abstract:
Synthetic training data promise considerable performance improvements in machine learning (ML) surveillance tasks, including such applications as crowd counting, pedestrian tracking, and face recognition. In this context, synthetic training data constitute techno-fixes primarily by virtue of acting as “edge cases”—data that are hard to come by in the “real world” yet straightforward to produce synthetically—which are used to enhance ML systems’ resilience. In this dialogue paper, I mobilize Haggerty and Ericson’s (2000) concept of the surveillant assemblage to argue that synthetic training data raise well-known, entrenched surveillance issues. Specifically, I contend that conceptualizing synthetic data as but one component of larger surveillant assemblages is analytically meaningful because it challenges techno-deterministic imaginaries that posit synthetic data as fixes to deep-rooted surveillance issues. To exemplify this stance, I draw from several examples of how synthetic training data are already used, illustrating how they may both intensify the disappearance of disappearance and contribute to the leveling of hierarchies of surveillance depending upon the surveillant assemblage that they reconfigure. Overall, this intervention urges surveillance studies scholarship to attend to how synthetic data reconfigure specific surveillant assemblages, with both problematic and emancipatory implications.
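As a concrete illustration of the “edge case” logic this abstract describes, the following hypothetical Python sketch pads a plentiful set of real samples with synthetically generated rare conditions before training. The generator, labels, and proportions are invented stand-ins, not any specific surveillance pipeline.

```python
# Minimal sketch of synthetic "edge case" augmentation: rare, hard-to-collect
# conditions are produced synthetically and mixed into a real training set.
# All names and values are hypothetical stand-ins.
import random

random.seed(0)

def synthesize_edge_case() -> dict:
    """Hypothetical stand-in for a rendering pipeline that emits labelled
    scenes under conditions rarely captured in the 'real world'."""
    return {
        "source": "synthetic",
        "condition": random.choice(["night", "heavy occlusion", "rain"]),
        "label": "pedestrian",
    }

# Real footage: plentiful for common conditions, scarce for edge cases.
real_samples = [
    {"source": "real", "condition": "daylight", "label": "pedestrian"}
    for _ in range(1000)
]

# Pad the scarce regions of the data distribution with synthetic scenes.
training_set = real_samples + [synthesize_edge_case() for _ in range(200)]

print(sum(s["source"] == "synthetic" for s in training_set), "synthetic samples")
```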
-
Synthetic Data and Reverse Image Search: Constructing New Surveillant Indexicalities
Renée Ridgway and Nicolas Malevé
p. 466–471
Abstract:
The recent evolution of algorithmic techniques (mining, filtering, modelling) makes people more transparent through sophisticated search interactions and online monitoring, heightening opportunities for surveillance. The advent of computer-generated “synthetic data” has created another twist in the techno-information revolution of generative “artificial intelligence.” Promoted by tech companies to circumvent privacy legislation and to develop cheaper monitoring technologies, synthetic data is touted as a solution to surveillance capitalism. This dialogue paper focuses on the use of synthetic data in the context of “fake” images and discriminatory technologies by first discussing the relation between representation and indexicality via the medium of (digital) photography and then via reverse image search. A digital ethnography by the authors uses artificially generated images of people “who do not exist” to query the reverse search engine PimEyes, which offers a biometric search for anyone wishing to find their face on the internet. PimEyes finds faces similar to a person who does not exist, provoking questions both about the generated image used as a query and about the status of the search result. The results show the tensions inherent in the use of synthetic data: a dialectic between increasing precision and increasing scepticism. When the user visits the linked websites, the confusion increases as they struggle to determine whether the images PimEyes found are synthetic or real. In this context, reverse image search will likely stimulate further synthetic data development while simultaneously offering services that embed metadata into files, as well as forensics to secure indexicality, introducing yet more factors into the loop between representation and generation. The matter of concern, therefore, will not be the ability to produce realistic representations through the use of synthetic data but the demand for indexicality that their use triggers and the bureaucratic apparatuses of verification that emerge to contain it.
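PimEyes’s biometric matching is proprietary, so no sketch can reproduce the authors’ actual method. Purely to make their generate-query-rank loop concrete, the following Python sketch ranks candidate result images against a synthetic query face using a perceptual hash (via the imagehash library) as a crude stand-in for similarity; all file paths are hypothetical.

```python
# Crude proxy for a reverse-image-search ranking step. PimEyes uses
# proprietary biometric face embeddings; a perceptual hash stands in here
# only to make the generate-query-rank cycle concrete. Paths are hypothetical.
from PIL import Image
import imagehash  # pip install imagehash

# The synthetic query: a face of a person "who does not exist".
query_hash = imagehash.phash(Image.open("synthetic_face.png"))

# Candidate result images returned by a reverse image search.
candidates = ["result_1.jpg", "result_2.jpg", "result_3.jpg"]

# Rank candidates by Hamming distance to the query; smaller = more similar.
ranked = sorted(
    candidates,
    key=lambda path: query_hash - imagehash.phash(Image.open(path)),
)
for path in ranked:
    print(path, query_hash - imagehash.phash(Image.open(path)))
```

Note what such a ranking cannot settle: whether a close match is itself synthetic or depicts a real person, which is precisely the tension the paper identifies.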
-
Synthetic Data: From Data Scarcity to Data Pollution
Tanja Wiehn
p. 472–476
Abstract:
The increasing development and adoption of synthetic data raise critical concerns about the perpetuation of datafication logics. In examining some of synthetic data’s core promises, this dialogue paper aims to uncover the potential harm of further de-politicizing synthetic data. Synthetic data introduces technological opportunities that promise to meet a growing demand for the data needed to train AI models. Furthermore, models trained on synthetic data are praised as more precise and effective while being cheaper than collected data (Zewe 2022). With this dialogue paper, I aim to nuance the ways in which synthetic data complicate a critique directed at AI-driven technologies. I build my argument on two elements fundamental to the debate on the promises and perils of synthetic data. The first is the notion of data scarcity—often leveraged to argue for the implementation and further development of synthetic data to train bespoke models. Second, I discuss the concerns of data pollution and contamination with synthetic data. Through these entry points, I argue that synthetic data re-ignite issues previously raised by scholars in the field of critical data and surveillance studies. The aim of this dialogue paper is therefore to call for a critical understanding of synthetic data as living information, much like collected data, and to account for synthetic data and the conditions of its generation in the context of simulated environments.
-
Why Synthetic Data Can Never Be Ethical: A Lesson from Media Ethics
Andrew Fitzgerald
p. 477–482
Abstract:
This Dialogue paper argues that the use of synthetic data can never be “ethical.” My argument imports a normative stance from media ethics that “being-ethical-means-being-accountable” (Glasser and Ettema 2008). Building from discourse ethics, this stance positions such ethics as having “the facility to argue articulately and deliberate thoughtfully about moral dilemmas, which in the end means being able to justify, publicly and compellingly, their resolution” (Glasser and Ettema 2008: 512). Crucially, this approach is dialogical and social, necessitating a space open to all affected by relational practices and processes. While the use of synthetic data in commercial institutional contexts may offer workarounds to privacy concerns regarding personally identifiable information (PII) or unpaid user labor—or seem relatively innocuous, as in the case of training computer vision algorithms in video games—this facilitates, as others in this Dialogue section argue in their respective papers, a “fix” or “solutionist” framing that elides ethics and de-politicizes synthetic data. Synthetic data therefore intensifies a pre-existing lack of accountability inherent within automated systems more generally and, through this, entrenches and compounds surveillant practices. In some arenas, the stakes are quite literally life or death, such as in the development of medical AI and, more perniciously, in the migration of models from commercial to state deployment in law enforcement and military contexts. Given the foreclosure of thoughtful, articulate, reflexive, and inclusive deliberation on the significant moral implications of AI’s vast and ever-growing assemblages, and synthetic data’s role in further mystifying and legitimating its seemingly unbridled development and deployment, I argue that synthetic data can never meet the standard of “ethical” practice.