The Ambivalence of Surveillance | Commentaries: Technology and Policing (Part 1 of 3)
By Jessica Saxby
Technology and Policing
As part of a new series of articles for Technophany’s Commentaries section, we have translated the essay “Tout le monde déteste la TechnoPolice” by the collective TechnoPolice, first published in 2018. We also carried out an interview with philosopher Florian Cramer on surveillance, AI, and systems of control. The short article below introduces this dossier and its focus on technology, policing, and surveillance in France.
One of the most urgent political tasks of any philosophical understanding of technology is to investigate the ways in which it exacerbates existing forms of control and repression and creates new ones. It is for this reason that these texts focus on the relationship between technology and policing. The translation of Tout le monde déteste la TechnoPolice seemed urgent following the implementation of the rapid-sentencing trials that swept France last summer, after riots broke out when a French policeman shot 17-year-old Nahel Merzouk at point-blank range. Many of those arrested in connection with the riots were convicted on the basis of flimsy evidence, video-surveillance images and geolocation tags, provoking questions about the utility and uses of such technologies. Concurrently, France was ramping up its security and surveillance infrastructure in preparation for the Olympic Games. Many of the technologies and policies flagged by TechnoPolice in their 2018 report were being rolled out on a mass scale.
The short article below attempts to contextualise this translation by linking together two seemingly disparate events: the mass incarceration of (mostly) young men in June 2023, following the riots that responded to the police murder of a French teenager, and the July 2024 Olympic Games. It looks at the ambivalence of such technologies; that is to say, it examines the (racist and biased) contingencies that occur when the totalising view of technological surveillance starts to undo itself, casting its net too wide and abandoning any aspirations towards precision or accuracy.
The Ambivalence of Surveillance
Three hundred and thirty new cameras for Marseille. Four hundred for Paris, that makes 4,400 in total. And 500 for Saint-Denis, announces Gérald Darmanin in 2022. One million euros. A clot of filming frogspawn. Total surveillance for zero delinquency. Four cameras are filming me right now.
Ahead of the Olympic Games, which took place in Paris this summer, the Interior Minister announced his “plan for zero delinquency.” According to him, the most egregious delinquents are the visible ones: drug dealers, street vendors and petty thieves.
The Prefecture have reported on the many preventative ground operations carried out as part of the plan for zero delinquency: 25,000 individual checks between January and March 2023 alone. Five thousand officers were deployed during this period, but many more individuals were filmed. One hundred thousand cameras are estimated to be watching over the nation in total.
Governments are currently splurging huge sums of money on video surveillance hardware. The market was worth 45 billion euros in 2020, and could be worth up to 75 billion by 2025, report La Quadrature du Net. The justification for such state investment in video surveillance and other, more advanced forms of policing technology often relies on a series of interconnected arguments. In a liberal context these revolve largely around the impartiality of technology and its supposedly superior precision and accuracy compared to humans(/cops).
While it has long been clear that technologies are far from impartial—as Florian Cramer notes in his article “Crapularity Hermeneutics: Interpretation as the Blind Spot of Analytics, Artificial Intelligence, and Other Algorithmic Producers of the Postapocalyptic Present”, “today, it is a widely reported fact that data sets and algorithms, or the combination of both, can and do discriminate”—the myth of precision and accuracy paradoxically remains. This is especially the case when it comes to the supposed sacred truth of images.
Yet this technology is not accurate, for a number of reasons: firstly, and precisely, because of the biases programmed into it. According to the US-based collective Data 4 Black Lives, facial recognition technologies are 10 to 100 times more likely to misidentify Black people and other people of colour, errors that have ended in wrongful convictions. The Innocence Project, also based in the US, notes “seven confirmed cases of misidentification due to the use of facial recognition technology, six of which involve Black people who have been wrongfully accused: Nijeer Parks, Porcha Woodruff, Michael Oliver, Randall Reid, Alonzo Sawyer, and Robert Williams.” But this technology is also imprecise for another reason, one that emerges from the proliferation of data, even as this proliferation is what justifies the automation of such technologies in the first place.
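Before turning to that second reason, the first can be given a rough, purely illustrative shape. None of the figures in the sketch below come from Data 4 Black Lives or the Innocence Project; the comparison volume, the nominal error rate, and the tenfold disparity are invented, simply to show how a “small” error rate, applied at mass scale and distributed unevenly, produces wrongful flags that fall overwhelmingly on one group.

```python
# Hypothetical illustration: how a small false-match rate, applied at mass
# scale and distributed unevenly across groups, produces wrongful flags.
# All numbers are invented for the sake of the arithmetic.

daily_comparisons = 500_000        # assumed face comparisons per day in a city-wide system
base_false_match_rate = 0.001      # assumed nominal error rate (0.1%)
disparity_factor = 10              # assumed: errors 10x more likely for some groups
share_of_higher_error_group = 0.3  # assumed share of comparisons affecting that group

group_a = daily_comparisons * share_of_higher_error_group
group_b = daily_comparisons - group_a

false_flags_a = group_a * base_false_match_rate * disparity_factor
false_flags_b = group_b * base_false_match_rate

print(f"Wrongful flags per day, higher-error group: {false_flags_a:.0f}")
print(f"Wrongful flags per day, rest of population: {false_flags_b:.0f}")
print(f"Share of wrongful flags borne by {share_of_higher_error_group:.0%} "
      f"of those scanned: {false_flags_a / (false_flags_a + false_flags_b):.0%}")
```

The point is not the particular numbers but the shape of the arithmetic: scale does not dilute bias, it multiplies it.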
How many cops would be required to keep an eye on the cameras at all times? The live stream of images of millions of bodies roaming dense streets. Eyes come in pairs, but even so, this kind of surveillance is potentially infinite. There are now too many cameras for them to be used effectively by human personnel, reported La Quadrature du Net in 2023. It is this proliferation of images, and the limits of human apprehension, that raises both political and philosophical questions. In the first instance, it is proliferation that led France, in 2023, to become the first country in Europe to legalise algorithmic video surveillance (VSA), or what is more accurately known as biometric surveillance, engaged in “perpetual identification, analysis and categorisation of bodies, physical attributes, gestures, silhouettes, and gaits,” effectively violating European law. There is too much data and there are too many images, and so the human (/cop), unable to keep up with the flux of images and data, is short-circuited out of certain law enforcement tasks, leaving a ghostly void of ambivalent data and algorithms motivated by nothing at all, and at the same time by racist programmed biases.
In the second instance, it becomes both an epistemological and ontological question. Speaking of individual memory, Bernard Stiegler notes that humans are bound by a “retentional finitude,” a finite capacity to retain information about the past, thus leading to the necessity of mnemotechniques, objects and systems that enable recall beyond this limit. Humans are also bound by what we might call an operational finitude when it comes to processing data (as well as by a number of other limitations, including time and resources). While the technological capacity exists to exponentially multiply surveillance hardware such as cameras, this increase is out of joint with the human capacity to process and analyse the images they produce. When it comes to memory, Stiegler notes that “we exteriorise ever more cognitive functions in contemporary mnemotechnological equipment. And in doing so, we delegate more and more knowledge to apparatuses and to the service industries that network them, control them, formalise them, model them, and perhaps even destroy them.” Indeed, the cognitive function of surveillance is gladly handed over to industry.
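The arithmetic behind this operational finitude can be sketched roughly. The camera count below is the estimate quoted earlier; the number of feeds one operator can plausibly watch, the shift length, and the staffing overhead are assumptions made only to show how quickly continuous human monitoring becomes unworkable.

```python
# Rough arithmetic for the question "how many cops to watch all the cameras?"
# The camera estimate is the figure cited above; everything else is assumed.

cameras = 100_000            # estimated cameras nationwide (figure cited above)
feeds_per_operator = 16      # assumed: screens one person can plausibly scan at once
hours_per_day = 24
shift_hours = 8              # assumed shift length
staffing_overhead = 1.4      # assumed allowance for breaks, leave, turnover

operators_on_duty = cameras / feeds_per_operator          # at any given moment
shifts_per_day = hours_per_day / shift_hours
total_staff = operators_on_duty * shifts_per_day * staffing_overhead

print(f"Operators needed on duty at all times: {operators_on_duty:,.0f}")
print(f"Total monitoring staff for 24/7 coverage: {total_staff:,.0f}")
```

Whatever the exact assumptions, no police force fields tens of thousands of dedicated watchers; the handover to automated analysis, and to the industry that sells it, follows almost by default.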
This is convenient for other reasons too. As Jackie Wang puts it in her essay “‘This Is a Story About Nerds and Cops’: PredPol and Algorithmic Policing”:
given that critics of the police associate law enforcement with the arbitrary use of force, racial domination, and the discretionary power to make decisions about who will live and who will die, the rebranding of policing in a way that foregrounds statistical impersonality and symbolically removes the agency of individual officers is a clever way to cast police activity as neutral, unbiased, and rational.
The images are impartial, the surveillance is ambivalent.
Now liberated from the arduous task of impartiality, humanly impossible in any case, the cops can get to work, arresting and convicting. Zero delinquency! But as I mentioned before, this technology isn’t as precise as it claims to be. La Quadrature du Net call it a “technological bluff,” one that is nonetheless operational enough to elicit a veritable downpour of public financing. But these cameras don’t work: they don’t deter crime, they misrecognise people, they’re easily broken, and municipal police often forget to turn them on. When the images they record fly in the face of power, “how many times has it happened that bodycams are lost, malfunction, or that their footage is ‘accidentally deleted’?” writes Rona Lorimer in her essay “Images Against Images.” And so this technological prowess takes on an absurd character, what Cramer calls Crapularity, when “artificial intelligence systems that see faeces in clouds” are hailed as the harbingers of the singularity. So the cops can’t see, but neither can the cameras. Yet in any case there is no real displacement of power in this game of the technologisation of the police.
In June 2023, 3,600 arrests were made following the killing of Nahel Merzouk by French policeman Florian M. Those arrested were passed hastily through rapid-sentencing courts. The convictions came quickly, and these surveillance technologies played a key role. “CCTV images and the police statement likewise attested to the presence of a ‘corpulent’ young black man at the front of the crowd, whom the judge pointed to as Emil,” reports Lorimer with journalist Harrison Stetler from the courtroom. Snapchat messages were translated as “incitements”. “Thanks to cell-tracing technology, investigators were also able to locate the presence of Samir and Hassan’s phones in the vicinity of the town hall,” they write. And thus, according to one of the defence lawyers they cite, the prosecutors’ argument became the following: “just the fact of being present on the scene where rioting is taking place seems to mean that you’re liable to face charges.”
This aspect of “guilt by association” is central to the kind of policing that algorithmic video surveillance enables. According to Eligon and Williams, whom Cramer cites, it abides by the following logic: “because you live in a certain neighborhood or hang out with certain people, we are now going to be suspicious of you and treat you differently, not because you have committed a crime or because we have information that allows us to arrest you, but because our predictive tool shows us you might commit a crime at some point in the future.”
In large part, the ambivalence of this data lies in its utterly speculative nature. “Compared to 1970s and 1980s database dragnets,” writes Cramer, which also made racist errors based on computational convictions, “contemporary Big Data analytics have only become more speculative since their focus is no longer on drawing conclusions for the future, and since they no longer target people based on the fact their data matches other database records, but instead based on more speculative statistical probabilities of environmental factors and behavioural patterns.” Environmental factors like living in a cité where cops kill young men.
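The logic can be made concrete with a deliberately crude sketch. The function below is not any real predictive-policing system; the postcodes, features, and weights are all invented. It only illustrates how, once “environmental factors” are admitted as inputs, place and acquaintance can outweigh anything a person has actually done.

```python
# A deliberately crude illustration of "guilt by association" scoring.
# This is not any real predictive-policing model; the features and weights
# are invented to show how environmental factors stand in for conduct.

HEAVILY_POLICED_POSTCODES = {"93200", "93300"}  # hypothetical postcodes

def toy_risk_score(postcode: str, contacts_with_police_records: int,
                   prior_offences: int) -> float:
    """Return a 'risk' score in which place and acquaintance, not conduct,
    dominate the outcome."""
    score = 0.0
    if postcode in HEAVILY_POLICED_POSTCODES:
        score += 2.0                                # where you live
    score += 0.5 * contacts_with_police_records     # whom you know
    score += 1.0 * prior_offences                   # what you have actually done
    return score

# Someone with no record, living in a targeted area, whose friends have been
# stopped before, outscores someone with an actual prior offence.
print(toy_risk_score("93200", contacts_with_police_records=3, prior_offences=0))  # 3.5
print(toy_risk_score("75006", contacts_with_police_records=0, prior_offences=1))  # 1.0
```

Nothing in such a score requires intent or conduct; the “environment” does the accusing.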
Yet this automation is never totalising: while many cops might see themselves relieved of their responsibility by certain algorithmic or technological features, reduced to executioners of technological will, the kids they arrest will still appear in front of judges, who are, for now, unthreatened by machinic replacement. And as Lorimer notes, “judges are judges.” That is to say, the ultimate figures of interpretation, those with the power to create a narrative around a nebulous mass of ambivalent data. In the case of “bonafide, extra judicial images, raising consciousness of police violence,” these images “don’t lead to conviction”, she writes. Yet they do when grainy CCTV footage and approximate geotags associate masses of kids with committing property damage and setting off fireworks.
*
In the translation that accompanies this text, Tout le monde déteste la Technopolice, Félix Tréguer maps out the evolution of these technologies in France over the past decade or so and points to the places where they collude with systems of oppression at a national and international level. This text is followed by the interview with Florian Cramer, based mainly on his essay “Crapularity Hermeneutics”.