Interview with Florian Cramer | Commentaries: Technology and Policing (Part 3 of 3)
By Jessica Saxby, Guillermo Collado Wilkins, Florian Cramer
The illusion of control
Given the focus of this series of articles in Technophany’s Commentaries on techno-policing and surveillance in France, we would like to start by asking you about the following assertion that you make in Crapularity Hermeneutics: “Whether or not A.I., or some types of A.I., are fundamentally flawed and unfit for their purpose, they nevertheless will be developed and used when they seem to get things done and when they deliver, most importantly, quantifiable results.” Drawing on some of the psychoanalytic undertones of the text, we wanted to ask how AI systems create an “illusion of control,” and how that illusion structures our reality.
Speaking of psychoanalysis and psychology, a good practical example is emotion detection in AI machine vision systems, which has been widely used for many years, including in airport camera surveillance systems and likely also at the Paris Olympics. Ruben van de Ven researched the inner workings of these systems in his master’s thesis at the Piet Zwart Institute in Rotterdam as early as 2016 (see Emotion Hero and his 2017 paper Choose How You Feel; You Have Seven Options). He found that machine emotion recognition systems are based on the model of seven supposedly universal and evolved human emotions (“anger, contempt, disgust, fear, happiness, sadness, and surprise”) developed by the American psychologist Paul Ekman. While this model is itself not uncontroversial, it becomes further simplified and operationalized in the machine vision system, as if these emotions could be objectively and unambiguously detected, and as if their analysis would yield objectively computer-detectable threat scenarios - for example, flagging a person at an airport whose facial expression has been deemed fearful and angry by machine vision as a possible terrorist.
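To make this operationalization concrete, here is a minimal, purely illustrative sketch of such a pipeline - not the code of any deployed system; the classifier stand-in, the threat weighting, and the threshold are all hypothetical assumptions:

```python
# Illustrative sketch only (hypothetical, not any vendor's code): how an
# emotion-recognition pipeline collapses Ekman's seven categories into a
# binary "threat" flag. Labels, weights, and threshold are assumptions.
import random

EKMAN_LABELS = ["anger", "contempt", "disgust", "fear",
                "happiness", "sadness", "surprise"]

def classify_face(image_bytes: bytes) -> dict:
    """Placeholder for a trained classifier: one confidence score per label."""
    # A deployed system would run a neural network here; we fake the output.
    raw = [random.random() for _ in EKMAN_LABELS]
    total = sum(raw)
    return {label: score / total for label, score in zip(EKMAN_LABELS, raw)}

def threat_score(scores: dict) -> float:
    # "Semantics" reduced to an arbitrary weighting of two syntactic outputs;
    # precisely the reduction of interpretation to detection discussed above.
    return 0.6 * scores["fear"] + 0.4 * scores["anger"]

def flag_passenger(image_bytes: bytes, threshold: float = 0.2) -> bool:
    return threat_score(classify_face(image_bytes)) > threshold
```

Whatever a “fearful and angry” face might mean, such a pipeline only ever sees numbers; the interpretation is hidden in the choice of weights and threshold.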
The promise of such a system is that this analysis and interpretation are simply a matter of technical machine recognition, like scanning and recognizing a QR code; that semantics boils down to syntax. It also means that theories from the humanities and social sciences get extracted and reinterpreted as machine-operational models, with the collateral damage of being turned into pseudoscience. Another example is the translation of René Girard’s literary-cultural theory of mimetic desire into social media and e-commerce algorithms and business models by his former Stanford students who now run Silicon Valley companies. Or the frequent profiling of employees in human resource management through the Myers-Briggs (MBTI) personality type test, whose model is based on Jungian psychology and thus operationalizes a speculative, controversial model into pseudo-objective staff assessments. This example also shows that such operations or “analytics” neither need computers nor are necessarily a product of computer technology, but exist in a speculative realm between pseudoscience, management, and technology in the broadest sense. (Another such example is the “Oxford Capacity Analysis” test and “tech” of the Church of Scientology, whether or not it is used in conjunction with the Church’s “e-meter” electronic device.)
One might expect the humanities and social sciences to be outraged at the oversimplification and operationalization of analysis into “analytics,” but - out of fear of missing out - they seem to mimic it in their predominant understanding of digital humanities as computer-based quantitative analysis (of texts and other semiotic entities).
In cases such as emotion detection systems, it is ultimately irrelevant to the practicality of these systems whether their underlying translation of psychological models and theories into software is scientifically flawed. It does not matter as long as the system is operational and - while in most cases operating as a black box - produces quantifiable results, such as more targeted people searches at airports with improved efficiency statistics. It does not even matter how biased these results and their underlying methodology, including the data sets used to train the AI, may be on a qualitative or interpretative level. The very existence of the system makes contingency seem more manageable, thus creating a convenient illusion of control. (One could even argue that the entire concept of management is based on such illusions of control, regardless of the - digital or non-digital - technology and techniques used.)
Quantification sustains the illusion of control
You mention crime statistics as one way these technologies seem efficient. France has installed thousands of surveillance cameras for the Olympic Games as part of its “plan for zero delinquency.” The Interior Minister declared that the most egregious delinquents are the visible ones: drug dealers, street vendors and petty thieves. In which ways does quantification sustain this illusion of control and security?
Perhaps the question is rather to what extent managed visibility, or rather invisibility, creates an illusion of control and security, and to what extent quantification is a side effect of this managed invisibility. A “cleaned up” public space, for example, means fewer incidents recorded by surveillance systems, and thus lower crime statistics. A likely scenario - one that would not surprise me if it were already a reality today - is that regimes of invisibility (here I have to think of Jacques Rancière’s aesthetic philosophy) are created solely in order to “game” the algorithms of surveillance systems and their output statistics, similar to the web “content farms” created to game the ranking algorithms of search engines. In other words, “analytics” is no longer a means to an end, but becomes an end in itself.
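A toy illustration of that gaming - not a model of any real system - shows how an output statistic that only counts what the cameras record can be lowered by managing visibility rather than the underlying incidents:

```python
# Toy illustration (hypothetical, not a model of any real system): the
# reported statistic depends only on what the cameras record, so "managing
# invisibility" lowers the number without changing the underlying incidents.

incidents = [
    {"location": "station_forecourt", "on_camera": True},
    {"location": "station_forecourt", "on_camera": True},
    {"location": "back_alley",        "on_camera": False},
]

def recorded_incidents(events):
    # The analytics pipeline can only count what its sensors have seen.
    return sum(1 for event in events if event["on_camera"])

print("actual incidents:  ", len(incidents))                  # 3
print("reported incidents:", recorded_incidents(incidents))   # 2

# "Cleaning up" the forecourt merely displaces the activity out of camera view:
displaced = [dict(event, on_camera=False) for event in incidents]
print("after displacement:", recorded_incidents(displaced))   # 0
```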
This means that the actual target of this (apparent) clean-up gradually shifts from the public space itself to the sensor data (such as surveillance camera images), then to the “analytics” algorithm, and finally to the output statistics. In my essay (written in the spring of 2016, shortly before the first election of Donald Trump and against the backdrop of the “Alt-Right” campaign for him), I already proposed to read political populism as a carnal counter-reaction to such big data post-politics.
Regimes of visibility and racist bias
To stay on the theme of visibility with you: Given the sheer volume of video cameras installed for the Olympic Games, their footage will have to be sifted through algorithmically. We want to ask you about proliferation and automation, and the new regimes of visibility that these create. There is a lot of literature on data discrimination, on analytics perpetuating bias, on techno-policing and systemic racism. They produce situations in which, as Jackie Wang puts it, ‘geography is a proxy for race’, or where people are deemed ‘guilty by association’ because of the neighbourhoods where they live, work, or spend time. How do these new regimes of visibility intersect with pre-digital racist biases, some of which you describe in your paper?
The simplest answer lies in the very structure of AI machine learning, which is technically bound to be a replication of the past. It operates on the basis of a post-histoire epistemology, according to which the present and the future can be inferred, inflected, and predicted from past data. This, in turn, means that the present and the future, at least insofar as they exist as reality within the computational system, cannot exist outside the boundaries of that retrospective system-reality. This strikes me as the de facto end of Bateson’s definition of information as “a difference that makes a difference,” apart from stochastic remixes of that past data, which produce the odd glitch or surprising conjunction here and there, always within the limits of what was previously fed into the system and of how it can be recombined. Elsewhere, I have called this the “kaleidoscope constraint” of generative systems.[1] For my own work and field of research in the arts in relation to generative AI - as well as older combinatorial and generative systems - these structural limitations of machine learning ultimately mean the end of philosopher Max Bense’s equation of “aesthetic innovation” with stochastic-informational improbability, except when such systems are used only as auxiliary tools in a practice or process.
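As a minimal sketch of what I mean by the kaleidoscope constraint - an analogy only, not a description of any particular large model - consider a toy bigram generator: it can never emit a transition that was not already in its training data, so every output is a stochastic remix of the past.

```python
# Toy bigram text generator: every output recombines transitions that already
# exist in the training corpus, so nothing outside the past data can appear.
# (Illustrative analogy only, not a description of any specific AI model.)
import random
from collections import defaultdict

def train_bigrams(tokens):
    model = defaultdict(list)
    for current, following in zip(tokens, tokens[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8):
    output = [start]
    for _ in range(length - 1):
        candidates = model.get(output[-1])
        if not candidates:   # nothing beyond the training data can be emitted
            break
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = "the future is a remix of the past".split()
print(generate(train_bigrams(corpus), "the"))
```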
Automation and the repression of hermeneutics
A central part of your text addresses automation in another way too. Automation involves the delegation of decisions which, by definition, involve the assessment and analysis of different options. Thus, analytics and automation cannot do away with interpretation and hermeneutics, but simply defer them. This deferral amounts to a repression of the political, of the very possibility of choosing. What are the consequences of repressing interpretation, and ultimately political subjectivity, in technical systems? What are the symptoms, the return, of this fundamental repression?
I’d argue that it’s less a repression of the political as such than of its visibility. Politics is still enacted and political decisions are still being made in these processes and systems that involve both machine and human actors, much like in dystopian science fiction such as “Robocop” (which, in Paul Verhoeven’s original version, I think is a brilliant political reflection on this subject). This is why, in my essay, I compared big data politics to Colin Crouch’s theory of “post-democracy” and the “there is no alternative” regimes of Thatcher and Merkel.
What I have only touched on in passing, in the original essay, is the extent to which anti-humanist thinking in the humanities, particularly in media theory and the German humanities, has been unwittingly complicit in such denials of agency, throwing interpretation and hermeneutics, and ultimately subjectivity and human agency, out with the bathwater of philosophical idealism, driven partly by what I see as a simplified, existentialist reading of Foucault’s discourse theory.
But whether one believes in their existence or not, interpretation and political agency are inseparable, corresponding to the reciprocal pair of hermeneutics and poetics. Interestingly, this reciprocity seems to have been first understood in early computational schools of literature, such as the Stuttgart School of literary scholars and poets around Max Bense in the 1960s, whose “information aesthetics” and “artificial poetry” were two sides of the same coin. (I apologize for my repeated references to Bense, which are due to the fact that I have just returned from a conference on his work.) This has now been re-enacted in mainstream AI technology, beginning in 2015 with Google’s reversal of machine-learning image recognition into its “DeepDream” image generation neural network, which marked the beginning of today’s generative AI.
I don’t think the suppression of visibility as a political tactic is historically new. It’s the principle of undemocratic rule. Computer systems have only made it possible to (a) scale up such suppression through automation, and (b) deflect personal responsibility for it by blaming systems - or “the system” - as a scapegoat.
A concrete example is the childcare benefit scandal in the Netherlands, where flawed big data analytics were used to falsely accuse 26,000 parents of fraud, resulting in, among other things, one thousand children being taken from their families and placed in state custody. In such cases, the data analytics system functions as a grammatical deflection of personal responsibility and accountability through the use of the passive voice. In other words, the malfunctioning of “the system” is the equivalent of saying “mistakes were made”. The hermeneutic-interpretative and political response would be to ask: “Who made the mistake and why?”
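A hypothetical sketch - emphatically not the actual Dutch system, whose internals I am not reproducing here - of how a rule-based risk score combined with passive-voice output deflects responsibility:

```python
# Hypothetical sketch (not the actual Dutch system): a rule-based risk score
# in which a crude proxy feature drives the decision, while the output letter
# deflects responsibility onto "the system" through the passive voice.
# Every rule and weight below was written by someone; the phrasing hides that.

def fraud_risk(application: dict) -> float:
    score = 0.0
    if application.get("income", 0) < 20_000:
        score += 0.4        # a human chose this rule and this weight
    if application.get("dual_nationality", False):
        score += 0.5        # a discriminatory proxy, also chosen by humans
    return score

def decision_letter(application: dict, threshold: float = 0.8) -> str:
    if fraud_risk(application) >= threshold:
        # Passive voice: no author, no reasons, no one to hold accountable.
        return "Your benefit has been flagged as fraudulent by the system."
    return "No irregularities were detected."

print(decision_letter({"income": 18_000, "dual_nationality": True}))
```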
Symptoms of repression: crapularity and fascism
Can the Crapularity - which you define, following Justin Pickard, as a convergence of technologies “layered on top of each other and kept running without maintenance, often even without anybody around who still knows how they work” - itself be considered a symptom of the repression of subjectivity in analytics, the point at which the illusions of objectivity and control break? What are the political consequences of this ruptured illusion? In your text, you talk about two competing forms of fascism: big data fascism, on the one hand, and a return to “decisionist” fascism, precisely as a reaction to the repression of decision, of subjectivity, in big data.
Crapularities can often be the result of a suppression of critical questioning (in the simple sense of the child pointing out that the emperor has no clothes) and a failure to foresee the long-term consequences of short-term system design. But I am not advocating perfection in system design. In my opinion and observation, crapularity creeps into any system design (including curriculum design, to take an example from my own immediate work environment) when the ontology of the designed system begins to diverge from the ontology of the social system onto which it was mapped, resulting in the designed system being tweaked, patched, and hacked - in other words, reinterpreted - to fit the ground-level social dynamics that it is supposed to manage. This typically continues to the point where the designed system becomes completely dysfunctional or has been repurposed into something else. In this sense, the crapularity is just a symptom of, and a system state corresponding to, life’s messiness.
There is a certain affinity between the crapularity and the quotidian Filipino practice of “diskarte” - a waste economy, ecology, and culture which my colleague Clara Balaguer has analysed as “people saving and using all sorts of scraps and fragments” that may end up as makeshift, crude-looking but highly practical “site-specific design solutions”.[2] The crapularity is its corresponding opposite, a system marketed as a clean design while actually being messy and dysfunctional. The crapularity is the junk in a data center or AI model that is covered up rather than used overtly and creatively as diskarte. Its main functions are (a) pseudo-objective decisionism and (b) collateral damage. It is a perfect tool for masking military and genocidal actions in technocratic post-politics.
The discursive strength of contemporary fascism is that it superficially appears to alleviate over-complexity and unaccountability through its decisionism, while itself being inherently contradictory, irrationalist, and highly messy - often literally, like the interior of Boris Johnson’s car or the outer appearance of Javier Milei. It promises populist diskarte instead of crapular technocracy. Precisely because of its subjectivism, which includes messiness and chaos, it appears to be a more grounded and honest form of decisionism and fascism than the abstracted decisionism and fascism of big data systems. It is a politics that can even be openly and ostentatiously crapular without the crapularity appearing as failure.
This also explains what has fundamentally changed since I wrote my essay in 2016: big data fascism and populist-decisionist fascism have now fully converged, as we see, among other things, in Elon Musk’s role in the second Trump administration and in Peter Thiel’s role as a major backer of Vice President JD Vance. In 2016, this convergence existed only as a utopia in “dark enlightenment” accelerationist circles.
In contemporary - or, more precisely, postmodern libertarian - fascism, the crapularity is physically enacted, as in the orange-haired nihilist chaos-clown performances of Trump, Wilders, and Johnson (which are themselves fashion reenactments of Johnny Rotten’s 1977 punk hairstyle, Wilders being a punk rock fan and Rotten/Lydon a present-day Trump supporter). This fascism is postmodern because it offers no grand narrative except ruined, fragmentary evocations of nostalgia (such as “MAGA/Make America Great Again”) that function first of all as memes. It is carnival in Bakhtin’s sense while weaponizing the carnivalesque, with its libertarian politics and aesthetics of transgression being textbook examples of what Lyotard analysed as libidinal economy in the early 1970s.
By overtly embracing the crapularity, postmodern fascism has made itself seemingly invulnerable. Like the hedgehog in the fable of the hedgehog and the hare, it always appears to be ahead of its opponents. This is why both bourgeois-enlightened and neoliberal-technocratic liberalism, as well as classical Marxism, fail in their attempts to debunk postmodern fascism by pointing out its internal contradictions.
References:
[1] Florian Cramer, The Kaleidoscope Constraint: The automation of arts, seen from its dead ends, conference paper, 2020, http://cramer.pleintekst.nl/essays/kaleidoscope_constraint/; a revised German version is forthcoming in a book on generative AI poetics edited by Ann Cotten and Hannes Bajohr, Berlin: Matthes & Seitz, 2025.
[2] Florian Cramer, Marc van Elburg and Clara Balaguer, “Against the [cozy] prettyprinters: a defense of crappy print”, forthcoming in: Annette Gilbert and Andreas Bülhoff (eds.), Library of Artistic Print-on-Demand. Post-Digital Publishing in Times of Platform Capitalism, Leipzig: Spector Books, 2025.