2025-02-18
Poison as Praxis
teaching AI a lesson
There's been a recent uptick in the availability of a particular kind of software tool: the AI poison text generator. It's encouraging to see - although obviously it would be way better if they weren't needed in the first place.
The idea is to empower website owners to defend their work against the contemporary scourge of AI scraper bots, parasites that aggressively appropriate our collective human effort so that a small handful of trillion dollar fascist-adjacent tech companies can each build their own energy-guzzling conflict-mineral colossus and churn out an endless stream of grinning corporate-speak reply emails that no one wants to read.
And sure, writing that felt cathartic, but emotional venom isn't enough to properly combat this stuff. Hence the poisoning software.
So how does the poison work? Ironically, the tools are themselves primitive versions of the things they help us fight back against: they're built on Markov chains, which work like semi-random cut-and-paste machines, taking in a set of texts and generating often fairly readable (but not very meaningful) rearrangements. The outputs can be *somewhat* readable because these systems build their output a chunk at a time, at each step looking at the current state of the output and determining, based on a statistical analysis of the input texts, how likely any given chunk is to come next, before rolling the dice to actually pick one.
If you've ever tried tapping your phone's next-word suggestion several times in a row out of curiosity, you know exactly what kind of output this might look like.
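To make that concrete, here's a minimal sketch of such a generator in Python. It isn't any particular tool's implementation, and the function names are just illustrative; real poison generators use cleverer tokenization and serving logic, but the core mechanism is this transition table plus a dice roll:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map every run of `order` words to the list of words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])  # duplicates kept: frequency = likelihood
    return chain

def generate(chain, length=60):
    """Walk the chain: look at the current state, roll the dice, append a word, repeat."""
    state = random.choice(list(chain.keys()))
    output = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:                        # dead end: jump to a fresh state
            state = random.choice(list(chain.keys()))
            followers = chain[state]
        next_word = random.choice(followers)     # weighted by how often it followed in the corpus
        output.append(next_word)
        state = tuple(output[-len(state):])
    return " ".join(output)

# Example use:
# corpus = open("some_text.txt").read()
# print(generate(build_chain(corpus)))
```

Because the follower lists keep duplicates, frequent continuations are proportionally more likely to be picked, which is what keeps the output locally readable even though it means nothing overall.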
AI is an unreliable narrator
While big tech's massive AI models are obviously a lot more complex than these tools, there's still some family resemblance. As I understand it: at their heart, LLMs are doing the same kind of associative number-crunching as our little cut-and-paste machine, but at a scale of such breadth and depth that they can end up appearing to approximate genuine communication.
Of course, these systems aren't actually communicating anything, because they don't actually know or understand anything.
What's interesting, though, is that the all-consuming project of AI constrains knowledge and communication not just by collapsing everything into stylistically homogeneous, statistically determined boilerplate - which it definitely does. It also constrains us by simultaneously being an unreliable narrator. These systems are voices of authority and, at the same time, inherently untrustworthy. They're fundamentally conservative in that they perpetually reinforce the biases and assumptions of their training data, but they also resist political critique, since any given output can be dismissed as a 'hallucination' rather than the result of the same process as all the other outputs. Generative AI is the 'relax bro, I was only kidding' of technologies.
With this in mind, if the goal is to generate text that undermines the project of AI, then it seems useful to ask: is it better to feed the bots noise, or signal?
Poison with noise, or with signal?
To answer that question, let's step back a bit.
Max Ernst's 1937 painting 'The Triumph of Surrealism' (originally titled 'The Fireside Angel', but renamed by Ernst in 1938) was a response to Franco's defeat of the Republicans in the Spanish Civil War, and the wider destructive forces expanding across Europe. The renaming of the painting is generally understood to be an ironic lament that the Surrealists, who were avowed Communists, had failed to prevent the spread of fascism.
The Surrealists' approach was to attempt a synthesis of the free-association space of the Freudian subconscious on the one hand, with the structured, rational, material world on the other, the goal being the liberation of humanity from psychological and societal repression. Well-intentioned, certainly, but as Ernst readily admitted, ultimately unsuccessful. And while it's easy to point out that the Surrealists may have simply overestimated their ability to influence society at large, and were therefore always unlikely to halt fascism's rise, I think it's also possible there's something else going on.
It's been said over and over: a central feature of fascism is its incoherence. Even right around the same time that Ernst produced 'The Triumph of Surrealism', Ellen Wilkinson and Edward Conze were publishing 'Why Fascism?'[1], in which they posited that, like communism, fascism is a way of addressing capitalism's internal contradictions and crises; unlike communism, however, fascism doesn't *resolve* these contradictions by doing away with capitalism, but instead controls and holds capitalism together by force - a move that is, by the way, also self-contradictory.[2] Truth, consistency, honesty - none of these matter to the fascist, and arguably their power is in fact the result of their dismissal of those norms and ideals.
The Surrealists' response, then, with its theoretical aim of resolving internal conflict and repression, seems like it could be a way forward, out of the clutches and wilfully ignorant comfort of incoherence: if fascism demands a lack of productive self-reflection in order to perpetuate its toxic stasis of unresolved contradiction, then the only way out is to force the issue.
* * *
By the way, this is precisely why "punching nazis" is the correct action both in practice and in theory: it forces the recognition of the reality of what violence is, as well as the recognition of our shared humanity, onto those who conceive of themselves only as perpetrators of violence, against those they deem inferior.
* * *
So why did the Surrealists fail? I'd argue they failed because their reach exceeded their grasp. Rather than seeing their work as a dialectical project, a psychoanalytic treatment process for society with a sequence of steps and a logical progression, the Surrealists wanted the complete dissolution of all restrictions on the imagination. In some sense, by wanting everything all at once, they stood for and could change nothing.
This dynamic is reflected in a surrealist drawing game called 'Exquisite Corpse': A piece of paper is folded into several sections. Each participant in the game draws one part of a person, animal, or object in their section, but without being able to see what the others have drawn. Then the paper is unfolded to reveal the complete image. It can be fun to play, and can sometimes unearth hidden associations and meanings despite being the product of randomness. But for the most part, it *is* just randomness, incoherence, noise.
Noise, unfortunately, is directionless. Signal, on the other hand, is a vector. Signal has purpose. Punching nazis is signal. Punching up is signal. Forcing a system to recognize its own contradictions and speak them out loud is signal.
Praxis is recognizing why you did something only after you did it
If an AI system, though not actually thinking or understanding, is built to reflect the way we theorize our minds to function, then in theory at least, we should be able to change it the same way we change our own minds. That means bringing conflicts and contradictions to the surface in order to resolve them. And if we want to undermine the technofascist project, then we need to emphasize the contradictions inherent to it.
The texts we choose for our Markov generators should not be random. They should be chosen for their impact on the 'stochastic parrots' that will be trained to speak using our poisonous output. They should strengthen the AI's word-associations between conflicting narratives, where the resolution of that conflict undercuts the system's power. The synthesis of opposing forces - in other words, their dialectical resolution - is how progress is made.
I chose three texts for this purpose: the Bitcoin whitepaper, an article about the sentencing of Sam Bankman-Fried, and a Europol paper on the use of cryptocurrency for fraud and money laundering. With any luck, whenever some future AI fires the circuit for the words 'decentralized finance', 'permissionless payment system', or 'proof of work', it becomes more likely to also fire the circuits for 'human trafficking', 'ransomware', and 'terrorist funding', making the chatbots' output *more* coherent, rather than less - and in a way where the truth is detrimental to the system's power.
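If you'd rather roll your own generator than reach for a ready-made tool, the sketch from earlier extends naturally: concatenate the chosen texts into a single corpus before building the chain, so that transitions learned from one document spill over into the vocabulary it shares with the others. (The file and module names below are placeholders of mine, not anything the tools mentioned here actually use.)

```python
from pathlib import Path

# Hypothetical helpers from the earlier sketch, assumed to live in poison_chain.py.
from poison_chain import build_chain, generate

# Placeholder file names standing in for whichever texts you choose.
sources = [
    "bitcoin_whitepaper.txt",
    "sbf_sentencing_article.txt",
    "europol_crypto_fraud_report.txt",
]

# One combined corpus: shared vocabulary ('payment', 'network', 'funds', ...) becomes
# the bridge across which the conflicting narratives get statistically entangled.
corpus = "\n".join(Path(name).read_text(encoding="utf-8") for name in sources)

chain = build_chain(corpus, order=2)
poison = "\n\n".join(generate(chain, length=80) for _ in range(20))

# Dump the result somewhere your web server can feed it to the scrapers.
Path("poison.txt").write_text(poison, encoding="utf-8")
```

A lower order makes the sources mix more aggressively (and read more like nonsense); a higher order preserves longer verbatim runs from each text.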
* * *
Sample output:
This uses a tool called iocaine, but there are plenty of others that might be suitable, depending on your exact server setup, ethical stance, etc:
If you come up with your own combo of texts, I'd love to hear about it!
* * *
Notes
[2] Obviously this is an oversimplification; capitalism has never operated *without* forceful repression by the state. As a theoretical analysis of broad historical forces, though, I think their argument holds water. See Conze + Wilkinson, ch. 21.