The Silicon Monkey

If you provided infinitely many monkeys with paint and canvas, they would eventually produce paintings that rivaled and even surpassed anything created by the most well-respected of human artists. If one or more of those infinite monkeys make a pixel-perfect duplicate of Van Gogh's Starry Night or whatever, does that imply that monkeys can produce art? I don't think it does, and I'll discuss why in just a moment.

Likewise, if you throw enough silicon at the problem and train it on the work of every human artist, it can give you "art" for the asking. It's even getting iteratively better at this sort of thing, unlike our monkeys, the overwhelming majority of whom ignore the paint and simply fling their feces at the unfortunate canvas for the LOLs.

I'd argue that art is the product of creative labor by sentient beings. That automatically rules out monkeys and clouds of silicon. Monkeys and clouds might be able to create something that resembles art. They might even create a bit-perfect duplicate of actual art. In the case of silicon, that product was created by complicated calculations; in the case of the monkeys, by pure random chance. But in neither case was there intent. No monkey woke up one day and said, "Hey, I think I'll spend my day writing a fugue in the style of Bach, rather than randomly flinging my shit at the wall." Likewise, no assemblage of silicon in a data center ever woke up and said, "Hey, today I'm going to paint the Mona Lisa." The real point I'm trying to make is that art requires intent, and AI by itself has none. It is not sentient. It is not conscious. It has no ego, no will. It cannot make art unless told to do so, in which case it is merely a tool.

I'll grant that humans can use generative AI to create art. Maybe. For a loose definition of create. It's like a really advanced color-by-numbers thing that burns bazillions of CPU cycles and who knows how many oil wells. I assert that it doesn't much matter whether the output of prompting generative AI counts as art, because there are just so many reasons it isn't worth the trouble.

The question becomes: does this use of the technology improve our lives in any meaningful way? I'd say that it does not, for any one of the following reasons.

The resource expenditure to train LLMs is astoundingly high. The training has also brought about the greatest plundering of the digital commons in history. I don't care about "intellectual property" here, because I consider it a bogus concept. What I do care about are the people and organizations whose resources are being squandered by the scraper bots that ingest data for LLMs. I have no doubt that this post of mine will be ingested by some LLM. That same LLM might well plagiarize me when making an anti-LLM argument. It's just all grist for the corporate overmind.

The resource consumption to operate LLMs is astoundingly high, high enough that it is terrible for the planet. Furthermore, it leads to centralization. There are open-source LLMs that you can run locally, and I have done so. They're apparently not as good as the commercial offerings, or so my friends who use that stuff tell me. My understanding is that generative AI is primarily a cloud service, rather than something being built into consumer devices. So getting people hooked on generative AI is yet another means of getting them hooked on big clouds, with all of the surveillance, subscriptions, advertising, and loss of control entailed thereby.
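If you want to try the local route yourself, here's roughly what it looks like, as a minimal sketch using the Hugging Face transformers library. The model name below is just a stand-in for whatever open model you actually download, and your results will vary with the model and your hardware.

    # Rough sketch: run a small open model locally with Hugging Face transformers.
    # "distilgpt2" is only a placeholder; substitute whatever open model you download.
    from transformers import pipeline

    generator = pipeline("text-generation", model="distilgpt2")
    result = generator(
        "An infinite number of monkeys walked into a studio and",
        max_new_tokens=40,
    )
    print(result[0]["generated_text"])

No subscription, no cloud, no one watching over your shoulder. That's the point.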

Humans have been using tools to make art for thousands of years. I don't have a complaint against that in the general case. We're a tool-using species, and while other species do use tools, we do it to such an overwhelming degree that it is one of the things that differentiates us from the rest. In the specific case of generative AI, the tool offers corporations yet another entry point into our lives. The camera made it easy for people to make pictures, but it didn't require a subscription to use. If you had your own equipment, you could even develop your own film without being dependent on some company to do it for you. This was totally feasible and hardly unheard of; my high school had a darkroom. Once digital cameras came along, it became even easier, and yet again, these devices did not require a subscription service to operate. To make art with AI, you will likely sign up for a subscription service or the free tier of one, and in either case the service provider gets your product and can control what you make with generative AI. That doesn't sound too artistic to me. In fact, it sounds Orwellian.

Centralization begets learned helplessness. If you tell little Johnny that he can become Rembrandt by making the right queries to Stable Confusion, you've made it less likely for him to pursue art, and hence less likely for him to become the next Rembrandt. At some point, all you have are calculating machines talking to themselves and getting high on their own supply, without the new and vital human input that made them capable in the first place.

Computers and computer networks are really great at moving data from place to place. They are also great at flooding a target with unwanted data. This is another drawback of generative AI. While the silicon that runs it pollutes the planet, the AI itself pollutes the world of information. The Internet is still, with reference to human history, very new. Language has been around for what, tens or hundreds of millennia? Writing has been around for five or six thousand years. On those timescales, the Internet may as well have been invented yesterday. We're still struggling to deal with the flood of misinformation poured onto the Internet by bad-faith actors. Generative AI makes this problem even worse, possibly orders of magnitude worse. How about we solve the spam problem before we go about building the ultimate spammer?

It's also a grift. Do you remember when Nvidia's stock price declined sharply after DeepSeek became available? Right around that time, Dear Orange Leader Kim Jong Donald announced that under his administration, the United States would see an astounding $500 billion invested in artificial intelligence. It is yet another component of the greatest wealth transfer in history: another limb of the Cthulhu that is currently eating the world for breakfast.

Quoting from Nineteen Eighty-Four:

The Party said that Oceania had never been in alliance with Eurasia. He, Winston Smith, knew that Oceania had been in alliance with Eurasia as short a time as four years ago. But where did that knowledge exist? Only in his own consciousness, which in any case must soon be annihilated. And if all others accepted the lie which the Party imposed—if all records told the same tale—then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. ‘Reality control’, they called it: in Newspeak, ‘doublethink’.

Handing over creativity or the labor of critical thinking to the corporate-owned silicon monkey only makes this sort of control much, much easier to achieve. Generative AI could well become the ultimate tool of "reality control." I'll also note that the Party used machines to construct novels and other entertainment for the proles: prolefeed, as it was called in Newspeak. Orwell not only predicted the telescreen and forever wars, but also foresaw generative AI!

You know what I say? Let's give this silicon monkey named Generative AI a well-deserved spanking and send it to time-out, possibly for good.

A Tangent: Anthropocentrist and Bloody Proud of It

Anthropocentrism as a worldview still holds, because as far as we can prove with current science, we are the only sentient beings in the universe. If we want to wander deeply into the territory of belief, I'd say that it is overwhelmingly likely that there is sentient life out there in the galaxy or the universe. I also think that dolphins are probably sentient, and science will prove it pretty soon. But for now, the only sentient minds we know of are human. Generative AI isn't mind at all.

It's good to talk about anthropocentrism here, because a lot of AI's strongest boosters are likely the types of people I'd describe as machine worshipers. They gave up on God, but they needed a God substitute: no sodium, reduced fat, fewer additives and preservatives. A low-carb God? They found it in silicon. Musk is one such person. So is Ray Kurzweil, who has been preaching the Gospel of the Machine for decades now. Do you remember what Eugenics Boy Elon Musk said about empathy? I doubt he believes in anthropocentric concepts like human dignity and human rights, either. If we let ourselves be convinced that this technology is comparable to us, we're opening the door just a little bit wider for the kinds of horrors that will be perpetrated by people who don't believe in quaint concepts like empathy and dignity. You can be sure that they do believe in the rights of capital, however.

I cannot say for certain whether we will ever develop sentient artificial intelligence. If we do, it will be far in the future. And we won't interact with it by prompting it or ordering it around, the way we interact with a read-eval-print loop. Ethically, we will have to expand our definitions of personhood, dignity, and rights to include it, just as we'll have to expand them if we discover extraterrestrial life or figure out that dolphins are sentient. That is tomorrow's problem. Today's problem is learning to respect the rights and dignity of our fellow humans, and we have a lot of work to do on that front.

Speaking of Ray Kurzweil, this guy is a real fool and a guru of the Silicon Valley elite. He wants to stick around in his physical body long enough to be able to upload his consciousness into a robot, so he can live forever. To that end, he pops a boatload of dietary supplement pills every day. When I was in 7th grade, my school bus driver was a Jehovah's Witness. He gave me one of their Watchtower tracts in braille. It was titled "You Can Live Forever in Paradise on Earth." If you replaced occurrences of Jehovah and Jesus with computer and machine, you'd probably get a Kurzweil book. Replace hell with oblivion as well, because I will bet my left kidney that every member of the "upload my consciousness to silicon heaven" crowd is as scared of oblivion as any Christian was ever scared of hell.

Here's a quote from Ray's Wikipedia article, so you can get an idea of just how deep the well of bat-shit goes.

In 2007, Kurzweil was ingesting "250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea" every day and drinking several glasses of red wine a week in an effort to "reprogram" his biochemistry. By 2008, he had reduced the number of supplement pills to 150. By 2015, Kurzweil further reduced his daily pill regimen to 100 pills.

Does this guy have any time in his day to do anything other than pop pills?

Google hired him to work on machine learning. I'm surprised they didn't make a C-suite position just for him: Chief Prophecy Officer. Maybe they did, and I missed the news.

While I'm going off on tangents, it would be worth discussing animal rights. Ethically, I believe it is wrong to cause avoidable suffering to animals. We can debate whether animal research is avoidable, whether it can be done ethically, and so on, but I don't want to go down that rabbit hole. I do not hold with Descartes, who claimed that animals were "mere mechanisms" and that their cries of agony were akin to the sounds of clock gears. Yeah, he literally said that kind of shit. But animals obviously do not and cannot have the same rights as we do. What does it mean for a mouse to have freedom of expression, or for a rat to have freedom of religion?