Whither the AI Apocalypse?

Ten years ago, I wrote an essay for my Open News Fellowship which critically assessed the Web Analytics industry’s proclamation that “the pageview is dead.” The essay got some traction online and was eventually used as a framework for analyzing a right-wing politician’s use of the “rapture” when criticizing Obama’s foreign policy. The gist of the argument, which borrowed from Jacques Derrida’s 1982 essay Of an Apocalyptic Tone Recently Adopted In Philosophy, was that doomsayers are primarily “concerned with seducing you into accepting the terms on which their continued existence, their vested interests, and their vision of ‘the end’ are all equally possible.” As Derrida concisely put it: “the subject of [apocalyptic] discourse [hopes] to arrive at its end through the end.” A decade later, I can confidently say that the pageview is very much still alive and that the leaders of the analytics companies who predicted otherwise are now very rich.

––

Yesterday, a group of academic and industry leaders signed a statement issuing this stark, apocalyptic proclamation:

Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.

The statement implies that since AI is a “societal-scale” risk, it demands governmental regulation on the scale of responses to other apocalyptic threats (note the blatant omission of climate change). By positioning themselves as “AI experts,” the signatories also place themselves in a natural position to help craft these regulations, given the complexity of the technology. While they are purposefully vague about what an AI apocalypse might look like, most doomsday scenarios follow the concept of the “singularity,” in which AI undergoes “a ‘runaway reaction’ of self-improvement cycles… causing an ‘explosion’ in intelligence and resulting in a powerful superintelligence that…far surpasses all human intelligence.” As the story goes, this superintelligence, no longer moored to its creators, begins to act in its own interests and eventually wipes out humanity. A particularly sophisticated and influential example of this narrative was published on the online forum LessWrong last year.

––

Earlier this month, an interesting document, confirmed to have originated with a Google AI researcher, was leaked. The essay, entitled We Have No Moat, And Neither Does OpenAI, traces the rapid development of open-source alternatives to OpenAI’s and Google’s large language models (LLMs) over the past few months. It argues that the open-source community is much better equipped to push the forefront of LLM research because it’s more nimble, less bound by bureaucratic inefficiencies, and has adopted techniques like “Low-Rank Adaptation” (LoRA) for fine-tuning models without the need for large clusters of GPUs (a rough sketch of what that looks like in practice appears below). As the author summarizes:

We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google.

People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.

Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly.

For anyone who has been closely following the development of open-source tooling for LLMs over the past couple of months, these assertions are not particularly controversial. Each day brings a fresh crop of tweets and blog posts announcing a new software package, model, hosting platform, or technique for advancing LLM development. These tools are then quickly adopted and iterated upon, leading to the next day’s announcements. In many ways, this rapid advancement mirrors the ‘runaway reaction’ at the heart of the singularity narrative.
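To make the LoRA reference above a bit more concrete, here is a minimal sketch of low-rank fine-tuning using the open-source Hugging Face peft and transformers libraries. The base model, target modules, and hyperparameters are illustrative choices of mine, not details from the leaked memo; the point is simply that only a tiny set of adapter weights gets trained, which is why this kind of work fits on a single consumer GPU rather than a data-center cluster.

```python
# A minimal, illustrative sketch of LoRA fine-tuning with Hugging Face's
# open-source `peft` and `transformers` libraries. The base model, target
# modules, and hyperparameters are assumptions for illustration, not details
# from the leaked memo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "facebook/opt-1.3b"  # any small open LLM works here
base = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Freeze the base model and inject small low-rank adapter matrices into the
# attention projections; only these adapters are trained during fine-tuning.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the updates
    target_modules=["q_proj", "v_proj"],  # OPT's attention projection layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Typically well under 1% of the original parameter count is trainable,
# which is why this can run on a single consumer GPU (or even a laptop).
model.print_trainable_parameters()
```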

––

If we understand apocalyptic narratives as a rhetorical sleight of hand, then we must ask the same questions of the AI industry that my previous essay asked of the Web Analytics industry:

What then of the [AI] apocalypse and the prophets who giddily proclaim it? To what ends are these revelations leading us? What strategic aims and benefits are these claims predicated upon?

Given the arguments outlined in the leaked document above (and backed up by anecdotal evidence), we can conclude that the “existential threat” AI companies are most concerned with is their inability to profit from the nascent “AI boom.” Google is particularly vulnerable to this threat given its heavy reliance on search-related advertising, which reportedly accounts for more than 80% of its yearly revenue. Regulation, while absolutely necessary, could be shaped in a way that stifles the rapid development of the open-source community by imposing complex security, privacy, or other requirements, thereby making it illegal (or at least cost-prohibitive) for an individual to develop an LLM on their laptop. Since many “AI experts” work directly for, or receive funding from, the companies whose leaders signed the statement, it stands to reason that the aim of this proclamation is to ensure they have a hand in shaping the eventual regulations in a way that secures their monopolistic domination of the AI industry.

––

I have a different take on the “singularity,” which is informed by having recently finished Palo Alto, an excellent history of American capitalism vis-à-vis the tech industry. The basic idea is that capitalists have long sought to replace employees with automation and, for those tasks they can’t fully automate, to make the workers performing them function more like machines. In this reading, the “singularity” is not achieved simply by making machines “sentient,” but by simultaneously turning humans into machines, effectively lowering the bar an AI has to jump over to achieve sentience; if you work in an Amazon Fulfillment Center, you are already functioning very much like a machine, and much of your work is probably focused on training the robots that will replace you. Keep in mind that the crucial difference between GPT-3 and ChatGPT was the addition of reinforcement learning from human feedback (RLHF), which involved underpaying and overworking Kenyan contractors to make the model “less toxic” (an exploitative process that AI researchers treat as a “joke”). That the AI industry’s development will likely coincide with a rise in factory-like conditions for those asked to train the models is no less disastrous a scenario, but at least in this reading we can rightfully point the finger at the capitalists for the “end of the world” rather than at AI. To modify William Gibson’s adage (which might not actually be his): “the apocalypse is already here; it’s just not evenly distributed.”

