Whither the AI Apocalypse?

Ten years ago, I wrote an essay for my Open News Fellowship which critically assessed the Web Analytics industry’s proclamation that “the pageview is dead.” The essay got some traction online and was eventually used as a framework for analyzing a right-wing politician’s use of the “rapture” when criticizing Obama’s foreign policy. The gist of the argument, which borrowed from Jacques Derrida’s 1982 essay Of an Apocalyptic Tone Recently Adopted in Philosophy, was that doomsayers are primarily “concerned with seducing you into accepting the terms on which their continued existence, their vested interests, and their vision of ‘the end’ are all equally possible.” As Derrida concisely put it: “the subject of [apocalyptic] discourse [hopes] to arrive at its end through the end.” A decade later, I can confidently say that the pageview is very much still alive and that the leaders of the analytics companies who predicted otherwise are now very rich.

––

Yesterday, a group of academic and industry leaders signed a statement issuing this stark, apocalyptic proclamation:

Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.

The statement implies that since AI is a “societal-scale” risk, governmental regulation on the scale of responses to other apocalyptic threats is required (note the blatant omission of climate change). By positioning its signatories as “AI experts,” the statement also puts them in a natural position to help craft these regulations, given the complexity of the technology. While they are purposefully vague on what an AI apocalypse might look like, most doomsday scenarios follow the concept of the “singularity,” in which AI undergoes “a ‘runaway reaction’ of self-improvement cycles… causing an ‘explosion’ in intelligence and resulting in a powerful superintelligence that…far surpasses all human intelligence.” As the story goes, this superintelligence, unmoored from its creators, begins to act in its own interests and eventually wipes out humanity. A particularly sophisticated and influential example of this narrative was published on the online forum LessWrong last year.

––

Earlier this month, an interesting document was leaked and confirmed to have originated from a Google AI researcher. The essay, entitled We Have No Moat, And Neither Does OpenAI, traces the rapid development of open-source alternatives to OpenAI’s and Google’s large language models (LLMs) over the past few months. It argues that the open-source community is much better equipped to push the forefront of LLM research because it’s more nimble, less bound by bureaucratic inefficiencies, and has adopted tools like “Low-Rank Adaptation” for fine-tuning models without the need for large clusters of GPUs. As the author summarizes:

We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google.

People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.

Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly.

For anyone who has been closely following the development of open-source tooling for LLMs over the past couple of months, these assertions are not particularly controversial. Each day brings a fresh crop of tweets and blog posts announcing a new software package, model, hosting platform, or technique for advancing LLM development. These tools are then quickly adopted and iterated upon, leading to the next day’s announcements. In many ways, this rapid advancement mirrors the ‘runaway reaction’ at the heart of the singularity narrative.

––

If we understand apocalyptic narratives as a rhetorical sleight-of-hand, then we must ask of the AI industry the same questions my previous essay asked of Web Analytics:

What then of the [AI] apocalypse and the prophets who giddily proclaim it? To what ends are these revelations leading us? What strategic aims and benefits are these claims predicated upon?

Given the arguments outlined in the leaked document above (and backed up by anecdotal evidence), we can conclude that the “existential threat” AI companies are most concerned with is their inability to profit from the nascent “AI boom.” Google is particularly vulnerable to this threat given its heavy reliance on search-related advertising, which reportedly accounts for more than 80% of its yearly revenue. Regulation, while absolutely necessary, could be shaped in ways that stifle the rapid development of the open-source community by requiring complex security, privacy, or other restrictions, thereby making it illegal (or at least cost-prohibitive) for an individual to develop an LLM on their laptop. Since many “AI experts” work directly for, or receive funding from, the companies whose executives signed the statement, it stands to reason that the aim of this proclamation is to guarantee they have a hand in shaping the eventual regulations in a way that ensures their monopolistic domination of the AI industry.

––

I have a different take on the “singularity,” informed by recently finishing Palo Alto, an excellent history of American capitalism vis-à-vis the tech industry. The basic idea is that capitalists have long sought to replace employees with automation and, for those tasks they can’t fully automate, to make the workers performing them function more like machines. In this reading, the “singularity” is not achieved simply by making machines “sentient,” but by simultaneously turning humans into machines, effectively lowering the bar an AI has to jump over to achieve sentience; if you work in an Amazon Fulfillment Center, you are already functioning very close to a machine, and much of your work is probably focused on training the robots that will replace you. Keep in mind that the crucial difference between GPT-3 and ChatGPT was the addition of reinforcement learning from human feedback (R.L.H.F.), which involved underpaying and overworking Kenyan contractors to make the model “less toxic” (an exploitative process AI researchers treat as a “joke”). That the AI industry’s development will likely coincide with a rise in factory-like conditions for those asked to train the models is no less disastrous a scenario, but at least in this reading we can rightfully point the finger at the capitalists for the “end of the world” rather than AI. To modify William Gibson’s adage (which might not actually be his): “the apocalypse is already here; it’s just not evenly distributed.”


How to make a custom short URL and QR code site for free

TL;DR: The source code is here.

As part of its regular operations, Bushwick Ayuda Mutua (BAM) makes frequent use of short URLs and QR codes to share links to assistance request forms, volunteer sign-up forms, and other important information hosted online.

Like many small groups, BAM relied on bit.ly and a bevy of other QR-code generation platforms, which either cost a lot of money (bit.ly is $30/month) or harvest your data, exposing community members to potential privacy violations.

However, the underlying technology for creating short URLs and QR codes is fairly simple and can easily be replicated using urlzap, Python’s qrcode package, and GitHub’s Actions and Pages products.

I packaged these tools together into baml.ink, which can be forked and reconfigured to create your own fully static short URL / QR code generation service.

The repository contains a human-readable, editable YAML file (named “baml.yaml” :p) which looks something like this:

urls:
  ig: https://www.instagram.com/bushwickayudamutua/

Here, “ig” is the path of the short link, so you could share “baml.ink/ig” and it would point to “https://www.instagram.com/bushwickayudamutua/”. Each time this file is updated on the “main” branch, a GitHub Action is triggered which runs urlzap; urlzap fetches the metadata for the long URL and creates a static HTML file that includes the following meta tag in the “<head>” of the document:

<meta http-equiv="refresh" content="0; url=https://www.instagram.com/bushwickayudamutua/" />

This tells browsers to redirect the visitor to “https://www.instagram.com/bushwickayudamutua/”. Because the page also includes the metadata present on the source page, the short URL will appear normally when unfurled within messaging apps.

This static HTML file is then added to the “gh-pages” branch of the repository and deployed as a new page hosted by GitHub. You can see a full example of such a file here.
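
For the curious, here’s a minimal Python sketch of what this step effectively produces. It’s an illustration, not the real pipeline: it assumes the common index.html-per-path layout for static hosting, and it skips the metadata-copying the actual urlzap Action performs:

# A stripped-down sketch of the redirect-page step. The real urlzap
# Action also copies over the long URL's metadata for unfurling.
import pathlib

import yaml  # pip install pyyaml

TEMPLATE = """<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="refresh" content="0; url={url}" />
  </head>
</html>
"""

# Read the short-path -> long-URL mapping from baml.yaml.
config = yaml.safe_load(pathlib.Path("baml.yaml").read_text())

# Write one redirect page per short path (e.g. ig/index.html), which
# GitHub Pages then serves at baml.ink/ig.
for short_path, long_url in config["urls"].items():
    page = pathlib.Path(short_path) / "index.html"
    page.parent.mkdir(parents=True, exist_ok=True)
    page.write_text(TEMPLATE.format(url=long_url))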

How QR code generation works:

In addition to the link-shortening process, a second script is executed each time there is an update to the “main” branch; it iterates through the list of URLs in “baml.yaml”, generates a QR code for each short URL, and writes it to the “qr/” directory of the repository. These images take the format “https://baml.ink/qr/{short_path}.png”. So, given the Instagram example above, the QR code would be hosted at baml.ink/qr/ig.png.
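
The core of that script is only a few lines. Here’s a hedged sketch of the idea using Python’s qrcode package (the names here are illustrative; the actual script lives in the repository):

import pathlib

import qrcode  # pip install qrcode
import yaml    # pip install pyyaml

BASE_URL = "https://baml.ink"

# Read the same short-path mapping the link shortener uses.
config = yaml.safe_load(pathlib.Path("baml.yaml").read_text())

out_dir = pathlib.Path("qr")
out_dir.mkdir(exist_ok=True)

# Render one PNG per short path, e.g. qr/ig.png for baml.ink/ig.
for short_path in config["urls"]:
    image = qrcode.make(f"{BASE_URL}/{short_path}")
    image.save(out_dir / f"{short_path}.png")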

How non-coders can use this:

While this project must be hosted on GitHub to remain free, centralizing all short URL and QR code generation into a single file means volunteers can use GitHub’s built-in code editor to add new URLs to “baml.yaml” and commit their changes, all without ever opening a terminal or cloning a repository. These changes are then automatically applied via GitHub Actions, and the new short URL and QR code should go live within minutes.


welcome-to.miami

loop / video / art

While migrating my old Splice projects before they shut down their Studio tool (RIP), I unearthed a screen recording of a pair of sites I built a few years ago, “welcome-to.miami” and “bienvenidos-a.miami”. Each site played a short loop of the music video by @willsmith with the relevant lyric; “welcome-to.miami” would redirect to “bienvenidos-a.miami”, while “bienvenidos-a.miami” would redirect back and also open a new tab of “welcome-to.miami”, such that the number of tabs grew exponentially, causing a cacophonous, phasing effect. I eventually took the sites down once browsers disabled video autoplay. Happy to have found documentation of it :)


WFMU Radio Row Mix

dj / music / mix

I’m excited to share a new mix I made for Radio Row, a program run by Olivia Bradley-Skill, WFMU’s Music Director. If you’re interested in submitting your own show, you should fill out this form, as it’s a great opportunity to play your favorite music on the airwaves.

I love WFMU. We listen to it daily in our house and I value it as a source of new music. For my submission, I wanted to showcase a style of mixing that is not often done on air, making use of DJ tools, acapellas, and live effects to create a multi-layered, ever-evolving blend of dance music, while also paying homage to some of the halcyon sounds WFMU DJs gravitate towards. It features many of my favorite songs and artists, and some of my closest friends and mentors.

It’ll broadcast live this Sunday, April 23rd, at 5 PM ET on 91.9 FM in NYC and online at wfmu.org.

I hope you like it <3.


globally.ltd/studio

As part of releasing software and music on my label globally.ltd over the last couple of years, I’ve slowly built up a functioning recording studio in my basement, and I’m excited to now open it up to others.

globally.ltd/studio is a small, project-based art and recording studio in Glendale, Queens offering a variety of services including in-person (solo or supervised) sessions and remote mixing, mastering, and mentorship.

Rates will be give-what-you-can, with a suggested range of $15-30/hr, though bartering and skill-sharing are preferred!

You can read more about it at: globally.ltd/studio/.


bripolar #1

I’m pleased to be releasing bripolar #1, the first in a collection of mostly live takes of hard, industrial techno. It’s been fun rigging up my mixer with multiple sends and bus channels to really achieve a rich, ever-evolving sound. More of these to come…



Making your Mac sing

Saysynth logo

When I was first introduced to a computer, my parents made it talk to me. I was transfixed by its text-to-speech capabilities and, throughout my childhood, joyously commanded it to “Start Speaking.” Perhaps it was because I was ashamed of my own speech impediment — for which I was often bullied in school — but the awkward and monotonous cadence of its voice was endearing to me, and I spent countless hours playing with it.

Sometime in college I realized that you could, with your own voice, ask it “What time is it?” and it would respond in its synthesized drawl. I proceeded to make this part of my morning routine, much to the annoyance of my roommates, since the feature rarely worked and required finding the exact phrasing of the prompt.

As I began programming computers in my twenties, I discovered that you could control these voices in your terminal:

$ say -v Fred "hello world"

The say command and its many options and voices deepened my fascination with text-to-speech. With each new technology I experimented with (Twitter bots, data sonification, haiku generation, or chat.meatspac.es), I would inevitably try piping the output of my programs into say. Just to experience, once again, the childlike wonder of my computer talking to me.

saysynth represents the outcome of a lifelong fascination with these synthesized voices, sparked by the discovery of an obscure documentation website for Apple’s Speech Synthesis Framework. saysynth works by harnessing a domain-specific language (DSL) Apple created to allow users to control the duration and pitch contours of individual phonemes, enabling the creation of musical passages with three of Mac’s built-in voices (Fred, Victoria, and Alex). By releasing it as open-source software, I hope to make it widely accessible to musicians and tinkerers alike.
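
To give a flavor of the DSL, here’s a hedged sketch (macOS only) that hands Apple’s TUNE phoneme markup straight to the built-in say command. The opcode format follows Apple’s Speech Synthesis documentation, but the phonemes, durations, and pitch values below are invented for illustration and aren’t drawn from saysynth itself:

import subprocess

# Each line inside the TUNE block is a phoneme opcode with a duration
# in milliseconds (D) and pitch targets in Hz at percentages through
# the phoneme (P). The values here are invented for illustration.
tune = (
    "[[inpt TUNE]]\n"
    "~\n"
    "AA {D 700; P 220.0:0 220.0:100}\n"  # hold an "ah" at A3
    "AA {D 700; P 261.6:0 261.6:100}\n"  # then step up to C4
    "[[inpt TEXT]]"
)

# macOS only: pass the markup to the system speech synthesizer.
subprocess.run(["say", "-v", "Fred", tune], check=True)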

But while I’m excited for people to play around and build on top of saysynth, I’m happy if I end up being the only one who ever uses it. Sometimes the greatest joy in creation is satisfying your own curiosity.

I still have many features planned, including pitch modulation, real-time MIDI control, and hopefully a UI or interactive website. And with each release, I hope to create new demos! Give them a listen and let me know what you think. xx