In 2019, OpenAI released GPT-2, a language model capable of generating whole paragraphs of text at a time. GPT-2’s output, stripped of inhibition and ego, offers delightful linguistic surprises run after run.
As exciting as it is to watch a machine produce something so convincingly human, the novelty eventually wears off. When it does, we’re left to wonder: how do we make this statistical trick — an assembly of words no longer contingent on an author’s intention — mean something to us?
In Exercises in Meta-cohesion, my mechanical co-writer (GPT-2) and I tell two stories. First, we tell a fictional tale of characters whose connections to each other, fragile as they are, build a society out of selves. Underneath this surface, we tell our own tale of human and machine working together through formulas, improv, and endless material to put words artfully together.
Read more in the Hindsight Reader.
In this project’s namesake, Exercices de Style, Raymond Queneau writes the same story ninety-nine times, changing the style with each iteration. Here, GPT-2 and I execute our own collaborative procedure twelve times, changing the texture of the language each time.
I give GPT-2 a narrative scaffolding in the form of four fixed prompts: “I wish people understood me better”; “I just want one thing”; “Why? Because”; and “Maybe we’re not that different.” I ask GPT-2 to continue each prompt for one to three sentences before moving on to the next.
Once GPT-2 has filled in the narrative scaffolding, I remove the scaffolding from the final surface form.
I repeat this process twelve times, changing GPT-2’s tuning corpus for each iteration to give rise to twelve distinct personalities. As a final step, each personality is linked to another through manually written narrative threads.
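The fill-and-strip procedure above can be sketched in a few lines of Python. This is a simplified illustration, not the project's actual code: `generate` is a stand-in for sampling from a fine-tuned GPT-2 model, and the continuation text is a placeholder.

```python
# The four fixed prompts that form the narrative scaffolding.
PROMPTS = [
    "I wish people understood me better",
    "I just want one thing",
    "Why? Because",
    "Maybe we're not that different",
]

def generate(prompt):
    # Placeholder for a real model call, e.g. sampling 1-3 sentences
    # from a fine-tuned GPT-2 with this prompt as the prefix.
    return prompt + " ... (model continuation)"

def fill_scaffolding(prompts, generate_fn):
    """Run each prompt through the model, then strip the prompt text
    so only the continuations remain in the surface form."""
    surface = []
    for prompt in prompts:
        continuation = generate_fn(prompt)
        # Remove the scaffolding prompt from the final output.
        if continuation.startswith(prompt):
            continuation = continuation[len(prompt):].strip()
        surface.append(continuation)
    return "\n".join(surface)

print(fill_scaffolding(PROMPTS, generate))
```

One run of `fill_scaffolding` corresponds to one iteration; repeating it with twelve differently tuned models yields the twelve personalities.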
Exercises in Meta-cohesion uses PRAW (the Python Reddit API Wrapper) to create tuning corpora, and Max Woolf’s gpt-2-simple to fine-tune GPT-2.
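A hedged sketch of what that tooling pipeline might look like: PRAW pulls posts from a subreddit into a plain-text tuning corpus, and gpt-2-simple fine-tunes GPT-2 on it. The subreddit name, credentials, model size, step count, and file paths here are placeholders, not the project's actual values.

```python
def submissions_to_text(submissions):
    """Join (title, body) pairs into one plain-text tuning corpus."""
    chunks = []
    for title, body in submissions:
        chunks.append(title)
        if body:  # skip empty selftexts (e.g. link posts)
            chunks.append(body)
    return "\n\n".join(chunks)

def build_corpus(subreddit_name, out_path, limit=500):
    import praw  # pip install praw
    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",        # placeholder credentials
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="corpus-builder",
    )
    posts = [(s.title, s.selftext)
             for s in reddit.subreddit(subreddit_name).top(limit=limit)]
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(submissions_to_text(posts))

def finetune(corpus_path, run_name):
    import gpt_2_simple as gpt2  # pip install gpt-2-simple
    gpt2.download_gpt2(model_name="124M")  # smallest GPT-2 checkpoint
    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, corpus_path, model_name="124M",
                  run_name=run_name, steps=1000)
```

Running `build_corpus` twelve times against twelve different subreddits, then `finetune` once per corpus, would produce twelve differently voiced models.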