Against Output

AI writing has made a singular focus on textual meaning untenable. Texts and their meanings matter primarily because they provide occasion for interpretation and thinking.


No theorist in the humanities simply writes up their discoveries. The writing process is constitutive of any meaning in humanities scholarship. So, as radically focused on authorship as “Against Theory” is—undercutting all literary criticism by claiming that the author is the sole arbiter of a text’s meaning—it misses the real action in authorship. It’s oriented to outputs, not processes; texts, not writers. Steven Knapp and Walter Benn Michaels introduce us briefly to lab-coat-wearing figures in a submarine who might be experimenting with the impossible wave poem on the beach, but these author-engineers submerge as quickly as they surfaced. What the process of writing does for an author is nowhere discussed in their essay, and rarely in literary theory more generally. Yet one thing we literary theorists, compositionists, and humanities scholars have going for us in the context of generative AI is a deep understanding of the processes of writing, reading, and interpretation. We gain that understanding by writing, and we share it by teaching.

By forbidding literary interpretation, Knapp and Michaels suggest scholars and students can simply retrieve meaning from authors, tacitly arguing for a “banking model” of meaning. Paulo Freire derides this model of education because it positions students as passive vessels of others’ perspectives. Students learn only to parrot the values and language of their teachers.

The dangers of producing such students mirror those Emily Bender and her coauthors identify in language models that merely parrot their training data. Bender and colleagues define language models as “systems which are trained on string prediction tasks,” that is, models that produce expected language and avoid interpretation. They argue for scrutiny of the training data and an understanding of how it operates in social contexts—in other words, interpretation. They advocate for a focus on the processes behind language models, not just their output. When we teach writing, we must avoid producing the student who is a language model, “a stochastic parrot.”

If we cultivate stochastic parrots, we risk becoming them ourselves. Pinning the locus of meaning solely to the author undermines the literate development of both students and scholars.

Scholars learn through teaching—both from the processes of textual curation and interpretation that teaching requires and through the questions and ideas their students bring through their writing. That we learn through the interpretive processes of pedagogy is obvious to anyone who has ever taught a course, and yet it is rarely acknowledged in literary scholarship, as Rachel Sagner Buurma and Laura Heffernan note in The Teaching Archive: A New History for Literary Study (2020). By shifting the lens from textual output to teaching, Buurma and Heffernan demonstrate how students and the act of teaching not only influenced but also constituted the scholarship of major literary theorists. Their work makes explicit the importance of teaching for thinking and, I would argue, for the literate development of scholars.

We’re accustomed to thinking about literacy as concerning those who have very little of it: illiteracy, low literacy, early childhood education. Scholars have so much literacy that we cease to consider it as such. So, in the production of scholarship, we imagine a difference between ourselves and our students—that what we produce matters in some important way, while what they produce is merely an exercise for their development. But in the humanities, our interpretive processes are the point, not our output. We’re about the journey, not the destination.

In the present and near future of AI writing, how do we avoid producing stochastic students or becoming language models ourselves? Bender and her coauthors argue for more critical engagement with language models and implicitly offer us paths forward. They note that language models work statistically, predicting next words without reference to meaning. Because writing from language models tends toward the predictable, AI writing detectors use perplexity, a measure of how surprising a text is to a model, to discriminate between AI and human writing. But a student, if we’re not careful, can also predict words without reference to meaning. If a student is taught and rewarded for commonplaces and stock genres, they will reproduce their training data: the boring commonplaces no teacher relishes reading and no writer learns from reproducing. Instead, we should teach and write for perplexity, not so much to evade AI detectors as to avoid the commonplaces that block critical thinking. We should all write for critical inquiry.
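Perplexity, the measure detectors of this kind rely on, can be sketched in a few lines: it is the exponentiated average negative log-probability a model assigns to each token of a text. The sketch below uses hypothetical token probabilities for illustration, not output from any real model or detector.

```python
import math

def perplexity(token_probs):
    """Perplexity of a text, given the probability a model assigned each token.

    Computed as exp of the average negative log-probability: low values mean
    the model found the text predictable; high values mean it was surprised.
    """
    n = len(token_probs)
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_log_prob)

# Hypothetical probabilities: a "predictable" text whose tokens the model
# expected, versus a "surprising" text full of low-probability tokens.
predictable = [0.9, 0.8, 0.85, 0.9]
surprising = [0.1, 0.05, 0.2, 0.08]

print(perplexity(predictable) < perplexity(surprising))  # prints True
```

On this logic, a detector flags low-perplexity writing as machine-like; the essay’s point is that formulaic student prose can score just as low.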

Bender and her coauthors ask whether language models can be too large. At scale, the (re)production of language is uncurated and statistical. The second lesson we can take from them, then, is the danger of scale in our teaching and writing. The relentless drive for greater scale in the tech industry—more parameters, larger datasets, leaner but more efficient teams—mirrors the pressures on general education in the modern university. How can universities deliver education at scale, cutting costs while agilely pivoting toward current job markets? AI writing ratchets up this pressure while simultaneously offering a potential solution: automating teaching through AI feedback and editing. Will this solution increase literate development for either students or teachers? Or merely their output? Outputs scale, but literacies do not. Texts scale; writers don’t. Even leaders in tech worry that AI’s displacement of writing processes may undermine the critical thinking of individual writers. Jack Clark (Anthropic) and Paul Graham (Y Combinator) have both recently asserted the value of writing processes to thinking. Certainly the thousands of words I wrote for this piece—and then cut—helped me think through some implications of generative AI for teaching and writing.

AI writing, in short, has made a singular focus on textual meaning untenable: texts and their meanings matter primarily as occasions for interpretation and thinking. I disagree with Knapp and Michaels that “the theoretical enterprise should come to an end,” not because what literary theory tells us matters—it doesn’t—but because the act of writing it matters: how we all learn and change in the process. We write to learn alongside our students. Attention to the processes of writers and readers over their output can return us to questions about why writing and the humanities matter. We’re not here for the texts; we’re here to learn.

Annette Vee

Annette Vee is associate professor of English and director of the composition program at University of Pittsburgh, where she teaches courses in writing, digital composition, materiality, and literacy. At the intersection of computation and writing, her research encompasses computer programming, AI writing, blockchain technology, NFTs, surveillance, and intellectual property.
