Afterword: Learning to Read AI Texts



Left uninterrogated in “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” and “Against Theory” is what “authorial intent” means for LLMs, as well as the assertion that meaning must derive from real-world experience. One line of attack would notice that LLMs certainly do have intentions, also known as their programs. Moreover, these originate in human brains, so intentions, including the intention to communicate, underlie their architectures and pervade their operations. “But that is not what we meant,” the “On the Dangers of Stochastic Parrots” and “Against Theory” crowd snaps; “we meant that the models themselves do not have intentions.” This assertion requires that we examine LLM programs to see whether or not they are set up to generate intentions.

As Leif Weatherby and Brian Justie note, LLMs are inherently indexical. They break words into tokens and assign each a vector location in an embedding space, constructed according to its relations to other vectors as well as its position within the sentence. The connections between vectors are expressed as weights, or parameters, assigned by the program to the different neurons during training. Attention and self-attention mechanisms running in parallel build connections between tokens according to their syntactic and semantic correlations. Roughly, the number of parameters indicates how many connections there are between neurons: in the case of GPT-3, 175 billion; for GPT-4, reportedly more than a trillion, though OpenAI has not disclosed the figure. From these vectors, manipulated through matrix math, the programs construct complex multidimensional maps of vector correlations; the resulting scores are passed through a function such as softmax, which converts them into probabilities over the vocabulary from which the next word is selected. In essence, then, the vectors act as indexicals in the Peircean sense: signs indicating correlation.
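The pipeline described above, tokens assigned vector locations, attention weights expressing correlations between them, and softmax converting scores back into word probabilities, can be sketched in miniature. The following is a toy illustration only, not any real model: the vocabulary, dimensions, and random weights are invented for demonstration, and a trained model would learn its weights rather than draw them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "detective", "opened", "grave", "case"]
d = 8  # embedding dimension (real models use thousands)

# Each token is assigned a vector location; positions get their own vectors.
token_embed = rng.normal(size=(len(vocab), d))
pos_embed = rng.normal(size=(4, d))

sentence = ["the", "detective", "opened", "the"]
ids = [vocab.index(w) for w in sentence]
x = token_embed[ids] + pos_embed[: len(ids)]  # token + positional embedding

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Self-attention: every token scores its correlation with every other token.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(d))  # each row sums to 1: weighted correlations
context = attn @ v  # each token's vector, reweighted by its relations to the rest

# A final projection scores every vocabulary word; softmax converts those
# scores into a probability distribution over the next token.
W_out = rng.normal(size=(d, len(vocab)))
probs = softmax(context[-1] @ W_out)
print(dict(zip(vocab, probs.round(3))))
```

Nothing here refers to the world; the attention matrix is purely a map of correlations among vectors, which is the sense in which the vectors act as Peircean indexicals.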

GPT-3 and -4 have a number of unanticipated capabilities that emerged spontaneously through their training (they were not explicitly programmed in). One is the ability to detect and replicate literary styles and genres. How can we explain these capacities? Essentially, styles employ rhetorics that carry multiple implications about relations between people; genres operate according to rules determining the kind of world in which the literary action takes place. In a detective novel, for instance, corpses cannot spontaneously crawl out of graves. The capacity to detect style results from the massive networks of correlations that the LLMs use to draw inferences about the relations that rhetorics imply. Moreover, these inferences themselves form networks that lead to higher-order inferences, for example, in the leap from style to genre. If LLMs can learn about the kinds of worlds that genres imply, what can they learn about the (admittedly much more complex) world that humans inhabit?

What could an entity that constructs billions or trillions of inferences, networks of inferences, and networks of networks, learn about human languages, cultures, and social relations from ingesting billions of human-authored texts, in the absence of any real-world experience? My answer is, quite a lot. There would of course be what I call a systemic fragility of reference, in which the lack of grounding in real-world experience leads to errors of interpretation and fact. LLMs are like the figure, beloved by philosophers, of a brain in a vat; they construct models not of the world, but only models of language. Nevertheless, embedded in the immense repertoire of human-authored texts on which LLMs are trained are any number of implications about the human world of meanings. These are understood in the contexts of what LLMs can and do apprehend, that is, in relation to their world-horizons, or in a phrase that Jakob von Uexküll usefully designated for biological world-horizons, their umwelten. Just as von Uexküll used the term umwelt to emphasize that all biological creatures have species-specific ways of perceiving the world, so LLMs also have distinctive ways of apprehending the world.

Apprehension may be a more appropriate term for what LLMs learn than comprehension (because what they learn is far from comprehensive, lacking sensory inputs or real-world experience), thought (too associated with human cognition), or sentience (whose etymological roots refer to sensations, which are precisely what LLMs lack). Moreover, the other meaning of apprehension is a sense of dread or anxiety, also appropriate for human reactions to LLMs, as the recent “open letters” from tech leaders emphasize.

Because of the very significant differences between human umwelten and the umwelten of LLMs, there will inevitably be a gap between the meanings that humans project onto the texts that LLMs generate and what the texts mean in the context of LLMs’ own umwelten. How can we humans learn to read these messages sent from what is literally a different (language-only) world? Here is where the literary-critical methods of textual interpretation can play important roles. The distinction between what the author intended and what readers project onto a text is a typical problem for which literary-critical practices have devised many strategies to explore and understand. Close reading practices, for instance, pay attention to rhetorical structures, networks of metaphors and how they work to guide reception, and implicit assumptions undergirding a line of argument. From these, critics infer authorial intent, as well as how texts elicit specific responses from readers. Not coincidentally, these are also the patterns of correlation that the LLMs use to construct their responses in the first place.

The question, as I see it, is not whether these texts have meanings, but rather what kinds of meanings they can and do have.  In my view, this should be determined not through abstract arguments, but rather through actually reading the texts and using an informed background knowledge of LLM architectures to interpret and understand them.  The proof is in the pudding; they will certainly elicit meanings from readers, and they will act in the world of verbal performances that have real-world consequences and implications.  The worst thing we can do is dismiss them as meaningless, when they increasingly influence and determine how real-world systems work.  The better path is to use them to understand how algorithmic meanings are constructed and how they interact with other verbal performances to create the linguistic universe of significations, which is no longer only for or of humans. 

Katherine Hayles

N. Katherine Hayles is the Distinguished Research Professor of English at the University of California, Los Angeles, and the James B. Duke Professor Emerita of Literature at Duke University. Her research focuses on the relations of literature, science, and technology in the twentieth and twenty-first centuries. She is the author of twelve books and over one hundred peer-reviewed articles and is a member of the American Academy of Arts and Sciences. She is currently at work on Bacteria to AI: Human Futures with our Nonhuman Symbionts.
