At the end of 2022, generative AI (GenAI) became available to the general public. Large language models (LLMs), such as ChatGPT, allow users to produce texts about any topic in the style and voice of their choice. The question of how we should read the textual output of LLMs had been addressed in the field of linguistics before this technology became widely available. Most linguists agree that LLMs lack semantic understanding, and the metaphor of LLMs as ‘stochastic parrots’ is now deeply embedded in the (academic) discourse on GenAI (Bender et al., 2021; Lake & Murphy, 2023; Titus, 2024). The user experience with GenAI, however, is at odds with the notion of LLMs as stochastic parrots: users do, in fact, experience the output as meaningful. Considering LLMs as models participating in meaningful language production is arguably more productive, and insights from literary theory (Agüera y Arcas, 2022; Hayles, 2019; Rees, 2022) could broaden the discussion on GenAI.
In interacting with LLMs, users often rely on cultural frames. The interface of ChatGPT, for instance, activates the science fiction trope of the friendly robot assistant (Bluijs, forthcoming). At the same time, the frame of (Gen)AI as a potential existential threat to humankind hinges on how these technologies are depicted in dystopian fantasies (e.g. Future of Life, 2023). Both tropes are frequently used to ascribe specific identities to robots, and we argue that, given the ubiquity of LLMs in our daily lives, awareness of these frames allows users to interact with LLMs more critically, consciously, and creatively.
To consider these tropes in more detail, this paper offers a close reading of ‘The Robot of the Machine is Man’ (2017), a short science fiction story in Dutch written by the human author Ronald Giphart in collaboration with a language model. Because the story thematizes the interconnection between machines and humans both at the level of the story and at the level of its production, it raises questions about processes of meaning-making in the age of AI. We consider how readers are invited to ascribe properties to the robot characters. Using the narratological concept of characterization, we examine how readers make inferences about the intentions and psychological make-up of the robots based on textual cues (Jannidis, 2012).
References
Agüera y Arcas, B. (2022). Do Large Language Models Understand Us? Daedalus, 151(2), 183–197. https://doi.org/10.1162/daed_a_01909
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Bluijs, S. (forthcoming). Oracle, Echo, or Stochastic Parrot? Who (or What) Speaks in AI-Generated Literature? In G. Lively & W. Slocombe (Eds.), The Routledge Handbook of AI and Literature. Routledge.
Hayles, N. K. (2019). Can Computers Create Meanings? A Cyber/Bio/Semiotic Perspective. Critical Inquiry, 46(1), 32–55. https://doi.org/10.1086/705303
Jannidis, F. (2012). Character. In P. Hühn et al. (Eds.), The Living Handbook of Narratology. Hamburg University. https://www-archiv.fdm.uni-hamburg.de/lhn/node/41.html
Lake, B. M., & Murphy, G. L. (2023). Word meaning in minds and machines. Psychological Review, 130(2), 401–431. https://doi.org/10.1037/rev0000297
Rees, T. (2022). Non-Human Words: On GPT-3 as a Philosophical Laboratory. Daedalus, 151(2), 168–182. https://doi.org/10.1162/daed_a_01908
Titus, L. M. (2024). Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy. Cognitive Systems Research, 83, 101174. https://doi.org/10.1016/j.cogsys.2023.101174