Making Sense of the Self: Generative AI, Understanding, and Language

Large-language-model generative artificial intelligence, such as ChatGPT, has placed new emphasis, among both proponents and critics, on the concept of understanding. Understanding is regularly presented in popular media as the human capacity that generative AI, despite its remarkable ability to process linguistic data, lacks: AI does not understand, and cannot understand itself. Framing understanding as the dividing line between human and artificial intelligence thus relies on an implicit contrast with human understanding, including our ability to understand ourselves.

Recent philosophical work on understanding has recognized the challenges posed by generative AI (Browning 2023, Firt 2023, Fleisher 2022, Sullivan 2022). But so far this work has retained broadly linguistic assumptions about meaning, framing understanding in terms of linguistic competency (Hannon 2021). This linguistic framing is problematic for any theory that hopes to differentiate the human from large-language models such as ChatGPT: if human understanding is a matter of linguistic competency, then recent generative AI, with its unprecedented capacity for using, processing, and producing language, appears to understand—and perhaps even understand itself—after all.

After sketching the above problematic, I outline a response by arguing that the core of our capacity for understanding or sense-making (Grimm 2018) is not language, but rather embodied lived experience rooted in intentionality (the aboutness of experience). It is through our judgments—not merely those in linguistic or propositional form characteristic of explicit knowing-that, but judgments in the broader, embodied and sometimes tacit sense that encompasses knowing-how—that we understand ourselves and the world. We should thus approach the dividing line between human and artificial intelligence, and new questions about uniquely human identities in the digital age, by looking not to linguistic expression, but to uniquely human conditions for understanding and self-understanding: our embodied sense-making in the lived world.

Works Cited
Browning, Jacob. 2023. “Personhood and AI: Why Large Language Models Don’t Understand Us.” AI & Society (issue not yet assigned).
Firt, Erez. 2023. “Artificial Understanding: A Step Toward Robust AI.” AI & Society (issue not yet assigned).
Fleisher, Will. 2022. “Understanding, Idealization, and Explainable AI.” Episteme 19: 534–60.
Grimm, Stephen R., ed. 2018. Making Sense of the World: New Essays on the Philosophy of Understanding. Oxford: Oxford University Press.
Hannon, Michael. 2021. “Recent Work in the Epistemology of Understanding.” American Philosophical Quarterly 58 (3): 269–90.
Sullivan, Emily. 2022. “Understanding from Machine Learning Models.” British Journal for the Philosophy of Science 73 (1): 109–33.


Jacob Martin Rump

Digital Humanities Tilburg