Therapeutic artifacts: Mental health chatbots as a special kind of cognitive artifact

Conversational Artificial Intelligence (CAI), popularly known as “chatbots,” is among the most promising examples of the use of technology in mental health care. With millions of users worldwide already, CAIs (e.g., Woebot or Tess) are likely to change the landscape of psychological help (Abd-Alrazaq et al., 2020). Many authors have convincingly argued that, despite how they are often advertised by their producers, existing CAIs should not be considered therapists, and that the services they provide fall short of fully fledged psychotherapy (Sedlakova & Trachsel, 2022). But if they are not “digital therapists,” what are they, and what role can they play in mental health care? To answer these questions, we appeal to the well-established and widely discussed concept of a cognitive artifact. Cognitive artifacts are artificial devices that contribute functionally to the performance of a cognitive task (Norman, 1991; Heersmink, 2013). We argue that therapeutic CAIs are a special kind of cognitive artifact: therapeutic artifacts, which work by (i) simulating a therapeutic interaction, and (ii) contributing to the performance of cognitive tasks that lead to positive therapeutic change (identified, e.g., with symptom reduction or the improvement of a patient’s overall functioning).

We discuss several ways in which mental health chatbots contribute to the achievement of the proximal goal of Cognitive Behavioral Therapy (CBT), i.e., cognitive change, thus meeting the conditions for being cognitive artifacts. The simplest, yet important, functionalities implemented in mental health chatbots are mood tracking and journaling. We argue that these functionalities support not only users’ memory but also the two kinds of interpretation distinguished by Brey (2005): quantitative (measurement) and qualitative (classification). First, just as a therapist can support a patient in the cognitive task of assessing the intensity of an emotion or the believability of a thought, so can a cognitive artifact, i.e., a mental health chatbot. Second, mental health chatbots, like flesh-and-blood therapists, can help users realize what exactly they feel. Here, CAIs replace the task of answering an open-ended and overwhelming question, such as “How do you feel?” or “What emotion are you experiencing?”, with the task of choosing from a list of feelings and emotions provided in the app. Another example of qualitative interpretation supported by mental health chatbots is the identification of cognitive distortions. Finally, therapeutic CAIs can also support their users in achieving the goal of cognitive change by guiding them through the process of self-reflection. The strategy of Socratic questioning incorporated in CAIs aims to increase users’ awareness and support them in evaluating their own thoughts. Prompted by accurate and well-timed questions from a chatbot, users can examine the hypotheses they embrace about themselves, others, and the world, and analyze the consequences of their actions and inactions. At least in some cases, this may deepen their self-understanding (Grodniewicz & Hohol, 2023).

Our investigation sheds new light on why virtually all existing mental health CAIs implement principles and techniques of CBT—a therapeutic orientation according to which positive therapeutic change is mediated by cognitive change. Simultaneously, it allows us to better conceptualize the limitations of applying these technologies in therapy.

Speaker: Mateusz Hohol

Other authors: Jędrzej Grodniewicz

Digital Humanities Tilburg