Deepfakes and Lying Eyes: How Deepfake Detection Systems Compromise Epistemic Autonomy


Artificial intelligence is increasingly used to identify, flag, and remove misinformation and other problematic content from social media platforms. The use of AI in this context is due partly to the scale of social media platforms and partly to the ability of automated systems to detect problematic content that may escape human detection. It is widely thought that artificially intelligent detection systems will be invaluable tools in the effort to combat the spread of manipulative and otherwise problematic deepfakes. While most deepfakes are pornographic, I focus here especially on the use of AI to combat deepfakes utilized for misinformation purposes. I argue that, in addition to a range of practical concerns about such systems, the use of AI to flag and remove deepfakes threatens the epistemic autonomy of individual social media users.

To grasp the shortcomings of automated systems for combatting deepfakes, it is necessary first to understand the threat that deepfakes pose to human knowledge. I argue that this threat is three-pronged. Deepfakes are potentially deceptive, lead to decreased trust in video footage, and weaken the evidential significance of video footage. Deepfakes thus threaten the truth, belief, and warrant conditions on knowledge.

The limitations of automated systems for detecting deepfakes stem in part from the fact that these systems work by picking up on imperfections that are steadily eliminated as deepfakes improve. Consequently, today’s detection systems may not recognize tomorrow’s deepfakes. There is also a risk that advanced deepfakes that evade detection will enjoy an undue air of authenticity, thereby increasing their deceptive potential. Finally, insofar as these limitations are recognized, automated systems do not fully address the negative effect of deepfakes on trust in video footage.

The preceding concerns for AI detection systems arise from the inevitable limitations of such systems. However, even if such systems worked perfectly, an important concern for their use would remain. The promise of detection systems built on artificial intelligence is that they can pick up on features of video footage that are undetectable, or at least typically undetected, by human observers. Reliance on such systems therefore requires human users to trust artificial intelligence more than their own senses. In this way, the use of AI detection systems represents a threat to the epistemic autonomy of human persons. This is problematic, partly because empirical evidence suggests that ordinary persons are often hesitant to rely on AI judgments, even when AI outperforms human judgment. But it is also problematic in light of the value of epistemic autonomy itself.

I conclude by arguing that the threat to epistemic autonomy posed by reliance on AI systems goes beyond that posed by reliance on experts. First, AI judgments are typically opaque in a way that expert judgments are not. Second, the identification of experts to trust is itself an expression of epistemic autonomy, which has no analog in the case of reliance by social media users on AI judgments.

