In January 2012, members of Facebook’s Data Science Team conducted an experiment: using algorithms, they tailored the feeds of 689,003 Facebook users such that, for one week, some users saw significantly fewer posts featuring negative emotional words/phrases, while others saw significantly fewer positive ones.
This is a particularly explicit example of online affective manipulation, but there are other, more subtle cases, including the way Twitter’s algorithm highlights the ‘most salacious or controversial tweets’ and the phenomenon of so-called hate-click articles.
The aim of this talk is broadly exploratory: we want to better understand online affective manipulation and what, if anything, is morally problematic about it. To do so, we begin by pulling apart various forms of online affective manipulation. We then discuss why online affective manipulation is properly categorized as manipulative, as well as what is wrong with (online) manipulation more generally. Building on this, we argue that, at its most extreme, online affective manipulation constitutes a novel form of affective injustice that we call affective powerlessness. To demonstrate this, we introduce the notions of affective injustice and affective powerlessness, and show how several forms of online affective manipulation leave users in this state. The upshot is a better grip on the nature of online affective manipulation, as well as some tools to help us understand when and why it is morally problematic.