An assumption of selfhood is essential to the explanation of morality and, as a result, to many ethical theories. To take but one example, the concept of responsibility collapses without the assumption of a moral agent who is a certain self. Furthermore, there is a formal requirement that the bearer of responsibility be identical with the agent from whom the action originated (cf., e.g., Slors 2000, Dan-Cohen 1992, Anscombe 1958, Hart 1948). Some, in line with Hegel’s view (1991), claim that personal responsibility is a matter of ourselves being expressed in the action (Sripada 2016). Others suggest that the attribution of responsibility requires being a certain type of self (cf., e.g., Oshana 1997). And even though philosophers debate which properties are sufficient for the attribution of responsibility, many agree on consciousness and the ability to understand moral terms (for reviews see, e.g., Shoemaker 2021, Gogoshin 2021, Hakli & Mäkelä 2019).
A number of major theories in normative ethics (e.g., virtue ethics, ethical idealism) go beyond seeing the self as a requirement for certain moral concepts and connect being a certain self directly with the concepts of moral goodness and rightness. However, the rapid development of novel data-mining technologies disrupts familiar notions of the self, with tangible implications for our moral life and practices. Currently, cutting-edge artificial intelligence (AI) is increasingly used to turn raw data into “actionable insights” (Fantuzzo & Culhane 2015) in domains such as health care and policing, a practice that rests on a dataistic premise. This gives rise to a whole set of specialized concepts, such as digital identity (Cheney-Lippold 2017), the digital twin (de Kerckhove 2021), and the data double (Ruckenstein 2014).
What these concepts point to is that data-mining technologies attempt to redefine the self in terms of data and, consequently, transform our relation to ourselves; they increasingly put claims on what we are and even what we should become. The image formed by data-mining technologies seems to escape entirely our control and our ability to influence and possibly correct it. The use of (oftentimes “black-box”) AI makes it virtually impossible to influence the construction of such an image, since we ourselves barely understand how it is formed. In this light, we need to ask: If data-mining technologies increasingly predict our behavior and influence who we are, how can we ensure that it is still we who determine and form ourselves (with respect to such value concepts as autonomy, authenticity, and human dignity)? Where should we draw the line in the acceptability of these effects? What are we justified in inferring about a person from data? This paper examines the limitations of dataism, i.e., the claim that the self is reducible to data about behavior and/or physiology, from the standpoint of moral agency.
An important consequence of AI’s drive to simulate human cognition, and of its general striving to make machines more like humans, is that it makes humans more like machines, so that they can be processed by algorithms. This is due to differences between machine and human epistemologies, which translate into different ontologies (what exists for the machine and for the human) and phenomenologies (how what exists, for the machine or for the human, is handled by their respective internal processes). In this context, the ethical implications of dataism become ever more important. The claim, often implicit, that persons are reducible to data has profound implications for how a person is perceived in various contexts, especially with regard to her cognitive and emotive agency: the agent’s beliefs, desires, intentions, and even actions become irrelevant to the decisions that are being made about her (e.g., in automated mortgage decisions or medical diagnosis) and even by herself (in the case of delegating decision-making to a digital twin). Despite the optimism of transhumanism (cf., e.g., Mohanty 2023; Akdevelioglu et al. 2022), from the moral point of view the disruptive potential of reducing personhood to data about an individual is enormous. And still, we hardly have an understanding of the range of consequences such a reductionist approach has for one’s moral life. It is not impossible that, for example, your digital twin, while having the right to “decide” and commit on your behalf on legal, financial, and health matters, would promote values and goals you are not (or no longer) committed to. If such discrepancies occur, would it be rational for you to change your own beliefs and values in favor of your twin’s?
References:
Adamczyk, C. L. (2023). Communicating dataism. Review of Communication, 23(1), 4-20.
Akdevelioglu, Hansen & Venkatesh (2022). Wearable technologies, brand community and the growth of a transhumanist vision. Journal of Marketing Management, 38: 569-604.
Anscombe, G.E.M. (1958). Intention. Oxford: Blackwell.
Arvanitis, A. (2017). Autonomy and morality: a self-determination theory discussion of ethics. New Ideas in Psychology, 47: 57-61.
Cheney-Lippold, J. (2017). We Are Data. New York University Press.
Dan-Cohen, M. (1992). Responsibility and the boundaries of the self. Harvard Law Review 105(5): 959-1003.
de Kerckhove, D. (2021). The personal digital twin, ethical considerations. Philosophical Transactions of the Royal Society A, 379 (2207), 20200367.
Fantuzzo, J., & Culhane, D. P. (2015). Actionable Intelligence. New York: Palgrave Macmillan.
Gogoshin, D. L. (2021). Robot responsibility and moral community. Frontiers in Robotics and AI, 8, 768092.
Hakli, R., & Mäkelä, P. (2019). Moral responsibility of robots and hybrid agents. The Monist, 102(2): 259-275.
Hart, H. L. A. (1948). The ascription of responsibility and rights. Proceedings of the Aristotelian Society, 49: 171-194.
Hegel, G.W.F. (1991). Elements of the Philosophy of Right. Cambridge: Cambridge University Press.
Mohanty, H. (2023). Digital life: an advent of transhumanism. In International Conference on Multi-disciplinary Trends in Artificial Intelligence. Cham: Springer Nature Switzerland.
Oshana, M. A. (1997). Ascriptions of responsibility. American Philosophical Quarterly, 34(1): 71-83.
Ruckenstein, M. (2014). Visualized and interacted life: personal analytics and engagements with data doubles. Societies, 4(1): 68-84.
Slors, M. (2000). Personal identity and responsibility for past actions. In Moral Responsibility and Ontology. Dordrecht: Springer Netherlands.