In this embodied experiment, I trained an algorithm on my personal data (social media activity, a psychological self-assessment, and a third-party assessment). I then had this bot, or digital “clone”, direct what I would do every day: who I would see, what I would wear, what I would eat… I also used it to communicate with my family and friends on my behalf through generated messages and generated voice notes. Very quickly, my clone restricted me to repeating the same activity, seeing a very small group of people, and navigating a tiny geographic range.
The experiment demonstrated that algorithms are reductive reflections of who we are, pushing us into echo chambers of our past behaviors and beliefs in order to serve the commercial interests of their creators. By relying too heavily on algorithms to guide our personal decisions, we lose our agency and individuality, fusing instead with this reductive version of ourselves.
Algorithms and AI carry an air of scientific authority, leading us to treat their recommendations as diagnostic rather than probabilistic. My thesis questions the wisdom of entrusting extensive personal data to algorithms in our quest for self-improvement and scrutinizes the unsettling ease with which we accept digital reflections as true representations of ourselves. While the fear of AI becoming sentient is misplaced, its ability to simulate sentience can pose just as much of a risk. My thesis calls for a more discerning engagement with technology, advocating a critical stance toward the data we relinquish to digital platforms and the interpretations algorithms draw from it. In short, the goal is to cultivate critical awareness that increases user agency in an increasingly digital and AI-reliant world.