Artificial Us is an interactive installation that confronts the lack of representation in mainstream image-generation models. Modern generative AI systems are fueled by two overlapping extraction mechanisms. The first is material: lithium, copper, and other minerals removed from Chilean and Argentine soil to build the batteries, semiconductors, and data-center hardware behind these systems. The second is digital: images and text harvested, often without consent, together with low-paid annotation labor from the Global South, all constantly fed into models that end up misrepresenting us.
Most of the large datasets used to train these generative models are controlled by the Global North, so Latin American places, people, and cultures are invisibilized, reinforcing biases and erasing local specificity. As artists and researchers born and raised in Argentina and Chile, we are doubly excluded: our region supplies the minerals and the labor that sustain these systems, yet the images those systems produce fail to represent us.
Artificial Us responds by reclaiming the same tools that marginalize us. Using personal photographs from our own communities, we fine-tune the image-generation model Stable Diffusion XL, creating multiple custom models, each representative of a different landscape.
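As an illustration of this step, the sketch below shows how a landscape-specific fine-tune might be applied at generation time with the Hugging Face diffusers library, assuming LoRA adapter weights were already trained on the photographs (for example with diffusers' DreamBooth LoRA scripts). The file name, trigger token, and sampler settings are hypothetical placeholders, not details of the installation.

```python
# Minimal sketch: apply a custom LoRA fine-tune to Stable Diffusion XL and
# generate one frame. Assumes "atacama_lora.safetensors" (hypothetical name)
# was trained beforehand on personal photographs.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the low-rank adapter fine-tuned on community photographs.
pipe.load_lora_weights("atacama_lora.safetensors")

# Generate a frame; "<atacama>" stands in for whatever trigger token the
# adapter was trained with.
image = pipe(
    prompt="a <atacama> desert landscape at dusk, film photograph",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("frame_0001.png")
```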
These models are then used to generate frames that are transformed into depth-based point clouds. A Kinect camera tracks visitors’ bodies; their movements steer a first-person navigation through these dynamic 3D environments, while an audio narration explains the twin economies of mineral and data extraction. Early scenes are vivid and site-specific; later ones degrade into generic imagery, mirroring how value and meaning are lost at every step of the extractive cycle. The waste, physical and representational alike, ends up back where it all started.
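The text does not spell out how the frames become point clouds; a common route, sketched below under that assumption, is to pair each frame with a monocular depth estimate (e.g. from MiDaS) and back-project pixels through a pinhole camera model. The focal lengths and the synthetic depth map here are illustrative placeholders.

```python
# Minimal sketch: turn a generated frame into a depth-based point cloud by
# pinhole back-projection. The depth source is an assumption; any monocular
# depth estimator could supply the map. fx/fy are placeholder focal lengths.
import numpy as np
from PIL import Image

def frame_to_point_cloud(rgb_path, depth, fx=1000.0, fy=1000.0):
    """Back-project each pixel (u, v) with depth z into camera space:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy."""
    rgb = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    return points, colors

# Example with a flat synthetic depth map standing in for an estimator's output.
depth = np.full((1024, 1024), 2.0, dtype=np.float32)
points, colors = frame_to_point_cloud("frame_0001.png", depth)
```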
By making these processes visible, Artificial Us invites audiences to recognize their place in the chain and to question how to use generative AI in a way that is meaningful, intimate, and representative.