Norms in the Age of Intelligent Machines: Bodies, Knowledge, Governmentality — Dec. 4 & 5 at Stanford

Norms in the Age of Intelligent Machines — a two-day conference organized by Shane Denson, Armen Khatchatourov, and Johan Fredrikzon and sponsored by the France-Stanford Center for Interdisciplinary Studies, Villa Albertine, and the Stanford Department of Art & Art History — will take place at Stanford on December 4-5, 2025.

Speakers
Morehshin Allahyari (Stanford)
Hannes Bajohr (UC Berkeley)
David Bates (UC Berkeley)
Bilel Benbouzid (University Gustave Eiffel, Paris)
Shane Denson (Stanford)
Jean-Pierre Dupuy (Stanford)
Noel Fitzpatrick (TU Dublin)
Johan Fredrikzon (KTH Royal Institute of Technology, Stockholm)
Julia Irwin (Stanford)
Armen Khatchatourov (DICEN / University Gustave Eiffel, Paris)
Helen Nissenbaum (Cornell Tech)
Warren Sack (UC Santa Cruz)
Antonio Somaini (University Sorbonne Nouvelle – Paris 3)
Fred Turner (Stanford)

The prospect of intelligent machines challenges our societal norms. Matters of debate over the past half century concerning digital networks – e.g. access, privacy, subjectivity, participation – must be reconsidered in the age of machine learning. More specifically, the proliferation of AI-based systems leads to new ways of understanding what normativity is. Social norms don’t change overnight; however, the mechanisms and processes that drive these changes are increasingly shaped by AI-based infrastructures that are highly automated yet opaque, inscrutable, and anthropomorphic.

Faced with such conditions, we have to ask, first, what it means to instill or break a norm and, second, what norms even mean or represent. This landscape presents both profound challenges to maintaining just and stable means of interaction and, at the same time, novel and creative opportunities for alternative modes of being.

The two conferences (December 4-5, 2025 at Stanford, April or May in Paris) aim to investigate how norms of embodiment, forms of knowledge, and techniques of governmentality operate in the age of AI, and to address the imbrication of two movements: how the evolution of social norms is reflected in new algorithmic practices, and how these algorithms influence social norms in various domains. Together they will bring the humanities, social sciences, and law to bear on issues of crucial contemporary importance.

Sponsored by the France-Stanford Center for Interdisciplinary Studies, Villa Albertine, and the Stanford Department of Art & Art History

Image: Brett Amory, Archive Drift. Photo: Shaun Roberts

More info here

View the full conference program with agenda, abstracts, and speaker bios

Registration

“The Bride of Frankenstein Minute-by-Minute” — Monday Night Seminar at the Coach House, Centre for Culture and Technology, Toronto, Nov. 10

Rounding out my trip to Canada, I’ll be giving a talk about my recent book on Bride of Frankenstein at the University of Toronto’s Centre for Culture and Technology on Nov. 10! Info and registration here.

“Non/phenomenalities: A Hodological Laboratory for Unstable Times” — Artist Talk with Karin Denson at Western Film & Art Festival, London, Ontario, Nov. 9, 2025

On Nov. 9, 2025, Karin Denson and I will give an artist talk, titled “Non/phenomenalities: A Hodological Laboratory for Unstable Times,” at the Western Film & Art Festival. In line with the festival theme of “Emerging Visions of AI, Art, and Environment,” we will be discussing our recent artistic and curatorial collaborations around AI and environments, both natural and computational. Selected pieces from our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! will also be screening throughout the festival.

“AI as Existential(ist) Risk and Aesthetic Opportunity” — Keynote at Media Theory Conference 2025 in Toronto, Nov. 7-8

I’m excited to be giving one of the keynotes at the Media Theory Conference 2025 at the Centre for Culture and Technology in Toronto. On Nov. 8, I’ll give a talk titled “AI as Existential(ist) Risk and Aesthetic Opportunity.” Here is the abstract:

Contemporary debates around artificial intelligence often frame the technology in terms of “existential risk.” Yet such framings rarely pause to consider what existential might mean in the existentialist sense. In this talk I return to Heidegger’s account of the “worldhood of the world” and Sartre’s concept of “hodological space” to argue that the risk posed by AI is not confined to catastrophic scenarios of planetary survival, but lies more immediately in the reconfiguration of subjectivity itself. AI systems bypass conscious perception, modulating aesthesis—the sensory, affective, and preconscious conditions of experience—and in doing so recalibrate the orientations that make ethical deliberation possible in the first place.

Seen from this angle, the hazard of AI is not external to us but infrastructural, shaping our movements, postures, and affective attunements. At the same time, this hazard can be taken up as an opportunity: artworks that use machine learning to stage glitches, detours, or dissonances do not merely represent technological change but provide laboratories for inhabiting it, exposing how bodies and worlds are being rewritten. If AI marks an existentialist risk, it also opens an occasion to engage aesthetically with the reorganization of perception and orientation, and to confront the stakes of ethics where they begin—in the aesthetic, in the felt conditions of living and acting in a changing world.