Artist Panel: EXTRA/PHENOMENALITIES — Feb. 23, 2026

Join us for an artist panel featuring participants from EXTRA/PHENOMENALITIES, on view at Stanford Art Gallery from January 22 through March 13, 2026. Bringing together artists whose work explores the limits of experience, this program offers a special opportunity to hear directly from those behind the exhibition.

Each participating artist will give a brief talk reflecting on their work in the exhibition, followed by a moderated conversation and audience Q&A. 

Participating artists include Morehshin Allahyari, Mark Amerika, Will Luers, & Chad Mossholder, Brett Amory, Rebecca Baron + Douglas Goodwin, Jon Bernson, Daniel Brickman, Paul DeMarinis, Karin + Shane Denson, Ebti, Frank Floyd, Gabriel Harrison, DJ Meisner, Joshua Moreno, Carlo Nasisse, Miguel Novelo, Andy Rappaport, William Tremblay, Camille Utterback, and Kristen Wong.

The exhibition is curated by Brett Amory, Karin Denson, and Shane Denson.

Moderator to be announced.

VISITOR INFORMATION: Oshman Hall is located within the McMurtry Building on Stanford campus at 355 Roth Way. Visitor parking is available in designated areas and is free after 4pm on weekdays. Alternatively, take the Caltrain to Palo Alto Transit Center and hop on the free Stanford Marguerite Shuttle. If you need a disability-related accommodation or wheelchair access information, please contact Julianne White at jgwhite@stanford.edu. This event is open to Stanford affiliates and the general public. Admission is free.

More info here: https://events.stanford.edu/event/artist-panel-extraphenomenalities

“The Latent Space of Meaning and the Novel” — Hannes Bajohr at Digital Aesthetics Workshop, Jan. 13, 2026

We are excited to announce our first event of 2026! Hannes Bajohr will present on “The Latent Space of Meaning and the Novel” on Tuesday, January 13, from 5-6:30pm PT. The event will take place in the Stanford Humanities Center Board Room. Refreshments will be served.

Zoom link for those unable to join in-person: tinyurl.com/3xm7rdku

We look forward to seeing you there!

Abstract: 

“A world – nothing less – is the theme and postulate of the novel,” German philosopher Hans Blumenberg wrote in 1963. At that same moment, AI research, already emerging from its early optimism, turned to “world models” as a means of stabilizing its brittle systems. Today, these two conceptions of “world” – the literary and the computational – converge in large language models (LLMs), which use their latent spaces not just to generate plausible sentences, but entire narratives, even novels, albeit with still uneven results. Yet in what sense are the “worlds” of novels and of AI analogous, and what can each illuminate about the other?

The talk proposes that both novels and LLMs operate within structured networks of relations – assemblages of events, inferences, and expectations – that can yield a form of coherence even when classical causality is weak or absent. Literary techniques from realism to modernism build patterned universes: realist and naturalist fiction through causal-social dynamics, genre fiction through explicit world-building, and modernism through fragmented but still intelligible world-logics. These traditions offer a vocabulary for assessing LLM-generated texts. 

Where early systems like SHRDLU pursued explicit symbolic world models and failed outside narrow domains, contemporary LLMs rely on distributed vector spaces that encode statistical regularities without grounding. My own experiments with a fine-tuned German-language model yielded narratives with stylistic unity but little causal depth. Like certain experimental novels, they evoke meaning through a “weak force” of association rather than strong narrative causality. Following these ideas, the talk aims to resist both overhyping LLMs’ understanding and dismissing them as mere mimicry, placing AI-generated fiction, as the meeting point of the two uses of “world,” within a broader theory of modeling and meaning.

Bio:

Hannes Bajohr is Assistant Professor of German at the University of California, Berkeley. His research focuses on media studies, political philosophy, philosophical anthropology, and theories of the digital. Recent publications include: Thinking with AI: Machine Learning the Humanities (as editor, London: Open Humanities Press) and “Surface Reading LLMs: Synthetic Text and its Styles” (arXiv preprint, forthcoming in New German Critique). In 2027, the English-language translation of his LLM-co-generated novel (Berlin, Miami) will appear with MIT Press.

This event is generously co-sponsored by the Stanford Literary Lab and Stanford Department of English. 

Norms in the Age of Intelligent Machines: Bodies, Knowledge, Governmentality — Dec. 4 & 5 at Stanford

Norms in the Age of Intelligent Machines — a two-day conference organized by Shane Denson, Armen Khatchatourov, and Johan Fredrikzon and sponsored by the France-Stanford Center for Interdisciplinary Studies, Villa Albertine, and the Stanford Department of Art & Art History — will take place at Stanford on December 4-5, 2025.

Speakers
Morehshin Allahyari (Stanford)
Hannes Bajohr (UC Berkeley)
David Bates (UC Berkeley)
Bilel Benbouzid (University Gustave Eiffel, Paris)
Shane Denson (Stanford)
Jean-Pierre Dupuy (Stanford)
Noel Fitzpatrick (TU Dublin)
Johan Fredrikzon (KTH Royal Institute of Technology, Stockholm)
Julia Irwin (Stanford)
Armen Khatchatourov (DICEN / University Gustave Eiffel, Paris)
Helen Nissenbaum (Cornell Tech)
Warren Sack (UC Santa Cruz)
Antonio Somaini (University Sorbonne Nouvelle – Paris 3)
Fred Turner (Stanford)

The prospect of intelligent machines challenges our societal norms. Matters of debate over the past half century concerning digital networks – e.g. access, privacy, subjectivity, participation – must be reconsidered in the age of machine learning. More specifically, the proliferation of AI-based systems leads to new ways of understanding what normativity is. Social norms don’t change overnight; however, the mechanisms and processes that drive these changes are increasingly influenced by AI-based infrastructures that are characterized by a heightened level of automation while remaining opaque, inscrutable, and anthropomorphic.

Faced with such conditions, we have to ask, first, what it means to instill or break a norm and, second, what norms even mean or represent. This landscape presents both profound challenges to maintaining just and stable means of interaction and, at the same time, novel and creative opportunities for alternative modes of being.

The two conferences (December 4-5, 2025 at Stanford, April or May in Paris) aim to investigate how norms of embodiment, forms of knowledge, and techniques of governmentality operate in the age of AI, and to address the imbrication of two movements: how the evolution of social norms is reflected in new algorithmic practices, and how these algorithms influence social norms in various domains. The conferences will bring together the humanities, social sciences, and law to address issues of crucial contemporary importance.

Sponsored by the France-Stanford Center for Interdisciplinary Studies, Villa Albertine, and the Stanford Department of Art & Art History

Image: Brett Amory, Archive Drift ⧑⧗⧖⧔. Photo: Shaun Roberts

More info here

View the full conference program with agenda, abstracts, and speaker bios

Registration

“Non/phenomenalities: A Hodological Laboratory for Unstable Times” — Artist Talk with Karin Denson at Western Film & Art Festival, London, Ontario, Nov. 9, 2025

On Nov. 9, 2025, Karin Denson and I will give an artist talk, titled “Non/phenomenalities: A Hodological Laboratory for Unstable Times,” at the Western Film & Art Festival. In line with the festival theme of “Emerging Visions of AI, Art, and Environment,” we will be discussing our recent artistic and curatorial collaborations around AI and environments, both natural and computational. Selected pieces from our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! will also be screening throughout the festival.

“AI as Existential(ist) Risk and Aesthetic Opportunity” — Keynote at Media Theory Conference 2025 in Toronto, Nov. 7-8

I’m excited to be giving one of the keynotes at the Media Theory Conference 2025 at the Centre for Culture and Technology in Toronto. On Nov. 8, I’ll give a talk titled “AI as Existential(ist) Risk and Aesthetic Opportunity.” Here is the abstract:

Contemporary debates around artificial intelligence often frame the technology in terms of “existential risk.” Yet such framings rarely pause to consider what existential might mean in the existentialist sense. In this talk I return to Heidegger’s account of the “worldhood of the world” and Sartre’s concept of “hodological space” to argue that the risk posed by AI is not confined to catastrophic scenarios of planetary survival, but lies more immediately in the reconfiguration of subjectivity itself. AI systems bypass conscious perception, modulating aesthesis—the sensory, affective, and preconscious conditions of experience—and in doing so recalibrate the orientations that make ethical deliberation possible in the first place.

Seen from this angle, the hazard of AI is not external to us but infrastructural, shaping our movements, postures, and affective attunements. At the same time, this hazard can be taken up as an opportunity: artworks that use machine learning to stage glitches, detours, or dissonances do not merely represent technological change but provide laboratories for inhabiting it, exposing how bodies and worlds are being rewritten. If AI marks an existentialist risk, it also opens an occasion to engage aesthetically with the reorganization of perception and orientation, and to confront the stakes of ethics where they begin—in the aesthetic, in the felt conditions of living and acting in a changing world.

Art & Artifice: Or, What AI Means for Aesthetics — John Fekete Distinguished Lecture, Trent University, November 6, 2025

I am honored to be delivering this year’s John Fekete Distinguished Lecture at Trent University. On November 6, 2025, I will speak about my current book project, Art & Artifice: Or, What AI Means for Aesthetics.

Abstract:

The rapid spread of generative AI tools has sparked urgent debates about ethics, governance, and even existential risk. These concerns are real, but they often miss a prior and constitutive dimension: the aesthetic. In this talk, I argue that no adequate understanding of artificial intelligence—and no robust AI ethics—can be developed without sustained attention to the aesthetic forms through which AI enters human experience.

Today, many critical responses to AI focus on transparency, bias, or political economy. Yet when machine learning systems generate images, sounds, and texts, or when they infiltrate experience in subtler ways, they reshape foundational lived relations to the sensible world. Aesthetics is not merely a matter of artistic style but of the mediation of experience itself—a matter of the ways we sense, interpret, and imagine.

Accordingly, to speak of “AI aesthetics” is to invoke both aesthesis—the broad field of perception and sensation—and aesthetics in the narrower sense of artistic form. Both are crucially at stake in today’s machine-learning algorithms. AI systems like Midjourney, DALL-E, or GPT-5 not only generate potential artworks but also make otherwise invisible computational processes indirectly perceptible and actionable; in so doing they insinuate themselves into the fabric of experience and reshape the very conditions of perception. In this sense, aesthetic forms are not secondary embellishments but essential mediators of how AI becomes intelligible to us—as well as crucial vectors with respect to who “we,” as perceiving, deliberating, and agential subjects, are. By analyzing artworks that grapple with these new technologies, I show that AI aesthetics is foundational to the cultural, political, and ethical challenges now unfolding. 

More info here.

NON/PHENOMENALITIES — July 26 – Aug 30, 2025 at Gallery 120710 in Berkeley

NON/PHENOMENALITIES — a show that I am co-curating with artist Brett Amory at Gallery 120710 in Berkeley — opens July 26 with an amazing lineup of artists.

The title of this exhibition plays on the multiple senses of the “phenomenal.” On the one hand, the phenomenal is equated with spectacle and the spectacular, the exceptional appearance that dazzles its audience, like a pop phenomenon. On the other hand, phenomenality refers to the way anything whatsoever appears to our embodied senses; this less extravagant sense of the word “phenomenon” is at the heart of phenomenology and Kantian philosophy (where it is opposed to the noumenal, which can never appear to sensation).

Both senses of the phenomenal are contested and reconfigured in the contemporary networks of computational media and machine-learning algorithms. For example, AI produces a steady stream of spectacles, each more spectacular than the last, but the underlying operations are immune to human perception. In this interplay, not only the objects of perception but also the very conditions of experience are up for grabs. The phenomenal itself is conditioned by a new realm of nonphenomenality, which poses a special challenge for artists working with these new technologies.

As a way of approaching this new situation, we look to works that stage multiple aesthetic inversions of the phenomenal, ranging from the subtle or understated to the invisible. What comes to the fore when vision encounters computation’s resistance to consciousness, its “discorrelation” from the phenomenology of embodied experience? How can we perceive what artist Trevor Paglen has dubbed the “invisible images” that populate our world? And how can these inversions connect with or be illuminated by other traditions of the non/phenomenal—for example Buddhist ideas of appearance as illusion, the Lacanian notion of the unperceived Real, or neuroscientific theories of consciousness as a nonsubstantial epiphenomenon?

Looking beyond the spectacles of contemporary technology, Non/phenomenalities asks us to imagine an aesthetics of the subtle, the muted, the “barely perceptible difference,” maybe even the boring.

New Article: “On the Very Idea of a (Synthetic) Conceptual Scheme” — Out now in Philosophy & Digitality

My article “On the Very Idea of a (Synthetic) Conceptual Scheme” has just been published in the open access journal Philosophy & Digitality, in a special issue on “LLMs and the Patterns of Human Language Use.”

The title of the piece plays on, and the article draws substantially on, Donald Davidson’s “On the Very Idea of a Conceptual Scheme.” By way of this classic text, I engage closely with M. Beatrice Fazi’s provocative article “The Computational Search for Unity: Synthesis in Generative AI.” I agree with Fazi that we have to take the outputs of LLMs as genuine language (contra the “stochastic parrots” crew), and that the best way to account for their operations is in terms of a kind of philosophical “synthesis.” But whereas Fazi sees LLMs synthesizing their own individual “worlds within,” I argue that the genuineness of their linguistic outputs (i.e. the fact that they produce real language) instead suggests that they refer to a world shared in common with human language-users (which commonality should not, however, detract from their alterity or alienness to our embodied Lebensform, or form of life).

In the same issue of Philosophy & Digitality, Fazi has a response to my article, titled “A Transcendental Philosophy of Large Language Models,” which I also highly recommend, and which brings our differences—as well as agreements—into sharper relief. I have the feeling this is the beginning of a longer exchange!

I’d like to thank Sybille Krämer and Christoph Durt for inviting my participation in the special issue and shepherding it toward publication, and for soliciting Fazi’s response. And thanks, above all, to Beatrice Fazi for producing such thought-provoking work in the philosophy of AI and computation!

“Dimensionality, Perspective, and Imagination in Computational Media” — Talk at UC Berkeley conference on Dimensional Vision in Flux, May 29-31, 2025

I’m excited to be speaking, alongside an amazing lineup of scholars, at a conference this week (May 29-31, 2025) on Dimensional Vision in Flux: The Stereo-Aesthetics and Politics of 3D Cinema and Media, hosted by the Department of Film & Media at UC Berkeley. I’ll be giving a talk on “Dimensionality, Perspective, and Imagination in Computational Media.”

The complete program can be found here. And here’s my abstract:

Dimensionality, Perspective, and Imagination in Computational Media

Dimensional vision finds itself in flux, as the title of this symposium would have it. The flux in question has to do with recent and contemporary transformations in visual media: witness the many booms and busts of 3D cinema, recall the short-lived push to put 3D televisions in our living rooms, and consider the rapidly changing landscape of VR, AR, MR, XR, whatever-R. In order to get a handle on the flux of dimensional vision in relation to such media-technological changes, however, I would like to take a step back and observe that dimensional vision has always and only ever been in flux. I mean this, first, in the sense that dimensionality is given to human experience immediately and inseparably from the spatiotemporal flux of embodied existence; this “microperceptual” dimension (in Don Ihde’s terms) is epitomized in Edmund Husserl’s descriptions of the flux of “adumbrations” as he walks around a tree, whereby a multidimensional model of “the tree,” never wholly seen, takes shape in his mind. In a second, more historical sense, dimensional vision has always been in flux in a way that is more closely attuned to the media changes described above; rather than exceptional, however, such flux is a constant because there is no natural or neutral state apart from mediation: the “microperceptual” level of embodied experience can never be thought apart from what Ihde calls the “macroperceptual” level of cultural and technological conditioning (and vice versa).

Taken seriously, this means that dimensionality and perspectival vision are inherently contingent and deeply political—not just perspectival representation, but the embodied experience of perceptual perspective and spatial orientation itself. And while I argue that this has always been the case for humans as an essentially biotechnical species, the political stakes are heightened in an era of computational media. The latter, including VR and similar media of 3D visuality, operate faster than and bypass human perception, opening dimensional vision to fine-grained reengineering. In order to make this argument, I turn to Kant’s notion of the productive imagination (Einbildungskraft) and the stereotyping operation of the “schematism” that connects visual stimuli to concepts of the understanding. Following philosophers Wilfrid Sellars and Alan Thomas, schemata are perspectivally indeterminate but determinable, and through them the Kantian imagination is responsible for our empirical experience of things as having depth and unseen backsides—responsible, that is, for our sense of the world as a dimensional, volumetric space within which I am positioned. Meanwhile, computational media are constructing their own spatial models of the world (or worlds), models that exceed and resist human perceptual access while positioning us both virtually and physically. In this way, they assume functions of the imagination and modulate the flux of dimensional vision at both microperceptual and macroperceptual scales.