“Making Politics: Commemoration, Resistance, and Play” — Joseph DeLappe at Digital Aesthetics Workshop, Oct. 22, 2025

With apologies for the late announcement, the Digital Aesthetics Workshop is delighted to welcome our first speaker of the 2025-26 academic year! Joseph DeLappe will present on “Making Politics: Commemoration, Resistance, and Play” on Wednesday, October 22, from 5-6:30pm PT. The event will take place in Wallenberg 433A, at the Stanford Center for Spatial and Textual Analysis (CESTA). Dinner will be served.

Zoom link for those unable to join in-person: tinyurl.com/5cjwfmej

Below you will find Joseph DeLappe’s bio and abstract. We look forward to seeing you there!

Abstract: 

Can art be a catalyst for change in times of war and conflict? What role can creative acts of counter-memorialization, interventionist practices, play, and participatory art take in changing how we perceive and act upon issues of contemporary and historical violence and the broader politics of memory? Media artist and activist Joseph DeLappe will share documentation from a diversity of creative projects and actions developed over the past several decades that utilize digital and analogue processes to creatively address such questions. A lineage of works, including video games, public actions (online and IRL), participatory making, performance, play, protest, and memorialization, will illuminate his critical and interrogative strategies at the intersections of art, technology, and social engagement.

Bio: 

Joseph DeLappe (born San Francisco, 1963) is an artist, activist, and educator who relocated to Scotland in 2017 after 23 years directing the Digital Media program at the University of Nevada, Reno. Working with electronic and digital media since 1983, he has shown projects in online gaming performance, sculpture, and electromechanical installation throughout the world. In 2006 he began the project dead‐in‐iraq, typing, consecutively, all the names of America’s military casualties from the war in Iraq into America’s Army, the first-person-shooter online recruiting game. More recently he developed the concept behind Killbox (funded in part by Creative Scotland), an interactive computer game about drone warfare created with the Biome Collective in Scotland. Killbox was nominated for a BAFTA Scotland (British Academy of Film and Television Arts) award as “Best Computer Game.” His works have been featured in The New York Times, The Australian Morning Herald, Art in America, The Guardian, and on the BBC. He has authored several book chapters, including “Me and My Predator(s): Tactical Remembrance and Critical Atonement” in Drone Aesthetics: War, Culture, Ecology (Open Humanities Press, 2022) and “Making Politics: Engaged Social Tactics, A Conversation between Joseph DeLappe and Laura Leuzzi” in Art as Social Practice: Technologies for Change (Routledge, 2022). DeLappe was awarded a Guggenheim Fellowship in the Fine Arts in 2017.

This event is co-sponsored by the Silicon Valley Archives and the Patrick Suppes Center for History & Philosophy of Science. 

Installation at GearBox Gallery, Oakland — opening Nov. 1!

I’m excited to announce that GlitchesAreLikeWildAnimals! — BOVINE, part of a larger series of collaborations between Karin Denson and me, will be installed at GearBox Gallery in Oakland. The opening is Saturday, November 1 (1-4pm), and the piece will be on view through December 6, when a closing event and artist talk will begin at 2pm.

The installation comprises a set of paintings and custom software that runs a real-time generative audiovisual experience. You can read more about the piece here and here.

And here are a couple of installation shots from a recent show at 120710 Gallery in Berkeley:

NON/PHENOMENALITIES — July 26 – Aug 30, 2025 at Gallery 120710 in Berkeley

NON/PHENOMENALITIES — a show that I am co-curating with artist Brett Amory at Gallery 120710 in Berkeley — opens July 26 with an amazing lineup of artists.

The title of this exhibition plays on the multiple senses of the “phenomenal.” On the one hand, the phenomenal is equated with spectacle and the spectacular, the exceptional appearance that dazzles its audience, like a pop phenomenon. On the other hand, phenomenality refers to the way anything whatsoever appears to our embodied senses; this less extravagant sense of the word “phenomenon” is at the heart of phenomenology and Kantian philosophy (where it is opposed to the noumenal, which can never appear to sensation).

Both senses of the phenomenal are contested and reconfigured in the contemporary networks of computational media and machine-learning algorithms. For example, AI produces a steady stream of spectacles, each more spectacular than the last, but the underlying operations are immune to human perception. In this interplay, not only the objects of perception but also the very conditions of experience are up for grabs. The phenomenal itself is conditioned by a new realm of nonphenomenality, which poses a special challenge for artists working with these new technologies.

As a way of approaching this new situation, we look to works that stage multiple aesthetic inversions of the phenomenal, ranging from the subtle or understated to the invisible. What comes to the fore when vision encounters computation’s resistance to consciousness, its “discorrelation” from the phenomenology of embodied experience? How can we perceive what artist Trevor Paglen has dubbed the “invisible images” that populate our world? And how can these inversions connect with or be illuminated by other traditions of the non/phenomenal—for example, Buddhist ideas of appearance as illusion, the Lacanian notion of the unperceived Real, or neuroscientific theories of consciousness as a nonsubstantial epiphenomenon?

Looking beyond the spectacles of contemporary technology, Non/phenomenalities asks us to imagine an aesthetics of the subtle, the muted, the “barely perceptible difference,” maybe even the boring.

“where do old sounds go to die?” and “murnau model” — Critical Making Collaborative, May 16, 2025

The Critical Making Collaborative at Stanford invites you to our Spring event — an evening of sharing and discussion with two recipients of the Critical Making Award, Lemon Guo and J. Makary, who will present their ongoing work in music and performance on Friday, May 16 (6PM) at the CCRMA Stage (3rd floor). 

Lemon Guo — where do old sounds go to die?

Since 2017, I have been visiting the Kam villages in Guizhou, China, to work with the elder women singers. On my recent trips, I noticed that a sound that used to pulse through the village during all waking hours had disappeared. To make textiles for clothing, many women used to spend months at a time hammering cotton outdoors. I made several field recordings of this practice when it seemed commonplace and quotidian. As cultural tourism transformed the village soundscape, I started to listen to these files on my hard drive. In this piece, the performers were allowed to listen to these recordings only in the first rehearsal. They were not told that the field recordings would be taken away from them. This performance is made from what they can remember.

J. Makary — murnau model

For murnau model, I used a machine learning model trained on still frames from F. W. Murnau’s 1924 silent film The Last Laugh/Der letzte Mann to generate new hypothetical images that emerge from its lengthy dream sequence. After subsequent interventions to guide image generation and alter their evolution, the images were “married” back to the film through photographic capture of individual frames of the physical filmstrip. By embedding these digital apparitions into the material substrate of celluloid, I intended to create a dialogue between analog and digital dreams, from film to data and back again. The resulting work becomes a reflection on cinema’s dual nature as both technological process and dream machine.

Unintended Outcomes: AI in the Artist’s Studio — Roundtable at SF Art Fair, April 20, 2025

This coming Sunday, April 20, I’ll be on a roundtable with Halim Madi, Jill Miller, and Asma Kazmi, moderated by Kate Hollenbach, organized by Gray Area at the San Francisco Art Fair.

In today’s cultural landscape, artificial intelligence has moved beyond buzzword status: machine-learning-driven processes are thoroughly integrated—both visibly and invisibly—into the tools we use every day. Hailed as democratizing digital labor yet decried for diluting human creativity and agency, AI is clearly here to stay. As creators continue to experiment with AI, what has stuck? Beyond the hype, which tools and processes are making a real difference in artists’ studios, and how is that impacting a broader visual culture? How can artists reclaim agency over algorithmic processes and take command of their own learning models? In this panel discussion presented by Gray Area, scholars of AI aesthetics and visual practitioners working with AI will come together to map the current state of artificial intelligence and artistic creation. The panel includes Shane Denson (Professor, Stanford University Department of Communication), Halim Madi (programmer, poet, and storyteller), Jill Miller (visual artist and Professor, Department of Art Practice, UC Berkeley), and Asma Kazmi (artist). The discussion will be moderated by Kate Hollenbach, Education Director, Gray Area.

More info here: https://sanfranciscoartfair.com/events/unintended-outcomes-ai-in-the-artists-studio/

“Unit Operations” and “Alloy Resonator 0.2” — Critical Making Collaborative, March 10, 2025

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, Daniel Jackson and Kimia Koochakzadeh-Yazdi, who will present their ongoing work in music and performance—Monday, March 10 (4PM) at the CCRMA Stage (3rd floor). 

Alloy Resonator 0.2 – Kimia Koochakzadeh-Yazdi (Music Composition) 

Alloy Resonator, a hybrid wearable instrument, embraces the fragility and rigidity of the body as an expressive medium for playing electronic music. It experiments with physical thresholds and explores ways to position the performer’s body at the center of the performance. The goal is to have every movement, whether subtle or exaggerated, become an amplified sonic gesture.

The Unit Operations Here Are Highly Specific – Daniel Jackson (Theater and Performance Studies)

The Unit Operations Here Are Highly Specific is a devised, movement-based work exploring the relationship between text, performance, and reception by allowing each audience member to choose from and switch between soundtracks while they watch a choreographed performance. The work playfully confronts the limits of personalization in the context of collective experience while interrogating how meaning is generated and where meaning resides in complex performance-media environments.

GlitchesAreLikeWildAnimalsInLatentSpace! CANINE! — Karin + Shane Denson

CANINE! (2024)

Karin & Shane Denson

Canine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making — including the mental “schematisms” theorized by Kant and now embodied in algorithmic stereotypes.

This is a screen recording of a real-time, generative/combinatory video.

Canine! is a sort of “forest of forking paths,” consisting of 64 branching and looping pathways, with alternate pathways displayed in tandem, along with generative text, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation. Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a set of species-indeterminate canines, which Karin painted with acrylic on canvas. The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times in branching paths before looping back. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original canine painting into Audacity as raw data, interpreted with the GSM codec.
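The core databending move described above, reinterpreting the bytes of one medium as samples of another, can be sketched in a few lines of Python. This is an illustrative stand-in for the Audacity workflow, not a reproduction of it: it writes the image file's raw bytes out as 8-bit unsigned PCM rather than decoding them with the GSM codec, and the function name is my own.

```python
import wave

def sonify(image_path, out_path, sample_rate=8000):
    """Reinterpret a file's raw bytes as 8-bit unsigned mono PCM audio.

    Every byte of the source file (headers included) becomes one audio
    sample, which is what gives databent audio its characteristic noise.
    """
    with open(image_path, "rb") as f:
        data = f.read()
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)          # mono
        w.setsampwidth(1)          # one byte per sample
        w.setframerate(sample_rate)
        w.writeframes(data)
```

A 1 MB jpg interpreted this way at 8 kHz yields roughly two minutes of audio; choosing a different codec or sample rate (as Audacity's raw import dialog allows) changes the texture of the result.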

Onscreen and spoken text is generated by a Markov model trained on Shane’s article “Artificial Imagination” (https://ojs.library.ubc.ca/index.php/cinephile/article/view/199653).
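A word-level Markov model of this general kind can be sketched in a few lines of Python. This is an illustration of the technique only; the piece itself uses the RiTa tools, and the tiny corpus below is a placeholder for the article text.

```python
import random
from collections import defaultdict

def build_markov(text, order=1):
    """Map each n-gram of words to the words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=20, seed=None):
    """Walk the chain, choosing uniformly among observed successors."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length - len(out)):
        successors = model.get(tuple(out[-len(key):]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the glitch is wild the glitch is an animal the animal is wild"
model = build_markov(corpus, order=1)
print(generate(model, length=8, seed=1))
```

Because successors are sampled by observed frequency, the output stays locally plausible while drifting globally, which is what gives Markov text its characteristic half-sense.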

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac 14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Pawel Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

See also: Bovine! (https://vimeo.com/manage/videos/1013903632)

“Democratizing Vibrations” and “Opera Machine” — Critical Making Collaborative, Nov. 22, 2024

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, West Montgomery and Lloyd May, who will present their ongoing work in opera and haptic art—Friday, Nov. 22 (5PM) at the CCRMA Stage (3rd floor). 

Democratizing Vibrations – Lloyd May (Music Technology)

What would it mean to put vibration and touch at the center of a musical experience? What should devices used to create and experience vibration-based art (haptic instruments) look and feel like? These questions are at the core of the Musical Haptics project that aims to co-design haptic instruments and artworks with D/deaf and hard-of-hearing artists. 

Opera Machine – Westley Montgomery (TAPS)

Opera Machine is a work-in-process exploring music, measurement, and the sedimentation of culture in the bodies of performers. How does the cultural legacy of opera reverberate in the present day? How have the histories of voice-science, race “science,” and the gendering of the body co-produced pedagogies and styles of opera performance? What might it look like (sound like) to resist these histories? 

GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)

BOVINE! (2024)
Karin & Shane Denson

Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

Bovine-Video-Only.app removes the text and text-to-speech elements, featuring only generative audio and video, assembled randomly from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and lack of synchronization.

The full video was then cropped to produce five different strips. The audio on each was positioned accordingly in stereo space (i.e. the left-most strip has its audio turned all the way to the left, the next one over is half-way from the left to the center, the middle one is in the center, etc.). The Max app chooses randomly from a set of predetermined start points where to play each strip of video, keeping the overall image more or less in sync.
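The panning and start-point logic described above can be sketched as follows. This is a Python illustration of what the Max patch does, not the patch itself, and the start-point values are hypothetical placeholders for the patch's predetermined cues.

```python
import random

N_STRIPS = 5

def pan_positions(n=N_STRIPS):
    """Evenly space pan values from -1.0 (hard left) to 1.0 (hard right),
    so each strip's audio sits where the strip appears onscreen."""
    return [2 * i / (n - 1) - 1 for i in range(n)]

def choose_starts(start_points, n=N_STRIPS, seed=None):
    """Pick one predetermined start point per strip. Drawing from shared
    cue points keeps the five strips more or less in sync visually."""
    rng = random.Random(seed)
    return [rng.choice(start_points) for _ in range(n)]

cues = [0.0, 12.5, 30.0, 47.5]   # hypothetical cue times in seconds
starts = choose_starts(cues, seed=7)
pans = pan_positions()           # [-1.0, -0.5, 0.0, 0.5, 1.0]
```

Restricting each strip to a shared set of cue points is what lets the playback be random per strip while the composite image never drifts far out of alignment.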

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac 14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Pawel Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)