NON/PHENOMENALITIES — July 26 – Aug 30, 2025 at Gallery 120710 in Berkeley

NON/PHENOMENALITIES — a show that I am co-curating with artist Brett Amory at Gallery 120710 in Berkeley — opens July 26 with an amazing lineup of artists.

The title of this exhibition plays on the multiple senses of the “phenomenal.” On the one hand, the phenomenal is equated with spectacle and the spectacular, the exceptional appearance that dazzles its audience, like a pop phenomenon. On the other hand, phenomenality refers to the way anything whatsoever appears to our embodied senses; this less extravagant sense of the word “phenomenon” is at the heart of phenomenology and Kantian philosophy (where it is opposed to the noumenal, which can never appear to sensation).

Both senses of the phenomenal are contested and reconfigured in the contemporary networks of computational media and machine-learning algorithms. For example, AI produces a steady stream of spectacles, each more spectacular than the last, while the underlying operations remain inaccessible to human perception. In this interplay, not only the objects of perception but also the very conditions of experience are up for grabs. The phenomenal itself is conditioned by a new realm of nonphenomenality, which poses a special challenge for artists working with these new technologies.

As a way of approaching this new situation, we look to works that stage multiple aesthetic inversions of the phenomenal, ranging from the subtle or understated to the invisible. What comes to the fore when vision encounters computation’s resistance to consciousness, its “discorrelation” from the phenomenology of embodied experience? How can we perceive what artist Trevor Paglen has dubbed the “invisible images” that populate our world? And how can these inversions connect with or be illuminated by other traditions of the non/phenomenal—for example, Buddhist ideas of appearance as illusion, the Lacanian notion of the unperceived Real, or neuroscientific theories of consciousness as a nonsubstantial epiphenomenon?

Looking beyond the spectacles of contemporary technology, Non/phenomenalities asks us to imagine an aesthetics of the subtle, the muted, the “barely perceptible difference,” maybe even the boring.

“where do old sounds go to die?” and “murnau model” — Critical Making Collaborative, May 16, 2025

The Critical Making Collaborative at Stanford invites you to our Spring event — an evening of sharing and discussion with two recipients of the Critical Making Award, Lemon Guo and J. Makary, who will present their ongoing work in music and performance on Friday, May 16 (6PM) at the CCRMA Stage (3rd floor). 

Lemon Guo — where do old sounds go to die?

Since 2017, I have been visiting the Kam villages in Guizhou, China to work with the elder women singers. On my recent trips, I noticed that a sound that used to pulse through the village in all waking hours had disappeared. To make textiles for clothing, many women used to spend months at a time hammering cotton outdoors. I made several field recordings of this practice when it seemed commonplace and quotidian. As cultural tourism transformed the village soundscape, I started to listen to these files on my hard drive. In this piece, the performers were only allowed to listen to these recordings in the first rehearsal. They were not told that the field recordings would be taken away from them. This performance is made from what they can remember.

J. Makary — murnau model

For murnau model, I used a machine learning model trained on still frames from F. W. Murnau’s 1924 silent film The Last Laugh/Der letzte Mann to generate new hypothetical images that emerge from its lengthy dream sequence. After subsequent interventions to guide image generation and alter their evolution, the images were “married” back to the film through photographic capture of individual frames of the physical filmstrip. By embedding these digital apparitions into the material substrate of celluloid, I intended to create a dialogue between analog and digital dreams, from film to data and back again. The resulting work becomes a reflection on cinema’s dual nature as both technological process and dream machine.

Unintended Outcomes: AI in the Artist’s Studio — Roundtable at SF Art Fair, April 20, 2025

This coming Sunday, April 20, I’ll be on a roundtable with Halim Madi, Jill Miller, and Asma Kazmi, moderated by Kate Hollenbach, organized by Gray Area at the San Francisco Art Fair.

In today’s cultural landscape, artificial intelligence has moved beyond buzzword status: machine-learning-driven processes are thoroughly integrated—both visibly and invisibly—into the tools we use every day. Hailed as democratizing digital labor yet decried for diluting human creativity and agency, AI is clearly here to stay. As creators continue to experiment with AI, what has stuck? Beyond the hype, which tools and processes are making a real difference in artists’ studios, and how is that impacting a broader visual culture? How can artists reclaim agency over algorithmic processes and take command of their own learning models? In this panel discussion presented by Gray Area, scholars of AI aesthetics and visual practitioners working with AI will come together to map the current state of artificial intelligence and artistic creation. The panel includes Shane Denson (Professor, Stanford University Department of Communication), Halim Madi (programmer, poet, and storyteller), Jill Miller (visual artist and Professor, Department of Art Practice, UC Berkeley), and Asma Kazmi (artist). The discussion will be moderated by Kate Hollenbach, Education Director, Gray Area.

More info here: https://sanfranciscoartfair.com/events/unintended-outcomes-ai-in-the-artists-studio/

“Unit Operations” and “Alloy Resonator 0.2” — Critical Making Collaborative, March 10, 2025

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, Daniel Jackson and Kimia Koochakzadeh-Yazdi, who will present their ongoing work in music and performance—Monday, March 10 (4PM) at the CCRMA Stage (3rd floor). 

Alloy Resonator 0.2 – Kimia Koochakzadeh-Yazdi (Music Composition) 

Alloy Resonator, a hybrid wearable instrument, embraces the fragility and rigidity of the body as an expressive medium for playing electronic music. It experiments with physical thresholds and explores ways to position the performer’s body at the center of the performance. The goal is to have every movement, whether subtle or exaggerated, become an amplified sonic gesture.

The Unit Operations Here Are Highly Specific – Daniel Jackson (Theater and Performance Studies)

The Unit Operations Here Are Highly Specific is a devised, movement-based work exploring the relationship between text, performance, and reception by allowing each audience member to choose from and switch between soundtracks while they watch a choreographed performance. The work playfully confronts the limits of personalization in the context of collective experience while interrogating how meaning is generated and where meaning resides in complex performance-media environments.

GlitchesAreLikeWildAnimalsInLatentSpace! CANINE! — Karin + Shane Denson

CANINE! (2024)

Karin & Shane Denson

Canine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making — including the mental “schematisms” theorized by Kant and now embodied in algorithmic stereotypes.

This is a screen recording of a real-time, generative/combinatory video.

Canine! is a sort of “forest of forking paths,” consisting of 64 branching and looping pathways, with alternate pathways displayed in tandem, along with generative text, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.
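
To give a feel for how such a combinatory structure behaves, here is a toy sketch in Python. The actual piece is a Max patch, and the segment names and branch counts below are invented for illustration: each segment of video ends at a branch point where the app picks one of several continuations at random, so playback wanders and loops through the graph indefinitely.

```python
import random

# Toy model of a branching/looping clip graph (illustrative only; the real
# work has 64 pathways and runs as a Max patch, not Python).
paths = {
    "A": ["B", "C"],
    "B": ["D", "A"],  # "A" here loops back to the start
    "C": ["D", "B"],
    "D": ["A", "C"],
}

def walk(start: str, steps: int) -> list[str]:
    """Randomly traverse the clip graph, one segment at a time."""
    sequence = [start]
    for _ in range(steps):
        sequence.append(random.choice(paths[sequence[-1]]))
    return sequence

print(walk("A", 16))  # one of exponentially many possible playback orders
```

Even this four-node toy admits thousands of distinct sixteen-step walks; with 64 pathways plus independently generated text and sound, an exact repetition becomes vanishingly unlikely.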

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation. Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a set of species-indeterminate canines, which Karin painted with acrylic on canvas. The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times in branching paths before looping back. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original canine painting into Audacity as raw data, interpreted with the GSM codec.
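
For readers curious about the databending step, here is a rough, self-contained approximation in Python. Audacity’s raw-data import interpreted the image bytes with the GSM codec; since no GSM decoder ships with Python’s standard library, this sketch treats the bytes as 8-bit unsigned PCM instead, which yields a comparably harsh noise texture. The filenames are placeholders.

```python
import wave

# Databending sketch: read an image file as raw bytes and write those same
# bytes out as audio samples. (Approximation: Audacity decoded the bytes
# with the GSM codec; here we interpret them as 8-bit unsigned PCM.)
with open("canine.jpg", "rb") as f:        # placeholder filename
    raw = f.read()

with wave.open("canine_raw.wav", "wb") as wav:
    wav.setnchannels(1)     # mono
    wav.setsampwidth(1)     # 1 byte per sample = 8-bit audio
    wav.setframerate(8000)  # 8 kHz, GSM's native sample rate
    wav.writeframes(raw)    # image data becomes sound
```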

Onscreen and spoken text is generated by a Markov model trained on Shane’s article “Artificial Imagination” (https://ojs.library.ubc.ca/index.php/cinephile/article/view/199653).
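
The flavor of such a Markov model is easy to sketch. The piece itself uses RiTa’s Markov tools (see the production notes below), not this code; the following hypothetical Python version simply records, for each word of a source text, the words that can follow it, then samples a chain of transitions.

```python
import random
from collections import defaultdict

# Minimal word-level Markov chain (order 1), for illustration only.
def train(text: str) -> dict:
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)  # duplicates preserve transition frequencies
    return chain

def generate(chain: dict, n: int) -> str:
    word = random.choice(list(chain))  # random starting word
    out = [word]
    for _ in range(n - 1):
        word = random.choice(chain.get(word) or list(chain))  # restart at dead ends
        out.append(word)
    return " ".join(out)

# "artificial_imagination.txt" is a placeholder for the source text.
corpus = open("artificial_imagination.txt").read()
print(generate(train(corpus), 30))
```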

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.
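
That last step is minimal plumbing: the shell object just hands the generated string to the operating system, which on macOS presumably means invoking something like the built-in `say` command. A rough Python equivalent of that handoff:

```python
import subprocess

# Hand a string to macOS's built-in text-to-speech via the command line.
# (Sketch of the general idea only; the actual piece routes text through
# Max's shell object, not Python.)
def speak(text: str) -> None:
    subprocess.run(["say", text], check=True)  # macOS-only

speak("Glitches are like wild animals.")
```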

See also: Bovine! (https://vimeo.com/1013903632)

“Democratizing Vibrations” and “Opera Machine” — Critical Making Collaborative, Nov. 22, 2024

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, West Montgomery and Lloyd May, who will present their ongoing work in opera and haptic art—Friday, Nov. 22 (5PM) at the CCRMA Stage (3rd floor). 

Democratizing Vibrations – Lloyd May (Music Technology)

What would it mean to put vibration and touch at the center of a musical experience? What should devices used to create and experience vibration-based art (haptic instruments) look and feel like? These questions are at the core of the Musical Haptics project that aims to co-design haptic instruments and artworks with D/deaf and hard-of-hearing artists. 

Opera Machine – Westley Montgomery (TAPS)

Opera Machine is a work-in-process exploring music, measurement, and the sedimentation of culture in the bodies of performers. How does the cultural legacy of opera reverberate in the present day? How have the histories of voice-science, race “science,” and the gendering of the body co-produced pedagogies and styles of opera performance? What might it look like (sound like) to resist these histories? 

GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)

BOVINE! (2024)
Karin & Shane Denson

Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

Bovine-Video-Only.app removes the text and text-to-speech elements and features only generative audio and video, assembled randomly from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and a lack of synchronization.

The full video was then cropped to produce five different strips. The audio of each strip was positioned accordingly in stereo space (i.e., the left-most strip is panned hard left, the next one over halfway between left and center, the middle one in the center, etc.). The Max app randomly chooses, from a set of predetermined start points, where to begin playing each strip of video, keeping the overall image more or less in sync.
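
The panning arithmetic is simple to reconstruct. In this sketch (illustrative values only; the real logic lives inside the Max app, and the cue times below are invented), strip i of n is panned to -1 + 2i/(n-1), spreading the five audio channels evenly from hard left to hard right, while each strip draws its start point from the same predetermined grid so the composite stays roughly aligned.

```python
import random

NUM_STRIPS = 5
START_POINTS = [0.0, 12.5, 25.0, 37.5]  # placeholder cue times, in seconds

for i in range(NUM_STRIPS):
    pan = -1.0 + 2.0 * i / (NUM_STRIPS - 1)  # -1.0, -0.5, 0.0, +0.5, +1.0
    start = random.choice(START_POINTS)      # shared cue grid keeps strips roughly in sync
    print(f"strip {i}: pan {pan:+.1f}, start at {start}s")
```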

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

“Artificial Imagination” — Out now in new issue of Cinephile (open access)

The new issue of Cinephile, the University of British Columbia’s film and media journal, is just out. The theme of the issue is “(Un)Recovering the Future,” and it’s all about nostalgia, malaise, history, and (endangered) futurities.

In this context, I am happy to have contributed a piece called “Artificial Imagination” on the relation between AI and (visual) imagination. The essay lays some of the groundwork for a larger exploration of AI and its significance for aesthetics in both broad and narrow senses of the word. It follows from the emphasis on embodiment in my essay “From Sublime Awe to Abject Cringe: On the Embodied Processing of AI Art,” recently published in the Journal of Visual Culture; both essays are part of a larger book project tentatively called Art & Artificiality, or: What AI Means for Aesthetics.

Thanks very much to editors Will Riley and Liam Riley for the invitation to contribute to this issue!

OUT NOW: “From Sublime Awe to Abject Cringe: On the Embodied Processing of AI Art” in Journal of Visual Culture

The new issue of Journal of Visual Culture just dropped, and I’m excited to see my article on AI art and aesthetics alongside work by Shannon Mattern, Bryan Norton, Jussi Parikka, and others. It looks like a great issue, and I’m looking forward to digging into it!

“Mimetic Virtualities” — Yvette Granata at Digital Aesthetics Workshop, February 6, 2024

Please join us for the next Digital Aesthetics Workshop, when we will welcome Yvette Granata for her talk on “Mimetic Virtualities: Rendering the Masses and/or Feminist Media Art?” on February 6, 5-7pm PT. The event will take place in the Stanford Humanities Center Board Room, where refreshments will be served. Below you will find the abstract and bio, as well as a poster for lightweight circulation. We look forward to seeing you there!

Zoom link for those unable to join in-person: tinyurl.com/2r285898

Abstract: 

From stolen election narratives to QAnon cults, the politics of the 21st century are steeped in the mainstreaming of disinformation and the hard-core pursuit of false realities via any media necessary. Simultaneously, the 21st century marks the rise of virtual reality as a mass medium. While the spatial computing technologies behind virtual reality graphics and head-mounted displays have been in development since the middle of the 20th century, virtual reality as a mass medium is a phenomenon of the last decade. Concurrently with the development of VR as a mass medium, the tools of virtual production have proliferated – such as motion capture libraries, 3D model and animation platforms, and game engine tools. Do the pursuit of false realities and the proliferation of virtual reality technologies have anything to do with each other? Has virtual reality as a mass medium shaped the aesthetics of the digital masses differently? Looking to the manner in which virtual mimesis operates via rendering methods of the image of crowds, from 2D neural GAN generators to the recent development of neural radiance fields (NeRFs) as a form of mass 3D rendering, I analyze the politics and aesthetics of mimetic virtualities as both a process of rendering of the masses and as a process of the distribution of the sensibility of virtualized bodies. Lastly, I present all of the above via feminist media art practice as a critical, creative method.

Bio:

Yvette Granata is a media artist, filmmaker, and digital media scholar. She is Assistant Professor at the University of Michigan in the Department of Film, Television, and Media and the Digital Studies Institute. She creates immersive installations, video art, VR experiences, and interactive environments, and writes about digital culture, media art, and media theory. Her work has been exhibited nationally and internationally at film festivals and art institutions including Slamdance, CPH:DOX, The Melbourne International Film Festival, The Annecy International Animation Festival, Images Festival, Harvard Carpenter Center for the Arts, The EYE Film Museum, McDonough Museum of Art, and Hallwalls Contemporary Art, among others. Her most recent VR project, I Took a Lethal Dose of Herbs, premiered at CPH:DOX in 2023, won best VR film at the Cannes World Film Awards, and received an Honorable Mention at Prix Ars Electronica in Linz, Austria. Yvette has also published in Ctrl-Z: New Media Philosophy, Trace Journal, NECSUS: European Journal of Media Studies, International Journal of Cultural Studies, and AI & Society. She lives in Detroit.