Bride of Frankenstein [film|minutes] — Out now in print, open-access ebook, and special videographic/interactive editions!

My book on James Whale’s 1935 masterpiece Bride of Frankenstein, the inaugural volume in Lever Press’s new film|minutes book series, is out now! The book offers a minute-by-minute engagement with the film, combining close looking, philosophical speculation, historical contextualization, and a variety of other ekphrastic and experimental approaches. Print versions are available anywhere books are sold, including on the publisher’s website, where you can also read the open access version online or download a free EPUB or PDF.

In addition, I have programmed an interactive version of the book, which is available through the Stanford Digital Repository. There you can find apps for Mac and Windows that allow you to load a copy of the film and play it — on loop, one minute at a time — alongside the text corresponding to that minute. This way, you can immediately put my observations to the test and discover other details that complement or even challenge the claims that I make about the film. (Due to copyright restrictions, you will need to supply your own copy of the film, for example by ripping a copy from a DVD or Blu-ray [I used the 2018 Classic Monster Collection version], or grabbing a copy from Vimeo or the Internet Archive.)

The interactive book app is a dedicated “reader,” but if you’d prefer a different experience I have also prepared a packet of text files that can be loaded into the film|minutes video|graphic workstation — a platform for both reading and writing — that I released earlier this summer. The texts are available as a zip file at the same address as the interactive book (https://doi.org/10.25740/qj474bx8626), and the workstation is available here: https://doi.org/10.25740/xq320wq3449 (also for Windows and Mac). You’ll still need to supply your own copy of the film, but then you can load the text packet and not only read but also actively revise or rewrite my text, should you so choose.

Whether on paper, as an ebook, or in the interactive version, I hope you’ll check out this experimental book and revisit the film, which is iconic in its own right but perhaps newly relevant in an age of AI. Thanks to series editor Bernd Herzogenrath and senior acquiring editor Sean Guynes for their support of the project!

Introducing the film|minutes video|graphic workstation

I made a piece of software! It’s called the film|minutes video|graphic workstation. It’s pretty niche but cool if you want to take notes or do very close readings of films/videos.

It’s a combination video player and text editor designed for close analysis of moving image media in scholarly research and academic writing, including student writing about film and video. The app plays videos on loop, one minute at a time, while the user enters notes or commentary on that segment. Timestamped notes are saved as txt files that can be reloaded and revised at a later time. Accordingly, the app serves either as a writing tool (as an informal notebook, for example, or for composing more complex and detailed close readings of films) or as a platform for reading previously compiled texts. The app was designed to facilitate the type of writing featured in Lever Press’s film|minutes book series, from which the workstation takes its name. Each book in the series takes a minute-by-minute approach to an individual film and conducts a close analysis on this basis. I am also the author of the first book in the series, on the 1935 film Bride of Frankenstein. A demo featuring the video and corresponding text of the first ten minutes of the book/film is included in the app.
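For readers who want a concrete sense of the core mechanic, here is a minimal browser-style sketch of minute-looped playback. To be clear, this is illustrative only: the app itself may be built quite differently, and the note format shown in the final comment is my shorthand for the idea, not a published spec.

```js
// Illustrative sketch of minute-looped playback; the workstation's actual
// implementation and note format are not reproduced here.
const video = document.querySelector("video");
let minute = 0; // zero-indexed minute currently being read or annotated

video.addEventListener("timeupdate", () => {
  // When playback crosses the end of the current minute, loop back to its start.
  if (video.currentTime >= (minute + 1) * 60) {
    video.currentTime = minute * 60;
  }
});

function goToMinute(m) {
  // Jump to a new minute; the notes for that minute would be loaded alongside.
  minute = m;
  video.currentTime = m * 60;
  video.play();
}

// Timestamped notes could then be serialized as plain text, one entry per
// minute, e.g. "[minute 03] notes on this segment ..." (format assumed).
```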

In addition to its use as a tool for writing, the app promotes reflection on the relation of text and image in an age of digital media, where conventional scholarly writing now competes with “videographic criticism” (or video essays). Instituting what might be called a videographic method, though not necessarily producing a strictly videographic product, the film|minutes video|graphic workstation invites users to reflect on the status of the seen (video) and the written (graphic) more generally.

Accordingly, the app continues a series of videographic and critical making projects aimed at probing the edges of what is possible in video as a self-reflexive medium for theory — not just a vehicle or container medium for theorization but a platform that potentially creates new modes of looking and seeing. My video essay “Sight and Sound Conspire: Monstrous Audio-Vision in James Whale’s Frankenstein (1931)” is an intertext in more ways than one; while it is more or less technically conventional, in that it is a linear video with beginning, middle, and end, its formal structure of repetition and variation on a single scene anticipates the kind of close looking that the film|minutes workstation (and the film|minutes book series) promotes. (More obviously, the focus on Frankenstein films is of course another point of conversation.) My interactive video essay “Don’t Look Now: Paradoxes of Suture” subsequently challenged the linear form and experimented with spatializing and looping structures to foster close looking; the film|minutes video|graphic workstation inherits from it the focus on interactivity and the loop. My critical making project “The Algorithmic Nickelodeon” went even further outside the bounds of linear video, using data from an EEG headset to influence playback in real time and thus to open the focusing and capture of attention to scrutiny. While the film|minutes video|graphic workstation does not stage quite so radical a disruption of the video source, perhaps it opens similarly self-reflexive questions around the way, as Nietzsche put it, “our writing utensils work alongside us in the formation of our thoughts.” That is, the medium in and through which we write and think — whether that is pen and paper, word processor, or nonlinear digital editing platform — is not neutral with respect to the things we conceive and express. It is my hope that the film|minutes video|graphic workstation will help us to think through the transformation of writing about moving-image media in conjunction with ubiquitous digital video.

The app is open access (CC-BY-NC-SA) and available for Windows and Mac. You can download it here: https://doi.org/10.25740/xq320wq3449.

“where do old sounds go to die?” and “murnau model” — Critical Making Collaborative, May 16, 2025

The Critical Making Collaborative at Stanford invites you to our Spring event — an evening of sharing and discussion with two recipients of the Critical Making Award, Lemon Guo and J. Makary, who will present their ongoing work in music and performance on Friday, May 16 (6PM) at the CCRMA Stage (3rd floor). 

Lemon Guo — where do old sounds go to die?

Since 2017, I have been visiting the Kam villages in Guizhou, China, to work with the elder women singers. In my recent trips, I noticed that a sound that used to pulse through the village in all waking hours had disappeared. To make textiles for clothing, many women used to spend months at a time hammering cotton outdoors. I made several field recordings of this practice, when it seemed commonplace and quotidian. As cultural tourism transformed the village soundscape, I started to listen to these files on my hard drive. In this piece, the performers were only allowed to listen to these recordings in the first rehearsal. They were not told that the field recordings would be taken away from them. This performance is made from what they can remember.

J. Makary — murnau model

For murnau model, I used a machine learning model trained on still frames from F. W. Murnau’s 1924 silent film The Last Laugh/Der letzte Mann to generate new hypothetical images that emerge from its lengthy dream sequence. After subsequent interventions to guide image generation and alter their evolution, the images were “married” back to the film through photographic capture of individual frames of the physical filmstrip. By embedding these digital apparitions into the material substrate of celluloid, I intended to create a dialogue between analog and digital dreams, from film to data and back again. The resulting work becomes a reflection on cinema’s dual nature as both technological process and dream machine.

“Unit Operations” and “Alloy Resonator 0.2” — Critical Making Collaborative, March 10, 2025

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, Daniel Jackson and Kimia Koochakzadeh-Yazdi, who will present their ongoing work in music and performance—Monday, March 10 (4PM) at the CCRMA Stage (3rd floor). 

Alloy Resonator 0.2 – Kimia Koochakzadeh-Yazdi (Music Composition) 

Alloy Resonator, a hybrid wearable instrument, embraces the fragility and rigidity of the body as an expressive medium for playing electronic music. It experiments with physical thresholds and explores ways to position the performer’s body at the center of the performance. The goal is to have every movement, whether subtle or exaggerated, become an amplified sonic gesture.

The Unit Operations Here Are Highly Specific – Daniel Jackson (Theater and Performance Studies)

The Unit Operations Here Are Highly Specific is a devised, movement-based work exploring the relationship between text, performance, and reception by allowing each audience member to choose from and switch between soundtracks while they watch a choreographed performance. The work playfully confronts the limits of personalization in the context of collective experience while interrogating how meaning is generated and where meaning resides in complex performance-media environments.

“A Sexual History of the Internet” — Mindy Seu at Digital Aesthetics Workshop, Jan. 28, 2025

The Digital Aesthetics Workshop is proud to welcome Mindy Seu, who will present “A Sexual History of the Internet: Lecture Performance Beta Test” on Tuesday, January 28, 5-7pm PT. The event will take place in Wallenberg Hall 433A, where refreshments will be served. Below you will find the speaker’s bio and a brief abstract, as well as the poster for the event. We hope to see you there!

Zoom link for those unable to join in-person: https://tinyurl.com/3t6y9fd9

Abstract:

“A Sexual History of the Internet” is a revisionist techno-history that introduces device-mediated relationships, the computer mouse as vulva, and the sex workers who built the internet.

Bio:

Mindy Seu is a designer and technologist based in New York City and Los Angeles. Her expanded practice involves archival projects, techno-critical writing, performative lectures, and design commissions. Her latest writing surveys feminist economies, historical precursors of the metaverse, and the materiality of the internet. Mindy’s ongoing Cyberfeminism Index, which gathers three decades of online activism and net art, was commissioned by Rhizome, presented at the New Museum, and awarded the Graham Foundation Grant. She has lectured internationally at cultural institutions (Barbican Centre, New Museum), academic institutions (Columbia University, Central Saint Martins), and mainstream platforms (Pornhub, SSENSE, Google), and been a resident at MacDowell, Sitterwerk Foundation, Pioneer Works, and Internet Archive. Her design commissions and consultation include projects for the Serpentine Gallery, Canadian Centre for Architecture, and MIT Media Lab. Her work has been featured in Vanity Fair, Frieze, Dazed, Brooklyn Rail, i-D, and more. Mindy holds an M.Des. from Harvard’s Graduate School of Design and a B.A. in Design Media Arts from the University of California, Los Angeles. As an educator, Mindy was formerly an Assistant Professor at Rutgers Mason Gross School of the Arts and Critic at Yale School of Art. She is currently an Associate Professor at University of California, Los Angeles in the Department of Design Media Arts. 

This event is generously co-sponsored by the d.school, the Asian American Research Center at Stanford, and the Center for Spatial and Textual Analysis.

GlitchesAreLikeWildAnimalsInLatentSpace! CANINE! — Karin + Shane Denson

CANINE! (2024)

Karin & Shane Denson

Canine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making — including the mental “schematisms” theorized by Kant and now embodied in algorithmic stereotypes.

This is a screen recording of a real-time, generative/combinatory video.

Canine! is a sort of “forest of forking paths,” consisting of 64 branching and looping pathways, with alternate pathways displayed in tandem, along with generative text, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.
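To make the combinatorics a little more tangible, the branching structure can be pictured as a random walk over a graph of clips. The sketch below is a toy in JavaScript, with an invented three-clip graph standing in for Canine!’s actual 64 pathways (which run in Max, not in a browser):

```js
// Toy model of branching/looping playback; this three-node graph is invented
// and far smaller than the actual 64 pathways composited in Max.
const graph = {
  intro: ["howl", "run"],
  howl: ["run", "intro"],
  run: ["intro", "howl"],
};

function nextClip(current) {
  const branches = graph[current];
  return branches[Math.floor(Math.random() * branches.length)];
}

let clip = "intro";
for (let step = 0; step < 8; step++) {
  console.log("play:", clip); // the app also runs alternate pathways in tandem
  clip = nextClip(clip);
}
```

Even in this toy version, an eight-step walk already has hundreds of possible runs; with 64 pathways plus independently generated text and sound, the space of combinations is vast enough that an exact repeat, while possible in principle, will never be observed in practice.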

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation. Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a set of species-indeterminate canines, which Karin painted with acrylic on canvas. The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times in branching paths before looping back. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original canine painting into Audacity as raw data, interpreted with the GSM codec.

Onscreen and spoken text is generated by a Markov model trained on Shane’s article “Artificial Imagination” (https://ojs.library.ubc.ca/index.php/cinephile/article/view/199653).
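Since the generative text runs through the RiTa tools (see the credits below), a minimal sketch of the idea in RiTa’s JavaScript API might look like the following. The corpus file name, n-gram order, and output length here are assumptions for illustration, not the project’s actual settings:

```js
// Minimal RiTa.js Markov sketch (v2 API); file name, n-gram order, and
// sentence count are illustrative assumptions.
import { RiTa } from "rita";
import { readFileSync } from "node:fs";

const corpus = readFileSync("artificial-imagination.txt", "utf8"); // hypothetical file
const markov = RiTa.markov(3); // build a 3-gram model of the source text
markov.addText(corpus);

// Generate two sentences that recombine the article's phrasings.
console.log(markov.generate(2).join(" "));
```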

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

See also: Bovine! (https://vimeo.com/1013903632)

“Democratizing Vibrations” and “Opera Machine” — Critical Making Collaborative, Nov. 22, 2024

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, West Montgomery and Lloyd May, who will present their ongoing work in opera and haptic art—Friday, Nov. 22 (5PM) at the CCRMA Stage (3rd floor). 

Democratizing Vibrations – Lloyd May (Music Technology)

What would it mean to put vibration and touch at the center of a musical experience? What should devices used to create and experience vibration-based art (haptic instruments) look and feel like? These questions are at the core of the Musical Haptics project, which aims to co-design haptic instruments and artworks with D/deaf and hard-of-hearing artists.

Opera Machine – Westley Montgomery (TAPS)

Opera Machine is a work-in-process exploring music, measurement, and the sedimentation of culture in the bodies of performers. How does the cultural legacy of opera reverberate in the present day? How have the histories of voice-science, race “science,” and the gendering of the body co-produced pedagogies and styles of opera performance? What might it look like (sound like) to resist these histories? 

GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)

BOVINE! (2024)
Karin & Shane Denson

Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

Bovine-Video-Only.app removes the text and text-to-speech elements, featuring only generative audio and video, assembled randomly from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and lack of synchronization.

The full video was then cropped to produce five different strips. The audio on each was positioned accordingly in stereo space (i.e., the left-most strip’s audio is panned hard left, the next one over sits halfway between left and center, the middle one is dead center, and so on). The Max app chooses randomly from a set of predetermined start points where to play each strip of video, keeping the overall image more or less in sync.
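In browser terms, that panning-and-cueing logic might be sketched as follows. This is an illustration only: the piece actually runs as a Max app, and the cue times below are invented:

```js
// Sketch of the five-strip logic: pan each strip's audio by horizontal
// position and start each strip at a randomly chosen predetermined cue.
// (Illustrative; the actual patch is in Max and its cue list is unknown.)
const ctx = new AudioContext();
const CUES = [0, 20, 40, 60]; // predetermined start points in seconds (assumed)
const strips = [...document.querySelectorAll("video.strip")];

strips.forEach((video, i) => {
  // Map strip index to stereo position: -1 (hard left) through +1 (hard right).
  const pan = (i / (strips.length - 1)) * 2 - 1;
  ctx.createMediaElementSource(video)
    .connect(new StereoPannerNode(ctx, { pan }))
    .connect(ctx.destination);

  // Each strip starts at a random cue; because all strips draw on the same
  // predetermined list, the composite image stays more or less in sync.
  video.currentTime = CUES[Math.floor(Math.random() * CUES.length)];
  video.play();
});
```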

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

Don’t Look Now: From Flawed Experiment in Videographic Interactivity to New Open-Source Tool — Interactive Video Grid

Back in 2016, my experimental video essay “Don’t Look Now: Paradoxes of Suture” was published in the open access journal [in]Transition: Journal of Videographic Film and Moving Image Studies. This was an experiment with the limits of the “video essay” form, and a test to see if it could accommodate non-linear and interactive forms (produced with some very basic JavaScript and HTML/CSS so as to remain accessible and viewable even with updates to web infrastructures). Seeing as the interactive video essay was accepted and published in a peer-reviewed journal devoted, for the most part, to more conventional linear video essays, I considered the test passed. (However, since the journal has recently moved to a new hosting platform with the Open Library of Humanities, the interactive version is no longer included directly on the site, instead linking to my own self-hosted version here.)
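For those curious about the underlying technique, a grid of independently loopable clips really does require only a few lines of HTML and JavaScript. The following is an illustration of the general approach, not the essay’s actual code:

```html
<!-- Illustration of the general technique, not the essay's actual source. -->
<div class="grid">
  <video src="clip1.mp4" loop muted autoplay></video>
  <video src="clip2.mp4" loop muted autoplay></video>
  <!-- one element per clip, laid out side by side with CSS -->
</div>
<script>
  // Clicking a clip toggles it between playing and paused, so a single shot
  // can be held on its in- or out-point while the others keep looping.
  document.querySelectorAll(".grid video").forEach((v) => {
    v.addEventListener("click", () => (v.paused ? v.play() : v.pause()));
  });
</script>
```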

But even if the test was passed in terms of publication, the peer reviewers noted that the experiment was not altogether successful. Richard Misek called the piece “flawed,” though he qualified that “the work’s limitations are integral to its innovation.” The innovation, according to Misek, was to point to a new way of looking and doing close analysis:

“Perhaps one should see it not as a self-contained video essay but as a walk-through of an early beta of an app for viewing and manipulating video clips spatially. Imagine, for example… The user imports a scene. The app then splits it into clips and linearly spatializes it, perhaps like in Denson’s video. Each clip can then be individually played, looped, or paused. For example, the user can scroll to, and then pause, the in points or out points for each clip; or just play two particular shots simultaneously and pause everything else. Exactly how the user utilizes this app depends on the film and what they hope to discover from it. The very process of doing this, of course, may then also reveal previously unnoticed themes, patterns, or equivalences. Such a platform for analyzing moving images could hugely facilitate close formal analysis. I imagine a moving image version of Warburg’s Mnemosyne Atlas – a wall (/ screen) full of images, all existing in spatial relation with each other, and all in motion; a field of connections waiting to be made.

“In short, I think this video points towards new methods of conducting close analysis rather than new methods of presenting it. In my view, the ideal final product would not be a tidied-up video essay but an app. I realize that, technically and conceptually, this is asking a lot. It would be a very different, and much larger project. For now, though, this video provides an inspiring demo of what such an app could help film analysts achieve.”

Fast-forward eight years, to a short article on “Five Video Essays to Close Out May,” published on May 28, 2024 in Hyperallergic. Here, author Dan Schindel includes a note about an open-source and open-access tool, the Interactive Video Grid by Quan Zhang, that is inspired by my video essay and aims to realize a large part of the vision laid out by Misek in his review. As one of two demos of the tool, which allows users to create interactive grids of video clips for close and synchronous analysis, Zhang even includes “Don’t Look Now: Paradoxes of Suture. A Reconfiguration of Shane Denson’s Interactive Video Essay.”

I’m excited to experiment with this in classrooms, or as an aid in my own research. And I can imagine that additional development might point to further innovations in modes of looking. For example, what if we made the grid dynamic, such that clips could be dragged and rearranged? Or added and removed, resized, slowed down or sped up, maybe even superimposed on one another? Of course, many such transformations are already possible within nonlinear digital editing platforms — but there it is only the editing process that is nonlinear, while the operations imagined here become visible only in the outputted products, which are, alas, still linear videos.

Like my original video, Zhang’s new tool may also be “flawed” and in need of further development, but it succeeds in pointing to new ways of looking that go beyond linear forms of film and video and that take fuller advantage of the underlying nonlinearity of digital media. Digital media, I would suggest, are transforming our modes of visual attention in any case, so it seems only right that we should experiment self-reflexively and probe the limits of these new ways of looking.

Sunset with a Sky Background — Screening and discussion on AI Aesthetics with filmmaker J. Makary and respondent Caitlin Chan

On May 7, 2024 (4:30pm in McMurtry 115), the Critical Making Collaborative at Stanford is proud to present a screening of Sunset with a Sky Background, followed by a discussion on AI aesthetics with filmmaker J. Makary and respondent Caitlin Chan.

J. Louise Makary is a filmmaker and Ph.D. candidate in art history specializing in film studies and lens-based art practices. She is interested in using methodologies foundational to the study of cinema, such as psychoanalysis and semiotics, to interpret emergent visual forms of A.I. with film in mind. Her works have been exhibited at ICA Philadelphia, Bauhaus University, the Slought Foundation, Mana Contemporary (Jersey City and Chicago), Human Resources LA, Moore College, SPACES Cleveland, and the Spring/Break Art Show.

Caitlin Chan is a second year Ph.D. student in art history. She is currently working on a project that historicizes the aesthetics and phenomenology of A.I.-generated images by tracing a genealogy to early 19th-century photographic practices of making and viewership.