Post-Cinematic Bodies — pics from US book launch!

Book launch in the Terrace Room, Margaret Jacks Hall, Nov. 6, 2023
It was a packed house. Gave away 40 copies of my book!
A long overdue gathering of friends and colleagues. My last book came out in the deepest Zoom time of the pandemic, so I had to make up for it!
Discussing Rafael Lozano-Hemmer’s Pulse series, which I write about in Chapter 5.
Wonderful response from Annika Butler-Wall, focusing on the implications of my book for feminist studies of technology.
Hank Gerba introducing Scott Bukatman
Scott Bukatman’s excellent response focused on the continuities in spirit, and changes in the world, from his Terminal Identity (1993) to my Post-Cinematic Bodies (2023)
Bryan Norton
TRON
Pavle Levi
With Sepp Gumbrecht
With Grace Han
Cheers!

»Post-Cinematic Bodies« US Book Launch, November 6, 2023

On November 6, 2023 at 5:30 pm in the Margaret Jacks Hall Terrace Room (Building 460, Room 426), I will be presenting my new book Post-Cinematic Bodies (meson press, 2023), along with responses by Professor Scott Bukatman (Film & Media Studies, Stanford) and Dr. Annika Butler-Wall (Feminist, Gender & Sexuality Studies, Stanford MTL Ph.D. ’23).

Food and drinks will be provided. The first 40 attendees will receive a free copy of the book.

RSVPs are encouraged but not required. Please RSVP using the linked form by October 30th if you plan on attending.

About the book:

“How is human embodiment transformed in an age of algorithms? How do post-cinematic media technologies such as AI, VR, and robotics target and re-shape our bodies? Post-Cinematic Bodies grapples with these questions by attending both to mundane devices—such as smartphones, networked exercise machines, and smart watches and other wearables equipped with heartrate sensors—as well as to new media artworks that rework such equipment to reveal to us the ways that our fleshly existences are increasingly up for grabs. Through an equally philosophical and interpretive analysis, the book aims to develop a new aesthetics of embodied experience that is attuned to a new age of predictive technology and metabolic capitalism.”

Speaker and Respondents

Shane Denson is Associate Professor of Film and Media Studies in the Department of Art & Art History and, by Courtesy, of German Studies in the Division of Literatures, Cultures, and Languages and of Communication in Stanford’s Department of Communication. He is currently the Director of the PhD Program in Modern Thought and Literature, as well as Director of Graduate Studies in Art History. His research and teaching interests span a variety of media and historical periods, including phenomenological and media-philosophical approaches to film, digital media, comics, games, and serialized popular forms.

Scott Bukatman is a cultural theorist and Professor of Film and Media Studies at Stanford University. His research explores how such popular media as film, comics, and animation mediate between new technologies and human perceptual and bodily experience. His books include Terminal Identity: The Virtual Subject in Postmodern Science Fiction, one of the earliest book-length studies of cyberculture; a monograph on the film Blade Runner commissioned by the British Film Institute; and a collection of essays, Matters of Gravity: Special Effects and Supermen in the 20th Century. The Poetics of Slumberland: Animated Spirits and the Animating Spirit celebrates play, plasmatic possibility, and the life of images in cartoons, comics, and cinema.

Dr. Annika Butler-Wall is a Lecturer in the Program in Feminist, Gender, and Sexuality Studies. She is an interdisciplinary scholar and teacher working at the intersections of gender studies, media studies, and science and technology studies (STS). Her current research project explores how digital platforms are restructuring forms of historically feminized labor by examining platforms such as TaskRabbit, Yelp, and LinkedIn Learning. 

She holds a PhD in Modern Thought and Literature with a minor in Feminist, Gender, and Sexuality Studies from Stanford University and a BA in American Studies and Economics from Wesleyan University. Her research has been supported by the Clayman Institute for Gender Research and the Ric Weiland Graduate Research Fellowship, among others.

This event is sponsored by The Program in Modern Thought & Literature and Intermediations.

“The Negative Aesthetic of AI” — Luciana Parisi at Digital Aesthetics Workshop, Oct. 20, 2023

We are happy to announce the first Digital Aesthetics Workshop event of the year. Please join us in welcoming Luciana Parisi, who will present on “The negative aesthetic of AI” on October 20, 2-4PM PT. The event will take place in the Stanford Humanities Center Boardroom, where refreshments will be served. Below you will find the abstract and bio, as well as a poster for lightweight circulation. We look forward to seeing you there!

Zoom link for those unable to join in-person: tinyurl.com/3fx49d8p  

Abstract:

Does AI have an aesthetic form? Perhaps one can argue that this form may entail a thinking without self-reflectivity and yet one may still hang on a function of imagination for artificial thinking. But one cannot neglect that self-reflectivity precisely defines the procedure by which reason is supplemented by imagination – a generative function that grants the system not to fall into its dogmatic premises. From this standpoint, the function of imagination seems to collide with the role of noise and randomness in generative AI. The scope here however is not to establish a direct correlation between imagination and noise or even to argue for a machine aesthetics that carries through the project of aesthetic judgment in the moment of the sublime, namely the encounter with the incalculable and the unmeasurable. Instead of a prosthetic extension of aesthetic judgment, this talk discusses the negative function of imagination in Generative AI as an instance of a negation of aesthetics: a socio-techno-genic insurgence of radical alienness from where the recursive iteration of the sublime fails its task of rebooting the system.

Bio:

Luciana Parisi’s research lies at the intersection of continental philosophy, information sciences, digital media, and computational technologies. Her writings investigate technology in terms of ontological and epistemological possibilities of transformation in culture, aesthetics, and politics. Her publications address the techno-capitalist investment in artificial intelligence, biotechnology, and nanotechnology to explore challenges to conceptions of gender, race, and class. She has also written extensively within the fields of media philosophy and computational design in order to investigate metaphysical possibilities of instrumentality.

She was a member of the CCRU (Cybernetic Culture Research Unit) and is currently a co-founding member of CCB (Critical Computation Bureau), through which she co-ideated the symposium Recursive Colonialism, Artificial Intelligence and Speculative Computation (Dec 2020): https://recursivecolonialism.com/home/

In 2004, she published Abstract Sex: Philosophy, Biotechnology and the Mutations of Desire, which investigates capitalist experimentations in molecular strata of nature together with non-linear theories of endosymbiosis to argue against biocentric models of sexual reproduction and conceptions of sex and gender in terms of biodigital replications and non-filiative bacterial sex. Her book Contagious Architecture: Computation, Aesthetics and Space (2013) explores algorithms in architecture and interaction design as a symptom of global cultural transformation, where algorithmic computation represents a mode of thought that challenges dominant models of human cognition. Her current project, Automating Philosophy (forthcoming), explores the possibilities of a radical thought and critique which starts with inhuman intelligence and cosmocomputations. Part of this research has been published in the recent articles “Media Ontology and Transcendental Instrumentality” (2019) and “Xenopatterning: Predictive Intuition and Automated Imagination” (2019).

Correlative Counter-Capture in Contemporary Art @ ASAP/14

Rafael Lozano-Hemmer, “Pulse Index”, 2010. “Recorders”, Museum of Contemporary Art, Sydney, 2011. Photo by: Antimodular Research

On Saturday, September 30, at 9am Pacific Time, I’ll be giving the following talk at ASAP/14 (online):

Correlative Counter-Capture in Contemporary Art

Computational processing takes place at speeds and scales that are categorically outside human perception, but such invisible processing nevertheless exerts significant effects on the sensory and aesthetic—as well as political—qualities of artworks that employ digital and/or algorithmic media. To account for this apparent paradox, it is necessary to rethink aesthetics itself in the light of two evidently opposing tendencies of computation: on the one hand, the invisibility of processing means that computation is phenomenologically discorrelated (in that it breaks with what Husserl calls “the fundamental correlation between noesis and noema”); on the other hand, however, when directed toward the production of sensory contents, computation relies centrally on statistical correlations that reproduce normative constructs (including those of gender, race, and dis/ability). As discorrelative, computation exceeds the perceptual bond between subject and object, intervening directly in the prepersonal flesh; as correlative, computation not only expresses “algorithmic biases” but is capable of implanting them directly in the flesh. Through this double movement, a correlative capture of the body and its metabolism is made possible: a statistical norming of subjectivity and collectivity prior to perception and representation. Political structures are thus seeded in the realm of affect and aesthesis, but because the intervention takes place in the discorrelated matter of prepersonal embodiment, a margin of indeterminacy remains from which aesthetic and political resistance might be mounted (with no guarantee of success). In this presentation, I turn to contemporary artworks combining the algorithmic (including AI, VR, or robotics) with the metabolic (including heartrate sensors, ECGs, and EEGs) in order to imagine a practice of dis/correlative counter-capture. Works by the likes of Rashaad Newsome, Rafael Lozano-Hemmer, Hito Steyerl, or Teoma Naccarato and John MacCallum point to an aesthetic practice of counter-capture that does not elude but re-engineers mechanisms of control for potentially, but only ever locally, liberatory purposes.

The Film Comment Podcast: A Long Night of Dreaming about the Future of Intelligence, with Shane Denson

Audio of my talk on “The Future of Intelligence and/or the Future of Unintelligibility” (from the Locarno Film Festival’s Long Night of Dreaming about the Future of Intelligence, Aug. 9, 2023), followed by a conversation with Film Comment Co-Deputy Editor Devika Girish, is now online on the Film Comment Podcast.

I was dealing with jet lag, and it was a late evening event, so the talk gets off to a somewhat rocky start but fairly quickly settles into a groove. Devika Girish was a great interlocutor and asked very good questions.

Listen here, or see the Film Comment website for more info. Also available on Apple Podcasts, Google Podcasts, Spotify, RadioPublic, iHeart Radio, and Amazon Music.

The Future of Intelligence and/or the Future of Unintelligibility

The following is an excerpt of my talk from the Locarno Film Festival, at the “Long Night of Dreaming about the Future of Intelligence” held August 9-10, 2023. (Animated imagery created with ModelScope Text to Video Synthesis demo, using text drawn from the talk itself.)

Thanks to Rafael Dernbach for organizing and inviting me to this event, and thanks to Francesco de Biasi and Bernadette Klausberger for help with logistics and other support. And thanks to everyone for coming out tonight. I’m really excited to be here with you, especially during this twilight hour, in this in-between space, between day and night, like some hypnagogic state between waking existence and a sleep of dreams. 

For over a century this liminal space of twilight has been central to thinking and theorizing the cinema and its shadowy realm of dreams, but I think it can be equally useful for thinking about the media transitions we are experiencing today towards what I and others have called “post-cinematic” media.

In the context of a film festival, the very occurrence of which testifies to the continued persistence and liveliness of cinema today, I should clarify that “post-cinema,” as I use the term, is not meant to suggest that cinema is over or dead. Far from it.

Rather, the “post” in post-cinema points to a kind of futurity that is being integrated into, while also transforming and pointing beyond, what we have traditionally known as the cinema.

That is, a shift is taking place from cinema’s traditional modes of recording and reproducing past events to a new mode of predicting, anticipating, and shaping mediated futures—something that we see in everything from autocorrect on our phones to the use of AI to generate trippy, hypnagogic spectacles. 

Tonight, I hope to use this twilight time to prime us all for a long night of dreaming, and thinking, maybe even hallucinating, about the future of intelligence. The act of priming is an act that sets the stage and prepares for a future operation.

We prime water pumps, for example, removing air from the line to ensure adequate suction and thus delivery of water from the well. We also speak of priming engines, distributing oil throughout the system to avoid damage on initial startup. Interestingly, when we move from mechanical, hydraulic, and thermodynamic systems to cybernetic and more broadly informatic ones, this notion of priming tends to be replaced by the concept of “training,” as we say of AI models. 

Large language models like ChatGPT are not primed but instead trained. The implication seems to be that (dumb) mechanical systems are merely primed, prepared, for operations that are guided or supervised by human users, while AI models need to be trained, perhaps even educated, for an operation that is largely autonomous and intelligent. But let’s not forget that artificial intelligence was something of a marketing term proposed in the 1950s (Dartmouth workshop 1956) as an alternative to, and in order to compete with, the dominance of cybernetics. Clearly, AI won that competition, and so while we still speak of computer engineers, we don’t speak of computer engines in need of priming, but AI models in need of training.

In the following, I want to take a step back from this language, and the way of thinking that it primes us for, because it also encodes a specific way of imagining the future—and the future of intelligence in particular—that I think is still up for grabs, suspended in a sort of liminal twilight state. My point is not that these technologies are neutral, or that they might turn out not to affect human intelligence and agency. Rather, I am confident in saying that the future of intelligence will be significantly different from intelligence’s past. There will be some sort of redistribution, at least, if not a major transformation, in the intellective powers that exist and are exercised in the world.

I am reminded of Plato’s Phaedrus, in which Socrates recounts the mythical origins of writing, and the debate that it engendered: would this new inscription technology extend human memory by externalizing it and making it durable, or would it endanger memory by the same mechanisms? If people could write things down, so the worry went, they wouldn’t need to remember them anymore, and the exercise of active, conscious memory would suffer as a result.

Certainly, the advent of writing was a watershed moment in the history of human intelligence, and perhaps the advent of AI will be regarded similarly. This remains to be seen. In any case, we see the same polarizing tendencies: some think that AI will radically expand our powers of intelligence, while others worry that it will displace or eclipse our powers of reason. So there is a similar ambivalence, but we shouldn’t overlook a major difference, which is one of temporality (and this brings us back to the question of post-cinema).

Plato’s question concerned memory and memorial technologies (which include writing as well as, later, photography, phonography, and cinema), but if we ask the question of intelligence’s future today, it is complicated by the way that futurity itself is centrally at stake now: first by the predictive algorithms and future-oriented technologies of artificial intelligence, and second by the potential foreclosure of the future altogether via climate catastrophe, possible extinction, or worse—all of which is inextricably tied up with the technological developments that have led from hydraulic to thermodynamic to informatic systems. To ask about the future of intelligence is therefore to ask both about the futurity of intelligence as well as its environmentality—dimensions that I have sought to think together under the concept of post-cinema.

In my book Discorrelated Images, I assert that the nature of digital images does not correspond to the phenomenological assumptions on which classical film theory was built. While film theory is based on past film techniques that rely on human perception to relate frames across time, computer generated images use information to render images as moving themselves. Consequently, cinema studies and new media theory are no longer separable, and the aesthetic and epistemological consequences of shifts in technology must be accounted for in film theory and cinema studies more broadly as computer-generated images are now able to exceed our perceptual grasp. I introduce discorrelation as a conceptual tool for understanding not only the historical, but also the technological specificity, of how films are actively and affectively perceived as computer generated images. This is a kind of hyperinformatic cinema – with figures intended to overload and exceed our perceptual grasp, enabled by algorithmic processing. In the final chapter of the book, I consider how these computer-generated images have exceeded spectacle, and are arguably not for human perception at all, thus serving as harbingers of human extinction, and the end of the environment as defined by human habitation.

At least, that is what you will read about my book if you search for it on Google Books — above, I have only slightly modified and excerpted the summary included there. Note that this is not the summary provided by my publisher, even though that is what Google claims. I strongly suspect that a computer, and not a human, wrote this summary, as the text kind of makes sense and kind of doesn’t. I do indeed argue that computer-generated images exceed our perceptual grasp, that their real-time algorithmic rendering and futural or predictive dimensions put them, at least partially, outside of conscious awareness and turn them into potent vectors of subjectivation and environmental change. But I honestly don’t know what it means to say that “computer generated images use information to render images as moving themselves.” The repetition of the word images makes this sentence confusing, and the final words are ambiguous: are these supposed to be “self-moving images,” or images that, themselves, are moving? Or do the images use information to render themselves as moving images? What would that mean? The images are self-rendering? There is a multilayered problem of intelligibility involved, despite the fact that the sentences are more or less grammatical. The semantic ambiguities, the strange repetitions, and the feeling that something is just a little off are tell-tale signs of AI-generated text. This is not full-blown “hallucination,” as they say when AI just makes things up, but instead a kind of twilight recursion, suspended between the past of the training data and the future of the predictive algorithm, generating a sleepy, hypnagogic loop or a quasi-lucid, semi-waking dream.

But that summary was generated back in 2020. Since then, with GPT and other tools proliferating, we have witnessed a quantum leap in the intelligibility of AI-generated texts. In preparation for this event, I asked ChatGPT to summarize several of my books and to explain key concepts and arguments I made in them. The results were much better than what I just discussed (even though I was using the basic version that runs on GPT-3.5, not the more advanced GPT-4). Asked to explain my theory that “media are the originary correlators of experience,” the algorithm responded: “In this context, ‘originary’ suggests that media have been present from the beginning of human existence and have continuously evolved alongside our species. They are ingrained in our social and cultural development and have become integral to how we make sense of the world. […] Whether it’s language, art, writing, photography, film, or digital technology, each medium influences and organizes our experiences, constructing the framework through which we navigate reality.” That’s not bad, and it gets at what I’m calling the environmentality of media, including the medium or milieu of intelligence. 

We could say, then, that artificial intelligence technology functions as a contemporary manifestation of the correlation between media and human experience. ChatGPT represents a significant leap in the relationship between humans and technology in the digital age. As a sophisticated language model, it mediates human interaction with information, communication, and even decision-making processes. ChatGPT is an intermediary that transforms the way we engage with knowledge and ideas, redefining the boundaries between human and machine. As an AI language model, ChatGPT embodies the fusion of the organic (human intelligence) and the artificial (machine intelligence). This fusion blurs the lines between human creativity and algorithmic generation, questioning traditional notions of authorship and creativity.

The only problem, though, is that everything I just said about ChatGPT was written by ChatGPT, which I asked to speculate, on the basis of my books, about what I would say about large language model AIs. The impersonation is competent, and even clarifying, as it brings out implications of my previous thinking in transferring them to the new case. Significantly, it points the way out of the impasse I described earlier with reference to Plato’s Phaedrus: AI will neither simply empower nor simply imperil human intelligence but will fundamentally alter it by transforming the parameters or environment of its operation. 
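For the technically curious, requests like the ones I have been describing can also be made programmatically rather than through the chat interface. The following is only a minimal sketch using the OpenAI Python client; the model name and prompt wording are illustrative assumptions on my part, not a record of the actual exchange.

```python
# Minimal sketch of a programmatic query like the ones described above.
# The model name and prompt are illustrative assumptions, not the original exchange.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: the "basic version that runs on GPT-3.5"
    messages=[
        {"role": "user",
         "content": ("Explain the claim that media are the 'originary correlators "
                     "of experience,' and speculate about what this implies for "
                     "large language models.")},
    ],
)

print(response.choices[0].message.content)
```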

The fact that ChatGPT could write this text, and that I could speak it aloud without any noticeable change in my voice, style, or even logical commitments, offers a perfect example of the aforementioned leap in the intelligibility of AI-generated contents. Intelligibility is of course not the same as intelligence, but neither is it easily separated from the latter. Nevertheless, or as a result, I want to suggest that perhaps the future of intelligence depends on the survival of unintelligibility. This can be taken in several ways. Generally, noise is a necessary condition, substrate, or environment for the construction of signals, messages, or meanings. Without the background of unintelligible noise, meaningful figures could hardly stand out as, well, meaningful. In the face of the increasingly pervasive—and increasingly intelligible—AI-generated text circulating on the Internet (and beyond), Matthew Kirschenbaum speaks of a coming Textpocalypse: “a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting.” Kirschenbaum observes: “It is easy now to imagine a setup wherein machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word.” 
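Kirschenbaum’s scenario of machines prompting machines is, in a schematic sense, almost trivially easy to set in motion: extend the kind of call sketched above into a loop in which each output becomes the next prompt. The snippet below is only an illustration of that general mechanism, not code that Kirschenbaum (or anyone else) describes implementing.

```python
# Illustrative sketch of a self-perpetuating text loop ("gray goo, but for the
# written word"): each generated output is fed back in as the next prompt.
# Model name and seed prompt are assumptions; the loop is bounded here,
# whereas the scenario imagines no bound at all.
from openai import OpenAI

client = OpenAI()
text = "Write a short paragraph about the future of intelligence."

for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}],
    )
    text = response.choices[0].message.content
    print(text, "\n---")
```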

Universal intelligibility, in effect, threatens intelligence, for if all text (or other media) becomes intelligible, how can we intelligently discriminate, and how can we cultivate intelligence? Cultivating intelligence, in such an environment, requires exposure to the unintelligible, that which resists intellective parsing: e.g. glitches, errors, and aesthetic deformations that both expose the computational infrastructures and emphasize our own situated, embodied processing. Such embodied processing precedes and resists capture by higher-order cognition. The body is not dumb; it has its own sort of intelligence, which is modified by way of interfacing with computation and its own sub-intellective processes. In this interface, a microtemporal collision takes place that, for better or for worse, transforms us and our powers of intelligence. If I emphasize the necessary role of unintelligibility, this is not (just) about protecting ourselves from being duped and dumbed by all-too-intelligible deepfakes or the textpocalypse, for example; it is also about recognizing and caring for the grounds of intelligence itself, both now and in the future.

And here is where art comes in. Some of the most intelligent contemporary AI-powered or algorithmic art actively resists easy and uncomplicated intelligibility, instead foregrounding unintelligibility as a necessary substrate or condition of possibility. Remix artist Mark Amerika’s playful/philosophical use of GPT for self-exploration (or “critique” in a quasi-Kantian sense) is a good example; in his book My Life as an Artificial Creative Intelligence, coauthored with GPT-2, and in the larger project of which it is a part, language operates beyond intention as the algorithm learns from the artist, and the artist from the algorithm, increasingly blurring the lines that nevertheless reveal themselves as seamful cracks in digital systems and human subjectivities alike. The self-deconstructive performance reveals the machinic substrate even of human meaning. In her forthcoming book Malicious Deceivers, theater and performance scholar Ioana Jucan offers another example, focusing on the question of intelligibility in Annie Dorsen’s algorithmic theater. For example, Dorsen’s play A Piece of Work (2013) uses Markov chains and other algorithms to perform real-time analyses of Shakespeare’s Hamlet and generate a new play, different in each performance, in which human and machinic actors interface on stage, often getting caught in unintelligible loops that disrupt conventions of theatrical and psychological/semantic coherence alike.
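To give a concrete, if drastically simplified, sense of the kind of procedure at issue, the sketch below builds a word-level Markov chain from a source text and samples a new sequence from it. This is a generic illustration of Markov-chain text generation under assumptions of my own; it is not a reconstruction of the actual system used in A Piece of Work.

```python
# Generic word-level Markov chain text generator (an illustrative sketch only,
# not the algorithm actually used in Annie Dorsen's A Piece of Work).
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=20):
    """Walk the chain from a start word, sampling each next word at random."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:  # dead end: no observed successor
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

source = "to be or not to be that is the question whether tis nobler in the mind to suffer"
chain = build_chain(source)
print(generate(chain, "to"))
```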

Moreover, a wide range of AI-generated visual art foregrounds embodied encounters that point to the limits of intellect as the ground of intelligence: as I have discussed in a recent essay in Outland magazine, artists like Refik Anadol channel the sublime as a pre- or post-intellective mode of aesthetic encounter with algorithms; Ian Cheng uses AI to create self-playing videogame scenarios that, because they offer no point of interface, leave the viewer feeling sidelined and disoriented; and Jon Rafman channels cringe and the uncomfortable underbellies of online life, using diffusion models like Midjourney or DALL-E 2 to illustrate weird copypasta tales from the Internet that point us toward a visual equivalent of the gray goo that Kirschenbaum identifies with the textpocalypse. These examples are wildly divergent in their aesthetic and political concerns, but they are all united, I contend, in a shared understanding of environmentality and noise as a condition of perceptual engagement; they offer important challenges to intelligibility that might help us to navigate the future of intelligence.

To be continued…

A Long Night of Dreaming about the Future of Intelligence — at Locarno Film Festival, August 9, 2023

On August 9, I will be speaking at the Long Night of Dreaming about the Future of Intelligence, which is taking place from dusk to dawn (8:44pm to 6:17am) at the Locarno Film Festival in Switzerland. I was asked to give a pithy statement of my contribution, and I settled on this:

“The future of intelligence depends crucially on the survival of unintelligibility.”

I’m still working out what this means, and if (and how) it’s even correct, but it’s prompted by some thoughts about the quantum leap forward that generative AI has recently made in terms of producing “intelligible” text (and other contents). Intelligibility is of course not the same as intelligence. Meanwhile, some of the most intelligent art using these new technologies works against the grain of “innovation,” foregrounding instead the unintelligible noise upon which these algorithms depend.

Here’s more info about the Long Night of Dreaming from their website:

On Wednesday, August 9th, “A Long Night of Dreaming about The Future of Intelligence” takes place at the Locarno Film Festival. From sunset to sunrise, Festival guests and visitors are invited to learn and dream together about possible futures of intelligence. Guided by researchers, artists, and cinephiles, visitors will address these questions: how do different forms of artificial and ecological intelligence manifest today? How might intelligence change in the future? And what is the role of cinema in shaping intelligence and rendering it visible? For the duration of an entire night, emerging forms of intelligence and their impact on society can be discussed and experienced in talks, workshops, and performances.

The Long Night is a collaboration between the Locarno Film Festival, BaseCamp, and the Università della Svizzera italiana (USI). It is supported by Stiftung Mercator Schweiz. The event is a successor to “The 24h long conversation on The Future of Attention” at Locarno75. Like last year, it is curated by researcher and futurist Rafael Dernbach.

“Our image of intelligence has become a feverish dream lately. Generative Artificial Intelligence has opened up a world of wondrous pictures, sounds, and texts. We are astonished, amused, or disturbed by these creations. And by their loud promises of a radically different future. At the same time, ecological critique and its images of devastated landscapes, anticipating forests, and networking fungi challenge our concept of intelligent behavior: Have we neglected non-human forms of intelligence for too long? Might fungi be more capable of solving certain problems than human minds? Cinema, with its deep relation to dreams, has a strong influence on what we perceive as intelligence.”

During the Long Night, leading researchers in the field of cinema and intelligence such as Shane Denson (Stanford University) and Kevin B. Lee (USI) will share their research. Filmmakers such as Gala Hernández López will give insights into their work with emerging technologies. And designers such as Fabian Frey and Laura Papke will create intimate learning encounters to experience different forms of intelligence and explore its futures.

Inspired by cinema’s deep relation with dreams – but going far beyond the world of moving images – this night creates a unique opportunity for exchange about intelligence from artistic as well as scientific perspectives. It offers the chance for unexpected and memorable encounters with guests of the Locarno Film Festival. The exploratory journey starts on August 9th at sunset, 20:44 – and ends nine hours later on August 10th at sunrise, 6:17. Every full hour a new encounter, talk, performance or experience will take the lead, and visitors can join throughout the night.

The Long Night of Dreaming is open to anyone who is interested (free admission) and will take place at BaseCamp Istituto Sant’Eugenio (Via al Sasso 1, Locarno). The detailed program will be soon available here.

“AI Art as Tactile-Specular Filter” at Film-Philosophy Conference 2023

Artwork by Agnieszka Polska

On Wednesday, June 14, I’ll be presenting a paper called “AI Art as Tactile-Specular Filter” at the Film-Philosophy Conference at Chapman University (in Orange County, CA). It’s the first time I’ll be attending the conference, which is usually held in the UK, and I am excited to get to know the association, meet up with old and new friends, and hear their papers. The abstract for my paper is below:

AI Art as Tactile-Specular Filter

Though often judged by its spectacular images, AI art needs also to be regarded in terms of its materiality, its temporality, and its relation to embodied existence. Towards this end, I look at AI art through the lens of corporeal phenomenology. Merleau-Ponty writes in Phenomenology of Perception: “Prior to stimuli and sensory contents, we must recognize a kind of inner diaphragm which determines, infinitely more than they do, what our reflexes and perceptions will be able to aim at in the world, the area of our possible operations, the scope of our life.” This bodily “diaphragm” serves like a filtering medium out of which stimulus and response, subject and object emerge in relation to one another. The diaphragm corresponds to Bergson’s conception of affect, which is similarly located prior to perception and action as “that part or aspect of the inside of our bodies which mix with the image of external bodies.” For Bergson, too, the living body is a kind of filter, sifting impulses in a microtemporal interval prior to subjective awareness. In his later work, Merleau-Ponty adds another dimension with his conception of a presubjective écart or fission between tactility and specularity, thus complexifying the filtering operation of the body. With both an interiorizing function (tactility) and an exteriorizing one (specularity), the écart lays the groundwork for what I call the “originary mediality” of flesh—and a view of mediality itself which is always tactile in addition to any visual, image-oriented aspects. This is especially important for visual art produced with AI, as the underlying algorithms operate similarly to the body’s internal diaphragm: as a microtemporal filter that sifts inputs and outputs without regard for any integral conception of subjective or objective form. At the level of its pre-imagistic processing, AI’s external diaphragm thus works on the body’s internal diaphragm and actively modulates the parameters of tactility-specularity, recoding the fleshly mediality from whence images arise as a secondary, precipitate form.

“Acting Algorithms” — Mihaela Mihailova at Digital Aesthetics Workshop, May 26, 2023

Please join the Digital Aesthetics Workshop for our last event of the year with Mihaela Mihailova, who will present “Acting Algorithms: Animated Deepfake Performances in Contemporary Media” on Friday, May 26 from 1-3PM PT. The event will take place in McMurtry 007, where lunch will be served.

Zoom link for those unable to join in-person: https://tinyurl.com/3nnj32et

Abstract:

From the moving Mona Lisa deepfake created by the Moscow Samsung AI Center to the (re)animated life-size digital avatar of Salvador Dalí who greets visitors at the Dalí Museum in St. Petersburg, Florida, algorithmically generated performances are becoming integral to emerging media forms. As products of the collaboration between tech researchers, coders, animators, digital artists, and actors, as well as the labor of the (often deceased) makers of the original works, such amalgamated, multi-modal performances challenge existing definitions and conceptualizations of acting in/for the animated medium, along with notions of authorship and authenticity. Additionally, they expand the disciplinary reach and relevance of the subject, highlighting the necessity of thinking through contemporary digital animation’s relationship with data science and machine learning in order to better understand its ever-growing variety of non-filmic permutations.  

At the same time, fan-made deepfakes, ranging from movie mashups to unauthorized pornographic edits, further complicate the aesthetic and legal landscape of animated algorithmic performance. Juxtaposing these amateur, free, often low-quality videos and images with the commissioned, well-funded works described above reveals fascinating tensions between the institutional implementations of deepfakes and their popular use on online platforms.   

This talk explores the application, dissemination, and ontological status of deepfake performances across a variety of contexts, including digital artworks, viral videos, museum initiatives, and tech demos. It interrogates the practical, ideological, and ethical implications of their means of creation, including the digital “resurrection” of deceased individuals, the repurposing and rebranding of centuries-old artwork, and the superimposition of actors’ faces onto footage of other performers’ roles. It asks the following questions: who (or what) do these animated performances belong to? What new terms and approaches might be necessary in order to fully evaluate and account for their complicated relationship with existing theories of acting? How are they shaping – and being shaped by – contemporary animated media? 

Bio:

Mihaela Mihailova is Assistant Professor in the School of Cinema at San Francisco State University. She is the editor of Coraline: A Closer Look at Studio LAIKA’s Stop-Motion Witchcraft (Bloomsbury, 2021). She has published in Journal of Cinema and Media Studies, [in]Transition, Convergence: The International Journal of Research into New Media Technologies, Feminist Media Studies, animation: an interdisciplinary journal, and Studies in Russian and Soviet Cinema.  

This event is generously co-sponsored by the Stanford McCoy Family Center for Ethics in Society and Feminist, Gender, and Sexuality Studies. Image credit goes to The Zizi Show, A Deepfake Drag Cabaret.

“Selfie/Portrait” — Damon Young at Digital Aesthetics Workshop, May 9, 2023

Please join the Digital Aesthetics Workshop for our next event with Damon Young, who will present “Selfie/Portrait” on Tuesday, May 9 from 5-7PM PT. The event will take place, as usual, in the Stanford Humanities Center Board Room. Find an abstract and bio below, as well as a poster for lightweight circulation. Looking forward to seeing you there!

Zoom link for those unable to join in-person: tinyurl.com/aty2zf2a

Abstract:
The selfie, ubiquitous and quotidian, is a media form that has risen to preeminence in the digital environments of the twenty-first century. While it appears banal and superficial, I argue that it is for this very reason that the selfie indexes a larger transformation of subjectivity, akin to the kind Walter Benjamin, one hundred years ago, associated with the invention of early photography. The “self” of the selfie appears in a fundamental relationship to transformation (in both analog forms of body modification and surgery, and digital forms of filters and retouching) in the context of a circulation economy. These same terms indicate the axes along which the selfie refashions contemporary gender and sexuality. On the one hand, drawing unapologetically (if not always consciously) from the visual archive of pornography, the selfie advances the legacy of the “male gaze,” familiar from the history of narrative cinema. At the same time, it destabilizes both the gendered positions associated with that gaze, and their implicit heterosexuality. Moreover, unlike the cinema, the selfie is no longer a voyeuristic medium, but a medium of address. But to whom is it addressed? The answer to that question bears on the way it reconfigures the mediated field of contemporary sexuality. Often said to embody a contemporary “narcissism”—itself a feminized concept—the selfie also puts on view a subject who is no longer an individual but is becoming-generic. At the fault line between historically transforming media paradigms in their intersection with transforming paradigms of gender, sexuality, and desire, the selfie allows us to take the measure of the tensions between the common and the singular, the generic and the particular, as well as the self-satisfied and the anxious, that shape the contours of a contemporary cultural logic.

Bio:
Damon Young is co-appointed with the department of French and is affiliated with the Program in Critical Theory, the Berkeley Center for New Media, the Institute for European Studies, and the Designated Emphasis in Women, Gender & Sexuality. He teaches courses on art cinema, on sexuality and media, and on topics in digital media and film theory (including classical film theory, phenomenology, psychoanalysis, semiotics, feminist and queer theory). His first book, Making Sex Public and Other Cinematic Fantasies, was published in the Theory Q series at Duke University Press in 2018 and was shortlisted for the 2019 Association for the Study of the Arts of the Present Book Prize. That book examines fears and fantasies about women’s and queer sexualities—as figures for social emancipation or social collapse—in French and US cinema since the mid-1950s. It also considers the way cinema produces a new model of the private self as it challenges the novel’s dominance in the twentieth century. The latter idea is the basis for Professor Young’s current book project, After the Private Self, which explores the technical and technological ground of subjectivity across media forms, from the written diary through to big data, algorithms, and contemporary Internet cultures. Is the self of Rousseau’s Confessions the same as the self of the digital selfie? The inquiry integrates topics in digital media theory with “earlier” questions of language and subjectivity.