The new issue of Journal of Visual Culture just dropped, and I’m excited to see my article on AI art and aesthetics alongside work by Shannon Mattern, Bryan Norton, Jussi Parikka, and others. It looks like a great issue, and I’m looking forward to digging into it!
In this artist talk, Mark Amerika shares his creative process as a digital artist whose symbiotic relationship with both language and diffusion models informs his artistic and theoretical pursuits. Turning to his most recent book, My Life as an Artificial Creative Intelligence (Stanford University Press) and his just-released art project, Posthuman Cinema, Amerika will demonstrate, through personal narrative and theoretical asides, how different rhetorical uses of language can transform AI into a camera, a fiction writer, a poet and a philosopher.
Throughout the performance, Amerika will ask us to consider at what point a language artist becomes a language model and vice-versa. He will also question what new skills artists will have to develop as they co-evolve in a creative work environment where one must maintain a playful and dynamic relationship with the rapid technical maneuvering of the machinic Other. Will a more robust, intuitive yet interdependent relationship with AI models require artists to fine-tune what Amerika refers to as a cosmotechnical skill, one that is at once imaginative and indeterminate, playful and profound, grounded yet otherworldly in its aesthetic becoming? And how do we teach this skill at both the undergraduate and graduate level?
Borrowing from Beatnik poets and jazz musicians alike, Amerika suggests that a continuous call-and-response improvisational jam session with AI models may unlock personal insights that reveal how one’s own unconscious neural mechanism acts (performs) like a Meta Remix Engine. Engaging with other artists and writers who have tapped into their creative spontaneity as a primary research methodology, Amerika will discuss how digital artists can train themselves to intuitively select and defamiliarize datum for aesthetic effect. In so doing, Amerika suggests that this is how an artist connects with their own alien intelligence, a mediumistic sensibility that takes them out of their anthropocentric stronghold and invites them to reimagine what it means to be creative across the human-nonhuman spectrum.
—
Mark Amerika has exhibited his art in many venues including the Whitney Biennial, the Denver Art Museum, ZKM, the Walker Art Center, and the American Museum of the Moving Image. His solo exhibitions have appeared all over the world including at the Institute of Contemporary Arts in London, the University of Hawaii Art Galleries, the Marlborough Gallery in Barcelona and the Norwegian Embassy in Havana.
Amerika has had five early and/or mid-career retrospectives, including the first two Internet art retrospectives ever produced (Tokyo and London). In 2009-2010, the National Museum of Contemporary Art in Athens, Greece, featured Amerika’s comprehensive retrospective exhibition entitled UNREALTIME. The exhibition included his groundbreaking works of Internet art GRAMMATRON and FILMTEXT as well as his feature-length work of mobile cinema, Immobilité. In 2012, Amerika released his large-scale transmedia narrative, Museum of Glitch Aesthetics (MOGA), a multi-platform net artwork commissioned by Abandon Normal Devices in conjunction with the London 2012 Olympic and Paralympic Games. His public art project, Glitch TV, was featured at the opening of the “video towers” at Denver International Airport.
He is the author of thirteen books including My Life as an Artificial Creative Intelligence, the inaugural title in the “Sensing Media” series published in 2022 by Stanford University Press.
For our second Digital Aesthetics workshop of the year, please join us in welcoming Ge Wang, who will present on “Artful Design and Artificial Intelligence: What do we (really) want from AI?” on November 14, 5-7PM PT. The event will take place in the Stanford Humanities Center Watt Dining Room, where refreshments will be served. Below you will find an abstract and bio, as well as a poster for lightweight circulation. We look forward to seeing you there!
We all design, shaping the world around us in the form of tools, policies, education, and communities. In recent months we’ve seen the growing emergence of “astoundingly competent” AI tools, leading many of us to wonder how AI might soon impact our work, our lives, our world. How do we (want to) live and work with artificial intelligence? How might we artfully design tools and systems that balance machine automation and human interaction? And perhaps the most basic question of all, what do we (really) want from AI?
In this presentation, we will engage with these questions through an artful design lens, considering factors such as aesthetics, ethics, and accountability. As a case study, we will draw from the teaching of “Music and AI”, a critical-making course at Stanford, and explore the power of human creativity in using AI not as an “oracle”, but as a tool for creative expression.
Bio:
Ge Wang is an Associate Professor at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA). He researches the artful design of tools, toys, games, musical instruments, programming languages, expressive VR experiences, and interactive AI systems with humans in the loop. Ge is the architect of the ChucK audio programming language and the director of the Stanford Laptop Orchestra and the Stanford VR Design Lab. He is the co-founder of Smule and the designer of the Ocarina and Magic Piano apps for mobile phones. He is a Senior Fellow and an Associate Director of the Stanford Human-Centered AI Institute. A 2016 Guggenheim Fellow, Ge is the author of Artful Design: Technology in Search of the Sublime, a photo comic book about how we shape technology — and how technology shapes us.
On August 9, I will be speaking at the Long Night of Dreaming about the Future of Intelligence, which is taking place from dusk to dawn (8:44pm to 6:17am) at the Locarno Film Festival in Switzerland. I was asked to give a pithy statement of my contribution, and I settled on this:
“The future of intelligence depends crucially on the survival of unintelligibility.”
I’m still working out what this means, and if (and how) it’s even correct, but it’s prompted by some thoughts about the quantum leap forward that generative AI has recently made in terms of producing “intelligible” text (and other contents). Intelligibility is of course not the same as intelligence. Meanwhile, some of the most intelligent art using these new technologies works against the grain of “innovation,” foregrounding instead the unintelligible noise upon which these algorithms depend.
Here’s more info about the Long Night of Dreaming from their website:
On Wednesday, August 9th, “A Long Night of Dreaming about The Future of Intelligence” takes place at the Locarno Film Festival. From sunset to sunrise, Festival guests and visitors are invited to learn and dream together about possible futures of intelligence. Guided by researchers, artists, and cinephiles, visitors will address these questions: How do different forms of artificial and ecological intelligence manifest today? How might intelligence change in the future? And what is the role of cinema in shaping intelligence and rendering it visible? For the duration of an entire night, emerging forms of intelligence and their impact on society can be discussed and experienced in talks, workshops and performances.
The Long Night is a collaboration between the Locarno Film Festival, BaseCamp and the Università della Svizzera italiana (USI). It is supported by Stiftung Mercator Schweiz. The event is a successor of “The 24h long conversation on The Future of Attention” at Locarno75. As in last year’s edition, it is curated by researcher and futurist Rafael Dernbach.
“Our image of intelligence has become a feverish dream, lately. Generative Artificial Intelligence has opened up a world of wondrous pictures, sounds and texts. We are astonished, amused, or disturbed by these creations. And by their loud promises of a radically different future. At the same time, ecological critique and its images of devastated landscapes, anticipating forests and networking fungi challenge our concept of intelligent behavior: Have we neglected non-human forms of intelligence for too long? Might fungi be more capable of solving certain problems than human minds? Cinema, with its deep relation to dreams, has a strong influence on what we perceive as intelligence.”
During the Long Night, leading researchers in the field of cinema and intelligence such as Shane Denson (Stanford University) and Kevin B. Lee (USI) will share their research. Filmmakers such as Gala Hernández López will give insights into their work with emerging technologies. And designers such as Fabian Frey and Laura Papke will create intimate learning encounters to experience different forms of intelligence and explore its futures.
Inspired by cinema’s deep relation with dreams – but going far beyond the world of moving images – this night creates a unique opportunity for exchange about intelligence from artistic as well as scientific perspectives. It offers the chance for unexpected and memorable encounters with guests of the Locarno Film Festival. The exploratory journey starts on August 9th at sunset, 20:44 – and ends nine hours later on August 10th at sunrise, 6:17. Every full hour a new encounter, talk, performance or experience will take the lead, and visitors can join throughout the night.
The Long Night of Dreaming is open to anyone who is interested (free admission) and will take place at BaseCamp Istituto Sant’Eugenio (Via al Sasso 1, Locarno). The detailed program will soon be available here.
Jon Rafman, Counterfeit Poast, 2022. 4K stereo video, 23:39 min. MSPM JRA 49270. Film still.
Today I have a short piece in Outland on AI art and its embodied processing, as part of a larger suite of articles curated by Mark Amerika.
The essay offers a first taste of something I’m developing at the moment on the phenomenology of AI and the role of aesthetics as first philosophy in the contemporary world — or, AI aesthetics as the necessary foundation of AI ethics.
On Wednesday, June 14, I’ll be presenting a paper called “AI Art as Tactile-Specular Filter” at the Film-Philosophy Conference at Chapman University (in Orange County, CA). It’s the first time I’ll be attending the conference, which is usually held in the UK, and I am excited to get to know the association, meet up with old and new friends, and hear their papers. The abstract for my paper is below:
AI Art as Tactile-Specular Filter
Though often judged by its spectacular images, AI art needs also to be regarded in terms of its materiality, its temporality, and its relation to embodied existence. Towards this end, I look at AI art through the lens of corporeal phenomenology. Merleau-Ponty writes in Phenomenology of Perception: “Prior to stimuli and sensory contents, we must recognize a kind of inner diaphragm which determines, infinitely more than they do, what our reflexes and perceptions will be able to aim at in the world, the area of our possible operations, the scope of our life.” This bodily “diaphragm” serves like a filtering medium out of which stimulus and response, subject and object emerge in relation to one another. The diaphragm corresponds to Bergson’s conception of affect, which is similarly located prior to perception and action as “that part or aspect of the inside of our bodies which mix with the image of external bodies.” For Bergson, too, the living body is a kind of filter, sifting impulses in a microtemporal interval prior to subjective awareness. In his later work, Merleau-Ponty adds another dimension with his conception of a presubjective écart or fission between tactility and specularity, thus complexifying the filtering operation of the body. With both an interiorizing function (tactility) and an exteriorizing one (specularity), the écart lays the groundwork for what I call the “originary mediality” of flesh—and a view of mediality itself which is always tactile in addition to any visual, image-oriented aspects. This is especially important for visual art produced with AI, as the underlying algorithms operate similarly to the body’s internal diaphragm: as a microtemporal filter that sifts inputs and outputs without regard for any integral conception of subjective or objective form. 
At the level of its pre-imagistic processing, AI’s external diaphragm thus works on the body’s internal diaphragm and actively modulates the parameters of tactility-specularity, recoding the fleshly mediality from whence images arise as a secondary, precipitate form.