In this artist talk, Mark Amerika shares his creative process as a digital artist whose symbiotic relationship with both language and diffusion models informs his artistic and theoretical pursuits. Turning to his most recent book, My Life as an Artificial Creative Intelligence (Stanford University Press) and his just-released art project, Posthuman Cinema, Amerika will demonstrate, through personal narrative and theoretical asides, how different rhetorical uses of language can transform AI into a camera, a fiction writer, a poet and a philosopher.
Throughout the performance, Amerika will ask us to consider at what point a language artist becomes a language model and vice versa. He will also question what new skills artists will have to develop as they co-evolve in a creative work environment where one must maintain a playful and dynamic relationship with the rapid technical maneuvering of the machinic Other. Will a more robust, intuitive yet interdependent relationship with AI models require artists to fine-tune what Amerika refers to as a cosmotechnical skill, one that is at once imaginative and indeterminate, playful and profound, grounded yet otherworldly in its aesthetic becoming? And how do we teach this skill at both the undergraduate and graduate level?
Borrowing from Beatnik poets and jazz musicians alike, Amerika suggests that a continuous call-and-response improvisational jam session with AI models may unlock personal insights that reveal how one’s own unconscious neural mechanism acts (performs) like a Meta Remix Engine. Engaging with other artists and writers who have tapped into their creative spontaneity as a primary research methodology, Amerika will discuss how digital artists can train themselves to intuitively select and defamiliarize datum for aesthetic effect. In so doing, Amerika suggests that this is how an artist connects with their own alien intelligence, a mediumistic sensibility that takes them out of their anthropocentric stronghold and invites them to reimagine what it means to be creative across the human-nonhuman spectrum.
—
Mark Amerika has exhibited his art in many venues including the Whitney Biennial, the Denver Art Museum, ZKM, the Walker Art Center, and the American Museum of the Moving Image. His solo exhibitions have appeared all over the world including at the Institute of Contemporary Arts in London, the University of Hawaii Art Galleries, the Marlborough Gallery in Barcelona and the Norwegian Embassy in Havana.
Amerika has had five early and/or mid-career retrospectives including the first two Internet art retrospectives ever produced (Tokyo and London). In 2009-2010, the National Museum of Contemporary Art in Athens, Greece, featured Amerika’s comprehensive retrospective exhibition entitled UNREALTIME. The exhibition included his groundbreaking works of Internet art GRAMMATRON and FILMTEXT as well as his feature-length work of mobile cinema, Immobilité. In 2012, Amerika released his large-scale transmedia narrative, Museum of Glitch Aesthetics (MOGA), a multi-platform net artwork commissioned by Abandon Normal Devices in conjunction with the London 2012 Olympic and Paralympic Games. His public art project, Glitch TV, was featured at the opening of the “video towers” at Denver International Airport.
He is the author of thirteen books including My Life as an Artificial Creative Intelligence, the inaugural title in the “Sensing Media” series published in 2022 by Stanford University Press.
Rafael Lozano-Hemmer, “Pulse Index”, 2010. “Recorders”, Museum of Contemporary Art, Sydney, 2011. Photo by: Antimodular Research
On Saturday, September 30, at 9am Pacific Time, I’ll be giving the following talk at ASAP/14 (online):
Correlative Counter-Capture in Contemporary Art
Computational processing takes place at speeds and scales that are categorically outside human perception, but such invisible processing nevertheless exerts significant effects on the sensory and aesthetic—as well as political—qualities of artworks that employ digital and/or algorithmic media. To account for this apparent paradox, it is necessary to rethink aesthetics itself in the light of two evidently opposing tendencies of computation: on the one hand, the invisibility of processing means that computation is phenomenologically discorrelated (in that it breaks with what Husserl calls “the fundamental correlation between noesis and noema”); on the other hand, however, when directed toward the production of sensory contents, computation relies centrally on statistical correlations that reproduce normative constructs (including those of gender, race, and dis/ability). As discorrelative, computation exceeds the perceptual bond between subject and object, intervening directly in the prepersonal flesh; as correlative, computation not only expresses “algorithmic biases” but is capable of implanting them directly in the flesh. Through this double movement, a correlative capture of the body and its metabolism is made possible: a statistical norming of subjectivity and collectivity prior to perception and representation. Political structures are thus seeded in the realm of affect and aesthesis, but because the intervention takes place in the discorrelated matter of prepersonal embodiment, a margin of indeterminacy remains from which aesthetic and political resistance might be mounted (with no guarantee of success). In this presentation, I turn to contemporary artworks combining the algorithmic (including AI, VR, or robotics) with the metabolic (including heart-rate sensors, ECGs, and EEGs) in order to imagine a practice of dis/correlative counter-capture.
Works by the likes of Rashaad Newsome, Rafael Lozano-Hemmer, Hito Steyerl, or Teoma Naccarato and John MacCallum point to an aesthetic practice of counter-capture that does not elude but re-engineers mechanisms of control for potentially, but only ever locally, liberatory purposes.
I’ve been traveling a lot outside of California this summer, but whenever I get the chance I like to spend time up north in Mendocino or Fort Bragg, where my wife Karin is part of the artist collective at Edgewater Gallery.
Earlier in the summer, we observed tons of California brown pelicans and common murres (which look like penguins) camped out on some small offshore islands. The assembly has attracted a lot of attention — from locals, tourists, artists, and scientists. The local newspaper, The Mendocino Voice, just put out a long piece on the birds and the possible reasons for their convergence there, and they quoted Karin and featured a glitch collage that she did a while back.
Karin has been photographing, filming, glitching, and painting pelicans and other California wildlife for several years now. Check out more of her work at karindenson.com.
The following is an excerpt of my talk from the Locarno Film Festival, at the “Long Night of Dreaming about the Future of Intelligence” held August 9-10, 2023. (Animated imagery created with ModelScope Text to Video Synthesis demo, using text drawn from the talk itself.)
Thanks to Rafael Dernbach for organizing and inviting me to this event, and thanks to Francesco de Biasi and Bernadette Klausberger for help with logistics and other support. And thanks to everyone for coming out tonight. I’m really excited to be here with you, especially during this twilight hour, in this in-between space, between day and night, like some hypnagogic state between waking existence and a sleep of dreams.
For over a century this liminal space of twilight has been central to thinking and theorizing the cinema and its shadowy realm of dreams, but I think it can be equally useful for thinking about the media transitions we are experiencing today towards what I and others have called “post-cinematic” media.
In the context of a film festival, the very occurrence of which testifies to the continued persistence and liveliness of cinema today, I should clarify that “post-cinema,” as I use the term, is not meant to suggest that cinema is over or dead. Far from it.
Rather, the “post” in post-cinema points to a kind of futurity that is being integrated into, while also transforming and pointing beyond, what we have traditionally known as the cinema.
That is, a shift is taking place from cinema’s traditional modes of recording and reproducing past events to a new mode of predicting, anticipating, and shaping mediated futures—something that we see in everything from autocorrect on our phones to the use of AI to generate trippy, hypnagogic spectacles.
Tonight, I hope to use this twilight time to prime us all for a long night of dreaming, and thinking, maybe even hallucinating, about the future of intelligence. The act of priming is an act that sets the stage and prepares for a future operation.
We prime water pumps, for example, removing air from the line to ensure adequate suction and thus delivery of water from the well. We also speak of priming engines, distributing oil throughout the system to avoid damage on initial startup. Interestingly, when we move from mechanical, hydraulic, and thermodynamic systems to cybernetic and more broadly informatic ones, this notion of priming tends to be replaced by the concept of “training,” as we say of AI models.
Large language models like ChatGPT are not primed but instead trained. The implication seems to be that (dumb) mechanical systems are merely primed, prepared, for operations that are guided or supervised by human users, while AI models need to be trained, perhaps even educated, for an operation that is largely autonomous and intelligent. But let’s not forget that artificial intelligence was something of a marketing term proposed in the 1950s (Dartmouth workshop 1956) as an alternative to, and in order to compete with, the dominance of cybernetics. Clearly, AI won that competition, and so while we still speak of computer engineers, we don’t speak of computer engines in need of priming, but AI models in need of training.
In the following, I want to take a step back from this language, and the way of thinking that it primes us for, because it encodes also a specific way of imagining the future—and the future of intelligence in particular—that I think is still up for grabs, suspended in a sort of liminal twilight state. My point is not that these technologies are neutral, or that they might turn out not to affect human intelligence and agency. Rather, I am confident in saying that the future of intelligence will be significantly different from intelligence’s past. There will be some sort of redistribution, at least, if not a major transformation, in the intellective powers that exist and are exercised in the world.
I am reminded of Plato’s Phaedrus, in which Socrates recounts the mythical origins of writing, and the debate that it engendered: would this new inscription technology extend human memory by externalizing it and making it durable, or would it endanger memory by the same mechanisms? If people could write things down, so the worry went, they wouldn’t need to remember them anymore, and the exercise of active, conscious memory would suffer as a result.
Certainly, the advent of writing was a watershed moment in the history of human intelligence, and perhaps the advent of AI will be regarded similarly. This remains to be seen. In any case, we see the same polarizing tendencies: some think that AI will radically expand our powers of intelligence, while others worry that it will displace or eclipse our powers of reason. So there is a similar ambivalence, but we shouldn’t overlook a major difference, which is one of temporality (and this brings us back to the question of post-cinema).
Plato’s question concerned memory and memorial technologies (which includes writing as well as, later, photography, phonography, and cinema), but if we ask the question of intelligence’s future today, it is complicated by the way that futurity itself is centrally at stake now: first by the predictive algorithms and future-oriented technologies of artificial intelligence, and second by the potential foreclosure of the future altogether via climate catastrophe, possible extinction, or worse—all of which is inextricably tied up with the technological developments that have led from hydraulic to thermodynamic to informatic systems. To ask about the future of intelligence is therefore to ask both about the futurity of intelligence as well as its environmentality—dimensions that I have sought to think together under the concept of post-cinema.
In my book Discorrelated Images, I assert that the nature of digital images does not correspond to the phenomenological assumptions on which classical film theory was built. While film theory is based on past film techniques that rely on human perception to relate frames across time, computer generated images use information to render images as moving themselves. Consequently, cinema studies and new media theory are no longer separable, and the aesthetic and epistemological consequences of shifts in technology must be accounted for in film theory and cinema studies more broadly as computer-generated images are now able to exceed our perceptual grasp. I introduce discorrelation as a conceptual tool for understanding not only the historical, but also the technological specificity, of how films are actively and affectively perceived as computer generated images. This is a kind of hyperinformatic cinema – with figures intended to overload and exceed our perceptual grasp, enabled by algorithmic processing. In the final chapter of the book, I consider how these computer-generated images have exceeded spectacle, and are arguably not for human perception at all, thus serving as harbingers of human extinction, and the end of the environment as defined by human habitation.
At least, that is what you will read about my book if you search for it on Google Books — above, I have only slightly modified and excerpted the summary included there. Note that this is not the summary provided by my publisher, even though that is what Google claims. I strongly suspect that a computer, and not a human, wrote this summary, as the text kind of makes sense and kind of doesn’t. I do indeed argue that computer-generated images exceed our perceptual grasp, that their real-time algorithmic rendering and futural or predictive dimensions put them, at least partially, outside of conscious awareness and turn them into potent vectors of subjectivation and environmental change. But I honestly don’t know what it means to say that “computer generated images use information to render images as moving themselves.” The repetition of the word images makes this sentence confusing, and the final words are ambiguous: are these supposed to be “self-moving images,” or images that, themselves, are moving? Or do the images use information to render themselves as moving images? What would that mean? The images are self-rendering? There is a multilayered problem of intelligibility involved, despite the fact that the sentences are more or less grammatical. The semantic ambiguities, the strange repetitions, and the feeling that something is just a little off are tell-tale signs of AI-generated text. This is not full-blown “hallucination,” as they say when AI just makes things up, but instead a kind of twilight recursion, suspended between the past of the training data and the future of the predictive algorithm, generating a sleepy, hypnagogic loop or a quasi-lucid, semi-waking dream.
But that summary was generated back in 2020. Since then, with GPT and other tools proliferating, we have witnessed a quantum leap in the intelligibility of AI-generated texts. In preparation for this event, I asked ChatGPT to summarize several of my books and to explain key concepts and arguments I made in them. The results were much better than what I just discussed (even though I was using the basic version that runs on GPT-3.5, not the more advanced GPT-4). Asked to explain my theory that “media are the originary correlators of experience,” the algorithm responded: “In this context, ‘originary’ suggests that media have been present from the beginning of human existence and have continuously evolved alongside our species. They are ingrained in our social and cultural development and have become integral to how we make sense of the world. […] Whether it’s language, art, writing, photography, film, or digital technology, each medium influences and organizes our experiences, constructing the framework through which we navigate reality.” That’s not bad, and it gets at what I’m calling the environmentality of media, including the medium or milieu of intelligence.
We could say, then, that artificial intelligence technology functions as a contemporary manifestation of the correlation between media and human experience. ChatGPT represents a significant leap in the relationship between humans and technology in the digital age. As a sophisticated language model, it mediates human interaction with information, communication, and even decision-making processes. ChatGPT is an intermediary that transforms the way we engage with knowledge and ideas, redefining the boundaries between human and machine. As an AI language model, ChatGPT embodies the fusion of the organic (human intelligence) and the artificial (machine intelligence). This fusion blurs the lines between human creativity and algorithmic generation, questioning traditional notions of authorship and creativity.
The only problem, though, is that everything I just said about ChatGPT was written by ChatGPT, which I asked to speculate, on the basis of my books, about what I would say about large language model AIs. The impersonation is competent, and even clarifying, as it brings out implications of my previous thinking in transferring them to the new case. Significantly, it points the way out of the impasse I described earlier with reference to Plato’s Phaedrus: AI will neither simply empower nor simply imperil human intelligence but will fundamentally alter it by transforming the parameters or environment of its operation.
The fact that ChatGPT could write this text, and that I could speak it aloud without any noticeable change in my voice, style, or even logical commitments, offers a perfect example of the aforementioned leap in the intelligibility of AI-generated contents. Intelligibility is of course not the same as intelligence, but neither is it easily separated from the latter. Nevertheless, or as a result, I want to suggest that perhaps the future of intelligence depends on the survival of unintelligibility. This can be taken in several ways. Generally, noise is a necessary condition, substrate, or environment for the construction of signals, messages, or meanings. Without the background of unintelligible noise, meaningful figures could hardly stand out as, well, meaningful. In the face of the increasingly pervasive—and increasingly intelligible—AI-generated text circulating on the Internet (and beyond), Matthew Kirschenbaum speaks of a coming Textpocalypse: “a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting.” Kirschenbaum observes: “It is easy now to imagine a setup wherein machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word.”
Universal intelligibility, in effect, threatens intelligence, for if all text (or other media) becomes intelligible, how can we intelligently discriminate, and how can we cultivate intelligence? Cultivating intelligence, in such an environment, requires exposure to the unintelligible, that which resists intellective parsing: e.g. glitches, errors, and aesthetic deformations that both expose the computational infrastructures and emphasize our own situated, embodied processing. Such embodied processing precedes and resists capture by higher-order cognition. The body is not dumb; it has its own sort of intelligence, which is modified by way of interfacing with computation and its own sub-intellective processes. In this interface, a microtemporal collision takes place that, for better or for worse, transforms us and our powers of intelligence. If I emphasize the necessary role of unintelligibility, this is not (just) about protecting ourselves from being duped and dumbed by all-too-intelligible deepfakes or the textpocalypse, for example; it is also about recognizing and caring for the grounds of intelligence itself, both now and in the future.
And here is where art comes in. Some of the most intelligent contemporary AI-powered or algorithmic art actively resists easy and uncomplicated intelligibility, instead foregrounding unintelligibility as a necessary substrate or condition of possibility. Remix artist Mark Amerika’s playful/philosophical use of GPT for self-exploration (or “critique” in a quasi-Kantian sense) is a good example; in his book My Life as an Artificial Creative Intelligence, coauthored with GPT-2, and in the larger project of which it is a part, language operates beyond intention as the algorithm learns from the artist, and the artist from the algorithm, increasingly blurring the lines that nevertheless reveal themselves as seamful cracks in digital systems and human subjectivities alike. The self-deconstructive performance reveals the machinic substrate even of human meaning. In her forthcoming book Malicious Deceivers, theater and performance scholar Ioana Jucan offers another example, focusing on the question of intelligibility in Annie Dorsen’s algorithmic theater. For example, Dorsen’s play A Piece of Work (2013) uses Markov chains and other algorithms to perform real-time analyses of Shakespeare’s Hamlet and generate a new play, different in each performance, in which human and machinic actors interface on stage, often getting caught in unintelligible loops that disrupt conventions of theatrical and psychological/semantic coherence alike.
Moreover, a wide range of AI-generated visual art foregrounds embodied encounters that point to the limits of intellect as the ground of intelligence: as I have discussed in a recent essay in Outland magazine, artists like Refik Anadol channel the sublime as a pre- or post-intellective mode of aesthetic encounter with algorithms; Ian Cheng uses AI to create self-playing videogame scenarios that, because they offer no point of interface, leave the viewer feeling sidelined and disoriented; and Jon Rafman channels cringe and the uncomfortable underbellies of online life, using diffusion models like Midjourney or DALL-E 2 to illustrate weird copypasta tales from the Internet that point us toward a visual equivalent of the gray goo that Kirschenbaum identifies with the textpocalypse. These examples are wildly divergent in their aesthetic and political concerns, but they are all united, I contend, in a shared understanding of environmentality and noise as a condition of perceptual engagement; they offer important challenges to intelligibility that might help us to navigate the future of intelligence.
Jon Rafman, Counterfeit Poast, 2022. 4K stereo video, 23:39 min. MSPM JRA 49270. Film still.
Today I have a short piece in Outland on AI art and its embodied processing, as part of a larger suite of articles curated by Mark Amerika.
The essay offers a first taste of something I’m developing at the moment on the phenomenology of AI and the role of aesthetics as first philosophy in the contemporary world — or, AI aesthetics as the necessary foundation of AI ethics.
This year’s Berkeley-Stanford Symposium will again take place at SFMOMA on April 28, 2023. This is always an exciting event, open to graduate student presenters working in art history, visual culture, film and media studies, and interdisciplinary spaces. This year’s theme is “In-Between: Art and Cultural Practices from Here.”
Please see the CFP above. Those interested should submit an abstract no longer than 300 words and a brief bio by February 28th to berkeleystanford2023@gmail.com.
We’re excited to announce our next event at the Digital Aesthetics Workshop, a talk by writer and curator Legacy Russell, author of Glitch Feminism, which will take place next Thursday, May 20th at 10 am Pacific and is co-sponsored by the Clayman Institute for Gender Research.
Join writer and curator Legacy Russell in a discussion about the ways in which artists engaging the digital are building new models for what monuments can be in a networked era of mechanical reproduction.
Legacy Russell is a curator and writer. Born and raised in New York City, she is the Associate Curator of Exhibitions at The Studio Museum in Harlem. Russell holds an MRes with Distinction in Art History from Goldsmiths, University of London with a focus in Visual Culture. Her academic, curatorial, and creative work focuses on gender, performance, digital selfdom, internet idolatry, and new media ritual. Russell’s written work, interviews, and essays have been published internationally. She is the recipient of the Thoma Foundation 2019 Arts Writing Award in Digital Art, a 2020 Rauschenberg Residency Fellowship, and a 2021 Creative Capital Award. Her first book, Glitch Feminism: A Manifesto (2020), is published by Verso Books. Her second book, BLACK MEME, is forthcoming via Verso Books.
Sponsored by the Stanford Humanities Center. Made possible by support from Linda Randall Meier, the Mellon Foundation, and the National Endowment for the Humanities. Co-sponsored by the Michelle R. Clayman Institute for Gender Research.
The Digital Aesthetics Workshop is excited to announce our second event of the Spring quarter: on May 19th, at 5 PM, we’ll host a workshop with Kris Cohen, via Zoom. This workshop has been co-organized with Stanford’s Critical Practices Unit (CPU), which you can (and should!) follow for future CPU events here. Please email Jeff Nagy (jsnagy at stanford dot edu) by May 18th for the Zoom link.
Professor Cohen will discuss new research from his manuscript-in-progress, Bit Field Black. Bit Field Black accounts for how a group of Black artists working from the Sixties to the present were addressing, in ways both belied and surprisingly revealed by the language of abstraction and conceptualism, nascent configurations of the computer screen and the forms of labor and personhood associated with those configurations.
Professor Cohen is Associate Professor of Art and Humanities at Reed College. He works on the relationship between art, economy, and media technologies, focusing especially on the aesthetics of collective life. His book, Never Alone, Except for Now (Duke University Press, 2017), addresses these concerns in the context of electronic networks.
A poster with all the crucial information is attached for lightweight recirculation.
Thank you to all of the very many of you who logged on for our first Spring workshop with Sarah T. Roberts. We hope you will also join us on the 19th, and keep an eye out for an announcement of our third Spring workshop, with Xiaochang Li, coming up on May 26th.
Last evening I had the pleasure of discussing Jim Campbell’s work with him at the Anderson Collection at Stanford, where he has a wonderful exhibition of LED-based works up right now. It was a far-ranging discussion, in a packed gallery, and great fun all around. Here are my opening remarks:
Before we start our conversation, I have the honor of offering some framing thoughts about Jim Campbell’s work. I want to use this opportunity to put that work into dialogue with some of my own interests and concerns as a theorist of the intersection between computational and moving-image media. I am concerned, in other words, with the historical and phenomenological encounter between the invisible processing of digital information and the visible forms that result from it—and it is precisely this encounter that Jim’s LED-based artworks enact or perform in a variety of thought-provokingly deformative ways. This is to say that his work, by means of occluding, blocking, and de-focusing our view, ironically makes perceptible the very mismatch between perception and computational processing that lies at the heart of digital video as it circulates online, on our smartphones, on DVDs and BluRays, on digital cable and satellite TV, and in the digital projection systems of contemporary movie theaters. In all of those contexts, digital processing remains resolutely invisible to perception (except, that is, through exceptional moments of glitching, buffering, and the like); but, those exceptional and denigrated moments aside, the perceptual “content” of digital video is privileged, thus blinding us to the ways that the medial form of video’s computational processing is changing the very parameters of our embodied perception, or the ways that, as Canadian media theorist Marshall McLuhan put it, our “sensory ratios” are being reformed by our encounter with a new media environment.
By re-valorizing the exceptional, or that which disrupts or impedes the easy transmission of visual “content,” Jim’s work offers an oblique view of the hidden parameters of this new environment; he makes what I call the “discorrelation” between our perception and its infrastructure perceptible—if only in a necessarily incomplete and volatile form. And the volatility of these operations is key: Jim’s works keep our eyes and our bodies moving, making us move now closer and then farther away, causing us to squint and then relax our focus, in order to catch a glimpse of something figural, recognizable, the so-called “content” of the moving images. Certainly, this content is not irrelevant, but it is hardly the ultimate telos or desideratum towards which the work directs our attention. The works are not simple puzzles that are “solved” once we identify their contents. Rather, the incessant oscillation between perception and non-perception, between seeing and not seeing, would seem to be closer to the point, as it is this oscillation that keeps everything at play, unsettling basic categories and forms. We shift our focus between individual LEDs, the screen or wall upon which they reflect, and an indirect, sometimes volumetric illumination of bodies or objects in motion. Our perception doesn’t come to rest upon a stable object or meaning, and this instability infects the broader conceptual context within which our perception is situated: Jim’s work upsets and makes us question so many basic distinctions—for example, between video art and sculpture, between art and engineering, between material substrates and perceptual forms, between perception and imagination. Through his destabilization of perception, he re-opens also the gap between art and technology, a gap created around the time of the industrial revolution, when thinkers like Immanuel Kant helped engineer a split between the aesthetic and the technical, or between the fine arts and the applied arts. 
Earlier, both the Greek term techne and the Latin ars referred indiscriminately to both arts and technologies. Now, the poets were to work with words while the engineers worked on steam engines; artists concerned themselves with the non-utilitarian forms of aesthetic experience while technologists made the machines that kept the factories running. However, in the space cleared between art and technology, a third thing emerged, a common ground for aesthetic and technological production alike: namely, media in its modern sense. A medium in this sense is not reducible to its “content” in a narrow way; rather, it is something that straddles perceptual form and infrastructure. Take, for example, the way the Sunday comics capitalized on innovations in four-color printing processes, or the way cinema responded to synchronized sound with new genres like the musical or the horror film, which involves its spectator through an offscreen space of screams and bumps in the night. It is in this sense that McLuhan proclaimed that “the medium is the message”—a claim that he explained with the example of the light bulb, a content-less medium, the message of which is the electrification of the world and the resulting transformation of agency, perception, and social relation. In order to explore the message or the meaning of more recent shifts in the media environment, Jim replaces McLuhan’s light bulb with LEDs—the same light emitting diodes that provide backlighting for flatscreen computer monitors and television sets, that power digital projectors, or that illuminate our increasingly “smart” homes. Routing perception through these characteristically digital-era lights, and powering them by way of unseen “custom electronics,” Jim defocuses intentional perception, foregrounds the obfuscation of infrastructure, and indirectly illuminates a media environment in which computation has finally (arguably) rendered the industrial-era split between art and technology untenable.
When I recently spoke to him on the phone, Jim identified himself not as an artist but as an engineer—and certainly he holds the degrees, the patents, and the experience to justify that statement. But he is an engineer of a special sort: an engineer of perception in an age when perception teeters precariously atop invisible circuits and computational infrastructures not cut to our measure, an engineer of experience when experience is routed through ubiquitous circuits of computational processing. Occluding both the image and its digital infrastructure, Jim’s work puts our perceptual experience in motion, incessantly circulating between what we can and cannot see. The work arouses a curiosity about the conditions of this circulation, including the means by which the LEDs, and hence also our perception, have been programmed. In the context of nineteenth-century magic shows and scientific expositions, this curiosity about how the spectacle works has been called an “operational aesthetic”—an aesthetic that, fittingly for the era of industrial media, includes an enjoyment in the sight of technical operation. In the twenty-first-century context of ubiquitous computational processing and experiential engineering, Jim offers us something slightly different, I suggest: an operational aesthetic of perception itself, a questioning of our ability and the means of seeing in an age of discorrelation, when visibility is rendered ambiguously at the margins of human signs and invisible informatic signals.
I am excited to announce the inaugural session of Critical Practices Unit (CPU), on November 19 at 6:30pm (in McMurtry 360).
In this interdisciplinary and practice-based group, with support from the Vice President for the Arts, we hope to stage collisions between the various epistemes and critical frameworks we all know and love through performances, art-objects, interactive media, and “critical making” projects, which, in some sense to be explored, materialize critical reflection.
In fidelity to these objects’ disobedience to any specific field, we want to stress that CPU is for those in the humanities, sciences, and arts. These conversations—spanning computation, performance, race, personhood, gesture, interaction, and more—will be made all the richer by a diversity of perspectives.
For our first event, we will be playing with haptic devices for underwater robots graciously loaned by The Stanford Robotics Lab, involving ourselves in a live performance piece / installation by Catie Cuan, and settling into a conversation about the grafting of robotics and performativity. We are overjoyed that situating this discussion will be Sydney Skybetter, Lecturer in Theater and Performance Studies at Brown University, and Matthew Wilson Smith, Professor of German Studies and Performance Studies here at Stanford.