“Six Theses on an Aesthetics of Always-On Computing” — James J. Hodge at Digital Aesthetics Workshop, April 30, 2024

We’re pleased to announce the second event of the Digital Aesthetics Workshop for spring quarter. Please join us in welcoming James J. Hodge, who will present on “Six Theses on an Aesthetics of Always-On Computing” on Tuesday, April 30, 5:00-7:00pm PT. The event will take place in the Stanford Humanities Center Board Room, where refreshments will be served. Below you will find the abstract and bio. We look forward to seeing you there!

Zoom link for those unable to join in-person: tinyurl.com/27afjatd

Abstract:

This talk comes from my book project, “Ordinary Media: An Aesthetics of Always-On Computing.” The premise of the project is that the smartphone has become for many the signature technology and engine of experience in the twenty-first century. One of the project’s larger claims is that the ambient givenness of smartphones in contemporary life has significantly reorganized the human sensorium and, moreover, has elevated the significance of experience at the level of the skin’s surface, or what the psychoanalyst Thomas Ogden terms “boundedness.” This talk attends to the ways in which this dramatic shift in the general orientation of experience entails a sea change in the general nature of aesthetics native and responsive to the always-on world. Discussing a variety of examples from film, literature, video, games, digital art, and vernacular aesthetic forms and genres, this talk explores six “theses” of aesthetics in this still-novel yet ordinary arena.

Bio:

James J. Hodge is Associate Professor in the Department of English at Northwestern University. His essays on digital aesthetics have appeared in Critical Inquiry, Postmodern Culture, TriQuarterly, Film Criticism, and elsewhere. He is the author of Sensations of History: Animation and New Media Art (Minnesota, 2019).

MTL/Intermediations Presents: Regina Schober, “Female Algorithmic Selfhood, Literary Fiction, and the Digital Pharmakon,” March 6, 2024

The Program in Modern Thought and Literature and Intermediations invite you to attend a lunch-time talk with Professor Regina Schober (American Studies, Heinrich-Heine-University Duesseldorf) on Female Algorithmic Selfhood, Literary Fiction, and the Digital Pharmakon

This event will be taking place in the Terrace Room in Margaret Jacks Hall (Building 460, 4th Floor, room 426) on March 6th at 11am.

Lunch will be provided. If you are planning to attend, we invite you to fill out an RSVP form for logistics and headcount. RSVPs are appreciated but not required; if you do RSVP, please do so by March 1st.

If you have any questions or concerns about this event, please do not hesitate to reach out to Leah Chase at lachase@stanford.edu

Abstract:

While algorithms have increasingly come to shape the ways of writing the self, for example through data tracking and recording, personalized recommendation systems, and online identity curation, literary fiction has simultaneously negotiated such ways of being in and experiencing our algorithmically driven, digital environment. This talk will look at a selection of contemporary US American novels that critically inquire into modes of algorithmic self-writing, as they scrutinize the ways in which digital affect, automated scripts, and the dynamics of the attention economy play into the construction of selfhood. With a particular focus on female digital experiences, this talk reframes posthuman perspectives on human–technology interactions by emphasizing affective and collective spaces of the “digital pharmakon” (Stiegler 2012). At the same time, these novels explore their own intermedial potential as counter-attentional forms in negotiating the ‘failed knowledges’ of scripting the digital female self.

About the speaker:

Regina Schober is Professor of American Studies at Heinrich-Heine-University Duesseldorf. Her research interests include literary negotiations of networks and algorithmic selfhood, theories of failure, and intermediality. She is the author of ‘Spiderweb, Labyrinth, Tightrope Walk: Networks in US-American Literature and Culture’ (De Gruyter, 2023) and ‘Unexpected Chords: Musicopoetic Intermediality in Amy Lowell’s Poetry and Poetics’ (Winter, 2011), and editor of ‘Data Fiction: Naturalism, Numbers, Narrative’ (special issue of Studies in American Naturalism, with James Dorson, 2017), ‘The Failed Individual: Amid Exclusion, Resistance, and the Pleasure of Non-Conformity’ (Campus, 2017, with Katharina Motyl), ‘Laboring Bodies and the Quantified Self’ (Transcript, with Ulfried Reichardt, 2020), and ‘Network Theory and American Studies’ (special issue of Amerikastudien/American Studies, 2015, with Ulfried Reichardt and Heike Schäfer). She is part of the DFG Research Network ‘The Failure of Knowledge/Knowledges of Failure’, the DFG Research Network ‘Model Aesthetics: Between Literary and Economic Knowledge’, and the interdisciplinary BMBF Project ‘AI4All’.

M. Beatrice Fazi at Digital Aesthetics Workshop, February 28!

Poster by Hank Gerba

Please join us for our next event with M. Beatrice Fazi on Tuesday February 28 @ 5-7pm Pacific time. We’ll meet in the Stanford Humanities Center, as usual. Zoom Registration, if not able to attend IRL: https://tinyurl.com/39tsjc62

The topic of Beatrice’s talk is “On Digital Theory.”

Abstract:

What is digital theory? In this talk, M. Beatrice Fazi will advance and discuss two parallel propositions that aim to answer that question: first, that digital theory is a theory that investigates the digital as such and, second, that it is a theory that is digital insofar as it discretizes via abstraction. Fazi will argue that digital theory should offer a systematic and systematizing study of the digital in and of itself. In other words, it should investigate what the digital is, and that investigation should identify the distinctive ontological determinations and specificities of the digital. This is not the only scope of a theoretical approach to the digital, but it constitutes a central moment for digital theory, a moment that defines digital theory through the search for the definition of the digital itself. Fazi will also consider how, if we wish to understand what digital theory is, we must address the characteristics of theoretical analysis, which can be done only by reflecting on what thinking is in the first place. Definitions of the digital, definitions of thought, and definitions of theory all meet at a key conceptual juncture. To explain this, Fazi will discuss how to theorize is to engage in abstracting and that both are processes of discretization. The talk will conclude by considering whether the digital could be understood as a mode of thought as well as a mode of representing thought. 

Bio:

M. Beatrice Fazi is Reader in Digital Humanities in the School of Media, Arts and Humanities at the University of Sussex, United Kingdom. Her primary areas of expertise are the philosophy of computation, the philosophy of technology and the emerging field of media philosophy. Her research focuses on the ontologies and epistemologies produced by contemporary technoscience, particularly in relation to issues in artificial intelligence and computation and to their impact on culture and society. She has published extensively on the limits and potentialities of the computational method, on digital aesthetics and on the automation of thought. Her monograph Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics was published by Rowman & Littlefield International in 2018.

“Code” — Bernard Dionysius Geoghegan at Digital Aesthetics Workshop (Jan. 17, 2023)

Poster by Hank Gerba

Please join us on Tuesday, January 17th @ 5-7pm, in the Stanford Humanities Center Board Room, for a very special event with Bernard Dionysius Geoghegan. Bernard’s new book, Code: From Information Theory to French Theory, releases just three days later, on January 20th (https://www.dukeupress.edu/code)! At Digital Aesthetics he will be discussing the book as well as his future project, Screenscapes: How Formats Render Territories.

Zoom registration, if you can’t make it IRL: https://tinyurl.com/4dhyjuna.

Bio:
Bernard Dionysius Geoghegan is a Reader in the History and Theory of Digital Media at King’s College London (a post loosely equivalent to an associate or W2 professorship). An overarching theme of his research is how “cultural” sciences shape—and are shaped by—digital media. This concern spans his writing on the mutual constitution of cybernetics and the human sciences, ethnicity and AI, and the role of mid-twentieth century military vigilance in the development of interactive, multimedia computing. His attention to cultural factors in technical systems also figured in his work as a curator, notably for the Anthropocene and Technosphere projects at the Haus der Kulturen der Welt.

Bernard’s book Code: From Information Theory to French Theory examines how liberal technocratic projects, with roots in colonialism, mental health, and industrial capitalism, shaped early conceptions of digital media and cybernetics. It offers a revisionist history of “French Theory” as an effort to come to terms with technical ideas of communications and as a predecessor to the digital humanities. N. Katherine Hayles wrote of this book that it “upends standard intellectual histories” and Lev Manovich that “after reading this original and fascinating book, you will never look at key thinkers of the twentieth century in the same way.” Early drafts of the book’s argument appeared in journals including Grey Room and Critical Inquiry.

Bernard’s current book project, Screenscapes: How Formats Render Territories, draws on infrastructure studies and format studies to offer a radical account of how digital screens produce global space. It considers the digital interface in terms of articulation, i.e., in its technoscientific formatting of territories, temporalities, and practices as “ecologies of operations.” Excerpts appear in Representations (An Ecology of Operations) and MLN  (The Bitmap is the Territory).

In Conversation: Jean Ma and Tung-Hui Hu at Digital Aesthetics Workshop (December 2)

Poster by Hank Gerba

Please join us for the Digital Aesthetics Workshop’s next event, “In Conversation: Jean Ma & Tung-Hui Hu.” The two authors will discuss their recently released books—Jean Ma’s At the Edges of Sleep: Moving Images and Somnolent Spectators and Tung-Hui Hu’s Digital Lethargy—before moving into a more synthetic conversation. A version of this event was originally scheduled in 2020 as a discussion of work in Jean Ma’s book-to-be, but was cancelled due to the pandemic. We are *thrilled* to bring the event back as, in part, a celebration of the book’s launch : )

The meeting will be held December 2nd, 10am–12pm, in McMurtry 370. Breakfast will be provided!

Zoom registration if unable to attend in-person: tinyurl.com/3nujuzkr

Jean Ma is the Victoria and Roger Sant Professor in Art in Stanford’s Department of Art & Art History. She has published books on the temporal poetics of Chinese cinema (Melancholy Drift: Marking Time in Chinese Cinema), singing women on film (Sounding the Modern Woman: The Songstress in Chinese Cinema), and the relationship of cinema and photography (Still Moving: Between Cinema and Photography). She is the coeditor of “Music, Sound, and Media,” a book series at the University of California Press. Her writing has appeared in Camera Obscura, Criticism, Film Quarterly, Grey Room, Journal of Chinese Cinemas, and October. Her new book At the Edges of Sleep: Moving Images and Somnolent Spectators is the recipient of an Andy Warhol Foundation Arts Writer Book Grant. To access the open-access digital edition, please visit: luminosoa.org/site/books/m/10.1525/luminos.132/

Tung-Hui Hu is a poet and scholar of digital media. The winner of a Rome Prize and an NEA fellowship for literature, Hu has also received an American Academy in Berlin Prize for his research. He is the author of A Prehistory of the Cloud (MIT Press, 2015), described by The New Yorker as “mesmerizing… absorbing [in] its playful speculations”. His research has been featured by CBS News, BBC Radio 4, the Boston Globe, New Scientist, Art in America, and Rhizome.org, among other venues. His brand-new book, an exploration of burnout, isolation, and disempowerment in the digital underclass, is Digital Lethargy (MIT Press, October 2022).

Image Objects — In Conversation with Jacob Gaboury, December 8, 2021

On Wednesday, December 8, 2021 (12:00 – 1:00pm Pacific time), I will be in conversation with Jacob Gaboury about his excellent new book Image Objects: An Archaeology of Computer Graphics for UC Berkeley’s Townsend Center for the Humanities.

The event will be livestreamed on YouTube and is therefore open for all to view.

More info can be found on the Townsend Center website.

Algorithmic Serialities

I recently gave a talk with the unwieldy title “Post-Cinematic Seriality and the Algorithmic Conditions of Identity and Difference” for the Center for Inter-American Studies at the University of Graz and the Austro-American Society for Styria in Austria (see the *somewhat creepy, but appropriately so, lol* flyer below); and on October 12, 2021 (at 6:30pm Central European time / 9:30am Pacific US time) I’ll be giving a related talk with the much more wieldy (possibly misleadingly simple) title “Seriality and Digital Cultures” at the University of Zurich’s English Department (see the flyer with registration info above).

Both of these talks are related to a larger project that I am developing, which will link seriality as a medial form (in both popular and artistic media) and as a social form (following the late Sartre, Iris Marion Young, Benedict Anderson, and others) in order to think about the ways that — with the shift from a broadly “cinematic” media regime (with its past-oriented, memorial, recording, retentional functions) to a “post-cinematic” one (with its future-oriented, anticipatory, predictive, protentional functions) — algorithmic media are poised to transform categories and lived realities of class, gender, and race.

On the Embodied Phenomenology of DeepFakes — Full Text of Talk from #SLSA21

DeepFake videos pose significant challenges to conventional modes of viewing. Indeed, the use of machine learning algorithms in these videos’ production complicates not only traditional forms of moving-image media but also deeply anchored phenomenological categories and structures. By paying close attention to the exchange of energies around these videos, including the consumption of energy in their production but especially the investment of energy on the part of the viewer struggling to discern the provenance and veracity of such images, we discover a mode of viewing that recalls pre-cinematic forms of fascination while relocating them in a decisively post-cinematic field. The human perceiver no longer stands clearly opposite the image object but instead interfaces with the spectacle at a pre-subjective level that approximates the nonhuman processing of visual information known as machine vision. While the depth referenced in the name “deep fake” is that of “deep learning,” the aesthetic engagement with these videos implicates an intervention in the depths of embodied sensibility—at the level of what Merleau-Ponty referred to as the “inner diaphragm” that precedes stimulus and response or the distinction of subject and intentional object. While the overt visual thematics of these videos is often highly gendered (their most prominent examples being so-called “involuntary synthetic pornography” targeting mostly women), viewers are also subject to affective syntheses and pre-subjective blurrings that, beyond the level of representation, open their bodies to fleshly “ungenderings” (Hortense Spillers) and re-typifications with far-reaching consequences for both race and gender.

Let me try to demonstrate these claims. To begin with, DeepFake videos are a species of what I have called discorrelated images, in that they trade crucially on the incommensurable scales and temporalities of computational processing, which altogether defy capture as the object of human perception (or the “fundamental correlation between noesis and noema,” as Husserl puts it). To be sure, DeepFakes, like many other forms of discorrelated images, still present something to us that is recognizable as an image. But in them, perception has become something of a by-product, a precipitate form or supplement to the invisible operations that occur in and through them. We can get a glimpse of such discorrelation by noticing how such images fail to conform or settle into stable forms or patterns, how they resist their own condensation into integral perceptual objects—for example, the way that they blur figure/ground distinctions.

The article widely credited with making the DeepFake phenomenon known to a wider public in December 2017 notes with regard to a fake porn video featuring Gal Gadot: “a box occasionally appeared around her face where the original image peeks through, and her mouth and eyes don’t quite line up to the words the actress is saying—but if you squint a little and suspend your belief, it might as well be Gadot.” There’s something telling about the formulation, which hinges the success of the DeepFake not on the suspension of disbelief—a suppression of active resistance—but on the suspension of belief—seemingly, a more casual form of affirmation—whereby the flickering reversals of figure and ground, or of subject and object, are flattened out into a smooth indifference.

In this regard, DeepFake videos are worth comparing to another type of discorrelated image: the digital lens flare, which is both to-be-looked-at (as a virtuosic display of technical achievement) and to-be-overlooked (after all, the height of their technical achievement is reached when they can appear as transparently naturalized simulations of a physical camera’s optical properties). The tension between opacity and transparency, or objecthood and invisibility, is never fully resolved, thus undermining a clear distinction between diegetic and medial or material levels of reality. Is the virtual camera that registers the simulated lens flare to be seen as part of the world represented on screen, or as part of the machinery responsible for revealing it to us? The answer, it seems, must be both. And in this, such images embody something like what Neil Harris termed the “operational aesthetic” that characterized nineteenth-century science and technology expos, magic shows, and early cinema alike; in these contexts, spectatorial attention oscillated between the surface phenomenon, the visual spectacle of a machine or a magician in motion, and the hidden operations that made the spectacle possible.

It was such a dual or split attention that powered early film as a “cinema of attractions,” where viewers came to see the Cinematographe in action, as much as or more than they came to see images of workers leaving the factory or a train arriving at the station. And it is in light of this operational aesthetic that spectators found themselves focusing on the wind rustling in the trees or the waves lapping at the rocks—phenomena supposedly marginal to the main objects of visual interest.

DeepFakes also trade essentially on an operational aesthetic, or a dispersal of attention between visual surface and the algorithmic operation of machine learning. However, I would argue that the post-cinematic processes to whose operation DeepFakes refer our attention fundamentally transform the operational aesthetic, relocating it from the oscillations of attention that we see in the cinema to a deep, pre-attentional level that computation taps into with its microtemporal speed.

Consider the way digital glitches undo figure/ground distinctions. Whereas the cinematic image offered viewers opportunities to shift their attention from one figure to another and from these figures to the ground of the screen and projector enabling them, the digital glitch refuses to settle into the role either of figure or of ground. It is, simply, both—it stands out, figurally, as the pixely appearance of the substratal ground itself. Even more fundamentally, though, it points to the inadequacy, which is not to say dispensability, of human perception and attention with respect to algorithmic processing. While the glitch’s visual appearance effects a deformation of the spatial categories of figure and ground, it does so on the basis of a temporal mismatch between human perception and algorithmic processing. The latter, operating at a scale measured in nanoseconds, by far outstrips the window of perception and subjectivity, so that by the time the subject shows up to perceive the glitch, the “object” (so to speak) has already acted upon our presubjective sensibilities and moved on. This is why glitches, compression artifacts, and other discorrelated images are not even bound to appear to us as visual phenomena in the first place in order to exert a material force on us. Another way to account for this is to say that the visually-subjectively delineated distinction between figure and ground itself depends on the deeper ground of presubjective embodiment, and it is the latter that defines for us our spatial situations and temporal potentialities. DeepFakes, like other discorrelated images, are able to dis-integrate coherent spatial forms so radically because they undercut the temporal window within which visual perception occurs. The operation at the heart of their operational aesthetic is itself an operationalization of the flesh, prior to its delineation into subjective and objective forms of corporeality.
The seamfulness of DeepFakes—their occasional glitchy appearance or just the threat or presentiment that they might announce themselves as such—points to our fleshly imbrication with technical images today, which is to say: to the recoding not only of aesthetic form but of embodied aesthesis itself. 

In other words: especially and as long as they still routinely fail to cohere as seamless suturings of viewing subjects together with visible objects, but instead retain their potential to fall apart at the seams and thus still require a suspension of belief, DeepFake videos are capable of calling attention to the ways that attention itself is bypassed, providing aesthetic form to the substratal interface between contemporary technics and embodied aesthesis. To be clear, and lest there be any mistake about it, I in no way wish to celebrate DeepFakes as a liberating media-technology, the way that the disruption of narrative by cinematic self-reflexivity was sometimes celebrated as opening a space where structuring ideologies gave way to an experience of materiality and the dissolution of the subject positions inscribed and interpellated by the apparatus. No amount of glitchy seamfulness will undo the gendered violence inflicted, mostly upon women, in involuntary synthetic pornography. Not only that, but the pleasure taken by viewers in their consumption of this violence seems to depend, at least in part, precisely on the failure or incompleteness of the spectacle: what such viewers desire is not to be tricked into actually believing that it is Gal Gadot or their ex-girlfriend that they are seeing on the screen, but precisely that it is a fake likeness or simulation, still open to glitches, upon which the operational aesthetic depends. Nevertheless, we should not look away from the paradoxical opening signaled by these viewers’ suspension of belief. The fact that they have to “squint a little” to complete the gendered fantasy of domination also means that they have to compromise, at least to a certain degree or for a short duration, their subjective mastery of the visual object, that they have to abdicate their own subjective ownership of their bodies as the bearers of experience. 
Though it is hard to believe that any trace of conscious awareness of it remains, much less that viewers will be reformed as a result of the experience, it seems reasonable to believe that viewers of DeepFake videos must experience at least an inkling of their own undoing as their de-subjectivized vision interfaces with the ahuman operation of machine vision. 

What I am saying, then, and I am trying to be careful about how I say it, is that DeepFake videos open the door, experientially, to a highly problematic space in which our predictive technologies participate in processes of subjectivation by outpacing the subject, anticipating the subject, and intervening materially in the pre-personal realm of the flesh, out of which subjectivized and socially “typified” bodies emerge. The late Sartre, writing in the Critique of Dialectical Reason, defined commodities and the built environment in terms of the “practico-inert,” in light of the ways that “worked matter” stored past human praxis but condensed it into inert physical form. Around these objects, increasingly standardized through industrial capitalism’s serialized production processes, are arrayed alienated and impotent social collectives of interchangeable, fungible subjects. Compellingly, feminist philosopher Iris Marion Young takes Sartre’s argument as the basis for rethinking gender as a non-essentialist formation, a nascent collectivity, that is imposed on bodies materially—through architecture, clothing, and gender-specific objects that serve to enforce patriarchy and heterosexism. The practico-inert, in other words, participated in the gendered typification of the body—and we could extend the argument to racialization processes as well. But the computational infrastructures of today’s built environment are no longer adequately captured by the concept of the practico-inert. These infrastructures and objects are still the products of praxis, but they are far from inert. In their predictive and interactive operations, they are better thought of under the concept of the practico-alert—they are highly active, always on alert, and like the viewers of DeepFake videos on the lookout for a telling glitch, so are we ever and exhaustingly on the alert. 
In these circuits, which are located deeper than subjective attention, the standardization and typification processes I just mentioned are more fine-grained, more “personalized” or targeted, operating directly on the presubjective flesh. In this sense, the flattening of subjectivity, the suspension of belief and depersonalization of vision in DeepFake videos, points towards the contemporary “ungendering” of the flesh, as Hortense Spillers calls it in a different context, that marks a preliminary step in the computational intensification of racialized and gendered subjectivization. This is a truly insidious aesthetics of the flesh.

Discorrelation, or: Images between Algorithms and Aesthetics — Nov. 3 at CESTA

On November 3 (12pm Pacific), I’ll be giving a talk, via Zoom, titled “Discorrelation, or: Images between Algorithms and Aesthetics” at Stanford’s Center for Spatial and Textual Analysis (CESTA). The talk will focus on my book Discorrelated Images, just out from Duke University Press (and 50% off right now with code FALL2020).

In case you’re wondering, this is a different “book talk” than anything you might have seen recently, so check it out if you can! (Though I am told that there is something else going on on November 3rd, so only tune in if you’ve already voted!)

See here for more information and registration!

Rendered Worlds: New Regimes of Imaging — October 23, 2020

The Digital Aesthetics Workshop is extremely excited to announce a collaborative panel with UC Davis’ Technocultural Futures Research Cluster.

“Rendered Worlds: New Regimes of Imaging” will take place on Friday, October 23 at 10am PDT. Co-organized by teams from Stanford University and University of California Davis, this event brings together a transatlantic group of scholars to discuss the social, historical, technical, and aesthetic entanglements of our computational images.

Talking about their latest work will be Deborah Levitt (The New School), Ranjodh Singh Dhaliwal (UC Davis and Universität Siegen), Bernard Dionysius Geoghegan (King’s College London), and Shane Denson (Stanford). Hank Gerba (Stanford) and Jacob Hagelberg (UC Davis) will co-moderate the round-table. Please register at tinyurl.com/renderedworlds for your zoom link!

We hope to see you there! If you have any questions, please direct them to Ranjodh Singh Dhaliwal (rjdhaliwal at ucdavis dot edu).

Sponsored by the Stanford Humanities Center. Made possible by support from Linda Randall Meier, the Mellon Foundation, and the National Endowment for the Humanities.