Exploring Cinematic Mixed Realities

Exploring Cinematic Mixed Realities: Deformative Methods for Augmented and Virtual Film and Media Studies

Arguably, all cinema, with its projection of three-dimensional spaces onto a two-dimensional screen, is a form of mixed reality. But some forms of cinema are more emphatically interested in mixing realities—like Hale’s Tours (dating back to 1904), which staged its kinesthetic, rollercoaster-like spectacles of railway travel inside of a train car that rocked back and forth but otherwise remained stationary. Here the audience of fellow “passengers” experienced thrills that depended not so much on believing as on corporeally feeling the effects of the simulation, an embodied experience that was at once an experience of simulated travel and of the technology of simulation. Evoking what Neil Harris has called an “operational aesthetic,” attention here was split, as it is in so many of our contemporary augmented and virtual reality experiences, between the spectacle itself and its means of production. That is, audiences are asked both to marvel at the fictional scenario’s spectacular images and, as in the case of the “bullet time” popularized a century later by The Matrix, to wonder in amazement at the achievement of the spectacle by its underlying technical apparatus. The popularity of “making of” videos and VFX reels attests to a continuity across cinematic and computational (or post-cinematic) forms of mixed reality, despite very important technological differences—including most centrally the emergence of digital media operating at scales and speeds that by far exceed human perception. Seen from this angle, part of the appeal—and also the effectiveness—of contemporary AR, VR, and other mixed reality technologies lies in this outstripping of perception, whereby the spectacle mediates to us an embodied aesthetic experience of the altogether nonhuman dimensionality of computational processing. But how, beyond theorizing historical precursors and aesthetic forms, can this insight be harnessed practically for the study of film and moving-image media?

Taking a cue from Kevin L. Ferguson’s volumetric explorations of cinematic spaces with the biomedical and scientific imaging software ImageJ, I have been experimenting with mixed-reality methods of analysis and thinking about the feedback loops they initiate between embodied experience and computational processes that are at once the object and the medium of analysis. Here, for example, I have taken the famous bullet-time sequence and imported it as a stack of images into ImageJ, using the 3D Viewer plugin to transform what Gilles Deleuze called cinema’s presentation of a “bloc of space-time” into a literal block of bullet-time. This emphatically post-cinematic deformation uses transparency settings to gain computational insight into the virtual construction of a space that can be explored further in VR and AR settings as abstract traces of informational processing. Turned into a kind of monument that mixes human and computational spatiotemporal forms, this is a self-reflexive mixed reality that provides aesthetic experience of low-level human-computational interfacing—or, more pointedly, that re-constitutes aesthesis itself as mixed reality.
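The basic move behind this deformation can be sketched computationally. The following toy example (a minimal sketch in Python, not the actual ImageJ workflow, and using invented synthetic frames rather than the Matrix footage) treats a clip as a volume of shape (time, height, width) and then reslices it so that the time axis becomes a spatial axis—turning a “bloc of space-time” into a literal block:

```python
def make_frame(t, height=4, width=6):
    """Synthetic grayscale frame: pixel values vary with time t.
    Stands in for one imported film frame."""
    return [[(t * 10 + row + col) % 256 for col in range(width)]
            for row in range(height)]

def stack_frames(num_frames=8):
    """Import the clip as an image stack: a (time, y, x) volume,
    analogous to loading an image sequence as a stack in ImageJ."""
    return [make_frame(t) for t in range(num_frames)]

def reslice_top(volume):
    """Orthogonal reslice: fix the top row y=0 and return a (time, x)
    image, so that time becomes a visible spatial dimension."""
    width = len(volume[0][0])
    return [[volume[t][0][x] for x in range(width)]
            for t in range(len(volume))]

volume = stack_frames()
slice_tx = reslice_top(volume)
# The resliced image is num_frames tall and frame-width wide.
```

In ImageJ itself, the equivalent steps are importing the frames as a stack and then using the 3D Viewer (or an orthogonal reslice) to render and cut through the resulting volume.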

Clearly, this is an experimental approach that is not interested in positivistic ideas of leveraging digital media to capture and reconstruct reality, but instead approaches AR and VR technologies as an opportunity to transform and re-mix reality through self-reflexively recursive technoaesthetic operations. Here, for example, I have taken the bullet-time sequence, produced with the help of photogrammetric processes along with digital smoothing and chromakeying or green-screen replacement, and fed it back into photogrammetry software in order to distill a spatial environment and figural forms that can be explored further in virtual and augmented scenarios. Doing so does not, of course, present to us a “truth” understood as a faithful reconstruction of pro-filmic reality. On the contrary, the abstraction and incoherence of these objects foreground the collision of human and informatic realities and incompatible relations to time and space. If such processes have analytical or theoretical value, it resides not in a positivistic but rather a deformative relation to data, both computational and experiential. Indeed, the payoff, as I see it, of interacting with these objects is in the emergence of a new operational aesthetic, one that transforms the original operational aesthetic of the scenario—its splitting of attention between spectacle and apparatus—and redirects it to a second-order awareness of our involvement in mixed reality as itself a volatile mixture of technoaesthetic forms. Ultimately, this approach questions the boundaries between art and technology and reimagines the “doing” of digital media theory as a form of embodied, operational, and aesthetic practice.
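Photogrammetry pipelines typically hand their distilled geometry back as point clouds or meshes in interchange formats such as ASCII PLY, which VR and AR viewers can then load. As a minimal, hypothetical sketch (the point data here is invented, and this is not the specific software used above), serializing such a point cloud looks like:

```python
def write_ply(points):
    """Serialize a list of (x, y, z) tuples as an ASCII PLY point
    cloud, the kind of interchange file photogrammetry tools emit
    and VR/AR viewers can import."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ]
    body = [f"{x} {y} {z}" for x, y, z in points]
    return "\n".join(header + body) + "\n"

# A few invented points standing in for a distilled "figural form".
cloud = [(0.0, 0.0, 0.0), (1.0, 0.5, -0.25), (0.5, 1.0, 0.75)]
ply_text = write_ply(cloud)
```

The abstraction noted above is legible even at this level: what the pipeline returns is not a faithful scene but a sparse, incoherent scatter of coordinates awaiting re-embodiment in a virtual or augmented scenario.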

“Aesthetics of Discorrelation” and “Exploring Cinematic Mixed Realities” — Two Events at Duke University, Feb. 20 and Feb. 21, 2020

This coming week I will be at Duke University for two events:

First, on Thursday, February 20 (5pm, exact location to be determined), I will be giving a talk titled “Aesthetics of Discorrelation” (drawing on work from my forthcoming book Discorrelated Images).

Then, on Friday, February 21 (1-3pm in Smith Warehouse, Bay 4), I will be participating in a follow-up event to the NEH Institute for Virtual and Augmented Reality for the Digital Humanities, or V/AR-DHI. I will present work on “Exploring Cinematic Mixed Realities: Deformative Methods for Augmented and Virtual Film and Media Studies” and participate in a roundtable discussion with other members of the Institute.

On Display: Immemory, Soft Cinema, After Video

About two years ago, the exhibition On Display: Immemory, Soft Cinema, After Video at Bilkent University in Ankara brought together projects by Chris Marker, Lev Manovich, and the contributors to the “video book” after.video — including the collaborative AR piece “Scannable Images” that Karin Denson and I made. Recently, Oliver Lerone Schultz (one of the editors of after.video) brought to my attention this “critical tour” of the exhibition, which takes the form of a discussion between Ersan Ocak and Andreas Treske. It is audio only, and you might need to turn up the volume a bit, but it’s an interesting discussion of video and media art.

(See here for more on after.video. Also, I should note that the AR on “Scannable Images” is currently not working due to the ephemeral business models of AR platforms these days, but I hope to port it over to a new platform and get it up and running again soon!)

Virtual and Augmented Reality Digital (and/or Deformative?) Humanities Institute at Duke

I am excited to be participating in the NEH-funded Virtual and Augmented Reality Digital Humanities Institute — or V/AR-DHI — next month (July 23 – August 3, 2018) at Duke University. I am hoping to adapt “deformative” methods (as described by Mark Sample following a provocation from Lisa Samuels and Jerome McGann) as a means of transformatively interrogating audiovisual media such as film and digital video in the spaces opened up by virtual and augmented reality technologies. In preparation, I have been experimenting with photogrammetric methods to reconstruct the three-dimensional spaces depicted on two-dimensional screens. The results, so far, have been … modest — nothing yet in comparison to artist Claire Hentschker’s excellent Shining360 (2016) or Gregory Chatonsky’s The Kiss (2015). There is something interesting, though, about the dispersal of the character Neo’s body into an amorphous blob and the disappearance of bullet time’s eponymous bullet in this scene from The Matrix, and there’s something incredibly eerie about the hidden image behind the image in this famous scene from Frankenstein, where the monster’s face is first revealed and his head made virtually to protrude from the screen through a series of jump cuts. Certainly, these tests stand in an intriguing (if uncertain) deformative relation to these iconic moments. In any case, I look forward to seeing where (if anywhere) this leads, and to experimenting further at the Institute next month.

Deformative Criticism at #SCMS17


At the upcoming SCMS conference in Chicago, I will be participating in a workshop on “Deformative Criticism and Digital Experimentations in Film & Media Studies” (panel K3 on Friday, March 24, 2017 at 9:00am):

Deformative criticism has emerged as an innovative site of critical practice within media studies and digital humanities, revealing new insights into media texts by “breaking” them in controlled or chaotic ways. Deformative criticism includes a wide range of digital experiments that generate heretical and non-normative readings of media texts; because the results of these experiments are impossible to know in advance, they shift the boundaries of critical scholarship. Media scholars are particularly well situated for such experimentation, as many of our objects of study exist in digital forms that lend themselves to wide-ranging manipulation. Thus, deformative criticism offers a crucial venue for defining not only contemporary scholarly practice, but also media studies’ growing relationship to digital humanities.

Also participating in the workshop will be Jason Mittell (Middlebury College), Stephanie Boluk (UC Davis), Kevin L. Ferguson (Queens College, City University of New York), Mark Sample (Davidson College), and Virginia Kuhn (USC).

My own presentation/workshop contribution will focus on glitches and augmented reality as a deformative means of engaging with changing media-perceptual configurations, including the following case study:

Glitch, Augment, Scan

Scannable Images is a collaborative art/theory project by Karin + Shane Denson that interrogates post-cinema – its perceptual patterns, hyperinformatic simultaneities, and dispersals of attention – through an assemblage of static and animated images, databending and datamoshing techniques, and augmented reality (AR) video overlays. Viewed through the small screen of a smartphone or tablet – itself directed at a computer screen – only a small portion of the entire spectacle can be seen at once. The piece thus reflects and emulates the selective, scanning regard of post-cinematic images, confronting the viewer with the materiality of the post-cinematic media regime through the interplay of screens, pixels, people, and the physical and virtual spaces they occupy.

Post-Cinema AR


The augmented reality piece featured on the cover of Post-Cinema: Theorizing 21st-Century Film (http://reframe.sussex.ac.uk/post-cinema/), a collaborative piece made by Karin Denson and me, was displayed recently at a glitch-oriented gallery show organized by some nice people associated with Savannah College of Art and Design.

Try it out for yourself here: http://reframe.sussex.ac.uk/post-cinema/artwork/.

After.Video at Libre Graphics 2016 in London


Recently, I posted about a project called after.video, which contains an augmented (AR) glitch/video/image-based theory piece that Karin Denson and I collaborated on. It has now been announced that the official launch of after.video, Volume 1: Assemblages — a “video book” consisting of a paperback book and video elements stored on a Raspberry Pi computer packaged in a VHS case, which will also be available online — will take place at the Libre Graphics Meeting 2016 in London (Sunday, April 17th at 4:20pm).

The Gnomes Are Back: Business cARd 2.0


Ever since our old AR platform was bought out and shut down by Apple, the “data gnomes” that Karin and I developed in conjunction with the Duke S-1: Speculative Sensation Lab’s “Manifest Data” project have been bumbling about in digital limbo, banished to 404 hell. So today I finally made the first steps in migrating our beloved creatures over to a new AR platform (Wikitude), where they’re starting to feel at home. While I was at it, I went ahead and reprogrammed my business card:

[Image: front of the business card, showing the QR code]

The QR code on the front now redirects the browser to shanedenson.com, while the AR content on the back side is made visible with the Wikitude app (free on iOS or Android) — just search for “Shane Denson” and point your phone/tablet’s camera at the image below:

[Image: back of the business card, which serves as the AR target]

(In case you’re wondering what this is: it’s a “data portrait” generated from my Internet browsing behavior. You can make your own with the code included in the S-1 Lab’s Manifest Data kit.)

DEMO Video: Post-Cinema: 24fps@44100Hz

As Karin posted yesterday (and as I reblogged this morning), our collaborative artwork Post-Cinema: 24fps@44100Hz will be on display (and on sale) from January 15-23 at The Carrack Modern Art gallery in Durham, NC, as part of their annual Winter Community Show.

Exhibiting augmented reality pieces always brings with it a variety of challenges — including technical ones and, above all, the need to inform viewers about how to use the work. So, for this occasion, I’ve put together this brief demo video explaining the piece and how to view it. The video will be displayed on a digital picture frame mounted on the wall below the painting. Hopefully it will be eye-catching enough to attract passersby and will effectively communicate the essential information about the process and use of the work.