David W. Bates, “An Artificial History of Natural Intelligence: Thinking with Machines from Descartes to the Digital Age,” May 22, 2024

On May 22 (4:30pm, Bldg. 200, room 307), David W. Bates (Department of Rhetoric, UC Berkeley) will be discussing his new book An Artificial History of Natural Intelligence: Thinking with Machines from Descartes to the Digital Age. Merve Tekgürler will provide a response.

The Program in Modern Thought & Literature is a proud co-sponsor of the event — along with History & Philosophy of Science, the Program in Science, Technology, & Society, Stanford Communication, the Division of Literatures, Cultures, and Languages, and Stanford Symbolic Systems.

M. Beatrice Fazi at Digital Aesthetics Workshop, February 28!

[Poster by Hank Gerba]

Please join us for our next event with M. Beatrice Fazi on Tuesday, February 28, 5–7pm Pacific time. We’ll meet in the Stanford Humanities Center, as usual. Zoom registration, for those unable to attend IRL: https://tinyurl.com/39tsjc62

The topic of Beatrice’s talk is “On Digital Theory.”

Abstract:

What is digital theory? In this talk, M. Beatrice Fazi will advance and discuss two parallel propositions that aim to answer that question: first, that digital theory is a theory that investigates the digital as such and, second, that it is a theory that is digital insofar as it discretizes via abstraction. Fazi will argue that digital theory should offer a systematic and systematizing study of the digital in and of itself. In other words, it should investigate what the digital is, and that investigation should identify the distinctive ontological determinations and specificities of the digital. This is not the only scope of a theoretical approach to the digital, but it constitutes a central moment for digital theory, a moment that defines digital theory through the search for the definition of the digital itself. Fazi will also consider how, if we wish to understand what digital theory is, we must address the characteristics of theoretical analysis, which can be done only by reflecting on what thinking is in the first place. Definitions of the digital, definitions of thought, and definitions of theory all meet at a key conceptual juncture. To explain this, Fazi will discuss how to theorize is to engage in abstracting, and how both are processes of discretization. The talk will conclude by considering whether the digital could be understood as a mode of thought as well as a mode of representing thought.

Bio:

M. Beatrice Fazi is Reader in Digital Humanities in the School of Media, Arts and Humanities at the University of Sussex, United Kingdom. Her primary areas of expertise are the philosophy of computation, the philosophy of technology and the emerging field of media philosophy. Her research focuses on the ontologies and epistemologies produced by contemporary technoscience, particularly in relation to issues in artificial intelligence and computation and to their impact on culture and society. She has published extensively on the limits and potentialities of the computational method, on digital aesthetics and on the automation of thought. Her monograph Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics was published by Rowman & Littlefield International in 2018.

Post-Cinema / Post-Phenomenology

Following my talk last week at the Texas State Philosophy Symposium, details have now been finalized for another talk at Texas State: this time in the context of the Philosophy Department’s Dialogue Series, where I’ll be talking about post-cinema (i.e. post-photographic moving image media such as video and various digital formats) and what I’ve been arguing is an essentially post-phenomenological system of mediation (see, for example, my talk from the 2013 SCMS conference or these related musings). For anyone who happens to be in the area, the talk will take place on Monday, April 14, 2014 at 12:30 pm (in Derrick Hall 111). UPDATE: The time has been changed to 10:00 am.

Philosophy of Science De-Naturalized: Notes towards a Postnatural Philosophy of Media (full text)

[Slide 34]

As I recently announced, I was invited to give the keynote address at the 17th annual Texas State University Philosophy Symposium. Here, now, is the full text of my talk:

Philosophy of Science De-Naturalized: Notes towards a Postnatural Philosophy of Media

Shane Denson

The title of my talk contains several oddities (and perhaps not a few extravagances), so I’ll start by looking at these one by one. First (or last) of all, “philosophy of media” is likely to sound unusual in an American context, but it denotes an emerging field of inquiry in Europe, where a small handful of people have started referring to themselves as philosophers of media, and where there is even a limited amount of institutional recognition of such appellations. In Germany, for example, Lorenz Engell has held the chair of media philosophy at the Bauhaus University in Weimar since 2001. He lists as one of his research interests “film and television as philosophical apparatuses and agencies” – which, whatever that might mean, clearly signals something very different from anything that might conventionally be treated under the heading of “media studies” in the US. On this European model, media philosophy is related to the more familiar “philosophy of film,” but it typically broadens the scope of what might be thought of as media (following provocations from thinkers like Niklas Luhmann, who treated everything from film and television to money, acoustics, meaning, art, time, and space as media). More to the point, media philosophy aims to think more generally about media as a philosophical topic, and not as mere carriers for philosophical themes and representations – which means going beyond empirical determinations of media and beyond concentrations on media “contents” in order to think about ontological and epistemological issues raised by media themselves. Often, these discussions channel the philosophy of science and of technology, and this strategy will indeed build the bridge in my own talk between the predominantly European idea of “media philosophy” and the context of Anglo-American philosophy.

OK, but if the idea of a philosophy of media isn’t weird enough, I’ve added this weird epithet: “postnatural.” The meaning of this term is really the crux of my talk, but I’m only going to offer a few “notes towards” a postnatural theory, as it’s also the crux of a big, unwieldy book that I have coming out later this year, in which I devote some 400 pages to explaining and exploring the idea of postnaturalism. As a first approach, though, I can describe the general trajectory through a series of three heuristic (if oversimplifying) slogans.

[Slide 10]

First, in response to debates over the alleged postmodernity of (Western) societies at the end of the twentieth century, French sociologist and science studies pioneer Bruno Latour, most famous for his association with so-called actor-network theory, claimed in his 1991 book of that title that “We have never been modern.” What he meant, centrally, was not only that the division of nature and culture, nonhuman and human, that had structured the idea of modernity (and of scientific progress) could be seen crumbling in contemporary phenomena such as global warming and biotechnology – humanly created phenomena that become forces of nature in their own right – but that the division was in fact an illusion all along. We have never been modern, accordingly, because modern scientific instruments like the air pump were simultaneously natural, social, and discursive phenomena. The idea of modernity, according to Latour, depends upon acts of purification that reinforce the nature/culture divide, but an array of hybrids constantly mix these realms. In terms of a philosophy of media, one of the most important conceptual contributions made by Latour in this context is the distinction between “intermediaries” and “mediators.” The former are seen as neutral carriers of information and intentionalities: instruments that expand the cognitive and practical reach of humans in the natural world while leaving the essence of the human untouched. Mediators, on the other hand, are seen to decenter subjectivities and to unsettle the human/nonhuman divide itself as they participate in an uncertain negotiation of these boundaries.

[Slide 12]

The NRA, with their slogan “guns don’t kill people, people kill people,” would have us believe that handguns are mere intermediaries, neutral tools for good or evil; Latour, on the other hand, argues that the handgun, as a non-neutral mediator, transforms the very agency of the human who wields it. That person takes up a very different sort of comportment towards the world, and the transformation is at once social, discursive, phenomenological, and material in nature.

[Slide 14]

With Donna Haraway, we could say that the human + handgun configuration describes something on the order of a cyborg, neither purely human nor nonhuman. And Haraway, building on Latour’s “we have never been modern,” ups the ante and provides us with the second slogan: “We have never been human.” In other words, it’s not just in the age of prosthetics, implants, biotech, and “smart” computational devices that the integrity of the human breaks down, but already at the proverbial dawn of humankind – for the human has co-evolved with other organisms (like the dog, who domesticated the human just as much as the other way around). From an ecological as much as an ideological perspective, the human fails to describe anything like a stable, well-defined, or self-sufficient category.

[Slide 16]

Now the third slogan, which is my own, doesn’t so much try to outdo Latour and Haraway as to refocus some of the themes that are inherent in these discussions. Postnaturalism, in a nutshell, is the idea not that we are now living beyond nature, whatever that might mean, but that “we have never been natural” (and neither has nature, for that matter). Human and nonhuman, natural and unnatural agencies are products of mediations and symbioses from the very start, I contend. In order to argue for these claims I take a broadly ecological view and focus not on discrete individuals but on what I call the anthropotechnical interface (the phenomenal and sub-phenomenal realm of mediation between human and technical agencies, where each impinges upon and defines the other in a broad space or ecology of material interaction). This view, which I develop at length in my book, allows us to see media not only as empirical objects, but as infra-empirical constraints and enablers of agency such that media may be described, following Mark Hansen, as the “environment for life” itself. Accordingly, media-technical innovation translates into ecological change, transforming the parameters of life in a way that outstrips our ability to think about or capture such change cognitively – for at stake in such change is the very infrastructural basis of cognition and subjective being. So postnaturalism, as a philosophy of media and mediation, tries to think about the conditions of anthropotechnical evolution, conceived as the process that links transformations in the realm of concrete, apparatic media (such as film and TV) with more global transformations at a quasi-transcendental level. Operating on both empirical and infra-empirical levels, media might be seen, on this view, as something like articulators of the phenomenal-noumenal interface itself.

So the more I unpack this thing, the weirder it gets, right? Well, let me approach it from a different angle. Here’s where the first part of my title comes into play: “Philosophy of Science De-Naturalized.” Now, I mentioned before that postnaturalism does not postulate that we are living “after” nature; what I want to emphasize now is that it also remains largely continuous with naturalism, conceived broadly as the idea that the cosmos is governed by material principles which are the object, in turn, of natural science. And, more to the point, the first step in the derivation of a properly postnatural theory, which never breaks with the idea of a materially evolving nature, is to work through a naturalized epistemology, in the sense famously articulated by Willard V. O. Quine, but to locate within it the problematic role of technological mediation. By proceeding in this manner, I want to avoid the impression that a postnatural theory is based on a merely discursive “deconstruction” of nature as a concept. Against the general thrust of broadly postmodernist philosophies, which might show that our ideas of nature and its opposites are incoherent, mine is meant to be a thoroughly materialist account of mediation as a transformative force. So the “Philosophy of Science De-Naturalized,” as I put it here, marks a particular trajectory that takes off from what Ronald Giere has called “Philosophy of Science Naturalized” and works its way towards a properly postnatural philosophy of media.

[Slide 19]

Giere’s naturalized philosophy of science is of interest to me because it aims to coordinate evolutionary naturalism (in the sense of Darwin) with revolutionary science (in the sense of Thomas Kuhn). In other words, it aims to reconcile the materialism of naturalized epistemology with the possibility of radical transformation, which Kuhn sees taking place with scientific paradigm shifts, and which I want to attribute to media-technical changes. Taking empirical science as its model, and taking it seriously as an engagement with a mind-independent reality, an “evolutionary epistemology” posits a strong, causal link between the material world and our beliefs about it, seeing knowledge as the product of our biological evolution. Knowledge (and, at the limit, science) is accordingly both instrumental or praxis-oriented and firmly anchored in “the real world.” As a means of survival, it is inherently instrumental, but in order for this instrumentality to be effective – and/or as the simplest explanation of such effectivity – the majority of our beliefs must actually correspond to the reality of which they form part. But, according to Kuhn’s view of paradigm shifts, “after a revolution scientists work in a different world” (Structure of Scientific Revolutions 135). This implies a strong incommensurability thesis that, according to critics like Donald Davidson, falls into the trap of idealism, along with its attendant consequences; i.e. if paradigms structure our experience, revolution implies radical relativism or else skepticism. So how can revolutionary transformation be squared with the evolutionary perspective?

[Slide 20]

Convinced that it contains important cues for a theory of media qua anthropotechnical interfacing, I would like to look at Giere’s answer in some detail. Asserting that “[h]uman perceptual and other cognitive capacities have evolved along with human bodies” (384), Giere’s is a starkly biology-based naturalism. Evolutionary theory posits mind-independent matter as the source of a matter-dependent mind, and unless epistemologists follow suit, according to Giere, they remain open to global arguments from theory underdetermination and phenomenal equivalence: since the world would appear the same to us whether it were really made of matter or of mind-stuff, how do we know that idealism is not correct? And because idealism contradicts the materialist bias of physical science, how do we know that scientific knowledge is sound? According to Giere, we can confidently ignore these questions once the philosophy of science has itself opted for a scientific worldview. Of course, the skeptic will counter that naturalism’s methodologically self-reflexive relation to empirical science renders its argumentation circular at root, but Giere turns the tables on skeptical challenges, arguing that they are “equally question-begging” (385). Given the compelling explanatory power and track record of modern science and evolutionary biology in particular, it is merely a feigned doubt that would question the thesis that “our capacities for operating in the world are highly adapted to that world” (385); knowledge of the world is necessary for the survival of complex biological organisms such as we are. But because this is essentially a transcendental argument, it does not break the circle in which the skeptic sees the naturalist moving; instead, it asserts that circularity is an inescapable consequence of our place in nature. In large part, this is because “we possess built-in mechanisms for quite direct interaction with aspects of our environment. The operations of these mechanisms largely bypass our conscious experience and linguistic or conceptual abilities” (385).

[Slide 24]

So much for the evolutionary perspective, but where does revolutionary science fit into the picture? To answer this question, Giere turns to the case of the geophysical revolution of the 1960s, when a long-established model of the earth as a once much warmer body that had cooled and contracted, leaving the oceans and continents more or less fixed in their present positions, was rapidly overturned by the continental drift model that set the stage for the now prevalent plate tectonics theory (391-94). The matching coastlines of Africa and South America had long suggested the possibility of movement, and drift models had been developed in the early twentieth century but were left, by and large, unpursued; it was not just academic protectionism that preserved the old model but a lack of hard evidence capable of challenging accepted wisdom – accepted because it “worked” well enough to explain a large range of phenomena.

[Slide 25]

The discovery in the 1950s of north-south ocean ridges suggested, however, a plausible mechanism for continental drift: if the ridges were formed, as Harry Hess suggested, by volcanism, then “sea floor spreading” should be the result, and the continents would be gradually pushed apart by its action. The discovery, also in the 1950s, of large-scale magnetic field reversals provided the model with empirically testable consequences (the Vine-Matthews-Morley hypothesis): if the field reversals were indeed global and if the sea floor was spreading, then irregularly patterned stripes running parallel to the ridges should match the patterns observed in geological formations on land. Until this prediction was corroborated, there was still little impetus to overthrow the dominant theory, but magnetic soundings of the Pacific-Antarctic Ridge in 1966, along with sea-floor core samples, revealed the expected polarity patterns and led, within the space of a year, to a near-complete acceptance of drift hypotheses among earth scientists.

According to Giere, naturalism can avoid idealistic talk of researchers living “in different worlds” and explain the sudden revolution in geology by appealing only to a few very plausible assumptions about human psychology and social interaction – assumptions that are fully compatible with physicalism. These concern what he calls the “payoff matrix” for accepting one of the competing theories (393). Abandoning a pet theory is seldom satisfying, and the rejection of a widely held model is likely to upset many researchers, revealing their previous work as no longer relevant. Resistance to change is all too easily explained. However, humans also take satisfaction in being right, and scientists hope to be objectively right about those aspects of the world they investigate. This interest, as Giere points out, does not have to be considered “an intrinsic positive value” among scientists, for it is tempered by psychosocial considerations (393) such as the fear of being ostracized and the promise of rewards. The geo-theoretical options became clear – or emerged as vital rather than merely logical alternatives – with the articulation of a drift model with clearly testable consequences. We may surmise that researchers began weighing their options at this time, though it is not necessary to consider this a transparently conscious act of deliberation. What was essential was the wide agreement among researchers that the predictions regarding magnetic profiles, if verified, would be extremely difficult to square with a static earth model and compellingly simple to explain if drift really occurred. Since researchers shared this basic assumption, the choice was easy when the relevant data came in (394).

[Slide 26]

But the really interesting thing about this case, in my opinion, is the central role that technology played in structuring theoretical options and forcing a decision, which Giere notes but only in passing. The developing model first became truly relevant through the availability of technologies capable of confirming its predictions: technologies for conducting magnetic soundings of the ocean floor and for retrieving core samples from the deep. Indeed, the Vine-Matthews-Morley hypothesis depended on technology not only for its verification, but for its initial formulation as well: ocean ridges could not have been discovered without instruments capable of sounding the ocean floor, and the discovery of magnetic field reversals depended on a similarly advanced technological infrastructure. A reliance on mediating technologies is central to the practice of science, and Giere suggests that an appreciation of this fact helps distinguish naturalism from “methodological foundationism” or the notion that justified beliefs must recur ultimately to a firm basis in immediate experience (394). His account of the geological paradigm shift therefore “assumes agreement that the technology for measuring magnetic profiles is reliable. The Duhem-Quine problem [i.e. the problem that it is logically possible to salvage empirically disconfirmed theories by ad hoc augmentation] is set aside by the fact that one can build, or often purchase commercially, the relevant measuring technology. The background knowledge (or auxiliary hypotheses) are embodied in proven technology” (394). In other words, the actual practice of science (or technoscience) does not require ultimate justificational grounding, and the agreement on technological reliability ensures, according to Giere and contra Kuhn, that disagreeing parties still operate in the same world.

But while I agree that Giere’s description of the way technology is implemented by scientists is a plausible account of actual practice and its underlying assumptions, I question his extrapolation from the practical to the theoretical plane. With regard to technology, I contend, the circle problem resurfaces with a vengeance. In my opinion, Giere is right to reject the circle argument as the skeptic poses it, that is, as invalidating naturalism’s methodologically self-reflexive application of scientific theories to the theory of science. Our evolutionary history, I agree, genuinely militates against the skeptic’s requirement that we be able to provide grounds for all our beliefs; our survival depends upon an embodied knowledge that is presupposed by, and therefore not wholly explicable to, our conscious selves. But as extensions of embodiment, the workings of our technologies are equally opaque to subjective experience, even – or especially – when they seem perfectly transparent channels of contact with the world. Indeed, Giere seems to recognize this when he says that “background knowledge (or auxiliary hypotheses) are embodied in proven technology” (394, emphasis added). In other words, scientists invest technology with a range of assumptions concerning “reliability” or, more generally, about the relations of a technological infrastructure to the natural world; their agreement on these assumptions is the enabling condition for technology to yield clear-cut decision-making consequences. Appearing neutral to all parties involved, the technology is in fact loaded, subordinated to human aims as a tool. Some such subordinating process seems, from a naturalistic perspective, unavoidable for embodied humans. However, agreement on technological utility – on both whether and how a technology is useful – is not guaranteed in every case. Moreover, it is not just a set of cognitive, theoretical assumptions (“auxiliary hypotheses”) with which scientists entrust technologies, but also aspects of their pre-theoretically embodied, sensorimotor competencies. Especially at this level, mediating technologies are open to what Don Ihde calls an experiential “multistability” – capable, that is, of instantiating to differently situated subjectivities radically divergent ways of relating to the world. But it is precisely the consensual stability of technologies that is the key to Giere’s contextualist rebuttal of “foundationism.”

[Slide 30]

Downplaying multistability is the condition for a general avoidance of the circle argument, for a pragmatic avoidance of idealism and/or skepticism. This, I believe, is most certainly the way things work in actual practice; (psycho)social-institutional pressures work to ensure consensus on technological utility. But does naturalism, self-reflexively endorsing science as the basis of its own theorization, then necessarily reproduce these pressures? Feminists in particular may protest on these grounds that the “nature” in naturalism in fact encodes the white male perspective historically privileged by science because embodied by the majority of practicing scientists. What I am suggesting is that the tacit, largely unquestioned processes by which technological multistability is tamed in practice form a locus for the inscription of social norms directly into the physical world; for in making technologies the material bearers of consensual values (whether political, epistemic, psychological, or even the animalistically basic preferability of pleasure over pain) scientific practice encourages certain modes of embodied relations to the world – not just psychic but material relations themselves embodied in technologies. It goes without saying that this can only occur at the expense of other modes of being-embodied.

More generally stated, the real problem with naturalism’s self-reflexivity is not that it fails to take skeptical challenges seriously or that it provides a false picture of actual scientific practice, but that in extrapolating from practice it locks certain assumptions about technological reliability into theory, embracing them as its own. While it is contextually – indeed physically – necessary that assumptions be made, and that they be embodied or exteriorized in technologies, the particular assumptions are contingent and non-neutral. This may be seen as a political problem, which it is, but it is also more than that. It is an ontological problem of the instability of nature itself – not just of nature as a construct but of the material co-constitution of real, flesh-and-blood organisms and their environments. Once we enter the naturalist circle – and I believe we have good reason to do so – we accept that evolution dislodges the primacy of place traditionally accorded human beings. At the same time, we accept that the technologies with which science has demonstrated the non-essentiality of human/animal boundaries are reliable, that they show us what reality is really, objectively like. This step depends, however, on a bracketing of technological multistability. If we question this bracketing, as I do, we seem to lose our footing in material objectivity. Nevertheless convinced that it would be wrong to concede defeat to the skeptic, we point out that adaptive knowledge’s circularity or contextualist holism is a necessary requirement of human survival, that it follows directly from embodiment and the fact that the underlying biological mechanisms “largely bypass our conscious experience and linguistic or conceptual abilities” (Giere 385). But if we admit that technological multistability really obtains as a fact of our phenomenal relations to the world, this holism seems to lead us back precisely to Kuhn’s idealist suggestion that researchers (or humans generally) may occupy incommensurably “different worlds.” If we don’t want to abandon materialism, then we have to find an interpretation of this idea that is compatible with physicalism.

Indeed, it is the great merit of naturalism that it provides us with the means for doing so; however, it is the great failure of the theory that it neglects these resources. The failure, which consists in reproducing science’s subordination of technology to thought – in fact compounding the reduction, as contextually practiced, by subordinating it to an overarching (i.e. supra-contextual) theory of science – is truly necessary for naturalism, for to rectify its oversight of multistability is to admit the breakdown of a continuous nature itself. To consistently acknowledge the indeterminacy of human-technology-world relations and simultaneously maintain materialism requires, to begin with, that we extend Giere’s insight about biological mechanisms to specifically technological mechanisms of embodied relation to the world: they too “bypass our conscious experience and linguistic or conceptual abilities.” If we take the implications seriously, this means that technologies resist full conceptualization and are therefore potentially non-compliant with human (or scientific) aims; reliance on technology is not categorically different in kind from reliance on our bodies: both ground our practice and knowledge in the material world, but neither is fully recuperable to thought. Extending naturalism in this way means recognizing that not only human/animal but also human/technology distinctions are porous and non-absolute. But whereas naturalism tacitly assumes that the investment of technology with cognitive aims is only “natural” and therefore beyond question, the multistability of non-cognitive investments of corporeal capacities implies that there is more to the idea of “different worlds” than naturalism is willing or able to admit: on a materialistic reading, it is nature itself, and not just human thought or science, that is historically and contextually multiple, non-coherently splintered, and subject to revolutionary change. Serious consideration of technology leads us, that is, to embrace a denatured naturalism, a techno-evolutionary epistemology, and a material rather than social constructivism. This, then, is the basis for a postnatural philosophy of media.

[Slide 33]

Philosophy of Science De-Naturalized: Notes towards a Postnatural Philosophy of Media

I am very honored to have been invited to deliver the keynote address at the Texas State University Philosophy Department’s annual philosophy symposium on April 4, 2014. Since I studied as an undergraduate at Texas State (which back then was known as Southwest Texas State University, or SWT for short), this will be something of a homecoming for me, and I’m very excited about it!

In fact, one of the first talks I ever delivered was at the 1997 philosophy symposium — the very first year it was held. My talk back then, titled “Skepticism and the Cultural Critical Project,” sought to bridge the divide between, on the one hand, the analytical epistemology and philosophy of science that I was studying under the supervision of Prof. Peter Hutcheson and, on the other hand, the Continental-inspired literary and cultural theory to which I was being exposed by a young assistant professor of English, Mark B. N. Hansen (before he went off to Princeton, then University of Chicago, and now Duke University).

In a way, my effort back then to mediate between these two very different traditions has proved emblematic for my further academic career. For example, my dissertation looked at Frankenstein films as an index for ongoing changes in the human-technological relations that, I contend, continually shape and re-fashion us at a deeply material, pre-subjective, and extra-discursive level of our being. The cultural realm of monster movies was therefore linked to the metaphysical realm of what I call the anthropotechnical interface, and my argument was mounted by way of a lengthy “techno-scientific interlude” in which I revisited many of the topics in Anglo-American epistemology and philosophy of science that I had first thought about as an undergrad in Texas.

Thus, without my knowing it (and it’s really only now becoming clear to me), my talk back in 1997 marked out a trajectory that it seems I’ve been following ever since. And now it feels like a lot of things are coming full circle: A book based upon my dissertation, for which Mark Hansen served as reader, is set to appear later this year (but more on that and a proper announcement later…). In addition, as I announced here recently, I will be moving to North Carolina this summer to commence a 2-year postdoctoral fellowship at Duke, where I will be working closely with Hansen. Now, before that project gets underway, I have the honor to return to the philosophy symposium in San Marcos, Texas and, in a sense, to revisit the place where it all started.

I thought it would be appropriate, therefore, if I delivered a talk that continued along the trajectory I embarked upon there 17 years ago (wow, that makes me feel old…). My talk, titled “Philosophy of Science De-Naturalized: Notes towards a Postnatural Philosophy of Media,” takes a cue from Ronald N. Giere’s “Philosophy of Science Naturalized” — which sought to reconcile Thomas Kuhn’s idea of revolutionary paradigm shifts in the history of science with W. V. O. Quine’s notion of “Epistemology Naturalized,” i.e. a theory of knowledge based more in the material practice and findings of natural science (especially evolutionary biology) than in the “rational reconstruction” of ideal grounds for justified true belief. As I will show, my own “postnaturalism” — which is ultimately a philosophy of media rather than of knowledge or science — represents not so much a break with such naturalism as a particular manner of thinking through issues of technological mediation that emerge in that context, issues that I then subject to phenomenological scrutiny and ultimately post-phenomenological transformations in order to arrive at a theory of anthropotechnical interfacing and change.

Techno-Phenomenology, Medium as Interface, and the Metaphysics of Change

[Image: Reno 1902 escalator patent]

On June 17, 2013, I will be presenting a paper at the conference “Conditions of Mediation: Phenomenological Approaches to Media, Technology and Communication” at Birkbeck, University of London. There’s a diverse and interesting group of keynote speakers, including David Berry, Nick Couldry, Graham Harman, Shaun Moores, Lisa Parks, and Paddy Scannell, and a list of other presenters — among whom I am proud to be counted — has also gone online now.

Below is the abstract for my modest contribution:

Techno-Phenomenology, Medium as Interface, and the Metaphysics of Change

Shane Denson, Leibniz Universität Hannover

Walter Benjamin famously argued that the emergence of modern media of technical reproducibility (photography, film) corresponded to sweeping changes in the organization of what he calls the “medium” of sense perception. To a skeptic like film scholar David Bordwell, Benjamin’s “modernity thesis” (along with Tom Gunning’s related arguments about the “culture of shock”) is pure hyperbole, for cognitive structures are subject to the slow processes of biological evolution while impervious to rapid technological change. The debate has tended to reach impasses over questions of the causal agencies and effects of media change—e.g. whether they concern the broad cultural domain of discourse and signification or the “hard-wiring” of the brain itself. In this presentation, I argue that a “techno-phenomenological” approach—which (following cues from Heidegger, Merleau-Ponty, and Don Ihde, among others) focuses on the embodied interfaces in which human intentionalities are variously mediated by technologies—enables us to see media change as involving experiential transformations that are robustly material, and hence not restricted to cultural or psycho-semiotic domains, yet still compatible with the long durations of biological evolution. An “anthropotechnical interface,” based in proprioceptive and visceral sensibilities, will be shown to constitute the primary site of media change.

Mark Hansen in Hannover

Here are a few images from Mark Hansen’s talks on July 2 and 3.

The first two were taken Monday, at a very inspiring talk called “Feed-Forward, or the ‘Future’ of 21st Century Media.”

Above, a picture taken Tuesday, at the talk given in the context of my media theory seminar: “The End of Pharmacology?: Historicizing 21st Century Media.”

And a picture taken over the weekend, during an exciting game of “Vikinger-Schach” (Viking chess)!

Finally, here is the text of my introduction to the Monday night talk:

First of all, I’d like to say that I am very honored, and I am very happy, to introduce Mark Hansen to you today. Mark is Professor in the Literature Program at Duke University, where he is also affiliated with a range of departments, programs, and interdisciplinary centers, including the Department of Art, Art History, and Visual Studies, the Program in the Arts of the Moving Image, the Visual Studies Initiative, and the Program in Information Science + Information Studies. Before going to Duke in 2008, Mark served as Professor of English, Visual Arts, and Cinema and Media Studies at the University of Chicago, prior to which he held positions in the English Department at Princeton. Over the past decade or so, he has established himself as one of the leading media theorists in America and the world, a reputation built on a steady stream of equally demanding and rewarding publications, including three monographs to date. His book Embodying Technesis: Technology Beyond Writing, which was published in 2000, set the stage for much of his subsequent work by arguing for a robustly material conception of technologies and their relations to and impacts on experiencing bodies. Identifying the ways that many of the master thinkers of twentieth-century high theory, including Freud, Heidegger, Derrida, Deleuze and Guattari, had struggled with but ultimately perpetuated a reduction of the technical to the narrow frames of discourse and subjective thought, thus obscuring technology’s more diffuse impacts and its role as infrastructure for thought and experience, the book cleared the ground for a more positive engagement with changes in this infrastructure, especially as occasioned by the advent of computational media. Thus, New Philosophy for New Media, published in 2004, undertook a careful analysis of the digital image, which was shown with the help of resources updated from Henri Bergson to be far less fixed and visually concentrated than one might assume; instead, digital images turned out to be highly processual and dispersed across a network of materially embodied agents — processors, flickering pixels, and above all human bodies that filter and select the relevant forms, providing the very frame for computationally generated images. Mark’s next book, Bodies in Code: Interfaces with New Media, from 2006, continued this focus on our affective engagement with the world, and on the modulation of that engagement through media that articulate an ongoing coevolution of humans and technics. Mark has also co-edited several important volumes, including The Cambridge Companion to Merleau-Ponty (with Taylor Carman), Emergence and Embodiment: New Essays on Second-Order Systems Theory (with Bruce Clarke), and Critical Terms for Media Studies (with William J. T. Mitchell). He is currently wrapping up a book project entitled Feed-Forward: The “Future” of 21st Century Media, and this, I presume, is the basis of what he’ll be talking about today.

So, conventionally, this is where I would say “and now, without further ado,” but in fact I do want to subject you to just a little bit more “ado.” If the list of professorships, books, and ideas that I’ve been recounting here can be said to constitute an official “text” of Mark Hansen’s career as a world-class media theorist, there’s also a little-known subtext, or perhaps paratext, through which he has been connected with Hannover and exerted here a subtle but definite influence over the years. Most recently, I have had my students reading his thoughts on “New Media” this semester, while our Film & TV Reading Group also met to discuss an important article called, simply, “Media Theory.” These are texts that have been very important to me personally, and they played a key role in challenging me to articulate some of the foundational ideas in my dissertation. As some of you may know, Mark served as the second examiner for that project, and some of the people here today were also present at my thesis defense in December 2010, when Mark joined us, quite fittingly, as a digital image, by way of video-conferencing technology. But the intellectual and personal connections with Hannover run deeper and are older than that. What many people don’t know is that this is Mark’s second — real-life, corporeal — visit to the English Department at the University of Hannover. The first one was exactly 15 years ago, in the summer of 1997. Few people know this, because it was before most of the current faculty, staff, and students had ever set foot in this building. Well, not to brag or anything, but: I was there. In fact, it was my very first trip to Germany, an exchange trip headed by Mark, who in those almost prehistoric days — prior to Duke, Chicago, and Princeton — was employed at a place called Southwest Texas State University (which, incidentally, is a name that has since lost its power of designation, as that university is now called something else). Anyway, it was there, and here (back then), that Mark planted many of the seeds that would come to fruition much later in my own work, and that have quietly informed my teaching practice here for over a decade. I am grateful, then, to the Fulbright Program and to our university’s Gastwissenschaftler-Programm (visiting scholars program) for making it possible to bring Mark back once again after all these years. Above all, though, and this is what I’ve been trying to get at with this excavation of a “Hannover connection,” I wish to express my gratitude to Mark both as a mentor and as a friend. Thank you. And now, I am very proud to present to you Mark Hansen.

Dylan Trigg, Digital Media, and Phenomenology

Over at Figure/Ground Communication, there is a new interview up with Dylan Trigg (whose blog Side Effects you’ll find linked in the sidebar here). The whole interview is well worth your time, but especially interesting (and relevant to the focus of this blog) is the following question and answer:

Is phenomenology still relevant in this age of information and digital interactive media?

Phenomenology is especially relevant in an age of information and digital media. Despite the current post-humanist “turn” in the humanities, we remain for better or worse bodily subjects. This does not mean that we cannot think beyond the body or that the body is unchallenged in phenomenology. Phenomenology does not set a limit on our field of experience, nor is it incompatible with the age of information, much less with speculative thinking about non-bodily entities and worlds. Instead, phenomenology reminds us of what we already know, though perhaps unconsciously: that our philosophical voyages begin with and are shaped by our bodily subjectivity.

It’s important to note here that phenomenology’s treatment of the body is varied and complex. It can refer to the physical materiality of the body, to the lived experience of the body, or to the enigmatic way in which the body is both personal and anonymous simultaneously. In each case, the body provides the basis for how digital media, information, and post-humanity are experienced in the first place. Phenomenology’s heightened relevance, I’d say, is grounded in the sense that these contemporary artefacts of human life tend to take for granted our bodily constitution.

But phenomenology’s relevance goes beyond its privileging of the body. It has become quite fashionable to critique phenomenology as providing a solely human-centric access to the world. This, I think, is wrong. One of the reasons why I’m passionately committed to phenomenology is because it can reveal to us the fundamentally weird and strange facets of the world that we ordinarily take to be clothed in a familiar and human light. Phenomenology’s gesture of returning to things, of attending to things in their brute facticity, is an extremely powerful move. Merleau-Ponty will speak of a “hostile and alien…resolutely silent Other” lurking within the non-human appearance of things. For me, the lure of this non-human Other is a motivational force in my own work. It reminds us that no matter how much we affiliate ourselves with the familiar human world, in the act of returning to the things themselves, those same things stand ready to alienate us.

(The image at the top of this post, by the way — and lest there be any confusion about the matter — is not a picture of Dylan Trigg but of body-augmentor extraordinaire, performance artist Stelarc.)

Twitter, Technics, and Time

One of the most important philosophical reflections on technology in recent years (or ever, for that matter) is Bernard Stiegler‘s three-volume work, Technics and Time. Now, over on Twitter, someone has undertaken the task of adapting this work to a series of tweets – performatively raising questions about contemporary changes in the meting out of time by means of digital technics, perhaps? In any case, the stream (@TechnicsAndTime), signed “Not Bernard Stiegler”, just got underway a couple of days ago, so it’s not too late to jump in. And it’s always nice to read a gem like this in the midst of all the other significant and insignificant tweets scrolling by: “Nonorganic organizations of matter have their own dynamic when compared with that of either physical or biological beings.” In this spirit, enjoy!

CFP: The Nonhuman Turn

This promises to be a great event at the Center for 21st Century Studies at the University of Wisconsin-Milwaukee, with an excellent lineup of speakers:

May 4-5, 2012
The Nonhuman Turn in 21st Century Studies

This conference takes up the “nonhuman turn” that has been emerging in the arts, humanities, and social sciences over the past few decades. Intensifying in the 21st century, this nonhuman turn can be traced to a variety of different intellectual and theoretical developments from the last decades of the 20th century:

actor-network theory, particularly Bruno Latour’s career-long project to articulate technical mediation, nonhuman agency, and the politics of things

affect theory, both in its philosophical and psychological manifestations and as it has been mobilized by queer theory

animal studies, as developed in the work of Donna Haraway, projects for animal rights, and a more general critique of speciesism

the assemblage theory of Gilles Deleuze, Manuel DeLanda, Latour, and others

new brain sciences like neuroscience, cognitive science, and artificial intelligence

new media theory, especially as it has paid close attention to technical networks, material interfaces, and computational analysis

the new materialism in feminism, philosophy, and marxism

varieties of speculative realism like object-oriented philosophy, vitalism, and panpsychism

and systems theory in its social, technical, and ecological manifestations

Such varied analytical and theoretical formations obviously diverge and disagree in many of their aims, objects, and methodologies. But they are all of a piece in taking up aspects of the nonhuman as critical to the future of 21st century studies in the arts, humanities, and social sciences.

Running roughly parallel to this nonhuman turn in the past few decades has been the “posthuman turn” articulated by such important theoretical works as Katherine Hayles’ How We Became Posthuman and Cary Wolfe’s What Is Posthumanism? Thinking beyond the human, as posthumanism is sometimes characterized, clearly provides one compelling model for 21st century studies. But the relation between posthumanism and humanism, like that of postmodernism to modernism, can sometimes seem as much like a repetition of the same as the emergence of something different.

Thus, one of the questions that this conference is meant to take up is the relation between posthumanism and the nonhuman turn, especially the ways in which taking the nonhuman as a matter of critical, artistic, and scholarly concern might differ from, as well as overlap with, the aims of posthumanism. In pursuing answers to such questions, the conference is meant to address the future of 21st century studies by exploring how the nonhuman turn might provide a way forward for the arts, humanities, and social sciences in light of the difficult challenges of the 21st century.

Invited speakers (to date) include:

Jane Bennett (Political Science, Johns Hopkins)

Ian Bogost (Literature, Communication, Culture, Georgia Tech)

Bill Brown (English, Chicago)

Wendy Chun (Media and Modern Culture, Brown)

Mark Hansen (Literature, Duke)

Erin Manning (Philosophy/Dance, Concordia University, Montreal)

Brian Massumi (Philosophy, University of Montreal)

Tim Morton (English, UC-Davis)

In addition to the invited speakers, the conference will hold several breakout sessions for additional participants to present their work. Please refer to this Call for Papers for details and deadlines.