R. Scott Bakker's Blog

March 18, 2014

The Ontology of Ghosts

In the courtyard a shadowy giant elm


Spreads ancient boughs, her ancient arms where dreams,


False dreams, the old tale goes, beneath each leaf


Cling and are numberless.


–Virgil, The Aeneid, Book VI


.


I’m always amazed, looking back, at how fucking clear things had seemed at this or that juncture of my philosophical life—how lucid. The two early conversions, stumbling into nihilism as a teenager, then climbing into Heidegger in my early twenties, seem the most ‘religious’ in retrospect. I think this is why I never failed to piss people off even back then. You have this self-promoting skin you wear when you communicate, this tactical gloss that compels you to impress. This is what non-intellectuals hear when you speak, tactics and self-promotion. This is why it’s so easy to tar intellectualism in the communal eye: insecurity and insincerity are of its essence. All value judgements are transitive in human psychology: Laugh up your sleeve at what I say, and you are laughing at me. I was an insecure, hypercritical know-it-all. You add the interpersonal trespasses of religion—intolerance, intensity, and aggressiveness—and I think it’s safe to assume I came across as an obnoxious prick.


But if I was evangelical, it was that I could feel those transformations. Each position possessed its own, distinct metacognitive attitude toward experience, a form of that I attributed to this, whatever it might be. With my adolescent nihilism, I remember obsessively pondering the way my thoughts bubbled up out of oblivion—and being stupefied. I was some kind of inexplicable kink in the real. I was so convinced I was an illusion that I would ache for being alone, grip furniture for fear of flying.


But with Heidegger, it was like stepping into a more resonant clime, into a world rebarred with meaning, with projects and cares and rules and hopes. A world of towardness, where what you are now is a manifold of happenings, a gazing into an illuminated screen, a sitting in a world bound to you via your projects, a grasping of these very words. The intentional things, the phenomena of lived life, these were the foundation, I believed, the sine qua non of empirical inquiry. Before we can ask the question of freedom and meaning we need to ask the question of what comes first.


What could be more real than lived life?


It took a long time for me to realize just how esoteric, just how parochial, my definition of ‘lived life’ was. No matter how high you scratch your charcoal cloud, the cave wall always has the final say. It’s the doctors that keep you alive; philosophers just help you fall to sleep. Everywhere I looked across Continental philosophy, I saw all these crazy-ass interpretations, variants spanning variants, revivals and exhaustions, all trying to get a handle on the intentional ontology of a ‘lived life’ that took years of specialized training to appreciate. This is how I began asking the question of the cognitive difference. And this is how I found myself back at the beginning, my inaugural, adolescent departure from the naive.


The difference being, I am no longer stupefied.


I have a new religion, one that straightens out all the kinks, and so dispels rather than saves the soul. I am no exception. I have been chosen by nobody for nothing. I am continuous with the x-dimensional totality that we call nature—continuous in every respect. I watch images from Hubble, the most distant galactic swirls, and I tell myself, I am this, and I feel grand and empty. I am the environment that chokes, the climate that reels. I am the body that the doctor attends…


And you are too.


Thus the most trivial prophecy, the prediction that you will waver, crumble, that the fluorescent light will wobble to the sound of loved ones weeping… breathing. That someone, maybe, will clutch your hand.


Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped! For someone who has spent their life reading philosophy this stands tall among the greater insults: the knowledge that we have been duped all along, that all those profundities, that resonant world I found such joy and rancour pondering, were little more than the artifact of machines taking their shadows for reflections, the cave wall for a looking glass.


I am the residue of survival—living life. I am an astronomically complicated system, a multifarious component of superordinate systems that cannot cognize itself as such for being such. I am a serial gloss, a transmission from nowhere into nowhere, a pattern plucked from subpersonal pandemonium and broadcast to the neural horde. I am a message that I cannot conceive. As. Are. You.


I can show you pictures of dead people to prove it. Lives lived out.


The first-person is a selective precis of this totality, one that poses as the totality. And this is the trick, the way to unravel the kink and see how it is that Heidegger could confuse his semantic vision with seeing. The oblivion behind my thoughts is the oblivion of neglect. Because oblivion has no time, I have no time, and so watch amazed as my shining hands turn to leather. I breathe deep and think, Now. Because oblivion constrains nothing, I follow rules of my own will, pursue goals of my own desire. I stretch forth my hand and remake what lies before me. Because oblivion distinguishes nothing, I am one. I raise my voice and declare, Me. Because oblivion reveals nothing, I stand opposite the world, always only aimed, never connected. I squint and I squint and I ask, How do I know?


I am bottomless because my foundation was never mine to see. I am a perspective, an agent, a person, just another dude-with-a-bad-attitude—I am all these things because of the way I am not any of these things. I am not what I am because of what I am—again, the same as you.


A ghost can be defined as a fragment cognized as a whole. In some cultures ghosts have no backs, no faces, no feet. In almost all cultures they have no substance, no consistency, temporal or otherwise. The dimensions of lived life have been stripped from them; they are shades, animate shadows. As Virgil says of Aeneas attempting to embrace his father, Anchises, in the Underworld:


 Then thrice around his neck his arms he threw;


And thrice the flitting shadow slipp’d away,


Like winds, or empty dreams that fly the day.


Ghosts are the incorporeal remainder, the something shorn of substance and consistency. This is the lived life of Heidegger, an empty dream that flew the day. Insofar as Dasein lacks meat, Dasein dwells with the dead, another shade in the underworld, another passing fancy. We are not ghosts. If lived life lies in the meat, then the truth of lived life lies in the meat. The truth of what we are runs orthogonal to the being that we all swear that we must be. Consciousness is an anosognosiac broker, and we are the serial sum of deals struck between parties utterly unknown. Who are the orthogonal parties? What are the deals? These are the questions that aim us at our most essential selves, at what we are in fact. These are the answers being pursued by industry.


And yet we insist on the reality of ghosts, so profound is the glamour spun by neglect. There are no orthogonal parties, we cry, and therefore no orthogonal deals. There is no orthogonal regime. Oblivion hides only oblivion. What bubbles up from oblivion, begins with me and ends with me. Thus the enduring attempt to make sense of things sideways, to rummage through the ruin of heaven and erect parallel regimes, ones too impersonal to reek of superstition. We use ghosts of reference to bind our inklings to the world, ghosts of inference to bind our inklings to one another, ghosts of quality to give ethereal substance to experience. Ghosts and more ghosts, all to save the mad, inescapable intuition that our intuitions must be real somehow. We raise them as architecture, and demur whenever anyone poses the mundane question of building material.


‘Thought’… No word short of ‘God’ has shut down more thinking.


Content is a wraith. Freedom is a vapour. Experience is a dream. The analogy is no coincidence.


The ontology of meaning is the ontology of ghosts.


 


 


 



March 14, 2014

Incomplete Cognition: An Eliminativist Reading of Terrence Deacon’s Incomplete Nature

Incomplete Nature: How Mind Emerged from Matter


Goal seeking, willing, rule-following, knowing, desiring—these are just some of the things we do that we cannot make sense of in causal terms. We cite intentional phenomena all the time, attributing to them the kind of causal efficacy we attribute to the more mundane elements of nature. The problem, as Terrence Deacon frames it, is that whenever we attempt to explain these explainers, we find nothing, only absence and perplexity.


“The inability to integrate these many species of absence-based causality into our scientific methodologies has not just seriously handicapped us, it has effectively left a vast fraction of the world orphaned from theories that are presumed to apply to everything. The very care that has been necessary to systematically exclude these sorts of explanations from undermining our causal analyses of physical, chemical, and biological phenomena has also stymied our efforts to penetrate beyond the descriptive surface of the phenomena of life and mind. Indeed, what might be described as the two most challenging scientific mysteries of the age—both are held hostage by this presumed incompatibility.” Incomplete Nature, 12


The question, of course, is whether this incompatibility is the product of our cognitive constitution or the product of some as yet undiscovered twist in nature. Deacon argues the latter. Incomplete Nature is a magisterial attempt to complete nature, to literally rewrite physics in a way that seems to make room for goal seeking, willing, rule-following, knowing, desiring, and so on—in other words, to provide a naturalistic way to make sense of absences that cause. He wants to show how all these things are real.


My own project argues the former, that the notion of ‘absences that cause’ is actually an artifact of neglect. ‘We’ are an astronomically complicated subsystem embedded in the astronomically complicated supersystem that we call ‘nature,’ in such a way that we cannot intuitively cognize ourselves as natural.


The Blind Brain Theory claims to provide the world’s first genuine naturalization of intentionality—a parsimonious, comprehensive way to explain centuries of confusion away. What Intentionalists like Deacon think they are describing are actually twists on a family of metacognitive illusions. Crudely put, since no cognitive capacity could pluck ‘accuracy’ of any kind from the supercomplicated muck of the brain, our metacognitive system confabulates. It’s not that some (yet to be empirically determined) systematicity isn’t there: it’s that the functions discharged via our conscious access to that systematicity are compressed, formatted, and truncated. Metacognition neglects these confounds, and we begin making theoretical inferences assuming the sufficiency of compressed, formatted, and truncated information. Among other things, BBT actually predicts a discursive field clustered about families of metacognitive intuitions, but otherwise chronically incapable of resolving among their claims. When an Intentionalist gives you an account of the ‘game of giving and asking for reasons,’ say, you need only ask them why anyone should subscribe to an ontologization (whether virtual, quasi-transcendental, transcendental, or otherwise) on the basis of almost certainly unreliable metacognitive hunches.


The key conceptual distinction in BBT is that between what I’ve been calling ‘lateral sensitivity’ and ‘medial neglect.’ Lateral sensitivity refers to the brain’s capacity to be ‘imprinted’ by other systems, to be ‘pushed’ in ways that allow it to push back. Since behavioural interventions, or ‘pushing-back,’ require some kind of systematic relation to the system or systems to be pushed, lateral sensitivity requires being pushed by the right things in the right way. Thus the Inverse Problem and the Bayesian nature of the human brain. The Inverse Problem pertains to the difficulty of inferring the structure/dynamics of some distal system (an avalanche or a wolf, say) via the structure/dynamics of some proximal system (ambient sound or light, say) that reliably co-varies with that distal system. The difficulty is typically described in terms of ambiguity: since any number of distal systems could cause the structure/dynamics of the proximal system, the brain needs some way of allowing the actual distal system to push through the proximal system, if it is to have any hope of pushing back. Unless it becomes a reliable component of its environment, it cannot reliably make components of its environments. This is an important image to keep in mind: that of the larger brain-environment system, the way the brain is adapted to be pushed, or transformed into a component of larger environmental mechanisms, so as to push back, to ‘componentialize’ environmental mechanisms. Quite simply, we have evolved to be tyrannized by our environment in a manner that enables us to tyrannize our environment.
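The Bayesian point admits a toy illustration. The sketch below is mine, not anything drawn from the literature, and every number in it is invented; it simply shows how a single ambiguous proximal signal—a rustle, say—gets resolved into a posterior over possible distal causes:

```python
# Toy illustration of the Inverse Problem as Bayesian inference.
# All priors and likelihoods are invented for the sake of the example.

prior = {"wolf": 0.01, "wind": 0.69, "deer": 0.30}    # distal causes
likelihood = {"wolf": 0.9, "wind": 0.3, "deer": 0.6}  # P(rustle | cause)

# The rustle is ambiguous: any of these causes could have produced it.
# The posterior is the sense in which the actual distal system
# 'pushes through' the proximal one.
evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}

print(posterior)  # roughly {'wolf': 0.02, 'wind': 0.52, 'deer': 0.45}
```

The ambiguity is never eliminated, only managed—and managing it reliably, at scale, is what demands the mechanical complexity discussed below.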


Lateral sensitivity refers to this ‘tyranny enabling tyranny,’ the brain’s ability to systematically covary with its environment in behaviourally advantageous ways. A system that solves the Inverse Problem possesses a high degree of reliable covariational complexity. As it turns out, the mechanical complexity required to do this is nothing short of mind-boggling. And as we shall see, this fact possesses some rather enormous consequences. Up to this point, I’ve really only provided an alternate description of the sensorimotor loop; the theoretical dividends begin piling up once we consider lateral sensitivity in concert with medial neglect.


The machinery of lateral sensitivity is so complicated that it handily transcends its own ‘sensitivity threshold.’ This means the brain possesses a profound insensitivity to itself. This might sound daffy, given that the brain simply is a supercomplicated network of mutual sensitivities, but this is actually where the nub of cognition as a distinct biological process is laid bare. Unlike the dedicated sensitivity that underwrites mechanism generally, the sensitivity at issue here involves what might be called the systematic covariation for behaviour. Any process that systematically covaries for behaviour is a properly cognitive process. So the above could be amended to, ‘the brain possesses a profound cognitive insensitivity to itself.’ Medial neglect is this profound cognitive insensitivity.


The advantage of cognition is behaviour, the push-back. The efficacy of this behavioural push-back depends on the sensory push, which is to say, lateral sensitivity. Innumerable behavioural problems, it turns out, require that we be pushed by our pushing back: that our future behaviour (push-back) be informed (pushed) by our ongoing behaviour (pushing-back). Behavioural efficacy is a function of behavioural versatility is a function of lateral sensitivity, which is to say, the capacity to systematically covary with the environment. Medial neglect, therefore, constitutes a critical limit on behavioural efficacy: those ‘problem ecologies’ requiring sensitivity to the neurobiological apparatus of cognition to be solved effectively lie outside the capacity of the system to tackle. We are, quite literally, the ‘elephant in the room,’ a supercomplicated mechanism sensitive to most everything relevant to problem-solving in its environment except itself.


Mechanical allo-sensitivity entails mechanical auto-insensitivity, or auto-neglect. A crucial consequence of this is that efficacious systematic covariation requires unidirectional interaction, or that sensing be ‘passive.’ The degree to which the mechanical activity of tracking actually impacts the system to be tracked is the degree to which that system cannot be reliably tracked. Anticipation via systematic covariation is impossible if the mechanics of the anticipatory system impinge on the mechanics of the system to be anticipated. The insensitivity of the anticipatory system to its own activity, or medial neglect, perforce means insensitivity to systems directly mechanically entangled in that activity. Only ‘passive entanglement’ will do. This explains why so-called ‘observer effects’ confound our ability to predict the behaviour of other systems.


So the stage is set. The brain quite simply cannot cognize itself (or other brains) in the same high-dimensional way it cognizes its environments. (It would be hard to imagine any evolved metacognitive capacity that could achieve such a thing, in fact). It is simply too complex and too entangled. As a result, low-dimensional, special purpose heuristics—fast and frugal kluges—are its only recourse.


The big question I keep asking is, How could it be any other way? Given the problems of complexity and complicity, given the radical nature of the cognitive bottleneck—just how little information is available for conscious, serial processing—how could any evolved metacognitive capacity whatsoever come close to apprehending the functional truth of anything ‘inner’? If you are an Intentionalist, say, you need to explain how the phenomena you’re convinced you intuit are free of perspectival illusions, or conversely, how your metacognitive faculties have overcome the problems posed by complexity and complicity.


On BBT, the brain possesses at least two profoundly different covariational regimes, one integrated, problem-general, and high-dimensional, mediating our engagement in the natural world, the other fractious, problem-specific and low-dimensional, mediating our engagements with ourselves and others (who are also complex and complicit), and thereby our engagement in the natural world. The twist lies in medial neglect, the fact that the latter fractious, problem-specific, and low-dimensional covariational regime is utterly insensitive to its fractious, problem-specific, and low-dimensional nature. Human metacognition is almost entirely blind to the structure of human cognition. This is why we require cognitive science: reflection on our cognitive capacities tells us little or nothing about those capacities, reflection included. Since we have no way of intuiting the insufficiency of these intuitions, we assume they’re sufficient.


We are now in a position to clearly delineate Deacon’s ‘fraction,’ what makes it vast, and why it has been perennially orphaned. Historically, natural science has been concerned with the ‘lateral problem-ecologies,’ with explicating the structure and dynamics of relatively simple systems possessing functional independence. Any problem ecology requiring the mechanistic solution of brains lay outside its purview. Only recently has it developed the capacity to tackle ‘medial problem-ecologies,’ the structure and dynamics of astronomically complex systems possessing no real functional independence. For the first time humanity finds itself confronted with integrated, high-dimensional explications of what it is. The ruckus, of course, is all about how to square these explications with our medial traditions and intuitions. All the so-called ‘hard problems’ turn on our apparent inability to naturalistically find, let alone explain, the phenomena corresponding to our intuitive, metacognitive understanding of the medial.


Why do our integrated, high-dimensional explications of the medial congenitally ‘leave out’ the phenomena belonging to the medial-as-metacognized? Because metacognitive phenomena like goal seeking, willing, rule-following, knowing, desiring only ‘exist,’ insofar as they exist at all, in specialized problem-solving contexts. ‘Goal seeking’ is something we all do all the time. A friend has an untoward reaction to a comment of ours, so we ask ourselves, in good conscience, ‘What was I after?’ and the process of trying to determine our goal given whatever information we happen to have begins. Despite complexity and complicity, this problem is entirely soluble because we have evolved the heuristic machinery required: we can come to realize that our overture was actually meant to belittle. Likewise, the philosopher asks, ‘What is goal-seeking?’ and the process of trying to determine the nature of goal-seeking given whatever information he happens to have begins. But the problem proves insoluble, not surprisingly, given that the philosopher almost certainly lacks the requisite heuristic machinery. The capacity to solve for goal-seeking qua goal-seeking is just not something our ancestors evolved.


Deacon’s entire problematic turns on the equivocation of the first-order and second-order uses of intentional terms, on the presumption that the ‘goal-seeking’ we metacognize simply has to be the ‘goal-seeking’ referenced in first-order contexts—on the presumption, in other words, of metacognitive adequacy, which is to say something we now know to be false as a matter of empirical fact. For all its grand sweep, for all its lucid recapitulation and provocative conjecture, Incomplete Nature is itself shockingly incomplete. Nowhere does he consider the possibility that the only ‘goal-seeking phenomenon’ missing, the only absence to be explained, is this latter, philosophical goal-seeking.


At no point in the work does he reference, let alone account for, the role metacognition or introspection plays in our attempt to grapple with the incompatibility of natural and intentional phenomena. He simply declares “the obvious inversion of causal logic that distinguishes them” (139), without genuinely considering where that ‘inversion’ occurs. Because this just is the nub of the issue between the emergentist and the eliminativist: whether his ‘obvious inversion’ belongs to the systems observed or to the systems observing. As Deacon writes:


“There is no use denying there is a fundamental causal difference between these domains that must be bridged in any comprehensive theory of causality. The challenge of explaining why such a seeming reversal takes place, and exactly how it does so, must ultimately be faced. At some point in this hierarchy, the causal dynamics of teleological processes do indeed emerge from simpler blind mechanistic dynamics, but we are merely restating this bald fact unless we can identify exactly how this causal about-face is accomplished. We need to stop trying to eliminate homunculi, and to face up to the challenge of constructing teleological properties—information, function, aboutness, end-directedness, self, even conscious experience—from unambiguously non-teleological starting points.” 140


But why do we need to stop ‘trying to eliminate’ homunculi? We know that philosophical reflection on the nature of cognition is woefully unreliable. We know that intentional concepts and phenomena are the stock-in-trade of philosophical reflection. We know that scientific inquiry generally delegitimizes our prescientific discourses. So why shouldn’t we assume that the matter of intentionality amounts to more of the same?


Deacon never says. He acknowledges “there cannot be a literal ends-causing-the-means process involved” (109) when it comes to intentional phenomena. As he writes:


“Of course, time is neither stopped nor running backwards in any of these processes. Thermodynamic processes are proceeding uninterrupted. Future possible states are not directly causing present events to occur.” 109-110


He acknowledges, in other words, that this ‘inversion of causality’ is apparent only. He acknowledges, that is, that metacognition is getting things wrong, just not entirely. So what recommends his project of ontologically meeting this appearance halfway over the project of doing away with it altogether? The project of rewriting nature, after all, is far more extravagant than the project of theorizing metacognitive shortcomings.


Deacon’s failure to account for observation-dependent interpretations of intentionality is more than suspiciously convenient; it actually renders the whole of Incomplete Nature an exercise in begging the question. He spends a tremendous amount of time and no little ingenuity in describing the way ‘teleodynamic systems,’ as the result of increasingly recursive complexity, emerge from ‘morphodynamic systems’ which in turn emerge from standard thermodynamic systems. Where thermodynamic systems exhibit a straightforward increase in entropy, morphodynamic systems, such as crystal formation, exhibit the tendency to become more ordered. Building on morphodynamics, teleodynamic systems then exhibit the kinds of properties we take to be intentional. A point of pride for Deacon is the way his elaborations turn, as he mentions in the extended passage quoted above, on ‘unambiguously non-teleological starting points.’


He sums up this patient process of layering causal complexities with the postulation of what he calls an autogen, “a form of self-generating, self-repairing, self-replicating system that is constituted by reciprocal morphodynamic processes” (547-8), arguably his most ingenious innovation. He then moves to conclude:


“So even these simple molecular systems have crossed a threshold in which we can say that a very basic form of value has emerged, because we can describe each of the component autogenic processes as there for the sake of autogen integrity, or for the maintenance of that particular form of autogenicity. Likewise, we can describe different features of the surrounding molecular environment as ‘beneficial’ or ‘harmful’ in the same sense that we would apply these assessments to microorganisms. More important, these are not merely glosses provided by a human observer, but intrinsic and functionally relevant features of the consequence-organized nature of the autogen itself.” 322


And the reader is once again left with the question of why. We know that the brain possesses suites of heuristic problem solvers geared to economize by exploiting various features of the environment. The obvious question becomes: How is it that any of the processes he describes do anything more than schematize the kinds of features that trigger the brain to swap out its causal cognitive systems for its intentional cognitive systems?


Time and again, one finds Deacon explicitly acknowledging the importance of the observer, and time and again one finds him dismissing that importance without a lick of argumentation—the argumentation his entire account hangs on. One can even grant him his morphodynamic and teleodynamic ‘phase transitions’ and still plausibly insist that all he’s managed to provide is a detailed description of the kinds of complex mechanical processes prone to trigger our intentional heuristics. After all, if it is the case that the future does not cause the past, then ‘end directedness,’ the ‘obvious inversion of causality,’ actually isn’t an inversion at all. The fact is Deacon’s own account of constraints and the role they play in morphodynamics and teleodynamics is entirely amenable to mechanical understanding. He continually relies on disposition talk. Even his metaphors, like the ‘negentropic ratchet’ (317), tend to be mechanical. The autogen is quite clearly a machine, one that automatically expresses the constraints that make it possible. The fact that these component constraints result in a system that behaves in ways far different than mundane thermodynamic systems speaks to nothing more extraordinary than mechanical emergence, the fact that whole mechanisms do things that their components could not (See Craver, 2007, pp. 211-17 for a consideration of the distinction between mechanical and spooky emergence). Likewise, for all the ink he spills regarding the holistic nature of teleodynamic systems, he does an excellent job explaining them in terms of their contributing components!


In the end, all Deacon really has is an analogy between the ‘intentional absence,’ our empirical inability to find intentional phenomena, and the kind of absence he attributes to constraints. Since systematicity of any kind requires constraints, defining constraints, as Deacon does, in terms of what cannot happen—in terms of what is absent—provides him the rhetorical license he needs to speak of ‘absential causes’ at pretty much any juncture. Since he has already defined intentional phenomena as ‘absential causes,’ it becomes a very easy thing indeed to lead the reader over the ‘epistemic cut’ and claim that he has discovered the basis of the intentional as it exists in nature, as opposed to an interpretation of those systems inclined to trigger intentional cognition in the human brain. Constraints can be understood in absential terms. Intentional phenomena can only be understood in absential terms. Since the reader, thanks to medial neglect, has no inkling whatsoever of the fractionate and specialized nature of intentional cognition, all Deacon needs to do is comb their existing intuitions in his direction. Constraints are objective, therefore intentionality is objective.


Not surprisingly, Deacon falls far short of ‘naturalizing intentionality.’ Ultimately, he provides something very similar to what Evan Thompson delivers in his equally impressive (and unconvincing) Mind in Life: a more complicated, attenuated picture of nature that seems marginally less antithetical to intentionality. Where Thompson’s “aim is not to close the explanatory gap in a reductive sense, but rather to enlarge and enrich the philosophical and scientific resources we have for addressing the gap” (x), Deacon’s is to “demonstrate how a form of causality dependent on specifically absent features and unrealized potentials can be compatible with our best science” (16), the idea being that such an absential understanding will pave the way for some kind of thoroughgoing naturalization of intentionality—as metacognized—in the future.


But such a naturalization can only happen if our theoretical metacognitive intuitions regarding intentionality get intentionality right in general, as opposed to right enough for this or that. And our metacognitive intuitions regarding intentionality can only get intentionality right in general if our brain has somehow evolved the capacity to overcome medial neglect. And the possibility of this, given the problems of complexity and complicity, seems very hard to fathom.


The fact is BBT provides a very plausible and parsimonious observer-dependent explanation for why metacognition attributes so many peculiar properties to medial processes. The human brain, as the frame of cognition, simply cannot cognize itself the way it does other systems. It is, as a matter of empirical necessity, not simply blind to its own mechanics, but blind to this blindness. It suffers medial neglect. Unable to access and cognize its origins, and unable to cognize this inability, it assumes that it accesses all there is to access—it confuses itself for something bottomless, an impossible exception to physics.


So when Deacon writes:


“These phenomena not only appear to arise without antecedents, they appear to be defined with respect to something nonexistent. It seems that we must explain the uncaused appearance of phenomena whose causal powers derive from something nonexistent! It should be no surprise that this most familiar and commonplace feature of our existence poses a conundrum for science.” 39


we need to take the truly holistic view that Deacon himself consistently fails to take. We need to see this very real problem in terms of one set of natural systems—namely, us—engaging the set of all natural systems, as a kind of linkage between being pushed and pushing back.


On BBT, Deacon’s ‘obvious inversion of causality’ is merely an illusory artifact of constraints pertaining to the human brain’s ability to cognize itself the way it cognizes its environments. Intentional phenomena appear causally inverted simply because no information pertaining to their causal provenance is available to deliberative metacognition. Rules constrain us in some mysterious, orthogonal way. Goals somehow constrain us from the future. Will somehow constrains itself! Desires, like knowledge, are somehow constrained by their objects, even when they are nowhere to be seen. These apparently causally inverted phenomena vanish whenever we search for their origins because they quite simply do not exist in the high-dimensional way things in our environments exist. They baffle scientific reason because the actual neuromechanical heuristics employed are adapted to solve problems in the absence of detailed causal information, and because conscious metacognition, blind to the rank insufficiency of the information available for deliberative problem-solving, assumes that it possesses all the information it needs. Philosophical reflection is a cultural achievement, after all, an exaptation of existing, more specialized cognitive resources; it seems quite implausible to assume the brain would possess the capacity to vet the relative sufficiency of information utilized in ways possessing no evolutionary provenance.


We are causally embedded in our environments in such a way that we cannot intuit ourselves as so embedded, and so intuit ourselves otherwise, as goal seeking, willing, rule-following, knowing, desiring, and so on—in ways that systematically neglect the actual, causal relations involved. Is it really just a coincidence that all these phenomena just happen to belong to the ‘medial,’ which is to say, the machinery responsible for cognition? Is it really just a coincidence that all these phenomena exhibit a profound incompatibility with causal explanation? Is it really just a coincidence that all our second-order interpretations of these terms are chronically underdetermined (a common indicator of insufficient information), even though they function quite well when used in everyday, first-order, interpersonal contexts?


Not at all. As I’ve attempted to show in a variety of ways over the past couple of years, a great number of traditional conundrums can be resolved via BBT. All the old problems fall away once we realize that the medial—or ‘first person’—is simply what the third person looks like absent the capacity to laterally solve the third person. The time has come to leave them behind and begin the hard work of discovering what new conundrums await.



March 8, 2014

The Closing and Opening of Covers

My agent has the book, and I’m having several copies of the manuscript printed up and bound to distribute to some keen-eyed friends today. That’s as much as I can say detail-wise, at the moment. As soon as my publishers and my agent and I have the details hashed out I will post them here post-haste.


I also finally managed to trap True Detective on my PVR. People have sent me so many links (such as this and this) to mainstream articles on the character of Cohle and his creator Nic Pizzolatto’s inspirations that I thought it worth a looksee. I haven’t watched an episode yet, but the notion of Matthew McConaughey (a devout believer) playing a nihilistic prophet appeals to my sense of cosmic perversity. I suppose he would make a good Disciple Manning. Who knows, maybe a thunderbolt will strike someone at HBO–they’ll take a sip of latte and wonder, “Egad! What if we take True Detective and Game of Thrones and mash them together!” Either way, given the way society continues to inexorably creep toward Golgotterath, the popularization of this fact has got to be a good thing… if it’s true that informed gamblers enjoy better odds than sleepwalkers, that is.



February 25, 2014

Interstellar Dualists and X-phi Alien Superfreaks

I came up with this little alien thought experiment to illustrate a cornerstone of the Blind Brain Theory: the way systems can mistake information deficits for positive ontological properties, using a species I call the Walleyes (pronounced ‘Wally’s’):


Walleyes possess two very different visual systems, the one high-dimensional, adapted to tracking motion and resolving innumerable details, the other myopic in the extreme, adapted to resolving blurry gestalts at best, blobs of shape and colour. Both are exquisitely adapted to solve their respective problem-ecologies, however; those ecologies just happen to be radically divergent. The Walleyes, it turns out, inhabit the twilight line of a world that forever keeps one face turned to its sun. They grow in a linear row that tracks the same longitude around the entire planet, at least wherever there’s land. The high-capacity eye is the eye possessing dayvision, adapted to take down mobile predators using poisonous darts. The low-capacity eye is the eye possessing nightvision, adapted to send tendrils out to feed on organic debris. The Walleyes, in fact, have nearly a 360-degree view of their environment: only the margin of each defeats them.


The problem, however, is that Walleyes, like anemones, are a kind of animal that is rooted in place. Save for the odd storm, which blows the ‘head’ about from time to time, there is very little overlap in their respective visual fields, even though each engages (two very different halves of) the same environment. What’s more, the nightvision eye, despite its manifest myopia, continually signals that it possesses a greater degree of fidelity than the dayvision eye.


Now imagine an advanced alien species introduces a virus that rewires Walleyes for discursive, conscious experience. Since their low-dimensional nightvision system insists (by default) that it sees everything there is to be seen, and their high-dimensional system, always suspicious of camouflaged predators, regularly signals estimates of reliability, the Walleyes have no reason to think heuristic neglect is a problem. Nothing signals the possibility that the problem might be perspectival (related to issues of information access and problem solving capacity), so the metacognitive default of the Walleyes is to construe themselves as special beings that dwell on the interstice of two very different worlds. They become natural dualists…


The same way we seem to be.


Perhaps some X-phi super-aliens are snickering as they read this!
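For what it’s worth, the confound caricatures nicely in code. In the toy sketch below (entirely my own, with invented values), a metacognitive readout treats the absence of a reliability signal as perfect reliability—an information deficit mistaken for a positive property:

```python
# Caricature of the Walleyes' metacognitive confound (illustrative only).

def metacognize(channel):
    # Neglect: a missing reliability signal is read as sufficiency.
    reported = channel.get("reliability")  # None if never signalled
    return 1.0 if reported is None else reported

dayvision = {"content": "high-dimensional scene", "reliability": 0.8}
nightvision = {"content": "blurry gestalts"}  # never signals reliability

print(metacognize(dayvision))    # 0.8 -> treated with suspicion
print(metacognize(nightvision))  # 1.0 -> deficit mistaken for perfection
```

The nightvision channel never lies; it simply never reports, and the readout cannot tell the difference.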



February 17, 2014

The Missing Half of the Global Neuronal Workspace: A Commentary on Stanislas Dehaene’s Consciousness and the Brain

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts


.


Introduction


Stanislas Dehaene, to my mind at least, is the premier consciousness researcher on the planet, one of those rare scientists who seems equally at home in the theoretical aether (like we are here) and in the laboratory (where he is there). His latest book, Consciousness and the Brain, provides an excellent, and at times brilliant, overview of the state of contemporary consciousness research. Consciousness research has come a long way in the past two decades, and Dehaene deserves credit for much of the yardage gained.


I’ve been anticipating Consciousness and the Brain for quite some time, especially since I bumped across “The Eternal Silence of the Neuronal Spaces,” Dehaene’s review of Christof Koch’s Consciousness: Confessions of a Romantic Reductionist, where he concludes with a confession of his own: “Can neuroscience be reconciled with living a happy, meaningful, moral, and yet nondelusional life? I will confess that this question also occasionally keeps me lying awake at night.” Since the implications of the neuroscientific revolution, the prospects of having a technically actionable blueprint of the human soul, often keep my mind churning into the wee hours, I was hoping that I might see a more measured, less sanguine Dehaene in this book, one less inclined to soft-sell the troubling implications of neuroscientific research.


And in that one regard, I was disappointed. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts is written for a broad audience, so in a certain sense one can understand the authorial instinct to make things easy for the reader, but rendering a subject matter more amenable to lay understanding is quite a different thing than rendering it more amenable to lay sensibilities. Dehaene, I think, caters far too much to the very preconceptions his science is in the process of dismantling. As a result, the book, for all its organizational finesse, all its elegant formulations, and economical summaries of various angles of research, finds itself haunted by a jagged shadow, the intimation that things simply are not as they seem. A contradiction—of expressive modes if not factual claims.


Perhaps the most stark example of this contradiction comes at the very conclusion of the book, where Dehaene finally turns to consider some of the philosophical problems raised by his project. Adopting a quasi-Dennettian argument (from Freedom Evolves) that the only ‘free will’ that matters is the free will we actually happen to have (namely, one compatible with physics and biology), he writes:


“Our belief in free will expresses the idea that, under the right circumstances, we have the ability to guide our decisions by our higher-level thoughts, beliefs, values, and past experiences, and to exert control over our undesired lower-level impulses. Whenever we make an autonomous decision, we exercise our free will by considering all the available options, pondering them, and choosing the one that we favor. Some degree of chance may enter in a voluntary choice, but this is not an essential feature. Most of the time our willful acts are anything but random: they consist in a careful review of our options, followed by the deliberate selection of the one we favor.” 264


And yet for his penultimate line, no less, he writes, “[a]s you close this book to ponder your own existence, ignited assemblies of neurons literally make up your mind” (266). At this point, the perceptive reader might be forgiven for asking, ‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge. If ignited assemblies of neurons are somehow what he is, then he has no inkling what he is—or what it is he is supposedly doing.


As we shall see, this pattern of alternating expressive modes, swapping between the personal and the impersonal registers to describe various brain activities, occurs throughout Consciousness and the Brain. As I mentioned above, I’m sure this has much to do with Dehaene’s resolution to write a reader friendly book, and so to market the Global Neuronal Workspace Theory (GNWT) to the broader public. I’ve read enough of Dehaene’s articles to recognize the nondescript, clinical tone that animates the impersonally expressed passages, and so to see those passages expressed in more personal idioms as self-conscious attempts on his part to make the material more accessible. But as the free will quote above makes plain, there’s a sense in which Dehaene, despite his odd sleepless night, remains committed to the fundamental compatibility of the personal and the impersonal idioms. He thinks neuroscience can be reconciled with a meaningful and nondelusional life. In what follows I intend to show why, on the basis of his own theory, he’s mistaken. He’s mistaken because, when all is said and done, Dehaene possesses only half of what could count as a complete theory of consciousness—the most important half to be sure, but half all the same. Despite all the detailed explanations of consciousness he gives in the book, he actually has no account whatsoever of what we seem to take consciousness to be–namely, ourselves.


For that account, Stanislas Dehaene needs to look closely at the implicature of his Global Neuronal Workspace Theory—its long theoretical shadow, if you will—because there, I think, he will find my own Blind Brain Theory (BBT), and with it the theoretical resources to show how the consciousness revealed in his laboratory can be reconciled with the consciousness revealed in us. This, then, will be my primary contention: that Dehaene’s Global Neuronal Workspace Theory directly implies the Blind Brain Theory, and that the two theories, taken together, offer a truly comprehensive account of consciousness…


The one that keeps me lying awake at night.


.


Function Dysfunction


Let’s look at a second example. After drawing up an inventory of various, often intuition-defying, unconscious feats, Dehaene cautions the reader against drawing too pessimistic a conclusion regarding consciousness—what he calls the ‘zombie theory’ of consciousness. If unconscious processes, he asks, can plan, attend, sum, mean, read, recognize, value and so on, just what is consciousness good for? The threat of these findings, as he sees it, is that they seem to suggest that consciousness is merely epiphenomenal, a kind of kaleidoscopic side-effect to the more important, unconscious business of calculating brute possibilities. As he writes:


“The popular Danish science writer Tor Norretranders coined the term ‘user illusion’ to refer to our feeling of being in control, which may well be fallacious; every one of our decisions, he believes, stems from unconscious sources. Many other psychologists agree: consciousness is the proverbial backseat driver, a useless observer of actions that lie forever beyond its control.” 91


Dehaene disagrees, claiming that his account belongs to “what philosophers call the ‘functionalist’ view of consciousness” (91). He uses this passing criticism as a segue for his subsequent, fascinating account of the numerous functions discharged by consciousness—what makes consciousness a key evolutionary adaptation. The problem with this criticism is that it simply does not apply. Norretranders, for instance, nowhere espouses epiphenomenalism—at least not in The User Illusion. The same might be said of Daniel Wegner, one of the ‘many psychologists’ Dehaene references in the accompanying footnote. Far from epiphenomenalism—the argument that consciousness has no function whatsoever (as, say, Susan Pockett (2004) has argued)—both of these authors contend that it’s ‘our feeling of being in control’ that is illusory. So in The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.


Dehaene is simply in error here. He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy. He has simply run afoul of the distinction between consciousness as it is and consciousness as it appears to us—the distinction between consciousness as impersonally and personally construed. Note the way he actually slips between idioms in the passage quoted above, at first referencing ‘our feeling of being in control’ and then referencing ‘its control.’ Now one might think this distinction between these two very different perspectives on consciousness would be easy to police, but such is not the case (See Bennett and Hacker, 2003). Unfortunately, Dehaene is far from alone when it comes to running afoul of this dichotomy.


For some time now, I’ve been arguing for what I’ve been calling a Dual Theory approach to the problem of consciousness. On the one hand, we need a theoretical apparatus that will allow us to discover what consciousness is as another natural phenomenon in the natural world. On the other hand, we need a theoretical apparatus that will allow us to explain (in a manner that makes empirically testable predictions) why consciousness appears the way that it does, namely, as something that simply cannot be another natural phenomenon in the natural world. Dehaene is in the business of providing the first kind of theory: a theory of what consciousness actually is. I’ve made a hobby of providing the second kind of theory: a theory of why consciousness appears to possess the baffling form that it does.


Few terms in the conceptual lexicon are quite so overdetermined as ‘consciousness.’ This is precisely what makes Dehaene’s operationalization of ‘conscious access’ invaluable. But salient among those traditional overdeterminations is the peculiarly tenacious assumption that consciousness ‘just is’ what it appears to be. Since what it appears to be is drastically at odds with anything else in the natural world, this assumption sets the explanatory bar rather high indeed. You could say consciousness needs a Dual Theory approach for the same reason that Dualism constitutes an intuitive default (Emmons 2014). Our dualistic intuitions arguably determine the structure of the entire debate. Either consciousness really is some wild, metaphysical exception to the natural order, or consciousness represents some novel, emergent twist that has hitherto eluded science, or something about our metacognitive access to consciousness simply makes it seem that way. Since the first leg of this trilemma belongs to theology, all the interesting action has fallen into orbit around the latter two options. The reason we need an ‘Appearance Theory’ when it comes to consciousness as opposed to other natural phenomena, has to do with our inability to pin down the explananda of consciousness, an inability that almost certainly turns on the idiosyncrasy of our access to the phenomena of consciousness compared to the phenomena of the natural world more generally. This, for instance, is the moral of Michael Graziano’s (otherwise deeply flawed) Consciousness and the Social Brain: that the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.


The Blind Brain Theory is just such an Appearance Theory: it provides a systematic explanation of the kinds of cognitive confounds and access bottlenecks that make consciousness appear to be ‘supra-natural.’ It holds, with Dehaene, that consciousness is functional through and through, just not in any way we can readily intuit outside empirical work like Dehaene’s. As such, it takes findings such as Wegner’s, where the function we presume on the basis of intuition (free willing) is belied by some counter-to-intuition function (behaviour ownership), as paradigmatic. Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary”) short of sustained empirical investigation.


Dehaene is certainly sensitive to the general outline of this problem: he devotes an entire chapter (“Consciousness Enters the Lab”) to discussing the ways he and others have overcome the notorious difficulties involved in experimentally ‘pinning consciousness down.’ And the masking and attention paradigms he has helped develop have done much to transform consciousness research into a legitimate field of scientific research. He even provides a splendid account of just how deep unconscious processing reaches into what we intuitively assume are wholly conscious exercises—an account that thoroughly identifies him as a fellow ulterior functionalist. He actually agrees with me and Norretranders and Wegner—he just doesn’t realize it quite yet.


.


The Global Neuronal Workspace


As I said, Dehaene is primarily interested in theorizing consciousness apart from how it appears. In order to show how the Blind Brain Theory actually follows from his findings, we need to consider both these findings and the theoretical apparatus that Dehaene and his colleagues use to make sense of them. We need to consider his Global Neuronal Workspace Theory of consciousness.


According to GNWT, the primary function of consciousness is to select, stabilize, solve, and broadcast information throughout the brain. As Dehaene writes:


“According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because the brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information, I argue, is a characteristic property of a conscious state.” 165


A signature virtue of Consciousness and the Brain lies in Dehaene’s ability to blend complexity and nuance with expressive economy. But again one needs to be wary of his tendency to resort to the personal idiom, as he does in this passage, where the functional versatility provided by consciousness is explicitly conflated with agency, the freedom to dispose of information ‘in whatever way we please.’ Elsewhere he writes:


“The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” 105


Here we find him making essentially the same claims in less anthropomorphic or ‘reader-friendly’ terms. Despite the folksy allure of the ‘workspace’ metaphor, this image of the brain as a ‘hybrid serial-parallel machine’ is what lies at the root of GNWT. For years now, Dehaene and others have been using masking and attention experiments in concert with fMRI, EEG, and MEG to track the comparative neural history of conscious and unconscious stimuli through the brain. This has allowed them to isolate what Dehaene calls the ‘signatures of consciousness,’ the events that distinguish percepts that cross the conscious threshold from percepts that do not. A theme that Dehaene repeatedly evokes is the information-asymmetric nature of conscious versus unconscious processing. Since conscious access is the only access we possess to our brain’s operations, we tend to run afoul of a version of what Daniel Kahneman (2012) calls WYSIATI, or the ‘what-you-see-is-all-there-is’ effect. Dehaene even goes so far as to state this peculiar tendency as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79). The fact is the nonconscious brain performs the vast, vast majority of the brain’s calculations.


The reason for this has to do with the Inverse Problem, the challenge of inferring the mechanics of some distal system, a predator or a flood, say, from the mechanics of some proximal system such as ambient light or sound. The crux of the problem lies in the ambiguity inherent to the proximal mechanism: a wild variety of distal events could explain any given retinal stimulus, for instance, and yet somehow we reliably perceive predators or floods or what have you. Dehaene writes:


“We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the ‘blind spot’ where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, and massively reinterpreted based on our previous experience of similar visual scenes.” 60


The brain can do this because it acts as a massively parallel Bayesian inference engine, analytically breaking down various elements of our retinal images, feeding them to specialized heuristic circuits, and cobbling together hypothesis after hypothesis.


“Below the conscious stage, myriad unconscious processors, operating in parallel, constantly strive to extract the most detailed and complete interpretation of our environment. They operate as nearly optimal statisticians who exploit the slightest perceptual hint—a faint movement, a shadow, a splotch of light—to calculate the probability that a given property holds true in the outside world.” 92
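It’s worth pausing on what ‘nearly optimal statisticians’ means in computational terms: something like a Bayesian update, in which a prior over distal causes is combined with the likelihood of the proximal cue. Here is a minimal sketch in Python; the causes and every number are invented for illustration, not taken from the book:

```python
# A toy Bayesian update of the sort Dehaene attributes to the brain's
# unconscious processors: infer a distal cause ('predator' vs. 'wind')
# from an ambiguous proximal cue (a faint movement in the grass).
# All numbers are invented for illustration.

prior = {"predator": 0.01, "wind": 0.99}        # base rates of the distal causes
likelihood = {"predator": 0.80, "wind": 0.05}   # P(faint movement | cause)

# Bayes' rule: P(cause | cue) is proportional to P(cue | cause) * P(cause)
unnormalized = {cause: likelihood[cause] * prior[cause] for cause in prior}
evidence = sum(unnormalized.values())
posterior = {cause: p / evidence for cause, p in unnormalized.items()}

print(posterior)   # {'predator': ~0.14, 'wind': ~0.86}
```

A single ambiguous hint leaves ‘wind’ the favoured hypothesis, but it multiplies the prior odds of ‘predator’ roughly sixteenfold; stack a shadow and a splotch of light on top and the verdict flips, which is why running myriad such processors in parallel pays.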


But hypotheses are not enough. All this machinery belongs to what is called the ‘sensorimotor loop.’ The whole evolutionary point of all this processing is to produce ‘actionable intelligence,’ which is to say, to help generate and drive effective behaviour. In many cases, where the bottom-up interpretations match the top-down expectations and behaviour is routine, such processing need not result in consciousness of the stimuli at issue. In other cases, however, the interpretations are relayed to the nonconscious attentional systems of the brain, where they are ranked according to their relevance to ongoing behaviour and selected accordingly for conscious processing. Dehaene summarizes what happens next:


“Conscious perception results from a wave of neuronal activity that tips the cortex over its ignition threshold. A conscious stimulus triggers a self-amplifying avalanche of neural activity that ultimately ignites many regions into a tangled state. During that conscious state, which starts approximately 300 milliseconds after stimulus onset, the frontal regions of the brain are being informed of sensory inputs in a bottom-up manner, but these regions also send massive projections in the converse direction, top-down, and to many distributed areas. The end result is a brain web of synchronized areas whose various facets provide us with many signatures of consciousness: distributed activation, particularly in the frontal and parietal lobes, a P3 wave, gamma-band amplification, and massive long-distance synchrony.” 140


As Dehaene is at pains to point out, the machinery of consciousness is simply too extensive not to be functional somehow. The neurophysiological differences observed between the multiple interpretations that hover in nonconscious attention and the interpretation that tips the ‘ignition threshold’ of consciousness are nothing if not dramatic. Information that was localized suddenly becomes globally accessible. Information that was transitory suddenly becomes stable. Information that was hypothetical suddenly becomes canonical. Information that was dedicated suddenly becomes fungible. Consciousness makes information spatially, temporally, and structurally available. And this, as Dehaene rightly argues, makes all the difference in the world, including the fact that “[t]he global availability of information is precisely what we subjectively experience as a conscious state” (168).


.


A Mile Wide and an Inch Thin


Consciousness is the Medieval Latin of neural processing. It makes information structurally available, both across time and across the brain. As Dehaene writes, “The capacity to synthesize information over time, space, and modalities of knowledge, and to rethink it at any time in the future, is a fundamental component of the conscious mind, one that seems likely to have been positively selected for during evolution” (101). But this evolutionary advantage comes with a number of crucial caveats, qualifications that, as we shall see, make some kind of Dual Theory approach unavoidable.


Once an interpretation commands the global workspace, it becomes available for processing via the nonconscious input of a number of different processors. Thus the metaphor of the workspace. The information can be ‘worked over,’ mined for novel opportunities, refined into something more useful, but only, as Dehaene points out numerous times, synoptically and sequentially.


Consciousness is synoptic insofar as it samples mere fractions of the information available: “An unconscious army of neurons evaluates all the possibilities,” Dehaene writes, “but consciousness receives only a stripped down report” (96). By selecting, in other words, the workspace is at once neglecting, not only all the alternate interpretations, but all the neural machinations responsible: “Paradoxically, the sampling that goes on in our conscious vision makes us forever blind to its inner complexity” (98).


And consciousness is sequential in that it can only sample one fraction at a time: “our conscious brain cannot experience two ignitions at once and lets us perceive only a single conscious ‘chunk’ at a given time,” he explains. “Whenever the prefrontal and parietal lobes are jointly engaged in processing a first stimulus, they cannot simultaneously reengage toward a second one” (125).


All this is to say that consciousness pertains to the serial portion of the ‘hybrid serial-parallel machine’ that is the human brain. Dehaene even goes so far as to analogize consciousness to a “biological Turing machine” (106), a kind of production system possessing the “capacity to implement any effective procedure” (105). He writes:


“A production system comprises a database, also called ‘working memory,’ and a vast array of if-then production rules… At each step, the system examines whether a rule matches the current state of its working memory. If multiple rules match, then they compete under the aegis of a stochastic prioritizing system. Finally, the winning rule ‘ignites’ and is allowed to change the contents of working memory before the entire process resumes. Thus this sequence of steps amounts to serial cycles of unconscious competition, conscious ignition, and broadcasting.” 105
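Dehaene’s summary reads almost like pseudocode. Purely by way of illustration, here is a minimal sketch of such a production system in Python; every rule, weight, and working-memory entry is invented for the example, and nothing below is drawn from the book:

```python
import random

# A toy production system of the kind Dehaene describes: a working-memory
# 'database', a set of if-then production rules, stochastic competition
# among matching rules, and 'ignition' of a single winner that is allowed
# to rewrite working memory. All rules and contents are invented.

working_memory = {"goal": "add", "a": 2, "b": 3}

rules = [
    # (condition on working memory, action on working memory, weight)
    (lambda wm: wm.get("goal") == "add" and "sum" not in wm,
     lambda wm: wm.update(sum=wm["a"] + wm["b"]),
     1.0),
    (lambda wm: wm.get("goal") == "add" and "sum" in wm,
     lambda wm: wm.update(goal="report"),
     1.0),
    (lambda wm: wm.get("goal") == "report",
     lambda wm: (print("broadcast:", wm["sum"]), wm.update(goal="done")),
     1.0),
]

while working_memory["goal"] != "done":
    # Unconscious competition: every rule whose condition matches competes.
    matching = [(act, w) for cond, act, w in rules if cond(working_memory)]
    if not matching:
        break
    # Stochastic prioritizing: a single winner 'ignites'...
    act, _ = random.choices(matching, weights=[w for _, w in matching])[0]
    # ...and is allowed to change the contents of working memory.
    act(working_memory)
```

Each pass through the loop is one of Dehaene’s ‘serial cycles’: massively parallel condition-matching, a stochastic winner, a single change broadcast through working memory.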


The point of this analogy, Dehaene is quick to point out, isn’t to “revive the cliché of the brain as a classical computer” (106) so much as it is to understand the relationship between the conscious and nonconscious brain. Indeed, in subsequent experiments, Dehaene and his colleagues discovered that the nonconscious, for all its computational power, is generally incapable of making sequential inferences: “The mighty unconscious generates sophisticated hunches, but only a conscious mind can follow a rational strategy, step after step” (109). It seems something of a platitude to claim that rational deliberation requires consciousness, but to be able to provide an experimentally tested neurobiological account of why this is so is nothing short of astounding. Make no mistake: these are the kind of answers philosophy, rooting through the mire of intuition, has sought for millennia.


Dehaene, as I mentioned, is primarily interested in providing a positive account of what consciousness is apart from what we take it to be. “Putting together all the evidence inescapably leads us to a reductionist conclusion,” Dehaene writes. “All our conscious experiences, from the sound of an orchestra to the smell of burnt toast, result from a similar source: the activity of massive cerebral circuits that have reproducible neuronal signatures” (158). Though he does consider several philosophical implications of his ‘reductionist conclusions,’ he does so only in passing. He by no means dwells on them.


Given that consciousness research is a science attempting to bootstrap its way out of the miasma of philosophical speculation regarding the human soul, this reluctance is quite understandable—perhaps even laudable. The problem, however, is that philosophy and science both traffic in theory, general claims about basic things. As a result, the boundaries are constitutively muddled, typically to the detriment of the science, but sometimes to its advantage. A reluctance to speculate may keep the scientist safe, but to the extent that ‘data without theory is blind,’ it may also mean missed opportunities.


So consider Dehaene’s misplaced charge of epiphenomenalism, the way he seemed to be confusing the denial of our intuitions of conscious efficacy with the denial of conscious efficacy. The former, which I called ‘ulterior functionalism,’ entirely agrees that consciousness possesses functions; it denies only that we have reliable metacognitive access to those functions. Our only recourse, the ulterior functionalist holds, is to engage in empirical investigation. And this, I suggested, is clearly Dehaene’s own position. Consider:


“The discovery that a word or a digit can travel throughout the brain, bias our decisions, and affect our language networks, all the while remaining unseen, was an eye-opener for many cognitive scientists. We had underestimated the power of the unconscious. Our intuitions, it turned out, could not be trusted: we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter…” 74


This could serve as a mission statement for ulterior functionalism. We cannot, as a matter of fact, trust any of our prescientific intuitions regarding what we are, any more than we could trust our prescientific intuitions regarding the natural world. This much seems conclusive. Then why does Dehaene find the kinds of claims advanced by Norretranders and Wegner problematic? What I want to say is that Dehaene, despite the occasional sleepless night, still believes that the account of consciousness as it is will somehow redeem the most essential aspects of consciousness as it appears, that something like a program of ‘Dennettian redefinition’ will be enough. Thus the attitude he takes toward free will. But then I encounter passages like this:


“Yet we never truly know ourselves. We remain largely ignorant of the actual unconscious determinants of our behaviour, and therefore cannot accurately predict what our behaviour will be in circumstances beyond the safety zone of our past experiences. The Greek motto ‘Know thyself,’ when applied to the minute details of our behaviour, remains an inaccessible ideal. Our ‘self’ is just a database that gets filled in through our social experiences, in the same format with which we attempt to understand other minds, and therefore it is just as likely to include glaring gaps, misunderstandings, and delusions.” 113


Claims like this, which radically contravene our intuitive, prescientific understanding of self, suggest that Dehaene simply does not know where he stands, that he alternately believes and does not believe that his work can be reconciled with our traditional understanding of ‘meaningful life.’ Perhaps this explains the pendulum swing between the personal and the impersonal idiom that characterizes this book—down to the final line, no less!


Even though this is an eminently honest frame of mind to take to this subject matter, I personally think his research cuts against even this conflicted optimism. Not surprisingly, the Global Neuronal Workspace Theory of Consciousness casts an almost preposterously long theoretical shadow; it possesses an implicature that reaches to the furthest corners of the great human endeavour to understand itself. As I hope to show, the Blind Brain Theory of the Appearance of Consciousness provides a parsimonious and powerful way to make this downstream implicature explicit.


.


From Geocentrism to ‘Noocentrism’


“Most mental operations,” Dehaene writes, “are opaque to the mind’s eye; we have no insight into the operations that allow us to recognize a face, plan a step, add two digits, or name a word” (104-5). If one pauses to consider the hundreds of experiments that he directly references, not to mention the thousands of others that indirectly inform his work, this goes without saying. We require a science of consciousness simply because we have no other way of knowing what consciousness is. The science of consciousness is literally predicated on the fact of our metacognitive incapacity (See “The Introspective Peepshow”).


Demanding that science provide a positive explanation of consciousness as we intuit it is no different than demanding that science provide a positive explanation of geocentrism—which is to say, the celestial mechanics of the earth as we once intuited it. Any fool knows that the ground does not move. If anything, the fixity of the ground is what allows us to judge movement. Certainly the possibility that the earth moved was an ancient posit, but lacking evidence to the contrary, it could be little more than philosophical fancy. Only the slow accumulation of information allowed us to reconceive the ‘motionless earth’ as an artifact of ignorance, as something that only the absence of information could render obvious. Geocentrism is the product of a perspectival illusion, plain and simple, the fact that we literally stood too close to the earth to comprehend what the earth in fact was.


We stand even closer to consciousness—so close as to be coextensive! Nonetheless, a good number of very intelligent people insist on taking (some version of) consciousness as we intuit it to be the primary explanandum of consciousness research. Given his ‘law’ (“We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79)), Dehaene is duly skeptical. He is a scientific reductionist, after all. So with reference to David Chalmers’ ‘hard problem’ of consciousness, we find him writing:


“My opinion is that Chalmers swapped the labels: it is the ‘easy’ problem that is hard, while the hard problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate.” 262


Referencing the way modern molecular biology has overthrown vitalism, he continues:


“Likewise, the science of consciousness will keep eating away at the hard problem until it vanishes. For instance, current models of visual perception already explain not only why the human brain suffers from a variety of visual illusions but also why such illusions would appear in any rational machine confronted with the same computational problem. The science of consciousness already explains significant chunks of our subjective experience, and I see no obvious limits to this approach.” 262


I agree entirely. The intuitions underwriting the so-called ‘hard problem’ are perspectival artifacts. As in the case of geocentrism, our cognitive systems stand entirely too close to consciousness not to run afoul of a number of profound illusions. And I think Dehaene, not unlike Galileo, is using the ‘Dutch Spyglass’ afforded by masking and attention paradigms to accumulate the information required to overcome those illusions. I just think he remains, despite his intellectual scruples, a residual hostage of the selfsame intuitions he is bent on helping us overcome.


Dehaene only needs to think through the consequences of GNWT as it stands. So when he continues to discuss other ‘Hail Mary’ attempts (those of Eccles and Penrose) to find some positive account of consciousness as it appears, writing that “the intuition that our mind chooses its actions ‘at will’ begs for an explanation” (263), I’m inclined to think he already possesses the resources to advance such an explanation. He just needs to look at his own findings in a different way.


Consider the synoptic and sequential nature of what Dehaene calls ‘ignition,’ the becoming conscious of some nonconscious interpretation. The synoptic nature of ignition, the fact that consciousness merely samples interpretations, means that consciousness is radically privative, that every instance of selection involves massive neglect. The sequential nature of ignition, on the other hand, the fact that the becoming conscious of any interpretation precludes the becoming conscious of another interpretation, means that each moment of consciousness is an all-or-nothing affair. As I hope to show, these two characteristics possess profound implications when applied to the question of human metacognitive capacity—which is to say, our capacity to intuit our own makeup.


Dehaene actually has very little to say regarding self-consciousness and metacognition in Consciousness and the Brain, aside from speculating on the enabling role played by language. While other mammalian species clearly seem to possess metacognitive capacity, it seems restricted to the second-order estimation of the reliability of their first-order estimations. They lack “the potential infinity of concepts that a recursive language affords” (252). He provides an inventory of the anatomical differences between primates and other mammals, such as specialized ‘broadcast neurons,’ and between humans and their closest primate kin, such as the size of the dendritic trees possessed by human prefrontal neurons. As he writes:


“All these adaptations point to the same evolutionary trend. During hominization, the networks of our prefrontal cortex grew denser and denser, to a larger extent than would be predicted by brain size alone. Our workspace circuits expanded way beyond proportion, but this increase is probably just the tip of the iceberg. We are more than just primates with larger brains. I would not be surprised if, in the coming years, cognitive neuroscientists find that the human brain possesses unique microcircuits that give it access to a new level of recursive, language-like operations.” 253


Presuming the remainder of the ‘iceberg’ does not overthrow Dehaene’s workspace paradigm, however, it seems safe to assume that our metacognitive machinery feeds from the same informational trough, that it is simply one among the many consumers of the information broadcast in conscious ignition. The ‘information horizon’ of the Workspace, in other words, is the information horizon of conscious metacognition. This would be why our capacity to report seems to be coextensive with our capacity to consciously metacognize: the information we can report constitutes the sum of information available for reflective problem-solving.


So consider the problem of a human brain attempting to consciously cognize the origins of its own activity—for the purposes of reporting to other brains, say. The first thing to note is that the actual, neurobiological origins of that activity are entirely unavailable. Since only information that ignites is broadcast, only information that ignites is available. The synoptic nature of the information ignited renders the astronomical complexities of ignition inaccessible to consciousness. Even more profoundly, the serial nature of ignition suggests that consciousness, in a strange sense, is always too late. Information pertaining to ignition can never be processed for ignition. This is why so much careful experimentation is required, why our intuitions are ‘ill-defined,’ why ‘most mental operations are opaque.’ The neurofunctional context of the workspace is something that lies outside the capacity of the workspace to access.


This explains the out-and-out inevitability of what I called ‘ulterior functionalism’ above: the information ignited constitutes the sum of the information available for conscious metacognition. Whenever we interrogate the origins of our conscious episodes, reflection only has our working memory of prior conscious episodes to go on. This suggests something as obvious as it is counterintuitive: that conscious metacognition should suffer a profound form of source blindness. Whenever conscious metacognition searches for the origins of its own activity, it finds only itself.
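To make the structural point vivid, the workspace sketch from earlier can be carried one step further. In the toy code below (every name and detail hypothetical), the only record the system keeps of its own activity is the log of broadcast contents, so a metacognitive query about the causes of a conscious episode can return only other conscious episodes:

```python
# Toy illustration of metacognitive 'source blindness' (all names and
# structure hypothetical): the workspace keeps a record only of broadcast
# contents, never of the machinery that produced them.

broadcast_log = []

def ignite(content, hidden_cause):
    # The nonconscious cause is used and discarded; only the content
    # is broadcast, so no later consumer can ever access the cause.
    broadcast_log.append(content)
    return content

ignite("saw a face", hidden_cause="fusiform pattern match")
ignite("decided to wave", hidden_cause="premotor competition, winner #17")

def why(episode):
    # Metacognition can search only prior broadcasts for antecedents.
    i = broadcast_log.index(episode)
    return broadcast_log[:i] or ["(no antecedent: the episode appears uncaused)"]

print(why("decided to wave"))   # ['saw a face'], never the neural machinery
```

By construction, why() can never surface a hidden_cause; the query isn’t blocked, it simply has nothing else to find.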


Free will, in other words, is a metacognitive illusion arising out of the structure of the global neuronal workspace, one that, while perhaps not appearing “in any rational machine confronted with the same computational problem” (262), would appear in any conscious system possessing the same structural features as the global neuronal workspace. The situation is almost directly analogous to the situation faced by our ancestors before Galileo. Absent any information regarding the actual celestial mechanics of the earth, the default assumption is that the earth has no such mechanics. Likewise, absent any information regarding the actual neural mechanics of consciousness, the default assumption is that consciousness also has no such mechanics.


But free will is simply one of many problems pertaining to our metacognitive intuitions. According to the Blind Brain Theory of the Appearance of Consciousness, a great number of the ancient and modern perplexities can be likewise explained in terms of metacognitive neglect, attributed to the fact that the structure and dynamics of the workspace render the workspace effectively blind to its own structure and dynamics. Combined with Dehaene’s Global Neuronal Workspace Theory of Consciousness, the Blind Brain Theory can explain away the ‘ill-defined intuitions’ that underwrite attributions of some extraordinary irreducibility to conscious phenomena.


On BBT, the myriad structural peculiarities that theologians and philosophers have historically attributed to the first person are perspectival illusions, artifacts of neglect—things that seem obvious only so long as we remain ignorant of the actual mechanics involved (See “Cognition Obscura”). Our prescientific conception of ourselves is radically delusional, and the kind of counterintuitive findings Dehaene uses to patiently develop and explain GNWT are simply what we should expect. Noocentrism is as doomed as was geocentrism. Our prescientific image of ourselves is as blinkered as our prescientific image of the world, a possibility which should, perhaps, come as no surprise. We are simply another pocket of the natural world, after all.


But the overthrow of noocentrism is bound to generate even more controversy than the overthrow of geocentrism or biocentrism, given that so much of our self and social understanding relies upon this prescientific image. Perhaps we should all lie awake at night, pondering our pondering…



February 4, 2014

The Ironies of Modern Progress and Infantilization (by Ben Cain)

It’s commonly observed that we tend to rationalize our flaws and failings, to avoid the pain of cognitive dissonance, so that we all come to think of ourselves as fundamentally good persons even though many of us must instead be bad if “good” is to have any contrastive meaning. Societies, too, often exhibit pride which leads their chief representatives to embarrass themselves by declaring that their nation is the greatest that’s ever been in history. Both the ancients and the moderns did this, but it’s hard to deny the facts of modern technological acceleration. Just in the last century, global and instant communications have been established, intelligent machines run much of our infrastructure, robots have taken over many menial jobs, the awesome power of nuclear weapons has been demonstrated, and humans have visited the moon. We tend to think that the social impact of such uniquely powerful machines must be for the better. We speak casually, therefore, of technological advance or progress.


The familiar criticism of technology is that it destroys at least as much as it creates, so that the optimists tell only one side of the story. I’m not going to argue that neo-Luddite case here. Instead, I’m interested in the source of our judgment about progress through technology. Ironically, the more modern technology we see, the less reason we have to think there’s any kind of progress at all. This is because modernists from Descartes and Galileo onward have been compelled to distinguish between real and superficial properties, the former being physical and quantitative and the latter being subjective and qualitative. Examples of the superficial, “secondary” aspects are the contents of consciousness, but also symbolic meaning, purpose, and moral value, which include the normative idea of progress. For the most part, modernists think of subjective qualities as illusory, and because they devised scientific methods of investigation that bypass personal impressions and biases, modernists acquired knowledge of how natural processes actually work, which has enabled us to produce so much technology. So it’s curious to hear so many of us still assuming that our societies are generally superior to premodern ones, thanks in particular to our technological advantage. On the contrary, our technology is arguably the sign of a cognitive development that renders such an assumption vacuous.


.


Animism and Angst


One way of making sense of this apparent lack of social awareness is to point out that there are always elites who understand their society better than do the masses. And we could add that because the modern technological changes have happened so swiftly and have such staggering implications, many people won’t catch up to them or will even pretend there are no such consequences because they’re horrifying. But I think this makes for only part of the explanation. The masses aren’t merely ignoring the materialistic implications of science or the bad omens that technologies represent; instead, they have a commonsense conviction that technology must be good because it improves our lives.


In short, most citizens of modern, technologically developed societies are pragmatic about technology. If you asked them whether they think their societies are better than earlier ones, they’d say yes, and if you asked them why, they’d say that technology enables us to do what we want more efficiently, which is to say that technology empowers us to achieve our goals. And it turns out that this pragmatic attitude is more or less consistent with modern materialism. There’s no appeal here to some transcendent ideal, but just an egocentric view of technologies as useful tools. So our societies are more advanced than ancient ones because the ancients had to work harder to achieve their goals, whereas modern technology makes our lives easier. Mind you, this assumes that everyone in history has had some goals in common, and indeed our instinctive, animalistic desires are universal in so far as they’re matters of biology. By contrast, if all societies were alien and incommensurable to each other, national pride would be egregiously irrational. And most people probably also assume that our universal desires ought to be satisfied, because we have human rights, so that there’s moral force behind this social progress.


The instincts to acquire shelter, food, sex, power, and prestige, however, seem to me likewise insufficient to explain our incessant artificialization of nature. There’s another universal urge, which we can think of as the existential one, and this is the need to overcome our fear of the ultimate natural truths. There are two ways of doing so, with authenticity or with inauthenticity, which is to say with honour, integrity, and creativity or with delusions arising from a weak will. (Again, this raises the question of whether even these values make sense in the naturalistic picture, and I’ll come back to this at the end of this article.) Elsewhere, I talk about the ancient worldviews as glorifying our penchant for personification. Prehistoric animists saw all of nature as alive, partly because hardly anything at that time was redesigned and refashioned to suit human interests and the predominant wilderness was full of plant and animal life. Also, the ancients hadn’t learned to repress their childlike urge to vent the products of their imagination. At that time, populations were sparse and there were no machines standing as solemn proofs of objective facts; moreover, there wasn’t much historical information to humble the Paleolithic peoples with knowledge of opposing views and thus to rein in their speculations. For such reasons, those ancients must have confronted the world much as all children do—at least with respect to their trust in their imagination.


More precisely, they didn’t confront the world at all. When a modern adult rises in the morning, she leaves behind her irrational dreams and prides herself on believing that she controls her waking hours with her autonomous and rational ego. By contrast, there’s no such divergence between the child’s dream life and waking hours, since the child’s dreams spill into her playful interpretations of everything that happens to her. To be sure, modern children have their imagination tempered by the educational system that’s bursting at the seams with lessons from history. But children generally have only a fuzzy distinction between subject and object. That distinction becomes paramount after the technoscientific proofs of the world’s natural impersonality. The world has always been impersonal and amoral, but only modernists have every reason to believe as much and thus only we inheritors of that knowledge face the starkest existential choice between personal authenticity and its opposite. The prehistoric protopeople, who were still experimenting with their newly acquired excess brain power, faced no such decision between intellectual integrity and flagrant self-deception. They didn’t choose to personify the world, because they knew no different; instead, they projected their mental creations onto the wilderness with childlike abandon and so distracted themselves from their potential to understand the nature of the world’s apparent indifference. After all, in spite of the relative abundance of the ancient environments, things didn’t always go the ancients’ way; they suffered and died like everyone else. Moreover, even early humans were much cleverer than most other species.


Thus, the ancients weren’t so innocent or ignorant that they felt no fear, if only because few animals are that helpless. But human fear differs from the reactionary animal kind, because ours has an existential dimension due to the breadth of our categories and thus of our understanding. Humans attach labels to so many things in the world not just because we’re curious, but because we’re audacious and we have excess (redundant) brain capacity. Animals feel immediate pain and perhaps even the alienness of the world beyond their home territory, but not the profound horror of death’s inexorability or of the world’s undeadness, which is to say the fear of nature’s way of developing (through complexification, natural selection, and the laws of probability) without any normative reason. Animals don’t see the world for what it is, because their vision and thus their concern are so narrow, whereas we’ve looked far out into the macrocosmic and microcosmic magnitudes of the universe. We’ve found no reassuring Mind at the bottom of anything, not even in our bodies. Our overactive brains compel us to care about aspects of the world that are bad for our mental health, and so we’re liable to feel anxious. And as I say, we cope with that anxiety in different ways.


.


Modernity and Infantilization


But how does this existentialism relate to the source of our myth of modern progress? Well, I see a comparison between prehistoric, mythopoeic reverie and the modern consumer’s infantilization. In each case, we have a lack of enlightenment, a retreat from rational neutrality, and an intermixing of subject and object. I’ve discussed the mythopoeic worldview elsewhere, so here I’ll just say that it amounts to thinking of the world as entirely enchanted and filled with vitality. Again, the modern revolutions (science and capitalistic industry) have led to our disenchantment with nature, because we’ve been forced to see the world as dead inside. That’s why late modernists are at best pragmatic about progress. We must somehow express our naïve pride in ourselves and in our self-destructive modern nations, because we prefer not to suffer as alienated outsiders. But modernity’s ideal of ultrarationality makes absolutist and xenophobic pride seem uncivilized—although American audiences are notorious for stooping to that sort of savagery when they chant “USA! USA!” to quell disturbances in their proceedings. In any case, we postmodern pragmatists think of progress as being relative to our interests.


Arguably, then, given the atheistic implications of science-led philosophical naturalism, we should all be despairing, nihilistic antinatalists, cheering on our species’ extinction to spare us more horror from our accursed powers of reason. But something funny happened along the way to the postmodern now, which is that our high-tech environment has driven most of us to revert to the mythopoeic trance. We, too, collapse the distinction between subject and object, because we’re not surrounded by the wilderness that science has shown to be the “product” of undead forces; instead, we’ve blocked out that world from our daily life and immersed ourselves in our technosphere. That artificial world is at our beck and call: our technology is designed for us and it answers to us a thousand times a day. Science has not yet shown us to be exactly as impersonal as the lifeless universe and so we can take comfort in our amenities as we assume that while there’s no spirit under any rock, there’s a mind behind every iPhone.


So while we’re aware of the scientist’s abstract concept of the physical object, we don’t typically experience the world as including such absurdly remote quantities. Heidegger spoke of the pragmatic stance as the instrumentalization of every object, in which case we can look at a rock and see a potential tool, a “ready-to-hand” helper, not just an impersonal, undead and “given” object. (This is in contrast to objectification, in which we treat things only as “present-at-hand,” or as submitting to scientific scrutiny. The latter seems to reduce to the former, though, since objectification is still anthropocentric, in that the object is viewed not as a fully independent noumenon, but as a subject of human explanation and that makes it a sort of tool. True objectivity is the torment not of scientists but of those suffering from angst on account of their experience of nature’s horrible indifference and undeadness. True objectivity is just angst, when we despair that we can’t do anything with the world because we’re not at home in it and nature marches on regardless. All other attitudes, roughly speaking, are pragmatic.) In any case, the modern environment surpasses that instrumentalism with infantilization, because we late modernists usually encounter actual artifacts, not just potential ones. The big cities, at least, are almost entirely artificial places. Of course, everything in a city is also physical, on some level of scientific explanation, but that’s irrelevant to how we interpret the world we experience. A city is made up of artifacts and artifacts are objects whose functions extend the intentions of some subjects. Thus, hypermodern places bridge the divide between subjects and objects at the experiential level.


However, that’s only a precondition of infantilization. What is it for an adult to live as a child? To answer this, we need standards of psychological adulthood and infancy. My idea of adulthood derives from the modern myths of liberty and rational self-empowerment. Ours is a modern world, albeit one infected with our postmodern self-doubts, so it’s fitting that we be judged according to the standards set by modern European cultures. The modern individual, then, is liberated by the Enlightenment’s break with the past, made free to pursue her self-interest. Above all, this individual is rational since reason makes for her autonomy. Moreover, she’s skeptical of authority and tradition, since the modern experience is of how ancient Church teachings became dogmas that stifled the pursuit of more objective knowledge; indeed, the Church demonized and persecuted those who posed untraditional questions. The modern adult idolizes our hero, the Scientist, who relies on her critical faculties to uncover the truth, which is to say that the modern adult should be expected to be fearlessly individualistic in her assessments and tastes. Finally, this adult should be cosmopolitan—which is very different from Catholic universalism, for example. The Catholic has a vision of everyone’s obligation to convert to Catholicism, whereas the modernist appreciates everyone’s equal potential for self-determination, and so the modernist is classically liberal in welcoming a wide variety of opinions and lifestyles.


What, then, are the relevant characteristics of an infant? The infant is almost entirely dependent on a higher power. A biological infant has no choice in the matter and her infancy is only a stage in a process of maturation. Similarly, an infantile adult lacks autonomy and may be fed information in the same way a biological infant is fed food. For example, a cult member who defers to the charismatic leader in all matters of judgment is infantile with respect to that act of self-surrender. Many premodern cultures have been likewise infantile and our notion of modern progress compares the transition from that anti-modern version of maturity to the modern ideal of the individual’s rational autonomy, with the baby’s growth into a more independent being.


That’s the theory, anyway. The reality is that modern science is wedded to industry, which applies our knowledge of nature, and the resulting artificial world infantilizes the masses. How so? For starters, through the post-WWII capitalistic imperative to grow the economy through hyper-consumption. Artificial demand is stimulated through propaganda, which is to say through mostly irrational, associative advertising. The demand is artificial in that it’s manufactured by corporations that have mastered the inhuman science of persuasion. That demand is met by mass-produced supply, the products of which tend to be planned for obsolescence and thus shoddier than they need to be.


The familiar result is the rebranding of the two biologically normal social classes: the rich and powerful alphas and everyone else (the following masses). Modern wealth is rationalized with myths of self-determination and genius, since no credible appeal can be made now to the divine right of kings. Mind you, the exception has been the creation of distinct middle classes, which is due to socialist policies in liberal parts of the world that challenge the social Darwinian cynicism that’s implicit in capitalism. Maintaining a middle class in a capitalistic society, though, is a Sisyphean task: it’s like pushing a boulder up a hill we’re doomed to have to keep reclimbing. The middle class members are fattened like livestock awaiting slaughter by the predators that are groomed by capitalistic institutions such as the elite business schools. And so the middle class inevitably goes into debt and joins the poor, while the wealthy consolidate their power as the ruling oligarchs, as has happened in Canada and the US. (For more on what are effectively the hidden differences between democratic liberals and capitalistic conservatives, see here.)


The masses, then, are targeted by the propaganda arm of modern industry, while the wealthy live in a more rarefied world. For example, the wealthy tend not to watch television, they’re not in the market for cheap, mass-produced merchandise, and they don’t even gullibly link their self-worth to their hoarding of possessions in the crass materialistic fashion. No, the oligarchs who come to power through the capitalistic competition have a much graver flaw: they’re as undead as the rest of nature, which makes them fitting avatars of nature’s inhumanity. Those who are obsessed with becoming very powerful or who are corrupted by their power tend to be sociopathic, which means they lack the ability to care what others feel. For that reason, the power elite are more like machines than people: they tend not to be idealistic and so associative advertising won’t work on them, since that kind of advertising construes the consumption of a material good as a means of fulfilling an archetypal desire. Of course, the relatively poor masses are just the opposite: burdened by their conscience, they trust that our modern world isn’t a horror show. Thus, they’re all too ready to seek advice from advertisers on how to be happy, even though advertisers are actually deeply cynical. The masses are thereby indoctrinated into cultural materialism.


Workers in the service industry literally talk to the customer as if she were a baby, constantly smiling and speaking in a lilting, sing-songy voice; telling the customer whatever she wants to hear, because the customer is always right (just as Baby gets whatever it wants); working like a dog to satisfy the customer as though the latter were the boss and the true adult in the room—but she’s not. The real power elite don’t deal directly with lowly service providers, such as the employees of the average mall. Their underlings do both their buying and their selling for them, so that they needn’t mix with lower folk. This is why George H. W. Bush had never before seen a grocery scanner. No, the service provider is the surrogate parent who is available around the clock to service the consumer, just as a mother must be prepared at any moment to drop everything and attend to Baby. The consumer is the baby—and a whining, selfish one she is at that. That’s the unsettling truth obscured by the illusion of freedom in a consumption-driven society. A consumer can choose which brand name to support out of the hundreds she surveys in the department store, and that bewildering selection reassures her that she’s living the modern dream. But just as the democratic privileges in an effective plutocracy are superficial and structurally irrelevant, so too the consumer’s freedom of choice is belied by her lack of what Isaiah Berlin calls positive freedom. Consumers have negative freedom in that they’re free from coercion so that they can do whatever they want (as long as they don’t hurt anyone). But they lack the positive freedom of being able to fulfill their potential.


In particular, consumers fail to live up to the above ideal of modern adulthood. Choosing which brand of soft drink to buy, when you’ve been indoctrinated by a materialistic culture, is like an infant preferring to receive milk from the left breast rather than the right. Obviously, the deeper choice is to prefer something other than limitless consumption, but that choice is anathema because it’s bad for business. Still, in so far as we have the potential to be mature in the modern sense, to be like those iconoclastic early modern scientists who overcame their Christian culture by way of discovering for themselves how the real world works, we manic consumers have fallen far short. Almost all of us are grossly immature, regardless of how old we are or whether consumer-friendly psychologists pronounce us “normal.”


Now, you might think I’ve established, at best, not a one-way dependence of the masses on the plutocrats, but a sort of sadomasochistic interdependence between them. After all, the producers need consumers to buy their goods, just as a farmer needs to maintain his livestock out of self-interest. Unfortunately, this isn’t so in the globalized world, since the predators of our age have learned that they can express the nihilism at the heart of social Darwinian capitalism, without reservation, just by draining one country of its resources at a time and then by taking their business to a developing country when the previous host has expired, perhaps one day returning as that prior host revivifies in something like the Spenglerian manner. Thus, while it’s true that sellers need buyers, in general, it’s not the case that transnational sellers need any particular country’s buyers, as long as some country somewhere includes willing and able customers. But whereas the transnational sellers don’t need any particular consumers and the consumers can choose between brands (even though companies tend to merge to avoid competing, becoming monopolies or oligopolies), there’s asymmetry in the fact that the mass consumer’s self-worth is attached to consumption and thus to the buyer-seller relationship, whereas that’s not so for the wealthy producers.


Again, that’s because the more power you have, the more dehumanized you become, so that the power elite can’t afford moral principles or a conscience or a vision of a better world. Those who come to be in positions of great power become custodians of the social system (the dominance hierarchy), and all such systems tend to have unequal power distributions so that they can be efficiently managed. (To take a classic example, Soviet communism failed largely because its system had to waste so much energy on the pretense that its power wasn’t centralized.) Centralized power naturally corrupts the leaders or else it attracts those who are already corrupt or amoral. So powerful leaders are disproportionately inhuman, psychologically speaking. (I take it this is the kernel of truth in David Icke’s conspiracy theory that our rulers are secretly evil lizards from another dimension.) Although the oligarch may be inclined to consume for her pleasure and indeed she obviously has many more material possessions than the average consumer, the oligarch attaches no value to consumption, because she’s without human feeling. She feels pleasure and pain like most animals, but she lacks complex, altruistic emotions. Ironically, then, the more wealth and power you have, the fewer human rights you ought to have. (For more on this naturalistic, albeit counterintuitive interpretation of oligarchy, see here.)


In any case, to return to the childish consumer, the point is that consumption-driven capitalism infantilizes the masses by establishing this asymmetric relationship between transnational producer and the average buyer. Just as a biological baby is almost wholly dependent on its guardian, the average consumer depends on the economic system that satisfies her craving for more and more material goods. The wealthy consume because they’re predatory machines, like viruses that are only semi-alive, but the masses consume because we’ve been misled into believing that owning things makes us happy and we dearly want to be happy. We think wealth and power liberate us, because with enough money we can buy whatever we want. But we forget the essence of our modern ideal or else we’ve outgrown that ideal in our postmodern phase. What makes the modern individual heroic is her independence, which is why our prototypes (Copernicus, Galileo, Bruno, Darwin, Nietzsche) were modern especially because of their socially subversive inquiries. We consumers aren’t nearly so modern or individualistic, regardless of our libertarian or pragmatic bluster. As consumers, we’re dependent on the mass producers and on our material possessions themselves. We’re not autonomous iconoclasts, we’re just politically correct followers. We don’t think for ourselves, but put our faith in the contemptible balderdash of corporate propaganda. We haven’t the rationality even to laugh at the foolish fallacies that are the bread and butter of associative ads. It doesn’t matter what we say or write; if we enjoy consuming material goods, our subconscious has been colonized by materialistic memes and so our working values are as shallow as they can be without being as empty as those of the animalistic power elite. As consumers, we’re children playing at adult dress-up; we’re cattle that make-believe we’re free just because we routinely choose from among a preselected array of options.


So both technology and capitalism infantilize the masses. By doing our bidding and so making us feel we’re of central importance in the artificial world, technology suppresses angst and alienation. We therefore live not the modern dream but the ancient mythopoeic one—which is also the child’s experience of playing in a magical place, regardless of where the child actually happens to be. And capitalism turns us into consumers, first and foremost, and constant consumption is the very name of the infant’s game, because the infant needs abundant fuel to support her accelerated growth.


A third source of our existential immaturity is inherent in the myth of the modern hero. For many years, this problem with modernism lay dormant because of the early modernists’ persistent sexism, racism, and imperialism. Only white European males were thought of as proper individuals. Their rationalism, however, implied egalitarianism since we’re all innately rational, to some extent, and once the civil rights of women and minorities were recognized, there was a perceptible decline in the manliness of the modern hero. No longer a bold rebel against dogmas or a skeptical lover of the truth, the late-modern individual now is someone who must tolerate all differences. Ours is a multicultural, global village and so we’re consigned to moral relativism and forced to defer to politically correct conventions out of respect for each other’s right to our opinions. Thus, bold originality, once regarded as heroic, is now considered boorish. Early modernists loved to discuss ideas in Salons, but now even to broach a political or religious subject in public is considered impolite, because you may offend someone.


Such rules of political correctness are like parents’ futile restrictions on their child’s thoughts and actions. Western children are protected from coarse language and violence and nudity, because postmodern parents labour under the illusion that their children will be infantile for their entire lifespan, whereas we’re all primarily animals and so are bound to run up against the horrors of natural life sooner or later. Compare these arbitrary strictures with the medieval Church’s laws against heresy. In all three cases (taboos for infantilized adults, protectionist illusions for children, and medieval Christian imperialism), the rules are uninspired as solutions to the existential problem of how to face reality, but the Church went as far as to torture and kill on behalf of its absurd notions. At most, postmodern parents may spank their child for saying a bad word, while an adult who carries the albatross of the archaic ideal of the independent person and so wishes to test the merit of her assumptions by attempting to engage others in a conversation about ideas will only find herself alone and ignored at the party, inspecting the plant in the corner of the room. Still, our postmodern mode of infantilization is fully degrading despite the lack of severe consequences when we step out of bounds.


This is the ethic of care that’s implicit in modern individualism, which is at odds with the modern hunt for the truth. Modernism was originally framed in the masculine terms of a conflict between scientific truth and Christian dogmatic opinion, but now that everyone is recognized as an autonomous, dignified modern person, feminine values have surged. And just as someone with a hammer sees everything else as a nail, a woman is inclined to see everyone else as a baby. This is why, for example, young women who haven’t outgrown their motherly instincts overuse the word “cute”: handbags are cute, as are small pets and even handsome men. This is also why girls worship not tough, rugged male celebrities, but androgynous ones like Justin Bieber. As conservative social critics appreciate, manliness is out of fashion. Even hair on a man’s chest is perceived as revolting, let alone the hair on his back. Men’s bodies must be shorn of any such symbol of their unruly desires, because men are obliged to fulfill women’s fantasy that men are babies who need to be nurtured. Men must be innocent, not savage; they must be eternally youthful and thus hairless, not battered and scarred by the heartless world; they must be doe-eyed and cheerful, not grim, aloof and embittered. Men must be babies, not the manly heroes celebrated by the early modernists, who brought Europe out of the relative Dark Age. Men have been feminized, thanks ironically to the early modern ideal of personal autonomy through reason. As for women themselves, those who must see themselves primarily as care-givers, and who are thus naturally inclined to infantilize men, become child-like in turn, because “care” is reflexive. And so modern women baby themselves, treating themselves to the spa, to the latest fashions and accessories, to the inanities of daytime television, to the sentimental fantasies of soap operas and romance novels, and to the platitudes of flattering, feel-good New Age cults.


.


The Ignorant Baby and the Enlightened Aesthete


Those are three sources of modern infantilization: technology, capitalism, and postmodern culture. I submit, then, that the reason we can be so ignorant as to speak of technoscientific progress, even though scientific theories imply naturalism which in turn implies the unreality of normative values and the undeadness of all processes, is that we lack self-knowledge because we’re infantile. We’re distracted by the games of possessing and playing with our technotoys, because our artificial environment trains us to be babies. And babies aren’t interested in ideas, let alone in terribly dispiriting philosophies such as naturalism with its atheistic and dark existential implications. That’s why we can parrot the meme of modern progress, because we’ve already swallowed a thousand corporate myths by the time we’ve watched a year’s worth of materialistic ads on TV. What’s one more piece of foolishness added to that pile? If we were to look at the myth of progress, we’d see it derives from ancient theistic apocalypticism, and specifically from the Zoroastrian idea of a linear and teleological arrow of historical time. The idea was that time would come to a cataclysmic end when God would perfect the fallen world and defeat the forces of evil in a climactic battle. All prior events are made meaningful in relation to that ultimate endpoint. In that teleological metaphysics, the idea of real progress makes sense. But there’s no such teleology in naturalism, so there can be no modern progress. At best, some scientific theory or piece of technology can meet with our approval and allow us to achieve our personal goals more readily, but that subjective progress loses its normative force. Mind you, that’s the only kind of progress that pragmatists are entitled to affirm, but there’s no real goodness in modernity if that’s all we mean by the word.


The titular ironies, then, are that the so-called technoscientific signs of modern progress are indications rather of the superficiality or illusoriness of the very concept of social progress that most people have in mind, despite their pragmatic attitude, and that the late great modernists who are supposed to stand tall as the current leaders of humanity are instead largely infantilized by modernity and so are similar to the mythopoeic, childlike ancients.


Here, finally, I’ve pointed out that there’s no real progress in nature, since nature is undead rather than enchanted by personal qualities such as meaning or purpose, and yet I affirmed the existential value of personal authenticity. I promised to return to this apparent contradiction. My solution, as I’ve explained at length elsewhere, is to reduce normative evaluation to the aesthetic kind. For example, I say intellectual integrity is better than self-delusion. But is that judgment as superficial and subjective as a moral principle in light of philosophical naturalism? Not if the goodness of personal integrity, and more specifically of the coherence of the worldview that drives your behaviour, is thought of as a kind of beauty. When we take up the aesthetic perspective, all processes seem not just undead but artistically creative. Life itself becomes art and our aesthetic duty is to avoid the ugliness of cliché and to strive for ingenious and subversive originality in our actions.


Is the aesthetic attitude as arbitrary as a theistic interpretation of the world, given science-centered naturalism? No, because aesthetics falls out of the objectification made possible by scientific skepticism. We see something as an art object when we see it as complete in itself and thus as useless and indifferent to our concerns, the opposite being a utilitarian or pragmatic stance. And that’s precisely the essence of cosmicism, which is the darkest part of modern wisdom. Natural things, as such, are complete in themselves, meaning that they exist and develop for no human reason. That’s the horror of nature: the world doesn’t care about us, our adaptability notwithstanding, and so we’re bound to be overwhelmed by natural forces and to perish with just as little warning as we were given when nature evolved us in the first place. But the point here is that the flipside of this horror is that nature is full of art! The undeadness of things is also their sublime beauty or raw ugliness. When we recognize the alienness and monstrosity of natural processes, because we’ve given up naïve anthropocentrism, we’ve already adopted the aesthetic attitude. That’s because we’ve declined to project our interests onto what are wholly impersonal things, and so we objectify and aestheticize them with one and the same act of humility. The angst and the horror we feel when we understand what nature really is, and thus how impersonal we ourselves are, are also aesthetic reactions. Angst is the dawning of awe as we begin to fathom nature’s monstrous scope, horror the awakening of pantheistic fear of the madness of the artist responsible for so much wasted art. The aesthetic values which are also existential ones aren’t merely subjective, because nature’s undead creativity is all too real.



February 1, 2014

Text as Teeter-Totter

Neuropath will always occupy a special yet prickly place in my psyche. The book is special to me because of its genesis, first and foremost, arising as it did out of what (I can now see) was a truly exceptional experience teaching Popular Culture and a bet with my incredulous wife. But it’s also special because of the kind of critical reception it’s since received: I’ve actually come across reviews warning people to take Thomas Metzinger’s blurb, “You should think twice before reading this!”, seriously. I was aiming for something that balanced the visceral on a philosophical edge, so I was overjoyed by these kinds of visceral responses. But I was troubled that no one seemed to be grasping the philosophical beyond the visceral, seeing the implications considered in the book beyond what was merely personal. Then, several years back, someone sent me this link to Steven Shaviro’s penetrating and erudite review of Neuropath. And I can remember feeling as though some kind of essential circuit between author, book, and critic had been closed.


That the book had truly been completed.


Now I’m genuinely honoured to have the opportunity to once again complete that circuit in the flesh at Western University in a couple of weeks’ time. Steven Shaviro has spent his career jamming cutting-edge speculative fiction and speculative theory together in his skull, a semantic Large Hadron Collider, and publishing the resulting Feynman diagrams on the ground-breaking The Pinocchio Theory as well as in his numerous scholarly works. He will be presenting on Neuropath, and I will be responding, at a public lecture on Thursday, February 13th, at 4:30 PM in the North Campus Building, Rm 117. All are welcome.



January 22, 2014

Love, God, and Entropy

An infinitesimal speck becomes everything, and the universe blooms and burns with the hard light of innumerable stars autocannibalizing, finally choking on iron and exploding into clouds that collapse into new stars surrounded by discs of parental debris that they cook and cook, forcing matter to flee into ever more complicated forms to maximize entropy, a constant pressure culminating in the generational replication of structure, and a vast proliferation of different entropic maximizations–the sum of life!–each more efficient than the previous, adaptations stacked upon adaptations, mapping ever more possibilities of structure and dynamics, morphology and behaviour, until the sheer complexity of the latter undoes the integrity of the former, and behaviour begins complicating without material constraint, collapsing the entropic interval, becoming a black hole of a more horrifying sort, an entropy maximizing God–Death–grown to a devouring haze that encompasses stars, galaxies, clusters and superclusters, consuming filaments and more, until passing at long last through its own jaws and fading into the infrared wash that is the end of the universe.


Or you can read this, courtesy of ochlocrat…



January 4, 2014

New and… Improved?

Roger Eichorn here.  I hate to bump Ben’s post off the top spot, but I wanted to let folks know that I’ve just finished a pretty significant rewrite of the prologue and first chapter of my historical fantasy The House of Yesteryear.  You can check them out here, if you’re so inclined.


As always, thanks to everyone who has taken the time to read and offer comments over the nearly two years (!!!) that various drafts of my opening chapters have appeared on the TPB.  I have every intention — come hell or high water — of finishing the book before another two years go by!



January 1, 2014

Ancient and Modern Enlightenment: from Noosphere to Technosphere (by Ben Cain)

Enlightenment is elite cognition, a seeing past collective error and illusion to a hidden reality. But the ancient idea of enlightenment differs greatly from the modern one, and there may be a further shift underway in the postmodern era. I’ll try to shed some light on enlightenment by pursuing these comparisons.


.


Ancient Enlightenment: Monism and Personification


Enlightenment in the ancient world was made possible by a falling away from our mythopoeic, nomadic prehistory. In that Paleolithic period, symbolized by the wild Enkidu in the Epic of Gilgamesh and by the biblical Adam in Eden, there was no enlightenment, since everything was thoroughly personified and so nothing could have been perceived as unfamiliar or alien to the masses. The world was experienced as a noosphere, filled with mentality. Only after the rise of sedentary civilization in the Neolithic Era, when farming began to replace nomadic hunting around 10,000 BCE and allowed for much larger populations, was there a loss of that enchanted mode of experience, which actually depended on a sort of blissful collective ignorance. As a population increases, the so-called Iron Law of Oligarchy takes hold, meaning that social power must be concentrated to avoid civilizational collapse. Dominance hierarchies are established, and those in the lower classes become envious of the stronger and more privileged members, who are sure to display their greater wealth and access to women with symbols of their higher status. By such displays, each social class learns its boundaries, so that the social structure won’t be overridden, which would invite anarchy.


As Rousseau argued, civilization was the precondition of what we might call the sin of egoism. Contrary to Rousseau, prehistoric life wasn’t utopian; at least, objectively, human life in the Paleolithic Era was likely quite savage. But the ancients seem to have had an easier time perceiving the world in magical terms, judging from the evidence of their religions and extrapolating from what we know of children’s experience, given a similar dearth of content occupying their collective memory. Thus, even as they killed each other over trifles, prehistoric people would have interpreted such horror as profoundly meaningful. In any case, I think Rousseau is right that civilization made possible a falling away from a kind of intrinsic innocence. Specifically, the increased social specialization led to an epistemic inequality. As food was stored and more and more people lived together, there was greater need for practical knowledge in such areas as architecture, medicine, sanitation, and warfare. The elites became decadent and alienated from nature, since they found themselves free to indulge their appetites with artificial diversions while specialists took care of the necessities of survival, such as the harvesting of food or the defense of the borders. These elites codified the myths that expressed the population’s mores, but while the uneducated majority clung to their naïve, anthropocentric traditions, the cynical and self-absorbed elites more likely regarded the folk tales as superstitions.


Here, then, was the origin of enlightenment as the opposite of wholesale ignorance—and this was a normative dichotomy. Enlightenment was good and its opposite, mental darkness, was bad. Whereas prior to civilization everyone was enlightened, in a sense, or at least everyone deferred to the shaman’s interpretation of how the spiritual and material worlds are intermixed, civilized people came to believe there’s a secret perspective which alone imparts the ultimate truth, leaving the majority in relative ignorance. As for the content of the enlightened worldview in the ancient world, this was informed by both the egoism and the cynicism that distinguished the hierarchical civilization from the prehistoric past. The content thus had two elements: monism and personification. On the one hand, reality was thought to be a unity, whereas the world appeared to be a multiplicity. Enlightenment was the ability to see past the illusion of change, to the underlying timeless interconnection between all events. Again, in the mythopoeic world, there was no distinction between reality and appearance, because mental projections were given equal weight with the material unfolding of events. The world was a magical place. But the enlightened person had to recover a distorted memory of that childlike, mythopoeic vision, as it were, by theorizing a unity beyond the disenchanted multiplicity that confronted the civilized ancients.


On the other hand, ultimate reality was generally personified. So the absolute unity was called God, equated with the self, and often compared to the particular human who actually ruled the land. That is, the civilizational structure was projected onto the spirit world, and the gods were used as symbols to reassure the ancients that their social order was just. There was such personification even in Buddhism, specifically in the Mahayana variety, according to which Bodhisattvas are worshipped and Buddha nature is thought to take not just an inconceivable and thus impersonal form, but ghostly or celestial as well as physical ones.


Ancient enlightenment thus had to reconcile the urge to personify, which was a remnant of the mythopoeic experience that was exacerbated by the advent of egoism even among the masses, and which the elites came to use for political purposes, with the world’s alien, indifferent oneness. That theoretical oneness expressed especially the elites’ growing alienation from nature and their nostalgia for the presumed innocence of the earlier, nomadic period. Monism made egoism out to be preconditioned by ignorance, since if the world were really an ultimate unity, the apparent self’s independence would be an illusion. But because egoism had numerous social and economic causes, the enlightened worldview retained some anthropomorphic projections onto the unity, to rationalize the nature of the civilized individual. There were degrees of enlightenment, so that one or the other factor, impersonal metaphysical unity or personification, predominated. For example, in the Eastern religions, the anthropomorphisms were stripped away as the enlightened person was thought to experience a transcendent unity, in a purified state of consciousness. Alternatively, the monotheistic Western traditions generally took a personal deity to be the highest principle.


.


Modern Enlightenment: Objectivity and Artificialization


The next epochal change was the birth of modern civilization in the European Renaissance and Scientific Revolution, followed by the Enlightenment and the Industrial Revolution. This transition was marked by profound advances in investigative techniques, which presented the educated upper classes with an altogether impersonal world. Instead of being horrified by this new knowledge, modernists relished the opportunity to conquer a material world that has no prior rights or else they sought refuge in the halfway house of deism. In any case, modernists were forced to reconceptualize the idea of enlightenment. Whereas the ancient kind posited a metaphysical unity that was somehow both transcendent and personal, modernists eventually eliminated personhood altogether, not just in metaphysics but in psychology. And so modern enlightenment is an appreciation of the implications of thoroughgoing metaphysical naturalism. The real world is still a hidden unity and scientists seek to uncover the causal pattern that establishes that unity. Thus, the dichotomy between the reality of the hidden spirit world and the illusion of mundane plurality in the spatiotemporal field of opposites became the split between a rational understanding of nature’s impersonality, as confirmed by the impartiality of cause and effect, and the naïve personification of anything, including ultimate reality or the human self. Enlightened modernists are materialists who think that mind is an illusion and that fundamental reality is bound to be alien to our sensibilities.


However, the conception of enlightenment as a matter of rationality, set off against the darkness of superstition, can’t hold, because rationality is a personal matter which takes for granted the illusion of the personal self. The modern myth of enlightenment as merely the courage to follow the logic and the evidence where they lead can’t be the whole story of the great transition to the modern period. Something else must have happened, not just a rise of rational neutrality, if rationality itself is merely peripheral. Instead of seeing modern enlightenment in terms of the symbol of the Light of Reason, and thus as a mental phenomenon, we should see it as technological: modernists exited the Dark Age through technological advances that literally made the world brighter, as with the commercial use of electricity. More broadly, modern enlightenment is the expansion of the “Light” of Artificiality, which makes for a wealth of historical data points. After all, what makes a dark age dark is the lack of lasting evidence of the culture’s identity, due to massive illiteracy and the absence of durable technologies that tell the tale. All of that changed with the printing press and the computer, for example. A Bright Age, then, is bright with cultural information, and the light rays should be thought of as being transmitted especially to future historians.


Commercial light bulbs were patented in the late 19th century, although scientists had studied electricity as early as 1600 CE. The Age of Enlightenment was primarily an 18th-century period, so the world didn’t literally become much brighter during the modern Enlightenment itself. However, the paradigmatic rationality of Enlightenment intellectuals, especially that of Isaac Newton, led directly to the Industrial Revolution of the late 18th and early 19th centuries, which in turn issued in the invention of the light bulb. So we should look at modern enlightenment as beginning with the myth of rationality and giving way to wonder at the undeniable reality of recent technological advance. First came the light of Reason; then scientists realized that personhood, and thus reason, are illusory. But all along, the modern process was set in motion which replaced the darkness of nature with the light of artificiality (with technological incarnations of culture which endure and testify to our historical identity). Thus, modern enlightenment is only inchoately the dichotomy between neutral (non-personifying) reason and ignorance; the real distinction is between natural, pristine reality, which is dark and monstrous precisely because of its impersonality, and the light we bring to the world by impressing our stamp into it, not subjectively through mere theological interpretation or magical supposition, as in the mythopoeic period, but through the inexorable, objective spread of modern technology.


What’s monumental about modernity isn’t that some white male Europeans learned to think more rigorously, thanks to the scientific methods they invented. Of course, there are such methods, but modern enlightenment shouldn’t be personalized. When you characterize the new kind of enlightenment in that way, you’re left with incoherence, since naturalism won’t support naïve personification. Instead, modern enlightenment must be thought of as a great widening of perspective, so that instead of projecting our ego onto indifferent nature, we eliminate our ego through existential encounters with nature’s monstrosity, which humiliate us and do away with our pretensions. With the self thus vacated, the real world is free to flow through us, as it were. In this case, the glory goes not to the great scientists, regardless of how exoteric modern history is told; the scientific methods, for example, must be part of nature’s self-overcoming on our planet, due to a shift from biological processes to artificial ones.


Scientific methods of thought are algorithms which presage the functions of high technology, as in the computer. In other words, before mass technology there was massive regimentation of intellectual life, whereas prior to the Scientific Revolution, social regimentation was confined to the army, government, farming, and the like, while the business of discovering the nature of reality was still a free-wheeling affair. Ancient philosophy was mostly an artistic kind of speculation, although there are protoscientific aspects of ancient Greek and Indian philosophies. The Presocratics, for example, followed the logic of their hypotheses, however counterintuitive those hypotheses may have been. But what made the Scientific Revolution so special, objectively speaking, was a social transformation. Instead of being ruled mainly by biological norms, such as the instinct to preserve the genes through sexual reproduction, norms which were thinly rationalized by the art of myth-making, a new dynamic was introduced: what Jacques Ellul called the necessity of efficiency as a matter of technique.


All species employ techniques, because they’re adapted to their environment, but the Scientific Revolution was the birth of an impersonal, regimented subculture of cognitive elites, one that’s modeled more and more on the machines made possible by that cognitive labour. In place of personification, mystification, or artistic speculation, there’s surrender to rational technique, to algorithms, and to the other scientific methods (public and repeatable testing of hypotheses, mathematical precision, and so on). It’s as though in depersonalizing ourselves, thanks to skepticism, the disempowerment of the Catholic Church, and so forth, we allowed nature’s impersonality to flow more easily through our social structures. Whereas hitherto, our bodies were governed by evolutionary norms and our minds were consumed by myths and illusions of personhood, which we projected onto nature so that we became doubly deluded, modernists abandoned personification, which freed the mind to mimic what the rest of the universe is doing, namely to flow in what I call an undead (impersonal but not inert) fashion.


We still personify techniques when we think of them teleologically, as having a mentally represented goal. However, even if there’s no divine mind desiring nature to end in some way, natural processes do have ends, which is just to say that there are natural processes, as such, or changes that have initial conditions, transitional periods, and probable points of termination. The more we understand nature, the wider our field of vision until we think of everything as a cosmic whole having a beginning (the Big Bang), a middle (evolution and complexification in space and time), and an end state, such as the universe’s heat death. What we call the scientific methods, then, or the more efficient modern techniques of rational thought, are really—according to the enlightened modernist—an inflowing of some underlying natural process besides biological evolution, one which begins with ultra-rational cognition and continues with the elimination of the noosphere and with the transformation of the biosphere into the technosphere.


.


Counter-Enlightenment and the Return of Mythopoeic Reverie


As long as we’re depersonalizing enlightenment, we should note the Counter-Enlightenment period, which leads from the Romantics and other early critics of modern hyper-rationality to postmodern relativism and general jadedness. I won’t attempt to adjudicate that debate here, but I want to close by reflecting on whether the Counter-Enlightenment should be interpreted as an omen indicating that modern enlightenment will itself be transformed. Again, if we ignore the psychological and social levels of inquiry, since an enlightened modernist must regard them as misleading, we can look at historical developments as stages of some larger process. Natural selection explains the design of living bodies, but not the cultural shifts between elite forms of cognition. From mythopoeic animism, to the middle ground of ancient mystical theism, to modern naturalism, there’s a clear elimination of personhood from grand theories. Moreover, there’s exponential progress in technical innovation, as modernists have come to divorce rationality from artistic interpretation. Rather than seeing herself as similar to a shaman, in being a wise person, healer, or hero for venturing into the unknown, an enlightened modernist is more likely to think of herself as a glorified calculator. Modern cognition is hyper-rational in that, for us, logic is demythologized, and the sciences are separate from the arts and the humanities, which means that scientific cognition is inhuman (objective and neutral). Science is thus the indwelling of natural mechanisms, owing to a breakdown in resistance from religious delusions, resulting in the perfection of the artificial world. Modern geniuses are distorted mirrors held up to undead nature, the reflected image being a technological bastardization of the monstrous original.


And yet we may be witnessing here a cycle rather than a linear progression. Technology may allow us to recover the mythopoeic union of object and subject, so that modern objectivity overcomes itself through its technological progeny. After all, the artificial world caters to our whims and so exacerbates egoism and the urge to personify. Whereas modern enlightenment began with a vision of a lifeless, mechanical universe, the postmodern kind is much less arid and austere. This is because postmodernists are immersed in an artificial world which turns fantasies into realities on a minute-by-minute basis, thus perhaps fulfilling the promise of mythopoeic speculation. For example, if you’re hungry, you may ask your smartphone where the nearest restaurant is, and the phone will speak to you; next, you’ll follow the signs in your car, which adjusts to your preferences in a hundred ways, and you’ll arrive at the restaurant and be served without having to hunt or cook the animal yourself. The prehistoric fantasy was that nature is alive. Modernists discovered that everything is at best undead and certainly devoid of purpose or of mental, as opposed to biological, life. But perhaps postmodernists are realizing that while the world was undead, it’s now being imbued with purpose and brought to nonbiological life by us through technology. Instead of mythologizing the world, we postmodernists artificialize it, and whereas natural mechanisms train us to be animals following evolutionary rhythms, artificial mechanisms may train us to be something else entirely, such as infantilized consumers who recapture the prehistoric sense of being at the world’s all-important center, thanks to our history of taming the hostile wilderness.



