R. Scott Bakker's Blog
July 28, 2011
Hooking in the Nunnery
Aphorism of the Day I: The problem with being a Bible salesman is that everyone thinks they already have one.
Aphorism of the Day II: Theory is the precise observation of the invisible, as opposed to fantasy, the dramatic description of the impossible.
I watched this show on parallel universes last night and was reminded why physicists make for such bad philosophers. Imagine hatching a bunch of flies in a glass jar on your kitchen table. These are pretty ingenious flies, given the puny size of their brains, so they puzzle through the laws of perspective and theorize about what little they can see of the house beyond the kitchen. Soon they begin positing things like 'hallways' and 'living rooms,' given what scant evidence they can gather. Then they begin speculating about what lies beyond these theoretical posits. Soon you have jacuzzis and helicopter pads, wet bars and indoor swimming pools. Then some crazy fly asks, 'What if this isn't the only kitchen?' They buzz in profound thought for several years, then another, even crazier fly asks, 'What if there's an infinite number of kitchens with an infinite number of flies in them?'
And the Fly Discovery Channel decides to make a documentary about this, instead of the kitchen, which is difficult enough to understand as it is.
I know, I'm one to talk. I love theory. I love painting portraits of invisible things. Having a theory means you are only wrong if you want to be–what's not to love about that? No matter what, the images are always fitting.
The world can be a real prick: it's nice to keep some bubbles safe. But televise them?
Everyone loves bubbles, I guess.








July 25, 2011
Walking in the Shadow of the George…
Aphorism of the Day: No man is a biosphere.
Ever since A Dance with Dragons came out my fantasy sales rankings have taken a frickin nose-dive. I'm not complaining, simply because I don't think The Darkness that Comes Before would have even been published were it not for A Game of Thrones. In fact, I sometimes think of The Second Apocalypse as a long, skinny tapeworm in the great belly of A Song of Ice and Fire. George, it sometimes seems to me, is the truly viral meme. People read him then ask, "What next?" and, lo, my name comes up.
Whenever I pitched the commercial premise for the fantasies I would discuss the fantasy continuum, beginning with Harry Potter and ending with me. I would describe The Prince of Nothing as epic fantasy that could not be 'outgrown,' that would continue rewarding the reader even if they went on to complete a literature PhD. "There's this huge market of fallen fantasy readers," I would say, channelling my inner Bill Shatner. "We just… need… to reach them."
Well, maybe not so huge, after all. Just huge enough.
Anyway, I thought I should throw out a word of thanks to all the ASOIAF fans who have done so much to spread the word. Perhaps the TSA tapeworm will grow big enough to merit a tapeworm of its very own.








July 18, 2011
Scratching my Duck Quack
Aphorism of the Day: Think of how you roll your eyes whenever someone utters the words, "If I were you, I would…" Everything's easy for the observer – and for the reader, easier still. This is what makes hard choices in hard circumstances the most difficult thing to convincingly portray in fiction.
Here's an attitude I seem to bump into on occasion:
R. Scott Bakker, for instance, sees fantasy as a result of the mind's inability to cope with the rapid change and rationalistic thinking of modernity, but I think it's best to avoid this kind of thinking because fantasy literature has been around for a really long time; there was no distinction between imaginative and realist forms in literature for a good while, and seeing fantasy and science fiction as some sort of aberration strikes me as foolish.
The notion that fantasy literature is not only as old as literature itself, but obviously so, is painfully common. The idea, again, is that different species of writing are best defined according to relations of resemblance. So you compile ad hoc lists of the things you find in The Lord of the Rings and the things you find in Beowulf and… If it looks like a duck, the assumption is, then it is a duck. The upshot is that you can claim that the fantasy you write is the Ur-Literature, that you're sitting on Poppa Homer's lap (hoping that it's his cellphone you feel in his pocket).
But what if that duckish thing is actually a decoy? The cliché saying, remember, is, If it looks like a duck, and it quacks like a duck, then it's a duck. In other words, it not only has to look like a thing, it has to do like a thing as well. Expressed this way, the gulf between The Lord of the Rings and Beowulf couldn't be greater.
A 'book,' remember, is simply conceptual shorthand for families of related readings. Once you realize that the meaning is all in the audience's collective head, then you can see the perils of using formal resemblances as your yardstick for grouping what belongs to what. In fact, the identical piece of code can result in profoundly different reading experiences (something Borges was fond of playing with on occasion).
The crazy remarkable thing about fantasy literature is the way it utilizes the forms of premodern scripture to do something diametrically opposite. Where scripture is the truest of the true literature, fantasy is the falsest of the false. Similar forms, completely different sets of cognitive commitments, and drastically different cultural roles. This is why I think fantasy is a kind of canary in the cultural coalmine: nowhere do we see the socio-historical rupture of the Enlightenment with greater clarity. Use the kinds of anthropomorphic ontologies you find in pre-Enlightenment scriptures to structure your fictional settings and you find yourself writing the most fictional fiction.
Is fantasy as old as the Bible? To say yes is to say something genuinely foolish, to prioritize appearances over consequences, and just as importantly, to overlook the constitutive role of the audience, culture, and history in the production of meaning. Anthropomorphic ontologies are as old as the Bible, even older. What readers make of those ontologies depends on a supercomplicated soup of social and historical considerations.
Fantasy fiction, on the other hand, is only as old as its audience. Unless you want to call the ancient Greeks, Israelites, Hindus, etc., fantasy readers, you need to bite the bullet and admit that it is young – and in a very telling way.








July 14, 2011
Running Over the Pedestrian (to shake his hand)
Aphorism of the Day: Never forget that the boot on your neck is either there for your own good, a figment of your imagination, what nature intended, or an unfortunate accident of history, whichever you happen to find the most convincing in a moment of weakness.
The Devil's greatest trick, the old platitude goes, was convincing the world he doesn't exist. In today's Globe and Mail, Russell Smith argues that the distinction between 'brows' is no longer real or significant. As he writes:
It interests me that a commercial magazine thinks these are points that still need to be scored. This question of what is an embarrassing taste refuses to go away. Despite all the postmodern theorizing about the erasure of high and low, despite the triumph of mass culture, people still feel guilty or inadequate about their lack of intellectual cool.
As far as I'm concerned, the 'postmodern erasure' he mentions did little more than sanction a certain rhetorical egalitarianism, one where, in the course of teaching students what to take seriously, you have to make sure not to discriminate against the silly. The serious is still serious, and the silly is still silly, it's just no longer as polite to openly laugh at the latter. The kernel of the quote, the one that betrays his basic misunderstanding of the phenomena, lies in the last line – the fact that people still feel guilty.
He talks as though the theoretical delegitimation of our traditional attitudes regarding high and low art has dispensed with the problem. The problem, however, is one of actual attitudes, not theories. And as he states himself, those attitudes still exist. So the quote, which suggests that because the problem has been solved by theory the attitudes are unjustified, actually inverts the situation. What he is actually saying is that the theory changed little or nothing.
The problem is that people use cultural products to identify themselves over and against other people. This is human nature, which is to say, it will always be endemic to contemporary culture. The solution has to be institutional and educational: both must be designed in such a way as to mitigate this paleolithic relic of human psychology. My argument is that in their present form both seem designed to aggravate the dysfunction rather than solve it.
The fact that high-brow snobbery seems to poke through so many corners of Smith's piece illustrates, I think, just how pernicious this trap has become for his tribe: the postmodern rhetorical commitment is used to obscure actual evaluations. His preference for things highbrow, his disdain for the 'entertainment spewed at you by massive corporations,' fairly oozes from the text. (I guarantee you the vast majority of 'highbrow' cultural products he identifies with are also some kind of corporate spew – just the good kind.) Still he remains unconcerned, because he thinks his theoretical commitments preclude the possibility of highbrow chauvinism.
This is when cultural attitudes become genuinely aristocratic: there is no assumption of cultural superiority more complete, and more toxic, than the one that pretends to make no such assumption. The value attributions are just as automatic, just as thoughtless, but lie beyond the pale of self-criticism because of the conviction that the criticism comes ready made. 'I'm a critical thinker, therefore my attitudes have to be critical.' There is no chauvinism more inescapable than the one that confuses itself for openness.
What Smith is arguing is the cultural analogue to, 'I'm not racist, but those x…'








July 7, 2011
The Game Blame
So I've begun teaching that creative writing class at Fanshawe: it's been awesome so far, though it's reminded me of just how bad a monologuer I can be. When 50 minutes vanishes into the echo of your own voice you know it's time to start shutting up… If you can.
And I can't.
Neth, one of the most perceptive and intellectually honest genre bloggers on the web, has come out with his review of The White-Luck Warrior. Even though the book didn't work for him, I can't say I disagree with any of his criticisms. It does make me think I need to change up my tactics in my interviews, however! No matter how nuanced I think I'm being, I seem to continually bump into the claim that I continually 'blame the reader' for their difficulties with my books. The fact is, I do blame the reader, but as much for their enjoyment as for their 'difficulties.'
This just makes me think that I'm not doing a good job communicating the sensibility I take to writing and reading. My point has always been that fiction is intrinsically ambiguous, that the reader imposes as much as they are imposed upon. When it comes to defending myself from accusations of sexism, I have no choice but to hold the reader to account. I always try to be careful with my qualifications, to point out that pattern imposition is simply what all humans do all the time. It's unconscious and effortless, so it feels as though nothing is being 'done' at all, that the meanings are simply 'discovered as is.'
But we now know this sense of 'semantic givenness' is illusory, through and through. This is why the readerly default is to always blame the book, and why Amazon reviews take the form they do. Somehow, in the course of explaining this situation, everything I say gets boiled down to the assumption that I simply hold the contrary view, which is to blame the reader, and that I think that anyone who doesn't like my books is simply 'too stupid to get them.'
In point of fact, I blame everybody involved. Me. You. Obama. The evangelical Right. I actually think that I bite quite a few bullets in my interviews, but for whatever reason this seems to get blotted out. It probably has a lot to do with the context: I am defending myself and my writing, so it would be easy to assume that I'm not really biting any bullets. All I can say is that I spend way, way more time upbraiding myself for my compositional shortcomings than anything. I really think I'm trying to do something that is very, very difficult, so much so that falling on my face is inevitable. I look at it as a game of averages, and most certainly not one of good readers versus bad.
'Books' as bearers of meaning (as opposed to bearers of code) are a fiction, after all, a conceptual shorthand for multitudinous and multifarious readings. I write readings. I throw narrative patterns at people, and quite often the patterns received are at odds with the patterns transmitted. Given that our brains, both yours and mine, are only three pounds, go figure.








July 4, 2011
Philosophy X
Aphorism of the Day: Philosophers are beetles as much as the rest of us, aimless and slow-moving, seeing only what's immediately before them. All that differs is the abstraction and relevance of the pattern on the floor.
So every few months I have this burst of philosophical insight/fabrication that lights up my nucleus accumbens and makes me traipse around thinking I've solved a bunch of the world's deepest philosophical problems. Then the trickle of sober reflection begins, questions like cups of steaming hot Joe, slowly reviving the bitchier angels of my assumptive nature.
Now that the black box of the brain has been cracked open, we will begin to tinker, and we can expect that science will eventually renovate the inner world as radically as it has the outer. I still think this is probably an inevitability, though the counterarguments some of you raised have reminded me that any number of things could happen in the interim. I've also softened my hard stance against the technological optimist: even if technology is the rope that will likely hang us, it remains the only rope we've got. Maybe some intermediate stage of neurotechnology will give us the intelligence we need to find our way through our crazy future. I remain pessimistic, but it's certainly a possibility.
The two acronyms I coined, UNNF (for Universal Natural Neurophysiological Frame) and IANF (for Idiosyncratic Artificial Neurophysiological Frame) got me thinking about 'the boundary conditions' of consciousness–again. Most of the information processed by the brain never reaches consciousness: out of three pounds, we're stranded with a few ounces, the famous NCCs, the neural correlates of consciousness–or what some researchers call the thalamocortical system. Our UNNF is spread like luminous cobwebs through the filamentary gloom of our brain.
The 'Extinction Argument' I've been making is that human identity will not be conserved across substantial changes to our UNNF. What struck me is simply the force of this argument. Of course, radical changes to the neurophysiology of consciousness will translate into radically altered consciousnesses–structures and modalities of experience that could be beyond our ability to comprehend. And this got me thinking…
My argument for several years now–the Blind Brain Hypothesis–has been that the 'information horizons' of the thalamocortical system can actually explain some of the most baffling features of conscious experience. The conceptual centerpiece of this argument is something I call encapsulation, the way information horizons seem to pinch experience into self-contained 'bubbles.'
My problem has always been one of making this argument as obvious to others as it seems to me. I've come to realize that the gestalt shift I'm advocating is by no means an easy one, and that absent any institutional authority I can only sound like yet another crackpot with another theory of consciousness. There is no escaping the noose of value attribution, believe you me! (I actually submitted a paper comparing consciousness to coin tricks to The Journal of Consciousness Studies around five years ago, one which the editor was quite enthusiastic about, but the peer reviews I received made me think the article had been dismissed on a quick skim of the abstract.)
So I started wondering if there was a way I could yoke the force of my Extinction Argument (EA) to the Blind Brain Hypothesis (BBH). The force of the former, I thought, turns on the differences between our UNNF and the multifarious IANFs to come–in other words, a kind of expansion of consciousness into inexplicable terrain. And this got me thinking about examples of 'diminished consciousness.' Neuropathology is the uranium mine of consciousness studies, the place where many researchers derive their fuel. Agnosia and neglect are among the most popular breakdowns considered.
In cases of agnosia and neglect, a boundary of consciousness that was once coterminous with the rest of humanity suddenly collapses, robbing the victim of basic experiences and competencies. These disorders have the effect of 'shrinking consciousness,' of rewriting the thalamocortical system's information horizons. Not only do certain 'boundaries of consciousness' become clear, the functional roles played by various neural resources are also thrown into relief. The loss of neural circuitry packs an experiential wallop. The smallest of lesions can transform how we experience ourselves and the world in catastrophic and often baffling ways.
These cases of 'shrunken consciousness' demonstrate the profound role thalamocortical information horizons play in shaping and structuring conscious experience. To understand ENCAPSULATION, you have to appreciate the way the neural correlates of consciousness necessarily suffer a neurophysiologically fixed form of frame neglect. Unless you think information horizons only become efficacious once pathology renders them obvious, the kinds of local and idiosyncratic experiential scotomata (unperceived absences) resulting from neuropathology simply must have global and universal counterparts.
If so, then what are they? I think encapsulation answers this question.
The way some sufferers of unilateral neglect lose all sense of 'extrapersonal space' on their left or right, to the point of being unable to recognize they have lost that space, demonstrates what I call the 'holophenomenal' character of experience, the way it's always 'full' (so that we require secondary systems to detect absences). For each neuromodular component of the thalamocortical system, the correlated experience has to be 'full' simply because those components cannot process information they cannot access: this is why our visual field has no proper 'edge' (the way it does when cinematically represented as a peephole). Only interrelated systems (the varieties of memory in particular) can generate the intimation of something more or something missing.
Now, consider how all the deep structural mysteries of consciousness–unity, transparency, nowness, self-identity–turn on the absence of distinctions.
Consciousness, for instance, seems 'unified,' holistic insofar as everything seems internally related to everything else: a change in this feature seems to bring about a change in that. This is one of the reasons the kinds of experiential distortions arising from brain injuries seem so counterintuitive: the thalamocortical system has no access to its componential structure, to the diverse, externally related subsystems that continually feed and absorb its information. When one of those subsystems is knocked out, the information feed vanishes, and the bandwidth of consciousness shrinks–you shrink–and in ways that seem impossible (if you can recognize the loss at all) because our thalamocortical system is a surface feeder, utterly unable to digest what goes on beyond its information horizons. All experience is 'given,' and absent any parallel experience of its information donors (of the kind we have of our perceptual systems, for instance: blindness makes sense to us), everything is pinched into a queer kind of absolute identity. Despite the constitutive differences in the information fed forward, a background of sameness haunts all of it–what I call 'default identity.'
What is default identity? This is the Escher twist in the portrait. Differences in conscious experience reflect differences in neural information. There is no 'experience of' identity or difference absent that information–there is no experience at all–and this lack, I'm suggesting, leverages a kind of higher order, structural identity. In the same way unilateral neglect causes individuals to confuse half of their extrapersonal space with the whole, 'temporal frame neglect' (the 'nonpathological neglect' forced on consciousness by the information horizons of the thalamocortical system) causes individuals to confuse a moment of time with all the time there is. Each moment is at once 'just another moment' and the only moment, which is to say, the same. Thus the paradox of the Now.
I know this must sound like a kind of 'neuro-existentialism,' and therefore hinky to those hailing from more scientifically institutionalized backgrounds. But my thesis is empirical: Some of the most perplexing structural features of consciousness can be explained as the result of various kinds of neurophysiologically fixed 'agnosias.' I like to think its strangeness is simply a function of its novelty: I'm not sure anyone has explored this "What would it be like to be a thalamocortical system?" line of thinking before, and the ways lacking certain kinds of information can structure experience.
At the same time I find it exciting the way these speculations seem to bear on so many 'continental philosophical' problematics. I could see writing papers, for instance, reinterpreting Heidegger's hypokeimenon, or Derrida's differance, or Adorno's nonidentical as various expressions of encapsulation; or how the intuitions that underwrite Kant's notion of the transcendental, the way consciousness stands both inside and outside the world of experience, derive from encapsulation.
At the same time it seems to provide a single explanatory rubric for a whole array of more general philosophical problems, especially regarding intentionality. Once you understand that consciousness is cross-sectional, the result of a set of integrated systems that access only fractions of the brain's information load, and as such has to make sideways sense of things (because these cross-sections are all it has to work with) then a whole shitload of apparent conundrums seem to evaporate, including the problem of why we find these things so problematic in the first place!
But none of this, of course, comes close to tackling the kernel of the 'Hard Problem,' which is simply why these tangles of electrified meat should be conscious at all. But it does offer tantalizing suggestions as to why we find this, the holy grail of philosophical questions, so difficult to answer. Consciousness could very well be a coin trick that the brain plays on itself.
Lastly, I should point out that as fascinating as I find it all, I actually can't bring myself to believe any of this. Which is why I decided to make it the protagonist's 'fictional philosophy' in Light, Time, and Gravity.
My 'Philosophy X' is his heartbreaking burden.








June 29, 2011
Future X
Definition of the Day I - Human: A biological system connecting the dinner plate to the shitter.
Definition of the Day II – Writer: A biological system convinced that it does more than simply connect the dinner plate to the shitter. See: Flatulence.
Quite the dialogue we've had going the past several days. For my part, I've pared back my commitment to Argument (1) from the previous post. I really don't think anyone has seriously challenged Argument (2) regarding identity, which, I think anyway, is the core of the dilemma facing us.
Since all our rationales turn on our neurophysiology as it exists, I just don't see how anyone can argue that it would be 'better' to leave our neurophysiology behind. If what we call 'morality' is a product of our neurophysiology, then abandoning that neurophysiology entails abandoning that morality. How can it be 'better' to leave BETTER behind?
This just underscores the real problem faced by the technological enthusiast: they really don't know what they're arguing for… Why should anyone embrace some Future X, especially when all we know for certain is that we will cease to exist? Because there's a good chance the incomprehensible aliens that follow us will be 'more intelligent' (whatever that means post-UNNF, our universal natural neurophysiological frame)?
Why should anyone give a damn about them?
Anyway, here are the links to a couple of more or less apropos pieces I wrote for Tor.com a couple years back. (Thanks Bhaal!)
A Fact More Indigestible than Evolution I
A Fact More Indigestible than Evolution II








June 27, 2011
Encircled by Armageddon
Aphorism of the Day: Holding a fool accountable is like blaming your last cigarette for giving you cancer. Behind every idiot stands a village.
This is a horse I've been flogging for several years now, the way that the picture(s) offered by the technological optimists seem to entail our doom as much as the picture(s) offered by the pessimists. To crib a metaphor from Marx, we will be hanged by the very rope that is supposed to save us.
It seems to me that the two best ways to attack the argument from the two previous posts are to argue that the biological revolution I describe simply won't happen, or that if it does happen, it doesn't entail the 'end of humanity.'
My argument for the first is simply: in the absence of any obvious and immediate deleterious effect, any technology that renders competitive advantages will be exploited. My argument for the second is simply: identity is not conserved across drastic changes in neurophysiology.
The inevitability of the former entails the 'drastic changes' of the latter. Even though 'loss of identity' counts as an 'obvious deleterious effect,' it does not count as an immediate one. Creeping normalcy will be our undoing, the slow accumulation of modifications as our neurophysiology falls ever deeper into our manipulative purview.
The question of whether we should fear this potential likelihood is the same as asking whether we should fear that other profound loss of identity, death. Either way, whatever the paradise or pandemonium that awaits us on the other side, it ain't human.
POST-SCRIPT: Here's an interesting little tidbit from The Atlantic that a buddy just sent me. We're standing at the wee beginning of Enlightenment 2.0, and we're already talking about overturning the entire foundation of our whole legal system.








June 25, 2011
On the Varieties of Enlightenment
Aphorism of the Day: The fantasy fiction of the 22nd century will be living in a human body with a human brain.
This aphorism, by the way, is kind of what the Framers in Disciple of the Dog believe: that the world we live in is a massive fantasy role-playing game.
So the debate in the comments on "What is the Semantic Apocalypse?" has got me thinking about ways to clarify the position I'm offering. So here's a different comic strip:
Enlightenment 1.0, whose dream we happen to be living right now, turned on the wholesale critique of traditional knowledge. The authority of the ancient sources was thrown overboard, and we turned to reason and observation for our answers. In the early days, this revolution bubbled with intellectual promise: some thought reason alone was sufficient for knowledge, and various 'systems' were devised to provide knowledge of unobservables like truth, beauty, God, and so on. The plethora of competing systems, and the abject inability of any of their partisans to resolve their disagreements, quickly made this secondary Enlightenment project seem like a dead end. The consequences of this Enlightenment 1.1, however, were quite extreme. By dragging so many implicit norms into the light of explicit reflection and failing to make any positive, consensus commanding determinations whatsoever, E 1.1 managed to demolish all of our old ways of making sense of our life without providing anything new. Postmodernism attempted to make a virtue out of this failure: if received cultural norms can't be trusted, then we must innovate our way into our normative future, make ourselves meaningful. Call this Enlightenment 1.2. (I see postmodernism as a radicalization of romanticism).
The original E 1.0 insight, meanwhile, kept chugging along, producing what has been the greatest explosion in human knowledge in the history of the species. Reason and observation, a.k.a. science, became the institutional backbone of society, giving us the grip we needed to throttle the planet and extort any number of technological and organizational goodies. But since human meaning turned on unobservables, it had nothing to offer us save tools to pursue whatever purpose we cooked up for ourselves. E 1.0, in other words, provided us with an endless array of means, but absolutely no end or goal. Thus modern consumer society: the pointless accumulation of means. Biological imperatives become the new consensual foundation: all the norms and laws and rights that make up our new cage are (implicitly or explicitly) rationalized as means, as ways to maximize the satisfaction of these biological imperatives, while leaving the question of meaning to individuals. E 1.0 led us to a promised land where we were no longer the chosen people: small wonder we have such a hankering for pre-Enlightenment worlds! Fantasy reminds us of what it was like to live in a meaningful reality.
If Enlightenment 1.0 allowed us to escape our normative prison, only to strand us in a meaningless world, the question is one of how Enlightenment 2.0 – which is happening as we speak – will transform things. E 2.0 is set to tear down our biological prison the way E 1.0 tore down our traditional one. We escaped tradition to find ourselves trapped by biology. If we escape biology does that mean we are finally free?
I chose the rhetoric of constraint and escape intentionally, because it seems to be the register that E 2.0 enthusiasts are most inclined to use. Nothing like 'emancipation' to sell toothpaste. But the fact is, constraints enable. The English language is a system of constraints. All languages are. 'Escape' any of those systems, and you escape communication, which is to say, imprison yourself in unintelligibility.
Humanity is also a system of biological constraints. Breaking out of the 'human system' is relatively easy to do, so much so that many of us live in perpetual terror of being 'freed.' Suicide, as they say, is easy.
The question is, what prevents E 2.0 from being a form of mass collective suicide? Is it the incrementalism of the transformation? Do E 2.0 enthusiasts think that the gradualism of the change will allow them to somehow conserve their identities across profound systematic transformations? This strikes me as wishful thinking.
I hate to say it, but the pro E 2.0 arguments always strike me as out-and-out religious: "Who you are now will pass, but after, oh what joy! Paradise awaits, my friend! Imagine a world without tears!"
Hmmm… I think the only thing we can say with any certainty is that who we are now will no longer exist, and that this sounds suspiciously like death. Whether you shut your brain down, or rewire it to tangle the stars: either way you are gone.
And this is something we should welcome with open arms? Because we have faith in some vision of techno-paradise?
To be fair they think their argument has a rational basis. E 2.0 enthusiasts typically rely on a straightforward optimistic induction: by and large, technological innovation has improved our 'quality of life' in the past, therefore, radical technological innovation will radically improve our quality of life in the future.
I don't think the argument is remotely convincing because of the disjunct between 'technological innovation' and 'radical technological innovation.' This inductive chasm deepens once you make a distinction between tweaking our environment, exo-technological innovation, and tweaking ourselves, endo-technological innovation. 'By and large, exo-technological innovation has improved our quality of life, therefore endo-technological innovation will improve our quality of life in the future' does not follow simply because 'our quality of life' turns on a humanity that 'endo-technological innovation' promises to render archaic and ultimately extinct.
Their argument really is: By and large, exo-technological innovation has improved our quality of life, therefore endo-technological innovation will… well, we can't say 'improve' because that is an artifact of our standards, which will almost certainly be thrown out the window, and we can't say 'our quality of life,' because we will no longer exist as we exist now, and no one can say whether we'll be able to conserve our personal identity in any meaningful sense, let alone what 'quality of life' might mean to whatever it is that supplants us.
So their claims of techno-paradise might as well be declarations of faith, the substance of things hoped for…
The rest of us should be shitting our drawers.








June 21, 2011
What is the Semantic Apocalypse?
Here's the comic book version (the only version, given the kinds of complexities these issues generalize over):
In social terms, you could suggest that the Semantic Apocalypse has already happened. Consumer society is a society where liberal democratic states have retreated from the 'meaning game,' leaving the intractable issue to its constituents. Given the interpretative ambiguity that permeates the Question of Meaning, there is no discursive or evidential way of commanding any kind of consensus: this is why states past and present had to resort to coercion to promote meaning solidarity. Absent coercion, people pretty much climb on whatever dogmatic bandwagon appeals to them, typically the ones that most resonate with their childhood socialization, or as we like to call it, their 'heart.'
The result of this heterogeneity is a society lacking any universal meaning-based imperatives: all the 'shoulds' of a meaningful life are either individual or subcultural. As a result, the only universal imperatives that remain are those arising out of our shared biology: our fears and hungers. Thus, consumer society, the efficient organization of humans around the facts of their shared animality.
In biological terms, my fear is that the Semantic Apocalypse is about to happen. Despite the florid diversity of answers to the Question of Meaning, they tend to display a remarkable degree of structural convergence. This is what you would expect, given that we are neurologically wired for meaning, to look at the world in terms of intent, purpose, and propriety. Research in this last, the biology of morality, has found striking convergences in moral thought across what otherwise seem vast cultural chasms.
Even though we cannot agree on the answer, we all agree on the importance of the question, and the shapes that answers should take – even apparently radical departures from traditional thought, such as Buddhism. No matter how diverse the answers seem to be, they all remain anchored in the facts of our shared neurophysiology.
So what happens when we inevitably leave that shared neurophysiology behind?
The breakdown of traditional solidarity under the reflective scrutiny of the Enlightenment was recouped by the existence of what might be called a bigger box: the imperatives we share by virtue of our common neurophysiology. We could do without shared pictures of meaning (as traditionally defined) because we could manage quite nicely – flourish even – relying on our common instincts and desires.
The million dollar question is really one of what happens once that shared neurophysiology begins to fragment, and sharing imperatives becomes a matter of coincidence. It has to be madness, one that will creep upon us by technological degrees.
Why does it have to be madness? Because we define madness according to what our brains normally do. Once we begin personalizing our brains, 'normally do' will become less and less meaningful. 'Insanity' will simply be what one tribe calls another, and from our antiquated perspective, it will all look like insanity.
It's hard to imagine, I admit, but you have to look at all the biologically fixed aspects of your conscious experience like distinct paints on a palette. Once the human brain falls into our manipulative purview, anything becomes possible. Certain colours, like suffering and fear, will likely be wiped away. Other colours, like carnal pleasure or epiphany, will be smeared across everything. And this is just the easy stuff: willing might be mixed with hearing, so that every time a dog barks, you have the sensation of willing all creation into existence. Love might be mutated, pressed in experiential directions we cannot fathom, until it becomes something indistinguishable from cruelty. Reason could be married to vision, so that everything you see resounds with Truth. The combinatorial possibilities are as infinite as are the possibilities for creating something genuinely new…
And where does the slow and static 'human' fit into all this? Nowhere I can see.
And why should any human want to embrace this, when they are the ladder that will be kicked away? How could reasons be offered, when rationality finds itself on the chopping block with everything else? How do you argue for madness?
Perhaps our only recourse will be some kind of return to State coercion, this time with a toybox filled with new tools for omnipresent surveillance and utter oppression. A world where a given neurophysiology is the State Religion, and super-intelligent tweakers are hunted like animals in the streets.
Maybe that should be my next standalone: a novel called Semantica… I could set it up as a standard freedom-fighter tale, then let the sideways norms slowly trickle in, until the reader begins rooting for totalitarian oppression.







