R. Scott Bakker's Blog, page 14

May 13, 2015

Updated Updates…

My Posthuman Aesthetics Research Group talk has been pushed back to June 2nd. I blame it on administrative dyslexia and bad feet, which is to say… me. So, apologies all, and a heartfelt thanks to Johannes Poulsen and comrades for hitting the reset button.



May 9, 2015

Updates…

Regarding the vanishing American e-books, my agent tells me that Overlook has recently switched distributors, and that the kerfuffle will be sorted out shortly. If you decide to pass this along, please take the opportunity to shame those who illegally download. I’m hanging on by my fingernails, here, and yet the majority of hits I get whenever I do my weekly vanity Google are for links to illegal downloads of my books. I increasingly meet fools who seem to think they’re ‘sticking it to the man’ by illegally downloading, when in fact, what they’re doing is driving commercially borderline artists–that is, those artists dedicated to sticking it to the man–to the food bank.


As for pub dates, still no word from either Overlook (who will also be handling the Canadian edition) or Orbit. Sorry guys.


Also, I’ll be in Denmark to give a seminar entitled “Writing After the Death of Meaning” for the Posthuman Aesthetics Research Group (a seriously cool handle!) at Aarhus University on the thirteenth of this month. I realized while writing this that I had simply assumed it wasn’t open to the public, but when I reviewed my correspondence, I couldn’t discover any reason for assuming this short of its billing as a ‘seminar.’ I’ve emailed my host asking for clarification, just in case any of you happen to be twiddling your thumbs in Denmark next Wednesday.



May 3, 2015

Le Cirque de le Fou

[Image: Crusty]


There’s nothing better than a blog to confront you with the urge to police appearances. Given the focus on hypocrisy at Three Pound Brain, I restrict myself to blocking only those comments that seem engineered to provoke fear. But as a commenter on other blogs, I’ve had numerous comments barred on the basis of what was pretty clearly argumentative merit. I remember on Only Requires Hate, I asked Benjanun Sriduangkaew what criteria she used to distinguish spurious charges of misogyny from serious ones, a comment that never saw the light of day. I’ve also seen questions I had answered rewritten in a way that made my answers look ridiculous. I’ve even had the experience of entire debates suddenly vanishing into the aether!


Clowns don’t like having their make-up pointed out to them–at least not by a clown as big as me! This seems to be particularly the case among those invested in the academic humanities. At least these are the forums least inclined to let my questions past moderation.


This, combined with the problems arising from the vicissitudes of the web, convinced me long ago to keep Word documents as a record I could go back to if I needed to.


So, for your benefit and mine, here’s a transcript of how the comment thread to Shaun Duke’s response to “Hugos Weaving” (which proved to be a record-breaking post) should read:


.


BAKKER: So you agree that genre both reaches out and connects. But you trust that ‘literature’ does as well, even though you have no evidence of this. Like Beale, you have a pretty optimistic impression of yourself and your impact and the institutions you identify with. You find the bureaucracies problematic (like Beale), but you have no doubt the value system is sound (again like Beale). You accost your audiences with a wide variety of interpretative tactics (like Beale), and even though they all serve your personal political agenda (again, like Beale), you think that diversity counts for something (again, like Beale). You think your own pedagogic activity in no way contributes to your society’s social ills (like Beale), that you are doing your bit to make the world a better place (again, like Beale).


So what is the difference between you and Beale? Pragmatically, at least, you both look quite similar. What makes the ‘critical thinking’ you teach truly critical, as opposed to his faux critical thinking? Where and how does your institution criticize and revise its own values? Does it take care to hire genuine critics such as myself, or does it write them off (the way all institutions do) as outgroup bozos, as one of ‘them’?


More importantly, what science do you and your colleagues use to back up your account of ‘critical thinking’? Or are you all just winging it?


Your department doesn’t sound much different from mine, 20 years back, except that genre is perhaps accorded a more prominent role (you have to get those butts in seats, now, for funding). The only difference I can see is that you genuinely believe in it, take genuine pride in belonging to such a distinguished and enlightened order… the way any ingroup soldier should. But if you and your institution are so successful, how do you explain the phenomenon of conservative creep? Even conservative commentators are astounded that the Great Recession actually seems to have served right-wing interests.


.


DUKE: This is the point where we part company. I am happy to have a discussion with you about my perspectives of academia, even if you disagree. I’m even happy to defend what I do and its value. But I will not participate in a discussion with someone who makes a disingenuous (and fallacious) comparison between myself and someone like Beale. The comparison, however rhetorical, is offensive and, frankly, unnecessarily rude.


Have a good day.


.


[excised]


BAKKER: Perfect! This is what the science shows us: ‘critical’ almost always means ‘critical of the other.’ Researchers have found this dynamic in babies, believe it or not. We can call ourselves ‘critical thinkers,’ but really this is just cover for using the exact same socio-cognitive toolbox as those we impugn. Group identification, as you’ve shown us once again, is primary among those tools. By pointing out the parallels between you and Beale, I identified you with him, and this triggers some very basic intuitions, those tasked with policing group boundaries and individual identities. You feel ‘disgusted,’ or ‘indignant.’


Again, like Beale.


Don’t you see Shaun? The point isn’t to bait or troll you. The point is to show you the universality of the moral cognitive mechanisms at work in all such confrontations between groups of humans. Beale isn’t some odious, alien invader, he is our most tragic, lamentable SELF. Bigotry is a bullet we can only dodge by BITING. Of course you’re a bigot, as am I. Of course you write off others, other views, without understanding them in the least. Of course you essentialize, naturalize. Of course you spend your days passing judgement for the entertainment of others and yourself. Of course you are anything but a ‘critical thinker.’


You’re human. Nothing magical distinguishes you from Beale.


.


Shaun does not want to be an ingroup clown. No one reading this wants to be an ingroup clown. It is troubling, to say the least, that the role deliberative cognition plays in moral problem-solving is almost entirely strategic. But it is a fact, one that explains the endless mire surrounding ethical issues. Pretending will not make it otherwise.


If Shaun knew anything scientific about critical thinking, he would have recognized what he was doing; he would have acknowledged the numerous ways groupishness necessarily drives his discourse. But he doesn’t. Since teaching critical thinking stands high among his group’s mythic values, interlocutors such as myself put him in a jam. If he doesn’t actually know anything about critical thinking, then odds are he’s simply in the indoctrination business (just as his outgroup competitors claim). The longer he engages someone just as clownish, but a little more in the scientific know, the more apparent this becomes. The easiest way to prevent contradiction is to shut down contrary voices. The best way to shut down contrary voices is to claim moral indignation.


Demonizing Beale is the easy road. The uncritical, self-congratulatory one. You kick him off your porch, tell him to throw his own party. Then you spend the afternoon laughing him off with your friends, those little orgies of pious self-congratulation that we all know so well. You smile, teeth gleaming, convinced that justice has been done and the party saved. Meanwhile the bass booms ever louder across the street. More and more cars line up.


But that’s okay, because life is easier among good-looking friends who find you good-looking as well.



April 27, 2015

Hugos Weaving

[Image: Red Skull]


So the whole idea behind Three Pound Brain, way back when, was to open a waystation between “incompatible empires,” to create a forum where ingroup complacencies are called out and challenged, where our native tendency to believe flattering bullshit can be called to account. To this end, I instigated two very different blog wars, one against an extreme “right” figure in the fantasy community, Theodore Beale, another against an extreme “left” figure, Benjanun Sriduangkaew. All along the idea was to expose these individuals, to show, at least for those who cared to follow, how humans were judging machines, prone to rationalize even the most preposterous and odious conceits. Humans are hardwired to run afoul of pious delusion. The science is only becoming more definitive in this regard, I assure you. We are, each and every one of us, walking, talking yardsticks. Unfortunately, we also have a tendency to affix spearheads to our rules, to confuse our sense of exceptionality and entitlement with the depravity and criminality of others–and to make them suffer.


When it comes to moral reasoning, humans are incompetent clowns. And in an age where high-school students are reengineering bacteria for science fairs, this does not bode well for the future. We need to get over ourselves–and now. Blind moral certainty is no longer a luxury our species can afford.


Now we all watch the news. We all appreciate the perils of moral certainty in some sense, the need to be wary of those who believe too hard. We’ve all seen the “Mad Fanatic” get his or her “just desserts” in innumerable different forms. The problem, however, is that the Mad Fanatic is always the other guy, while we merely enjoy the “strength of our convictions.” Short of clinical depression at least, we’re always–magically, you might say–the obvious “Hero.”


And, of course, this is a crock of shit. In study after study, experiment after experiment, researchers find that, outside special circumstances, moral argumentation and explanation are strategic–with us being none the wiser! (I highly recommend Joshua Greene’s Moral Tribes or Jonathan Haidt’s The Righteous Mind for a roundup of the research). It may feel like divine dispensation, but dollars to donuts it’s nothing more than confabulation. We are programmed to advance our interests as truth; we’d have no need of Judge Judy otherwise!


It is the most obvious invisible thing. But how do you show people this? How do you get humans to see themselves as the moral fool, as the one automatically–one might even say, mechanically–prone to rationalize their own moral interests, unto madness in some cases? The strategy I employ in my fantasy novels is to implicate the reader, to tweak their moral pieties, and then to jam them the best I can. My fantasy novels are all about the perils of moral outrage, the tragedy of willing the suffering of others in the name of some moral verity, and yet I regularly receive hate mail from morally outraged readers who think I deserve to suffer–fear and shame, in most cases, but sometimes death–for having written whatever it is they think I’ve written.


The blog wars were a demonstration of a different sort. The idea, basically, was to show how the fascistic impulse, like fantasy, appeals to a variety of inborn cognitive conceits. Far from a historical anomaly, fascism is an expression of our common humanity. We are all fascists, in our way, allergic to complexity, suspicious of difference, willing to sacrifice strangers on the altar of self-serving abstractions. We all want to master our natural and social environments. Public school is filled with little Hitlers–and so is the web.


And this, I wanted to show, is the rub. Before the web, we either kept our self-aggrandizing, essentializing instincts to ourselves or risked exposing them to the contradiction of our neighbours. Now, search engines ensure that we never need run critical gauntlets absent ready-made rationalizations. Now we can indulge our cognitive shortcomings, endlessly justify our fears and hatreds and resentments. Now we can believe with the grain of our stone-age selves. The argumentative advantage of the fascist is not so different from the narrative advantage of the fantasist: fascism, like fantasy, cues cognitive heuristics that once proved invaluable to our ancestors. To varying degrees, our brains are prone to interpret the world through a fascistic lens. The web dispenses fascistic talking points and canards and ad hominems for free–whatever we need to keep our clown costumes intact, all the while thunderously declaring ourselves angels. Left. Right. It really doesn’t matter. Humans are bigots, prone to strip away complexity and nuance–the very things required to solve modern social problems–to better indulge our sense of moral superiority.


For me, Theodore Beale (aka, Vox Day) and Benjanun Sriduangkaew (aka, acrackedmoon) demonstrated a moral version of the Dunning-Kruger effect, how the bigger the clown, the more inclined they are to think themselves angels. My strategy with Beale was simply to show the buffoonery that lay at the heart of his noxious set of views. And he eventually obliged, explaining why, despite the way his claims epitomize bias, he could nevertheless declare himself the winner of the magical belief lottery:


Oh, I don’t know. Out of nearly 7 billion people, I’m fortunate to be in the top 1% in the planet with regards to health, wealth, looks, brains, athleticism, and nationality. My wife is slender, beautiful, lovable, loyal, fertile, and funny. I meet good people who seem to enjoy my company everywhere I go.


He. Just. Is. Superior.


A king clown, you could say, lucky, by grace of God.


Benjanun Sriduangkaew, on the other hand, posed more of a challenge, since she was, when all was said and done, a troll in addition to a clown. In hindsight, however, I actually regard my blog war with her as the far more successful one simply because she was so successful. My schtick, remember, is to show people how they are the Mad Fanatic in some measure, large or small. Even though Sriduangkaew’s tactics consisted of little more than name-calling, even though her condemnations were based on reading the first six pages of my first book, a very large number of “progressive” individuals were only too happy to join in, and to viscerally demonstrate the way moral outrage cares nothing for reasons or casualties. What’s a false positive when traitors are in our midst? All that mattered was that I was one of them according to so-and-so. I would point out over and over how they were simply making my argument for me, demonstrating how moral groupthink deteriorates into punishing strangers, and feeling self-righteous afterward. I would receive tens of thousands of hits on my posts, and less than a dozen clicks on the links I provided citing the relevant research. It was nothing short of phantasmagorical. I was, in some pathetic, cultural backwoods way, the target of a witch-hunt.


(The only thing I regret is that several of my friends became entangled, some jumping ship out of fear (sending me “please relent” letters), others, like Peter Watts, for the sin of calling the insanity insanity.)


It’s worth noting in passing that some Three Pound Brain regulars actually tried to get Beale and Sriduangkaew together. Beale, after all, actually held the views she so viciously attributed to me, Morgan, and others. He was the real deal–openly racist and misogynistic–and his blog had more followers than all of her targets combined. Sriduangkaew, on the other hand, was about as close to Beale’s man-hating feminist caricature as any feminist could be. But… nothing. Like competing predators on the savannah, they circled on opposite sides of the herd, smelling one another, certainly, but never letting their gaze wander from their true prey. It was as if, despite the wildly divergent content of their views, they recognized they were the same.


So here we stand a couple of years after the fray. Sriduangkaew, as it turns out, was every bit as troubled as she sounded, and caused others far, far more harm than she ever caused me. Beale, on the other hand, has been kind enough to demonstrate yet another one of my points with his recent attempt to suborn the Hugos. Stories of individuals gaming the Hugos are notorious, so in a sense the only thing that makes Beale’s gerrymandering remarkable is the extremity of his views. How? people want to know. How could someone so ridiculously bigoted come to possess any influence in our “enlightened” day and age?


Here we come to the final, and perhaps most problematic moral clown in this sad and comedic tale: the Humanities Academic.


I’m guessing that a good number of you reading this credit some English professor with transforming you into a “critical thinker.” Too bad there’s no such thing. This is what makes the Humanities Academic a particularly pernicious Mad Fanatic: they convince clowns–that is, humans like you and me–that we need not be clowns. They convince cohort after cohort of young, optimistic souls that buying into a different set of flattering conceits amounts to washing the make-up off, thereby transcending the untutored “masses” (or what more honest generations called the rabble). And this is what makes their particular circus act so pernicious: they frame assumptive moral superiority–ingroup elitism–as the result of hard-won openness, and then proceed to judge accordingly.


So consider what Philip Sandifer, “a PhD in English with no small amount of training in postmodernism,” thinks of Beale’s Hugo shenanigans:


To be frank, it means that traditional sci-fi/fantasy fandom does not have any legitimacy right now. Period. A community that can be this effectively controlled by someone who thinks black people are subhuman and who has called for acid attacks on feminists is not one whose awards have any sort of cultural validity. That sort of thing doesn’t happen to functional communities. And the fact that it has just happened to the oldest and most venerable award in the sci-fi/fantasy community makes it unambiguously clear that traditional sci-fi/fantasy fandom is not fit for purpose.


Simply put, this is past the point where phrases like “bad apples” can still be applied. As long as supporters of Theodore Beale hold sufficient influence in traditional fandom to have this sort of impact, traditional fandom is a fatally poisoned well. The fact that a majority of voices in fandom are disgusted by it doesn’t matter. The damage has already been done at the point where the list of nominees is 68% controlled by fascists.


The problem, Sandifer argues, is institutional. Beale’s antics demonstrate that the institution of fandom is all but dead. The implication is that the science fiction and fantasy community ought to be ashamed, that it needs to gird its loins, clean up its act.


Many of you, I’m sure, find Sandifer’s point almost painfully obvious. Perhaps you’re thinking those rumours about Bakker being a closet this or that must be true. I am just another clown, after all. But catch that moral reflex, if you can, because if you give in, you will be unable–as a matter of empirical fact–to consider the issue rationally.


There’s a far less clownish (ingroupish) way to look at this imbroglio.


Let’s say, for a moment, that readership is more important than “fandom” by far. Let’s say, for a moment, that the Hugos are no more or less meaningful than any other ingroup award, just another mechanism that a certain bunch of clowns uses to confer prestige on those members who best exemplify their self-regarding values–a poor man’s Oscars, say.


And let’s suppose that the real problem facing the arts community lies in the impact of technology on cultural and political groupishness, on the way the internet and preference-parsing algorithms continue to ratchet buyers and sellers into ever more intricately tuned relationships. Let’s suppose, just for instance, that so-called literary works no longer reach dissenting audiences, and so only serve to reinforce the values of readers…


That precious few of us are being challenged anymore–at least not by writing.


The communicative habitat of the human being is changing more radically than at any time in history, period. The old modes of literary dissemination are dead or dying, and with them all the simplistic assumptions of our literary past. If writing that matters is writing that challenges, the writing that matters most has to be writing that avoids the “preference funnel,” writing that falls into the hands of those who can be outraged. The only writing that matters, in other words, is writing that manages to span significant ingroup boundaries.


If this is the case, then Beale has merely shown us that science fiction and fantasy actually matter, that as a writer, your voice can still reach people who can (and likely will) be offended… as well as swayed, unsettled, or any of the things Humanities clowns claim writing should do.


Think about it. Why bother writing stories with progressive values for progressives only–that is, unless moral entertainment is largely what you’re interested in? You gotta admit, this is pretty much the sum of what passes for “literary” nowadays.


Everyone’s crooked is someone else’s straight–that’s the dilemma. Since all moral interpretations are fundamentally underdetermined, there is no rational or evidential means to compel moral consensus. Pretty much anything can be argued when it comes to questions of value. There will always be Beales and Sriduangkaews, individuals adept at rationalizing our bigotries–always. And guess what? The internet has made them as accessible as fucking Wal-Mart. This is what makes engaging them so important. Of course Beale needs to be exposed–but not for the benefit of people who already despise his values. Such “exposure” amounts to nothing more than clapping one another on the back. He needs to be exposed in the eyes of his own constituents, actual or potential. The fact that the paths leading to bigotry run downhill makes the project of building stairs all the more crucial.


“Legitimacy,” Sandifer says. Legitimacy for whom? For the likeminded–who else? But that, my well-educated friend, is the sound-proofed legitimacy of the Booker, or the National Book Awards–which is to say, the legitimacy of the irrelevant, the socially inert. The last thing this accelerating world needs is more ingroup ejaculate. The fact that Beale managed to pull this little coup is proof positive that science fiction and fantasy matter, that we dwell in a rare corner of culture where the battle of ideas is for… fucking… real.


And you feel ashamed.



April 12, 2015

Reason, Bondage, Discipline

We can understand all things by her; but what she is we cannot apprehend.


–Robert Burton, Anatomy of Melancholy, 1652


.


So I was rereading Ray Brassier’s account of Churchland and eliminativism in his watershed Nihil Unbound: Enlightenment and Extinction the other day and I thought it worth a short post given the similarities between his argument and Ben’s. I’ve already considered his attempt to rescue subjectivity from the neurobiological dismantling of the self in “Brassier’s Divided Soul.” And in “The Eliminativistic Implicit II: Brandom in the Pool of Shiloam,” I dissected the central motivating argument for his brand of normativism (the claim that the inability of natural cognition to substitute for intentional cognition means that only intentional cognition can theoretically solve intentional cognition), showing how it turns on metacognitive neglect and thus can only generate underdetermined claims. Here I want to consider Brassier’s problematic attempt to domesticate the challenge posed by scientific reason, and to provision traditional philosophy with a more robust sop.


In Nihil Unbound, Brassier casts Churchland’s eliminativism as the high water mark of disenchantment, but reads his appeal to pragmatic theoretical virtues as a concession to the necessity of a deflationary normative metaphysics. He argues (a la Sellars) that even though scientific theories possess explanatory priority over manifest claims, manifest claims nevertheless possess conceptual parity. The manifest self is the repository of requisite “conceptual resources,” what anchors the “rational infrastructure” that makes us intelligible to one another as participants in the game of giving and asking for reasons–what allows, in other words, science to be a self-correcting exercise.


What makes this approach so attractive is the promise of providing transcendental constraint absent ontological tears. Norms, reasons, inferences, and so on, can be understood as pragmatic functions, things that humans do, as opposed to something belonging to the catalogue of nature. This has the happy consequence of delimiting a supra-natural domain of knowledge ideally suited to the kinds of skills philosophers already possess. Pragmatic functions are real insofar as we take them to be real, but exist nowhere else, and so cannot possibly be the object of scientific study. They are “appearances merely,” albeit appearances that make systematic, and therefore cognizable, differences in the real world.


Churchland’s eliminativism, then, provides Brassier with an exemplar of scientific rationality and the threat it poses to our prescientific self-understanding, one that also exemplifies the systematic dependence of scientific rationality on pragmatic functions that cannot be disenchanted on pain of scuttling the intelligibility of science. What I want to show is how, in the course of first defending and then critiquing Churchland, Brassier systematically misconstrues the challenge eliminativism poses to all philosophical accounts of meaning. Then I want to discuss how his “thin transcendentalism” actually requires this misconstrual to get off the ground.


The fact that Brassier treats Churchland’s eliminativism as exemplifying scientific disenchantment means that he thinks the project is coherent as far as it goes, and therefore denies the typical tu quoque arguments used to dismiss eliminativism more generally. Intentionalists, he rightly points out, simply beg the question when accusing eliminativists of “using beliefs to deny the reality of beliefs.”


“But the intelligibility of [eliminative materialism] does not in fact depend upon the reality of ‘belief’ and ‘meaning’ thus construed. For it is precisely the claim that ‘beliefs’ provide the necessary form of cognitive content, and that propositional ‘meaning’ is thus the necessary medium for semantic content, that the eliminativist denies.” (15)


The question is, What are beliefs? The idea that the eliminativist must somehow “presuppose” one of the countless, underdetermined intentionalist accounts of belief to be able to intelligibly engage in “belief talk” amounts to claiming that eliminativism has to be wrong because intentionalism is right. The intentionalist, in other words, is simply begging the question.


The real problem that Churchland faces is the problem that all “scientistic eliminativism” faces: theoretical mutism. Cognition is about getting things right, so any account of cognition lacking the resources to explain its manifest normative dimension is going to seem obviously incomplete. And indeed, this is the primary reason eliminative materialism remains a fringe position in psychology and philosophy of mind today: it quite simply cannot account for what, pretheoretically, seems to be the most salient feature of cognition.


The dilemma faced by eliminativism, then, is dialectical, not logical. Theory-mongering in cognitive science is generally abductive, a contest of “best explanations” given the intuitions and scientific evidence available. So long as eliminativism has no account of things like the normativity of cognition, it is doomed to remain marginal, simply because it has no horse in the race. As Kriegel says in Sources of Intentionality, eliminativism “does very poorly on the task of getting the pretheoretically desirable extension right” (199), fancy philosopher talk for “it throws the baby out with the bathwater.”


But this isn’t quite the conclusion Brassier comes to. The first big clue comes in the suggestion that Churchland avoids the tu quoque because “the dispute between [eliminative materialism] and [folk psychology] concerns the nature of representations, not their existence” (16). Now although it is the case that possessing an alternative theory makes it easier to recognize the question-begging nature of the tu quoque, the tu quoque is question-begging regardless. Churchland need only be skeptical to deny rather than affirm the myriad, underdetermined interpretations of belief one finds in intentional philosophy. He no more needs to specify any alternative theory to use the word “belief” than my five-year-old daughter does. He need only assert that the countless intentionalist interpretations are wrong, and that the true nature of belief will become clear once cognitive science matures. It just so happens that Churchland has a provisional neuroscientific account of representation.


For the eliminativist, having a theoretical horse in the race effectively blocks the intuition that you must be riding one of the myriad intentional horses on the track, but the intuition is faulty all the same. Having a theory of meaning is a dialectical advantage, not a logical necessity. And yet nowhere does Brassier frame the problem in these terms. At no point does he distinguish the logical and dialectical aspects of Churchland’s situation. On the contrary, he clearly thinks that Churchland’s neurocomputational alternative is the only thing rescuing his view. In other words, he conflates the dialectical advantage of possessing an alternate theory of meaning with logical necessity.


And as we quickly discover, this oversight is instrumental to his larger argument. Brassier, it turns out, is actually a fan of the tu quoque–and a rather big one at that. Rather than recognizing that Churchland’s problem is abductive, he frames it more abstrusely as a “latent tension between his commitment to scientific realism on the one hand, and his adherence to a metaphysical naturalism on the other” (18). As I mentioned above, Churchland finds himself in a genuine dialectical bind insofar as accounts of cognition that cannot explain “getting things right” (or other apparent intentional properties of cognition) seem to get the “pretheoretically desirable extension” wrong. This argumentative predicament is very real. Pretheoretically, at least, “getting things right” seems to be the very essence of cognition, so the dialectical problem posed is about as serious as can be. So long as intentional phenomena as they appear remain part of the pretheoretically desirable extension of cognitive science, Churchland is going to have difficulty convincing others of his view.


Brassier, however, needs the problem to be more than merely dialectical. He needs some way of transforming the dialectically deleterious inability to explain correctness into warrant for a certain theory of correctness–namely, some form of pragmatic functionalism. He needs, in other words, the tu quoque. He needs to show that Churchland, whether he knows it or not, requires the conceptual resources of the manifest image as a condition of understanding science as an intelligible enterprise. The way to show this requirement, Brassier thinks, is to show–you guessed it–the inability of Churchland’s neurocomputational account of representation to explain correctness. His inability to explain correctness, the assumption is, means he has no choice but to utilize the conceptual resources of the manifest image.


But as we’ve seen, the tu quoque begs the question against the eliminativist regardless of their ability to adduce alternative explanations for the phenomena at issue. Possessing an alternative simply makes the tu quoque easier to dismiss. Churchland is entirely within his rights to say, “Well, Ray, although I appreciate the exotic interpretation of theoretical virtue you’ve given, it makes no testable predictions, and it shares numerous family resemblances to countless other such chronically underdetermined theories, so I think I’m better off waiting to see what the science has to say.”


It really is as easy as that. Only the normativist is appalled, because only they are impressed by their intuitions, the conviction that some kind of intentionalist account is the only game in town.


So ultimately, when Brassier argues that “[t]he trouble with Churchland’s naturalism is not so much that it is metaphysical, but that it is an impoverished metaphysics, inadequate to the task of grounding the relation between representation and reality” (25), he’s mistaking a dialectical issue for an inferential and ontological one, conflating a disadvantage in actual argumentative contexts (where any explanation is preferred to no explanation) with something much grander and far more controversial. He thinks that lacking a comprehensive theory of meaning automatically commits Churchland to something resembling his theory of meaning, a deflationary normative metaphysics, namely his own brand of pragmatic functionalism.


For the naturalist, lacking answers to certain questions can mean many different things. Perhaps the question is misguided. Perhaps we simply lack the information required. Perhaps we have the information, but lack the proper interpretation. Maybe the problem is metaphysical–who the hell knows? When listing these possibilities, “Perhaps the phenomenon is supra-natural,” is going to find itself somewhere near, “Maybe ghosts are real,” or any other possibility that amounts to telling science to fuck off and go home! A priori claims on what science can and cannot cognize have a horrible track record, period. As Anthony Chemero wryly notes, “nearly everyone working in cognitive science is working on an approach that someone else has shown to be hopeless, usually by an argument that is more or less purely philosophical” (Radical Embodied Cognitive Science, 3).


Intentional cognition is heuristic cognition, a way to cognize systems without cognizing the operations of those systems. What Brassier calls ‘conceptual parity’ simply pertains to the fact that intentional cognition possesses its own adaptive ecologies. It’s a ‘get along’ system, not a ‘get it right’ system, which is why, as a rule, we resort to it in ‘get along’ situations. The sciences enjoy ‘explanatory priority’ because they cognize systems via cognizing the operations of those systems: they solve on the basis of information regarding what is going on. They constitute a ‘get it right’ system. The question that Brassier and other normativists need to answer is why, if intentional cognition is the product of a system that systematically ignores what’s going on, we should think it could provide reliable theoretical cognition regarding what’s going on. How can a ‘get along’ system get itself right? The answer quite plainly seems to be that it can’t, that the conundrums and perpetual disputation that characterize all attempts to solve intentional cognition via intentional cognition are exactly what we should expect.


Maybe the millennial discord is just a coincidence. Maybe it isn’t a matter of jamming the stick to find gears that don’t exist. Either way, the weary traveller is entitled to know how many more centuries are required, and, if these issues will never find decisive resolution, why they should continue the journey. After all, science has just thrown down the walls of the soul. Billions are being spent to transform the tsunami of data into better instruments of control. Perhaps tilting yet one more time at problems that have defied formulation, let alone solution, for thousands of years is what humanity needs…


Perhaps the time has come to consider worst case scenarios–for real.


Which brings us to the moral: You can’t concede that science monopolizes reliable theoretical cognition, then swear up and down that some chronically underdetermined speculative account somehow makes that reliability possible, regardless of what the reliability says! The apparent conceptual parity between manifest and scientific images is something only the science can explain. This allows us to see just how conservative Brassier’s position is. Far from pursuing the “conceptual ramifications entailed by a metaphysical radicalization of eliminativism” (31), Brassier is actually arguing for the philosophical status quo. Far from following reason no matter where it leads, he is, like so many philosophers before him, playing another version of the “domain boundary game,” marshalling what amounts to a last-ditch effort to rescue intentional philosophy from the depredations of science. Or as he himself might put it, devising another sop.


As he writes,


“At this particular historical juncture, philosophy should resist the temptation to install itself within one of the rival images… Rather, it should exploit the mobility that is one of the rare advantages of abstraction in order to shuttle back and forth between images, establishing conditions of transposition, rather than synthesis, between the speculative anomalies thrown up within the order of phenomenal manifestation, and the metaphysical quandaries generated by the sciences’ challenge to the manifest order.” (231)


Isn’t this just another old, flattering trope? Philosophy as fundamental broker, the medium that allows the dead to speak to the living, and the living to speak to the dead? As I’ve been arguing for quite some time, the facts on the ground simply do not support anything so sunny. Science will determine the relation between the manifest and the scientific images, the fate of ‘conceptual parity,’ because science actually has explanatory priority. The dead decide, simply because nothing has ever been alive, at least not the way our ancestors dreamed.



April 6, 2015

Bleaker than Bleak (by Paul J. Ennis)

Bleak theory accepts that it itself is almost entirely wrong. It accepts, after all, that humans are almost always wrong about how it goes with the world–so what are the chances of this theory being right? In this paradoxical, confused sense it is a theory of human fallibility. Or of the inability of humans to see themselves for what they are, even when, as per contemporary neuroscience, we kind of know (have you not yet heard the “good” news that you are not what you think you are?). We kind of know because we are beginning to see ourselves from the third-person perspective. Subjectivity is devolving into objectivity, and objectivity entails seeing things clearly, even if not transparently. That opacity, always there in the subject-object distinction, is collapsing, and the consequences are bleak. The second reality-appearance “appeared” as a crack we cracked. It has been going on ever since. Consider the insanity of the entire post-Kantian tradition and the in-itself–is it not just an expression of what it feels like when you recognise that what was once a “transparent cage” (Sartre) of looking directly at the world is a hallucination, a real one, all the same?


We cannot outpace this very blindspot that renders us a self or a subject. We are deluded about our beliefs or intentions (a given, so to speak), but more significantly we are deluded that somehow we can “recursively” leap “over our own shoulders” and see not just the trick, as Bakker might put it, but something substantial. Rather than just a model or a process withholding information from “you” yourself. Your own brain lies to you. It hides noise (“data-reduction”) so that you do not collapse into a schizophrenia of buzzing information. This much Bergson, Deleuze, and Meillassoux have suggested is a most horrifying possibility. If all the data of the world flowed in, you would be at one with matter, but what would you hear? Do you even want to countenance what that might involve? Hell is all around you. Your brain is just trying its best to stop you being lit on fire.


Everything is pretty patterns (Ladyman and Ross) and you are too. The problem with patterns is that sometimes they clash. If the brain has been hacked together, it’s bound to be buggy as hell. Look at your computer. One subpersonal process goes askew and you need it fixed. The technician tries a few things; maybe it works, or maybe it does not. Maybe, as in severe cases of schizophrenia or depression, you just have a crappy system. I’ve said before that consciousness is the holocaust of happiness, meant sincerely, not lightly, and by this I mean that if the conditions or constraints that created a self had never come together, in just the way they have for us, there would never have been any conscious suffering. Consciousness is the final correlate of all human suffering. You can blame almost anything else, but had “we” (is it really “us”?) never believed we should be stable, integrated selves, none of the bugs that followed would have appeared. Our world would have been a beautiful, empty, unthinking collection of material patterns: perhaps even a heaven of unthinking noise?


Chaos, as I am sure you have heard, is a ladder, but so too is evolution. Lifted up from the dregs of biology into cultural evolution, we came to see what nothing else could see. Some foolishly believed this was a gift. Civilisations were realised, when in reality each one was built on war. Philosophers know how to dance around this problem: we can think our way, collectively, toward a more rational, constrained future. Except collective intelligence most often works best when deployed toward destructive ends: where do you find the most creative minds? The war-room. “War, everywhere I look…” (Tormentor). To make it explicit, so to speak, if you want new masters, as Lacan said, you will find them. Look into the dead eyes of those who desire freedom and there rests fear. Fear that they will build a palace of reason only for the stability so hard fought for to collapse under the weight of the chronic irrationalism of the baser human aspect, untameable, unpredictable, and unknowable. History books are the evidence you stack up to adduce this, but at least today we have learnt enough to include the accelerated process of decline in our calculations. We no longer fight our enemies. We kiss them on the mouth and ask if we can join them in the decadent decline in advance.


I know I should not speak like this. What a waste to spend your time reasoning about the impossibility of one day sticking the hook in and indexing some little part of reality that, Tetris-like, delivers temporary respite. Only, of course, here come more bricks. As I feel, always in my very bones, what I know is coming, the far-off end (it is never close enough), bleak theory morphs into even and ever bleaker theory, sometimes just bleak, once bleaker than black, but now bleaker than bleak. Rust Cohle, in True Detective, at one point lets his interrogators know: “I know who I am. And after all these years, there’s a victory in that.” It is the most paradoxical of victories. The “pyrrhic” victory of traditional philosophy, found in thinkers as diverse as Husserl and Meillassoux, where one gains a foothold on the world after a long struggle. The question bleak theory asks, adrift from the perennial tradition, is whether knowing who we are will result in precisely the inverse of the oldest goal of philosophical self-knowledge: we cannot understand ourselves except as that entity which cannot truly know itself. Know thyself? Perhaps all along it has been the wrong question.


The tradition of philosophy always hinges on a subtle revision of position and orientation. This is the generative process whereby, for instance, the ambiguity of postmodern philosophy culminates in a counter-revolution of rational normativity. This is our contemporary example, but it is found everywhere. Heidegger ontologising phenomenology. Hegel gobbling up the Kantian noumena. Today there is possibly another: one that, again to evoke Rust Cohle, means to “start asking the right fucking questions.” Not about what we are, but what we are not: “transcendental egos,” “subjects,” or “selves.” Perhaps not even “agents,” but I leave that problem for other minds to debate. I know what I am, a “disinterested onlooker” (Husserl), but deluded that I am unconcerned.


True madness lies ahead for our species. Normativity, humanism, anti-reductionism–anything not bathed in the acid of neuroscience–are all contributing to a sharpening of the knives. Building dams to keep the coming dissolution at bay, they will render the shattering of the illusion that much harsher, harder. We are not going to Mars. We are going to go out of our minds.


.


[Dr. Paul J. Ennis is a Research Fellow in the School of Business, Trinity College Dublin. He is the author of Continental Realism (Zero Books, 2011), co-editor with Peter Gratton of the Meillassoux Dictionary (Edinburgh University Press, 2014) and co-editor with Tziovanis Georgakis of Heidegger in the Twenty-First Century (Springer, 2015). A version of bleak theory, ‘Bleak,’ first appeared in the DVD booklet for A Spell to Ward off the Darkness (Soda Pictures, 2014).]


 



March 30, 2015

Are Minds like Witches? The Catastrophe of Scientific Progress (by Ben Cain)

[Image: machine brain]


.


As scientific knowledge has advanced over the centuries, informed people have come to learn that many traditional beliefs are woefully erroneous. There are no witches, ghosts, or disease-causing demons, for example. But are cognitive scientists currently on the verge of showing also that belief in the ordinarily-defined human self is likewise due to a colossal misunderstanding, that there are no such things as meaning, purpose, consciousness, or personal self-control? Will the assumption of personhood itself one day prove as ridiculous as the presumption that some audacious individuals can make a pact with the devil?


Progress and a World of Mechanisms


According to this radical interpretation of contemporary science, everything is natural and nature consists of causal relationships between material aggregates that form systems or mechanisms. The universe is thus like an enormous machine except that it has no intelligent designer or engineer. Atoms evolve into molecules, stars into planets, and at least one planet has evolved life on its surface. But living things are really just material objects with no special properties. The only efficacious or real property in nature, very generally speaking, is causality, and thus the real question is always just what something can do, given its material structure, initial conditions, and the laws of nature. As one of the villains of The Matrix Reloaded declares, “We are slaves to causality.” Thus, instead of there being people or conscious, autonomous minds who use symbols to think about things and to achieve their goals, there are only mechanisms, which is to say forces acting on complex assemblies of material components, causing the system to behave in one way rather than another. Just as the sun acts on the Earth’s water cycle, causing oceans to evaporate and thus forming clouds that eventually rain and return the water via snowmelt runoff and groundwater flow to the oceans, the environment acts on an animal’s senses, which send signals to its brain whereupon the brain outputs a more or less naturally selected response, depending on whether the genes exercise direct or indirect control over their host. Systems interacting with systems, as dictated by natural laws and probabilities–that’s all there is, according to this interpretation of science.


How, then, do myths form that get the facts so utterly wrong? Myths in the pejorative sense form as a result of natural illusions. Omniscience isn’t given to lowly mammals. To compensate for their being thrown into the world without due preparation, as a result of the world’s dreadful godlessness, some creatures may develop the survival strategy of being excessively curious, which drives them often to err on the side not of caution but of creativity. We track not just the patterns that lead us to food or shelter, but myriad other structures on the off-chance that they’re useful. And as we evolve more intelligence than wisdom, we creatively interpret these patterns, filling the blanks in our experience with placeholder notions that indicate both our underlying ignorance and our presumptuousness. In the case of witches, for example, we mistake some hapless individual’s introversion and foreignness for some evil complicity in suffering that’s actually due merely to bad luck and to nature’s heartlessness. Given enough bumbling and sanctimony, that lack of information about a shy foreigner results in the burning of a primate for allegedly being a witch. A suitably grotesque absurdity for our monstrously undead universe.


And in the corresponding case of personhood itself, the lack of information about the brain causes our inquisitive species to reify its ignorance, to mistake the void found by introspection for spirit or mind, which our allegedly wise philosophers then often interpret as being all that’s ultimately real. That is, we try to control ourselves along with our outer environment, to enhance our fitness to carry our genes, but because our brain didn’t evolve to reveal its mechanisms to itself, the brain outputs nonsense to satisfy its curiosity, and so the masses mislead themselves with fairytales about the supernatural property of personhood, misinterpreting the lack of inner access as being miraculous direct acquaintance with oneself by something called self-consciousness. We mislead ourselves into concluding that the self is more than the brain that can’t understand its operations without scientific experimentation. Instead, we’re seduced into dogmatizing that our blindness to our neural self is actually magical access to a higher, virtually immaterial self.


Personhood and the Natural Reality of Illusions


So much for the progressive interpretation of science. I believe, however, that this interpretation is unsustainable. The serpent’s jaws come round again to close on the serpent’s own tail, and so we’re presented with yet another way to go spectacularly wrong; that is, the radical, progressive naturalist joins the deluded supernaturalist in an extravagant leap of logic. To see this, realize that the above picture of nature can be no picture at all. To speak of a picture, a model, a theory, or a worldview, or even of thinking or speaking in general, as these words are commonly defined is, of course, forbidden to the austere naturalist. There are no symbols in this interpretation which is no interpretation; there are only phases in the evolution of material systems, objects caught between opposing forces that change according to ceteris paribus laws which are not really laws. Roughly speaking–and remember that there’s no such thing as speaking–there’s only causality in nature. There are no intentional or normative properties, no reference, purpose, or goodness or badness.


In the unenlightened mode of affecting material systems, this “means” that if you interpret scientific progress as entailing that there are no witches, demons, or people in general, in the sense that the symbols for these entities are vacuous, whereas other symbols enjoy meaningful status such as the science-friendly words, “matter,” “force,” “law,” “mechanism,” “evolution,” and so forth, you’ve fallen into the same trap that ensnares the premodern ignoramus who fails to be humbled by her grievous knowledge deficit. All symbols are equally bogus, that is, supernatural, according to the foregoing radical naturalism. Thus, this radical must divest herself not just of the premodern symbols, but of the scientific ones as well–assuming, that is, she’s bent on understanding these symbols in terms of the naïve notion of personhood which, by hypothesis, is presently being made obsolete by science. So for example, if I say, “Science has shown that there are no witches, and the commonsense notion of the mind is likewise empty,” the radical naturalist is hardly free to interpret this as saying that premodern symbols are laughable whereas modern scientific ones are respectable. In fact, strictly speaking, she fails to be a thoroughgoing eliminativist as soon as she assumes that I’ve thereby said anything at all. All speaking is illusion, for the radical naturalist; there are only forces acting on material systems, causing those systems to behave, to exercise their material capacities, whereupon the local effects might feed back into a larger system, leading to cycles of average collective behaviour. There is no way of magically capturing that mechanistic reality in symbolic form; instead, there’s just the illusion of doing so.


How, then, should scientific progress be understood, given that there are no such things as scientific theories, progress, or understanding, as these things are commonly defined? In short, what’s the uncommon, enlightened way of understanding science (which is actually no sort of understanding)? What’s the essence of postmodern, scientific mysticism, as we might think of it? In other words, what will the posthuman be doing once her vision is unclouded with illusions of personhood and so is filled with mechanisms as such? The answer must be put in terms, once again, of causality. Scientific enlightenment is a matter (literally) of being able to exercise greater control over certain systems than is afforded by those who lack scientific tools. In short, assuming we define ourselves as a species in terms of the illusions of a supernatural self, the posthuman who embraces radical naturalism and manages to clear her head of the cognitive vices that generate those illusions will be something of a pragmatist. She’ll think in terms of impersonal systems acting and reacting to each other and being forced into this or that state, and she’ll appreciate how she in turn is driven by her biochemical makeup and evolutionary history to survive by overpowering and reshaping her environment, aided by this or that trait or tool.


Radical, eliminativistic naturalism thus implies some version of pragmatism. The version not implied would be one that defines usefulness in terms of the satisfaction of personal desires. (And, of course, there would really be some form of causality instead of any logical implication.) But the point is that for the eliminativist, an illusion-free individual would think purely in terms of causality and of materialistic advantage based on a thorough knowledge of the instrumental value of systems. She’d be pushed into this combative stance by her awareness that she’s an animal that’s evolved with that survivalist bias, and so her scientific understanding wouldn’t be neutral or passive, but supplemented by a more or less self-interested evaluation of systems. She’d think in terms of mechanisms, yes, but also of their instrumental value to her or to something with which she’s identified, although she wouldn’t assume that anyone’s survival, including hers, is objectively good.


For example, the radical naturalist might think of systems as posing problems to be solved. The posthuman, then, would be busy solving problems, using her knowledge to make the environment more conducive to her. She wouldn't think of her knowledge as consisting of theories made up of symbols; instead, she'd see her brain and its artificial extensions as systems that enable her to interact successfully with other systems. The success in question would be entirely instrumental, a matter of engineering with no presumption that the work has any ultimate value. There could be no approval or disapproval, because there would be no selves to make such judgments, apart from any persistence of a deluded herd of primates. The re-engineered system would merely work as designed, and the posthuman would thereby survive and be poised to meet new challenges. This would truly be work for work's sake.


What, then, should the enlightened pragmatist say about the dearth of witches? Can she sustain the sort of positivistic progressivism with which I began this article? Would she attempt to impact her environment by making sounds that are naively interpreted as meaning that science has shown there are no witches? No, she would "say" only that the neural configuration leading to behaviour associated with the semantic illusion that certain symbols correspond to witchy phenomena has causes and effects A and B, whereas the neural configuration leading to so-called enlightened, modern behaviour, often associated with the semantic illusion that certain other symbols correspond to the furious buying and selling of material goods and services and to equally tangible, presently-conventional behaviour, has causes and effects C and D. Again, if everything must be perceived in terms of causality, the neural states causing certain primates to be burned as witches should be construed solely in terms of their causes and effects. In short, the premodern, allegedly savage illusion of witchcraft loses its sting of embarrassment, because that illusion evidently had causal power and thus a degree of reality. Cognitive illusions aren't nothing at all; they're effects of vices like arrogance, self-righteousness, impertinence, irrationality, and so forth, and they help to shape the real world. There's no enlightened basis for any normative condemnation of such an illusion. All that matters is the pragmatic, instrumental judgment of something's effectiveness at solving a problem.


Yes, if there's no such thing as the meaning of a symbol, there are no witches, in that there's no relation of non-correspondence between "witch" and creatures that would fit the description. Alas, this shouldn't comfort the radical naturalist since there can likewise be no negative semantic relation between "symbol" and symbols to make sense of that statement about the nonexistence of witches. If naturalism forces us to give up entirely on the idea of intentionality, we mustn't interpret the question of something's nonexistence as being about a symbol's failure to pick out something (since there would be no such thing as a symbol in the first place). And if we say there are no symbols, just as there are no witches or ghosts or emergent and autonomous minds, we likewise mustn't think this is due merely to any semantic failure.


What, then, must nonexistence be, according to radical naturalism? It must be just relative powerlessness. To say that there are no witches "means" that the neural states involved in behaviour construed in terms of witchcraft are relatively powerless to systematically or reliably impact their environment. Note that this needn't imply that the belief in witches is absolutely powerless. After all, religious institutions have subdued their flocks for millennia based on the ideology of demons, witches and the like, and so the pragmatist mustn't pretend she can afford to "say" that witches have a purely negative ontological status. Again, just because there aren't really any witches doesn't mean there's no erroneous belief in witchcraft, and that belief itself can have causal power. The belief might even conceivably lead to a self-fulfilling prophecy, in which case something like witchcraft will someday come into being. At any rate, the belief in witches opens up problems to be solved by engineering (whether to side with the oppressive Church or to overthrow it, etc.), and that would be the enlightened posthuman's only concern with respect to witches.


Indeed, a radical naturalist who understands the cataclysmic implications of scientific progress has no epistemic basis whatsoever for belittling the causal role of a so-called illusion like witchcraft. Again, some neural states have causes and effects A and B while others have causes and effects C and D–and that's it as far as objective reality is concerned. On top of this, at best, there's pragmatic instrumentalism, which raises the question merely of the usefulness of the belief in witches. Is that belief entirely useless? Obviously not, as Western history attests. Is the belief in witches immoral or beneath our dignity as secular humanists? The question should be utterly irrelevant, since morality and dignity are themselves illusions, given radical naturalism; moreover, the "human" in "humanist" must be virtually empty. What an enlightened person could say with integrity is just that the belief in witches benefits some primates more than others, by helping to establish a dominance hierarchy.


The same goes for the nonexistence of minds, personhood, consciousness, semantic meaning, or purpose. If these things are illusions, so what? Illusions can have causal power, and the radical naturalist must distinguish between causal relations solely by assigning them their instrumental value, noting that some effects help some primates to survive by solving certain problems, while hindering others. Illusions are thus real enough for the truly radical naturalist. In particular, if the brain tries to discover its mechanisms through introspection and naturally comes up empty, that need not be the end of the natural process. The cognitive blind spot delivers an illusion of mentality or of immaterial spirituality, which in turn causes primates to act as if there were such things as cultures consisting of meaningful symbols, moral values and the like. We'd be misled into creating something that nevertheless exists as our creation. Just as the whole universe might have popped into existence from nothing, according to quantum mechanics, cognitive science might entail that personhood develops from the introspective experience of an inner emptiness. In fact, we're not empty, because our heads are full of brain matter. But the tool of introspection can be usefully misapplied, as it evidently causes the whole panoply of culture-dependent behaviours.


What is it, then, to call personhood a mere illusion? What's the difference between illusion and reality, for the radical naturalist, given that both can have causal power in the domain of material systems? If we say that illusions depend on ignorance of certain mechanisms, this turns all mechanisms into illusions and deprives us of so-called reality, assuming none of us is omniscient. As long as we select which mechanisms and processes to attend to in our animalistic dealings with the environment, we all live in bubble worlds based on that subjectivity which thus has quasi-transcendental status. To illustrate, notice that when the comedian Bill Maher mocks the Fox News viewer for living in the Fox Bubble and for being ignorant of the "real world," Maher forgets that he too lives in a culture, albeit in a liberal rather than a conservative one, and that he doesn't conceive of everything with the discipline of strict impersonality or objectivity, as though he were the posthuman mystic.


What seems to be happening here is that the radical naturalist is liable to identify with a science-centered culture and thus she's quick to downgrade the experience of those who prefer the humanities, including philosophy, religion, and art. From the science-centered perspective, we're fundamentally animals caught in systems of causality, but we nevertheless go on to create cultures in our bumbling way, blissfully ignorant of certain mechanistic realities and driven by cognitive vices and biases as we allow ourselves to be mesmerized by the "illusion" of a transcendent, immaterial self. But there's actually no basis here for any value judgment one way or the other. From a barebones scientific "perspective," the institution of science is as illusory as witchcraft. All that's real are configurations of material elements that evolve in orderly ways–and witchcraft and personhood are free to share in that reality as illusions. Judging by the fact that the idea of witches has evidently caused some people to be treated accordingly and that the idea of the personal self has caused us to create a host of artificial, cultural worlds within the indifferent natural one, there appears to be more than enough reality to go around.


Published on March 30, 2015 11:22

March 27, 2015

Earth and Muck



So Grimdark magazine has released the conclusion to "The Knife of Many Hands," as well as an interview containing cryptic questions and evasive answers. It's fast becoming a great venue, and a great way to spotlight grim new talent.


As for information regarding the next book, I wish I knew what to say. I submitted the final manuscript at the end of January, and still I've heard nary a peep about possible publication dates. Rest assured, as soon as I know, I'll let you know.


I'd also like to recommend The Shadow of Consciousness: A Little Less Wrong, by Peter Hankins. Unlike so many approaches to the issue, Peter refuses to drain the swamp of phenomenology into the bog of intentionality. In some respects, the book is a little too clear-eyed! For those of us who have followed Conscious Entities over the years, it's downright fascinating watching Peter slowly reveal those cards he's been stubbornly holding to his chest! I'm hoping to work up a review when I've finished, OCD permitting.


Shadow of Consciousness cover


I'd like to thank Roger for stepping into the breach these past couple of months, giving everyone another glimpse of why he'll be turning fantasy on its ear. Why the breach? Early in February I began working on what I thought was a killer idea for an introduction to Through the Brain Darkly. The idea was to write it in two parts, posting each here for feedback. Normally, the keyboard sounds like a baby rattle when I do blog/theory stuff, but not so this time. I'm sure burn-out is part of the problem. I'm also cramped by a deep-seated need for perfection, I suppose, but I've never been quite so stymied by a good idea before. So I thought I would open it up to the collective, gather a few thoughts on what people think it is I'm doing here (aside from the predictable, paleolithic factors), and what it is I need to do to communicate this effectively.


Babette Babich has recently posted her own thoughts on Diogenes in the Marketplace–pretty much calls out all my defense mechanisms! Check it out. If only more couples would lounge in bed with The White-Luck Warrior. She's given me a gift with that lovely image.


Despite my blockages, this post inaugurates a spate of guaranteed activity here on TPB. I'm pleased to announce that Ben Cain will be returning with a piece on eliminativism this upcoming Monday, then Paul Ennis will be posting on Bleak Theory the Monday following. Maybe a good old-fashioned blog debate will be just the tonic.


Published on March 27, 2015 08:14

February 20, 2015

Three Roses, Bk. 1: Chapter Two

Hey all! Roger here.


I've posted the second chapter of the new draft of Three Roses, Book 1: The Anarchy. It's first-draft stuff, but still I'm pretty happy with it. So I figure what the hell, I'll post it here.


As always, any comments or questions are welcomed and appreciated.


Published on February 20, 2015 21:10

February 7, 2015

Introspection Explained

Las Meninas


So I couldn't get past the first paper in Thomas Metzinger's excellent Open MIND offering without having to work up a long-winded blog post! Tim Bayne's "Introspective Insecurity" offers a critique of Eric Schwitzgebel's Perplexities of Consciousness, which is my runaway favourite book on introspection (and consciousness, for that matter). This alone might have sparked me to write a rebuttal, but what I find most extraordinary about the case Bayne lays out against introspective skepticism is the way it directly implicates Blind Brain Theory. His defence of introspective optimism, I want to show, actually vindicates an even more radical form of pessimism than the one he hopes to domesticate.


In the article, Bayne divides the philosophical field into two general camps: the introspective optimists, who think introspection provides reliable access to conscious experience, and the introspective pessimists, who do not. Recent years have witnessed a sea change in philosophy of mind circles (one due in no small part to Schwitzgebel's amiable assassination of assumptions). The case against introspective reliability has grown so prodigious that what Bayne now terms 'optimism'–introspection as a possible source of metaphysically reliable information regarding the mental/phenomenal–would have been considered rank introspective pessimism not so long ago. The Cartesian presumption of 'self-transparency' (as Carruthers calls it in his excellent The Opacity of Mind) has died a sudden death at the hands of cognitive science.


Bayne identifies himself as one of these new optimists. What introspection needs, he claims, is a balanced account, one sensitive to the vulnerabilities of both positions. Where proponents of optimism have difficulty accounting for introspective error, proponents of pessimism have difficulty accounting for introspective success. Whatever it amounts to, introspection is characterized by perplexing failures and thoughtless successes. As he writes in his response piece, "The epistemology of introspection is that it is not flat but contains peaks of epistemic security alongside troughs of epistemic insecurity" ("Introspection and Intuition," 1). Since any final theory of introspection will have to account for this mixed "epistemic profile," Bayne suggests that it provides a useful speculative constraint, a way to sort the metacognitive wheat from the chaff.


According to Bayne, introspective optimists motivate their faith in the deliverances of introspection on the basis of two different arguments: the Phenomenological Argument and the Conceptual Argument. He restricts his presentation of the phenomenological argument to a single quote from Brie Gertler's "Renewed Acquaintance," which he takes as representative of his own introspective sympathies. As Gertler writes of the experience of pinching oneself:


When I try this, I find it nearly impossible to doubt that my experience has a certain phenomenal quality–the phenomenal quality it epistemically seems to me to have, when I focus my attention on the experience. Since this is so difficult to doubt, my grasp of the phenomenal property seems not to derive from background assumptions that I could suspend: e.g., that the experience is caused by an act of pinching. It seems to derive entirely from the experience itself. If that is correct, my judgment registering the relevant aspect of how things epistemically seem to me (this phenomenal property is instantiated) is directly tied to the phenomenal reality that is its truthmaker. "Renewed Acquaintance," Introspection and Consciousness, 111.


When attending to a given experience, it seems indubitable that the experience itself has distinctive qualities that allow us to categorize it in ways unique to first-person introspective, as opposed to third-person sensory, access. But if we agree that the phenomenal experience–as opposed to the object of experience–drives our understanding of that experience, then we agree that the phenomenal experience is what makes our introspective understanding true. "Introspection," Bayne writes, "seems not merely to provide one with information about one's experiences, it seems also to 'say' something about the quality of that information" (4). Introspection doesn't just deliver information, it somehow represents these deliverances as true.


Of course, this doesn't make them true: we need to trust introspection before we can trust our (introspective) feeling of introspective truth. Or do we? Bayne replies:


it seems to me not implausible to suppose that introspection could bear witness to its own epistemic credentials. After all, perceptual experience often contains clues about its epistemic status. Vision doesn't just provide information about the objects and properties present in our immediate environment, it also contains information about the robustness of that information. Sometimes vision presents its take on the world as having only low-grade quality, as when objects are seen as blurry and indistinct or as surrounded by haze and fog. At other times visual experience represents itself as a highly trustworthy source of information about the world, such as when one takes oneself to have a clear and unobstructed view of the objects before one. In short, it seems not implausible to suppose that vision–and perceptual experience more generally–often contains clues about its own evidential value. As far as I can see there is no reason to dismiss the possibility that what holds of visual experience might also hold true of introspection: acts of introspection might contain within themselves information about the degree to which their content ought to be trusted. 5


Vision is replete with what might be called "information information," features that indicate the reliability of the information available. Darkness, for instance, is a great example, insofar as it provides visual information to the effect that visual information is missing. Our every glance is marbled with what might be called "more than meets the eye" indicators. As we shall see, this analogy to vision will come back to haunt Bayne's thesis. The thing to keep in mind is the fact that the cognition of missing information requires more information. For the nonce, however, his claim is modest enough that we can acknowledge his point: as it stands, we cannot rule out the possibility that introspection, like exospection, reliably indicates its own reliability. As such, the door to introspective optimism remains open.


Here we see the "foot-in-the-door strategy" that Bayne adopts throughout the article, where his intent isn't so much to decisively warrant introspective optimism as it is to point out and elucidate the ways that introspective pessimism cannot decisively close the door on introspection.


The conceptual motivation for introspective optimism turns on the necessity of epistemic access implied in the very concept of 'what is it likeness.' The only way for something to be 'like something' is for it to be like something for somebody. "[I]f a phenomenal state is a state that there is something it is like to be in," Bayne writes, "then the subject of that state must have epistemic access to its phenomenal character" (5). Introspection has to be doing some kind of cognitive work, otherwise "[a] state to which the subject had no epistemic access could not make a constitutive contribution to what it was like for that subject to be the subject that it was, and thus it could not qualify as a phenomenal state" (5-6).


The problem with this argument, of course, is that it says little about the epistemic access involved. Apart from some unspecified ability to access information, it implies very little. Bayne convincingly argues that the capacity to cognize differences–to make discriminations–follows from introspective access, even if the capacity to correctly categorize those discriminations does not. And in this respect, it places another foot in the introspective door.


Bayne then moves on to the case motivating pessimism, particularly as Eric presents it in his Perplexities of Consciousness. He mentions the privacy problems that plague scientific attempts to utilize introspective information (Irvine provides a thorough treatment of this in her Consciousness as a Scientific Concept), but since his goal is to secure introspective reliability for philosophical purposes, he bypasses these to consider three kinds of challenges posed by Schwitzgebel in Perplexities: the Dumbfounding, Dissociation, and Introspective Variation Arguments. Once again, he's careful to state the balanced nature of his aim, the obvious fact that


any comprehensive account of the epistemic landscape of introspection must take both the hard and easy cases into consideration. Arguably, generalizing beyond the obviously easy and hard cases requires an account of what makes the hard cases hard and the easy cases easy. Only once we've made some progress with that question will we be in a position to make warranted claims about introspective access to consciousness in general. 8


His charge against Schwitzgebel, then, is that even conceding his examples of local introspective unreliability, we have no reason to generalize from these to the global unreliability of introspection as a philosophical tool. Since this inference from local unreliability to global unreliability is his primary discursive target, Bayne doesn't so much need to problematize Schwitzgebel's challenges as to reinterpret–'quarantine'–their implications.


So in the case of "dumbfounding" (or "uncertainty") arguments, Schwitzgebel reveals the epistemic limitations of introspection via a barrage of what seem to be innocuous questions. Our apparent inability to answer these questions leaves us "dumbfounded," stranded on a cognitive limit we never knew existed. Bayne's strategy, accordingly, is to blame the questions, to suggest that dumbfounding, rather than demonstrating any pervasive introspective unreliability, simply reveals that the questions being asked possess no determinate answers. He writes:


Without an account of why certain introspective questions leave us dumbfounded it is difficult to see why pessimism about a particular range of introspective questions should undermine the epistemic credentials of introspection more generally. So even if the threat posed by dumbfounding arguments were able to establish a form of local pessimism, that threat would appear to be easily quarantined. 11


Once again, local problems in introspection do not warrant global conclusions regarding introspective reliability.


Bayne takes a similar tack with Schwitzgebel's dissociation arguments, examples where our naïve assumptions regarding introspective competence diverge from actual performance. He points out the ambiguity between the reliability of experience and the reliability of introspection: perhaps we're accurately introspecting mistaken experiences. If there's no way to distinguish between these, Bayne suggests, we've made room for introspective optimism. He writes: "If dissociations between a person's introspective capacities and their first-order capacities can disconfirm their introspective judgments (as the dissociation argument assumes), then associations between a person's introspective judgments and their first-order capacities ought to confirm them" (12). What makes Schwitzgebel's examples so striking, he goes on to argue, is precisely the fact that introspective judgments are typically effective.


And when it comes to the introspective variation argument, the claim that the chronic underdetermination that characterizes introspective theoretical disputes attests to introspective incapacity, Bayne once again offers an epistemologically fractionate picture of introspection as a way of blocking any generalization from given instances of introspective failure. He thinks that examples of introspective capacity can be explained away, "[b]ut even if the argument from variation succeeds in establishing a local form of pessimism, it seems to me there is little reason to think that this pessimism generalizes" (14).


Ultimately, the entirety of his case hangs on the epistemologically fractionate nature of introspection. It's worth noting at this point that, from a cognitive scientific point of view, the fractionate nature of introspection is all but guaranteed. Just think of the mad difference between Plato's simple aviary, the famous metaphor he offers for memory in the Theaetetus, and the imposing complexity of memory as we understand it today. I raise this "mad difference" for two reasons. First, it implies that any scientific understanding of introspection is bound to radically complicate our present understanding. Second, and even more importantly, it evidences the degree to which introspection is blind, not only to the fractionate complexity of memory, but to its own fractionate complexity as well.


For Bayne to suggest that introspection is fractionate, in other words, is for him to claim that introspection is almost entirely blind to its own nature (much as it is to the nature of memory). To the extent that Bayne has to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness. It is in this sense that we can assert that introspection neglects its own fractionate nature. The blindness of introspection to introspection is the implication that hangs over his entire case.


In the meantime, having posed an epistemologically plural account of introspection, he's now on the hook to explain the details. "Why," he now asks, "might certain types of phenomenal states be elusive in a way that other types of phenomenal states are not?" (15). Bayne does not pretend to possess any definitive answers, but he does hazard one possible wrinkle in the otherwise featureless face of introspection, the 2010 distinction that he and Maja Spener made in "Introspective Humility" between "scaffolded" and "freestanding" introspective judgments. He notes that those introspective judgments that seem to be the most reliable are those that seem to be "scaffolded" by first-order experiences. These include the most anodyne metacognitive statements we make, where we reference our experiences of things to perspectivally situate them in the world, as in, "I see a tree over there." Those introspective judgments that seem the least reliable, on the other hand, have no such first-order scaffolding. Rather than piggy-back on first-order perceptual judgments, "freestanding" judgments (the kind philosophers are fond of making) reference our experience of experiencing, as in, "My experience has a certain phenomenal quality."


As that last example (cribbed from the Gertler quote above) makes plain, there's a sense in which this distinction doesn't do the philosophical introspective optimist any favours. (Max Engel exploits this consequence to great effect in his Open MIND reply to Bayne's article, using it to extend pessimism into the intuition debate). But Bayne demurs, admitting that he lacks any substantive account. As it stands, he need only make the case that introspection is fractionate to convincingly block the "globalization" of Schwitzgebel's pessimism. As he writes:


perhaps the central lesson of this paper is that the epistemic landscape of introspection is far from flat but contains peaks of security alongside troughs of insecurity. Rather than asking whether or not introspective access to the phenomenal character of consciousness is trustworthy, we should perhaps focus on the task of identifying how secure our introspective access to various kinds of phenomenal states is, and why our access to some kinds of phenomenal states appears to be more secure than our access to other kinds of phenomenal states. 16


The general question of whether introspective cognition of conscious experience is possible is premature, he argues, so long as we have no clear idea of where and why introspection works and does not work.


This is where I most agree with Bayne–and where I'm most puzzled. Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.


In nature, accuracy and reliability are expensive achievements, not gifts from above. Short of magic, metacognition requires physical access and physical capacity. (Those who believe introspection is magic–and many do–need only be named magicians.) So when it comes to deliberative introspection, what kind of neurobiological access and capacity are we presuming? If everyone agrees that introspection, whatever it amounts to, requires that the brain do honest-to-goodness work, then we can begin advancing a number of empirical theses regarding access and capacity, and how we might find these expressed in experience.


So given what we presently know, what kind of metacognitive access and capacity should we expect our beans to possess? Should we, for instance, expect it to rival the resolution and behavioural integration of our environmental capacities? Clearly not. For one, environmental cognition coevolved with behaviour and so has the far greater evolutionary pedigree–by hundreds of millions of years, in fact! As it turns out, reproductive success requires that organisms solve their surroundings, not themselves. So long as environmental challenges are overcome, they can take themselves for granted, neglect their own structure and dynamics. Metacognition, in other words, is an evolutionary luxury. There's no way of saying how long homo sapiens has enjoyed the particular luxury of deliberative introspection (as an exaptation, the luxury of "philosophical reflection" is no older than recorded history), but even if we grant our base capacity a million-year pedigree, we're still talking about a very young, and very likely crude, system.


Another compelling reason to think metacognition cannot match the dimensionality of environmental cognition lies in the astronomical complexity of its target. As a matter of brute empirical fact, brains simply cannot track themselves the high-dimensional way they track their environments. Thus, once again, 'Dehaene's Law,' the way "[w]e constantly overestimate our awareness–even when we are aware of glaring gaps in our awareness" (Consciousness and the Brain, 79). The vast resources society is presently expending to cognize the brain attest to the degree to which our brain exceeds its own capacity to cognize in high-dimensional terms. However the brain cognizes its own operations, then, it can only do so in a radically low-dimensional way. We should expect, in other words, our brains to be relatively insensitive to their own operation–to be blind to themselves.


A third empirical reason to assume that metacognition falls short of environmental dimensionality is found in the way it belongs to the very system it tracks, and so lacks the functional independence as well as the passive and active information-seeking opportunities belonging to environmental cognition. The analogy I always like to use here is that of a primatologist sewn into a sack with a troop of chimpanzees versus one tracking them discreetly in the field. Metacognition, unlike environmental cognition, is structurally bound to its targets. It cannot move toward some puzzling item–an apple, say–peer at it, smell it, touch it, turn it over, crack it open, taste it, scrutinize the components. As embedded, metacognition is restricted to fixed channels of information that it could not possibly identify or source. The brain, you could say, is simply too close to itself to cognize itself as it is.


Viewed empirically, then, we should expect metacognitive access and capacity to be more specialized, more adventitious, and less flexible than those of environmental cognition. Given the youth of the system, the complexity of its target, and the proximity of its target, we should expect human metacognition to consist of various kluges, crude heuristics that leverage specific information to solve some specific range of problems. As Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have established, simple heuristics are often far more effective than optimization methods at solving problems. "As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows" (Hertwig and Hoffrage, "The Research Agenda," Simple Heuristics in a Social World, 23). With complicated problems yielding little data, adding parameters to a solution can compound the chances of making mistakes. Low dimensionality, in other words, need not be a bad thing, so long as the information consumed is information enabling the solution of some problem set. This is why evolution so regularly makes use of it.
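Since the Gigerenzer point is easy to miss in the abstract, here is a minimal simulation sketch of the quoted claim–my own toy illustration, not anything taken from the post, from Hertwig and Hoffrage, or from the ABC group's materials. It pits a unit-weight "tallying" heuristic, which ignores cue magnitudes entirely and assumes only that each cue points in the right direction, against ordinary least-squares regression in a noisy linear environment. All the particulars (five cues, the noise level, the sample sizes, the function names) are arbitrary assumptions chosen for the demonstration; the only thing that matters is the pattern in the output.

    import numpy as np

    rng = np.random.default_rng(0)
    N_CUES, N_TEST, N_TRIALS = 5, 200, 500   # illustrative parameters, not from any source

    def sample_environment(n, weights):
        """Noisy linear 'problem-ecology': criterion = weighted sum of cues + noise."""
        X = rng.normal(size=(n, N_CUES))
        y = X @ weights + rng.normal(scale=2.0, size=n)
        return X, y

    def regression_predict(X_train, y_train, X_test):
        """The 'complex algorithm': ordinary least squares with an intercept."""
        A = np.c_[np.ones(len(X_train)), X_train]
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        return np.c_[np.ones(len(X_test)), X_test] @ coef

    def tallying_predict(X_test):
        """The 'simple heuristic': unit weights, i.e. just add the cues up."""
        return X_test.sum(axis=1)

    for n_train in (5, 10, 40, 160):
        r_reg, r_tally = [], []
        for _ in range(N_TRIALS):
            weights = rng.uniform(0.5, 2.0, N_CUES)   # true weights, unknown to both strategies
            X_train, y_train = sample_environment(n_train, weights)
            X_test, y_test = sample_environment(N_TEST, weights)
            # Score each strategy by how well its predictions track the out-of-sample criterion.
            r_reg.append(np.corrcoef(regression_predict(X_train, y_train, X_test), y_test)[0, 1])
            r_tally.append(np.corrcoef(tallying_predict(X_test), y_test)[0, 1])
        print(f"n_train={n_train:4d}   regression r={np.mean(r_reg):.3f}   tallying r={np.mean(r_tally):.3f}")

If the sketch behaves as expected, the heuristic holds roughly steady across the board while regression, which has to estimate six parameters from the training sample, degrades sharply as that sample shrinks–it overfits, which is exactly the sense in which "adding parameters to a solution can compound the chances of making mistakes." With generous data the ordering should reverse, as the quoted claim implies.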


Given this broad-stroke picture, human metacognition can be likened to a toolbox containing multiple, special-purpose tools, each possessing specific "problem-ecologies," narrow but solvable domains that trigger their application frequently and decisively enough to have once assured the tool's generational selection. The problem with heuristics, of course, lies in the narrowness of their respective domains. If we grant the brain any flexibility in the application of its metacognitive tools, then heuristic misapplication is always a possibility. If we deny the brain any decisive capacity to cognize these misapplications outside their consequences (if the brain suffers "tool agnosia"), then we can assume these misapplications will be indistinguishable from successful applications short of those consequences.


In other words, this picture of human metacognition (which is entirely consistent with contemporary research) provides an elegant (if sobering) recapitulation and explanation of what Bayne calls the "epistemic landscape of introspection." Metacognition is fractionate because of the heuristic specialization required to decant behaviourally relevant information from the brain. The "peaks of security" correspond to the application of metacognitive heuristics to matching problem-ecologies, while the "troughs of insecurity" correspond to the application of metacognitive heuristics to problem-ecologies they could never hope to solve.


Since those matching problem-ecologies are practical (as we might expect, given the cultural basis of regimented theoretical thinking), it makes sense that practical introspection is quite effective, whereas theoretical introspection, which attempts to intuit the general nature of experience, is anything but. The reason the latter strikes us as so convincing–to the point of seeming impossible to doubt, no less–is simply that doubt is expensive: there's no reason to presume we should happily discover the required error-signalling machinery awaiting any exaptation of our deliberative introspective capacity, let alone one so unsuccessful as philosophy. As I mentioned above, the experience of epistemic insufficiency always requires more information. Sufficiency is the default simply because the system has no way of anticipating novel applications, no decisive way of suddenly flagging information that was entirely sufficient for ancestral problem-ecologies and so required no flagging.


Remember how Bayne offered what I termed "information information" provided by vision as a possible analogue of introspection? Visual experience cues us to the unreliability or absence of information in a number of ways, such as darkness, blurring, faintness, and so on. Why shouldn't we presume that deliberative introspection likewise flags what can and cannot be trusted? Because deliberative introspection exapts information sufficient for one kind of practical problem-solving (Did I leave my keys in the car? Am I being obnoxious? Did I read the test instructions carefully enough?) for the solution of utterly unprecedented ontological problems. Why should repurposing introspective deliverances in this way renovate the thoughtless assumption of 'default sufficiency' belonging to their original purposes?


This is the sense in which Blind Brain Theory, in the course of explaining the epistemic profile of introspection, also explodes Bayne's case for introspective optimism. By tying the contemplative question of deliberative introspection to the empirical question of the brain's metacognitive access and capacity, BBT makes plain the exorbitant biological cost of the optimistic case. Exhaustive, reliable intuition of anything involves a long evolutionary history, tractable targets, and flexible information access–that is, all the things that deliberative introspection does not possess.


Does this mean that deliberative introspection is a lost cause, something possessing no theoretical utility whatsoever? Not necessarily. Accidents happen. There's always a chance that some instance of introspective deliberation could prove valuable in some way. But we should expect such solutions to be both adventitious and local, something that stubbornly resists systematic incorporation into any more global understanding.


But there's another way, I think, in which deliberative introspection can play a genuine role in theoretical cognition–a way that involves looking at Schwitzgebel's skeptical project as a constructive, rather than critical, theoretical exercise.


To show what I mean, it's worth recapitulating one of the quotes Bayne selects from Perplexities of Consciousness for sustained attention:


How much of the scene are you able vividly to visualize at once? Can you keep the image of your chimney vividly in mind at the same time you vividly imagine (or "image") your front door? Or does the image of your chimney fade as your attention shifts to the door? If there is a focal part of your image, how much detail does it have? How stable is it? Suppose that you are not able to image the entire front of your house with equal clarity at once, does your image gradually fade away towards the periphery, or does it do so abruptly? Is there any imagery at all outside the immediate region of focus? If the image fades gradually away toward the periphery, does one lose colours before shapes? Do the peripheral elements of the image have color at all before you think to assign color to them? Do any parts of the image? If some parts of the image have indeterminate colour before a colour is assigned, how is that indeterminacy experienced–as grey?–or is it not experienced at all? If images fade from the centre and it is not a matter of the color fading, what exactly are the half-faded images like? Perplexities, 36


Questions in general are powerful insofar as they allow us to cognize the yet-to-be-cognized. The slogan feels ancient to me now, but no less important: questions are how we make ignorance visible, how we become conscious of cognitive incapacity. In effect, then, each and every question in this quote brings to light a specific inability to answer. Granting that this inability indicates a lack of information access, a lack of metacognitive capacity, or both, we can presume these questions enumerate various cognitive dimensions missing from visual imagery. Each question functions as an interrogative "ping," you could say, showing us another direction that (for many people at least) introspective inquiry cannot go–another missing dimension.


So even though Bayne and Schwitzgebel draw negative conclusions from the "dumbfounding" that generally accompanies these questions, each instance actually tells us something potentially important about the limits of our introspective capacities. If Schwitzgebel had been asking these questions of a painting–Las Meninas, say–then dumbfounding wouldn't be a problem at all. The information available, given the cognitive capacity possessed, would make answering them relatively straightforward. But even though "visual imagery" is apparently "visual" in the same way a painting is, the selfsame questions stop us in our tracks. Each question, you could say, closes down a different "degree of cognitive freedom," revealing how few degrees of cognitive freedom human deliberative introspection possesses for the purposes of solving visual imagery. Not many at all, as it turns out.


Note that this is precisely what we should expect on a "blind brain" account. Once again, simply given the developmental and structural obstacles confronting metacognition, it almost certainly consists of an "adaptive toolbox" (to use Gerd Gigerenzer's phrase), a suite of heuristic devices adapted to solve a restricted set of problems given only low-dimensional information. The brain possesses a fixed set of metacognitive channels available for broadcast, but no real "channel channel," so that it systematically neglects metacognition's own fractionate, heuristic structure.


And this clearly seems to be what Schwitzgebel's interrogative barrage reveals: the low dimensionality of visual imagery (relative to vision), the specialized problem-solving nature of visual imagery, and our profound inability to simply intuit as much. For some mysterious reason we can ask visual questions that, for some equally mysterious reason, do not apply to visual imagery. The ability of language to retask cognitive resources for introspective purposes seems to catch the system as a whole by surprise, confronting us with what had hitherto been relegated to neglect. We find ourselves "dumbfounded."


So long as we assume that cognition requires work, we must assume that metacognition trades in low-dimensional information to solve specific kinds of problems. To the degree that introspection counts as metacognition, we should expect it to trade in low-dimensional information geared to solve particular kinds of practical problems. We should also expect it to be blind to introspection, to possess neither the access nor the capacity required to intuit its own structure. Short of interrogative exercises such as Schwitzgebel's, deliberative introspection has no inkling of how many degrees of cognitive freedom it possesses in any given context. We have to figure out inferentially what information is for what.


And this provides the basis for a provocative diagnosis of a good many debates in contemporary psychology and philosophy of mind. So, for instance, a blind brain account implies that our relation to something like "qualia" is almost certainly one possessing relatively few degrees of cognitive freedom–a simple heuristic. Deliberative introspection neglects this, and at the same time, via questioning, allows other cognitive capacities to consume the low-dimensional information available. "Dumbfounding" often follows–what the ancient Greeks liked to call thaumazein. The practically minded, sniffing a practical dead end, turn away, but the philosopher famously persists, mulling the questions, becoming accustomed to them, chasing this or that inkling, borrowing many others, all of which, given the absence of any real information information, cannot but suffer from some kind of "only game in town effect" upon reflection. The dumbfounding boundary is trammelled to the point of imperceptibility, and neglect is confused with degrees of cognitive freedom that simply do not exist. We assume that a quale is something like an apple–we confuse a low-dimensional cognitive relationship with a high-dimensional one. What is obviously specialized, low-dimensional information becomes, for a good number of philosophers at least, a special "immediately self-evident" order of reality.


Is this Adamic story really that implausible? After all, something has to explain our perpetual inability to even formulate the problem of our nature, let alone solve it. Blind Brain Theory, I would argue, offers a parsimonious and comprehensive way to extricate ourselves from the traditional mire. Not only does it explain Bayne's "epistemic profile of introspection," it explains why this profile took so long to uncover. By reinterpreting the significance of Schwitzgebel's "dumbfounding" methods, it raises the possibility of "Interrogative Introspection" as a scientific tool. And lastly, it suggests that the problems neglect foists on introspection can be generalized, that much of our inability to cognize ourselves turns on the cognitive shortcuts evolution had to use to ensure we could cognize ourselves at all.


Published on February 07, 2015 09:07
