Consciousness studies is one of the most fervid, enjoyable, and enticing fields now, an intellectual carnival attracting scores of brilliant players to come take their swing through the phantasmagoria. It is too early to tell whether the pursuit will end up yet another of psychology's corpses, like behaviorism, which spurted suddenly to life with brash enthusiasm and hubris, only to expend its players' energy and pocketbooks, closing up its booths after all the noise and pomp, leaving little behind beyond a memory of bustle, having fulfilled none of its promises of riches. The exact goals of this present offshoot of psychology, rarely stated in discussions, may be as varied as its participants. But, insofar as these goals can be examined within the context of key figures in the field, they deserve scrutiny alongside a critique of their accomplishments up to now. It may be hard to critique accomplishments when they do not arise from a clearly stated goal, so the attempt must be made to extract some essence of a goal from the field's contributions so far.
This examination must be neither science nor philosophy but a critical essay that deals with both. Both fields figure prominently in consciousness studies, which, probably more than previous schools of psychology, has attracted scholars from almost every academic discipline. "Consciousness," after all, seems to encompass all our daily sense of being in a way the colder "mind" does not. This come-one-come-all attractiveness of the field may prove its boon, by garnering a wealth of minds as motley as human consciousness itself to uncover, synergistically, its secret nature, or its undoing, from the unwieldiness and Babel of so many disciplines that have their own ancient traditions and ways of speaking, all too long ago diverged in intellectual evolution to communicate successfully with one another ever again. Science in its various trappings --from physics at one end to cognitive science at the other-- and philosophy have been the loudest players. Though each is equally rigorous and disciplined in its own way, each, with its many subdisciplines, has its historical concerns, obsessions, and language that make it a different field and make it almost impossible for one problem actually to remain the same problem --though the two may appear the same to an outside observer-- when taken from the science room and placed in the philosophy room. But to critique consciousness studies fairly, some attempt must be made to couch the problem in ways people in both rooms (and their many partitioned sections) can understand --as well as that oddball, linguistics, that appears to live in the wall between. To address the problem so may mean standing in the hallway where both can see, in neither room, and the best that can be hoped is that the signaling will come off less as handwaving and more as meaningful semaphore --not only to those in the two rooms (and the wall between) but to those in all the other rooms of the carnival-atmosphere school.
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalog of common things.
Philosophy will clip an angel's wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine --
Unweave a rainbow.
--Keats, Lamia
The last great frontier of commodification is the human mind. The United States Supreme Court ruled that human genes are patentable (Rollin, 1994), so theoretically, someday, you may not own the gene in your body that makes the protein that helps you smell the odor of spring roses. Medical research has already sprinted several paces down the track of making parts of the human body purchasable (New York Times, July 8, 1997). Even if potentially replaceable by machines as the source of economic demand, the human mind and its folk-psychological partners, soul and spirit, currently stand as a massive, unmined territory, some writers characterizing its dimensions of complexity as greater than the universe's (Scott, 1995). Just as China and its billions of human desires alone have kept it elevated to Most Favored Nation trade status for years, the mind promises some uncharted but gleaming and potentially explosive resource. Vaguely through the clouds of its Himalayan heights, it appears to have lodes where billions --if not trillions-- of dollars can be extracted in the form of pharmaceuticals, if we only could map it better. Its Tibetan mysteries also seem to hide the only wisdom, the only intelligence, the only model that all computers so desperately strive to emulate that they appear --to anyone who does not know better-- to want to unseat their Dalai Lama. In general, the mind being, right now, the humble and magnificent throne, the ineffable seat of the dismal science's now-dead God, it appears to be the grail for all good explorers of the world, to be found, measured, struck, broken, melted, extracted, smelted, slagged, purified, molded, encased, sold, used, thrown out, recycled, disposed.
Of course, the mind without scientific characterization has already proven a bountiful field: from televangelists to the patent-holders of fluoxetine, from writers of the Mars/Men/Women/Venus ilk to artificial intelligence talking heads, the trade in soul-spirit-emotion-psyche has flourished. While it may be argued that the human mind is the patient to which the entire doctor-economy ministers, the subject of discourse then becomes too broad: There are particular mental-type concerns that address the mechanism, as it were, that makes all the other human activities possible. Thus, chlorpromazine and fluoxetine are directed at correcting apparent imbalances in this mechanism in a way that ACE inhibitors or metoclopramide dare not; the book I Just Can't Seem To Understand You addresses this mechanism, and, say, Guide to Ford Bronco Repair does not; and the machine Darwin III does, but the Ford Bronco does not. Even behaviorists acknowledged there was some kind of behavior-generating mechanism (Skinner, 1971); they just hated to call it "mind." Now, even their materialist inheritors posit a human consciousness (Dennett 1991; Crick 1994; Grush and Churchland 1995). Machine "intelligence" enthusiasts (Chalmers 1996; Michie 1994, 1995) have even seized consciousness as the hot flavor of the decade to sweeten their machines. As the furor foams in the scientific and near-entire academic community, from physics to ethology to theology, to conquer this entity, with a zeal commensurate with the millennium-long scholastic rush to conquer the equally palpable concept of God, "consciousness" becomes the buzzword that is defined as no other than the thing that, if we could pry it open, would tell us what life means (see Güzeldere 1995a and 1995b for difficulties in definition; other definitions in Shear 1995, Chalmers 1995, and McGinn 1995, among the host of articles on consciousness, nearly all exhibiting no clarity on just what "consciousness" is).
The thrust of this hivelike wingbeating promises that the seizers of the flower consciousness will be privileged to do more than joy in its floral beauty, which all of us average drones merely get to sight. Whether all this buzzing will pump nectar from the ephemeral goal is, of course, the (unspoken) 64-million-dollar question.
It is no coincidence that the "Decade of the Brain" (Ackerman 1992) is also the decade of consciousness. Mind and body have been lovers quarreling at least since Descartes tried to tease them apart (Descartes 1952), and now, after decades of body trying to go it alone, many have counseled mind to come back. But we have dressed it in what we believe is new garb, "consciousness" (though James (1890) had already breathed scientific cachet into the term), which appears for now more bright and rainbow-colored than the drab of its gray-cloaked old pal, mind. The workhorse brain must suffer the drugs, the excisions, the radioglucose tests, while flitter-winged consciousness comes and goes wantonly.1 If only we could get our hands on that fleeting sprite, it seems we could wrench its close companion into any shape we want, in turn make that consciousness what we want, make the collection of them --society-- what, well, someone wants it to be. A decade (or century or the rest of humanity's existence) of the brain implies an era of consciousness: Brain study essentially concerns an organ that needs medical fixing when it goes wrong; now we know that one of the many functions of the human brain (as all organs, it has multiple functions) is to produce consciousness (Edelman 1992). There is a curious asymmetry: Individual G makes blood bg of composition completely unique to that individual; G's brain also produces Faust, Part 2. Medical tests can detect if that completely unique blood bg is unhealthy. Will medical tests determine if Faust, Part 2 is unhealthy?
Consciousness researchers appear dimly aware of the brain's discontinuity from other organs, evident in their choice of target organ and the particular dazzling output that lured the swarm of them to the study in the first place; but this critical difference in the organ's output has not led one writer to ask: Exactly how can we be objective about what should be the proper (healthy) output of this organ?2 Are these outputs --Faust, Gödel's theorem, Hustler Magazine, the Bosnian War, sidewalks, abortion, the Moral Majority-- all embedded within a system of social constructs that assigns them values along a widely varying spectrum, according to individuals and subgroups --a system outside of which none of us can go (though we must, to be scientifically objective) and inside of which we must be in order to gain access to it? Yet being within the system to gain access to it, we cannot be properly objective, because we will perceive it through one of a completely relative spectrum of values.
Wagner. No one's grasped how, each with either,
Body and soul can fit so well together,
Hold fast as if not to be separated,
Yet each by other daily vexed and hated.
Mephistopheles. Stop! I would rather ask if he
Can say why man and wife so ill agree.
--Goethe, Faust
The physicist and consciousness dabbler Alwyn Scott has christened his adoptive field "the new science of consciousness," bestowing on it an alarmingly respectable title, which even physics did not get at first, having to suffer a few centuries as mere "natural philosophy." Not a bad climb to the top for a field about a decade old. Like barons and marquises signing up to be captains and colonels to add more prestige to their names and in turn to the army, scientists of all brands and flavors have stampeded to add their imprimatur to the glorious little fray and in turn gain a glistening from the field's sex appeal. Among the horde too massive and myriad to count can be spotted physicists Roger Penrose, Harold Atmanspacher, Ilya Prigogine, and Harry Stapp; psychologists Susan Blackmore, Stevan Harnad, and James Newman; mathematicians Chris Clarke and N. G. deBruijn; neuroscientists Francis Crick, James Fallow, Susan Greenfield, Benjamin Libet, and Oliver Sacks; computer scientists Jaron Lanier and Donald Michie; anesthesiologist Stuart Hameroff; philosophers (the most guilty lot) Patricia Smith Churchland, Owen Flanagan, Colin McGinn, Mary Midgley, and John Searle; biophysicist Rodney Cotterill; cognitive scientists Rafael Nunez and Bernard Baars; physician Anthony Campbell; and judge David Hodgson. And at least a couple boast of being musicians --Lanier and Tad Rockwell. Certainly, the infinite spectrum of the colorful flock should not be surprising in light of how consciousness encompasses all human social activity, most of it private. One cornerstone of the new field, Journal of Consciousness Studies, in its first issue, set forth the need for such interspecific crossbreeding --which was one of its major missions: "We aim to include contributions from all branches of scholarship which [sic] have things to say about consciousness.... to create a forum where we can begin to overcome...
obstacles and where people from different disciplines and different metaphysical viewpoints can enter into fruitful debate" (JCS, Vol. 1, No. 1:4).3 The musician's aural bias on consciousness, the painter's light-and-imagery slant, the novelist's bent on narrative structure of events, the cook's taste for sensations --perhaps even the thief's eyes crooked on the unguarded object and opportune challenge or the lover's penchant for genital and tactile arousal-- all human experience, it seems, is needed; and all who participate graduate to poet-scientist. Move over, subatomic particle physicist. More than physicist Steven Weinberg's quest for the God particle as a route to a theory of everything, consciousness and some kind of "psychon" promise to be the real God particle and theory of everything. Consciousness researchers seem to understand that, if all human consciousness were unplugged tonight, there would be no theory of anything, at least in this corner of the galaxy. Such anthropocentrism is not a Berkeleyan idealism, nor a solipsism, but a recognition that whatever makes "consciousness" so intriguing --and this point is about the only one on which (at least as the avidity of their actions implies) they generally concur-- seems to be responsible for everything (physics, cuisine, golf) that makes human life interesting and completely distinct from other life. Without it, there would be no Zorn's Lemma, Mexico City, or its smog. This huge cosmos of human life and activity that consumes and delights most every waking moment seems to be contained in this one tiny sac, as it were, consciousness. Grasp it, and you grasp everyone.
Because of the nature of this Protean beast, consciousness, it transforms as soon as you think you have your hands on it, then appears as something entirely different, eluding definition. Definition, in the sense of some community concurrence as to what is the object of study and simple clarity of that object's delimitation, has proven essential for scientific advancement. Bodies that could undergo motion were defined as those that had extension. Each element was defined as a pure substance of uniform atomic structure. Many consciousness researchers (Searle 1992, McGinn 1995, Chalmers 1996) have acknowledged that their object of study, unlike any object in science, exists only in the private worlds of each human being. Of course, any object, such as a quark or a quartz crystal, enters science only through human perception or imagination of the facts and arguably may only exist within consciousness. But such existence would differ from the container itself. Consciousness as either a process (Dennett 1991) or a phenomenon (Searle 1992) is different from the things it processes or holds, just as a mine sluice differs from the ore slurry that sloshes through it. At least one researcher has noted that consciousness itself cannot really be perceived through introspection: "the model of specting intro requires a distinction between the object spected and the specting of it, and we cannot make this distinction for conscious states" (Searle 1992, 144). Searle does not deny self-awareness or introspection of mental states, such as whether one feels love, pain, joy. Rather, consciousness cannot step back and see what it is as consciousness, because consciousness is still the thing perceiving, and it remains within its own boundaries: It cannot step back. So how do we define this object that apparently not even those who have access to the results of its processes can themselves perceive?
The question of definition is not just an academic quibble but the sine qua non of the entire program. Its interlocutors must know what phenomena they are discussing. Even if researchers cannot perceive the object they are examining, they must at least collectively grope around a circumscribed area, eliminating pieces of the mental realm that are not consciousness or that are mere results of consciousness, until, as in statistical mechanics, as good an approximation of definition as is humanly possible is achieved. Güzeldere (1995a, 1995b) is one of the few who overtly grapples with this problem of definition, though for him the lacuna is not so much a hole to be filled before the field can proceed to be properly cultivated as it is simply one of the many awkwardnesses of its unwieldiness. Though in his review of the historical and current literature it grows apparent that researchers suffer a plethora of definitions of the object --"a single, useful, noncircular definition of consciousness is indeed hard to come by" (32)-- Güzeldere never lays out the issue of whether some standardization in characterizing the object of study is necessary before we can even hope to explain its mechanism, nor attempts a preliminary standardization. In fact, several writers he quotes dismiss the need for definition: "According to [William] James, consciousness was a phenomenon too familiar to be given a definition" (112). (Were the same true for all-too-familiar motion, then why was Newton's definition of velocity as v = dx/dt so revolutionary?) Similarly, Freud: "What is meant by consciousness we need not discuss; it is beyond all doubt" (113). Kathleen Wilkes: "science can dispense with the concept of consciousness and lose thereby none of its comprehensiveness and explanatory power" (113).
"According to Block, there being no way to give a reductive definition of [phenomenal] consciousness is not embarrassing, given the history of reductive definitions in philosophy --presumably full of failures" (123). And most blatantly, Francis Crick and Christof Koch state "they need not provide a precise definition of consciousness since `everyone has a rough idea of what is meant by consciousness'" (32).4 Curiously, Newton did not feel similarly about velocity, acceleration, and force --all phenomena as everyday as consciousness-- when he finally made progress in explaining motion with his definitions.
However, the problem of definition obsesses Güzeldere enough that he continues turning to it. He looks to the Oxford English Dictionary to sift two broad concepts of consciousness --one social, as in "feminist consciousness," the other psychological and pertaining only to individuals. He excludes the former from present discussion (35, 118) and bifurcates the latter into the faculty of being conscious and the normal, healthy state of being conscious. But these definitions then serve only as means to weed out literature from the discussion and narrow the focus, not as a route to establish rigorously just what consciousness is for systematic study. Güzeldere provides little further definition. However, when critiquing the literature, he does further characterize consciousness and allows another possible bifurcation: He contrasts two sets of aspects --the "causal" and "phenomenal" and the "creature" and the "state"-- and combines them to make four classifications: causal/creature, causal/state, phenomenal/creature, and phenomenal/state (118). "Creature" is the sense of being awake and alert; "state" refers to a specific mental state; "phenomenal" pertains to how inner life seems or feels; and "causal" somehow relates consciousness either to behavior or the mental economy of states or what consciousness does. But Güzeldere develops these classifications no further than to make the point (which is one of his major themes) that "how consciousness feels cannot be conceptualized in the absence of what consciousness does" (122) --an asymmetry or disunion he feels must be brought back together. By way of critiquing the distinction --which he feels runs deep in philosophical thinking-- between causal and phenomenal consciousness, he presents Block's divisions into "Access (A) Consciousness" and "Phenomenal (P) Consciousness," giving the reader a bifurcation with a semblance of rigorous definition.
A state M is A-conscious if the content of M is at least "possible to be used as a premise of reasoning... for rational control of action [or] to be used for rational control of speech" (122). P-consciousness is qualia or "the way it feels to us to have experiences" (122). For Güzeldere, defining Block's A-consciousness is straightforward --the "three crucial capacities centered on rationality: rational cogitation, speech and action-- the subject matter of cognitive psychology" (122), that is, objective and public. P-consciousness is completely private and cannot be reduced or defined noncircularly. Güzeldere questions whether Block's construal of P-consciousness itself, stripped of causal/functional qualities, is pretheoretically rendered unassailable by scientific investigation. Güzeldere lumps this general quandary about phenomenal experience with the question of "How does any physical mechanism give rise to any kind of phenomenal experience?" (126) --all essentially sides of Chalmers' "hard problem" of consciousness, "the subjective aspect of every experience that resists explanation" (123). But Güzeldere finds that these polarities between causal and phenomenal --"easy" and "hard" problems-- lead to absurdities, such as, for phenomenalists, the insolubility of whether there are other minds or whether even oneself is a zombie, and such functionalist cul-de-sacs as eliminativism or the denial of qualia's existence. With syntactic legerdemain, he proposes to solve the dilemma, cheflike, by mixing the two perspectives, "to yield a cross-fertilization of the first-person and the third-person perspectives, which would allow us to talk about the causal efficacy of how consciousness feels and the phenomenal quality of what consciousness does" (141). Although this word salad looks green and hopeful, one bite proves it nutrient-free.
Sadly, this morsel of inscrutability --his solution to the explanatory gap of phenomenal consciousness and to the general talking-at-cross-purposes among the sides in the field-- stands as a pat metaphor for apparent solutions for this study and its object.
After whipping up our appetite for that one lettuce-leaf of an entree, Güzeldere drops an ironic dinner mint of a dessert: "A better understanding of consciousness will be possible only when it is theoretically situated in a more comprehensive framework, the constitutive elements of which are themselves in need of better explanation." (141) Having started with a larder admittedly empty of definition, he mesmerizes us through a two-part article as if he is sizzling up a sumptuous bolus to fill our aching gut, only to leave us with one breath-freshener of repast. One must fill that larder to make a meal. Unlike Crick and Koch (1990), who outright deny the need for ingredients in their cuisine, other authors do make attempts at rigorous definitions, though still without discoursing on definition's exact place in the systematic study. In one of the most clear and cogent works in the field, Susan A. Greenfield (1995) lays out a testable, scientific theory of consciousness. She begins by acknowledging the problem of definition, as well as consciousness's distinction from mind and arousal, then promises, "Since consciousness itself must be characterized in some way if it is to be accommodated in the brain," to "attempt a working definition of consciousness before actually attacking the enigma of its physical basis." After spending half the book on brain and neural structure, computers, and brain lesions and other pathologies, Greenfield derives three "probable" properties of consciousness, then combines them into one description:
Consciousness is spatially multiple yet effectively single at any time. It is an emergent property of nonspecialized and divergent groups of neurons (gestalts) that is continuously variable with respect to, and always entailing, a stimulus epicenter. The size of the gestalt, and hence the depth of prevailing consciousness, is a product of the interaction between the recruiting strength of epicenter and the degree of arousal. (104)
Although Greenfield does not designate this description as her promised definition of consciousness, no other material before or after appears so definitional, so this one must be the promised one. It is also her theory. That is, her definition of consciousness is her theory of consciousness --a perfectly viable approach, as a theory, such as one describing force, often is an offered definition of an object of study-- so long as the components are defined (such as acceleration or, for evolution, species) or axiomatic. And Greenfield makes the bold and near-obsessive effort to configure the theory in a way that makes it testable. In later chapters, she does relate a few experimental and clinical findings that appear compatible with her theory. She evokes the image of moments of consciousness being like the familiar concentric rings that develop in water upon a pebble drop: There is an epicenter, and there are gestalts, like rings, forming until a larger set of rings comes to take over --as only one at a time can compose consciousness at a given moment-- thus the spatially multiple yet singular-at-any-given-moment characteristic of consciousness. Gestalts steadily form; most die out before they achieve the unique privilege of attaining the momentary throne of consciousness.
Greenfield musters experimental data to support the neurophysiological reality of gestalts. Several animal studies taken together have shown that a sensory stimulus, such as a flash of light, recruits assemblies of cortical neurons around a cerebral focus (115-119), and that a single neuron can be recruited into any of many different groups (121). The central focus corresponds to the epicenter; the marshalled group of neurons, to the gestalt. The stronger the stimulus, the stronger the epicenter, the more neurons recruited, and the more competitive or overpowering the gestalt. (Strength of epicenter can be due not only to the stimulus' physical power but also to associative or cognitive significance, as in associations with pain or reward.) The fact that a neuron may be recruited by several assemblies, and thus diverted from one assembly by a stronger one, explains the way one gestalt of consciousness, like a cognitive water ripple, may be overpowered by another. Arousal, she notes, is distinct from consciousness (5-8) but is necessary for it. With low arousal, as in dreaming sleep, epicenters are weak and thus easily give way, one to another, accounting for the steady illogical flow of dreams; high arousal causes many strong epicenters, one quickly overcoming the other, so focus is short and restless; whereas at medium arousal --of usual everyday existence-- epicenters are neither too weak nor strong, gestalts have reasonable lifetimes, and steady focus of consciousness is possible. Greenfield also drums up laboratory evidence about amine neurotransmitters and their role in her model of consciousness. Serotonin apparently makes neurons more sensitive to firing, thus assisting gestalt formation, besides facilitating long-term associations (151); acetylcholine raises arousal in sleep to lead to dreams, or in wakefulness to allow focused consciousness; dopamine and norepinephrine lead to less focused arousal.
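The recruitment-and-competition picture just summarized can be mechanized in a few lines. What follows is a toy sketch only --not Greenfield's own formalism; the stimulus rate, decay factor, and dispersal floor are all illustrative assumptions-- showing how epicenters of differing strengths might recruit shares of a shared neuron pool, with the largest assembly occupying the momentary "throne" of consciousness:

```python
# Toy sketch of gestalt competition (illustrative constants throughout;
# this is a reading of the prose description, not Greenfield's model).
import random

def simulate(arousal, steps=100, pool=1000, seed=0):
    """Return the sequence of dominant-gestalt ids, tick by tick."""
    rng = random.Random(seed)
    strengths = {}                 # gestalt id -> epicenter strength
    next_id = 0
    throne = []                    # which gestalt "is" consciousness each tick
    for _ in range(steps):
        if rng.random() < 0.5:     # a stimulus seeds a new epicenter;
            strengths[next_id] = rng.random() * arousal   # arousal scales it
            next_id += 1
        # ripples decay; spent gestalts disperse below a floor
        strengths = {i: s * 0.7 for i, s in strengths.items() if s * 0.7 > 0.01}
        if strengths:
            total = sum(strengths.values())
            # recruitment: each epicenter claims a share of the neuron pool
            # proportional to its strength, diverting neurons from weaker
            # assemblies -- so one gestalt can be overpowered by another
            sizes = {i: pool * s / total for i, s in strengths.items()}
            throne.append(max(sizes, key=sizes.get))
    return throne

# restlessness of focus: how often the throne changes hands
run = simulate(arousal=1.0)
changes = sum(1 for a, b in zip(run, run[1:]) if a != b)
```

Notably, a sketch this literal is scale-invariant in arousal except through the dispersal floor, so reproducing the claimed dream/restless/steady arousal curve would require further assumptions the prose does not pin down --which anticipates the definitional worries raised below.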
Finally, Greenfield summons clinical findings to boost her model. Type I schizophrenics often exhibit motor restlessness, stereotypy, difficulty with abstract concepts, enhanced sensitivity to physical properties of objects, inability to form metarepresentations, and egocentric consciousness --all characteristic of small gestalt formation. Features of small gestalt formation also apply to small children --poor memory for past events, limited concept formation, and little linguistic ability. On the other hand, depressives seem to suffer from large gestalts, one "conflated into one vast consciousness of despair" (183), so that sensation-laden events that occupy most of our attention barely impinge on these patients.5 Tellingly, depression is treated by drugs that eventually lower serotonin effectiveness. Serotonin is needed for gestalt formation, so blocking it blocks these patients' large gestalts. Furthermore, blocking serotonin and thus gestalt formation in normal subjects via lysergic acid diethylamide would lead to a small gestalt profile, as in schizophrenics, with similar clinical signs developing.
Tidy though this theory of consciousness, as quoted on p. 13, may be, it still does not single out consciousness from any number of other possible or imaginable results of neuronal activity (it is not sufficiently specific) and thus does not fulfill what Greenfield promised --to define consciousness. In fact, if it contains Greenfield's definition of consciousness, it only defers that definition by stating noncommittally that consciousness "is an emergent property" (italics mine). At best, one might hope that the other parts of the three-sentence theory would delineate which among the universe of emergent properties is that consciousness, but no such luck. That is, if there were some phenomenon f that arose when and only when a stimulus epicenter excited a neuron assembly, such that its size (depth) were a "product of the interaction between the recruiting strength of [this] epicenter and degree of arousal," then we might have been on our way to the laborious task of tabulating the infinitely divisible spectrum of phenomena that arises in our experimental subjects when we alter their brains by a controlled variable via, perhaps, drugs. In fact, the program appeared to have potential by pointing to schizophrenics and drug users, from whom we tabulate at least qualitative changes in what in ordinary normal language is called "consciousness." Thus, raise serotonin levels, facilitate neural assembly firing, heighten gestalts, and feel the resultant change in consciousness. This scheme does add a step or two to the truism, "Mind-altering drugs alter consciousness," or "Schizophrenics have altered consciousness," but it does not tell us what that altered thing is. For instance: Is this thing the combined set of all sense data we receive at any slice of time? In a given period of time? Is it the particular sense data we are focusing on? If so, what does focusing mean?
In vision, we can focus on one (vague) area, but we still see other things: In consciousness, can we focus on one thing and yet have in consciousness, by lesser degrees, many other things?7 Or is consciousness the overall apperception or awareness of everything at once?8 What role do deliberate thoughts, meditation, memory, apostrophes (as to distant lovers) have? Are they contained within consciousness? If not, how can we both apostrophize a distant lover while having unrelated images in our mind's eye? These are just some of the very serious questions about what the object of study is that are simply bypassed.
The extra steps listed above do reveal a structural pattern to look for. Consciousness comes in these little spurts --these gestalts. Still, the question remains: Is this set of gestalts the structure of how we experience consciousness? It comes in these microspurts, and even though there is an appearance of smoothness or continuity, if you look really close, then, as smooth white light gives way to bumpy little photons upon close inspection, so, too, does consciousness give way to bubbly little gestalts. Surely, if you close your eyes, random images appear, or thoughts or memories snap up seemingly out of nowhere, or a door shutting downstairs is heard; but all these phenomena would be dubbed "coming into consciousness" in ordinary language, not comprising the whole structure of consciousness itself. And certainly, neural assemblies firing do make plausible correlates for these phenomena. But the present urge to construct a theory should not compel us to latch onto a convenient and apparent correlation between physical and phenomenal structure and hastily call it an explanation of anything, particularly when no clear definition is given and the explanation dismisses a common way humans describe phenomena in consciousness without addressing why we should ignore this reality. Indeed, Greenfield admits, "We all `know' what consciousness is," the quote marks signifying an intuitive, if unspeakable, knowledge. And here is the crux: How are we to lay the ground rules for determining when a theory or definition gibes with our collective intuition? That is, who is to judge what that intuition is when there is not even a beam of particles to point to?
Greenfield does at least admit the smoothness problem, though only by talking right past it: The "individual linkaging of epicenters enables us to maintain a continuity of consciousness" while "the ripples of one gestalt trigger the epicenter of the next." (170) The imagery still evokes a bumpiness that is just not there along the contours of consciousness --though, of course, it is there among the funny squiggles that flitter on closed eyelids or the random thoughts that come to consciousness. Greenfield is grappling to save her theory, which could not find a consciousness center in the brain but somehow must make that brain give the person the feeling that consciousness is seamless; and the reason she must stretch her scheme for the sake of such smoothness is that most of us apparently do find it in consciousness. Greenfield's bubbling gestalts fall into the same genre as Dennett's multiple drafts (Dennett 1991) --consciousness as nowhere but constantly reinvented by certain flashes of neurons. She distances herself from Dennett and what she finds a chaotic randomization that has little to do with "where we were or what we were doing." (11) But both theorists shun a center or "theater" --and even if there is no brain locus for a theater, this genre of theory still must answer: If there is no single consciousness within which phenomena are registered, then how do we get such an illusion of singleness about our consciousness?9
Another way the theory bypasses the reality of daily conscious experience is that it only seems to account for passive consciousness phenomena: epicenters form from sense data or by a trigger of memory that brings up a thought or image. As pointed out, the pond-ripple image makes a poignant metaphor for the white noise of consciousness as well as the Joycean associative stream-of-consciousness frame our minds often idle through. But the metaphor does not hold as well for other states of consciousness, such as deliberate thought or meditation. When I am weighing whether to vacation on Baffin Island this winter, I may summon five or six reasons to vacation there and five or six not to. They hover before me like chess pieces: "It is not crowded that time of year," "There is not much boat service then," "It is a little nippy." There is no telling how the decision is made or what the neurophysiology of weighing the reasons till a decision is reached may be (and such weighing may vary situation-to-situation, person-to-person). But somehow, Greenfield's model has to be contorted to corral a dozen or so concentric rings or thought bubbles in one place --which does not gibe with her assertion about the nonlocality of gestalt apperception. Like the ripple on the puddle, the largest, strongest one is the one that predominates until it disperses or another takes over --but there is no center to that puddle. Then how can I hold all my pro- and con-Baffin Island bubbles right here where I want them? Once we start considering the realities of conscious thought, we begin to see the results of playing physics with ghost-objects and training the language of precision upon nouns that have no borders (see Part II).
Instead of making the merest femtometer of a step toward systematically exploring the fathomless expanse of the structure of thought and conscious experience, and thus demonstrating how her theory says something about this object of study, Greenfield glosses over that entire universe under discussion and substitutes a few skitter-scatter and unenlightening examples: "Imagine that you are looking at an orange. It might engender not merely thoughts of eating or of slaking your thirst but also of a holiday in Morocco, balls, the sun, an orange dress, or even an allergy to citrus fruit." (90)10 Without any systematic discussion or definition of any properties, we are supposed to know what "an emergent property" is and to be so tacitly convinced that there is no central consciousness into which phenomena appear --only these steadily succeeding ring-ripples of little consciousnesses-- that we trust Greenfield's dismissal of a center of consciousness, whether or not there is a corresponding neural region for one.11 Either she harbors secret knowledge she does not share, or she knows better and does not admit so; still, she goes on to dismiss three theories (Edelman's, Harth's, Crick's) that attempt to account for a physical correlate of central consciousness or "searchlight" (126-133). Ironically, two of the reasons she levels against these theories apply equally to her own. She finds Edelman's idea of "reentrant loops" (see §1.2.3) inapplicable to the explanation of consciousness within the general holistic functioning of the brain because of inherent vagueness:
How would reëntrant circuits be coördinated across every sensory modality? Would all these neuronal loops finally feed into a master loop? And, if so, where is it, and exactly how does it actually correlate with consciousness? The problem is that "a reëntrant circuit," when couched in such terms, is really just a metaphor with no obvious relation or any sort of correlation to conscious states. (128)
Closer examination of Greenfield's gestalts has shown that they, too, are metaphors with no rigorous relation to consciousness. Similarly, in dismissing Crick's assertion that synchronous firing of neurons at 40 Hz frequency correlates with consciousness, Greenfield writes:
Just because consciousness and synchronized activity between the thalamus and the cortex can both occur in the absence of sensory stimulation does not mean that one causes the other, that consciousness arises wholly from that synchronized activity. (133)
Certainly, Greenfield has also not shown that just because schizophrenics and drug users have altered neural assemblies (and supposedly gestalts) and altered consciousness, that one causes the other, that consciousness or whatever gestalts are arise wholly from that neural assembly.
Often good at confessions, Greenfield does finally admit "there is still the nagging question of how the combination within a neuronal group (its epicenter) really is the equivalent of, for example, our consciousness of an orange. There is no real, empirically proven answer." (130) Yet she makes no atonement for this trespass against our credulity (just as she never fulfilled her initial contrition for lack of definition). Admitting there is a hole in your theory as big as your neighbor's you just dismissed does not plug anything. That there are such holes (along with a lack of clear reference to anything) in her theory --the same sort of holes as those in the theories with which she contrasts her own12-- brings up the question of what motivates her theory, as well as her colleagues': a prompt from ground-level inquiry --or, What is this sort of thing we are dealing with? In sum, pure inquiry shows no evidence of being at work here.13
Instead, from the beginning of this work, among the most cogent in its field, we are flung into the middle of nowhere --the current tooting merry-go-round of pheromone-charged consciousness studies-- and by the end, are flung off into nowhere.
In one of the most crucial and telling --but inadvertent-- statements in the work, brought up only as an aside while critiquing Crick's theory, Greenfield notes, "However, he suggested that consciousness only occurs some of the time, under special conditions." (130) This wondrous spectacle, dazzling portent, jaw-dropping prodigy gets no further attention --while Greenfield should be asking: "Are Crick and I speaking of the same thing?" They may be, but at a cursory glance, a thing that operates specially and occasionally may be very different from an object steadily emanating pondlike ripples. The final and greatest spectacle we are left with is of capable researchers discoursing at elaborate and ceremonious length, at complete cross-purposes to one another, with copious charts and diagrams, about something in a black box that neither they nor the audience has ever opened. They have a word, "conscious," in common; one would think that, as with "goat" or "beaker," they would be talking about the same thing --but they appear at once to be and not to be.
As gulls lay eggs and Nobel Laureates hatch speeches, so do scientists and philosophers seem to toss their pennies into the consciousness bucket. Neuroscientist Edelman had made the hobby his passion for years before the current craze, so his three-part work, ending with Bright Air, Brilliant Fire, is no mere obligatory contribution to catch the wave. His theory has a citadel's feel of yard-thick walls built over decades to fend off any onslaught, and he orates with the brassy tone of the Laureate, whose prize was pinned to his sleeve. He hammers the piles of his edifice down deep into strata laid by Galileo, Kant, James, and especially Darwin. Though he never is so presumptuous as to schematize the psychological realm, in both his subtitle, The Matter of the Mind, and through the first half of the book, he refers to "mind." Later, he discusses memory, consciousness, and the unconscious as aspects of the mind, so his is not a consciousness theory per se, but such theory weighs in heavily, and within his elaborate scheme for how mind arises from brain, he develops a model of consciousness. That is, his theory enables "us to come to grips with the daunting problem of consciousness, and it is one of the main reasons for explaining the theory in any detail." (83)
Called the "theory of neuronal group selection" (TNGS), it draws on evolution and embryology and is formulated to be both testable and historically explanatory --that is, to explain how brain and mind could evolve and could develop in an individual and how mind relates to brain. Edelman first enlightens us with another selective system --the immune system. Earlier theory held that foreign molecules, such as a virus's surface proteins, "instruct" immune cells in the proper shape for the antibody molecules to take. But research has shown a very different process: When a foreign molecule invades, the body responds with an entire population of cells, like an army of servants carrying hats to see which will fit the newcomer. Each cell has produced an antibody in advance, in "hopes" it might fit the foreigner: When an antibody proves to fit, the cell containing it is stimulated to produce many clones of itself. Thus, a Darwinian selection of populations is at work at the cellular level within an individual.
Similarly are such Darwinian forces at work in the brain. Edelman proffers three tenets of TNGS: developmental selection, experiential selection, and reentrant mapping. Developmental selection holds that the genetic code does not dictate the specific wiring of neural networks but just imposes a set of constraints on the process of Darwinian cell selection during embryonic cell movement and differentiation, neural process extension, and neuron death during development. Like species, populations of neurons are actually in competition with one another, and as with the origin of species themselves, structure in an individual is realized not by genes but by happenstance: "the actual microscopic fate of a cell is determined by epigenetic events that depend on developmental histories unique to each individual cell of the embryo." (62) Experiential selection refines this process through the organism's experience. Certain circuits that have high value for the individual are strengthened while other circuits are weakened. Reentrant mapping occurs between groups of neurons; signals are sent, say, from a lower-level visual processing center to a higher-order cortical region but also back from the higher-order to the lower-order. By such linking, each area can "know" that the other holds a certain kind of information. The reentrant circuits are crucial for Edelman's construal of consciousness within the TNGS.
Edelman spells out three assumptions for his theory of consciousness within the TNGS: the laws of physics are not violated, consciousness arose as a phenotypic trait sometime during evolution, and qualia must be accounted for. What he calls "primary consciousness" requires three evolved biological functions: a cortical system strongly linking "conceptual functions" to the limbic system, which applies survival value to learning and qualia; a new memory system based on this linkage, sorting by value-category through the mutual interaction of limbic system and cortex; and the aforementioned reentrant circuit, allowing "continual reentrant signaling between the value-category memory and the ongoing global mappings... concerned with perceptual categorization in real time." (119) By this means, the organism is freed to link past perceptual events to an ongoing scene, and current perceptions can be categorized before the perceptual signals contribute lastingly to value-category memory. Thus, consciousness is a kind of "remembered present," with reentrant loops between the value-category memory and the cortical sense-data processing areas, so the present can be perceived as if it were a memory. Edelman goes on to describe the "higher-order consciousness" of humans, afforded by language, adding yet more reentrant loops between the two other looped systems and language processing areas, thus freeing consciousness from the whole time loop --"from the bondage of an immediate time-frame or ongoing events occurring in real time." (133) The world may be modeled, plans made, past and future both made as accessible as the present, and a sense of self --or self-awareness-- established within that modeled world.
Like Greenfield, Edelman begins his consciousness discussion by eschewing a round definition of the object of study, giving instead that object's properties and sounding more like a riddle or parlor game than a pure inquiry.
It is personal... it is changing, yet continuous; it deals with objects independent of itself; and... it does not exhaust all aspects of the objects with which it deals... [It] shows intentionality; it is of or about things or events. It is also to some extent bound up with volition. Some psychologists suggest that consciousness is armed by the presence of mental images and by their use to regulate behavior. But it is not a simple copy of experience... nor is it necessary for a good deal of behavior. Some kinds of learning, conceptual processes, and even some forms of inference proceed without it. (112)
There is nothing inherently wrong in using a set of descriptions for a definition --if it selects the object and completely distinguishes it from its context. (After all, definitions are usually either analytic or descriptive.) If there are a dozen objects in a crib, say many green Wonder bread sandwiches and a green battery, saying, "Object X is green and cylindrical" sufficiently distinguishes the object from its context, that is, delimits or defines it within that context. However, Edelman has not even given a description but only offered faint impressions, as if, in the objects-in-the-crib example, one were to say, "It is useful and portable... it is everyday... you can buy it." He has not clearly delimited his object of study from its context. The analogy to the crib-objects breaks down because they are discrete, whereas the object under Edelman's study may not be discrete --that is, it may have no physically delineable boundaries; and we need to know where it ends and other things in our heads begin. Like Güzeldere, Edelman quotes William James on consciousness' intangibility --"it is something the meaning of which we know as long as no one asks us to define it" (111)-- apparently thinking that our intuition about this thing, plus a few miscellaneous faint impressions, will leave us without a doubt about what he is discussing. It is as if someone were to tell us we all know what prettiness is, state a few properties --"It is personal, it is poignant and pleasing; it deals with objects (such as faces) independent of itself, yet it is within that object. It is of or about things and even events. It is not necessary for a good deal of behavior..."-- then proceed to state a scientific theory of the evolution and substance of prettiness.
We might know a pretty thing when we see it, or know whether we are conscious or semiconscious, or even know what is in our consciousness, but such possibilities do not mean we truly know what prettiness or consciousness is.15 Otherwise, someone needs to clarify, through copious research and explanation, how this type of private knowing can suffice --or can assure us that the object of study has sufficient unity across its many manifestations (individual persons)-- for us to use the public or objective language of scientific explication to describe this object. That so many researchers cannot agree as to whether this object even exists points up the possibility that there is a problem in using a public, objective language to discuss something that exists --if it does at all-- only privately. Edelman, like many researchers, glides right past this foundation-level question, then builds a whole edifice and expects us to buy it. If we all know what consciousness is, then why bother writing tomes to tell us what consciousness is?
Edelman's list of properties, like Greenfield's description of consciousness, can be applied to any number of mental states, in which the universe of mental entities can be sliced into wedges of all different sizes (see §1.2.2). Demonstrating no awareness of these problems of definition --and thus turning his deft and earnest effort to position consciousness within scientific or public language into so much structure without meat, void of meaning-- Edelman appears subconsciously to sense this dearth and to seek some kind of atonement through his christening of at least two consciousnesses --primary and higher-order. Although a laudable shot at blotting out obscurity, it only, in fact, smears that obscurity over a larger area. Like the monster in the movie that is made into two when bifurcated, Edelman now waves two undefined consciousnesses at us, with the heightening terror of just how these two neither-material-nor-immaterial beasts relate, communicate, and interact with one another. The razor has been used not à la Ockham to hone down to barest essences but to multiply entities with no justification as to whether the entity is indeed so multiplied. It feels good to have our species' shoulder patted and to be told we are higher order, but we need to be instructed as to what this thing is that makes us so exalted; we seem a little less exalted if we tout that we can understand it yet are unable to say what "it" is.
Triumphant with his theory-without-an-object, Edelman proceeds to break the history of philosophy into a chart of isms (158) and consign all to the dustbin of intellectual history, save "a favored set: qualified realism, sophisticated materialism, selectionism, and Darwinism." (164) (The last two isms are apparent from his philosophy; realism is qualified by the constraint placed on our concepts by the mind's and language's being shaped by evolutionary necessity and expediency; and materialism is qualified as sophisticated because, science itself being a biologically based epistemology, science cannot be entirely value-free, in the old Galilean sense.) Among the 99% of world philosophy he flicks off the end of his lit butt, Edelman tosses property dualism --without seeing that he thus also lets fly his own philosophy. He does admit he wants to maintain the "distinction between selective and nonselective material systems." (160) That is, biological and psychological systems, with the added rules of Darwinian selection, via TNGS, built upon their physically based substance, submit to rules other than those of purely physical systems, such as planets and muons. Edelman has to make this distinction because he opts to dismiss the assertions of "fancy physics" --presumably such as Roger Penrose's (1989) or Penrose and Hameroff's (1996) hypotheses about quantum-level phenomena influencing mental events (via microtubules, in the theoretical extensions published after Edelman's work).16 He must dismiss such possibilities because they would confound his centering of consciousness within the reentrant loops creating "the remembered present" and wreck his entire gossamer structure of consciousness arising from basic evolutionary principles.
Yet in his desperation to be scientific and thus to acknowledge life's basis in physical matter, while stating that life has developed its own set of laws --selection-- which are like nothing seen in physics yet are equally subject to scientific description, he overlooks the naked fact that he has nonetheless been indelibly caught --whatever his excuses-- in a property dualism: life and "psychology can be described only in its own terms," and the very words he uses to dismiss property dualism hold equally true for his own theory. Edelman lets himself fall into philosophical self-annihilation for the sake of a theory, at least one part of which --that concerning consciousness-- is derived for the sake of explaining an object he never delineates; in other words, he commits philosophical self-destruction for nothing and wants to drag almost all world philosophy into his wasteland.
This scenario could be averted by the theorist's beginning at a simple level --first principles, at least for his or her field of study-- working out the nature and extent of the object, and then asking how it may be examined. Instead, Edelman begins at a middle-to-high level with a baggage of assumptions about how this thing is to be studied, even without saying what it is. This approach speaks not of caring for the object but of superimposing a characteristic upon it whether or not the characteristic inheres in it. Care and attention thus are given to the methodology employed; the result, at least in this case, is unflattering to the scientific methodology upheld, to the object thus characterized, and to the philosophical attempt so wanton. The only question remaining in the aftermath is, as this act was not one of pure inquiry, what was it an act of?
A set of astronomers on separate islands might gaze above and track the wanderings of a heavenly body. When their bottled messages float upon one another's shores, they may scratch their heads, because one had gazed at what in English is called the moon, another at the sun, another at Mars, and the last at Andromeda. But once they boat to their island group's astronomical meeting and share a night gaze, they quickly see that each heavenly body was different.
Consciousness researchers have not even attempted such a mass night-gaze. In English, we get by with the "heavenly-body" term in question here --"consciousness"-- and seem to understand one another sufficiently, and consciousness students take this bit of linguistic legerdemain at face value and waddle blithely on toward their sunny hypothesizing. After all, once the isoflurane wears off from the quadruple bypass surgery, the doctor asks, "Were you conscious of any pain?" and we say, "Not at all," and everyone is happy. Thus, Edelman reminds us, "As James puts it, it is the something the meaning of which `we know as long as no one asks us to define it.'" (111) Greenfield states, "We all `know' what consciousness is," (4) and Güzeldere also quotes James: "What is meant by consciousness we need not discuss; it is beyond all doubt." (113) On the other hand, all three researchers curiously demonstrate the impulse to define consciousness, and at least Edelman and Greenfield promise a definition but never pay up. Thus, they seem to understand their object of study is simultaneously undefinable yet tacitly known, while their professional instincts urge them to define it anyway and paradoxically set forth the ineffable in public terms, because the attempt is the sine qua non of their methodology, and failing to make it is failing the application of such methodology to this object from the start. And from the start, these researchers never provide the definition. Their discussion gains what coherence it has by leaning on the fact that we speakers of human language "know" what "consciousness" is. (The fact that we "know" through language lends a feeling of currency to these works, supplying enough momentum to keep up the discussion for years, without the discussion ever bordering on the kind of "knowing" of an object that is discussed in science.)
Yet, as through extremely wide-angle-lensed, unfocused telescopes taking in whole quadrants of sky, we can get some milky impression --scattered throughout their works-- of where they focus upon consciousness. Edelman and Greenfield both grant that qualia are somewhere in their sky. Greenfield includes dreams somewhere in a dim corner of her horizon; Edelman does not discuss them. Self-awareness is a significant part of Edelman's so-called higher-order consciousness; Greenfield barely mentions self, in the context of "sense of self" --acknowledging we have one (54)-- hardly placing it anywhere in the consciousness scheme. What Greenfield calls "arousal," which is roughly like the fuel determining the amount of consciousness burning at one time, has some vague similarities to Edelman's "attention," which also has some elements of Greenfield's concept of focus. Both are sheepish as to how memory, thought, and emotion play into consciousness --whether parts of it or mere vassals to it. Certainly, such vagueness and lack of common language and definition have riddled psychology for centuries, but these shortcomings ought not to burden these sheriffs from the hard sciences who have whirlwinded in to clean up and set consciousness in proper order astride Citizen Physics.
In fact, one after another, the remains of good-meaning law-and-order types strew the edge of the street where they have blustered in, the outlaw's name on their lips but with no agreement among them as to just what that outlaw looks like. Chalmers (1996) takes the Jamesian copout, admitting the object's undefinability, but at least clarifies the nature of the object by saying that, like matter or space, it simply cannot be broken down into more primitive notions. The trouble is, matter is broken down into more primitive notions, such as energy, and is well defined; and space can be defined either as void or as the extension of all matter. Yet ultimately, he turns to our own inborn wisdom concerning just what he is talking about: "I presume that every reader has conscious experience of his or her own... it is just those that we are talking about." (4) Sageful Dennett (1991) does not have to fall back on readerly wisdom --to the contrary, "we talk about our conscious decisions... about the conscious experiences we have... but we [do not] know what we mean when we say these things." (23) Without ever stooping to explain what the object he is talking about is, he nonetheless spends a book purportedly explaining consciousness, per the title. While the title, along with much of the text, predicates a single object, consciousness, a major theme holds there is no single phenomenon but multiplicitous phenomena always in the process of being brain-processed (23, 101-138) --what he calls "multiple drafts," like drafts of an article from rough sketch through versions sent over e-mail, corrected by reviewers and then by the author, gradually, so that even the published version is no longer definitive, just another stage.
Thus, he provides his sole overt description or characterization --the closest he comes to a definition, though still far from one-- of the object: Paradoxically, it is not an object but, at any given moment of time, objects: "if one wants to settle on some moment of processing in the brain as the moment of consciousness, this has to be arbitrary." (126) No one would deny that the brain is processing many levels of data at once; the problem for the consciousness researcher is whether all these can fairly be said to contribute to consciousness, and if not, which ones do and how they contribute. Dennett asserts that whatever we think consciousness is, is essentially just the iceberg-tips of many different streams of brain-processing, with no central self anywhere watching it all: "I" is just a convenient label for the collection of all these streams or drafts. However, Dennett's scheme begs the question on two counts: For one, which among all these brain processes are those that correspond to what we call consciousness --because (to compound the first problem with a second) if we think it appears as a single thing, there must be some process that corresponds to our so thinking, and that thought, which guides so much of our daily functioning through the sense it gives of ourselves, is not trivial. Furthermore, if consciousness is a multiple object, what process melds these parts into the apparent single flowing thing that even Dennett himself refers to as a single thing? (See also note 9.) Though he never sets out what consciousness is, Dennett asserts that whatever it is, it is an illusion of the system and so is not what it seems, so that what it seems has no significance and can be disregarded --yet it is that very illusion that is the object of study, and consciousness researchers cannot dispose of it just because it is not amenable to a currently well-formed methodology.
In sum, if consciousness is a sort of hallucination and we are always hallucinating it, it is a very real hallucination and must have some brain correlate.
In groping toward a "fundamental theory" of consciousness, Chalmers (1996) lays out a moderately detailed scheme of mental geography, detailing some miscellaneous landmarks that he wants in his world of consciousness: visual, auditory, tactile, olfactory, taste, and hot/cold experiences; pain; other bodily sensations; mental imagery; conscious thoughts; emotions; and the sense of self. He distinguishes these aspects of "phenomenal consciousness" --which signifies the experience, or what something "is like" or how it feels-- from "psychological consciousness," which signifies mental function, or what mind does (11). The notion of psychological consciousness includes awakeness, introspection, reportability, self-consciousness, attention, voluntary control, and knowledge. Chalmers then sets up a tidy analogy: awareness is to psychological consciousness as experience is to phenomenal consciousness. He clarifies that, for his account, "consciousness" refers to phenomenal consciousness (21), which he feels is due an explanation by some kind of (philosophic --or scientific?) theory.
Later, after thus breaking away from cognition, Chalmers returns and interweaves or interdefines it with consciousness: "where there is consciousness, there is awareness, and where there is (the right kind of) awareness, there is consciousness." (222) Chalmers aims to break cognitive psychology's bar on drawing conclusions about conscious experience from empirical evidence --by using a bridging principle, which he states is precisely this connection between consciousness and awareness:
When a system is aware of some information, in the sense that the information is directly available for global control, then the information is conscious.... at least in a language-using system... information is conscious if it is reportable. (237)
As he maintains, elsewhere in the volume, that one's judgments about phenomenal experience are reliable, the bridging principle, via reportability of experience, allows epistemic leverage for investigating consciousness and establishing its correlates with physical processes, and "conclusions drawn on the basis of this principle are better than no conclusions at all" (238) --though he gives no justification for this loaded statement, no argument that we would be better off with such tenuous conclusions.
What Chalmers calls his psychophysical law of coherence --that consciousness is always accompanied by awareness and vice versa-- forms a basis of his program to establish that consciousness is a nonreductive property of certain functional organization of the physical, or what he calls "nonreductive functionalism." (249) He seeks to maintain the functionalist principle of organizational invariance --that any two functional isomorphs, whether realized in neurons, ping pong balls, or silicon chips, are conscious isomorphs-- an idea common among functional reductionists,19 while denying that consciousness gains any identification when it is broken down or reduced to lower-level phenomena. He seeks to retain the integrity or foundation of conscious experience as phenomena that gain nothing by being "explained away" --that, in fact, cannot be explained away because, after all such explanation, they stand (or fly), sure as the saucers in an old science fiction movie that withstand every pounding of Earthling firepower. At the same time, Chalmers wants scientific integrity and to grant that these phenomena arise out of the physical,20 and thus are empirically describable as worthy subjects of scientific discourse. He characterizes this philosophy of nonreducible consciousness that is naturally supervenient on the physical (just as biology is) as "naturalistic dualism" --a property, not a substance, dualism. Thus, he is not guilty of the Cartesian bifurcation, which has yet to answer how the world of mind and the world of the physical can interact. Because Chalmers' two properties are the physical and the supervenient-upon-the-physical, they have no problem interacting. This two-sides-of-the-coin perspective on the object of study gains clarity of resolution when he hones his lens upon information states: "Physics requires information states but cares only about their relations, not their intrinsic nature; phenomenology requires information states but cares only about the intrinsic nature." 
Or: "Experience is information from the inside; physics is information from the outside." (303)
Neat and pat as this approach to consciousness may be in mustering evidence to justify its nonreductive functionalism and reconcile it with its dualism, Chalmers never clarifies a serious apparent inconsistency. He denies reductionism can address what consciousness is, by five arguments. In the first two, he describes two situations --the possibilities of zombies and inverted spectrum -- as evidence that consciousness is not logically supervenient on the physical. He adds two epistemological arguments, which are the most convincing. For one, consciousness has an epistemic asymmetry: all the physical facts of the world can only provide indirect evidence of consciousness unless one has experienced it oneself to know it is there. Thus, there may be an "other minds" problem --but there is no "other feet" or "other heights" problem. The knowledge argument gives the famous story of neuroscientist Mary, who lives in a future when all is known about neuroscience, but she grows up in a black and white world: No knowledge of neural processes will ever cue her in to what red looks like. The knowledge is "a factual knowledge that is not entailed a priori by knowledge of the physical facts." (104) Finally, Chalmers argues that no amount of defining causal roles of consciousness --for example, stating it is verbally reportable-- can define conscious experience itself. So, reductive explanations cannot account for logical possibilities concerning consciousness, its epistemic asymmetry, or the peculiar nature of phenomenal knowledge, nor can they analyze it as a functional property.
But Chalmers wants to wed the nonreductive approach to functionalism, through what he calls the weaker form, in contrast to the stronger, "upon which functional organization is constitutive of conscious experience," whereas in the weaker form, "conscious experience is determined by functional organization, but it need not be reducible to functional organization." (275) He has not made it clear that his own weaker form does not, in fact, inadvertently make functional organization constitutive of consciousness. Along similar lines, he defines his principle of organizational invariance as "holding that given any system that has conscious experiences, then any system that has the same fine-grained functional organization will have qualitatively identical experiences." (249) If we derive the combinatorial-state automaton (CSA) M, then whether we apply that functional organization to neurons, the population of China, silicon chips, or potato chips, we will get identical conscious experiences. Perhaps Chalmers tries to deny that this M is consciousness itself, or that CSA M has reduced consciousness to CSA M, because he has a program to assert that consciousness is an entity that must be accounted for as a real entity and not explained or "reduced" away, as functional reductionists (with whom he vows he contends) have asserted they have done. But to say that his CSA M has not reduced consciousness to functional organization is to maintain an unusual usage of the language. Arguing that, since a physical system gives rise to consciousness, consciousness must be calculable, he contends that one of a certain class of computations, combinatorial-state automata (CSA), has the potential to calculate a conscious system:
For a given conscious system M, its fine-grained functional organization can be abstracted into a CSA M, such that any system that implements M will realize the same functional organization and will therefore have conscious experiences qualitatively indistinguishable from those of the original system. (321)
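What "same fine-grained functional organization" amounts to can be seen in miniature. The sketch below is deliberately toy --the states, inputs, and transition table are invented for illustration, and a real organization would have astronomically many states-- and it bears only on the structural claim: two systems realizing the same combinatorial-state automaton behave identically, whatever their substrate, and so count as functional isomorphs in Chalmers' sense.

```python
# Toy sketch of a combinatorial-state automaton (CSA). The state names,
# inputs, and transition table are invented for illustration only; the
# point is that the "functional organization" is exhausted by the table,
# independent of whatever substrate realizes it.

TRANSITIONS = {  # (state, input) -> (next_state, output)
    ("idle", "stimulus"): ("active", "report"),
    ("active", "stimulus"): ("active", "report"),
    ("active", "rest"): ("idle", "silence"),
    ("idle", "rest"): ("idle", "silence"),
}

class CSA:
    """An automaton defined purely by its transition table."""
    def __init__(self, table, start):
        self.table = table
        self.state = start

    def step(self, inp):
        # Look up the (next state, output) pair and advance.
        self.state, output = self.table[(self.state, inp)]
        return output

# Two "implementations" -- call one the neuron system and the other the
# silicon system -- realize the same table and so behave identically.
neuron_system = CSA(TRANSITIONS, "idle")
silicon_system = CSA(TRANSITIONS, "idle")

inputs = ["stimulus", "rest", "stimulus", "stimulus"]
assert [neuron_system.step(i) for i in inputs] == \
       [silicon_system.step(i) for i in inputs]
```

Nothing in the sketch settles whether the shared table merely determines conscious experience or quietly constitutes it; that is exactly the question the critique presses.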
Certainly, CSA M is not consciousness but only becomes consciousness when it is implemented. Similarly, F=ma is not force, but by Chalmers' own definition of "reductive explanation [as] an explanation wholly in terms of simpler entities," (42) F=ma is a reductive explanation of force. By analogy, CSA M is a reductive explanation of consciousness because it explains consciousness in terms of a simpler entity, functional organization.22 Another lapse of explanation arises from a logical extension of Chalmers' idea of experience as information processing (Chapter 8), so that even a thermostat may be said to have experience. Furthermore, as there is information in any object --rock, mucus, quark-- anything has experience; therefore everything has some level of consciousness. (297) However, Chalmers has argued that consciousness arises out of the brain's functional organization. He does admit that his ontology of experience does lead to panpsychism (297), but he tries to ease out of this uncomfortable position by saying variously, "I would not quite say rocks have experiences... but may have experiences associated with them," (297-299) and "Personally, I am much more confident of naturalistic dualism than I am of panpsychism." (299) Though admitting the latter issue remains open, he does not discuss how crucial it is to resolve it: Either consciousness arises out of neural functional organization, or it arises out of everything. Ad hoc, one might try to make the argument consistent by saying, "The brain's functional organization allows it to build up great degrees of information, therefore the brain has a lot bigger, more interesting consciousness than a rock." The problem here lies in the imprecision of "information." A phonebook may have as much information in it as a living brain --not only the address and phone of each person, but also the number of letters in the street name, the number of quarks in the ink it takes to spell each name, ad infinitum. 
Thus, a muon, too, can have infinite information. Why is everything then not infinitely conscious? Then, the argument must fall back on the peculiar type of (neural) organization of the brain that gives it consciousness; but this fall-back contradicts the assumption that everything is conscious. It may be said brain organization engenders the peculiar kind of human consciousness, but then we would be discussing the ontology of types and not of consciousness itself.23 Chalmers has built an indelible hole in his theory: Information processing implies experience, experience implies consciousness, and information is everywhere --yet consciousness arises by virtue of functional organization. It may be time to start at step 1; the term "consciousness" allows such apparently infinite bendability that it seemingly makes any theory work at first blush (even when the theorist attempts some delineation of the term's meaning), but upon slight nudging, such flexibility also allows the theory to bend around, like the ouroboros, and bite its own tail --or roll away to nowhere.
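The phonebook worry --that "information" balloons without bound as description grows finer-- can be put in rough numbers. In the sketch below, the levels of description and their state counts are invented for illustration; only the unbounded growth matters.

```python
# Rough illustration of how "information" is unbounded under ever-finer
# description of the same object. The state counts per level are invented
# for illustration; only the trend is the point.
import math

def bits(n_states):
    """Bits needed to single out one of n_states possibilities."""
    return math.log2(n_states)

phonebook_descriptions = {
    "names and numbers":       10**6,
    "letter counts and fonts": 10**12,
    "ink-molecule positions":  10**30,
    "quark configurations":    10**80,
}

for level, states in phonebook_descriptions.items():
    print(f"{level}: ~{bits(states):.0f} bits")
```

With no principled cutoff on granularity, any object's "information" grows without limit, which is why the term alone cannot ground a gradation of consciousness.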
The concept of functional organization itself has trouble standing within the context of the kind of consciousness Chalmers proposes to explain, because of an inexactitude in where this consciousness actually may be pinpointed to exist --and because Chalmers' aim of showing that consciousness arises out of functional organization turns out to be an assumption on which he bases his demonstration. The principle holds "that given any system that has conscious experiences, then any system that has the same fine-grained functional organization will have qualitatively identical experiences." (249) He assumes there may be "functional isomorphs" (248) --two systems that share functional organization at the "fine-grained" level, "the level fine enough to determine behavioral capacities." (248)
Even if our neurons were replaced with silicon chips, then as long as these chips had states with the same pattern of causal interactions as we find in the neurons, the system would produce the same behavior. (248)
Though he does not state whether he is speaking of logical or natural possibility, the entire context of brains, neurons, and physical systems bespeaks natural possibility. The assumption is that the essential, fine-grained level of organization is found within the kind of entities whose operations can be replaced by silicon chips. He is making this assumption within the context of natural possibility without offering evidence that consciousness does arise from such entities. Though elsewhere (259-260) he rightly defends a hypothetical case in which neurons are replaced by silicon chips against attacks on its possibility, by saying the burden is on the attacker to show such replacement is impossible, in the present case the burden is on him to show that consciousness arises out of entities with functions replaceable by silicon chips, because he is establishing the scenario in which these functions can be organized into consciousness. If consciousness is derived from entities with functions that cannot be replaced by silicon chips, then his later argument falls into the circular form of, "Consciousness, based on entities that are functionally replaceable by silicon chips, arises out of a functional organization, and thus this functional organization can be replicated by any other nonbiological system, such as one silicon-based, with the same organization." Apparently, a fact is being assumed in order to demonstrate that very fact. Furthermore, there is an assumption that a consciousness-generating system can be broken down into its fine-grained components, when this is a question of empirical fact that must be demonstrated. There are other natural possibilities. 
Assume there are two clones in which every electron spin is the same at time t0 and every particle is in the same quantum state, and the two clones are in identical parallel universes in which every photon hitting the eyes is in the same quantum state, and the same events occur at each moment t0 + 1, t0 + 2, .... It would be difficult to argue that the two clones would experience any difference in consciousness, because the two universes are identical in every aspect, down to the history of their first 10^-80 second. But this scenario really says nothing other than that the universe is identical to itself and gives us no clue into whether there is a level of fine-grained organization fine enough to determine either which universe is which or to determine behavioral capacity.24 But in what other way may there be functional isomorphs that can be distinguished from circumstantial pseudoisomorphs?
Now take two universes identical but in one factor: At time t0, they are identical, but at time t0 + 1, photon1, coming toward clone1, decides to change quantal states from photon2, flying toward clone2. Photon1 does not set off the reaction in rhodopsin molecule1 in clone1, though the rhodopsin reaction does go off in clone2. We no longer have two physical isomorphs; whether the two have stopped being functional isomorphs --that is, whether some change in functional organization (say, enough rhodopsin not reacting to change neural firing) has altered behavioral capacities by Chalmers' definition-- must await further study. So we observe the two now slowly diverging universes, measuring each of the zillions of photons striking every eye pigment molecule and every molecule in every cell of the body, besides monitoring every position of every object around either clone, including every quark in every star and every molecule in every gnat and dust mote from one corner of each universe to another, because whenever we sight the first change in behavior in either clone, we have to be able to tie the shifts in the physical universe to the shifts in the clones' behavior.25 Clone1 and clone2 are walking down Broadway in their respective New Yorks. As they approach their 42nd Streets, we notice clone1's left big toe jiggles a centimeter and clone2's does not. We have noted that everything physical in the two universes was the same except for a solar flare in universe2 that changed radio reception in the Walkman that clone2 wore but not in clone1's, one second before the toe jiggle. We find the distinct auditory pathways, after the solar flare, activated in clone2, whose toe did not jiggle. We assume there was a change in consciousness between the two clones because there would be different auditory qualia. We have all the neural firing data for both clones. What made the toe in clone1 wiggle --not hearing the radio interference? 
Was clone2's toe "destined" to wiggle until the new sound halted the pathways that would materialize in the toe wiggle? Either possibility is as improbable as the other --yet either possibility is dependent on the other, so the improbability of one cancels out the other. Again, the case is locked in indeterminacy.
If we had a theory of a [sufficient] level of simplicity that could predict all the specific facts about our experiences --even only those facts familiar from the first-person case-- when given facts about our processing system, that would be a remarkable achievement.... There is no reason to believe that such a theory is impossible. (310)
Not knowing what the writer thinks, the reader is left only to interpret the statement: The goal is to devise a theory --a simple one, perhaps as simple as a(t) = v'(t)-- that can take as input a few bits of data, such as what all the molecules in our body are doing, and give as output all of our thoughts, perhaps something like the old schoolyard whim about the dreamcorder that videotapes dreams for later viewing. This great proposal, right in line with the psychologist's unfulfilled vision of devising mathematico-psychological theories as clean and elegant as those in physics, misses the closed-system requirement: the continuum from consciousness to body to environment is not elegantly divisible. At moment t, I am looking at the nighttime vista from the top of Guadalupe Peak in West Texas. We have to take into account not only all my molecules but all the photons arising from or bouncing off the atoms from the hundreds of miles of rock and plain below and from the sky and stars. One might say we only have to measure the collection of photons striking my retina --but the problem is that my consciousness is interpreting those photons according to some internal construal of the terrain, and the scientist cannot make meaning of my construal without a thorough detailing of all the miles of rocks and all the photons bouncing off them. My construal, after all, is built partly upon my understanding of the coral-reef-based geology of the Guadalupe Range, besides some idea of the relative distance of the stars and how their relative luminescence and color temperatures influence their appearance. In fact, I am simultaneously perceiving the sky three-dimensionally above a multi-eoned range and plain, and at the same time sniffing an odor salad of mesquite, pine, juniper, and recent burn --all of whose emanating objects the scientist must detail extensively, down to the molecular level. 
Simultaneously in the exultant moment, I am sensing the presence of a lover with whom I shared a similar experience several months ago, so the scientist must go uncover this lover and detail this person's biology and experiences exhaustively, to properly understand my memories of this person. A full understanding of this person, unfortunately, will lead to similar exhaustive studies of the rest of the human population, past and present, and all the physical facts that pertain to their processing systems. The exhausted scientist might protest that we must draw the line somewhere as all the other sciences do and allow a body to rest, but we have to say, we have decided to study an open system, and Chalmers has asked us to give the physical facts about the processing system, so you must keep working. Indeed, we might call the equation we are working on the "Chalmers equation," derived exactly from his proposal:
Cme(t) = C(x1(t) + x2(t) + ... + xn(t)),
where Cme(t) is a function deriving all the specific facts about my conscious experience at time t (upon Guadalupe Peak), x1(t) + x2(t) + ... + xn(t) are all the given physical facts of the processing system at time t, and C is the consciousness operator operating upon all these facts. The scientist cannot ask me about anything on the left-hand side --thus all the aforementioned extensive fact-gathering for the right-hand side. Although we intuitively feel there is a separation of the brain and mind from the rest of the world, and it makes good sense to act on that assumption, when it comes to this particular scientific program to explain consciousness by physical facts about the processing system, what those facts entail requires a kind of fact-gathering that makes it virtually impossible to distinguish consciousness from brain from environment. Moreover, because of such requirements on fact-gathering, the scientist must somehow already know what is in my consciousness (such as my construal of the stars and my lover) to know which facts to look for.
Finally, the beleaguered scientist (likely, a team of graduate students) has gathered all the physical data. Now it is time to spit out the left-hand side of the equation --all conscious experiences at time t. I can describe some of the experience --but can I describe all of it? Much of what I describe is in a continuum with many other points in time: Imagine some consciousness subset (such as the sense of presence of my lover) as a long blob through time. How much of the total space of Cme(t) a cross-section of this blob occupies at time t (such cross-sections waxing and waning in relative "pie-sectioning" with time) may be virtually impossible to describe, if I can even get a grasp on it myself. Besides this epistemological problem is the empirical problem of my determining whether I have described all the things in my consciousness --such as focus: My eye was pointed toward the Delaware Mountains, but actually my focus was on the absorption of the sensual (smells, sight) and the ideational (geology, stars, lover), all melded into a great dreamlike wonderment. Towards the periphery, did I quite see β Ceti? A car going down Highway 62? Did the patterns of the shadows on the Delawares make a difference in the way I was sensing my lover's presence? There are countless potential such empirical problems for me in describing my side of the equation.
Practically, one might have a hard time justifying the expense of gathering all this data, when the basis of the theory has drawbacks of circularity and inconsistency, all in turn based on an inscrutably ineffable or vague concept, "consciousness," when science funds are scarce as it is and when one's own idea about one's consciousness suffices for now. In light of such a proposal --so modestly stated, yet proving, when lightly scratched, to be as gargantuan underneath as Darling's plans to create universes-- one cannot help but wonder: Is pure inquiry into the nature of consciousness really guiding this theory development?
Dennett (1991) paradoxically first allows himself not to delineate consciousness, then proceeds through his tome to deconstruct the consciousness that he never delineated, saying that what appears as a single flow or "Joycean machine" (220) is actually an illusion created by several processes within the brain arising in rapid succession, in what he calls "multiple drafts" (111). In a variation of the fox and grapes tale, Dennett handicaps/sabotages his tool in advance, so it cannot reach the fruit, then turns and says the fruit does not even exist.26 One wonders why even bother building the tool (perhaps because so many others are building one, and one must appeal to them?). The problem nags; he never delimits the one thing he wants to show is not one thing, so the consciousness he is attacking is a defanged straw person from the start.
But his model/metaphor of multiple drafts does not do its alleged job even in the context of the metaphor itself. The image of the metaphor is of the contemporary drafting of an article or book: Professors often send early drafts out, usually through email, to colleagues for review and comment, so the final published version that comes out in the journal is only an anticlimax, since all its readers have already read it --and the text has gone on changing since then. In consciousness, too, there is no definitive draft: at any one moment, signals are coming from visual processing, memory, and other places, so that even the sequence of time itself is an illusion, and there is no one central or final processing place where all these signals come together and pass through some kind of filter into the consciousness arena (what he labels the "Cartesian theater"). Of all the experimental examples he cites, the most telling may be the color phi phenomenon: When two dots of different colors, spaced apart, are presented to subjects in rapid succession like a little movie, they appear as one dot moving back and forth27 --moreover, the color of the dot appears to change when the apparent dot is halfway between dot 1 and dot 2. Two Cartesian-theater-style explanations can account for the apparent time-warp: the "Orwellian," which holds that the history of the second dot's color was "rewritten" for the consciousness theater after the fact, and the "Stalinesque," which contends the pathways to consciousness fill in the missing information between the two dots. Dennett contends these two interpretations are experimentally indeterminate, and as they are the only two Cartesian theatrical explanations, such theatrics cannot be the case. Time, as it appears in consciousness, is not necessarily related to time in the real world; the brain has its own ways and purposes in processing information. 
But Dennett, striving in his metaphor to capture this never-quite-definitive, multiply- and simultaneously-existing (many drafts are circulating at any given time) reality, never discusses the crucial basis of the metaphor: there is, for all its drafts, one work.
In fact, in perhaps an inadvertent paradox, Dennett keeps implying that there is some kind of emergent entity over and above (perhaps between?) all these drafts, though he never acknowledges it. Most tellingly, while deflating critics of strong AI by attacking the allegedly small size of their imaginations, Dennett states, "They just can't imagine how understanding could be a property that emerges from lots of distributed quasi-understanding in a large system." (439) Though one might think Dennett would certainly deny what it sounds like he just said, the words say that somewhere there is emerging some kind of whole over and above these widely distributed multiple drafts or subparts of consciousness. For something to emerge, it must come from somewhere and then exist in another place. Less explicitly, Dennett's prose elsewhere keeps waxing pregnant with this same entity or process at least one step beyond the multiple drafts, but he never carries this burgeoning to term: Referring to a certain optical illusion in which pink is seen in the white space between red lines of a grid, Dennett inscrutably says, "You seem to be referring to a private, ineffable something or other in your mind's eye, a private shade of homogenous pink, but this is just how it seems to you, not how it is." (329) Dennett does not add that the simple word "seems" does not merely appear via ink and paper and promptly wipe out the entire problem many authors are concerned about; rather, the word represents something --possibly very complicated-- happening in the brains of humans. Yes, this pink seems to be here, and I seem to experience a steady flow of time and consciousness --from a set of data and input more discontinuous than that for a movie. 
But just because there is a discontinuity in the input from all these multiple drafts and quasi-understandings does not make this "seems" that emerges somehow unworthy of our consideration and thus necessitate our turning the spotlight only on the multiple drafts themselves. To the contrary, the scenario makes the "seems" all the more amazing, begging us all the more to account for it instead of turning our backs on it. In a curious twist of scholarship, Dennett finds that by adding "just" to this magnificent "seems" he has unearthed, he can then convince us we need not tremble when he tosses it out, saying it is "not how it is"28 --with no justification for an epistemology in which the brain generates a "seems" and yet that brain activity is somehow not worthy of study. Apparently, Dennett intends only to study the aspect of the brain amenable to his method and simply dismiss the rest of it --hardly a theory of the whole brain and mind.29
This spottiness in the theory is reflected in Dennett's waxing and waning, hemming and hawing throughout the book, on whether he really is presenting a theory. Chapter after chapter, he refers to his "theory" and even, as the work proceeds, to "developing our theory of consciousness as we go," (282) or "here is my theory so far." (253) But then he wavers: "My main task in this book is philosophical: to show how a genuinely explanatory theory of consciousness could be constructed." (256) But then at the end, he demonstrates a confidence in the model as theory by presenting several hypothetical experimental situations that would test the theory (thus presumably being falsifiable, thus scientific (Popper 1959)) --certainly an assertion that this model is a theory. Yet this elusiveness has the effect of, on the one hand, asking to be excused from the rigors of clearly or distinctly stating the theory and what it applies to while, on the other hand, asserting predictive powers from pieces of what is said, so as to inspire the fear and respect due a full-blown, entrenched theory. Thus, Dennett finds a way to shirk definitions and delineations and talk about a wide range of experiments on mental phenomena with the assertion that they fall within consciousness, while dismissing a whole range of "seeming" as not consciousness, all with the tenor and omniscient authority of a predictive theory. By this means, he does not have to state that even if the predictions he makes prove true after experimentation, we still do not have a theory about a sizeable part of the brain that is generating "seeming": Thus, accurate predictions will corroborate only that Dennett has characterized some properties of the "quasi-understanding" part of the brain but not of that emergent/"seeming" part. 
Instead, he jettisons that latter part --which, by the fact he acknowledges it is functioning because it does "seem," must exist-- into the realm of mysteriousness and inexplicability, just as he criticizes other philosophers for wallowing in mystery and "wonder tissue." His predictions would only corroborate that consciousness is indeed made up of modules but would say nothing about the whole that other parts of his text keep implying exists, even while he overtly denies its existence. In an ironically telling passage, he reveals why he denies it: "Postulating special inner qualities that are not only private and intrinsically valuable, but also unconfirmable and immeasurable is just obscurantism." (450) With his methodological assumption, he must deny qualities he assumes are uninvestigatable.30 But his text throughout states or implies there is an emergent entity beyond the "quasi-understandings," a "seems" that must be somewhere in the brain; apparently, he does not address this "seems" because it is uninvestigatable. But his text posits it. Therefore, the text is guilty of obscurantism --even a doubly-embedded obscurantism, because it does not acknowledge its buried postulation.
Michie (1994, 1995), seeing "consciousness" as an engineering issue, knots himself in such paradoxes that one can only be thankful he is not building a bridge. On the one hand, he maintains that the intervention of consciousness in problem-solving is virtually peripheral, "only for intermittent monitoring and goal-setting... is largely specialized to the construction and communication of appropriate after-the-event histories and explanation." (182) In his conclusion, he notes that computer users are starting to attribute personalities to their machines --while he neglects to confess that humans have always had such anthropomorphic predispositions toward mechanisms, as in calling ships "she"-- and when machines are sufficiently "intelligent," he holds, we will attribute essentially conscious experience to them. So, by Michie's functionalist assumption, the machines will be conscious, for all intents and purposes. Thus, Michie, like Dennett, skirts the whole issue of just what it is he proposes to engineer --yet even if he could define it (his system prevents him because, apparently, consciousness is defined only in the mind of the beholder31), why would he try to build it into the machine when consciousness is unnecessary for the kinds of problem-solving he is interested in doing? It would be as bad as building a bridge with hundreds of pounds of useless metal weighting it down.
While Dennett's and Michie's ambitions are swallowed by the very vacuum they create by voiding their object of definition, the general vacuum arising from the lack of consensus on what consciousness researchers as a whole are discussing means that no contributor adds to any whole; each is speaking into a virtual void. The problem came to the fore when Chalmers (1995) presented his "easy/hard" scenario in a symposium of articles addressing his approach. Lowe (1995) and Velmans (1995) outright contend with the definition, as it were, that Chalmers set forth; Shear (1995) less contentiously but more adventurously expands on the usage that the contributors were urged to follow. Lowe takes issue with Chalmers' characterizing consciousness in terms of "the sensuous, or phenomenal, or qualitative character of experience." (267) Lowe finds that experiences may validly be either perceptual or sensational in character; though grounded in its phenomenal character, a perceptual experience is also affected by its representational (intentional) content (thought being purely intentional, without sensuous content). Lowe also lashes at Chalmers' idea of cognitive information, which lacks the notion of conceptual content. With these alleged misunderstandings, Chalmers' whole system of easy/hard problems collapses, Lowe asserts, for not addressing anything real. Velmans faults Chalmers for consigning "awareness" to "phenomena associated with consciousness, such as the ability to discriminate, categorize, and react to environmental stimuli; the integration of information by a cognitive system," (258) while "consciousness" should be relegated to "experience." Velmans finds that this terminology is already theoretically loaded, for it implies that "information processes associated with consciousness are themselves in some sense `sentient.'" (258) Shear presents certain traditions in eastern thought "removing attention from all phenomenal objects of consciousness... 
leaving consciousness alone by itself." (64) Such an assumption about what consciousness is would mean that the researcher holding the assumption could not mean the same as Chalmers does with the same word, so their theories could not refer to the same thing and, taken together, would be meaningless.
The centrality of this definitional mire has not gone unnoticed. Referring to how contributors to the "easy/hard" symposium were asked to follow Chalmers' usage, Velmans remarks, "if there is a consensus, this might become standard as a whole. Given this, it is important to examine Chalmers' usage with care" (258) --though such consensus appears to be about as imminent as earthly paradise. Admirably, Güzeldere (1995b) plugs on to explore the definition of consciousness. Success in the endeavor would likely be marked by a strong and rigorous outpouring of research and debate on just what is the delineation of this object of study. The importance not just of such success but of being accurate may have ramifications that reach beyond a handful of researchers' careers and into humanity's conception of itself; after all, hard-core behaviorism was successful for decades in psychology departments, and its influence reached into psychological counseling and mental institutions (through behavior modification therapy), industrial design ("operant conditioning") and the vernacular ("positive reinforcement"). What Güzeldere (1995b) writes about the concern over whether the third-person perspective is sufficient to advance understanding of consciousness can apply as well to the term's definition if and when one is ever agreed upon in the field. "The issue is whether such an approach is always doomed to leave something essential to consciousness out of its explanatory scope." (165) If there is to be a science --engineering individuals' consciousness, building machines that allegedly reflect what we are as human beings, and molding society and the species according to reputedly inexorable laws-- the "issue" is more than a negligible exercise. 
For such a science, we have seen that definitional consensus is a minimum.32 Although we get by in everyday speech with the word "consciousness" and so, as James pointed out, we all seem to know what it is, the cacophony that erupts when we attempt to articulate our tacit knowledge shows the question remains: "Is accurate definitional consensus possible, and how do we know we have reached it?"
Consciousness researchers remain so unsettled on just what they want to talk about that some of the visions of grandeur of vanquishing the human mind and spirit might sound a tad presumptuous, like making plans to travel to other galaxies before inventing the wheel. In 1994, Francis Crick projected that by the turn of the century, scientists --perhaps himself in the pack-- will have found "the general principle of neural correlates of consciousness, which has now been abbreviated to NCC." (11) Now that our scientific australopitheci have their acronym, they only have to invent the wheel in enough time to get to the galaxies in the next year. Veering ever closer to the stars, one astrophysicist cum consciousness commentator has projected humanity progressing from "consciousness" interconnected via television to the race's mind literally and directly linked via computers until human consciousness is somehow incorporated into one vast machine that transports consciousness bits through wormhole relay stations to other galaxies (Darling 1993). (Needless to say, this large, technically human consciousness will be smart enough to create universes like we now make pies.) Though these two commentators' projections range from the highly hopeful to the mysteriously hopeful, the hallucinogenic emotions that generally stir within wannabe conquerors of this redoubt, consciousness, belie the tone of perfunctory humility that Crick coats over the enterprise: "We did not work on the structure of DNA in order to produce technologies.... It is the same for the brain." (14)33 In a modest aside, this monk, in the same worldly-wise monastic tradition that engendered Chartreuse and Benedictine from cloistered herbal knowledge, adds with an almost unnoticeable whisper, "but it would be naïve to think that the work will not have implications." 
(14) We are to believe, in an age when Prozac has grossed billions, when computer tycoons have personal worths upwards of 23 billion and Harvard biochemists have founded biotech companies with stock sales as astronomical as David Darling's future for humanity, that the current nuclear detonation of zeal for the secrets of consciousness among our eremite researchers arises from a knowledge quest as pure as medieval scholars' debates over how many angels dance on a pinhead?
The lack of systematic approach and the ad hoc, chaotic scrambling to assert theories with hardly a nod toward clarification of what is being discussed only induce the question of just what consciousness researchers are attempting to get out of a theory. Darwin, Edelman's evoked muse, followed a long tradition, stretching back through Linnaeus and at least to the 1600s, of steady, arduous, systematic classification of plants and animals by type; the theory of natural selection, after long observations about variations in forms and their geographical distributions, when coupled with "the doctrine of Malthus, applied to the whole of the animal and vegetable kingdom about survival," (Darwin, 1952, 7) finally made sense of both the relations between forms and how similar ones arose in different regions. Such careful systematization is wanting in consciousness studies, which surely must face a density of kinds of phenomena as thick as the planet is with species. What is needed is a catalog of the kinds of thoughts, images, pains and pleasures, even dreams (deserving the sort of characterizing Freud began, but without the interpretation), perhaps inspired partly by the introspectionists34 but without their methodology.35 Certainly, the problems of nonconfirmable subjectivity of the phenomena studied pose apparently almost insuperable obstacles of the sort that scientific pursuit, constrained by the finitude of time, shies from; but if truth lies within our having a systematic grasp of the phenomena we want to theorize about, then profound methodological problems should not hinder a truth-seeking researcher. Instead, researchers blindly throw out theories in hopes that one might stick in the right place, only no one has lit the gargantuan jungle of consciousness to see where the right place may be. Then what does the field aim to achieve from its exertions if it is not pure, systematic knowledge for its own sake?
Though author intentions cannot be fairly fathomed --unless stated-- and clearly followed through in the text, some consciousness writers do at least make strong projections about the influential consequences if their theories will out --consequences with certain socioeconomic character. At the heart of many of the extended treatises on consciousness theory, in fact, appears a practical applications section (Edelman 1992, Greenfield 1995, Chalmers 1996) --not too surprising, as many theories are derived not only to collate current data into a coherent whole that solves some problems about interpreting the data, but also to manipulate the world via this data and theory. What is striking in the consciousness treatises is their headlong rush into applications at such a patently --and even sometimes self-admittedly-- extreme, preliminary juncture. We might imagine a pre-Newton scribbling a tome that cannot even get straight what is meant by an object being in position x, much less getting to the next step of x(t) = x0 + vt, but then spending a good part of the time fulminating about how in a few months we will be able to create God. Such ambitiousness draws attention to the ambition and away from the real progress of the theory. However, ears have grown so jaded from scientists' predicting --as they have over the last 50 years-- how we are a few months away from a cancer cure or an unfailingly human robot or machine speaker of natural language that little notice is paid when yet another theorist jumps to all the possible applications of even the most gossamer of hypotheses; in fact, the literature hardly bristles when a scientist makes yet another shattering prophecy. 
Furthermore, the fact that the Wright brothers' peers denied that humans could fly, along with other such stories, has helped fuel a mythology that says any technology that can be simply stated will someday step into life, so questioning exactly how a technology is supposed to arise --much less how a given theory behind that technology can have substance-- has ceased to be an adaptive response. Nonetheless, now that it has been seen (§1.2) how little progress has been made toward identifying the object of study in consciousness studies, not to speak of reaching community consensus on this identity, theorists' leaps to create new universes, as it were, cut through contemporary jadedness and unleash the question: "What's the hurry? Is the anxiety to manipulate and control some niche of the universe so overwhelming that we cannot first calmly learn and characterize what we want to manipulate before we mess with it? Why are we so anxious to manipulate these niches?"
Edelman relates the anecdote of once talking at a party to some minor artist-aster, who explained that when he imitated a celebrity's photo in a painting, he did not have to worry about precise rendition of the visage. Edelman sternly replied, "`Well, in science, it has to be exact, as exact as you can make it.'" (195) Edelman's reply actually has two quite different parts, communicating different schools of philosophy with which he may not agree. To say science must be exact is to take a "God's-eye view": With theory P, science has finally stated the truth about domain D, as God created that domain and God can corroborate whether the scientists have succeeded. To say science must be as exact as you can make it implies that truth is like, say, the asymptote of a curve, and the theory is the curve, approaching the limit, truth, ever more closely the higher the number (the greater the scientific effort and success) that is fed into the function. In this volume, Edelman, consistent with a common strain of current scientific philosophy, allies himself with the second attitude, as can be discerned by tidbits sprinkled through the text, such as: "Our description of the world is qualified by the way in which our concepts arise" (161) --that is, by a mind shaped and limited by the vicissitudes of evolutionary selection. When arguing for his theory, Edelman understandably defends it as if it were absolute truth: It must be, alas, as exact as it can be. But when he comes to technological applications of the TNGS, Edelman shows no due modesty: As has been seen (§ 1.2.3), he has not even defined x, yet he proposes to build bridges, based upon his ghost of a dream of a theory, and have us drive across them.
In the very first paragraph of the book, in the preface, Edelman rattles from stilts how the consciousness revolution in the neurosciences "may be looked upon as a prelude to the largest possible scientific revolution, one with inevitable and important social consequences." (xiii) The two areas of consequence or applications that he later covers are mental disease and robots --or "artefacts," as he euphemizes them, blunting the sharp metallic edge off of Karel Capek's heavily connotative term, and fitting the concept into the apparently more innocuous tradition of de Vaucanson, the 18th-century French contriver of mechanical angels for the Church, who invented a waddling, quacking mechanical duck (Channel, 1991). For mental dysfunction, TNGS has direct bearing. Edelman hypothesizes, concerning schizophrenia,
that a disorder in the production of response to several neurotransmitters could cause a general disabling of communications between reentrant maps. If there is a failure in appropriate mapping or an asynchronicity between maps, imaging may predominate over perceptual input or different modalities may no longer be coordinated. This could lead to hallucinations and failure to coordinate real-world signals. (185)
Both neurological diseases, such as Parkinson's, and psychiatric conditions, such as schizophrenia, are "diseases of consciousness," and "physical causes are sufficient to account for the disturbances." (180) He mentions the work of two psychiatrists, Modell and Hundert, who have already adapted TNGS to their concepts. Edelman is more forthright and confident about the future of robots based on TNGS. First, he states his ambition about seeing such machines created within the humble context of a tool for better understanding our pathologies: "If these efforts are successful, they will play an important part in helping us understand our place in nature, both in health and disease." (187) As his discussion of robots (such as NOMAD) picks up speed, he even hedges, as the properly skeptical scientist, as to how likely it is that we will soon build a machine with higher consciousness --but finally, his passion can hold back no longer and, in the coda to the robot discussion, bursts free: "There is no reason to believe that we will not be able to construct such artefacts someday." (194-195) He waxes braver in his predictions: Since in the last 50 years we have gotten machines to imitate one brain function (mathematical calculations), "There is no reason that we should fail in the attempt to imitate other brain functions within the next decade or so." (196) The robots he describes are not digital computers per se, as he boils away many pages in the text proper and appendices detailing how computational systems or algorithms alone cannot serve as an analogue to, nor generate, the kind of thinking of a conscious brain. Among the many problems is that these systems lack a selectional process. Edelman's robots, then, leap beyond the confines of digitalization by being modeled on selectional process. 
The machine Darwin III employs (computer-) modeled synapses in circuits that are strengthened or weakened according to whether those circuits when active accomplish a certain task and thus are selected for or against. A robot based on such modeling learns, and who says one complex enough cannot attain consciousness?37 Something kicks Edelman from grimaced, exact science to frantic-eyed speculation. What --and why? Closing his discussion of robots, he says eagerly, "The results from computers hooked to NOMADs or noetic devices will, if successful, have enormous practical and social implications." (196) In other words, society, industry, will be able to mine consciousness.
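The selectional mechanism just described --connections strengthened when their activity accomplishes a task, weakened when it does not-- can be sketched in a few lines of code. The following is a deliberately crude toy illustration of weight selection in general, not Edelman's actual Darwin III model; every name, number, and parameter in it is invented for the example:

```python
import random

# Toy "selectional" learner: one circuit of modeled synaptic weights.
# After each trial, the connection that was active during a successful
# trial is strengthened (selected for); one active during a failure is
# weakened (selected against). An invented sketch, not Darwin III itself.

random.seed(0)

N = 8                  # number of modeled synapses (arbitrary)
weights = [0.5] * N    # all connection strengths start equal
target = 3             # the task: the circuit should come to favor synapse 3

def trial(weights):
    # The circuit "acts" by letting the synapse with the largest noisy
    # weight fire; the task succeeds only if that is the target synapse.
    noisy = [w + random.uniform(-0.2, 0.2) for w in weights]
    choice = noisy.index(max(noisy))
    return choice, (choice == target)

for _ in range(200):
    choice, success = trial(weights)
    delta = 0.05 if success else -0.05
    # Only the connection that was active is strengthened or weakened.
    weights[choice] = min(1.0, max(0.0, weights[choice] + delta))

# After repeated selection, the target synapse dominates the circuit.
assert weights.index(max(weights)) == target
```

Run repeatedly, the "selected" connection comes to dominate without any explicit instruction, which is the sense in which such a system can be said to learn rather than to execute a pre-specified algorithm.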
Greenfield, a pharmacologist and Parkinson's researcher, concentrates solely on mental-manipulation applications of her theory --though she never clarifies if she means to keep merely to aiding the mentally distressed or to go further:
If we cannot create consciousness, then we might aim to do the next best thing and manipulate it as precisely as possible, rather as molecular biologists are starting to manipulate DNA... Unlike the actual creation of consciousness or indeed of biological life, this more modest goal may actually be within our eventual grasp. (161)
Chalmers (1996), though approaching consciousness from a philosophic, not overtly scientific, tack, nonetheless dedicates the 47-page Part IV to "Applications," half of which stumps for machine consciousness.38 His support for machine consciousness rests on his argument that the fine-grained functional organization of a conscious system can be abstracted into a CSA (see §188.8.131.52). Because this argument turns upon the question of whether conscious systems can be practically characterized into their fine-grained organization, it must face the same empirical problems, inconsistency, and circularity inherent in his theory of consciousness as functional organization. Chalmers is correct that, once we have in hand all the essential physical data correlating with the consciousness data (via something like the "Chalmers equation" of §184.108.40.206) and we can thus derive the fine-grained functional organization, then we can abstract it into a CSA and then build a machine to realize the same functional organization and consciousness. Perhaps to strengthen his case, he simply does not tell the story of the task of recording the reaction of every particle in the subject person M's body and in the environment, not to speak of the epistemological and empirical quagmire of subject M definitively describing her or his consciousness, all as described in §220.127.116.11. Indeed, consciousness happens; by all reason, it comes from the brain; and the brain is a physical object governed by physical laws, which are computable. There, the situation may be stated in one 25-word sentence. But it is only fair to readers to fill them in on the enormous number of sentences it would take to fulfill the right-hand side of the Chalmers equation --when after all that expense, we could not trust the subject to supply accurately the precise data needed for the left-hand side.39
Indeed, if the chapter's assumptions, resting as they do on the rest of the book, are granted, the chapter propounding strong AI or machine consciousness is the most sound in the book. The only problem is those assumptions and the flaws (discussed in §18.104.22.168) of the preceding text. With this strength in the AI chapter, the entire work is lopsided, as if the whole rest of the work is leaning, grunting, to support this one chapter, which thus has the air of denouement. In fact, the real novelty of the book may be in this chapter: While plenty of works from Descartes onward (including Searle 1992) have argued for not just the reality of phenomenal consciousness but its centrality in any account of mind, Chalmers approaches this (usually) opposite pole from the cognitive-psychology and AI perspective. While the common cognitive-psychology/AI tack is to downplay the role of phenomenal consciousness and "explain it away" through reductive functionalism or epiphenomenalism --steering wide of it, as if the scent of green papaya seemed too noncomputable-- Chalmers drives head-on into qualia and says, in essence, "This is the stuff of consciousness. You cannot avoid it. If your machine does not have this, it is not real AI because it does not have real consciousness." Thus, Chalmers' work strikes new territory by summoning up all powers of philosophical support to contend it is possible to create a machine that has every degree and quality of phenomenal consciousness that our own human consciousness has; in fact, our consciousness is not all-too-human but one mere manifestation or implementation of consciousness via our serendipitous functional organization. Unfortunately, the sandbags of Chalmers' philosophical support have proven to wash away readily, leaving the barren, skinny beacon of ambition for machines.40
Similarly, Dennett's peculiarly weighted theory, which leans heavily toward explaining one aspect of brain and consciousness while completely ignoring other, at least equally significant, parts (see §22.214.171.124), draws attention to the weightedness and to just what is pushing it in that direction. Dennett is much less coy than other consciousness writers as to the source of his influence:
why do I persist in likening human consciousness to software? Because, as I hope to show, some important and otherwise extremely puzzling features of consciousness get illuminating explanations. (219)
Indeed, throughout the text, over and over, he returns to the computer analogy: Consciousness is the software, "largely a product of social evolution that gets imparted to brains early in training" (219) on the hardware of the brain. He works the analogy in the other direction as well: Early computers, "von Neumann machines... were, in fact, giant electronic minds..." (214) Dennett hardly hides the fact that the mind is like a computer, and vice versa; what he never clarifies is just why he so believes.
An analogy or metaphor may be useful in a philosophical text such as Dennett's to help clarify or illustrate for the reader a certain point made or steer the reader's understanding in a certain direction and away from others. But Dennett's computer metaphor goes beyond such mere heuristics to become a bedrock of his argument. For instance, he states:
Since any computing machine at all can be imitated by a virtual machine on a von Neumann machine, it follows that if the brain is a massive parallel processing machine, it too can be perfectly imitated by a von Neumann machine.... And from the very beginning of the computer age, theorists used this chameleonic power of von Neumann machines to create virtual parallel architectures that were supposed to model brainlike structures. (217)
The language remains tenuous and hypothetical --"if the brain," "supposed to model"-- but this tenuousness does not halt Dennett from going on to assure us that the brain is a massive parallel processor. To say the brain and mind are such processors is too large a step to let "if" handle the leaping; the step requires us to have an indisputable definition of mind and of what sort of output it generates from what kind of input in all cases (besides how the brain plays into this activity)41 --then we can assess whether the brain is indeed such a processor. However, we do not have this information; in fact, Dennett's theory is a route to gain such information, though in the meantime, he has made an assumption that such information already has been gathered, thus begging the question.
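The "chameleonic power" Dennett cites in the passage quoted above --a serial von Neumann machine imitating a parallel architecture-- is at least easy to illustrate, whatever one makes of the inference he builds upon it. A minimal sketch, on a toy network invented for the example (not a model of the brain): a strictly serial loop reproduces a synchronous "parallel" update by reading only the old state while writing the new one.

```python
# Serial imitation of a parallel architecture: all units of this toy
# network are meant to update simultaneously, and the serial machine
# achieves the same result by double buffering -- every unit's new value
# is computed from the OLD state, so the order of computation is moot.
# An invented toy example, not a claim about how brains work.

def parallel_step(state, weights):
    """One synchronous update of all units, computed one unit at a time."""
    new_state = []
    for row in weights:
        # Each unit sums its weighted inputs from the old state only.
        total = sum(w * s for w, s in zip(row, state))
        new_state.append(1 if total > 0 else 0)
    return new_state

# Tiny 3-unit ring: unit i excites unit (i+1) mod 3.
weights = [[0, 0, 1],
           [1, 0, 0],
           [0, 1, 0]]
state = [1, 0, 0]
state = parallel_step(state, weights)   # activity passes serially-computed, "in parallel," to unit 1
```

The point of contention in the text, then, is not whether serial machines can simulate parallel ones --they can-- but whether the brain/mind is such a processor in the first place.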
Indeed, the brain/mind as a massively parallel processor would be convenient because it would be just the right machine to generate Dennett's multiple drafts consciousness and do all the things that whatever the thing is Dennett calls a "mind" can do (apparently, not such uninvestigatable, thus nonexistent, activities as "seems" --see §126.96.36.199). Dennett's method of operation goes something like the following: Latch onto a metaphor (brain/mind = computer hardware/software); shuck off any material within the subject that is not amenable to the metaphor (discard the fact that the mind's illusion of how consciousness comes together actually happens, because this material apparently is not investigatable, therefore noncomputable); develop a model (multiple drafts) that fits within the metaphor; then suddenly burst out of the mere metaphor and say that if the brain is like a computer, the computer is like a brain, and voilà, this scenario is reality, not just a metaphor: we can build a machine that does anything a human brain/mind can do. Dennett must create such sizeable gaps in his own theory of mind and make such momentous leaps of faith across the pit of question-begging, in terms of "if" the brain/mind is a massively parallel processor, that he appears more bent on asserting the image of brain/mind as computer than on investigating the nature of his object of study. The question still nags as to why this theorist insists on making such gaps and leaps.
Though Chalmers and Dennett wind their ways through widely varying routes that actually lead them occasionally to clash (Dennett 1996; Chalmers 1997), the two authors arrive at a similar conclusion: Consciousness is information and therefore can be generated by information processing. But Dennett takes a running jump past Chalmers --not settling for building conscious machines and bathing in whatever material benefit may result-- to join hands with astrophysicist Darling (1993): Dennett proposes to can, like strawberry preserves, one's consciousness --which is purportedly so much information ("narrative gravity"?)-- in a computer for indefinite preservation, that is, for immortality:
If what you are is that organization of information that has structured your body's control system... then you could in principle survive the death of your body as intact as a program can survive the destruction of the computer on which it was created and first run. (430)
The idea that the Self --or the Soul-- is really an abstraction strikes many people as simply a negative idea, a denial rather than anything positive. But in fact it has a lot going for it, including --if it matters to you-- a somewhat more robustly conceived version of potential immortality than anything to be found in traditional ideas of a soul, but that will have to wait until chapter 13. (368)
It is curious that a philosopher who has expended his energies debunking a traditional theory --Cartesian dualism-- based upon an indefatigable need to justify the immortal Soul while acknowledging there is a body, a theory he contends is mere myth,43 would turn around, seize the very urge that led to the myth --immortality-- and intend through his own (now dualist) theory to play the part of the Prime Mover of that earlier myth. The imagination, of which Dennett credits his critics as having too little, is left to picture what it would be like for a set of consciousness information to thrive on a hard disk or perhaps on the internet, without that part of the brain that creates the uninvestigatable illusion that the consciousness is whole. If you are spread across the internet, when and if you get tired of it all, will there be a Kevorkian Website you can visit, where a special virus can go into a designated hard drive where you are downloaded and erase you from silicon?44 Whether or not Dennett would defend his immortal computer consciousness as tangential or core to his multiple drafts theory, within a theory chockful of gaps and leaps of faith, this computer immortality plugs the holes as the most poignant and solid image in the book: that homunculus consciousness bottled tightly in the computer jar on the shelf, forever.
1 The figures of speech here are not to imply a dualism is pos- ited. Even the most avid monists in the consciousness discussion speak of the brain and of consciousness, if as two ways of looking at the same thing.
2 At least none of those writers listed in the bibliography have asked this question.
3 Undoubtedly, to make progress in such an unprecedentedly overarching campaign, such massive forces of every kind of talent are necessary: Researchers from every possible perspective need to volley their shot on the story until a leg-drubbing flurry gets flying. Ideally, somewhere in this mad exchange, everyone's energies and muscles will be so stretched and exercised that creative magic --peculiar to humans when their heads start bumping-- will mix its potions and out of the game will pop a solution. Surely the concocters of this open event hoped to see a ferment brew in their arena as happened in the time of Descartes, Galileo, and Newton. In that era, the Classical question of how to account for change in the natural world met with the development of the mechanical clock, which served as a model that, combined with Classical atomism, gave rise to a mechanical philosophy of natural change and eventually Newtonian mechanics (Channel 1991). During the 1500s and 1600s, a violently animated exchange mounted in perhaps the most (if not only) productive period in philosophy as practitioners, such as Gassendi, Hobbes, Boyle, and Leibniz, sent flying their own take on mechanical philosophy; narrowing in on the focus of discourse (motion of extended bodies); defining and redefining terms (motion, matter, inertia); seizing on exemplary phenomena to explain (motion of planets, falling bodies); allowing or disallowing forces (desires, occult powers, inanimate forces); constructing edifices of explanation of the subject phenomena, usually based upon a key principle (Descartes' vortices, Gassendi's corpuscles, Hobbes' hardness and cohesion, Leibniz' monads); and, of course, saying what is wrong with other theories.
4 These quotes appear to color Güzeldere's thought, as he never settles the problem itself of the definition of consciousness.
5 Technically, the drugs initially increase serotonin, eventually getting their effect, because when serotonin receptors then become saturated with serotonin, they grow more sensitive, and serotonin is thus less effective.
6 Greenfield's explanation suffers from a lack of distinction between formation of small gestalts and the small amount of formation of gestalts (and conversely, with large gestalts and the large amount of gestalts). If serotonin facilitates gestalt formation, blocking it would block formation of all sizes of gestalts. Large gestalts are formed by many neurons around a focus or epicenter firing; small gestalts by fewer neurons. By her theory, more serotonin makes more neurons more readily potentiated to fire. A given amount X of serotonin could potentiate M neurons, but why should these M neurons not be in one big cluster to create a large gestalt as opposed to being in several small clusters to make several small gestalts? Greenfield does not say.
7 By Greenfield's theory, apparently, consciousness has only one thing in it at a time, yet what extent this thing can have goes undefined.
8 Greenfield denies an "inflexible or intangible boss" but does not say what she means by this boss: whether it is simply something that assimilates the sense data or a controller of some sort. She does seem to abhor a "Cartesian theater" --a place where all these things take place-- because she cannot commit to a part of the brain where this theater may be (and thalamocortical reverberation apparently is unpromising). But she does not address whether such a theater or awareness could be so nonlocal.
9 Greenfield faces the problem that, if these gestalts are folding so neatly one into the other that they seem to be continual if not seamless, to what are they so seeming? Dennett, with his multiple drafts and his idea that consciousness in general is an illusion, has need for a more amazing feat of contortion: somehow, each of these multiple drafts must have with it a concomitant neuronal configuration that generates the illusion of consciousness; moreover, this configuration must be remarkably similar to one in a distant part of the brain where the next draft appears, so we will not notice there has been a shift of "fooling machinery" firing and so will remain sufficiently fooled; and yet moreover, there must be remarkable infrastructure connecting all these many fooling sites to keep the illusion running smoothly from one site to the next.
10 In other examples, Greenfield also makes some completely outrageous assertions, such as that "children do not dwell on pain," which is "easily banished by some distraction, such as a nearby bird." This observation is readily banished by my own childhood experiences --all of which points up the distinctly individual nature of consciousness, an individuality that cannot be ignored in characterizing it (see §2.4).
11 Just because it is argued here that Greenfield does not make a convincing case for why there is no central consciousness does not mean that I am asserting there is one.
12 These holes in her theory may be at least partly attributable to Chalmers' (1995) hard/easy problem (which is really just a rewording of the old mind/body problem).
13 The question of motivations in current theory is discussed in §1.3.
14 See §2.2 for further discussion of internal states; also, §2.4 distinguishes a fallback on our linguistic intuition --as Edelman and other writers rely upon-- about the lexical item "consciousness" from the type of knowledge necessary for an object to be the object of scientific inquiry.
15 This discussion is not implying that consciousness is a property rather than a state--nor is it implying that prettiness is a property. See §2.4.
16 Such physics is also sometimes called "spooky physics" because it strives to use quantum mechanics at the subcellular level to account for such inexplicables as free will, which seems to parallel the apparently "volitional" behavior of quanta. However, among the many criticisms of the theory (esp. Grush and Churchland) are those holding that such macroscopic quantum effects could only be seen in supercooled conditions, as in superconductors. Hameroff and Penrose (1996) counter that the special structure and water environment inside the microtubule could allow such effects. The debate rants on.
17 By "thought" here, I do not mean a single, fleeting thought, but in the context--addressing Dennett's construal--it must be a steady mental state, such that when we are conscious in some strange way (within Dennett's system, that is) we simultaneously always have this construal that we are experiencing one thing.
18 This innocuous and untrumpeted statement actually plays into a later chapter, when Chalmers makes his startling hypothesis that his nonreductive approach to consciousness theory should not dim the prospect of machine intelligence but instead should help make it possible; yet the statement unwittingly undermines the cogency of that projected application. (See §1.3.1 for the problems of Chalmers' projected application.)
19 Such as Dennett (1991).
20 Chalmers defines supervenient: "B-properties [such as biological] supervene on A-properties [such as physical] if no two possible situations are identical with respect to their A-properties while differing in the B properties." (33)
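Chalmers' definition can be restated schematically; the following is a minimal sketch in my own notation (the situation variables and property functions are my paraphrase, not Chalmers' formalism):

```latex
% Supervenience, paraphrasing Chalmers (1996, 33):
% B-properties supervene on A-properties iff no two possible
% situations agree in their A-properties yet differ in B-properties.
\forall s_1 \,\forall s_2 \;\bigl[\, A(s_1) = A(s_2) \;\rightarrow\; B(s_1) = B(s_2) \,\bigr]
```

On this rendering, logical versus natural supervenience differs only in whether the quantifiers range over all logically possible situations or only the naturally possible ones--the distinction on which the zombie argument turns.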
21 Chalmers' argument for zombies as logically possible is not convincing. He draws the analogy to a mile-high unicycle, which is indeed logically possible, as one cannot offer anything contradictory in joining the two concepts "mile high" and "unicycle." However, there is a patency or transparency in the two concepts that makes it possible to judge the logical content when they are adjoined. Turning to consciousness and whether it is possible to have a zombie that looks and acts just like a conscious being but is not conscious, we are unable to see all the intervening factors--whether consciousness' presence makes a difference in behavior, for one--even to begin to assess whether there is a contradiction, and thus we cannot say whether the zombie is logically possible or not. This objection differs from the one Chalmers (1996) addresses (98) as to whether there is some unimagined neural detail that may make a difference in sophisticated functioning, as that is an empirical objection and has no effect on logical possibility. The present objection is an epistemological question as to whether we can even assess the properties of the subjects to predicate them in the sentence, "If consciousness can produce behaviors X, then a zombie can produce them."
The inverted-spectrum argument says that everywhere S sees blue, I see red; physical structure remains the same while experiences are inverted, and thus consciousness is not logically supervenient on the physical. The problem here is that the inverted spectrum may itself not be logically possible. The argument against such possibility holding that an inverted spectrum suffers the asymmetry of color space--yellow is warm, red is alarming--addresses only psychological states, not logical possibility, so indeed is not cogent. But there is a correlation of color with saturation that may be maintained in all possible worlds--red and yellow have different reflectances and absorptions, so any scheme composed of colors must always maintain the same colors to maintain the same structure. If it is not possible to have a world that is not this way (just as it is not possible to have a world where 1 + 1 = 3, though we can speak it or even think we think it, just as we think we can conceive of a world where red is yellow, though we are not imagining a possible world), then the inverted spectrum is not logically possible.
22 It is hard to imagine how Chalmers may deny that CSA M is a simpler entity and thus salvage his program. The only possibility would be to contend CSA M is neither simpler nor more complex than implemented consciousness M but equally complex (or simple). Say we have two systems, MN, composed of neurons, and MD, composed of dust motes. They can have identical consciousnesses. I test them and derive CSA MND, with which I can construct more consciousnesses with the same thoughts as MN and MD. Are MN and MD of identical complexity with CSA M? CSA M must be simpler because MD, say, is not just the functional organization; it also has the additional complexity of managing its dust-mote connections and so on--it is a physical thing as well as an organizational thing. Arguably, if CSA M were of the same complexity as MN or MD, then MD would be the same thing as MN.
23 This brand of information theory assumes that information simply exists in some object, as if in a Platonic realm, overlooking the fact that it requires humans perceiving or inquiring into that object to give it shape and size, whereas the human brain does not require another object perceiving it to give the information in it shape and size.
24 The scenario establishes only the identity relation x = x, not even an equality relation such as x + y = z. There is absolutely no way to distinguish these two universes, not even by their position among other universes, other than our saying there are two of them, just as there are two x's in the identity relation x = x; so for all purposes, they are the same.
25 This assumes something naturally impossible: that we can measure the universes without changing them. For as we measure them and thus change them, we cannot measure how we would have changed them, so we could never even find out how the physical universes and the functional organizations of the two clones' brains are changing, and thus we cannot correlate their fine-grained organization with behavioral capacity. So such correlation is either a physical absurdity or, as was seen with true physical isomorphs, it is trapped, ineffable, within the meaningless identity relation. Case closed against the possibility of determining fine-grained functional organization. However, to discover yet more absurdity in the notion of determining fine-grainedness, the scenario can be continued as if we could measure without altering the universe.
26 Dennett does not deny conscious phenomena; he just asserts that the unitary thing "consciousness" is an illusion and thus does not exist. He does state at the outset that he will not "feign anesthesia" (40)--that is, deny conscious experience.
27 This is, in fact, the visual phenomenon that makes motion pictures possible.
28 "The discontinuity of consciousness is striking because of the apparent continuity of consciousness." (356) Or, concerning speech acts, "the benign illusion created of there being an author." (365) Where are all these highly complex appearances and illusions happening but in the brain? A brain that can be so capable of such illusions must be terribly interesting--more so than the brain that can merely see and small like any old cat or bird.
29 The adamancy with which Dennett insists on dismissing a significant part of the brain and mind by labeling it "just" draws attention to why he so insists on narrowing himself against his very interest in completely describing brain and mind. See §1.3.2.
30 In one of the most revealing passages of the book, in Chapter 13 on the self, Dennett describes clothes as part of the human phenotype. "An illustrated encyclopedia of zoology should no more picture Homo sapiens naked than it should picture Ursus arctos--the brown bear--wearing a clown suit and riding a bicycle." (416) Dennett thus says that the many people who spend their days in their homes or tribal villages naked are as unnatural as a bear riding a bike in a circus. The implication may be not only that humans in these private activities are generally unseen, uninvestigatable, but that even to appear nude in public is a transgression. This approach thus affirms public existence at the expense of private. Even if a good part of one's weekend or weekday night is spent nude, that reality is somehow lesser or even nonexistent. Yet a further implication is that this hypothetical representation is in the context of science--a zoology text--so we must somehow portray the typical human, who is a social animal and so, in its typical social, public habitat, is invariably seen with clothes on. On the other hand, such an implication reveals its own problem: for science truly to account for this animal, it must account for the complexities of the beast. Does it really have a single typical habitat? It is both a social public creature and a social private creature, whose privacy is protected by rules as strict as those that define public behavior. What one person alone with her spouse or friends does in private is as much a part of human existence as what is done in public. Just because much of what is done in private is uninvestigatable does not make it less real. Dennett's ostrichlike methodology is simply to deny the existence of the apparently uninvestigatable. Such tyranny likely will not hold, if people are told a good part of their existences does not exist. (As to the zoology text and its typical specimens, why not have both a clothed and a naked human?)
31 See also §2.4. Michie creates an impossibility here because if we humans are the ones who attribute consciousness, and if all humans died and only the machines were left, they would not be conscious. (He had built the tacit knowledge into the machine, yet how could he do so unless he could define consciousness?)
32 Possibly researchers would not have to define "consciousness" if there were some evidence they are all talking about the same thing. As the evidence shows the contrary, the community needs not only a definition but some degree of consensus to make headway on one. How then do we dig into what individuals mean when they say "consciousness"? (cf. §2.4.) Are cross-cultural studies extending Shear's (1995) discussion necessary?
33 Considering the history of agriculture's dependence on genetic manipulation and the neck-and-neck competition with such triathletes as Linus Pauling (Marinacci, 1995) for what would most certainly be the Nobel Prize, the image of knowledge pursued for its own reward holds the charm of the gold-studded televangelist who humbly whines that he has dedicated his life purely to the will of God.
34 It may be argued that scientific theory comes about because of expediency and feasibility, not because one feels a theory lies in direction X and so all efforts are placed in that direction. Linnaeus and other taxonomists likely did not know their work would lead to a theory like Darwin's; perhaps they did their work because it in itself satisfied some curiosity about life. That their results and traditions were available made Darwin's efforts, building on theirs, expedient and feasible. Unless researchers develop a Linnaean interest in consciousness, the world may never see its Darwin of consciousness.
35 The Introspectionists did start to accumulate a daunting catalog of types of mentality, though debates about what kinds of experience actually could be experienced finally tore their pursuit apart (Güzeldere 1995a?), leaving a vacuum filled by the sharp and clean behaviorists.
36 For a scientist who, just the page before, chided the trivial artist over the exactitude of science, Edelman has made a cravenly vague prediction. Though he did warn in the preface that his present volume is not science but about science, in the context of his presenting a scientific theory and its consequences, he can only be judged by his own words of striving for exactitude. "Other brain function" can be interpreted with dizzyingly vague breadth, as can "the next decade or so," so whatever happens, the prediction must be right.
37 Of course, supercomputers are required to run Darwin III, and more and more high-powered computers will be necessary the closer one tries to model consciousness. On a technicality, Edelman gets by without calling these robots "computers," though computers are their sine qua non.
38 The other half concerns interpreting quantum mechanics in terms of mind, which is a theoretical, not technological, application.
39 There might be the protest: "Just the general algorithm (CSA) is needed, not every possible equation (1) for every moment for subject M." But how else is the CSA derived but by several observations of the sort that solve equation (1)?
40 Perhaps if he had started from more primitive principles to make a pure inquiry into the nature of consciousness, he might not have had any idea in advance where the inquiry might lead in its infinite potential directions, and might or might not have ended up with his conclusion about machine consciousness.
41 Not to imply brain and mind are separate--these are simply different ways to approach the subject. (Furthermore, "parallel processor" is meant here in more than the trivial, obvious, and hardly useful sense that the brain processes information and many lines of information are being processed at once--as even skeletal muscle and paramecial cell membranes may be said to do; it means that the brain's processing is in a precise way like that of a digital-machine parallel processor.)
42 The software/hardware cliché itself merits closer scrutiny, as does the entire computer metaphor. With a computer, software is readily separated from hardware. Furthermore, one string of software code or one program is readily interchangeable with infinitely many others, while the computer hardware itself sits there mutely, essentially unchanged. You cannot pull the software out of a human brain and insert another program. The human brain and the information in it are one and the same; the configurations of neurons, their alteration by neurotransmitters to different synaptic potentials, their changes in permeability--all these properties are both hardware and information processing at the same time and cannot be teased apart. To dismiss this characteristic of the brain as a mere technicality that does not affect the metaphor is to leave the metaphor a flawed one that cannot apply to the brain as a scientific model and must remain a metaphor--or, worse, to admit it is a false analogy.
43 "... silly or baseless beliefs do tend to be extinguished in the long run in spite of superstition." (453)
44 Of course, by the time this immortality program is developed, computer and information technology will likely have rocketed to the point that the internet will look as quaint as stone axes, so this example must serve as a metaphor for that uninvestigatable future.