Incomplete Nature

Chapter 3

The habit of reflecting on oneself in homuncular terms comes naturally, even though this leaves us caught in a mental hall of mirrors where endless virtual tunnels of self-reflections make it impossible to navigate our way out. There is little reason to expect that evolution would have equipped us with a special set of corrective glasses for dealing with the problems posed by such self-reflection. It is in the realm of social interaction with other creatures like ourselves that we need tools for navigating the challenges created by ententional processes. We don't have to worry very often why it is that we ourselves do the things we do, and we are seldom caught entirely by surprise by our own actions. But this is often the case with others. It is one of the claimed benefits of years of psychoanalysis or meditation that they can provide a modest capacity for intervention in our otherwise unanalyzed habits and predispositions. But social life constantly demands that we guess at, anticipate, and plan for the actions of others. Many students of mental evolution argue that this is a capacity that is only well developed in Homo sapiens. The ability to develop a mental model of another's experiences and intentions is often given the tongue-in-cheek name "mind reading" by behavioral researchers. However, despite our comparatively better evolved capacity, we are still notoriously bad at it. Because it is both difficult and fraught with error, we spend a considerable fraction of our days anticipating others, and from an early age appear predisposed to project the expectation of another mind into inanimate toys.

So one reason we may tend to think in homuncular terms, even in contexts where rationally we know it makes no sense (as in superstitious behaviors, or wondering at the "meaning" of some apropos coincidence), is that it just comes naturally. This psychological habit should not, however, absolve us as philosophers and scientists from the requirement that we think more clearly about these issues. And yet, as we will see in the examples discussed below, avoiding thinking this way, without throwing out the baby with the bathwater, is both remarkably difficult to achieve and deeply counterintuitive. Not only do we homuncularize without thinking, even careful theorists who systematically try to avoid these errors often fall easy prey to only slightly more subtle and cryptic versions of this fallacy. For the same reason that the homunculus fallacy is so seductive, it is also a slippery target. Thinking that we have finally destroyed all vestige of this crafty adversary, we often become complacent and miss its reappearance in more subtle and cryptic forms.

As noted in the last chapter, most considerations of ententional phenomena implicitly treat the critical details of their causal dynamics as though they are hidden in a black box, and worse, invoke the causal influence of explicitly absent entities. Because of this, researchers in the natural sciences have little choice but to make every effort to avoid assigning explanatory roles to ententional processes in their theories. And wherever a field such as cellular biology or cognitive neuroscience encounters issues of information or functional organization, it treats them as heuristic placeholders and endeavors to eventually replace them with explicit physical mechanisms. Indeed, biologists and cognitive neuroscientists treat this as an imperative, and rigorously scour their theories to make sure they are free of any hint of teleology. In philosophical circles, this methodological presumption has come to be known as eliminative materialism because it presumes that all reference to ententional phenomena can and must be eliminated from our scientific theories and replaced by accounts of material mechanisms.

But can this eliminative strategy be carried out exhaustively such that all hint of ententional explanation is replaced by mechanistic explanation? And even if it can, will the result be a complete account of the properties that make life and mind so different from energy and matter? I have little doubt that a universe devoid of ententional phenomena is possible. In fact, I believe that at one point in the distant past, the entire universe was in just such a state. But that is not the universe in which we now live. For this reason, I believe that this eliminative enterprise is forced to sacrifice completeness for consistency, leaving important unfinished business in its wake. We need to explain the ententional nature of our own existence, not explain it away.

HEADS OF THE HYDRA.



Since the Enlightenment, the natural sciences have progressively dealt with the homunculus problem by trying to kill it. The presumption is that it is illegitimate as an explanatory principle, and worse, that accepting it risks readmitting gods, demons, and other such supernatural surrogates and argument-stoppers back into the discourse of science. As the philosopher Daniel Dennett rightly warns, accepting these sorts of explanations incurs a serious explanatory debt.

Any time a theory builder proposes to call any event, state, structure, etc., in any system (say the brain of an organism) a signal or message or command or otherwise endow it with content, he takes out a loan of intelligence. . . . This loan must be repaid eventually by finding and analysing away these readers or comprehenders; for failing this, the theory will have among its elements unanalysed man-analogues endowed with enough intelligence to read the signals, etc.2

Homunculi are stand-ins for incomplete explanations, and since, according to current scientific canon, good explanations should take the form of mechanistic analysis, all references to the teleological properties attributed to homuncular loci need to be replaced by mechanisms. But often, efforts to explain away blatant homunculi lead to their unwitting replacement with many less obvious homunculi. Instead of a little man in the head, there are sensory maps; instead of an élan vital animating our bodies, there are genes containing information, signaling molecules, receptor sites, and so on, to do the teleological work.

This reminds me of the classic tale of Hercules' battle with the Hydra. The Hydra was a monster with many heads. The very effort to cut off one of the Hydra's heads caused two more to grow where the one had been before. In the story, Hercules' helper Iolaus prevented the regrowth of new heads by searing each neck with flame as each head was chopped off, until eventually the one immortal head was also removed and buried under a boulder. The head might rightly be described as the organ of intention: the source of meaning and agency. As in the story, the effort to remove this organ of intention only compounds the challenge, progressively ceding power to the foe one is trying to subdue, because it reappears in other places. In the modern theoretical analogues of this myth, positing a locus of ententional cause merely passes the explanatory buck to an as-yet-to-be-explained process occurring at that locus or elsewhere, that serves this same role-only less obviously. The process is thus implicitly incomplete, and possibly not even able to be completed, requiring a similar effort at some later point in the analysis to deal with these new creations. The effort to deny the special character of ententional processes at that locus, and thus eliminate altogether any reference to teleological phenomena, only serves to displace these functional roles onto other loci in the explanatory apparatus. Instead of one homunculus problem, we end up creating many. And in the end the original homunculus problem remains. Removed from the body of scientific theory but unable to be silenced, it can only be hidden away, not finally eliminated. The analogy is striking, and cautionary.

An explicit homunculus-slaying proposal for dealing with mental causality was articulated by the artificial intelligence pioneer Marvin Minsky. In his book Society of Mind, he argues that although intelligence appears to be a unitary phenomenon, its functional organization can be understood as the combined behavior of vast numbers of very stupid mindless homunculi, by which he ultimately means robots: simple computers running simple algorithms. This is a tried-and-true problem-solving method: break the problem down into smaller and smaller pieces until none appears too daunting. Mind, in this view, is to be understood as made up of innumerable mindless robots, each doing some tiny fraction of a homuncular task. This is also the approach Dan Dennett has in mind. Of course, everything depends on mental processes being a cumulative effect of the interactions of tiny mindless robots. Though the homunculus problem is in this way subdivided and distributed, it is not clear that the reduction of complex intentionality to many tiny intentions has done any more than give the impression that it can be simplified and simplified until it just disappears. But it is not clear where this vanishing point will occur. Though intuitively one can imagine simpler and simpler agents with stupider and stupider intentional capacities, at what point does it stop being intentional and just become mechanism?

So long as each apparently reduced agent must be said to be generating its mindless responses on the basis of information, adaptation, functional organization, and so on, it includes within it explanatory problems every bit as troubling as those posed by the little man in the head-only multiplied and hidden.

What are presumed to be eliminable are folk psychology concepts of such mental phenomena as beliefs, desires, intentions, meanings, and so forth. A number of well-known contemporary philosophers of mind, such as Richard Rorty, Stephen Stich, Paul and Patricia Churchland, and Daniel Dennett, have argued for versions of an eliminative strategy.3 Although they each hold variant interpretations of this view, they all share in common the assumption that these folk psychology concepts will eventually go the way of archaic concepts in the physical sciences like phlogiston, replaced by more precise physical, neurological, or computational accounts. Presumably the teleological framing of these concepts provides only a temporary stand-in for a future psychology that will eventually redescribe these mental attributes in purely neurological terms, without residue. Stronger versions of this perspective go further, however, arguing that these mentalistic concepts are vacuous: fictitious entities like demons and magic spells.

While these eliminative efforts superficially appear as explanatory victories, and briefly create the impression that one can re-describe a sentient process in terms of mechanism, this ultimately ends up creating a more difficult problem than before. In the examples below, we will see how the presumption that ententional properties can be eliminated by just excising all reference to them from accounts of living or mental processes inadvertently reintroduces them in ever more cryptic form.

Fractionation of a complex phenomenon into simpler component features is a common strategy in almost all fields of science. Complicated problems that can be decomposed into a number of simpler problems, which can each be solved independently, are not hard to find. The greatest successes of science are almost all the result of discovering how best to break things down to manageable chunks for which our tools of analysis are adequate. This is because most physical phenomena we encounter are componential at many levels of scale: stones composed of microscopic crystal grains, crystal grains composed of regular atomic "unit cells," unit cells composed of atoms, atoms composed of particles, and some of these composed of yet smaller particles. Wherever we look, we discover this sort of compositional hierarchy of scale. However, this does not mean that it is universal, or that the part/whole relationship is as simple as it appears. Mathematicians recognize that some problems do not allow solution by operating independently on separate parts, and computational problems that do not decompose into chunks that can be computed independently are not aided by the power of parallel processing. So, although it may be nearly tautological to claim that all complex phenomena have parts and can be described in terms of these parts, it is not necessarily the case that the same complex whole can be understood one part at a time. What appear to be "proper parts" from the point of view of description may not have properties that can be described without reference to other features of the whole they compose.

For this reason I prefer to distinguish between reducible systems and decomposable systems. Reduction only depends on the ability to identify graininess in complex phenomena and the capacity to study the properties of these subdivisions as distinct from the collective phenomenon that they compose. Decomposition additionally requires that the subdivisions in question exhibit the properties that they exhibit in the whole, even if entirely isolated and independent of it. For example, a clock is both reducible and decomposable to its parts, whereas a living organism may be analytically reducible, but it is not decomposable. The various "parts" of an organism require one another because they are generated reciprocally in the whole functioning organism. Thus, a decomposable system is by definition reducible, but a reducible system may not be decomposable. Just because something is complicated and constituted by distinguishable subdivisions doesn't mean that these subdivisions provide sufficient information about how it functions, or how it is formed, or why as a complex whole it exhibits certain distinctive properties.

The question before us is whether ententional phenomena are merely reducible or are also decomposable. I contend that while ententional phenomena are dependent on physical substrate relationships, they are not decomposable to them, only to lower-order ententional phenomena. This is because although ententional phenomena are necessarily physical, their proper parts are not physical parts.

At this point, the distinction may sound like an overly subtle and cryptic semantic quibble, but without going into detail, we can at least gain some idea of why it might lead to the Hydra problem. If complex ententional phenomena are reducible but not decomposable into merely physical processes, it is because components are in some sense "infected" with properties that arise extrinsic to their physical properties (e.g., in the organization of the larger complex context from which they are being analytically isolated). So, analyzed in isolation, the locus of these properties is ignored, while their expression is nevertheless recognized. Since this is the case for each analyzed part, what was once a unitary ententional feature of the whole system is now treated as innumerable, even more cryptic micro-ententional phenomena.

Consider, for example, the complex DNA molecules that, by reductionistic analysis, we recognize as components of an organism. Each nucleotide sequence that codes for a protein on a DNA molecule has features that can be understood to function as an adaptation that evolved in response to certain demands posed by the conditions of that species' existence. But to attribute to these sequences such properties as being adaptive, or serving a function, or storing information, is to borrow a property from the whole and attribute it to the part. These properties only exist for the nucleotide sequence in question in a particular systemic context, and may even change if that context changes over the course of a lifetime or across generations. What may be functional at one point may become dysfunctional at another point, and vice versa.

As we will see below, whether we ascribe cryptically simplified ententional properties to computer algorithms or biological molecules, if we treat these properties as though they are intrinsic physical properties, we only compound the mystery. In the effort to dissolve the problem by fractionation, we end up multiplying the mysteries we have to solve.

DEAD TRUTH.

Jewish folklore of the late Middle Ages tells of a creature called a golem. A golem is formed from clay to look like a man and is animated by a powerful rabbi using magical incantations. Whereas the Almighty Jehovah had the capacity to both form a man from clay and also imbue him with a soul, the mystic could only animate his figure, leaving it soulless. Like a sophisticated robot of contemporary science fiction, the golem could behave in ways similar to a person, but unlike a normal person there would be no one home. The golem would perceive without feeling, interact without understanding, and act without discernment. It is just an animated clay statue following the explicit commands of its creator, like a robot just running programs.

If we take the homunculus as an avatar of cryptic ententional properties smuggled into our theories, we can take the golem as the avatar of its opposite: apparently mindlike processes that are nonetheless devoid of their own ententional properties. If a homunculus is a little man in my head, then the golem is a hollow-headed man, a zombie.

Zombies are closely related mythical creatures that have recently been invoked in the debates about whether mental phenomena are real or not. The popular concept of a zombie traces its origins to voodoo mythology, in which people are "killed" and then reanimated, but without a mind or soul, and thus enslaved to their voodoo master. They are, to use a somewhat enigmatic word, "undead." A zombie in the philosophical sense is in every physical respect just like a person-able to walk, talk, drive an automobile in traffic, and give reasonable answers to complicated questions about life-but completely lacking any subjective experience associated with these behaviors. The plausibility of zombies of this sort, truly indistinguishable from a normal person in every other respect but this, is often proposed as a reductio ad absurdum implication of a thoroughly eliminative view. If subjective mental phenomena, such as the sense of personal agency, can be entirely explained by the physics and neurochemistry of brain processes, then the conscious aspect of this is playing no additional explanatory role. It shouldn't matter whether it is present or not.

Stepping back from these extremes, we can recognize familiar examples of what might be termed near zombiehood. We often discover that minutes have passed while we engaged in a complicated skill, like driving a car, and yet have no recollection of making the decisions involved or being alerted by changes of scenery or details of the roadway. It's like being on autopilot. In fact, probably the vast majority of brain processes contributing to our moment-to-moment behavior and subjective experience are never associated with consciousness. In this respect, we at least have a personal sense of our own partial zombie nature (which probably makes the concept intuitively conceivable). And many of these behaviors involve beliefs, desires, and purposes.

The real problem posed by golems and zombies is that what appears superficially to be intrinsic purposiveness in these entities is actually dead-cold mechanism. In the classic medieval golem story, a golem was animated to protect the persecuted Jewish population of Prague, but in the process it ended up producing more harm than good because of its relentless, mindless, insensate behavior. According to one version of this story, the power of movement was "breathed" into the clay figure by inscribing the Hebrew word for "truth," emet, on its forehead.4 Thus animated, the golem was able to autonomously follow the commands of its creator. But precisely because the golem carried out its creator's missions relentlessly and exactly, this very literalness led to unanticipated calamity. When it became clear that the golem's behavior could not be channeled only for good, it had to be stopped. To do so, one of the letters had to be erased from its forehead, leaving the word for death, met. With this the golem would again become an inanimate lump of clay.

This myth exemplifies one of many such stories about the inevitable ruin that comes of man trying to emulate a god, with Mary Shelley's story of Dr. Frankenstein's monster as the modern prototype. It is no surprise that variants on the golem and Frankenstein themes have become the source of numerous contemporary morality tales. Contemporary science fiction writers have found this theme of scientific hubris to be a rich source of compelling narratives. The mystic's assumption that the golem's behavior would be fully controllable and the scientists' assumption that the processes of life and mind can be understood like clockwork have much in common. Like homunculi, golems can be seen as symbolic of a much more general phenomenon.

The golem myth holds a subtler implication embodied in its truth/death pun. Besides being a soulless being, following commands with mechanical dispassion, the golem lacks discernment. It is this that ultimately leads to ruin, not any malevolence on either the golem's or its creator's part. Truth is heartless and mechanical, and by itself it cannot be trusted to lead only to the good. The "truth" that can be stated is also finite and fixed, whereas the world is infinite and changeable. So, charged with carrying out the implications that follow from a given command, the golem quickly becomes further and further out of step with its context.

Golems can thus be seen as the very real consequence of investing relentless logic with animate power. The true golems of today are not artificial living beings, but rather bureaucracies, legal systems, and computers. In their design as well as their role as unerringly literal slaves, digital computers are the epitome of a creation that embodies truth maintenance made animate. Like the golems of mythology, they are selfless servants, but they are also mindless. Because of this, they share the golem's lack of discernment and potential for disaster.

Computers are logic embodied in mechanism. The development of logic was the result of reflection on the organization of human reasoning processes. It is not itself thought, however, nor does it capture the essence of thought, namely, its quality of being about something. Logic is only the skeleton of thought: syntax without semantics. Like a living skeleton, this supportive framework develops as an integral part of a whole organism and is neither present before life, nor of use without life.

The golem nature of logic comes from its fixity and closure. Logic is ultimately a structure out of time. It works to assure valid inference because there are no choices or alternatives. So the very fabric of valid deductive inference is by necessity preformed. Consider the nature of a deductive argument, like that embodied in the classic syllogism:

1. All men are mortal.

2. Socrates is a man.

Therefore.

3. Socrates is mortal.

Obviously, 3 is already contained in 1 and 2. It is implied (folded in) by them, though not explicitly present in those exact words. The redundancy between the mention of "men" in 1 and "man" in 2 requires that if 1 and 2 are true, then 3 must follow necessarily. Another way one could have said this is that (1) the collection of all men is contained within the collection of all mortals; and (2) Socrates is contained within the collection of all men; so inevitably, (3) Socrates is also contained within the collection of all mortals. Put this way, it can be seen that logic can also be conceived as a necessary attribute of the notion of containment, whether in physical space or in the abstract space of categories and classes of things. Containment is one of the most basic of spatial concepts. So there is good reason to expect that the physical world should work this way, too.
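To make this containment reading concrete, here is a minimal sketch in Python; the particular sets and their members are invented stand-ins for the classes in the syllogism, not anything drawn from the text above.

```python
# The syllogism read as containment of classes (modeled here as sets).
mortals = {"Socrates", "Plato", "Achilles", "Fido"}   # illustrative members only
men     = {"Socrates", "Plato", "Achilles"}

# Premise 1: the class of men is contained within the class of mortals.
assert men <= mortals            # "All men are mortal"

# Premise 2: Socrates is contained within the class of men.
assert "Socrates" in men         # "Socrates is a man"

# Conclusion 3 adds nothing new: whatever is in `men` is already in `mortals`.
assert "Socrates" in mortals     # "Socrates is mortal"
```

The conclusion is fixed entirely by the two containment relations; in this sense the inference is already "preformed" in its premises.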

It should not surprise us, then, that logic and mathematics are powerful tools for modeling natural processes, and that they should even provide prescient anticipations of physical mechanisms. Deductive inference allows only one and always the same one consequence from the same antecedent. The mechanical world appears to share the same non-divergent connectivity of events, hence its predictability. Mathematics is thus a symbol manipulation strategy that is governed by the same limitations as physical causality, at least in Newtonian terms. Mechanical processes can for this reason be constructed to precisely parallel logico-mathematical inferences, and vice versa. But mathematical symbolization is finite and each side of an equation specifying a possible transformation is complete and limited.

Machines, such as we humans construct or imagine with our modeling tools, are also in some sense physical abstractions. We build machines to be largely unaffected by all variety of micro perturbations, allowing us to use them as though they are fully determined and predictable with respect to certain selected outcomes. In this sense, we construct them so that we can mostly ignore such influences as expansions and contractions due to heat or the wearing effects of friction, although these can eventually pose a problem if not regularly attended to. That machines are idealizations becomes obvious precisely when these sorts of perturbing effects become apparent-at which point we say that the machine is no longer working.

This logico-mathematical-machine equivalence was formalized in reverse when Alan Turing showed how, in principle, every valid mathematical operation that could be precisely defined and carried out in a finite number of steps could also be modeled by the actions of a machine. This is the essence of computing. A "Turing machine" is an abstraction that effectively models the manipulations of symbols in an inferential process (such as solving a mathematical equation) as the actions of a machine. Turing's "universal machine" included a physical recording medium (he imagined a paper tape); a physical reading-writing device so that symbol marks could be written on, read from, or erased from the medium; and a mechanism that could control where on the medium this action would next take place (e.g., by moving the paper tape). The critical constraint on this principle is complete specification. Turing recognized that there were a variety of problems that could not be computed by his universal machine approach. Besides anything that cannot be completely specified initially, there are many kinds of problems for which completion of the computation cannot be determined. Both exemplify limits to idealization.
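As a rough sketch of the sort of device Turing described, the fragment below implements a toy machine of this kind in Python. The rule table is an invented example (it merely appends a 1 to a run of 1s, "incrementing" a unary number); it is meant only to show the tape, the read/write head, and the finite control working together, not to reproduce Turing's own construction.

```python
# A toy Turing machine: a tape, a read/write head, and a finite table of rules.
def run_turing_machine(tape, rules, state="scan", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        if head == len(tape):
            tape.append(blank)                    # extend the tape on demand
        write, move, state = rules[(state, symbol)]
        tape[head] = write                        # write (or rewrite) a symbol
        head += 1 if move == "R" else -1          # move the head one cell
    return "".join(tape)

# Invented rule table: skip over the 1s, write one more 1 at the first blank, halt.
rules = {
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("111", rules))           # prints "1111"
```

Everything the machine will ever do is fixed in advance by this finite table; nothing in the device itself interprets the marks as numbers.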

Consider, however, that to the extent that we map physical processes onto logic, mathematics, and machine operation, the world is being modeled as though it is preformed, with every outcome implied in the initial state. But as we just noted, even Turing recognized that this mapping between computing and the world was not symmetrical. Gregory Bateson explains this well:

In a computer, which works by cause and effect, with one transistor triggering another, the sequences of cause and effect are used to simulate logic. Thirty years ago, we used to ask: Can a computer simulate all the processes of logic? The answer was "yes," but the question was surely wrong. We should have asked: Can logic simulate all sequences of cause and effect? The answer would have been: "no."5

When extrapolated to the physical world in general, this abstract parallelism has some unsettling implications. It suggests notions of predestination and fate: the vision of a timeless, crystalline, four-dimensional world that includes no surprises. This figures into problems of explaining intentional relationships such as purposiveness, aboutness, and consciousness, because as theologians and philosophers have pointed out for centuries, it denies all spontaneity, all agency, all creativity, and makes every event a passive necessity already prefigured in prior conditions. It leads inexorably to a sort of universal preformationism. Paradoxically, this ultimate homunculus move eliminates all homunculi, and in the process provides no assistance in understanding our own homuncular character. But we should be wary of mistaking the map for the territory here. Logical syntax is constituted by the necessities that follow when meanings are assumed to be discrete, fixed, and unambiguous. A mechanism is a similar abstraction. Certain properties of things must be held constant so that their combinations and interactions can be entirely predictable and consistent. In both cases, we must pretend that the world exhibits precision and finiteness, by ignoring certain real-world details.

Curiously, even assuming this sort of total ideal separability of the syntax from the semantics of logic, complete predictability is not guaranteed. Kurt Gödel's famous 1931 incompleteness proof is widely recognized as demonstrating that we inevitably must accept either incompleteness or inconsistency in such an idealization. So long as the syntactic system is as powerful as elementary algebra and allows mapping of expressions to values, it must always admit to this loophole. The significance of this limitation for both computation and mental processes has been extensively explored, but the deliberations remain inconclusive. In any case, it warns that such an idealization is not without its problems. A complete and consistent golem is, for this reason, unobtainable.

To simplify a bit, the problem lies in the very assumption that syntax and semantics, logic and representation, are independent of one another. A golem is syntax without semantics and logic without representation. There is no one at home in the golem because there is no representation possible-no meaning, no significance, no value, just physical mechanism, one thing after another with terrible inflexible consistency. This is the whole point. The real question for us is whether golems are the only game in town that doesn't smuggle in little man-analogues to do the work of cognition. If we eliminate all the homunculi, are we only left with golems?

As we"ve seen, golems are idealizations. Formal logic already a.s.sumes that the variables of its expressions are potential representations. It only brackets these from consideration to explore the purely relational constraints that must follow. We might suspect, then, that whenever we encounter a golem, there is a hidden homunculus, a man behind the curtain, or a rabbi and his magical incantations pulling the nearly invisible strings.

THE GHOST IN THE COMPUTER.

Behaviorism was conceived of as a remedy to the tacit acceptance of homuncular faculties standing in for psychological explanations. To say that a desire, wish, idea, scheme, and so on, is the cause of some behavior merely redirects attention to a placeholder, which-although familiar to introspection and folk psychology-is an unanalyzed black box insofar as its causal mechanism is concerned. B. F. Skinner and his colleagues believed that a scientific psychology should replace these homuncular notions with observable facts, such as the stimuli presented and the behaviors emitted by organisms. The result would be a natural science of behavior that was solidly grounded on entirely unambiguous empirical matters of fact.

Unfortunately, in hindsight, the behaviorist remedy was almost deadly for psychology. In an effort to avoid these sorts of circular explanations, behaviorism ignored the role of thoughts and experiences, treating them as taboo subjects, thus effectively pretending that the most notable aspects of having a working brain were mere metaphysical fantasies. Behavioral researchers also had to ignore the "behavior" going on inside of brains, because of the technical limitations of the period. In recent decades, however, though classic behaviorism has faded in influence, neuroscientists have married behaviorism's methodology with precise measurements of neurological "behavior," such as are obtained by electrode recordings of neuronal activity or in vivo imaging of the metabolic correlates of brain activity. But even with this extension inwards, this logic still treats the contents of thoughts or experiences as though they play no part in what goes on. There is only behavior-of whole organisms and their parts.

With its minimalist dogma, behaviorism was one of the first serious efforts to explicitly design a methodology for systematically avoiding the homunculus fallacy in psychology and thus to base the field firmly in physical evidence alone. But perhaps what it illustrated most pointedly was that the homunculus problem won't go away by ignoring it. Although remarkable insights about brain function and sensory-motor control have come from the more subtle and neurologically based uses of the behaviorist logic, it has ultimately only further postponed the problem of explaining the relationship between mental experience and brain function.

During the 1960s, the dominance of behaviorism faded. There were a few identifiable battles that marked this turning point,6 but mostly it was the austerity and absurdity of discounting the hidden realm of cognition that was the downfall of behaviorism. Still, the allergy to homunculi that motivated behaviorism was not abandoned in this transition. Probably one of the keys to the success of the new cognitive sciences, which grew up during the 1970s, was that they found a way to incorporate an empirical study of this hidden dimension of psychology while still appearing to avoid the dreaded homunculus fallacy. The solution was to conceive of mental processes on the analogy of computations, or algorithms.

Slowly, in the decades that followed, computing became a more and more commonplace fact of life. Today, even some cars, GPS systems, and kitchen devices have been given synthesized voices to speak to us about their state of operation. And whenever I attempt to place a phone call to some company to troubleshoot an appliance gone wrong or argue over a computer-generated bill for something I didn't purchase, I mostly begin by answering questions posed by computer software in a soothing but not quite human voice. None of these are, of course, anything more than automated electronic-switching devices. With this new metaphor of mind, it seemed like the homunculi that Skinner was trying to avoid had finally been exorcised. The problem of identifying which of the circuits or suborgans of the brain might constitute the "self" was widely acknowledged to be a non-question. Most people, both laymen and scientists, could imagine the brain to be a computer of vast proportions, exquisite in precision of design, and run by control programs of near-foolproof perfection. In this computer there was no place for a little man, and nothing to take his place. Finally, it seemed, we could dispense with this crutch to imagination.

In the 1980s and 1990s, this metaphor was sharpened and extended. Despite the consciously acknowledged vast differences between brain and computer architectures, the computer analogy became the common working assumption of the new field. Many university psychology departments found themselves subsets of larger cognitive science programs in which computer scientists and philosophers were also included. Taking this analogy seriously, cognition often came to be treated as software running on the brain's hardware. There were input systems, output systems, and vast stores of evolved databases and acquired algorithms to link them up and keep our bodies alive long enough to do a decent job of reproducing. No one and nothing outside of this system of embodied algorithms needed to watch over it, to initiate changes in its operation, or to register that anything at all is happening. Computing just happens.

In many ways, however, this was a move out of the behaviorist frying pan and into the computational fire. Like behaviorism before it, the strict adherence to a mechanistic analogy that was required to avoid blatant homuncular assumptions came at the cost of leaving no space for explaining the experience of consciousness or the sense of mental agency, and even collapsed notions of representation and meaning to something like physical pattern. So, like a secret reincarnation of behaviorism, cognitive scientists found themselves seriously discussing the likelihood that such mental experiences do not actually contribute any explanatory power beyond the immediate material activities of neurons. What additional function needs to be provided if an algorithm can be postulated to explain any behavior? Indeed, why should consciousness ever have evolved?

Though it seemed like a radical shift from behaviorism with its exclusively external focus, to cognitive science with its many approaches to internal mental activities, there is a deeper kinship between these views. Both attempt to dispose of mental homunculi and replace them with physical correspondence relationships between physical phenomena. But where the behaviorists assumed that it would be possible to discover all relevant rules of psychology in the relationships of input states to output responses, the computationalists took this same logic inside.

A computation, as defined in computer science, is a description of a regularly patterned machine operation. We call a specific machine process a computation if, for example, we can assign a set of interpretations to its states and operations such that they are in correspondence with the sequence of steps one might go through in performing some action such as calculating a sum, organizing files, recognizing type characters, or opening a door when someone walks toward it. In this respect, the workings of my desktop computer, or any other computing machine, are just the electronic equivalents of levers, pendulums, springs, and gears, interacting to enable changes in one part of the mechanism to produce changes in another part. Their implementation by electronic circuitry is not essential. This only provides the convenience of easy configurability, compact size, and lightning speed of operation. The machines that we use as computers today can be described as "universal machines" because they are built to be able to be configured into a nearly infinite number of possible arrangements of causes and effects. Any specific configuration can be provided by software instructions, which are essentially a list of switch settings necessary to determine the sequence of operations that the machine will be made to realize.

FIGURE 3.1: The relationship of computational logic to cognition. Computation is an idealization made possible when certain forms of inference can be represented by a systematic set of operations for writing, arranging, replacing, and erasing a set of physical markers (e.g., alphanumeric characters) because it is then possible to arrange a specific set of mechanical manipulations (e.g., patterns of moving beads on an abacus or of electrical potentials within a computer circuit) that can substitute for this symbol manipulation. It is an idealization because there is no such simple codelike mapping between typographical symbolic operations and thought processes, or between mental concepts and neurological events.

Software is more than an automated process for setting switches only insofar as the arrangement of physical operations it describes also corresponds to a physical or mental operation performed according to some meaningful or pragmatic logic organized to achieve some specified type of end. So, for example, the description of the steps involved in solving an equation using paper and pencil has been formalized to ensure that these manipulations of characters will lead to reliable results when the characters are mapped back to actual numerical quantities. Since these physical operations can be precisely and unambiguously described, it follows that any way that a comparable manipulation can be accomplished will lead to equivalent results. The realization that such correspondence relationships could be specified between mechanical operations and meaningful manipulations of symbols was the insight that gave rise to the computer age that we now enjoy. An action that a person might perform to achieve a given end could be performed by a machine organized so that its physical operations match one-to-one to these human actions. By implication it seemed that if the same movements, substitutions, and rearrangements of physical tokens could be specified by virtue of either meaningful principles or purely physical principles, then for any mental operation one should be able to devise a corresponding mechanical operation. The mystery of how an idea could have a determinate physical consequence seemed solved. If teleologically defined operations can be embodied in merely physical operations, then can't we just dispense with the teleology and focus on the physicality?

THE BALLAD OF DEEP BLUE.

The classic version of this argument is called the computer theory of mind, and in some form or other it has been around since the dawn of the computer age. Essentially it can be summarized by the claim that the manipulation of the tokens of thoughts-the neural equivalents of character strings, or patterns of electric potential in a computer-completely describes the process of thinking. The input from sensors involves replacing physical interactions with certain of these "bits" of information; thinking about them involves rearranging them and replacing them with others; and acting with respect to them involves transducing the resultant neural activity pattern into patterns of muscle movement. How these various translations of pattern-to-pattern take place need not require anyone acting as controller, only a set of inherited or acquired algorithms and circuit structures.

But is computation fully described by this physical process alone, or is there something more required to distinguish computation from the physical shuffling of neural molecules or voltage potentials?

A weak link in this chain of assumptions is lurking within the concept of an algorithm. An algorithm is effectively a precise and complete set of instructions for generating some process and achieving a specific consequence. Instructions are descriptions of what must occur, but descriptions are not physical processes in themselves. So an algorithm occupies a sort of middle position. It is neither a physical operation nor a representation of meanings and purposes. Nor is an algorithm some extra aspect of the mechanism. It is, rather, a mapping relationship between something mechanical and something meaningful. Each interpretable character or character string in the programming language corresponds to some physical operation of the machine, and learning to use this code means learning how a given machine operation will map to a corresponding manipulation of tokens that can be assigned a meaningful interpretation. Over the many decades that computation has been developing, computer scientists have developed ever more sophisticated ways of devising such mappings. And so long as an appropriate mapping is created for each differently organized machine, the same abstract algorithm can be implemented on quite different physical mechanisms and produce descriptively equivalent results. This is because the level of the description comprising the algorithm depends only on a mapping to certain superficial macroscopic properties of the machine. These have been organized so that we can ignore most physical details except for those that correspond to certain symbol manipulations. Additional steps of translation (algorithms for interpreting algorithms) allow an algorithm or its data to be encoded in diverse physical forms (e.g., patterns of optically reflective pits on a disk or magnetically oriented particles on a tape), enabling them to be stored and transferred independent of any particular mechanism. Of course, this interpretive equivalence depends on guaranteeing that this correspondence relationship gets reestablished every time it is mapped to a specific mechanism (which poses a problem as computer technology changes).
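A small sketch may help make this machine-independence concrete; the details below are invented for illustration. The same abstract operation, adding two numbers, is carried out by two very different "mechanisms," and it is only the interpretive mapping back to numbers that makes their results comparable at all.

```python
# One abstract algorithm (addition), two unlike "machines."
def add_as_integers(a, b):
    return a + b                          # implementation 1: hardware arithmetic

def add_as_tallies(a, b):
    return "|" * a + "|" * b              # implementation 2: concatenating tally marks

def interpret_tallies(marks):
    return len(marks)                     # the mapping from marks back to numbers

for a, b in [(2, 3), (7, 0), (10, 14)]:
    assert add_as_integers(a, b) == interpret_tallies(add_as_tallies(a, b))
print("Descriptively equivalent results, under their respective mappings.")
```

Nothing about concatenating marks resembles the adder circuitry in a processor; the equivalence exists only at the level of description that the mappings establish.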

In this respect, algorithms-or more colloquially, software-share a useful attribute with all forms of description: they don't specify all the causal details (e.g., all the way down to electrons and atomic nuclei). This is not even provided by the translation process.7 This ability to ignore many micro causal details is possible because a well-designed computing device strictly limits subtle variations of its states, so that it tends to assume only unambiguously distinguishable discrete states. Many different physical implementations of the same computing process are thus possible. It only needs to be sufficiently constrained so that its replicable macro states and its possible state transitions match the descriptive level of detail required by the algorithm.

Like words printed in ink on a page (which is often one way to encode and store software for access by humans), any particular embodiment of this code is just a physical pattern. Its potential to organize the operations of a specially designed machine is what leads us to consider it to be something more than this. But this "something more" is not intrinsic to the software, nor to the machine, nor to the possibility of mapping software to machine operations. It is something more only because we recognize this potential. In the same way that the printed text of a novel is just paper and ink without a human mind to interpret it, the software is just pattern except for being interpretable by an appropriate machine for an appropriate user. The question left unanswered is whether the existence of some determinate correspondence between this pattern and some pattern of machine dynamics constitutes an interpretation in the sense that a human reader provides. Is the reader of a novel herself also merely using the pattern of ink marks on the page to specify certain "machine operations" of her brain? Is the implementation of that correspondence sufficient to constitute the meaning of the text? Or is there something more than these physical operations? Something that they are both about?

One of the signal events of the last decade of the twentieth century with regard to intelligence and computation was the defeat of the world chess champion, Garry Kasparov, by a computer chess program: Deep Blue. In many ways this is the modern counterpart to one of the great American folk tales: the ballad of John Henry.

John Henry was, so the ballad says, a "steel-drivin' man," the epitome of a nearly superhuman rail worker, whose job was to swing a massive hammer used to drive steel spikes into the ties (the larger wooden cross members) that held the tracks in place. He was a massive man, who was by reputation the most powerful and efficient of all steelmen. The introduction of a steam-driven spike driver in the mid-nineteenth century threatened to make this job irrelevant.8 The tale centers on a contest between John Henry and the machine. In the end, John Henry succeeds at keeping up with the machine, but at the cost of his own life.

In the modern counterpart, Kasparov could usually play Deep Blue to a draw, but for the man this was exhausting while the machine felt nothing. In the end of both contests, the machines essentially outlasted the men that were pitted against them. Deep Blue's victory over Kasparov in 1997 signaled the end of an era. At least for the game of chess, machine intelligence was able to overcome the best that biology had to offer. But in the midst of the celebrations by computer scientists and the laments of commentators marking the supremacy of silicon intelligence, there were a few who weren't quite so sure this was what they had just witnessed. Was the world chess master playing against an intelligent machine, or was he playing against hundreds of chess-savvy programmers? Moreover, these many dozens of chess programmers could build in nearly unlimited libraries of past games, and could take advantage of the lightning-fast capacities of the computer to trace the patterns of the innumerable possible future moves that each player might make following a given move. In this way the computer could compare vastly many more alternative consequences of each possible move than could even the most brilliant chess master. So, unlike John Henry's steel competitor that matched force against force, with steel and steam against muscle and bone, Deep Blue's victory was more like one man against an army in which the army also had vastly more time and library resources at its disposal. Garry Kasparov was not, in this sense, playing against a machine, but against a machine in which dozens of homunculi were cleverly smuggled in, by proxy.

Like the machinery employed by the Wizard of Oz to dazzle his unwitting subjects, today's computers are conduits through which people (programmers) express themselves. Software is a surrogate model of what some anthropomorphic gremlin might do to move, store, and transform symbols. Ultimately, then, software functions are human intentions to configure a machine to accomplish some specified task. What does this mean for the computer model of cognition? Well, to the extent that a significant part of cognition is merely manipulating signals according to strict instructions, then the analogy is a good one. As Irving J. Good explained at the dawn of the computer age: "The parts of thinking that we have analyzed completely could be done on the computer. The division would correspond roughly to the division between the conscious and unconscious minds."9 But what parts of thinking can be analyzed completely? Basically, only that small part that involves thoroughly regular, habitual, and thus mechanical thought processes, such as fully memorized bits of addition or multiplication, or pat phrases that we use unconsciously or that are required by bureaucratic expediency. Thus, on this interpretation of Good's assessment, computation is like thoroughly unconscious mental processes. In those kinds of activities, no one is at the controls. Throw the right switches and the program just runs like a ball rolling downhill. But who throws the switches? Usually some homunculus: either a conscious organism like you or me, or someone now out of the picture who connected sensors or other kinds of transducers to these switches such that selected conditions could stand in for flipping the switches directly.

In all these examples, the apparent agency of the computer is effectively just the displaced agency of some human designer, and the representational function of the software is effectively just a predesigned correspondence between marks of some kind and a machine that a human programmer has envisioned. Is there any sense in which computers doing these kinds of operations could be said to have their own agency, purpose, meaning, or perhaps even experience, irrespective of or despite these human origins and reference points? If not, and yet we are willing to accept the computer metaphor as an adequate model for a mind, then we would be hard-pressed to be able to attribute agency to these designers and programmers either, except by reference to some yet further outside interpreter. At the end of this line we either find a grand black box, an ultimate homunculus, infinite regress, or simply nothing-no one home all the way up and down.

Many critics of the computer theory of mind argue that ultimately the algorithms-and by implication the cognitive level of description with its reference to intentions, agency, and conscious experience-provide no more than a descriptive gloss of a mapping from operation to operation. What matters is the correspondence to something that the algorithm and the computer operations are taken to refer to by human users. If we presume that what is being represented is some actual physical state of affairs in the world, then what we have described is ultimately an encoded parallelism between two physical processes that its users can recognize, but this human-mediated correspondence is otherwise not intrinsic to the computer operation and what it is taken to refer to. This problem is inherited by the computational theory of mind, rendering even this human mediation step teleologically impotent. A computer transducing the inputs and outputs from another computer is effectively just one larger computer.

But notice that there may be innumerable mappings possible between some level of description of a machine state and some level of description of any other physical state. There is nothing to exclude the same computer operation from being assigned vastly many mappings to other kinds of meaningful activities, limited only by the detail required and the sophistication of the mapping process. This multiple interpretability is an important property from the point of view of computer simulation research. Yet, if innumerable alternative correspondence relationships are possible, are we forced to conclude that the same physically implemented operation is more than one form of computation? Is this operation only a computation with respect to one of these particular mappings? Does this mean that given any precisely determined mechanical process, we are justified in calling it a potential computation, on the assumption that some interpretable symbolic operation could ultimately be devised to map onto it?
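As a small, hypothetical illustration of this multiple interpretability, consider one and the same stored bit pattern read under three different interpretive conventions; the pattern and the mappings below are invented for the example.

```python
# One physical state (an 8-bit pattern), three unrelated interpretations.
bits = 0b01000001

as_unsigned = bits                                       # mapping 1: an unsigned integer
as_char     = chr(bits)                                  # mapping 2: an ASCII character
as_flags    = [bool((bits >> i) & 1) for i in range(8)]  # mapping 3: eight on/off switches

print(as_unsigned)   # 65
print(as_char)       # A
print(as_flags)      # [True, False, False, False, False, False, True, False]
```

Nothing in the pattern itself selects among these readings; which "computation" it participates in depends entirely on the mapping brought to it.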

This last point demonstrates that treating computation as intrinsic to the machine operations essentially forces us to understand every physical process as a potential computation. Alternatively, if we define something as a computation only when actualized by an additional process that determines the relationship between a given machine operation and a specific corresponding process in some other domain, such as mathematical calculation, then this additional process is what ultimately distinguishes computation from mere mechanism. Of course, in the realm of today's electronic computers, this process is the activity of a human mind; so, if human minds are just computers, then indeed there is no one home anywhere, no interpretation. It's all just machine operations mapped to machine operations mapped to machine operations.

There is another equally troubling way that the logic of computation undermines the explanation of the special features of mental processes. It also seems to suggest that it may be impossible for minds to expand themselves, to develop new meanings, to acquire new knowledge or capacities. The reductio ad absurdum of this argument was probably best articulated by the philosopher Jerry Fodor. He poses the problem this way:

There literally isn't such a thing as the notion of learning a conceptual system richer than the one that one already has; we simply have no idea of what it would be like to get from a conceptually impoverished to a conceptually richer system by anything like a process of learning.10

In his characteristically blunt and enigmatic manner, Fodor appears to be saying that we are permanently boxed in by the conceptual system that we must rely on for all our knowledge. According to this conception of mental representation, what a mind can know must be grounded on a fixed and finite set of primitives and operations, a bit like the initial givens in Euclid's geometry, or the set of possible machine operations of a computer. Although there may be a vast range of concepts implicitly reachable through inference from this starting point, what is conceivable is essentially fixed before our first thoughts. At face value, this appears to be an argument claiming that all knowledge is ultimately preformed, even though its full extent may be beyond the reach of any finite human mind. This interpretation is a bit disingenuous, however, or at least in need of considerable further qualification. At some point the buck must stop. We humans each do possess a remarkably complex conceptual system, and almost certainly many aspects of this system arose during our evolution, designed into our neural computers by natural selection (as evolutionary psychologists like to say). But unless we are prepared to say that our entire set of mental axioms magically appeared suddenly with humans, or that all creatures possess the same conceptual system, our inherited conceptual system must have been preceded at some point in our ancestors' dim past by a less powerful system, and so on, back to simpler and simpler systems. This is the escape route that Fodor allows. The argument is saved from absurdity so long as we can claim that evolution is not subject to any such stricture.

Presumably, we arrived at the computational system we now possess step by step, as a vast succession of ancestors gave birth generation upon generation to progressively more powerful conceptual systems. By disanalogy, then, evolution could be a process that generates a richer (i.e., functionally more complex) system from a simpler one. But could cognition be similar to evolution in this respect? Or is there some fundamental difference between evolution and learning that allows biology to violate this principle while cognition cannot? One fundamental difference is obvious. The principles governing physical-chemical processes are different from those governing computation. If evolution is merely a physical-chemical process, then what can occur biologically is not constrained by computational strictures. There will always be many more kinds of physical transformations occurring in any given biological or mechanical process than are assigned computational interpretations. So a change in the structure of the computational hardware (of whatever sort) can alter the basis for the mapping between the mechanism and the interpretation, irrespective of the strictures of any prior computational system. In other words, precisely because computation is not itself mechanism, but rather just an interpretive gloss applied to a mechanism, changes of the mechanism outside of certain prefigured constraints will not be "defined" within the prior computational system. Computation is fragile to changes of its physical instantiation, but not vice versa. With respect to the computational theory of mind, this means that changes of brain structure (e.g., by evolution, maturation, learning, or damage) are liable to undermine a given computational mapping.
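A minimal sketch can make this fragility concrete (the device model and its numbers are invented for illustration, not drawn from the text): the digital reading of a physical element holds only so long as the hardware stays within the idealization on which the mapping was based.

```python
import random

# Sketch: a "physical" storage element has many more properties (noise,
# drift, supply voltage) than the single bit we map onto it. The digital
# interpretation only holds while the device stays inside the idealization.
# All device parameters here are invented for illustration.

def read_cell(stored_voltage: float, supply: float) -> float:
    """The mechanism: returns a noisy analog voltage, not a bit."""
    return stored_voltage * (supply / 5.0) + random.gauss(0, 0.05)

def interpret_as_bit(voltage: float) -> int:
    """The mapping imposed from outside: threshold at 2.5 V."""
    return 1 if voltage > 2.5 else 0

# Within the idealization (5 V supply), the mapping works: a stored "1"
# (about 4.0 V) reliably reads as the bit 1.
print(interpret_as_bit(read_cell(4.0, supply=5.0)))   # -> 1

# Change the physics outside the idealization (supply sags to 3 V) and the
# very same mapping now misreads the same stored state.
print(interpret_as_bit(read_cell(4.0, supply=3.0)))   # -> usually 0

# Nothing in the "computation" detects this; the failure is defined only
# with respect to the interpretation, not the mechanism itself.
```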

This might at the same time preserve Fodor's claim and yet overcome the restriction it would otherwise impose, though undermining the force of it in the process. Thus, on the analogy of a function that is not computable on a Turing machine, an intrinsically interminable machine process could, for example, be halted by damaging its mechanism at some point to make it stop (indeed, simply shutting off my computer is how I must sometimes stop a process caught in an interminable loop). This outside interference is not part of the computation, of course, any more than is pulling the plug on my computer to interrupt a software malfunction. If interference from outside the system (i.e., outside the mechanistic idealization that has been assigned a given computational interpretation) is capable of changing the very ground of computation, then computation cannot be a property that is intrinsic to anything. Computation is an idealization about cognition based on an idealization about physical processes.
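The following sketch, which relies on Unix-style signals and so is platform-specific, is offered only as an illustration of the point: the interruption that halts an interminable process arrives from outside the computational description and is no part of the computation itself.

```python
import signal

# Sketch: an "intrinsically interminable" process, stopped only by an
# intervention that is no part of the computation being performed.
# Uses Unix signals, so this particular illustration is platform-specific.

def interminable_search():
    """Loops forever; nothing inside it ever decides to stop."""
    n = 0
    while True:
        n += 1  # the 'computation' just keeps going

def pull_the_plug(signum, frame):
    # The interruption arrives from outside the idealized machine:
    # it is not a step of interminable_search at all.
    raise TimeoutError("stopped from outside")

signal.signal(signal.SIGALRM, pull_the_plug)
signal.alarm(1)  # schedule the outside interference in one second

try:
    interminable_search()
except TimeoutError as stop:
    print(stop)  # the process halted, but not because the computation did
finally:
    signal.alarm(0)  # cancel any pending alarm
```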

Turning this logic around, however, offers another way to look at this loophole. The fact that computation can be altered by mechanistic failure imposed from without is a clue that both the mechanism and the correspondence relationship that links the mechanism to something else in the world are idealizations. Certain mechanistic processes must be prevented from occurring and certain properties must be ignored in establishing the representational correspondence that defines the computation. Computation is therefore derived from these extrinsic, simplifying constraints on both mechanical operations and the assignment of correspondences. These constraints are not only imposed from the outside, they are embodied in relations that are determined with respect to operations that are prevented or otherwise not realized. Paying attention to what is not occurring is the key to a way out of this conceptual prison. We must now ask: What establishes these mechanical and mapping constraints? It cannot be another computing relationship, because this merely passes the explanatory buck. Indeed, this implies that computation is parasitic on a conception of causality that is not just different from computation, it is its necessary complement.

So whatever processes impose these constraints on the physical properties of the mechanism and on the mapping from mechanism to use, these are what determine that something is a computation, and not merely a mechanical process. Of course, for digital computers, human designers and users supply these constraints. But what about our own brains? Although we can pass some part of the explanatory buck to evolution, this is insufficient. Even for evolution there is a similar mapping problem to contend with. As with computation, only certain abstract features of the physiological processes of a body are relevant to survival and reproduction. These are selectively favored to be preserved from generation to generation. But although natural selection preserves them with respect to this function, natural selection does not produce the physical mechanisms from which it selects. Although natural selection is not a teleological process itself, it depends on the intrinsic end-directed and informational properties of organisms: their self-maintenant, self-generative, and self-reproducing dynamics. Without these generative processes, which are the prerequisites for evolution and which require certain supports from their environment, there is no basis for selection. As with computation, the determination of what needs to be mapped to what, and why, is necessarily settled by prior ententional factors.

The ententional properties that make something a computation or an adaptation must ultimately be inherited from outside this mapping relationship, and as a result they are parasitic on these more general physical properties. This is why, unlike in computation, the messiness of real-world physics and chemistry is not a bug; it is an essential feature of both life and mind. It matters that human thought is not a product of precise circuits, discrete voltages, and distinctively located memory slots. The imprecision and noisiness of the process are what crucially matter. Could it be our good fortune to be computers made of meat, rather than metal and silicon?

This brief survey of the homunculi and golems that have haunted our efforts to explain the properties of life and mind has led to the conclusion that neither preformationist nor eliminativist approaches can resolve the scientific dilemmas posed by their ententional phenomena. The former merely assumes the existence of homunculi to supply the missing teleology; the latter denies their existence while at the same time smuggling in golems disguised as physical principles. To accept ententional properties as fundamental and unanalyzable is merely to halt inquiry and to rest our explanations of these phenomena on a strategy of simply renaming the problem in question. To deny the existence of these properties in the phenomena we study inevitably causes them to reappear, often cryptically, elsewhere in our theories and assumptions, because we cannot simply reduce them down and down until they become purely chemical or mechanical processes. To pretend that merely fractionating and redistributing homunculi will eventually dissolve them into chemistry and physics uses the appearance of compositionality to distract our attention from the fact that there is something about their absential organization and their embeddedness in context that makes all the difference.

Chapter 4

Teleonomy

Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public.

-J. B. S. Haldane

BACK TO THE FUTURE

Living and mental processes appear to work against an otherwise universal trend of nature. When it comes to processes that produce new structures, the more rare, complicated, or well designed something appears, the less likely it is that it could have happened spontaneously. It tends to take more effort and care to construct something that doesn't tend to form on its own, especially if it is composed of many complicated parts. In general, when things work well despite the many ways they could potentially fail, when they exhibit sophisticated functional
