INDIVIDUALISM AND LOCAL CONTROL IN COMPLEX SYSTEMS[1]
(from Canadian Journal of Philosophy suppl. vol. 20:165-185,  1994).

©  Ronald de Sousa
Department of Philosophy
University of Toronto
 

Introduction.

In both biology and psychology, the notion of an individual is indispensable yet puzzling. It has played a variety of roles in diverse contexts, ranging from philosophical puzzles of personal identity to scientific questions about the immunological mechanisms for distinguishing "self" from "non-self." There are notorious cases in which the question of individuality is difficult to settle -- ant hill, slime mold, or beehive. Yet the notion of an individual organism, both dependent on and independent of other individuals in specific ways, is crucial to our conception of life itself. It is also crucial to our notion of mentality, and hence to other concepts -- moral and social -- which must be explicated in terms of individual mentality. (Think, for example, of the importance of the quality and nature of individual consciousness to debates about abortion and euthanasia.)

But what notion of individuality is involved here? A general criterion of demarcation between individuals and parts of individuals is elusive. A genuine individual with parts of its own may "independently" enter into communication with other individuals of the same or different categories; does this differ in any general and systematic way from a mere part of such an individual, which must needs communicate with other parts?

No answer may be forthcoming that is not essentially a matter of degree. Intuitively, however, what individuals must have in sufficient degree is integrity, separability, durability, and organization. By integrity, I mean something like the capacity to maintain themselves within identifiable boundaries against at least some of the environment's incursions. Separability is a closely related property, but stresses the possibility of the whole being moved, in relation to the environment, without significant changes occurring within its boundaries. Durability is especially vague, but it excludes both eternal objects like numbers or abstractions and extremely ephemeral objects like physical particles that go in and out of existence in a fraction of a second. All of those properties are in some sense supervenient on the last, organization. This paper is a discussion of the right way to understand organization in the relevant sense.

The usual sense of organization that we tend to take for granted involves a central hierarchy, in which some bits identified as parts are in some sense "subordinated" to the whole, and in which some plan or blueprint represents the whole and its relations to the parts. And in articulating this relationship, it turns out to be difficult to avoid the notion of teleology.[2]

Alex Rosenberg, and Mohan Matthen and Edwin Levy, provide instructive examples. Rosenberg argues that the substitution of thymine in DNA for uracil in RNA can be described in strictly mechanical terms, but can properly be understood only in terms of the "need" for the genetic system to "correct" for replication errors (Rosenberg 1985, ch. 3). Similarly, Matthen and Levy have argued that we can speak of the immune system as occasionally liable to errors consisting of "misrecognition of self" (Matthen and Levy 1984). To be sure, we can always, in such cases, give a merely chemical account of the response and its causes. But to do so may result in the loss of an epistemologically indispensable level of description and explanation. For it is only if we think of the response as springing from a device whose function is the protection of an individual organism -- a notion that has no place in chemical and physical description as such -- that we can identify some of these reactions as "normal" and others as "errors." And conversely, whether something does or doesn't count as "self" is partly an evaluative issue: there are good reasons, for example, to regard a cancerous growth as "non-self" rather than self, even though in some respects its development is an outcome of the organism's "own" evolution.[3]

In sum, the intuitive idea of an individual includes the ideas of teleology and hierarchical organization, represented in a single control centre responsible for the integrity, separability, durability, and organization of the individual. My goal in this paper is to explore a type of alternative to that assumption, which I call local control.

There are two reasons to examine such alternatives: we need some, and there are some. First, the best guess as to where central control would have to be located in the case of those organisms we take as paradigms of individuality -- namely animals, and people in particular -- is the DNA. But the sheer quantity of information that would be needed to specify the structure of every part of an animal far exceeds the quantitative capacity of DNA. Again, the immune system is a case in point: the DNA doesn't have enough informational space to provide for the enormous diversity of pathogens that a vertebrate is liable to encounter. (See Langman 1989) More generally, embryological development generates informational structures far too rich to be encoded as blueprints in the DNA. Therefore, if we are ever to understand embryology, we need models that will tell us how a complex and organized structure can arise without being specified in advance in a comprehensive detailed blueprint. (See Edelman 1988)

Second, there are a number of models that appear to fill just that specification. These models, including especially Cellular Automata, provide us with examples of systems in which general rules applied purely locally -- i.e. without regard for any general plan -- can turn out to have surprising power in generating complex and highly organized patterns. These patterns seem to behave as if they had been devised explicitly to produce something with the integrity and hierarchical organization of a genuine individual. Yet they are not guided by, and indeed nowhere contain, any representation of the system and its states as a whole. In fact they have not been designed, in any literal sense, with any such overarching goal in mind -- or at least they might not have been so designed: the exact formulation appropriate here is elusive, as we shall see below.

Examples of systems with local control

Local control in some form has been studied in a number of different types of systems. I shall first briefly illustrate the idea in terms of three relatively well known examples: from economics, the invisible hand; from biology, natural selection; and from Artificial Intelligence, Connectionism. I shall then turn to Cellular Automata, to argue that some of the main players of the game of CA modelling tend to misinterpret the import of their own models.

The invisible hand.

The oldest example of the idea of a complex system with local control is perhaps Adam Smith's economic model of the invisible hand, in which the appearance of large scale teleology results from a multitude of decisions independently made for individual reasons. In this scheme, "every individual is continually exerting himself to find out the most advantageous employment for whatever capital he can command," and this turns out in practice to result in advancing the general welfare as if some overall planning had been put into effect. Thus each individual, acting for her own ends and without concertation with others is, in Smith's famous phrase, "led by an invisible hand to promote an end which was no part of his intention." There is something that seems quite magical in this ability of unconnected individual decisions to result in an overall pattern governing the society as a whole. And there is something elusive about the simple question it prompts, namely why does it work? That question is, in one way of understanding it, closely akin to that other unfathomable question, Why does it ever work out to suppose that any laws of nature are simple?[4] But taken another way maybe we can give an answer of a sort: it works because it is a system that incorporates a probabilistic negative feedback loop. It works roughly like this: If I raise my price, the probability that you will find someone whose price is lower rises; therefore more people will buy for less and fewer will buy from me. This will incline me to lower my prices, for if I do, the probability that price comparisons will be in my favour will rise, with correspondingly beneficial effect on my sales. Because probability translates directly and literally into frequency here, the average effect it will have on my actions is an efficient substitute for the deliberate application of complete information about the overall picture to every individual decision. The probabilistic negative feedback loop obviates the need for central planning.
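
The mechanism can be made concrete in a few lines of code. The following is a minimal sketch, not anything from Smith: the ten sellers, the 0.5% price step, and the number of encounters are all illustrative assumptions. Each seller reacts only to her own wins and losses; no agent ever sees the market as a whole.

```python
# Minimal sketch of a probabilistic negative feedback loop in a market.
# All parameters are illustrative assumptions.
import random

random.seed(0)
prices = [random.uniform(5.0, 15.0) for _ in range(10)]  # ten sellers

for _ in range(20000):
    a, b = random.sample(range(len(prices)), 2)  # a buyer compares two sellers
    lo, hi = (a, b) if prices[a] <= prices[b] else (b, a)
    prices[lo] *= 1.005  # the cheaper seller made the sale: she can ask more
    prices[hi] *= 0.995  # the dearer seller lost the sale: she lowers her price

print(f"price spread after 20000 encounters: {max(prices) - min(prices):.3f}")
```

The spread between the highest and lowest price shrinks to a small fraction of its initial value: an orderly market price arises from purely local adjustments, with no central planner setting it.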

Natural Selection.

Evolution by natural selection provides another example. The illusion of teleology in evolution seems to have a similar root in the convergence of a multitude of unrelated accidents. Gene selectionism in neo-Darwinism is the doctrine that whatever the appearance of teleological organization in natural selection, it is only an illusion, in the sense of being the unplanned result of individual events acting on individual organisms. The contrast here is with theories that posit some sort of mechanism for getting a result that is good for the species. Again, part of the reason it works is that it incorporates, in certain domains, probabilistic negative feedback loops. A good example of this is the balance of the sexes. A naive explanation might well assume that having a roughly equal balance of the sexes is a good idea "for the species". But further thought suggests that, from the point of view of the good of the species, having as many males as females is enormously wasteful. For consider two groups, one of which has a negligible proportion of males (sufficient, however, to impregnate the females) and the other equal numbers of males and females. Now suppose that the maximum annual capacity for reproduction in the female of both groups is N (the same in both: in humans, for example, about one every year). Since just half of the second population are female, its maximum yearly number of offspring per head will be something like N/2. But the first group could produce nearly N offspring per head, at roughly the same cost in food and energy. So obviously the first, as a group, is reproductively far more efficient.

Indeed, a few species have somehow jumped the probability moat that generally protects sexual species from reverting to parthenogenesis (see Cole 1984). But they are very rare. So even in the absence of any speculation about the mechanism involved, the explanation in terms of what is good for the species is unattractive. In the gene selectionist model, however, the probabilistic feedback model provides an explanation of the status quo. The status quo is not, as one might think, that equal numbers of males and females are conceived, but that male conceptions outnumber female ones just enough, given males' greater vulnerability during pregnancy and infancy, to ensure equal numbers of males and females at the peak reproductive age. According to the probabilistic feedback model, any genetically transmitted bias in favour of males, say, will be passed on to the next generation at the same time as males turn out to be more numerous: but since they will have to select their mates among those that are both female and less likely to carry the "male biasing" gene, the "female biasing" genes will get disproportionately spread about in the next generation, and the balance will tend to be restored. The mechanism is exactly symmetrical: any imbalance in favour of either sex will tend to be reversed in subsequent generations.
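
The symmetry of the feedback can be exhibited in a toy simulation. The sketch below is my own, not anything from the literature cited here; the population size, brood size, mutation rate, and the initial bias toward daughters are all illustrative assumptions. Each individual carries a heritable trait r: the probability that any one of its offspring is male.

```python
# Toy sketch of the sex-ratio feedback: a population that starts out
# producing mostly daughters drifts back toward a 50/50 ratio, because
# rare males father a disproportionate share of each generation.
import random

random.seed(1)
POP, BROOD = 1000, 4

# (is_male, r): start everyone biased heavily toward producing daughters
pop = [(random.random() < 0.2, 0.2) for _ in range(POP)]

for _ in range(60):
    males = [r for is_male, r in pop if is_male]
    females = [r for is_male, r in pop if not is_male]
    if not males or not females:
        break                            # degenerate case: one sex died out
    offspring = []
    for r_mother in females:
        r_father = random.choice(males)  # a rare male mates many times
        for _ in range(BROOD):
            r = random.choice((r_mother, r_father)) + random.gauss(0, 0.02)
            r = min(max(r, 0.0), 1.0)    # keep the trait a probability
            offspring.append((random.random() < r, r))
    pop = random.sample(offspring, min(POP, len(offspring)))

print(f"mean sex-ratio trait: {sum(r for _, r in pop) / len(pop):.2f}")  # ~0.5
```

Any bias in either direction is self-correcting: the genes of the rarer sex are over-represented in the next generation, with no rule anywhere mentioning the population-wide ratio.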

Feedback mechanisms aren't the only ones that operate in the context of evolution by natural selection. Game theory provides other instances of "games" in which there are one or more equilibrium points, and several of those other models have applications in economics and in evolutionary theory as well. There is no reason to expect their outcomes to be good in any sense. Some, like "prisoners' dilemma" situations, tend to result with equal reliability in wholly undesirable overall effects. But the two models just discussed do illustrate at least one form of "system" in which the appearance of teleology on a global scale is due entirely to mechanisms that operate both mechanically and locally. They have their limitations and exceptions, as illustrated by the parthenogenetic lizards: feedback mechanisms are a specially well behaved case of "hill climbing" devices, and those are local in the following further sense: that nothing that happens can ever be influenced by the fact that there is a better maximum just a hill away from the next trough. Therefore they will, strictly speaking, not be maximizers, if maximizing is defined in terms of an overall criterion. But since they always work probabilistically, they may get shaken off local hilltops by random fluctuations (this is presumably what happened to our parthenogenetic lizards).
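
The hill-climbing point can be sketched directly (the two-hill landscape and the noise level below are entirely my own illustrative choices): a climber that only accepts locally improving moves gets stuck on the first peak it finds, while random fluctuations can shake it off toward a higher one.

```python
# Hill climbing with purely local comparisons. Without noise the climber
# is trapped on a low local peak; random kicks can clear the valley.
import math
import random

def height(x):
    # a low hill near x = 1 and a higher hill near x = 4
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def climb(noise, steps=5000, step=0.05):
    random.seed(2)
    x = 0.0
    for _ in range(steps):
        x_new = x + random.choice((-step, step)) + random.gauss(0, noise)
        if height(x_new) >= height(x):   # local rule: compare neighbours only
            x = x_new
    return x

print(f"no noise:   settles near x = {climb(0.0):.2f}")  # stuck at the low peak
print(f"with noise: settles near x = {climb(1.0):.2f}")  # finds the high peak
```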

Connectionism

The contrast between classical AI systems and connectionist systems may provide yet another example, but from a different perspective. Classical AI systems are constructed "top down"--at least insofar as programmers follow good programming form. What this means is that every part of the task to be performed by every part of the machine is specified in explicit detail. A program is quite literally meant to do something: it has an entirely non-mysterious source of teleology, which is the conscious intention of the programmer to get it to perform just that task.

The crucial feature of connectionist systems for my purposes here is, by contrast, that no explicit instructions are given for the actual performance of the desired tasks. Instead, the machine is "wired up" in ways that are calculated to enable it to learn. It is built "bottom up," though the research that leads to its being built in some particular way rather than others can be grounded in any number of calculations as well as experiments. In connectionist systems there is no clear hierarchy of control, and no explicit design of the methods to be used to accomplish a task. In those two senses, then, there is no "global control."
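
As an illustration of what "no explicit instructions for the task" means, here is a minimal sketch (my own, and far simpler than any serious connectionist model; the learning rate and number of passes are arbitrary choices): a single perceptron learns the logical AND function from examples alone. Nowhere does the code state what AND is; the weights are nudged locally after each error, and the correct behaviour of the whole unit is what results.

```python
# A single perceptron learning AND from examples. No rule for AND is ever
# written down: each weight is adjusted locally in response to an error
# signal. The learning rate and epoch count are arbitrary choices.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias, rate = 0.0, 0.0, 0.0, 0.1

for _ in range(20):                       # a few passes over the examples
    for (x0, x1), target in data:
        out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
        err = target - out                # local error, no global plan
        w0 += rate * err * x0
        w1 += rate * err * x1
        bias += rate * err

for (x0, x1), _ in data:
    print((x0, x1), "->", 1 if w0 * x0 + w1 * x1 + bias > 0 else 0)
```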

Cellular Automata

More recently, as illustrated in Christopher Langton's collections of papers on Artificial Life (Langton 1986; Langton 1989; Langton 1992), the power of local control has been intensively studied in connection with a different class of models. A Cellular Automaton is essentially a collection of cells in a virtual space (typically represented on a computer screen), each of which can be in any of a finite number of states. At every cycle of a universally ticking clock, the state of each cell is revised according to rules that refer exclusively to its own state and the states of its neighbours. Simple as this characterization is, it is compatible with devices able to simulate indefinitely complex behaviour; in fact, CAs have been shown to be equivalent to Turing machines. (Langton 1986)

The "Game of Life" is one toy CA model which has become widely known thanks to some discussions in the Mathematical Games section of Scientific American. Its design is essentially as follows. The screen is divided into basic cells, each of which can be either on or off (light or dark). Each cell is considered to have 8 neighbours. The next state of each cell is determined entirely by its own present state in conjunction with information about its own neighborhood. The precise rules are as follows:

1. Form the Eightsum of each cell's eight neighbours.
2. If a cell is 0 and its Eightsum is 3, the cell's new state is 1.
3. If a cell is 1 and its Eightsum is 2 or 3, the new state is 1.
4. In all other cases the cell's new state is 0. (Rucker 1989)
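
These rules translate almost line for line into code. In the following minimal sketch the grid size, the wrap-around edges, and the "glider" starting pattern are my own illustrative choices; only the three update rules come from Rucker's statement above.

```python
# The Game of Life rules above, applied with purely local information.
# Grid size, wrap-around edges, and the starting pattern are illustrative.
SIZE = 10
grid = [[0] * SIZE for _ in range(SIZE)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a "glider"
    grid[r][c] = 1

def step(g):
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            eightsum = sum(g[(r + dr) % SIZE][(c + dc) % SIZE]
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if (dr, dc) != (0, 0))
            if g[r][c] == 0 and eightsum == 3:          # rule 2
                new[r][c] = 1
            elif g[r][c] == 1 and eightsum in (2, 3):   # rule 3
                new[r][c] = 1          # rule 4: every other cell stays 0
    return new

for _ in range(4):
    grid = step(grid)
print("\n".join("".join(".#"[v] for v in row) for row in grid))
# After four steps the glider has moved one cell diagonally: a coherent,
# self-maintaining structure nowhere mentioned in the rules.
```
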
Now what is striking about this model is that playing with different rules sometimes (though, unsurprisingly, not all that often) yields patterns that look for all the world as if they had been intentionally designed, and even sometimes, though less often, they look as if they had given rise to individuals with their own coherent structure and integrated behaviour: structures that move along on the screen and seem capable of destroying competing structures. Rudy Rucker boasts: "The remarkable things about CA's is their ability to produce interesting and logically deep patterns on the basis of very simply stated preconditions" (Rucker 1989, p. 16).

Conceptual Problems

Although the above examples are suggestive, they do not yield any very obvious definition of what is meant by `local control'. Indeed, that idea may seem to cave in if it is placed under any pressure.

To begin with, there is one way of looking at the notion of local control that seems to threaten it with trivialization. In some sense, all causality acts locally. "Physics is local" (John Walker in Rucker 1989). On the other hand, all laws of physics are global; similarly, in CA models all rules are global in the sense that they apply equally to all cells; but all changes actually effected, i.e. all causes, are local, in that what happens to a particular cell doesn't depend on any events or states affecting cells beyond its neighborhood. So why the fuss? When we start thinking about it from that perspective, the issue threatens to slip from view. For isn't every system, from that point of view, a locally controlled system?

This rhetorical question suggests the following answer: Yes, every physical system is a locally controlled system (ignoring for our purposes the paradoxical examples of non-locality in quantum physics). But do some systems -- biological ones, or psychological ones, for example -- need to be described also at another level, in terms of properties that cannot be deduced from the totality of the physical interactions they involve? This raises the old question of emergence. And on this issue, as we shall see, the proponents of local control systems are not themselves altogether clear about the implications of their ideas.

What, then, do the mavens of local control themselves say?

Apart from local determination of behaviour, Langton speaks of the systems he is interested in as characterized by non-linearity. "Linear systems are those for which the behavior of the whole is just the sum of the behaviour of its parts." Non-linearity is what allows systems with local determination of behaviour to be interesting: for if such systems were linear, their global behaviour would afford us no surprises. Linear systems are said to obey the "superposition principle":

"We can break up complicated linear systems into simpler constituent parts, and analyze these parts independently. Once we have reached an understanding of the parts in isolation, we can achieve a full understanding of the whole system by composing our understanding of the isolated parts."(Langton 1989, 41)

The principle of superposition is clearly a rejection of emergence, and Langton himself argues that his models go beyond it, and present us with genuine cases of emergence. But this is not clear enough. For just what notion of emergence is involved cannot be decided unless we have a clearer understanding of the meaning of "composing our understanding of the isolated parts." As far as I can see there are five distinguishable (though not all mutually exclusive) notions or levels of emergence that might be at stake here. I define them in terms of their negation: at each level of closeness in the relationship between parts and whole, a certain notion of emergence is excluded.

Levels of emergence:

Level 0: No intentional emergence. If I give you instructions for doing something without telling you what the result will be, I will say the result is intentionally emergent. The following illustrates the notion: Suppose I give you a "join the dots" picture of a unicorn. You join the dots, and find you have drawn a unicorn. The unicorn is intentionally emergent, since I nowhere referred to a unicorn: I simply said: join the dots. This differs from the case where I have given you explicit instructions and told you that you were drawing a unicorn. The latter case exemplifies 0-level emergence: there is no emergence whatever, because I not only could have predicted the result but, having told you what you were drawing, actually did so without being required to do any reasoning or calculation. At level 0, everything that happens is in accord with explicit instructions. Beyond this 0-level, every case involves intentional emergence. But this is too weak to amount to "emergence" in any sense usually intended. Above 0-level, any degree of "compositionality" in the relation between parts and whole can be described as having some measure of emergence; but the lowest levels are furthest away from what is usually meant by the notion, and only the higher levels are "emergent" in any interesting sense.

Epistemic emergence of level 1. Logical deducibility is the tightest relation between parts and wholes compatible with intentional emergence. If I can deduce the behaviour of the whole from the behaviour of the parts, there would not normally be said to be emergence of the former in relation to the latter. But I might still have trouble figuring out the deducible consequence. In that case, we have epistemic emergence based on purely logical complexity. Call this epistemic emergence of level 1, or epistemic emergence with deducibility.

Epistemic emergence of level 2. This is the most common case. Law-like connections between the individual and the global (or micro and macro) levels have to be discovered empirically. Once discovered, they can serve to yield information about the behaviour of the whole on the basis of information about the parts. But this represents a genuine level of emergence in relation to purely logical deducibility. Call it epistemic emergence with law-like connections.

Epistemic emergence of level 3. What Langton and others often seem to have in mind by "emergence" is that logical and law-like connections are too complex to be worked out in practice. In this perspective, it is acknowledged that one could "in principle" deduce the state of the whole from the state of the parts plus information about macro-micro linking laws. But the macro states are practically emergent nonetheless. As with the previous level, we are dealing with epistemic emergence, but this time it is based on the complexity of law-like interactions, not on logical complexity. This is epistemic emergence with presumed but inaccessible law-like connections.

Full emergence of level 4: radical indeterminacy. The possibility exists that, because of radical indeterminacy in the way the micro (or local) level affects the macro (or global) level, no logical or natural law could ever be discovered that would even in principle allow us to predict the latter from information about the former. Only this would constitute ontological emergence, and no one has claimed it for cellular automata. (It has, however, been claimed in various forms for the relation between the physical and the mental.)

Now which form of emergence is Langton claiming for cellular automata?

Intentional emergence, as already remarked, is too weak: intentions are neither necessary nor sufficient for deciding the issue of emergence in any interesting sense.

Suppose first that I didn't intend it. That isn't enough, since it may be simply laziness or inadvertence that stopped me from working out the consequences of putting together individual units in this particular pattern. Intentional emergence is not a sufficiently strong form of emergence to match any of our intuitions.

But now suppose I did intend it. Even that isn't sufficient to rule out emergence, because while I intended a specific result, I might use random methods in the hope of getting it. I might, for example, have used the "biomorphs" selection simulation program set up by Dawkins, in the hope of chance throwing out a certain shape that I very specifically have in mind. Although I have that shape in mind, I won't actually have explicitly specified how to get it, or even whether it can be gotten at all. On the contrary, I would have no idea how or when it would come about. And yet the whole process, while being run without specific overall instructions, would be subject to an overall goal, if not a plan.
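
Dawkins' own "METHINKS IT IS LIKE A WEASEL" demonstration in The Blind Watchmaker (Dawkins 1987) makes the same point in a form simple enough to sketch here; the mutation rate and brood size below are my own illustrative choices. I specify the target I have in mind and a scoring rule, but never a procedure for reaching it: random mutation and cumulative selection do all the work.

```python
# Cumulative selection toward an intended but never-specified-how result.
# The mutation rate and brood size are illustrative choices.
import random

random.seed(3)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):                       # how many characters already match
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    brood = ["".join(c if random.random() > 0.02 else random.choice(ALPHABET)
                     for c in parent) for _ in range(100)]
    parent = max(brood, key=score)  # selection keeps the closest match
    generations += 1

print(f"reached the target in {generations} generations")
```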

Emergence in the classical sense has to belong at least to level 2. That is, it has to preclude any reliable prediction of the emergent things or properties on the basis of even the fullest information about the elements that constitute them. But Langton tells us that "the system computes the local, non-linear interactions explicitly and the global behavior--which was implicit in the local rules--emerges spontaneously without being treated explicitly." (Langton 1986, p. 42) [my emphasis] In what sense, "implicit"? If we mean this literally, we have to infer that the global behaviour can be logically deduced from the sum of the local behaviour. But if this is so, then by the definition just given we are dealing with "linear" systems after all, where the only form of emergence involved is level 1. Or does Langton mean that the effects are "causally implicit", in the sense that the behaviour of the parts provides causally sufficient conditions for the behaviour of the whole, but that no knowledge of the parts and their properties in isolation could have been sufficient to deduce logically how the system as a whole would behave? This would give us emergence of level 2 or 3. But since CAs are logical objects defined over their own virtual space, it is unlikely that anything will be emergent at either of those levels.

All that remains is epistemic emergence at level 1. That's not emergence in any classical sense, but, as I shall argue in a moment, it's still an idea that we should find interesting.

An Objection

But first I want briefly to consider another objection, due to P. Hogeweg, to the claims made on behalf of CA (Hogeweg 1988). It is that there is actually something fraudulent about the claim to locality, because the whole system needs a single, coordinated clock:

Synchrony is at variance with the localness of cellular automata, which is their major strength. To achieve synchrony one needs a global clock. Particularly in the case of a small number of states (the second major strength of cellular automata) synchrony can influence the qualitative behaviour of the system considerably by imposing this global constraint.[5]

How serious is this criticism? At first sight, it seems like a damaging point. The overall clock must indeed be organized "from the outside", and any failure of the clock to keep up a consistent layering of simultaneous events in the whole system might lead to very different results. On the other hand, maybe the existence of the clock should be viewed simply as another aspect of the universality of the laws of nature. Cells of a cellular automaton are essentially homogeneous, in the sense that they belong to a single virtual domain in which all are subject to exactly the same rules. So I don't see the objection as damaging; but it does serve to remind us that locality of interaction is not some sort of absolute metaphysical requirement: many systems may be so organized that some interaction is determined locally, while framework assumptions and constraints are imposed from the outside, as it were, on the space as a whole. In CA programs, for example, we sometimes see rigid frames or barriers set up to constrain the behaviour of the cellular automaton proper. In one, called "perfume", we see the outline of a couple of flasks, one partly stoppered, the other unstoppered, and the evolving image serves to model the diffusion of molecules (of perfume or anything else) gradually taking over the whole space, but constrained by the barriers which affect the speed and order of the diffusion process.
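
Hogeweg's point can be made vivid with a small experiment (my own illustrative sketch, using the Life rules stated earlier): run the same rules on the same starting pattern, once with every cell updated on a shared clock tick, and once with cells updated one at a time, in place, with no synchrony. The two histories diverge almost immediately.

```python
# Synchronous vs. asynchronous application of the same local rules.
# The 5x5 grid and the "blinker" starting pattern are illustrative.
SIZE = 5

def blinker():
    g = [[0] * SIZE for _ in range(SIZE)]
    g[2][1] = g[2][2] = g[2][3] = 1
    return g

def eightsum(g, r, c):
    return sum(g[(r + dr) % SIZE][(c + dc) % SIZE]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))

def rule(state, s):
    return 1 if (state == 0 and s == 3) or (state == 1 and s in (2, 3)) else 0

def sync_step(g):      # global clock: every cell sees the same old grid
    return [[rule(g[r][c], eightsum(g, r, c)) for c in range(SIZE)]
            for r in range(SIZE)]

def async_step(g):     # no clock: each update already sees earlier ones
    for r in range(SIZE):
        for c in range(SIZE):
            g[r][c] = rule(g[r][c], eightsum(g, r, c))
    return g

a, b = blinker(), blinker()
for _ in range(2):
    a, b = sync_step(a), async_step(b)
print("same history without the clock?", a == b)   # prints False
```
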

Locality and hierarchic systems.

Local control does not necessarily preclude the formation of hierarchies. A hierarchy can involve external (non-locally controlled) constraints, as just illustrated, or else there can be hierarchies of levels at which a system behaves like a cellular automaton. In what sense can local control systems form the basis of hierarchies? Evolution again provides some nice examples. Something might act on its immediate neighbours only at a certain level of causation and analysis, but be part of a larger structure capable of interacting with other larger sized structures at another level.

Let me give an example. Although the notion of group selection fell into disfavour after the attacks on it by G.C. Williams (1966), it is now generally recognized that there are cases of selection acting on units other than the individual gene. Below the level of the gene, there can sometimes be gamete selection, working through segregation distorters, which can compete directly against the prevalent tendency of gene selection (see Dawkins 1982). An example is the t allele in the house mouse, an allele that subsists in the population in spite of the fact that it is deleterious, because a segregation distorter favours it at the time of meiosis. On the other side, there can actually be group selection in some cases, and that too can work in a sense opposed to the trend of gene selection. Given a certain trend in sexual selection, as in the traditional (but disputed: see Gould 1977, pp. 79-90) case of the extinction of the Irish Elk, one species might go extinct to the benefit of another; though in this case, unlike the case of pure gametic selection, the mechanism for this "group selection" is parasitic on the mechanism of gene selection even when their consequences run opposite to one another.

So what does local control really amount to?

In the end, the concept of local control may seem less exciting than at first appeared. It seems there is nothing more to it, after all, than the rather trite observation that physical causes act only by spatio-temporal contiguity, and that complex systems do things we find it practically impossible to predict.

I think this is right. But we would be wrong not to find it interesting. If we don't, it's because we've been misled by the mistaken emphasis given by the advocates of CA to the issue of emergence. They seem to be saying: "Look, what wonderful phenomena of emergent behaviour we are getting." And on inspection, that claim fizzles to not much. But what they should be saying is: "Look what powerful effects can be produced without any strong levels of emergence. Look, in other words, at the powerful new evidence we are bringing for a strong reductionist program."

For suppose that what "emerges" is, if I may be permitted the oxymoron, simply complexity. If the complexity gives rise to undesigned but genuine patterns, that is not trivial at all. For apart from a general prejudice in favour of the reductionist program based on the past successes of science, there was no good reason to bet, ahead of time, on such a successful reduction of stable complex patterns and objects to the simple interaction of locally acting rules.

Nevertheless, there is also something important about the level of emergence, however weak, that we can claim for locally controlled global systems. The reason that even weak or "level 1 epistemic" emergence matters has to do with the non-homogeneity of biological individuals.

Non-Homogeneous Systems

To clarify what I mean by this, compare the kinds of complex systems that physics is wont to study. Take weather systems, for example. They are immensely complex; their behaviour is not predictable in practice from knowledge of all their parts, but this is essentially because the sheer quantity of relevant information about all their parts is unmanageably large. Nevertheless, we believe it would be in principle possible to make the calculation if, per impossibile, we could gather all the relevant information. Therefore trivial prediction of the state of the weather from its component parts is "in principle" possible, though in practice impossible, and we have a case here too of weak emergence, at level 2. Why doesn't this seem to matter much for the present argument?

The reason is that weather systems are not of interest to us as such. We are interested in the weather on this or that occasion, but not in weather systems considered as individual things. We don't care about any way of cutting weather systems up into particulars. By contrast, we care very much about biological organisms. Their non-homogeneity demands that they be in certain respects treated differently. So we would actually like to do this impossible thing, which is to predict by global rules (i.e. rules that apply to the whole individual) what that individual is going to do next. The possibility that we should be able to make such predictions by adverting entirely to the states not of the whole but of spatio-temporal parts subject to a relatively manageable number of rules is indeed interesting news. It is interesting philosophically that we can make such predictions in principle, even if we could not possibly make them in practice.[6]

Consequences for the notion of an individual in psychology.

I close with a couple of brief observations about two areas in which the issues I have been discussing may have interesting implications. First, our notion of an individual in psychology; second, our conception of understanding in science.

The Society of Mind

Marvin Minsky (1985) has argued that the mind should be viewed not as a mysterious simple unity but as a society of agents and agencies. This notion has some features that put it in the camp of locally controlled systems, but it also has some features that suggest that it belongs in the other camp. In the former class is the high degree of modularity of the "agencies" of the mind. Each does its own job, communicating only with the agencies with which it is directly in contact. In this way it seems to illustrate, in the context of a theory of mind, the idea of a whole that is made up of parts seeing only other, immediately adjacent parts. On the other hand, the modular design of the whole is definitely hierarchical. The "immediate neighbours" of any given agency either are under it in the order of the hierarchy or represent a higher agency to which it is itself responsible.

This shows that the local control model is not logically incompatible with hierarchy. The hierarchical organization of the mind is not necessarily diminished by its modularity. On the other hand, the mere possibility of a completely decentralized model in which the appearance of hierarchy is merely emergent at levels 1 or 2 may shake any lingering attraction we might still feel for the notion of a unified Cartesian soul. Hierarchical organization might itself be something of an illusion, as has recently been urged by Dennett, who favours what he calls a "multiple drafts" model of the mind:

...localized discriminative states transmit effects to other places, contributing to further discriminations, and so forth.... Where does it all come together? The answer is: Nowhere.... there is no one place in the brain through which all these causal trains must pass in order to deposit their content "in consciousness." (Dennett 1991, pp. 134-135)

Both Minsky's and Dennett's models contrast with the view that consciousness, or language, is somehow transcendent in relation to the physical level at which causes act locally. Though everyone shuns the term "dualist" as one of opprobrium, while often hastening to apply it to other philosophers, the view that physical, local causation does not exhaust the phenomena of language and consciousness is vigorously defended in various forms.[7] But the sorts of models that I have been considering, as well as their proponents' insistence on their capacity to produce "emergent" patterns in some sense of that word which I have tried to clarify, suggest that an ultimately reductionist interpretation of even those "transcendent" relations cannot be ruled out a priori.

Externalism and the philosophy of mind.

The question of the sufficiency of local causation can bring a fresh perspective to the question of externalism in philosophy of meaning and mind. Tyler Burge has written:

Information from and about the environment is transmitted only through proximal stimulations but the information is individuated partly by reference to the nature of normal distal stimuli.... causation is local. Individuation may presuppose facts about the specific nature of a subject's environment. (Burge 1979, pp. 16-17)

The point, then, is that the strictly extensional causal structure of the meaning relations involving a speaker in a given linguistic community can be defined as acting "purely locally", but that, on the externalist view, the proper identification of the mental states of such speakers requires us to make reference to chains of causation that take us outside the immediate causal environment of the speaker. Thus, the issue of locality can be used as a way to reformulate the debate between internalists and externalists.

Consequences for our conception of scientific understanding.

In the twilight of sanguine philosophy of science in which we have been living for the past few decades, people have found various reasons to cut down to size the ancient aspirations of science. First, there was the brute fact of irreversibility that followed from the second law of thermodynamics. Some of the consequences of this law are so simple that it is puzzling, now, that no one in the Laplacean tradition of eighteenth-century determinism had seen them. When we drop a stone onto the ground, we may be able to predict its trajectory most exactly. But once it is on the ground, no trace remains of the further passage of time: it is impossible to say when it got there. This placed a first limitation on the ambition of total knowledge.

A second limitation came with the notion of chaos, a third with the notion of absolute randomness. The latter idea came first, with quantum physics; but the former is actually less radical, because it is compatible with strict determinism.

Reductionism as exemplified by local control systems actually presents us with a further limitation on the possibility of full understanding. Insofar as there is no grand plan, there is no possibility of understanding things by getting hold of the grand plan. The most we can hope for is to understand how things work in principle. And while in physics that, together with the capacity to create machines that work as we intend them to, may seem quite sufficient, it may be less satisfying in the case of our ambitions for understanding human beings. For the individuals we are interested in are not just whatever units science decides it can most conveniently understand the world in terms of: we already have an antecedent sense of what units we want to focus on. This is the "evaluative" aspect of our individuation of units of which I spoke at the outset. Given our interest in those units, then, perhaps the most philosophical motivation for science -- the ambition of understanding everything -- is bound to be forever frustrated.

FOOTNOTES

[1]. My thinking in this paper was usefully influenced and stimulated by a seminar on Artificial Life given in collaboration with Paul Thompson in the Fall of 1990. Thanks to all participants, especially Greg Crookall, Lisa Blake, and Niko Scharer.

[2]. A curious etymological tidbit is suggestive here. It turns out that the word `whole' and the various cognates of `holism' are etymologically unrelated, even though the `w' in `whole' is adventitious. `Whole' is etymologically related to `health' (cf. `hale'); but health has become semantically so closely identified with the idea of an integral totality that the sense of "whole" has actually shifted to merge with that of the Greek `holos'. We can see the turning point in the word `wholesome', in which the morpheme is `whole' but the sense is `health'. Obviously I'm not suggesting that etymology has much weight as an argument; but it is an indication that the notion of an individual -- of wholeness -- is difficult to keep separate from some normative notion of functional integration.

[3]. The notion of function has, in recent years, received a promising line of explication, suggesting that it might be entirely naturalized. The standard line is roughly this: the production of a goal G is a function of a given activity or event of type A, if the fact that A results in events of type G explains the existence of A. This is indeed a naturalistic account, but it works only insofar as we are not interested in the issue of what counts as an individual. As soon as that issue is broached we cannot altogether escape evaluative issues. Cases where such references to individuals seem inescapable include the case of the immune system as well as the case of mentality. For the standard line, see Wright (1973), Taylor (1964), Bennett (1976), Millikan (1984). For a powerful recent case in favour of the ineliminability of value from the analysis of teleology, see Mark Bedau (1992).

[4]. Bertrand Russell once suggested that the simplicity of natural law is an illusion reflecting our stupidity: simplicity is merely a feature shared by the only laws we have been smart enough to discover.

[5]. Hogeweg 1988. I owe this reference to Greg Crookall.

[6]. Note that this doesn't preclude hierarchy altogether. Consider, for example, the different levels at which the DNA might affect the development of the individual: protein composition, interaction with environment in epigenesis, interaction with allele in selective competition, etc.

[7]. For one such defense relating to the content of perceptual experience, see McDowell 1991. See also Davidson 1980, 1989.

REFERENCES

Bedau, Mark A. 1992. Naturalism and teleology. In Naturalism: A critical appraisal. Notre Dame, IN: Notre Dame University Press.

Bennett, Jonathan. 1976. Linguistic Behaviour. Cambridge: Cambridge University Press.

Cole, Charles J. 1984. Unisexual lizards. Scientific American 250:94-100.

Davidson, Donald. 1980. Mental events. In Essays on actions and events, 207-27. Oxford: Oxford University Press, Clarendon.

------. 1989. Knowing one's own mind. Proceedings and Addresses of the American Philosophical Association.

Dawkins, Richard. 1982. The extended phenotype: The gene as unit of selection. Oxford: Oxford University Press.

------. 1987. The blind watchmaker. New York: W.W. Norton. Software available for "biomorph" selection modeling.

Dennett, Daniel C. 1991. Consciousness explained. Boston, Toronto, London: Little, Brown.

Edelman, Gerald. 1988. Topobiology: An introduction to molecular embryology. New York: Basic Books.

Fisher, R. A. 1930. The genetical theory of natural selection. Oxford: Clarendon Press.

Gardner, Martin. 1983. Wheels, Life, and other mathematical amusements. San Francisco: W.H. Freeman.

Gould, Stephen Jay. 1977. Ever since Darwin: Reflections in natural history. New York: W.W. Norton.

Hogeweg, P. 1988. Cellular automata as a paradigm for ecological modeling. Applied Mathematics and Computation 27:81-100.

Langman, Rodney E. 1989. The immune system. Foreword by Melvin Cohn. San Diego: Academic Press.

Langton, Christopher G. 1986. Studying artificial life with cellular automata. Physica D 22(1-3):120-49.

Langton, Christopher G., ed. 1989. Artificial Life: Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems, held September 1987 in Los Alamos, NM. Santa Fe Institute Studies in the Sciences of Complexity. Redwood City, CA: Addison-Wesley.

------, ed. 1992. Artificial Life II: Proceedings of the Workshop on Artificial Life, held February 1990 in Santa Fe, NM. Santa Fe Institute Studies in the Sciences of Complexity. Redwood City, CA: Addison-Wesley.

Matthen, Mohan, and Edwin Levy. 1984. Teleology, error, and the human immune system. Journal of Philosophy 81 (July):351-72.

McDowell, John. 1991. The content of perceptual experience.

Millikan, Ruth. 1984. Language, thought, and other biological categories. Cambridge, MA: MIT Press (A Bradford Book).

Minsky, Marvin. 1985. The society of mind. New York: Simon & Schuster.

Rosenberg, Alexander. 1985. The structure of biological science. Cambridge, UK and New York: Cambridge University Press.

Rucker, Rudy. 1989. CA Lab: Rudy Rucker's cellular automata laboratory. With an essay by John Walker. Sausalito, CA: Autodesk Inc. Software included.

Smith, Adam. 1937. An inquiry into the nature and causes of the wealth of nations. Ed & notes by Edwin Cannan. Introd. by Edwin Cannan and Max Lerner. New York: Modern Library.

Taylor, Charles. 1964. The explanation of behaviour. International Library of Philosophical and Scientific Method. London: Routledge and Kegan Paul.

Williams, George C. 1966. Adaptation and natural selection. Princeton, NJ: Princeton University Press.

Wright, Larry. 1973. Functions. Philosophical Review 82:139-68.
