Why Computers Will Never Be People

Keith Price
166 Duffy Street, Ainslie ACT 2602
keith_price@smartchat.net.au

Abstract

The notion that computers and robots either have a measure of intelligence, or at least will have at some stage, has firmly taken root in Western culture. It has inspired a slew of science fiction novels and films, and one of these, The Matrix, has attained the status of a modern classic. Moreover, it has powerful advocates in the scientific and philosophical fraternities, the most prominent of whom are probably Daniel Dennett and Steven Pinker. In my opinion, however, most of the ideas that make up this most influential of popular themes are demonstrably false. I approach the issue by way of a critique of Daniel Dennett, who mounts perhaps the most sophisticated defense of such ideas. Dennett is charged with fudging the distinction between an instrumental and a realist adoption of the intentional stance towards an entity. Other, more reductionist, positions are also critiqued. I conclude with an outline of the philosophy of mind which underlies my approach, which may be termed paninteriorism, following Ken Wilber, and a sketch of what I think it would take to produce a sentient, aware computer or robot.

Copyright (c) 2004, Australian Computer Society, Inc. This paper appeared at the Computing and Philosophy Conference, Canberra. Conferences in Research and Practice in Information Technology, Vol. 37. J. Weckert and Y. Al-Saggaf, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

1 Introduction

Many people seem to feel that the idea that computers will never be people is obviously well founded, but most, I suspect, would be hard pressed to say exactly why. Others, perhaps influenced by science fiction novels and films, or more generally by the scientistic zeitgeist of modernity, will be inclined to feel that there is no good reason why, given enough advances in computer technology, robotics, neuroscience, etc., some sophisticated artefact descended from our current devices may not become one of us. These ideas are often defended in terms of scientific materialism, or physicalism, as, for instance, by Daniel Dennett (Dennett, 1978, 1991).

Here is an argument against the possibility of true machine personhood, taking Dennett into account:

1. Computers work on patterned data (data defined according to its syntax) according to fixed rules in a serial process. That a problem can be set out so that it can be worked on in this fashion is, roughly, what we mean by saying that it is computable.

2. Persons are a subset of all those beings which have beliefs, desires and other propositional attitudes. In Dennett's terms, persons are intentional systems.

3. Propositional attitudes are not computable.

4. Therefore, computers are not persons.

As a formal argument, this one needs a little fixing up to be clearly valid. It does, however, serve to lead us in.
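For definiteness, here is one way the fixing up might go. This is only a sketch, and the regimentation is mine: it reads premise 1 as the claim that every state or process a computer hosts is computable, and premise 2 as the claim that every person hosts propositional-attitude states.

\[
\begin{aligned}
&\text{P1.}\quad \forall x\,\bigl(\mathit{Computer}(x) \rightarrow \forall s\,(\mathit{State}(s,x) \rightarrow \mathit{Computable}(s))\bigr)\\
&\text{P2.}\quad \forall x\,\bigl(\mathit{Person}(x) \rightarrow \exists s\,(\mathit{State}(s,x) \wedge \mathit{Attitude}(s))\bigr)\\
&\text{P3.}\quad \forall s\,\bigl(\mathit{Attitude}(s) \rightarrow \neg\mathit{Computable}(s)\bigr)\\
&\text{C.}\quad\;\; \forall x\,\bigl(\mathit{Computer}(x) \rightarrow \neg\mathit{Person}(x)\bigr)
\end{aligned}
\]

If some computer were a person, P2 would give it a propositional-attitude state, P1 would make that state computable, and P3 rules out exactly that; so C follows.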
2 Computers and Computability

There are several points to make about computers and computability. Perhaps the first is that, for the purposes of my argument, I take the difference between serial von Neumann machines and neural network machines to be of no moment. We know that the architecture of the human brain is massively parallel, without any obvious serial processing bottlenecks, and this has inspired the development of (notoriously hard to program) machines with parallel architecture. I am content to agree with Dennett, who notes:

"Since any computing machine at all can be imitated by a virtual machine on a von Neumann architecture, it follows that if the brain is a massively parallel processing machine, it too can be perfectly imitated by a von Neumann machine." (Dennett, 1991, p. 217)

Moreover, Dennett hypothesizes that:

"Just as you can simulate a parallel brain on a serial von Neumann machine, you can, also, in principle, simulate (something like) a von Neumann machine on parallel hardware ... Conscious human minds are more or less serial virtual machines implemented inefficiently on the parallel hardware that evolution has provided for us." (Dennett, 1991, p. 218)
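The computability point here can be illustrated with a toy sketch (entirely my own; the network and names are invented). A strictly serial loop reproduces, step for step, the behaviour of a machine whose units all update simultaneously, which is precisely the sense in which a von Neumann machine can imitate a parallel one, albeit inefficiently:

    # A minimal sketch of serially emulating one synchronous step of a
    # "parallel" network of units. Each new activation is computed, one
    # unit at a time, from a frozen copy of the previous global state;
    # the results are then swapped in together, exactly as if all units
    # had fired at once.

    def step(weights, state, threshold=0.0):
        """Serially emulate one simultaneous update of every unit."""
        new_state = []
        for unit_weights in weights:  # visit the units one by one
            total = sum(w * s for w, s in zip(unit_weights, state))
            new_state.append(1.0 if total > threshold else 0.0)
        return new_state              # new global state, swapped in at once

    weights = [[0.0, 1.0, -1.0],      # toy three-unit network
               [1.0, 0.0, 1.0],
               [-1.0, 1.0, 0.0]]
    state = [1.0, 0.0, 1.0]
    for _ in range(3):
        state = step(weights, state)
        print(state)

Nothing hangs on the details; the moral is only that parallelism as such erects no barrier to serial imitation.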

The focus on brains and brain states here is, of course, typical of the views I am attacking. The basic idea is that, since human mental life, and hence the sophisticated propositional states it requires, seems to have required the massive development of the neocortex in humans, those mental states must be identical, in some fashion, with states of human brains. Following Lynne Rudder Baker (Baker, 1995), I call this the Standard View. The Standard View would seem to imply, as Dennett explicitly does here, that thinking is a matter of computation in a virtual serial processor in my brain, and that this processor is (more or less) my conscious mind.

This computability requirement places severe constraints on our understanding of how propositional attitudes might function as causal factors in producing overt behaviour. In particular, the content of a propositional attitude needs to be understood syntactically, as a physical structure in the brain, rather than semantically. This fact will lie at the centre of the particular difficulties that I am urging for the Standard View, and thus for the very idea that computers could be people.

3 Persons and Intentional Systems

What, then, is a person? I do not intend here to do more than give a nod of acknowledgement to the incredibly rich cluster of concepts which centre on the idea of a person. For instance, to be a person means, inter alia, to belong to a community of rational, valuing beings, and it thus belongs to the essence of personhood that a person be at least in principle capable of communicating, by means of symbols and signs, in a shared space of interpersonal meanings and shared experience of pleasure, pain, desires, hopes, fears, etc. This is, broadly, the notion that what is distinctive about persons is the possession of language. Moreover, a precondition of language seems to be, as Baker argues (Baker, 1998, 2000), possession of a first-person point of view and the capacity for irreducibly indexical thought. Many (maybe all) animals appear to have a perspective on the world, but only persons are capable of reflection on that perspective. (If an animal did have that capability, I would argue that it is at least on the way to becoming a person; the point being that personhood is, partly at least, so constituted.) There is much that could be said about this lightning sketch of a theory of personhood, but none of it is apposite to my current purpose, except the point that having beliefs, desires and other propositional attitudes is essential to being a person.

Now, there is a view in cognitive science which regards the propositional attitudes as comprising a bad psychological theory, dubbed "folk psychology", which needs to be replaced with a properly scientifically verified theory of what goes on in people's heads. This is eliminative materialism: a Standard View approach to propositional attitudes, together with the further judgement that no brain states will turn out to be suitable candidates for such attitudes. It has often been supposed in recent philosophical discussions of mind that if your approach could be shown to commit you to mind/body dualism, that constituted a refutation of your position. Dualism, it was said, was not a position that one adopted, but more a cliff that one fell off. Whether or not that is fair to dualism (and on balance I am inclined to say that it is not), I certainly feel the same way about eliminative materialism. I agree with Lynne Baker that, if it turns out that no suitable candidates for computable brain states to identify with propositional attitudes can be found, that will impugn the Standard View rather than the reality of the attitudes. I can recommend two of Baker's books (Baker, 1987, 1995) for the relevant argumentation. Here I will only say that I do not think folk psychology actually is a scientific theory subject to disconfirmation, and that the very idea of science presupposes, both in theory and practice, the integrity of the propositional attitudes.

What is of more interest, because more apparently believable, is Dennett's characterization of persons as intentional systems. In Brainstorms (Dennett, 1978) he sets out the notion of three stances one can take to an entity or system: the physical, the design and the intentional. From the physical stance, our predictions "are based on the actual physical state of the particular object, and are worked out by applying whatever knowledge we have of the laws of nature" (Dennett, 1978, p. 4). From the design stance, we predict behaviour based on our knowledge of the intended function or purpose of the system or its parts. Finally, from the intentional stance we predict behaviour by treating the system as an intelligent agent with its own beliefs and desires.

Dennett's standard example is a chess-playing computer. It is not practical to treat a chess-playing computer from either the design or physical stances when playing against it; it is much more fruitful to treat it as if it had beliefs and desires, that is, from the intentional stance. The "as if" is important. In Dennett's usage, "a particular thing is an intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior" (Dennett, 1978, p. 4). Given this understanding, as Dennett notes, "Lingering doubts about whether the chess-playing computer really has beliefs and desires are misplaced; whether one calls what one ascribes to the computer beliefs or belief-analogues or information complexes makes no difference to the nature of the calculation one makes on the basis of the ascriptions" (Dennett, 1978, p. 7).
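The predictive pay-off Dennett has in mind can be made vivid with a toy sketch (my own illustration; the move encoding and values are invented, and real chess programs are of course vastly more complex). To predict the machine's move we consult nothing about its physical states; we simply ascribe to it the "desire" for material advantage and the "belief" that captures serve it:

    # Toy illustration of the intentional stance (invented encoding, not
    # Dennett's). We predict an opposing program's move while knowing
    # nothing of its circuitry, purely from ascribed beliefs and desires.

    PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

    def predict_move(legal_moves):
        """Intentional-stance prediction: assume the system chooses the
        move it 'believes' best satisfies its 'desires'."""
        def payoff(move):
            return PIECE_VALUE.get(move.get("captures"), 0)
        return max(legal_moves, key=payoff)

    moves = [
        {"name": "Nf3"},                        # quiet developing move
        {"name": "Bxb7", "captures": "pawn"},   # wins a pawn
        {"name": "Rxd8", "captures": "queen"},  # wins the queen
    ]
    print(predict_move(moves)["name"])          # prints: Rxd8

    # The prediction succeeds whether or not anything inside the machine
    # deserves the name "belief"; nothing in the calculation changes if
    # we say "belief-analogue" or "information complex" instead.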
This instrumentalist nature of Dennett's intentional stance theory has the additional apparent benefit that he can combine an explicitly eliminativist approach to such things as beliefs and desires, or pains, with treating complex systems like persons as if they really have them. Unfortunately for Dennett, this solution is fundamentally unstable. I leave to one side criticisms which depend on the fact that he often treats important aspects of the intentional stance, such as rationality and ethical evaluation, non-instrumentalistically (see (Baker, 1989), pp. 307-312, for an incisive critique). Though I would contend that this is highly symptomatic of an unstable position, the crucial point is a dilemma about whether the intentional stance is dispensable without cognitive loss. Baker:

"If Dennett is correct, then any system, human or not, may be described exhaustively and its operations explained wholly in terms of its physical constitution. Dennett points out that if some version of mechanistic physicalism is true (as I believe), 'we will never need absolutely to ascribe any intentions to anything' ((Dennett, 1978), p. 273). This seems to imply that the intentional stance is in principle (even if not in practice) dispensable. On the other hand, Dennett has suggested, to fail to take an intentional stance is, in some cases, to miss certain objective patterns. Surely, this claim, which would help give the intentional stance the weight it needs to be more than a sham, leads straight to a dilemma for Dennett; for the existence of objective patterns that would be missed by a physical stance would seem to falsify Dennett's instrumentalism concerning the intentional level." ((Baker, 1989), pp. 312-313)

For example, a race of superior Martians who could predict all our behaviour from the physical stance would nevertheless miss a great deal about us if they did not also see us as intentional systems, and what they missed would be perfectly objective patterns. A stockbroker placing an order on the stock exchange can, in principle, be minutely understood and predicted at the physical level with absolutely no understanding of the significance of his movements for the market, or indeed that there is any such thing at all. The upshot is that:

"If intentional system theory is genuinely instrumentalistic, then the theory can not play the proto-scientific role that Dennett assigns to it because, as mere interpretation, the intentional stance swings free of the design and physical stances. ... On the other hand, if Dennett means the intentional stance to offer a special vocabulary for describing features equally well describable in the vocabulary of the design or physical stances, then it is not even instrumentalistic ... This would be a straightforward reduction." ((Baker, 1989), p. 314)

Why does Dennett try an "as if" strategy which seems to cause as many problems as it solves when talking of propositional attitudes? The reason, I submit, is that straightforwardly reductionist strategies have at least as many problems. Before I look at some of these, though, it is worth noting that intentional system theory provides a superficially rather promising way of motivating the idea that computers could be persons. If both computers and people are intentional systems according to Dennett's definition, perhaps there is no insuperable barrier to computers being people after all. If, however, people really do have propositional attitudes, and it is merely sometimes convenient to treat computers as if they do, we need to dig deeper if we hope to motivate the possibility that they might one day be people. As it stands, we have no reason to think that computers are intentional systems, construed realistically.

Computers are, however, artefacts. This is enough to make them intentional systems in another sense. They are, if you will, non-psychological intentional systems, because the fact that there are computers depends on the fact that there are real intentional states in the world, namely those of the people who design, build, program and run them. I think that the significance of this fact, obvious as it is, has been vastly underestimated by philosophers and cognitive scientists. In fact, as Baker argues, non-psychological intentional properties and systems generally deserve a lot more attention (see (Baker, 1995), pp. 194-195). I will have some more to say about this later.
4 The Non-Computability of Propositional Content

I believe that there are at least two broad reasons why the kind of reductionism which gives comfort to the idea that computers could be people is false. The first relates to computability and syntactic structure, as mentioned earlier. The second relates to qualia, or phenomenal qualities, which I will not deal with in this paper.

Taking a realist approach to propositional attitudes while upholding the Standard View, which says that propositional attitudes are brain states, leads naturally to the idea that the contents of beliefs, for example, are structures in the head which form part of a language of thought. The most influential proponent of this idea is Jerry A. Fodor, who champions the notion of the "narrow content" of beliefs and the like as causally explanatory of behaviour (see, for example, (Fodor, 1987)). The idea of narrow content is that it "supervenes on the subject's intrinsic properties, without regard to the subject's environment" ((Baker, 1995), p. 44). This is opposed to broad content, our ordinary relational notion of content, which depends on both the believer's intrinsic properties and their physical and social environment. In Fodor's scheme, it is not content so understood, but some more restricted, narrow, non-relational neural structure, which does the actual work when we think, entertain propositions, and so on.

I do not wish to focus here on Fodor's defense of narrow content (see (Baker, 1995), pp. 42-56, for an extended rebuttal) but on a related idea developed by William Lycan. On the language of thought hypothesis, belief states have syntactic structure, which means that "psychological processes are causal processes on sentencelike entities, individuated syntactically" ((Baker, 1995), p. 34). The idea is that there is a semantic representation associated with neural sentences which gives the logical structure of each sentence, and which can serve as input to syntactic transformations. Many sentences have indexical and other contextual elements. How are these to be represented in the neural language of thought? An obvious suggestion is that the syntax has parameters for such elements, filled by whatever indexical or contextual elements are the relevant parts of the believer's environment when the internal sentence causes behaviour. Baker explains:

"Since the causal efficacy of sentences in the language of thought, on this view, is wholly determined by brain states of the agent, every element whose presence or absence can affect behavior must be represented in the brain. So, whether or not public language requires that every semantically relevant feature be explicitly represented in logical form, the function of the syntax of the language of thought in causing behavior requires that every semantically relevant feature be explicitly represented in logical form and physically encoded in the brain." ((Baker, 1995), p. 35)

So, for instance, "I am tired now" must have parameters for speaker and time. More generally, all of the many hidden parameters in ordinary discourse must have a slot in the extended internal syntax. It is not too hard to show that this leads to trouble, in the form of the problem of the parameter. The view puts two constraints on the syntactic structure of beliefs:

(A) Syntactically distinct beliefs have physically distinct realizations in the brain.

(B) A belief with n parameters is syntactically distinct from a belief with n + 1 parameters.

It is almost ludicrously easy to come up with cases which violate these constraints. Baker's example involves standard utterances of "An event on the sun is not simultaneous with anyone's seeing it" by Newtonian and Einsteinian physicists. Since we may assume that Einstein's theory, which includes a frame-of-reference parameter for simultaneity, gives the actual truth conditions of this utterance, we must either assume, bizarrely, that historical Newtonian physicists had such a parameter in their heads, despite believing in absolute simultaneity, or else suppose that the Einsteinian has an extra parameter. If we take the latter option, we must suppose, in conformity with (A), that the two beliefs have physically distinct realisations in the brain, despite apparently being the same belief. A brief thought experiment, which I will not go into right now, shows, however, that this need not be so; hence at least one of (A) or (B) is violated.

What do problems like this point to? The general moral has often been taken to be that meanings ain't in the head, or at least not only in the head. As Baker notes, the problem of the parameter is quite general, and is in fact a lot harder than her example might suggest:

"Consider (putative) representations of 'slurping soup is impolite' in the heads of an absolutist and a relativist. In the scientific case, theories provide accounts of which features are the semantically relevant ones. But in most ordinary contexts, things are not so tidy. We have no general theory of semantically relevant features of standard utterances, because of the frame problem, which is how to get a machine to update knowledge of a changing situation by noticing salient features and ignoring others." ((Baker, 1995), p. 41)

What I am pointing to by means of the problem of the parameter is a truly major difference between computers and people. We persons notice semantically relevant features in our environmental context and respond appropriately to them as a matter of course. How we do this is a difficult question to answer; but since semantic content is relational and does not live completely in our heads, there is a real difficulty in imagining how a machine might do something even roughly similar. At a minimum, it would have to bear interesting perceptual and volitional relations to an environmental context, as we do. There are, indeed, various proposals, such as causal theories of perception and reference, which are meant to remedy some of these problems, but I cannot pursue my scepticism about their prospects further here.
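To make the apparatus concrete, here is a minimal sketch (entirely my own illustration; the encoding is invented and merely stands in for the hypothesised neural syntax). Constraints (A) and (B) force the Newtonian and the Einsteinian, who would assent to the very same public sentence, into syntactically, and hence physically, distinct belief structures:

    # Toy model of the language-of-thought parameter apparatus (invented
    # for illustration). A belief's "syntax" is a predicate plus an
    # ordered tuple of explicit parameter slots; by constraint (B), a
    # belief with n slots is syntactically distinct from one with n + 1,
    # and by (A) it must then be physically distinct in the brain.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Belief:
        predicate: str
        parameters: Tuple[str, ...] = ()

        def syntactically_distinct(self, other: "Belief") -> bool:
            return (self.predicate != other.predicate
                    or self.parameters != other.parameters)

    # Both physicists would assent to the same public sentence ...
    newtonian = Belief("not_simultaneous(sun_event, seeing_of_it)")
    einsteinian = Belief("not_simultaneous(sun_event, seeing_of_it)",
                         parameters=("frame_of_reference",))

    # ... yet the model must count their beliefs as distinct realizations.
    print(newtonian.syntactically_distinct(einsteinian))  # True

    # And every hidden parameter of ordinary discourse needs a slot:
    # "I am tired now" already requires speaker and time.
    tired_now = Belief("tired", parameters=("speaker", "time"))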
I now turn to my promised general picture of the relationship between persons and computers, derived from the philosophy of Ken Wilber.

5 How Could Computers Ever Be People? A Wilberian Picture

It is a hopeless task for me to give an adequate summary of Ken Wilber's work in this space. Very briefly, however, Wilber (Wilber, 1995, 2000) has developed a framework to integrate all human knowledge, which can be represented by a diagram with four quadrants, representing the interior-individual, interior-social (or collective), exterior-individual and exterior-social (or collective) dimensions of existence. Four axes of development run out from the centre of the diagram through the centre of each quadrant, marked off at intervals to represent various levels of individual interior (psychological) development, social interior (cultural) development, individual exterior (physiological) development and collective exterior (institutional or systemic) development.

This diagram should be envisaged as a schematic of universal evolution, and also of individual and social development, operating on units called, following A. N. Whitehead, holons. A holon is a whole which is part of a larger whole. Every actually existing thing is either an individual holon, a social holon, a heap composed of holons or an artefact. Moreover, holons "tetra-evolve" simultaneously through all four quadrants: they have an interior, an exterior, and interior and exterior relations to other holons. For instance, persons have an interior reality, accessible only to first-person awareness; a shared space of interpersonal meanings with other persons, accessible to second-person relational awareness; an individual exterior bodily reality; and networks of external relations to other bodies and to the environment as a whole, accessible to third-person awareness. None of these realities is reducible to any of the others.

From the point of view of Wilber's framework, physicalism is a version of "flatland" thinking, which collapses the left-hand, or interior, dimensions into the right-hand, or exterior, dimensions. On this view there are really only exteriors, accessible to monological third-person awareness. Phenomenal awareness, meaning and all else that makes up our sense of ourselves as persons disappear as the cosmos is gutted. On Wilber's view, interiors of holons go all the way down, to atoms and beyond. This is not panpsychism, however, as interiors at these lower levels do not constitute what we would want to call a psyche, soul or mind, but something very minimal; Wilber's term is "paninteriorism". The major point is that minds and souls emerge in the process of evolution or personal development, in lockstep with the evolution of exteriors.

Computers, however, do not evolve: they are artefacts. Holons have their principle of organisation within themselves; artefacts have it imposed from without. This is how computers become non-psychological intentional systems, as mentioned earlier. Why, then, will computers never be people? At base, because they have not gone through the biological evolutionary development required to become such sophisticated holons. Computers are, of course, composed of holons (typically, atoms of silicon and various metals, plus molecules of various plastics, etc.), but the level of interiority of these holons is nearly as far removed from that of persons as it could be. Because they are artefacts, any evolutionary development would require them first to become complex molecular and then biological holons.

Such a development would almost certainly destroy the externally imposed structure which makes them computers; at the least, it would fail to make any use of it. The notion that computers could evolve in a hybrid fashion, biologically and technologically at the one time, does not take the importance of having an interior principle of development seriously enough. We can indeed, and do, put our computers through a process of technological evolution, but all this means is that we impose ever more sophisticated levels of order and control from without. There is no corresponding increase in interiority in the computer, no matter how many feedback control mechanisms and the like we build into the system so that it can be relatively self-managing. For computers to become people, they would probably have to cease being computers and then undergo a lengthy process of biological, not technological, evolution, which would transform them out of all recognition. Maybe such beings were once computers, just as we were once, in a sense, single-celled organisms or scattered atoms of star dust, but we don't need to take such fantasies seriously.

6 References

Baker, L. R. (1987) Saving Belief: A Critique of Physicalism, Princeton University Press, Princeton, NJ.
Baker, L. R. (1989) 'Instrumental Intentionality', Philosophy of Science, 56, 303-316.
Baker, L. R. (1995) Explaining Attitudes: A Practical Approach to the Mind, Cambridge University Press, Cambridge, UK.
Baker, L. R. (1998) 'The First-Person Perspective: A Test for Naturalism', American Philosophical Quarterly, 35, 327-348.
Baker, L. R. (2000) Persons and Bodies, Cambridge University Press, Cambridge.
Dennett, D. C. (1978) Brainstorms: Philosophical Essays on Mind and Psychology, Bradford Books.
Dennett, D. C. (1991) Consciousness Explained, Penguin, London.
Fodor, J. A. (1987) Psychosemantics: The Problem of Meaning in the Philosophy of Mind, MIT Press / Bradford, Cambridge, MA.
Wilber, K. (1995) Sex, Ecology, Spirituality: The Spirit of Evolution, Shambhala, Boston.
Wilber, K. (2000) Integral Psychology: Consciousness, Spirit, Psychology, Therapy, Shambhala, Boston.