Why Computers are not Intelligent: An Argument. Richard Oxenberg


I. Two Positions

The strong AI advocate who wants to defend the position that the human mind is like a computer often waffles between two points of view that are never clearly distinguished, though they are actually quite distinct. Let us call them 'position A' and 'position B.' Position A argues that human minds are like computers because both are 'intelligent.' Position B argues that human minds are like computers because both lack intelligence.

These positions are often confused because of an ambiguity, or vagueness, in the understanding of what intelligence itself is. If we are to consider this question in such a way as to make it relevant to an investigation into the nature of the human mind, then we must define intelligence in a way that captures what we mean when we say that a human being is intelligent. I propose the following definition: a being is intelligent to the extent that it is able to knowingly make decisions for the fulfillment of a purpose. Again, this definition is based on the effort to capture what we mean when we say that a human being has 'intelligence.' Let us, then, consider the two AI positions with this in mind.

II. Position A

Position A is based on the notion that computers are (or can in theory become) intelligent. My claim is that this notion is based on a failure to understand, or really consider, how computers work. Computers, it is true, can be made to mimic intelligence, and very sophisticated computer

programs can mimic intelligence in very sophisticated ways, but this does not make them intelligent. Let's consider a sophisticated computer application: imagine a program X written in such a manner as to generate another program Y, which then executes immediately upon the termination of program X. Suppose further that program Y itself produces a new iteration of program X, with some slight modification, and that this new program X executes immediately upon the termination of program Y, producing, in its turn, yet another iteration of program Y, with a slight modification, and so on. We would now have produced a self-modifying application X-Y.

Let us add to the complexity even more. Let us suppose that program X is able to produce multiple versions of program Y based upon input that it receives while running, and let us suppose the same of program Y. Now we have an application X-Y that not only modifies itself but 'learns'; that is, it modifies itself in accordance with its 'experiences' (i.e., its input). If we design this with enough forethought and intelligence, we might even produce an application that learns 'something,' that is, becomes progressively better at doing something based upon its 'experiences.' Now that seems very much like 'intelligence.' But appearances can be deceiving.

Let us first of all notice that application X-Y is itself the product of human intelligence. However much it may be able to 'teach itself' and 'modify itself,' it is in fact not the product of itself but the product of considerable human thought. Indeed, the more intelligent application X-Y seems, the more human intelligence we can be sure went into its design. Such an application would almost certainly not be produced all at once. First a programmer would create a rather simple program X, which would generate a rather simple program Y, and then, over time, she would add layers of complexity until application X-Y became a robust and complex application.
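The alternating X-Y construction described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not anything specified in the essay: each 'program' is modeled as a closure that, when run on an input, returns the next program in the chain with a slightly modified internal parameter. The names (make_x, make_y, weight) and the update rule are purely illustrative.

```python
# Sketch of the self-modifying "application X-Y": X generates Y, Y generates
# a new X, and each generation carries a slightly modified parameter.
# All names and the update rule here are hypothetical illustrations.

def make_x(weight):
    def program_x(inp):
        # "Learning": nudge the parameter toward the input received.
        new_weight = weight + 0.5 * (inp - weight)
        return make_y(new_weight)  # X produces the next iteration of Y
    program_x.weight = weight
    return program_x

def make_y(weight):
    def program_y(inp):
        new_weight = weight + 0.5 * (inp - weight)
        return make_x(new_weight)  # Y produces the next iteration of X
    program_y.weight = weight
    return program_y

# Run the alternating chain on a stream of inputs ("experiences").
program = make_x(0.0)
for observation in [4.0, 4.0, 4.0, 4.0]:
    program = program(observation)

print(program.weight)  # the parameter has drifted toward the inputs
```

Note that the appearance of 'learning' here is entirely a product of the update rule the programmer wrote in advance, which is exactly the essay's point.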
One can easily imagine that at some point application X-Y might become so complex that its creator would no longer be able to predict what it will do next. This, however, would not be an indication that application X-Y had suddenly become intelligent, but simply an indication that its creator had reached the limits of her own ability to foresee all the implications of what she has created. Indeed, we know that application X-Y would not have achieved intelligence, because we know how it works.

What produces the illusion of intelligence in computers can be expressed in a single word: branching. It is the ability of computers to branch, i.e., to execute different operations depending upon variations in input, that creates for the observer the impression that the computer is making a 'decision' based on its 'experience.' In fact it is doing nothing of the kind. The branching operation is coded by the programmer, who provides for various operations on the basis of variations in input anticipated (more or less abstractly) by the programmer. The computer never makes a 'decision'; it simply (dumbly) does the next thing it is programmed to do on the basis of the electronic situation it is in at any given moment. That situation can be made to vary over different iterations of the program (or routine) through variations in input; nevertheless, the computer does not make a 'decision.' It simply executes the next instruction it is caused to execute based upon its design and programming.

To realize this is to realize that AI position A is unsound. The computer has no real intelligence in the sense in which we have defined the term: it does not decide for a purpose. Nor is there any reason to suppose that it can acquire such intelligence by becoming more complex. Its complexity is strictly additive: it becomes more complicated as we program it to do more things under more conditions. Since we know why it does what it does under these conditions, there is no reason for us to be deluded into thinking that at some point it acquires something new called 'intelligence.'
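The point about branching can be made concrete with a trivial sketch. The scenario and names below (a sensor reading mapped to operations) are hypothetical examples, not from the essay; what the sketch shows is that every apparent 'decision' is a condition-to-operation mapping fixed by the programmer in advance.

```python
# Branching: the appearance of "deciding" is a programmer-specified mapping
# from anticipated input conditions to operations. The thresholds and labels
# are illustrative assumptions.

def respond(sensor_reading):
    # Each branch below was anticipated and coded by the programmer.
    if sensor_reading > 100:
        return "shut down"   # operation for one anticipated condition
    elif sensor_reading > 50:
        return "throttle"    # operation for another
    else:
        return "continue"    # the default operation

# The same input always yields the same operation; nothing is chosen.
print(respond(120), respond(60), respond(10))
```

However many branches we add, the program only ever executes the next instruction its inputs cause it to execute; complexity here is, as the essay says, strictly additive.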

But this leads us to AI position B.

III. Position B

Position B states that the computer is like the human mind because both actually lack intelligence. The argument for position B generally goes as follows: the human mind is reducible to the human brain. The human brain, in turn, may be thought of as a very sophisticated computer executing a super-complex version of self-modifying application X-Y. Who has written application X-Y for the human brain? There is no 'who.' Application X-Y is generated from human genes, which are the products of billions of years of genetic evolution, which blindly follows the Darwinian laws of natural selection.

In order to discuss position B effectively we need to consider something we have not yet considered: epistemology. The plausibility of position B rests upon a scientific-empiricist (positivist) epistemology. Basically, the scientific empiricist considers only what can be known objectively, so she looks at the behavior of other human beings and asks herself whether their behavior can perhaps be explained as the product of a very sophisticated, biological version of self-modifying application X-Y; and she answers 'yes.'

Philosophers, however (at least some of them), have long been aware of the limitations of this epistemology. Although it is extremely useful in allowing us to do empirical science, it has a major gap when used to generate theories as to the fundamental nature of reality. The gap is this: it is unable to take into account subjective experience. It is simply 'blind' to such experience. But we know, because we are ourselves 'subjects,' that such experience is real; hence it must be taken into account in any full treatment of reality, and certainly in any full treatment of the human mind.

So, taking this into account, let us ask whether the decision-making process of a human being is really anything like the branching process of application X-Y. When I make a decision as to what I am going to do, I generally do it in some such manner as this: I envision alternate possible futures, think about which actions will lead to which futures, and then choose my action on the basis of the future I want to actualize. Does application X-Y do anything of the kind? We know for a fact that it doesn't. Application X-Y simply does, at any given moment, the next thing it is programmed to do based upon the electronic conditions prevalent at that moment. It does not envision possible futures, it does not consider alternative actions, and it does not choose on the basis of its desires. So, in fact, there is a profound disanalogy between the operations of the human mind and the operations of application X-Y. Since position B is itself based upon an argument from analogy, this profound disanalogy defeats position B.

Now at this point in the argument the 'strong AI' advocate often jumps in to say something like: well, maybe application X-Y, if we could get into its 'head,' would be seen to be doing something like what the human mind does. Note that this is, in effect, a return to refuted position A. It is also an ad hoc argument from ignorance. It is an argument from ignorance insofar as it states that, since we don't know what the subjective state of application X-Y is, we might as well suppose it to be the same as the subjective experience of the human mind. But this, of course, is a silly argument. It offers no grounds for the supposition, and we have good reasons to reject it: everything that application X-Y does can be accounted for on the basis of how we know it operates, which is other than how we know, from our own subjective experience, that the human mind operates. It is an ad hoc argument insofar as it is proffered simply to save position B (which, ironically, it tries to do through a return to position A). There is no good reason, then, to take this objection seriously.

So, to summarize: position A fails because we know that computers in fact do not operate with 'intelligence.' We know this because we understand the principles of their operation, which are strictly causal. Position B fails because we know that the human mind does act with intelligence. We know this because we have subjective access to its operation, which we recognize to fit the definition of intelligence we first posited (and which, of course, we derived from our subjective experience of the mind's operation to begin with).

IV. Position C

This leads, then, to the following conclusion: computers are not like the human mind, because the former are not intelligent and the latter is.