Logic for Robotics: Defeasible Reasoning and Non-monotonicity


The Plan: I. Explain and argue for the role of non-monotonic logic in robotics. II. Briefly introduce some non-monotonic logics. III. Fun, speculative stuff.

Introduction A. Robots! Robots are (physical, virtual, generally electromechanical) agents designed to perform tasks on their own or with (ideally minimal) guidance (or, from another angle, to amplify the power of people to perform tasks by performing parts or subtasks for us autonomously or semi-autonomously). No knock-down arguments for a precise definition are forthcoming here, but autonomous or semi-autonomous agency seems to be a crucial feature of robothood.

B. Logic! Introduction The ability to reason (represent the world and infer things about it, e.g. changes in the local environment that result from actions and events) seems to be required for autonomous or semi-autonomous agency. If we are leaving a task to a robot, we want to be able to trust it to do its job with some degree of accuracy. Suppose we want an autonomous robot for painting houses. We need the robot to be able to (among other things) recognize when a house is the right color, or it might end up painting the house forever. More complicated tasks → more reasoning. More autonomy → more reasoning. Reasoning (representation and inference) is pretty much what logic is about.

Consequence I.1 Logic Language is the most notable format for reasoning. The pieces of language we use for reasoning: Arguments. Premises {true sentences or known sentences}. Conclusions {sentences that follow}. Conclusions are consequences of premises (they are derivable/provable from the premises, they are semantically or logically entailed by the premises, any interpretation that satisfies the premises satisfies the conclusions...). The line between premises and conclusions signifies the consequence relation; so do symbols like :. and ⊢, and words like "therefore", "it follows that...", "so...", etc. The consequence relation: the relation between premises and conclusions that makes it right to infer the conclusions from the premises. Truth preservation, warrant transmission, etc.

Validity The property we want our arguments to have; the feature we want a consequence relation to have. An argument is valid iff, in every possible situation where the premises are true, the conclusion is true. An argument is invalid if there is even one situation where the premises are true and the conclusion is false. Interpretations = possible situations. Gather up all the sentences in the world, make each either true or false (but not neither and not both): that's an interpretation. Then take the premises of your argument and the conclusion, and see if, in every interpretation, the conditions for validity hold. Inference Rules Schematic rules that are shaped like valid arguments: Modus Ponens, Modus Tollens...
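The brute-force test described above (check every interpretation for a situation where the premises are true and the conclusion false) can be sketched in a few lines. This is hypothetical illustration code, not from the slides; sentences are modeled as functions from an interpretation to a truth value.

```python
from itertools import product

# An interpretation assigns True/False to each atomic sentence. An argument
# is valid iff no interpretation makes all premises true and the conclusion false.
def find_counterexample(premises, conclusion, atoms):
    for values in product([True, False], repeat=len(atoms)):
        interp = dict(zip(atoms, values))
        if all(p(interp) for p in premises) and not conclusion(interp):
            return interp  # an invalidating situation
    return None  # valid: no counterexample exists

# Modus ponens: P, if P then Q :. Q
p = lambda i: i["P"]
p_implies_q = lambda i: (not i["P"]) or i["Q"]
q = lambda i: i["Q"]

print(find_counterexample([p, p_implies_q], q, ["P", "Q"]))  # None: valid
print(find_counterexample([p_implies_q, q], p, ["P", "Q"]))  # {'P': False, 'Q': True}
```

The second call (affirming the consequent) returns the one situation that invalidates the argument, which is exactly what the slide's definition of invalidity asks for.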

Formalization Using mathematical tools, especially algebra and set theory, to analyze arguments Symbols and structures made of symbols (expressions, formulas) represent parts of (sentences which are parts of) arguments Abstraction from content we can figure out what is shared by all good arguments, and we can classify argument forms (e.g., inference rules). Suppose: If Ralph is in the sun for a little while, then he will get sunburned. Suppose: Ralph has been in the sun for a little while. Conclude: He will get sunburned. Modus Ponens If P, then Q. P :. Q

Logic is Active Main idea: we can use the tools we have already developed/discovered to study new parts of reasoning. Propositional Logic (PL) Sentences (propositions) are represented by symbols {P, Q, R, ...}. Connectives are isolated for study as things especially relevant to good form: {not (~), and (^), or (v), if...then (→), if and only if (iff, ↔)}.

PL is great Complete, consistent, expressive, decidable. But there are some arguments that PL just cannot analyze, and those arguments are very common in natural language. All intelligent agents deserve rights, all robots are intelligent agents, Foodmotron is a robot :. Foodmotron is an intelligent agent and Foodmotron deserves rights. In PL this can only be analyzed as: P and Q and R :. S. First Order Logic (FOL) can handle the arguments PL can't: Quantification and Predication. All(x), There-is(x), P(x), Q(x), and all the connectives.

Analyzing the Foodmotron argument: Intelligent Agent = I, Deserves rights = D*, Robot = R.
All(x)(Ix → Dx), All(x)(Rx → Ix), R(f) :. I(f), D(f), I(f) ^ D(f)
1. (x)(Ix → Dx)
2. (x)(Rx → Ix)
3. Rf
4. If → Df (Universal Instantiation, 1)
5. Rf → If (Universal Instantiation, 2)
6. If (Modus Ponens, 3, 5)
7. Df (Modus Ponens, 4, 6)
C. If ^ Df (Conjunction Introduction, 6, 7)
*It would be more precise to analyze the predicate "deserves rights" as a binary relation between x and some y such that y is a right, i.e., There-is(y)(y is a right ^ D(x, y)), but this is just an example, and it's convenient.
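The derivation above can be mechanized as simple forward chaining: instantiate each universal rule on every known individual and apply modus ponens until nothing new follows. This is a hypothetical sketch (the predicate letters and the individual name are the slide's), not a real theorem prover.

```python
# Rules read: if the antecedent predicate holds of x, so does the consequent.
rules = [("R", "I"),  # all robots are intelligent agents
         ("I", "D")]  # all intelligent agents deserve rights

facts = {("R", "foodmotron")}  # Foodmotron is a robot

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        for pred, individual in list(facts):
            # Universal Instantiation + Modus Ponens in one step
            if pred == antecedent and (consequent, individual) not in facts:
                facts.add((consequent, individual))
                changed = True

print(("I", "foodmotron") in facts)  # True: Foodmotron is an intelligent agent
print(("D", "foodmotron") in facts)  # True: Foodmotron deserves rights
```

Note that this loop only ever adds conclusions; it is a small concrete picture of the monotonicity discussed below.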

C. Robot Logic! Bridge Logic can be used for specifying, at a high or abstract level, the reasoning that we want robots to be able to do (and consequently, it characterizes the capabilities we need to design and program robots with). What kind of logic do we need to describe (and prescribe) the reasoning that robots do? First-Order Logic (FOL) seems the obvious candidate: It's deductively powerful. It's expressive. It's complete. It (arguably) characterizes the best reasoning that people do (mathematical theorem proving).

C. Robot Logic The Set-up. Unfortunately, FOL and its consequence relation have some features that make it completely unsuitable for the task of characterizing a lot of interesting, good reasoning that humans and robots need to do. Keep in mind: Logic is Active Consequence (the relation between premises and conclusions) and Validity. We can have different kinds of consequence relations that still have validity. Valid arguments in FOL have problematic features and valid arguments in other logics may not have those features.

Monotonicity Structural Rules The consequence relation in a logic can be abstractly characterized with structural rules.
Supraclassicality: if φ follows classically from Γ, then Γ ⊢ φ;
Reflexivity: if φ ∈ Γ, then Γ ⊢ φ;
Cut: if Γ ⊢ φ and Γ, φ ⊢ ψ, then Γ ⊢ ψ;
Monotony/Monotonicity: if Γ ⊢ φ and Γ ⊆ Δ, then Δ ⊢ φ.
If a formula φ (Phi) is a consequence of a set of assumed or premise formulas Γ (Gamma), and Γ is a subset of a new, larger set of formulas Δ (Delta), then φ is a consequence of Δ. No matter what you add to your original set of premises, you can always validly infer your original conclusions.

Monotonicity is a property of the consequence relation in FOL (and all classical logics). Monotonicity: If "T1, therefore P" is valid, no addition to T1 can make the inference to P invalid. Conclusions are never invalidated as premises increase. Conclusions are cumulative: no take-backs. Extending sets of premises/assumptions can never reduce the set of derivable/valid/entailed/provable conclusions (consequences). If a formula P is a consequence of a set of formulas T1, then for any extension of T1 with another set T2, P is a consequence of the union of T1 and T2.

Think about it like this: Suppose that a formula P validly follows from a set of formulas T1, i.e., the argument "T1, therefore P" is valid. To extend T1, we add a non-empty set of new formulas (at least one new formula) to T1. Let's call this non-empty set T2. The resulting set (at least one formula larger than T1) is the union of T1 and T2. T2 is either consistent with T1 (does not contradict a formula in T1 or a consequence of T1) or inconsistent with T1.

If T2 is consistent with T1, then we can ignore T2 and still derive P as a consequence of T1. Since T2 doesn't contradict anything in T1 or P, in any situation where T1 was true before adding T2, T1 is still true, and in any situation where P was true before adding T2, P is still true. So, if "T1, therefore P" was valid before, it's valid now!

If T2 is inconsistent with T1, we can still derive P. Generally, inconsistency is to be avoided, but in FOL, if T2 is inconsistent with T1, then we can prove anything (including P) from T1 together with T2. In classical logics, including FOL, any and every arbitrary sentence is a consequence of inconsistency; (arguably) that's what makes inconsistency bad!

In either of the two possible cases, we can never retract P.

A set of formulas is inconsistent if it contains a contradiction or entails a contradiction. We'll look at both cases.

Contains a contradiction: {P, ~P}
1. P (premise)
2. P v Q (Disjunction Introduction, 1)
3. ~P (premise)
4. Q (Disjunctive Syllogism, 2, 3)
For any arbitrary Q at all!

Entails a contradiction: {...P...}, a set that entails ~P
1. ~P (derived from the set)
2. P (in the set)
3. P v Q (Disjunction Introduction, 2)
4. ~P (repetition, 1)
5. Q (Disjunctive Syllogism, 3, 4)
For any arbitrary Q at all!
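Both points (monotonicity, and explosion from inconsistency) can be checked directly with a brute-force entailment test over interpretations. This is hypothetical illustration code, not from the slides.

```python
from itertools import product

# Classical entailment by brute force: no interpretation may make every
# premise true while making the conclusion false.
def entails(premises, conclusion, atoms):
    for values in product([True, False], repeat=len(atoms)):
        i = dict(zip(atoms, values))
        if all(p(i) for p in premises) and not conclusion(i):
            return False
    return True

atoms = ["P", "Q", "R"]
p = lambda i: i["P"]
p_implies_q = lambda i: (not i["P"]) or i["Q"]
q = lambda i: i["Q"]
r = lambda i: i["R"]
not_q = lambda i: not i["Q"]

t1 = [p, p_implies_q]
print(entails(t1, q, atoms))            # True: modus ponens
print(entails(t1 + [r], q, atoms))      # True: adding a premise never retracts Q
print(entails(t1 + [not_q], q, atoms))  # True: even adding ~Q cannot retract Q...
print(entails(t1 + [not_q], r, atoms))  # True: ...because the inconsistent set entails anything
```

The last two lines show the explosive case: once no interpretation satisfies the premises, every conclusion (here even the unrelated R) is vacuously entailed.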

Why monotonicity is problematic (for robots and for us): Argument 1: Defeasible Reasoning → Non-monotonicity. Argument 2: Intelligent Agency → Defeasible Reasoning.

Defeasible reasoning = reasoning with a consequence relation that can be defeated: when a defeasible consequence relation holds between premises and conclusions, the conclusions can be invalidated when we learn something new. Some conclusions make sense until we acquire new information. It's clear that defeasible reasoning is nonmonotonic. Monotonic = no matter what new premises you add, your original conclusion is still valid. Defeasible = adding new premises can make your original conclusion invalid.

Most human reasoning is defeasible reasoning. Defeasible Arguments:
T1 = {If x is A, there is n probability that x is B; x is A}
{Most As are B; x is A}
{Typically, normally, generically we can safely assume As are B; x is A}
---------------------------------------------------------------------------
x is B*
It can be the case that x is not-B even though every premise in T1 is true. Improbable things happen; x may not be like most other As; x may be atypical, abnormal, not generic, or just a surprise. These arguments are good, but in what way? Default Assumptions make these arguments work.
*Distinct from inferring that there is n probability that x is B, which does follow monotonically.

Defaults
Generics: Birds (most birds, birds in the context of this sentence) fly.
Prototypes: Prototypical birds fly. Psychological?
Typicality, Normality: Birds fly if they are not atypical or abnormal.
Probability: N% of birds fly.
No-risk or Safe Guess: It's safe to assume birds fly.
Consistency: In the absence of contradictory evidence, birds fly.
Autoepistemology: If birds didn't fly, I'd know it, and I don't know that they don't fly.

Abduction T1 = {P is true, Q would explain P} :. Q is true. Q could be false and all premises in T1 can still be true! Belief Revision {P, P → Q} entails Q. Suppose we then learn ~Q. Not all of {P, P → Q} can stand: consistency requires ~P or ~(P → Q), i.e., we must give up P or give up P → Q.
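The belief-revision step (on learning ~Q, drop just enough of {P, P → Q} to restore consistency) can be sketched as a search for maximal consistent subsets of the belief base. This is a toy illustration under assumed names, not the AGM machinery itself.

```python
from itertools import product, combinations

# A set of sentences is consistent iff some interpretation satisfies them all.
def consistent(sentences, atoms):
    return any(all(s(dict(zip(atoms, v))) for s in sentences)
               for v in product([True, False], repeat=len(atoms)))

atoms = ["P", "Q"]
p = ("P", lambda i: i["P"])
p_implies_q = ("P->Q", lambda i: (not i["P"]) or i["Q"])
not_q = lambda i: not i["Q"]  # the new information

base = [p, p_implies_q]
# Try subsets of the base from largest to smallest; keep those consistent with ~Q.
candidates = [subset for size in range(len(base), -1, -1)
              for subset in combinations(base, size)
              if consistent([f for _, f in subset] + [not_q], atoms)]
best = candidates[0]  # a maximal surviving subset
print([name for name, _ in best])  # ['P']: exactly one premise had to go
```

The whole base {P, P → Q} plus ~Q is unsatisfiable, so the best we can keep is one premise, exactly the slide's point that we must retract P or P → Q.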

Why is this defeasible reasoning stuff relevant to robotics? We want to design intelligent agents for performing tasks. More complicated tasks → more reasoning. More exactly, the less able a robot is to represent and infer about the world it operates in, the less useful it is. A robot unable to perform complex reasoning would only be able to complete tasks in a very simple world. The actual world is very much not simple. More autonomy → more reasoning. We want to minimize our input to the robot and let it make its own decisions. That's not happening without some smarts. We can control the reasoning a robot does by using a logic or a consequence relation to control the conclusions the robot draws from the things we let it know.

Designing an intelligent agent: Intelligence, not omniscience. P. Airplanes fly; p is an airplane. C. p flies. Unless its wings are full of holes, its engine is wrecked, it's out of fuel, ..., it's a life-size model, it's made of cake, ..., it's on the moon, ... The premise "Airplanes fly" is generic; there are massive numbers of exceptions to the rule. It's hard to represent this argument in FOL: either we treat "Airplanes fly" as a (false) universal claim or the robot would have to list all of the possible exceptions as part of its premises and confirm them before it could infer C. All(x)(Ax → Fx): but that's just false, so C can't be concluded! All(x)((Ax ^ ~Hx ^ ~Ex ^ ~Gx ^ ... ^ ~Mx) → Fx): almost everything about the airplane has to be confirmed before C can be concluded.

The more complex the world is, the more exceptions there are to generic rules. It's just infeasible to explicitly program in all exceptions to generic statements using FOL and then expect the robot to confirm (prove) that every exception does not hold for every inference the robot might make about particular objects in a complex world. Without defaults, there are inferences the robot just can't make without vast amounts of information. Avoiding Contradiction: If a robot uses monotonic reasoning and can't guarantee consistency by explicitly representing all exceptions, then the only way it can avoid ending up with contradictory beliefs (or "beliefs") is to reason defeasibly. Suppose the robot infers P from T1. Suppose it finds out that P is false. Then what? It can't take back P. It can never correct its error.

The Frame Problem If the robot is to perform a complex task, it will have to keep track of the effects of its actions, and it will have to represent the invariance of the properties of unaffected things. Most properties don't change without being acted on: if the robot performs an action that changes an object's color, the action will usually not change the object's position. It will also not change the location of other objects. It will also not... Of course, there are exceptions. The list of invariances is as large as the world is complicated; we don't want to have to explicitly represent every unchanging fact. The fact that things that aren't explicitly changed stay the same is a default assumption: every action leaves a fact unchanged unless it is possible to infer that the action changes it.
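The default persistence assumption can be made concrete with a toy STRIPS-style state update (a hypothetical sketch, assuming a dictionary of fluents): an action lists only what it changes, and everything else carries over by default, with no frame axioms needed.

```python
# state: dict mapping fluents to values; effects: only the facts the action changes.
def apply_action(state, effects):
    new_state = dict(state)   # default assumption: every fact persists...
    new_state.update(effects) # ...unless the action explicitly changes it
    return new_state

state = {"color(block)": "red", "pos(block)": "table", "pos(cup)": "shelf"}
state = apply_action(state, {"color(block)": "blue"})  # paint the block

print(state["color(block)"])  # 'blue': changed by the action
print(state["pos(block)"])    # 'table': persisted by default
print(state["pos(cup)"])      # 'shelf': persisted by default
```

The point is the asymmetry: the action description stays small no matter how large the state grows, because unchanged facts are inferred by default rather than listed.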

II. Non-monotonic Logic Non-monotonic logic is the project of using logical tools we already have at our disposal to formalize defeasible reasoning. We want argument forms that allow us to figure out when we have valid arguments that also allow us to take back conclusions. We want our inferences to be good, even though we reserve the right to take them back.

Non-monotonic Logics, Formalisms
Default Rules/Logic
Closed World Assumption
Autoepistemic Logic
Preferred Models, Circumscription
Inheritance Networks

Default Rules If {A1, ..., An} and M{B1, ..., Bm}, then conclude C. Normal Default Rule = Consistency Test: If {A1, ..., An} and it is consistent to assume C (~C is not derivable), then conclude C.

A(x) : M(B1(x), ..., Bm(x)) / C(x)

A(x) = the prerequisite. C(x) = the consequent. M(B1(x), ..., Bm(x)) = the justification. M = "It is consistent to assume that...". B1(x), ..., Bm(x) = some set of things it is consistent to assume. If it's consistent to assume C, then go ahead and conclude C.

Default Logic
P. x is an A : M(x is a B) / x is a B
P. x is an airplane : M(x flies)
C. x flies.
We can infer C defeasibly from P because the default rule says that, given our premises and the satisfaction of the consistency condition, we can conclude C. If we add new premises that make it inconsistent to conclude C, the consistency condition no longer holds. x is an A = premise; M(x is a B) = condition of the rule that can fail to hold if we get new information. We retain the validity of the original argument (any situation where x is an A and it's consistent to assume x is a B is a situation where x is a B), and we can take back our conclusions.
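A minimal sketch of a normal default rule, under a deliberately crude consistency test (membership of an explicit negated literal, not full derivability): conclude the consequent from the prerequisite unless the negation is already known. The fact encoding and function name are assumptions for illustration, not Reiter's formalism.

```python
# Normal default A(x) : M C(x) / C(x): conclude C(x) from A(x)
# unless ~C(x) is already in the knowledge base.
def apply_default(facts, prerequisite, consequent, x):
    if (prerequisite, x) in facts and ("not " + consequent, x) not in facts:
        return facts | {(consequent, x)}  # consistency test passed
    return facts  # default blocked

facts = {("airplane", "a1")}
facts = apply_default(facts, "airplane", "flies", "a1")
print(("flies", "a1") in facts)  # True: consistent to assume a1 flies

# New information defeats the default: the same rule, with an exception known.
facts2 = {("airplane", "a2"), ("not flies", "a2")}  # wings full of holes
facts2 = apply_default(facts2, "airplane", "flies", "a2")
print(("flies", "a2") in facts2)  # False: the consistency test fails
```

Notice how this inverts the FOL picture from the previous slide: nothing about the airplane has to be confirmed up front; the exception only matters if it actually turns up in the knowledge base.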

Systems For Defaults Closed World Assumption: : M~P / ~P. If it's not inconsistent to infer ~P, go ahead and infer it. Autoepistemic Logic (Modal Logic): The extension of a set of premises S = {P : P follows in system K45 from the union of S and the positive and negative introspective closures of the subset S0 of S}. Q follows from P defeasibly if Q follows from the union of P plus all the things I know and all the things I don't know. I could always learn something new that invalidates Q. P : "If ~Q were the case, I'd know it, and I don't know it" / Q.
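The Closed World Assumption is essentially negation as failure: what cannot be proved is assumed false. A hypothetical sketch with an assumed flight database shows both the assumption and its defeat by new information.

```python
# Closed World Assumption: a flight exists only if it is recorded;
# failure to prove flight(a, b) licenses inferring ~flight(a, b).
known_flights = {("nyc", "boston"), ("boston", "chicago")}

def flight(a, b):
    return (a, b) in known_flights

def no_flight(a, b):
    # negation as failure: ~flight(a, b) follows from failing to prove flight(a, b)
    return not flight(a, b)

print(no_flight("nyc", "chicago"))  # True under CWA - but only defeasibly

# Learning a new fact defeats the earlier conclusion: nonmonotonicity in action.
known_flights.add(("nyc", "chicago"))
print(no_flight("nyc", "chicago"))  # False now: the inference is retracted
```

The second query returning a different answer after the database grows is exactly the take-back behavior that monotonic FOL forbids.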

More Systems for Defaults Preferred Models: choose the smallest set of interpretations that makes the premises true, then see if those interpretations make the conclusion true. This results in assuming that whatever we are not assuming (or proving) is false. Technically very difficult: Second-Order Logic is required (at least for Circumscription).

Inheritance Networks Is-A: A → B = "As are Bs, generally". [Image taken from the Stanford Encyclopedia of Philosophy]
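A toy inheritance network can be walked in a few lines: individuals inherit defaults along Is-A links, and a default attached to a more specific node defeats one attached to a more general node. The classic bird/penguin example (an assumption here, not from the slide's figure) makes the point.

```python
# Is-A links and defeasible "flies" defaults attached to kinds.
is_a = {"tweety": "penguin", "chirpy": "bird", "penguin": "bird"}
flies_default = {"bird": True, "penguin": False}  # penguins are exceptional birds

def flies(individual):
    kind = is_a.get(individual, individual)
    while kind is not None:
        if kind in flies_default:
            return flies_default[kind]  # most specific applicable default wins
        kind = is_a.get(kind)           # otherwise climb the Is-A chain
    return None  # no applicable default

print(flies("chirpy"))  # True: inherits the generic bird default
print(flies("tweety"))  # False: the penguin link is more specific than the bird link
```

The specificity ordering is doing the nonmonotonic work: adding the link "tweety Is-A penguin" to a network where Tweety was only a bird would retract the conclusion that Tweety flies.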

III. The Interesting Stuff Defeasible reasoning is a key component of human epistemic/cognitive abilities. Suppose some nonmonotonic logic or system really is the theory of human reasoning. If we imbue robots with programs that instantiate these logics, we've opened up the possibility that robots can really possess our abilities. Intelligent Agency → Rights? Moral Status? Robot rights? A function of the moral importance of the tasks they perform? They should not be interfered with in their performance of morally important tasks. Suppose a robot is so complex that it performs a wide and varied assortment of morally important tasks: does this enmesh it in a network of moral rights and obligations?

III. More interesting stuff There is more than one approach to defeasible reasoning:
Logical / Epistemological
Pollock's System of Defeasible Reasoning, the OSCAR Project
The Web of Belief and the AGM Theory of Belief Revision

III. Even More... Logic as robotic enhancement of thought. Artificial Intelligences, Theorem Proving programs, Automated Proof procedures... robots? Extended Cognition Thesis. Regimentation. Virtual Machines. Logical Formalization, Deductive Procedures. Turning complicated reasoning problems into smaller, more symbolic problems that can be solved with simple, mechanical routines. Symbolize, check for valid argument form, if not found, compile loopholes/counterexamples. Little programs or AIs (robots?) in your brain.

Things to Read
Reiter, R., 1980, "A Logic for Default Reasoning", Artificial Intelligence, 13: 81-132.
McDermott, D. and Doyle, J., 1980, "Non-Monotonic Logic I", Artificial Intelligence, 13: 41-72.
McCarthy, J., 1980, "Circumscription: A Form of Non-Monotonic Reasoning", Artificial Intelligence, 13: 27-39.
Moore, R. C., 1993, "Autoepistemic Logic Revisited", Artificial Intelligence, 59(1-2): 27-30.
Stanford Encyclopedia of Philosophy articles: Non-monotonic Logic; Defeasible Reasoning; Logic and Artificial Intelligence; Classical Logic; The Frame Problem.
Quine, W. V. O. and Ullian, J. S., 1982, The Web of Belief, New York: Random House.
Alchourrón, C. and Makinson, D., 1982, "On the Logic of Theory Change: Contraction Functions and Their Associated Revision Functions", Theoria, 48: 14-37.
Pollock, J. L., 1987, "Defeasible Reasoning", Cognitive Science, 11: 481-518.
Pollock, J. L., 1995, Cognitive Carpentry, Cambridge, Mass.: MIT Press.