From: AAAI-84 Proceedings. Copyright 1984, AAAI (www.aaai.org). All rights reserved.

A Logic of Implicit and Explicit Belief

Hector J. Levesque
Fairchild Laboratory for Artificial Intelligence Research
4001 Miranda Avenue
Palo Alto, California 94304

ABSTRACT

As part of an on-going project to understand the foundations of Knowledge Representation, we are attempting to characterize a kind of belief that forms a more appropriate basis for Knowledge Representation systems than that captured by the usual possible-world formalizations begun by Hintikka. In this paper, we point out deficiencies in current semantic treatments of knowledge and belief (including recent syntactic approaches) and suggest a new analysis in the form of a logic that avoids these shortcomings and is also more viable computationally.

The kind of belief that underlies terms in AI such as Knowledge Representation or knowledge base has never been adequately characterized.¹ As we discuss below, the major existing formal model of belief (originated by Hintikka in [1]) requires the beliefs of an agent to be closed under logical consequence, and thus can place unrealistic computational demands on his reasoning abilities. Here we describe and formalize a weaker sense of belief that is much more attractive computationally and forms a more plausible foundation for the service to be provided by a Knowledge Representation utility. This formalization is done in the context of a logic of belief that has a truth-based semantic theory (like the possible-world approach but unlike its recent syntactic competitors). This logic is also shown to have connections to relevance logic and, in a certain sense, to subsume it.

¹Because what is represented in a knowledge base is typically not required to be true, to be consistent with most philosophers and computer scientists, we are calling the attitude involved here belief rather than knowledge.

1. Logical Omniscience & Possible Worlds

A recurring problem in the modelling of belief or knowledge is what has been called in [2] logical omniscience. In a nutshell, all formalizations of belief based on a possible-world semantics suffer from the fact that at any given point, the set of sentences considered to be believed is closed under logical consequence. It is simply built into the logic that if α is believed and α logically implies β, then β is believed as well. Apart from the fact that this does not allow for a resource-limited agent who might fail to draw any connection between α and β, this has at least three other serious drawbacks from a modelling point of view:

1. Every valid sentence must be believed.
2. If two sentences are logically equivalent, then one must be believed if the other is.
3. If a sentence and its negation are both believed, then so must every sentence.

Any one of these might cause one to reject a possible-world formalization as unintuitive at best and completely unrealistic at worst.

There is, however, a much more reasonable way of interpreting the possible-world characterization of belief. As discussed in [3], instead of taking logical omniscience as an idealization (or heuristic) in the modelling of the beliefs of an agent, we can understand it to be dealing realistically with a different though related concept, namely, what is implicit in what an agent believes. For example, if an agent imagines the world to be one where α is true and if α logically implies β, then (whether or not he realizes it) he imagines the world to be one where β also happens to be true.
In other words, if the world the agent believes in satisfies α, then it must also satisfy β. Under this interpretation, we examine not what an agent believes directly, but what the world would be like if what he believed were true. There are often very good reasons for examining the consequences of what an agent believes even if the agent himself has not yet appreciated those consequences.

If the proper understanding of a possible-world semantics is that it deals not with what is believed, but with what is true given what is believed, what then is an appropriate semantics for dealing with the actual beliefs of an agent? Obviously, we need a concept other than the one formalized by possible worlds. If we use the terminology that a sentence is explicitly believed when it is actively held to be true by an agent and implicitly believed when it follows from what is believed, then what we want is a formal logical language that includes two operators, B and L: Bα will be true when α is explicitly believed, while Lα will be true when α is implicit in what is believed. While a possible-world semantics (like that of [1] or [4]) is appropriate for dealing with the latter concept, the goal of this paper is to present one for the former.
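As a rough, unofficial illustration of the closure property (not part of the original paper; the two-atom language and all identifiers are invented for the example), the following Python sketch models beliefs as the set of worlds an agent considers possible. A sentence is then implicitly believed exactly when it is true in all of those worlds, so a tautology such as (q ∨ ¬q) comes out believed no matter what the agent actually holds true.

```python
from itertools import product

# A possible world is a truth assignment to the atoms; an agent's beliefs
# are modelled by the set of worlds the agent considers possible.
ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def holds(sentence, world):
    """Classical truth of a sentence (tuples over 'atom'/'not'/'and'/'or')."""
    op = sentence[0]
    if op == "atom":
        return world[sentence[1]]
    if op == "not":
        return not holds(sentence[1], world)
    if op == "and":
        return holds(sentence[1], world) and holds(sentence[2], world)
    return holds(sentence[1], world) or holds(sentence[2], world)  # "or"

def implicitly_believed(sentence, accessible):
    """L(sentence): true in every world the agent considers possible."""
    return all(holds(sentence, w) for w in accessible)

# An agent who actively believes only that p is true:
belief_worlds = [w for w in WORLDS if w["p"]]

p, q = ("atom", "p"), ("atom", "q")
print(implicitly_believed(p, belief_worlds))                      # True
print(implicitly_believed(("or", q, ("not", q)), belief_worlds))  # True: the
# tautology (q v ~q) is "believed" for free, which is drawback 1 above.
```

Under the reinterpretation just described, this computation is exactly the implicit-belief operator L of the sections that follow; the open question is what a semantics for B, explicit belief, should look like instead.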

2. The Syntactic Approach

When talking about what an agent actually believes, we want to be able to distinguish between believing only α and (α ⊃ β) on the one hand, and believing α, (α ⊃ β) and β, on the other. While the picture of the world is the same in both cases, only the second involves realizing that β is true. This is somewhat of a problem semantically, since the two sets of beliefs are true in precisely the same possible worlds and so, in some sense, semantically indistinguishable. This might suggest that any realistic semantics for belief will have to include (something isomorphic to) a set of sentences to distinguish between the two belief sets above. The usual way to interpret a sentence like Lα in a standard Kripke framework is to have a model structure that contains a set of possible worlds, an accessibility relation and other things. It appears that to interpret a sentence like Bα, a model structure will have to contain an explicit set of sentences. This is indeed what happens in the formalizations of belief of [5] and [6] that share our goal of avoiding logical omniscience. A slightly more sophisticated approach is that of [7], where the semantic structure contains only an initial set of sentences (representing a base set of beliefs) and a set of logically sound deductive rules for obtaining new derived beliefs. Logical omniscience is avoided there by allowing the deductive rules to be logically incomplete. With or without deductive rules, I will refer to this approach to modelling belief as the syntactic approach since syntactic entities have to be included within the semantic structures.

Apart from this perhaps ill-advised mixture of syntax and semantics, the syntactic approach suffers from a serious defect that is the opposite of the problem with possible worlds. A possible-world semantics is, in some sense, too coarse-grained to model belief in that it cannot distinguish belief sets that logically imply the same set of sentences. The syntactic approach, on the other hand, is too fine-grained in that it considers any two sets of sentences as distinct semantic entities and, consequently, different belief sets.

To see why this is a problem, consider, for example, the disjunction of α and β. There is no reason to suppose that

B(α ∨ β) ≡ B(β ∨ α)

would be valid given a syntactic understanding of B, since (α ∨ β) may be in the belief set while (β ∨ α) may not.² The trouble with this is that if we consider intuitively what

It is believed that either α or β is true.

is saying, the order seems to be completely irrelevant. It is almost an accident of lexical notation that we had to choose one of the disjuncts to go first. Yet, the syntactic approach makes the left to right order of disjuncts semantically significant in that we can believe one ordering but fail to believe the other.

The obvious counter to this is that the logic of the syntactic approach has to be embellished to avoid these spurious syntactic distinctions. For example, we might insist as part of the semantics that to be well-formed, any belief set containing (α ∨ β) must also contain (β ∨ α) (or, for Konolige, the obvious deduction rule must be present). The trouble with this kind of constraint is that it is semantically unmotivated. For example, should we also insist that any set containing ¬¬α must also contain α? Should every belief set containing α and β also contain (α ∧ β)? Should every belief set contain the "obvious" tautologies such as (α ⊃ α)? Where do we stop? Clearly, it would be preferable to have a semantics where restrictions such as these follow from the meaning of Bα and not the other way around. In other words, we want a semantics (like that of possible worlds) that is based on some concept of truth rather than on a collection of ad hoc restrictions to sets of sentences. Ideally, moreover, the granularity of the semantics should lie somewhere between the syntactic and the possible-world approaches so that different sets of sentences can represent the same beliefs without requiring that all logically equivalent sets do so. We now show that there is a reasonably intuitive semantics for belief that has these properties.

²In Konolige's system, one disjunction may be deducible while the other may not.

3. Situations

On closer examination, the reason the possible-world approach to belief or knowledge leads to logical omniscience is that beliefs are characterized completely by a set of possible worlds (namely, those that are accessible from a given possible world). Intuitively, these possible worlds are to be thought of as the full range of what the agent thinks the world might be like. If he only believes that p is true, the set of worlds will be all those where p is true: some, for example, where q is true, others, where q is false. However, because sentences which are tautologies will also be true in all these possible worlds, the agent is thought of as believing them just as if they were among his active beliefs. In terms of the possible worlds, there is no way to distinguish p from these tautologies.

One way to avoid all these tautologies is to make this notion of what an agent thinks the world is like more relevant to what he actually believes. This can be done by replacing the possible worlds by a different kind of semantic entity that does not necessarily deal with the truth of all sentences. In particular, sentences not relevant to what an agent actually believes (including some tautologies) need not get a truth value at all. Following [8] (but not too closely), we will call this sort of partial possible world a situation. Roughly speaking, a situation may support the truth of some sentences and the falsity of others, but may fail to deal with other sentences at all.

For example, consider the situation of me sitting at my terminal at work. We might say that this situation supports the fact that I'm at work, that somebody is at my terminal, that there is either a terminal or a book at my desk, and so on. On the other hand, it does not support the contention that my wife is at home, that she is not out shopping, or even that she is at home or not at home. Although the latter is certainly true, me sitting at my terminal does not deal with it one way or another.

One way of thinking about situations is as generalizations of possible worlds where not every sentence in a language is required to have a truth value. Conversely, we can think of possible worlds as those limiting cases of situations where every sentence does have a truth value. Indeed, the concept of a possible world being compatible with a situation is intuitively clear: every sentence whose truth is supported by the situation should come out true in that possible world and every sentence whose falsity is supported should come out false. Again drawing from [8], we will also allow for incoherent situations with which no possible world is compatible. These are situations that (at least seem to) support both the truth and falsity of some sentence. From the point of view of modelling belief, these are very useful since they will allow an agent to have an incoherent picture of the world.
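Before the formal treatment in the next section, here is a toy rendering of partiality and incoherence (again not from the paper; the dictionary representation is an assumption for illustration): each atom is mapped to a subset of {t, f}, so a situation can support a sentence's truth, its falsity, both, or neither.

```python
# Each atom is mapped to a subset of {"t", "f"}: truth supported, falsity
# supported, both (an incoherent situation), or neither (the situation
# simply does not deal with that atom).
terminal_situation = {
    "at_work":      {"t"},   # supported as true
    "wife_at_home": set(),   # not dealt with one way or the other
}
incoherent_situation = {"p": {"t", "f"}}  # supports both truth and falsity

def supports_truth(situation, atom):
    return "t" in situation.get(atom, set())

def supports_falsity(situation, atom):
    return "f" in situation.get(atom, set())

# 'wife_at_home or not wife_at_home' is true in every possible world, yet this
# situation supports neither disjunct, so it does not support the disjunction:
print(supports_truth(terminal_situation, "wife_at_home"))    # False
print(supports_falsity(terminal_situation, "wife_at_home"))  # False
```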

The trick, then, that underlies the logic of belief to follow is to identify explicit belief with a set of situations rather than possible worlds. Before examining the formal details, there is one point to make. Traditional logics of knowledge and belief have dealt not only with world knowledge but also with meta-knowledge, that is, knowledge about knowledge. To be able to deal with this in our case is somewhat of a problem since we would have to deal with a whole raft of questions about what is believed about what is explicitly or implicitly believed. For example, even without assuming that everything believed is true, it is not clear whether or not B(Lα ⊃ α) should be valid. For reasons given in [3], L(Lα ⊃ α) should be valid even if belief does not, in general, imply truth. Instead of trying to settle all of these questions here and now, we will ignore them completely. The language below will simply not contain any sentences where a B or an L appears within the scope of another. This will simplify the semantics immensely while still illustrating how the two concepts can co-exist naturally.

4. A Formal Semantics

The language we are considering (call it L) is formed in the obvious way from a set of atomic sentences P using the standard connectives ∨, ∧, and ¬ for disjunction, conjunction, and negation respectively, and two unary connectives B and L. Only regular propositional sentences (without a B or an L) can occur within the scope of these last two connectives. We assume that other connectives such as ⊃ and ≡ can be understood in terms of the original ones.³

³We may eventually want a special implication operator, especially for sentences that are objects of belief.

Sentences of L are interpreted semantically in terms of a model structure (S, B, T, F), where S is a set, B is a subset of S, and both T and F are functions from P (the atomic sentences) to subsets of S. Intuitively, S is the set of all situations, with B being those situations that could be the actual one according to what is believed. For any atomic sentence p, T(p) are the situations that support the truth of p and F(p) are those that support the falsity of p. To deal with the possible worlds compatible with a situation in a model structure, we define W by the following:

W(s) = { s′ ∈ S | for every p ∈ P, (a) s′ is a member of exactly one of T(p) and F(p), (b) if s is a member of T(p), then so is s′, and (c) if s is a member of F(p), then so is s′ }.

The first condition above guarantees that s′ will be a possible world, while the last two guarantee compatibility. Also, for any subset S′ of S, we will let W(S′) mean the union of all W(s) for every s in S′.

Given a model structure (S, B, T, F), we can define the support relations ⊨T and ⊨F holding between situations and sentences of L. Intuitively, s ⊨T α when s supports the truth of α, and s ⊨F α when s supports the falsity of α. More formally, the relations ⊨T, ⊨F ⊆ S × L are defined by

1. s ⊨T p iff s ∈ T(p).
   s ⊨F p iff s ∈ F(p).
2. s ⊨T ¬α iff s ⊨F α.
   s ⊨F ¬α iff s ⊨T α.
3. s ⊨T (α ∧ β) iff s ⊨T α and s ⊨T β.
   s ⊨F (α ∧ β) iff s ⊨F α or s ⊨F β.
4. s ⊨T (α ∨ β) iff s ⊨T α or s ⊨T β.
   s ⊨F (α ∨ β) iff s ⊨F α and s ⊨F β.
5. s ⊨T Bα iff for every s′ in B, s′ ⊨T α.
   s ⊨F Bα iff s ⊭T Bα.
6. s ⊨T Lα iff for every s′ in W(B), s′ ⊨T α.
   s ⊨F Lα iff s ⊭T Lα.

If s is an element of W(S) (i.e., s is a possible world), then if s ⊨T α, we say that α is true at s, and otherwise that α is false at s. Thus, as is to be expected, a sentence is true iff it is not false iff its negation is false. Finally, we say that α is valid and write ⊨ α provided that for any model structure (S, B, T, F) and any s in W(S), α is true at s. The satisfiability of a sentence (or of a set of sentences) can be defined analogously. This completes the semantics of L.
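The following executable sketch (an unofficial reconstruction, restricted to a two-atom language; all identifiers are invented) mirrors these definitions directly: situations are sets of signed atoms, W(s) picks out the compatible possible worlds, and the two support relations are computed by one recursive procedure carrying a sign.

```python
from itertools import product

# Situations are frozensets of signed atoms; for example,
# frozenset({("p", "t"), ("p", "f")}) is an incoherent situation for p.
ATOMS = ("p", "q")
SIGNS = [(), ("t",), ("f",), ("t", "f")]

# S: every way of assigning a subset of {t, f} to each atom.
S = [frozenset((a, s) for a, ss in zip(ATOMS, combo) for s in ss)
     for combo in product(SIGNS, repeat=len(ATOMS))]

def worlds_compatible_with(s):
    """W(s): complete, coherent situations that support the truth of whatever
    s supports as true, and likewise for falsity."""
    return [w for w in S
            if all((((a, "t") in w) != ((a, "f") in w)) for a in ATOMS)  # world
            and all(lit in w for lit in s)]                              # compatible

def supports(s, sent, B, sign="t"):
    """s |=T sent when sign is 't'; s |=F sent when sign is 'f'."""
    op = sent[0]
    if op == "atom":
        return (sent[1], sign) in s
    if op == "not":                       # negation flips the sign
        return supports(s, sent[1], B, "f" if sign == "t" else "t")
    if op in ("and", "or"):
        need_both = (op == "and") == (sign == "t")
        parts = [supports(s, x, B, sign) for x in sent[1:]]
        return all(parts) if need_both else any(parts)
    if op == "B":                         # clause 5: supported true by all of B
        yes = all(supports(sp, sent[1], B, "t") for sp in B)
        return yes if sign == "t" else not yes
    if op == "L":                         # clause 6: true at all worlds in W(B)
        yes = all(supports(w, sent[1], B, "t")
                  for sp in B for w in worlds_compatible_with(sp))
        return yes if sign == "t" else not yes
    raise ValueError(op)
```

The sign-passing recursion is just a compact way of writing the paired clauses for ⊨T and ⊨F; the brute-force enumeration of situations is only for illustration.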
While space precludes a lengthy examination of the properties of L, here are the major highlights.

First of all, L handles its standard propositional subset correctly in that all instances of propositional tautologies are valid and, moreover, any sentence not containing a B or an L is valid iff it is a standard tautology. As for implicit belief, it is easy to see that all tautologies are implicitly believed and that it is closed under implication. In other words, we have

If ⊨ α (where α is propositional), then ⊨ Lα, and ⊨ ((Lα ∧ L(α ⊃ β)) ⊃ Lβ).

Equally important, the sentence (Bα ⊃ Lα) is valid, meaning that everything that is explicitly believed is an implicit belief. In fact, if a sentence is a logical consequence⁴ of what is believed, then it is implicitly believed. Unfortunately, the converse does not hold since in some interpretations, there may be sentences that are true in the right set of possible worlds without being implied by what is believed. For example, if a sentence is necessarily true then it will be an implicit belief even if it is not logically valid, a generic problem with the possible-world semantics for knowledge and belief that seems to have gone unnoticed in the literature. We should not be too concerned about this, however, since it does not affect either the valid or the satisfiable sentences of L, but only whether or not certain infinite sets of sentences are satisfiable.⁵

⁴A sentence α is a logical consequence of a set Σ of sentences iff Σ ∪ {¬α} is unsatisfiable.

⁵There is, moreover, a fairly simple way to eliminate the problem of non-logical necessary truths always being implicitly believed. Call a model structure expansive if for any set of atomic sentences, there is a possible world in the structure such that the set of atomic sentences it supports is precisely that set. Now while there are certainly model structures that are not expansive, it can be shown that the validity or satisfiability of a sentence would not change if these were defined in terms of expansive structures only. With this definition, moreover, a sentence would indeed be implicitly believed if and only if it was logically implied by what was believed.

Of course, the major issue here is how the B operator behaves. Before examining the valid sentences containing B, it is worth considering some satisfiable sets of sentences that show that belief does not suffer from logical omniscience. The following sets are all satisfiable:

1. {Bp, B(p ⊃ q), ¬Bq}. This shows that beliefs are not closed under implication.
2. {¬B(p ∨ ¬p)}. A valid sentence need not be believed.
3. {Bp, ¬B(p ∧ (q ∨ ¬q))}. A logical equivalent to a belief need not be believed.
4. {Bp, B¬p, ¬Bq}. Beliefs can be inconsistent without every sentence being believed.
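Continuing the sketch above, the first of these sets can be checked mechanically: a single incoherent situation that supports both the truth and the falsity of p supports p and (¬p ∨ q) without supporting q.

```python
# Checking set 1, {Bp, B(p > q), ~Bq}, with p > q read as (~p v q).
s0 = frozenset({("p", "t"), ("p", "f")})   # incoherent: p both true and false
B = [s0]
p, q = ("atom", "p"), ("atom", "q")
p_implies_q = ("or", ("not", p), q)

w0 = frozenset({("p", "t"), ("q", "t")})   # evaluation point; the truth of a
                                           # B-sentence does not depend on it
print(supports(w0, ("B", p), B))            # True
print(supports(w0, ("B", p_implies_q), B))  # True: s0 supports ~p, hence
                                            # the disjunction
print(supports(w0, ("B", q), B))            # False, so ~Bq is satisfied
```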

The above sets show what freedom the logic allows in terms of belief; to demonstrate that the logic does impose reasonable constraints on belief, we must look at the valid sentences of L. We will present these in terms of a proof theory for L that is both sound and complete with respect to the above semantics. The important point, however, is that unlike the syntactic approach, these constraints follow from the semantics. The only reason to consider a proof theory here is that it does provide an elegant and vivid way to examine the valid sentences of L (especially those using B).

5. A Proof Theory

The proof theory of L must begin with a propositional basis of some sort to guarantee that all tautologies are present. The simplest way to do this is to have a single rule of inference, Modus Ponens, and the usual three axioms that can be found in any elementary logic textbook. To this basic system we will adjoin a collection of new axioms for implicit and explicit belief but no new rules of inference. The appropriate axioms for implicit belief should make sure that it contains all tautologies and all beliefs and is closed under implication. This can be achieved with three axiom schemata:

1. Lα, where α is a tautology.
2. (Bα ⊃ Lα).
3. ((Lα ∧ L(α ⊃ β)) ⊃ Lβ).

For explicit belief, on the other hand, we have to dream up a set of axioms stating what has to be believed when something else is. In other words, we need a set of axioms of the form (Bα ⊃ Bβ), for various α and β. Remarkably enough, this work has already been done for us in what is called relevance logic [9]. This logic deals with a relationship between pairs of sentences called entailment that is a proper subset of logical implication. Entailment is based on the intuition that the antecedent of an implication should be relevant to the consequent. As it turns out, entailment and belief are very closely related, as the following key result attests:

Theorem 1: ⊨ (Bα ⊃ Bβ) iff α entails β.

The proof of this theorem is based on a correspondence between our semantics of situations and the semantics of four truth-values described in [11]. (Proofs of this and the two other quoted theorems can be found in [10], a slightly revised version of this paper.) What this tells us is that L contains relevance logic as a subpart: questions of entailment can be reduced to questions of belief in L. Moreover, we get this relevance logic without having to give up classical logic and the normal interpretation of ⊃ and the other connectives. We could imagine constructing a decision procedure for L directly from the above without even passing through a proof theory at all. Such a decision procedure, after all, is what counts when building a system that reasons with L.

So all that is needed to characterize the constraints satisfied by belief is to apply a set of axioms for entailment in relevance logic to belief. One such set given in [9] is the following:

4. B(α ∧ β) ≡ B(β ∧ α) and B(α ∨ β) ≡ B(β ∨ α).
5. B(α ∧ (β ∧ γ)) ≡ B((α ∧ β) ∧ γ) and B(α ∨ (β ∨ γ)) ≡ B((α ∨ β) ∨ γ).
6. B(α ∧ (β ∨ γ)) ≡ B((α ∧ β) ∨ (α ∧ γ)) and B(α ∨ (β ∧ γ)) ≡ B((α ∨ β) ∧ (α ∨ γ)).
7. B¬(α ∨ β) ≡ B(¬α ∧ ¬β) and B¬(α ∧ β) ≡ B(¬α ∨ ¬β).
8. B¬¬α ≡ Bα.
9. Bα ∧ Bβ ≡ B(α ∧ β) and Bα ∨ Bβ ⊃ B(α ∨ β).

This particular axiomatization states that belief must respect properties of the logical operators such as commutativity, associativity, distributivity, De Morgan's laws and double negation. Nothing in these axioms forces all the logical consequences of what is believed to be believed (as in axioms 1 and 3, above, for implicit belief), although each one forces some consequences to be believed (e.g., by axiom 8, a double negation of a sentence must be believed if the sentence itself is). Another way to understand these axioms (except for the very last one) is as constraints on the individuation of beliefs. For example, (α ∨ β) is believed iff (β ∨ α) is because these are two lexical notations for the same belief. In this sense, it is not that there is an automatic inference from one belief to another, but rather two ways of describing a single belief.
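The equivalences in axioms 4-8 can thus be read computationally as saying that a belief is individuated by a canonical form. The sketch below is an informal gloss of that reading, not a procedure from the paper: it normalizes a sentence by pushing negations inward and distributing into a set-of-clauses form, so that sentences related by those axioms normalize identically, while a sentence and a merely classical equivalent, such as p and p ∧ (q ∨ ¬q), need not.

```python
def nnf(sent, neg=False):
    """Push negations down to the atoms (De Morgan and double negation,
    i.e. axioms 7 and 8)."""
    op = sent[0]
    if op == "atom":
        return ("not", sent) if neg else sent
    if op == "not":
        return nnf(sent[1], not neg)
    flip = {"and": "or", "or": "and"}[op]
    return (flip if neg else op, nnf(sent[1], neg), nnf(sent[2], neg))

def clauses(sent):
    """Canonical CNF as a set of sets of literals: the set representation
    absorbs commutativity and associativity (axioms 4 and 5), and the
    distribution step is axiom 6."""
    op = sent[0]
    if op in ("atom", "not"):
        lit = sent[1] if op == "atom" else "~" + sent[1][1]
        return frozenset({frozenset({lit})})
    if op == "and":
        return clauses(sent[1]) | clauses(sent[2])
    return frozenset(c1 | c2 for c1 in clauses(sent[1])
                             for c2 in clauses(sent[2]))  # "or": distribute

def same_belief(a, b):
    return clauses(nnf(a)) == clauses(nnf(b))

p, q = ("atom", "p"), ("atom", "q")
print(same_belief(("or", p, q), ("or", q, p)))                  # True (axiom 4)
print(same_belief(("not", ("and", p, q)),
                  ("or", ("not", p), ("not", q))))              # True (axiom 7)
print(same_belief(p, ("and", p, ("or", q, ("not", q)))))        # False: merely
# classically equivalent sentences can still be different beliefs.
```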
This reading, in itself, does not justify the axioms, however. It is easy to imagine logics of belief that are different from this one, omitting certain of the above constraints or perhaps adding additional ones. Indeed, there is not much to designing a proof theory with any collection of constraints on belief. The interesting fact about this particular set of axioms, however, is that it corresponds so nicely to an independently motivated semantic theory. Specifically, we have the following result:

Theorem 2: (Soundness and Completeness) A sentence of L is a theorem of the above logic iff it is valid.

Furthermore, and perhaps most importantly, the logic of L has very attractive computational properties as well, which we now turn to.

6. The Payoff

What does this new logic of belief buy us? One thing is a language that can be used to formally reason about the beliefs of other agents without assuming logical omniscience. If we imagine a system planning speech acts as in [12], we can represent what it knows about the beliefs of another as a theory in L. It could then plan to remind someone of something he only believes implicitly. Similarly, it could take someone through certain steps of an argument or proof, at each stage pointing out implications of the other agent's beliefs.

There are any number of ways to mechanize the necessary reasoning in L. One currently fashionable method involves translating everything into first-order logic and running a resolution theorem-prover over the results. This would involve the usual encoding of sentences of L as terms and characterizing either its validity or provability (or both) using a first-order theory. Just doing this, however, would miss a very important feature of L, namely that calculating propositional beliefs is much easier than doing general propositional reasoning.

Consider, in particular, the role of a logical Knowledge Representation system (such as KRYPTON [13]) that is given as a knowledge base (or KB) a finite set of sentences in some language. What a knowledge-based system using this KB (such as a robot) will be interested in is whether or not some proposition is true of the application domain (e.g., "Is it raining outside?"). The ideal way of answering this kind of question is yes if the question follows from what is in the KB, no if its negation does, and unknown otherwise. The sad fact of the matter, however, is that for all but extremely simple languages (including some without quantifiers) this question-answering is computationally intractable. This might be tolerable if the kind of question you ask is an open problem in mathematics where you are willing to stop and redirect the theorem-prover with problem-specific heuristics if it seems to be thrashing. If, on the other hand, a robot is trying to decide whether or not to use an umbrella, and calls a Knowledge Representation utility as a subroutine, this kind of behaviour is unacceptable.

A possible solution to the problem is for the Knowledge Representation system to manage what is explicitly believed rather than its implications. In those cases where a question cannot be answered directly on the basis of what is believed, the robot can decide to try to figure out the answer by determining the implications of what it believes. Moreover, new facts can be sought and the question can even be abandoned if it becomes too expensive to pursue (e.g., the robot can decide to bring its umbrella just to be safe). The point is that this more general form of reasoning can be controlled very carefully depending on the situation since it is no longer just a subroutine call to a Knowledge Representation system. The robot can, in fact, plan to figure something out just as it would plan any other activity.

This is all very speculative, of course. How do we know, for example, that it is any easier to calculate what is believed rather than its implications? There is, fortunately, fairly strong evidence for this, at least in the propositional case:

Theorem 3: Suppose KB and α are propositional sentences in conjunctive normal form. Determining if KB logically implies α is co-NP-complete, but determining if KB entails α has an O(mn) algorithm, where m = |KB| and n = |α|.

Corollary 4: Assume KB and α are as above. Then, in the worst case, deciding if
(a) ⊨ (BKB ⊃ Lα) is very difficult;
(b) ⊨ (BKB ⊃ Bα) is relatively easy.

What this amounts to is that if we consider answering questions of a given fixed size, the time it takes to calculate what the KB believes will grow linearly at worst with the size of the KB, but the time it takes to calculate the implications of what the KB believes will grow exponentially⁶ at worst with the size of the KB.

⁶More precisely, it will grow faster than any polynomial function, unless P equals NP.
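The flavour of the O(mn) result can be seen in the standard clause-subsumption characterization of tautological entailment that falls out of the four-valued semantics [11]: a CNF knowledge base entails a CNF query iff every clause of the query contains some clause of the KB. The sketch below assumes that characterization and is written for clarity rather than for the linear-time bound of the theorem; the representation is invented for the example.

```python
def entails(kb_clauses, query_clauses):
    """KB entails the query (in the relevance-logic sense) iff every clause
    of the query contains (is subsumed by) some clause of KB."""
    return all(any(c <= d for c in kb_clauses) for d in query_clauses)

# KB: p and (~p or q). Classically this implies q, but it does not entail q,
# so B(KB) > Bq is not valid even though B(KB) > Lq is.
kb = [frozenset({"p"}), frozenset({"~p", "q"})]
print(entails(kb, [frozenset({"q"})]))       # False
print(entails(kb, [frozenset({"p", "q"})]))  # True: subsumed by {p}
print(entails(kb, [frozenset({"q", "~q"})])) # False: not even this tautologous
                                             # clause is entailed (cf. set 2 above)
```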

Returning now to the formal modelling of the beliefs of other agents, the reason we would not want to simply run an untuned resolution theorem-prover over encodings of sentences of L is that we would lose the opportunity to exploit the computational tractability of belief. Again, it is not so much that our logic is the only one to capture a semantically and computationally respectable notion of belief. What it demonstrates, however, is first, that it is possible to move away from closure under classical implication without espousing the syntactic approach and giving up semantics altogether, and second, that there is hope for a non-trivial domain-independent Knowledge Representation deductive service. Of course, it remains to be seen whether these advantages can be preserved for a language that includes meta-knowledge and quantifiers. Discovering appropriate semantics and decision procedures in these cases remains a difficult open problem.

ACKNOWLEDGEMENTS

This work was done as part of the KRYPTON project at Fairchild and I am indebted to its other members, Ron Brachman, Richard Fikes, Peter Patel-Schneider, and Victoria Pigman, as well as to David Israel of BBN, Joe Halpern and the other participants of the Knowledge Seminar at IBM San Jose, and to the Best Western family of hotels.

REFERENCES

[1] Hintikka, J., Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press, 1962.
[2] Hintikka, J., "Impossible Possible Worlds Vindicated," Journal of Philosophical Logic, 4, 1975, 475-484.
[3] Levesque, H. J., "Foundations of a Functional Approach to Knowledge Representation," Artificial Intelligence, forthcoming.
[4] Moore, R. C., Reasoning about Knowledge and Action, Technical Note 181, SRI International, Menlo Park, 1980.
[5] Moore, R. C. and Hendrix, G., Computational Models of Beliefs and the Semantics of Belief-Sentences, Technical Note 187, SRI International, Menlo Park, 1979.
[6] Eberle, R. A., "A Logic of Believing, Knowing and Inferring," Synthese, 26, 1974, 356-382.
[7] Konolige, K., A Deduction Model of Belief, Ph.D. Thesis, Computer Science Department, Stanford University, in preparation.
[8] Barwise, J. and Perry, J., Situations and Attitudes, Bradford Books, Cambridge, MA, 1983.
[9] Anderson, A. R. and Belnap, N. D., Entailment, The Logic of Relevance and Necessity, Princeton University Press, 1975.
[10] Levesque, H. J., A Logic of Implicit and Explicit Belief, Technical Report, Fairchild Laboratory for Artificial Intelligence Research, in preparation.
[11] Belnap, N. D., "A Useful Four-Valued Logic," in G. Epstein and J. M. Dunn (eds.), Modern Uses of Multiple-Valued Logic, Reidel, 1977.
[12] Perrault, C. R. and Cohen, P. R., "Elements of a Plan-Based Theory of Speech Acts," Cognitive Science, 3, 1979, 177-212.
[13] Brachman, R. J., Fikes, R. E., and Levesque, H. J., "KRYPTON: A Functional Approach to Knowledge Representation," IEEE Computer, 16 (10), 1983, 67-73.
Returning now to the formal modelling of the beliefs of other agents, the reason we would not want to simply run an untuned resolution theorem-prover over encodings of sentences of L is that we would lose the opportunity to exploit the computational tractability of belief. Again, it is not so much that our logic is the only one to capture a semantically and computationally respectable notion of belief. What it demonstrates, however, is first, that it is possible to move away from closure under classical implication without espousing the syntactic approach and giving up semautics altogether, and second, that there is hope for a non-trivial domain-independent Knowledge Representation deductive service. Of course, it remains to be seen whether these advantages can be preserved for a language that includes meta-knowledge and quantifiers. Discovering appropriate semantics and decision procedures in these cases remains a difficult open problem. ACKNOWLEDGEMENTS This work wa done as part of the KRYPTON project at Fairchild and I am indebted to its other members, Ron Brachman, Richard Fikes, Peter Pat&Schneider, and Victoria Pigman, as well as to David Israel of BBN, Joe Halpern and the other participants of the Knowledge Seminar at IBM San Jose, and to the Best Western family of hotels. REFERENCES Hintikka, J., Knowledge and Belief: An Inlroduction to the Logic o/ the Two Notions, Cornell University Press, 1962. Hintikka, J., Impossible Possible Worlds Vindicated, Journal of f hiiosophicnl Logic, 4, 1975, 475-484. Levesque, H. J., Foundations of a Functional Approach to Knowledge Representation, Artijiciaf Intelligence, forthcoming. Moore, R. C., Reasoning about Knowledge and Action, Technical Note 181, SRI International, Menlo Park, 1980. Moore, R. C. and Head&, G., Computational Models of Beliefs and the Semantics of Belief-Sentences, Technical Note 187, SRI International, Menlo Park, 1979. Eberle, R. A., A Logic of Believing, Knowing and Inferring, Sunthese 26, 1974, 356-382. Konolige, K., A Deduction Model of Belief, Ph. D. Thesis, Computer Science Department, Stanford University, in preparation. Barwise, J. and Perry, J., Situations and Attitudes, Bradford Books, Cambridge, MA, 1983. Anderson, A. R. and Belnap, N. D., Entailment, The Logic of Releoance and Necesaitg, Princeton University Press, 1975. Levesque, H. J., A Logic of Implicit and Explicit Belief, Fairchild Laboratory for Artificial Intelligence Research, Technical Report, in preparation. Belnap, N. D., A Useful Four-Valued Logic, in G. Epstein and J. M. Dunn (eds.), Modern User of Multiple-Valued Logic, Reidel, 1977. Perrault, C. R. and Cohen, P. R., Elements of a Plan-Based Theory of Speech Acts, Cognitive Science 3, 1979, 177-212. Brachman, R. J., Fikes, R. E., and Levesque, H. J., KRYP- TON: A Functional Approach to Knowledge Representation, IEEE Computer, 16 (lo), 1983, 67-73. 202