On Indeterminacy and Defeasible Reasoning

Steven O. Kimbrough
University of Pennsylvania
Operations and Information Management Dept.
Philadelphia, PA
(215) 898-5133

Stephen F. Roehrig
Carnegie Mellon University
The Heinz School of Public Policy and Management
Pittsburgh, PA
(412)

DRAFT only. Do not circulate. Comments welcome.

Abstract

Some simple problems in defeasible reasoning are analyzed using both probabilistic and qualitative methods. We show that any such method which arrives at a firm conclusion regarding these problems must necessarily import additional assumptions. We argue in favor of those methods which are candid in their assumptions, and which can support alternative decision making rules.

This research was supported in part by U.S. Coast Guard contract number DTCG39-86-C-E92204, Steven O. Kimbrough principal investigator. Steve Kimbrough wishes to thank Don Nute, Michael Covington, and David Billings for stimulating conversations at the University of Georgia, Athens, AI Center. File: Ladies28sok.tex. Revision: 2.8, July 4. Previous version: ladies27.tex.

Contents

1 Introduction
2 Cases
  2.1 Specificity: Tweety
  2.2 Simpson's Paradox
    2.2.1 An Example
    2.2.2 Analysis of Simpson's Paradox
  2.3 University Students
    2.3.1 Probabilistic Approaches
    2.3.2 The Method of Sweeping Presumptions
3 The Cases Reexamined: Underlying Possibilities
  3.1 Tweety
  3.2 University Students
  3.3 Simpson's Paradox
  3.4 Discussion: A Possibilities Semantics for Defeasible Reasoning
4 Interpreting the Cases
  4.1 Indeterminacy in Language and Communication
  4.2 Indeterminacy in Game Theory
  4.3 Implications
5 Discussion
  5.1 Sufficient conditions for defeasible reasoning
  5.2 When will these conditions arise
  5.3 What can be done about it?
6 An Application
7 Conclusion

1 Introduction

When faced with a particularly daunting case, Sherlock Holmes bemoaned his lack of evidence, saying "I can't make bricks without clay!" Holmes is hardly alone; in fact, most practical decision making involves trying to do one's best with a shortage of "clay." Worse yet, often the partially fired bricks must be remade when new evidence arrives. Practical reasoning in general, and automated reasoning in particular, share these difficulties with the great detective.

In this paper we look at the problem of defeasible reasoning and its cousin, reasoning with incomplete information. We examine three typical problems from several points of view, both qualitative and quantitative, and argue that both types of reasoning are common, and that methods for approaching each type necessarily import additional assumptions which are seldom made explicit.

The name defeasible reasoning comes from philosophy, but other researchers, most notably in AI, have used the term nonmonotonic reasoning to mean essentially the same thing: the revision of earlier conclusions in light of new evidence. Such reasoning is, on the face of it, impossible in standard first-order logic, but is common to all of us. For example, the familiar "good news-bad news" jokes play upon our ability to reason defeasibly on the fly. Consider the fortunes of the aviator, of whom we are told: "The bad news is, he fell out of the plane... The good news is, he was wearing a parachute... The bad news is, the parachute didn't open... The good news is, he fell into a haystack..." and so on.[1] Each new bit of news reverses our belief in his survival.

[1] Thanks to Donald Nute for pointing out the relevance of "good news-bad news" jokes.

Reasoning with incomplete information is perhaps even more ubiquitous. In this case, the available facts are simply insufficient to reach a unique conclusion. Will a health care bill make it through Congress? Will Pittsburgh keep its major league baseball franchise? Will the sun come up tomorrow? To answer such questions, we make assumptions; we must make assumptions, because the problems are underdetermined. Defeasible reasoning, then, is the inevitable follow-on to reasoning with incomplete information. Because belief is tenuous, additional facts can change it. If we had all the facts in the first place, our conclusions would be unassailable, forever.

Automated reasoning in areas such as decision support, argument generation and support, document retrieval, and so on requires some formalized account of defeasible reasoning, and in the end, a calculus. While there is no shortage of calculi in the literature, they don't always agree. All, it seems, can be tricked into errors, including failure in making a discrimination when there should be one, and making the wrong discrimination.

We're concerned with a third: making a discrimination based on assumptions that are implicit, not explicit. Which calculus is right? We will claim that, in a sense to be made clear, there can't be a right calculus. The idea is this: problems, often surprisingly, come indeterminately; when you try to solve them, you (or the methods/calculi) either import policies or not. If you/they don't, then you/they can't discriminate; if they do, they will be systematically wrong at times. So, the issue comes down to thinking through a decision problem, importing the right assumptions, and picking a calculus that correctly accommodates them.

2 Cases

Our purpose in this section is to present and elucidate three example problems for defeasible reasoning. Each of these three problems has independently arisen and been discussed in the literature on defeasible reasoning. We believe that, in a sense we try to make precise in the sequel, these examples fall well-spaced along a continuum of underspecification. None admits an ironclad answer; viewed probabilistically, for example, in even the best circumstance the most we may do is to give a reasonably high probability for an outcome. In the worst circumstance, even this is not possible. Yet for all three examples, methods proposed in the literature claim to "solve" them. We begin with the literature's favorite problem, Tweety, a problem that also serves well in capturing certain aspects of the overall problem of defeasible reasoning.

2.1 Specificity: Tweety

The Tweety story goes roughly as follows. We have as background:

    Normally, birds fly.
    Normally, penguins do not fly.
    All penguins are birds.

And we have a two-part reasoning process:

1. Tweety [we now learn] is a bird. Does Tweety fly? Presumably yes.
2. Tweety [we now learn] is a penguin. Does Tweety fly? Presumably no.

Two points about the Tweety problem, commonly made in the literature, bear repeating. First, given the background information, the reasoning in evidence here is really unexceptionable. Given only that Tweety is a bird, the reasonable conclusion is that Tweety probably (or, better, presumably) does in fact fly.

Learning, in addition, that Tweety is also a penguin, we naturally retract our earlier, tentative conclusion that Tweety flies and we come to the conclusion that Tweety probably (or presumably) does not fly. New information that Tweety is a penguin defeats a previous conclusion that Tweety presumably flies.[2]

[2] Note as well that the conclusions we draw depend here on the order in which new information arrives. Had we learned first that Tweety is a penguin we would have concluded initially that Tweety (presumably) does not fly. Subsequently being told that Tweety is a bird would not defeat our initial conclusion.

The second point about the Tweety problem, as commonly told, is that formalization of the defeasibility, or nonmonotonicity, aspect of the reasoning is problematic. At least superficially, we have several propositions ("Normally, birds fly" ... "Tweety is a bird") from which we draw an inference ("Presumably, Tweety flies"). We then acquire a new proposition ("Tweety is a penguin") and we are able to draw a new conclusion ("Presumably, Tweety does not fly"). So far, there is nothing untoward. Add new premises to a consistent axiomatic system and new conclusions follow. What is different in the case of Tweety, and is not present in logical and mathematical systems (as we know them), is that the addition of a new premise blocks the inference to an old conclusion. According to the Tweety story, standardly interpreted, addition of the premise "Tweety is a penguin" not only allows us to conclude that "Presumably, Tweety does not fly" but it also (somehow) blocks the inference to "Presumably, Tweety flies," leaving us unambiguously concluding in favor of Tweety's non-airborneness. How is this possible? After all, there is no way to add axioms to Euclidean geometry and have the internal angles of a triangle add up to other than 180 degrees.

So, these two points about Tweety present two formidable general problems:

1. How are we to understand defeasible reasoning or inference?
2. How (given our interests in automated reasoning) can we develop a formal calculus to model defeasible reasoning?

In this paper we address both questions, but fundamentally our aim is to say something about the first question. The design and evaluation of a calculus should ultimately follow upon a theory of defeasible reasoning, rather than vice versa. Human reasoning being what it is, however, the two tend to be developed together, in an intertwined manner. Often a calculus of defeasible reasoning is developed, with the theory of defeasible reasoning implicit in the calculus and required to be teased out.

Calculi aside, is there a theory for defeasible reasoning available for Tweety? Almost. Every method for defeasible reasoning we are aware of concords with the strong specificity principle, which may be stated as nothing more than a direct abstraction of the Tweety problem:

Strong Specificity. If Bs typically have property F, Ps typically have property ¬F, and Ps are a special kind of Bs (that is, Ps are a subset of Bs), then given that something is a P (and no other relevant information), conclude (defeasibly) that it is ¬F.

Reflection, we assume, reveals the intuitive aptness of this specificity principle, but we can say more. We would prefer, and are able, to justify the specificity principle, as applied in the case of Tweety and similar cases. It is useful to view specificity from a probabilistic point of view. The Tweety problem, and specificity problems in general, are underdetermined, at least in the sense that an infinite (or at least very large) number of distinct probability distributions are consistent with the information presented in the problem. "Normally, birds fly" is consistent with very many distributions for the probability that a particular thing flies, given it is a bird. Although the structure of the Tweety problem does not uniquely determine the relevant probability distributions, it does constrain them. Assume, uncontroversially, that "Normally, if φ then ψ" is always (but not necessarily uniquely) rendered as Pr(ψ | φ) > x, where x > 0.5. Given this, it is easy to see that, so long as we are consistent with the problem statement, no matter how we assign probability distributions in the Tweety case, the probability that Tweety flies, given only that Tweety is a bird, is greater than 0.5, and the probability that Tweety does not fly, given that Tweety is a penguin, is greater than 0.5.[3] The principle of strong specificity may, thus, be seen as a qualitative expression of a mathematical truth. This would explain the universality of its acceptance in the calculi of defeasible reasoning.

[3] This is easily seen by thinking in terms of something like a Venn diagram. Draw a circle to encompass all and only the birds. Allocate the probability mass for flying (say by coloring) within the circle at will, so long as more than half the bird circle is colored. Place on top of this, but entirely within the bird circle, a circle for the penguins. Again, place it at will, providing that more than half its area lies above the uncolored area of the bird circle. There are lots of ways of doing this, but every way leaves the probability of flying, given bird, as greater than 0.5 and the probability of not flying, given penguin, as greater than 0.5.

We may now sum up our comments regarding Tweety. The (defeasible) reasoning evident in the Tweety problem is indeed unexceptionable. It is unexceptionable because it is an instance of a more general principle, strong specificity, and this principle (as here characterized) is justified as a verbal gloss on a mathematical truth. The Tweety problem is (as are other specificity problems) underdetermined, in that there is insufficient information in the problem to determine how probable (or how strongly presumable) it is that Tweety flies. Nevertheless, this is a weak sort of underdetermination, in that we can determine that the relevant probabilities are (or are not) above 0.5. Let us see how these considerations fare with other problems of defeasible reasoning.
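The weak underdetermination can be made concrete with a small sampling experiment. The sketch below is ours, not the paper's; it assumes a flat Dirichlet draw over the six atomic events compatible with "all penguins are birds" as a stand-in for choosing distributions arbitrarily. Every draw consistent with the background keeps Pr(fly | bird) above 0.5 and Pr(fly | penguin) below 0.5, yet the exact values are otherwise left almost completely unconstrained.

```python
# Illustrative only: sample distributions consistent with the Tweety background and
# report how much the conditional probabilities can still vary.
import numpy as np

rng = np.random.default_rng(0)
d = rng.dirichlet(np.ones(6), size=100_000)
# Atomic events over (bird, penguin, fly), with "all penguins are birds" built in:
# 0:(b,p,f) 1:(b,p,~f) 2:(b,~p,f) 3:(b,~p,~f) 4:(~b,~p,f) 5:(~b,~p,~f)

fly_bird    = (d[:, 0] + d[:, 2]) / d[:, :4].sum(axis=1)   # Pr(fly | bird)
fly_penguin = d[:, 0] / (d[:, 0] + d[:, 1])                # Pr(fly | penguin)

ok = (fly_bird > 0.5) & (fly_penguin < 0.5)   # "birds normally fly", "penguins normally do not"
print("Pr(fly | bird)    in [%.3f, %.3f]" % (fly_bird[ok].min(), fly_bird[ok].max()))
print("Pr(fly | penguin) in [%.3f, %.3f]" % (fly_penguin[ok].min(), fly_penguin[ok].max()))
```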

2.2 Simpson's Paradox

2.2.1 An Example

Consider the summer vacationer who leaves the country in midseason, with his favorite baseball player comfortably leading the league in hitting. When he returns after season's end he learns that this player, call him Player A, has lost the batting title to another (Player B). He naturally assumes that during the second half of the season, his favorite player must have hit for a lower average than the eventual winner. Intuitively, many people argue that if Player A has a higher average in the first half, but loses overall to Player B, then Player B must have had the higher average in the second half. Many others will recognize this apparently simple reasoning problem as an instance of Simpson's paradox [1, 3, 4, 5, 10, 20, 29, 32, 34]. While seeming to be sufficiently rich in detail to warrant a firm conclusion, in fact it could be that Player A had a higher average than Player B in both the first and second halves, but still lost overall. Although the overall batting average is determined by a weighted sum of the two half-season averages, the weights may be far from even, giving rise to counter-intuitive results. (We will present a concrete numerical example in a moment.)

The batting average problem is thus a problem of insufficient information; without further clues, it is impossible to say for certain if Player A did better or worse than Player B in the second half. Often, however, a conclusion must be reached, at least tentatively. Thus we look for guidelines for judgement in the face of incomplete information. If we receive new clues, the problem becomes one of defeasible reasoning; we need to gracefully absorb new information and revise conclusions when warranted. Note that both types of reasoning attempt to provide the best (most warranted, in some sense) conclusion given current information. Typically, no guarantee can be made about the correctness of the conclusion; its validity rests only on the underlying assumptions of the reasoning method.

2.2.2 Analysis of Simpson's Paradox

Table 1 presents a numerical example of Simpson's paradox, in the context of two baseball players over the course of a season. While this sort of counter-intuitive construction has been much analyzed, we will briefly consider it in terms of defeasible reasoning. One way to look at Simpson's paradox is with a generic network diagram. The diagram for the instance in Table 1 is given in Figure 1. Here p stands for the proposition that Player A has an at-bat, and h is the proposition that a hit is made. Thus ¬p represents an at-bat for Player B, and ¬h stands for failure to get a hit. s refers to the first half of the season, and ¬s to the second half.

[Figure 1: Generic network diagram for Simpson's paradox, with nodes p, s, and h and arcs p → s, s → h, and p → h; the arc p → h carries a positive sign.]

The positive sign on the arc p → h is supposed to indicate the fact that Player A will more likely get a hit in any at-bat than will Player B, at least for a known value of s (= true or false, depending on the date). The arc p → s records the information that Player A had fewer at-bats than Player B in the first half of the season. However, if we don't know in which half of the season an at-bat occurred, that is, if we have no information about s, the appropriate diagram should look like that in Figure 2. We have labeled the only remaining arc with a question mark, indicating that there is uncertainty in the outcome. If this sign is negative, then Simpson's paradox has obtained.

[Figure 2: Collapsed diagram for Simpson's paradox: a single arc p → h, labeled "?".]

In terms of the complete diagram (e.g., Figure 1), Simpson's paradox will obtain whenever (1) the influence recorded on p → h is positive and (2) the combination (however calculated) of those on p → s and s → h exceeds that on p → h.

                 1st half            2nd half             Season
             Player A  Player B  Player A  Player B  Player A  Player B
    At bats     200       300       300       200       500       500
    Hits         72       105        84        54       156       159
    Average    .360      .350      .280      .270      .312      .318

    Table 1: Simpson's paradox in baseball. (The counts shown are those used in the construction of Section 3.3.)
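As a quick arithmetic check (ours, not part of the original), the Table 1 figures do exhibit the reversal: Player A is ahead in each half of the season, yet Player B finishes ahead overall.

```python
# Verify the Table 1 batting averages and the season-level reversal.
at_bats = {"A": {"1st": 200, "2nd": 300}, "B": {"1st": 300, "2nd": 200}}
hits    = {"A": {"1st": 72,  "2nd": 84},  "B": {"1st": 105, "2nd": 54}}

for half in ("1st", "2nd"):
    avg = {p: hits[p][half] / at_bats[p][half] for p in ("A", "B")}
    print(f"{half} half: A {avg['A']:.3f} vs B {avg['B']:.3f}")   # A ahead in both halves

for p in ("A", "B"):
    season = sum(hits[p].values()) / sum(at_bats[p].values())
    print(f"Season: {p} {season:.3f}")                            # yet B ahead overall
```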

Given the plethora of examples in the literature, Simpson's paradox may seem a commonplace. Nonetheless, most people do find it paradoxical. Despite the existence of "Simpson's paradox in real life" examples, it may be that we, as defeasible reasoners, and as reasoners laboring with incomplete information, actually do a fairly good job. The heuristic "If Player A beats Player B in both halves of the season, he wins overall" might after all be the right one. We investigate this possibility more thoroughly in a moment.

We saw in the discussion of Tweety that specificity was a problem feature which could be exploited. Is specificity a principle with, if not universal, at least broad applicability? The simple network diagram for Simpson's paradox is similar to one for Tweety (Figure 3), but the arc joining penguins with birds in the latter has only a syntactic resemblance to the arcs in the former. The penguin-bird arc in Tweety expresses the subset relationship, but the arcs in Simpson's paradox declare connections of a completely different nature. It seems, in fact, that specificity as a principle fails us in Simpson's paradox, and so we need something else to help us along. (The two diagrams of course differ structurally, as well as semantically. However, in §2.3 we give an example with exactly the same diagrammatic structure as Tweety, but with a much different result.)

[Figure 3: Generic network diagram for the Tweety problem, with nodes p (penguin), b (bird), and f (flies), arranged as in Figure 1.]

Now, since the batting average problem is one of a surprising distribution over three propositions (Player A/Player B, 1st half/2nd half, hit/no hit), a plausible reaction is to ask "how often" such surprises can occur. Note that the paradoxical result is not one of a bad probabilistic outcome in a situation where the odds (and thus an expected value) suggest otherwise. Rather, the distribution itself is only partially determined. But if distributions satisfying the constraints of the problem only rarely exhibit "paradoxical behavior," then it seems justified, in the absence of additional information, to reason as intuition suggests. With the symbolization of the batting average problem, the setup of Simpson's paradox has

    Pr(h | p, s) > Pr(h | ¬p, s),
    Pr(h | p, ¬s) > Pr(h | ¬p, ¬s).

Of the distributions satisfying these conditions, we can look for the proportion satisfying

    Pr(h | ¬p) > Pr(h | p),

that is, those representing Simpson's paradox. [27] presents a method for sampling uniformly from the space of all distributions over three propositions. In extended simulations using this method, we found the likelihood of "paradoxical" behavior to be 3%, a figure which compares well with the 3.8% found in a study of Simpson's paradox in airline on-time data [6].
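A sketch of this kind of estimate follows. It is ours, with an assumption: a flat Dirichlet draw over the eight atomic events stands in for the uniform sampling method of [27], which we do not reproduce. It tallies how often the marginal comparison reverses among distributions satisfying the Simpson setup; figures in the low single-digit percentages are what the text reports.

```python
# Estimate how often the Simpson setup reverses when s is marginalized out.
import numpy as np

rng = np.random.default_rng(1)
d = rng.dirichlet(np.ones(8), size=200_000)
# Atomic events, in the order (p, s, h):
# 0:(p,s,h) 1:(p,s,~h) 2:(p,~s,h) 3:(p,~s,~h) 4:(~p,s,h) 5:(~p,s,~h) 6:(~p,~s,h) 7:(~p,~s,~h)

h_ps, h_nps   = d[:, 0] / (d[:, 0] + d[:, 1]), d[:, 4] / (d[:, 4] + d[:, 5])   # Pr(h|p,s), Pr(h|~p,s)
h_pns, h_npns = d[:, 2] / (d[:, 2] + d[:, 3]), d[:, 6] / (d[:, 6] + d[:, 7])   # Pr(h|p,~s), Pr(h|~p,~s)
h_p  = (d[:, 0] + d[:, 2]) / d[:, :4].sum(axis=1)                              # Pr(h | p)
h_np = (d[:, 4] + d[:, 6]) / d[:, 4:].sum(axis=1)                              # Pr(h | ~p)

setup = (h_ps > h_nps) & (h_pns > h_npns)   # Player A ahead in each half
paradox = setup & (h_np > h_p)              # ...but behind when s is marginalized out
print(f"{100 * paradox.sum() / setup.sum():.1f}% of admissible distributions are paradoxical")
```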

A tentative conclusion is that although Simpson's paradox is structurally similar to the Tweety scenario, the former is a more difficult problem. Because Tweety may be analyzed using the principle of specificity, once that principle is invoked we are clear on the outcome. On the other hand, the batting average problem does not admit (so far as we know) of any such reasoning guide. Analysis of second-order indications (i.e., a uniform distribution of distributions) appears to give some guidance, but if the truth be told, we have little justification for choosing a uniform prior, other than through a standard Bayesian incantation. We now turn to a third example, after which we discuss some lessons learned from all three.

2.3 University Students

Suppose we are told (and are given no other relevant information) that adults tend to be employed, and that Fred is an adult. Is Fred employed? Well, presumably he is. Now suppose we are told that university students tend not to be employed, and that university students tend to be adults. Interesting facts, but so far Fred still looks to be employed. Suppose further that we discover that Fred is also a university student. Is he employed? This problem is known in the defeasible reasoning literature as the University Students problem [21].

What should we say, if we must say something, about Fred's employment status? Intuitions differ. First, for example, from one point of view we might reason that students are special kinds of adults, so what is typical of students (unemployment) should hold sway over what is typical of the larger class (adults are usually employed). By this reasoning, Fred is unemployed. Second, from another point of view, we might note that absent the information that students tend to be adults we would find the evidence equally balanced regarding Fred's employment. From here we might reason that, given we already know Fred is an adult, the information that Fred is presumably an adult is no news at all and should be disregarded. So, we would conclude that the evidence is evenly balanced.

Third, we might believe that the prevalence of unemployment among students as a whole is largely due to the massive unemployment among the non-adults, so that adult students are not much different than adults as a whole, at least with regard to employment status. Then, Fred would presumably be employed. There are other ways we might reason, but these suffice to show that there is no quick, easy, and obvious correct answer. We need to look deeper.

Consider, as we have done with the previous problems, the simple network representation of University Students in Figure 4. Here a is the proposition that someone is an adult, s that they are a student, and e that they are employed.

[Figure 4: Preliminary diagram for University Students, with nodes s, a, and e, arranged as in Figure 3.]

2.3.1 Probabilistic Approaches

At this stage, the diagram looks much the same as that for Simpson's paradox (and in fact identical to Tweety, but again with no specificity relationships). However, the algebraic signs, which in some sense indicate the direction of influence, are different in the Simpson's paradox and University Students diagrams. Different too is the fact that we are not given clear information on relationships between conditional probabilities. "Adults tend to be employed" surely has a number of probabilistic interpretations. Five possible interpretations we consider are these.

1. We might assume, first of all, that our sentence means Pr(e | a) = t or Pr(e | a) > t for some value of t, presumably (but not necessarily) larger than 0.5. This says that when considering only the two propositions a and e, we would bet that an adult was employed. If other factors influence our judgement of employment, such as student status, we assume that they have been marginalized out.

2. If knowing that Fred is an adult increases belief that Fred is employed over a similar belief for all people in general, we would have Pr(e | a) > Pr(e).

3. If we think that knowing someone is an adult increases that person's chances of being employed over those who are non-adults, we have Pr(e | a) > Pr(e | ¬a). In this case, both probabilities may be low, but Pr(e | a) is the larger. This condition is called "positive association" in [20], and is in fact equivalent to positive correlation between e and a.

4. Because in the complete problem statement a person's student status s is relevant but has only indirect bearing on adults' employment, we might suppose the sentence means Pr(e | a, s) > Pr(e | ¬a, s) and Pr(e | a, ¬s) > Pr(e | ¬a, ¬s). In light of Simpson's paradox, this interpretation may be quite different probabilistically from 2) above. This interpretation has been studied by Wellman [35] in the context of qualitative probabilistic networks.

5. Finally, we might assume that Pr(e | a, s) > t and Pr(e | a, ¬s) > t for some t.

Interpretation 5) seems a poor semantic match to "Adults tend to be employed" for small values of t, since there is little force to such a statement in the absence of other knowledge. On the other hand, if all the statements in University Students are quantified according to 5), values of t greater than 0.5 result in a direct contradiction. Thus we will not consider 5) any longer.

Because of the existence of distributions exhibiting Simpson's paradox, 3) is not implied by 4), and examples where 3) does not guarantee 4) are easy to find. Thus these two are somewhat independent, and may be worth considering. It is easy to show that interpretations 2) and 3) are equivalent. It seems to us that 1) is the most natural probabilistic view of the "tends" statements in University Students, so this case will be pursued.

If we assume, say,

    Pr(e | a) = t,          (1)
    Pr(e | s) = 1 - t,      (2)
    Pr(a | s) = t,          (3)

for various values of t, it is not difficult to show that as t → 1 the probability of Fred being employed tends to zero. If we index the atomic events as in Table 2, then Equations (1)-(3) require that

    Pr(e | a) = t = (t7 + t8) / (t3 + t4 + t7 + t8),        (4)
    Pr(e | s) = 1 - t = (t6 + t8) / (t2 + t4 + t6 + t8),    (5)
    Pr(a | s) = t = (t4 + t8) / (t2 + t4 + t6 + t8).        (6)

What we wish to determine is

    Pr(e | a, s) = t8 / (t4 + t8).                          (7)

    Event         ¬s,¬a,¬e   s,¬a,¬e   ¬s,a,¬e   s,a,¬e
    Probability      t1         t2        t3       t4
    Event         ¬s,¬a,e    s,¬a,e    ¬s,a,e    s,a,e
    Probability      t5         t6        t7       t8

    Table 2: Indexing of the atomic events.

First note that solving equations (5) and (6) simultaneously yields t2 = t8 (adding them gives (t4 + t6 + 2t8) / (t2 + t4 + t6 + t8) = 1, hence t2 = t8). By specifying a value for Pr(e | a, s), we can solve for an upper bound on the value of the probability "tends".

For example, taking Pr(e | a, s) = 3/4, we have

    Pr(e | a, s) = t8 / (t4 + t8) = 3/4,

so 3t4 = t8, implying

    t = 4t4 / (7t4 + t6).

In fact, if Pr(e | a, s) = a/b, then

    t ≤ b / (a + b).

A graph of this relationship is shown in Figure 5.

[Figure 5: Upper bound on Pr(e | a, s) as a function of t.]

Although the graph was derived by choosing Pr(e | a, s) and solving for an upper bound on t, it is drawn, and may be read, the other way around, with t as the independent variable. As the probability of "tends" increases toward 1.0, the range of values that Pr(e | a, s) may take on shrinks, so that

    lim_{t→1} Pr(e | a, s) = 0.

This can also be shown directly. Since t2 = t8,

    t = (t4 + t8) / (t2 + t4 + t6 + t8) = (t4 + t8) / (t4 + t6 + 2t8),

so as t → 1, it must happen that t8 → 0 (at least for a fixed, finite sample size). Looking at (7), it is clear that Pr(e | a, s) → 0.

This is a consequence of the fact that as t → 1, the problem statements become more and more difficult to satisfy; in the limit, they become material implications which are contradictory when taken as a group. In the probabilistic setting, this forces Pr(e | a, s) to zero.

If we simulate by drawing distributions which satisfy the conditions of University Students (using the current interpretation of "tends"), and see which of those distributions likely has Fred employed, we obtain similar results. Specifically, distributions over three propositions were chosen uniformly from the space of all such 8-tuples. We focused on those satisfying

    Pr(e | a) > t,          (8)
    Pr(e | s) < 1 - t,      (9)
    Pr(a | s) > t,          (10)

for various values of t. For each distribution generated in this way, the probability that Fred is employed is easily computed. Choosing Pr(e | a, s) > 0.5 as a cutoff above which we judge Fred to be employed, a proportion can be tallied. The results of a simple simulation of University Students using this approach are presented in Figure 6. For small values of t, Fred is almost as likely employed as unemployed, but by the time t has increased to 0.5, 38% of distributions satisfying the problem statements have Fred employed. With t = 0.7 or larger, virtually no distributions have him employed. If the cutoff is chosen as Pr(e | a, s) > t, so that we judge Fred employed if he "tends" to be (according to the prevailing value for "tends"), then for t > 0.55, Fred is almost surely not employed.

[Figure 6: Proportion p of distributions with Fred likely employed, as a function of t.]
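The following is a minimal sketch (ours) of the simulation just described, again with a flat Dirichlet standing in for uniform sampling over distributions; the atomic events are indexed as in Table 2.

```python
# Tally how often Fred comes out employed among distributions satisfying (8)-(10).
import numpy as np

rng = np.random.default_rng(2)
d = rng.dirichlet(np.ones(8), size=500_000)
# Indexing follows Table 2: 0:(~s,~a,~e) 1:(s,~a,~e) 2:(~s,a,~e) 3:(s,a,~e)
#                           4:(~s,~a,e)  5:(s,~a,e)  6:(~s,a,e)  7:(s,a,e)

e_given_a  = (d[:, 6] + d[:, 7]) / (d[:, 2] + d[:, 3] + d[:, 6] + d[:, 7])
e_given_s  = (d[:, 5] + d[:, 7]) / (d[:, 1] + d[:, 3] + d[:, 5] + d[:, 7])
a_given_s  = (d[:, 3] + d[:, 7]) / (d[:, 1] + d[:, 3] + d[:, 5] + d[:, 7])
e_given_as = d[:, 7] / (d[:, 3] + d[:, 7])

for t in (0.5, 0.6, 0.7):
    keep = (e_given_a > t) & (e_given_s < 1 - t) & (a_given_s > t)   # conditions (8)-(10)
    if keep.any():
        p = (e_given_as[keep] > 0.5).mean()   # cutoff: judge Fred employed if Pr(e|a,s) > 0.5
        print(f"t = {t}: {keep.sum()} admissible draws, Fred employed in {100 * p:.0f}% of them")
```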

16 can hardly be certain that in the context of University Students, \tends" means \with high probability." A reasonably noncommittal stance might be to associate \tends" with \probability greater than 1/2." In this case the probabilistic argument for Fred's unemployment is hardly a dominating one. The evidence is much less convincing than in Simpson's paradox, for instance. Furthermore, many would argue that quantitative approaches are simply inappropriate for much of the scope of defeasible reasoning. We are instructed \Thou shalt not kill;" but wars occur, and sometimes they are broadly considered justiable. Yet it seems ludicrous to assign a probability to this defeasible rule. Objections to quantitative approaches to defeasible reasoning, and to reasoning with insucient information, have fallen into the following broad catagories: 1. We don't think that way (e.g. Nute [25]) 2. We can't think that way (e.g. Harman [13]) 3. The data (or the problems) don't come that way (e.g. McDermott and Doyle [22]) In the next subsection we examine a representative qualitative approach to defeasible reasoning. It should not be surprising, in light of the probabilistic analysis, that additional assumptions are required to resolve the University Students problem with this approach. We shall see, in fact, that there are competing policies for judgement, and that dierent resolutions result from dierent policy choices The Method of Sweeping Presumptions Sweeping presumptions [16, 17], is a logic-based, qualitative technique for modeling defeasible reasoning. It is built upon and extends the method of logic graphs [15, 11], a theorem-proving technique for sentence logic and rst-order logic. Here we very briey outline the method. If we take the statements of the University Students problem as hard-and-fast rules, a contradiction results: If x is a student, then x is unemployed. If y is a student, then y is an adult. If z is an adult, then z is employed. Clearly, the point of the problem is that the relationships are defeasible generalizations; not all students are unemployed, and so forth. Sweeping presumptions models this situation by using an extended form of the rule of detatchment (of which modus ponens is a special case). Thus in sweeping presumptions, 16

Thus in sweeping presumptions,

    If x is a student, then x is unemployed.

becomes, at least for University Students,

    If x is a student, then presumably x is unemployed.

Translating the remainder of the problem, and introducing notation with an obvious meaning, we have the situation depicted in Figure 7.[4]

[Figure 7: University Students with Sweeping Presumptions. Each arc is labeled P (for "presumably"): S(y) → ¬E(y), S(z) → A(z), and A(x) → E(x).]

[4] The full version of sweeping presumptions includes several more arc labels, which can be used to indicate other degrees of entailment. For simplicity, we omit discussion of these here.

Intuitively, sweeping presumptions operates by collecting (sweeping up) arc labels on all logical inference paths, starting with whatever facts are at hand. The method keeps a running list of such inferences, along with the concatenation of all arc labels on each inference path. For example, suppose that we have an arc from φ to ψ with label Γ, and suppose further that φ is currently in the list. Then we add Γ(ψ) to the list. More generally, let Γ and Γ' be two strings of (possibly iterated) arc labels. Suppose we have the arc as before and Γ'(φ) in the list; then we add ΓΓ'(ψ) to the list, where ΓΓ' is the concatenation of the two strings of labels.

So now what of Fred, the adult student? In virtue of being a student (S(Fred)), Fred is presumably unemployed (a), and in virtue of being an adult, Fred is presumably employed (c). So far, we have a standoff. But we also have it that students are normally adults. Consequently, unless we impose a rule to block the inference, by matching on S(z) we get P(A(Fred)), and from there get (b), PP(E(Fred)).
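To fix ideas, here is a toy rendering of the sweeping step for this digraph. It is our reading of the informal description above, not the authors' implementation, and the rule schemas are instantiated to Fred directly. It derives exactly the items at issue: (a) P(¬E(Fred)), (c) P(E(Fred)), and (b) PP(E(Fred)).

```python
# Toy label-sweeping over the University Students digraph of Figure 7.
from collections import deque

# Arcs, instantiated to Fred: antecedent -> (arc label, consequent).
arcs = {
    "S(Fred)": [("P", "~E(Fred)"), ("P", "A(Fred)")],
    "A(Fred)": [("P", "E(Fred)")],
}
facts = ["S(Fred)", "A(Fred)"]

# The running list pairs each formula with the concatenated labels on its inference path.
derived = {(fact, "") for fact in facts}
queue = deque(derived)
while queue:
    formula, labels = queue.popleft()
    for label, consequent in arcs.get(formula, []):
        item = (consequent, labels + label)
        if item not in derived:
            derived.add(item)
            queue.append(item)

for formula, labels in sorted(derived, key=lambda x: (len(x[1]), x[0])):
    print(f"{labels or 'fact'}: {formula}")
# Among the output: P ~E(Fred) (a), P E(Fred) (c), and PP E(Fred) (b): the standoff
# that the redundancy policies discussed next are meant to resolve.
```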

One might hold that conclusion (b) should be blocked as a case of redundant information. Why? Because in establishing (b) we have as an intermediate result that presumably Fred is an adult, but in fact we have it as a basic fact, without presumption, that Fred is an adult. Thus, (b) does not contain any information pertaining to Fred's employment status that is not captured by (c). A general solution to this double-counting problem is complex (see [14] for one approach), and here we have to be content with providing a policy that covers only special cases, including the one at hand.

Informally and roughly, the policy can be thought of in this way. Conclusions, with their operators in front, represent inferential paths in the digraph. If two paths share a common node, but one path contains the node unconditionally (given to us as fact, for example), while the other path contains the node only presumptively, then the conclusion reached by the latter path is eliminated from the list by this policy, which we will call the presumptively weak policy. Thus, in the specific case at hand, we would find that (b) should not be a permitted conclusion, and hence that the evidence regarding Fred's employment status is evenly balanced.

Common sense might also argue in favor of a policy different than the presumptively weak one. In the present case, we are given that Fred is an adult and we can conclude that presumably Fred is an adult. Under the presumptively weak policy, we treat the information that presumably Fred is an adult as redundant and we eliminate it from our weighing of evidence. An argument might be made (and, if we understand him correctly, the argument is made by Donald Nute) that instead the given information that Fred is an adult should be treated as redundant. This would occur under what we can call a presumptively strong policy on redundancy.

How could a presumptively strong policy be justified? Taking the present case, one line of reasoning is as follows. We are given that Fred is a student and that students are presumably adults. Thus the information that Fred is an adult is redundant, because according to the specificity principle, we should use the most narrowly defined category. Presumably, Fred is an adult in virtue of being a student, so presumably being told that Fred is an adult is not new information and should be discarded when netting out the evidence. If we accept this presumptively strong policy on redundancy, then in the present case we can conclude (a) and (b), but not (c), and the outcome favors Fred's being unemployed.

3 The Cases Reexamined: Underlying Possibilities

We have argued from a mathematical perspective that these three representative problems for defeasible reasoning are in interesting ways indeterminate. Further, we have just presented one calculus for defeasible reasoning that recognizes indeterminacy and allows users to import different decision policies in order to resolve indeterminacy. In this section, our aim is to explain the indeterminacy that may be found in these problems, and thereby shed some light on the nature of defeasible reasoning and point towards practical machine-based applications.

3.1 Tweety

Consider the partial truth function given in Table 3. Think of Table 3 as specifying a partial truth function from B (being a bird), P (being a penguin), X-1 (an unspecified but logically independent condition) and X-2 (a different unspecified, logically independent condition) to F (flying). Each line in the table identifies a logically distinct possibility.

    Line   B   P   X-1   X-2   F
    (1)    ⊤   ⊤    ⊤     ⊤    ⊥
    (2)    ⊤   ⊤    ⊤     ⊥    ⊤
    (3)    ⊤   ⊤    ⊥     ⊤    ⊥
    (4)    ⊤   ⊤    ⊥     ⊥    ⊥
    (5)    ⊤   ⊥    ⊤     ⊤    ⊤
    (6)    ⊤   ⊥    ⊤     ⊥    ⊤
    (7)    ⊤   ⊥    ⊥     ⊤    ⊤
    (8)    ⊤   ⊥    ⊥     ⊥    ⊤

    Table 3: Partial truth function from B, P, X-1, and X-2 to F.

We might, thus, summarize the table by saying:

1. Birds normally fly. (In 5 of the 8 cases here, they do.)
2. All penguins are birds. (In each of the 4 cases in which the possibility contains a penguin, that possibility is also a bird.)
3. Penguins normally do not fly. (In 3 of the 4 cases present in which the possibility contains a penguin, that possibility does not fly.)
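A quick count over the rows of Table 3 (a check of ours, not part of the paper) confirms the summary just given.

```python
# Count rows of Table 3, with True/False standing in for the table's truth values.
T, F = True, False
table3 = [  # (B, P, X-1, X-2, F), lines (1)-(8)
    (T, T, T, T, F), (T, T, T, F, T), (T, T, F, T, F), (T, T, F, F, F),
    (T, F, T, T, T), (T, F, T, F, T), (T, F, F, T, T), (T, F, F, F, T),
]

birds    = [row for row in table3 if row[0]]
penguins = [row for row in table3 if row[1]]
print(sum(r[4] for r in birds), "of", len(birds), "bird possibilities fly")                      # 5 of 8
print(sum(not r[4] for r in penguins), "of", len(penguins), "penguin possibilities do not fly")  # 3 of 4
print("every penguin possibility is a bird:", all(r[0] for r in penguins))                       # True
```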

There is more to say, but first we shall discuss, from this possibilities perspective, our other two ambient examples.

3.2 University Students

Consider now the (total) truth function for University Students, given in Table 4. Think of Table 4 as specifying a total truth function from S (being a student), A (being an adult), X-1 and X-2 (as before), to E (being employed).

    Line   S   A   X-1   X-2   E
    (1)    ⊤   ⊤    ⊤     ⊤    ⊤
    (2)    ⊤   ⊤    ⊤     ⊥    ⊤
    (3)    ⊤   ⊤    ⊥     ⊤    ⊤
    (4)    ⊤   ⊤    ⊥     ⊥    ⊥
    (5)    ⊤   ⊥    ⊤     ⊤    ⊥
    (6)    ⊤   ⊥    ⊤     ⊥    ⊥
    (7)    ⊤   ⊥    ⊥     ⊤    ⊥
    (8)    ⊤   ⊥    ⊥     ⊥    ⊥
    (9)    ⊥   ⊤    ⊤     ⊤    ⊤
    (10)   ⊥   ⊤    ⊤     ⊥    ⊤
    (11)   ⊥   ⊤    ⊥     ⊤    ⊤
    (12)   ⊥   ⊤    ⊥     ⊥    ⊤
    (13)   ⊥   ⊥    ⊤     ⊤    ⊤
    (14)   ⊥   ⊥    ⊤     ⊥    ⊤
    (15)   ⊥   ⊥    ⊥     ⊤    ⊤
    (16)   ⊥   ⊥    ⊥     ⊥    ⊤

    Table 4: Total truth function from S, A, X-1, and X-2 to E. Fred, the adult university student, is presumably employed.

Notice how we might summarize the table:

1. Students are normally unemployed. (Here in 5 of 8 cases they are unemployed.)
2. Adults are normally employed. (Here in 7 of 8 cases they are employed.)
3. Students who are adults are normally employed. (Here in 3 of 4 cases they are employed.)

Suppose the E value for line (3) were changed to ⊥. Then in 2 of 4 cases adult students would be employed, yet adults would still tend to be employed (here in 6 of 8 cases), and students would still tend to be unemployed (here in 6 of 8 cases). Suppose the E values for lines (3) and (2) were changed to ⊥. Then, in 3 of 4 cases adult students would be unemployed, yet adults would tend to be employed (here in 5 of 8 cases), and students would still tend to be unemployed (here in 7 of 8 cases).

There is a problem, however, in that Table 4 as given does not capture "Students tend not to be adults," which is an important part of the University Students problem. This is easily remedied. If we delete line (1), we get the following summary:

1. Students are normally unemployed. (Here in 5 of 7 cases they are unemployed.)

2. Adults are normally employed. (Here in 6 of 7 cases they are employed.)
3. Students tend not to be adults. (Here in 4 of 7 cases they are not adults.)
4. Students who are adults are normally employed. (Here in 2 of 3 cases they are employed.)

If we now change the E value for line (2) from ⊤ to ⊥, the general conclusion about the employment status of adult students changes, while the basic defeasible expressions remain true. In particular, we then have the following summary:

1. Students are normally unemployed. (Here in 6 of 7 cases they are unemployed.)
2. Adults are normally employed. (Here in 5 of 7 cases they are employed.)
3. Students tend not to be adults. (Here in 4 of 7 cases they are not adults.)
4. Students who are adults are normally unemployed. (Here in 2 of 3 cases they are unemployed.)

Finally, we might delete line (2) entirely, in addition to line (1), to yield a table (partial truth function) with the following interpretation:

1. Students are normally unemployed. (Here in 5 of 6 cases they are unemployed.)
2. Adults are normally employed. (Here in 5 of 6 cases they are employed.)
3. Students tend not to be adults. (Here in 4 of 6 cases they are not adults.)
4. Students who are adults are as likely to be employed as not. (Here in 1 of 2 cases they are unemployed.)

With these several versions of the truth table for the University Students problem before us, it is perhaps clearest to see how the problem is indeterminate. Much slack remains.
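The same style of check (ours again) shows how the manipulations just described move the conclusion about adult students around while the other defeasible summaries continue to hold.

```python
# Recompute the summaries for Table 4 and the variants discussed in the text.
T, F = True, False
table4 = [  # (S, A, X-1, X-2, E), lines (1)-(16)
    (T, T, T, T, T), (T, T, T, F, T), (T, T, F, T, T), (T, T, F, F, F),
    (T, F, T, T, F), (T, F, T, F, F), (T, F, F, T, F), (T, F, F, F, F),
    (F, T, T, T, T), (F, T, T, F, T), (F, T, F, T, T), (F, T, F, F, T),
    (F, F, T, T, T), (F, F, T, F, T), (F, F, F, T, T), (F, F, F, F, T),
]

def summary(rows):
    students       = [r for r in rows if r[0]]
    adults         = [r for r in rows if r[1]]
    adult_students = [r for r in rows if r[0] and r[1]]
    return (f"students unemployed {sum(not r[4] for r in students)}/{len(students)}, "
            f"adults employed {sum(r[4] for r in adults)}/{len(adults)}, "
            f"adult students employed {sum(r[4] for r in adult_students)}/{len(adult_students)}")

variants = {
    "full table":           table4,
    "delete (1)":           table4[1:],
    "delete (1), flip (2)": [(T, T, T, F, F)] + table4[2:],
    "delete (1) and (2)":   table4[2:],
}
for name, rows in variants.items():
    print(f"{name}: {summary(rows)}")
```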

3.3 Simpson's Paradox

The partial truth function approach, described above for Tweety and University Students, also works for Simpson's paradox, although the underlying truth table is much larger. To see how this works, consider the Venn diagram, in Figure 8, for the baseball example of the paradox (cf. Table 1, above). In the example, there are 1000 at-bats. This specifies the sample space. These at-bats may be covered by 10 indicator variables, X-1, X-2, ..., X-10, as above. These 10 variables then cover 2^10 = 1024 possibilities, of which we need only 1000.

[Figure 8: Venn diagram for Simpson's paradox, Table 1 example. The sample space is divided into eight regions (call them α, β, γ, δ, ε, ζ, η, θ), with s = α ∪ β ∪ γ ∪ δ, ¬s = ε ∪ ζ ∪ η ∪ θ, p = α ∪ β ∪ ε ∪ ζ, ¬p = γ ∪ δ ∪ η ∪ θ, h = β ∪ γ ∪ ζ ∪ η, ¬h = α ∪ δ ∪ ε ∪ θ, and Pr(α) = 0.128, Pr(β) = 0.072, Pr(γ) = 0.105, Pr(δ) = 0.195, Pr(ε) = 0.216, Pr(ζ) = 0.084, Pr(η) = 0.054, Pr(θ) = 0.146.]

Construct the truth table as follows. Lay out the array of input combinations ⊤ and ⊥ for the variables in the usual manner. This creates a table with 1024 rows. Number the rows from 1 to 1024, and delete rows with numbers greater than 1000. Now add 8 columns to the array, corresponding to α, β, ..., θ. In the α column place ⊤s in the first 128 rows and ⊥s in all the other rows. In the β column, place ⊤s in rows 129-200, and ⊥s in all the other rows. Continue in this fashion for γ, ..., θ. Add an output column, called h. In h, place ⊤s in rows 129-305 and in rows 717-854; place ⊥s in all the other rows.

Given the interpretation in the caption to Figure 8, we may now give the following summary of the (partial) truth table:

1. The presumption that Player A gets a hit while at bat during the first half of the season is stronger than that for Player B getting a hit while at bat during the first half. (Here, we have 72 of 200 cases for A, and 105 of 300 cases for B.)

2. The presumption that Player A gets a hit while at bat during the second half of the season is stronger than that for Player B getting a hit while at bat during the second half. (Here, we have 84 of 300 cases for A, and 54 of 200 cases for B.)

3. Over the entire season, the presumption that Player B gets a hit while at bat is stronger than the presumption that Player A gets a hit while at bat. (Here, we have 159 of 500 cases for B, and 156 of 500 cases for A.)

We have shown how these three examples may be treated in similar ways using truth tables. Now let us discuss what this means.

23 half. (Here, we have 84 of 300 cases for A, and 54 of 200 cases for B.) 3. Over the entire season, the presumption that player B gets a hit while at bat is stronger than the presumption that player A gets a hit while at bat. (Here, we have 159 of 500 cases for B, and 156 of 500 cases for A.) We have shown how these three examples may be treated in similar ways using truth tables. Now let us discuss what this means. 3.4 Discussion: A Possibilities Semantics for Defeasible Reasoning It is one thing to come up with a semantics for a calculus of defeasible reasoning, and quite another to produce a semantics that seems right, that has broad explanatory power, and that provides insight. Reection upon the three truth tables just provided for our three examples leads in the direction of the latter kind of semantics for defeasible reasoning. Here, in a nutshell, is our suggestion: Defeasible reasoning talk (e.g., \Normally,...," \Presumably,...") implicitly refers to a systematic array of possibilities, contextually conditioned. \Presumably,," means roughly \Among the relevant possibilities, the number of cases in which is true is larger than the number in which is not true. Comments: 1. Each row of the truth table, implicitly referred to, corresponds to a possibility. 2. The referenced truth may be total or partial. If partial, the speaker is simply leaving undened a number of possibilities. 3. These possibilities, and the reference to them, may or may not have a probabilistic interpretation. Clearly, given an array of possibilities one can assign a probability density function and then calculate various probabilities. Often, however, a probabilistic interpretation to our defeasible reasoning talk seems inappropriate. The present approach possibilities semantics for defeasible reasoning comports with a non-probabilistic, or qualitative, interpretation of defeasible reasoning talk. For example: (a) \Thou shalt not kill," and other moral imperatives, do not have a probabilistic interpretation. 5 Rather, on the theory here espoused, such talk refers to an array 5 This is not to deny that we can ask how often exceptions occur and under what conditions. 23

(a) "Thou shalt not kill," and other moral imperatives, do not have a probabilistic interpretation.[5] Rather, on the theory here espoused, such talk refers to an array of relevant (but normally unspecified) possibilities and asserts that in only a few is it permissible to kill. The fact that these few may locally have high probability (as in a war) in no way undermines the original imperative.

[5] This is not to deny that we can ask how often exceptions occur and under what conditions.

(b) "Presumed innocent until proven guilty" means, roughly: there is an extensive array of relevant possibilities for what happened; judge the defendant guilty only if, after examining all the evidence, there are no remaining possibilities that are other than remotely ("beyond a reasonable doubt") plausible. In other words, look at the lines in the relevant truth table. If there are no remaining lines with Innocent in the output column, or if those that remain are entirely implausible, then judge guilty; and if not, not. Thus, it is possible to preserve the presumption of innocence and believe that guilt is very probable.

4. The referenced arrays of possibilities will typically have recourse to a number of indicator variables, the X-i in our previous discussion. These variables may or may not be interpreted. For example, in the Tweety case, the speaker may be (explicitly or implicitly) referring to robins and sparrows with X-1 and X-2. On the other hand, the indicator variables may be understood as interpretable but (so far) uninterpreted. Roughly: "Among the relevant possibilities, generated by a number of relevant but unspecified conditions, the number of cases in which φ is true is larger than the number in which φ is not true."

In summary:

1. Given an arbitrary truth table, it will often be possible to summarize its contents (or part of its contents) using defeasible reasoning talk. This is natural and entirely appropriate.

2. Conversely, given defeasible reasoning talk, it is always (or at least often and in many important cases) possible to translate the talk into a truth table representation.

3. These two facts strongly suggest that the possibilities or truth table interpretation of defeasible reasoning talk is, at least often, the correct way to understand what is meant.

4. The suggestion is strengthened by the fact that it provides an understanding of both probabilistic and qualitative interpretations of defeasible reasoning talk.

5. The suggestion is further strengthened by the fact that it very naturally provides an account of indeterminacy in defeasible reasoning, which indeterminacy was independently identified by the techniques described in earlier sections.

4 Interpreting the Cases

When we reason defeasibly, we are reasoning with incomplete information. Our findings, reported above, picture defeasible reasoning as occurring within different degrees of incompleteness, or indeterminacy. Pure specificity problems, e.g., Tweety, are not very problematic. Even though we lack knowledge of the probability distribution for Tweety's flying, we do know that, whatever the proper distribution is, the probability that Tweety flies is less than 0.5 (assuming the problem is stated correctly). A great deal of indeterminacy lurks, but, compared to Simpson's paradox and University Students, Tweety feels firmly grounded.

Simpson's paradox presents a significantly less determinate situation. Our simulation found a small percentage of probability distributions for which the surprising, "paradoxical," outcome actually obtains. Under reasonable assumptions, this percentage is about 3%, which accords nicely with empirical studies previously reported in the literature. Our "reasonable assumptions" are about distributions of distributions. We could be wrong in any given case, and obviously, if these assumptions are wrong enough, the 97% :: 3% ratio could be reversed. With Simpson's paradox a new degree of indeterminacy has appeared.

Our discussion of University Students shows that even under "reasonable assumptions" the indeterminacy gap may be very large. This, in sum, is where we are. It leaves us with at least two questions, or families of questions:

1. To what degree do these findings generalize? How pervasive is the phenomenon of problem indeterminacy?
2. What does this mean? What are the implications of these findings for research and for practice?

We address these questions in the subsections that follow. Briefly, our response to (2) is quite simple: either beg, borrow, or steal additional information, or choose a policy. To answer (1), we argue that the sort of indeterminacy we find for defeasible reasoning fits with a strong trend in the cognitive and decision sciences. Two broad examples, given next, should suffice to illustrate the point and answer (1).

4.1 Indeterminacy in Language and Communication

Natural language is notoriously indeterminate. Often there is a wide gap between the meaning of a sentence and the speaker's meaning in uttering the sentence.

This is easily seen in ironic discourse, exemplified by the famous remark in the movie Casablanca, "Major Strasser is one of the reasons the German Reich enjoys the reputation it has today." Similarly, there is the familiar dodge, "Thank you for sending me your papers. I will waste no time in reading them." Finally, for present purposes, we have the well-known example from Artificial Intelligence [26, page 209]:

    Computer parsers are too meticulous for their own good. They find ambiguities that are quite legitimate, as far as English grammar is concerned, but that would never occur to a sane person. One of the first computer parsers, developed at Harvard in the 1960s, provides a famous example. The sentence Time flies like an arrow is surely unambiguous if there ever was an unambiguous sentence. ... But to the surprise of the programmers, the sharp-eyed computer found it to have five different trees!

    Time proceeds as quickly as an arrow proceeds. (the intended reading)
    Measure the speed of flies in the same way that you measure the speed of an arrow.
    Measure the speed of flies in the same way that an arrow measures the speed of flies.
    Measure the speed of flies that resemble an arrow.
    Flies of a particular kind, time-flies, are fond of an arrow.

Significantly, for what we want to say about defeasible reasoning, the received view on linguistic ambiguity is that sentences are actually ambiguous, but communication and understanding are possible because humans bring to the table a rich set of default rules for interpreting sentences. These default rules typically appeal to aspects of the common (changeable), extra-linguistic environment. To understand, the hearer must bring to bear assumptions. These assumptions are not implicit in the utterances at hand and they may be defeated by new, or external, information.[6]

[6] See [8] for a seminal essay on this general topic, and [31] for a more recent treatment.

4.2 Indeterminacy in Game Theory

Consider the following situation. Player R and player C must each make a choice among the integers 1 to 100. They must choose independently; neither player will know the other's choice, nor will the players be able to discuss the game with each other. If R and C both


More information

Truth and Modality - can they be reconciled?

Truth and Modality - can they be reconciled? Truth and Modality - can they be reconciled? by Eileen Walker 1) The central question What makes modal statements statements about what might be or what might have been the case true or false? Normally

More information

Etchemendy, Tarski, and Logical Consequence 1 Jared Bates, University of Missouri Southwest Philosophy Review 15 (1999):

Etchemendy, Tarski, and Logical Consequence 1 Jared Bates, University of Missouri Southwest Philosophy Review 15 (1999): Etchemendy, Tarski, and Logical Consequence 1 Jared Bates, University of Missouri Southwest Philosophy Review 15 (1999): 47 54. Abstract: John Etchemendy (1990) has argued that Tarski's definition of logical

More information

Chapter 1. Introduction. 1.1 Deductive and Plausible Reasoning Strong Syllogism

Chapter 1. Introduction. 1.1 Deductive and Plausible Reasoning Strong Syllogism Contents 1 Introduction 3 1.1 Deductive and Plausible Reasoning................... 3 1.1.1 Strong Syllogism......................... 3 1.1.2 Weak Syllogism.......................... 4 1.1.3 Transitivity

More information

What would count as Ibn Sīnā (11th century Persia) having first order logic?

What would count as Ibn Sīnā (11th century Persia) having first order logic? 1 2 What would count as Ibn Sīnā (11th century Persia) having first order logic? Wilfrid Hodges Herons Brook, Sticklepath, Okehampton March 2012 http://wilfridhodges.co.uk Ibn Sina, 980 1037 3 4 Ibn Sīnā

More information

Naturalized Epistemology. 1. What is naturalized Epistemology? Quine PY4613

Naturalized Epistemology. 1. What is naturalized Epistemology? Quine PY4613 Naturalized Epistemology Quine PY4613 1. What is naturalized Epistemology? a. How is it motivated? b. What are its doctrines? c. Naturalized Epistemology in the context of Quine s philosophy 2. Naturalized

More information

MISSOURI S FRAMEWORK FOR CURRICULAR DEVELOPMENT IN MATH TOPIC I: PROBLEM SOLVING

MISSOURI S FRAMEWORK FOR CURRICULAR DEVELOPMENT IN MATH TOPIC I: PROBLEM SOLVING Prentice Hall Mathematics:,, 2004 Missouri s Framework for Curricular Development in Mathematics (Grades 9-12) TOPIC I: PROBLEM SOLVING 1. Problem-solving strategies such as organizing data, drawing a

More information

Circularity in ethotic structures

Circularity in ethotic structures Synthese (2013) 190:3185 3207 DOI 10.1007/s11229-012-0135-6 Circularity in ethotic structures Katarzyna Budzynska Received: 28 August 2011 / Accepted: 6 June 2012 / Published online: 24 June 2012 The Author(s)

More information

Is Epistemic Probability Pascalian?

Is Epistemic Probability Pascalian? Is Epistemic Probability Pascalian? James B. Freeman Hunter College of The City University of New York ABSTRACT: What does it mean to say that if the premises of an argument are true, the conclusion is

More information

2004 by Dr. William D. Ramey InTheBeginning.org

2004 by Dr. William D. Ramey InTheBeginning.org This study focuses on The Joseph Narrative (Genesis 37 50). Overriding other concerns was the desire to integrate both literary and biblical studies. The primary target audience is for those who wish to

More information

Prisoners' Dilemma Is a Newcomb Problem

Prisoners' Dilemma Is a Newcomb Problem DAVID LEWIS Prisoners' Dilemma Is a Newcomb Problem Several authors have observed that Prisoners' Dilemma and Newcomb's Problem are related-for instance, in that both involve controversial appeals to dominance.,

More information

The Critical Mind is A Questioning Mind

The Critical Mind is A Questioning Mind criticalthinking.org http://www.criticalthinking.org/pages/the-critical-mind-is-a-questioning-mind/481 The Critical Mind is A Questioning Mind Learning How to Ask Powerful, Probing Questions Introduction

More information

Artificial Intelligence: Valid Arguments and Proof Systems. Prof. Deepak Khemani. Department of Computer Science and Engineering

Artificial Intelligence: Valid Arguments and Proof Systems. Prof. Deepak Khemani. Department of Computer Science and Engineering Artificial Intelligence: Valid Arguments and Proof Systems Prof. Deepak Khemani Department of Computer Science and Engineering Indian Institute of Technology, Madras Module 02 Lecture - 03 So in the last

More information

Reply to Cheeseman's \An Inquiry into Computer. This paper covers a fairly wide range of issues, from a basic review of probability theory

Reply to Cheeseman's \An Inquiry into Computer. This paper covers a fairly wide range of issues, from a basic review of probability theory Reply to Cheeseman's \An Inquiry into Computer Understanding" This paper covers a fairly wide range of issues, from a basic review of probability theory to the suggestion that probabilistic ideas can be

More information

PROSPECTIVE TEACHERS UNDERSTANDING OF PROOF: WHAT IF THE TRUTH SET OF AN OPEN SENTENCE IS BROADER THAN THAT COVERED BY THE PROOF?

PROSPECTIVE TEACHERS UNDERSTANDING OF PROOF: WHAT IF THE TRUTH SET OF AN OPEN SENTENCE IS BROADER THAN THAT COVERED BY THE PROOF? PROSPECTIVE TEACHERS UNDERSTANDING OF PROOF: WHAT IF THE TRUTH SET OF AN OPEN SENTENCE IS BROADER THAN THAT COVERED BY THE PROOF? Andreas J. Stylianides*, Gabriel J. Stylianides*, & George N. Philippou**

More information

Figure 1 Figure 2 U S S. non-p P P

Figure 1 Figure 2 U S S. non-p P P 1 Depicting negation in diagrammatic logic: legacy and prospects Fabien Schang, Amirouche Moktefi schang.fabien@voila.fr amirouche.moktefi@gersulp.u-strasbg.fr Abstract Here are considered the conditions

More information

The Non-Identity Problem from Reasons and Persons by Derek Parfit (1984)

The Non-Identity Problem from Reasons and Persons by Derek Parfit (1984) The Non-Identity Problem from Reasons and Persons by Derek Parfit (1984) Each of us might never have existed. What would have made this true? The answer produces a problem that most of us overlook. One

More information

Empty Names and Two-Valued Positive Free Logic

Empty Names and Two-Valued Positive Free Logic Empty Names and Two-Valued Positive Free Logic 1 Introduction Zahra Ahmadianhosseini In order to tackle the problem of handling empty names in logic, Andrew Bacon (2013) takes on an approach based on positive

More information

Lecture 4. Before beginning the present lecture, I should give the solution to the homework problem

Lecture 4. Before beginning the present lecture, I should give the solution to the homework problem 1 Lecture 4 Before beginning the present lecture, I should give the solution to the homework problem posed in the last lecture: how, within the framework of coordinated content, might we define the notion

More information

The SAT Essay: An Argument-Centered Strategy

The SAT Essay: An Argument-Centered Strategy The SAT Essay: An Argument-Centered Strategy Overview Taking an argument-centered approach to preparing for and to writing the SAT Essay may seem like a no-brainer. After all, the prompt, which is always

More information

(Refer Slide Time 03:00)

(Refer Slide Time 03:00) Artificial Intelligence Prof. Anupam Basu Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture - 15 Resolution in FOPL In the last lecture we had discussed about

More information

CSSS/SOC/STAT 321 Case-Based Statistics I. Introduction to Probability

CSSS/SOC/STAT 321 Case-Based Statistics I. Introduction to Probability CSSS/SOC/STAT 321 Case-Based Statistics I Introduction to Probability Christopher Adolph Department of Political Science and Center for Statistics and the Social Sciences University of Washington, Seattle

More information

Philosophy 148 Announcements & Such. Inverse Probability and Bayes s Theorem II. Inverse Probability and Bayes s Theorem III

Philosophy 148 Announcements & Such. Inverse Probability and Bayes s Theorem II. Inverse Probability and Bayes s Theorem III Branden Fitelson Philosophy 148 Lecture 1 Branden Fitelson Philosophy 148 Lecture 2 Philosophy 148 Announcements & Such Administrative Stuff I ll be using a straight grading scale for this course. Here

More information

9 Knowledge-Based Systems

9 Knowledge-Based Systems 9 Knowledge-Based Systems Throughout this book, we have insisted that intelligent behavior in people is often conditioned by knowledge. A person will say a certain something about the movie 2001 because

More information

Boghossian & Harman on the analytic theory of the a priori

Boghossian & Harman on the analytic theory of the a priori Boghossian & Harman on the analytic theory of the a priori PHIL 83104 November 2, 2011 Both Boghossian and Harman address themselves to the question of whether our a priori knowledge can be explained in

More information

Other Logics: What Nonclassical Reasoning Is All About Dr. Michael A. Covington Associate Director Artificial Intelligence Center

Other Logics: What Nonclassical Reasoning Is All About Dr. Michael A. Covington Associate Director Artificial Intelligence Center Covington, Other Logics 1 Other Logics: What Nonclassical Reasoning Is All About Dr. Michael A. Covington Associate Director Artificial Intelligence Center Covington, Other Logics 2 Contents Classical

More information

What God Could Have Made

What God Could Have Made 1 What God Could Have Made By Heimir Geirsson and Michael Losonsky I. Introduction Atheists have argued that if there is a God who is omnipotent, omniscient and omnibenevolent, then God would have made

More information

Logic for Robotics: Defeasible Reasoning and Non-monotonicity

Logic for Robotics: Defeasible Reasoning and Non-monotonicity Logic for Robotics: Defeasible Reasoning and Non-monotonicity The Plan I. Explain and argue for the role of nonmonotonic logic in robotics and II. Briefly introduce some non-monotonic logics III. Fun,

More information

6.080 / Great Ideas in Theoretical Computer Science Spring 2008

6.080 / Great Ideas in Theoretical Computer Science Spring 2008 MIT OpenCourseWare http://ocw.mit.edu 6.080 / 6.089 Great Ideas in Theoretical Computer Science Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

Argumentation Module: Philosophy Lesson 7 What do we mean by argument? (Two meanings for the word.) A quarrel or a dispute, expressing a difference

Argumentation Module: Philosophy Lesson 7 What do we mean by argument? (Two meanings for the word.) A quarrel or a dispute, expressing a difference 1 2 3 4 5 6 Argumentation Module: Philosophy Lesson 7 What do we mean by argument? (Two meanings for the word.) A quarrel or a dispute, expressing a difference of opinion. Often heated. A statement of

More information

CS 2104 Intro Problem Solving in Computer Science Test 1 READ THIS NOW!

CS 2104 Intro Problem Solving in Computer Science Test 1 READ THIS NOW! READ THIS NOW! Print your name in the space provided below. There are 5 problems, priced as marked. The maximum score is 100. The grading of each question will take into account whether you obtained a

More information

How Gödelian Ontological Arguments Fail

How Gödelian Ontological Arguments Fail How Gödelian Ontological Arguments Fail Matthew W. Parker Abstract. Ontological arguments like those of Gödel (1995) and Pruss (2009; 2012) rely on premises that initially seem plausible, but on closer

More information

Justified Inference. Ralph Wedgwood

Justified Inference. Ralph Wedgwood Justified Inference Ralph Wedgwood In this essay, I shall propose a general conception of the kind of inference that counts as justified or rational. This conception involves a version of the idea that

More information

Stout s teleological theory of action

Stout s teleological theory of action Stout s teleological theory of action Jeff Speaks November 26, 2004 1 The possibility of externalist explanations of action................ 2 1.1 The distinction between externalist and internalist explanations

More information

What is the Nature of Logic? Judy Pelham Philosophy, York University, Canada July 16, 2013 Pan-Hellenic Logic Symposium Athens, Greece

What is the Nature of Logic? Judy Pelham Philosophy, York University, Canada July 16, 2013 Pan-Hellenic Logic Symposium Athens, Greece What is the Nature of Logic? Judy Pelham Philosophy, York University, Canada July 16, 2013 Pan-Hellenic Logic Symposium Athens, Greece Outline of this Talk 1. What is the nature of logic? Some history

More information

Contradictory Information Can Be Better than Nothing The Example of the Two Firemen

Contradictory Information Can Be Better than Nothing The Example of the Two Firemen Contradictory Information Can Be Better than Nothing The Example of the Two Firemen J. Michael Dunn School of Informatics and Computing, and Department of Philosophy Indiana University-Bloomington Workshop

More information

The Problem with Complete States: Freedom, Chance and the Luck Argument

The Problem with Complete States: Freedom, Chance and the Luck Argument The Problem with Complete States: Freedom, Chance and the Luck Argument Richard Johns Department of Philosophy University of British Columbia August 2006 Revised March 2009 The Luck Argument seems to show

More information

16. Universal derivation

16. Universal derivation 16. Universal derivation 16.1 An example: the Meno In one of Plato s dialogues, the Meno, Socrates uses questions and prompts to direct a young slave boy to see that if we want to make a square that has

More information

Logic Appendix: More detailed instruction in deductive logic

Logic Appendix: More detailed instruction in deductive logic Logic Appendix: More detailed instruction in deductive logic Standardizing and Diagramming In Reason and the Balance we have taken the approach of using a simple outline to standardize short arguments,

More information

Helpful Hints for doing Philosophy Papers (Spring 2000)

Helpful Hints for doing Philosophy Papers (Spring 2000) Helpful Hints for doing Philosophy Papers (Spring 2000) (1) The standard sort of philosophy paper is what is called an explicative/critical paper. It consists of four parts: (i) an introduction (usually

More information

Final Paper. May 13, 2015

Final Paper. May 13, 2015 24.221 Final Paper May 13, 2015 Determinism states the following: given the state of the universe at time t 0, denoted S 0, and the conjunction of the laws of nature, L, the state of the universe S at

More information

VAGUENESS. Francis Jeffry Pelletier and István Berkeley Department of Philosophy University of Alberta Edmonton, Alberta, Canada

VAGUENESS. Francis Jeffry Pelletier and István Berkeley Department of Philosophy University of Alberta Edmonton, Alberta, Canada VAGUENESS Francis Jeffry Pelletier and István Berkeley Department of Philosophy University of Alberta Edmonton, Alberta, Canada Vagueness: an expression is vague if and only if it is possible that it give

More information

The St. Petersburg paradox & the two envelope paradox

The St. Petersburg paradox & the two envelope paradox The St. Petersburg paradox & the two envelope paradox Consider the following bet: The St. Petersburg I am going to flip a fair coin until it comes up heads. If the first time it comes up heads is on the

More information

From Necessary Truth to Necessary Existence

From Necessary Truth to Necessary Existence Prequel for Section 4.2 of Defending the Correspondence Theory Published by PJP VII, 1 From Necessary Truth to Necessary Existence Abstract I introduce new details in an argument for necessarily existing

More information

Presupposition and Accommodation: Understanding the Stalnakerian picture *

Presupposition and Accommodation: Understanding the Stalnakerian picture * In Philosophical Studies 112: 251-278, 2003. ( Kluwer Academic Publishers) Presupposition and Accommodation: Understanding the Stalnakerian picture * Mandy Simons Abstract This paper offers a critical

More information

Proof as a cluster concept in mathematical practice. Keith Weber Rutgers University

Proof as a cluster concept in mathematical practice. Keith Weber Rutgers University Proof as a cluster concept in mathematical practice Keith Weber Rutgers University Approaches for defining proof In the philosophy of mathematics, there are two approaches to defining proof: Logical or

More information

Class #14: October 13 Gödel s Platonism

Class #14: October 13 Gödel s Platonism Philosophy 405: Knowledge, Truth and Mathematics Fall 2010 Hamilton College Russell Marcus Class #14: October 13 Gödel s Platonism I. The Continuum Hypothesis and Its Independence The continuum problem

More information

The Kripkenstein Paradox and the Private World. In his paper, Wittgenstein on Rules and Private Languages, Kripke expands upon a conclusion

The Kripkenstein Paradox and the Private World. In his paper, Wittgenstein on Rules and Private Languages, Kripke expands upon a conclusion 24.251: Philosophy of Language Paper 2: S.A. Kripke, On Rules and Private Language 21 December 2011 The Kripkenstein Paradox and the Private World In his paper, Wittgenstein on Rules and Private Languages,

More information

Reductio ad Absurdum, Modulation, and Logical Forms. Miguel López-Astorga 1

Reductio ad Absurdum, Modulation, and Logical Forms. Miguel López-Astorga 1 International Journal of Philosophy and Theology June 25, Vol. 3, No., pp. 59-65 ISSN: 2333-575 (Print), 2333-5769 (Online) Copyright The Author(s). All Rights Reserved. Published by American Research

More information

Ramsey s belief > action > truth theory.

Ramsey s belief > action > truth theory. Ramsey s belief > action > truth theory. Monika Gruber University of Vienna 11.06.2016 Monika Gruber (University of Vienna) Ramsey s belief > action > truth theory. 11.06.2016 1 / 30 1 Truth and Probability

More information

Belief, Rationality and Psychophysical Laws. blurring the distinction between two of these ways. Indeed, it will be argued here that no

Belief, Rationality and Psychophysical Laws. blurring the distinction between two of these ways. Indeed, it will be argued here that no Belief, Rationality and Psychophysical Laws Davidson has argued 1 that the connection between belief and the constitutive ideal of rationality 2 precludes the possibility of their being any type-type identities

More information

INTERPRETATION AND FIRST-PERSON AUTHORITY: DAVIDSON ON SELF-KNOWLEDGE. David Beisecker University of Nevada, Las Vegas

INTERPRETATION AND FIRST-PERSON AUTHORITY: DAVIDSON ON SELF-KNOWLEDGE. David Beisecker University of Nevada, Las Vegas INTERPRETATION AND FIRST-PERSON AUTHORITY: DAVIDSON ON SELF-KNOWLEDGE David Beisecker University of Nevada, Las Vegas It is a curious feature of our linguistic and epistemic practices that assertions about

More information

THE CONCEPT OF OWNERSHIP by Lars Bergström

THE CONCEPT OF OWNERSHIP by Lars Bergström From: Who Owns Our Genes?, Proceedings of an international conference, October 1999, Tallin, Estonia, The Nordic Committee on Bioethics, 2000. THE CONCEPT OF OWNERSHIP by Lars Bergström I shall be mainly

More information

Now consider a verb - like is pretty. Does this also stand for something?

Now consider a verb - like is pretty. Does this also stand for something? Kripkenstein The rule-following paradox is a paradox about how it is possible for us to mean anything by the words of our language. More precisely, it is an argument which seems to show that it is impossible

More information

Logic and Pragmatics: linear logic for inferential practice

Logic and Pragmatics: linear logic for inferential practice Logic and Pragmatics: linear logic for inferential practice Daniele Porello danieleporello@gmail.com Institute for Logic, Language & Computation (ILLC) University of Amsterdam, Plantage Muidergracht 24

More information

Some questions about Adams conditionals

Some questions about Adams conditionals Some questions about Adams conditionals PATRICK SUPPES I have liked, since it was first published, Ernest Adams book on conditionals (Adams, 1975). There is much about his probabilistic approach that is

More information

Introduction to Statistical Hypothesis Testing Prof. Arun K Tangirala Department of Chemical Engineering Indian Institute of Technology, Madras

Introduction to Statistical Hypothesis Testing Prof. Arun K Tangirala Department of Chemical Engineering Indian Institute of Technology, Madras Introduction to Statistical Hypothesis Testing Prof. Arun K Tangirala Department of Chemical Engineering Indian Institute of Technology, Madras Lecture 09 Basics of Hypothesis Testing Hello friends, welcome

More information

- We might, now, wonder whether the resulting concept of justification is sufficiently strong. According to BonJour, apparent rational insight is

- We might, now, wonder whether the resulting concept of justification is sufficiently strong. According to BonJour, apparent rational insight is BonJour I PHIL410 BonJour s Moderate Rationalism - BonJour develops and defends a moderate form of Rationalism. - Rationalism, generally (as used here), is the view according to which the primary tool

More information

Causing People to Exist and Saving People s Lives Jeff McMahan

Causing People to Exist and Saving People s Lives Jeff McMahan Causing People to Exist and Saving People s Lives Jeff McMahan 1 Possible People Suppose that whatever one does a new person will come into existence. But one can determine who this person will be by either

More information

***** [KST : Knowledge Sharing Technology]

***** [KST : Knowledge Sharing Technology] Ontology A collation by paulquek Adapted from Barry Smith's draft @ http://ontology.buffalo.edu/smith/articles/ontology_pic.pdf Download PDF file http://ontology.buffalo.edu/smith/articles/ontology_pic.pdf

More information

BENEDIKT PAUL GÖCKE. Ruhr-Universität Bochum

BENEDIKT PAUL GÖCKE. Ruhr-Universität Bochum 264 BOOK REVIEWS AND NOTICES BENEDIKT PAUL GÖCKE Ruhr-Universität Bochum István Aranyosi. God, Mind, and Logical Space: A Revisionary Approach to Divinity. Palgrave Frontiers in Philosophy of Religion.

More information

Anthony P. Andres. The Place of Conversion in Aristotelian Logic. Anthony P. Andres

Anthony P. Andres. The Place of Conversion in Aristotelian Logic. Anthony P. Andres [ Loyola Book Comp., run.tex: 0 AQR Vol. W rev. 0, 17 Jun 2009 ] [The Aquinas Review Vol. W rev. 0: 1 The Place of Conversion in Aristotelian Logic From at least the time of John of St. Thomas, scholastic

More information

A Discussion on Kaplan s and Frege s Theories of Demonstratives

A Discussion on Kaplan s and Frege s Theories of Demonstratives Volume III (2016) A Discussion on Kaplan s and Frege s Theories of Demonstratives Ronald Heisser Massachusetts Institute of Technology Abstract In this paper I claim that Kaplan s argument of the Fregean

More information

2nd International Workshop on Argument for Agreement and Assurance (AAA 2015), Kanagawa Japan, November 2015

2nd International Workshop on Argument for Agreement and Assurance (AAA 2015), Kanagawa Japan, November 2015 2nd International Workshop on Argument for Agreement and Assurance (AAA 2015), Kanagawa Japan, November 2015 On the Interpretation Of Assurance Case Arguments John Rushby Computer Science Laboratory SRI

More information

IN DEFENCE OF CLOSURE

IN DEFENCE OF CLOSURE IN DEFENCE OF CLOSURE IN DEFENCE OF CLOSURE By RICHARD FELDMAN Closure principles for epistemic justification hold that one is justified in believing the logical consequences, perhaps of a specified sort,

More information

On the epistemological status of mathematical objects in Plato s philosophical system

On the epistemological status of mathematical objects in Plato s philosophical system On the epistemological status of mathematical objects in Plato s philosophical system Floris T. van Vugt University College Utrecht University, The Netherlands October 22, 2003 Abstract The main question

More information

Module 5. Knowledge Representation and Logic (Propositional Logic) Version 2 CSE IIT, Kharagpur

Module 5. Knowledge Representation and Logic (Propositional Logic) Version 2 CSE IIT, Kharagpur Module 5 Knowledge Representation and Logic (Propositional Logic) Lesson 12 Propositional Logic inference rules 5.5 Rules of Inference Here are some examples of sound rules of inference. Each can be shown

More information

ILLOCUTIONARY ORIGINS OF FAMILIAR LOGICAL OPERATORS

ILLOCUTIONARY ORIGINS OF FAMILIAR LOGICAL OPERATORS ILLOCUTIONARY ORIGINS OF FAMILIAR LOGICAL OPERATORS 1. ACTS OF USING LANGUAGE Illocutionary logic is the logic of speech acts, or language acts. Systems of illocutionary logic have both an ontological,

More information

part one MACROSTRUCTURE Cambridge University Press X - A Theory of Argument Mark Vorobej Excerpt More information

part one MACROSTRUCTURE Cambridge University Press X - A Theory of Argument Mark Vorobej Excerpt More information part one MACROSTRUCTURE 1 Arguments 1.1 Authors and Audiences An argument is a social activity, the goal of which is interpersonal rational persuasion. More precisely, we ll say that an argument occurs

More information

Complications for Categorical Syllogisms. PHIL 121: Methods of Reasoning February 27, 2013 Instructor:Karin Howe Binghamton University

Complications for Categorical Syllogisms. PHIL 121: Methods of Reasoning February 27, 2013 Instructor:Karin Howe Binghamton University Complications for Categorical Syllogisms PHIL 121: Methods of Reasoning February 27, 2013 Instructor:Karin Howe Binghamton University Overall Plan First, I will present some problematic propositions and

More information

OSSA Conference Archive OSSA 8

OSSA Conference Archive OSSA 8 University of Windsor Scholarship at UWindsor OSSA Conference Archive OSSA 8 Jun 3rd, 9:00 AM - Jun 6th, 5:00 PM Commentary on Goddu James B. Freeman Follow this and additional works at: https://scholar.uwindsor.ca/ossaarchive

More information

Haberdashers Aske s Boys School

Haberdashers Aske s Boys School 1 Haberdashers Aske s Boys School Occasional Papers Series in the Humanities Occasional Paper Number Sixteen Are All Humans Persons? Ashna Ahmad Haberdashers Aske s Girls School March 2018 2 Haberdashers

More information

Lecture 3. I argued in the previous lecture for a relationist solution to Frege's puzzle, one which

Lecture 3. I argued in the previous lecture for a relationist solution to Frege's puzzle, one which 1 Lecture 3 I argued in the previous lecture for a relationist solution to Frege's puzzle, one which posits a semantic difference between the pairs of names 'Cicero', 'Cicero' and 'Cicero', 'Tully' even

More information

A Priori Bootstrapping

A Priori Bootstrapping A Priori Bootstrapping Ralph Wedgwood In this essay, I shall explore the problems that are raised by a certain traditional sceptical paradox. My conclusion, at the end of this essay, will be that the most

More information

Chadwick Prize Winner: Christian Michel THE LIAR PARADOX OUTSIDE-IN

Chadwick Prize Winner: Christian Michel THE LIAR PARADOX OUTSIDE-IN Chadwick Prize Winner: Christian Michel THE LIAR PARADOX OUTSIDE-IN To classify sentences like This proposition is false as having no truth value or as nonpropositions is generally considered as being

More information

1. Lukasiewicz s Logic

1. Lukasiewicz s Logic Bulletin of the Section of Logic Volume 29/3 (2000), pp. 115 124 Dale Jacquette AN INTERNAL DETERMINACY METATHEOREM FOR LUKASIEWICZ S AUSSAGENKALKÜLS Abstract An internal determinacy metatheorem is proved

More information