Believing on the Basis of the Evidence*

Henry E. Kyburg, Jr.


1. Introduction

Do you believe that the temperature is between 64°F and 66°F when your well-calibrated thermometer reads 65.1°F? Do you believe that it will rain tomorrow when the newspaper forecasts rain? Many of us would reply affirmatively to the first question and negatively to the second, claiming that on reflection what we believe about the weather is that the chance of rain is 'high'. The human lust for uniformity and generality, however, has led others to seek a single answer. The cautious philosophical probabilist might point to the theory of error, according to which errors are distributed approximately normally, to conclude that relative to the evidence the probability is merely extremely high that the temperature is between 64°F and 66°F. The pragmatist might argue that just as taking the temperature to be between 64°F and 66°F provides a perfectly adequate basis for deciding what clothes to wear, so having the paper forecast rain provides a perfectly adequate basis for postponing the picnic.

Furthermore, each can explain away the talk of the other. The probabilist will claim that the pragmatist is just making decisions under uncertainty, seeking to maximize his expected utility, and that the probabilities and utilities in each case lead to the pragmatist's decision, which he misleadingly calls a "conclusion." The pragmatist will claim that the probabilist is just making something simple complicated: in any ordinary context we will act as if the statement about the temperature is true; if we get contrary evidence, of course we will no longer accept it. Similarly, we will suppose the paper is right, unless and until we have new evidence that casts doubt on its conclusion.

There are two fundamentally distinct ways of thinking about thinking about the world. One is contemplative; the other is oriented toward action. One seeks pure knowledge; the other is pragmatic. One leads to hedged claims; the other leads to categorical claims in a hedged way. Both approaches to thinking about the world have ancient roots: Socrates, seeking wisdom; Alexander, the man of action. Both are represented in contemporary philosophy: Carnap wanted to associate with each statement of our language its appropriate degree of confirmation, relative to what we know; Peirce and Dewey took the impetus for deliberation to be the necessity to choose an action, and the outcome to be the act.[1]

* The author gratefully acknowledges the support of the National Science Foundation.

Both approaches are represented in artificial intelligence: the probabilists taking the correct representation of our trans-evidential conclusions about the world to be hedged statements (the probability of rain tomorrow is .67),[2] and the logicists taking the representation to be categorical statements (it will rain tomorrow), appropriately hedged: in a non-monotonic logic the conclusions can be withdrawn in the face of new evidence.[3]

Of course both probabilists and logicists take for granted that there is a set of statements that we may take as background knowledge; and they both take for granted that there is another set of statements that functions as evidence for the problem at hand. From a philosophical point of view, there are serious issues raised by supposing that both background knowledge and evidence can be represented in a single first order language.[4] We will leave those issues aside, however, for from our point of view we must be able to take for granted appropriate parts of background knowledge, and we must be able to articulate our evidence within the system. Thus to start with, we will assume that there is a given canonical first order language L, within which our beliefs may be expressed. Let BK be a set of sentences in L representing our background knowledge, and E be a sentence representing evidence (for example, the thermometer reading). Let C be a sentence representing a conclusion. Then the distinction can be succinctly expressed as follows:[5]

schema 1:

    BK, E
    ---------
    C, hedged

(BK, E; therefore C, suitably hedged, e.g. "probably C".)

schema 2:

    BK, E
    =========  (hedged inference)
    C

(BK, E; therefore, tentatively, pending new information, C.)

[1] Both views are discussed in detail in Kyburg 1970, mainly from the point of view of the philosophy of statistics and philosophy of science. For example, see Carnap 1950, Dewey 1953, Peirce.
[2] See, for example, Peter Cheeseman.
[3] See AIJ 1980, or Ginsberg.
[4] Isaac Levi (Levi 1991) shows by example that it is possible to work directly with an algebra of full beliefs. While this is of philosophical importance, it is of questionable relevance to artificial intelligence.
[5] The representation is due to Hempel.
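As a rough illustration of the contrast between the two schemas (my own sketch, not part of Kyburg's text), the following Python fragment treats the evidential probability function as a black box: schema 1 returns a hedged claim, while schema 2 returns a categorical but retractable conclusion. The function names, the toy probability table, and the thresholds are all invented for the example.

```python
# A minimal sketch (not from the paper) contrasting the two schemas.
# We assume some externally supplied evidential probability function
# prob(conclusion, premises) returning a number in [0, 1].

def schema_1(prob, conclusion, premises):
    """Deduce a hedged conclusion: the hedge itself is the output."""
    p = prob(conclusion, premises)
    return f"the probability of {conclusion!r} is {p}"

def schema_2(prob, conclusion, premises, delta=0.05):
    """Hedged inference to a categorical conclusion: accept C outright
    when its probability is at least 1 - delta; withdraw it otherwise."""
    if prob(conclusion, premises) >= 1 - delta:
        return conclusion          # accepted, pending new information
    return None                    # not accepted (or withdrawn)

# Toy example: evidence E = "the paper forecasts rain" (invented values).
def toy_prob(conclusion, premises):
    table = {("rain tomorrow", frozenset({"forecast: rain"})): 0.67,
             ("rain tomorrow", frozenset({"forecast: rain", "clear sky at midnight"})): 0.20}
    return table.get((conclusion, frozenset(premises)), 0.5)

print(schema_1(toy_prob, "rain tomorrow", {"forecast: rain"}))       # hedged claim
print(schema_2(toy_prob, "rain tomorrow", {"forecast: rain"}))       # None: 0.67 < 0.95
print(schema_2(toy_prob, "rain tomorrow", {"forecast: rain"}, 0.4))  # accepted at a laxer threshold
print(schema_2(toy_prob, "rain tomorrow",
               {"forecast: rain", "clear sky at midnight"}, 0.4))    # withdrawn: non-monotonic
```

The last two calls preview the non-monotonicity discussed in section 2: the same categorical conclusion is accepted on one body of evidence and withdrawn on an expanded one.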

The first thing to notice is that the first schema represents a perfectly classical deductive inference. The conclusion follows, with complete certainty, from the premises. The conclusion itself, of course, is a hedged conclusion. This is particularly clear in the case of probability, where, given a prior distribution, "probably C" (or "the probability of C is p"), relative to BK and E, is the ratio of the probability of BK and E and C to the probability of BK and E. This is a deductive consequence of the axioms of the probability calculus. Of course there are other formalisms for expressing hedges, including qualitative ones, or ones that seem quite different from probability.[6] Nevertheless, the point seems quite general. Let H be an operator that expresses a qualitative or quantitative way of hedging a conclusion; thus H(C) yields an element of some domain of doubt or credibility. If this operator is to be useful, we should be able to axiomatize it: that is, given BK and E we should be able to compute H(C). But then, relative to these axioms, there is no uncertainty about the inference from BK, E to the conclusion H(C).

Note also that the second schema does not preclude conclusions of the form "the probability of S is p." But in this case the p of the conclusion does not represent the uncertainty of the inference. In fact there may be no way of expressing the uncertainty of the inference, as a number of people who have written on nonmonotonic inference have argued.[7]

It is not easy to see how to reconcile these two approaches to what Israel[8] has called "real inference," nor is it easy to see how to choose between them. One could argue that, so far as probabilities are concerned, the computational problems are simply intractable: there is no feasible way of assigning numbers to each sentence of a reasonably rich first order language. But of course first order logic itself is already only semi-decidable, so it isn't clear that we have lost much, and this difficulty applies to the approach of logicists as much as to the approach of probabilists. It seems likely that it is only approximations of one form or another that can be constructed in software. If neither the logicist program nor the probabilist program can be carried out except in an approximate and truncated form, it may only be on the basis of the practical success of these approximations that a judgment can be made. This, of course, also suggests (realistically) that some applications may prove amenable to one approach, others to the other approach.

[6] There are fewer than has been thought. A number of connections exist among the various representations, particularly among those developed for this context. See Kyburg 1983a, Heckerman 1986, Kyburg.
[7] McCarthy and Hayes 1969, Nutter 1987, Perlis.
[8] Israel.

While this may be true, it may still be worthwhile pursuing a particular approach wholeheartedly. Only thus can it reveal its mettle. One should keep an open mind, but those whose interests are theoretical or foundational should not be led to feel that one answer, in the abstract, is as good as another; nor should one be quite sure, just because something works in one context, that this provides a certification of abstract, general usefulness, or for that matter a certification of unique usefulness for that context. There is thus much to be said for pushing a particular approach, and exploring the limits of its applicability. Only the pig-headed and stubborn can give an idea a fair run for its money.

In what follows I will assume the stance described in detail in (Kyburg, 1991), in which acceptance or full belief plays a crucial role, but in which acceptance is characterized in terms of probability. By "probability" here I mean evidential probability:[9] probability that is not subjective, but takes account of evidence, and that applies to singular propositions.

2. The Lottery

The immediate and obvious problem for an attempt to construe acceptance in terms of probability is the "lottery paradox."[10] Suppose that 1 − δ is taken as a high enough probability for acceptance. It isn't hard to find n statements (the lottery provides only the simplest example) each of which has a probability at least that high (ticket #i will lose), but such that the denial of the conjunction is also at least that probable (not all tickets will lose). Thus we end up accepting a set of statements that has no model (each ticket loses, but not all tickets lose).

We can make this more general. Suppose that m is any measure of acceptability with the monotonic property that m(S ∨ T) > m(S) whenever S does not entail T. A statement S is to be acceptable if and only if m(S) ≥ 1 − δ. We are faced with inconsistency the moment we can find k statements S_1, S_2, ..., S_k, each of which is acceptable, with the following additional property: the disjunction of the first i does not entail the (i+1)st, so that m(S_1 ∨ S_2 ∨ ... ∨ S_{i+1}) > m(S_1 ∨ S_2 ∨ ... ∨ S_i); the minimum value of the difference m(S_1 ∨ S_2 ∨ ... ∨ S_{i+1}) − m(S_1 ∨ S_2 ∨ ... ∨ S_i) is ε; and kε ≥ 1 − δ.

That the problem is a realistic one may be seen by thinking about measurement. The Gaussian theory of error supposes that errors of measurement are distributed normally. This is no doubt false in detail, but as a reasonable approximation it serves us well. That a measuring instrument is well calibrated means that its bias has been corrected for, so that the errors resulting from its use have approximately a normal distribution with a mean of zero and a variance that is characteristic of the instrument.

[9] Kyburg 1991b.
[10] Kyburg.

Now consider a measuring person (laboratory) or robot. For example, suppose that it is measuring ball bearings to determine whether or not the diameter is within tolerance, e.g., within .0015" of what it is supposed to be. The result of a measurement is a number that represents the true diameter, plus the (positive or negative) error of that measurement. What must be decided first, since arbitrarily large errors are possible, is what is to count as adequate confidence. Suppose we decide that .99 is adequate; that is, we are willing to be wrong 1% of the time in our claims concerning the diameter of a ball bearing. It is characteristic of the approximately normal distribution of error of our well calibrated measuring instrument that an error of more than .0002 will not happen more than 1% of the time. The process then looks like this: The diameter of a ball bearing is measured. The result is d. The person/robot infers with confidence .99 that the true diameter is d ± .0002, or that the true diameter is in the interval [d − .0002, d + .0002]. We know that he is going to be right 99% of the time, and that's good enough.

It is certainly natural to say that the person/robot is accepting the result of the measurements it makes, in the sense that it assigns an interval to the diameter of the ball bearing, based on the observed reading of the instrument, and, accordingly, rejects or accepts the ball bearing as meeting tolerance. But the person/robot should also be 99% confident that of 10,000 ball bearings, each of which is accepted as meeting tolerance, at least one does not meet tolerance. If 99% confidence is adequate to warrant the acceptance of a ball bearing, it should also be adequate to warrant acceptance of the assertion that not all the accepted ball bearings are acceptable. To take 99% confidence as warranting acceptance is to allow the set of accepted statements to admit of no model: to be inconsistent.

One response is to say that although we are speaking as though the agent were accepting the proposition that each ball bearing meets tolerance, we should, properly, construe the agent as assigning a high probability to that proposition. Another response is to bite the bullet and say that a body of accepted statements can be useful even though it is inconsistent. Most writers construe the lottery paradox as an argument against a purely probabilistic acceptance rule.[11] The reason is that it is generally assumed that a set of beliefs should be closed under implication, and of course an inconsistency implies anything. Note that this closure condition is an additional constraint on the set of beliefs.

[11] That was not the original intent; I thought of it as an argument against deductive closure!
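To put rough numbers on the ball-bearing example, here is a small illustrative sketch (mine, not the paper's). The error standard deviation, the observed reading, and the assumption of roughly independent errors are all made-up ingredients chosen only to mirror the discussion above.

```python
# Illustrative sketch of the ball-bearing example (all values are made up).
# A well-calibrated instrument has errors ~ Normal(0, sigma); at confidence
# .99 we accept "the true diameter lies in [d - w, d + w]" for each reading d.
from scipy.stats import norm

sigma = 0.00008                       # assumed standard deviation of the instrument's error
w = norm.ppf(0.995) * sigma           # half-width with P(|error| <= w) = .99 (about .0002 here)
d = 0.2501                            # one observed reading (invented)
print(f"accept: true diameter in [{d - w:.5f}, {d + w:.5f}]")

# Each individual acceptance is right 99% of the time.  But over 10,000
# accepted bearings the chance that every such claim is right is tiny, so
# "at least one accepted bearing is out of tolerance" is itself acceptable
# at the .99 level -- the lottery paradox again.
n = 10_000
p_all_right = 0.99 ** n               # treats the errors as roughly independent
print(f"P(all {n} interval claims correct) = {p_all_right:.2e}")
print(f"P(at least one claim wrong)        = {1 - p_all_right:.6f}")
```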

If we adopt a purely probabilistic acceptance rule, we will have an inconsistent set of statements as our beliefs, but we will not have any single inconsistency among them. We should look more closely at the idea of "inconsistency." Despite the arguments of Levi,[12] it is not obvious that a body of knowledge that contains many truths, but is inconsistent, is not of greater interest than a body of knowledge that is consistent but contains much less. This is particularly the case with regard to an artificial system, in which only beliefs expressed in a formal language need be considered.

Let us construe probability as evidential probability.[13] Adopting Hempel's second schema,

    BK, E
    =========  (hedged inference)
    C

let us examine the class of conclusions obtained by hedged inference, where that is to be unpacked in terms of probability: C may be inferred from E and background BK just in case Prob(C, BK ∪ {E}) ≥ 1 − δ > 1/2, for fixed δ.[14]

The following properties characterize probabilistic or uncertain inference, thus construed. They become theorems for evidential probability.

This form of inference is non-monotonic: we may have Prob(C, BK ∪ {E}) ≥ 1 − δ, but Prob(C, BK ∪ {E} ∪ {E'}) < 1 − δ, so that inference of C is warranted when we have evidence E, but is not warranted when we have additional evidence E'.

The conclusion C may have any form, since probability is defined for any sentence of the language. In particular C may be a conjunction. But if inference of C is warranted, and inference of D is warranted, inference of C & D may not be warranted: in general the probability of a conjunction may be less than the probability of either conjunct.

If D is entailed by C and C is a conclusion that is warranted, then D is a conclusion that is warranted. Thus any logical consequence of a conclusion that may legitimately be inferred from BK ∪ {E} may also be legitimately inferred from BK ∪ {E}.

[12] Levi 1980, Levi.
[13] Kyburg 1991b.
[14] Evidential probability is interval valued; the inequality concerns the lower bound of the probability interval.

If BK ∪ {E} is consistent in the sense that its deductive closure does not contain every sentence of L, and C is a member of BK ∪ {E}, then C may legitimately be inferred from BK ∪ {E}. More interestingly, perhaps, even if BK ∪ {E} is inconsistent in this sense, evidential probabilities exist relative to BK ∪ {E}, and some members of BK ∪ {E} may be legitimately inferred from BK ∪ {E} (as well, of course, as other statements having high probabilities). If C is in BK ∪ {E}, then C may be inferred from BK ∪ {E} if and only if there is no set of statements {S_1, ..., S_k} in BK ∪ {E} such that they jointly imply ¬C.

Since the logical consequences of a conclusion that may be legitimately inferred may also be legitimately inferred, we may characterize the set of legitimate consequences of BK ∪ {E} in terms of the set of strongest conclusions. Excluding theorems of the language L, it is possible that there may be only a finite number of these conclusions. We may construe them, therefore, as providing a finite axiomatization of our body of knowledge at the acceptance level in question.

Is the possible inconsistency of the set of conclusions probabilistically inferable from BK ∪ {E} an unmitigated disaster? It doesn't seem so. It clearly does not explode into the set of all sentences of the language. And indeed this approach seems to provide a way of dealing sensibly with inconsistent BK ∪ {E}. It is even possible to provide a kind of semantics for this form of inference.[15] Let us call a maximal consistent subset of BK ∪ {E} a strand of BK ∪ {E}. A strand clearly does have a model. We may consider the set of models corresponding to the strands of BK ∪ {E} to represent possibilities. Thus for the simple lottery of k tickets, there are k strands, each characterized by the assertion that ticket #i wins, for 1 ≤ i ≤ k. But at the level of confidence or acceptance we take ourselves to be concerned with, these possibilities are so remote as to be disregarded. If BK ∪ {E} is inconsistent, and the level of acceptance or confidence characteristic of BK ∪ {E} is 1 − 1/n, then there must be at least n strands in BK ∪ {E}. Since epistemological probability is based on knowledge of frequencies, the probability of C, relative to BK ∪ {E}, will be greater than 1 − δ only if something that happens at least that frequently according to a statement in BK ∪ {E} actually occurs, and furthermore only if this is true for every strand of BK ∪ {E}. If we think of the statistical statement underlying the probability as allowing for a variety of states of affairs or possible worlds, then the inference to C is legitimate just in case in almost all (at least 1 − δ) of the relevant worlds C is true.

[15] Kyburg 1992, and Kyburg in preparation.
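As a toy gloss on the strand idea (my own illustration, with a made-up lottery size and acceptance level, and deliberately much cruder than Kyburg's frequency-based machinery), one can enumerate the strands of the accepted lottery statements and ask in what fraction of the corresponding possibilities a candidate conclusion holds.

```python
# Toy illustration of "strands" for a k-ticket lottery (made-up example).
# Accepted statements: "ticket i loses" for each i, plus "some ticket wins".
# Each maximal consistent subset (strand) drops exactly one "ticket i loses",
# i.e. corresponds to the possibility that ticket i is the winner.
k = 1000
delta = 0.01                                  # acceptance level 1 - delta

def holds_in_strand(sentence, winner):
    """Truth of a simple sentence in the strand where `winner` wins."""
    if sentence == "some ticket wins":
        return True
    if sentence == "all tickets lose":
        return False
    i = int(sentence.split()[1])              # sentences like "ticket 7 loses"
    return i != winner

def fraction_of_strands(sentence):
    return sum(holds_in_strand(sentence, w) for w in range(1, k + 1)) / k

for s in ["ticket 7 loses", "some ticket wins", "all tickets lose"]:
    frac = fraction_of_strands(s)
    print(f"{s!r}: true in {frac:.3%} of strands ->",
          "inferable" if frac >= 1 - delta else "not inferable")
```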

3. Regress and Progress

Where does the background knowledge and evidence come from? A gratifyingly unified theory would have them based on the same sort of uncertain inference we have just been discussing. A statement gets into BK ∪ {E} in virtue of being highly probable relative to a corpus of background data and knowledge that has even higher evidential standards than does BK ∪ {E}. This explains the relation between the lottery paradox and the fact noted earlier that a statement C that is in BK ∪ {E} may be inferred from it if and only if there is no set of statements {S_1, ..., S_k} in BK ∪ {E} that jointly imply ¬C. For each of these statements to be in BK ∪ {E}, it must have a probability higher than 1 − ε. Yet since they jointly are inconsistent with C, their conjunction must imply ¬C. And therefore C implies the denial of their conjunction, and since the lower probability of C is greater than 1 − ε, the lower probability of the denial of the conjunction of the S_i is greater than 1 − ε. Which is just to say that the upper probability of the conjunction is less than ε. Note that we have exactly the structure of the lottery paradox (except that the lottery doesn't have to be perfectly fair): a set of k + 1 sentences, one and only one of which must be false, each of which is highly probable. It follows that if ε is 1/n there must be at least n statements in the set.

But there is a difficulty: we may have opened the door to an infinite regress. If what we can accept at a certain level depends on evidence, and what evidence we can accept depends on the next level of evidence, and what we can accept at that level depends on yet another level of evidence, we may never come to a halt. In order to accept that the temperature lies between 65 degrees and 66 degrees, we must accept that the distribution of errors of our thermometer is given by D; in order to accept that, we must have evidence to warrant a statistical inference; in order to have that evidence, we must accept yet another bunch of data, ... It is the specter of this infinite regress that leads some people to throw up their hands, and argue that we can make whatever assumptions we want, so long as we are explicit about them. "You have to start somewhere!" Of course you can make whatever assumptions you want, but, from the present point of view, you can only expect to be taken seriously if those assumptions are justified. The question is whether they can be justified without infinite regress or circularity.

Let us examine this question in more detail. Suppose that in a given kind of context practical certainty is represented by a probability of 0.90. That is, we are willing to act on the basis of a probability this high; i.e., we are not, in this kind of context, willing to bet against a statement whose lower probability is greater than 0.90. In the context of measurement, for example, we will be willing to accept (to use in a design) a value V ± d, when the probability, relative to the evidence we have, that the true value of the quantity being measured lies in this interval is at least 0.90.

Now it would be unrealistic for us to demand that the evidence on the basis of which we compute this probability be absolutely certain and incorrigible. In real life, we will be taking as evidence that the errors of measurement are approximately normally distributed, that the variance of this distribution is approximately d/2.5, and so on. None of these claims is true a priori. Some writers would say that these claims are "assumptions" or "presuppositions" and that it is fine to make them so long as they are open and above board (on the table, so to speak). It is possible to talk this way only because the assumptions that people usually make are relatively benign and well supported. The assumption that my (Kyburg's) measurements are error free, even if made openly and explicitly, would hardly constitute a ground for accepting my measurements as error free. The problem with construing a substantive claim to be "just an assumption" is that it often is taken as insulating the claim from critical examination. If the claim is open to examination, and is warranted, then it can be justified by evidence. And in the framework being discussed here, that means that there is a body of evidence, relative to which it has a high probability.

How high ought the probability of evidence be in order to be used in establishing the practical certainties of level 0.90? Here is a heuristic argument that answers that question.

(1) If S is in BK ∪ {E}, and S and T are practically certain relative to BK ∪ {E}, then S & T is practically certain, relative to BK ∪ {E}. We can establish this very modest bit of conjunctive closure by noting that any two statements known in BK ∪ {E} to have the same truth value (i.e., such that their biconditional is in BK ∪ {E}) should have the same probability. We have already observed that statements logically entailed by an acceptable statement are acceptable. Since T → (S & T) is entailed by S, it will appear in BK ∪ {E}; since T is practically certain relative to BK ∪ {E}, so is S & T.

(2) If the lower probability of S is greater than 1 − ε, and the lower probability of T is greater than 1 − ε, then the lower probability of S & T is greater than 1 − 2ε. (Proof:

    P(S & T) + P(S & ¬T) + P(¬S & T) + P(¬S & ¬T) = 1,

and so

    P(S & T) = 1 − [P(S & ¬T) + P(¬S & ¬T)] − [P(¬S & T) + P(¬S & ¬T)] + P(¬S & ¬T)

holds for any P satisfying the upper and lower constraints. But P(S & ¬T) + P(¬S & ¬T) < ε, P(¬S & T) + P(¬S & ¬T) < ε, and P(¬S & ¬T) ≥ 0; so P(S & T) > 1 − 2ε.)

Thus if 1 − 2ε represents practical certainty, 1 − ε may plausibly be taken as evidential certainty. In the case at hand, that means we should be at least .95 confident that the distribution of error is approximately normal, that the variance is less than d/2.5, etc. This set of statements is the set of statements corresponding, on the next level, to BK ∪ {E}. Let us represent it by [BK ∪ {E}]*, and take its acceptance criterion to be 0.95.

Now if someone should question one of the "assumptions" on the basis of which we inferred with practical certainty that the value of the quantity is in V ± d, we have a framework within which to discuss the question. For example, suppose someone questions the assumption that the variance of the errors of measurement of the sort contemplated is less than d/2.5. That constitutes an orderly shift of context: we are now examining the evidence relative to which the claim concerning the variance has a probability of at least 0.95. The evidence for that claim is assumed to have a justifiable probability of at least 0.975.

Can this process go on indefinitely? In some sense it could. But it can also be cut off in two ways. First, a step in the regress occurs only when there is a serious question about the evidence, or the evidence for the evidence, or .... Frivolous[16] questions don't count. The second reason is that even if one did pursue the questions back to a body of knowledge and evidence that contained only mathematical and logical generalizations, and bare observational evidence, we could still have enough material in that body of knowledge to obtain high probabilities. This reflects the fact that some statistical statements (e.g., "Most subsets of a given set reflect the qualities of the given set") are set theoretical truths. We will return to this later when we talk about boot-strapping ourselves into error. (Which, contrary to what you might think, is not a bad thing!)

4. Probabilities

"Subjective probability" is ambiguous: It may refer to probabilities (however they may be fleshed out) that are relativized to a subject, i.e., that may vary from subject to subject, depending on the experience (or evidence) available to the subject in question.

[16] Some people would say "philosophical."

Or it may refer to probabilities that reflect the personal conviction of the agent, however it may have arisen: via logical analysis of the data, prejudice, hearsay, mistake, misunderstanding, computational error, or whatever. We will avoid the former use, and employ the phrase "subjective probability" only in its pejorative sense.[17]

Evidential probability is directly related to the evidence the agent has. It is based on statistical knowledge in the database of the agent. In our measurement example, we take it as part of the agent's database or background knowledge BK that errors of measurement are distributed approximately normally, with a mean close to 0 and a variance close to d/2.5. This response generates two problems that must be dealt with. First, how does the agent come to know this statistical fact about errors of measurement? Second, if he can know this about errors of measurement, then he may also know something about the errors of measurement characteristic of the use of a certain instrument, or of an instrument made by a certain manufacturer, or of a certain kind of instrument, or of measurements made by a certain worker, or ... There is an indefinitely large set of possible reference classes to which a particular measurement may be referred. How is this embarrassment of riches to be dealt with?

With regard to the first question, the answer should be clear. Like any other item in the database of the agent, items of statistical knowledge get there by being probable enough, relative to a higher level database. The details of statistical inference on this view do not quite conform to either classical statistics or to personalistic Bayesianism, but they are not far different. There are cases, in fact, where everyone agrees.[18] For example, if our agent had reason to believe that the distribution of errors was approximately normal, a sample of measurements could provide evidence regarding the variance of that distribution. That background knowledge and evidence could render an interval estimate of the variance acceptable. (On a classical view of statistical inference this would not be expressed by saying that it was probable that the variance was in that interval, but even on this view the inferential procedure would be characterized by a low probability of error.) While there is still a lot of controversy surrounding statistical inference, the basic intuitions are pretty clear: parent populations are usually pretty much like their samples, in certain respects, and subject to certain provisos. From samples we may infer, uncertainly, general characteristics of real or hypothetical populations.

[17] Some writers would say that the latter sense is the only realistic one: we can't control where people get their convictions from, and it is only those convictions that can be used to motivate action. I shall suppose that rational argument can modify conviction, and that probability plays a normative role.
[18] Even R. A. Fisher (Fisher 1956).
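As a concrete illustration of the kind of inference just described, here is a hedged sketch of an interval estimate of the error variance from a sample, assuming approximate normality. The sample values and the 95% level are invented for the example, and the standard chi-square interval stands in for whatever inferential procedure one prefers.

```python
# Sketch: interval estimate of the error variance from a sample of errors,
# assuming the errors are approximately normal (made-up data and level).
import numpy as np
from scipy.stats import chi2

errors = np.array([0.00012, -0.00008, 0.00021, -0.00015, 0.00005,
                   0.00017, -0.00022, 0.00009, -0.00011, 0.00003])
n = len(errors)
s2 = errors.var(ddof=1)                      # sample variance
conf = 0.95                                  # evidential level chosen for the example
lo = (n - 1) * s2 / chi2.ppf(1 - (1 - conf) / 2, df=n - 1)
hi = (n - 1) * s2 / chi2.ppf((1 - conf) / 2, df=n - 1)
print(f"sample variance: {s2:.3e}")
print(f"{conf:.0%} interval estimate for the variance: [{lo:.3e}, {hi:.3e}]")
```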

The second question is exactly the question that evidential probability has wrestled with for a number of years.[19] There are some relatively uncontroversial aspects of the reference class problem that have not been widely acknowledged, however. For example, in classical probability, if "S ≡ T" is added to the background knowledge BK, then Prob(S | BK, S ≡ T) = Prob(T | BK, S ≡ T), since Prob(S & T | BK, S ≡ T) = Prob(S | BK, S ≡ T) · Prob(T | BK, S, S ≡ T) = Prob(S | BK, S ≡ T). Note that this is the truth-functional biconditional: it need merely be known in BK that S and T have the same truth value. Furthermore, given this fact, we can always find some version of the proposition in question that is tied to some (at least approximately) known statistics. The problem is not a lack of statistical data on which to base an evidential probability, but the problem of dealing with a rich set of alternative statistical bases, some of which conflict with others, and some of which are more precise than others. Weighing these factors judiciously and in a principled way is what we seek to do. This is exactly finding principles for the reasoned choice of a reference class, given a body of knowledge.

Some principles seem relatively uncontroversial. Other things being equal, if two reference classes disagree, and one is known to be included in the other, the included reference class is to be preferred. There are two other kinds of uncontroversial cases: the Bayesian construction, which advises us to take account of prior probabilities, if we know them, and the principle that larger samples should take precedence over smaller samples. So far as I have been able to tell, no other principles are needed for resolving disagreement between reference classes.

One more principle is needed, however, and it is controversial. It is the principle that if two reference classes do not disagree (every distribution regarded as possible for one class is also regarded as possible for the other, but perhaps not vice versa) then the more specific (stronger) information should dominate. This is the strength principle. If I have limited information about the life expectancies of College Professors, but no information that entails that it is different from the distribution of life expectancies among white collar workers in general, it is reasonable for me to use the more precisely known set of distributions. Against this principle is the intuition that if we don't know much about a narrow reference class to which an object is known to belong, that vagueness should be reflected in our epistemic probability, even when there is a broader candidate reference class about which we have more exact information.

[19] Starting with (Kyburg 1959a, b), and including (Kyburg 1961, 1974, 1991a, b; Loui 1986; Murtezaoglu 1991).

In its favor is a slippery slope argument: we have trivial information even about the unit class of an object, namely that either 100% or 0% of the objects in that class have the target property. How do we cut off the slide to the result that the probability that the object has the target property is [0,1]? We are planning to insure Jones: White collar workers? College Professors? College Professors who smoke? College professors who smoke and are of Irish extraction? College professors who smoke and are of Irish extraction and whose last names begin with "J"? And who are married to French women? If we have not yet reached the unit class, we are close to it. Of course, if we construe probabilities as subjective this is no problem: we don't need a reference class. One opinion is as good as the next, and our problem is to figure out what the opinion of the agent we have in mind actually is, not to figure out what it ought to be.

Evidential probability is intended to serve two purposes: to provide a basis for uncertain inference, leading to the (non-monotonic) acceptance of factual statements about the world, and to provide a basis for making decisions about actions. We have dealt with some of the former questions (we will deal more completely with the questions of evidence and acceptance later), so let us turn to the question of decisions.

5. Decisions

Standard wisdom about decision-making is that one should choose an action that maximizes expected utility: MEU. For each action, the possible outcomes are evaluated in terms of utility (the value to the agent) and in terms of probability. The expected utility of an action is the sum of the products of the utility of each alternative outcome, multiplied by its probability. An action with maximal expected utility (there may be more than one) is an action whose expected utility is not exceeded by that of any other action. This standard wisdom has been questioned.[20]

Subjective Bayesians have (or should have) no problem with the principle of maximizing expected utility, since for them both utility and probability are subjective: it is a question of finding out what the (true) utilities and probabilities (degrees of belief) of the agent are. Some difficulty is provided by the fact that decisions of seemingly rational people do not always obey the constraints of the theory.[21]

[20] Machina 1988, McClennen 1990, Allais 1979, Ellsberg 1961. It is sometimes suggested that variance, as well as probability, should be taken account of. A full discussion of these issues is beyond the scope of this paper, but it may be noted that evidential probability does have a contribution to make in connection with these arguments about MEU.

It is also not always easy to find out what the probabilities and utilities of an agent are. Of course, for an artificial agent, we, the designers, can build in whatever probabilities and utilities we think appropriate. But, again, this may not be an easy job. On the other hand, if we want objective probabilities that are based on statistical knowledge, such as evidential probabilities, the range of the probability function will be intervals rather than real numbers, and the decision problem becomes significantly changed. We may also, without courting any additional problems, consider interval-valued utilities. Whether or not it makes sense to speak of objective utilities for a group or society of decision makers, it is at any rate easier to imagine a group agreeing on interval utilities than on real valued utilities.

Let us suppose that an act A has the possible outcomes O_1, ..., O_k, and that these outcomes are exhaustive and mutually exclusive. Let their utilities be u(O_i) = [lu_i, hu_i]. (This may be an idealization or approximation; it may be that the actual utility-interval of an outcome depends on what other outcomes are possible.) Each of these outcomes has a probability, P(O_i) = [lp_i, hp_i]. (We suppress background knowledge and evidence.)

The first thing to be observed is that these upper and lower probabilities are not the appropriate ingredients for the decision theory. An example can make this clear. Suppose one act we contemplate depends on which of six kinds of outcome occurs. The utilities are u_1, ..., u_6. Let the probability of each outcome be [.1, .2]. The expected utility of the act is not [0.1·(Σ u_i), 0.2·(Σ u_i)], since the outcomes cannot all have probability 0.1 or probability 0.2. The expected utility (if the utility for each outcome is real valued) is dependent on the possible joint distributions of the outcomes. We require a common reference class for each of these outcomes; this is built into the machinery of evidential probability, in the rules for dealing with conflicting statistical knowledge. Our statistical knowledge about this common reference class can be expressed in the form of a family of distributions of the outcomes O_1, ..., O_6. The family might be given thus: F = {Multinomial(f_1, ..., f_6): Σ f_i = 1 & 0.1 ≤ f_i ≤ 0.2}. The bounds on the expected utility of this act would then be given by:

    [ min_{d ∈ F} (Σ d_i u_i), max_{d ∈ F} (Σ d_i u_i) ]

[21] In addition to the paradoxes of Allais 1979 and Ellsberg 1961, there are the psychological experiments discussed in Kahneman et al. 1982, that are alleged to show that people often violate the axioms of probability and decision theory. Some of these results are reflections of limited computational capacity and some are reflections of irrationality, but in some cases it is not clear that the alleged violations of Bayesian norms are actually violations, and in others it is not clear that the violations represent irrationality.
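To make the computation of these bounds concrete, here is an illustrative sketch; the utilities are invented, and the use of a generic linear-programming routine is my own choice rather than anything in the paper. Because the expectation is linear in the outcome frequencies, the bounds are attained at extreme points of the constrained simplex.

```python
# Sketch: bounds on expected utility when each outcome probability is only
# known to lie in [0.1, 0.2] and the probabilities must sum to 1.
# The utilities below are made-up numbers for illustration.
import numpy as np
from scipy.optimize import linprog

u = np.array([10.0, 4.0, 0.0, -2.0, 5.0, 8.0])   # u_1 ... u_6 (invented)
n = len(u)
bounds = [(0.1, 0.2)] * n                        # 0.1 <= f_i <= 0.2
A_eq, b_eq = np.ones((1, n)), [1.0]              # sum of f_i = 1

low  = linprog(c=u,  A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # minimize sum f_i * u_i
high = linprog(c=-u, A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # maximize by minimizing -u
print(f"expected utility lies in [{low.fun:.2f}, {-high.fun:.2f}]")
```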

We can also incorporate interval utilities by simultaneously minimizing over a joint utility function, to get the lower bound, and maximizing over a joint utility function to get an upper bound. The upshot is an upper and a lower expected utility for each action.

We must now provide a procedure for choosing an act (or set of equivalent acts) on the basis of these upper and lower expected utilities. One possible principle involves dominance: Let us call an act whose upper expected utility is below the lower expected utility of an alternative act dominated. A natural principle is to disregard dominated acts. Having gone this far, we will in general be left with a set of undominated acts, each characterized by an interval of expected utility. A second, more controversial, principle could then be applied: a maximin principle according to which we would rank the actions according to their lower expected utilities, rejecting those whose lower expected utility is less than that of some unrejected alternative. Now we would have left a set of undominated acts, each of which has the same lower expected utility. We could now rank them by upper expected utility, rejecting those whose upper expected utility is less than that of an alternative. The outcome of this would be a set (perhaps with only one member) of actions characterized by the same upper and lower expected utilities.

The procedure just outlined does yield a decision theory; it only fails to pick out a unique act if both upper and lower expected utilities of more than one act coincide; and the same lack of uniqueness characterizes classical MEU theory. Whether this theory is altogether satisfactory may be open to question; it has not, so far as I know, been pursued in any great detail.

6. Practical Certainty

We have referred to "practical certainty" as the level at which something probable enough can be accepted and acted upon. But how probable is this? An answer was offered in (Kyburg 1988). The idea is this: if our resources are limited and the unit of betting is discrete, then odds more extreme than a certain degree simply don't make sense for us. If all I have is a hundred dollars, and I can't bet part of a dollar, odds greater than a hundred to one are outside the realm with which I can deal. Although I can bet a dollar against someone else's hundred and one, I can't offer to make book on that issue, because I can't cover the one-dollar bet made by someone else.

This provides a clue as to how the level of practical certainty can be set. Since if I have the capital to handle a one dollar bet at odds of a thousand to one, I have the capital to handle a one dollar bet at any lower odds, the maximum odds for a minimum bet determine (practically) what is practical certainty. In this sense 'practical certainty' is determined by what is (or might be) at stake. In general, one does not want to change what counts as practical certainty from moment to moment: that would defeat the purpose of having a fixed corpus of knowledge.

One could also determine this level directly: How probable would something have to be in a certain context before you would treat it as something to bet on? Before you would simply act as if it were true? Note that there are two kinds of 'acting as if.' In a weak sense, when you bet at even money that the outcome of a coin-toss is heads, you are 'acting as if' the coin would land heads. In a stronger sense, to 'act as if' the coin would land heads would be to refuse to bet on tails at any odds. How one determines this critical ratio is a question I have no answer to. What does seem realistic and correct is that there are broad classes of things that may be treated differently. What amounts to practical certainty in the design of nuclear power plants would be neurotic nervousness in grocery shopping. The design of household appliances would fall somewhere between. It isn't clear that one needs more than a couple or three grades of practical certainty. While the specification of these grades, linked as it is to utility, may be 'subjective,' it is a far more controllable subjectivity than the assignment of subjective probabilities to an infinite number of statements.

7. Evidence

We have already pointed out a natural relation between practical certainty and evidential certainty: if 1 − 2ε is practical certainty, 1 − ε is evidential certainty. If acceptance is defined in terms of minimal epistemic probability, then if two statements are in the evidential corpus, their conjunction is in the practical corpus. And it is a theorem of the classical probability calculus that if the probability of S is 1 − ε and the probability of T is 1 − ε, then the probability of S & T is at least 1 − 2ε.

This does not resolve the question of how statements get into the evidential corpus. Relative to what does a statement have to have a probability of 1 − ε in order to be acceptable as evidence? Clearly not the same body of evidence it belongs to, else it would have a probability of 1.0 of being correct. We have offered change of context (raising the level of practical certainty) as a local answer.

But is there a global answer? Here is the answer offered in Science and Reason. It is an answer that accounts for the role of evidence, but does not require that there be some abstractly identifiable quality that singles out statements worthy of playing the role of evidence.

Consider the measurement of length as an example.[22] How do we discover the distribution of errors of length measurement? Clearly not by comparing the results of measurement with the true lengths involved. It is rather by comparing length measurements with each other, with the underlying theory of measurement, and with our general background knowledge. Suppose we measure the same object five times. We get five results, each of which may be expressed in the form TL_t(o) + δ_t: the true length of object o at t, plus the error of measurement δ at t. (For simplicity, we will take t = 1, ..., 5.)

The first question is why we do not measure with perfect accuracy, i.e., take δ_t to be uniformly 0. Why should we attribute any error to our measurements when we can suppose it is the object that is changing? The straightforward answer is that we judge the object to be rigid, like our measuring instrument. Such judgments are sometimes erroneous, but prima facie warranted.[23] In fact, for present purposes and for the time interval under consideration, we assume that the true length of o is constant: we may write simply TL(o).

The second question is why we attribute the distribution of error to our measurements that we do. We could pick our favorite number (say, 5) and take that to be the true length of the object o. If our measurements were 10.0, 10.5, 10.3, 10.5, and 10.2, this would entail errors of 5.0, 5.5, 5.3, 5.5, and 5.2, respectively. But this would hardly be a reasonable thing to do, even if we were up front about assuming the true length of o to be 5. Other things being equal, we suppose that the errors δ_t are distributed about 0.0, so that the error of each of our measurements can be taken to be the difference between the average of the measurements and the value of that measurement:

    { (1/5) Σ_{t=1..5} [TL(o) + δ_t] } − [TL(o) + δ_3]

for the third measurement, for example.

[22] A detailed treatment of measurement is to be found in Kyburg.
[23] This is a qualitative judgment; we will see how error for qualitative judgments can be dealt with shortly.
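A small sketch of this bookkeeping (using the five made-up readings from the example above) shows how a sample distribution of errors can be extracted from the reports alone, with no appeal to a known true length.

```python
# Sketch: deriving a sample error distribution from repeated measurement
# reports alone (the five readings are the made-up values from the text).
import numpy as np

readings = np.array([10.0, 10.5, 10.3, 10.5, 10.2])
estimate = readings.mean()            # stands in for the unknown true length TL(o)
errors = estimate - readings          # average minus measurement, as in the formula above
print(f"estimated TL(o): {estimate:.2f}")
print("attributed errors:", np.round(errors, 2))
print(f"sample error variance: {errors.var(ddof=1):.4f}")
```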

If we have a lot of measurements of o, it becomes plausible to suppose that the distribution of those measurements about a central value is a sample of the distribution of errors of that measurement (each reduced by that central value). This is not always the case. Suppose we measure the two legs and the hypotenuse of a right triangle. It may well be the case that we cannot consistently suppose that the sides of the triangle and our measuring instrument are rigid, that the errors of measurement of each side are distributed evenly about the true length, and that the Pythagorean theorem is correct. We could reject any one of these conjuncts, but in fact we (generally) hold firmly to rigidity and to Pythagoras. But we would still plausibly pick true values for the sides so as to hold these items of knowledge true and at the same time to minimize the (absolute values of the) errors of our measurements. What justification is there for this completely arbitrary decision? The justification is pragmatic: treating error in this way leads to practical certainties that are useful in planning and designing, and that rarely lead us astray. It is not just the distribution of errors of measurement that is at issue in this pragmatic argument, but the whole body of general laws and conventions: the Pythagorean theorem, the stability of steel rods and metersticks that appear to be stable, etc.

How does this work out in the framework under discussion? Here is what is proposed in Science and Reason. The reports of measurements need not be taken to be subject to error. (In fact for a scientific report to be 'corrected' is cause for censure!) It is the measurements themselves that embody error. From an analysis of reports of measurements alone, we can derive (with the aid of such principles as the principle that we should attribute no more error to our measurements than we are obliged to attribute to them) a sample distribution of errors, and this in a body of knowledge consisting of certainties alone. This sample distribution has no direct bearing on any actual measurement, but it does support, via statistical inference, the acceptance, at whatever level of evidential certainty we choose, of an approximate general distribution of errors of measurement. (Naturally the approximation becomes looser as we choose a higher level of evidential certainty.) Now given a measurement report "the length of o is 5.05," and given the evidential certainty of a distribution of errors of measurement of that sort, we can arrive at a categorical assertion of length at the corresponding level of practical certainty: "The length of o is ...." Note that the appropriate error distribution may vary from one kind of measurement to another: some methods of measurement are more accurate than others.

But this is no problem for the approach we are suggesting, since choosing the correct distribution of error for a particular measurement is just a special case of choosing the right reference class.

There are two important points in what we have just observed: First, that choosing to suppose that we make errors of measurement at all is a pragmatic choice. Second, that we may arrive at evidential certainty concerning the distribution of such errors on the basis of observation reports alone. Since these two points are so fundamental and important, it is worth looking also at errors of classification.

Applying predicates to individuals, like any other form of observational judgment, is insecure. One can be wrong. If the predicate in question is unrelated to other predicates, we can have no reason to suppose that any attribution is in error. But given connections among predicates (for example, universal connections of the form "All P's are Q's," but not limited to this form), we must often suppose that we have made an erroneous judgment. For example, if we take it as a fact (or a matter of convention) that all crows are black, we may still include, among many observation reports of the form "a_i is a crow and a_i is black," a number of reports of the form "b_j is a crow and b_j is not black." Some philosophers of science might deny this possibility, on the ground that in the face of such a report the generalization should be abandoned. But this is just to deny the possibility of erroneous observation reports: not at all a plausible thing to do. On the other hand, the reports should not be ignored. They carry implicit information regarding the reliability with which the predicates "... is a crow" and "... is black" can be applied on the basis of observation.

As before, what we do is to note the number of observation reports of a certain sort that must be rejected as erroneous, and use their relative frequency as a basis for statistical inference to the approximate long run relative frequency of such erroneous observation reports. It should, in connection with the feasibility and desirability of being able to make sense of "inconsistent" bodies of knowledge, be noted that no particular observation report need ever be identified as erroneous. We may know that 5% of the "... is a crow" reports must be regarded as misidentifications without knowing of a single such report that it is in error. Again, we may well have grounds for distinguishing the probability of error according to the appropriate reference class: an ornithologist is less likely to make an error in issuing a crow report than an ordinary person. The ornithologist's report when the observation is made in good light is less likely to be in error than when the observation is made in bad light.
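As a final illustrative sketch (with invented counts, not data from the paper), the same bookkeeping applies to classification: count the reports that the accepted generalization forces us to reject, and use their relative frequency as evidence about the long-run error rate of that kind of report.

```python
# Sketch: estimating the error rate of "... is a crow" reports (made-up counts).
# Background generalization held fixed: all crows are black.  A report of a
# non-black crow must therefore contain at least one misapplied predicate.
crow_reports = 400                    # invented number of "x is a crow" reports
nonblack_crow_reports = 20            # invented number that conflict with the generalization

error_rate = nonblack_crow_reports / crow_reports
print(f"at least {error_rate:.1%} of these reports must be regarded as erroneous,")
print("though no particular report need ever be identified as the erroneous one.")
```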


More information

Varieties of Apriority

Varieties of Apriority S E V E N T H E X C U R S U S Varieties of Apriority T he notions of a priori knowledge and justification play a central role in this work. There are many ways in which one can understand the a priori,

More information

Choosing Rationally and Choosing Correctly *

Choosing Rationally and Choosing Correctly * Choosing Rationally and Choosing Correctly * Ralph Wedgwood 1 Two views of practical reason Suppose that you are faced with several different options (that is, several ways in which you might act in a

More information

On Priest on nonmonotonic and inductive logic

On Priest on nonmonotonic and inductive logic On Priest on nonmonotonic and inductive logic Greg Restall School of Historical and Philosophical Studies The University of Melbourne Parkville, 3010, Australia restall@unimelb.edu.au http://consequently.org/

More information

PHILOSOPHIES OF SCIENTIFIC TESTING

PHILOSOPHIES OF SCIENTIFIC TESTING PHILOSOPHIES OF SCIENTIFIC TESTING By John Bloore Internet Encyclopdia of Philosophy, written by John Wttersten, http://www.iep.utm.edu/cr-ratio/#h7 Carl Gustav Hempel (1905 1997) Known for Deductive-Nomological

More information

2nd International Workshop on Argument for Agreement and Assurance (AAA 2015), Kanagawa Japan, November 2015

2nd International Workshop on Argument for Agreement and Assurance (AAA 2015), Kanagawa Japan, November 2015 2nd International Workshop on Argument for Agreement and Assurance (AAA 2015), Kanagawa Japan, November 2015 On the Interpretation Of Assurance Case Arguments John Rushby Computer Science Laboratory SRI

More information

Uncommon Priors Require Origin Disputes

Uncommon Priors Require Origin Disputes Uncommon Priors Require Origin Disputes Robin Hanson Department of Economics George Mason University July 2006, First Version June 2001 Abstract In standard belief models, priors are always common knowledge.

More information

Remarks on the philosophy of mathematics (1969) Paul Bernays

Remarks on the philosophy of mathematics (1969) Paul Bernays Bernays Project: Text No. 26 Remarks on the philosophy of mathematics (1969) Paul Bernays (Bemerkungen zur Philosophie der Mathematik) Translation by: Dirk Schlimm Comments: With corrections by Charles

More information

Can Rationality Be Naturalistically Explained? Jeffrey Dunn. Abstract: Dan Chiappe and John Vervaeke (1997) conclude their article, Fodor,

Can Rationality Be Naturalistically Explained? Jeffrey Dunn. Abstract: Dan Chiappe and John Vervaeke (1997) conclude their article, Fodor, Can Rationality Be Naturalistically Explained? Jeffrey Dunn Abstract: Dan Chiappe and John Vervaeke (1997) conclude their article, Fodor, Cherniak and the Naturalization of Rationality, with an argument

More information

Saving the Substratum: Interpreting Kant s First Analogy

Saving the Substratum: Interpreting Kant s First Analogy Res Cogitans Volume 5 Issue 1 Article 20 6-4-2014 Saving the Substratum: Interpreting Kant s First Analogy Kevin Harriman Lewis & Clark College Follow this and additional works at: http://commons.pacificu.edu/rescogitans

More information

Contradictory Information Can Be Better than Nothing The Example of the Two Firemen

Contradictory Information Can Be Better than Nothing The Example of the Two Firemen Contradictory Information Can Be Better than Nothing The Example of the Two Firemen J. Michael Dunn School of Informatics and Computing, and Department of Philosophy Indiana University-Bloomington Workshop

More information

prohibition, moral commitment and other normative matters. Although often described as a branch

prohibition, moral commitment and other normative matters. Although often described as a branch Logic, deontic. The study of principles of reasoning pertaining to obligation, permission, prohibition, moral commitment and other normative matters. Although often described as a branch of logic, deontic

More information

Comments on Truth at A World for Modal Propositions

Comments on Truth at A World for Modal Propositions Comments on Truth at A World for Modal Propositions Christopher Menzel Texas A&M University March 16, 2008 Since Arthur Prior first made us aware of the issue, a lot of philosophical thought has gone into

More information

All They Know: A Study in Multi-Agent Autoepistemic Reasoning

All They Know: A Study in Multi-Agent Autoepistemic Reasoning All They Know: A Study in Multi-Agent Autoepistemic Reasoning PRELIMINARY REPORT Gerhard Lakemeyer Institute of Computer Science III University of Bonn Romerstr. 164 5300 Bonn 1, Germany gerhard@cs.uni-bonn.de

More information

Etchemendy, Tarski, and Logical Consequence 1 Jared Bates, University of Missouri Southwest Philosophy Review 15 (1999):

Etchemendy, Tarski, and Logical Consequence 1 Jared Bates, University of Missouri Southwest Philosophy Review 15 (1999): Etchemendy, Tarski, and Logical Consequence 1 Jared Bates, University of Missouri Southwest Philosophy Review 15 (1999): 47 54. Abstract: John Etchemendy (1990) has argued that Tarski's definition of logical

More information

Introduction Symbolic Logic

Introduction Symbolic Logic An Introduction to Symbolic Logic Copyright 2006 by Terence Parsons all rights reserved CONTENTS Chapter One Sentential Logic with 'if' and 'not' 1 SYMBOLIC NOTATION 2 MEANINGS OF THE SYMBOLIC NOTATION

More information

Learning is a Risky Business. Wayne C. Myrvold Department of Philosophy The University of Western Ontario

Learning is a Risky Business. Wayne C. Myrvold Department of Philosophy The University of Western Ontario Learning is a Risky Business Wayne C. Myrvold Department of Philosophy The University of Western Ontario wmyrvold@uwo.ca Abstract Richard Pettigrew has recently advanced a justification of the Principle

More information

Moral Relativism and Conceptual Analysis. David J. Chalmers

Moral Relativism and Conceptual Analysis. David J. Chalmers Moral Relativism and Conceptual Analysis David J. Chalmers An Inconsistent Triad (1) All truths are a priori entailed by fundamental truths (2) No moral truths are a priori entailed by fundamental truths

More information

Philosophy Epistemology Topic 5 The Justification of Induction 1. Hume s Skeptical Challenge to Induction

Philosophy Epistemology Topic 5 The Justification of Induction 1. Hume s Skeptical Challenge to Induction Philosophy 5340 - Epistemology Topic 5 The Justification of Induction 1. Hume s Skeptical Challenge to Induction In the section entitled Sceptical Doubts Concerning the Operations of the Understanding

More information

Scientific Progress, Verisimilitude, and Evidence

Scientific Progress, Verisimilitude, and Evidence L&PS Logic and Philosophy of Science Vol. IX, No. 1, 2011, pp. 561-567 Scientific Progress, Verisimilitude, and Evidence Luca Tambolo Department of Philosophy, University of Trieste e-mail: l_tambolo@hotmail.com

More information

Reasoning and Decision-Making under Uncertainty

Reasoning and Decision-Making under Uncertainty Reasoning and Decision-Making under Uncertainty 3. Termin: Uncertainty, Degrees of Belief and Probabilities Prof. Dr.-Ing. Stefan Kopp Center of Excellence Cognitive Interaction Technology AG A Intelligent

More information

The Greatest Mistake: A Case for the Failure of Hegel s Idealism

The Greatest Mistake: A Case for the Failure of Hegel s Idealism The Greatest Mistake: A Case for the Failure of Hegel s Idealism What is a great mistake? Nietzsche once said that a great error is worth more than a multitude of trivial truths. A truly great mistake

More information

Class #14: October 13 Gödel s Platonism

Class #14: October 13 Gödel s Platonism Philosophy 405: Knowledge, Truth and Mathematics Fall 2010 Hamilton College Russell Marcus Class #14: October 13 Gödel s Platonism I. The Continuum Hypothesis and Its Independence The continuum problem

More information

Philosophy 5340 Epistemology. Topic 6: Theories of Justification: Foundationalism versus Coherentism. Part 2: Susan Haack s Foundherentist Approach

Philosophy 5340 Epistemology. Topic 6: Theories of Justification: Foundationalism versus Coherentism. Part 2: Susan Haack s Foundherentist Approach Philosophy 5340 Epistemology Topic 6: Theories of Justification: Foundationalism versus Coherentism Part 2: Susan Haack s Foundherentist Approach Susan Haack, "A Foundherentist Theory of Empirical Justification"

More information

6.041SC Probabilistic Systems Analysis and Applied Probability, Fall 2013 Transcript Lecture 3

6.041SC Probabilistic Systems Analysis and Applied Probability, Fall 2013 Transcript Lecture 3 6.041SC Probabilistic Systems Analysis and Applied Probability, Fall 2013 Transcript Lecture 3 The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare

More information

RATIONALITY AND SELF-CONFIDENCE Frank Arntzenius, Rutgers University

RATIONALITY AND SELF-CONFIDENCE Frank Arntzenius, Rutgers University RATIONALITY AND SELF-CONFIDENCE Frank Arntzenius, Rutgers University 1. Why be self-confident? Hair-Brane theory is the latest craze in elementary particle physics. I think it unlikely that Hair- Brane

More information

2.1 Review. 2.2 Inference and justifications

2.1 Review. 2.2 Inference and justifications Applied Logic Lecture 2: Evidence Semantics for Intuitionistic Propositional Logic Formal logic and evidence CS 4860 Fall 2012 Tuesday, August 28, 2012 2.1 Review The purpose of logic is to make reasoning

More information

THE SEMANTIC REALISM OF STROUD S RESPONSE TO AUSTIN S ARGUMENT AGAINST SCEPTICISM

THE SEMANTIC REALISM OF STROUD S RESPONSE TO AUSTIN S ARGUMENT AGAINST SCEPTICISM SKÉPSIS, ISSN 1981-4194, ANO VII, Nº 14, 2016, p. 33-39. THE SEMANTIC REALISM OF STROUD S RESPONSE TO AUSTIN S ARGUMENT AGAINST SCEPTICISM ALEXANDRE N. MACHADO Universidade Federal do Paraná (UFPR) Email:

More information

In Epistemic Relativism, Mark Kalderon defends a view that has become

In Epistemic Relativism, Mark Kalderon defends a view that has become Aporia vol. 24 no. 1 2014 Incoherence in Epistemic Relativism I. Introduction In Epistemic Relativism, Mark Kalderon defends a view that has become increasingly popular across various academic disciplines.

More information

Philosophy 12 Study Guide #4 Ch. 2, Sections IV.iii VI

Philosophy 12 Study Guide #4 Ch. 2, Sections IV.iii VI Philosophy 12 Study Guide #4 Ch. 2, Sections IV.iii VI Precising definition Theoretical definition Persuasive definition Syntactic definition Operational definition 1. Are questions about defining a phrase

More information

Introduction to Statistical Hypothesis Testing Prof. Arun K Tangirala Department of Chemical Engineering Indian Institute of Technology, Madras

Introduction to Statistical Hypothesis Testing Prof. Arun K Tangirala Department of Chemical Engineering Indian Institute of Technology, Madras Introduction to Statistical Hypothesis Testing Prof. Arun K Tangirala Department of Chemical Engineering Indian Institute of Technology, Madras Lecture 09 Basics of Hypothesis Testing Hello friends, welcome

More information

Philosophy 5340 Epistemology Topic 4: Skepticism. Part 1: The Scope of Skepticism and Two Main Types of Skeptical Argument

Philosophy 5340 Epistemology Topic 4: Skepticism. Part 1: The Scope of Skepticism and Two Main Types of Skeptical Argument 1. The Scope of Skepticism Philosophy 5340 Epistemology Topic 4: Skepticism Part 1: The Scope of Skepticism and Two Main Types of Skeptical Argument The scope of skeptical challenges can vary in a number

More information

Module - 02 Lecturer - 09 Inferential Statistics - Motivation

Module - 02 Lecturer - 09 Inferential Statistics - Motivation Introduction to Data Analytics Prof. Nandan Sudarsanam and Prof. B. Ravindran Department of Management Studies and Department of Computer Science and Engineering Indian Institute of Technology, Madras

More information

- We might, now, wonder whether the resulting concept of justification is sufficiently strong. According to BonJour, apparent rational insight is

- We might, now, wonder whether the resulting concept of justification is sufficiently strong. According to BonJour, apparent rational insight is BonJour I PHIL410 BonJour s Moderate Rationalism - BonJour develops and defends a moderate form of Rationalism. - Rationalism, generally (as used here), is the view according to which the primary tool

More information

Dworkin on the Rufie of Recognition

Dworkin on the Rufie of Recognition Dworkin on the Rufie of Recognition NANCY SNOW University of Notre Dame In the "Model of Rules I," Ronald Dworkin criticizes legal positivism, especially as articulated in the work of H. L. A. Hart, and

More information

Ayer and Quine on the a priori

Ayer and Quine on the a priori Ayer and Quine on the a priori November 23, 2004 1 The problem of a priori knowledge Ayer s book is a defense of a thoroughgoing empiricism, not only about what is required for a belief to be justified

More information

1/12. The A Paralogisms

1/12. The A Paralogisms 1/12 The A Paralogisms The character of the Paralogisms is described early in the chapter. Kant describes them as being syllogisms which contain no empirical premises and states that in them we conclude

More information

Intersubstitutivity Principles and the Generalization Function of Truth. Anil Gupta University of Pittsburgh. Shawn Standefer University of Melbourne

Intersubstitutivity Principles and the Generalization Function of Truth. Anil Gupta University of Pittsburgh. Shawn Standefer University of Melbourne Intersubstitutivity Principles and the Generalization Function of Truth Anil Gupta University of Pittsburgh Shawn Standefer University of Melbourne Abstract We offer a defense of one aspect of Paul Horwich

More information

6.041SC Probabilistic Systems Analysis and Applied Probability, Fall 2013 Transcript Lecture 21

6.041SC Probabilistic Systems Analysis and Applied Probability, Fall 2013 Transcript Lecture 21 6.041SC Probabilistic Systems Analysis and Applied Probability, Fall 2013 Transcript Lecture 21 The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare

More information

Introduction: Belief vs Degrees of Belief

Introduction: Belief vs Degrees of Belief Introduction: Belief vs Degrees of Belief Hannes Leitgeb LMU Munich October 2014 My three lectures will be devoted to answering this question: How does rational (all-or-nothing) belief relate to degrees

More information

Some questions about Adams conditionals

Some questions about Adams conditionals Some questions about Adams conditionals PATRICK SUPPES I have liked, since it was first published, Ernest Adams book on conditionals (Adams, 1975). There is much about his probabilistic approach that is

More information

Constructive Logic, Truth and Warranted Assertibility

Constructive Logic, Truth and Warranted Assertibility Constructive Logic, Truth and Warranted Assertibility Greg Restall Department of Philosophy Macquarie University Version of May 20, 2000....................................................................

More information

ILLOCUTIONARY ORIGINS OF FAMILIAR LOGICAL OPERATORS

ILLOCUTIONARY ORIGINS OF FAMILIAR LOGICAL OPERATORS ILLOCUTIONARY ORIGINS OF FAMILIAR LOGICAL OPERATORS 1. ACTS OF USING LANGUAGE Illocutionary logic is the logic of speech acts, or language acts. Systems of illocutionary logic have both an ontological,

More information

Epistemic utility theory

Epistemic utility theory Epistemic utility theory Richard Pettigrew March 29, 2010 One of the central projects of formal epistemology concerns the formulation and justification of epistemic norms. The project has three stages:

More information

Searle vs. Chalmers Debate, 8/2005 with Death Monkey (Kevin Dolan)

Searle vs. Chalmers Debate, 8/2005 with Death Monkey (Kevin Dolan) Searle vs. Chalmers Debate, 8/2005 with Death Monkey (Kevin Dolan) : Searle says of Chalmers book, The Conscious Mind, "it is one thing to bite the occasional bullet here and there, but this book consumes

More information

What God Could Have Made

What God Could Have Made 1 What God Could Have Made By Heimir Geirsson and Michael Losonsky I. Introduction Atheists have argued that if there is a God who is omnipotent, omniscient and omnibenevolent, then God would have made

More information

Falsification or Confirmation: From Logic to Psychology

Falsification or Confirmation: From Logic to Psychology Falsification or Confirmation: From Logic to Psychology Roman Lukyanenko Information Systems Department Florida international University rlukyane@fiu.edu Abstract Corroboration or Confirmation is a prominent

More information

DISCUSSION PRACTICAL POLITICS AND PHILOSOPHICAL INQUIRY: A NOTE

DISCUSSION PRACTICAL POLITICS AND PHILOSOPHICAL INQUIRY: A NOTE Practical Politics and Philosophical Inquiry: A Note Author(s): Dale Hall and Tariq Modood Reviewed work(s): Source: The Philosophical Quarterly, Vol. 29, No. 117 (Oct., 1979), pp. 340-344 Published by:

More information

The Problem of Induction and Popper s Deductivism

The Problem of Induction and Popper s Deductivism The Problem of Induction and Popper s Deductivism Issues: I. Problem of Induction II. Popper s rejection of induction III. Salmon s critique of deductivism 2 I. The problem of induction 1. Inductive vs.

More information

Introduction. I. Proof of the Minor Premise ( All reality is completely intelligible )

Introduction. I. Proof of the Minor Premise ( All reality is completely intelligible ) Philosophical Proof of God: Derived from Principles in Bernard Lonergan s Insight May 2014 Robert J. Spitzer, S.J., Ph.D. Magis Center of Reason and Faith Lonergan s proof may be stated as follows: Introduction

More information

Fatalism and Truth at a Time Chad Marxen

Fatalism and Truth at a Time Chad Marxen Stance Volume 6 2013 29 Fatalism and Truth at a Time Chad Marxen Abstract: In this paper, I will examine an argument for fatalism. I will offer a formalized version of the argument and analyze one of the

More information

Why Have Consistent and Closed Beliefs, or, for that Matter, Probabilistically Coherent Credences? *

Why Have Consistent and Closed Beliefs, or, for that Matter, Probabilistically Coherent Credences? * Why Have Consistent and Closed Beliefs, or, for that Matter, Probabilistically Coherent Credences? * What should we believe? At very least, we may think, what is logically consistent with what else we

More information

UC Berkeley, Philosophy 142, Spring 2016

UC Berkeley, Philosophy 142, Spring 2016 Logical Consequence UC Berkeley, Philosophy 142, Spring 2016 John MacFarlane 1 Intuitive characterizations of consequence Modal: It is necessary (or apriori) that, if the premises are true, the conclusion

More information

On The Logical Status of Dialectic (*) -Historical Development of the Argument in Japan- Shigeo Nagai Naoki Takato

On The Logical Status of Dialectic (*) -Historical Development of the Argument in Japan- Shigeo Nagai Naoki Takato On The Logical Status of Dialectic (*) -Historical Development of the Argument in Japan- Shigeo Nagai Naoki Takato 1 The term "logic" seems to be used in two different ways. One is in its narrow sense;

More information

THE ROLE OF COHERENCE OF EVIDENCE IN THE NON- DYNAMIC MODEL OF CONFIRMATION TOMOJI SHOGENJI

THE ROLE OF COHERENCE OF EVIDENCE IN THE NON- DYNAMIC MODEL OF CONFIRMATION TOMOJI SHOGENJI Page 1 To appear in Erkenntnis THE ROLE OF COHERENCE OF EVIDENCE IN THE NON- DYNAMIC MODEL OF CONFIRMATION TOMOJI SHOGENJI ABSTRACT This paper examines the role of coherence of evidence in what I call

More information

Rethinking Knowledge: The Heuristic View

Rethinking Knowledge: The Heuristic View http://www.springer.com/gp/book/9783319532363 Carlo Cellucci Rethinking Knowledge: The Heuristic View 1 Preface From its very beginning, philosophy has been viewed as aimed at knowledge and methods to

More information

THE MEANING OF OUGHT. Ralph Wedgwood. What does the word ought mean? Strictly speaking, this is an empirical question, about the

THE MEANING OF OUGHT. Ralph Wedgwood. What does the word ought mean? Strictly speaking, this is an empirical question, about the THE MEANING OF OUGHT Ralph Wedgwood What does the word ought mean? Strictly speaking, this is an empirical question, about the meaning of a word in English. Such empirical semantic questions should ideally

More information

Richard L. W. Clarke, Notes REASONING

Richard L. W. Clarke, Notes REASONING 1 REASONING Reasoning is, broadly speaking, the cognitive process of establishing reasons to justify beliefs, conclusions, actions or feelings. It also refers, more specifically, to the act or process

More information

Negative Introspection Is Mysterious

Negative Introspection Is Mysterious Negative Introspection Is Mysterious Abstract. The paper provides a short argument that negative introspection cannot be algorithmic. This result with respect to a principle of belief fits to what we know

More information

A Logical Approach to Metametaphysics

A Logical Approach to Metametaphysics A Logical Approach to Metametaphysics Daniel Durante Departamento de Filosofia UFRN durante10@gmail.com 3º Filomena - 2017 What we take as true commits us. Quine took advantage of this fact to introduce

More information

Plantinga, Pluralism and Justified Religious Belief

Plantinga, Pluralism and Justified Religious Belief Plantinga, Pluralism and Justified Religious Belief David Basinger (5850 total words in this text) (705 reads) According to Alvin Plantinga, it has been widely held since the Enlightenment that if theistic

More information

2017 Philosophy. Higher. Finalised Marking Instructions

2017 Philosophy. Higher. Finalised Marking Instructions National Qualifications 07 07 Philosophy Higher Finalised Marking Instructions Scottish Qualifications Authority 07 The information in this publication may be reproduced to support SQA qualifications only

More information

Truth and Modality - can they be reconciled?

Truth and Modality - can they be reconciled? Truth and Modality - can they be reconciled? by Eileen Walker 1) The central question What makes modal statements statements about what might be or what might have been the case true or false? Normally

More information

Egocentric Rationality

Egocentric Rationality 3 Egocentric Rationality 1. The Subject Matter of Egocentric Epistemology Egocentric epistemology is concerned with the perspectives of individual believers and the goal of having an accurate and comprehensive

More information

Philosophy Epistemology. Topic 3 - Skepticism

Philosophy Epistemology. Topic 3 - Skepticism Michael Huemer on Skepticism Philosophy 3340 - Epistemology Topic 3 - Skepticism Chapter II. The Lure of Radical Skepticism 1. Mike Huemer defines radical skepticism as follows: Philosophical skeptics

More information

IN DEFENCE OF CLOSURE

IN DEFENCE OF CLOSURE IN DEFENCE OF CLOSURE IN DEFENCE OF CLOSURE By RICHARD FELDMAN Closure principles for epistemic justification hold that one is justified in believing the logical consequences, perhaps of a specified sort,

More information

Pictures, Proofs, and Mathematical Practice : Reply to James Robert Brown

Pictures, Proofs, and Mathematical Practice : Reply to James Robert Brown Brit. J. Phil. Sci. 50 (1999), 425 429 DISCUSSION Pictures, Proofs, and Mathematical Practice : Reply to James Robert Brown In a recent article, James Robert Brown ([1997]) has argued that pictures and

More information

Other Logics: What Nonclassical Reasoning Is All About Dr. Michael A. Covington Associate Director Artificial Intelligence Center

Other Logics: What Nonclassical Reasoning Is All About Dr. Michael A. Covington Associate Director Artificial Intelligence Center Covington, Other Logics 1 Other Logics: What Nonclassical Reasoning Is All About Dr. Michael A. Covington Associate Director Artificial Intelligence Center Covington, Other Logics 2 Contents Classical

More information

Philosophy 148 Announcements & Such. Inverse Probability and Bayes s Theorem II. Inverse Probability and Bayes s Theorem III

Philosophy 148 Announcements & Such. Inverse Probability and Bayes s Theorem II. Inverse Probability and Bayes s Theorem III Branden Fitelson Philosophy 148 Lecture 1 Branden Fitelson Philosophy 148 Lecture 2 Philosophy 148 Announcements & Such Administrative Stuff I ll be using a straight grading scale for this course. Here

More information

MARK KAPLAN AND LAWRENCE SKLAR. Received 2 February, 1976) Surely an aim of science is the discovery of the truth. Truth may not be the

MARK KAPLAN AND LAWRENCE SKLAR. Received 2 February, 1976) Surely an aim of science is the discovery of the truth. Truth may not be the MARK KAPLAN AND LAWRENCE SKLAR RATIONALITY AND TRUTH Received 2 February, 1976) Surely an aim of science is the discovery of the truth. Truth may not be the sole aim, as Popper and others have so clearly

More information

Probability Foundations for Electrical Engineers Prof. Krishna Jagannathan Department of Electrical Engineering Indian Institute of Technology, Madras

Probability Foundations for Electrical Engineers Prof. Krishna Jagannathan Department of Electrical Engineering Indian Institute of Technology, Madras Probability Foundations for Electrical Engineers Prof. Krishna Jagannathan Department of Electrical Engineering Indian Institute of Technology, Madras Lecture - 1 Introduction Welcome, this is Probability

More information