DOUBTS ABOUT UNCERTAINTY WITHOUT ALL THE DOUBT

NICHOLAS J.J. SMITH

1

Norby's paper is divided into three main sections, in which he introduces the storage hypothesis, gives reasons for rejecting it, and then offers the filter theory as a replacement. Let's begin with the storage hypothesis, which is introduced as follows:¹

    Call the following the storage hypothesis: degreed beliefs are stable attitudes of agents that can be appealed to for predicting and explaining patterns of behavior, judgment, and decision making. [1]

    the storage hypothesis is that humans have, or it is as if we have, degrees of belief, and our judgment and decision making across a wide range of situations is systematically predicted by them (more precisely, predicted by them in conjunction with some of our other states and processes, such as desires or preferences and intentions). [5]

    The critical assumption is that, like holding a map in your pocket, having some degree of belief or other in some proposition is a stable, persistent state... to give up the storage hypothesis is to allow for the possibility that an agent's degrees of belief cannot be specified independently of a specification of the decision problem she faces. This would be a radical revision to the ordinary way of doing things, since the ordinary way of doing things assumes that an agent's degrees of belief can be used to explain and predict her judgments and decisions because those degrees of belief are in place antecedent to the problems that prompt those judgments and decisions. [6]

The first question to ask is: who holds such a view? Norby claims that the storage hypothesis is "a widely held assumption in philosophical work" [1] and that "the truth of the storage hypothesis is a default assumption in philosophy" [5]. In a footnote on p. 5 there is a handful of references, but these works are not described in sufficient detail to make it clear that their authors subscribe to the storage hypothesis, and there is not a single quotation in the paper in which an author commits to the view. It is not clear to me that the view really is widespread.

¹ Page references are to a draft version of Norby's paper. Later on, quotations without accompanying references are repeats of earlier quotations.

Let's first consider expected utility theory: a view that certainly is widespread. Norby claims that expected utility theory "incorporate[s] the storage hypothesis" [6]. That claim is incorrect. Distinguish theoretically mature versions of expected utility theory from naive presentations of the basic idea (such as one might use when first presenting the idea of maximising expected utility to undergraduates). On the theoretically mature side we have the tradition stemming from [vNM44], [Sav54] and [Jef65], in which it is shown (in what is known as a representation theorem) that sets of preferences that satisfy certain constraints can be represented by a probability function and a utility function (in such a way that utilities are maximised, that is, options with greater expected utility are preferred). On the naive side, we have agents who consult their probabilities and utilities in order to calculate expected utilities and then choose the act with the greatest expected utility. Norby states: "The critical assumption is that, like holding a map in your pocket, having some degree of belief or other in some proposition is a stable, persistent state." But this assumption simply is not part of the picture in expected utility theory, on either the sophisticated or the naive conception. On the sophisticated conception, the constraints on preferences do not specify that they should be stable over time, which would be required to yield probabilities that are stable over time (given that, on this approach, probabilities and utilities are derived from preferences). On the naive side, probabilities (and utilities) are taken as inputs to the expected utility calculation. The core content of the theory is that one should choose the option that has the greatest expected utility, given one's probabilities and utilities. If probabilities (or utilities) change, so (in general) do expected utilities, and hence so does the recommended choice of action. It simply is not part of this view that one's probabilities are or should be stable over time.

Let's consider now Bayesianism (in one sense of this word): the view that rationality requires (a) having (at any given time) degrees of belief that obey the axioms of probability and (b) updating these degrees of belief (only) by conditionalising on new evidence. This view does incorporate the element of stability over time that is key to the storage hypothesis. (It is not part of the view that a rational agent's degrees of belief never change: but it is part of the view that they change only in a particular kind of way.) However, this is a normative theory, not a descriptive one. It is a theory about what epistemic rationality consists in: not an empirical theory that is meant to yield predictions concerning the behaviour of human agents. Even if it were combined with some form of decision theory, it would not be the case on this view that our judgment and decision making across a wide range of situations could be systematically predicted. Such prediction would require the further assumption that we are ideally rational agents (i.e. we satisfy (a) and (b) above and also whatever decision-theoretic principles are added to the mix), and that is not a widespread view.
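For concreteness, the two views just discussed can be stated in their standard textbook forms (these formulations are mine, not quotations from Norby, and nothing in the argument turns on the exact notation). The naive expected utility recipe: given degrees of belief P over states s and utilities U, choose an act a that maximises

\[
  \mathrm{EU}(a) \;=\; \sum_{s} P(s)\, U(a,s).
\]

The Bayesian conditionalisation rule: on learning evidence E (where P(E) > 0), replace one's degrees of belief P with

\[
  P_{\mathrm{new}}(A) \;=\; P(A \mid E) \;=\; \frac{P(A \wedge E)}{P(E)}.
\]

The first rule simply takes whatever probabilities and utilities the agent has at the time of choice as inputs; the second constrains the way degrees of belief may change, namely only by conditionalising on incoming evidence.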

In sum, the storage hypothesis as described by Norby is a descriptive thesis (for it yields systematic predictions of human behaviour across a wide range of situations) that has as a core commitment that degrees of belief are stable, persistent states. It is not clear to me that such a view is widely held in philosophy.²

2

If the storage hypothesis is not widely held, then arguments against it become less interesting. But is Norby's argument against the view compelling in any case? I shall argue that it is not.

Norby's argument centres on evidence which is supposed to show that degrees of belief are variable and situation-dependent, rather than stable and persistent as the storage hypothesis would have it. The first problem with the evidence is that much of it concerns explicit judgements or estimates of probability. One can infer from the fact that I estimate the probability of P to be 0.7 (or whatever) when someone asks me about it to the conclusion that my degree of belief that P is 0.7 only if one assumes that degrees of belief are straightforwardly mirrored in estimates of probability. But this is surely not the case. Probability is a sophisticated concept: people need to learn about it before they can use it at all, and notoriously, even once they have learned something about probability, people generally reason very badly with probabilities. Degrees of belief, on the other hand, are meant to be something that all folks have, even if they know nothing about probability. Degrees of belief (usually together with utilities) are supposed to be implicit in behavioural dispositions. Now if you ask someone how likely she thinks it is that it will rain tomorrow, you might get all kinds of answers, and the answer might vary depending on when and how you ask. After all, most people don't even know what it means when the meteorological service says that the probability of rain is 70%, let alone have any idea of how to offer such an estimate themselves. But this does not mean that the person's degrees of belief are variable or unstable. It may yet be that her dispositions, and hence her degrees of belief, are perfectly stable: dispositions concerning whether to take the heavy umbrella that provides good rain protection, the light umbrella that provides less protection, or no umbrella; whether to buy a ticket on trains A and B (a more expensive trip but one that involves no walking) or train C (a cheaper trip but one that involves a ten minute walk); and so on. So we cannot conclude that someone's degrees of belief are unstable simply on the basis of the fact that his explicit estimates of probability vary. We would need to know furthermore that his relevant behavioural dispositions are unstable. But much of the evidence cited by Norby concerns estimates of probability, and so this considerably weakens his case that degrees of belief are unstable.

² I am not in a position to claim that the view is not widespread in any discipline, but note that Norby's claim was that the storage hypothesis is a default assumption in philosophy.
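The contrast drawn above between explicit probability estimates and dispositionally implicit degrees of belief can be made vivid with the familiar betting-quotient idealisation (a standard textbook gloss, not something argued for in Norby's paper, and subject to the well-known caveats about risk attitudes and the diminishing value of money):

\[
  \mathrm{deg}(P) = x \quad\text{iff}\quad \text{the agent regards } x \text{ as the fair price for a bet that pays 1 unit if } P \text{ and nothing otherwise.}
\]

On this gloss, someone whose umbrella and train-ticket choices consistently reflect treating rain as about 0.7 likely thereby counts as having a stable degree of belief of 0.7 in rain, however much her verbal estimates vary with how and when she is asked.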

Apart from this point, there is a second problem with Norby's argument. He presents various kinds of case and then claims that the best description of them is one in which, rather than being antecedently stored, degrees of belief are assigned to propositions on the fly in each decision context. Which degree of belief is assigned to a particular proposition depends partly on what other possibilities are brought to mind in the context, and that varies stochastically. So the same proposition may get assigned different degrees of belief in the context of different decisions, even though the agent has not gained any new evidence or engaged in any further reasoning. The problem with the argument is that there is a description of the cases that is perfectly compatible with the storage hypothesis. On this description, agents have stable underlying degrees of belief. In a decision context, only certain possibilities are considered, and so degrees of belief are renormalised: the degrees of belief assigned (at the underlying level) to possibilities not explicitly considered (in the given decision context) are spread across the possibilities that are considered.³ This will give rise to variability in degrees of belief at the surface level, but there are stable degrees of belief at the underlying level.⁴

Of course the natural objection here will be: you can claim that there are these underlying stable degrees of belief if you wish, but if they are not doing anything in decisions, then who cares? But there is an immediate response to this. The underlying degrees are doing something. They, together with factors such as contextual salience, are part of the explanation of why only certain options are called to mind in a decision context. Note that in Norby's central examples, such as the lottery and the brain in the vat, the events that are ignored (i.e. not explicitly called to mind in a decision context) are improbable. Now it would be a remarkable coincidence if the possibilities that agents ignored were always the unlikely ones, unless their underlying degrees of belief were playing a role in selecting possibilities for explicit consideration. For example, one natural hypothesis is this: agents have limited resources, so when they face a decision, a reality check is first performed: sufficiently improbable (relative to the underlying degrees of belief) possibilities are filtered out, and only the ones that "might actually happen" (not in the sense of "have nonzero probability" but in the more intuitive sense in which we say "that's just not going to happen") are explicitly considered.

³ There are different ways in which this could be achieved: cf. [Smi14].
⁴ Some of Norby's descriptions of the cases actually sound like expressions of this kind of picture: "This process of stochastic recall and reapportionment of degrees of belief" [10, my emphasis]; "we reapportion our uncertainties" [31]. Others, however, do not: "a process of credence formation" [18].
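The picture just described (stable underlying degrees of belief, a reality check that filters out sufficiently improbable possibilities, and renormalisation of the underlying degrees over the possibilities that survive) can be illustrated with a toy model. The following Python sketch is purely illustrative: the fixed probability threshold, the set of contextually salient possibilities that are let through regardless, and straight renormalisation as the way of spreading the ignored mass are all simplifying assumptions of the sketch, not commitments of the argument (footnote 3 notes that there are other ways of spreading the mass).

def surface_degrees(underlying, threshold=0.05, salient=()):
    # 'underlying' maps mutually exclusive possibilities to stable underlying
    # degrees of belief (assumed to sum to 1); 'threshold' and 'salient' are
    # illustrative parameters only.
    considered = {p: d for p, d in underlying.items()
                  if d >= threshold or p in salient}
    total = sum(considered.values())
    # One simple way of spreading the ignored mass: straight renormalisation
    # of the underlying degrees over the possibilities that pass the filter.
    return {p: d / total for p, d in considered.items()}

underlying = {"rain": 0.69, "no rain": 0.30, "win the lottery": 0.01}
print(surface_degrees(underlying))
# {'rain': 0.6969..., 'no rain': 0.3030...}: the improbable lottery possibility
# never reaches explicit consideration.

On this toy model the variability at the surface level comes entirely from which possibilities pass the filter in a given context; the underlying degrees themselves stay fixed, which is the sense in which the storage hypothesis survives.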

Norby actually considers something like this idea: "One might reason that we have a general tendency to ignore low-probability events" [9]. Surprisingly, all he says in response is the following:

    But that seems more to describe the phenomenon than to explain it. More important would be an explanation of what it is about the way we store and recall possibilities in memory that makes these phenomena not only real but so common as to be hardly noticeable. [9]

Regarding the first sentence of this quotation: Yes, it describes the phenomena! And the description involves the storage hypothesis being true (at the underlying level). So surely this view warrants more sustained consideration. Regarding the second sentence of this quotation: I have sketched a possible explanation above, in terms of limited resources at the level of explicit decision making (i.e. we cannot consider too many options at once) combined with a reality check that filters out sufficiently improbable events, so that, in some contexts at least, they never make it to the level of explicit consideration.

3

Let's turn finally to Norby's suggested replacement for the storage hypothesis (which involves credences): the filter theory (which involves "proto-credences"):

    A set of proto-credences is composed of a representation of all those possibilities that an agent thinks might be actual, and this representation serves as input to the decision set-up process. This is the heart of the filter theory of decision making. [25–6]

Norby criticised the storage hypothesis for reasons having to do with the prediction of agents' behaviour:

    I'm going to present some evidence that I think strongly supports the claim that it is wrong in general to think that we carry around with us degrees of confidence (even intervals or ranges of degrees) in propositions from which it can be predicted (even by an omniscient observer) how likely we will treat those propositions as being when it comes time to make decisions. [6–7]

He now claims that "behavior can be predicted" with the filter theory [26]. For reasons already touched on in the previous section, however, the filter theory is going to come off worse in the prediction stakes than the version of the storage hypothesis that I outlined in the previous section (in which agents have stable underlying credences which are then renormalised over selected options in decision contexts, with the selection of options depending partly on their underlying credences and partly on factors of contextual salience). As already mentioned, in Norby's own core examples, only unlikely possibilities get ignored, but Norby's theory is not going to be able to predict this. More generally, proto-credences are reverse-engineered from what might happen in decision-making contexts:

    for Lot to think that the proposition that he won't be able to afford a new car is likely but not certain to be true, is for his decision set-up process to potentially result in deliberate decision making in which Lot treats it as likely but not certain that he won't be able to afford a new car and as unlikely but not ruled out that he will... because a proto-credence can give rise to treating a proposition with different degrees of likelihood in different situations, we will want to identify his degree of proto-credence not with any of these particular likelihoods but with the set of them. [28]

    proto-credences... are themselves understood functionally in terms of decision set-up processes [32]

It is therefore extremely hard to see how, even in principle, one could get a handle on an agent's proto-credences in advance, in order to predict his behaviour in decision-making contexts. On the kind of picture I have sketched, on the other hand, the prospects of prediction are much better. On this picture, agents have stable underlying degrees of belief, which are implicit in their relevant behavioural dispositions (and can, subject to well-known caveats, potentially be elicited by offering bets, as opposed to asking for explicit judgements of probability). These underlying degrees of belief, together with contextual salience factors, determine which possibilities will be explicitly considered in a given decision context, and the weights assigned to these possibilities in the decision will be determined by renormalising the underlying degrees across the options under consideration. Of course this picture is partly idealised and normative, and the role of salience in context introduces a potentially difficult-to-predict element: for the context will affect not just the probability threshold at which the reality check eliminates options as not worthy of consideration; it also potentially allows in certain options that fall below the threshold, if they are for some reason salient. On the whole, however, it seems that this picture, unlike Norby's filter theory, can potentially make genuine predictions. Furthermore, it makes the apparently correct prediction that agents generally ignore only events that have low probability (i.e. by the lights of the agents' underlying credences). As Norby himself says:

    As I indicated earlier, there is in fact a good deal of empirical evidence suggesting that what I'm here calling decision set-up processes operate by a kind of filtering operation of the sort I've described, taking the most plausible, easily accessible possibilities... that are relevant to the task at hand, and filtering the alternatives out of consideration. [30, my emphasis]

In fact this is not the kind of picture Norby has described: it is the kind I have described, in which "most plausible" means most likely by the lights of the agent's underlying degrees of belief. Because he has claimed that agents have no stable underlying degrees of belief, Norby is in fact in no position to claim that decision set-up processes operate by filtering out less plausible options. The picture Norby actually presented was as follows:

    I've already indicated the basic outline of the theory I favor for explaining these facts and for getting a better understanding of the nature of degreed belief in light of unpacking and related effects. What I've been suggesting is that the correct theory involves explaining the likelihood that an agent assigns, on a given occasion, to a possible state of the world in terms of recall processes by which that possibility is brought to mind and has its likelihood assigned to it. The likelihood assignment is sensitive to various situational factors, including to which other possibilities are called to mind at the time. [22]

On this view, there would seem to be no basis at all for explaining why, when I consider only three possibilities, I assign them the likelihoods I do (let's say that on some occasion it's 1/3, 1/3, 1/3 rather than, for example, 1/4, 1/4, 1/2). On the kind of view I have sketched, however, the values are potentially explicable: they are the result of renormalisation of the stable underlying degrees (once certain options are ruled out of consideration).

References

[Jef65] Richard Jeffrey, The logic of decision, second ed., University of Chicago Press, Chicago, 1983 (first edition 1965).
[Sav54] Leonard J. Savage, The foundations of statistics, second revised ed., Dover Publications, New York, 1972 (first edition 1954).
[Smi14] Nicholas J.J. Smith, Is evaluative compositionality a requirement of rationality?, Mind 123 (2014), 457–502.
[vNM44] John von Neumann and Oskar Morgenstern, Theory of games and economic behavior, Princeton University Press, 1944.

Department of Philosophy, Main Quadrangle A14, The University of Sydney, NSW 2006, Australia
E-mail address: njjsmith@sydney.edu.au
URL: http://www.personal.usyd.edu.au/~njjsmith/