Jeffrey, Richard, Subjective Probability: The Real Thing, Cambridge University Press, 2004, 140 pp, $21.99 (pbk), ISBN 0521536685.

Reviewed by: Branden Fitelson, University of California, Berkeley

Richard Jeffrey was one of the all-time greats in formal epistemology, and this was his last book. In classic Jeffrey style, what we have here is a short, dense, and incredibly rich and engaging monograph. It is simply amazing how much wisdom is packed into this little book. Before getting down to Serious Bayesian Business, Jeffrey begins with an extended acknowledgements section, which contains a heartfelt, emotional, and informatively autobiographical letter of farewell and thanks. The letter is addressed to "Comrades and Fellow Travelers in the Struggle for Bayesianism," and its author is introduced to the reader as "a fond foolish old fart dying of a surfeit of Pall Malls." As someone who only barely knew Dick Jeffrey (but hopes to be a Comrade in the aforementioned Struggle when he grows up), I was deeply touched and inspired by this introductory section of the book. It's no wonder that he was so beloved and respected, both as a philosopher and as a man.

The first chapter provides an excellent introduction to the basic concepts of subjective probability theory. Both the formal probability calculus and its interpretation in terms of betting quotients for rational agents (the main application discussed in the book) are clearly and concisely presented here. This includes very accessible and clear explanations of Dutch Book arguments, conditional probability, and Bayes's Theorem. There are many useful exercises, and (as always) plenty of wise remarks and notes along the way. Jeffrey's style is highly effective pedagogically, because he tends to introduce things using snappy examples. Only after whetting the reader's appetite with such examples does Jeffrey invite the reader to think more systematically and theoretically. As such, this chapter would be a suitable (maybe even ideal) way to start an advanced undergraduate course on probability and induction (or inductive logic). Indeed, I plan to try it myself the next time I teach such a course.
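Since the rest of the review leans on them repeatedly, it may help to have the two identities at the heart of chapter one written out (standard formulations, not quoted from the book):

\[
P(H \mid E) \;=\; \frac{P(H \wedge E)}{P(E)} \qquad \text{(conditional probability, assuming } P(E) > 0\text{)},
\]
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \qquad \text{(Bayes's Theorem, two-hypothesis form)}.
\]

The Dutch Book arguments mentioned above are what tie this calculus to betting quotients: an agent whose betting quotients violate the probability axioms can be offered a package of bets, each of which she regards as fair, that nonetheless guarantees her a net loss.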

Chapter two explains how subjective probability can be used to provide an account of the confirmation of scientific theories. The basic idea is to model inductive learning (typically involving observation) as an event (called an "update") that takes the agent from an old subjective probability assignment to a new one. If this learning process leads to a greater probability for a hypothesis H, i.e., if new(H) > old(H), then H is said to have been confirmed (presumably, by whatever was learned during the update). Here, Jeffrey uses examples from the history of science to frame the discussion. Historical illustrations of both the Duhem-Quine problem and the problem of old evidence are treated here (I will return to Jeffrey's discussion of the problem of old evidence later in this review). In keeping with Jeffrey's pedagogical style, no precise theory of updating is developed at this stage (although some hints and puzzles are presented, which naturally lead the reader to wonder how such a theory might go). At this point, we just see some basic concepts applied to some simple historical examples. Precise theories of probabilistic update are discussed in the next chapter. From a pedagogical point of view, I suggest thinking of chapters two and three as operating together (I suspect that some students might have trouble following the details of the accounts exemplified in chapter two without delving into some of the more theoretical material in chapter three along the way).

In chapter three we get a masterful primer on the two main Bayesian theories of learning (probabilistic update). The classical theory of conditionalization (in which learning is modeled as conditionalizing on a proposition explicitly contained in the agent's doxastic space) and Jeffrey's more general theory of probability kinematics (in which learning is modeled as an event that alters an agent's credence function, but not necessarily by explicit conditionalization on a proposition) are compared and contrasted in a very illuminating way. We also get a pithy presentation of Jeffrey's "radical probabilist" epistemology, which was the philosophical motivation for Jeffrey's generalization of classical Bayesian conditionalization. There are two main reasons why Jeffrey saw a need to generalize classical conditionalization. First, classical conditionalization assumes that all learning is learning with certainty, since, whenever we conditionalize on a proposition E, we must subsequently assign E probability 1. Second, classical conditionalization presupposes that there is always a statement (in the agent's "mentalese") that expresses the precise content of what was learned during an update.
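For readers who want the two update rules side by side, here is a minimal sketch in the old/new notation used above (standard formulations; the partition \{E_1, \dots, E_n\} and the new weights new(E_i) are the ingredients Jeffrey's rule adds):

\[
\text{Strict conditionalization on } E:\qquad new(X) \;=\; old(X \mid E) \quad \text{for all } X,
\]
\[
\text{Jeffrey conditionalization on } \{E_1, \dots, E_n\}:\qquad new(X) \;=\; \sum_{i=1}^{n} old(X \mid E_i)\, new(E_i) \quad \text{for all } X.
\]

Strict conditionalization is the special case in which some cell of the partition receives new probability 1; in general, the new weights new(E_i) need not be 0 or 1, and no single proposition in the agent's language need capture exactly what was learned.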

Jeffrey conditionalization weakens both of these assumptions, thereby providing a more general (and more "radically probabilistic") framework for learning. The theoretical and philosophical aspects of this framework are laid out in chapter three.

Before moving on to chapters four and five (which have to do with foundations and applications of subjective probability in statistics), I would like to digress with a few critical remarks on Jeffrey's account of the problem of old evidence presented in chapter two. The problem of old evidence is a problem (first articulated by Clark Glymour) for the traditional Bayesian theory of confirmation, which takes conditionalization as its learning rule. According to this classical approach, new(H) = old(H | E), and E confirms H iff old(H | E) > old(H). Hence, once E is learned, it cannot confirm any hypothesis thereafter, since all subsequent probability functions will have to assign probability 1 to E [i.e., new(E) = 1, and so new(X | E) = new(X) for all X, and no subsequent confirmation of any X by E is possible]. But, intuitively, there seem to be cases in which we do want to say that E confirms H even though we have already learned E. For instance, Einstein knew about the anomalous advance of the perihelion of Mercury (E) many years before he formulated his theory of general relativity (H), which predicts it. Nonetheless, it seems reasonable for Einstein to have judged that E confirms H (in 1915) when he learned that H predicts E. But a classical Bayesian theory of confirmation cannot undergird his claim. [Many Bayesians respond to this problem by saying that, while Einstein's actual credence function in 1915 did not undergird the desired confirmation claim, some historical or counterfactual credence function does (e.g., the credence function he would have had, had he not learned about the perihelion data). I will not discuss such approaches here.] Dan Garber provided a clever alternative explanation of confirmational judgments in such cases. Garber suggested that, while E did not confirm H for Einstein in 1915, the fact that H entails E (which Einstein did learn in 1915) did. The idea here is to model Einstein as an agent who is not logically omniscient. Garber does this by adding a new statement to our (sentential) probability model of Einstein's epistemic state. This statement gets extrasystematically interpreted as "H entails E." Garber then assumes that Einstein has some knowledge about this entailment relation (namely, that if X is true and "X entails Y" is true, then Y must also be true), but does not know whether or not "H entails E" is true.
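In probabilistic terms, one natural way to render this setup (a minimal sketch; Garber's own conditions are more careful than this) is to add an atomic sentence A, extrasystematically read as "H entails E," and to impose

\[
old(E \mid H \wedge A) \;=\; 1, \qquad 0 < old(A) < 1,
\]

with the target being that learning A confirms H in the classical sense, i.e.,

\[
old(H \mid A) \;>\; old(H).
\]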

One can then give constraints (historically plausible ones, even) on Einstein's credence function which ensure that "H entails E" confirms H in the classical Bayesian sense. Jeffrey speaks approvingly about this Garberian approach to logical learning and old evidence in chapter two. But he then goes on to sketch an alternative account based on Jeffrey conditionalization. On Jeffrey's account (which is rather tersely presented in chapter two), we assume that there are two learning events: the empirical update, in which E is learned, and the logical update, in which "H entails E" is learned. Jeffrey places various constraints on these two updates so as to ensure that, at the end of the two updates, H has a greater probability than it did before them. Thus, H is confirmed by the combination of the empirical and logical updates. There are lots of moving parts and assumptions in Jeffrey's account (it's considerably more complex than Garber's conditionalization approach). I won't get into these details here (although I think some of these assumptions are rather worrisome). Rather, I'd like to focus on the motivation for a Jeffrey-conditionalization approach in the first place (in light of Garber's elegant, pre-existing classical conditionalization approach). Recall the two motivations (in general) for abandoning strict conditionalization in favor of Jeffrey conditionalization: (1) that sometimes learning is not learning with certainty, and (2) that sometimes there is no statement in the agent's mentalese that expresses what was learned. The first motivation (1) cannot be relevant here, since (a) E must be learned with certainty in order for the problem of old evidence to get off the ground (if E is not learned with certainty, then E can still confirm H in the classical Bayesian sense, and there is no problem; this is why even Jeffrey models the empirical update as a strict conditionalization), and (b) there is no reason to suppose that "H entails E" is not learned with certainty here (and even if there were, it is unclear how that would help to resolve the problem anyway). So, whatever Jeffrey sees as lacking in Garber's approach, it must have something to do with (2). But Jeffrey concedes that E is expressed by a statement in the agent's (sentential) mentalese (namely, "E"). So, it seems that the only motivation for using Jeffrey conditionalization rather than strict conditionalization to model logical learning (and to use this logical learning to account for the old evidence problem à la Garber) is the worry that "H entails E" is not expressed by any statement in the agent's mentalese. Indeed, Jeffrey seems to presuppose this in his account sketched in chapter two. I don't find this a very compelling worry.
After all, Garber has shown how to use extrasystematic interpretation of one of the sentences of the agent's language to model an agent's learning "H entails E." One might respond on behalf of Jeffrey by complaining that having a sentence which is extrasystematically interpreted as "H entails E" is not the same thing as having a statement that systematically expresses "H entails E." That's true, but I don't see why it's a problem for Garber's approach. It is quite common in the context of classical Bayesian confirmation theory to extrasystematically interpret statements in a sentential language as having first-order logical content which outstrips their systematic (propositional-logical) content. For instance, in Bayesian approaches to the ravens paradox, (atomic) sentences in simple languages are extrasystematically interpreted as monadic first-order claims like "All ravens are black," and some of the (extrasystematic!) logical implications of these extrasystematic interpretations are crucial for proving the requisite theorems about the probability models in question. So, unless there is some reason to think that such applications of classical Bayesian confirmation theory (which trace back to the origins of the discipline) need to be reworked Jeffrey-style, so as to avoid the use of such extrasystematic interpretations, I don't see why Garber's approach needs to be reworked Jeffrey-style either. That said, I think Jeffrey's approach to old evidence and logical learning is both novel and clever. I just wonder whether its extra complexity and assumptions are really warranted, in light of Garber's simpler, classical approach.

Chapter four contains a perspicuous and sophisticated introduction to the concept of expectation and its relation to probability. Both unconditional and conditional expectation are expertly (and accessibly) covered here, along with their (sometimes subtle) connections to unconditional and conditional probability. This is something we (unfortunately) rarely see in a book on the philosophy of subjective probability. But it is essential for a thorough understanding of the foundations of the subject (especially as they were developed by de Finetti and others in the 20th century). In particular, the basics of expectation are prerequisite for grasping a key concept discussed in chapter five: exchangeability. Exchangeability is considered by many to be the single most important concept in the foundations of subjective probability. But it is almost never discussed in introductory texts on probability and inductive logic (at least, those that philosophers are likely to read).
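For readers who have not met the concept, a minimal statement (standard textbook form, not quoted from the book): a sequence of random variables X_1, X_2, \dots is exchangeable just in case, for every n and every permutation \sigma of \{1, \dots, n\},

\[
P(X_1 = x_1, \dots, X_n = x_n) \;=\; P(X_{\sigma(1)} = x_1, \dots, X_{\sigma(n)} = x_n),
\]

so that, in the binary (success/failure) case, the probability assigned to a particular sequence of outcomes depends only on how many successes it contains, not on their order. In its simplest (infinite binary sequence) form, the de Finetti representation theorem discussed below then says that any such assignment behaves as if the agent had a prior \mu over an unknown chance \theta:

\[
P(X_1 = x_1, \dots, X_n = x_n) \;=\; \int_0^1 \theta^{k} (1 - \theta)^{\,n-k}\, d\mu(\theta), \qquad k = x_1 + \dots + x_n.
\]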

In chapter five, Jeffrey provides a survey of some of the central results involving the concept of exchangeability. The most important of these are various forms of, and variations on, de Finetti's representation theorem for subjective probability, which provides a key to unlocking the mystery of how subjective probabilities can be obtained (noncapriciously) by updating on statistical information. This is some of the most technically (and philosophically) challenging material in the book. But this chapter (especially) repays a careful work-through. I would say that the material in this chapter will be the most challenging for students (even those with some background in probability). I would also say that those interested in the relationship between subjective and objective probability (e.g., probability in statistical mechanics) will find this chapter very illuminating and thought-provoking (many references to excellent related work in statistics and physics are included here). Those who want a deep understanding of the foundations of subjective probability and its relationship to contemporary statistical science would be well served by a careful study of chapters four and five of The Real Thing.

Chapter six (the final chapter of the book) is all about Jeffrey-style rational decision theory. Here, the reader will find a very effective crash course on the basics of the theory of rational decision first outlined in Jeffrey's classic essay The Logic of Decision. The presentation here benefits from many years of reflection since the publication of The Logic. In the very final section of the book (to my mind, one of the most interesting and sophisticated sections therein), we hear a completely new take from Jeffrey on the Newcomb problem. The Newcomb problem has plagued decision theorists (especially those of Jeffrey's ilk) for over thirty-five years. Here, at the very end of his very last work, Jeffrey renounces much of what he had been saying about that thorny problem for many years. In the process, he provides many wonderful new insights and ideas. This is the mark of a great philosophical mind (or, in his words, "a fond foolish old fart"). Even the last pages of his last book involve radical reworkings of age-old resolutions of the deepest philosophical puzzles. Richard Jeffrey was one of the greatest philosophers of probability, induction, and rational decision we have known. His last book has given me a healthy dose of his wisdom. May it do the same for you.