Epistemic Risk and Relativism

Acta Anal. (2008) 23:1–8
DOI 10.1007/s12136-008-0020-6

Epistemic Risk and Relativism

Wayne D. Riggs
University of Oklahoma, 455 W. Lindsey, Room 605, Norman, OK 73019, USA
e-mail: wriggs@ou.edu

Received: 23 December 2007 / Revised: 30 January 2008 / Accepted: 1 February 2008 / Published online: 23 February 2008
© Springer Science + Business Media B.V. 2008

Abstract: It is generally assumed that there are (at least) two fundamental epistemic goals: believing truths, and avoiding the acceptance of falsehoods. As has often been noted, these goals are in conflict with one another. Moreover, the norms governing rational belief that we should derive from these two goals depend on how we weight them relative to one another. However, it is not obvious that there is one objectively correct weighting for everyone in all circumstances. Indeed, as I shall argue, it looks as though there are circumstances in which a range of possible weightings of the two goals are all equally epistemically rational.

Keywords: Justification, Epistemology, Risk, Relativism, Theory of knowledge

This paper is part of a project in value-driven epistemology, which is an approach to epistemology that focuses on the value theory that necessarily accompanies any such normative domain as epistemology. The underlying assumption is that the value theory, while interesting in its own right, will also have relevant and revelatory implications for epistemology itself. This is because a lot of work in epistemology depends on assumptions about value that are either unspoken, unexamined, or at least undefended.

The first and foremost question that arises for value-driven epistemology is that of what the epistemic values are. This is a controversial subject, in part because it is hard even to settle on a procedure for determining the answer to the question. Some philosophers simply define epistemology as the theory of knowledge, and then argue that the only value relevant to knowledge is truth. Others take a more expansive view of epistemology, which at least opens up the possibility of epistemic values beyond truth. Indeed, there is now a limited literature on the question of what comprises the set of epistemic values. However, for the purposes of this paper, I will adopt the conservative position that our basic epistemic value is for our beliefs to be true rather than false.

As I hope to show, even this quite conventional assumption leads to some surprising results when you attend carefully to the value theory that underlies the epistemology. In particular, it has some potentially uncomfortable consequences for theories of epistemic justification.

1 Epistemic Risk

One often-noted but rarely appreciated complication that arises immediately with the conventional assumption of true belief as the epistemic value is that there are really two values masquerading together as one. We want to have beliefs that are true, and we do not want to have beliefs that are false. That is, we value having true beliefs and we disvalue having false ones. It's been pointed out many times that these two values (of having true beliefs and avoiding falsehoods) are in conflict with one another. One way to think of this balancing act is that every time you take a doxastic stand, either believing or withholding belief, you run a certain risk. If you believe that p, you risk being wrong, believing a falsehood. If you withhold judgment about p, you run the risk of failing to have that true belief, should p turn out to be true. So every single instance of belief or withholding represents an epistemic risk of one kind or another.

What is interesting about this way of looking at our basic epistemic values is that it nicely captures the sense in which the two are in conflict with one another. If all we cared about were having true beliefs, we would believe everything that came into our heads, and if all we cared about were avoiding falsehoods, we'd believe nothing. However, no theory of justification ever recommends either of those things. Rather, each one sets a certain standard (of evidence, reliability, or whatever) that must be met before the belief is justified. This represents the balance of epistemic risks that that particular theory recommends. The balance that must be struck is between the risk of missing an available truth vs. the risk of being in error. Epistemologists often make much of the need for one's account of justification to be properly connected to truth, but the flip side of this is that one's account of justification must be properly connected to error as well.

In a previous work (Riggs 2003), I argued for the claim that every theory of epistemic justification implicitly sets this balance of epistemic risks. I proceeded to argue that in order to assess the adequacy of any given theory of justification, we need to look at the balance of epistemic risk that it is committed to, and determine whether it is as it should be. After all, different theories balance these values differently. For example, a theory with high evidential standards, say, puts the risk of being in error above the risk of missing an available true belief. Let's look at a simplistic example: suppose theory X says that one must have evidence sufficient to render a belief 80% likely (construed however you like) in order for one to be justified in believing it. This is saying that one should forego an even chance of gaining a true belief (when the evidence renders the belief 50% likely to be true) because of the chance of error, even though the chance of error is no higher than the chance of gaining a truth. To the extent that the standards implied by a theory of justification demand more than an even chance of truth, to that extent the theory places a higher premium on the risk of error than on the risk of not getting an available truth.
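To make the trade-off in this example explicit, it can be reconstructed in simple expected-value terms. The following is an illustrative sketch rather than anything in the paper's own text: V_t and V_f are hypothetical weights for the value of gaining a true belief and the disvalue of believing a falsehood, P is the probability the evidence confers on p, and withholding judgment is scored zero.

```latex
% Illustrative reconstruction (not the paper's own notation).
% P   = probability the evidence confers on p
% V_t = value of gaining a true belief, V_f = disvalue of believing a falsehood
% Withholding judgment is scored 0.
\[
  \mathrm{EU}(\text{believe } p) = P\,V_t - (1-P)\,V_f,
  \qquad
  \mathrm{EU}(\text{withhold}) = 0 .
\]
\[
  \text{Believing is favored over withholding iff}\quad
  P > \frac{V_f}{V_t + V_f}.
\]
% An 80% threshold thus corresponds to V_f = 4 V_t (error weighted four times as
% heavily as a missed truth); a 50% threshold corresponds to V_f = V_t.
```

On this reading, theory X's 80% standard amounts to weighting the risk of error four times as heavily as the risk of missing an available truth, while an even-chance standard weights the two risks equally.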

So, every theory of epistemic justification implicitly strikes such a balance between the two epistemic risks under discussion. But what is the correct balance? Presumably, we want our theories of justification to get this balance right. In my previous paper, I did not seriously question the assumption that there was an objectively correct balance of epistemic risk that theories of justification must respect, but it is precisely that assumption that I will put to the test in the next section.

2 A Balancing Act

To get a perspective on epistemic risk, let's consider for comparison some risks we might take in our non-doxastic behavior. Consider two children, Sam and Pat, approaching a local park that is crowded with the other neighborhood children. To get to the park, they must get across a small creek. There is a bridge half a block away, but there is also a place where the more daring older kids just jump across, right where the two children are. Neither Sam nor Pat has ever jumped the creek before, each judging it to be at the very limit of their capabilities to make it across. However, this time, Sam runs towards the creek and takes the daring leap, landing successfully on the other side amidst the cheers of the other children. Pat, on the other hand, considers it for a moment, and then walks over to the bridge to cross.

We can analyze Sam's and Pat's behavior in terms of their reactions to the various risks involved. What are these risks? Well, an obvious one is the risk of getting hurt. Another is the risk of looking stupid in front of all the neighborhood kids if you end up splashing into the creek. On the other hand, taking the bridge means foregoing the possible admiration of all the kids at the park. For Sam, the potential payoffs in terms of neighborhood glory outweigh the risks of injury and ignominy. Pat, on the other hand, finds the balance of risks going the other way. For Pat, the fear of injury and ignominy outweighs the appeal of potential neighborhood glory.

One possible explanation for this is that Sam and Pat have different estimates of the probabilities involved in determining the risks. For instance, perhaps Pat thinks it is only 30% likely that the jump would be successful, whereas Sam gives it a 70% probability. This would explain both Sam's willingness and Pat's reluctance to jump the creek. What I am interested in here, however, is the determination of the correct weighting or balance between conflicting values, not in how different people calculate the probabilities of possible outcomes differently. So, let us assume that Sam and Pat estimate the relevant probabilities involved in the risky jump identically. Under this assumption, then, the best explanation for the divergence in Sam's and Pat's behavior is that the two of them weight the relevant values in play differently. Sam either values neighborhood glory more highly than Pat, or else Pat fears pain and/or ridicule more than does Sam. This means that, in general, Pat requires either a lower probability of injury and ignominy or a higher probability of neighborhood glory than Sam before he will choose to perform such a daredevil stunt.

My question is this: what is the correct balance amongst the relevant values that should be struck by Sam and Pat in this case? A closely related question that might be easier to get a handle on is this: did one of the two behave more rationally than the other?

It seems fairly obvious to me that the answer to this latter question is straightforwardly no. This assumes, of course, that the possible consequences fall within a certain range of acceptability. If the consequence of falling into the creek were a protracted and painful death, then we might well say that Sam's action was less rational than Pat's (since we consider people who take mortal risks for minimal payoff to have a death wish or to be otherwise pathological). On the other hand, if the creek were little more than a thin dribble of water, then we might well say that Pat's action was less rational than Sam's (because we consider people who are unwilling to take minimal risks for large payoffs to have phobias or to be otherwise pathological). The above example, however, is specifically designed to make the risks of jumping the creek significant enough to factor into rational behavior, yet not so powerful as to rationally compel one choice over another. In these circumstances, it seems most intuitively plausible to say that both children acted equally rationally, because each one behaved as dictated by a reasonable set of values (and probabilities). Since we are supposing that Sam and Pat do not differ in their estimation of the probabilities of the various possible outcomes, we have a case in which two different assessments of risk are equally rational, and differ only as to the weighting ascribed to the different values. It follows that each weighting of the values is equally rational.

3 Tipping the Epistemic Balance

I think these lessons apply more or less straightforwardly to the epistemic situation. Just as we all knew daredevils and cowards when we were kids, we also know people who are quick to believe the latest theory or what they hear on TV or what they read on the Internet, and we all know people who are quite skeptical, who require more convincing than everybody else before they will believe something. It seems to me that the very same kind of explanation is apropos here as in the case of the two kids jumping the creek. Different people can assess differently the risks involved in believing or withholding, even if they have the same evidence, and hence make the same judgment regarding the likelihood of the truth of the proposition under consideration. For some, the risk of being wrong simply doesn't weigh as heavily as it does on others, and so they value the possibility of gaining a truth relatively highly, and vice versa for others. Thus, just as one child requires a higher probability of glory or a lower probability of injury or ignominy than the other before he will jump, some believers require a higher probability of being right than others before they will commit to belief.

One might object at this point by pointing to the fact that people are not psychologically free to adopt just any stance regarding epistemic risk. After all, it seems to be a psychological fact that humans can believe that p only when they think that p is true. This means that there are constraints on the kinds of risk trade-offs that people can make with regard to their beliefs. For instance, it is psychologically implausible that someone could value gaining truths so highly, relative to their disvaluing of believing falsehoods, that they believe virtually everything they hear. As a general rule, we are not capable of taking so cavalier an attitude toward the truth, and, as students new to epistemology often (rightly) remark, it seems psychologically implausible to be a global skeptic.

We simply cannot bring ourselves to refrain from believing altogether, as an extremely cautious attitude toward epistemic risk would dictate. While this does not refute skepticism, the protests of our students aside, it does seem to indicate that the entire range of stances toward epistemic risk is not available to psychologically normal humans. However, this is simply an analog of the point made earlier about pathological behavior. Psychologically normal humans are generally not capable of taking an extremely cavalier attitude toward their own survival, either. All this shows is that there is a range within which our values are variable, and any instance of someone's values going outside of that range is evidence of pathology or some kind of cognitive malfunction. Nevertheless, there remains, I claim, such a range within which people's epistemic values can rationally vary, and so make rational different stances toward epistemic risk for different people, even given the same evidence.

Some might object here that my talk of balancing our values and calculating risks is all far too subjective and self-conscious, especially on the epistemic side. I agree that we do not do this balancing and calculating in anything like a conscious fashion. Indeed, I think that we normally don't do so in the case of action either. By and large, the risks that we take are part of our personality, not part of our voluntary (or, at least, our deliberative) behavior. Our proclivities to act or believe in certain ways tend to embody the values that we have, and the strengths with which we have them.¹

¹ This is actually a fairly subtle business. On the one hand, I don't think our assessment of risks and the subsequent determination of our behavior in situations that involve those risks is always or even very often deliberative. On the other hand, I don't want to imply that our behavior always accurately reflects our values. It seems to me that this is something we can get wrong. I don't just mean that we can miscalculate risk. That is obvious. We can over- or underestimate the probabilities of various outcomes in a way that causes us to take on risks that we do not mean to. What I mean is that we can be wrong about what our own values are, or about the relative strength with which we hold those values. We humans are notoriously self-ignorant. Obviously, these are issues that an account of how to actually assess people on the basis of their own epistemic values must come to terms with. But I cannot pursue that task here.

A related worry is that it seems that I am assuming that we can choose our beliefs in the way that we can choose our actions. This voluntarism is suggested by my use of the term "rational." Rationality is normally thought to apply to actions, and it would be strange to assess the rationality of someone on the basis of things they have no control over. In particular, the application of an expected-utility notion of rationality seems suspect if we are constitutionally unable to act on such calculations, but rather simply find ourselves with or without a given belief whether we like it or not. Though I think this objection begs some important questions about the connections between lack of control and rationality, I need not be derailed by that discussion here. Let us consider a theory of epistemic justification that is not prone to being committed to any sort of problematic doxastic voluntarism: reliabilism. According to simple versions of reliabilism, a belief is justified so long as the believer comes to hold it via a process that is reliable at producing true beliefs. There are well-known problems with specifying the precise degree of reliability that is necessary for justification, but that is beside the present point.

The considerations presented above in favor of relativism about the two epistemic goals apply here as well. The correct degree of reliability that is required for justification will depend upon, among other things, the correct balance to maintain between gaining truths and avoiding falsehoods. If I am correct that there is a range of these weightings within which none is rationally preferable to the others, then there is no one degree of reliability that correctly sets the bar for justification. Thus, the problems that this relativity raises for theories of justification are not limited to those that have voluntarist sympathies.

Let us take stock. I have argued that sometimes it is rational for two different agents to weight values that they share differently. If so, then different actions can be rational for two different people, even if they agree about all the relevant factual information, and, finally, it seems to me that all this carries over to the epistemic realm. That is, rational people can weight the epistemic values of having true beliefs and avoiding false ones in different but equally rational ways. This can result in two people with the same evidence for a proposition p rationally coming to different doxastic stances regarding p. I take it that this is a more general point than the one William James was making in "The Will to Believe" (James 1969), where he argues that in certain very restricted circumstances our passional natures can legitimately determine belief. I am saying that across a much wider spectrum of cases, our passional natures can legitimately contribute to the determination of belief in a perfectly epistemically respectable way. Consequently, some of our assumptions about justification and, hence, knowledge may well need to be reconsidered in light of this apparent relativism about epistemic rationality.

4 The Truth (and Error) Connection

It may seem as though there are still some theories of justification that are less likely to be affected by these considerations than others. The considerations in favor of the kind of relativism I am arguing for arise from treating rationality as though it were simply a matter of expected utility. In other words, the rationality of an action is equated to the expected utility of that action with respect to particular goals or values held by the subject. By analogy, the epistemic rationality of a given belief is assessed in terms of its expected utility with respect to the goals of having true beliefs and avoiding error. While this may seem a fairly natural way to assess rationality, epistemic or otherwise, it is only one among many. More to the point, there are differences of opinion among epistemologists as to how tightly a theory of justification must conform to such expected utility calculations. Thus, it is not immediately obvious that every theory of justification is affected by the relativity of epistemic values.
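As a rough illustration of this expected-utility framing (a minimal sketch with hypothetical weights and a function name of my own devising, not anything proposed in the paper), two agents who assign the same credence to p but weight the two epistemic goals differently can arrive at different, and by the present argument equally rational, doxastic stances:

```python
# Minimal sketch of the expected-utility reading of epistemic rationality.
# All weights and numbers are hypothetical illustrations.

def doxastic_stance(credence, value_truth, disvalue_error):
    """Return 'believe', 'disbelieve', or 'withhold' by comparing the expected
    epistemic utility of each stance (withholding is scored 0)."""
    eu_believe = credence * value_truth - (1 - credence) * disvalue_error
    eu_disbelieve = (1 - credence) * value_truth - credence * disvalue_error
    best = max(eu_believe, eu_disbelieve, 0.0)
    if best == 0.0:
        return "withhold"
    return "believe" if eu_believe >= eu_disbelieve else "disbelieve"

credence_in_p = 0.75  # both agents make the same judgment about the evidence

# An epistemic risk-taker: a missed truth weighs almost as heavily as an error.
print(doxastic_stance(credence_in_p, value_truth=1.0, disvalue_error=1.5))  # believe

# A more cautious believer: error weighs four times as heavily as a missed truth,
# which corresponds to the 80% threshold of the earlier example.
print(doxastic_stance(credence_in_p, value_truth=1.0, disvalue_error=4.0))  # withhold
```

With these hypothetical weights the cautious agent's implied threshold is 0.8, so a shared credence of 0.75 licenses belief for one agent but only suspension of judgment for the other, which is just the sort of divergence discussed below.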

On the face of it, deontological theories of justification would appear to be most immune to the worries raised in this paper. These are theories that take justification to consist in something like having fulfilled one's epistemic duties, or having been epistemically responsible in coming to hold the belief. For theories like these, the expected utility of the belief with respect to our epistemic goals is not, in the first instance at least, relevant to whether a belief is justified. Thus, the worry about the subjectivity of the correct balance to be struck between those goals would seem to place no new burden on them.

However, even deontological theories are generally held to the requirement of maintaining an appropriate relationship between justification and truth. For instance, if a theory's criteria of justification allowed believers to be justified even while their beliefs consistently and comprehensively turned out to be false, this would be seen by many as grounds for dismissing the theory as patently inadequate. But I have argued that avoiding falsehoods is just as basic an epistemic goal as is believing truths. Thus, if maintaining an appropriate relationship between justification and truth is a criterion of adequacy for justification theories, then so is maintaining an appropriate relationship between justification and error; and this returns us squarely to the issue of how one should balance these two considerations, given that they pull us in opposite directions. So, even though deontological theories do not formulate their criteria of justification directly in terms of satisfying our epistemic goals, the problem does not go away. Deontological theories of justification must show that their criteria of justification are appropriately related to the correct balance of our epistemic values. If this balance varies from individual to individual, then it looks like different deontological criteria might be appropriate for different individuals. Thus the problem of relativism arises even for such theories.

5 Some Final Thoughts and a Conclusion

It may be thought that this is much ado about not a whole lot. After all, it's not as if I've shown, or even tried to argue, that epistemic standards can diverge so much in a particular case that two agents can each be rational while one believes that p and the other believes that not-p on the basis of the same evidence. It doesn't seem plausible to me that there is that much play in our standards, at least with regard to what I have tried to show. At most, there will be cases where the same evidence justifies belief for one person but only withholding for another. So perhaps the most we need do is nod our heads in the direction of this mild relativism about epistemic standards, and simply acknowledge that the boundaries of epistemic justification were always blurry, and that any variation in standards that my arguments vindicate is hidden within that blurriness anyway.

Perhaps, but there is an area of epistemology that has seen a lot of recent interest and that might be affected more profoundly by this point than is the theory of justification. That is the epistemology of disagreement. There are a number of related and interesting issues that get debated under this general rubric. Among these are: What must my doxastic reaction be when confronted with an epistemic peer who disagrees with me about p? What must my doxastic reaction be when confronted with an epistemic superior who disagrees with me about p? Can two people who are fully apprised of all of each other's reasons and arguments nevertheless reasonably come to different conclusions regarding p? It seems to me that the points made in this paper about the relativity of epistemic rationality can be brought to bear here. While I do not have a fully worked-out view, at the least it looks as though two fully informed and reflective people could rationally disagree about whether p.
If one of them is more of an epistemic risk-taker than the other, then presumably there will be cases where the evidence is sufficient to make belief in p rational for that person, yet would allow only suspension of belief for a more epistemically cautious person.

If each were aware of their respective proclivities toward epistemic risk, it seems that each person could accept the other's doxastic stance toward p as reasonable, while being under no rational pressure to change his own.

Moreover, the effects of relativism about the weighting of our epistemic values are more profound if considered over time. Imagine two maximally rational agents with different weightings of the goals of having true beliefs and avoiding believing falsehoods. Suppose they begin at time t with all and only the same beliefs. Over time, even if they are exposed to all and only the same stimuli, evidence, and so on, differences in their belief systems will emerge, and these are likely to be cumulative. Over a sufficient amount of time, reflective rational agents can come to have significantly different views about the world. Moreover, even given the chance to discuss their differences and share their evidence, reasoning, and so on, they will be unable to reconcile their differences, and neither one will be rationally criticizable.

This is all quite speculative, of course, and my goal here is not to make a case one way or the other; it is merely to point out that taking proper account of the significance of the two independent values that are central to our epistemic lives can inform and perhaps advance current epistemological debates. I have tried to show that there is at least a range of possible assignments of the relative importance of our two basic epistemic values (having true beliefs and avoiding errors) within which rational people can rationally disagree. While the effects of this, if true, would not be devastating, it will require some rethinking on the part of most epistemologists of our accounts of justification and knowledge, and it may help shed some light on the much more recent, but very interesting, issue of the epistemology of disagreement.

References

James, W. (1969). The moral philosophy of William James. New York: The Thomas Y. Crowell Co.
Riggs, W. (2003). Balancing our epistemic ends. Noûs, 37.