RATIONALITY AND THE GOLDEN RULE

Eric van Damme
Tilburg University
16 November 2014


Abstract: I formalize the Golden Rule ("do unto others as you would like others to do unto you") in a standard rational choice framework. I show that, in a 2-player context, a rational individual can follow the Golden Rule, but that, when society is larger, rationality and the Golden Rule may be inconsistent. I also show that a player who follows the Golden Rule may harm the other; in fact, if a rational individual follows the Golden Rule, the outcome may be Pareto inferior to the outcome that results when the individual neglects the Rule. Finally, I show that, in this rational choice interpretation, the Golden Rule and the Law of Love may be incompatible.

JEL Codes: A12, A13, C72, D63

The idea for this paper arose in the context of the project "Markets for a Good Society", sponsored by the Dutch Royal Academy of Sciences (KNAW); I thank KNAW for incentivizing me (that is, both nudging me into and allowing me) to take a broader perspective than is usual in academic work. I thank Jörgen Weibull for discussions on his paper with Ingela Alger (Alger and Weibull (2013)) that induced me to write down these ideas. Manuel Toth is thanked for his reference to the work of Claude Berge.

Eric van Damme, CentER for Economic Research and Tilburg Law and Economics Center (TILEC), Tilburg University, Eric.vanDamme@uvt.nl.

1. Introduction

In this paper I address the question: is rationality compatible with the Golden Rule? To answer this question, I operationalize and formalize both concepts in a game-theoretic framework. I show that the answer is affirmative in a 2-player society, but that the two concepts may be incompatible if society is larger. I also show that the Golden Rule leads to some surprising results in the 2-player case and that the Golden Rule may be incompatible with the Law of Love.

The Golden Rule is a maxim (a general rule for behavior) that states: "do to others as you would like others to do to you". The rule formalizes the ethic of reciprocity and it is prominent in many religions and world views; in fact, it seems a universal rule throughout the globe. Herzler (1934) traces it back to at least 2000 BC and notes that it is found among primitive peoples as well as among high cultures, and that it has been discussed by prominent philosophers both in the West (Aristotle) and in the East (Confucius); also see Puka (2014). The rule was given its definitive statement by Jesus of Nazareth (Matthew 7:12; also see Luke 6:31). In this paper, I ask: what does this rule mean? And are its demands compatible with rationality?

The Golden Rule, in both its first part ("do") and its second part ("what you would like"), links up with rationality as understood in modern economics and modern social science more generally. Economic theory starts from what people desire (the "like" in the second part of the Rule) and derives from that what people do. The assumption of rationality, in essence, is that individuals consistently act to achieve what they desire, subject to the constraints that they face. Hence, the Golden Rule seems to assume a form of rationality; at least, it links up with it. In order to better understand the link between the two, in this paper I formalize the Golden Rule in modern language and, hence, interpret it in order to get a better understanding of it.
Hence, on the one hand, this paper is an exercise in hermeneutics. Of course, being primarily a game theorist and not a philosopher, my interpretation certainly will not be as deep or profound as that of philosophers. Nevertheless, I think that, by using mathematical language, the precise formulation will make clear some properties of the Golden Rule as well as some difficulties associated with it. Precise formulations allow for sharp conclusions. For the latter, the reader can immediately turn to Proposition 1 and Results 1-4.

The main question addressed in this paper is whether the Golden Rule is consistent with rationality. Let me make clear precisely what I mean by this; that is, how are my results to be interpreted? I will assume that individuals are rational in the sense of von Neumann and Morgenstern (1944). That is, each individual can evaluate all possible social outcomes (as well as lotteries over such outcomes), compare them, and rank them. This implies that each player's preferences can be represented by a utility function. Standard theory proceeds on the basis of these data and only these data: anything that is not captured in the utility functions is considered irrelevant, an assumption that Sen (1979) has labeled welfarism, and has criticized.[1] We will not deal with the criticisms here, but move in a different direction.[2] I will assume that our players do not just have utility functions, but that they also take the Golden Rule seriously: they want to obey this rule. In fact, for them this rule is at least as important as the utility function. A person's utility function determines what he would like others to do to him, as in the second part of the Golden Rule. Since the person follows the Golden Rule, he then takes the corresponding action himself. Hence, once the player has determined what he would want (if he were the other player), he behaves according to the rule, without further taking his own preferences into account. Hence, both building blocks are important, and we investigate whether the demands that they jointly impose can be met. We will show that the conditions can always be met in a 2-person society, but that, in larger societies, additional conditions have to be met. The Golden Rule will typically produce a different outcome when the underlying utility function is different. For example, the Golden Rule forces a selfish individual to assume that the other is selfish as well; hence, it will typically force such a player to be very generous towards the other.
[1] Sen (1979) defines welfarism as: "the judgment of the relative goodness of alternative states of affairs must be based exclusively on, and taken as an increasing function of, the respective collections of individual utilities in these states."

[2] For example, the standard approach is neutral as to how an outcome is achieved: whether outcome x is reached through an individual's own choice or because it is dictated and forced upon him is irrelevant. (Of course, this could also be dealt with by enlarging the set of outcomes to include how they are reached.)

Note also that the Golden Rule makes very weak demands on the knowledge of the individual. In particular, there is no need to know what the other wants. It will be immediately clear that this leads to problems if the desires of the other are very different. For

example, consider the interaction between a doctor, whose objective is to save lives, and a terminally ill patient who wants to die. The Golden Rule dictates that the doctor refuse to help the patient, which is against the patient's interests. It is thus clear that the Golden Rule will only produce good results if the individuals have similar desires. Throughout most of the paper, we assume symmetry; we address asymmetry in Section 5.

The remainder of the paper is structured as follows. We first formalize rationality and discuss the Golden Rule somewhat more extensively (Section 2). We then show that rationality and the Golden Rule are always consistent in the 2-person case (Section 3), but not necessarily when the number of players is larger (Section 4). Section 5 discusses the case of asymmetric players and the links between the Golden Rule and other maxims of behavior, such as the Silver Rule and the Law of Love. Section 6 concludes.

2. Rationality and the Golden Rule

I assume a society in which the individuals are (in the first instance) consequentialist: they care about the (final) outcomes. Let X denote the (finite) set of outcomes. Individuals are assumed to obey the von Neumann and Morgenstern (1944) assumptions on decision making under risk; hence, the preference relation of individual i can be represented by a utility function u_i: X → R. Thus, u_i(x) denotes the utility that individual i assigns to the outcome x, and a larger number denotes a more preferred outcome. Which outcome ultimately results depends on the interaction between the individuals, which can be represented by a game form. This game could be in extensive form and it could involve uncertainty or incomplete information; however, these aspects are irrelevant for our purposes. We adopt the traditional approach, which has been revived in Kohlberg and Mertens (1986); hence, the strategic form is all that matters.
Let S_i denote the set of (pure) strategies of player i and let S be the set of strategy profiles (one strategy for each individual). Then each s ∈ S determines a probability distribution on X and, hence, a unique (expected) utility u_i(s) for each individual i. For our purposes, the situation is fully described by the game G = ⟨S_1, …, S_n; u_1, …, u_n⟩.

The Golden Rule states: "do to others as you would like others to do to you". The rule formalizes the ethic of reciprocity and, as noted in the Introduction, it is prominent in many religions and world views. In this paper, we address two questions: (i) What does it mean? (ii) Is it compatible with rationality, as defined above? We now give a first, brief discussion of the first question.

Although the Golden Rule is older and is also found in Asian culture, in the Western literature the first modern version of the rule is stated in Matthew 7:12: "In everything, therefore, treat people the same way you want them to treat you, for this is the Law and the Prophets." Clearly, the Golden Rule asks a person to put himself in the position of the other; it suggests a general orientation toward the other, rather than a focus on the self. It alerts us to the natural tendency to be self-centered and to forget or neglect the impact of our actions on others. It reminds us that others are our peers. Notice that the rule only makes minimal demands on information; it remains self-centered. It asks us how we would view our own action if we were in the other's position; we are not asked the question: how does the other look at (or evaluate) our action? Hence, the perspective remains our own. The Golden Rule asks us to put ourselves in the shoes of the other; it does not ask us to be the other. The rule does not say: "treat others as they would like to be treated". (See Section 5 for more on this important distinction.)

The Golden Rule should be distinguished from other moral precepts such as the First Commandment (also called the Law of Love), "love thy neighbor as thyself" (which was also stressed by Jesus; see Matthew 22:39, Mark 12:31 and Luke 6:31), and the Silver Rule (which was discussed by Confucius), "do nothing to others you would not have done to you". The latter at first sight seems weaker than the Golden Rule, as also suggested by the labels "gold" and "silver". Nevertheless, the difference is not so clear: if the action you would like done to yourself is unique, then you do not want any other action to be taken, and the Silver Rule amounts to the Golden Rule. In contrast, the First Commandment seems more demanding. We will only briefly discuss these alternative rules; see Section 5.
In the Introduction, I already discussed how the results should be interpreted. I assume that players are rational and wish to obey the Golden Rule. Since they are rational, they know what they want; hence, each player knows what he would want others to do to him. He can then do that same action to the other. The question is whether there exists a strategy (or a strategy profile) with this property. If so, what does it look like, and how does it change when circumstances (such as preferences) change?

3. Two-player games

Let me first formalize the Golden Rule in the case where society consists of two persons. Clearly, the Golden Rule supposes that, if i and j interact, it is also possible that the same situation arises with the positions of i and j interchanged. Hence, the overall environment is symmetric.[3] Within the context of the model described above, this means that the game G is symmetric: both players have the same strategy set, S_1 = S_2, and the payoff function is symmetric as well. To economize on notation, I write S_1 = S and u_1(·) = u(·); hence, symmetry means

u(s, t) = u_1(s, t) = u_2(t, s) for all s, t ∈ S. (1)

Note that the underlying game in extensive form may allow for multiple situations. However, our assumption is that each situation is equally likely to arise with each player in each of the two possible roles. For example, consider traffic interactions where individuals can be cyclists or car drivers. Each individual can be in either role, so that a car driver has to think about what he would do if he were a cyclist. Equation (1) expresses that the overall situation is entirely symmetric.

With the game G being symmetric, we can formalize the Golden Rule. The Golden Rule recommends a strategy r. The rule starts with "do", hence it states what player i should do. In a society that accepts the Golden Rule, player i will follow it, hence player i will play r. The Golden Rule states that i should do what he would like the other player j to do. Now, if i plays r and i is rational (as defined above), then it is obvious what he would like the other player j to do: given the utility function u_i and his own strategy r, player i wants j to choose the strategy that maximizes i's payoff. In other words, the Golden Rule r should satisfy

u(r, r) = max_{t ∈ S} u(r, t). (2)

In other words, when player i plays r, r is the most favorable thing that j can do to i. Let us consider two examples.
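Condition (2) can be checked mechanically in any finite symmetric game. The following minimal sketch (my own illustration, not code from the paper) applies it to the two examples discussed next, using the payoffs of Tables 1 and 2:

```python
# Minimal sketch (not from the paper): find all pure strategies r that
# satisfy the Golden Rule condition (2), u(r, r) = max_t u(r, t), in a
# finite symmetric 2-player game.  u[s][t] is player 1's payoff when he
# plays s and the other plays t; by symmetry, player 2 then gets u[t][s].

def pure_golden_rules(u):
    return [r for r in u
            if u[r][r] == max(u[r][t] for t in u)]

# Prisoners Dilemma of Table 1:
pd = {"C": {"C": 3, "D": 0}, "D": {"C": 4, "D": 1}}
print(pure_golden_rules(pd))   # ['C']

# Battle of the Sexes of Table 2: no pure strategy satisfies (2).
bos = {"H": {"H": 0, "T": 2}, "T": {"H": 3, "T": 1}}
print(pure_golden_rules(bos))  # []
```

As the text explains below, the empty answer for the Battle of the Sexes is an artifact of the restriction to pure strategies; with mixing, condition (3) applies instead.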
The first is the Prisoners Dilemma, with payoff matrix given in Table 1 below.

[3] At least, the Golden Rule assumes that i can imagine himself to be in the position of j. He can then ask himself how he would view (and evaluate) his actions when he would be confronted with them as the other player. This interpretation leads to the same expression as in (1).

       C     D
C     3,3   0,4
D     4,0   1,1

Table 1: Prisoners Dilemma

Clearly, the game is symmetric. When a player plays C, the best that can happen to him is that the other also plays C. Hence, in this Prisoners Dilemma, C is a Golden Rule. On the other hand, D is not a Golden Rule: when a player thinks of playing D, he still prefers the other to play C. The same holds for any mixed strategy: whatever one does, one always prefers the other to play C. Hence, in the Prisoners Dilemma, the unique Golden Rule prescribes playing C.

The second example is a variation on the Battle of the Sexes, with payoff matrix:

       H     T
H     0,0   2,3
T     3,2   1,1

Table 2: Battle of the Sexes

In this case, if the row player (P_1) plays H, he prefers the other (P_2) to choose T, while, if he chooses T, he prefers P_2 to choose H. Hence, neither H nor T satisfies condition (2). However, this non-existence is caused by the restriction to pure strategies, and we know that rational players might have to use mixed strategies. So let us allow the players to randomize. Let Δ(S) denote the set of all mixed strategies, that is, all possibilities for a player to select a pure strategy from S by using a random device. The formalization of the Golden Rule then becomes

u(ρ, ρ) = max_{τ ∈ Δ(S)} u(ρ, τ) = max_{t ∈ S} u(ρ, t), (3)

where the last equality follows from the fact that the payoff function of P_1 is linear in the mixed strategy of P_2. In the above Battle of the Sexes, if ρ is a non-trivial mixed strategy and P_1 intends to play ρ, then P_1 always prefers P_2 to play a pure strategy unless ρ = (0.5, 0.5), in

which case P_1 is indifferent to what P_2 does. We conclude that ρ = (0.5, 0.5) is the unique Golden Rule in this game. Note that the Golden Rule is (again) different from the (mixed-strategy) Nash equilibrium, which is (0.25, 0.75) in this case. (There are also two asymmetric pure-strategy Nash equilibria, (H, T) and (T, H).) We now have:

Proposition 1. If G is a symmetric 2-player game, then a Golden Rule exists.

Proof. For σ ∈ Δ(S), denote by P(σ) the set of mixed strategies of the other player that maximize i's payoff when i plays σ. Clearly, P(σ) is non-empty, compact and convex. It is also easily seen that the map σ → P(σ) has a closed graph. Hence, the conditions of the Kakutani Fixed Point Theorem are satisfied, so that there exists σ ∈ Δ(S) with σ ∈ P(σ). Such a σ is a Golden Rule. ∎

Note that condition (3) takes an ex ante perspective: before randomizing according to ρ, a player asks himself what randomized strategy of the other player would be best for him. One could also take an ex post perspective and ask: what strategy of the other would I like to see, given the outcome of my randomization? The game from Table 2 shows that one cannot insist on this stronger requirement: it would lead to non-existence.

We have proved existence of a Golden Rule. We cannot speak of "the" Golden Rule, however, as multiple ones may exist. The next coordination game (with X > 0) provides an example.

       H     T
H     1,1   0,0
T     0,0   X,X

Table 3: A coordination game

In this case, both H and T are Golden Rules and, if X ≠ 1, they are not equivalent: if X > 1, then both players prefer T, while H is preferred for X < 1. If X = 1, the two rules are equivalent and the players face a coordination problem. These observations motivate us to

distinguish between strong and weak Golden Rules. We say that ρ is a weak Golden Rule (ρ ∈ WGR) if it satisfies (3); it is a strong Golden Rule if it is a weak one and, in addition,

u(ρ, ρ) = max_{σ ∈ WGR} u(σ, σ). (4)

The following result is trivial.

Corollary 1. Each symmetric 2-player game has at least one strong Golden Rule; different strong Golden Rules are payoff equivalent.

I provide a final example before moving to games with more players. The next table gives the Hawk-Dove game, which is well known from evolutionary biology (Maynard Smith, 1982). As is common in the biology literature, we only list the payoffs of P_1. The game is symmetric, with C > V > 0.

          H        D
H     (V−C)/2      V
D         0       V/2

Table 4: The Hawk-Dove Game

Clearly, in this game, D is a Golden Rule. From the point of view of morality, this game has the same structure as the Prisoners Dilemma: whatever P_1 does, he always strictly prefers that P_2 chooses D. Hence, D is the unique Golden Rule, and it is, therefore, a strong Golden Rule. Note that, as in the Prisoners Dilemma, the Golden Rule is different from the Nash equilibrium. (The Hawk-Dove game has a unique Nash equilibrium; it is in mixed strategies and each player plays H with probability V/C.)

4. n-player games

The above settles the matter for two-player games. Let us now consider a society consisting of n individuals with n > 2. It is straightforward to generalize the definition of a symmetric game: it is an n-player game in which all players have the same set of pure strategies, S, and the identity of the players does not matter; hence, if π is a permutation of the set of players, then

u_{π(i)}(πs) = u_i(s). (5)

When there are at least 3 players, an important question is whether players can correlate their actions (for example, because they are able to talk to each other, or even to conduct jointly controlled lotteries), or whether they have to move independently. I will assume that correlation is possible. I will write σ to denote a correlated strategy profile and write σ = (σ_i, σ_{-i}), where σ_i is the marginal strategy of player i and σ_{-i} denotes the marginal strategy of the opponents of P_i. (Hence, σ ∈ Δ(S^n), σ_i ∈ Δ(S) and σ_{-i} ∈ Δ(S^{n-1}).) In this context, a correlated strategy profile σ is a (weak) Golden Rule if σ_i = σ_j (for all i, j) and

u_i(σ) = max_{τ_{-i} ∈ Δ(S^{n-1})} u_i(σ_i, τ_{-i}). (6)

In words, this condition states the following: (i) players use a correlated strategy profile, (ii) all individual marginal strategies are the same, and (iii) looked at from the ex ante stage, the opponents of player i play the correlated strategy that is most favorable for i, given that i plays σ_i.[4] We can also develop the concept for the case where correlation is not allowed. In that case, a (weak) Golden Rule is a mixed strategy profile σ = (σ_1, …, σ_n) such that σ_i = σ_j (for all i, j) and

u_i(σ) = max_{τ_{-i} ∈ Δ(S)^{n-1}} u_i(σ_i, τ_{-i}) (for all i). (7)

It is the same idea as in (6), but, in (7), players are restricted to independent randomization. By allowing for correlation, we make it easier to satisfy the condition: a solution of (7) is also a solution of (6). However, if there are more than two players, a weak Golden Rule does not necessarily exist, as the following example shows. Assume that there are three players and that they are all selfish. Each player has an endowment of 1 at his disposal and may decide how much of it to give to the others.
Any amount that is transferred (changes hands) is doubled. Hence, if P_1 transfers 1 to P_2, then P_1 loses 1 but P_2 gains 2. More generally, if x_j^i denotes the amount that i transfers to j, then the total amount that any P_i has at his disposal at the end of the day equals

u_i = 1 − x_j^i − x_k^i + 2x_i^j + 2x_i^k. (8)

(We have the constraint that 0 ≤ x_j^i ≤ 1, and there may be integer constraints as well.)

[4] Note that, as in the previous Section, it will not be possible to satisfy the corresponding ex post condition: for each s_i ∈ S with σ_i(s_i) > 0, u_i(s_i, σ_{-i}(·|s_i)) = max_{τ_{-i} ∈ Δ(S^{n-1})} u_i(s_i, τ_{-i}), where σ_{-i}(·|s_i) denotes the opponents' strategy conditional on s_i.

What does player 1 want? Irrespective of what he himself does, he would like both player 2 and player 3 to give their entire amount to him. Hence, within our specification, the only possibility for the Golden Rule is: give all the money to player 1. Now, the problem is that this rule is not unanimous: it favors player 1. If we would instead consider player 2, then we would get the rule "give all the money to player 2". These maxims are inconsistent. Hence, a weak Golden Rule does not exist for this game.

Result 1. If n > 2, a weak Golden Rule need not exist.

What causes the problem? Note, first of all, that the problem does not arise in the 2-player case. In that case, in essence, each player has two strategies, Give and Keep, and the Golden Rule is to Give. The issue of to whom should be given does not arise. In the 3-player case, we have a coordination problem, and it seems that our formalism is not able to handle this very well. In my opinion, the origin of the problem lies at least as much in the Golden Rule itself as in our formulation of the game. In the 2-person case, both in the first and in the second part, the Rule refers to a specific one-to-one relation, but when more individuals are involved, things get blurred. In its most familiar version, the Golden Rule states: "Do unto others as you would have them do unto you." The Rule talks about others, not about each single other; hence, it talks about the others as a group, not about the different individuals in that group.
However, the "do" in the first part of the Rule could refer to the individuals in that group separately. Hence, one can imagine a rule of the type "do unto each other what you would like each other to do to you", but reading it immediately suggests a kind of decomposition, with the n-person game being a sum of n − 1 separate 2-player games. Obviously, the formulation in (5) is much more

general than this, and, indeed, the example above is a game that is not decomposable in this way. An alternative way out is to have a decomposition into self and group, or self and others, as one has in the usual public good contribution games. In the canonical game of this type, each player i has a budget (which we can normalize to 1) and decides how much, x_i, to give to the pool and how much (1 − x_i) to keep for himself; the payoff to each individual i is then given by something like

u_i(x) = 1 − x_i + f(Σ_{j=1}^n x_j), (9)

in which there is a clear decomposition between self and group. In this situation, players are in symmetric positions and, with symmetric treatment, the game reduces to a 2-person situation in which the Golden Rule is compatible with rationality. Hence, it is clear that there are certain classes of n-person problems for which the Golden Rule will be compatible with rationality. I do not provide formal theorems in that direction; personally, I find the impossibility result to be more interesting.

Returning to the example, and abstracting away from all game-theoretic issues: what would a layman say should be done? The first thing that comes to mind is that, when allowed, one should give 50 cents to each other player. Hence, others are treated equally. Now, does the Golden Rule indeed insist on equal treatment of others? It is not immediately obvious that it does. Hence, one might say that r is a Golden Rule with equal treatment if it satisfies such an additional condition. Probably, this further reduces the domain of eligible problems, and such rules may exist on appropriate domains. Nevertheless, asymmetries are natural, even in (small extensions of) the canonical public good game defined above. Players may have different budgets, they may have different costs of contributing, or the benefits from the public good may differ. What does the Golden Rule say in these cases? Is it completely silent?
In any case, we know that real humans find it much more difficult to decide what to do in such asymmetric cases; there are different normatively appealing rules of behavior (Reuben and Riedl, 2013). Perhaps, the Golden Rule was not designed for such situations; it arose in the context of smaller communities in which the assumption of (approximate) symmetry and, hence, equal treatment was obvious. 12
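The symmetric case can be illustrated numerically. For an assumed concave benefit function f (my own choice; the text leaves f abstract), the common contribution level that maximizes each player's payoff in (9) when all players contribute the same amount is easily found by grid search:

```python
import math

# Assumed concave benefit function (illustrative; the text leaves f abstract).
def f(s):
    return math.log(1.0 + s)

def payoff(x_own, x_total):
    """Payoff in the canonical public good game of eq. (9):
    keep 1 - x_own of the budget, enjoy f(total contributions)."""
    return 1.0 - x_own + f(x_total)

n = 4
# Symmetric profiles: every player contributes the same x in [0, 1].
grid = [k / 100000 for k in range(100001)]
x_star = max(grid, key=lambda x: payoff(x, n * x))
print(x_star)  # maximizer of 1 - x + ln(1 + n*x); analytically x = (n-1)/n
```

With this f, the first-order condition n/(1 + nx) = 1 gives x = (n − 1)/n, so for n = 4 the search should return 0.75.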

5. Examples: multistage games and asymmetric situations

In this Section, I return to the two-player case and provide a few more examples and some discussion.

First, consider the dictator game: one of the players is randomly chosen to be P1 and provided with an endowment of 1. P1 may then decide how much, t, to transfer to the other, P2. This other player is passive; he has to accept the transfer. The material payoffs are, therefore, (1 − t, t). Assume that both players are selfish. P1 has to think about what he would like P2 to do if he himself were in the position of receiver. Clearly, in that case, he would like the other to transfer the entire amount. Hence, the Golden Rule dictates that he should transfer the entire amount: a selfish player who follows the Golden Rule transfers everything. Note that the result remains the same if a certain fraction of the money is lost when transferred, or when transfers are taxed; still the entire amount will be transferred. Hence, the Golden Rule may produce an inefficient outcome. Note also that the Golden Rule does not produce a fair outcome.

Next, consider the ultimatum game. The situation is the same as before, but now P2 has a decision to make: he can decide whether to accept the transfer (resulting in material payoffs (1 − t, t)) or to reject it (resulting in material payoffs (0, 0)).5 P2 has to ask himself what he would like if he were P1. The answer is clear: P1 would want him to always accept. For P1 the situation is as in the dictator game. Hence, the outcome of the game is that P1 transfers the entire amount, even though P2 would accept any amount that is transferred.

Let us return to the dictator game, but let us now assume that players are altruistic. Specifically, assume that u_1(x_1, x_2) = x_1 + f(x_2), where f is a concave increasing function with f(0) = 0. If the amount t is transferred, then the utility of P2 is equal to

u_2(1 − t, t) = u_1(t, 1 − t) = t + f(1 − t).   (10)

5 This 2-stage game can be transformed into a symmetric strategic game of the type considered in Section 2: each player can be put in the position of Sender or Receiver, with each possibility equally likely; a strategy of a player then describes how much the player will transfer as a Sender and how he will respond to each possible transfer t of the other.
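The claim that transfer losses do not change the Golden Rule prescription in the selfish dictator game can be checked with a tiny computation (the leakage fraction is my own illustrative parameter):

```python
# Selfish receiver's utility when the dictator transfers t and a fraction
# `leak` of the transfer is lost in transit (leak is an illustrative parameter).
def receiver_utility(t, leak):
    return (1.0 - leak) * t

grid = [k / 1000 for k in range(1001)]  # candidate transfers in [0, 1]
for leak in (0.0, 0.4):
    # Golden Rule: the dictator chooses the transfer he would most
    # like to receive were he himself the receiver.
    t_gr = max(grid, key=lambda t: receiver_utility(t, leak))
    print(leak, t_gr)  # the full endowment is transferred either way
```

Since the receiver's utility is increasing in t for any leakage below 1, the Golden Rule transfer stays at the full endowment, which is exactly the inefficiency noted above.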

The Golden Rule asks P1 to maximize expression (10); hence, f′(1 − t) = 1 whenever there is an interior solution. For example, assume f(x) = α ln(1 + x). Then t = 2 − α; hence, the more altruistic the players are, the less money is transferred.

Thus far in this Section, I have assumed that, although players may be in different positions, they have the same utility function. What happens if they have different utility functions? Consider again the dictator game and, for simplicity, assume that P1 is fair-minded and strictly prefers (0.5, 0.5) to any other outcome (say u_1 = x_1 + x_2 − |x_1 − x_2|), while P2 is selfish and prefers to have the entire amount for himself (u_2 = x_2). Proposer P1 asks himself what he would like P2 to do if he himself were a receiver. The answer is that he would like P2 to transfer 0.5; hence, P1 will transfer 0.5 to P2. Note that this is not what P2 would like to see: he would like to receive everything. A similar result holds if we reverse the roles of the players. If the selfish player is the proposer, he will transfer everything, even though the responder is satisfied with 0.5. In this case, the players' utilities are (0, 0) even though (0.5, 1) is a feasible utility vector. In particular, note that the selfish player transfers more than the fair-minded player. The reason is obvious: a selfish player is more demanding; hence, the Golden Rule imposes stricter requirements on him. Being confronted with himself, he knows that he has to be very generous to please himself.

As discussed in the Introduction, the Silver Rule states "do nothing to others you would not have done to you". Is this different from the Golden Rule? If we interpret "do not do" as "do not do something that does not maximize my payoff", then this maxim is equivalent to "do what maximizes my payoff" and the Silver Rule becomes equivalent to the Golden Rule.
This may not be the only possible interpretation, but the formalism of rational choice focuses on maximization and best choices and does not allow for an unambiguous definition of "do not want". Of course, specific models allow for this, but it becomes a bit ad hoc. What you do not want could be things like the worst thing possible, or something being forced upon you that is worse than what you could guarantee on your own. I will not pursue the implications of these possibilities here.

The Silver Rule is commonly interpreted as "do not harm others"; however, this interpretation is very different, as it explicitly demands taking the perspective of the other: in order to not

harm the other, one needs to take into account the other's preferences. Hence, one should not only take the position of the other, but also the utility function of the other. By a slight change in the above example, we can illustrate that such an interpretation is not warranted: in fact, following the Golden Rule might lead to harming the other. Suppose that P1 is selfish (u_1(x_1, x_2) = x_1) and that P2 is extremely altruistic: he only cares about the material consumption of P1 (hence, u_2(x_1, x_2) = x_1). In this case, the Golden Rule demands that P1 transfer everything to P2, but doing so actually harms both P1 and P2. If nothing is transferred, the utilities are (1, 1), while, if P1 follows the Golden Rule, the utilities are (0, 0). Also note that, in case P1 neglects the Golden Rule, he will transfer nothing, and that this is the unique Pareto efficient outcome. This example shows:

Result 2. In a 2-person context, an individual that follows the Golden Rule may harm the other.

Result 3. In a 2-person context, ignoring the Golden Rule and behaving rationally might produce an outcome that is Pareto superior to the one that results from following the Golden Rule.

Hertzler (1934) notes that John Stuart Mill linked the Golden Rule and the Law of Love to utilitarianism. Mill wrote: "To do as you would be done by, and to love your neighbor as yourself, constitute the ideal perfection of utilitarian morality." That might be true, but I am not sure. How should one interpret "love your neighbor as yourself"? Does it mean the same as "love your neighbor"? I am not sure, but I think that true love might mean that one eliminates oneself, or fully neglects one's own interests, and, hence, focuses exclusively on the interests of the one that one loves. Rather than focusing on u_i, the focus should, hence, exclusively be on u_j.6 In any case, it seems that one might not wish to hurt the one that one loves.7
I think that the last example, and Results 2 and 3, therefore show that the Golden Rule and the Law of Love may be incompatible: the former dictates that everything be given; the latter dictates that nothing be given. What this implies for utilitarian morality, I do not know.

6 I will not attempt here to formalize love in the context of the rational choice framework. The examples in this Section show that mutual love (i loves j and j loves i) might lead to coordination problems and an infinite regress which somehow have to be solved.
7 Although the Germans say: "Was sich liebt, das neckt sich" (those who love each other tease each other).
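The dictator-game variants in this Section all follow one template: the proposer transfers the amount he would most like to receive, given his own utility function, were he in the receiver's position. A small sketch (the grid search, the value α = 1.5, and the particular fair-minded utility are my own illustrative choices):

```python
import math

def golden_rule_transfer(u_as_receiver):
    """The Golden Rule transfer: the t in [0, 1] the proposer would most
    like to receive were he himself in the receiver's position."""
    grid = [k / 100000 for k in range(100001)]
    return max(grid, key=u_as_receiver)

# Altruistic players, u(own, other) = own + a*ln(1 + other); the interior
# solution is t = 2 - a. The value a = 1.5 is an assumed illustration.
a = 1.5
t_altruist = golden_rule_transfer(lambda t: t + a * math.log(1.0 + (1.0 - t)))
print(t_altruist)  # 2 - a = 0.5

# A fair-minded player who strictly prefers the equal split (one utility
# with that property, assumed here): as a receiver he would want t = 0.5.
t_fair = golden_rule_transfer(lambda t: (1.0 - t) + t - abs((1.0 - t) - t))
print(t_fair)  # 0.5

# A selfish player would, as a receiver, want everything, so a selfish
# proposer who follows the Golden Rule transfers the whole endowment.
t_selfish = golden_rule_transfer(lambda t: t)
print(t_selfish)  # 1.0
```

The last two cases reproduce the observation above that the selfish player, being more demanding, transfers more than the fair-minded one.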

It is not too difficult to find examples in real life. My wife dislikes arriving late; hence, she prefers to leave early. I do not mind arriving late, hence tend to leave late, and, indeed, when not accompanied by my wife, I tend to arrive somewhat late. At what time should we leave if my wife and I go to a party together? The Golden Rule tells me that I should focus on my preferences, hence that we should not leave too early. But I know that this hurts my wife. I do not know whether our interaction is covered by the Law of Love, but I tend to think that love implies that we should leave early. This discussion shows:

Result 4. The Law of Love is incompatible with love as commonly understood, or the Law of Love is incompatible with the Golden Rule.

6. Final Comments

I am not aware of discussions of the Golden Rule in the (modern) economic literature, with one exception: Knight (1939). In this paper, Frank Knight discusses and criticizes several moral precepts associated with Christianity. He says that the Bible preaches the Gospel of Love, in particular "Love the Lord" and "Love thy neighbor as yourself" (what I have called the Law of Love), but that the latter is impossible. At least, according to Knight, it is impossible to love the neighbor's child in the same way as one's own. He also discusses the question of who counts as a neighbor and argues that the rule of love can only hold in small societies. With respect to the Golden Rule directly (as in Matthew 7:12), Knight states that what most of us really want is to be left alone; at least, we prefer to be free from interference by others. It is not so clear whether and how such freedom can be captured in my model.

A student at Tilburg University, Manuel Toth, has brought to my attention the notion of a Berge equilibrium. This is a concept that can be applied to general games, that is, also asymmetric ones.
It has been simplified by Zhukovskii, who provides the following definition:

Definition (Berge equilibrium in the sense of Zhukovskii). A strategy profile s* is a Berge equilibrium if, for each player i and every strategy profile s_{−i} of the other players, we have u_i(s*) ≥ u_i(s*_i, s_{−i}).

Clearly, this concept is exactly my formalization of the Golden Rule, but it is more general, as a Berge equilibrium also allows the game to be asymmetric. It is easily seen that Proposition 1 (and its proof) generalizes: a Berge equilibrium exists for each 2-player game.
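On a finite two-player game, the Zhukovskii condition can be checked mechanically: at a Berge equilibrium, each player's payoff is maximized by the other player's choice, holding his own strategy fixed. A sketch on an assumed symmetric 2x2 game (the Prisoner's-Dilemma payoffs are my own choice):

```python
# Assumed symmetric 2x2 game: strategy 0 = cooperate, 1 = defect.
# u1[i][j] is the row player's payoff at profile (i, j); u2 is its transpose.
u1 = [[3, 0],
      [4, 1]]
u2 = [[u1[j][i] for j in range(2)] for i in range(2)]

def is_berge(i, j):
    """Zhukovskii's condition at (i, j): the OTHER player's strategy
    maximizes each player's payoff, that player's own strategy being fixed."""
    return u1[i][j] == max(u1[i]) and u2[i][j] == max(row[j] for row in u2)

for i in range(2):
    for j in range(2):
        print((i, j), is_berge(i, j))
# only (0, 0), mutual cooperation, satisfies the condition here
```

Note the contrast with Nash equilibrium: in this game, mutual defection is the unique Nash equilibrium, while mutual cooperation is the unique Berge equilibrium.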

Equally obviously, existence of a Berge equilibrium is not guaranteed if there are more than 2 players.

Within the context of the standard rational choice model, I have provided a precise formulation of the Golden Rule, as well as of several other maxims for ethical behavior. I conclude that, with this interpretation, the Golden Rule is compatible with rationality in the 2-player case, but not necessarily when more than two players are involved. In the 2-player case, a rational individual who follows the Golden Rule may hurt the other. Both players may also strictly prefer that the Golden Rule not be followed. Of course, this rational choice interpretation of the Golden Rule may not be the only possible one, or maybe not even the most plausible one. If we accept it, we may have to conclude as well that the Golden Rule and the Law of Love, which may be the two most important precepts from the Bible, are inconsistent.

I agree with Knight (1939) that the Law of Love is too demanding for society as a whole; it seems impossible to love everyone as oneself. There are several spheres of interaction. People that are really close to each other, and who know each other well, may be able to follow the Law of Love, but doing so may imply that one has to violate the Golden Rule. People that are more distant and that do not know each other that well can follow the Golden Rule in bilateral interactions, but they should be aware of the fact that each may then hurt the other, unless the other's preferences are similar to one's own. In multilateral interactions, the Golden Rule may provide little guidance for actual behavior, especially when there are asymmetries.

References

Alger, I. and J. Weibull (2013) Homo Moralis: Preference evolution under incomplete information and assortative matching, Econometrica 81, 2269-2303.
Berge, C. (1957) Théorie générale des jeux à n personnes, Gauthier-Villars, Paris.
Hertzler, J. O.
(1934) On Golden Rules, International Journal of Ethics 44, 418-436.
Knight, F. H. (1939) Ethics and economic reform. III. Christianity, Economica 6, 398-422.
Kohlberg, E. and J. F. Mertens (1986) On the strategic stability of equilibria, Econometrica 54, 1003-1037.
Maynard Smith, J. (1982) Evolution and the Theory of Games, Cambridge University Press.
Puka, B. (2014) The Golden Rule, Internet Encyclopedia of Philosophy, http://www.iep.utm.edu/goldrule/ (last consulted 15 November 2014).

Reuben, E. and A. Riedl (2013) Enforcement of contribution norms in public good games with heterogeneous populations, Games and Economic Behavior 77, 122-137.
Sen, A. (1979) Utilitarianism and welfarism, The Journal of Philosophy 76, 463-489.
Von Neumann, J. and O. Morgenstern (1944) Theory of Games and Economic Behavior, Princeton University Press, Princeton, NJ.