Self-recognition in teams

Abstract

This paper studies an idea we call “(null) self-recognition,” which occurs when a player who was certain that they were a particular type privately discovers that they are in fact some other type. To address unresolved questions as to how players update their beliefs regarding their partner’s type and higher-order beliefs regarding both players’ types after self-recognition, we propose a “sequential reassessment” rule, in which beliefs concerning each player’s type are modified up to a given order. As an initial investigation of its equilibrium consequences, we embed sequential reassessment in a simple model of team production, in which players experience self-recognition when game play begins. Our main result, which applies for team projects with uneven task demands, shows how a player’s decision to work or shirk can depend solely on whether that player’s reassessment of their own type is “deeper” or “shallower” than their reassessment of their partner’s type.

Notes

  1. While our notion is far more general, self-recognition is implicit in early behavioral models of (fully) naive present-biased discounting (O’Donoghue and Rabin 1999). In these models, an agent discovers they are present-biased despite previously being certain they would not be. However, this form of self-recognition has been predominantly studied in single-agent settings, as opposed to our focus on a multi-player game.

  2. This is essentially O’Donoghue and Rabin’s (1999, 2001) solution concept (originally used to analyze naive present bias as an intrapersonal game), except here it is extended to a multi-agent setting.

  3. Darley and Latane (1968) provide early evidence of this well-known “bystander effect” leading to no-volunteer outcomes. Also see Diekmann (1985) and Franzen (1999) for more recent reviews.

  4. In other words, given \(\Theta \), if two candidate incorrect priors \(\widehat{\Theta }'\) and \(\widehat{\Theta }''\) imply the same instinct, i.e. if the relationship between \(\rho ^*\) and \(\widehat{\rho }^{\,*}\) (whether \(\rho ^*<\widehat{\rho }^{\,*}\) or \(\rho ^*>\widehat{\rho }^{\,*}\)) is the same with \(\widehat{\Theta }=\widehat{\Theta }'\) and with \(\widehat{\Theta }=\widehat{\Theta }''\), then a player’s strategy is the same with \(\widehat{\Theta }'\) and with \(\widehat{\Theta }''\).

  5. We should note, however, that our notion of self-recognition cannot embed recent generalizations of naive present bias, such as the notions of partial naivete proposed by Asheim (2008), Heidhues and Koszegi (2009), and Ali (2011), in which the decision-maker assigns a nonzero probability to the (ultimately realized) possibility of being present biased. Another caveat is that O’Donoghue and Rabin’s (1999) concept of naive present bias presumes that self-recognition occurs in each period, while here it only occurs at the beginning of the game. With that said, these specifications are behaviorally indistinguishable in our framework because a player who repeatedly experiences self-recognition would act the same if their reassessed beliefs at \(t^*\) were instead formed at the beginning of the game (and subsequently maintained). While the naive present-biased (quasi-hyperbolic) discounting model is typically applied in individual decision-making contexts, several recent papers consider naive present-biased agents in team settings. See for example, Haan and Hauck (2014), Cerrone (2016), Fahn and Hakenes (2018), and Fedyk (2018).

  6. Technically, an LPS may also prescribe tertiary beliefs, and so on. Since our model abstracts from the possibility of re-updating (a topic that is otherwise tangential to our present discussion, see Sect. 4.1), for our purposes the beliefs in (6) may also describe every successive set of beliefs in Alice’s LPS.

References

  • Ali SN (2011) Learning self-control. Q J Econ 126:857–893

  • Asheim G (2008) Procrastination, partial naivete, and behavioral welfare analysis. Working paper

  • Blume L, Brandenburger A, Dekel E (1991) Lexicographic probabilities and equilibrium refinements. Econometrica 59:81–98

  • Cerrone C (2016) Doing it when others do: a strategic model of procrastination. Working paper

  • Darley J, Latane B (1968) Bystander intervention in emergencies: diffusion of responsibility. J Personal Soc Psychol 8:377–383

  • Diekmann A (1985) Volunteer’s dilemma. J Confl Resolut 29:605–610

  • Fahn M, Hakenes H (2018) Teamwork as a self-disciplining device. Am Econ J Microecon (forthcoming)

  • Fedyk A (2018) Asymmetric naivete: beliefs about self-control. Working paper

  • Franzen A (1999) The volunteer’s dilemma: theoretical models and empirical evidence. In: Foddy M, Smithson M, Schneider S, Hogg M (eds) Resolving social dilemmas: dynamics, structural, and intergroup aspects. Psychology Press, New York, NY, pp 135–148

  • Gans J, Landry P (2018) I’m not sure what to think about them: non-Bayesian updating for naive present-biased players. Working paper

  • Haan M, Hauck D (2014) Games with possibly naive hyperbolic discounters. Working paper

  • Heidhues P, Koszegi B (2009) Futile attempts at self-control. J Eur Econ Assoc 7:423–434

  • Karni E, Vierø M-L (2013) “Reverse Bayesianism”: a choice-based theory of growing awareness. Am Econ Rev 103:2790–2810

  • Myerson R (1986) Multistage games with communication. Econometrica 54:323–358

  • O’Donoghue T, Rabin M (1999) Doing it now or later. Am Econ Rev 89:103–124

  • O’Donoghue T, Rabin M (2001) Choice and procrastination. Q J Econ 116:121–160

  • Ortoleva P (2012) Modeling the change of paradigm: non-Bayesian reactions to unexpected news. Am Econ Rev 102:2410–2436

This paper evolved from an earlier working paper, titled “Procrastination in Teams.” Another working paper of ours, Gans and Landry (2018), also evolved from this earlier work, and instead focuses on an application to naive present bias within a modeling framework that is (in most ways) simpler than the framework considered here.

Appendices

Appendix A: Additional generalizations

A.1 Robustness to partial and non-sequential reassessments

In this appendix, we consider relaxing our assumptions on how players update their beliefs after self-recognition in two ways. First, we no longer assume that reassessed beliefs must correspond to players’ true type \(\Theta \) and instead allow a belief to be partially reassessed to a convex combination of \(\Theta \) and the incorrect prior \(\widehat{\Theta }\). Second, we relax Assumption 2, which captured the essence of sequential reassessment in stating that a belief can only be reassessed if its corresponding lower-order beliefs have been reassessed. That is, putting these two generalizations together, we now allow \(\Theta ^{n}_{ij}=\widehat{\Theta }\) and \(\Theta ^{n'}_{ij}=\Theta ^*\equiv (1{}-{}\lambda )\Theta {}+{}\lambda \widehat{\Theta }\) for some \(i,j{}\in {}\{A,B\}\), \(\lambda {}\in {}[0,1)\), and \(n'{}>{}n\).

The (effective) depths of a player’s reassessments are now given by:

$$\begin{aligned} d^*(i,j)=\left\{ \begin{array}{ll} \max \{n:\Theta ^n_{ij}=\Theta ^*\},&{}\quad \text {if }\Theta ^n_{ij}=\Theta ^*\ \text {for some}\ n\ge 1,\\ 0,&{}\quad \text {if }\Theta ^n_{ij}=\widehat{\Theta }\ \text {for all}\ n\ge 1. \end{array}\right. \end{aligned}$$
(7)

Here, we presume \(d^*(i,j)<\infty \) so that we can avoid having to work out arbitrarily complicated yet non-instructive cases (such as \(\Theta ^n_{ij}=\Theta ^*\) if and only if n is a prime number).
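
To fix ideas, the following minimal sketch (ours; the list encoding and names are illustrative rather than the paper’s notation) computes the effective depth in (7) from a finite list of order-n beliefs:

    def effective_depth(belief_orders):
        # Definition (7): d*(i,j) is the highest order n at which the order-n
        # belief equals the (partially) reassessed type Theta*, and 0 if no
        # order was reassessed. A finite list reflects d*(i,j) < infinity.
        reassessed = [n for n, b in enumerate(belief_orders, start=1) if b == "Theta*"]
        return max(reassessed) if reassessed else 0

    # Non-sequential reassessments are allowed: here only order 2 is reassessed.
    assert effective_depth(["Theta_hat", "Theta*", "Theta_hat"]) == 2
    assert effective_depth(["Theta_hat", "Theta_hat"]) == 0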

To proceed, we let \(\tilde{d}^*(j,k)\) denote player i’s perception of \(d^*(j,k)\), for \(j\ne i\) and \(i,j,k{}\in {}\{A,B\}\). Using (7), we can readily verify \(\tilde{d}^*(j,j)= d^*(i,j)\) and \(\tilde{d}^*(j,i)=\max \{d^*(i,i){}-{}1,0\}\). Maintaining \(j{}\ne {}i\), we also define \(f(n_i,n_j){}\equiv {}\{(\tilde{d}^*(j,j),\tilde{d}^*(j,i)){}:{}d^*(i,i)= n_i,d^*(i,j)= n_j\}\) and \(K^*(n_i,n_j)=\min \{K:f^{(K)}(n_i,n_j)=(0,0)\}\).

We already know that player i has zero reassessment and thus follows their instinct if \(K^*(d^*(i,i),d^*(i,j))= 1\). Next, since choosing the opposite of one’s partner’s perceived strategy at \(t^*\) is the perceived best response, and \(K^*(d^*(i,i),d^*(i,j))= K^*(\tilde{d}^*(j,j),\tilde{d}^*(j,i)){}+{}1\), player i’s strategy at \(t^*\) with \(K^*(d^*(i,i),d^*(i,j))= n\) is the opposite of player i’s strategy with \(K^*(d^*(i,i),d^*(i,j))= n{}-{}1\). By induction, player i will therefore follow players’ instinct if \(K^*(d^*(i,i),d^*(i,j))\) is odd and override players’ instinct if \(K^*(d^*(i,i),d^*(i,j))\) is even. Now \(K^*(d^*(i,i),d^*(i,j))\) is odd if and only if \(d^*(i,i){}>{}d^*(i,j)\) and \(K^*(d^*(i,i),d^*(i,j))\) is even if and only if \(d^*(i,i){}\le {}d^*(i,j)\). Hence, under our two generalizations as to how players update their beliefs after self-recognition, a strictly inward (effective) reassessment in the sense of \(d^*(i,i)>d^*(i,j)\) still leads player i to follow their instinct, while a weakly outward reassessment in the sense of \(d^*(i,i){}\le {}d^*(i,j)\) still leads player i to override their instinct. Thus, Proposition 3 is qualitatively robust to our generalizations. It is straightforward to confirm that our other main results (Sect. 4) follow in turn.
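
The parity argument above is mechanical enough to check directly. Here is a small sketch (ours; the function names are illustrative) that iterates the perception map f until it reaches (0, 0) and reads the decision off the parity of K*:

    def f(n_i, n_j):
        # Player i with effective depths (n_i, n_j) perceives player j's
        # effective depths as (d~*(j,j), d~*(j,i)) = (n_j, max(n_i - 1, 0)).
        return n_j, max(n_i - 1, 0)

    def k_star(n_i, n_j):
        # K*(n_i, n_j) = min{K : f^(K)(n_i, n_j) = (0, 0)}.
        k = 0
        while True:
            n_i, n_j = f(n_i, n_j)
            k += 1
            if (n_i, n_j) == (0, 0):
                return k

    def follows_instinct(d_own, d_partner):
        # Odd K* <=> follow instinct (zero reassessment or strictly inward);
        # even K* <=> override (weakly outward reassessment).
        return k_star(d_own, d_partner) % 2 == 1

    assert follows_instinct(3, 1)        # strictly inward: follow
    assert not follows_instinct(2, 2)    # weakly outward: override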

A.2 Further generalizations of non-degenerate beliefs

In this appendix, we consider a more general formulation of non-degenerate beliefs that, unlike the formulation presented in Sect. 6, allows a player with non-degenerate beliefs to believe that their partner may hold non-degenerate beliefs as well. To express this generalization, first let \(\sigma ^1_i=\sigma _i\) (with \(\sigma _i\) defined as in Sect. 6). Next, suppose player i’s beliefs may now be defined by a probability measure \(\sigma ^{\ell }_i\) with \(\ell >1\), where \(\sigma ^{\ell }_i(\sigma ^{\ell -1}_j)\) is the probability that player i assigns to the possibility that player j has non-degenerate beliefs given by \(\sigma ^{\ell -1}_j\).

Proposition 9

Under \(\sigma ^{\ell }_i\) with \(\ell >1,\) let:

$$\begin{aligned} \pi ^{\textsc {ul}}&\equiv \left\{ \begin{array}{ll} \rho ^*,&{}\quad \rho ^*<\widehat{\rho }^{\,*},\\ 1-\rho ^*,&{}\quad \rho ^*>\widehat{\rho }^{\,*},\end{array}\right. \\ \pi (\sigma ^{\ell }_i)&\equiv \left\{ \begin{array}{ll} \sum \nolimits _{\sigma ^{\ell -1}_j}\sigma ^{\ell }_i(\sigma ^{\ell -1}_j)\cdot \mathrm{I}[\rho ^{\textsc {l}}_j<\rho ^*<\rho ^{\textsc {h}}_j\,|\,\sigma ^{\ell -1}_j],&{}\quad \ell =2,\\ \sum \nolimits _{\sigma ^{\ell -1}_j}\sigma ^{\ell }_i(\sigma ^{\ell -1}_j)\cdot \mathrm{I}[\pi (\sigma ^{\ell -1}_j)<\pi ^{\textsc {ul}}],&{}\quad \ell >2. \end{array}\right. \end{aligned}$$

Then player i follows their instinct if \(\pi (\sigma ^{\ell }_i){}>{}\pi ^{\textsc {ul}}\) and overrides their instinct if \(\pi (\sigma ^{\ell }_i){}<{}\pi ^{\textsc {ul}}\). Thus, with \(\sigma ^{\ell _A}_A\) representing player A’s beliefs, \(\sigma ^{\ell _B}_B\) representing player B’s beliefs, and \(\ell _A,\ell _B>1,\) the team achieves the first-best equilibrium if \(\min _{i\in \{A,B\}}\{\pi (\sigma ^{\ell _i}_i)\}{}<{}\pi ^{\textsc {ul}}{}<{}\max _{i\in \{A,B\}}\{\pi (\sigma ^{\ell _i}_i)\}\).

To aid our understanding of Proposition 9, we may first note that \(\pi ^{\textsc {ul}}\) as defined above represents the probability that a player with unlimited reassessment will, when mixing at \(t^*\), choose the action that is the opposite of their instinct. Next recall that, without loss of generality, Alice’s perceived best response is to select the opposite of Bob’s perceived strategy at \(t^*\), but she would be indifferent if she perceives Bob as shirking and working with the same probabilities as a player with unlimited reassessment. Thus, Alice’s perceived best response is to follow her instinct if she believes the probability of Bob overriding his instinct is greater than \(\pi ^{\textsc {ul}}\), and to override her instinct if she believes the probability of Bob overriding his instinct is less than \(\pi ^{\textsc {ul}}\). Indeed, this is what Proposition 9 tells us, where \(\pi (\sigma ^\ell _A)\) is the probability that Alice with non-degenerate beliefs given by \(\sigma ^\ell _A\) assigns to the possibility that Bob overrides his instinct.
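
The recursion in Proposition 9 is also straightforward to mechanize. In the sketch below (ours; the encoding of beliefs as lists of (partner-belief, probability) pairs and all numerical values are illustrative assumptions), a level-2 belief lists the bounds \((\rho ^{\textsc {l}}_j,\rho ^{\textsc {h}}_j)\) attributed to the partner, and higher levels nest recursively:

    def pi_ul(rho_star, rho_hat):
        # Probability that an unlimited-reassessment player overrides their instinct.
        return rho_star if rho_star < rho_hat else 1.0 - rho_star

    def pi_of(belief, level, rho_star, rho_hat):
        # pi(sigma^l_i): perceived probability the partner overrides their instinct.
        if level == 2:
            return sum(p for (lo, hi), p in belief if lo < rho_star < hi)
        cutoff = pi_ul(rho_star, rho_hat)
        return sum(p for b, p in belief
                   if pi_of(b, level - 1, rho_star, rho_hat) < cutoff)

    rho_star, rho_hat = 0.4, 0.7                       # illustrative values
    sigma2_A = [((0.3, 0.5), 0.2), ((0.5, 0.6), 0.8)]  # Alice's level-2 belief
    sigma2_B = [((0.2, 0.9), 0.9), ((0.5, 0.6), 0.1)]  # Bob's level-2 belief
    pis = [pi_of(s, 2, rho_star, rho_hat) for s in (sigma2_A, sigma2_B)]
    assert min(pis) < pi_ul(rho_star, rho_hat) < max(pis)  # first-best achieved here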

With degenerate beliefs, we saw that Alice’s reassessment was weakly outward if she perceived Bob as having a strictly inward reassessment, and vice versa. Taking this a step further, if Alice believes that Bob believes that her reassessment is weakly outward (strictly inward), then her reassessment is indeed weakly outward (strictly inward). Similarly, in our initial consideration of non-degenerate beliefs in Sect. 6 with \(\ell =1\), we described Alice’s “effective” reassessment as weakly outward (strictly inward) if she perceived Bob as more likely to believe her reassessment is weakly outward (strictly inward) than strictly inward (weakly outward). With \(\ell =2\), we can similarly describe Alice’s effective reassessment in terms of her belief regarding Bob’s belief of her (effective) reassessment. By induction, we can do the same for any \(\ell >1\).

Furthermore, just as Alice is effectively weakly outward (strictly inward) if Bob is effectively strictly inward (weakly outward), her strategy at \(t^*\) will also oppose her “overall” belief of Bob’s strategy. In this vein, the previously-discussed link between the direction of a player’s effective reassessment and their strategy at \(t^*\) is maintained with \(\ell =1,2,\ldots ,\) with the caveat that the original relationships established with degenerate beliefs in Proposition 3 may (as we saw in Sect. 6) not hold if a player’s effective reassessment only has a slight lean in either direction or if players’ instincts are insufficiently strong.

Appendix B: Potential model reformulations

In Sect. 2.5, we saw how our model could be reformulated as an adaptation of a conditional probability system (CPS) or a lexicographic probability system (LPS). Building on that exercise, this appendix considers other potential ways in which our analysis of sequential reassessment after self-recognition could have been recast within a more conventional modeling framework. In doing so, our discussion makes clear that the approach pursued in this paper was, in a sense, just one of several options. With that said, the discussion will also clarify why we decided to model sequential reassessment in response to self-recognition instead of recasting our analysis within an existing framework.

B.1 A prior belief representation without self-recognition

While reformulating sequential reassessment as either a CPS or LPS would have allowed us to cast our analysis within a more conventional modeling framework, an even more conventional approach would be to simply assume that Alice’s decision-relevant beliefs were the beliefs she held all along. Namely, instead of the prior beliefs in (4), we could have assumed

$$\begin{aligned} {\Pr }_0[\Theta ^n_{Aj}=\Theta ]=\mathbb {I}[n\le d(A,j)],\quad {\Pr }_0[\Theta ^n_{Aj}=\widehat{\Theta }]=\mathbb {I}[n>d(A,j)]. \end{aligned}$$
(8)

Here, d(AA) and d(AB) now describe Alice’s prior beliefs. In fact, d(AA) and d(AB) would still describe Alice’s decision-relevant beliefs too because her prior beliefs under (8), while not necessarily correct, would not be contradicted by “recognition” of her true type \(\Theta \). That is, Alice already knew her type and thus would not experience self-recognition.
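
For concreteness, (8) admits a one-line encoding (ours, with the hierarchy truncated at a finite order purely for illustration):

    def prior_prob_true_type(n, d):
        # Pr0[Theta^n_{Aj} = Theta] under (8): the order-n belief is the true
        # type up to depth d(A, j) and the incorrect prior beyond it.
        return 1.0 if n <= d else 0.0

    # e.g. with d(A, A) = 2: orders 1 and 2 are Theta, all higher orders Theta_hat.
    assert [prior_prob_true_type(n, 2) for n in (1, 2, 3, 4)] == [1.0, 1.0, 0.0, 0.0]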

Under this prior belief representation, our analysis relating d(AA) and d(AB) to Alice’s equilibrium behavior would then naturally be re-interpreted purely as a story relating higher-order beliefs to behavior, with no role for learning or belief revision more broadly. While such a reformulation would have certainly streamlined our model, the issues of interpretation and motivation considered for a CPS or LPS adaptation would arguably be exacerbated.

To begin, we would still lack a natural basis for the implication of Assumption 2 that Alice’s beliefs regarding each player’s type correspond to the true type \(\Theta \) up to a given order and to \(\widehat{\Theta }\) for beliefs of higher orders. It would also be more of a stretch to informally invoke sequential reassessment to help motivate the particular form of Alice’s beliefs in (8) because, without self-recognition, there would be no basis for belief revision. In addition, with correct prior beliefs regarding her own type (and possibly Bob’s too), our previous interpretation of \(\widehat{\Theta }\) as an incorrect prior would no longer apply, making it less obvious where \(\widehat{\Theta }\) comes from or what it is supposed to represent. With all that said, our present consideration of the prior belief representation in (8) does help to clarify that our analysis is not about belief-updating in the conventional sense as occurring during game play in response to others’ actions. It also helps in highlighting that, in our analysis, the behavioral impact of sequential reassessment arises entirely through the higher-order beliefs that it generates.

B.2 A static volunteer’s dilemma

Whether in our initial formulation of sequential reassessment, in the CPS and LPS adaptations to self-recognition in (5) and (6), or under the prior belief representation in (8), Alice’s beliefs are effectively fixed once game play begins. Furthermore, under any of these behaviorally-equivalent representations, we only need knowledge of players’ actions at the critical period \(t^*\) to infer whether or not the team achieves the first-best equilibrium outcome (see Proposition 4). In light of these two properties—fixed beliefs and effectively just one period of non-trivial play—a dynamic model may seem unnecessary.

If we instead considered a static model (in effect, taking \(T=1\)), two of the four payoff parameters in our model—the present bias \(\beta \) and the standard discount factor \(\delta \)—would naturally fall out. The interpretation of the game may also change. In particular, in the non-trivial case with \(T=N=1\), the payoff matrix would be equivalent to that of a two-player Volunteer’s Dilemma (VD). In a VD, players simultaneously choose whether or not to take a costly action (analogous to working) to provide a public good for which a second volunteer would be redundant (e.g. witnesses to a crime-in-progress deciding whether or not to call the police). Certainly this static version of the model could still be described in terms of team production, but the interpretation as a VD would have a precedent. With that said, diluting the application to teamwork is not necessarily problematic (after all, here it is merely conceived as a setting that allows an initial exploration of our more general ideas).
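
To make the VD connection concrete, the payoff matrix in this non-trivial \(T=N=1\) case would be as follows (our sketch, assuming the reward \(\phi \) accrues to both players upon completion while each player who works pays the effort cost c, with \(\phi >c\)):

$$\begin{array}{c|cc} & \text {Work} & \text {Shirk}\\ \hline \text {Work} & \phi -c,\ \phi -c & \phi -c,\ \phi \\ \text {Shirk} & \phi ,\ \phi -c & 0,\ 0 \end{array}$$

As in any VD, working is then a best response to shirking, shirking is a best response to working, and mutual shirking (no volunteer) leaves both players with nothing.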

While a static version of our model with any of the belief reformulations in (5), (6), or (8) would still arguably lack a natural motivation for why beliefs would take the form implied by Assumption 2, sequential reassessment after self-recognition could still exist in a static model. Even in that case, our notion of self-recognition would become less connected to the traditional notion of naive present bias (O’Donoghue and Rabin 1999) since \(\beta \) would fall out of our model. Although present bias was not our main focus, the link to naive present bias did offer an established precedent for self-recognition in the form of incorrect prior beliefs regarding \(\beta \), which arguably makes it more natural to then consider self-recognition with respect to the other model parameters.

Besides these issues of motivation and interpretation, our original dynamic formulation of sequential reassessment in response to self-recognition may provide a useful building block for future research. It is not clear that potential future extensions of our framework could still equivalently be recast in terms of the more traditional formulations considered in this appendix or in terms of a CPS or LPS, as considered in Sect. 2.5. So, while this modeling approach was not strictly necessary for our present analysis, we hope that formalizing sequential reassessment after self-recognition in a dynamic model can facilitate fruitful generalizations of these ideas.

Appendix C: Proofs

C.1 Proof of Lemma 1

Since \(\rho ^*<1\) implies \(\beta \delta ^{T-t^*}\phi >c+\sum _{k=1}^{T-t^*}\beta \delta ^k c\), which implies \(\beta \delta ^T\phi >\delta ^{t^*}c+\beta \sum _{t=t^*+1}^{T}\delta ^t c>\beta \sum _{t=t^*}^{T}\delta ^t c\), the discounted, time-zero value of the reward for completing the project exceeds the discounted costs in the proposed first-best equilibrium, even for the player(s) who work at \(t^*\). Thus, any equilibrium in which the team completes fewer than N tasks cannot be efficient. Any equilibrium in which the team completes more than N tasks cannot be efficient because switching one player’s strategy from working to shirking in a single period will, all else equal, strictly Pareto dominate the original equilibrium. Lastly, among all candidate equilibria in which the team completes exactly N tasks, an equilibrium in which effort is maximally backloaded (as in the proposed first-best equilibrium) minimizes the players’ present discounted effort costs, implying it must be efficient. \(\blacksquare \)
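
The inequality chain is also easy to verify numerically; the parameter values below are our illustrative assumptions, chosen so that \(\rho ^*<1\) holds:

    beta, delta, phi, c = 0.8, 0.95, 10.0, 1.0   # illustrative parameter values
    T, t_star = 6, 3
    # rho* < 1: the period-t* reward net of all remaining effort costs is positive.
    assert beta * delta**(T - t_star) * phi > c + sum(
        beta * delta**k * c for k in range(1, T - t_star + 1))
    # Discounting both sides to time zero preserves the ranking.
    lhs = beta * delta**T * phi
    mid = delta**t_star * c + beta * sum(delta**t * c for t in range(t_star + 1, T + 1))
    low = beta * sum(delta**t * c for t in range(t_star, T + 1))
    assert lhs > mid > low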

C.2 Proof of Proposition 1

If the team has completed \(N-2\) tasks prior to period T, working is a best-response to working by one’s partner because \(\phi >c\) (as implied by \(\rho ^*<1\)). Since net payoffs are non-positive with any other strategy profile, mutual working at T is thus the Pareto-dominant subgame equilibrium, implying its selection. Similarly, if the team has completed \(N-2(k+1)\) tasks prior to period \(T-k\ge t^*\), and mutual working is the continuation equilibrium after \(T-k\) if both players work, then working at \(T-k\) is again a best-response to working by one’s partner because \(\beta \delta ^k\phi -c-\sum _{k'=1}^{k}\beta \delta ^{k'}c>0\) (as implied by \(\rho ^*<1\)), while the present value of a player’s net payoff cannot be positive after shirking since completing the project would no longer be feasible. Since players’ net payoffs are non-positive with any other strategy profile, mutual working at \(T-k\) is thus the Pareto-dominant subgame equilibrium, implying its selection. By induction, with N even, both players will therefore work in all periods from \(t^*\) to T, provided no tasks have been completed before \(t^*\).

At this point, we only need to establish that, given this period-\(t^*\) continuation equilibrium, shirking at each period before \(t^*\) is indeed a best-response to shirking by one’s partner. Suppose, to the contrary, that working is a best-response to shirking at some \(t<t^*\). Then the expected net discounted payoff for player i (without loss of generality) who works at \(t<t^*\) while player \(j\ne i\) shirks must exceed the net discounted payoff associated with mutual shirking in all periods up to \(t^*\), i.e. \(\beta \delta ^{T-t}\phi -\beta \sum _{t'=t^*}^{T}\delta ^{t'-t}c\). This can only be true if working at \(t<t^*\) decreases the expected total number of tasks completed by player i, which in turn can only be true if the expected total number of tasks completed by player j increases, in which case player j’s net discounted payoff must decrease. However, player j can maintain the strategy of shirking at all \(t'<t^*\) and working at all \(t'\ge t^*\), which is guaranteed (regardless of player i’s choices between t and \(t^*\)) to result in the team completing the project because, based on our work above, both players will work whenever completion requires work from both, while player j will work if and when work from only one player is required for the project to remain feasible. Thus, player j will not complete more tasks as a result of player i working at t, a contradiction, which implies that both players shirk at each period before \(t^*\). \(\blacksquare \)

C.3 Proof of Proposition 2

If the team has completed \(N-1\) tasks prior to T (as would be the case if both players shirked at all \(t<t^*\) and worked at all t with \(t^*\le t\le T-1\)), working is a best-response to shirking by one’s partner since \(\phi >c\), while shirking is a best-response to working since \(\phi >\phi -c\). Thus, there is no symmetric pure-strategy subgame equilibrium. We can then see that there is a symmetric mixed-strategy equilibrium in which players shirk with probability \(\rho _T(\Theta )=\frac{c}{\phi }\) and work with probability \(1-\rho _T(\Theta )=\frac{\phi -c}{\phi }\), as the respective expected payoffs from working and from shirking are then \(\phi -c\) and \((1-\rho _T(\Theta ))\phi =\phi -c\) as well.

Next, suppose the team has completed \(N-2k-1\) tasks prior to \(T-k\), with \(k\le T-t^*\) (as would be the case if both players shirked at all \(t<t^*\) and worked at all t with \(t^*\le t<T-k\)), and that players shirk with probability \(\rho _{T-k'}(\Theta )\) and work with probability \(1-\rho _{T-k'}(\Theta )\) in the period \(T-k'\) subgame, \(0\le k'<k\), given the team has completed \(N-2k'-1\) tasks prior to period \(T-k'\) (as would be the case if both players shirked at all \(t<t^*\) and worked at all t with \(t^*\le t<T-k'\)). Then working at \(T-k\) is a best-response to shirking by one’s partner since \(\beta \delta ^k\phi -c-\sum _{k'=1}^{k}\beta \delta ^{k'}c>0\) given \(\rho ^*<1\), while shirking is a best-response to working since \(\beta \delta ^k\phi -\sum _{k'=1}^{k}\beta \delta ^{k'}c>\beta \delta ^k\phi -c-\sum _{k'=1}^{k}\beta \delta ^{k'}c>0\). Thus, there is no symmetric pure-strategy equilibrium in this subgame. Next, observe that the continuation equilibrium after mutual shirking at \(T-k\) entails mutual shirking at every subsequent period because the project would no longer be feasible, while the continuation equilibrium after one player works and the other shirks entails both players working at every subsequent period, as the subgame at \(T-k+1\) is then isomorphic to a game with a project featuring 2k tasks and k periods to complete it (Proposition 1). We can then see that there is a symmetric mixed-strategy equilibrium at \(T-k\) in which players shirk with probability \(\rho _{T-k}(\Theta )=c\bigl (\beta \delta ^k\phi -\beta \sum _{k'=1}^k\delta ^{k'}c\bigr )^{-1}\) and work with probability \(1-\rho _{T-k}(\Theta )\), as the respective expected payoffs from working and shirking are then \(\beta \delta ^k\phi -c-\beta \sum _{k'=1}^k\delta ^{k'}c>0\) and \((1-\rho _{T-k}(\Theta ))\bigl (\beta \delta ^k\phi -\beta \sum _{k'=1}^k\delta ^{k'}c\bigr )=\beta \delta ^k\phi -c-\beta \sum _{k'=1}^k\delta ^{k'}c\) as well.
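
The indifference condition behind \(\rho _{T-k}(\Theta )\) can be checked numerically (our illustrative parameter values):

    beta, delta, phi, c, k = 0.8, 0.95, 10.0, 1.0, 2   # illustrative values
    # Value at stake at T-k if the project is completed with all remaining effort.
    stake = beta * delta**k * phi - beta * sum(delta**j * c for j in range(1, k + 1))
    rho = c / stake                     # rho_{T-k}(Theta)
    payoff_work = stake - c
    payoff_shirk = (1 - rho) * stake    # partner works with probability 1 - rho
    assert 0 < rho < 1
    assert abs(payoff_work - payoff_shirk) < 1e-12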

At this point, we only need to show that, given this continuation equilibrium at \(t^*\), shirking at each period before \(t^*\) is a best-response to shirking by one’s partner. If player i works at some \(t<t^*\) while player \(j\ne i\) shirks, the continuation subgame at \(t+1\) is isomorphic to a game in which players have \(T-t\) periods to complete a project with \(N-1\) tasks. Since \(N-1\) is even, we know from Proposition 1 that players would shirk at each period up to (and including) \(t^*\) and work at each period beginning with \(t^*+1\), so that player i’s net discounted payoff would be \(\beta \delta ^{T-t}\phi -c-\beta \sum _{t'=t^*+1}^{T}\delta ^{t'-t}c\). Since the net discounted payoff for the proposed equilibrium in which both players shirk at all \(t<t^*\) is \(\beta \delta ^{T-t}\phi -\beta \sum _{t'=t^*}^{T}\delta ^{t'-t}c\) and \(\bigl (\beta \delta ^{T-t}\phi -c-\beta \sum _{t'=t^*+1}^{T}\delta ^{t'-t}c\bigr )-\bigl (\beta \delta ^{T-t}\phi -\beta \sum _{t'=t^*}^{T}\delta ^{t'-t}c\bigr )=\beta \delta ^{t^*-t}c-c<0\), working at \(t<t^*\) cannot be a best response to shirking by player j. \(\blacksquare \)

C.4 Proof of Lemma 2

A player with zero reassessment believes their partner will shirk with probability \(\widehat{\rho }^{\,*}\) at period \(t^*\) and work with probability \(1-\widehat{\rho }^{\,*}\), as described by the mixed strategy in Proposition 2, except under \(\widehat{\Theta }\) instead of \(\Theta \). The player’s perceived expected net discounted payoff is thus \(\beta \delta ^{T-t^*}\phi -c-\beta \sum _{t=t^*+1}^T\delta ^{t-t^*}c\) from working at \(t^*\) and \((1-\widehat{\rho }^{\,*})\bigl (\beta \delta ^{T-t^*}\phi -\beta \sum _{t=t^*+1}^T\delta ^{t-t^*}c\bigr )\) from shirking. Since \(\beta \delta ^{T-t^*}\phi -c-\beta \sum _{t=t^*+1}^T\delta ^{t-t^*}c=(1-\rho ^*)\bigl (\beta \delta ^{T-t^*}\phi -\beta \sum _{t=t^*+1}^T\delta ^{t-t^*}c\bigr )\) and \(\beta \delta ^{T-t^*}\phi -\beta \sum _{t=t^*+1}^T\delta ^{t-t^*}c>0\), the perceived expected net discounted payoff is higher when working at \(t^*\) if \(\rho ^*<\widehat{\rho }^{\,*}\), but is higher when shirking if \(\rho ^*>\widehat{\rho }^{\,*}\). \(\blacksquare \)
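
A quick numeric check of Lemma 2 (our illustrative parameter values): against a partner perceived to shirk with probability \(\widehat{\rho }^{\,*}\), working at \(t^*\) beats shirking exactly when \(\rho ^*<\widehat{\rho }^{\,*}\):

    beta, delta, phi, c = 0.8, 0.95, 10.0, 1.0   # illustrative values
    T, t_star = 6, 3
    stake = beta * delta**(T - t_star) * phi - beta * sum(
        delta**k * c for k in range(1, T - t_star + 1))
    rho_star = c / stake
    for rho_hat in (0.1, 0.9):
        payoff_work = stake - c
        payoff_shirk = (1 - rho_hat) * stake
        assert (payoff_work > payoff_shirk) == (rho_star < rho_hat)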

C.5 Proof of Proposition 3

Parts (i) and (ii). We proceed by induction. Note that, by definition, a player follows their instinct under zero reassessment. Taking any \(n=1,2,\ldots ,\) we want to establish:

(a) If player i follows their instinct with \(d(i,j)= m\) and \(d(i,i)= n\) for all \(m{}<{}n\), then player i overrides their instinct with \(d(i,j)= n\) and \(d(i,i)= m{}+{}1\) for all \(m{}<{}n\).

(b) If player i overrides their instinct with \(d(i,j)= n\) and \(d(i,i)= m\) for all \(m{}\le {}n\), then player i follows their instinct with \(d(i,j)= m\) and \(d(i,i)= n{}+{}1\) for all \(m{}\le {}n\).

To establish (a), let \(\tilde{d}(j,k)\) denote player i’s belief of d(jk) for \(i{}\ne {}j\) and \(k{}\in {}\{A,B\}\). Observe \(d(i,j)= n\) and \(d(i,i)= m{}+{}1\) imply \(\tilde{d}(j,i)= m\) and \(\tilde{d}(j,j)= n\). By assumption, player i believes player j follows their instinct given \(m{}\le {}n{}-{}1\). Letting \(v=\beta \delta ^{T-t^*}\phi {}-{}c{}-{}\beta \sum _{k=1}^{T-t^*}\delta ^k c{}>{}0\) denote the payoff from \(t^*\) if the project is completed with effort in all remaining periods, if players’ instinct is to work, player i’s perceived continuation payoffs at \(t^*\) are v from working and \(v{}+{}c\) from shirking; if players’ instinct is to shirk, player i’s perceived continuation payoffs are v from working and \(0{}<{}v\) from shirking. Player i (who has weakly outward reassessment) therefore overrides their instinct since, in both cases, it yields a strictly higher perceived continuation payoff than following their instinct.

For (b), first note that \(d(i,j)=m\) and \(d(i,i)=n+1\) imply \(\tilde{d}(j,i)=n\) and \(\tilde{d}(j,j)=m\). By assumption, player i believes that player \(j\ne i\) overrides their instinct given \(m\le n\). If players’ instinct is to work, player i’s perceived continuation payoffs at \(t^*\) are v from working and \(0<v\) from shirking; if players’ instinct is to shirk, player i’s perceived continuation payoffs are v from working and \(v+c>v\) from shirking. Player i (who has strictly inward reassessment) therefore follows their instinct since, in both cases, doing so yields a strictly higher perceived continuation payoff than from overriding their instinct.

Now if \(d(i,j)=0\) and \(d(i,i)=n+1\), player i believes that player j’s type is \(\widehat{\Theta }\) where \(\tilde{d}(j,i)=n\) and \(\tilde{d}(j,j)=0\) (although a hypothetical type-\(\widehat{\Theta }\) player’s beliefs would presumably not be based on reassessments triggered by self-recognition, d and \(\tilde{d}\) can be defined as before). By assumption, player i overrides their instinct if \(d(i,j)=n\) and \(d(i,i)=1\). Therefore, if player i with \(d(i,j)=n\) and \(d(i,i)=1\) believes that player j will work at \(t^*\) with probability p, then \(v\gtrless p\cdot v+c\) if players’ instinct is to shirk/work. Note that, if \(\tilde{d}(j,i)=n\) and \(\tilde{d}(j,j)=0\), player j’s continuation payoffs as perceived by player i are \(\widehat{v}=\widehat{\beta }\widehat{\delta }^{T-t^*}\widehat{\phi }-\widehat{c}-\widehat{\beta }\sum _{k=1}^{T-t^*}\widehat{\delta }^k \widehat{c}\) from working and \(p\cdot \widehat{v}+\widehat{c}\) from shirking. It then follows that, given players’ instinct is to work if \(\rho ^*<\widehat{\rho }^{\,*}\) and to shirk if \(\rho ^*>\widehat{\rho }^{\,*}\) (Lemma 2), player i expects player j to override their instinct, implying player i follows their instinct.

Since a player follows their instinct under zero reassessment (by definition), conditions (a) and (b) prove that a player follows their instinct under a strictly inward reassessment and overrides their instinct under a weakly outward reassessment. \(\blacksquare \)

Part (iii). From Proposition 2, players mix in the event that it is common knowledge that both players’ types are given by \(\Theta \). Since a player with unlimited reassessment believes that it is common knowledge that both players’ types are given by \(\Theta \), such a player mixes at \(t^*\). \(\blacksquare \)

C.6 Proof of Proposition 4

Follows from Proposition 3 along with Lemma 1. \(\blacksquare \)

C.7 Proof of Corollary 1

If both players have zero reassessment, then neither player has a strictly outward reassessment, which implies the team cannot be efficient from Proposition 4. If at least one player has unlimited reassessment, then that player does not have a strictly inward or a weakly outward reassessment, which implies the team cannot be efficient from Proposition 4. \(\blacksquare \)

C.8 Proof of Proposition 5

Part (i). Following the logic in the proof of Proposition 3, a player’s perceived best response is to choose the opposite of their partner’s perceived strategy at \(t^*\). Since a fully-perceptive player’s beliefs regarding their partner’s beliefs are correct, and a player’s strategy at \(t^*\) is fully determined by their beliefs, a fully-perceptive player works at \(t^*\) if and only if their partner shirks, and shirks if and only if their partner works. \(\blacksquare \)

Part (ii). If player i is self-perceptive, then player i does not experience self-recognition. Thus, player i’s prior beliefs regarding the type of player \(j\ne i\) are maintained: \(d(i,j)=0\). Since \(d(i,i)\ge 1\) (in effect) by virtue of the fact that player i knows their type, player i’s beliefs are equivalent to those of a non-perceptive player with a strictly inward reassessment. From part (i) of Proposition 3, player i must then follow their instinct. \(\blacksquare \)

Part (iii). Since player i is other-perceptive, \(d(i,j)=d(j,j)\) for \(j\ne i\). If \(d(j,j)\ge d(i,i)<\infty \), then \(d(i,i)\le d(i,j)\), implying player i’s effective reassessment is weakly outward, which in turn implies from part (ii) of Proposition 3 that player i overrides their instinct. If \(d(j,j)<d(i,i)\), then \(d(i,i)>d(i,j)\), implying player i’s effective reassessment is strictly inward, which implies from part (i) of Proposition 3 that player i follows their instinct. If \(d(i,i)=d(j,j)=\infty \), then \(d(i,i)=d(i,j)=\infty \), implying player i’s effective reassessment is infinite, which in turn implies from part (iii) of Proposition 3 that player i mixes at \(t^*\). \(\blacksquare \)

C.9 Proof of Proposition 6

First note that, if player i (without loss of generality) is self-perceptive and player \(j\ne i\) is other-perceptive with \(d(j,j)=\infty \), then from Proposition 5 the former player does not override their instinct while the latter player follows their instinct. Therefore, it cannot be the case that one player works at \(t^*\) while the other shirks, implying the team is inefficient.

To show that alternate compositions are efficient, there are four potential cases to consider:

(a) Both players are other-perceptive with \(d(i,i)\ne d(j,j)\) for \(j\ne i\), in which case \(d(i,j)=d(j,j)\) and \(d(j,i)=d(i,i)\). Taking \(d(i,i)>d(j,j)\), without loss of generality, then \(d(i,i)>d(j,j)=d(i,j)\), implying player i’s effective reassessment is strictly inward. In turn, \(d(j,j)<d(i,i)=d(j,i)\), implying player j’s effective reassessment is strictly outward. Thus, player i follows their instinct while player j overrides their instinct, which ensures efficiency.

(b) Player i is other-perceptive and player \(j\ne i\) is fully perceptive, in which case \(d(i,j)=d(j,j)=\infty \) (i.e. it is common knowledge that player j’s type is \(\Theta \)) along with \(d(j,i)=d(i,i)\). Since players would have symmetric beliefs if \(d(i,i)=\infty \), \(d(i,i)<\infty =d(i,j)\) must hold. Therefore, player i’s effective reassessment is strictly outward, implying player i overrides their instinct from Proposition 3, while player j follows their instinct from Proposition 5. Hence, one player works at \(t^*\) while the other shirks, ensuring the team is efficient.

(c) Player i is self-perceptive and player \(j\ne i\) is fully perceptive, in which case \(d(i,i)=d(j,i)=\infty \) (i.e. it is common knowledge that player i’s type is \(\Theta \)) along with \(d(j,j)=d(i,j)+1\). Since players would have symmetric beliefs if \(d(i,j)=\infty \), \(d(i,j)<\infty =d(i,i)\) must hold. Therefore, player i’s effective reassessment is strictly inward, implying player i follows their instinct from Proposition 3. In turn, we have \(d(j,i)=\infty >d(i,j)+1=d(j,j)\), implying player j’s effective reassessment is strictly outward. Thus, player j overrides their instinct from Proposition 3. Consequently, one player must work at \(t^*\) while the other player shirks, which ensures the team is efficient.

(d) Player i is self-perceptive and player \(j\ne i\) is other-perceptive with \(d(j,j)<\infty \), in which case \(d(i,i)=d(j,i)=\infty \) (i.e. it is common knowledge that player i’s type is \(\Theta \)). From Proposition 5, player i follows their instinct, while player j overrides their instinct since \(d(j,j)<\infty =d(i,i)\). Consequently, one player must work at \(t^*\) while the other player shirks, which ensures the team is efficient.

Note that, since a self-perceptive player does not reassess their beliefs regarding their partner’s type, a team with two self-perceptive players must have symmetric beliefs with \(d(i,i)=d(j,j)=1\) and \(d(i,j)=d(j,i)=0\) for \(j\ne i\). Therefore, the possibility of a team with two self-perceptive players did not need to be considered in this proof. \(\blacksquare \)

C.10 Proof of Proposition 7

Part (i). From Proposition 3, player j does not mix at \(t^*\). From Proposition 5, player i must choose the opposite strategy of player j, ensuring the team is efficient. \(\blacksquare \)

Part (ii). From Proposition 5, player i follows their instinct. From Proposition 3, player j overrides their instinct if and only if player j’s reassessment is weakly outward. Since a team is efficient if and only if one player follows and the other player overrides players’ instinct, the team is efficient if and only if player j’s reassessment is weakly outward. \(\blacksquare \)

Part (iii). From Proposition 5, player i follows their instinct if and only if \(d(j,j)<d(i,i)\), in which case the team is efficient if and only if player j overrides their instinct, which from Proposition 3 holds if and only if player j’s reassessment is weakly outward. Also from Proposition 5, player i overrides their instinct if and only if \(d(j,j)\ge d(i,i)<\infty \), in which case the team is efficient if and only if player j follows their instinct, which from Proposition 3 holds if and only if player j’s reassessment is strictly inward. \(\blacksquare \)

C.11 Proof of Proposition 8

From Lemma 2 and Proposition 3, player i perceives player j as shirking with probability \((1{}-{}s^{\textsc {wo}}_i{}-{}s^{\textsc {si}}_i)\cdot \rho ^*+s^{\textsc {si+}}_i{}\cdot \mathrm{I}[\rho ^*{}<{}\widehat{\rho }^{\,*}]+s^{\textsc {wo}}_i{}\cdot \mathrm{I}[\rho ^*{}>{}\widehat{\rho }^{\,*}]+(s^{\textsc {si}}_i{}-{}s^{\textsc {si+}}_i)\cdot \widehat{\rho }^{\,*}\). Thus, if \(\rho ^*>\widehat{\rho }^{\,*}\), player i follows/overrides their instinct (i.e. shirks/works at \(t^*\)) if \((1{}-{}s^{\textsc {wo}}_i{}-{}s^{\textsc {si}}_i)\cdot \rho ^*+s^{\textsc {wo}}_i+(s^{\textsc {si}}_i{}-{}s^{\textsc {si+}}_i)\cdot \widehat{\rho }^{\,*}\lessgtr \rho ^*\); if \(\rho ^*<\widehat{\rho }^{\,*}\), player i follows/overrides their instinct if \((1-s^{\textsc {wo}}_i-s^{\textsc {si}}_i)\cdot \rho ^*+s^{\textsc {si+}}_i+(s^{\textsc {si}}_i-s^{\textsc {si+}}_i)\cdot \widehat{\rho }^{\,*}\gtrless \rho ^*\). Combining and rearranging these expressions, the conditions for following/overriding one’s instinct then reduce to \(\rho ^*\notin [\rho ^{\textsc {l}}_i,\rho ^{\textsc {h}}_i]\) and \(\rho ^*\in (\rho ^{\textsc {l}}_i,\rho ^{\textsc {h}}_i)\), respectively. Thus, if \(\rho ^*\notin [\rho ^{\textsc {l}}_i,\rho ^{\textsc {h}}_i]\) for some \(i\in \{A,B\}\) and \(\rho ^*\in (\rho ^{\textsc {l}}_j,\rho ^{\textsc {h}}_j)\) for \(j\ne i\), players play opposite strategies at \(t^*\), which ensures the team is efficient (Lemma 1) with N odd. \(\blacksquare \)
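
The decision rule just derived can be summarized in a short sketch (ours; the share values passed in are illustrative, and ties, which correspond to mixing, are ignored):

    def overrides_instinct(rho_star, rho_hat, s_wo, s_si, s_si_plus):
        # Player i's perceived probability that the partner shirks at t*,
        # per the expression in the proof of Proposition 8.
        perceived_shirk = ((1 - s_wo - s_si) * rho_star
                           + s_si_plus * (rho_star < rho_hat)
                           + s_wo * (rho_star > rho_hat)
                           + (s_si - s_si_plus) * rho_hat)
        works = perceived_shirk > rho_star   # working pays iff partner likely shirks
        instinct_is_work = rho_star < rho_hat
        return works != instinct_is_work

    # e.g. rho* > rho_hat* (instinct is to shirk), yet player i works:
    assert overrides_instinct(0.6, 0.3, s_wo=0.9, s_si=0.05, s_si_plus=0.0)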

C.12 Proof of Corollary 2

Part (i). From Proposition 8, player i follows their instinct for all \(\widehat{\rho }^{\,*}{}>{}\rho ^*\) if and only if \(\rho ^*{}<{}\rho ^{\textsc {l}}_i\) for all \(\widehat{\rho }^{\,*}{}>{}\rho ^*\), which holds if and only and can be equivalently rearranged as . Player i follows their instinct for all \(\widehat{\rho }^{\,*}{}<{}\rho ^*\) if and only if \(\rho ^*{}>{}\rho ^{\textsc {h}}_i\) for all \(\widehat{\rho }^{\,*}{}<{}\rho ^*\), which holds if and only and can be equivalently rearranged as . Combining these conditions, player i follows their instinct for all \(\widehat{\rho }^{\,*}\) (and thus, for all \(\widehat{\Theta }\)) if and only if . \(\blacksquare \)

Part (ii). From Proposition 8, player i overrides their instinct for all \(\widehat{\rho }^{\,*}{}>{}\rho ^*\) if and only if \(\rho ^*{}>{}\rho ^{\textsc {l}}_i\) for all \(\widehat{\rho }^{\,*}{}>{}\rho ^*\), which holds if and only . Player i overrides their instinct for all \(\widehat{\rho }^{\,*}{}<{}\rho ^*\) if and only if \(\rho ^*{}<{}\rho ^{\textsc {h}}_i\) for all \(\widehat{\rho }^{\,*}{}<{}\rho ^*\), which holds if and only . Combining these conditions, player i overrides their instinct for all \(\widehat{\rho }^{\,*}\) (and thus, for all \(\widehat{\Theta }\)) if and only if . \(\blacksquare \)

C.13 Proof of Corollary 3

Part (i). From the proof of part (i) of Corollary 2, we know that player i follows their instinct for all \(\widehat{\rho }^{\,*}>\rho ^*\), which in this case implies working at \(t^*\) (Lemma 2), if and only if . From the Proof of part (ii) of Corollary 2, we know that player i overrides their instinct for all \(\widehat{\rho }^{\,*}<\rho ^*\), which likewise implies working at \(t^*\) (Lemma 2), if and only if . Combining the conditions for \(\widehat{\rho }^{\,*}>\rho ^*\) and for \(\widehat{\rho }^{\,*}<\rho ^*\) implies that player i works at \(t^*\) for all \(\widehat{\rho }^{\,*}\) (and thus, for all \(\widehat{\Theta }\)) if and only if . \(\blacksquare \)

Part (ii). From the proof of part (i) of Corollary 2, we know that player i follows their instinct for all \(\widehat{\rho }^{\,*}<\rho ^*\), which in this case implies shirking at \(t^*\) (Lemma 2), if and only if . From the Proof of part (ii) of Corollary 2, we know that player i overrides their instinct for all \(\widehat{\rho }^{\,*}>\rho ^*\), which likewise implies shirking at \(t^*\) (Lemma 2), if and only if . Combining the conditions for \(\widehat{\rho }^{\,*}>\rho ^*\) and for \(\widehat{\rho }^{\,*}<\rho ^*\) implies that player i shirks at \(t^*\) for all \(\widehat{\rho }^{\,*}\) (and thus, for all \(\widehat{\Theta }\)) if and only if . \(\blacksquare \)

C.14 Proof of Corollary 4

Observe \(\widehat{\rho }^{\,*}<\rho ^*\) and \(\widehat{\rho }^{\,*}=\overline{\rho }_i\) imply \(\rho ^*=\rho ^{\textsc {l}}_i\), while \(\widehat{\rho }^{\,*}>\rho ^*\) and \(\widehat{\rho }^{\,*}=\overline{\overline{\rho }}_i\) imply \(\rho ^*=\rho ^{\textsc {h}}_i\). From Proposition 8, we then have that, given \(\widehat{\rho }^{\,*}<\rho ^*\), player i follows their instinct if \(\widehat{\rho }^{\,*}<\overline{\rho }_i\) and overrides their instinct if \(\widehat{\rho }^{\,*}>\overline{\rho }_i\). Similarly, Proposition 8 implies that, given \(\widehat{\rho }^{\,*}>\rho ^*\), player i follows their instinct if \(\widehat{\rho }^{\,*}>\overline{\overline{\rho }}_i\) and overrides their instinct if \(\widehat{\rho }^{\,*}<\overline{\overline{\rho }}_i\). Combining these cases with Lemma 2, we get that player i shirks if \(\widehat{\rho }^{\,*}<\overline{\rho }_i\), works if \(\widehat{\rho }^{\,*}\in (\overline{\rho }_i,\rho ^*)\), shirks if \(\widehat{\rho }^{\,*}\in (\rho ^*,\overline{\overline{\rho }}_i)\), and works if \(\widehat{\rho }^{\,*}>\overline{\overline{\rho }}_i\). Using the definition of \(\overline{\rho }_i\), we can see that \(0<\overline{\rho }_i<\rho ^*\) if and only if . Using the definition of \(\overline{\overline{\rho }}_i\), we can see that \(\rho ^*<\overline{\overline{\rho }}_i<1\) if and only if . Combining these conditions, we see that the behavior described above can only arise if . \(\blacksquare \)

C.15 Proof of Proposition 9

From Proposition 8, player j with beliefs given by \(\sigma ^1_j\) overrides their instinct if \(\rho ^*\in (\rho ^{\textsc {l}}_j,\rho ^{\textsc {h}}_j)\) and follows their instinct if \(\rho ^*\notin [\rho ^{\textsc {l}}_j,\rho ^{\textsc {h}}_j]\). Thus, \(\pi (\sigma ^2_i)\) is player i’s perception with beliefs given by \(\sigma ^2_i\) that player \(j\ne i\) will override their instinct. In turn, we can readily observe that player i overrides their instinct if \(\pi (\sigma ^2_i)<\pi ^{\textsc {ul}}\) and follows their instinct if \(\pi (\sigma ^2_i)>\pi ^{\textsc {ul}}\) since Propositions 2 and 3 imply that a player with unlimited reassessment overrides their instinct with probability \(\pi ^{\textsc {ul}}\). Thus, \(\pi (\sigma ^{\ell }_i)=\sum \nolimits _{\sigma ^{\ell -1}_j}\sigma ^{\ell }_i(\sigma ^{\ell -1}_j){}\cdot {}\mathrm{I}[\pi (\sigma ^{\ell -1}_j)<\pi ^{\textsc {ul}}]\) represents player i’s perception of the probability that player \(j\ne i\) will override their instinct for \(\ell =3\), and by induction the same is true for all \(\ell >2\). From Lemma 1, we are then assured that the team achieves the first-best equilibrium if \(\min _{i\in \{A,B\}}\{\pi (\sigma ^{\ell _i}_i)\}{}<{}\pi ^{\textsc {ul}}{}<{}\max _{i\in \{A,B\}}\{\pi (\sigma ^{\ell _i}_i)\}\) with \(\sigma ^{\ell _A}_A\) and \(\sigma ^{\ell _B}_B\) representing players’ beliefs and \(\ell _A,\ell _B>1\). \(\blacksquare \)
