
Associated consistency, value and graphs

  • Original Paper
  • Published: International Journal of Game Theory

Abstract

This article presents an axiomatic characterization of a new value for cooperative games with incomplete communication. The result is obtained by slight modifications of associated games proposed by Hamiache (Games Econ Behav 26:59–78, 1999; Int J Game Theory 30:279–289, 2001). This new associated game can be expressed as a matrix formula. We generate a series of successive associated games and show that its limit is an inessential game. Three axioms (associated consistency, inessential game, continuity) characterize a unique sharing rule. Combinatorial arguments and matrix tools provide a procedure to compute the solution. The new sharing rule coincides with the Shapley value when the communication is complete.


Notes

  1. \( v^*_{\tau }(S) =v(S)+\tau \sum _{j\in N{\setminus } S} [v(S\cup \{j\})-v(S)-v(\{j\})].\)

  2. That threshold value depends on the characteristic values of matrix \((1/\tau )(P_gM_cP_g-P_g)\). Since those characteristic values change from graph to graph, we do not have a sharp result for \(\tau \). In Hamiache (2001) we obtained \(\tau <\frac{2}{n}\) for complete graphs.

  3. This proof has been proposed by a referee. It advantageously replaces a longer previous proof.

  4. We thank an anonymous referee for this extremely concise proof.

  5. The mean value of the unanimity game \((N, u_{_N},g)\) can be computed with the following formula, \(MV(N, u_{_N},g) =\frac{1}{k(N)} \sum _{i\in {}N:\#((N{\setminus }\{i\})/g)=1} MV(N{\setminus }{\{i\}}, u_{_{N{\setminus }{\{i\}}}},g(N{\setminus }{\{i\}}))\), where \(k(N)=\#\{i\in {}N\mid N{\setminus }{\{i\}}\,\, \hbox {connected}\}\).
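The convergence described in the abstract can be sketched numerically in the complete-communication case by iterating the associated game of footnote 1. The 3-player game below and the choice \(\tau =0.1\) (below the \(2/n=2/3\) threshold of footnote 2) are our own illustrative assumptions, not data from the paper.

```python
from itertools import combinations

# Illustrative 3-player game (our own choice of worths).
N = (1, 2, 3)
v = {frozenset(S): 0.0 for k in range(1, 4) for S in combinations(N, k)}
v[frozenset({1, 2})] = 3.0
v[frozenset({1, 3})] = 4.0
v[frozenset({2, 3})] = 5.0
v[frozenset({1, 2, 3})] = 12.0

def associated(v, tau):
    """Associated game of footnote 1:
    v*(S) = v(S) + tau * sum_{j not in S} [v(S u {j}) - v(S) - v({j})]."""
    return {S: v[S] + tau * sum(v[S | {j}] - v[S] - v[frozenset({j})]
                                for j in N if j not in S)
            for S in v}

tau = 0.1  # below the 2/n threshold of footnote 2
w = dict(v)
for _ in range(500):
    w = associated(w, tau)

# The iterates approach an inessential game whose singleton worths are
# the Shapley payoffs of v, here (3.5, 4.0, 4.5).
```

With complete communication the limit's singletons reproduce the Shapley value of the starting game, in line with the abstract's last sentence.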

References

  • Borm P, Owen G, Tijs SH (1992) On the position value for communication situations. SIAM J Discret Math 5:305–320


  • Davis M, Maschler M (1965) The kernel of cooperative games. Naval Res Log Q 12:223–259


  • Hamiache G (1999) A value with incomplete communication. Games Econ Behav 26:59–78


  • Hamiache G (2001) Associated consistency and Shapley value. Int J Game Theory 30:279–289


  • Hamiache G (2003) A mean value for games with communication structures. Int J Game Theory 32:533–544


  • Hamiache G (2010) A matrix approach to the associated consistency with an application to the Shapley value. Int Game Theory Rev 12:175–187


  • Hart S, Mas-Colell A (1989) Potential, value and consistency. Econometrica 57:589–614


  • Herings PJJ, van der Laan G, Talman AJJ (2008) The average tree solution for cycle-free graph games. Games Econ Behav 62:77–92


  • Horn RA, Johnson CR (1990) Matrix analysis. Cambridge University Press, Cambridge


  • Myerson RB (1977) Graphs and cooperation in games. Math Oper Res 2:225–229


  • Peleg B (1980) On the reduced game property and its converse. Int J Game Theory 3:187–200


  • Shapley LS (1953) A value for n-person games. In: Kuhn HW, Tucker AW (eds) Contributions to the theory of games II. Annals of Mathematics Studies, vol 28. Princeton University Press, Princeton, pp 307–317

  • Sobolev AI (1975) Characterization of the principle of optimality for games through functional equations (in Russian). Math Methods Soc Sci 6:94–151



Author information


Corresponding author

Correspondence to Florian Navarro.

Additional information


The authors thank the referees of this paper for their high-quality reports, which contributed to a better presentation of the article and to the correction of some inaccuracies. Any errors that may remain are ours alone.

Appendix

Proof of Lemma 8:

First of all, it is easy to see that \(<0, y_{_T}>\) is an eigenpair of matrix \(M_g\) for each non-connected coalition T, where \(y_{_T}[S]=1\) if \({S=T}\) and \(y_{_T}[S]=0\) if \({S\not =T}\). In the following, we will assume that \(\lambda \not =0\).

We already know that the n vectors \(x_{\{i\}}\) for \(i\in N\), as defined by Eq. (4), are independent eigenvectors of matrix \(M_c\) related to eigenvalue 1 and that they are also eigenvectors of \(M_g\). We will denote the other eigenpairs of matrix \(M_c\) by \(<\lambda _{_S},\,x_{_S}>\) for \(S\subseteq N\) and \(\#S\ge 2\). Let \(<\lambda , w>\) be an eigenpair of matrix \(M_g=P_g \,M_{c}\,P_g\). Vector w can be expressed as a linear combination of eigenvectors of matrix \(M_c\), \(w= \sum _{\emptyset \not =S\subseteq N}c_{_S}\,x_{_S}.\) We will prove below that the eigenvalues of matrix \(M_g\) have a norm smaller than or equal to one.

$$\begin{aligned}&P_g \,M_{c}\,P_g \, \Big [\sum _{\mathop {i}\limits _{i\in {}N}} c_{_{\{i\}}}\,x_{_{\{i\}}} +\mathop {\sum }\limits _{\mathop {S}\limits _{\mathop {S\subseteq N}\limits _{\#S\ge 2}}}c_{_S}\,x_{_S}\Big ] = P_g \,M_{c}\, \Big [\sum _{\mathop {i}\limits _{i\in {}N}} c_{_{\{i\}}}\,x_{_{\{i\}}} +\mathop {\sum }\limits _{\mathop {S}\limits _{\mathop {S\subseteq N}\limits _{\#S\ge 2}}}c_{_S}\,x_{_S}\Big ]\\&\quad = P_g \, \Big [\sum _{\mathop {i}\limits _{i\in {}N}} c_{_{\{i\}}}\,x_{_{\{i\}}} +\sum _{\mathop {S}\limits _{\#S\ge 2}}\lambda _{_S}\,c_{_S}\,x_{_S}\Big ] = \sum _{\mathop {i}\limits _{i\in {}N}} c_{_{\{i\}}}\,x_{_{\{i\}}} +P_g \,\sum _{\mathop {S}\limits _{\#S\ge 2}}\lambda _{_S}\,c_{_S}\,x_{_S}\\&\quad =\lambda \, \Big [\sum _{\mathop {i}\limits _{i\in {}N}} c_{_{\{i\}}}\,x_{_{\{i\}}} +\mathop {\sum }\limits _{\mathop {S}\limits _{\mathop {S\subseteq N}\limits _{\#S\ge 2}}}c_{_S}\,x_{_S}\Big ]. \end{aligned}$$

The first equality uses the fact that \(<1,\,w>\) is an eigenpair of matrix \(P_g\). The second equality is true since \(<\lambda _{_S},\,x_{_S}>\) is an eigenpair of matrix \(M_c\). Let us consider the norms of the two last terms.

Matrix \(P_g\) is diagonalizable (Property 3), its eigenvalues are thus semi-simple. It is well known that in that case there exists a matrix norm \(\Vert \cdot \Vert \) verifying \(\Vert P_g\Vert =\rho (P_g)\), where \(\rho (P_g)\) is the spectral radius, which is equal to one. Moreover, there exists a vector norm compatible with the considered matrix norm (Theorem 5.7.13, p. 324, Horn and Johnson 1990), which means that \(\Vert P_g\,x\Vert \le \Vert P_g\Vert \,\Vert x\Vert =\Vert x\Vert \) for all vectors x.

We thus obtain, using \(P_g\,w=w\),

$$\begin{aligned} \mid \lambda \mid \,\Vert w\Vert =\Vert P_g\,M_c\,P_g\,w\Vert =\Vert P_g\,M_c\,w\Vert \le \Vert M_c\,w\Vert . \end{aligned}$$

Since, for \(\#S=s\ge 2\), the eigenvalues of \(M_c\) are \(\lambda _{_S}=1-s\,\tau \),

$$\begin{aligned} \Vert M_c\,w\Vert =\Big \Vert w-\tau \mathop {\sum }\limits _{\mathop {S}\limits _{\#S\ge 2}}s\,c_{_S}\,x_{_S}\Big \Vert \le \Vert w\Vert +\tau \mathop {\sum }\limits _{\mathop {S}\limits _{\#S\ge 2}}s\,\mid c_{_S}\mid \,\Vert x_{_S}\Vert , \end{aligned}$$

so that

$$\begin{aligned} (\mid \lambda \mid -1)\,\Vert w\Vert \le \tau \mathop {\sum }\limits _{\mathop {S}\limits _{\#S\ge 2}}s\,\mid c_{_S}\mid \,\Vert x_{_S}\Vert . \end{aligned}$$

Parameter \(\tau \) is positive and arbitrarily small. Thus, if \((\mid {}\lambda \mid -1)\) is positive, the term on the left hand side of the inequality is positive and we can choose parameter \(\tau \) to be sufficiently small to contradict the inequality. So \((\mid {}\lambda \mid -1)\) cannot be strictly positive, which leads to \(\mid {}\lambda \mid \le 1.\)\(\square \)
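The norm facts invoked in this proof imply in particular that \(\rho (A)\le \Vert A\Vert \) for any induced matrix norm, via the compatibility inequality \(\Vert A\,x\Vert \le \Vert A\Vert \,\Vert x\Vert \). A minimal numerical illustration with the spectral (2-)norm, on a random matrix of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))  # arbitrary stand-in matrix

rho = max(abs(np.linalg.eigvals(A)))  # spectral radius rho(A)
op2 = np.linalg.norm(A, 2)            # induced 2-norm ||A||

# Compatibility: ||A x|| <= ||A|| ||x|| for every vector x,
# which forces rho(A) <= ||A||.
x = rng.standard_normal(6)
```

This is only a generic sketch of the Horn and Johnson fact used above; the specific norm constructed in the proof (with \(\Vert P_g\Vert =\rho (P_g)\)) depends on the diagonalization of \(P_g\).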

Proof of Lemma 9:

Let \(<\lambda ,\,w>\) be an eigenpair of matrix \(M_g\), where \(w=\sum _{\emptyset \not =S\subseteq N}c_{_S}\,x_{_S}\),

$$\begin{aligned} P_gM_cP_g\sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}}c_{_S}\,x_{_S} =\lambda \sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}}c_{_S}\,x_{_S}. \end{aligned}$$

We know that \(<1,\,w>\) is an eigenpair of matrix \(P_g\),

$$\begin{aligned} P_gM_c\sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}}c_{_S}\,x_{_S} =\lambda \,P_g\sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}}c_{_S}\,x_{_S}. \end{aligned}$$

Separating the singletons,

$$\begin{aligned} P_gM_c\sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}}c_{_S}\,x_{_S} =P_gM_c\sum _{\mathop {i}\limits _{i\in N}}c_{_{\{i\}}}\,x_{_{\{i\}}} +P_gM_c\sum _{\mathop {S}\limits _{\#S\ge 2}}c_{_S}\,x_{_S} =P_g\lambda \sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}} c_{_S}\,x_{_S}. \end{aligned}$$

Since \(<1,\,x_{_{\{i\}}}>\) for all \(i\in N\) and \(<1-s\,\tau ,\,x_{_S}>\) for all coalitions S verifying \(\#S=s\ge 2\) are eigenpairs of matrix \(M_c\),

$$\begin{aligned}&P_g \sum _{\mathop {i}\limits _{i\in N}} c_{_{\{i\}}}x_{_{\{i\}}} +P_g \mathop {\sum }\limits _{\mathop {S}\limits _{\mathop {S\subseteq N}\limits _{\#S\ge 2}}} (1-s\,\tau )c_{_S}x_{_S}\\&\quad = P_g \sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}} c_{_S}x_{_S} -P_g\tau \mathop {\sum }\limits _{\mathop {S}\limits _{\mathop {S\subseteq N}\limits _{\#S\ge 2}}} {}sc_{_S}x_{_S} = P_g\lambda \sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}} c_{_S}x_{_S}. \end{aligned}$$

For all connected coalitions T we have, after assembling a few terms,

$$\begin{aligned} \tau \,\mathop {\sum }\limits _{\mathop {S}\limits _{\mathop {S\subseteq N}\limits _{\#S\ge 2}}} \,s\,c_{_S}\,x_{_S}[T] =(1-\lambda )\,\sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}} c_{_S}\,x_{_S}[T]. \end{aligned}$$

Since \(w=\sum _{\emptyset \not =S\subseteq N} c_{_S}\,x_{_S}\) is an eigenvector, there exists at least one connected coalition T such that \(w[T]=\sum _{\emptyset \not =S\subseteq N} c_{_S}\,x_{_S}[T]\not =0\). Isolating \(\lambda \),

$$\begin{aligned} \lambda =1 -\tau \,\frac{\displaystyle {\sum _{\mathop {S}\limits _{\#S\ge 2}}\,s\,c_{_S}\,x_{_S}[T]}}{\displaystyle {\sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}}c_{_S}\,x_{_S}[T]}}. \end{aligned}$$

Since matrix \(P_gM_cP_g\) is real, \({{\overline{\lambda }}}\), the complex conjugate of \(\lambda \), is also one of its eigenvalues, and the following equality holds,

$$\begin{aligned} {\overline{\lambda }}= & {} 1 -\tau \,\frac{\displaystyle {\sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,\overline{c_{_S}}\,x_{_S}[T]}}{\displaystyle {\sum _{\mathop {S}\limits _{\emptyset \not =S\subseteq N}} \overline{c_{_S}}\,x_{_S}[T]}}.\\ \quad \lambda \,{\overline{\lambda }}= & {} \mid {}\lambda {}\mid ^2 =\frac{1}{\mid {}w[T]{}\mid ^2} \Big [w[T] -\tau \sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,{c_{_S}}\,x_{_S}[T]\Big ] \Big [{\overline{w}}[T] -\tau \sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,\overline{c_{_S}}\,x_{_S}[T]\Big ]. \end{aligned}$$

Expanding the last two terms of the previous expression,

$$\begin{aligned} \mid {}\lambda {}\mid ^2= & {} \frac{1}{\mid {}w[T]{}\mid ^2} \Big [{\mid {}w[T]{}\mid ^2} -2\,\tau {}Re\Big (w[T]\,\sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,\overline{c_{_S}}\,x_{_S}[T]\Big )\\&+\tau ^2\mid \sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,{c_{_S}}\,x_{_S}[T]\mid ^2\Big ]. \end{aligned}$$

We can choose an eigenvector w such that \({\mid {}w[T]{}\mid }=1\),

$$\begin{aligned} \mid {}\lambda {}\mid ^2 \, ={1} -2\,\tau {}Re\Big (w[T]\,\sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,\overline{c_{_S}}\,x_{_S}[T]\Big ) +\tau ^2\mid \sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,{c_{_S}}\,x_{_S}[T]\mid ^2. \end{aligned}$$

Since \(\mid {}\lambda {}\mid ^2\le 1\) we have,

$$\begin{aligned}&2\,\tau {}Re\Big (w[T]\,\sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,\overline{c_{_S}}\,x_{_S}[T]\Big ) -\tau ^2\mid \sum _{\mathop {S}\limits _{\#S\ge 2}}\, s\,{c_{_S}}\,x_{_S}[T]\mid ^2\ge 0.\\&\quad 0< \tau \le \frac{2\,Re\Big (w[T]\,\sum _{S:\#S\ge 2}\, s\,\overline{c_{_S}}\,x_{_S}[T]\Big )}{\mid \sum _{S:\#S\ge 2}\, s\,{c_{_S}}\,x_{_S}[T]\mid ^2}. \end{aligned}$$

Choosing parameter \(\tau \) sufficiently small will ensure that \(\mid {}\lambda {}\mid ^2<1\). \(\square \)
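The spectrum used in this proof, eigenvalue 1 for the singleton directions and \(1-s\,\tau \) for each coalition of size \(s\ge 2\), can be checked numerically in the complete-communication case (where \(M_g=M_c\)). The 3-player instance and \(\tau =0.1\) below are our own illustrative choices:

```python
import numpy as np
from itertools import combinations

n, tau = 3, 0.1
N = tuple(range(1, n + 1))
coals = [frozenset(S) for k in range(1, n + 1) for S in combinations(N, k)]
idx = {S: i for i, S in enumerate(coals)}

# Matrix of the associated-game map of footnote 1, acting on (v(S))_S:
# (M v)(S) = v(S) + tau * sum_{j not in S} [v(S u {j}) - v(S) - v({j})].
M = np.zeros((len(coals), len(coals)))
for S in coals:
    M[idx[S], idx[S]] = 1.0 - tau * (n - len(S))
    for j in N:
        if j not in S:
            M[idx[S], idx[S | {j}]] += tau
            M[idx[S], idx[frozenset({j})]] -= tau

eig = np.sort(np.linalg.eigvals(M).real)
# Expected spectrum: 1 (multiplicity 3), 1 - 2*tau (multiplicity 3), 1 - 3*tau.
```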

Auxiliary result:

$$\begin{aligned}&\sum _{\theta =1}^{m}\, {{j+\theta -1}\atopwithdelims (){\theta }}\, {{m}\atopwithdelims (){\theta }} \left( {\frac{\tau }{z_i}}\right) ^{\theta } =-1\\&\quad +\sum _{\theta =0}^{j-1} {{j-1}\atopwithdelims (){j-1-\theta }}\,{{m}\atopwithdelims (){j-1-\theta }} {\left( {\frac{\tau }{z_i}}\right) ^{j-1-\theta } \left( {1+\frac{\tau }{z_i}}\right) ^{m-j+1+\theta }}. \end{aligned}$$

Proof

$$\begin{aligned}&\sum _{\theta =0}^{m}\, {{\theta +j-1}\atopwithdelims (){\theta }}\, {{m}\atopwithdelims (){\theta }} \left( {\frac{\tau }{z_i}}\right) ^{\theta } =\sum _{\theta =0}^{m}\, \frac{(\theta +j-1)\dots (\theta +1)}{(j-1)!}\, {{m}\atopwithdelims (){\theta }} \left( {\frac{\tau }{z_i}}\right) ^{\theta }\nonumber \\&\quad =\frac{1}{(j-1)!}\,\sum _{\theta =0}^{m}\, \frac{d^{j-1}}{d\left( {\frac{\tau }{z_i}}\right) ^{j-1}}\left[ {{m}\atopwithdelims (){\theta }} \left( {\frac{\tau }{z_i}}\right) ^{\theta +j-1}\right] \nonumber \\&\quad =\frac{1}{(j-1)!}\, \frac{d^{j-1}}{d\left( {\frac{\tau }{z_i}}\right) ^{j-1}} \left[ \left( {\frac{\tau }{z_i}}\right) ^{j-1} \sum _{\theta =0}^{m}\, {{m}\atopwithdelims (){\theta }} \left( {\frac{\tau }{z_i}}\right) ^{\theta } \right] \nonumber \\&\quad =\frac{1}{(j-1)!}\, \frac{d^{j-1}}{d\left( {\frac{\tau }{z_i}}\right) ^{j-1}} \left[ \left( {\frac{\tau }{z_i}}\right) ^{j-1} \left( 1+{\frac{\tau }{z_i}}\right) ^{m} \right] . \end{aligned}$$
(14)

Applying Leibniz's theorem for the differentiation of a product,

$$\begin{aligned} \frac{d^t}{dx^t}(u\cdot {}v) =\sum _{\theta =0}^{t} {{t}\atopwithdelims (){\theta }} \frac{d^\theta }{dx^\theta }(u) \frac{d^{t-\theta }}{dx^{t-\theta }}(v), \end{aligned}$$

to Eq. (14), with \(u=(\tau /z_i)^{j-1}\), \(v=(1+\tau /z_i)^m\) and \(t=j-1\) we obtain,

$$\begin{aligned}= & {} \frac{1}{(j-1)!}\, \sum _{\theta =0}^{j-1} {{j-1}\atopwithdelims (){\theta }} \frac{d^{\theta }}{d\left( {\frac{\tau }{z_i}}\right) ^{\theta }} \left( \frac{\tau }{z_i}\right) ^{j-1} \frac{d^{j-1-\theta }}{d\left( {\frac{\tau }{z_i}}\right) ^{j-1-\theta }} \left( 1+{\frac{\tau }{z_i}}\right) ^{m} \\= & {} \frac{1}{(j-1)!}\, \sum _{\theta =0}^{j-1} {{j-1}\atopwithdelims (){\theta }} \frac{(j-1)!}{(j-\theta -1)!} \left( \frac{\tau }{z_i}\right) ^{(j-1-\theta )}\\&\frac{(m)!}{(m-j+\theta +1)!} \left( 1+{\frac{\tau }{z_i}}\right) ^{(m-j+1+\theta )} \\= & {} \sum _{\theta =0}^{j-1} {{j-1}\atopwithdelims (){j-\theta -1}} {{m}\atopwithdelims (){j-\theta -1}} \left( \frac{\tau }{z_i}\right) ^{(j-1-\theta )} \left( 1+{\frac{\tau }{z_i}}\right) ^{(m-j+1+\theta )}, \end{aligned}$$

which proves the Auxiliary result. \(\square \)
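Since the Auxiliary result is a polynomial identity in \(\tau /z_i\), it can be verified exactly with rational arithmetic; the test values of j, m and of the ratio below are our own.

```python
from fractions import Fraction
from math import comb

def lhs(j, m, y):
    # Left-hand side of the Auxiliary result, with y standing for tau / z_i.
    return sum(comb(j + t - 1, t) * comb(m, t) * y**t for t in range(1, m + 1))

def rhs(j, m, y):
    # Right-hand side of the Auxiliary result.
    return -1 + sum(comb(j - 1, j - 1 - t) * comb(m, j - 1 - t)
                    * y**(j - 1 - t) * (1 + y)**(m - j + 1 + t)
                    for t in range(j))

y = Fraction(1, 7)  # exact rational stand-in for tau / z_i
```

Note that when \(j-1-\theta >m\) the binomial coefficient \({{m}\atopwithdelims (){j-1-\theta }}\) vanishes, so the formally negative powers of \((1+\tau /z_i)\) never contribute.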

Proof of Lemma 10:

If T is non-connected, all the terms of the series are equal to 0 and Lemma 10 is true. So let us now assume that T is connected.

$$\begin{aligned} (P_gM_cP_g)^{m}[S,T]= \sum _{\theta =1}^{m}A^{\theta }[S,T]\, {{m}\atopwithdelims (){\theta }} \tau ^{\theta }+P_g[{S,T}], \end{aligned}$$

where the parameters \(A^{\theta }[S,T]\) are the successive coefficients of the powers of x in the Maclaurin expansion of the generating function F(x). Those coefficients are the values of the relevant derivatives of F(x) at point \(x=0\).

$$\begin{aligned} A^{\theta }[S,T]=\frac{1}{\theta !}\frac{d^\theta \,F(0)}{d\,x^\theta }. \end{aligned}$$
(15)

Performing the Euclidean division in Eq. (11), we can rewrite the generating function of Lemma 4 as,

$$\begin{aligned} F(x)=\frac{R(x)}{Q(x)} =\frac{\alpha _0+\alpha _{1}\,x+ \cdots +\alpha _{q-\mu -1}\,x^{q-\mu -1}}{1+b_1\,x+b_2\,x^2+\cdots +b_{q-\mu }\,x^{q-\mu }}+\frac{a_{q-\mu }}{b_{q-\mu }}, \end{aligned}$$

where \(\alpha _i=a_{i}-\frac{a_{q-\mu }}{b_{q-\mu }}b_{i}\) for \(1\le i \le q-\mu -1\) and \(\alpha _0=-\frac{a_{q-\mu }}{b_{q-\mu }}\).

From the partial fraction decomposition theorem, we can write the rational function, the first term of the right hand side of the previous equation, as a finite linear combination of terms of the form, \(E(x)=(x-z)^{-w}\), where z is a root of the denominator and w is an integer at most equal to the algebraic multiplicity of z. Note that z could be a complex number.

$$\begin{aligned} F(x)=\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j}\frac{1}{(x-z_i)^j} +\frac{a_{q-\mu }}{b_{q-\mu }}, \end{aligned}$$

where \(z_1\), \(\dots \), \(z_p\) are the roots of Q(x), \(w_1\), \(\dots \), \(w_p\) are their respective algebraic multiplicities and \(\beta _{i,j}\) are the coefficients of the linear combination. The derivative of order \(\theta \) is given by,

$$\begin{aligned} \frac{d^\theta \,F(x)}{d\,x^\theta } =\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j} \frac{(-j)(-j-1)\dots (-j-\theta +1)}{(x-z_i)^{j+\theta }}, \end{aligned}$$

and the coefficients \(A^{\theta }[S,T]\) are given by,

$$\begin{aligned} A^{\theta }[S,T]= & {} \frac{1}{\theta !} \frac{d^\theta \,F(0)}{d\,x^\theta } =\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j} \frac{(-1)^\theta }{(-z_i)^{j+\theta }}\, {{j+\theta -1}\atopwithdelims (){\theta }}.\\ (P_gM_cP_g)^{m}[S,T]= & {} \sum _{\theta =1}^{m}\sum _{i=1}^p\sum _{j=1}^{w_i} \beta _{i,j}\frac{(-1)^\theta }{(-z_i)^{j+\theta }}\, {{j+\theta -1}\atopwithdelims (){\theta }}\, {{m}\atopwithdelims (){\theta }} \tau ^{\theta }+P_g[{S,T}]. \end{aligned}$$

Inverting the order of summations,

$$\begin{aligned} =\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j} \frac{1}{(-z_i)^{j}}\, \sum _{\theta =1}^{m}\, {{j+\theta -1}\atopwithdelims (){\theta }}\, {{m}\atopwithdelims (){\theta }} \left( {\frac{\tau }{z_i}}\right) ^{\theta } +P_g[{S,T}]. \end{aligned}$$
(16)

Using the Auxiliary result we get,

$$\begin{aligned} (P_gM_cP_g)^{m}[S,T]= & {} -\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j} \frac{1}{(-z_i)^{j}}+P_g[{S,T}]\nonumber \\&+\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j} \frac{1}{(-z_i)^{j}}\, \sum _{\theta =0}^{j-1} {{j-1}\atopwithdelims (){j-1-\theta }}\,{{m}\atopwithdelims (){j-1-\theta }}\nonumber \\&{\times \left( {\frac{\tau }{z_i}}\right) ^{j-1-\theta } \left( {1+\frac{\tau }{z_i}}\right) ^{m-j+1+\theta }}. \end{aligned}$$
(17)

Since,

$$\begin{aligned} F(0)=\frac{R(0)}{Q(0)}=\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j}\, \left( \frac{1}{-z_i}\right) ^j +\frac{a_{q-\mu }}{b_{q-\mu }}=0, \end{aligned}$$

we have,

$$\begin{aligned}&(P_gM_cP_g)^{m}[S,T] =\frac{a_{q-\mu }}{b_{q-\mu }}+P_g[{S,T}] \\&+\sum _{i=1}^p\sum _{j=1}^{w_i}\beta _{i,j} \frac{1}{(-z_i)^{j}}\\&\times \left[ \sum _{\theta =0}^{j-1}\! {{j-1}\atopwithdelims (){j-1-\theta }}\!\!{{m}\atopwithdelims (){j-1-\theta }} \!\!{\left( {\frac{\tau }{z_i}}\right) ^{\!j-1-\theta } \!\!\!\left( {1+\frac{\tau }{z_i}}\right) ^{\!m-j+1+\theta }}\right] \!\!. \end{aligned}$$

Let us consider the characteristic polynomial in Eq. (8) and Q(x) as defined by Eq. (11). If \(z_i\) is a root of Q(x), it is true that \(\frac{1}{z_i}\) is a root of Eq. (8) and thus an eigenvalue of matrix A (note that \(z_i\not =0\)). We are now ready to show that \(1+\frac{\tau }{z_i}\) is an eigenvalue of matrix \(M_g\).

Since the columns of matrix \((P_gM_cP_g)\) corresponding to non-connected coalitions are zero, the non-zero eigenvalues are preserved when we delete from matrix \((P_gM_cP_g)\) the rows and columns corresponding to non-connected coalitions. Let us denote by \(M^{s}_g\) that “simplified” matrix. The corresponding sub-matrix of A is thus equal to \((A^{s})=(M^{s}_g-Id)/\tau \), which leads to \(Id+\tau \,A^{s}= M^{s}_g\). As a consequence, the eigenvalues of \(M^{s}_g\) are given by \(1+\frac{\tau }{z_i}\), which are also non-zero eigenvalues of matrix \(M_g\). Since the spectral radius of \(M_g\) is one, and since \(z_i\not =0\), the moduli of \(1+\frac{\tau }{z_i}\) are strictly smaller than 1 (Lemma 9).

Let us focus now on the terms \({{m}\atopwithdelims (){j-1-\theta }}({1+\frac{\tau }{z_i}})^{m}\) as m tends to infinity. If \(\theta =j-1\), the corresponding term reduces to \(({1+\frac{\tau }{z_i}})^{m}\) and thus converges to 0. Let us assume now that \(\theta =0,\,1,\dots ,j-2\).

$$\begin{aligned} \displaystyle { {{m}\atopwithdelims (){j-1-\theta }}\left| \left( {1+\frac{\tau }{z_i}}\right) \right| ^{m}}&=\displaystyle {\frac{m(m-1)(m-2)\ldots (m-j+2+\theta )}{(j-1-\theta )!}\left| \left( {1+\frac{\tau }{z_i}}\right) \right| ^m}\nonumber \\&\displaystyle {\le \frac{m^{j-1-\theta }}{(j-1-\theta )!}\left| \left( {1+\frac{\tau }{z_i}}\right) \right| ^m}. \end{aligned}$$
(18)

The logarithm of the last expression is \((j-1-\theta )\,\log (m)+m\,\log \left| 1+\frac{\tau }{z_i}\right| \). Since \(\left| 1+\frac{\tau }{z_i}\right| <1\), the second term tends to \(-\infty \) and, using the fact that \((\log m)/m \rightarrow 0\) as \(m\rightarrow \infty \), it dominates the first. The term in Expression (18) therefore converges to 0 as m tends to infinity, which completes the proof of Lemma 10. \(\square \)
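The dominance argument, a polynomial factor \({{m}\atopwithdelims (){j-1-\theta }}\) beaten by a geometric factor of modulus below one, can be illustrated with arbitrary stand-in values (r and k below are our own choices, not quantities from the paper):

```python
from math import comb

r = 0.9  # stands in for |1 + tau/z_i| < 1
k = 3    # stands in for j - 1 - theta
terms = [comb(m, k) * r**m for m in (10, 100, 1000)]
# comb(m, k) grows only polynomially in m while r**m decays geometrically,
# so the product tends to 0 as m grows.
```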

Proof of Lemma 11:

Let us write \((P_g\, M_c\, P_g)^n= (P_g\, M_c\, P_g)\, (P_g\, M_c\, P_g)^{n-1}\), and let W be the limit of the sequence \(\{(P_g\, M_c\, P_g)^k\}_{k=1}^{\infty }\). It is thus true that \((P_g\, M_c\, P_g)\, W=W\). In words, the columns of matrix W are eigenvectors of matrix \((P_g\, M_c\, P_g)\) related to eigenvalue 1. We shall show that these eigenvectors are “inessential” vectors. Let \(w=(w_{_S})_{_{\emptyset \not =S\subseteq N}}\) be an eigenvector associated with \(\lambda =1\). We will solve the following system of linear equations, \((P_g\, M_c\, P_g)\,w=w\). Since we have, for non-connected coalitions S, \(w_{_{S}}=\sum _{K\in S/g}w_{_{K}}\), we will concentrate only on connected coalitions. Considering Eq. (1), we obtain after a few cancellations,

$$\begin{aligned} \sum _{\mathop {j}\limits _{j\in S^*{\setminus } S}} \big (w_{_{S\cup \{j\}}}-w_{_{S}}-w_{_{\{j\}}}\big )=0. \end{aligned}$$
(19)

So we learn that for all connected coalitions S, the related coefficient \(w_S\) is defined uniquely as a linear combination of the coefficients \(w_{S\cup \{j\}}\) and \(w_{\{j\}}\) for \(j\in {}S^*{\setminus } S\). Noting that for \(S=N{\setminus } \{i\}\), we have \(w_{_{N{\setminus } \{i\}}}=w_{_{N}}-w_{_{\{i\}}},\) we can conclude that \(w_S\) is a linear combination of \(w_{N}\) and a selection of the \(w_{\{j\}}\). We show below that for all connected coalitions S such that \(1\le \#S\le {}n-1\), we have,

$$\begin{aligned} w_{_{S}}=w_{_{N}}-\sum _{\mathop {i}\limits _{i\not \in S}}w_{_{\{i\}}}. \end{aligned}$$
(20)

Equation (20) is of course true for \(S=N\) and for all the connected coalitions with \(n-1\) elements. Let us assume that Eq. (20) is true for all connected coalitions of size s and above. We will show that Eq. (20) also holds for connected coalitions T verifying \(\#T=s-1\). Equation (19) applied to T gives,

$$\begin{aligned} -\sum _{\mathop {j}\limits _{j\in {}T^*{\setminus } T}}w_{_{\{j\}}} -(\#T^*-\#T)\,w_{_{T}} +\sum _{\mathop {j}\limits _{j\in T^*{\setminus } T}}w_{_{T\cup \{j\}}} =0. \end{aligned}$$

Using the induction hypothesis,

$$\begin{aligned} -\sum _{\mathop {j}\limits _{j\in {}T^*{\setminus } T}}w_{_{\{j\}}} -(\#T^*-\#T)\,w_{_{T}} +\sum _{\mathop {j}\limits _{j\in T^*{\setminus } T}} \Big [w_{_{N}} -\sum _{\mathop {m}\limits _{m\in N{\setminus }(T\cup \{j\})}}w_{_{\{m\}}}\Big ] =0. \end{aligned}$$

Taking into account that \(N{\setminus }(T\cup \{j\})=(N{\setminus }{}T)\setminus \{j\}\),

$$\begin{aligned}&-\sum _{\mathop {j}\limits _{j\in {}T^*{\setminus } T}}w_{_{\{j\}}} -(\#T^*-\#T)\,w_{_{T}}+(\#T^*-\#T)\,w_{_{N}} \\&+\sum _{\mathop {m}\limits _{m\in T^*{\setminus } T}}w_{_{\{m\}}} -(\#T^*-\#T)\sum _{\mathop {m}\limits _{m\in N{\setminus }(T)}}w_{_{\{m\}}} =0. \end{aligned}$$

After relevant cancellations,

$$\begin{aligned} w_{_{T}}=w_{_{N}} -\sum _{\mathop {m}\limits _{m\in N{\setminus }{}T}}w_{_{\{m\}}}, \end{aligned}$$

which proves that Eq. (20) is true for all non-empty connected coalitions. As a consequence,

$$\begin{aligned} w_{_{\{j\}}}= & {} w_{_{N}}-\sum _{\mathop {i}\limits _{i\in N{\setminus }\{j\}}}w_{_{\{i\}}}, \end{aligned}$$
(21)
$$\begin{aligned} w_{_{N}}= & {} \sum _{\mathop {i}\limits _{i\in N}}w_{_{\{i\}}}. \end{aligned}$$
(22)

Combining Eqs. (20) and (22),

$$\begin{aligned} w_{_{S}}=w_{_{N}} -\sum _{\mathop {i}\limits _{i\in N{\setminus } S}} w_{_{\{i\}}} =\sum _{\mathop {i}\limits _{i\in {}N}} w_{_{\{i\}}} -\sum _{\mathop {i}\limits _{i\in N{\setminus } S}} w_{_{\{i\}}} =\sum _{\mathop {i}\limits _{i\in {}S}} w_{_{\{i\}}}. \end{aligned}$$
(23)

The eigenvectors of matrix \(P_g\, M_c\, P_g\) related to eigenvalue 1 are thus “inessential” vectors. We have therefore proved that \((P_g\, M_c\, P_g)^\infty [S,\,T] =\sum _{i\in S}(P_g\, M_c\, P_g)^\infty [\{i\},\,T]\), which ensures that the limit game \({\tilde{v}}\) is inessential. \(\square \)
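Lemma 11 can be probed numerically in the complete-communication case, where \(P_g\) is the identity and the matrix reduces to the associated-game matrix of footnote 1. A high matrix power stands in for the limit W; the 3-player setting and \(\tau =0.1\) are our own illustrative choices.

```python
import numpy as np
from itertools import combinations

n, tau = 3, 0.1
N = tuple(range(1, n + 1))
coals = [frozenset(S) for k in range(1, n + 1) for S in combinations(N, k)]
idx = {S: i for i, S in enumerate(coals)}

# Associated-game matrix for complete communication (footnote 1):
# (M v)(S) = v(S) + tau * sum_{j not in S} [v(S u {j}) - v(S) - v({j})].
M = np.zeros((len(coals), len(coals)))
for S in coals:
    M[idx[S], idx[S]] = 1.0 - tau * (n - len(S))
    for j in N:
        if j not in S:
            M[idx[S], idx[S | {j}]] += tau
            M[idx[S], idx[frozenset({j})]] -= tau

W = np.linalg.matrix_power(M, 400)  # numerical stand-in for the limit

# Fixed point: M W = W, and each row obeys the "inessential" relation
# W[S, .] = sum over i in S of W[{i}, .].
```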

About this article


Cite this article

Hamiache, G., Navarro, F. Associated consistency, value and graphs. Int J Game Theory 49, 227–249 (2020). https://doi.org/10.1007/s00182-019-00688-y

