
Representative Voting Games


Abstract

We propose the stationary Markov perfect equilibria of representative voting games as a benchmark to evaluate the outcomes of dynamic elections, in which the evolution of voters’ political power is endogenous. We show that the equilibria of dynamic elections can achieve this benchmark if politicians are sufficiently office motivated. For arbitrary equilibria of the electoral model, we characterize the faithfulness of politicians’ choices to the policy objectives of representative voters through a delegated best-response property. Finally, we provide conditions under which general dynamic electoral environments admit representative voters in each state.


Change history

  • 21 February 2021

    The headings Theorem 1 (Continued) and Corollary 1 (Continued) are incorrect. Now, they have been corrected to “Example (Continued)”.

Notes

  1. With a single state, our model mimics the equilibrium outcomes of dynamic models of elections with adverse selection following Duggan (2000) and Bernhardt et al. (2004): while we assume that a politician’s type is observed after she takes office to avoid complex updating of voters’ beliefs across states, the ex post commitment that we allow proxies for these beliefs in the absence of a state variable.

  2. Forand (2014) and Van Weelden (2013) establish related results in different models of dynamic elections with a fixed median voter and underlying collective decision problem.

  3. We assume a type-independent discount factor to simplify notation. As can be verified from their proofs, this assumption is not needed for Theorems 1 or 2. On the other hand, the result of Theorem 3 does not hold when different voter types have different discount factors.

  4. Theorem 2 is related to Proposition 4.2 in Duggan and Forand (2019), where we focus on ruling out the implementation of policy rules that are not solutions to the representative voter’s dynamic programming problem. There, the representative voter is fixed, and equilibrium coordination between representative voters is not an issue, so we can rely on a refinement of voting strategies which assumes only that, in all states, all politician types have available some policy which leads to reelection.

  5. If we allowed for history-dependent persistence of policy choices in the representative voting game, it would be the case that the outcomes of any Markov electoral equilibrium satisfying the conditions of Theorem 2 could be replicated by a (nonstationary) subgame perfect equilibrium of the representative voting game.

  6. This augmented game is a special case of the model from Duggan and Forand (2018), where we prove general existence of Markov electoral equilibria.

References

  • Bai JH, Lagunoff R (2011) On the Faustian dynamics of policy and political power. Rev Econ Stud 78(1):17–48


  • Banks J, Duggan J (2008) A dynamic model of democratic elections in multidimensional policy spaces. Q J Polit Sci 3:269–299


  • Bernhardt D, Dubey S, Hughson E (2004) Term limits and pork barrel politics. J Public Econ 88:2383–2422


  • Besley T, Coate S (1997) An economic model of representative democracy. Quart J Econ 112:85–114


  • Calvert R (1985) Robustness of the multi-dimensional voting model: candidate motivations, uncertainty and convergence. Am J Polit Sci 29:69–95


  • Downs A (1957) An economic theory of democracy. Harper and Bros, New York


  • Duggan J (2000) Repeated elections with asymmetric information. Econ Polit 12:109–136


  • Duggan J (2014) Majority voting over lotteries: conditions for existence of a decisive voter. Econ Bull 34:263–270


  • Duggan J, Forand JG (2018) Existence of Markov electoral equilibria. Unpublished paper

  • Duggan J, Forand JG (2019) Accountability via delegation in dynamic elections. Unpublished paper

  • Federgruen A (1978) On n-person stochastic games with denumerable state space. Adv Appl Probab 10:452–471


  • Forand JG (2014) Two-party competition with persistent policies. J Econ Theory 152:64–91


  • Osborne M, Slivinski A (1996) A model of competition with citizen candidates. Q J Econ 111:65–96


  • Van Weelden R (2013) Candidates, credibility, and re-election incentives. Rev Econ Stud 80(4):1622–1651



Author information


Corresponding author

Correspondence to Jean Guillaume Forand.

Additional information


J. G. Forand: This author acknowledges support from a SSHRC IDG.

Appendix

Proof of Theorem 1

Let \({\tilde{\pi }}\) be a strategy profile in the representative voting game. For all states s, types t and policies x, let \({\tilde{V}}_{t}(s,x)\) denote the payoff from policy x in state s to a representative voter of type t, which solves the recursive equation

$$\begin{aligned} {\tilde{V}}_{t}(s,x)=u_{t}(s,x)+\delta p(s|s,x){\tilde{V}}_{t}(s,x)+\delta \sum _{s'\not =s}p(s'|s,x)\int _{x'} {\tilde{V}}_{t}(s',x'){\tilde{\pi }}_{\kappa (s')}(dx'|s'), \end{aligned}$$
(2)

and let \({\tilde{V}}_t(s)=\int _x {\tilde{V}}_t(s,x){\tilde{\pi }}_{\kappa (s)}(dx|s)\). If furthermore the profile \({\tilde{\pi }}\) is a stationary Markov perfect equilibrium of the representative voting game, then, for all states s, \({\tilde{\pi }}_{\kappa (s)}(\cdot |s)\) puts probability one on solutions to

$$\begin{aligned} \max _{x\in Y(s)}{\tilde{V}}_{\kappa (s)}(s,x), \end{aligned}$$
(3)

so that

$$\begin{aligned} {\tilde{V}}_{\kappa (s)}(s)\geqslant & {} {\tilde{V}}_{\kappa (s)}(s,x) \end{aligned}$$
(4)

for all policies x, with equality if and only if x is a best response for \(\kappa (s)\) against \({\tilde{\pi }}\) in the representative voting game. To define the Markov electoral strategy \(\sigma =(\pi ,\rho )\), we specify that for all s and all t, \(\pi _{t}(\cdot |s)={\tilde{\pi }}_{\kappa (s)}(\cdot |s)\). Therefore, because \({\tilde{\pi }}_{\kappa (s)}(Y^c(s)|s)=0\) for all s, it follows that \(V_{t}^F(s,t')={\tilde{V}}_{t}(s)\) and that \(V^I_t(s,t',x)=V^C_t(s,t',x)\) for all states s, types t and \(t'\) and policies x. In particular, any voting strategy \(\rho\) is optimal.
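Although nothing in the argument relies on computation, the recursion (2) is a discounted fixed-point equation that can be solved by simple iteration. The following is a minimal numerical sketch in Python, assuming a toy finite specification: the states, policies, stage utilities, transition kernel and profile \({\tilde{\pi }}\) below are placeholders chosen only for illustration, not objects from the model above.

states = [0, 1]        # toy state space (placeholder)
policies = [0, 1]      # toy policy space, with Y(s) = policies in every state
delta = 0.9            # common discount factor

def u(t, s, x):        # stage utility of a type-t voter (placeholder)
    return -abs(t - x) - 0.1 * s

def p(s_next, s, x):   # transition kernel p(s'|s,x) (placeholder)
    stay = 0.7 if x == s else 0.3
    return stay if s_next == s else 1.0 - stay

# pi_tilde[s][x]: probability that the representative voter kappa(s) chooses x in state s
pi_tilde = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.5, 1: 0.5}}

def solve_V(t, n_iter=2000):
    """Iterate the right-hand side of (2); returns approximations of V~_t(s,x)."""
    V = {(s, x): 0.0 for s in states for x in policies}
    for _ in range(n_iter):
        # V~_t(s') is the integral of V~_t(s',x') against pi~_{kappa(s')}(dx'|s')
        V_bar = {s: sum(pi_tilde[s][x] * V[(s, x)] for x in policies) for s in states}
        V = {(s, x): u(t, s, x)
                     + delta * p(s, s, x) * V[(s, x)]
                     + delta * sum(p(sp, s, x) * V_bar[sp] for sp in states if sp != s)
             for s in states for x in policies}
    return V

print(solve_V(t=0))    # approximate values V~_0(s,x) for the toy example

Because \(\delta <1\), the iteration is a contraction in the supremum norm, so the loop converges to the unique solution of (2) for the given profile \({\tilde{\pi }}\).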

Assume further that the equilibrium profile \({\tilde{\pi }}\) is in pure strategies. We specify the voting strategy such that, for all states s and types t, \(\rho (s,t,x)=1\) if and only if \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\). To verify the optimality of the voting strategy, fix any state s, incumbent type t and policy x. We have that

$$\begin{aligned} V^I_{\kappa (s)}(s,t,x)-V^C_{\kappa (s)}(s,t,x)= & {} p(s|s,x)\left[ u_{\kappa (s)}(s,x)+\delta V^I_{\kappa (s)}(s,t,x)-V^F_{\kappa (s)}(s,t) \right] \\= & {} p(s|s,x)\left[ {\tilde{V}}_{\kappa (s)}(s,x)-{\tilde{V}}_{\kappa (s)}(s) \right] \\\leqslant & {} 0, \end{aligned}$$

with equality whenever \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\), as desired. The second equality follows from (8) (stated in the proof of Corollary 1 below) and the inequality follows from (4). To verify the optimality of policy strategies, normalize stage utilities such that \({\underline{u}}\leqslant u_t(s,x)\leqslant {\overline{u}}\) for all states s, types t and policies x, and assume that \(\delta b>{\overline{u}}-{\underline{u}}\). Fix any state s and type t. An office holder of type t obtains a payoff of at least \(\frac{{\underline{u}}+b}{1-\delta }\) if she implements policy \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\), while her payoff to implementing any policy \(x'\notin \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\) is at most \(\frac{{\overline{u}}}{1-\delta }+b\), so that choosing policy x is optimal, as desired.
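As a purely illustrative check on the role of the office-motivation assumption (with arbitrarily chosen numbers): the comparison of the two bounds above reduces to

$$\begin{aligned} \frac{{\underline{u}}+b}{1-\delta }>\frac{{\overline{u}}}{1-\delta }+b \quad \Longleftrightarrow \quad \delta b>{\overline{u}}-{\underline{u}}, \end{aligned}$$

so that, for instance, with \({\underline{u}}=0\), \({\overline{u}}=1\), \(\delta =0.9\) and \(b=2\), conforming to \({\tilde{\pi }}_{\kappa (s)}(\cdot |s)\) yields at least 20 while any deviation yields at most 12.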

Now assume that the equilibrium profile \({\tilde{\pi }}\) is in mixed strategies. For all states s, types t and policies \(x\notin \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\), we specify the voting strategy such that \(\rho (s,t,x)=0\). In particular, note that because \({\tilde{\pi }}_{\kappa (s)}(Y(s)|s)=1\) for all s, we have that, for any s and t, \(\rho (s,t,x)=0\) for all \(x\in Y^c(s)\cup Y^{d}(s)\). To construct the voting strategy for policies \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\), we assume that \(\delta b\) is large enough that

$$\begin{aligned} \frac{{\overline{u}}-{\underline{u}}}{1-\delta }< & {} \frac{\delta b}{1-\delta \epsilon }\min \left\{ \epsilon , 1-\epsilon \right\} , \end{aligned}$$
(5)

where we use our normalization of stage utilities. Note that given any \(0<\epsilon <1\), we can set b large enough that (5) is satisfied. Our goal is to define a continuous mapping \(f:[\epsilon ,1]\rightarrow [\epsilon , 1]\), the fixed point \(R^*\) of which will allow us to construct the voting strategy \(\rho (s,t,x)\) for \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\) such that, in each state, the ex ante probability of reelection of all politician types is \(R^*\), that is, for which \(\int _x \rho (s,t,x){\tilde{\pi }}_{\kappa (s)}(dx|s)=R^*\) for all s and t. To this end, let \(R\in [\epsilon ,1]\), and fix state s and type t.
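As a concrete illustration of (5) (with arbitrarily chosen numbers): taking \({\underline{u}}=0\), \({\overline{u}}=1\), \(\delta =0.9\) and \(\epsilon =1/2\), condition (5) reads

$$\begin{aligned} \frac{{\overline{u}}-{\underline{u}}}{1-\delta }=10<\frac{9b}{11}=\frac{\delta b}{1-\delta \epsilon }\min \left\{ \epsilon ,1-\epsilon \right\} , \end{aligned}$$

which holds for any \(b>110/9\approx 12.3\), illustrating that for a fixed \(\epsilon\) the condition requires only that the office benefit b be sufficiently large.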

Given any \(r\in [0,1]\), define \({\tilde{w}}_{s,t}(x, r)\) as the payoff to a type t politician in s if she chooses policy x and is reelected with probability r, given that she will be reelected with ex ante probability R in all future periods (because no politician commits to policies under \(\pi\), this mimics type t’s equilibrium payoff). That is,

$$\begin{aligned} {\tilde{w}}_{s,t}(x,r)= & {} u_t(s,x)+b +\delta \sum _{s'}p(s'|s,x)\left[ {\tilde{V}}_t(s')+\frac{rb}{1-\delta R} \right] , \end{aligned}$$

and note that \({\tilde{w}}_{s,t}(x,r)\) is continuous. For the restricted domain \(r\in [\epsilon , 1]\), define \({\underline{x}}_{s,t}(r)=\text{ argmin}_{x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))}{\tilde{w}}_{s,t}(x,r)\) and \({\underline{w}}_{s,t}(r)={\tilde{w}}_{s,t}({\underline{x}}_{s,t}(r),r)\), and note that \({\underline{w}}_{s,t}(r)\) is continuous (by the Maximum Theorem). Finally, for all \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\) and \(r\in [\epsilon , 1]\), define \(r_{s,t}(x,r)\) as the solution \(r'\in (0,r]\) to \({\underline{w}}_{s,t}(r)={\tilde{w}}_{s,t}(x,r')\). To show that \(r_{s,t}(x,r)\) is well-defined, first note that \({\tilde{w}}_{s,t}(x,r)\) is strictly increasing in r because \(\delta >0\), so that \(r_{s,t}(x,r)\leqslant r\) follows from the fact that \({\underline{w}}_{s,t}(r)\leqslant {\tilde{w}}_{s,t}(x,r)\). Second, note that

$$\begin{aligned} {\underline{w}}_{s,t}(r)-{\tilde{w}}_{s,t}(x,0)\geqslant & {} {\underline{w}}_{s,t}(\epsilon )-{\tilde{w}}_{s,t}(x,0) \\\geqslant & {} \frac{{\underline{u}}}{1-\delta } +b\left[ 1+\frac{\delta \epsilon }{1-\delta \epsilon }\right] -\left[ \frac{{\overline{u}}}{1-\delta }+b \right] \\= & {} \frac{\delta b \epsilon }{1-\delta \epsilon }-\frac{{\overline{u}} -{\underline{u}}}{1-\delta } \\> & {} 0, \end{aligned}$$

where the final inequality follows from (5), so that \(r_{s,t}(x,r)>0\) for all \(r\in [\epsilon , 1]\). Finally, note that \(r_{s,t}(x,r)\) is continuous.
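It may help to record an explicit expression for \(r_{s,t}(x,r)\), which is implicit in the argument above: since the transition probabilities sum to one, \({\tilde{w}}_{s,t}(x,r')={\tilde{w}}_{s,t}(x,0)+\frac{\delta r' b}{1-\delta R}\), so the defining equation \({\underline{w}}_{s,t}(r)={\tilde{w}}_{s,t}(x,r')\) is linear in \(r'\) and

$$\begin{aligned} r_{s,t}(x,r)=\frac{(1-\delta R)\left[ {\underline{w}}_{s,t}(r)-{\tilde{w}}_{s,t}(x,0)\right] }{\delta b}, \end{aligned}$$

from which the continuity of \(r_{s,t}(x,r)\) and the bound \(r_{s,t}(x,r)\leqslant r\) can be read off directly.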

Towards defining the mapping f, we first define a collection \(\{R'_{s,t}(R) \}_{s,t}\), where each \(R'_{s,t}(R)\in [\epsilon ,1]\). Fix state s and type t. If \(\int _x r_{s,t}(x,1){\tilde{\pi }}_{\kappa (s)}(dx|s)\leqslant R\), then we set

$$\begin{aligned} R'_{s,t}(R)= & {} \int _x r_{s,t}(x,1){\tilde{\pi }}_{\kappa (s)}(dx|s). \end{aligned}$$

To ensure that \(R'_{s,t}(R)\) is well-defined in this case, we need to verify that \(R'_{s,t}(R)\geqslant \epsilon\), for which it is sufficient to show that \(r_{s,t}(x,1)\geqslant \epsilon\) for all \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\). To see this, note that

$$\begin{aligned} {\underline{w}}_{s,t}(1)-{\tilde{w}}_{s,t}(x,\epsilon )\geqslant & {} \frac{{\underline{u}}}{1-\delta }+b\left[ 1+\frac{\delta }{1-\delta R}\right] -\left[ \frac{{\overline{u}}}{1-\delta }+ b\left[ 1+\frac{\delta \epsilon }{1-\delta R}\right] \right] \\\geqslant & {} \delta b\left[ \frac{1-\epsilon }{1-\delta \epsilon } \right] -\frac{{\overline{u}}-{\underline{u}}}{1-\delta } \\> & {} 0, \end{aligned}$$

where the final inequality follows from (5). Because \({\tilde{w}}_{s,t}(x,\cdot )\) is strictly increasing and \({\tilde{w}}_{s,t}(x,r_{s,t}(x,1))={\underline{w}}_{s,t}(1)>{\tilde{w}}_{s,t}(x,\epsilon )\), it follows that \(r_{s,t}(x,1)>\epsilon\), as required. If instead \(\int _x r_{s,t}(x,1){\tilde{\pi }}_{\kappa (s)}(dx|s)> R\), then we set \(R'_{s,t}(R)=R\). Furthermore, in this case there exists \(r^*\in [\epsilon ,1)\) such that

$$\begin{aligned} \int _x r_{s,t}(x,r^*){\tilde{\pi }}_{\kappa (s)}(dx|s)= R. \end{aligned}$$

To see this, note that by our previous results

$$\begin{aligned} \int _x r_{s,t}(x,\epsilon ){\tilde{\pi }}_{\kappa (s)}(dx|s)\leqslant & {} \epsilon \\\leqslant & {} R, \end{aligned}$$

so that the claim follows from the continuity of \(r_{s,t}(x,r)\) and the intermediate value theorem. Finally, we define the continuous mapping \(f:[\epsilon ,1]\rightarrow [\epsilon , 1]\) such that \(f(R)=\inf _{s,t}R'_{s,t}(R)\) for all \(R\in [\epsilon , 1]\). This mapping has a fixed point \(R^*=f(R^*)\) by Brouwer’s fixed point theorem, and, for all states s and types t,

$$\begin{aligned} R'_{s,t}(R^*)\leqslant R^*\leqslant R'_{s,t}(R^*), \end{aligned}$$

where the first inequality follows by construction of \(R'_{s,t}(R^*)\) and the second inequality follows from the fact that \(R^*\) is a fixed point of f. Therefore, \(R'_{s,t}(R^*)=R^*\) for all s and t.

Finally, we can use the fixed point \(R^*\) of the mapping f to back out the voting strategy \(\rho (s,t,x)\) for all states s, types t and policies \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\). Note that, by construction, \(\int _x r_{s,t}(x,1){\tilde{\pi }}_{\kappa (s)}(dx|s)\geqslant R^*\) for all states s and types t, so that for each s and t there exists \(r^*\in [\epsilon , 1]\) such that \(\int _x r_{s,t}(x,r^*){\tilde{\pi }}_{\kappa (s)}(dx|s)= R^*\). Therefore, we define \(\rho (s,t,x)=r_{s,t}(x,r^*)\).

To verify the optimality of the policy strategy, note that, by construction, politician t is indifferent between all \(x\in \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\), which all yield payoff \({\underline{w}}_{s,t}(r^*)\). Furthermore, by choosing policy \(x\notin \text{ supp }({\tilde{\pi }}_{\kappa (s)}(\cdot |s))\), the politician t can obtain a payoff of at most

$$\begin{aligned} \frac{{\overline{u}}}{1-\delta }+b< & {} {\underline{w}}_{s,t}(\epsilon )\\\leqslant & {} {\underline{w}}_{s,t}(r^*), \end{aligned}$$

where the first inequality follows from (5) and the final inequality follows because, by construction, \(r^*\geqslant \epsilon\). Therefore, there is no profitable deviation in s for politician t to a policy outside the support of \({\tilde{\pi }}_{\kappa (s)}(\cdot |s)\). □

Proof of Theorem 2

Fix a convergent Markov electoral equilibrium \(\sigma =(\pi ,\rho )\). A first claim is that the symmetry of policy strategies and the optimality of the voting strategy \(\rho\) imply that, for all states s, types t and \(t'\) and policies x, \(V_{\kappa (s)}^F(s,t')=V_{\kappa (s)}^F(s,t)\), \(V_{\kappa (s)}^I(s,t,x)=V_{\kappa (s)}^I(s,t',x)\), \(V_{\kappa (s)}^C(s,t,x)=V_{\kappa (s)}^C(s,t',x)\) and

$$\begin{aligned}&\rho (s,t,x)V_{\kappa (s)}^I(s,t,x)+(1-\rho (s,t,x))V_{\kappa (s)}^C(s,t,x) \\&\quad =\rho (s,t',x)V_{\kappa (s)}^I(s,t',x)+(1-\rho (s,t',x))V_{\kappa (s)}^C(s,t',x). \end{aligned}$$

Now suppose, towards a contradiction, that there exists a policy \(x\in X(s)\) such that, for all types t,

$$\begin{aligned}&{u_{\kappa (s)}(s,x)+ \delta \left[ \rho (s,t,x)V_{\kappa (s)}^I(s,t,x)+(1-\rho (s,t,x))V_{\kappa (s)}^C(s,t,x)\right] } \nonumber \\&\quad > \int _{x'} \left[ u_{\kappa (s)}(s,x')+\delta \left[ \rho (s,t,x')V_{\kappa (s)}^I(s,t,x')+(1-\rho (s,t,x'))V_{\kappa (s)}^C(s,t,x') \right] \right] \pi _t(dx'|s)\nonumber \\&\quad = V_{\kappa (s)}^F(s,t), \end{aligned}$$
(6)

and notice that \(x\in Y(s)\) satisfies (6) if and only if \(\varphi (x)\in Y^c(s)\) satisfies (6). Similarly, because the equilibrium is convergent, \(x\in Y(s)\) satisfies (6) if and only if \(\xi (s)\in Y^{d}(s)\) satisfies (6). Correspondingly, in the sequel we assume that \(x\in Y^c(s)\). If a politician of type \(\kappa (s)\) commits to policy x in state s, it follows that

$$\begin{aligned}&{V_{\kappa (s)}^I(s,\kappa (s),x)-V^C_{\kappa (s)}(s,\kappa (s),x)} \\&\quad =p(s|s,x)\left[ u_{\kappa (s)}(s,x)+\delta V_{\kappa (s)}^I(s,\kappa (s),x) -V_{\kappa (s)}^F(s,\kappa (s))\right] \\&\quad \geqslant \delta p(s|s,x)(1-\rho (s,\kappa (s),x))\left[ V_{\kappa (s)}^I(s,\kappa (s),x)-V^C_{\kappa (s)}(s,\kappa (s),x) \right] , \end{aligned}$$

where the inequality, which is strict because \(p(s|s,x)>0\), follows from (6). Therefore, because \(\delta p(s|s,x)(1-\rho (s,\kappa (s),x))<1\), we have that

$$\begin{aligned} V_{\kappa (s)}^I(s,\kappa (s),x)> V^C_{\kappa (s)}(s,\kappa (s),x), \end{aligned}$$

and hence \(\rho (s,\kappa (s),x)=1\).
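Spelling out the step from (6) to the inequality above: evaluating (6) at \(t=\kappa (s)\) and writing \(\rho =\rho (s,\kappa (s),x)\), \(V^I=V_{\kappa (s)}^I(s,\kappa (s),x)\), \(V^C=V^C_{\kappa (s)}(s,\kappa (s),x)\) and \(V^F=V_{\kappa (s)}^F(s,\kappa (s))\), we have

$$\begin{aligned} u_{\kappa (s)}(s,x)+\delta V^I-V^F>\delta V^I-\delta \left[ \rho V^I+(1-\rho )V^C\right] =\delta (1-\rho )\left[ V^I-V^C\right] , \end{aligned}$$

and multiplying through by \(p(s|s,x)>0\), using the first equality in the display above, and rearranging gives \(\left[ V^I-V^C\right] \left[ 1-\delta p(s|s,x)(1-\rho )\right] >0\), as claimed.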

To complete the proof, suppose that the equilibrium \(\sigma\) is reelection-balanced with ex ante reelection probability \(R^*\). Therefore, the payoff to politician \(\kappa (s)\) from choosing according to policy strategy \(\pi _{\kappa (s)}(\cdot |s)\) in state s is

$$\begin{aligned} V_{\kappa (s)}^F(s,\kappa (s)) +b \left[ 1+\frac{\delta R^*}{1-\delta R^*}\right] . \end{aligned}$$
(7)

If instead politician \(\kappa (s)\) chooses deviating policy x in state s, her payoff is

$$\begin{aligned} u_{\kappa (s)}(s,x)+\delta V^I_{\kappa (s)}(s,\kappa (s),x) +b \left[ 1+\frac{\delta }{1-\delta p(s|s,x)}\left[ \frac{1-\delta p(s|s,x)R^*}{1-\delta R^*} \right] \right] , \end{aligned}$$

which, using (6), is strictly higher than her equilibrium payoff from (7), yielding the desired contradiction. □
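To unpack the office-benefit terms above (a bookkeeping step only): if an office holder is reelected with probability \(R^*\) in every period, her expected discounted office benefits are

$$\begin{aligned} b\left[ 1+\delta R^*+(\delta R^*)^2+\cdots \right] =\frac{b}{1-\delta R^*}=b\left[ 1+\frac{\delta R^*}{1-\delta R^*}\right] , \end{aligned}$$

which is the term appearing in (7). One consistent reading of the deviation payoff is that, after committing to x, the politician is reelected with probability one as long as the state remains s and with ex ante probability \(R^*\) thereafter: writing \(q=p(s|s,x)\) and W for her expected discounted office benefits under the deviation,

$$\begin{aligned} W=b+\delta qW+\delta (1-q)\frac{b}{1-\delta R^*}\quad \Longrightarrow \quad W=b\left[ 1+\frac{\delta }{1-\delta q}\left[ \frac{1-\delta qR^*}{1-\delta R^*}\right] \right] , \end{aligned}$$

matching the bracketed term in the deviation payoff.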

Proof of Corollary 1

Fix a convergent and reelection-balanced Markov electoral equilibrium \(\sigma\) with reelection probability \(R^*\), and consider the profile \({\tilde{\pi }}\) in the representative voting game defined such that \({\tilde{\pi }}_{\kappa (s)}(\cdot |s)=\pi _{t}(\cdot |s)\) for all s and arbitrary t. First, if either (i) the policy profile \(\pi\) is pure or (ii) politicians never commit to policies, i.e., \(\pi _t(Y^c(s)|s)=0\) for all s and t, then we have that, for all states s, types t and \(t'\) and policies x,

$$\begin{aligned} V^F_{t}(s,t')= & {} {\tilde{V}}_t(s), \text{ and } \nonumber \\ V^I_t(s,t',x)= & {} p(s|s,x){\tilde{V}}_t(s,x)+\sum _{s'\not =s}p(s'|s,x){\tilde{V}}_t(s'), \end{aligned}$$
(8)

where payoffs \({\tilde{V}}_{t}(s,x)\) and \({\tilde{V}}_{t}(s)\) are defined as in (2). Second, by Theorem 2, and invoking the optimality of the voting strategy \(\rho\), we have that \(\pi _t(\cdot |s)\) puts probability one on solutions to

$$\begin{aligned} \max _{x\in Y(s)}u_{\kappa (s)}(s,x)+\delta \max \left\{ V_{\kappa (s)}^I(s,t,x),V_{\kappa (s)}^C(s,t,x)\right\} . \end{aligned}$$
(9)

Third, in both cases (i) and (ii), we have that \(V_{\kappa (s)}^I(s,t,x)=V_{\kappa (s)}^C(s,t,x)\), so that any solution to (9) must also be a solution to

$$\begin{aligned} \max _{x\in Y(s)}u_{\kappa (s)}(s,x)+\delta V_{\kappa (s)}^I(s,t,x). \end{aligned}$$
(10)

Finally, that \({\tilde{\pi }}_{\kappa (s)}(\cdot |s)\) must put probability one on solutions to (3) follows by substituting (8) into (10), so that the profile \({\tilde{\pi }}\) is a stationary Markov perfect equilibrium of the representative voting game. □
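For completeness, the substitution behind the final step is as follows: combining (8) with the recursion (2),

$$\begin{aligned} u_{\kappa (s)}(s,x)+\delta V_{\kappa (s)}^I(s,t,x)=u_{\kappa (s)}(s,x)+\delta p(s|s,x){\tilde{V}}_{\kappa (s)}(s,x)+\delta \sum _{s'\not =s}p(s'|s,x){\tilde{V}}_{\kappa (s)}(s')={\tilde{V}}_{\kappa (s)}(s,x), \end{aligned}$$

so that the objective in (10) coincides with the objective in (3).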


About this article


Cite this article

Duggan, J., Forand, J.G. Representative Voting Games. Soc Choice Welf 56, 443–466 (2021). https://doi.org/10.1007/s00355-020-01283-x
