Abstract
Models of adaptive bet-hedging commonly adopt insights from Kelly’s famous work on optimal gambling strategies and the financial value of information. In particular, such models seek evolutionary solutions that maximize long-term average growth rate of lineages, even in the face of highly stochastic growth trajectories. Here, we argue for extensive departures from the standard approach to better account for evolutionary contingencies. Crucially, we incorporate considerations of volatility minimization, motivated by interim extinction risk in finite populations, within a finite time horizon approach to growth maximization. We find that a game-theoretic competitive optimality approach best captures these additional constraints and derive the equilibrium solutions under straightforward fitness payoff functions and extinction risks. We show that for both maximal growth and minimal time relative payoffs, the log-optimal strategy is a unique pure strategy symmetric equilibrium, invariant with evolutionary time horizon and robust to low extinction risks.
1 Introduction
Kelly’s work on optimal gambling strategies and the value of side information was arguably the first convincing attempt at applying concepts from information theory to analysis in a different field (Kelly 1956). This work was the precursor to growth-optimal portfolio theory, which has extended the basic ideas to the realm of capital markets (Cover and Thomas 2006). There has recently been a resurgence of interest in employing insights from optimal gambling theory in models of adaptive bet-hedging under fluctuating environments, where close analogies between the economic and biological settings have been convincingly made apparent (Bergstrom 2014; Rivoire and Leibler 2011; Donaldson-Matasci et al. 2010).
Biological bet-hedging was originally proposed to explain the observation of un-germinated seeds of annual plants (Cohen 1966). This strategy involves the variable phenotypic expression of a single genotype, rather than being the result of genetic polymorphism, although it is difficult to empirically determine whether observed phenotypic diversity in a population arises from randomization by identical genomes or from an underlying polymorphism (Seger and Brockmann 1987). Indeed, evolutionary biologists have long acknowledged that in a stochastically variable environment, natural selection is likely to favor a gene that randomizes its phenotypic expression (Bergstrom 2014). Recent work has revealed a variety of potential instances of bet-hedging populations: delayed germination in desert winter annual plants that meets postulated criteria of adaptive bet-hedging in a variable environment (Gremer and Venable 2014), bacterial persistence in the presence of antibiotics that appears to constitute an adaptation tuned to the distribution of environmental change (Kussell et al. 2005), flowering times in Lobelia inflata that point to flowering being a conservative bet-hedging strategy (Simons and Johnston 2003), or even bet-hedging as a behavioral phenotype, such as the case of nut hoarding in squirrel populations in anticipation of short or long winters (Bergstrom 2014).
Notwithstanding these empirical findings, identifying actual cases of adaptive bet-hedging in the wild remains elusive. As Seger and Brockmann (1987) noted more than three decades ago, it is in general difficult to determine whether observed diversity of behavior in a population arises from randomization by genetically identical individuals or from genetic heterogeneity within co-located individuals optimized for different environmental conditions. Moreover, phenotypic heterogeneity can arise within genetically homogeneous populations as a form of specialization in a stable environment through stochastic gene expression, positive feedback loops, or asymmetrical cell division, all processes where bet-hedging is not at play (Rubin and Doebeli 2017). These difficulties provide further impetus for constructing better and more elaborate models to test against the data.
Of particular note in classic bet-hedging models is the adoption from economic theory of asymptotic growth rate optimality as the target function for fitness maximization strategies, where growth in wealth is analogous to growth in lineage size. Indeed, since evolution proceeds by shifting gene frequencies over generations, with frequency changes being multiplicative, long-term fitness is commonly measured by geometric mean fitness across generations (Hopper 2018). At the same time, it is also widely acknowledged that long-run growth rate is not a valid measure of fitness under fluctuating environments, such as in the case of bet-hedging populations (Lande 2007).
The resulting intrinsic unpredictability has led some researchers to formulate a probabilistic perspective that integrates the various effects of uncertainty on natural selection (Yoshimura et al. 2009). The applicability of geometric mean fitness has also come into question under finite-population models, where the probability of fixation provides additional and sometimes more suitable information than the geometric mean fitness (Proulx and Day 2001), and in periodically cycling selection regimes, where evolutionary success depends on the length of the cycle and the strength of selection (Ram et al. 2018). Moreover, both gambling and bet-hedging models targeting optimal growth rate implicitly assume an infinite time horizon in formulating the geometric average, thereby ignoring the finiteness of actual horizons over which both economic and evolutionary processes ultimately act. The problem is further amplified when interim extinction risk is taken into account, especially under finite-population models. Highly stochastic lineage growth trajectories are at risk of large “drawdowns” that may pull the population below some extinction threshold, despite possessing a high asymptotic growth rate. Here we aim to incorporate considerations of finite evolutionary horizons and extinction risk in the search for adaptive optimality in bet-hedging models.
1.1 Background: The Standard Model
Most adaptive bet-hedging models are largely based on the classic horse race gambling model associated with Kelly (1956), where the biological counterpart is a lineage apportioning bets on several possible environments. Assume that k horses run in a race, and let horse \(X_i\) win with probability \(p_i\). If horse \(X_i\) wins, the odds are \(o_i\) for 1. A gambler wishes to apportion his bankroll among the horses in fractions \(0<f_i\le 1\), such that \(\sum f_i=1\), and to participate in indefinitely repeated races \(n\rightarrow \infty \). How best to apportion the bankroll each time? In this setting, wealth is a discrete-time stochastic process over n periods,
\[W_n=\prod_{t=1}^{n} W(X_t),\]
where \(W(X)=f(X)O(X)\) is the random factor by which the gambler’s wealth is multiplied when horse X wins. More explicitly,
\[W_n(f)=\prod_{t=1}^{n} f(X_t)\,O(X_t).\]
Kelly’s first insight was that simply maximizing expected wealth (for any time horizon n) leads to a degenerate solution in which one bets everything on a single horse (the one with the highest expected return \(p_i o_i\)), with a consequent chance of total ruin once that horse loses a race. Therefore, Kelly proposed maximizing the asymptotic growth rate [a rigorous justification was later provided by Breiman (1961)]. By the law of large numbers, random wealth may be expressed as
\[W_n \doteq 2^{\,nG(f)},\]
with \(\doteq\) denoting equality to first order in the exponent,
where
\[G(f)=E[\log W(X)]=\sum _{i=1}^{k} p_i \log (f_i o_i)\]
is the asymptotic exponential growth rate. If the gambler stakes his entire wealth each time, i.e., \(\sum f_i=1\), then
\[G(f)=\sum _{i=1}^{k} p_i \log o_i - H(p) - D(p\Vert f)\]
is maximized (a convex nonlinear optimization) at “proportional gambling” \(f=p\), where \(D(p\Vert f)\) is minimized, without regard to the actual odds provided by the bookie.
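The optimality of proportional gambling can be checked numerically. The sketch below (with made-up win probabilities and odds) evaluates \(G(f)=\sum _i p_i\log (f_i o_i)\) and verifies that no randomly drawn full-stake strategy beats \(f=p\):

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_rate(f, p, o):
    """Asymptotic growth rate G(f) = sum_i p_i log(f_i o_i) of a full-stake strategy."""
    f, p, o = (np.asarray(x, dtype=float) for x in (f, p, o))
    return float(np.sum(p * np.log(f * o)))

# Hypothetical 3-horse race: win probabilities p and o_i-for-1 odds.
p = np.array([0.5, 0.3, 0.2])
o = np.array([2.5, 3.0, 6.0])

# Proportional gambling f = p maximizes G, regardless of the odds o.
best = growth_rate(p, p, o)
for _ in range(10_000):
    f = rng.dirichlet(np.ones(3))  # random alternative strategy with sum(f) = 1
    assert growth_rate(f, p, o) <= best + 1e-9
print(best)
```

Note that the maximizer does not depend on the odds \(o\); the odds only shift the achievable growth rate up or down.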
Indeed, the notion of proportional gambling, made famous by Kelly’s treatment, has found its way into classic models of diversified bet-hedging. In such models it is often assumed that “appropriate phenotypes are produced in proportion to the likelihood of each environment” (Hopper 2018) and that consequently “the classical bet-hedging prediction [is] that the optimum probability for employing a strategy is approximately equal to the probability that the strategy will be useful” (King and Masel 2007). Here we follow recent approaches that extend the standard model to non-lethal environments via a full-fitness matrix, such that this notion is no longer directly applicable.
Breiman (1961) was the first to show that the Kelly solution is optimal in two convincing ways: [a] that given a Kelly strategy \(\phi ^*\) and any other “essentially different” strategy \(\phi \) (not necessarily a fixed fractional betting strategy),
\[\lim _{n\rightarrow \infty }\frac{W_n(\phi )}{W_n(\phi ^*)}=0 \quad \text {a.s.},\]
and [b] that it minimizes the expected time to reach asymptotically large wealth goals. Moreover, this strategy is myopic in the sense that at each iteration of the race one only needs to consider the presently given parameters (Hakansson 1971). However, Kelly strategies may also yield tremendous drawdowns, a problem widely recognized in the gambling community, such that optimal Kelly is often viewed as “too risky”; in practice gamblers and investors use “fractional Kelly,” which deviates from the optimal solution but reduces the effective variance of the stochastic growth (Fig. 1). In the biological framework, this can lead to abrupt extinction events in finite (especially small) populations with highly stochastic lineage growth trajectories. A further complication is that the underlying probability distributions are merely estimated from past data and model assumptions, often leading to over-betting and increased risk (MacLean et al. 2011).
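The growth–volatility trade-off behind fractional Kelly is easy to see in simulation. The sketch below uses a repeated even-money bet with an assumed win probability of 0.6, for which full Kelly stakes the fraction \(f^*=2p-1=0.2\), and compares full against half Kelly:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(frac, n=1000, trials=2000, p=0.6):
    """Log-wealth trajectories for a repeated even-money bet staking `frac` of wealth."""
    wins = rng.random((trials, n)) < p
    steps = np.where(wins, np.log1p(frac), np.log1p(-frac))
    return steps.cumsum(axis=1)

kelly = 2 * 0.6 - 1          # full Kelly f* = 2p - 1 = 0.2 for an even-money bet
full = simulate(kelly)
half = simulate(kelly / 2)   # "fractional Kelly": lower growth, much lower volatility

print(full[:, -1].mean(), half[:, -1].mean())  # mean end log-wealth: full Kelly is higher
print(full[:, -1].std(), half[:, -1].std())    # spread: half Kelly is far less volatile
```

Half Kelly roughly halves the standard deviation of log-wealth while giving up only part of the growth rate, which is why it is popular in practice despite being growth-suboptimal.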
In this work, we extend the existing models to incorporate both interim extinction risk and finite evolutionary time horizons within a bet-hedging framework. This requires re-conceptualizing geometric mean fitness for such highly stochastic growth scenarios. We ultimately derive fitness functions that better account for such conditions where the fluctuating environment is strongly coupled to both long and short-term growth and locate optimal stable equilibria.
2 Methods
2.1 The Full-Fitness Matrix Model
We assume environments are i.i.d. random events across generations, multinomially distributed (with some results generalized to non-identically distributed environments). Individuals within lineages have a static full-fitness matrix \([O_{ij}]\) in which non-lethal environments have low but generally nonzero fitness (Donaldson-Matasci et al. 2010; Rivoire and Leibler 2011). We adopt a finite-population model where lineages start off with some initial population size \(W_0\), implicitly assumed higher than some bet-hedging evolutionary threshold (King and Masel 2007). Lineages then evolve strategies to randomize individual phenotypes toward maximizing growth across finite horizons in the face of interim extinction threats. More formally, with k environments and phenotypes,
the general model of lineage growth trajectory across n generations under strategy f is a random process,
\[W_n=W_0\prod _{t=1}^{n} W(X_t),\]
where
\[W(i)=\sum _{j=1}^{k} f_j O_{ij},\]
with off-diagonal values \(O_{ij}\), \(j\ne i\), reflecting the lower fitness for non-matching environments, and where all individuals in a lineage are bet-hedging,
\[\sum _{j=1}^{k} f_j=1, \quad f_j\ge 0.\]
Finally, we use a straightforward formulation of the growth rate, \(W_n^{1/n}\), which is a random variable for any finite horizon.
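As a minimal illustration of the model, the following sketch simulates one lineage trajectory under an assumed \(2\times 2\) full-fitness matrix (all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical full-fitness matrix O for k = 2: rows are environments, columns are
# phenotypes; off-diagonal entries are low but nonzero (non-lethal mismatch).
O = np.array([[2.0, 0.5],
              [0.3, 1.8]])
p = np.array([0.6, 0.4])     # environment probabilities

def trajectory(f, n=500, w0=100.0):
    """One realization of lineage size W_n under phenotype-allocation strategy f."""
    envs = rng.choice(2, size=n, p=p)
    factors = O[envs] @ f    # per-generation multiplier sum_j f_j O_{ij}
    return w0 * np.cumprod(factors)

w = trajectory(np.array([0.7, 0.3]))
print((w[-1] / 100.0) ** (1 / 500))  # finite-horizon growth rate, a random variable
```

Rerunning with different seeds makes the point of the section concrete: the realized finite-horizon growth rate varies from run to run even though the strategy is fixed.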
We first derive the asymptotic growth rate optimal “Kelly” solution for this setting (\(f^{\mathrm{Kelly}}\)) with a corresponding bet-hedging region of the environment simplex (“Appendix A”). Relaxing the assumption of i.i.d. environments, we derive the static Kelly solution for the case of nonstationary environments—where environments are independent but not identically distributed across generations (“Appendix B”). While under nonstationary environments an optimal growth rate is reached with a dynamic myopic strategy, we focus here on a static strategy since adaptations effectively stabilize across time spans much longer than single generations, such that from evolutionary considerations dynamic strategies are not likely to emerge. Alternative models of fluctuating environments, such as Markov chains with underlying switching probabilities (e.g., Li et al. 2017), are not pursued here and left for future work. Finally, we identify a “reference” strategy that admits deterministic growth trajectories, namely the “Dutch book” solution (where the variance of the finite-time growth rate is zero), and characterize the consequent loss of growth incurred by exchanging opportunity for certainty (“Appendix C”).
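The Dutch book reference strategy can be computed directly: choosing f so that the per-generation multiplier is identical in every environment makes the growth trajectory deterministic. A sketch, under an assumed \(2\times 2\) fitness matrix (the construction requires the resulting f to be nonnegative, which holds here):

```python
import numpy as np

# "Dutch book" strategy for an assumed 2x2 full-fitness matrix: choose f with
# O f = c * 1 and sum(f) = 1, so the multiplier is the same in every environment
# and the growth-rate variance is exactly zero.
O = np.array([[2.0, 0.5],
              [0.3, 1.8]])

x = np.linalg.solve(O, np.ones(2))  # f proportional to O^{-1} 1 (assumes x >= 0)
f_dutch = x / x.sum()
c = 1.0 / x.sum()                   # the certain per-generation growth factor

multipliers = O @ f_dutch
print(f_dutch, c)                   # both environment multipliers equal c
```

For the matrix above the certain growth factor works out to 1.15 per generation, below what the log-optimal strategy achieves in expectation—the loss of growth that is the price of certainty.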
2.2 Relative Fitness Payoff Function
We now wish to go beyond the standard approach of targeting the optimization of the asymptotic growth rate as undertaken in the previous section—to incorporate finite evolutionary horizons and extinction risk considerations. For the sake of simplicity, we confine our model here to the case of \(k=2\) environments and phenotypes (so that the two environments occur with probability p and \(1-p\)). To motivate the shift to a finite-horizon framework, we first highlight an important property of our stochastic growth model, known also in portfolio theory (Markowitz 2006). We prove that for any two essentially different strategies, the maximal time \(n_0\) one lineage “dominates” the other is finite for every realization of lineage trajectory pair (“Appendix D”). The exponentially diminishing histogram of last intersection times of given two growth strategies in Fig. 2b (with a single instance of two trajectories for illustration in Fig. 2a) demonstrates this phenomenon.
The sustained variance and high skewness of the growth rate distribution under any finite horizon necessitates a comparative approach in formulating a fitness payoff function (in fact, the growth rate is asymptotically log-normal, as shown in “Appendix E”). Consider a relative fitness measure for two different lineage strategies f and g: the probability that a random trajectory of a lineage with strategy f exceeds the random trajectory of a lineage with strategy g (given time horizon n),
\[P\bigl (W_n(f)>W_n(g)\bigr ), \qquad (3)\]
with an induced relation defined by
\[f \succeq _n g \iff P\bigl (W_n(f)>W_n(g)\bigr )\ge \tfrac{1}{2}.\]
We may interpret this probabilistic relation between two strategies as relative fitness. Note that since realizations of \(W_n (f)\) and \(W_n (g)\) stem from the same underlying stochastic environmental sequence, they will generally be highly correlated (with the corresponding logarithmic growth rates in fact perfectly correlated, as shown in “Appendix F”). Consequently, the probability in Eq. (3) must be derived from their joint distribution rather than simply from marginal distributions. Figure 3 depicts realizations of the log growth rates of \(W_n (f)\) and \(W_n (g)\) as histogram distributions for some choice of strategies f and g, and some finite evolutionary horizon n. Asymptotically with time horizon n, such distributions approach normality with variance going to zero (“Appendix E”).
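A Monte Carlo sketch of this relative payoff (with assumed two-environment fitness values) makes the role of the joint distribution explicit: one shared set of environmental realizations drives both strategies, exactly as in the model.

```python
import numpy as np

rng = np.random.default_rng(2)

p, n, trials = 0.6, 50, 20000   # two environments, finite horizon n

def log_wn(f, envs):
    """log W_n for a two-phenotype strategy (f, 1 - f); assumed fitnesses 2.0 / 0.5."""
    m = np.where(envs == 0, 2.0 * f + 0.5 * (1 - f), 0.5 * f + 2.0 * (1 - f))
    return np.log(m).sum(axis=-1)

# The same environment realizations drive both strategies, so the payoff
# P(W_n(f) > W_n(g)) is estimated from the joint, not the marginal, distributions.
envs = (rng.random((trials, n)) < (1 - p)).astype(int)
f, g = 0.6, 0.9
payoff = float(np.mean(log_wn(f, envs) > log_wn(g, envs)))
print(payoff)   # relative fitness of f over g at horizon n
```

Because the log growth rates are driven by the same binomial environment counts, the comparison reduces to a one-dimensional event on those counts, which is why the marginal distributions alone would be misleading.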
A few properties of the order induced by this relation are worth highlighting: [a] it is complete, since any two \(W_n\) are comparable under the relation; [b] it is transitive for any n, and consequently a preorder; and [c] its maximal element is \(W_n^* (f^{\mathrm{Kelly}})\), such that both the order induced by \(E[\log W_n (f)]\) and the order induced by the payoff \(P(W_n (f)>W_n (g))\) form complete preorders and have the same maximal element (“Appendix G”). Despite these beneficial properties, given any “vanilla” strategy g and time horizon n, the strategy that maximizes the payoff function,
\[\mathop{\mathrm{arg\,max}}_{f} P\bigl (W_n(f)>W_n(g)\bigr ),\]
will vary as a function of g and n (demonstrated by counterexamples), and in particular will not necessarily be \(f^{\mathrm{Kelly}}\). This implies that a wildtype lineage with strategy g different from \(f^{\mathrm{Kelly}}\) will eventually be overtaken by some mutant invasive lineage with a strategy that maximizes this payoff function, a process that may potentially remain in recurrent flux, with invasive lineages replacing a wildtype lineage.
2.3 Competitive Optimality with Risk
To see whether evolutionary stable optima may also emerge, we develop a game-theoretic approach. Players are lineages with particular bet-hedging strategies and random initial population size. Lineages interact by competing over a common niche subject to the same environmental fluctuations. This setup is in some contrast to more standard evolutionary game theory settings, where agents are organisms rather than lineages and where the notion of an iterated strategy is prominent, but maintains the central aspect of interactions formalized in a payoff function (e.g., Stollmeier and Nagler 2018). A lineage survives the competitive encounter by avoiding extinction (defined in what follows) while exceeding its opponent in size over a given time horizon. This outcome is determined by a game-theoretic deterministic payoff function, modified from Eq. (3) to incorporate an extinction threshold and randomized initial lineage size. Ultimately, we are searching for Nash equilibria.
This approach is motivated by the classic work on time-invariant game-theoretic competitive optimality, within the scope of growth-optimal portfolio theory (Bell and Cover 1980, 1988). Bell and Cover consider a competitive setting for a stock portfolio model under any finite number of investment periods and prove that for any relative wealth payoff \(E[\phi (UW_1/VW_2)]\) and portfolio wealths \(W_1\) and \(W_2\), there are conditions on the function \(\phi \) such that the log-optimal Kelly portfolio is a solution to the game, given initial randomizations U and V (independent and of equal expectation). In particular, \(\phi (x)=\chi _{[1,\infty )}(x)\) results in the payoff \(\mathbb {P}(UW_1\ge VW_2)\) with the log-optimal portfolio as a game-theoretic solution, given some initial fair randomizations. This additional fair randomization reduces the effect of small differences in end wealth, thus avoiding unwanted cases where the optimal strategy is beaten by a small amount most of the time (Cover and Thomas 2006).
2.4 The Payoff Function in a Game-Theoretic Setting
For any time horizon n and extinction threshold d, we define a (deterministic) payoff function: the probability that a random trajectory of a lineage with strategy f exceeds the random trajectory of a lineage with strategy g without first going extinct (given time horizon n),
\[M_n(f,g)=P\Bigl (u_0 W_n(f)>v_0 W_n(g),\ \min _{0\le t\le n} u_0 W_t(f)>d\Bigr ),\]
with initial-population-size randomizations \(u_0\) and \(v_0\), independent and of the same mean but possibly of different distribution classes.
This payoff function induces a symmetric, discrete-valued, non-constant-sum game, although it is conceptually close to zero-sum, with \(M_n(f,g)+M_n(g,f)<1\) owing to extinction events (“Appendix H”). Crucially, our payoff matrix is finite since it reflects the finitely many strategies possible in a finite-population model—there can only be N different sized partitions of a population of size N in betting on two environments (under \(k=2\) environments and phenotypes). A low-resolution toy-model instance of the payoff matrix is depicted in Fig. 4.
Our goal is to identify pure strategy Nash equilibria reflecting the evolutionary solutions to competitive bet-hedging. In particular, we would like to explore the conditions under which a bet-hedging setting admits a symmetric equilibrium and whether it is unique. In “Appendix I,” we prove that for an infinite-size payoff matrix (i.e., continuous strategies) the log-optimal strategy is the solution to this game, invariant with the choice of time horizon. Moreover, any finite matrix representing the N strategies possible for a lineage of finite size N necessarily also admits a solution, as illustrated in Fig. 5. This solution is the strategy closest to the log-optimal strategy under the finite resolution framework, such that it converges to it asymptotically with N (“Appendix L”). Finally, under a nonstationary environment model the log-optimal strategy again emerges as the equilibrium static strategy—even given short time horizons (“Appendix M”).
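A toy version of this game can be solved numerically. The sketch below (assumed fitness values, fair uniform initial randomizations in the spirit of Bell and Cover, and no extinction threshold) estimates the payoff matrix over a finite strategy grid and scans for symmetric pure equilibria:

```python
import numpy as np

rng = np.random.default_rng(3)

p, n, trials = 0.6, 30, 20000
grid = np.round(np.linspace(0.05, 0.95, 19), 2)   # finite strategy set

def end_log(f, envs):
    """log W_n under strategy (f, 1 - f); assumed fitnesses 2.0 (match) / 0.5 (miss)."""
    m = np.where(envs == 0, 2.0 * f + 0.5 * (1 - f), 0.5 * f + 2.0 * (1 - f))
    return np.log(m).sum(axis=-1)

envs = (rng.random((trials, n)) < (1 - p)).astype(int)
logw = {f: end_log(f, envs) for f in grid}

# Fair initial randomizations (uniform, equal means) break ties and penalize
# strategies that would otherwise win only by vanishing margins.
u, v = rng.uniform(0.0, 2.0, trials), rng.uniform(0.0, 2.0, trials)

# M[i, j] = P(row strategy's randomized end size exceeds the column strategy's);
# extinction thresholds are omitted in this minimal sketch.
M = np.array([[np.mean(np.log(u) + logw[f] > np.log(v) + logw[g]) for g in grid]
              for f in grid])

# A symmetric pure equilibrium: no row deviation beats the diagonal payoff.
sym_eq = [float(g) for i, g in enumerate(grid) if np.all(M[:, i] <= M[i, i] + 0.02)]
print(sym_eq)   # expected to cluster near the log-optimal strategy (f* = 2/3 here)
```

With these assumed fitnesses the log-optimal allocation is \(f^*=2/3\), and the detected symmetric equilibria sit at the neighboring grid points, consistent with the convergence result for finite N.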
The effect of lineage-size extinction thresholds on actual rates of extinction of random growth trajectories is illustrated in Fig. 6a. As would be expected, higher thresholds of extinction correspond to higher probabilities of extinction, with extinction rates that converge quickly to asymptotic values (“Appendix N”). Numerical simulations indicate that when incorporating low extinction thresholds that result in low extinction rates, the symmetric Nash equilibrium remains stable at the log-optimal strategy. Higher thresholds may result in a number of scenarios: a shift of the symmetric equilibrium away from the log-optimal solution, complete lack of equilibrium solution, or the emergence of multiple symmetric equilibria; in conjunction, multiple pairs of off-diagonal equilibria may appear (see Fig. 6b for one such scenario).
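The dependence of extinction rates on the threshold can be illustrated with a short simulation (assumed fitness values, with a harsh mismatch fitness of 0.1 so that downward excursions below the threshold actually occur):

```python
import numpy as np

rng = np.random.default_rng(4)

p, n, trials, w0 = 0.6, 200, 5000, 100.0
f = 0.61   # near the log-optimal strategy for these assumed fitnesses

envs = (rng.random((trials, n)) < (1 - p)).astype(int)
m = np.where(envs == 0, 2.0 * f + 0.1 * (1 - f), 0.1 * f + 2.0 * (1 - f))
logw = np.log(w0) + np.log(m).cumsum(axis=1)
running_min = logw.min(axis=1)   # lowest point reached by each trajectory

# Extinction rate: fraction of trajectories that ever dip below threshold d.
rates = {d: float(np.mean(running_min < np.log(d))) for d in (10.0, 50.0, 80.0)}
print(rates)   # higher extinction thresholds give higher extinction rates
```

Since all thresholds are applied to the same set of trajectories, the monotone relation between threshold and extinction rate holds exactly in the simulation, not just in expectation.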
2.5 Minimum Time to Reach a Population Threshold Size
To gain further perspective on optimal strategies under highly stochastic growth, we consider evolutionary competition between lineages, where survival is determined by reaching a certain threshold of lineage size in minimal time [e.g., for \(K\)-selected species, see Reznick et al. (2002)]. In effect, the lineage with growth characteristics that minimize the time to reach a certain population size threshold “wins,” a setting with potential relevance in the context of competitively colonizing a limited niche, as in range expansion scenarios [see Villa Martin et al. (2019) for a bet-hedging population expanding into an unoccupied space]. We follow the classic results of Breiman (1961) on the log-optimal portfolio as the optimal strategy minimizing the expected time to reach an asymptotic target wealth, but instead of an infinite target we base the fitness payoff function on finite targets. Initial insight into the effect of strategy choice on the consequent distributions of minimal time (Fig. 7a) is provided by comparing their expectation, where the optimality of Kelly is already apparent (Fig. 7b).
Instead of considering expectations of (highly correlated) minimal time distributions, we devise a more informative fitness payoff function based on the joint distribution. Crucially, this payoff will naturally be amenable to a game-theoretic approach, in line with the type of analysis in the previous section with payoff \(M_n (f,g)\). As before, we condition the probability on avoiding an extinction threshold. The payoff captures the probability that a trajectory following strategy f reaches threshold c before a trajectory following strategy g, conditioned on avoiding an extinction threshold d. If both trajectories reach c at the same time (since time is in discrete generations), then the one which overshoots with a greater margin above c ‘wins’. Denote by T(f, c) the minimal time distribution given strategy f and target lineage size c,
More precisely, we denote new trajectories \(\{W^E_k\}_{k=0}^n\) by
\[W^E_0=W_0,\]
and for all \(k=0,\ldots , n-1\)
\[W^E_{k+1}=\begin{cases} W^E_k\, W(X_{k+1}) & \text {if } W^E_k>d,\\ 0 & \text {otherwise.}\end{cases}\]
We denote also by
\[T(f,c)=\min \{k\ge 0: W^E_k\ge c\}\]
the first time the trajectory \(\{W^E_k\}_{k=0}^n\) cuts the threshold c, with \(T(f,c)=\infty \) if and only if the trajectory never cuts the threshold.
Then the payoff matrix \(M_c(f,g)\) is defined by
\[M_c(f,g)=P\bigl (W_0\, W_{T(f,c)}(f) > V_0\, W_{T(f,c)}(g),\ T(f,c)<\infty \bigr ),\]
with initial-size randomizations \(W_0\) and \(V_0\) as before.
We then identify pure strategy Nash equilibria reflecting the evolutionary solutions with the new relative payoff \(M_c (f,g)\). In “Appendix J,” we prove that again Kelly is the solution to the game, invariant to the evolutionary “choice” of target population size c and that under a nonstationary environment regime Kelly emerges as the static equilibrium strategy. Finally, we highlight a deep mathematical link of this probabilistic perspective for minimal time optimality to the competitive optimality setting with payoff \(M_n (f,g)\). Formally, \(M_c(f,g)\) can be rewritten as a convex linear combination of \(M_n(f,g)\): \(M_c(f,g) = \sum _{n=0}^{\infty } P( W_0 W_n(f) > V_0 W_n(g), T(f,c) = n)\) (see “Appendix J” for more details).
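A Monte Carlo sketch of the minimal-time payoff (assumed fitness values; the initial randomizations and the extinction threshold are omitted for brevity) compares a log-optimal strategy with a more aggressive one:

```python
import numpy as np

rng = np.random.default_rng(5)

p, trials, w0, c = 0.6, 20000, 100.0, 1e6
n_max = 800   # horizon long enough that both strategies reach c in practice

def log_trajs(f, envs):
    """Cumulative log lineage size under strategy (f, 1 - f); assumed fitnesses 2.0 / 0.5."""
    m = np.where(envs == 0, 2.0 * f + 0.5 * (1 - f), 0.5 * f + 2.0 * (1 - f))
    return np.log(w0) + np.log(m).cumsum(axis=1)

envs = (rng.random((trials, n_max)) < (1 - p)).astype(int)

def hitting(f):
    """First generation each trajectory reaches c, and its log size there (overshoot)."""
    lw = log_trajs(f, envs)
    t = (lw >= np.log(c)).argmax(axis=1)  # first hit (positive drift makes misses negligible)
    return t, lw[np.arange(trials), t]

tf, of = hitting(2 / 3)   # the log-optimal strategy for these assumed fitnesses
tg, og = hitting(0.9)     # a more aggressive strategy

# f wins by reaching c strictly earlier, or at the same generation
# with the larger overshoot above c.
payoff = float(np.mean((tf < tg) | ((tf == tg) & (of > og))))
print(payoff)
```

The payoff comes out well above one half: the log-optimal strategy usually reaches the target first, although the aggressive strategy does win the race on environment sequences that happen to favor it early on.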
3 Discussion
In this work, we provide further support for the robustness of the expected log criterion as an optimality solution for biological bet-hedging. We develop a game-theoretic framework inherently invariant to the span of evolutionary horizons while incorporating considerations of interim extinction risk and use multiple optimality criteria to strengthen our results. This approach goes beyond standard models of bet-hedging, which focus on indefinite “long-term” growth rates and ignore interim risk. Previous work generally upholds that “phenotypes with the greatest long-term average growth rate will dominate the entire population” as “the basic principle” used in optimization (Yoshimura and Jansen 1996), or that a proxy for the likely outcome of evolution is “to think of organisms as maximizing the long-term growth rate of their lineage” (Donaldson-Matasci et al. 2010).
Nevertheless, some authors have recently acknowledged the importance of accounting for finite time horizons. For instance, Rivoire and Leibler (2011) note in passing that in their model “the growth rate emerges as a unique measure of fitness when considering the long-term limit \(T\rightarrow \infty \), but, if considering a finite ‘horizon’ there may be a different strategy that outperforms [it].” Indeed, as some evolutionists have argued, short-term fitness measures are also needed to achieve a full understanding of how evolution works in variable environments, as geometric mean fitness concerns the long-run evolutionary outcome (Okasha 2018). Moreover, long-term fitness metrics are typically formulated without regard to transient short-term population dynamics, in which lineages might come close to extinction. Under more inclusive models with extinction, selection in a fluctuating environment can also favor bet-hedging strategies that ultimately increase the risk of extinction (Libby and Ratcliff 2019). Given such considerations, the benefit of explicitly incorporating extinction considerations in stochastic growth models is clearly evident.
We have opted to focus on symmetric Nash equilibria rather than evolutionary stable strategies (ESS), which are strategies that cannot be beaten if the fraction of the rival invading mutants in the population is sufficiently small and are generally invoked in settings with iterative match-ups between individuals rather than lineages (Smith and Price 1973). Since the payoff in our game-theoretic setting pits one lineage against another (two different strategies), there is no explicit sense of invading mutants [but see Olofsson et al. (2009) for an ESS approach to bet-hedging]. Moreover, some of the classic aspects of Nash’s theorem do not directly apply within our setting. The theorem states that for every two-person zero-sum game with finitely many strategies, there exists a mixed strategy that solves the game (Nash 1951). While our framework is indeed “two-person” and has finitely many strategies, it is not zero-sum. Crucially, since an implicit goal of theoretical work such as ours may be toward predicting which strategies are likely to evolve, we focus on pure strategies rather than mixed ones, where the uniqueness of the equilibrium solution emerges as especially beneficial (echoing the classic approach of growth rate log-optimality, where there is always a unique solution due to convexity).
We are not the first to attempt to model the expected minimal time to reach a finite asymptotic target, an extension of the seminal result of Breiman (1961) on properties of the log-optimal portfolio. Aucamp (1977) derived the first such analysis, given some basic assumptions that concern reaching a wealth target exactly vs. “overshooting” it. More recently, Kardaras and Platen (2010) find that in a continuous time or asset price model where a finite target can be exactly reached with no overshooting, the Kelly solution is still optimal; in a discrete time model Kelly is only approximately optimal, but if “time rebates” are introduced (to compensate overshooting the goal in the last investment period) it becomes exactly optimal. While these results on the expectation of the time distribution are in line with our analysis of stochastic lineage growth optimality, we obtain an even stronger result: Given finite population size targets, the log-optimal strategy emerges as a Nash equilibrium under a payoff function based on the joint distribution of minimal time trajectories.
Interestingly, Kelly (1956) has anticipated the application of his ideas in biological bet-hedging, writing “Although the model adopted here is drawn from the real-life situation of gambling, it is possible that it could apply to certain other economic situations...the essential requirements for the validity of the theory are the possibility of reinvestment of profits and the ability to control or vary the amount of money invested or bet in different categories.” It does not require a leap of the imagination to notice analogies of “economic situations” to evolutionary strategies, of “reinvestment of profits” to biological reproduction and growth, and of the “control” of invested money to evolved adaptive optimality. Of course, it is best appreciated with Shannon’s famous “bandwagon” warning in mind, cautioning over hasty attempts to apply insights from information theory to other fields (Shannon 1956).
3.1 Other Approaches to Optimization Under Finite Horizon and Risk
A seemingly straightforward way of introducing finite (albeit still arbitrary) horizons into optimization settings is by considering the expectation of a finite-horizon growth rate. This is the approach adopted in some recent stock portfolio models for finite horizons (Vince and Zhu 2013; Morgan 2015). Within our formalism from Eq. (2), this amounts to finding
\[\mathop{\mathrm{arg\,max}}_{f} E\bigl [W_n(f)^{1/n}\bigr ].\]
However, this implicitly assumes some arbitrary utility function, in this case the n-th root, the maximization of which requires some justification. In contrast, Kelly’s focus on \({{\,\mathrm{\mathrm{arg\,max}}\,}}_f E[\log W_n]\), while implicitly assuming logarithmic utility, is equivalent to the limit of the above expression as \(n\rightarrow \infty \) and leads to desired optimality properties as famously laid out by Breiman (1961).
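The horizon dependence of \({\arg \max }_f E[W_n(f)^{1/n}]\) can be computed exactly for \(k=2\) by summing over the binomial environment counts. In the sketch below (assumed fitness values, for which the Kelly solution is \(f^*=2/3\)), the maximizer moves from betting everything on the likelier environment at \(n=1\) toward the Kelly solution as n grows:

```python
import numpy as np
from math import comb

p = 0.6   # environment probabilities (p, 1 - p); fitnesses below are assumed

def multipliers(f):
    """Per-generation growth multipliers in the two environments."""
    return 2.0 * f + 0.5 * (1 - f), 0.5 * f + 2.0 * (1 - f)

def expected_root(f, n):
    """E[W_n(f)^{1/n}], computed exactly by summing over binomial environment counts."""
    m0, m1 = multipliers(f)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * (m0**k * m1**(n - k)) ** (1 / n)
               for k in range(n + 1))

grid = np.linspace(0.01, 0.99, 99)
bests = {n: float(grid[np.argmax([expected_root(f, n) for f in grid])])
         for n in (1, 5, 50)}
print(bests)   # maximizer drifts from ~1 at n = 1 toward the Kelly solution f* = 2/3
```

This makes the arbitrariness concrete: each choice of n yields a different "optimal" strategy, and only in the limit does the criterion settle on the log-optimal solution.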
A more convincing approach to maximizing wealth with risk management over finite horizons was proposed by Rujeerapaiboon et al. (2015) for portfolio construction. The authors consider the optimization of a minimum bound for finite-horizon growth,
\[\max _{f,\,\gamma }\ \gamma \quad \text {subject to}\quad P\bigl (W_n(f)^{1/n}\ge \gamma \bigr )\ge 1-\varepsilon ,\]
with a degree of freedom \(\varepsilon \) corresponding roughly to a risk-aversion or certainty parameter.
The expression above allows deriving the portfolio giving the highest minimum bound on wealth for any level of certainty \(\varepsilon \). While choosing a particular horizon n and a risk-aversion parameter is perfectly sensible in an investment setting, the translation to the biological framework is problematic: what would be evolution’s risk aversion in this setting? Or the appropriate time horizon for optimization? Any choice of these two parameters would inescapably be arbitrary. In an alternative approach, Rujeerapaiboon et al. (2018) reformulate the Kelly gambling setting in terms of the Conservative Expected Value (CEV), a risk-averse expectation for highly skewed distributions. This amounts essentially to devising a systematic way of constructing fractional Kelly strategies that is strongly coupled with the infimum of the finite-horizon growth rate. Here again, there is an implicit arbitrariness in the choice of horizon length if applied in the context of an evolutionary framework, which we seek to avoid.
Other authors have focused on incorporating risk into the standard Kelly gambling setting with an infinite time horizon. For instance, Busseti et al. (2016) develop a systematic way to trade off growth rate and drawdown risk by formulating a risk-constrained Kelly gambling problem within the standard setting of growth rate maximization under asymptotic horizons. The additional risk constraint limits the probability of a drawdown to a specified level. Nevertheless, for our purposes, percentage drawdown is arguably not a natural metric for representing lineage extinction risks, as compared with explicit extinction thresholds, especially in scenarios of competing finite-size populations (Ashby et al. 2017). Still other approaches may seek to target risk minimization as a primary criterion. In an approach akin to our Dutch book analysis, Wolf et al. (2005) minimize the growth rate variance and consequently the probability of extinction due to “unlucky” environmental trajectories. However, this is at the inevitable expense of achieving high stochastic growth rates, a vital aspect of evolutionary fitness.
3.2 Game-Theoretic Competitive Optimality of Bell and Cover
The results presented here can also be seen as both a special case and an extension of the classic results of Bell and Cover (1980, 1988). There are several important distinctions: [a] their setting is formulated for continuous random variables whereas our environments are discrete events, [b] their payoff implies a zero-sum game whereas our game is nonzero-sum (more accurately, non-constant-sum) due to the effect of extinctions, and [c] their payoff function is a straightforward probability while our payoff is effectively a conditional probability (includes considerations of extinction risk). Moreover, implicit in Bell and Cover’s setting is an infinitely sized payoff matrix, whereas our payoff matrix is finite since it reflects a finite number of strategies possible in a finite population. These distinctions have enabled us to show that, at least given the particular payoff function and discrete framework, the emerging symmetric Nash equilibrium is in fact a strict and unique one.
Some authors have generalized or utilized other aspects of the classic competitive optimality results. Most recently, Garivaltis (2019) has shown that discrete-time results of Bell and Cover (1988) hold equally well for continuous-time rebalanced portfolios in a competitive setting between two investors, each aiming to maximize the expected ratio of his own wealth to the other’s. In an original use of evolutionary ideas in finance, Lo et al. (2017) and Orr (2017) consider a payoff function capturing the relative wealth of two competing investors, each with some set initial wealth, focusing on finite-period analysis. They analyze optimal strategies of a primary player against a given “vanilla” strategy, a framework consistent with our initial non-game-theoretic relative payoff setting. They find that the particular vanilla strategy chosen plays an important role in the optimal allocation, in conjunction with the initial wealth of both players.
Finally, our game-theoretic analysis may hint at a solution to a “coincidence” pointed out in Bell and Cover (1980). They were left perplexed as to why competitive optimality for a finite horizon turned out, by “coincidence,” to have the same solution (namely, Kelly) as in the growth-optimal portfolio: “Finally, it is tantalizing that \(b^*\) arises as the solution to such dissimilar problems [...] The underlying [reason] for this coincidence will be investigated.” Their follow-up 1988 paper suggests a “possible reason for the robustness of log-optimal portfolios” or why “log-optimal portfolios behave well in the competitive investment game”: namely that the wealth generated from any portfolio is always within “fair reach” of the wealth from the log-optimal portfolio. Indeed, the Kuhn–Tucker conditions and the consequent bound on the wealth ratio (Cover and Thomas 2006, Theorem 16.2.2) already imply that game-theoretic optimality is the driving force behind the asymptotic dominance. Fair randomization of initial wealth then leads to the game-theoretic solution for any increasing function of the wealth ratio. Our investigation of the payoff matrix suggests another perspective on this “coincidence.” Asymptotically with horizon n, the payoff matrix becomes maximally “contrasted,” with off-diagonal cells converging to probabilities of 0 or 1 (except those on “fault lines”), such that the Nash equilibrium emerges naturally. In effect, the “saddle-point” equilibrium, which has been established as invariant with n, asymptotically attains maximum curvature (“Appendix K”).
4 Conclusion
In this work, we have argued that under fluctuating environments and trait randomization, geometric mean fitness should also encompass considerations of stochastic growth and extinction risk under finite evolutionary horizons. We show that for both the relative maximal growth payoff and the relative minimal time payoff there is a unique pure strategy symmetric equilibrium, which is invariant with evolutionary time horizon and robust to low extinction risk. Coinciding with the classic bet-hedging modeling approach, this is the Kelly log-optimal strategy. With higher thresholds of extinction, the equilibrium may shift away from Kelly and possibly branch out to multiple equilibria. Future work will be required to generalize the model to competitive optimality payoffs beyond pairwise lineages, to Markovian environmental sequential transitions and random fitness matrices, and to more precisely capture the effect of high extinction thresholds on the optimal evolutionary solutions.
References
Ashby B, Watkins E, Lourenço J, Gupta S, Foster KR (2017) Competing species leave many potential niches unfilled. Nat Ecol Evol 1:1495–1501
Aucamp D (1977) An investment strategy with overshoot rebates which minimizes the time to attain a specified goal. Manag Sci 23(11)
Bell R, Cover TM (1988) Game-theoretic optimal portfolios. Manag Sci 34(6):724–733
Bell RM, Cover TM (1980) Competitive optimality of logarithmic investment. Math Oper Res 5(2):161–166
Bergstrom TC (2014) On the evolution of hoarding, risk-taking, and wealth distribution in nonhuman and human populations. Proc Natl Acad Sci 111(Supplement 3):10860–10867
Breiman L (1961) Optimal gambling systems for favorable games. In: Proceedings of the 4th Berkeley symposium on mathematical statistics and probability, vol 1, pp 63–68
Busseti E, Ryu E-K, Boyd S (2016) Risk-constrained Kelly gambling. J Invest 25(3):118–134
Cohen D (1966) Optimizing reproduction in a randomly varying environment. J Theor Biol 12(1):119–29
Cover TM, Thomas JA (2006) Elements of information theory, 2nd edn. Wiley, Hoboken
Donaldson-Matasci MC, Bergstrom CT, Lachmann M (2010) The fitness value of information. Oikos (Copenhagen, Denmark) 119(2):219–230
Garivaltis A (2018) Game-theoretic optimal portfolios in continuous time. Econ Theory Bull 1–9
Gremer JR, Venable DL (2014) Bet hedging in desert winter annual plants: optimal germination strategies in a variable environment. Ecol Lett 17(3):380–387
Hakansson N (1971) Capital growth and the mean-variance approach to portfolio selection. J Financ Quant Anal 6(1):517–557
Hopper KR (2018) Bet hedging in evolutionary ecology with an emphasis on insects. In: Reference module in life sciences. Elsevier
Kardaras C, Platen E (2010) Minimizing the expected market time to reach a certain wealth level. SIAM J Financ Math 1(1):16–29
Kelly JL Jr (1956) A new interpretation of information rate. Bell Syst Tech J 35:917–926
King O, Masel J (2007) The evolution of bet-hedging adaptations to rare scenarios. Theor Popul Biol 72(4):560–75
Kussell E, Kishony R, Balaban N, Leibler S (2005) Bacterial persistence: a model of survival in changing environments. Genetics 169(4):1807–14
Lande R (2007) Expected relative fitness and the adaptive topography of fluctuating selection. Evolution 61:1835–1846
Li X-Y, Lehtonen J, Kokko H (2017) Sexual reproduction as bet-hedging. Springer, Cham, pp 217–234
Libby E, Ratcliff WC (2019) Shortsighted evolution constrains the efficacy of long-term bet hedging. Am Nat 193(3):409–423 PMID: 30794447
Lo A, Orr H, Zhang R (2017) The growth of relative wealth and the Kelly criterion. J Bioecon 20(1):49–67
MacLean LC, Thorp EO, Ziemba WT (2011) Good and bad properties of the Kelly criterion, chapter 39. In: World scientific handbook in financial economics series, pp 563–572
Markowitz H (2006) Samuelson and investment for the long run. In: Samuelsonian economics and the twenty-first century. Oxford University Press, pp 252–261
Morgan D (2015) An alternative mathematical interpretation and generalization of the capital growth criterion. J Financ Invest Anal 4(4):6
Nash J (1951) Non-cooperative games. Ann Math 54(2):286–295
Okasha S (2018) Agents and goals in evolution. Oxford University Press, Oxford
Olofsson H, Ripa J, Jonzén N (2009) Bet-hedging as an evolutionary game: the trade-off between egg size and number. Proc Biol Sci 276(1669):2963–2969
Orr H (2017) Evolution, finance, and the population genetics of relative wealth. J Bioecon 20(1):29–48
Proulx SR, Day T (2001) What can invasion analyses tell us about evolution under stochasticity? Selection 2(1–2):1–15
Ram Y, Liberman U, Feldman MW (2018) Evolution of vertical and oblique transmission under fluctuating selection. Proc Natl Acad Sci 115(6):E1174–E1183
Reznick D, Bryant MJ, Bashey F (2002) r- and K-selection revisited: the role of population regulation in life-history evolution. Ecology 83(6):1509–1520
Rivoire O, Leibler S (2011) The value of information for populations in varying environments. J Stat Phys 142(6):1124–1166
Rubin IN, Doebeli M (2017) Rethinking the evolution of specialization: a model for the evolution of phenotypic heterogeneity. J Theor Biol 435:248–264
Rujeerapaiboon N, Kuhn D, Wiesemann W (2015) Robust growth-optimal portfolios. Manag Sci 62(7):2090–2109
Rujeerapaiboon N, Ross Barmish B, Kuhn D (2018) On risk reduction in kelly betting using the conservative expected value. In: 2018 IEEE conference on decision and control (CDC), pp 5801–5806
Seger J, Brockmann HJ (1987) What is bet-hedging? In: Oxford surveys in evolutionary biology. Oxford University Press, Oxford, pp 182–211
Shannon C (1956) The bandwagon (edtl.). IRE Trans Inf Theory 2(1):3–3
Simons A, Johnston M (2003) Suboptimal timing of reproduction in Lobelia inflata may be a conservative bet-hedging strategy. J Evol Biol 16:233–243
Smith J, Price G (1973) The logic of animal conflict. Nature 246:15–18
Stollmeier F, Nagler J (2018) Unfair and anomalous evolutionary dynamics from fluctuating payoffs. Phys Rev Lett 120:058101
Villa Martin P, Munoz MA, Pigolotti S (2019) Bet-hedging strategies in expanding populations. PLOS Comput Biol 15(4):1–17
Vince R, Zhu Q (2013) Inflection point significance for the investment size. Available at SSRN https://ssrn.com/abstract=2230874
Wolf DM, Vazirani VV, Arkin AP (2005) Diversity in times of adversity: probabilistic strategies in microbial survival games. J Theor Biol 234(2):227–253
Yoshimura J, Jansen VAA (1996) Evolution and population dynamics in stochastic environments. Res Popul Ecol 38(2):165–182
Yoshimura J, Tanaka Y, Togashi T, Iwata S, Tainaka K-i (2009) Mathematical equivalence of geometric mean fitness with probabilistic optimization under environmental uncertainty. Ecol Model 220(20):2611–2617
Acknowledgements
Open access funding provided by Projekt DEAL. We would like to thank Alex Garivaltis for illuminating discussions on competitive optimality and two diligent reviewers for their very insightful comments. We also appreciate the continued support of Jürgen Jost and the Max Planck Institute (MIS). OT would like to further acknowledge the generous support of the Complexity Institute at NTU Singapore and Peter MA Sloot. TDT would also like to thank VIASM for financial support and hospitality during his two-month visit in 2019.
Appendices
Appendix A: The Kelly Solution to the Full-Fitness Matrix Model
In this section, we derive the Kelly (log-optimal) solution for the full-fitness matrix model.
The case \(k=2\): We have
$$\begin{aligned} W_n(f) = \big (o_{11}f+o_{12}(1-f)\big )^H \big (o_{21}f+o_{22}(1-f)\big )^{n-H}, \end{aligned}$$
where \(H\sim \mathrm{Binomial}(n,p)\). The Kelly solution is then defined by
$$\begin{aligned} f^{\mathrm{Kelly}} = \mathop {\mathrm{arg\,max}}\limits _{f\in [0,1]} G(f), \end{aligned}$$
where \(G(f) := \lim _{n\rightarrow \infty } W_n^{\frac{1}{n}}(f)\).
By denoting \(\overline{o}_1(f):=o_{11}f+o_{12}(1-f), \overline{o}_2(f):=o_{21}f+o_{22}(1-f)\), we have
$$\begin{aligned} G(f) = \overline{o}_1(f)^{p}\, \overline{o}_2(f)^{1-p}. \end{aligned}$$
Therefore, by direct calculations, we obtain the Kelly solution, which is dependent on p,
$$\begin{aligned} f^{\mathrm{Kelly}} = {\left\{ \begin{array}{ll} 0,&{} \text { if } p\in [0,p_{-}],\\ \frac{(1-p) o_{12}}{o_{12}-o_{11}} + \frac{p o_{22}}{o_{22}-o_{21}},&{} \text { if } p\in [p_{-}, p_{+}],\\ 1,&{} \text { if } p\in [p_{+},1], \end{array}\right. } \end{aligned}$$
where \(p_{-} = \frac{o_{12}(o_{22}-o_{21})}{\Delta }\) and \(p_{+}= \frac{o_{11}(o_{22}-o_{21})}{\Delta }\) with \(\Delta := o_{11}o_{22}-o_{12}o_{21}\), and the corresponding optimal value in the interior case is
$$\begin{aligned} G(f^{\mathrm{Kelly}}) = \Big (\frac{p\,\Delta }{o_{22}-o_{21}}\Big )^{p} \Big (\frac{(1-p)\,\Delta }{o_{11}-o_{12}}\Big )^{1-p}. \end{aligned}$$
The general case k: By direct calculations, we obtain
$$\begin{aligned} G(\mathbf {f}) = \prod _{i=1}^k \overline{o}_i(\mathbf {f})^{p_i}, \end{aligned}$$
where \(\overline{o}_i(\mathbf {f}) := \sum _{j=1}^k o_{ij} f_j\). This implies that for each \(\mathbf {p}\in \Delta _{k-1}:=\{(x_1,\ldots ,x_k) \in [0,1]^k \text { such that } x_1+\cdots +x_k =1\}\), \(\log G(\mathbf {f})\) is a continuous strictly concave function on the compact convex domain \(\Delta _{k-1}\). Therefore there always exists a unique Kelly solution \(f^{\mathrm{Kelly}} \in \Delta _{k-1}\), which is dependent on \(\mathbf {p}\).
Remark:
(i)
If the fitness matrix is diagonal, i.e., \((o_{ij}) = {{\,\mathrm{diag}\,}}\{o_1,\ldots ,o_k\}\), then \((f^{\mathrm{Kelly}})_i = p_i\);
(ii)
\(f^{\mathrm{Kelly}}\) solves the system
$$\begin{aligned} \sum _{i=1}^k \frac{p_i o_{ij}}{\overline{o}_i(\mathbf {f})} = 1, \quad \forall j=1,\ldots , k. \end{aligned}$$
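As a numerical sanity check of the \(k=2\) case, the following Python sketch (with an illustrative, hypothetical fitness matrix and environment probability, not values from the paper) recovers the interior Kelly fraction by brute-force grid search and verifies the Kuhn–Tucker system above at that point.

```python
import math

# Illustrative (hypothetical) 2x2 fitness matrix and environment probability.
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p = 0.6

def log_growth(f):
    """p*log(obar_1(f)) + (1-p)*log(obar_2(f)), i.e., log G(f)."""
    return (p * math.log(o11 * f + o12 * (1 - f))
            + (1 - p) * math.log(o21 * f + o22 * (1 - f)))

# Closed form for the interior case p_- < p < p_+.
delta = o11 * o22 - o12 * o21
p_minus = o12 * (o22 - o21) / delta
p_plus = o11 * (o22 - o21) / delta
f_kelly = (1 - p) * o12 / (o12 - o11) + p * o22 / (o22 - o21)

# Brute-force maximization of the growth rate over a fine grid.
f_grid = max((i / 10**5 for i in range(1, 10**5)), key=log_growth)

# The Kuhn-Tucker system: sum_i p_i o_ij / obar_i(f) = 1 for each j.
ob1 = o11 * f_kelly + o12 * (1 - f_kelly)
ob2 = o21 * f_kelly + o22 * (1 - f_kelly)
kkt = [p * o11 / ob1 + (1 - p) * o21 / ob2,
       p * o12 / ob1 + (1 - p) * o22 / ob2]
```

Both checks agree: the grid maximizer matches the closed form, and both Kuhn–Tucker sums equal one.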
Appendix B: The Solution to Nonstationary Environments
We model the environment probabilities on a parameterized Beta distribution, such that \(p \sim B(\alpha ,\beta )\), and prove that the Kelly solution (a static f that maximizes the asymptotic growth rate) in the asymptotic framework corresponds to the solution of the i.i.d. environment case with a probability equaling the expectation of the Beta distribution.
For the sake of simplicity, we consider only \(k=2\). We have \( W_n(f) = \overline{o}_1(f)^H \overline{o}_2(f)^{n-H} \), where \(H\sim GB\big (n,\{p_1,\ldots , p_n\}\sim Beta(\alpha ,\beta )\big )\), i.e., \(H=\varepsilon _1+\cdots +\varepsilon _n\) with \(\varepsilon _r \sim Bernoulli(p_r)\) and \(p_r \sim Beta(\alpha , \beta )\). Using the law of large numbers, we have
$$\begin{aligned} \lim _{n\rightarrow \infty } W_n^{\frac{1}{n}}(f) = \overline{o}_1(f)^{p}\, \overline{o}_2(f)^{1-p}, \end{aligned}$$
where \(p=\lim _{n\rightarrow \infty }\frac{1}{n}\sum _{r=1}^n p_r= \frac{\alpha }{\alpha +\beta }\) is the expectation of the Beta distribution. Thus, the Kelly solution in this case is the same as in the i.i.d. case.
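A quick simulation (with illustrative, hypothetical Beta parameters) confirms the law-of-large-numbers step: the empirical frequency of environment "1" under Beta-distributed \(p_r\) converges to the Beta mean \(\frac{\alpha }{\alpha +\beta }\).

```python
import random

# Hypothetical Beta(2, 3) distribution for the environment probabilities.
alpha, beta = 2.0, 3.0
n = 200000

rng = random.Random(7)
# Each round: draw p_r ~ Beta(alpha, beta), then the environment
# indicator eps_r ~ Bernoulli(p_r); H/n is the empirical frequency.
hits = 0
for _ in range(n):
    p_r = rng.betavariate(alpha, beta)
    if rng.random() < p_r:
        hits += 1
freq = hits / n
```

The frequency settles near \(\alpha /(\alpha +\beta ) = 0.4\), so the growth-optimal static f indeed matches the i.i.d. case with that p.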
Appendix C: The Dutch Book Solution and the Corresponding Loss of Growth
In this section, we derive the Dutch book solution for our model. By definition, the Dutch book solution \(f^D\) satisfies \(\overline{o}_1(f) = \overline{o}_2(f) = \cdots = \overline{o}_k(f)\) with positive growth, i.e., \(\overline{o}_1(f)>1\).
The case \(k=2\): The Dutch book solution satisfies
$$\begin{aligned} o_{11}f+o_{12}(1-f) = o_{21}f+o_{22}(1-f) > 1. \end{aligned}$$
Therefore, if \(\Delta := o_{11}o_{22} - o_{12}o_{21} > o_{11}+o_{22}-o_{12}-o_{21}\) then we always have a unique Dutch book solution \(f^D\),
$$\begin{aligned} f^D = \frac{o_{22}-o_{12}}{o_{11}+o_{22}-o_{12}-o_{21}}, \end{aligned}$$
and
$$\begin{aligned} \overline{o}_1(f^D) = \overline{o}_2(f^D) = \frac{\Delta }{o_{11}+o_{22}-o_{12}-o_{21}} > 1. \end{aligned}$$
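A short numerical check (illustrative, hypothetical matrix entries): solving \(\overline{o}_1(f) = \overline{o}_2(f)\) for f equalizes the two growth factors at a level above one whenever the stated condition on \(\Delta \) holds.

```python
# Hypothetical 2x2 fitness matrix satisfying Delta > o11 + o22 - o12 - o21.
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8

delta = o11 * o22 - o12 * o21     # Delta in the text
denom = o11 + o22 - o12 - o21
f_D = (o22 - o12) / denom         # fraction solving obar_1(f) = obar_2(f)

# Both environment growth factors under f_D are equal and deterministic.
obar1 = o11 * f_D + o12 * (1 - f_D)
obar2 = o21 * f_D + o22 * (1 - f_D)
```

Wealth under \(f^D\) grows by the same factor \(\Delta / (o_{11}+o_{22}-o_{12}-o_{21})\) in every environment, which is the defining "sure growth" property.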
The general case k: We give here some criteria for the existence of a unique Dutch book solution in the general case.
Lemma 1
Given a fitness matrix \(O = (o_{i,j})_{i,j=1}^k\). Denote by \(\alpha _{i,j} = o_{i,j}-o_{k,j}\) for all \(j=1,\ldots ,k\) and \(i=1,\ldots ,k-1\). Denote by \(\Lambda = (\Lambda _{i,j})_{i,j=1}^k\) such that
If this fitness matrix O satisfies
(i)
\(o_{ii} > o_{ji}\ge 0\) for all \(i\ne j = 1,\ldots , k\)
(ii)
\(\Lambda _{i,k} > 0\) for all \(i=1,\ldots ,k\)
(iii)
\(\sum \limits _{j=1}^k o_{i,j}\Lambda _{j,k} > 1\) for all \(i=1,\ldots ,k-1\)
then there exists a Dutch book solution defined by \( f^D_j =\Lambda _{j,k}, \quad j=1,\ldots ,k \) and the corresponding deterministic wealth is
Proof
We have from Condition (iii)
Moreover, from the definition of \(\alpha \) and \(\Lambda \), we have \(\overline{o}_i(\mathbf {f}^D) = \overline{o}_j(\mathbf {f}^D)\) for all \(i\ne j = 1,\ldots ,k\). \(\square \)
Corollary 1
In the case of a diagonal matrix, i.e., \((o_{i,j}) = {{\,\mathrm{diag}\,}}\{o_1,\ldots ,o_k\}\), by direct calculation, we obtain \( \Lambda _{i,k} = \frac{o_i^{-1}}{\sum \limits _{j=1}^k o_j^{-1}}.\)
Conditions (i) and (ii) hold if and only if \(o_i>0\) for all i, and condition (iii) holds if and only if \(\sum \limits _{j=1}^k o_j^{-1} < 1\).
Corollary 2
For a finite n and assuming \( \min \{o_{ii}\}_{i=1}^k \gg \max \{o_{ij}\}_{i\ne j} \ge 0, \) there exists a Dutch book solution \(\mathbf {f}^D\).
Proof
The conclusion directly follows from the above Corollary 1 (for a diagonal fitness matrix). \(\square \)
Appendix D: Finite Last Intersection
In this section, we show that for a given pair of strategies (f, g) with \(G(f) > G(g)\), there is a \(T(f,g) < \infty \) such that \(W_n(f,\mathbf {x}) > W_n(g,\mathbf {x})\) for all \(n \ge T(f,g)\) and all \(\mathbf {x}\in \{0,1\}^{\infty }\). This means that the last intersection between the two random trajectories \(\{W_n(f,\mathbf {x})\}_n\) and \(\{W_n(g,\mathbf {x})\}_n\) is bounded above by T(f, g), a finite number depending only on f and g.
Proof
We first define the excess growth rate
$$\begin{aligned} E_n(\mathbf {x}) := \frac{1}{n} \log \frac{W_n(f,\mathbf {x})}{W_n(g,\mathbf {x})}. \end{aligned}$$
We note that for all \(\mathbf {x}\)
To this end, we need to prove that there is a \(T(f,g) < \infty \) such that \(E_n(\mathbf {x}) > 0\) for all \(n\ge T(f,g)\) and all \(\mathbf {x}\in \{0,1\}^{\infty }\).
Otherwise, for each k there exist \(n_k \ge k\) and \(\mathbf {x}_k \in \{0,1\}^{\infty }\) such that \(E_{n_{k}} (\mathbf {x}_k) \le 0\). Now, there exists a subsequence of \(\{\mathbf {x}_k\}\) which is convergent to some \(\mathbf {x}\in \{0,1\}^{\infty }\). Therefore as \(k\rightarrow \infty \) we have \(n_k\rightarrow \infty \) and \(\lim _{n_k\rightarrow \infty }E_{n_{k}} (\mathbf {x}) \le 0\), in contradiction to (7). \(\square \)
Appendix E: Asymptotic Log-Normality of the Growth Rate
In this section, we show that in our discrete model the wealth \(W_n(f)\) is asymptotically log-normal, with the variance of the growth rate \(\frac{1}{n}\log W_n(f)\) vanishing as \(n\rightarrow \infty \).
Proof
We rewrite \( \frac{1}{n} \log W_n(f) = \frac{1}{n} \sum _{i=1}^n y_i,\) where \(y_i = x_i \log \overline{o}_1(f) + (1-x_i) \log \overline{o}_2(f) \) are independent discrete random variables taking the values \(\log \overline{o}_1(f)\) and \(\log \overline{o}_2(f) \) with probabilities \(p\) and \(1-p\), respectively. Thus we have a sequence of i.i.d. random variables \(\{y_i\}_i\) with expectation \(\mu = E(y_i) = \log G(f)\) and variance \(\sigma ^2 = \mathrm{var}(y_i) = p(1-p) (\log \overline{o}_1(f) - \log \overline{o}_2(f))^2\). By the CLT, we have for large n: \( \frac{1}{\sqrt{n}} \sum _{i=1}^n ( y_i - \mu ) \sim N(0,\sigma ^2), \) which is equivalent to
$$\begin{aligned} \frac{1}{n} \log W_n(f) \sim N\Big (\mu , \frac{\sigma ^2}{n}\Big ). \end{aligned}$$
\(\square \)
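A Monte Carlo check of this limiting normality (illustrative, hypothetical fitness matrix and parameters): the sample mean and standard deviation of the simulated growth rates should match \(\mu \) and \(\sigma /\sqrt{n}\).

```python
import math
import random
import statistics

# Hypothetical 2x2 fitness matrix; f is its interior Kelly fraction.
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p, n, trials = 0.6, 400, 2000
f = (1 - p) * o12 / (o12 - o11) + p * o22 / (o22 - o21)

ob1 = math.log(o11 * f + o12 * (1 - f))
ob2 = math.log(o21 * f + o22 * (1 - f))
mu = p * ob1 + (1 - p) * ob2                       # E[y_i]
sigma = math.sqrt(p * (1 - p)) * abs(ob1 - ob2)    # sd of y_i

rng = random.Random(11)
rates = []
for _ in range(trials):
    h = sum(1 for _ in range(n) if rng.random() < p)  # environment-1 count
    rates.append((h * ob1 + (n - h) * ob2) / n)       # (1/n) log W_n(f)
```

With \(n = 400\) the spread of the growth rate is already an order of magnitude smaller than its mean, illustrating the vanishing-variance statement above.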
Appendix F: Fully Correlated Log Growth Rates for the Case \(k=2\)
In this section, we show that for all \(f, g\ne f^D\)
$$\begin{aligned} \mathrm{corr}\big (\log W_n(f), \log W_n(g)\big ) = \pm 1. \end{aligned}$$
Proof
Denote by
where \(\mathbf {x}= (x_1,\ldots ,x_n)\) is a realization and \(|x| = x_1+\cdots +x_n\). Because \(f,g \ne f^D\) we have \(\overline{o}_1(f)\ne \overline{o}_2(f)\) and \(\overline{o}_1(g)\ne \overline{o}_2(g)\), therefore we can define
We first prove that for any given m realizations \(\mathbf {x}^{(1)}, \ldots , \mathbf {x}^{(m)}\), we have
Indeed, we note that
and similarly for g. This implies (8). Therefore
Remark 1
Whether the correlation is \(+1\) or \(-1\) depends on whether \(\lambda > 0\) or \(\lambda < 0\). For \(f=f^{\mathrm{Kelly}}\), the growth factor under environment “1” exceeds that under environment “0”, implying \(\log \frac{ \overline{o}_1(f) }{ \overline{o}_2(f)} > 0\). Similarly, for g it implies \(\log \frac{ \overline{o}_1(g) }{ \overline{o}_2(g)} > 0\), and therefore \(\lambda > 0\). At \(f^D\), \(\log \frac{ \overline{o}_1(f^D) }{ \overline{o}_2(f^D)} = 0\), so \(f^D\) acts as a threshold. In most cases, the correlation will be \(+1\), since both f and g induce a positive growth rate.
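The mechanism is transparent in simulation: for \(k=2\), both \(\log W_n(f)\) and \(\log W_n(g)\) are affine functions of the same binomial count H, so their sample correlation is exactly \(\pm 1\). A short check with an illustrative, hypothetical matrix and two strategies away from \(f^D\):

```python
import math
import random
import statistics

# Hypothetical matrix; both strategies differ from the Dutch book f^D.
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p, n, trials = 0.6, 50, 300

def log_wealth(f, h):
    """log W_n(f) as an affine function of the environment count h."""
    ob1 = o11 * f + o12 * (1 - f)
    ob2 = o21 * f + o22 * (1 - f)
    return h * math.log(ob1) + (n - h) * math.log(ob2)

rng = random.Random(3)
hs = [sum(1 for _ in range(n) if rng.random() < p) for _ in range(trials)]
xs = [log_wealth(0.64, h) for h in hs]
ys = [log_wealth(0.55, h) for h in hs]

# Pearson correlation, computed by hand from the paired samples.
mx, my = statistics.mean(xs), statistics.mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / trials
corr = cov / (statistics.pstdev(xs) * statistics.pstdev(ys))
```

Here both slopes \(\log (\overline{o}_1/\overline{o}_2)\) are positive, so the correlation comes out \(+1\) up to floating-point error.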
Appendix G: Kelly is the Maximal Element in the Fitness Payoff Relation
Here we assume lineage size initial randomization, i.e., \(W_n(f) \gg W_n(g)\) iff
where \(W_0\) and \(V_0\) are random, and show that the Kelly strategy is the maximal element in this relation.
Proof
As a direct consequence of Proposition 1 and Eq. (11), we have
and equality if and only if \(f=f^{\mathrm{Kelly}}\). \(\square \)
Appendix H: Non-Constant-Sum Game, But Conceptually Zero-Sum
In this section, we show that
Proposition 1
(i)
For \(d=0\), \(M_n(f,g)+M_n(g,f) = 1\) for all f, g.
(ii)
For \(d> 0\), \(M_n(f,g)+M_n(g,f) < 1\) for all f, g.
Moreover, the game is conceptually zero-sum, but not formally.
Proof
(i)
We have from Eq. (9)
$$\begin{aligned} M_n(f,g)+M_n(g,f)= & {} \sum _{s=0}^n \Big (\mathbb {P}(W_n(f,s) W_0 > W_n(g,s) V_0)\\&+\, \mathbb {P}(W_n(f,s) W_0 < W_n(g,s) V_0) \Big )P(s) =\sum _{s=0}^n P(s) = 1. \end{aligned}$$
(ii)
On the other hand, we have from Eq. (5) for all \(f\ne g\)
$$\begin{aligned} M_n(f,g)+M_n(g,f)= & {} \mathbb {P}(CAB) + \mathbb {P}(AB^c) + \mathbb {P}(C^c AB) + \mathbb {P}(BA^c)\\= & {} \mathbb {P}(AB) + \mathbb {P}(AB^c) + \mathbb {P}(BA^c) = \mathbb {P}(A\cup B) < 1. \end{aligned}$$where \(C = \{W_0 W_n(f) > V_0 W_n(g)\}\), \(A= \{W_0 W_i(f) > d \quad \forall i=1,\ldots ,n\}\), \(B= \{V_0 W_i(g) > d \quad \forall i=1,\ldots ,n\}\). For \(f=g\) we also have
$$\begin{aligned} M_n(f,f)&= \mathbb {P}(W_0> V_0, A_1, A_2)< \mathbb {P}(W_0 > V_0) =\frac{1}{2}. \end{aligned}$$where \(A_1= \{W_0 W_i(f) > d \quad \forall i=1,\ldots ,n\}\), \(A_2= \{V_0 W_i(f) > d \quad \forall i=1,\ldots ,n\}\). Finally, numeric simulations demonstrate that if \(M(W,V)>M(U,V)\) then \(M(V,W)<M(V,U)\) for all W, V, U, i.e., changing to a strategy with a gain for one player always incurs a loss for the other player.\(\square \)
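The complementarity in case (i) can be seen directly in simulation: with continuous initial sizes and \(d=0\), the two win events partition each sample path (ties occur with probability zero), so the estimated payoffs sum to exactly one. Illustrative, hypothetical parameters throughout:

```python
import math
import random

# Hypothetical matrix; no extinction threshold (d = 0).
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p, n, trials = 0.6, 30, 20000
f, g = 0.64, 0.45

def log_growth(strategy, h):
    ob1 = o11 * strategy + o12 * (1 - strategy)
    ob2 = o21 * strategy + o22 * (1 - strategy)
    return h * math.log(ob1) + (n - h) * math.log(ob2)

rng = random.Random(5)
wins_fg = wins_gf = 0
for _ in range(trials):
    h = sum(1 for _ in range(n) if rng.random() < p)
    lw0 = rng.normalvariate(0, 1)   # log W_0 (continuous, so no ties a.s.)
    lv0 = rng.normalvariate(0, 1)   # log V_0, same distribution
    if lw0 + log_growth(f, h) > lv0 + log_growth(g, h):
        wins_fg += 1
    else:
        wins_gf += 1
m_sum = wins_fg / trials + wins_gf / trials
```

With a positive extinction threshold \(d > 0\), paths on which either lineage dips below d would be excluded from both counts, pushing the sum strictly below one, which is case (ii).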
Appendix I: The Symmetric Nash Equilibrium Solution to Payoff \(M_n(f,g)\)
Proposition 2
We always have, denoting \(\alpha _1 = \frac{\overline{o}_1(f)}{\overline{o}_1(f^{\mathrm{Kelly}})}\) and \(\alpha _2 = \frac{\overline{o}_2(f)}{\overline{o}_2(f^{\mathrm{Kelly}})}\),
$$\begin{aligned} p \alpha _1 +(1-p)\alpha _2 \le 1, \end{aligned}$$
and the equality happens if and only if \(p_{-}<p<p_{+}\).
Proof
For given f, g, we denote by \(\alpha _1 = \frac{\overline{o}_1(f)}{\overline{o}_1(g)}\), \(\alpha _2 = \frac{\overline{o}_2(f)}{\overline{o}_2(g)}\). We have
On the other hand, from the formula \( f^{\mathrm{Kelly}} = {\left\{ \begin{array}{ll} 0,&{} \text { if } p\in [0,p_{-}]\\ \frac{(1-p) o_{12}}{o_{12}-o_{11}} + \frac{p o_{22}}{o_{22}-o_{21}},&{} \text { if } p\in [p_{-}, p_{+}]\\ 1,&{} \text { if } p\in [p_{+},1], \end{array}\right. } \) we have for any pair \((f,f^{\mathrm{Kelly}})\), \(p \alpha _1 +(1-p)\alpha _2 = 1\) if \(p\in [p_{-}, p_{+}]\) and \(p \alpha _1 +(1-p)\alpha _2 < 1\) if \(p\notin [p_{-}, p_{+}]\). \(\square \)
Proposition 3
We consider a game with payoff, without extinction,
$$\begin{aligned} M_n(f,g) = \mathbb {P}\big (W_0 W_n(f) > V_0 W_n(g)\big ), \end{aligned}$$
where \(W_0, V_0\) have the same distribution. Then, in this game, \((f^{\mathrm{Kelly}},f^{\mathrm{Kelly}})\) is a strict Nash equilibrium.
Proof
First, we note that
where \(A_1 = \{s\in \{0,\ldots ,n\}: \alpha _1^s \alpha _2^{n-s}<1\}\) and \(A_2 = \{0,\ldots ,n\} - A_1\). Therefore, for \(f=g\) we have \(\alpha _1=\alpha _2=1\), which implies \(A_1=\emptyset \), \(A_2=\{0,\ldots ,n\}\) and
$$\begin{aligned} M_n(f,f) = \frac{1}{2}. \end{aligned}$$
For any \(f\ne f^{\mathrm{Kelly}}\), by using the Cauchy inequality for the second term, we have
From Proposition 2, we have
Therefore \((f^{\mathrm{Kelly}},f^{\mathrm{Kelly}})\) is a strict Nash equilibrium. \(\square \)
Proposition 4
The above Nash equilibrium is the unique one in the game.
Proof
Assume that \((f_0,g_0) \ne (f^{\mathrm{Kelly}}, f^{\mathrm{Kelly}})\) is another Nash equilibrium. Without loss of generality, we assume that \(g_0 \ne f^{\mathrm{Kelly}}\). By definition of a Nash equilibrium, we have \(M_n(f_0,g_0) \ge M_n(f,g_0)\) for all f and \(M_n(g_0,f_0) \ge M_n(g,f_0)\) for all g. By choosing \(f=g=f^{\mathrm{Kelly}}\) and using Proposition 1 we have \(M_n(f_0,g_0) \ge M_n(f^{\mathrm{Kelly}},g_0) > \frac{1}{2}\) and \(M_n(g_0,f_0) \ge M_n(f^{\mathrm{Kelly}},f_0) \ge \frac{1}{2}\). This implies that \(M_n(f_0,g_0) + M_n(g_0,f_0) > 1\), which is a contradiction to Proposition 1. Therefore \((f^{\mathrm{Kelly}}, f^{\mathrm{Kelly}})\) is the unique Nash equilibrium (see Fig. 8, where the equilibrium lies at the saddle point of the payoff landscape). \(\square \)
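A Monte Carlo sketch of the equilibrium property (illustrative, hypothetical matrix; \(d=0\); i.i.d. log-normal initial sizes): any deviation from Kelly earns at most \(\frac{1}{2}\) against Kelly, and deviations further from Kelly earn less.

```python
import math
import random

# Hypothetical 2x2 fitness matrix with an interior Kelly fraction.
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p, n, trials = 0.6, 30, 20000
f_kelly = (1 - p) * o12 / (o12 - o11) + p * o22 / (o22 - o21)

def log_growth(strategy, h):
    ob1 = o11 * strategy + o12 * (1 - strategy)
    ob2 = o21 * strategy + o22 * (1 - strategy)
    return h * math.log(ob1) + (n - h) * math.log(ob2)

def M(f, g, seed):
    """Estimate P(W0*W_n(f) > V0*W_n(g)) with i.i.d. log-normal W0, V0."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        h = sum(1 for _ in range(n) if rng.random() < p)
        if (rng.normalvariate(0, 1) + log_growth(f, h)
                > rng.normalvariate(0, 1) + log_growth(g, h)):
            wins += 1
    return wins / trials

m_far = M(0.2, f_kelly, seed=1)       # a strategy far from Kelly
m_mid = M(0.5, f_kelly, seed=2)       # a strategy closer to Kelly
m_self = M(f_kelly, f_kelly, seed=3)  # Kelly against itself
```

Kelly against itself earns exactly \(\frac{1}{2}\) (by the symmetry of \(W_0\) and \(V_0\)), while both deviations earn strictly less, consistent with the strict symmetric equilibrium.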
Appendix J: The Symmetric Nash Equilibrium Solution to Payoff \(M_c (f,g)\)
Proposition 5
We consider a game with payoff defined as (6) without extinction
Then, in this game, \((f^{\mathrm{Kelly}},f^{\mathrm{Kelly}})\) is a strict Nash equilibrium.
Proof
First we note that
Then, from Proposition 3 we have
and
Therefore \((f^{\mathrm{Kelly}},f^{\mathrm{Kelly}})\) is a strict Nash equilibrium. \(\square \)
Proposition 6
\((f^{\mathrm{Kelly}},f^{\mathrm{Kelly}})\) is the unique Nash equilibrium.
Proof
We first note that for all f, g
The remaining part of the proof is similar to the proof of Proposition 4. \(\square \)
It is worthwhile here to highlight a link between this payoff and \(M_n(f,g)\). Formally, \(M_c(f,g)\) can be rewritten as a convex linear combination of \(M_n(f,g)\):
This has a straightforward interpretation: for each event \((T(f,c) = n)\), [a] the event \((T(f,c) < T(g,c))\) is equivalent to the event \((T(g,c) > n)\) or \((V_0 W_n(g) < c \le W_0 W_n(f))\), and [b] the event \((T(f,c) = T(g,c), W_0 W_{T(f,c)} > V_0 W_{T(g,c)}(g))\) is equivalent to the event \((c \le V_0 W_n(g) < W_0 W_n(f))\). Consequently the combination of the two events \((T(f,c) < T(g,c))\) and \((T(f,c) = T(g,c), W_0 W_{T(f,c)} > V_0 W_{T(g,c)}(g))\) is equivalent to the event \((W_0 W_n(f) > V_0 W_n(g))\).
Appendix K: The Probability Payoff Matrix Converges with Horizon n to the Expected Log Matrix
Proposition 7
For any pair (f, g) with \(G(f)\ne G(g)\), we have
$$\begin{aligned} M_{\infty }(f,g) := \lim _{n\rightarrow \infty } M_n(f,g) = {\left\{ \begin{array}{ll} 1,&{} \text { if } G(f) > G(g),\\ 0,&{} \text { if } G(f) < G(g). \end{array}\right. } \end{aligned}$$
Proof
If \(G(f)-G(g) = \varepsilon >0\), then by a similar argument as in Appendix D, there exists \(n_0 < \infty \) such that for all \(n\ge n_0\) and all \(\mathbf {x}\)
Therefore, for all \(n\ge n_0\) and all \(\mathbf {x}\)
This implies that
Therefore, \(M_{\infty }(f,g) = 1\). Similarly we obtain \(M_{\infty }(f,g) = 0\) if \(G(f)<G(g)\). \(\square \)
Remark 2
For the case \(G(f) = G(g)\) there are only two cases, \(g=f\) or \(g=\hat{f}\). If \(g=f\) we have \(M_{\infty }(f,f) = \frac{1}{2}\). If \(g=\hat{f}\) we do not know the value of \(M_{\infty }(f,\hat{f})\).
See Fig. 9 for a graphical illustration of the convergence.
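The convergence can also be computed exactly for deterministic equal initial sizes: for \(k=2\), \(M_n(f,g)\) is a binomial tail probability, which approaches 1 as n grows whenever \(G(f) > G(g)\). A sketch with an illustrative, hypothetical matrix (f near Kelly, g an inferior strategy):

```python
import math

# Hypothetical matrix; f = 0.64 (near Kelly) has a higher growth
# rate than g = 0.3, so M_n(f, g) should climb toward 1 with n.
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p, f, g = 0.6, 0.64, 0.3

def obars(s):
    return o11 * s + o12 * (1 - s), o21 * s + o22 * (1 - s)

of1, of2 = obars(f)
og1, og2 = obars(g)
a = math.log(of1 / og1)   # per-step log advantage of f in environment 1
b = math.log(of2 / og2)   # per-step log advantage in environment 2 (< 0)

def binom_logpmf(n, s):
    """Log of the Binomial(n, p) probability mass at s."""
    return (math.lgamma(n + 1) - math.lgamma(s + 1) - math.lgamma(n - s + 1)
            + s * math.log(p) + (n - s) * math.log(1 - p))

def M(n):
    """Exact P(W_n(f) > W_n(g)) for deterministic equal initial sizes."""
    return sum(math.exp(binom_logpmf(n, s))
               for s in range(n + 1) if s * a + (n - s) * b > 0)

m10, m100, m1000 = M(10), M(100), M(1000)
```

The payoff cell increases monotonically through these horizons and is already indistinguishable from 1 at \(n=1000\), which is the "maximally contrasted" limit described above.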
Appendix L: Nash Equilibrium in Population Size N
In this section, we show that the Nash solution for population size N, denoted by \(f^*_N\), is the strategy closest to Kelly under the finite-resolution regime, and that it converges to the Kelly strategy as \(N\rightarrow \infty \). Denote by \(f^*_N\) the closest element to \(f^{\mathrm{Kelly}}\) in \(I_N:=\{0,\frac{1}{N},\ldots , 1\}\), i.e., \(f^*_N = {{\,\mathrm{\mathrm{arg\,min}}\,}}_{f\in I_N}|f-f^{\mathrm{Kelly}}|\). We show that \((f^*_N,f^*_N)\) is the Nash solution for the game with strategies restricted to \(I_N\). By the definition of \(f^*_N\), we have \(|f^*_N - f^{\mathrm{Kelly}}| \le \frac{1}{N} \rightarrow 0\) as \(N\rightarrow \infty \). It thus remains to show that \(M_n(f^*_N, f^*_N) \ge M_n(f, f^*_N)\) for all \(f \in I_N\). Indeed, we already have from (10) that \(M_n(f^*_N, f^*_N) = \frac{1}{2}\). Moreover, we have \(p \log \overline{o}_1(f) + (1-p) \log \overline{o}_2(f) < p \log \overline{o}_1(f^*_N) + (1-p) \log \overline{o}_2(f^*_N)\) for all \(f\in I_N {\setminus } \{f^*_N\}\). Therefore there exists \(\varepsilon >0\) such that
Thus, for every \(f\in I_N {\setminus } \{f^*_N\}\) we have \(\sum _{s=0}^n \alpha (s) P(s) < -n\varepsilon \) where \(\alpha (s) := s \log \frac{\overline{o}_1(f)}{\overline{o}_1(f^*_N)} + (n-s) \log \frac{\overline{o}_2(f)}{\overline{o}_2(f^*_N)}\). We assume that \(\log W_0\) and \(\log V_0\) have the same distribution with \(\mathrm {supp}\log W_0 \supset \{\alpha (0),\ldots ,\alpha (n)\}\) and \(|\mathrm {supp}\log W_0| = |\mathrm {supp}\log V_0| = r > 2n \varepsilon \). Denote by \(A_1=\{s: \alpha (s)<0\}\), \(A_2=\{s: \alpha (s)\ge 0\}\) and \(\delta = \frac{\frac{1}{2} - \frac{n\varepsilon }{r}}{\frac{3}{4}- \frac{n\varepsilon }{r}} \in (0,\frac{1}{2})\). We have
Note that \(\frac{\alpha (s)}{r}\in [-1,0]\) for \(s\in A_1\) and \(\frac{\alpha (s)}{r}\in [0,1]\) for \(s\in A_2\). Moreover, \(x(\delta + x/2) \le \frac{1}{2}-\delta < \frac{1}{2} - \frac{3}{4} \delta \) for \(x\in [-1,0]\); \(x(\delta - x/2) \le \frac{\delta ^2}{2} < \frac{1}{2} - \frac{3}{4} \delta \) for \(x\in [0,1]\). Therefore, for every \(f\in I_N {\setminus } \{f^*_N\}\) we have
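The discretized equilibrium is easy to tabulate (illustrative, hypothetical matrix): the grid point of \(I_N\) nearest to Kelly lies within \(\frac{1}{2N}\) of it and converges as N grows.

```python
# Hypothetical 2x2 fitness matrix with an interior Kelly fraction.
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p = 0.6
f_kelly = (1 - p) * o12 / (o12 - o11) + p * o22 / (o22 - o21)

def f_star(N):
    """Nearest element of I_N = {0, 1/N, ..., 1} to the Kelly fraction."""
    return min((i / N for i in range(N + 1)),
               key=lambda x: abs(x - f_kelly))

gaps = {N: abs(f_star(N) - f_kelly) for N in (10, 100, 1000)}
```

The gap shrinks by roughly a factor of ten per tenfold increase in N, matching the \(\frac{1}{N}\) bound used in the argument above.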
Appendix M: Nash Equilibrium in Nonstationary Environments
Proposition 8
We also consider a game in which the payoffs of the players are
where \(W_0, V_0\) have the same distribution. Then, in this game, \((f^{\mathrm{Kelly}},f^{\mathrm{Kelly}})\) is the unique strict Nash equilibrium.
Proof
We note that in the nonstationary case we have
where \(H\sim GB\big (n,\{p_1,\ldots , p_n\}\sim Beta(\alpha ,\beta )\big )\) is a generalized binomial distribution. Therefore the proof is similar to the proof in Proposition 3 and is omitted. \(\square \)
Appendix N: Limit of the Extinction Rate
Proposition 9
Denote by
the probability that extinction does not occur up to time n, and by \(P_{n,d}(f) = 1-Q_{n,d}(f)\) the probability of extinction by time n (see also Fig. 6). We prove that the limit \(c_d(f) := \lim _{n\rightarrow \infty } P_{n,d}(f)\) exists.
Proof
For the sake of simplicity, we denote by
Then we rewrite the formula
(i)
If \(\overline{o}_1(f), \overline{o}_2(f)>1\): we have \(\beta _{n,d}(x_1,\ldots ,x_{n-1},1) = \beta _{n,d}(x_1,\ldots ,x_{n-1},0) = \beta _{n-1,d}(x_1,\ldots ,x_{n-1})\), therefore \(Q_{n,d}(f) = Q_{n-1,d}(f) =\cdots = Q_{0,d}= \mathbb {P}(W_0>d) = 1\) for all n. Therefore \(\lim _{n\rightarrow \infty }P_{n,d}(f) = 0\).
(ii)
If \(\overline{o}_1(f), \overline{o}_2(f)<1\): we have \(\beta _{n,d}(x_1,\ldots ,x_{n}) = \frac{d}{\overline{o}_1(f)^{x_1+\cdots +x_n}\overline{o}_2(f)^{n-x_1-\cdots -x_n}}\) which approaches infinity with n. Therefore for n large enough, \(Q_{n,d} = 0\). This implies \(\lim _{n\rightarrow \infty }P_{n,d}(f) = 1\).
(iii)
If \(\overline{o}_1(f)> 1> \overline{o}_2(f)\): we have \(\beta _{n,d}(x_1,\ldots ,x_{n-1},1) = \beta _{n-1,d}(x_1,\ldots ,x_{n-1})\). Note that \(Q_{n,d}\) is decreasing and bounded below by 0; therefore the limit of \(Q_{n,d}(f)\) exists, and hence so does the limit of \(P_{n,d}(f)\). \(\square \)
Remark 3
\(c_d(f)\) is increasing with d (see Fig. 6).
Proof
If \(d_1>d_2\) then \(\beta _{n,d_1}(x_1,\ldots ,x_n) > \beta _{n,d_2}(x_1,\ldots ,x_n)\), therefore \(Q_{n,d_1}(f) \le Q_{n,d_2}(f)\), which implies that \(c_{d_1}(f) \ge c_{d_2}(f)\). \(\square \)
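A simulation sketch of case (iii) (illustrative, hypothetical matrix; \(W_0 = 1\); f near Kelly so that \(\overline{o}_1 > 1 > \overline{o}_2\)): extinction probabilities are estimated pathwise for two thresholds, and monotonicity in d holds by construction, since any path that dips below \(d=0.5\) has already dipped below \(d=0.8\).

```python
import math
import random

# Hypothetical matrix; f near Kelly gives obar1 > 1 > obar2 (case iii).
o11, o12, o21, o22 = 2.0, 0.5, 0.4, 1.8
p, f, n, trials = 0.6, 0.64, 200, 5000
step_up = math.log(o11 * f + o12 * (1 - f))   # log obar1 > 0
step_dn = math.log(o21 * f + o22 * (1 - f))   # log obar2 < 0

rng = random.Random(13)
extinct = {0.5: 0, 0.8: 0}       # extinction counts per threshold d
for _ in range(trials):
    logw, low = 0.0, 0.0         # log wealth and its running minimum
    for _ in range(n):
        logw += step_up if rng.random() < p else step_dn
        low = min(low, logw)
    for d in extinct:
        if low <= math.log(d):   # lineage dipped to W_i <= d
            extinct[d] += 1
p_low = extinct[0.5] / trials    # stricter threshold: rarer extinction
p_high = extinct[0.8] / trials
```

The positive drift of the log-wealth walk keeps the extinction probability for the lower threshold small, while the higher threshold captures noticeably more paths, in line with \(c_d(f)\) increasing in d.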
Tal, O., Tran, T.D. Adaptive Bet-Hedging Revisited: Considerations of Risk and Time Horizon. Bull Math Biol 82, 50 (2020). https://doi.org/10.1007/s11538-020-00729-8