1 Overview

Dynamical systems arise in several practical real-world situations beyond classical physical systems such as astronomy and mechanics. The dynamics could be studied purely within a single variable, say a set X, or between two interacting or coupled variables, say X and Y, or between several interacting variables. We start from basic dynamics arising out of two arbitrary associated variables and move to more complex dynamical systems.

Let us consider an arbitrary space X consisting of all possible outcomes (discretely) of a random experiment. Consider a function \(\varphi _{t}^{x}\) which, for each value of t with \(t\in \{t_{1},t_{2},\ldots \},\) picks a value x for \(x\in X.\) The values that the function \(\varphi _{t}^{x}\) picks and the order of the picking can be pre-arranged through a deterministic model, or else they can be governed by a random process model. A differential equation model with a parameter space \(\Theta\) can be employed to obtain \(\varphi _{t_{i}}^{x_{i}}(\Theta )\) for \(i=1,2,\ldots\) The complexity of the dynamics of \(\varphi _{t}^{x}\) could depend on X alone, with the parameter space, say, \(\Theta _{X}\) describing the changes attributed to X alone, or the complexity could depend on some external spaces, say, Y, Z, W, etc. Classical mechanics models do not capture much complexity and hence the assumptions on \(\Theta _{X}\) are straightforward. The parameter spaces describing the dynamics corresponding to the spaces Y, Z, W could be \(\Theta _{Y}, \Theta _{Z}, \Theta _{W}\). The points or values within the space X can be imagined as states in X. Thus the transformation from \(\varphi _{t_{i}}^{x_{i}}(\Theta )\) to \(\varphi _{t_{i+1}}^{x_{i+1}}(\Theta )\) can be treated as a change in the state within the time-step \(t_{i+1}-t_{i}\) by the dynamical system with the base space X. These transformations, which form a semigroup, determine the overall system dynamics (see Fig. 1). The quantity \(t_{i+1}-t_{i}\) is greater than zero, and it could be the same for each i, \(i=1,2,\ldots\), or it could differ. When the outputs of \(\varphi _{t}^{x}\) result from a differential equation model, then \(t_{i+1}-t_{i}\) is usually constant for all i.
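
As a minimal sketch of this setup (the logistic map below is an illustrative stand-in for a parameterization \(\Theta _{X}\), not a model from the text), a deterministic rule generates the sequence \(\varphi _{t_{1}}^{x_{1}},\varphi _{t_{2}}^{x_{2}},\ldots\) with a constant time step:

```python
# Sketch: a deterministic trajectory phi_t^x on the state space X generated
# by a one-parameter rule, with a constant time step t_{i+1} - t_i.
# The logistic map stands in for an arbitrary parameterization Theta_X.

def trajectory(theta, x0, n_steps):
    """Return the sequence phi_{t_0}^{x_0}, phi_{t_1}^{x_1}, ..., phi_{t_n}^{x_n}."""
    xs = [x0]
    for _ in range(n_steps):
        # the next state depends only on the current one (and on theta)
        xs.append(theta * xs[-1] * (1.0 - xs[-1]))
    return xs

path = trajectory(theta=2.5, x0=0.2, n_steps=50)
```

Because the rule and the initial state fix the whole sequence, re-running the sketch with the same \((\theta ,x_{0})\) reproduces the trajectory exactly.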

When the dynamics within the base space X are influenced by another space Y, the resultant dynamics, say \(\varphi _{t}^{(x,y)}\), could differ from \(\varphi _{t}^{x}\) (see Fig. 2). Suppose there is another space Y that has some influence over the values that the function \(\varphi _{t}^{x}\) picks. Later in this section, we mention a classical pendulum example in which the space Y is a magnetic field. Here \(Y\ne \phi\) (empty set), and Y could be a single space or the union of a collection of spaces. Instead of \(\varphi _{t}^{x}\), we write the function \(\varphi _{t}^{(x,y)}\) for \(x\in X\) and \(y\in Y\). Let the corresponding parameter space be \(\Theta _{XY}\). Both the functions \(\varphi _{t}^{x}\) and \(\varphi _{t}^{(x,y)}\) map from the same domain to the space X, i.e. \(\varphi _{t}^{x}:t\rightarrow X\) and \(\varphi _{t}^{(x,y)}:t\rightarrow X\), but for \(\Theta _{X}\ne \Theta _{XY}\) the values that \(\varphi _{t}^{x}\) picks in X under \(\Theta _{X}\) are different from the values that \(\varphi _{t}^{(x,y)}\) picks in X under \(\Theta _{XY}\). The sequence of values generated under the transformation \(\varphi _{t}^{x}\) is \(\{\varphi _{t_{1}}^{x_{1}},\varphi _{t_{2}}^{x_{2}},\ldots \}\) and the sequence of values generated under the transformation \(\varphi _{t}^{(x,y)}\) is \(\{\varphi _{t_{1}}^{(x_{1},y_{1})},\varphi _{t_{2}}^{(x_{2},y_{2})},\ldots \}\). The order of the values picked by \(\varphi _{t}^{(x,y)}\) is assumed different from the order of the values picked by \(\varphi _{t}^{x}\). The mapping rules by which the functions \(\varphi _{t}^{x}\) and \(\varphi _{t}^{(x,y)}\) pick values can be either prefixed (deterministic framework), as mentioned above, or the values that these functions pick in X could be outcomes of a stochastic framework, which we discuss below. The order-of-picking rules could differ for different systems under study.
Under the deterministic framework, if there is no influence of Y, then we can represent the dynamics created by \(\varphi _{t}^{x}\) by

$$\begin{aligned} \dot{X}=\varphi \left( \Theta _{X},X\right) , \end{aligned}$$
(1.1)

and if there is influence by Y then we can represent the dynamics created by \(\varphi _{t}^{(x,y)}\) by

$$\begin{aligned} \dot{X}=\varphi \left( \Theta _{XY},X,Y\right) . \end{aligned}$$
(1.2)

The value of \(\varphi _{t}^{(x,y)}\) at the time step \(t_{i+1}\) will depend on the value of \(\varphi _{t_{i}}^{(x,y)}\) and on \(\Theta _{XY}\), but not on the value of the function that was obtained at \(t_{i-1}\) for \(i=1,2,\ldots\). This kind of dependency only on the recent past, when blended with some probabilistic features, is known as the Markovian property. The initial value of the state X, say, \(\varphi _{t_{0}}^{(x,y)}\), together with a parameter space \(\Theta _{XY}\), produces a unique sequence of functional values \((\varphi _{t}^{(x,y)})_{t>0}\), i.e. a trajectory

$$\begin{aligned} \varphi _{t_{0}}^{(x,y)}\rightarrow \varphi _{t_{1}}^{(x,y)}\rightarrow \varphi _{t_{2}}^{(x,y)}\rightarrow \cdots . \end{aligned}$$

Unless the initial combinations of states and parameter spaces are perturbed, the systems (1.1) and (1.2) will always generate exactly the same dynamics over time t. Such exactness of the paths or trajectories in deterministic systems guides several real-world situations, for example, designing spacecraft and satellites, predicting orbital paths, etc. The stability properties of such deterministic systems are relatively easy to obtain, and it is also relatively easy to verify whether stability holds (see, for example,1). In the present context, the idea of system stability can be explained as follows.

Let \(\varphi _{t}^{(x^{*},y)}\) be the value of the mapping at the equilibrium point \(x^{*}\), and let \(d_{X}\) be a distance function on X. The system is stable if, for every \(\epsilon >0,\) there exists a \(\delta >0\) such that

$$\begin{aligned} d_{X}\left( \varphi _{t_{0}}^{(x^{*},y)},\varphi _{t_{0}}^{(x,y)}\right) <\delta \end{aligned}$$
(1.3)

implies

$$\begin{aligned} d_{X}\left( \varphi _{t}^{(x^{*},y)},\varphi _{t}^{(x,y)}\right) <\epsilon \text { for }t\ge t_{0}. \end{aligned}$$
(1.4)

The system becomes asymptotically stable whenever

$$\begin{aligned} d_{X}\left( \varphi _{t_{0}}^{(x^{*},y)},\varphi _{t_{0}}^{(x,y)}\right) <\delta , \end{aligned}$$
(1.5)

implies that

$$\begin{aligned} \varphi _{t}^{(x,y)}\ \text {converges to }\ \varphi _{t}^{(x^{*},y)}\text { as }\ t\rightarrow \infty . \end{aligned}$$
(1.6)

Small perturbations will have a lesser impact on attaining stability of systems in comparison with large perturbations. Systems that do not attain stability, like complex continuous evolutionary systems, are also sometimes useful. Lyapunov and Poincaré introduced various techniques for understanding the stability of deterministic systems1,2,3,4. We will see later that such a feature of uniqueness is not maintained under the stochastic framework.
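
A numerical illustration of these stability conditions (a sketch assuming the scalar system \(\dot{x}=-x\) with equilibrium \(x^{*}=0\), and Euler time-stepping in place of the exact flow):

```python
# Numerical illustration of the stability conditions (1.3)-(1.6) for the
# scalar system x' = -x, whose equilibrium is x* = 0.  Euler steps stand in
# for the flow phi_t; d_X is the absolute difference on X = R.

def flow(x0, t, dt=1e-3):
    """Euler approximation of the solution at time t starting from x0."""
    x = x0
    for _ in range(int(t / dt)):
        x += dt * (-x)
    return x

delta = 0.1
x0 = 0.05                      # start within delta of the equilibrium
assert abs(x0 - 0.0) < delta   # the initial distance is below delta, as in (1.5)

# distances to the equilibrium shrink along the trajectory ...
dists = [abs(flow(x0, t)) for t in (0.0, 1.0, 2.0, 5.0)]
assert all(a >= b for a, b in zip(dists, dists[1:]))
# ... and the trajectory converges to x*, illustrating asymptotic stability (1.6)
assert abs(flow(x0, 10.0)) < 1e-4
```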

In classical pendulum mechanics, this phenomenon of Y influencing the values \(\varphi _{t}^{(x,y)}\) picked from the space X can be treated as adding a magnetic field around the pendulum that influences the path of the oscillations. By changing the strength of the magnetic field, the motion of the pendulum can be altered; Y acts like an external factor. In that case, \(\varphi _{t}^{x}\) corresponds to the pendulum oscillating without any external influences beyond the standard gravitational forces that affect any normal pendulum motion. Deterministic modeling equations with a predetermined \(\Theta\) are sufficient when practical situations, like pendulum movement and the factors influencing it, are well understood. This is true even if external spaces are known to influence the location of the pendulum. The value of time could be continuous, for example, \(t\in [0,\infty ).\)

Figure 1:

The dynamics in the space X due to transformations \(\varphi _{t}^{x}\). External factors outside the space X do not influence the dynamics: when a dynamical system is built using only the points within a space X to understand the mapping of \(\varphi _{t}^{x}\) into X, such a system cannot be influenced by factors outside the space X.

Figure 2:

The dynamics in the space X due to transformations \(\varphi _{t}^{(x,y)}\). The external space Y influences the dynamics of the base space X.

Under the stochastic framework, the jump from \(\varphi _{t_{i}}^{(x_{i},y)}\) to \(\varphi _{t_{i+1}}^{(x_{i+1},y)}\) is decided by a random process model. Under this framework, let us write \(\xi _{t_{i}}^{(x_{i},y)}\) for the values of X under the influence of another space Y mapped at the time step \(t_{i}\) for \(i=0,1,2,\ldots\). Let \(\pi _{j}\) be the probability that initially the system is at the state \(x_{j}\) for \(x_{j}\in X\), so that \(\text {Prob}[\xi _{t_{0}}^{(x,y)}=x_{j}]=\pi _{j}.\) A random process model will have a state space X, a random variable (\(\xi\)), and some governing rules to pick values from X according to the random variable. A random variable is a real-valued function whose domain \(\Omega\) consists of all possible outcomes of an experiment. The state space will consist of all possible values that \(\xi _{t}^{(x,y)}\) can pick. Here t could be discrete or continuous, and the state space could be discrete or continuous. Let the first value picked by \(\xi _{t}^{(x,y)}\) at \(t_{1}\) be denoted by \(x_{1}\), so that \(\xi _{t_{1}}^{(x,y)}=x_{1}\) for \(x_{1}\in X;\) let the second value picked at \(t_{2}\) be denoted by \(x_{2}\), so that \(\xi _{t_{2}}^{(x,y)}=x_{2}\), and so on; thus \(\xi _{t_{i}}^{(x,y)}=x_{i}\) for \(i=2,3,\ldots\). After starting from \(x_{j}\), at each time step the process might pick a new state or remain at the same state. We call this a transition to a new state or remaining at the same state. This construction is reminiscent of a Turing machine.
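
The governing rules can be sketched as follows; the transition matrix P and the initial distribution \(\pi\) are illustrative choices, not values from the text:

```python
import random

# Sketch of the stochastic framework: the process xi_{t_i}^{(x,y)} moves among
# the states of X = {0, 1, 2} according to transition probabilities p_{x_i, x_j}.
P = [[0.5, 0.4, 0.1],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
pi = [0.6, 0.3, 0.1]      # pi_j = Prob[the system starts in state x_j]

def sample_path(n_steps, rng):
    """One realization x_j -> xi_{t_1} -> xi_{t_2} -> ... of the transitions."""
    path = [rng.choices(range(3), weights=pi)[0]]   # draw the initial state
    for _ in range(n_steps):
        # next state drawn from the row of P for the current state
        path.append(rng.choices(range(3), weights=P[path[-1]])[0])
    return path

path = sample_path(20, random.Random(0))
assert len(path) == 21 and all(state in (0, 1, 2) for state in path)
```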

The transitions

$$\begin{aligned} x_{j}\rightarrow \xi _{t_{1}}^{(x,y)}\rightarrow \xi _{t_{2}}^{(x,y)}\rightarrow \cdots , \end{aligned}$$
(1.7)

might represent a constant sequence \(x_{j}\), a sequence of distinct states, or a combination of distinct and repeated states of X (see Fig. 3). Let \(p_{x_{i},x_{j}}\) represent the probability of transition from the state \(x_{i}\) to the state \(x_{j}\). Then the above transitions in (1.7) are represented by the corresponding probabilities as shown below:

$$\begin{aligned} \begin{array}{c} p_{x_{j},x_{1}}\\ \overbrace{x_{j}\rightarrow \xi _{t_{1}}^{(x,y)}} \end{array}\ \ ,\ \ \begin{array}{c} p_{x_{1},x_{2}}\\ \overbrace{\xi _{t_{1}}^{(x,y)}\rightarrow \xi _{t_{2}}^{(x,y)}} \end{array}\ \ ,\ \ \begin{array}{c} p_{x_{2},x_{3}}\\ \overbrace{\xi _{t_{2}}^{(x,y)}\rightarrow \xi _{t_{3}}^{(x,y)}} \end{array}\rightarrow \cdots . \end{aligned}$$
(1.8)
Figure 3:

States in the space X under a stochastic environment. A random model could return to the same state at a different time step as shown at \(x_{4}\), \(x_{7}\) and at \(x_{j},x_{8},\text {and }x_{9}\).

The general questions that we ask in these frameworks are related to quantifying the probabilities of transitions between the states of X and whether the states obey recurrent or transient properties. Does there exist any periodicity for the states? What is the long-term behavior of sequences of probabilities? A state i is said to be recurrent if \(\sum _{n=1}^{\infty }f_{ii}^{(n)}=1\), and otherwise it is transient, where \(f_{ii}^{(n)}\) denotes the probability that a random variable starting from the initial state i returns for the first time to state i in the nth step. Let d(i) be the greatest common divisor of all integers \(n\ge 1\) for which the probability of transition from the state i to i in n steps is greater than zero; then the state i is said to have period d(i). If the transition from \(\xi _{t_{i}}^{(x,y)}\) to \(\xi _{t_{i+1}}^{(x,y)}\) depends only on the state that the random variable associated with the process picks at time \(t_{i}\), for all \(i=1,2,\ldots\), then we say that the system of random variables obeys the Markov property5,6.
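
For a small finite chain, the period d(i) can be computed directly as the gcd of the step counts with positive return probability; the two-state matrix below is an illustrative example whose states both have period 2:

```python
from math import gcd

# Sketch: the period d(i) of a state, computed as the gcd of all step counts
# n <= max_n for which the n-step return probability p_ii^(n) is positive.
# This two-state chain alternates deterministically, so each state has period 2.
P = [[0.0, 1.0],
     [1.0, 0.0]]

def mat_mul(A, B):
    """Plain-list matrix product, to keep the sketch dependency-free."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def period(P, i, max_n=20):
    d, Pn = 0, P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:        # return to i in n steps has positive probability
            d = gcd(d, n)
        Pn = mat_mul(Pn, P)     # advance to the (n+1)-step matrix
    return d

assert period(P, 0) == 2
```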

When all the states of X are recurrent, and each can be reached from the other states (either directly or indirectly) as shown in Fig. 3, and there is a unique invariant distribution \(\pi\) such that \(\pi =\pi P,\) where \(P=(p_{x_{i}x_{j}})_{ij}\) is the transition probability matrix, then

$$\begin{aligned} \lim _{n\rightarrow \infty }p_{x_{i}x_{j}}^{(n)}=\pi _{x_{j}}. \end{aligned}$$
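
A small numerical check of the invariance \(\pi =\pi P\) and of this limiting behavior, using an illustrative two-state transition matrix:

```python
# Sketch: for an irreducible, aperiodic chain the distribution pi P^n converges
# to the invariant distribution pi satisfying pi = pi P.  Plain lists keep
# this self-contained; the matrix P is illustrative.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(pi, P):
    """One application of the transition matrix: returns pi P."""
    return [sum(pi[i] * P[i][j] for i in range(len(pi)))
            for j in range(len(P[0]))]

pi = [1.0, 0.0]           # start entirely in state 0
for _ in range(200):      # repeated application of P (power iteration)
    pi = step(pi, P)

# pi is (numerically) invariant: pi = pi P
assert all(abs(a - b) < 1e-10 for a, b in zip(pi, step(pi, P)))
# the exact invariant distribution for this P is (5/6, 1/6)
assert abs(pi[0] - 5 / 6) < 1e-9 and abs(pi[1] - 1 / 6) < 1e-9
```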

One of the key differences between the dynamics in the sense of Lyapunov and the dynamics in the stochastic framework is that, in the former, the trajectory created by a set of initial values is unique, whereas, in the latter framework, the path connecting states in X created by the same starting state \(x_{j}\), for \(x_{j}\in X\), need not be unique. This distinction between the two philosophies challenges the usage of Lyapunov-type dynamical systems for randomly evolving natural phenomena, like genetic or parasite evolution models. However, mathematical dynamical systems have profound applications when the dynamics have minimal influence from random events, as in mechanical systems, space exploration, solar systems, etc.

Proposition 1

Let \((D,\Theta _{X})\) be the differential equation-based dynamical system on a space X with parameter space \(\Theta _{X}\), and let \(\psi _{D}\) be the set of trajectories generated by \(\Theta _{X}.\) Let (M, X) be the stochastic dynamics created by the model M on the same space X. Let \(p_{x_{i},x_{j}}\) be the transition probabilities from \(x_{i}\) to \(x_{j}\) for all \(x_{i},x_{j}\) in X. Let \(\gamma\) be a path joining the states in X after the states are selected by the function \(\xi\), with \(\gamma :[t_{0},t_{n}]\rightarrow X\), where \(\gamma (t_{0})=x_{j}\) and \(\gamma (t_{n})=x_{n}\) for \(x_{j},x_{n}\in X\). Then \(\psi _{D}\) is unique on \([t_{0},t_{n}]\) whereas \(\gamma\) need not be unique.
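
Proposition 1 can be illustrated numerically; the contraction map and the random walk below are hypothetical stand-ins for \((D,\Theta _{X})\) and (M, X), not models from the text:

```python
import random

# Illustration of Proposition 1: the trajectory psi_D of a deterministic
# system from a fixed initial value is unique, while the stochastic paths
# gamma from the same starting state need not be.

def deterministic_path(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(0.5 * xs[-1])                    # an illustrative contraction on X = R
    return xs

def random_path(x0, n, rng):
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + rng.choice([-1, 1]))    # an illustrative +/-1 random walk
    return xs

# Same initial value: the deterministic trajectory is reproduced exactly ...
assert deterministic_path(1.0, 10) == deterministic_path(1.0, 10)

# ... while many realizations from the same starting state x_j = 0 produce
# several distinct paths gamma.
distinct_paths = {tuple(random_path(0, 5, random.Random(seed))) for seed in range(100)}
assert len(distinct_paths) > 1
```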

In the next section, we will review classical dynamical systems explained by Poincaré and Lyapunov. We will discuss the topological dynamics due to Stephen Smale7 as well as ergodicity results of Katok2.

2 Topological Dynamics

Mathematically, topological dynamics was first studied by Henri Poincaré around the turn of the 20th century2. Some understanding of the dynamics of celestial objects through celestial mechanics has existed as far back as the ancient Indian and ancient Greek literature8, 9. However, Poincaré first conceptualized the idea of topological dynamics while studying the qualitative properties of differential equations. Among the many technicalities, the ideas of homeomorphisms, topological spaces, semigroups of continuous transformations between spaces, diffeomorphisms, flows between various states of spaces, etc., played a central role in several advancements in the field.

Several ideas of planetary motion, gravitational forces, and solar system movements, later termed celestial mechanics during the post-Copernican era, were known to ancient Indian and Greek philosophers, astrologers, and mathematicians. For example, Aryabhatta (5th century CE) computed the number of lunar days and the value of \(\pi\) using planetary movements9. Bhaskara’s 12th-century book Siddantasiromani contains descriptions of several ancient Indian computations of planetary motions9,10,11. These ancient celestial mechanics, combined with the modern-day understanding of such mechanics after new techniques became available, perhaps inspired the formulation of dynamical systems and their formal study using differential equations12.

Poincaré’s 3-body approach using Newtonian-type gravitational forces formed the foundation for n-body problems and qualitative understanding through dynamical systems. Kolmogorov’s and others’ ideas of chaos theory enriched the understanding of dynamical systems through ergodic properties. In the next few paragraphs, we define and describe some of the previously mentioned technicalities.

2.1 Homeomorphic Spaces and Semigroups

Suppose that X and Y are two topological spaces. If \(f:X\rightarrow Y\) is a continuous, open, one-to-one, and onto mapping, then we say X and Y are homeomorphic. An example can be seen in Fig. 4. We consider two spaces X and Y to have the same topological structure if they are homeomorphic. A semigroup is a set X with an associative binary operation13,14,15. Semigroups arise as transformation semigroups, say, (X, S), where S is a semigroup of transformations of X. Semigroups are also associated with Markov processes on the space X5. Let us now see explicitly the association of semigroups with Markov processes.

Figure 4:

A homeomorphism between two spaces X and Y on \(\mathbb {R}\) with \(f(x)=\tan x\).

2.1.1 Markov Processes in Continuous Time

Let \(\{X(t):t\in [0,\infty )\}\) be a collection of random variables taking values in a discrete state space \(\{x_{1},x_{2},\ldots \}\) or \(\{x_{1},x_{2},\ldots ,x_{n},x_{n+1}\}.\) The stochastic process \(\{X(t)\}\) is called a continuous-time Markov chain if, for any \(t_{1}<t_{2}<\cdots<t_{n}<t_{n+1},\)

$$\begin{aligned} \text {Prob}[X(t_{n+1})&=x_{n+1}/X(t_{1})=x_{1},X(t_{2})=x_{2},\ldots ,X(t_{n})=x_{n}]\end{aligned}$$
(2.1)
$$\begin{aligned}&=\text {Prob}[X(t_{n+1})=x_{n+1}/X(t_{n})=x_{n}]. \end{aligned}$$
(2.2)

The probability \(p_{x_{i},x_{j}}\) describing the transition from the state \(x_{i}\) in X to the state \(x_{j}\) in X within a time-step \(t_{j}-t_{i}\) is given by

$$\begin{aligned} p_{x_{i},x_{j}}(t_{j}-t_{i})=\text {Prob}[X(t_{j})=x_{j}/X(t_{i})=x_{i}]\text { for }{t_{i}<t_{j}}. \end{aligned}$$
(2.3)

Such transitions (2.3) in Markov chains obey the Chapman–Kolmogorov equations (2.4)

$$\begin{aligned} \sum _{k=0}^{\infty }p_{x_{i},x_{k}}(t_{k}-t_{i})p_{x_{k},x_{j}}(t_{j}-t_{k})=p_{x_{i},x_{j}}(t_{j}-t_{i})\text { for }t_{i}<t_{k}<t_{j} \end{aligned}$$
(2.4)

with \(t_{i},t_{j}\in [0,\infty ).\) This implies that

$$\begin{aligned} P(t_{k}-t_{i})\cdot P(t_{j}-t_{k})=P(t_{j}-t_{i}), \end{aligned}$$
(2.5)

where P(.) is the transition matrix of all possible transitions in the space X. We see that the semigroup property holds because of the Chapman–Kolmogorov equations satisfied by Markov chains16,17,18.
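
The semigroup property can be checked numerically for a small continuous-time chain (a sketch assuming NumPy and SciPy are available; the generator matrix Q is illustrative):

```python
import numpy as np
from scipy.linalg import expm

# The semigroup property P(s) P(t) = P(s + t) behind the Chapman-Kolmogorov
# equations, checked for a continuous-time chain with generator Q.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])   # rows of a generator sum to zero

def P(t):
    """Transition matrix over an interval of length t: P(t) = exp(Qt)."""
    return expm(Q * t)

s, t = 0.3, 0.7
assert np.allclose(P(s) @ P(t), P(s + t))        # semigroup property
assert np.allclose(P(s).sum(axis=1), 1.0)        # each row is a probability distribution
```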

Definition 2

(Semigroup)

A semigroup is a weaker structure than a group: it is a set with an associative binary operation.

Definition 3

(Chapman–Kolmogorov equations) The n-step transition probability \(p_{ij}^{(n)}\) of a discrete random variable \(X_{n}\) describes the probability of transferring from state i to state j in n time steps. One can similarly define \(p_{ij}^{(n-m)}\) and \(p_{ij}^{(m)}\) to describe transitions in \((n-m)\) and m steps, respectively. For an arbitrary state k, the Chapman–Kolmogorov equations associate these transition probabilities as

$$\begin{aligned} p_{ij}^{(n)}=\sum _{k=1}^{\infty }p_{ik}^{(m)}p_{kj}^{(n-m)},\quad (0<m<n). \end{aligned}$$
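
A direct numerical check of these equations for an illustrative two-state chain, using the fact that the n-step transition matrix is the nth matrix power of P:

```python
import numpy as np

# Chapman-Kolmogorov in discrete time: the n-step matrix factors through any
# intermediate step count m, i.e. P^(n) = P^(m) P^(n-m).  P is illustrative.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

n, m = 5, 2
lhs = np.linalg.matrix_power(P, n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n - m)
assert np.allclose(lhs, rhs)   # the summation over the arbitrary state k is the matrix product
```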

Definition 4

(Diffeomorphisms) Let us consider two manifolds \(M_{1}\) and \(M_{2}\) (manifolds are a class of topological spaces). A function \(f:M_{1}\rightarrow M_{2}\) is called a diffeomorphism if f is continuously differentiable, one-to-one, onto, and \(f^{-1}:M_{2}\rightarrow M_{1}\) is also continuously differentiable19.

Definition 5

(Flows) Let \(\dot{y}=h(y)\), \(y(0)=y_{0}\), be the initial value problem for a vector field h(y). Saying that the function \(\phi _{t}(y)\) is the flow of h(y) means that \(y(t)=\phi _{t}(y_{0})\) is the solution of the initial value problem.
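
In the simplest case \(h(y)=-y\), the flow is \(\phi _{t}(y_{0})=y_{0}e^{-t}\), and the defining properties can be verified directly:

```python
import math

# Definition 5 in the simplest case h(y) = -y: the flow is phi_t(y0) = y0 * exp(-t).
# Checked below: phi_0 is the identity, and phi_{t+s} = phi_t o phi_s
# (the one-parameter semigroup property of flows).
def phi(t, y0):
    return y0 * math.exp(-t)

y0, s, t = 2.0, 0.4, 1.1
assert phi(0.0, y0) == y0                                  # flow at time 0 returns the initial value
assert abs(phi(t + s, y0) - phi(t, phi(s, y0))) < 1e-12    # semigroup property
```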

2.2 From Poincaré to Recent Developments

One of the elementary ways of understanding the qualitative behavior of any given dynamical system, for example as in (1.1) or (1.2), is through obtaining steady-state solutions of a corresponding linearization of the given dynamical system. Poincaré, around the turn of the 20th century, showed for such a class of systems (in celestial mechanics) that, if \(\varphi _{t}^{x}\) or \(\varphi _{t}^{(x,y)}\) is analytic at \(x^{*}\) and if the eigenvalues of \(D\varphi _{t}^{x}\) exist, then a system of equations of type (1.1) or (1.2) can be changed to a linear system2, 12. The idea of studying the qualitative behavior of a system in the neighborhood of an equilibrium was ground-breaking work by Poincaré, and it was extended by several other researchers to a variety of situations. The behavior of the dynamical system near a hyperbolic equilibrium is qualitatively the same as the behavior of the linearized system at the equilibrium point. This simplification is possible because of the Hartman–Grobman theorem20,21,22,23. It requires a hyperbolic equilibrium point \(x^{*}\) (i.e., one at which the eigenvalues of \(D\varphi _{t}^{x}(x^{*})\) have nonzero real parts) and a continuously differentiable (\(C^{1}\)) \(\varphi _{t}^{x}\) on a neighborhood of \(x^{*}\); a homeomorphism then transforms this neighborhood so that the dynamics match those of the corresponding linear system.

Theorem 6

(Hartman–Grobman theorem) Let \(\mathbb {R}^{n}\) be n-dimensional Euclidean space. Let \(S\subset \mathbb {R}^{n}\), \(x^{*}\in S\), and let \(B_{\delta }(x^{*})\subset S\) be a neighborhood for some \(\delta >0.\) Let \(\varphi :S\rightarrow \mathbb {R}^{n}\) be a \(C^{1}\) function on S such that \(\varphi (x^{*})=0.\) Let \(x^{*}\) be a hyperbolic equilibrium point of the system (1.1), i.e., no eigenvalue of \(D\varphi _{t}^{x}(x^{*})\) has zero real part. Then there are two open neighborhoods U (of \(x^{*}\)) and V (of 0) and a homeomorphism \(H:U\rightarrow V\) such that the flow (transformation) \(\varphi _{t}^{x}\) of the system (1.1) with \(X(0)=x\) is topologically equivalent to the flow of its linearization \(e^{tD\varphi _{t}^{x}(x^{*})}:\) \(H(\varphi _{t}^{x})=e^{tD\varphi _{t}^{x}(x^{*})}H(x)\) for all \(x\in U\) and \(\left| t\right| \le 1.\)

A smooth diffeomorphism \(\varphi\) is topologically conjugate to its linearization \(D\varphi\) near a hyperbolic equilibrium point \(x^{*}\) via a local homeomorphism H. As mentioned above, this theorem provides the qualitative behavior in a neighborhood of the equilibrium. Because the dynamics produced by the linearization are, up to topological equivalence, the same as the original flow, we consider them equivalent. Due to this equivalence, a given complex dynamical system is linearized to understand its steady-state solutions.
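
In practice, hyperbolicity is checked by computing the eigenvalues of the Jacobian at the equilibrium; the damped-pendulum system below is an illustrative example, not one from the text:

```python
import numpy as np

# Hartman-Grobman in practice: hyperbolicity of an equilibrium is read off the
# eigenvalues of the Jacobian there.  Illustrative system: a damped pendulum
# x1' = x2, x2' = -sin(x1) - 0.5 x2, with equilibrium x* = (0, 0).
def jacobian(x):
    return np.array([[0.0,           1.0],
                     [-np.cos(x[0]), -0.5]])

eigvals = np.linalg.eigvals(jacobian(np.zeros(2)))

# No eigenvalue sits on the imaginary axis, so x* is hyperbolic and the flow
# near x* is topologically conjugate to the flow of the linearization.
assert all(abs(ev.real) > 1e-12 for ev in eigvals)
assert all(ev.real < 0 for ev in eigvals)   # here x* is in fact asymptotically stable
```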

2.3 What Did We Learn From Smale’s Work?

Dynamical systems development from the days of Isaac Newton to Poincaré was inspired by celestial mechanics. Such dynamics, when understood correctly prior to the development of mathematical formulations, were little influenced by fluctuations of external forces after the initiation of the dynamics. Functions like \(\varphi _{t}^{x}\) can be built from piecewise connectedness properties.

Further mathematical developments of the dynamics by Lyapunov, Kolmogorov, Hartman, Grobman, and Smale involved pure mathematical formulations. These formulations involved homeomorphisms, diffeomorphisms, and continuous transformations of open sets. Such formulations, as in Poincaré, do not change the course of the dynamics after the initial values or initial points of reference are set at \(t_{0}\) or at 0. Even the global stability features discussed earlier, or the local stability features of Hartman–Grobman, are purely mathematical possibilities and fit perfectly for situations as in celestial mechanics. The Hartman–Grobman type of construction of neighborhoods \(B_{\delta }(x^{*})\subset S\) and transformation functions like \(H:U\rightarrow V\) shows that studying the corresponding linearized system is enough to understand the dynamics of a given dynamical system, even outside celestial mechanics. All the developments up to Smale strongly emphasize dynamical systems that are independent of unexpected influences on the solution function \(\varphi _{t}^{x}\) or any random fluctuations in \(\varphi _{t}^{x}\).

Building an analysis that is bound to give a stable equilibrium, either locally or globally, is relatively easier than building such an analysis for a truly unpredictable \(\varphi _{t}^{x}\) for \(t>0\). Consider the function \(H(\varphi _{t}^{x})\) of the Hartman–Grobman theorem for \(t\in [t_{0},\infty ).\) Suppose that, after the dynamics are initialized, the data on which the system was built fluctuate, so that the true value at \(t_{1}\in [t_{0},\infty )\) is not equal to the model value \(H(\varphi _{t_{1}}^{x}).\) The functional values \(H(\varphi _{t}^{x})\) for \(t\in [t_{0},t_{1})\) trace out a set of points in V (the curve described by \(H(\varphi _{t}^{x})\) for \(t\in [t_{0},t_{1})\)). The linearized system generated by Hartman–Grobman will provide the dynamics of the system until \(t_{1}-\epsilon _{1}\) for some \(\epsilon _{1}>0\) and fail to provide the true dynamics at \(t_{1}.\) Take the true value at \(t_{1}\) as the new initial value for the system, and plot the function \(H(\varphi _{t}^{x})\) for the new dynamics starting from \(t_{1}\) to get a curve \(H(\varphi _{t}^{x}):[t_{1},\infty )\rightarrow V.\) Suppose the true value at \(t_{2}\), for \(t_{2}>t_{1}\), again differs from the value of \(H(\varphi _{t}^{x}):[t_{1},\infty )\rightarrow V\) at \(t_{2}\), while all values of \(H(\varphi _{t}^{x})\) on \([t_{1},t_{2}-\epsilon _{2})\), for some \(\epsilon _{2}>0\), equal the true values on the same interval. The true value at \(t_{2}\) then becomes the next initial value of the dynamical system. In general, let \(t_{i}\), for \(i=1,2,\ldots\), be the time points at which the linearized system values differ from the true values, and let the new initial value \(H(\varphi _{t_{i-1}}^{x})\) be used to obtain the dynamics \(H(\varphi _{t}^{x}):t\in [t_{i-1},t_{i}-\epsilon _{i})\) for \(i=1,2,\ldots .\)

Let

$$\begin{aligned} \Gamma _{t_{n}}=\sum _{i=1}^{n}\left\| H(\varphi _{t_{i}}^{x})-H(\varphi _{t_{i-1}}^{x})\right\| \end{aligned}$$
(2.6)

be the length of the polygon generated up until the time \(t_{n}.\) The length of the curve generated by

$$\begin{aligned} H(\varphi _{t}^{x}):[t_{0},\infty )\rightarrow V, \end{aligned}$$

say \(L_{t_{i}}\) for the time interval \([t_{0},t_{i})\), will be different from the polygon length (2.6). Hence the original dynamics generated by \(H(\varphi _{t}^{x})\), obtained through the Hartman–Grobman construction, will be different from \(\Gamma _{t}\) for \(t\in [t_{0},\infty ).\) The trajectory of the function \(H(\varphi _{t}^{x})\) with initial value at \(t_{0}\) will be different from the polygon that was obtained while computing the length \(\Gamma _{t}\) in (2.6) for the same period \([t_{0},\infty ).\) Let \(\gamma _{i}\) be the length of the polygon from \(t_{i-1}\) to \(t_{i}\), and let \(H(\varphi _{s_{i}}^{x})\) be the true value of the function at \(t_{i}.\) Let

$$\begin{aligned} \left\| H(\varphi _{t_{i}}^{x})-H(\varphi _{s_{i}}^{x})\right\| \end{aligned}$$

be the distance from the Hartman–Grobman constructed \(H(\varphi _{t}^{x})\) and the true value from the data at the same point.
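
The bookkeeping behind (2.6) amounts to summing segment lengths between successive values of \(H(\varphi _{t_{i}}^{x})\); the points below are illustrative stand-ins for those values in \(V=\mathbb {R}^{2}\):

```python
import math

# Sketch of the polygon length Gamma in (2.6): accumulate the lengths of the
# straight segments between successive re-initialization values H(phi_{t_i}^x).
# The points are illustrative stand-ins for those values in V = R^2.
H_values = [(0.0, 0.0), (1.0, 0.5), (1.5, 1.5), (2.5, 1.0)]

def polygon_length(points):
    """Sum of Euclidean distances between consecutive points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

gamma = polygon_length(H_values)
# the polygon is never shorter than the straight line between its endpoints
assert gamma >= math.dist(H_values[0], H_values[-1])
```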

Stephen Smale’s idea was to consider a space X of all the flows with the \(C^{r}\)-topology for \(r=1,2,\ldots\)7. He obtained stability of the system by considering equivalence relationships between flows and their orbit structures within the space. The flow of an ODE (ordinary differential equation) within a compact manifold gives us a closed set C for \(C\in X\). The union of all the flows equals the entire space X. Let \(\psi _{it}\) be the ith flow at time t. Then

$$\begin{aligned} \bigcup _{i}\int _{t}\psi _{it}\mathrm{d}t=X. \end{aligned}$$
(2.7)

The expression (2.7) can be considered as

$$\begin{aligned} \int _{t}\bigcup _{i}\psi _{it}\mathrm{d}t=X. \end{aligned}$$
(2.8)

The system of ODEs will be globally stable if, and only if, \(\psi _{it}\) for each i is globally stable. Several other kinds of dynamics and their associations with other constructions have been considered, for example, associations between Hamiltonian flows, Arnol’d’s work, Narasimhan’s work, and Anosov’s diffeomorphisms7,12,24.

Smale’s horseshoe mapping inspired the creation of transformations of manifolds into various other forms. Smale exploited the fact that, to transform a square into a horseshoe, a set of points within the square need not be disturbed. Roughly, this set of undisturbed points is associated with attractors: points in the space X that are not influenced (not disturbed) by the functional transformations25. The demonstration was clever, as it involved only continuous transformations. In this famous mapping of a standard square to the shape of a horseshoe, Smale considered diffeomorphic functions that first compress the square into a strip and then bend this strip into a horseshoe shape.

Because the diffeomorphism property was involved, the horseshoe can be reverted to the initial square.
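
A toy stretch-and-fold map in the spirit of the horseshoe (much simpler than Smale's construction) is the baker's map on the unit square; because it is given by explicit formulas, its invertibility is easy to check:

```python
# A toy stretch-and-fold in the spirit of (but much simpler than) Smale's
# horseshoe: the baker's map on the unit square stretches horizontally,
# cuts, and stacks.  Its explicit inverse illustrates the reversibility
# discussed above.
def baker(p):
    x, y = p
    if x < 0.5:
        return (2.0 * x, y / 2.0)            # left half: stretch and place at the bottom
    return (2.0 * x - 1.0, (y + 1.0) / 2.0)  # right half: stretch and stack on top

def baker_inv(p):
    x, y = p
    if y < 0.5:
        return (x / 2.0, 2.0 * y)            # bottom layer came from the left half
    return ((x + 1.0) / 2.0, 2.0 * y - 1.0)  # top layer came from the right half

pt = (0.3, 0.8)
fwd = baker(pt)
back = baker_inv(fwd)
assert abs(back[0] - pt[0]) < 1e-12 and abs(back[1] - pt[1]) < 1e-12
```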

3 Two Distinct Dynamics from the Same Origin

The idea of this section is to explain how the dynamics created by a system become distinct if we keep updating the system with newer information available on the trajectories. Even though these dynamics are generated from the same origin, two or more different dynamics can emerge.

Proposition 7

Suppose that \(H(\varphi _{s_{i}}^{x})\) exists for \(i=1,2,3,\ldots\). If \(\gamma _{i}<\left\| H(\varphi _{t_{i}}^{x})-H(\varphi _{s_{i}}^{x})\right\|\) for every i then the equilibrium analysis performed through Hartman–Grobman will not represent the true dynamics for which the dynamical system was built.

Proof

We have

$$\begin{aligned} \int _{0}^{\infty }\gamma _{i}\,\mathrm{d}i<\int _{0}^{\infty }\left\| H(\varphi _{t_{i}}^{x})-H(\varphi _{s_{i}}^{x})\right\| \,\mathrm{d}i. \end{aligned}$$

Here \(\gamma _{i}\) is the length of the polygon from \(t_{i-1}\) to \(t_{i}\), \(H(\varphi _{t_{i}}^{x})\) is the value of the function at time \(t_{i},\) and \(H(\varphi _{s_{i}}^{x})\) is the true value of the function at \(t_{i}.\) Also \(H(\varphi _{s_{1}}^{x})\) is the value in V at time \(t_{1}\) at which the dynamics initiated at \(t_{0}\) does not match the function \(H(\varphi _{t}^{x}).\) Since \(H(\varphi _{s_{1}}^{x})\) exists and \(\gamma _{1}<\left\| H(\varphi _{t_{1}}^{x})-H(\varphi _{s_{1}}^{x})\right\|\), the path \(\gamma _{1}\) in reality would not have been completed. Suppose that \(H(\varphi _{t}^{x})\) values provide the true dynamics until \(t_{1}-\epsilon _{1}\) for some \(\epsilon _{1}>0\). Then

$$\begin{aligned} \left\| H(\varphi _{t_{0}}^{x})-H(\varphi _{t_{1}-\epsilon _{1}}^{x})\right\| <\gamma _{1}, \end{aligned}$$

which implies that

$$\begin{aligned} \left\| H(\varphi _{t_{0}}^{x})-H(\varphi _{t_{1}-\epsilon _{1}}^{x})\right\| <\left\| H(\varphi _{t_{1}}^{x})-H(\varphi _{s_{1}}^{x})\right\| . \end{aligned}$$
(3.1)

Suppose we consider the dynamics of the system with a new set of initial values resulting from \(H(\varphi _{s_{1}}^{x}).\) There will then be two sets of dynamics in process at \(t_{1}\): one due to the original \(H(\varphi _{t}^{x})\) initiated at \(t_{0}\) and continuing throughout \(t\in [t_{0},\infty )\), and a second initiated at \(t_{1}\) with the new set of initial values due to the function value \(H(\varphi _{s_{1}}^{x}).\) Suppose the system that was originally set is allowed to continue after \(t_{1}\) without any interruption. The second system, starting from s for \(s\in [t_{1},\infty )\), will have the function \(H(\varphi _{s}^{x})\) for \(s\in [t_{1},\infty ).\) See Fig. 5.

Figure 5: Construction of two dynamics from the same origin.

Let us call this newer function \(H_{1}(\varphi _{s}^{x})\), where \(H_{1}:U\rightarrow V_{1}\) and \(V_{1}\subset \mathbb {R}^{n}.\) Note that

$$\begin{aligned} V_{1}\cap V=\phi \text { or }V_{1}\cap V\ne \phi . \end{aligned}$$
(3.2)

We allow either of the possibilities of (3.2) in our construction.

Suppose at \(t_{2}\) the function \(H_{1}(\varphi _{s}^{x})\) fails to provide the true value that the system which was initiated at \(t_{1}\) generates. Then

$$\begin{aligned} \left\| H_{1}(\varphi _{s_{1}}^{x})-H_{1}(\varphi _{t_{2}-\epsilon _{2}}^{x})\right\| <\gamma _{2}, \end{aligned}$$

where \(\gamma _{2}\) is the length of the path of \(H_{1}(\varphi _{s}^{x})\) for \(s\in [t_{1},t_{2}].\) This implies that

$$\begin{aligned}&\left\| H_{1}(\varphi _{s_{1}}^{x})-H_{1}(\varphi _{t_{2}-\epsilon _{2}}^{x})\right\| \nonumber \\&\quad <\left\| H_{1}(\varphi _{s}^{x})-H_{2}(\varphi _{s_{2}}^{x})\right\| \text { for }s\in [t_{1},t_{2}]. \end{aligned}$$
(3.3)

In (3.3), the function \(H_{2}:U\rightarrow V_{2}\) with \(V_{2}\subset \mathbb {R}^{n}.\) The set \(V_{2}\) could satisfy one of the following possibilities:

$$\begin{aligned}&V_{2}\cap V_{1}\cap V =\phi \text { or }\ V_{2}\cap V_{1}\cap V\ne \phi \text { or} \nonumber \\&\quad \{V_{2}\cap V_{1}\ne \phi \text { and }V_{2}\cap V=\phi \}\nonumber \\&\quad \text {or } \ \ \{V_{2}\cap V\ne \phi \text { and }V_{2}\cap V_{1}=\phi \} \end{aligned}$$
(3.4)
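The case distinction in (3.4) can be checked mechanically when the sets involved are finite. The sketch below uses hypothetical finite stand-ins for the ranges \(V\), \(V_{1}\), \(V_{2}\); it only illustrates the bookkeeping of the four possibilities, not the actual subsets of \(\mathbb {R}^{n}\).

```python
def case_of_34(V, V1, V2):
    """Return which possibility of (3.4) a triple of finite sets realizes."""
    if V2 & V1 & V:                      # V2 ∩ V1 ∩ V ≠ ∅
        return "triple intersection nonempty"
    if (V2 & V1) and not (V2 & V):       # V2 ∩ V1 ≠ ∅ and V2 ∩ V = ∅
        return "V2 meets V1 but not V"
    if (V2 & V) and not (V2 & V1):       # V2 ∩ V ≠ ∅ and V2 ∩ V1 = ∅
        return "V2 meets V but not V1"
    return "triple intersection empty"   # V2 ∩ V1 ∩ V = ∅

# Hypothetical finite stand-ins for the ranges V, V1, V2.
print(case_of_34({1, 2}, {2, 3}, {3, 4}))  # V2 meets V1 but not V
```

The same classification extends directly to the possibilities listed later in (3.7) by intersecting over all \(V_{i}\).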

Continuing the two kinds of dynamics described above, we will arrive at

$$\begin{aligned}&\left\| H(\varphi _{t_{0}}^{x})-H(\varphi _{t_{1}-\epsilon _{1}}^{x})\right\| \nonumber \\&\quad +\sum _{i=1}^{\infty } \left\| H_{i}(\varphi _{s_{i}}^{x})-H_{i}(\varphi _{t_{i+1}-\epsilon _{i+1}}^{x})\right\|<\int _{1}^{\infty }\gamma _{i}\,\mathrm{d}i\nonumber \\&\qquad < \sum _{i=1}^{\infty }\left\| H_{i}(\varphi _{t_{i+1}}^{x})-H_{i+1}(\varphi _{s_{i+1}}^{x})\right\| . \end{aligned}$$
(3.5)

The length of the piecewise connected path of the second dynamical system satisfies

$$\begin{aligned}&\left\| H(\varphi _{t_{0}}^{x})-H(\varphi _{t_{1}-\epsilon _{1}}^{x})\right\| +\sum _{i=1}^{\infty }\left\| H_{i}(\varphi _{s_{i}}^{x})-H_{i}(\varphi _{t_{i+1}-\epsilon _{i+1}}^{x})\right\| \nonumber \\&\quad <\int _{1}^{\infty }\gamma _{i}\,\mathrm{d}i \end{aligned}$$
(3.6)

In (3.5), the function \(H_{i}:U\rightarrow V_{i}\) with \(V_{i}\subset \mathbb {R}^{n}\) for \(i=2,3,\ldots\). The sets \(V_{i}\) for \(i=2,3,\ldots\) could satisfy one of the following possibilities:

$$\begin{aligned}&\bigcap _{i=1}^{\infty }V_{i}\cap V =\phi \text {\ \ or \ \ }\bigcap _{i=1}^{\infty }V_{i}\cap V\ne \phi \text {\ \ or \ \ }\nonumber \\&\biggl \{ V_{k}\cap V_{l}\ne \phi \text { for every }{k\ne l} \text { and }\bigcap _{i=1}^{\infty }V_{i}\cap V=\phi \biggr \} \end{aligned}$$
(3.7)

The dynamics created by \(H(\varphi _{t}^{x})\) and the piecewise connected path in (3.6), together with the corresponding sets \(V_{i}\) within \(\mathbb {R}^{n}\), form two distinct dynamics. Hence the original functional path of the Hartman–Grobman-generated dynamics would not be valid in the situation described in the proposition. \(\square\)

Remark 8

The result with two kinds of dynamics can be extended with multiple dynamics evolving in the space X by continuing \(H_{1}(\varphi _{s}^{x})\) beyond \(t_{2}\), \(H_{2}(\varphi _{s}^{x})\) beyond \(t_{3}\), and so on. This will lead to multiple dynamics within the same space X.

Remark 9

When \(\gamma _{i}>\left\| H(\varphi _{t_{i}}^{x})-H(\varphi _{s_{i}}^{x})\right\|\), it is not clear whether the two dynamics created in Proposition 7 are significantly different.

4 Discussion

Dynamical systems, especially topological dynamics combined with the stochastic paradigm, form a fascinating field. After Poincaré’s groundbreaking work, followed by Lyapunov’s global stability analysis, the contributions of Kolmogorov, Arnol’d, and Moser, and Smale’s work on differentiable manifolds, the subject has expanded into applications in the natural sciences [26].

We have presented a new result (Proposition 1) that possibly provides new insight into topological and stochastic dynamics within a space X; we do not provide a proof of this proposition in this article. We then proved a result (Proposition 7) by considering two dynamics that start at the same time, 0 or \(t_{0}\).

5 Concluding Remarks

The theory of dynamical systems has grown in recent years into a powerful tool that can give new insights into geometry, differential equations, evolution theory, fractal geometry, and many other parts of modern mathematics. We have endeavored here to show how the dynamical-systems point of view can shed light on questions arising from population dynamics. In particular, the Hartman–Grobman theorem gives us a handle on normalizing a dynamical system near a hyperbolic equilibrium point and then engaging in further, more detailed analysis.

What we have presented here are only the first steps in this program. We hope in future work to develop these ideas into deeper insights.