Published by De Gruyter May 16, 2023

Relations between timescales of stochastic thermodynamic observables

  • Erez Aghion and Jason R. Green

Abstract

Any real physical process that produces entropy, dissipates energy as heat, or generates mechanical work must do so on a finite timescale. Recently derived thermodynamic speed limits place bounds on these observables using intrinsic timescales of the process. Here, we derive relationships for the thermodynamic speeds for any composite stochastic observable in terms of the timescales of its individual components. From these speed limits, we find bounds on thermal efficiency of stochastic processes exchanging energy as heat and work and bound the rate of entropy change in a system with entropy production and flow. Using the time set by an external clock, we find bounds on the first time to reach any value for the entropy production. As an illustration, we compute these bounds for Brownian particles diffusing in space subject to a constant-temperature heat bath and a time-dependent external force.


Corresponding author: Jason R. Green, Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125, USA; and Department of Physics, University of Massachusetts Boston, Boston, MA 02125, USA, E-mail:

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: This material is based upon work supported by the National Science Foundation under Grant No. 1856250.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

Appendix A: Mapping a Langevin process to master equation

In this Appendix, we derive the mapping between Langevin dynamics with a time-dependent force term and its corresponding master equation with time-dependent transition rates. This mapping was introduced in [40]. Our starting point is the Langevin process in Eq. (3). To simulate realizations of this process, one could use the standard Euler–Maruyama method [52] (e.g., with dt = 0.01). Instead, we derive Eq. (6) by letting $W_{y,y+\Delta}(t) = W_L(y+\Delta)$ and $W_{y,y-\Delta}(t) = W_R(y-\Delta)$ be the left and right jump probabilities, respectively. All of these probabilities are time-dependent, but we suppress this notation. From conservation of probability, $W_L(y) + W_R(y) \equiv 1$. With these definitions, we rewrite Eq. (6) as:

(A1) $\dot{P}(y) = W_L(y+\Delta)P(y+\Delta) + W_R(y-\Delta)P(y-\Delta) - \left[W_L(y) + W_R(y)\right]P(y).$

To compare the discrete-space process to Langevin dynamics, for small Δ we expand $W_L(y+\Delta)$ and $W_R(y-\Delta)$ as a Taylor series up to order Δ, and $P(y\pm\Delta)$ up to order Δ² (we verified that keeping terms of order Δ² in the jump probabilities does not change the results below). We get

(A2) $\dot{P}(y) \simeq \left[W_L(y) + \Delta\,\partial_y W_L(y)\right]\left[P(y) + \Delta\,\partial_y P(y) + \tfrac{\Delta^2}{2}\partial_y^2 P(y)\right] + \left[W_R(y) - \Delta\,\partial_y W_R(y)\right]\left[P(y) - \Delta\,\partial_y P(y) + \tfrac{\Delta^2}{2}\partial_y^2 P(y)\right] - \left[W_L(y) + W_R(y)\right]P(y).$

Rearranging Eq. (A2), we get:

(A3) $\dot{P}(y) \simeq \tfrac{\Delta^2}{2}\partial_y^2 P(y) - \Delta\,\partial_y\!\left\{\left[W_R(y) - W_L(y)\right]P(y)\right\} + O(\Delta^3) = \tfrac{\Delta^2}{2}\partial_y^2 P(y) - \Delta\,\partial_y\!\left\{\left[2W_R(y) - 1\right]P(y)\right\} + O(\Delta^3).$

Using $\hat{\delta t} = \delta t\,\Delta^2/(2D)$, this yields:

(A4) $\frac{1}{2D}\frac{\delta P(y)}{\hat{\delta t}} = \frac{1}{2}\partial_y^2 P(y) - \frac{1}{\Delta}\partial_y\!\left\{\left[2W_R(y) - 1\right]P(y)\right\}.$

Now, in the limit Δ → 0, we can identify Eq. (A4) with the standard Fokker–Planck equation associated with Eq. (3) [52]:

(A5) $\frac{\delta P(y)}{\hat{\delta t}} = D\,\partial_y^2 P(y) - \frac{1}{\mu}\partial_y\!\left[F(y)P(y)\right],$

(with the identification x → y). Reintroducing the notation for the time dependence, we have:

(A6) $W_R(y,t) = \frac{1}{2} + \frac{F(y,t)\,\Delta}{4D\mu} \quad \text{and} \quad W_L(y,t) = \frac{1}{2} - \frac{F(y,t)\,\Delta}{4D\mu}.$

We can solve the master equation numerically by applying Euler's discretization to the time derivative and iterating Eq. (6) from the main text (noting that $P(y,t)/\Delta \to \rho(x,t)$ and $\hat{\delta t} = \delta t\,[\Delta^2/(2D)]$).
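As an illustrative sketch (ours, not the authors' code), both integration routes of this appendix can be written in a few lines. The force F(y, t) = −y + sin t below is a hypothetical stand-in for the time-dependent force of Eq. (3), and the time step is chosen so that the dimensionless Euler step stays below one:

```python
import numpy as np

D, mu, dx, dt = 1.0, 1.0, 0.1, 0.001   # dt chosen so dtau = 2*D*dt/dx**2 < 1
force = lambda y, t: -y + np.sin(t)    # hypothetical stand-in for F(y, t) of Eq. (3)

def euler_maruyama(n_part, n_steps, seed=0):
    """Langevin route: dy = mu*F(y, t) dt + sqrt(2 D dt) N(0, 1)."""
    rng = np.random.default_rng(seed)
    y, t = np.zeros(n_part), 0.0
    for _ in range(n_steps):
        y += mu * force(y, t) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_part)
        t += dt
    return y

def master_equation(x, n_steps):
    """Lattice route: Euler iteration of Eq. (6) with the jump probabilities
    of Eq. (A6), W_R = 1/2 + F dx/(4 D mu) and W_L = 1 - W_R."""
    P, t = np.zeros_like(x), 0.0
    P[len(x) // 2] = 1.0                   # start localized at the central site
    dtau = 2 * D * dt / dx**2              # dimensionless step per physical dt
    for _ in range(n_steps):
        WR = 0.5 + force(x, t) * dx / (4 * D * mu)
        gain = np.zeros_like(P)
        gain[1:] += WR[:-1] * P[:-1]       # right jumps into y from y - dx
        gain[:-1] += (1 - WR[1:]) * P[1:]  # left jumps into y from y + dx
        P += dtau * (gain - P)             # W_L + W_R = 1, so the loss term is P
        t += dt
    return P / dx                          # density estimate rho(x, t) = P/dx
```

Histogramming the Langevin ensemble and overlaying $P(y,t)/\Delta$ then reproduces the kind of comparison shown in Figure A1.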

Figure A1:

Simulation results for the probability density function ρ(x, t) (red squares) obtained by integrating the Langevin process, Eq. (3). For comparison, we show P(y, t)/Δ (green squares) for similar physical parameters (provided in the main text) from integrating the master equation, Eq. (6), at t = 4π. The lattice spacing is similar to the examples in Section 2. The simulation results from these two methods agree well. The Langevin simulation was performed using the Euler–Maruyama method with dt = 0.01 and 10⁷ particles. The master equation was integrated with an Euler scheme with $\hat{\delta t} = \delta t\,[\Delta^2/(2D)]$.

Appendix B: Obtaining $\dot{S}_i$ and $\dot{S}_e$ from the master equation

In this Appendix, we explain how to obtain the rates of entropy production, $\dot{S}_i$, and entropy flow, $\dot{S}_e$, from the time-dependent transition rates of the master equation. The definitions and the derivation are designed to parallel those found for constant rates in Ref. [37].

Consider, for example, the simple finite lattice x − 2Δ, x − Δ, x, x + Δ, x + 2Δ. What follows extends trivially to any lattice size. The master equation is:

(B1) $\begin{pmatrix} \delta\rho_0/\hat{\delta t} \\ \delta\rho_1/\hat{\delta t} \\ \delta\rho_2/\hat{\delta t} \\ \delta\rho_3/\hat{\delta t} \\ \delta\rho_4/\hat{\delta t} \end{pmatrix} = \begin{pmatrix} W_{0,0} & W_{0,1} & W_{0,2} & W_{0,3} & W_{0,4} \\ W_{1,0} & W_{1,1} & W_{1,2} & W_{1,3} & W_{1,4} \\ W_{2,0} & W_{2,1} & W_{2,2} & W_{2,3} & W_{2,4} \\ W_{3,0} & W_{3,1} & W_{3,2} & W_{3,3} & W_{3,4} \\ W_{4,0} & W_{4,1} & W_{4,2} & W_{4,3} & W_{4,4} \end{pmatrix} \begin{pmatrix} \rho_0 \\ \rho_1 \\ \rho_2 \\ \rho_3 \\ \rho_4 \end{pmatrix},$

where we assign ρ₀ = P(x − 2Δ)/Δ, ρ₁ = P(x − Δ)/Δ, ρ₂ = P(x)/Δ, ρ₃ = P(x + Δ)/Δ, and ρ₄ = P(x + 2Δ)/Δ. Importantly, in every column j, $W_{j,j} = -\sum_{i\neq j} W_{i,j}$, such that $\sum_i W_{i,j} \equiv 0$.

In our notation,

(B2) $\begin{pmatrix} \delta\rho(x-2\Delta)/\hat{\delta t} \\ \delta\rho(x-\Delta)/\hat{\delta t} \\ \delta\rho(x)/\hat{\delta t} \\ \delta\rho(x+\Delta)/\hat{\delta t} \\ \delta\rho(x+2\Delta)/\hat{\delta t} \end{pmatrix} = \begin{pmatrix} W_{0,0} & W_L(x-\Delta) & 0 & 0 & 0 \\ W_R(x-2\Delta) & W_{1,1} & W_L(x) & 0 & 0 \\ 0 & W_R(x-\Delta) & W_{2,2} & W_L(x+\Delta) & 0 \\ 0 & 0 & W_R(x) & W_{3,3} & W_L(x+2\Delta) \\ 0 & 0 & 0 & W_R(x+\Delta) & W_{4,4} \end{pmatrix} \begin{pmatrix} \rho(x-2\Delta) \\ \rho(x-\Delta) \\ \rho(x) \\ \rho(x+\Delta) \\ \rho(x+2\Delta) \end{pmatrix},$

where every term on the diagonal satisfies $W_{i,i} \equiv -W_{i-1,i} - W_{i+1,i}$; for example, $W_{1,1} = -W_L(x-\Delta) - W_R(x-\Delta)$. The transition matrix in Eq. (B2) further simplifies to (focusing, for example, only on the right-hand side and the upper-left 3 × 3 corner of the transition matrix):

(B3) $\begin{pmatrix} -\left[1 - W_L(x-2\Delta)\right] & W_L(x-\Delta) & 0 \\ 1 - W_L(x-2\Delta) & -\left[1 - W_L(x-\Delta)\right] - W_L(x-\Delta) & W_L(x) \\ 0 & 1 - W_L(x-\Delta) & -\left[1 - W_L(x)\right] - W_L(x) \end{pmatrix} \begin{pmatrix} \rho(x-2\Delta) \\ \rho(x-\Delta) \\ \rho(x) \end{pmatrix}.$

Using Eq. (B3), we can now write

(B4) $\rho_{n+1} = \rho_n + dt\,\frac{2D}{\Delta^2}\,\times$

(B5) $\left[\begin{pmatrix} 0 & W_L(x-\Delta) & 0 & 0 & 0 \\ -W_L(x-2\Delta) & 0 & W_L(x) & 0 & 0 \\ 0 & -W_L(x-\Delta) & 0 & W_L(x+\Delta) & 0 \\ 0 & 0 & -W_L(x) & 0 & W_L(x+2\Delta) \\ 0 & 0 & 0 & -W_L(x+\Delta) & 0 \end{pmatrix} \rho_n - \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & 1 \end{pmatrix} \rho_n\right].$

The sum of every column here is zero, as it should be. Note that everything could equivalently be written in terms of W_R instead of W_L. Equation (B5) yields the transition rates required for $\dot{S}_i$ and $\dot{S}_e$, according to Eq. (8) in the main text.
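For concreteness, a short sketch (ours, not the authors' code) that assembles the tridiagonal transition matrix of Eq. (B2) from an arbitrary array of left-jump probabilities W_L, with W_R = 1 − W_L; the diagonal is set from the column sums, so probability is conserved on the truncated lattice:

```python
import numpy as np

def transition_matrix(WL):
    """Tridiagonal transition matrix of Eq. (B2): superdiagonal W_L(y + dx),
    subdiagonal W_R(y) = 1 - W_L(y), diagonal fixed by zero column sums."""
    n = len(WL)
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = WL[i + 1]      # left jump from site i+1 into site i
        W[i + 1, i] = 1.0 - WL[i]    # right jump from site i into site i+1
    W -= np.diag(W.sum(axis=0))      # W_jj = -sum_{i != j} W_ij
    return W
```

The final line enforces the column-sum condition stated below Eq. (B1), and on interior sites it reproduces the diagonal rule $W_{i,i} = -W_L - W_R = -1$.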

As a final step, to facilitate the numerics, it is useful to view the transition matrix in the following way:

(B6) $\begin{array}{r} x-\Delta: \\ x: \\ x+\Delta: \end{array}\;\begin{pmatrix} 0 & \text{Out from }(x) & 0 \\ \text{In from }(x-\Delta) & \text{Here, }x & \text{In from }(x+\Delta) \\ 0 & \text{Out from }(x) & \text{Next }(x+\Delta) \end{pmatrix}.$

This matrix shows that, for any value of x, its left and right neighbor sites give the transition rates into this location from x − Δ or x + Δ, respectively. The upper and lower sites give the downward and upward exit rates (towards x − Δ and x + Δ). Using Eq. (B6), we see that in the simulation procedure the summations in Eq. (8) can be computed as a simple sum over nearest matrix-neighbors for any x. Additionally, at finite times we can truncate these sums entirely at very large |x|, where the tails of the probability density are practically zero.
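Eq. (8) of the main text is not reproduced in this appendix, so as a hedged sketch we use the standard decomposition of the entropy rate for master equations from Ref. [37], which the text above indicates Eq. (8) follows (units of k_B = 1):

```python
import numpy as np

def entropy_rates(W, p):
    """Entropy production and flow rates for a master equation with rate
    matrix W (W[i, j] = rate from j to i, zero column sums) and probability
    vector p, following the constant-rate expressions of Ref. [37]."""
    J = W * p[None, :]                         # probability fluxes J_ij = W_ij p_j
    off = ~np.eye(len(p), dtype=bool)
    i, j = np.nonzero(off)
    ok = (J[i, j] > 0) & (J[j, i] > 0)         # skip absent links (log undefined)
    i, j = i[ok], j[ok]
    Si = 0.5 * np.sum((J[i, j] - J[j, i]) * np.log(J[i, j] / J[j, i]))   # >= 0
    Se = -0.5 * np.sum((J[i, j] - J[j, i]) * np.log(W[i, j] / W[j, i]))
    return Si, Se
```

At a detailed-balanced state both rates vanish; at a nonequilibrium steady state $\dot{S}_i = -\dot{S}_e > 0$, so the system entropy is constant while entropy is continuously produced and exported.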

Appendix C: A standard derivation of the inverse function theorem

We briefly sketch the standard derivation of the inverse function theorem [48] used in Eq. (23). Consider the integral,

(C1) $F[T(c)] = \int_0^{T(c)} g(t)\,dt,$

of a non-vanishing function g(t) up to a time T(c). Choose F[T(c)] ≡ c. Now, by the Leibniz integral rule:

(C2) $\frac{dF[T(c)]}{dc} = g[T(c)]\,\frac{dT(c)}{dc}.$

But, since dF[T(c)]/dc ≡ 1, we get,

(C3) $1 = g[T(c)]\,\frac{d}{dc}T(c) \;\;\Rightarrow\;\; \frac{d}{dc}T(c) = \frac{1}{g[T(c)]},$

the rate of change of the time T with respect to c.
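As a quick numerical check of Eq. (C3) with a hypothetical integrand g(t) = 1 + t² (not from the paper): integrating dT/dc = 1/g[T(c)] from T(0) = 0 recovers the first time T at which the accumulated integral reaches a target value c.

```python
def g(t):
    """Hypothetical positive integrand; then c(T) = T + T**3/3 exactly."""
    return 1.0 + t * t

def first_time(c_target, dc=1e-4):
    """Euler-integrate dT/dc = 1/g(T(c)), Eq. (C3), from T(0) = 0."""
    T, c = 0.0, 0.0
    while c < c_target:
        T += dc / g(T)
        c += dc
    return T
```

With this g, the target c = 4/3 corresponds to T = 1 exactly, which the Euler solution reproduces to O(dc); this is the same construction used for the first-passage-style bounds in Eq. (23).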

Appendix D: Using numerical fitting to find the bounds on $T_{S_i}(c)$ from simulation results

To obtain the bounds on the extrinsic time $T_{S_i}(\bar{c})$ for any value of $0.02 < \bar{c} < 4.7$ shown in Figure 6, we first obtained the absolute rates $|\dot{F}|$ and $|\dot{\mathcal{W}}|$ as functions of time. We then divided the time interval t = [0.02, 4.7] into four segments, as explained in the main text. In each segment, we used nonlinear curve-fitting to fit a 4th-order polynomial to each rate; the choice of a polynomial is arbitrary, and other fitting functions could also be used. To obtain better numerical accuracy, we performed the fits over a slightly smaller region inside each segment. For example, within the regime t = (0.21, 1.1226), we focused on the range 0.23 ≤ t ≤ 1.07. By definition, these bounds on t set the boundaries of this regime for $T_{S_i}$, Figure 6(b). Here, for $|\dot{F}(t)|$ we fit the polynomial y(t) = p₁t⁴ + p₂t³ + p₃t² + p₄t + p₅ with (p₁, …, p₅) = (2.193, −7.021, 8.526, −4.586, 0.9967), and for $|\dot{\mathcal{W}}(t)|$ the parameters are (p₁, …, p₅) = (0.00215, 0.05497, −0.2096, −0.05136, 0.3745). Using these functions and the “initial condition” $T_{S_i}(0.686) = 0.2376$, we solved the inequalities (27) using Mathematica in the range $0.683 \le \bar{c}\,(=T\bar{S_i}) \le 0.998$. Importantly, in this example, we obtained the bounds on the range of $\bar{c}$ and the initial condition by inspecting the plot of $T_{S_i}(\bar{c})$ versus $\bar{c} = T\bar{S_i}$ and matching the values of the latter to the values of $T_{S_i}$ determined by the range of t. Naturally, in a practical situation where $T_{S_i}(\bar{c})$ is not known a priori, regime bounds for $\bar{c}$ may be detected by points where the numerical solution of the ordinary differential equation fails (e.g., becomes unstable or divergent). In Figure D1 we demonstrate these fits. The upper (lower) gray curve shows simulation results for $|\dot{\mathcal{W}}(t)|$ ($|\dot{F}(t)|$), and black dashes (dots) show the shape of the corresponding fitting polynomial. The simulation details are given in the main text.
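The segment-wise fitting step can be sketched with numpy.polyfit; the samples below are synthesized from the quoted $|\dot{F}(t)|$ coefficients rather than taken from the actual simulation data:

```python
import numpy as np

# Sample |dF/dt| on the fitting window 0.23 <= t <= 1.07, using the quoted
# coefficients (p1, ..., p5) as a stand-in for the simulated rate data
t = np.linspace(0.23, 1.07, 200)
coeffs_F = np.array([2.193, -7.021, 8.526, -4.586, 0.9967])
rate = np.abs(np.polyval(coeffs_F, t))   # the fitted quantity is an absolute rate

# For a polynomial model, the least-squares fit is linear in the coefficients
p = np.polyfit(t, rate, deg=4)           # returns (p1, ..., p5), highest power first
rms = np.sqrt(np.mean((np.polyval(p, t) - rate) ** 2))
```

On real rate data the residual would be finite and segment-dependent, which is one reason to restrict each fit to the interior of its segment as described above.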

Figure D1:

The absolute temporal rates of the Helmholtz free energy and work, obtained from simulation results, used to bound the minimal time $T_{S_i}(\bar{c})$ to generate any amount of temperature times absolute entropy production $\bar{c} = T\bar{S_i}$ in the range $0.23 \le T_{S_i} \le 1.07$ (see Figure 6 in the main text). The red curve shows simulation results for $|\dot{\mathcal{W}}(t)|$, and $|\dot{F}(t)|$ is shown as a cyan curve. Black dashes and black dots show the shape of the corresponding fitting polynomials. The fitting procedure and parameters are given in App. D.

References

[1] S. Deffner and S. Campbell, “Quantum speed limits: from Heisenberg’s uncertainty principle to optimal quantum control,” J. Phys. A: Math. Theor., vol. 50, p. 453001, 2017. https://doi.org/10.1088/1751-8121/aa86c6.

[2] L. Mandelstam and I. Tamm, “The uncertainty relation between energy and time in non-relativistic quantum mechanics,” in Selected Papers, I. E. Tamm, B. M. Bolotovskii, V. Y. Frenkel, and R. Peierls, Eds., Berlin, Heidelberg, Springer, 1991, pp. 115–123. https://doi.org/10.1007/978-3-642-74626-0_8.

[3] S. B. Nicholson, A. del Campo, and J. R. Green, “Nonequilibrium uncertainty principle from information geometry,” Phys. Rev. E, vol. 98, p. 032106, 2018. https://doi.org/10.1103/physreve.98.032106.

[4] V. T. Vo, T. Van Vu, and Y. Hasegawa, “Unified approach to classical speed limit and thermodynamic uncertainty relation,” Phys. Rev. E, vol. 102, p. 062132, 2020. https://doi.org/10.1103/physreve.102.062132.

[5] S. B. Nicholson, L. P. García-Pintos, A. del Campo, and J. R. Green, “Time–information uncertainty relations in thermodynamics,” Nat. Phys., vol. 16, pp. 1211–1215, 2020. https://doi.org/10.1038/s41567-020-0981-y.

[6] S. B. Nicholson and J. R. Green, “Thermodynamic speed limits from the regression of information,” arXiv:2105.01588 [cond-mat], 2021.

[7] A. Dechant and S.-i. Sasa, “Improving thermodynamic bounds using correlations,” Phys. Rev. X, vol. 11, p. 041061, 2021. https://doi.org/10.1103/physrevx.11.041061.

[8] F. Tasnim and D. H. Wolpert, “Thermodynamic speed limits for co-evolving systems,” arXiv:2107.12471, 2021.

[9] L. P. García-Pintos, S. B. Nicholson, J. R. Green, A. del Campo, and A. V. Gorshkov, “Unifying quantum and classical speed limits on observables,” Phys. Rev. X, vol. 12, p. 011038, 2022. https://doi.org/10.1103/physrevx.12.011038.

[10] T. Van Vu and K. Saito, “Thermodynamic unification of optimal transport: thermodynamic uncertainty relation, minimum dissipation, and thermodynamic speed limits,” arXiv:2206.02684, 2022. https://doi.org/10.1103/PhysRevX.13.011013.

[11] S. Ito and A. Dechant, “Stochastic time evolution, information geometry, and the Cramér–Rao bound,” Phys. Rev. X, vol. 10, p. 021056, 2020. https://doi.org/10.1103/physrevx.10.021056.

[12] D. Gupta and D. M. Busiello, “Tighter thermodynamic bound on the speed limit in systems with unidirectional transitions,” Phys. Rev. E, vol. 102, p. 062121, 2020. https://doi.org/10.1103/physreve.102.062121.

[13] G. Falasco, M. Esposito, and J.-C. Delvenne, “Beyond thermodynamic uncertainty relations: nonlinear response, error-dissipation trade-offs, and speed limits,” J. Phys. A: Math. Theor., vol. 55, p. 124002, 2022. https://doi.org/10.1088/1751-8121/ac52e2.

[14] P. Salamon and R. S. Berry, “Thermodynamic length and dissipated availability,” Phys. Rev. Lett., vol. 51, pp. 1127–1130, 1983. https://doi.org/10.1103/physrevlett.51.1127.

[15] T. Feldmann, B. Andresen, A. Qi, and P. Salamon, “Thermodynamic lengths and intrinsic time scales in molecular relaxation,” J. Chem. Phys., vol. 83, pp. 5849–5853, 1985. https://doi.org/10.1063/1.449666.

[16] V. Fairen, M. Hatlee, and J. Ross, “Thermodynamic processes, time scales, and entropy production,” J. Phys. Chem., vol. 86, pp. 70–73, 1982. https://doi.org/10.1021/j100390a014.

[17] R. D. Miller, “Molecular motor speed limits,” Nat. Chem., vol. 4, pp. 523–525, 2012. https://doi.org/10.1038/nchem.1393.

[18] R. Milo and R. Phillips, Cell Biology by the Numbers, Garland Science, 2015. https://doi.org/10.1201/9780429258770.

[19] M. Shamir, Y. Bar-On, R. Phillips, and R. Milo, “Snapshot: timescales in cell biology,” Cell, vol. 164, p. 1302, 2016. https://doi.org/10.1016/j.cell.2016.02.058.

[20] C. Jarzynski, “Equalities and inequalities: irreversibility and the second law of thermodynamics at the nanoscale,” Annu. Rev. Condens. Matter Phys., vol. 2, pp. 329–351, 2011. https://doi.org/10.1146/annurev-conmatphys-062910-140506.

[21] C. Jarzynski, “Nonequilibrium equality for free energy differences,” Phys. Rev. Lett., vol. 78, pp. 2690–2693, 1997. https://doi.org/10.1103/physrevlett.78.2690.

[22] G. E. Crooks, “Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences,” Phys. Rev. E, vol. 60, pp. 2721–2726, 1999. https://doi.org/10.1103/physreve.60.2721.

[23] M. Esposito and C. Van den Broeck, “Three detailed fluctuation theorems,” Phys. Rev. Lett., vol. 104, p. 090601, 2010. https://doi.org/10.1103/physrevlett.104.090601.

[24] R. Rao and M. Esposito, “Detailed fluctuation theorems: a unifying perspective,” Entropy, vol. 20, p. 635, 2018. https://doi.org/10.3390/e20090635.

[25] A. C. Barato and U. Seifert, “Thermodynamic uncertainty relation for biomolecular processes,” Phys. Rev. Lett., vol. 114, p. 158101, 2015. https://doi.org/10.1103/physrevlett.114.158101.

[26] T. R. Gingrich, J. M. Horowitz, N. Perunov, and J. L. England, “Dissipation bounds all steady-state current fluctuations,” Phys. Rev. Lett., vol. 116, p. 120601, 2016. https://doi.org/10.1103/physrevlett.116.120601.

[27] I. Di Terlizzi and M. Baiesi, “Kinetic uncertainty relation,” J. Phys. A: Math. Theor., vol. 52, p. 02LT03, 2018. https://doi.org/10.1088/1751-8121/aaee34.

[28] J. M. Horowitz and T. R. Gingrich, “Thermodynamic uncertainty relations constrain non-equilibrium fluctuations,” Nat. Phys., vol. 16, pp. 15–20, 2020. https://doi.org/10.1038/s41567-019-0702-6.

[29] I. Neri, “Second law of thermodynamics at stopping times,” Phys. Rev. Lett., vol. 124, p. 040601, 2020. https://doi.org/10.1103/physrevlett.124.040601.

[30] A. Dechant and S.-i. Sasa, “Continuous time reversal and equality in the thermodynamic uncertainty relation,” Phys. Rev. Res., vol. 3, p. L042012, 2021. https://doi.org/10.1103/physrevresearch.3.l042012.

[31] A. Kolchinsky and D. H. Wolpert, “Work, entropy production, and thermodynamics of information under protocol constraints,” Phys. Rev. X, vol. 11, p. 041024, 2021. https://doi.org/10.1103/physrevx.11.041024.

[32] D. Hartich and A. Godec, “Thermodynamic uncertainty relation bounds the extent of anomalous diffusion,” Phys. Rev. Lett., vol. 127, p. 080601, 2021. https://doi.org/10.1103/physrevlett.127.080601.

[33] D. J. Skinner and J. Dunkel, “Improved bounds on entropy production in living systems,” Proc. Natl. Acad. Sci., vol. 118, 2021, Art. no. e2024300118. https://doi.org/10.1073/pnas.2024300118.

[34] D. Hendrix and C. Jarzynski, “A “fast growth” method of computing free energy differences,” J. Chem. Phys., vol. 114, pp. 5974–5981, 2001. https://doi.org/10.1063/1.1353552.

[35] H. B. Callen, Thermodynamics and an Introduction to Thermostatistics, 2nd ed., New York, Wiley, 1985.

[36] E. Aghion and J. R. Green, “Thermodynamic speed limits for mechanical work,” J. Phys. A: Math. Theor., vol. 56, p. 05LT01, 2023. https://doi.org/10.1088/1751-8121/acb5d6.

[37] C. Van den Broeck, “Stochastic thermodynamics: a brief introduction,” Proc. Int. Sch. Phys. Enrico Fermi, vol. 184, p. 155, 2013.

[38] A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a single-beam gradient force optical trap for dielectric particles,” Opt. Lett., vol. 11, p. 288, 1986. https://doi.org/10.1364/ol.11.000288.

[39] D. G. Grier and Y. Roichman, “Holographic optical trapping,” Appl. Opt., vol. 45, p. 880, 2006. https://doi.org/10.1364/ao.45.000880.

[40] E. Barkai, E. Aghion, and D. A. Kessler, “From the area under the Bessel excursion to anomalous diffusion of cold atoms,” Phys. Rev. X, vol. 4, p. 021036, 2014. https://doi.org/10.1103/physrevx.4.021036.

[41] V. Holubec, K. Kroy, and S. Steffenoni, “Physically consistent numerical solver for time-dependent Fokker-Planck equations,” Phys. Rev. E, vol. 99, p. 032117, 2019. https://doi.org/10.1103/physreve.99.032117.

[42] G. Ryskin, “Simple procedure for correcting equations of evolution: application to Markov processes,” Phys. Rev. E, vol. 56, pp. 5123–5127, 1997. https://doi.org/10.1103/physreve.56.5123.

[43] C. Van den Broeck and M. Esposito, “Ensemble and trajectory thermodynamics: a brief introduction,” Physica A, vol. 418, pp. 6–16, 2015. https://doi.org/10.1016/j.physa.2014.04.035.

[44] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Washington DC, U.S. Government Printing Office, 1964.

[45] E. Penocchio, R. Rao, and M. Esposito, “Thermodynamic efficiency in dissipative chemistry,” Nat. Commun., vol. 10, p. 1, 2019. https://doi.org/10.1038/s41467-019-11676-x.

[46] A. I. Brown and D. A. Sivak, “Theory of nonequilibrium free energy transduction by molecular machines,” Chem. Rev., vol. 120, pp. 434–459, 2019. https://doi.org/10.1021/acs.chemrev.9b00254.

[47] R. A. Bone, D. J. Sharpe, D. J. Wales, and J. R. Green, “Stochastic paths controlling speed and dissipation,” Phys. Rev. E, vol. 106, p. 054151, 2022. https://doi.org/10.1103/physreve.106.054151.

[48] F. Clarke, “On the inverse function theorem,” Pac. J. Math., vol. 64, pp. 97–102, 1976. https://doi.org/10.2140/pjm.1976.64.97.

[49] G. Muñoz-Gil, G. Volpe, M. A. Garcia-March, et al., “Objective comparison of methods to decode anomalous diffusion,” Nat. Commun., vol. 12, p. 1, 2021. https://doi.org/10.1038/s41467-021-26320-w.

[50] R. D. Neidinger, “Introduction to automatic differentiation and Matlab object-oriented programming,” SIAM Rev., vol. 52, pp. 545–563, 2010. https://doi.org/10.1137/080743627.

[51] J. M. Parrondo, J. M. Horowitz, and T. Sagawa, “Thermodynamics of information,” Nat. Phys., vol. 11, pp. 131–139, 2015. https://doi.org/10.1038/nphys3230.

[52] H. Risken, The Fokker-Planck Equation, Berlin, Springer, 1996. https://doi.org/10.1007/978-3-642-61544-3.

Received: 2022-12-30
Accepted: 2023-04-11
Published Online: 2023-05-16
Published in Print: 2023-10-26

© 2023 Walter de Gruyter GmbH, Berlin/Boston
