A Random Walk Approach to Transport in Tissues and Complex Media: From Microscale Descriptions to Macroscale Models

  • SPECIAL ISSUE: CELEBRATING J. D. MURRAY
  • Bulletin of Mathematical Biology

Abstract

The biological processes necessary for the development and continued survival of any organism are often strongly influenced by the transport properties of various biologically active species. The transport phenomena involved vary over multiple temporal and spatial scales, from organism-level behaviors such as the search for food, to systemic processes such as the transport of oxygen from the lungs to distant organs, down to microscopic phenomena such as the stochastic movement of proteins in a cell. Each of these processes is influenced by many interrelated factors. Identifying which factors are the most important, and how they interact to determine the overall result is a problem of great importance and interest. Experimental observations are often fit to relatively simple models, but in reality the observations are the output of complicated functions of the physicochemical, topological, and geometrical properties of a given system. Herein we use multistate continuous-time random walks and generalized master equations to model transport processes involving spatial jumps, immobilization at defined sites, and stochastic internal state changes. The underlying spatial models, which are framed as graphs, may have different classes of nodes, and walkers may have internal states that are governed by a Markov process. A general form of the solutions, using Fourier–Laplace transforms and asymptotic analysis, is developed for several spatially infinite regular lattices in one and two spatial dimensions, and the theory is developed for the analysis of transport and internal state changes on general graphs. The goal in each case is to shed light on how experimentally observable macroscale transport coefficients can be explained in terms of microscale properties of the underlying processes. This work is motivated by problems arising in transport in biological tissues, but the results are applicable to a broad class of problems that arise in other applications.

Notes

  1. The subscript c indicates the composite process rather than the pure space jump process.

  2. In a finite regular graph boundary vertices have a different degree, but the boundary conditions determine how the degree changes. See (Othmer and Scriven 1971) for the structurally identical problem of diffusion between coupled cells in a finite regular lattice with various boundary conditions.

  3. The convention used in defining the tensor product is given in “Appendix A” and in Othmer and Scriven (1971).

  4. The subscripts s and e are meant to indicate the start (\(k=1\)) and end (\(k=n_k\)) of an edge. However, this is merely for labeling purposes and does not mean that the transport is directed.

  5. Homogeneous in a lattice, as in a continuum, means that \(T(x|y)\) and \(\phi (t|y)\) do not vary with the current position of a particle.

  6. Both are formally equivalent if one replaces z by \(e^{i\omega }\) in \(\varvec{g}(z,t)\). One advantage of the Fourier transform method is that the lattice spacings can be included as multiplying factors in the Fourier transform, e.g., \(e^{i\varDelta X\omega }\), whereas the generating function approach is defined only for integer powers of z. This means that the moments defined above must be multiplied by appropriate length factors related to the lattice spacing to obtain the correct results, whereas in the Fourier transform approach these factors are included automatically.

  7. In this sense, the probability p(y) defined at each lattice point can be thought of as the Fourier series for the function \({\hat{p}}(\nu )\). Thus, the inverse lattice Fourier transform is simply the integral that gives the Fourier coefficients of \({\hat{p}}(\nu )\).

  8. These two quantities technically represent the probabilities associated with particles having reached membrane i (right or left) and not having yet reached a subsequent cell membrane.

  9. If particle immobilization is irreversible the following analysis simplifies somewhat, for Eq. (73) is then independent of \(p_i^{(2,2)}\) and \(k_+\) takes the place of \(k_d\) in the macroscopic parameters.

  10. The reduction of \(\varvec{A}_1\) to \(2n_e\varvec{I}\) is valid only under the restrictions discussed below.

  11. Each junction can still have arbitrarily many internal states. A single type of junction refers to all junctions in a lattice having the same connectivity, via a single edge, to other junctions in that lattice. For example, in a square lattice, each junction is connected to its neighbors to the north, south, east, and west. This differs from the exterior hexagonal lattice where type I junctions are connected to their neighbors to the north, southeast, and southwest; and type II junctions to neighbors to their south, northeast, and northwest.

References

  • Akiyama T, Kamimura K, Firkus C, Takeo S, Shimmi O, Nakato H (2008) Dally regulates Dpp morphogen gradient formation by stabilizing Dpp on the cell surface. Dev Biol 313(1):408–419

  • Iomin A (2011) Subdiffusion on a fractal comb. Phys Rev E 83(5):052106

  • Iomin A (2019) Richardson diffusion in neurons. Phys Rev E 100(1):010104

  • Iomin A, Zaburdaev V, Pfohl T (2016) Reaction front propagation of actin polymerization in a comb-reaction system. Chaos Solitons Fract 92:115–122

  • Kicheva A, Pantazis P, Bollenbach T, Kalaidzidis Y, Bittig T, Jülicher F, Gonzalez-Gaitan M (2007) Kinetics of morphogen gradient formation. Science 315(5811):521–525

  • Berezhkovskii AM, Dagdug L, Bezrukov SM (2014) From normal to anomalous diffusion in comb-like structures in three dimensions. J Chem Phys 141(5):054907

  • Berezhkovskii AM, Dagdug L, Bezrukov SM (2015) Biased diffusion in three-dimensional comb-like structures. J Chem Phys 142(13):134101

  • Bracewell RN (1986) The Fourier transform and its applications, vol 31999. McGraw-Hill, New York

  • Bressloff PC, Kim H (2018) Bidirectional transport model of morphogen gradient formation via cytonemes. Phys Biol 15(2):

  • Bressloff PC, Newby JM (2013) Stochastic models of intracellular transport. Rev Mod Phys 85(1):135

  • Gadgil C, Lee CH, Othmer HG (2005) A stochastic analysis of first-order reaction networks. Bull Math Biol 67:901–946

  • Cox DR (1967) Renewal theory, vol 1. Methuen, London

  • Entchev EV, Schwabedissen A, Gonzalez-Gaitan M (2000) Gradient formation of the TGF-beta homolog Dpp. Cell 103(6):981–91

  • Feller W (1968) An introduction to probability theory. Wiley, New York

  • Mainardi F, Raberto M, Gorenflo R, Scalas E (2000) Fractional calculus and continuous-time finance II: the waiting-time distribution. Phys A 287(3–4):468–481

  • Aurenhammer F, Klein R, Lee DT (2013) Voronoi diagrams and Delaunay triangulations. World Scientific Publishing Company, Singapore

  • Gibson WT, Gibson MC (2009) Cell topology, geometry, and morphogenesis in proliferating epithelia. Curr Topics Dev Biol 89:87–114

  • Gibson MC, Lehman DA, Schubiger G (2002) Lumenal transmission of Decapentaplegic in Drosophila imaginal discs. Dev Cell 3(3):451–60

  • Goldhirsch I, Gefen Y (1987) Biased random walk on networks. Phys Rev A 35(3):1317

  • Haerry TE, Khalsa O, O’Connor MB, Wharton KA (1998) Synergistic signaling by two BMP ligands through the SAX and TKV receptors controls wing growth and patterning in Drosophila. Development 125(20):3977–3987

  • Hamid T, Kolomeisky AB (2013) All-time dynamics of continuous-time random walks on complex networks. J Chem Phys 138(8):084110

  • Harris TJC, Tepass U (2010) Adherens junctions: from molecules to morphogenesis. Nat Revs Mol Cell Biol 11(7):502–514

  • Henyey FS, Seshadri V (1982) On the number of distinct sites visited in 2 d lattices. J Chem Phys 76(11):5530–5534

  • Hu J, Othmer HG (2011) A theoretical analysis of filament length fluctuations in actin and other polymers. J Math Biol 63(6):1001–1049

  • Hughes BD (1996) Random walks and random environments. Clarendon Press, Oxford

  • Iomin A, Méndez V (2013) Reaction-subdiffusion front propagation in a comblike model of spiny dendrites. Phys Rev E 88(1):

  • Goldhirsch I, Gefen Y (1986) Analytic method for calculating properties of random walks on networks. Phys Rev A 33(4):2583

  • Isaacson SA (2009) The reaction-diffusion master equation as an asymptotic approximation of diffusion to a small target. SIAM J Appl Math 70(1):77–111

  • Gou J, Lin L, Othmer HG (2018) A model for the Hippo pathway in the Drosophila wing disc. Biophys J 115(4):737–747. PMID: 30041810

  • Gou J, Stotsky JA, Othmer HG (2020) Growth control in the Drosophila wing disk. Wiley Interdiscip Rev Syst Biol Med, e1478

  • Kang HW, Zheng L, Othmer HG (2012) A new method for choosing the computational cell in stochastic reaction-diffusion systems. J Math Biol 2012:1017–1099

  • Karlin S, Taylor HM (1975) A first course in stochastic processes, 2nd edn. Academic Press, New York

  • Kim H, Bressloff PC (2018) Direct vs. synaptic coupling in a mathematical model of cytoneme-based morphogen gradient formation. SIAM J Appl Math 78(5):2323–2347

  • Kornberg TB (2014) Cytonemes and the dispersion of morphogens. Wiley Interdiscip Rev Dev Biol 3(6):445–463

  • Kornberg TB, Roy S (2014) Cytonemes as specialized signaling filopodia. Development 141(4):729–736

  • Choi KW (2018) Upstream paths for Hippo signaling in Drosophila organ development. BMB Reports 51(3):134

  • Lin L, Othmer HG (2017) Improving parameter inference from frap data: an analysis motivated by pattern formation in the Drosophila wing disc. B Math Biol 79(3):448–497

  • Lubashevskii IA, Zemlyanov AA (1998) Continuum description of anomalous diffusion on a comb structure. J Exp Theor Phys 87(4):700–713

  • Ciocanel MV, Fricks J, Kramer PR, McKinley SA (2020) Renewal reward perspective on linear switching diffusion systems in models of intracellular transport. Bull Math Biol 82(10):1–36

  • Matthaus F, Jagodic M, Dobnikar J (2009) E. coli superdiffusion and chemotaxis-search strategy, precision, and motility. Biophys J 97(4):946–957

  • Metzler R, Klafter J (2000) The random walk’s guide to anomalous diffusion: a fractional dynamics approach. Phys Reports 339(1):1–77

  • Montroll EW, Weiss GH (1965) Random walks on lattices II. J Math Phys 6(2):167–181

  • Montroll EW, West BJ (1979) On an enriched collection of stochastic processes. Fluct Phen 66:61

  • Montroll EW (1969) Random walks on lattices. III. calculation of first-passage times with application to exciton trapping on photosynthetic units. J Math Phys 10(4):753–765

  • Montroll EW, Greenberg JM (1964) Proceedings of the symposium on applied mathematics. Am Math Soc Providence 16:193

  • Mundt MG (2013) Characterization of a unique basolateral targeting domain in the Drosophila TGF-\(\beta \) type II receptor punt. Master’s thesis, University of Minnesota

  • Othmer HG, Painter K, Umulis D, Xue C (2009) The intersection of theory and application in biological pattern formation. Math Mod Nat Phenom 4(4):3–82

  • Othmer HG (1983) A continuum model for coupled cells. J Math Biol 17:351–369

  • Othmer HG, Scriven LE (1971) Instability and dynamic pattern in cellular networks. J Theor Biol 32:507–537

  • Othmer HG, Dunbar SR, Alt W (1988) Models of dispersal in biological systems. J Math Biol 26(3):263–298

  • Pavliotis G, Stuart A (2008) Multiscale methods: averaging and homogenization. Springer, Berlin

  • Roerdink JBTM, Shuler KE (1985) Asymptotic properties of multistate random walks. I. theory. J Stat Phys 40(1):205–240

  • Roerdink JBTM, Shuler KE (1985) Asymptotic properties of multistate random walks. II. applications to inhomogeneous periodic and random lattices. J Stat Phys 41(3):581–606

  • Churchill RV (1958) Operational Mathematics. McGraw-Hill

  • Scher H, Wu CH (1981) Random walk theory of a trap-controlled hopping transport process. Proc Natl Acad Sci 78(1):22–26

  • Zhou S, Lo WC, Suhalim JL, Digman MA, Enrico G, Qing N, Lander AD (2012) Free extracellular diffusion creates the Dpp morphogen gradient of the Drosophila wing disc. Curr Biol 22(8):668–675

  • Shlesinger MF (1974) Asymptotic solutions of continuous-time random walks. J Stat Phys 10(5):421–434

  • Havlin S, Ben-Avraham D (1987) Diffusion in disordered media. Adv Phys 36(6):695–798

  • Roy S, Huang H, Liu S, Kornberg TB (2014) Cytoneme-mediated contact-dependent transport of the Drosophila decapentaplegic signaling protein. Science 343(6173):1244624

  • Harmansa S, Alborelli I, Dimitri B, Caussinus E, Affolter M (2017) A nanobody-based toolset to investigate the role of protein localization and dispersal in Drosophila. Elife 6:

  • Chandrasekhar S (1943) Stochastic problems in physics and astronomy. Rev Mod Phys 15(1):1

  • Landman U, Shlesinger MF (1977) Cluster motion on surfaces: a stochastic model. Phys Rev B 16(8):3389

  • Landman U, Shlesinger MF (1979) Stochastic theory of multistate diffusion in perfect and defective systems. I. mathematical formalism. Phys Rev B 19(12):6207

  • Landman U, Shlesinger MF (1979) Stochastic theory of multistate diffusion in perfect and defective systems. II. case studies. Phys Rev B 19(12):6220

  • Landman U, Montroll EW, Shlesinger MF (1977) Random walks and generalized master equations with internal degrees of freedom. Proc Natl Acad Sci 74(2):430–433

  • Van Kampen NG (1992) Stochastic processes in physics and chemistry, vol 1. Elsevier, London

  • Wartlick O, Mumcu P, Jülicher F, Gonzalez-Gaitan M (2011) Understanding morphogenetic growth control - lessons from flies. Nat Rev Mol Cell Biol 12(9):594–604

  • Weiss GH, Havlin S (1986) Some properties of a random walk on a comb structure. Phys A 134(2):474–482

  • Weiss GH, Rubin RJ (2007) Random walks: theory and selected applications. Wiley-Blackwell, London, pp 363–505

  • Widmann TJ, Dahmann C (2009) Wingless signaling and the control of cell shape in Drosophila wing imaginal discs. Dev Biol 334(1):161–173

  • Yamazaki Y, Palmer L, Alexandre C, Kakugawa S, Beckett K, Gaugue I, Palmer RH, Vincent JP (2016) Godzilla-dependent transcytosis promotes Wingless signalling in Drosophila wing imaginal discs. Nat Cell Biol 18(4):451–457

Acknowledgements

Supported in part by NIH Grants # GM29123 and 54-CA-210190 and NSF Grants DMS # 178743 and 185357.

Author information

Corresponding author

Correspondence to Hans G. Othmer.

Additional information

Dedicated to James D. Murray on his 90th birthday. Jim was a pioneer in Mathematical Biology, and as a mentor to many, his leadership and vision have had an enormous impact on the development of the field.

Appendices

Appendix A: Definition of the Kronecker Product

We define the tensor product of vectors as

$$\begin{aligned} \mathbf{x} \otimes \mathbf{y} \equiv (x_1 \mathbf{y},\ldots , x_N \mathbf{y})^T = (x_1 y_1, \ldots , x_1 y_n, \ldots , x_N y_1, \ldots , x_N y_n)^T, \end{aligned}$$

and if \(\mathbf{R}\) is an \(N \times N\) matrix and \(\mathbf{T}\) an \(n \times n\) one, their tensor product is the \(Nn \times Nn\) matrix

$$\begin{aligned} \mathbf{R} \otimes \mathbf{T} = \left[ \begin{array}{ccc} R_{11}\mathbf{T} &{} \ldots &{} R_{1N}\mathbf{T} \\ \vdots &{} &{} \vdots \\ R_{N1}\mathbf{T} &{} \ldots &{} R_{NN}\mathbf{T} \end{array} \right] . \end{aligned}$$

As we have seen throughout, multistate random walks can be conveniently written as matrix–vector problems involving Kronecker products. An important property of matrices and vectors formed as Kronecker products is that one can compute Fourier transforms on the first and second terms of the Kronecker product separately. For instance if \(\varvec{F}\) is a discrete Fourier transform matrix with Hermitian adjoint \(\varvec{F}^{\prime }\),

$$\begin{aligned} \left( \varvec{F}\otimes \varvec{I}\right) \left( \varvec{A}\otimes \varvec{B}\right) \left( \varvec{F}^{\prime }\otimes \varvec{I}\right)&= \varvec{F}\varvec{A}\varvec{F}^{\prime }\otimes \varvec{B} = \tilde{\varvec{A}}\otimes \varvec{B}.\\ \left( \varvec{I}\otimes \varvec{F}\right) \left( \varvec{A}\otimes \varvec{B}\right) \left( \varvec{I}\otimes \varvec{F}^{\prime }\right)&= \varvec{A}\otimes \varvec{F}\varvec{B}\varvec{F}^{\prime } = \varvec{A}\otimes \tilde{\varvec{B}}. \end{aligned}$$

Kronecker products provide an easy way to describe certain multidimensional problems in terms of simpler one-dimensional problems (Othmer and Scriven 1971). Each additional dimension is, roughly speaking, included by appending an additional Kronecker product to the previous transition matrix. This also applies in cases where the dimension of the state space is increased by adding internal state transitions to a spatial jump process.
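As an aside (not part of the original text), the mixed-product identity above is easy to check numerically; the following minimal NumPy sketch uses arbitrary test matrices and a unitary DFT matrix as a stand-in for \(\varvec{F}\).

```python
import numpy as np

# Minimal check (assumed test matrices) of the mixed-product property used above:
# (F (x) I)(A (x) B)(F' (x) I) = (F A F') (x) B, with F a unitary DFT matrix
# and F' its Hermitian adjoint.
rng = np.random.default_rng(0)
N, n = 4, 3
A = rng.standard_normal((N, N))
B = rng.standard_normal((n, n))

F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix (stand-in for F)
Fh = F.conj().T                          # Hermitian adjoint F'
I = np.eye(n)

lhs = np.kron(F, I) @ np.kron(A, B) @ np.kron(Fh, I)
rhs = np.kron(F @ A @ Fh, B)
assert np.allclose(lhs, rhs)             # transform acts on the first factor only

# The analogous identity for the second factor of the Kronecker product.
Fn = np.fft.fft(np.eye(n)) / np.sqrt(n)
lhs2 = np.kron(np.eye(N), Fn) @ np.kron(A, B) @ np.kron(np.eye(N), Fn.conj().T)
rhs2 = np.kron(A, Fn @ B @ Fn.conj().T)
assert np.allclose(lhs2, rhs2)
```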

Appendix B: The Exterior Hexagonal Lattice

In an exterior hexagonal lattice, there are three edge orientations and two types of junctions: those centered at upward-facing trijunctions (type I) and those at downward-facing trijunctions (type II); see Fig. 12. Let us assume here that \(\varvec{T}_{SV}\) and \(\varvec{K}_{SV}\) are the same for each edge, and that \(\varvec{K}_J\) is the same for all junctions. Likewise we assume that \(\varvec{D}_s\), \(\varvec{D}_e\), \(\varvec{D}_e^{\prime }\), and \(\varvec{D}_s^{\prime }\) do not vary depending on the edge or junction being considered.

Fig. 12 Geometric quantities associated with the hexagonal lattice. Blue points are type I and red points are type II. The quantity \(\varvec{\varDelta X}_{12}\) is the displacement vector between a type I and a type II point, and \(\varvec{\varDelta X}_1\), \(\varvec{\varDelta X}_2\), and \(\varvec{\varDelta X}_3\) represent lattice displacement vectors between adjacent cells on the lattice. The red boxes indicate pairs of type I and type II points with the same lattice index \((I,J)\). The labels “s” and “e” indicate the starts and ends of edges

With two types of junctions that alternate, an easy way of assigning which edges start or end at which junctions is to require that all edges start at a type I junction and end at a type II junction (see the edges labeled “e” and “s” in Fig. 12). It is also helpful to specify a labeling system for the junctions and edges. To do so, we give the combination of three edges around a type I vertex, together with the type II junction attached at the end of the vertical edge, the same label \((I,J)\) (see Fig. 12). The lattice position associated with this structure is \(\varvec{X}\), the position of the type I vertex. The position of each SV on an edge can then be found as some displacement \(\varvec{x}\) from \(\varvec{X}\).

With this description, type I edges are vertical edges that pair type I and type II points with the same lattice index, e.g., \(\varvec{\varDelta I}_1={\varvec{0}}\). Type II edges are diagonal edges that pair type I lattice points and type II lattice points that differ with \(\varvec{\varDelta I}_2 = (0,-1)\) from start to end. Finally, type III edges pair type I and type II points with \(\varvec{\varDelta I}_3 = (1,-1)\) from start to end.

Since type I points are attached to the start of each edge and type II to the end of each edge, if we structure the probability density vector as \(\varvec{p} = (\varvec{p}_I, \varvec{p}_{II}, \varvec{p}_{e_1}, \varvec{p}_{e_2}, \varvec{p}_{e_3})^T\), we may write the overall transition matrix \(\varvec{T}+\varvec{K}\) as

(B.1)

Applying a lattice Fourier transform with respect to \(\varvec{X}\), we obtain

$$\begin{aligned} \hat{\varvec{T}}(\varvec{\omega })+\varvec{K} = \varvec{I}\otimes \left( \begin{array}{cc|ccc} \varvec{K}_J &{} {\varvec{0}} &{} \varvec{D}_s &{} \varvec{D}_s &{} \varvec{D}_s \\ {\varvec{0}} &{} \varvec{K}_J &{} \varvec{D}_e &{} e^{i\varvec{\varDelta X}_2\cdot \varvec{\omega }} \varvec{D}_e &{} e^{i\varvec{\varDelta X}_3\cdot \varvec{\omega }} \varvec{D}_e \\ \hline \varvec{D}^{\prime }_s &{} \varvec{D}^{\prime }_e &{} \varvec{T}_{SV}+\varvec{K}_{SV} &{} {\varvec{0}} &{} {\varvec{0}} \\ \varvec{D}^{\prime }_s &{} e^{-i\varvec{\varDelta X}_2\cdot \varvec{\omega }}\varvec{D}^{\prime }_e &{} {\varvec{0}} &{} \varvec{T}_{SV}+\varvec{K}_{SV} &{} {\varvec{0}} \\ \varvec{D}^{\prime }_s &{} e^{-i\varvec{\varDelta X}_3\cdot \varvec{\omega }}\varvec{D}^{\prime }_e &{} {\varvec{0}} &{} {\varvec{0}} &{} \varvec{T}_{SV}+\varvec{K}_{SV} \end{array}\right) \end{aligned}$$
(B.2)

where \(\varvec{\varDelta X}_m\) are as defined in Eq. (86). Once the local transitions in \(\varvec{K}_J\), \(\varvec{K}_{SV}\), \(\varvec{T}_{SV}\), \(\varvec{D}_m\), and \(\varvec{D}^{\prime }_m\) are specified, substituting this matrix into Eq. (33) and solving for \(\varvec{q}\) and \(\varvec{p}\) will yield a solution for the spatial distribution of a particle over the hexagonal lattice with possible internal states and SVs between junction points. If we make use of the same definitions for \(\varvec{D}\), \(\varvec{D}^{\prime }\), \(\varvec{T}_{SV}\), \(\varvec{K}_{SV}\) and \(\varvec{K}_J\) from the interior lattice example in Sect. 6, with \(n_k=1\) and \(n_s=2\), we obtain an \(8\times 8\) matrix (after removing the rows and columns associated with immobile states at the junctions, as in Sect. 6) that can be inverted to solve for \(\varvec{p}\) in Fourier–Laplace transform space. For instance, with \(1-f\) the proportion of particles that are immobilized at the boundary, the steady-state concentration at the junctions can be found as

$$\begin{aligned} \lim _{\varvec{\omega }\rightarrow \varvec{0}}\lim _{s\rightarrow 0} s \left( p_I(\varvec{\omega },s)+p_{II}(\varvec{\omega },s)\right) = \frac{f\mu \xi }{f\mu \xi +d((1-f)\mu +\xi )}. \end{aligned}$$

In closing this section, we note that random walks on exterior hexagonal lattices have been studied several times previously (Montroll 1969; Hughes 1996; Henyey and Seshadri 1982); however, in those cases, no internal states were considered. Thus, our formulation here provides a straightforward way to extend these previous results to more complicated transport processes. We also note that the lattice Green’s function for the exterior hexagonal lattice with no internal states or SVs is known (Henyey and Seshadri 1982; Hughes 1996). In Fourier transform space, the Green’s function for a hexagonal lattice where each edge is taken in one step, and the transition probabilities at each intersection are all 1/3, is of the form

$$\begin{aligned} \varvec{p}^{(H)}(\varvec{\omega },z) = \frac{\begin{pmatrix} 1 &{} \frac{z}{3}\left( 1+e^{i\varvec{\varDelta X}_2\cdot \varvec{\omega }}+e^{i\varvec{\varDelta X}_3\cdot \varvec{\omega }} \right) \\ \frac{z}{3}\left( 1+e^{-i\varvec{\varDelta X}_2\cdot \varvec{\omega }}+e^{-i\varvec{\varDelta X}_3\cdot \varvec{\omega }} \right) &{} 1 \end{pmatrix}}{1-\frac{z^2}{9}\left( 3+2\cos \omega _x+4\cos \left( \frac{\omega _x}{2}\right) \cos \left( \frac{\sqrt{3}\omega _y}{2}\right) \right) } \end{aligned}$$
(B.3)

Of course, when we set \(n_k=1\), \(n_s=1\), and replace \(\psi (t)\) with z in Eq. (B.2), we obtain this result. The \(ij^{\mathrm{th}}\) element of \(\varvec{p}^{(H)}\) gives the occupation probability for a random walk that started on a type j vertex and is currently on a type i vertex.

Our result here appears to differ slightly from previous results for this Green's function. This is because we have directly included the displacement \(\varvec{\varDelta X}_i\) in the computation, rather than just using changes in the index \(\varvec{\varDelta I}_i\), which yields precisely the result previously reported. In other words, Eq. (B.3) gives the probability of being located at position \(\varvec{X}_{\varvec{I}}\), whereas previous results give the probability of being located at index \(\varvec{I}\).
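As a hedged numerical sketch (our addition), Eq. (B.3) can be checked as the resolvent \((\varvec{I}-z\varvec{M}(\varvec{\omega}))^{-1}\) of the one-step structure matrix \(\varvec{M}(\varvec{\omega})\) of the hexagonal walk. The displacement vectors used below are an assumed choice that reproduces the cosine combination in the denominator of (B.3); the paper's own vectors are defined in Eq. (86), which is not reproduced here.

```python
import numpy as np

# Assumed displacement vectors: a choice that reproduces the cosine combination
# in the denominator of Eq. (B.3).  The paper's own vectors appear in Eq. (86),
# which is not reproduced here.
dX2 = np.array([0.5, np.sqrt(3) / 2])
dX3 = np.array([-0.5, np.sqrt(3) / 2])

w = np.array([0.37, -0.81])   # sample Fourier variable (omega_x, omega_y)
z = 0.63                      # sample generating-function variable

e2, e3 = np.exp(1j * dX2 @ w), np.exp(1j * dX3 @ w)
g = 1 + e2 + e3               # structure factor of the three edges at a junction

# One-step transition matrix between type I and type II vertices (prob. 1/3 per edge).
M = np.array([[0, g / 3], [np.conj(g) / 3, 0]])
resolvent = np.linalg.inv(np.eye(2) - z * M)

# Closed form quoted in Eq. (B.3).
denom = 1 - (z**2 / 9) * (3 + 2 * np.cos(w[0])
                          + 4 * np.cos(w[0] / 2) * np.cos(np.sqrt(3) * w[1] / 2))
closed = np.array([[1, z * g / 3], [z * np.conj(g) / 3, 1]]) / denom

assert np.allclose(resolvent, closed)
```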

B.1 The Exterior Hexagonal Lattice with Arbitrary \(n_k\)

Here, we consider the hexagonal lattice with \(n_k\) SVs and \(n_s=1\). The FPT method from Sect. 4.1 becomes essential here since, with arbitrary \(n_k\), directly inverting the resulting matrices is non-trivial.

In Appendix B we found the form of \(\hat{\varvec{T}}(\varvec{\omega })+\hat{\varvec{K}}(\varvec{\omega })\) for the exterior hexagonal lattice. In this example, \(n_s=1\), so \(\varvec{K}_{SV}= \varvec{K}_J={\varvec{0}}\). We also have that

$$\begin{aligned} \begin{aligned} \varvec{D}_s&= \frac{\psi }{2}\begin{pmatrix}1&0&\dots&0 \end{pmatrix} \\ \varvec{D}_e&= \frac{\psi }{2}\begin{pmatrix}0&\dots&0&1 \end{pmatrix} \end{aligned} \end{aligned}$$
(B.4)

and

$$\begin{aligned} \varvec{D}^{\prime }_s = \frac{\psi }{3}\begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \ \ \ \varvec{D}^{\prime }_e = \frac{\psi }{3}\begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}. \end{aligned}$$

Lastly, \(\varvec{T}_{SV}\) is the matrix for a 1D random walk on a segment of length \(n_k\) with absorbing boundaries. This can be written as

$$\begin{aligned} \varvec{T}_{SV} = \frac{\psi }{2}\begin{pmatrix} 0 &{} 1 &{} 0 &{} \dots &{} &{} &{}\\ 1 &{} 0 &{} 1 &{} 0 &{} \dots &{} &{} \\ 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} \dots &{}\\ \vdots &{} &{} \ddots &{}\ddots &{} \ddots &{} 0 &{} \dots \\ 0 &{} &{} \dots &{} 0 &{} 1 &{} 0 &{} 1 \\ 0 &{} \dots &{} &{} &{} 0 &{} 1 &{} 0 \end{pmatrix}. \end{aligned}$$

Since there is one internal state, \(\varvec{K}_{SV}={\varvec{0}}\). Since each edge has the same \(\varvec{T}_{SV}\), \(\varvec{D}\) and \(\varvec{D}^{\prime }\), we may use the formulation in Appendix D.2 to obtain the effective transition rates, \(\varvec{T}_{2,r}^{\text{ eff }}\).

To do so, start by noting that since \(n_s=1\), \(\varvec{T}_{1,r}^{\text{ eff }}\) and \(\varvec{K}_{r}\) are both equal to 0, and \(\varvec{T}_{2,r}^{\text{ eff }}\) is a scalar-valued function. Since the type I and type II junctions are equivalent in terms of the transitions that occur around them, \(\varvec{T}_{2,r}^{\text{ eff }}\) is the same whether the particle is at a type I or a type II junction.

This leaves us with the following matrix equation:

$$\begin{aligned} \left( \begin{pmatrix} 1 &{} 0 &{} 0 &{} \dots \\ 0 &{} 1 &{} 0 &{} \dots \\ \vdots &{} &{} \ddots &{}\\ 0 &{}\dots &{}&{} 1 \end{pmatrix} -\begin{pmatrix} 0 &{} \begin{pmatrix} \frac{\psi }{2}&0&\dots \end{pmatrix} \\ \begin{pmatrix} \psi \\ 0 \\ \vdots \end{pmatrix} &{} \varvec{T}_{SV} \end{pmatrix} \right) \begin{pmatrix} {\tilde{f}}_1 \\ {\tilde{f}}_2 \\ \vdots \\ {\tilde{f}}_{n_k} \end{pmatrix} = \begin{pmatrix}1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \end{aligned}$$

for \({\tilde{f}}(s)\). Solving for \({\tilde{f}}_1\) via a Schur complement yields,

$$\begin{aligned} {\tilde{f}}_1 = 1-3\varvec{D}_s(\varvec{I}-\varvec{T}_{SV})^{-1}\varvec{D}^{\prime }_s, \end{aligned}$$

and the remaining elements of \(\tilde{\varvec{f}}\) are found as

$$\begin{aligned} \tilde{\varvec{f}} = \frac{(\varvec{I}-\varvec{T}_{SV})^{-1}\varvec{D}^{\prime }_s}{{\tilde{f}}_1}. \end{aligned}$$

Finally,

$$\begin{aligned} \varvec{T}_{2,r}^{\text{ eff }} = \varvec{D}_e \tilde{\varvec{f}} = \frac{\varvec{D}_e(\varvec{I}-\varvec{T}_{SV})^{-1}\varvec{D}_s^{\prime }}{1-3\varvec{D}_s(\varvec{I}-\varvec{T}_{SV})^{-1}\varvec{D}^{\prime }_s} = \frac{\psi }{2}{\tilde{f}}_{n_k}. \end{aligned}$$

The next step is of course to explicitly solve for this FPT. Since each edge of the lattice is simply a 1D segment with absorbing boundaries, the discussion in Appendix E provides an explicit form of \((\varvec{I}-\varvec{T}_{SV})^{-1}\). After some algebraic simplification, we find that

$$\begin{aligned} \varvec{T}_{2,r}^{\text{ eff }} = \frac{2}{3}\frac{1}{\left( \frac{1-\sqrt{1-\psi ^2}}{\psi }\right) ^{n_k+1}+\left( \frac{1-\sqrt{1-\psi ^2}}{\psi }\right) ^{-n_k-1}} \end{aligned}$$

Since \(\varvec{K}=0\) and \(\varvec{T}_{1,r}^{\text{ eff }}=0\), this reduces the problem to a single-state CTRW on a hexagonal lattice. Thus, we can make use of the formula in Eq. (B.3) with z replaced by \(\psi ^{\text{ eff }} = 3\varvec{T}_{2,r}^{\text{ eff }}\). The factor of 3 comes from the fact that \(\varvec{T}_{2,r}^{\text{ eff }}\) is in fact the effective waiting time distribution multiplied by the probability that the particle travels down a particular edge. This probability is 1/3 since each junction in the exterior hexagonal lattice connects three edges, each of which is equally likely to be traversed.
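The closed form above can be checked against the matrix expression for \(\varvec{T}_{2,r}^{\text{ eff }}\); the following sketch (our addition, with \(\psi\) treated as a number, i.e., its Laplace transform evaluated at some fixed s) does so for several values of \(n_k\).

```python
import numpy as np

def T2_eff_matrix(psi, nk):
    """Effective edge-crossing WTD from the first-passage-time matrices above."""
    T_SV = np.zeros((nk, nk))
    idx = np.arange(nk - 1)
    T_SV[idx, idx + 1] = psi / 2            # jumps between neighbouring SVs
    T_SV[idx + 1, idx] = psi / 2
    D_s = np.zeros(nk); D_s[0] = psi / 2    # SV 1 -> junction
    D_e = np.zeros(nk); D_e[-1] = psi / 2   # SV n_k -> adjacent junction
    Dp_s = np.zeros(nk); Dp_s[0] = psi / 3  # junction -> SV 1 (one of three edges)
    R = np.linalg.inv(np.eye(nk) - T_SV)
    return (D_e @ R @ Dp_s) / (1 - 3 * D_s @ R @ Dp_s)

def T2_eff_closed(psi, nk):
    """Closed form quoted above."""
    a = (1 - np.sqrt(1 - psi**2)) / psi
    return (2 / 3) / (a**(nk + 1) + a**(-(nk + 1)))

psi = 0.8                                   # e.g. psi(s) = lambda/(lambda+s) at some s
for nk in (1, 2, 5, 10):
    assert np.isclose(T2_eff_matrix(psi, nk), T2_eff_closed(psi, nk))
```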

With this result, it is possible to directly compute the moments for the hexagonal lattice with \(n_k\) SVs. However, assuming each segment has a fixed total length of 1, independent of \(n_k\), the exact formulas become

$$\begin{aligned} {\tilde{m}}^{(1)}(s)&= 0 \\ {\tilde{m}}^{(2)}(s)&=\frac{1-\psi ^{\text{ eff }}}{s} \frac{\psi ^{\text{ eff }}}{2(1-\psi ^{\text{ eff }})^2} = \frac{1}{s}\frac{1-\psi ^{\text{ eff }}}{(a^{n_k+1}+a^{-n_k-1})+4\frac{1}{a^{n_k+1}+a^{-n_k-1}}-4} \end{aligned}$$

with \(a=(1-\sqrt{1-\psi ^2})\psi ^{-1}\). For an exponentially distributed waiting time \(\psi =\lambda e^{-\lambda t}\) (i.e., a Poisson jump process), we obtain the following results for the first several values \(n_k=0,1,2,\dots \)

$$\begin{aligned} m^{(2)}(t)&= \frac{1}{2}\lambda t&n_k=0\\ m^{(2)}(t)&= \frac{1}{2}\frac{\lambda t}{4}-\frac{1}{16}\left( 1-e^{-2\lambda t}\right)&n_k=1 \\ m^{(2)}(t)&= \frac{1}{2}\frac{\lambda t}{9}-\frac{2}{27}+\frac{e^{-3\lambda t}}{54}\left( 4+3\lambda t\right)&n_k=2 \\ \vdots&\\ m^{(2)}(t)&= \frac{1}{2}\frac{\lambda t}{n_k^2} + \dots&\end{aligned}$$

In these results, \(\lambda \) and \(n_k\) are independent. However, if this hexagonal lattice is instead interpreted as a discretization of a continuum problem with 1D diffusion along segments of the lattice, then \(\lambda \sim n_k^2D\) where D is the diffusivity along the segment. In that case, the leading-order term for any \(n_k\) is

$$\begin{aligned} m^{(2)}(t) = \frac{1}{2}Dt \end{aligned}$$

Although we did not specify a direction here, for the hexagonal lattice the result is the same whether the second moment is computed along the x or the y direction.
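As a sanity check (our addition), the \(n_k=1\) entry above can be verified entirely in Laplace space: with \(\psi(s)=\lambda/(\lambda+s)\), the expression \(\tilde{m}^{(2)}(s)=\psi^{\text{ eff }}/\big(2s(1-\psi^{\text{ eff }})\big)\) obtained from the formula above should agree with the Laplace transform of \(\lambda t/8-\tfrac{1}{16}(1-e^{-2\lambda t})\).

```python
import numpy as np

lam = 1.7                       # rate of the exponential WTD psi(t) = lam*exp(-lam*t)
s = np.array([0.3, 1.0, 2.5])   # sample Laplace variables

psi = lam / (lam + s)           # Laplace transform of psi(t)
a = (1 - np.sqrt(1 - psi**2)) / psi
b = a**2 + a**(-2)              # a^(n_k+1) + a^(-n_k-1) with n_k = 1
psi_eff = 2 / b

m2_laplace = psi_eff / (2 * s * (1 - psi_eff))

# Laplace transform of  lam*t/8 - (1 - exp(-2*lam*t))/16
m2_from_time_domain = lam / (8 * s**2) - (1 / s - 1 / (s + 2 * lam)) / 16

assert np.allclose(m2_laplace, m2_from_time_domain)
```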

Note that the long-term asymptotics may be obtained easily, at least for a single internal state, from the theory developed in Roerdink and Shuler (1985a). However, the full time dependence is not computable under that theory.

Appendix C: General Form of \(\varvec{T}\) and \(\varvec{K}\)

Recall from Eqs. (33)–(36) that the solution \(\varvec{p}(\varvec{X},t|\{k_0,\ell _0\})\) of a transport problem depends on the initial concentration distribution and on the form of the transition matrix. Equation (33) may be written as

$$\begin{aligned} \varvec{q}\left( \varvec{X},t \right)&=\delta (t)\varvec{\delta }_{k_{0},\ell _{0}}\delta (\varvec{X})+\int _{0}^{t}\left[ \sum _{\varvec{X}^{\prime }\in {\mathscr {N}}(\varvec{X})}\varvec{T} (\varvec{X}-\varvec{X}^{\prime })\varvec{\phi }(t-\tau )\varvec{q}(\varvec{X}^{\prime },\tau )\right] d\tau \\&\quad + \int _0^t\varvec{K}\varvec{\varLambda }(t-\tau )\varvec{q}(\varvec{X},\tau ,\left\{ k_0,\ell _0\right\} )\,d\tau \\ \varvec{p}\left( \varvec{X},t \right)&=\hat{\varvec{\varPhi }}(t)\varvec{\delta }_{k_{0},\ell _{0}}\delta (\varvec{X}) +\int _0^t \hat{\varvec{\varPhi }}(t-\tau ) \int _{0}^{\tau }\left[ \sum _{\varvec{X}^{\prime }\in {\mathscr {N}}(\varvec{X})}\varvec{T}(\varvec{X}-\varvec{X}^{\prime })\varvec{\phi }(\tau -s)\varvec{q}(\varvec{X}^{\prime },s)\right] ds\, d\tau \\&\quad + \int _0^t \hat{\varvec{\varPhi }}(t-\tau )\int _0^{\tau }\varvec{K}\varvec{\varLambda }(\tau -s)\varvec{q}(\varvec{X},s)\,ds\, d\tau \end{aligned}$$

with

$$\begin{aligned} \hat{{\varPhi }}_{k,\ell }(t)=1-\int _0^t\psi _{k,\ell }(\tau )\,d\tau =1-\int _{0}^{t}\left( \sum _{\varvec{X}^{\prime }\in \mathscr {N}(\varvec{X})}\varvec{1}^T\varvec{T}(\varvec{X^{\prime }})\varvec{\phi }(\tau )\varvec{\delta }_{k,\ell } + \varvec{1}^T\varvec{K}\varvec{\varLambda }(\tau )\varvec{\delta }_{k,\ell }\right) d\tau . \end{aligned}$$

We now present a general structure for matrices \(\varvec{T}\) and \(\varvec{K}\) that can include the examples studied, along with other examples that may arise in a variety of applications.

Recall that the number of SVs is \(n_k\), the number of types of edges, \(n_e\), and the number of internal states, \(n_s\). The global matrices \(\varvec{T}\varvec{\phi }\) and \(\varvec{K}\varvec{\varLambda }\) can be written in block-structured formats, and will be understood here as functions of the Laplace transform variable s. We also will abuse notation here and write \(\varvec{T}\) and \(\varvec{K}\) rather than \(\varvec{T}\varvec{\phi }\) and \(\varvec{K}\varvec{\varLambda }\). Recall also that \(\varvec{K}\) involves only transitions between internal states, and no spatial movement whereas \(\varvec{T}\) involves spatial movement without any change in the internal state. Let \(\varvec{K}_J\in {\mathbb {R}}^{n_s\times n_s},\varvec{T}_{SV},\ \varvec{K}_{SV}\in {\mathbb {R}}^{(n_{k}n_s)\times (n_{k}n_s)}\), \(\varvec{D}^{\prime }_s,\varvec{D}^{\prime }_e\in {\mathbb {R}}^{(n_{k}n_s)\times n_s},\) and \(\varvec{D}_s,\varvec{D}_e\in {\mathbb {R}}^{n_s\times (n_{k}n_s)}\) be various blocks of \(\varvec{T}\) and \(\varvec{K}\). In particular,

  • \(\varvec{K}_J\) characterizes transitions between the internal states at each junction

  • \(\varvec{K}_{SV}\) characterizes transitions between internal states associated with the SVs along each edge

  • \(\varvec{T}_{SV}\) characterizes jumps between SVs on an edge

  • \(\varvec{D}_s\) and \(\varvec{D}_e\) characterize transitions from the SVs to a vertex

  • \(\varvec{D}^{\prime }_s\) and \(\varvec{D}^{\prime }_e\) characterize transitions from a junction to an SV

The transitions represented by these matrices are diagrammed in Fig. 5. The indices s and e on \(\varvec{D}\) and \(\varvec{D}^{\prime }\) are used to distinguish whether the transfers occur at the “start” or “end” of each edge (see Fig. 5). This is necessary since, when the SVs are labeled from \(k=1\) to \(k=n_k\), different (but closely related) matrices are needed to describe transfers at \(k=1\) (start) and \(k=n_k\) (end). Recall Ex. 2 in Sect. 2.5, where \(\varvec{D}_s\) and \(\varvec{D}_e\) were explicitly constructed. This labeling does not imply that transport along an edge is directed; it merely serves as a way of specifying the exact positions of SVs along the edge.

Next, define \(\varvec{I}\) as the infinite-dimensional identity matrix over the lattice positions, \(\varvec{X}\), and let \(\varvec{A}_{s,m}\) and \(\varvec{A}_{e,m}\) be infinite-dimensional adjacency matrices that specify which edges start and end at a given vertex. Similarly, let \(\varvec{A}_{s,m}^{\prime }\) and \(\varvec{A}_{e,m}^{\prime }\) specify the vertices that each edge is attached to. Often, \(\varvec{A}_{e,m}^{\prime }\) is the transpose of \(\varvec{A}_{s,m}\), and likewise for \(\varvec{A}_{s,m}^{\prime }\) and \(\varvec{A}_{e,m}\), so long as the graph is undirected. The index \(m=1,\dots ,n_e\) is used to distinguish between edges of different orientations (e.g., vertical or horizontal), and \(n_e\) is the number of different types of edges. For instance, in a square lattice, horizontal edges connect junctions that differ in their x-coordinate, but the vertical edges connect junctions that differ in their y-coordinate. Even if the same types of transition occur regardless of the edge orientation, distinguishing the orientation of each edge is crucial for obtaining spatial moments of the distribution. Furthermore, this formulation lends itself to extensions where not all edges have the same internal transition matrices.

The overall transition matrix is of the form

(C.1)
(C.2)

The blocks \(\bar{\varvec{D}}_{t,k}\) and \(\bar{\varvec{D}}_{t,k}^{\prime }\) for \(k=1,\dots ,n_e\) and \(t=\{s,e\}\) are themselves block matrices of the form

Note that for each type of edge \(k=1,\dots ,n_e\), it is often possible to label the lattice so that edge \(\varvec{J}\) of that type connects to junction \(\varvec{J}\) at one terminus. With this in mind, we assume that for each edge \(\varvec{J}\) of type k, the s-end of the edge borders junction \(\varvec{J}\). Thus, \(\varvec{A}_{s,k}=\varvec{A}^{\prime }_{s,k}=\varvec{I}\), where \(\varvec{I}\) is the identity operator on the lattice. If we further assume that the matrices \(\varvec{D}_k\) are all equal, then \(\varvec{T}\) simplifies nicely,

(C.3)

which resembles the Laplacian-type structure observed previously.

Although \(\varvec{I}\), \(\varvec{A}_{t,m}\), and \(\varvec{A}_{t,m}^{\prime }\) with \(t=\{s,e\}\) were defined as infinite-dimensional matrices, they can also be understood as linear operators over the infinite lattice, e.g., \(\varvec{A}_{t,m} = A_{t,m}(\varvec{X},\varvec{Y})\), where \(\varvec{X}\) and \(\varvec{Y}\) are two junctions. In all the cases we have considered, for each fixed \(\varvec{Y}\) the lattice operators are nonzero for only a few (or one) values of \(\varvec{X}\). The interpretation of \(\varvec{A}_{t,m}\) as lattice operators also gives meaning to the notation \(\varvec{T}(\varvec{X}-\varvec{X}^{\prime },s)\) used in Eqs. (33)–(36).

Remark

In some cases there can be more than one type of junction. An example is the exterior hexagonal lattice, which has two types of junctions. When this occurs, additional blocks of rows and columns will augment Eqs. (C.1), (C.2), and the definitions of \(\varvec{D}\) and \(\varvec{D}^{\prime }\) to describe internal transitions at each type of junction, transitions from each type of junction to each type of edge, and transitions from each type of edge to each type of junction, c.f. Eq. (B.1).

We see that the dimension of the state space associated with each junction \(\varvec{X}\) is \(n_T = n_s(n_v+n_en_k)\), where \(n_v\) is the number of junction types and \(n_e\) the number of edge types.

C.1 Computation of Spatial Moments in the General Case

We now turn our attention toward computation of the moments for the general evolution equations in Eq. (33). Many properties of CTRWs are most easily analyzed in Laplace transform space. With lattice random walks, it is also often convenient to apply a lattice Fourier transform over \(\varvec{X}\), the lattice variable.

To compute the spatial moments (as functions of s, the Laplace transform variable), we start by computing the lattice Fourier transform of Eqs. (C.3) and (C.2) over the lattice spatial variable, \(\varvec{X}\), obtaining

$$\begin{aligned} \hat{\varvec{T}}+\hat{\varvec{K}}&=\left( \mathscr {F}\otimes \varvec{I}\right) (\varvec{T}+\varvec{K})\left( \mathscr {F}^{-1}\otimes \varvec{I}\right) \end{aligned}$$
(C.4)
$$\begin{aligned}&=\varvec{I}\otimes \left[ \begin{pmatrix}\varvec{K}_J &{} \varvec{D}_e+\varvec{D}_s &{} \dots &{} \varvec{D}_e+\varvec{D}_s \\ \varvec{D}^{\prime }_e+\varvec{D}^{\prime }_s &{} \varvec{T}_{SV}+\varvec{K}_{SV} &{} {\varvec{0}} &{}\dots \\ \vdots &{} {\varvec{0}} &{} \ddots &{} {\varvec{0}}\\ \varvec{D}^{\prime }_e+\varvec{D}^{\prime }_s &{} \vdots &{} {\varvec{0}} &{} \varvec{T}_{SV}+\varvec{K}_{SV} \end{pmatrix}\right. \\&\quad + \left. \begin{pmatrix}{\varvec{0}} &{} \alpha _{e,1}(\varvec{\omega })\varvec{D}_e &{} \dots &{} \alpha _{e,m}(\varvec{\omega })\varvec{D}_e \\ \beta _{e,1}(\varvec{\omega }) \varvec{D}^{\prime }_e &{} {\varvec{0}} &{} \dots &{}\\ \vdots &{} \vdots &{} &{}\\ \beta _{e,m}(\varvec{\omega }) \varvec{D}^{\prime }_e &{} {\varvec{0}} &{} \dots &{} \end{pmatrix}\right] \end{aligned}$$
(C.5)
$$\begin{aligned}&= \varvec{I}\otimes \left[ \varvec{K}^{\prime }+\varvec{T}^{\prime }(\varvec{\omega })\right] \end{aligned}$$
(C.6)

where

$$\begin{aligned} \alpha _{e,m}(\varvec{\omega }) = (e^{i\varDelta \varvec{X}_{m}\cdot \varvec{\omega }}-1), \ \ \ \ \ \beta _{e,m}(\varvec{\omega }) = (e^{-i\varDelta \varvec{X}_{m}\cdot \varvec{\omega }}-1) \end{aligned}$$

and \({\mathscr {F}}\) is the Fourier transform operator on the lattice. The vectors \(\varDelta \varvec{X}_{m}\) specify the displacements between adjacent junctions along a type m edge. The functions \(\alpha _e(\varvec{\omega })\) and \(\beta _e(\varvec{\omega })\) are analogous to the functions \((z-1)\) and \((z^{-1}-1)\) discussed in Sect. 3.1.

Since we use a lattice Fourier transform as opposed to a generating function formalism here, we have replaced z by \(e^{i\varvec{\omega }}\). With the lattice Fourier transform, it seems somewhat easier to handle cases where the \(\varDelta \varvec{X}_{t,m}\) are not all of the same length for every \(t=\{s,e\}\) or \(m=1,\dots ,n_e\), although these minutiae could likely be included in a generating function approach with only a few modifications. Lastly, the matrix \(\varvec{K}^{\prime }\) corresponds to \(\varvec{K}_0+\varvec{D}^++\varvec{D}^-\), and \(\varvec{T}^{\prime }(\varvec{\omega })\) corresponds to \((z-1)\varvec{D}^++(1/z-1)\varvec{D}^-\) from Sect. 3.2.

Due to the translation invariance of the lattice points, we see that the Fourier transform diagonalizes the adjacency matrices, \(\varvec{A}_{e,m}\) and \(\varvec{A}_{e,m}^{\prime }\). This reduces an infinite-dimensional matrix problem to an \(n_T\times n_T\) problem parameterized by a variable, \(\varvec{\omega }\). The spatial moments over the junctions are found by considering derivatives of \((\varvec{I}-\hat{\varvec{K}}-\hat{\varvec{T}})^{-1}\) with respect to \(\varvec{\omega }\) in the limit that \(\varvec{\omega }\rightarrow \varvec{0}\). For instance, on an infinite one-dimensional lattice, the derivative of the lattice Fourier transform of a function \(f(x_k)\) is

$$\begin{aligned} \lim _{\omega \rightarrow 0}\frac{\partial ^j {\hat{f}}(\omega )}{\partial \omega ^j} = \sum _{k=-\infty }^{\infty } (-i x_k)^jf(x_k) = (-i)^j m^{(j)}\ \ \ \ \ \ j=0,1,\dots \end{aligned}$$
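A small numerical illustration of this identity (our addition), using the transform convention \(\hat{f}(\omega)=\sum_k f(x_k)e^{-ix_k\omega}\) implied by the \((-i)^j\) factors here, and an arbitrary discrete distribution:

```python
import numpy as np

# Arbitrary discrete distribution f on a 1D integer lattice.
x = np.arange(-5, 6)                        # lattice points x_k
f = np.exp(-0.5 * (x - 1.3) ** 2)
f /= f.sum()

fhat = lambda w: np.sum(f * np.exp(-1j * x * w))    # convention e^{-i x_k w}

h = 1e-4
d1 = (fhat(h) - fhat(-h)) / (2 * h)                 # first derivative at w = 0
d2 = (fhat(h) - 2 * fhat(0) + fhat(-h)) / h**2      # second derivative at w = 0

m1 = np.sum(x * f)                                  # first moment
m2 = np.sum(x**2 * f)                               # second moment

assert np.isclose(d1, (-1j) ** 1 * m1, atol=1e-6)
assert np.isclose(d2, (-1j) ** 2 * m2, atol=1e-4)
```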

Given \(\hat{\varvec{T}}\) and \(\hat{\varvec{K}}\), in the limit \(\varvec{\omega }\rightarrow \varvec{0}\), we have

$$\begin{aligned} \frac{\partial }{\partial \omega _j}\left[ \varvec{I}-\varvec{K}^{\prime }-\varvec{T}^{\prime }(\varvec{\omega })\right] ^{-1}&= \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\left( \frac{\partial \varvec{T}^{\prime }}{\partial \omega _j} \right) \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\\ \frac{\partial ^2}{\partial \omega _j\partial \omega _k}\left[ \varvec{I}-\varvec{K}^{\prime }-\varvec{T}^{\prime }(\varvec{\omega })\right] ^{-1}&= 2\left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\left( \frac{\partial \varvec{T}^{\prime }}{\partial \omega _j} \right) \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\left( \frac{\partial \varvec{T}^{\prime }}{\partial \omega _k}\right) \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1} \\&\quad +\left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\left( \frac{\partial ^2\varvec{T}^{\prime }}{\partial \omega _j\partial \omega _k} \right) \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1} \end{aligned}$$

where derivatives, \(\frac{\partial }{\partial \omega _j}\), are taken with respect to \(\varvec{\omega }\) and are understood to be evaluated at \(\varvec{\omega }=\varvec{0}\). Since \(\varvec{T}^{\prime }(\varvec{\omega })=\varvec{0}\) as \(\varvec{\omega }\rightarrow \varvec{0}\), the only matrix inverse required is \(\left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\). Averaging over the internal states within a cell, we can compute the average spatial moments of a process with transition matrix, \(\hat{\varvec{T}}\) as (c.f. Landman and Shlesinger 1979a)

$$\begin{aligned} \varvec{m}_j^{(1)}&= \left\langle (-i)\hat{\varvec{\varPhi }}\left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\left( \frac{\partial \varvec{T}^{\prime }}{\partial \omega _j}\right) \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}+x_j\hat{\varvec{\varPhi }}\left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\right\rangle \end{aligned}$$
(C.7)
$$\begin{aligned} \varvec{m}_{jk}^{(2)}&= \left\langle -{\hat{\varPhi }}\left[ \varvec{I}{-}\varvec{K}^{\prime }\right] ^{-1}\left( 2\left( \frac{\partial \varvec{T}^{\prime }}{\partial \omega _j}\right) \left[ \varvec{I}{-}\varvec{K}^{\prime }\right] ^{-1}\left( \frac{\partial \varvec{T}^{\prime }}{\partial \omega _k}\right) +\left( \frac{\partial ^2\varvec{T}^{\prime }}{\partial \omega _j\partial \omega _k} \right) \right) \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\right. \end{aligned}$$
(C.8)
$$\begin{aligned}&\quad \left. +2(-i)x_j\hat{\varvec{\varPhi }}\left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\left( \frac{\partial \varvec{T}^{\prime }}{\partial \omega _k}\right) \left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1} +x_jx_k\hat{\varvec{\varPhi }}\left[ \varvec{I}-\varvec{K}^{\prime }\right] ^{-1}\right\rangle , \end{aligned}$$
(C.9)

where \(\varvec{x}=(x_1,x_2,\dots ,x_d)\) describes the position of SVs relative to a junction and \(\left\langle \dots \right\rangle \) is a suitable spatial-summation operator acting over the SVs. In this context, elements of \({\hat{\varPhi }}\) from Eq. (34) are written as

$$\begin{aligned} {\hat{\varPhi }}_{i}=\frac{1}{s}\left( 1-\sum _{j}\varvec{K}^{\prime }_{ij}(s)\right) . \end{aligned}$$
(C.10)
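The matrix-derivative identities used above can be checked against finite differences. The sketch below (our addition) uses random placeholder matrices and a one-dimensional stand-in \(\varvec{T}^{\prime}(\omega)=(e^{i\omega}-1)\varvec{D}^{+}+(e^{-i\omega}-1)\varvec{D}^{-}\), so that \(\varvec{T}^{\prime}(0)=\varvec{0}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
K = 0.1 * rng.random((n, n))    # stand-in for K'(s); small enough that I - K is invertible
Dp = 0.1 * rng.random((n, n))   # stand-in for D+
Dm = 0.1 * rng.random((n, n))   # stand-in for D-

Tprime = lambda w: (np.exp(1j * w) - 1) * Dp + (np.exp(-1j * w) - 1) * Dm
R = lambda w: np.linalg.inv(np.eye(n) - K - Tprime(w))    # [I - K' - T'(w)]^{-1}

# Analytic derivatives of T' at w = 0 and the formulas quoted above.
dT = 1j * Dp - 1j * Dm
d2T = -Dp - Dm
R0 = np.linalg.inv(np.eye(n) - K)
first = R0 @ dT @ R0
second = 2 * R0 @ dT @ R0 @ dT @ R0 + R0 @ d2T @ R0

# Compare with central finite differences of the resolvent.
h = 1e-4
fd1 = (R(h) - R(-h)) / (2 * h)
fd2 = (R(h) - 2 * R(0) + R(-h)) / h**2

assert np.allclose(first, fd1, atol=1e-6)
assert np.allclose(second, fd2, atol=1e-4)
```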

Many of the same points carry through if the transport process is posed as a generalized differential master equation rather than as an integral master equation. In the differential master equation setting, after Fourier and Laplace transformation, we write

$$\begin{aligned} \left( s\varvec{\varGamma }+\varvec{I}-\varvec{H}-\varvec{W}(\varvec{\omega })\right) \varvec{P} = \varvec{\varGamma }\varvec{\delta } \end{aligned}$$

with (recall we have set \(\varvec{T}\varvec{\phi }\mapsto \varvec{T}\) and \(\varvec{K}\varvec{\varLambda }\mapsto \varvec{K}\) in this section)

$$\begin{aligned} \varvec{W}(\varvec{\omega }) = \left( \mathscr {F}\otimes \varvec{I}\right) \varvec{\varGamma }\varvec{T}\left( \hat{\varvec{\varPhi }}\right) ^{-1} \left( \mathscr {F}^{-1}\otimes \varvec{I}\right) , \ \ \ \varvec{H} = \varvec{\varGamma }\varvec{K}\left( \hat{\varvec{\varPhi }}\right) ^{-1} \end{aligned}$$

Similar to the results above, we define the matrix \(\varvec{S} = \left( s\varvec{\varGamma }+\varvec{I}-\varvec{H}-\varvec{W}(\varvec{0})\right) ^{-1}\), and then, we may compute the moments using the matrix derivative formulas above as

$$\begin{aligned} \varvec{m}^{(1)}_j&= (-i)\varvec{S}\left( \frac{\partial \varvec{W}}{\partial \omega _j}\right) \varvec{S}\varvec{\varGamma }\varvec{\delta } + x_j\varvec{S}\varvec{\varGamma }\varvec{\delta } \\ \varvec{m}^{(2)}_{jk}&=\left[ -\left( 2\varvec{S}\left( \frac{\partial \varvec{W}}{\partial \omega _j}\right) \varvec{S}\left( \frac{\partial \varvec{W}}{\partial \omega _k}\right) \varvec{S}+\varvec{S}\left( \frac{\partial ^2\varvec{W}}{\partial \omega _j\partial \omega _k}\right) \varvec{S}\right) -2ix_j\varvec{S}\left( \frac{\partial \varvec{W}}{\partial \omega _k}\right) \varvec{S} \right. \\&\left. \qquad +x_jx_k\varvec{S} \right] \varvec{\varGamma }\varvec{\delta }. \end{aligned}$$

Appendix D: General First-Passage-Time Problems and Secondary Vertex Reductions

In Sect. 4.1, we discussed how the SVs can be eliminated from some problems to form a CTRW involving only the junctions. Here, we discuss how this is done in the general case using the description in Appendix C and C.1.

D.1 Reduction of SVs in the General Case

Recall the general integral master equation formulation in Eq. (36) with the \(n_T\times n_T\) matrices \(\varvec{T}\) and \(\varvec{K}\) defined in Eqs. (C.1) and (C.2). What we show in this section is that the effective transition rates derived in the previous examples apply more generally as well. In particular, by obtaining effective transition rates, the evolution of a system that has arbitrarily many SVs can be described by a reduced system that only involves internal states at junctions. We first give the resulting evolution equations and moment formulas, and then proceed to describe how to derive the elements of the effective transition matrices that appear in those equations.

In Laplace transform space, we write the \(n_s\)-dimensional integral master equation for the reduced system as

$$\begin{aligned} \begin{aligned} \varvec{q}_J(\varvec{X},s)&= \delta (\varvec{X})\delta (t)\varvec{\delta }_{k} + \sum _{\varvec{X}^{\prime }} \varvec{A}_2(\varvec{X}-\varvec{X}^{\prime })\otimes \varvec{T}_2^{\text{ eff }} \varvec{q}_J(\varvec{X}^{\prime },s) \\&\qquad + \left( \varvec{I}\otimes \varvec{K}_J+\varvec{A}_1\otimes \varvec{T}_1^{\text{ eff }} \right) \varvec{q}_J(\varvec{X},s) \\ \varvec{p}_J(\varvec{X},s)&= \hat{\varvec{\varPhi }}_0(s)\varvec{q}_J(\varvec{X},s) \end{aligned} \end{aligned}$$
(D.1)

where each nonzero element of the diagonal matrix \(\hat{\varvec{\varPhi }}_0\) is

$$\begin{aligned} {\hat{\varPhi }}_{0,k} = \frac{1}{s}\left( 1-\varvec{1}^T\left( \varvec{I}\otimes \varvec{K}_J+\varvec{A}_1\otimes \varvec{T}_1^{\text{ eff }} \right) \varvec{\delta }_k - \sum _{\varvec{X}} \varvec{1}^T\varvec{A}_2(\varvec{X}-\varvec{X}^{\prime })\otimes \varvec{T}_2^{\text{ eff }}\varvec{\delta }_k\right) . \end{aligned}$$

The \(n_s\times n_s\) matrices \(\varvec{T}_1^{\text{ eff }}\) and \(\varvec{T}_2^{\text{ eff }}\) describe effective transition rates for transitions within a junction and for travel between adjacent junctions, respectively. They will be defined precisely in Appendix D.2. The matrix \(\varvec{K}_J\) is the same as that in Eq. (C.2), and it also describes internal transitions at a junction. The infinite-dimensional adjacency matrices \(\varvec{A}_1\) and \(\varvec{A}_2\) are defined as (see Footnote 10)

$$\begin{aligned} \varvec{A}_1&= \left( \sum _{m=1}^{n_e}\varvec{A}_{e,m}\varvec{A}^{\prime }_{e,m} +\varvec{A}_{s,m}\varvec{A}^{\prime }_{s,m}\right) = 2n_e I\\ \varvec{A}_2&= \left( \sum _{m=1}^{n_e}\varvec{A}_{s,m}\varvec{A}^{\prime }_{e,m} +\varvec{A}_{e,m}\varvec{A}^{\prime }_{s,m}\right) , \end{aligned}$$

where \(n_e\) is the number of different types of edges in the lattice. The adjacency matrices \(\varvec{A}_{e,m}\) and \(\varvec{A}_{s,m}\) are those that are introduced in Appendix C, and m ranges over the various types of edges in the system. As described in Appendix C, these matrices can also be understood as operators over the lattice. This leads to the notation in Eq. (D.1) where \(\varvec{A}_2(\varvec{X}-\varvec{X}^{\prime })\) is written as a function of the lattice points.

From Eq. (D.1), the integral master equation for \(\varvec{p}_J\) can be derived as

$$\begin{aligned} \varvec{p}_J(\varvec{X},s)= & {} \hat{\varvec{\varPhi }}_0\delta (\varvec{X})\varvec{\delta }_k + \sum _{\varvec{X}^{\prime }}\hat{\varvec{\varPhi }}_0\left( \varvec{A}_2(\varvec{X}-\varvec{X}^{\prime })\otimes \varvec{T}^{\text{ eff }}_2\right) \hat{\varvec{\varPhi }}_0^{-1}\varvec{p}_J(\varvec{X}^{\prime },s) \\&+ \hat{\varvec{\varPhi }}_0\left( \varvec{I}\otimes \varvec{K}_J+\varvec{A}_1\otimes \varvec{T}_1^{\text{ eff }} \right) \hat{\varvec{\varPhi }}_0^{-1}\varvec{p}_J(\varvec{X},s). \end{aligned}$$

Normally, each transition in a random jump process implies a change of internal state or position; however, with effective transitions, we must be able to account for paths where a particle, starting from a junction in internal state \(\ell \), makes a number of jumps along an edge and then returns to that same junction in state \(\ell \). The WTDs for such events to occur are given by the diagonal elements of \(\varvec{T}^{\text{ eff }}_1\), and they involve no change in position or internal state. We will later discuss how to reformulate the effective transition rates so that these self-jumps need not be considered as “transitions” in \(\varvec{T}^{\text{ eff }}_1\).

It is also the case that, unlike \(\varvec{T}_J\) and \(\varvec{T}_{SV}\), which did not involve changes in internal states, \(\varvec{T}^{\text{ eff }}_2\) can include transitions that simultaneously change the position and internal state of a particle. This is because each element of \(\varvec{T}^{\text{ eff }}_{2}\) is the aggregate of a number of different steps on an edge. Even though each individual transition on the edge involves only changes in state (\(\varvec{K}_{SV}\)) or position (\(\varvec{T}_{SV}\)), the aggregate of many transitions required for a particle to travel between junctions can change both.

Each element of \(\varvec{T}_1^{\text{ eff }}\) and \(\varvec{T}_2^{\text{ eff }}\) is found by solving a first-passage-time problem to obtain the distribution of times for a particle to arrive in internal state \(\ell \) at a junction after having started in state \(\ell _0\) at that junction or an adjacent one without having visited any other junction states in the intervening time. We will discuss the solution to these problems in the next section, but first turn to the computation of the spatial moments for the reduced system.

Note that aside from the exterior hexagonal lattice and a remark in Appendix C, we have only considered lattices with a single type of junction (see Footnote 11); for example, the exterior hexagonal lattice does not fall into this category since it has type I and type II junctions. This restriction is made mostly for ease of notation and clarity, as there do not appear to be theoretical difficulties with considering multiple types of junctions. In this section, we will make two additional restrictions, also to ensure clarity of the arguments that follow.

Consider a lattice with a single type of junction and \(n_e\) types of edges, each type distinguished from the others only by its orientation relative to the x-axis. The first additional restriction is that the transport along each edge is undirected. The second restriction is that we now suppose that the lattice is constructed so that at each junction, one edge of each type starts and one edge of each type ends at that junction. Note that this restriction is also used in Appendix C. Aside from the exterior hexagonal lattice and persistent comb random walk, the examples discussed in this paper have these properties. As an example, in a square lattice, at each junction, one vertical edge and one horizontal edge start (have their s-terminus) at that junction, and one vertical and one horizontal edge end (have their e-terminus) at that junction. Thus, there are \(n_e=2\) types of edges (vertical and horizontal), and each junction is connected to \(4=2n_e\) edges in the square lattice.

Recall that the lattice Fourier transform of a function is

$$\begin{aligned} {\hat{f}}(\varvec{\omega }) = \sum _{\varvec{X}\in {\mathscr {G}}} f(\varvec{X})e^{i\varvec{X}\cdot \varvec{\omega }} \end{aligned}$$

with \(\varvec{\omega }\) defined over a d-dimensional cube, \(\left[ -\frac{\pi }{|\varDelta X|},\frac{\pi }{|\varDelta X|}\right] ^d\). Here, d is the dimension of the space and \(\varvec{\varDelta X}\) the lattice spacing.

Applying the lattice Fourier transform to the matrix,

$$\begin{aligned} \left( \varvec{I}\otimes \varvec{K}_J+\varvec{A}_1\otimes \varvec{T}_1^{\text{ eff }} \right) + \varvec{A}_2(\varvec{X})\otimes \varvec{T}_2^{\text{ eff }}, \end{aligned}$$

from Eq. (D.1), and keeping in mind the restrictions above, we obtain

$$\begin{aligned}&\varvec{I}\otimes \left[ \varvec{K}_J+2n_e(\varvec{T}_1^{\text{ eff }}+\varvec{T}_2^{\text{ eff }})\right] +\varvec{I}\otimes \left[ \sum _{j=1}^{2n_e}(e^{i\varDelta \varvec{X}_{j}\cdot \varvec{\omega }}-1)\varvec{T}_2^{\text{ eff }}\right] \\&\qquad \equiv \varvec{I}\otimes \varvec{K}_{0}+\varvec{I}\otimes \varvec{T}^{\text{ eff }}_{0}(\varvec{\omega }). \end{aligned}$$

To find the moments, we differentiate in Fourier space as in Appendix C.1. Differentiating \(\tilde{\varvec{p}}_J(\varvec{\omega },s)\), we obtain

$$\begin{aligned} \tilde{\varvec{m}}^{(1)}_j&=(-i){\hat{\varPhi }}_{0}\left[ \varvec{I}-\varvec{K}_{0}\right] ^{-1}\left( \frac{\partial \varvec{T}^{\text{ eff }}_{0}}{\partial \omega _j} \right) \left[ \varvec{I}-\varvec{K}_{0}\right] ^{-1} \end{aligned}$$
(D.2)
$$\begin{aligned} \tilde{\varvec{m}}^{(2)}_{jk}&=-{\hat{\varPhi }}_{0}\left[ \varvec{I}-\varvec{K}_{0}\right] ^{-1}\left( 2\left( \frac{\partial \varvec{T}^{\text{ eff }}_{0}}{\partial \omega _j} \right) \left[ \varvec{I}-\varvec{K}_{0}\right] ^{-1}\left( \frac{\partial \varvec{T}^{\text{ eff }}_{0}}{\partial \omega _k} \right) \right. \nonumber \\&\qquad \left. +\left( \frac{\partial ^2 \varvec{T}^{\text{ eff }}_{0}}{\partial \omega _j\partial \omega _k} \right) \right) \left[ \varvec{I}-\varvec{K}_{0}\right] ^{-1} \end{aligned}$$
(D.3)

with \(j,k=1,2,\dots ,d\). Notice that there is no summation over the \(n_k\) SVs since details regarding the SVs have been condensed into the elements of \(\varvec{T}_2^{\text{ eff }}\) which describes transfers between adjacent junctions. Likewise we do not need any information about how the internal states are situated relative to a junction. We can also write \(\hat{\varvec{\varPhi }}_0\) from above as

$$\begin{aligned} \left( \hat{\varvec{\varPhi }}_0(s)\right) _i = \frac{1}{s}\left( 1-\sum _j\varvec{K}_{0,ij}(s) \right) . \end{aligned}$$

Unlike the SVs, the internal states have not been summed over, so the moments are still written as vectors over these states.
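To make the use of Eqs. (D.2) and (D.3) concrete, the following sketch evaluates them numerically for a square lattice (\(n_e=2\)) at a fixed value of the Laplace variable. The matrices standing in for \(\varvec{K}_J\), \(\varvec{T}_1^{\text{ eff }}\), and \(\varvec{T}_2^{\text{ eff }}\), and the value of s, are hypothetical placeholders rather than kernels derived from a particular microscale model; in an actual computation they would come from the first-passage-time problems described in Sect. D.2 below.

```python
import numpy as np

# A minimal numerical sketch of Eqs. (D.2)-(D.3) for a square lattice (n_e = 2,
# hence 2*n_e = 4 neighbor directions). All matrix entries are hypothetical
# placeholders for the Laplace-transformed kernels evaluated at a fixed s.

s = 0.1
n_s = 2                                   # internal states at a junction (assumed)
dX = np.array([[1.0, 0.0], [-1.0, 0.0],
               [0.0, 1.0], [0.0, -1.0]])  # lattice displacement vectors

K_J = np.array([[0.00, 0.02], [0.03, 0.00]])   # junction state-switching kernel
T1  = np.array([[0.05, 0.01], [0.01, 0.04]])   # effective self-return kernel
T2  = np.array([[0.10, 0.02], [0.02, 0.08]])   # effective junction-to-junction kernel

K0 = K_J + dX.shape[0] * (T1 + T2)             # K_0 = K_J + 2 n_e (T_1 + T_2)
R  = np.linalg.inv(np.eye(n_s) - K0)           # [I - K_0]^{-1}
Phi0 = np.diag((1.0 - K0.sum(axis=1)) / s)     # (Phi_0)_i = (1 - sum_j K_{0,ij}) / s

# Derivatives of T_0^eff(omega) at omega = 0:
#   dT0/dw_j         =  i * (sum_m dX[m, j]) * T2
#   d2T0/(dw_j dw_k) = -(sum_m dX[m, j] * dX[m, k]) * T2
dT0  = [1j * dX[:, j].sum() * T2 for j in range(2)]
d2T0 = [[-np.sum(dX[:, j] * dX[:, k]) * T2 for k in range(2)] for j in range(2)]

# First and second moments, Eqs. (D.2) and (D.3); rows/columns index the
# junction internal states.
m1 = [(-1j) * Phi0 @ R @ dT0[j] @ R for j in range(2)]
m2 = [[-Phi0 @ R @ (2.0 * dT0[j] @ R @ dT0[k] + d2T0[j][k]) @ R for k in range(2)]
      for j in range(2)]

print(np.real_if_close(m1[0]))      # zero here: the lattice and kernels are unbiased
print(np.real_if_close(m2[0][0]))   # second moment in the x-direction
```

For this symmetric choice of displacement vectors the first moment vanishes identically, which is the expected behavior for an unbiased walk.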

Remark 1

It is important to note that since no summation is done over the SVs, the value of \(\varvec{p}_J(\varvec{X},t)\) at junction \(\varvec{X}\) is now the aggregate probability density of all particles that have reached that junction, but not yet arrived at any other junctions. In other words, a particle located on an edge after exiting junction \(\varvec{X}\), but not having yet reached an adjacent junction, would contribute to \(\varvec{p}_J(\varvec{X},t)\). This is a reflection of the fact that in the reduced process, the fine details of the exact location of a particle along an edge have been suppressed. Nonetheless, if one uses the elements of \(\hat{\varvec{\varPhi }}\) associated with junctions in the full system (e.g., Eq. (34)) rather than \(\hat{\varvec{\varPhi }}_0\) for the reduced system, the probabilities at junction \(\varvec{X}\) are obtained (cf. the difference between \(\hat{\varvec{\varPhi }}\) and \(\hat{\varvec{\varPsi }}\) in Ex. 4).

Remark 2

When the FPT procedure is applied to a Markov process, the result is typically a non-Markovian process with a reduced number of degrees of freedom (Kampen 1992). The non-Markovian character arises because, even if every transition in the original process has an exponentially distributed waiting time, the waiting-time distribution for the aggregate of several sequential exponential steps is not itself exponential. As noted above, this leads to an overall non-Markovian process, since the only continuous waiting-time distribution that leads to a Markov process is the exponential distribution.

D.2 General First-Passage-Time Problems

It now remains to describe the general first-passage-time (FPT) problem which must be solved to obtain \(\varvec{T}^{\text{ eff }}_1\) and \(\varvec{T}^{\text{ eff }}_2\) in Eq. (D.1). The elements of \(\varvec{T}^{\text{ eff }}_{1,2}\) are closely related to FPTs for a random walk to reach state \(\ell \) at a junction, given that it started in state \(\ell _0\) at one of the junctions bordering the edge and has not left the edge in the intervening time.

Recall that we assume that all edges have the same set of internal transitions and spatial jump transitions, and that the transition matrices for internal state changes and spatial jumps on an edge are \(\varvec{K}_{SV}\) and \(\varvec{T}_{SV}\), respectively. Since a particle on an edge can eventually jump to the vertex at the “s” or “e” end of that edge, \(\varvec{T}_{SV}+\varvec{K}_{SV}\) does not conserve particle number; it is the transition matrix for a random walk confined to an edge and subject to absorbing boundary conditions. Because particles are absorbed at the two ends, one can define the rate at which they are absorbed, and in the full system these absorption rates correspond to the rates at which particles arrive at the vertices at either end of an edge.

In order to define the arrival rates, we consider the nonzero elements of \(\varvec{D}_e\) and \(\varvec{D}_s\), which describe transitions from an edge to a vertex. As depicted in Fig. 5, there are typically at most \(n_s\) distinct transitions from each end of an edge, one from each internal state at \(k=1\) and \(k=n_k\). Thus, there are typically at most \(n_s\) nonzero entries in \(\varvec{D}_s\) and \(\varvec{D}_e\). Considering \(\varvec{D}_s\) first, each nonzero element can be written as \(d_{\ell }\psi _{d,\ell } (t)\), where \(d_{\ell }\) is the probability for a particle at the first SV (\(k=1\)) in state \(\ell \) to jump to state \(\ell \) at the adjacent junction, and \(\psi _{d,\ell }(t)\) is the WTD for the jump when it does occur. The same description holds for the nonzero elements of \(\varvec{D}_e\), except that in that case the particle jumps from the last SV (\(k=n_k\)) rather than the first.

Then, the Laplace transform of the FPT density for a particle to reach the junction attached to the “s”-end of an edge in state \(\ell \) after starting in state \((k_0,\ell _0)\) on the edge is of the form

$$\begin{aligned} {\tilde{f}}_{\ell }(s|\{k_0,\ell _0\}) = d_{\ell }\psi _{d,\ell }(s)\varvec{\delta }_{1,\ell }^T\left( \varvec{I}-\varvec{T}_{SV}-\varvec{K}_{SV}\right) ^{-1}\varvec{\delta }_{k_0,\ell _0} \end{aligned}$$

where \(\varvec{\delta }_{k_0,\ell _0}\) and \(\varvec{\delta }_{1,\ell }\) are Kronecker deltas over the state-space of the edge. But \({\tilde{f}}_{\ell }(s|\{k_0,\ell _0\})\) is just the \(\ell ^{\text{ th }}\) component of a vector \(\tilde{\varvec{f}}(s|\{k_0,\ell _0\})\) over the internal states at a junction. The FPT densities to reach each of the \(n_s\) internal states at a junction are found easily by decomposing \(\varvec{D}_s\) as a sum of rank-one matrices,

$$\begin{aligned} \varvec{D}_s = \sum _{\ell =1}^{n_s} d_{\ell }\psi _{d,\ell }(s)\varvec{\delta }_{\ell }\varvec{\delta }_{1,\ell }^T \end{aligned}$$

and writing

$$\begin{aligned} \tilde{\varvec{f}}(s|\{k_0,\ell _0\}) = \varvec{D}_s\left( \varvec{I}-\varvec{T}_{SV}-\varvec{K}_{SV}\right) ^{-1}\varvec{\delta }_{k_0,\ell _0}. \end{aligned}$$

This describes the FPT densities to reach the vertex adjacent to the “s”-end of the edge for a particle starting at some SV on the edge, and the FPT densities at the “e”-end can be found similarly. However, the effective transition rates we seek describe how long it takes a particle to jump between vertices, not from an SV to a vertex.

Let us consider the \(n_k^{\text{ th }}\) SV on an edge. Each internal state associated with this SV is directly connected to the junction at the “e”-end of the edge, and jumps from that junction to the edge are governed by \(\varvec{D}^{\prime }_e\). Like \(\varvec{D}_e\), \(\varvec{D}^{\prime }_e\) typically has at most \(n_s\) nonzero entries, and we can write it as

$$\begin{aligned} \varvec{D}_e^{\prime } = \sum _{\ell =1}^{n_s}d_{\ell }^{\prime }\psi _{d,\ell }^{\prime }(s)\varvec{\delta }_{n_k,\ell }\varvec{\delta }_{\ell }^T \end{aligned}$$

where \(d_{\ell }^{\prime }\) is the probability of a particle in state \(\ell \) at the junction jumping to the edge, and \(\psi ^{\prime }_{d,\ell }\) is the WTD when the jump occurs.

With this, the elements of the vector,

$$\begin{aligned} \tilde{\varvec{f}}(s|\ell _0) = \varvec{D}_s\left( \varvec{I}-\varvec{T}_{SV}-\varvec{K}_{SV}\right) ^{-1}\varvec{\delta }_{n_k,\ell _0}\,\psi _{d,\ell _0}^{\prime }(s) \end{aligned}$$

are the FPTs for a particle jumping from state \(\ell _0\) at the junction at the “e”-end of an edge to reach state \(\ell \) at the junction at the “s”-end of the same edge. Elements of \(\varvec{T}_{2}^{\text{ eff }}\) are then found by multiplying these FPTs by \(d_{\ell _0}^{\prime }\) which gives the probability of the jump occurring. Collecting all of these FPTs into a matrix, we obtain,

$$\begin{aligned} \varvec{T}_{2}^{\text{ eff }} = \varvec{D}_s\left( \varvec{I}-\varvec{T}_{SV}-\varvec{K}_{SV}\right) ^{-1}\varvec{D}_e^{\prime }. \end{aligned}$$
(D.4)

By similar arguments, we find the elements of \(\varvec{T}_1^{\text{ eff }}\) by replacing \(\varvec{D}_s\) with \(\varvec{D}_e\),

$$\begin{aligned} \varvec{T}_{1}^{\text{ eff }} = \varvec{D}_e\left( \varvec{I}-\varvec{T}_{SV}-\varvec{K}_{SV}\right) ^{-1}\varvec{D}_e^{\prime }. \end{aligned}$$
(D.5)

Notice that we could have replaced \(\varvec{D}_s\) and \(\varvec{D}^{\prime }_e\) by \(\varvec{D}_e\) and \(\varvec{D}^{\prime }_s\) in the computation of \(\varvec{T}_2^{\text{ eff }}\). Either combination is valid since we have assumed that transport along each edge is undirected. Note that many useful examples involve just two types of effective transitions (return to origin and jump to other states), in which case the matrices above will be rank two.
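As an illustration of Eqs. (D.4) and (D.5), the following sketch assembles the edge kernels for a small hypothetical edge (three SVs, two internal states) at a single value of the Laplace variable and computes the effective kernels by the matrix inversion above. The jump, switching, exit, and entry probabilities and the exponential WTD are assumptions made purely for illustration, not values taken from the paper.

```python
import numpy as np

# A sketch of Eqs. (D.4)-(D.5): effective junction-to-junction kernels obtained
# from a first-passage problem on a single edge. All numerical values are
# hypothetical; in practice they come from the Laplace-transformed WTDs of the
# microscale model, evaluated here at a single Laplace variable s.
n_k, n_s = 3, 2                   # SVs per edge and internal states per SV (assumed)
N = n_k * n_s                     # size of the edge state space

def idx(k, l):                    # SV k = 0..n_k-1 (the paper's k = 1..n_k), state l
    return k * n_s + l

s = 0.1
psi = 1.0 / (1.0 + s)             # Laplace transform of an exponential WTD (rate 1)

# Substochastic edge kernels at this s: nearest-neighbor spatial jumps (T_SV) and
# internal-state switching at a fixed SV (K_SV); columns index the source state.
jump, switch = 0.35, 0.2          # hypothetical jump / switching probabilities
T_SV = np.zeros((N, N)); K_SV = np.zeros((N, N))
for k in range(n_k):
    for l in range(n_s):
        if k > 0:       T_SV[idx(k - 1, l), idx(k, l)] = jump * psi
        if k < n_k - 1: T_SV[idx(k + 1, l), idx(k, l)] = jump * psi
        K_SV[idx(k, 1 - l), idx(k, l)] = switch * psi

# Edge-to-junction kernels: D_s collects exits toward the "s"-junction from the
# first SV, D_e collects exits toward the "e"-junction from the last SV, and
# D'_e injects particles from the "e"-junction onto the last SV.
d, d_prime = 0.35, 0.25           # hypothetical exit / entry probabilities
D_s = np.zeros((n_s, N)); D_e = np.zeros((n_s, N)); D_e_prime = np.zeros((N, n_s))
for l in range(n_s):
    D_s[l, idx(0, l)] = d * psi
    D_e[l, idx(n_k - 1, l)] = d * psi
    D_e_prime[idx(n_k - 1, l), l] = d_prime * psi

G = np.linalg.inv(np.eye(N) - T_SV - K_SV)    # resolvent of the absorbing edge walk
T2_eff = D_s @ G @ D_e_prime                  # "e"-junction -> "s"-junction, Eq. (D.4)
T1_eff = D_e @ G @ D_e_prime                  # "e"-junction -> same junction, Eq. (D.5)
print(T2_eff)
print(T1_eff)
```

Because the probabilities were chosen so that every column sum of \(\varvec{T}_{SV}+\varvec{K}_{SV}\) together with the exit terms is strictly less than one, the resolvent exists and the resulting effective kernels are substochastic, as required.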

Fig. 13

Random walk first-passage-time problems with \(n_{\ell }=1\). A The most general type of FPT problem, involving the full set of states between any two adjacent lattice states. B The two simplifications. In the first (top), only the internal states within an edge are considered, and the FPT matrix is \(\varvec{D}\left( \varvec{I}- \varvec{T}_{SV}-\varvec{K}_{SV}\right) ^{-1}\varvec{D}^{\prime }\); self-returns and jumps to adjacent junctions are highlighted. In the second, the junction is considered as well, which rescales the problem to eliminate self-jumps; this is equivalent to the general FPT problem when all edges exhibit the same set of transitions, including those to and from a lattice point. In A and B, the highlighted boxes indicate the states involved in the FPT computation

The presence of condensed paths over the SVs that return a particle to its starting state prior to jumping is a peculiarity that arises here. However, as we will now show, these self-jumps can be eliminated by solving an extended first-passage-time problem, which results in a rescaling of the values of \(\varvec{T}_2^{\text{ eff }}\) and the off-diagonal elements of \(\varvec{T}_1^{\text{ eff }}\) to account for the contingency of multiple returns to the starting point. This was done implicitly in Ex. 4.2 when we solved for the FPT to reach \(X_{i\pm 1}\) from \(X_i\) in a 1D lattice. In that case, no self-jumps were possible since the effective WTD was defined such that it described sojourns on edges that started (at \(X_i\)) and ended (at \(X_{i\pm 1}\)) at different points in space. The same principle holds in general.

It turns out the form of the rescaled transition rates is relatively simple. We first give the result and then discuss its derivation. To obtain the rescaled transition rates, we additively decompose \(\varvec{T}_1^{\text{ eff }}\) into diagonal and off-diagonal components,

$$\begin{aligned} \varvec{T}_1^{\text{ eff }} = \varvec{T}_{1,d}^{\text{ eff }}+\varvec{T}_{1,o}^{\text{ eff }}. \end{aligned}$$

The subscripts d and o indicate the diagonal and off-diagonal components of \(\varvec{T}_1^{\text{ eff }}\). The self-jumps are characterized by \(\varvec{T}_{1,d}^{\text{ eff }}\), and jumps that change the state or position of a particle are given by \(\varvec{K}\), \(\varvec{T}_{1,o}^{\text{ eff }}\), and \(\varvec{T}_2^{\text{ eff }} \). The rescaled internal transitions and first-passage-times are then of the form

$$\begin{aligned} \varvec{K}_r= & {} \varvec{K}\left( \varvec{I} - 2n_e\varvec{T}_{1,d}^{\text{ eff }}\right) ^{-1} , \ \ \ \ \varvec{T}_{1,r}^{\text{ eff }} = 2n_e\varvec{T}_{1,o}^{\text{ eff }}\left( \varvec{I} - 2n_e\varvec{T}_{1,d}^{\text{ eff }}\right) ^{-1},\ \ \ \ \nonumber \\ \varvec{T}_{2,r}^{\text{ eff }}= & {} \varvec{T}_2^{\text{ eff }}\left( \varvec{I} - 2n_e\varvec{T}_{1,d}^{\text{ eff }}\right) ^{-1} \end{aligned}$$
(D.6)

As noted before, this rescaling is equivalent to the solution of an extended first-passage-time problem that does not terminate when a particle returns to its starting state. Consider a particle starting at state \(\ell \) at vertex \(\varvec{X}\). To exclude self-jumps, consider the transition matrix for a particle in a system that includes state \(\ell \) at vertex \(\varvec{X}\) and all of the SVs and internal states on the edges that are attached to \(\varvec{X}\) (see Fig. 13).

The transition matrices for each \(\ell \) are of the form

$$\begin{aligned} \varvec{T}_{r,\ell }+\varvec{K}_{r,\ell } = \begin{pmatrix} 0 &{} \varvec{\delta }_{\ell }^T\varvec{D}_{t_1} &{} \dots &{} \varvec{\delta }_{\ell }^T\varvec{D}_{t_{2n_e}}\\ \varvec{D}^{\prime }_{t_1}\varvec{\delta }_{\ell } &{} \varvec{T}_{SV}+\varvec{K}_{SV} &{} \dots &{} {\varvec{0}} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \varvec{D}^{\prime }_{t_{2n_e}}\varvec{\delta }_{\ell } &{} {\varvec{0}} &{} \dots &{} \varvec{T}_{SV}+\varvec{K}_{SV} \end{pmatrix},\ \ \ \ \ \ \ell = 1,\dots , N_{\ell } \end{aligned}$$
(D.7)

where \(t_{k} = \{s,e\}\) specifies which end of edge k connects to vertex \(\varvec{X}\) (see Fig. 13A). We see that a particle starting at \(\varvec{X}\) in state \(\ell \) can exit the system either by (1) reaching \(\varvec{X}\) in any internal state except \(\ell \), or by (2) reaching a junction adjacent to \(\varvec{X}\) in any internal state, including \(\ell \). Since the particle cannot exit this system from the same position and state at which it started, no effective self-jumps occur.

The resulting matrix for this FPT problem is much larger than the matrix \(\varvec{T}_e+\varvec{K}_e\) which was needed to find \(\varvec{T}_{1,2}^{\text{ eff }}\), which is disadvantageous since solutions may be more difficult to compute. However, a significant simplification is possible due to the assumptions we have made. In the case we have been considering, all edges have the same set of transitions and WTDs, and since the particle starts at the junction, the probabilities \({\tilde{p}}_{k,\ell }(s)\) along each edge are therefore equal. This allows us to write

$$\begin{aligned} \varvec{T}_{r_0,\ell }+\varvec{K}_{r_0,\ell } = \begin{pmatrix} 0 &{} (\varvec{D}_{t_1})_{\ell }\\ 2n_e(\varvec{D}^{\prime }_{t_1})_{\ell }^T &{}\quad \varvec{T}_{SV}+\varvec{K}_{SV} \end{pmatrix}. \end{aligned}$$

Given this matrix, FPTs to reach an adjacent vertex, or to return to \(\varvec{X}\) in a state other than \(\ell \), may be found using the techniques discussed above. After simplification, one can show that the rescaled rates in Eq. (D.6) are the resulting solutions.

Likewise, one can substitute \(\varvec{T}_{1,r}^{\text{ eff }}\) and \(\varvec{T}_{2,r}^{\text{ eff }}\) into Eqs. (D.2) and (D.3) for the spatial moments. However, since both approaches compute approximate moments for the same initial system of SVs and junctions, they must yield very similar results. The distinction is that in the latter case, a single jump has been redefined to include only transitions in which the position or state of the particle changes. This distinction is important for interpreting the elements of \(\varvec{T}^{\text{ eff }}\): before rescaling they cannot be considered as jump probabilities multiplying waiting-time distributions in the standard sense, but after rescaling they can.
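The rescaling in Eq. (D.6) is mechanical once the effective kernels are in hand. The sketch below applies it to hypothetical matrices of the kind produced by the edge-level sketch above; the values of \(\varvec{T}_1^{\text{ eff }}\), \(\varvec{T}_2^{\text{ eff }}\), and the junction switching kernel taken for \(\varvec{K}\), as well as \(n_e=2\) (a square lattice), are illustrative assumptions.

```python
import numpy as np

# Sketch of the rescaling in Eq. (D.6). T1_eff and T2_eff stand in for the
# effective kernels (e.g., output of the edge-level computation), and K_J is a
# hypothetical junction switching kernel used for K; n_e = 2 as for a square lattice.
n_e = 2
T1_eff = np.array([[0.06, 0.01], [0.01, 0.05]])
T2_eff = np.array([[0.10, 0.02], [0.02, 0.08]])
K_J    = np.array([[0.00, 0.05], [0.05, 0.00]])

T1_d = np.diag(np.diag(T1_eff))                        # self-jump (diagonal) part
T1_o = T1_eff - T1_d                                   # off-diagonal part
M = np.linalg.inv(np.eye(T1_eff.shape[0]) - 2 * n_e * T1_d)

K_r  = K_J @ M                                         # rescaled internal transitions
T1_r = 2 * n_e * T1_o @ M                              # rescaled same-junction kernel
T2_r = T2_eff @ M                                      # rescaled junction-to-junction kernel
print(K_r, T1_r, T2_r, sep="\n")
```

The common factor \(\left( \varvec{I}-2n_e\varvec{T}_{1,d}^{\text{ eff }}\right) ^{-1}\) is simply the geometric resummation of an arbitrary number of self-returns before a state- or position-changing jump occurs.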

Appendix E: Random Walks on Segments with Various Boundary Conditions

Consider a random walk on \({\mathbb {Z}}\) where at each \(k\in {\mathbb {Z}}\) there is probability 1/2 of jumping to the left or to the right. If the random walk starts at \(\ell \in {\mathbb {Z}}\), the probability generating function for the probability of finding the walker at m is (Hughes 1996)

$$\begin{aligned} u^{(F)}(z,m|\ell )=\sum _{n=0}^{\infty } z^n p_n(m|\ell ) = (1-z^{2})^{-1/2}x^{|m-\ell |} \end{aligned}$$
(E.1)

with

$$\begin{aligned} x\equiv z^{-1}\left( 1-\sqrt{1-z^{2}}\right) \end{aligned}$$

and \(p_n(m|\ell )\) the probability of being at m on the nth step of the walk, having started at \(\ell \). For a periodic segment with \(m,\ell =0,1,\dots ,N\), the solution is obtained using the method of images (Montroll and Greenberg 1964) as

$$\begin{aligned} u^{(P)}(z,m|\ell )=\sum _{k=-\infty }^{\infty }u^{(F)}(z,m+Nk|\ell )=\left( \frac{x^{m-\ell }+x^{N-(m-\ell )}}{1-x^{N}}\right) (1-z^{2})^{-1/2}. \end{aligned}$$

It is also possible to find solutions with other types of boundary conditions (Hughes 1996; Weiss and Rubin 2007). For instance, the image points for a random walk on a segment with two absorbing boundaries are of the form \(\{ 2kN\pm \ell \}\), and the method of images solution yields

$$\begin{aligned} u^{(AA)}(z,m|\ell )=\frac{x^{|m-\ell |}-x^{|m+\ell |}+x^{2N}(x^{-|m-\ell |}-x^{-|m+\ell |})}{\sqrt{1-z^{2}}\left( 1-x^{2N}\right) }. \end{aligned}$$
(E.2)
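A quick numerical way to check Eq. (E.2) is to compare it with the resolvent of the walk restricted to the interior of the segment: taking the absorbing sites to be 0 and N (consistent with the image points \(\{2kN\pm \ell \}\)), the generating function \(u^{(AA)}(z,m|\ell )\) equals the \((m,\ell )\) entry of \((\varvec{I}-z\varvec{P})^{-1}\), where \(\varvec{P}\) is the nearest-neighbor transition matrix on sites \(1,\dots ,N-1\). A minimal sketch, in which the values of N, z, m, and \(\ell \) are arbitrary illustrative choices:

```python
import numpy as np

# Check Eq. (E.2) against a direct computation: with absorbing boundaries at
# sites 0 and N, u^{(AA)}(z, m|l) is the (m, l) entry of (I - z*P)^{-1}, where
# P is the nearest-neighbor transition matrix on the interior sites 1..N-1.
N, z = 6, 0.7
P = np.zeros((N - 1, N - 1))
for i in range(N - 1):
    if i > 0:     P[i, i - 1] = 0.5
    if i < N - 2: P[i, i + 1] = 0.5
U = np.linalg.inv(np.eye(N - 1) - z * P)      # U[m-1, l-1] = sum_n z^n p_n(m|l)

x = (1.0 - np.sqrt(1.0 - z**2)) / z
def u_AA(m, l):                                # closed form, Eq. (E.2)
    num = (x**abs(m - l) - x**abs(m + l)
           + x**(2 * N) * (x**(-abs(m - l)) - x**(-abs(m + l))))
    return num / (np.sqrt(1.0 - z**2) * (1.0 - x**(2 * N)))

m, l = 2, 4
print(U[m - 1, l - 1], u_AA(m, l))             # the two values agree
```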

Moments of \(u^{(F)}(z,\cdot |\cdot )\) can be found by summing over m and using identities related to geometric series:

$$\begin{aligned} \sum _{n=0}^{N}r^{n}&=\frac{1-r^{N+1}}{1-r}\\ \sum _{n=0}^{N}nr^{n}&=r\frac{d}{dr}\left[ \frac{1-r^{N+1}}{1-r}\right] =r\frac{1-(N+1)r^{N}+Nr^{N+1}}{(1-r)^{2}}\\ \sum _{n=0}^{N}n(n-1)r^{n}&=r^{2}\frac{d^{2}}{dr^{2}}\left[ \frac{1-r^{N+1}}{1-r}\right] \\&=r^{2}\frac{2-N(1+N)r^{N-1}+2(N^{2}-1)r^{N}-N(N-1)r^{1+N}}{(1-r)^{3}}\\ \sum _{n=0}^{N}n(n-1)\dots (n-k+1)r^{n}&=r^{k}\frac{d^{k}}{dr^{k}}\left[ \frac{1-r^{N+1}}{1-r}\right] . \end{aligned}$$

For finite N, these formulas are valid for all r (using L’Hôpital’s rule at \(r=1\)), and when \(N\rightarrow \infty \) they are valid for \(0\le r<1\). In particular, the second central moment of \(u^{(F)}\) over the integers, i.e., the generating function of the variance of the walk, is

$$\begin{aligned} \sigma ^{2}(z)=\frac{z}{(1-z)^{2}}. \end{aligned}$$
(E.3)
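As a quick sanity check (a one-line sketch, not part of the original derivation), the coefficient of \(z^n\) in \(z/(1-z)^2\) is n, which is indeed the variance of the unbiased walk after n steps:

```python
import sympy as sp

# The coefficient of z^n in z/(1 - z)**2 is n, the variance after n steps.
z = sp.symbols('z')
print(sp.series(z / (1 - z)**2, z, 0, 6))   # z + 2*z**2 + 3*z**3 + 4*z**4 + 5*z**5 + O(z**6)
```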

In the CTRW setting, one can find the arrival probabilities \(q(x,t|y)\) by substituting the Laplace transform of \(\psi (t)\) for z in the formulas in this section. Likewise, upon multiplying by \({\hat{\varPhi }}(s)\), the probability \(p(x,t|y)\) and the time-dependent moments can be found. For instance, for an unbiased 1D CTRW with nearest-neighbor jumps,

$$\begin{aligned} \sigma ^2(s) = L^2\frac{1-\psi }{s}\frac{\psi }{(1-\psi )^2} = \frac{L^2}{s}\frac{\psi }{1-\psi } \end{aligned}$$

as was found in Eq. (43) by different techniques.
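For a concrete instance of this substitution (an illustrative sketch; the exponential WTD with rate \(\lambda \) is an assumption, not a case treated above), take \({\hat{\psi }}(s)=\lambda /(s+\lambda )\). Then \(\sigma ^2(s)=L^2\lambda /s^2\), so \(\sigma ^2(t)=L^2\lambda t\), i.e., ordinary diffusive scaling:

```python
import sympy as sp

# Substituting an exponential WTD, psi_hat(s) = lam/(s + lam), into the
# Laplace-domain variance sigma^2(s) = (L^2/s) * psi_hat / (1 - psi_hat).
s, t, lam, L = sp.symbols('s t lam L', positive=True)
psi_hat = lam / (s + lam)
sigma2_s = (L**2 / s) * psi_hat / (1 - psi_hat)
print(sp.simplify(sigma2_s))                          # L**2*lam/s**2
print(sp.inverse_laplace_transform(sigma2_s, s, t))   # L**2*lam*t*Heaviside(t)
```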

The first-passage-time probability generating function may also be found by making use of the definition in Eq. (E.1) for \(u^{(F)}\). As derived in Montroll and Weiss (1965),

$$\begin{aligned} f^{(F)}(z,m|\ell ) = \frac{u^{(F)}(z,m|\ell )-\delta _{m,\ell }}{u^{(F)}(z,0|0)}. \end{aligned}$$

Furthermore, this result is easily extended to much more general cases such as multistate random walks and walks on complicated structures (Hughes 1996).
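To illustrate the relation, the following sketch (using Eq. (E.1); the expansion order is an arbitrary choice) recovers the classical first-passage probabilities to a nearest-neighbor site: 1/2 at step 1, 1/8 at step 3, 1/16 at step 5.

```python
import sympy as sp

# f(z, 1|0) = u(z,1|0)/u(z,0|0) = x(z) for the symmetric nearest-neighbor walk;
# its series coefficients are the first-passage probabilities to a neighbor.
z = sp.symbols('z')
x = (1 - sp.sqrt(1 - z**2)) / z
u = lambda m: x**abs(m) / sp.sqrt(1 - z**2)      # Eq. (E.1)
f10 = u(1) / u(0)
print(sp.series(f10, z, 0, 7))                   # z/2 + z**3/8 + z**5/16 + O(z**7)
```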


Cite this article

Stotsky, J.A., Gou, J. & Othmer, H.G. A Random Walk Approach to Transport in Tissues and Complex Media: From Microscale Descriptions to Macroscale Models. Bull Math Biol 83, 92 (2021). https://doi.org/10.1007/s11538-021-00917-0
