
Mixture of Gaussians in the open quantum random walks

Quantum Information Processing

Abstract

We discuss the Gaussian and the mixture of Gaussians in the limit of open quantum random walks. Central limit theorems for open quantum random walks under certain conditions were proven by Attal et al. (Ann Henri Poincaré 16(1):15–43, 2015) on the integer lattices and by Ko et al. (Quantum Inf Process 17(7):167, 2018) on the crystal lattices. The purpose of this paper is to investigate the general situation. We see that the Gaussian and the mixture of Gaussians in the limit depend on the structure of the invariant states of the intrinsic quantum Markov semigroup whose generator is given by the Kraus operators which generate the open quantum random walks. Some concrete models are considered for the open quantum random walks on the crystal lattices. Due to the intrinsic structure of the crystal lattices, we can conveniently construct the dynamics as we like. Here, we consider the crystal lattices of \(\mathbb {Z}^2\) with two intrinsic points, and the hexagonal, triangular, and Kagome lattices. We also discuss Fourier analysis on the crystal lattices, which gives another method to obtain the limit theorems.




References

  1. Attal, S., Guillotin-Plantard, N., Sabot, C.: Central limit theorems for open quantum random walks and quantum measurement records. Ann. Henri Poincaré 16(1), 15–43 (2015)


  2. Attal, S., Petruccione, F., Sabot, C., Sinayskiy, I.: Open quantum random walks. J. Stat. Phys. 147, 832–852 (2012)


  3. Attal, S., Petruccione, F., Sinayskiy, I.: Open quantum walks on graphs. Phys. Lett. A 376(18), 1545–1548 (2012)


  4. Fagnola, F., Rebolledo, R.: Transience and recurrence of quantum Markov semigroups. Probab. Theory Relat. Fields 126, 289–306 (2003)


  5. Fagnola, F., Rebolledo, R.: Quantum Markov semigroups and their stationary states. In: Stochastic Analysis and Mathematical Physics II. Trends in Mathematics, pp. 77–128 (2003)

  6. Fagnola, F., Pellicer, R.: Irreducible and periodic positive maps. Commun. Stoch. Anal. 3(3), 407–418 (2009)


  7. Ko, C.K., Konno, N., Segawa, E., Yoo, H.J.: How does Grover walk recognize the shape of crystal lattice? Quantum Inf. Process. 17(7), 167 (2018)


  8. Ko, C.K., Konno, N., Segawa, E., Yoo, H.J.: Central limit theorems for open quantum random walks on the crystal lattices. J. Stat. Phys. 176, 710–735 (2019)


  9. Konno, N., Yoo, H.J.: Limit theorems for open quantum random walks. J. Stat. Phys. 150, 299–319 (2013)


  10. Sunada, T.: Topological Crystallography: With a View Towards Discrete Geometric Analysis. Surveys and Tutorials in the Applied Mathematical Sciences, vol. 6. Springer (2013)

  11. Umanita, V.: Classification and decomposition of quantum Markov semigroups. Thesis, Politecnico di Milano (2005)

  12. Umanita, V.: Classification and decomposition of quantum Markov semigroups. Probab. Theory Relat. Fields 134, 603–623 (2006)



Acknowledgements

We are grateful to Professor Franco Fagnola and Professor Veronica Umanita for many helpful discussions and giving us reference [11]. We thank Mrs. Yoo Jin Cha for helping with figures. The research by H. J. Yoo was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1B03936006).

Author information


Corresponding author

Correspondence to Hyun Jae Yoo.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A Examples: central limit theorem

In this Appendix, we consider some more examples satisfying the CLT. The examples of OQRWs on the hexagonal lattice were investigated in [8]. Here, we consider the examples for the triangular and Kagome lattices.

A.1 Triangular lattice

The triangular lattice is a crystal lattice in \(\mathbb {R}^2\); see Fig. 4.

Fig. 4 Triangular lattice

A.1.1 Preparation

We let \(V_0=\{u\}\) and let \(\{e_i\}_{i=1,2,3}\) be the three self-loops in \(G_0\) with \(\mathrm {o}(e_i)=\mathrm {t}(e_i)=u\). (See Fig. 4.) The reversed self-loops are denoted by \(\overline{e}_i\), \(i=1,2,3\). It is natural to define \(\mathfrak {h}\equiv \mathfrak {h}_u=\mathbb {C}^6\) and to look for six matrices B(e), \(e\in A(G_0)\), of size \(6\times 6\) that satisfy (2.4). However, investigating the general case in full is beyond our scope here, so we focus on simple examples that satisfy the central limit theorem. For that, we let \(\mathfrak {h}=\mathbb {C}^3\oplus \mathbb {C}^3\) and consider \(3\times 3\) block matrices for B(e), \(e\in A(G_0)\), as follows. We remark that the following construction is very similar to the example for the hexagonal lattice studied in [8]. First, we let

$$\begin{aligned} \widehat{\theta }(e_1)=\frac{1}{\sqrt{2}}[1,1],\quad \widehat{\theta }(e_2)=\frac{1}{\sqrt{2}}[-1,1],\quad \widehat{\theta }(e_3)=[0,-\sqrt{2}], \end{aligned}$$

and \(\widehat{\theta }(\overline{e}_i)=-\widehat{\theta }(e_i)\), \(i=1,2,3\). In order to define the operators B(e), \(e\in A(G_0)\), let \(U=\left[ \begin{matrix}{} \mathbf{u}_1&\mathbf{u}_2&\mathbf{u}_3\end{matrix}\right] \) and \(V=\left[ \begin{matrix}{} \mathbf{v}_1&\mathbf{v}_2&\mathbf{v}_3\end{matrix}\right] \) be \(3\times 3\) unitary matrices with column vectors \(\mathbf{u}_i= [u_{1i}, u_{2i},u_{3i}]^T\) and \(\mathbf{v}_i= [ v_{1i},v_{2i},v_{3i} ]^T\), \(i=1,2,3\). For \(i=1,2,3\), let \(U_i\) be a \(3\times 3\) matrix whose ith column is \(\mathbf{u}_i\) and the remaining columns are zeros. Similarly, let \(V_i\) be the \(3\times 3\) matrix, whose ith column is the vector \(\mathbf{v}_i\) and other columns are zeros. For \(i=1,2,3\), let \(\widetilde{U}_i\) and \(\widetilde{V}_i\) be \(6\times 6\) matrices whose block matrices are given as follows:

$$\begin{aligned} \widetilde{U}_i=\left[ \begin{matrix}0&{}0\\ U_i&{}0\end{matrix}\right] , \quad \widetilde{V}_i=\left[ \begin{matrix}0&{}V_i\\ 0&{}0\end{matrix}\right] . \end{aligned}$$

Now, we define

$$\begin{aligned} B(e_i):=\widetilde{U}_i,\quad \text {and}\quad B(\overline{e}_i):=\widetilde{V}_i,\quad i=1,2,3. \end{aligned}$$

Then, \(B(e_i),\, B(\overline{e}_i),\, i=1,2,3\) satisfy condition (2.4).

It is easy to check that a state \(\rho \in \mathcal {E}(\mathfrak {h})\) is a solution to the equation \(\mathcal {L}_*(\rho )=0\), where \(\mathcal {L}_*(\rho )\) was defined in (3.1), if and only if \(\rho =\rho _1\oplus \rho _2\) and it holds that

$$\begin{aligned} \rho _1=\sum _{i=1}^3V_i\rho _2 V_i^*,~\quad \rho _2=\sum _{i=1}^3U_i\rho _1 U_i^*. \end{aligned}$$
(A.1)

Let us consider the following (doubly) stochastic matrices:

$$\begin{aligned} P_U:=\left[ \begin{matrix}|u_{11}|^2&{}|u_{21}|^2&{}|u_{31}|^2\\ |u_{12}|^2&{}|u_{22}|^2&{}|u_{32}|^2\\ |u_{13}|^2&{}|u_{23}|^2&{}|u_{33}|^2\end{matrix}\right] ,\quad P_V:=\left[ \begin{matrix}|v_{11}|^2&{}|v_{21}|^2&{}|v_{31}|^2\\ |v_{12}|^2&{}|v_{22}|^2&{}|v_{32}|^2\\ |v_{13}|^2&{}|v_{23}|^2&{}|v_{33}|^2\end{matrix}\right] . \end{aligned}$$
(A.2)

It was shown in [8, Proposition 4.1] that if the stochastic matrices \(P_UP_V\) and \(P_VP_U\) are irreducible, then the equation \(\mathcal {L}_*(\rho )=0\) has a unique state solution \(\rho =\rho _1\oplus \rho _2\) with \(\rho _1=\rho _2=\frac{1}{6}I\).
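This uniqueness claim is easy to test numerically. The following Python (numpy) sketch, a minimal check rather than a proof, takes \(U=V=U_G\) with the Grover-type unitary \(U_G\) of (A.3), verifies that \(P_UP_V\) is irreducible, and checks that \(\rho _1=\rho _2=\frac{1}{6}I\) solves the fixed-point equations (A.1):

```python
import numpy as np

# Grover-type unitary U_G from (A.3); we take U = V = U_G.
UG = np.array([[-1.0, 2.0, 2.0],
               [2.0, -1.0, 2.0],
               [2.0, 2.0, -1.0]]) / 3.0
U = V = UG

# Stochastic matrices from (A.2): (P_U)_{ij} = |u_{ji}|^2.
PU = np.abs(U.T) ** 2
PV = np.abs(V.T) ** 2

# A nonnegative n x n matrix M is irreducible iff (I + M)^(n-1) > 0 entrywise.
M = PU @ PV
n = M.shape[0]
irreducible = bool(np.all(np.linalg.matrix_power(np.eye(n) + M, n - 1) > 0))

def column_matrix(A, i):
    """Matrix keeping only the i-th column of A (the U_i, V_i of the text)."""
    B = np.zeros_like(A)
    B[:, i] = A[:, i]
    return B

# Check that rho_1 = rho_2 = I/6 satisfies the fixed-point equations (A.1).
rho1 = rho2 = np.eye(3) / 6.0
eq1 = sum(column_matrix(V, i) @ rho2 @ column_matrix(V, i).T.conj() for i in range(3))
eq2 = sum(column_matrix(U, i) @ rho1 @ column_matrix(U, i).T.conj() for i in range(3))
fixed_point = bool(np.allclose(eq1, rho1) and np.allclose(eq2, rho2))
print(irreducible, fixed_point)
```

The same script can be rerun with any pair of \(3\times 3\) unitaries to probe when uniqueness fails, e.g., when \(P_UP_V\) becomes reducible.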

A.1.2 Example: nonzero covariance

Let us take \(U=V=U_G\), where

$$\begin{aligned} U_G=\frac{1}{3}\left[ \begin{matrix}-1&{}2&{}2\\ 2&{}-1&{}2\\ 2&{}2&{}-1 \end{matrix}\right] . \end{aligned}$$
(A.3)

It is obvious that \(P_UP_V=P_VP_U\) is irreducible, where \(P_U\) and \(P_V\) are defined in (A.2). Therefore, the equation \(\mathcal {L}_*(\rho )=0\) has a unique state solution \(\rho =\frac{1}{6}I\oplus \frac{1}{6}I\). From equation (3.2), it is easy to see that \(m=0\). By direct computation from (3.3), we get, up to addition of a constant multiple of the identity,

$$\begin{aligned} L_1=L_{1,u}\oplus L_{1,v}, \quad L_{1,u}=\left[ \begin{matrix}3&{}0&{}0\\ 0&{}\frac{3}{2}&{}0\\ 0&{}0&{}0 \end{matrix}\right] ,~~L_{1,v}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}\frac{3}{2}&{}0\\ 0&{}0&{}3 \end{matrix}\right] \end{aligned}$$

and

$$\begin{aligned} L_2=L_{2,u}\oplus L_{2,v}, \quad L_{2,u}=\left[ \begin{matrix}\frac{3}{2}&{}0&{}0\\ 0&{}3&{}0\\ 0&{}0&{}0 \end{matrix}\right] ,~~L_{2,v}=\left[ \begin{matrix}\frac{3}{2}&{}0&{}0\\ 0&{}0&{}0\\ 0&{}0&{}3 \end{matrix}\right] . \end{aligned}$$

Notice that the transformation matrix \(\Theta \) in (2.1) is given by

$$\begin{aligned} \Theta =\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}-1\\ 1&{}1\end{matrix}\right] . \end{aligned}$$
(A.4)

By the linearity of equation (3.3), we have (see [8, Remark 3.6])

$$\begin{aligned} L_{\mathbf{e}_i}=\sum _{j=1}^2{\Theta }_{ij}L_j, \quad i=1,2. \end{aligned}$$
(A.5)

Therefore, we get

$$\begin{aligned} L_{\mathbf{e}_1}=\Theta _{11}L_1+\Theta _{12}L_2=L_{\mathbf{e}_1,1}\oplus L_{\mathbf{e}_1,2},\quad L_{\mathbf{e}_1,1}=-L_{\mathbf{e}_1,2}=\frac{3}{2\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}-1&{}0\\ 0&{}0&{}0 \end{matrix}\right] , \end{aligned}$$

and

$$\begin{aligned} L_{\mathbf{e}_2}=\Theta _{21}L_1+\Theta _{22}L_2=L_{\mathbf{e}_2,1}\oplus L_{\mathbf{e}_2,2} \end{aligned}$$

with

$$\begin{aligned} L_{\mathbf{e}_2,1}=\frac{9}{2\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}1&{}0\\ 0&{}0&{}0 \end{matrix}\right] ,~~L_{\mathbf{e}_2,2}=\frac{3}{2\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}1&{}0\\ 0&{}0&{}4 \end{matrix}\right] . \end{aligned}$$

Now, we are ready to compute the covariance matrix \(\Sigma \) given in (3.4). Since the mean m is zero and \(\rho _\infty =\frac{1}{6}I\), we are left with

$$\begin{aligned} C_{ij}= & {} \frac{1}{6}\sum _{e\in A(G_0)}\text {Tr}(B(e)B(e)^*)(\widehat{\theta }(e))_i(\widehat{\theta }(e))_j\nonumber \\&+\frac{1}{3}\sum _{e\in A(G_0)}\mathrm {Tr}(B(e)B(e)^*L_{\mathbf{e}_i})(\widehat{\theta }(e))_j\nonumber \\=: & {} C^{(1)}_{ij}+C_{ij}^{(2)}. \end{aligned}$$
(A.6)

For the first term \(C^{(1)}_{ij}\), the trace part is all 1 and thus we get

$$\begin{aligned} C^{(1)}=\left[ \begin{matrix}\frac{1}{3}&{}0\\ 0&{}1\end{matrix}\right] . \end{aligned}$$

For the second term \(C^{(2)}_{ij}\), computing the traces \(\mathrm {Tr}(B(e)B(e)^*L_{\mathbf{e}_i})\) term by term, we get

$$\begin{aligned} C^{(2)}=\left[ \begin{matrix}\frac{1}{3}&{}0\\ 0&{}1\end{matrix}\right] . \end{aligned}$$

Thus, summing those two terms we get the covariance matrix

$$\begin{aligned} \Sigma =C^{(1)}+C^{(2)}=\left[ \begin{matrix}\frac{2}{3}&{}0\\ 0&{}2\end{matrix}\right] . \end{aligned}$$
(A.7)

The characteristic function for the Gaussian random variable X with mean zero and covariance \(\Sigma \) in (A.7) is

$$\begin{aligned} \mathbb {E}(e^{i\langle \mathbf{t},X\rangle })=e^{-\frac{1}{3}(t_1^2+3t_2^2)}. \end{aligned}$$

We notice that the variance along the horizontal (x-) axis is smaller than that along the vertical (y-) axis. This reflects the fact that along the vertical direction there are “roads” (the vectors \(\widehat{\theta }(e_3)\) and \(\widehat{\theta }(\overline{e}_3)\)) through which the walker can travel.
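The first term \(C^{(1)}\) in (A.6) depends only on the jump vectors \(\widehat{\theta }(e)\), so it can be checked in a few lines of Python (numpy). This is a sketch of that single term only (the second term requires the operators \(L_{\mathbf{e}_i}\)); it yields \(C^{(1)}=\mathrm {diag}(1/3,1)\), consistent with the characteristic function stated above:

```python
import numpy as np

# Jump vectors theta-hat for the three self-loops e_1, e_2, e_3 ...
s = 1.0 / np.sqrt(2.0)
thetas = [np.array([s, s]), np.array([-s, s]), np.array([0.0, -np.sqrt(2.0)])]
# ... and their reversals, which carry the opposite vectors.
thetas += [-t for t in thetas]

# First covariance term in (A.6): Tr(B(e)B(e)^*) = 1 for every edge here,
# so C1 = (1/6) * sum over the six edges of the outer products theta theta^T.
C1 = sum(np.outer(t, t) for t in thetas) / 6.0
print(np.round(C1, 10))
```

The sum of outer products also shows directly why the cross term vanishes: the contributions of \(e_1\) and \(e_2\) cancel.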

A.1.3 Example: zero covariance

Let us consider one more example of an OQRW on the triangular lattice. This time, let us take \(U=U_G\) in (A.3) and \(V=I\). In this case, the matrices \(P_UP_V\) and \(P_VP_U\) are also irreducible and hence the equation \(\mathcal {L}_*(\rho )=0\) has a unique state solution \(\rho _\infty =\frac{1}{6}I\). From equation (3.2), it is easy to see that \(m=0\). As before, the solutions of (3.3) are, up to addition of a constant multiple of the identity,

$$\begin{aligned} L_1=L_{1,u}\oplus L_{1,v}, \quad L_{1,u}=\left[ \begin{matrix}1&{}0&{}0\\ 0&{}0&{}0\\ 0&{}0&{}-1 \end{matrix}\right] ,~~L_{1,v}=0, \end{aligned}$$

and

$$\begin{aligned} L_2=L_{2,u}\oplus L_{2,v}, \quad L_{2,u}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}1&{}0\\ 0&{}0&{}-1 \end{matrix}\right] ,~~L_{2,v}=0. \end{aligned}$$

Recall \(\Theta \) in (A.4). We then get

$$\begin{aligned} L_{\mathbf{e}_1}=\Theta _{11}L_1+\Theta _{12}L_2=L_{\mathbf{e}_1,u}\oplus L_{\mathbf{e}_1,v}, \quad L_{\mathbf{e}_1,u}=\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}-1&{}0\\ 0&{}0&{}0 \end{matrix}\right] ,~~L_{\mathbf{e}_1,v}=0, \end{aligned}$$

and

$$\begin{aligned} L_{\mathbf{e}_2}=\Theta _{21}L_1+\Theta _{22}L_2=L_{\mathbf{e}_2,u}\oplus L_{\mathbf{e}_2,v},\quad L_{\mathbf{e}_2,u}=\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}1&{}0\\ 0&{}0&{}-2 \end{matrix}\right] ,~~L_{\mathbf{e}_2,v}=0. \end{aligned}$$

The covariance matrix can be computed as before, and we get \(\Sigma =0\). The limit measure is thus a Gaussian with zero mean and zero covariance, that is, a Dirac measure at the origin.

A.2 Kagome lattice

In this subsection, we consider the OQRWs on the Kagome lattice; see Fig. 5.

Fig. 5 Kagome lattice

We let \(V_0=\{1, 2, 3\}\) by naming the vertices with numbers. For \(1\le i\ne j\le 3\), we let \(\{e_{ij}, f_{ij}\}\) be the 12 directed edges in \(G_0\) with a convention \(\mathrm {o}(e_{ij})=j\) and \(\mathrm {t}(e_{ij})=i\) and similarly for \(f_{ij}\)’s. We notice that \(\overline{e}_{ij}=e_{ji}\) and \(\overline{f}_{ij}=f_{ji}\). We let

$$\begin{aligned} \widehat{\theta }(e_{12})= & {} \widehat{\theta }(e_{21})=\widehat{\theta }(e_{13}) =\widehat{\theta }(e_{31})=0,\\ \widehat{\theta }(e_{23})= & {} \widehat{\theta }(f_{13})=-\widehat{\theta }(e_{32}) =-\widehat{\theta }(f_{31})=\frac{1}{{\sqrt{2}}}[1,1], \end{aligned}$$

and

$$\begin{aligned} \widehat{\theta }(f_{12})=\widehat{\theta }(f_{32})=-\widehat{\theta }(f_{21}) =-\widehat{\theta }(f_{23})=\frac{1}{\sqrt{2}}[-1,1]. \end{aligned}$$

In order to define the operators B(e), \(e\in A(G_0)\), let \(\mathfrak {h}_1=\mathfrak {h}_2=\mathfrak {h}_3=\mathbb {C}^4\), and \(\mathfrak {h}=\mathfrak {h}_1\oplus \mathfrak {h}_2\oplus \mathfrak {h}_3=\mathbb {C}^{12}\). Let H be a \(2\times 2\) unitary matrix given by

where

Notice that

Let \(U_L\), \(U_R\), \(V_L\), and \(V_R\) be \(4\times 4\) matrices given by

Notice that

$$\begin{aligned} U_L^*U_L= & {} \left[ \begin{matrix}P_1&{}0\\ 0&{}0\end{matrix}\right] , \quad U_R^*U_R=\left[ \begin{matrix}P_2&{}0\\ 0&{}0\end{matrix}\right] ,\\ V_L^*V_L= & {} \left[ \begin{matrix}0&{}0\\ 0&{}P_1\end{matrix}\right] , \quad V_R^*V_R=\left[ \begin{matrix}0&{}0\\ 0&{}P_2\end{matrix}\right] . \end{aligned}$$

For \(i,j=1,2,3\) \((i\ne j)\), let \(U_{ij}\) and \(V_{ij}\) be \(12\times 12\) matrices whose block matrices are given as follows:

$$\begin{aligned} U_{21}= & {} \left[ \begin{matrix}0&{}0&{}0\\ U_L&{}0&{}0\\ 0&{}0&{}0\end{matrix}\right] , \quad U_{31}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ U_R&{}0&{}0\end{matrix}\right] , \quad U_{32}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ 0&{}U_L&{}0\end{matrix}\right] ,\\ U_{12}= & {} \left[ \begin{matrix}0&{}U_R&{}0\\ 0&{}0&{}0\\ 0&{}0&{}0\end{matrix}\right] , \quad U_{13}=\left[ \begin{matrix}0&{}0&{}U_L\\ 0&{}0&{}0\\ 0&{}0&{}0\end{matrix}\right] , \quad U_{23}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}U_R\\ 0&{}0&{}0\end{matrix}\right] , \end{aligned}$$

and

$$\begin{aligned} V_{21}= & {} \left[ \begin{matrix}0&{}0&{}0\\ V_L&{}0&{}0\\ 0&{}0&{}0\end{matrix}\right] , \quad V_{31}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ V_R&{}0&{}0\end{matrix}\right] , \quad V_{32}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ 0&{}V_L&{}0\end{matrix}\right] ,\\ V_{12}= & {} \left[ \begin{matrix}0&{}V_R&{}0\\ 0&{}0&{}0\\ 0&{}0&{}0\end{matrix}\right] , \quad V_{13}=\left[ \begin{matrix}0&{}0&{}V_L\\ 0&{}0&{}0\\ 0&{}0&{}0\end{matrix}\right] , \quad V_{23}=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}V_R\\ 0&{}0&{}0\end{matrix}\right] . \end{aligned}$$

Now, we define

$$\begin{aligned} B(e_{ij}):=U_{ij}\quad \text {and}\quad B(f_{ij}):=V_{ij},\quad i=1,2,3\,\,(i\ne j). \end{aligned}$$

Then, \(B(e_{ij}),\, B(f_{ij}),\, i,j=1,2,3\) \((i\ne j)\) satisfy condition (2.4).

Lemma A.1

The equation \(\mathcal {L}_*(\rho )=0\) for states, where \(\mathcal {L}_*\) was defined in (3.1), has a unique solution \(\rho _\infty =\frac{1}{12}I\oplus \frac{1}{12}I\oplus \frac{1}{12}I\in \mathcal {E}(\mathfrak {h})\).

Proof

It is easy to check that a state \(\rho \in \mathcal {E}(\mathfrak {h})\) solves the equation \(\mathcal {L}_*(\rho )=0\) if and only if it has the form \(\rho =\rho ^{(1)}\oplus \rho ^{(2)}\oplus \rho ^{(3)}\) and satisfies

$$\begin{aligned} \rho ^{(1)}= & {} U_R\rho ^{(2)} U_R^*+V_R\rho ^{(2)} V_R^*+U_L\rho ^{(3)} U_L^*+V_L\rho ^{(3)} V_L^*,\nonumber \\ \rho ^{(2)}= & {} U_R\rho ^{(3)} U_R^*+V_R\rho ^{(3)} V_R^*+U_L\rho ^{(1)} U_L^*+V_L\rho ^{(1)} V_L^*,\nonumber \\ \rho ^{(3)}= & {} U_R\rho ^{(1)} U_R^*+V_R\rho ^{(1)} V_R^*+U_L\rho ^{(2)} U_L^*+V_L\rho ^{(2)} V_L^*. \end{aligned}$$
(A.8)

From equations (A.8), we see that the matrices \(\rho ^{(i)}\), \(i=1,2,3\), are block matrices of the form

$$\begin{aligned} \rho ^{(i)}=\left[ \begin{matrix}\rho ^{(i)}_1&{}0\\ 0&{}\rho ^{(i)}_2 \end{matrix}\right] ,\quad i=1,2,3; \end{aligned}$$
(A.9)

here, \(\rho ^{(i)}_j\), \(j=1,2\), are \(2\times 2\) matrices, say

$$\begin{aligned} \rho ^{(i)}_j:=\left[ \begin{matrix}\rho ^{(i)}_j(1,1)&{}\rho ^{(i)}_j(1,2) \\ \rho ^{(i)}_j(2,1)&{}\rho ^{(i)}_j(2,2) \end{matrix}\right] . \end{aligned}$$

Using the form in (A.9), we can rewrite (A.8) in the following form:

$$\begin{aligned} \rho =S\rho S^*, \end{aligned}$$
(A.10)

where \(\rho \) and S are \(12\times 12\) block matrices defined by

$$\begin{aligned} \rho =\left[ \begin{matrix}\rho ^{(1)}_1&{}0&{}0&{}0&{}0&{}0\\ 0&{}\rho ^{(1)}_2&{}0&{}0&{}0&{}0\\ 0&{}0&{}\rho ^{(2)}_1&{}0&{}0&{}0\\ 0&{}0&{}0&{}\rho ^{(2)}_2&{}0&{}0\\ 0&{}0&{}0&{}0&{}\rho ^{(3)}_1&{}0\\ 0&{}0&{}0&{}0&{}0&{}\rho ^{(3)}_2\end{matrix}\right] ,\quad S=\left[ \begin{matrix}0&{}0&{}0&{}R&{}0&{}L\\ 0&{}0&{}R&{}0&{}L&{}0\\ 0&{}L&{}0&{}0&{}0&{}R\\ L&{}0&{}0&{}0&{}R&{}0\\ 0&{}R&{}0&{}L&{}0&{}0\\ R&{}0&{}L&{}0&{}0&{}0\end{matrix}\right] . \end{aligned}$$
(A.11)

It is easy to check that S is a unitary matrix. Therefore, multiplying equation (A.10) by \(S^*\) from the left and by S from the right, we also have

$$\begin{aligned} \rho =S^*\rho S. \end{aligned}$$
(A.12)

From (A.12), we see that \(\rho ^{(i)}_j\) are diagonal matrices:

$$\begin{aligned} \rho ^{(i)}_j=\left[ \begin{matrix}\rho ^{(i)}_j(1,1)&{}0\\ 0&{}\rho ^{(i)}_j(2,2)\end{matrix}\right] ,\quad i=1,2,3,~j=1,2. \end{aligned}$$
(A.13)

Now equating the first block in (A.10) and (A.12), we get

$$\begin{aligned} \rho ^{(1)}_1=R\rho ^{(2)}_2R^*+L\rho ^{(3)}_2L^* =L^*\rho ^{(2)}_2L+R^*\rho ^{(3)}_2R, \end{aligned}$$

or

$$\begin{aligned} \rho ^{(1)}_1&=\frac{1}{2}\left[ \begin{matrix} \rho ^{(2)}_2(2,2)+\rho ^{(3)}_2(1,1)&{}\rho ^{(3)}_2(1,1)-\rho ^{(2)}_2(2,2)\\ \rho ^{(3)}_2(1,1)-\rho ^{(2)}_2(2,2)&{}\rho ^{(2)}_2(2,2)+\rho ^{(3)}_2(1,1) \end{matrix}\right] \end{aligned}$$
(A.14)
$$\begin{aligned}&=\frac{1}{2}\left[ \begin{matrix} \rho ^{(2)}_2(1,1)+\rho ^{(2)}_2(2,2)&{}0\\ 0&{}\rho ^{(3)}_2(1,1)+\rho ^{(3)}_2(2,2) \end{matrix}\right] . \end{aligned}$$
(A.15)

Looking at the off-diagonal components, we get

$$\begin{aligned} \rho ^{(2)}_2(2,2)=\rho ^{(3)}_2(1,1). \end{aligned}$$

Applying this relation to (A.15) and (A.14), we easily get

$$\begin{aligned} \rho ^{(1)}_1(1,1)=\rho ^{(2)}_2(2,2)= \rho ^{(3)}_2(1,1)=\rho ^{(1)}_1(2,2)= \rho ^{(2)}_2(1,1)=\rho ^{(3)}_2(2,2). \end{aligned}$$

That is, \(\rho ^{(1)}_1=\rho ^{(2)}_2=\rho ^{(3)}_2\). Using the cyclic symmetry, we obtain that all six matrices \(\rho ^{(i)}_j\), \(i=1,2,3\), \(j=1,2\), coincide. Taking into account that \(\rho \) is a state, we conclude \(\rho =\frac{1}{12}I\oplus \frac{1}{12}I\oplus \frac{1}{12}I\in \mathcal {E}(\mathfrak {h})\), and the proof is complete. \(\square \)

Let us compute the mean m and covariance matrix \(\Sigma \). From equation (3.2), it is easy to see that \(m=0\). By direct computation from (3.3), we see that, up to addition of a constant multiple of the identity,

$$\begin{aligned} L_1=\left[ \begin{matrix}1&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ 0&{}0&{}0&{}0\\ 0&{}0&{}0&{}0\end{matrix}\right] \oplus \left[ \begin{matrix}0&{}0&{}0&{}0\\ 0&{}0&{}0&{}0\\ 0&{}0&{}1&{}0\\ 0&{}0&{}0&{}1\end{matrix}\right] \oplus \left[ \begin{matrix}0&{}0&{}0&{}0\\ 0&{}2&{}0&{}0\\ 0&{}0&{}2&{}0\\ 0&{}0&{}0&{}0\end{matrix}\right] , \end{aligned}$$

and

$$\begin{aligned} L_2=\left[ \begin{matrix}3&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ 0&{}0&{}0&{}0\\ 0&{}0&{}0&{}2\end{matrix}\right] \oplus \left[ \begin{matrix}1&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ 0&{}0&{}3&{}0\\ 0&{}0&{}3&{}3\end{matrix}\right] \oplus \left[ \begin{matrix}1&{}0&{}0&{}0\\ 0&{}3&{}0&{}0\\ 0&{}0&{}2&{}0\\ 0&{}0&{}0&{}0\end{matrix}\right] . \end{aligned}$$

Notice that

$$\begin{aligned} \Theta =\frac{1}{\sqrt{2}}\left[ \begin{matrix}1&{}-1\\ 1&{}1\end{matrix}\right] . \end{aligned}$$

Therefore, we get by (A.5)

$$\begin{aligned} L_{\mathbf{e}_1}=\frac{1}{\sqrt{2}} \left( \left[ \begin{matrix}-2&{}0&{}0&{}0\\ 0&{}0&{}0&{}0\\ 0&{}0&{}0&{}0\\ 0&{}0&{}0&{}-2\end{matrix}\right] \oplus \left[ \begin{matrix}-1&{}0&{}0&{}0\\ 0&{}-1&{}0&{}0\\ 0&{}0&{}-2&{}0\\ 0&{}0&{}0&{}-2\end{matrix}\right] \oplus \left[ \begin{matrix}-1&{}0&{}0&{}0\\ 0&{}-1&{}0&{}0\\ 0&{}0&{}0&{}0\\ 0&{}0&{}0&{}0\end{matrix}\right] \right) , \end{aligned}$$

and

$$\begin{aligned} L_{\mathbf{e}_2}=\frac{1}{\sqrt{2}} \left( \left[ \begin{matrix}4&{}0&{}0&{}0\\ 0&{}2&{}0&{}0\\ 0&{}0&{}0&{}0\\ 0&{}0&{}0&{}2\end{matrix}\right] \oplus \left[ \begin{matrix}1&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ 0&{}0&{}4&{}0\\ 0&{}0&{}0&{}4\end{matrix}\right] \oplus \left[ \begin{matrix}1&{}0&{}0&{}0\\ 0&{}5&{}0&{}0\\ 0&{}0&{}4&{}0\\ 0&{}0&{}0&{}0\end{matrix}\right] \right) . \end{aligned}$$

Now, we are ready to compute the covariance matrix \(\Sigma \) given in (3.4). Since the mean m is zero and \(\rho _\infty =\frac{1}{12}I\oplus \frac{1}{12}I\oplus \frac{1}{12}I\), we are left with

$$\begin{aligned} C_{ij}= & {} \frac{1}{12}\sum _{e\in A(G_0)}\text {Tr}(B(e)B(e)^*)(\widehat{\theta }(e))_i(\widehat{\theta }(e))_j\nonumber \\&+\frac{1}{6}\sum _{e\in A(G_0)}\mathrm {Tr}(B(e)B(e)^*L_{\mathbf{e}_i})(\widehat{\theta }(e))_j\nonumber \\=: & {} C^{(1)}_{ij}+C_{ij}^{(2)}. \end{aligned}$$
(A.16)

For the first term \(C^{(1)}_{ij}\), the trace part is all 1 and thus we get

$$\begin{aligned} C^{(1)}=\frac{1}{3}I. \end{aligned}$$

For the second term \(C^{(2)}_{ij}\), computing the traces \(\mathrm {Tr}(B(e)B(e)^*L_{\mathbf{e}_i})\) term by term, we get

$$\begin{aligned} C^{(2)}=\frac{1}{6}\left[ \begin{matrix}-1&{}-1\\ -1&{}3\end{matrix}\right] . \end{aligned}$$

Thus, the covariance matrix is

$$\begin{aligned} \Sigma =C^{(1)}+C^{(2)}=\frac{1}{6}\left[ \begin{matrix}1&{}-1\\ -1&{}5\end{matrix}\right] . \end{aligned}$$
(A.17)

Notice that the covariance matrix (A.17) has eigenvalues \(\frac{1}{6}(3\pm \sqrt{5})\) with corresponding eigenvectors \([2\mp \sqrt{5},1]^T\).
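The quoted eigenvalues and eigenvectors determine a unique symmetric \(2\times 2\) matrix; assuming \(\Sigma =\frac{1}{6}\left[ \begin{matrix}1&{}-1\\ -1&{}5\end{matrix}\right] \) is that matrix, the following Python (numpy) snippet verifies the eigenpairs numerically:

```python
import numpy as np

# Candidate covariance matrix consistent with the quoted eigen-data
# (an assumption for this check).
Sigma = np.array([[1.0, -1.0], [-1.0, 5.0]]) / 6.0

# Eigenvalues (3 +- sqrt(5))/6 pair with eigenvectors [2 -+ sqrt(5), 1]^T.
evals = [(3 + np.sqrt(5)) / 6.0, (3 - np.sqrt(5)) / 6.0]
evecs = [np.array([2 - np.sqrt(5), 1.0]), np.array([2 + np.sqrt(5), 1.0])]

checks = [np.allclose(Sigma @ v, lam * v) for lam, v in zip(evals, evecs)]
print(all(checks))
```

Note the sign pairing: the larger eigenvalue goes with the eigenvector \([2-\sqrt{5},1]^T\), whose negative first component reflects the negative off-diagonal correlation.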

B Analytic proof of mixture of Gaussians for the hexagonal lattice

Let us recall the Fourier analysis on the crystal lattices and consider a dual process which was developed in [8, 9]. For a function \(f:\mathbb {L}\rightarrow \mathbb {C}\), its Fourier transform \(\widehat{f}:\Theta ({\mathbb {T}}^2)\rightarrow \mathbb {C}\) is defined by

$$\begin{aligned} \widehat{f}(\mathbf{k}):=\sum _{x\in \mathbb {L}}e^{-i\langle \mathbf{k},x\rangle }f(x), \end{aligned}$$
(B.1)

and the inverse relation is given by

$$\begin{aligned} f(x)=\frac{1}{|\det \Theta |}\frac{1}{(2\pi )^2}\int _{\Theta ({\mathbb {T}}^2)}e^{i\langle \mathbf{k},{x}\rangle }\widehat{f}(\mathbf{k})d\mathbf{k},\quad x\in \mathbb {L}. \end{aligned}$$

(See [8, Section 5.1] for the details.) If \(\rho ^{(0)}\) is the initial condition, then the state at nth step is given in the Fourier transform space by [7]

$$\begin{aligned} \widehat{\rho ^{(n)}}(\mathbf{k})=\left( \sum _{e\in A(G_0)}e^{-i\langle \mathbf{k},\widehat{\theta }(e)\rangle }L_{B(e)}R_{B(e)^*}\right) ^n\widehat{\rho ^{(0)}}(\mathbf{k}),\quad \mathbf{k}\in \Theta (\mathbb {T}^2). \end{aligned}$$
(B.2)

Here, \(L_A\) and \(R_A\) are the left and right multiplication operators by A, respectively:

$$\begin{aligned} L_A(B):=AB, \quad R_A(B):=BA. \end{aligned}$$
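For readers who want to experiment with the one-step map in (B.2) numerically, the composition \(L_AR_{A^*}\) can be represented as an ordinary matrix via the standard column-stacking identity \(\mathrm {vec}(AXB)=(B^T\otimes A)\,\mathrm {vec}(X)\). A minimal Python (numpy) sketch (with random matrices, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# Under column-stacking, vec(A X B) = (B^T kron A) vec(X); taking B = A^*
# (the conjugate transpose) gives B^T = conj(A), so the map X -> A X A^*
# is represented by conj(A) kron A.
LR = np.kron(A.conj(), A)
lhs = (LR @ X.flatten(order="F")).reshape(3, 3, order="F")
rhs = A @ X @ A.conj().T
print(np.allclose(lhs, rhs))
```

Summing such Kronecker representations over the edges, with the phases \(e^{-i\langle \mathbf{k},\widehat{\theta }(e)\rangle }\), gives the matrix that is iterated in (B.2).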

The dual process is the process \((Y_n(\mathbf{k}))_{\mathbf{k}\in \Theta (\mathbb {T}^2)}\in \widehat{\mathcal {A}}\) given by

$$\begin{aligned} Y_n(\mathbf{k}):=\left( \sum _{e\in A(G_0)}e^{-i\langle \mathbf{k},\widehat{\theta }(e)\rangle }L_{B(e)^*}R_{B(e)}\right) ^n(I_{\mathfrak {h}}). \end{aligned}$$
(B.3)

Then, it holds that

$$\begin{aligned} p_x^{(n)}=\frac{1}{|\det \Theta |}\frac{1}{(2\pi )^2}\int _{\Theta (\mathbb {T}^2)} e^{i\langle \mathbf{k},x\rangle }\mathrm {Tr}\left( \widehat{\rho ^{(0)}}(\mathbf{k})Y_n(\mathbf{k})\right) d\mathbf{k}, \quad x\in \mathbb {L}. \end{aligned}$$

In other words, the Fourier transform of the probability density \((p_x^{(n)})_{x\in \mathbb {L}}\) at time n is given by

$$\begin{aligned} \widehat{p_\cdot ^{(n)}}(\mathbf{k})=\mathrm {Tr}\left( \widehat{\rho ^{(0)}}(\mathbf{k})Y_n(\mathbf{k})\right) ,\quad \mathbf{k}\in \Theta (\mathbb {T}^2). \end{aligned}$$
(B.4)

Let us focus on the situation where a mixture of Gaussians appears. Thus, suppose that \(P_UP_V\) and \(P_VP_U\) are reducible with a common decomposition into communicating classes, say \(\{\{1,2\},\{3\}\}\), where the stochastic matrices \(P_UP_V\) and \(P_VP_U\) act on the state space \(\{1,2,3\}\). Put

$$\begin{aligned} D(\mathbf{k}):=\mathrm {diag}(e^{-i\langle \mathbf{k},\widehat{\theta }_1\rangle },e^{-i\langle \mathbf{k},\widehat{\theta }_2\rangle },1), \end{aligned}$$

where \(\mathrm {diag}(a,b,c)\) means the diagonal matrix with entries a, b, and c. We can show (cf. [8, Example 5.3]) that

$$\begin{aligned} Y_n(\mathbf{k})= & {} A_n(\mathbf{k})\oplus B_n(\mathbf{k});\nonumber \\ A_n(\mathbf{k})= & {} \mathrm {diag}(a_{n,1}(\mathbf{k}), a_{n,2}(\mathbf{k}),a_{n,3}(\mathbf{k})),\quad B_n(\mathbf{k})=\mathrm {diag}(b_{n,1}(\mathbf{k}), b_{n,2}(\mathbf{k}),b_{n,3}(\mathbf{k})),\nonumber \\ \end{aligned}$$
(B.5)

where the components satisfy the following recurrence relations:

$$\begin{aligned} \left[ \begin{matrix}a_{n,1}(\mathbf{k})\\ a_{n,2}(\mathbf{k})\\ a_{n,3}(\mathbf{k})\end{matrix}\right] =D(\mathbf{k})P_U\left[ \begin{matrix}b_{n-1,1}(\mathbf{k})\\ b_{n-1,2}(\mathbf{k})\\ b_{n-1,3}(\mathbf{k})\end{matrix}\right] ,\quad \left[ \begin{matrix}b_{n,1}(\mathbf{k})\\ b_{n,2}(\mathbf{k})\\ b_{n,3}(\mathbf{k})\end{matrix}\right] =D(\mathbf{k})^*P_V\left[ \begin{matrix}a_{n-1,1}(\mathbf{k})\\ a_{n-1,2}(\mathbf{k})\\ a_{n-1,3}(\mathbf{k})\end{matrix}\right] . \end{aligned}$$
(B.6)

Therefore, we get

$$\begin{aligned} \left[ \begin{matrix}a_{n,1}(\mathbf{k})\\ a_{n,2}(\mathbf{k})\\ a_{n,3}(\mathbf{k})\end{matrix}\right] =\widetilde{A_n}(\mathbf{k})\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] ,\quad \left[ \begin{matrix}b_{n,1}(\mathbf{k})\\ b_{n,2}(\mathbf{k})\\ b_{n,3}(\mathbf{k})\end{matrix}\right] =\widetilde{B_n}(\mathbf{k})\left[ \begin{matrix}1\\ 1\\ 1\end{matrix}\right] . \end{aligned}$$
(B.7)

Here, the matrices \(\widetilde{A}_n(\mathbf{k})\) and \(\widetilde{B}_n(\mathbf{k})\) are given by

$$\begin{aligned} \widetilde{A}_n(\mathbf{k})= & {} {\left\{ \begin{array}{ll} (D(\mathbf{k})P_UD(\mathbf{k})^*P_V)^m,&{}n=2m,\\ (D(\mathbf{k})P_UD(\mathbf{k})^*P_V)^mD(\mathbf{k})P_U, &{}n=2m+1,\end{array}\right. } \end{aligned}$$
(B.8)
$$\begin{aligned} \widetilde{B}_n(\mathbf{k})= & {} {\left\{ \begin{array}{ll} (D(\mathbf{k})^*P_VD(\mathbf{k})P_U)^m,&{}n=2m,\\ (D(\mathbf{k})^*P_VD(\mathbf{k})P_U)^mD(\mathbf{k})^*P_V, &{}n=2m+1. \end{array}\right. } \end{aligned}$$
(B.9)
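The block structure produced by the recurrences (B.6)–(B.9) can be illustrated numerically. The following Python (numpy) sketch uses a hypothetical reducible doubly stochastic matrix \(P_U=P_V\) with communicating classes \(\{1,2\}\) and \(\{3\}\) (the specific entries are an assumption for illustration only) and checks that \(\widetilde{A}_n(\mathbf{k})\) stays block diagonal on \(\mathbb {C}^2\oplus \mathbb {C}\):

```python
import numpy as np

# Hypothetical reducible doubly stochastic matrix with communicating
# classes {1,2} and {3}; any P_U = P_V of this block shape behaves the same.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

theta1, theta2 = 0.7, -0.3   # theta_j = <k, theta_hat_j> for some fixed k
D = np.diag([np.exp(-1j * theta1), np.exp(-1j * theta2), 1.0])

# A-tilde_n for even n = 2m, as in (B.8): (D P_U D^* P_V)^m.
m = 5
An = np.linalg.matrix_power(D @ P @ D.conj() @ P, m)

# The blocks coupling {1,2} with {3} remain exactly zero.
block_diagonal = bool(np.allclose(An[:2, 2], 0) and np.allclose(An[2, :2], 0))
print(block_diagonal)
```

Since \(D(\mathbf{k})\) is diagonal, it respects any block decomposition; reducibility of P is therefore the only ingredient behind the block structure.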

By the assumption, we see that the operators \(\widetilde{A}_n(\mathbf{k})\) and \(\widetilde{B}_n(\mathbf{k})\) are block diagonal matrices acting on \(\mathbb {C}^2\oplus \mathbb {C}\). Since the restriction to each block is irreducible, the map \(\mathcal {L}_*\) restricted to each block has a unique invariant state (see the proof of [8, Proposition 4.1]). Therefore, for any \(\lambda \in [0,1]\), the following states (density matrices) are all invariant states satisfying \(\mathcal {L}_*(\rho ^{(\lambda )})=0\):

$$\begin{aligned} \rho ^{(\lambda )}=\lambda \eta +(1-\lambda )\xi ,~~\eta =\frac{1}{2}\eta _0\oplus \frac{1}{2}\eta _0,~~\xi =\frac{1}{2}\xi _0\oplus \frac{1}{2}\xi _0 \end{aligned}$$
(B.10)

with

$$\begin{aligned} \eta _0=\frac{1}{2}\left[ \begin{matrix}1&{}0&{}0\\ 0&{}1&{}0\\ 0&{}0&{}0\end{matrix}\right] ,\quad \xi _0=\left[ \begin{matrix}0&{}0&{}0\\ 0&{}0&{}0\\ 0&{}0&{}1\end{matrix}\right] . \end{aligned}$$

For a concrete model, let us consider \(U=V=U_H\) in (4.12). By (4.14), \(P_UP_V\) and \(P_VP_U\) are reducible with common communicating classes. There are infinitely many solutions to the equation \(\mathcal {L}_*(\rho )=0\); in fact, for any \(\lambda \in [0,1]\), the states \(\rho ^{(\lambda )}\) in (B.10) are all invariant states.

A Gaussian Let us take the initial state

$$\begin{aligned} \rho ^{(0)}=\rho ^{(0)}_1 :=\left( \frac{1}{2}\eta _0\oplus \frac{1}{2}\eta _0\right) \otimes |0\rangle \langle 0|. \end{aligned}$$
(B.11)

Hence, we have \(\widehat{\rho ^{(0)}}(\mathbf{k})=\frac{1}{2}\eta _0\oplus \frac{1}{2}\eta _0\). Therefore, putting \(u:=[1,1,1]^T\) and \(u_0:=\frac{1}{\sqrt{2}}[1,1,0]^T\), and using \(\widetilde{B}_n(\mathbf{k})=\overline{\widetilde{A}_n(\mathbf{k})}\), we see that

$$\begin{aligned} \widehat{p_\cdot ^{(n)}}(\mathbf{k})=\mathrm {Tr}\left( \widehat{\rho ^{(0)}}(\mathbf{k})Y_n(\mathbf{k})\right) =\mathrm {Re}\langle u_0,\widetilde{A}_n(\mathbf{k})u_0\rangle . \end{aligned}$$

Putting \(\theta _j=-\langle \mathbf{k},\widehat{\theta }_j\rangle \), \(j=1,2\), for simplicity, we have \(D=\mathrm {diag}(e^{i\theta _1},e^{i\theta _2},1)\). Writing \(P:=P_U=P_V\) and defining \(P_{\pm }:=D^{\pm 1/2}PD^{\mp 1/2}\), we can write

$$\begin{aligned} {\widetilde{A}}_n={\left\{ \begin{array}{ll} D^{1/2}(P_+P_-)^mD^{1/2},&{}n=2m+1,\\ D^{1/2}(P_+P_-)^{m-1}P_+D^{-1/2},&{}n=2m. \end{array}\right. } \end{aligned}$$

(We have used \(Pu=u\).) Consider first the case \(n=2m+1\). Putting \(u_{\pm }:=D^{\pm 1/2}u_0\), we have

$$\begin{aligned} \widehat{p_\cdot ^{(n)}}(\mathbf{k})=\mathrm {Re}\langle u_-,(P_+P_-)^mu_+\rangle ,\quad (n=2m+1) \end{aligned}$$

We notice that D and P, and hence also \(P_\pm \), leave the range of \(P_1^\perp \) invariant, i.e., the two-dimensional subspace spanned by the first two coordinates of \(\mathbb {C}^3\). Therefore, without loss of generality, we may work on this subspace and let

$$\begin{aligned} P=|u_0\rangle \langle u_0|. \end{aligned}$$

We notice that

$$\begin{aligned} P_{\pm }=|u_{\pm }\rangle \langle u_{\pm }|. \end{aligned}$$

By directly computing, we get

$$\begin{aligned} (P_+P_-)|u_+\rangle= & {} \mu ^2|u_+\rangle \\ (P_+P_-)|u_-\rangle= & {} \langle u_+,u_-\rangle |u_+\rangle . \end{aligned}$$

Here,

$$\begin{aligned} \mu :=|\langle u_+,u_-\rangle |=\frac{1}{2}\left| e^{i\theta _1}+e^{i\theta _2}\right| . \end{aligned}$$

Therefore,

$$\begin{aligned} \widehat{p_\cdot ^{(n)}}(\mathbf{k})=\mathrm {Re}\langle u_-,(P_+P_-)^mu_+\rangle =\mu ^{2m}\mathrm {Re}\langle u_-,u_+\rangle =\frac{1}{2}(\cos \theta _1+\cos \theta _2)\mu ^{2m}. \end{aligned}$$
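This rank-one computation is easy to verify numerically. The following Python (numpy) sketch works directly on the two-dimensional invariant block, with \(\theta _1,\theta _2\) chosen arbitrarily, and compares \(\mathrm {Re}\langle u_-,(P_+P_-)^mu_+\rangle \) with the closed form \(\frac{1}{2}(\cos \theta _1+\cos \theta _2)\mu ^{2m}\):

```python
import numpy as np

theta1, theta2 = 0.9, -0.4
m = 6

# Two-dimensional block: u0 = (1,1)/sqrt(2), D^{1/2} = diag(e^{i th/2}),
# and u_pm = D^{+-1/2} u0.
u0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
Dh = np.diag([np.exp(0.5j * theta1), np.exp(0.5j * theta2)])
u_plus, u_minus = Dh @ u0, Dh.conj() @ u0

proj = lambda v: np.outer(v, v.conj())                  # |v><v|
PP = np.linalg.matrix_power(proj(u_plus) @ proj(u_minus), m)
lhs = np.real(u_minus.conj() @ PP @ u_plus)             # Re<u_-, (P_+P_-)^m u_+>

mu = 0.5 * abs(np.exp(1j * theta1) + np.exp(1j * theta2))
rhs = 0.5 * (np.cos(theta1) + np.cos(theta2)) * mu ** (2 * m)
print(np.allclose(lhs, rhs))
```

The key point the code reflects is that \(P_+P_-=\langle u_+,u_-\rangle |u_+\rangle \langle u_-|\) is rank one, so its powers only accumulate factors of \(|\langle u_+,u_-\rangle |^2=\mu ^2\).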

Now, let us consider the asymptotics of \(\widehat{p_\cdot ^{(n)}}(\mathbf{k})\) for large n. Let \(X_n\in \mathbb {L}\) be the position of the walker at time n. We want to understand the behavior of \(X_n/\sqrt{n}\) for large times by computing \(\mathbb {E}\left[ e^{i\langle \mathbf{t},X_n/\sqrt{n}\rangle }\right] \), which is nothing but \(\widehat{p_\cdot ^{(n)}}(-\mathbf{t}/\sqrt{n})\) by (B.1). The previously defined \(\theta _j=-\langle \mathbf{k},\widehat{\theta }_j\rangle \) now becomes \(\theta _j=\frac{1}{\sqrt{n}}\langle \mathbf{t}, \widehat{\theta }_j\rangle \), \(j=1,2\), and we get

$$\begin{aligned}&\frac{1}{2}(\cos \theta _1+\cos \theta _2)=1+O\left( \frac{1}{n}\right) ,\\&\quad \mu ^2=\frac{1}{2}\left( 1+\cos (\theta _1-\theta _2)\right) =1-\frac{\varepsilon ^2(\mathbf{t})}{4n}+O\left( \frac{1}{n^2}\right) , \end{aligned}$$

where \(\varepsilon ^2(\mathbf{t})=\langle \mathbf{t},\widehat{\theta }_1-\widehat{\theta }_2\rangle ^2\). Therefore, as \(n\rightarrow \infty \),

$$\begin{aligned} \mathbb {E}\left[ e^{i\langle \mathbf{t},X_n/\sqrt{n}\rangle }\right] =\widehat{p_\cdot ^{(n)}}(-\mathbf{t}/\sqrt{n})\rightarrow e^{-\frac{1}{8}\varepsilon ^2(\mathbf{t})}=e^{-\frac{1}{4}t_1^2}. \end{aligned}$$

We conclude that \(X_n/\sqrt{n}\) converges weakly to a Gaussian with covariance

$$\begin{aligned} \Sigma =\frac{1}{2}\left[ \begin{matrix}1&{}0\\ 0&{}0\end{matrix}\right] . \end{aligned}$$
(B.12)

The limit along even n can be computed similarly and gives the same result. The result is also easy to guess from the dynamics of the walk: the movements in the y-direction are just an oscillation between the coordinates \(\{-1/\sqrt{2},0,1/\sqrt{2}\}\). Therefore, the variance in the y-direction of the walk scaled by \(1/\sqrt{n}\) converges to 0, as (B.12) shows.

Another Gaussian Let us take the initial state

$$\begin{aligned} \rho ^{(0)}=\rho ^{(0)}_2:=\left( \frac{1}{2}\xi _0\oplus \frac{1}{2}\xi _0\right) \otimes |0\rangle \langle 0|. \end{aligned}$$
(B.13)

Put \(v_0:=[0,0,1]^T\) and let \(P_2\) be the projection onto the third coordinate, so that \(v_0=P_2u\). Now, we have \(\widehat{\rho ^{(0)}}(\mathbf{k})=\frac{1}{2}\xi _0\oplus \frac{1}{2}\xi _0\). Therefore,

$$\begin{aligned} \widehat{p_\cdot ^{(n)}}(\mathbf{k})=\mathrm {Tr}\left( \widehat{\rho ^{(0)}}(\mathbf{k})Y_n(\mathbf{k})\right) =\mathrm {Re}\langle v_0,\widetilde{A}_n(\mathbf{k})v_0\rangle . \end{aligned}$$

Here, we have used again the fact that \(\widetilde{B}_n(\mathbf{k})=\overline{\widetilde{A}_n(\mathbf{k})}\). Clearly, we have

$$\begin{aligned} \widehat{p_\cdot ^{(n)}}(\mathbf{k})=1. \end{aligned}$$

This means that the measure is a Dirac measure at the origin. From the dynamics of the walk, it is obvious why the Dirac measure appears: starting from this initial condition, the walker never leaves the origin.

A mixture of Gaussians Let us consider an initial condition given by a convex combination of the preceding examples:

$$\begin{aligned} \rho ^{(0)}_\lambda :=\lambda \rho ^{(0)}_1+(1-\lambda )\rho ^{(0)}_2, \end{aligned}$$

where \(\rho ^{(0)}_1\) and \(\rho ^{(0)}_2\) are given in (B.11) and (B.13), respectively. As we have seen in the preceding examples, the states \(\rho ^{(0)}_1\) and \(\rho ^{(0)}_2\) do not mix under the dynamics. Therefore, as \(n\rightarrow \infty \), \(X_n/\sqrt{n}\) converges weakly to the mixture of Gaussians

$$\begin{aligned} \lambda \mu ^{(1)}+(1-\lambda )\delta _0, \end{aligned}$$

where \(\mu ^{(1)}\) is a Gaussian with mean 0 and covariance \(\Sigma \) in (B.12).


Cite this article

Ko, C.K., Yoo, H.J. Mixture of Gaussians in the open quantum random walks. Quantum Inf Process 19, 244 (2020). https://doi.org/10.1007/s11128-020-02751-0

