1 LINEAR SYSTEMS IN A HILBERT SPACE

Let V be a real separable Hilbert space with inner product \((\,,)\); the case \(\dim V < \infty \) is not excluded. Let A be a linear operator with a dense domain \(D(A) \subset V\); A is not assumed to be bounded. The operator A is associated with the linear differential equation

$$\dot {x} = Ax,\quad x \in D(A).$$
(1)

Assume that this linear system has the quadratic invariant

$$f(x) = (x,x).$$
(2)

This means that the derivative of this function vanishes by virtue of system (1), i.e., \((Ax,x) = 0\) for all \(x \in D(A)\). In turn, this means that the operator A is skew-symmetric:

$$(Ax,y) + (x,Ay) = 0$$

for all \(x,y \in D(A)\). In particular, all nonzero eigenvalues of A are purely imaginary.
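
For illustration, this argument can be checked numerically in a finite-dimensional sketch (the matrix below is a random skew-symmetric stand-in for A, not an operator from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-in for A: a random real skew-symmetric matrix.
M = rng.standard_normal((6, 6))
A = M - M.T                       # A^T = -A, i.e., (Ax, y) + (x, Ay) = 0

# (Ax, x) = 0 for every x, so f(x) = (x, x) is a first integral of x' = Ax.
x = rng.standard_normal(6)
assert abs((A @ x) @ x) < 1e-8

# Every eigenvalue of A is purely imaginary (or zero).
eigvals = np.linalg.eigvals(A)
assert np.max(np.abs(eigvals.real)) < 1e-8
```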

Below, we describe the properties of eigenvectors of A that are important in what follows (for details and proofs, see [1]). Let

$$ \pm i{{\omega }_{1}},\; \pm i{{\omega }_{2}},\; \ldots $$
(3)

be the nonzero eigenvalues of A; the numbers \({{\omega }_{1}},{{\omega }_{2}}\), ... are assumed to be positive. Let

$$A{{\zeta }_{k}} = \pm i{{\omega }_{k}}{{\zeta }_{k}},\quad {{\zeta }_{k}} = {{\xi }_{k}} \pm i{{\eta }_{k}},\quad k \geqslant 1.$$

Here, \({{\xi }_{k}},{{\eta }_{k}} \in V\) and \(\xi _{k}^{2} + \eta _{k}^{2} \ne 0\). Then

$$A{{\xi }_{k}} = - {{\omega }_{k}}{{\eta }_{k}},\quad A{{\eta }_{k}} = {{\omega }_{k}}{{\xi }_{k}},$$
(4)
$$({{\xi }_{k}},{{\xi }_{k}}) = ({{\eta }_{k}},{{\eta }_{k}}),\quad ({{\xi }_{k}},{{\eta }_{k}}) = 0.$$

Additionally, let \({{\pi }_{n}}\) be an invariant plane of A containing the linearly independent vectors \({{\xi }_{n}}\) and \({{\eta }_{n}}\). Define

$$x = {{p}_{n}}{{\xi }_{n}} + {{q}_{n}}{{\eta }_{n}},\quad {{p}_{n}},{{q}_{n}} \in \mathbb{R};$$

these are points of \({{\pi }_{n}}\). The restriction of the original linear system (1) to \({{\pi }_{n}}\) has the form

$$\mathop {\dot {p}}\nolimits_n = {{\omega }_{n}}{{q}_{n}},\quad \mathop {\dot {q}}\nolimits_n = - {{\omega }_{n}}{{p}_{n}};\quad n \geqslant 1.$$
(5)

This linear system describes the dynamics of a one-dimensional harmonic oscillator with frequency \({{\omega }_{n}}\). Obviously, it has the first integral \({{f}_{n}} = p_{n}^{2} + q_{n}^{2}\).
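
A numerical sketch of the restricted system (5); the frequency, step size, and integrator below are illustrative choices, not from the text:

```python
import numpy as np

omega_n = 2.0                        # illustrative frequency ω_n

# System (5) on the plane π_n: p' = ω_n q, q' = -ω_n p.
A_n = np.array([[0.0, omega_n], [-omega_n, 0.0]])

z = np.array([1.0, 0.0])             # (p_n, q_n) at t = 0
dt, steps = 1e-4, 20000              # integrate up to t = 2
for _ in range(steps):               # explicit midpoint (second-order) method
    z = z + dt * A_n @ (z + 0.5 * dt * A_n @ z)

# The first integral f_n = p_n^2 + q_n^2 is conserved up to discretization
# error, and the exact flow is the rotation p = cos ω_n t, q = -sin ω_n t.
p_exact, q_exact = np.cos(omega_n * 2.0), -np.sin(omega_n * 2.0)
assert abs(z @ z - 1.0) < 1e-8
assert abs(z[0] - p_exact) < 1e-6 and abs(z[1] - q_exact) < 1e-6
```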

If there are no multiple eigenvalues in (3), then, for \(k \ne l\), the two-dimensional planes \({{\pi }_{k}}\) and \({{\pi }_{l}}\) are orthogonal to each other. In particular, the nonzero vectors

$${{\xi }_{1}},{{\eta }_{1}},{{\xi }_{2}},{{\eta }_{2}},\; \ldots $$
(6)

form an orthogonal system.

Assume in what follows that

(i) the discrete spectrum of A is simple,

(ii) the system of vectors (6), after normalization, is a complete orthonormal system.

It was shown in [1] that, under these assumptions, linear system (1) is a completely integrable Hamiltonian system. The Hamiltonian property also holds under weaker assumptions [2]. In the finite-dimensional case, it is sufficient that the operator A be nonsingular and that there exist a first integral in the form of a nondegenerate quadratic form [3]. In particular, the dimension of the phase space V is even. Our basic observation is that, under assumptions (i) and (ii), after introducing a suitable complex structure in V, Eq. (1) can be represented in the form of the Schrödinger equation.

Several remarks have to be made.

If the operator \(A{\text{:}}\;V \to V\) is bounded, then an existence (and uniqueness) theorem on the entire time axis \(\mathbb{R} = {\text{\{ }}t{\text{\} }}\) holds true for the linear differential equation (1) (see, e.g., [4]). In this case, the operator A is skew-self-adjoint: \(A\text{*} = - A\). In particular, the Lagrange adjoint linear system of differential equations

$$\dot {y} = - A{\text{*}}y,\quad y \in V$$

coincides with the original system (1). Therefore, linear system (1) can be called self-adjoint. For equations of mathematical physics, the operator A is, as a rule, unbounded, since it involves derivatives with respect to space variables.

Our consideration can be somewhat generalized by assuming that linear system (1) admits a first integral in the form of a continuous positive definite quadratic form

$$f = (Bx,x).$$

Then the self-adjoint operator B is invertible. In this case, we can introduce a new inner product in V, namely,

$$(x,y)' = (Bx,y).$$

It is easy to show that the vector space V with inner product \((\,,)'\) is also a real separable Hilbert space. Accordingly, without loss of generality, B can be assumed to be the identity operator.
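
A finite-dimensional sketch of this reduction (the positive definite matrix B below is an arbitrary illustrative example): the Cholesky factorization \(B = L{{L}^{T}}\) supplies coordinates in which \((\,,)'\) becomes the standard inner product.

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary positive definite self-adjoint B (illustrative example).
M = rng.standard_normal((5, 5))
B = M @ M.T + 5.0 * np.eye(5)

x, y = rng.standard_normal(5), rng.standard_normal(5)
ip_new = (B @ x) @ y                 # the new inner product (x, y)' = (Bx, y)
assert (B @ x) @ x > 0               # positive definiteness at x

# In the coordinates z = L^T x, where B = L L^T is the Cholesky factorization,
# (x, y)' becomes the standard inner product -- i.e., B is reduced to I.
L = np.linalg.cholesky(B)
assert abs(ip_new - (L.T @ x) @ (L.T @ y)) < 1e-9
```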

2 REDUCTION TO THE SCHRÖDINGER EQUATION

Below, assumptions (i) and (ii) are supposed to hold and the notation from Section 1 is used. In the real vector space V, we introduce a complex structure. For this purpose, V is represented in the form of a direct sum \({{V}_{1}} \oplus {{V}_{2}}\). The subspace \({{V}_{1}}\) (\({{V}_{2}}\)) is the closure of the set of linear combinations of vectors \({{\xi }_{1}},\;{{\xi }_{2}},\; \ldots \) (\({{\eta }_{1}},{{\eta }_{2}}\), ..., respectively). These subspaces are orthogonal to each other, and the operator A interchanges \({{V}_{1}}\) and \({{V}_{2}}\).

A complex structure in V is defined by a linear operator J acting on the vectors \({{\xi }_{k}}\) and \({{\eta }_{k}}\) (\(k \geqslant 1\)) by analogy with the operator A:

$$J{{\xi }_{k}} = {{\eta }_{k}},\quad J{{\eta }_{k}} = - {{\xi }_{k}}.$$

Therefore, \(J({{V}_{1}}) = {{V}_{2}}\), \(J({{V}_{2}}) = {{V}_{1}},\) and \({{J}^{2}} = - I\). Moreover, the operator J is skew-self-adjoint: \(J\text{*} = - J\). The complex Hilbert space \({{V}^{\mathbb{C}}}\) consists of the sums

$$\psi = {{x}^{{(1)}}} + i{{x}^{{(2)}}},$$

where \({{x}^{{(1)}}}\) and \({{x}^{{(2)}}}\) are vectors from V and \({{i}^{2}} = - 1\). Multiplication by i corresponds to the action of the operator J. The space \({{V}^{\mathbb{C}}}\) is equipped with the natural Hermitian product

$$\left\langle {{{\psi }_{1}},{{\psi }_{2}}} \right\rangle = (x_{1}^{{(1)}} + ix_{1}^{{(2)}},x_{2}^{{(1)}} - ix_{2}^{{(2)}}).$$

The vectors

$${{\zeta }_{k}} = \frac{{{{\xi }_{k}} + i{{\eta }_{k}}}}{{\sqrt 2 }},\quad k \geqslant 1,$$

are linearly independent over \(\mathbb{C}\) and form an orthonormal basis in \(({{V}^{\mathbb{C}}},\,\left\langle {\,,\,} \right\rangle )\). Indeed,

$$\left\langle {{{\zeta }_{k}},{{\zeta }_{l}}} \right\rangle = \frac{{({{\xi }_{k}},{{\xi }_{l}}) + ({{\eta }_{k}},{{\eta }_{l}})}}{2} + i\,\frac{{({{\eta }_{k}},{{\xi }_{l}}) - ({{\xi }_{k}},{{\eta }_{l}})}}{2}$$

is equal to 0 if \(k \ne l\) and to 1 if k = l.

The vectors \({{\zeta }_{1}},\;{{\zeta }_{2}},\; \ldots \) form a complete system in \(({{V}^{\mathbb{C}}},\,\left\langle {\,,\,} \right\rangle )\). Let

$$\psi = \sum {{{c}_{k}}{{\zeta }_{k}}} \in {{V}^{\mathbb{C}}},\quad \sum {{{{\left| {{{c}_{k}}} \right|}}^{2}}} < \infty .$$

Then

$$A\psi = \sum {{{c}_{k}}} A{{\zeta }_{k}} = i\sum {{{c}_{k}}} {{\omega }_{k}}{{\zeta }_{k}}.$$
(7)

Let \({{P}_{1}},\;{{P}_{2}},\; \ldots \) be the orthogonal projectors in \({{V}^{\mathbb{C}}}\) onto the complex lines spanned by the unit vectors \({{\zeta }_{1}},\;{{\zeta }_{2}},\; \ldots \) Then equality (7) can be rewritten as

$$A\psi = i\sum {{{\omega }_{k}}} {{P}_{k}}(\psi ).$$

Furthermore, according to (5),

$$\begin{gathered} \dot {\psi } = \sum {\mathop {\dot {c}}\nolimits_k } {{\zeta }_{k}} = \sum {(\mathop {\dot {p}}\nolimits_k + i\mathop {\dot {q}}\nolimits_k )} {{\zeta }_{k}} \\ = - i\sum {{{\omega }_{k}}} {{c}_{k}}{{\zeta }_{k}} = - A\psi . \\ \end{gathered} $$
(8)

Therefore, Eq. (8) takes the form of the Schrödinger equation

$$i\dot {\psi } = \hat {H}\psi ,$$
(9)

where

$$\hat {H} = \sum {{{\omega }_{k}}} {{P}_{k}}$$
(10)

is the Hamiltonian operator. Equality (10) is the spectral decomposition of the Hermitian operator \(\hat {H}\).
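
The spectral decomposition (10) can be illustrated in a finite-dimensional sketch (the frequencies and the orthonormal basis below are arbitrary stand-ins for \({{\omega }_{k}}\) and \({{\zeta }_{k}}\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
omegas = np.array([1.0, 2.0, 3.5, 5.0])      # illustrative frequencies ω_k

# A random orthonormal basis ζ_1, ..., ζ_n of C^n stands in for the basis of V^C.
Z, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# Ĥ = Σ ω_k P_k, where P_k = ζ_k ζ_k* projects onto the line spanned by ζ_k.
H = sum(w * np.outer(z, z.conj()) for w, z in zip(omegas, Z.T))

assert np.allclose(H, H.conj().T)                                    # Hermitian
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(omegas))  # spectrum {ω_k}

# Each basis vector is an eigenstate: ψ(t) = exp(-i ω_k t) ζ_k solves i ψ' = Ĥ ψ.
psi = Z[:, 0]
assert np.allclose(H @ psi, omegas[0] * psi)
```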

Equation (9) has the quadratic invariant

$$\left\langle {\psi ,\psi } \right\rangle = \left( {\sum {{{c}_{k}}} {{\zeta }_{k}},\sum {{{{\bar {c}}}_{l}}{{{\overline \zeta }}_{l}}} } \right) = \sum {{{{\left| {{{c}_{k}}} \right|}}^{2}}} .$$

This is the original positive definite quadratic invariant represented in complex form.

Remark 1. The Planck constant in Eq. (9) is equal to 1. Of course, both sides of (9) can be multiplied by \(\hbar \). Then the Hamiltonian operator will be proportional to \(\hbar \). However, it is possible to do otherwise, namely, to replace time t by \(\frac{t}{\hbar }\).

These observations allow a somewhat different view of the quantization problem. The simplest linear Hamiltonian systems can be represented in a quantum mechanical form. As a trivial example, we consider a simple harmonic oscillator with frequency \(\nu \):

$$\dot {q} = \nu p,\quad \dot {p} = - \nu q;\quad p,q \in \mathbb{R}.$$
(11)

It has the positive definite quadratic integral \(f = {{p}^{2}} + {{q}^{2}}\) (the doubled energy of the oscillator). Setting \(\psi = q + ip\) (as before), we represent Eqs. (11) in the form of the one-dimensional Schrödinger equation \(i\hbar \dot {\psi } = \hat {H}\psi \), where \(\hat {H}\) is the operator of multiplication by the real number \(\nu \hbar \). We immediately see an analogy with the classical Planck–Einstein formula \(E = \nu \hbar \) for the energy of a photon of frequency ν. From a quantum mechanical point of view, the invariant relation f = 1 is interpreted as the conservation of the “probability” \(\psi \bar {\psi } = 1\). In “old” quantum mechanics (prior to the Schrödinger equation), this observation was expressed as follows: a transition between two neighboring quantum states corresponds to the classical basic oscillation (see the discussion in [5]).
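
This reduction of the oscillator is easy to verify numerically; in the following sketch the frequency and the initial state are illustrative, and units with \(\hbar = 1\) are used:

```python
import numpy as np

nu = 3.0                              # illustrative frequency ν (units with ħ = 1)
q0, p0 = 0.6, 0.8                     # initial state with f = p^2 + q^2 = 1
psi0 = q0 + 1j * p0                   # ψ = q + ip, as in the text

# i ψ' = ν ψ gives ψ(t) = exp(-i ν t) ψ(0): a rotation at frequency ν.
t = 0.7
psi = np.exp(-1j * nu * t) * psi0
q, p = psi.real, psi.imag

# This matches the direct solution of (11) and conserves the "probability" ψψ̄ = 1.
assert np.isclose(q, q0 * np.cos(nu * t) + p0 * np.sin(nu * t))
assert np.isclose(p, p0 * np.cos(nu * t) - q0 * np.sin(nu * t))
assert np.isclose(abs(psi), 1.0)
```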

Similar conclusions about “self-quantization” hold for linear evolution partial differential equations admitting a positive quadratic invariant. In addition to the Schrödinger equations, they include wave equations, the Liouville equation from statistical mechanics, and the Maxwell equations. All of them were examined in detail in [1] from the point of view of their complete integrability as Hamiltonian equations. Apparently, a reduction to the Schrödinger equation is also possible under more general conditions (see Section 4).

The self-quantization of the oscillation equations for an elastic medium resembles de Broglie’s classical idea of the wave nature of quantum particles.

3 OBSERVABLES AND CONSERVATION LAWS

After a linear system with a positive quadratic invariant has been reduced to the Schrödinger equation, its dynamics can be considered in terms of quantum mechanics. However, it is possible to proceed directly without introducing a complex structure or using a preliminary reduction to the Schrödinger equation. A key point is to identify observables with time evolution of interest. In turn, this task is closely related to the definition of a measurement procedure, which is substantially different in quantum and classical mechanics (mathematical and physical aspects of the theory of measurements are discussed in [6, 7]).

By the states of linear system (1) (more precisely, its pure states), we mean nonzero elements of the real Euclidean space V satisfying the normalization condition \((x,x) = 1\). The vectors x and \( - x\) define the same state. Observables are self-adjoint operators acting on V. In this section, all operators (including A) are assumed to be bounded (since we consider products of operators, this assumption frees us from keeping track of their domains).

From a classical point of view, in principle, all observables (and system states) can be exactly measured simultaneously. In quantum mechanics, this is not the case. A measurement of an observable F is reduced to determining the eigenvalues (spectrum) of the self-adjoint operator F; moreover, the measurement results are considered nondeterministic. This operator generates the quadratic form

$$f(x) = (Fx,x),$$

which is interpreted in quantum mechanics as the mean value of F at the state x.

Let J be the skew-self-adjoint complex-structure operator from Section 2:

$$J\text{*} = - J\quad {\text{and}}\quad {{J}^{2}} = - I.$$

It defines the commutator of self-adjoint operators:

$${{\left[ {F,G} \right]}_{J}} = FJG - GJF,$$
(12)

which, like F and G, is a self-adjoint operator. Commutator (12) is associated with the Poisson bracket defined on the space of continuous quadratic forms on V: if \(f = (Fx,x)\) and \(g = (Gx,x)\) are two quadratic forms, then their Poisson bracket is given by

$$\left\{ {f,g} \right\} = ({{[F,G]}_{J}}x,x).$$

Clearly, the bracket is linear in each argument, \(\left\{ {f,g} \right\} = - \left\{ {g,f} \right\}\), and the Jacobi identity holds true. With the help of this Poisson bracket, it can be shown that the original linear system with a quadratic invariant is a Hamiltonian system [1].
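
These properties of the bracket can be checked in a finite-dimensional sketch (random symmetric matrices stand in for the observables; J is the standard complex structure on \({{\mathbb{R}}^{2n}}\), an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
# J: the standard complex structure on R^(2n); J^2 = -I, J^T = -J.
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

def random_symmetric():
    M = rng.standard_normal((2 * n, 2 * n))
    return M + M.T                         # a self-adjoint "observable"

def bracket(F, G):
    return F @ J @ G - G @ J @ F           # the commutator [F, G]_J of (12)

F, G, K = random_symmetric(), random_symmetric(), random_symmetric()
assert np.allclose(bracket(F, G), bracket(F, G).T)   # again self-adjoint
assert np.allclose(bracket(F, G), -bracket(G, F))    # antisymmetry
jacobi = (bracket(F, bracket(G, K)) + bracket(G, bracket(K, F))
          + bracket(K, bracket(F, G)))
assert np.max(np.abs(jacobi)) < 1e-8                 # Jacobi identity
```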

By analogy with the well-known Weyl inequality in quantum mechanics, we have

$$4\sigma _{x}^{2}(F)\sigma _{x}^{2}(G) \geqslant {{({{\left[ {F,G} \right]}_{J}}x,x)}^{2}},$$
(13)

where

$$\sigma _{x}^{2}(B) = ({{B}^{2}}x,x) - {{(Bx,x)}^{2}}$$

is the variance of the observable B at the state x (the mean square deviation from its expectation). Inequality (13) implies Heisenberg’s uncertainty relations: even at a pure state, J-noncommuting observables cannot be exactly measured simultaneously. Inequality (13) is derived using the relations \({{\left[ {F,J} \right]}_{J}} = {{\left[ {G,J} \right]}_{J}} = 0\). In other words, (13) holds for all observables F and G for which the Poisson bracket of the scalar square \((x,x)\) with the mean values \((Fx,x)\) and \((Gx,x)\) vanishes (in quantum mechanics, this property holds automatically).
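
Inequality (13) can be tested numerically for observables commuting with J; in the sketch below such observables are obtained as realifications of complex Hermitian matrices (an illustrative construction, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

def realified_hermitian():
    # Realification of a complex Hermitian matrix X + iY (X symmetric, Y
    # skew-symmetric); such observables commute with J.
    X = rng.standard_normal((n, n)); X = X + X.T
    Y = rng.standard_normal((n, n)); Y = Y - Y.T
    return np.block([[X, -Y], [Y, X]])

F, G = realified_hermitian(), realified_hermitian()
assert np.allclose(F @ J, J @ F) and np.allclose(G @ J, J @ G)

def variance(B, x):
    # σ²_x(B) = (B²x, x) - (Bx, x)²
    return (B @ B @ x) @ x - ((B @ x) @ x) ** 2

for _ in range(100):
    x = rng.standard_normal(2 * n)
    x /= np.linalg.norm(x)                          # pure state, (x, x) = 1
    lhs = 4 * variance(F, x) * variance(G, x)
    rhs = (((F @ J @ G - G @ J @ F) @ x) @ x) ** 2  # ([F, G]_J x, x)²
    assert lhs >= rhs - 1e-6                        # inequality (13)
```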

How do the observables vary with time? Since

$$x(t,{{x}_{0}}) = {{e}^{{At}}}{{x}_{0}},\quad x(0,{{x}_{0}}) = {{x}_{0}},$$

and \(A\text{*} = - A\), we have (in the Heisenberg picture)

$$F(t) = {{e}^{{ - At}}}F{{e}^{{At}}}.$$
(14)

The operator

$$U(t) = {{e}^{{At}}}$$

can be called a real evolution operator. This operator is orthogonal:

$$U{\text{*}} = {{e}^{{A*t}}} = {{e}^{{ - At}}} = {{U}^{{ - 1}}}.$$

It follows from (14) that

$$\dot {F} = \left[ {F,A} \right] = FA - AF.$$
(15)

Therefore, an observable is constant if and only if the operators A and F commute. Equation (1) provides a Schrödinger description of the dynamical system, while (15) is a Heisenberg description.

On the other hand, \([F,A] = 0\) is the condition for the quadratic form f to be invariant with respect to the phase flow of system (1). Indeed, \(\dot {f} = 0\) if and only if

$$FA + A{\text{*}}F = 0.$$

Since the operator A is skew-self-adjoint, we obtain what was required.
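
A finite-dimensional sketch of the Heisenberg picture (the matrices are random stand-ins; the matrix exponential is computed by eigendecomposition, which suffices for these normal matrices):

```python
import numpy as np

rng = np.random.default_rng(5)

def expm(M):
    # matrix exponential via eigendecomposition (adequate for normal matrices)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

M = rng.standard_normal((4, 4))
A = M - M.T                               # skew-symmetric generator of (1)
t = 0.9

U = expm(A * t)                           # real evolution operator U(t) = e^{At}
assert np.allclose(U.T @ U, np.eye(4))    # U(t) is orthogonal

S = rng.standard_normal((4, 4))
F = S + S.T                               # an observable
Ft = expm(-A * t) @ F @ expm(A * t)       # Heisenberg picture, Eq. (14)

# Mean values agree with the Schrödinger picture: (F(t) x0, x0) = (F x(t), x(t)).
x0 = rng.standard_normal(4)
assert np.isclose((Ft @ x0) @ x0, (F @ (U @ x0)) @ (U @ x0))

# An observable commuting with A is constant; A^2 is symmetric and commutes with A.
Fc = A @ A
assert np.allclose(expm(-A * t) @ Fc @ expm(A * t), Fc)
```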

4 MAXWELL EQUATIONS AS THE SCHRÖDINGER EQUATION

Reduction to the Schrödinger equation is possible under more general conditions than (i) and (ii) from Section 2. In some cases, this can be done using partial differential equations of special form.

As an example, we consider the system of Maxwell equations describing the evolution of an electric E and a magnetic H field in Euclidean space \({{\mathbb{R}}^{3}} = {\text{\{ }}x{\text{\} }}\) without currents:

$$\frac{{\partial E}}{{\partial t}} = c\,{\text{curl}}H,\quad \frac{{\partial H}}{{\partial t}} = - c\,{\text{curl}}E.$$
(16)

Here, \(c\) is the speed of light and the magnetic field is solenoidal (\({\text{div}}H = 0\)). System (16) implies the Poynting equation

$$\frac{\partial }{{\partial t}}({{E}^{2}} + {{H}^{2}}) + c\,{\text{div}}S = 0,\quad S = E \times H.$$

If the fields E and H decay rapidly at infinity (\(\left| S \right|{{\left| x \right|}^{2}} \to 0\) as \(\left| x \right| \to \infty \)), then the Poynting equation yields a quadratic conservation law for Eqs. (16):

$$\int\limits_{{{\mathbb{R}}^{3}}} {({{E}^{2}} + {{H}^{2}})} \,{{d}^{3}}x = {\text{const}}.$$
(17)

Setting \(\psi = E + iH\), we use (16) to derive the Schrödinger equation

$$i\hbar \frac{{\partial \psi }}{{\partial t}} = \hat {H}\psi ,\quad \hat {H} = \hbar c\,{\text{curl}}.$$
(18)
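
The equivalence of (16) and (18) is purely algebraic (curl is real-linear), and dividing (18) by \(\hbar \) gives \(i{{\psi }_{t}} = c\,{\text{curl}}\,\psi \). This splitting can be checked on sampled fields; the grid, the random fields, and the finite-difference curl below are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
c = 1.0                                    # speed of light in chosen units

def curl(F, dx=1.0):
    # finite-difference curl; F has shape (3, nx, ny, nz)
    dF = [np.gradient(F[i], dx) for i in range(3)]   # dF[i][j] = ∂F_i/∂x_j
    return np.array([dF[2][1] - dF[1][2],
                     dF[0][2] - dF[2][0],
                     dF[1][0] - dF[0][1]])

E = rng.standard_normal((3, 8, 8, 8))      # sampled electric field (illustrative)
H = rng.standard_normal((3, 8, 8, 8))      # sampled magnetic field (illustrative)
psi = E + 1j * H

# i ∂ψ/∂t = c curl ψ  ⇔  ∂ψ/∂t = -i c curl ψ; split into real and imaginary parts:
rhs = -1j * c * curl(psi)
assert np.allclose(rhs.real, c * curl(H))   # ∂E/∂t = c curl H
assert np.allclose(rhs.imag, -c * curl(E))  # ∂H/∂t = -c curl E
```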

Relation (17) can be represented in complex form:

$$\left\langle {\psi ,\psi } \right\rangle = \int\limits_{{{\mathbb{R}}^{3}}} {(\psi ,\bar {\psi })} {{d}^{3}}x = {\text{const}}.$$

A Hilbert space structure is specified using the Hermitian inner product

$$\left\langle {{{\psi }_{1}},{{\psi }_{2}}} \right\rangle = \int\limits_{{{\mathbb{R}}^{3}}} {({{\psi }_{1}},{{{\bar {\psi }}}_{2}})} \,{{d}^{3}}x,$$

which is defined on the vector space of square integrable fields. The Hamiltonian operator (18) is Hermitian: \(\langle \hat {H}{{\psi }_{1}},{{\psi }_{2}}\rangle = \langle {{\psi }_{1}},\hat {H}{{\psi }_{2}}\rangle \). For differentiable fields (which are everywhere dense in \({{L}_{2}}\)), this equality follows straightforwardly from the well-known vector analysis identity

$$({\text{curl}}\,a,b) = (a,{\text{curl}}\,b) + {\text{div}}(a \times b).$$
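
On a periodic grid with central differences, a discrete analogue of this identity holds exactly (the divergence term telescopes to zero), which gives a numerical check of the symmetry of curl; the discretization below is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(7)
n, dx = 8, 1.0

def d(F, axis):
    # periodic central difference; its discrete adjoint is -d (summation by parts)
    return (np.roll(F, -1, axis) - np.roll(F, 1, axis)) / (2 * dx)

def curl(F):
    return np.array([d(F[2], 1) - d(F[1], 2),
                     d(F[0], 2) - d(F[2], 0),
                     d(F[1], 0) - d(F[0], 1)])

a = rng.standard_normal((3, n, n, n))
b = rng.standard_normal((3, n, n, n))

# Discrete analogue of (curl a, b) = (a, curl b): on a periodic grid the
# contribution of div(a × b) sums to zero.
lhs = np.sum(curl(a) * b)
rhs = np.sum(a * curl(b))
assert abs(lhs - rhs) < 1e-9
```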

Remark 2. The Hamiltonian property of Maxwell’s equations has been studied by numerous authors, starting with Dirac, with the purpose of quantization of electrodynamics (see [1] and references therein).

The spectral properties of the curl operator were discussed in the context of fluid dynamics, for example, in [8, 9].