
Symmetric and antisymmetric kernels for machine learning problems in quantum physics and chemistry


Published 6 August 2021 © 2021 The Author(s). Published by IOP Publishing Ltd
Citation: Stefan Klus et al 2021 Mach. Learn.: Sci. Technol. 2 045016. DOI 10.1088/2632-2153/ac14ad


Abstract

We derive symmetric and antisymmetric kernels by symmetrizing and antisymmetrizing conventional kernels and analyze their properties. In particular, we compute the feature space dimensions of the resulting polynomial kernels, prove that the reproducing kernel Hilbert spaces induced by symmetric and antisymmetric Gaussian kernels are dense in the space of symmetric and antisymmetric functions, and propose a Slater determinant representation of the antisymmetric Gaussian kernel, which allows for an efficient evaluation even if the state space is high-dimensional. Furthermore, we show that by exploiting symmetries or antisymmetries the size of the training data set can be significantly reduced. The results are illustrated with guiding examples and simple quantum physics and chemistry applications.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Kernel methods and neural networks are two of the most prevalent and versatile machine learning techniques. While various recent publications focus on invariant or equivariant deep learning algorithms, our goal is to derive kernel-based methods that exploit symmetries. Symmetries play an important role in many research areas such as physics and chemistry [1–3], but also point cloud classification problems [4] and problems defined on sets [5] are naturally permutation-invariant. One of the most prominent applications is in quantum physics. Systems of bosons require symmetric wave functions, whereas systems of fermions are represented by antisymmetric wave functions. Exploiting such symmetries of the underlying system is a popular and powerful approach that has been used to improve the performance of kernel-based methods as well as deep-learning algorithms. The goal is to obtain more accurate representations without increasing the number of training data points—resulting in more efficient learning algorithms—and to ensure that symmetry constraints are satisfied. In [1] and [2], for instance, neural networks and kernel approaches that take into account symmetries of molecules are constructed. These methods are then used for learning potential energy surfaces. An approach for constructing potential energy surfaces based on Gaussian processes combined with permutation-invariant kernels can be found in [6]. Gaussian processes that exploit symmetries by summing over permutations of identical atoms are also utilized in [7] to improve the accuracy of density functional theory descriptions. Moreover, the so-called SOAP (smooth overlap of atomic positions) kernel [8] is a popular framework to design translation-, rotation-, and permutation-invariant descriptors of molecules. In [9], general invariant kernels (capturing discrete and continuous transformations) for pattern analysis are defined and analyzed. Recently, neural network architectures for antisymmetric wavefunctions have been proposed [10–14] that typically operate by applying Slater determinants to the outputs. The neural networks optimize the basis functions entering the Slater determinants through a deep learning variant of a technique called backflow, a method to modify the basis functions used as trial wavefunctions in quantum Monte Carlo [15]. Neural network approaches such as FermiNet [11] and PauliNet [12] achieve extremely high accuracy with relatively few Slater determinants compared to standard quantum chemistry methods that build Slater determinants with fixed basis functions. Kernels, on the other hand, accomplish this by mapping the data to potentially infinite-dimensional feature spaces. Any continuous antisymmetric function can be approximated by antisymmetrized universal kernels. The universal approximation of symmetric and antisymmetric functions is also studied in [16].

In this work, we develop kernels that are intrinsically symmetric or antisymmetric. Although we focus mostly on physics and chemistry applications in what follows, the derived kernels can be used in the same way in other kernel-based supervised or unsupervised learning algorithms such as kernel principal component analysis (kernel PCA) [17], kernel canonical correlation analysis (kernel CCA) [18], or support vector machines (SVMs) [19]. The main contributions are:

  • We derive symmetric and antisymmetric kernels based on conventional kernels such as polynomial and Gaussian kernels and show that certain kernels can be expressed as Slater permanents or determinants.
  • We analyze the feature spaces and approximation properties of such kernels.
  • We demonstrate that these techniques improve the efficiency of kernel-based methods for problems exhibiting symmetries or antisymmetries.
  • We apply kernel-based methods for solving the time-independent Schrödinger equation to simple quantum mechanics problems. Furthermore, we predict the boiling points of molecules using kernel ridge regression.

In section 2, we first introduce kernels, reproducing kernel Hilbert spaces, and kernel-based methods for solving the time-independent Schrödinger equation. Antisymmetric kernels will be derived in section 3 and symmetric kernels in section 4. These two sections contain the main theoretical results, in particular the analysis of the properties of the resulting polynomial and Gaussian kernels. Numerical results will be presented in section 5. We conclude the paper with a list of open problems and future research.

2. Kernels and kernel-based methods

We will briefly recapitulate the properties of kernels and introduce the induced reproducing kernel Hilbert spaces. Additionally, we will present a kernel-based method for solving the time-independent Schrödinger equation.

2.1. Reproducing kernel Hilbert spaces

A kernel can be regarded as a similarity measure. We will focus on real-valued kernels, but the definitions can be easily extended to complex domains.

Definition 2.1 (Kernel [19]). Given a non-empty set $ \mathbb{X} $, a function $ k : \mathbb{X} \times \mathbb{X} \to \mathbb{R} $ is called a kernel if there exists a Hilbert space $ \mathbb{H} $ and a feature map $ \phi : \mathbb{X} \to \mathbb{H} $ such that

$ k(x, x^{\prime}) = \left\langle \phi(x), \phi(x^{\prime}) \right\rangle_{\mathbb{H}} \quad \text{for all } x, x^{\prime} \in \mathbb{X}. $
For a given kernel k, the so-called Gram matrix $ G \in \mathbb{R}^{m \times m} $ associated with a data set $ \{ x^{(i)} \}_{i = 1}^m \subset \mathbb{X} $ is defined by $ G_{ij} = k(x^{(i)}, x^{(j)}) $.

Definition 2.2 (Positive definiteness [19]). A function $ k : \mathbb{X} \times \mathbb{X} \to \mathbb{R} $ is called positive definite if for all m, all vectors $ c = [c_1, \dots, c_m]^\top \in \mathbb{R}^m $, and all subsets $ \{ x^{(i)} \}_{i = 1}^m \subset \mathbb{X} $ it holds that

$ \sum_{i = 1}^m \sum_{j = 1}^m c_i c_j\, k(x^{(i)}, x^{(j)}) = c^\top G c \geqslant 0. $
Strictly positive definite means that $ c^\top G c = 0 $ for mutually distinct data points only if c = 0. It can be shown that a function $ k : \mathbb{X} \times \mathbb{X} \to \mathbb{R} $ is a kernel if and only if it is symmetric, i.e. $ k(x, x^{\prime}) = k(x^{\prime}, x) $, and positive definite (s.p.d. in what follows to avoid confusion between different notions of symmetry), see [19]. Such kernels induce so-called reproducing kernel Hilbert spaces.

Definition 2.3 (RKHS [19, 20]). Let $ \mathbb{X} $ be a non-empty set. A space $ \mathbb{H} $ of functions $ f : \mathbb{X} \to \mathbb{R} $ is called a reproducing kernel Hilbert space (RKHS) with inner product $\left\langle{\,\cdot\,},{\,\cdot\,}\right\rangle_{\mathbb{H}}$ if a kernel k exists such that

  • (a)  
    $f(x) = \left\langle\,{f},\,{k(x, \,\cdot\,)}\right\rangle_{\mathbb{H}}$ for all $ f \in \mathbb{H} $, and
  • (b)  
    $ \mathbb{H} = \overline{\mathrm{span}\{k(x, \,\cdot\,) \mid x \in \mathbb{X} \}} $.

The first requirement is called the reproducing property. For f = k(x, ·), this results in $ k(x, x^{\prime}) =$ $\left\langle{k(x, \,\cdot\,)}, {k(x^{\prime},\,\cdot\,)}\right\rangle_{\mathbb{H}}$ so that we can define the so-called canonical feature map by φ(x) = k(x, ·). Additionally, for a data set $ \{ x^{(i)} \}_{i = 1}^m $, we define $ \Phi = [\phi(x^{(1)}), \dots, \phi(x^{(m)})] $ so that $ G = \Phi^\top \Phi $. For more details on kernels and reproducing kernel Hilbert spaces, we refer to [19, 20]. It was shown in [21, 22] that not only function evaluations but also derivative evaluations can be represented as inner products in the RKHS $ \mathbb{H} $, provided the kernel is sufficiently smooth. Let now $ \alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}_0^d $ be a multi-index. We define $ \left\lvert \alpha \right\rvert = \sum_{i = 1}^d \alpha_i $ as usual and, for a fixed $ r \in \mathbb{N}_0 $, the index set $ I_r = \{\alpha \in \mathbb{N}_0^d: \left\lvert \alpha \right\rvert \le r \} $. Given a function $ f : \mathbb{X} \to \mathbb{R}$, the partial derivative of f with respect to α is defined by

$ \mathcal{D}^\alpha f = \frac{\partial^{\left\lvert \alpha \right\rvert} f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}. $

Theorem 2.4 ([21, 22]). Let $ r \in \mathbb{N}_0 $ be a non-negative integer, $ k \in C^{2r}(\mathbb{X} \times \mathbb{X}) $ a kernel, and $ \mathbb{H} $ the induced RKHS. Then:

  • (a)  
    $ \mathcal{D}^\alpha k(x, \cdot) \in \mathbb{H} $ for any $ x \in \mathbb{X} $ and α ∈ Ir .
  • (b)  
    $(\mathcal{D}^\alpha f\,\,)(x) = \left\langle{\mathcal{D}^\alpha k(x, \cdot)}, {f\,}\,\right\rangle_{\mathbb{H}}$ for any $ x \in \mathbb{X} $, $ f \in \mathbb{H} $, and α ∈ Ir .

In (a) and (b), the derivative $\mathcal{D}^\alpha$ is understood as acting on the first argument of the kernel k.

We will need this property later for the approximation of differential operators. Another question is how rich these Hilbert spaces $ \mathbb{H} $ induced by a kernel k are.

Definition 2.5 (Universal kernel [23]). Let $ \mathbb{X} $ be compact and $ C(\mathbb{X}) $ the space of all continuous functions mapping from $ \mathbb{X} $ to $ \mathbb{R} $ equipped with $ ||{\,\cdot\,}||_\infty $. A kernel k is called universal if the induced RKHS $ \mathbb{H} $ is dense in $ C(\mathbb{X}) $.

That is, for a function $ f \in C(\mathbb{X}) $ and any $ \varepsilon \gt 0 $, we can find a function $ g \in \mathbb{H} $ such that $ ||{g-f\,}\,||_\infty \lt \varepsilon $. The Gaussian kernel

$ k(x, x^{\prime}) = \exp\left(-\frac{\lVert x - x^{\prime} \rVert_2^2}{2\sigma^2}\right), $

for instance, is universal, while the polynomial kernel

$ k(x, x^{\prime}) = (c + x^\top x^{\prime})^p $

is not. We will analyze the properties of these kernels and their symmetrized and antisymmetrized counterparts in more detail below. Various other notions of universality and the relationships between universal and characteristic kernels are discussed in [24]. In what follows, we will omit the subscript $ \mathbb{H} $ if it is clear which inner product or norm we are referring to.
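To make these notions concrete, the following minimal Python sketch (our own illustration, not part of the paper's accompanying code; the function names are ours) assembles the Gram matrix of the Gaussian kernel for a small random data set and verifies positive definiteness numerically via the eigenvalues of G.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y)**2) / (2 * sigma**2))

def gram_matrix(kernel, X):
    # X has shape (m, d); returns the m x m Gram matrix G_ij = k(x_i, x_j)
    m = X.shape[0]
    return np.array([[kernel(X[i], X[j]) for j in range(m)] for i in range(m)])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
G = gram_matrix(gaussian_kernel, X)

# positive definiteness: the (symmetric) Gram matrix has no negative eigenvalues
print(np.min(np.linalg.eigvalsh(G)) >= -1e-12)  # True
```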

2.2. Kernel-based solution of the Schrödinger equation

In [25], we proposed a kernel-based method for the solution of the time-independent Schrödinger equation and the approximation of other differential operators such as the generator of the Koopman operator. We will restrict ourselves to the Schrödinger equation. Let V be a potential and $ \mathcal{H} = - \tfrac{\hbar^2}{2\mathbf{m}} \Delta + V $ the Hamiltonian, where $\hbar$ is the reduced Planck constant and $ \mathbf{m} $ the mass, then the time-independent Schrödinger equation is given by

$ \mathcal{H} \psi = E \psi. $
That is, we want to compute eigenfunctions ψ and the associated eigenvalues E, which correspond to energies of the system. We define

where el is the lth unit vector, and operators

Here, $ \mathcal{C}_{00} $ is the standard covariance operator (see [26]) and $ \mathcal{C}_{01} $ contains the action of the Schrödinger operator. Since these integrals typically cannot be computed in practice, we estimate them using µ-distributed training data $ \{ x^{(i)} \}_{i = 1}^m $, resulting in the empirical operators

Assuming that the eigenfunctions can be represented as $ \widehat{\psi} = \Phi u $, i.e. they are contained in the space spanned by the functions $ \{ \phi(x^{(i)}) \}_{i = 1}^m $, we obtain a matrix eigenvalue problem

where the entries of the (generalized) Gram matrices $ G_{00}, G_{10} \in \mathbb{R}^{m \times m} $ are defined by

and

Eigenfunctions are then of the form

A detailed derivation and numerical results for simple quantum mechanics problems—the quantum harmonic oscillator and the hydrogen atom—can be found in [25].
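As a rough illustration of this procedure, the sketch below solves the one-dimensional harmonic oscillator with a Gaussian kernel. It is our own simplified reconstruction and rests on two assumptions spelled out in the comments: that the Gram matrices have the entries $[G_{00}]_{ij} = k(x^{(i)}, x^{(j)})$ and $[G_{10}]_{ij} = (\mathcal{H} k(\,\cdot\,, x^{(j)}))(x^{(i)})$, and that the eigenvalue problem takes the generalized form $G_{10} u = E\, G_{00} u$; see [25] for the precise derivation.

```python
import numpy as np

# Kernel-based solution of the time-independent Schroedinger equation for the
# 1D harmonic oscillator, H = -1/2 d^2/dx^2 + x^2/2 (hbar = m = 1), using a
# Gaussian kernel. Assumed forms (see lead-in): [G00]_ij = k(x_i, x_j) and
# [G10]_ij = (H k(., x_j))(x_i), with H acting on the first argument.

sigma = 0.8
x = np.linspace(-5, 5, 100)
D = x[:, None] - x[None, :]
K = np.exp(-D**2 / (2 * sigma**2))            # G00
d2K = (D**2 / sigma**4 - 1 / sigma**2) * K    # d^2/dx^2 of k(x, x_j), evaluated at x = x_i
G10 = -0.5 * d2K + 0.5 * x[:, None]**2 * K    # Hamiltonian applied to k(., x_j)

# generalized eigenvalue problem G10 u = E G00 u, solved via a truncated
# pseudoinverse to cope with the ill-conditioned Gaussian Gram matrix
A = np.linalg.pinv(K, rcond=1e-10) @ G10
ev = np.linalg.eigvals(A).real
print(np.sort(ev[ev > 0.1])[:5])              # should be close to 0.5, 1.5, 2.5, 3.5, 4.5
```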

3. Antisymmetric kernels and their properties

In this section, we will introduce the notion of antisymmetric kernels and define antisymmetric counterparts of well-known kernels such as the polynomial kernel and the Gaussian kernel. Furthermore, we analyze the properties of the resulting reproducing kernel Hilbert spaces. Most results can then be carried over to the symmetric case, which will be studied in section 4.

3.1. Antisymmetric kernels

Let $ \mathbb{X} \subset \mathbb{R}^d $ be the state space. Furthermore, let Sd be the symmetric group and π ∈ Sd a permutation. With a slight abuse of notation, we define $ \pi(x) = [x_{\pi(1)}, \dots, x_{\pi(d)}]^\top $ to be the vector $ x \in \mathbb{X} $ permuted by π. A function $ f : \mathbb{X} \to \mathbb{R} $ is called antisymmetric if

$ f(\pi(x)) = \mathrm{sgn}(\pi)\, f(x) \quad \text{for all } \pi \in S_d, $

where $ \mathrm{sgn}(\pi) $ denotes the sign of the permutation π, which is 1 if the number of transpositions is even and −1 if it is odd. We define the antisymmetrization operator $ \mathcal{A} $ by

$ (\mathcal{A} f)(x) = \frac{1}{d!} \sum_{\pi \in S_d} \mathrm{sgn}(\pi)\, f(\pi(x)). $
Remark 3.1. In the same way, we can consider state spaces of the form $ \mathbb{X} \subset \bigoplus_{i = 1}^{d_x} \mathbb{R}^{d_y} $. Functions would then be antisymmetric with respect to permutations of vectors in $ \mathbb{R}^{d_y} $. That is, for $ x = [x_1, \dots, x_{d_x}]^\top $ with $ x_i \in \mathbb{R}^{d_y} $, the permuted vector is then $ \pi(x) = [x_{\pi(1)}, \dots, x_{\pi(d_x)}]^\top $. For typical quantum mechanics applications, for instance, dy  = 3 (every particle has a position in a three-dimensional space) and dx the number of fermions (or bosons in the symmetric case). The special case dx  = 2 is considered in [27], where the spectral properties of symmetric and antisymmetric pairwise kernels are analyzed. Supervised learning problems with such pairwise kernels are discussed in [28].

Our goal is to define antisymmetric kernels for arbitrary d, which can then be used in kernel-based learning algorithms.

Definition 3.2 (Antisymmetric kernel function). Let $ k : \mathbb{X} \times \mathbb{X} \to \mathbb{R} $ be a kernel. We define an antisymmetric function $ k_\mathrm{a} : \mathbb{X} \times \mathbb{X} \to \mathbb{R} $ by

$ k_\mathrm{a}(x, x^{\prime}) = \frac{1}{(d!)^2} \sum_{\pi \in S_d} \sum_{\pi^{\prime} \in S_d} \mathrm{sgn}(\pi)\, \mathrm{sgn}(\pi^{\prime})\, k(\pi(x), \pi^{\prime}(x^{\prime})). $
Clearly, if $ k(x, x^{\prime}) = k(x^{\prime}, x) $, then also $ k_\mathrm{a}(x, x^{\prime}) = k_\mathrm{a}(x^{\prime}, x) $. Furthermore, for a fixed permutation $ \widehat{\pi} \in S_d $, it holds that

$ k_\mathrm{a}(\widehat{\pi}(x), x^{\prime}) = \mathrm{sgn}(\widehat{\pi})\, k_\mathrm{a}(x, x^{\prime}). $ (1)

Here, we used the fact that $ \mathrm{sgn}(\widehat{\pi}) = \mathrm{sgn}(\widehat{\pi}^{-1}) $ and $ \mathrm{sgn}(\pi \circ \widehat{\pi}) = \mathrm{sgn}(\pi) \mathrm{sgn}(\widehat{\pi}) $. Additionally, we utilized the property that for a function $ g : S_d \to \mathbb{R} $ it holds that $ \sum_{\pi \in S_d} g(\pi) = \sum_{\pi \in S_d} g(\pi \circ \widehat{\pi}) $, which corresponds to a reordering of the summands. Thus, $ k_\mathrm{a} $ is antisymmetric in both arguments. From (1) it directly follows that $k_\mathrm{a}(x, x^{\prime}) = 0$ if at least two entries of x or $x^{\prime}$ are equal (see footnote 6).

Lemma 3.3. The function $ k_\mathrm{a} $ defines an s.p.d. kernel.

Proof. We have

$ k_\mathrm{a}(x, x^{\prime}) = \left\langle \phi_\mathrm{a}(x), \phi_\mathrm{a}(x^{\prime}) \right\rangle_{\mathbb{H}}, $

where $ \phi_a(x) = \frac{1}{d!} \sum_{\pi \in S_d} \mathrm{sgn}(\pi) \phi(\pi(x)) $. That is, $ k_\mathrm{a} $ is a kernel. Symmetry was shown above. To see that the function is positive definite, let $ c = [c_1, \dots, c_m]^\top \in \mathbb{R}^m $ be a coefficient vector and $ \{ x^{(i)} \}_{i = 1}^m \subset \mathbb{X} $. Then

$ \sum_{i = 1}^m \sum_{j = 1}^m c_i c_j\, k_\mathrm{a}(x^{(i)}, x^{(j)}) = \Big\lVert \sum_{i = 1}^m c_i\, \phi_\mathrm{a}(x^{(i)}) \Big\rVert_{\mathbb{H}}^2 \geqslant 0. $
The antisymmetrized two- and three-dimensional Gaussian kernels are visualized in figure 1. The feature space mapping of the antisymmetric kernel $ k_\mathrm{a} $ is the antisymmetrization operator $ \mathcal{A} $ applied to the feature space mapping of the kernel k.

Example 3.4. For $ \mathbb{X} \subset \mathbb{R}^2 $, the feature space of the quadratic kernel $ k(x, x^{\prime}) = (1 + x^\top x^{\prime})^2 $ is spanned by $ \{1, x_1, x_2, x_1^2, x_1 x_2, x_2^2 \} $ and thus six-dimensional. The feature space of the antisymmetrized kernel $ k_\mathrm{a} $ is spanned by the two antisymmetric functions $ \{x_1 - x_2, x_1^2 - x_2^2 \} $. This illustrates that the feature space is significantly reduced. $\triangle$


Figure 1. (a) Two-dimensional antisymmetric Gaussian kernel $ k_\mathrm{a} $, where $ x^{\prime} = [0.4, -0.3]^\top $ and σ = 0.3. Yellow corresponds to positive and blue to negative values. (b) Three-dimensional antisymmetric Gaussian kernel $ k_\mathrm{a} $, where $ x^{\prime} = [0.3, -0.6, 0.4]^\top $ and σ = 0.2. The separating isosurface in the middle is defined by $ k_\mathrm{a}(x, x^{\prime}) = 0 $.


Polynomial kernels of arbitrary degree p for d-dimensional spaces will be discussed in more detail in section 3.2.

Remark 3.5. The Mercer features of a kernel k are defined by the eigenfunctions of the integral operator

$ (\mathcal{T}_k f)(x) = \int_{\mathbb{X}} k(x, x^{\prime})\, f(x^{\prime}) \,\mathrm{d}\mu(x^{\prime}), $

multiplied by the square root of the associated eigenvalues λ, see [19]. The Mercer features of an antisymmetric kernel $ k_\mathrm{a} $ are automatically antisymmetric. This can be seen as follows: let ϕ be an eigenfunction of $ \mathcal{T}_{k_\mathrm{a}} $ with corresponding eigenvalue λ, then

$ \lambda\, \phi(\pi(x)) = \int_{\mathbb{X}} k_\mathrm{a}(\pi(x), x^{\prime})\, \phi(x^{\prime}) \,\mathrm{d}\mu(x^{\prime}) = \mathrm{sgn}(\pi) \int_{\mathbb{X}} k_\mathrm{a}(x, x^{\prime})\, \phi(x^{\prime}) \,\mathrm{d}\mu(x^{\prime}) = \mathrm{sgn}(\pi)\, \lambda\, \phi(x). $
Mercer features of the Gaussian kernel and its antisymmetric and symmetric (see section 4) counterparts—computed by a spectral decomposition of the covariance operator, cf [29]—are shown in figure 2.

Definition 3.6 (Permutation invariance). We call a kernel permutation-invariant if

$ k(x, x^{\prime}) = k(\pi(x), \pi(x^{\prime})) $

for all permutations π ∈ Sd .

The Gaussian kernel and the polynomial kernel are permutation-invariant since the standard inner product and the induced norm are permutation-invariant, i.e. $ \left\langle{x},{x^{\prime}}\right\rangle = \left\langle{\pi(x)},{\pi(x^{\prime})}\right\rangle $ for any permutation π ∈ Sd . The antisymmetric kernel $ k_\mathrm{a} $ is permutation-invariant by construction. While many kernels used in practice are naturally permutation-invariant, an open question is whether this assumption limits the expressivity of the induced function space. We will analyze the properties of the Gaussian kernel in section 3.3. The permutation-invariance allows us to simplify the representation of the antisymmetric kernel.

Lemma 3.7. Given a permutation-invariant kernel k, it holds that

$ k_\mathrm{a}(x, x^{\prime}) = \frac{1}{d!} \sum_{\pi \in S_d} \mathrm{sgn}(\pi)\, k(\pi(x), x^{\prime}) = \frac{1}{d!} \sum_{\pi \in S_d} \mathrm{sgn}(\pi)\, k(x, \pi(x^{\prime})). $

Proof. We obtain

$ k_\mathrm{a}(x, x^{\prime}) = \frac{1}{(d!)^2} \sum_{\pi \in S_d} \sum_{\pi^{\prime} \in S_d} \mathrm{sgn}(\pi)\, \mathrm{sgn}(\pi^{\prime})\, k(\pi(x), \pi^{\prime}(x^{\prime})) = \frac{1}{(d!)^2} \sum_{\pi \in S_d} \sum_{\pi^{\prime} \in S_d} \mathrm{sgn}(\pi^{\prime\,-1} \circ \pi)\, k\big((\pi^{\prime\,-1} \circ \pi)(x), x^{\prime}\big) = \frac{1}{d!} \sum_{\tilde{\pi} \in S_d} \mathrm{sgn}(\tilde{\pi})\, k(\tilde{\pi}(x), x^{\prime}), $

since all permutations $\tilde{\pi} = \pi^{\prime\,-1} \circ \pi$ occur d! times. In the second step, we used the permutation-invariance of k and the same properties of permutations as above. The proof for the second representation is analogous.
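The following short NumPy sketch (our own check, assuming the definition of $k_\mathrm{a}$ given in definition 3.2) verifies lemma 3.7 numerically for the Gaussian kernel by comparing the double sum over $S_d \times S_d$ with the single sum over $S_d$.

```python
import itertools
import math
import numpy as np

def sgn(p):
    # sign of a permutation given as a tuple, computed by counting inversions
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def k(x, y, sigma=0.5):
    return np.exp(-np.sum((x - y)**2) / (2 * sigma**2))

d = 3
rng = np.random.default_rng(42)
x, y = rng.standard_normal(d), rng.standard_normal(d)
perms = list(itertools.permutations(range(d)))

# definition 3.2: double sum over pairs of permutations
ka_double = sum(sgn(p) * sgn(q) * k(x[list(p)], y[list(q)])
                for p in perms for q in perms) / math.factorial(d)**2
# lemma 3.7: single sum, valid for permutation-invariant kernels
ka_single = sum(sgn(p) * k(x[list(p)], y) for p in perms) / math.factorial(d)

print(np.isclose(ka_double, ka_single))  # True
```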

For the sake of simplicity, assume now that the kernel k is permutation-invariant. We want to show that for a universal kernel k, the reproducing kernel Hilbert space induced by the corresponding antisymmetric kernel $ k_\mathrm{a} $ is dense in the space of antisymmetric functions.

Proposition 3.8. Let $ \mathbb{X} $ be bounded. Given a universal, permutation-invariant, continuous kernel k, the space $ \mathbb{H}_a $ induced by $ k_\mathrm{a} $ is dense in the space of continuous antisymmetric functions given by $ C_a(\mathbb{X}) = \{f \in C(\mathbb{X}) \mid f \text{ is antisymmetric} \} $.

Proof. Let f be antisymmetric. It follows that $ f(x) = \mathrm{sgn}(\pi) f(\pi(x)) $ for all π ∈ Sd and thus

$ f(x) = \frac{1}{d!} \sum_{\pi \in S_d} \mathrm{sgn}(\pi)\, f(\pi(x)) = (\mathcal{A} f)(x). $

Since k is assumed to be universal, we can find coefficients $ \alpha_i \in \mathbb{R} $ and vectors $ x^{(i)} \in \mathbb{X} $ such that $ ||{\sum_{i = 1}^n \alpha_i k(\,\cdot\,, x^{(i)}) - f}||_\infty \lt \varepsilon $. Then

$ \Big\lVert \sum_{i = 1}^n \alpha_i k_\mathrm{a}(\,\cdot\,, x^{(i)}) - f \Big\rVert_\infty = \Big\lVert \mathcal{A} \Big( \sum_{i = 1}^n \alpha_i k(\,\cdot\,, x^{(i)}) - f \Big) \Big\rVert_\infty \leqslant \Big\lVert \sum_{i = 1}^n \alpha_i k(\,\cdot\,, x^{(i)}) - f \Big\rVert_\infty \lt \varepsilon, $

where we used lemma 3.7, the fact that $ \mathcal{A} f = f $, and that $ \mathcal{A} $ does not increase the supremum norm.
Continuous antisymmetric functions can be approximated arbitrarily well by universal antisymmetric kernels such as the Gaussian kernel. Although we used the same number of data points for the approximation in the proof (i.e. n points for the expansion in terms of k and also $ k_\mathrm{a} $), fewer data points are required in practice if we employ the antisymmetric kernel, see example 3.14.


Figure 2. (a) Numerically computed normalized features of the Gaussian kernel k with bandwidth $ \sigma = \frac{1}{2} $. (b) Similar-looking but antisymmetric features of the associated kernel $ k_\mathrm{a} $. (c) Symmetric features of the kernel $ k_\mathrm{s} $ derived in section 4.


3.2. Antisymmetric polynomial kernels

We have seen in example 3.4 that the feature space dimension of the polynomial kernel of order two for $ \mathbb{X} \subset \mathbb{R}^2 $ is reduced from six to two by the antisymmetrization. Let $ q = (q_1, \dots, q_d) \in \mathbb{N}_0^d $ be a multi-index and $ \left\lvert q \right\rvert = \sum_{i = 1}^{d} q_i $. We define $ x^q = \prod_{i = 1}^d x_i^{q_i} $. For a d-dimensional state space $ \mathbb{X} $, the polynomial kernel of order p is then given by

$ k(x, x^{\prime}) = (c + x^\top x^{\prime})^p = \sum_{0 \leqslant \left\lvert q \right\rvert \leqslant p} c_q\, x^q\, (x^{\prime})^q, $

where

$ c_q = \binom{p}{q_0, q_1, \dots, q_d} c^{q_0} $

and $ q_0 = p - \left\lvert q \right\rvert $, cf [30]. The multinomial coefficients are defined by

$ \binom{p}{q_0, q_1, \dots, q_d} = \frac{p!}{q_0!\, q_1! \cdots q_d!}. $

Thus, the feature space is spanned by the monomials $ \big\{x^q ~\big|~ 0 \leqslant \left\lvert q \right\rvert \leqslant p \big\} $ and the dimension of the feature space is $ n_\phi = \binom{p+d}{d} $, see, e.g. [31].

We now want to find the feature space of the corresponding antisymmetric kernel ka . Given a multi-index q, assume that there exist two entries qi and qj with $ q_i = q_j $. Since the transposition (i, j) leaves the multi-index (and thus xq ) unchanged, this monomial will be eliminated by the antisymmetrization operator. It follows that the monomials must have distinct indices. In fact, the nonzero images of monomials under antisymmetrization are of the form

$ (\mathcal{A}\, x^{\delta + \mu})(x) = \frac{1}{d!} \sum_{\pi \in S_d} \mathrm{sgn}(\pi)\, \pi(x)^{\delta + \mu} = \frac{1}{d!} \det \big[ x_j^{\,\delta_i + \mu_i} \big]_{i, j = 1}^d, $ (2)

where $ \delta = (d-1, d-2, \dots, 0) $ and $ \mu = (\mu_1, \dots, \mu_d) $ with $\mu_1 \geqslant \mu_2 \geqslant \dots \geqslant \mu_d \geqslant 0$ is a partition of a positive integer (footnote 7), see [32]. The degrees of the terms of this antisymmetric polynomial are $ |{\mu}| + \binom{d}{2} $. Since we need all monomials of order $ 0 \leqslant \left\lvert q \right\rvert \leqslant p $, we have to consider the partitions µ of all integers $ p_r $ with $ 0 \leqslant p_r \leqslant p - \binom{d}{2} $.

This representation uses the fact that multi-indices corresponding to antisymmetric polynomials can be written as q = δ + µ, where δ is defined as above and µ a partition. It follows that an antisymmetric polynomial must be at least of order $ \binom{d}{2} $. Equation (2) can be regarded as a Slater determinant (introduced below) for a specific set of functions. We also obtain the Vandermonde determinant (up to the sign) as a special case where µ = 0.

Definition 3.9 (Partition function). Let $ s_\ell(n) $ be the function that counts the partitions of n into exactly $ \ell $ parts.

A closed-form expression for $ s_\ell(n) $ is not known, but it can be expressed in terms of generating functions or computed using the recurrence relation

$ s_\ell(n) = s_{\ell - 1}(n - 1) + s_\ell(n - \ell), $

where we define $ s_\ell(n) = 1 $ if n = 0 and $ \ell = 0 $ and $ s_\ell(n) = 0 $ if $ n \leqslant 0 $ or $ \ell \leqslant 0 $ (but not $ n = \ell = 0 $), see [33] for more details about partitions and partition functions.

Lemma 3.10. The dimension of the feature space generated by the antisymmetrized polynomial kernel of order p is

$ n_{\phi_\mathrm{a}} = \sum_{n = 0}^{p - \binom{d}{2}} \sum_{\ell = 0}^{d} s_\ell(n). $

Proof. Since $ \binom{d}{2} $ of the $ \left\lvert q \right\rvert $ exponents are already spoken for, we can use only the remaining $ \left\lvert q \right\rvert - \binom{d}{2} $ to generate partitions µ, with $ 0 \leqslant \left\lvert q \right\rvert \leqslant p $. All these numbers can be decomposed into at most d parts since we have only d variables. If the number of components is smaller than d, we simply add zeros.

Example 3.11. For d = 3 and p = 6, the base case is δ = (2, 1, 0), and we can generate partitions of the integers $ p_r = 0, 1, 2, 3 $, namely

$ \mu \in \{(0,0,0),\ (1,0,0),\ (2,0,0),\ (1,1,0),\ (3,0,0),\ (2,1,0),\ (1,1,1)\}, $

resulting in seven antisymmetric polynomials. $\triangle$

The sizes of the feature spaces of the polynomial kernels k and $ k_\mathrm{a} $ for different dimensions d and degrees p are summarized in table 1. This shows that antisymmetric polynomial kernels might not be feasible for higher-dimensional problems. For d = 10, for example, the lowest degree of the monomials is already 45.
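The dimensions listed in table 1 below can be reproduced with a few lines of Python. This is our own illustration, assuming the partition-based count described in lemma 3.10 and its proof; the recurrence for $ s_\ell(n) $ is the one stated after definition 3.9.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def s(ell, n):
    # number of partitions of n into exactly ell parts (definition 3.9)
    if n == 0 and ell == 0:
        return 1
    if n <= 0 or ell <= 0:
        return 0
    return s(ell - 1, n - 1) + s(ell, n - ell)

def n_phi(d, p):
    # feature space dimension of the polynomial kernel of order p in d variables
    return comb(p + d, d)

def n_phi_a(d, p):
    # antisymmetrized kernel: partitions of 0, ..., p - C(d, 2) into at most d parts
    return sum(s(ell, n) for n in range(p - comb(d, 2) + 1) for ell in range(d + 1))

print([(n_phi(2, p), n_phi_a(2, p)) for p in range(2, 9)])
# [(6, 2), (10, 4), (15, 6), (21, 9), (28, 12), (36, 16), (45, 20)], cf table 1
```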

Table 1. Dimensions of the feature spaces spanned by the polynomial kernel k and its antisymmetric counterpart $ k_\mathrm{a} $. Here, d is the dimension of the state space and p the degree of the polynomial kernel.

 d      p = 2       p = 3       p = 4       p = 5       p = 6       p = 7       p = 8
        n_φ  n_φ_a  n_φ  n_φ_a  n_φ  n_φ_a  n_φ  n_φ_a  n_φ  n_φ_a  n_φ  n_φ_a  n_φ  n_φ_a
 2       6    2     10    4     15    6     21    9     28   12     36   16     45   20
 3      10    0     20    1     35    2     56    4     84    7    120   11    165   16
 4      15    0     35    0     70    0    126    0    210    1    330    2    495    4

3.3. Antisymmetric Gaussian kernels

We will now analyze the properties of the Gaussian kernel. We have shown in Proposition 3.8 that the space spanned by the antisymmetric Gaussian kernel is dense in the space of continuous antisymmetric functions. For the Gaussian kernel, the expression obtained in lemma 3.7 can be simplified even further.

Lemma 3.12. Let k be the Gaussian kernel with bandwidth σ, then

$ k_\mathrm{a}(x, x^{\prime}) = \frac{1}{d!} \det(K), \quad \text{where } K_{ij} = \exp\left(-\frac{(x_i - x_j^{\prime})^2}{2\sigma^2}\right). $

Proof. Applying Leibniz' formula

$ \det(K) = \sum_{\pi \in S_d} \mathrm{sgn}(\pi) \prod_{i = 1}^d K_{\pi(i), i}, $

we have

$ \det(K) = \sum_{\pi \in S_d} \mathrm{sgn}(\pi) \prod_{i = 1}^d \exp\left(-\frac{(x_{\pi(i)} - x_i^{\prime})^2}{2\sigma^2}\right) = \sum_{\pi \in S_d} \mathrm{sgn}(\pi)\, k(\pi(x), x^{\prime}). $

Lemma 3.7 then yields the desired result.

This decomposition is akin to the well-known Slater determinant (see, e.g. [34]), which defines an antisymmetric wave function by

$ \psi_\mathrm{a}(x_1, \dots, x_d) = \frac{1}{\sqrt{d!}} \det \begin{bmatrix} \psi_1(x_1) & \psi_2(x_1) & \cdots & \psi_d(x_1) \\ \psi_1(x_2) & \psi_2(x_2) & \cdots & \psi_d(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ \psi_1(x_d) & \psi_2(x_d) & \cdots & \psi_d(x_d) \end{bmatrix}. $

Notice that here the normalization factor is chosen in such a way that, provided the wave functions ψi , $ i = 1, \dots, d $, are normalized and orthogonal to each other, $ \psi_\mathrm{a} $ is normalized as well.

Remark 3.13. We can define a more general class of antisymmetric kernels. Let $ f : \mathbb{R} \to \mathbb{R} $ be a function, then

$ k_\mathrm{a}(x, x^{\prime}) = \frac{1}{d!} \det \big[ f(\lvert x_i - x_j^{\prime} \rvert) \big]_{i, j = 1}^d $

defines an antisymmetric kernel. We call such a function $ k_\mathrm{a} $ a Slater kernel. The Gaussian kernel can be obtained by setting $ f(r) = e^{-\frac{r^2}{2\sigma^2}} $ and the Laplacian kernel—using the 1-norm—by setting $ f(r) = e^{-\frac{r}{\sigma}} $. Alternatively, kernels could be constructed from generalized Slater determinants or by concatenating creation and annihilation operators, see also [11–13].

The advantage of the Slater determinant formulation is that we can compute it efficiently using matrix decomposition techniques, without having to iterate over all permutations, which would be clearly infeasible for higher-dimensional problems.
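As a concrete illustration (our own sketch, assuming the determinant representation of lemma 3.12), the antisymmetric Gaussian kernel can be evaluated with standard linear algebra routines and cross-checked against the explicit sum over permutations for small d:

```python
import itertools
import math
import numpy as np

def antisymmetric_gaussian_kernel(x, y, sigma):
    # k_a(x, y) = det(K) / d!  with  K_ij = exp(-(x_i - y_j)^2 / (2 sigma^2))
    K = np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))
    return np.linalg.det(K) / math.factorial(len(x))

def brute_force(x, y, sigma):
    # reference: (1/d!) sum_pi sgn(pi) k(pi(x), y)
    d = len(x)
    total = 0.0
    for p in itertools.permutations(range(d)):
        sign = np.linalg.det(np.eye(d)[list(p)])  # determinant of the permutation matrix
        total += sign * np.exp(-np.sum((x[list(p)] - y)**2) / (2 * sigma**2))
    return total / math.factorial(d)

rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
print(np.isclose(antisymmetric_gaussian_kernel(x, y, 0.5), brute_force(x, y, 0.5)))  # True
```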

Example 3.14. In order to illustrate the difference between a standard Gaussian kernel k and its antisymmetrized counterpart $ k_\mathrm{a} $, we define an antisymmetric function $ f : \mathbb{R}^2 \to \mathbb{R} $ by $ f(x) = \sin(\boldsymbol{\pi}(x_1 - x_2)) $ (see footnote 8) and apply kernel ridge regression (see, e.g. [31]) to randomly sampled data points. That is, we generate m data points $ x^{(i)} $ in $ \mathbb{X} = [-1, 1] \times [-1, 1] $ and compute $ y^{(i)} = f(x^{(i)}) $. We then try to recover f from the training data $ \big\{(x^{(i)}, y^{(i)})\big\}_{i = 1}^m $. Additionally, we define an augmented data set of size 2m by adding the antisymmetrized data set, i.e. $ \big\{(x^{(i)}, y^{(i)})\big\}_{i = 1}^m \cup \big\{(\pi(x^{(i)}), -y^{(i)})\big\}_{i = 1}^m $, where π = (1, 2) in cycle notation. The bandwidth of the kernel is set to $ \sigma = \frac{1}{2} $. The results are shown in figure 3. We measure the root-mean-square error (RMSE)—averaged over 5000 runs—in the midpoints of a regular 30 × 30 box discretization of the domain. Kernel ridge regression using $ k_\mathrm{a} $ results in more accurate function approximations and is, for small m, numerically equivalent to kernel ridge regression using k applied to the augmented data set of size 2m. For larger values of m, doubling the size of the data set leads to ill-conditioned matrices and increased numerical errors (see footnote 9). $\triangle$


Figure 3. (a) Antisymmetric function $ f(x) = \sin(\boldsymbol{\pi}(x_1 - x_2)) $. (b) Kernel ridge regression approximation error as a function of the number of data points. The antisymmetric Gaussian kernel leads to more accurate function approximations without increasing the size of the training data set.


The example shows that the antisymmetrized kernel is indeed advantageous: it enables a more accurate representation without increasing the size of the data set. For higher-dimensional problems, this effect will be even more pronounced. To obtain the same accuracy for a three-dimensional antisymmetric function, we would already need 6m data points. The kernel evaluations, on the other hand, become more expensive, but are easily parallelizable. The bottleneck of kernel-based methods is often the size of the training data set, which enters in a cubic way (since a generally dense system of linear equations has to be solved, or, if we are interested in eigenfunctions of operators associated with dynamical systems, a generalized eigenvalue problem).
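A minimal version of the experiment in example 3.14 could look as follows (our own code, assuming a standard kernel ridge regression formulation with ridge parameter λ and the Slater-determinant evaluation of $ k_\mathrm{a} $ from lemma 3.12; the regularization value is our choice, not taken from the paper):

```python
import math
import numpy as np

def ka_gauss(x, y, sigma):
    # antisymmetric Gaussian kernel via the Slater determinant representation
    K = np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))
    return np.linalg.det(K) / math.factorial(len(x))

def gram(X, Y, sigma):
    return np.array([[ka_gauss(x, y, sigma) for y in Y] for x in X])

f = lambda X: np.sin(np.pi * (X[:, 0] - X[:, 1]))  # antisymmetric target function

rng = np.random.default_rng(0)
m, sigma, lam = 100, 0.5, 1e-6
X_train = rng.uniform(-1, 1, size=(m, 2))
y_train = f(X_train)

# kernel ridge regression: alpha = (G + lam I)^{-1} y, prediction via G_test alpha
G = gram(X_train, X_train, sigma)
alpha = np.linalg.solve(G + lam * np.eye(m), y_train)

X_test = rng.uniform(-1, 1, size=(500, 2))
rmse = np.sqrt(np.mean((gram(X_test, X_train, sigma) @ alpha - f(X_test))**2))
print(rmse)  # typically well below the amplitude of f
```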

3.4. Derivatives of antisymmetric kernels

For the approximation of differential operators, we will also need partial derivatives of the kernel $ k_\mathrm{a} $. Since $ k_\mathrm{a} $ just comprises alternating sums of kernel functions k, we can compute derivatives of $ k_\mathrm{a} $ by summing over derivatives of k. For polynomial and Gaussian kernels, the derivatives of k can be found in [25]. Alternatively, the partial derivatives of the antisymmetric Gaussian kernel can be computed via Slater determinants.

Example 3.15.  For the antisymmetric Gaussian kernel, let $ K^{e_l} \in \mathbb{R}^{d \times d} $ be the matrix with entries

Then

Similar formulas can be derived for the second-order derivatives. $\triangle$

4. Symmetric kernels and their properties

Although we focused on antisymmetric functions so far, symmetric functions also play an important role in quantum physics. Other typical applications include point clouds, sets, and graphs, where the numbering of points, elements, or vertices should not impair the learning algorithms. Some of the above results can be easily carried over to the symmetric case. The special case dx  = 2 is analyzed in [27]. Similar symmetrized kernels are also constructed in [6]. We focus on the analysis of the induced function spaces.

4.1. Symmetric kernels

We call a function $ f : \mathbb{X} \to \mathbb{R} $ symmetric if

$ f(\pi(x)) = f(x) $

for all permutations π ∈ Sd and define the symmetrization operator

$ (\mathcal{S} f)(x) = \frac{1}{d!} \sum_{\pi \in S_d} f(\pi(x)). $

Definition 4.1 (Symmetric kernel function). Let $ k : \mathbb{X} \times \mathbb{X} \to \mathbb{R} $ be a kernel. We then define a symmetric function $ k_\mathrm{s} : \mathbb{X} \times \mathbb{X} \to \mathbb{R} $ by

$ k_\mathrm{s}(x, x^{\prime}) = \frac{1}{(d!)^2} \sum_{\pi \in S_d} \sum_{\pi^{\prime} \in S_d} k(\pi(x), \pi^{\prime}(x^{\prime})). $
We simply omitted the signs of the permutations here. As before, if $ k(x, x^{\prime}) = k(x^{\prime}, x) $, then also $ k_\mathrm{s}(x, x^{\prime}) = k_\mathrm{s}(x^{\prime}, x) $. The function $ k_\mathrm{s} $ is permutation-symmetric in both arguments. Note that the definition of permutation-symmetry is different from permutation-invariance, which was defined by $ k(x, x^{\prime}) = k(\pi(x), \pi(x^{\prime})) $. Permutation-symmetric kernels are, however, automatically permutation-invariant. We briefly restate the above results for symmetric functions; the proofs are analogous to their counterparts for antisymmetric functions.

Lemma 4.2. The function $ k_\mathrm{s} $ defines an s.p.d. kernel.

Example 4.3.  For $ \mathbb{X} \subset \mathbb{R}^2 $, the feature space of the symmetrized polynomial kernel of order 2 is spanned by the symmetric functions $ \{1, x_1 + x_2, x_1^2 + x_2^2, x_1 x_2 \} $. $\triangle$

More general results for polynomial kernels will be derived in section 4.2. Eigenfunctions of the integral operator associated with $ k_\mathrm{s} $ are symmetric. Mercer features of the symmetrized Gaussian kernel for d = 2 are shown in figure 2.

Lemma 4.4. Given a permutation-invariant kernel k, it holds that

$ k_\mathrm{s}(x, x^{\prime}) = \frac{1}{d!} \sum_{\pi \in S_d} k(\pi(x), x^{\prime}) = \frac{1}{d!} \sum_{\pi \in S_d} k(x, \pi(x^{\prime})). $
Analogously, continuous symmetric functions can be approximated arbitrarily well by symmetric universal kernels.

Proposition 4.5. Let $ \mathbb{X} $ be bounded. Given a universal, permutation-invariant, continuous kernel k, the space $ \mathbb{H}_s $ induced by $ k_\mathrm{s} $ is dense in the space of continuous symmetric functions given by $ C_\mathrm{s}(\mathbb{X}) = \{f \in C(\mathbb{X}) \mid f \text{ is symmetric} \} $.

4.2. Symmetric polynomial kernels

Let us compute the dimensions of the feature spaces spanned by symmetrized polynomial kernels.

Lemma 4.6. The dimension of the feature space generated by the symmetrized polynomial kernel of order p is

$ n_{\phi_\mathrm{s}} = \sum_{n = 0}^{p} \sum_{\ell = 0}^{d} s_\ell(n). $

Proof. Let π be a permutation, then the multi-indices q and π(q) generate the same feature space function when we apply the symmetrization operator $ \mathcal{S} $ to the corresponding monomials xq and $ x^{\pi(q)} $. We thus have to consider only partitions µ of the integers $ 0 \leqslant \left\lvert q \right\rvert \leqslant p $ since the ordering of the multi-indices does not matter.

This case is similar to the antisymmetric case, with the difference that we require partitions of integers up to p instead of $ p-\binom{d}{2} $. Table 2 lists the dimensions of the feature spaces spanned by the polynomial kernel k and its symmetric version $ k_\mathrm{s} $ for different combinations of d and p. Compared to the standard polynomial kernel, the number of features is significantly lower, but higher than the number of features generated by the antisymmetric polynomial kernel.

Table 2. Dimensions of the feature spaces spanned by the polynomial kernel k and its symmetric counterpart $ k_\mathrm{s} $. Here, d is again the dimension of the state space and p the degree, cf table 1.

 d      p = 2       p = 3       p = 4       p = 5       p = 6       p = 7       p = 8
        n_φ  n_φ_s  n_φ  n_φ_s  n_φ  n_φ_s  n_φ  n_φ_s  n_φ  n_φ_s  n_φ  n_φ_s  n_φ  n_φ_s
 2       6    4     10    6     15    9     21   12     28   16     36   20     45   25
 3      10    4     20    7     35   11     56   16     84   23    120   31    165   41
 4      15    4     35    7     70   12    126   18    210   27    330   38    495   53

4.3. Symmetric Gaussian kernels

The symmetric kernel cannot be expressed as a Slater determinant anymore, but we can utilize a related concept. The permanent of a matrix $ A \in \mathbb{R}^{d \times d} $ is defined by

$ \operatorname{perm}(A) = \sum_{\pi \in S_d} \prod_{i = 1}^d a_{i, \pi(i)}. $
While for d = 2 the permanent can be written as a determinant (by flipping the sign of a12 or a21), this is not possible anymore for $ d \geqslant 3 $ [35]. No polynomial-time algorithm for the computation of the permanent is known, but there are efficient approximation schemes for matrices with non-negative entries [36].

Lemma 4.7. Let k be the Gaussian kernel with bandwidth σ, then

$ k_\mathrm{s}(x, x^{\prime}) = \frac{1}{d!} \operatorname{perm}(K), \quad \text{where } K_{ij} = \exp\left(-\frac{(x_i - x_j^{\prime})^2}{2\sigma^2}\right). $

Proof. The proof is analogous to the one for lemma 3.12. Using the definition of the permanent, we obtain

$ \operatorname{perm}(K) = \sum_{\pi \in S_d} \prod_{i = 1}^d \exp\left(-\frac{(x_{\pi(i)} - x_i^{\prime})^2}{2\sigma^2}\right) = \sum_{\pi \in S_d} k(\pi(x), x^{\prime}). $

The result then follows from lemma 4.4.
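For small d, the permanent—and hence the symmetric Gaussian kernel—can be evaluated by brute force. The following sketch (our own, assuming the representation of lemma 4.7) cross-checks the permanent-based evaluation against the explicit sum over permutations:

```python
import itertools
import math
import numpy as np

def permanent(A):
    # brute-force permanent, feasible only for small d
    d = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(d)])
               for p in itertools.permutations(range(d)))

def symmetric_gaussian_kernel(x, y, sigma):
    # k_s(x, y) = perm(K) / d!  with  K_ij = exp(-(x_i - y_j)^2 / (2 sigma^2)), cf lemma 4.7
    K = np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))
    return permanent(K) / math.factorial(len(x))

rng = np.random.default_rng(2)
x, y = rng.standard_normal(4), rng.standard_normal(4)

# reference: (1/d!) sum_pi k(pi(x), y)
ref = sum(np.exp(-np.sum((x[list(p)] - y)**2) / (2 * 0.5**2))
          for p in itertools.permutations(range(4))) / math.factorial(4)
print(np.isclose(symmetric_gaussian_kernel(x, y, 0.5), ref))  # True
```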

Example 4.8. Assume we have a set of undirected graphs that we would like to classify or categorize. The results should not depend on the vertex labels and thus be identical for isomorphic graphs. Let $ A, A^{\prime} \in \mathbb{R}^{d \times d} $ be the adjacency matrices of the graphs G and $ G^{\prime} $, respectively. We define a Gaussian kernel for graphs by

$ k(G, G^{\prime}) = \exp\left(-\frac{\lVert A - A^{\prime} \rVert_F^2}{2\sigma^2}\right), $

where $ ||{\,\cdot\,}||_F $ denotes the Frobenius norm, and make it symmetric as described above. The only difference here is that we have to define $ \pi(A) = \big( a_{\pi(i),\pi(j)} \big)_{i, j = 1}^d $ to permute rows and columns simultaneously. The kernel function $k_\mathrm{s} (G, G^{\prime})$ can then be expressed in terms of so-called hyperpermanents. We have

$ k_\mathrm{s}(G, G^{\prime}) = \frac{1}{d!} \sum_{\pi \in S_d} \prod_{i = 1}^d \prod_{j = 1}^d \exp\left(-\frac{(a_{i,j} - a_{\pi(i), \pi(j)}^{\prime})^2}{2\sigma^2}\right) = \frac{1}{d!} \operatorname{hperm}(T), \quad \text{where } t_{i,j,k,l} = \exp\left(-\frac{(a_{i,j} - a_{k,l}^{\prime})^2}{2\sigma^2}\right). $

The derivation of a formula for the Laplace expansion of hyperpermanents can be found in appendix A.1. For the considered example, we set σ = 1 and randomly generate a set of 100 undirected connected graphs of size d = 5. We then apply kernel PCA, see [17], using the symmetric kernel $ k_\mathrm{s} $. Sorting the graphs according to the first principal component, we obtain the ordering shown in figure 4 (only a subset of the graphs is displayed). Isomorphic graphs are grouped into the same category. $\triangle$
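The symmetrized graph kernel can be evaluated directly for small d by summing over all vertex permutations. The sketch below (our own illustration, using the Frobenius-norm Gaussian kernel from example 4.8) confirms that isomorphic graphs obtain identical kernel values:

```python
import itertools
import math
import numpy as np

def graph_kernel(A, B, sigma=1.0):
    # Gaussian kernel on adjacency matrices, k(G, G') = exp(-||A - A'||_F^2 / (2 sigma^2))
    return np.exp(-np.sum((A - B)**2) / (2 * sigma**2))

def symmetric_graph_kernel(A, B, sigma=1.0):
    # k_s(G, G') = (1/d!) sum_pi k(A, pi(B)), permuting rows and columns simultaneously
    d = A.shape[0]
    total = sum(graph_kernel(A, B[np.ix_(p, p)], sigma)
                for p in itertools.permutations(range(d)))
    return total / math.factorial(d)

# a path graph on 4 vertices and a relabeled (isomorphic) copy
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
perm = [2, 0, 3, 1]
B = A[np.ix_(perm, perm)]

print(np.isclose(symmetric_graph_kernel(A, A), symmetric_graph_kernel(A, B)))  # True
```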


Figure 4. Application of kernel PCA to a set of undirected graphs. The x-direction corresponds to the first principal component. The results show that isomorphic graphs are assigned the same value.


Other learning algorithms such as kernel k-means, kernel ridge regression, or support vector machines can be used in the same way, enabling us to cluster, make predictions for, or classify data where the order of elements is irrelevant.

4.4. Product or quotient representations of symmetric kernels

The aim now is to express a symmetric kernel not as a Slater permanent but as a product or quotient of antisymmetric functions. As shown in section 3.1, an antisymmetric kernel is zero for all x for which a (non-trivial) permutation π exists such that π(x) = x. Therefore, products of antisymmetric kernels are zero for such x as well, see also figure 5. We thus mainly restrict ourselves to quotients. Let $ k_\mathrm{a}^{(1)} $ and $ k_\mathrm{a}^{(2)} $ be two permutation-invariant antisymmetric kernels and $ k_\mathrm{a}^{(2)}(x, x^{\prime}) \ne 0 $. We define

$ k_\mathrm{s}(x, x^{\prime}) = \frac{k_\mathrm{a}^{(1)}(x, x^{\prime})}{k_\mathrm{a}^{(2)}(x, x^{\prime})}. $

Then

$ k_\mathrm{s}(\pi(x), x^{\prime}) = \frac{k_\mathrm{a}^{(1)}(\pi(x), x^{\prime})}{k_\mathrm{a}^{(2)}(\pi(x), x^{\prime})} = \frac{\mathrm{sgn}(\pi)\, k_\mathrm{a}^{(1)}(x, x^{\prime})}{\mathrm{sgn}(\pi)\, k_\mathrm{a}^{(2)}(x, x^{\prime})} = k_\mathrm{s}(x, x^{\prime}). $
Remark 4.9. If the numerator and denominator can be written as determinants, i.e. $ k_\mathrm{a}^{(1)}(x, x^{\prime}) = \det(K_1) $ and $ k_\mathrm{a}^{(2)}(x, x^{\prime}) = \det(K_2) $, we obtain

$ k_\mathrm{s}(x, x^{\prime}) = \frac{\det(K_1)}{\det(K_2)} = \det\big(K_1 K_2^{-1}\big). $
Example 4.10. Suppose d = 2. Let k(1) and k(2) be two Gaussian kernels with bandwidths σ1 and σ2, respectively, where $ \sigma_1 \lt \sigma_2 $. If the bandwidths are sufficiently small, either $ k^{(1)}(x, x^{\prime}) $ and $ k^{(2)}(x, x^{\prime}) $ or $ k^{(1)}(\pi(x), x^{\prime}) $ and $ k^{(2)}(\pi(x), x^{\prime}) $ will be close to zero (unless π(x) = x), where π = (1, 2) in cycle notation. Assume w.l.o.g. the latter holds, then

$ k_\mathrm{s}(x, x^{\prime}) = \frac{k_\mathrm{a}^{(1)}(x, x^{\prime})}{k_\mathrm{a}^{(2)}(x, x^{\prime})} \approx \frac{k^{(1)}(x, x^{\prime})}{k^{(2)}(x, x^{\prime})} = \exp\left(-\frac{\lVert x - x^{\prime} \rVert_2^2}{2} \Big(\frac{1}{\sigma_1^2} - \frac{1}{\sigma_2^2}\Big)\right), $

which is a Gaussian with bandwidth σ satisfying $ \frac{1}{\sigma^2} = \frac{1}{\sigma_1^2} - \frac{1}{\sigma_2^2} $. This is illustrated in figure 5. Furthermore, the limit of $k_\mathrm{s}(x,x^{\prime})$ as $x_2 \rightarrow x_1$ exists and is given by

$ \lim_{x_2 \to x_1} k_\mathrm{s}(x, x^{\prime}) = \frac{\sigma_2^2}{\sigma_1^2} \exp\left(-\frac{\lVert x - x^{\prime} \rVert_2^2}{2\sigma^2}\right) $

with $ x = [x_1, x_1]^\top $, see appendix B. $\triangle$


Figure 5. (a) Symmetric Gaussian kernel. (b) Quotient of antisymmetric Gaussian kernels. (c) Product of antisymmetric Gaussian kernels. The bandwidths of the antisymmetric kernels were chosen in such a way that the resulting functions approximate the symmetric Gaussian kernel. In the top row $ x^{\prime} = [0.4, -0.3]^\top $ and in the bottom row $ x^{\prime} = [0.4, 0.35]^\top $. Product kernels cannot approximate the symmetric Gaussian kernel if $ x^{\prime} $ is close to the separating boundary given by $ x_1 = x_2 $.


The symmetric Gaussian kernel can be approximated by a quotient of antisymmetric Gaussian kernels, which can be evaluated in $ \mathcal{O}(d^3) $, thus avoiding the non-polynomial complexity of the permanent. The question whether such kernels are universal is beyond the scope of this work.
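A small numerical check of example 4.10 (our own sketch; the bandwidth values are arbitrary choices) compares the quotient of antisymmetric Gaussian kernels with the Gaussian of effective bandwidth σ away from the boundary $ x_1 = x_2 $:

```python
import numpy as np

def ka_gauss2(x, y, sigma):
    # antisymmetric Gaussian kernel for d = 2 (lemma 3.7 with two permutations)
    k = lambda a, b: np.exp(-np.sum((a - b)**2) / (2 * sigma**2))
    return 0.5 * (k(x, y) - k(x[::-1], y))

sigma1, sigma2 = 0.16, 0.2
sigma = (1 / sigma1**2 - 1 / sigma2**2)**-0.5   # effective bandwidth, cf example 4.10

x = np.array([0.5, -0.2])
xp = np.array([0.4, -0.3])                      # away from the boundary x_1 = x_2

quotient = ka_gauss2(x, xp, sigma1) / ka_gauss2(x, xp, sigma2)
gaussian = np.exp(-np.sum((x - xp)**2) / (2 * sigma**2))
print(quotient, gaussian)                       # approximately equal
```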

5. Applications

In addition to the guiding examples presented above, we will illustrate the efficacy of the derived kernels with the aid of quantum physics and chemistry problems.

5.1. Particles in a one-dimensional box

Let us first consider a simple one-dimensional two-particle system. We define a potential V by

$ V(x_1, x_2) = \begin{cases} 0, & 0 \leqslant x_1, x_2 \leqslant L, \\ \infty, & \text{otherwise}. \end{cases} $

Furthermore, we assume that the two particles do not interact and obtain the Schrödinger equation

$ -\frac{\hbar^2}{2\mathbf{m}} \left( \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} \right) \psi(x_1, x_2) = E\, \psi(x_1, x_2) $ (3)

for $ 0 \leqslant x_1, x_2 \leqslant L $. By separating the two variables, we obtain the classical particle in a box problem, with eigenvalues $ E_\ell = \frac{\hbar^2 \pi^2 \ell^2}{2\mathbf{m} L^2} $ and eigenfunctions $ \psi_\ell(x) = \sqrt{\frac{2}{L}} \sin\left( \frac{\pi \ell x}{L} \right) $, for $ \ell = 1, 2, 3, \dots $, see, for instance, [37]. For the two-particle system, the eigenvalues are hence of the form

$ E_{\ell_1, \ell_2} = \frac{\hbar^2 \pi^2 (\ell_1^2 + \ell_2^2)}{2\mathbf{m} L^2} $

and the eigenfunctions are

$ \psi_{\ell_1, \ell_2}(x_1, x_2) = \psi_{\ell_1}(x_1)\, \psi_{\ell_2}(x_2). $
However, since the two particles are physically indistinguishable, the wave functions must satisfy $ \lvert \psi_{\ell_1, \ell_2}(x_1, x_2) \rvert^2 = \lvert \psi_{\ell_1, \ell_2}(x_2, x_1) \rvert^2 $, which implies that the functions are either symmetric (if the particles are bosons) or antisymmetric (if the particles are fermions). Let us assume that the two particles are electrons, i.e. fermions. We thus want to compute antisymmetric solutions of the time-independent Schrödinger equation by applying the approach introduced in section 2.2, see also [25]. In the same way, we could assume that the particles are bosons and compute symmetric solutions by replacing the antisymmetric kernel by a symmetric kernel.

We define $ \hbar = 1 $, $ \mathbf{m} = 1 $, and $ L = \boldsymbol{\pi} $, choose the antisymmetric Gaussian kernel with bandwidth σ = 0.1, and generate m = 900 uniformly sampled points in [0, L]×[0, L]. Additionally, to ensure that the eigenfunctions are zero outside the box, we place 124 equidistantly distributed test points on the boundary and enforce $ \psi_{\ell_1, \ell_2}(x_1, x_2) = 0 $ for these boundary points. We thus have to solve a constrained eigenvalue problem and use the algorithm described in [38]. The first three eigenfunctions ψ1,2, ψ1,3, and ψ2,3 are shown in figure 6 and are good approximations of the analytically computed eigenfunctions. The probability that the two electrons are in the same location is always zero. Furthermore, the results show that by increasing the number of data points we obtain more accurate and less noisy estimates of the true eigenvalues.

Remark 5.1.  We would like to point out that

  • This example is just meant as an illustration of the concepts and not as a realistic physical model;
  • Eigenfunctions with $ \ell_1 = \ell_2 $ are symmetric and eliminated by the antisymmetrization operation;
  • Approximations of the eigenfunctions can be obtained using far fewer points ($ m \lt 50 $), but the eigenvalues will be considerably overestimated (kernels tailored to quantum mechanics applications might lead to better approximations);
  • The antisymmetry assumption is encoded only in the kernel, not in the Schrödinger equation itself.


Figure 6. Numerically computed antisymmetric eigenfunctions (a) ψ1,2, (b) ψ1,3, and (c) ψ2,3 with corresponding eigenvalues λ1,2 ≈ 2.75, λ1,3 ≈ 5.40, and λ2,3 ≈ 7.07 for m = 900. The eigenvalues are slightly larger than the analytically computed values λ1,2 = 2.5, λ1,3 = 5, and λ2,3 = 6.5. (d) Eigenvalues as a function of the number of data points. The solid lines represent the numerically computed eigenvalues, the shaded areas the standard deviation, and the dashed lines the analytically computed eigenvalues.


This can be easily extended to the multi-particle case. We now add electron–electron interaction terms resulting in the Hamiltonian

For d = 3, we randomly generate 3000 interior points and 600 boundary points to enforce Dirichlet boundary conditions. We choose a Gaussian kernel with bandwidth σ = 0.1, assemble the Gram matrices, and again solve the resulting constrained eigenvalue problem. The results are shown in figure 7. For the sake of comparison, we also plot the corresponding eigenfunctions of the Schrödinger equation without the electron–electron interaction. It can be seen that the eigenfunctions for the separable case are similar to the eigenfunctions where the interaction terms are included. For this particular system, the interaction terms do not seem to have a drastic effect on the system's low-lying energy states. In general, however, their effect on the electronic wavefunction can be significant. We also remark that energies and wavefunctions of the interacting system could in principle be approximated by perturbation techniques. However, due to the degeneracy of the antisymmetric states, such a perturbation analysis seems beyond the scope of this work.


Figure 7. Antisymmetric eigenfunctions (a) ψ1,2,3, (b) ψ1,2,4, and (c) ψ1,3,4. The top row shows the analytically computed eigenfunctions omitting electron–electron interaction, the bottom row the numerically computed eigenfunctions including repulsive forces.


5.2. Acyclic molecules

As a second example, we consider a data set of acyclic molecules [39]. The aim is to determine the boiling points of these molecules, which contain the elements C, H, O, and S. The data set (see footnote 10) consists of 183 graphs G = (V, E) representing the molecular structures and the corresponding boiling points in degrees Celsius, see figure 8 for a few examples of molecules included in the data set. The number of vertices $\left\lvert V \right\rvert$ varies between 3 and 11, where the hydrogen atoms of the molecules are neglected. Thus, in order to compare graphs of different sizes, we expand all adjacency matrices to $\mathbb{R}^{d \times d}$ with d = 11 by appending rows and columns of zeros, representing artificial isolated nodes.


Figure 8. Skeletal formulas of a selection of samples taken from the data set. The set contains oxygen and sulfur compounds of different complexities. The associated boiling points range from −23.7 °C to 250 °C.


We define a symmetrized Laplacian kernel on graphs, cf example 4.8. Given the adjacency matrices $A, A^{\prime} \in \mathbb{R}^{d \times d}$ of the graphs G = (V, E) and $ G^{\prime} = (V^{\prime}, E^{\prime})$ as well as the kernel parameter $\sigma\gt0$, we define the tensor $T \in \mathbb{R}^{d \times d \times d \times d}$ by

$ t_{i,j,k,l} = \exp\left(-\frac{\lvert a_{i,j} - a_{k,l}^{\prime} \rvert}{\sigma}\right) $

for i ≠ j and k ≠ l and

$ t_{i,i,k,k} = \begin{cases} \exp\left(-\frac{1}{\sigma}\right), & a_{i,i} \neq a_{k,k}^{\prime}, \\ 1, & \text{otherwise}. \end{cases} $

The latter definition ensures that we avoid unwanted effects of any ordinal labeling of the nodes. Using the hyperpermanent of T, the kernel evaluation $k_\mathrm{s}(G,G^{\prime})$ can be written as

$ k_\mathrm{s}(G, G^{\prime}) = \frac{1}{d!} \sum_{\pi \in S_d} \prod_{i = 1}^d \prod_{j = 1}^d t_{i, j, \pi(i), \pi(j)} = \frac{1}{d!} \operatorname{hperm}(T). $ (4)

Note that we do not consider entries ti,j,k,l with either i = j, k ≠ l or i ≠ j, k = l since $i = j \Leftrightarrow \pi(i) = \pi(j)$. We refer to appendix A for different methods and simplifications for computing the hyperpermanent of T.

For kernel-based (ridge) regression (see, e.g. [40]), we extract 165 adjacency matrices (≈90%) and their corresponding boiling point temperatures from the data set as training samples; the other data pairs constitute the test set. That is, for any G in the test set, the regression function is given by $f(G) = \Theta^\top K_{\textrm{train},G}$, where the vector $K_{\textrm{train}, G} \in \mathbb{R}^{165}$ contains the kernel evaluations between the training samples and the test sample G. The vector $\Theta \in \mathbb{R}^{165}$ is the solution of $K_{\textrm{train},\textrm{train}} \Theta = b$ with b being the vector of boiling points of the molecules in the training set. We then compute the average error as well as the root-mean-square error in the boiling points of the test set in order to evaluate the generalizability of the learned regression function. We repeat each experiment 10 000 times with randomly chosen training and test sets; the results for different kernel parameters σ are shown in figure 9(a).
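The train/test split and the prediction step can be summarized as follows (our own sketch, assuming a precomputed 183 × 183 Gram matrix K of the symmetrized Laplacian graph kernel and a vector b of boiling points; as in the text, a plain linear solve without additional regularization is used):

```python
import numpy as np

def predict_boiling_points(K, b, rng, n_train=165):
    # K: (183, 183) Gram matrix of the symmetrized Laplacian graph kernel,
    # b: boiling points; random 90/10 split, linear solve, prediction on the test set
    n = len(b)
    idx = rng.permutation(n)
    train, test = idx[:n_train], idx[n_train:]
    theta = np.linalg.solve(K[np.ix_(train, train)], b[train])
    pred = K[np.ix_(test, train)] @ theta
    err = np.abs(pred - b[test])
    return err.mean(), np.sqrt(np.mean(err**2))   # average error, RMSE

# usage (with hypothetical precomputed K and b):
# rng = np.random.default_rng(0)
# errors = [predict_boiling_points(K, b, rng) for _ in range(10000)]
```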


Figure 9. (a) Average errors and root-mean-square errors for the test set for different values of σ. The solid lines depict the median and the semi-transparent areas comprise the 30th to the 70th percentile of the respective errors. (b) Entries of the Gram matrix corresponding to the whole data set for σ = 3 with a logarithmically scaled color map. The samples are sorted by number of atoms, type of compound (oxygen/sulfur), and number of contained heteroatoms.


The best results in terms of the average and the root-mean-square error are obtained for kernel parameters σ between 2.5 and 2.8, see table 3 for details. According to the data set's website, the best results so far for the boiling point prediction on 90% of the set as training data and 10% as test data are achieved by applying so-called treelet kernels which exploit all possible graph/tree patterns up to a given size [39]. In this case, the average error is listed as 4.87 and the root-mean-square error as 6.75. Both values are comparable with our results.

Table 3. Mean and median of the average error and root-mean-square error over all repetitions for values of σ between 2.5 and 2.8.

 σ      Average error        Root-mean-square error
        mean     median      mean     median
 2.5    4.90     4.76        6.85     6.57
 2.6    4.91     4.77        6.87     6.53
 2.7    4.92     4.77        6.88     6.47
 2.8    4.94     4.80        6.91     6.44

As shown in figure 9(b), the entries of the Gram matrix tend to decrease for larger molecules. This effect can be explained by the expansion of the adjacency matrices and similarities of the molecules with small numbers of atoms. For instance, the first two compounds in the (ordered) data set are dimethyl ether (C2H6O) and dimethyl sulfide (C2H6S). Due to the expansion from $\left\lvert V \right\rvert = 3$ to $\left\lvert V \right\rvert = 11$, the majority of the permutations in (4) do not affect the adjacency matrix of $G^{\prime}$, cf appendix A.3. The block structure of the matrix arises from the ordering of the data set, i.e. each group of molecules with the same number of atoms is divided into subgroups of compounds containing one oxygen atom, two oxygen atoms, one sulfur atom, and two sulfur atoms.

6. Conclusion

We derived symmetric and antisymmetric kernels that can be used in kernel-based learning algorithms such as kernel PCA, kernel CCA, or support vector machines, but also to approximate symmetric or antisymmetric eigenfunctions of transfer operators or differential operators (e.g. the Koopman generator or Schrödinger operator). Potential applications range from point cloud analysis and graph classification to quantum physics and chemistry. Furthermore, we analyzed the induced reproducing kernel Hilbert spaces and resulting feature space dimensions. The effectiveness of the proposed kernels was demonstrated using guiding examples and simple benchmark problems.

The next step is now to apply kernel-based methods to more complex quantum systems. Such problems might require kernels tailored to the system at hand. By exploiting additional properties (sparsity, low-rank structure, weak coupling between subsystems), it could be possible to improve the performance of kernel-based methods. Furthermore, the kernel flow approach proposed in [41] could be extended to operator estimation problems. This would allow us to also learn the kernel from data.

Another topic for future research would be to consider other types of symmetries and to develop kernels that explicitly take these properties into account. While the antisymmetric kernel can be evaluated efficiently using matrix factorizations, this is not possible for the symmetric kernel, which requires the evaluation of a matrix permanent. Utilizing efficient approximation schemes could speed up the generation of the required Gram matrices significantly. Alternatively, the product or quotient formulation of symmetric kernels could be exploited to facilitate the application of the proposed methods to higher-dimensional problems.

Acknowledgments

We would like to thank Jan Hermann for helpful discussions about quantum chemistry and the reviewers for their helpful comments and suggestions.

Data availability statement

The data and code that support the findings of this study are openly available at https://github.com/sklus/d3s/.

Funding

P Gelß and F Noé have been partially funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 'Scaling Cascades in Complex Systems' (Project ID: 235221301, projects A04 and B06). F Noé also acknowledges funding from BMBF through the Berlin Institute for the Foundations of Learning and Data (BIFOLD), the European Commission (ERC CoG 772230), and the Berlin Mathematics center MATH+ (AA2-8).

Appendix A.: Hyperpermanents

Given a tensor $T \in \mathbb{R}^{d^{\times 4}} = \mathbb{R}^{d \times d \times d \times d}$, the hyperpermanent of T is given by

$ \operatorname{hperm}(T) = \sum_{\pi \in S_d} \prod_{i = 1}^d \prod_{j = 1}^d t_{i, j, \pi(i), \pi(j)}. $

In example 4.8, we considered tensor entries of the form

$ t_{i,j,k,l} = \exp\left(-\frac{(a_{i,j} - a_{k,l}^{\prime})^2}{2\sigma^2}\right) $ (5)

for adjacency matrices $A, A^{\prime} \in \mathbb{R}^{d \times d}$ in order to construct the symmetrized Gaussian kernel for graphs. Another example for defining the entries of T, as described in section 5.2, is

$ t_{i,j,k,l} = \exp\left(-\frac{\lvert a_{i,j} - a_{k,l}^{\prime} \rvert}{\sigma}\right), $ (6)

which results in the symmetrized Laplacian kernel for graphs. For both choices, we set $t_{i,i,k,k} = \exp\left(-\frac{1}{2\sigma^2}\right)$ and $t_{i,i,k,k} = \exp\left(-\frac{1}{\sigma}\right)$, respectively, if $a_{i,i} \neq a^\prime_{k,k}$ and $t_{i,i,k,k} = \exp(0) = 1$ otherwise. In what follows, we will consider different techniques for computing the hyperpermanent of T.

A.1. Laplace expansion for the computation of hyperpermanents

Define $T^{(\mu)} \in \mathbb{R}^{d^{\times 4}}$ by

Let $\widehat{T}^{(\mu)} \in \mathbb{R}^{(d-1)^{\times 4}}$ denote the tensor that results from $T^{(\mu)}$ by removing all entries $t^{(\mu)}_{i,j,k,l}$ with i = 1, j = 1, k = µ, or l = µ. The hyperpermanent can then be written as

where δij denotes the Kronecker delta. Note that i = j implies π(i) = π(j).

A.2. Hyperpermanents of pairwise symmetric tensors

Suppose $t_{i,j,k,l} = t_{j,i,k,l}$ and $t_{i,j,k,l} = t_{i,j,l,k}$ for all $i,j,k,l \in \{1, \dots, d\} $. For instance, this is the case for tensors T containing elementwise evaluations of Gaussian and Laplacian kernels as given in (5) and (6), respectively. The hyperpermanent of T can then be written as

Equation (7)

The advantage of the above formula is that it reduces the computational cost of evaluating the hyperpermanent. Additionally, we do not have to compute all elements of T, which lowers the cost further.

A.3. Hyperpermanents for graphs with isolated nodes

In order to compare graphs of different sizes, we include artificial isolated nodes in section 5.2. That is, the adjacency matrix of a given graph is expanded by adding zero entries. In this case, all permutations among the isolated nodes do not change the result of the product $\prod_{i = 1}^d \prod_{j = 1}^d t_{i, j, \pi(i), \pi(j)}$. Assume that the dimension of the adjacency matrix A used in (5) and (6) is initially $d^{\prime} \lt d$ before the expansion to $\mathbb{R}^{d \times d}$. Then, it holds that

if $i\gt d^{\prime}$ or $j\gt d^{\prime}$. Thus, given two permutations π1 and π2 with

it follows that

This means that for each permutation π ∈ Sd , any of the $(d-d^{\prime})!$ permutations of the set ${\{\pi(d^{\prime}+1), \dots, \pi(d)\}}$ does not change the value of the quotient in (7). This fact can be exploited using the formula

which enables us to decrease the number of considered permutations significantly if $d^{\prime}$ is much smaller than d.

Appendix B.: Quotient representation of symmetric Gaussian kernels

Under the same assumptions as given in example 4.10, we use lemma 3.12 in order to write $k_\mathrm{s}(x,x^{\prime})$ as

$ k_\mathrm{s}(x, x^{\prime}) = \frac{\exp\left(-\frac{(x_1 - x_1^{\prime})^2 + (x_2 - x_2^{\prime})^2}{2\sigma_1^2}\right) - \exp\left(-\frac{(x_1 - x_2^{\prime})^2 + (x_2 - x_1^{\prime})^2}{2\sigma_1^2}\right)}{\exp\left(-\frac{(x_1 - x_1^{\prime})^2 + (x_2 - x_2^{\prime})^2}{2\sigma_2^2}\right) - \exp\left(-\frac{(x_1 - x_2^{\prime})^2 + (x_2 - x_1^{\prime})^2}{2\sigma_2^2}\right)}, $

where $x_1 \neq x_2$ and $x_1^{\prime} \neq x_2^{\prime}$. From l'Hôpital's rule it follows that

$ \lim_{x_2 \to x_1} k_\mathrm{s}(x, x^{\prime}) = \frac{\sigma_2^2}{\sigma_1^2} \exp\left(-\left(\frac{1}{2\sigma_1^2} - \frac{1}{2\sigma_2^2}\right) \big((x_1 - x_1^{\prime})^2 + (x_1 - x_2^{\prime})^2\big)\right). $
Footnotes

6. Assume w.l.o.g. that $x_i = x_j$ for some indices i ≠ j. Let $\widehat{\pi}$ be the permutation which only swaps the positions i and j, then it holds that $k_\mathrm{a}(x, x^{\prime}) = k_\mathrm{a}(\widehat{\pi}(x), x^{\prime}) = \mathrm{sgn}(\widehat{\pi}) k_\mathrm{a}(x, x^{\prime}) = -k_\mathrm{a}(x, x^{\prime})$.

7. A partition of a positive integer n is a decomposition into positive integers so that the sum is n. The order of the summands does not matter, i.e. $ 6 = 3 + 2 + 1 $ and $ 6 = 1 + 2 + 3 $ are the same partition. We sort partitions in non-increasing order, e.g. µ = (3, 2, 1) is a partition of 6 into three parts.

8. We use a bold $\boldsymbol{\pi}$ for the mathematical constant to avoid confusion with permutations π.

9. This could be mitigated by decreasing the bandwidth or by regularization techniques.

10. The data set can be found at https://brunl01.users.greyc.fr/CHEMISTRY/.
