
Trigonometric ∨-systems and solutions of WDVV equations


Published 24 December 2020 © 2020 The Author(s). Published by IOP Publishing Ltd
Special issue: Integrable Physics and its Connections with Special Functions and Combinatorics. Citation: Maali Alkadhem and Misha Feigin 2021 J. Phys. A: Math. Theor. 54 024002. DOI 10.1088/1751-8121/abccf8


Abstract

We consider a class of trigonometric solutions of Witten–Dijkgraaf–Verlinde–Verlinde equations determined by collections of vectors with multiplicities. We show that such solutions can be restricted to special subspaces to produce new solutions of the same type. We find new solutions given by restrictions of root systems, as well as examples which are not of this form. Further, we consider a closely related notion of a trigonometric ∨-system, and we show that its subsystems are also trigonometric ∨-systems. Finally, while reviewing the root system case we determine a version of (generalised) Coxeter number for the exterior square of the reflection representation of a Weyl group.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The Witten–Dijkgraaf–Verlinde–Verlinde (WDVV) equations are a remarkable system of nonlinear third order partial differential equations for a single function $\mathcal{F}$. They were originally discovered in two-dimensional topological field theories, and they lie at the core of the theory of Frobenius manifolds [11], in which case the prepotential $\mathcal{F}$ is a function on a Frobenius manifold $\mathcal{M}$. A flat metric can be defined on $\mathcal{M}$ in terms of the third order derivatives of $\mathcal{F}$, which allows one to reformulate the WDVV equations as the associativity condition for a multiplication in a family of Frobenius algebras defined on the tangent spaces ${T}_{{\ast}}\mathcal{M}$. The structure constants of the multiplication are also given in terms of the third order derivatives of $\mathcal{F}$.

There is a remarkable class of polynomial solutions of WDVV equations which corresponds to (finite) Coxeter groups $\mathcal{W}$ [11]. In this case the space $\mathcal{M}$ is the space of $\mathcal{W}$-orbits in the reflection representation of $\mathcal{W}$. The prepotential $\mathcal{F}$ is then a polynomial in the flat coordinates of the metric known as the Saito metric.

For any Frobenius manifold there is an almost dual Frobenius manifold, introduced by Dubrovin in [13]. Prepotentials for the almost dual structures of the polynomial Frobenius manifolds can be expressed in the simple form

Equation (1.1)

where $\mathcal{A}=\mathcal{R}$ is the root system of the group $\mathcal{W}$. In this case the constant metric is the $\mathcal{W}$-invariant form on the vector space V of the reflection representation of the group $\mathcal{W}$.

Such solutions $\mathcal{F}$ of WDVV equations appear in four-dimensional Seiberg–Witten theory as perturbative parts of Seiberg–Witten prepotentials. Thus Marshakov, Mironov and Morozov found them for classical root systems in [21, 22]. Solutions (1.1) for non-classical root systems were found by Gragert and Martini in [23].

Veselov found solutions $\mathcal{F}$ of the form (1.1) for some not fully symmetric configurations of covectors $\mathcal{A}\subset {V}^{{\ast}}$, and he introduced the notion of a ∨-system [31], formulated in terms of linear algebra. A configuration of vectors $\mathcal{A}$ is a ∨-system exactly when the corresponding prepotential (1.1) satisfies the WDVV equations [17, 31]. This property can also be reformulated in terms of the flatness of a connection on the tangent bundle T V [32]. A closely related notion of the Dunkl system was introduced and studied in [10]. That structure was investigated further for complex reflection groups in [3] in relation to the theory of Frobenius manifolds.

The class of ∨-systems is closed under the natural operations of taking subsystems [17] and of restricting a system to the intersection of some of the hyperplanes α(x) = 0, where $\alpha \in \mathcal{A}$ [16]. The class of ∨-systems contains multi-parameter deformations of the root systems An and Bn ([9]; see also [17] for more examples). The underlying matroids were examined in [27]. The problem of classification of ∨-systems remains open.

In this work we are interested in the trigonometric solutions of WDVV equations which have the form

Equation (1.2): ${\mathcal{F}}^{\text{trig}}={\sum }_{\alpha \in \mathcal{A}}{c}_{\alpha }f\left(\alpha \left(x\right)\right)+Q\left(x,y\right),$

where ${f}^{\prime \prime \prime }\left(z\right)=\mathrm{cot}\enspace z$, ${c}_{\alpha }\in \mathbb{C}$, and Q = Q(x, y) is a cubic polynomial which depends on the additional variable $y\in \mathbb{C}$. Such solutions appear in five-dimensional Seiberg–Witten theory as perturbative parts of prepotentials [22]. Solutions of the form (1.2) for (non-reduced) root systems $\mathcal{A}=\mathcal{R}$ of Weyl groups and $\mathcal{W}$-invariant multiplicities cα were found by Hoevenaars and Martini in [20, 24]. They appear as prepotentials for the almost dual Frobenius manifold structures on the orbit spaces of extended affine Weyl groups [12, 14]; see [26] for type An. In some cases such solutions may be related to the rational solutions (1.1) by twisted Legendre transformations [26].

Bryan and Gholampour found another remarkable appearance of the trigonometric solutions (1.2) in geometry in their study of the quantum cohomology of resolutions of A, D, E singularities [8]. The associative quantum product on these cohomologies is governed by the corresponding solutions ${\mathcal{F}}^{\text{trig}}$ with $\mathcal{A}={A}_{n},{D}_{n},{E}_{n}$ respectively.

Solutions of WDVV equations of the form (1.2) without full Weyl symmetry were considered by one of the authors in [18], where the notion of a trigonometric ∨-system was introduced and its close relation with WDVV equations was established. A key difference from the rational case is the existence of a rigid structure of a series decomposition of vectors from $\mathcal{A}$, which generalizes the notion of strings for root systems.

Many-parameter deformations of solutions Ftrig for the classical root systems were obtained by Pavlov from reductions of Egorov hydrodynamic chains [25]. A closely related many-parameter family of flat connections in type An was considered by Shen in [28, 29].

The study of the trigonometric and rational cases is related: if a configuration $\mathcal{A}$ with a collection of multiplicities ${c}_{\alpha },\alpha \in \mathcal{A}$, is a trigonometric ∨-system, then the configuration of vectors $\sqrt{{c}_{\alpha }}\alpha $ is a rational one [18]. However, due to the presence of the extra variable y, the trigonometric case is already nontrivial for dim V = 2, while the smallest nontrivial dimension of V in the rational case is 3.

There is also an important class of elliptic solutions of WDVV equations, which was considered by Strachan in [30], where, in particular, certain solutions related to the An and Bn root systems were found. These prepotentials appear as almost dual prepotentials associated with Frobenius manifold structures on the orbit spaces of the An and Bn Jacobi groups [4, 5]. Such solutions also appear in six-dimensional Seiberg–Witten theory [7].

In this paper we study trigonometric solutions ${\mathcal{F}}^{\text{trig}}$ of the form (1.2) of WDVV equations. In section 2 we recall the notion of a trigonometric ∨-system and revisit its close relation with solutions of WDVV equations.

We investigate operations of taking subsystems and restrictions in sections 3 and 4. We show that a subsystem of a trigonometric ∨-system is also a trigonometric ∨-system, and that one can restrict solutions of WDVV equations of the form (1.2) to the intersections of hyperplanes to get new solutions.

In section 5 we find solutions ${\mathcal{F}}^{\text{trig}}$ for the root system BCn which depend on three parameters. By applying restrictions we obtain in sections 5 and 6 multi-parameter families of solutions ${\mathcal{F}}^{\text{trig}}$ for the classical root systems, thus recovering and extending results from [25]. In the case of BCn we get a family of solutions depending on n + 3 parameters, which can be specialized to Pavlov's (n + 1)-parameter family from [25]. A related multi-parameter deformation of the BCn solutions (1.2) in which Q depends on the x variables only was obtained in [1] by similar methods.

In section 7 we consider solutions ${\mathcal{F}}^{\text{trig}}$ for n = dim V ⩽ 4. We show that solutions with up to five vectors on the plane belong to deformations of classical root systems. We also get new examples of solutions ${\mathcal{F}}^{\text{trig}}$ of the form (1.2) some of which cannot be obtained as restrictions of solutions (1.2) for the root systems.

In section 8 we revisit solutions ${\mathcal{F}}^{\text{trig}}$ for the root systems studied in [8, 20, 24, 28, 29]. The polynomial Q in this case depends on a scalar ${\gamma }_{\left(\mathcal{R},c\right)}$ which is determined in these papers for any invariant multiplicity function $c:\mathcal{R}\to \mathbb{C}$. We give a formula for ${\gamma }_{\left(\mathcal{R},c\right)}$ in terms of the highest root of $\mathcal{R}$ generalizing a statement from [8] for special multiplicities. We also find a related scalar ${\lambda }_{\left(\mathcal{R},c\right)}$ which is invariant under linear transformations applied to the root system $\mathcal{R}$. This scalar may be thought of as a version of generalized Coxeter number (see e.g. [19]) for the irreducible $\mathcal{W}$-module Λ2 V since it is given as a ratio of two canonical $\mathcal{W}$-invariant symmetric bilinear forms on Λ2 V.

We are dedicating this paper to our former colleague Jon Nimmo whom we miss a lot.

2. Trigonometric ∨-systems and WDVV equations

Let V be a vector space of dimension N over $\mathbb{C}$ and let V* be its dual space. Let $\mathcal{A}$ be a finite collection of covectors $\alpha \in {V}^{{\ast}}$ which belongs to a lattice of rank N.

Let us also consider a multiplicity function $c:\mathcal{A}\to \mathbb{C}$. We denote c(α) as cα. We will assume throughout that the corresponding symmetric bilinear form

${G}_{\left(\mathcal{A},c\right)}\left(u,v\right)={\sum }_{\alpha \in \mathcal{A}}{c}_{\alpha }\alpha \left(u\right)\alpha \left(v\right),\qquad u,v\in V,$

is non-degenerate. We will also write ${G}_{\mathcal{A}}$ for ${G}_{\left(\mathcal{A},c\right)}$ to simplify notation. The form ${G}_{\mathcal{A}}$ establishes an isomorphism ϕ: VV*, and we denote ${\phi }^{-1}\left(\alpha \right)$ by ${\alpha }^{\vee }$, so that ${G}_{\mathcal{A}}\left({\alpha }^{\vee },v\right)=\alpha \left(v\right)$ for any vV.

Let $U\cong \mathbb{C}$ be a one-dimensional vector space. We choose a basis in VU such that e1, ..., eN is a basis in V and eN+1 is the basis vector in U, and let x1, ..., xN+1 be the corresponding coordinates. We represent vectors xV, yU as x = (x1, ..., xN ) and y = xN+1. Consider a function $F:V\oplus U\to \mathbb{C}$ of the form

Equation (2.1)

where $\lambda \in {\mathbb{C}}^{{\ast}}$ and the function $f\left(z\right)=\frac{1}{6}i{z}^{3}+\frac{1}{4}{\mathrm{Li}}_{3}\left({e}^{-2iz}\right)$ satisfies f‴(z) = cot z. The WDVV equations are the following system of partial differential equations

Equation (2.2): ${F}_{i}{F}_{N+1}^{-1}{F}_{j}={F}_{j}{F}_{N+1}^{-1}{F}_{i},\qquad i,j=1,\dots ,N+1,$

where Fi is the (N + 1) × (N + 1) matrix with entries ${\left({F}_{i}\right)}_{pq}=\frac{{\partial }^{3}F}{\partial {x}_{i}\partial {x}_{p}\partial {x}_{q}}$, p, q = 1, ..., N + 1.
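As a quick numerical sanity check of the function f introduced above (an editor's sketch, not part of the original argument), one can verify that f‴(z) = cot z by evaluating a truncated series for Li₃ and a finite-difference third derivative at a sample point with |e^{−2iz}| < 1:

```python
import cmath

def li3(w, terms=200):
    """Trilogarithm Li_3(w) = sum_{n >= 1} w^n / n^3, valid for |w| < 1."""
    return sum(w**n / n**3 for n in range(1, terms + 1))

def f(z):
    """f(z) = (i/6) z^3 + (1/4) Li_3(e^{-2iz}), as in the text."""
    return 1j * z**3 / 6 + li3(cmath.exp(-2j * z)) / 4

def third_derivative(g, z, h=1e-3):
    # central finite-difference approximation of g'''(z), error O(h^2)
    return (g(z + 2*h) - 2*g(z + h) + 2*g(z - h) - g(z - 2*h)) / (2 * h**3)

z0 = 0.7 - 0.3j   # chosen so that |e^{-2i z0}| = e^{-0.6} < 1 and the series converges
cot_z0 = cmath.cos(z0) / cmath.sin(z0)
assert abs(third_derivative(f, z0) - cot_z0) < 1e-4   # f'''(z0) agrees with cot(z0)
```

The sample point and series truncation are choices made only for this check; any z with negative imaginary part works equally well.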

Let e1, ..., eN be the basis in V* dual to the basis e1, ..., eN of V. Then for any covector αV* we have $\alpha ={\sum }_{i=1}^{N}{\alpha }_{i}{e}^{i}$ and ${\alpha }^{\vee }={\sum }_{i=1}^{N}{\alpha }_{i}^{\vee }{e}_{i}$, where ${\alpha }_{i},{\alpha }_{i}^{\vee }\in \mathbb{C}$. Then

Equation (2.3)

where we denoted by α both the column and the row vector α = (α1, ..., αN), and $\alpha \otimes \alpha $ is the N × N matrix with entries ${\left(\alpha \otimes \alpha \right)}_{jk}={\alpha }_{j}{\alpha }_{k}$. Let us define

Equation (2.4)

where i, j = 1, ..., N + 1. Now we will establish a few lemmas which will be useful later. The next statement is standard.

Lemma 2.1. Let $\tilde {G}$ be the matrix of the bilinear form ${G}_{\mathcal{A}}$, that is its matrix entry ${\left(\tilde {G}\right)}_{ij}={G}_{\mathcal{A}}\left({e}_{i},{e}_{j}\right)$, where i, j = 1, ..., N. Then for any covector γ = (γ1, ..., γN ) ∈ V* and ${\gamma }^{\vee }=\left({\gamma }_{1}^{\vee },\dots ,{\gamma }_{N}^{\vee }\right)\in V$, we have ${\tilde {G}}^{-1}{\gamma }^{T}={\left({\gamma }^{\vee }\right)}^{T}$.
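Lemma 2.1 can be illustrated by a small numerical sketch (an editor's example, assuming as $\mathcal{A}$ the positive half of the B2 root system with all multiplicities cα = 1, for which the matrix of ${G}_{\mathcal{A}}$ turns out to be 3·Id):

```python
import numpy as np

# Covectors of the positive half of B2, all multiplicities c_alpha = 1
# (an assumption made purely for this example).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
c = np.ones(len(A))

# Matrix of the bilinear form G_A(u, v) = sum_alpha c_alpha alpha(u) alpha(v)
G = sum(ca * np.outer(a, a) for ca, a in zip(c, A))   # here G = 3 * I

gamma = np.array([1.0, 1.0])
gamma_vee = np.linalg.solve(G, gamma)     # lemma 2.1: G^{-1} gamma^T = (gamma^vee)^T

# Check the defining property G_A(gamma^vee, v) = gamma(v) on a sample vector v
v = np.array([0.4, -1.7])
assert np.isclose(gamma_vee @ G @ v, gamma @ v)
```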

Let ${M}_{\mathcal{A}}=V{\backslash}{\cup }_{\alpha \in \mathcal{A}}{{\Pi}}_{\alpha }$ be the complement to the union of all the hyperplanes Πα := {xV: α(x) = 0}. For any vector $\bar{a}=\left({a}_{1},\dots ,{a}_{N+1}\right)\in V\oplus U$ let us introduce the corresponding vector field ${\partial }_{\bar{a}}={\sum }_{i=1}^{N+1}{a}_{i}{\partial }_{{x}_{i}}\in {\Gamma}\left(T\left(V\oplus U\right)\right)$. For any $\bar{b}=\left({b}_{1},\dots ,{b}_{N+1}\right)\in V\oplus U$ we define the following multiplication on the tangent space ${T}_{\left(x,y\right)}\left({M}_{\mathcal{A}}\oplus U\right)$:

Equation (2.5)

where ηkl is defined in (2.4) and the summation over repeated indices here and below is assumed. It is clear from the definition that the multiplication * is commutative and distributive. The proof of the next statement is standard (see [11] for a similar statement).

Lemma 2.2. The associativity of multiplication * is equivalent to the WDVV equation (2.2).

Let us introduce vector field E by

For a fixed $\left(x,y\right)\in {M}_{\mathcal{A}}\oplus U$, after the identification T(x,y)(VU) ≅ VU, we have EU.

Proposition 2.3. Vector field E is the identity for the multiplication (2.5).

Proof. For all 1 ⩽ iN + 1 we have

Proposition 2.4. Let a = (a1, ..., aN ), b = (b1, ..., bN ) ∈ V, and let ${\partial }_{a}={\sum }_{i=1}^{N}{a}_{i}{\partial }_{{x}_{i}},\qquad \enspace {\partial }_{b}={\sum }_{i=1}^{N}{b}_{i}{\partial }_{{x}_{i}}$. Then the product (2.5) has the following explicit form

Equation (2.6)

Proof. Note that ${\eta }^{m,N+1}=\frac{1}{2}{\delta }_{m}^{N+1}$ for any m = 1, ..., N + 1, where ${\delta }_{i}^{j}$ is the Kronecker symbol. Therefore from (2.5) we have

where

Then we have

Equation (2.7)

by lemma 2.1. Also by formula (2.3) we have that

Equation (2.8)

The statement follows from formulas (2.7) and (2.8). □

If we identify vector space VU with the tangent space T(x,y)(VU) ≅ VU, then multiplication (2.6) can also be written as

Equation (2.9)

Now for each vector $\alpha \in \mathcal{A}$ let us introduce the set of its collinear vectors from $\mathcal{A}$:

${\delta }_{\alpha }{:=}\left\{\beta \in \mathcal{A}:\beta =k\alpha \enspace \text{for}\enspace \text{some}\enspace k\in \mathbb{C}\right\}.$

Let $\delta \subseteq {\delta }_{\alpha }$ and ${\alpha }_{0}\in {\delta }_{\alpha }$. Then for any γδ we have γ = kγ α0 for some ${k}_{\gamma }\in \mathbb{R}$. Note that kγ depends on the choice of α0, and different choices of α0 give rescaled collections of these parameters. Define ${C}_{\delta }^{{\alpha }_{0}}{:=}{\sum }_{\gamma \in \delta }{c}_{\gamma }{k}_{\gamma }^{2}$. Note that ${C}_{\delta }^{{\alpha }_{0}}$ is non-zero if and only if ${C}_{\delta }^{{\tilde {\alpha }}_{0}}\ne 0$ for any ${\tilde {\alpha }}_{0}\in \delta $.
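A toy computation (an example added here, not from the paper) with δ = {α, 2α} and hypothetical multiplicities shows how ${C}_{\delta }^{{\alpha }_{0}}$ rescales under a change of α0 while its (non-)vanishing does not depend on the choice:

```python
from fractions import Fraction as Fr

# delta = {alpha, 2*alpha}; multiplicities indexed by the coefficient k in k*alpha
# (the values 3 and -2 are arbitrary choices for this illustration).
c = {1: Fr(3), 2: Fr(-2)}

def C(delta_coeffs, alpha0_coeff):
    # gamma = k_gamma * alpha_0 with k_gamma = k / alpha0_coeff
    return sum(c[k] * Fr(k, alpha0_coeff) ** 2 for k in delta_coeffs)

C1 = C([1, 2], 1)   # alpha_0 = alpha:    C = c_alpha + 4 c_{2 alpha}
C2 = C([1, 2], 2)   # alpha_0 = 2 alpha:  all k_gamma are halved
assert C1 == 4 * C2                # rescaled by the factor (1/2)^{-2} = 4
assert (C1 != 0) == (C2 != 0)      # non-vanishing is independent of the choice
```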

For any $\alpha ,\beta \in \mathcal{A}$ we define ${B}_{\alpha ,\beta }{:=}\alpha \wedge \beta =\alpha \otimes \beta -\beta \otimes \alpha \in {{\Lambda}}^{2}{V}^{{\ast}}$; thus ${B}_{\alpha ,\beta }:V\otimes V\to \mathbb{C}$ is such that ${B}_{\alpha ,\beta }\left(a\otimes b\right)=\alpha \wedge \beta \left(a\otimes b\right)=\alpha \left(a\right)\beta \left(b\right)-\alpha \left(b\right)\beta \left(a\right)$ for any a, bV. The following proposition holds.

Proposition 2.5. Assume that prepotential (2.1) satisfies the WDVV equations (2.2). Suppose that ${C}_{{\delta }_{\alpha }}^{{\alpha }_{0}}\ne 0$ for any $\alpha \in \mathcal{A},{\alpha }_{0}\in {\delta }_{\alpha }$. Then the identity

Equation (2.10)

holds for all a, bV provided that α(x) = 0.

Proof. For any a = (a1, ..., aN ) ∈ V we define ${F}_{a}={\sum }_{i=1}^{N}{a}_{i}{F}_{i}$. Also we define the matrix ${F}_{a}^{\vee }={F}_{N+1}^{-1}{F}_{a}$. The WDVV equations (2.2) are equivalent to the commutativity $\left[{F}_{a}^{\vee },{F}_{b}^{\vee }\right]=0$ for any a, bV. The commutativity $\left[{F}_{a}^{\vee },{F}_{b}^{\vee }\right]=0$ is then equivalent to the identities [18]

Equation (2.11)

Let us consider the terms on the left-hand side of relation (2.11) in which β or γ is proportional to α. The sum of these terms has to be regular at α(x) = 0. This implies that the product

Equation (2.12)

is regular at α(x) = 0. The first factor in the product (2.12) has a first order pole at α(x) = 0 by the assumption that ${C}_{{\delta }_{\alpha }}^{{\alpha }_{0}}\ne 0$ for any $\alpha \in \mathcal{A},{\alpha }_{0}\in {\delta }_{\alpha }$. This implies the statement. □

Similarly to proposition 2.5 the following proposition can also be established.

Proposition 2.6. Assume that prepotential (2.1) satisfies the WDVV equations (2.2). Suppose that ${C}_{\delta }^{{\alpha }_{0}}\ne 0$ for any $\alpha \in \mathcal{A},\delta \subseteq {\delta }_{\alpha },{\alpha }_{0}\in {\delta }_{\alpha }$. Then the identity (2.10) holds for any a, bV provided that tan α(x) = 0.

The proof is similar to the proof of proposition 2.5. Indeed, we have that expression (2.12) is regular at $\alpha \left(x\right)=\pi m,m\in \mathbb{Z}$. The assumptions imply that the first factor in (2.12) has a first order pole, which implies the statement.

The WDVV equations for a function F can be reformulated using the geometry of the configuration $\mathcal{A}$. This geometric structure is captured by the notion of a trigonometric ∨-system. Before defining trigonometric ∨-systems precisely we need the notion of series (or strings) of vectors (see [18]).

For any $\alpha \in \mathcal{A}$ let us distribute all the covectors in $\mathcal{A}{\backslash}{\delta }_{\alpha }$ into a disjoint union of α-series:

$\mathcal{A}{\backslash}{\delta }_{\alpha }={{\Gamma}}_{\alpha }^{1}\cup \dots \cup {{\Gamma}}_{\alpha }^{k},$

where $k\in \mathbb{N}$ depends on α. These series ${{\Gamma}}_{\alpha }^{s}$ are determined by the property that for any s = 1, ..., k and for any two covectors ${\gamma }_{1},{\gamma }_{2}\in {{\Gamma}}_{\alpha }^{s}$ one has either ${\gamma }_{1}+{\gamma }_{2}=m\alpha $ or ${\gamma }_{1}-{\gamma }_{2}=m\alpha $ for some $m\in \mathbb{Z}$. We assume that the series are maximal, that is if $\gamma \in {{\Gamma}}_{\alpha }^{s}$ for some $s\in \mathbb{N}$, then ${{\Gamma}}_{\alpha }^{s}$ must contain all the covectors of the form ${\pm}\gamma +m\alpha \in \mathcal{A}$ with $m\in \mathbb{Z}$. Note that if for some $\beta \in \mathcal{A}$ there is no $\gamma \in \mathcal{A}$ such that $\beta {\pm}\gamma =m\alpha $ for $m\in \mathbb{Z}$, then β itself forms a single α-series.
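The decomposition into α-series can be computed mechanically. The following sketch (an editor's example; the configuration chosen is the positive half of B2 and α = e1 − e2) merges covectors γ1, γ2 into one series whenever γ1 ± γ2 is an integer multiple of α:

```python
from fractions import Fraction
from itertools import combinations

def ratio(v, alpha):
    """Return t with v = t * alpha, or None if v is not proportional to alpha."""
    i = next(j for j, a in enumerate(alpha) if a != 0)
    t = Fraction(v[i], alpha[i])
    return t if all(Fraction(x) == t * a for x, a in zip(v, alpha)) else None

def related(g1, g2, alpha):
    """True iff g1 + g2 = m*alpha or g1 - g2 = m*alpha for some integer m."""
    for s in (1, -1):
        t = ratio(tuple(x + s * y for x, y in zip(g1, g2)), alpha)
        if t is not None and t.denominator == 1:
            return True
    return False

def alpha_series(A, alpha):
    rest = [v for v in A if ratio(v, alpha) is None]   # A \ delta_alpha
    series = [{v} for v in rest]
    merged = True
    while merged:                                      # merge until series are maximal
        merged = False
        for s1, s2 in combinations(series, 2):
            if any(related(g1, g2, alpha) for g1 in s1 for g2 in s2):
                s1 |= s2
                series.remove(s2)
                merged = True
                break
    return series

# Positive half of B2; series with respect to alpha = (1, -1):
Aplus = [(1, 0), (0, 1), (1, 1), (1, -1)]
print(alpha_series(Aplus, (1, -1)))   # two series: {(1, 0), (0, 1)} and {(1, 1)}
```

Here (1, 0) and (0, 1) fall into one series since their difference equals α, while (1, 1) forms a single series on its own.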

By replacing some vectors from $\mathcal{A}$ with their opposite ones and keeping the multiplicity unchanged one can get a new configuration whose vectors belong to a half-space. We will denote such a system by ${\mathcal{A}}_{+}$. If this system contains repeated vectors α with multiplicities ${c}_{\alpha }^{i}$ then we replace them with the single vector α with multiplicity ${c}_{\alpha }{:=}{\sum }_{i}{c}_{\alpha }^{i}$.

Definition 2.7. [18] The pair $\left(\mathcal{A},c\right)$ is called a trigonometric ∨-system if for all $\alpha \in \mathcal{A}$ and for any α-series ${{\Gamma}}_{\alpha }^{s}$, one has the relation

Equation (2.13)

Note that if ${\beta }_{1},{\beta }_{2}\in {{\Gamma}}_{\alpha }^{s}$ for some α, s, then $\alpha \wedge {\beta }_{1}={\pm}\alpha \wedge {\beta }_{2}$, so the identity (2.13) may be simplified by cancelling wedge products. We also note that if $\mathcal{A}$ is a trigonometric ∨-system then ${\mathcal{A}}_{+}$ is one as well.

The close relation between the notion of a trigonometric ∨-system and solutions of WDVV equations is explained by the next theorem. Before we formulate it let us introduce two symmetric bilinear forms ${G}_{\mathcal{A}}^{\left(i\right)}={G}_{\left(\mathcal{A},c\right)}^{\left(i\right)},i=1,2$, on the vector space ${{\Lambda}}^{2}V\subset V\otimes V$.

Let us consider the bilinear form ${G}_{\mathcal{A}}^{\left(1\right)}$ on Λ2 V given by

Equation (2.14)

where z, w ∈Λ2 V. It is easy to see that for z = u1v1, w = u2v2, where u1, u2, v1, v2V we have

which is a natural extension of the bilinear form ${G}_{\mathcal{A}}$ to the space Λ2 V. It is also easy to see that this form ${G}_{\mathcal{A}}^{\left(1\right)}$ is non-degenerate and that it is $\mathcal{W}$-invariant. Let us also define the following bilinear form ${G}_{{\mathcal{A}}_{+}}^{\left(2\right)}$ on Λ2 V:

Equation (2.15)

where z, w ∈ Λ2 V.

The following statement shows that the bilinear form ${G}_{{\mathcal{A}}_{+}}^{\left(2\right)}$ is independent of the choice of the positive system ${\mathcal{A}}_{+}$.

Lemma 2.8. For any positive systems ${\mathcal{A}}_{+}^{\left(1\right)},{\mathcal{A}}_{+}^{\left(2\right)}$ for a trigonometric ∨-system $\left(\mathcal{A},c\right)$ we have ${G}_{{\mathcal{A}}_{+}^{\left(1\right)}}^{\left(2\right)}={G}_{{\mathcal{A}}_{+}^{\left(2\right)}}^{\left(2\right)}$.

Proof. Suppose firstly that two positive systems ${\mathcal{A}}_{+}^{\left(1\right)},{\mathcal{A}}_{+}^{\left(2\right)}$ for a trigonometric ∨-system $\left(\mathcal{A},c\right)$ satisfy the condition

${\mathcal{A}}_{+}^{\left(2\right)}=\left({\mathcal{A}}_{+}^{\left(1\right)}{\backslash}{\delta }_{\alpha }\right)\cup \left(-{\delta }_{\alpha }\right)$

for some $\alpha \in {\mathcal{A}}_{+}^{\left(1\right)}$. Notice that vector α cannot be a linear combination of vectors in ${\mathcal{A}}_{+}^{\left(1\right)}{\backslash}{\delta }_{\alpha }$. Hence for each α-series ${{\Gamma}}_{\alpha }^{s}$ in ${\mathcal{A}}_{+}^{\left(1\right)}$ we have

Equation (2.16)

since ${B}_{\alpha ,{\beta }_{1}}={B}_{\alpha ,{\beta }_{2}}$ for all ${\beta }_{1},{\beta }_{2}\in {{\Gamma}}_{\alpha }^{s}$.

Let us consider terms in ${G}_{{\mathcal{A}}_{+}^{\left(1\right)}}^{\left(2\right)}\left(z,w\right)$ which contain α. They are proportional to

by (2.16). The statement follows in this case.

In general, the system ${\mathcal{A}}_{+}^{\left(2\right)}$ can be obtained from the system ${\mathcal{A}}_{+}^{\left(1\right)}$ by a sequence of steps in each of which we replace a subset of vectors δα with the vectors −δα so that the resulting system is still a positive one. In order to see this, one continuously moves the hyperplane defining ${\mathcal{A}}_{+}^{\left(1\right)}$ into the hyperplane defining ${\mathcal{A}}_{+}^{\left(2\right)}$ so that at each moment the hyperplane contains at most one vector from $\mathcal{A}$ up to proportionality. The statement follows from the case considered above. □

As a consequence of lemma 2.8 we can and will denote the form ${G}_{{\mathcal{A}}_{+}}^{\left(2\right)}$ as ${G}_{\mathcal{A}}^{\left(2\right)}$.

A close relation between trigonometric ∨-systems and solutions of WDVV equations is given by the following theorem.

Theorem 2.9 (cf [18]). Suppose that a configuration $\left(\mathcal{A},c\right)$ satisfies the condition ${C}_{\delta }^{{\alpha }_{0}}\ne 0$ for all $\alpha \in \mathcal{A},\enspace \delta \subseteq {\delta }_{\alpha },\enspace {\alpha }_{0}\in {\delta }_{\alpha }$. Then WDVV equations (2.2) for the function (2.1) imply the following two conditions:

  • (a)  
    $\mathcal{A}$ is a trigonometric ∨-system,
  • (b)  
    Bilinear forms (2.14), (2.15) satisfy proportionality ${G}_{\mathcal{A}}^{\left(1\right)}=\frac{{\lambda }^{2}}{4}{G}_{\mathcal{A}}^{\left(2\right)}.$

Conversely, if a configuration $\left(\mathcal{A},c\right)$ satisfies conditions (a) and (b) then WDVV equations (2.2) hold.

The key part of the proof is to derive the trigonometric ∨-conditions from the WDVV equations, which goes along the following lines (see [18] for details). By proposition 2.6, identity (2.10) holds if tan α(x) = 0. The identity (2.10) is a linear combination of the functions cot β(x)|tan α(x)=0, which can vanish only if it vanishes on each α-series. Hence identity (2.10) implies relations (2.13), so $\mathcal{A}$ is a trigonometric ∨-system.

Remark 2.10. A version of theorem 2.9 is given in [18, theorem 1] without specifying the conditions ${C}_{\delta }^{{\alpha }_{0}}\ne 0$. However, these assumptions seem to be needed in general in order to derive the trigonometric ∨-conditions for α-series in the case when δα \{±α} ≠ ∅, as the above arguments and the proofs of propositions 2.5 and 2.6 explain.

Remark 2.11. Let us consider vector fields $\tilde {u}=u+{\rho }_{1}E,\tilde {v}=v+{\rho }_{2}E\in {\Gamma}\left(T\left(V\oplus U\right)\right)$, where u, vV and ${\rho }_{1},{\rho }_{2}\in \mathbb{C}$. Then by proposition 2.3 and formula (2.9) the multiplication (2.5) takes the following form on T(x,y)(VU) ≅ VU:

Equation (2.17)

The Dubrovin connection on the tangent bundle T(VU) is defined by

Equation (2.18): ${\nabla }_{\tilde {u}}\tilde {v}={\partial }_{\tilde {u}}\tilde {v}+\mu \enspace \tilde {u}{\ast}\tilde {v},$

where the multiplication * is given by (2.17) and $\mu \in \mathbb{C}$. Then the flatness of the connection (2.18) for all μ is equivalent to WDVV equations (2.2).

An important class of solutions of WDVV equations is given by (crystallographic) root systems $\mathcal{A}=\mathcal{R}$ of Weyl groups $\mathcal{W}$. Recall that a root system $\mathcal{R}$ satisfies the property

Equation (2.19): ${s}_{\alpha }\beta =\beta -\frac{2\langle \alpha ,\beta \rangle }{\langle \alpha ,\alpha \rangle }\alpha \in \mathcal{R}$

for any $\alpha ,\beta \in \mathcal{R}$, and one has $\frac{2\langle \alpha ,\beta \rangle }{\langle \alpha ,\alpha \rangle }\in \mathbb{Z}$, where ⟨⋅, ⋅⟩ is a $\mathcal{W}$-invariant scalar product on V* ≅ V. The corresponding Weyl group is generated by reflections ${s}_{\alpha },\alpha \in \mathcal{R}$.
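These two properties of a crystallographic root system can be checked directly on a small example (an editor's sketch, using the B2 root system with the standard Euclidean scalar product as ⟨⋅, ⋅⟩):

```python
from fractions import Fraction

# The B2 root system in R^2
R = [(1, 0), (-1, 0), (0, 1), (0, -1),
     (1, 1), (-1, -1), (1, -1), (-1, 1)]

def ip(u, v):
    """Standard Euclidean scalar product, playing the role of <.,.>."""
    return sum(x * y for x, y in zip(u, v))

for a in R:
    for b in R:
        m = Fraction(2 * ip(a, b), ip(a, a))
        assert m.denominator == 1                     # 2<a,b>/<a,a> is an integer
        s = tuple(x - m * y for x, y in zip(b, a))    # s_a(b) = b - (2<a,b>/<a,a>) a
        assert s in R                                 # the reflection preserves R
```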

The following statement was established in [24] for the non-reduced root systems.

Theorem 2.12 (cf [24]). Let $\mathcal{A}=\mathcal{R}$ be an irreducible root system with the Weyl group $\mathcal{W}$ and suppose that the multiplicity function $c:\mathcal{R}\to \mathbb{C}$ is $\mathcal{W}$-invariant. Then prepotential (2.1) satisfies WDVV equations (2.2) for some $\lambda \in \mathbb{C}$.

Let us explain a proof of this statement different from [24] by making use of the notion of a trigonometric ∨-system and theorem 2.9.

Proposition 2.13. Root system $\mathcal{A}=\mathcal{R}$ with $\mathcal{W}$-invariant multiplicity function c is a trigonometric ∨-system.

Proof. Fix $\alpha \in \mathcal{R}$. Take any $\beta \in \mathcal{R}$, and let γ = sα β. Then from (2.19) we have that

$\gamma ={s}_{\alpha }\beta =\beta -\frac{2\langle \alpha ,\beta \rangle }{\langle \alpha ,\alpha \rangle }\alpha ,$

so βγ is an integer multiple of α. Hence $\beta ,\gamma \in {{\Gamma}}_{\alpha }^{s}$ for some s. The bilinear form ${G}_{\mathcal{R}}$ is $\mathcal{W}$-invariant, so it is proportional to ⟨⋅, ⋅⟩. Therefore we have

Hence,

which implies trigonometric ∨-conditions (2.13). □

It is easy to see that the bilinear form ${G}_{\mathcal{R}}^{\left(1\right)}$ is $\mathcal{W}$-invariant, and the same is true for the bilinear form ${G}_{\mathcal{R}}^{\left(2\right)}$ (see e.g. [2, proposition 6.4]). Since the $\mathcal{W}$-module Λ2 V is irreducible, the forms ${G}_{\mathcal{R}}^{\left(1\right)}$ and ${G}_{\mathcal{R}}^{\left(2\right)}$ have to be proportional. By theorem 2.9 this implies theorem 2.12, provided that the form ${G}_{\mathcal{R}}^{\left(2\right)}$ is non-zero. The latter fact is claimed in [24], where the corresponding solution of WDVV equations was explicitly stated for the constant multiplicity function. It was found for any multiplicity function for the non-reduced root systems in [28, 29].

It follows that a positive half $\mathcal{A}={\mathcal{R}}^{+}$ of a root system $\mathcal{R}$ also defines a solution of WDVV equations (2.2). We find the corresponding form ${G}_{\mathcal{R^+}}^{\left(2\right)}$ for the root system $\mathcal{R}=B{C}_{N}$ explicitly in section 5. We also specify corresponding constants $\lambda ={\lambda }_{\left(\mathcal{R},c\right)}$ for (the positive halves of) reduced root systems $\mathcal{R}$ in section 8. Note that λ is invariant under the linear transformations applied to $\mathcal{A}$. In the root system case the scalar ${\lambda }_{\left(\mathcal{R},c\right)}$ may be thought of as a version of the (generalized) Coxeter number for the case of the representation Λ2 V, as the usual (generalized) Coxeter number can also be given as a ratio of two $\mathcal{W}$-invariant forms on V ([6, 19]).

3. Subsystems of trigonometric ∨-systems

In this section we consider subsystems of trigonometric ∨-systems and show that they are also trigonometric ∨-systems. An analogous statement for the rational case was shown in [17] (see also [15]).

A subset $\mathcal{B}\subset \mathcal{A}$ is called a subsystem if $\mathcal{B}=\mathcal{A}\cap W$ for some linear subspace WV*. The subsystem $\mathcal{B}$ is called reducible if $\mathcal{B}$ is a disjoint union of two non-empty subsystems, and it is called irreducible otherwise. Consider the following bilinear form on V associated with a subsystem $\mathcal{B}$:

${G}_{\mathcal{B}}\left(u,v\right)={\sum }_{\beta \in \mathcal{B}}{c}_{\beta }\beta \left(u\right)\beta \left(v\right),\qquad u,v\in V.$

The subsystem $\mathcal{B}$ is called isotropic if the restriction ${G}_{\mathcal{B}}{\vert }_{{W}^{\vee }}$ of the form ${G}_{\mathcal{B}}$ onto the subspace ${W}^{\vee }=\left\{{\gamma }^{\vee }:\gamma \in W\right\}\subset V$, where $W=\langle \mathcal{B}\rangle $, is degenerate, and $\mathcal{B}$ is called non-isotropic otherwise.

Let us prove some lemmas which will be useful for the proof of the main theorem of this section.

Lemma 3.1. Let $\mathcal{A}$ be a trigonometric ∨-system. Let $\mathcal{B}=\mathcal{A}\cap W$ be a subsystem of $\mathcal{A}$ for some linear subspace WV* such that $W=\langle \mathcal{B}\rangle $. Consider the linear operator $M:V\to {W}^{\vee }$ given by

Equation (3.1)

that is, $M\left(v\right)={\sum }_{\beta \in \mathcal{B}}{c}_{\beta }\beta \left(v\right){\beta }^{\vee }$, for any vV. Then

  • (a)  
    For any u, vV we have ${G}_{\mathcal{A}}\left(u,M\left(v\right)\right)={G}_{\mathcal{B}}\left(u,v\right)$.
  • (b)  
    For any $\alpha \in \mathcal{B}$, ${\alpha }^{\vee }$ is an eigenvector for M.
  • (c)  
    The space ${W}^{\vee }$ can be decomposed as a direct sum
    Equation (3.2): ${W}^{\vee }={\bigoplus }_{i=1}^{k}{U}_{{\lambda }_{i}},$
    where ${\lambda }_{i}\in \mathbb{C}$ are distinct, and the restriction $M{\vert }_{{U}_{{\lambda }_{i}}}={\lambda }_{i}I$, where I is the identity operator.

Proof. Let u, vV. We have

which proves the first statement.

Let us consider a two-dimensional plane πV* such that π contains α and another covector from $\mathcal{B}$ which is not collinear with α. Let us sum up ∨-conditions (2.13) over α-series which belong to the plane π. We get that

hence

Equation (3.3)

for some ${\lambda }_{\pi }\in \mathbb{C}$. Let us now sum up relation (3.3) over all such two-dimensional planes π which contain α and another non-collinear covector from $\mathcal{B}$. It follows that $M\left({\alpha }^{\vee }\right)=\lambda {\alpha }^{\vee }$ for some $\lambda \in \mathbb{C}$, hence property (b) holds.

The set of vectors $\left\{{\alpha }^{\vee }:\alpha \in \mathcal{B}\right\}$ spans ${W}^{\vee }$ since $\mathcal{B}$ spans W. As ${\alpha }^{\vee }$ is an eigenvector for $M{\vert }_{{W}^{\vee }}$ for any $\alpha \in \mathcal{B}$, we get that $M{\vert }_{{W}^{\vee }}$ is diagonalizable, and ${W}^{\vee }$ has the eigenspace decomposition as stated in (3.2). □

Lemma 3.2. Let $\mathcal{A}$ and $\mathcal{B}$ be as stated in lemma 3.1. Suppose that $\mathcal{B}$ is non-isotropic. Then

Equation (3.4): ${G}_{\mathcal{B}}\left(u,v\right)={\lambda }_{i}{G}_{\mathcal{A}}\left(u,v\right)$ for any uV and $v\in {U}_{{\lambda }_{i}}$,

where λi ≠ 0 for all i = 1, ..., k.

Proof. Let uV and $v\in {U}_{{\lambda }_{i}}$ for some i, where ${U}_{{\lambda }_{i}}$ is given by (3.2). Then by lemma 3.1 we have

${G}_{\mathcal{B}}\left(u,v\right)={G}_{\mathcal{A}}\left(u,M\left(v\right)\right)={\lambda }_{i}{G}_{\mathcal{A}}\left(u,v\right).$

Hence we have the required relation (3.4). Note that λi ≠ 0 for all i as otherwise ${G}_{\mathcal{B}}{\vert }_{{U}_{{\lambda }_{i}}{\times}V}=0$ which contradicts the non-isotropicity of $\mathcal{B}$. □
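Lemmas 3.1 and 3.2 can be checked numerically on a small example (added by the editor; we take $\mathcal{A}$ to be the positive half of the B3 root system with all multiplicities equal to 1, and the subsystem $\mathcal{B}=\mathcal{A}\cap \langle {e}^{1},{e}^{2}\rangle $, a positive half of B2):

```python
import numpy as np

# A: positive half of B3 (covectors in R^3), all multiplicities = 1 (an assumption
# made for this example).
A = [np.eye(3)[i] for i in range(3)]
A += [np.eye(3)[i] + s * np.eye(3)[j]
      for i in range(3) for j in range(i + 1, 3) for s in (1, -1)]

G_A = sum(np.outer(a, a) for a in A)      # here G_A = 5 * I
B = [a for a in A if a[2] == 0]           # subsystem B = A ∩ <e1, e2>: a positive B2

# M(v) = sum_{beta in B} beta(v) beta^vee, with beta^vee = G_A^{-1} beta
M = sum(np.outer(np.linalg.solve(G_A, b), b) for b in B)

# On W^vee = <e1, e2>, M acts as the scalar lambda = 3/5 (lemma 3.1(c)),
# and G_B = lambda * G_A there (lemma 3.2):
G_B = sum(np.outer(b, b) for b in B)
assert np.allclose(M[:2, :2], (3 / 5) * np.eye(2))
assert np.allclose(G_B[:2, :2], (3 / 5) * G_A[:2, :2])
```

Here there is a single eigenvalue λ1 = 3/5 on ${W}^{\vee }$, which is non-zero, in agreement with the non-isotropicity of this subsystem.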

Assume that the subsystem $\mathcal{B}=\mathcal{A}\cap W$, $W=\langle \mathcal{B}\rangle $, is non-isotropic so that the bilinear form ${G}_{\mathcal{B}}{\vert }_{{W}^{\vee }}$ is nondegenerate. Then it establishes an isomorphism ${\phi }_{\mathcal{B}}:{W}^{\vee }\to {\left({W}^{\vee }\right)}^{{\ast}}$. For any $\beta \in \mathcal{B}$, let us denote ${\phi }_{\mathcal{B}}^{-1}\left(\beta {\vert }_{{W}^{\vee }}\right)$ by ${\beta }^{{\vee }_{\mathcal{B}}}$. The following lemma relates vectors ${\beta }^{{\vee }_{\mathcal{B}}}$ and β.

Lemma 3.3. In the assumptions and notations of lemmas 3.1 and 3.2 let $\beta \in \mathcal{B}$. Let $i\in \mathbb{N}$ be such that ${\beta }^{\vee }\in {U}_{{\lambda }_{i}}$. Then ${\beta }^{{\vee }_{\mathcal{B}}}={\lambda }_{i}^{-1}{\beta }^{\vee }$.

Proof. Let $u\in {W}^{\vee }$. By lemma 3.2 we have ${G}_{\mathcal{B}}\left({\beta }^{\vee },u\right)={\lambda }_{i}\beta \left(u\right)$. By the definition of ${\beta }^{{\vee }_{\mathcal{B}}}$ we have ${G}_{\mathcal{B}}\left({\beta }^{{\vee }_{\mathcal{B}}},u\right)=\beta \left(u\right)$. It follows that ${G}_{\mathcal{B}}\left({\lambda }_{i}^{-1}{\beta }^{\vee }-{\beta }^{{\vee }_{\mathcal{B}}},u\right)=0$, which implies the statement since the form ${G}_{\mathcal{B}}$ is non-degenerate on ${W}^{\vee }$. □
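In the same spirit, lemma 3.3 admits a quick numerical check (an editor's sketch: for the positive half of B3 with unit multiplicities and the subsystem $\mathcal{B}$ lying in ⟨e1, e2⟩, the restricted matrices are ${G}_{\mathcal{A}}{\vert }_{{W}^{\vee }}=5\cdot \mathrm{Id}$ and ${G}_{\mathcal{B}}{\vert }_{{W}^{\vee }}=3\cdot \mathrm{Id}$, so λi = 3/5):

```python
import numpy as np

beta = np.array([1.0, 1.0])   # beta = e1 + e2, restricted to W^vee = <e1, e2>
G_A_W = 5 * np.eye(2)         # matrix of G_A restricted to W^vee (for this example)
G_B_W = 3 * np.eye(2)         # matrix of G_B restricted to W^vee

beta_vee = np.linalg.solve(G_A_W, beta)      # beta^vee     = (1/5, 1/5)
beta_vee_B = np.linalg.solve(G_B_W, beta)    # beta^{vee_B} = (1/3, 1/3)

# Lemma 3.3: beta^{vee_B} = lambda_i^{-1} beta^vee with lambda_i = 3/5
assert np.allclose(beta_vee_B, (5 / 3) * beta_vee)
```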

Lemma 3.4. Let $\mathcal{A}$ and $\mathcal{B}$ be as stated in lemma 3.1. Let $\alpha \in \mathcal{B}$ and let $i\in \mathbb{N}$ be such that ${\alpha }^{\vee }\in {U}_{{\lambda }_{i}}$. Consider an α-series ${{\Gamma}}_{\alpha }^{\mathcal{B}}$ in $\mathcal{B}$ and let $\beta \in {{\Gamma}}_{\alpha }^{\mathcal{B}}$. Then either ${\gamma }^{\vee }\in {U}_{{\lambda }_{i}}$ for all $\gamma \in {{\Gamma}}_{\alpha }^{\mathcal{B}}$, or ${{\Gamma}}_{\alpha }^{\mathcal{B}}\subseteq \left\{{\pm}\beta \right\}$.

Proof. Suppose firstly that ${\beta }^{\vee }\in {U}_{{\lambda }_{i}}$. Since any covector $\gamma \in {{\Gamma}}_{\alpha }^{\mathcal{B}}$ is a linear combination of β and α, we get that $\gamma \in {U}_{{\lambda }_{i}}$ as required.

Suppose now that ${\beta }^{\vee }\notin {U}_{{\lambda }_{i}}$. Then ${\beta }^{\vee }\in {U}_{{\lambda }_{j}}$ for some $j\ne i$. Since we have a direct sum decomposition (3.2) it follows that ${{\Gamma}}_{\alpha }^{\mathcal{B}}\subseteq \left\{{\pm}\beta \right\}$. □

Lemma 3.5. Let $\mathcal{A}\subset {V}^{{\ast}}$ be a finite collection of covectors, and let $\mathcal{B}\subset \mathcal{A}$ be a subsystem. Let $\alpha ,\beta \in \mathcal{B}$. Let ${{\Gamma}}_{\alpha }^{\mathcal{A}},{{\Gamma}}_{\alpha }^{\mathcal{B}}$ be the α-series in $\mathcal{A}$ and $\mathcal{B}$ respectively containing β. Then the set ${{\Gamma}}_{\alpha }^{\mathcal{A}}$ coincides with the set ${{\Gamma}}_{\alpha }^{\mathcal{B}}$.

Proof. Let $\gamma \in {{\Gamma}}_{\alpha }^{\mathcal{A}}$. It follows that $\gamma \in \mathcal{B}$. By maximality of ${{\Gamma}}_{\alpha }^{\mathcal{B}}$, it follows that $\gamma \in {{\Gamma}}_{\alpha }^{\mathcal{B}}$. Hence ${{\Gamma}}_{\alpha }^{\mathcal{A}}\subset {{\Gamma}}_{\alpha }^{\mathcal{B}}$. The opposite inclusion is obvious. □

Proposition 3.6. In the assumptions and notations of lemma 3.1 we have ${G}_{\mathcal{B}}\left(u,v\right)=0$ for any $u\in {U}_{{\lambda }_{i}}$ and $v\in {U}_{{\lambda }_{j}}$ such that $i\ne j$.

Proof. From lemma 3.2 we have ${G}_{\mathcal{B}}\left(u,v\right)={\lambda }_{i}{G}_{\mathcal{A}}\left(u,v\right)={\lambda }_{j}{G}_{\mathcal{A}}\left(u,v\right)$. Hence ${G}_{\mathcal{A}}\left(u,v\right)=0$, which implies the statement. □

Now we present the main theorem of this section.

Theorem 3.7. Any non-isotropic subsystem of a trigonometric ∨-system is also a trigonometric ∨-system.

Proof. Let $\mathcal{A}$ be a trigonometric ∨-system and let $\mathcal{B}$ be its non-isotropic subsystem. Let $\alpha \in \mathcal{B}$. Then ${\alpha }^{\vee }\in {U}_{{\lambda }_{i}}$ in the decomposition (3.2) for some i. Consider an α-series ${{\Gamma}}_{\alpha }^{\mathcal{B}}$ in $\mathcal{B}$. Let $\beta \in {{\Gamma}}_{\alpha }^{\mathcal{B}}$. Then by lemma 3.4 we have the following two cases.

  • (a)  
    Suppose ${\beta }^{\vee }\in {U}_{{\lambda }_{i}}$. Then ${{\Gamma}}_{\alpha }^{\mathcal{B}}\subset {U}_{{\lambda }_{i}}$ and by lemmas 3.2, 3.3 we have
    Hence we have
    by lemma 3.1 and since $\mathcal{A}$ is a trigonometric ∨-system. Hence the ∨-condition (2.13) for $\mathcal{B}$ holds.
  • (b)  
    Suppose ${\beta }^{\vee }\in {U}_{{\lambda }_{j}}$, where $j\ne i$. Then ${G}_{\mathcal{B}}\left({\alpha }^{{\vee }_{\mathcal{B}}},{\beta }^{{\vee }_{\mathcal{B}}}\right)={\lambda }_{i}^{-1}{\lambda }_{j}^{-1}{G}_{\mathcal{B}}\left({\alpha }^{\vee },{\beta }^{\vee }\right)=0$ by proposition 3.6, and ${{\Gamma}}_{\alpha }^{\mathcal{B}}\subseteq \left\{{\pm}\beta \right\}$ by lemma 3.4. Hence the ∨-condition (2.13) for $\mathcal{B}$ holds. □

4. Restriction of trigonometric solutions of WDVV equations

In this section we consider the restriction operation for the trigonometric solutions of WDVV equations and show that this gives new solutions of WDVV equations. An analogous statement in the rational case was established in [16].

Let

Equation (4.1)

be a subsystem of $\mathcal{A}$ for some linear subspace $W=\langle \mathcal{B}\rangle \subset {V}^{{\ast}}$. Define

Equation (4.2)

Let us denote the restriction $\alpha {\vert }_{{W}_{\mathcal{B}}}$ of a covector $\alpha \in {V}^{{\ast}}$ by ${\pi }_{\mathcal{B}}\left(\alpha \right)$.

Define also

Consider a point ${x}_{0}\in {M}_{\mathcal{B}}$ and tangent vectors ${u}_{0},{v}_{0}\in {T}_{{x}_{0}}{M}_{\mathcal{B}}$. We extend the vectors u0 and v0 to two local analytic vector fields u(x), v(x) in a neighbourhood U of x0 that are tangent to the subspace ${W}_{\mathcal{B}}$ at any point $x\in {W}_{\mathcal{B}}\cap U$ and are such that u0 = u(x0) and v0 = v(x0). Consider the multiplication * given by (2.9). We want to study the limit of u(x) * v(x) as x tends to x0. The product may have singularities at $x\in {W}_{\mathcal{B}}$ since cot α(x) with $\alpha \in \mathcal{B}$ is not defined for such x. Note also that outside ${W}_{\mathcal{B}}$ the multiplication u(x) * v(x) is well defined.

The proof of the next lemma is similar to the proof of [16, lemma 1] in the rational case (see also [1]).

Lemma 4.1. The limit of the product u(x) * v(x) exists as the vector x tends to ${x}_{0}\in {M}_{\mathcal{B}}$ and it satisfies

Equation (4.3)

In particular, the product u0 * v0 is determined by vectors u0 and v0 only.

Now for the subsystem $\mathcal{B}\subset \mathcal{A}$ given by (4.1) let

Equation (4.4)

where k = dim W, be a basis of W. The following lemma shows that multiplication (4.3) is closed on the tangent space ${T}_{{\ast}}\left({M}_{\mathcal{B}}\oplus U\right)$.

Lemma 4.2. Let $\mathcal{B}\subset \mathcal{A}$ be a subsystem. Assume that prepotential (2.1) corresponding to a configuration $\left(\mathcal{A},c\right)$ satisfies WDVV equations (2.2). Suppose that ${C}_{{\delta }_{\alpha }}^{{\alpha }_{0}}\ne 0$ for any $\alpha \in S$, ${\alpha }_{0}\in {\delta }_{\alpha }$. If $u,v\in {T}_{\left(x,y\right)}\left({M}_{\mathcal{B}}\oplus U\right)$, where $x\in {W}_{\mathcal{B}},y\in U$, then one has $u\;{\ast}\;v\in {T}_{\left(x,y\right)}\left({M}_{\mathcal{B}}\oplus U\right)$, that is

where multiplication * is given by (4.3).

Proof. Suppose that the subspace ${W}_{\mathcal{B}}$ given by (4.2) has codimension 1 in V, and let $\alpha \in S$. We have $\mathcal{B}={\delta }_{\alpha }$. Let $x\in {M}_{\mathcal{B}}\subset {{\Pi}}_{\alpha }$. Let $u,v\in {T}_{\left(x,y\right)}\left({{\Pi}}_{\alpha }\oplus U\right)$. Then u and v can be written as $u={a}_{u}\bar{u}+{b}_{u}E$, $v={a}_{v}\bar{v}+{b}_{v}E$, where $\bar{u},\bar{v}\in {{\Pi}}_{\alpha }$, and ${a}_{u},{b}_{u},{a}_{v},{b}_{v}\in \mathbb{C}$. By proposition 2.5 we have

Equation (4.5)

for any z, wV. By taking z, w ∉ Πα we derive from (4.5) that

which implies the statement by lemma 4.1.

Let us now consider ${W}_{\mathcal{B}}$ of codimension 2. Let S = {α1, α2}. By the above arguments

if $x\in {{\Pi}}_{{\alpha }_{i}}$ is generic and $u,v\in {T}_{\left(x,y\right)}\left({{\Pi}}_{{\alpha }_{i}}\oplus U\right),\quad \left(i=1,2\right)$. By lemma 4.1, u * v exists for $x\in {M}_{\mathcal{B}}$ and hence $u\;{\ast}\;v\in {T}_{\left(x,y\right)}\left(\left({{\Pi}}_{{\alpha }_{1}}\cap {{\Pi}}_{{\alpha }_{2}}\right)\oplus U\right)$. This proves the statement for the case when ${W}_{\mathcal{B}}$ has codimension 2. General $\mathcal{B}$ is dealt with similarly. □

Let us assume that ${G}_{\mathcal{A}}{\vert }_{{W}_{\mathcal{B}}}$ is non-degenerate. Then we have the orthogonal decomposition

The vector ${\alpha }^{\vee }\in V$ can be represented as

Equation (4.6)

where $\tilde {{\alpha }^{\vee }}\in {W}_{\mathcal{B}}$ and $w\in {W}_{\mathcal{B}}^{\perp }$. By lemmas 4.1 and 4.2 we have the associative product

where ${x}_{0}\in {M}_{\mathcal{B}},u,v\in {W}_{\mathcal{B}}$.

For any $\gamma \in {W}_{\mathcal{B}}^{{\ast}}$ we define ${\gamma }^{{\vee }_{{W}_{\mathcal{B}}}}\in {W}_{\mathcal{B}}$ by ${G}_{\mathcal{A}}\left({\gamma }^{{\vee }_{{W}_{\mathcal{B}}}},v\right)=\gamma \left(v\right),\quad \forall \enspace v\in {W}_{\mathcal{B}}$.

Lemma 4.3. Suppose that the restriction ${G}_{\mathcal{A}}{\vert }_{{W}_{\mathcal{B}}}$ is non-degenerate. Then $\tilde {{\alpha }^{\vee }}={\pi }_{\mathcal{B}}{\left(\alpha \right)}^{{\vee }_{{W}_{\mathcal{B}}}}$ for any $\alpha \in {V}^{{\ast}}$.

Proof. From decomposition (4.6) we have

for any $v\in {W}_{\mathcal{B}}$. It follows that ${G}_{\mathcal{A}}\left({\pi }_{\mathcal{B}}{\left(\alpha \right)}^{{\vee }_{{W}_{\mathcal{B}}}}-\tilde {{\alpha }^{\vee }},v\right)=0$, which implies the statement as ${G}_{\mathcal{A}}{\vert }_{{W}_{\mathcal{B}}}$ is non-degenerate. □

Let us choose a basis in the space ${W}_{\mathcal{B}}\oplus U$ such that f1, ..., fn is a basis in ${W}_{\mathcal{B}},n=\mathrm{dim}\enspace {W}_{\mathcal{B}}$, and fn+1 is the basis vector in U, and let ξ1, ..., ξn+1 be the corresponding coordinates. We represent vectors $\xi \in {W}_{\mathcal{B}},y\in U$ as ξ = (ξ1, ..., ξn ) and y = ξn+1. The WDVV equations for a function $F:{W}_{\mathcal{B}}\oplus U\to \mathbb{C}$ are the following system of partial differential equations:

Equation (4.7)

where ${F}_{i}$ is the (n + 1) × (n + 1) matrix with entries ${\left({F}_{i}\right)}_{pq}=\frac{{\partial }^{3}F}{\partial {\xi }_{i}\partial {\xi }_{p}\partial {\xi }_{q}}$ (p, q = 1, ..., n + 1). The previous considerations lead to the following theorem.

Theorem 4.4. Let $\mathcal{B}\subset \mathcal{A}$ be a subsystem, and let S be as defined in (4.4). Assume that prepotential (2.1) satisfies WDVV equations (2.2). Suppose that ${C}_{{\delta }_{\alpha }}^{{\alpha }_{0}}\ne 0$ for any $\alpha \in S$, ${\alpha }_{0}\in {\delta }_{\alpha }$. Then the prepotential

Equation (4.8)

where $\xi \in {W}_{\mathcal{B}}$, $y\in U\cong \mathbb{C}$, and $\bar{\alpha }={\pi }_{\mathcal{B}}\left(\alpha \right)$, satisfies WDVV equations (4.7). The corresponding associative multiplication has the form

Equation (4.9)

where $\xi \in {M}_{\mathcal{B}},u,v\in {T}_{\left(\xi ,y\right)}{M}_{\mathcal{B}}$.

Proof. It follows by lemmas 2.2 and 4.1–4.3 that multiplication (4.9) is associative. The corresponding prepotential has the form (4.8) and it satisfies WDVV equations (4.7) by lemma 2.2. □

In general a restriction of a root system is not a root system, so we get new solutions of WDVV equations by applying theorem 4.4 in this case. In sections 5, 6 and 8 we consider such solutions in more detail.
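For readers who wish to experiment with the matrix form of WDVV equations used in this section, the following sketch checks the commutation relations ${F}_{i}{\eta }^{-1}{F}_{j}={F}_{j}{\eta }^{-1}{F}_{i}$ symbolically with sympy. The test prepotential (Dubrovin's polynomial A3 example) and the choice η = F1 are assumptions of this sketch, brought in purely as a worked illustration; the precise normalisation of (4.7) is in the displayed equation above, which is not reproduced here.

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
# A classical polynomial solution of WDVV (the three-dimensional A_3
# Frobenius manifold prepotential) -- used here purely as a test case,
# it is not one of the trigonometric solutions of this paper.
F = sp.Rational(1, 2)*t1**2*t3 + sp.Rational(1, 2)*t1*t2**2 \
    + sp.Rational(1, 4)*t2**2*t3**2 + sp.Rational(1, 60)*t3**5
vars_ = (t1, t2, t3)

def third_deriv_matrix(F, i, vars_):
    """(F_i)_{pq} = d^3 F / d x_i d x_p d x_q."""
    n = len(vars_)
    return sp.Matrix(n, n, lambda p, q:
                     sp.diff(F, vars_[i], vars_[p], vars_[q]))

Fmats = [third_deriv_matrix(F, i, vars_) for i in range(3)]
eta = Fmats[0]        # for this prepotential F_1 is the constant flat metric
eta_inv = eta.inv()

# WDVV in matrix form: F_i eta^{-1} F_j = F_j eta^{-1} F_i for all i, j
for i in range(3):
    for j in range(3):
        diff = (Fmats[i]*eta_inv*Fmats[j]
                - Fmats[j]*eta_inv*Fmats[i]).applyfunc(sp.simplify)
        assert diff == sp.zeros(3, 3)
print("WDVV matrix equations hold for this test prepotential")
```

The same loop can be pointed at any prepotential in any number of variables once the matrix playing the role of η in the chosen normalisation is fixed.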

5.  BCN type configurations

In this section we discuss a family of configurations of BCN type and show that it gives trigonometric solutions of the WDVV equations. Let the set $\mathcal{A}=B{C}_{N}^{+}$ consist of the following covectors:

Let us define the multiplicity function $c:B{C}_{N}^{+}\to \mathbb{C}$ by c(ei ) = r, c(2ei ) = s, c(ei ± ej ) = q, where $r,s,q\in \mathbb{C}$. We will denote the configuration $\left(B{C}_{N}^{+},c\right)$ as $B{C}_{N}^{+}\left(r,s,q\right)$. It is easy to check that

Equation (5.1)

where

Equation (5.2)

is assumed to be non-zero, and $\langle u,v\rangle ={\sum }_{i=1}^{N}{u}_{i}{v}_{i}$ is the standard inner product for u = (u1, ..., uN ), v = (v1, ..., vN ). For any α, βV*, ${\left(\alpha \wedge \beta \right)}^{2}:V\otimes V\to \mathbb{C}$ denotes the square of the covector αβ ∈ (VV)*.
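Since formulas (5.1) and (5.2) are displayed above as images only, it may help to record that direct expansion of ${G}_{\mathcal{A}}\left(u,v\right)={\sum }_{\alpha \in \mathcal{A}}{c}_{\alpha }\alpha \left(u\right)\alpha \left(v\right)$ for $\mathcal{A}=B{C}_{N}^{+}$ yields a multiple of the standard inner product with the factor r + 4s + 2(N − 1)q; this reading of h is our own computation and should be checked against the displayed equation (5.2). A symbolic confirmation for N = 4:

```python
import sympy as sp

def G_BC(N, r, s, q, u, v):
    """G_A(u, v) = sum over BC_N^+ of c_alpha alpha(u) alpha(v)."""
    total = r*sum(u[i]*v[i] for i in range(N))        # covectors e_i
    total += s*sum(2*u[i]*2*v[i] for i in range(N))   # covectors 2e_i
    for i in range(N):
        for j in range(i + 1, N):                     # covectors e_i +/- e_j
            total += q*((u[i]+u[j])*(v[i]+v[j]) + (u[i]-u[j])*(v[i]-v[j]))
    return sp.expand(total)

N = 4
r, s, q = sp.symbols('r s q')
u = sp.symbols('u1:5')
v = sp.symbols('v1:5')
h = r + 4*s + 2*(N - 1)*q   # candidate value of h, by direct expansion
inner = sum(u[i]*v[i] for i in range(N))
assert sp.expand(G_BC(N, r, s, q, u, v) - h*inner) == 0
print("G_A(u,v) = (r + 4s + 2(N-1)q) <u,v> for N =", N)
```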

Lemma 5.1. The following two identities hold:

Equation (5.3)

and

Equation (5.4)

Proof. Note that

Equation (5.5)

Equation (5.6)

and

Equation (5.7)

By adding together relations (5.5)–(5.7) we get identity (5.3).

We also have

Equation (5.8)

Equation (5.9)

Equation (5.10)

Equation (5.11)

Equation (5.12)

and

Equation (5.13)

Then by adding together identities (5.8)–(5.13) we obtain identity (5.4). □

Proposition 5.2. The quadratic forms ${G}_{\mathcal{A}}^{\left(1\right)},{G}_{\mathcal{A}}^{\left(2\right)}$ corresponding to the bilinear forms ${G}_{\mathcal{A}}^{\left(1\right)}\left(\cdot ,\cdot \right)$, ${G}_{\mathcal{A}}^{\left(2\right)}\left(\cdot ,\cdot \right)$ respectively have the following forms:

Equation (5.14)

and

Equation (5.15)

where h is given by (5.2).

Proof. Let us first prove identity (5.14). Note that ${G}_{\mathcal{A}}^{\left(1\right)}$ is a quadratic polynomial in r, s and q. The terms containing r2 add up to

Equation (5.16)

Similarly, the terms containing s2 add up to

Equation (5.17)

The terms containing rs add up to

Equation (5.18)

Now the terms containing rq have the form

Equation (5.19)

by lemma 5.1. Similarly, the terms containing sq add up to

Equation (5.20)

The terms containing q2 have the form

Equation (5.21)

Expression (5.21) is equal to

Equation (5.22)

by lemma 5.1. By adding together expressions (5.16)–(5.20) and (5.22) we get identity (5.14).

Let us now prove identity (5.15). Note that $h{G}_{\mathcal{A}}^{\left(2\right)}$ is a quadratic polynomial in r, s and q and that terms containing r2, rs and s2 all vanish. Terms containing rq in $h{G}_{\mathcal{A}}^{\left(2\right)}$ are given by

Equation (5.23)

Similarly, the terms containing sq in $h{G}_{\mathcal{A}}^{\left(2\right)}$ add up to

Equation (5.24)

Finally, the terms containing q2 in $h{G}_{\mathcal{A}}^{\left(2\right)}$ are given by

Equation (5.25)

Expression (5.25) is equal to

Equation (5.26)

by lemma 5.1. By adding together expressions (5.23), (5.24) and (5.26) we get identity (5.15). □

The previous proposition allows us to prove the following theorem.

Theorem 5.3. Prepotential (2.1) for the configuration $\left(\mathcal{A},c\right)=B{C}_{N}^{+}\left(r,s,q\right)$ satisfies WDVV equations (2.2) with

Equation (5.27)

where h is given by (5.2), provided that q(r + 8s + 2(N − 2)q) ≠ 0.

Proof. Firstly, $B{C}_{N}^{+}\left(r,s,q\right)$ is a trigonometric ∨-system by proposition 2.13. Secondly, by proposition 5.2 we have that ${G}_{\mathcal{A}}^{\left(1\right)}-\frac{{\lambda }^{2}}{4}{G}_{\mathcal{A}}^{\left(2\right)}=0$ if λ is given by (5.27). The statement follows by theorem 2.9. □

Theorem 5.3 gives a generalization of the results in [8, 20, 24, 29], where, in particular, solutions of the WDVV equations for the root systems DN , BN and CN were obtained. Following [20, 24], consider the function $\tilde {F}$ of N + 1 variables (x1, ..., xN , y) of the form

Equation (5.28)

where ${\mathcal{R}}^{+}$ is a positive half of the root system $\mathcal{R}$, the multiplicities cα are invariant under the Weyl group, $\gamma \in \mathbb{C}$, and the function $\tilde {f}$ given by

Equation (5.29)

satisfies ${\tilde {f}}^{\prime \prime \prime }\left(z\right)=\mathrm{coth}\enspace z$. Note that $\tilde {f}\left(z\right)=-f\left(-iz\right)$.

Let us now explain how our solution (2.1) for the configuration $B{C}_{N}^{+}\left(r,s,q\right)$ leads to a solution of the form (5.28).

Proposition 5.4. Function $\tilde {F}$ given by (5.28) with ${\mathcal{R}}^{+}=B{C}_{N}^{+}$ satisfies WDVV equations (2.2) if

Equation (5.30)

Proof. By formula (5.1) solution F given by (2.1) for $\mathcal{A}=B{C}_{N}^{+}$ has the form

Equation (5.31)

where we have relabelled the variables (x, y) as $\left(\tilde {x},\tilde {y}\right)$. By changing variables $\tilde {x}=-ix$, $\tilde {y}=\frac{\gamma \lambda }{2h}y$ and dividing F by −λ, solution (5.31) takes the form (5.28) provided that γ2 λ2 = −4h3, which is satisfied for γ given by (5.30). □

Let $n\in \mathbb{N}$ and let $\underline{m}=\left({m}_{1},\dots ,{m}_{n}\right)\in {\mathbb{N}}^{n}$ be such that

Equation (5.32)

Let us consider the subsystem $\mathcal{B}\subset \mathcal{A}=B{C}_{N}^{+}$ given by

Let us also consider the corresponding subspace ${W}_{\mathcal{B}}=\left\{x\in V:\beta \left(x\right)=0,\quad \forall \enspace \beta \in \mathcal{B}\right\}$. It can be given explicitly by the equations

where ξ1, ..., ξn are coordinates on ${W}_{\mathcal{B}}$. Let us now restrict the configuration $B{C}_{N}^{+}\left(r,s,q\right)$ to the subspace ${W}_{\mathcal{B}}$. That is, we consider the non-zero restricted covectors $\bar{\alpha }={\pi }_{\mathcal{B}}\left(\alpha \right),\alpha \in B{C}_{N}^{+}$, with multiplicities cα , adding up multiplicities whenever the same covector on ${W}_{\mathcal{B}}$ is obtained more than once. Let us denote the resulting configuration by $B{C}_{n}\left(q,r,s;\underline{m}\right)$. It is easy to see that it consists of the covectors

where f1, ..., fn is the basis in ${W}_{\mathcal{B}}^{{\ast}}$ corresponding to coordinates ξ1, ..., ξn .

As a corollary of theorems 4.4 and 5.3 we get the following result on an (n + 3)-parameter family of solutions of WDVV equations, which can be specialized to the (n + 1)-parameter family of solutions from [25].

Theorem 5.5. Let $\xi =\left({\xi }_{1},\dots ,{\xi }_{n}\right)\in {W}_{\mathcal{B}},y\in U\cong \mathbb{C}$. Assume that parameters r, q, s and $\underline{m}$ satisfy the relation r + 4s + 2q(mi − 1) ≠ 0 for any 1 ⩽ in. Then function

Equation (5.33)

where N is given by (5.32), satisfies the WDVV equations (4.7) if $\lambda ={\left(\frac{2{h}^{3}}{q\left(r+8s+2\left(N-2\right)q\right)}\right)}^{1/2}$, where h is given by (5.2), and $\left(r+8s+2\left(N-2\right)q\right)q\ne 0$.

Proof. We only have to check that the cubic terms in (5.33) have the required form. For any $\xi \in {W}_{\mathcal{B}}$ we have

Equation (5.34)

Note that

by formula (5.32). Hence (5.34) becomes

as required. □

6.  AN type configurations

In this section we discuss a family of configurations of type AN and show that it gives trigonometric solutions of the WDVV equations.

Let $V\subset {\mathbb{C}}^{N+1}$ be the hyperplane $V=\left\{\left({x}_{1},\dots ,{x}_{N+1}\right):{\sum }_{i=1}^{N+1}{x}_{i}=0\right\}$. Let $\mathcal{A}={A}_{N}^{+}$ be the positive half of the root system AN given by

Let $t=c\left({e}^{i}-{e}^{j}\right)\in \mathbb{C}$ be the constant multiplicity. The following lemma gives the relation between covectors in $\mathcal{A}$ and their dual vectors in V.

Lemma 6.1. We have

Proof. Let x = (x1, ..., xN+1), y = (y1, ..., yN+1) ∈ V. Then the bilinear form ${G}_{\mathcal{A}}$ takes the form

which implies the statement. □
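The displayed formulas in lemma 6.1 and its proof are not reproduced here, but the identity underlying them is elementary: on the hyperplane V one has ${\sum }_{i{< }j}\left({x}_{i}-{x}_{j}\right)\left({y}_{i}-{y}_{j}\right)=\left(N+1\right)\langle x,y\rangle $, since ${\sum }_{i}{x}_{i}={\sum }_{i}{y}_{i}=0$. A quick symbolic check for N = 3 (a sketch assuming ${G}_{\mathcal{A}}\left(x,y\right)={\sum }_{\alpha \in \mathcal{A}}{c}_{\alpha }\alpha \left(x\right)\alpha \left(y\right)$):

```python
import sympy as sp

N = 3  # work in C^{N+1} = C^4
t = sp.symbols('t')
x = list(sp.symbols('x1:5'))
y = list(sp.symbols('y1:5'))
# impose the hyperplane condition sum x_i = sum y_i = 0
x[-1] = -sum(x[:-1])
y[-1] = -sum(y[:-1])

# G_A(x, y) = t * sum_{i<j} (x_i - x_j)(y_i - y_j) for A = A_N^+
G = t*sum((x[i] - x[j])*(y[i] - y[j])
          for i in range(N + 1) for j in range(i + 1, N + 1))
inner = sum(x[i]*y[i] for i in range(N + 1))
assert sp.expand(G - t*(N + 1)*inner) == 0
print("G_A(x,y) = t (N+1) <x,y> on the hyperplane V")
```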

Now we can find the forms ${G}_{\mathcal{A}}^{\left(i\right)},i=1,2$.

Proposition 6.2. The quadratic forms ${G}_{\mathcal{A}}^{\left(1\right)},{G}_{\mathcal{A}}^{\left(2\right)}$ corresponding to the bilinear forms ${G}_{\mathcal{A}}^{\left(1\right)}\left(\cdot ,\cdot \right)$, ${G}_{\mathcal{A}}^{\left(2\right)}\left(\cdot ,\cdot \right)$ respectively have the following forms:

and

Equation (6.1)

Proof. For the first equality we have

since ${\sum }_{i=1}^{N+1}{e}^{i}{\vert }_{V}=0$. For equality (6.1) we have by lemma 6.1 that

Equation (6.2)

Note that

Equation (6.3)

since ${\sum }_{i=1}^{N+1}{e}^{i}{\vert }_{V}=0$. Also it is easy to see that

Equation (6.4)

Equality (6.1) follows from formulas (6.2)–(6.4). □

This leads us to the following result which can also be extracted from [24].

Theorem 6.3 (cf [24]). Prepotential (2.1), where $y={\sum }_{i=1}^{N+1}{x}_{i}$, for the configuration $\left(\mathcal{A},c\right)=\left({A}_{N}^{+},t\right)$ satisfies WDVV equations

where ${\left({F}_{i}\right)}_{pq}=\frac{{\partial }^{3}F}{\partial {x}_{i}\partial {x}_{p}\partial {x}_{q}},\left(p,q=1,\dots ,N+1\right)$, with

Equation (6.5)

Proof. Firstly, $\mathcal{A}$ is a trigonometric ∨-system by proposition 2.13. Secondly, by proposition 6.2 we have that

which is equal to 0 for λ given by (6.5). It follows by theorem 2.9 that F satisfies WDVV equations (2.2) as a function on the hyperplane $V\subset {\mathbb{C}}^{N+1}$ which also depends on the auxiliary variable y. Now we change variables to (x1, ..., xN+1) by putting $y={\sum }_{i=1}^{N+1}{x}_{i}$, which implies the statement. □

Let us now apply the restriction operation to the root system AN . Let $n\in \mathbb{N}$ and $\underline{m}=\left({m}_{1},\dots ,{m}_{n+1}\right)\in {\mathbb{N}}^{n+1}$ be such that ${\sum }_{i=1}^{n+1}{m}_{i}=N+1$. Let us consider the subsystem $\mathcal{B}\subset \mathcal{A}$ given as follows:

The corresponding subspace ${W}_{\mathcal{B}}$ defined by (4.2) can be given explicitly by the equations

Define covectors ${f}^{1},\dots ,{f}^{n+1}\in {W}_{\mathcal{B}}^{{\ast}}$ by the restrictions ${f}^{i}={\pi }_{\mathcal{B}}\left({e}^{{\sum }_{j=1}^{i}{m}_{j}}\right)$. The restriction of the configuration ${A}_{N}^{+}$ to the subspace ${W}_{\mathcal{B}}$ consists of the following covectors:

Equation (6.6)

The following result holds, which is closely related to a multi-parameter family of solutions found in [25] (see also [29]).

Theorem 6.4. The prepotential

Equation (6.7)

where $\xi =\left({\xi }_{1},\dots ,{\xi }_{n+1}\right)\in {\mathbb{C}}^{n+1}$ and $y={\sum }_{i=1}^{n+1}{\xi }_{i}$, satisfies WDVV equations

where ${\left({F}_{i}\right)}_{pq}=\frac{{\partial }^{3}F}{\partial {\xi }_{i}\partial {\xi }_{p}\partial {\xi }_{q}},\quad \left(p,q=1,\dots ,n+1\right)$, for any generic $t,{m}_{1},\dots ,{m}_{n+1}\in \mathbb{C}$.

Proof. Let us suppose firstly that ${m}_{i}\in \mathbb{N}$ for all i = 1, ..., n + 1. Define $N=-1+{\sum }_{i=1}^{n+1}{m}_{i}$. By theorem 6.3 function (2.1) with $\mathcal{A}={A}_{N}^{+}$ and λ given by (6.5) is a solution of WDVV equations (2.2). By theorem 4.4 the prepotential given by

Equation (6.8)

as a function on ${W}_{\mathcal{B}}\oplus \mathbb{C}$ satisfies WDVV equations. Note that

Equation (6.9)

Note also that

Equation (6.10)

and that

Equation (6.11)

By making use of relations (6.9)–(6.11) the function (6.8) takes the form

Equation (6.12)

By setting $y={\sum }_{i=1}^{n+1}{m}_{i}{\xi }_{i}$ and moving to the variables $\left({\xi }_{1},\dots ,{\xi }_{n+1}\right)\in {\mathbb{C}}^{n+1}$, solution (6.12) takes the required form (6.7). The case of complex mi follows from the above considerations since ${F}_{\mathcal{B}}$ depends on mi polynomially. □

Remark 6.5. We note that theorem 6.3 and the solution F given by (2.1) remain valid if one takes any generic linear combination of the coordinates xi to form the extra variable $y={\sum }_{i=1}^{N+1}{a}_{i}{x}_{i},{a}_{i}\in \mathbb{C}$. The corresponding solution after restriction is given by the formula

where y is a linear combination of ${\xi }_{1},\dots ,{\xi }_{n+1},{\xi }_{i}\in \mathbb{C}$.

7. Further examples in small dimensions

In section 4 we presented the method of obtaining new solutions of WDVV equations through restrictions of known solutions. We applied it to classical families of root systems in sections 5 and 6. Similarly, starting from any root system and the corresponding solution of WDVV equations one can obtain further solutions by restrictions. In the next proposition we deal with a family of configurations in a four-dimensional space which, in general, is not a restriction of a root system.

Proposition 7.1. Let a configuration $\mathcal{A}\subset {\mathbb{C}}^{4}$ consist of the following covectors:

where $p,q,r,s\in \mathbb{C}$ are such that 4r + s ≠ 0. Then $\mathcal{A}$ is a trigonometric ∨-system if

Equation (7.1)

Equation (7.2)

and ps ≠ 0. The corresponding prepotential (2.1) with

Equation (7.3)

is a solution of WDVV equations.

Proof. For $x=\left({x}_{1},{x}_{2},{x}_{3},{x}_{4}\right),y=\left({y}_{1},{y}_{2},{y}_{3},{y}_{4}\right)\in {\mathbb{C}}^{4}$ the bilinear form ${G}_{\mathcal{A}}$ is given by

To simplify notation, let us introduce the covectors

Because of the B3 × A1 symmetry it is enough to check the trigonometric ∨-conditions for the following series only:

Trigonometric ∨-conditions for the series ${{\Gamma}}_{{e}^{1}},{{\Gamma}}_{{e}^{4}},{{\Gamma}}_{{e}^{1}+{e}^{2}}$ are immediate to check. Let us consider the trigonometric ∨-condition for α1-series. We have

which implies the ∨-condition (2.13) for ${{\Gamma}}_{{\alpha }_{1}}^{1}$ since s(3q + 4s − p − 4r) − 2q(p + 4r + 2s) = 0 by relations (7.1) and (7.2).

Also we have

which implies the ∨-condition for ${{\Gamma}}_{{\alpha }_{1}}^{2}$ since s(q + p + 4r + 4s) − 2p(q + 2s) = 0 by relations (7.1) and (7.2).

Finally, we have

which implies the ∨-condition for ${{\Gamma}}_{{\alpha }_{1}}^{3}$ since s(p + 4r − q) − 4r(q + 2s) = 0 by relations (7.1) and (7.2).

Let us now find the quadratic form ${G}_{\mathcal{A}}^{\left(1\right)}$. By straightforward calculations we get

Equation (7.4)

Now let us find the quadratic form ${G}_{\mathcal{A}}^{\left(2\right)}$. We have

Equation (7.5)

By making further use of relations (7.1) and (7.2) the expression (7.5) can be simplified to the form

Equation (7.6)

The final statement of the proposition follows from formulas (7.4), (7.6) and theorem 2.9. □

Remark 7.2. We note that for special values of the parameters the configuration $\mathcal{A}$ is a restriction of a root system (cf [16], where the rational version of this configuration was considered). Thus if r = 0 and p = q = s then $\mathcal{A}$ reduces to the root system D4. If r = 1, s = 4, then p = 6 and q = 1 and the resulting configuration is the restriction of the root system E7 along a subsystem of type A3. If s = 2r then the resulting configuration is the restriction of the root system E6 along a subsystem of type A1 × A1.

Further solutions of WDVV equations can be obtained from proposition 7.1 by restricting the configuration $\mathcal{A}$.

Proposition 7.3. Let ${\mathcal{A}}_{1}\subset {\mathbb{C}}^{3}$ be the configuration

with the corresponding multiplicities {r, 2p, p, q, 2r, 2s, s}, where $p,q,r,s\in \mathbb{C}$. Let configuration ${\mathcal{A}}_{2}\subset {\mathbb{C}}^{3}$ consist of the following set of covectors:

Suppose that relations (7.1) and (7.2) hold and that ps(4r + s) ≠ 0. Then ${\mathcal{A}}_{1},{\mathcal{A}}_{2}$ are trigonometric ∨-systems which also define solutions of WDVV equations given by formula (2.1) with λ given by (7.3).

The proof of this proposition follows from the observation that the configuration ${\mathcal{A}}_{1}$ can be obtained from the configuration $\mathcal{A}$ from proposition 7.1 by restricting it to the hyperplane x1 = x2 (up to renaming the vectors). Similarly, the configuration ${\mathcal{A}}_{2}$ can be obtained by restricting the configuration $\mathcal{A}$ to the hyperplane x1 + x2 + x3 − x4 = 0 (again up to renaming the vectors). The other three-dimensional restrictions of the configuration $\mathcal{A}$ give a restriction of the root system F4 and a configuration from the BC3 family.

Rational versions of configurations ${\mathcal{A}}_{1},{\mathcal{A}}_{2}$ were considered in [16]. Note that configuration ${\mathcal{A}}_{1}$ has collinear vectors 2e1, e1, so its rational version has different size.

Two-dimensional restrictions of $\mathcal{A}$ are considered below in propositions 7.6 and 7.7; they can also belong to the BC2 family of configurations, have the form of the configuration G2, or appear in [18, proposition 5].

Let us now consider examples of solutions (2.1) of WDVV equations where the configuration $\mathcal{A}$ contains a small number of vectors in the plane. The next two propositions confirm that trigonometric ∨-systems with up to five covectors belong to the A2 or BC2 families.

Proposition 7.4. Any irreducible trigonometric ∨-system $\mathcal{A}\subset {\mathbb{C}}^{2}$ consisting of three vectors with non-zero multiplicities has the form (6.6) where n = 2 in a suitable basis for some values of parameters.

Proof. By [18, proposition 2] any such configuration has the form $\mathcal{A}=\left\{\alpha ,\beta ,\gamma \right\}$ with the corresponding multiplicities {cα , cβ , cγ }, where vectors in $\mathcal{A}$ satisfy α ± β ± γ = 0 for some choice of signs. It is easy to see that equations

for ${m}_{1},{m}_{2},{m}_{3},t\in \mathbb{C}$ can be solved. □

Proposition 7.5. Any irreducible trigonometric ∨-system $\mathcal{A}\subset {\mathbb{C}}^{2}$ consisting of four or five vectors with non-zero multiplicities has the form $B{C}_{2}\left(r,s,q;\underline{m}\right)$ in a suitable basis for some values of parameters.

Proof. By [18, proposition 3] any irreducible trigonometric ∨-system $\mathcal{A}$ consisting of four vectors has the form $\mathcal{A}=\left\{2{e}^{1},2{e}^{2},{e}^{1}{\pm}{e}^{2}\right\}$ in a suitable basis, and the corresponding multiplicities {c1, c2, c0} where c0 ≠ −2ci for i = 1, 2. Now we require parameters r, s, q, m1, m2 to satisfy

which can be done by taking

By [18, proposition 4] any irreducible trigonometric ∨-system $\mathcal{B}$ consisting of five vectors in a suitable basis has the form $\mathcal{B}=\left\{{e}^{1},2{e}^{1},{e}^{2},{e}^{1}{\pm}{e}^{2}\right\}$, and the corresponding multiplicities $\left\{{c}_{1},\tilde {{c}_{1}},{c}_{2},{c}_{{\pm}}\right\}$ satisfy ${c}_{+}={c}_{-}$ and $2\tilde {{c}_{1}}{c}_{2}={c}_{+}\left({c}_{1}-{c}_{2}\right)$, where $\left({c}_{1}+4\tilde {{c}_{1}}+2{c}_{+}\right)\left({c}_{2}+2{c}_{+}\right)\ne 0$. In order to compare the configuration $\mathcal{B}$ with the configuration $B{C}_{2}\left(r,s,q;\underline{m}\right)$, we require parameters r, s, q, m1, m2 to satisfy

These equations can be solved by taking

In the rest of this section we give more examples of trigonometric ∨-systems on the plane, which can be checked directly or using theorem 2.9. The configuration in the following proposition can be obtained by restricting configuration ${\mathcal{A}}_{1}$ from proposition 7.3 to the plane 2x1 + x2x3 = 0.

Proposition 7.6. Let $\mathcal{A}=\left\{{e}^{1},2{e}^{1},{e}^{2},{e}^{1}+{e}^{2},{e}^{1}-{e}^{2},2{e}^{1}+{e}^{2}\right\}\subset {\mathbb{C}}^{2}$ with the corresponding multiplicities $\left\{4a,a,2a,2a,2\left(a-b\right),\frac{2ab}{4a-3b}\right\}$, where 4a − 3b ≠ 0. Then $\mathcal{A}$ is a trigonometric ∨-system provided that a(2ab) ≠ 0. The corresponding solution of the WDVV equations has the form (2.1) with $\lambda =6\sqrt{3}\left(2a-b\right){\left(4a-3b\right)}^{-1/2}$.

The configuration in the following proposition can be obtained by restricting configuration ${\mathcal{A}}_{1}$ from proposition 7.3 to the plane x3 = 0.

Proposition 7.7. Let $\mathcal{A}=\left\{{e}^{1},2{e}^{1},{e}^{2},2{e}^{2},{e}^{1}{\pm}{e}^{2},{e}^{1}{\pm}2{e}^{2}\right\}\subset {\mathbb{C}}^{2}$ with the corresponding multiplicities $\left\{2a,\frac{a}{2}-\frac{b}{4},2b,a,b,a-\frac{b}{2}\right\}$, where a ≠ 0. Then $\mathcal{A}$ is a trigonometric ∨-system and the corresponding solution of the WDVV equations has the form (2.1) with $\lambda =6\sqrt{6}a{\left(4a-b\right)}^{-1/2}$.

In the next two propositions we give examples of trigonometric ∨-systems with nine and ten covectors on the plane.

Proposition 7.8 (cf [18]). Let $\mathcal{A}=\left\{{e}^{1},2{e}^{1},{e}^{2},{e}^{1}{\pm}{e}^{2},\frac{1}{2}\left(3{e}^{1}{\pm}{e}^{2}\right),\frac{1}{2}\left({e}^{1}{\pm}{e}^{2}\right)\right\}\subset {\mathbb{C}}^{2}$ with the corresponding multiplicities $\left\{a,b,\frac{a}{3},b,\frac{a}{3},a\right\}$. Then $\mathcal{A}$ is a trigonometric ∨-system provided that a ≠ −2b. The corresponding solution of the WDVV equations has the form (2.1) with λ = 6(a + 2b)(a + 4b)−1/2.

Note that if b = 0 then after rescaling ${e}^{2}\to \sqrt{2}{e}^{2}$ this configuration reduces to the positive half of the root system G2.

Proposition 7.9 (cf [18]). Let $\mathcal{A}=\left\{{e}^{1},2{e}^{1},{e}^{2},2{e}^{2},{e}^{1}{\pm}{e}^{2},{e}^{1}{\pm}2{e}^{2},2{e}^{1}{\pm}{e}^{2}\right\}\subset {\mathbb{C}}^{2}$ with the corresponding multiplicities $\left\{6a,\frac{3a}{2},6a,\frac{3a}{2},4a,a,a\right\}$. Then $\mathcal{A}$ is a trigonometric ∨-system provided that a ≠ 0. The corresponding solution of the WDVV equations has the form (2.1) with λ = 15a1/2.
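As a sanity check on the data in proposition 7.9, one can verify directly that the bilinear form ${G}_{\mathcal{A}}\left(x,y\right)={\sum }_{\alpha \in \mathcal{A}}{c}_{\alpha }\alpha \left(x\right)\alpha \left(y\right)$ of this ten-covector configuration is proportional to the standard inner product; direct expansion gives the factor 30a (our computation, not stated in the proposition):

```python
import sympy as sp

a, x1, x2, y1, y2 = sp.symbols('a x1 x2 y1 y2')
# configuration of proposition 7.9: covector coefficients -> multiplicity
config = {(1, 0): 6*a, (2, 0): sp.Rational(3, 2)*a,
          (0, 1): 6*a, (0, 2): sp.Rational(3, 2)*a,
          (1, 1): 4*a, (1, -1): 4*a,
          (1, 2): a, (1, -2): a,
          (2, 1): a, (2, -1): a}

G = sum(c*(p*x1 + q*x2)*(p*y1 + q*y2) for (p, q), c in config.items())
# direct expansion shows G_A is proportional to the standard inner product
assert sp.expand(G - 30*a*(x1*y1 + x2*y2)) == 0
print("G_A = 30 a <.,.> for the configuration of proposition 7.9")
```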

8. Root systems solutions revisited

Following [20, 24], recall that WDVV equations (2.2) have solutions of the form

Equation (8.1)

where $\mathcal{R}\subset {V}^{{\ast}}$ is a root system of rank N, the multiplicities cα and the inner product ⟨⋅, ⋅⟩ are invariant under the Weyl group, $\gamma ={\gamma }_{\left(\mathcal{R},c\right)}\in \mathbb{C}$ and the function $\tilde {f}$ is given by (5.29). The corresponding values of ${\gamma }_{\left(\mathcal{R},c\right)}$ were given explicitly in [20, 24] for constant multiplicity functions cα = t  ∀α (except for $\mathcal{R}=B{C}_{N},{G}_{2}$); they were found in [8] for special multiplicities and in [28, 29] for an arbitrary (non-reduced) root system $\mathcal{R}$ with invariant multiplicity. For root systems of type E we have

Similarly to the analysis of the BCN case in section 5, these solutions lead to solutions F of the form (2.1) for $\mathcal{A}={\mathcal{R}}^{+}$, and the corresponding values of $\lambda ={\lambda }_{\left(\mathcal{R},c\right)}$ are given by

Equation (8.2)

We recall that ${\lambda }_{\left(\mathcal{R},c\right)}$, in contrast to ${\gamma }_{\left(\mathcal{R},c\right)}$, is invariant under linear transformations applied to $\mathcal{R}$. An alternative way to derive the values (8.2) is to apply theorem 4.4 to already known solutions. Thus ${\lambda }_{\left({E}_{6},t\right)}$ can be derived, for example, by considering the four-dimensional restriction of E6 along a subsystem of type A1 × A1, as this restriction is equivalent to the configuration from proposition 7.1 when the parameter s = 2r. Likewise, the restriction of E7 along a subsystem of type A3 gives the configuration from proposition 7.1 with r = 1 and s = 4. Similarly, the restriction of E8 along a subsystem of type D6 gives a configuration of type BC2, which allows one to get ${\lambda }_{\left({E}_{8},t\right)}$.

Let us now find ${\lambda }_{\left(\mathcal{R},c\right)}$ for the remaining cases, namely, $\mathcal{R}={F}_{4}$ and $\mathcal{R}={G}_{2}$, and general multiplicity functions. We start with the root system $\mathcal{R}={F}_{4}$.

Proposition 8.1. Let $\mathcal{A}={F}_{4}^{+}$ be the positive half of the root system F4 with the multiplicity function c given by

Equation (8.3)

where $r,s\in \mathbb{C}$. Then in the corresponding solution (2.1) of the WDVV equations (2.2) we have

Equation (8.4)

Proof. We note that the restriction of the configuration defined in proposition 7.1 to the hyperplane x4 = 0 gives the same configuration as one gets by restricting $\mathcal{A}={F}_{4}^{+}$ to the hyperplane x4 = 0. Hence λ is given by formula (7.3). □

Proposition 8.1 has the following implication for the corresponding solution of the form (8.1), which is also contained in [29].

Proposition 8.2 [29]. For $\mathcal{R}={F}_{4}$ with the multiplicity function (8.3) we have

Proof. We have ${\sum }_{\alpha \in {F}_{4}^{+}}{c}_{\alpha }\alpha {\left(x\right)}^{2}=3\left(s+2r\right){\sum }_{i=1}^{4}{x}_{i}^{2}$. Then the solution F given by (2.1) for $\mathcal{A}={F}_{4}^{+}$ takes the form

Equation (8.5)

where λ is given by (8.4), and we have relabelled the variables (x, y) as $\left(\tilde {x},\tilde {y}\right)$. By dividing F by −λ and changing variables $\tilde {x}=-ix$, $\tilde {y}=\frac{\gamma \lambda }{6\left(s+2r\right)}y$, the solution (8.5) takes the form (8.1) provided that γ2 λ2 = −108(s + 2r)2, which implies the statement. □
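The quadratic identity at the start of the proof can be checked symbolically. The sketch below assumes (since equation (8.3) is not reproduced here) that the multiplicity function assigns s to the 12 short positive roots $e_i$, $\frac{1}{2}(e^1\pm e^2\pm e^3\pm e^4)$ and r to the 12 long positive roots $e_i\pm e_j$, which is consistent with the stated value $3(s+2r)\sum_{i=1}^4 x_i^2$:

```python
import itertools
import sympy as sp

x = sp.symbols('x1:5')          # coordinates x1, ..., x4
r, s = sp.symbols('r s')

e = sp.eye(4)                   # rows are the standard basis vectors e1..e4

# positive roots of F4 with the assumed multiplicities:
# short roots e_i and (e1 +- e2 +- e3 +- e4)/2 get s, long roots e_i +- e_j get r
roots = []
for i in range(4):
    roots.append((e[i, :], s))                        # e_i, short
for i, j in itertools.combinations(range(4), 2):
    roots.append((e[i, :] + e[j, :], r))              # e_i + e_j, long
    roots.append((e[i, :] - e[j, :], r))              # e_i - e_j, long
for signs in itertools.product([1, -1], repeat=3):
    v = (e[0, :] + signs[0]*e[1, :] + signs[1]*e[2, :] + signs[2]*e[3, :]) / 2
    roots.append((v, s))                              # half-sum roots, short

total = sp.expand(sum(c * sum(v[k]*x[k] for k in range(4))**2 for v, c in roots))
target = sp.expand(3*(s + 2*r) * sum(xi**2 for xi in x))
print(sp.simplify(total - target))  # 0
```

The long roots contribute $6r\sum x_i^2$ and the short roots $3s\sum x_i^2$, which together give the coefficient $3(s+2r)$ used in the proof.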

Let us now find the value of λ for $\mathcal{R}={G}_{2}$.

Proposition 8.3. Let $\mathcal{A}={G}_{2}^{+}$ be the positive half of the root system G2 with the multiplicity function given by

Equation (8.6)

where $q,p\in \mathbb{C}$. Then in the corresponding solution (2.1) of the WDVV equations (2.2) we have

Equation (8.7)

Proof. Note that by restricting the configuration ${\mathcal{A}}_{2}$ defined in proposition 7.3 to the hyperplane x1 + x2 + x3 = 0 we get the two-dimensional configuration

which can be mapped to the configuration G2 by a linear transformation. The corresponding multiplicities satisfy

which implies the statement by proposition 7.3 and theorem 4.4. □

Proposition 8.3 has the following implication for the corresponding solution of the form (8.1), which is also contained in [29].

Proposition 8.4 [29]. For $\mathcal{R}={G}_{2}$ with the multiplicity function (8.6) we have

Proof. We have ${\sum }_{\alpha \in {G}_{2}^{+}}{c}_{\alpha }\alpha {\left(x\right)}^{2}=\frac{3}{2}\left(p+3q\right)\left({x}_{1}^{2}+{x}_{2}^{2}\right)$. Then the solution F given by (2.1) for $\mathcal{A}={G}_{2}^{+}$ takes the form

Equation (8.8)

where λ is given by (8.7), and we have relabelled the variables (x, y) as $\left(\tilde {x},\tilde {y}\right)$. By dividing F by −λ and changing variables $\tilde {x}=-ix$, $\tilde {y}=\frac{\gamma \lambda }{3\left(p+3q\right)}y$, the solution (8.8) takes the form (8.1) provided that ${\gamma }^{2}{\lambda }^{2}=\frac{27}{2}{\left(p+3q\right)}^{3}$, which implies the statement. □
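The analogous G2 identity can be verified in the same way. The sketch below uses the simple roots listed in proposition 8.6 and assumes, in line with the convention stated before proposition 8.6, that (8.6) assigns multiplicity p to the short roots and q to the long roots:

```python
import sympy as sp

x1, x2, p, q = sp.symbols('x1 x2 p q')
a1 = sp.Matrix([sp.sqrt(3)/2, sp.Rational(-3, 2)])   # long simple root alpha_1
a2 = sp.Matrix([0, 1])                                # short simple root alpha_2

# positive roots of G2, split by length; assumed multiplicities p (short), q (long)
short = [a2, a1 + a2, a1 + 2*a2]
long_ = [a1, a1 + 3*a2, 2*a1 + 3*a2]                  # 2*a1 + 3*a2 is the highest root

def sq(v):
    # alpha(x)^2 for a root alpha given as a coordinate vector v
    return (v[0]*x1 + v[1]*x2)**2

total = sp.expand(p*sum(sq(v) for v in short) + q*sum(sq(v) for v in long_))
target = sp.expand(sp.Rational(3, 2)*(p + 3*q)*(x1**2 + x2**2))
print(sp.simplify(total - target))  # 0
```

Here the short roots contribute $\frac{3}{2}p(x_1^2+x_2^2)$ and the long roots $\frac{9}{2}q(x_1^2+x_2^2)$, giving the coefficient $\frac{3}{2}(p+3q)$ of the proof.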

Solutions of the WDVV equations of the form (8.1) were also obtained in [8]. More precisely, consider the multiplication * on the tangent space ${T}_{(x,y)}(V\oplus U)\cong V\oplus U$, where dim U = 1, $x\in V$, $y\in U$, which is given by

Equation (8.9)

$\tilde {\gamma }={\tilde {\gamma }}_{\left(\mathcal{R},c\right)}\in \mathbb{C}$ and $E\in U$ is the identity of *. It was shown in [8] that this multiplication is associative. It can be seen (cf section 2 above) that the associativity of (8.9) is equivalent to the statement that the function

Equation (8.10)

where ${d}_{\alpha }=\frac{{c}_{\alpha }}{\langle \alpha ,\alpha \rangle }$, satisfies the WDVV equations; hence $\tilde {\gamma }={\tilde {\gamma }}_{\left(\mathcal{R},c\right)}={\gamma }_{\left(\mathcal{R},d\right)}$.

Let {α1, ..., αN } be a basis of simple roots of $\mathcal{R}$. Recall that there exists the highest root $\theta ={\theta }_{\mathcal{R}}={\sum }_{i=1}^{N}{n}_{i}{\alpha }_{i}\in \mathcal{R}$ such that, for every $\beta ={\sum }_{i=1}^{N}{p}_{i}{\alpha }_{i}\in \mathcal{R}$, we have ${n}_{i}{\geqslant}{p}_{i}$ for all i = 1, ..., N [6]. The constant $\tilde {\gamma }={\tilde {\gamma }}_{\left(\mathcal{R},c\right)}$ was expressed in [8] in terms of the highest root of the root system $\mathcal{R}$.

Proposition 8.5 [8]. The value of ${\tilde {\gamma }}_{\left(\mathcal{R},c\right)}$ in the solution (8.10) in the case of the constant multiplicity function cα = t is given by

Now we give a generalisation of proposition 8.5 to the case of a non-constant multiplicity function. Let p be the multiplicity of the short roots and q be the multiplicity of the long roots in a reduced non-simply laced root system $\mathcal{R}$.

Proposition 8.6. We have

Equation (8.11)

where scalars ai for all irreducible reduced non-simply laced root systems are given as follows:

  • (a)  
    Let $\mathcal{R}={B}_{N}$ with the basis of simple roots α1 = e1 − e2, ..., αN−1 = eN−1eN, αN = eN. Then
    Equation (8.12)
  • (b)  
    Let $\mathcal{R}={C}_{N}$ with the basis of simple roots α1 = e1 − e2, ..., αN−1 = eN−1eN, αN = 2eN. Then
    Equation (8.13)
  • (c)  
    Let $\mathcal{R}={F}_{4}$ with the basis of simple roots α1 = e2 − e3, α2 = e3e4, α3 = e4, ${\alpha }_{4}=\frac{1}{2}\left({e}^{1}-{e}^{2}-{e}^{3}-{e}^{4}\right)$. Then
    Equation (8.14)
  • (d)  
    Let $\mathcal{R}={G}_{2}$ with the basis of simple roots ${\alpha }_{1}=\frac{\sqrt{3}{e}^{1}}{2}-\frac{3{e}^{2}}{2},{\alpha }_{2}={e}^{2}$. Then
    Equation (8.15)

Proof. It follows from proposition 5.4 that

Note that ${\theta }_{{B}_{N}}={e}^{1}+{e}^{2}={\alpha }_{1}+2\left({\alpha }_{2}+\cdots +{\alpha }_{N}\right)$. Then it is easy to see that the substitution of (8.12) into formula (8.11) gives the same value of ${\tilde {\gamma }}_{\left({B}_{N},c\right)}$. Similarly, we have

which is equal to the value given by formula (8.11) after substituting ai from (8.13) and using ${\theta }_{{C}_{N}}=2{e}^{1}=2\left({\alpha }_{1}+\cdots +{\alpha }_{N-1}\right)+{\alpha }_{N}$. It follows from proposition 8.2 that

Note that ${\theta }_{{F}_{4}}={e}^{1}+{e}^{2}=2{\alpha }_{1}+3{\alpha }_{2}+4{\alpha }_{3}+2{\alpha }_{4}$. Then it is easy to see that the substitution of values (8.14) into formula (8.11) gives the same value of ${\tilde {\gamma }}_{\left({F}_{4},c\right)}$. Similarly, it follows from proposition 8.4 that

which is equal to the expression in formula (8.11) after the substitution of (8.15) and by using ${\theta }_{{G}_{2}}=\sqrt{3}{e}^{1}=2{\alpha }_{1}+3{\alpha }_{2}$. □
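The four highest-root expansions used in the proof can be confirmed by a short linear-algebra check. The sketch below (with N = 4 chosen for the B and C families purely for illustration) solves $\theta =\sum_i n_i\alpha_i$ in the bases of simple roots listed in proposition 8.6:

```python
import numpy as np

# simple roots (as rows) in the coordinates of proposition 8.6, with N = 4
B4 = np.array([[1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [0, 0, 0, 1]], float)
C4 = np.array([[1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [0, 0, 0, 2]], float)
F4 = np.array([[0, 1, -1, 0], [0, 0, 1, -1], [0, 0, 0, 1], [.5, -.5, -.5, -.5]])
G2 = np.array([[np.sqrt(3)/2, -1.5], [0, 1]])

cases = {
    'B4': (B4, [1, 1, 0, 0], [1, 2, 2, 2]),        # theta = e1+e2 = a1+2(a2+a3+a4)
    'C4': (C4, [2, 0, 0, 0], [2, 2, 2, 1]),        # theta = 2e1 = 2(a1+a2+a3)+a4
    'F4': (F4, [1, 1, 0, 0], [2, 3, 4, 2]),        # theta = e1+e2 = 2a1+3a2+4a3+2a4
    'G2': (G2, [np.sqrt(3), 0], [2, 3]),           # theta = sqrt(3)e1 = 2a1+3a2
}
for name, (simples, theta, coeffs) in cases.items():
    # solve theta = sum_i n_i alpha_i and compare with the stated coefficients
    n = np.linalg.solve(simples.T, np.array(theta, float))
    assert np.allclose(n, coeffs), name
print('all expansions confirmed')
```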

It is not clear to us how to formulate proposition 8.6 for any non-simply laced (reduced) root system in a uniform way.

Let us also give another formula for ${\tilde {\gamma }}_{\left(\mathcal{R},c\right)}$ in terms of the dual root system ${\mathcal{R}}^{\vee }=\left\{{\beta }^{\vee }:\beta \in \mathcal{R}\right\}$, where ${\beta }^{\vee }=\frac{2\beta }{\langle \beta ,\beta \rangle }$. Then we have

Equation (8.16)

where coefficients ${\bar{n}}_{i}\in {\mathbb{Z}}_{{\geqslant}0}$ are determined by the expansion ${\theta }^{\vee }={\sum }_{i=1}^{N}{\bar{n}}_{i}{\alpha }_{i}^{\vee }$. Formula (8.16) follows from formula (8.11) by observing the relation ${\bar{n}}_{i}=\frac{{n}_{i}\langle {\alpha }_{i},{\alpha }_{i}\rangle }{\langle \theta ,\theta \rangle }$ for 1 ⩽ iN.
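The relation between ${\bar{n}}_{i}$ and ${n}_{i}$ can be checked numerically; the sketch below does this for F4, using the root data of proposition 8.6:

```python
import numpy as np

# F4 simple roots (rows) and highest root, in the coordinates of proposition 8.6
simples = np.array([[0, 1, -1, 0], [0, 0, 1, -1], [0, 0, 0, 1], [.5, -.5, -.5, -.5]])
theta = np.array([1., 1., 0., 0.])                   # theta_{F4} = e1 + e2
n = np.linalg.solve(simples.T, theta)                # theta = sum_i n_i alpha_i

dual = lambda b: 2*b/np.dot(b, b)                    # beta -> beta^vee
dual_simples = np.array([dual(a) for a in simples])
nbar = np.linalg.solve(dual_simples.T, dual(theta))  # theta^vee = sum_i nbar_i alpha_i^vee

# verify nbar_i = n_i <alpha_i, alpha_i> / <theta, theta>
norms = np.array([np.dot(a, a) for a in simples])
assert np.allclose(nbar, n*norms/np.dot(theta, theta))
print(nbar)  # [2. 3. 2. 1.]
```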

Acknowledgments

MF is grateful to L Hoevenaars for collaboration in the beginning of the work, and to N Nabijou and M Pavlov for stimulating discussions. We are grateful to I Strachan for pointing out paper [28] to us, and to B Vlaar for advice on his Mathematica code. The work of MA was funded by Imam Abdulrahman Bin Faisal University, Kingdom of Saudi Arabia, Dammam.

Footnotes

  • In memory of Jonathan Nimmo
