Article

On Some Symmetries of Quadratic Systems

Maoan Han, Tatjana Petek and Valery G. Romanovski *
1 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
2 Faculty of Electrical Engineering and Computer Science, University of Maribor, SI-2000 Maribor, Slovenia
3 Institute of Mathematics, Physics and Mechanics, SI-1000 Ljubljana, Slovenia
4 Center for Applied Mathematics and Theoretical Physics, SI-2000 Maribor, Slovenia
5 Faculty of Natural Science and Mathematics, University of Maribor, SI-2000 Maribor, Slovenia
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(8), 1300; https://doi.org/10.3390/sym12081300
Submission received: 29 May 2020 / Revised: 14 July 2020 / Accepted: 31 July 2020 / Published: 4 August 2020

Abstract

We provide a general method for identifying real quadratic polynomial dynamical systems that can be transformed to symmetric ones by a bijective polynomial map of degree one, the so-called affine map. We mainly focus on symmetry groups generated by rotations; in other words, we treat equivariant and reversible equivariant systems. The description is given in terms of affine varieties in the space of parameters of the system. A general algebraic approach to finding subfamilies of systems having certain symmetries in polynomial differential families depending on many parameters is proposed, and computer algebra computations for the planar case are presented.

1. Introduction

Studying various symmetries of dynamical systems is important for several reasons. Systems whose phase portraits possess a rotational symmetry are interesting because they are related to the second part of Hilbert's 16th problem, and the existence of such systems can lead to the construction of families with many limit cycles. Another reason for investigating symmetries is their connection with integrability. It is well known that an elementary singular point of a time-reversible polynomial system of degree two is always integrable [1]. It can be either a center or an integrable saddle [2], and it thus cannot be a focus. A similar conclusion is valid for much larger families of systems, as was shown in [2], where we found affine varieties in the space of parameters of a quadratic planar system which can be brought to a time-reversible one by a bijective linear transformation. Such a transformation preserves the integrability of a singular point at the origin. So, if a quadratic planar system with an isolated non-degenerate trace-zero singular point at the origin admits a transformation to a time-reversible system, the origin is automatically integrable.
In this paper we first treat affine transformations to symmetric systems (n-dimensional, n ≥ 2) in a unified and rather general way. Second, for planar quadratic systems, we calculate the varieties in the space of parameters of the systems which can be transformed to rotationally (reversible) equivariant systems by an affine or merely linear (i.e., with no translation involved) transformation.
Let us fix some notation. All vectors from R^n will be typeset in boldface. When we mention a linear map from R^n to R^n, we have in mind an additive, homogeneous (of degree one) map (which sends 0 to 0). By an affine map we mean a composition of a linear map and a translation. Linear maps will be denoted by capital letters and general (not necessarily linear) maps by calligraphic letters. The symbols Id or Id_n stand for the identity matrix, and A^tr denotes the transpose of a matrix A.
Our paper is organized as follows. In Section 2 we describe the general properties of (reversible) symmetric systems in R^n, and in Section 3 we compute the algebraic varieties in parameter space corresponding to planar quadratic systems that can be transformed to a (reversible) symmetric one by an affine or linear transformation. We end the paper with some examples in Section 4.

2. Equivariant and Reversible Equivariant Systems in R^n

Let us start with a rather general definition of a symmetry of a dynamical system. Throughout the paper, we will be interested in smooth, mostly polynomial dynamical systems of the form
\[
\frac{dx}{dt} = F(x), \qquad x(t) \in \mathbb{R}^n. \tag{1}
\]
Following Lamb et al. [3,4], we say that a bijective map B : R^n → R^n is a symmetry of system (1) when
\[
\frac{d(B\circ\varphi)}{dt} = F\circ B\circ\varphi \tag{2}
\]
for each trajectory φ : t ↦ x(t) of system (1). A map C : R^n → R^n is called a reversible symmetry of system (1) when
\[
\frac{d(C\circ\varphi)}{dt} = -\,F\circ C\circ\varphi \tag{3}
\]
for each trajectory φ of (1). Condition (2) implies that the system is invariant under the transformation (x, t) ↦ (B(x), t), and (3) implies the invariance under the map (x, t) ↦ (C(x), −t).
We will only be interested in linear (reversible) symmetries. Let B : R^n → R^n be a linear map. Then, by linearity, d(B∘φ)/dt = B dφ/dt = B F(φ(t)). Now, if for some regular n × n matrix B and a fixed ε ∈ {1, −1}
\[
F(Bx) = \varepsilon\, B F(x) \tag{4}
\]
holds for every x ∈ R^n, then B is a symmetry if ε = 1, and it is a reversible symmetry if ε = −1.
Let Γ be a finite cyclic group of invertible linear operators (matrices) on R^n. We say that system (1) is Γ-equivariant if every B ∈ Γ is a symmetry of the system. Further, system (1) is reversible Γ-equivariant if there exists a non-trivial homomorphism σ : Γ → {1, −1} such that for every B ∈ Γ,
\[
F(Bx) = \sigma(B)\, B F(x) \tag{5}
\]
holds for every x ∈ R^n. Obviously, the elements B ∈ Γ for which σ(B) = −1 are reversing symmetries, and those B with σ(B) = 1 are symmetries. By successive application of (5) we get
\[
F(B^k x) = \sigma(B)^k B^k F(x) \tag{6}
\]
for all x ∈ R^n and all integers k. It easily follows that all even powers of a reversible symmetry are symmetries and all odd powers of a reversible symmetry are reversible symmetries. Therefore, for a non-trivial reversible Γ-equivariant system to exist, the order of Γ must be even, as is easily seen by inserting the order of Γ for k in (6).
In the next proposition we will see that information about a single generator of the group suffices.
Proposition 1.
A system (1) is Γ-equivariant if and only if any fixed generator of Γ is a symmetry. A system (1) is reversible Γ-equivariant if and only if some fixed generator of Γ is a reversible symmetry.
Proof. 
By application of (6), taking σ(R) = −1 for the chosen generator R of Γ in the reversible case, the claims easily follow. □
We recall the definition of the Kronecker product of matrices, related to the tensor product of operators. The Kronecker product of A = [a_ij] ∈ R^{m,n} and B = [b_ij] ∈ R^{p,q} is defined as the block matrix A ⊗ B = [a_ij B] ∈ R^{mp,nq}. It is well known that the mixed-product rule (A ⊗ B)(C ⊗ D) = AC ⊗ BD holds for any matrices A, B, C and D for which the products AC and BD are well defined.
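As a quick illustration (ours, not part of the original paper), the mixed-product rule can be checked numerically with NumPy's kron; the matrices below are arbitrary random examples.

```python
import numpy as np

# Numerical check (not from the paper) of the mixed-product rule
# (A ⊗ B)(C ⊗ D) = (A C) ⊗ (B D) for randomly chosen matrices.
rng = np.random.default_rng(0)
A, C = rng.normal(size=(3, 2)), rng.normal(size=(2, 4))   # A C is 3 x 4
B, D = rng.normal(size=(2, 3)), rng.normal(size=(3, 2))   # B D is 2 x 2

lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
print(np.allclose(lhs, rhs))   # True
```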
We shall now restrict ourselves to quadratic dynamical systems of n equations in n unknown functions x = (x_1(t), x_2(t), …, x_n(t))^tr. The way we express our system is a bit unusual, but it will provide some advantages for our consideration. Let us write
\[
\dot{x} = f_0 + F_1 x + (\mathrm{Id}\otimes x^{\rm tr})\, G\, x, \tag{7}
\]
where f_0 ∈ R^n, F_1 is an n × n real matrix and G = (G_1, …, G_n)^tr is an n² × n matrix with symmetric blocks G_1, …, G_n, that is, G_k = G_k^tr, k = 1, 2, …, n. Each of the matrices G_k is the symmetric matrix arising from the quadratic form in the k-th equation; for example, a x² + 2 b x y + c y² = x^tr \begin{pmatrix} a & b \\ b & c \end{pmatrix} x with x = (x, y)^tr. For the reader's convenience, Id_n ⊗ x^tr is the block-diagonal n × n² matrix with x^tr sitting in the diagonal blocks.
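The following sketch (ours, with made-up coefficients) shows how the representation (7) is evaluated in practice for n = 2: the Kronecker expression (Id ⊗ x^tr) G x reproduces the componentwise quadratic forms x^tr G_k x.

```python
import numpy as np

# Sketch (ours): evaluate F(x) = f0 + F1 x + (Id ⊗ x^tr) G x for n = 2 and
# check that it agrees with the componentwise quadratic forms x^tr G_k x.
n = 2
f0 = np.array([0.0, 0.0])
F1 = np.array([[1.0, -2.0],
               [3.0,  0.5]])
G1 = np.array([[1.0, 0.5],
               [0.5, -1.0]])          # symmetric block of the first equation
G2 = np.array([[0.0, 2.0],
               [2.0, 3.0]])           # symmetric block of the second equation
G = np.vstack([G1, G2])               # G = (G1, G2)^tr, an n^2 x n matrix

def F(x):
    blockdiag_xtr = np.kron(np.eye(n), x.reshape(1, n))   # Id_n ⊗ x^tr, size n x n^2
    return f0 + F1 @ x + blockdiag_xtr @ G @ x

x = np.array([0.3, -1.2])
quad_direct = np.array([x @ G1 @ x, x @ G2 @ x])
print(np.allclose(F(x), f0 + F1 @ x + quad_direct))        # True
```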
The algebraic conditions for a (reversible) Γ-equivariant quadratic system are given in the following statement.
Theorem 1.
Let Γ = {R, R², …, R^{q−1}, Id} be an order-q cyclic group of invertible n × n matrices generated by a matrix R, and let ε ∈ {1, −1}. System (7) is Γ-equivariant (reversible Γ-equivariant, resp.) if and only if the following equations are satisfied:
\[
R f_0 - \varepsilon f_0 = 0, \tag{8}
\]
\[
R F_1 - \varepsilon F_1 R = 0, \tag{9}
\]
\[
G R - \varepsilon \,\big(R \otimes (R^{\rm tr})^{-1}\big)\, G = 0, \tag{10}
\]
with ε = 1 for Γ-equivariance and ε = −1 for reversible Γ-equivariance.
Before presenting the proof, we record a direct consequence of the fact that Equations (8)–(10) are linear with respect to f_0, F_1 and G.
Corollary 1.
Let Γ be as in Theorem 1. The family of all (reversible) Γ-equivariant systems (7), with parameters collected in f_0 ∈ R^{n,1}, F_1 ∈ R^{n,n} and G = (G_1, …, G_n)^tr ∈ R^{n²,n}, forms a linear (vector) subspace of R^{n,1} × R^{n,n} × R^{n²,n}.
We continue with the proof of Theorem 1.
Proof. 
By Proposition 1, we set the value of the homomorphism σ on the generator R as σ(R) = ε and apply identity (5) with R in place of B and with F(x) = f_0 + F_1 x + (Id ⊗ x^tr) G x. Comparison of the terms of degree zero and one immediately yields Equations (8) and (9). Equation (10) requires some more effort. Equating the terms of degree two gives
\[
H(x) := (\mathrm{Id}\otimes x^{\rm tr} R^{\rm tr})\, G R x - \varepsilon R (\mathrm{Id}\otimes x^{\rm tr})\, G x = 0, \qquad x \in \mathbb{R}^n. \tag{11}
\]
Evaluating the derivative DH(x) at y gives the identity
\[
2(\mathrm{Id}\otimes x^{\rm tr} R^{\rm tr})\, G R y - 2\varepsilon R (\mathrm{Id}\otimes x^{\rm tr})\, G y = 0, \qquad y \in \mathbb{R}^n,
\]
so
\[
\tfrac12 DH(x) = (\mathrm{Id}\otimes x^{\rm tr} R^{\rm tr}) G R - \varepsilon R (\mathrm{Id}\otimes x^{\rm tr}) G
= (\mathrm{Id}\otimes x^{\rm tr})(\mathrm{Id}\otimes R^{\rm tr}) G R - \varepsilon (R \otimes x^{\rm tr}) G
= (\mathrm{Id}\otimes x^{\rm tr})(\mathrm{Id}\otimes R^{\rm tr}) G R - \varepsilon (\mathrm{Id}\otimes x^{\rm tr})(R \otimes \mathrm{Id}) G
= (\mathrm{Id}\otimes x^{\rm tr})\big((\mathrm{Id}\otimes R^{\rm tr}) G R - \varepsilon (R \otimes \mathrm{Id}) G\big).
\]
The above expression must vanish for all x, so we get
\[
(\mathrm{Id}\otimes R^{\rm tr})\, G R - \varepsilon (R \otimes \mathrm{Id})\, G = 0.
\]
Multiplying by (Id ⊗ R^tr)^{-1} = Id ⊗ (R^tr)^{-1} gives
\[
G R - \varepsilon \big(\mathrm{Id}\otimes (R^{\rm tr})^{-1}\big)(R \otimes \mathrm{Id})\, G = G R - \varepsilon \big(R \otimes (R^{\rm tr})^{-1}\big)\, G = 0,
\]
as desired. □
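For readers who wish to experiment, here is a small numerical checker (our own sketch, not code from the paper) for the conditions (8)–(10) of Theorem 1; the test system is an arbitrary linear one whose matrix commutes with every rotation.

```python
import numpy as np

# Checker (ours) for the conditions of Theorem 1:
#   R f0 - eps f0 = 0,  R F1 - eps F1 R = 0,  G R - eps (R ⊗ (R^tr)^{-1}) G = 0.
def is_gamma_equivariant(f0, F1, G, R, eps, tol=1e-10):
    c0 = np.linalg.norm(R @ f0 - eps * f0)
    c1 = np.linalg.norm(R @ F1 - eps * F1 @ R)
    c2 = np.linalg.norm(G @ R - eps * np.kron(R, np.linalg.inv(R.T)) @ G)
    return max(c0, c1, c2) < tol

# Example: a linear planar system whose matrix commutes with every rotation
# (F1 = a*Id + b*J) is Z_q-equivariant for any q, with f0 = 0 and G = 0.
phi = 2 * np.pi / 5
R5 = np.array([[np.cos(phi), -np.sin(phi)],
               [np.sin(phi),  np.cos(phi)]])
J = np.array([[0.0, -1.0], [1.0, 0.0]])
F1 = 0.7 * np.eye(2) + 1.3 * J
print(is_gamma_equivariant(np.zeros(2), F1, np.zeros((4, 2)), R5, eps=1))  # True
```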
Several remarks are in order.
Remark 1.
From equality (8) we see that for a (reversible) equivariant quadratic system (with respect to Γ generated by R), the constant-term column vector f_0 is either an eigenvector of R corresponding to the eigenvalue ε or, when ε is not an eigenvalue of R, f_0 = 0. For example, when n = 2 and R is the rotation by the angle 2π/3, neither 1 nor −1 is an eigenvalue of R; therefore f_0 = 0, and the origin must be a singular point of the system.
Remark 2.
Equality (9) says that the matrix F_1 of the linear part is a zero of the Lie product R F_1 − F_1 R (in other words, F_1 commutes with R) when ε = 1, and that F_1 is a zero of the Jordan product R F_1 + F_1 R when ε = −1.
Remark 3.
Equation (10) is also a well-known linear equation of linear algebra, the so-called (homogeneous) Sylvester equation. The next theorem recalls that non-trivial solutions X of the matrix equation A X − X B = 0 exist only if the given matrices A and B, of sizes n × n and k × k respectively, have common eigenvalues.
Theorem 2
([5], Th. 4.4.6). Let A ∈ C^{n,n}, B ∈ C^{k,k} and C ∈ C^{n,k}. The equation A X − X B = C has a unique solution X ∈ C^{n,k} if and only if A and B have no common eigenvalue, that is, their spectra are disjoint.
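Theorem 2 can be illustrated numerically: in vectorized form the Sylvester operator X ↦ AX − XB has the matrix kron(I, A) − kron(B^tr, I), whose eigenvalues are the differences of the eigenvalues of A and B, so it is singular exactly when the spectra meet. The sketch below (ours, with arbitrary diagonal matrices) shows both cases.

```python
import numpy as np

# Illustration (ours) of Theorem 2: A X - X B = C has a unique solution
# exactly when spec(A) and spec(B) are disjoint.
def sylvester_operator(A, B):
    n, k = A.shape[0], B.shape[0]
    return np.kron(np.eye(k), A) - np.kron(B.T, np.eye(n))

A = np.diag([1.0, 2.0])
B_disjoint = np.diag([3.0, 4.0])      # no common eigenvalue with A
B_shared   = np.diag([2.0, 5.0])      # shares the eigenvalue 2 with A

print(np.linalg.matrix_rank(sylvester_operator(A, B_disjoint)))  # 4 (invertible)
print(np.linalg.matrix_rank(sylvester_operator(A, B_shared)))    # 3 (singular)
```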
In the sequel we apply Theorem 1 to the computation of the (in fact linear) varieties, in the space of parameters, of systems which are equivariant or reversible equivariant with respect to a chosen group of rotations of the plane. The theorem is also valid for the so-called time-reversible systems and for mirror-symmetric systems, where the group of symmetries, being of order two, is generated by a non-trivial involution. For the varieties in the space of parameters of such (merely planar) systems we refer the reader to our previous work [2].
Theorem 1 will also show why there are many planar quadratic systems equivariant with respect to the group generated by the 2π/3 rotation, whereas planar quadratic systems equivariant with respect to other rotations are found only among linear systems, since then the only solution G of (10) is the trivial one. Similarly, we will see that non-linear quadratic reversible-equivariant systems appear only for the π/3 and π rotations.
From now on we consider planar ( n = 2 ) quadratic systems (7), parametrized as
\[
\begin{aligned}
\dot{x} &= -(a_0 + a_{00} x + a_{-1,1} y + a_{10} x^2 + a_{01} x y + a_{-1,2} y^2),\\
\dot{y} &= b_0 + b_{1,-1} x + b_{00} y + b_{2,-1} x^2 + b_{10} x y + b_{01} y^2,
\end{aligned} \tag{12}
\]
where
\[
f_0 = \begin{pmatrix} -a_0 \\ b_0 \end{pmatrix}, \quad
F_1 = \begin{pmatrix} -a_{00} & -a_{-1,1} \\ b_{1,-1} & b_{00} \end{pmatrix}, \quad
G = (G_1, G_2)^{\rm tr}, \quad
G_1 = -\begin{pmatrix} a_{10} & \tfrac12 a_{01} \\ \tfrac12 a_{01} & a_{-1,2} \end{pmatrix}, \quad
G_2 = \begin{pmatrix} b_{2,-1} & \tfrac12 b_{10} \\ \tfrac12 b_{10} & b_{01} \end{pmatrix}. \tag{13}
\]
We further restrict ourselves to rotational groups of symmetries. Recall that the cyclic multiplicative group Γ_q of order q generated by the rotation by 2π/q around the origin is naturally isomorphic to the additive group Z_q. From now on we refer to (reversible) Γ_q-equivariant systems as (reversible) Z_q-equivariant systems. It is not difficult to find the linear subspace of the space of parameters completely describing the (reversible) Z_q-equivariant systems, and we present this description below. In a form different from our setting (complex parametrization), and for planar polynomial systems of degree not bounded by 2, the Z_q-equivariant systems were completely determined in [6] and the reversible Z_q-equivariant systems in [7]. Let
\[
R_q = \begin{pmatrix} \cos\varphi_q & -\sin\varphi_q \\ \sin\varphi_q & \cos\varphi_q \end{pmatrix} \tag{14}
\]
be the counterclockwise rotation by the angle φ_q = 2π/q, where the integer q ∈ {2, 3, …} is fixed. In the following propositions we present the forms of the (reversible) Z_q-equivariant systems in terms of the coefficient matrices; the forms of polynomial planar Z_q-equivariant systems may be found in [6] and those of the reversible Z_q-equivariant ones in [7]. As we shall see, only the Z_3-equivariant and the reversible Z_6- and reversible Z_2-equivariant planar quadratic systems are in some sense non-trivial.
Proposition 2.
Consider the system (12).
  • System (12) is Z 2 -equivariant if and only if f 0 = 0 and G = 0 .
  • System (12) is Z 3 -equivariant if and only if f 0 = 0 ,
    \[
    F_1 = \begin{pmatrix} -a_{00} & -a_{-1,1} \\ a_{-1,1} & -a_{00} \end{pmatrix} \tag{15}
    \]
    and G = [ G 1 , G 2 ] tr with
    \[
    G_1 = \begin{pmatrix} -a_{10} & -b_{01} \\ -b_{01} & a_{10} \end{pmatrix}, \qquad
    G_2 = \begin{pmatrix} -b_{01} & a_{10} \\ a_{10} & b_{01} \end{pmatrix}. \tag{16}
    \]
  • System (12) is Z_q-equivariant for any q ≥ 4 if and only if f_0 = 0, G = 0 and F_1 is of the form (15).
The proof will be given after the following proposition in which we describe reversible Z q -equivariant systems. Note that only even q makes sense.
Proposition 3.
Consider the system (12).
  • System (12) is reversible Z 2 -equivariant if and only if F 1 = 0 .
  • System (12) is reversible Z 4 -equivariant if and only if f 0 = 0 , G = 0 and F 1 is of the form
    \[
    F_1 = \begin{pmatrix} -a_{00} & -a_{-1,1} \\ -a_{-1,1} & a_{00} \end{pmatrix}. \tag{17}
    \]
  • System (12) is reversible Z 6 -equivariant if and only if f 0 = 0 , F 1 = 0 and G 1 , G 2 are of the form (16).
  • System (12) is reversible Z q -equivariant for some q = 2 k > 6 if and only if f 0 = 0 , F 1 = 0 and G = 0 .
Before proving both propositions we need a preliminary argument involving the eigenvalues of the Kronecker product [5]. Let us denote by eig(A) = (λ_1, λ_2, …, λ_n) the list of eigenvalues (possibly complex) of an n × n real matrix A, listed according to their algebraic multiplicities; the order of the listing is irrelevant. Then eig(R_q) = (e^{iφ_q}, e^{−iφ_q}). It is known that the eigenvalues of the Kronecker product of matrices A and B are all products λ_i μ_j running through all eigenvalues λ_i of A and μ_j of B. Since R_q ⊗ (R_q^tr)^{−1} = R_q ⊗ R_q, and eig(R_q ⊗ R_q) is the list of all four products of eigenvalues, we have eig(R_q ⊗ R_q) = (e^{2iφ_q}, e^{−2iφ_q}, 1, 1). When solving equality (10) for G, we obtain only the trivial solution G = 0 unless the lists eig(R_q) and eig(ε R_q ⊗ R_q) have a common member, due to the Sylvester type of the equation. If ε = 1, the only instance when G is not necessarily zero is q = 3; in this case e^{2iφ_3} = e^{−iφ_3}. If ε = −1, this happens when q ∈ {2, 6}, since then −e^{2iφ_q} = e^{−iφ_q}.
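The eigenvalue bookkeeping above is easy to reproduce numerically; the following sketch (ours) scans q = 2, …, 12 and reports for which q the lists eig(R_q) and eig(ε R_q ⊗ R_q) share a member, recovering q = 3 for ε = 1 and q ∈ {2, 6} for ε = −1.

```python
import numpy as np

# Scan (ours): for which q do eig(R_q) and eig(eps * R_q ⊗ R_q) intersect?
# By the Sylvester-equation argument these are exactly the q for which (10)
# can have a non-trivial solution G.
for eps in (1, -1):
    admissible = []
    for q in range(2, 13):
        phi = 2 * np.pi / q
        Rq = np.array([[np.cos(phi), -np.sin(phi)],
                       [np.sin(phi),  np.cos(phi)]])
        e1 = np.linalg.eigvals(Rq)
        e2 = np.linalg.eigvals(eps * np.kron(Rq, Rq))
        if any(np.isclose(lam, e2).any() for lam in e1):
            admissible.append(q)
    print(eps, admissible)    # 1 [3]   and   -1 [2, 6]
```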
The following Lemma will be used at several places in the proofs.
Lemma 1.
Let q ≥ 3.
(i) 
The set of all real matrices commuting with the 2 × 2 rotation matrix R_q is the family S = { \begin{pmatrix} a & b \\ -b & a \end{pmatrix} : a, b ∈ R }. A matrix F is similar to a member of the family S if and only if F = λ Id for some λ ∈ R or it has a conjugate pair of non-real eigenvalues.
(ii) 
The set of all real matrices Jordan commuting with the 2 × 2 rotation matrix R_4 is the family T = { \begin{pmatrix} a & b \\ b & -a \end{pmatrix} : a, b ∈ R }. A matrix F is similar to a member of the family T if and only if it is either zero or it has non-zero real eigenvalues λ and −λ.
Proof. 
The forms of the matrices in the families S and T can easily be validated by direct elementary computations. All matrices in S are either of the form a Id or have eigenvalues a ± i b with b ≠ 0. In T, all non-zero members have the pair of non-zero eigenvalues ± √(a² + b²). The argument for the second claims in (i) and (ii) is that similarity preserves the mentioned eigenvalue types and there are no other possibilities. □
Proof of Proposition 2.
Set ε = 1. Validating the if statements, being an elementary exercise, is left to the reader. By the above argument on the eigenvalues of R_q and R_q ⊗ R_q, we observe that G = 0 unless q = 3. Moreover, as 1 is not an eigenvalue of R_q for any q ≥ 2, we must have f_0 = 0 in all cases due to (8). Clearly, every linear system is Z_2-equivariant, since F(−x) = −F(x) for all x. For q ≥ 4, by (9) the matrix F_1 must commute with R_q and it is thus of the form (15) by (i) of Lemma 1. That G_1, G_2 must be of the form (16) when q = 3 is obtained by a straightforward solution of the linear system (10) with R_3 in place of R. □
Proof of Proposition 3.
Now let ε = −1. We again write the proof only for the only if statements. By (8), f_0 = 0 for all q > 2, since only in the case q = 2 is the number −1 an eigenvalue of R_q (R_2 = −Id). If q = 2, it is easy to see that any constant and any quadratic terms can be present in a reversible Z_2-equivariant system, which proves (1).
If q = 4, F_1 must Jordan commute with R_4, and (ii) of Lemma 1 gives the desired form. Moreover, a reversible Z_6-equivariant system is simultaneously Z_3-equivariant (R_6² = R_3 is a symmetry), so F_1 must also be of the form (15); since F_1 moreover Jordan commutes with R_6, it follows that F_1 = 0. The form of G follows from the direct solution of the homogeneous system given by the Sylvester Equation (10). Finally, when q = 2k ≥ 8, recall (6) and apply R_q² = R_k in order to obtain
\[
F(R_k x) = F(R_q^2 x) = (-1)^2 R_q^2 F(x) = R_k F(x) \tag{18}
\]
holding for all x. This implies that a reversible Z_q-equivariant system must also be Z_k-equivariant. Since k ≥ 4, by (3) of Proposition 2 the system can only be linear. The matrix F_1 must furthermore satisfy R_q F_1 + F_1 R_q = 0, and since the trace of R_q equals 2 cos(π/k), k ≥ 4, and is non-zero, we must have F_1 = 0. □
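As an independent sanity check of the non-trivial cases of Propositions 2 and 3, one can verify symbolically that the quadratic vector field with components p(x² − y²) + 2qxy and q(x² − y²) − 2pxy (the real form of c·z̄² with c = p + iq, written here in generic parameters p, q rather than in the coefficients of (12)) is Z_3-equivariant and reversible Z_6-equivariant. A minimal SymPy sketch (ours):

```python
import sympy as sp

# Symbolic check (ours) of the non-trivial cases in Propositions 2 and 3.
x, y, p, q = sp.symbols('x y p q')

def Fvec(u, v):
    # real form of c*conj(z)^2 with c = p + i q
    return sp.Matrix([p*(u**2 - v**2) + 2*q*u*v,
                      q*(u**2 - v**2) - 2*p*u*v])

def rot(angle):
    return sp.Matrix([[sp.cos(angle), -sp.sin(angle)],
                      [sp.sin(angle),  sp.cos(angle)]])

xy = sp.Matrix([x, y])
R3, R6 = rot(2*sp.pi/3), rot(sp.pi/3)

print(sp.simplify(Fvec(*(R3 * xy)) - R3 * Fvec(x, y)))   # zero vector: Z_3-equivariance
print(sp.simplify(Fvec(*(R6 * xy)) + R6 * Fvec(x, y)))   # zero vector: reversible Z_6-equivariance
```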

3. Transformation to (Reversible) Z_q-Equivariant Systems

Our next aim is to classify the quadratic systems which can be transformed to a Γ-equivariant or reversible Γ-equivariant system by a bijective affine transformation of R^n. Based on this result we will then compute the affine varieties in the space of parameters of planar systems. The next theorem is the cornerstone of our further considerations. Without loss of generality, to simplify the computations, we will assume that the origin is a singular point of the investigated systems, F(0) = 0. Note that the map x ↦ S x + x_0 is a bijective affine transformation of R^n whenever S is a real invertible matrix; clearly, the inverse map is affine as well.
Theorem 3.
Let Γ be a finite cyclic group of invertible n × n matrices. Set ε = 1 or ε = −1 when studying Γ-equivariant or reversible Γ-equivariant systems, respectively.
(1) 
If there exists an affine transformation of the form x = S y + x_0, with S invertible, which transforms a given quadratic system ẋ = F_1 x + (Id ⊗ x^tr) G x, with F_1 and G given by (7), to a (reversible) Γ-equivariant system, then, introducing the notation
\[
X_0 = (\mathrm{Id}\otimes x_0^{\rm tr})\, G, \tag{19}
\]
\[
y_0 = (F_1 + X_0)\, x_0, \tag{20}
\]
\[
\tilde{F}_1 = F_1 + 2 X_0, \tag{21}
\]
the following identities, where R is a generator of the group Γ and B = S R S^{-1}, must be valid:
\[
B y_0 = \varepsilon y_0, \tag{22}
\]
\[
\tilde{F}_1 B - \varepsilon B \tilde{F}_1 = 0, \tag{23}
\]
\[
G B - \varepsilon\,\big(B \otimes (B^{\rm tr})^{-1}\big)\, G = 0. \tag{24}
\]
(2) 
If there exists a linear transformation of the form x = S y which transforms a system as in (1) of this theorem to a (reversible) Γ-equivariant one, then
\[
F_1 B - \varepsilon B F_1 = 0, \tag{25}
\]
\[
G B - \varepsilon\,\big(B \otimes (B^{\rm tr})^{-1}\big)\, G = 0, \tag{26}
\]
where B = S R S^{-1} and R generates the group Γ.
(3) 
Suppose that Equations (22)–(24) are satisfied for some invertible matrix B of finite order, that is, B^q = Id for some integer q > 1, and some vector x_0. Then the translation x = y + x_0 transforms the original system to a (reversible) Γ_B-equivariant one, Γ_B = {B, B², …, B^{q−1}, Id}. Moreover, if B = S R S^{-1} for some real invertible matrix S and a matrix R in some canonical form, then the affine map x = S y + x_0 transforms the system to a (reversible) Γ_R-equivariant one. In particular, if x_0 = 0, a linear transformation suffices.
Proof. 
(1)
By substitution x = S y + x 0 we rewrite the system as
\[
\begin{aligned}
\dot{y} &= S^{-1} F_1 (S y + x_0) + S^{-1}\big(\mathrm{Id}\otimes (y^{\rm tr} S^{\rm tr} + x_0^{\rm tr})\big)\, G\, (S y + x_0)\\
&= S^{-1} F_1 x_0 + S^{-1}(\mathrm{Id}\otimes x_0^{\rm tr})\, G\, x_0 + S^{-1} \tilde{F}_1 S\, y + S^{-1}(\mathrm{Id}\otimes y^{\rm tr} S^{\rm tr})\, G\, S y\\
&= S^{-1} y_0 + S^{-1} \tilde{F}_1 S\, y + S^{-1}(\mathrm{Id}\otimes y^{\rm tr} S^{\rm tr})\, G\, S y,
\end{aligned}
\]
where the symmetry of the blocks G_k was used to combine the two cross terms into 2 S^{-1} X_0 S y. Suppose that the obtained system is (reversible) Γ_R-equivariant. Then the constant-term vector g_0 = S^{-1} y_0 must satisfy (8) with g_0 in place of f_0; hence R g_0 = ε g_0, which, after multiplying by S, gives B y_0 = ε y_0, and (22) follows. Writing explicitly the linear and the quadratic terms and using the notation (19)–(21) gives (23) and (24).
(2)
Set x 0 = 0 , then X 0 = 0 , y 0 = 0 and F ˜ 1 = F 1 .
(3)
Follow the steps in the proof of (1) in the reversed direction. □
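The mechanics of part (3) are easy to test numerically: if a field is the push-forward of a Z_3-equivariant one under an affine map x = S y + x_0, then the inverse affine change of coordinates returns a Z_3-equivariant field. The sketch below (ours, with arbitrarily chosen S, x_0 and coefficients) checks this at a sample point.

```python
import numpy as np

# Numerical illustration (ours) of Theorem 3 (3): pushing a Z_3-equivariant
# field forward by an affine map x = S y + x0 and pulling it back recovers a
# Z_3-equivariant field.  All numbers below are arbitrary choices.
phi = 2 * np.pi / 3
R3 = np.array([[np.cos(phi), -np.sin(phi)],
               [np.sin(phi),  np.cos(phi)]])

def F_sym(y):                      # Z_3-equivariant: identity linear part + c*conj(z)^2 quadratic part
    u, v = y
    p, q = 1.0, -0.5
    return np.array([u + p*(u*u - v*v) + 2*q*u*v,
                     v + q*(u*u - v*v) - 2*p*u*v])

S = np.array([[2.0, 1.0], [0.0, 1.0]])
x0 = np.array([0.3, -0.7])
Sinv = np.linalg.inv(S)

def F_orig(x):                     # the "original" system, symmetry hidden by the affine map
    return S @ F_sym(Sinv @ (x - x0))

def F_back(y):                     # system after the affine change y = S^{-1}(x - x0)
    return Sinv @ F_orig(S @ y + x0)

y = np.array([0.4, 1.1])
print(np.allclose(F_back(R3 @ y), R3 @ F_back(y)))   # True
```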
Problem. We aim to find algebraic conditions on the parameters of planar quadratic systems for which there exists either
  • a linear transformation, or
  • a non-linear affine transformation
which transforms the family of systems to
  • Z q -equivariant system, q = 2 , 3 , 4 , , or,
  • reversible Z q -equivariant system, q = 2 , 4 , .

3.1. Preliminaries from Polynomial Ring Theory

Let k be a field and k[x_1, …, x_m] the ring of polynomials in the variables x_1, …, x_m. By ⟨f_1, …, f_s⟩ we denote the ideal I in k[x_1, …, x_m] generated by polynomials f_1, …, f_s ∈ k[x_1, …, x_m]. The affine variety V(I) is the set of all solutions of the system of polynomial equations {f_j = 0 : f_j ∈ I}.
We first recall a fundamental theorem of elimination theory, which is the basis for our computational approach. Let I be an ideal in k[x_1, …, x_m] and fix ℓ ∈ {0, 1, …, m−1}. The ℓ-th elimination ideal of I is the ideal I^{(ℓ)} = I ∩ k[x_{ℓ+1}, …, x_m]. Any point (a_{ℓ+1}, …, a_m) ∈ V(I^{(ℓ)}) is called a partial solution of the system {f = 0 : f ∈ I}. Some partial solutions can be extended to solutions, and it depends on the field k whether there are many or not.
For the proof of the following theorem, see for example, References ([8], Chapter 3) or ([9], Chapter 1).
Theorem 4
(Elimination Theorem). Fix an eliminating term order on the ring k[x_1, …, x_m] with x_1 > x_2 > … > x_m, and let 𝒢 be a Gröbner basis of an ideal I of k[x_1, …, x_m] with respect to this order. Then, for every ℓ, 0 ≤ ℓ ≤ m−1, the set
\[
\mathcal{G}_\ell := \mathcal{G} \cap k[x_{\ell+1}, \dots, x_m]
\]
is a Gröbner basis of the ℓ-th elimination ideal I^{(ℓ)}.
The radical of an ideal I is the ideal
\[
\sqrt{I} = \{\, f \in k[x_1, \dots, x_m] : f^p \in I \ \text{for some}\ p \in \mathbb{N} \,\}.
\]
An ideal I is a radical ideal if √I = I. A proper ideal I is primary if f g ∈ I implies that f ∈ I or g^p ∈ I for some p ∈ N. An ideal I is prime if f g ∈ I implies that f ∈ I or g ∈ I. If an ideal I is primary, then √I is prime; in this case √I is called the associated prime ideal of I. A primary decomposition of I is a finite intersection I = Q_1 ∩ Q_2 ∩ … ∩ Q_s, where all the ideals Q_j are primary. It is called a minimal primary decomposition if the associated primes √Q_j are all distinct and none of the Q_j contains the intersection of all the others. Note that V(I) = V(√I).
The following fact will be extensively used for our computations.
Theorem 5.
(Lasker-Noether Decomposition Theorem) Every ideal I in k [ x 1 , , x m ] has a minimal primary decomposition. All such decompositions have the same number of primary ideals and the same collection of associated prime ideals.
We will be interested in the solutions of systems of polynomial equations f_1 = f_2 = … = f_p = 0. For a certain elimination ideal I^{(ℓ)} associated to I = ⟨f_1, …, f_p⟩, obtained by the computation of a Gröbner basis with the routine eliminate of Singular [10], we will then compute the minimal associated primes of I^{(ℓ)} applying the routine minAssGTZ [11,12] of Singular.
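For readers without access to Singular, the elimination step of Theorem 4 can be reproduced in SymPy on a toy example (a sketch of ours; SymPy has no counterpart of minAssGTZ, so the decomposition step still requires Singular or similar software):

```python
from sympy import symbols, groebner

# Toy elimination (ours, in SymPy) illustrating Theorem 4: with a lex order in
# which the variable to be eliminated comes first, the basis elements free of
# that variable generate the elimination ideal.  Here we eliminate t from the
# parametrization x = t^2, y = t^3 and recover the implicit equation x^3 = y^2.
t, x, y = symbols('t x y')
G = groebner([x - t**2, y - t**3], t, x, y, order='lex')
elim = [g for g in G.exprs if t not in g.free_symbols]
print(elim)    # [x**3 - y**2]
```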

3.2. Calculations

For the family of systems (the notation goes back to [13])
\[
\begin{aligned}
\dot{x} &= -(a_{00} x + a_{-1,1} y + a_{10} x^2 + a_{01} x y + a_{-1,2} y^2),\\
\dot{y} &= b_{1,-1} x + b_{00} y + b_{2,-1} x^2 + b_{10} x y + b_{01} y^2,
\end{aligned} \tag{27}
\]
we aim to find (at least necessary) conditions on the parameters a = (a_{00}, a_{-1,1}, a_{10}, a_{01}, a_{-1,2}) and b = (b_{1,-1}, b_{00}, b_{2,-1}, b_{10}, b_{01}) which assure that there is a linear or non-linear affine change of coordinates after which the system becomes (reversible) Z_q-equivariant. Let us for a moment allow any affine transformation, linear or non-linear. Recall that, with ε ∈ {1, −1} and q chosen, Equations (22)–(24) must be satisfied for some B = \begin{pmatrix} α & β \\ γ & δ \end{pmatrix} and a vector x_0 = (ρ, σ)^tr. For the moment we impose no condition on x_0.
A glance at equality (22), together with the fact that ε is not an eigenvalue of B unless q = 2 and ε = −1 (a case we treat separately), shows that y_0 = (F_1 + X_0) x_0 = 0, that is, x_0 = (ρ, σ) must be a singular point of the system. This produces two polynomial equations
h 1 = h 2 = 0 ,
where
\[
h_1 := a_{00}\rho + a_{10}\rho^2 + \sigma\,(a_{-1,1} + a_{01}\rho + a_{-1,2}\sigma), \qquad
h_2 := b_{1,-1}\rho + b_{2,-1}\rho^2 + \sigma\,(b_{00} + b_{10}\rho + b_{01}\sigma).
\]
The next four equations arise from (23) and notation (21)
h 3 = h 4 = h 5 = h 6 = 0 ,
where
h 3 : = a 1 , 1 γ β b 1 , 1 ε a 01 γ ρ 2 β b 21 ε ρ 2 a 12 γ σ β b 10 ε σ + α ( 1 + ε ) ( a 00 + 2 a 10 ρ + a 01 s ) , h 4 : = a 00 β a 1 , 1 δ + α a 1 , 1 ε β b 00 ε 2 a 10 β ρ a 01 δ ρ + α a 01 ε ρ β b 10 ε ρ a 01 β σ 2 a 12 δ σ + 2 α a 12 ε σ 2 β b 01 ε s , h 5 : = b 00 γ + a 00 γ ε b 1 , 1 δ ε + b 10 γ ρ + 2 a 10 γ ε ρ 2 b 21 δ ε ρ + 2 b 01 γ σ + a 01 γ ε σ b 10 δ ε s + α ( b 1 , 1 + 2 b 21 ρ + b 10 s ) , h 6 : = γ ε + b 00 ( δ δ ε ) + b 10 δ ρ + a 01 γ ε ρ b 10 δ ε ρ + 2 b 01 δ σ + 2 a 12 γ ε σ 2 b 01 δ ε σ + β ( b 1 , 1 + 2 b 21 ρ + b 10 s ) .
Moreover, (24) gives 8 equations
h 7 = h 8 = h 9 = h 10 = h 11 = h 12 = h 13 = h 14 = 0 ,
with
h 7 : = a 01 γ + β ( b 10 γ 2 b 21 d ) ε α ( a 01 γ ε + a 10 ( 2 2 δ ε ) ) , h 8 : = 2 a 10 β ( 2 α a 12 γ 2 β b 01 γ + β b 10 d ) ε + a 01 δ ( 1 + α ε ) , h 9 : = 2 a 12 γ + a 2 a 01 ε + 2 b 2 b 21 ε α ( a 01 + β ( 2 a 10 + b 10 ) ε ) , h 10 : = β ( 2 α b 01 + β b 10 ) ε 2 a 12 ( δ a 2 ε ) a 01 ( β + α β ε ) , h 11 : = 2 α b 21 ( a 01 c 2 2 a 10 γ δ + 2 b 21 d 2 ) ε + b 10 ( γ + γ δ ε ) , h 12 : = 2 β b 21 + γ ( 2 a 12 γ + a 01 δ + 2 b 01 d ) ε + b 10 ( δ d 2 ε ) , h 13 : = α ( b 10 + a 01 γ ε b 10 δ ε ) + 2 ( b 01 γ + β ( a 10 γ + b 21 d ) ε ) , h 14 : = β ( b 10 a 01 γ ε + b 10 δ ε ) + 2 ( α a 12 γ ε + b 01 ( δ α δ ε ) ) .
The matrix B must be similar to the rotation matrix R_q. It suffices to require that α + δ equals 2 cos(2π/q), the trace of R_q, and that the determinant αδ − βγ equals 1. Moreover, as the determinant of B equals 1, B^{-1} equals the adjugate of B and, consequently, (B^tr)^{-1} = \begin{pmatrix} δ & -γ \\ -β & α \end{pmatrix}. So we also add:
h 15 = h 16 = 0 ,
where
\[
h_{15} := α + δ − 2\cos(2π/q), \qquad h_{16} := αδ − βγ − 1.
\]
The polynomial system of equations is finally
\[
h_1 = h_2 = \dots = h_{16} = 0. \tag{32}
\]
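As a small check (ours), the two scalar conditions h_{15} = h_{16} = 0 indeed pin down the conjugacy class of R_q: any real 2 × 2 matrix with trace 2cos(2π/q) and determinant 1 has the eigenvalues e^{±2πi/q} of R_q and, for q ≥ 3, these are non-real, so the matrix is similar to R_q over the reals.

```python
import numpy as np

# Check (ours): prescribing the trace and determinant of B via h15 = h16 = 0
# forces B to have the eigenvalues of R_q, hence to be similar to R_q.
q = 5
alpha, beta = 0.3, 2.0                      # two entries chosen freely ...
delta = 2*np.cos(2*np.pi/q) - alpha         # ... delta from h15 = 0
gamma = (alpha*delta - 1.0) / beta          # ... gamma from h16 = 0
B = np.array([[alpha, beta], [gamma, delta]])
Rq = np.array([[np.cos(2*np.pi/q), -np.sin(2*np.pi/q)],
               [np.sin(2*np.pi/q),  np.cos(2*np.pi/q)]])
print(np.sort_complex(np.linalg.eigvals(B)))
print(np.sort_complex(np.linalg.eigvals(Rq)))    # the same pair e^{±2πi/q}
```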
Our computational procedure consists of the following steps (except in the case ε = −1 and q = 2).
1. Fix the integer q.
2. Fix ε = 1 if considering transformations to Z_q-equivariant systems, and set ε = −1 if treating transformations to reversible Z_q-equivariant systems (only if q is even).
3. Set the initial ideal
\[
J_0 = \langle h_1, h_2, \dots, h_{16} \rangle. \tag{33}
\]
4. Eliminate the six parameters α, β, γ, δ, ρ and σ from the ideal J_0 by applying the routine eliminate in Singular; this gives the elimination ideal J^{(6)} =: I_q.
5. Run the procedure minAssGTZ in Singular to compute the minimal primary components of I_q, and analyse the properties of the systems belonging to each component.
The exact results are presented in the theorems below. Some of the computations and all resulting ideals are given in the text file accessible on website (http://www.camtp.uni-mb.si/camtp/amade/Code-rotations.txt).
A remark is in order at this point. Assume that the elimination ideal and its decomposition into minimal primary ideals, I_q = I_{q1} ∩ ⋯ ∩ I_{qk}, k ≥ 1, has been obtained by steps (1)–(5) above. Then a necessary condition for the existence of a bijective affine transformation bringing the system to a (reversible) Z_q-equivariant one is that the vector of its parameters (a, b) belongs to one of the varieties V(I_{qi}), i ∈ {1, …, k}.
Conversely, the partial solutions (a*, b*) ∈ V(I_{qi}), i ∈ {1, …, k}, which can be extended to solutions, that is, for which there exists a real 6-tuple (α, β, γ, δ, ρ, σ) solving the polynomial system (32), correspond to systems which can be transformed to (reversible) Z_q-equivariant ones by affine transformations. The families in a given component typically share interesting properties. In our case, we will get two components for each q, one corresponding to systems transformable by linear transformations and the other containing systems which can be transformed to symmetric ones by non-linear affine transformations.
In the following theorems we present necessary, and in some cases also sufficient, conditions for the transformation of the family of systems (27) to (reversible) Z_q-equivariant systems. In all of these theorems the sufficient conditions are easy to check, so we only give arguments for the necessary conditions.
We say that a system from the family (27) is trivial if all the parameters a and b are zero. To shorten the writing, we introduce the following families.
  • L q (resp. r L q ) will denote the family of all systems (27) such that for each of them there exists a bijective linear transformation sending it to a Z q -equivariant (resp. reversible Z q -equivariant) one. The transformation in general depends on the system.
  • A q (resp. r A q ) will denote the family of all systems (27) such that for each of them there exists a non-linear bijective affine transformation sending it to a Z q -equivariant (resp. reversible Z q -equivariant) one. The transformation in general depends on the system.
Theorem 6.
Let q = 2 or q ≥ 4. For any system in the family (27) we claim:
1. The system belongs to L_q if and only if the system is linear and, additionally, when q ≥ 4, its Jacobian matrix is either a scalar multiple of the identity or has a pair of conjugate non-real eigenvalues.
2. The system belongs to A_2 if and only if the system is linear with a singular Jacobian matrix. The system belongs to A_q, q ≥ 4, if and only if it is trivial.
Proof. 
Proposition 2, assertion (1), tells us that the set of all Z_2-equivariant systems is exactly the set of all linear systems (recall that the origin is a singular point by our assumption). The set of all linear systems is clearly invariant under any bijective linear transformation of R².
For q ≥ 4, the transformed system must be linear by Proposition 2, so, similarly as above, the original system must be linear, and its Jacobian matrix F_1 must commute with B = S R_q S^{-1} by (9) with ε = 1. In turn, S^{-1} F_1 S commutes with R_q, so F_1 must be similar to a member of the family S in (i) of Lemma 1. Now the second assertion of (i) provides the form of F_1, which proves (1).
For a non-linear affine transformation, involving also a translation, to convert the original system to a Z_2-equivariant one, the system must be linear and the matrix of its linear part must be singular. Indeed, if we substitute x = y + x_0, x_0 ≠ 0, into ẋ = F_1 x, the obtained system ẏ = F_1 x_0 + F_1 y must have the origin as a singular point. Therefore F_1 x_0 = 0 and F_1 must be singular. If q ≥ 4, F_1 must be a singular member of the family S in (i) of Lemma 1, thus F_1 = 0. □
In the next theorem we handle the nontrivial case q = 3 .
Theorem 7.
For any system in the family (27) it holds true:
  • If the system belongs to L_3, then its parameters belong to the variety V(I_{31}) and, if the linear part is non-zero, its determinant must be positive: a_{-1,1} b_{1,-1} − a_{00} b_{00} > 0.
    I 31 = a 01 2 b 01 , 2 a 10 b 10 , 2 a 1 , 1 b 21 a 00 b 10 b 00 b 10 + 2 b 1 , 1 b 01 , 2 b 1 , 1 a 12 + a 1 , 1 b 10 2 a 00 b 01 2 b 00 b 01 .
  • If the system belongs to A 3 , then its parameters belong to the variety V ( I 32 ) , where
    I 32 = g 1 , g 2 , , g 7
    with
    • g 1 = a 01 2 b 01 ,
    • g 2 = 2 a 10 b 10 ,
    • g 3 = 4 a 1 , 1 b 1 , 1 a 12 b 21 16 a 00 b 00 a 12 b 21 + 2 a 00 b 1 , 1 a 12 b 10 + 6 b 1 , 1 b 00 a 12 b 10 2 a 1 , 1 2 b 21 b 10 a 00 a 1 , 1 b 10 2 + a 1 , 1 b 00 b 10 2 4 b 1 , 1 2 a 12 b 01 + 12 a 00 a 1 , 1 b 21 b 01 + 4 a 1 , 1 b 00 b 21 b 01 2 a 00 2 b 10 b 01 6 a 1 , 1 b 1 , 1 b 10 b 01 + 4 a 00 b 00 b 10 b 01 2 b 00 2 b 10 b 01 + 4 a 00 b 1 , 1 b 01 2 4 b 1 , 1 b 00 b 01 2 ,
    • g 4 = 8 a 00 b 1 , 1 a 12 b 21 + 4 a 1 , 1 2 b 21 2 4 b 1 , 1 2 a 12 b 10 4 a 00 a 1 , 1 b 21 b 10 4 a 1 , 1 b 00 b 21 b 10 + 3 a 00 2 b 10 2 + 6 a 1 , 1 b 1 , 1 b 10 2 4 a 00 b 00 b 10 2 + b 00 2 b 10 2 8 a 00 2 b 21 b 01 16 a 1 , 1 b 1 , 1 b 21 b 01 + 24 a 00 b 00 b 21 b 01 8 a 00 b 1 , 1 b 10 b 01 4 b 1 , 1 b 00 b 10 b 01 + 12 b 1 , 1 2 b 01 2 ,
    • g 5 = 4 b 1 , 1 2 a 12 2 + 8 a 1 , 1 b 00 a 12 b 21 8 a 1 , 1 b 1 , 1 a 12 b 10 + 12 a 00 b 00 a 12 b 10 4 b 00 2 a 12 b 10 + 3 a 1 , 1 2 b 10 2 8 a 00 b 1 , 1 a 12 b 01 8 b 1 , 1 b 00 a 12 b 01 8 a 1 , 1 2 b 21 b 01 4 a 00 a 1 , 1 b 10 b 01 8 a 1 , 1 b 00 b 10 b 01 + 4 a 00 2 b 01 2 + 24 a 1 , 1 b 1 , 1 b 01 2 16 a 00 b 00 b 01 2 + 12 b 00 2 b 01 2 ,
    • g 6 = 32 a 00 2 b 00 a 12 b 21 + 4 a 1 , 1 3 b 21 2 4 a 00 2 b 1 , 1 a 12 b 10 4 a 1 , 1 b 1 , 1 2 a 12 b 10 12 a 00 b 1 , 1 b 00 a 12 b 10 4 a 1 , 1 2 b 00 b 21 b 10 + 5 a 00 2 a 1 , 1 b 10 2 + 6 a 1 , 1 2 b 1 , 1 b 10 2 6 a 00 a 1 , 1 b 00 b 10 2 + a 1 , 1 b 00 2 b 10 2 + 8 a 00 b 1 , 1 2 a 12 b 01 32 a 00 2 a 1 , 1 b 21 b 01 16 a 1 , 1 2 b 1 , 1 b 21 b 01 + 16 a 00 a 1 , 1 b 00 b 21 b 01 + 4 a 00 3 b 10 b 01 + 4 a 00 a 1 , 1 b 1 , 1 b 10 b 01 8 a 00 2 b 00 b 10 b 01 4 a 1 , 1 b 1 , 1 b 00 b 10 b 01 + 4 a 00 b 00 2 b 10 b 01 8 a 00 2 b 1 , 1 b 01 2 + 12 a 1 , 1 b 1 , 1 2 b 01 2 + 8 a 00 b 1 , 1 b 00 b 01 2 ,
    • g 7 = 4 a 1 , 1 3 b 1 , 1 b 21 2 16 a 00 a 1 , 1 2 b 00 b 21 2 4 a 00 2 b 1 , 1 2 a 12 b 10 4 a 1 , 1 b 1 , 1 3 a 12 b 10 + 4 a 00 b 1 , 1 2 b 00 a 12 b 10 + 16 a 00 2 a 1 , 1 b 00 b 21 b 10 4 a 1 , 1 2 b 1 , 1 b 00 b 21 b 10 + 16 a 00 a 1 , 1 b 00 2 b 21 b 10 + 5 a 00 2 a 1 , 1 b 1 , 1 b 10 2 + 6 a 1 , 1 2 b 1 , 1 2 b 10 2 12 a 00 3 b 00 b 10 2 30 a 00 a 1 , 1 b 1 , 1 b 00 b 10 2 + 16 a 00 2 b 00 2 b 10 2 + a 1 , 1 b 1 , 1 b 00 2 b 10 2 4 a 00 b 00 3 b 10 2 + 8 a 00 b 1 , 1 3 a 12 b 01 32 a 00 2 a 1 , 1 b 1 , 1 b 21 b 01 16 a 1 , 1 2 b 1 , 1 2 b 21 b 01 + 32 a 00 3 b 00 b 21 b 01 + 80 a 00 a 1 , 1 b 1 , 1 b 00 b 21 b 01 96 a 00 2 b 00 2 b 21 b 01 + 4 a 00 3 b 1 , 1 b 10 b 01 + 4 a 00 a 1 , 1 b 1 , 1 2 b 10 b 01 + 24 a 00 2 b 1 , 1 b 00 b 10 b 01 4 a 1 , 1 b 1 , 1 2 b 00 b 10 b 01 + 20 a 00 b 1 , 1 b 00 2 b 10 b 01 8 a 00 2 b 1 , 1 2 b 01 2 + 12 a 1 , 1 b 1 , 1 3 b 01 2 40 a 00 b 1 , 1 2 b 00 b 01 2 .
  • If for a vector of parameters (a*, b*) ∈ V(I_{31}) there is a real solution α, β, γ, δ of system (32) with ρ = σ = 0, then the system belongs to L_3.
  • If for a vector of parameters (a*, b*) ∈ V(I_{32}) there is a real solution α, β, γ, δ, ρ, σ, w of system (32) with the additional equation 1 − wρ = 0, then such a system is a member of A_3.
Proof. 
In this case we apply steps (1)–(5) described above. We first generate the ideal J_0, see (33), and then J^{(6)} as its 6-th elimination ideal, eliminating α, β, γ, δ, ρ, σ by applying eliminate of Singular. A Gröbner basis of this ideal consists of 11 generators; this gives the ideal I_3 (http://www.camtp.uni-mb.si/camtp/amade/Code-rotations.txt). It turns out that the ideal I_3 has two minimal associated primes, provided by the routine minAssGTZ in Singular: the ideals I_{31} and I_{32}. As we have not imposed any condition on the translation vector (ρ, σ) so far, the variety V(I_3) contains all systems which can be transformed to a Z_3-equivariant one by a linear or non-linear affine transformation.
We repeated the above procedure once more, adding the equations ρ = 0 and σ = 0 to the equations h_1 = ⋯ = h_{16} = 0, and the computation of the minimal associated primes then resulted exactly in I_{31}. To handle non-linear transformations, we have to implement the condition that (ρ, σ) is non-zero. We have found the following. It suffices to add only the condition ρ ≠ 0, by introducing a new variable w_1 and adjoining the polynomial 1 − w_1 ρ to J_0, and then computing the 7-th elimination ideal J^{(7)} of the enlarged ideal by eliminating the variables α, β, γ, δ, ρ, σ, w_1. It turns out that this ideal has only one minimal associated prime, say J_{71}. Repeating the same procedure with σ in place of ρ, adding a variable w_2, computing the elimination ideal obtained by eliminating α, β, γ, δ, ρ, σ, w_2 and decomposing it into its minimal associated primes, again gives a single ideal, let us name it J_{72}. These two ideals happen to be not only equal, J_{71} = J_{72}, but also equal to the second component I_{32} of I_3. Therefore, since the condition (ρ, σ) ≠ (0, 0) is the one describing A_3, it can be implemented by computing the minimal associated primes of the intersection of the two enlarged ideals, which by the above consideration yields a single prime, equal to I_{32}. □
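The way the auxiliary variable w_1 imposes ρ ≠ 0 can be seen on a toy example (our sketch, unrelated to the ideals of the paper): eliminating ρ from ⟨aρ⟩ gives no condition on a, while adjoining 1 − wρ removes the branch ρ = 0 and forces a = 0.

```python
from sympy import symbols, groebner

# Toy example (ours) of imposing rho != 0 with an auxiliary variable w.
w, rho, a = symbols('w rho a')

# Eliminating rho from <a*rho> gives no condition on a (rho = 0 always works):
gb1 = groebner([a*rho], rho, a, order='lex')
print([g for g in gb1.exprs if rho not in g.free_symbols])          # []

# Adjoining 1 - w*rho keeps only solutions with rho != 0 and forces a = 0:
gb2 = groebner([a*rho, 1 - w*rho], w, rho, a, order='lex')
print([g for g in gb2.exprs if not ({w, rho} & g.free_symbols)])    # [a]
```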
We next describe the transformations to reversible Z_q-equivariant systems. Here we set ε = −1 throughout. As we know, only even q makes sense, and the only non-trivial cases, as we shall see, occur when q = 2, q = 4 or q = 6.
Theorem 8.
  • System (27) belongs to r L 2 if and only if it is reversible Z 2 -equivariant, that is, all linear terms are zero.
  • If system (27) belongs to r A 2 , then its parameters belong to the variety of the ideal
    I 2 = 4 b 00 a 12 b 21 b 00 a 01 b 10 2 b 1 , 1 a 12 b 10 + a 1 , 1 b 10 2 + 2 b 1 , 1 a 01 b 01 4 a 1 , 1 b 21 b 01 , 2 b 00 a 01 b 21 2 b 00 a 10 b 10 b 1 , 1 a 01 b 10 + a 00 b 10 2 + 4 b 1 , 1 a 10 b 01 4 a 00 b 21 b 01 , b 00 a 01 2 4 b 00 a 10 a 12 a 1 , 1 a 01 b 10 + 2 a 00 a 12 b 10 + 4 a 1 , 1 a 10 b 01 2 a 00 a 01 b 01 , b 1 , 1 a 01 2 4 b 1 , 1 a 10 a 12 2 a 1 , 1 a 01 b 21 + 4 a 00 a 12 b 21 + 2 a 1 , 1 a 10 b 10 a 00 a 01 b 10 .
  • Conversely, if for a vector of parameters (a*, b*) ∈ V(I_2) there is a real solution ρ, σ of system (32) with the additional constraint (ρ, σ) ≠ (0, 0), then such a system is a member of rA_2.
Proof. 
Here we have to take into account only Equation (23) with B = −Id. Inserting α = δ = −1 and β = γ = 0 into (29) gives the equations
\[
0 = -a_{00} - 2 a_{10}\rho - a_{01}\sigma, \quad
0 = -a_{-1,1} - a_{01}\rho - 2 a_{-1,2}\sigma, \quad
0 = b_{1,-1} + 2 b_{2,-1}\rho + b_{10}\sigma, \quad
0 = b_{00} + b_{10}\rho + 2 b_{01}\sigma.
\]
Setting ρ = σ = 0 instantly gives (1).
For establishing (2), eliminating parameters ρ , σ and computing minimal associated primary components, gives us one component, the ideal I 2 .
Assertion in (3) follows by extension of partial solutions. □
Finally, let us present the last, almost trivial cases, for q ≥ 4.
Theorem 9.
  • System (27) belongs to rL_4 if and only if it is linear and, if it is non-trivial, its Jacobian has a pair of antipodal real eigenvalues.
  • If a system (27) belongs to r L 6 , then the parameters belong to V ( I 6 ) ,
    \[
    I_6 = \langle\, a_{00},\ b_{00},\ a_{-1,1},\ b_{1,-1},\ a_{01} - 2 b_{01},\ b_{10} - 2 a_{10} \,\rangle.
    \]
    Conversely, if for a vector of parameters (a*, b*) ∈ V(I_6) there exist real α, β, γ, δ solving the system (32) with ρ = σ = 0 and ε = −1, then such a system is a member of rL_6.
  • Only trivial systems belong to r L q for q = 2 l > 6 .
  • Only trivial systems belong to rA_q when q ≥ 4.
Proof. 
(1)
As R_4² = R_2 = −Id, we have rL_4 ⊆ L_2, so only linear systems belong to rL_4. A linear system belongs to rL_4 if and only if its Jacobian matrix F_1 satisfies F_1 B + B F_1 = 0, B = S R_4 S^{-1}, for some invertible S. Then S^{-1} F_1 S must Jordan commute with R_4, which implies that F_1 is similar to a member of the family T in (ii) of Lemma 1. Therefore, F_1 is of the required form.
(2)
The ideal I_6 is obtained by executing steps (1)–(5) above. The second claim follows by extending partial solutions.
(3)
Notice that R_q² = R_l, which implies that rL_q ⊆ L_l; since l ≥ 4, by (1) of Theorem 6 the system must be linear. Moreover, F_1 must satisfy F_1 B + B F_1 = 0 with B = S R_q S^{-1}, and arguing as at the end of the proof of Proposition 3 we obtain F_1 = 0.
(4)
Similarly as above, rA_q ⊆ A_l, l ≥ 2. If l = 2, the systems in A_l are all linear with a singular Jacobian matrix, see (2) of Theorem 6. On the other hand, the Jacobian matrix F_1 must also satisfy F_1 B + B F_1 = 0, B = S R_4 S^{-1}, for some invertible S. Then S^{-1} F_1 S must Jordan commute with R_4. The only singular matrix Jordan commuting with R_4 is the zero matrix, by (ii) of Lemma 1. Thus, F_1 = 0.
For q = 6 we can apply a geometric argument. If the system has a line of singular points, then it cannot be reversible Z_6-equivariant unless it is trivial; the computational procedure does not give any real solutions in this case. Alternatively, if the translation point were an isolated singular point, then, due to the reversible Z_q symmetry, the system would have 7 singular points, which is impossible for a planar quadratic system. For q ≥ 8 we can apply a similar argument, or we can use the relation rA_q ⊆ A_l and (2) of Theorem 6. □

4. Examples

The following examples illustrate our work.
Example 1.
The vector of coefficients of the system
x ˙ = x 2 x 2 + 2 x y + y 2 y ˙ = 2 x 2 y + 4 x y y 2 ,
see Figure 1, belongs to the variety V ( I 31 ) , so it is quite likely that there exists a linear transformation which brings this system to a Z 3 -equivariant one. After setting ε = 1 , q = 3 and ρ = σ = 0 in equations h 1 = = h 16 = 0 and solving this system for α , β , γ , δ , we get the solution
B = α β γ δ = 6 1 2 3 2 3 2 3 2 6 1 2 ,
which is not unique, since the inverse of B is a generating symmetry as well. To find the linear transformation S which changes the system into a Z_3-equivariant one, we proceed as follows. From the eigenvalue decompositions R_3 = S_R diag(e^{2iπ/3}, e^{4iπ/3}) S_R^{-1} and B = S_B diag(e^{2iπ/3}, e^{4iπ/3}) S_B^{-1}, we get B = (S_B S_R^{-1}) R_3 (S_B S_R^{-1})^{-1}. Therefore, the transformation matrix is
S = S B S R 1 = 2 2 1 0 1 .
Now, the transformation x = 2 2 u + v , y = v , converts the system to
u ˙ = u 2 u v v ˙ = u 2 v + v 2 ,
which is clearly Z 3 -equivariant with a node at the origin, see Figure 2.
Example 2.
In the next example we transform a system to a reversible Z_6-equivariant one by a bijective linear transformation. We claim that the system below is a member of rL_6:
x ˙ = x 2 + x y 2 y 2 y ˙ = 2 x 2 2 x y y 2 / 2 ,
see Figure 3. Proceeding as above, but this time using the ideal I_6, we obtain the system
u ˙ = 3 32 23 u 2 + 23 16 u v 3 32 23 v 2 v ˙ = 23 32 u 2 3 16 23 u v 23 32 v 2 ,
see Figure 4. Moreover, we have the symmetry matrix
B = α β γ δ = 23 7 69 46 9 69 46 4 69 23 23 + 7 69 46 ,
and the linear transformation
S = 23 8 7 8 0 1 .
Example 3.
An example of a system in A_3 is constructed as follows: we first require the origin to be a saddle, and then choose a member (a, b) ∈ V(I_{32}) such that the corresponding system has four singular points. As above, we find the matrix S representing the linear part of the transformation and, additionally, the translation vector (ρ, σ). We start with
x ˙ = x x 2 + 4 x y y ˙ = y + 2 x y 2 y 2 ,
see Figure 5, whose parameter vector belongs to V(I_{32}). The transformed system is
u ˙ = v / 3 + 3 ( u 2 v 2 ) v ˙ = u / 3 2 3 u v ,
see Figure 6, where the affine map ( x , y ) = S ( u , v ) + ( ρ , σ ) is defined by
S = 3 1 0 1 , ( ρ , σ ) = ( 1 / 3 , 1 / 6 ) .
We obtain exactly two solutions; the other one is the inverse map (u, v) ↦ S^{-1}(x, y)^{tr} − S^{-1}(ρ, σ)^{tr}.
Example 4.
Our last example presents a system which can be sent to a reversible Z_2-equivariant one; here we actually need only a translation, because R_2 = −Id. Starting with the system
x ˙ = 2 x 2 y x y + 25 8 y 2 y ˙ = x + 4 x 2 8 x y + y 2 ,
depicted in Figure 7, we obtain the transformed system
u ˙ = 1 12 + 2 u 2 u v + 25 8 v 2 v ˙ = 1 48 + 4 u 2 8 u v + v 2 ,
whose phase portrait is drawn in Figure 8.
Note that this provides a simple method to construct a system with two singular points of the same kind.

Author Contributions

M.H. and V.G.R. contributed to the conceptualization and methodology, supervised the work and prepared the final version of the paper. T.P. developed the software, performed the computations and wrote the initial draft of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

M. Han was funded by the National Natural Science Foundation of China (Nos. 11931016 and 11771296). The research of T. Petek and V. Romanovski was supported by the Slovenian Research Agency (programs P1-0306 and P1-0288, project N1-0063). We acknowledge the COST (European Cooperation in Science and Technology) actions CA15140 (Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)) and IC1406 (High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)), as well as Tomas Bata University in Zlín for access to Wolfram Mathematica.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Romanovski, V.G. Time-reversibility in 2-dimensional systems. Open Syst. Inf. Dyn. 2008, 15, 359–370.
  2. Han, M.; Petek, T.; Romanovski, V.G. Reversibility in polynomial systems of ODEs. Appl. Math. Comput. 2018, 338, 55–71.
  3. Lamb, J.S.W. Reversing symmetries in dynamical systems. Physica A 1992, 25, 925–937.
  4. Lamb, J.S.W.; Roberts, M. Reversible equivariant systems. J. Differ. Equ. 1999, 159, 239–279.
  5. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: Cambridge, UK, 1991.
  6. Li, J.; Zhao, X. Rotation symmetry groups of planar Hamiltonian systems. Ann. Differ. Equ. 1989, 5, 25–33.
  7. Han, M.; Sun, X. General form of Zq-reversible-equivariant planar systems and limit cycle bifurcations. J. Shanghai Norm. Univ. 2011, 40, 1–14.
  8. Cox, D.; Little, J.; O'Shea, D. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra; Springer: New York, NY, USA, 1997.
  9. Romanovski, V.G.; Shafer, D.S. The Center and Cyclicity Problems: A Computational Algebra Approach; Birkhäuser: Boston, MA, USA, 2009.
  10. Decker, W.; Greuel, G.-M.; Pfister, G.; Schönemann, H. Singular 3-1-6: A Computer Algebra System for Polynomial Computations, 2012. Available online: http://www.singular.uni-kl.de (accessed on 2 August 2020).
  11. Decker, W.; Laplagne, S.; Pfister, G.; Schönemann, H.A. primdec.lib: A Singular 3-1 Library for Computing the Prime Decomposition and Radical of Ideals, 2010. Available online: http://www.singular.uni-kl.de (accessed on 2 August 2020).
  12. Gianni, P.; Trager, B.; Zacharias, G. Gröbner bases and primary decomposition of polynomials. J. Symb. Comput. 1988, 6, 146–167.
  13. Brjuno, A.D. Local Methods in Nonlinear Differential Equations; Springer: Berlin, Germany, 1989.
Figure 1. System (37).
Figure 2. System (38).
Figure 3. System (39).
Figure 4. System (40).
Figure 5. System (41).
Figure 6. System (42).
Figure 7. System (43).
Figure 8. System (44).
