Open Access. Published by De Gruyter, May 26, 2020, under the CC BY 4.0 license.

Cauchy matrix and Liouville formula of quaternion impulsive dynamic equations on time scales

Zhien Li and Chao Wang
From the journal Open Mathematics

Abstract

In this study, we obtain the scalar and matrix exponential functions as series of quaternion-valued functions on time scales. A necessary and sufficient condition is established to guarantee that the matrix induced by the complex adjoint matrix of a quaternion matrix is real-valued. Moreover, the Cauchy matrices and Liouville formulas for the quaternion homogeneous and nonhomogeneous impulsive dynamic equations are given and proved. Based on these, the existence, uniqueness, and expressions of their solutions are also obtained, in both scalar and matrix form. Since the quaternion algebra is noncommutative, many concepts and properties of non-quaternion impulsive dynamic equations do not carry over; we therefore provide several examples and counterexamples on various time scales to illustrate the effectiveness of our results.

MSC 2010: 34A37; 34N05; 11R52

1 Introduction

In 1843, Hamilton initiated the concept of quaternions, which extends the complex numbers to four-dimensional space [1]. Quaternions are 4-vectors whose multiplication is determined by a noncommutative division algebra. A quaternion $q = (q_0, q_1, q_2, q_3) \in \mathbb{R}^4$ is written as

$$q = q_0 + q_1 i + q_2 j + q_3 k,$$

where $q_0, q_1, q_2, q_3 \in \mathbb{R}$ and $i$, $j$, $k$ satisfy the multiplication rules

$$i^2 = j^2 = k^2 = -1,\quad jk = -kj = i,\quad ki = -ik = j,\quad ij = -ji = k.$$

In real-world applications, quaternions are superior to real-valued vectors in describing phenomena in physics and the life sciences [2]. In fact, quaternionic differential equations arise in many research fields such as differential geometry, fluid mechanics, attitude dynamics, and quantum mechanics, and many interesting phenomena in the quaternionic setting have attracted numerous researchers [3,4,5,6,7]. To the best of our knowledge, there are few research results on the theory of quaternion dynamic equations on time scales [8].

To study dynamic equations on hybrid domains, Stefan Hilger introduced the theory of time scales in 1988; it provides an effective way to unify various hybrid-domain analyses and has recently received much attention [9,10]. Non-quaternion dynamic equations and their applications on time scales have been well studied in various fields, and many results have been obtained [11,12,13,14]. It is well known that a time scale $\mathbb{T}$ is an arbitrary nonempty closed subset of the reals. By choosing a particular time scale, the general results yield results for the corresponding type of dynamic equation: for example, if $\mathbb{T} = \mathbb{R}$, the dynamic equations become differential equations, and if $\mathbb{T} = h\mathbb{Z}$, the same results yield results for difference equations with step size $h$. However, since there are many other time scales besides the real numbers and the integers, such as the quantum time scale $\mathbb{T} = q^{\mathbb{Z}}$ and hybrid domains like $\mathbb{T} = h\mathbb{Z} \cup \overline{q^{\mathbb{Z}}}$, one can obtain much more general results by using the theory of time scales.

On the other hand, impulsive dynamic equations play a vital role in describing natural phenomena with sudden changes. Impulsive dynamic systems are an active research topic, since the instantaneous changes caused by impulses are essential for explaining and capturing the behavior of the modeled object. For this reason, there is a large literature in this field [15,16,17,18].

Nevertheless, there are no research results on the Cauchy matrix and Liouville formula in the theory of quaternion impulsive dynamic equations on time scales, which leads to many difficulties in studying quaternion impulsive dynamic equations on complex hybrid domains. To fill this gap, in Sections 3 and 4, the Cauchy matrices and Liouville formulas for the quaternion homogeneous and nonhomogeneous impulsive dynamic equations are derived; based on them, the existence, uniqueness, and expressions of their solutions are also obtained in scalar and matrix form, respectively. In each section, several concrete examples and counterexamples are provided to illustrate the feasibility of the obtained results.

2 Preliminaries

We denote the space of quaternions by $\mathbb{H}$. For any $q = x_0 + x_1 i + x_2 j + x_3 k \in \mathbb{H}$, the conjugate, the real part, and the imaginary part of $q$ are, respectively,

$$\bar{q} = x_0 - x_1 i - x_2 j - x_3 k,\quad \Re(q) = x_0,\quad \Im(q) = x_1 i + x_2 j + x_3 k.$$

The quaternions form a noncommutative division algebra, but real numbers commute with quaternions, i.e., if $t \in \mathbb{R}$ and $q \in \mathbb{H}$, then $tq = qt$. Besides,

$$\bar{q}q = x_0^2 + x_1^2 + x_2^2 + x_3^2 = |q|^2,\quad \overline{qh} = \bar{h}\,\bar{q},\quad q^{-1} = \frac{\bar{q}}{|q|^2},\quad \Re(qp) = \Re(pq).$$
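For readers who wish to experiment numerically, the following is a minimal sketch (my own code, not part of the paper) of the operations just listed; the helper names qmul, qconj, and qinv are of my choosing.

```python
# Minimal sketch: quaternion arithmetic with 4-tuples (x0, x1, x2, x3)
# standing for x0 + x1*i + x2*j + x3*k.

def qmul(p, q):
    """Hamilton product; encodes i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qnorm2(q):
    return sum(c * c for c in q)                 # |q|^2 = conj(q) * q

def qinv(q):
    n2 = qnorm2(q)
    return tuple(c / n2 for c in qconj(q))       # q^{-1} = conj(q) / |q|^2

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
assert qmul(i, j) == (0, 0, 0, 1)                # ij = k
assert qmul(j, i) == (0, 0, 0, -1)               # ji = -k: multiplication is noncommutative
```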

Similar to Definition 5.18 from [9], we can also introduce the following definition of quaternion-valued matrix exponential function.

Definition 2.1

Let $f : \mathbb{T} \to \mathbb{H}$; we define the exponential function $e_f(t, t_0)$ as the solution of the initial value problem

$$x^{\Delta}(t) = f(t)x(t),\quad x(t_0) = 1.$$

Also, for $A : \mathbb{T} \to \mathbb{H}^{n \times n}$, the matrix exponential function $e_A(t, t_0)$ is defined as the solution of the initial value problem

$$X^{\Delta}(t) = A(t)X(t),\quad X(t_0) = I,$$

where $I$ is the $n \times n$ identity matrix.

Definition 2.2

For $A : \mathbb{T} \to \mathbb{H}^{n \times n}$ with $A(t) = [a_{rs}(t)]_{n \times n}$, $1 \le r, s \le n$, if every $a_{rs}(t)$ is rd-continuous, then $A$ is said to be an rd-continuous quaternion-valued matrix; the collection of all such matrix functions is denoted by $C_{\mathrm{rd}}$.

Definition 2.3

[19] Every quaternion function matrix $A \in C_{\mathrm{rd}}$ can be expressed uniquely in the form

$$A(t) = \phi_1(A(t)) + \phi_2(A(t))\,j, \quad \text{where } \phi_1(A(t)), \phi_2(A(t)) \in \mathbb{C}^{n \times n}.$$

Hence, we can define $G : \mathbb{H}^{n \times n} \to \mathbb{C}^{2n \times 2n}$ by

$$G(A(t)) = \begin{bmatrix} \phi_1(A(t)) & \phi_2(A(t)) \\ -\overline{\phi_2(A(t))} & \overline{\phi_1(A(t))} \end{bmatrix},$$

where $G(A(t))$ is called the complex adjoint function matrix of the quaternion function matrix $A(t)$.

Denote $\Phi(A(t))$ by

$$\Phi(A(t)) = \phi_1(A(t))\,\overline{\phi_1(A(t))} + \phi_2(A(t))\,\overline{\phi_2(A(t))}.$$
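As a numerical illustration of Definition 2.3 (my own code, not the paper's; the helper names adjoint and induced are mine), the complex adjoint $G(A)$ and the induced matrix $\Phi(A)$ can be assembled directly from the complex blocks $\phi_1$, $\phi_2$ of $A = \phi_1 + \phi_2 j$; the data used below are those of Example 2.1 further on.

```python
# Sketch: complex adjoint G(A) and induced matrix Phi(A) of a quaternion matrix
# given by its complex blocks phi1, phi2 with A = phi1 + phi2 * j.
import numpy as np

def adjoint(phi1, phi2):
    """Complex adjoint G(A) as a 2n x 2n complex matrix."""
    return np.block([[phi1, phi2],
                     [-np.conj(phi2), np.conj(phi1)]])

def induced(phi1, phi2):
    """Induced matrix Phi(A) = phi1*conj(phi1) + phi2*conj(phi2)."""
    return phi1 @ np.conj(phi1) + phi2 @ np.conj(phi2)

# Data of Example 2.1:
phi1 = np.array([[1 + 1j, 2 + 1j], [1, 3 + 1j]])
phi2 = np.array([[1 + 1j, 3 + 1j], [1, 1j]])
print(induced(phi1, phi2))   # [[9+2j, 15+1j], [5+0j, 16-2j]]: not real, as noted in Remark 2.2
```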

Remark 2.1

For $A \in C_{\mathrm{rd}}$, let $A(t) = A_0(t) + A_1(t)i + A_2(t)j + A_3(t)k$, where $A_{n_0} : \mathbb{T} \to \mathbb{R}^{n \times n}$, $n_0 = 0, 1, 2, 3$. By Definition 2.3, the following hold:

$$\phi_1(A(t))\,\overline{\phi_1(A(t))} = A_0(t)A_0(t) + A_1(t)A_1(t) + [A_1(t)A_0(t) - A_0(t)A_1(t)]\,i, \qquad \phi_2(A(t))\,\overline{\phi_2(A(t))} = A_2(t)A_2(t) + A_3(t)A_3(t) + [A_3(t)A_2(t) - A_2(t)A_3(t)]\,i.$$

Hence, $\Phi(A(\cdot)) \in \mathbb{C}^{n \times n}$. Moreover, $\Phi(A(\cdot)) \in \mathbb{R}^{n \times n}$ if and only if

(2.1) $$A_0(t)A_1(t) - A_1(t)A_0(t) + A_2(t)A_3(t) - A_3(t)A_2(t) = 0.$$

Example 2.1

For some $t_0 \in \mathbb{T}$, let

$$A(t_0) = \begin{bmatrix} 1+i & 2+i \\ 1 & 3+i \end{bmatrix} + \begin{bmatrix} 1+i & 3+i \\ 1 & i \end{bmatrix} j.$$

Hence,

$$\phi_1(A(t_0))\,\overline{\phi_1(A(t_0))} = \begin{bmatrix} 1+i & 2+i \\ 1 & 3+i \end{bmatrix}\begin{bmatrix} 1-i & 2-i \\ 1 & 3-i \end{bmatrix} = \begin{bmatrix} 4+i & 10+2i \\ 4 & 12-i \end{bmatrix}, \qquad \phi_2(A(t_0))\,\overline{\phi_2(A(t_0))} = \begin{bmatrix} 1+i & 3+i \\ 1 & i \end{bmatrix}\begin{bmatrix} 1-i & 3-i \\ 1 & -i \end{bmatrix} = \begin{bmatrix} 5+i & 5-i \\ 1 & 4-i \end{bmatrix}.$$

Therefore,

$$\Phi(A(t_0)) = \begin{bmatrix} 9+2i & 15+i \\ 5 & 16-2i \end{bmatrix} \in \mathbb{C}^{2\times 2}.$$

Definition 2.4

[8] For any $A \in C_{\mathrm{rd}}$ with $\Phi(A(\cdot)) \in \mathbb{R}^{n \times n}$, we define the determinant of the quaternion function matrix by

$$\operatorname{ddet} A(t) := \det[\Phi(A(t))].$$
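A small sketch of this definition (my own code, illustrative only): ddet is obtained by forming $\Phi(A(t))$ from the complex blocks and taking an ordinary determinant once the imaginary part vanishes, i.e., once condition (2.1) of Remark 2.1 holds.

```python
# Sketch of Definition 2.4: ddet A(t) = det[Phi(A(t))], defined only when
# Phi(A(t)) is real-valued; phi1, phi2 are the complex blocks of A(t).
import numpy as np

def ddet(phi1, phi2, tol=1e-12):
    Phi = phi1 @ np.conj(phi1) + phi2 @ np.conj(phi2)
    if not np.allclose(Phi.imag, 0, atol=tol):
        raise ValueError("Phi(A(t)) is not real-valued; ddet is not defined")
    return np.linalg.det(Phi.real)
```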

Remark 2.2

According to Example 2.1, $\Phi(A(\cdot))$ is not always a real-valued matrix. Hence, the condition $\Phi(A(\cdot)) \in \mathbb{R}^{n \times n}$ in Definition 2.4 is necessary.

Remark 2.3

If $A : \mathbb{T} \to \mathbb{R}^{n \times n}$, then $\operatorname{ddet} A(t) = \det A(t)\det A(t)$.

We introduce the notation $D = \{A \in C_{\mathrm{rd}} : \Phi(A(\cdot)) \in \mathbb{R}^{n \times n}\}$.

For any $A(\cdot) = [a_{rh}(\cdot)]_{n \times n} \in \mathbb{H}^{n \times n}$ and $a_{rh}, x : \mathbb{T} \to \mathbb{H}$, let

$$a_{rh}(t) = a_{rh0}(t) + a_{rh1}(t)i + a_{rh2}(t)j + a_{rh3}(t)k, \qquad x(t) = x_0(t) + x_1(t)i + x_2(t)j + x_3(t)k,$$

where $a_{rhs}, x_s : \mathbb{T} \to \mathbb{R}$, $1 \le r, h \le n$, $s = 0, 1, 2, 3$. Define

$$\|A(t)\| = \sum_{r,h=1}^{n}\sum_{s=0}^{3}|a_{rhs}(t)|, \qquad \|x(t)\| = \sum_{s=0}^{3}|x_s(t)|.$$

Definition 2.5

[8] Let $f(t) = f_0(t) + f_1(t)i + f_2(t)j + f_3(t)k$ with $f_r : \mathbb{T} \to \mathbb{R}$ rd-continuous for each $r = 0, 1, 2, 3$. The integral of the function $f(t)$ is defined by

$$\int_{t_0}^{t} f(\tau)\Delta\tau = \int_{t_0}^{t} f_0(\tau)\Delta\tau + i\int_{t_0}^{t} f_1(\tau)\Delta\tau + j\int_{t_0}^{t} f_2(\tau)\Delta\tau + k\int_{t_0}^{t} f_3(\tau)\Delta\tau,$$

where $i$, $j$, and $k$ are the quaternion imaginary units.
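On a purely right-scattered time scale the $\Delta$-integral reduces to a weighted sum with weights $\mu(\tau)$; the sketch below (my own helper, assuming the time scale is given as a sorted list of points) applies Definition 2.5 componentwise.

```python
# Sketch: componentwise Delta-integral of a quaternion-valued f on a purely
# right-scattered time scale ts (a sorted list of points containing t0 and t).
# Quaternions are 4-tuples (x0, x1, x2, x3) standing for x0 + x1 i + x2 j + x3 k.

def delta_integral(f, ts, t0, t):
    total = [0.0, 0.0, 0.0, 0.0]
    for s, s_next in zip(ts, ts[1:]):        # s_next = sigma(s)
        if t0 <= s < t:
            mu = s_next - s                  # graininess mu(s) = sigma(s) - s
            total = [a + mu * b for a, b in zip(total, f(s))]
    return tuple(total)

# Example on T = Z: the integral of f(t) = 1 + t*i from 1 to 3 is 2 + 3i.
print(delta_integral(lambda t: (1.0, float(t), 0.0, 0.0), list(range(10)), 1, 3))
```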

Definition 2.6

[19] Let $A(\cdot) = [a_{rh}(\cdot)]_{n \times n} \in C_{\mathrm{rd}}$ with $a_{rh} : \mathbb{T} \to \mathbb{H}$, $1 \le r, h \le n$. The integral of the matrix function $A(t)$ is defined by

$$\int_{t_0}^{t} A(\tau)\Delta\tau = \left[\int_{t_0}^{t} a_{rh}(\tau)\Delta\tau\right]_{n \times n}.$$

3 Quaternion scalar impulsive dynamic equation

Now, we consider the impulsive dynamic equations on a time scale as follows:

(3.1) $$\begin{cases} x^{\Delta}(t) = f(t)x(t) + h(t), & t \ne t_n, \\ \tilde{\Delta}x(t) = m_n x(t), & t = t_n, \end{cases}$$

where $f : \mathbb{T} \to \mathbb{H}$, $m_n \in \mathbb{H}$, $t_n \in \mathbb{T}$, and $\tilde{\Delta}x(t) = x(\sigma(t^+)) - x(t)$.

Remark 3.1

In (3.1), if $t_n$ is a right-dense point, then $\tilde{\Delta}x(t) = x(t^+) - x(t)$; if $t_n$ is a right-scattered point, then $\tilde{\Delta}x(t) = x(\sigma(t)) - x(t)$.

Lemma 3.1

Let $F(t) = \int_{t_0}^{t} f(\tau)\int_{t_0}^{\tau} g(\eta)\Delta\eta\,\Delta\tau$ with $t_0 \in \mathbb{T}$ fixed, where $f, g : \mathbb{T} \to \mathbb{H}$. Then $F : \mathbb{T} \to \mathbb{H}$ is differentiable at $t$ with

$$F^{\Delta}(t) = f(t)\int_{t_0}^{t} g(\eta)\Delta\eta.$$

Moreover, if $R(t) = \int_{t_0}^{t} f(\eta_n)\int_{t_0}^{\eta_n} f(\eta_{n-1})\cdots\int_{t_0}^{\eta_2} f(\eta_1)\Delta\eta_1\cdots\Delta\eta_{n-1}\Delta\eta_n$, then

(3.2) $$R^{\Delta}(t) = f(t)\int_{t_0}^{t} f(\eta_{n-1})\cdots\int_{t_0}^{\eta_2} f(\eta_1)\Delta\eta_1\cdots\Delta\eta_{n-1}.$$

Proof

For a right-scattered point $t \in \mathbb{T}$, the $\Delta$-derivative of $F(t)$ can be calculated as follows:

$$F^{\Delta}(t) = \frac{\int_{t_0}^{\sigma(t)} f(\tau)\int_{t_0}^{\tau} g(\eta)\Delta\eta\,\Delta\tau - \int_{t_0}^{t} f(\tau)\int_{t_0}^{\tau} g(\eta)\Delta\eta\,\Delta\tau}{\mu(t)} = \frac{\int_{t}^{\sigma(t)} f(\tau)\Delta\tau\,\int_{t_0}^{\tau} g(\eta)\Delta\eta}{\mu(t)} = \frac{\left[\int_{t}^{\sigma(t)} f_0(\tau)\Delta\tau + i\int_{t}^{\sigma(t)} f_1(\tau)\Delta\tau + j\int_{t}^{\sigma(t)} f_2(\tau)\Delta\tau + k\int_{t}^{\sigma(t)} f_3(\tau)\Delta\tau\right]\int_{t_0}^{\tau} g(\eta)\Delta\eta}{\mu(t)} = [f_0(t) + if_1(t) + jf_2(t) + kf_3(t)]\int_{t_0}^{t} g(\eta)\Delta\eta = f(t)\int_{t_0}^{t} g(\eta)\Delta\eta.$$

For a right-dense point $t \in \mathbb{T}$, the derivative of $F(t)$ is obvious. Similar to the above calculation, by taking $R(t)$ in place of $F(t)$, we can obtain (3.2). The proof is complete.□

Now, consider the following homogeneous linear dynamic equations:

(3.3) $$\begin{cases} x^{\Delta}(t) = f(t)x(t), \\ x(t_0) = x_0, \end{cases}$$

where $f : \mathbb{T} \to \mathbb{H}$ is rd-continuous, $x_0 \in \mathbb{H}$, $t_0 \in \mathbb{T}$.

Lemma 3.2

For (3.3), if $f$ is uniformly bounded on $\mathbb{T}$, i.e., there exists a constant $M_f > 0$ such that $\|f(t)\| \le M_f$ for all $t \in \mathbb{T}$, then the solution $x(t)$ of the initial value problem (3.3) is rd-continuous and given by

$$x(t) = \left(1 + \sum_{n=1}^{\infty} c_n(t)\right)x_0,$$

where

$$c_n(t) = \int_{t_0}^{t} f(t_n)\int_{t_0}^{t_n} f(t_{n-1})\cdots\int_{t_0}^{t_2} f(t_1)\Delta t_1\cdots\Delta t_{n-1}\Delta t_n.$$

Proof

Let $h$ be a constant with $h > 0$. For $t_0 \le t < t_0 + h$, we have

$$\|c_n\| \le \int_{t_0}^{t} M_f\int_{t_0}^{t_n} M_f\cdots\int_{t_0}^{t_2} M_f\,\Delta t_1\cdots\Delta t_{n-1}\Delta t_n = M_f^n h_n(t, t_0) \le \frac{M_f^n (t-t_0)^n}{n!} \le \frac{M_f^n h^n}{n!}.$$

By the Weierstrass M-test, the series $\sum_{n=1}^{\infty}\frac{M_f^n h^n}{n!}$ is convergent (say it converges to some $a \in \mathbb{R}^+$), which implies that the series $\sum_{n=1}^{\infty} c_n$ is uniformly convergent on $\mathbb{T}$.

Next, we show that the function $x(t)$ is rd-continuous. For a right-dense point $t_r \in \mathbb{T}$ and $\varepsilon > 0$, there exists $\delta(\varepsilon) = \varepsilon/(aM_f) > 0$ such that for $|t - t_r| < \delta(\varepsilon)$ we have

$$\|x(t) - x(t_r)\| = \left\|\sum_{n=1}^{\infty}\int_{t_r}^{t} f(t_n)\int_{t_0}^{t_n} f(t_{n-1})\cdots\int_{t_0}^{t_2} f(t_1)\Delta t_1\cdots\Delta t_{n-1}\Delta t_n\,x_0\right\| \le \sum_{n=1}^{\infty}\frac{M_f^n|t - t_r|(t - t_0)^{n-1}}{(n-1)!} \le \delta(\varepsilon)\sum_{n=1}^{\infty}\frac{M_f^n(t - t_0)^{n-1}}{(n-1)!} \le \frac{\varepsilon}{aM_f}M_f\sum_{n=1}^{\infty}\frac{M_f^{n-1}h^{n-1}}{(n-1)!} < \varepsilon.$$

Thus, $x(t)$ is continuous at right-dense points. Moreover, since $f(t)$ is rd-continuous, $x(t)$ has a finite left-sided limit at every left-dense point. Therefore, $x(t)$ is rd-continuous.

On the other hand, by Lemma 3.1, we get $c_n^{\Delta}(t) = f(t)c_{n-1}(t)$, and hence $x(t) = x_0 + \sum_{n=1}^{\infty} c_n(t)x_0$ is $\Delta$-differentiable with

$$x^{\Delta}(t) = \sum_{n=1}^{\infty} f(t)c_{n-1}(t)x_0,$$

where $c_0 = 1$; hence $x^{\Delta}(t) = f(t)x(t)$. Therefore, the function series $x(t) = x_0 + \sum_{n=1}^{\infty} c_n x_0$ is a solution of (3.3); according to the continuation theorem of solutions for dynamic equations, $x(t)$ is a solution of (3.3) on $\mathbb{T}$. Furthermore, assume that $x_1$ and $x_2$ are two solutions of (3.3). Then

$$\|x_1(t) - x_2(t)\| \le \int_{t_0}^{t}\|f(\tau)\|\,\|x_1(\tau) - x_2(\tau)\|\Delta\tau \le M_f\int_{t_0}^{t}\|x_1(\tau) - x_2(\tau)\|\Delta\tau.$$

By Corollary 6.7 from [9] (the Bellman inequality on time scales), we get $\|x_1(t) - x_2(t)\| = 0$. Therefore, the solution of (3.3) is unique. The proof is complete.□

Theorem 3.1

Let $f : \mathbb{T} \to \mathbb{H}$ be rd-continuous on $\mathbb{T}$. Then (3.3) with the initial condition $x(t_0) = 1$ has a solution in the exponential form

$$e_f(t, t_0) = 1 + \sum_{n=1}^{\infty}\int_{t_0}^{t} f(\tau_n)\int_{t_0}^{\tau_n} f(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n.$$

Proof

Let

$$C_n(t) = \int_{t_0}^{t} f(\tau_n)\int_{t_0}^{\tau_n} f(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n$$

for $n \ge 1$. By Lemma 3.1, we get

$$C_n^{\Delta}(t) = f(t)C_{n-1}(t).$$

By Lemma 3.2, $1 + \sum_{n=1}^{\infty} C_n(t)$ is the unique solution of (3.3) with $x(t_0) = 1$. Hence, by Definition 2.1, we obtain

$$e_f(t, t_0) = 1 + \sum_{n=1}^{\infty}\int_{t_0}^{t} f(\tau_n)\int_{t_0}^{\tau_n} f(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n.$$

The proof is complete.□

Remark 3.2

From Theorem 3.1, for any $f : \mathbb{T} \to \mathbb{H}$ and $s, t \in \mathbb{T}$, we can easily obtain (i) $e_f(t, t) = 1$; (ii) $e_f(\sigma(t), s) - e_f(t, s) = \mu(t)f(t)e_f(t, s)$; and (iii) $e_f(t, s) \ne e_f(s, t)$.
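On a purely right-scattered time scale, the defining equation $x^{\Delta}(t) = f(t)x(t)$ gives $x(\sigma(t)) = (1 + \mu(t)f(t))x(t)$, so the series of Theorem 3.1 collapses to an ordered product in which later factors multiply on the left (the order matters because quaternions do not commute). The following sketch (my own code, not part of the paper) checks property (ii) above on $\mathbb{T} = \mathbb{Z}$.

```python
# Sketch: e_f(t, t0) on a purely right-scattered time scale ts (sorted list),
# built from x(sigma(s)) = (1 + mu(s) f(s)) x(s); later factors go on the LEFT.

def qmul(p, q):  # Hamilton product of 4-tuples
    p0, p1, p2, p3 = p; q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def exp_f(f, ts, t0, t):
    e = (1.0, 0.0, 0.0, 0.0)
    for s, s_next in zip(ts, ts[1:]):
        if t0 <= s < t:
            mu = s_next - s
            factor = tuple(a + mu * b for a, b in zip((1.0, 0.0, 0.0, 0.0), f(s)))
            e = qmul(factor, e)                 # new factor on the left
    return e

# Check Remark 3.2(ii): e_f(sigma(t), s) - e_f(t, s) = mu(t) f(t) e_f(t, s) on T = Z.
ts = list(range(0, 12))
f = lambda u: (1.0, float(u), 0.0, 1.0)
t, s = 5, 1                                     # sigma(t) = t + 1, mu(t) = 1
lhs = tuple(a - b for a, b in zip(exp_f(f, ts, s, t + 1), exp_f(f, ts, s, t)))
rhs = qmul(f(t), exp_f(f, ts, s, t))
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```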

Now, consider the homogeneous linear impulsive dynamic equations as follows:

(3.4) $$\begin{cases} x^{\Delta}(t) = f(t)x(t), & t \ne t_n, \\ \tilde{\Delta}x(t) = m_n x(t), & t = t_n,\ n \in \mathbb{Z}^+, \\ x(t_0) = x_0, \end{cases}$$

where $t_n \in \{t_1, t_2, \ldots, t_{n_0}\} \subset (t_0, t)_{\mathbb{T}}$ for $t > t_0$, $f : \mathbb{T} \to \mathbb{H}$, $m_n \in \mathbb{H}$, and $\tilde{\Delta}x(t) = x(\sigma(t^+)) - x(t)$.

Lemma 3.3

The solution of (3.4) can be given as

$$x(t) = \begin{cases} \left[1 + \sum_{r=1}^{\infty} c_{1,r}\right]x_0, & t_0 \le t < \sigma(t_1^+), \\[1ex] \left[1 + \sum_{r=1}^{\infty} c_{s,r}\right]\prod_{v=s-1}^{1}(1+m_v)\left[1 + \sum_{l=1}^{\infty} C_{v,l}\right]x_0, & \sigma(t_{s-1}^+) \le t < \sigma(t_s^+),\ 1 < s \le n_0, \\[1ex] \prod_{v=s}^{1}(1+m_v)\left[1 + \sum_{l=1}^{\infty} C_{v,l}\right]x_0, & t = \sigma(t_s^+),\ 1 < s \le n_0, \\[1ex] \left[1 + \sum_{r=1}^{\infty} c_{n_0+1,r}\right]\prod_{v=n_0}^{1}(1+m_v)\left[1 + \sum_{l=1}^{\infty} C_{v,l}\right]x_0, & t > \sigma(t_{n_0}^+), \end{cases}$$

where $c_{s,r} = \int_{\sigma(t_{s-1}^+)}^{t} f(\tau_r)\int_{\sigma(t_{s-1}^+)}^{\tau_r} f(\tau_{r-1})\cdots\int_{\sigma(t_{s-1}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r$ and $C_{s,r} = c_{s,r}(t_s)$, $1 < s \le n_0$.

Proof

By Lemma 3.2, for $t_0 \le t < \sigma(t_1^+)$, we have

$$x(t) = x_0 + \sum_{r=1}^{\infty} c_{1,r}x_0, \qquad c_{1,r} = \int_{t_0}^{t} f(\tau_r)\int_{t_0}^{\tau_r} f(\tau_{r-1})\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r.$$

Furthermore, $x(\sigma(t_1^+)) - x(t_1) = m_1 x(t_1)$, so $x(\sigma(t_1^+)) = (1 + m_1)x(t_1)$. Then, for any $1 < s \le n_0$ and $\sigma(t_{s-1}^+) \le t < \sigma(t_s^+)$, we obtain

$$x(t) = \left(1 + \sum_{r=1}^{\infty} c_{s,r}(t)\right)x(\sigma(t_{s-1}^+)), \quad \text{where } c_{s,r}(t) = \int_{\sigma(t_{s-1}^+)}^{t} f(\tau_r)\int_{\sigma(t_{s-1}^+)}^{\tau_r} f(\tau_{r-1})\cdots\int_{\sigma(t_{s-1}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r.$$

Hence, for $t \ge \sigma(t_{n_0}^+)$, by repeating the same iteration process, we have

$$x(t) = \left[1 + \sum_{r=1}^{\infty}\int_{\sigma(t_{n_0}^+)}^{t} f(\tau_r)\int_{\sigma(t_{n_0}^+)}^{\tau_r} f(\tau_{r-1})\cdots\int_{\sigma(t_{n_0}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r\right]x(\sigma(t_{n_0}^+)) = \left[1 + \sum_{r=1}^{\infty}\int_{\sigma(t_{n_0}^+)}^{t} f(\tau_r)\int_{\sigma(t_{n_0}^+)}^{\tau_r} f(\tau_{r-1})\cdots\int_{\sigma(t_{n_0}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r\right]\prod_{s=n_0}^{1}(1+m_s)\left[1 + \sum_{r=1}^{\infty} C_{s,r}\right]x_0,$$

so the solution of (3.4) given in Lemma 3.3 is obtained. This completes the proof.□

Now, consider the nonhomogeneous linear dynamic equation as follows:

(3.5) $$\begin{cases} x^{\Delta}(t) = f(t)x(t) + h(t), \\ x(t_0) = x_0, \end{cases}$$

where $f, h : \mathbb{T} \to \mathbb{H}$, $x_0 \in \mathbb{H}$, $t_0 \in \mathbb{T}$.

Lemma 3.4

The solution of (3.5) is given by

$$x(t) = e_f(t, t_0)x_0 + \int_{t_0}^{t} e_f(t, \sigma(\tau))h(\tau)\Delta\tau.$$

Moreover, x(t) can be given as

$$x(t) = \left(1 + \sum_{n=1}^{\infty}\int_{t_0}^{t} f(\tau_n)\int_{t_0}^{\tau_n} f(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n\right)x_0 + \int_{t_0}^{t}\left(1 + \sum_{n=1}^{\infty}\int_{\sigma(\tau)}^{t} f(\tau_n)\int_{\sigma(\tau)}^{\tau_n} f(\tau_{n-1})\cdots\int_{\sigma(\tau)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n\right)h(\tau)\Delta\tau.$$

Proof

For $t = t_0$, it is obvious that $x(t_0) = e_f(t_0, t_0)x_0 + \int_{t_0}^{t_0} e_f(t_0, \sigma(\tau))h(\tau)\Delta\tau = x_0$. Moreover,

$$\begin{aligned} \mu(t)x^{\Delta}(t) &= e_f(\sigma(t), t_0)x_0 + \int_{t_0}^{\sigma(t)} e_f(\sigma(t), \sigma(\tau))h(\tau)\Delta\tau - e_f(t, t_0)x_0 - \int_{t_0}^{t} e_f(t, \sigma(\tau))h(\tau)\Delta\tau \\ &= e_f(\sigma(t), t_0)x_0 - e_f(t, t_0)x_0 + \int_{t_0}^{\sigma(t)} e_f(\sigma(t), \sigma(\tau))h(\tau)\Delta\tau - \int_{t_0}^{t} e_f(t, \sigma(\tau))h(\tau)\Delta\tau \\ &= \mu(t)f(t)e_f(t, t_0)x_0 + \int_{t}^{\sigma(t)} e_f(\sigma(t), \sigma(\tau))h(\tau)\Delta\tau + \int_{t_0}^{t}\left[e_f(\sigma(t), \sigma(\tau)) - e_f(t, \sigma(\tau))\right]h(\tau)\Delta\tau \\ &= \mu(t)f(t)e_f(t, t_0)x_0 + \mu(t)e_f(\sigma(t), \sigma(t))h(t) + \int_{t_0}^{t}\mu(t)f(t)e_f(t, \sigma(\tau))h(\tau)\Delta\tau \\ &= [f(t)x(t) + h(t)]\mu(t). \end{aligned}$$

Moreover, by Theorem 3.1, we can obtain the desired results. The proof is complete.□
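Before turning to the impulsive case, here is a small numerical check of the variation-of-constants formula of Lemma 3.4 (my own code, with helper names of my choosing) on $\mathbb{T} = \mathbb{Z}$, where $\sigma(\tau) = \tau + 1$, $\mu \equiv 1$, and the $\Delta$-integral is a finite sum.

```python
# Sketch: verify x(t) = e_f(t,t0) x0 + sum_{tau} e_f(t, tau+1) h(tau) against the
# defining recursion x(t+1) = x(t) + f(t) x(t) + h(t) on T = Z.
# Quaternions are 4-tuples; qmul is the Hamilton product.

def qmul(p, q):
    p0, p1, p2, p3 = p; q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def exp_f(f, t0, t):                          # e_f(t, t0) on T = Z, t >= t0
    e = (1.0, 0.0, 0.0, 0.0)
    for s in range(t0, t):
        e = qmul(qadd((1.0, 0.0, 0.0, 0.0), f(s)), e)
    return e

f = lambda t: (0.5, float(t), 0.0, 0.0)
h = lambda t: (2.0, 0.0, float(t), 0.0)
x0, t0, t_end = (1.0, 0.0, 0.0, 1.0), 0, 6

# direct recursion
x = x0
for s in range(t0, t_end):
    x = qadd(x, qadd(qmul(f(s), x), h(s)))

# closed form of Lemma 3.4 with mu = 1 and sigma(tau) = tau + 1
y = qmul(exp_f(f, t0, t_end), x0)
for tau in range(t0, t_end):
    y = qadd(y, qmul(exp_f(f, tau + 1, t_end), h(tau)))

assert all(abs(a - b) < 1e-9 for a, b in zip(x, y))
```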

Next, we consider the nonhomogeneous impulsive dynamic equation as follows:

(3.6) $$\begin{cases} x^{\Delta}(t) = f(t)x(t) + h(t), & t \ne t_n, \\ \tilde{\Delta}x(t) = m_n x(t), & t = t_n,\ n \in \mathbb{Z}^+, \\ x(t_0) = x_0, \end{cases}$$

where $f, h : \mathbb{T} \to \mathbb{H}$, $t_n \in \{t_1, t_2, \ldots, t_{n_0}\} \subset [t_0, t]$, $t \in [a, b]_{\mathbb{T}}$, and $1 + m_n \ne 0$, $m_n \in \mathbb{H}$.

Theorem 3.2

The solution of (3.6) is given by

$$\Psi_{f,h} = \begin{cases} \left(1 + \sum_{r=1}^{\infty} c_{1,r}\right)x_0 + \int_{t_0}^{t} e_f(t,\sigma(\tau))h(\tau)\Delta\tau, & t_0 \le t < \sigma(t_1^+), \\[1ex] \left(1 + \sum_{w=1}^{\infty} c_{s,w}\right)\left\{\prod_{v=s-1}^{1}(1+m_v)u_v + \sum_{l=2}^{s-2}\prod_{k=s-1}^{l+1}(1+m_k)u_k(1+m_l)h_{l-1} + (1+m_{s-1})h_{s-2}\right\} + \int_{\sigma(t_{s-1}^+)}^{t} e_f(t,\sigma(\tau))h(\tau)\Delta\tau, & \sigma(t_{s-1}^+) \le t < \sigma(t_s^+),\ 1 < s \le n_0, \\[1ex] \prod_{v=s}^{1}(1+m_v)u_v + \sum_{l=2}^{s-1}\prod_{k=s}^{l+1}(1+m_k)u_k(1+m_l)h_{l-1} + (1+m_s)h_{s-1}, & t = \sigma(t_s^+),\ 1 < s \le n_0, \\[1ex] e_f(t,\sigma(t_{n_0}^+))\left\{\prod_{v=n_0}^{1}(1+m_v)u_v + \sum_{l=2}^{n_0-1}\prod_{k=n_0}^{l+1}(1+m_k)u_k(1+m_l)h_{l-1} + (1+m_{n_0})h_{n_0-1}\right\} + \int_{\sigma(t_{n_0}^+)}^{t} e_f(t,\sigma(\tau))h(\tau)\Delta\tau, & t > \sigma(t_{n_0}^+), \end{cases}$$

where

$$u_1 = \left(1 + \sum_{n=1}^{\infty} C_{1,n}\right)x_0 + \int_{t_0}^{t_1} e_f(t,\sigma(\tau))h(\tau)\Delta\tau, \qquad u_v = 1 + \sum_{p=1}^{\infty} C_{v,p},\quad 1 < v \le n_0, \qquad h_{l-1} = \int_{\sigma(t_{l-1}^+)}^{t_l} e_f(t,\sigma(\tau))h(\tau)\Delta\tau,\quad 2 \le l \le n_0.$$

Proof

For $t_0 \le t < \sigma(t_1^+)$, we have

$$x(t_1) = \left[1 + \sum_{r=1}^{\infty} c_{1,r}\right]x_0 + \int_{t_0}^{t_1} e_f(t, \sigma(\tau))h(\tau)\Delta\tau, \qquad c_{1,r} = \int_{t_0}^{t} f(\tau_r)\int_{t_0}^{\tau_r} f(\tau_{r-1})\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r.$$

Furthermore, $x(\sigma(t_1^+)) - x(t_1) = m_1 x(t_1)$, so $x(\sigma(t_1^+)) = (1 + m_1)x(t_1)$. For $1 < s \le n_0$ and $\sigma(t_{s-1}^+) \le t < \sigma(t_s^+)$,

$$x(t) = \left[1 + \sum_{r=1}^{\infty} c_{s,r}\right]x(\sigma(t_{s-1}^+)) + \int_{\sigma(t_{s-1}^+)}^{t} e_f(t, \sigma(\tau))h(\tau)\Delta\tau = \left[1 + \sum_{r=1}^{\infty} c_{s,r}\right](1 + m_{s-1})x(t_{s-1}) + \int_{\sigma(t_{s-1}^+)}^{t} e_f(t, \sigma(\tau))h(\tau)\Delta\tau,$$

where

$$c_{s,r} = \int_{\sigma(t_{s-1}^+)}^{t} f(\tau_r)\int_{\sigma(t_{s-1}^+)}^{\tau_r} f(\tau_{r-1})\cdots\int_{\sigma(t_{s-1}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r;$$

for $1 < s \le n_0$, we have

$$x(\sigma(t_s^+)) = (1 + m_s)x(t_s) = (1 + m_s)\left[\left(1 + \sum_{r=1}^{\infty} C_{s,r}\right)x(\sigma(t_{s-1}^+)) + \int_{\sigma(t_{s-1}^+)}^{t_s} e_f(t, \sigma(\tau))h(\tau)\Delta\tau\right],$$

where

$$C_{s,r} = \int_{\sigma(t_{s-1}^+)}^{t_s} f(\tau_r)\int_{\sigma(t_{s-1}^+)}^{\tau_r} f(\tau_{r-1})\cdots\int_{\sigma(t_{s-1}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r.$$

For $t > \sigma(t_{n_0}^+)$,

$$x(t) = \left[1 + \sum_{r=1}^{\infty}\int_{\sigma(t_{n_0}^+)}^{t} f(\tau_r)\cdots\int_{\sigma(t_{n_0}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_r\right]x(\sigma(t_{n_0}^+)) + \int_{\sigma(t_{n_0}^+)}^{t} e_f(t, \sigma(\tau))h(\tau)\Delta\tau.$$

By repeating the iteration process above, we can obtain

Ψ f , h = { ( 1 + r = 1 c 1 , r ) x 0 + t 0 t e f ( t , σ ( τ ) ) h ( τ ) Δ τ , t 0 t σ ( t 1 + ) , ( 1 + w = 1 c s , w ) { v = s 1 2 ( 1 + m v ) ( 1 + p = 1 C v , p ) × ( 1 + m 1 ) [ ( 1 + n = 1 C 1 , n ) x 0 + t 0 t 1 e f ( t , σ ( τ ) ) h ( τ ) Δ τ ] + l = 2 s 2 k = s 1 l + 1 ( 1 + m k ) ( 1 + d = 1 C k , d ) ( 1 + m l ) σ ( t l 1 + ) t l e f ( t , σ ( τ ) ) h ( τ ) Δ τ + ( 1 + m s 1 ) σ ( t s 2 + ) t s 1 e f ( t , σ ( τ ) ) h ( τ ) Δ τ } + σ ( t s 1 + ) t e f ( t , σ ( τ ) ) h ( τ ) Δ τ , σ ( t s 1 + ) t < σ ( t s + ) , 1 < s n 0 , v = s 2 ( 1 + m v ) ( 1 + p = 1 C v , p ) ( 1 + m 1 ) [ ( 1 + n = 1 C 1 , n ) x 0 + t 0 t 1 e f ( t , σ ( τ ) ) h ( τ ) Δ τ ] + l = 2 s 1 k = s l + 1 ( 1 + m k ) ( 1 + d = 1 C k , d ) ( 1 + m l ) σ ( t l = 1 + ) t l e f ( t , σ ( τ ) ) h ( τ ) Δ τ + ( 1 + m s ) σ ( t s 1 + ) t s e f ( t , σ ( τ ) ) h ( τ ) Δ τ , t = σ ( t s + ) , 1 < s n 0 , [ 1 + w = 1 σ ( t n 0 + ) t f ( τ w ) σ ( t n 0 + ) τ 2 f ( τ 1 ) Δ τ 1 Δ τ w ] × { v = n 0 2 ( 1 + m v ) ( 1 + p = 1 C v , p ) ( 1 + m 1 ) [ ( 1 + n = 1 C 1 , n ) x 0 + t 0 t 1 h ( τ ) e f ( t , σ ( τ ) ) Δ τ ] + l = 2 n 0 1 k = n 0 l + 1 ( 1 + m k ) ( 1 + d = 1 C k , d ) ( 1 + m l ) σ ( t l 1 + ) t l e f ( t , σ ( τ ) ) h ( τ ) Δ τ + ( 1 + m n 0 ) σ ( t n 0 1 + ) t n 0 e f ( t , σ ( τ ) ) h ( τ ) Δ τ } + σ ( t n 0 + ) t e f ( t , σ ( τ ) ) h ( τ ) Δ τ , t > t n 0 ,

where

$$C_{1,r} = \int_{t_0}^{t_1} f(\tau_r)\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_r, \qquad C_{s,r} = \int_{\sigma(t_{s-1}^+)}^{t_s} f(\tau_r)\cdots\int_{\sigma(t_{s-1}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_r, \qquad c_{1,r} = \int_{t_0}^{t} f(\tau_r)\cdots\int_{t_0}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_r, \qquad c_{s,r} = \int_{\sigma(t_{s-1}^+)}^{t} f(\tau_r)\cdots\int_{\sigma(t_{s-1}^+)}^{\tau_2} f(\tau_1)\Delta\tau_1\cdots\Delta\tau_r,$$

and 1 < sn 0. The proof is complete.□

Example 3.1

By Theorem 3.2, let $\mathbb{T} = \tilde{h}\mathbb{Z}$ with $\tilde{h} > 0$; for $1 < s \le n_0$, we can get

c s , r = { 0 , r > t σ ( t s 1 ) h ˜ , h ˜ r q ˜ = 1 r f ( t q ˜ h ˜ ) , r = t σ ( t s 1 ) h ˜ , h ˜ r q ˜ = 1 r 1 f ( t q ˜ h ˜ ) v = 0 t σ ( t s 1 ) h ˜ ( r 2 ) f ( σ ( t s 1 ) + v h ˜ ) , r < t σ ( t s 1 ) h ˜ ;

C s , r = { 0, r > t s σ ( t s 1 ) h ˜ , h ˜ r v = 1 r f ( t s v h ˜ ) , r = t s σ ( t s 1 ) h ˜ , h ˜ r q ˜ =1 r 1 f ( t s q ˜ h ˜ ) v =0 t s σ ( t s 1 ) h ˜ ( r 2) f ( σ ( t s 1 ) + v h ˜ ) , r < t s σ ( t s 1 ) h ˜ ;

c 1, r = { 0 , r > t t 0 h ˜ , h ˜ r q ˜ =1 r f ( t q ˜ h ˜ ) , r = t t 0 h ˜ , h ˜ r q ˜ =1 r 1 f ( t q ˜ h ˜ ) v =0 t t 0 h ˜ ( r 2) f ( t 0 + v h ˜ ) , r < t t 0 h ˜ ;

C 1 , r = { 0 , r > t 1 t 0 h ˜ , h ˜ r q ˜ = 1 n f ( t 1 q ˜ h ˜ ) , r = t 1 t 0 h ˜ , h ˜ r q ˜ = 1 n 1 f ( t 1 q ˜ h ˜ ) v = 0 t 1 t 0 h ( r 2 ) f ( t 0 + v h ˜ ) , r < t 1 t 0 h ˜ .

By Theorem 3.2, we can obtain the solution of (3.6) on the time scale $\tilde{h}\mathbb{Z}$.

Example 3.2

For (3.6), let $\mathbb{T} = q^{\mathbb{N}_0}$, where $q > 1$; we can get

c s , r = { 0, r > ln t ln σ ( t s 1 ) ln q , ( q 1) r t r q r ( r + 1) 2 v =1 r f ( t q v ) , r = ln t ln σ ( t s 1 ) ln q , ( q 1) r t r 1 q r ( r 1) 2 v =1 r 1 f ( t q v ) l =0 ln t ln σ ( t s 1 ) ln q ( r 2) f [ σ ( t s 1 ) q l ] σ ( t s 1 ) q l , n < ln t ln σ ( t s 1 ) ln q ;

C s , r = { 0, r > ln t s ln σ ( t s 1 ) ln q , ( q 1) r t s r q r ( r + 1) 2 v = 1 r f ( t s q v ) , r = ln t s ln σ ( t s 1 ) ln q , ( q 1) r t s r 1 q r ( r 1) 2 v = 1 r 1 f ( t s q v ) l =0 ln t s ln σ ( t s 1 ) ln q ( r 2) f [ σ ( t s 1 ) q l ] σ ( t s 1 ) q l , r < ln t s ln σ ( t s 1 ) ln q ;

c 1 , r = { 0 , r > ln t ln t 0 ln q , ( q 1 ) r t r q r ( r + 1 ) 2 v = 1 n f ( t q v ) , r = ln t ln t 0 ln q , ( q 1 ) r t r 1 q r ( r 1 ) 2 v = 1 r 1 f ( t q v ) l = 0 ln t ln t 0 ln q ( r 2 ) f [ t 0 q l ] t 0 q l , r < ln t ln t 0 ln q ;

C 1 , r = { 0 , r > ln t 1 ln t 0 ln q , ( q 1 ) r t 1 r q r ( r + 1 ) 2 v = 1 r f ( t 1 q v ) , r = ln t 1 ln t 0 ln q , ( q 1 ) r t 1 r 1 q r ( r 1 ) 2 v = 1 r 1 f ( t 1 q i ) k = 0 ln t 1 ln t 0 ln q ( r 2 ) f ( t 0 q l ) t 0 q l , r < ln t 1 ln t 0 ln q .

By Theorem 3.2, we can obtain the solution of (3.6) on the time scale $q^{\mathbb{N}_0}$.

Let $\mathbb{T} = \mathbb{R}$, $1 \le s \le n_0$, $r \ge 1$; it easily follows that

$$c_{s,r} = \int_{t_{s-1}}^{t} f(\tau_r)\int_{t_{s-1}}^{\tau_r} f(\tau_{r-1})\cdots\int_{t_{s-1}}^{\tau_2} f(\tau_1)\,d\tau_1\cdots d\tau_{r-1}d\tau_r, \qquad C_{s,r} = \int_{t_{s-1}}^{t_s} f(\tau_r)\int_{t_{s-1}}^{\tau_r} f(\tau_{r-1})\cdots\int_{t_{s-1}}^{\tau_2} f(\tau_1)\,d\tau_1\cdots d\tau_{r-1}d\tau_r.$$

Example 3.3

Let T = Z + , consider the following impulsive discrete dynamic equation:

(3.7) $$\begin{cases} x^{\Delta}(t) = f(t)x(t) + h(t), & t \ne 3n - 1, \\ \tilde{\Delta}x(t) = m_n x(t), & t = t_n = 3n - 1, \\ x(1) = 1, \end{cases}$$

where $i$, $j$, and $k$ are the quaternion imaginary units, $f(t) = 1 + ti$, $h(t) = 2 + tj$, $m_n = n - 1$, and $\tilde{\Delta}x(t) = x(\sigma(t)) - x(t)$.

Proof

For any $t \in \mathbb{T}$, $t$ is right-scattered; therefore,

$$x(2) = x(1) + \int_{1}^{2}\left[(1 + \tau i)x(1) + 2 + \tau j\right]\Delta\tau = 4 + i + j, \qquad x(t) = [2 + (t-1)i]\,x(t-1) + 2 + (t-1)j,\ t \ne 3n, \qquad x(t) = n\,x(t-1),\ t = 3n.$$

By Theorem 3.2, for n ≥ 1 we can obtain:

$$u_{n+1} = 1 + \int_{3n}^{3n+2}(1 + \tau i)\Delta\tau + \int_{3n}^{3n+2}(1 + \tau i)\int_{3n}^{\tau}(1 + \eta i)\Delta\eta\,\Delta\tau = [2 + (3n+1)i][2 + 3ni],$$
$$h_n = \int_{3n}^{3n+2} e_f(3n+2, \sigma(\tau))h(\tau)\Delta\tau = \int_{3n}^{3n+1} e_f(3n+2, \sigma(\tau))h(\tau)\Delta\tau + \int_{3n+1}^{3n+2} e_f(3n+2, \sigma(\tau))h(\tau)\Delta\tau = e_f(3n+2, 3n+1)h(3n) + h(3n+1) = [2 + (3n+1)i][2 + 3nj] + [2 + (3n+1)j].$$

Furthermore, the solution of (3.7) is given by

Ψ f , h = { 1 , t = 1 , 4 + i + j , t = 2 , 3 , ( 2 + 4 i ) x ( 3 ) + 2 + 4 j , t = 4 , ( 2 + 5 i ) x ( 4 ) + 2 + 5 j , t = 5 , n ! s = n 2 [ 2 + ( 3 s 2 ) i ] [ 2 + 3 ( s 1 ) i ] ( 4 + i + j ) + r = 2 n 1 n ! ( r 1 ) ! s = n r + 1 [ 2 + ( 3 s 2 ) i ] × [ 2 + 3 ( s 1 ) i ] { [ 2 + ( 3 r 2 ) j ] + [ 2 + ( 3 r 2 ) i ] [ 2 + ( 3 r 3 ) j ] } + n { [ 2 + ( 3 n 2 ) j ] + [ 2 + ( 3 n 2 ) i ] [ 2 + ( 3 n 3 ) j ] } , t = 3 n , n 2 [ 2 + 3 n i ] x ( 3 n ) + 2 + 3 n j , t = 3 n + 1 , [ 2 + ( 3 n + 1 ) i ] [ 2 + 3 n i ] x ( 3 n ) + [ 2 + ( 3 n + 1 ) i ] ( 2 + 3 n j ) + 2 + ( 3 n + 1 ) j , t = 3 n + 2 .
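As a quick numerical cross-check of this example (my own code, not part of the paper), the sketch below iterates (3.7) on $\mathbb{T} = \mathbb{Z}^+$, applying the $\Delta$-equation away from the impulse points $t_n = 3n - 1$ and the jump $x(3n) = (1 + m_n)x(3n - 1)$ at them; it reproduces $x(2) = 4 + i + j$ and $x(3) = x(2)$.

```python
# Sketch reproducing the first values of Example 3.3 on T = Z^+:
# x(t+1) = x(t) + (1 + t i) x(t) + 2 + t j away from t_n = 3n - 1, and
# x(3n) = n * x(3n - 1) at the impulses (since m_n = n - 1).

def qmul(p, q):
    p0, p1, p2, p3 = p; q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

x = {1: (1.0, 0.0, 0.0, 0.0)}                     # x(1) = 1
for t in range(1, 10):
    if (t + 1) % 3 == 0:                          # t = 3n - 1: impulse with m_n = n - 1
        n = (t + 1) // 3
        x[t + 1] = tuple(n * c for c in x[t])     # x(3n) = (1 + m_n) x(3n - 1)
    else:                                         # Delta-equation step
        f, h = (1.0, float(t), 0.0, 0.0), (2.0, 0.0, float(t), 0.0)
        x[t + 1] = tuple(a + b + c for a, b, c in zip(x[t], qmul(f, x[t]), h))

print(x[2])   # (4.0, 1.0, 1.0, 0.0), i.e. 4 + i + j as computed above
print(x[3])   # equals x(2) since m_1 = 0
```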

Remark 3.3

Consider the following quaternion ∇-dynamic equations:

$$\begin{cases} x^{\nabla}(t) = f(t)x(t) + h(t), & t \ne t_n, \\ \tilde{\nabla}x(t) = m_n x(t), & t = t_n, \end{cases}$$

where $f : \mathbb{T} \to \mathbb{H}$, $m_n \in \mathbb{H}$, $t_n \in \mathbb{T}$, $\tilde{\nabla}x(t) = x(t) - x(\rho(t))$, the initial value is $x(t_0) = x_0$, and $t_{n_0} < t_{n_0-1} < \cdots < t_1 < t_0$; thus, the solution $\Psi_{f,h}$ is given by

Ψ f , h { ( 1 + r = 1 c 1 , r ) x 0 + t 0 t e f ( t , ρ ( τ ) ) h ( τ ) τ , ρ ( t 1 ) < t t 0 , ( 1 + w = 1 c s , w ) { v = s 1 1 ( 1 m v ) u v + l = 2 s 2 k = s 1 l + 1 ( 1 m k + 1 ) u k ( 1 m l ) h l 1 + ( 1 m s 1 ) h s 2 } + ρ ( t s 1 ) t e f ( t , ρ ( τ ) ) h ( τ ) τ , ρ ( t s ) < t ρ ( t s 1 ) , 1 < s n 0 , v = 1 s ( 1 m v ) u v + l = 2 s 1 k = s l + 1 ( 1 m k ) u k ( 1 m l ) h l 1 + ( 1 m s ) h s 1 , t = ρ ( t s ) , 1 < s n 0 , [ 1 + w = 1 ρ ( t n 0 ) t f ( τ w ) ρ ( t n 0 ) τ 2 f ( τ 1 ) τ 1 τ w ] { v = n 0 1 ( 1 m v ) u v + l = 2 n 0 1 k = n 0 l + 1 ( 1 m k ) u k × ( 1 m l ) h l 1 + ( 1 m n 0 ) h n 0 1 } + ρ ( t n 0 ) t e f ( t , ρ ( τ ) ) h ( τ ) τ , t < ρ ( t n 0 ) ,

where 1 < sn 0,

C 1 , r = t 0 t 1 f ( τ r ) t 0 τ 2 f ( τ 1 ) τ 1 τ r , C s , r = ρ ( t s 1 ) t s f ( τ r ) ρ ( t s 1 ) τ 2 f ( τ 1 ) τ 1 τ r , c 1 , r = t 0 t f ( τ r ) t 0 τ 2 f ( τ 1 ) τ 1 τ r , c s , r = ρ ( t s 1 ) t f ( τ r ) ρ ( t s 1 ) τ 2 f ( τ 1 ) τ 1 τ r , u 1 = ( 1 + n = 1 , C 1 , n ) x 0 + t 0 t 1 e f ( t , ρ ( τ ) ) h ( τ ) τ , u v = 1 + p = 1 C v , p , h l 1 = ρ ( t l 1 + ) t l e f ( t , ρ ( τ ) ) h ( τ ) τ .

4 Quaternion matrix impulsive dynamic equation

Now, consider the impulsive dynamic matrix equation as follows:

(4.1) $$\begin{cases} X^{\Delta}(t) = A(t)X(t) + F(t), & t \ne t_h, \\ \tilde{\Delta}X(t) = B_h X(t), & t = t_h, \end{cases}$$

where $A, F : \mathbb{T} \to \mathbb{H}^{n \times n}$, $B_h \in \mathbb{H}^{n \times n}$, $t_h \in \mathbb{T}$, $\tilde{\Delta}X(t) = X(\sigma(t^+)) - X(t)$, $h \in \mathbb{Z}^+$. The initial value is

$$X(t_0) = X_0 \in \mathbb{H}^{n \times n}.$$

Remark 4.1

In (4.1), if $t_h$ is a right-dense point, then $\tilde{\Delta}X(t) = X(t^+) - X(t)$; if $t_h$ is a right-scattered point, then $\tilde{\Delta}X(t) = X(\sigma(t)) - X(t)$.

Lemma 4.1

Let $A, B : \mathbb{T} \to \mathbb{H}^{n \times n}$, let $Q(t) = \int_{t_0}^{t} A(\tau)\int_{s}^{\tau} B(\eta)\Delta\eta\,\Delta\tau$, and let $t_0 \in \mathbb{T}$ be fixed. Then $Q(t)$ is differentiable at $t$ with

$$Q^{\Delta}(t) = A(t)\int_{s}^{t} B(\eta)\Delta\eta.$$

Moreover, if $P(t) = \int_{t_0}^{t} A(\tau_n)\int_{t_0}^{\tau_n} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n$, then

(4.2) $$P^{\Delta}(t) = A(t)\int_{t_0}^{t} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}.$$

Proof

By Lemma 3.1, we can obtain

$$\left[\sum_{s=1}^{n}\int_{t_0}^{t} a_{rs}(\tau_2)\int_{t_0}^{\tau_2} a_{sl}(\tau_1)\Delta\tau_1\Delta\tau_2\right]^{\Delta} = \sum_{s=1}^{n} a_{rs}(t)\int_{t_0}^{t} a_{sl}(\tau_1)\Delta\tau_1.$$

For a right-scattered point $t \in \mathbb{T}$, the $\Delta$-derivative of $Q(t)$ can be calculated as follows:

$$Q^{\Delta}(t) = \frac{\int_{t_0}^{\sigma(t)} A(\tau)\int_{s}^{\tau} B(\eta)\Delta\eta\,\Delta\tau - \int_{t_0}^{t} A(\tau)\int_{s}^{\tau} B(\eta)\Delta\eta\,\Delta\tau}{\mu(t)} = \frac{\left[\int_{t_0}^{\sigma(t)} A(\tau)\Delta\tau - \int_{t_0}^{t} A(\tau)\Delta\tau\right]\int_{s}^{\tau} B(\eta)\Delta\eta}{\mu(t)} = \frac{\mu(t)A(t)\int_{s}^{t} B(\eta)\Delta\eta}{\mu(t)} = A(t)\int_{s}^{t} B(\eta)\Delta\eta.$$

For a right-dense point $t \in \mathbb{T}$, the derivative of $Q(t)$ is obvious. Similar to the above calculation, by taking $P(t)$ in place of $Q(t)$, we can get (4.2). The proof is complete.□

Now, we consider the homogeneous linear dynamic equation as follows:

(4.3) $$\begin{cases} X^{\Delta}(t) = A(t)X(t), \\ X(t_0) = X_0, \end{cases}$$

where $A \in C_{\mathrm{rd}}$, $X_0 \in \mathbb{H}^{n \times n}$, and $t_0 \in \mathbb{T}$.

Lemma 4.2

For (4.3), if $A(t)$ is uniformly bounded on $\mathbb{T}$, i.e., there exists a constant $M_A > 0$ such that $\|A(t)\| \le M_A$ for any $t \in \mathbb{T}$, then the solution of (4.3) is given by

$$X(t) = \left(I + \sum_{n=1}^{\infty} g_n(t)\right)X_0,$$

where $g_n(t) = \int_{t_0}^{t} A(\tau_n)\int_{t_0}^{\tau_n} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n$.

Proof

Let $h$ be a constant with $h > 0$. For $t_0 \le t < t_0 + h$, we have

$$\|g_n(t)\| \le \int_{t_0}^{t} M_A\int_{t_0}^{t_n} M_A\cdots\int_{t_0}^{t_2} M_A\,\Delta t_1\cdots\Delta t_{n-1}\Delta t_n = M_A^n h_n(t, t_0) \le \frac{M_A^n (t - t_0)^n}{n!} \le \frac{M_A^n h^n}{n!}.$$

By the Weierstrass M-test, the series $\sum_{n=1}^{\infty}\frac{M_A^n h^n}{n!}$ is convergent (say it converges to some $a \in \mathbb{R}^+$), which implies that the series $\sum_{n=1}^{\infty} g_n(t)$ is uniformly convergent on $\mathbb{T}$.

Next, we show that $X \in C_{\mathrm{rd}}$. For a right-dense point $t_r \in \mathbb{T}$ and $\varepsilon > 0$, there exists $\delta(\varepsilon) = \varepsilon/(aM_A) > 0$ such that for $|t - t_r| < \delta(\varepsilon)$ we have

$$\|X(t) - X(t_r)\| = \left\|\sum_{n=1}^{\infty}\int_{t_r}^{t} A(t_n)\int_{t_0}^{t_n} A(t_{n-1})\cdots\int_{t_0}^{t_2} A(t_1)\Delta t_1\cdots\Delta t_{n-1}\Delta t_n\,X_0\right\| \le \sum_{n=1}^{\infty}\frac{M_A^n|t - t_r|(t - t_0)^{n-1}}{(n-1)!} \le \delta(\varepsilon)\sum_{n=1}^{\infty}\frac{M_A^n (t - t_0)^{n-1}}{(n-1)!} \le \frac{\varepsilon}{aM_A}M_A\sum_{n=1}^{\infty}\frac{M_A^{n-1}h^{n-1}}{(n-1)!} < \varepsilon.$$

Thus, $X(t)$ is continuous at right-dense points. Moreover, since $A \in C_{\mathrm{rd}}$, it follows that $X(t)$ has a finite left-sided limit at every left-dense point. Therefore, $X \in C_{\mathrm{rd}}$.

On the other hand, by Lemma 4.1, we get $g_n^{\Delta}(t) = A(t)g_{n-1}(t)$, and hence $X(t) = \left(I + \sum_{n=1}^{\infty} g_n(t)\right)X_0$ is $\Delta$-differentiable with

$$X^{\Delta}(t) = \sum_{n=1}^{\infty} A(t)g_{n-1}(t)X_0,$$

where $g_0(t) = I$; hence $X^{\Delta}(t) = A(t)X(t)$. Therefore, the function series $X(t) = \left(I + \sum_{n=1}^{\infty} g_n(t)\right)X_0$ is a solution of (4.3); according to the continuation theorem of solutions for dynamic equations, $X(t)$ is a solution of (4.3) on $\mathbb{T}$. Furthermore, assume that $X_1$ and $X_2$ are two solutions of (4.3). Then

$$\|X_1(t) - X_2(t)\| \le \int_{t_0}^{t}\|A(\tau)\|\,\|X_1(\tau) - X_2(\tau)\|\Delta\tau \le M_A\int_{t_0}^{t}\|X_1(\tau) - X_2(\tau)\|\Delta\tau.$$

By Corollary 6.7 from [9] (the Bellman inequality on time scales), we get $\|X_1(t) - X_2(t)\| = 0$. Therefore, the solution of (4.3) is unique. The proof is complete.□

Theorem 4.1

For (4.3), if for $X_0 = I$ there exists a unique matrix solution of (4.3), then the generalized exponential function $e_A(t, t_0)$ can be given by

$$e_A(t, t_0) = I + \sum_{n=1}^{\infty}\int_{t_0}^{t} A(\tau_n)\int_{t_0}^{\tau_n} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n.$$

Proof

Let

$$g_n = \int_{t_0}^{t} A(\tau_n)\int_{t_0}^{\tau_n} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n$$

for $n \ge 1$. By Lemma 4.1, we can obtain

$$g_n^{\Delta} = A(t)\int_{t_0}^{t} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1} = A(t)g_{n-1}.$$

By Lemma 4.2, the series $I + \sum_{n=1}^{\infty} g_n$ is the unique solution of (4.3) with $X(t_0) = I$. Therefore, by Definition 2.1 we get

$$e_A(t, t_0) = I + \sum_{n=1}^{\infty}\int_{t_0}^{t} A(\tau_n)\int_{t_0}^{\tau_n} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n.$$

This completes the proof.□
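On a purely right-scattered time scale the series of Theorem 4.1 again collapses to an ordered product of the factors $I + \mu(s)A(s)$. A convenient way to carry out the quaternion matrix products numerically is through the complex adjoint of Definition 2.3, since $G(AB) = G(A)G(B)$; the sketch below (my own code, illustrative only) uses this representation on $\mathbb{T} = \mathbb{Z}$.

```python
# Sketch: the matrix exponential e_A(t, t0) of Theorem 4.1 on T = Z, computed
# through complex adjoints G(A) = [[A1, A2], [-conj(A2), conj(A1)]], which turn
# quaternion-matrix products into ordinary complex-matrix products.
import numpy as np

def adjoint(A1, A2):
    return np.block([[A1, A2], [-np.conj(A2), np.conj(A1)]])

def exp_A_adjoint(A_of, n, t0, t):
    """Adjoint of e_A(t, t0) on T = Z: ordered product of I + mu(s) G(A(s))."""
    E = np.eye(2 * n, dtype=complex)
    for s in range(t0, t):                                        # mu(s) = 1 on T = Z
        A1, A2 = A_of(s)
        E = (np.eye(2 * n, dtype=complex) + adjoint(A1, A2)) @ E  # new factor on the left
    return E

# A toy 1x1 example: A(t) = t*i + j, i.e. A1 = [[t*i]], A2 = [[1]].
A_of = lambda s: (np.array([[s * 1j]]), np.array([[1.0 + 0j]]))
E = exp_A_adjoint(A_of, 1, 0, 4)
print(E)   # adjoint of e_A(4, 0); its first row holds the blocks phi1 and phi2
```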

Now, we consider the following nonhomogeneous linear dynamic equations:

(4.4) $$\begin{cases} X^{\Delta}(t) = A(t)X(t) + F(t), \\ X(t_0) = X_0, \end{cases}$$

where $A, F \in C_{\mathrm{rd}}$, $X_0 \in \mathbb{H}^{n \times n}$.

Lemma 4.3

The solution of (4.4) can be given by

$$X(t) = e_A(t, t_0)X_0 + \int_{t_0}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau.$$

Moreover, X(t) can be given as

$$X(t) = \left[I + \sum_{n=1}^{\infty}\int_{t_0}^{t} A(\tau_n)\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_n\right]X_0 + \int_{t_0}^{t}\left[I + \sum_{n=1}^{\infty}\int_{\sigma(\tau)}^{t} A(\tau_n)\cdots\int_{\sigma(\tau)}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_n\right]F(\tau)\Delta\tau.$$

Proof

For $t = t_0$, it is obvious that $X(t_0) = e_A(t_0, t_0)X_0 + \int_{t_0}^{t_0} e_A(t_0, \sigma(\tau))F(\tau)\Delta\tau = X_0$. Since $A, F \in C_{\mathrm{rd}}$, by Lemma 4.2,

$$X(t) = \left[I + \sum_{n=1}^{\infty}\int_{t_0}^{t} A(\tau_n)\int_{t_0}^{\tau_n} A(\tau_{n-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{n-1}\Delta\tau_n\right]X_0 + \int_{t_0}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau$$

is uniformly convergent on $\mathbb{T}$ and $X \in C_{\mathrm{rd}}$. Moreover,

$$\begin{aligned} \mu(t)X^{\Delta}(t) &= e_A(\sigma(t), t_0)X_0 + \int_{t_0}^{\sigma(t)} e_A(\sigma(t), \sigma(\tau))F(\tau)\Delta\tau - e_A(t, t_0)X_0 - \int_{t_0}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau \\ &= e_A(\sigma(t), t_0)X_0 - e_A(t, t_0)X_0 + \int_{t_0}^{\sigma(t)} e_A(\sigma(t), \sigma(\tau))F(\tau)\Delta\tau - \int_{t_0}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau \\ &= \mu(t)A(t)e_A(t, t_0)X_0 + \int_{t}^{\sigma(t)} e_A(\sigma(t), \sigma(\tau))F(\tau)\Delta\tau + \int_{t_0}^{t}\left[e_A(\sigma(t), \sigma(\tau)) - e_A(t, \sigma(\tau))\right]F(\tau)\Delta\tau \\ &= \mu(t)A(t)e_A(t, t_0)X_0 + \mu(t)e_A(\sigma(t), \sigma(t))F(t) + \int_{t_0}^{t}\mu(t)A(t)e_A(t, \sigma(\tau))F(\tau)\Delta\tau \\ &= [A(t)X(t) + F(t)]\mu(t). \end{aligned}$$

Furthermore, by Lemma 4.1, we can obtain the desired results. The proof is complete.□

Theorem 4.2

For (4.1), if every compact interval $[a, b]_{\mathbb{T}}$ contains only a finite number of points $t_h$ and the matrices $I + B_h$ are nonsingular for all $h \in \mathbb{Z}^+$, then the solution of (4.1) can be given by

Ψ A , F = { ( I + p = 1 g 0 , p ) X 0 + f 0 , t 0 t < σ ( t 1 + ) , r = s 1 0 ( I + B r + 1 ) U r + v = 1 s 2 l = s 1 v + 1 ( I + B l + 1 ) U l ( I + B v + 1 ) f ( v ) + ( I + B s ) f ( s 1 ) , t = σ ( t s ) , ( I + p = 1 g s , p ) [ r = s 1 0 ( I + B r + 1 ) U r + v = 1 s 2 l = s 1 v + 1 ( I + B l + 1 ) U l ( I + B v + 1 ) f ( v ) + ( I + B s ) f ( s 1 ) ] + f s , σ ( t s + ) t < σ ( t s + 1 + ) .

Proof

Let

$$\begin{aligned} g_{s,r} &= \int_{\sigma(t_s)}^{t} A(\tau_r)\int_{\sigma(t_s)}^{\tau_r} A(\tau_{r-1})\cdots\int_{\sigma(t_s)}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r, & G_{s,r} &= \int_{\sigma(t_s)}^{t_{s+1}} A(\tau_r)\int_{\sigma(t_s)}^{\tau_r} A(\tau_{r-1})\cdots\int_{\sigma(t_s)}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r, \\ g_{0,r} &= \int_{t_0}^{t} A(\tau_r)\int_{t_0}^{\tau_r} A(\tau_{r-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r, & G_{0,r} &= \int_{t_0}^{t_1} A(\tau_r)\int_{t_0}^{\tau_r} A(\tau_{r-1})\cdots\int_{t_0}^{\tau_2} A(\tau_1)\Delta\tau_1\cdots\Delta\tau_{r-1}\Delta\tau_r, \end{aligned}$$

where r, s ≥ 1. By Lemmas 4.1, 4.2, and 4.3, we can obtain

$$\begin{cases} X(t) = \left(I + \sum_{q=1}^{\infty} g_{0,q}\right)X_0 + \int_{t_0}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau, & t_0 \le t < \sigma(t_1^+), \\[1ex] X(t) = (I + B_s)X(t_s), & t = \sigma(t_s^+), \\[1ex] X(t) = \left(I + \sum_{v=1}^{\infty} g_{s-1,v}\right)X(\sigma(t_{s-1})) + \int_{\sigma(t_{s-1})}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau, & \sigma(t_{s-1}^+) \le t < \sigma(t_s^+). \end{cases}$$

Let

$$f_s = \int_{\sigma(t_s^+)}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau, \qquad f^{(s)} = \int_{\sigma(t_s^+)}^{t_{s+1}} e_A(t, \sigma(\tau))F(\tau)\Delta\tau,\quad s \ge 1, \qquad f_0 = \int_{t_0}^{t} e_A(t, \sigma(\tau))F(\tau)\Delta\tau, \qquad f^{(0)} = \int_{t_0}^{t_1} e_A(t, \sigma(\tau))F(\tau)\Delta\tau, \qquad U_s = I + \sum_{p=1}^{\infty} G_{s,p},\quad s \ge 1, \qquad U_0 = \left(I + \sum_{p=1}^{\infty} G_{0,p}\right)X_0 + f^{(0)}.$$

Thus, the solution of (4.1) can be given by

Ψ A , F = { ( I + p = 1 g 0 , p ) X 0 + f 0 , t 0 t < σ ( t 1 + ) , r = s 1 0 ( I + B r + 1 ) U r + v = 1 s 2 l = s 1 v + 1 ( I + B l + 1 ) U l ( I + B v + 1 ) f ( v ) + ( I + B s ) f ( s 1 ) , t = σ ( t s ) , ( I + p = 1 g s , p ) [ r = s 1 0 ( I + B r + 1 ) U r + v = 1 s 2 l = s 1 v + 1 ( I + B l + 1 ) U l ( I + B v + 1 ) f ( v ) + ( I + B s ) f ( s 1 ) ] + f s , σ ( t s + ) t < σ ( t s + 1 + ) .

The proof is complete.□

Remark 4.2

The Cauchy matrix of (4.1) is as follows:

W ( t , z ) = { U ( t , z ) + F F ( t , z ) , t , z ( t s 1 , t s ] , U ( t , σ ( t s + ) ) ( I + B s ) [ U ( t s , z ) + F F ( t s , z ) ] + F F ( t , σ ( t s + ) ) , t s 1 < z t s < t t s + 1 , U ( t , t s + ) ( I + B s ) 1 [ U ( σ ( t s + ) , z ) + F F ( σ ( t s + ) , z ) ] + F F ( t , t s ) , t s 1 < t t s < z t s + 1 , U ( t , σ ( t s + ) ) { l = s s 1 + 1 ( I + B l ) U ( t l , σ ( t l 1 + ) ) ( I + B s 1 ) [ U ( t s 1 , z ) + F F ( t s 1 , z ) ] + v = s 1 + 1 s 1 l = s v + 1 ( I + B l ) U ( t l , σ ( t l 1 + ) ) ( I + B v ) F F ( t v , σ ( t v 1 ) ) + ( I + B s ) F F ( t s , σ ( t s 1 ) ) } + F F ( t , σ ( t s + ) ) , t s 1 1 < z t s 1 < t s < t t s + 1 , U ( t , t s 1 ) { l = s 1 s 1 ( I + B v ) 1 U ( σ ( t v + ) , t v + 1 ) ( I + B s ) 1 [ U ( σ ( t s + ) , z ) + F F ( σ ( t s + ) , z ) ] + v = s 1 + 1 s 1 l = s 1 v ( I + B l ) 1 U ( σ ( t l + ) , t l + 1 ) ( I + B v ) 1 F F ( σ ( t v + ) , t v + 1 ) + ( I + B s 1 ) F F ( σ ( t s 1 + ) , t s 1 + 1 ) } + F F ( t , t s 1 ) , t s 1 1 < t t s 1 < t s < z t s + 1 ,

where U ( t , z ) = [ I + n = 1 z t A ( τ n ) z τ 2 A ( τ 1 ) Δ τ 1 Δ τ n ] , F F ( t , z ) = z t e A ( t , σ ( τ ) ) F ( τ ) Δ τ , that is

W ( t , t ) = I , W ( σ ( t s + ) , z ) = [ I + B s ] W ( t s , s ) , W ( z , σ ( t s + ) ) = W ( z , t s ) [ I + B s ] 1 , W Δ ( t , z ) = A ( t ) X ( t ) X ( σ ( t s + ) ) + F ( t ) , t [ σ ( t s + ) , t s + 1 ] .

Remark 4.3

The system

(4.5) $$\begin{cases} X^{\Delta}(t) = A(t)X(t) + F(t), & t \ne t_h, \\ \tilde{\Delta}X(t) = X(t)B_h, & t = t_h, \end{cases}$$

where $A, F : \mathbb{T} \to \mathbb{H}^{n \times n}$, $B_h \in \mathbb{H}^{n \times n}$, $t_h \in \mathbb{T}$, $\tilde{\Delta}X(t) = X(\sigma(t^+)) - X(t)$, $h \in \mathbb{Z}^+$, and the initial value is $X(t_0) = X_0$. For $t > t_0$, the solution of (4.5) can be given by

Ψ A , F = { ( I + p = 1 g 0 , p ) X 0 + f 0 , t 0 t < σ ( t 1 + ) , l = 1 s U s l v = 1 s ( I + B v ) + p = 0 s 2 w = s 1 p + 1 U w f ( p ) r = p + 1 s ( I + B r ) + f ( s 1 ) ( I + B s ) , t = σ ( t s ) , ( 1 + n = 1 g s , n ) [ l = 1 s U s l v = 1 s ( I + B v ) + p = 0 s 2 w = s 1 p + 1 U w f ( p ) r = p + 1 s ( I + B r ) + f ( s 1 ) ( I + B s ) ] + σ ( t s ) t F ( τ ) Δ τ , σ ( t s + ) t < σ ( t s + 1 ) ,

where $g_{s,n}$, $U_s$, $f_0$, and $f^{(r)}$ are defined as in Theorem 4.2.

Example 4.1

For the system (4.1), when T = Z h ˜ , h ˜ > 0 , for s ≥ 1, we obtain

g s , r = { 0 , r > t σ ( t s ) h ˜ , h ˜ r q ˜ =1 r A ( t q ˜ h ˜ ) , r = t σ ( t s ) h ˜ , h ˜ r q ˜ =1 r 1 A ( t q ˜ h ˜ ) v =0 t σ ( t s ) h ˜ ( r 2) A ( σ ( t s ) + v h ˜ ) , r < t σ ( t s ) h ˜ ;

G s , r = { 0 , r > t s + 1 σ ( t s ) h ˜ , h ˜ r v = 1 r A ( t s + 1 v h ˜ ) , r = t s + 1 σ ( t s ) h ˜ , h ˜ r q ˜ =1 r 1 A ( t s + 1 q ˜ h ˜ ) v =0 t s + 1 σ ( t s ) h ˜ ( r 2) A ( σ ( t s ) + v h ˜ ) , r < t s + 1 σ ( t s ) h ˜ ;

g 0, r = { 0, r > t t 0 h ˜ , h ˜ r q ˜ =1 r A ( t q ˜ h ˜ ) , r = t t 0 h ˜ , h ˜ r q ˜ =1 r = 1 A ( t q ˜ h ˜ ) v =0 t t 0 h ˜ ( r 2) A ( t 0 + v h ˜ ) , r < t t 0 h ˜ ;

G 0 , r = { 0 , r > t 1 t 0 h ˜ , h ˜ r q ˜ = 1 n A ( t 1 q ˜ h ˜ ) , r = t 1 t 0 h ˜ , h ˜ r q ˜ = 1 n 1 A ( t 1 q ˜ h ˜ ) v = 0 t 1 t 0 h ˜ ( r 2 ) A ( t 0 + v h ˜ ) , r < t 1 t 0 h ˜ .

By Theorem 4.2, we can obtain the solution of (4.1) on the time scale Z h ˜ .

Example 4.2

For the system (4.1), when T = q 0 , where q > 1, for r , s we can get

g s , r = { 0 , r > ln t ln σ ( t s ) ln q , ( q 1 ) r t r q r ( r + 1 ) 2 v = 1 r A ( t q v ) , r = ln t ln σ ( t s ) ln q , ( q 1 ) r t r 1 q r ( r 1 ) 2 v = 1 r 1 A ( t q v ) l = 0 ln t ln σ ( t s ) ln q ( r 2 ) A ( σ ( t s ) q l ) σ ( t s ) q l , n < ln t ln σ ( t s ) ln q ;

G s , r = { 0 , r > ln t s + 1 ln σ ( t s ) ln q , ( q 1 ) r t s + 1 r q r ( r + 1 ) 2 v = 1 r A ( t s + 1 q v ) , r = ln t s + 1 ln σ ( t s ) ln q , ( q 1 ) r t s + 1 r 1 q r ( r 1 ) 2 v = 1 r 1 A ( t m + 1 q v ) l = 0 ln t s + 1 ln σ ( t s ) ln q ( r 2 ) A ( σ ( t s ) q l ) σ ( t s ) q l , r < ln t s + 1 ln σ ( t s ) ln q ;

g 0 , r = { 0 , r > ln t ln t 0 ln q , ( q 1 ) r t r q r ( r + 1 ) 2 v = 1 n A ( t q v ) , r = ln t ln t 0 ln q , ( q 1 ) r t r 1 q r ( r 1 ) 2 v = 1 r 1 A ( t q v ) 0 ln t ln t 0 ln q ( r 2 ) A ( t 0 q l ) t 0 q l , r < ln t ln t 0 ln q ;

G 0 , r = { 0 , r > ln t 1 ln t 0 ln q , ( q 1 ) r t 1 r q r ( r + 1 ) 2 v = 1 r A ( t 1 q v ) , r = ln t 1 ln t 0 ln q , ( q 1 ) r t 1 r 1 q r ( r 1 ) 2 v = 1 r 1 A ( t 1 q i ) k = 0 ln t 1 ln t 0 ln q ( r 2 ) A ( t 0 q l ) t 0 q l , r < ln t 1 ln t 0 ln q .

By Theorem 4.2, we can obtain the solution of (4.1) on the time scale q 0 .

Let $\mathbb{T} = \mathbb{R}$, $s \ge 0$, $r \ge 1$; we can obtain

$$g_{s,r} = \int_{t_s}^{t} A(\tau_r)\int_{t_s}^{\tau_r} A(\tau_{r-1})\cdots\int_{t_s}^{\tau_2} A(\tau_1)\,d\tau_1\cdots d\tau_{r-1}d\tau_r, \qquad G_{s,r} = \int_{t_s}^{t_{s+1}} A(\tau_r)\int_{t_s}^{\tau_r} A(\tau_{r-1})\cdots\int_{t_s}^{\tau_2} A(\tau_1)\,d\tau_1\cdots d\tau_{r-1}d\tau_r.$$

The following theorem can be obtained immediately by Remark 2.3 and Theorem 4.2.

Theorem 4.3

If $\Psi_{A,F} \in D$, then the Liouville formula of the quaternion impulsive dynamic equation (4.1) can be given by $\operatorname{ddet}\Psi_{A,F}$.

Proof

By Definition 2.4, Remark 2.3, and Theorem 4.2, the result is obvious.□

Theorem 4.4

Let $X(\cdot) = [x_{rh}(\cdot)]_{n \times n}$ with $x_{rh} : \mathbb{T} \to \mathbb{H}$, $1 \le r, h \le n$, and let $X(t)$ be a solution of (4.1). If $A(t)$, $F(t)$, $B_n$, and $X_0$ are diagonal matrices, then the Liouville formula of (4.1) can be given as

$$\operatorname{ddet} X(t) = \prod_{v=1}^{n}\Phi(x_{vv}(t)).$$

Proof

For any $x : \mathbb{T} \to \mathbb{H}$, we have $\Phi(x(\cdot)) \in \mathbb{R}$. On the other hand, if $A(t)$, $F(t)$, $B_n$, and $X_0$ are diagonal matrices, then the solution $X(t)$ is a diagonal matrix. Hence, we obtain

$$\Phi(X(t)) = \phi_1(X(t))\,\overline{\phi_1(X(t))} + \phi_2(X(t))\,\overline{\phi_2(X(t))} = [\Phi(x_{rh}(t))]_{n \times n},$$

where $1 \le r, h \le n$; for $r \ne h$, $\Phi(x_{rh}(t)) = 0$, and for $r = h$, $\Phi(x_{rh}(t)) \in \mathbb{R}$. Therefore,

$$\operatorname{ddet} X(t) = \prod_{v=1}^{n}\Phi(x_{vv}(t)).$$

This completes the proof.□
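For a diagonal quaternion matrix, the induced matrix $\Phi(X)$ is real diagonal with entries $\Phi(x_{vv}) = |x_{vv}|^2$, so its determinant is the product of those entries; the short numerical check below (my own code, illustrative only) confirms this for a randomly chosen diagonal matrix.

```python
# Sketch: for diagonal X written via complex blocks phi1, phi2, Phi(X) is real
# diagonal with entries |x_vv|^2, so ddet X = det Phi(X) = prod_v |x_vv|^2.
import numpy as np

rng = np.random.default_rng(0)
n = 3
phi1 = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
phi2 = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))

Phi = phi1 @ np.conj(phi1) + phi2 @ np.conj(phi2)
assert np.allclose(Phi, np.diag(np.abs(phi1.diagonal())**2 + np.abs(phi2.diagonal())**2))
print(np.linalg.det(Phi.real), np.prod(Phi.real.diagonal()))   # the two values agree
```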

Example 4.3

Let T = , consider the Liouville formula of the following dynamic equations:

(4.6) { X Δ ( t ) = A ( t ) x ( t ) + F ( t ) , t 3 n , Δ ˜ X ( t ) = B n ( t ) , t = 3 n , X ( 1 ) = X 0 ,

where

A ( t ) = F ( t ) = [ t + 1 + ( 1 ) t 2 + j ( 1 ) t 0 0 t + 1 + ( 1 ) t 2 + i ( 1 ) t ] ,

B n = [ 1 + t 3 1 0 1 + t 3 ] , X 0 = [ 2 0 0 1 ] .

By Theorem 4.2, for s ≥ 0, we can get

G s , 1 = A ( 3 s + 1 ) + A ( 3 s + 2 ) , G s , 2 = A ( 3 s + 2 ) A ( 3 s + 1 ) , G s , r = 0 , r 3 .

Hence, we have

U s = I + A ( 3 s + 1 ) + A ( 3 s + 2 ) + A ( 3 s + 2 ) A ( 3 s + 1 ) = [ I + A ( 3 s + 2 ) ] [ I + A ( 3 s + 1 ) ] 2 × 2 .

Moreover,

f ( s ) = 3 s + 1 3 s + 3 e A ( 3 s + 3 , σ ( τ ) ) Δ τ = 3 s + 1 3 s + 2 e A ( 3 s + 3 , σ ( τ ) ) Δ τ + 3 s + 2 3 s + 3 e A ( 3 s + 3 , σ ( τ ) ) Δ τ = e A ( 3 s + 3 , 3 s + 2 ) F ( 3 s + 1 ) + F ( 3 s + 2 ) = [ I + A ( 3 s + 2 ) ] F ( 3 s + 1 ) + F ( 3 s + 2 ) = [ I + A ( 3 s + 2 ) ] [ I + A ( 3 s + 1 ) ] I = U s I .

For U s 2 × 2 , we have f ( s ) 2 × 2 . By Theorem 4.2, when t = σ ( t s + ) , we can obtain

X ( t ) = r = s 1 0 ( I + B r + 1 ) U r + v = 1 s 2 l = s 1 v + 1 ( I + B l + 1 ) U l ( I + B v + 1 ) f ( v ) + ( I + B s ) f ( s 1 ) .

Thus, $X \in \mathbb{C}^{2\times 2}$; hence, for $t = t_A$ with $t_A \in \{1, 3n, 3n+1 : n \in \mathbb{N}\}$, the Liouville formula of (4.6) can be given by $\operatorname{ddet} X(t)$.

Example 4.4

Let T = q , for q > 1, consider the Liouville formula of the impulsive dynamic equations as follows:

(4.7) { X Δ ( t ) = A ( t ) X ( t ) + F ( t ) , t t n = q 3 n 1 , Δ ˜ X ( t ) = B n ( t ) , t = t n = q 3 n 1 , X (2)= X 0 ,

where

A ( t ) = F ( t ) = 1 t [ 1 + ( 1 ) ln t ln q t q 1 ( 1 ) ln t ln q 2 j 0 0 2 + ( 1 ) ln t ln q t q 1 ( 1 ) ln t ln q 2 i ] ,

B n = [ t 3 t t 2 t + 3 ] , X 0 = [ 3 0 0 4 ] .

By the definition of the function matrix A(t), we can obtain

A ( q 3 n ) = q 3 n [ 1 + ( 1 ) 3 n q 3 n q 1 ( 1 ) 3 n 2 j 0 0 2 + ( 1 ) 3 n q 3 n q 1 ( 1 ) 3 n 2 i ] ,

A ( q 3 n + 1 ) = q 3 n 1 [ 1 + ( 1 ) 3 n + 1 q 3 n + 1 q 1 ( 1 ) 3 n + 1 2 j 0 0 2 + ( 1 ) 3 n + 1 q 3 n + 1 q 1 ( 1 ) 3 n + 1 2 i ] .

Therefore, we can get q 3 n A ( q 3 n ) + q 3 n + 1 A ( q 3 n + 1 ) 2 × 2 . Hence, for any n , we have

U n = I + q 3 n q 3 n + 2 A ( τ ) Δ τ + q 3 n q 3 n + 2 A ( τ ) q 3 n τ A ( τ 1 ) Δ τ 1 Δ τ = I + A ( q 3 n ) μ ( q 3 n ) + A ( q 3 n ) μ ( q 3 n + 1 ) + A ( q 3 n + 1 ) μ ( q 3 n + 1 ) A ( q 3 n ) μ ( q 3 n ) = [ I + A ( q 3 n + 1 ) μ ( q 3 n + 1 ) ] [ I + A ( q 3 n ) μ ( q 3 n ) ] .

Thus, U n 2 × 2 . On the other hand,

f ( n ) = q 3 n q 3 n + 2 e A ( q 3 n + 2 , σ ( τ ) ) F ( τ ) Δ τ = [ I + A ( q 3 n + 1 ) μ ( q 3 n + 1 ) ] F ( q 3 n ) μ ( q 3 n ) + F ( q 3 n + 1 ) μ ( q 3 n + 1 ) = U n I 2 × 2 .

Therefore, U n 2 × 2 , and for (4.7), we can obtain B n , f ( n ) , X 0 2 × 2 . By Theorem 4.2, when t = σ(t n ), we have

X ( t ) = r = s 1 0 ( I + B r + 1 ) U r + v = 1 s 2 l = s 1 v + 1 ( I + B l + 1 ) U l ( I + B v + 1 ) f ( v ) + ( I + B s ) f ( s 1 ) .

Hence, for t = t A , where t A ∈ {2,t n ,σ(t n )}, we have X 2 × 2 ; thus, the Liouville formula of (4.7) can be given by ddet X(t) = det X(t) det X(t).



  1. Competing interests: The authors declare that they have no competing interests.

Acknowledgments

This work was supported by the Youth Fund of NSFC (No. 11961077, 11601470), IRTSTYN, and Joint Key Project of Yunnan Provincial Science and Technology Department of Yunnan University (No. 2018FY001(-014)).

References

[1] W. R. Hamilton, R. Dimitrid, and B. Goldsmith, The mathematical tourist, Math. Intelligencer 11 (1989), 29–30, DOI: 10.1007/BF03023819.

[2] A. Handson and H. Hui, Quaternion frame approach to streamline visualization, IEEE Trans. Vis. Comput. Graph. 1 (1995), 164–172, DOI: 10.1109/2945.468403.

[3] K. I. Kou and Y. H. Xia, Linear quaternion differential equations: basic theory and fundamental results, Stud. Appl. Math. 141 (2018), 3–45, DOI: 10.1111/sapm.12211.

[4] Z. Cai and K. I. Kou, Laplace transform: a new approach in solving linear quaternion differential equations, Math. Meth. Appl. Sci. 41 (2018), 4033–4048, DOI: 10.1002/mma.4415.

[5] J. Zhu and J. Sun, Existence and uniqueness results for quaternion-valued nonlinear impulsive differential systems, J. Syst. Sci. Compl. 31 (2018), 596–607, DOI: 10.1007/s11424-017-6158-9.

[6] J. Zhu and J. Sun, Global exponential stability of Clifford-valued recurrent neural networks, Neurocomputing 173 (2016), 685–689, DOI: 10.1016/j.neucom.2015.08.016.

[7] K. Kou, W. Liu, and Y. Xia, Solve the linear quaternion-valued differential equations having multiple eigenvalues, J. Math. Phys. 60 (2019), DOI: 10.1063/1.5040237.

[8] D. Cheng, K. I. Kou, and Y. H. Xia, A unified analysis of linear quaternion dynamic equations on time scales, J. Appl. Anal. Comput. 8 (2018), 172–201, DOI: 10.11948/2018.172.

[9] M. Bohner and A. Peterson, Dynamic Equations on Time Scales – An Introduction with Applications, Birkhauser, Boston, 2001, DOI: 10.1007/978-1-4612-0201-1.

[10] R. P. Agarwal, M. Bohner, and P. J. Wong, Sturm-Liouville eigenvalue problems on time scales, Appl. Math. Comput. 99 (1999), 153–166, DOI: 10.1016/S0096-3003(98)00004-6.

[11] C. Wang and R. P. Agarwal, Almost periodic solution for a new type of neutral impulsive stochastic Lasota-Wazewska timescale model, Appl. Math. Lett. 70 (2017), 58–65, DOI: 10.1016/j.aml.2017.03.009.

[12] C. Wang, R. P. Agarwal, and R. Sakthivel, Almost periodic oscillations for delay impulsive stochastic Nicholson's blowflies timescale model, Comput. Appl. Math. 37 (2018), 3005–3026, DOI: 10.1007/s40314-017-0495-0.

[13] C. Wang and R. Sakthivel, Double almost periodicity for high-order Hopfield neural networks with slight vibration in time variables, Neurocomputing 282 (2018), 1–15, DOI: 10.1016/j.neucom.2017.12.008.

[14] Q. Kong and A. Zafer, Lower bounds for the eigenvalues of first-order nonlinear Hamiltonian systems on time scales, Appl. Math. Lett. 90 (2019), 154–161, DOI: 10.1016/j.aml.2018.10.027.

[15] G. T. Stamov and J. O. Alzabut, Almost periodic solutions for abstract impulsive differential equations, Nonlinear Anal. 72 (2010), 2457–2464, DOI: 10.1016/j.na.2009.10.042.

[16] I. M. Stamova and G. T. Stamov, Mittag-Leffler synchronization of fractional neural networks with time-varying delays and reaction-diffusion terms using impulsive and linear controllers, Neural Networks 96 (2017), 22–32, DOI: 10.1016/j.neunet.2017.08.009.

[17] I. M. Stamova, Impulsive control for stability of n-species Lotka-Volterra cooperation models with finite delays, Appl. Math. Lett. 23 (2010), 1003–1007, DOI: 10.1016/j.aml.2010.04.026.

[18] V. Lakshmikantham, D. D. Bainov, and P. S. Simeonov, Theory of Impulsive Differential Equations, World Scientific, Singapore, 1989, DOI: 10.1142/0906.

[19] H. Aslaksen, Quaternionic determinants, Math. Intell. 18 (1996), 57–65, DOI: 10.1007/978-1-4613-0195-0_13.

Received: 2019-09-06
Accepted: 2020-01-20
Published Online: 2020-05-26

© 2020 Zhien Li and Chao Wang, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
