1 Introduction

Riordan arrays play an important unifying role in enumerative combinatorics [14]. There has been recent interest in combinatorial inequalities for Riordan arrays, including the log-convexity of column sequences; see [7, 8, 16, 17] for instance. In this paper, we focus on combinatorial proofs of the log-convexity of sequences arising from Riordan arrays.

A nonnegative sequence \((a_n)_{n\ge 0}\) is called log-convex (log-concave, resp.) if \(a^2_n\le a_{n+1}a_{n-1}\) (\(a^2_n\ge a_{n+1}a_{n-1}\), resp.) for all \(n\ge 1\). Log-convex and log-concave sequences arise often in combinatorics. An effective method for attacking log-convexity and log-concavity problems comes from the theory of total positivity. Following Karlin [11], an infinite matrix is called totally positive of order r (TP\(_r\) for short) if its minors of all orders \(\le r\) are nonnegative; it is called totally positive (TP for short) if its minors of all orders are nonnegative. Let \((a_n)_{n\ge 0}\) be an infinite sequence of nonnegative numbers. Clearly, it is log-concave if and only if its Toeplitz matrix

$$\begin{aligned}{}[a_{i-j}]_{i,j\ge 0}= \left[ \begin{array}{cccc} a_0 & & & \\ a_1 & a_0 & & \\ a_2 & a_1 & a_0 & \\ \vdots & & & \ddots \end{array} \right] \end{aligned}$$

is TP\(_2\), and it is log-convex if and only if its Hankel matrix \([a_{i+j}]_{i,j\ge 0}\) is TP\(_2\).
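For a concrete illustration of these criteria, the following Python sketch checks the TP\(_2\) condition on a finite leading block of the Hankel matrix by inspecting all of its \(2\times 2\) minors, using the Catalan numbers as test data; the helper names and the truncation size are only illustrative choices.

```python
from math import comb

def tp2_minors_ok(M):
    """Check that every 2x2 minor of the (square) matrix M is nonnegative."""
    n = len(M)
    return all(M[i1][j1] * M[i2][j2] - M[i1][j2] * M[i2][j1] >= 0
               for i1 in range(n) for i2 in range(i1 + 1, n)
               for j1 in range(n) for j2 in range(j1 + 1, n))

def hankel_block(a, n):
    """Leading n x n block of the Hankel matrix [a_{i+j}]."""
    return [[a[i + j] for j in range(n)] for i in range(n)]

# Catalan numbers C_0..C_9, a log-convex sequence, as test data.
catalan = [comb(2 * k, k) // (k + 1) for k in range(10)]
print(tp2_minors_ok(hankel_block(catalan, 5)))                      # True
print(all(catalan[n] ** 2 <= catalan[n + 1] * catalan[n - 1]
          for n in range(1, 9)))                                    # True: log-convex
```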

A Riordan array, denoted by (d(t), h(t)), is an infinite lower triangular matrix whose kth column has generating function \(d(t)h^k(t)\) for \(k=0,1,2,\ldots \), where \(d(0)=1\), \(h(0)=0\) and \(h'(0)\ne 0\). A Riordan array \(R=[r_{n,k}]_{n,k\ge 0}\) can also be characterized by two sequences \(A=(a_n)_{n\ge 0}\) and \(Z=(z_n)_{n\ge 0}\) such that

$$\begin{aligned} r_{0,0}=1,\quad r_{n+1,0}=\sum _{j=0}^n z_j r_{n,j},\quad r_{n+1,k+1}=\sum _{j=k}^n a_{j-k} r_{n,j} \end{aligned}$$
(1.1)

for \(n,k\ge 0\) (see [10, 13] for instance). Call

$$\begin{aligned} P(R)= \left[ \begin{array}{ccccc} z_0 & a_0 & & & \\ z_1 & a_1 & a_0 & & \\ z_2 & a_2 & a_1 & a_0 & \\ \vdots & \vdots & & & \ddots \end{array} \right] \end{aligned}$$

the production matrix of R. Then, the recurrence in (1.1) is equivalent to the matrix decomposition

$$\begin{aligned} \overline{R}=R\cdot P(R), \end{aligned}$$
(1.2)

where \(\overline{R}\) is obtained from R by deleting the 0th row.
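The recurrence (1.1) translates directly into code. The short Python sketch below builds the first rows of a Riordan array from given A- and Z-sequences and recovers, for instance, the Catalan, Motzkin and large Schröder numbers that appear among the examples below; the function names are only illustrative.

```python
def riordan_rows(A, Z, n_rows):
    """First n_rows rows of the Riordan array with A-sequence A and Z-sequence Z,
    computed from (1.1): r_{n+1,0} = sum_j z_j r_{n,j},
    r_{n+1,k+1} = sum_{j>=k} a_{j-k} r_{n,j}."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        new = [sum(Z[j] * prev[j] for j in range(len(prev)))]
        for k in range(len(prev)):
            new.append(sum(A[j - k] * prev[j] for j in range(k, len(prev))))
        rows.append(new)
    return rows

def zeroth_column(A, Z, n_terms):
    return [row[0] for row in riordan_rows(A, Z, n_terms)]

ones = [1] * 20
print(zeroth_column(ones, ones, 8))                    # Catalan: 1, 1, 2, 5, 14, 42, 132, 429
print(zeroth_column([1, 1, 1] + [0] * 17,
                    [1, 1] + [0] * 18, 8))             # Motzkin: 1, 1, 2, 4, 9, 21, 51, 127
print(zeroth_column([1] + [2] * 19, [2] * 20, 8))      # large Schroeder: 1, 2, 6, 22, 90, 394, 1806, 8558
```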

The 0th columns of Riordan arrays unify many well-known combinatorial numbers. For example, \(r_{n,0}\) are the Catalan numbers \(C_n\) if \(A=Z=(1,1,1,\ldots )\), the shifted Catalan numbers \(C_{n+1}\) if \(A=(1,2,1,0,\ldots )\) and \(Z=(2,1,0,\ldots )\), the Motzkin numbers \(M_n\) if \(A=(1,1,1,0,\ldots )\) and \(Z=(1,1,0,\ldots )\), the large Schröder numbers \(r_n\) if \(A=(1,2,2,\ldots )\) and \(Z=(2,2,2,\ldots )\), the little Schröder numbers \(s_n\) if \(A=Z=(1,2,2,\ldots )\), the central binomial coefficients \(\binom{2n}{n}\) if \(A=(1,2,1,0,\ldots )\) and \(Z=(2,2,0,\ldots )\), the central Delannoy numbers \(D(n,n)\) if \(A=(1,3,2,0,\ldots )\) and \(Z=(3,4,0,\ldots )\), the restricted hexagonal numbers \(H_n\) if \(A=(1,3,1,0,\ldots )\) and \(Z=(3,1,0,\ldots )\), and so on. Using the matrix decomposition in (1.2) and the method of induction, Chen, Liang and Wang [8, Theorem 2.1] established the following criterion for the log-convexity of the 0th columns of Riordan arrays.

Theorem 1.1

Let \(R=[r_{n,k}]_{n,k\ge 0}\) be a Riordan array. If its production matrix P(R) is TP\(_2\), then the 0th column \((r_{n,0})_{n\ge 0}\) of R is log-convex.

Theorem 1.1 implies the log-convexity of many combinatorial sequences that arise in the enumeration of lattice paths, including the Catalan numbers, the Motzkin numbers, the Schröder numbers, the central binomial coefficients, the central Delannoy numbers, and the restricted hexagonal numbers. Callan [4] and Liu and Wang [12] gave injective proofs for the log-convexity of the Motzkin numbers and that of the Catalan numbers, respectively. Sun and Wang [15] did the same for the Catalan-like numbers introduced by Aigner [1,2,3] (see Sect. 4). In this paper, we present a combinatorial proof of Theorem 1.1 from the point of view of weighted Łukasiewicz paths.

In the next section, we give a combinatorial interpretation of Riordan arrays in terms of weighted Łukasiewicz paths, and in Sect. 3 we present our combinatorial proof of Theorem 1.1. In the last section we point out that the techniques used in Sect. 3 can be generalized to show combinatorially the log-convexity of the Aigner–Catalan–Riordan numbers introduced by Wang and Zhang [16], and the strong q-log-convexity of many well-known combinatorial polynomials.

2 Preliminaries

Following [9], a Łukasiewicz path (L-path for short) is a lattice path that starts at the origin and never goes below the x-axis, using steps \(S_{-r}=(1,-r)\) with \(r\ge -1\). An L-path reduces to a Dyck path if all its steps have \(r\in \{-1,1\}\), and to a Motzkin path if \(r\in \{-1,0,1\}\). For convenience, we call \(S_1=(1,1)\) an up step, \(S_0=(1,0)\) a flat step, and \(S_{-r}=(1,-r)\) with \(r\ge 1\) a down step. Clearly, the step \(S_{-r}\) has slope \(-r\). We say that an L-path has length n if it consists of n steps, and that a step has height j if it starts at a lattice point with ordinate j. For \(j\ge 0\) we denote by \(S_{j,-r}\) the step that has height j and slope \(-r\). Note that \(j\ge r\).

A weighted L-path is an L-path each of whose steps is assigned a weight \(w(S_{j,-r})\). The weight \(w(P)\) of an L-path P is defined as the product of the weights of all its steps. Let u, v be two lattice points and define

$$\begin{aligned} w(u,v)=\sum _P w(P), \end{aligned}$$

where the sum ranges over all weighted L-paths going from u to v. We adopt the convention that \(w(u,u)=1\) for any lattice point u.

Lemma 2.1

Let \(R=[r_{n,k}]_{n,k\ge 0}\) be a Riordan array with A-sequence \((a_n)_{n\ge 0}\) and Z-sequence \((z_n)_{n\ge 0}\). Then, the \((n,k)\)-entry of R can be interpreted in terms of weighted L-paths

$$\begin{aligned} r_{n,k}=w((0,0),(n,k)) \end{aligned}$$

using the weights

$$\begin{aligned} w(S_{j,-r})= \left\{ \begin{array}{ll} z_r, & \hbox {if } j=r; \\ a_{r+1}, & \hbox {if } j>r, \end{array} \right. \end{aligned}$$
(2.1)

where \(S_{j,-r}\) is the step with height j and slope \(-r\) for all \(r\ge -1\) and \(j\ge r\).

Proof

Let \(t_{n,k}=w((0,0),(n,k))\) be the weight sum of all weighted L-paths from (0, 0) to \((n,k)\) using the weights given by (2.1). By convention \(t_{0,0}=1=r_{0,0}\). It is clear that \(t_{n,k}=0\) unless \(n\ge k\). Now consider the last step \(S_{j,-r}\), which goes from \((n,j)\) to \((n+1,k+1)\), where \(0\le j\le n\) and \(r\ge -1\). If \(k+1=0\), then \(r=j\). So by (2.1) we have

$$\begin{aligned} t_{n+1,0}=\sum _{j=0}^n w(S_{j,-r})t_{n,j}=\sum _{j=0}^n z_j t_{n,j}. \end{aligned}$$

If \(k+1>0\), then \(r=j-(k+1)\). So by (2.1) we have

$$\begin{aligned} t_{n+1,k+1}=\sum _{j=k}^n w(S_{j,-r})t_{n,j}=\sum _{j=k}^n a_{j-k}t_{n,j}. \end{aligned}$$

Hence, comparing with (1.1), \(t_{n,k}\) and \(r_{n,k}\) satisfy the same recurrence with the same initial conditions, which completes the proof. \(\square \)
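As a sanity check of Lemma 2.1, one can also enumerate all weighted L-paths from (0, 0) to \((n,k)\) by brute force and compare the total weight with the entries produced by (1.1). The Python sketch below does this for the Motzkin data \(A=(1,1,1,0,\ldots )\), \(Z=(1,1,0,\ldots )\); the data and the helper names are only illustrative.

```python
def path_weight_sum(A, Z, n, k):
    """Total weight of all L-paths of length n from height 0 to height k that never
    go below the x-axis, with step weights (2.1): w(S_{j,-r}) = z_r if j == r,
    and a_{r+1} if j > r (the up step S_1 corresponds to r = -1)."""
    def w(j, r):
        return Z[r] if j == r else A[r + 1]

    def extend(steps_left, height):
        if steps_left == 0:
            return 1 if height == k else 0
        # a step of slope -r from the current height stays nonnegative iff r <= height
        return sum(w(height, r) * extend(steps_left - 1, height - r)
                   for r in range(-1, height + 1))

    return extend(n, 0)

A = [1, 1, 1] + [0] * 10        # Motzkin A-sequence
Z = [1, 1] + [0] * 11           # Motzkin Z-sequence
print([path_weight_sum(A, Z, n, 0) for n in range(8)])   # Motzkin numbers 1, 1, 2, 4, 9, 21, 51, 127
print(path_weight_sum(A, Z, 5, 2))                        # 25, the entry r_{5,2} of the Motzkin triangle
```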

In order to give a combinatorial proof of Theorem 1.1 with the above interpretation of \(r_{n,k}\), we also need the following equivalent condition of the total positivity of order 2 of the production matrix.

Lemma 2.2

Let \(R=[r_{n,k}]_{n,k\ge 0}\) be a Riordan array with nonnegative A-sequence \((a_n)_{n\ge 0}\) and Z-sequence \((z_n)_{n\ge 0}\). The production matrix

$$\begin{aligned} P(R)= \left[ \begin{array}{ccccc} z_0 & a_0 & & & \\ z_1 & a_1 & a_0 & & \\ z_2 & a_2 & a_1 & a_0 & \\ \vdots & \vdots & & & \ddots \end{array} \right] \end{aligned}$$

is TP\(_2\) if and only if

$$\begin{aligned} a_{i+\ell }a_j\ge a_i a_{j+\ell } \quad \hbox {and}\quad z_{i+\ell }a_j\ge a_i z_{j+\ell } \end{aligned}$$
(2.2)

for \(i\le j\) and \(\ell \ge 0\).

Proof

If P(R) is TP\(_2\), then the A- and Z-sequences clearly satisfy (2.2). Conversely, suppose that (2.2) holds, and consider an arbitrary minor of order 2 of P(R). If the minor is not taken from the 0th column of P(R), then it is a minor of order 2 of the Toeplitz matrix \([a_{i-j}]_{i,j\ge 0}\) of the sequence \((a_n)_{n\ge 0}\), so \(a_{i+\ell }a_j\ge a_i a_{j+\ell }\) implies that such a minor is nonnegative. If the minor is taken from the 0th and \((\ell +1)\)th columns of P(R), then it equals 0, \(z_{i+\ell }a_j\) or \(z_{i+\ell }a_j-a_i z_{j+\ell }\), which is nonnegative since \(z_{i+\ell }a_j\ge a_i z_{j+\ell }\) for \(i\le j\) and \(\ell \ge 0\). Hence, the production matrix P(R) is TP\(_2\). \(\square \)
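The equivalence in Lemma 2.2 is easy to test on finite truncations. The Python sketch below compares condition (2.2) with a brute-force check of all \(2\times 2\) minors of a leading block of P(R), using the large Schröder data \(A=(1,2,2,\ldots )\), \(Z=(2,2,2,\ldots )\) as an illustrative example; the truncation size is arbitrary.

```python
def condition_2_2(A, Z, N):
    """Check the inequalities (2.2) for all 0 <= i <= j and ell >= 0 with indices below N."""
    return all(A[i + ell] * A[j] >= A[i] * A[j + ell] and
               Z[i + ell] * A[j] >= A[i] * Z[j + ell]
               for i in range(N) for j in range(i, N) for ell in range(N - j))

def production_block(A, Z, N):
    """Leading N x N block of P(R): row i is (z_i, a_i, a_{i-1}, ..., a_0, 0, ...)."""
    P = [[0] * N for _ in range(N)]
    for i in range(N):
        P[i][0] = Z[i]
        for k in range(1, min(i + 2, N)):
            P[i][k] = A[i - k + 1]
    return P

def tp2(M):
    N = len(M)
    return all(M[i1][j1] * M[i2][j2] >= M[i1][j2] * M[i2][j1]
               for i1 in range(N) for i2 in range(i1 + 1, N)
               for j1 in range(N) for j2 in range(j1 + 1, N))

A = [1] + [2] * 9               # large Schroeder A-sequence
Z = [2] * 10                    # large Schroeder Z-sequence
print(condition_2_2(A, Z, 6), tp2(production_block(A, Z, 6)))   # True True
```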

3 Proof of Theorem 1.1

In what follows, a weighted L-path always refers to an L-path weighted according to (2.1). Let \(\mathscr {L}_n\) be the set of weighted L-paths of length n that start and end on the x-axis. It follows from Lemma 2.1 that

$$\begin{aligned} r_{n,0}=\sum _{P\in \mathscr {L}_n} w(P). \end{aligned}$$

To show the log-convexity of \((r_{n,0})_{n\ge 0}\), it suffices to show that

$$\begin{aligned} \sum _{P\in \mathscr {L}_n} w(P) \sum _{Q\in \mathscr {L}_n} w(Q)\le \sum _{P\in \mathscr {L}_{n+1}} w(P) \sum _{Q\in \mathscr {L}_{n-1}} w(Q). \end{aligned}$$
(3.1)

Denote by \(w(P,Q)=w(P)w(Q)\) the weight of a pair of L-paths. Define the weight of a set \(\mathscr {C}\) of pairs of L-paths by

$$\begin{aligned} w(\mathscr {C})=\sum _{(P,Q)\in \mathscr {C}}w(P,Q). \end{aligned}$$

Then (3.1) is equivalent to

$$\begin{aligned} w(\mathscr {L}_n\times \mathscr {L}_n)\le w(\mathscr {L}_{n+1}\times \mathscr {L}_{n-1}). \end{aligned}$$
(3.2)

In the remaining part of this section we construct an injection \(\sigma :\mathscr {L}_n\times \mathscr {L}_n\rightarrow \mathscr {L}_{n+1}\times \mathscr {L}_{n-1}\) and compare the weights of the paired L-paths so as to establish (3.2).

Start with a pair of L-paths \((P,Q)\in \mathscr {L}_n\times \mathscr {L}_n\), where P goes from (0, 0) to (n, 0) and Q is drawn shifted one unit to the right, going from (1, 0) to \((n+1,0)\). Motivated by Callan’s method in [4], we scan the two paths from left to right and mark the first “encounter” \(\varepsilon _1\), where an encounter is defined as one of the following three cases.

  1. (I)

    A lattice point of intersection, as shown in Fig. 1a;

  2. (II)

The crossing point of two crossing steps AC and BD, as shown in Fig. 1b;

  3. (III)

A pair of non-crossing steps (AD, BC) such that neither AD nor BC is an up step and \(y_D-y_B\le 1\), where \(y_B\) and \(y_D\) are the ordinates of B and D, as shown in Fig. 1c.

Fig. 1 The first “encounter” of a pair of L-paths

Clearly, for any pair of L-paths in \(\mathscr {L}_n\times \mathscr {L}_n\), at least one such encounter exists. Note that in cases (II) and (III) each encounter is associated with four lattice points A, B, C and D satisfying \(y_D-y_B\le 1\), since the slope of each step \(S_{j,-r}\) is \(-r\le 1\). The location of an encounter refers to the coordinates of its associated lattice points (one point in case (I) and four in cases (II) and (III)). Now we define the operation \(\sigma \) on (P,Q) in each of the above cases.

  1. (I’)

    Switch the paths to the right of the intersection lattice point, as shown in Fig. 2(a\(^\prime \));

  2. (II’)

Swing the crossing steps so that they become non-crossing down (or flat) steps AD and BC, as shown in Fig. 2b\(^\prime \);

  3. (III’)

    Swing the non-crossing steps so that they become crossing steps AC and BD, as shown in Fig. 2c\(^\prime \).

Fig. 2 Operation \(\sigma \) on the L-paths in Fig. 1

Cases \((\mathrm I')\)–\((\mathrm III')\) all give rise to pairs of L-paths in \(\mathscr {L}_{n+1}\times \mathscr {L}_{n-1}\). Furthermore, the location of the first encounter remains invariant under the operation \(\sigma \), so the operation can be reversed. Therefore \(\sigma : \mathscr {L}_n\times \mathscr {L}_n\rightarrow \mathscr {L}_{n+1}\times \mathscr {L}_{n-1}\) is an injection. Note that the elements of \(\mathscr {L}_{n+1}\times \mathscr {L}_{n-1}-\sigma (\mathscr {L}_n\times \mathscr {L}_n)\) are pairs of nonintersecting L-paths having no encounter, on which the inverse operation cannot act. See Fig. 3 for an illustration.

Fig. 3 Elements in \(\mathscr {L}_2\times \mathscr {L}_4-\sigma (\mathscr {L}_3\times \mathscr {L}_3)\)

Now we are in a position to prove (3.2). Denote \(w(\mathrm I)=\sum w(P,Q)\), where the sum ranges over all pairs of L-paths (P,Q) whose first encounter belongs to case (I), and define the weights of the other five cases similarly. Then, it suffices to show that

$$\begin{aligned} w(\mathrm I)+w(\mathrm II)+w(\mathrm III)\le w(\mathrm I')+w(\mathrm II')+w(\mathrm III'). \end{aligned}$$

Let \((P,Q)\in \mathscr {L}_n\times \mathscr {L}_n\) and \((P',Q')=\sigma (P,Q)\in \mathscr {L}_{n+1}\times \mathscr {L}_{n-1}\). If the first encounter of (P,Q) is an intersection lattice point, then it follows immediately that \(w(P,Q)=w(P',Q')\), and thus \(w(\mathrm I)=w(\mathrm I')\). Let

$$\begin{aligned} \mathscr {C}&=\{(P,Q)\mid \hbox {the first encounter of }(P,Q)\hbox { belongs to case }(\mathrm{II})\hbox { or }(\mathrm{III})\}, \\ \mathscr {C}'&=\{(P',Q')\mid \hbox {the first encounter of }(P',Q')\hbox { belongs to case }(\mathrm{II}')\hbox { or }(\mathrm{III}')\}. \end{aligned}$$

So it remains to show that \(w(\mathscr {C})\le w(\mathscr {C}')\).

Suppose that \((P,Q)\in \mathscr {C}\). Then P and Q must have at least one pair of crossing steps. Let \(\varepsilon _1,\varepsilon _2,\ldots ,\varepsilon _m\) be the encounters of (P,Q). Define an operator \(\tau \) that acts on the encounters of \((P,Q)\in \mathscr {C}\) in the same way as \(\sigma \) does in cases \((\mathrm II')\) and \((\mathrm III')\). Note that the only difference between \(\sigma \) and \(\tau \) is that \(\sigma \) acts only on the FIRST encounter, while \(\tau \) can act on any of the encounters of (P,Q). See Fig. 4 for an illustration.

Fig. 4 The actions of \(\tau \) on encounters of (P,Q)

Let \(\mathscr {E}(P,Q)\) and \(\mathscr {O}(P,Q)\) be the sets of pairs of L-paths obtained by applying \(\tau \) to an even and an odd number of encounters of (P,Q), respectively. It is easy to verify that \(\mathscr {E}(P,Q)\subseteq \mathscr {C}\subseteq \mathscr {L}_n\times \mathscr {L}_n\) and \(\mathscr {O}(P,Q)\subseteq \mathscr {C}'\subseteq \mathscr {L}_{n+1}\times \mathscr {L}_{n-1}\). Define an equivalence relation \(\sim \) on \(\mathscr {C}\) (and on \(\mathscr {C}'\)) by declaring \((P_1,Q_1)\sim (P_2,Q_2)\) if and only if the location of each encounter of \((P_1,Q_1)\) coincides with that of \((P_2,Q_2)\). Then

$$\begin{aligned} w(\mathscr {C})=\sum w(\mathscr {E}(P,Q)),\quad w(\mathscr {C}')=\sum w(\mathscr {O}(P,Q)), \end{aligned}$$

where the sums range over all the equivalence classes of \(\mathscr {C}\) and \(\mathscr {C}'\), respectively. In the sequel we turn our attention to showing that \(w(\mathscr {E}(P,Q))\le w(\mathscr {O}(P,Q))\) for any \((P,Q)\in \mathscr {C}\).

Suppose that each encounter \(\varepsilon \) of (P,Q) is associated with four lattice points A, B, C and D having ordinates h, k, s and t, respectively (as shown in Fig. 5). Set the weight of an encounter

$$\begin{aligned} w(\varepsilon )= \left\{ \begin{array}{ll} X, & \hbox {if }\varepsilon \hbox { consists of the crossing steps }AC\hbox { and }BD; \\ Y, & \hbox {if }\varepsilon \hbox { consists of the non-crossing steps }AD\hbox { and }BC. \end{array} \right. \end{aligned}$$

Note that the weight of the encounters of each element of \(\mathscr {O}(P,Q)\) and of \(\mathscr {E}(P,Q)\) is a product of X’s and Y’s. Moreover, every element of \(\mathscr {O}(P,Q)\) has an even number of crossing encounters, while every element of \(\mathscr {E}(P,Q)\) has an odd number. Recall that the encounters of (P,Q) are \(\varepsilon _1,\varepsilon _2,\ldots ,\varepsilon _m\). Thus

$$\begin{aligned} w(\mathscr {O}(P,Q))-w(\mathscr {E}(P,Q))=\prod _{i=1}^m (Y_i-X_i)W_0, \end{aligned}$$
(3.3)

where \(W_0\) is the weight of the parts of (P,Q) outside the encounters, and \(X_i,Y_i\) are the weights corresponding to \(\varepsilon _i\).
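Identity (3.3) is the expansion of a product of differences over the \(2^m\) ways of toggling the encounters, the starting configuration having an odd number of crossing encounters. The Python sketch below verifies the identity for \(m=3\) with random placeholder weights; the particular starting configuration is only an illustrative choice.

```python
from itertools import combinations
import random

random.seed(0)
m = 3
X = [random.randint(1, 9) for _ in range(m)]   # weight of encounter i when it is crossing
Y = [random.randint(1, 9) for _ in range(m)]   # weight of encounter i when it is non-crossing

# The pair (P,Q) in C is assumed to have an odd number of crossing encounters;
# here encounter 0 is crossing and encounters 1, 2 are non-crossing (a placeholder choice).
original_crossing = [True, False, False]

def toggled_weight(T):
    """Weight contributed by the encounters after tau acts on the encounters indexed by T."""
    w = 1
    for i in range(m):
        crossing = original_crossing[i] ^ (i in T)   # tau exchanges crossing and non-crossing
        w *= X[i] if crossing else Y[i]
    return w

even = sum(toggled_weight(T) for r in range(0, m + 1, 2) for T in combinations(range(m), r))
odd = sum(toggled_weight(T) for r in range(1, m + 1, 2) for T in combinations(range(m), r))

product = 1
for i in range(m):
    product *= Y[i] - X[i]
print(odd - even == product)    # True: w(O(P,Q)) - w(E(P,Q)) = prod(Y_i - X_i), with W_0 = 1
```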

Now consider the encounter \(\varepsilon \) indicated in Fig. 5. Recall that \(S_{j,-r}\) is the step having height j and slope \(-r\) with \(j\ge r\). Then, the crossing steps are \(AC=S_{h,s-h}\), \(BD=S_{k,t-k}\), and the non-crossing steps are \(AD=S_{h,t-h}\), \(BC=S_{k,s-k}\). Note that \(0\le k<h\), \(0\le s<t\) and \(t-1\le k\). Hence, by (2.1), we obtain that if \(s=0\), then

$$\begin{aligned} X&=w(AC)w(BD)=w(S_{h,-h})w(S_{k,t-k})=z_{h}a_{k-t+1},\nonumber \\ Y&=w(AD)w(BC)=w(S_{h,t-h})w(S_{k,-k})=a_{h-t+1}z_{k}; \end{aligned}$$
(3.4)

if \(s>0\), then

$$\begin{aligned} X&=w(AC)w(BD)=w(S_{h,s-h})w(S_{k,t-k})=a_{h-s+1}a_{k-t+1},\nonumber \\ Y&=w(AD)w(BC)=w(S_{h,t-h})w(S_{k,s-k})=a_{h-t+1}a_{k-s+1}. \end{aligned}$$
(3.5)

Suppose that the production matrix P(R) is TP\(_2\). Then, combining Lemma 2.2 with (3.4) and (3.5), we obtain that \(Y\ge X\) for each encounter. So it follows from (3.3) that \(w(\mathscr {O}(P,Q))-w(\mathscr {E}(P,Q))\ge 0\). Therefore, \(w(\mathscr {L}_n\times \mathscr {L}_n)\le w(\mathscr {L}_{n+1}\times \mathscr {L}_{n-1})\) as desired. This completes the proof of Theorem 1.1.

Fig. 5 The encounter \(\varepsilon \)
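The inequality \(Y\ge X\) can also be checked numerically. The Python sketch below enumerates admissible encounter data \((h,k,s,t)\) with \(0\le k<h\), \(0\le s<t\) and \(t-1\le k\), and evaluates (3.4) and (3.5) for the central Delannoy data \(A=(1,3,2,0,\ldots )\), \(Z=(3,4,0,\ldots )\) from the introduction; the bound on h is only an illustrative truncation.

```python
A = [1, 3, 2] + [0] * 10        # central Delannoy A-sequence
Z = [3, 4] + [0] * 11           # central Delannoy Z-sequence

def encounter_weights(h, k, s, t):
    """X and Y of an encounter with ordinates h, k, s, t, as in (3.4) and (3.5)."""
    if s == 0:
        return Z[h] * A[k - t + 1], A[h - t + 1] * Z[k]
    return A[h - s + 1] * A[k - t + 1], A[h - t + 1] * A[k - s + 1]

ok = True
for h in range(1, 8):
    for k in range(h):                  # 0 <= k < h
        for t in range(1, k + 2):       # t - 1 <= k
            for s in range(t):          # 0 <= s < t
                X, Y = encounter_weights(h, k, s, t)
                ok = ok and Y >= X
print(ok)                               # True, as guaranteed by Lemma 2.2
```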

4 Remarks

Let \(T=[t_{n,k}]_{n,k\ge 0}\) be the infinite lower triangular matrix defined by

$$\begin{aligned} t_{0,0}=1,\quad t_{n+1,0}=\sum _{j=0}^n z_j t_{n,j},\quad t_{n+1,k+1}=\sum _{j=k}^n a_{j,k} t_{n,j}, \end{aligned}$$
(4.1)

for \(n,k\ge 0\), where all \(z_j,a_{j,k}\) are nonnegative and \(a_{j,k}=0\) unless \(j\ge k\ge 0\). Such a triangle is determined completely by the production matrix

$$\begin{aligned} P(T)= \left[ \begin{array}{ccccc} z_0 & a_{0,0} & & & \\ z_1 & a_{1,0} & a_{1,1} & & \\ z_2 & a_{2,0} & a_{2,1} & a_{2,2} & \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right]. \end{aligned}$$

The triangles defined by (4.1) reduce to Riordan arrays when the coefficients \(a_{j,k}\) depend only on the difference \(j-k\), and to the recursive matrices introduced by Aigner [1, 2] when the production matrix P(T) is tridiagonal. Following Aigner, the numbers in the 0th column of a recursive matrix are called the Catalan-like numbers. Sun and Wang [15] gave a combinatorial proof of the log-convexity of the Catalan-like numbers in terms of weighted Motzkin paths. Here, we point out that the techniques used in Sect. 3 can be generalized to show the log-convexity of the Aigner–Catalan–Riordan numbers introduced by Wang and Zhang [16].

Theorem 4.1

([16, Theorem 1.1]) Let \(T=[t_{n,k}]_{n,k\ge 0}\) be the infinite lower triangular matrix defined by (4.1). If the production matrix P(T) is TP\(_2\), then the sequence \((t_{n,0})_{n\ge 0}\) of Aigner–Catalan–Riordan numbers is log-convex.

Proof

Similar to Lemma 2.1, we can interpret \(t_{n,k}\) in terms of weighted L-paths:

$$\begin{aligned} t_{n,k}=w((0,0),(n,k)) \end{aligned}$$

with weights

$$\begin{aligned} w(S_{j,-r})= \left\{ \begin{array}{ll} z_r, & \hbox {if } j=r; \\ a_{j,j-r-1}, & \hbox {if } j>r. \end{array} \right. \end{aligned}$$
(4.2)

Then, replacing all the weights (2.1) by (4.2) throughout the proof in Sect. 3 yields an injective proof of Theorem 4.1. \(\square \)
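As a concrete instance of Theorem 4.1, the Python sketch below builds the triangle (4.1) directly from its production matrix and checks the log-convexity of the 0th column. The tridiagonal production matrix with entries \(P[j][j+1]=1\), \(P[j][j]=j+1\) and \(P[j][j-1]=j\), whose 0th column gives the Bell numbers, serves only as an illustrative example.

```python
def triangle_from_production(P, n_rows):
    """First n_rows rows of the triangle defined by (4.1),
    i.e. t_{n+1,i} = sum_j t_{n,j} * P[j][i] with t_{0,0} = 1."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        rows.append([sum(prev[j] * P[j][i] for j in range(len(prev)))
                     for i in range(len(prev) + 1)])
    return rows

# Tridiagonal production matrix of an Aigner recursive matrix producing the Bell numbers.
N = 12
P = [[0] * (N + 1) for _ in range(N)]
for j in range(N):
    P[j][j + 1] = 1          # up step
    P[j][j] = j + 1          # level step
    if j >= 1:
        P[j][j - 1] = j      # down step

bell = [row[0] for row in triangle_from_production(P, 9)]
print(bell)                                                  # 1, 1, 2, 5, 15, 52, 203, 877, 4140
print(all(bell[n] ** 2 <= bell[n + 1] * bell[n - 1]
          for n in range(1, 8)))                             # True: log-convex
```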

Wang and Zhang also considered the q-analogue of Theorem 4.1. For two real polynomials f(q) and g(q), write \(f(q)\ge _q g(q)\) if all coefficients of the difference \(f(q)-g(q)\) are nonnegative. A sequence of polynomials \((f_n(q))_{n\ge 0}\) with nonnegative coefficients is called q-log-convex if \(f^2_n(q)\le _q f_{n+1}(q)f_{n-1}(q)\) for all \(n\ge 1\). It is called strongly q-log-convex if \(f_m(q)f_n(q)\le _q f_{n+1}(q)f_{m-1}(q)\) for any \(n\ge m\ge 1\). Strong q-log-convexity implies q-log-convexity, but the converse is not true (see [5, 6] for more information). The concept of a q-TP\(_2\) matrix can be defined similarly. The proof of Theorem 4.1 then carries over verbatim to its q-version, so we omit the proof for the sake of brevity.

Theorem 4.2

([16, Theorem 3.1]) If the production matrix P(T(q)) is q-TP\(_2\), then the q-Aigner–Catalan–Riordan sequence \((t_{n,0}(q))_{n\ge 0}\) is strongly q-log-convex.

Theorem 4.2 unifies the strong q-log-convexity of many famous polynomials, including the Eulerian polynomials, the Narayana polynomials, and the Bell polynomials (see [17] for details). Therefore, the methods above also present an injective proof of the strong q-log-convexity of these polynomials.
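For small cases, these coefficientwise inequalities can be verified directly. The Python sketch below checks the strong q-log-convexity condition for the first few Narayana polynomials \(N_n(q)=\sum _{k=1}^n \frac{1}{n}\binom{n}{k}\binom{n}{k-1}q^k\), with the convention \(N_0(q)=1\); the range of indices tested is only an illustrative truncation.

```python
from math import comb

def poly_mul(f, g):
    """Product of two polynomials given as coefficient lists (index = degree of q)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def leq_q(f, g):
    """f <=_q g : every coefficient of g - f is nonnegative."""
    length = max(len(f), len(g))
    f = f + [0] * (length - len(f))
    g = g + [0] * (length - len(g))
    return all(b >= a for a, b in zip(f, g))

def narayana(n):
    """Coefficient list of the Narayana polynomial N_n(q), with N_0(q) = 1."""
    if n == 0:
        return [1]
    return [0] + [comb(n, k) * comb(n, k - 1) // n for k in range(1, n + 1)]

f = [narayana(n) for n in range(9)]
print(all(leq_q(poly_mul(f[m], f[n]), poly_mul(f[n + 1], f[m - 1]))
          for m in range(1, 8) for n in range(m, 8)))     # True: strongly q-log-convex (small cases)
```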