Abstract
We consider a re-sampling scheme for estimation of the population parameters in the mixed-effects nonlinear regression models of the type used, for example, in clinical pharmacokinetics. We provide a two-stage estimation procedure which resamples (or recycles), via random weightings, the various parameters' estimates to construct consistent estimates of their respective sampling distributions. In particular, we establish, under rather general distribution-free assumptions, the asymptotic normality and consistency of the standard two-stage estimates and of their resampled version, and demonstrate the applicability of our proposed resampling methodology in a small simulation study. A detailed example based on real clinical pharmacokinetic data is also provided.
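To give a concrete feel for the two-stage idea, the following is a deliberately simplified, self-contained sketch (not the paper's estimator): per-subject estimates are computed in stage one, averaged in stage two, and the same stage-one estimates are then "recycled" with i.i.d. mean-one, variance-one random weights to approximate the sampling distribution of the two-stage estimate. The exponential-decay model, the Exp(1) weights, and all constants here are illustrative assumptions; the paper's procedure also reweights the within-subject estimating equations.

```python
import random
import statistics

random.seed(7)

# Simulate N subjects; subject i has decay rate theta_i = theta0 + b_i,
# with b_i a mean-zero random effect (all values below are illustrative).
theta0, N, n_obs = 1.0, 200, 25
times = [0.2 * (j + 1) for j in range(n_obs)]

def stage1_estimate(theta_i):
    """Per-subject least-squares estimate on the log scale: the working model
    log y_ij = -theta_i * t_j + noise admits a closed-form regression
    through the origin, theta_hat = -sum(t * logy) / sum(t^2)."""
    logy = [-theta_i * t + random.gauss(0.0, 0.05) for t in times]
    return -sum(t * v for t, v in zip(times, logy)) / sum(t * t for t in times)

subject_hats = [stage1_estimate(theta0 + random.gauss(0.0, 0.1)) for _ in range(N)]

# Stage 2: the standard two-stage (STS) estimate is the plain average.
sts = statistics.fmean(subject_hats)

def recycled_draw():
    """One recycled replicate: reweight the SAME stage-1 estimates with
    i.i.d. Exp(1) weights (mean 1, variance 1) -- subject level only."""
    w = [random.expovariate(1.0) for _ in range(N)]
    return sum(wi * th for wi, th in zip(w, subject_hats)) / sum(w)

draws = [recycled_draw() for _ in range(500)]
print(round(sts, 2), round(statistics.stdev(draws), 3))
```

The spread of `draws` then serves as an estimate of the sampling variability of `sts`, without re-simulating or re-fitting any data.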
Availability of data and material
Derived data supporting the findings of this study are available as the Theoph dataset in R.
References
Adeniyi I, Yahya WB, Ezenweke C (2018) A note on pharmacokinetics modelling of theophylline concentration data on patients with respiratory diseases. Turk Klin J Biostat. 10(1):27–45
Bar-Lev SK, Boukai B (2015) Recycled estimation of population pharmacokinetics models. Adv Appl Stat 47:247–263
Bates DM, Watts DG (2007) Nonlinear regression analysis and its applications. Wiley, New York
Bickel PJ, Freedman DA (1981) Some asymptotic theory for the bootstrap. Ann Stat 9:1196–1217
Boeckmann AJ, Sheiner LB, Beal SL (1994) NONMEM users guide: part V, NONMEM project group. University of California, San Francisco
Boukai B, Zhang Y (2018) Recycled least squares estimation in nonlinear regression. arXiv preprint arXiv:1812.06167 [stat.ME]
Chatterjee S, Bose A (2005) Generalized bootstrap for estimating equations. Ann Stat 33:414–436
Davidian M, Gallant AR (1993a) The non-linear mixed effects model with a smooth random effects density. Biometrika 80:475–488
Davidian M, Giltinan DM (1993b) Some simple methods for estimating intra-individual variability in non-linear mixed effects models. Biometrics 49:59–73
Davidian M, Giltinan DM (1995) Nonlinear models for repeated measurement data. Monographs on statistics and applied probability. Chapman and Hall, London
Davidian M, Giltinan DM (2003) Nonlinear models for repeated measurement data: an overview and update. J Agric Biol Environ Stat 8:387–419
Davison AC, Hinkley DV (1997) Bootstrap methods and their application. Cambridge University Press, Cambridge
Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7:1–26
Efron B, Tibshirani R (1994) An introduction to the bootstrap. Chapman and Hall, New York
Eicker F (1963) Asymptotic normality and consistency of the least squares estimators for families of linear regressions. Ann Math Stat 34:447–456
Flachaire E (2005) Bootstrapping heteroskedastic regression models: wild bootstrap vs pairs bootstrap. Comput Stat Data Anal 49:361–376
Fan J, Mei C (1991) The convergence rate of randomly weighted approximation for errors of estimated parameters of AR(1) models. Xian Jiaotong DaXue Xuebao 25:1–6
Freedman DA (1981) Bootstrapping regression models. Ann Stat 9:1218–1228
Hartigan JA (1969) Using subsample values as typical values. J Am Stat Assoc 64:1303–1317
Ito K, Nisio M (1968) On the convergence of sums of independent Banach space valued random variables. Osaka J Math 5:33–48
Jennrich RI (1969) Asymptotic properties of non-linear least squares estimators. Ann Math Stat 40:633–643
Kurada RR, Chen F (2018) Fitting compartment models using PROC NLMIXED. Paper SAS1883-2018, https://bit.ly/2Cds4cT
Lo AY (1987) A large sample study of the Bayesian bootstrap. Ann Stat 15:360–375
Lo AY (1991) Bayesian bootstrap clones and a biometry function. Sankhya A 53:320–333
Lindstrom MJ, Bates DM (1990) Non-linear mixed effects models for repeated measures data. Biometrics 46:673–687
Mallet A (1986) A maximum likelihood estimation method for random coefficient regression models. Biometrika 73:645–656
Mammen E (1989) Asymptotics with increasing dimension for robust regression with applications to the bootstrap. Ann Stat 17:382–400
Mason DM, Newton MA (1992) A rank statistics approach to the consistency of a general bootstrap. Ann Stat 20:1611–1624
Newton MA, Raftery AE (1994) Approximate Bayesian inference with the weighted likelihood bootstrap (with discussion). J R Stat Soc Ser B 56:3–48
Praestgaard J, Wellner JA (1993) Exchangeably weighted bootstraps of the general empirical process. Ann Probab 21:2053–2086
Quenouille M (1949) Approximate tests of correlation in time-series. Math Proc Cambridge Philos Soc 45(03):483
R Core Team (2020) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, http://www.R-project.org/
Rao CR, Zhao L (1992) Approximation to the distribution of M-estimates in linear models by randomly weighted bootstrap. Sankhya A 54:323–331
Rubin DB (1981) The Bayesian bootstrap. Ann Stat 9:130–134
Shao J, Tu DS (1995) The jackknife and bootstrap. Springer-Verlag, New York
Sheiner LB, Rosenberg B, Melmon KL (1972) Modelling of individual pharmacokinetics for computer-aided drug dosage. Comput Biomed Res 5:441–459
Sheiner LB, Beal SL (1981) Evaluation of methods for estimating population pharmacokinetic parameters. II. Bioexponential model: routine clinical pharmacokinetic data. J Pharmacokinet Biopharm 9:635–651
Sheiner LB, Beal SL (1982) Bayesian individualization of pharmacokinetics: simple implementation and comparison with non-Bayesian methods. J Pharm Sci 71(12):1344–1348
Sheiner LB, Beal SL (1983) Evaluation of methods for estimating population pharmacokinetic parameters. III. Monoexponential model: routine clinical pharmacokinetic data. J Pharmacokinet Biopharm 11:303–319
Singh K (1981) On the asymptotic accuracy of Efron's bootstrap. Ann Stat 9:1187–1195
Steimer JL, Mallet A, Golmard JL, Boisvieux JF (1984) Alternative approaches to estimation of population pharmacokinetic parameters: comparison with the non-linear mixed effect model. Drug Metab Rev 15:265–292
Vonesh EF, Carter RL (1992) Mixed effects non-linear regression for unbalanced repeated measures. Biometrics 48:1–17
Weng CS (1989) On a second order property of the Bayesian bootstrap. Ann Stat 17:705–710
Wu CF (1981) Asymptotic theory of nonlinear least squares estimation. Ann Stat 9:501–513
Wu CFJ (1986) Jackknife, bootstrap and other resampling methods in regression analysis (with discussions). Ann Stat 14:1261–1350
Yu K (1988) The random weighting approximation of sample variance estimates with applications to sampling survey. Chin J Appl Prob Stat 3:340–347
Zhang Y, Boukai B (2019) Recycled estimates for nonlinear regression models. Stat 8:e230
Zhang Y, Boukai B (2019) Recycled two-stage estimation in nonlinear mixed effects regression models, arXiv:1902.00917
Zheng Z (1987) Random weighting methods. Acta Math Appl Sinica 10:247–253
Zheng Z, Tu D (1988) Random weighting method in regression models. Sci China Ser A-Math Phys Astron Technol Sci 31(12):1442–1459
Acknowledgements
We would like to thank the Editor and the anonymous referees for their careful review, useful comments and helpful suggestions that have led to substantial improvements of this paper.
Funding
This research was not supported by any funding.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Code availability
The code supporting this study is available from the corresponding author upon reasonable request.
7. Appendix
7.1 Technical details and proofs, the STS estimation case
In this section of the "Appendix" we provide the technical results needed for the proofs of Theorems 1 and 2 on the STS estimator \({\hat{\theta }}_{{STS}}\) in the hierarchical nonlinear regression model. In the sequel, we let \(\phi _{1ij}(\theta ):=\phi _{ij}^{'}(\theta )\) (see (11)), and set K to denote a generic constant. Recall that (see Assumption A(1)),
Lemma 1
Under the conditions of Assumption A, for some \(K>0\)
where \(b_{i1}:=b_{1n_{i}}(t)\) is a sequence such that \(\underset{|t|\le K}{\sup }|b_{i1}-b_i-\theta _0|\rightarrow 0,\ \ \ a.s.,\) as \(n_i\rightarrow \infty \).
Proof of Lemma 1
Since \(\phi _{1ij}(\theta ):=\phi _{ij}^{'}(\theta )\), we have
Accordingly, we first note that,
By Assumption A (3), we have \( a_{n_i}^{-2}\underset{|t|\le K}{\sup }\sum _{j=1}^{n_i}f^{'2}_{ij}(b_{i1})-\frac{1}{\sigma ^2}\rightarrow 0 \ \ a.s., \) and by Assumption A (2) and Corollary A in Wu (1981), we also have,
Finally, the last term converges to 0 a.s. by Assumption A, an application of the Cauchy-Schwarz inequality, and Corollary A in Wu (1981). Thus we have
\(\square \)
Lemma 2
Let \(X_i\) be a sequence of random variables bounded in probability and let \(Y_i\) be a sequence of random variables which satisfies \(\frac{1}{n}\sum _{i=1}^{n} |Y_i|\rightarrow 0\) in probability. Then \( \frac{1}{n}\sum _{i=1}^{n} X_iY_i\overset{p}{\rightarrow }0. \)
Proof of Lemma 2
Since \(X_i\) is bounded in probability, for any \(\epsilon >0\) there is \(K_\epsilon \) such that, for all sufficiently large \(i\), \( P(|X_i|>K_\epsilon )<\epsilon . \) Then
from which the desired result follows. \(\square \)
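The statement of Lemma 2 can also be checked numerically. Below is a small illustrative Monte Carlo sketch (not part of the proof): \(X_i\) is taken standard normal (hence bounded in probability) and \(Y_i=1/\sqrt{i}\) is a deterministic choice for which \(\frac{1}{n}\sum _{i=1}^{n}|Y_i|\approx 2/\sqrt{n}\rightarrow 0\); both choices are assumptions made purely for illustration.

```python
import math
import random

random.seed(0)

def avg_cross(n):
    # X_i ~ N(0, 1) is bounded in probability; Y_i = 1/sqrt(i) gives
    # (1/n) * sum_i |Y_i| ~ 2/sqrt(n) -> 0.  Lemma 2 then implies that
    # (1/n) * sum_i X_i * Y_i -> 0 in probability.
    return sum(random.gauss(0.0, 1.0) / math.sqrt(i) for i in range(1, n + 1)) / n

vals = [abs(avg_cross(n)) for n in (10**2, 10**4, 10**6)]
print(vals)
```

As \(n\) grows the averaged cross term shrinks toward zero, in line with the lemma.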
Lemma 3
There exists a \(K>0\) such that for any \(\epsilon >0\), for any i,
Proof of Lemma 3
Since \(\epsilon _{ij}\) and \(b_i\) are independent, for each \(i=1, \dots , N\), we have that for any \(j_1\ne j_2\),
Similarly,
Hence, we have \(E(\phi _{ij_1}(\theta _0+b_i)\phi _{ij_2}(\theta _0+b_i))=E(\phi _{ij_1}(\theta _0+b_i))E(\phi _{ij_2}(\theta _0+b_i))\). We therefore conclude that
Accordingly, there exists a \(K>0\) such that for any \(\epsilon >0\), for any i,
\(\square \)
We are now ready to prove Theorem 1
Proof of Theorem 1
Let
Next, we will show that, for any given constant \(K\),
By a Taylor expansion, \( \phi _{ij}(\theta _0+b_i+a_{n_i}^{-1}t)=\phi _{ij}(\theta _0+b_i)+\phi _{1ij}(b_{i1})a_{n_i}^{-1}t, \ \ \) where \(b_{i1}=\theta _0+b_i+ca_{n_i}^{-1}t\) for some \(0<c<1\). Accordingly we obtain that,
By Lemma 1, \( a_{n_i}^{-2}\underset{|t|\le K}{\sup }\sum _{j=1}^{n_i}\phi _{1ij}(b_{i1})-\frac{1}{\sigma ^2}\rightarrow 0\ \ a.s. \) Thus, we have proved (17). Next, by (16),
Thus,
By Lemma 3, there exists a \(K>0\) such that for any \(\epsilon >0\), for any i,
So that by (18) and (17) we may choose K large enough such that for sufficiently large \(n_i\),
By the continuity of \(\sum _{j=1}^{n_i}\phi _{ij}(\theta )\) in \(\theta \), we have, for sufficiently large \(n_i\), that there exists a constant K such that the equation
has a root \(t=T_{ni}\) in \(|t|\le K\) with probability larger than \(1-\epsilon \). That is, we have \({{\hat{\theta }}_{ni}}=\theta _0+b_i+a_{ni}^{-1}T_{ni}, \) where \(|T_{ni}|<K\) in probability. Thus, by Lemma 2,
\(\square \)
For establishing the asymptotic normality result as stated in Theorem 2, we need the following Lemma.
Lemma 4
Under the conditions of Assumptions A,
Proof of Lemma 4
Let \( X_{ni}:= a_{n_i}^{-1}\sum _{j=1}^{n_i}\phi _{ij}(\theta _0+b_i), \) where, by the proof of Theorem 1, we have \(E(X_{ni})=0\) and \(Var(X_{ni})=1\). Thus,
Now, for any \(\epsilon >0\),
Accordingly, we have \( \frac{1}{\sqrt{N}}\sum _{i=1}^{N}a_{n_i}^{-2}\sum _{j=1}^{n_i}\phi _{ij}(\theta _0+b_i)\overset{p}{\rightarrow } 0, \) as required. \(\square \)
Proof of Theorem 2
We first note that by Lemma 1 and (16),
Thus,
Recall that \(\sum _{i=1}^{N}b_i/N\rightarrow E(b_1)\equiv 0\). In view of (17) and since \(\underset{N,n_i\rightarrow \infty }{\lim } N/a_{n_i}^2<\infty \), we have
Finally, from Lemma 4,
Thus, it follows that \( \lambda ^{-1}\sqrt{N}({\hat{\theta }}_{{STS}}-\theta _0)\Rightarrow {{\mathcal {N}}}(0,1). \) \(\square \)
7.2 Technical details and proofs, the recycled STS estimation case
In this section of the "Appendix" we provide the technical results needed for the proofs of Theorems 3 and 4 on the recycled STS estimator, \({\hat{\theta }}^*_{{RTS}}\), in the hierarchical nonlinear regression model. We begin with a re-statement of Lemma 2 from Boukai and Zhang (2018) which is concerned with the general random weights under Assumption W.
Lemma 5
Let \(\mathbf{w}_n=(w_{1:n}, w_{2:n}, \dots , w_{n:n})^\mathbf{t}\) be random weights that satisfy the conditions of Assumption W. Then, with \(W_{i}=(w_{i:n}-1)/\tau _n, \ i=1,\dots , n\) and \({\bar{W}}_n:=\frac{1}{n}\sum _{i=1}^{n}W_i\), we have, as \(n\rightarrow \infty \), that \((i)\ \ \frac{1}{n}\sum _{i=1}^{n}W_i\overset{p^*}{\rightarrow }0\), \((ii)\ \ \frac{1}{n}\sum _{i=1}^{n}W_i^2\overset{p^*}{\rightarrow }1\), and hence \((iii)\ \ \frac{1}{n}\sum _{i=1}^{n}(W_i-\bar{W}_n)^2\overset{p^*}{\rightarrow }1\).
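As a quick numerical illustration of Lemma 5 (a sketch under assumed weights, not part of the paper): i.i.d. Exp(1) weights have \(E(w)=1\) and \(Var(w)=1\), so \(\tau _n=1\) and \(W_i=w_{i:n}-1\), and parts (i)–(iii) can be checked directly.

```python
import random

random.seed(1)

n = 200_000
# i.i.d. Exp(1) weights: E(w) = 1 and Var(w) = 1, so tau_n = 1 and
# W_i = (w_i - 1) / tau_n = w_i - 1 (an illustrative choice of weights).
W = [random.expovariate(1.0) - 1.0 for _ in range(n)]

m1 = sum(W) / n                  # part (i):  should be near 0
m2 = sum(x * x for x in W) / n   # part (ii): should be near 1
m3 = m2 - m1 * m1                # part (iii): (1/n) sum (W_i - Wbar)^2, near 1
print(round(m1, 3), round(m2, 3), round(m3, 3))
```

The same check works for any weights meeting Assumption W after standardizing by the appropriate \(\tau _n\).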
Lemma 6
Under the conditions of Assumption W, \( \frac{1}{n}\sum _{i=1}^{n}w_{i:n}-1\overset{p^*}{\rightarrow }0. \) Further, let \(\mathbf{u}_n=(u_1,u_2, \dots , u_n)^\mathbf{t}\) denote a vector of n i.i.d. random variables that is independent of \(\mathbf{w}_n\), with \(E(u_i)=0\) and \(E(u_i^2)<\infty \). Then, conditional on the given value of \(\mathbf{u}_n\), we have \(\frac{1}{n}\sum _{i=1}^{n}u_i w_{i:n} \overset{p^*}{\rightarrow }0\), as \(n\rightarrow \infty \).
Proof of Lemma 6
We first note that
We therefore conclude that \(\frac{1}{n}\sum _{i=1}^{n}w_{i:n}-1\overset{p^*}{\rightarrow }0\), as \(n\rightarrow \infty \). As for the second assertion, we note that since
and since \(\sum _{i=1}^{n}u_i/n\rightarrow 0\), as \(n\rightarrow \infty \), we may only consider the first term. To that end, we note that
as \(n\rightarrow \infty \). We therefore conclude that \(\frac{1}{n}\sum _{i=1}^{n}u_iw_{i:n}\overset{p^*}{\rightarrow }0, \) as required. \(\square \)
Lemma 7
Under the conditions of Assumptions A and B, we have that \(a_{n_i}^{-2}\sum _{j=1}^{n_i}\phi ^2_{ij}({{\hat{\theta }}_{ni}})\overset{p}{\rightarrow } 1, \) for all \(i=1,2,\dots ,N\).
Proof of Lemma 7
Since \({{\hat{\theta }}_{ni}}\overset{p}{\rightarrow }\theta _0\), we have
Write,
The first term in \(B_1\) converges to 0 by Assumption A (3) and Corollary A of Wu (1981), while the second term in \(B_1\) converges to 1 by Assumption A (3). Hence \(B_1\overset{p}{\rightarrow } 1\). As for the second and third terms, \(B_2\) and \(B_3\), it follows by a direct application of the Cauchy-Schwarz inequality together with Assumption B (1) that \(B_2\overset{p}{\rightarrow } 0\) and \(B_3\overset{p}{\rightarrow } 0\). Accordingly, it follows that \( a_{n_i}^{-2}\sum _{j=1}^{n_i}\phi ^2_{ij}({{\hat{\theta }}_{ni}})\overset{p}{\rightarrow } 1, \) as required. \(\square \)
Lemma 8
Under the conditions of Assumptions A and B, for all i,
where \(b^*_{i1}={{\hat{\theta }}_{ni}}+ca_{n_i}^{-1}t\) for some \(0<c<1\), as \(n_i\rightarrow \infty \).
Proof of Lemma 8
We first note that since by Theorem 1, we have \({{\hat{\theta }}_{ni}}-b_i-\theta _0\overset{p}{\rightarrow }0\), and since
it follows under Assumption B (3) that with \(|t|\le K\tau _{n_i}\), we have \( b^*_{i1}-b_i-\theta _0\overset{p}{\rightarrow }0. \) Thus,
In light of Assumption B (2–3), and since \(\tau _{n_i}^2/n_i\rightarrow 0\), we only need to show, in order to complete the proof of Lemma 8, that
Toward that end, we note that,
It is straightforward to see that by Assumption B (1), \(\underset{n_i\rightarrow \infty }{\lim }I_1<\infty \), and that by the Cauchy-Schwarz inequality, \(\underset{n_i\rightarrow \infty }{\lim }I_3<\infty \). Finally, we write
The first term converges to 0 in probability by Assumption B (2) and Corollary A of Wu (1981). Then, according to Assumption A (2),
The third term in \(I_2\) converges to 0 in probability by an application of the Cauchy-Schwarz inequality combined with Assumption B (1) and (2). Finally, the fourth term in \(I_2\) converges to 0 in probability, again by an application of the Cauchy-Schwarz inequality. Thus we have \(\underset{n_i\rightarrow \infty }{\lim }I_2<\infty \). Accordingly, we have established that as \(n_i\rightarrow \infty \),
\(\square \)
Lemma 9
Under the conditions of Assumptions A and B, there exists a \(K>0\) such that for any \(\epsilon >0\),
Proof of Lemma 9
By Lemma 7,
Hence we obtain,
Accordingly, there exists a \(K>0\) such that for any \(\epsilon >0\),
\(\square \)
Proof of Theorem 3
Let
First, we will show that for any given \(K>0\),
By a Taylor expansion we have that \( \phi _{ij}({{\hat{\theta }}_{ni}}+a_{n_i}^{-1}t)=\phi _{ij}({{\hat{\theta }}_{ni}})+\phi _{1ij}(b^*_{i1})a_{n_i}^{-1}t, \) where as before, \(b^*_{i1}={{\hat{\theta }}_{ni}}+ca_{n_i}^{-1}t\) for some \(0<c<1\). Accordingly we obtain,
Further,
By Lemma 8 and Lemma 1, we have
and
Thus, by an application of the Cauchy-Schwarz inequality we have proved (20). Next, in light of (19) we define
Accordingly,
Recall that by Lemma 9, there exists a \(K>0\) such that for any \(\epsilon >0\),
Accordingly, by (21) and (20) we may choose large enough K such that for sufficiently large \(n_i\),
From the continuity of \(\sum _{j=1}^{n_i}\phi _{ij}(\theta )\) in \(\theta \), we have, for sufficiently large \(n_i\), that there exists a \(K\) such that the equation \( \sum _{j=1}^{n_i}w_{ij}\phi _{ij}({{\hat{\theta }}_{ni}}+a_{n_i}^{-1}t)=0 \) has a root, \(t=T^*_{ni}\), in \(|t|\le K\tau _{n_i}\) with probability larger than \(1-\epsilon \). That is, we have \({\hat{\theta }}_{ni}^*={{\hat{\theta }}_{ni}}+a_{ni}^{-1}T^*_{ni}, \) where \(|\tau _{n_i}^{-1}T^*_{ni}|<K\) in probability. Accordingly, we may rewrite \({\hat{\theta }}^*_{{RTS}}\) as,
That is,
Additionally, by Lemma 6, we have \(\frac{1}{N}\sum _{i=1}^{N}(u_i-1)\overset{p^*}{\rightarrow }0, \) as well as, \(\frac{1}{N}\sum _{i=1}^{N}u_ib_i\overset{p^*}{\rightarrow }0\). Further, we also have that
Now by Lemma 2 and the fact \(T_{ni}=O_p(1)\), we obtain, with \(U_i:=(u_i-1)/\tau _N\), that
as well as, \(\frac{1}{N}\sum _{i=1}^{N}a_{ni}^{-1}T_{ni}\overset{p}{\rightarrow } 0\). That is, we have established that, \(E^*(\frac{1}{N}\sum _{i=1}^{N}u_ia_{ni}^{-1}T_{ni})^2\overset{p}{\rightarrow } 0\). Accordingly we conclude, \(P^*(|\frac{1}{N}\sum _{i=1}^{N}u_ia_{ni}^{-1}T_{ni}|>\epsilon )=o_p(1)\). Similarly,
where by Lemma 2, Assumption B (3) and the fact \(\tau _{n_i}^{-1}T^{*}_{ni}=O_{p^*}(1)\), we obtain,
Finally, by Lemma 2,
Accordingly we also conclude that, \(P^*(|\frac{1}{N}\sum _{i=1}^{N}u_ia_{ni}^{-1}T^*_{ni}|>\epsilon )=o_p(1)\). Hence, we have proved that \(P^*(|{\hat{\theta }}^*_{{RTS}}-\theta _0|>\epsilon )=o_p(1)\). \(\square \)
For the related asymptotic normality results as stated in Theorem 4, we need the following two Lemmas.
Lemma 10
Suppose that the conditions of Assumptions A and B hold. If \(\frac{\tau _{n_i}}{\tau _N}=o(\sqrt{n_i})\) then as \(n_i\rightarrow \infty \) and \(N\rightarrow \infty \),
Proof of Lemma 10
Let
Clearly \(E^*(X^*_{ni})=0\), and the \(X^*_{ni}\) are independent for \(i=1,2,\dots , N\). Further, by Lemma 7 we have, as \(n_i\rightarrow \infty \), that
Thus, with \(U_i=(u_i-1)/\sqrt{\tau _N}\),
Since \(U_i\) and \(X^*_{ni}\) are independent, we obtain,
Finally, since \(\frac{\tau _{n_i}}{\tau _N}=o(\sqrt{n_i})\), we also have,
Accordingly we obtain that,
\(\square \)
Lemma 11
Suppose that the conditions of Assumptions A and B hold. If \(\frac{\tau _{n_i}}{\tau _N}=o(\sqrt{n_i})\) then as \(n_i\rightarrow \infty \) and \(N\rightarrow \infty \),
Proof of Lemma 11
We first write
By Lemma 2, Assumption B (3) and the fact \(\tau _N^{-1}S_{n_i}(T^*_{ni})\overset{p^*}{\rightarrow } 0\),
Further, it can be seen that,
Thus we have,
\(\square \)
We conclude the "Appendix" with a proof of Theorem 4.
Proof of Theorem 4
By Theorem 3 and (19) we express,
Accordingly we have,
where \(|T^*_{ni}|<K\tau _{n_i}\) in probability. Further,
By Lemma 10, \(I_2\overset{p^*}{\rightarrow } 0\), and by Lemma 11, \(I_3\overset{p^*}{\rightarrow } 0\), and therefore it remains only to consider \(I_1\). Now, observe that,
By Lemma 2,
Further by Lemma 5,
and clearly, \(\sqrt{N}(\bar{b}+\theta _0)\Rightarrow \mathcal{{N}}(\theta _0,\lambda ^2)\). Accordingly we have \(\frac{\lambda ^{-2}}{N}\sum _{i=1}^{N}(b_i-\bar{b})^2\rightarrow 1 \ \ a.s.\), as well as \(\sqrt{N} \bar{U}(\bar{b}+\theta _0)\overset{p^*}{\rightarrow } 0\). Further, by Lemma 4.6 of Praestgaard and Wellner (1993), we have that
Thus we have
Finally we conclude that as \(n_i\rightarrow \infty \) and \(N\rightarrow \infty \),
\(\square \)
About this article
Cite this article
Zhang, Y., Boukai, B. Recycled two-stage estimation in nonlinear mixed effects regression models. Stat Methods Appl 31, 551–585 (2022). https://doi.org/10.1007/s10260-021-00581-7