Abstract
Minimax robust designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As is typical, it is hard to find robust designs analytically, since the associated design problem is not a convex optimization problem. However, we show that the objective function of the minimax robust design problem is a difference of two convex functions. An effective algorithm is developed to compute minimax robust designs under the least squares and generalized least squares estimators. The algorithm can be applied to construct minimax robust designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax robust designs.
References
Berger MPF, Wong WK (2009) An introduction to optimal designs for social and biomedical research. Wiley, New York
Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, New York
Dean A, Morris M, Stufken J, Bingham D (2015) Handbook of design and analysis of experiments. CRC Press, Boca Raton
Dette H, Haines LM, Imhof L (1999) Optimal designs for rational models and weighted polynomial regression. Ann Stat 27:1272–1293
Dette H, Wong WK (1999) Optimal designs when the variance is a function of the mean. Biometrics 55:925–929
Gao LL, Zhou J (2020) Minimax D-optimal designs for multivariate regression models with multi-factors. J Stat Plan Inference 209:160–173
Grant MC, Boyd SP (2013) The CVX users' guide, release 2.0 (beta). CVX Research Inc., Stanford University, Stanford
Kitsos CP, Lopez-Fidalgo J, Trandafir PC (2006) Optimal designs for Michaelis–Menten model. In: Greek Statistical Institute Annual Meeting, January 2006, pp 467–475
Lipp T, Boyd S (2016) Variations and extensions of the convex-concave procedure. Optim Eng 17:263–287
Martínez-López I, Ortiz-Rodríguez IM, Rodríguez-Torreblanca C (2009) Optimal designs for weighted rational models. Appl Math Lett 22:1892–1895
Papp D (2012) Optimal designs for rational function regression. J Am Stat Assoc 107:400–411
Pukelsheim F (2006) Optimal design of experiments. Society for Industrial and Applied Mathematics, Philadelphia
Wiens DP (2015) Robustness of design. In: Dean A, Morris M, Stufken J, Bingham D (eds) Handbook of design and analysis of experiments. CRC Press, Boca Raton
Wiens DP, Li P (2014) V-optimal designs for heteroscedastic regression. J Stat Plan Inference 145:125–138
Wong WK, Yin Y, Zhou J (2019) Using SeDuMi to find various optimal designs for regression models. Stat Pap 60:1583–1603
Wu XF (2007) Optimal designs for segmented polynomial regression models and web-based implementation of optimal design software. PhD Thesis, Stony Brook University
Yan F, Zhang C, Peng H (2017) Optimal designs for additive mixture model with heteroscedastic errors. Commun Stat Theory Methods 46:6401–6411
Ye JJ, Zhou J, Zhou W (2017) Computing A-optimal and E-optimal designs for regression models via semidefinite programming. Commun Stat Simul Comput 46:2011–2024
Zhai Y, Fang Z (2018) Locally optimal designs for some dose-response models with continuous endpoints. Commun Stat Theory Methods 47:3803–3819
Acknowledgements
The authors are grateful to the Editor and referees for their helpful comments and suggestions. This research work was partially supported by Discovery Grants from the Natural Sciences and Engineering Research Council of Canada.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Appendix: Proofs and derivations
Proof of Theorem 1
For any \(\lambda \in \Lambda (\alpha )\), we have
Notice that \(\mathbf{B}_i=\lambda (\mathbf{u}_i) \mathbf{A}_i\) and \(\mathbf{A}_i\) are positive semidefinite for all \(i =1, \ldots , N\). In what follows, \(\succeq \) denotes the Loewner order for positive semidefinite matrices. From (7) we get
which leads to the result in (10), i.e.,
From Boyd and Vandenberghe (2004, page 387), both \(-2 \log ( \det \left( \sum _{i=1}^N w_i \mathbf{A}_i \right) )\) and \(-\log ( \det \left( \sum _{i=1}^N w_i (g(\mathbf{u}_i) + \alpha )\mathbf{A}_i \right) ) \) are convex functions of \(\mathbf{w}\), so \(\phi _1(\mathbf{w})\) is a difference of two convex functions of \(\mathbf{w}\). The result for \(\phi _2(\mathbf{w})\) in (11) can be proved similarly. \(\square \)
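The convexity of \(-\log \det \left( \sum _{i=1}^N w_i \mathbf{A}_i \right)\) in \(\mathbf{w}\) can be checked numerically. A minimal sketch in Python, using \(2\times 2\) matrices so the determinant is explicit; the matrices and weight vectors below are illustrative, not from the paper:

```python
import math

# Two illustrative 2x2 positive definite matrices (not from the paper).
A = [[[2.0, 0.5], [0.5, 1.0]],
     [[1.0, 0.2], [0.2, 3.0]]]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def info(w):
    """Weighted sum  sum_i w_i A_i  for a weight vector w."""
    return [[sum(w[i] * A[i][r][c] for i in range(len(A)))
             for c in range(2)] for r in range(2)]

def f(w):
    """f(w) = -log det( sum_i w_i A_i ), convex in w on the simplex."""
    return -math.log(det2(info(w)))

# Midpoint-convexity check: f((w1+w2)/2) <= (f(w1) + f(w2))/2.
w1, w2 = [0.8, 0.2], [0.3, 0.7]
mid = [(a + b) / 2 for a, b in zip(w1, w2)]
assert f(mid) <= 0.5 * (f(w1) + f(w2)) + 1e-12
```

Since \(\phi _1\) is a difference of two such convex log-det terms, it is generally neither convex nor concave, which is what motivates the difference-of-convex treatment.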
Proof of Theorem 2
Suppose \(\mathbf{w}^*\) is a solution to problem (12) with \(\phi (\mathbf{w})=\phi _1(\mathbf{w})\). Let \(\mathbf{w}\) be any weight vector and let \(\mathbf{w}_{\delta } =(1-\delta ) \mathbf{w}^* + \delta \mathbf{w}\) for \(\delta \in [0,1]\), so \(\mathbf{w}_{\delta } \) is also a weight vector. Since \(\phi _1(\mathbf{w})\) is minimized at \(\mathbf{w}^*\), we must have
Straightforward computation of the derivative leads to
which gives the necessary condition,
Similarly we can prove the result when \(\phi (\mathbf{w})=\phi _2(\mathbf{w})\). \(\square \)
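The necessary condition in Theorem 2 can be illustrated numerically with a toy scalar (1 × 1) version of \(\phi _1\); the values \(a_i, b_i\) and the objective below are illustrative stand-ins, not the paper's model:

```python
import math

# Illustrative scalar information values (not from the paper).
a = [1.0, 4.0]
b = [2.0, 3.0]

def phi(w):
    """Toy difference-of-convex objective: -2 log(sum w_i a_i) + log(sum w_i b_i)."""
    s_a = sum(wi * ai for wi, ai in zip(w, a))
    s_b = sum(wi * bi for wi, bi in zip(w, b))
    return -2 * math.log(s_a) + math.log(s_b)

# Grid search for the minimizer on the 2-point simplex w = (t, 1 - t).
t_star = min((i / 10000 for i in range(10001)), key=lambda t: phi([t, 1 - t]))
w_star = [t_star, 1 - t_star]

def dir_deriv(w):
    """One-sided derivative of phi((1-delta) w* + delta w) at delta = 0+."""
    eps = 1e-6
    w_eps = [(1 - eps) * ws + eps * wi for ws, wi in zip(w_star, w)]
    return (phi(w_eps) - phi(w_star)) / eps

# Necessary condition: the derivative is nonnegative for every weight vector w.
for w in ([1.0, 0.0], [0.0, 1.0], [0.5, 0.5]):
    assert dir_deriv(w) >= -1e-4
```

In this toy case the minimizer lies on the boundary of the simplex, so the condition holds as a one-sided inequality rather than a stationarity equation.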
The derivative \( \frac{\partial v_2(\mathbf{w})}{\partial \mathbf{w}^\top } \) in (15): For \(\phi (\mathbf{w}) = \phi _1(\mathbf{w})\), we have
and for \(i=1, \ldots , N\),
Similarly, for \(\phi (\mathbf{w}) = \phi _2(\mathbf{w})\) we get
\(\square \)
Proof of Theorem 3
From (15) and \(\lim _{j \rightarrow \infty } \mathbf{w}^{(j)} = \mathbf{w}^*\), we get
and
which are the results in (18) and (19). To show that \(\mathbf{w}^*\) satisfies the necessary condition in Theorem 2, we use the method in the proof of Theorem 2 and let \(\mathbf{w}_{\delta } =(1-\delta ) \mathbf{w}^{(j)} + \delta \mathbf{w}\) for \(\delta \in [0,1]\). Then we have
which leads to the necessary condition in Theorem 2 as \(j \rightarrow \infty \). \(\square \)
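Theorem 3 concerns the iterates \(\mathbf{w}^{(j)}\) of the algorithm, which is of convex–concave type (cf. Lipp and Boyd 2016): at each step the concave part of the objective is linearized at the current iterate and the resulting convex surrogate is minimized. A minimal one-dimensional sketch, with toy convex functions \(u, v\) standing in for the two log-det terms (illustrative, not the paper's \(\phi _1\)):

```python
import math

# Toy difference-of-convex objective phi(t) = u(t) - v(t) on [0, 1],
# standing in for phi_1(w) on the simplex (illustrative, not the paper's model).
u = lambda t: -2 * math.log(4 - 3 * t)   # convex part kept exactly
v = lambda t: -math.log(3 - t)           # convex part being subtracted
v_prime = lambda t: 1 / (3 - t)          # derivative of v
phi = lambda t: u(t) - v(t)

grid = [i / 10000 for i in range(10001)]

# Convex-concave procedure: linearize v at the current iterate t_j and
# minimize the convex surrogate u(t) - v'(t_j) * t (by grid search here).
t = 0.5
for _ in range(10):
    c = v_prime(t)
    t = min(grid, key=lambda s: u(s) - c * s)

# The iterates converge to the minimizer of phi on [0, 1].
assert abs(t - min(grid, key=phi)) < 1e-6
```

Each surrogate majorizes \(\phi \) up to a constant, so the objective values are non-increasing along the iterates, which is the monotonicity underlying the convergence statements (18) and (19).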
Proof of Theorem 4
First we show that for each \(j\), \(\xi (\mathbf{w}^{(j)})\) is symmetric on \(S_N\). For any distribution \(\xi (\mathbf{w})\) on \(S_N\), we define \(\xi (\tilde{\mathbf{w}})\) to be the following distribution,
If \(\xi (\mathbf{w})=\xi (\tilde{\mathbf{w}})\), then \(\xi (\mathbf{w})\) is symmetric on \(S_N\). Otherwise, the convex combination \(0.5\xi (\mathbf{w})+ 0.5\xi (\tilde{\mathbf{w}})\) is symmetric on \(S_N\). Under the two conditions in Theorem 4, it is easy to verify that \({\tilde{\phi }}(\mathbf{w}, \mathbf{w}^{(j-1)}) = {\tilde{\phi }}(\tilde{\mathbf{w}}, \tilde{\mathbf{w}}^{(j-1)}) \) when \(\xi (\mathbf{w}^{(0)})\) is symmetric. Note that the choice of \(\mathbf{w}^{(0)}\) in the algorithm always makes \(\xi (\mathbf{w}^{(0)})\) symmetric. Since \({\tilde{\phi }}(\mathbf{w}, \mathbf{w}^{(j-1)}) \) is a convex function of \(\mathbf{w}\), the solution \(\mathbf{w}^{(j)}\) makes \(\xi (\mathbf{w}^{(j)})\) symmetric on \(S_N\). Since \(\xi (\lim _{j \rightarrow \infty } \mathbf{w}^{(j)}) = \xi (\mathbf{w}^{*})\), \(\xi (\mathbf{w}^{*})\) is also symmetric on \(S_N\). \(\square \)
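The reflection \(\mathbf{w} \mapsto \tilde{\mathbf{w}}\) and the symmetrizing convex combination can be sketched on a symmetric one-dimensional grid (the grid and weights below are illustrative, not from the paper):

```python
# Symmetric design space S_N (illustrative grid, symmetric about 0).
S = [-2, -1, 0, 1, 2]
w = [0.4, 0.1, 0.2, 0.05, 0.25]   # an arbitrary design; weights sum to 1

# xi(w~) puts the weight of point u_i on -u_i; on this grid that reverses w.
w_tilde = w[::-1]

# The convex combination 0.5 xi(w) + 0.5 xi(w~) is symmetric on S_N.
w_sym = [0.5 * p + 0.5 * q for p, q in zip(w, w_tilde)]
assert w_sym == w_sym[::-1]           # same weight at u and -u
assert abs(sum(w_sym) - 1) < 1e-12   # still a weight vector
```

In the proof this symmetrization is only needed in the "otherwise" branch; when \(\xi (\mathbf{w})=\xi (\tilde{\mathbf{w}})\) the design is already symmetric.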
Cite this article
Yzenbrandt, K., Zhou, J. Minimax robust designs for regression models with heteroscedastic errors. Metrika 85, 203–222 (2022). https://doi.org/10.1007/s00184-021-00827-0
Keywords
- Robust regression design
- Minimax design
- D-optimality
- Non-convex optimization
- Generalized least squares estimator