
Minimax Linear Estimation with the Probability Criterion under Unimodal Noise and Bounded Parameters


Abstract

We consider a linear regression model with a vector of bounded parameters and a centered noise vector that has an uncertain unimodal distribution but a known covariance matrix. We pose the minimax estimation problem for a linear combination of the unknown parameters using the probability criterion. The minimax estimate is determined by minimizing a probability bound over the region of possible values of the variance and squared bias across all linear estimates. We show that the resulting robust solution is less conservative than those obtained for wider classes of distributions.


References

  1. Bahadur, R. R., On the Asymptotic Efficiency of Tests and Estimates, Sankhya: Indian J. Statist., 1960, vol. 22, no. 3–4, pp. 229–252.

  2. Ibragimov, I. A. & Khas’minskii, R. Z., Asimptoticheskaya teoriya otsenivaniya (Asymptotic Estimation Theory), Moscow: Nauka, 1977.

  3. Vapnik, V. N. & Chervonenkis, A. Ya., Teoriya raspoznavaniya obrazov (statisticheskie problemy obucheniya) (Pattern Recognition Theory: Statistical Learning Problems), Moscow: Nauka, 1974.

  4. Bakhshiyan, B. Ts., Nazirov, R. R., & El’yasberg, P. E., Opredelenie i korrektsiya dvizheniya (Identification and Correction of Motion), Moscow: Nauka, 1980.

  5. Timofeeva, G. A., Nonlinear Confidence Estimates for Statistically Uncertain Systems, Autom. Remote Control, 2003, vol. 64, no. 11, pp. 1724–1733.

  6. Medvedeva, N. V. & Timofeeva, G. A., Comparison of Linear and Nonlinear Methods of Confidence Estimation for Statistically Uncertain Systems, Autom. Remote Control, 2007, vol. 68, no. 4, pp. 619–627.

  7. Anan’ev, B. I., Multistep Specific Stochastic Inclusions and Their Multiestimates, Autom. Remote Control, 2007, vol. 68, no. 11, pp. 1891–1899.

  8. Pankov, A. R. & Semenikhin, K. V., Minimax Estimation by Probabilistic Criterion, Autom. Remote Control, 2007, vol. 68, no. 3, pp. 430–445.

  9. Delage, E. & Ye, Y., Distributionally Robust Optimization under Moment Uncertainty with Application to Data-Driven Problems, Oper. Res., 2010, vol. 58, pp. 595–612.

  10. Kogan, M. M., Robust Estimation and Filtering in Uncertain Linear Systems under Unknown Covariations, Autom. Remote Control, 2015, vol. 76, no. 10, pp. 1751–1764.

  11. Karlin, S. & Studden, W. J., Tchebycheff Systems with Applications in Analysis and Statistics, New York: Interscience, 1966. Translated under the title Chebyshevskie sistemy i ikh primenenie v analize i statistike, Moscow: Nauka, 1976.

  12. Dharmadhikari, S. & Joag-dev, K., Unimodality, Convexity, and Applications, San Diego: Academic, 1988.

  13. Barmish, B. R. & Lagoa, C. M., The Uniform Distribution: A Rigorous Justification for Its Use in Robustness Analysis, Math. Control Signal. Syst., 1997, vol. 10, pp. 203–222.

  14. Kibzun, A. I., On the Worst-Case Distribution in Stochastic Optimization Problems with Probability Function, Autom. Remote Control, 1998, vol. 59, no. 11, pp. 1587–1597.

  15. Kan, Yu. S., On the Justification of the Uniformity Principle in the Optimization of a Probability Performance Index, Autom. Remote Control, 2000, vol. 61, no. 1, pp. 50–64.

  16. Van Parys, B. P. G., Goulart, P. J., & Kuhn, D., Generalized Gauss Inequalities via Semidefinite Programming, Math. Program., 2016, vol. 156, pp. 271–302.

  17. Granichin, O. N., The Nonasymptotic Confidence Set for Parameters of a Linear Control Object under an Arbitrary External Disturbance, Autom. Remote Control, 2012, vol. 73, no. 1, pp. 20–30.

  18. Weyer, E., Campi, M. C., & Csaji, B. C., Asymptotic Properties of SPS Confidence Regions, Automatica, 2017, vol. 82, pp. 287–294.

  19. Semenikhin, K. V., Two-Sided Probability Bound for a Symmetric Unimodal Random Variable, Autom. Remote Control, 2019, vol. 80, no. 3, pp. 474–489.

  20. Vysochanskii, D. F. & Petunin, Yu. I., On a Gauss Inequality for Unimodal Distributions, Theory Probab. Appl., 1983, vol. 27, no. 2, pp. 359–361.

  21. Pukelsheim, F., The Three Sigma Rule, Am. Statist., 1994, vol. 48, pp. 88–91.

  22. Solov’ev, V. N., Dual Extremal Problems and Their Applications to Minimax Estimation Problems, Russ. Math. Surv., 1997, vol. 52, no. 4, pp. 685–720.

  23. Matasov, A. I., Estimators for Uncertain Dynamic Systems, Dordrecht: Kluwer, 1998.

  24. Grant, M. C. & Boyd, S. P., The CVX Users’ Guide, Release 2.1, CVX Research, Inc., 2018 [Online]. http://cvxr.com/cvx.

  25. Akimov, P. A. & Matasov, A. I., An Iterative Algorithm for 1-Norm Approximation in Dynamic Estimation Problems, Autom. Remote Control, 2015, vol. 76, no. 5, pp. 733–748.

  26. Arkhipov, A. S. & Semenikhin, K. V., Confidence Analysis of Linear Unbiased Estimates under Uncertain Unimodal Noise Distributions, J. Comput. Syst. Sci. Int., 2019, vol. 58, no. 5, pp. 674–683.

  27. Rockafellar, R. T., Convex Analysis, Princeton: Princeton Univ. Press, 1970. Translated under the title Vypuklyi analiz, Moscow: Mir, 1973.


Appendix

Proof of Theorem 1. Consider the linear estimate \(\tilde{X}\) defined by the coefficient vector \(f\in {{\mathbb{R}}}^{n}\) and the bias \(c\in {\mathbb{R}}\) according to (5).

If the estimated value X and the observation vector Y satisfy the equations of the regression model (1) with a parameter vector θ ∈ Θ and a noise vector \(\eta \sim {\mathcal{H}}(K)\), then the error \(\tilde{X}-X\) admits the representation ε + r, where \(\varepsilon \sim {\mathcal{H}}(d)\) and d, r satisfy relations (9). Therefore, relation (11) holds with the inequality sign “⩽.”

To prove the opposite inequality, it suffices, for a given parameter vector θ ∈ Θ and a random variable \(\varepsilon \sim {\mathcal{H}}(d)\), where d and r have the form (9), to choose a random vector \(\eta \sim {\mathcal{H}}(K)\) satisfying the equality \(\varepsilon +r=\tilde{X}-X\) with probability 1. By virtue of (1) and (9), the required equality is equivalent to the following:

$$\varepsilon =\langle f,\eta \rangle .$$

Acting in the same way as in [26], we define the desired vector by the rule

$$\eta = K^{1/2}\left(\varepsilon\,|g|^{-2} g + P\zeta\right),$$

where \(P = I_n - |g|^{-2} g g^{*}\), \(g = K^{1/2} f\), \(I_n\) is the identity matrix of size n × n, and ζ is a standard n-dimensional Gaussian random vector independent of ε. Verifying the conditions ε = ⟨f, η⟩, Mη = 0, and cov{η, η} = K repeats the calculations from [26].
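As an illustration, the following minimal Monte Carlo sketch reproduces this construction on invented data (the dimensions, K, and f below are hypothetical, and a uniform law is used as one representative of a symmetric unimodal distribution with variance d = ⟨Kf, f⟩); it checks numerically that Mη = 0 and cov{η, η} = K, and that ⟨f, η⟩ = ε holds exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 200_000                           # dimension and sample size (illustrative)

# Hypothetical problem data: a positive definite covariance K and a coefficient vector f.
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)
f = rng.standard_normal(n)

# Symmetric square root K^{1/2} and the quantities used in the proof.
w, V = np.linalg.eigh(K)
K_half = V @ np.diag(np.sqrt(w)) @ V.T
g = K_half @ f
P = np.eye(n) - np.outer(g, g) / (g @ g)    # orthogonal projector annihilating g
d = f @ K @ f                               # variance of eps; note d = |g|^2

# eps: symmetric unimodal with variance d (uniform on [-sqrt(3d), sqrt(3d)]);
# zeta: standard n-dimensional Gaussian, independent of eps.
eps = rng.uniform(-np.sqrt(3 * d), np.sqrt(3 * d), size=N)
zeta = rng.standard_normal((N, n))

# eta = K^{1/2} (eps |g|^{-2} g + P zeta), one row per Monte Carlo replication.
eta = (np.outer(eps / (g @ g), g) + zeta @ P.T) @ K_half.T

print(np.abs(eta.mean(axis=0)).max())       # ~ 0:  M eta = 0
print(np.abs(np.cov(eta.T) - K).max())      # ~ 0:  cov{eta, eta} = K
print(np.abs(eta @ f - eps).max())          # ~ 0 (machine precision): <f, eta> = eps
```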

In the case \({\mathcal{H}}={\mathcal{P}}\), the proof is complete.

For \({\mathcal{H}}={\mathcal{U}}\), it remains to verify that the distribution of the vector η is symmetric and linearly unimodal. According to [12], this means that for any coefficient vector \(b\in {{\mathbb{R}}}^{n}\) the distribution of the linear combination

$$\langle b,\eta \rangle = \varepsilon\,|g|^{-2}\langle b, K^{1/2} g\rangle + \langle b, K^{1/2} P\zeta\rangle$$

is symmetric and unimodal. This follows from the fact that the convolution of two symmetric unimodal one-dimensional distributions is itself symmetric and unimodal (see Theorem 1.6 in [12]): both terms have such distributions, since \(\varepsilon \sim {\mathcal{U}}(d)\) and \(\zeta \sim {\mathcal{N}}(0,{I}_{n})\).
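Indeed, the first term is a deterministic multiple of ε and the second is a centered Gaussian variable; spelling this out with the symmetry of \(K^{1/2}\) and P (a supplementary restatement in the class notation of this excerpt),

$$\varepsilon\,|g|^{-2}\langle b, K^{1/2} g\rangle \sim {\mathcal{U}}\left(d\,|g|^{-4}{\langle b, K^{1/2} g\rangle}^{2}\right),\qquad \langle b, K^{1/2} P\zeta\rangle \sim {\mathcal{N}}\left(0,\,|P K^{1/2} b|^{2}\right).$$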

Proof of Theorem 2. The convexity of the region Q follows directly from the convexity in \((f,c)\in {{\mathbb{R}}}^{n}\times {\mathbb{R}}\) of the two functions

$$\langle Kf,f\rangle \quad\text{and}\quad \mathop{\max }\limits_{\theta \in \Theta }{\left(c+\langle {B}^{* }f-a,\theta \rangle \right)}^{2}.$$

Therefore, the function ρ(d), being the lower boundary of the convex set Q, is convex as well (see Theorem I.5.3 in [27]); since it is finite everywhere, it is also continuous.

The second statement follows from the definition of a support line of a convex set. For a fixed λ ⩾ 0, the straight line \(\rho +\lambda d={\gamma }_{\lambda }\) is a support line of the set Q at the point \(({d}_{\lambda },{\rho }_{\lambda })\) if the linear form ρ + λd attains a minimum (or maximum) over Q at this point. The case of a maximum can be discarded, since this linear form is unbounded from above on the region Q. Thus, we obtain the required relations (19) and (20).
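For illustration, here is a minimal numerical sketch of this λ-sweep on an invented instance. Problem (19) itself is not reproduced in this excerpt, so, following the statement above, it is taken to be the minimization of the linear form ρ + λd over all (f, c); the set Θ is taken to be the ball {θ : ∣θ∣ ⩽ R}, for which \(\mathop{\max }\limits_{\theta \in \Theta }{(c+\langle {B}^{* }f-a,\theta \rangle )}^{2}={(|c|+R\,|{B}^{* }f-a|)}^{2}\). All data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, m, R = 3, 2, 1.0                  # invented dimensions; Theta = {theta : |theta| <= R}

A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # noise covariance
B = rng.standard_normal((n, m))      # regression matrix of model (1) (hypothetical)
a = rng.standard_normal(m)           # coefficients of the estimated linear combination

def d_of(f):                         # variance term <Kf, f>
    return f @ K @ f

def rho_of(f, c):                    # squared-bias term: max over the ball Theta
    return (abs(c) + R * np.linalg.norm(B.T @ f - a)) ** 2

for lam in (0.1, 0.5, 1.0, 5.0):
    # minimize rho + lambda * d over (f, c); the objective is convex but nonsmooth,
    # so a derivative-free method suffices for this small example
    obj = lambda z, lam=lam: rho_of(z[:n], z[n]) + lam * d_of(z[:n])
    z = minimize(obj, np.zeros(n + 1), method="Powell").x
    print(f"lambda = {lam:3.1f}:  d = {d_of(z[:n]):.4f},  rho = {rho_of(z[:n], z[n]):.4f}")
```

As λ grows, the variance term is penalized more heavily, so the computed points (d_λ, ρ_λ) move along the lower boundary of Q toward smaller d and larger ρ.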

Proof of Theorem 3. By Theorem 1 and the monotonic dependence of \({\pi }_{h}^{{\mathcal{H}}}(d,\rho )\) on ρ, we obtain

$$\mathop{\sup }\limits_{\theta \in \Theta }\mathop{\sup }\limits_{\eta \sim {\mathcal{H}}(K)}{\mathtt{P}}\left\{| \tilde{X}-X| \geqslant h\right\}={\pi }_{h}^{{\mathcal{H}}}(d,\rho ),$$
(A.1)

where d = ⟨Kf, f⟩ and \(\rho =\mathop{\sup }\limits_{\theta \in \Theta }{\left(c+\langle {B}^{* }f-a,\theta \rangle \right)}^{2}\).

Let \((\hat{d},\hat{\rho })\) be the minimum point of \({\pi }_{h}^{{\mathcal{H}}}(d,\rho )\) over (d, ρ) ∈ Q, and let \((\hat{f},\hat{c})\) be the solution to the minimax problem (19) with the parameter λ equal to the slope of the support line of the set Q at the point \((\hat{d},\hat{\rho })\). Then, according to Theorem 2, it holds that

$$\hat{d}=\langle K\hat{f},\hat{f}\rangle ,\qquad \hat{\rho }=\mathop{\max }\limits_{\theta \in \Theta }{\left(\hat{c}+\langle {B}^{* }\hat{f}-a,\theta \rangle \right)}^{2}.$$

Therefore, the estimate \(\hat{X}=\langle \hat{f},Y\rangle +\hat{c}\) attains the equality

$$\mathop{\sup }\limits_{\theta \in \Theta }\mathop{\sup }\limits_{\eta \sim {\mathcal{H}}(K)}{\mathtt{P}}\left\{| \hat{X}-X| \geqslant h\right\}={\pi }_{h}^{{\mathcal{H}}}(\hat{d},\hat{\rho }).$$
(A.2)

Now, since \({\pi }_{h}^{{\mathcal{H}}}(\hat{d},\hat{\rho })\leqslant {\pi }_{h}^{{\mathcal{H}}}(d,\rho )\) for all (d, ρ) ∈ Q, the left-hand side of (A.2) does not exceed the left-hand side of (A.1) for any linear estimate \(\tilde{X}\). Therefore, \(\hat{X}\) is a minimax estimate by the probability criterion on the class \({\mathcal{H}}(K)\), as required.
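To indicate how the boundary points enter the final minimization, here is a toy sketch. The bound \({\pi }_{h}^{{\mathcal{H}}}(d,\rho )\) used in the paper is defined by formulas not included in this excerpt, so two classical stand-ins are substituted: for the class \({\mathcal{P}}\), the Chebyshev-type bound \({\mathtt{P}}\{|\varepsilon +r|\geqslant h\}\leqslant d/{(h-|r|)}^{2}\) with ∣r∣ = √ρ < h, and for symmetric unimodal ε the Vysochanskii–Petunin-type improvement \(4d/(9{(h-|r|)}^{2})\), valid when \({(h-|r|)}^{2}\geqslant 8d/3\) [20]. The comparison illustrates the abstract’s claim that the unimodal class yields a less conservative minimax value.

```python
import numpy as np

h = 2.0                                   # threshold in P{|error| >= h} (invented)

def pi_P(d, rho):
    """Chebyshev-type stand-in bound for the moment class P (not the paper's exact bound)."""
    r = np.sqrt(rho)
    return min(1.0, d / (h - r) ** 2) if r < h else 1.0

def pi_U(d, rho):
    """Vysochanskii-Petunin-type stand-in bound for symmetric unimodal noise."""
    r = np.sqrt(rho)
    if r >= h:
        return 1.0
    t2 = (h - r) ** 2
    return 4 * d / (9 * t2) if t2 >= 8 * d / 3 else min(1.0, d / t2)

# Invented boundary points (d_lambda, rho_lambda) of Q, e.g., from the sweep above.
boundary = [(0.20, 0.90), (0.35, 0.55), (0.60, 0.30), (1.00, 0.12)]

print("minimax bound over P:", min(pi_P(d, r) for d, r in boundary))  # ~ 0.181
print("minimax bound over U:", min(pi_U(d, r) for d, r in boundary))  # ~ 0.080
```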

Cite this article

Arkhipov, A., Semenikhin, K. Minimax Linear Estimation with the Probability Criterion under Unimodal Noise and Bounded Parameters. Autom Remote Control 81, 1176–1191 (2020). https://doi.org/10.1134/S0005117920070024
