
Variance-Based Subgradient Extragradient Method for Stochastic Variational Inequality Problems


Abstract

In this paper, we propose a variance-based subgradient extragradient algorithm with line search for stochastic variational inequality problems, aiming at robustness with respect to an unknown Lipschitz constant. The algorithm may be regarded as an integration of the subgradient extragradient algorithm for deterministic variational inequality problems and the stochastic approximation method for expected values. Unlike conventional variance-based extragradient algorithms, which project onto the feasible set twice at each iteration, our algorithm performs a subgradient projection that can be calculated explicitly. Since our algorithm requires only one projection onto the feasible set per iteration, the computational load may be reduced. We discuss the asymptotic convergence, the sublinear convergence rate in terms of the mean natural residual function, and the optimal oracle complexity of the proposed algorithm. Furthermore, we establish a linear convergence rate with finite computational budget under both the strong Minty variational inequality condition and the error bound condition. Preliminary numerical experiments indicate that the proposed algorithm is competitive with some existing methods.
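As a rough illustration of how these pieces fit together, the following is a minimal, hypothetical sketch of one iteration in the spirit of the method just described: a sample-average oracle, a backtracking line search to cope with the unknown Lipschitz constant, a single projection onto the feasible set \(X\), and an explicit halfspace (subgradient) projection in place of the second projection. The helper names (F_hat, proj_X), the Armijo-type acceptance test, and all parameter values are our own assumptions for illustration, not the paper's Algorithm 1.

```python
import numpy as np

def halfspace_proj(z, a, c):
    """Explicit projection of z onto the halfspace {w : <a, w> <= c}."""
    sq = a @ a
    if sq == 0.0:
        return z
    return z - (max(a @ z - c, 0.0) / sq) * a

def subgrad_extragrad_step(x, proj_X, F_hat, xi, eta,
                           alpha_hat=1.0, lam=0.4, theta=0.5):
    """One sketched variance-based subgradient extragradient step.

    proj_X -- Euclidean projection onto X (called once per iteration)
    F_hat  -- sample-average estimate of F, e.g. mean of f(sample_j, x)
    xi/eta -- independent sample batches for the two operator evaluations
    """
    alpha, Fx = alpha_hat, F_hat(x, xi)
    y = proj_X(x - alpha * Fx)
    # Backtrack until alpha * ||F_hat(y) - F_hat(x)|| <= lam * ||y - x||,
    # an assumed line-search test that avoids knowing the Lipschitz constant.
    while alpha * np.linalg.norm(F_hat(y, xi) - Fx) > lam * np.linalg.norm(y - x):
        alpha *= theta
        y = proj_X(x - alpha * Fx)
    # Subgradient projection: project onto the halfspace
    # T = {w : <x - alpha*F_hat(x) - y, w - y> <= 0}, which contains X,
    # instead of performing a second projection onto X itself.
    a = x - alpha * Fx - y
    return halfspace_proj(x - alpha * F_hat(y, eta), a, a @ y)
```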


Data Availability Statement

The datasets generated during the current study are available from the corresponding author on reasonable request.



Author information


Correspondence to Jin Zhang or Gui-Hua Lin.

Additional information


This work was supported in part by NSFC (Nos. 12071280, 11971220), Shenzhen Science and Technology Program (No. RCYX20200714114700072), Stable Support Plan Program of Shenzhen Natural Science Fund (No. 20200925152128002), Guangdong Basic and Applied Basic Research Foundation (No. 2019A1515011152), the Young Talents in Higher Education of Guangdong (No. 2020KQNCX079), and the Open Project of Key Laboratory of School of Mathematical Sciences of Chongqing Normal University (No. CSSXKFKTZ202001).

Appendices

Appendix A: Proof of Lemma 4.3

First of all, applying Lemma 2.6 with \(p=q\), using that \(x^k\in \mathcal{F}_k\) and that \(\xi ^k\) is independent of \(\mathcal{F}_k\), we have

$$\begin{aligned} \big |\Vert \widehat{\epsilon }_1^k\Vert \big |\mathcal{F}_k\big |_p\le C_p\frac{\sigma _{p}(x^*)+L_p\Vert x^k-x^*\Vert }{\sqrt{N_k}}. \end{aligned}$$
(A.1)

Applying Lemma 2.6 again with \(p=q\), using that \(y^k\in \widehat{\mathcal{F}}_k\), that \(\eta ^k\) is independent of \(\widehat{\mathcal{F}}_k\), and the tower property \(\Big |\big |\cdot \big |\widehat{\mathcal{F}}_k\big |_p\Big |\mathcal{F}_k\Big |_p=\big |\cdot \big |\mathcal{F}_k\big |_p\), we have

$$\begin{aligned} \big |\Vert \widehat{\epsilon }_2^k\Vert \big |\mathcal{F}_k\big |_p=\Big |\big |\Vert \widehat{\epsilon }_2^k\Vert \big |\widehat{\mathcal{F}}_k\big |_p\big |\mathcal{F}_k\Big |_p\le C_p\frac{\sigma _{p}(x^*)+L_p\big |\Vert y^k-x^*\Vert \big |\mathcal{F}_k\big |_p}{\sqrt{N_k}}. \end{aligned}$$
(A.2)
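The norm identity invoked above is just the tower property of conditional \(\mathcal{L}^p\) norms. Writing \(\big |X\big |\mathcal{F}\big |_p:=\mathbb {E}[|X|^p\,|\,\mathcal{F}]^{1/p}\) (our reading of the notation) and using \(\mathcal{F}_k\subseteq \widehat{\mathcal{F}}_k\),

$$\begin{aligned} \Big |\big |X\big |\widehat{\mathcal{F}}_k\big |_p\Big |\mathcal{F}_k\Big |_p^p=\mathbb {E}\Big [\mathbb {E}\big [|X|^p\,\big |\,\widehat{\mathcal{F}}_k\big ]\,\Big |\,\mathcal{F}_k\Big ]=\mathbb {E}\big [|X|^p\,\big |\,\mathcal{F}_k\big ]=\big |X\big |\mathcal{F}_k\big |_p^p. \end{aligned}$$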

From Lemma 2.7 and Assumption 4.2, noting that \(0<\alpha _k\le \widehat{\alpha }\le 1\), \(y^k=y(x^k,\alpha _k,\xi ^k)\), \(x^k\in \mathcal{F}_k\), and that \(\xi ^k\) is independent of \(\mathcal{F}_k\), we have

$$\begin{aligned} \big |\Vert \widehat{\epsilon }_3^k\Vert \big |\mathcal{F}_k\big |_p\le \frac{c_1\sigma _{2p}(x^*)+\widehat{L}_{2p}\Vert x^k-x^*\Vert }{\sqrt{N_k}}.\end{aligned}$$
(A.3)

On the other hand, combining Step 3 of Algorithm 1 with Lemmas 2.1 and 2.3, we obtain

$$\begin{aligned} \Vert x^*-y^k\Vert&=\Vert \Pi _X[x^*-\alpha _kF(x^*)]-\Pi _X[x^k-\alpha _k\widehat{F}(x^k,\xi ^k)]\Vert \\&\le \Vert x^*-\alpha _kF(x^*)-x^k+\alpha _k\widehat{F}(x^k,\xi ^k)\Vert \\&\le (1+\widehat{\alpha } L)\Vert x^k-x^*\Vert +\widehat{\alpha }\Vert \widehat{\epsilon }_1^k\Vert . \end{aligned}$$

Taking \(|\cdot |\mathcal{F}_k|_p\) in the above inequality, we have

$$\begin{aligned} \big |\Vert x^*-y^k\Vert \big |\mathcal{F}_k\big |_p\le (1+\widehat{\alpha } L)\Vert x^k-x^*\Vert +\widehat{\alpha }\big |\Vert \widehat{\epsilon }_1^k\Vert \big |\mathcal{F}_k\big |_p.\end{aligned}$$
(A.4)
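For example, one of the steps compressed into the "putting together" below is squaring (A.4) via \((a+b)^2\le 2a^2+2b^2\) and the identity \(\big |a^2\big |\mathcal{F}_k\big |_{\frac{p}{2}}=\big |a\big |\mathcal{F}_k\big |_p^2\):

$$\begin{aligned} \big |\Vert x^*-y^k\Vert ^2\big |\mathcal{F}_k\big |_{\frac{p}{2}}=\big |\Vert x^*-y^k\Vert \big |\mathcal{F}_k\big |_p^2\le 2(1+\widehat{\alpha } L)^2\Vert x^k-x^*\Vert ^2+2\widehat{\alpha }^2\big |\Vert \widehat{\epsilon }_1^k\Vert \big |\mathcal{F}_k\big |_p^2, \end{aligned}$$

after which the last term is controlled by (A.1).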

Putting together relations (A.1)–(A.4) and using the facts that \(\big |a^2\big |\mathcal{F}_k\big |_{\frac{p}{2}}=\big |a\big |\mathcal{F}_k\big |_p^2\), \((a+b)^2\le 2a^2+2b^2\), \(\widehat{L}_{2p}>L_pC_p\), \(c_1>C_p\), and \(\sigma _{2p}(x^*)\ge \sigma _p(x^*)\), we prove the claim with \(\mathcal{C}_p:=2c_1^2\left( 7-3\lambda ^2+\sup _k\frac{12L_p^2\widehat{\alpha }^2C_p^2}{N_k}\right) \) and \(\overline{\mathcal{C}}_p:=2\left( 4-3\lambda ^2+6(1+\widehat{\alpha } L)^2+\sup _k\frac{12L_p^2\widehat{\alpha }^2C_p^2}{N_k}\right) \). \(\square \)

Appendix B: Proof of Lemma 4.4

Since \(y^k\in \widehat{\mathcal{F}}_k\) and \(\eta ^k\) is independent of \(\widehat{\mathcal{F}}_k\), we have

$$\begin{aligned}\mathbb {E}[\widehat{\epsilon }^k_2\,|\,\widehat{\mathcal{F}}_k]&=\mathbb {E}\Big [\frac{1}{N_k}\sum _{j=1}^{N_k}f(\eta _j^k,y^k)-F(y^k)\,\Big |\,\widehat{\mathcal{F}}_k\Big ]\\&=\frac{1}{N_k}\sum _{j=1}^{N_k}\mathbb {E}\big [f(\eta _j^k,y^k)\,\big |\,\widehat{\mathcal{F}}_k\big ]-F(y^k)\\&=\frac{1}{N_k}\sum _{j=1}^{N_k}F(y^k)-F(y^k)=0. \end{aligned}$$

Taking \(\mathbb {E}[\cdot |\mathcal {F}_k]\) in (4.1), we obtain the conclusion immediately from Lemmas 4.1 and 4.3. \(\square \)
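As a quick numerical sanity check of this unbiasedness and of the \(1/\sqrt{N_k}\) scale of the oracle error in (A.1)–(A.2), consider the toy model \(f(\eta ,y)=(1+\eta )y\) with \(\eta \sim N(0,1)\), so that \(F(y)=y\); the model and all values below are our own illustrative choices, not from the paper.

```python
import numpy as np

# Toy model: f(eta, y) = (1 + eta) * y, eta ~ N(0, 1), so F(y) = y and the
# oracle error eps_hat = (1/N) * sum_j f(eta_j, y) - F(y) = mean(eta) * y.
# Its conditional mean is 0 and its L2 norm is ||y|| / sqrt(N).
rng = np.random.default_rng(0)
y_norm = np.linalg.norm([1.0, -2.0, 0.5])

for N in (10, 100, 1000):
    eta_bar = rng.standard_normal((2000, N)).mean(axis=1)  # 2000 replications
    err = np.abs(eta_bar) * y_norm                         # ||eps_hat|| per run
    print(f"N={N:>4}  mean ||eps_hat|| = {err.mean():.4f}  "
          f"sqrt(N) * mean = {np.sqrt(N) * err.mean():.3f}")
```

The rescaled column stays roughly constant across \(N\), matching the \(\mathcal{O}(1/\sqrt{N_k})\) bounds.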

Appendix C: Proof of Theorem 4.1

By Lemma 4.4, we have

$$\begin{aligned} \mathbb {E}[\Vert x^{k+1}-x^*\Vert ^2\big |\mathcal{F}_k]\le \left( 1+\frac{\overline{\mathcal{C}}_2\widehat{\alpha }^2\widehat{L}^2_4}{N_k}\right) \Vert x^k-x^*\Vert ^2-\rho \mathfrak {R}^2(F,x^k)+\frac{\mathcal{C}_2\widehat{\alpha }^2\sigma _4^2(x^*)}{N_k}. \end{aligned}$$

Taking into account that \(\sum _kN_k^{-1}<\infty \) and applying Lemma 2.4 with \(v^k:=\Vert x^k-x^*\Vert ^2,\ a_k:=\frac{\overline{\mathcal{C}}_2\widehat{\alpha }^2\widehat{L}^2_4}{N_k},\ b_k:=\frac{\mathcal{C}_2\widehat{\alpha }^2\sigma _4^2(x^*)}{N_k}\), and \(u_k:=\frac{(1-3\lambda ^2)(\min \{\lambda \theta ,\widehat{\alpha }\})^2}{2|\mathrm {L}(\xi )|^2_2}\mathfrak {R}^2(F,x^k)\), we have that, almost surely, \(\{\Vert x^k-x^*\Vert ^2\}\) converges and \(\sum _{k=0}^{\infty }\mathfrak {R}^2(F,x^k)<\infty \). In particular, almost surely, \(\{x^k\}\) is bounded and \(0=\lim _{k\rightarrow \infty }\mathfrak {R}^2(F,x^k)=\lim _{k\rightarrow \infty }\Vert x^k-\Pi _X[x^k-F(x^k)]\Vert ^2\). This fact and the continuity of \(F\) and \(\Pi _X\) imply that, almost surely, every cluster point \(\bar{x}\) of \(\{x^k\}\) satisfies \(\Vert \bar{x}-\Pi _X[\bar{x}-F(\bar{x})]\Vert ^2=0\). We then conclude from Lemma 2.1 that \(\bar{x}\in X^*\). This, together with the boundedness of \(\{x^k\}\) and the fact that every cluster point of \(\{x^k\}\) belongs to \(X^*\), indicates that \(\lim _{k\rightarrow \infty }\mathrm{d}(x^k,X^*)=0\) almost surely. In a similar way, we can deduce that \(\lim _{k\rightarrow \infty }\mathbb {E}[\mathfrak {R}^2(F,x^k)]=0\) by taking expectation in (4.11). \(\square \)
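For the reader's convenience, Lemma 2.4 is presumably the Robbins–Siegmund almost-supermartingale lemma, which in the form used here reads: if \(v^k\), \(u_k\), \(a_k\), \(b_k\) are nonnegative, \(\mathcal{F}_k\)-adapted, and satisfy

$$\begin{aligned} \mathbb {E}[v^{k+1}\,|\,\mathcal{F}_k]\le (1+a_k)v^k-u_k+b_k,\qquad \sum _{k}a_k<\infty ,\qquad \sum _{k}b_k<\infty \quad \text {a.s.}, \end{aligned}$$

then, almost surely, \(\{v^k\}\) converges and \(\sum _{k}u_k<\infty \).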

Appendix D: Proof of Lemma 4.5

First of all, it follows from Assumption 4.2 that there must exist \(k_0\) satisfying (4.12). In what follows, we let \(d_i:=\Vert x^i-x^*\Vert \) for \(i\in \mathbb {N}_0\) and let \(k\ge k_0\) be given. Taking expectation in (4.11), making use of \(\mathbb {E}[\mathbb {E}[\cdot \big |\widehat{\mathcal{F}}_i]]=\mathbb {E}[\cdot ]\), dropping the negative term on the right-hand side, and then summing the resulting inequality recursively from \(i:=k_0\) to \(i:=k-1\), we obtain

$$\begin{aligned} |d_k|^2_2\le |d_{k_0}|^2_2+\overline{\mathcal{C}}_2\widehat{\alpha }^2\widehat{L}^2_4\sum _{i=k_0}^{k-1}\frac{|d_i|^2_2}{N_i}+\mathcal{C}_2\widehat{\alpha }^2\sigma _4^2(x^*)\sum _{i=k_0}^{k-1}\frac{1}{N_i}.\end{aligned}$$
(A.5)

For any \(\varpi >0\), we define the stopping time \(t_{\varpi }:=\inf \{k\ge k_0:|d_k|_2>\varpi \}\). From (4.12) and (A.5), we have that, for any \(\varpi >0\) such that \(t_{\varpi }<\infty \),

$$\begin{aligned}\varpi ^2&<|d_{t_{\varpi }}|_2^2\le |d_{k_0}|^2_2+\overline{\mathcal{C}}_2\widehat{\alpha }^2\widehat{L}^2_4\sum _{i=k_0}^{t_{\varpi }-1}\frac{|d_i|^2_2}{N_i}+\mathcal{C}_2\widehat{\alpha }^2\sigma _4^2(x^*)\sum _{i=k_0}^{t_{\varpi }-1}\frac{1}{N_i}\\&<|d_{k_0}|^2_2+\phi \varpi ^2+\frac{\phi \mathcal{C}_2\sigma _4^2(x^*)}{\overline{\mathcal{C}}_2\widehat{L}_4^2}.\end{aligned}$$
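Since \(\phi \in (0,1)\) (presumably what (4.12) guarantees through the choice of \(k_0\)), rearranging the chain above makes the threshold bound explicit:

$$\begin{aligned} (1-\phi )\varpi ^2<|d_{k_0}|^2_2+\frac{\phi \mathcal{C}_2\sigma _4^2(x^*)}{\overline{\mathcal{C}}_2\widehat{L}_4^2},\qquad \text {that is,}\qquad \varpi ^2<\frac{|d_{k_0}|^2_2+\frac{\phi \mathcal{C}_2\sigma _4^2(x^*)}{\overline{\mathcal{C}}_2\widehat{L}_4^2}}{1-\phi }. \end{aligned}$$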

By the definition of \(t_{\varpi }\), the argument above shows that any threshold \(\varpi ^2\) that \(\{\big |d_k\big |^2_2\}_{k\ge k_0}\) eventually exceeds is bounded above by this quantity. As a result, \(\{|d_k|^2_2\}_{k\ge k_0}\) is bounded and satisfies the statement of the lemma. \(\square \)

Appendix E: Proof of Theorem 4.2

Since \(\{N_k\}\) satisfies Assumption 4.2, the conclusions of Theorem 4.1 and Lemma 4.5 hold. In particular, \(\{x^k\}\) is bounded in \(\mathcal{L}^2\); let \(M\) be such that \(\sup _{k\ge 0}\big |\Vert x^k-x^*\Vert \big |_2^2\le M\), so that \(\sup _k\mathbb {E}[\Vert x^k-x^*\Vert ^2]\le M\). Taking expectation in the recursion of Lemma 4.2, using \(\mathbb {E}[\mathbb {E}[\cdot \big |\widehat{\mathcal{F}}_i]]=\mathbb {E}[\cdot ]\), and summing recursively from \(i:=0\) to \(i:=k\), we have

$$\begin{aligned}&\frac{(1-3\lambda ^2)(\min \{\lambda \theta ,\widehat{\alpha }\})^2 }{2|\mathrm {L}(\xi )|^2_2}\sum _{i=0}^k\mathbb {E}[\mathfrak {R}(F,x^i)^2]\\&\qquad \le \Vert x^0-x^*\Vert ^2+\left( \mathcal{C}_2\widehat{\alpha }^2\sigma _4(x^*)^2+\overline{\mathcal{C}}_2\widehat{\alpha }^2\widehat{L}_4^2M\right) \sum _{i=0}^k\frac{1}{N_i}.\end{aligned}$$

Combining the above inequality with the bound

$$\begin{aligned}\sum _{i=0}^k\frac{1}{N_i}\le \sum _{i=0}^{\infty }\frac{1}{N_i}\le \int ^{\infty }_{-1}\frac{dz}{N(z+\mu )(\ln (z+\mu ))^{1+b}} =\frac{1}{Nb(\ln (\mu -1))^b}\end{aligned}$$

and with \(\min _{i=0,\cdots ,k}\mathbb {E}[\mathfrak {R}(F,x^i)^2]\le \frac{1}{k+1}\sum _{i=0}^k\mathbb {E}[\mathfrak {R}(F,x^i)^2]\), we obtain the conclusion immediately. \(\square \)
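As an aside, the integrand above corresponds to a sample-size schedule of the form \(N_i\ge N(i+\mu )(\ln (i+\mu ))^{1+b}\) (our reading of Assumption 4.2, with \(\mu >2\) so that \(\ln (\mu -1)>0\)). The following snippet, with purely illustrative parameter values, checks the summability bound numerically:

```python
import math

# Illustrative parameters; the admissible ranges are fixed by Assumption 4.2,
# and these concrete values (N, mu, b) are our own choice with mu > 2.
N, mu, b = 10, 3.0, 1.0

def N_k(k):
    # Sample-size schedule matching the integrand in the bound above.
    return N * (k + mu) * math.log(k + mu) ** (1 + b)

partial = sum(1.0 / N_k(k) for k in range(100_000))
bound = 1.0 / (N * b * math.log(mu - 1.0) ** b)
print(f"partial sum = {partial:.4f}  <=  closed-form bound = {bound:.4f}")
```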

Appendix F: Proof of Theorem 4.3

First, it follows from Theorem 4.2 that there exists a constant \(\mathcal{M}>0\) such that, for every \(K\in \mathbb {N}\),

$$\begin{aligned} \min _{i=0,\cdots ,K}\mathbb {E}[\mathfrak {R}(F,x^i)^2]\le \mathcal{M}n(NbK)^{-1}. \end{aligned}$$

Hence, given \(\tau >0\), we obtain \(\min _{i=0,\cdots ,K}\mathbb {E}[\mathfrak {R}(F,x^i)^2]\le \tau \) after \(K=\mathcal{O}(nN^{-1}b^{-1}\tau ^{-1})\) iterations. The total number of oracle calls after \(K\) iterations is bounded above by

$$\begin{aligned} \sum _{i=0}^K(1+l_i)N_i&\lesssim \left( \max _{i=0,\cdots ,K}l_i\right) \sum _{i=0}^KN(i+\mu )(\ln (i+\mu ))^{1+b}\lesssim \left( \max _{i=0,\cdots ,K}l_i\right) K^2N(\ln K)^{1+b}\nonumber \\&\lesssim \left( \max _{i=0,\cdots ,K}l_i\right) N^{-1}n^2b^{-2}\tau ^{-2}(\ln (nN^{-1}b^{-1}\tau ^{-1}))^{1+b} \end{aligned}$$
(A.6)

and \(\min _{i=0,\cdots ,K}\mathbb {E}[\mathfrak {R}(F,x^i)^2]\le \tau \). By Lemma 4.1, one can obtain \(l_k\le \log _{\frac{1}{\theta }}\left( \frac{\widehat{\alpha }\overline{L}_k}{\min \{\lambda \theta ,\widehat{\alpha }\}}\right) \). This fact, (A.6), and \(N=\mathcal{O}(n)\) imply the claimed bound on \(\sum _{i=0}^K(1+l_i)N_i\).

The concavity of the mapping \(z\mapsto \log _{\frac{1}{\theta }}(z)\) and Jensen's inequality imply

$$\begin{aligned} \mathbb {E}[l_i]\le \mathbb {E}\left[ \log _{\frac{1}{\theta }}\left( \frac{\widehat{\alpha }\overline{L}_i}{\min \{\lambda \theta ,\widehat{\alpha }\}}\right) \right] \le \log _{\frac{1}{\theta }}\left( \frac{\widehat{\alpha } L}{\min \{\lambda \theta ,\widehat{\alpha }\}}\right) , \end{aligned}$$

where the last inequality follows from \(\mathbb {E}[\overline{L}_i]=L\), by the definitions of \(\overline{L}_i\) and \(L\) and Assumption 4.2. Taking expectation in (A.6) and making use of the above relation together with \(N=\mathcal{O}(n)\) yields the claimed bound on \(\sum _{i=0}^K(1+\mathbb {E}[l_i])N_i\). \(\square \)


Cite this article

Yang, ZP., Zhang, J., Wang, Y. et al. Variance-Based Subgradient Extragradient Method for Stochastic Variational Inequality Problems. J Sci Comput 89, 4 (2021). https://doi.org/10.1007/s10915-021-01603-y
