
Smoothing Strategy Along with Conjugate Gradient Algorithm for Signal Reconstruction

Journal of Scientific Computing

Abstract

In this paper, we propose a new smoothing strategy combined with a conjugate gradient algorithm for the signal reconstruction problem. Theoretically, the conjugate gradient algorithm, paired with smoothing functions for the absolute value function, is shown to possess properties that guarantee global convergence. Numerical experiments and comparisons suggest that the proposed algorithm is an efficient approach to sparse recovery, and demonstrate that it has advantages over some existing solvers for the signal reconstruction problem.
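
The paper's exact smoothing function and conjugate gradient update are not reproduced on this page. The sketch below only illustrates the general idea under common assumptions: the \(\ell_1\) term in the regularized least-squares model \(\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1\) is replaced by a smooth surrogate such as \(\sqrt{t^2+\mu^2}\), and the resulting smooth objective is minimized by a Fletcher-Reeves-type nonlinear conjugate gradient iteration with backtracking line search while the smoothing parameter \(\mu\) is gradually reduced. The function names (`cg_smooth_l1`), the choice of surrogate, and all parameter values are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (assumed model, not the authors' exact algorithm):
# minimize 0.5*||A x - b||^2 + lam * sum_i sqrt(x_i^2 + mu^2)
# by Fletcher-Reeves nonlinear CG with Armijo backtracking,
# driving the smoothing parameter mu toward zero (continuation).
import numpy as np

def obj(A, b, x, lam, mu):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.sum(np.sqrt(x**2 + mu**2))

def grad(A, b, x, lam, mu):
    # Gradient of the smoothed objective; x/sqrt(x^2+mu^2) smooths sign(x).
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + mu**2)

def cg_smooth_l1(A, b, lam=0.01, mu=1.0, mu_min=1e-6, inner_iters=100):
    x = np.zeros(A.shape[1])
    while mu > mu_min:
        g = grad(A, b, x, lam, mu)
        d = -g                                   # restart with steepest descent
        for _ in range(inner_iters):
            if g @ d >= 0:                       # safeguard: enforce a descent direction
                d = -g
            # Armijo backtracking line search along d
            t, f0, slope = 1.0, obj(A, b, x, lam, mu), g @ d
            while obj(A, b, x + t * d, lam, mu) > f0 + 1e-4 * t * slope:
                t *= 0.5
            x = x + t * d
            g_new = grad(A, b, x, lam, mu)
            if np.linalg.norm(g_new) < 1e-6:
                break
            beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        mu *= 0.1                                # tighten the smoothing
    return x

# Usage: recover a k-sparse signal from m < n random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_rec = cg_smooth_l1(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

The continuation on \(\mu\) mirrors the usual rationale for smoothing approaches: each smoothed subproblem is differentiable, so a conjugate gradient method applies directly, and shrinking \(\mu\) makes the surrogate approach the absolute value function.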



Acknowledgements

The authors would like to thank the anonymous referees for their valuable comments and suggestions which have significantly improved the original version of the paper.

Funding

Caiying Wu and Jing Wang are supported by the Natural Science Foundation of Inner Mongolia Autonomous Region (2018MS01016). Jan Harold Alcantara and Jein-Shan Chen are supported by the Ministry of Science and Technology, Taiwan.

Author information


Corresponding author

Correspondence to Jein-Shan Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



About this article


Cite this article

Wu, C., Wang, J., Alcantara, J.H. et al. Smoothing Strategy Along with Conjugate Gradient Algorithm for Signal Reconstruction. J Sci Comput 87, 21 (2021). https://doi.org/10.1007/s10915-021-01440-z

