
Improving robustness and efficiency of edge computing models

Published in: Wireless Networks

Abstract

Existing designs of edge computing models are mostly targeted at improving accuracy. Yet, besides accuracy, robustness and inference efficiency are also crucial performance attributes. To achieve satisfactory performance in edge-cloud computing frameworks, each distributed model must be both robust to perturbations and feasible for information uploading in wireless environments with limited bandwidth. In other words, feature encoders should be more robust and have faster inference while maintaining competitive accuracy. Therefore, to design accurate, robust, and efficient models for bandwidth-limited edge computing, we propose a systematic approach that autonomously optimizes the parameters and architectures of arbitrary deep neural networks. This approach employs a genetic-algorithm-based bi-generative adversarial network to autonomously develop and select the number of filters (for convolutional layers) and the number of neurons (for fully connected layers) from a wide range of values. To demonstrate its performance, we test our approach on the ImageNet and ModelNet databases and compare it with a state-of-the-art 3D volumetric network and two exclusively GA-based methods. Our results show that the proposed method can significantly improve performance by simultaneously optimizing multiple neural network parameters, regardless of network depth.
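The core idea of evolving per-layer widths with a genetic algorithm can be illustrated with a minimal sketch. This is not the authors' implementation: the candidate width ranges, the toy fitness function (a stand-in for the trained network's accuracy/efficiency trade-off), and the selection scheme are all illustrative assumptions.

```python
# Hypothetical sketch of GA-based selection of layer widths:
# filter counts for conv layers, neuron counts for fully connected layers.
# The fitness function below is a toy proxy; in the paper it would come
# from training and evaluating each candidate network.
import random

random.seed(0)

CONV_CHOICES = [16, 32, 64, 128, 256]  # candidate filter counts (assumed range)
FC_CHOICES = [128, 256, 512, 1024]     # candidate neuron counts (assumed range)

def random_genome(n_conv=3, n_fc=2):
    """A genome encodes one architecture: a width per layer."""
    return ([random.choice(CONV_CHOICES) for _ in range(n_conv)],
            [random.choice(FC_CHOICES) for _ in range(n_fc)])

def fitness(genome):
    """Toy score: reward capacity, penalize size (proxy for the
    accuracy vs. inference/upload-cost trade-off)."""
    conv, fc = genome
    capacity = sum(conv) + sum(fc)
    return capacity ** 0.5 - 0.002 * capacity

def crossover(a, b):
    """One-point crossover applied separately to conv and fc genes."""
    (ca, fa), (cb, fb) = a, b
    cut_c = random.randrange(1, len(ca))
    cut_f = random.randrange(1, len(fa))
    return (ca[:cut_c] + cb[cut_c:], fa[:cut_f] + fb[cut_f:])

def mutate(genome, rate=0.2):
    """Randomly resample each gene with probability `rate`."""
    conv, fc = genome
    conv = [random.choice(CONV_CHOICES) if random.random() < rate else c for c in conv]
    fc = [random.choice(FC_CHOICES) if random.random() < rate else n for n in fc]
    return (conv, fc)

def evolve(pop_size=20, generations=10):
    """Elitist GA loop: keep the best half, breed the rest."""
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best architecture (conv filters, fc neurons):", best)
```

In the paper's setting, the fitness evaluation would additionally involve the bi-generative adversarial network and measurements of robustness and inference time rather than this closed-form proxy.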


Figures 1–7 appear in the full article.




Acknowledgements

This work was supported by the National Key R&D Program of China (No. 2021YFB2900100), the Natural Science Basic Research Program of Shaanxi Province (No. 2022JQ-579), the Fund of Doctoral Startup of Xi’an University of Technology (No. 112-451121006), and the Fundamental Research Funds for the Central Universities (No. 3102019QD1001).

Author information

Correspondence to Helei Cui.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, Y., Lu, Y., Cui, H. et al. Improving robustness and efficiency of edge computing models. Wireless Netw (2022). https://doi.org/10.1007/s11276-022-03115-5

