
Deep Neural Networks: Selected Aspects of Learning and Application

  • PATTERN RECOGNITION AND IMAGE ANALYSIS MILIEU

Abstract

Training methods for deep neural networks (DNNs) are analyzed. It is shown that maximizing the likelihood function of the input data distribution P(x) in the space of synaptic connections of a restricted Boltzmann machine (RBM) is equivalent to minimizing the cross-entropy (CE) error function of the network and, for linear neurons, to minimizing its total mean squared error (MSE) in the same space. The application of DNNs for the detection and recognition of product marking is considered.
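The stated equivalence can be illustrated numerically: training an RBM by one-step contrastive divergence (CD-1), which approximates the gradient of the likelihood of P(x), also drives down the network's reconstruction cross-entropy and mean squared error. The following is a minimal NumPy sketch of CD-1 on toy binary data, not the authors' implementation; the data, layer sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(v, W, b, c):
    """Deterministic reconstruction: visible -> hidden probs -> visible probs."""
    h = sigmoid(v @ W + c)
    return sigmoid(h @ W.T + b)

def cross_entropy(v, v_rec, eps=1e-9):
    return -np.mean(v * np.log(v_rec + eps) + (1 - v) * np.log(1 - v_rec + eps))

# Toy binary data: two repeating 4-bit patterns.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 8, dtype=float)

n_vis, n_hid, lr = 4, 3, 0.5
W = rng.normal(0.0, 0.01, (n_vis, n_hid))  # weights
b = np.zeros(n_vis)                        # visible biases
c = np.zeros(n_hid)                        # hidden biases

mse_before = np.mean((data - reconstruct(data, W, b, c)) ** 2)
ce_before = cross_entropy(data, reconstruct(data, W, b, c))

for _ in range(200):  # CD-1 epochs over the full batch
    v0 = data
    h0 = sigmoid(v0 @ W + c)                        # positive phase
    hs = (rng.random(h0.shape) < h0).astype(float)  # sample hidden states
    v1 = sigmoid(hs @ W.T + b)                      # one Gibbs step back
    h1 = sigmoid(v1 @ W + c)                        # negative phase
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)     # approx. likelihood gradient
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)

mse_after = np.mean((data - reconstruct(data, W, b, c)) ** 2)
ce_after = cross_entropy(data, reconstruct(data, W, b, c))
print(mse_before, mse_after)  # both MSE and CE drop as the likelihood rises
```

Although CD-1 never computes MSE or CE explicitly, pushing the model distribution toward P(x) lowers both reconstruction criteria, consistent with the equivalence discussed above.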





Author information

Correspondence to V. A. Golovko, A. A. Kroshchanka or E. V. Mikhno.

Ethics declarations

The authors declare that they have no conflicts of interest.

Additional information

Prof. Vladimir Golovko received a master of science degree in Computer Engineering from Bauman Moscow State Technical University in 1984, a doctoral degree from Belarus State Technical University in 1990, and a doctor of science degree in Computer Science from the United Institute of Informatics Problems of the National Academy of Sciences of Belarus in 2003. At present he is the head of the Intelligence Information Technologies Department and of the Laboratory of Artificial Neural Networks of Brest State Technical University. His research interests include artificial intelligence, neural networks, deep learning, autonomous robot learning, signal processing, and intrusion and epilepsy detection. He is the author of more than 400 scientific papers.

Aliaksandr Kroshchanka received a bachelor's degree in 2008 and a master of science degree in 2009 from Brest State Pushkin University. At present he is a senior lecturer at the Intelligence Information Technologies Department of Brest State Technical University. His research interests include artificial intelligence, neural networks, deep learning, computer vision, and integrated AI systems. He is the author of more than 40 scientific papers.

Egor Mikhno received a higher education diploma in 2016 and a master of science degree in Computer Engineering in 2017 from Brest State Technical University. He is currently completing his postgraduate studies and is a senior lecturer at the Department of Intelligent Information Technologies at Brest State Technical University. His research interests include artificial intelligence, neural networks, deep learning, Python, and machine learning. He is the author of about 10 scientific papers.


About this article


Cite this article

Golovko, V.A., Kroshchanka, A.A. & Mikhno, E.V. Deep Neural Networks: Selected Aspects of Learning and Application. Pattern Recognit. Image Anal. 31, 132–143 (2021). https://doi.org/10.1134/S1054661821010090
