
Designing Interfaces for Classes of a Neural Network Graph Model

Programming and Computer Software

Abstract

This paper describes an approach to testing artificial neural networks that is implemented in a C++ program as a set of data structures together with the algorithms that process them. C++ classes serve as the data structures representing the following objects of the graph model: vertex, edge, directed and undirected graph, spanning tree, and circuit. Interfaces for the most important overloaded operations on these objects are described, and the implementation of a testing procedure that uses these overloaded operations on graph model objects is illustrated.
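The abstract does not reproduce the class declarations themselves. The following minimal C++ sketch only illustrates the general idea of representing graph-model objects as classes with overloaded operations; all names and signatures here (Vertex, Edge, Graph, operator+=) are illustrative assumptions and are not taken from the paper.

// Hypothetical sketch of graph-model class interfaces with overloaded
// operations; names and signatures are illustrative assumptions only.
#include <cstddef>
#include <vector>

class Vertex {
public:
    explicit Vertex(std::size_t id) : id_(id) {}
    std::size_t id() const { return id_; }
    bool operator==(const Vertex& other) const { return id_ == other.id_; }
private:
    std::size_t id_;
};

class Edge {
public:
    Edge(Vertex from, Vertex to) : from_(from), to_(to) {}
    const Vertex& from() const { return from_; }
    const Vertex& to() const { return to_; }
private:
    Vertex from_;
    Vertex to_;
};

class Graph {
public:
    explicit Graph(bool directed = true) : directed_(directed) {}

    // Overloaded operators add model objects (vertices, edges) to the graph.
    Graph& operator+=(const Vertex& v) { vertices_.push_back(v); return *this; }
    Graph& operator+=(const Edge& e)   { edges_.push_back(e);    return *this; }

    bool directed() const { return directed_; }
    const std::vector<Vertex>& vertices() const { return vertices_; }
    const std::vector<Edge>& edges() const { return edges_; }

private:
    bool directed_;
    std::vector<Vertex> vertices_;
    std::vector<Edge> edges_;
};

int main() {
    Graph g(/*directed=*/true);
    Vertex a(0), b(1);
    g += a;
    g += b;
    g += Edge(a, b);   // a directed edge of the neural network graph
    return 0;
}

Overloading operator+= in this style lets test code assemble a network graph declaratively; spanning-tree and circuit objects could then be built on top of such a Graph interface in the same manner.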




ACKNOWLEDGMENTS

We are grateful to Viktor Vasilyevich Malyshko, Associate Professor at the System Programming Department of the Faculty of Computational Mathematics and Cybernetics (CMC), Moscow State University, who assisted in verifying and refining the class diagram of the graph model.

Funding

This work was supported by the Russian Foundation for Basic Research, project nos. 18-07-0697-a, 18-07-01211-a, 19-07-00321-a, and 19-07-00493-a.

Author information

Corresponding authors

Correspondence to Yu. L. Karpov, I. A. Volkova, A. A. Vylitok, L. E. Karpov or Yu. G. Smetanin.

Additional information

Shortly before this publication, one of the co-authors of this work, Yurii Gennad'evich Smetanin, passed away.

We shall always remember him as our friend, a true scientist, and a very good person.

Translated by Yu. Kornienko


Cite this article

Karpov, Y.L., Volkova, I.A., Vylitok, A.A. et al. Designing Interfaces for Classes of a Neural Network Graph Model. Program Comput Soft 46, 463–472 (2020). https://doi.org/10.1134/S036176882007004X
