Encoding and Decoding of Recursive Structures in Neural-Symbolic Systems

Abstract

One way to join the connectionist approach and the symbolic paradigm is Tensor Product Variable Binding. It was originally proposed for building distributed representations of recursive structures so that neural networks could accept such structures as input. Structures are an essential part of both formal and natural languages and appear in syntactic trees, grammars, and semantic interpretation. The human mind handles such problems smoothly at the neural level, in a way that is naturally scalable and robust. This raises the question of whether traditional symbolic algorithms can be translated to the sub-symbolic level, so that general tasks can reuse the performance and computational gains of neural networks. However, several aspects of Tensor Product Variable Binding have received little attention in published research, especially the construction of a neural architecture that performs computations according to the mathematical model without preliminary training. This paper addresses those implementation aspects. A novel design is proposed for a decoding network that translates a tensor into the corresponding recursive structure with an arbitrary level of nesting. Several subtle issues in encoding such structures into a distributed representation, or tensor, are also examined. Both the encoding and decoding neural networks are built with the Keras framework and analyzed from the perspective of applied value. The proposed design continues a series of papers dedicated to building a robust bridge between the two computational paradigms: connectionist and symbolic.
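As context for the abstract, the following minimal sketch illustrates the core algebra of Tensor Product Variable Binding [37]: fillers are bound to roles by outer products, the bindings are summed into a single tensor, and decoding contracts that tensor with dual role vectors. The role and filler vectors, their dimensions, and the NumPy formulation are illustrative assumptions for exposition, not the network architecture proposed in the paper.

import numpy as np

# Role vectors for the two children of a binary tree node (assumed orthonormal).
r_left = np.array([1.0, 0.0])
r_right = np.array([0.0, 1.0])

# Filler vectors for two symbols, A and B (illustrative choice).
f_A = np.array([1.0, 0.0, 0.0])
f_B = np.array([0.0, 1.0, 0.0])

# Encoding: bind each filler to its role with an outer product, then sum.
# The tree (A B) becomes a single order-2 tensor of shape (3, 2).
T = np.outer(f_A, r_left) + np.outer(f_B, r_right)

# Decoding (unbinding): contract the tensor with the dual role vector.
# For orthonormal roles, the duals coincide with the roles themselves.
assert np.allclose(T @ r_left, f_A)
assert np.allclose(T @ r_right, f_B)

# Deeper nesting reuses the same scheme: a leaf's role is the tensor product
# of the roles along the path from the root, so the tree ((A B) C) stores A
# under r_left (x) r_left, and the tensor order grows by one per nesting level.

In the paper these encoding and decoding steps are realized as layers of a Keras network rather than as NumPy calls, so the same algebra is carried out on the sub-symbolic level without preliminary training.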

Notes

  1. From now on, the network description uses terminology accepted in the Keras [7] and TensorFlow [1] software frameworks.

  2. http://www.numpy.org/.

  3. https://www.scipy.org/.

  4. https://www.python.org/.

  5. https://github.com/demid5111/ldss-tensor-structures.

  6. https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
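Note 6 points at tf.sparse.SparseTensor, which is relevant because the binding tensors of deeply nested structures are mostly zeros. The round-trip below is a small, hypothetical illustration of that storage format; the shapes and values are arbitrary and not taken from the paper.

import tensorflow as tf

# A mostly-zero binding tensor stored densely (values are arbitrary).
dense = tf.constant([[0., 0., 1., 0.],
                     [0., 0., 0., 0.],
                     [0., 2., 0., 0.]])

# Keep only the non-zero entries: their coordinates plus their values.
sparse = tf.sparse.from_dense(dense)
print(sparse.indices.numpy())  # [[0 2] [2 1]]
print(sparse.values.numpy())   # [1. 2.]

# Restore the dense form, e.g. before a layer that requires dense input.
restored = tf.sparse.to_dense(sparse)
assert bool(tf.reduce_all(tf.equal(dense, restored)))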

REFERENCES

  1. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., et al., TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. http://tensorflow.org/.

  2. Besold, T.R. and Kühnberger, K.U., Towards integrated neural-symbolic systems for human-level AI: Two research programs helping to bridge the gaps, Biol. Inspired Cognit. Archit., 2015, vol. 14, pp. 97–110.

  3. Blacoe, W. and Lapata, M., A comparison of vector-based representations for semantic composition, in Proc. of the 2012 Joint Conf. on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Association for Computational Linguistics, 2012, pp. 546–556.

  4. Browne, A. and Sun, R., Connectionist inference models, Neural Networks, 2001, vol. 14, no. 10, pp. 1331–1355.

  5. Cheng, J., Wang, Z., Wen, J.R., Yan, J., and Chen, Z., Contextual text understanding in distributional semantic space, in Proc. of the 24th ACM Int. on Conf. on Information and Knowledge Management, ACM, 2015, pp. 133–142.

  6. Cheng, P., Zhou, B., Chen, Z., and Tan, J., The TOPSIS method for decision making with 2-tuple linguistic intuitionistic fuzzy sets, in 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), IEEE, 2017, pp. 1603–1607.

  7. Chollet, F. et al., Keras, 2015. https://keras.io.

  8. Collier, M. and Beel, J., Implementing Neural Turing Machines, in Int. Conf. on Artificial Neural Networks, Springer, 2018, pp. 94–104.

  9. Demidovskij, A., Implementation aspects of tensor product variable binding in connectionist systems, in Proc. of SAI Intelligent Systems Conference, Springer, 2019, pp. 97–110.

  10. Demidovskij, A., Automatic construction of tensor product variable binding neural networks for neural-symbolic intelligent systems, in Proc. of 2nd Int. Conf. on Electrical, Communication and Computer Engineering, IEEE, 2020 (not published).

  11. Demidovskij, A. and Babkin, E., Developing a distributed linguistic decision making system, Business Inform., 2019, vol. 13, no. 1.

  12. Demidovskij, A. and Babkin, E., Designing a neural network primitive for conditional structural transformations, in Russian Conf. on Artificial Intelligence, Springer, 2020, pp. 117–133.

  13. Demidovskij, A. and Babkin, E., Designing arithmetic neural primitive for sub-symbolic aggregation of linguistic assessments, J. Phys.: Conf. Ser., 2020, vol. 1680.

  14. Demidovskij, A.V., Towards automatic manipulation of arbitrary structures in connectivist paradigm with tensor product variable binding, in Int. Conf. on Neuroinformatics, Springer, 2019, pp. 375–383.

  15. Demidovskij, A.V. and Babkin, E.A., Towards designing linguistic assessments aggregation as a distributed neuroalgorithm, in 2020 XXIII Int. Conf. on Soft Computing and Measurements (SCM), IEEE, 2020, pp. 161–164.

  16. Dettmers, T. and Zettlemoyer, L., Sparse networks from scratch: Faster training without losing performance, arXiv preprint, 2019. arXiv:1907.04840.

  17. Evci, U., Gale, T., Menick, J., Castro, P.S., and Elsen, E., Rigging the lottery: Making all tickets winners, in Int. Conf. on Machine Learning, PMLR, 2020, pp. 2943–2952.

  18. Gallant, S.I. and Okaywe, T.W., Representing objects, relations, and sequences, Neural Comput., 2013, vol. 25, no. 8, pp. 2038–2078.

  19. van Gelder, T., Distributed vs. local representation, PhD Thesis, Univ. Pittsburgh, 1999.

  20. Golmohammadi, D., Neural network application for fuzzy multi-criteria decision making problems, Int. J. Prod. Econ., 2011, vol. 131, no. 2, pp. 490–504.

  21. Graves, A., Wayne, G., and Danihelka, I., Neural Turing Machines, 2014. arXiv:1410.5401.

  22. Gray, S., Radford, A., and Kingma, D.P., GPU kernels for block-sparse weights, 2017. arXiv:1711.09224.

  23. Kanerva, P., Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors, Cognit. Comput., 2009, vol. 1, no. 2, pp. 139–159.

  24. Kleyko, D., Khan, S., Osipov, E., and Yong, S.P., Modality classification of medical images with distributed representations based on cellular automata reservoir computing, in 2017 IEEE 14th Int. Symp. on Biomedical Imaging (ISBI 2017), IEEE, 2017, pp. 1053–1056.

  25. Kleyko, D., Rahimi, A., Rachkovskij, D.A., Osipov, E., and Rabaey, J.M., Classification and recall with binary hyperdimensional computing: Tradeoffs in choice of density and mapping characteristics, IEEE Trans. Neural Networks Learning Systems, 2018, vol. 29, no. 12, pp. 5880–5898.

  26. Le, Q. and Mikolov, T., Distributed representations of sentences and documents, in Int. Conf. on Machine Learning, 2014, pp. 1188–1196.

  27. Legendre, G., Miyata, Y., and Smolensky, P., Harmonic Grammar: A Formal Multi-Level Connectionist Theory of Linguistic Well-Formedness, Theoretical Foundations, Citeseer, 1990.

  28. Legendre, G., Miyata, Y., and Smolensky, P., Distributed recursive structure processing, Advances in Neural Information Processing Systems 3 (NIPS 1990), 1991, pp. 591–597.

  29. Marcus, G., The next decade in AI: Four steps towards robust artificial intelligence, 2020. arXiv:2002.06177.

  30. Mikolov, T., Chen, K., Corrado, G., and Dean, J., Efficient estimation of word representations in vector space, 2013. arXiv:1301.3781.

  31. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A., Automatic differentiation in PyTorch, 2017.

  32. Pinkas, G., Reasoning, nonmonotonicity and learning in connectionist networks that capture propositional knowledge, Artif. Intell., 1995, vol. 77, no. 2, pp. 203–247.

  33. Pinkas, G., Lima, P., and Cohen, S., Representing, binding, retrieving and unifying relational knowledge using pools of neural binders, Biol. Inspired Cognit. Archit., 2013, vol. 6, pp. 87–95.

  34. Rumelhart, D.E., Hinton, G.E., McClelland, J.L., et al., A general framework for parallel distributed processing, Parallel Distrib. Process.: Explor. Microstruct. Cognit., 1986, vol. 1, pp. 45–76.

  35. Rumelhart, D.E., McClelland, J.L., and the PDP Research Group, Parallel Distributed Processing, Vol. 1, Cambridge, MA: MIT Press, 1987.

  36. Serafini, L. and Garcez, A.d., Logic tensor networks: Deep learning and logical reasoning from data and knowledge, 2016. arXiv:1606.04422.

  37. Smolensky, P., Tensor product variable binding and the representation of symbolic structures in connectionist systems, Artif. Intell., 1990, vol. 46, no. 1–2, pp. 159–216.

  38. Smolensky, P. and Legendre, G., The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar (Cognitive Architecture), MIT Press, 2006, vol. 1.

  39. Smolensky, P., Legendre, G., and Miyata, Y., Integrating connectionist and symbolic computation for the theory of language, Curr. Sci., 1993, pp. 381–391.

  40. Teso, S., Sebastiani, R., and Passerini, A., Structured learning modulo theories, Artif. Intell., 2017, vol. 244, pp. 166–187.

  41. Wang, H., Dou, D., and Lowd, D., Ontology-based deep restricted Boltzmann machine, in Int. Conf. on Database and Expert Systems Applications, Springer, 2016, pp. 431–445.

  42. Wei, C. and Liao, H., A multigranularity linguistic group decision-making method based on hesitant 2-tuple sets, Int. J. Intell. Syst., 2016, vol. 31, no. 6, pp. 612–634.

  43. Widdows, D. and Cohen, T., Reasoning with vectors: A continuous model for fast robust inference, Logic J. IGPL, 2014, vol. 23, no. 2, pp. 141–173.

Funding

The reported study was funded by the Russian Foundation for Basic Research, project no. 19-37-90058.

Author information

Corresponding author

Correspondence to A. Demidovskij.

Ethics declarations

The author declares that there is no conflict of interest.

About this article

Cite this article

Demidovskij, A. Encoding and Decoding of Recursive Structures in Neural-Symbolic Systems. Opt. Mem. Neural Networks 30, 37–50 (2021). https://doi.org/10.3103/S1060992X21010033
