
Computational Functionalism for the Deep Learning Era


Abstract

Deep learning is a kind of machine learning that takes place in a particular type of artificial neural network called a deep network. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance on many intelligent tasks. This raises the question of whether that performance is caused by such similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two important and closely related reasons for the success of deep learning: the extraction of successively higher-level features and the multiple-layer structure. We then offer some indications about how this heated debate should be framed. Next, the value of artificial deep networks as models of the human brain is assessed from the similarity perspective on model representation. Finally, a new version of computational functionalism is proposed that addresses the specificity of deep neural computation better than classic, program-based computational functionalism.
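To make the two success factors named in the abstract concrete, the following minimal sketch (not taken from the paper; all names, layer sizes, and initialization choices are illustrative assumptions) shows how a deep network is simply a composition of layers, with each layer re-representing its input so that later layers operate on successively higher-level features:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, w, b):
        """One layer: an affine map followed by a ReLU nonlinearity."""
        return np.maximum(0.0, x @ w + b)

    # A deep network is a composition of such layers. The sizes below
    # are arbitrary; e.g. 784 inputs could be a flattened 28x28 image.
    sizes = [784, 256, 64, 10]
    params = [(rng.normal(0.0, 0.01, (m, n)), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]

    def deep_net(x):
        # Each layer re-represents its input, so layer k+1 operates on
        # the (successively higher-level) features extracted by layer k.
        for w, b in params:
            x = layer(x, w, b)
        return x

    x = rng.normal(size=(1, 784))
    print(deep_net(x).shape)   # -> (1, 10)

The point of the sketch is structural: depth arises purely from function composition, and the "higher-level features" are nothing more than the intermediate activations passed from one layer to the next.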



Acknowledgements

The author wishes to thank the editor and the anonymous reviewers for their constructive feedback on the manuscript. He is also grateful to David Teira (Universidad Nacional de Educación a Distancia, Madrid, Spain) and Emanuele Ratti (University of Notre Dame) for their valuable comments. Finally, he is indebted to José Muñoz-Pérez, José Luis Pérez-de-la-Cruz and Lawrence Mandow (Universidad de Málaga, Spain) for sharing with him their views on Artificial Intelligence.

Author information


Corresponding author

Correspondence to Ezequiel López-Rubio.


About this article


Cite this article

López-Rubio, E. Computational Functionalism for the Deep Learning Era. Minds & Machines 28, 667–688 (2018). https://doi.org/10.1007/s11023-018-9480-7
