Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware

Published in: Journal of Signal Processing Systems

Abstract

Neuromorphic architectures implement biological neurons and synapses to execute machine learning algorithms using spiking neurons and bio-inspired learning rules. These architectures are energy efficient and therefore well suited to cognitive information processing in resource- and power-constrained environments, such as the sensor and edge nodes of the Internet of Things (IoT). To map a spiking neural network (SNN) to a neuromorphic architecture, prior works have proposed design-time solutions, where the SNN is first analyzed offline using representative data and then mapped to the hardware to optimize objective functions such as minimizing spike communication or maximizing resource utilization. In many emerging applications, however, machine learning models may change with the input through online learning rules: new connections may form, or existing connections may disappear, at run-time based on input excitation. An already-mapped SNN may therefore need to be re-mapped to the neuromorphic hardware to maintain optimal performance. Unfortunately, owing to their high computation time, design-time approaches are not suitable for remapping a machine learning model at run-time after every learning epoch. In this paper, we propose a design methodology to partition and map the neurons and synapses of online-learning SNN-based applications to neuromorphic architectures at run-time. Our methodology operates in two steps: step 1 is a layer-wise greedy approach that partitions the SNN into clusters of neurons and synapses while respecting the constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization that minimizes the total number of spikes communicated between clusters, improving energy consumption on the architecture's shared interconnect. We evaluate the feasibility of our algorithm using synthetic and realistic SNN-based applications, and demonstrate that it reduces SNN mapping time by an average of 780x compared to a state-of-the-art design-time SNN partitioning approach, with only 6.25% lower solution quality.
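The step-2 optimization described above can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the authors' implementation: the synapse representation (source, destination, spike count), the per-cluster `capacity` constraint, and the function names `inter_cluster_spikes` and `hill_climb` are assumptions made for the example. It shows the core idea of hill climbing on a cluster assignment, accepting a single-neuron move only when it reduces the spikes crossing cluster boundaries.

```python
import random

def inter_cluster_spikes(assignment, synapses):
    """Total spikes carried by synapses whose endpoints lie in different clusters.

    assignment: dict mapping neuron id -> cluster id
    synapses: list of (src_neuron, dst_neuron, spike_count) tuples
    """
    return sum(spikes for src, dst, spikes in synapses
               if assignment[src] != assignment[dst])

def hill_climb(assignment, synapses, capacity, iterations=1000, seed=0):
    """Hill climbing over cluster assignments: repeatedly propose moving one
    neuron to another cluster, keeping the move only if it lowers the
    inter-cluster spike traffic without exceeding the per-cluster capacity
    (a stand-in for the neuromorphic hardware constraint)."""
    rng = random.Random(seed)
    neurons = list(assignment)
    clusters = sorted(set(assignment.values()))
    best = inter_cluster_spikes(assignment, synapses)
    for _ in range(iterations):
        n = rng.choice(neurons)
        old, new = assignment[n], rng.choice(clusters)
        if new == old:
            continue
        # Hardware constraint: a cluster can hold at most `capacity` neurons.
        if sum(1 for c in assignment.values() if c == new) >= capacity:
            continue
        assignment[n] = new
        cost = inter_cluster_spikes(assignment, synapses)
        if cost < best:
            best = cost          # keep the improving move
        else:
            assignment[n] = old  # revert a non-improving move
    return assignment, best
```

For instance, starting from an assignment that splits two heavily communicating neuron pairs across clusters, the loop migrates neurons until only the weak cross-pair synapse spans a cluster boundary. In the paper's flow, such a pass would refine the clusters produced by the layer-wise greedy partitioning of step 1.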


Figures 1–6



Acknowledgments

This work is supported by 1) the National Science Foundation Award CCF-1937419 (RTML: Small: Design of System Software to Facilitate Real-Time Neuromorphic Computing) and 2) the National Science Foundation Faculty Early Career Development Award CCF-1942697 (CAREER: Facilitating Dependable Neuromorphic Computing: Vision, Architecture, and Impact on Programmability).

Author information

Correspondence to Adarsha Balaji.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Balaji, A., Marty, T., Das, A. et al. Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware. J Sign Process Syst 92, 1293–1302 (2020). https://doi.org/10.1007/s11265-020-01573-8

