Abstract
The growth of connected intelligent devices in the Internet of Things has created a pressing need for the real-time processing of large volumes of analogue data. Because digital computing speeds are increasingly difficult to boost, digital systems cannot meet the demand for processing analogue information that is intrinsically continuous in magnitude and time. By using a continuous data representation in a nanoscale crossbar array, analogue information can instead be processed directly, in parallel and in real time. Here, we propose a scalable massively parallel computing scheme that exploits a continuous-time data representation and frequency multiplexing in a nanoscale crossbar array. This scheme enables the parallel reading of stored data and one-shot matrix–matrix multiplication in the crossbar array. Furthermore, we achieve one-shot recognition of 16 letter images using two physically interconnected crossbar arrays, and demonstrate that the processing and modulation of analogue information can be performed simultaneously in a memristive crossbar array.
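The one-shot matrix–matrix multiplication described above can be sketched numerically. In a crossbar, Ohm's law and Kirchhoff's current law make each output row current an analogue dot product of the input voltages with the stored conductances; frequency multiplexing lets several input vectors share the array at once, each riding on its own carrier, with the products recovered by demodulation. The sketch below is a minimal idealized model, not the authors' implementation: the array sizes, carrier frequencies and time window are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossbar conductances G (4 outputs x 3 inputs) and
# input matrix X (five 3-element vectors, one per carrier frequency).
G = rng.uniform(0.1, 1.0, size=(4, 3))
X = rng.uniform(-1.0, 1.0, size=(3, 5))

# Distinct carrier frequencies (1-5 kHz), sampled over a 10 ms window so
# each carrier completes an integer number of periods (orthogonality).
freqs = np.arange(1, X.shape[1] + 1) * 1e3
t = np.arange(0, 1e-2, 1e-6)
carriers = np.sin(2 * np.pi * np.outer(freqs, t))   # shape (5, T)

# Each input line carries the superposition of all five vectors at once.
v = X @ carriers                                     # voltages, shape (3, T)

# Ohm's law + Kirchhoff's current law: row currents sum G[i, j] * v[j](t).
i_out = G @ v                                        # currents, shape (4, T)

# Demultiplex: correlating each row current with each carrier recovers
# the full product G @ X in one shot.
Y = 2 * (i_out @ carriers.T) / t.size                # shape (4, 5)

print(np.allclose(Y, G @ X, atol=1e-6))
```

Because the carriers are mutually orthogonal over the window, the correlation step isolates each column of the product exactly (up to floating-point error), which is the essence of reading out several matrix–vector products simultaneously from one physical array.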
Data availability
The data supporting the findings of this study are available within the article and its Supplementary Information, and from the corresponding author upon reasonable request.
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (62034004, 61625402, 61974176 and 61921005), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB44000000), the National Key R&D Program of China (2019YFB2205400 and 2019YFB2205402) and Fundamental Research Funds for the Central Universities (020414380179 and 020414380171). F.M. acknowledges the support from the AIQ foundation and experimental assistance from Q. Liu, X. Tan and Z. Wu.
Author information
Contributions
F.M., S.-J.L. and C.W. conceived the idea and designed the experiments. F.M. and S.-J.L. supervised the whole project. C.W. performed all experiments. C.W. and S.-J.L. analysed the experimental data. C.-Y.W. and C.P. provided assistance during the experiment design. Z.-Z.Y. assisted in the device fabrication and circuit assembly. X.S. and W.W. contributed to circuit measurement. Y.G., Z.Z. and C.Z. contributed to the MIMO model. C.W. and Y.Z. carried out the simulation of the circuit models. C.W., S.-J.L. and F.M. co-wrote the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Nature Nanotechnology thanks Yang Chai, Suhas Kumar and Abu Sebastian for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Figs. 1–18 and Tables 1–3.
Supplementary Video 1
The 16 letter images can be recognized in parallel by physically interconnecting two nanoscale crossbar arrays, in which one crossbar is used for data storage while the other serves as an artificial neural network for the inference task. The recognition results are transmitted to a wireless terminal (for example, a cell phone), since signal modulation is accomplished simultaneously with the analogue computing.
Rights and permissions
About this article
Cite this article
Wang, C., Liang, S.-J., Wang, C.-Y. et al. Scalable massively parallel computing using continuous-time data representation in nanoscale crossbar array. Nat. Nanotechnol. 16, 1079–1085 (2021). https://doi.org/10.1038/s41565-021-00943-y
This article is cited by
- Higher-dimensional processing using a photonic tensor core with continuous-time data. Nature Photonics (2023)
- In-memory photonic dot-product engine with electrically programmable weight banks. Nature Communications (2023)
- Graphene/MoS2−xOx/graphene photomemristor with tunable non-volatile responsivities for neuromorphic vision processing. Light: Science & Applications (2023)
- Parallel in-memory wireless computing. Nature Electronics (2023)
- An organic electrochemical transistor for multi-modal sensing, memory and processing. Nature Electronics (2023)