
  • Review Article

Neuro-inspired computing chips

Abstract

The rapid development of artificial intelligence (AI) demands domain-specific hardware designed specifically for AI applications. Neuro-inspired computing chips integrate a range of features inspired by neurobiological systems and could provide an energy-efficient approach to AI computing workloads. Here, we review the development of neuro-inspired computing chips, including artificial neural network chips and spiking neural network chips. We propose four key metrics for benchmarking neuro-inspired computing chips: computing density, energy efficiency, computing accuracy and on-chip learning capability. We then discuss co-design principles, from the device level to the algorithm level, for neuro-inspired computing chips based on non-volatile memory. Finally, we outline the electronic design automation tool chain that such chips will require and propose a roadmap for the development of large-scale neuro-inspired computing chips.
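As a concrete illustration of the analogue in-memory computing paradigm reviewed here (sketched in Fig. 4), the short simulation below maps a signed weight matrix onto differential pairs of device conductances, quantizes them to a finite number of analogue levels and adds read noise before performing the multiply-accumulate via Ohm's and Kirchhoff's laws. This is a minimal sketch with made-up device parameters, not code from the article.

```python
import numpy as np

def crossbar_mvm(weights, x, g_min=1e-6, g_max=1e-4,
                 levels=32, noise_sigma=0.01, seed=0):
    """Simulate an analogue matrix-vector multiply on an NVM crossbar.

    Each signed weight is mapped onto a differential pair of conductances
    (G+ - G-), quantized to a finite number of conductance levels and
    perturbed by multiplicative read noise. The input vector x is applied
    as row voltages; column currents realize the multiply-accumulate.
    All device parameters are illustrative assumptions, not paper values.
    """
    rng = np.random.default_rng(seed)
    w_max = np.max(np.abs(weights))
    scale = (g_max - g_min) / w_max
    # Positive weights go to the G+ device, negative weights to the G- device.
    g_plus = g_min + scale * np.clip(weights, 0.0, None)
    g_minus = g_min + scale * np.clip(-weights, 0.0, None)
    # Quantize to the finite number of analogue levels a cell can store.
    step = (g_max - g_min) / (levels - 1)
    g_plus = g_min + np.round((g_plus - g_min) / step) * step
    g_minus = g_min + np.round((g_minus - g_min) / step) * step
    # Multiplicative read noise models device and cycle-to-cycle variation.
    g_plus = g_plus * (1.0 + noise_sigma * rng.standard_normal(g_plus.shape))
    g_minus = g_minus * (1.0 + noise_sigma * rng.standard_normal(g_minus.shape))
    # Column current: I = (G+ - G-) @ V; rescale back into weight units.
    return (g_plus - g_minus) @ x / scale

# Compare the noisy analogue result with an exact digital MVM.
rng = np.random.default_rng(42)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print("digital :", W @ x)
print("analogue:", crossbar_mvm(W, x))  # close, but not bit-exact
```

The discrepancy between the two outputs grows as the number of conductance levels shrinks or the read noise rises, which is why computing accuracy appears alongside computing density and energy efficiency among the benchmarking metrics proposed above.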


Fig. 1: Computing architectures and paradigms.
Fig. 2: Benchmarks.
Fig. 3: Application-dependent device metric requirements.
Fig. 4: Analogue computing with NVM.
Fig. 5: EDA tool chain.
Fig. 6: Roadmap.
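To make the benchmark figures quoted for such chips (Fig. 2) easier to interpret: energy efficiency is simply operations per second divided by power, conventionally with one multiply-accumulate counted as two operations. The numbers below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical numbers, used only to illustrate the TOPS/W arithmetic.
macs_per_second = 5e11                 # assumed throughput: 0.5 tera-MACs/s
ops_per_second = 2 * macs_per_second   # 1 MAC = 1 multiply + 1 add = 2 ops
power_watts = 0.1                      # assumed macro power: 100 mW
tops_per_watt = ops_per_second / (power_watts * 1e12)
print(f"{tops_per_watt:.1f} TOPS/W")   # prints 10.0 TOPS/W
```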



Acknowledgements

This work was supported in part by the National Key R&D Program of China (2019YFB2205104), the National Natural Science Foundation of China (61851404), the Beijing Municipal Science and Technology Project (Z191100007519008), the Huawei Project (YBN2019075015) and the Beijing Tsinghua and Hsinchu Tsinghua Joint Project.

Author information


Contributions

W.Z. and B.G. contributed to data collection, analysis and writing. All authors contributed to discussions and revised the manuscript at all stages. H.W. contributed to work planning and development.

Corresponding author

Correspondence to Huaqiang Wu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information


Supplementary Table 1.

Rights and permissions

Reprints and permissions

About this article


Cite this article

Zhang, W., Gao, B., Tang, J. et al. Neuro-inspired computing chips. Nat Electron 3, 371–382 (2020). https://doi.org/10.1038/s41928-020-0435-7


