When trying to build something new, researchers often turn to nature for inspiration. Dry adhesives that mimic the feet of gecko lizards1, electronic materials with a skin-like ability to self-heal2, and wide-field-of-view cameras that resemble the vision of aquatic animals3 are just a few examples. When building computers, the brain is an obvious starting place. But current machines are distinctly unnatural — and supremely successful nonetheless. The rise of machine learning and artificial intelligence (AI), and the energy demands they place on computing hardware, is, however, driving a search for alternative approaches, and those that take inspiration from the brain could provide a solution. In a Focus in this issue of Nature Electronics, we explore what neuromorphic computing can do.

An optical microscopy image of a programmable neuromorphic computing chip created by integrating a memristor crossbar array with CMOS control circuitry6. Credit: Seung Hwan Lee, University of Michigan

Brain-like approaches to computing can be traced back to the 1980s and the work of Carver Mead at the California Institute of Technology. As Mead recounts in our Reverse Engineering column, his work in the field is linked to a lunch with Caltech colleagues Richard Feynman and John Hopfield, and their decision then to teach a joint course on the physics of computation. “After three years, the course split and we went in different directions: Feynman launched quantum computation; Hopfield developed a new class of neural networks; and I saw analogue silicon technology as a promising vehicle for neuromorphic systems.”

Mead, and the collection of talented researchers who subsequently joined his group, began by developing sensory systems: retina chips for vision and cochlea chips for hearing. They would also go on to develop the address-event representation protocol for transmitting signals between neuromorphic chips. Today, neuromorphic computing takes a variety of forms: some analogue, some digital, some hybrid; some based on traditional silicon CMOS (complementary metal–oxide–semiconductor) devices and some based on devices made from novel materials. One key approach is to move away from conventional von Neumann computing systems, where computation and memory are physically separated, and closer to the sparse networks of neurons and synapses found in the brain, where there is no such separation.
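To make the contrast concrete, consider the multiply-accumulate operation at the heart of neural networks. The toy Python sketch below (our illustration, with arbitrary numbers, not a model of any specific device) captures how a memristor crossbar can perform this operation in place: the weights are stored as device conductances, and reading the array with a voltage vector yields the matrix-vector product directly, so no weight data needs to shuttle between a separate memory and processor.

```python
import numpy as np

# Toy illustration of in-memory multiply-accumulate on a memristor
# crossbar: the weights live in the array as conductances G, and
# applying a voltage vector V reads out I = G @ V in a single step.

G = np.array([[0.8, 0.1, 0.3],   # conductances (arbitrary units),
              [0.2, 0.9, 0.4]])  # one row per output line
V = np.array([0.5, 1.0, 0.2])    # input voltages on the columns

# Ohm's law per device and Kirchhoff's current law per output line
# together give the multiply-accumulate as a physical read operation.
I = G @ V
print(I)  # summed line currents = the matrix-vector product
```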

Memristive devices (or memristors) can provide both information processing and memory4, and have been used to create a variety of neuromorphic hardware systems. (See, for example, work in this issue on the use of memristor-based Hopfield neural networks: the very networks, incidentally, developed by Mead's lunchtime colleague.) Memristors are typically based on metal oxides or phase-change materials, but can also be made from other systems, including organic materials5. Magnetic materials are another option: spintronic devices, which exploit both the electrical and magnetic properties of electrons, offer a compact and low-power way to emulate neurons and synapses.
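For readers unfamiliar with Hopfield networks, the sketch below (again our illustration, not code from the Article in this issue) shows the idea in a few lines of Python: patterns are stored in a weight matrix by Hebbian learning, and a corrupted input is recovered through updates that repeatedly compute the kind of weighted sum a crossbar read provides. The function names and parameters are purely illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: each stored pattern adds an outer product to W."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=20):
    """Asynchronous updates; each step lowers the network energy."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            # On a crossbar, this dot product is a single read operation:
            # currents summing along a line, as in the sketch above.
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-bit pattern, then recover it from a corrupted copy.
stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[:2] *= -1  # flip two bits
print(recall(W, noisy))  # converges back to the stored pattern
```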

In a Review Article in this issue, Julie Grollier and colleagues explore the potential of such neuromorphic spintronics. The researchers consider how magnetic tunnel junctions can function as synapses and neurons, and how magnetic textures, including domain walls and skyrmions, can function as neurons. They also discuss the neuromorphic computing demonstrations that have already been created with small spintronic systems, and consider the challenges involved in scaling them up.

Neuromorphic spintronics is still at a relatively early stage of development, but other approaches are nearing adolescence. In a further Review Article, Huaqiang Wu and colleagues discuss the latest advances in neuro-inspired computing chips. They examine spiking neural network chips (where information is encoded in the intervals between spikes) and artificial neural network chips (where neuron states are encoded as digital bits, clock cycles or voltage levels). These chips are typically based on CMOS technology, but can also be based on non-volatile memory technology (which includes memristive devices) — and it is this approach, the researchers argue, that shows particular promise. They outline four key metrics for evaluating the performance of the chips — computing density, energy efficiency, computing accuracy and learning capability — and propose a technological roadmap for the development of large-scale neuro-inspired computing chips based on non-volatile memory.
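As a concrete illustration of spike-based encoding, the minimal leaky integrate-and-fire neuron below (our sketch, with arbitrary parameter values, not a model of any particular chip) shows how an analogue input level maps onto inter-spike intervals: stronger inputs charge the membrane faster and so produce more closely spaced spikes.

```python
import numpy as np

def lif_spike_times(input_current, t_max=100.0, dt=0.1,
                    tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Integrate a constant input current; return the spike times."""
    v, spikes = v_reset, []
    for step in range(int(t_max / dt)):
        # Leaky integration: the membrane potential decays towards zero
        # while being charged by the input.
        v += dt * (-v / tau + input_current)
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset and integrate again
    return spikes

# A stronger input charges the membrane faster, so spikes arrive closer
# together: the analogue value is read out as an inter-spike interval.
for current in (0.15, 0.3, 0.6):
    intervals = np.diff(lif_spike_times(current))
    print(f"I = {current}: mean inter-spike interval = {intervals.mean():.1f}")
```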

The potential of neuromorphic computing, and the role it could play in addressing the increasing computational demands of AI, has also helped to reawaken interest in computer chip start-ups. In a News Feature in this issue, Sunny Bains explores these emerging companies and the technology they offer. The competition here, though, is intense. Beyond the established giants, there are numerous other start-ups focused on developing chips for machine learning and AI using relatively conventional approaches. But AI is raising questions about the best way to build computers, and the opportunities are there.