How does the brain work? How does intelligence emerge? Can we replicate it in machines? These are the questions that scientists studying natural and artificial intelligence have pondered for decades. Today, students learn concepts that have firm theoretical foundations in both fields: neural networks, reinforcement learning, attention, working memory and schemas. By improving communication between the fields, we can create mutually reinforcing cycles of insight that define research directions for both natural and artificial intelligence.

Open challenges with quantitative benchmarks are a well-suited medium for spreading insights between researchers and domains. Challenges can make disparate theoretical approaches directly comparable (that is, radically different models such as connectionist versus symbolic), provide a common platform for researchers with different expertise to work towards a common goal, promote reliability and reproducibility of results, and reduce potential social biases by allowing everyone to participate. While open challenges are well established in computer science, they are still rare in neuroscience and cognitive science. We therefore started a project this year with the ultimate goal of accounting for human brain responses, as observed in neuroimaging data, through computational models. We coined it the Algonauts Project1 (http://algonauts.csail.mit.edu/): inspired by the astronauts (that is, sailors of the stars) who launch into space to explore a new frontier, the Algonauts (that is, sailors of algorithms) set out to relate brains and computer algorithms.

The first edition of the Algonauts Project focused on visual object recognition, a topic that has long fascinated neuroscientists and computer scientists alike. In the human brain, visual object recognition is underpinned by neural operations starting with activity in the primary visual area (V1) and progressing to activity in the inferior temporal cortex (IT), a region shown to have neurons responding to complex shapes. The goal of the first challenge was to predict brain activity during visual object processing, from V1 to IT.

We provided neuroimaging data (from functional magnetic resonance imaging (fMRI) and magnetoencephalography) recorded while human participants observed photographs of natural objects. Teams taking part in the challenge could use the brain data together with the photographs observed by the participants to build and refine models that predict brain responses. We also provided a set of testing images for which the brain data was held back. Teams submitted model outputs for those held-out images, and we compared the outputs to the withheld brain activity to rank models.
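One common way to relate a computational model to brain measurements of this kind is a linear encoding model: features extracted from each image are mapped to the measured responses, and the fitted mapping is used to predict responses for held-out images. The sketch below illustrates this idea in Python; the data shapes, the random placeholder arrays and the choice of ridge regression are our own illustrative assumptions, not the official challenge pipeline.

```python
# Illustrative sketch (not the official challenge pipeline): fit a linear
# encoding model from image features to fMRI responses and predict responses
# for held-out images. All arrays and shapes here are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical data: model features and fMRI responses for the training
# images, plus features for the held-out testing images.
n_train, n_test, n_feat, n_voxels = 90, 10, 512, 1000
X_train = rng.standard_normal((n_train, n_feat))    # model features per image
Y_train = rng.standard_normal((n_train, n_voxels))  # fMRI responses per image
X_test = rng.standard_normal((n_test, n_feat))      # features, held-out images

# Map model features to voxel responses with regularized linear regression.
encoder = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)  # predicted responses for held-out images

# In the challenge, predictions like Y_pred would be submitted and compared
# against the withheld brain responses to rank models.
```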

To encourage participation and exploration, we kept the rules minimal. Teams could build any kind of model, use additional data for model building, and resubmit models frequently. However, they could not use any empirical brain data recorded for the testing set.

Teams from all around the world took part in the challenge. We quantified how well each model performed by measuring the percentage of the brain signal that it explained. A perfect model would account for 100% of the explainable variance, given the noise in the brain data. To put results into context, we used AlexNet, a deep neural network widely used in cognitive neuroscience today, as a baseline2. In our data, AlexNet accounted for only about 5–8% of the explainable variance, far from a perfect model.
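As a rough illustration of such a noise-normalized score, the sketch below squares the correlation between a model's predictions and the measured responses and divides it by a noise ceiling estimated from the split-half reliability of the data. The exact metric and ceiling estimate used in the challenge may differ; the function names and the split-half scheme here are assumptions for illustration.

```python
# Minimal sketch of a noise-normalized score (assumed scheme, not necessarily
# the official challenge metric): the share of explainable variance a model
# accounts for, given the reliability of the brain measurements.
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def noise_normalized_score(prediction, half_a, half_b):
    """Percentage of explainable variance explained by `prediction`.

    `half_a` and `half_b` are two independent measurements (splits) of the
    same brain responses; their agreement sets the noise ceiling.
    All inputs are hypothetical 1-D response vectors.
    """
    ceiling = pearson(half_a, half_b) ** 2                  # explainable variance
    r2 = pearson(prediction, 0.5 * (half_a + half_b)) ** 2  # model fit
    return 100.0 * r2 / ceiling

# A perfect model would approach 100%; in our data, AlexNet reached about 5-8%.
```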

The challenge proved successful in several ways. The best submitted models explained around 3.6 times more variance than AlexNet. Because we allowed multiple submissions per team, the improvement could in principle have been due to overfitting on the testing data. To rule this out, we checked the results against a second, hidden testing set for which each team could provide only one submission.

The best-ranked contestants came from different fields (that is, computer science, cognitive neuroscience and astrophysics) and career stages (from students to full professors). The winners shared their approaches on an open preprint server of their choice and were invited to present at the Algonauts Project workshop in July 2019.

In spite of large gains in explained variance, no model explained all of the explainable variance in the brain data. We therefore made all data openly accessible at the end of the challenge for further investigation.

The Algonauts Project continues: the next challenge is planned to open in April 2020. We aim for future challenges to address timely questions in both natural and artificial intelligence research, and to provide enough brain data to test data-hungry, complex artificial intelligence models. Action understanding is one such topic, and the Algonauts 2020 challenge will provide whole-brain fMRI activity for over a thousand short video clips. The winners will be invited to present their approach at a special event of the 2020 Cognitive Computational Neuroscience conference.

Do you accept the challenge?