Abstract
Much development has been directed towards improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, we developed SpikeInterface, a Python framework designed to unify preexisting spike sorting technologies into a single codebase and to facilitate straightforward comparison and adoption of different approaches. With a few lines of code, researchers can reproducibly run, compare, and benchmark most modern spike sorting algorithms; pre-process, post-process, and visualize extracellular datasets; validate, curate, and export sorting outputs; and more. In this paper, we provide an overview of SpikeInterface and, with applications to real and simulated datasets, demonstrate how it can be utilized to reduce the burden of manual curation and to more comprehensively benchmark automated spike sorters.
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
In this revision, we present a new analysis that substantially extends our previous finding that different spike sorters show surprisingly little agreement when run on the same data. A major new finding we report here is that a consensus sorting derived from multiple sorters is a very effective method for removing false positives from these datasets. We demonstrate this using both synthetic and manually curated datasets. Importantly, the software framework we developed and present here makes this type of analysis easy and accessible to non-experts.
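To make the consensus idea concrete, the sketch below illustrates one way such an analysis can work: units are matched across sorters by an agreement score (intersection over union of spike times within a small tolerance), and only units corroborated by a minimum number of sorters are kept. This is an illustrative, self-contained sketch, not SpikeInterface's actual implementation; the function names, the dict-based representation of a sorting, and the default thresholds are assumptions made for the example.

```python
def agreement_score(train_a, train_b, tol=3):
    """Agreement between two sorted spike-time lists.

    A spike in `train_a` matches at most one spike in `train_b` lying
    within `tol` samples. Score = matches / (len(a) + len(b) - matches),
    i.e. intersection over union of the two spike trains.
    """
    matches, j = 0, 0
    for t in train_a:
        # Advance past spikes in train_b that are too early to match t.
        while j < len(train_b) and train_b[j] < t - tol:
            j += 1
        if j < len(train_b) and abs(train_b[j] - t) <= tol:
            matches += 1
            j += 1  # Each spike in train_b matches at most once.
    denom = len(train_a) + len(train_b) - matches
    return matches / denom if denom else 0.0


def consensus_units(sortings, min_sorters=2, min_score=0.5, tol=3):
    """Keep units of the first sorting found by >= `min_sorters` sorters.

    `sortings` is a list of dicts mapping unit_id -> sorted spike times
    (an assumed toy representation of a sorter's output).
    """
    kept = {}
    for uid, train in sortings[0].items():
        n_agree = 1  # The reference sorter itself counts as one vote.
        for other in sortings[1:]:
            if any(agreement_score(train, t, tol) >= min_score
                   for t in other.values()):
                n_agree += 1
        if n_agree >= min_sorters:
            kept[uid] = train
    return kept


# Toy usage: two sorters agree on one unit; the unmatched unit "u2"
# (a putative false positive) is removed from the consensus.
sorter1 = {"u1": [10, 50, 90], "u2": [5, 25]}
sorter2 = {"a": [11, 49, 91]}
consensus = consensus_units([sorter1, sorter2], min_sorters=2)
print(sorted(consensus))  # -> ['u1']
```

In this simplified scheme, requiring agreement from at least two sorters filters out spuriously detected units, mirroring the false-positive removal described above; a full treatment would also handle merged and split units, which this sketch ignores.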
https://gui.dandiarchive.org/#/dandiset/5f1df18ef63d62e1dbd0694a