
REVIEW article

Front. Comput. Neurosci., 10 February 2021
Volume 15 - 2021 | https://doi.org/10.3389/fncom.2021.611183

Criticality, Connectivity, and Neural Disorder: A Multifaceted Approach to Neural Computation

  • 1Department of Computer Science, Oslo Metropolitan University, Oslo, Norway
  • 2Department of Computer Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
  • 3Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
  • 4Department of Clinical Neuroscience, Umeå University Hospital, Umeå, Sweden
  • 5Department of Neurology, St. Olav's Hospital, Trondheim, Norway
  • 6Department of Holistic Systems, Simula Metropolitan, Oslo, Norway

It has been hypothesized that the brain optimizes its capacity for computation by self-organizing to a critical point. The dynamical state of criticality is achieved by striking a balance such that activity can effectively spread through the network without overwhelming it and is commonly identified in neuronal networks by observing the behavior of cascades of network activity termed “neuronal avalanches.” The dynamic activity that occurs in neuronal networks is closely intertwined with how the elements of the network are connected and how they influence each other's functional activity. In this review, we highlight how studying criticality with a broad perspective that integrates concepts from physics, experimental and theoretical neuroscience, and computer science can provide a greater understanding of the mechanisms that drive networks to criticality and how their disruption may manifest in different disorders. First, integrating graph theory into experimental studies on criticality, as is becoming more common in theoretical and modeling studies, would provide insight into the kinds of network structures that support criticality in networks of biological neurons. Furthermore, plasticity mechanisms play a crucial role in shaping these neural structures, both in terms of homeostatic maintenance and learning. Both network structures and plasticity have been studied fairly extensively in theoretical models, but much work remains to bridge the gap between theoretical and experimental findings. Finally, information theoretical approaches can provide more concrete evidence of a network's computational capabilities. Approaching neural dynamics with all these facets in mind has the potential to provide a greater understanding of what goes wrong in neural disorders. Criticality analysis therefore holds potential to identify disruptions to healthy dynamics, provided that robust methods and approaches are used.

Introduction

Researchers have long grappled with the question of how the brain is able to process information, and many have recently turned to studying brain dynamics armed with tools from statistical physics and complexity science. In many physical systems, such as magnetic or gravitational systems, certain macroscopic features arise from the interactions of the constituent elements in a way that is unpredictable even from a perfect understanding of the behavior of each component; this is known as emergence (Chialvo, 2010). In the context of the brain, emergent phenomena encompass behavior and cognition, arising from the interaction of the vast number of neurons in the brain. Approaching the study of neural systems from this perspective entails studying neuronal behavior at the network or population level—observing and understanding emergent behaviors in the system rather than zeroing in on the behavior and connections of each individual neuron on its own. While exhibiting some computational power on their own, neurons are truly remarkable in their computational capacity when taken collectively.

It is hypothesized that the cortex may optimize its capacity for computation by self-organizing to a critical point (Beggs, 2008; Chialvo, 2010; Plenz, 2012; Shew and Plenz, 2013; Cocchi et al., 2017; Muñoz, 2018; Wilting and Priesemann, 2019a). Criticality is a dynamical state poised between order and disorder, or, more precisely, a transition between an absorbing phase in which activity gradually dies out and an active phase in which activity perpetuates indefinitely (Brochini et al., 2016). Critical systems must necessarily contain a large number of interacting non-linear components, though these conditions are not sufficient to ensure criticality; in the space of possible system states, criticality occupies a vanishingly small region, with chaotic and quiescent systems at the two opposite extremes (Brochini et al., 2016; Muñoz, 2018). A system operating in the critical state shows complex spatiotemporal behavior, and there is no scale, in space or time, that dominates the behavioral patterns of the system. That is, taking a closer or wider view of the system will show some variant of the same snapshot of the behavior. This mode of behavior is manifested by spatial and temporal correlations scaling as a power law over several orders of magnitude, giving rise to the presence of self-similar fractal-like structures over many scales. The brain exhibits complex spontaneous activity that crosses many time scales, a feature associated with criticality, and this activity is postulated to contribute to how the brain responds to stimuli and processes information.

In this review, we highlight network features evidenced to contribute to the emergence of critical dynamics in neural systems and discuss the benefits of experimentally studying the interplay between these features. Crucially, we also note here that care must be taken when extrapolating from theoretical findings on criticality to the more recent experimental research on criticality in neural systems observed at different scales. In particular, experimental explorations of criticality in neural systems point to the importance of considering the structures (Massobrio et al., 2015) and plastic mechanisms (Ma et al., 2019) that support this dynamical regime. Thus, critical dynamics in neuronal networks may be better understood by characterizing their connectivity and how this connectivity changes over time or in response to inputs and perturbations. Additionally, as discussed in detail by Shew and Plenz (2013), there is also much to be learned about the computational and functional benefits that criticality confers; thus, complementing graph theoretical and criticality metrics with an information theoretical approach can further shed light on the functional benefits of this dynamical regime.

Note that we aim here to focus on relevant considerations for empirical assessments of criticality in biological neural systems, particularly at the network level, and on how experimentalists may build upon the existing theoretical foundations to address the criticality hypothesis from a data-driven perspective. This review thus aims to provide the reader with basic insights on criticality and how it relates to neuroscience, rather than an in-depth discussion of the physics of criticality. Furthermore, we approach modeling studies with an eye on how they can inform our understanding of experimental systems but do not exhaustively review the vast field of model neural systems, as this is a topic of review unto itself.

In the remainder of this section, we present an overview of the theoretical benefits of criticality and experimental evidence supporting its emergence in living neural systems. The next section then focuses on the intersection between network neuroscience and criticality and discusses the connectivity features that can support critical dynamics. In the subsequent section, we consider the plasticity mechanisms that allow these networks to form, learn in response to inputs, and remain stable against perturbation or failures in the network. Finally, we conclude with a discussion of how approaching the study of criticality with a diversity of perspectives may prove more fruitful than any single directed approach.

Why Is Criticality Important?

The term self-organized criticality was coined as such to reflect the similarity of this phenomenon with the critical point observed in phase transitions in statistical mechanics, wherein a parameter, such as temperature, can be tuned to bring the system to a state between multiple phases of matter (Bak et al., 1988). However, a crucial point that distinguishes self-organized criticality from the conventional critical point in statistical mechanics is that the system tunes itself to criticality without the need for external tuning via a control parameter. In a self-organized critical system, the critical point is an attractor, meaning the system tends to evolve toward that point from a wide range of starting points; to again consider the parallel with thermodynamic criticality, if a thermodynamic system were to show self-organized criticality, intrinsic mechanisms would drive the system to return to the critical point between the liquid, solid, and gaseous phases. Obviously this is not the case, as matter in each of these phases can exist stably, but there are many fascinating properties conferred by criticality, as we will discuss in this section.

Many natural systems have been observed to show critical or critical-like behavior (Paczuski et al., 1996; Chialvo, 2010), including forest fires (Malamud et al., 1998; Buendía et al., 2020; Palmieri and Jensen, 2020) and flocks of birds (Cavagna et al., 2010), which has led researchers to explore the possibility of a similar phenomenon in the brain in a conjecture known as the criticality hypothesis (Beggs and Plenz, 2003; Beggs, 2008). This hypothesis states that the brain self-organizes into the critical state in order to optimize its computational capabilities.

The canonical sandpile model by Bak et al. (1987, 1988) describes a system slowly driven by the addition of grains of sand until an instability occurs and the sand is redistributed to restabilize the system. Because of the dynamical minimal stability of the system, the chain reactions set off by the external drive, with sand traveling from site to site until the system restabilizes, called “avalanches,” display the self-similar power-law scaling mentioned above. This means that the disruption of a single element in the system has a small but non-zero chance to change the state of the whole system. Work on emergent properties of dynamical systems has indicated that systems tuned to the critical regime show optimal information processing capabilities (Langton, 1990; Shew et al., 2009, 2011; Plenz, 2012; Shew and Plenz, 2013).
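
The avalanche concept can be made concrete with a minimal simulation of the two-dimensional sandpile model; the grid size, number of grains, and toppling threshold below are illustrative choices rather than values taken from the original studies.

```python
import numpy as np

def btw_sandpile(size=30, n_grains=20_000, threshold=4, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile: drive slowly with single grains,
    relax unstable sites, and record the size of each resulting avalanche."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)
    avalanche_sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(0, size, 2)   # slow external drive: one grain at a time
        grid[i, j] += 1
        n_topplings = 0
        while np.any(grid >= threshold):  # chain reaction ("avalanche")
            for x, y in np.argwhere(grid >= threshold):
                grid[x, y] -= threshold
                n_topplings += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < size and 0 <= ny < size:
                        grid[nx, ny] += 1  # grains at the edges fall off and are lost
        if n_topplings:
            avalanche_sizes.append(n_topplings)
    return avalanche_sizes

# After a transient, avalanche sizes are expected to be power-law distributed
# over several orders of magnitude (up to finite-size cutoffs).
sizes = btw_sandpile()
```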

To provide a conceptual image of the complex behavior that can arise from even very simple parts and lay a foundation for how computation can emerge in such a system, we consider as an illustrative example the binary cellular automaton (CA), a system consisting of many stationary binary cells whose behavior is influenced by the states of the cells in their immediate neighborhood. Box 1 gives a brief definition of the CA and summarizes the important findings obtained by Langton (1990) in his work on how computation may emerge in physical systems at the “edge of chaos.” It should be noted, however, that although this regime was initially assumed to exhibit a continuous second-order phase transition, as classically described for critical dynamics, it has recently been found to show a discontinuous first-order transition (Reia and Kinouchi, 2014, 2015). In a vanishingly small region in the space of all possible CAs of the type Langton (1990) considered, quite interesting behavior emerges: complex patterns of activity are preserved over long distances in space and time. In the regime between the two extremes of quiescence and disorder (Kinouchi and Copelli, 2006), the system optimizes its capacity to perform the functions of information transmission, modification, and storage that are necessary to support computation.

Box 1. Cellular automata at the “edge of chaos.”

A binary cellular automaton (CA) is an n-dimensional array of binary cells whose states are updated synchronously in discrete time steps. The state of each cell at time t + 1 depends on the states of the cells in its neighborhood at time t. Such CAs are among the simplest systems to show complex behavior. Langton (1990) used the binary CA as a lens to assess the conditions under which a physical system may show the capacity to support computation. In a sweep of the possible rulesets for a one-dimensional binary CA, he demonstrated that a small subset of rules produce behavior compatible with the necessary tenets of computation, namely, the storage, transmission, and modification of information.

Examples of different “classes” of CA (Wolfram, 1984) corresponding to different dynamical regimes are shown in Figure 1, with class IV representing a transitional state analogous to criticality. Langton (1990) also demonstrated that these CAs occupy a small region of the rule space, where mutual information is maximized at a point of intermediate entropy. This maximal mutual information indicates that these CAs at the “edge of chaos” have struck a balance between the competing needs of information storage, which requires low entropy, and information transmission, which requires high entropy, thereby allowing complex patterns of activity to propagate through the system over time and space without rapidly dying out or overwhelming the system. Despite its simplicity, the CA demonstrates how some sets of rules balancing quiescence and transmission can lead to complex patterns that allow for the transfer of information, an appealing property for neural systems.
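
The following is a minimal sketch of such a one-dimensional binary CA; the Wolfram rule numbers used as examples and the grid dimensions are illustrative choices, and for a binary CA Langton's λ reduces to the fraction of rule-table entries that map to the non-quiescent state.

```python
import numpy as np

def elementary_ca(rule, width=201, steps=100):
    """Run a 1D binary CA (Wolfram rule 0-255) from a single active center cell.
    Each row of the returned array is one time step, as in Figure 1."""
    # Rule table: the neighborhood (left, center, right), read as a 3-bit number v,
    # maps to output bit v of the rule number.
    table = np.array([(rule >> v) & 1 for v in range(8)], dtype=np.uint8)
    state = np.zeros(width, dtype=np.uint8)
    state[width // 2] = 1
    history = [state.copy()]
    for _ in range(steps):
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = table[(left << 2) | (state << 1) | right]
        history.append(state.copy())
    return np.array(history)

def langton_lambda(rule):
    """With state 0 taken as the quiescent state, lambda for a binary CA is simply
    the fraction of rule-table entries mapping to the non-quiescent state."""
    return bin(rule).count("1") / 8.0

# Rules often used as examples of Wolfram's classes: 250 (ordered),
# 30 (class III, chaotic), and 110 (class IV, complex).
for rule in (250, 30, 110):
    print(f"rule {rule}: lambda = {langton_lambda(rule):.3f}")
history = elementary_ca(110)   # rows of `history` reproduce plots like Figure 1
```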


Figure 1. Illustrative examples of the behavior observed in different classes of one-dimensional binary CAs. In these CAs, each row represents the CA at a given time step, and the two states of the cells are represented by black and white. Complex behavior arises in the critical regime, which becomes vanishingly small as the system increases in size. Langton (1990) characterized these CAs with the λ parameter, which represents the fraction of rule-table transitions leading to states other than an arbitrarily selected “quiescent state.” Adapted from Langton (1990).

Although the transition hypothesized to occur in neural systems is distinct from the “edge-of-chaos” transition shown in Box 1, the features captured in this simpler system can help us understand why the dynamics at phase transitions may be relevant for information processing in the brain. In the transitional regime, patterns of activity are preserved over space and time, which means that spatially disparate elements of the system can communicate with each other and that informational representations are propagated in time. Different inputs produce distinguishable outputs, allowing systems near criticality to respond to stimuli in a meaningful way. These concepts underlie how information is encoded and transmitted in dynamical systems at criticality and highlight how studying criticality in experimental studies on neural computation can inform our understanding of how the brain processes information.

What does this have to do with computation? In a general sense, computation is a process, either natural or artificial, by which information is communicated and manipulated to produce some kind of meaningful behavior in a system (Denning, 2007). More concretely, computation is the act of solving a “computational problem”: a set of related questions with given information (input), each with its own distinct answer (output). Criticality has been found to optimize characteristics related to better performance at solving computational problems. For example, recurrent network models showing critical dynamics outperform their sub- and supercritical counterparts in terms of their input-to-output mappings; that is, the outputs produced from different inputs are more separable, or distinguishable, in critical networks (Bertschinger and Natschläger, 2004). Critical systems also show a maximal dynamic range (Kinouchi and Copelli, 2006; Gautam et al., 2015), which is the span of inputs distinguishable by the system. Additionally, the number of metastable states is maximized in networks with a critical branching ratio (Haldeman and Beggs, 2005), where a metastable state is defined as a cluster of similar output patterns produced by the same input. Information transfer and storage, represented, respectively, by the information shared between a source node and a destination node and that between a node's past and future states, is also optimized at criticality (Boedecker et al., 2012).
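
As a rough illustration of the dynamic range measure, the sketch below simulates a toy excitable network loosely in the spirit of Kinouchi and Copelli (2006); the random k-out connectivity, five-state units, parameter values, and the simple interpolation used to locate the 10% and 90% response points are all simplifying assumptions.

```python
import numpy as np

def response_curve(h_values, n=500, k=10, sigma=1.0, n_states=5, t_max=300, seed=1):
    """Mean activity of a toy excitable network as a function of external input rate h.
    A quiescent unit fires with probability 1 - exp(-h) per step (external drive) or
    when stimulated by an active neighbor; active units then pass through refractory
    states. With k random targets per unit and transmission probability p = sigma / k,
    the branching ratio is roughly sigma."""
    rng = np.random.default_rng(seed)
    p = sigma / k
    targets = rng.integers(0, n, size=(n, k))   # random k-out connectivity (illustrative)
    responses = []
    for h in h_values:
        state = np.zeros(n, dtype=int)          # 0 quiescent, 1 active, 2..n_states-1 refractory
        active_frac = []
        for t in range(t_max):
            new_state = np.where(state > 0, (state + 1) % n_states, 0)
            quiescent = state == 0
            driven = quiescent & (rng.random(n) < 1.0 - np.exp(-h))
            excited = np.zeros(n, dtype=bool)
            for unit in np.flatnonzero(state == 1):
                excited[targets[unit][rng.random(k) < p]] = True
            fired = quiescent & (driven | excited)
            new_state[fired] = 1
            state = new_state
            if t > t_max // 2:                  # discard the transient
                active_frac.append(fired.mean())
        responses.append(np.mean(active_frac))
    return np.array(responses)

h = np.logspace(-4, 0, 12)
F = response_curve(h)
# Dynamic range: decades of input spanned between 10% and 90% of the maximal response
# (coarse interpolation, assuming a monotonic response curve).
h10 = np.interp(0.1 * F.max(), F, h)
h90 = np.interp(0.9 * F.max(), F, h)
print("dynamic range (dB):", round(10.0 * np.log10(h90 / h10), 1))
```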

Although criticality is recognized to optimize many properties associated with computation, as discussed above, it should be noted that some properties associated with criticality may run counter to computational function in the sense described here (Wilting and Priesemann, 2019a). For example, the maximal dynamic range of a system in the critical state causes the specificity of the system to suffer; that is, a system that can sensitively respond to a wide range of inputs also shows more overlap between responses to similar inputs (Gollo, 2017). Thus, recent research has shifted from a singular focus on criticality to a broader realm of dynamical possibilities, including heterogeneous networks composed of both critical and slightly subcritical subgroups (Gollo, 2017), the presence of a “reverberating regime,” enabling the task-dependent switching or combining of critical and slightly subcritical dynamics to enjoy the benefits of both states (Wilting et al., 2018), and the concept of self-organized quasi-criticality, which accounts for non-conservative dynamics in systems that show critical-like behavior over a finite range of scales (Bonachela and Muñoz, 2009; Bonachela et al., 2010; Buendía et al., 2020; Kinouchi et al., 2020). These findings show promise for the advancement of a more detailed and physically accurate view of how criticality is realized in living neural systems; however, we refrain in this review from venturing too far into the details of these topics and direct the interested reader to the cited literature.

Experimental Evidence of Criticality in Neural Systems

Beggs and Plenz (2003) were the first to experimentally demonstrate that the spontaneous behavior of in vitro cortical networks displays features consistent with critical dynamics in their landmark study on neuronal avalanches in cortical slices interfaced with microelectrode arrays (MEAs). At its most general, a neuronal avalanche extends over the duration of persistent activity propagating through the network and is punctuated by silent periods preceding and following the active period, as shown in Figure 2A. In the case of in vitro systems (i.e., slices or dissociated cultures), “activity” may refer to either the higher-frequency spikes or the lower-frequency local field potentials (LFPs), as both modalities have been studied (e.g., Beggs and Plenz, 2003; Pasquale et al., 2008). Criticality has also been studied at the macroscale using electroencephalography (EEG) (e.g., Meisel et al., 2013; Lee et al., 2019).


Figure 2. Definition of a neuronal avalanche and examples of empirical measures of criticality. (A) Definition of a neuronal avalanche. The top panel shows a raster plot divided into time bins, and the avalanche in the plot spans six active frames preceded and followed by inactive frames. An alternate view of the activity in the six frames is shown below, where each square represents an active electrode in an 8 × 8 grid. The bottom panel shows the definition of the avalanche shape, which is obtained by taking the number of active electrodes in each frame. (B) Illustration of the branching ratio. Blue nodes are active, and gray are inactive. A branching ratio of 1 allows activity to persist without overwhelming the system. (C) Shape collapse. In a critical system, all avalanches should show the same mean temporal shape profile across different size scales. Adapted from Marshall et al. (2016).

Regardless of the scale or method of data collection, one main hallmark of criticality is that neuronal avalanches show power-law scaling in both space and time, with sub- and supercritical behavior being characterized by exponential and bimodal distributions, respectively. Beggs and Plenz (2004) demonstrated that neuronal avalanches show diverse spatiotemporal patterns that are stable over several hours, highlighting their capacity to represent a wide range of information in a reproducible manner. It should be noted that a simple power-law fitting alone is not sufficient to identify criticality (Goldstein et al., 2004; Priesemann and Shriki, 2018); in fact, this was far from the only approach used by Beggs and Plenz (2003), who also evaluated the branching ratio and the effect of using only a subset of all recording points. In addition to the methods first used by Beggs and Plenz (2003), a number of criticality measures have since been put forward. Providing empirical evidence of criticality is challenging, and it is suggested that a range of measures be applied (Priesemann and Shriki, 2018); we present a selection of some such measures in Box 2.
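
In practice, avalanches are typically obtained from recordings by pooling events across electrodes, binning them in time, and splitting the binned activity at empty bins, as illustrated in Figure 2A. A minimal sketch of this procedure is given below; the function name, toy data, and bin width are illustrative rather than a prescribed pipeline.

```python
import numpy as np

def extract_avalanches(event_times, duration, bin_width):
    """Bin pooled event times (spikes or nLFPs from all electrodes) and split the
    activity into avalanches separated by empty bins. Here size is the total event
    count per avalanche; it can alternatively be defined as the number of distinct
    active electrodes. bin_width is typically chosen close to the average
    inter-event interval (see Box 2)."""
    n_bins = int(np.ceil(duration / bin_width))
    counts, _ = np.histogram(event_times, bins=n_bins, range=(0.0, duration))
    sizes, durations = [], []
    size, length = 0, 0
    for c in counts:
        if c > 0:                 # still inside an avalanche
            size += c
            length += 1
        elif length > 0:          # an empty bin closes the current avalanche
            sizes.append(size)
            durations.append(length)
            size, length = 0, 0
    if length > 0:                # avalanche still running at the end of the recording
        sizes.append(size)
        durations.append(length)
    return np.array(sizes), np.array(durations)

rng = np.random.default_rng(0)
spike_times = np.sort(rng.uniform(0.0, 60.0, size=5000))   # toy pooled spike train (s)
sizes, durations = extract_avalanches(spike_times, duration=60.0, bin_width=0.004)
```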

Box 2. Experimental metrics of criticality.

To pursue further investigation of the criticality hypothesis, we must be armed with the appropriate tools for identifying a network's dynamic state. Power laws are notoriously challenging to handle in empirical data (Clauset et al., 2009) and can also arise in non-critical systems (Martinello et al., 2017; Touboul and Destexhe, 2017; Priesemann and Shriki, 2018). Thus, additional measures are needed to accurately identify when a network is in the critical state, and each measure should also be applied to appropriate null models for comparison. This box lists the main approaches currently used to identify criticality from empirical data, but it should be noted that the development of such methods remains an active area of research. Some of the measures listed below have been implemented in a freely available MATLAB toolbox called the Neural Complexity and Criticality Toolbox (Marshall et al., 2016), and detailed statistical analysis for fitting and analyzing power-law distributions can be performed with an open Python package called powerlaw (Alstott et al., 2014).

Power-law scaling of neuronal avalanches: One hallmark of criticality in neuronal networks is the power-law scaling of the size S and duration T of neuronal avalanches. That is, P(S) ∝ S^(−α) and P(T) ∝ T^(−β), where P(·) is the probability distribution function. The size is generally defined as the number of activated electrodes or neurons, and the duration is the number of active time bins. When the time bin width is selected to correspond to the average inter-spike interval, the power law exponents of the size and duration have been shown to be approximately α = 1.5 and β = 2.0. However, the power-law scaling should persist across a range of temporal resolutions close to the order of magnitude of the average inter-spike interval, with the exponent α changing systematically with the selected time bin size (Beggs and Plenz, 2003; Pasquale et al., 2008). Power-law scaling should also remain when a more coarse-grained spatial resolution is considered, by using only a subset of all recording points. As stated above, there is an open Python package called powerlaw that can be used for detailed statistical analysis of power-law distributions (Alstott et al., 2014). As an additional power-law related metric, the κ parameter (Shew et al., 2009) gives a quantitative measure of the difference between the experimental and fitted cumulative probability distributions when using power-law fitting.
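
As a brief, self-contained illustration of such a fit, the snippet below applies the powerlaw package to synthetic Zipf-distributed data standing in for avalanche sizes; for real data, the sizes from an extraction step like the sketch above would be used instead.

```python
import numpy as np
import powerlaw  # Alstott et al. (2014); pip install powerlaw

# Synthetic heavy-tailed data used only so that the snippet runs on its own.
sizes = np.random.default_rng(0).zipf(1.5, size=5000)

fit = powerlaw.Fit(sizes, discrete=True)
print("fitted exponent:", fit.power_law.alpha, "| xmin:", fit.power_law.xmin)

# A power law should be compared against plausible alternatives rather than accepted
# on its own; a positive log-likelihood ratio R with a small p favors the power law.
R, p = fit.distribution_compare("power_law", "lognormal")
print("power law vs. lognormal: R =", R, " p =", p)
```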

Branching ratio: The branching ratio σ is the ratio of the number of descendants to the number of ancestors, where activity on an ancestor electrode or neuron immediately precedes activity on a descendant electrode or neuron (Beggs and Plenz, 2003). A system in the critical state has a branching ratio of approximately 1, allowing activity to flow through the network without dying out (σ < 1) or overwhelming the entire network (σ > 1), as shown in Figure 2B. A modified version of the branching ratio that is specific to LFP data has also been introduced, where the ratio is instead taken between the baseline-to-baseline areas of the negative LFP deflections (nLFPs) in successive time bins, rather than the number of nLFPs (Plenz, 2012). The nLFP area is correlated with the number of neurons firing and thus provides a better measure of group activity during an avalanche than the nLFP count.
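
A naive estimate of σ from binned population activity can be sketched as follows; this simple descendants-per-ancestor estimator is only illustrative and, as noted under spatial subsampling below, can be strongly biased for subsampled recordings.

```python
import numpy as np

def naive_branching_ratio(counts):
    """Crude estimate of sigma from binned population activity: the average number
    of descendants (events in bin t+1) per ancestor (events in bin t), computed
    over bins that contain activity."""
    counts = np.asarray(counts, dtype=float)
    ancestors, descendants = counts[:-1], counts[1:]
    mask = ancestors > 0
    return float(np.mean(descendants[mask] / ancestors[mask]))
```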

Shape collapse: When a system is in the critical state, avalanches should show the same mean temporal profile across scales. The temporal profile of an avalanche represents the number of active sites as a function of time, and for a system in the critical state, the temporal profiles of all avalanches collapse onto the same profile shape when spatiotemporally scaled with a scaling exponent γ close to 2 (Figure 2C), as described by 〈S〉(T) ∝ T^γ, where 〈S〉(T) is the average size of all avalanches of a given duration T. Details can be found in Sethna et al. (2001) and Friedman et al. (2012), and an experimental demonstration of shape collapse in non-human primates can be found in Miller et al. (2019). The deviation from criticality coefficient (DCC) by Ma et al. (2019) is related to the concept of shape collapse and is computed from the difference between the scaling exponent γ calculated from empirical data using linear regression and the value expected from the power-law exponents of the avalanche size and duration distributions.
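
A rough sketch of a DCC-style computation is given below, under the assumption that the size and duration exponents α and β have already been estimated (e.g., with power-law fits as above); the function name and regression details are illustrative.

```python
import numpy as np

def deviation_from_criticality(sizes, durations, alpha, beta):
    """DCC-style check (after Ma et al., 2019): compare the measured exponent of
    <S>(T) ~ T^gamma, obtained by linear regression in log-log space, with the
    value predicted from the size and duration exponents, (beta - 1) / (alpha - 1)."""
    sizes, durations = np.asarray(sizes), np.asarray(durations)
    unique_T = np.unique(durations)
    mean_S = np.array([sizes[durations == T].mean() for T in unique_T])
    gamma_measured = np.polyfit(np.log10(unique_T), np.log10(mean_S), 1)[0]
    gamma_predicted = (beta - 1.0) / (alpha - 1.0)
    return abs(gamma_measured - gamma_predicted)
```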

Spatial subsampling: Because of the nature of observing neuronal systems, only a subset of the system components can be sampled. This spatial subsampling can sometimes lead to erroneous conclusions about the nature of the system's underlying dynamics. Methods involving the scaling of spatial subsampling (Levina and Priesemann, 2017) and a subsampling-invariant estimator (Wilting and Priesemann, 2018) have been developed to allow for the evaluation of dynamic states of subsampled systems.

Other measures: Some researchers have developed other quantitative measures to describe the dynamical state of the system. One notable example is the use of statistical scaling laws related to a phenomenon called “critical slowing down,” which refers to the tendency for systems to require more time to recover from a perturbation the closer they are to criticality (Meisel et al., 2015a). Additionally, detrended fluctuation analysis (DFA) offers a framework to understand scale-free oscillations in a range of systems (Hardstone et al., 2012).

Avalanche behavior during development was first observed in organotypic cortical cultures by Stewart and Plenz (2008), who found that avalanches persisted throughout development over periods of up to 6 weeks in vitro, despite large changes in activity levels, suggesting homeostatic regulation to maintain this mode of activity. It has also been demonstrated that dissociated cortical networks may self-organize into the critical state after a period of maturation, though not all such networks reach the critical state and reports on the time course of maturation differ (Pasquale et al., 2008; Tetzlaff et al., 2010; Yada et al., 2017). The reported results on dissociated networks suggest that after a period of low activity, networks tend to pass through periods of first subcritical then supercritical behavior before settling into the critical state. This behavior has been hypothesized to stem from an initial overproduction of connections followed by a period of pruning excess connections (Pasquale et al., 2008; Yada et al., 2017). Additionally, experiments in which chemical perturbation is applied to increase excitation or inhibition in the network indicate that networks at criticality exhibit a balanced excitation-to-inhibition (E/I) ratio (Shew et al., 2009, 2011; Heiney et al., 2019). Together, these experimental findings point to the importance of a balance in both network structure and network dynamics to achieve criticality.

Shew et al. (2009, 2011) have explicitly linked the dynamic state of a cortical network with its information processing capacity by demonstrating that networks at criticality show maximal dynamic range, information transmission, and information capacity in comparison with their counterparts in the sub- and supercritical states. These properties harken back to the original requirements posed by Langton (1990) for a system to be capable of computation and further emphasize the role of the dynamical state in governing the functional behavior of a neuronal network. These studies highlight the functional benefits conferred by the critical state and give credence to the criticality hypothesis (Shew and Plenz, 2013). But how does a system organize itself to become capable of supporting critical dynamics? In the following sections we explore the relationship between the structure of a network and its dynamical behavior and consider the plasticity mechanisms that form and maintain target structures.

Criticality and Networks

Biological neural networks are interconnected networks of individual information processing units (neurons). When considering how information is processed within the network, it is vital to understand the interactions of the individual units, the organization of the brain network, and the integration of activity of widely distributed neurons (Bressler and Menon, 2010; van den Heuvel and Sporns, 2013). Underlying the aggregate activity of groups of neurons are the structural and functional connectivity of the network, which determine where signals pass and which neurons act in concert (Sporns, 2002; Womelsdorf et al., 2007). This in turn influences the information processing capabilities of neural networks, and network structure therefore contributes to determining the emergence of critical properties in neural networks. The question is then how the organization of biological neural networks can support critical dynamics to optimize computational efficiency. This section examines a selection of experimental and simulation-based studies that address this question.

Network Neuroscience

The application of modern network science to the brain and networks of neurons has flourished over the past two decades. The complex network of the brain has information processing as its primary goal and attempts to maximize this capacity while under multiple constraining influences, such as availability of space, energy, and nutrients, and thus must strike a balance between computational capacity and wiring cost (Laughlin and Sejnowski, 2003; Cuntz et al., 2010). These two factors are often in a tradeoff relationship; for example, direct connections facilitate the most effective signal transmission, but the long-range connections this requires are very costly to grow and maintain (Buzsáki et al., 2004). Additionally, the network must meet the changing demands of the organism while remaining resilient to damage to or failure of parts of the network, such as the loss of neurons or the connections between them (Pan and Sinha, 2007). Some of the basic network science principles that are commonly applied in neuroscience studies are highlighted in Box 3.

Box 3. Benefits of network topology.

Network neuroscience encompasses an approach to studying brain function that considers the ways in which neurons communicate, anatomically and functionally, across multiple scales (Bassett and Sporns, 2017). It is informed by complex systems theory, which states that the emergent behavior of a system cannot necessarily be understood simply by the properties of its individual components. It further applies mathematical techniques such as graph theory and algebraic topology to describe networks (graphs) in terms of their individual units (nodes) and their connections (edges). A node in this context can be a brain area, a single neuron, a recording electrode, a voxel, pixel, or any unit which describes the activity of a discrete part of a neural network. Edges can be physical connections obtained from connectivity mapping or functional connectivity based on correlation or other measures (Bullmore and Sporns, 2009). This approach has yielded great insight into how the brain is organized and how communication within brain networks occurs (Newman, 2003). While numerous methods exist for extracting the structural, functional, effective, weighted, or binary networks from living neural systems (for a review see Bullmore and Sporns, 2009; Bastos and Schoffelen, 2016; Hallquist and Hillary, 2018), there are several features among these considered to play an important role in neuronal organization and function.

Small-World Network: A small-world network is typically defined by how closely it approaches the small world ideal of high clustering and low characteristic path length (Watts and Strogatz, 1998). One way to produce a small-world network is to begin with a regular (or lattice) network, where each node is connected only with its nearest neighbors, and, with probability p, rewire each connection in the network to a randomly chosen node elsewhere in the network. When p = 1, every connection is rewired, and the result is a random network. However, at intermediate rewiring probabilities, the characteristic path length drops off drastically, showing that only a few long-range connections are necessary to facilitate the integration of the network. Additionally, the clustering of nodes remains high, retaining the local specialization of the original regular network. These properties make small-world networks highly advantageous for computation while reducing wiring cost (Chklovskii et al., 2002), essentially reducing the number of connections without sacrificing the capacity for network-wide communication (Bassett and Bullmore, 2017).
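
The clustering and path-length behavior described here can be reproduced in a few lines with networkx; the network size and neighborhood degree below are arbitrary illustrative choices.

```python
import networkx as nx

n, k = 1000, 10  # nodes and nearest neighbors in the initial ring lattice
for p in (0.0, 0.01, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"p = {p:<4}  clustering = {C:.3f}  characteristic path length = {L:.2f}")
# At intermediate p, the path length drops toward the random-graph value while
# clustering stays close to that of the regular lattice: the small-world regime.
```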

Scale-Free Network: In a scale-free network, the probability distribution of the node degree, which is the number of edges connected to each node in the network, follows a power law, meaning most nodes have a small number of edges and few nodes have many (Barabási and Albert, 1999; Eguíluz et al., 2005). These high-degree nodes are often called hubs, and they serve an important role in integration across the network (Sporns et al., 2004). Hubs make a scale-free network more robust to random deletion of nodes but susceptible to targeted damage of the hub nodes (Albert et al., 2000). Especially vulnerable (but not exclusive to scale-free networks) is the rich club (Zhou and Mondragón, 2004), a group of hubs with a high degree of interconnectivity between each other. While there is growing evidence for the presence of rich-club topology in the brain (Griffa and Van den Heuvel, 2018; Kim and Min, 2020), the presence of scale-free topology (Bonifazi et al., 2009) is still somewhat controversial, but it provides important insight in modeling studies of dynamics on network topology (Broido and Clauset, 2019).
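
The hub-dependent robustness described here can be illustrated with a preferential-attachment graph; the graph size, attachment parameter, and the 5% removal fraction are arbitrary choices for this sketch.

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)
degrees = np.array([d for _, d in G.degree()])
print("max degree:", degrees.max(), "| median degree:", int(np.median(degrees)))

def giant_component_fraction(H):
    return max(len(c) for c in nx.connected_components(H)) / H.number_of_nodes()

def remove_nodes(G, targeted, frac=0.05, seed=0):
    """Delete a fraction of nodes, either hubs first (targeted) or at random."""
    H = G.copy()
    if targeted:
        order = sorted(H.nodes, key=H.degree, reverse=True)
    else:
        order = list(np.random.default_rng(seed).permutation(list(H.nodes)))
    H.remove_nodes_from(order[: int(frac * H.number_of_nodes())])
    return giant_component_fraction(H)

# Random failures barely affect the giant component; removing hubs fragments it faster.
print("after random failures:   ", remove_nodes(G, targeted=False))
print("after targeted hub attack:", remove_nodes(G, targeted=True))
```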

Hierarchical Modularity: A modular network is characterized by the presence of clusters of nodes that are densely connected with each other and share few edges with nodes outside the cluster. In a hierarchically modular network, these clusters can be subdivided into other clusters according to the same principle, often over multiple scales (Figure 3). Modules are interconnected by connector nodes, which may or may not be hub nodes, allowing dissemination of signals and integration of information across the system. Modular networks may be more robust to dynamic change within the network. The intricacies of hierarchical modularity and its relation to other network topologies such as the rich club (McAuley et al., 2007) are extensive and, as such, beyond our scope here (for a review, see Meunier et al., 2010).
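
A two-level hierarchically modular graph can be sketched as a stochastic block model with nested connection probabilities; the module counts and probabilities below are illustrative and not drawn from any particular study.

```python
import numpy as np
import networkx as nx

def hierarchical_modular_graph(n_leaf=50, n_sub=2, n_top=4,
                               p_within=0.3, p_sub=0.05, p_top=0.005, seed=0):
    """Two-level hierarchically modular graph built as a stochastic block model:
    dense leaf modules, sparser links within top-level modules, sparsest across."""
    sizes = [n_leaf] * (n_sub * n_top)
    probs = np.full((len(sizes), len(sizes)), p_top)
    for t in range(n_top):
        lo, hi = t * n_sub, (t + 1) * n_sub
        probs[lo:hi, lo:hi] = p_sub            # within a top-level module
    np.fill_diagonal(probs, p_within)           # within a leaf module
    return nx.stochastic_block_model(sizes, probs.tolist(), seed=seed)

G = hierarchical_modular_graph()
# The detected community structure of such a graph should show high modularity.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print("detected modules:", len(communities),
      "| modularity:", nx.algorithms.community.modularity(G, communities))
```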

It is important to consider that the network models described above are not mutually exclusive. Rather, it appears that the brain displays hallmarks of all these network types (Bullmore and Sporns, 2009) and that deviations from their properties can be involved in disease (Stam, 2014), as illustrated in Figure 3.


Figure 3. Network characteristics associated with healthy brain networks. In normal conditions, brain networks show hallmarks of multiple network models. This includes an intermediate state between order and randomness in small-world organization, the power-law degree distribution of a scale-free network, and modular clusters organized in a hierarchical fashion. The integration of these different network types may be an evolutionary adaptation driven by the multi-constraint optimization of brain wiring. Deviations from the hallmarks of these network structures may be associated with abnormal brain function and disease.

Multiple lines of evidence now show that brain networks have a small-world organization, with high local clustering and low average path length, which facilitates segregated local specialization and global integration. Low-cost, short-range connections dominate, while a smaller number of long-range connections allow for few intermediaries between distant components (Bullmore and Sporns, 2012). Brain networks also show evidence of link clustering, where strong connections preferentially form between nodes with similar neighborhoods (Pajevic and Plenz, 2012). Some evidence also indicates brain networks are scale-free, with a heavy-tail degree distribution that follows a power law (Eguíluz et al., 2005). Components with high degrees furthermore tend to connect to other high-degree components, forming a “rich club” of highly connected hub regions that facilitate integration across distinct areas and wide propagation of signals and information (Sporns, 2013). The central nervous system (CNS) is also divided into specialized areas at multiple levels, from brain lobes to smaller but separate modules within these lobes, which can again be subdivided into further modules. This is characteristic of hierarchical modularity, which facilitates flexibility in adaptation because it can incorporate changes within a single module without affecting other, nearby modules. This makes the system at the same time robust and flexible (Meunier et al., 2010). The combination of these network architectures—small-world, scale-free, and modular—creates an efficient network well-suited for computation, as will be discussed in the following section.

Neural Network Topology Facilitates Criticality

Modeling work has provided evidence that the network features outlined above contribute to the emergence of critical dynamics as a means to support computation in networks of neurons. This provides some motivation to translate these findings into the experimental realm, but little work has been done thus far in this regard, despite the expanding experimental work on criticality, as already detailed above, and the large body of work on network neuroscience (Bassett and Sporns, 2017). However, one noteworthy methodological study has identified small-world organization in the effective (causal) connectivity of a cortical slice culture (Pajevic and Plenz, 2009). With further applications of such measures of connectivity to assess avalanche propagation in vitro, it will be possible to evaluate whether network features found to be beneficial in modeling studies, such as small-worldness, can be experimentally confirmed. In this section, we will examine criticality and complex network features, but it should be noted that criticality can also be demonstrated in random (Kinouchi and Copelli, 2006; Costa et al., 2015; Campos et al., 2017) and complete (Levina et al., 2007; Bonachela et al., 2010; Brochini et al., 2016; Costa et al., 2017; Kinouchi et al., 2019; Girardi-Schappo et al., 2020) networks. In a study specifically examining the impact of network structure on network dynamics in silico, Massobrio et al. (2015) showed that random network topology can only support power-law avalanche scaling under a narrow range of synaptic constraints and firing rates. Furthermore, of the topologies they investigated, only scale-free networks with a high average node degree and small-world features were able to display behavior consistent with experimental criticality.

Simulation studies on complex networks have found that features of criticality can emerge with biologically plausible regulatory mechanisms. Shin and Kim (2006) found that, for a network initialized as complete, i.e., fully connected, and allowed to change its connections over time by spike-timing-dependent plasticity (STDP), the network reorganizes into a scale-free network with small-world properties that shows evidence of self-organized criticality. Other studies have also found that concurrently scale-free and small-world networks recapitulate critical dynamics with exponents comparable to those found experimentally (Lin and Chen, 2005; Pellegrini et al., 2007; de Arcangelis and Herrmann, 2012). Complementary to this, Rubinov et al. (2011) demonstrated that a hierarchically modular structure with a preponderance of within-module connections, which have a relatively low wiring cost, produced a much broader critical regime than was observed in corresponding non-hierarchical networks. It has also been shown that in the Bak–Tang–Wiesenfeld (BTW) model, also referred to as the “sandpile” model, self-organized criticality emerges as a result of the formation of modular clusters with biologically relevant dimensions, lending further evidence to the importance of modular network structures (Hoffmann, 2018).

When focusing on the activity of ensembles of neurons, it is common to consider bursting activity that encompasses multiple units and how these units coordinate their activity. In dissociated cortical neurons, networks that spontaneously develop critical dynamics display a level of synchrony higher than what is seen in uncoordinated subcritical activity but lower than that seen in highly regular supercritical activity (Pasquale et al., 2008; Valverde et al., 2015; Cocchi et al., 2017). This is also consistently reported in modeling studies and can be related to the branching ratio, or how many downstream neuronal responses are elicited by a single active neuron (Box 2). When the branching ratio is balanced near 1, the network is in a state of intermediate synchrony and tends to display critical avalanche dynamics in a way that maximizes the number of adaptive responses the network can produce to stimulus (Haldeman and Beggs, 2005; Shew and Plenz, 2013). However, despite bursting activity often being considered highly coordinated, critical networks in vitro have been observed to show more burst-dominated activity than their supercritical counterparts, with critical networks showing a higher proportion of spikes contained in bursts and an intermediate level of synchrony within those bursts (Pasquale et al., 2008).

A biological constraint for the branching parameter is the level of inhibition present as mediated by inhibitory interneurons (Girardi-Schappo et al., 2020). In the human cortex, 15–30% of neurons provide local inhibition, and this E/I ratio is frequently replicated in in silico models by tuning the number of inhibitory nodes and their connectivity within the network (Rudy et al., 2011; Tremblay et al., 2016). In both models and biological networks, inhibitory nodes typically constrain their connectivity within modules or clusters. In examinations of this inhibitory connectivity, it has been found that local inhibitors are necessary for critical dynamics in systems combining modularity and plasticity (Rubinov et al., 2011). Furthermore, Massobrio et al. (2015) tested a wide range of E/I ratios on scale-free networks and were only able to achieve critical dynamics in networks with inhibitory nodes comprising 20–30% of all nodes. They also observed the effect of the E/I ratio of the hub nodes specifically and found that the same ratio of approximately 30% inhibition in the hubs was able to support critical dynamics across a wide range of mean degrees, whereas none of the fully excitatory hub networks displayed critical behavior. In models and in the brain, this balance of excitation and inhibition acts as a countermeasure against runaway excitation and stabilizes the network dynamics (Fingelkurts et al., 2004; Shin and Kim, 2006; Meisel and Gross, 2009; Naudé et al., 2013; Salkoff et al., 2015). Furthermore, through the careful tuning of the E/I balance, multiple dynamic states can also be achieved in the same model (Li and Shew, 2020).

The Brain May Operate in a Critical Region, Not at a Critical Point

While the criticality hypothesis of the brain is attractive because it provides a model for brain activity that optimizes information processing and storage, aspects of the model are difficult to reconcile with knowledge of the brain's activity. For instance, the brain's activity and dynamics are not constant but fluctuate widely depending on multiple factors. That this widely variable and adaptable dynamic system can be tuned to a specific critical point can therefore seem counterintuitive. However, a finite system at criticality does not have to be tuned to a specific point but rather exhibits critical behavior over a particular region. The phase transition can be continuous, such that there exists a range of states within the system that support critical dynamics (Hesse and Gross, 2014). This extended model of criticality appears much more compatible with our knowledge of the brain's dynamics than the notion of a strict critical point. One such form of critical range is referred to as the Griffiths phase and appears to be facilitated by hierarchical, modular network architectures, which are consistent with the previously investigated small-world architecture of the brain (Gallos et al., 2012; Moretti and Muñoz, 2013; Ódor et al., 2015; Girardi-Schappo et al., 2016). A wide critical range would appear to be advantageous for the network, making the critical dynamics more robust against failure or perturbation than in the case where criticality can only be achieved in a narrow range or single point (Li and Small, 2012; Wang and Zhou, 2012). This range appears to be dependent on the level of structural heterogeneity or disorder within the network, including variance in the node in-degree distribution (Muñoz et al., 2010; Wu et al., 2019).

Additionally, as mentioned in the section on the importance of criticality, recent evidence suggests that a strict adherence to criticality may not be the sole aim of network organization (Wilting and Priesemann, 2019a). On the basis of these findings, it has been hypothesized that some brain networks may self-organize to points in a slightly subcritical range, where they could then flexibly tune their dynamics in accordance with the demands of a given task (Wilting and Priesemann, 2018; Wilting et al., 2018). Following this hypothesis, certain tasks may benefit from a reduced dynamic range in the network to subsequently reduce interference from non-task-specific inputs. Networks may also show heterogeneous local dynamical states, with a mixture of critical and subcritical regions balancing the competing demands for specificity and sensitivity (Gollo, 2017).

Although we have highlighted in this review how the underlying network structure may influence the emergence of critical dynamics, it is not evident that these topological features are in and of themselves necessary for a network to be considered critical. As mentioned, simulation studies have found that power-law avalanche scaling can be obtained in regular, random, and small-world networks (de Arcangelis and Herrmann, 2012; Michiels Van Kessenich et al., 2016), and regardless of the edge directedness or the presence of inhibitory edges (Ódor and Kelling, 2019). Furthermore, it is possible to achieve power-law avalanche scaling in networks with only weak pairwise correlations and not the more complex patterns of functional connectivity seen in biological networks (Thivierge, 2014). The relationship between critical dynamics in the brain and its underlying network structure may therefore reflect a balance between computational capacity, the metabolic cost of the network activity (Thivierge, 2014), wiring cost in network development (Laughlin and Sejnowski, 2003; Cuntz et al., 2010), and the resilience of the network against perturbations (Goodarzinick et al., 2018). Though certain complex network topologies may better accommodate and broaden the range of critical dynamics (Li and Small, 2012; Moretti and Muñoz, 2013), they are only one component of a neural system.

Plasticity Is Necessary to Achieve and Maintain Criticality

How efficient network structures are formed in different dynamical systems varies widely from system to system, and numerous models have been developed to describe the growth of efficient networks. The first general model for scale-free network formation was proposed by de Solla Price (1965) and popularized by Barabási and Albert (1999). In this model, nodes are added as the network grows, and each new node preferentially attaches to existing nodes with high connectivity, resulting in a “rich-get-richer” hub formation and power-law-distributed connectivity. However, such models cannot represent neural growth, as they forgo an important consideration of neural network formation: self-organization into an efficient topology depends not only on the connectivity but also on synaptic strength, E/I ratio, and, vitally, the plasticity that defines all of these network parameters. In this section, we will first focus on the role of plasticity in activity-dependent network formation and then consider how networks maintain the critical state through homeostatic plasticity (Stepp et al., 2015). The interplay between activity-dependent and homeostatic plasticity is schematically illustrated in Figure 4.


Figure 4. Schematic overview of the interplay between Hebbian and homeostatic plasticity. Hebbian plasticity serves to strengthen and form connections between neurons that fire together, whereas homeostatic plasticity maintains a balance in connections and activity levels. The lowermost case demonstrates how an absence of homeostatic plasticity would allow runaway Hebbian plasticity to overwhelm the network with activity.

Establishing Critical Dynamics in Neuronal Networks

Whereas network models may be constructed in a variety of ways to display critical dynamics and scale-free structures, actual neuronal networks form and maintain connections under numerous constraints. It is generally acknowledged that during development, neurons overshoot the number of necessary connections and then go through a phase of pruning before reaching a relatively stable state of connectivity (Low and Cheng, 2006). There is also evidence that cortical networks in vitro go through this same sequence of overshoot and pruning as they mature, and after this stage they may exhibit critical dynamics, though not all networks do (Stewart and Plenz, 2006; Pasquale et al., 2008; Yada et al., 2017). Van Ooyen et al. (1995) and Okujeni and Egert (2019) showed that a simple axon growth model assuming activity-dependent radial growth could form networks similar to those found in vitro by utilizing activity spontaneously arising in the network. Even with only a simple activity-dependent growth rule applied to systems with random initial placement, such systems have been shown to grow into a state in which the final networks support avalanches with power-law scaling (Abbott and Rohrkemper, 2007; Kossio et al., 2018). Correspondingly, the trajectory of the dynamic state in vitro appears to move from a subcritical to a supercritical state before ultimately reaching criticality. As the supercritical state in this case produces network-wide synchrony, it is probable that plasticity mechanisms reduce the global excitation level as a result of this synchrony and drive the network toward criticality. To mimic some of this development, numerous models have attempted to generate critical networks using plasticity rules applied to random, small-world, and scale-free topologies (de Arcangelis et al., 2006; Rubinov et al., 2011; de Arcangelis and Herrmann, 2012; Teixeira and Shanahan, 2014; Michiels van Kessenich et al., 2018, 2019). These models typically apply local Hebbian mechanisms, such as STDP, to rewire the network into a weight distribution or topology capable of achieving critical dynamics, thus recapitulating certain facets of biological network development.

Simple plasticity mechanisms based on correlated firing, such as STDP, can shift the topology of networks by changing the connection weights, resulting in directed and more complex networks. In computational models of activity-dependent neural development, a number of researchers have observed the same general trend: that these mechanisms tend to drive the dynamics toward criticality, with the resulting topologies showing scale-free organization (Bornholdt and Röhl, 2003; Meisel and Gross, 2009). The end result is robust against different initial topologies and changes to the underlying parameters, such as average connectivity. Even when initializing a network from a random topology, STDP is sufficient in some models to drive the network toward critical dynamics (Teixeira and Shanahan, 2014; Li et al., 2017; Khoshkhou and Montakhab, 2019). Li et al. (2017) have also investigated the computational benefit of these STDP-trained networks, which showed improved input-to-output transformation performance at criticality (see also Bertschinger and Natschläger, 2004; Siri et al., 2007, 2008 for the computational benefits of criticality and Hebbian plasticity in recurrent neural networks).
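
For concreteness, a pair-based STDP weight update of the kind used in many such models can be sketched as follows; the amplitudes, time constant, and hard weight bounds are generic illustrative values rather than parameters from the studies cited here.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (dt = t_post - t_pre > 0), depress otherwise. The change
    decays exponentially with the spike-time difference (tau in ms)."""
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # causal pairing: potentiation
    else:
        dw = -a_minus * np.exp(dt / tau)   # anti-causal pairing: depression
    return float(np.clip(w + dw, w_min, w_max))

# A causal pairing (pre 5 ms before post) strengthens the synapse,
# an anti-causal pairing weakens it.
print(stdp_update(0.5, dt=+5.0), stdp_update(0.5, dt=-5.0))
```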

Recent studies have furthered this modeling approach with the addition of more neurobiologically relevant features, such as axonal delay and hierarchical modularity. The inclusion of a time or axonal delay between pre- and post-target activation can shift both the directionality of connections and the distribution of synaptic strengths (from bimodal to unimodal) without a loss of critical dynamics (Khoshkhou and Montakhab, 2019). Note that the ability of Hebbian mechanisms to produce critical dynamics is still model-dependent, and such mechanisms can also drive the network to supercritical states. Again, the complexity of the network topology plays a vital role in conjunction with these mechanisms, as modular and hierarchical topologies can both counteract this supercritical organization and broaden the critical regime once it arises (Rubinov et al., 2011). Though neural development is an immensely complex process, Hebbian mechanisms appear to be one of many aspects that play a vital role in the self-organization of neural networks toward criticality and supporting topologies.

Homeostatic Maintenance of the Critical State

Although learning and developmental mechanisms, such as STDP, drive networks toward certain configurations, the network patterns formed in this way are not simply static structures but also undergo plastic changes to maintain homeostasis and in response to external stimuli. A number of researchers have investigated how different forms of plasticity influence the dynamical state of neuronal networks (Rubinov et al., 2011; Stepp et al., 2015; Zierenberg et al., 2018; Ma et al., 2019).

A network's resilience to damage necessitates a level of adaptability, beyond that offered by topologies that are robust against component failure (see Box 3), to restore dynamics following a perturbation. This adaptability is hypothesized to stem from homeostatic plasticity, which provides feedback to restore the overall excitability in local connections and the network. Whereas Hebbian plasticity is evidenced to give rise to critical dynamics, homeostatic plasticity is evidenced to maintain the network activity within this dynamic regime despite varying input levels and intrinsic activity (Levina et al., 2007; Naudé et al., 2013; Ma et al., 2019). This adaptive excitability can be exemplified through the branching ratio. A steadily increasing input level would produce branching parameters exceeding 1, given the increasing level of extrinsic excitation on the system. Yet by homeostatic scaling of the excitability in the network and the input connections, the branching parameter can be maintained, avoiding supercritical dynamics. The inverse also holds true in the absence of inputs. Homeostatic plasticity acts toward an intrinsic set point for the network's excitability and adjusts synaptic response to maintain input specificity (Turrigiano, 2017). In the absence of homeostatic plasticity mechanisms, such as synaptic scaling, Hebbian mechanisms create feedback loops of excitatory response that remove any specificity to synaptic input (Wu et al., 2020).
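
A minimal sketch of multiplicative synaptic scaling toward a firing-rate set point, with arbitrary illustrative values, shows how such a rule counteracts runaway excitation or silencing while preserving the relative pattern of synaptic weights (and thus input specificity).

```python
import numpy as np

def synaptic_scaling(weights, firing_rate, target_rate, eta=0.01):
    """Multiplicative homeostatic scaling: all incoming weights of a neuron are
    scaled down when its firing rate exceeds the set point and scaled up when it
    falls below it; the relative weight pattern set by Hebbian plasticity is kept."""
    scale = 1.0 + eta * (target_rate - firing_rate) / target_rate
    return weights * scale

w = np.array([0.2, 0.5, 0.1])
print(synaptic_scaling(w, firing_rate=12.0, target_rate=5.0))  # rate too high: scale down
print(synaptic_scaling(w, firing_rate=2.0, target_rate=5.0))   # rate too low: scale up
```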

Michiels van Kessenich et al. (2018, 2019) included global homeostatic plasticity mechanisms in their network models to test whether the networks exhibit avalanches with power-law scaling and to evaluate network performance on classification tasks. By feeding error back into the network based on the desired outputs at certain readout sites, they were able to train the network to recognize different input patterns, including classifying handwritten digits. After this training period, the network's response to inputs showed a clear spatial organization, with distinct regions responding to different inputs. Although the network studied is a simplified computational model, the results indicate that plasticity mechanisms can drive networks toward criticality and play a role in their capacity for learning and computation. Likewise, as shown by Girardi-Schappo et al. (2020), combining multiple homeostatic mechanisms can generate highly diverse firing patterns and promote the self-organization of a network toward a critical point. Additionally, as with Hebbian plasticity and criticality, homeostatic mechanisms can enhance the computational capabilities of the network by tuning it toward criticality, increasing both the number of input patterns the network can distinguish and the separability of those patterns (Naudé et al., 2013).

The role of homeostatic mechanisms in maintaining critical dynamics is exemplified experimentally by Shew et al. (2015). There is growing evidence that homeostatic mechanisms such as synaptic depression allow the visual cortex to adapt to changes in sensory input and recover critical dynamics. Using ex vivo preparations of the visual cortex, Shew et al. (2015) demonstrated such adaptation: upon first exposure to a stimulus, the network transiently showed non-critical dynamics, followed by a return to criticality via homeostatic plasticity. In a critically tuned model of their network, an external input similarly drove the network out of the critical state, and critical dynamics were then restored by implementing a synaptic scaling rule, indicating a likely mechanism for the homeostatic adaptation. While this provides evidence for short-term tuning toward criticality, there is also recent evidence of long-term homeostatic adaptation. Thus far, studies including homeostatic mechanisms in criticality experiments and models have largely focused on such synaptic mechanisms or the E/I balance, with only scant attention to intrinsic plasticity (Naudé et al., 2013; Li X. et al., 2018; Zhang et al., 2019; Girardi-Schappo et al., 2020) or metaplasticity (Kinouchi et al., 2020, in preprint; Peng and Beggs, 2013).
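
Synaptic scaling of this kind is often formalized in models as a slow, multiplicative adjustment of a neuron's incoming weights toward a target firing rate. The sketch below is a generic version of such a rule, not the specific implementation used by Shew et al. (2015) or any other study cited here; the target rate and time constant are assumptions.

```python
import numpy as np

def synaptic_scaling(weights, firing_rate, target_rate=5.0, tau=100.0, dt=1.0):
    """Multiplicative homeostatic scaling of a neuron's incoming weights.

    All excitatory weights onto a neuron are scaled up when its firing rate
    falls below the target and scaled down when it exceeds the target,
    nudging the rate back toward the set point. Generic sketch; the target
    rate (Hz) and time constant are illustrative assumptions.
    """
    error = (target_rate - firing_rate) / target_rate
    factor = 1.0 + (dt / tau) * error
    return np.asarray(weights) * factor

w_in = synaptic_scaling([0.2, 0.5, 0.1], firing_rate=2.0)  # rate too low -> weights scaled up
```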

The effect of E/I imbalance has been well-described in previous studies, as mentioned above; however, manipulation of the intrinsic mechanisms underlying the E/I balance has remained largely unexplored (Plenz, 2012). Ma et al. (2019) attempted to bridge this experimental gap by examining a well-established model of homeostatic plasticity in the context of criticality. By systematically exploring the space of possible E/I configurations, Ma et al. (2019) demonstrated that the balance achieved by critical networks can be struck with a number of configurations; specifically, they varied the E/I ratio, the number of excitatory neurons receiving input from each inhibitory neuron, and the proportion of inhibitory neurons receiving input. Different configurations can therefore bring a network to criticality, though only a small fraction of the potential combinations yielded critical activity. Furthermore, when a combination of E/I parameters adjacent to one of the critical regimes was selected and further explored by allowing the network to evolve under synaptic scaling and STDP, the network was unable to reach critical dynamics regardless of the plasticity parameters, demonstrating the importance of local inhibitory dynamics in achieving criticality.

Crucially, when synaptic scaling was removed from the model by Ma et al. (2019), the model was no longer able to recover critical dynamics after a reduction in input. With synaptic scaling removed from the excitatory population, reduction in input resulted in runaway activity and increased synaptic strength due to uncompensated STDP; removing synaptic scaling from inhibitory neurons also shifted the network out of the critical regime, though less dramatically, and was accompanied by a reduction in synaptic strength. In contrast, removal of STDP left the network unaffected by the reduction in input, indicating the necessity of this type of plasticity for the network to give a meaningful response to inputs.

Brain Disorders and Disruptions to Criticality

Up to this point we have examined criticality's relationship to the network topology that supports it and the plastic mechanisms that organize and maintain it. Because of the intricate interplay of these underlying mechanisms, disruptions to either topology or plasticity can manifest as deviations in the dynamic state of the network; criticality analysis may therefore aid in identifying such disruptions and provide a better understanding of the mechanisms at play. In this section, we detail studies that apply criticality analysis to the identification and prediction of diseases and disorders of the nervous system and propose how to expand this work moving forward. Here we use the term "perturbation" in a medical sense to refer to disruptive and negative impacts on a network's baseline state, such as the severing of axons in vitro or epileptic states in vivo.

Criticality on Disrupted Foundations

The study by Ma et al. (2019) discussed above in the context of plasticity also provides substantial experimental insight into the effect of perturbations on the dynamic state of neuronal networks. By inducing monocular deprivation in rodents while chronically recording from the visual cortex, they examined the effect of the perturbation on cortical activity. Despite the near-complete removal of input to the cortex, the network's firing rate, or activity level, was initially maintained, providing no evidence of the sensory deprivation that had occurred. However, the network's neuronal avalanche behavior revealed a deviation from criticality immediately following the perturbation, even while the firing rate was maintained. Moreover, this deviation persisted until homeostatic mechanisms restored criticality by upscaling inhibitory activity and subsequently reducing network firing (see Figure 5). The deviation-from-criticality and branching ratio measures (see Box 2) applied by Ma et al. (2019) exemplify the capacity of criticality analysis to identify perturbations. Additionally, this study emphasizes how multiple mechanistic underpinnings lend themselves to critical dynamics, and how network dynamics and global activity levels can diverge. The effect of these mechanisms is further underscored by what happens when they are disrupted during a critical state.

Figure 5. Schematic overview of the in vivo experiment from Ma et al. (2019). The overall firing rate of the network showed a delayed response to the perturbation of removing excitatory sensory input by monocular deprivation. In contrast, the DCC and other criticality-related measures showed an immediate response and a more rapid return to baseline.

Modeling studies on disrupted network topology and impaired plasticity lend some insight into deviations from criticality following perturbation. Scale-free and small-world networks show significant robustness against structural defects, as most nodes connect only to neighbors within a cluster or module. As a result, a large number of these low-degree nodes can be lesioned before critical dynamics are disrupted (Goodarzinick et al., 2018). Conversely, removal of high-degree nodes or long-range connections can rapidly fragment the network structure and abolish any ongoing critical dynamics (Callaway et al., 2000; Mizutaka and Yakubo, 2013; Valverde et al., 2015). The network-wide synchrony that arises with efficient network topologies and at criticality may also enable the spread of disruptive states such as epilepsy. This synchrony additionally depends on a functioning and adaptive E/I balance, as discussed below.
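
The structural side of this asymmetry can be illustrated with a simple percolation-style experiment on a scale-free graph, comparing how the largest connected component shrinks under random versus targeted (hub) node removal. The sketch below uses the networkx library and is a purely structural proxy, saying nothing about dynamics on its own; the graph model and removal fraction are arbitrary choices for illustration.

```python
import random
import networkx as nx

def largest_component_fraction(g):
    """Fraction of nodes in the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def remove_and_measure(g, nodes_to_remove):
    h = g.copy()
    h.remove_nodes_from(nodes_to_remove)
    return largest_component_fraction(h)

# Scale-free-like graph via preferential attachment (Barabási-Albert model)
g = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
n_remove = 100

random.seed(1)
random_nodes = random.sample(list(g.nodes()), n_remove)
hubs = sorted(g.nodes(), key=lambda n: g.degree(n), reverse=True)[:n_remove]

print("random removal:", remove_and_measure(g, random_nodes))  # typically stays large
print("hub removal:   ", remove_and_measure(g, hubs))          # typically drops much more sharply
```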

In investigations of criticality in biological neural networks, it has been found that the simple addition of the GABA antagonist bicuculline can shift the dynamics of a network from critical to supercritical by increasing synchrony within the network (Beggs and Plenz, 2003; Pasquale et al., 2008). Other studies have likewise shown that altering the balance of excitation and inhibition in biological networks can drive them into a different dynamic state. In self-organized critical networks, pharmacologically enhancing excitation can change the dynamics from critical to supercritical, while reducing excitation promotes subcritical dynamics (Shew et al., 2009, 2011). Furthermore, it has been demonstrated that direct inhibitory action by addition of GABA to a network with supercritical dynamics can drive it into a critical state (Heiney et al., 2019). This points to GABAergic inhibition as important in breaking up the highly synchronized activity in which cascades frequently propagate through the entire network and drive it into a supercritical state. Multiple simulation studies have also shown that the emergence of critical dynamics depends on a certain proportion of inhibition in the network, which conforms to physiological levels of inhibitory neurons in the brain (de Arcangelis et al., 2006; Massobrio et al., 2015).
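
A toy branching-process simulation, sketched below, captures the intuition behind these manipulations: scaling overall excitability up or down corresponds to moving the branching parameter above or below 1, with supercritical, critical, and subcritical avalanche statistics as the result. This is a deliberately abstract caricature, not a model of any of the cited preparations; the Poisson offspring assumption and parameter values are illustrative only.

```python
import numpy as np

def avalanche_size(sigma, rng, max_steps=10_000):
    """Size of one avalanche in a simple branching process.

    Each active unit independently triggers a Poisson-distributed number of
    units in the next step, with mean equal to the branching parameter sigma.
    A toy stand-in for pharmacologically scaling excitation up or down.
    """
    active, size, steps = 1, 1, 0
    while active > 0 and steps < max_steps:
        active = rng.poisson(sigma * active)
        size += active
        steps += 1
    return size

rng = np.random.default_rng(0)
for label, sigma in [("subcritical", 0.8), ("critical", 1.0), ("supercritical", 1.2)]:
    sizes = [avalanche_size(sigma, rng) for _ in range(2000)]
    print(label, "mean avalanche size:", np.mean(sizes))
```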

The Promise of Criticality Analysis in the Clinical Realm

Despite the rising interest in biological criticality over the last two decades, there has been a dearth of experimental and clinical studies connecting criticality to perturbations. Because criticality represents an optimal state for computation, one could expect that a departure from the critical state would constitute a disruption in itself (Shew et al., 2011); in practice, however, disruptions to network dynamics are likely more complicated than a simple transition away from criticality, as such transitions may be part of healthy activity (Stewart and Plenz, 2006; Pearlmutter and Houghton, 2009; Allegrini et al., 2015; Wilting and Priesemann, 2019b), though see Carvalho et al. (2020, in preprint) for a counterpoint. Given recent hypotheses concerning network computation in the slightly subcritical regime, quantifying the impact of perturbations on network dynamics requires rigorous analytical tools (see Box 2) (Priesemann et al., 2014; Wilting et al., 2018; Wilting and Priesemann, 2019a). The potential presence of heterogeneous local dynamics or global reverberating dynamics in the subcritical regime demands a combined and comparative approach for any medical application.

The existing literature describing disorders as disrupted criticality largely pertains to the macroscale (see Zimmern, 2020 for a comprehensive review), as examined through (i)EEG (Thatcher et al., 2009), ECoG (Chaudhuri et al., 2018), and fMRI (Tagliazucchi et al., 2012). Even with the growing number of researchers examining the dynamics of mesoscale networks, there is still a lack of research turning these in vitro and in vivo methods toward the dynamics of perturbed networks. Given the growing sophistication of molecular and electrophysiological tools, the potential for experimental manipulation of network topology and homeostatic mechanisms is immense. In the absence of many mesoscale investigations into perturbations of criticality (Stewart and Plenz, 2006; Gireesh and Plenz, 2008; Fekete et al., 2018), this section focuses on macroscale network dynamics and investigations of clinical disorders through the lens of criticality.

The majority of today's macroscale studies on disorders and criticality concern epilepsy, which provides an example of how criticality analysis can benefit the clinical field (Worrell et al., 2002; Li et al., 2005; Meisel et al., 2015b, 2016; Arviv et al., 2016; Meisel and Loddenkemper, 2019; Rings et al., 2019). Given the absence of literature on other disorders in this context and the substantial literature on network topology (Terry et al., 2012; Lopes et al., 2020) and E/I balance (Wei et al., 2017; Du et al., 2019) as they relate to epilepsy, we have chosen to examine this disorder here. An epileptic state, or seizure, is characterized by a departure from healthy dynamics to a hyper-synchronized or chaotic state. This epileptic state can be either focal, confined to cortical regions or circuits, or generalized, encompassing the entire brain (Terry et al., 2012; Englot et al., 2016). Currently, diagnosis of epilepsy is often based on overt structural deficits detected with MRI and CT, or on markers of infection and electrolyte testing (Stafstrom and Carmant, 2015). When the epilepsy is rooted in less overt factors, diagnosis becomes a matter of detecting disruptive dynamics, which has thus far proved a substantial problem (Stafstrom and Carmant, 2015; Meisel and Loddenkemper, 2019). To this end, the application of criticality analysis to epilepsy has been the focus of much recent research, which we discuss here (Meisel et al., 2015b, 2016; Arviv et al., 2016; Meisel, 2016; Beenhakker, 2019; Du et al., 2019; Meisel and Loddenkemper, 2019; Witton et al., 2019; Maturana et al., 2020).

Current literature links epileptogenesis to disruptions in network connectivity or neuronal excitability stemming from genetic pathologies, such as ion channel mutations, or acquired conditions, such as stroke (Terry et al., 2012; Wei et al., 2017). The need to predict if and how these disruptions will lead to epileptogenesis has driven the development of measures of network excitability and synchrony and, most recently, the application of criticality analysis (Meisel, 2016). Diagnostically, criticality analysis has been applied as a biomarker in focal epilepsy patients, where critical slowing down, the progressively slower recovery from perturbations as a system approaches a critical transition, has been found to precede seizure onset (Maturana et al., 2020). Similarly, the presence of a Hopf bifurcation has been indicated as a diagnostic predictor based on modeled ECoG, neural field, and neural mass dynamics (Meisel and Kuehn, 2012; Buchin et al., 2018; Deeba et al., 2018); for a review on this topic, see Meisel and Loddenkemper (2019). In terms of treatment, the branching ratio computed from recordings of resting dynamics provides a quantitative measure of the effect of anti-epileptic drugs (AEDs), as an alternative to the typically used absence of seizures or response to transcranial stimulation (Meisel et al., 2015b, 2016). A further study by Meisel (2020) applied this approach to show an inverse correlation between network synchrony and AED dosage, indicating a shift away from a supercritical seizure state and toward subcritical dynamics. However, these promising results also highlight some of the intricacies of criticality analysis: investigating critical slowing down in an iEEG dataset similar to that of Maturana et al. (2020), Wilkat et al. (2019) found no evidence of critical slowing down as an epileptic biomarker.
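
In practice, critical slowing down is commonly tracked through early-warning indicators such as rising variance and lag-1 autocorrelation computed in sliding windows over an ongoing signal. The sketch below is a generic version of this computation, not the pipeline of Maturana et al. (2020) or Wilkat et al. (2019); the window length, step size, and choice of input signal are assumptions that differ across the studies cited here.

```python
import numpy as np

def slowing_down_indicators(signal, window=500, step=100):
    """Sliding-window variance and lag-1 autocorrelation of a 1-D signal.

    Sustained increases in both are the classic early-warning signatures of
    critical slowing down. Generic sketch; window sizes, preprocessing, and
    the choice of signal differ across the studies cited in the text.
    """
    signal = np.asarray(signal, dtype=float)
    variances, autocorrs = [], []
    for start in range(0, len(signal) - window, step):
        seg = signal[start:start + window]
        seg = seg - seg.mean()
        variances.append(seg.var())
        denom = np.sqrt(np.sum(seg[:-1] ** 2) * np.sum(seg[1:] ** 2))
        autocorrs.append(float(np.sum(seg[:-1] * seg[1:]) / denom) if denom > 0 else 0.0)
    return np.array(variances), np.array(autocorrs)

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)            # stand-in for a recorded signal
var_t, ac_t = slowing_down_indicators(x)
```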

While these studies focus largely on neuronal excitability in isolation, such analyses should also be integrated with the substantial work underway on epilepsy networks (Terry et al., 2012). Already, mapping of functional and structural connectivity in epileptic networks can identify the ictal, or seizure, onset zone and trace the spread from local to global seizure by means of effective connectivity (Yaffe et al., 2015). Indeed, models of seizure propagation and clinically recorded networks show rapid spread of disruptions through small-world networks as a result of their long-range connectivity and hub structure (Netoff et al., 2004; Ponten et al., 2007). Because seizure spread depends on network topology, epileptogenesis can result from disruptions to functional and structural connectivity (Avanzini and Franceschetti, 2009; Terry et al., 2012; Fornito et al., 2015). Functional connectivity mapping in stroke patients often reveals hyperconnectivity, in which functional network components paradoxically show increased connectivity post-stroke (Hillary and Grafman, 2017). Up to 10% of stroke patients suffer seizures early or late in their recovery, and there is currently no reliable prognostic tool for determining whether these will develop into epilepsy (Myint et al., 2006). Unfortunately, application of AEDs as a preventative measure has thus far proven ineffective, indicating that epileptogenesis is partially independent of neuronal excitability or subject to interference from complex homeostatic mechanisms (Gilad et al., 2011). These forms of acquired epilepsy highlight the heterogeneity of the disorder and the necessity of a combined network and dynamics analysis. The application of criticality analysis to epilepsy can potentially serve as both a trial and a guideline for other neurological conditions, such as psychiatric (van Bokhoven et al., 2018), developmental (Tinker and Velazquez, 2014; Gao and Penzes, 2015; Li L. et al., 2018), and degenerative disorders (Jiang et al., 2018; Ren et al., 2018; Marcuzzo et al., 2019).

Still, caution must be taken when comparing these studies of macroscale dynamics to their underlying meso- and microscale mechanisms (Meisel and Kuehn, 2012). Network scale is a crucial feature of these mechanisms, and node and edge descriptors at different scales substantially alter the relevant activity dynamics. Additionally, the highly divergent methods for avalanche detection across assessment modalities [macroscale: fMRI, (i)EEG, MEG, ECoG; mesoscale: spikes and LFPs from tetrodes and MEAs] risk erroneous conflation of results across scales. Each method of avalanche detection and definition requires a thorough investigation of its relevance and its robustness across multiple tests. In the final section of this paper we highlight steps to improve the accuracy and standardization of criticality analysis, as well as the relationship between structural and critical dynamics.
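
As a concrete illustration of how much the avalanche definition matters, the sketch below implements the binned-activity definition commonly used at the mesoscale, in which an avalanche is a run of consecutive nonempty time bins bounded by empty bins (following Beggs and Plenz, 2003); macroscale modalities typically threshold continuous signals instead, and the bin width itself is a consequential assumption.

```python
import numpy as np

def detect_avalanches(counts):
    """Avalanche sizes and durations from binned population activity.

    counts[t] is the total number of events in time bin t (e.g., spikes pooled
    across electrodes). An avalanche is a run of consecutive nonempty bins
    bounded by empty bins; its size is the total event count and its duration
    the number of bins. The bin width strongly affects the result and differs
    across recording modalities.
    """
    counts = np.asarray(counts)
    sizes, durations = [], []
    size = duration = 0
    for c in counts:
        if c > 0:
            size += int(c)
            duration += 1
        elif duration > 0:
            sizes.append(size)
            durations.append(duration)
            size = duration = 0
    if duration > 0:                      # close an avalanche running to the end
        sizes.append(size)
        durations.append(duration)
    return sizes, durations

sizes, durations = detect_avalanches([0, 2, 3, 1, 0, 0, 5, 1, 0, 4])
# sizes == [6, 6, 4], durations == [3, 2, 1]
```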

Conclusion

Network neuroscience has seen explosive growth in the clinical field within the past two decades, providing insight into pathophysiology and disease propagation. However, such rapid growth brings with it the challenge of standardizing the measures applied (Hallquist and Hillary, 2018). The current lack of standardized graph theory measures has produced a widely dissimilar range of network definitions and graph metrics, to the point of precluding meta-analysis. The empirical study of criticality in neural systems is also growing rapidly, and it can be difficult to keep track of which measures are best to apply. The consolidation and standardization of metrics used to study critical dynamics and connectivity in neural networks thus remains a considerable challenge, yet the sooner this challenge is taken on, the more easily it can be mitigated. We have therefore highlighted measures that have proven useful in the study of criticality in the context of neuroscience (see Box 2). Given the complex and multifaceted nature of criticality, we fully expect later studies to expand upon and improve these measures; however, applying and comparing the same measures across experiments will lessen the future burden of comparison.
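
One example of such a shared tool is the powerlaw Python package (Alstott et al., 2014), which implements the maximum-likelihood fitting and model-comparison procedure of Clauset et al. (2009). The sketch below applies it to synthetic data standing in for avalanche sizes; in practice the sizes would come from an avalanche-detection step such as the one sketched earlier.

```python
import numpy as np
import powerlaw  # Alstott et al. (2014)

# Synthetic stand-in for avalanche sizes; in practice these would come from
# an avalanche-detection step applied to recorded activity.
sizes = np.random.zipf(1.5, size=5000)

fit = powerlaw.Fit(sizes, discrete=True)          # maximum-likelihood fit (Clauset et al., 2009)
print("exponent alpha:", fit.power_law.alpha)
print("fitted xmin:   ", fit.power_law.xmin)

# Likelihood-ratio comparison against a plausible alternative distribution;
# R > 0 with small p favors the power law over the lognormal.
R, p = fit.distribution_compare('power_law', 'lognormal')
print("power law vs lognormal: R =", R, "p =", p)
```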

Criticality holds the promise of bridging several scales of neural activity by virtue of being a scale-free property. Yet, as we have examined, the step from statistical models of criticality to experimental analysis is a difficult one, in which constraints and complicating factors arising from experimental methods and the underlying biological mechanisms make themselves known. As discussed, the different recording modalities across scales apply disparate approaches to analyzing criticality, making any comparison fraught with analytical pitfalls, and this issue may be further exacerbated by more indirect measures such as fMRI with its different timescales. While some groups (Gireesh and Plenz, 2008; Miller et al., 2019) are making substantial progress on these experimental issues, as more groups turn toward applying criticality-based measures in the clinic and laboratory, they need to be cognizant of the intricacies inherent in these topics. Similarly, intuitions from theoretical work, such as the idea that epileptic systems are supercritical and should therefore have branching ratios exceeding 1, can run counter to experimental findings (Hobbs et al., 2010; Plenz, 2012), necessitating a closer look at both analytical techniques and theoretical understanding.

Throughout this review, we have attempted to highlight the multifaceted nature of criticality and the potential its analysis holds as a metric of network health. Criticality is closely tied to the efficiency of the underlying network structure, as this structure supports the propagation of dynamical activity through the system. The emergence of these efficient topologies in turn results from the dynamics of the structure itself: the changes in connectivity mediated by local and global plasticity. This intertwining of criticality and structural dynamics is an essential feature of the critical state, and examining any single contributing feature in isolation risks missing the complex interplay that gives rise to critical dynamics. Neural networks organize into small-world and hierarchically modular topologies in part to support critical dynamics, and both structure and dynamics likely develop because of the computational benefits they afford. Furthermore, there is evidence that networks in the critical state display characteristics indicative of optimal computational capacity, yet few studies have explicitly focused on the benefits conferred by criticality. A focus on this aspect would also aid in understanding what goes wrong, or which computational functions may be affected, when a network is damaged or diseased. In the future, we hope more studies will consider the interplay between structure and critical dynamics, as well as the functional benefits this interplay confers, as criticality analysis and network neuroscience can provide significant insight into complexity, computation, and medicine.

Author Contributions

KH and OH developed the focus and organization of the manuscript. KH and VF prepared the figures. All authors contributed to the choice of review topic and the writing and review of the manuscript.

Funding

This work was conducted as part of the SOCRATES and DeepCA projects; the Liaison Committee for Education, Research and Innovation in Central Norway (Samarbeidsorganet HMN-NTNU); the Joint Research Committee between St. Olav's Hospital and the Faculty of Medicine and Health Sciences, NTNU; NTNU Enabling Technologies; and ALS Norge. The SOCRATES project was partially funded by the Norwegian Research Council (NFR) through their IKTPLUSS research and innovation action on information and communication technologies under the project agreement 270961. The DeepCA project was partially funded by NFR through their Young Research Talent program under the project agreement 286558.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Abbott, L. F., and Rohrkemper, R. (2007). A simple growth model constructs critical avalanche networks. Progress Brain Res. 165, 13–19. doi: 10.1016/S0079-6123(06)65002-4

Albert, R., Jeong, H., and Barabási, A.-L. (2000). Error and attack tolerance of complex networks. Nature 406, 378–382. doi: 10.1038/35019019

Allegrini, P., Paradisi, P., Menicucci, D., Laurino, M., Piarulli, A., and Gemignani, A. (2015). Self-organized dynamical complexity in human wakefulness and sleep: different critical brain-activity feedback for conscious and unconscious states. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 92:032808. doi: 10.1103/PhysRevE.92.032808

Alstott, J., Bullmore, E., and Plenz, D. (2014). Powerlaw: a python package for analysis of heavy-tailed distributions. PLoS ONE 9:e85777. doi: 10.1371/journal.pone.0085777

Arviv, O., Medvedovsky, M., Sheintuch, L., Goldstein, A., and Shriki, O. (2016). Deviations from critical dynamics in interictal epileptiform activity. J. Neurosci. 36, 12276–12292. doi: 10.1523/JNEUROSCI.0809-16.2016

Avanzini, G., and Franceschetti, S. (2009). Mechanisms of epileptogenesis. Treat. Epilepsy Third Edn. 14, 67–79. doi: 10.1002/9781444316667.ch5

Bak, P., Tang, C., and Wiesenfeld, K. (1987). Self-organized criticality: an explanation of the 1/f noise. Phys. Rev. Lett. 59, 381–384. doi: 10.1103/PhysRevLett.59.381

Bak, P., Tang, C., and Wiesenfeld, K. (1988). Self-organized criticality. Phys. Rev. A 38, 364–374. doi: 10.1103/PhysRevA.38.364

Barabási, A.-L., and Albert, R. (1999). Emergence of scaling in random networks. Science 286, 509–512. doi: 10.1126/science.286.5439.509

Bassett, D. S., and Bullmore, E. T. (2017). Small-world brain networks revisited. Neuroscientist 23, 499–516. doi: 10.1177/1073858416667720

Bassett, D. S., and Sporns, O. (2017). Network neuroscience. Nat. Neurosci. 20, 353–364. doi: 10.1038/nn.4502

Bastos, A. M., and Schoffelen, J. M. (2016). A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Front. Syst. Neurosc. 9:175. doi: 10.3389/fnsys.2015.00175

Beenhakker, M. P. (2019). Cracklin' fish brains. Epilepsy Curr. 19, 112–114. doi: 10.1177/1535759719835348

Beggs, J. M. (2008). The criticality hypothesis: how local cortical networks might optimize information processing. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 366, 329–343. doi: 10.1098/rsta.2007.2092

Beggs, J. M., and Plenz, D. (2003). Neuronal avalanches in neocortical circuits. J. Neurosci. 23, 11167–11177. doi: 10.1523/JNEUROSCI.23-35-11167.2003

Beggs, J. M., and Plenz, D. (2004). Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J. Neurosci. 24, 5216–5229. doi: 10.1523/JNEUROSCI.0540-04.2004

Bertschinger, N., and Natschläger, T. (2004). Real-time computation at the edge of chaos in recurrent neural networks. Neural. Comput. 16, 1413–1436. doi: 10.1162/089976604323057443

Boedecker, J., Obst, O., Lizier, J. T., Mayer, N. M., and Asada, M. (2012). Information processing in echo state networks at the edge of chaos. Theory Biosci. 131, 205–213. doi: 10.1007/s12064-011-0146-8

Bonachela, J. A., De Franciscis, S., Torres, J. J., and Muoz, M. A. (2010). Self-organization without conservation: are neuronal avalanches generically critical? J. Stat. Mech. Theory Exp. 10:P02015. doi: 10.1088/1742-5468/2010/02/P02015

Bonachela, J. A., and Muñoz, M. A. (2009). Self-organization without conservation: true or just apparent scale-invariance? J. Stat. Mech. Theory Exp. 9:P09009. doi: 10.1088/1742-5468/2009/09/P09009

Bonifazi, P., Goldin, M., Picardo, M. A., Jorquera, I., Cattani, A., Bianconi, G., et al. (2009). GABAergic hub neurons orchestrate synchrony in developing hippocampal networks. Science 326, 1419–1424. doi: 10.1126/science.1175509

Bornholdt, S., and Röhl, T. (2003). Self-organized critical neural networks. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscipl. Top. 67:5. doi: 10.1103/PhysRevE.67.066118

Bressler, S. L., and Menon, V. (2010). Large-scale brain networks in cognition: emerging methods and principles. Trends Cogn. Sci. 14, 277–290. doi: 10.1016/j.tics.2010.04.004

Brochini, L., De Andrade Costa, A., Abadi, M., Roque, A. C., Stolfi, J., and Kinouchi, O. (2016). Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Sci. Rep. 6, 1–15. doi: 10.1038/srep35831

Broido, A. D., and Clauset, A. (2019). Scale-free networks are rare. Nat. Commun. 10, 1–10. doi: 10.1038/s41467-019-08746-5

Buchin, A., Kerr, C. C., Huberfeld, G., Miles, R., and Gutkin, B. (2018). Adaptation and inhibition control pathological synchronization in a model of focal epileptic seizure. ENeuro 5, 1–14. doi: 10.1523/ENEURO.0019-18.2018

Buendía, V., di Santo, S., Bonachela, J. A., Muñoz, M. A., di Santo, S., Bonachela, J. A., et al. (2020). Feedback mechanisms for self-organization to the edge of a phase transition. Front. Phys. 8:333. doi: 10.3389/fphy.2020.00333

Bullmore, E., and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10, 186–198. doi: 10.1038/nrn2575

Bullmore, E., and Sporns, O. (2012). The economy of brain network organization. Nat. Rev. Neurosci. 13, 336–349. doi: 10.1038/nrn3214

Buzsáki, G., Geisler, C., Henze, D. A., and Wang, X. J. (2004). Interneuron diversity series: circuit complexity and axon wiring economy of cortical interneurons. Trends Neurosci. 7, 446–451. doi: 10.1016/j.tins.2004.02.007

Callaway, D. S., Newman, M. E. J., Strogatz, S. H., and Watts, D. J. (2000). Network robustness and fragility: percolation on random graphs. Phys. Rev. Lett. 85, 5468–5471. doi: 10.1103/PhysRevLett.85.5468

Campos, J. G. F., Costa, A. D. A., Copelli, M., and Kinouchi, O. (2017). Correlations induced by depressing synapses in critically self-organized networks with quenched dynamics. Phys. Rev. E 95:042303. doi: 10.1103/PhysRevE.95.042303

Carvalho, T. T. A., Fontenele, A. J., Girardi-Schappo, M., Feliciano, T., Aguiar, L. A. A., Silva, T. P. L., et al. (2020). Subsampled directed-percolation models explain scaling relations experimentally observed in the brain. arxiv [Preprint] 1–15.

Cavagna, A., Cimarelli, A., Giardina, I., Parisi, G., Santagati, R., Stefanini, F., et al. (2010). Scale-free correlations in starling flocks. Proc. Natl. Acad. Sci. U.S.A. 107, 11865–11870. doi: 10.1073/pnas.1005766107

Chaudhuri, R., He, B. J., and Wang, X.-J. (2018). Random recurrent networks near criticality capture the broadband power distribution of human ECoG dynamics. Cereb. Cortex 28, 3610–3622. doi: 10.1093/cercor/bhx233

Chialvo, D. R. (2010). Emergent complex neural dynamics. Nat. Phys. 6, 744–750. doi: 10.1038/nphys1803

Chklovskii, D. B., Schikorski, T., and Stevens, C. F. (2002). Wiring optimization in cortical circuits. Neuron 34, 341–347. doi: 10.1016/S0896-6273(02)00679-7

Clauset, A., Shalizi, C. R., and Newman, M. E. J. (2009). Power-law distributions in empirical data. SIAM Rev. 51, 661–703. doi: 10.1137/070710111

Cocchi, L., Gollo, L. L., Zalesky, A., and Breakspear, M. (2017). Criticality in the brain: a synthesis of neurobiology, models and cognition. Progress Neurobiol. 158, 132–152. doi: 10.1016/j.pneurobio.2017.07.002

Costa, A. A., Brochini, L., and Kinouchi, O. (2017). Self-organized supercriticality and oscillations in networks of stochastic spiking neurons. Entropy 19, 1–16. doi: 10.3390/e19080399

Costa, A. D. A., Copelli, M., and Kinouchi, O. (2015). Can dynamical synapses produce true self-organized criticality? J. Stat. Mech. Theory Exp. 2015:P06004. doi: 10.1088/1742-5468/2015/06/P06004

Cuntz, H., Forstner, F., Borst, A., and Häusser, M. (2010). One rule to grow them all: a general theory of neuronal branching and its practical application. PLoS Comput. Biol. 6:e1000877. doi: 10.1371/journal.pcbi.1000877

de Arcangelis, L., and Herrmann, H. J. (2012). Activity-dependent neuronal model on complex networks. Front. Physiol. 3:62. doi: 10.3389/fphys.2012.00062

de Arcangelis, L., Perrone-Capano, C., and Herrmann, H. J. (2006). Self-organized criticality model for brain plasticity. Phys. Rev. Lett. 96:028107. doi: 10.1103/PhysRevLett.96.028107

de Solla Price, D. J. (1965). Networks of scientific papers. Science 149, 510–515. doi: 10.1126/science.149.3683.510

Deeba, F., Sanz-Leon, P., and Robinson, P. A. (2018). Dependence of absence seizure dynamics on physiological parameter evolution. J. Theor. Biol. 454, 11–21. doi: 10.1016/j.jtbi.2018.05.029

Denning, P. J. (2007). Computing is a natural science. Commun. ACM 50, 13–18. doi: 10.1145/1272516.1272529

Du, J., Vegh, V., and Reutens, D. C. (2019). Small changes in synaptic gain lead to seizure-like activity in neuronal network at criticality. Sci. Rep. 9, 1–15. doi: 10.1038/s41598-018-37646-9

Eguíluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M., and Apkarian, A. V. (2005). Scale-free brain functional networks. Phys. Rev. Lett. 94:018102. doi: 10.1103/PhysRevLett.94.018102

Englot, D. J., Konrad, P. E., and Morgan, V. L. (2016). Regional and global connectivity disturbances in focal epilepsy, related neurocognitive sequelae, and potential mechanistic underpinnings. Epilepsia 57, 1546–1557. doi: 10.1111/epi.13510

Fekete, T., Omer, D. B., O'Hashi, K., Grinvald, A., van Leeuwen, C., and Shriki, O. (2018). Critical dynamics, anesthesia and information integration: lessons from multi-scale criticality analysis of voltage imaging data. NeuroImage 183, 919–933. doi: 10.1016/j.neuroimage.2018.08.026

Fingelkurts, A. A., Fingelkurts, A. A., Kivisaari, R., Pekkonen, E., Ilmoniemi, R. J., and Kähkönen, S. (2004). Local and remote functional connectivity of neocortex under the inhibition influence. NeuroImage 22, 1390–1406. doi: 10.1016/j.neuroimage.2004.03.013

Fornito, A., Zalesky, A., and Breakspear, M. (2015). The connectomics of brain disorders. Nat. Rev. Neurosci. 16, 159–172. doi: 10.1038/nrn3901

Friedman, N., Ito, S., Brinkman, B. A. W., Shimono, M., Deville, R. E. L., Dahmen, K. A., et al. (2012). Universal critical dynamics in high resolution neuronal avalanche data. Phys. Rev. Lett. 108, 1–5. doi: 10.1103/PhysRevLett.108.208102

Gallos, L. K., Makse, H. A., and Sigman, M. (2012). A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. Proc. Natl. Acad. Sci. U.S.A. 109, 2825–2830. doi: 10.1073/pnas.1106612109

Gao, R., and Penzes, P. (2015). Common mechanisms of excitatory and inhibitory imbalance in schizophrenia and autism spectrum disorders. Curr. Mol. Med. 15, 146–167. doi: 10.2174/1566524015666150303003028

Gautam, S. H., Hoang, T. T., McClanahan, K., Grady, S. K., and Shew, W. L. (2015). Maximizing sensory dynamic range by tuning the cortical state to criticality. PLoS Comput. Biol. 11, 1–15. doi: 10.1371/journal.pcbi.1004576

Gilad, R., Boaz, M., Dabby, R., Sadeh, M., and Lampl, Y. (2011). Are post intracerebral hemorrhage seizures prevented by anti-epileptic treatment? Epilepsy Res. 95, 227–231. doi: 10.1016/j.eplepsyres.2011.04.002

Girardi-Schappo, M., Bortolotto, G. S., Gonsalves, J. J., Pinto, L. T., and Tragtenberg, M. H. R. (2016). Griffiths phase and long-range correlations in a biologically motivated visual cortex model. Sci. Rep. 6:29561. doi: 10.1038/srep29561

Girardi-Schappo, M., Brochini, L., Costa, A. A., Carvalho, T. T. A., and Kinouchi, O. (2020). Synaptic balance due to homeostatically self-organized quasicritical dynamics. Phys. Rev. Res. 2:012042. doi: 10.1103/PhysRevResearch.2.012042

Gireesh, E. D., and Plenz, D. (2008). Neuronal avalanches organize as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3. Proc. Natl. Acad. Sci. U.S.A. 105, 7576–7581. doi: 10.1073/pnas.0800537105

Goldstein, M. L., Morris, S. A., and Yen, G. G. (2004). Problems with fitting to the power-law distribution. Eur. Phys. J. B 41, 255–258. doi: 10.1140/epjb/e2004-00316-5

Gollo, L. L. (2017). Coexistence of critical sensitivity and subcritical specificity can yield optimal population coding. J. R. Soc. Interface 14:20170207. doi: 10.1098/rsif.2017.0207

Goodarzinick, A., Niry, M. D., Valizadeh, A., and Perc, M. (2018). Robustness of functional networks at criticality against structural defects. Phys. Rev. E 98, 1–7. doi: 10.1103/PhysRevE.98.022312

Griffa, A., and Van den Heuvel, M. P. (2018). Rich-club neurocircuitry: function, evolution, and vulnerability. Dialogues Clin. Neurosci. 20, 121–132. doi: 10.31887/DCNS.2018.20.2/agriffa

Haldeman, C., and Beggs, J. M. (2005). Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys. Rev. Lett. 94, 1–4. doi: 10.1103/PhysRevLett.94.058101

Hallquist, M. N., and Hillary, F. G. (2018). Graph theory approaches to functional network organization in brain disorders: a critique for a brave new small-world. Netw. Neurosci. 3, 1–26. doi: 10.1162/netn_a_00054

Hardstone, R., Poil, S.-S., Schiavone, G., Jansen, R., Nikulin, V. V., Mansvelder, H. D., et al. (2012). Detrended fluctuation analysis: a scale-free view on neuronal oscillations. Front. Physiol. 3:450. doi: 10.3389/fphys.2012.00450

Heiney, K., Ramstad, O. H., Sandvig, I., Sandvig, A., and Nichele, S. (2019). “Assessment and manipulation of the computational capacity of in vitro neuronal networks through criticality in neuronal avalanches,” in 2019 IEEE Symposium Series on Computational Intelligence (SSCI) (Xiamen), 247–254. doi: 10.1109/SSCI44817.2019.9002693

Hesse, J., and Gross, T. (2014). Self-organized criticality as a fundamental property of neural systems. Front. Syst. Neurosci. 8:166. doi: 10.3389/fnsys.2014.00166

Hillary, F. G., and Grafman, J. H. (2017). Injured brains and adaptive networks: the benefits and costs of hyperconnectivity. Trends Cogn. Sci. 21, 385–401. doi: 10.1016/j.tics.2017.03.003

Hobbs, J. P., Smith, J. L., and Beggs, J. M. (2010). Aberrant neuronal avalanches in cortical tissue removed from juvenile epilepsy patients. J. Clin. Neurophysiol. 27, 380–386. doi: 10.1097/WNP.0b013e3181fdf8d3

Hoffmann, H. (2018). Impact of network topology on self-organized criticality. Phys. Rev. E. 97:022313. doi: 10.1103/PhysRevE.97.022313

Jiang, L., Sui, D., Qiao, K., Dong, H. M., Chen, L., and Han, Y. (2018). Impaired functional criticality of human brain during Alzheimer's disease progression. Sci. Rep. 8, 1–11. doi: 10.1038/s41598-018-19674-7

Khoshkhou, M., and Montakhab, A. (2019). Spike-timing-dependent plasticity with axonal delay tunes networks of izhikevich neurons to the edge of synchronization transition with scale-free avalanches. Front. Syst. Neurosci. 13:73. doi: 10.3389/fnsys.2019.00073

Kim, D. J., and Min, B. K. (2020). Rich-club in the brain's macrostructure: insights from graph theoretical analysis. Comput. Struct. Biotechnol. J. 18, 1761–1773. doi: 10.1016/j.csbj.2020.06.039

Kinouchi, O., Brochini, L., Costa, A. A., Campos, J. G. F., and Copelli, M. (2019). Stochastic oscillations and dragon king avalanches in self-organized quasi-critical systems. Sci. Rep. 9, 1–12. doi: 10.1038/s41598-019-40473-1

Kinouchi, O., and Copelli, M. (2006). Optimal dynamical range of excitable networks at criticality. Nat. Phys. 2, 348–352. doi: 10.1038/nphys289

Kinouchi, O., Pazzini, R., and Copelli, M. (2020). Mechanisms of self-organized quasicriticality in neuronal networks models. arxiv. doi: 10.3389/fphy.2020.583213

Kossio, F. Y. K., Goedeke, S., Van Den Akker, B., Ibarz, B., and Memmesheimer, R. M. (2018). Growing critical: self-organized criticality in a developing neural system. Phys. Rev. Lett. 121:58301. doi: 10.1103/PhysRevLett.121.058301

Langton, C. G. (1990). Computation at the edge of chaos: phase transitions and emergent computation. Phys. D Nonlin. Phenomena 42, 12–37. doi: 10.1016/0167-2789(90)90064-V

Laughlin, S. B., and Sejnowski, T. J. (2003). Communication in neuronal networks. Science. 301, 1870–1874. doi: 10.1126/science.1089662

Lee, H., Golkowski, D., Jordan, D., Berger, S., Ilg, R., Lee, J., et al. (2019). Relationship of critical dynamics, functional connectivity, and states of consciousness in large-scale human brain networks. NeuroImage 188, 228–238. doi: 10.1016/j.neuroimage.2018.12.011

Levina, A., Herrmann, J. M., and Geisel, T. (2007). Dynamical synapses causing self-organized criticality in neural networks. Nat. Phys. 3, 857–860. doi: 10.1038/nphys758

Levina, A., and Priesemann, V. (2017). Subsampling scaling. Nat. Commun. 8, 1–9. doi: 10.1038/ncomms15140

Li, J., and Shew, W. L. (2020). Tuning network dynamics from criticality to an asynchronous state. PLoS Comput. Biol. 16:e1008268. doi: 10.1371/journal.pcbi.1008268

Li, L., Bachevalier, J., Hu, X., Klin, A., Preuss, T. M., Shultz, S., et al. (2018). Topology of the structural social brain network in typical adults. Brain Connectivity 8, 537–548. doi: 10.1089/brain.2018.0592

Li, X., Chen, Q., and Xue, F. (2017). Biological modelling of a computational spiking neural network with neuronal avalanches. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 375:20160286. doi: 10.1098/rsta.2016.0286

Li, X., Polygiannakis, J., Kapiris, P., Peratzakis, A., Eftaxias, K., and Yao, X. (2005). Fractal spectral analysis of pre-epileptic seizures in terms of criticality. J. Neural Eng. 2, 11–16. doi: 10.1088/1741-2560/2/2/002

Li, X., and Small, M. (2012). Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure. Chaos 22:023104. doi: 10.1063/1.3701946

Li, X., Wang, W., Xue, F., and Song, Y. (2018). Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity. Phys. A 491, 716–728. doi: 10.1016/j.physa.2017.08.053

Lin, M., and Chen, T. (2005). Self-organized criticality in a simple model of neurons based on small-world networks. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 71:016133. doi: 10.1103/PhysRevE.71.016133

Lopes, M. A., Junges, L., Woldman, W., Goodfellow, M., and Terry, J. R. (2020). The role of excitability and network structure in the emergence of focal and generalized seizures. Front. Neurol. 11:74. doi: 10.3389/fneur.2020.00074

Low, L. K., and Cheng, H. J. (2006). Axon pruning: an essential step underlying the developmental plasticity of neuronal connections. Philos. Trans. R. Soc. B Biol. Sci. 361, 1531–1544. doi: 10.1098/rstb.2006.1883

Ma, Z., Turrigiano, G. G., Wessel, R., and Hengen, K. B. (2019). Cortical circuit dynamics are homeostatically tuned to criticality in vivo. Neuron 104, 655–664.e4. doi: 10.1016/j.neuron.2019.08.031

Malamud, B. D., Morein, G., and Turcotte, D. L. (1998). Forest fires: an example of self-organized critical behavior. Science 281, 1840–1842. doi: 10.1126/science.281.5384.1840

Marcuzzo, S., Terragni, B., Bonanno, S., Isaia, D., Cavalcante, P., Cappelletti, C., et al. (2019). Hyperexcitability in cultured cortical neuron networks from the G93A-SOD1 amyotrophic lateral sclerosis model mouse and its molecular correlates. Neuroscience 416, 88–99. doi: 10.1016/j.neuroscience.2019.07.041

Marshall, N., Timme, N. M., Bennett, N., Ripp, M., Lautzenhiser, E., and Beggs, J. M. (2016). Analysis of power laws, shape collapses, and neural complexity: new techniques and MATLAB support via the NCC toolbox. Front. Physiol. 7:250. doi: 10.3389/fphys.2016.00250

Martinello, M., Hidalgo, J., Maritan, A., Di Santo, S., Plenz, D., and Muñoz, M. A. (2017). Neutral theory and scale-free neural dynamics. Phys. Rev. X 7:041071. doi: 10.1103/PhysRevX.7.041071

Massobrio, P., Pasquale, V., and Martinoia, S. (2015). Self-organized criticality in cortical assemblies occurs in concurrent scale-free and small-world networks. Sci. Rep. 5:10578. doi: 10.1038/srep10578

Maturana, M. I., Meisel, C., Dell, K., Karoly, P. J., D'Souza, W., Grayden, D. B., et al. (2020). Critical slowing down as a biomarker for seizure susceptibility. Nat. Commun. 11:2172. doi: 10.1038/s41467-020-15908-3

McAuley, J. J., Da Fontoura Costa, L., and Caetano, T. S. (2007). Rich-club phenomenon across complex network hierarchies. Appl. Phys. Lett. 91, 2–5. doi: 10.1063/1.2773951

Meisel, C. (2016). Linking cortical network synchrony and excitability. Commun. Integr. Biol. 9, 1–3. doi: 10.1080/19420889.2015.1128598

Meisel, C. (2020). Antiepileptic drugs induce subcritical dynamics in human cortical networks. Proc. Nat. Acad. Sci. 117, 11118–11125. doi: 10.1073/pnas.1911461117

Meisel, C., and Gross, T. (2009). Adaptive self-organization in a realistic neural network model. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 80, 1–6. doi: 10.1103/PhysRevE.80.061917

Meisel, C., Klaus, A., Kuehn, C., and Plenz, D. (2015a). Critical slowing down governs the transition to neuron spiking. PLoS Comput. Biol. 11:e1004097. doi: 10.1371/journal.pcbi.1004097

Meisel, C., and Kuehn, C. (2012). Scaling effects and spatio-temporal multilevel dynamics in epileptic seizures. PLoS ONE 7:e30371. doi: 10.1371/journal.pone.0030371

Meisel, C., and Loddenkemper, T. (2019). Seizure prediction and intervention. Neuropharmacology 172, 107898. doi: 10.1016/j.neuropharm.2019.107898

Meisel, C., Olbrich, E., Shriki, O., and Achermann, P. (2013). Fading signatures of critical brain dynamics during sustained wakefulness in humans. J. Neurosci. 33, 17363–17372. doi: 10.1523/JNEUROSCI.1516-13.2013

Meisel, C., Plenz, D., Schulze-Bonhage, A., and Reichmann, H. (2016). Quantifying antiepileptic drug effects using intrinsic excitability measures. Epilepsia 57, e210–e215. doi: 10.1111/epi.13517

Meisel, C., Schulze-Bonhage, A., Freestone, D., Cook, M. J., Achermann, P., and Plenz, D. (2015b). Intrinsic excitability measures track antiepileptic drug action and uncover increasing/decreasing excitability over the wake/sleep cycle. Proc. Natl. Acad. Sci. U.S.A. 112, 14694–14699. doi: 10.1073/pnas.1513716112

Meunier, D., Lambiotte, R., and Bullmore, E. T. (2010). Modular and hierarchically modular organization of brain networks. Front. Neurosci. 4:200. doi: 10.3389/fnins.2010.00200

Michiels van Kessenich, L., Berger, D., de Arcangelis, L., and Herrmann, H. J. (2019). Pattern recognition with neuronal avalanche dynamics. Phys. Rev. E 99:010302. doi: 10.1103/PhysRevE.99.010302

Michiels Van Kessenich, L., De Arcangelis, L., and Herrmann, H. J. (2016). Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches. Sci. Rep. 6:32071. doi: 10.1038/srep32071

Michiels van Kessenich, L., Luković, M., de Arcangelis, L., and Herrmann, H. J. (2018). Critical neural networks with short- and long-term plasticity. Phys. Rev. E 97:032312. doi: 10.1103/PhysRevE.97.032312

Miller, S. R., Yu, S., and Plenz, D. (2019). The scale-invariant, temporal profile of neuronal avalanches in relation to cortical γ-oscillations. Sci. Rep. 9, 1–14. doi: 10.1038/s41598-019-52326-y

Mizutaka, S., and Yakubo, K. (2013). Structural robustness of scale-free networks against overload failures. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 88:012803. doi: 10.1103/PhysRevE.88.012803

Moretti, P., and Muñoz, M. A. (2013). Griffiths phases and the stretching of criticality in brain networks. Nat. Commun. 4:2521. doi: 10.1038/ncomms3521

Muñoz, M. A. (2018). Colloquium: criticality and dynamical scaling in living systems. Rev. Modern Phys. 90:031001. doi: 10.1103/RevModPhys.90.031001

Muñoz, M. A., Juhász, R., Castellano, C., and Ódor, G. (2010). Griffiths phases on complex networks. Phys. Rev. Lett. 105:128701. doi: 10.1103/PhysRevLett.105.128701

Myint, P. K., Staufenberg, E. F. A., and Sabanathan, K. (2006). Post-stroke seizure and post-stroke epilepsy. Postgraduate Med. J. 82, 568–572. doi: 10.1136/pgmj.2005.041426

Naudé, J., Cessac, B., Berry, H., and Delord, B. (2013). Effects of cellular homeostatic intrinsic plasticity on dynamical and computational properties of biological recurrent neural networks. J. Neurosci. 33, 15032–15043. doi: 10.1523/JNEUROSCI.0870-13.2013

Netoff, T. I., Clewley, R., Arno, S., Keck, T., and White, J. A. (2004). Epilepsy in small-world networks. J. Neurosci. 24, 8075–8083. doi: 10.1523/JNEUROSCI.1509-04.2004

Newman, M. E. J. (2003). The structure and function of complex networks. SIAM Rev. 45, 167–256. doi: 10.1137/S003614450342480

Ódor, G., Dickman, R., and Ódor, G. (2015). Griffiths phases and localization in hierarchical modular networks. Sci. Rep. 5, 1–15. doi: 10.1038/srep14451

Ódor, G., and Kelling, J. (2019). Critical synchronization dynamics of the Kuramoto model on connectome and small world graphs. Sci. Rep. 9:19621. doi: 10.1038/s41598-019-54769-9

Okujeni, S., and Egert, U. (2019). Self-organization of modular network architecture by activity-dependent neuronal migration and outgrowth. ELife 8:e47996. doi: 10.7554/eLife.47996.031

Paczuski, M., Maslov, S., and Bak, P. (1996). Avalanche dynamics in evolution, growth, and depinning models. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscipl. Top. 53, 414–443. doi: 10.1103/PhysRevE.53.414

Pajevic, S., and Plenz, D. (2009). Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches. PLoS Comput. Biol. 5:e1000271. doi: 10.1371/journal.pcbi.1000271

Pajevic, S., and Plenz, D. (2012). The organization of strong links in complex networks. Nat. Phys. 8, 429–436. doi: 10.1038/nphys2257

Palmieri, L., and Jensen, H. J. (2020). The forest fire model: the subtleties of criticality and scale invariance. Front. Phys. 8:257. doi: 10.3389/fphy.2020.00257

Pan, R. K., and Sinha, S. (2007). Modular networks emerge from multiconstraint optimization. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 76:045103. doi: 10.1103/PhysRevE.76.045103

Pasquale, V., Massobrio, P., Bologna, L. L., Chiappalone, M., and Martinoia, S. (2008). Self-organization and neuronal avalanches in networks of dissociated cortical neurons. Neuroscience 153, 1354–1369. doi: 10.1016/j.neuroscience.2008.03.050

Pearlmutter, B. A., and Houghton, C. J. (2009). A new hypothesis for sleep: tuning for criticality. Neural Comput. 21, 1622–1641. doi: 10.1162/neco.2008.05-08-787

Pellegrini, G. L., de Arcangelis, L., Herrmann, H. J., and Perrone-Capano, C. (2007). Activity-dependent neural network model on scale-free networks. Phys. Rev. E 76:016107. doi: 10.1103/PhysRevE.76.016107

Peng, J., and Beggs, J. M. (2013). Attaining and maintaining criticality in a neuronal network model. Phys. A Statist. Mech. Appl. 392, 1611–1620. doi: 10.1016/j.physa.2012.11.013

Plenz, D. (2012). Neuronal avalanches and coherence potentials. Eur. Phys. J. Spec. Top. 205, 259–301. doi: 10.1140/epjst/e2012-01575-5

Ponten, S. C., Bartolomei, F., and Stam, C. J. (2007). Small-world networks and epilepsy: graph theoretical analysis of intracerebrally recorded mesial temporal lobe seizures. Clin. Neurophysiol. 118, 918–927. doi: 10.1016/j.clinph.2006.12.002

Priesemann, V., and Shriki, O. (2018). Can a time varying external drive give rise to apparent criticality in neural systems? PLoS Comput. Biol. 14:e1006081. doi: 10.1371/journal.pcbi.1006081

Priesemann, V., Wibral, M., Valderrama, M., Pröpper, R., Le Van Quyen, M., Geisel, T., et al. (2014). Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Front. Syst. Neurosci. 8:108. doi: 10.3389/fnsys.2014.00108

Reia, S. M., and Kinouchi, O. (2014). Conway's game of life is a near-critical metastable state in the multiverse of cellular automata. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 89, 1–5. doi: 10.1103/PhysRevE.89.052123

Reia, S. M., and Kinouchi, O. (2015). Nonsynchronous updating in the multiverse of cellular automata. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 91, 1–5. doi: 10.1103/PhysRevE.91.042110

Ren, S. Q., Yao, W., Yan, J. Z., Jin, C., Yin, J. J., Yuan, J., et al. (2018). Amyloid β causes excitation/inhibition imbalance through dopamine receptor 1-dependent disruption of fast-spiking GABAergic input in anterior cingulate cortex. Sci. Rep. 8, 1–10. doi: 10.1038/s41598-017-18729-5

Rings, T., von Wrede, R., and Lehnertz, K. (2019). Precursors of seizures due to specific spatial-temporal modifications of evolving large-scale epileptic brain networks. Sci. Rep. 9, 1–12. doi: 10.1038/s41598-019-47092-w

Rubinov, M., Sporns, O., Thivierge, J. P., and Breakspear, M. (2011). Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Comput. Biol. 7:e1002038. doi: 10.1371/journal.pcbi.1002038

Rudy, B., Fishell, G., Lee, S., and Hjerling-Leffler, J. (2011). Three groups of interneurons account for nearly 100% of neocortical GABAergic neurons. Dev. Neurobiol. 71, 45–61. doi: 10.1002/dneu.20853

Salkoff, D. B., Zagha, E., Yüzgeç, Ö., and McCormick, D. A. (2015). Synaptic mechanisms of tight spike synchrony at gamma frequency in cerebral cortex. J. Neurosci. 35, 10236–10251. doi: 10.1523/JNEUROSCI.0828-15.2015

Sethna, J. P., Dahmen, K. A., and Myers, C. R. (2001). Crackling noise. Nature 410, 242–250. doi: 10.1038/35065675

Shew, W. L., Clawson, W. P., Pobst, J., Karimipanah, Y., Wright, N. C., and Wessel, R. (2015). Adaptation to sensory input tunes visual cortex to criticality. Nat. Phys. 11, 659–663. doi: 10.1038/nphys3370

Shew, W. L., and Plenz, D. (2013). The functional benefits of criticality in the cortex. Neuroscientist 19, 88–100. doi: 10.1177/1073858412445487

Shew, W. L., Yang, H., Petermann, T., Roy, R., and Plenz, D. (2009). Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J. Neurosci. 29, 15595–15600. doi: 10.1523/JNEUROSCI.3864-09.2009

Shew, W. L., Yang, H., Yu, S., Roy, R., and Plenz, D. (2011). Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. J. Neurosci. 31, 55–63. doi: 10.1523/JNEUROSCI.4637-10.2011

Shin, C. W., and Kim, S. (2006). Self-organized criticality and scale-free properties in emergent functional neural networks. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 74:045101. doi: 10.1103/PhysRevE.74.045101

Siri, B., Berry, H., Cessac, B., Delord, B., and Quoy, M. (2008). A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks. Neural Comput. 20, 2937–2966. doi: 10.1162/neco.2008.05-07-530

Siri, B., Quoy, M., Delord, B., Cessac, B., and Berry, H. (2007). Effects of Hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons. J. Physiol. Paris 101, 136–148. doi: 10.1016/j.jphysparis.2007.10.003

Sporns, O. (2002). Network analysis, complexity, and brain function. Complexity 8, 56–60. doi: 10.1002/cplx.10047

Sporns, O. (2013). Structure and function of complex brain networks. Dialogues Clin. Neurosci. 15, 247–262. doi: 10.31887/DCNS.2013.15.3/osporns

Sporns, O., Chialvo, D. R., Kaiser, M., and Hilgetag, C. C. (2004). Organization, development and function of complex brain networks. Trends Cogn. Sci. 8, 418–425. doi: 10.1016/j.tics.2004.07.008

Stafstrom, C. E., and Carmant, L. (2015). Seizures and epilepsy: an overview for neuroscientists. Cold Spring Harbor Perspect. Med. 5:a022426. doi: 10.1101/cshperspect.a022426

Stam, C. J. (2014). Modern network science of neurological disorders. Nat. Rev. Neurosci. 15, 683–695. doi: 10.1038/nrn3801

Stepp, N., Plenz, D., and Srinivasa, N. (2015). Synaptic plasticity enables adaptive self-tuning critical networks. PLoS Comput. Biol. 11:e1004043. doi: 10.1371/journal.pcbi.1004043

Stewart, C. V., and Plenz, D. (2006). Inverted-U profile of dopamine-NMDA-mediated spontaneous avalanche recurrence in superficial layers of rat prefrontal cortex. J. Neurosci. 26, 8148–8159. doi: 10.1523/JNEUROSCI.0723-06.2006

Stewart, C. V., and Plenz, D. (2008). Homeostasis of neuronal avalanches during postnatal cortex development in vitro. J. Neurosci. Methods 169, 405–416. doi: 10.1016/j.jneumeth.2007.10.021

Tagliazucchi, E., Balenzuela, P., Fraiman, D., and Chialvo, D. R. (2012). Criticality in large-scale brain fMRI dynamics unveiled by a novel point process analysis. Front. Physiol. 3:15. doi: 10.3389/fphys.2012.00015

Teixeira, F. P. P., and Shanahan, M. (2014). “Does plasticity promote criticality?” in 2014 International Joint Conference on Neural Networks (IJCNN) (Beijing), 2383–2390. doi: 10.1109/IJCNN.2014.6889562

Terry, J. R., Benjamin, O., and Richardson, M. P. (2012). Seizure generation: the role of nodes and networks. Epilepsia 53, 166–169. doi: 10.1111/j.1528-1167.2012.03560.x

Tetzlaff, C., Okujeni, S., Egert, U., Wörgötter, F., and Butz, M. (2010). Self-organized criticality in developing neuronal networks. PLoS Comput. Biol. 6:e1001013. doi: 10.1371/journal.pcbi.1001013

Thatcher, R. W., North, D. M., and Biver, C. J. (2009). Self-organized criticality and the development of EEG phase reset. Human Brain Mapp. 30, 553–574. doi: 10.1002/hbm.20524

Thivierge, J. P. (2014). Scale-free and economical features of functional connectivity in neuronal networks. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 90:022721. doi: 10.1103/PhysRevE.90.022721

Tinker, J., and Velazquez, J. L. P. (2014). Power law scaling in synchronization of brain signals depends on cognitive load. Front. Syst. Neurosci. 8:73. doi: 10.3389/fnsys.2014.00073

Touboul, J., and Destexhe, A. (2017). Power-law statistics and universal scaling in the absence of criticality. Phys. Rev. E 95, 1–14. doi: 10.1103/PhysRevE.95.012413

Tremblay, R., Lee, S., and Rudy, B. (2016). GABAergic interneurons in the neocortex: from cellular properties to circuits. Neuron 91, 260–292. doi: 10.1016/j.neuron.2016.06.033

Turrigiano, G. G. (2017). The dialectic of Hebb and homeostasis. Philos. Trans. R. Soc. B Biol. Sci. 372, 4–6. doi: 10.1098/rstb.2016.0258

Valverde, S., Ohse, S., Turalska, M., West, B. J., and Garcia-Ojalvo, J. (2015). Structural determinants of criticality in biological networks. Front. Physiol. 6:127. doi: 10.3389/fphys.2015.00127

van Bokhoven, H., Selten, M., and Nadif Kasri, N. (2018). Inhibitory control of the excitatory/inhibitory balance in psychiatric disorders. F1000Research 7, 1–16. doi: 10.12688/f1000research.12155.1

van den Heuvel, M. P., and Sporns, O. (2013). Network hubs in the human brain. Trends Cogn. Sci. 17, 683–696. doi: 10.1016/j.tics.2013.09.012

Van Ooyen, A., Van Pelt, J., and Corner, M. A. (1995). Implications of activity dependent neurite outgrowth for neuronal morphology and network development. J. Theor. Biol. 172, 63–82. doi: 10.1006/jtbi.1995.0005

Wang, S. J., and Zhou, C. (2012). Hierarchical modular structure enhances the robustness of self-organized criticality in neural networks. New J. Phys. 14:023005. doi: 10.1088/1367-2630/14/2/023005

Watts, D. J., and Strogatz, S. H. (1998). Collective dynamics of ‘small-world’ networks. Nature 393, 440–442. doi: 10.1038/30918

Wei, F., Yan, L. M., Su, T., He, N., Lin, Z. J., Wang, J., et al. (2017). Ion channel genes and epilepsy: functional alteration, pathogenic potential, and mechanism of epilepsy. Neurosci. Bull. 33, 455–477. doi: 10.1007/s12264-017-0134-1

Wilkat, T., Rings, T., and Lehnertz, K. (2019). No evidence for critical slowing down prior to human epileptic seizures. Chaos 29:091104. doi: 10.1063/1.5122759

Wilting, J., Dehning, J., Pinheiro Neto, J., Rudelt, L., Wibral, M., Zierenberg, J., et al. (2018). Operating in a reverberating regime enables rapid tuning of network states to task requirements. Front. Syst. Neurosci. 12:55. doi: 10.3389/fnsys.2018.00055

Wilting, J., and Priesemann, V. (2018). Inferring collective dynamical states from widely unobserved systems. Nat. Commun. 9:2325. doi: 10.1038/s41467-018-04725-4

Wilting, J., and Priesemann, V. (2019a). 25 years of criticality in neuroscience — established results, open controversies, novel concepts. Curr. Opin. Neurobiol. 58, 105–111. doi: 10.1016/j.conb.2019.08.002

Wilting, J., and Priesemann, V. (2019b). Between perfectly critical and fully irregular: a reverberating model captures and predicts cortical spike propagation. Cereb. Cortex 29, 2759–2770. doi: 10.1093/cercor/bhz049

Witton, C., Sergeyev, S. V., Turitsyna, E. G., Furlong, P. L., Seri, S., Brookes, M., et al. (2019). Rogue bioelectrical waves in the brain: the Hurst exponent as a potential measure for presurgical mapping in epilepsy. J. Neural Eng. 16:056019. doi: 10.1088/1741-2552/ab225e

Wolfram, S. (1984). Universality and complexity in cellular automata. Phys. D Nonlin. Phenomena 10, 1–35. doi: 10.1016/0167-2789(84)90245-8

Womelsdorf, T., Schoffelen, J. M., Oostenveld, R., Singer, W., Desimone, R., Engel, A. K., et al. (2007). Modulation of neuronal interactions through neuronal synchronization. Science 316, 1609–1612. doi: 10.1126/science.1139597

Worrell, G. A., Cranstoun, S. D., Echauz, J., and Litt, B. (2002). Evidence for self-organized criticality in human epileptic hippocampus. NeuroReport 13, 2017–2021. doi: 10.1097/00001756-200211150-00005

Wu, S., Zhang, Y., Cui, Y., Li, H., Wang, J., Guo, L., et al. (2019). Heterogeneity of synaptic input connectivity regulates spike-based neuronal avalanches. Neural Netw. 110, 91–103. doi: 10.1016/j.neunet.2018.10.017

Wu, Y. K., Hengen, K. B., Turrigiano, G. G., and Gjorgjieva, J. (2020). Homeostatic mechanisms regulate distinct aspects of cortical circuit dynamics. Proc. Natl. Acad. Sci. U.S.A. 117, 24514–24525. doi: 10.1073/pnas.1918368117

Yada, Y., Mita, T., Sanada, A., Yano, R., Kanzaki, R., Bakkum, D. J., et al. (2017). Development of neural population activity toward self-organized criticality. Neuroscience 343, 55–65. doi: 10.1016/j.neuroscience.2016.11.031

Yaffe, R. B., Borger, P., Megevand, P., Groppe, D. M., Kramer, M. A., Chu, C. J., et al. (2015). Physiology of functional and effective networks in epilepsy. Clin. Neurophysiol. 126, 227–236. doi: 10.1016/j.clinph.2014.09.009

Zhang, A., Zhou, H., Li, X., and Zhu, W. (2019). Fast and robust learning in spiking feed-forward neural networks based on intrinsic plasticity mechanism. Neurocomputing 365, 102–112. doi: 10.1016/j.neucom.2019.07.009

Zhou, S., and Mondragón, R. J. (2004). The rich-club phenomenon in the internet topology. IEEE Commun. Lett. 8, 180–182. doi: 10.1109/LCOMM.2004.823426

Zierenberg, J., Wilting, J., and Priesemann, V. (2018). Homeostatic plasticity and external input shape neural network dynamics. Phys. Rev. X 8:031018. doi: 10.1103/PhysRevX.8.031018

Zimmern, V. (2020). Why brain criticality is clinically relevant: a scoping review. Front. Neural Circuits 14:54. doi: 10.3389/fncir.2020.00054

Keywords: criticality, connectivity, neural disorder, in vitro neural networks, complexity, neuronal avalanches, neural computation, plasticity

Citation: Heiney K, Huse Ramstad O, Fiskum V, Christiansen N, Sandvig A, Nichele S and Sandvig I (2021) Criticality, Connectivity, and Neural Disorder: A Multifaceted Approach to Neural Computation. Front. Comput. Neurosci. 15:611183. doi: 10.3389/fncom.2021.611183

Received: 28 September 2020; Accepted: 18 January 2021;
Published: 10 February 2021.

Edited by:

Matjaž Perc, University of Maribor, Slovenia

Reviewed by:

Dietmar Plenz, National Institutes of Health (NIH), United States
Osame Kinouchi, University of São Paulo, Brazil
Xiumin Li, Chongqing University, China
Jean-Philippe Thivierge, University of Ottawa, Canada

Copyright © 2021 Heiney, Huse Ramstad, Fiskum, Christiansen, Sandvig, Nichele and Sandvig. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kristine Heiney, kristine.heiney@oslomet.no; Ola Huse Ramstad, ola.h.ramstad@ntnu.no

These authors have contributed equally to this work
