Article

Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier

by Sergey A. Lobov *, Andrey V. Chernyshov, Nadia P. Krilova, Maxim O. Shamshin and Victor B. Kazantsev

Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 603950 Nizhny Novgorod, Russia

* Author to whom correspondence should be addressed.
Sensors 2020, 20(2), 500; https://doi.org/10.3390/s20020500
Submission received: 3 December 2019 / Revised: 10 January 2020 / Accepted: 14 January 2020 / Published: 16 January 2020
(This article belongs to the Section Biomedical Sensors)

Abstract

One of the modern trends in the design of human–machine interfaces (HMIs) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can simultaneously encode the input signal both in the spiking frequency rate and in the latency of spike generation. In the case of such mixed temporal-rate coding, the SNN should implement learning that works properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that the use of Hebbian learning through the pair-based and triplet-based spike timing-dependent plasticity (STDP) rules is feasible for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that the coherent use of the triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographical (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition ensures the "winner takes all" principle among the classifier neurons. The SNN also provides a gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. In a problem of discrimination of three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, which is close to the result demonstrated by a multi-layer perceptron trained by the error backpropagation algorithm.

1. Introduction

Nowadays, artificial neural networks (ANNs) are widely used in practical applications. One important application is the use of ANNs in human–machine interfaces (HMIs), in particular in the electromyographical (EMG) interface. Several strategies are used to solve the problem of controlling external ("additive") devices using EMG signals. Conventional techniques are based on one-channel recordings and are limited to either trigger control based on detecting a threshold signal, or proportional control in the case of continuous monitoring of some discriminating feature extracted from the EMG signal. Multichannel recording significantly expands the control capabilities, and entirely new signal-processing techniques are used, such as classification of EMG patterns and multichannel regression [1,2]. EMG classification (accordingly, movement recognition) can be combined with command control and, hence, can be used when the controlled device is equipped with an autonomous, local low-level control system capable of implementing some macro commands. ANN-based techniques are used in a vast range of EMG classification tasks (see, e.g., [3,4,5]). In turn, regression allows one to estimate muscle effort from the EMG signal and, hence, can be used for proportional (gradual) control, e.g., for reconstruction of the torque value of some joints [6]. In addition to other mathematical tools, ANNs have also been successfully applied to the regression problem, including multichannel registration [7,8,9].
ANNs based on artificial neurons are widely used in approximation, classification, and clustering problems. The artificial neuron, originally proposed by McCulloch and Pitts [10], represents a weighted input adder with an output activation function. Such a simplification of a living neuron permits the development of a variety of effective learning procedures (see, e.g., [11,12]). However, artificial neurons used in ANNs still cannot solve many neurocomputational "dynamical" tasks based on temporal encoding and synchronization effects.
Spiking neural networks (SNNs) are a relatively new paradigm for neural computations. A spiking neuron represents a dynamical system in which a spike fires when the neuron's membrane potential exceeds a certain excitation threshold. Moreover, this excitation may be activity dependent, e.g., determined by the previous activity of the neuron [13]. It is believed that SNN-based computations have great potential [14,15] and, theoretically, can reach enormous computational efficacy like real brain circuits.
A spiking neuron with synaptic plasticity is considered an ideal candidate for the role of the basic element of upcoming neuromorphic systems based on memristive devices (see, e.g., [16,17]). The key ability of spiking neurons to synchronize [18,19] can be used in artificial, additive devices and prostheses [20,21]. The neural synchronization in such systems can involve both living and artificial neurons. There are reports showing the first experimental implementations of hybrid networks consisting of living neuronal cells and artificial spiking neurons [22,23]. Thus, a new generation of HMIs can be implemented entirely by neural networks, where neurons of the brain interact with their artificial counterparts working as part of prostheses or external devices.
Neural network functions depend on learning, defined basically by tuning the weights of the couplings between the network units. There are several learning rules used in ANNs, including Hebbian learning. Although Hebb did not give any mathematical equations for his idea about the potentiation of interacting neurons whose activity correlates [24], the learning rule can be written as [12,25]:
$$\Delta w_{ij} = \eta\, x_j\, y_i \qquad (1)$$
where Δw_ij is the change of the coupling from neuron j to neuron i, η is the learning rate, x_j is the output activity of neuron j (the input signal for neuron i), and y_i is the output activity of neuron i. Equation (1) cannot be used in this form because it may lead to an unlimited increase of the weights. This problem can be solved, in particular, by introducing a forgetting function that depends on the output activity of the neuron and on the weight of the input connection [26]:
$$\Delta w_{ij} = \eta\, x_j\, y_i - F(y_i)\, w_{ij} \qquad (2)$$
Taking into account some restrictions [27], one can transform Equation (2) into the rule of competitive learning widely used in ANNs to implement unsupervised learning:
$$\Delta w_{ij} = \begin{cases} \eta\,(x_j - w_{ij}), & \text{if neuron } i \text{ wins the competition } (y_i = 1) \\ 0, & \text{if neuron } i \text{ loses the competition } (y_i = 0) \end{cases} \qquad (3)$$
This is the so-called "winner takes all" rule, meaning that only the neuron with the maximum output response to the input pattern is trained.
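For illustration, the following minimal Python sketch implements the competitive update of Equation (3); the array sizes, the learning rate, and the random input patterns are our assumptions for the example, not parameters of any model used below.

```python
import numpy as np

def competitive_step(W, x, eta=0.05):
    """One unsupervised update: only the best-matching output neuron learns."""
    winner = int(np.argmax(W @ x))      # y_i = 1 for the winner, 0 otherwise
    W[winner] += eta * (x - W[winner])  # Equation (3): pull weights toward x
    return winner

rng = np.random.default_rng(0)
W = rng.random((3, 10))                 # 3 output neurons, 10 inputs (illustrative)
for _ in range(1000):
    competitive_step(W, rng.random(10))
```

After many presentations, each row of W drifts toward the centroid of the inputs that its neuron wins, which is the essence of the clustering behavior of this rule.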
In contrast with ANNs, in SNNs one can use an experimentally confirmed [28,29,30] algorithm of Hebbian learning in the form of spike timing-dependent plasticity (STDP). STDP potentiates the coupling between two neurons if the postsynaptic neuron generates a spike after the presynaptic one and depresses it otherwise [31]. It is important to note that this type of plasticity includes elements of synaptic competition, which makes the "success" of a synapse dependent on the timing of the spikes transmitted through it [32].
Earlier, we proposed using a layer of spiking neurons as a feature extractor for EMG. A signal from the SNN was transmitted to an ANN that classified EMG patterns corresponding to different hand gestures [21]. The aim of the current study is to develop an intelligent classification system based entirely on an SNN. To do this, we first explore the possibility of rate and temporal coding by one neuron and then define a minimal set of basic learning rules that ensure a selective SNN response. Then, we implement the studied principles in a concrete SNN classifying the EMG patterns. The developed SNN can be used in upcoming neuromorphic systems as a core implementing an HMI.

2. Models and Methods

For a single spiking neuron, we employed the dynamical system proposed by Izhikevich [33]. The neuron's driving current is given by:
$$I(t) = \xi(t) + I_{syn}(t) + I_{stml}(t) \qquad (4)$$
where ξ(t) is uncorrelated zero-mean white Gaussian noise with variance D, I_syn(t) is the synaptic current, and I_stml(t) is the external stimulus. The synaptic current represents the weighted sum of all synaptic inputs to the neuron:
$$I_{syn}(t) = \sum_j g_j\, w_j(t)\, y_j(t) \qquad (5)$$
where the sum is taken over all presynaptic neurons, w_j is the strength of the synaptic coupling directed from neuron j, g_j is the scaling factor equal either to 2 or to −2 for excitatory and inhibitory neurons, respectively, and y_j(t) describes the amount of neurotransmitter released by presynaptic neuron j:
$$\frac{dy_j}{dt} = -\frac{y_j}{\tau} + \sum_{t_j^{sp}} \delta(t - t_j^{sp}) \qquad (6)$$
where τ = 100 ms is the decay time of synaptic output [31].
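As an illustration of Equations (4)-(6), a minimal Euler-integration sketch of a single Izhikevich neuron driven by noise and synaptic input is given below. The regular-spiking constants (a, b, c, d), the time step, the noise amplitude, and the stand-in presynaptic spike trains are our assumptions, not the exact settings of our simulator.

```python
import numpy as np

dt, tau = 0.5, 100.0                  # ms; tau is the synaptic decay time [31]
a, b, c, d = 0.02, 0.2, -65.0, 8.0    # regular-spiking Izhikevich constants (assumed)
v, u = c, b * c                       # membrane potential and recovery variable
y = np.zeros(10)                      # synaptic outputs y_j, Equation (6)
w = np.full(10, 0.5)                  # coupling strengths w_j
g = np.full(10, 2.0)                  # scaling factors: +2 excitatory, -2 inhibitory
rng = np.random.default_rng(1)

for step in range(int(1000 / dt)):    # 1 s of model time
    pre_spikes = rng.random(10) < 0.01            # stand-in presynaptic spikes
    y += -y * dt / tau + pre_spikes               # decay plus delta-function kicks
    I = rng.normal(0.0, 5.0) + np.sum(g * w * y)  # Equations (4)-(5), no stimulus
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                                 # spike threshold, then reset
        v, u = c, u + d
```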
We implemented the STDP model using local variables, or traces [31]. The weight increase corresponding to long-term potentiation (LTP) occurs when the postsynaptic neuron fires a spike and is proportional to the presynaptic trace y_j^1(t):
$$\left.\frac{dw_{ij}}{dt}\right|_{+} = F_{+}(w_{ij})\; y_j^1(t) \sum_{t_i^{sp}} \delta(t - t_i^{sp}) \qquad (7)$$
The weight decrease corresponding to long-term depression (LTD) occurs when a presynaptic neuron fires a spike and is proportional to the postsynaptic trace y_i^1(t):
$$\left.\frac{dw_{ij}}{dt}\right|_{-} = -F_{-}(w_{ij})\; y_i^1(t) \sum_{t_j^{sp}} \delta(t - t_j^{sp}) \qquad (8)$$
For the weight update, we used the multiplicative rule [34]:
$$F_{+}(w_{ij}) = \lambda\,(1 - w_{ij}), \qquad F_{-}(w_{ij}) = \lambda\, \alpha\, w_{ij} \qquad (9)$$
For rate coding, we also used the triplet-based STDP characterized by frequency dependence [35]. Unlike the pair-based rule, the triplet-based rule uses two local variables, a fast one and a slow one, with different decay times τ_1 and τ_2; the dynamics of these variables can also be described by Equation (6). In the minimal triplet model [35], the LTD is calculated by Equation (8), but in the LTP the increase of the weight is proportional not only to the fast presynaptic trace, y_j^1(t), but also to the slow postsynaptic trace, y_i^2(t), as follows:
$$\left.\frac{dw_{ij}}{dt}\right|_{+} = F_{+}(w_{ij})\; y_j^1(t)\, y_i^2(t)\, \delta(t - t_i^{sp}) \qquad (10)$$
We used the following parameter values: λ = 0.001, α = 1, τ_1 = 10 ms, τ_2 = 100 ms (corresponding to the minimal triplet model in [35]).
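The event-driven structure of these updates can be sketched as follows; the function names and the code organization are ours, while the numerical parameters follow the values listed above. Trace increments (a trace jumps by 1 at the spike times of its own neuron, per Equation (6)) are left to the caller for brevity.

```python
lam, alpha = 0.001, 1.0        # learning rate and asymmetry, as in the text
tau1, tau2 = 10.0, 100.0       # fast and slow trace decay times, ms

def decay(trace, dt, tau):
    """Between spikes, every trace relaxes as in Equation (6)."""
    return trace - trace * dt / tau

def on_pre_spike(w, y_post_fast):
    """LTD, Equation (8): a presynaptic spike reads the fast postsynaptic trace."""
    return w - lam * alpha * w * y_post_fast

def on_post_spike_pair(w, y_pre_fast):
    """Pair-based LTP, Equation (7): reads the fast presynaptic trace."""
    return w + lam * (1.0 - w) * y_pre_fast

def on_post_spike_triplet(w, y_pre_fast, y_post_slow):
    """Triplet LTP, Equation (10): additionally gated by the slow postsynaptic trace."""
    return w + lam * (1.0 - w) * y_pre_fast * y_post_slow
```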
First, let us consider temporal and rate coding for a single neuron. The scheme of the network is illustrated in Figure 1. Each of the 10 presynaptic neurons encodes the time or the frequency of spikes in the repeating input patterns affecting the postsynaptic neuron during learning. In temporal coding (Figure 1A), the stimulation pattern contained a definite sequence of pulses S1–S10 with the inter-pulse interval ∆t taking values of 1, 2, 5, 10, and 20 ms in different simulations. The frequency of stimulus application was 1 Hz. In rate coding (Figure 1B), we tuned the stimulation parameters so that the presynaptic neurons fired spike trains with average frequencies of 0.1, 0.2, 0.5, 1, 2, 3, 6, 12, 25, and 50 Hz. In our simulations, the learning protocol lasted 1000 s of model time.
We used familiar (i.e., learned before) and unknown patterns to estimate the result of learning in both coding schemes. In temporal coding, we took the first/last half of the temporal pattern as the familiar/unknown pattern, respectively. In rate coding, in order to generate the unknown pattern, we reversed the learned pattern so that the first and the last presynaptic neurons had spiking rates of 50 Hz and 0.1 Hz, respectively.
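A possible reconstruction of the two stimulation protocols is sketched below; the time step and the Poisson approximation of the rate-coded spike trains are our assumptions about details the text does not fix.

```python
import numpy as np

def temporal_pattern(dt_ms=5.0, n=10):
    """Spike times (ms) of neurons N1-N10 within one 1-Hz presentation."""
    return np.arange(n) * dt_ms

rates_hz = np.array([0.1, 0.2, 0.5, 1, 2, 3, 6, 12, 25, 50])

def rate_pattern(duration_s=1.0, dt=0.001, seed=2):
    """Boolean spike raster (neurons x time bins) for the rate-coding pattern."""
    rng = np.random.default_rng(seed)
    steps = int(duration_s / dt)
    return rng.random((len(rates_hz), steps)) < rates_hz[:, None] * dt

# reversing the neuron order yields the "unknown" rate pattern described above
unknown = rate_pattern()[::-1]
```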
For the experiments, we recruited eight healthy volunteers of either sex, from 18 to 44 years old. The study complied with the Declaration of Helsinki adopted in June 1964 (Helsinki, Finland) and revised in October 2000 (Edinburgh, Scotland). The Ethics Committee of the Lobachevsky State University of Nizhny Novgorod approved the experimental procedure (protocol No. 35 from 5 September 2019). All participants gave their written consent.
Registration of the EMG signals was accomplished with the 8-channel MYO bracelet (Thalmic Labs) placed on the subject's forearm. During SNN learning, each subject, in a standing position, alternately flexed and extended his/her wrist for one minute; each gesture (rest, flexion, or extension of the hand) lasted about 3 s. SNN learning was performed online, directly at the time of EMG registration. The accuracy of classifying EMG patterns, however, was measured on offline records. It was defined as the ratio of the spike rate of the classifier neuron excited by the presentation of "its own" pattern to the sum of the spike rates of all three classifiers.
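As we read this definition, the accuracy measure can be computed as in the following sketch; the helper name and the example spike rates are hypothetical.

```python
import numpy as np

def classification_accuracy(rates, target):
    """rates: spike rates of the three classifier neurons during one pattern;
    target: index of the classifier assigned to that pattern."""
    rates = np.asarray(rates, dtype=float)
    return rates[target] / rates.sum()

print(classification_accuracy([18.0, 1.0, 1.0], target=0))  # 0.9
```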
To estimate the gradual character of the SNN activity, we asked the subjects to flex and extend their wrist with four different degrees of effort, determined by the degree of deviation of the palm from the central position. Each pattern was 10 s long and was sent to the input of the trained SNN. The muscle effort was estimated indirectly through the mean absolute value (MAV) of the EMG signal, averaged over the whole time interval and over all EMG channels.

3. Results

3.1. Spiking Neurons as Electromyographical (EMG) Feature Extractors

One of the important informational features of the EMG signal is its amplitude. Earlier, we proposed a method to extract this feature using spiking neurons [21]. In particular, a "sensory" neuron receives from a virtual stimulator a signal in the form of an EMG-associated current:
$$I_{stml}(t) = k \cdot EMG(t) \qquad (11)$$
where EMG(t) denotes the recorded EMG signal and k is the scaling coefficient (we used k = 2 × 10^6 as in [21]).
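A minimal sketch of Equation (11) follows; the rectification of the raw bipolar EMG sample into a non-negative driving current is our assumption about the preprocessing, as is the example signal amplitude.

```python
k = 2e6                                   # scaling coefficient, as in [21]

def stimulus_current(emg_sample):
    """I_stml(t) = k * EMG(t); rectifying the raw sample is an assumption here."""
    return k * abs(emg_sample)

print(stimulus_current(-1.2e-5))          # 24.0, in model current units
```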
Figure 2 shows an example of the activity of two sensory neurons receiving inputs from electrodes located on extensors during wrist extension. Both registered muscles take part in the movement; however, the signals from them have different amplitudes due to the anatomical properties of these muscles and/or the localization of the electrodes (Figure 2, top panel). Both input signals increase the spiking rate of the corresponding sensory neurons (Figure 2, S3, S5), and the EMG channel with the higher amplitude evokes faster spiking (Figure 2, red line). Thus, the spiking neurons perform rate coding: the spiking rate depends on the amplitude of the EMG signal, which, in turn, corresponds to muscle strength.
However, the latency of the spiking response differs for EMG signals of different amplitudes. A sensory neuron receiving a signal of lower amplitude (Figure 2, blue line) begins to respond much later than to a stronger stimulus (Figure 2, red line). Thus, a spiking neuron simultaneously encodes the input signal both in the spiking rate and in the latency of the first response spike. In the case of such temporal-rate coding, the SNN should implement learning mechanisms that work properly for both types of coding. Based on this, we first studied the training of a single neuron with pure rate and temporal patterns, and then built a universal SNN that is trained using mixed coding.

3.2. Learning and Selective Response of a Single Neuron

In temporal coding, the learning neuron receives information as a sequence of spikes from different presynaptic neurons. Consequently, we expect to obtain a weight distribution depending on the spike timing within the training pattern and (in the protocol used) on the rank of spiking. Indeed, for both STDP rules (pair- and triplet-based), after repeated stimulation we find correlations between the weights and the spike timing (Figure 3A, solid lines). This effect can be explained by the refractory period of spiking neurons. After firing a spike, the postsynaptic neuron receives presynaptic spikes during the after-spike hyperpolarization period reproduced by the Izhikevich model. Consequently, the neuron cannot respond, and the corresponding couplings become depressed. The time intervals between spikes varied from 1 to 10 ms in the simulations; accordingly, the duration of the pattern presentation varied from 10 to 100 ms. In the case of shorter intervals (<5 ms), the weights of the first couplings become potentiated, while the rest become depressed. In the case of longer intervals, the neuron has enough time to recover its sensitivity within the pattern, which leads to alternating couplings with large and small weights (Figure 3A, dashed lines).
Let us consider the selective response of the neuron to a familiar pattern as a criterion of successful learning. In the case of short interspike intervals and weights depending on the rank of spiking (Figure 3A, solid lines), the postsynaptic neuron shows a high response to the familiar pattern and no response to the unknown one (Figure 3B, 4 ms). In the case of large intervals and alternating weights (Figure 3A, dashed lines), the neuron is almost unable to discriminate the patterns (Figure 3B, 10 ms). The pair- and triplet-based STDP rules yield similar weight distributions and selectivity in all studied cases (Figure 3).
Thus, a single neuron can potentially be selective only to the rank of spiking at the beginning of the temporal pattern. This effect was described earlier [36]; on its basis, STDP-driven latency coding can be implemented, in which synapses that transmit spikes faster decrease their latency [37]. In general, an SNN needs neural competition and axonal delays for encoding complex and long temporal patterns [38]. The sensitivity of an STDP-driven neuron to the beginning of a temporal pattern can lead to spatial heterogeneity of a monolayer SNN under local repetitive stimulation. After such "learning", each neuron in the SNN has potentiated input connections from the stimulation side and depressed ones from the opposite direction. As a result, at the network scale, the centrifugal (relative to the stimulation site) couplings are potentiated, and the network responses become synchronized with the stimuli [39,40].
Attempts to implement rate coding based only on STDP failed in our experiments. There is no expected relation between the weight distribution and the frequency of the stimuli (Figure 4A, "STDP" and "tSTDP"). Accordingly, no neural selectivity was observed (Figure 4B, "STDP" and "tSTDP"). This happens because the STDP events (close pairs and triplets of spikes) do not depend on the presynaptic frequency. Constant stimulation with the rate pattern leads to fluctuations of the refractory periods of the postsynaptic neuron. During the excitable state of this neuron, incoming spikes make it fire regardless of their frequency; this corresponds to the presynaptic-postsynaptic ("pre-post") spike sequence, and STDP potentiates the couplings. All other spikes, of whatever frequency, arrive during the refractory stage; this corresponds to the "post-pre" sequence, and STDP depresses the couplings. As a result, all weights become averaged regardless of the frequency.
The LTP part of the triplet-based STDP for spiking neurons (Equation (10)) is most consistent with Hebbian learning for artificial neurons (Equation (1)). Accordingly, they share a common drawback: unlimited weight growth. More precisely, when the multiplicative rule (Equation (9)) is applied, the weight is limited to 1. The problem is that the triplet-based STDP depends only on the averaged frequency of the postsynaptic neuron and, regardless of the rate of presynaptic spikes, potentiates all incoming couplings. In other words, synaptic selectivity is lacking, and as a result the neuron cannot discriminate patterns (Figure 4B, tSTDP).
Synaptic competition can be a possible solution to this problem. Similar to the ANN case (Equation (2)), we introduce a forgetting function for incoming synapses, which is proportional to neuronal activity:
$$\frac{dw_{ij}}{dt} = -\frac{w_{ij}\, y_i}{\tau_f} \qquad (12)$$
where τ_f is the decay time of the weights and y_i is the averaged activity of postsynaptic neuron i, described by Equation (6) with a different decay time of the synaptic trace, τ_o.
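A discrete-time sketch of Equation (12) and of the activity trace it relies on is given below; the Euler step and the function names are ours.

```python
tau_f, tau_o = 10.0, 100.0              # ms, as in the text

def forgetting_step(w_in, y_post, dt):
    """Equation (12): all incoming weights of neuron i decay with its activity."""
    return w_in - w_in * y_post * dt / tau_f

def activity_trace_step(y_post, spiked, dt):
    """Averaged postsynaptic activity: Equation (6) with decay time tau_o."""
    return y_post - y_post * dt / tau_o + (1.0 if spiked else 0.0)
```

The net effect is that an active neuron constantly erodes all of its input weights, so only synapses that frequently contribute to its firing (and are thus repotentiated by the triplet LTP) survive, which is precisely the synaptic competition discussed above.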
Using the triplet-based STDP combined with the forgetting function (parameters τ_f = 10 ms, τ_o = 100 ms), one obtains an explicit dependence of the weights on the presynaptic spike rate (Figure 4A, tSTDP + F). Note that the relation is sigmoid. Selectivity testing shows that the postsynaptic neuron activity during presentation of the familiar pattern is considerably higher than in the case of the unknown pattern (Figure 4B, tSTDP + F).

3.3. EMG Pattern Classification Problem as an Example of Unsupervised Learning in Spiking Neuron Networks (SNN)

Next, we tested the new learning rule (the triplet-based STDP with synaptic forgetting) in the design of an SNN capable of classifying EMG patterns. Unlike the case of an individual neuron, training the whole network should provide recognition of several patterns. Therefore, the structure of the SNN (number of neurons and neural layers, topology of neural connections, etc.) should be built specifically for this task. The proposed SNN (Figure 5A) consists of two layers with "sensory" and "classifying" functions ("S" and "C", respectively, in Figure 5). In turn, each layer includes excitatory and inhibitory neurons.
Inhibitory neurons (Figure 5, marked blue) in the input layer are necessary for lateral inhibition, which significantly improves the quality of subsequent recognition of EMG patterns by contrasting the signal [21]. In order to identify the muscle rest pattern, we included one additional neuron in the input layer, which fires spikes when the other input neurons are silent. For this purpose, we used large individual noise (D = 70) for this neuron and strong incoming couplings from the inhibitory neurons.
The output network layer consists of three excitatory neurons that classify EMG signals after learning (Figure 5A, "classifiers") and three inhibitory neurons that provide lateral inhibition. In this case, lateral inhibition plays a key role in learning: when one of the classifier neurons is active, the other output neurons are inhibited. As the learning rule (triplet-based STDP with synaptic forgetting) works only while the postsynaptic neuron is active, only one neuron can be trained at a time. Thus, lateral inhibition implements the "winner takes all" principle, which is widely used in traditional ANNs implementing the self-organizing maps (SOM) proposed by Kohonen [27]. As a result of learning, the coupling strengths between the input and output layers change, providing a selective response to different EMG patterns (Figure 5B–D, Video S1). As the proposed SNN is based on unsupervised learning, it is impossible to predict which neuron will respond to a particular pattern. Therefore, if we use the SNN as a classifier, we need to assign class labels to the output neurons after learning.
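This post hoc labeling step can be sketched as follows; the response matrix, its values, and the calibration procedure are hypothetical illustrations of the idea rather than our actual measurements.

```python
import numpy as np

# responses[p, n]: mean spike rate of output neuron n during calibration pattern p
responses = np.array([[20.0, 1.0, 2.0],   # wrist flexion
                      [2.0, 18.0, 1.0],   # wrist extension
                      [1.0, 2.0, 9.0]])   # rest

# assign to each winning neuron the class of the pattern it responds to most
labels = {int(np.argmax(responses[p])): p for p in range(3)}
print(labels)  # e.g., {0: 0, 1: 1, 2: 2}: neuron index -> class index
```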
During the learning procedure, which lasted about 1 min, raw EMG signals were sent online to the input layer of the SNN while a subject flexed and extended his/her wrist (Video S1). Figure 6 illustrates typical EMG signals and the responses of trained classifier neurons. Note that the neurons make errors predominantly when the EMG patterns (correspondingly, the hand movements) change.
With the selected SNN parameters, the median accuracy for the eight subjects was 91% (Q1 = 85%, Q3 = 95%), which was lower than the 100% accuracy demonstrated by a multi-layer perceptron with the backpropagation algorithm applied to the same problem. However, it is fairer to compare the proposed SNN with Kohonen's SOM, in which competitive learning is performed in the corresponding ANN [27]. Earlier, we showed that a SOM-based classifier demonstrated a median accuracy of 87% for five EMG patterns [41]. In the current study, the median accuracy of the SOM for the eight participants was 88% (Q1 = 82%, Q3 = 89%) for the three motions.
Figure 7A shows the distribution of the normalized amplitude of the EMG signal, averaged over all subjects, during wrist flexion and extension. This profile corresponds to the distribution of the weight coefficients of the two trained classifier neurons that are selectively excited when these movements are performed (Figure 7B). Thus, the combination in the SNN of the triplet-based STDP, synaptic forgetting, and lateral inhibition leads to a distribution of weights similar to the distribution of the amplitude feature of the input signal. In this sense, the proposed complex learning rule for our SNN works quite similarly to the competitive learning implemented in an ANN (Equation (3)).
In addition, the proposed SNN shows a gradual response depending on the amplitude of the signal. In particular, the dependence of the spike rate of the classifier neurons on the amplitude of the EMG signal is linear (Figure 7C). Considering that the amplitude of the EMG is, in turn, linearly proportional to the effort developed by the muscles [42], we conclude that the classifier neurons not only recognize the movement performed by the subject, but also encode the degree of muscle effort involved in the movement.
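For illustration, such a linear rate-amplitude relation could be exploited for proportional control as in the following sketch; the calibration constants are hypothetical and not taken from our experiments.

```python
def effort_command(spike_rate_hz, rate_at_rest=2.0, rate_at_max=40.0):
    """Normalized effort estimate in [0, 1] from a classifier spike rate,
    assuming the linear rate-amplitude relation of Figure 7C."""
    x = (spike_rate_hz - rate_at_rest) / (rate_at_max - rate_at_rest)
    return min(max(x, 0.0), 1.0)

print(effort_command(21.0))  # 0.5: half of the calibrated effort range
```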

3.4. SNN Supervised Learning

Next, we developed supervised SNN learning. In contrast with unsupervised learning, we now stimulate target neurons during pattern presentation to the network input. Technically, in our neuro-simulator application, at the moment of the EMG pattern presentation we connect a virtual stimulation electrode generating high-frequency activity (40 Hz) to one of the classifier neurons (Video S2). This leads to excitation of the target neuron and inhibition of the other classifier neurons. As a result, only the target neuron "associates itself" with the presented EMG pattern. Next, this "supervised stimulation" is applied to another target classifier neuron during another EMG pattern presentation. Note that there is no need to deactivate learning in the intervals between stimuli: during this time, the triplet-based STDP and synaptic forgetting keep working but do not erase the previous results. Earlier, a similar mechanism, called the Pavlov principle, was proposed as an analog of the error backpropagation method in SNNs [43]. In our case, we also provide SNN feedback via additional stimulation labeling the neuron to be trained at a given time.
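A sketch of this "supervised stimulation" as an external teacher current is given below; only the 40-Hz rate follows the text, while the pulse amplitude, pulse width, and the function signature are our assumptions.

```python
import numpy as np

def teacher_current(t_ms, neuron, target, period_ms=25.0, width_ms=2.0, amp=20.0):
    """Extra stimulus for classifier `neuron` at model time t_ms: a 40-Hz
    pulse train (period 25 ms) delivered only to the target neuron."""
    if neuron != target:
        return 0.0
    return amp if (t_ms % period_ms) < width_ms else 0.0

# during presentation of the "flexion" pattern, classifier neuron 0 is the target
I_teach = [teacher_current(t, neuron=0, target=0) for t in np.arange(0.0, 100.0, 0.5)]
```

Driven this way, the target neuron fires while lateral inhibition silences its competitors, so STDP binds the presented pattern to that neuron only.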
After this online supervised learning procedure, the median accuracy of the SNN was 99.5% (Q1 = 99.4%, Q3 = 99.8%). Note that this result is much closer to the 100% accuracy of the multi-layer perceptron than in the case of unsupervised SNN learning.

4. Discussion

In summary, we have shown the possibility of implementing competitive learning in spiking neurons in the context of temporal and rate coding. We have demonstrated that, for such learning, the following three major mechanisms should be employed together:
(i) Hebbian learning (in the current work, through the triplet-based STDP);
(ii) synaptic competition, or competition of inputs (in the current work, through synaptic forgetting); and
(iii) neural competition, or competition of outputs (in the current work, through lateral inhibition).
The use of Hebbian learning in the form of pair- or triplet-based STDP is sufficient for temporal coding. In this case, the neurons are sensitive only to spikes at the beginning of the input pattern. A neural network with neural competition (lateral inhibition) and axonal delays is required for encoding complex and long temporal patterns [38].
However, Hebbian learning alone is not sufficient to implement rate coding. In this case, to ensure selectivity, one should employ synaptic competition, which ensures the depression of less used synapses. We implemented this type of competition by introducing a forgetting function for incoming synapses proportional to the activity of the postsynaptic neuron. Obviously, synaptic competition can be implemented in other ways, for example, using homeostatic plasticity [44,45]. Hence, by combining Hebbian learning with synaptic competition, both temporal and rate coding can be achieved. Moreover, the learning-driven rearrangement of the weights is determined by the type of coding, rather than by an a priori specified network topology [44].
Note that here we did not study in detail the quality of the selectivity achieved by training one neuron: in both temporal and rate coding, selectivity was tested with a pattern very different from the learned one. Note also that the concept of a multidimensional brain has recently been proposed for ANNs, according to which neuron selectivity increases non-linearly with the dimension (number) of synaptic inputs. In particular, when certain (rather general) conditions are met, the theoretically achievable selectivity of an artificial neuron can approach 100% with more than 20 synapses [46,47]. With 10 synapses, as in the current work (learning of one neuron), the theoretical selectivity is about 50%, which means that, in the space of input patterns, even a perfectly trained neuron will classify about half of the patterns as familiar. In the case of spiking neurons, we can expect a similar dimensional dependence, and its study could be the subject of our future work.
Neural competition is necessary for selective SNN learning: not all output neurons should respond to a particular pattern, but only a part of them. As a result, different neurons or neural groups acquire an affinity for different input patterns. In our SNN, for this purpose, we introduced lateral inhibition, which permitted implementing the "winner takes all" principle. Earlier, we also used lateral inhibition in processing the EMG signal to enhance contrast [21].
An SNN trained without supervision cannot compete with a multilayer perceptron in classification accuracy. Nevertheless, even in this simple form, it has several advantages based on the analog signaling of spiking neurons. In particular, the SNN can provide a gradual response depending on the input signal amplitude, and a low lag of response to a change of the input pattern. Note also that earlier we proposed some improvements of EMG control based on ANNs, in particular, combined command-proportional EMG control [42] and optimization of the response speed [48]. However, these extensions of basic ANN functions required special configurations of the EMG interfaces and the use of external non-ANN algorithms.
Finally, we proposed a simple implementation of supervised learning in an SNN. The single-layer architecture is so far more similar to the classical Rosenblatt perceptron than to a multi-layer ANN trained by the error backpropagation algorithm. Nevertheless, in the problem of discriminating three EMG patterns, the SNN with supervised learning shows an accuracy close to the result demonstrated by the multi-layer perceptron trained by error backpropagation. It has been shown that SNN learning based on error correction can act similarly to backpropagation in the perceptron (see [49,50]). In our model, the SNN implements biologically plausible associative learning by associating certain input patterns with the activity of certain output neurons. As a further development, a multi-layer SNN can be designed in which the input and hidden layers provide unsupervised competitive learning, while the output layer is trained using the proposed "supervised stimulation".

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/20/2/500/s1. Video S1: Unsupervised SNN learning. The output neurons become selective, in the process of learning, to different EMG patterns generated by the muscles during (a) wrist extension, (b) wrist flexion, and (c) rest. It is impossible to predict which neuron will be responsible for which gesture. At the end of learning, we show that a trained neuron has different couplings depending on which signals it responds to. The grayscale intensity of a coupling is proportional to the value of its weight. Video S2: Supervised SNN learning. Supervised learning is the stimulation of the target neuron simultaneously with the generation of the corresponding EMG pattern. We would like to achieve the following correspondences of the output neurons: (a) the left neuron, movement of the palm to the left, i.e., wrist flexion; (b) the middle neuron, rest; (c) the right neuron, movement of the palm to the right, i.e., wrist extension.

Author Contributions

Conceptualization, S.A.L. and V.B.K.; software, S.A.L. and M.O.S.; investigation (single neuron learning), A.V.C.; investigation (SNN learning), S.A.L.; EMG experiments, N.P.K. and M.O.S.; EMG data analysis, S.A.L., N.P.K. and M.O.S.; writing, S.A.L., A.V.C. and V.B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the governmental assignment of the Ministry of Science and Higher Education of the Russian Federation (grant No. 8.2487.2017/ПЧ).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Roche, A.D.; Rehbaum, H.; Farina, D.; Aszmann, O.C. Prosthetic myoelectric control strategies: A clinical perspective. Curr. Surg. Rep. 2014, 2, 44.
2. Hahne, J.M.; Biebmann, F.; Jiang, N.; Rehbaum, H.; Farina, D.; Meinecke, F.C.; Muller, K.-R.; Parra, L.C. Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 269–279.
3. Patel, Y.; Nageswaran, S. Myoelectric Controlled Thumb. In Proceedings of the 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 6–8 April 2018; pp. 1–5.
4. Lima, A.A.M.; Araujo, R.M.; de Santos, F.A.G.; Yoshizumi, V.H.; de Barros, F.K.H.; Spatti, D.H.; Liboni, L.H.B.; Dajer, M.E. Classification of Hand Movements from EMG Signals using Optimized MLP. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–7.
5. Roldan-Vasco, S.; Restrepo-Agudelo, S.; Valencia-Martinez, Y.; Orozco-Duque, A. Automatic detection of oral and pharyngeal phases in swallowing using classification algorithms and multichannel EMG. J. Electromyogr. Kinesiol. 2018, 43, 193–200.
6. Ullah, K.; Kim, J.H. A mathematical model for mapping EMG signal to joint torque for the human elbow joint using nonlinear regression. In Proceedings of the 2009 4th International Conference on Autonomous Robots and Agents, Wellington, New Zealand, 10–12 February 2009; pp. 103–108.
7. Ngeo, J.G.; Tamei, T.; Shibata, T. Continuous and simultaneous estimation of finger kinematics using inputs from an EMG-to-muscle activation model. J. Neuroeng. Rehabil. 2014, 11, 122.
8. Chen, J.; Zhang, X.; Cheng, Y.; Xi, N. Surface EMG based continuous estimation of human lower limb joint angles by using deep belief networks. Biomed. Signal Process. Control 2018, 40, 335–342.
9. Alghofaily, B.; Ding, C. Meta-Feature Based Data Mining Service Selection and Recommendation Using Machine Learning Models. In Proceedings of the 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), Xi'an, China, 12–14 October 2018; pp. 17–24.
10. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133.
11. Widrow, B.; Lehr, M.A. 30 years of adaptive neural networks: Perceptron, Madaline, and backpropagation. Proc. IEEE 1990, 78, 1415–1442.
12. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
13. Izhikevich, E.M. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting; The MIT Press: Cambridge, MA, USA, 2007.
14. Paugam-Moisy, H.; Bohte, S. Computing with Spiking Neuron Networks. In Handbook of Natural Computing; Rozenberg, G., Bäck, T., Kok, J.N., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 335–376. ISBN 978-3-540-92910-9.
15. Quiroga, Q.R.; Panzeri, S. Principles of Neural Coding; Taylor & Francis Group: Boca Raton, FL, USA, 2013; ISBN 9781439853313.
16. Chiolerio, A.; Chiappalone, M.; Ariano, P.; Bocchini, S. Coupling resistive switching devices with neurons: State of the art and perspectives. Front. Neurosci. 2017, 11, 70.
17. Mikhaylov, A.N.; Morozov, O.A.; Ovchinnikov, P.E.; Antonov, I.N.; Belov, A.I.; Korolev, D.S.; Sharapov, A.N.; Gryaznov, E.G.; Gorshkov, O.N.; Pigareva, Y.I.; et al. One-board design and simulation of double-layer perceptron based on metal-oxide memristive nanostructures. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 371–379.
18. Llinas, R.R. I of the Vortex: From Neurons to Self; The MIT Press: Cambridge, MA, USA, 2001.
19. Buzsáki, G. Rhythms of the Brain; Oxford University Press: Oxford, UK, 2006.
20. Lobov, S.; Kazantsev, V.; Makarov, V.A. Spiking neurons as universal building blocks for hybrid systems. Adv. Sci. Lett. 2016, 22, 2633–2637.
21. Lobov, S.; Mironov, V.; Kastalskiy, I.; Kazantsev, V. A spiking neural network in sEMG feature extraction. Sensors 2015, 15, 27894–27904.
22. Gupta, I.; Serb, A.; Khiat, A.; Zeitler, R.; Vassanelli, S.; Prodromakis, T. Real-time encoding and compression of neuronal spikes by metal-oxide memristors. Nat. Commun. 2016, 7, 12805.
23. Gater, D.; Iqbal, A.; Davey, J.; Gale, E. Connecting Spiking Neurons to a Spiking Memristor Network Changes the Memristor Dynamics. In Proceedings of the 2013 IEEE 20th International Conference on Electronics, Circuits, and Systems, Abu Dhabi, UAE, 8–11 December 2013.
24. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Wiley: Hoboken, NJ, USA, 1949.
25. Brown, T.H.; Kairiss, E.W.; Keenan, C.L. Hebbian synapses: Biophysical mechanisms and algorithms. Annu. Rev. Neurosci. 1990, 13, 475–511.
26. Kohonen, T. An introduction to neural computing. Neural Netw. 1988, 1, 3–16.
27. Kohonen, T. Self-organized formation of topologically correct feature maps. Biol. Cybern. 1982, 43, 59–69.
28. Sjöström, P.J.; Turrigiano, G.G.; Nelson, S.B. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron 2001, 32, 1149–1164.
29. Bi, G.; Poo, M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 1998, 18, 10464–10472.
30. Markram, H.; Lübke, J.; Frotscher, M.; Sakmann, B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 1997, 275, 213–215.
31. Morrison, A.; Diesmann, M.; Gerstner, W. Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern. 2008, 98, 459–478.
32. Song, S.; Miller, K.D.; Abbott, L.F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 2000, 3, 919.
33. Izhikevich, E.M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572.
34. Rubin, J.; Lee, D.D.; Sompolinsky, H. Equilibrium properties of temporally asymmetric Hebbian plasticity. Phys. Rev. Lett. 2001, 86, 364–367.
35. Pfister, J.-P.; Gerstner, W. Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci. 2006, 26, 9673–9682.
36. Masquelier, T.; Guyonneau, R.; Thorpe, S.J. Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains. PLoS ONE 2008, 3, e1377.
37. Masquelier, T.; Thorpe, S.J. Unsupervised learning of visual features through spike timing dependent plasticity. PLoS Comput. Biol. 2007, 3, e31.
38. Masquelier, T.; Guyonneau, R.; Thorpe, S.J. Competitive STDP-based spike pattern learning. Neural Comput. 2008, 21, 1259–1276.
39. Lobov, S.; Simonov, A.; Kastalskiy, I.; Kazantsev, V. Network response synchronization enhanced by synaptic plasticity. Eur. Phys. J. Spec. Top. 2016, 225, 29–39.
40. Lobov, S.A.; Zhuravlev, M.O.; Makarov, V.A.; Kazantsev, V.B. Noise enhanced signaling in STDP driven spiking-neuron network. Math. Model. Nat. Phenom. 2017, 12, 109–124.
41. Shamsin, M.; Krilova, N.; Bazhanova, M.; Kazantsev, V.; Makarov, V.A.; Lobov, S. Supervised and unsupervised learning in processing myographic patterns. J. Phys. Conf. Ser. 2018, 1117, 12008.
42. Lobov, S.A.; Mironov, V.I.; Kastalskiy, I.A.; Kazantsev, V.B. Combined use of command-proportional control of external robotic devices based on electromyography signals. Sovrem. Tehnol. Med. 2015, 7, 30–38.
43. Lebedev, A.E.; Solovyeva, K.P.; Dunin-Barkowski, W.L. The Large-Scale Symmetry Learning Applying Pavlov Principle. In Advances in Neural Computation, Machine Learning, and Cognitive Research III; Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V., Tiumentsev, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 405–411.
44. Clopath, C.; Büsing, L.; Vasilaki, E.; Gerstner, W. Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nat. Neurosci. 2010, 13, 344.
45. Bienenstock, E.L.; Cooper, L.N.; Munro, P.W. Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. J. Neurosci. 1982, 2, 32–48.
46. Tyukin, I.; Gorban, A.N.; Calvo, C.; Makarova, J.; Makarov, V.A. High-dimensional brain: A tool for encoding and rapid learning of memories by single neurons. Bull. Math. Biol. 2019, 81, 4856–4888.
47. Gorban, A.N.; Makarov, V.A.; Tyukin, I.Y. The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Phys. Life Rev. 2019, 29, 55–88.
48. Lobov, S.A.; Krylova, N.P.; Anisimova, A.P.; Mironov, V.I.; Kazantsev, V.B. Optimizing the speed and accuracy of an EMG interface in practical applications. Hum. Physiol. 2019, 45, 145–151.
49. Bohte, S.M.; Kok, J.N.; La Poutré, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 2002, 48, 17–37.
50. Neftci, E.O.; Mostafa, H.; Zenke, F. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Process. Mag. 2019, 36, 51–63.
Figure 1. Temporal (A) and rate (B) coding schemes for a single neuron. St1–t10, Sf1–f10: stimulators with temporal and frequency-rate parameters, correspondingly; N1–10: presynaptic neurons; Np: postsynaptic neuron; tj: stimulating pulse time for neuron Nj; ∆t = t_{j+1} − t_j: time between pulses.
Figure 2. Feature extraction by spiking neurons. Top panel: example of two electromyographical (EMG) channels recording muscle activity of m. extensor carpi radialis (blue) and m. extensor carpi ulnaris (red) during wrist extension. S3, S5: activity (the membrane potential) of sensory neurons receiving input from the 3rd and 5th electrodes of the MYO Thalmic bracelet registering the EMG from these muscles. The dashed line indicates the start of movement.
Figure 3. Temporal coding results of simulations with different inter-spike intervals using pair-based rule (spike timing-dependent plasticity, STDP) and triplet-based rule (tSTDP): (A) Synaptic weights vs. the rank of spiking on corresponding synapses; (B) The response of the postsynaptic neuron to familiar and unknown pattern.
Figure 4. Rate coding results with the pair-based rule (STDP), the triplet-based rule (tSTDP), and the triplet-based rule with the forgetting function (tSTDP + F): (A) Synaptic weights vs. spike rate on the corresponding synapse; (B) The response of the postsynaptic neuron to familiar and unknown patterns.
Figure 5. The scheme of the spiking neural network (SNN) classifying EMG patterns: (A) General topology of the SNN; (B–D) The responses of the SNN to EMG patterns during wrist flexion/extension/rest. Squares: EMG-associated virtual electrodes; red/blue circles: excitatory/inhibitory neurons. The color saturation of red/blue is proportional to the value of the positive/negative output signal of the corresponding circuit element at the given moment. S: sensory neurons; C: classifying neurons.
Figure 6. Input and output signals of SNN-classifier. Top panel: Example of two EMG channels recording muscle activity of flexor (red) and extensor (blue). C1–C3—activity (the membrane potential) of three classifier neurons.
Figure 7. Interrelation of the EMG signal characteristics and the classifier neurons after learning: (A) Average EMG signal amplitude distribution over the EMG channel serial number; (B) Averaged distribution of synaptic weights over the EMG channel serial number; (C) Relation of the spike rate to the EMG signal amplitude.
