A Tutorial on Spiking Neural Networks for Beginners

Despite being highly effective across a variety of tasks and industries, deep learning is constantly evolving, producing new neural network (NN) architectures, new deep learning (DL) tasks, and even entirely new concepts for the next generation of NNs, such as the Spiking Neural Network (SNN). Researchers at Heidelberg University and the University of Bern, for example, have developed SNNs into a fast and energy-efficient technique for computing on spiking neuromorphic substrates. In this article, we will mostly discuss the Spiking Neural Network as a variant of the neural network and try to understand how it differs from traditional neural networks. Below is a list of the important topics to be covered.

Table of Contents

  1. What is a Spiking Neural Network (SNN)?
  2. How Does Spiking Neural Network Work?
  3. Traditional Neural Network Vs SNN
  4. Applications of Spiking Neural Networks
  5. Advantages and Disadvantages of SNN

Let’s start the discussion by understanding what a Spiking Neural Network is.

What is a Spiking Neural Network (SNN)?

Artificial neural networks that closely mimic natural neural networks are known as spiking neural networks (SNNs). In addition to neuronal and synaptic state, SNNs incorporate time into their operating model. The idea is that neurons in an SNN do not transmit information at the end of each propagation cycle (as they do in traditional multi-layer perceptron networks), but only when a membrane potential – a neuron’s intrinsic quality related to its membrane electrical charge – reaches a certain value, known as the threshold.

The neuron fires when the membrane potential hits the threshold, sending a signal to neighbouring neurons, which increase or decrease their potentials in response to the signal. A spiking neuron model is a neuron model that fires at the moment of threshold crossing.

Artificial neurons, despite their striking resemblance to biological neurons, do not behave in the same way. Biological and artificial NNs differ fundamentally in the following ways:

  • Overall structure
  • How computations are carried out, compared to the brain
  • How learning takes place, compared to the brain

Alan Hodgkin and Andrew Huxley created the first scientific model of a spiking neuron in 1952. The model characterized the initiation and propagation of action potentials in biological neurons. Biological neurons, however, do not transfer impulses directly: in order to communicate, chemicals called neurotransmitters must be exchanged across the synaptic gap.

How Does Spiking Neural Network Work?

Key Concepts 

What distinguishes a traditional ANN from an SNN is the information propagation approach. An SNN aspires to be as close to a biological neural network as feasible. As a result, rather than working with continuously changing values as an ANN does, an SNN works with discrete events that occur at defined points in time. An SNN takes a set of spikes as input and produces a set of spikes as output (a series of spikes is usually referred to as a spike train).

The general idea is as follows (a minimal code sketch appears after the list):

  • At any given time, each neuron has a value, analogous to the electrical potential of a biological neuron.
  • A neuron’s value can change according to its mathematical model; for example, if a neuron receives a spike from an upstream neuron, its value may rise or fall.
  • If a neuron’s value surpasses a certain threshold, the neuron sends a single impulse to each downstream neuron connected to it, and its value immediately drops below its baseline.
  • As a result, the neuron goes through a refractory period similar to that of a biological neuron, during which its value gradually returns to its baseline.
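
To make these steps concrete, here is a minimal sketch of a single leaky integrate-and-fire (LIF) neuron in Python. The parameter values (input weight, threshold, reset value, leak factor, refractory length) are illustrative assumptions, not values prescribed by any particular SNN library.

```python
import numpy as np

def simulate_lif(input_spikes, weight=0.6, threshold=1.0,
                 reset=-0.2, leak=0.9, refractory_steps=3):
    """Simulate one leaky integrate-and-fire neuron over discrete steps.

    input_spikes: binary array, 1 where an upstream spike arrives.
    Returns the membrane-potential trace and the output spike train.
    """
    v = 0.0                          # membrane potential (baseline = 0)
    refractory = 0                   # steps left in the refractory period
    potentials, out_spikes = [], []
    for s in input_spikes:
        if refractory > 0:
            refractory -= 1          # ignore input; potential decays back
            v *= leak
            out_spikes.append(0)
        else:
            v = leak * v + weight * s     # leak, then integrate the input
            if v >= threshold:            # threshold crossing -> fire
                out_spikes.append(1)
                v = reset                 # drop below baseline
                refractory = refractory_steps
            else:
                out_spikes.append(0)
        potentials.append(v)
    return np.array(potentials), np.array(out_spikes)

rng = np.random.default_rng(0)
spikes_in = (rng.random(50) < 0.5).astype(int)   # random input spike train
v_trace, spikes_out = simulate_lif(spikes_in)
print("input spikes:", spikes_in.sum(), "| output spikes:", spikes_out.sum())
```
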
Spike-Based Neural Codes

Artificial spiking neural networks are designed to perform neural computation. This requires that neural spiking be given meaning: the variables relevant to the computation must be defined in terms of the spikes with which spiking neurons communicate. A variety of neuronal information encodings have been proposed based on biological knowledge:

  • Binary Coding:  

Binary coding is an all-or-nothing encoding in which a neuron is either active or inactive within a specific time interval, firing one or more spikes within that time frame. This encoding is motivated by the observation that physiological neurons tend to activate when they receive input (a sensory stimulus such as light, or an external electrical input).

This binary abstraction can be applied to individual neurons, which are portrayed as binary units that can take only two on/off values. It can also be applied to the interpretation of spike trains from existing spiking neural networks, where a binary reading of the output spike trains is employed in spike train classification.
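
As an illustration, a binary reading of a spike train simply asks whether the neuron fired at all within each time window. A minimal sketch in Python follows; the window length is an arbitrary assumption.

```python
import numpy as np

def binary_code(spike_train, window=10):
    """Split a spike train into fixed windows; a window is 'on' (1) if it
    contains at least one spike, otherwise 'off' (0)."""
    n = len(spike_train) // window
    windows = np.asarray(spike_train[:n * window]).reshape(n, window)
    return (windows.sum(axis=1) > 0).astype(int)

train = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0,   # spike  -> 1
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0,   # silent -> 0
         1, 0, 1, 0, 0, 0, 0, 0, 1, 0]   # spikes -> 1
print(binary_code(train))                # -> [1 0 1]
```
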

  • Rate Coding:

Rate coding abstracts away from the timed nature of spikes: only the rate of spikes within an interval is used as a measure of the information communicated. Rate encoding is motivated by the fact that physiological neurons fire more frequently for stronger (sensory or artificial) stimuli.

Again, it can be used at the level of single neurons or in the interpretation of spike trains. In the first scenario, neurons are directly described as rate neurons, which convert real-valued input numbers (“rates”) into an output “rate” at each time step. In technical contexts and cognitive research, rate coding has been the concept behind conventional artificial “sigmoidal” neurons.
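
A common rate-based encoder draws a spike at each time step with probability proportional to the input intensity (a Poisson-like process), and decoding simply counts spikes. The step count and maximum firing probability in this sketch are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def rate_encode(intensity, steps=200, max_rate=0.8):
    """Encode a value in [0, 1] as a spike train: at each step a spike
    occurs with probability intensity * max_rate (Poisson-like)."""
    return (rng.random(steps) < intensity * max_rate).astype(int)

def rate_decode(spike_train, max_rate=0.8):
    """Recover an estimate of the intensity from the firing rate."""
    return spike_train.mean() / max_rate

train = rate_encode(0.5)
print("decoded intensity ~", round(rate_decode(train), 2))  # close to 0.5
```
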

  • Fully Temporal Codes

A fully temporal code depends on the precise timing of all spikes. Evidence from neuroscience suggests that spike timing can be incredibly precise and repeatable. In a fully temporal code, timings are related to a certain (internal or external) event, such as the onset of a stimulus or a spike of a reference neuron.

  • Latency Coding

The timing of spikes is used in latency coding, but not the number of spikes. The latency between a specific (internal or external) event and the first spike is used to encode information. This is based on the finding that significant sensory events cause upstream neurons to spike earlier.

This encoding has been employed in both unsupervised and supervised learning approaches, such as SpikeProp and the Chronotron, among others. Closely related is rank-order coding, in which information about a stimulus is encoded in the order in which the neurons within a group fire their first spikes.
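
A time-to-first-spike encoder can be sketched in a few lines: the stronger the stimulus, the earlier the spike. The linear mapping between intensity and latency below is an illustrative assumption, not the only possible choice.

```python
import numpy as np

def latency_encode(intensity, steps=20):
    """Encode an intensity in (0, 1] as the time of a single spike:
    stronger stimuli spike earlier (shorter latency)."""
    train = np.zeros(steps, dtype=int)
    t = int(round((1.0 - intensity) * (steps - 1)))  # strong -> small t
    train[t] = 1
    return train

def latency_decode(spike_train):
    """Recover the intensity from the latency of the first spike."""
    t = int(np.argmax(spike_train))      # index of the first (only) spike
    return 1.0 - t / (len(spike_train) - 1)

train = latency_encode(0.9)
print(train, "-> intensity", round(latency_decode(train), 2))  # spike at t=2
```
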

SNN Architecture

Spiking neurons and the synapses linking them are described by configurable scalar weights in an SNN architecture. As the first stage in building an SNN, the analogue input data is encoded into spike trains using either a rate-based technique, some form of temporal coding, or population coding.

As previously explained, a biological neuron in the brain (and similarly a simulated spiking neuron) receives synaptic inputs from other neurons in the network. Biological neural networks exhibit both action potential generation and network dynamics.

The network dynamics of artificial SNNs are much simpler than those of actual biological networks. It is useful in this context to suppose that the modelled spiking neurons have pure threshold dynamics (as opposed to refractoriness, hysteresis, resonance dynamics, or post-inhibitory rebound features).

The activity of presynaptic neurons affects the membrane potential of postsynaptic neurons; when that potential reaches a threshold, an action potential, or spike, is generated.
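
Putting these pieces together, one simulation step of a feed-forward layer of pure-threshold neurons can be sketched as a weight matrix applied to the incoming spikes, followed by a threshold check and a reset. The layer sizes, leak factor, and input firing probability here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def snn_layer_step(v, spikes_in, weights, threshold=1.0, leak=0.9):
    """One step of a feed-forward layer with pure threshold dynamics.

    v:         membrane potentials of the postsynaptic layer, shape (m,)
    spikes_in: presynaptic spikes at this step, shape (n,)
    weights:   synaptic weight matrix, shape (m, n)
    Returns the updated potentials and the postsynaptic spike vector.
    """
    v = leak * v + weights @ spikes_in       # integrate weighted input
    spikes_out = (v >= threshold).astype(int)
    v = np.where(spikes_out == 1, 0.0, v)    # reset the neurons that fired
    return v, spikes_out

weights = rng.uniform(0.0, 0.7, size=(4, 8))     # 8 inputs -> 4 neurons
v = np.zeros(4)
for _ in range(10):
    spikes_in = (rng.random(8) < 0.3).astype(int)
    v, spikes_out = snn_layer_step(v, spikes_in, weights)
    print(spikes_out)
```
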

Learning Rules in SNNs

In practically all ANNs, spiking or non-spiking, learning is achieved by altering scalar-valued synaptic weights. Spiking enables the replication of a type of biologically plausible learning rule that is not possible in non-spiking networks. Neuroscientists have uncovered many variations of this learning rule under the umbrella term spike-timing-dependent plasticity (STDP).

Its main feature is that the weight (synaptic efficacy) connecting a pre- and post-synaptic neuron is altered based on their relative spike times within intervals of tens of milliseconds. The weight adjustment relies on information that is both local to the synapse and local in time. The next subsections cover both unsupervised and supervised learning techniques in SNNs.

  • Unsupervised Learning

Data is delivered without labels, and the network receives no feedback on its performance. A common task is detecting and reacting to statistical correlations in data. Hebbian learning and its spiking generalizations, such as STDP, are a good example of this. The identification of correlations can be a goal in and of itself, but it can also be used to cluster or classify data later on.

STDP is defined as a process that strengthens a synaptic weight if the post-synaptic neuron fires soon after the pre-synaptic neuron, and weakens it if the post-synaptic neuron fires before the pre-synaptic one. However, this conventional form of STDP is merely one of numerous physiological forms of STDP.
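
The classic pair-based form of this rule is often written as an exponential function of the time difference between the post- and pre-synaptic spikes. Here is a minimal sketch; the learning rates and time constants are common textbook-style choices, not values from any specific study.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    Times are in milliseconds. If the post-synaptic spike follows the
    pre-synaptic one (dt > 0) the synapse is strengthened; if it
    precedes or coincides with it (dt <= 0), the synapse is weakened.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)     # potentiation
    return -a_minus * np.exp(dt / tau_minus)       # depression

print(stdp_delta_w(t_pre=10.0, t_post=15.0))  # pre before post -> positive
print(stdp_delta_w(t_pre=15.0, t_post=10.0))  # post before pre -> negative
```
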

  • Supervised Learning

In supervised learning, data (the input) is accompanied by labels (the targets), and the learning device’s purpose is to correlate (classes of) inputs with the target outputs (a mapping or regression between inputs and outputs). An error signal is computed between the target and the actual output and utilized to update the network’s weights. 

Supervised learning allows us to use the targets to update parameters directly, whereas reinforcement learning only provides a generic error signal (“reward”) that reflects how well the system is performing. In practice, the line between these two types of learning is blurred.
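
As a simple illustration of the error-driven idea, one can read the network’s output as firing rates, compare them to target rates, and nudge the weights to shrink the difference. The delta-rule-style sketch below is purely illustrative and is not a standard SNN training algorithm.

```python
import numpy as np

def rate_delta_rule(weights, input_rates, target_rates, lr=0.1):
    """One update of a rate-based delta rule: move the output firing
    rates toward the targets by adjusting the weights."""
    output_rates = weights @ input_rates           # actual output rates
    error = target_rates - output_rates            # error signal
    weights += lr * np.outer(error, input_rates)   # gradient-style update
    return weights, error

rng = np.random.default_rng(7)
w = rng.uniform(0.0, 0.5, size=(2, 4))   # 4 input neurons -> 2 outputs
x = np.array([0.2, 0.8, 0.5, 0.1])       # input firing rates
t = np.array([0.6, 0.3])                 # target output firing rates
for _ in range(50):
    w, err = rate_delta_rule(w, x, t)
print("remaining error:", np.round(err, 4))   # close to zero
```
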

Traditional Neural Network Vs SNN

A commonly studied spiking neural network architecture is a heterogeneous two-layer feed-forward network with lateral connections in the second (hidden) layer. To transfer information, biological neurons use brief, sharp voltage increases; action potentials, spikes, and pulses are all terms used to describe these signals. Spiking neural networks are more powerful than their non-spiking counterparts because they can encode temporal information in their signals, but they also require different, biologically more realistic synaptic plasticity rules.

Spikes cannot simply hop from one neuron to the next. They must be handled by the neuron’s most complex component: the synapse, which is made up of the end of the axon, the synaptic gap, and the first section of the dendrite. The synapse was formerly thought to merely transport a signal from the axon to the dendrite; it has since been discovered to be a highly complex signal pre-processor that is critical to learning and adaptation. When a spike reaches the axonal (presynaptic) side of the synapse, some vesicles fuse with the cell membrane and release their neurotransmitter content into the extracellular fluid that fills the synaptic gap.

Artificial neural networks are a rather old computer science technique; the original ideas and models date back more than fifty years. McCulloch-Pitts threshold neurons were the first generation of artificial neural networks, a conceptually simple model in which a neuron sends a binary ‘high’ signal if the sum of its weighted incoming inputs exceeds a threshold value.

Despite the fact that these neurons can only produce digital output, they have been used in sophisticated artificial neural networks such as multi-layer perceptrons and Hopfield nets. A multilayer perceptron with a single hidden layer, for example, can compute any function with a Boolean output; such networks are therefore called universal for digital computations.

Second-generation neurons compute their output signals using a continuous activation function rather than a step or threshold function, making them suitable for analogue inputs and outputs. The sigmoid and the hyperbolic tangent are two commonly used examples of such activation functions.
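
The generational difference is easy to see in code: a first-generation threshold neuron produces a binary output, while a second-generation neuron passes the same weighted input sum through a smooth activation. A minimal sketch with made-up weights and inputs:

```python
import numpy as np

def threshold_neuron(x, w, theta=0.5):
    """First generation (McCulloch-Pitts): binary 'high' output when
    the weighted input sum exceeds the threshold."""
    return 1 if np.dot(w, x) >= theta else 0

def sigmoid_neuron(x, w):
    """Second generation: continuous output via a sigmoid activation,
    suitable for analogue inputs and outputs."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

x = np.array([0.3, 0.9])   # analogue inputs
w = np.array([0.4, 0.7])   # synaptic weights
print(threshold_neuron(x, w))           # -> 1   (0.75 >= 0.5)
print(round(sigmoid_neuron(x, w), 3))   # -> 0.679
```
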

Applications of Spiking Neural Networks

In theory, SNNs can be used in the same applications as standard ANNs. SNNs can also simulate the central nervous systems of biological organisms, such as an insect seeking food in an unfamiliar environment. Thanks to their realism, they can be used to examine the operation of biological neural circuits.

Starting with a hypothesis about the topology and function of a real neural circuit, recordings of this circuit can be compared to the output of the corresponding SNN to assess the hypothesis’s plausibility. However, adequate training procedures for SNNs are still lacking, which can be a hindrance in certain applications, such as computer vision.

Advantages and Disadvantages of SNN

Advantages
  • An SNN is a dynamic system. As a result, it excels at dynamic tasks such as speech and dynamic image recognition.
  • An SNN can continue to train even while it is already running.
  • To train an SNN, it is often sufficient to train only the output neurons.
  • SNNs can typically solve the same task with fewer neurons than traditional ANNs.
  • Because neurons send impulses rather than continuous values, SNNs can work incredibly quickly.
  • Because they leverage the temporal presentation of information, SNNs offer increased information-processing capacity and noise immunity.

Disadvantages
  • SNNs are difficult to train. 
  • As of now, there is no learning algorithm designed expressly for SNNs.
  • Building a small SNN is impractical.

Final Words

Through this post, we have seen the basic concepts related to spiking neural networks and discussed their general working methodology. We covered various related concepts such as neural codes, SNN architecture, and learning schemes. Lastly, we discussed the differences between traditional ANNs and SNNs and looked at some advantages and disadvantages of SNNs.
