Overview of Spiking Neural Networks

Over the last few years, Spiking Neural Networks (SNNs) have attracted growing interest in areas such as Edge Computing and the Internet of Things, since they offer high energy efficiency and low latency while matching the power of Deep Neural Networks (DNNs). SNNs imitate the brain’s way of processing information: input data are converted into spikes, which each neuron integrates until its own spike is elicited. Nevertheless, defining SNN architectures is challenging and demands advanced knowledge of deep learning frameworks, which makes it difficult for users to set them up properly.
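
To make this spike-based processing concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the neuron model most commonly used in SNNs; the function name and constants are illustrative assumptions, not taken from the article:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative sketch).

    The membrane potential integrates the input current and leaks toward
    rest; whenever it crosses the threshold, a spike is emitted and the
    potential is reset.
    """
    v = v_reset
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # Leaky integration (Euler step of dv/dt = (-v + i_in) / tau).
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:       # Threshold crossing elicits a spike...
            spikes[t] = 1.0
            v = v_reset         # ...and the membrane potential is reset.
    return spikes

# A constant suprathreshold current produces a regular spike train.
current = np.full(1000, 1.5)
print(int(lif_neuron(current).sum()), "spikes in 1 s")
```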

Introducing the GenericSNN Framework for Neuromorphic Computing

Existing Frameworks for Spiking Neural Networks

  • Focuses on the specific application of converting ANNs into SNNs, such as in the image processing area.

  • Examines the issue of SNN parameter quantization, demonstrating that memory usage can be reduced four-fold with only a 1% decrease in accuracy on the MNIST dataset.
    - Restricted to a single aspect of SNN development, namely memory optimization, and thus does not address the broader challenges of building SNNs.

GenericSNN Framework

The GenericSNN package, designed for Spiking Neural Networks (SNNs) modeling, consists of three main classes:

  1. BaseModel Class: The core of the package, providing essential methods for tasks like optimizer design and spiking pattern visualization.
  2. Nengo-based Class: Constructs SNN models using Nengo objects, except for the Simulator, which is taken from NengoDL to enable faster model creation.
  3. NengoDL-based Class: Creates models using NengoDL objects, offering an alternative approach to network design and assessment.

Figure: Class diagram of the proposed framework.

These classes allow researchers and practitioners to build and evaluate Spiking Neural Network models in a modular and flexible manner. All classes follow a similar setup and execution flow, comparable to frameworks such as Keras and TensorFlow. The process begins with data adaptation: the input data must have a temporal dimension, so they are converted into a format with a time window that yields multiple time samples, which is essential for data with temporal evolution. For individual (static) samples, the same sample is repeated throughout the time window. Figures 1 and 2 illustrate the relationships between Nengo, NengoDL, and the package’s classes, and the model’s flow diagram, respectively.
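
As a sketch of this data adaptation step (the function name and window length are illustrative assumptions, not the package’s actual API), a static sample can be tiled along a new temporal axis like this:

```python
import numpy as np

def add_time_window(x, n_steps=30):
    """Tile static samples along a new time axis.

    x has shape (n_samples, n_features); the result has shape
    (n_samples, n_steps, n_features), i.e. the same sample is
    presented at every step of the time window.
    """
    return np.repeat(x[:, np.newaxis, :], n_steps, axis=1)

x = np.random.rand(100, 784)          # e.g. 100 flattened images
x_t = add_time_window(x, n_steps=30)
print(x_t.shape)                      # (100, 30, 784)
```
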
In addition to being converted to the time domain, the data must also be converted into spikes to be processed properly by a Spiking Neural Network. This conversion is a fundamental requirement of SNN modeling.

A. Neural Coding Schemes

Encoding strategies convert input signals (analog or digital) into spikes for excitatory neurons, in line with how the biological brain codifies information. Two main coding schemes are used: rate coding and temporal coding. Rate encoding, the most common method in nervous systems, can be divided into count, density, and population rate encoding; the latter is the one used in Nengo.
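
As a generic illustration of rate coding (a sketch, not the package’s implementation), the snippet below maps normalized input intensities to Poisson spike trains whose firing rate is proportional to the input value:

```python
import numpy as np

def rate_encode(values, n_steps=100, max_rate=200.0, dt=1e-3):
    """Poisson rate coding: higher input values fire more often.

    values: array of inputs normalized to [0, 1].
    Returns a (n_steps, len(values)) binary spike raster.
    """
    rates = values * max_rate          # spikes per second
    p_spike = rates * dt               # spike probability per time step
    return (np.random.rand(n_steps, values.size) < p_spike).astype(float)

spikes = rate_encode(np.array([0.1, 0.5, 0.9]))
print(spikes.mean(axis=0))  # empirical firing rates grow with the input value
```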

In population rate encoding, information is represented by the firing rate of a neuron population, where each neuron is described by a tuning curve. In decoding, each spike train is filtered with an exponentially decaying filter to generate a postsynaptic current, and all filtered trains are then summed.
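
This decoding step can be sketched directly in NumPy: each spike train is convolved with an exponentially decaying kernel (a simple synapse model) and the filtered traces are combined with per-neuron decoding weights. The kernel time constant and the weights here are illustrative, not values from the article:

```python
import numpy as np

def decode(spikes, weights, dt=1e-3, tau=0.01):
    """Decode spike trains via exponential filtering and weighted summation.

    spikes:  (n_steps, n_neurons) binary spike raster.
    weights: (n_neurons,) decoding weights, one per neuron.
    """
    # Exponentially decaying postsynaptic kernel.
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau) / tau
    # Filter every spike train, then sum with the decoding weights.
    psc = np.stack(
        [np.convolve(spikes[:, i], kernel, mode="full")[: len(spikes)]
         for i in range(spikes.shape[1])], axis=1)
    return psc @ weights

# Two toy neurons: one weighted positively, one negatively.
spikes = (np.random.rand(1000, 2) < 0.05).astype(float)
signal = decode(spikes, weights=np.array([0.5, -0.5]))
print(signal.shape)  # (1000,)
```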

An example figure shows this process:

  • Encoding: Eight neurons encode a signal. The tuning curves of the neurons are shown, together with a sinusoidal input signal and the spikes emitted by each neuron. When the input signal value is high, the blue neuron has the highest firing rate, while the gray neuron does not fire at all.
  • Decoding: The spike trains are shown, followed by the same trains after a low-pass filter is applied to the synapses, reconstructing the original signal. Finally, the bottom graph shows how the original signal is reconstructed by summing the contributions of the different neurons in a weighted way: neurons with an increasing tuning curve have a positive weight, while the others have a negative one.

Figure: Spike coding scheme, a) encoding process, b) decoding process. In all subfigures, except the left-middle and right-bottom ones, each color is associated with a neuron.
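
A setup like the one in the figure can be sketched in a few lines of Nengo: a population of eight spiking neurons encodes a sinusoid, and a probe with a low-pass synapse applies the exponential filtering and weighted summation that reconstruct the signal (the parameters are illustrative):

```python
import numpy as np
import nengo

with nengo.Network() as model:
    # Sinusoidal input signal, as in the encoding subfigure.
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    # Population of 8 spiking LIF neurons encoding the 1-D signal.
    ens = nengo.Ensemble(n_neurons=8, dimensions=1)
    nengo.Connection(stim, ens)
    # Raw spike trains of the individual neurons.
    raw_spikes = nengo.Probe(ens.neurons)
    # Spikes are low-pass filtered (exponential synapse) and summed
    # with the decoding weights to reconstruct the input signal.
    decoded = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# The decoded output approximates the original sinusoid.
print(sim.data[decoded].shape, sim.data[raw_spikes].shape)
```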

B. Network Definition

C. Training and Evaluation

D. Compatibility With External Packages

Javier Mendez, together with eesy-innovation GmbH and many more researchers, delves into this important topic. You can find the full article here.