Overview of Spiking Neural Networks
Over the last few years, Spiking Neural Networks (SNNs) have attracted interest in areas such as edge computing and the Internet of Things because they offer high energy efficiency and low latency while achieving accuracy comparable to Deep Neural Networks (DNNs). SNNs mimic the brain's information processing: input data are converted into spikes, which neurons accumulate until they fire spikes of their own. Nevertheless, defining SNN architectures is challenging and demands advanced knowledge of deep learning frameworks, which makes it hard for non-expert users to set them up properly.
Partly for this reason, SNNs are rarely used for model deployment at the network edge, where techniques such as DNN quantization, pruning, and knowledge distillation dominate. Existing SNN development frameworks are often specialized for specific tasks and intricate to use, which makes configuring and training SNNs challenging.
Introducing the GenericSNN Framework for Neuromorphic Computing
To overcome these problems, a new framework built on the Nengo and NengoDL packages is introduced, allowing users to create SNN models even with no prior experience in the field. Model definition mirrors the steps used to define an ANN and follows TensorFlow naming conventions, easing integration with other machine learning toolboxes such as scikit-learn.
The contributions of this work can be summarized as follows:
- An easy way to construct SNNs. The workflow pipeline mirrors that of Keras and TensorFlow, making SNNs accessible to users who are not specialists in the field.
- The framework is designed to interoperate with popular external ML Python packages, allowing state-of-the-art tooling to be applied on top of it.
- The framework has been tested in three distinct scenarios to evaluate its effectiveness. These use cases demonstrate how it can be used to design SNN models for both classification and prediction tasks.
Existing Frameworks for Spiking Neural Networks
This section reviews other frameworks that have been developed for constructing SNN models and analyzes their strengths and weaknesses relative to the proposed framework.
Existing frameworks:

- C. Li's Framework: Targets the specific application of converting ANNs to SNNs, for example in image processing. Reports achieving only 70.18% accuracy on ImageNet, while being rigid and incompatible with other tools.
- Direct SNN Quantization Framework: Examines SNN parameter quantization, showing that memory can be reduced four-fold with only a 1% drop in accuracy on the MNIST dataset. Restricted to one aspect of SNN development (memory optimization), so it does not address the broader issues of building SNNs.
- Zhou's Transformer-to-SNN Conversion: Transforms Transformer networks into SNNs, yielding 74.81% accuracy on ImageNet. It balances energy and performance benefits against accuracy, but is not very user-friendly.
- SynSense's Rockpool: Aims to bring SNNs to non-technical users through clear documentation and examples. Lacks compatibility with other systems and applications because it does not adhere to conventional AI package nomenclature.
- Liu's Framework: Modifies ANN activation functions into spike-based activations to minimize the loss of accuracy. Strictly for expert use; it is not designed for general use or for compatibility with external packages.
- Fang's SpikingJelly: Provides automatic differentiation for gradient-based training with a focus on detailed neuron models. Requires substantial setup work, is best suited to experts, and targets application-specific use.
The frameworks currently available cover certain aspects or applications of Spiking Neural Networks (SNNs), but they are neither general-purpose nor particularly user-friendly: most target a specific task, are intricate to configure, and cannot easily be combined with other AI tooling. In contrast, the framework proposed in this paper is designed to avoid these issues, making the creation of SNN models easier while still integrating with the most popular AI libraries.
GenericSNN Framework
The GenericSNN package, designed for Spiking Neural Networks (SNNs) modeling, consists of three main classes:
- BaseModel Class: The core of the package, providing essential methods for tasks like optimizer design and spiking pattern visualization.
- Nengo-based Class: This class constructs SNN models using Nengo objects, with the exception of the Simulator from NengoDL, facilitating faster model creation.
- NengoDL-based Class: Another class for creating models using NengoDL objects, offering an alternative for network design and assessment.
These classes allow researchers and practitioners to build and evaluate Spiking Neural Networks models in a modular and flexible manner. All classes follow a similar setup and execution flow, comparable to frameworks like Keras and TensorFlow. The process begins with data adaptation, requiring input data to have a temporal dimension. This involves converting data into a format with a time window to create multiple time samples, essential for data with temporal evolution. For individual samples, the same sample is used throughout the time window. Figures 1 and 2 illustrate the relationships between Nengo, NengoDL, and the package’s classes, as well as the model’s flow diagram, respectively.
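As a concrete illustration, the time-window adaptation for static samples can be sketched in plain Python. The function name and data layout here are illustrative, not the package's actual API:

```python
def add_time_dimension(samples, n_steps):
    """Repeat each static sample across a time window of n_steps,
    producing a (n_samples, n_steps, n_features) nested-list layout."""
    return [[list(sample) for _ in range(n_steps)] for sample in samples]

X = [[0.2, 0.8], [0.5, 0.1]]              # two samples, two features each
X_t = add_time_dimension(X, n_steps=30)   # each sample now spans 30 time steps
print(len(X_t), len(X_t[0]), len(X_t[0][0]))  # 2 30 2
```

For data with genuine temporal evolution, a sliding window over the sequence would be used instead of simple repetition.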
In addition to converting data to the time domain, it must also be converted into spikes for proper processing in Spiking Neural Networks (SNNs). This conversion is a fundamental requirement for SNN modeling.
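One common way to perform this conversion is rate coding, where larger input values produce more frequent spikes. The sketch below is a generic illustration of the idea, not the encoder Nengo actually uses:

```python
import random

def rate_encode(values, n_steps, max_rate=100.0, dt=0.001, seed=0):
    """Convert normalized values in [0, 1] into binary spike trains:
    at each time step, a neuron spikes with probability value * max_rate * dt."""
    rng = random.Random(seed)
    trains = []
    for v in values:
        p = min(1.0, v * max_rate * dt)
        trains.append([1 if rng.random() < p else 0 for _ in range(n_steps)])
    return trains

spikes = rate_encode([0.0, 0.5, 1.0], n_steps=1000)
# A value of 0.0 never spikes; larger values spike proportionally more often.
print([sum(train) for train in spikes])
```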
In our framework, the number of neurons in each layer's population is set with the n_neurons parameter when the layer is defined, and the maximum firing rates of the neurons are set when the SpikingNengoNeuralNetworkModel class is instantiated. For the intercepts of the tuning curves (the point at which each neuron's activity drops to zero), the framework offers two options: set all intercepts to zero, or distribute them evenly over the range -1 to 1. The framework also provides sensible default values for these parameters in case the user is unsure how to set them.
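The two intercept options can be sketched as a small stand-alone helper. This is a simplified illustration with a hypothetical function name; the framework itself presumably delegates intercept generation to Nengo:

```python
def make_intercepts(n_neurons, mode="even"):
    """Generate tuning-curve intercepts, mirroring the two options the
    framework describes: all zero, or evenly spread over [-1, 1]."""
    if mode == "zero":
        return [0.0] * n_neurons
    if n_neurons == 1:
        return [0.0]
    # Evenly distribute intercepts across [-1, 1].
    step = 2.0 / (n_neurons - 1)
    return [-1.0 + i * step for i in range(n_neurons)]

print(make_intercepts(5))           # [-1.0, -0.5, 0.0, 0.5, 1.0]
print(make_intercepts(3, "zero"))   # [0.0, 0.0, 0.0]
```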
B. Network Definition
The method for defining a Spiking Neural Network (SNN) depends on the class being used:
SpikingNengoNeuralNetworkModel Class: Networks are defined using only Nengo objects, meaning users provide layers as Ensemble objects (an Ensemble is a neuron population) along with the connections between Ensembles. This approach offers low-level control over internal parameters, allowing fine-tuning and customization of the SNN architecture for specific applications, although it is not suitable for beginners.
SpikingNeuralNetworkModel Class: This class defines the network topology much like a conventional ANN in Keras. Users describe the layer structure by providing a list of layers in which familiar attributes such as the activation function and the number of neurons are specified; the class handles the intricacies of the Nengo syntax internally. Debugging is straightforward because Nengo Probe objects are automatically instantiated to capture data (spikes, represented values, neuron voltages, and more).
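To give a feel for this layer-list style, the following sketch uses a stand-in Dense dataclass. The class names, fields, and the commented-out constructor call are illustrative assumptions, not the package's documented API:

```python
from dataclasses import dataclass

# Stand-in layer description; the real framework accepts layer objects,
# so the name and fields here are illustrative only.
@dataclass
class Dense:
    units: int
    activation: str = "spiking_relu"

# Keras-style topology: a plain list of layer specs, input side first.
layers = [
    Dense(units=128),
    Dense(units=64),
    Dense(units=10, activation="softmax"),  # output layer
]

# model = SpikingNeuralNetworkModel(layers=layers)  # hypothetical call
print([layer.units for layer in layers])  # [128, 64, 10]
```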
C. Training and Evaluation
Training models built with SpikingNengoNeuralNetworkModel or SpikingNeuralNetworkModel relies on NengoDL, a tool that combines Nengo and TensorFlow to train models on CPU or GPU. In practice, this means the same script can train a model on high-performance GPU servers or on an ordinary CPU without modification.
NengoDL employs a rate-based approximation of the spiking neuron models, so the network's parameters can be optimized much as in conventional deep learning. The framework simplifies this further by forwarding arguments to the simulator and exposing the simulator's probe data through class methods.
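The idea behind the rate-based approximation can be illustrated with a toy neuron: training sees a smooth firing-rate function, while inference counts spikes from an integrate-and-fire unit whose average rate approximates it. This is a simplified sketch, not NengoDL's actual implementation:

```python
def rate_activation(x):
    """Smooth rate approximation used during training
    (here a simple ReLU-like firing rate)."""
    return max(0.0, x)

def spiking_forward(x, n_steps=200, dt=0.005):
    """At inference, an integrate-and-fire unit accumulates drive and
    spikes on threshold crossings; its mean rate approximates rate_activation."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += rate_activation(x) * dt   # accumulate input drive
        if v >= 1.0:                   # threshold crossing -> spike
            spikes += 1
            v -= 1.0                   # reset, keeping the remainder
    return spikes / (n_steps * dt)     # average firing rate over the window

x = 2.5
# The spike-count estimate is quantized by the finite time window,
# so it only approximates the smooth rate (roughly 2.0 vs 2.5 here).
print(rate_activation(x), spiking_forward(x))
```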
The BaseModel class provides functions to plot training data, including neuron voltages, spike counts, and the temporal decision outputs of each layer. Finally, both classes support prediction and scoring through the NengoDL simulator; the configuration is switched from parameter optimization to inference depending on the given problem type.
D. Compatibility With External Packages
The framework was designed to fit into and complement existing data science workflows by following the standard fit, predict, and score method naming convention. This compatibility makes it possible to use external utility functions and extends the reach of SNN models into new scientific fields. The extra modules are meant to be easy to modify and extend with other packages, for instance for hyper-parameter tuning or for incorporating new methods, bringing the framework closer to the machine learning community.
For hyper-parameter optimization, the framework can be combined with external tools such as scikit-learn's GridSearchCV, since its models expose score, get_params, set_params, and related methods. The models thereby conform to scikit-learn's estimator and predictor interfaces, so developers can integrate them into their code base without friction.
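The estimator contract in question can be illustrated with a minimal stand-alone class. This is a toy sketch of the scikit-learn convention, not part of GenericSNN:

```python
class MinimalSNNEstimator:
    """Toy estimator following the scikit-learn convention the framework
    adopts: fit / predict / score plus get_params / set_params, which is
    what GridSearchCV-style tools rely on."""
    def __init__(self, n_neurons=100, max_rate=200.0):
        self.n_neurons = n_neurons
        self.max_rate = max_rate

    def get_params(self, deep=True):
        return {"n_neurons": self.n_neurons, "max_rate": self.max_rate}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def fit(self, X, y):
        self.classes_ = sorted(set(y))        # real training logic elided
        return self

    def predict(self, X):
        return [self.classes_[0] for _ in X]  # placeholder prediction

    def score(self, X, y):
        preds = self.predict(X)
        return sum(p == t for p, t in zip(preds, y)) / len(y)

est = MinimalSNNEstimator().set_params(n_neurons=64)
print(est.get_params())  # {'n_neurons': 64, 'max_rate': 200.0}
```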
The integration with TensorFlow likewise makes it easy to apply new layers, loss functions, and metrics. These capabilities and the use of the framework's classes are described and exemplified in Section IV with two well-known applications, where the results are compared against the best traditional ANN models.
Conclusion
The proposed GenericSNN framework is user-friendly and practical: it is easy to use and integrates readily into the existing workflows of researchers and practitioners thanks to its compatibility with popular Python libraries. The framework also shows high accuracy and robustness on multiple benchmark datasets across different domains. We look forward to further developments of SNN frameworks that advance AI and cognitive computing.
In a recent white paper, our colleagues Alberto Martin-Martin, Marta Verona-Almeida, and Javier Mendez, together with other researchers at eesy-innovation GmbH, delve into this important topic. You can find the full article here.