Approaches
There are many approaches to implementing learning within spiking neural networks, and determining an ideal categorisation is likely to remain an ongoing challenge. The results presented on this site have been organised into the following categories:
Supervised Learning
These approaches use an error signal to directly modify weights, typically employing backpropagation or other direct feedback mechanisms.
Unsupervised Learning
These methods learn from input data without labelled examples, using local learning rules such as spike-timing-dependent plasticity (STDP) to adjust synaptic weights based on the relative timing of pre- and postsynaptic spikes.
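As an illustration, the classic pair-based STDP rule can be sketched as below. The time constants and learning rates here are illustrative placeholders, not values from any specific result on this site.

```python
import numpy as np

def stdp_update(w, pre_spike_t, post_spike_t,
                a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise. Times are in ms; the
    parameter values are illustrative, not taken from any paper."""
    dt = post_spike_t - pre_spike_t
    if dt > 0:                       # pre before post -> potentiation (LTP)
        dw = a_plus * np.exp(-dt / tau)
    else:                            # post before pre -> depression (LTD)
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, w_min, w_max))

# pre fires at 10 ms, post at 15 ms: the weight increases
w_ltp = stdp_update(0.5, 10.0, 15.0)
# pre fires at 15 ms, post at 10 ms: the weight decreases
w_ltd = stdp_update(0.5, 15.0, 10.0)
```

The exponential windows mean that closely paired spikes produce the largest weight changes, which is what makes the rule local in both space and time.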
Reinforcement Learning
These approaches combine spike-based learning with reinforcement signals, using reward-modulated STDP or policy gradient methods to optimise behaviour based on rewards.
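A minimal sketch of reward-modulated STDP follows: the raw STDP term accumulates into a decaying eligibility trace, and the weight only changes when a reward arrives. The function name and constants are hypothetical, chosen for the example.

```python
import numpy as np

def rstdp_step(w, eligibility, pre_post_dt, reward,
               a_plus=0.01, a_minus=0.012, tau_stdp=20.0,
               tau_e=200.0, dt=1.0, lr=0.1):
    """One simulation step of reward-modulated STDP (illustrative
    parameters). pre_post_dt is the post-minus-pre spike interval in ms
    for a pairing observed this step, or None if no pair occurred."""
    # the eligibility trace decays between events
    eligibility *= np.exp(-dt / tau_e)
    # an observed spike pairing adds its STDP contribution to the trace
    if pre_post_dt is not None:
        if pre_post_dt > 0:
            eligibility += a_plus * np.exp(-pre_post_dt / tau_stdp)
        else:
            eligibility -= a_minus * np.exp(pre_post_dt / tau_stdp)
    # the reward signal gates the actual weight change
    w += lr * reward * eligibility
    return w, eligibility

# a pairing with no reward leaves the weight unchanged but primes the trace;
# a later reward then converts the stored eligibility into a weight change
w, e = rstdp_step(0.5, 0.0, 5.0, reward=0.0)
w, e = rstdp_step(w, e, None, reward=1.0)
```

Separating the pairing term from the reward in this way lets a delayed reward still credit the synapses that were recently active, which is the key idea behind reward-modulated STDP.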
Hybrid Learning
These methods integrate supervised and unsupervised learning, often using unsupervised pre-training to learn features, followed by supervised fine-tuning for specific tasks.
Spike-Based Backpropagation Variants
These variants adapt traditional backpropagation for spiking neurons, using surrogate gradients to handle the non-differentiable nature of spiking activity.
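The core trick can be shown in a few lines: the forward pass uses the hard threshold, while the backward pass substitutes a smooth surrogate for its derivative. The fast-sigmoid surrogate below is one common choice; the sharpness parameter `beta` is an assumption of this sketch.

```python
import numpy as np

def heaviside(v):
    """Forward pass: hard spike threshold on the membrane potential."""
    return (v >= 0.0).astype(float)

def surrogate_grad(v, beta=10.0):
    """Backward pass: fast-sigmoid surrogate derivative, a smooth
    stand-in for the (zero-almost-everywhere) true gradient of the
    spike function. beta controls the sharpness (illustrative value)."""
    return 1.0 / (beta * np.abs(v) + 1.0) ** 2

v = np.array([-0.5, 0.0, 0.5])   # membrane potentials relative to threshold
spikes = heaviside(v)            # -> [0., 1., 1.]
grads = surrogate_grad(v)        # nonzero everywhere, peaked at threshold
```

Because the surrogate is nonzero near the threshold, gradient information can flow through neurons that almost spiked, which is what makes backpropagation through spiking layers possible at all.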
Reservoir Computing and Liquid State Machines
These approaches use a fixed, randomly connected network (reservoir) to process inputs, training only the readout layer to leverage the reservoir’s dynamic response.
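The train-only-the-readout idea can be sketched with a rate-based echo-state analogue (a true liquid state machine would use spiking neurons, but the readout training is the same in spirit). The reservoir size, spectral radius, leak rate, and ridge parameter below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(inputs, n_res=100, spectral_radius=0.9, leak=0.3):
    """Drive a fixed, randomly connected recurrent network with a 1-D
    input sequence and collect its state trajectory. The internal
    weights are never trained (illustrative echo-state-style sketch)."""
    w_in = rng.uniform(-0.5, 0.5, size=(n_res,))
    w = rng.normal(size=(n_res, n_res))
    # rescale so the largest eigenvalue magnitude equals spectral_radius
    w *= spectral_radius / max(abs(np.linalg.eigvals(w)))
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(w @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

# only the linear readout is trained, here with ridge regression
u = np.sin(np.linspace(0, 8 * np.pi, 400))
target = np.roll(u, -1)                  # task: predict the next input value
S = run_reservoir(u)
ridge = 1e-6
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ target)
pred = S @ w_out
```

Because the reservoir's recurrent weights stay fixed, training reduces to a single linear regression on the collected states, which is far cheaper than backpropagating through the recurrent dynamics.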