6
Learning in Neuromorphic Systems
In this chapter, we address some general theoretical issues concerning synaptic plasticity as the mechanism underlying learning in neural networks, in the context of neuromorphic VLSI systems, and provide a few implementation examples to illustrate the principles. The chapter ties to Chapters 3 and 4 on neuromorphic sensors by proposing theoretical means for using events for learning. It is an interesting historical fact that, when the problem of synaptic learning dynamics was considered within the constraints imposed by implementation, a whole new domain of previously overlooked theoretical problems became apparent, leading to a rethinking of much of the conceptual underpinning of learning models. This mutual cross-fertilization of theory and neuromorphic engineering is a nice example of the heuristic value of implementations for the progress of theory.
We first discuss some general limitations arising from plausible constraints to be met by any implementable synaptic device for the working of a neural network as a memory; we then illustrate strategies to cope with such limitations, and how some of those have been translated into design principles for VLSI plastic synapses. The resulting type of Hebbian, bistable, spike-driven, stochastic synapse is then put in the specific, but wide-scope, context of associative memories based on attractor dynamics in ...
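To make the notion of a Hebbian, bistable, spike-driven, stochastic synapse concrete, the following is a minimal illustrative sketch, not the specific circuit or model developed later in the chapter. It assumes an internal analog variable that drifts toward one of two stable states, spike-driven candidate jumps gated by postsynaptic activity (the Hebbian condition), and a transition probability that makes updates stochastic; all thresholds, rates, and probabilities here are arbitrary illustrative choices.

```python
import random

# Illustrative parameters (assumptions, not values from the chapter)
THETA = 0.5          # bistability threshold separating the two stable states
DRIFT = 0.1          # drift per time step toward the nearest stable state
JUMP = 0.3           # size of a spike-driven candidate jump
P_TRANSITION = 0.2   # probability that a candidate jump is actually applied

class BistableSynapse:
    """Sketch of a bistable, spike-driven, stochastic Hebbian synapse."""

    def __init__(self, x=0.0, rng=None):
        self.x = x                       # internal analog variable in [0, 1]
        self.rng = rng or random.Random()

    def drift(self):
        """Between spikes, relax toward the nearest stable state (0 or 1)."""
        if self.x >= THETA:
            self.x = min(1.0, self.x + DRIFT)
        else:
            self.x = max(0.0, self.x - DRIFT)

    def pre_spike(self, post_active):
        """On a presynaptic spike, attempt a stochastic Hebbian update."""
        if self.rng.random() < P_TRANSITION:
            if post_active:              # pre and post active: potentiate
                self.x = min(1.0, self.x + JUMP)
            else:                        # pre active, post quiet: depress
                self.x = max(0.0, self.x - JUMP)

    @property
    def efficacy(self):
        """Binary read-out: only the stable state the synapse sits in matters."""
        return 1 if self.x >= THETA else 0
```

The stochastic gating (`P_TRANSITION`) is the key ingredient: only a fraction of eligible spike events cause a state transition, which slows the overwriting of old memories by new stimuli, a point taken up when these synapses are placed in the context of attractor-based associative memories.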