B4 - Learning in self-organized critical circuits
Recent experiments demonstrated that spontaneous activity in cortical slices [Beggs et al. 2003] and in vivo [Gireesh et al. 2008] consists of epochs of activity propagation, so-called neuronal avalanches, which obey power-law statistics. These findings indicate that cortical networks operate near a critical point and thus represent a self-organized critical system [Bak et al. 1987]. Although the number of reported critical neuronal systems steadily increases, the benefits of the critical state for neuronal function are still insufficiently understood. It was recently shown that near the critical point the dynamical range of network responses is maximized [Kinouchi et al. 2006], which is optimal not only for sensory systems but might also benefit the differentiation between patterns presented during learning. How does the criticality of a neural network influence its capability for learning, and how does long-term plasticity affect the closeness to criticality? These are the main questions that we are going to study in the proposed project. In our previous studies we showed how a simple neural model with a single control parameter, the maximal amount of synaptic resources, becomes self-organized critical if it is endowed with frequency-dependent synapses [Levina et al. 2007, Levina et al. 2009]. For finite systems we can control the state of the neural circuit such that its avalanche size distribution is critical, subcritical, or supercritical.
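The distinction between these regimes can be illustrated with a minimal sketch (a plain Galton-Watson branching process, not the synaptic-resource model of [Levina et al. 2007]): each active unit activates on average sigma successors, and at the critical value sigma = 1 the avalanche size distribution develops the heavy, power-law-like tail (P(s) ~ s^(-3/2)) that subcritical dynamics lack. All parameter values below are illustrative choices.

```python
import random

def avalanche_size(sigma, k=10, cap=10_000):
    """Size of one avalanche when each active unit contacts k targets,
    each activated with probability sigma / k (mean offspring = sigma)."""
    p = sigma / k
    active, size = 1, 1
    while active and size < cap:
        offspring = sum(1 for _ in range(active * k) if random.random() < p)
        active = offspring
        size += offspring
    return size

random.seed(0)
trials = 2000
sub = [avalanche_size(0.8) for _ in range(trials)]   # subcritical
crit = [avalanche_size(1.0) for _ in range(trials)]  # critical

# Subcritical avalanches die out exponentially fast; critical ones
# produce a substantial fraction of very large events.
sub_large = sum(s > 100 for s in sub)
crit_large = sum(s > 100 for s in crit)
print(sub_large, crit_large)
```

Monitoring how many avalanches exceed a size threshold in each regime is a crude stand-in for the distribution-level (power-law) criterion used in the avalanche literature.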
Using Hebbian learning, we will train networks in these different states to recognize a set of patterns. By monitoring learning curves we will test the hypothesis that the learning performance of the network improves with closeness to the critical state. As a complementary question, it is important to investigate how a network can maintain criticality despite the strong and inhomogeneous potentiation induced during learning. To this end we will study how a network that is trained on a certain set of stimuli can stay critical, or adapt back to the critical state after the learning phase, with a minimal loss of information. One promising approach to this problem, which we will explore, is slow homeostatic plasticity [Levina et al. 2007].
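The intuition behind this approach can be sketched as follows (a toy illustration under assumed parameters, not the mechanism of [Levina et al. 2007]): Hebbian potentiation inflates the synaptic resources of the trained subnetwork, and a slow multiplicative homeostatic rule renormalizes each neuron's total resources back to its pre-learning budget while preserving the relative weight structure, i.e. the learned pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
W = rng.uniform(0.5, 1.5, size=(n, n))   # synaptic weight matrix
target = W.sum(axis=1).copy()            # per-neuron resource budget

# Hebbian learning: strongly potentiate synapses within a stored pattern,
# producing the inhomogeneous resource increase discussed above.
pattern = rng.choice(n, size=10, replace=False)
W[np.ix_(pattern, pattern)] *= 2.0

# Slow homeostatic plasticity: multiplicatively rescale each neuron's
# weights a small step toward its original budget on every iteration.
for _ in range(200):
    W *= (1 + 0.05 * (target / W.sum(axis=1) - 1))[:, None]

# Total resources are restored (a proxy for restoring the pre-learning
# operating point), yet within-pattern synapses remain relatively strong.
within = W[np.ix_(pattern, pattern)].mean()
outside = np.delete(W[pattern], pattern, axis=1).mean()
print(within, outside)
```

Because the rescaling is multiplicative and per-neuron, it controls the global control parameter (total synaptic resources) without erasing the ratio between potentiated and unpotentiated synapses, which is the "minimal loss of information" property the project targets.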
To establish a theoretical framework we will use a mean-field approach based on critical branching processes [Zapperi et al. 1995]. The successful use of branching processes for the analysis of realistic neural networks requires a better understanding of how delays and refractory periods influence the branching ratio. By further developing combinatorial methods [Eurich et al. 2002] we will go beyond the simplified models considered so far [Haldemann et al. 2005] and calculate the branching ratio in realistic networks. Combining Hebbian learning and homeostatic plasticity will finally yield a network that operates on four hierarchically ordered timescales: T_avalanche < T_external input < T_Hebbian learning < T_plasticity. This network is expected to be capable of effective learning and to return to a critical state between learning sessions.
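Why refractoriness matters for the branching ratio can be seen in a small simulation sketch (an assumed toy network, not the models of the cited works): even when the nominal branching parameter is tuned to sigma = 1, neurons that have already fired within an avalanche are unavailable as targets, so the empirically measured offspring-per-parent ratio falls below the nominal value.

```python
import random

random.seed(2)
N, k = 200, 10
p = 1.0 / k                                   # nominal sigma = k * p = 1
neighbors = [random.sample(range(N), k) for _ in range(N)]

def empirical_branching_ratio(trials=2000):
    """Average offspring per active neuron across many avalanches."""
    parents = offspring = 0
    for _ in range(trials):
        active = {random.randrange(N)}
        refractory = set(active)              # spent neurons cannot refire
        while active:
            nxt = set()
            for i in active:
                for j in neighbors[i]:
                    if j not in refractory and random.random() < p:
                        nxt.add(j)
            parents += len(active)
            offspring += len(nxt)
            refractory |= nxt
            active = nxt
    return offspring / parents

sigma_eff = empirical_branching_ratio()
# In an infinite network without refractoriness this estimator would
# approach the nominal sigma = 1; refractory neurons and finite size
# keep avalanches finite and push the measured ratio below 1.
print(sigma_eff)
```

This gap between nominal and effective branching ratio is exactly the kind of correction that the combinatorial mean-field treatment described above would need to quantify analytically.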
Finally, we will examine the implications of our theoretical results for: 1) memory consolidation and corticohippocampal interaction, where criticality has already been observed [Peyrache et al. 2009]; 2) the interaction of crystallized and fluid intelligence [Lindenberger et al. 2007]. This will enable us to draw conclusions about age-related changes in cognitive performance.
Is part of Section B