Complutense University Library

Elements for a general memory structure: properties of recurrent neural networks used to form situation models

Makarov, Valeri A.; Song, Yongli; Velarde, Manuel G.; Hübner, David; Cruse, Holk (2008) Elements for a general memory structure: properties of recurrent neural networks used to form situation models. Biological Cybernetics, 98 (5), pp. 371–395. ISSN 0340-1200


Official URL: http://www.springerlink.com/content/h3758550745u3617/fulltext.pdf



Abstract

We study how individual memory items are stored, assuming that situations given in the environment can be represented as synaptic-like couplings in recurrent neural networks. Previous numerical investigations have shown that specific architectures based on suppression or max units can successfully learn static or dynamic stimuli (situations). Here we provide a theoretical basis concerning the convergence of the learning process and the network's response to a novel stimulus. We show that, besides learning "simple" static situations, an n-dimensional network can learn and replicate a sequence of up to n different vectors or frames. We derive limits on the learning rate and characterize the coupling matrices that develop during training in different cases, including the extension of the network to nonlinear interunit coupling. Furthermore, we show that a specific coupling matrix endows the units with low-pass-filter properties, thus connecting networks built from static summation units with continuous-time networks. We also show under which conditions such networks can be used to perform arithmetic calculations by means of pattern completion.
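The claim that an n-dimensional recurrent network can store and replay a sequence of up to n frames can be illustrated with a minimal linear sketch (this is not the paper's input-compensation-unit architecture; the coupling matrix here is simply solved for in closed form rather than learned):

```python
import numpy as np

# Minimal illustration: a linear recurrent network x(t+1) = W x(t) replicates
# a cycle of n linearly independent frames if W maps each frame to its successor.
n = 4
rng = np.random.default_rng(0)
frames = rng.standard_normal((n, n))      # columns = the n stored frames
successors = np.roll(frames, -1, axis=1)  # frame k -> frame k+1 (cyclic)

# Solve W @ frames = successors; a generic random frame matrix is invertible.
W = successors @ np.linalg.inv(frames)

# Replay: start from frame 0 and iterate the recurrence.
x = frames[:, 0]
for k in range(n):
    assert np.allclose(x, frames[:, k])   # network state matches stored frame k
    x = W @ x                             # step to the next frame
```

With more than n frames the linear map is over-constrained, which matches the stated upper bound of n vectors for an n-dimensional network.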


Item Type: Article
Uncontrolled Keywords: Recurrent neural network; Situation model; Memory; Learning
Subjects: Medical sciences > Biology > Neurosciences
ID Code: 16654

Deposited On: 09 Oct 2012 10:06
Last Modified: 07 Feb 2014 09:33
