A central aspect of information processing in connectionist models is the flow of activation between units through the weighted connections of the network. A unit’s activation is usually a value between 0 and 1 (or between -1 and 1, depending on how the model is implemented). Input units receive information from the environment, and their activation values are determined by this information. For example, a model that learns about animals could receive representations of animals consisting of feature descriptions, with each input unit standing for one feature such as ‘has a tail’. When an animal with a tail is presented, the activation of the ‘has tail’ unit (and of the units standing for the animal’s other features) is set to 1. Absence of a tail can be coded either by leaving the unit inactive (0) or by explicitly coding the absence of the feature (-1).

The activation of the input units is then fed through the network via the weighted connections between units. Each receiving unit sums the activation arriving at it: for example, if the ‘has tail’ unit is linked through a connection with weight 0.1 to another unit, that unit receives an activation of 0.1 (1 × 0.1) from the ‘has tail’ unit, plus further activation from the other units from which it receives incoming connections. A unit typically translates its summed incoming activation into its own activation state through a non-linear transformation. Typical transformations are threshold functions (e.g., when the summed incoming activation is below zero the unit remains inactive, and when it is above zero the unit’s activation is set to 1), sigmoid functions (similar to a threshold function, but taking values between 0 and 1 when the incoming activation is around 0), and Gaussian functions (bell-shaped: when the incoming activation is small or large the unit’s resulting activation is low, and when it is medium the activation is high).
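The summing and transformation steps described above can be sketched in a few lines of code. This is a minimal illustration only, not an implementation from any particular model; the feature values, weights, and the Gaussian centre are invented for the example.

```python
import math

def net_input(activations, weights):
    # Sum each incoming activation multiplied by its connection weight.
    return sum(a * w for a, w in zip(activations, weights))

def threshold(x):
    # Below zero the unit stays inactive; above zero it is fully active.
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    # Smooth version of the threshold, with values between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def gaussian(x, center=0.5, width=1.0):
    # Bell shape: activation is high only for medium-sized input
    # (near the centre), and low for small or large input.
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

# Two input features: 'has tail' present (1), 'has wings' absent (0),
# with illustrative connection weights to a receiving unit.
inputs = [1.0, 0.0]
weights = [0.1, 0.4]
net = net_input(inputs, weights)   # 1 x 0.1 + 0 x 0.4 = 0.1
print(threshold(net), round(sigmoid(net), 3))
```

With a net input of 0.1, the threshold function switches the unit fully on (1.0), while the sigmoid yields a graded value just above 0.5.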
The non-linearity of this activation function is crucial to the model’s ability to learn complex tasks. Biological neurons do not vary their activation in the same way: they fire individual spikes of the same amplitude, but the frequency at which they fire can vary. A high activation of a biological neuron is therefore characterized by rapid firing, that is, a high firing frequency. Thus the firing frequency of biological neurons is translated into the activation value of artificial neurons, although a few artificial neural network models do in fact use spiking neurons. More generally, activation is taken to be the process by which the central nervous system is stimulated into activity through the mediation of the reticular activating system.
See Central nervous system (CNS), Cognitive neuroscience, Computational models, Connectionism, Connectionist models, Dynamical systems approaches, Mesencephalic reticular activating system, Neural net, Neuroconstructivist models, Processing units, Space code principle