Backpropagation

The best-known algorithm for training neural network models with more than two layers of units.  The principle of the backpropagation algorithm is to change the weights in the neural network so that the discrepancy between the network’s output and the desired output (supplied by a ‘teacher’) is minimised.  More generally, backpropagation aids in the prediction (i.e., planning) and control (i.e., reinforcement learning) of large systems, not just in supervised learning.  Compared to other, more traditional methods of error minimisation, it reduces the cost of computing derivatives by a factor of N, where N is the number of quantities to be calculated.  Another advantage is that it allows neural networks with higher degrees of non-linearity and precision.
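The weight-adjustment principle described above can be sketched in pure Python: a two-layer sigmoid network is trained on the XOR problem, with errors from the ‘teacher’ signal propagated backwards to update every weight by gradient descent. The learning rate, hidden-layer size, and epoch count here are illustrative assumptions, not prescribed by the entry.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training data: inputs paired with the desired 'teacher' outputs
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# One hidden layer of two units; the last weight in each row is a bias term
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5  # learning rate (an assumed, illustrative value)

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def total_error():
    # Sum of squared discrepancies between network output and teacher output
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)

        # Backward pass: the output-layer error signal ...
        delta_out = (y - t) * y * (1 - y)
        # ... is propagated back to each hidden unit
        delta_h = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

        # Gradient-descent weight updates
        for j in range(2):
            w_out[j] -= lr * delta_out * h[j]
        w_out[2] -= lr * delta_out
        for j in range(2):
            w_hidden[j][0] -= lr * delta_h[j] * x[0]
            w_hidden[j][1] -= lr * delta_h[j] * x[1]
            w_hidden[j][2] -= lr * delta_h[j]

err_after = total_error()
```

After training, `err_after` is far smaller than `err_before`: the repeated backward propagation of the teacher-supplied error has minimised the network’s output discrepancy.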

See Activation, Auto-encoder networks, Cognitive-functionalist approach, Cognitive neuroscience, Computational models, Connectionism, Connectionist models, Neural net