Baidu Encyclopedia version

The BP algorithm (i.e., the backpropagation algorithm) is a supervised learning algorithm for multi-layer neural networks, based on gradient descent. The input-output relationship of a BP network is essentially a mapping: a BP neural network with n inputs and m outputs computes a continuous mapping from n-dimensional Euclidean space to a bounded region of m-dimensional Euclidean space. The mapping is highly nonlinear. The network's information-processing capability comes from composing simple nonlinear functions many times, which gives it a strong function-approximation ability. This is the basis for applying the BP algorithm.
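To illustrate the mapping view, here is a minimal sketch of such a network's forward pass in NumPy; the layer sizes, the sigmoid hidden activation, and the random weights are assumptions chosen for the example, not part of the definitions above:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # hidden layer: simple nonlinear function (sigmoid of an affine map)
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    # output layer: affine combination of hidden activations
    return W2 @ h + b2

rng = np.random.default_rng(0)
n, hidden, m = 4, 8, 3                      # n inputs, m outputs (assumed sizes)
W1 = rng.normal(size=(hidden, n)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(m, hidden)); b2 = np.zeros(m)

y = forward(rng.normal(size=n), W1, b1, W2, b2)
print(y.shape)                              # a point in m-dimensional space: (3,)
```

Because the sigmoid is continuous and bounded, the composition maps any point of R^n to a point in a bounded region of R^m, as the Baidu definition describes.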


Wikipedia version

Backpropagation is a method used in artificial neural networks to calculate the gradients needed to update the weights of the network. The name is short for "backward propagation of errors", because the error is computed at the output and distributed backwards through the network's layers. It is commonly used to train deep neural networks.

Backpropagation is a generalization of the delta rule to multi-layer feedforward networks, achieved by iteratively computing the gradient of each layer using the chain rule. It is closely related to the Gauss-Newton algorithm and is part of continuing research into neural backpropagation.
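The layer-by-layer chain rule can be sketched for a two-layer network; the squared-error loss, sigmoid hidden layer, and tiny sizes are assumptions for the example, and the analytic gradient is checked against a finite difference:

```python
import numpy as np

def backprop(x, t, W1, b1, W2, b2):
    # forward pass
    z1 = W1 @ x + b1
    h = 1.0 / (1.0 + np.exp(-z1))      # sigmoid hidden layer
    y = W2 @ h + b2                    # linear output layer
    loss = 0.5 * np.sum((y - t) ** 2)  # squared-error loss (assumed)
    # backward pass: apply the chain rule one layer at a time
    dy  = y - t                        # dL/dy at the output
    dW2 = np.outer(dy, h); db2 = dy    # gradients of the output layer
    dh  = W2.T @ dy                    # error propagated back to hidden layer
    dz1 = dh * h * (1.0 - h)           # times the sigmoid derivative
    dW1 = np.outer(dz1, x); db1 = dz1  # gradients of the hidden layer
    return loss, (dW1, db1, dW2, db2)

rng = np.random.default_rng(1)
x = rng.normal(size=4); t = rng.normal(size=2)
W1 = rng.normal(size=(3, 4)); b1 = np.zeros(3)
W2 = rng.normal(size=(2, 3)); b2 = np.zeros(2)
loss, grads = backprop(x, t, W1, b1, W2, b2)

# numeric check of one weight's gradient via finite differences
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
loss_p, _ = backprop(x, t, W1p, b1, W2, b2)
print(abs((loss_p - loss) / eps - grads[0][0, 0]) < 1e-4)
```

Each backward step reuses the gradient already computed for the layer above, which is what makes the iterative chain-rule computation efficient.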

Backpropagation is a special case of a more general technique called automatic differentiation. In the context of learning, backpropagation is commonly used by the gradient descent optimization algorithm to compute the gradient of the loss function with respect to the weights of the neurons.
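The role the gradient plays in learning can be sketched with a single linear neuron trained by gradient descent; the toy data, target weight of 3.0, and learning rate are assumptions for the example:

```python
import numpy as np

# one linear neuron y = w*x with mean squared loss over the data;
# dL/dw = mean(2 * (w*x - t) * x) is the gradient backpropagation would supply
rng = np.random.default_rng(0)
xs = rng.normal(size=50)
ts = 3.0 * xs                           # toy targets: true weight is 3.0 (assumed)

w, lr = 0.0, 0.1
for _ in range(100):
    grad = np.mean(2.0 * (w * xs - ts) * xs)  # gradient of the loss w.r.t. w
    w -= lr * grad                            # gradient descent update step
print(round(w, 3))                            # converges near 3.0
```

In a multi-layer network the update rule is identical; backpropagation simply supplies the gradient for every weight at once.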
