Baidu Encyclopedia version
A restricted Boltzmann machine (RBM) is a stochastic neural network that can learn a probability distribution from its input data set. The RBM was originally invented by Paul Smolensky in 1986 under the name Harmonium, but it was not until Geoffrey Hinton and his collaborators invented fast learning algorithms for it in the mid-2000s that the restricted Boltzmann machine became widely known. Restricted Boltzmann machines have been used in dimensionality reduction, classification, collaborative filtering, feature learning, and topic modeling. Depending on the task, a restricted Boltzmann machine can be trained with supervised or unsupervised learning.
Wikipedia version
The restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
The RBM was originally invented by Paul Smolensky in 1986 under the name Harmonium, and rose to prominence after Geoffrey Hinton and collaborators invented fast learning algorithms for it in the mid-2000s. RBMs have found applications in dimensionality reduction, classification, collaborative filtering, feature learning, and topic modeling. Depending on the task, they can be trained in either a supervised or an unsupervised way.
As the name implies, the RBM is a variant of the Boltzmann machine, with the restriction that its neurons must form a bipartite graph: a pair of nodes, one from each of the two groups of units (commonly called the "visible" and "hidden" units), may have a symmetric connection between them, while there are no connections between nodes within a group. By contrast, "unrestricted" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, notably the gradient-based contrastive divergence algorithm.
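To make the training procedure concrete, here is a minimal NumPy sketch of one-step contrastive divergence (CD-1) for a binary RBM. All names and hyperparameters (n_visible, n_hidden, lr, the toy data, the update loop) are illustrative assumptions, not taken from the text above; this is a sketch of the technique, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden, lr = 6, 3, 0.1            # illustrative sizes and learning rate
W = rng.normal(0, 0.01, size=(n_visible, n_hidden))  # symmetric weights of the bipartite graph
b = np.zeros(n_visible)                        # visible-unit biases
c = np.zeros(n_hidden)                         # hidden-unit biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One CD-1 update from a batch of binary visible vectors v0."""
    # Positive phase: hidden activation probabilities conditioned on the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden states
    # Negative phase: one Gibbs step back down to the visible layer and up again.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Gradient approximation: <v h> under the data minus <v h> under the model.
    batch = v0.shape[0]
    dW = (v0.T @ ph0 - v1.T @ ph1) / batch
    db = (v0 - v1).mean(axis=0)
    dc = (ph0 - ph1).mean(axis=0)
    return dW, db, dc

# Toy binary data; in practice v0 would be real training examples.
data = (rng.random((64, n_visible)) < 0.5).astype(float)
for _ in range(100):
    dW, db, dc = cd1_step(data)
    W += lr * dW
    b += lr * db
    c += lr * dc
```

The bipartite restriction is what makes this cheap: because no visible-visible or hidden-hidden connections exist, each layer's units are conditionally independent given the other layer, so both sampling steps are single matrix products rather than an iterative inner loop.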
Restricted Boltzmann machines can also be used in deep learning networks. In particular, a deep belief network can be formed by "stacking" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation; a sketch of this greedy layer-wise stacking follows below.
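The following sketch shows the greedy layer-wise pretraining idea: each RBM is trained on the hidden activation probabilities of the RBM below it. The helper train_rbm, the layer sizes, and the toy data are all hypothetical choices for illustration, assuming the same CD-1 procedure as above; fine-tuning by backpropagation is only indicated, not implemented.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(v, n_hidden, lr=0.1, epochs=50):
    """Train one binary RBM with CD-1; return its weights and hidden biases."""
    W = rng.normal(0, 0.01, size=(v.shape[1], n_hidden))
    b, c = np.zeros(v.shape[1]), np.zeros(n_hidden)
    for _ in range(epochs):
        ph0 = sigmoid(v @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + b)
        ph1 = sigmoid(pv1 @ W + c)
        W += lr * (v.T @ ph0 - pv1.T @ ph1) / len(v)
        b += lr * (v - pv1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
    return W, c

# Greedy stacking: the hidden probabilities of each trained RBM become the
# "visible" data for the next RBM in the stack.
data = (rng.random((64, 8)) < 0.5).astype(float)
layer_sizes, layers, x = [6, 4, 2], [], data
for n_hidden in layer_sizes:
    W, c = train_rbm(x, n_hidden)
    layers.append((W, c))
    x = sigmoid(x @ W + c)   # propagate up to feed the next layer

# `layers` now holds the pretrained stack; fine-tuning (e.g. adding a
# supervised output layer and running backpropagation) would follow.
```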