Restricted Boltzmann Machine node.

An RBM is an undirected probabilistic network with binary variables. The graph is bipartite, split into observed (*visible*) and hidden (*latent*) variables.

By default, the ``execute`` method returns the *probability* of one of the hidden variables being equal to 1 given the input. Use the ``sample_v`` method to sample from the observed variables given a setting of the hidden variables, and ``sample_h`` to do the opposite. The ``energy`` method can be used to compute the energy of a given setting of all variables.

The network is trained by Contrastive Divergence, as described in Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800.

**Internal variables of interest**

``self.w``
    Generative weights between hidden and observed variables
``self.bv``
    bias vector of the observed variables
``self.bh``
    bias vector of the hidden variables

For more information on RBMs, see Geoffrey E. Hinton (2007) Boltzmann machine. Scholarpedia, 2(5):1668.
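As a quick illustration, the following is a minimal usage sketch. It assumes the node is available as ``mdp.nodes.RBMNode`` and accepts the constructor parameters documented below; details may differ in your MDP version::

    import numpy as np
    import mdp

    # Toy binary data: 100 observations of 20 visible variables.
    v = (np.random.rand(100, 20) > 0.5).astype('float64')

    # RBM with 5 hidden and 20 visible variables.
    rbm = mdp.nodes.RBMNode(5, 20)

    # A few passes of Contrastive Divergence training.
    for _ in range(10):
        rbm.train(v, n_updates=1, epsilon=0.1)
    rbm.stop_training()

    # Probability of each hidden variable being 1 given the observations,
    # or a binary sample from that distribution.
    probs = rbm.execute(v)
    h = rbm.execute(v, return_probs=False)

    # Conditional sampling in both directions, and the energy of a configuration.
    prob_h, h = rbm.sample_h(v)
    prob_v, v_rec = rbm.sample_v(h)
    e = rbm.energy(v, h)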
Instance variables inherited from Node:

``_train_seq``
    List of tuples
``dtype``
    dtype
``input_dim``
    Input dimensions
``output_dim``
    Output dimensions
``supported_dtypes``
    Supported dtypes
``__init__``
    :Parameters:
      `hidden_dim`
        number of hidden variables
      `visible_dim`
        number of observed variables
``_pre_inversion_checks``
    This method contains all pre-inversion checks. It can be used when a subclass defines multiple inversion methods.
``energy``
    Compute the energy of the RBM given observed variables state `v` and hidden variables state `h`.
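The energy in question is the standard RBM energy. A minimal NumPy sketch of that formula, written in terms of the documented internal variables ``w``, ``bv`` and ``bh`` (illustrative only, not necessarily the node's exact implementation)::

    import numpy as np

    def rbm_energy(v, h, w, bv, bh):
        # Standard RBM energy, evaluated row-wise for a batch of (v, h) pairs:
        # E(v, h) = -v . bv - h . bh - sum_ij v_i w_ij h_j
        return -np.dot(v, bv) - np.dot(h, bh) - np.sum(np.dot(v, w) * h, axis=1)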
``execute``
    If `return_probs` is True, returns the probability of the hidden variables ``h[n,i]`` being 1 given the observations ``v[n,:]``. If `return_probs` is False, returns a sample from that probability.
``is_invertible``
    Return True if the node can be inverted, False otherwise.
``sample_h``
    Sample the hidden variables given observations `v`.

    :Returns: a tuple ``(prob_h, h)``, where ``prob_h[n,i]`` is the probability that variable ``i`` is one given the observations ``v[n,:]``, and ``h[n,i]`` is a sample from the posterior probability.
``sample_v``
    Sample the observed variables given hidden variable state `h`.

    :Returns: a tuple ``(prob_v, v)``, where ``prob_v[n,i]`` is the probability that variable ``i`` is one given the hidden variables ``h[n,:]``, and ``v[n,i]`` is a sample from that conditional probability.
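Together, ``sample_h`` and ``sample_v`` can be used to run a short block Gibbs chain. A minimal sketch, assuming a trained node ``rbm`` and a binary data matrix ``v`` as in the example above::

    # Alternate sampling of hidden and observed variables (block Gibbs sampling).
    prob_h, h = rbm.sample_h(v)
    for _ in range(10):
        prob_v, v_sample = rbm.sample_v(h)
        prob_h, h = rbm.sample_h(v_sample)
    # prob_v now holds the model's reconstruction probabilities for the observed variables.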
``stop_training``
    Stop the training phase. By default, subclasses should overwrite `_stop_training` to implement this functionality. The docstring of the `_stop_training` method overwrites this docstring.
``train``
    Update the internal structures according to the input data `v`. The training is performed using Contrastive Divergence (CD).

    :Parameters:
      `v`
        a binary matrix with variables on the columns and observations on the rows
      `n_updates`
        number of CD iterations. Default value: 1
      `epsilon`
        learning rate. Default value: 0.1
      `decay`
        weight decay term. Default value: 0.
      `momentum`
        momentum term. Default value: 0.
      `update_with_ph`
        In his code, G. Hinton updates the hidden biases using the probability of the hidden unit activations instead of a sample from them, in order to speed up sequential learning of RBMs. Set this to False to use the samples instead.
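To make the CD update concrete, here is an illustrative NumPy sketch of a single CD-1 step for a generic RBM. This is not the node's internal code; the variable names simply mirror the documented attributes ``w``, ``bv`` and ``bh``::

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v, w, bv, bh, epsilon=0.1):
        # Positive phase: hidden probabilities and samples given the data.
        ph = sigmoid(np.dot(v, w) + bh)
        h = (np.random.rand(*ph.shape) < ph).astype(v.dtype)
        # Negative phase: reconstruct the observed units, then recompute the hidden probabilities.
        pv = sigmoid(np.dot(h, w.T) + bv)
        v_neg = (np.random.rand(*pv.shape) < pv).astype(v.dtype)
        ph_neg = sigmoid(np.dot(v_neg, w) + bh)
        # Gradient estimate: data statistics minus reconstruction statistics.
        n = v.shape[0]
        dw = (np.dot(v.T, ph) - np.dot(v_neg.T, ph_neg)) / n
        dbv = (v - v_neg).mean(axis=0)
        dbh = (ph - ph_neg).mean(axis=0)
        return w + epsilon * dw, bv + epsilon * dbv, bh + epsilon * dbh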