Generative topographic map (GTM) is a machine learning method that is a probabilistic counterpart of the self-organizing map (SOM); it is provably convergent and does not require a shrinking neighborhood or a decreasing step size. It is a generative model: the data are assumed to arise by first probabilistically picking a point in a low-dimensional space, mapping that point to the observed high-dimensional input space via a smooth function, and then adding noise in that space. The parameters of the low-dimensional probability distribution, the smooth map, and the noise are all learned from the training data using the expectation-maximization (EM) algorithm. GTM was introduced in 1996 in a paper by Christopher M. Bishop, Markus Svensen, and Christopher K. I. Williams.
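The three-step generative process described above can be sketched as follows. This is a minimal illustration, not the fitted model: the dimensions, the noise level, and the stand-in smooth map (a random tanh transform) are all hypothetical placeholders for quantities that GTM would learn via EM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 2-D latent space, 10-D data space.
latent_dim, data_dim, noise_std = 2, 10, 0.1

# A stand-in smooth map from latent to data space; in GTM the actual
# nonlinear mapping y(x; W) is learned from the training data.
W = rng.normal(size=(latent_dim, data_dim))

def smooth_map(x):
    return np.tanh(x) @ W

# 1. Probabilistically pick a point in the low-dimensional latent space.
x = rng.uniform(-1, 1, size=latent_dim)
# 2. Map it to the high-dimensional data space via the smooth function.
y = smooth_map(x)
# 3. Add Gaussian noise in data space to obtain the observed point.
t = y + rng.normal(scale=noise_std, size=data_dim)
```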
Details of the algorithm
The approach is strongly related to density networks, which use importance sampling and a multi-layer perceptron to form a non-linear latent variable model. In the GTM the latent space is a discrete grid of points which is assumed to be non-linearly projected into data space. A Gaussian noise assumption is then made in data space, so that the model becomes a constrained mixture of Gaussians. The model's likelihood can then be maximized by EM.
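The E-step of this EM procedure can be sketched as computing, for each data point, the posterior responsibility of each latent grid point under the constrained Gaussian mixture: every component has equal prior weight 1/K, a shared isotropic variance, and a center given by the projected grid point. The sizes, centers, and inverse variance `beta` below are hypothetical stand-ins for learned quantities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: K latent grid points projected to centers in a
# D-dimensional data space; N observed data points.
K, D, N = 16, 3, 100
centers = rng.normal(size=(K, D))   # stand-in for y(x_k; W), k = 1..K
beta = 4.0                          # shared inverse variance of the noise
data = rng.normal(size=(N, D))

# E-step: responsibility of grid point k for data point n,
#   R[n, k] ∝ exp(-beta/2 * ||t_n - y_k||^2),
# with the equal priors 1/K cancelling in the normalization.
sq_dist = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
log_r = -0.5 * beta * sq_dist
log_r -= log_r.max(axis=1, keepdims=True)   # for numerical stability
R = np.exp(log_r)
R /= R.sum(axis=1, keepdims=True)           # each row sums to 1
```

The subsequent M-step would re-fit the mapping parameters and `beta` from these responsibilities; the constraint that all centers come from one smooth map of the grid is what distinguishes GTM from a generic Gaussian mixture.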
In theory, an arbitrary nonlinear parametric deformation could be used, with the optimal parameters found by gradient descent or similar methods.
The suggested approach to the nonlinear mapping is to use a radial basis function network (RBF) to create a nonlinear mapping from the latent space to the data space.
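Such an RBF mapping can be sketched as projecting the discrete latent grid through a set of fixed Gaussian basis functions followed by a linear weight matrix, y(x) = φ(x)W. Everything concrete below (grid size, number of basis functions, width `sigma`, the random weights, omission of a bias term) is an illustrative assumption; in GTM the weights W would be fit by EM.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 5x5 discrete grid in a 2-D latent space.
grid_side = 5
u = np.linspace(-1, 1, grid_side)
latent_grid = np.array([[a, b] for a in u for b in u])   # 25 x 2

# M Gaussian basis functions with fixed centers and a shared width.
M, sigma, data_dim = 9, 0.5, 3
basis_centers = rng.uniform(-1, 1, size=(M, 2))

def phi(X):
    # N x M matrix of basis activations exp(-||x - c||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - basis_centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma**2))

W = rng.normal(size=(M, data_dim))   # weights that EM would actually fit
Y = phi(latent_grid) @ W             # grid points projected into data space
```

Because the basis functions are fixed, the mapping is linear in W, which is what makes the M-step of EM tractable in closed form.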