Abstract
In training data, a similarity matrix is generated for each type of data, with a different kernel corresponding to each type, and a graph Laplacian is formed from each similarity matrix. An overall graph Laplacian is then defined as a linear combination of the individual graph Laplacians weighted by coupling constants. The observation variables and their associated latent variables are assumed to be normally distributed, and the coupling constants are assumed to follow a gamma distribution. On the basis of a variational Bayesian method, the variance of the observation variables and the coupling constants can then be estimated at a reasonable computational cost. Once the variance of the observation variables and the coupling constants have been estimated, a predictive distribution for any input data can be obtained by means of a Laplace approximation.
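The construction of the combined graph Laplacian described above can be sketched as follows. This is a minimal illustration, not the patented method itself: the RBF kernels, their bandwidths, and the fixed coupling constants `mus` are hypothetical stand-ins (in the actual method the coupling constants would be inferred variationally under the gamma prior).

```python
import numpy as np

def rbf_similarity(X, gamma):
    # Pairwise squared distances, then a Gaussian (RBF) similarity matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def graph_laplacian(W):
    # Unnormalized graph Laplacian L = D - W from a similarity matrix W.
    return np.diag(W.sum(axis=1)) - W

def combined_laplacian(X, gammas, mus):
    # One Laplacian per kernel, combined linearly with coupling constants mus.
    Ls = [graph_laplacian(rbf_similarity(X, g)) for g in gammas]
    return sum(m * L for m, L in zip(mus, Ls))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
# Two hypothetical kernels with different bandwidths; fixed example weights.
L = combined_laplacian(X, gammas=[0.5, 2.0], mus=[0.7, 0.3])
# Any nonnegative combination of Laplacians is itself a valid Laplacian:
# symmetric, zero row sums, positive semi-definite.
print(np.allclose(L, L.T), np.allclose(L.sum(axis=1), 0.0))
```

Because each individual Laplacian is positive semi-definite with zero row sums, any combination with nonnegative coupling constants retains these properties, which is what makes the linear-combination formulation well posed.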