Modulating scalable Gaussian processes for expressive statistical learning

Published on Dec 1, 2021 in Pattern Recognition · DOI: 10.1016/J.PATCOG.2021.108121
Haitao Liu (DUT: Dalian University of Technology), Yew-Soon Ong (NTU: Nanyang Technological University), Xiaofang Wang (DUT: Dalian University of Technology), + 1 author
Abstract
For a learning task, a Gaussian process (GP) is concerned with learning the statistical relationship between inputs and outputs, since it offers not only the prediction mean but also the associated variability. The vanilla GP, however, struggles to learn complicated distributions exhibiting, e.g., heteroscedastic noise, multi-modality and non-stationarity from massive data, due to its Gaussian marginal and cubic complexity. To this end, this article studies new scalable GP paradigms, including the non-stationary heteroscedastic GP, the mixture of GPs and the latent GP, which introduce additional latent variables to modulate the outputs or inputs in order to learn richer, non-Gaussian statistical representations. In particular, we resort to different variational inference strategies to arrive at analytical or tighter evidence lower bounds (ELBOs) of the marginal likelihood for efficient and effective model training. Extensive numerical experiments against state-of-the-art GP and neural network (NN) counterparts on various tasks verify the superiority of these scalable modulated GPs, especially the scalable latent GP, for learning diverse data distributions.
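For context on the ELBOs mentioned in the abstract: the following is a minimal sketch of the standard sparse variational GP bound with inducing variables (in the style of Titsias/Hensman), not the paper's specific modulated bounds, which introduce additional latent variables.

\[
\log p(\mathbf{y}) \;\ge\; \mathcal{L} \;=\; \sum_{i=1}^{N} \mathbb{E}_{q(f_i)}\!\left[\log p(y_i \mid f_i)\right] \;-\; \mathrm{KL}\!\left[\,q(\mathbf{u}) \,\|\, p(\mathbf{u})\,\right],
\qquad
q(f_i) = \int p(f_i \mid \mathbf{u})\, q(\mathbf{u})\, \mathrm{d}\mathbf{u},
\]

where the $M \ll N$ inducing variables $\mathbf{u}$ reduce the cubic $\mathcal{O}(N^3)$ cost to $\mathcal{O}(NM^2)$. As an illustrative (assumed, standard) example of output modulation, a heteroscedastic likelihood $p(y_i \mid f_i, g_i) = \mathcal{N}\!\left(y_i \mid f_i, e^{g_i}\right)$ places a second GP $g$ over the log noise variance; such extra latent variables inside the likelihood term generally render the expectation intractable, motivating the analytical or tighter bounds the article develops.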