Advances in Bayesian Machine Learning: From Uncertainty to Decision Making
Authors
Ma, Chao
Advisors
Hernandez Lobato, Jose Miguel
Date
2021-12-31
Awarding Institution
University of Cambridge
Qualification
Doctor of Philosophy (PhD)
Type
Thesis
Citation
Ma, C. (2021). Advances in Bayesian Machine Learning: From Uncertainty to Decision Making (Doctoral thesis). https://doi.org/10.17863/CAM.91196
Abstract
Bayesian uncertainty quantification is a key element of many machine learning applications. To this end, approximate inference algorithms are developed to perform inference at relatively low cost. Despite recent advances in scaling approximate inference to the “big model $\times$ big data” regime, many open challenges remain. For instance, how can we properly quantify parameter uncertainty for complicated, non-identifiable models such as neural networks? How can we properly handle the uncertainty caused by missing data, and perform learning and inference in a scalable way? Furthermore, how can we optimally collect new information, so that missing-data uncertainty is further reduced and better decisions can be made?
In this work, we propose new research directions and new technical contributions towards these questions. The thesis is organized in two parts (theme A and theme B). In theme A, we consider quantifying model uncertainty in the supervised learning setting. To sidestep some of the difficulties of parameter-space inference, we propose a new research direction called function-space approximate inference. That is, by treating supervised probabilistic models as stochastic processes (measures over functions), we can approximate the true posterior over predictive functions by another class of (simpler) stochastic processes. We provide two different methodologies for function-space inference and demonstrate that they return better uncertainty estimates, as well as improved empirical performance on complicated models.
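The "measures over functions" view can be made concrete with a toy example that is not the thesis's method: a Bayesian linear model with a Gaussian weight prior, seen in function space, is just a joint Gaussian over its function values at any finite set of inputs (a Gaussian process with kernel $K(x, x') = \phi(x)^\top \phi(x')$). The sketch below, with a hypothetical feature map, checks this by comparing the closed-form function-space covariance against Monte Carlo samples of whole functions drawn from the weight prior.

```python
import numpy as np

# Toy illustration (not the thesis's algorithm): a Bayesian linear model
# f(x) = w . phi(x), with w ~ N(0, I), viewed as a stochastic process.
# Its function-space description is the joint Gaussian over function
# values at any inputs, with kernel K(x, x') = phi(x) . phi(x').

def features(x):
    # hypothetical feature map: simple polynomial features
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

x = np.linspace(-1.0, 1.0, 5)
Phi = features(x)                     # (5, 3) feature matrix
K = Phi @ Phi.T                       # closed-form covariance of f(x)

# Monte Carlo check: sample weights from the prior, evaluate the
# induced functions at x, and compare empirical vs analytic covariance.
rng = np.random.default_rng(0)
W = rng.standard_normal((100_000, 3))  # prior weight samples
F = W @ Phi.T                          # each row is one sampled function at x
K_mc = np.cov(F, rowvar=False)

print(np.max(np.abs(K - K_mc)))        # small: same process, two descriptions
```

The point of the exercise is that the two descriptions agree: working with `K` directly never requires touching the weight posterior, which is the intuition behind approximating posteriors in function space rather than parameter space.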
In theme B, we consider quantifying missing-data uncertainty in the unsupervised learning setting. We propose a new approach for quantifying missing-data uncertainty based on deep generative models. It allows us to sidestep the computational burden of traditional methods and perform accurate, scalable missing data imputation. Furthermore, by utilizing the uncertainty estimates returned by the generative models, we propose an information-theoretic framework for efficient, scalable, and personalized active information acquisition. This allows us to maximally reduce missing-data uncertainty and make improved decisions with new information.
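The active-acquisition idea can be illustrated with a closed-form toy case rather than the deep generative models the thesis uses: if the features and a target are jointly Gaussian (a hypothetical covariance is hard-coded below), the information gain $I(y; x_i \mid x_{\text{obs}})$ of observing each candidate feature is computable exactly from conditional variances, and "acquire the most informative feature next" becomes a one-line argmax.

```python
import numpy as np

# Toy illustration (not the thesis's estimator): with a joint Gaussian
# over features x1..x3 and target y, the information gain of observing
# feature i is I(y; x_i | obs) = 0.5 * log( var(y|obs) / var(y|obs, x_i) ).

def cond_var(S, target, given):
    # Schur complement: variance of S[target] after observing indices `given`
    if not given:
        return S[target, target]
    b = S[np.ix_([target], given)]
    C = S[np.ix_(given, given)]
    return S[target, target] - (b @ np.linalg.solve(C, b.T))[0, 0]

# hypothetical joint covariance, variables ordered [x1, x2, x3, y];
# x1 is strongly correlated with y, x2 and x3 only weakly
S = np.array([[1.0, 0.2, 0.1, 0.8],
              [0.2, 1.0, 0.3, 0.1],
              [0.1, 0.3, 1.0, 0.2],
              [0.8, 0.1, 0.2, 1.0]])
y, candidates, observed = 3, [0, 1, 2], []

def gain(i):
    # expected entropy reduction in y from acquiring feature i
    return 0.5 * np.log(cond_var(S, y, observed)
                        / cond_var(S, y, observed + [i]))

best = max(candidates, key=gain)
print(best)   # index 0: x1 is the most informative single acquisition
```

In the thesis's setting the joint model is a deep generative model rather than a Gaussian, so this mutual information has no closed form and must be estimated, but the acquisition rule (greedily maximize expected information gain per feature, per individual) has the same shape.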
Keywords
Bayesian inference, Decision making, Deep generative models, Deep learning, Machine learning, Missing data, Uncertainty
Identifiers
This record's DOI: https://doi.org/10.17863/CAM.91196