On weight and variance uncertainty in neural networks for regression tasks

arXiv:2501.04272v2 Announce Type: replace
Abstract: We investigate the problem of weight uncertainty, originally proposed by Blundell et al. (2015, "Weight Uncertainty in Neural Networks," ICML, pp. 1613-1622, PMLR), in the context of neural networks designed for regression tasks, and we extend their framework by incorporating variance uncertainty into the model. Our analysis demonstrates that explicitly modeling uncertainty in the variance parameter can significantly enhance the predictive performance of Bayesian neural networks. By placing a full posterior distribution over the variance, the model achieves improved generalization compared to approaches that treat the variance as fixed or deterministic. We evaluate the generalization capability of the proposed approach on a function approximation example and further validate it on the riboflavin genetic dataset. Our exploration covers both fully connected dense networks and dropout neural networks, employing Gaussian and spike-and-slab priors, respectively, for the network weights, providing a comprehensive assessment of how variance uncertainty affects model performance across different architectural choices.
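The core idea, placing a variational posterior over the observation-noise variance alongside the usual posterior over weights, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy one-weight linear regression, Gaussian mean-field posteriors over the weight w and over log sigma^2, standard-normal priors on both, and a simple finite-difference optimizer with common random numbers instead of backpropagated reparameterization gradients. All data and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed): y = 2x + noise, true sigma = 0.5
x = rng.uniform(-1.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.5, 200)

# Variational parameters: [mu_w, log s_w, mu_v, log s_v], where
# q(w) = N(mu_w, s_w^2) and q(log sigma^2) = N(mu_v, s_v^2).
params = np.array([0.0, -1.0, 0.0, -1.0])

def neg_elbo(p, eps_w, eps_v):
    """Monte Carlo negative ELBO with reparameterized samples."""
    mu_w, log_s_w, mu_v, log_s_v = p
    s_w, s_v = np.exp(log_s_w), np.exp(log_s_v)
    w = mu_w + s_w * eps_w            # weight samples
    lv = mu_v + s_v * eps_v           # log-variance samples
    resid2 = (y[None, :] - w[:, None] * x[None, :]) ** 2
    # Gaussian negative log-likelihood, averaged over posterior samples
    nll = 0.5 * (lv[:, None] + np.log(2 * np.pi) + resid2 / np.exp(lv[:, None]))
    nll = nll.sum(axis=1).mean()
    # Closed-form KL(N(mu, s^2) || N(0, 1)) for both factors
    kl = lambda mu, s: 0.5 * (s**2 + mu**2 - 1.0) - np.log(s)
    return nll + kl(mu_w, s_w) + kl(mu_v, s_v)

# Optimize by finite differences; fixing eps per step (common random
# numbers) keeps the objective deterministic within each gradient estimate.
for step in range(400):
    eps_w = rng.standard_normal(64)
    eps_v = rng.standard_normal(64)
    f0 = neg_elbo(params, eps_w, eps_v)
    grad = np.zeros(4)
    for i in range(4):
        q = params.copy()
        q[i] += 1e-4
        grad[i] = (neg_elbo(q, eps_w, eps_v) - f0) / 1e-4
    params -= 1e-3 * grad

mu_w, _, mu_v, _ = params
print(f"posterior mean weight ~ {mu_w:.2f}, sigma estimate ~ {np.exp(0.5 * mu_v):.2f}")
```

The posterior over log sigma^2 is what distinguishes this from a fixed-noise Bayes-by-backprop setup: predictive uncertainty then reflects both weight uncertainty and uncertainty about the noise level itself.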
