
Beyond Discrete Support in Large-scale Bayesian Deep Learning - OATML

  • Paper
  • Apr 22, 2020
  • #Deeplearning #Neuroscience #MachineLearning
Yarin Gal
@yaringal
(Author)
Sebastian Farquhar
@seb_far
(Author)
Michael A Osborne
@maosbot
(Author)
Read on oatml.cs.ox.ac.uk

Most neural networks learn a single point estimate of the best network weights. But in reality we are uncertain about which weights are best. What we really want is to learn a probability distribution over those weights that reflects our uncertainty given the data: the posterior distribution over the network's weights. A distribution leaves open the possibility of improving our inference through Bayesian updating as more data arrives, e.g. in continual learning. Beyond that, it makes it easy to estimate the uncertainty of our predictions.

Mentions
Joost van Amersfoort @joost_v_amersf · Apr 29, 2020
  • Post
  • From Twitter
Nice write-up of a simple and effective improvement for training your Bayesian NN using Bayes by Backprop!