
When causal inference fails - detecting violated assumptions with uncertainty-aware models - OATML

  • Article
  • Dec 8, 2020
  • #MachineLearning #DataScience
Sören Mindermann
@sorenmind
(Author)
Uri Shalit
@ShalitUri
(Author)
Andrew Jesson
@anndvision
(Author)
Yarin Gal
@yaringal
(Author)
oatml.cs.ox.ac.uk
Effective personalised treatment recommendations require knowing precisely how someone will respond to treatment. When there is sufficient knowledge about both the population and the individual, their response to treatment (the treatment effect) can be estimated and a recommendation made with relative confidence. However, there are many reasons why we might not know enough about someone to make an informed recommendation. For example, there may be insufficient coverage: they may not be represented in the study population, as when the study is limited to data from a single hospital. Or there may be insufficient overlap: individuals like them appear only in the group that received treatment or only in the group that did not, never both. This can happen when socio-economic barriers limit access to treatment. In both cases, effectively communicating uncertainty and deferring the recommendation is preferable to providing an uninformed and potentially dangerous one.
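The deferral idea described above can be sketched in code. This is a minimal illustration, not the authors' method: it assumes we already have, per individual, an estimated propensity score (how it is estimated is left open; values near 0 or 1 signal an overlap violation) and an uncertainty estimate on the predicted treatment effect (large values can signal poor coverage). The threshold names `eps` and `max_std` are hypothetical choices for this sketch.

```python
import numpy as np

def recommend_or_defer(propensity, tau_mean, tau_std,
                       eps=0.05, max_std=0.5):
    """Per-individual treatment recommendation with deferral.

    propensity -- estimated P(treated | x); values below eps or
                  above 1 - eps indicate insufficient overlap.
    tau_mean   -- point estimate of the individual treatment effect.
    tau_std    -- uncertainty (e.g. epistemic std) of that estimate;
                  values above max_std indicate insufficient coverage.
    Returns an array of "treat", "no-treat", or "defer".
    """
    propensity = np.asarray(propensity, dtype=float)
    tau_mean = np.asarray(tau_mean, dtype=float)
    tau_std = np.asarray(tau_std, dtype=float)

    # Flag violations of the assumptions rather than guessing.
    no_overlap = (propensity < eps) | (propensity > 1 - eps)
    too_uncertain = tau_std > max_std
    defer = no_overlap | too_uncertain

    out = np.where(tau_mean > 0, "treat", "no-treat")
    out[defer] = "defer"
    return out
```

For example, `recommend_or_defer([0.5, 0.01, 0.5], [1.0, 1.0, -0.2], [0.1, 0.1, 0.9])` recommends treatment for the first individual but defers on the second (no overlap) and third (too much uncertainty).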

Mentions
Tim G. J. Rudner (sigmoid.social/@timrudner) @timrudner · Dec 9, 2020
  • Post
  • From Twitter
Check out this excellent paper by @anndvision, @sorenmind, @yaringal & @ShalitUri! @OATML_Oxford #NeurIPS2020