Cambridge faculty - AI alignment, deep learning, and existential safety.
Formerly Mila, FHI, DeepMind, ElementAI.
David Krueger @DavidSKrueger · Dec 19, 2022
Recommended! This is a well-written, concise (10 pages!) summary of the Alignment Problem. It helps explain: 1) the basic concerns of (many) alignment researchers; 2) why we don't think it's crazy to talk about AI killing everyone; 3) what's distinctive about the field.
David Krueger @DavidSKrueger · Oct 23, 2022
Another post, by @jenner_erik and Johannes Treutlein, presents a great response. Reflecting on it, I think Katja's post mostly gives me optimism about slow take-off; a lot of my pessimism there stems from general pessimism about advanced info-tech + society.