upcarta

Existential Risk from Power-Seeking AI

  • Paper
  • May 2023
  • #ArtificialIntelligence
Joe Carlsmith
@jkcarlsmith
(Author)
jc.gatspress.com

This essay formulates and examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I begin by discussing a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire—especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. I then look more closely at the type of agents we should be worried about; the incentives to create and deploy them; the difficulty of ensuring that they don’t seek power in unintended ways; the reasons to expect them to end up deployed regardless; the likelihood that the problem scales to the permanent disempowerment of our species; and the value lost if so. My current view, in light of these considerations, is that the existential risk at stake is disturbingly high (i.e., greater than 10% by 2070).

Mentions
Liron Shapira @liron · May 18, 2023
  • Post
  • From Twitter
If you're looking for a clear overview of the AI doom argument, I recommend this one by @jkcarlsmith: Podcast audio version here: