upcarta

How Rogue AIs may Arise

  • Article
  • May 23, 2023
  • #ArtificialIntelligence
Yoshua Bengio
@YoshuaBengio
(Author)
yoshuabengio.org

The rise of powerful AI dialogue systems in recent months has precipitated debates about AI risks of all kinds, which will hopefully accelerate the development of governance and regulatory frameworks. Although there is broad consensus on the need to regulate AI to protect the public from harms such as discrimination, bias, and disinformation, AI scientists disagree profoundly about the potential for a dangerous loss of control over powerful AI systems, also known as existential risk from AI. Such a loss of control may arise when an AI system can act autonomously in the world (without humans in the loop to check that its actions are acceptable) in ways that could be catastrophically harmful. Some view these risks as a distraction from the more concrete risks and harms that are already occurring or are on the horizon, and indeed there is much uncertainty and little clarity about how such catastrophes could happen. In this blog post we lay out a set of formal definitions, hypotheses, and resulting claims about AI systems that could harm humanity, and then discuss the possible conditions under which such catastrophes could arise, with an eye toward imagining more concretely what could happen and the global policies that might minimize such risks.

Mentions
David Krueger @DavidSKrueger · May 24, 2023
  • Post
  • From Twitter
I don't want to summarize too much, since I think everyone should read this latest piece by Yoshua Bengio (Turing award winner for Deep Learning, along with Geoff Hinton and Yann LeCun): Suffice to say that this piece engages w/AI x-risk very seriously.