upcarta

Moving Too Fast on AI Could Be Terrible for Humanity

  • Article
  • May 31, 2023
  • #ArtificialIntelligence
Katja Grace
@KatjaGrace
(Author)
time.com
1 Recommender
1 Mention
The window of what AI can’t do seems to be contracting week by week. Machines can now write elegant prose and useful code, ace exams, conjure exquisite art, and predict how proteins will fold.

Experts are scared. Last summer I surveyed more than 550 AI researchers, and nearly half of them thought that, if built, high-level machine intelligence would lead to impacts that had at least a 10% chance of being “extremely bad (e.g. human extinction).” On May 30, hundreds of AI scientists, along with the CEOs of top AI labs like OpenAI, DeepMind and Anthropic, signed a statement urging caution on AI: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Why think that? The simplest argument is that progress in AI could lead to the creation of superhumanly smart artificial “people” with goals that conflict with humanity’s interests—and the ability to pursue them autonomously. Think of a species that is to Homo sapiens what Homo sapiens is to chimps.

Mentions
Michael A Osborne @maosbot · May 31, 2023
  • Post
  • From Twitter
Unsurprisingly-superb article from @KatjaGrace
  • upcarta ©2025