Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

  • Video
  • Mar 30, 2023
  • #ArtificialIntelligence
Lex Fridman @LexFridman (Host)
Eliezer Yudkowsky @ESYudkowsky (Guest)
Watch on YouTube (www.youtube.com) · 24,895 · 198 min
1 Recommender · 1 Mention

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI. Please support this podcast by checking out our sponsors:
- Linode: https://linode.com/lex to get $100 free credit
- House of Macadamias: https://houseofmacadamias.com/lex and use code LEX to get 20% off your first order
- InsideTracker: https://insidetracker.com/lex to get 20% off

EPISODE LINKS:
Eliezer's Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer's Blog page: https://www.lesswrong.com/users/eliezer_yudkowsky
Books and resources mentioned:
1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
2. Adaptation and Natural Selection: https://amzn.to/40F5gfa

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Mentions
Bill Ackman @BillAckman · Mar 30, 2023 · Post from Twitter
An incredible podcast on the dangers of AI and the risk to us all. In a sane world, we would slow development until we could be assured that AI is aligned with humankind. Sadly we can’t trust what our enemies will do with AI if they get there first.