
Ep 63: Eliezer Yudkowsky (AI Safety Expert) Says It's Too Late to Save Humanity from AI

  • Podcast episode
  • May 6, 2023
  • #ArtificialIntelligence
Logan Bartlett
@loganbartlett
(Host)
Eliezer Yudkowsky
@ESYudkowsky
(Guest)
Listen on Spotify (open.spotify.com) – 198 min
Watch on YouTube – 193 min
Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety. He is best known for his writings on rationality, cognitive biases, and the development of superintelligence, and he has argued extensively for building AI systems that are aligned with human values and interests. Yudkowsky is a co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization dedicated to researching the development of safe and beneficial artificial intelligence, and of the Center for Applied Rationality (CFAR), a non-profit focused on teaching rational thinking skills. He is also a frequent author at LessWrong.com and the author of Rationality: From AI to Zombies.

In this episode, we discuss Eliezer’s concerns with artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He’s a brilliant mind, an interesting person, and he genuinely believes everything he says. So I wanted to have a conversation with him to hear where he’s coming from and how he got there, to understand AI better, and hopefully to help bridge the divide between the people who think we’re headed off a cliff and the people who think it’s not a big deal.

(0:00) Intro

(1:18) Welcome Eliezer

(6:27) How would you define artificial intelligence?

(15:50) What is the purpose of a fire alarm?

(19:29) Eliezer’s background

(29:28) The Singularity Institute for Artificial Intelligence

(33:38) Maybe AI doesn’t end up automatically doing the right thing

(45:42) AI Safety Conference

(51:15) Disaster Monkeys

(1:02:15) Fast takeoff

(1:10:29) Loss function

(1:15:48) Protein folding

(1:24:55) The deadly stuff

(1:46:41) Why is it inevitable?

(1:54:27) Can’t we let tech develop AI and then fix the problems?

(2:02:56) What were the big jumps between GPT-3 and GPT-4?

(2:07:15) “The trajectory of AI is inevitable”

(2:28:05) Elon Musk and OpenAI

(2:37:41) Sam Altman Interview

(2:50:38) The most optimistic path to us surviving

(3:04:46) Why would anything super intelligent pursue ending humanity?

(3:14:08) What role do VCs play in this?

Show Notes:

https://twitter.com/liron/status/1647443778524037121?s=20

https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/

https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy

https://www.youtube.com/watch?v=q9Figerh89g

https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction

Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

Mixed and edited: Justin Hrabovsky

Produced: Rashad Assir

Executive Producer: Josh Machiz

Music: Griff Lawson

 

🎙 Listen to the show

Apple Podcasts: https://podcasts.apple.com/us/podcast/three-cartoon-avatars/id1606770839

Spotify: https://open.spotify.com/show/5WqBqDb4br3LlyVrdqOYYb?si=3076e6c1b5c94d63&nd=1

Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zaW1wbGVjYXN0LmNvbS9zb0hJZkhWbg

 

🎥 Subscribe on YouTube: https://www.youtube.com/channel/UCugS0jD5IAdoqzjaNYzns7w?sub_confirmation=1

 

Follow on Socials

📸 Instagram - https://www.instagram.com/theloganbartlettshow

🐦 Twitter - https://twitter.com/loganbartshow

🎬 Clips on TikTok - https://www.tiktok.com/@theloganbartlettshow

 

About the Show

Logan Bartlett is a Software Investor at Redpoint Ventures - a Silicon Valley-based VC with $6B AUM and investments in Snowflake, DraftKings, Twilio, and Netflix. In each episode, Logan goes behind the scenes with world-class entrepreneurs and investors. If you're interested in the real inside baseball of tech, entrepreneurship, and start-up investing, tune in every Friday for new episodes.

Mentions
Liron Shapira @liron · May 7, 2023
  • Post
  • From Twitter
^ This is my #1 content recommendation of the last few years.
Liron Shapira @liron · May 7, 2023
  • Post
  • From Twitter
Translation: THIS BRILLIANT AND HIGHLY INSIGHTFUL INTERVIEW IS ABSOLUTELY REQUIRED LISTENING!!!
Liron Shapira @liron · May 8, 2023
  • Post
  • From Twitter
The full interview of @ESYudkowsky on @loganbartshow is my #1 recommended piece of content to watch in the last few years:
Liron Shapira @liron · May 9, 2023
  • Post
  • From Twitter
Clipped from this week's incredible episode of @loganbartshow: