Thread by Julian Togelius
- Apr 4, 2023
- #ArtificialIntelligence #GameDevelopment
Is Elden Ring an existential risk to humanity? That is the question I consider in my new blog post. Yes, it is a comment on the superintelligence/x-risk debate. Read the full thing here:
togelius.blogspot.com/2023/04/is-elden-ring-existential-risk-to.html
Or follow along for a few tweets first, if you prefer.
First of all: Elden Ring is not an existential risk to humanity. It's a great game, and it's a ridiculous idea that it would kill us all. But why would anyone take seriously that "AI" could kill us all, when Elden Ring couldn't?
Maybe because Elden Ring is not AI? But AI is not a thing. It's a research field producing a lot of different algorithms and software that don't really have that much in common. Much of it uses gradient descent, but that's just high-school math. The devil is in the details here.
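To see how simple that core machinery is, here is a minimal sketch of gradient descent on a toy one-variable function (the function, step size, and iteration count are my own picks for illustration, not from the thread):

```python
# Minimal gradient descent on f(x) = (x - 3)^2: nothing but a derivative
# and repeated subtraction. Toy function and step size chosen for illustration.
def grad_f(x):
    return 2 * (x - 3)  # derivative of (x - 3)^2

x = 0.0
learning_rate = 0.1
for _ in range(100):
    x = x - learning_rate * grad_f(x)  # step downhill along the slope

print(x)  # converges to ~3.0, the minimum
```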
Maybe it's because Elden Ring doesn't have goals, can't modify itself, and can't spread by itself over the internet? But none of that is true in any meaningful sense for the more prominent systems we call "AI" either.
If you take a step back, it seems that the x-risk arguments are all about "intelligent" systems. Because if systems are intelligent, or have "cognitive powers" or something like that, they are assumed to have all kinds of properties. Here is where the real trouble begins.
Intelligence is, in my considered opinion, not a useful abstraction outside of a small window of human contexts. It's typically defined as a weighted sum of results on various paper-and-pen tests, and is not defined outside of a relatively narrow range.
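As a toy sketch of what that definition amounts to (the test names, scores, and weights below are entirely invented):

```python
# "Intelligence" as a weighted sum of test scores: one number standing in
# for a bundle of narrow paper-and-pen results. All values invented.
scores = {"vocabulary": 0.7, "matrix_reasoning": 0.9, "digit_span": 0.5}
weights = {"vocabulary": 0.4, "matrix_reasoning": 0.4, "digit_span": 0.2}

g = sum(weights[t] * scores[t] for t in scores)
print(g)  # 0.74 -- a single scalar summarizing very different abilities
```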
You can look at it from a zoological perspective. Various animals have specific "cognitive" capabilities that exceed ours, including long- and short-term memory. And of course many sensory and motor capabilities that beat ours hands down.
Because the capabilities are so uneven and domain-specific, it's not possible to meaningfully compare the intelligence of various species. Each species has evolved a scrappy set of capabilities that works for its specific ecological niche.
This of course applies to humans too. There is no reason to presume that our intelligence is exceptionally general, or "higher" than that of other animals that can exist across a wide variety of environments, such as rats or pigeons.
Now, as a thought experiment, think of the smartest person you know of. Imagine that person being able to think ten times as fast and having ten times as much memory. What would that mean? What would that person be able to do that other humans could not?
That person could probably be a very good programmer, and ace IQ tests, but could they take over the world? No, that's not really how society works. Do the smartest people in the world rule the world as it is?
Also note that for various tasks that used to be called cognitive - calculation, algebra, various forms of text editing - computers have been better than humans for the last 50 years at least. And they have not taken over the world yet. (Though people worried back in the 50s!)
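As a concrete instance (the particular equation is an arbitrary pick of mine), an off-the-shelf symbolic math library dispatches textbook algebra instantly:

```python
# Computers have handled symbolic algebra for decades. A small example
# using the SymPy library; the equation is chosen arbitrarily.
from sympy import symbols, solve

x = symbols("x")
solutions = solve(x**2 - 5*x + 6, x)
print(solutions)  # [2, 3] -- factored instantly, no world domination ensued
```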
So basically, "intelligence" is an ill-defined concept outside narrow human contexts, and a weasel word. Relying on it degrades the quality of your argument. The same goes for near-synonyms, such as "cognitive capabilities".
What happens to the doomer arguments if you remove the "intelligence" concept (and similar concepts)? Mostly, they fall apart. They turn out to be of the form "AI gets us X, and Y leads to apocalypse, now let's call both X and Y 'intelligence'".
If you want to argue for something extraordinary like the risk of human extinction, you need extraordinary arguments. You need to do the work of basing them on concrete capabilities found in actual software, not hide behind treacherous concepts like "intelligence".
Incidentally, this is why I think many AI researchers don't even want to engage with the LW-type doomers. To engage, you need to buy into their conceptual apparatus. But it's fundamentally broken.
Anyway. I wanted to end this by saying that I think we need more, not less, research and discussion on the impact of new techniques like LLMs on society. Big things are afoot, exciting things imho. But there are also risks, including identity impersonation, bias, and misinformation.
The doomer discourse unfortunately risks sucking the oxygen out of the room for serious discussion on possibilities, harms, and mitigation of new techniques coming out of AI research.
But how I actually want to end this thread is by telling you to play Elden Ring. In particular if you need some doom and gloom and don't get enough in the real world. It's an amazing game about vengeful gods and civilizational collapse, and it will punish you plenty.
Thank you for reading this far. Now go read the full thing before you post your replies.
togelius.blogspot.com/2023/04/is-elden-ring-existential-risk-to.html