Great piece from @_achan96_ on why "solving alignment" is not enough to guarantee good outcomes. IME, this is a common objection from ML researchers who are concerned about risks from advanced AI but skeptical of AI alignment as a field.