
Using the Veil of Ignorance to align AI systems with principles of justice

  • Paper
  • Apr 24, 2023
  • #ArtificialIntelligence #Philosophy
Authors: Laura Weidinger (@weidingerlaura), Iason Gabriel (@IasonGabriel), Kevin R. McKee (@KevinRMcKee), Richard Everett (@reverettai)
Read on www.pnas.org
1 Recommender
1 Mention

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an AI assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Mentions
Ethan Mollick @emollick · Apr 25, 2023
  • Post
  • From Twitter
New hot field: applied moral philosophy. A lot of AI alignment is based on the people who are using AI systems today & their preferences (what answers they like or don’t). This interesting paper argues that using a Rawlsian Veil of Ignorance works better.