upcarta

How should AI systems behave, and who should decide?

  • Article
  • Feb 16, 2023
  • #ChatGPT #ArtificialIntelligence
openai.com

OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. We therefore think a lot about the behavior of the AI systems we build in the run-up to AGI, and the way in which that behavior is determined. Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think the concerns raised have been valid and have uncovered real limitations of our systems that we want to address. We’ve also seen a few misconceptions about how our systems and policies work together to shape the outputs you get from ChatGPT.

Mentions
Yo Shavit @yonashav · Feb 16, 2023
  • Post
  • From Twitter
I think it’s great that OpenAI released this: But let’s not forget the original paper by two OAI employees (@cbd @IreneSolaiman, both now elsewhere) that first analyzed this LM-cultural-context-adaptation strategy: