On the Impossible Safety of Large AI Models

  • Paper
  • May 9, 2023
  • #ArtificialIntelligence #Naturallanguageprocessing
El Mahdi El Mhamdi (@L_badikho), Author
Sadegh Farhadkhani (@Sadegh_Farhad), Author
Rachid Guerraoui (@RachidGuerraoui), Author
Read on arxiv.org
1 Recommender
1 Mention

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance. However, they have been empirically found to pose serious security issues. This paper systematizes our knowledge about the fundamental impossibility of building arbitrarily accurate and secure machine learning models. More precisely, we identify key challenging features of many of today's machine learning settings. Namely, high accuracy seems to require memorizing large training datasets, which are often user-generated and highly heterogeneous, with both sensitive information and fake users. We then survey statistical lower bounds that, we argue, constitute a compelling case against the possibility of designing high-accuracy LAIMs with strong security guarantees.
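
To give a sense of the kind of statistical lower bound the abstract refers to, here is a schematic example: a standard result for differentially private empirical risk minimization, used purely as an illustration and not necessarily one of the bounds surveyed in the paper. Training a d-dimensional model on n examples under pure epsilon-differential privacy forces an excess error of roughly

\mathrm{err}(\hat{\theta}) \;\gtrsim\; \min\!\left(1,\; \frac{d}{\varepsilon n}\right)

So a very large (high-d) model can be both accurate and strongly private (small epsilon) only if the dataset grows correspondingly, which hints at the accuracy-versus-security tension the paper makes precise.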

Mentions
xuan (ɕɥɛn / sh-yen) @xuanalogue · May 14, 2023
  • Post
  • From Twitter
Interesting paper! Just FYI there seems to be a leftover comment on pg. 10, at the end of the paragraph on "Federated learning is not privacy-preserving." -- wanted to DM, but don't seem able to do so!