upcarta
  • Sign In
  • Sign Up
  • Explore
  • Search

Prompt injection: what’s the worst that can happen?

  • Article
  • Apr 14, 2023
  • #ArtificialIntelligence
Simon Willison
@simonw
(Author)
simonwillison.net
Read on simonwillison.net
1 Recommender
1 Mention
Prompt injection: what’s the worst that can happen? Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing... Show More

Prompt injection: what’s the worst that can happen?
Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now.

Many of these applications are potentially vulnerable to prompt injection. It’s not clear to me that this risk is being taken as seriously as it should.

To quickly review: prompt injection is the vulnerability that exists when you take a carefully crafted prompt like this one:

Show Less
Recommend
Post
Save
Complete
Collect
Mentions
See All
Matt Clifford @matthewclifford · Apr 15, 2023
  • Post
  • From Twitter
Great post by @simonw that points to a lot of near-term AI safety / security challenges
  • upcarta ©2025
  • Home
  • About
  • Terms
  • Privacy
  • Cookies
  • @upcarta