
ChatGPT's Hallucinations Could Keep It from Succeeding

  • Article
  • Mar 13, 2023
  • #ArtificialIntelligence
Craig Smith
(Author)
Ilya Sutskever
@ilyasut
(Featured)
spectrum.ieee.org

ChatGPT has wowed the world with the depth of its knowledge and the fluency of its responses, but one problem has hobbled its usefulness: It keeps hallucinating.

Yes, large language models (LLMs) hallucinate; the term was popularized by Google AI researchers in 2018. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, you can’t trust what the machine is telling you.

Mentions
Yann LeCun @ylecun · Mar 16, 2023
  • Post
  • From Twitter
Very nice article by Craig Smith in IEEE Spectrum about the debate on the power and limitations of LLMs between (among others) @ilyasut and me. Do AI systems ultimately need to be grounded in reality, not merely learn from language? I say yes.