upcarta

Meaning without reference in large language models

  • Paper
  • Aug 12, 2022
  • #NaturalLanguageProcessing #Linguistics #CognitiveScience
Felix Hill
@FelixHill84
(Author)
steven t. piantadosi
@spiantado
(Author)
Read on arxiv.org
The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning whatsoever, we argue that they likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from conceptual role. Because conceptual role is defined by the relationships between internal representational states, meaning cannot be determined from a model's architecture, training data, or objective function, but only by examination of how its internal states relate to each other. This approach may clarify why and how LLMs are so successful and suggest how they can be made more human-like.

Mentions
Laura Ruis @LauraRuis · Aug 10, 2022
  • Post
  • From Twitter
Great read of a nuanced take that does proper justice to the current striking capabilities of LLMs