upcarta

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

  • Paper
  • Jan 2022
  • #NaturalLanguageProcessing #ProgrammingLanguages
Jason Wei
@_jasonwei
(Author)
Xuezhi Wang
@XuezhiWang
(Author)
arxiv.org
1 Recommender
1 Mention
We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
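The prompting method the abstract describes can be sketched in a few lines: worked examples that spell out intermediate reasoning steps are prepended to a new question, and the model is asked to complete the answer. A minimal sketch, assuming a generic text-completion model; the single exemplar below follows the paper's arithmetic-reasoning format, and `build_cot_prompt` is an illustrative helper, not part of any published API.

```python
# A chain-of-thought exemplar: the question is followed by an answer that
# spells out the intermediate reasoning steps before the final answer.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str, exemplars: list[str]) -> str:
    """Prepend chain-of-thought exemplars to a new question, leaving
    the trailing 'A:' for the language model to complete."""
    return "\n".join(exemplars) + f"\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "A juggler can juggle 16 balls. Half of the balls are golf balls. "
    "How many golf balls are there?",
    [COT_EXEMPLAR],
)
```

Because the exemplar answer walks through its reasoning, the model tends to produce a similar step-by-step answer for the new question; the paper used eight such exemplars for its GSM8K results.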

Mentions
Cemre Ucar @CemreUcar · May 17, 2023
  • Post
"Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks."