Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models

  • Paper
  • Apr 19, 2023
  • #ChatGPT #ArtificialIntelligence #ComputerScience
Authors: Pan Lu (@lupantech), Hao Cheng (@HaoChengResearcher), Kai-Wei Chang (@kaiwei_chang), Song-Chun Zhu (@SongChunZhu), Jianfeng Gao (@JianfengGao0217)
Read on arxiv.org

Large language models (LLMs) have achieved remarkable progress in various natural language processing tasks with emergent abilities. However, they face inherent limitations, such as an inability to access up-to-date information, utilize external tools, or perform precise mathematical reasoning. In this paper, we introduce Chameleon, a plug-and-play compositional reasoning framework that augments LLMs to help address these challenges. Chameleon synthesizes programs to compose various tools, including LLMs, off-the-shelf vision models, web search engines, Python functions, and rule-based modules tailored to user interests. Built on top of an LLM as a natural language planner, Chameleon infers the appropriate sequence of tools to compose and execute in order to generate a final response. We showcase the adaptability and effectiveness of Chameleon on two tasks: ScienceQA and TabMWP. Notably, Chameleon with GPT-4 achieves an 86.54% accuracy on ScienceQA, significantly improving upon the best published few-shot model by 11.37%; using GPT-4 as the underlying LLM, Chameleon achieves a 17.8% increase over the state-of-the-art model, leading to a 98.78% overall accuracy on TabMWP. Further studies suggest that using GPT-4 as a planner exhibits more consistent and rational tool selection and is able to infer potential constraints given the instructions, compared to other LLMs like ChatGPT.
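
The planner-executor pattern the abstract describes is concrete enough to sketch. Below is a minimal Python illustration of that pattern, not the authors' implementation: the module names, the shared-context convention, and the hardcoded plan (which stands in for the few-shot LLM planner call) are all illustrative assumptions.

```python
from typing import Callable

# Tool registry: each module maps an evolving context dict to an updated one.
MODULES: dict[str, Callable[[dict], dict]] = {}

def module(name: str):
    """Register a tool module under a name the planner can emit."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        MODULES[name] = fn
        return fn
    return register

@module("Knowledge_Retrieval")
def knowledge_retrieval(ctx: dict) -> dict:
    # Placeholder: a real module would query a search engine or knowledge source.
    ctx["knowledge"] = f"retrieved facts about: {ctx['question']}"
    return ctx

@module("Solution_Generator")
def solution_generator(ctx: dict) -> dict:
    # Placeholder: a real module would prompt an LLM with the gathered context.
    ctx["answer"] = f"answer derived from {ctx.get('knowledge', 'question alone')}"
    return ctx

def plan(question: str) -> list[str]:
    # The real planner prompts an LLM (e.g. GPT-4) with few-shot examples
    # and parses the returned sequence; hardcoded here for illustration.
    return ["Knowledge_Retrieval", "Solution_Generator"]

def run(question: str) -> str:
    ctx = {"question": question}
    for name in plan(question):   # execute the inferred tool sequence
        ctx = MODULES[name](ctx)  # each module updates the shared context
    return ctx["answer"]

print(run("Which force acts on a falling apple?"))
```

The design choice worth noting is the shared context dict: every module reads from and writes to the same record, so any tool's output can feed any later tool, which is what makes the composition plug-and-play.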

Mentions
Matt Clifford @matthewclifford · Apr 20, 2023
  • Post
  • From Twitter
Interesting paper: “Chameleon is capable of synthesizing programs to compose various tools to tackle a broad range of queries. While it does not require any training, Chameleon uses the in-context learning capabilities of LLMs to generate these programs”
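
The "does not require any training" point from the excerpt is worth unpacking: the planner is steered entirely by few-shot demonstrations placed in its prompt, with no gradient updates. Here is a hedged sketch of what such an in-context planning prompt might look like; the wording and module names are illustrative assumptions, not taken from the paper.

```python
# Few-shot planner prompt: worked examples map questions to module
# sequences, and the LLM is asked to complete the pattern.
FEW_SHOT_PLANNER_PROMPT = """\
Decide which modules to run, in order, to answer the question.
Available modules: Image_Captioner, Knowledge_Retrieval, Solution_Generator.

Question: What does the diagram show?
Modules: ["Image_Captioner", "Solution_Generator"]

Question: Which country borders France to the south?
Modules: ["Knowledge_Retrieval", "Solution_Generator"]

Question: {question}
Modules:"""

def build_planner_prompt(question: str) -> str:
    return FEW_SHOT_PLANNER_PROMPT.format(question=question)

# The LLM's completion (obtained via an API call) would be parsed as the
# module sequence to execute; adding a new tool only requires registering
# it and adding one demonstration to the prompt.
print(build_planner_prompt("Which force acts on a falling apple?"))
```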