Can LLMs Reason and Plan?

Large Language Models (LLMs) are on track to reverse what seemed like an inexorable shift of AI from explicit to tacit knowledge tasks. Trained as they are on everything ever written on the web, LLMs exhibit "approximate omniscience": they can provide answers to all sorts of queries, but with nary a guarantee of correctness. This could herald a new era for knowledge-based AI systems, with LLMs taking the role of (blowhard?) experts.

But first, we have to stop mistaking the impressive form of the generated knowledge for correct content, and resist the temptation to ascribe reasoning, planning, self-critiquing, and similar powers to approximate retrieval by these n-gram models on steroids. We have to focus instead on LLM-Modulo techniques that complement the unfettered idea generation of LLMs with careful vetting by model-based AI systems.

Join us for the first session of the second cohort of the Soc(AI)ety Seminars as we host Subbarao Kambhampati, professor of computer science at Arizona State University. In this talk, Kambhampati will reify this vision and attendant caveats in the context of the role of LLMs in planning tasks.

For more information visit the event website.

