October 26, 2022

We have a special (virtual) seminar on Thursday, 10/27, from 11 AM to 12 PM PT. We are excited to host Denny Zhou from Google Brain, who will tell us about his recent work on language models.

Meeting ID: 919 1142 0987

Title: Teach Language Models to Reason

Description: Can we teach language models, as we teach humans, to solve unseen NLP tasks given only a few examples? Recently, combined with Google's newest large language model, PaLM-540B, chain-of-thought prompting integrated with self-consistency decoding has demonstrated striking performance on many challenging NLP tasks, largely outperforming state-of-the-art results in the literature, which typically require 100x to 1000x more annotated examples and the training of task-specific neural models. Moreover, the results produced by chain-of-thought prompting and self-consistency are fully interpretable. More recently, least-to-most prompting has been developed to further enable large language models to achieve easy-to-hard generalization, such as compositional generalization. In this talk, I will present this progress and discuss its implications for the future of AI research.
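
For readers who want a concrete picture before the talk, here is a minimal sketch of chain-of-thought prompting with self-consistency decoding: sample several reasoning chains from the model, then take a majority vote over the final answers they reach. This is an illustration only, not the speaker's implementation; the `sample_completion(prompt)` function is a hypothetical stand-in for one sampled (temperature > 0) continuation from a language model.

```python
from collections import Counter
import re

def self_consistent_answer(sample_completion, prompt, n_samples=20):
    """Chain-of-thought prompting with self-consistency: sample
    several reasoning paths and majority-vote on their final answers."""
    answers = []
    for _ in range(n_samples):
        # Each call samples one reasoning chain from the model.
        chain = sample_completion(prompt)
        # Take the text after the last "The answer is" as the final answer.
        match = re.search(r"answer is\s*(.+)", chain, re.IGNORECASE)
        if match:
            answers.append(match.group(1).strip().rstrip("."))
    # Self-consistency: the most frequent answer across chains wins.
    return Counter(answers).most_common(1)[0][0] if answers else None

# A few-shot chain-of-thought prompt: the exemplar spells out its
# reasoning before stating the answer, so sampled continuations do too.
PROMPT = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
```

The key idea is that a single greedy decode can follow a faulty reasoning path, while many independently sampled chains tend to converge on the correct answer, so the vote acts as marginalization over reasoning paths.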

Speaker bio: Denny Zhou is a Research Scientist at Google Brain, where he founded and leads the Reasoning Research Group. His work on machine reasoning includes chain-of-thought prompting, self-consistency, least-to-most prompting, and neural logic machines. Denny also led SpreadSheetCoder, which has been integrated into Google Sheets to automatically generate formulas for users, and MobileBERT, which is widely used as a highly compact and fast language model, particularly in Android apps. Denny received a Google Research Impact Award in 2022 and the WSDM 2022 Test of Time Award. He has also served as a (senior) area chair for NeurIPS, ICML, and ICLR.