Length Generalization @ DLCT

This is a talk delivered at the (usually not recorded) weekly journal club "Deep Learning: Classics and Trends" (mlcollective.org/dlct).
Speaker: Hattie Zhou
Title: What Algorithms can Transformers Learn? A Study in Length Generalization
Abstract: Large language models exhibit surprising emergent generalization properties, yet also struggle on many simple reasoning tasks such as arithmetic and parity. This raises the question of whether and when Transformer models can learn the true algorithm for solving a task. We study the scope of Transformers' abilities in the specific setting of length generalization on algorithmic tasks. Here, we propose a unifying framework to understand when and how Transformers can exhibit strong length generalization on a given task. Specifically, we leverage RASP (Weiss et al., 2021) -- a programming language designed for the computational model of a Transformer -- and introduce the RASP-Generalization Conjecture: Transformers tend to length generalize on a task if the task can be solved by a short RASP program that works for all input lengths. This simple conjecture remarkably captures most known instances of length generalization on algorithmic tasks. Moreover, we leverage our insights to drastically improve generalization performance on traditionally hard tasks (such as parity and addition). On the theoretical side, we give a simple example where the "min-degree-interpolator" model of learning from Abbe et al. (2023) does not correctly predict Transformers' out-of-distribution behavior, but our conjecture does. Overall, our work provides a novel perspective on the mechanisms of compositional generalization and the algorithmic capabilities of Transformers.
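To give a flavor of what a "short RASP program which works for all input lengths" looks like, here is a minimal Python sketch of RASP's core select/aggregate primitives (boolean attention pattern plus averaging), used to express a length-independent "reverse" program. The function and variable names are illustrative, not taken from the paper or the RASP reference implementation:

```python
def select(keys, queries, predicate):
    # Boolean attention pattern: row q marks the key positions that
    # query position q attends to.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values):
    # Uniform attention: each query position averages the values it
    # selected (or copies the value when exactly one is selected).
    out = []
    for row in selector:
        chosen = [v for v, s in zip(values, row) if s]
        out.append(chosen[0] if len(chosen) == 1 else sum(chosen) / len(chosen))
    return out

def reverse(tokens):
    # A RASP-style program: position i attends to position n-1-i and
    # copies its token. Nothing here depends on a fixed input length.
    n = len(tokens)
    idx = list(range(n))
    flipped = select(idx, idx, lambda k, q: k == n - 1 - q)
    return aggregate(flipped, tokens)
```

Because the program is expressed purely in terms of positions relative to the sequence length, the same code handles sequences of any length; the conjecture says tasks admitting such programs are the ones Transformers tend to length-generalize on.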
Speaker's bio: Hattie Zhou is a PhD student at Mila and the University of Montreal, where she is advised by Hugo Larochelle and Aaron Courville. Prior to Mila, Hattie worked as a data scientist at Uber and as an economic consultant at Cornerstone Research. Her research focuses on identifying and understanding various deep learning phenomena, with a particular focus on systematic generalization.
Paper link: arxiv.org/abs/2310.16028
