Claire Boyer: Attention layers provably solve single-location regression

Attention-based models, such as Transformers, excel across a variety of tasks but still lack a comprehensive theoretical understanding, especially regarding token-wise sparsity and internal linear representations. To address this gap, we introduce the single-location regression task, in which only one token in a sequence determines the output and that token's position is a latent random variable, retrievable via a linear projection of the input. To solve this task, we propose a dedicated predictor, which turns out to be a simplified version of a non-linear self-attention layer. We study its theoretical properties, showing its asymptotic Bayes optimality and analyzing its training dynamics. In particular, despite the non-convex nature of the problem, the predictor effectively learns the underlying structure. This work highlights the capacity of attention mechanisms to handle sparse token information and internal linear structures.

CONFERENCE

Recorded during the thematic meeting «Meeting in Mathematical Statistics: New challenges in high-dimensional statistics» on December 19, 2024, at the Centre International de Rencontres Mathématiques (Marseille, France).

Filmmaker: Guillaume Hennenfent

Find this video and other talks given by mathematicians from around the world in CIRM's Audiovisual Mathematics Library: http://library.cirm-math.fr. And discover all its features:
- Chapter markers and keywords to watch the parts of your choice in the video
- Videos enriched with abstracts, bibliographies, and Mathematics Subject Classification
- Multi-criteria search by author, title, tags, and mathematical area
