THIS is why large language models can understand the world

Algorithmic Simplicity · 162,969 views · 2 weeks ago

5 years ago, nobody would have guessed that scaling up LLMs would be as successful as it has been. This belief was due, in part, to the fact that all known statistical learning theory predicted that massively oversized models should overfit, and hence perform worse than smaller models. Yet the undeniable fact is that modern LLMs do possess models of the world that allow them to generalize beyond their training data. Why do larger models generalize better than smaller ones? Why does training a model to predict internet text cause it to develop world models? Come deep dive into the inner workings of neural network training to understand why scaling LLMs works so damn well.

Want to see more videos like this in the future? Support me on Ko-fi: https://ko-fi.com/algorithmicsimplicity

Papers referenced:
Double Descent: https://arxiv.org/abs/1812.11118
The Lottery Ticket Hypothesis: https://arxiv.org/abs/1803.03635

My previous videos on autoregressive Transformers:
Auto-regression (and diffusion): https://youtu.be/zc5NTeJbk-k
Transformers: https://youtu.be/kWLed8o5M2Y
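The tension the description points at, that oversized models "should" overfit yet generalize better, is the double-descent phenomenon studied in the first linked paper (Belkin et al., arXiv:1812.11118). It can be reproduced in a few lines: below is a minimal, hypothetical sketch using a random-ReLU-features model fit with minimum-norm least squares. The sine task, feature map, and widths are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = sin(x) + noise, with a deliberately small training set.
n_train, n_test = 20, 200
x_train = rng.uniform(-np.pi, np.pi, n_train)
x_test = np.linspace(-np.pi, np.pi, n_test)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(n_train)
y_test = np.sin(x_test)

def random_features(x, W, b):
    # Random ReLU features: a fixed random first layer; only the linear readout is trained.
    return np.maximum(0.0, np.outer(x, W) + b)

for width in [2, 5, 10, 20, 40, 100, 1000]:
    W = rng.standard_normal(width)
    b = rng.standard_normal(width)
    Phi_train = random_features(x_train, W, b)
    Phi_test = random_features(x_test, W, b)
    # Minimum-norm least-squares readout (pseudoinverse). Once width >= n_train,
    # this solution interpolates the training data exactly.
    coef = np.linalg.pinv(Phi_train) @ y_train
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"width={width:5d}  test MSE={test_mse:.3f}")
```

With setups like this, test error typically peaks near the interpolation threshold (width ≈ n_train), exactly where classical theory expects the worst overfitting, and then often falls again as width keeps growing, because the minimum-norm solution among many interpolating fits tends to be smoother. That second descent is the "double descent" curve.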
