
PyTorch Lightning - William Falcon

Kris Skrinak · 9,557 views · 6 years ago

In recent years, techniques such as 16-bit precision, accumulated gradients, and distributed training have allowed models to train in record time. In this talk, William Falcon goes through the implementation details of the 10 most useful of these techniques, including DataLoaders, 16-bit precision, accumulated gradients, and 4 different ways of distributing model training across hundreds of GPUs. He also shows how to use these techniques, already built into PyTorch Lightning, a Keras-like framework for ML researchers.

William is the creator of PyTorch Lightning and an AI PhD student at Facebook AI Research and the NYU CILVR lab, advised by Kyunghyun Cho. Before his PhD, he co-founded the AI startup NextGenVest (acquired by CommonBond). He also spent time at Goldman Sachs and Bonobos, and received his BA in Stats/CS/Math from Columbia University.

Every month the deep learning community of New York gathers at the AWS loft to share discoveries and achievements and to describe new techniques.

https://github.com/williamFalcon/pytorch-lightning
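One of the techniques the talk covers, accumulated gradients, can be sketched in a few lines. The idea: instead of stepping the optimizer after every mini-batch, sum the gradients from k batches and apply a single update, emulating a k-times larger effective batch size. The scalar model and function names below are hypothetical, purely for illustration; in PyTorch Lightning the same behavior is enabled with the Trainer's `accumulate_grad_batches` flag rather than written by hand.

```python
# Sketch of accumulated gradients with a hypothetical scalar model
# y_hat = w * x trained by mean-squared error.

def grad(w, batch):
    # d/dw of MSE for the scalar model: mean of 2 * (w*x - y) * x
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(batches, w=0.0, lr=0.1, accumulate=4):
    acc, count = 0.0, 0
    for batch in batches:
        acc += grad(w, batch)       # accumulate instead of stepping
        count += 1
        if count == accumulate:     # one optimizer step per k batches
            w -= lr * (acc / accumulate)
            acc, count = 0.0, 0
    return w

# Data generated by the true model y = 2x, so w should approach 2.
data = [[(1.0, 2.0), (2.0, 4.0)]] * 40
w = train(data)
```

With a Lightning `Trainer`, the equivalent would be `Trainer(accumulate_grad_batches=4)`, with the loop above handled by the framework.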
