Optimization for Deep Learning (Momentum, RMSprop, AdaGrad, Adam)

DeepBean 80,694 views 2 years ago

Here we cover six optimization schemes for deep neural networks: stochastic gradient descent (SGD), SGD with momentum, SGD with Nesterov momentum, RMSprop, AdaGrad and Adam.
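The description above only names the six schemes; as a rough illustration, here is a minimal NumPy sketch of two of the standard update rules, SGD with momentum and Adam. The function names, default hyperparameters, and the classical (heavy-ball) momentum form are assumptions for this sketch, not taken from the video.

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.01, beta=0.9):
    """One SGD-with-momentum step: v keeps a running accumulation of
    past gradients (heavy-ball form), and w moves along v."""
    v = beta * v + grad
    w = w - lr * v
    return w, v

def adam_step(w, grad, m, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: a momentum-style first moment m plus an
    RMSprop-style average of squared gradients s, with bias correction
    for the zero-initialized averages (t is the step count, starting at 1)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum)
    s = beta2 * s + (1 - beta2) * grad**2     # second moment (RMSprop-style)
    m_hat = m / (1 - beta1**t)                # bias-corrected estimates
    s_hat = s / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(s_hat) + eps)
    return w, m, s
```

In both cases the state variables (v, m, s) are initialized to zeros with the same shape as the parameters and carried from one step to the next.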

Chapters
---------------
Introduction 00:00
Brief refresher 00:27
Stochastic gradient descent (SGD) 03:16
SGD with momentum 05:01
SGD with Nesterov momentum 07:02
AdaGrad 09:46
RMSprop 12:20
Adam 13:23
SGD vs Adam 15:03
