Reparameterization Trick - WHY & BUILDING BLOCKS EXPLAINED!

Kapil Sachdeva 12,497 3 years ago

This tutorial provides an in-depth explanation of the challenges and remedies for gradient estimation in neural networks that include random variables. While the final implementation of the method (called the Reparameterization Trick) is quite simple, it is interesting and important to understand how and why the method can be applied in the first place.

# Recommended videos to watch before this one
Evidence Lower Bound
https://www.youtube.com/watch?v=IXsA5Rpp25w
3 Big Ideas - Variational AutoEncoder, Latent Variable Model, Amortized Inference
https://www.youtube.com/watch?v=h9kWaQQloPk
KL Divergence
https://www.youtube.com/watch?v=9_eZHt2qJs4

# Links to various papers mentioned in the tutorial
Auto-Encoding Variational Bayes
https://arxiv.org/abs/1312.6114
Doubly Stochastic Variational Bayes for non-Conjugate Inference
https://proceedings.mlr.press/v32/titsias14.pdf
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
https://arxiv.org/abs/1401.4082
Gradient Estimation Using Stochastic Computation Graphs
https://arxiv.org/abs/1506.05254

# A thread with some insights about the name - "The Law Of The Unconscious Statistician"
https://math.stackexchange.com/questions/1500751/the-law-of-the-unconscious-statistician

#gradientestimation #elbo #variationalautoencoder
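As a rough illustration of the idea the video covers, here is a minimal NumPy sketch of the reparameterization trick for a Gaussian latent variable. The function names and the toy objective `f(z) = z**2` are my own choices for illustration, not from the video: sampling `z ~ N(mu, sigma^2)` directly blocks gradients, but writing `z = mu + sigma * eps` with `eps ~ N(0, 1)` makes `z` a deterministic, differentiable function of the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reparameterized(mu, log_sigma, n_samples, rng):
    # All randomness lives in eps ~ N(0, 1); z is then a deterministic,
    # differentiable function of (mu, log_sigma) given eps.
    eps = rng.standard_normal(n_samples)
    return mu + np.exp(log_sigma) * eps

# Toy objective: E_z[f(z)] with f(z) = z**2.
# For z ~ N(mu, sigma^2), E[z**2] = mu**2 + sigma**2,
# so the analytic gradient is d/dmu E[z**2] = 2 * mu.
mu, log_sigma = 1.5, 0.0
z = sample_reparameterized(mu, log_sigma, 100_000, rng)

# Pathwise (reparameterized) gradient estimate: since z = mu + sigma * eps,
# df(z)/dmu = f'(z) * dz/dmu = 2 * z * 1, averaged over samples.
grad_mu_estimate = np.mean(2 * z)
```

With 100,000 samples the Monte Carlo estimate lands very close to the analytic value `2 * mu = 3.0`; in a real VAE an autodiff framework computes the same pathwise gradient automatically through the ELBO.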
