Understanding Variational Autoencoders (VAEs) | Deep Learning

DeepBean · 29,485 views · 1 year ago

Here we delve into the core concepts behind the variational autoencoder (VAE), a widely used representation-learning technique that uncovers the hidden factors of variation underlying a dataset.

Timestamps
--------------------
Introduction 00:00
Latent variables 01:53
Intractability of the marginal likelihood 05:08
Bayes' rule 06:35
Variational inference 09:01
KL divergence and ELBO 10:14
ELBO via Jensen's inequality 12:06
Maximizing the ELBO 12:57
Analyzing the ELBO gradient 14:34
Reparameterization trick 15:55
KL divergence of Gaussians 17:40
Estimating the log-likelihood 19:04
Computing the log-likelihood 19:58
The Gaussian case 20:17
The Bernoulli case 21:56
VAE architecture 23:33
Regularizing the latent space 25:37
Balance of losses 28:00
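
Code sketch
--------------------
As the video builds up, training maximizes the ELBO, log p(x) >= E_q[log p(x|z)] - KL(q(z|x) || p(z)), so the practical loss is a reconstruction term plus a KL term. Below is a minimal NumPy sketch (illustrative only, not taken from the video or the linked Keras tutorial) of three ingredients the video derives: the reparameterization trick, the closed-form KL divergence between a diagonal Gaussian and the standard normal prior, and the Bernoulli reconstruction likelihood. All function names are made up for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I), so gradients can
    # flow through mu and log_var (the trick at 15:55).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_gaussian(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed
    # over latent dimensions (derived at 17:40).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def bernoulli_nll(x, x_logits):
    # Negative log-likelihood of binary data under a Bernoulli
    # decoder (the case at 21:56); sigmoid(x_logits) are the means.
    p = 1.0 / (1.0 + np.exp(-x_logits))
    return -np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p))

# Toy usage: in a real VAE the encoder outputs (mu, log_var) and the
# decoder maps z to x_logits; random values stand in for both here.
mu, log_var = np.array([0.2, -0.1]), np.array([-1.0, -0.5])
z = reparameterize(mu, log_var)
x = rng.integers(0, 2, size=4).astype(float)  # a tiny binary "image"
x_logits = rng.standard_normal(4)             # stand-in decoder output
neg_elbo = bernoulli_nll(x, x_logits) + kl_gaussian(mu, log_var)

Weighting the KL term by a coefficient beta recovers the balance-of-losses discussion at 28:00; beta = 1 gives the standard VAE objective.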

Useful links
------------------------
Original VAE paper: https://arxiv.org/abs/1312.6114
More detailed explanation: https://arxiv.org/abs/1906.02691
Nice discussion of the reparameterization trick: https://gregorygundersen.com/blog/2018/04/29/reparameterization/
Intro to variational inference and the ELBO: https://www.cs.cmu.edu/~epxing/Class/10708-15/notes/10708_scribe_lecture13.pdf
On the problem of learnt variance in the decoder: https://arxiv.org/abs/2006.13202
VAE tutorial in Keras: https://keras.io/examples/generative/vae/
MIT lecture on deep generative modelling: https://www.youtube.com/watch?v=3G5hWM6jqPk&t=1450s
Deriving the KL divergence for Gaussians: https://leenashekhar.github.io/2019-01-30-KL-Divergence/
Article with a nice discussion of regularized latent spaces: https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
