Lesson 11 2022: Deep Learning Foundations to Stable Diffusion

Jeremy Howard

(All lesson resources are available at http://course.fast.ai.)

In this lesson, we discuss techniques and experiments shared by students on the forum, such as interpolating between prompts for visually appealing transitions, improving the update process in text-to-image generation, and a novel approach that decreases the guidance scale during image generation. We then dive into a new paper called DiffEdit, which focuses on semantic image editing using text-conditioned diffusion models. We walk through the process of reading and understanding the paper, emphasizing the importance of grasping the main idea rather than getting bogged down in every detail. We then embark on a deep exploration of matrix multiplication from scratch in Python, compare APL with PyTorch, and introduce the Frobenius norm. We also discuss broadcasting, a powerful technique that allows operations between tensors of different shapes, and demonstrate how it speeds up matrix multiplication. The techniques introduced in this lesson, including leveraging the GPU for massive parallelism, let us speed up our initial pure-Python implementation by a factor of around five million!

Timestamps:
0:00 - Introduction
0:20 - Showing students' work
13:03 - Workflow for reading an academic paper
16:20 - Reading the DiffEdit paper
26:27 - Understanding the equations in the "Background" section
46:10 - The 3 steps of DiffEdit
51:42 - Homework
59:15 - Matrix multiplication from scratch
1:08:47 - Speed improvement with the Numba library
1:19:25 - Frobenius norm
1:25:54 - Broadcasting with scalars and matrices
1:39:22 - Broadcasting rules
1:42:10 - Matrix multiplication with broadcasting

Thanks to raymond-wu on forums.fast.ai for the timestamps, and to fmussari for the transcript.
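The "matrix multiplication from scratch" the lesson starts with is a naive triple loop. Here is a minimal sketch of that idea, not the lesson's exact notebook code, using NumPy arrays for storage (the lesson itself works with PyTorch tensors):

```python
import numpy as np

def matmul(a, b):
    # Naive triple-loop matrix multiply, in the spirit of the
    # lesson's first pure-Python implementation.
    (ar, ac), (br, bc) = a.shape, b.shape
    assert ac == br, "inner dimensions must match"
    c = np.zeros((ar, bc))
    for i in range(ar):
        for j in range(bc):
            for k in range(ac):
                c[i, j] += a[i, k] * b[k, j]
    return c

a = np.random.rand(5, 4)
b = np.random.rand(4, 3)
c = matmul(a, b)
```

Looping over every element in Python is exactly what makes this version so slow; it is the baseline that the Numba, broadcasting, and GPU versions later improve on.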
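The Frobenius norm mentioned in the timestamps is simply the square root of the sum of the squared entries of a matrix. A minimal illustration:

```python
import numpy as np

m = np.array([[1., 2.],
              [3., 4.]])
# Frobenius norm: sqrt of the sum of squared entries.
frob = (m * m).sum() ** 0.5   # sqrt(1 + 4 + 9 + 16) = sqrt(30)
```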
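The broadcasting rules covered near the end (align shapes from the trailing dimension; each pair of dimensions must be equal, or one of them must be 1) are what let the inner loops of the matrix multiply vanish. A sketch of both ideas, using NumPy, whose broadcasting rules match PyTorch's:

```python
import numpy as np

# Broadcasting rules: shapes align from the right; each pair of
# dimensions must be equal, or one of them must be 1.
v = np.array([10., 20., 30.])        # shape (3,)
m = np.ones((2, 3))
assert (m + v).shape == (2, 3)       # (3,) is treated as (1, 3)

def matmul(a, b):
    # One row at a time: a[i, :, None] has shape (ac, 1), which
    # broadcasts against b's (br, bc), replacing the two inner loops
    # with a single vectorized multiply-and-sum.
    ar, ac = a.shape
    br, bc = b.shape
    c = np.zeros((ar, bc))
    for i in range(ar):
        c[i] = (a[i, :, None] * b).sum(axis=0)
    return c

a = np.random.rand(5, 4)
b = np.random.rand(4, 3)
c = matmul(a, b)
```

Replacing Python loops with broadcast operations like this is the main source of the speedups discussed in the lesson, before the GPU enters the picture.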
