As an extension of the Vanitas project, this video explains how ComfyUI works and how we can use it to integrate Stable Diffusion into our Blender workflow. Links:
1. The tutorial was inspired by Mickmumpitz's much more detailed workflow, which you can explore here: https://youtu.be/8afb3luBvD8?si=XNBVlRau6Yv0ThLm
2. Installing ComfyUI on Mac tutorial: https://youtu.be/m9jg1fdOiVY?si=ZM2X3JhMF5l7sglK
3. Google Doc for Links and Explanations: https://docs.google.com/document/d/1pYqbkeAbtiyraGRkzECaQx1Libbf_MKbu1tpOFfJc8E/edit?usp=sharing
4. Google Drive for Workflow Files: https://drive.google.com/drive/folders/1WHHrYpKh4wpt3QpKsLFviQFEyWXVsafg?usp=sharing
0:00:00 Introduction
0:05:03 Preparing renders from Blender
0:12:38 Installing ComfyUI
0:15:50 Explaining ComfyUI
0:22:20 Downloading the models and checkpoint files
0:27:28 Explaining the list of workflow examples
0:28:06 Text-to-image workflow
0:31:52 Text-to-image workflow + LoRA
0:38:01 Text and image prompting using an IPAdapter
0:44:05 ControlNet and image prompting using a Blender depth pass to make a new image
0:50:57 Text-to-animation workflow with AnimateDiff
0:54:53 Text and Blender frame sequence to animation with ControlNet and AnimateDiff
0:59:47 Text-to-video using a tuned LoRA and an LCM for denoising (to speed things up)
1:06:02 Using an LCM LoRA to further speed up denoising and reduce processing time