Join Today's Live Tutorial: Getting Started with GENERATIVE AI and ControlNet in ComfyUI the EASY WAY!
🖼️ Today's Agenda:
In today’s session, we will cover how to install ComfyUI and how to use ControlNet with Stable Diffusion 1.5 and Stable Diffusion XL to guide our image generation, and then take our results further creatively with depth maps!
🌟 A Special Celebration of Our Discord Community 🌟
Join the conversation and become a part of our growing artistic collective on Discord. Your insights make our community richer!
https://discord.gg/BPQRBVfkG7
🔗 Essential Resources for Participation:
Slides for this stream – https://miroleon.github.io/comfyui-guide/controlnet
Detailed Tutorial on Heibara Ai Blog – https://blog.heibara.ai/posts/stability-matrix/
Stability Matrix – https://lykos.ai/downloads
ZavyChromaXL V6 Model – https://civitai.com/models/119229/zavychromaxl
DreamShaper 8 Model – https://civitai.com/models/4384/dreamshaper
AI Prompt Generator – https://ollama.com/impactframes/llama3_ifai_sd_prompt_mkr_q4km
Images from Museo (Public Domain) – https://museo.app/
Images from Unsplash – https://unsplash.com/
Videos from Pexels – https://www.pexels.com/
Windows PowerToys (for Color Picker) – https://aka.ms/installpowertoys
👨‍🎨 Meet Your Host:
I teach AI in art and architecture at RWTH Aachen University. In the Heibara Ai collective, we’ve found that image-to-image generation in Stable Diffusion offers remarkable outcomes, even with unrelated input images. Today, I’ll show you how to set up ComfyUI, enhance your AI prompts, and dive into creative image-to-image generation.
✨ Follow My Work
Find my other artwork on Instagram – https://www.instagram.com/miroxleon/
Follow me on Twitter – https://twitter.com/miroxleon
For all other links and contact, visit my website – https://miroleon.de/links
Get my wallpaper via https://miroleon.gumroad.com/l/wallpaper_inflate_land
🔍 Chapters
0:00 Intro
1:02 Welcome
1:38 Examples of What You Will Learn
2:29 Catch Up with Previous Stream
3:15 Plan for the Video
6:06 Hardware Requirements
7:04 Install and Run Stability Matrix
10:45 Downloading Checkpoint Models in Stability Matrix
12:19 Downloading ControlNet Models in Stability Matrix
21:12 Move Models From Another Stability Matrix Installation
24:31 Import ControlNet Workflow
25:24 Install ComfyUI Manager
26:33 Install Missing Custom Nodes with ComfyUI Manager
28:57 ControlNet Diffuser vs Preprocessor Model
30:42 SDXL ControlNet Workflow Overview
32:36 Apply ControlNet Node
33:02 ControlNet Preprocessing
34:21 OpenPose Example
36:24 Soft Edge Example
36:51 Canny Edge Example
37:21 Depth Example
39:53 Apply ControlNet with Diffuser (Depth)
50:33 Apply ControlNet with Canny Edge
52:17 ControlNet vs IP-Adapter for Consistent Characters
54:43 Apply ControlNet with Soft Edge
56:37 Apply ControlNet with OpenPose
58:42 Manually Changing ControlNet with Photopea in ComfyUI
1:06:38 Text with ControlNet and Photopea
1:08:11 SD 1.5 ControlNet Workflow Overview
1:10:58 Working with Different Image References (Museo & Unsplash)
1:13:37 SD 1.5 to SDXL
1:15:26 OpenPose and Model Limitations
1:18:33 Changing Strength, Start, and End Percent in Apply ControlNet Node
1:26:15 Recap and Using Other ControlNets
1:27:55 Increasing Detail with Steps vs LoRAs
1:28:47 Video to Depth Map Workflow: Install Missing Custom Nodes
1:31:07 Creating the Thumbnail with ComfyUI
1:31:56 Gradient Maps in ComfyUI
1:33:35 Using PowerToys for Color Picking
1:36:57 Video to Depth Map Gradient Map Workflow
1:37:18 Pexels for Free Video References
1:39:04 Load Video (Upload) Node and Settings
1:40:19 ControlNet Preprocessing for Video Inputs
1:44:16 Apply Gradient Map to Video or Image Sequence
1:49:56 Combine Image Sequence into Video File (MP4)
1:53:23 Combine Depth Gradient with Canny Outlines into One Video
1:56:07 Wrapping Up
1:56:46 Outro