Txt2Vid Made Easy with ComfyUI & AnimateDiff

Science & Technology

#stablediffusion #aiart #generativeart #aitools #comfyui
Turning prompts into short-form videos can be done quickly and efficiently within ComfyUI. This video outlines the workflow.
Time Stamps
Intro: 0:00
Installing Custom Nodes: 0:14
Setting Up the Workflow: 1:03
Prompt Scheduling: 4:40
Reviewing the Outputs: 10:22
Frame Interpolation: 11:45
Outro: 13:52
Original Source for Learning:
civitai.com/articles/2379/gui...
Workflow From the Video:
promptingpixels.com/comfyui-w...
Install ComfyUI Manager (You'll need this in order to install the custom nodes):
github.com/ltdrdata/ComfyUI-M...
Then Install These Custom Nodes:
FizzNodes (Batch Prompt Scheduling): github.com/FizzleDorf/ComfyUI...
AnimateDiff: github.com/Kosinkadink/ComfyU...
VideoHelperSuite (For Video Combine Node): github.com/Kosinkadink/ComfyU...
AnimateDiff Motion Modules:
SDXL, 1.5v1, 1.5v2, 1.4: civitai.com/models/108836?mod...
V3: github.com/guoyww/animatediff...
Important links:
- Patreon: / promptingpixels
- 👾 Discord: / discord
- 🌐 Website: promptingpixels.com/
- 🛠️ GitHub: github.com/content-and-code

Comments: 30

  • @amnzk08 · 5 months ago

    Don't know if anyone's told you this lately, but I love you, man. Your direct approach and clear explanations of why you use specific nodes make your videos so simple to understand. Earned a subscriber, and waiting for your channel to explode!

  • @PromptingPixels · 5 months ago

    Haha - thanks so much - appreciate it!

  • @hi4620 · 2 months ago

    Very useful video! Thanks!

  • @hamidmohamadzade1920 · 3 months ago

    thanks, good simple details

  • @SumNumber · 5 months ago

    Great tutorial. Awesome plugin! :O)

  • @PromptingPixels · 5 months ago

    Thanks so much for checking it out!

  • @WiLDeveD · 5 months ago

    Awesome!!! This way we can make long videos with AI. Thanks bro ❤

  • @PromptingPixels · 5 months ago

    Yeah - it's pretty powerful stuff that you can string together using this workflow. I need to start testing the newly released AnimateLCM (the author of AnimateDiff Evolved just updated the node to support it), which is supposed to provide some impressive results in as few as 4 steps.

  • @suzanazzz · 19 days ago

    Thanks for this tutorial - it cleared up some things since you explained them in detail! Do you have a tutorial on upscaling the results of these AnimateDiff videos? Thanks in advance.

  • @Radarhacke · 6 days ago

    Wow! Very good job! Thank you! One crazy thing: if I use closed_loop, I have to set max_frames to at least 4 more frames than batch_size. Otherwise the first and last images aren't similar - can't figure out why.

  • @GES1985 · 1 month ago

    How do we connect an image input to use as the first frame, where you added the Empty Latent Image (big batch) at 3:43?

  • @PromptingPixels · 1 month ago

    That would be an img2vid workflow rather than txt2vid (which is what's outlined in this video). I was playing around with an img2vid workflow a few weeks ago and will try to get a video about it posted to the channel, since the process is different from what was covered here. Method 1: use IPAdapter + AnimateDiff to convert an image into a short-form video - this workflow goes through the steps: civitai.com/models/372584. Method 2: use Stable Video Diffusion (SVD), which takes an image as input and outputs a video. The problem, though, is that you can't apply textual prompts (that I am aware of).

  • @GES1985 · 1 month ago

    @PromptingPixels Can you use textual prompts inside the img2vid workflow you mentioned wanting to make a video on?

  • @PromptingPixels · 1 month ago

    @GES1985 Yes, textual prompts should be supported to help inform the output.

  • @AmerikaMeraklisi-yr2xe · 6 months ago

    That looks cool - is it really possible to upscale this video?

  • @PromptingPixels · 6 months ago

    Hey - that's a good idea for a future video. Yes, you should be able to. How big are you thinking for the scale? That way I might be able to test it out for you.

  • @hi4620 · 2 months ago

    12:30 I have a question: when I run a render, the KSampler runs even though I haven't changed anything. Why is this happening?

  • @PromptingPixels · 2 months ago

    Sounds like you might have a random seed value - you'll need to change it to fixed so it doesn't re-run every time you queue the prompt.

  • @amirgnia5412 · 6 months ago

    I'm getting an MPS error on Mac when generating video. Is there any fix?

  • @PromptingPixels · 6 months ago

    An MPS error is a memory-related error (typically meaning you ran out of RAM). Which model are you using (1.5, 2.1, or XL)? Also, what is the total RAM in your system?

  • @amirgnia5412 · 6 months ago

    @PromptingPixels M1 Max, 32 GB, 1.5 model

  • @murtazakarim7042 · 3 months ago

    I can't install FizzNodes.

  • @Paperclown · 6 months ago

    I'm trying to walk through this literally 2 days after you posted it, and the closest available settings for AnimateDiff Evolved aren't even close to what ComfyUI Manager installed today.

  • @PromptingPixels · 6 months ago

    Mind sending a screenshot (Imgur link) of your workflow, or sharing the JSON file in the Discord if you want? I'll take a look at it when I have a minute.

  • @Paperclown · 6 months ago

    @PromptingPixels Appreciate you responding - sorry I didn't follow up with you earlier, I was working through the rest of the video. I got something CLOSE to it under the Gen 2 tab, but the options are labeled differently there. Also, starting from scratch, you didn't mention sdxl.vae.safetensors, which you changed to something else - it was listed in ComfyUI Manager to download, though. imgur DOT com/q3Sz5DW.png

  • @Paperclown · 6 months ago

    @PromptingPixels Also forgot to mention: as the end result I just get rainbow-snow artifacts. I'll keep playing around with it or look at other recent videos to get through it. Thanks for your explain-it-like-I'm-5 approach, heh.

  • @PromptingPixels · 6 months ago

    No need to apologize! Still learning all this and just documenting it with videos here on YouTube to share and to help remember everything 😂 Thanks for the screenshot - super helpful.

    For the VAE, I am using vae-ft-mse-840000-ema-pruned.ckpt (linked here in the Stability repo: huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt). Some models have this baked into the checkpoint, so you can just connect the noodle from the checkpoint loader to the VAE Decode node, or you can use this one provided by Stability.

    As for the Gen 1/Gen 2 tabs - it appears there was an update to the AnimateDiff Evolved repo just this weekend. They updated their README outlining the differences between the two sets of nodes (github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved#gen1gen2-nodes). The workflow presented in my video falls under the Gen 1 tab; Gen 2 appears to just separate out the nodes (motion module, evolved sampling).

    I just updated my workflow in the repo if you want to load it into ComfyUI and give it a spin: github.com/content-and-code/prompting-pixels/tree/main/comfyui_workflows/1_26_24%20-%20txt2vid%20in%20ComfyUI%20(AnimateDiff)

    Hope all this helps!

  • @Paperclown · 6 months ago

    @PromptingPixels Thanks so much! I was trying to learn what everything actually does, and after your video I learned there's a repository where people share their workflows. In that respect, your video was essential in helping me understand what everything is.

  • @nirsarkar · 4 months ago

    Will this work on Mac?

  • @PromptingPixels · 4 months ago

    Yes, but very, very slowly. To give you an idea: produce one image and check how long it takes. Then multiply that time by the number of frames required, and divide by 60 to get minutes. For example, say you want footage at 15 frames per second and it takes 15 seconds per image generation. Then it would take 3.75 minutes (225 seconds / 60) to render just one second of final footage.
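
    To make that arithmetic concrete, here is a minimal Python sketch of the same estimate (the function name and parameters are illustrative, not something from the video or ComfyUI):

    # Back-of-the-envelope render-time estimate for txt2vid on slow hardware.
    def estimated_render_minutes(seconds_per_frame, fps, clip_seconds):
        total_frames = fps * clip_seconds                # frames needed for the clip
        total_seconds = total_frames * seconds_per_frame # wall-clock generation time
        return total_seconds / 60                        # convert seconds to minutes

    # The example from the reply: 15 s per frame, 15 fps, 1 s of footage.
    print(estimated_render_minutes(15, 15, 1))           # -> 3.75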
