ComfyUI AnimateDiff Prompt Travel: ControlNets and Video to Video!!!

This is a quick introduction to @Inner-Reflections-AI's workflow for AnimateDiff-powered video-to-video with ControlNet.
You can download the ControlNet models here:
huggingface.co/lllyasviel/Con...
The workflow file can be downloaded from here:
drive.google.com/file/d/14F6a...
The model (checkpoint) used for this tutorial series is here:
civitai.com/models/134442/hel...
The VAE used can be downloaded from:
huggingface.co/AIARTCHAN/aich...
The motion_modules and motion_loras can be found in the original AnimateDiff repo, which offers several sources to download them from:
github.com/guoyww/AnimateDiff
Or here's a quick link to civitai:
civitai.com/models/108836/ani...
civitai.com/models/153022
Socials:
x.com/c0nsumption_
/ consumeem

Comments: 182

  • @yoyo2k149 · 8 months ago

    Tested on an AMD RX 6800 XT (Ubuntu 22.04 + ROCm 5.7). It works flawlessly and stays close to 12 GB of VRAM. Really helpful, thanks a lot.

  • @c0nsumption · 8 months ago

    Awesome. Will pin this for others. Mind giving a short guide on the r/animatediff subreddit? :)

  • @miaoa7414 · 8 months ago

    @@c0nsumption When loading the graph, the following node types were not found: BatchPromptSchedule. Nodes that have failed to load will show as red on the graph. 😭
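
    Note for anyone hitting this: BatchPromptSchedule is provided by the FizzNodes custom-node pack, which this workflow depends on. The easiest fix is ComfyUI Manager's "Install Missing Custom Nodes"; a manual sketch (assuming a standard ComfyUI folder layout):

        cd ComfyUI/custom_nodes
        git clone https://github.com/FizzleDorf/ComfyUI_FizzNodes
        pip install -r ComfyUI_FizzNodes/requirements.txt
        # restart ComfyUI so the new nodes register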

  • @yoyo2k149 · 8 months ago

    @@c0nsumption I will try to post a small guide before the end of the weekend. :)

  • @user-sk2mk2wp9e · 8 months ago

    Hardcore! Thank you very much for sharing it all for free! It's a pity that I found this right before going to bed and have to wait until tomorrow to practice.

  • @wholeness · 8 months ago

    Bro we on this journey together. Keep goin!

  • @Inner-Reflections-AI · 8 months ago

    Nicely Done!

  • @c0nsumption · 8 months ago

    Everyone, this is the original creator of this workflow. Amazing artist/creative. Please follow them! 🙏🏽

  • @colaaluk · 4 months ago

    great video

  • @58gpr · 8 months ago

    I was waiting for this one! Thanks mate & keep 'em coming :)

  • @c0nsumption · 8 months ago

    No worries 😉 Figured it’d be a quick way to introduce ControlNets but still give a lot of y’all what you’re waiting for 🧍🏽‍♂️

  • @ronnykhalil · 8 months ago

    yea baby (edit: this is straight up the most valuable 10 minutes I've watched on KZread in a while, exactly the signal I needed amidst all the noise regarding Comfy and diff. You explained it really well and clearly. Thank ye kindly!)

  • @Andro-Meta · 8 months ago

    Converting the pre-text input, and learning how to do that, completely blew my mind and opened doors to understanding what I could do. Thank you.

  • @calvinherbst304 · 5 months ago

    Thank you. Excellent tutorial :) Keep them coming, subbed!

  • @aminshallwani9369 · 8 months ago

    Thanks for the video, very helpful. Well done😍

  • @JaredVBrown · 4 months ago

    Very helpful and approachable tutorial. Thanks!

  • @yuradanilov5244 · 8 months ago

    thanks for the tutorial, man! 🙌

  • @SkyOrtizCreative · 8 months ago

    Love your vids bro!!! I know it takes a lot of work to make these, really appreciate your efforts. 🙌

  • @c0nsumption · 8 months ago

    Thanks for understanding 🧍🏽‍♂️ Legit takes so much time 😣 lol

  • @victorhansson3410 · 8 months ago

    Damn, glad I saw your channel recommended on Reddit. Fantastic video - calm, concise, and well made!

  • @c0nsumption · 8 months ago

    Thanks dude 🙏🏽 Happy to help elevate and educate the community

  • @Copperpot5 · 8 months ago

    Nice job on these of late. In general I have a hard time watching video tutorials with people on screen talking - but you're hitting all the right notes on these so far. Haven't wanted to bother with Comfy, but have definitely admired the generations some have been sharing. Thanks for making well-timed, friendly tutorials. Stick with it and you'll definitely build a good, active channel. Thanks!

  • @c0nsumption · 8 months ago

    Thanks for the positivity hey 👏🏽

  • @banzai316 · 8 months ago

    Good work! Thanks! 👏

  • @francaleu7777 · 8 months ago

    Perfect tutorial! Thanks a lot!

  • @mikberg1824 · 6 months ago

    Really good tutorial, thank you!

  • @edkenndy · 8 months ago

    Awesome! Thanks for sharing the resources.

  • @c0nsumption · 8 months ago

    Trying to get everyone up to speed on all the amazing workflows available 🙏🏽

  • @keagoaki · 7 months ago

    Straight to the point and clear, nice to follow. No music is perfect - I can choose my own background music if needed. Thanks a lot, you just made me a fortune haha

  • @haydnmann · 8 months ago

    this is sick, nice work dude. Subbed.

  • @LearningVikas · 3 months ago

    Thanks, it finally worked ❤❤

  • @digidope · 8 months ago

    Thanks! Straight to the point!

  • @c0nsumption · 8 months ago

    Yes indeed. Hard to keep it that way with such complex topics but I’m trying!

  • @TheJPinder · 5 months ago

    good stuff

  • @samshan9321 · 7 months ago

    Really helpful tutorial, thanks.

  • @UON · 8 months ago

    Exciting! I hope this helps me figure out how to do a much longer vid2vid without running out of VRAM.

  • @c0nsumption · 8 months ago

    I mention a note on VRAM: you can lower the image size to a smaller resolution and then upscale later. How much VRAM do you have? Have you considered using RunPod? They have a preset ComfyUI template.

  • @leretah · 8 months ago

    Awesome, thank you. I really appreciate it.

  • @c0nsumption · 8 months ago

    No worries. More on the way. Just super busy with work sorry 🙏🏽

  • @DefinitelyNotMike · 7 months ago

    This is so fucking cool and it worked with no issues! Thanks!

  • @Ekopop · 7 months ago

    That, my friend, is a very nice video. Thanks a lot, I'll follow your stuff.

  • @danielvgl · 4 months ago

    Great!!!

  • @Spajra-music · 8 months ago

    crushing bro

  • @ekke7995 · 7 months ago

    this is it!!

  • @Elliryk_ · 8 months ago

    Great Video my friend!! Elliryk 😉

  • @c0nsumption · 8 months ago

    Ahhhhhhhhhh shiiiii 🧍🏽‍♂️ Enjoy the video my guy. Excited to see what you cook up 🍳🥘⏲️

  • @victorvaltchev42 · 8 months ago

    Top!

  • @aoi_andorid · 8 months ago

    This video will help many creators. Please set up a place where we can buy you a coffee.

  • @c0nsumption · 8 months ago

    🥹 Will set up soon. I love y'all. Thanks for all the love 🙏🏽 I set up a Patreon, will be sharing soon. Also considering setting up subscriptions on X.

  • @MrPlasmo · 8 months ago

    Very helpful as always, thanks. Is there a way to make a "preview" video frame node so you can view the progress of the render before it completes? That way you could cancel the render if it looks terrible or isn't what you want, without wasting render time. This was one of the nice things about Deforum that saved me a lot of time.

  • @lovisodin8658 · 8 months ago

    Just use a fixed seed, and in the "Load Video (Upload)" node change "select_every_nth" to, for example, 20 if you want a 6-image preview.
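
    For example, a 120-frame clip with select_every_nth = 20 runs only 120 / 20 = 6 frames through the pipeline, which makes for a cheap preview; with a fixed seed, settings that look good in the preview carry over to the full render.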

  • @leandrogoethals6599 · 3 months ago

    Nice tutorial. Have you found a way to upload a 3-minute video in one piece into the VHS Load Video node?

  • @nelson5298 · 7 months ago

    Thanks for sharing, I really learned a lot. Quick question... how do I change the model's clothing and keep the new clothing consistent? I type in "sweater", but some frames change the sweater into a tank top...

  • @ucyuzaltms9324 · 8 months ago

    I love the output.

  • @kaleabspica8437 · 3 months ago

    What do I have to do if I want to change the look of it? Since yours is closer to anime style, I want to make it realism or sci-fi, etc.

  • @BrandonFoy · 8 months ago

    Whoa! This is awesome, thanks for sharing your workflow. I haven't used ComfyUI - I've just been in A1111. Can you recommend tutorials for Comfy? Or any you've made that would be a solid start for learning this method? Thank you!!

  • @c0nsumption · 8 months ago

    This one was by me and is a great way to get started. It's part of the playlist this current video is in: kzread.info/dash/bejne/hXud2NudkaXQYto.htmlsi=MDwuANfnq6W_Wzul Also, this actually isn't my workflow; it's the work of @Inner-Reflections-AI here on KZread! I did make some modifications though, to make things a bit easier :)

  • @BrandonFoy · 8 months ago

    @@c0nsumption oh man, thank you so much!!!!! 🙌🏾🙌🏾🙌🏾

  • @BrandonFoy · 8 months ago

    @@c0nsumption yeah, this is exactly what I’m looking for!! Awesome thanks again!

  • @zweiche · 8 months ago

    I really appreciate this guide; it will help me a lot! However, I have one problem maybe you can help me with. I have done everything right: I see the frames from the video and I see the ControlNet output with lines. However, after the KSampler my GIF and image outputs are all black. What do you think my problem could be?

  • @alishkaBey · 6 months ago

    Great tutorial bro! Could you make a video about morphing videos with IPAdapters?

  • @OffTheHorizon · 3 months ago

    I'm using KSampler, but it takes 9 minutes for 1 of the 25 samples, which is obviously extremely slow. I'm working on a MacBook M1 Max; do you have any tips on making it quicker?

  • @wagmi614 · 8 months ago

    Cool vid, you might be the best AnimateDiff channel now. What's coming next?

  • @c0nsumption · 8 months ago

    IPAdapter, ControlNet Keyframes, Frame Interpolation, Refiner and Upscaling, amongst others! Also Hotshot-XL tutorials. Thanks btw, I appreciate ya.

  • @wagmi614 · 8 months ago

    @@c0nsumption 3- or 5-image interpolation, as in with start and end frames, please.

  • @victorvaltchev42 · 8 months ago

    What was the size of the video in the end? You showed 1024×576 in the beginning - is that the resolution at the end as well? Also, how do you load other formats of video? I only have webp and gif.

  • @c0nsumption · 8 months ago

    Yes, that's what dictates the output resolution. Have upscaling coming up soon, but I have two jobs so very limited time!

  • @victorvaltchev42 · 8 months ago

    @@c0nsumption Great content man! Thanks for the answer! I was a long-time Automatic1111 user, but after the past weeks with the advances of AnimateDiff in ComfyUI I'm definitely switching!

  • @mulleralmeida4844 · 2 months ago

    Starting to learn ComfyUI: when I click Queue Prompt, my computer takes a long time to process the KSampler node. I'm using a MacBook Pro 14 M2 Pro; is it normal for it to take so long?

  • @RenoRivsan · 4 months ago

    Can you show how to remove AnimateDiff from this workflow? I don't want my video to change style.

  • @GamingDaveUK · 8 months ago

    Nice, may have to try this after work. Is it the same process if you want to use more up-to-date models? (Can't go back to 1.5 after using SDXL lol)

  • @c0nsumption · 8 months ago

    I’ve tested the Hotshot-XL workflow. Currently SD1.5 is doing a lot better. But InnerReflections is creating some magnificent pieces using it and is supposedly about to share his workflow 🧍🏽‍♂️

  • @yuxiang3147 · 8 months ago

    Awesome video! Do you know how you can combine OpenPose, depth, and lineart together to improve the results?

  • @c0nsumption · 8 months ago

    Yeah, I’ll make a follow-up video for multiple ControlNets.

  • @yuxiang3147 · 8 months ago

    @@c0nsumption Nice! Looking forward to it, you are doing awesome stuff man keep it up!

  • @c0nsumption · 8 months ago

    @@yuxiang3147 thanks for the positivity 🙏🏽

  • @l1far · 8 months ago

    I use RunDiffusion and can't load your workflow JSON :( Can you upload the pic too? Maybe that can fix it.

  • @JimDiMeo · 8 months ago

    Hey man - love the tutorials!! Where do you add different video output formats? I only have gif and webp - thx

  • @c0nsumption · 8 months ago

    🤔 There should be more. Search for the VHS Video Combine node in your ComfyUI and try that.

  • @JimDiMeo · 8 months ago

    @@c0nsumption Yes! Found that last night. Thx for the reply though.

  • @SuperDao · 8 months ago

    Can you make a tutorial on how to upscale the render?

  • @risewithgrace · 8 months ago

    Thanks! I downloaded this workflow, but the output only has formats for image/gif or image/webp, even though I'm inputting video. There is no video/h264 setting in the dropdown. Any idea how I can add that?

  • @c0nsumption · 8 months ago

    Replace the output node with the "VHS Video Combine" node. You can double-click in the interface and search for it.

  • @Csarmedia · 8 months ago

    The EbSynth of ComfyUI.

  • @c0nsumption · 8 months ago

    Honestly better than EbSynth, because it works on every frame. The only reason the look changes over time here is prompt travel; otherwise the first scene would have stayed 👍🏽
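
    For context: the travel is defined in the BatchPromptSchedule node's text widget, which maps frame numbers to prompts and interpolates between keyframes. A minimal sketch (frame numbers and prompts are illustrative, assuming the FizzNodes schedule syntax):

        "0"  : "lush green forest, sunlight through leaves",
        "48" : "autumn forest, golden hour",
        "96" : "wintery storm, heavy snow"

    Shared style terms usually go in the pre_text input so they apply to every keyframe.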

  • @norvsta · 8 months ago

    @@c0nsumption So cool. I faffed around for a couple of days trying to install Win 10 just to run EbSynth; now I don't have to bother. Thanks for the tut 🙌

  • @jiananlin · 8 months ago

    How do you apply more than one ControlNet?

  • @Csarmedia · 8 months ago

    The workflow file is giving me an error: TypeError: Cannot read properties of undefined (reading '0')

  • @bowaic9467 · 9 days ago

    I don't know how to fix this problem: 'ControlNet' object has no attribute 'latent_format'

  • @vtchiew5937 · 8 months ago

    Thanks! Got it working after a few tries, but I realize the prompts are not really working (at least I don't see them "travelling"); it seems the whole prompt is taken into consideration instead. Do you have similar issues? I see that the default workflow has 4 prompts, and your generated video at least traveled from lush green to wintery storm, whereas mine always started with the wintery storm and remained like that throughout the video.

  • @c0nsumption · 8 months ago

    Depends on various factors: keyframe distance, seed, CFG, sampler, inputs, etc. That's the artistic process my friend - fiddle with it all. This was a quick output to get everyone involved. I'm just really busy testing all the new tech, working, and trying to formulate constructive tutorials for everyone to tag along.

  • @vtchiew5937 · 8 months ago

    @@c0nsumption thanks for the reply bro, been fiddling with it since then, great tutorial~

  • @itsjaysenofficial · 8 months ago

    Will it work on a MacBook Pro M1 with 16 GB of RAM?

  • @AI-nsanity · 8 months ago

    I don't have the option for mp4 output; do you have any idea why?

  • @c0nsumption · 8 months ago

    Change the output node to VHS Video Combine. I believe that solves it.

  • @Spajra-music · 8 months ago

    Followed this all the way through, and at the end my video output was just black. Any suggestions?

  • @El__ANTI · 8 months ago

    Error occurred when executing CheckpointLoaderSimpleWithNoiseSelect ...

  • @voytakaleta · 8 months ago

    Awesome! I have one question: how can I install/connect ffmpeg to ComfyUI? I get this error: "[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled". Thank you very much!

  • @JMcGrath · 8 months ago

    I have the same issue.

  • @voytakaleta · 8 months ago

    @@JMcGrath kzread.info/dash/bejne/o4eg2thvaLvWm9o.html
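
    Note: that warning usually means ffmpeg isn't on the PATH of the environment that launches ComfyUI. A sketch of common fixes (commands depend on your OS):

        # Debian/Ubuntu
        sudo apt install ffmpeg
        # or, in the conda environment that runs ComfyUI
        conda install -c conda-forge ffmpeg
        # verify it resolves, then restart ComfyUI
        ffmpeg -version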

  • @luclaura1308 · 8 months ago

    How would you go about adding a LoRA (not a motion one) to this workflow? I tried adding a Load LoRA node after Load Checkpoint, but I'm getting black images.

  • @c0nsumption · 8 months ago

    This tutorial: kzread.info/dash/bejne/d6Ch26usprDInKg.htmlsi=Kk_dWXxGELq-Kemy

  • @luclaura1308 · 8 months ago

    @@c0nsumption Thanks!

  • @Beedji · 8 months ago

    Hey man, great tutorial! I have an error message that pops up, however. It says "Control type ControlNet may not support required features for sliding context window; use Control objects from Kosinkadink/Advanced-ControlNet nodes.", which is weird since I have Kosinkadink's nodes installed. Have you experienced this error as well?

  • @Beedji · 8 months ago

    OK, I think I've found the problem. I wasn't using the same VAE as you (I was using an SD1.5 pruned one), and now that I've installed the same one as you (Berrysmix) it seems to work. No idea what difference this makes, but we'll see! haha

  • @aaronv2photography · 8 months ago

    You made a video (I think) about unlimited-length AnimateDiff animations. How would we incorporate that into this workflow so we can go past the 120-frame limit?

  • @c0nsumption · 8 months ago

    I would imagine you just make sure you feed in more than 120 frames and increase the max frames on the "BatchPromptSchedule" node past 120. If you don't include enough keyframes, I'm assuming the generation will just continue the prompts from the point of the missed frames, but who knows 🤷🏽‍♂️ Test it out; it will probably make some cool stuff.
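
    A concrete sketch (frame counts illustrative): for a 240-frame input, set max_frames on BatchPromptSchedule to 240 and spread keyframes across the whole range, e.g. "0", "80", "160", "239". AnimateDiff-Evolved's sliding context window then samples the sequence in overlapping chunks rather than all at once.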

  • @DimiArt · 3 months ago

    Weird, I'm getting preview images from the upscaler node and the lineart images from the ControlNet, but I'm not getting any actual output results.

  • @DimiArt · 3 months ago

    OK, I realized my checkpoint and my VAE were set to the ones in the downloaded workflow, and I had to set them to the ones I actually had downloaded instead. My bad.

  • @kaleabspica8437 · 3 months ago

    Do you know how to change the look of it?

  • @DimiArt · 3 months ago

    @@kaleabspica8437 Change the look of what?

  • @pauliuscreative · 8 months ago

    My original input video was 7 seconds, and the output video I got is slower, at 12 seconds. Do you know why?

  • @c0nsumption · 8 months ago

    Check your output frame rate.

  • @user-hb6dd9iu9g · 8 months ago

    Thank you for this tutorial! I'm using the Colab version and I get totally black result pictures and video; could you give me a hint how I can fix it? Thanks. But most of the time I get this issue: "SD model must be either SD1.5-based for AnimateDiff or SDXL-based for HotShotXL". Need help... =\

  • @c0nsumption · 8 months ago

    Are you using an SDXL model or SD1.5? Other models don't work for AnimateDiff/Hotshot. Can you let me know what model you are using and I'll do some research?

  • @hatakeventus · 8 months ago

    Does this work with an AMD RX 6700?

  • @antonradacic2374 · 8 months ago

    I've set everything up, but for some reason I get an error at the KSampler step: "Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'"

  • @c0nsumption · 8 months ago

    DM me on Twitter the actual error message and a screenshot of the nodes; it's too vague to answer. Either that or post on the r/animatediff subreddit.

  • @speaktruthtopower3222 · 8 months ago

    Is there a way to point to different directories so we don't have to re-download models, LoRAs, and other files?

  • @c0nsumption · 8 months ago

    I use ComfyUI as my base for other repos, so I'm not sure. But try here: github.com/comfyanonymous/ComfyUI/discussions/72

  • @speaktruthtopower3222 · 8 months ago

    @@c0nsumption I figured it out. Just change the root directory and point it to your SD install in the "extra_model_paths.yaml" file.
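
    For anyone searching later: ComfyUI ships an extra_model_paths.yaml.example in its root folder; rename it to extra_model_paths.yaml and point base_path at your existing install. A sketch (paths illustrative, assuming an Automatic1111-style layout):

        a111:
            base_path: D:/stable-diffusion-webui/
            checkpoints: models/Stable-diffusion
            vae: models/VAE
            loras: models/Lora
            controlnet: models/ControlNet

    Restart ComfyUI afterwards so the extra search paths are picked up.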

  • @philspitlerSF · 3 months ago

    I don't see a link to download the workflow

  • @leretah · 8 months ago

    Yesterday everything was OK, and today I have this error:

        Error occurred when executing KSampler: unsupported operand type(s) for //: 'int' and 'NoneType'
        File "C:\Users\lenin\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
          output_data, output_ui = get_output_data(obj, input_data_all)

    Please help me; learning this is really frustrating at times, but I love it!!!

  • @c0nsumption · 8 months ago

    Sounds like you have the wrong data in your KSampler somewhere. Try reloading the workflow from scratch. Consider posting your issue in the r/animatediff subreddit.

  • @looneyideas · 8 months ago

    Can you use RunPod or does it have to be local?

  • @c0nsumption · 8 months ago

    RunPod has a ComfyUI template.

  • @saiya3725 · 8 months ago

    Hey, when I drag from the pre_text input I'm not getting the ttN text node option. What am I missing?

  • @saiya3725 · 8 months ago

    I installed the tinyterra nodes and got it.

  • @c0nsumption · 8 months ago

    @@saiya3725 👍🏽 Good job figuring it out

  • @frustasistumbleguys4900 · 8 months ago

    Hey, why did I get noise with artifacts on my output? I followed you.

  • @c0nsumption · 8 months ago

    DM me over X or Instagram. Send me an example image.

  • @VJSharpeyes · 8 months ago

    The "Realistic Lineart" node is always missing when loading your workflow file. Any tips on what I could have missed? I get a warning about "LineArtPreprocessor" missing, and in the install manager I only see Fannovel16's, which is already installed.

  • @VJSharpeyes · 8 months ago

    Oh, hang on. There is an abandoned repo that looks like it contains it.

  • @jorgecucalonf · 8 months ago

    Same issue here. Console gives me this: (IMPORT FAILED): C:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux

  • @jorgecucalonf · 8 months ago

    Managed to get it working by reverting the comfyui_controlnet_aux folder to an older commit. Otherwise we must wait for the owner of the repository to update it with a fix.

  • @jorgecucalonf · 8 months ago

    That was quick. It's fixed now :D

  • @c0nsumption · 8 months ago

    Good job getting it working. If I have some spare time today or this week I’ll try to research.

  • @ehsankholghi · 5 months ago

    I upgraded to a 3090 Ti with 24 GB. How much CPU RAM do I need to do video-to-video SD? I have 32 GB.

  • @c0nsumption · 5 months ago

    Should be fine with that. Don't upgrade your RAM till you hit your bottleneck. If you're doing really, really long sequences it'll bottleneck, but even then you can just split them up into smaller chunks.

  • @ehsankholghi · 5 months ago

    @@c0nsumption Thanks so much. Is it possible to make a video with like 1000 frames (1000 PNGs) with your workflow? I got this error after 1.5 hours of render time: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
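
    The number in that error is straightforward arithmetic: one float32 array holding all 976 frames at 1024×576 with 3 channels is 976 × 1024 × 576 × 3 × 4 bytes ≈ 6.43 GiB, requested on top of everything already resident in system RAM. Splitting the render into smaller chunks, as suggested above, keeps each allocation well below that.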

  • @nilshonegger · 8 months ago

    Thank you so much for sharing your workflow! Is there a way to bypass the VAE nodes in order to use it with models that don't require a separate VAE (such as Dreamshaper, EpicRealism)?

  • @c0nsumption · 8 months ago

    Plug the VAE output from your checkpoint loader node into any slot that requires a VAE.

  • @bowaic9467 · 12 days ago

    Do you know what's happening with this error?

        Error occurred when executing CheckpointLoaderSimpleWithNoiseSelect: 'model.diffusion_model.input_blocks.0.0.weight'
        File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
          output_data, output_ui = get_output_data(obj, input_data_all)
        File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
          return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
        File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
          results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
        File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_extras.py", line 52, in load_checkpoint
          out = load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
        File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
          model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
        File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
          unet_config = detect_unet_config(state_dict, unet_key_prefix)
        File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
          model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]

  • @arkelss4 · 8 months ago

    Can this also work with Automatic1111?

  • @c0nsumption · 8 months ago

    No idea, but most likely not. Best to start learning the newer tools and growing out of Auto1111. Developer experience isn't the greatest on Auto, so most development on state-of-the-art tech is happening on ComfyUI and other repos.

  • @terencechen5857 · 8 months ago

    Have you tried this workflow + IPAdapter? It increases memory use significantly.

  • @c0nsumption · 8 months ago

    Yeah, it’ll pull around 17 GB of VRAM. I have a RunPod tutorial coming for those lacking hardware. Took a lot of debugging and studying, but I've ironed out the bugs and got it figured out. Then I can drop all the remaining workflows and tutorials 🙏🏽 This way, if anyone's lacking compute I can redirect them to RunPod, where they pay as they go and for good cards, rather than Google Colab, which imo really isn't worth it.

  • @terencechen5857 · 8 months ago

    @@c0nsumption It's more than 17 GB in my case, depending on how many frames are generated. Looking forward to seeing your update, thanks!

  • @terencechen5857 · 8 months ago

    @@c0nsumption I did some updates (ComfyUI, custom nodes like IPAdapter, etc.) and VRAM usage is down to 11 GB at a resolution of 576×1024 😂

  • @lanvinpierre · 8 months ago

    Can you do CLI-style prompts in ComfyUI? Great tutorial btw!

  • @c0nsumption · 8 months ago

    Sorry, confused about what you’re asking. Are you asking if you can do prompt travel?

  • @lanvinpierre · 8 months ago

    @@c0nsumption The one where you used 3 different images to help with the animation ("frame 0 0001, frame 8 0002"). I'm not sure what it's called, but can that be done through ComfyUI, or should it be done like in your other tutorial?

  • @aoi_andorid · 7 months ago

    Is anyone using AI to generate workflows for ComfyUI? Please let me know if you know of any useful links.

  • @c0nsumption · 7 months ago

    I don’t understand the question. ComfyUI is literally AI-powered software.

  • @aoi_andorid · 7 months ago

    @@c0nsumption I thought that if GPT could recognize and learn from a large number of JSON files and images showing workflows, it would be possible to generate workflows in natural language! (I used DeepL for the translation, so I apologize if I was rude in my wording ;)

  • @AIPixelFusion · 8 months ago

    How are you only using 11 GB of VRAM? Mine goes above 24 GB and has to use non-GPU RAM...

  • @c0nsumption · 8 months ago

    How much VRAM do you have? How many frames are you using? What is the size of your frames? What size are you upscaling them to? How long is your generation? What do you have running in the background on your computer? Going above 24 GB of VRAM has to be for a reason.

  • @AIPixelFusion · 8 months ago

    @@c0nsumption I have: 24 GB VRAM; 30 frames; video frame size 720×1280 (should I be lowering it to 576×1024?); upscaler values 576×1024 (are these ignored if smaller than the video frame size?)

  • @c0nsumption · 8 months ago

    @@AIPixelFusion 🤔 What the hell. Can you send me a photo of your node network over X? I don't understand how you're using that much VRAM if your upscaler is at 576×1024. How long is your actual input video / how many frames? Did you make sure to cap them like I did (where I limited the number of frames it would process)?

  • @Syzygyyy · 8 months ago

    @@c0nsumption Same issue.
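
    For a rough sense of scale: 720×1280 is 921,600 pixels per frame versus 589,824 at 576×1024, about 1.56× more, and sampling memory grows roughly with pixels per frame times the frames in the context window, so oversized input frames are a common reason a 24 GB card spills into system RAM.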

  • @2amto3am · 8 months ago

    Can we do image to image?

  • @c0nsumption · 8 months ago

    This is image-to-image; it's just converting the video for you. If you want, just use the node from the beginning of the video. Am I reading your question correctly? 🤔

  • @Oscilatii · 8 months ago

    Hello! I used your tutorial and workflow, but dunno why, my video is crap :) The background is modified and is cool, but my face still looks like the original video with some modified colors. If I want to make my face a robot, for example, it just won't work... With OpenPose instead of lineart I got great results, but the mouth movement is missing when I speak. If I use the same prompt in img2img, the results are amazing.

  • @c0nsumption · 8 months ago

    You can adjust the ControlNet weight, try different ControlNets, or try mixing them. I'll drop a multi-ControlNet video soon.

  • @Oscilatii · 8 months ago

    @@c0nsumption Thanks for your answer. One of my problems was that I was using a realistic model :) Now everything is OK. Thanks again for this tutorial; it really helped me.

  • @koalanation · 8 months ago

    Great video! Just so you know: the models on Hugging Face are free to download; no need to open any account.

  • @c0nsumption · 8 months ago

    Some require sign-in, especially upon initial release. It's all down to what the developers dictate when posting. Like when SDXL dropped, you had to have a Hugging Face account to download it.

  • @dnvman · 8 months ago

    Hey, nice video 🫶 Where do I get the ttN text node?

  • @c0nsumption · 8 months ago

    This video shows the process: kzread.info/dash/bejne/d6Ch26usprDInKg.htmlsi=ej88H8_35b1N2cb9

  • @wagmi614 · 8 months ago

    Bro, it's been a week. Where are some new vids? Eagerly waiting.

  • @c0nsumption · 8 months ago

    lol 😂 Been working on a RunPod setup video for people who don't have compute power. Was pretty difficult to figure it all out, but I got it. Posting in the next 30 minutes to an hour. Workflow vids are coming now that I got that out of the way 🧍🏽‍♂️

  • @wagmi614 · 8 months ago

    @@c0nsumption Hope new workflows don't always involve RunPod from now on; would love to keep getting them working locally.

  • @skycladsquirrel · 8 months ago

    Awesome tutorial. I'm using the ControlNet set for the next one. Here's my latest video: kzread.info/dash/bejne/qqWGxbxrgqq5ZtY.html

  • @nft_bilder_art2098 · 8 months ago

    Please tell me why I get this error when I launch ComfyUI:

        D:\comfuUI\ComfyUI>python main.py
        ** ComfyUI start up time: 2023-10-17 05:30:32.177484
        Prestartup times for custom nodes:
        0.0 seconds: D:\comfuUI\ComfyUI\custom_nodes\ComfyUI-Manager
        Traceback (most recent call last):
          File "D:\comfuUI\ComfyUI\main.py", line 69, in <module>
            import comfy.utils
          File "D:\comfuUI\ComfyUI\comfy\utils.py", line 1, in <module>
            import torch
        ModuleNotFoundError: No module named 'torch'

  • @nft_bilder_art2098 · 8 months ago

    Before this, there was no such error at startup.

  • @nft_bilder_art2098 · 8 months ago

    Maybe I'm launching it wrong somehow? Thank you in advance for your help!

  • @nft_bilder_art2098 · 8 months ago

    All okay - I watched your last video and figured it out, thank you very much.

  • @c0nsumption · 8 months ago

    Love that you internally said "I'm figuring this out, dammit!" lol. Good job 👍🏽

  • @benjaminbardouparis · 8 months ago

    Wow. Huge thanks for this! Is it possible to use an SDXL model for generating a painting style? I'd like to use this one, and I don't know if it's possible with your workflow. Btw, many thanks!!

  • @c0nsumption · 8 months ago

    You can use Hotshot-XL: civitai.com/articles/2601/guide-comfyui-sdxl-animation-guide-using-hotshot-xl-an-inner-reflections-guide

  • @benjaminbardouparis · 8 months ago

    Thanks!

  • @eraniopetruska5701 · 8 months ago

    @@benjaminbardouparis Hi! Did you manage to get it running?