Mastering ComfyUI: Getting Started with Video to Video!

How-to & Style

In this video I'll take you into the captivating world of video transformation using ComfyUI's new custom nodes. Discover the secrets to creating stunning videos that push the boundaries of creativity and imagination. I hope to inspire you to unleash your video-editing potential!
** Links from the Video Tutorial **
- ComfyUI-N-Suite: github.com/Nuked88/ComfyUI-N-...
- ComfyUI's ControlNet Auxiliary Preprocessors: github.com/Fannovel16/comfyui...
- File Converter: file-converter.org/
- RIFE repo used: github.com/hzwer/Practical-RI-...
- ControlNet v1.1 Models: huggingface.co/lllyasviel/Con...
- revAnimated Model: civitai.com/models/7371/rev-a...
- Workflow** : www.patreon.com/posts/tutoria...
** Let me be EXTREMELY clear: I don't want you to feel obligated to join my Patreon just to access this workflow. My Patreon is there for those who genuinely want to support my work. If you're interested in the workflow, feel free to watch the video - it's not that long, I promise! 🙏
❤️❤️❤️Support Links❤️❤️❤️
Patreon: / dreamingaichannel
Buy Me a Coffee ☕: ko-fi.com/C0C0AJECJ

Comments: 228

  • @SheRoMan
    @SheRoMan9 ай бұрын

    Your dedication and hard work are greatly appreciated.

  • @zentrans
    @zentrans9 ай бұрын

    Excellent , I'm glad someone is having fun programming in exact functionalities they want

  • @Asimovmediaglobal
    @Asimovmediaglobal8 ай бұрын

    Excellent explanation. Finally someone who actually shows what is happening.

  • @UmutVedat_UV
    @UmutVedat_UV2 ай бұрын

    This is exactly what I'd been looking for for a while. Thank you!

  • @rons96
    @rons962 ай бұрын

    I was looking for a video and frame loader that is sufficiently optimized in terms of resource consumption, and here it is. Thank you very much!

  • @88.AmpLyte
    @88.AmpLyteАй бұрын

    Wow, thank you, brother. Between the time you took to create simple custom versions, your explanations, and the way you broke down individual variables, I was able to gain a real understanding of these components as well as keep up with the new insights as the video progressed. 🧠💪👏

  • @PravinDahal
    @PravinDahal5 ай бұрын

    What an awesome piece of work and this fantastic explanation to go with it. I love it. Thank you so much!

  • @wholeness
    @wholeness9 ай бұрын

    The beauty is once it’s perfected you can just drop us the file! 😂

  • @RandPrint
    @RandPrint6 ай бұрын

    Thanks a bunch! Outputting to frames and then loading the frames really helped me get around the memory limit with a batch size of 0. I could only wish for matching save and load latent-frame nodes if you find yourself plunking around in this project again.

  • @enigmatic_e
    @enigmatic_e8 ай бұрын

    Very helpful! Thank you so much for this.

  • @sebastianc09
    @sebastianc098 ай бұрын

    your videos are excellent! your explanations are so clear as well!

  • @iresolvers
    @iresolvers8 ай бұрын

    Thank you finally got it working! great video!

  • @ssorce3839
    @ssorce38396 ай бұрын

    Thank you, this was just what I was looking for!

  • @blademarketing
    @blademarketing4 ай бұрын

    Very lovely, you are brilliant, mate

  • @cameronlee9327
    @cameronlee93272 ай бұрын

    Keep going, upload more high-quality videos like this!

  • @deeceehawk
    @deeceehawk5 ай бұрын

    Thank you thank you ever so much for creating such useful custom notes! NODES, sorry audio transcription… Thank you so much! There is no way I could have quickly handled this process without! Thank you :-)

  • @ryancarper595
    @ryancarper5954 ай бұрын

    You would be a good narrator of novels - you have the perfect voice for it. Also thanks for your work with this, brilliant!

  • @nikgerhard502

    @nikgerhard502

    3 ай бұрын

    Tts

  • @rubb3rmind
    @rubb3rmind9 ай бұрын

    Thanks for this!

  • @davegravel6828
    @davegravel68285 ай бұрын

    Love it!

  • @A5h3n.
    @A5h3n.6 ай бұрын

    Super stellar!! I've been testing the limits with a GTX 1070 and so far I've been able to run AnimateDiff with a 60-frame length (txt2vid & img2vid) and 3 LoRAs. Haven't tried any XL models because I don't have enough VRAM, but hopefully I can render through video with these nodes.

  • @Burnholiday
    @Burnholiday6 ай бұрын

    Super cool bro well done

  • @oraocean
    @oraocean9 ай бұрын

    Thanks! I enjoy your tutorial!

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    You are welcome!

  • @hphector6
    @hphector69 ай бұрын

    Subbed, you seem to be on top of this stuff.

  • @SquirrelTheorist
    @SquirrelTheorist7 ай бұрын

    This is incredible, great work! Excellent tutorial, I am definitely going to try this out instead of batch img2img with the old auto1111 webui, although I will miss it.

  • @KDawg5000
    @KDawg50009 ай бұрын

    This is great. Can't wait to try it out. Heck, I was looking for just a simple batch process function for ComfyUI, to process a bunch of video frames I created, but here you even included steps to convert the video frames to images. I wonder if optical-flow models like RAFT will make it into ComfyUI (maybe they already have)?

  • @puoiripetere
    @puoiripetere9 ай бұрын

    Great video! Potentially thanks to the nodes it is possible to exploit roop for a face replacement...the potential is endless, I imagine also dw pose, lineart etc...Congratulations :)

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Thanks!

  • @jonathas_arquiteto
    @jonathas_arquiteto9 ай бұрын

    great! I was talkin about this today! :D

  • @W.A.-Linux
    @W.A.-Linux8 ай бұрын

    Nice, Thanks :)

  • @AdriIdzwanMansor
    @AdriIdzwanMansor6 ай бұрын

    don't forget to use stability matrix! and comfy manager as well!

  • @Satscape
    @Satscape9 ай бұрын

    Thanks for making those nodes (and the video). It looks like the vid2vid nodes I had installed (but broken) have been abandoned. So I can remove those and install yours!

  • @annonymousperson5376
    @annonymousperson53767 ай бұрын

    Great video! I am new to this so just wanted to know how to use the Lora loader in this workflow

  • @dougsbir
    @dougsbir5 ай бұрын

    Could you please post a full screenshot of the finished workflow? I am new to all this. Thank you so much, great work!!!!

  • @nlmnx5763
    @nlmnx576318 күн бұрын

    thanks babe

  • @automioai
    @automioai9 ай бұрын

    Thanks for the tutorial! I have a question: how can you use that tool to search for nodes? I'm only able to search by right-clicking. Also, I was not able to find the final Apply ControlNet; do I need to add it? (I can't even find it in the Manager.)

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Oh, just double-click on an empty space on the board! Apply ControlNet is a default node, so you should already have it!

  • @johnsummerlin7630
    @johnsummerlin763011 күн бұрын

    11:45 clarification requested: "denoise" is not a right-click option on the depicted node. What needs to be loaded for this option to show up? The source of this denoise control is not clear. There are multiple "custom scripts" items in the Manager menu, with different authors and different conflict warnings too.

  • @queencityballer2264
    @queencityballer22646 ай бұрын

    Having issues with the Load Controlnet node. Weights only load failed. Thoughts? Thanks.

  • @Valyrie97
    @Valyrie972 ай бұрын

    is there any way, with these nodes or maybe something newer (things move so fast) to pipe the previous frame along with the canny and do a img2img for each frame after the first?

  • @Bicyclesidewalk
    @Bicyclesidewalk4 ай бұрын

    Thanks for keeping the open-source spirit strong and locking the workflow behind Patreon~

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    is your argument really the "open-source spirit"? ... stop being lazy and watch the video plz 😂

  • @Bicyclesidewalk

    @Bicyclesidewalk

    4 ай бұрын

    @@DreamingAIChannel If you are going to do a video using a very free-to-use tool and then turn around and bork the thing with some paywall - meh, what a joke.

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    @@Bicyclesidewalk Man... please... it's pretty clear to me why you are complaining. Please stop... it's not good for you, and I'm telling you this from the bottom of my heart.

  • @Bicyclesidewalk

    @Bicyclesidewalk

    4 ай бұрын

    @@DreamingAIChannel Do the right thing.

  • @dkamhaji
    @dkamhaji9 ай бұрын

    Great video, great tools. Thanks so much for this contribution! I have two questions: 1. I cannot see the set-denoise setting for the KSampler in mine. I know this is essential for more stable output; how can I get this setting? 2. How did you set the angular lines? Is that a custom node?

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Hi! Both things are from a custom node suite called ComfyUI-Custom-Scripts by pythongosssss. Here I've explained how to get the straight lines: kzread.info/dash/bejne/c56ryNWwqq_TqLg.html

  • @dkamhaji

    @dkamhaji

    9 ай бұрын

    @@DreamingAIChannel Thank you! That helped. I got it installed, and it took me a step closer. When I set 0.05 everything changes; then I set it back to the steps I want, but when I check, the denoise says default 15 again. How do you know what it's set to? In A1111 you have a slider to see what it's set to, so it's easy to see; in Comfy it's hidden.

  • @grabaka884
    @grabaka8847 ай бұрын

    How did you get that loadvideo box?

  • @hleet
    @hleet9 ай бұрын

    Another nice video to watch. I love your videos and your explanations are simple to understand. Is it possible for you to explain how to use the FreeU ComfyUI node? There is also the "FreeU advanced" custom node. The thing is they just throw "B and S" at you, and for the moment, whatever numbers I put in, I only get BullSh*t output...

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Hi! When FreeU came out I thought about making a video straight away, then I saw how it works and took a step back. The thing is, I can explain how FreeU works, BUT since the values depend not only on the model used but also on the styles applied to it, like LoRA, and there is no tool to date that tells us whether "we're on the right track" with the values we're putting in, in the end I'd be forced to tell you to try randomly (like you said) until you find the correct values, and that wouldn't make a good tutorial 🤣. The only tip I can give you is to keep the values low and start by modifying only B or only S. I look forward to a tool that makes our lives easier, and then I'll be the first to make a video about it!

  • @shatteredMoonEnt
    @shatteredMoonEnt8 ай бұрын

    Thanks for the cool tut and nodes. I would love some input on how to get a more stable image while preventing the generated video from looking just like the source material. I've been messing around with the "set denoise" on the KSampler (Advanced); however, I either get a LOT of variation by setting the denoise to something above 0.5, or essentially a noisy reproduction of the original by setting it to 0.5 or less. Subbed to watch more as it comes out :)

  • @shatteredMoonEnt

    @shatteredMoonEnt

    8 ай бұрын

    I suppose, actually, I should have asked if you have figured out a way to inject a pre-generated latent into the ksampler to guide the look; for example, I have a character with a face I want to reuse through my video.

  • @shatteredMoonEnt

    @shatteredMoonEnt

    8 ай бұрын

    I guess that's just a lora, isn't it...

  • @abrahamgeorgec
    @abrahamgeorgec16 күн бұрын

    Were you able to download the video to a user-defined folder using the API? Which node should be used for that?

  • @HerraHazar
    @HerraHazar2 ай бұрын

    I don't seem to have this video node; do I need to add it somehow?

  • @sugy747
    @sugy7476 ай бұрын

    Hello, thank you for this great tutorial. Unfortunately I have a problem: I have installed the repo from GitHub in my custom_nodes, and I also have the N-Nodes folder fully installed and in the right place (with all the nodes in the py folder). But when I run ComfyUI, certain nodes like the interpolator are there, yet I can't get my hands on the LoadVideo and SaveVideo nodes! From what I've read, this problem has also happened to other people, and on GitHub one person even said it could be due to a conflict with AnimateDiff. What do you think?

  • @DreamingAIChannel

    @DreamingAIChannel

    6 ай бұрын

    Hi, well no, when you have a conflict with the AnimateDiff node, it's the Load Video of that node that doesn't appear, not mine. Please attach the full log (or post it in the GitHub repo so I can see what's wrong).

  • @sugy747

    @sugy747

    6 ай бұрын

    @@DreamingAIChannel ok, sorry for the confusion, I'm not very experienced, I'll share what I can on Sunday. thanks for the quick reply, I probably made a mistake but I really couldn't find it.

  • @user-hb6dd9iu9g
    @user-hb6dd9iu9g8 ай бұрын

    Thank you for this tutorial. Could you share this workflow, please? :)

  • @florianl6764
    @florianl67648 ай бұрын

    Hey, great video! The only problem is my SaveVideo node has two options, savevideo and saveframes, and it looks like I need to connect a line to them. I don't get an output, but it runs without an error. Do you know why my SaveVideo looks different from yours?

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Hmm, I don't know why you see savevideo and saveframes as inputs instead of widgets. Did you try updating ComfyUI to the latest version? Maybe yours is an older one. However, you can bring them back to normal if you right-click on the node and select "Convert SaveVideo to widget" and "Convert SaveFrames to widget".

  • @Wile-xs7gh
    @Wile-xs7gh6 ай бұрын

    How did you get the lines to be straight? Mine are curvy, which is kind of annoying to work with.

  • @gersonburneoamar4760
    @gersonburneoamar47608 ай бұрын

    Thank you so much! I'm learning how to use ComfyUI thanks to you and this video-to-video guide. I would be very grateful if you could help me with a question. I use a Radeon card, so the process is very slow; it works well at 520, but when I raise it, it takes too long. I would like to know if there is a tool to increase the quality of the video to full HD; I have seen it for images, but I don't know if it exists for video. Thank you very much in advance.

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Hi! If you mean a tool outside ComfyUI, there are plenty! The most popular was Topaz Video last time I checked; it's not free, but there is a free trial. Otherwise, after a quick search I found this, github.com/Djdefrag/QualityScaler, which seems good.

  • @gersonburneoamar4760

    @gersonburneoamar4760

    8 ай бұрын

    @@DreamingAIChannel Thank you very much for the recommendations! Rather, I was referring to some group of nodes that I could add to the workbench you present in this video, since it is the one I am using. I have seen that people do that with images in ComfyUI (for example, they work at 512 and with an upscale they output them at 8x quality), so I was wondering if there was any way to do it with video as well. Thank you very much again!

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Hmm, I think you can do the same things that you do with images! A video is just a series of images, so if you attach a node able to upscale an image at the output, before the SaveVideo and the frame interpolator, I'm pretty sure it will run without any problems!
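
    Conceptually this works because the frames flow through the graph as one image batch. Below is a minimal sketch of that idea in plain PyTorch (not the actual node code; the tensor layout and the bicubic upscale are just assumptions for illustration):

        import torch
        import torch.nn.functional as F

        # A "video" here is simply a batch of frames shaped (frames, height, width, channels),
        # the same layout ComfyUI uses for image batches.
        frames = torch.rand(16, 512, 512, 3)

        # Upscale every frame 2x, exactly as you would a batch of images.
        upscaled = F.interpolate(
            frames.permute(0, 3, 1, 2),   # to (N, C, H, W) for interpolate
            scale_factor=2,
            mode="bicubic",
            align_corners=False,
        ).permute(0, 2, 3, 1)             # back to (N, H, W, C)

        print(upscaled.shape)  # torch.Size([16, 1024, 1024, 3])

    In the actual workflow you would drop an upscale node (for example a model-based upscaler) in at that same point instead of raw interpolation.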

  • @gkoriski
    @gkoriski4 ай бұрын

    Thanks, it looks like the nodes changed since this video came out, any chance for an updated video?

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    Hi! It didn't change that much, so I don't know. I mean, it's true there are a lot more outputs, but they are just optional things; the only notable change is the starting_frame parameter, which lets you choose where to start extracting frames from the video. I would like to do a video update when I have time to seriously rework the suite, as I need to improve performance and add some functionality!

  • @renanarchviz
    @renanarchviz2 ай бұрын

    Error occurred when executing LoadVideo [n-suite]: 'NoneType' object has no attribute 'shape' Would you help me?

  • @NicolasLuthy
    @NicolasLuthy4 ай бұрын

    Do you know how I can manage to produce video-to-video content on an external GPU? Google Colab or Vast.ai? Currently I am having mental breakdowns over my limitations on a MacBook :(

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    Hi! Vast.ai! Because Colab, as far as I know, was banning ComfyUI from notebooks.

  • @mr7nightmare819
    @mr7nightmare8198 ай бұрын

    Hey, thank you for your videos. I have a question about installing ComfyUI-N. When I tried to run "install_dependency_gguf_models" I got "ERROR: Failed building wheel for llama-cpp-python" and "Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects". I would appreciate it if you could help me. Thank you!

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Hi! Well, I need to see the full error. Is it possible that you don't have a compiler like Visual Studio installed?

  • @mr7nightmare819

    @mr7nightmare819

    8 ай бұрын

    never mind i fixed that.

  • @maoming9131
    @maoming91316 ай бұрын

    Hey, great vid! Could you copy-paste the prompts you've been using in the video? @8:30

  • @OffTheHorizon
    @OffTheHorizon2 ай бұрын

    Hey! I installed the custom node suite and updated it, but I can't seem to find the LoadVideo node. I've searched for it and refreshed ComfyUI over and over, but it still isn't in the list of nodes. Do you have a solution?

  • @SVKRemider

    @SVKRemider

    2 ай бұрын

    Did you find the fix? I have the same problem

  • @OffTheHorizon

    @OffTheHorizon

    2 ай бұрын

    @@SVKRemider i applied a workflow that had loadvideo already in it, that worked!

  • @blockchaindomain2226
    @blockchaindomain22268 ай бұрын

    how did you get your cables nice and neat like that?!?!?!??!

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Hi! i've explained it here: kzread.info/dash/bejne/c56ryNWwqq_TqLg.html

  • @the_meridian
    @the_meridian3 ай бұрын

    Awesome vid for someone who wasn't going to dabble in video because it looked so complicated. This is a nice, easy way in. So I got it up and running, took a simple GIF, and what it put out was sort of an animation, but more of a slideshow. Interpolation seemed to do nothing to help, and like you say this is a very rudimentary rig not meant for prime time, but... where do you suppose we go from here?

  • @DreamingAIChannel

    @DreamingAIChannel

    3 ай бұрын

    Thanks! So, if you are going to use something like a GIF, I suggest you set the frame rate to "original", since GIFs already have few frames. From there you can do everything, really: you can add ControlNet or AnimateDiff, and then only your imagination (and the VRAM, lol) will be your limit!

  • @the_meridian

    @the_meridian

    3 ай бұрын

    @@DreamingAIChannel I tried a short few-second clip of a "real" video too, and ControlNet was in place as per your workflow. The ControlNet shapes are 100% correct, but the results put out by the KSampler are still pretty random, unless of course it's operator error. Don't know what AnimateDiff is, so I'll look into that :)

  • @leol.4541
    @leol.45415 күн бұрын

    I have installed ComfyUI's Auxiliary Preprocessors, but I can't find any CannyEdge node; I just have the regular Canny. Can someone help? Also, just using Canny, I have a problem when rendering, apparently with the GPU, but everything seems right on my computer. And when I remove the Canny node, everything seems fine until the rendering reaches the KSampler Advanced node, where the same problem appears. Can anyone help, please?

  • @Bitcoin_Baron
    @Bitcoin_Baron21 күн бұрын

    Can you update and just provide a downloadable archive we can extract into the Comfy folder? I can't understand the GitHub instructions; they don't make sense for the average user.

  • @pseudopod77
    @pseudopod775 ай бұрын

    I installed ControlNet, but how did you load the preprocessors at 9:25? My selections are blank there. Where did they come from?

  • @DreamingAIChannel

    @DreamingAIChannel

    5 ай бұрын

    But do you mean that when you search for "CannyEdgePreprocessor" you don't get any result? Maybe ControlNet Auxiliary Preprocessors didn't install correctly.

  • @foodseen7824
    @foodseen78245 ай бұрын

    I can't get FrameInterpolator to populate as a node. I have it saved in the custom_nodes folder. I installed it in the Manager... Did I install it in the wrong place, or is something else wrong? Any help would be great! I'm really confused, and it's my last step.

  • @DreamingAIChannel

    @DreamingAIChannel

    5 ай бұрын

    Hi, what do you mean by "I have it saved in the custom_nodes folder"? FrameInterpolator is part of ComfyUI-N-Suite; if you have installed something else, then I don't know how to help you!

  • @MCLangKhach
    @MCLangKhach9 ай бұрын

    Are there any requirements for video upload? The video I uploaded did not work, even though it had the .mp4 extension.

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Hmm, no, there are none, unless some Python dependency has a limitation, but that seems strange to me. Did it give you an error? Maybe it's a big file and you just need to wait for the upload to finish.

  • @ehsankholghi
    @ehsankholghi5 ай бұрын

    Hello, in SD my sampling-methods list has only 8 entries! I can't see others like Karras. How can I fix this?

  • @DreamingAIChannel

    @DreamingAIChannel

    5 ай бұрын

    Hi! Karras is in the scheduler part, not inside the sampler list. By the way, I think it's possible that some custom nodes have messed up ComfyUI, so it's better to download a brand-new ComfyUI installation and install only the nodes you really need!

  • @ehsankholghi

    @ehsankholghi

    5 ай бұрын

    @@DreamingAIChannel I upgraded to a 3090 Ti 24 GB. How much CPU RAM do I need for video-to-video SD? I have 32 GB.

  • @user-gv3iz1sv6b
    @user-gv3iz1sv6b4 ай бұрын

    Thank you. Please share the Frame Interpolation node to use in the project; I can't find it.

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    Hi! It's in my custom node suite; you can download it here: github.com/Nuked88/ComfyUI-N-Nodes

  • @eyeemotion1426
    @eyeemotion14264 ай бұрын

    So do I understand it right? If I have a video of about 2 hours, setting the batch size to 38 will automatically process the video in parts of 3 minutes? I'm asking because currently I'm using the Depthmap extension in Stable Diffusion (Automatic1111) to create depthmaps/heatmaps, but I run into the problem you mention: anything past 3 minutes results in an error. I had to cut my video into 3-minute pieces (38 files) and manually insert each short clip, hit generate, wait for it to process, and save it, having to be there to insert the next clip and so on, as it doesn't seem to be able to batch-process. So with what you are showing here, I wouldn't even need to cut my video into pieces? But I'd probably still have to merge all the resulting depthmap/heatmap clips? Also, I'm not familiar with ComfyUI yet. Do things need to be made specifically for ComfyUI, or does anything that works in Stable Diffusion work in ComfyUI? Because I need nodes where I can select a weight model and generate a depthmap and/or heatmap. I'm currently using the Thygate depthmap extension in Stable Diffusion.

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    Hi! Batch_size 38 means 38 frames, so assuming your video is at 30 fps that's (7200 seconds * 30) / 38 ≈ 5,684 batches in total, since 38 frames is about 1.27 seconds. I have never tried such a long video, but it should work. ComfyUI and A1111 also share templates, but if you're talking about extensions you have to look for the equivalent in "node format" for ComfyUI; usually you can find it just by googling, but for the Thygate depthmap I don't know, since I couldn't find anything; maybe the "Auxiliary Preprocessors" custom node integrates those too. For the basics of ComfyUI you can watch my previous videos! I have a video on the Auxiliary Preprocessors too; it's the one called "Mastering ComfyUI: Creating Stunning Human Poses with ControlNet! - TUTORIAL".
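
    As a rough check of that arithmetic, here is a tiny Python sketch (the 30 fps rate and 2-hour duration are just the numbers from this thread, and the ceiling division is an assumption about how a partial final batch would be counted):

        import math

        fps = 30                  # assumed frame rate of the source video
        duration_s = 2 * 60 * 60  # 2-hour video, in seconds
        batch_size = 38           # frames loaded per batch

        total_frames = duration_s * fps                 # 216000 frames
        batches = math.ceil(total_frames / batch_size)  # 5685, the last one partial
        seconds_per_batch = batch_size / fps            # ~1.27 s of video per batch

        print(total_frames, batches, round(seconds_per_batch, 2))
        # 216000 5685 1.27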

  • @eyeemotion1426

    @eyeemotion1426

    4 ай бұрын

    @@DreamingAIChannel OK. I tried cutting the video with LosslessCut, and that way I found out that a 3-minute video was around 4,300 frames. Now I've noticed that I can't cut my video that way either, because it adds frames and it seems to also change the speed: the pieces won't sync up with the original video file, and I also get jump cuts when I merge them back together. So I hope this batch-size processing can help. I will check out that video of yours and hopefully find a solution.

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    @@eyeemotion1426 Hi! But are you talking about my loading/saving video nodes? Because if so, it's a bug I need to fix xD

  • @eyeemotion1426

    @eyeemotion1426

    4 ай бұрын

    @@DreamingAIChannel No, it was LosslessCut that did add frames. Apparently that's the nature if you cut videos instead of re-rendering them. Something about keyframes. Yours I still have to try. I was using Stable Diffusion before I saw your video, so I still have to familiarize myself with ComfyUI. Especially for depthmap/heatmap generation.

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    @@eyeemotion1426 ohh ok! Don't worry take your time!

  • @swannschilling474
    @swannschilling4749 ай бұрын

    I never did video-to-video either, but now I might give it a try! 🎉😊 Btw, is this an AI voice?

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    yep!

  • @swannschilling474

    @swannschilling474

    9 ай бұрын

    @@DreamingAIChannel its a very good one! Not bothering at all, very natural! Is it from Eleven Labs?

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    @@swannschilling474 Oh no, there is no fun in using Eleven Labs! It's Bark + a lot of cuts and retries xD

  • @swannschilling474

    @swannschilling474

    9 ай бұрын

    @@DreamingAIChannel wow!! But yes!! Bark is awesome!!

  • @SquirrelTheorist

    @SquirrelTheorist

    7 ай бұрын

    @@DreamingAIChannel I thought so! I noticed the artifacts and it definitely sounded like Bark, but it's definitely better than the ones used in those Minecraft shorts. Also, I think from the inflexion of the pitch, am I correct in assuming you are a woman? Sorry if I'm wrong, I mean no offense, it just seems you have a higher pitch-range and use a lot of enthusiasm in your voice compared to the Ai model you are using in Bark.

  • @qubicone
    @qubicone3 ай бұрын

    Hello, I installed your nodes, but in the LoadVideo node the "Choose file to upload" button is missing, and drag and drop is not working either. Everything is up to date and of course I tried restarting everything. Can you help, please? And also thank you for this excellent work; I can't wait to use it once the problem is solved :D

  • @DreamingAIChannel

    @DreamingAIChannel

    3 ай бұрын

    Hi, I finally had a little bit of time to test my nodes with the latest version of ComfyUI, because I thought the problem could be related to some update, but it's working. Can you open an issue on GitHub with an image of your browser's developer console (Ctrl+Shift+J) on the ComfyUI page? I think there is some conflict in your installation, but I don't know where. Also, what OS do you use? Thanks!

  • @DerekBranscombe
    @DerekBranscombe8 ай бұрын

    Amazing tutorial - for me everything works perfectly until the end - but nothing happens at 'save video'. There is no error. If I plug the image out from the VAE decode into a normal image output, it works. Any ideas?

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Hi! Yes, I just discovered this. I can bet you have the AnimateDiff node installed, right? For some reason, in the AnimateDiff node the dev did not put a "filter" in a function to restrict it to being executed only inside "AnimateDiffCombine", and this conflicts with the Save Video node. I tried everything on my node, but it seems I cannot do anything on my side, so I submitted a patch to the AnimateDiff dev that he can apply to avoid this. Meanwhile, you can replace the comfyui-animatediff folder with mine: delete it and clone this "patched" repository, github.com/Nuked88/comfyui-animatediff.git. It should work; if you have any problem with the patched version, please tell me!

  • @DerekBranscombe

    @DerekBranscombe

    8 ай бұрын

    @@DreamingAIChannel wow thank you for your detailed response - you are absolutely right. That fixed it! I had another problem that cropped up, I've completely lost the ComfyUI interface menu (the box with queue prompt and manager). I've been getting it to work with edge but not chrome, no matter what I try. It's not there at all even if I zoom out. I don't think it's related to your workflow but if you have any idea let me know. Thanks again!!!!

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    @@DerekBranscombe Well, I don't have any idea about that xD, but you can try this: use ComfyUI in incognito mode in Chrome; if the menu works again, just delete all the cookies and cache related to ComfyUI in the "normal part" of the browser, and it should reset everything to default, even the positioning of the menu.

  • @A5h3n.

    @A5h3n.

    6 ай бұрын

    @@DreamingAIChannel Do you know if the patch has been implemented, or is this git repo still the only way?

  • @DreamingAIChannel

    @DreamingAIChannel

    6 ай бұрын

    @@A5h3n. Hi, it has been implemented!

  • @hainguyen-gq7yb
    @hainguyen-gq7yb9 ай бұрын

    Error occurred when executing FrameInterpolator: list index out of range

      File "/content/drive/MyDrive/ComfyUI/execution.py", line 152, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "/content/drive/MyDrive/ComfyUI/execution.py", line 82, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "/content/drive/MyDrive/ComfyUI/execution.py", line 75, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "/content/drive/MyDrive/ComfyUI/execution.py", line 59, in slice_dict
        d_new[k] = v[i if len(v) > i else -1]

    I got this error right at the first step of selecting video -> saving video.

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Hi, what did you put in LoadVideo as images_limit and batch_size? "Out of range" means that some value is too high. For something like this, can you open an issue on GitHub? It will be easier for me to manage! Thanks!

  • @xxraveapexx2750
    @xxraveapexx27502 ай бұрын

    SaveVideo seems not to work. After Comfy is done with everything, there is no preview in the SaveVideo node, and no result is saved in the output folder.

  • @Dan-gy4uh
    @Dan-gy4uh9 ай бұрын

    Great content!! I've learned so much watching your videos. I get a bunch of errors when the process reaches the KSampler; the list is longer than what is displayed below. Any help to fix this would be greatly appreciated :) Thanks.

    Error occurred when executing KSamplerAdvanced: 'ModuleList' object has no attribute '1'

      File "C:\Users\Dan\Desktop\Stable Diffusion AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
      File "C:\Users\Dan\Desktop\Stable Diffusion AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
        uis = []

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Hmm, to try to help you I need to know what you are doing and how; otherwise it's pretty much impossible for me :p

  • @ezdeezytube
    @ezdeezytube7 ай бұрын

    Installed your N-Nodes package, but I still don't get a LoadVideo node, and I noticed that on launch of ComfyUI I am seeing "module not found: No module named 'scikit_build_core'". Also, the requirements txt file mentions a need for py-cpuinfo, moviepy, opencv-python, scikit-build, typing, and diskcache. Not sure what to do :/

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    If you reboot ComfyUI, is the error still there?

  • @ezdeezytube

    @ezdeezytube

    7 ай бұрын

    Yes. I noticed it now says llama_cpp is not installed. It tries to install it, but says it could not find a version that satisfies the requirement llama-cpp-python. I am running the latest version of Python.

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    @@ezdeezytube Latest? It's tested with 3.11 and 3.10; I don't know if llama_cpp can run on newer versions of Python!

  • @ezdeezytube

    @ezdeezytube

    7 ай бұрын

    I meant 3.11

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    @@ezdeezytube What's your environment? Linux or Windows?

  • @zentrans
    @zentrans9 ай бұрын

    I'm stuck at the first step of (batch) processing frames to create an automatic mask over a segment of the image in Comfy. Can you do an overview of the available methods? I was convinced I could use SAM to do this because of the way the image gets segmented in the A1111 "sd-webui-inpaint-anything" extension (WMZK_kJmMhE). Segments have fixed colors that can obviously be turned into masks automatically (if you can select by color instead of dot-picking), and the segments are different depending on the SAM model selected... I can't figure out how to do it in Comfy.

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    I tried to do something similar, but I used the OneFormer COCO Segmentor for the mask; try watching this video: kzread.info/dash/bejne/q4ejq9Kth5XOpaw.html

  • @zentrans

    @zentrans

    9 ай бұрын

    @@DreamingAIChannel Found that vid earlier, but COCO segmentation doesn't go far enough; I don't get the small segments I want... so what exactly is the secret of the A1111 extension?

  • @zentrans

    @zentrans

    9 ай бұрын

    @@DreamingAIChannel am I wrong in assuming that SAM models output the colored segmentation maps ? I'm puzzled by the fact that I can't see that kind of output

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    @@zentrans I think they output a colored segmentation mask too, but the thing I'm not sure about is whether the first frame will have the same colors as the second frame and so on... I mean, if I have a man and in the first frame he is red, will he be red in the second frame too? Or do we need to apply a special technique to always get the same color on the same object? Because judging from the presentation video it's definitely possible, but I don't know if anyone has implemented that, and if it's not implemented, using colors to make an automatic mask will be impossible 🤣

  • @zentrans

    @zentrans

    9 ай бұрын

    @@DreamingAIChannel I'm pretty sure it doesn't apply random colors, otherwise the output of the same image would always be different

  • @MadXDax
    @MadXDax7 ай бұрын

    Not sure if it's just me, but whenever I drop the mp4 file onto the node it does nothing; I've tried browsing for the file too.

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    Hmm, I don't really know; it should work the same way the image loader does. Try restarting ComfyUI and the browser. What environment are you using?

  • @MadXDax

    @MadXDax

    7 ай бұрын

    @@DreamingAIChannel sorry if this sounds dumb but I'm not sure what you mean by environment and yeah I've gone as far as restarting my pc

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    @@MadXDax Hi, don't worry! I mean, are you for example using Windows and the zip downloaded from the ComfyUI GitHub?

  • @MadXDax

    @MadXDax

    7 ай бұрын

    @@DreamingAIChannel yeah its windows and the downloaded zip

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    @@MadXDax ok so it's probably a conflict with another custom node, what custom node do you have installed?

  • @keagoaki
    @keagoaki4 ай бұрын

    What are the limitations on the video? I mean, how long can the video be and at what resolution? It tells me that the entity is too big.

  • @DreamingAIChannel

    @DreamingAIChannel

    4 ай бұрын

    Your VRAM is the limitation, so you need to try with shorter/lower resolution video until no error occurs.

  • @keagoaki

    @keagoaki

    4 ай бұрын

    @@DreamingAIChannel Got a 4080 laptop, 12 GB of VRAM. Is there a file-size limitation? Resolution is full HD. I would love to skip the conversion to a PNG sequence etc.; my videos have sound in them. And you know I'll keep trying. Thanks.

  • @dragongaiden1992
    @dragongaiden19922 ай бұрын

    Friend, I wanted to install the load video node and it gives an error, even if I restart, it does not load the node.

  • @DreamingAIChannel

    @DreamingAIChannel

    2 ай бұрын

    hi! what error does it give?

  • @dragongaiden1992

    @dragongaiden1992

    2 ай бұрын

    @@DreamingAIChannel Thank you, I think my problem was updating comfyui, but apparently that's it, I have to try your video again

  • @ricardocosta9336
    @ricardocosta93369 ай бұрын

    please teach us how to create noderinos!

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    noderinos? xD

  • @ricardocosta9336

    @ricardocosta9336

    9 ай бұрын

    @@DreamingAIChannel Noderinos are a reduplication of "nodes" with the Italian suffix -ino, which adds the meaning of "little". I was studying the code for ComfyUI and your code, and the complexity of functionality that emerges from the generalized simple code (extensions and so on) is beautiful. But I'm not yet capable of creating functional ones. I'm kinda dumb, hahaha. So I was hoping you might be interested in doing a coding video, kind of a "how to make nodes", I mean, if you want to and the community does too. I believe ComfyUI can be the future of open-source, replicable AI pipelines: not only text and vision but infrastructure too. But I digress; thank you so much for your videos, man. I'm learning a lot.

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    @@ricardocosta9336 You're not the first (and I suspect not the last either xD) to ask me to make a video on "how to make a node", and I will, but only when I'm sure of what I'm doing! Right now it's all reverse engineering due to the lack of documentation, so I don't know whether what I'm doing is best practice or not. When I'm reasonably sure, even if there is still no documentation, I'll try to make a video about it!

  • @anthonydelange4128
    @anthonydelange412823 күн бұрын

    "Value not in list: video: 'Flying' not in []" issue with LoadVideo.

  • @bliu-kc7co
    @bliu-kc7co8 ай бұрын

    May I ask where this plugin (Frame Interpolator) is from? Can you give me the GitHub address?

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Yes, it's in my suite: github.com/Nuked88/ComfyUI-N-Nodes

  • @Ekopop

    @Ekopop

    7 ай бұрын

    I have the same issue, I installed but cannot find that node.

  • @MCLangKhach
    @MCLangKhach9 ай бұрын

    I can't find "Load Video" in my ComfyUI

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    You need to install my custom nodes from here github.com/Nuked88/ComfyUI-N-Nodes

  • @ryancabell3775
    @ryancabell37757 ай бұрын

    You made the narration AI too?

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    Do you mean the narrating voice?

  • @StableMindAI
    @StableMindAI9 ай бұрын

    How to use with google colab ?

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    Hmm, what do you mean? I didn't try it with Colab, but it should work normally.

  • @StableMindAI

    @StableMindAI

    9 ай бұрын

    @@DreamingAIChannel I mean, how do I install it in Colab? I can't find "LoadVideo" in my Colab.

  • @DreamingAIChannel

    @DreamingAIChannel

    9 ай бұрын

    @@StableMindAI You need to clone github.com/Nuked88/ComfyUI-N-Nodes into the custom_nodes directory of ComfyUI and then start ComfyUI.

  • @StableMindAI

    @StableMindAI

    9 ай бұрын

    @@DreamingAIChannel If using Colab, you use Google Drive. You know what I mean? What do I copy into the ComfyUI folder on Google Drive?

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    It's the same: somewhere you will have the custom_nodes directory, where you need to put my node's directory, which you can even download as a zip directly from GitHub.
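
    For reference, a minimal sketch of what that could look like in a Colab cell, assuming ComfyUI lives at /content/drive/MyDrive/ComfyUI (the path seen in the traceback earlier in this thread); adjust it to your own Drive layout:

        import subprocess
        from pathlib import Path

        # Clone the node suite into ComfyUI's custom_nodes directory on Google Drive.
        custom_nodes = Path("/content/drive/MyDrive/ComfyUI/custom_nodes")
        subprocess.run(
            ["git", "clone", "https://github.com/Nuked88/ComfyUI-N-Nodes"],
            cwd=custom_nodes,
            check=True,
        )

    After that, restart ComfyUI so it picks up the new nodes.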

  • @TechfortheWorld-qi3xs
    @TechfortheWorld-qi3xs8 ай бұрын

    I don't understand where you're using Practical-RIFE?

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    It's embedded in the frame interpolator node.

  • @Ekopop

    @Ekopop

    7 ай бұрын

    @@DreamingAIChannel Which I cannot find in the N-Suite. Does it have a different name, or is it in a different package maybe? Or wouldn't that be a problem? Is there a way to bypass that by simply having two nodes?

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    @@Ekopop But do you have the latest version of the N-Suite? In that case, what do you see in the console? There must be some errors; otherwise, if you search for "frameinterpolator" you should find it.

  • @Ekopop

    @Ekopop

    7 ай бұрын

    @@DreamingAIChannel "0.0 seconds (IMPORT FAILED): C:\Program Files\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes" - yeah, apparently the import has failed. I do have the latest version and all the other nodes are working fine; it's just FrameInterpolator that doesn't show up. Could it be that it cannot be imported if I have another node with a similar function from another package?

  • @Ekopop

    @Ekopop

    7 ай бұрын

    @@DreamingAIChannel 0.0 seconds (IMPORT FAILED): C:\Program Files\ComfyUI_windows_portable\ComfyUI\custom_nodes\Practical-RIFE-main also that one which I reckon is quite similar.

  • @Eugeniocaraujo
    @Eugeniocaraujo28 күн бұрын

    Unable to run it:

    Traceback (most recent call last):
      File "...\ComfyUI-master\custom_nodes\ComfyUI-N-Nodes-main\__init__.py", line 64, in
        spec.loader.exec_module(module)
      File "", line 883, in exec_module
      File "", line 241, in _call_with_frames_removed
      File "...\ComfyUI-master\custom_nodes\ComfyUI-N-Nodes-main\py\frame_interpolator_node.py", line 18, in
        from model.pytorch_msssim import ssim_matlab
    ModuleNotFoundError: No module named 'model'

    Can't load the interpolator in ComfyUI because of this...

  • @ehsankholghi
    @ehsankholghi5 ай бұрын

    whats ur gpu?

  • @DreamingAIChannel

    @DreamingAIChannel

    5 ай бұрын

    Hi! It's a 3080 10GB

  • @ehsankholghi

    @ehsankholghi

    5 ай бұрын

    @@DreamingAIChannel Which GPU should I buy for RVC voice-model training and video-to-video Stable Diffusion? Is a 4060 Ti 16 GB good for me?

  • @DreamingAIChannel

    @DreamingAIChannel

    5 ай бұрын

    Well, the 4060 Ti is good because you get a lot of VRAM, but be aware that it is not really fast!

  • @ehsankholghi

    @ehsankholghi

    5 ай бұрын

    @@DreamingAIChannel so what about 3090 24gb?

  • @Nuked

    @Nuked

    5 ай бұрын

    @@ehsankholghi Yes, I think that's currently the best choice for quality/price!

  • @michail_777
    @michail_7778 ай бұрын

    Hi. I tried it like in your video, but it only gave me the frames from the video and the Canny frames. I create videos in Deforum, and I would like to advise you to try using the ControlNet models "tile", "ip2p", and "temporalnet" to avoid flickering. As I understand it, they work the same everywhere; some just smooth the frames better than others. I've been doing this for over 8 months now and have come to the conclusion that only these CN models work great with video. Here's an example: kzread.info/dash/bejne/iZqKw8-YfrCyYto.html. Underneath the video you'll find some tips. Maybe this will help. I hope I can get some work done with your script. Have a good generation.

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Thank you ❤️

  • @Run_run_run12345

    @Run_run_run12345

    8 ай бұрын

    Hey man, thanks for sharing.

  • @Pauluz_The_Web_Gnome
    @Pauluz_The_Web_Gnome8 ай бұрын

    Lol, a.i. voice-over

  • @Kay_R

    @Kay_R

    7 ай бұрын

    I'm fine with it

  • @TheMahdiayman
    @TheMahdiayman8 ай бұрын

    .json file plz

  • @shitbreak2k
    @shitbreak2k7 ай бұрын

    is this a Ai voice ?

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    Yup

  • @Blackbbdude
    @Blackbbdude7 ай бұрын

    Hi, thanks for your work. I just installed your additions to ComfyUI; however, I simply could not get them to work properly. I used the git pull command, which put the files in the main ComfyUI-N-Nodes folder, but there was no libs folder in it. I also tried downloading the zip file and extracting it into the "custom_nodes" folder. Anyway, I had to reinstall several times... to cut a very long story short, I eventually ended up with 2 main folders, "ComfyUI-N-Nodes-main" and "ComfyUI-N-Nodes", the lib folder being in the latter. So that works, but there's no sign of the Frame Interpolator, and I get this message from the command line:

      File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes-main\py\frame_interpolator_node.py", line 18, in
        from model.pytorch_msssim import ssim_matlab
    ModuleNotFoundError: No module named 'model'

    I wonder if you could help out... there may have been some updates to ComfyUI that have made this happen, I don't know; I haven't had this problem with other custom nodes. Thanks in advance.

  • @DreamingAIChannel

    @DreamingAIChannel

    7 ай бұрын

    Please delete that folder and install the node with git clone

  • @Blackbbdude

    @Blackbbdude

    7 ай бұрын

    @@DreamingAIChannel Thanks for the reply... Unfortunately that does not work. I deleted those folders and ran git clone with the git address in the custom_nodes folder; it extracts into the ComfyUI-N-Nodes folder, but it does not work in ComfyUI. The ONLY WAY it works (with 3 of the 4 modules) is with the 2 folders, and it doesn't show the interpolate option. Anyway, I won't waste your time anymore; I am going to delete the folders and forget about it. Thanks.

  • @koosiuhang5257
    @koosiuhang52579 ай бұрын

    Error occurred when executing LoadVideo: chunk expects `chunks` to be greater than 0, got: 0

      File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\video_node.py", line 463, in encode
        i_tensor_batches = torch.chunk(i_tensor, n_chunks, dim=0)

    How can I fix it? Thank you.

  • @DreamingAIChannel

    @DreamingAIChannel

    8 ай бұрын

    Since I don't know what you are doing, it's hard for me to try to help you 🤣

  • @shatteredMoonEnt

    @shatteredMoonEnt

    8 ай бұрын

    @@DreamingAIChannel You set your images_limit in the Load Video to be smaller than the batch_size value. Your batch_size must be less than or equal to images_limit.
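
    To illustrate why that combination blows up (a hypothetical reproduction, assuming the node derives the number of chunks by integer division of the loaded frame count by batch_size, as the torch.chunk call in the traceback suggests):

        import torch

        frames = torch.rand(8, 512, 512, 3)   # images_limit = 8 loaded frames
        batch_size = 16                        # larger than images_limit

        n_chunks = frames.shape[0] // batch_size   # 8 // 16 == 0
        # RuntimeError: chunk expects `chunks` to be greater than 0, got: 0
        batches = torch.chunk(frames, n_chunks, dim=0)

    Keeping batch_size <= images_limit keeps the chunk count at least 1, so the error goes away.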

  • @goodgames8171
    @goodgames81718 ай бұрын

    How to install node with frame interpolator?

  • @goodgames8171

    @goodgames8171

    8 ай бұрын

    I just reloaded the SD few times and the node appeared

  • @Ekopop

    @Ekopop

    7 ай бұрын

    sorry sorry what did you do exactly ? I'm encountering the same problem

  • @PlushBanshee

    @PlushBanshee

    6 ай бұрын

    @@Ekopopsame here lol

  • @samon29
    @samon295 ай бұрын

    Thank you, this was just what I was looking for!
