ComfyUI AI: What if the new IP adapter weight scheduling meets Animate Diff evolved?

Film & Animation

This is the first part of a series. In the coming episodes I will show a workflow that integrates the upscaling and image-enhancement method Perturbed Attention Guidance, with which animations can be generated in high resolution and with long playback times, and in which I try out various additional methods to control the output video, such as different ControlNets.
Once again, it's incredibly cool what the developer of the IP Adapter Plus nodes has created for us. The longer I play around with the adapters, the more ideas I come up with.
You can find and download the workflow on my website www.alienate.de.
0:00 - 1:13 Intro
1:14 - 7:42 Setup Workflow
7:43 - 13:00 Explaining Nodes
13:01 - 13:44 Outro

Comments: 79

  • @MisterCozyMelodies
    @MisterCozyMelodies · 12 days ago

    Everything in this tutorial is awesome: the voice, the background music, the detail in each step. Very immersive video, thanks a lot! You are doing next-level videos here.

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    12 days ago

    Thanks a lot, I really appreciate that! It always drives me crazy when I watch tutorials and numerous in-between steps are simply skipped. I definitely didn't want to do that in my videos. That's why, after a video is finished, I always rebuild the workflow according to its own instructions to check that it works.

  • @eccentricballad9039

    @eccentricballad9039

    10 days ago

    @@Showdonttell-hq1dk Thanks a lot for actually creating art instead of creating content. It's so immersive, and I feel like I stepped into my own artificial-intelligence work studio.

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    10 days ago

    That's a wonderful compliment, thanks a lot!

  • @electronicmusicartcollective

    @electronicmusicartcollective

    7 days ago

    @Showdonttell-hq1dk ...uhm, except for the room sound on the voice ;) A drier signal would be better; please no noticeable reverb/delay.

  • @wizards-themagicalconcert5048

    @wizards-themagicalconcert5048

    4 days ago

    @@Showdonttell-hq1dk It works very well! Very easy to understand and follow! Thanks!

  • @wizards-themagicalconcert5048
    @wizards-themagicalconcert5048 · 4 days ago

    Fantastic content and video, keep them up! Subbed!

  • @AmazenWisdom
    @AmazenWisdom · 13 days ago

    Wow. Another great tutorial! Thank you so much for sharing!

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    13 days ago

    Thanks for watching! :)

  • @SylvainSangla
    @SylvainSangla · 9 days ago

    Thanks a lot for sharing these tutorials and workflows!

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    9 days ago

    Thanks for watching!

  • @abaj006
    @abaj006 · 16 days ago

    Amazing work! Thanks for sharing, much appreciated!

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    15 days ago

    I'm glad you like it. Thanks for watching and subscribing!

  • @697_
    @697_ · 13 days ago

    The way your AI says "Hugging Face" is quite cute, tbh. 1:36

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    13 days ago

    Tbh, one of the reasons I chose Charlotte was because her voice keeps me motivated when making the videos. And, if that works for me, then there's a good chance that viewers will like her AI voice too. ;)

  • @FlippingSigmas
    @FlippingSigmas · 12 days ago

    great video!

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    12 days ago

    Thanks!

  • @skycladsquirrel
    @skycladsquirrel · 12 days ago

    amazing!

  • @MrXRes
    @MrXRes · 7 days ago

    Thank you! What voice generator did you use?

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    5 days ago

    This is the AI voice profile "Charlotte" from ElevenLabs. Thanks for watching.

  • @wonder111
    @wonder111 · 5 days ago

    Great approach to teaching what only the programmers can understand. I worked on this for a few hours; it fails at the last (Video Combine) node with this error: "Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED". Any idea what may be the error? Thanks, and I will be following.

  • @GamingDaveUK
    @GamingDaveUK · 12 days ago

    Do you have a tutorial for the SDXL version? So far, every guide I look at for animation shows SD 1.5 models. Given SDXL's prompt cohesion and better image quality, it's surprising so many are still using 1.5.

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    12 days ago

    Unfortunately, this does not yet work with SDXL, at least not the version 2 Motion LoRAs etc. This means you can't really use everything that AnimateDiff Evolved provides with SDXL. It was an adjustment for me too, because I have only been using SDXL models for the last few months. In the next few days I want to try everything with HotshotXL; maybe it will work better, but I can't really say anything about that yet. You can download a basic XL workflow from the site, but as I said, there's not much you can do with it. Most of the workflows I've found mix SD 1.5 with SDXL in some way with different adapter LoRAs, but they're not satisfactory. Link: civitai.com/articles/2950/guide-comfyui-animatediff-xl-guide-and-workflows-an-inner-reflections-guide

  • @BuzzJeux_Studio
    @BuzzJeux_Studio · 11 days ago

    Fantastic tutorial and very useful, but I don't know why I get an out-of-memory (OOM) error with 16 GB of VRAM. How much VRAM do you use for this?

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    11 days ago

    Thanks for watching, glad you like it. My graphics card has 12 GB of VRAM. Maybe it helps to enlarge the swap file, i.e. the virtual memory. I have set mine to 80 GB, and since then I have hardly had any problems of this kind.

  • @BuzzJeux_Studio

    @BuzzJeux_Studio

    11 days ago

    @@Showdonttell-hq1dk First of all, thanks for your quick reply. I increased my virtual memory (I was at 30 GB) as you mentioned, but I still had the problem. After several hours looking for the why and wherefore, I finally found where my error was coming from: I was using input images that were far too large in resolution! Problem solved by using basic 512x512 images :)

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    10 days ago

    @BuzzJeux_Studio So business as usual! :) An error occurs, a simple fix doesn't work, and many hours, countless websites, and how-to-fix-problem-xyz videos later, the problem turns out to be basically easy to solve. However, the images are usually downscaled to a low resolution of 224x224 by the Image Batch Multiple node anyway. I have just tried it again with 5 images at a resolution of 6000x6000. I only got an error message when I tried to load a 20480x12288 image into the Load Image node. This means that images larger than 512x512 should also work in principle, at least with a graphics card like yours.
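
To put rough numbers on why the input resolution matters less once images are downscaled, here is a back-of-the-envelope sketch. The helper function is my own, not ComfyUI code, and real memory use additionally depends on latents, attention buffers, and batch size; this only shows how fast raw image tensors grow with resolution.

```python
# Rough size of one RGB float32 image tensor at various resolutions.
# Illustrative only; actual ComfyUI VRAM use involves far more than this.
def image_tensor_bytes(width, height, channels=3, bytes_per_value=4):
    return width * height * channels * bytes_per_value

for w, h in [(224, 224), (512, 512), (6000, 6000)]:
    mib = image_tensor_bytes(w, h) / (1024 ** 2)
    print(f"{w}x{h}: {mib:.1f} MiB")
```

A 224x224 tensor is well under 1 MiB, while a raw 6000x6000 one is over 400 MiB, which is why downscaling before the CLIP Vision encoder keeps oversized inputs from mattering much.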

  • @alexhalka
    @alexhalka · 9 days ago

    Amazing!!! Would love to have your workflow, but I can't access your site.

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    9 days ago

    The website takes a while to load. Would you try again? It should work.

  • @DerekShenk
    @DerekShenk · 15 days ago

    Since viewers will want to learn what you teach them, it would be far more beneficial if you included links to your workflow. Additionally, if you really want to stand out from other tutorials, include links to the actual images you use in your workflow, enabling viewers to fully reproduce what you show them. That would be fantastic!

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    15 days ago

    Thanks for watching! You can find and download the workflow on my website, alienate.de. As for the images, my idea is to show in the tutorials how you can set up and use the workflow yourself. Without exception, all the images I use in the videos were created or photographed by myself. I also work as a photographer, which means that some of the images used are also tied to image rights. Apart from all the fun of learning how to use ComfyUI and create videos with it, it's also a financial matter. Thanks for your remarks and interest anyway.

  • @clangsison

    @clangsison

    13 days ago

    Sometimes people are lazy; that's why they want the workflow. Others view these types of videos (and Matteo's) as very insightful if one truly wants to understand how things work.

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    12 days ago

    @@clangsison I didn't want to say it out loud. But yes, it's probably true, although I can understand it somewhat. When you come into contact with this for the first time, a fully functional workflow like this is really helpful. You can take it apart and understand step by step how it works. Thanks for watching. :)

  • @amorgan5844

    @amorgan5844

    8 days ago

    ​@Showdonttell-hq1dk It's always appreciated; your work and workflows are some of the best I've ever seen.

  • @czlaczimapping
    @czlaczimapping · 7 days ago

    I have an error message: 'VAE' object has no attribute 'vae_dtype'. Do you know what the problem is?

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    7 days ago

    Have you tried using a different VAE? Or connecting the VAE output from the checkpoint to the VAE Decode node?

  • @CosmicFoundry
    @CosmicFoundry · 16 days ago

    Awesome, thanks for this! Do you have the workflow somewhere?

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    16 days ago

    Thanks for watching. I'm glad you like it. I'll definitely upload the workflow to my website later today, www.alienate.de.

  • @WhySoBroke

    @WhySoBroke

    16 days ago

    @@Showdonttell-hq1dk Great method, and thanks in advance for the workflow!

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    15 days ago

    You can now download the workflow as a json file from my website if you like. Have fun trying it out. The link is usually in the video description.

  • @CosmicFoundry

    @CosmicFoundry

    12 days ago

    @@Showdonttell-hq1dk got it thanks! keep up the great work!

  • @nirdeshshrestha9056

    @nirdeshshrestha9056

    12 days ago

    @@Showdonttell-hq1dk I got an error, please help

  • @martinkaiser5263
    @martinkaiser5263 · 13 days ago

    Where exactly can I download the workflow? I just don't see it

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    13 days ago

    Hey, thanks for watching. The workflow is available as a JSON file on my website, alienate.de. Just scroll down to the Comfy images; the first image is my channel's logo, and to the right of it is a list with the heading "Download Workflow Json". The last item on the list is "IPA Weight Scheduling + Animate Diff Workflow"; that is the link to the workflow. Right-click to open the context menu, click "Save link as ...", then simply drag the downloaded JSON file into the ComfyUI interface and install the nodes marked in red via the ComfyUI Manager with "install missing custom nodes". That should do it. I hope this was helpful. If so, have fun with it.

  • @double-7even
    @double-7even · 12 days ago

    I can't understand the weights for IPAdapter Weights. There are two values, e.g. "0.0, 1.0", in IPAdapter Weights. Is the first value (0.0) the weight for the first image batch (blue in the workflow) and the second value (1.0) for the second image batch (cyan in the workflow)? Btw, amazing work 👍

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    12 days ago

    Thanks for watching! I spent another couple of hours today looking for a detailed explanation of the nodes involved, but it seems that no detailed texts are available. So I can only tell you what my very long tests have shown. By the way, I'm currently working on a new video about this, and some things have become a bit clearer. My approach is empirical, so to speak: I test and see how the nodes behave with each other. It's incredibly complex, even though it sometimes looks so simple. My observations are: the two values (0.0, 1.0) indicate how much weight is given to the IP adapters on the one hand and the prompt on the other. 1.0 = the IP adapter, i.e. the images, receives the greater weight; 0.0 = the prompts receive the greater weight. As the outputs of the IPAdapter Weights node are called image_1 and image_2, I assume that the first image of the Image Batch Multiple node is processed, at least more strongly, by the first IPAdapter Batch node, and the second image therefore by the second IPAdapter Batch node; the tests also show this. However, things get more complex here. I'll try to shed more light on this darkness in the next few videos. :) But the short answer to your question is: yes, something like that.
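
A rough sketch of how such a two-image weight schedule could look across the frames of an animation. The function name and the linear ramp are my own assumptions for illustration; the real IPAdapter Weights node may interpolate differently (e.g. with easing curves), but the idea of one image fading out while the other fades in matches the observations above.

```python
# Hypothetical crossfade schedule between two image batches over N frames:
# image_1's weight ramps from 1.0 down to 0.0 while image_2's ramps up.
def crossfade_weights(frames, start=0.0, end=1.0):
    if frames == 1:
        return [1.0 - end], [end]
    weights_2 = [start + (end - start) * i / (frames - 1) for i in range(frames)]
    weights_1 = [1.0 - w for w in weights_2]
    return weights_1, weights_2

w1, w2 = crossfade_weights(5)
print(w1)  # [1.0, 0.75, 0.5, 0.25, 0.0]
print(w2)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

At every frame the two weights sum to 1.0, so the influence shifts smoothly from the first image batch to the second.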

  • @double-7even

    @double-7even

    11 days ago

    @@Showdonttell-hq1dk Thank you! I'm looking forward to the new video and I really appreciate your hard work! Another problem I found is that changing the resolution to 2x (768x768) produces a broken video: details are repeated vertically and the whole scene is mixed up. Do you know why, and how I can prevent this? EDIT: I think I know the answer. It's the latent size, and it's limited by the model's training data size (512x512). For a bigger size we need to upscale it?

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    11 days ago

    @@double-7even Yes, that's right. I had the same problem, but after a few runs with the same seed and a resolution of 768x512 the problem disappeared completely. Anyway, it seems advisable to use the same seed, even if the changes only occur after a few runs. My seed is 998999, so if you use a copy of my workflow, there's a good chance it will work there too. I don't know if you have changed it, but I would be interested to know whether the seed works across all computers.

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    11 days ago

    @@double-7even And that is, as you say, a typical SD 1.5 problem. With the SDXL models you no longer have these worries, but unfortunately AnimateDiff does not yet work properly with those models.

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    11 days ago

    But I have just found out that one can integrate additional IP adapter embeds into the workflow. That's pretty cool and will definitely be included in the new video.

  • @kargulo
    @kargulo · 1 day ago

    Hi, I built the workflow, but the results are very blurry

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    1 day ago

    First of all, thanks for subscribing; you are the thousandth subscriber. :) To solve the problem, you can set the resolution a little higher. If you are using the Absolute Reality LCM model, the optimal resolution is 576x320, and you can insert an NNLatentUpscale node between the custom sampler and the VAE Decode node. For this node you only need to set SD 1.5 and the factor to 2.0. The input images also play a role and should not be too small in resolution. If you are using a multi-scaled mask, the min_float_value should be set to about 1.0. Let me know if any of this has helped. If none of this works, the workflow can also be found on my website, alienate.de; maybe you can try that as well.

  • @kargulo

    @kargulo

    1 day ago

    @@Showdonttell-hq1dk Thanks for the reply. I'm so glad I'm the thousandth subscriber :) I found my mistake while building the workflow: I missed the checkpoint file and chose a different one than you recommended :)

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    23 hours ago

    @@kargulo Ah, ok. Yes, the workflow is set up for LCM, but you can also use other checkpoint models if you download the corresponding LCM LoRA model. So: connect the LoRA model loader to the checkpoint and select the LCM LoRA, connect the model output to the Use Evolved Sampling node, then connect its model output to the Model Sampling Discrete node and select LCM in that node. You can install the LCM LoRA model via the ComfyUI Manager. Have fun with it.

  • @nirdeshshrestha9056
    @nirdeshshrestha9056 · 12 days ago

    It did not work, I get an error. Can you help?

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    12 days ago

    What is the error message?

  • @nirdeshshrestha9056

    @nirdeshshrestha9056

    12 days ago

    @@Showdonttell-hq1dk Error occurred when executing IPAdapterBatch: cannot access local variable 'face_image' where it is not associated with a value

      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 761, in apply_ipadapter
        return (work_model, face_image, )
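
For readers curious about what this class of Python error means: it occurs when a function returns a variable that was only assigned on one branch. The snippet below is a minimal reproduction of the same `UnboundLocalError`, not the actual IPAdapterPlus code, whose real trigger (likely a missing FaceID model or input) may differ.

```python
# Minimal reproduction of "cannot access local variable ... where it is
# not associated with a value": the variable exists only on one branch.
def apply_ipadapter(has_face):
    if has_face:
        face_image = "cropped face"
    return face_image  # UnboundLocalError when has_face is False

print(apply_ipadapter(True))  # works
try:
    apply_ipadapter(False)
except UnboundLocalError as e:
    print("reproduced:", e)
```

In practice this usually means some optional input the node expected (here, whatever produces `face_image`) was never supplied, so checking the node's required models and inputs is the right first step.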

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    12 days ago

    @@nirdeshshrestha9056 I have tried to reproduce the error, but without success. What you can do is first click on "Update all" in the ComfyUI Manager and then restart ComfyUI. Then check in the ComfyUI Manager that all extensions (custom nodes) are updated; if not, update them manually. If relevant nodes are marked in red in the ComfyUI Manager under "import failed", try uninstalling and reinstalling them. And check that all necessary models are installed: CLIP Vision, IP-Adapter, and the AnimateDiff motion models and motion LoRAs. Please also make sure that the images you are using are still in the same folder and have not been moved elsewhere in the meantime. I hope this helps. If not, please let me know. Good luck!

  • @nirdeshshrestha9056

    @nirdeshshrestha9056

    12 days ago

    @@Showdonttell-hq1dk Tried, but it failed again

  • @daoshen
    @daoshen · 6 days ago

    Amazing work and results! The voice is annoying to listen to and distracts from the content. This is, of course, subjective. A more neutral voice might appeal to more of us?

  • @697_
    @697_ · 13 days ago

    ip adAPTer

  • @Showdonttell-hq1dk

    @Showdonttell-hq1dk

    13 days ago

    :P
