IPAdapter v2 Released! Old workflows are broken - Stable Diffusion Experimental

Science & Technology

A new version of IPAdapter has been released, with completely overhauled code! The new version completely breaks old workflows, so in this tutorial we'll see how to fix that and what the new and improved IPAdapter brings to the table.
If you updated your IPAdapter node and it's not working anymore, we'll fix that in no time.
This is the first video in a series of Research & Development videos about generative AI. These videos will look at new tools that are not quite production ready yet, but are still interesting and exciting enough to warrant further inspection.
Resources needed:
- IPAdapter github: github.com/cubiq/ComfyUI_IPAd...
- Simple, one image reference generator workflow: pastebin.com/JjUK9pUQ
- Workflow I'm using to test all the different image embeds: pastebin.com/xeD1Upai
- Example workflows from IPAdapter github: github.com/cubiq/ComfyUI_IPAd...
Models:
- Aetherverse XL (SDXL model used in this video): civitai.com/models/308337
- Dreamshaper 1.5 (if you want to use a 1.5 model, this is a good all-around choice): civitai.com/models/4384/dream...
IPAdapter (read the instructions; these models may have slightly different names than the ones used in this video, as I rename my models to remember which is which):
- IPAdapter Plus XL model. Place it into your "\models\ipadapter" folder and use it in your Load IPAdapter Model node: huggingface.co/h94/IP-Adapter...
- IPAdapter Plus 1.5 model. Place it into your "\models\ipadapter" folder and use it in your Load IPAdapter Model node: huggingface.co/h94/IP-Adapter...
- CLIPVision ViT-H model (works only with IPAdapter *PLUS*, both 1.5 and XL. For IPAdapter *STANDARD*, not used in this video and somewhat deprecated, you need a ViT-G model). Place it into "\models\clip_vision" and use it in your Load CLIPVision Model node: huggingface.co/h94/IP-Adapter...
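For reference, once downloaded and renamed, the files end up laid out roughly like this (a sketch: the first two filenames are the ones used on the h94/IP-Adapter repo, the CLIPVision filename is hypothetical since you'll likely rename it yourself):

  ComfyUI/models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors   # IPAdapter Plus XL
  ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors         # IPAdapter Plus 1.5
  ComfyUI/models/clip_vision/clip-vit-h-14.safetensors              # ViT-H CLIPVision (renamed)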
Timestamps:
00:00 - Intro
00:31 - GitHub Announcement
01:01 - Broken Workflows if IPAdapter is updated
01:31 - Inspecting the new IPAdapter nodes
03:35 - Testing the new Weight Types
06:59 - Testing the new Combine Embed Types
09:34 - Outro and a request for the Dev
#ipadapter #ipadapterv2 #stablediffusion #stablediffusiontutorial #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #moodboards #reference #sdxl #sd #risunobushi #andreabaioni

Comments: 37

  • @risunobushi_ai • a month ago

    TLDW recap:
    - Complete node rewrite for IPAdapter by the dev;
    - the old workflows are broken because the old nodes are not there anymore;
    - multiple new IPAdapter nodes: regular (named "IPAdapter"), advanced ("IPAdapter Advanced"), and FaceID ("IPAdapter FaceID");
    - no need for a separate CLIPVision Model Loader node anymore; CLIPVision can be applied in an "IPAdapter Unified Loader" node;
    - CLIPVision can still be applied separately if the "IPAdapter Unified Loader" is not used;
    - new Weight Types;
    - new Combine Embeds types for multiple images inside one IPAdapter node.

  • @Instant_Nerf • 26 days ago

    I’m having some errors with IPAdapter... what folder does it go into? Thanks

  • @risunobushi_ai • 26 days ago

    @@Instant_Nerf With IPAdapter v2 you have two options: either use a Load IPAdapter Model node, pick the model you want, and hook its IPAdapter output up to the IPAdapter input on the IPAdapter Advanced node; or skip that step entirely and pick the IPAdapter model you like via the Unified Loader node, which sits between the checkpoint node and the IPAdapter Advanced node (as sketched below). I didn’t cover this second option in the video because there was no documentation yet and I didn’t know it was a possibility.
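
    In node-graph terms, the two options look roughly like this (a sketch; socket names may differ slightly in your ComfyUI version):

      Option 1: Load Checkpoint --MODEL--> IPAdapter Advanced
                Load IPAdapter Model --IPADAPTER--> IPAdapter Advanced
                Load CLIP Vision --CLIP_VISION--> IPAdapter Advanced

      Option 2: Load Checkpoint --MODEL--> IPAdapter Unified Loader --MODEL, IPADAPTER--> IPAdapter Advanced
                (the Unified Loader selects the IPAdapter model and CLIPVision internally)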

  • @TyreII • a month ago

    The day he released the new one was the first time I ever tried it. I was so annoyed that it wouldn't work! I thought it was me, but I just got really unlucky that I chose that day of all days to try it! This helps a lot! Thanks.

  • @reapicus557 • a month ago

    This information is exactly what I needed. Thank you very much!

  • @harshitpruthi4022 • a month ago

    thanks for the comparison , it saves a lot of trouble

  • @ChloeLollyPops • a month ago

    Thank you!

  • @stepahinigor • a month ago

    Thanks Andrea! You covered everything I needed. We just need to see whether it works more accurately than v1, the same, or worse. I still have the old one in my comfy, so I can back it up, right?

  • @risunobushi_ai • a month ago

    AFAIK if you rename the folder and node you should be good (see the sketch below), but since I haven’t tried that myself I would double check before doing anything risky. What I did, to be overly safe, was just install a new instance of comfyUI with the new nodes. Then you can redirect the model paths to the previous instance in order to limit the amount of storage space used.
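
    A minimal sketch of the rename-to-backup route, assuming your ComfyUI build skips custom node folders ending in ".disabled" (paths are illustrative):

      cd /path/to/ComfyUI/custom_nodes
      # keep v1 on disk but stop ComfyUI from loading it
      mv ComfyUI_IPAdapter_plus ComfyUI_IPAdapter_plus.disabled
      # fresh v2 checkout alongside it
      git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus.git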

  • @ragnar364 • 19 days ago

    thx bro

  • @JackTorcello • a month ago

    Thanks a million!

  • @svenhinrichs4072 • 9 days ago

    To make the new possible, old things have to die...

  • @xdrxqcx • a month ago

    Great video! Any changes to the FaceID version that you know of? Your voice seems to be desynchronized for me, but that might just be me.

  • @risunobushi_ai • a month ago

    I haven’t run tests on the FaceID version, since this was more of a rushed job to fix the suddenly outdated information in the video I released on Saturday, but I’ll try that as well and update you soon. The voice is a bit desynched unfortunately, I’m not sure what happened with the upload. I’ll check if I can fix that.

  • @xdrxqcx • a month ago

    @@risunobushi_ai Thank you, great channel so far!

  • @risunobushi_ai • a month ago

    I've checked out the FaceID version, and the only change seems to be that the model can now go through an IPAdapter Unified Loader FaceID node, which can be used to bypass the need for a separate, specific CLIPVision Loader node for all IPAdapter nodes (FaceID, Advanced, and regular). I guess Matteo got tired of people asking why their ViT-G models weren't working with non-ViT-G IPAdapters and decided to code a new node with all the CLIPVisions already implemented.

  • @xdrxqcx • a month ago

    @@risunobushi_ai That sounds like a good change. I definitely hit a few red walls of text, like in your Midjourney vs Stable Diffusion short, when trying combinations of CLIP, IPAdapter and FaceID.

  • @twspokemongoofficiale709 • 28 days ago

    How can I fix the red IPAdapter node on certain workflows? I followed each step of the GitHub tutorial but it's still not working.

  • @risunobushi_ai • 28 days ago

    The red node means you updated the node, and since the new version is a complete node rewrite with different node names, comfyUI can't find the previous version anywhere. To fix that, you can either:
    - delete the red node and replace it with the new nodes, as explained in the video, or
    - roll back to the previous version of IPAdapter: go to your comfyui folder, then custom_nodes, then comfyui_ipadapter_plus, open a terminal there, type "git checkout 6a411dc" without the quotes, and press enter (see the sketch below). This will roll IPAdapter back to the version before the node rewrite.
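
    As a terminal sketch, assuming a default install layout (the folder name on disk may differ):

      cd /path/to/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus
      # pin the extension to the last commit before the v2 rewrite
      git checkout 6a411dc
      # restart ComfyUI afterwards so the old nodes register again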

  • @Oxes • 28 days ago

    Can you make a workflow for consistent characters based on one image?

  • @risunobushi_ai • 28 days ago

    Sure, I’ll add it to the list! I’ll need to find a way to spin it so that it solves actual issues in production environments and can be used in a production-ready workflow, so I’ll have to think about how to implement it that way as well.

  • @Oxes • 28 days ago

    @@risunobushi_ai Thanks, it would be amazing to achieve that!

  • @Flipacadabra • a month ago

    You're a boss! Tytytytytyty

  • @hoodooguru1450 • a month ago

    The workflows are not permanently broken. All you have to do is replace the old IPAdapter nodes with the new versions, refresh and reload the browser window, and the old workflows will work again, just using the new v2 IPAdapter.

  • @2PeteShakur • a month ago

    nice, good to know - v2 is pretty damn awesome, in ways, another level/milestone reached! ;)

  • @risunobushi_ai • a month ago

    They're broken in the sense that, if the node is updated, there's no seamless transition between the new and the old nodes. This is an issue in production environments, as an auto-update would break a deployed workflow. Think of it in terms of SaaS: if you had a workflow serving clients remotely and in an automated fashion, this update would make the deployment go poof. Node updates are usually handled transparently, but since this is a complete node rewrite with new node names and processes, it doesn't slot into old workflows automatically.
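
    For deployments, one defensive pattern (a sketch, not something covered in the video; the hash is the pre-rewrite commit mentioned elsewhere in this thread) is to pin the custom node to a known commit in the setup script instead of auto-updating:

      # clone once and pin; never run unattended updates against production
      git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus.git custom_nodes/ComfyUI_IPAdapter_plus
      git -C custom_nodes/ComfyUI_IPAdapter_plus checkout 6a411dc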

  • @JavierPortillo1 • a month ago

    Well... You have to look at it from a developer's point of view to understand why the breaking changes happened. First, the guy is not getting paid for this work, only through donations, and that's clearly not enough. Maintaining both versions would mean double the effort for little to no reward. He could have done it in a backwards-compatible way, but it seems like it wasn't worth it. So, if you're relying on his nodes for your professional work, the best *you* can do to make sure things like this don't happen again is to donate.

  • @risunobushi_ai • a month ago

    I completely agree, it's entirely up to the dev, and since it's their own free time and effort we're not owed anything. What I'm suggesting is that it would have been an overall smoother deployment if they had released v2 separately, as another repo, while abandoning further development on v1. V1 would have been deprecated, but the workflows would still have been operational.

  • @robrobs3d • 29 days ago

    @@risunobushi_ai That would have been nice indeed...

  • @timeless3d858 • a month ago

    Weak input, weak output, weak middle, and strong middle seem to be copying the image based on frequency. Imagine the model building the image from simple colors and splotches, then refining them into more defined shapes, then into smaller shapes, then defining the textures. Input would be the vague shapes and output would be the textures, so a strong middle would get rid of both the composition and the texture, leaving just the character in a new composition with slight resemblance and a different finish.

  • @risunobushi_ai • a month ago

    Great info, thank you for your analysis. I'll look a bit more into those modes then, but from how they seem to behave, they don't have a place in my own line of work. They might be interesting for pre-prod / concept art workflows, though.
