How to use IPAdapter models in ComfyUI
Everything you need to know about using the IPAdapter models in ComfyUI directly from the developer of the IPAdapter ComfyUI extension.
👉 You can find the extension "ComfyUI_IPAdapter_plus" on github here: github.com/cubiq/ComfyUI_IPAd...
00:00 Introduction
01:33 Basic Workflow
05:55 IPAdapter Plus Model
07:32 Prepping Images
09:16 Sending Multiple Images
14:07 Face Model
15:18 SDXL
17:42 Img2Img
19:10 Inpainting
20:00 ControlNet
21:40 Upscaling
23:17 Saving embeds
26:33 Conclusions
🎵 Background Music
-- "Part A" by Alexander Nakarada (www.serpentsoundstudios.com)
Licensed under Creative Commons BY Attribution 4.0 License
-- Last Stop Synthwave by Karl Casey @ White Bat Audio (whitebataudio.com/)
-- CyberPunk City by Peritune (peritune.com/blog/2020/05/22/...)
Comments: 260
Just want to say Thank You Thank You Thank You, from the bottom of my heart. There are very few developers who take the time to actually explain their tools, let alone include additional options such as saving embeddings, which offer huge potential for sharing and extending the workflow with regard to resource management. You are a huge asset and very much appreciated.
Your comfy node (and this video) are invaluable resources! Thanks so much for helping me wrap my head around IP!
Incredibly valuable tutorial. Keep up the good work.
This is so absolutely bonkers, I can't believe how much things have changed in 3 months. I just found your channel and binged your last months of content, and holy whattheheck this is brilliant.
Oh man, this is changing my world. So much we can do with this. And... you explained your tools. Thank you so much!
I was sure I'd never understand how all these things work, especially inside ComfyUI. You're just the best at explaining, your explanations are so clear! Thanks for the knowledge you're sharing with us.
Excellent video! You have such a pleasant style of communicating that it really was a pleasure absorbing all of this information. Well done and thanks!
So much content in a single video, this is amazing... thanks so much!
What a useful tutorial, absolutely fantastic, thanks a lot!
Just want to say thank you. For 2 days I had been searching for a way to inpaint using an image, and in the video you explained it in a very easy way to understand. Thank you very much.
Thank you so much for this AMAZING feature and also the detailed readme plus VIDEOS! We need more people like you! You are an enrichment to the AI / Open Source community
@VincentRie
4 months ago
double down on that. thank u so much for this amazing piece of work.
Thank you for creating this implementation! Very clever solutions for handling workflows! I saw you added weighting options for the images after someone requested it; I was doing it by repeating the same images a few times to increase their weight, which was very messy 😅
Since you have so much expertise and knowledge in this topic, I really look forward to the training model tutorial 😊
@latentvision
7 months ago
I'll work on that, but it's really for kinda edge scenarios and optimization (I guess it can be useful for some art styles)
thank you, this is fantastic -- very well explained. The saving embeds is brilliant!
Wonderful tutorial...very clear and easy to follow...👍🏻
New to ComfyUI. Thanks for this. It was very helpful for someone like me who had heard about IPAdapter but had no clue what it really does.
Thank you for your time and energy on this. This was a great introduction to comfyui.
Awesome. Adding different image ratio inputs and outputs and the ability to give custom weights to batched input images would be a blessing !!! Thx for ur work !
@brandonflores4
3 months ago
I think Scott Detweiler made a video on weighted inputs in one of his ComfyUI episodes. Unsure if it had to do with IPAdapter.
One of the best nodes I've seen for COMFY. Using it to lead renders with my current workflows and results show increased accuracy and detail. SUBBED!!
Just came across your amazing tool. Congratulations and thank you! Amazing applications for this in the future, I think.
Man, I can't thank you enough for this, Bravo. 👏👏👏
This is a game changer and continues to be a game changer. Not to mention you are kind enough to provide not just a video, but A GREAT video on how to use your tool! Thank you a million times.
Wow! Fantastic video, I learned so much, thank you!
This is AMAZING! Your explanation and tricks, omg! I learned a lot!
Great video! So informative and straight to the point 👍 I would love to see your video on the training
Brilliant, thanks so much. Your system and explanation are awesome, I've learnt so much!
Amazing. Thank you for all your hard work.
Amazing tutorial!! Much respect ❤️🇲🇽❤️
Great Job explaining everything , Thank You!!!
I'd classify this as the top 5 informational comfy/SD video to watch. Thank you Mato sir! Also looking forward to the training tutorial.
Amazing work! Very inspirational
Grazie mille for sharing this tool and explaining it so clearly!
Amazing! so many useful tools in one video
Amazing work bud!
Thank you. This is huge to the community
This was very useful and very well explained! Thank you a lot!
Great explanation, and very good benefits you've added to IPAdapter. Thank you so much.
thanks for creating this! Game changer on comfyui!
It's worth the 30 minutes without hesitation.
A fantastic presentation, thanks so much.
Thank you very much for this tutorial! This tool is very powerful, and it is going to make my workflows so much easier to construct.
spectacular presentation, thank you
Great explanation, simple and crystal clear, thank you so much.
Thanks a lot for this one!! Great tutorial!!
thank you so much yo! you are, literally, incredible.
Marvelous, keep up the good work.
Bravo! Fantastic work with IPAdapterPlus, I would be very interested to see also a video on the training process you mentioned. I'm trying to train a style that is quite unique, so I can't just use one image. I'm getting poor result with Lora standard network training, and standard dreambooth training
thanks a lot for your very clear explanations and this awesome tool.
incredible detailed!... thank you !.
Great extension and great tutorial! Awesome! Thanks for this. It was posted on my Discord yesterday, after I released a video about a more basic noise-injection technique just using nodes. I will definitely try this out and, if it's okay with you, introduce it to my German-speaking audience.
@latentvision
8 months ago
hey thanks! I checked your videos, I don't speak German but they are really well done and easy to follow. Absolutely take whatever you want from my video and the content on my repository.
WOWWW! This looks amazing... I'm going to try this out tonight (I may not get any sleep this weekend :) )
Thanks so much for IPAdapter, it's been working nicely in Automatic1111. I still have to learn how to use it thoroughly. More tutorials would be appreciated!
Very good tutorial, thanks
Thanks for the great guide!
Excellent, you are awesome, and thanks very much for the explanation and video.
The best! 🤝👏
Thank you very much for this video and nodes !
thank you soo much, clear and to the point.
Just beginning with it and already seeing these great nodes.
Great video, thank you!
really really nice, many thanks!
Many thanks for Your work!
So useful! Thanks!!
Nice work, thanks
This is magic! Thank you very much...
Great Work please keep going!
nice one, i'm going to use it. thank you.
Thank you!
where do you get the clipvision models that you use?
Wow, very nice. I wish you'd make more videos and tutorials. Thanks, thanks!
More videos like this please !!!
Thank You
Thank you, you do a much better job of explaining than people who paywall content behind Patreon. Thanks for your work on IPAdapter, it's another indispensable tool. Hopefully others will help work toward further amazing improvements to the whole Stable Diffusion scene: less paywalled content on YouTube and such, more open source, and less stagnation.
@latentvision
5 months ago
thanks for your message, I feel the same way. Mind you, I don't think there's anything wrong in asking compensation for quality content, but since I developed an Open Source tool I find it's only fair that I also share the know-how on how to use it. I guess it's the only way we can actually evolve.
@ac3d657
5 months ago
@@latentvision Thank you for your response. Correct, people wouldn't be able to learn from only watching a couple of videos, and you have made understanding Comfy in general a lot easier. Support channels and donations are fine, but hiding what is essentially open and free information, found in the same kind of videos as yours, is very concerning in this community and only invites stagnation in the AI/Stable Diffusion space.
great stuff
Your voice (I don't know if it is your own) sounds a lot like "my name is Giovanni Giorgio" 😊 Thanks so much for your very calm way of explaining and naturally also for your time & energy invested into the development of IPAdapter!!!
Great Job
This tutorial is the best.
@latentvision Is there a way to know how the image is being described internally? I mean, can that text be extracted somehow?
thank u man
TY, me and my ComfyUI love you
Bro, you're the best
Great tutorial! How were you able to get such high denoise on the upscale? Anything over .2 for me starts to change the look.
Thanks for the tools and tutorials! One question though: where can I find the CLIP encoders to download? I am looking everywhere and I can't seem to find the IPAdapter SD15 CLIP encoder anywhere. Also, once I download it, where do I put it so it shows up correctly? Additionally, is there a way to find out the underlying path each node is looking in? (It would make pathing so much easier when manually adding files.)
Hi Matteo, thanks
Congrats!!!
How much does the model affect the outcome? Also, how do you resize the latent image in the img2img section? I'm on a 4GB GPU so I can't run high-resolution images, it would take me ages; I need to resize the latent images. Also, my latent image does not maintain the lines like yours does. Yours is almost like using ControlNet canny, it keeps everything the same and just changes the style; mine changes the entire look of the image, pose, etc.
Where do you get the IPAdapter encoders? Can't seem to find them.
@2PeteShakur
6 months ago
"Additionally you need the image encoders to be placed in the ComfyUI/models/clip_vision/ directory: SD 1.5 model (use this also for all models ending with _vit-h) SDXL model"
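For anyone setting this up from scratch, here is a minimal sketch of the folder layout the quote refers to. The `clip_vision` path is from the quoted docs; the `ipadapter` subfolder and the comments about which encoder serves which model are assumptions based on this thread, so check the extension's README for the current download links and exact filenames.

```shell
# Create the model folders the IPAdapter nodes read from.
# Encoder files come from the h94/IP-Adapter repository on Hugging Face.
mkdir -p ComfyUI/models/clip_vision   # image encoders: ViT-H for SD1.5 and *_vit-h models, ViT-bigG for base SDXL
mkdir -p ComfyUI/models/ipadapter     # the ip-adapter_*.bin / *.safetensors adapter models themselves
ls ComfyUI/models
```

After dropping the files in and restarting ComfyUI, they should appear in the loader nodes' dropdowns.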
Very helpful. Is IPAdapter similar to A1111’s “reference” ControlNet?
I'm having a hard time finding the clipvision model, did you use the 'h94/IP-Adapter' one on huggingface? Thx!
@latentvision
7 months ago
Yes, they are both in the h94 repository on Hugging Face. The "sdxl" one is good only for one model (the base SDXL); for all others you use the ViT-H.
@elowine
7 months ago
@@latentvision Thanks for developing this tool! So much easier to work with IPAdapter when it's not just a command line and a prompt.
I love this tutorial. But at 14:21 you have a Load CLIP Vision node with ipAdapter_image_encoder.sd15.safetensors. I have been looking everywhere for this image encoder but cannot find it. I can only find the CLIP Vision G or the ViT-H tensors. Any tips?
Where can I get the ip-adapter_sd15.bin file for the IP adapter model loader node and in which folder should I put it?
I can't find the image encoder SD15 safetensors. Does someone know where I can download it? Thanks.
thank you
how do you create the mask from the UI?
Ladies and gentlemen, we have found THE GUY! Thank you so much for everything you are doing for the AI community, this is great work and a great explanation. May I ask what the limitation of IPAdapter is in terms of the fed images, like on what basis does it determine the tokens? For example, if I used a line-art image with a realistic checkpoint trained for producing mostly photographic images, will IPAdapter get the tokens from the trained checkpoint or from the IPAdapter model?
@latentvision
5 months ago
thanks for the kind words. IPAdapter is a very strong conditioning but the main checkpoint will always show its character. It's better to pick the right model for the image you wish to generate
Truly amazing. I would give you a Nobel Prize if I could. Thank you Matteo. For upscaling with IPAdapter, why not send it through a ControlNet Tile along with an Ultimate Upscaler? You could create an image with an SDXL checkpoint, then img2img with ControlNet Tile in an SD1.5 checkpoint. The denoise with the tile model should be 0.3 or less to give the closest results to the image.
@latentvision
5 months ago
hey thanks! The point of that segment is to show the strength of the IPAdapter model. Of course you'll have to mix it with other nodes in a real-life scenario. If you already use tile ControlNet, though, you probably don't need to add IPAdapter.
Thanks so much! I applied this combined with Kohya DeepShrink and the results are amazing. Just a question: if I want to take the face using the face adapter and combine it with another adapter that takes the style of the photo, what should I do? I can't find a workflow for that.
@latentvision
5 months ago
you can daisy chain IPAdapters, check my attention masking video, you can see how it works there
@latent-broadcasting
5 months ago
@@latentvision Thanks for your answer! I'll check that video
Hi, thank you so much for your wonderful work! Question: how can we get a clean face replacement? Right now it's merging faces together. Not the most accurate output; the style is preserved, but face details are merged.
@latentvision
7 months ago
I'm not sure that is feasible with the current implementation, unless you do some extensive training I guess. This is not a "face swap" but a "face describer". Be sure to use a closeup portrait of the person and use the PrepImage node, adding a little sharpening if needed.
@xq_le1t0r97
7 months ago
Just use the ReActor face component... generate the image you want... and at the end just replace the face. Don't use a photo of the person you want as the base for the IPA.
Hello, I have installed IPAdapter in custom nodes, but when I tried to add the node I couldn't find Apply IPAdapter.
Have a question. In the SDXL section you've used the SDXL encoder, but the documentation says to use the SD1.5 encoder. Which one is correct?
@latentvision
5 months ago
check the documentation: one SDXL model requires the SDXL encoder, all the others work with the SD1.5 encoder