AI Face & Hand Replacement Tutorial: Mastering Comfy UI Impact Pack Part 1

Science & Technology

🚀 Dive into our latest tutorial where we explore the cutting-edge techniques of face and hand replacement using the Comfy UI Impact Pack! In this detailed guide, we install and effectively utilize the Comfy UI Impact Pack, demonstrating step-by-step how to replace and refine facial and hand elements in your AI-generated images.
🔍 Starting from the basics, we install the Comfy UI Impact extension. We then look at practical demonstrations using various models, including combining SDXL & SD 1.5.
🌟 Special thanks to our Patreon supporters for making these in-depth tutorials possible. Your contribution helps in enhancing the quality and frequency of our content.
🔗 Relevant Links:
Comfy UI Manager Tutorial: Link Pending
Blog Post/Workflows: endangeredai.com/comfy-face-1/
🅿: / endangeredai
👍 If you find this video helpful, please like, subscribe, and consider supporting us on Patreon. Your feedback is invaluable, so drop your thoughts and any successful combinations you've discovered in the comments below! Stay tuned for our next video, where we delve deeper into each component of the Comfy UI Impact Pack.
🎬 Happy Creating!

Comments: 35

  • @whatwherethere · 7 months ago

    Nice, good level of explanation. It is crazy how Civitai gives the impression you can one-shot a perfect image from a base checkpoint. The more basic info the better.

  • @EndangeredAI · 7 months ago

    Oh, for sure! Civitai images have a lot of work put into them and oftentimes use additional LoRAs and models.

  • @ysy69 · 8 months ago

    Thanks for this great step-by-step tutorial!

  • @EndangeredAI · 8 months ago

    Glad it was helpful!

  • @pmaegerman · 6 months ago

    Amazing, thank you! I'm a beginner and it was easy to follow ;) Happy with the results I get.

  • @EndangeredAI · 5 months ago

    So happy to hear it was helpful

  • @lukeovermind · 5 months ago

    Good tutorial. Trust me, there is a real need for in-depth Comfy tutorials; some of us want to know the why and how of what we're using so we have a better understanding. Your KSampler video is a good example, as is the latest video from the creator of IPAdapter.

  • @hatuey6326 · 7 months ago

    Very good tutorial! I've tried it with ReActor and it gives very good results!

  • @EndangeredAI · 7 months ago

    Great to hear!

  • @b.radical · 8 months ago

    Great vids so far; keep it simple. Some of the other ComfyUI masters (Scott) assume we know a lot of stuff, and I get very lost.

  • @EndangeredAI · 6 months ago

    Glad it helps! That’s the intention behind my videos!

  • @TailspinMedia · 3 months ago

    In my experience, FaceDetailer really shines when dealing with images where the person/subject is farther back in the distance, for example mid-range or full-body shots. However, for up-close images you're probably better off just using a refiner or a good upscaler.

  • @EndangeredAI · 3 months ago

    I’d love to know more about your upscale process. I’ve found you still need a decent face to upscale. I’m actually planning a dedicated video on upscaling, as there are so many models and approaches now, so I’d love your insight.

  • @TailspinMedia · 3 months ago

    @EndangeredAI I use Ultimate SD Upscale if it's a close-up image of the face, with an upscaler model like SkinDiffDetail Lite and a denoise range of maybe 0.3-0.45. If the face is a bit farther back/zoomed out, I do use FaceDetailer, but sometimes the result can be optimized further by then running the image through an img2img workflow with denoise at a low-to-mid range (0.3-0.5) to keep close to the original (you can also try using the mask editor in the Load Image node to focus on the face only, if that's the desire). With img2img you can experiment with different models and CFG levels to see which creates the best face result.
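    The rule of thumb in this thread (close-ups through Ultimate SD Upscale at roughly 0.3-0.45 denoise; farther-back faces through FaceDetailer plus an optional low-denoise img2img pass) can be sketched as a tiny decision helper. Note that `choose_settings` and its return shape are purely illustrative, not part of any ComfyUI API:

    ```python
    # Illustrative sketch only: choose_settings is a hypothetical helper that
    # encodes the denoise ranges from the comment above, not a ComfyUI call.

    def choose_settings(shot: str) -> dict:
        """Pick a refinement path and denoise range by how close the face is."""
        if shot == "close-up":
            # Close-up faces: Ultimate SD Upscale with a skin-detail upscaler
            # model, denoise roughly 0.30-0.45.
            return {"method": "ultimate_sd_upscale", "denoise": (0.30, 0.45)}
        # Mid-range / full-body shots: FaceDetailer first, then an optional
        # img2img pass at low-to-mid denoise (0.30-0.50) to stay close to
        # the original image.
        return {"method": "face_detailer_then_img2img", "denoise": (0.30, 0.50)}
    ```

    The point of the split is that a close-up face already has enough pixels to refine directly, while a distant face needs FaceDetailer's crop-and-redraw step before any global pass is worthwhile.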

  • @-Belshazzar- · 6 months ago

    Thanks for the video! I had the same issue as you with the LoRA. What I discovered with my own trained LoRA (though I am not sure if that is a real solution) is that I needed to add the activation keyword of the LoRA; adding the LoRA itself was not enough. So I mean, for example, "photo of my loraName".

  • @dondandojo · 4 months ago

    13:02 Don't know exactly how, but it worked on the first try in my case despite having force_inpaint off. I guess it's because I loaded the image, put it into the KSampler, VAE-decoded it, and then fed the image into the input of the face detailer pipe. Maybe you know the answer. I'm generally surprised when something suddenly works on my side rather than the other way around XD Maybe they updated something and it's working now despite force_inpaint being off. Anyway, really amazing video. Thank you so much!

  • @EndangeredAI · 4 months ago

    Glad it helped! Make sure you watch part 2! We build on concepts in the first part to get better results.

  • @cemilhaci2 · 4 months ago

    Thank you! I couldn't find the prompt; can you share it in the comments?

  • @Narco_org · 8 months ago

    wooooow nice

  • @EndangeredAI · 8 months ago

    Glad you like it

  • @lmbits1047 · 5 months ago

    To me, doing tests while the seed is randomized every time makes them too hard to judge. I would keep it fixed the whole time in order to have some kind of control over what you are doing.

  • @Narco_org · 8 months ago

    Can I request a tutorial? If you could provide a tutorial on training in Comfy UI, I really need it. You really teach in the best way, and I understand your tutorials very well. Sorry, my language is not English; I apologize if there is a problem in writing this message.

  • @EndangeredAI · 8 months ago

    Sure! Could you tell me more about what you need?

  • @Narco_org · 8 months ago

    @EndangeredAI Thank you, I need a tutorial on how to train a model to create images with my face. In other words: how to add my face to a model and then have it create images with my face :)

  • @AmerikaMeraklisi-yr2xe · 5 months ago

    Where is that workflow? I couldn't find your site. I found the hands JSON, but it doesn't work on faces.

  • @EndangeredAI · 5 months ago

    Let me check; I’ll get back to you later today.

  • @Avalon19511 · 6 months ago

    I tried dropping the image into Comfy, but it doesn't give any workflow, unless I missed something in the link you gave?

  • @EndangeredAI · 5 months ago

    Avalon, drop by the Discord and remind me, I'll send you the JSON. I need to upload it to the website as well. It seems images are getting converted to WebP and losing the workflow. discord.gg/Cdn3T768

  • @yiluwididreaming6732 · 4 months ago

    workflow??

  • @Avalon19511 · 6 months ago

    This is so much easier in A1111; all you do is enable it and done. I don't understand why Comfy couldn't have the same simple functionality: pull up the node, attach, and enable, done? It's like everything is so overthought. If A1111 ever gets optimized to work as fast as Comfy and is as memory efficient, I'm going back to A1111.

  • @EndangeredAI · 5 months ago

    I feel you. Comfy UI is about expandability and customisability on the fly. Although everything in this tutorial can be done in Automatic1111, this tutorial is meant to get you started with some foundational skills around the nodes that are used. With these nodes you can do a variety of more nuanced experiments that would be more tedious to do in A1111, such as using a different model to do the face inpainting, or even separately detailing faces of different sizes depending on the sizing details you give the detector. If you want the straightforward approach, I would agree A1111 is definitely the way to go. However, if you want more control and want to experiment with different approaches and techniques, Comfy UI gives you a plethora of options.

  • @lukeovermind · 5 months ago

    Honestly, I'm getting tired of the comments about how “complicated” Comfy is in relation to A1111. Comfy is only as complicated as your willingness to learn, accepting that it is going to be hard at first but knowing it will get easier.

  • @sebastianlin2960 · 7 months ago

    I don't have the Ultralytics detector somehow.

  • @EndangeredAI · 7 months ago

    Is it not appearing in your comfy manager as an option to install?

  • @ricperry1 · 7 months ago

    When you start ComfyUI, do you get any error messages or any custom nodes that failed to load? If so, you might need to update to the most current version of ComfyUI and Comfy Manager.
