ComfyUI - Hands are finally FIXED! This solution works with all models!

Film & Animation

Hands are finally fixed! This solution works about 90% of the time in ComfyUI and is easy to add to any workflow, regardless of the model or LoRA you might be using. It uses the MeshGraphormer Hand Refiner, which is part of the controlnet preprocessors you get when you install that custom node suite. We can use the output of this node, as well as its mask, to guide correction in any image (a rough sketch of the graph appears below the links). I also show some of the issues I ran into while working with this solution.
#comfyui #stablediffusion
Gigabyte 17X Laptop is doing the inference today! Grab one here:
amzn.to/3thtfpR
You can grab the controlnet from here, or use the manager:
github.com/Fannovel16/comfyui...
Interested in the finished graph and in supporting the channel as a sponsor? I will post this workflow (along with all of the previous graphs) over in the community area of KZread.
/ @sedetweiler
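
For readers who want the wiring without scrubbing the video: below is a minimal sketch of the graph in ComfyUI's API (prompt JSON) format, queued over the local HTTP endpoint. It is an approximation, not the exact workflow from the video; the checkpoint and ControlNet filenames, prompts, seeds, and the "MeshGraphormer-DepthMapPreprocessor" class name (from comfyui_controlnet_aux) are assumptions to check against your own install.

```python
# Hedged sketch: generate an image, let MeshGraphormer find the hands
# (depth map + inpainting mask), then repaint only the masked hand regions
# with a hand-depth ControlNet at reduced denoise.
import json
import urllib.request

prompt = {
    # Stage 1: the initial image (hands may come out mangled).
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_checkpoint.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "photo of a woman waving at the camera"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "bad hands, extra fingers"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 768, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 111, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    # Stage 2: MeshGraphormer Hand Refiner emits a depth image (output 0)
    # and an inpainting mask (output 1). Your version may expose extra
    # inputs (e.g. bbox padding); check the node in your install.
    "7": {"class_type": "MeshGraphormer-DepthMapPreprocessor",
          "inputs": {"image": ["6", 0], "resolution": 512}},
    # Stage 3: repaint only the masked hands, guided by the hand-depth ControlNet.
    "8": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_sd15_inpaint_depth_hand_fp16.safetensors"}},
    "9": {"class_type": "ControlNetApplyAdvanced",
          "inputs": {"positive": ["2", 0], "negative": ["3", 0], "control_net": ["8", 0],
                     "image": ["7", 0], "strength": 1.0,
                     "start_percent": 0.0, "end_percent": 1.0}},
    "10": {"class_type": "VAEEncode", "inputs": {"pixels": ["6", 0], "vae": ["1", 2]}},
    "11": {"class_type": "SetLatentNoiseMask",
           "inputs": {"samples": ["10", 0], "mask": ["7", 1]}},
    "12": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "positive": ["9", 0], "negative": ["9", 1],
                      "latent_image": ["11", 0], "seed": 222,  # NOT the stage-1 seed
                      "steps": 25, "cfg": 7.0, "sampler_name": "euler",
                      "scheduler": "normal", "denoise": 0.5}},
    "13": {"class_type": "VAEDecode", "inputs": {"samples": ["12", 0], "vae": ["1", 2]}},
    "14": {"class_type": "SaveImage",
           "inputs": {"images": ["13", 0], "filename_prefix": "hands_fixed"}},
}

req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": prompt}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read().decode())
```

The two ideas that matter: the second KSampler denoises only inside MeshGraphormer's mask (via SetLatentNoiseMask), and it uses a different seed than the first pass, which the video calls out as the main gotcha.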

Comments: 200

  • @joelface (4 months ago)

    So cool that this works! Love the ingenuity that it must have taken to figure this all out.

  • @sedetweiler (4 months ago)

    It was a bit of a pain to watch if you check out the live stream from last Saturday. That seed was the major issue.

  • @lennoyl (4 months ago)

    Thanks for all your videos. I was a little lost with all those node versions, but now I'm starting to understand better how to use ComfyUI.

  • @gab1159 (4 months ago)

    Awesome man, trying this now, your tutorials are great and easy to follow. A godsend!

  • @sedetweiler (4 months ago)

    Glad I could help!

  • @RichGallant (4 months ago)

    Hi, that is very cool, and works well for me. Once again your explanations are clear and very simple to follow. As an old guy who learns best by reading, these are great.

  • @sedetweiler (4 months ago)

    Great to hear!

  • @byxlettera1452 (1 month ago)

    The best video I have seen so far. Very clear and it gets to the point. Nothing to add. Thanks.

  • @Marcus_Ramour (4 months ago)

    Very clear and well explained, many thanks for sharing!

  • @ai_materials (3 months ago)

    Thank you for all the useful information!☺

  • @kietzi (3 months ago)

    Very nice tutorial. Looks like compositing^^ So as a comp artist, I love this workflow :)

  • @fabiotgarcia2 (4 months ago)

    Hi Scott! First of all I want to congratulate you on your amazing tutorials. Thank you!! Could you please create another version of this workflow where, instead of using a prompt to create an image, we upload an image?

  • @potusuk (4 months ago)

    Nice follow up, thanks Scott

  • @sedetweiler (4 months ago)

    Any time!

  • @DrunkenKnight71 (4 months ago)

    Thank you! I'm getting great results using this...

  • @davidm8505 (4 months ago)

    This is great. Thank you for making it so clear and simple. Would you happen to have any videos on maintaining consistency of characters across multiple renders? Many situations require more than just one shot of a character, but I find consistency almost impossible to achieve by text alone.

  • @Seffyzero (19 days ago)

    Bold choice, spending 5 minutes setting up nodes you explicitly tell us not to do, only to have those nodes be required steps in the tutorial.

  • @grafik_elefant (4 months ago)

    Wonderful! Thanks for sharing! 👍

  • @sedetweiler (4 months ago)

    Thank you! Cheers!

  • @gimperita3035 (4 months ago)

    So grateful I'm starting to understand how things flow in ComfyUI without feeling too lost. It sounded like Chinese to me a couple of months ago. Now it's like German. Still rough but somehow familiar. 😆 Thank you for this!

  • @sedetweiler (4 months ago)

    Glad it was helpful!

  • @furiousnotch7914 (4 months ago)

    @@sedetweiler I just wanted to know: what are the minimum system requirements for running ComfyUI smoothly, without any problems? Appreciate you 🙂

  • @sedetweiler (4 months ago)

    Probably 4 GB of VRAM.

  • @furiousnotch7914 (4 months ago)

    @@sedetweiler I've tried with 4 GB VRAM and 16 GB RAM; it takes 2:16 hours to generate and upscale 1 image. RTX 4060 8 GB VRAM with 16 GB RAM ✌️ or RTX 3060 12 GB VRAM with 16 GB RAM ✌️ or RTX 3060 8 GB VRAM with 16 GB RAM ✌️... (I have an i7 12th gen.) Which one do you prefer between these three? Don't know which one would be best for faster image generation and upscaling... Thanks for your earlier response 🙂

  • @Renzsu (4 months ago)

    @@furiousnotch7914 VRAM takes priority, the more the better. Then think about the speed of the card. The new 4070 Super seems to be a happy middle ground of the latest generation. Smaller budget? 4060 Ti 16 GB. Bigger budget? Think 4080 Super or 4090. Of the 30 series, I would take the fastest one with at least 16 GB. But honestly, I would save up a bit more and go straight to the 40 series.

  • @rickandmortyunofficial8986 (4 months ago)

    Thank you for making a tutorial by building the nodes manually. It really helps clarify each node's function, unlike other channels which present workflows with ready-made nodes.

  • @MrVovsn (4 months ago)

    Thanks for the tutorial!

  • @sedetweiler (4 months ago)

    You are welcome! Thanks for taking the time to leave a comment. Cheers!

  • @Shirakawa2007 (4 months ago)

    Thank you very much for this!

  • @sedetweiler (4 months ago)

    You're very welcome!

  • @gingercholo (4 months ago)

    Super specific use case: when the subject's hands are literally like the image you're using. If not, the depth maps it comes up with are straight trash.

  • @IrrealKIM (3 months ago)

    Thank you, that works perfectly!

  • @sedetweiler (3 months ago)

    Glad it helped!

  • @AnthonyEspino-ou6ii (4 months ago)

    Hi! Thank you so much! I just became a sponsor! Your videos are so useful as I'm trying to figure out solutions to these types of issues, and I was wondering if you had any ideas for how to fix exposed feet, as these are often in the same place as hands in the initial generation, and I haven't seen any similar depth rec or masking for this particular use case. Would love to hear your thoughts!

  • @sedetweiler (4 months ago)

    I have not seen much of a call for that, but civit probably has some models for it, and you could just use a different mask creation method with this same solution.

  • @sedetweiler (4 months ago)

    Thank you for the sub!

  • @Comenta-san (4 months ago)

    😯so simple. I love ComfyUI

  • @sedetweiler (4 months ago)

    It really is, for such a terrible issue. Cheers!

  • @maxfxgr (4 months ago)

    Amazing video! Learnt so much from this, Scott! A new random question arises: what's the name of the plugin that shows which node is being executed at runtime in the top left? :)

  • @sedetweiler (4 months ago)

    That is from the PythonGoSsssss pack.

  • @ysy69 (2 months ago)

    Thanks for this video. Have you tried to see if this works with SDXL workflows?

  • @preecefirefox (4 months ago)

    Great video, thanks for making it! Have you tried it with a person holding something? I'm wondering how well it works if part of the hand is meant to be not visible 🤔

  • @sedetweiler (4 months ago)

    Not sure, but it is worth trying!

  • @murphylanga (2 months ago)

    Thanks for your video. You can use the global seed if you set the seed in an extra primitive node and fix it.

  • @sedetweiler (2 months ago)

    Cool, thanks

  • @lemonZzzzs (4 months ago)

    now that's pretty cool!

  • @sedetweiler (4 months ago)

    I am loving it!

  • @BabylonBaller (4 months ago)

    Great vid, thanks Scott. Guys, if you're using A1111, it takes just two clicks to enable Hand Refiner in ControlNet and fix hands lol. But the noodles are much more fun, if you have time to kill.

  • @sedetweiler (4 months ago)

    The difference for me is I know how it works. With much of A1111 you check a box and the magic happens. With Comfy you actually control and learn how it all goes together. It is the difference between just eating in a restaurant and also knowing how to cook.

  • @traugdor (4 months ago)

    The Hand Refiner in ControlNet isn't as powerful as the fine control you have in ComfyUI. One-button solutions always have issues. I've used both and always get better results with ComfyUI.

  • @hunhs (4 months ago)

    Good job!

  • @sedetweiler (4 months ago)

    Thank you! Cheers!

  • @antaoalmada1475 (4 months ago)

    I've noticed that this functions well with open hands but not as effectively with hands in a relaxed, close-to-the-body position. Do you have any insights on fine-tuning it to address these scenarios? Thanks a bunch for the excellent tutorials!

  • @goodie2shoes (2 months ago)

    same here. it's not the solution I was hoping for. It erh.. kinda sucks

  • @technoprincess95 (4 months ago)

    It cannot be completely eradicated. Only post-processing with pts AI can help, and sometimes, even when the hand is quite stable, it may add an extra glove or piece of steel on the hand.

  • @risewithgrace (4 months ago)

    Thank you! Can you share how to do this with moving hands in a video?

  • @b4ngo540 (4 months ago)

    use the "image batch to image list" node as input for this hand fixer

  • @mistraelify (3 months ago)

    Well, that works fine with big hands but not very well with like 3-4 characters in the picture and little hands, closed hands, specific poses. Sometimes the MeshGraphormer gives bad results. But it's definitely the path to use for correcting details without altering too much of the original seed. I'm impressed by how well that works.

  • @dannyvfilms (4 months ago)

    Great stuff! Do you know if there's a community node for Invoke for this? I'm not sure how interchangeable or inter-compatible the nodes are.

  • @sedetweiler (4 months ago)

    I don't know. I love the Invoke project for a lot of reasons, but I just have not used it lately, as I live in Comfy most of the day.

  • @oldmanliving (4 months ago)

    Please try different hand poses and you will see it never fixes hands. When the ControlNet depth preprocessor gives you a bad depth hand, you will still get a bad hand. Even if it gives you a well-preprocessed depth hand, for different hand poses it will still generate flipped or reversed bad hands. I am so sorry to tell the truth.

  • @sedetweiler (4 months ago)

    It isn't perfect, but again this works in 90% of the situations where we get bad hands.

  • @DanielWoj (2 months ago)

    I would say that it improves 50% of photo-like images, but maybe 10-20% of paintings or some low-CFG styles.

  • @RamonGuthrie (4 months ago)

    Thanks for this video

  • @sedetweiler (4 months ago)

    Most welcome

  • @lumbagomason (4 months ago)

    One more thing you can do is send the final image to Fooocus image prompt > inpaint > improve face, hands (2nd option), paint both hands, and use the quick prompt called "detailed hand". Edit: This is AFTER you have refined the hands using the above tutorial.

  • @sedetweiler (4 months ago)

    Thanks for sharing!

  • @b4ngo540 (4 months ago)

    @@sedetweiler This sounds interesting. If you tested this and think it's effective, we would love to see a part 2 of this video doing these extra steps for a perfect result.

  • @ThoughtFission (1 month ago)

    Hey Scott, really surprised you're not ahead of the curve with something about an SD3 how-to.

  • @xaiyeon_xiuzhen (4 months ago)

    ty for sharing !

  • @sedetweiler (4 months ago)

    My pleasure!

  • @alexanderschlosser7987 (4 months ago)

    Thank you so much for another amazing tutorial! I'm trying to figure out the best way to combine this with the refiner. Would I go through both the base and the refiner for the full image first, and then do base and refiner again for only the hands? I tried something like that, but the results are not that great, as the hands don't really match the visual quality of the rest of the picture.

  • @sedetweiler (4 months ago)

    I would refine at the very end.

  • @alexanderschlosser7987 (4 months ago)

    @@sedetweiler Refine everything together, you mean? How would you do that if you want to do 80% of the processing in the base and 20% in the refiner? Fix the hands even with some noise from the base left?

  • @sedetweiler (4 months ago)

    Yup, that is what I would do. Since the position of the fingers is probably already determined by that time, additional refinement isn't going to undo that.

  • @alexanderschlosser7987 (4 months ago)

    Thank you, I really appreciate your input!

  • @korinlifshits8780 (2 months ago)

    Hi. Great content, thank you. Where is the workflow JSON for the video? Thank you.

  • @sedetweiler (2 months ago)

    They are in the community tab here on KZread. That is the only method they give us for communication, unfortunately. Thank you for supporting the channel!

  • @scottownbey9340 (4 months ago)

    Scott, great stuff! I ran into some snags applying this to a workflow with 2 other controlnets (Depth + OpenPose). I'm not using Advanced ControlNet for the other 2, and just 1 KSampler. Do I need 2 KSamplers like in your video?

  • @sedetweiler (4 months ago)

    The first one creates the flawed image, the graphformer can then spot the hands, and the second sampler fixes them. So I am using 2 samplers for that reason. Because this works so well with just depth, I am not throwing all the controlnets at it; it just works as-is quite often.

  • @scottownbey9340 (4 months ago)

    I got my workflow to work with one KSampler using a 1.5 model (I'm using ControlNet for the body (DWOpenPose + Depth) and now MeshGraphormer) and got to the point where I generated great hands, but the image totally changed. So I added the Set Latent Noise Mask with samples going into an Empty Latent Image (replacing the one from the KSampler), and now the image is totally gone. So frustrating, as I was almost there. Any guidance would be appreciated.

  • @scottownbey9340 (4 months ago)

    Got it working! Thanks @@sedetweiler

  • @sedetweiler (4 months ago)

    Awesome! It sounded like you were SO close! That is great news!

  • @RhapsHayden (17 days ago)

    @@scottownbey9340 Did you end up adding another KSampler or staying with one?

  • @ryzikx (4 months ago)

    all models! nice

  • @sedetweiler (4 months ago)

    cheers!

  • @NotThatOlivia (4 months ago)

    HUGE THANKS!!!

  • @sedetweiler (4 months ago)

    sure! happy Friday!

  • @BrunoBissig (4 months ago)

    Hi Scott, thanks for the update. I'm also trying this with img2img but I can't get it to work properly. Maybe an idea for another video?

  • @sedetweiler (4 months ago)

    Sure! That should be as simple as replacing the empty latent with a VAE-encoded image and using the samples off of that (see the sketch below).
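
    A hedged sketch of that swap, in the same API-prompt style as the example in the description ("photo_with_bad_hands.png" is a placeholder filename, and node ids refer to that earlier sketch):

```python
# img2img variant: replace the EmptyLatentImage node ("4") with a loaded,
# VAE-encoded picture, then sample from those samples instead.
img2img_patch = {
    "4a": {"class_type": "LoadImage",
           "inputs": {"image": "photo_with_bad_hands.png"}},
    "4b": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["4a", 0], "vae": ["1", 2]}},
}
# Then point the first KSampler's "latent_image" at ["4b", 0] instead of
# ["4", 0], and drop its denoise below 1.0 so the source image survives.
```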

  • @BrunoBissig (4 months ago)

    Hi Scott, now it works. I think my input image was not the right choice for that. I changed it to the girl in your video with six fingers as input, and now it's fixed and I get five fingers. Thanks! @@sedetweiler

  • @teambellavsteamalice (2 months ago)

    Is there a way to split the image into background and person, fix the hands, and then recombine? Maybe also model the pose (body and hands), so any animation of it can be done very precisely and consistently?

  • @atomicchewbacca1663 (2 months ago)

    I got to the point where the MeshGraphormer is added to the UI; however, all it generates is a black box. I installed the ComfyUI Manager and such. Are there some videos I should go back and watch before trying the methods in this video?

  • @jbnrusnya_should_be_punished (16 days ago)

    I would like to try it, but I can't see the workflow attached here or in the community tab. Although I'm not sure if it will work due to hardware limitations (RX 580) and software differences (SD 1.5, torch, nodes).

  • @SleepyBeeASMR (3 months ago)

    I have some problems; I can't figure out why the hands are smaller in the mask, and when inpainting it's like they render smaller.

  • @lesserdak (4 months ago)

    I made it to 4:30 and then nothing shows up in ControlNet models, "undefined". I went to Manager > Install Custom Nodes > Fannovel16, which says "NOTE: Please refrain from using the controlnet preprocessor alongside this installation, as it may lead to conflicts and prevent proper recognition." Not sure how to proceed. Is my ComfyUI installation bad?

  • @LGCYBeats (4 months ago)

    Same here, not sure what I'm doing wrong

  • @lilillllii246 (4 months ago)

    Hello! Is there a way to integrate two JSON files with different functions in ComfyUI? One does the inpaint function, and the other maintains a consistent character through FaceID, but I'm having trouble linking the two.

  • @lumbagomason (4 months ago)

    Thanks man

  • @sedetweiler (4 months ago)

    Any time

  • @sedetweiler (4 months ago)

    yup!

  • @drozd1415 (4 months ago)

    Do you have any solution if I'm getting a "new(): expected key in DispatchKeySet(CPU, CUDA, HIP, XLA, MPS, IPU, XPU, HPU, Lazy, Meta) but got: PrivateUse1" error while using MeshGraphormer? PS: Great video, I just would like to make it run on my AMD PC xD

  • @Enu_Vibe (4 months ago)

    I used to enjoy your Midjourney tutorials and workflows. Can I ask why you stopped? Now that the models are even more powerful, I wish we could turn to experts like you.

  • @sedetweiler (4 months ago)

    I guess I just need to make some. I have a few ideas for them that I have not seen covered. Thank you for the suggestion!

  • @AlIguana (4 months ago)

    Amazing! I couldn't get it to work though; it won't detect the hands (the "display mask" box is just a black square every time, and I can't work out why). Still... something to work on :)

  • @dmarcogalleries254 (1 month ago)

    Can you next time go more into the SD3 Creative Upscaler? I don't find much info on it. So you don't use it with a 2k image? It says 1000 or less? I'm trying to figure out if it is worth it at 25 cents per upscale. Thanks!

  • @Lunarsong. (4 months ago)

    This may be a dumb question, but does this process also work for cartoon/anime models?

  • @rakly3473 (2 months ago)

    How do you make it so you don't see the 2 squares in your end image where it repainted the hands? You can even see them in your YouTube video.

  • @alexmehler6765 (1 month ago)

    Does it also work on hands which don't wave directly at the camera, or for cartoon models? I don't think so.

  • @RhapsHayden (19 days ago)

    Should I run MeshGraphormer before or after ReActor?

  • @keylanoslokj1806 (4 months ago)

    How do you get this level of control, though, with Colab notebooks and Python code?

  • @zdvvisual (4 months ago)

    Hi, thank you for this idea, but I had a problem. I generated 3 persons, but the refiner only got one person's left and right hands; the second and third persons' hands were not detected. So I only fixed one person's hands. What is the problem here?

  • @lmbits1047 (4 months ago)

    For some reason the hands in the picture I am trying this on don't get detected. I guess this method only works for hands that are already clear enough that they are hands.

  • @michaspringphul (3 months ago)

    Does that work for all kinds of hand positions? E.g. hands grabbing a handle, hands typing on a keyboard or piano, hands clapping...

  • @NikolaiBloom (4 months ago)

    Can you share a downloadable workflow for this?

  • @SaschaFuchs (4 months ago)

    It doesn't work with every model either. Graphormer has problems with hands that originate from 2D or 2.5D models. Apparently the depth information that Graphormer needs to recognize that they are fingers is missing.

  • @sedetweiler (4 months ago)

    So far I have had great luck with it, even using non-AI images as starting points. I think it is a pretty flexible tool.

  • @RussellThomason (3 months ago)

    This only seems to detect 1 set of hands even when there are multiple people, and it doesn't detect parts of fingers or hands that are occluded. And there are very often noticeable artifacts around the bounding boxes themselves, even if the hands are done well. Any ideas how to refine this?

  • @wootoon (2 months ago)

    I can use it normally with the SD 1.5 model, but I always get an error when I use an SDXL model.

  • @user-ro6qy3hf8j (4 months ago)

    Can this be used with an image as input?

  • @Gradashy (9 days ago)

    I have installed the ControlNet, but that node does not appear for me.

  • @BiancaMatsuo (1 month ago)

    Is this possible to be done with other WebUIs, like Forge WebUI?

  • @marcihuppi (4 months ago)

    I clicked "Update All" in the Manager, and now my Comfy doesn't work anymore. I get this error: raise AssertionError("Torch not compiled with CUDA enabled") AssertionError: Torch not compiled with CUDA enabled. Any ideas how to solve this? Everything worked fine before the update.

  • @bronsonvdbroeck (1 month ago)

    The ControlNet model doesn't work with an AMD setup, save the time homies.

  • @kleber1983 (4 months ago)

    Funny how I have the proper ControlNet installed, but I don't have this specific one for hands... What am I doing wrong? Thx.

  • @sedetweiler (4 months ago)

    Check that you are up to date and have restarted.

  • @paultsoro3104 (2 months ago)

    Can this handle an image of a couple holding hands? Thanks. It's impossible in Krita and Firefly; I tried it already.

  • @cstar666 (3 months ago)

    Is there anything similar in the works for FEET?!

  • @SuperFunHappyTobi (2 months ago)

    I am getting an error: "Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320)". Does anyone know what this is? I am running the inpaint depth hand ControlNet model that is recommended on the GitHub. Seems to be an error with the KSampler.

  • @SLAMINGKICKS (4 months ago)

    I have two GPUs. How do I make sure ComfyUI is using the more powerful of the two Nvidia cards?

  • @radiantraptor (1 month ago)

    I can't figure out how to make this work. Even if the MeshGraphormer produces good results and the hands look nice in the depth map, the hands in the final image often look worse than in the image before the MeshGraphormer. It seems the second KSampler messes up the hands again. Is there anything to avoid this?

  • @sedetweiler (1 month ago)

    You can always use a different model for the 2nd sampler. Be sure you use a different seed! That was one I tripped over.

  • @chucklesb (1 month ago)

    @@sedetweiler Wish this helped. I'm using the same model you are in the video and it just makes it worse.

  • @___x__x_r___xa__x_____f______ (4 months ago)

    Hi Scott, where is the wf please?

  • @sedetweiler (4 months ago)

    wf? sorry, not sure I follow.

  • @___x__x_r___xa__x_____f______ (4 months ago)

    @@sedetweiler that's ok, I tested your workflow for Graphormer

  • @Catapumblamblam (4 months ago)

    My ControlNet model list is empty and I can't find where to download them.

  • @sedetweiler (4 months ago)

    If you go to the Git page for any node suite by clicking on its name in the Manager, it will tell you what additional files or models are needed and where to get them.

  • @Catapumblamblam (4 months ago)

    @@sedetweiler At 4:20, when you are selecting your model in the ControlNet list, your list is full of models; my list is empty!

  • @Catapumblamblam (4 months ago)

    @@sedetweiler And another question: does it work on text-to-video?

  • @gelisob (3 months ago)

    Same, "Load ControlNet Model" box list is empty. I did get the mesh things when installing the Fannovel16 pack, but that list is empty... continuing to look for an answer.

  • @beatemero6718 (4 months ago)

    The MeshGraphormer puts out only a black image. I have everything installed and updated. Any help?

  • @sedetweiler (4 months ago)

    Hmm, is it not seeing the hands at all? If they are really messed up, it will not see them. I would just check the mask to see if it found them.

  • @beatemero6718 (4 months ago)

    @@sedetweiler I tested it again with a simple prompt of a waving woman, using an empty latent image and a resolution of 832x1216 (using a custom SDXL merge), and it works fine. The first time I tried, I did img2img of a stylized toon character whose output hands already looked quite alright. However, the MeshGraphormer refuses to recognize the hands of said character.

  • @sedetweiler (4 months ago)

    It might not be good with cartoons. Not sure, I don't tend to go for that type of artwork personally.

  • @beatemero6718 (4 months ago)

    @@sedetweiler Yeah, that's what I expected, and it seems to be the case. It doesn't properly recognize cartoony proportions, even though in my opinion cartoony hands come out better in general, since they are bigger and give Stable Diffusion more space to generate them a bit better.

  • @dflfd (4 months ago)

    @@beatemero6718 maybe you could try DWPose?

  • @LouisGedo (4 months ago)

    Hi Scott

  • @sedetweiler (4 months ago)

    hi there!

  • @playlistening9528 (4 months ago)

    can you pastebin the workflow?

  • @sedetweiler (4 months ago)

    It is in the community area on YouTube for channel sponsors.

  • @alexlovsky7217 (4 months ago)

    @@sedetweiler ugh

  • @dapper5314 (3 months ago)

    Hi Scott, any new videos? There's some new stuff we need to learn!

  • @user-yi2zu9cb6l (2 months ago)

    The MeshGraphormer hand refiner does not work...

  • @tomasm1233 (1 month ago)

    Hi Scott. Is it possible to use ComfyUI to do inpainting on a pre-existing image?

  • @VigneshoViyan (4 months ago)

    Make a video about upscaling.

  • @sedetweiler (4 months ago)

    I did a few of those. Something specific you want to see?

  • @VigneshoViyan (4 months ago)

    @@sedetweiler Like Magnific AI: upscale with extra details.

  • @salturpasta6204 (4 months ago)

    Not trying to sound facetious here, but surely it would be far less of a ballache just to Photoshop extra fingers out; far quicker 🤷‍♂️

  • @DaemonJax (3 months ago)

    Yeah, those original image hands were already pretty great; I'd just fix them in Photoshop. I guess this method is fine for people with ZERO artistic ability.

  • @juukaa648 (2 months ago)

    Thx sir!

  • @lanoi3d (1 month ago)

    Thanks, this video made me realize SD isn't for me. This is WAY too complicated. It's no wonder now why most AI art at high res looks like crap if you look closely at the details.

  • @miketoriant (4 months ago)

    anyone compared this to handdetailer?

  • @sedetweiler (4 months ago)

    No idea. This seems to be super simple to deploy and works on anything I have thrown at it.

  • @goliat2606 (4 months ago)

    I generated a woman's face that I like very much. Is there any way to apply this face to the rest of the images I generate? ReActor and other face-replace nodes make the face very blurry when the image is bigger than 512x512.

  • @dflfd (4 months ago)

    use IP adapter

  • @goliat2606 (4 months ago)

    @@dflfd I tried IPAdapter too, and the results are similar. The face is either blurry or much different from the source face :(.

  • @EternalKernel (4 months ago)

    Try GroundingDinoSam

  • @nickleviathan3186 (4 months ago)

    Does not work well at all, sorry.

  • @crazyaz7161 (7 days ago)

    What do you mean? It fixes hands like 80% of the time.

  • @Akame_zs (2 months ago)

    And it still didn't turn out good...

  • @BlackDragonBE (1 month ago)

    Great videos, but those live streams in the playlist exclusive to members are really annoying. I love the way you explain things, but those really kill the vibe for me. More than half of the playlist are those live streams and almost every video includes "if you watched the last live stream...". I wish I did, but I don't have the money. Thanks for what you do, but I'm going to look elsewhere.

  • @VanillaIceCoffee (4 months ago)

    it's not a fix, that's a hack

  • @sedetweiler (4 months ago)

    True. We are working around a weakness. But as the models get better, we can probably stop needing to rely on things like this.

  • @VanillaIceCoffee (4 months ago)

    @@sedetweiler Unless it gets heavily trained on high-definition hands, it's not gonna get any better. We need a bit larger encoding (at a cost) and a decoder trained on hands too; too much detail is getting lost.

  • @saultigh4304 (4 months ago)

    After switching to ComfyUI, I realized just how ugly and clunky A1111 is :)

  • @sedetweiler (4 months ago)

    It is also pretty restrictive. Yea, people have the noodles, but it is actually cleaner in the long run as you know exactly what is going on.

  • @LastEsper (2 months ago)

    Not a particularly useful workflow. It only fixes the most simplistic hand deformations, such as open hand poses as in this example. Any form of gesture complexity and it doesn't work (e.g., a deformed hand holding onto a sword hilt, or similar cases of occlusion such as a subject placing their left hand on top of their right).
