Hand FIXING ControlNet - MeshGraphormer

Howto & Style

MeshGraphormer is a hand-fixing preprocessor for ControlNet. I built a ComfyUI workflow for you and will explain step by step how it works.
👐 Elevate your digital artwork with Graphormer's advanced Depthmap generation, providing lifelike and realistic hand anatomy. 🎨 This tool opens up a world of possibilities for correct hands and expressive hand gestures
#### Links from the Video ####
my Workflow: openart.ai/workflows/NkhzwEW8...
ControlNet Aux: github.com/Fannovel16/comfyui...
Hand Inpaint Model: huggingface.co/hr16/ControlNe...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorials.podia.com/new...
Support me on Patreon: / sarikas

Comments: 230

  • @Douchebagus
    @Douchebagus · 4 months ago

    Hey, Olivio, been watching your videos for a while now. I just wanted to say what an absolute help your guides are and I am thankful that your channel exists. I tried ComfyUI a few months ago but I gave up because of the large learning curve, but with your help, I'm not only cruising through it but learning faster than I thought possible.

  • @goldholder8131
    @goldholder8131 · 4 months ago

    The way you articulate how the nodes relate to each other is just fantastic. And your workflows are a fantastic place to learn about the flow of processing all these complex things. Messing with variables here and there makes it a fun scientific project. Thanks again!

  • @BrunoBissig
    @BrunoBissig · 5 months ago

    Hi Olivio, that's simply great, thank you very much for the workflow!

  • @knightride9635
    @knightride9635 · 5 months ago

    Thanks ! A lot of work went into this. Happy new year

  • @seraphin01
    @seraphin01 · 5 months ago

    awesome video, it was still a struggle even with controlnet to fix those pesky hands.. gonna give it a try with this setup, you're amazing, happy new year!

  • @jenda3322
    @jenda3322 · 5 months ago

    As always, your videos are fantastic. 👍👏

  • @nikgrid
    @nikgrid · 4 months ago

    Thanks Olivio..excellent tutorial

  • @76abbath
    @76abbath · 5 months ago

    Thanks a lot for the video Olivio!

  • @henrischomacker6097
    @henrischomacker6097 · 5 months ago

    Excellent video! Congratulations.

  • @mosske
    @mosske · 5 months ago

    Thank you so much Olivio. Love your videos! 😊

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    My pleasure!

  • @daviddiehn5176
    @daviddiehn5176 · 4 months ago

    Hey Olivio, I integrated the mask captioning etc. into my workflow, but now the same error occurs every time. I tried around a bit, but I am still clueless. Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320) ... ( 100 lines complex code )

  • @fingerprint8479
    @fingerprint8479 · 5 months ago

    Hi, works with A1111?

  • @RamonGuthrie
    @RamonGuthrie · 5 months ago

    This might be your most liked video ever... hand FIXING, the Holy Grail of AI

  • @rileyxxxx

    @rileyxxxx

    5 months ago

    xD

  • @hippotizer
    @hippotizer · 5 months ago

    Super valuable video, thanks a lot!

  • @micbab-vg2mu
    @micbab-vg2mu · 5 months ago

    Very useful - thank you.

  • @GrocksterRox
    @GrocksterRox · 5 months ago

    Very creative as always Olivio!!!

  • @diegopons9808
    @diegopons9808 · 5 months ago

    Hey! Available on A1111 as well?

  • @vincentmilane
    @vincentmilane · 5 months ago

    ERROR: When loading the graph, the following node types were not found: AV_ControlNetPreprocessor. Nodes that have failed to load will show as red on the graph. I tried many things, it always pops up.

  • @caffeinezombies

    @caffeinezombies

    4 months ago

    I still have this issue as well, after following many suggestions on installing other items.

  • @user-ln7ti5ki5z

    @user-ln7ti5ki5z

    3 months ago

    Same here

  • @user-ln7ti5ki5z

    @user-ln7ti5ki5z

    3 months ago

    I solved this issue by opening the manager and then clicking "Install Missing Custom Nodes"

  • @jakubjakubjakubjakubjakub

    @jakubjakubjakubjakubjakub

    2 months ago

    @@user-ln7ti5ki5z That works! Thank you!

  • @lauracamellini7999
    @lauracamellini7999 · 5 months ago

    Thanks so much olivio!

  • @maxfxgr
    @maxfxgr · 5 months ago

    Hello and have an awesome 2024

  • @skycladsquirrel
    @skycladsquirrel · 5 months ago

    Great job Olivio! Let's give you a five finger hand of applause!

  • @amkire65
    @amkire65 · 5 months ago

    Great video. I find that the depth map looks a lot better than the hand in the finished image, I'm not too sure why it changes quite so much. It's cool that we're getting closer, though... what I'm really after is a way to get consistent clothing in multiple images so I don't have a character that changes clothes in every panel of a story.

  • @user-fu5sz4su8u
    @user-fu5sz4su8u · 5 months ago

    Can I use this in A1111????

  • @hwj8640
    @hwj8640 · 5 months ago

    Thanks for sharing!

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    My pleasure

  • @Ulayo
    @Ulayo · 5 months ago

    A little late comment, but you don't need to do a VAE decode -> encode. There's a node called "Remove latent noise mask" that removes the mask so you can keep working on the same latent. (Every time you go between latent and pixel space you lose a little quality, as the decode/encode process is not lossless.) Also, you would probably get a little less sausage-like hands if you lowered the denoise a bit to somewhere in the 0.7-0.9 area.
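
    The rewiring described in this thread can be sketched as a node diagram (node titles are written from memory and may differ slightly from the exact ComfyUI names):

```
KSampler #2 (hand inpaint) ── LATENT ─┬─► VAEDecode ── IMAGE ─► MiDaS depth preprocessor
                                      │
                                      └─► Remove Latent Noise Mask ── LATENT ─► Upscale Latent ─► KSampler #3
```

    The point: the image branch is only for the preprocessor; the latent itself never round-trips through the VAE, so no quality is lost between samplers.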

  • @zoybean

    @zoybean

    5 months ago

    But then it doesn't show an image output so how would I do that for the midas preprocessor step?

  • @Ulayo

    @Ulayo

    5 months ago

    @@zoybean You still decode the latent to get an image for the midas step. Just connect that same latent to a remove noise mask and pass that to the upscale latent node.

  • @beatemero6718

    @beatemero6718

    4 months ago

    I don't quite understand. You need the decode to pass the image to the MeshGraphormer. The Remove Noise Mask node only has a latent input and output, so how would you not need the decode?

  • @Ulayo

    @Ulayo

    4 months ago

    @@beatemero6718 I may have worded my reply a bit wrong. You still need to decode the latent to get an image that you pass to the preprocessor. But you shouldn't encode that image again. Just add a remove latent noise mask to the same latent and send it to the sampler.

  • @beatemero6718

    @beatemero6718

    4 months ago

    @@Ulayo I got you.

  • @ysy69
    @ysy69 · 5 months ago

    happy new year olivio!

  • @HolidayAtHome
    @HolidayAtHome · 5 months ago

    That's great! Would love to see some examples of more complicated hand positions or hands that are partly covered by some objects. Does it still work then or is it unusable in those scenarios ?

  • @sb6934
    @sb6934 · 5 months ago

    Thanks!

  • @Gabriecielo
    @Gabriecielo · 5 months ago

    Thanks for the tutorial, the result is amazing and saves a lot of Photoshop time. I found there are several limitations too. It focuses on fixing fingers, but if the same person has two right hands, this model doesn't seem to fix it; maybe I didn't find the right way to tune it? And it's SD 1.5 only and can't work with SDXL checkpoints for now; hope it gets updated later.

  • @BackyardTattoo
    @BackyardTattoo · 5 months ago

    Hi Olivio, thanks for the video. How can I apply the workflow to an imported image? Is it possible?

  • @hatuey6326
    @hatuey6326 · 5 months ago

    Great tutorial as always! I would like to see how it works on img2img and with SDXL!

  • @jcvijr
    @jcvijr · 5 months ago

    Thank you! This model could be included in adetailer node, to simplify the process..

  • @user-db1rv4ou4l
    @user-db1rv4ou4l · 5 months ago

    would be nice if you had an sdxl version

  • @markdkberry
    @markdkberry · a month ago

    I get some weird module = 1 error and it won't go past the last KSampler, maybe because "ControlNet Auxillary PreProcessors" has this message in the manager, so this doesn't work for me: "NOTE: Please refrain from using the controlnet preprocessor alongside this installation, as it may lead to conflicts and prevent proper recognition."

  • @michail_777
    @michail_777 · 5 months ago

    That's great! Now let's do the tests:)))

  • @kleber1983
    @kleber1983 · 4 months ago

    Is the ControlNet really necessary? I've achieved the same result by passing the MeshGraphormer mask through a VAE Encode (for Inpainting) and it worked. I think it's simpler, but I wonder if it compromises the quality... thx.

  • @BenjaminKellner
    @BenjaminKellner · 5 months ago

    Instead of VAE decode and encode before your latent upscale, you could use a 'get latent size' node, create an empty mask injecting width/height as input, and apply the blank mask as the new latent mask. Especially with larger images it will save you time versus going through the VAE pipeline, but also, since VAE encoding/decoding is a lossy process, you actually lose quality between samples (not that an upscaled latent looks any better unless done iteratively) -- I prefer to upscale in pixel space, then denoise starting at step 42, ending at step 52, then another sample after that from step 52 to carry me to 64 steps. I find three samples before post is my optimal workflow.

  • @user-wi7vz2io5n

    @user-wi7vz2io5n

    5 months ago

    Excellent. Where can I find your optimal workflow to learn from you? Thank you

  • @tetsuooshima832

    @tetsuooshima832

    2 months ago

    @@user-wi7vz2io5n hahahaha

  • @Zbig-xw6yp
    @Zbig-xw6yp · 5 months ago

    Great video. Please note that the preprocessor requires the "Segment Anything" node for some reason and cannot be loaded without it! Thank you for sharing!

  • @weebtraveller
    @weebtraveller · 5 months ago

    thank you very much, great as always. Can you do Ultimate SD Upscale instead?

  • @Not4Talent_AI
    @Not4Talent_AI · 5 months ago

    Pretty cool! Does it work well with hands in more complex positions? Like someone flicking a marble (random example).

  • @Rasukix

    @Rasukix

    5 months ago

    hello there

  • @Steamrick

    @Steamrick

    5 months ago

    Try it out and let us know

  • @Not4Talent_AI

    @Not4Talent_AI

    5 months ago

    @@Rasukix sup! hahaha

  • @Not4Talent_AI

    @Not4Talent_AI

    5 months ago

    @@Steamrick don't have Comfy installed atm

  • @ImAlecPonce
    @ImAlecPonce · 5 months ago

    Thanks :) I'm going to try sticking an img to img to it right away XD

  • @ooiirraa
    @ooiirraa · 5 months ago

    Thank you for the new ideas! I think it can be improved a little bit. Every encode goes with a loss of quality, so it might be a better decision to first create the full rectangular mask with the dimensions of the image and then apply the new mask to the latent without reencoding. ❤ thank you for your work!

  • @cchance

    @cchance

    5 months ago

    Ya was gonna say don’t decode and recode just overwrite the mask

  • @Foolsjoker

    @Foolsjoker

    5 months ago

    @@cchance How would you just overwrite the mask without decoding to 'flatten' the image?

  • @Madwand99

    @Madwand99

    5 months ago

    @@cchance Do you have another workflow to show what you mean by this?

  • @UltraStyle-AI
    @UltraStyle-AI · 5 months ago

    Can't find any info about it yet. Need to install on A1111.

  • @97BuckeyeGuy
    @97BuckeyeGuy · 5 months ago

    I wish you would do more work with SDXL models. I want to see some of the workarounds that may be out there for the lack of a Tiled ControlNet. And I'd like to see more about Kohya Shrink with SDXL.

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    Yes, I really need to do more sdxl. But personally I never use it for my Ai images, because it takes much longer and I don't need the added benefits

  • @EH21UTB

    @EH21UTB

    5 months ago

    @@OlivioSarikas Also interested in SDXL. Isn't there a way to use this new hands tool to generate the depth mask and then apply with SDXL models?

  • @Steamrick

    @Steamrick

    5 months ago

    @@EH21UTB Of course. There's SDXL depth controlnets available, though they're not specifically trained for hands. You'd have to experiment which of the available ones works best.

  • @ryutaro765
    @ryutaro765 · 3 months ago

    Can we also use this refined method for img2img?

  • @SetMeFree
    @SetMeFree · 2 months ago

    when i do img2img it changes my original image into a cartoon but fixes the hands. Any advice?

  • @abellos
    @abellos · 5 months ago

    Fantastic, can be used also in automatic1111?

  • @mirek190

    @mirek190

    5 months ago

    lol

  • @sharezhade

    @sharezhade

    5 months ago

    Need a video about that. Comfy-ui seems so complicated

  • @AirwolfPL

    @AirwolfPL

    4 months ago

    @@sharezhade it's not complicated and offers great control of the process but it's horribly time consuming. A1111 offers much more streamlined experience for me.

  • @jbnrusnya_should_be_punished
    @jbnrusnya_should_be_punished · 18 days ago

    I got a strange error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (row 1 column 5) .Even "Install Missing Custom Nodes" does not help.

  • @KINGLIFERISM
    @KINGLIFERISM · 5 months ago

    In Darth Vader's voice, " the circle is com-plete." I am now wondering if SEGS could be used instead of a huge box. It can mess up a face if the hand is close to it. Any ideas guys?

  • @HeinleinShinobu
    @HeinleinShinobu · 5 months ago

    Cannot install the ControlNet preprocessor; it has this error: Conflicted Nodes: ColorCorrect [ComfyUI-post-processing-nodes], ColorBlend [stability-ComfyUI-nodes], SDXLPromptStyler [ComfyUI-Eagle-PNGInfo], SDXLPromptStyler [sdxl_prompt_styler]

  • @fabiotgarcia2
    @fabiotgarcia2 · 4 months ago

    Hi Olivio! How can we apply this workflow to an imported image? Is it possible?

  • @bluemurloc5896
    @bluemurloc5896 · 5 months ago

    great video, would you please consider making a tutorial for automatic 1111?

  • @BabylonBaller

    @BabylonBaller

    5 months ago

    Yea, feels like all he posts about is Comfy and forgetting about the 90% of the industry that uses Automatic1111.

  • @megadarthvader
    @megadarthvader · 5 months ago

    Isn't there a simplified version for web ui? 😅 With that concept map style system everything looks so complicated 🥶

  • @TheColonelJJ
    @TheColonelJJ · 2 months ago

    Can we add this to Forge?

  • @androidgamerxc
    @androidgamerxc · 5 months ago

    I'm Automatic1111 squad, please tell us how to add it there.

  • @A42yearoldARAB
    @A42yearoldARAB · 3 months ago

    Is there an automatic 1111 version of this?

  • @randomVimes
    @randomVimes · 5 months ago

    One suggestion for vids like this: a section at the end which shows 3 example prompts and results. Prompt can be on screen, dont have to read it out

  • @hmmrm
    @hmmrm · 5 months ago

    Hello, I have tried to reach you on Discord but I couldn't. I wanted to ask you a very important question: once we upload our workflows to OpenArt, we can't delete any of the workflows? Why?

  • @4thObserver
    @4thObserver · 5 months ago

    I really hope they streamline this process in future iterations. MeshGraphormer seems very promising but I lost track of what each step and process does 6 minutes into the video.

  • @meadow-maker

    @meadow-maker

    5 months ago

    Yeah I couldn't even load the Mesh Graphormer node at first, it took me several breaks, coffees and redo until I found it. Really shoddy training video.

  • @RhapsHayden
    @RhapsHayden · 21 days ago

    Have you managed to get consistent hand animations yet?

  • @TheHmmka
    @TheHmmka · 3 months ago

    How do I fix this error? When loading the graph, the following node types were not found: AV_ControlNetPreprocessor. Nodes that have failed to load will show as red on the graph.

  • @user-ln7ti5ki5z

    @user-ln7ti5ki5z

    3 months ago

    I solved this issue by opening the manager and then clicking "Install Missing Custom Nodes"

  • @NamikMamedov
    @NamikMamedov · 5 months ago

    How can we fix hands in automatic 1111?

  • @ImmacHn
    @ImmacHn · 5 months ago

    1:30 you can update the custom nodes instead of uninstalling and reinstalling: in the manager press "Fetch Updates"; once the updates are fetched, Comfy will prompt you to open "Install Custom Nodes", at which point the custom nodes that have updates will show an "Update" button. After that restart Comfy and refresh the page.

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    I know. But when I updated it, it didn't give me the new preprocessor

  • @ImmacHn

    @ImmacHn

    5 months ago

    @@OlivioSarikas I see, did you refresh the page after? The nodes are basically client-side, so you would need to reload after the restart to see the new node

  • @ImmacHn

    @ImmacHn

    5 months ago

    @@OlivioSarikas Also thanks for the videos, they're very helpful!

  • @Kryptonic83

    @Kryptonic83

    5 months ago

    yeah, i hit update all in comfyui manager then fully restarted comfyui and refreshed the page, worked for me without reinstalling the extension.

  • @D3coify
    @D3coify · 3 months ago

    I'm trying to do this with "load Image" node

  • @josesimoes1516
    @josesimoes1516 · 5 months ago

    If anyone else has an error that 'mediapipe' module can't be found and can't install package due to OSError or something like that just uninstall the auxiliary processor nodes, reboot comfy, install again, reboot again and it works. Everything was fully updated when I was getting that error so reinstalling is probably the best choice just to avoid annoyances.

  • @V_2077
    @V_2077 · a month ago

    Anybody know an sdxl controlnet refiner for this?

  • @AnimeDiff_
    @AnimeDiff_ · 5 months ago

    segs preprocessor?

  • @alucard604
    @alucard604 · 5 months ago

    Any idea why my "comfyui-art-venture" custom nodes have an "import failed" issue? It is required by this workflow for the "ControlNet Preprocessor". I already made sure that all conflicting custom nodes are uninstalled.

  • @2PeteShakur

    @2PeteShakur

    5 months ago

    same issue, u updated comfyui?

  • @caffeinezombies

    @caffeinezombies

    4 months ago

    Same issue

  • @VladimirBelous
    @VladimirBelous · 4 months ago

    I made a workflow for improving the face using a depth map, and I would like to add to this process the improvement of hands using a depth map, as well as upscaling with added detail without losing quality. For me it turns out either blurry or pixelated around the edges.

  • @Rasukix
    @Rasukix · 5 months ago

    I presume this is usable with a1111 also?

  • @GS195

    @GS195

    5 months ago

    Oh I hope so

  • @ImmacHn

    @ImmacHn

    5 months ago

    Should really try going the Comfy route, it might seem overwhelming at first, but it's amazing once you get the hang of it.

  • @Rasukix

    @Rasukix

    5 months ago

    @@ImmacHn I just find nodes hard to handle, my brain just doesn't work well with it

  • @omegablast2002
    @omegablast2002 · 5 months ago

    only for comfy?

  • @listahul2944
    @listahul2944 · 5 months ago

    Great! thanks for the video. how about a img to img fix hands workflow.

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    It's inpainting, so that should work too

  • @TheDocPixel

    @TheDocPixel

    5 months ago

    Technically... this is img2img. Just delete the front parts that generate the picture, and start by adding your own picture with a Load Image node.

  • @vbtaro-englishchannel
    @vbtaro-englishchannel · 2 months ago

    It’s awesome but I can’t use meshgraphormer node. I don’t know why. I guess it’s because I’m using Mac.

  • @MiraPloy
    @MiraPloy · 5 months ago

    Couldn't dwpose or openpose do the same thing?

  • @hurricanepirates8602
    @hurricanepirates8602 · 5 months ago

    Why is AV_ControlNetPreprocessor node red? Egadz!

  • @vincentmilane

    @vincentmilane

    5 months ago

    same for me

  • @graphilia7
    @graphilia7 · 5 months ago

    Thanks! I have a problem when I launch the Workflow, this warning appears: "the following node types were not found: AV_ControlNetPreprocessor" I downloaded and placed the "ControlNet-HandRefiner-pruned" file in this folder: ComfyUI_windows_portable\ComfyUI\models\controlnet. Can you please tell me how to fix this?

  • @sirdrak

    @sirdrak

    5 months ago

    Same here... I tried uninstalling and reinstalling the custom nodes as said in the video but the error persists. Edit: Solved by installing the Art Venture custom nodes, but now I have the problem of the 'mediapipe' error with the MeshGraphormer-DepthMapPreprocessor node...

  • @birdfingers354

    @birdfingers354

    5 months ago

    Me three

  • @caffeinezombies

    @caffeinezombies

    4 months ago

    @@sirdrak I looked for the Art Venture custom nodes and couldn't find anything.

  • @notanemoprog

    @notanemoprog

    3 months ago

    I replaced the "ControlNet Preprocessor" node used in the video (from that "venture" package I don't have) with "AIO Aux Preprocessor", selecting "MiDaS DepthMap", and at least got the first image produced (bad hands) before further problems happened.

  • @1ststepmedia105
    @1ststepmedia105 · 5 months ago

    I keep getting an error message; the workflow stops at the MeshGraphormer-DepthMapPreprocessor window. I followed the directions you gave and have downloaded the hand inpaint model and placed it in the folder, no luck.

  • @jamiesonsidoti

    @jamiesonsidoti

    5 months ago

    Same... it hits the MeshGraphormer node and coughs up the error: "A Message class can only inherit from Message". Getting the same error when attempting to use the Load InsightFace node for ComfyUI_IPAdapter_Plus. Tried on a separate new install of Comfy and the error persists.

  • @Okratron-rr8we
    @Okratron-rr8we · 5 months ago

    i tried replacing the first ksampler with a load image node so that i could process an already generated image through, but it just skipped the mesh graphormer node entirely. any tips? i also plugged the load image into a vae encoder for the second ksampler.

  • @Okratron-rr8we

    @Okratron-rr8we

    5 months ago

    nvm, the mesh graphormer simply isnt able to detect the hands in the image i'm using. maybe soon there will be a way to increase its detectability. other than that, this works great!

  • @listahul2944

    @listahul2944

    5 months ago

    @@Okratron-rr8we I'm just starting with Comfy, so forgive me if there is some mistake. What I did: created a "Load Image" node and connected it to the "MeshGraphormer Hand Refiner"; created a "VAE Encode" node and connected the same "Load Image" to it; that VAE Encode I connected to the "Set Latent Noise Mask". Also, using it like this, sometimes it isn't able to detect the hands in the image I'm using.

  • @Okratron-rr8we

    @Okratron-rr8we

    5 months ago

    @@listahul2944 yep, thats exactly what i did also. im sure there is a way to identify the hands for the ai but im new to this also. thanks for trying though

  • @substandard649
    @substandard649 · 5 months ago

    Interesting.... does it work with SDXL too?

  • @rodrimora

    @rodrimora

    5 months ago

    would like to know too

  • @Steamrick

    @Steamrick

    5 months ago

    The controlnet is clearly made for SD1.5. That said, there's no reason you could not combine the depth map output with a SDXL depth controlnet, though it may not work quite as well as a net specifically trained for hands.

  • @TheP3NGU1N

    @TheP3NGU1N

    5 months ago

    Sd1.5 always comes first.. SDXL will probably be next as they usually require a little extra to get worked out.

  • @substandard649

    @substandard649

    5 months ago

    I thought sd15 was officially deprecated, if so then you would expect sdxl to be the first target for new releases. That being said i get way better results from the older model, XL is so inflexible by comparison...rant over 😀

  • @TheP3NGU1N

    @TheP3NGU1N

    5 months ago

    @@substandard649 Depends on what you are going for. In the realm of realism, SD1.5 is still king to most people, though XL is quickly catching up. Programming-wise, SD1.5 is easier, and most of the time, if you get it to work for SD1.5, getting it to work for XL is going to be much easier; the reverse isn't quite the same.

  • @News_n_Dine
    @News_n_Dine · 5 months ago

    Unfortunately my device doesn't meet the requirements to set up ComfyUI. Do you have any advice for me?

  • @News_n_Dine

    @News_n_Dine

    5 months ago

    Btw, I already tried google colab, didn't work

  • @PeoresnadaStudio
    @PeoresnadaStudio · 5 months ago

    i would like to see more result examples :)

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    You can create as many as you want with my workflow. But I know what you mean 🙂

  • @PeoresnadaStudio

    @PeoresnadaStudio

    5 months ago

    @@OlivioSarikas i mean, it's nice to see more samples on general... thanks for your videos, they are great!

  • @adamcarskaddan
    @adamcarskaddan · 4 months ago

    I don't have the ControlNet preprocessor. How do I fix this?

  • @user-ln7ti5ki5z

    @user-ln7ti5ki5z

    3 months ago

    Try opening the manager and then clicking "Install Missing Custom Nodes" and reboot

  • @BVLVI
    @BVLVI · 5 months ago

    what keeps me from using comfy UI is the models folder. I want to keep it in a1111 but I can't seem to figure out how to make it point to that folder.

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    There is a yaml file in the Comfy folder called extra_model_paths. Most likely your version ends in ".example"; remove that to make it a yaml file and add the A1111 folder
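
    A minimal sketch of what that file can look like once renamed; the keys mirror the bundled extra_model_paths.yaml.example, and the base_path below is a placeholder you must point at your own A1111 install:

```yaml
# ComfyUI/extra_model_paths.yaml  (renamed from extra_model_paths.yaml.example)
a111:
    base_path: C:/stable-diffusion-webui/   # placeholder -- your A1111 folder

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```

    Restart ComfyUI afterwards so the extra search paths are picked up.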

  • @SaschaFuchs

    @SaschaFuchs

    5 months ago

    What Olivio has written, or symlinks. That's how I did it: I put all the Loras and checkpoints on an external SSD and they are connected with symlinks. I do the same with the output folders; they all point to one shared folder using symlinks.
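
    The symlink route can be sketched like this on Linux/macOS (all paths here are made up for illustration; substitute your own):

```shell
# Models live once on the external SSD (hypothetical path):
mkdir -p /tmp/demo/external_ssd/checkpoints
mkdir -p /tmp/demo/ComfyUI/models

# Link the folder into the ComfyUI tree so it appears in place;
# both UIs then read the same files without copying anything.
ln -s /tmp/demo/external_ssd/checkpoints /tmp/demo/ComfyUI/models/checkpoints

# On Windows the rough equivalent is:  mklink /D <link> <target>
```

    Deleting the link later does not touch the model files themselves.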

  • @haoshiangyu6906
    @haoshiangyu6906 · 5 months ago

    Add a Krita + Comfy workflow, please! I see a lot of videos that combine the two and would like to see how you use it.

  • @DashtonPeccia
    @DashtonPeccia · 5 months ago

    I'm sure this is a novice mistake, but I am getting AV_ControlNetPreprocessor node type missing even after completely uninstalling and re-installing the Controlnet Aux Preprocessor. Anyone else getting this?

  • @kasoleg

    @kasoleg

    5 months ago

    I have the same case, help

  • @KonoShunkan

    @KonoShunkan

    5 months ago

    That is a different set of custom nodes to the aux controlnet nodes. It's called comfyui-art-venture (AV = Art Venture) and can be installed via Comfyui Manager. You may also need control_depth-fp16 safetensors model from Hugging Face.

  • @2PeteShakur

    @2PeteShakur

    5 months ago

    @@KonoShunkan getting conflicts with comfyui-art-venture; disabled the conflicting nodes, still the same issue...

  • @Madwand99

    @Madwand99

    5 months ago

    I'm getting this error too, I haven't figured it out yet.

  • @notanemoprog

    @notanemoprog

    3 months ago

    Because the one featured in the video and workflow is _not_ in "comfyui_controlnet_aux-main", which most people have, but in another "venture" package. If I understood the point of that node, the same result can be produced by replacing the "ControlNet Preprocessor" used in the video with "AIO Aux Preprocessor", selecting "MiDaS DepthMap"; that at least got the first image produced (bad hands) before further problems happened.

  • @duck-tube6786
    @duck-tube6786 · 5 months ago

    Olivio, by all means continue with ComfyUI vids but please also include A1111 as well.

  • @happyme7055

    @happyme7055

    5 months ago

    Yes, please Olivio! For me as a hobby AI creator, A1111 is the better solution because it's not nearly as complicated to install/operate...

  • @cchance

    @cchance

    5 months ago

    A1111 plugins come slower these days, and stuff like this in A1111 just isn't as easy: he's doing 3 KSamplers, masking and other stuff in a specific order. That's just not how A1111 works, at least not easily.

  • @sierradesigns2012

    @sierradesigns2012

    5 months ago

    Yes please!

  • @joeterzio7175

    @joeterzio7175

    5 months ago

    I see ComfyUI and I stop watching. It's obsolete already and that workflow looks like a complex wiring diagram. The future of AI image generation is going to be text based, not that mess of spaghetti string.

  • @fritt_wastaken

    @fritt_wastaken

    5 months ago

    @@joeterzio7175 text based is the past of AI image generation. And it won't come back until something like chatgpt can understand you perfectly and use that "spaghetti string" for you. And even then you probably would have to intervene if you're not just goofing around and actually creating something. There is absolutely no way to describe everything required for an image using just text

  • @truth_and_raids3404
    @truth_and_raids3404 · 5 months ago

    I can't get this to work; every time I get an error: Error occurred when executing MeshGraphormer-DepthMapPreprocessor: [Errno 2] No such file or directory: 'C:\\Users\\AShea\\Downloads\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16/ControlNet-HandRefiner-pruned\\cache\\models--hr16--ControlNet-HandRefiner-pruned\\blobs\\41ed675bcd1f4f4b62a49bad64901f08f8b67ed744b715da87738f926dae685c.incomplete'

  • @wykydytron
    @wykydytron · 5 months ago

    A1111 all the way, noodles are for eating not computers.

  • @tutmstudio
    @tutmstudio · 2 months ago

    The hand is corrected to some extent, but the face is different in the end result. Can't you keep the same face?

  • @ttul
    @ttul · 5 months ago

    Hmmm. The mask still being in the latent batch output is something that should be fixed.

  • @sergetheijspartner2005
    @sergetheijspartner2005 · 10 days ago

    Maybe make a "perfect human" workflow. I have seen separate workflows for face detailing, skin, hands, eyes, feet... maybe I just want to click "Queue Prompt" once and have my humanoid figure be perfect in the end, without building a workflow for every part of the human body.

  • @vincentmilane
    @vincentmilane · 5 months ago

    ERROR: (IMPORT FAILED) comfyui-art-venture. How to fix?

  • @notanemoprog

    @notanemoprog

    3 months ago

    If you don't have that "venture" package I guess it is possible to replace the workflow "ControlNet Preprocessor" with "AIO Aux Preprocessor" selecting "MiDas DepthMap"

  • @andrewq7125
    @andrewq7125 · 5 months ago

    Wait for SDXL

  • @Konrad162
    @Konrad162 · 2 months ago

    Isn't OpenPose better?

  • @wzs920
    @wzs920 · 5 months ago

    does it work for a1111?

  • @OlivioSarikas

    @OlivioSarikas

    5 months ago

    I will check. But new tech almost always comes to comfyui first

  • @Ruslan4564
    @Ruslan4564 · 5 months ago

    Also, you can use a simple MiDaS depth map instead of ComfyUI's ControlNet Auxiliary Preprocessors.

  • @user-ln7ti5ki5z

    @user-ln7ti5ki5z

    3 months ago

    Maybe try opening the manager and then clicking "Install Missing Custom Nodes" and reboot

  • @kanall103
    @kanall103 · 5 months ago

    I stopped watching at 0:45 lol

  • @cokuzaklar
    @cokuzaklar · a month ago

    thanks for the video, but i am sure there are faster easier ways to tackle this issue

  • @OlivioSarikas

    @OlivioSarikas

    A month ago

    Let me know if you find any. That said, you can also use negative embeddings, but they don't address specific hands; they instead aim to create better hands in the first place. They might also alter the overall look of your image.

  • @Ultimum
    @Ultimum · 5 months ago

    Is there something similar for Stable diffusion?

  • @beatemero6718

    @beatemero6718

    5 months ago

    What do you mean? Bro, this IS stable diffusion.

  • @Ultimum

    @Ultimum

    5 months ago

    @@beatemero6718 Nope thats ComfyUI

  • @sharezhade

    @sharezhade

    5 months ago

    @@beatemero6718 I think they mean Automatic1111, cos ComfyUI is so complicated for some users

  • @notanemoprog

    @notanemoprog

    3 months ago

    Probably means one of the text user interfaces like A1111 or similar@@beatemero6718

  • @toonleap
    @toonleap · 5 months ago

    No love for AUTOMATIC1111?
