Comments

  • @ShengzhuPeng · 20 days ago

    Hi! I’m interested in a business collaboration. Could you please share your email? Thanks

  • @RhapsHayden · a month ago

    Is this still working for anyone? I did a fresh install of Comfy + Python 3.10.10 and it still cannot load.

  • @MrMertall · a month ago

    I keep getting the error below despite having already installed YACS:

        No module named 'yacs.config'
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
          output_data, output_ui = get_output_data(obj, input_data_all)
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
          return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
          results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\mesh_graphormer.py", line 66, in execute
          from controlnet_aux.mesh_graphormer import MeshGraphormerDetector
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\__init__.py", line 5, in <module>
          from controlnet_aux.mesh_graphormer.pipeline import MeshGraphormerMediapipe, args
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\pipeline.py", line 12, in <module>
          from custom_mesh_graphormer.modeling.hrnet.config import config as hrnet_config
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\__init__.py", line 7, in <module>
          from .default import _C as config
        File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\default.py", line 15, in <module>
          from yacs.config import CfgNode as CN
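    A minimal sketch of a likely fix, assuming the portable build (where a plain "pip install yacs" can land in the system Python instead of ComfyUI's embedded interpreter): run this with python_embeded\python.exe so the package is installed into the interpreter ComfyUI actually uses.

        # check_yacs.py - hypothetical helper; run it with ComfyUI's own interpreter
        import importlib.util
        import subprocess
        import sys

        if importlib.util.find_spec("yacs") is None:
            # installs into whichever Python is executing this script
            subprocess.check_call([sys.executable, "-m", "pip", "install", "yacs"])

        from yacs.config import CfgNode  # the import from the traceback should now succeed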

  • @ahmadzaini · a month ago

    Thank you man, great job! But on my PC the IPAdapterApply node is missing and shows up red; when I try to replace it with the IPAdapterAdvanced node, the 'insightface' input is missing. Do you know how to solve this problem?

  • @_gr1nchh · 2 months ago

    Getting the runtime error "mat1 and mat2 shapes cannot be multiplied". Any idea as to what could be causing this?

  • @ryuktimo6517 · 2 months ago

    This does not work on resting hands, only raised hands.

  • @qus123 · 2 months ago

    What can you put in the prompt that goes into the unsampler?

  • @voxyloids8723 · 2 months ago

    Still trying to make a mesh from it

  • @caoonghoang5060 · 3 months ago

    When loading the graph, the following node types were not found: IPAdapterApply. I have fully installed the nodes and still get this error.

  • @cinematic_monkey · 3 months ago

    This is too hard to follow. You should build it from scratch with every step shown.

  • @JanRTstudio · 3 months ago

    OK, I will consider it in the next video, thanks for the feedback

  • @Distop-IA · 3 months ago

    This channel is underrated. You're the goat @JanRTstudio

  • @JanRTstudio · 3 months ago

    Thank you, so glad to hear that!

  • @renanarchviz · 3 months ago

    On mine it appears in the Install Custom Nodes tab, and a red band shows the conflict. It does not appear on the desktop in ComfyUI.

  • @bradyee227 · 3 months ago

    Hi, I am getting this error, could you help please?

        File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
          raise AssertionError("Torch not compiled with CUDA enabled")
        AssertionError: Torch not compiled with CUDA enabled

  • @JanRTstudio · 3 months ago

    Hi, you are using the CUDA version of ComfyUI without a torch-cuXXX build installed (like the cu118 version). You can try running "your_ComfyUI_folder\update\update_comfyui_and_python_dependencies.bat". Are you using the portable version?
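    A quick way to confirm that diagnosis, as a sketch (run it with the same interpreter ComfyUI uses): a version string ending in "+cpu" means Torch was installed without CUDA support.

        import torch

        print(torch.__version__)          # e.g. "2.1.0+cpu" indicates a CPU-only build
        print(torch.cuda.is_available())  # False here leads to "Torch not compiled with CUDA enabled"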

  • @voxyloids8723 · 3 months ago

    Can't find a practical usage.

  • @JanRTstudio · 3 months ago

    I was trying to use it as a background in Blender, but the addon has issues with importing.

  • @hanygh2240 · 3 months ago

    thx

  • @MikevomMars · 3 months ago

    This workflow CREATES an image, but in most cases you'd want to load an EXISTING image to refine it 😐

  • @JanRTstudio · 3 months ago

    Yeah, similar suggestion in other comments; I will test and upload an Img2Img workflow.

  • @epelfeld · 3 months ago

    Thank you, it works great, just something is wrong with the colors. They are too bright and the image is overexposed. Do you have an idea what's wrong?

  • @JanRTstudio · 3 months ago

    Sure! It might be the color match node; you can try a different reference picture for the color match.

  • @mkrl89 · 3 months ago

    Hi there! Great video though. I've tried to follow it to install the Magic Animate nodes but I failed... Maybe you could help. My case is that despite everything being downloaded, and the Manager showing Magic Animate as installed, I am not able to find those nodes in Comfy. I even tried to use your workflow, but those nodes still appear as red boxes. I found out that my terminal shows this under the Magic Animate node: "cannot import name 'PositionNet' from 'diffusers.models.embeddings'". I'd appreciate any ideas on what's wrong :)

  • @mfb-ur7kz · 3 months ago

    I receive an error while trying the ScreenShare node: "Error accessing screen stream: NotAllowedError: Failed to execute 'getDisplayMedia' on 'MediaDevices': Access to the feature "display-capture" is disallowed by permission policy." Do you know what might cause the error? Where can I enable display-capture? Thanks

  • @JanRTstudio · 3 months ago

    Hi, what's your system and browser? It seems the browser doesn't allow screen sharing. Are you using the server version or sd-webui-comfyui?

  • @leetotti3064 · 4 months ago

    When I run the workflow, it stops and shows me: Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (154x768 and 1280x2048)

  • @JanRTstudio · 4 months ago

    It seems the models/VAE don't match; can you please double-check that the models are in the same nodes as in the video?
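    One reading of the numbers in that error, as a hedged sketch: 768 is the SD1.5 text-embedding width while 2048 is SDXL's, so mixing model families reproduces exactly this failure.

        import torch

        cond = torch.randn(154, 768)   # conditioning shaped like SD1.5 CLIP output
        w = torch.randn(1280, 2048)    # a weight expecting SDXL-width input
        torch.matmul(cond, w)          # RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1280x2048)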

  • @leetotti3064 · 4 months ago

    @@JanRTstudio Thanks, I checked the models and VAE and it looks like that was the problem. It works now.

  • @kshabana_YT · 4 months ago

    you are a pro

  • @fabiotgarcia2 · 4 months ago

    Does it work on Mac M2?

  • @JanRTstudio · 4 months ago

    I think so. ComfyUI states support for M2 with any recent macOS version, and since this is native support it should work, though I don't have a Mac to test it right now.

  • @VFXMinds · 4 months ago

    Hi, I'm getting a "ComfyUI Easy Use import failed" error. Not able to run the style selector node.

  • @JanRTstudio · 4 months ago

    Can you copy the error message from the command line window regarding the import failure? That's strange if it was installed from ComfyUI Manager.

  • @kuka7466 · 3 months ago

    @@JanRTstudio SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)

  • @JanRTstudio · 3 months ago

    @@kuka7466 Hi, can you check "Install Missing Custom Nodes" from the ComfyUI-Manager menu? Generally it's missing nodes.

  • @nickchalion · 4 months ago

    Hi, and thank you for this vid. I have a very large error, can you help me please? Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 16, 24, 24] to have 4 channels, but got 16 channels instead, and ...

  • @JanRTstudio · 4 months ago

    Sure! Very similar to the issue mentioned in another comment. Can you double-check that the model names in the 4 black loader nodes (2 UNets, 1 VAE and 1 CLIP) are correct? Sometimes ComfyUI just updates to a default value if your model file location (0:31) is not correct.
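    For reference, a minimal reproduction of that channel mismatch (a sketch; the shapes are taken from the error text): a conv layer built for 4-channel latents fed a 16-channel latent from the wrong model family.

        import torch

        conv = torch.nn.Conv2d(in_channels=4, out_channels=320, kernel_size=3)  # weight of size [320, 4, 3, 3]
        latent = torch.randn(2, 16, 24, 24)  # a 16-channel latent
        conv(latent)  # RuntimeError: expected input[2, 16, 24, 24] to have 4 channels, but got 16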

  • @nickchalion · 4 months ago

    @@JanRTstudio Thank you so much, it's fixed 😍

  • @JanRTstudio · 4 months ago

    @@nickchalion Awesome!

  • @MikevomMars · 4 months ago

    The following node types were not found: StableCascade_StageB_Conditioning, StableCascade_EmptyLatentImage. Unfortunately, they aren't available in the manager 😐

  • @ischeka · 4 months ago

    Did you update ComfyUI itself? The Cascade nodes are native, not custom nodes, I believe.

  • @MikevomMars · 4 months ago

    @@ischeka After updating ComfyUI, the nodes were available BUT the workflow stops with an error "Given groups=1, weight of size [320, 16, 1, 1], expected input[2, 64, 12, 12] to have 16 channels, but got 64 channels instead"

  • @JanRTstudio · 4 months ago

    @@MikevomMars Can you double-check that stable_cascade is selected as the type in Load CLIP, and reselect all the models in those 4 black loader nodes? Or just drag the downloaded workflow into ComfyUI again to reload. I just updated ComfyUI but can't replicate your error; it seems something is wrong with the model loading.

  • @JanRTstudio · 4 months ago

    @@ischeka yep, thank you!

  • @MikevomMars · 4 months ago

    @@ischeka Finally, it works - thanks for helping 😊👍 The issue was as follows: ComfyUI automatically filled the UNET, CLIP and VAE loaders, but for some strange reason it inserted the stage_a safetensor in the top UNET loader instead of stage_b. I had a hard time figuring out which safetensors go in which loader because they are so tiny in the video that they're hard to see. But it works now.

  • @Foolsjoker · 4 months ago

    Are people able to train on this yet?

  • @JanRTstudio · 4 months ago

    Yes, the training code has been released for LoRA, ControlNet and Stages B & C. You can find it at the "training" hyperlink on their GitHub page.

  • @sudabadri7051 · 4 months ago

    Lol i was just banging my head against a wall trying to fix this. Thank you

  • @JanRTstudio · 4 months ago

    😄No problem my friend

  • @meywu · 4 months ago

    Please upload your videos in 4K so the node names are easier to read.

  • @JanRTstudio · 4 months ago

    I will try 1440p next time, limited by my monitor res 😅 Thanks for the suggestion.

  • @ehsankholghi · 4 months ago

    Thanks so much for your great tutorials. Is there any render time limit in ComfyUI? I want to use a 32-second video at 30 fps (1000 PNGs) for video2video, but I got this error on my 3090 Ti: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32

  • @JanRTstudio · 4 months ago

    It seems 1000 frames might be a little too many; I haven't had a chance to try that yet. But you can try a lower fps, like 30 fps -> 10 fps, and do frame interpolation afterwards (the ComfyUI-Frame-Interpolation VFI node), so you only need to generate about 320 images at a time. I have a VFI node example in my RAVE video.
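    The 6.43 GiB figure checks out as the whole clip held in RAM as a single float32 array; a sketch of the arithmetic:

        # frames x height x width x channels x 4 bytes (float32)
        frames, h, w, c = 976, 1024, 576, 3
        print(frames * h * w * c * 4 / 2**30)  # ~6.43 GiB in one allocation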

  • @ehsankholghi · 4 months ago

    @@JanRTstudio I got this error: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32

  • @jeffg4686 · 4 months ago

    Nice. Could you use Automatic1111 to train a LoRA for monkey hands using this as a base model? By "could you", I mean: do you think it's possible?

  • @JanRTstudio · 4 months ago

    I believe yes. Not sure about A1111, but you can find training with Python here: github.com/microsoft/MeshGraphormer/blob/main/docs/EXP.md

  • @jeffg4686 · 4 months ago

    @@JanRTstudio - thanks

  • @JanRTstudio · 4 months ago

    no problem! @@jeffg4686

  • @jeffg4686 · 4 months ago

    @@JanRTstudio - Nice. I might have to take a trip to the zoo, or even do some gens with dalle or something.

  • @JanRTstudio · 4 months ago

    Sounds good 😄@@jeffg4686

  • @sudabadri7051 · 4 months ago

    Another amazing video my friend ❤

  • @JanRTstudio · 4 months ago

    😀

  • @GggggQqqqqq1234 · 4 months ago

    Thank you Thank you Thank you.

  • @JanRTstudio · 4 months ago

    😀

  • @stephantual · 4 months ago

    You probably know this, but you could just use IPAdapter for the clothes, at 1/0/1.0; it has a solid grasp of the image, and given only a few frames are generated with very little movement, a simple mask will do (though COCO segmenting can also be implemented). Thank you!

  • @JanRTstudio · 4 months ago

    Right, I bypassed the IPA in the video; mask/segmenting is a good way to try. Thanks for the suggestion!

  • @stephantual · 4 months ago

    @@JanRTstudio Thank you for the cool videos! Do you have an X account we can follow you on?

  • @JanRTstudio · 4 months ago

    Sure! Just created one, JanRT111, will update there! @@stephantual

  • @freegames247 · 4 months ago

    My RTX 4060 Ti with 16GB runs out of memory.

  • @JanRTstudio · 4 months ago

    16 frames at 512x768? I ran it with 12GB. Make sure to mute the groups you don't need; you can save VRAM by running step by step and muting.

  • @freegames247 · 4 months ago

    Yep, lowering the fps did the trick. Thx

  • @JanRTstudio · 4 months ago

    Sure, have fun! @@freegames247

  • @ehsankholghi · 4 months ago

    I upgraded to a 3090 Ti with 24GB. How much CPU RAM do I need for video-to-video SD? I have 32GB.

  • @JanRTstudio · 4 months ago

    That's pretty cool! 24GB is a lot. It's mainly GPU RAM that's used to render; if it shifts to CPU RAM the speed slows down dramatically, so 32GB of RAM is good enough since you don't want it used anyway. With 24GB you can try latent upscale to get high-resolution animation. I have just 12GB of VRAM and can do 512x768, 100+ frames at a time. You are good to go!

  • @anoubhav · 4 months ago

    What is an unsampler?

  • @JanRTstudio · 4 months ago

    Sampling is a denoising process; the unsampler does the reverse and creates the noise pattern from an image, which is then used to reconstruct the image with modified prompts.
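    A conceptual sketch of that reversal (DDIM-style inversion with a stand-in noise predictor; the names are illustrative, not ComfyUI's actual API):

        import torch

        def predict_noise(x, t):
            return torch.zeros_like(x)  # a real UNet would predict the noise at step t

        def unsample(x, steps, alphas):
            # walk the schedule backwards: clean latent -> progressively noisier latent
            for t in range(steps):
                eps = predict_noise(x, t)
                a_cur, a_next = alphas[t], alphas[t + 1]
                x = (a_next / a_cur).sqrt() * x + (
                    (1 - a_next).sqrt() - (a_next / a_cur * (1 - a_cur)).sqrt()
                ) * eps
            return x  # noise that, resampled with new prompts, rebuilds a modified image

        alphas = torch.linspace(0.999, 0.01, 11)  # toy cumulative-alpha schedule
        noise = unsample(torch.randn(1, 4, 8, 8), steps=10, alphas=alphas)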

  • @tianxiangxu2288 · 4 months ago

    nice work

  • @JanRTstudio · 4 months ago

    Thank you!

  • @user-vj3bj7dd1x · 5 months ago

    Fantastic work! The only problem is that after running this workflow the background is always a pure color, even if I have added some background info to the prompt; it doesn't work. Could you share some fix?

  • @JanRTstudio · 5 months ago

    Thanks for the feedback! First, I think you can try bypassing the "AnimateDiff Loader" and setting "Input_Img_Cap" to 1, then run some single pictures to check whether the background is generated as you wish; change to different models if not. Or you can add the depth ControlNet with a very low strength, 0.2 for example. If you just want to restyle the source video, you can decrease the denoise value in the first KSampler, like 0.6 - 0.8.

  • @typho0n5 · 5 months ago

    Error occurred when executing VHS_LoadImagesPath: directory is not valid: D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P05/

        File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 155, in recursive_execute
          output_data, output_ui = get_output_data(obj, input_data_all)
        File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 85, in get_output_data
          return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
        File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 78, in map_node_over_list
          results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
        File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py", line 143, in load_images
          raise Exception("directory is not valid: " + directory)

    I've tried many times but I really don't know how to fix it.

  • @JanRTstudio · 5 months ago

    You can change the first CR prompt text to your folder path "D:/ComfyUI_windows_portable/ComfyUI/output/ADiff/" and rerun from the beginning. Loading images from a folder other than the "output" folder inside ComfyUI usually causes errors; I think that's the reason.
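    Per the traceback, VHS_LoadImagesPath raises that exception when it can't see the folder, so a quick sketch to verify the path outside ComfyUI (the path is the one from the error above):

        import os

        path = "D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P05/"
        print(os.path.isdir(path))  # False here is what leads to "directory is not valid"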

  • @mick7727 · 5 months ago

    My brain always shuts down when I see ComfyUI. I started a week ago on a1111 so yeah, very early days!

  • @JanRTstudio · 5 months ago

    lol yeah, get familiar with a1111 and you will find ComfyUI is just those same options separated out as nodes.

  • @ronnysempai · 5 months ago

    Good video, thanks

  • @JanRTstudio · 5 months ago

    Thank you!

  • @GfcgamerOrgon · 5 months ago

    It's unfortunate that crossed fingers are interpreted as a single hand at many angles; I wish they could fix this. Some training should probably be done, plus some signal to detect when one hand is under the other, because it deforms really badly, as if the person has only one hand! Gloves have also been a problem for me. It can still get better.

  • @JanRTstudio · 5 months ago

    Exactly, they mentioned this limitation. It works well for general poses, but you still need to fix things manually for crossing, overlapping, or partial hands, etc.

  • @sureshotmv8255 · 5 months ago

    Great content! Is there a guide on how SparseCtrl RGB/scribble actually works? What I mean is: how do you know it's placed on the first and last image? Can you place RGB SparseCtrl on frames 1, 5, 7, 9, 15, 20? How?

  • @JanRTstudio · 5 months ago

    Thank you! Yes, that's controlled by the "Sparse method"; I am making another video about it and will cover these methods.

  • @risewithgrace · 5 months ago

    For some reason, even though I've successfully downloaded the ComfyUI Impact Pack, ComfyUI still says it's missing, so the node above SAMLoader in the Face Detailer section is red. Have you run into this issue?

  • @JanRTstudio · 5 months ago

    That's strange, did you use ComfyUI Manager to install Impact? Just check the CMD window: during loading it will show "Import Failed" for the Impact Pack, and right before that it actually gives you the error and the cause of the failure.

  • @sudabadri7051 · 5 months ago

    Is there any way to include IPAdapter FaceID SDXL in this, plus a regional IPAdapter like in your AnimateDiff workflow?

  • @JanRTstudio · 5 months ago

    I haven't tried SDXL for RAVE but it should work; I will try it and update the workflow if it does.

  • @JanRTstudio · 5 months ago

    I tried the FaceID SDXL and it works; it seems best with ReActor and without Face Plus for the animation (not only the face), though that's based on just a few generations. The workflow link is added above, so you can have a try. I will probably add the Regional IPAdapter in a future post; it's in the Inspire pack, but I actually have not used it yet.

  • @sudabadri7051 · 5 months ago

    @@JanRTstudio you are awesome i will test and let you know how I go!

  • @sudabadri7051 · 5 months ago

    Can you add an option for me to give you money through YouTube or Patreon? Your work is excellent and I want to support you.

  • @JanRTstudio · 5 months ago

    @@sudabadri7051 Thank you for your kind words, that is already great support for me, really appreciate it! I will consider adding it later.

  • @sudabadri7051 · 5 months ago

    amazing!! your videos are really good

  • @JanRTstudio · 5 months ago

    Glad you like it!

  • @DYYGITAL · 5 months ago

    Anyone know why I'm getting an error returned when running the mesh graphormer? The error reads as follows: [WinError 206] The filename or extension is too long: 'C:\\Users\\Christian\\OneDrive\\Documents\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16/ControlNet-HandRefiner-pruned\\cache\\models--hr16--ControlNet-HandRefiner-pruned\\snapshots\\f0917f0595ecb7f6435f49e4b2b28f8dd68ab0cb'

  • @JanRTstudio · 5 months ago

    Can you try directly downloading the files "graphormer_hand_state_dict.bin" and "hrnetv2_w64_imagenet_pretrained.pth" from the Hugging Face link in the description above and putting them into your "ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\hr16\ControlNet-HandRefiner-pruned" folder?

  • @DYYGITAL · 5 months ago

    @@JanRTstudio Absolute hero, this has worked, thank you!

  • @JanRTstudio · 5 months ago

    Great, you are welcome! @@DYYGITAL

  • @merry6671 · 3 months ago

    You can also solve it by making the pathname shorter: for example, reducing parts like "ComfyUI_windows_portable" to "ComfyUI", or placing ComfyUI straight into the C:\ root folder, because the problem is literally that the total pathname is too long.
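    That matches Windows' classic 260-character MAX_PATH limit, which is what [WinError 206] reports; a sketch to measure the cache path from the error above against that limit:

        base = (r"C:\Users\Christian\OneDrive\Documents\ComfyUI_windows_portable"
                r"\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts"
                r"\hr16\ControlNet-HandRefiner-pruned\cache"
                r"\models--hr16--ControlNet-HandRefiner-pruned\snapshots"
                r"\f0917f0595ecb7f6435f49e4b2b28f8dd68ab0cb")
        print(len(base))  # already close to 260, leaving no room for file names inside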

  • @user-sb8bo5xc4s · 5 months ago

    Error occurred when executing SEGSDetailerForAnimateDiff: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

        File "G:\Blender_ComfyUI\ComfyUI\execution.py", line 154, in recursive_execute
          output_data, output_ui = get_output_data(obj, input_data_all)
        File "G:\Blender_ComfyUI\ComfyUI\execution.py", line 84, in get_output_data
          return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
        File "G:\Blender_ComfyUI\ComfyUI\execution.py", line 77, in map_node_over_list
          results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
        File "G:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\segs_nodes.py", line 204, in doit
          segs = SEGSDetailerForAnimateDiff.do_detail(image_frames, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name,
        File "G:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\segs_nodes.py", line 183, in do_detail
          cropped_image_frames = cropped_image_frames.numpy()

    😂😂😂😂
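    The error text names its own fix; a minimal sketch of the failing pattern and the correction (illustrative, not the Impact Pack's actual code):

        import torch

        t = torch.randn(3, device="cuda")  # needs a CUDA build to run
        # t.numpy()        # raises: can't convert cuda:0 device type tensor to numpy
        print(t.cpu().numpy())             # .cpu() copies the tensor to host memory first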

  • @___x__x_r___xa__x_____f______ · 5 months ago

    Is there an SDXL version?

  • @JanRTstudio · 5 months ago

    Not yet, because the ControlNet model was trained on 1.5, but I did try XL + 1.5 and it works. I will update it later.

  • @___x__x_r___xa__x_____f______ · 5 months ago

    @@JanRTstudio What do you mean, XL + SD1.5?

  • @JanRTstudio · 5 months ago

    SDXL for generation, SD1.5 for hand fix @@___x__x_r___xa__x_____f______

  • @___x__x_r___xa__x_____f______ · 5 months ago

    @@JanRTstudio yup makes sense

  • @SergeyPower · 5 months ago

    Great video. Any advice on making the workflow work with img2img? I guess I'm missing some important step, as the MeshGraphormer just returns black instead of a Z-depth, but it works fine with txt2img.

  • @JanRTstudio · 5 months ago

    Thanks, just replace the first "preview bridge" with "load image" and mute the first "vae decode", that's all. I also found it returns black sometimes if the hand shape is distorted too much, and it doesn't support anime pictures well.

  • @bestof467 · 3 months ago

    @@JanRTstudio I tried your method for img2img but it did not work. Could you share a workflow download for img2img, including a bulk multiple-image hand fix?

  • @JanRTstudio · 3 months ago

    @@bestof467 Sure, let me test and I will upload later

  • @mehradbayat9665 · 2 months ago

    @@JanRTstudio Was this solved?