c0nsumption

Just me....

X Account:
x.com/c0nsumption_

Reddit:
reddit.com/u/ConsumeEm

IG:
instagram.com/c0nsumption_

Comments

  • @bowaic9467
    4 days ago

    I don't know how to fix this problem. 'ControlNet' object has no attribute 'latent_format'

  • @bowaic9467
    7 days ago

    Do you know what's happening with this error?

    Error occurred when executing CheckpointLoaderSimpleWithNoiseSelect: 'model.diffusion_model.input_blocks.0.0.weight'

    File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_extras.py", line 52, in load_checkpoint
        out = load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
        model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
        unet_config = detect_unet_config(state_dict, unet_key_prefix)
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
        model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
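    A note on that traceback: it bottoms out in ComfyUI's model detection, which indexes the checkpoint's state dict by the key `'{}input_blocks.0.0.weight'`. A KeyError there usually means the file is not a full Stable Diffusion checkpoint (a corrupt or partial download, or a LoRA/motion module loaded through a checkpoint loader). A minimal sketch of the failing lookup, using plain tuples as stand-ins for real tensor shapes (the function name and shapes here are illustrative, not ComfyUI's actual code):

    ```python
    # Sketch of the key lookup that fails in model_detection.detect_unet_config.
    # Tuples stand in for real tensor shapes; key names follow the traceback above.

    def detect_model_channels(state_dict, key_prefix="model.diffusion_model."):
        key = "{}input_blocks.0.0.weight".format(key_prefix)
        if key not in state_dict:
            # The situation in the error: the expected UNet key is missing,
            # so the file is likely not a full checkpoint (corrupt/partial
            # download, or a LoRA / motion module loaded as a checkpoint).
            raise KeyError("missing " + key + " -- is this really a checkpoint?")
        return state_dict[key][0]  # shape[0] is model_channels

    # A healthy SD1.5-style checkpoint carries this key with shape (320, 4, 3, 3):
    good = {"model.diffusion_model.input_blocks.0.0.weight": (320, 4, 3, 3)}
    print(detect_model_channels(good))  # → 320

    # A LoRA or motion-module file lacks it entirely, reproducing the error:
    bad = {"lora_unet_down_blocks_0_attentions_0.alpha": (1,)}
    try:
        detect_model_channels(bad)
    except KeyError as err:
        print("load failed:", err)
    ```

    In practice the fix is usually to re-download the checkpoint and confirm its file size matches the source.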

  • @mhfx
    11 days ago

    yess i love this, great tutorial

  • @josephine.miller
    12 days ago

    Thank you for making this tutorial. The way you explain is super clear and great to follow 😊

  • @johnhenle1701
    1 month ago

    Great tutorial! I don't have experience with the command prompt, so I was hesitant, but this made it effortless. I got everything installed and even got video-to-video working by following your other video. Thanks for everything :)

  • @bkdjart
    1 month ago

    Awesome tutorial! Can you explain how to add more than one training video?

  • @soniabendre
    1 month ago

    I'm getting validation errors for ModelConfig saying that extra fields like controlnet_map are not permitted.

  • @houseofcontent3020
    1 month ago

    Do you have a good workflow for matching an object into an existing photo background?

  • @ceesh5311
    1 month ago

    Was just here for some comfyui tutorials, but hey thanks this is real speak

  • @ceesh5311
    1 month ago

    Thanks a lot! You need better audio for YouTube, though.

  • @versuspl434
    2 months ago

    Hey man, I find your tutorials the easiest to understand of all the ComfyUI tutorials on KZread, keep it up! I was wondering, do you sell any courses on ComfyUI? Or could I pay you for an hour to help me fix an issue with generating images? Or maybe you have a friend with equal or close knowledge I could pay to teach me.

  • @seancondev3321
    2 months ago

    Brilliant, have been thinking about this

  • @piclezwd
    2 months ago

    Great video, thanks! Question: why did you say in the video not to open (click) port 3000?

  • @designapp5308
    2 months ago

    When using image2video, the effect is not ideal. How should I adjust it? It is very different from the reference picture.

  • @findingforeverness8569
    2 months ago

    much appreciated!!

  • @mulleralmeida4844
    2 months ago

    Starting to learn ComfyUI: when I click Queue Prompt, my computer takes a long time to process the KSampler node. I'm using a MacBook Pro 14 M2 Pro; is it normal for it to take so long?

  • @3ky3ky
    2 months ago

    Error occurred when executing ADE_AnimateDiffLoaderWithContext: Motion module 'motionmodule15v2.ckpt' is intended for SDXL models, but the provided model is type SD1.5.

    File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/workspace/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen1.py", line 138, in load_mm_and_inject_params
        motion_model = load_motion_module_gen1(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
    File "/workspace/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py", line 424, in load_motion_module_gen1
        raise MotionCompatibilityError(f"Motion module '{mm_info.mm_name}' is intended for {mm_info.sd_type} models, " \
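    The raise at the bottom of that traceback is AnimateDiff-Evolved's compatibility guard: the motion module's SD family must match the loaded checkpoint's family. A rough, hypothetical mirror of that check (the function and its arguments are illustrative, not the node pack's actual code):

    ```python
    # Hypothetical mirror of the guard raised at the bottom of the traceback:
    # a motion module tagged for one SD family refuses a checkpoint of another.

    class MotionCompatibilityError(Exception):
        pass

    def check_motion_module(mm_name, mm_sd_type, model_sd_type):
        # Raises when the motion module and checkpoint families disagree.
        if mm_sd_type != model_sd_type:
            raise MotionCompatibilityError(
                f"Motion module '{mm_name}' is intended for {mm_sd_type} models, "
                f"but the provided model is type {model_sd_type}."
            )

    # The commenter's situation: module tagged SDXL, checkpoint is SD1.5 -> raises.
    try:
        check_motion_module("motionmodule15v2.ckpt", "SDXL", "SD1.5")
    except MotionCompatibilityError as err:
        print(err)
    ```

    The fix is the one the next comment lands on: download the motion module variant that matches the checkpoint family you are loading.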

  • @3ky3ky
    2 months ago

    nvm ... downloaded bad version, fixed and working 20-04

  • @voxyloids8723
    2 months ago

    Wow ! That's AMAZING ! 🔥🔥🔥

  • @voxyloids8723
    2 months ago

    Thank you so much. Do I understand correctly that the AI takes a video with motion as a reference? For example, if I want to train a physics-simulation LoRA, I can simulate an example in 3D software and feed it in, but how does AnimateDiff understand what should be physically animated in the shot? Do you plan to make another video on using pretrained AnimateDiff LoRAs in img2vid? I also want to create a rotation LoRA that keeps object proportions.

  • @user-rk3wy7bz8h
    2 months ago

    Hi, I thought I'd write to you; maybe you can help me. I want to ask about some errors I get with ComfyUI. It has nothing to do with this video, but maybe you can help:

    1. Working with Get Sigma (from ComfyUI Essentials), it shows this error: Error occurred when executing BNK_GetSigma: 'SDXL' object has no attribute 'get_model_object'

    2. Working with ReActor, I get this: Error occurred when executing ReActorFaceSwap: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
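    On the second error: the ReActor message is quoting ONNX Runtime's own requirement. Since ORT 1.9, `InferenceSession` must be given an explicit `providers` list (order is fallback priority). A hedged sketch of the call it is asking for; the helper name and model path are hypothetical, and the real fix normally lands inside an updated ReActor node rather than user code:

    ```python
    # ONNX Runtime >= 1.9 requires an explicit providers list. Order is
    # priority; ORT falls back down the list when a provider is unavailable.
    providers = [
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ]

    def make_session(model_path):
        """Create an ORT session with explicit providers (required since ORT 1.9)."""
        import onnxruntime as ort  # assumes onnxruntime is installed
        return ort.InferenceSession(model_path, providers=providers)

    # Usage (model path hypothetical):
    # session = make_session("inswapper_128.onnx")
    ```

    If this appears inside a custom node you didn't write, updating that node pack is usually the practical fix.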

  • @samwalker4442
    2 months ago

    my bro, that was awesome....thanks man 🙌

  • @zizhdizzabagus456
    2 months ago

    btw, is there a proper way to make the workflow pause, wait for a mask, and then continue? Often it just starts generating from the beginning when bypassing this way. I tried the preview chooser, but that doesn't work either; the Gaussian blur isn't happy with the mask provided.

  • @zizhdizzabagus456
    2 months ago

    It completely doesn't work. I have no idea what's wrong; I get no errors, and I'm using your workflow, but the resulting image is just the same as after upscaling. Differential Diffusion isn't model-dependent, right?

  • @miken3d
    2 months ago

    Nice one!!

  • @user-hb6dd9iu9g
    2 months ago

    Thank you for the vid! Could you check the link with the workflow, please? I can't find it there (:

  • @OffTheHorizon
    2 months ago

    I'm using KSampler, but it takes 9 minutes for 1 of the 25 samples, which is obviously extremely slow. I'm working on a MacBook M1 Max; do you have any tips on making it quicker?

  • @meredithhurston
    2 months ago

    Is there an update on this workflow? It isn't working for me.

  • @EbonEagle
    3 months ago

    Thanks for this quick advice!!

  • @kobe5113
    3 months ago

    Hey man, thank you for everything you do. I have an issue that's kept me up all night for the past two days. It's 6am again and I still haven't fixed it... do you maybe have time to hop on a Discord call and point me in the right direction? Kinda desperate at this point..

  • @spearcy
    3 months ago

    The workflow link doesn't work. How did you install differential diffusion?

  • @philspitlerSF
    3 months ago

    I don't see a link to download the workflow

  • @leandrogoethals6599
    3 months ago

    Nice tutorial! Have you found a way to upload a 3-minute video in one piece into the VHS Load Video node?

  • @alexmedvec4571
    3 months ago

    OK. The problem is that some of my dependencies in Friz nodes disappear randomly after I shut down the GPU. I think I fixed it after building the image generator for the third time. The issue was likely that, when returning to the workstation after shutting down the GPU for the day, I didn't activate the venv again, which probably corrupted the files. You should have said that we had to do it every time we access our workstation.

  • @prontocgtutor8163
    3 months ago

    Awesome tutorial thanks! Do you think multiple loras of the same video can be trained in chunks adjacently and then all used in 1 long animation chained together and their weights animated to be blended together to form one single coherent animation?

  • @kaleabspica8437
    3 months ago

    What do I have to do if I want to change the look of it? Since yours is closer to anime style, I want to make it realism or sci-fi, etc.

  • @LearningVikas
    3 months ago

    Thanks worked finally❤❤

  • @amorgan5844
    3 months ago

    Dude! Your content is so freaking good! What do you do for a living? You know more than anybody I've watched!

  • @amorgan5844
    3 months ago

    Do you have a vid for this AD CLI for comfyui? This is so detailed and well done

  • @nkofr
    3 months ago

    nice. definitely interested in how it works (diff diff)

  • @cgonestudio2752
    3 months ago

    You're great, I identify with you. Much success, keep it up! Thanks!

  • @ryanontheinside
    3 months ago

    Thanks bro! If you feel like making a followup video, i would watch it! More on physics

  • @DimiArt
    3 months ago

    Weird, I'm getting preview images from the upscaler node and the lineart images from the ControlNet, but I'm not getting any actual output results.

  • @DimiArt
    3 months ago

    OK, I realized my checkpoint and my VAE were set to the ones in the downloaded workflow, and I had to set them to the ones I actually had downloaded instead. My bad.

  • @kaleabspica8437
    3 months ago

    Do you know how to change the look of it?

  • @DimiArt
    3 months ago

    @@kaleabspica8437 change the look of what

  • @HistoryIsAbsurd
    3 months ago

    thanks a lot eh, well worth the sub

  • @DemzOneMusic
    3 months ago

    Hey, great vid! Just curious, will these work with SDXL? I'm having trouble getting any motion LoRA to work with SDXL.

  • @WhatsThisStickyStuff
    3 months ago

    What node pack are you using for the Gaussian blur mask?

  • @bilalpenbegullu2851
    3 months ago

    I was looking for the god in the wrong place, he was here all along.

  • @c0nsumption
    3 months ago

    😂 you’re too kind

  • @yumincao5878
    4 months ago

    this is so damn good. this is the best tutorial.

  • @danielvgl
    4 months ago

    Great!!!

  • @luclaura1308
    4 months ago

    Great tutorial!

  • @harshitpruthi4022
    4 months ago

    Can't find the Differential Diffusion node in the Manager.

  • @c0nsumption
    4 months ago

    Did you git pull as I directed in the beginning of the video? It’s not a custom node.

  • @harshitpruthi4022
    4 months ago

    @@c0nsumption I dragged and dropped it, and then it showed that it is missing. I can't find it in the custom nodes, and the link above isn't working either.

  • @harshitpruthi4022
    4 months ago

    Actually, I am using ComfyUI on Google Colab.

  • @c0nsumption
    4 months ago

    @@harshitpruthi4022 but did you git pull as I directed in the beginning of the video. AGAIN, it’s not a custom node. If you want me to help you, you can’t just repeat the same thing back. I’m asking you a huge question: Did. You. Git. Pull.

  • @harshitpruthi4022
    4 months ago

    Oh yes