I got this error what i do Error occurred when executing LoraLoader: 'NoneType' object has no attribute 'lower' File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", Line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI odes.py", 11, in load lora
@cgpixel67454 күн бұрын
make sure that you select the lora file in the lora loader otherwise it will not work
@RoshanYadav-v2z4 күн бұрын
@@cgpixel6745 ok i will try now
@RoshanYadav-v2z4 күн бұрын
@@cgpixel6745 I got another error Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'NoneType' object has no attribute 'lower' File :\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, olj.FUNCTION, allow_interrupt=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func) (**slice_dict(input_data_all, i))) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- Evolved animatediff odes_gen1.py", line 138, in load_mm_and_inject_params motion_model = load_motion_module_gen1(model_name, model, motion_lora-motion_lora, motion_model_settings=motion_model_settings) C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- File " Evolved\animatediff\model_injection.py", line 1201, in load_motion_module_gen1 mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
@RoshanYadav-v2z4 күн бұрын
@@cgpixel6745 I got another error Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'NoneType' object has no attribute 'lower' File :\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, olj.FUNCTION, allow_interrupt=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func) (**slice_dict(input_data_all, i))) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- Evolved animatediff odes_gen1.py", line 138, in load_mm_and_inject_params motion_model = load_motion_module_gen1(model_name, model, motion_lora-motion_lora, motion_model_settings=motion_model_settings) C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- File " Evolved\animatediff\model_injection.py", line 1201, in load_motion_module_gen1 mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
@DBPlusAI4 күн бұрын
where can i find a lastest version , i saw in ur subcribe but it not lastest version <3
@cgpixel67454 күн бұрын
the latset version of what exactly ?
@DBPlusAI2 күн бұрын
@@cgpixel6745 I'm very sorry that I haven't watched the video carefully, I misunderstood, I'm sincerely sorry :<<<
@RoshanYadav-v2z5 күн бұрын
Hi sir i need help about comfyui😊
@cgpixel67455 күн бұрын
OFC how can i help you
@RoshanYadav-v2z5 күн бұрын
@@cgpixel6745 sir when I run comfy ui by clicking run_nvidea_gpu. This error show me what I do plzz guid Traceback (most recent call last): File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\main.py", line 80, in <module> import execution File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 11, in <module> import nodes File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI odes.py", line 21, in <module> import comfy.diffusers_load File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\diffusers_load.py", line 3, in <module> import comfy.sd File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 5, in <module> from comfy import model_management File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 119, in <module> total_vram = get_total_memory(get_torch_device()) / (1024*1024) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 88, in get_torch_devi ce return torch.device(torch.cuda.current_device()) File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device _lazy_init() File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 293, in lazy_init torch._C._cuda_init() RuntimeError: The NVIDIA driver on your system is too old (found version 11060). Please update your GPU driver by downlo ading and installing a new version from the URL: www.nvidia.com/Download/index.aspx Alternatively, go to: https:/ /pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. C:\Users\akash\Documents\ComfyUI_windows_portable>pause Press any key to continue
@RoshanYadav-v2z5 күн бұрын
@@cgpixel6745 @cgpixel6745 sir when I run comfy ui by clicking run_nvidea_gpu. This error show me what I do plzz guid Traceback (most recent call last): File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\main.py", line 80, in <module> import execution File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 11, in <module> import nodes File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI odes.py", line 21, in <module> import comfy.diffusers_load File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\diffusers_load.py", line 3, in <module> import comfy.sd File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 5, in <module> from comfy import model_management File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 119, in <module> total_vram = get_total_memory(get_torch_device()) / (1024*1024) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 88, in get_torch_devi ce return torch.device(torch.cuda.current_device()) File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device _lazy_init() File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 293, in lazy_init torch._C._cuda_init() RuntimeError: The NVIDIA driver on your system is too old (found version 11060). Please update your GPU driver by downlo ading and installing a new version from the URL: www.nvidia.com/Download/index.aspx Alternatively, go to: https:/ /pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. C:\Users\akash\Documents\ComfyUI_windows_portable>pause Press any key to continue
@bgtubber8 күн бұрын
I've also had bad results with the old SDXL canny. I've always wondered why it's worse than the SD 1.5 canny. Good to know that the Canny in the Controlnet Union model doesn't have such problems. Thanks for the demonstration!
@cgpixel67457 күн бұрын
thanks to you for the positive energy
@wellshotproductions654110 күн бұрын
Thank you for this, another great video. I like how you go over the workflow, highlighting different steps. And no background music to disract me! You sound like a wise professor.
@cgpixel674510 күн бұрын
@@wellshotproductions6541 thanks for you comments I am trying to improve the quality in every next tutorial based on the community advices, I am glad that you liked it
@ken-cheenshang682910 күн бұрын
thx!
@lance330110 күн бұрын
Great content and great workflow. Thanks for sharing.
@jinxing-xv3py12 күн бұрын
It is amazing
@MrEnzohouang12 күн бұрын
I have a question to ask about the commercial use of comfyui workflow, is it possible to naturally put its products on the models? At present, I seem to know only clothes and shoes are relatively large products, but jewelry, such as earrings, bracelets, necklaces, etc. Although midjourney can be used to process the modified photos, the appearance of the product cannot be controlled, but the appearance control of very small objects in sd seems not to be strong, at least the thin chain will be difficult, I wonder if you have a solution? Thank you very much
@Gavinnnnnnnnnnnnnnn13 күн бұрын
how do i get depth_sdxl.safetensors for depth anything?
@sinuva14 күн бұрын
bit big diference actually
@cgpixel674514 күн бұрын
In speed it's more interesting
@kallamamran17 күн бұрын
I feel like V2 actually has LESS details 🤔
@cgpixel674514 күн бұрын
In some images that's true
@Nonewedone20 күн бұрын
Thank you, I use this workflow to generate a picture, everything seems good, but only the upload image didn't affect the color which I masked.
@cgpixel674520 күн бұрын
Try to play with the weight value of ipadapter
@govindmadan235323 күн бұрын
sdxl depth controlnet keeps giving error -> Error occurred when executing ACN_AdvancedControlNetApply: 'ControlNet' object has no attribute 'latent_format' Do you know anything about this, or can you please give the link to the exact dept and scribble files for ControlNet that you are using
@govindmadan235323 күн бұрын
Already using the one given in link
@cgpixel674521 күн бұрын
use this link huggingface.co/lllyasviel/sd-controlnet-scribble/tree/main also dont forget to rename your controlnet model and click refrech in comfyui in order to add the model name and it should fix your error
@KINGLIFERISM23 күн бұрын
comfy is so annoying. The developer really needs to make it more stable. Could not install this. And I have installed LLM's even dependencies for faceswap, dlib and anyone knows that isn't straightforward but this? No go... sigh. I give up and not reinstalling again.
@cgpixel674523 күн бұрын
yes you are right but for this DAV2 it is quite simple did you face any issues ?
@pixelcounter50624 күн бұрын
Thank you very much for your information. For me it's quite surprising to have a more detailed depth map with V2, but more or less the same results. I guess canny or scribble is of help to overcome that lack of precision of depth map V1.
@aarizmohamed1713827 күн бұрын
Amazing work🙌🙌🥳🔥
@lonelytaigahotel28 күн бұрын
how to increase the number of frames?
@cgpixel674528 күн бұрын
You change it with the number of frame in the video combine
@RoshanYadav-v2z5 күн бұрын
@@cgpixel6745Ipadaptor folder not found in model folder what I do
@MattOverDriveАй бұрын
Thank you very much for posting the workflow! for anybody curious, I ran CG Pixel's default workflow and prompt on an NVidia P40. Image generation was 25 seconds and video generation was 9 minutes and 11 seconds. I have a 3090 on the way lol.
@cgpixel6745Ай бұрын
I am glad that I helped you and I also have rtx 3060 yours should perform better than mine especially if you have more than 6 gb vram
@MattOverDriveАй бұрын
@@cgpixel6745 I put in an rtx 3070ti (8gb) and it generated the image in 5 seconds and the video in 2 minutes and 13 seconds. Time to retire the P40 lol. I'll report back when the 3090 is here
@MattOverDriveАй бұрын
It was delivered today, RTX 3090 image generation was 3 seconds and the video was 1 minute and 14 seconds. Huge improvement!
@weirdscixАй бұрын
Interesting video. Did you base this on the ipiv workflow? As only the upscaling seems to differ.
@cgpixel6745Ай бұрын
yes it is
@RoshanYadav-v2z5 күн бұрын
@@cgpixel6745Ipadaptor folder not found in model folder what I do
@senoharyoАй бұрын
thanks a lot brother! this is work flow that I'm looking for, you are my superhero ! XD
@cgpixel6745Ай бұрын
i am here to help you
@senoharyoАй бұрын
@@cgpixel6745 I know :)
@runebinderАй бұрын
Interesting comparison but it's a bit of an apple to oranges one as the fine tuned models have the benefit of a much greater data set and development. Not seen anyone compare it to SDXL Base yet which would be more of an accurate check. SD3's main issue that I can see is it appears to have quite a limited training data set as poses all look very similar etc. Really looking forward to seeing what the community do with it.
@cgpixel6745Ай бұрын
yeah i also believe that more amazing update are gonna come with this SD3 model lets cross our fingers for it
@UtokoАй бұрын
If you disincentivizing finetunes with your licencing it is another story tho.
@yesheng8779Ай бұрын
@yesheng8779Ай бұрын
thank you so much
@DavidgotboredАй бұрын
There is a annoying problem When i zoom out the fog on the moon disappears from my vision How can i increase the view, so the fog doesn't disappear? Please help me
@cgpixel6745Ай бұрын
in the view tab change the end value from 1000 to 10 000 then select the camera go to the camera icon and do the same from 100 to 10 000 and it should be fixed
@onezenАй бұрын
Can we do all the upscale stuff in ComfyUI directly?
@cgpixel6745Ай бұрын
Yes we can I will upload a video on that soon stay tune
@onezenАй бұрын
@@cgpixel6745
@user-kx5hd6fx3tАй бұрын
so great, thank you so much
@pixelcounter506Ай бұрын
Thank you for presenting this tool. Seems to be really interesting and could be quite helpful regarding compositing!
@cgpixel6745Ай бұрын
i am glad that i helped you
@pixelcounter506Ай бұрын
Your comparison between IC-light and IP-Adapter is really a good idea. I have the feeling that you have more control of the final result with IP-Adapter in selecting a base image. With IC-light you always have a quite heavy color shift. Is the mask still playing a role if you are using IP-Adapter?
@cgpixel6745Ай бұрын
yes it is still playing role and you can check it by changing its position
@vincema4018Ай бұрын
Possible to get your light type images?
@cgpixel6745Ай бұрын
Sure just send me your email
@netspacema19 күн бұрын
can i please have them too?
@zlwuzlwuАй бұрын
Great job
@cgpixel6745Ай бұрын
thanks
@ismgroov4094Ай бұрын
Thx sir❤
@cgpixel6745Ай бұрын
your welcome hope that was helpfull
@StudioOCOMATimelapseАй бұрын
Merci, c'est nickel 👍
@cgpixel6745Ай бұрын
Avec plaisir 👍
@ismgroov4094Ай бұрын
this is good.
@ismgroov4094Ай бұрын
thanks a lot. I respect you, sir!
@cgpixel6745Ай бұрын
thanks it helps me to create more amazing video
@SoSpectersАй бұрын
hey, I really like this workflow and concept, but I can't seem to run it. I keep getting this error Error occurred when executing KSampler: 'ModuleList' object has no attribute '1' And in the console I see WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3]) IC-Light: Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3] !!! Exception during processing!!! 'ModuleList' object has no attribute '1' I didn't touch anything, and watched the IC light installation video before. I completely re-installed comfyui and installed only modules used in this workflow, and still I get this error... any ideas?
@cgpixel6745Ай бұрын
Check your checkpoint model I personally used juggernaut version not the sdxl one
@SoSpectersАй бұрын
@@cgpixel6745 I used 5 different SD1.5 models, including the very first one that comes with comfy. Emu 1.5 or whatever it's called... right now my latest lead indicates that despite installing layerdiffuse, a requirement for IC light, it may not have installed correctly. Further research once I get home.
@cgpixel6745Ай бұрын
@@SoSpecters in that case try update comfyui or reduce the resolution of the image from 1024 to 512 may be that would do
@SoSpectersАй бұрын
@@cgpixel6745 alright did brother, seems like it was not the case. I opened a ticket with the IC light github, I'm seeing a lot of Ksampler errors like my own. Hoping to get some feedback there and I will share with the community when I figure it out.
@MrEnzohouangАй бұрын
Could you help me on this case? Please An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on. File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux ode_wrappers\depth_anything.py", line 19, in execute model = DepthAnythingDetector.from_pretrained(filename=ckpt_name).to(model_management.get_torch_device()) File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything\__init__.py", line 40, in from_pretrained model_path = custom_hf_download(pretrained_model_or_path, filename, subfolder="checkpoints", repo_type="space") File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\util.py", line 324, in custom_hf_download model_path = hf_hub_download(repo_id=pretrained_model_or_path, File "", line 52, in hf_hub_download_wrapper_inner File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\file_download.py", line 1371, in hf_hub_download raise LocalEntryNotFoundError( I'm put the checkpoint file in D:\AI\sd-webui-aki-v4.5\models\Depth-Anything and Add the file link address in comfyui as: Add the file link address in comfyui as,then i'm put 3 pth files in D:\AI\sd-webui-aki-v4.5\extensions\sd-webui-controlnet\models and mark the same adress on comfyui yaml file
@MrEnzohouangАй бұрын
I found the file address and fixed the problem myself, thanks for the edited workflow!
@NgocNguyen-ze5yjАй бұрын
wonderful tutorials, could you please make a video work with people subjects? ( IClight and IPADAPTER ERROR with face and body) thanks
@cgpixel6745Ай бұрын
Yeah I will try too I will upload another ic light soon so stay tune
@user-kx5hd6fx3tАй бұрын
I can't find this vedio for 16:9 Version in your channel
@cgpixel6745Ай бұрын
I did not post it yet I will do it soon
@user-kx5hd6fx3tАй бұрын
@@cgpixel6745 thank you very much
@IamalegalAlienАй бұрын
could you help me to solve depth anything error..? i got : Error occurred when executing DepthAnythingPreprocessor: [Errno 2] No such file or directory: 'C:\\Users\\meee2\\Desktop\\SD\ ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\LiheYoung\\Depth-Anything\\.huggingface\\download\\checkpoints\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete' and dont know how to solve..
@cgpixel6745Ай бұрын
You need to place the ckpts model into the right folder comfyui\models\controlnet
@MrEnzohouangАй бұрын
@@cgpixel6745 I found the file address and fixed the problem myself, thanks for the edited workflow!
@briz-vh9smАй бұрын
Bro, If I have a portrait, I want to change the background of the person in the portrait, but in this case, the shadows and light expressions look awkward. Can you make a workflow that satisfies these two requirements?
@cgpixel6745Ай бұрын
yes bro you can do it just watch this tutorial kzread.info/dash/bejne/aIOkzLKnmdCWkZs.html it should resolve everything
@abetuna27072 ай бұрын
you should do a tutorial on comfy ui for total beginners, you will get a lot of views
@cgpixel67452 ай бұрын
well if you need any help i am here
@Lastnamefirstname2892 ай бұрын
How can I download ur workflow? As json file
@cgpixel67452 ай бұрын
I put it in the description box
@user-ob8qr1by1b2 ай бұрын
Where can I download ipadapter clip.safetensors?
@weirdscix2 ай бұрын
Very nice tutorial, thanks for sharing the workflow
@cgpixel67452 ай бұрын
thanks happy that it help you
@wellshotproductions65412 ай бұрын
Great video! Thank you for sharing. Can I suggest that in the future either turn down the background music or just turn it off entierly? You are delightfully soft-spoken, so it would make it easier to hear you clearly. Keep it up brother!
@cgpixel67452 ай бұрын
thanks for the advice i was little septic about it and now i am sure of it
Пікірлер
class
where can i download depth sdxl.safetensors??
OK
Can you provide the workflow in the intro?
I got this error what i do Error occurred when executing LoraLoader: 'NoneType' object has no attribute 'lower' File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", Line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI odes.py", 11, in load lora
make sure that you select the lora file in the lora loader otherwise it will not work
@@cgpixel6745 ok i will try now
@@cgpixel6745 I got another error Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'NoneType' object has no attribute 'lower' File :\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, olj.FUNCTION, allow_interrupt=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func) (**slice_dict(input_data_all, i))) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- Evolved animatediff odes_gen1.py", line 138, in load_mm_and_inject_params motion_model = load_motion_module_gen1(model_name, model, motion_lora-motion_lora, motion_model_settings=motion_model_settings) C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- File " Evolved\animatediff\model_injection.py", line 1201, in load_motion_module_gen1 mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
@@cgpixel6745 I got another error Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'NoneType' object has no attribute 'lower' File :\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, olj.FUNCTION, allow_interrupt=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func) (**slice_dict(input_data_all, i))) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- Evolved animatediff odes_gen1.py", line 138, in load_mm_and_inject_params motion_model = load_motion_module_gen1(model_name, model, motion_lora-motion_lora, motion_model_settings=motion_model_settings) C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff- File " Evolved\animatediff\model_injection.py", line 1201, in load_motion_module_gen1 mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
where can i find a lastest version , i saw in ur subcribe but it not lastest version <3
the latset version of what exactly ?
@@cgpixel6745 I'm very sorry that I haven't watched the video carefully, I misunderstood, I'm sincerely sorry :<<<
Hi sir i need help about comfyui😊
OFC how can i help you
@@cgpixel6745 sir when I run comfy ui by clicking run_nvidea_gpu. This error show me what I do plzz guid Traceback (most recent call last): File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\main.py", line 80, in <module> import execution File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 11, in <module> import nodes File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI odes.py", line 21, in <module> import comfy.diffusers_load File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\diffusers_load.py", line 3, in <module> import comfy.sd File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 5, in <module> from comfy import model_management File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 119, in <module> total_vram = get_total_memory(get_torch_device()) / (1024*1024) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 88, in get_torch_devi ce return torch.device(torch.cuda.current_device()) File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device _lazy_init() File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 293, in lazy_init torch._C._cuda_init() RuntimeError: The NVIDIA driver on your system is too old (found version 11060). Please update your GPU driver by downlo ading and installing a new version from the URL: www.nvidia.com/Download/index.aspx Alternatively, go to: https:/ /pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. C:\Users\akash\Documents\ComfyUI_windows_portable>pause Press any key to continue
@@cgpixel6745 @cgpixel6745 sir when I run comfy ui by clicking run_nvidea_gpu. This error show me what I do plzz guid Traceback (most recent call last): File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\main.py", line 80, in <module> import execution File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 11, in <module> import nodes File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI odes.py", line 21, in <module> import comfy.diffusers_load File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\diffusers_load.py", line 3, in <module> import comfy.sd File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 5, in <module> from comfy import model_management File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 119, in <module> total_vram = get_total_memory(get_torch_device()) / (1024*1024) File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 88, in get_torch_devi ce return torch.device(torch.cuda.current_device()) File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device _lazy_init() File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 293, in lazy_init torch._C._cuda_init() RuntimeError: The NVIDIA driver on your system is too old (found version 11060). Please update your GPU driver by downlo ading and installing a new version from the URL: www.nvidia.com/Download/index.aspx Alternatively, go to: https:/ /pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. C:\Users\akash\Documents\ComfyUI_windows_portable>pause Press any key to continue
I've also had bad results with the old SDXL canny. I've always wondered why it's worse than the SD 1.5 canny. Good to know that the Canny in the Controlnet Union model doesn't have such problems. Thanks for the demonstration!
thanks to you for the positive energy
Thank you for this, another great video. I like how you go over the workflow, highlighting different steps. And no background music to disract me! You sound like a wise professor.
@@wellshotproductions6541 thanks for you comments I am trying to improve the quality in every next tutorial based on the community advices, I am glad that you liked it
thx!
Great content and great workflow. Thanks for sharing.
It is amazing
I have a question to ask about the commercial use of comfyui workflow, is it possible to naturally put its products on the models? At present, I seem to know only clothes and shoes are relatively large products, but jewelry, such as earrings, bracelets, necklaces, etc. Although midjourney can be used to process the modified photos, the appearance of the product cannot be controlled, but the appearance control of very small objects in sd seems not to be strong, at least the thin chain will be difficult, I wonder if you have a solution? Thank you very much
how do i get depth_sdxl.safetensors for depth anything?
bit big diference actually
In speed it's more interesting
I feel like V2 actually has LESS details 🤔
In some images that's true
Thank you, I use this workflow to generate a picture, everything seems good, but only the upload image didn't affect the color which I masked.
Try to play with the weight value of ipadapter
sdxl depth controlnet keeps giving error -> Error occurred when executing ACN_AdvancedControlNetApply: 'ControlNet' object has no attribute 'latent_format' Do you know anything about this, or can you please give the link to the exact dept and scribble files for ControlNet that you are using
Already using the one given in link
use this link huggingface.co/lllyasviel/sd-controlnet-scribble/tree/main also dont forget to rename your controlnet model and click refrech in comfyui in order to add the model name and it should fix your error
comfy is so annoying. The developer really needs to make it more stable. Could not install this. And I have installed LLM's even dependencies for faceswap, dlib and anyone knows that isn't straightforward but this? No go... sigh. I give up and not reinstalling again.
yes you are right but for this DAV2 it is quite simple did you face any issues ?
Thank you very much for your information. For me it's quite surprising to have a more detailed depth map with V2, but more or less the same results. I guess canny or scribble is of help to overcome that lack of precision of depth map V1.
Amazing work🙌🙌🥳🔥
how to increase the number of frames?
You change it with the number of frame in the video combine
@@cgpixel6745Ipadaptor folder not found in model folder what I do
Thank you very much for posting the workflow! for anybody curious, I ran CG Pixel's default workflow and prompt on an NVidia P40. Image generation was 25 seconds and video generation was 9 minutes and 11 seconds. I have a 3090 on the way lol.
I am glad that I helped you and I also have rtx 3060 yours should perform better than mine especially if you have more than 6 gb vram
@@cgpixel6745 I put in an rtx 3070ti (8gb) and it generated the image in 5 seconds and the video in 2 minutes and 13 seconds. Time to retire the P40 lol. I'll report back when the 3090 is here
It was delivered today, RTX 3090 image generation was 3 seconds and the video was 1 minute and 14 seconds. Huge improvement!
Interesting video. Did you base this on the ipiv workflow? As only the upscaling seems to differ.
yes it is
@@cgpixel6745Ipadaptor folder not found in model folder what I do
thanks a lot brother! this is work flow that I'm looking for, you are my superhero ! XD
i am here to help you
@@cgpixel6745 I know :)
Interesting comparison but it's a bit of an apple to oranges one as the fine tuned models have the benefit of a much greater data set and development. Not seen anyone compare it to SDXL Base yet which would be more of an accurate check. SD3's main issue that I can see is it appears to have quite a limited training data set as poses all look very similar etc. Really looking forward to seeing what the community do with it.
yeah i also believe that more amazing update are gonna come with this SD3 model lets cross our fingers for it
If you disincentivizing finetunes with your licencing it is another story tho.
thank you so much
There is a annoying problem When i zoom out the fog on the moon disappears from my vision How can i increase the view, so the fog doesn't disappear? Please help me
in the view tab change the end value from 1000 to 10 000 then select the camera go to the camera icon and do the same from 100 to 10 000 and it should be fixed
Can we do all the upscale stuff in ComfyUI directly?
Yes we can I will upload a video on that soon stay tune
@@cgpixel6745
so great, thank you so much
Thank you for presenting this tool. Seems to be really interesting and could be quite helpful regarding compositing!
i am glad that i helped you
Your comparison between IC-light and IP-Adapter is really a good idea. I have the feeling that you have more control of the final result with IP-Adapter in selecting a base image. With IC-light you always have a quite heavy color shift. Is the mask still playing a role if you are using IP-Adapter?
yes it is still playing role and you can check it by changing its position
Possible to get your light type images?
Sure just send me your email
can i please have them too?
Great job
thanks
Thx sir❤
your welcome hope that was helpfull
Merci, c'est nickel 👍
Avec plaisir 👍
this is good.
thanks a lot. I respect you, sir!
thanks it helps me to create more amazing video
hey, I really like this workflow and concept, but I can't seem to run it. I keep getting this error Error occurred when executing KSampler: 'ModuleList' object has no attribute '1' And in the console I see WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3]) IC-Light: Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3] !!! Exception during processing!!! 'ModuleList' object has no attribute '1' I didn't touch anything, and watched the IC light installation video before. I completely re-installed comfyui and installed only modules used in this workflow, and still I get this error... any ideas?
Check your checkpoint model I personally used juggernaut version not the sdxl one
@@cgpixel6745 I used 5 different SD1.5 models, including the very first one that comes with comfy. Emu 1.5 or whatever it's called... right now my latest lead indicates that despite installing layerdiffuse, a requirement for IC light, it may not have installed correctly. Further research once I get home.
@@SoSpecters in that case try update comfyui or reduce the resolution of the image from 1024 to 512 may be that would do
@@cgpixel6745 alright did brother, seems like it was not the case. I opened a ticket with the IC light github, I'm seeing a lot of Ksampler errors like my own. Hoping to get some feedback there and I will share with the community when I figure it out.
Could you help me on this case? Please An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on. File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux ode_wrappers\depth_anything.py", line 19, in execute model = DepthAnythingDetector.from_pretrained(filename=ckpt_name).to(model_management.get_torch_device()) File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything\__init__.py", line 40, in from_pretrained model_path = custom_hf_download(pretrained_model_or_path, filename, subfolder="checkpoints", repo_type="space") File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\util.py", line 324, in custom_hf_download model_path = hf_hub_download(repo_id=pretrained_model_or_path, File "", line 52, in hf_hub_download_wrapper_inner File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\file_download.py", line 1371, in hf_hub_download raise LocalEntryNotFoundError( I'm put the checkpoint file in D:\AI\sd-webui-aki-v4.5\models\Depth-Anything and Add the file link address in comfyui as: Add the file link address in comfyui as,then i'm put 3 pth files in D:\AI\sd-webui-aki-v4.5\extensions\sd-webui-controlnet\models and mark the same adress on comfyui yaml file
I found the file address and fixed the problem myself, thanks for the edited workflow!
wonderful tutorials, could you please make a video work with people subjects? ( IClight and IPADAPTER ERROR with face and body) thanks
Yeah I will try too I will upload another ic light soon so stay tune
I can't find this vedio for 16:9 Version in your channel
I did not post it yet I will do it soon
@@cgpixel6745 thank you very much
could you help me to solve depth anything error..? i got : Error occurred when executing DepthAnythingPreprocessor: [Errno 2] No such file or directory: 'C:\\Users\\meee2\\Desktop\\SD\ ew_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\LiheYoung\\Depth-Anything\\.huggingface\\download\\checkpoints\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete' and dont know how to solve..
You need to place the ckpts model into the right folder comfyui\models\controlnet
@@cgpixel6745 I found the file address and fixed the problem myself, thanks for the edited workflow!
Bro, If I have a portrait, I want to change the background of the person in the portrait, but in this case, the shadows and light expressions look awkward. Can you make a workflow that satisfies these two requirements?
yes bro you can do it just watch this tutorial kzread.info/dash/bejne/aIOkzLKnmdCWkZs.html it should resolve everything
you should do a tutorial on comfy ui for total beginners, you will get a lot of views
well if you need any help i am here
How can I download ur workflow? As json file
I put it in the description box
Where can I download ipadapter clip.safetensors?
Very nice tutorial, thanks for sharing the workflow
thanks happy that it help you
Great video! Thank you for sharing. Can I suggest that in the future either turn down the background music or just turn it off entierly? You are delightfully soft-spoken, so it would make it easier to hear you clearly. Keep it up brother!
thanks for the advice i was little septic about it and now i am sure of it