CG TOP TIPS

Are you ready for an extraordinary journey into the world of AI and graphics software?
If you are new to AI and graphics software, or you want to improve your skills as a professional, you are in the right place.
"CgTopTips" shares a wide range of tutorials, from beginner to advanced, on our regularly updated KZread channel.
Our team provides a large number of highly detailed videos that cover the fundamentals.

New videos are coming every day, so don't forget to check our channel :)
openart.ai/workflows/@cgtips
facebook.com/cgtoptips
twitter.com/cgtoptips
[email protected]

#CgTopTips

CG TOP TIPS - AI MUSIC

Comments

  • @fatfrank22 · 1 hour ago

    Didn't work for me, getting tons of errors.

  • @valorantacemiyimben · 2 hours ago

    Hello, I did everything, but I'm getting this error :( Even though I upload the BiRefNet files, it still gives an error :/

    Error occurred when executing BiRefNet: Model loading failed: [Errno 2] No such file or directory: 'C:\\Users\\xxx\\Desktop\\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\models\\BiRefNet\\swin_large_patch4_window12_384_22kto1k.pth'
    File "C:\Users\xxx\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\xxx\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\xxx\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\xxx\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-BiRefNet\BiRefNet_node.py", line 101, in matting
      self.load(weight_path, device=device)
    File "C:\Users\xxx\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-BiRefNet\BiRefNet_node.py", line 67, in load
      raise RuntimeError(f"Model loading failed: {e}")

  • @ARTICREATOR-SAM · 4 hours ago

    Is there a specific reason why you don't include the links from the video in the description? I'm not lazy, this was just a question 😁😁❤❤

  • @HamedEmine · 4 hours ago

    Do we need to download the BiRefNet models separately? Apparently it doesn't do so automatically... Thank you by the way!

  • @valorantacemiyimben · 2 hours ago

    Even though I upload the BiRefNet files, it still gives an error :/

  • @HamedEmine · 2 hours ago

    @@valorantacemiyimben You need to place the models under the BiRefNet folder inside your ComfyUI models folder. The path should look like this: "ComfyUI\models\BiRefNet". In that folder you need 6 models, total size 3.06GB. Here are the model names you should have there:
    - BiRefNet-DIS_ep580.pth
    - BiRefNet-ep480.pth
    - pvt_v2_b2.pth
    - pvt_v2_b5.pth
    - swin_base_patch4_window12_384_22kto1k.pth
    - swin_large_patch4_window12_384_22kto1k.pth
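
A quick way to confirm the files landed in the right place is a small check script. A minimal sketch, assuming the portable Windows layout described above (the base path is illustrative; adjust it to your install):

    from pathlib import Path

    BIREFNET_DIR = Path(r"ComfyUI\models\BiRefNet")

    EXPECTED = [
        "BiRefNet-DIS_ep580.pth",
        "BiRefNet-ep480.pth",
        "pvt_v2_b2.pth",
        "pvt_v2_b5.pth",
        "swin_base_patch4_window12_384_22kto1k.pth",
        "swin_large_patch4_window12_384_22kto1k.pth",
    ]

    # Report each expected model file as OK or MISSING.
    for name in EXPECTED:
        path = BIREFNET_DIR / name
        print(("OK      " if path.is_file() else "MISSING ") + str(path))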

  • @ARTICREATOR-SAM · 4 hours ago

    Thanks for the video, CG TOP... Please teach how to make an A-to-B morph... Is it possible to train this workflow? I leave the link below, because it is very dumb and very disorganized, and it's different from your helpful tutorial videos. Thank you. kzread.info/dash/bejne/paOo19OegNvFnto.html

  • @valorantacemiyimben · 4 hours ago

    Hello, the workflow is not on your site. How can we download it?

  • @user-ik2to2hu3y · 11 hours ago

    Very hard. I couldn't get used to the windows.

  • @CgTopTips · 11 hours ago

    This workflow is simpler than other workflows. With plenty of practice it will start to feel normal.

  • @LeePreston-t1d · 13 hours ago

    Anyone know if this can work on a Mac M3? I'm receiving this error currently, if anyone knows how to help:

    Error occurred when executing LivePortraitVideoNode: Torch not compiled with CUDA enabled
    File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 65, in map_node_over_list
      results.append(getattr(obj, func)(**input_data_all))
    File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/live_portrait.py", line 468, in run
      live_portrait_pipeline = LivePortraitPipeline(
    File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_pipeline.py", line 67, in __init__
      self.live_portrait_wrapper: LivePortraitWrapper = LivePortraitWrapper(cfg=inference_cfg)
    File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_wrapper.py", line 29, in __init__
      self.appearance_feature_extractor = load_model(cfg.checkpoint_F, model_config, cfg.device_id, 'appearance_feature_extractor')
    File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/utils/helper.py", line 99, in load_model
      model = AppearanceFeatureExtractor(**model_params).cuda(device)
    File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 915, in cuda
      return self._apply(lambda t: t.cuda(device))
    File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 779, in _apply
      module._apply(fn)
    File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 779, in _apply
      module._apply(fn)
    File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 804, in _apply
      param_applied = fn(param)
    File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 915, in <lambda>
      return self._apply(lambda t: t.cuda(device))
    File "/opt/homebrew/lib/python3.11/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
      raise AssertionError("Torch not compiled with CUDA enabled")
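
The traceback above shows the node calling .cuda() unconditionally, which cannot work on Apple Silicon, where PyTorch has no CUDA backend. A minimal sketch of the usual device-selection fallback (illustrative only, not a patch for this specific node):

    import torch

    def pick_device():
        # Prefer CUDA, then Apple's Metal backend (MPS), then fall back to CPU.
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    device = pick_device()
    model = torch.nn.Linear(4, 4).to(device)  # .to(device) instead of .cuda()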

  • @SuperCinema4d · 17 hours ago

    Cool! How do I change the pose from my photo, not a prompt?

  • @TheAmit4sun · 21 hours ago

    Any reason why I'm not seeing the MimicMotion Sampler and MimicMotion GetPoses options in the search?

  • @CgTopTips · 21 hours ago

    Try updating MimicMotion and then completely close and reopen the program; maybe that helps!

  • @TheAmit4sun · 20 hours ago

    @@CgTopTips And we have to install the MimicMotion custom node from ComfyUI itself, right? I'm not sure why my ComfyUI is looking at /notebooks/ComfyUI/custom_nodes/MimicMotion/__init__.py, whereas in the git repo __init__.py is inside MimicMotion/mimicmotion.

  • @avicap17 · 23 hours ago

    Thanks for sharing. It would help if it had subtitles. Regards.

  • @luisellagirasole7909 · 1 day ago

    Hello and thanks. I'm using MimicMotion, but a problem I have is the background moving along with the dancer... How can I fix it? Thank you!

  • @user-jh8zy7oy5b · 1 day ago

    Tell me what the problem is: it takes a very long time to render, more than an hour, and then gives this result with no errors. The video that came out has a black square instead of a head.

  • @CgTopTips · 1 day ago

    A prolonged render time could be due to one of the following:
    1. You are using the CPU instead of the GPU (a quick check for this follows below).
    2. The program is downloading its required files for the first time (the render should not be lengthy on the second run).
    3. Your settings, such as video duration, size, or the number of steps, are too high!
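
For the first point, a quick way to confirm whether PyTorch actually sees your GPU (a minimal sketch):

    import torch

    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        # The device ComfyUI would use, plus its total memory.
        print("GPU:", torch.cuda.get_device_name(0))
        print("VRAM (GB):", torch.cuda.get_device_properties(0).total_memory / 1024**3)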

  • @fabiojj6991 · 1 day ago

    👍👍👍🙏

  • @AlexsForestAdventureChannel · 1 day ago

    Thank you for the help, my images are looking better now. High five!

  • @user-go5vl9rv4p · 1 day ago

    What computer specs are required to use this program?

  • @CgTopTips · 1 day ago

    Minimum 8GB VRAM!

  • @user-go5vl9rv4p · 1 day ago

    What about notebook specs?

  • @RoshanYadav-v2z · 2 days ago

    Sir, I'm getting this error:
    Error occurred when executing DownloadAndLoadMimicMotionModel: Error no file named config.json found in directory C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1

  • @CgTopTips · 1 day ago

    Make sure you have downloaded all `config.json` files according to the video, and that their names exactly match those shown in the video (a quick layout check follows below).
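
For reference, a diffusers-format stable-video-diffusion-img2vid-xt-1-1 folder keeps a config file per component. A minimal sketch that reports what is present versus missing, assuming the typical diffusers layout (exact file names can vary by download source; adjust the base path to your install):

    from pathlib import Path

    BASE = Path(r"ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1")

    # Typical diffusers layout for this model; an assumption, not exhaustive.
    EXPECTED = [
        "model_index.json",
        "vae/config.json",
        "unet/config.json",
        "image_encoder/config.json",
        "scheduler/scheduler_config.json",
    ]

    for rel in EXPECTED:
        path = BASE / rel
        print(("OK      " if path.is_file() else "MISSING ") + str(path))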

  • @RoshanYadav-v2z · 1 day ago

    @@CgTopTips I downloaded and checked many times; I placed all the model and JSON files correctly.

  • @elonmusk4720 · 2 days ago

    Error occurred when executing DownloadAndLoadMimicMotionModel: Error no file named config.json found in directory

  • @CgTopTips · 2 days ago

    Make sure you have downloaded the `config.json` files according to the video, or that their names exactly match those shown in the video.

  • @vijayeditzz-nt3lq · 2 days ago

    How do I render all frames? It only animates a few frames and then loops. What setting needs to be changed to render all the dance moves? kzread.info/dash/bejne/YqSX29ZxiMy5ldY.html

  • @CgTopTips · 2 days ago

    Did you set the frame_load_cap to zero?

  • @vijayeditzz-nt3lq · 2 days ago

    @@CgTopTips No, I think I just left it as given in the workflow. The frame_load_cap was either the one in the example workflow, which was 15, or your workflow, which was 24. Should it be set to zero to render all frames? One more question: with 8GB VRAM, will I be able to render a MimicMotion video 20 minutes long? Thanks.

  • @CgTopTips · 2 days ago

    Unfortunately no! For a 640x360 (nHD) video, you can only get approximately 40 frames as output.

  • @vijayeditzz-nt3lq · 2 days ago

    @@CgTopTips kzread.info/dash/bejne/aYWNycOwmMzWe7A.html Hi, I managed to create a motion video, but the facial features are not perfect. Thanks so much for the assistance. You are a great help in learning.

  • @CgTopTips · 2 days ago

    Increase the resolution, or use a face detailer or face swap.

  • @cXrisp · 2 days ago

    Thanks! I'm grabbin' 'em all.

  • @rozonox · 2 days ago

    It doesn't work. There are too many errors.

  • @CgTopTips · 2 days ago

    Please make sure you are following the video, and if you get an error message, share it. I may be able to help.

  • @rozonox · 20 hours ago

    @@CgTopTips I'm asking for your help. I proceeded with the same nodes and values, but I get an error at the 'MimicMotion Sampler' step, as below:

    Error occurred when executing MimicMotionSampler: "compute_index_ranges_weights" not implemented for 'Half'
    File "/Volumes/R_SSD/ML/ComfyUI/execution.py", line 152, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "/Volumes/R_SSD/ML/ComfyUI/execution.py", line 82, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/Volumes/R_SSD/ML/ComfyUI/execution.py", line 75, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/Volumes/R_SSD/ML/ComfyUI/custom_nodes/ComfyUI-MimicMotionWrapper/nodes.py", line 294, in process
      frames = pipeline(
    File "/opt/miniconda3/envs/comfy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
      return func(*args, **kwargs)
    File "/Volumes/R_SSD/ML/ComfyUI/custom_nodes/ComfyUI-MimicMotionWrapper/mimicmotion/pipelines/pipeline_mimicmotion.py", line 521, in __call__
      image_embeddings = self._encode_image(image, device, num_videos_per_prompt, self.do_classifier_free_guidance, image_embed_strength=image_embed_strength)
    File "/Volumes/R_SSD/ML/ComfyUI/custom_nodes/ComfyUI-MimicMotionWrapper/mimicmotion/pipelines/pipeline_mimicmotion.py", line 152, in _encode_image
      image = clip_preprocess(image.clone(), 224)
    File "/Volumes/R_SSD/ML/ComfyUI/comfy/clip_vision.py", line 25, in clip_preprocess
      image = torch.nn.functional.interpolate(image, size=(round(scale * image.shape[2]), round(scale * image.shape[3])), mode="bicubic", antialias=True)
    File "/opt/miniconda3/envs/comfy/lib/python3.11/site-packages/torch/nn/functional.py", line 4589, in interpolate
      return torch._C._nn._upsample_bicubic2d_aa(
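
This particular failure ("compute_index_ranges_weights" not implemented for 'Half') typically means an fp16 tensor hit an antialiased resize on a backend without half-precision kernels, which is common on CPU and MPS. A minimal workaround sketch that upcasts to float32 around the resize (illustrative, not a patch to the wrapper itself):

    import torch
    import torch.nn.functional as F

    def resize_fp32_fallback(image, size):
        # Bicubic resize that retries in float32 when the backend lacks
        # half-precision kernels, then casts back to the original dtype.
        try:
            return F.interpolate(image, size=size, mode="bicubic", antialias=True)
        except RuntimeError:
            out = F.interpolate(image.float(), size=size, mode="bicubic", antialias=True)
            return out.to(image.dtype)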

  • @zetho69marini78 · 3 days ago

    Awesome results, but... why is my video creation in slow motion? Did I do something wrong?

  • @CgTopTips · 2 days ago

    Change the frame_rate in the Video Combine node to 25 fps.
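
The slow motion happens because playback duration is simply frame count divided by frame rate, so the same frames at a lower frame_rate play back over a longer time. A tiny worked example (the frame count is hypothetical):

    frames = 150           # frames produced by the sampler (illustrative)
    print(frames / 8)      # 18.75 s at 8 fps  -> looks like slow motion
    print(frames / 25)     # 6.0 s at 25 fps   -> normal-speed playback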

  • @zetho69marini78 · 2 days ago

    @@CgTopTips thank you bruh!!

  • @ARTICREATOR-SAM · 3 days ago

    Thanks for the video... I sent you an email, but I have been waiting a long time for a reply. I sent another email today.

  • @kneel.downnn · 3 days ago

    What are your PC specs, btw?

  • @CgTopTips · 2 days ago

    RTX 4060, 8GB VRAM

  • @davimak4671 · 3 days ago

    Is this real-time?

  • @CgTopTips · 3 days ago

    No, but it has good processing speed. Make sure your input video's size is not too large.

  • @user-pn6ey5dn4y · 3 days ago

    Do you know of a way to crop/resize the video to a square shape that will work in 'this' workflow without distorting the original image? Usually, I'd use image resize or prepare images for clip vision, but they don't work here because of the connectors.

  • @CgTopTips · 3 days ago

    Use the ImageCrop node.

  • @user-pn6ey5dn4y · 3 days ago

    @CgTopTips Thanks for the suggestion. Could you send a picture, please? I tried to connect "load video and segment" to two different ImageCrop nodes, but they don't connect. I could crop a video if I use a different load video node, but then I can't connect it into the 'drive video' connector on the live portrait node. Thank you.

  • @timemirror_ · 3 days ago

    Thanks!! I have an issue, btw. The "insightface" folder didn't appear in my "models" folder. I am sure I downloaded the nodes you mentioned at the beginning of the video. Maybe I'm doing something wrong. What do you think?

  • @CgTopTips · 3 days ago

    Manually create that folder and put the model in it.

  • @Deftribute · 3 days ago

    Followed instructions. I'm not an expert at all, just a "watch & mimic" guy. It gives me an error on the LivePortrait Video Node (#7). Does it only work with the default LivePortrait driving video?

    Error occurred when executing LivePortraitVideoNode: [Errno 2] No such file or directory: '/ComfyUI/models/liveportrait/base_models/appearance_feature_extractor.pth'
    File "/ComfyUI/execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "/ComfyUI/execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/ComfyUI/execution.py", line 65, in map_node_over_list
      results.append(getattr(obj, func)(**input_data_all))
    File "/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/live_portrait.py", line 468, in run
      live_portrait_pipeline = LivePortraitPipeline(
    File "/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_pipeline.py", line 67, in __init__
      self.live_portrait_wrapper: LivePortraitWrapper = LivePortraitWrapper(cfg=inference_cfg)
    File "/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_wrapper.py", line 29, in __init__
      self.appearance_feature_extractor = load_model(cfg.checkpoint_F, model_config, cfg.device_id, 'appearance_feature_extractor')
    File "/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/utils/helper.py", line 134, in load_model
      model.load_state_dict(torch.load(ckpt_path, map_location=lambda storage, loc: storage))
    File "/venv/lib/python3.10/site-packages/torch/serialization.py", line 997, in load
      with _open_file_like(f, 'rb') as opened_file:
    File "/venv/lib/python3.10/site-packages/torch/serialization.py", line 444, in _open_file_like
      return _open_file(name_or_buffer, mode)
    File "/venv/lib/python3.10/site-packages/torch/serialization.py", line 425, in __init__
      super().__init__(open(name, mode))

  • @user-rk3wy7bz8h · 3 days ago

    Hi, does it still need 24GB VRAM?

  • @CgTopTips · 3 days ago

    Unfortunately, that is more of a promotional claim; the VRAM you get is below 8GB at most! Overall, this platform is better suited to beginners for learning purposes.

  • @user-rk3wy7bz8h · 3 days ago

    @@CgTopTips Thanks, man. If you know any way to render with more than 8GB of VRAM, please tell me. I need high VRAM, and it should be free.

  • @yngeneer · 3 days ago

    Due to this warning: 'When installing or updating this custom node, many installation packages may be downgraded due to the installation of requirements. !! python3.12 is incompatible.' Are there any other SAM nodes you would recommend, or did you encounter any of the collisions mentioned, or should I just not worry?

    Heh, I was a little scared to install that node; I don't want my fragile ComfyUI to burn to the floor... but I took the step :D and everything seems OK. When I used their (ZHO) basic workflow, though, everything ran OK-ish with yolo_world/l, but it crashes with /m and /s: 'cannot reshape tensor of 0 elements into shape [-1, 1, 1, 0] because the unspecified dimension size -1 can be any value and is ambiguous'. I suppose 'l' is for large, 'm' is for medium, and 's' is for small, so if it runs with 'l', I should be good, shouldn't I?

    Also, what about the 'inpaint' versions of the models? I thought 'inpaint' models were somehow obsolete and normal models can be used for inpainting too, or am I delusional?

    And lastly: I did take a look at the 'BrushNet' video you mentioned in previous video comments, but there is no outpainting there either. Is there a way to OUTpaint in ComfyUI? Thanks for all you do for us ;-)

    P.S.: I can imagine enlarging the image I want to outpaint in something like mspaint, and then inpainting that enlarged blank area in Comfy... is that the right way to 'outpaint'?
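
On the outpainting question: conceptually yes, outpainting is usually done by enlarging the canvas and then inpainting the blank border, and ComfyUI has a core node for this ("Pad Image for Outpainting"), so the mspaint step should not be necessary. A minimal Pillow sketch of the pad-then-inpaint idea (illustrative only, not a specific workflow):

    from PIL import Image

    def pad_for_outpaint(img, pad=256):
        # Enlarge the canvas and return (padded_image, mask); white mask
        # pixels mark the new border area the model should fill in.
        w, h = img.size
        canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "gray")
        canvas.paste(img, (pad, pad))
        mask = Image.new("L", canvas.size, 255)            # 255 = fill this
        mask.paste(Image.new("L", (w, h), 0), (pad, pad))  # 0 = keep original
        return canvas, mask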

  • @HannibalCaine · 2 days ago

    Had the same issue. Downgraded python to 3.10 and still got the same error.

  • @user-pn6ey5dn4y · 3 days ago

    Could you please add a node to load a reference face pic? Thank you

  • @mikrodizels · 4 days ago

    Thanks, the workflow works, and it did a pretty good job upscaling a 512x512 image to 2048x2048. I have an NVIDIA GTX 1060 6GB card, and the whole process took around 20 minutes to finish. I then downloaded the v2_rank256 version of the ControlNet model and tried upscaling with that, hoping to make the process much faster by sacrificing some quality, but it took around the same time, unfortunately. I used the same exact workflow; the only thing different in mine is the checkpoint (Juggernaut instead of Dreamshaper). ~20 minutes does not seem that low-VRAM friendly.

  • @CgTopTips · 3 days ago

    My graphics card is an RTX 4060 with 8GB VRAM, and with the settings in the video it generates a 1024x1024 image in two minutes. Other upscaling methods, like using Supir, result in an "Out of memory" error!

  • @mikrodizels · 3 days ago

    @@CgTopTips How long does the upscaling process take you, though?

  • @CgTopTips · 3 days ago

    @@mikrodizels 2~3 min (RTX 4060, 8GB VRAM, 1024x1024)

  • @mikrodizels · 3 days ago

    @@CgTopTips You mean just to generate a 1024x1024 image? Or to upscale a 1024x1024 image using this workflow and these settings?

  • @CgTopTips · 3 days ago

    The photo I uploaded was 150×150 pixels, and I upscaled it to 1024×1024 pixels using this workflow.

  • @tetsuooshima832 · 4 days ago

    This looks very cool, but I'm afraid of ZHO_ZHO_ZHO packages now; I always had issues with them in the past xD

  • @wolf63tot · 4 days ago

    Error occurred when executing Yoloworld_ESAM_Zho: cannot import name 'packaging' from 'pkg_resources' (J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pkg_resources\__init__.py)

  • @CgTopTips · 4 days ago

    Always make sure to follow the installation steps for each custom node precisely through its GitHub page, and ensure that the custom node is correctly installed on your computer. Make sure to install the requirements file for each custom node and download the necessary models. The best way to troubleshoot is to read the ComfyUI terminal panel.

  • @dadekennedy9712 · 4 days ago

    Wonderful video. Thank you!

  • @CgTopTips · 4 days ago

    🙏

  • @baheth3elmy16 · 4 days ago

    Thanks for the video! It just runs forever; I waited 30 minutes for a single 163KB picture to upscale and it was still not done, so I cancelled the process.

  • @CgTopTips · 4 days ago

    Check the terminal panel to ensure that no model is currently downloading. Are you using a graphics card or CPU? Did you set up the nodes exactly like in the video? ...

  • @fatiheke · 4 days ago

    I get an error: "LivePortraitVideoNode" is not installed.

  • @CgTopTips · 3 days ago

    Always check the following points:
    - Ensure you install the requirements file for each custom node (pip install -r requirements.txt); a sketch that does this for every custom node follows below.
    - Download the necessary models for each custom node.
    - Verify that all custom nodes needed for the workflow are installed without issues (check through the Manager panel).
    - Ensure there are no version conflicts between models; for example, if the checkpoint is SD1.5, the ControlNet should also be SD1.5.
    - Always follow the installation steps for each custom node precisely through its GitHub page.
    - The best way to understand an issue when you see an error message is the ComfyUI terminal panel. For example, sizes might not match, or you might not have configured a node's settings correctly, and so on.
    - You can copy the error message and search for a solution on Google.
    Note: If the problem is still unresolved, please share a screenshot of the error with your workflow via email so I can check it.
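
A minimal sketch of that first step, assuming a layout where each custom node ships its own requirements.txt under custom_nodes (run it from the ComfyUI root, with the same Python that runs ComfyUI):

    import subprocess
    import sys
    from pathlib import Path

    # Each custom node may ship its own requirements.txt.
    for req in Path("custom_nodes").glob("*/requirements.txt"):
        print(f"Installing requirements for {req.parent.name} ...")
        # Use the interpreter running ComfyUI so packages land in its environment.
        subprocess.run([sys.executable, "-m", "pip", "install", "-r", str(req)], check=False)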

  • @yngeneer · 4 days ago

    Thank you for your work. Would you mind showcasing inpainting and outpainting? Are dedicated inpaint/outpaint models still necessary? How do you properly outpaint in Comfy?

  • @CgTopTips · 4 days ago

    Did you watch the BrushNet video?

  • @yngeneer · 4 days ago

    @@CgTopTips I will!

  • @cXrisp · 4 days ago

    Works for me! I was getting seams but they went away after I changed the seam_fix_mode to "Half Tile" on the second upscaler. Thanks for the workflow -- I learned a lot.

  • @Infinite_Dre4m · 5 days ago

    How do you add prepended or appended text?

  • @DeMaddin81 · 5 days ago

    I noticed something: when I tried another dancer, her face was covered by her hand for a few frames. I immediately got an error message like "Face not recognized". The process aborted and the entire result was discarded, which means the entire rendering time was wasted. Do you have any idea how to make the tool continue rendering despite the error message? Perhaps an additional node in ComfyUI that catches the error?

  • @CgTopTips · 5 days ago

    Unfortunately, with this method the face must be visible in all frames, and there is currently no solution for this issue!

  • @DeMaddin81 · 5 days ago

    @@CgTopTips Thx!

  • @DeMaddin81 · 5 days ago

    Hi. I had a look at the temporary files and noticed something: with your method, it seems a complete pass with the facial expressions of the target video is made for EVERY frame of the source video. That means at 30 fps, for 1 second of result, 30x30 = 900 images are generated, while only 30 images are needed; the other 870 are discarded. I understand that comparison images are needed for consistent movement, but wouldn't 5 comparison images be enough, for example, instead of the full second? If I render 10 seconds of video at 30 fps, that is 10*30*30 = 9000 generated images for 300 video frames. That's crazy and needs a lot of time. Do you have a solution for this?

  • @eddeyman · 5 days ago

    Thank you, that was helpful.

  • @mynameisinosuke9680 · 5 days ago

    I got an error, bro. This is my error; how do I fix it?

    Error occurred when executing DownloadAndLoadMimicMotionModel: Error no file named config.json found in directory E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
    File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper\nodes.py", line 128, in loadmodel
      self.vae = AutoencoderKLTemporalDecoder.from_pretrained(svd_path, subfolder="vae", variant="fp16", low_cpu_mem_usage=True).to(dtype).to(device).eval()
    File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
      return fn(*args, **kwargs)
    File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 616, in from_pretrained
      config, unused_kwargs, commit_hash = cls.load_config(
    File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
      return fn(*args, **kwargs)
    File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\configuration_utils.py", line 377, in load_config
      raise EnvironmentError(

  • @elonmusk4720 · 2 days ago

    Same error. Have you found any solution?

  • @NOBLEFILMS1987 · 5 days ago

    AWESOME!

  • @shagithyansathishkumar6283 · 5 days ago

    Is it working?

  • @CgTopTips · 5 days ago

    Of course. I tried it myself, and you can see in the video that it works.

  • @shagithyansathishkumar6283 · 5 days ago

    @@CgTopTips I'm trying it; waiting.

  • @xinyu5706 · 5 days ago

    Error occurred when executing SVD_img2vid_Conditioning: 'NoneType' object has no attribute 'encode_image'
    File "E:\comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "E:\comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "E:\comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "E:\comfy UI\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_video_model.py", line 46, in encode
      output = clip_vision.encode_image(init_image)

    SVD_img2vid_Conditioning cannot run. What is the reason for the error above? Thank you for sharing; looking forward to your reply.

  • @YouCanDoItTootorials · 5 days ago

    At 9:38, when I queue generation, the next KSampler errors with 'NoneType' object has no attribute 'shape'. Anyone have ideas about the possible issue?

  • @YouCanDoItTootorials · 5 days ago

    Well, the 'object has no attribute' error seemed to be because I neglected to use an SDXL checkpoint (oops), but now I am getting a 'mat1 and mat2 shapes cannot be multiplied (16x2048 and 768x320)' error, which I believe may be due to non-matching input image dimensions? Should we have a node in here to auto-resize input images to keep things smooth?