DreamingAI

A channel created out of love for AI

Comments

  • @luxipo9934 · 23 hours ago

    This is spectacular

  • @zetho69marini78 · 3 days ago

    Is there a way to connect it to the prompt to create characters with the face of someone? I can do it in normal Stable Diffusion... but in ComfyUI?

  • @vargaalexander · 4 days ago

    In SD 1111 there is an option, "restore face mask". Is it in ComfyUI?

  • @shuanshuanzai · 4 days ago

    You show a complex workflow, which looks very advanced, but it's not a good demo. Where is your embedding string connected to?

  • @thebigs1997 · 5 days ago

    I'm getting "Exception during processing!!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!" but I have a 4060 Ti / 32 GB RAM... what can I do?

  • @j_shelby_damnwird · 5 days ago

    Just what I was looking for, thank you very much. Does this work with SDXL/Pony models as well?

  • @SukritLomlai · 5 days ago

    Thank you so much for helping me find the light.

  • @ujjawaltyagi8540 · 7 days ago

    This was awesome! Can you please make a video explaining how to train a LoRA model locally using ComfyUI: downloading images from Google or other websites/raw images, cleaning them since they will be of different sizes, processing them to generate prompts, and then training locally for an anime character... please.

  • @MaghrabyANO · 9 days ago

    It says "invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because a node is missing the class_type property.', 'details': "Node ID '#59'", 'extra_info': {}}" in the command window and doesn't let me run the workflow. Any help?

  • @AldoZorzi · 10 days ago

    Thanks for this! It definitely put me on the right path. Just to thank you, a little tip: change the get_image method to

        def get_image(filename, subfolder, folder_type):
            data = {"filename": filename, "subfolder": subfolder, "type": folder_type}
            url_values = urllib.parse.urlencode(data)
            img_data = requests.get("{}/view?{}".format(server_address, url_values)).content
            return img_data

    and your code to

        ...
        for node_id in images:
            for image_data in images[node_id]:
                with open(f"{args.dest}/{filename}", 'wb') as handler:
                    handler.write(image_data)
        ...

    With this, metadata tags are preserved, and dragging a file into ComfyUI loads the workflow and seed data.
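
    (For reference, a minimal self-contained sketch of the tip above; this is an assumption on my part, not the commenter's full script. It presumes a local ComfyUI server at 127.0.0.1:8188, ComfyUI's standard /view HTTP route, and an images dict mapping node IDs to lists of {"filename", "subfolder", "type"} entries such as those returned by the /history endpoint.)

        import urllib.parse
        import requests

        server_address = "127.0.0.1:8188"  # assumed local ComfyUI instance

        def get_image(filename, subfolder, folder_type):
            # Fetch the raw file bytes from ComfyUI's /view endpoint so the
            # embedded workflow/seed metadata survives untouched.
            data = {"filename": filename, "subfolder": subfolder, "type": folder_type}
            url_values = urllib.parse.urlencode(data)
            return requests.get("{}/view?{}".format(server_address, url_values)).content

        def save_images(images, dest):
            # images: {node_id: [{"filename": ..., "subfolder": ..., "type": ...}, ...]}
            for node_id in images:
                for info in images[node_id]:
                    img_data = get_image(info["filename"], info["subfolder"], info["type"])
                    with open(f"{dest}/{info['filename']}", "wb") as handler:
                        handler.write(img_data)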

  • @Ronniboyy-u9n · 10 days ago

    Hello, can anyone help me with this error? "D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page"

  • @Amazingphysicscourses · 12 days ago

    Hi. Nice. What is the minimum GPU required?

  • @GetawayFilms · 14 days ago

    Great video, concise explanation. Loved it.

  • @sinuva · 16 days ago

    I couldn't make this IPAdapter model work :( For some reason it doesn't find, for example, the plus high strength one.

  • @mr.entezaee · 18 days ago

    ModuleNotFoundError: No module named 'jmespath'
    Train finished

  • @giusparsifal · 18 days ago

    Hello and thanks. A question: if I interrupt the process, is there a backup, or do I have to begin from the start? Thank you.

  • @leol.4541 · 18 days ago

    I have installed ComfyUI Auxiliary Preprocessor, but I can't find any CannyEdge node; I just have the regular Canny. Can someone help? Also, just using Canny, I have a problem when rendering, apparently with the GPU, but everything seems right on my computer. And when I remove the Canny node, everything seems fine until the rendering reaches the KSampler Advanced node, where the same problem appears. Can anyone help, please?

  • @PaulRoneClarke · 18 days ago

    Unfortunately these custom scripts bricked my Comfy installation: "AssertionError: Torch not compiled with CUDA enabled". I had to remove your scripts and run a Python and Comfy update to get it back.

  • @XastherReeD · 19 days ago

    Ok, so this is the third question I had answered in just as many videos. Short videos, no tangents. Then you also shared PoseMyArt, which looks absolutely perfect for use with OpenPose. Yeah, I'm subscribed.

  • @iccang · 20 days ago

    Hi... I got an error while saving the video. The message is like this:

        Frames have been successfully reassembled into /Users/iccangninol/ComfyUI/temp/video.mp4
        !!! Exception during processing!!! MoviePy error: the file /Users/iccangninol/ComfyUI/temp/video.mp4 could not be found!
        Please check that you entered the correct path.
        Traceback (most recent call last):
          File "/Users/iccangninol/ComfyUI/execution.py", line 151, in recursive_execute
            output_data, output_ui = get_output_data(obj, input_data_all)
          File "/Users/iccangninol/ComfyUI/execution.py", line 81, in get_output_data
            return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
          File "/Users/iccangninol/ComfyUI/execution.py", line 74, in map_node_over_list
            results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
          File "/Users/iccangninol/ComfyUI/custom_nodes/ComfyUI-N-Nodes/py/video_node_advanced.py", line 616, in save_video
            video_clip = VideoFileClip(videos_output_temp_dir)
          File "/Users/iccangninol/miniconda3/envs/Comfy2/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
            self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
          File "/Users/iccangninol/miniconda3/envs/Comfy2/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
            infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
          File "/Users/iccangninol/miniconda3/envs/Comfy2/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 270, in ffmpeg_parse_infos
            raise IOError(("MoviePy error: the file %s could not be found! "
        OSError: MoviePy error: the file /Users/iccangninol/ComfyUI/temp/video.mp4 could not be found!
        Please check that you entered the correct path.

    Can you tell me what's wrong in my case?

  • @KEV.IN_ · 21 days ago

    Hi, could you make a video about generating different images with different poses but with the same anime character, by uploading the anime character?

  • @TBou_nyncuk · 22 days ago

    Great vod mate!

  • @johnsummerlin7630 · 24 days ago

    11:45 Clarification requested: "denoise" is not a right-click option on the depicted node. What needs to be loaded for this option to show up? The source of this denoise control is not clear. There are multiple "custom scripts" items in the Manager menu, with different authors and different conflict warnings too.

  • @marcoantonionunezcosinga7828 · 26 days ago

    Greetings, I am a newbie with this type of program. I have some problems installing ComfyUI-N-Nodes; among the missing nodes is the one named SaveVideo. I will continue watching your videos; maybe there is one that will help me. Thank you.

  • @Digital_Paradise · 28 days ago

    Any idea how to add PADDING to the output face? I want to swap the eye and nose area and leave the mouth area alone. I'm trying to figure out how to do that without masking because I batch-process images... For example, I have an image of a man eating a big piece of bread in front of his mouth, but ReActor always changes that bread into a mouth shape.

  • @eveekiviblog7361 · 28 days ago

    Please show how we can link Comfy to a Telegram bot.

  • @MolediesOflife · 29 days ago

    crazy ai

  • @timjones9316 · 29 days ago

    Thanks! I had been struggling with the original version (did not get it to work). Your nodes worked great, and simply, on the first attempt. The long explanation of the LoRA-training node is also appreciated. (Note: building the LoRA with 45 images did take some time, > 3.5 hrs, using a 4070 Ti.)

  • @abrahamgeorgec · 29 days ago

    Were you able to download the video to a user-defined folder using the API? Which node should be used for that?

  • @abrahamgeorgec · 29 days ago

    Nice explanation. Were you able to download a video (from a video workflow) to a user-specific folder?

  • @soundmob329 · 29 days ago

    Of all the tutorials, this was the only one that worked. Everyone else skipped over the most important part, which was installing IN the python_embeded folder. They didn't even specify to download it to THAT specific path. That's literally all it took.

  • @dongyanghan4030 · a month ago

    Traceback (most recent call last):
      File "D:\BaiduSyncdisk\Proceduralization\default\4comfyui\batch_test0621.py", line 18, in <module>
        prompt_workflow = json.load(open('D:\\BaiduSyncdisk\\Proceduralization\\default\\4comfyui\\workflow_api.json'))
      File "json\__init__.py", line 293, in load
    UnicodeDecodeError: 'gbk' codec can't decode byte 0xa8 in position 479: illegal multibyte sequence
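
    (A plausible fix, offered as an assumption rather than something from the thread: on a Chinese-locale Windows system open() defaults to the gbk codec, while ComfyUI workflow JSON is UTF-8, so passing an explicit encoding avoids the UnicodeDecodeError.)

        import json

        # Open the workflow explicitly as UTF-8 instead of the Windows locale default (gbk here).
        path = 'D:\\BaiduSyncdisk\\Proceduralization\\default\\4comfyui\\workflow_api.json'
        with open(path, encoding='utf-8') as f:
            prompt_workflow = json.load(f)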

  • @nlmnx5763 · a month ago

    thanks babe

  • @tcgerbilheroes4386 · a month ago

    The training finishes in 4 seconds and nothing is added to the folder I created for the model. The log:

        D:\AI\Comfyui\ComfyUI\custom_nodes\Lora-Training-in-Comfy/sd-scripts/train_network.py
        The following values were not passed to `accelerate launch` and had defaults used instead:
            `--num_processes` was set to a value of `1`
            `--num_machines` was set to a value of `1`
            `--mixed_precision` was set to a value of `'no'`
            `--dynamo_backend` was set to a value of `'no'`
        To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
        C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\python.exe: can't open file 'D:\\AI\\Comfyui\\custom_nodes\\Lora-Training-in-Comfy\\sd-scripts\\train_network.py': [Errno 2] No such file or directory
        Traceback (most recent call last):
          File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
            return _run_code(code, main_globals, None,
          File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
            exec(code, run_globals)
          File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1027, in <module>
            main()
          File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1023, in main
            launch_command(args)
          File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1017, in launch_command
            simple_launcher(args)
          File "C:\Users\HelpTech\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 637, in simple_launcher
            raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
        subprocess.CalledProcessError: Command '['C:\\Users\\HelpTech\\AppData\\Local\\Programs\\Python\\Python310\\python.exe', 'custom_nodes/Lora-Training-in-Comfy/sd-scripts/train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=D:\\AI\\Comfyui\\ComfyUI\\models\\checkpoints\\bismuthmix_v30.safetensors', '--train_data_dir=D:/AI/Art/milffy', '--output_dir=D:\\AI\\Art\\milffy model', '--logging_dir=./logs', '--log_prefix=Milffy', '--resolution=512,512', '--network_module=networks.lora', '--max_train_epochs=5000', '--learning_rate=1e-4', '--unet_lr=1.e-4', '--text_encoder_lr=1.e-4', '--lr_scheduler=cosine_with_restarts', '--lr_warmup_steps=0', '--lr_scheduler_num_cycles=1', '--network_dim=32', '--network_alpha=32', '--output_name=Milffy', '--train_batch_size=1', '--save_every_n_epochs=100', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=26', '--cache_latents', '--prior_loss_weight=1', '--max_token_length=225', '--caption_extension=.txt', '--save_model_as=safetensors', '--min_bucket_reso=256', '--max_bucket_reso=1584', '--keep_tokens=0', '--xformers', '--shuffle_caption', '--clip_skip=2', '--optimizer_type=AdamW8bit', '--persistent_data_loader_workers', '--log_with=tensorboard']' returned non-zero exit status 2.
        Train finished

  • @Bitcoin_Baron · a month ago

    Can you update and just provide a downloadable archive we can extract into the Comfy folder? I can't understand the GitHub instructions; they don't make sense for the average user.

  • @anthonydelange4128 · a month ago

    I'm getting "Value not in list: video: 'Flying' not in []", an issue with LoadVideo.

  • @Roguefromearth · a month ago

    Hey, I am getting an error: ERROR: No matching distribution found for websockets-client
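
    (Likely cause, stated as an assumption: the PyPI package used by the ComfyUI API examples is named websocket-client, singular and hyphenated, and is imported as websocket, so pip finds no distribution called "websockets-client".)

        # pip install websocket-client
        import websocket  # provided by the websocket-client package

        ws = websocket.WebSocket()
        ws.connect("ws://127.0.0.1:8188/ws?clientId=example-client")  # assumed local ComfyUI server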

  • @curiouspers · a month ago

    Hahaha, that's a perfect ending! <3

  • @Fatmir-lt6cq · a month ago

    Thank you! How can I swap the complete face along with the hair?

  • @HiramLNoise · a month ago

    That's like Photoshop with extra steps.

  • @roiyg19 · a month ago

    Thank you for this. How can I contact you regarding an interesting project?

  • @sdafsdf9628 · a month ago

    Thank you very much for the exciting experiments. I tested with an AI image of a narrow, idyllic alley in an Italian village: cobblestones, windows, doors and flowers. Unfortunately, all this creativity is lost in the outpaint. Fooocus handles it a little better, but the images are too dark there. The hard test is to enlarge an image, then reduce it to the original size in a graphics program, and then enlarge it again using outpainting. Repeating this 5 times (optically we go backwards) shows all the weaknesses. How can we use the creativity that is in the AI in the outpaint? Even with the original prompt there is no improvement. It is also not possible to infer the enlargement from the original alone; the user has to say (in the text prompt) how the world should change, even if only slightly. If light comes in from the right, then the lamp must come in at some point. If there is a shadow, there must be a person standing there at some point...

  • @killbadmashia9225 · a month ago

    Where can we go to learn which components are used in a certain workflow to accomplish a task? Or what the workflow of nodes would be to accomplish a certain task?

  • @user-pt6mq9ff2s · a month ago

    Just remove the piano track, very distracting.

  • @Eugeniocaraujo · a month ago

    Unable to run it:

        Traceback (most recent call last):
          File "...\ComfyUI-master\custom_nodes\ComfyUI-N-Nodes-main\__init__.py", line 64, in <module>
            spec.loader.exec_module(module)
          File "<frozen importlib._bootstrap_external>", line 883, in exec_module
          File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
          File "...\ComfyUI-master\custom_nodes\ComfyUI-N-Nodes-main\py\frame_interpolator_node.py", line 18, in <module>
            from model.pytorch_msssim import ssim_matlab
        ModuleNotFoundError: No module named 'model'

    Can't load the frame interpolator in ComfyUI because of this.

  • @youjohnnyd7773 · a month ago

    I receive error messages when creating image captions:

        Error occurred when executing GPT Sampler [n-suite]: list index out of range
          File "E:\AI\AITools\ComfyUI\execution.py", line 141, in recursive_execute
            input_data_all = get_input_data(inputs, class_def, unique_id, outputs, prompt, extra_data)
          File "E:\AI\AITools\ComfyUI\execution.py", line 26, in get_input_data
            obj = outputs[input_unique_id][output_index]

    Please help me fix this, thanks.

  • @eliassuzumura · a month ago

    For the first time I was able to understand a full ComfyUI tutorial. Thank you.

  • @88.AmpLyte · a month ago

    Wow, thank you, brother. Between the time you took to create simplified custom versions, your explanations, and the way you broke down individual variables, I was able to gain a real understanding of these components and keep up with the new insights as the video progressed. 🧠💪👏

  • @theteknologist9574 · a month ago

    Awesome video. No filler, just the goods.

  • @DarksNote · a month ago

    The program is still too complicated for the majority.