ControlAltAI

Welcome to ControlAltAI. We provide tutorials on how to master the latest AI tools, such as Stable Diffusion, Midjourney, BlueWillow, ChatGPT, and basically everything AI.

Our channel provides simple, practical tutorials and information that are easy to understand, whether you are a tech enthusiast, a developer, or simply curious about the latest advancements in AI.

We are committed to sharing our knowledge and expertise with our viewers. So if you're looking to stay informed on the latest AI tools and news and to expand your knowledge, subscribe to our channel.

Comments

  • @Thank_you_USA
    15 hours ago

    Great vid, btw. What about colorizing an old black-and-white photo? I think the Color Match MKL is not enough for it.

  • @controlaltai
    10 hours ago

    Thanks. You need ControlNet Recolor for that. Here is the workflow tutorial: kzread.info/dash/bejne/moiH1pKyc7yTn6g.html?si=8JqhnKyic6HmUkee

  • @phuongmedia
    2 days ago

    Can you give me your standard negative prompt?

  • @controlaltai
    2 days ago

    There is no standard negative prompt; it depends on the checkpoint. Check the sample images of the checkpoint you are using on Civitai. For some checkpoints, like SD3, we start with no negative prompt and add one when required.

  • @nuwan78
    2 days ago

    Thanks for the good tutorial. When I tried these steps in the Stable Diffusion web UI, it didn't seem to generate anything. Any idea why? I am new to these Stable Diffusion tools. In my UI I don't see the IP-adapter_clip_sd15 preprocessor like in yours.

  • @controlaltai
    2 days ago

    Check the cmd for the exact error. Are you getting a black image?

  • @ericgoodman38
    3 days ago

    This tutorial is probably the best I've ever seen on any subject. I will still have to watch it many times to absorb the information.

  • @meltingdude
    3 days ago

    Tried this; the YoloWorld ESAM node is giving me an import failed error. It probably needs updates.

  • @jianzheng9551
    5 days ago

    I tried to use this workflow for dogs and cats but the results are not good. Is this expected or am I missing anything?

  • @controlaltai
    5 days ago

    It will work for any subject, object, and scene. Please elaborate on what you mean by the results not being good: do they work for some images and not others, or do they not work at all?

  • @controlaltai
    6 days ago

    For the error "cannot import name 'packaging' from 'pkg_resources'", the solution: ensure that Python 3.12 or lower is installed with ComfyUI portable. Then go inside the ComfyUI_windows_portable\python_embeded folder and run this command: python.exe -m pip install setuptools==65.5.1. A ComfyUI update installs setuptools 70.0.0; you need to downgrade for it to work.
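
    For reference, a minimal command-line sketch of that fix (assuming the default ComfyUI_windows_portable folder layout; adjust the path to match your install):

        rem Switch to the embedded Python folder of the portable install
        cd ComfyUI_windows_portable\python_embeded
        rem Downgrade setuptools so pkg_resources still exposes 'packaging'
        python.exe -m pip install setuptools==65.5.1
        rem Confirm the version that is now installed
        python.exe -m pip show setuptools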

  • @hleet
    5 minutes ago

    Thank you very much! Now it works.

  • @hottrend4977
    6 days ago

    Please help me, I get the error "cannot import name 'packaging' from 'pkg_resources'"

  • @controlaltai
    6 days ago

    Okay, found the solution. First, ensure that Python 3.12 or lower is installed with ComfyUI portable. Then go inside the python_embeded folder in the ComfyUI portable folder and run this command: python.exe -m pip install setuptools==65.5.1. A ComfyUI update installs setuptools 70.0.0; you need to downgrade for it to work.

  • @user-fj3oc9cx1b
    6 days ago

    Huge thanks for the video! At last I have good inpaint and outpaint workflows.

  • @ignaciofaria8628
    8 days ago

    Anyone else struggling with the command python -m pip install inference==0.9.13: try using py -m pip install inference==0.9.13 instead.

  • @goodie2shoes
    9 days ago

    Amazing tutorial. I need a couple of viewings to take it all in because there is so much useful information!

  • @rei6477
    9 days ago

    I tried attention masking again, similar to what you showed in this video (not exactly the same because of the IP Adapter update), but when I generated a wide horizontal image with a mask applied to the center, I only got borders on the sides and the background didn't expand to fill the entire image size. Has this technique stopped working after an update, or could there be a mistake in my node setup? Would you mind checking this for me? 10:13

  • @controlaltai
    9 days ago

    Sure, email me the workflow and I will have a look: mail @ controlaltai . com (without spaces).

  • @rei6477
    9 days ago

    @@controlaltai Sorry, I was using an anime model (Anima Pencil), which is why it only output images with the background cropped out. When I switched to Juggernaut it worked correctly! Sorry for the quick comment, and thank you for going out of your way to provide your email and offering to help.

  • @petrino
    9 days ago

    Error occurred when executing Yoloworld_ESAM_Zho: cannot import name 'packaging' from 'pkg_resources' (C:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu 1\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pkg_resources\__init__.py)

  • @petrino
    9 days ago

    This is also after I followed this step. Command to install inference: python -m pip install inference==0.9.13 and python -m pip install inference-gpu==0.9.13.
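
    For anyone on the portable build, a sketch of those install commands run with the embedded interpreter (assuming the default python_embeded folder; the version pins are the ones given above):

        cd ComfyUI_windows_portable\python_embeded
        rem Pin inference to the version used in the tutorial
        python.exe -m pip install inference==0.9.13
        python.exe -m pip install inference-gpu==0.9.13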

  • @controlaltai
    7 days ago

    I am looking at this. I don't think it's an inference problem; it's something to do with the Python version or the latest ComfyUI update. I will get back to you if I find a solution.

  • @theiwid24
    10 days ago

    Nice work mate!

  • @itanrandel4552
    10 days ago

    Excuse me, do you have any tutorial on how to make a batch of multiple depth or softedge images per image?

  • @controlaltai
    9 days ago

    You just connect the Load Image node to the required preprocessors and the save nodes.

  • @nocnestudio7845
    11 days ago

    Great tutorials. Good job 💪... 28:12 is a very heavy operation; one frame is taking forever...

  • @JustAI-fe9hh
    12 days ago

    Thank you for this wonderful video!

  • @stijnfastenaekels4035
    12 days ago

    Awesome tutorial, thanks! But I'm unable to find the Visual Area Composition custom node when I try to install it. Was it removed?

  • @controlaltai
    12 days ago

    Thanks, and no, you can find it here: github.com/Davemane42/ComfyUI_Dave_CustomNode

  • @cyberspider78910
    13 days ago

    Brilliant, no-fuss work. Keep it up, bro. With this quality of tutorial, you will outgrow any major channel...

  • @Senpaix3
    13 days ago

    It's installing Optimum on mine, and it's been stuck for a while now. What should I do?

  • @divye.ruhela
    15 days ago

    Wow, the details are unreal! Trying this for sure and reporting back!

  • @sup2069
    15 days ago

    Mine only has two tabs. How do I enable the third tab? It's missing.

  • @jamesharrison8156
    16 days ago

    A wonderful tutorial! I learned a lot more about ComfyUI. Thank you so much for taking the time to create this. Also, showing how you created the workflow and following along myself works much better as a learning tool.

  • @controlaltai
    15 days ago

    Thank you!!

  • @ai_gene
    17 days ago

    Thank you for the tutorial! I would like to know how to do the same thing as in the fourth workflow but using IPAdapter FaceID to be able to place a specific person in the frame. I tried, but the problem is that the inputs to MultiAreaConditioning are Conditioning, while the outputs from IPAdapter FaceID are Model. How can I solve this problem? I would appreciate any help.

  • @controlaltai
    16 days ago

    Okay, but the area conditioning in this tutorial is not designed to work with IP Adapter; that's a very different workflow. We have not covered placing a specific person in the frame in a tutorial, but it involves masking in the person, then using IC-Light and a bunch of other steps to adjust the lighting to the scene, processing it through sampling, and then changing the face again.

  • @ai_gene
    16 days ago

    @@controlaltai Thank you for your response! 😊 It would be great if you could create a tutorial on this topic. I'm trying to develop a workflow for generating thumbnails for videos. The main issue is that SD places the person's face in the center, but I would like to see the face on the side to leave space for other information on the thumbnail. Your tutorial was very helpful for composition, but now I need to figure out how to integrate a specific face. 😅

  • @controlaltai
    16 days ago

    Unfortunately, due to an agreement with the company that owns the copyrighted InsightFace technology, I cannot publicly create any face-swapping tutorial for KZread. Just search for ReActor and you should find plenty on KZread. I am only restricted for public education, not for paid consultations or private workflows (for this specific topic).

  • @controlaltai
    A day ago

    @@ai_gene Hi, okay, so having the face on the left is very easy. You can do this using two ControlNets: DWPose and Depth. Make sure the image resolution is the same as the generated image, and ensure that in the ControlNet image the person is on the left.

  • @Noobinski
    17 days ago

    That was extremely helpful indeed. Thank you for showcasing how to do it. Not many do (or even know what they're talking about).

  • @controlaltai
    17 days ago

    Thank you!!

  • @user-ey3cm7lf1y
    18 days ago

    Thanks for the excellent video. However, I wonder why my BLIP Analyze Image node is different from the one in the video. Also, in my BLIP Loader there is no model named "caption". I already downloaded everything in the Requirements section.

  • @controlaltai
    18 days ago

    BLIP was recently updated. Just add the new BLIP node and model and use whatever it shows there; these changes are normal. Ensure ComfyUI is updated to the latest version along with all custom nodes. The "caption"-only model will not work any more.

  • @user-ey3cm7lf1y
    18 days ago

    @@controlaltai So I already applied your workflow JSON. But when I click the queue prompt, I get an "allocate on device" error. However, if I check it and then click the queue prompt again, it works fine without any errors. So I searched for the "allocate on device" error related to ComfyUI, but my error log was different from the search results. My error log only mentions "allocate on device" without any mention of insufficient memory, and below that it shows the code. However, other people's error logs mention insufficient memory. Despite this, could my error also be a memory issue?

  • @controlaltai
    17 days ago

    An "allocate on device" error means running out of VRAM or system RAM. If you can tell me your VRAM and system RAM, the size of the image you are trying to fix the face for, and the box settings or anything else you are trying to do, I can guide you on how to optimize the settings for your system specs.

  • @user-ey3cm7lf1y
    17 days ago

    @@controlaltai This is my system when I run the ComfyUI server: Total VRAM 6140 MB, total RAM 16024 MB; Set vram state to: NORMAL_VRAM; Device: cuda:0 NVIDIA GeForce RTX 4050 Laptop GPU : cudaMallocAsync; VAE dtype: torch.bfloat16. I'm using the Fix_Faces_Extra workflow JSON, and my images are JPG files under 1 MB. The process stops at the FaceDetailer node. I think I should optimize the FaceDetailer settings. Thanks

  • @controlaltai
    17 days ago

    Image file size does not matter; resolution does. Change the FaceDetailer setting from 1024 or 768 to 256. That should work for you. Try that.

  • @user-pg9wy3qn4c
    18 days ago

    Does this method work with videos?

  • @controlaltai
    18 days ago

    It does indeed in my testing, but the workflow is very different. I took a plane take-off video, removed the plane completely, and reconstructed the video. I did not include it in the tutorial as it was becoming too long.

  • @AlanLeebr
    19 days ago

    With your method, is it possible to create a 15- or 30-second scene?

  • @controlaltai
    18 days ago

    With the clip extension method, yes indeed.

  • @user-ey3cm7lf1y
    19 days ago

    I tried to install inference==0.9.13 but I got an error. Should I downgrade my Python version to 3.11?

  • @controlaltai
    19 days ago

    I suggest you back up your environment and then downgrade. It won't work unless you're on 3.11.

  • @user-ey3cm7lf1y
    19 days ago

    @@controlaltai Thank you, I solved the problem on 3.11.

  • @swipesomething
    20 days ago

    3:37 After I installed the node, I had the error "cannot import name 'packaging' from 'pkg_resources'". I updated the inference and inference-gpu packages and it was working, so if anybody has the same error, try updating inference and inference-gpu.
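
    A minimal sketch of that update, assuming pip is available for the Python environment ComfyUI uses (for the portable build, run python.exe inside the python_embeded folder instead):

        rem Upgrade both packages to their latest releases
        python -m pip install --upgrade inference inference-gpu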

  • @controlaltai
    6 days ago

    The issue is that this won't work with the latest version of ComfyUI. Python 3.12 is incompatible; you have to use an older version of Python.

  • @controlaltai
    6 days ago

    Okay, found the solution. First, ensure that Python 3.12 or lower is installed with ComfyUI portable. Then go inside the python_embeded folder in the ComfyUI portable folder and run this command: python.exe -m pip install setuptools==65.5.1. A ComfyUI update installs setuptools 70.0.0; you need to downgrade for it to work.

  • @ApexArtistX
    21 days ago

    Best tutorial... all the other shitubers sell their workflows on Patreon, greedy bastards.

  • @ApexArtistX
    21 days ago

    Too many processes, don't try it out. Wait for the final release... stupid CUDA 11 build.

  • @controlaltai
    21 days ago

    There is a better one released, and I'm planning a tutorial for it. It's more stable. It's called MusePose.

  • @ApexArtistX
    21 days ago

    @@controlaltai Thanks for the workflow tutorial... best YouTuber.

  • @Nrek_AI
    22 days ago

    you've given us so much info here. Thank you so much! I learned so much

  • @TuangDheandhanoo
    22 days ago

    There's too much info in the middle. You lost me when you were doing the upscale, downscale, and setting up the switches and such. I think setting things up neatly is nice, but it's a personal preference, and in this case it's not about SUPIR at all. Let's say I only want to know about 2x upscaling: I would have to scrub your video back and forth to trace your switch connections, and, oh yeah, where does that height and width go again?

  • @controlaltai
    22 days ago

    Well, watch the video right till the end, where the techniques are used, especially the cases where downscale and upscale are applied repeatedly to upscale a single photo. These switches were not added just for personal preference; to show some of the SUPIR techniques you require them. You can, however, skip ahead to the next section: don't add the switch, only the 2x upscale, and connect the height and width from the bottom to the upscale factor input, wherever the switch connects to.

  • @niemamnicka
    23 days ago

    In the current version of the web UI, v1.9.3-4, fixing xformers (from 13:00) using the "Xformers Command" breaks the environment, resulting in the popup windows (entry point) again. I went back to the step where you remove the venv and the extension, then skipped installing xformers with the command in (venv) and added the --xformers argument to webui-user.bat; this installed the correct version for me after running it. The web UI starts without errors, and the versions as of today are: version: v1.9.3-4-g801b72b9 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2. Cheers
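
    For reference, a sketch of that webui-user.bat change (the standard Automatic1111 file layout is assumed; only the COMMANDLINE_ARGS line needs editing):

        @echo off
        set PYTHON=
        set GIT=
        set VENV_DIR=
        rem Let the web UI pull in the matching xformers build on launch
        set COMMANDLINE_ARGS=--xformers
        call webui.bat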

  • @Artishtic
    24 days ago

    Thank you for the object removal section

  • @Tecturon
    24 days ago

    1. In the Load CLIP Vision node you're loading SDXL\pytorch_model.bin. What model would that be as of today? The ComfyUI Manager does not show this model. I figured it should be CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors, which should fit the IPAdapter model ip-adapter_sdxl.safetensors. Is this correct? 2. Also, IP Adapter had an update rendering the "Apply IPAdapter" node deprecated. I'm using IPAdapter Advanced now, with weight_type ease in-out (found a comment by you concerning this), but what about the other parameters? combine_embeds=concat, start_at=0.0, end_at=1.0, embeds_scaling=V only? Problem: the KSampler gets a lineart image from the ControlNet group (black lines, white background), but the result is always a black image with no error message involved. Why would the image generation only create black images?

  • @controlaltai
    24 days ago

    For the ip-adapter_sdxl.safetensors model you should use CLIP-ViT-bigG-14 as the CLIP Vision model, as you correctly said. Black images mean an issue with VRAM. For IPAdapter Advanced, use weight type ease in-out; no other changes.

  • @Tecturon
    18 days ago

    @@controlaltai Thanks for your reply! I solved the assumed VRAM issue by switching from Windows to Linux (for that purpose). Getting my RX 6750 XT running was not trivial, but I finally got around that. I ran the exact workflow that failed under Win and got a result, finally. Thanks for pointing out the possible reason for failure, as it indicated a configuration problem. Your hint motivated me to finally move to Linux for SD generations. It's faster, too.

  • @valorantacemiyimben
    24 days ago

    Hello, thanks a lot. Can we add make-up to a photo we upload ourselves? How can we do that?

  • @controlaltai
    24 days ago

    Yes, load it in image-to-image instead of text-to-image.

  • @markmanburns
    25 days ago

    Amazing tutorial. So much value from the time invested to watch this.

  • @pressrender_
    26 days ago

    Hey guys! Is there a way to change the output name of the image in the filename_prefix? I was able to do that with %KSampler.seed%, for example, but it worked only on the filename_prefix of the Video Combine node. I can't make it work with the Save Image node. I would love to have a custom name with the model + CFG + steps, or, even better, a custom node that prints that information on the images, so I could know it without opening each image in Comfy. Thanks a lot!

  • @controlaltai
    24 days ago

    Try the save node from the WAS Node Suite.

  • @SilverEye91
    26 days ago

    Heads up! The ApplyIPAdapter node no longer exists. It appears the replacement is now known as "IPAdapter Advanced". A lot of the others have also changed names, including Load IPAdapter Model, which is now called "IPAdapter Model Loader". That's easy to find, but just be aware that searching for the same names as in this video may not work anymore.

  • @semenderoranak2603
    26 days ago

    When I download Animate Anyone Evolved through the Manager, I get an error saying "ImportError: DLL load failed while importing torch_directml_native" after restarting.