Welcome to Control Alt AI. We provide tutorials on how to master the latest AI tools like Stable Diffusion, Midjourney, BlueWillow, ChatGPT, and basically everything AI.
Our channel provides simple, practical tutorials and information that are easy to understand for everyone, from tech enthusiasts and developers to anyone curious about the latest advancements in AI.
We are committed to sharing our knowledge and expertise with our viewers. So if you're looking to stay informed on the latest AI tools and news and expand your knowledge, subscribe to our channel.
Comments
Great vid btw. What about colorizing an old black-and-white photo? I think the Color Match (MKL) is not enough for it.
Thanks. You need ControlNet Recolor for that. Here is the workflow tutorial: kzread.info/dash/bejne/moiH1pKyc7yTn6g.htmlsi=8JqhnKyic6HmUkee
Can you give me your standard negative prompt?
There is no standard negative prompt. It depends on the checkpoint. Check sample images of the checkpoint you are using on Civitai. For some checkpoints like SD3, we start with no negative prompt and add to it when required.
Thanks for the good tutorial. When I tried these steps in the Stable Diffusion web UI, it seemed like it didn't generate anything. Any idea why? I am new to these Stable Diffusion tools. In my UI I don't see the preprocessor ip-adapter_clip_sd15 like in yours.
Check the cmd for the exact error. Are you getting a black image?
This tutorial is probably the best I've ever seen on any subject. I will still have to watch it many times to absorb the information.
Tried this, but the YoloworldESAM node is giving me an "import failed" error. It probably needs updates.
I tried to use this workflow for dogs and cats but the results are not good. Is this expected or am I missing anything?
It will work for any subject, object, and scene. Could you elaborate on what you mean by "the results are not good"? Do they work for some images and not others, or do they not work at all?
For the error "cannot import name 'packaging' from 'pkg_resources'", the solution: ensure that Python 3.12 or lower is installed with ComfyUI portable. Then go inside the ComfyUI_windows_portable\python_embeded folder and run this command: python.exe -m pip install setuptools==65.5.1 — a ComfyUI update installs setuptools 70.0.0, and you need to downgrade for it to work.
Thank you very much! Now it works.
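If you're not sure whether your install is affected, a tiny check like this can tell you. It's a minimal sketch: the helper name is hypothetical, and the ">= 70" threshold is an assumption based on the report above that setuptools 70.0.0 breaks the pkg_resources import while 65.5.1 works.

```python
# Sketch: decide whether setuptools needs the downgrade described above.
# Assumption: any setuptools major version >= 70 triggers the
# "cannot import name 'packaging' from 'pkg_resources'" error.
import importlib.metadata


def needs_downgrade(version: str) -> bool:
    """Return True if this setuptools version is 70 or newer."""
    major = int(version.split(".")[0])
    return major >= 70


if __name__ == "__main__":
    current = importlib.metadata.version("setuptools")
    if needs_downgrade(current):
        print(f"setuptools {current}: run the downgrade command above")
    else:
        print(f"setuptools {current}: OK")
```

Run it with the same python.exe inside the python_embeded folder so it checks the embedded environment, not your system Python.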
Please help me, I get the error "cannot import name 'packaging' from 'pkg_resources'"
Okay, found the solution. First, ensure that Python 3.12 or lower is installed with ComfyUI portable. Then go inside the ComfyUI portable folder's python_embeded folder and run this command: python.exe -m pip install setuptools==65.5.1 — a ComfyUI update installs setuptools 70.0.0, and you need to downgrade for it to work.
Huge thanks for the video! At last I have good inpaint and outpaint workflows.
Anyone else struggling with the command "python -m pip install inference==0.9.13": try using "py -m pip install inference==0.9.13" instead.
Amazing tutorial. I'll need a couple of viewings to take it all in because there is so much useful information!
I tried attention masking again, similar to what you showed in this video (not exactly the same, because of the IP Adapter update), but when I generated a wide horizontal image with a mask applied to the center, I only got borders on the sides and the background didn't expand to fill the entire image size. Has this technique stopped working after an update, or could there be a mistake in my node setup? Would you mind checking this for me? 10:13
Sure email me the workflow, I will have a look. mail @ controlaltai . com (without spaces)
@@controlaltai Sorry, I was using an anime model (anima pencil), which is why it only output images with the background cropped out. When I switched to Juggernaut it worked correctly! Sorry for the hasty comment, and thank you for going out of your way to provide your email and offering to help.
Error occurred when executing Yoloworld_ESAM_Zho: cannot import name 'packaging' from 'pkg_resources' (C:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu 1\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pkg_resources\__init__.py)
This is also after I followed this step, the command to install inference: python -m pip install inference==0.9.13 and python -m pip install inference-gpu==0.9.13
I am looking at this. I don't think it's an inference problem; it's something to do with the Python version or the latest ComfyUI update. Will get back to you if I find a solution.
Nice work mate!
Excuse me, do you have any tutorial on how to make a batch of multiple depth or softedge images per image?
You just connect the Load Image node to the required preprocessors and the save nodes.
Great tutorials. Good job 💪... 28:12 is a very heavy operation, one frame is taking a lifetime...
Thank you for this wonderful video!
Awesome tutorial, thanks! But I'm unable to find the visual area composition custom node when I try to install it. Was it removed?
Thanks, and no, you can find it here: github.com/Davemane42/ComfyUI_Dave_CustomNode
Brilliant and no fuss work. Keep it up bro. With this quality of tutorial, you will outgrow any major channel...
It's installing Optimum on mine, and it's been stuck for a while now. What should I do?
Wow, the details are unreal! Trying this for sure and reporting back!
Mine only has 2 tabs. How do I enable the 3rd tab? It's missing.
A wonderful tutorial! I learned a lot more about ComfyUI. Thank you so much for taking the time to create this. Also, showing how you created the workflow and following along myself works much better as a learning tool.
Thank you!!
Thank you for the tutorial! I would like to know how to do the same thing as in the fourth workflow but using IPAdapter FaceID to be able to place a specific person in the frame. I tried, but the problem is that the inputs to MultiAreaConditioning are Conditioning, while the outputs from IPAdapter FaceID are Model. How can I solve this problem? I would appreciate any help.
Okay, but the area conditioning in this tutorial is not designed to work with IP Adapter. That's a very different workflow. We haven't covered placing a specific person in the frame in a tutorial, but it involves masking in the person, using IC-Light and a bunch of other things to adjust the lighting to the scene, processing it through sampling, and then swapping the face again.
@@controlaltai Thank you for your response! 😊 It would be great if you could create a tutorial on this topic. I'm trying to develop a workflow for generating thumbnails for videos. The main issue is that SD places the person's face in the center, but I would like to see the face on the side to leave space for other information on the thumbnail. Your tutorial was very helpful for composition, but now I need to figure out how to integrate a specific face. 😅
Unfortunately, due to an agreement with the company that owns the InsightFace copyrighted tech, I cannot publicly create any face-swapping tutorial for KZread. Just search for ReActor and you should find plenty on KZread. I am only restricted for public education, not paid consultations or private workflows (for this specific topic).
@@ai_gene Hi, okay, so having the face on the left is very, very easy. You can do this using two ControlNets: DWPose and depth. Make sure the ControlNet image resolution is the same as the generated image, and ensure the person in the ControlNet image is on the left.
That was extremely helpful indeed. Thank you for showcasing how to do it. Not many do (or even know what they're talking about).
Thank you!!
Thanks for the excellent video. However, I wonder why my BLIP Analyze Image node is different from the one in the video. Also, in my BLIP Loader there is no model named "caption". I already downloaded everything in the Requirements section.
BLIP was recently updated. Just use the new BLIP node and model and use whatever options it shows there. These things are normal. Ensure ComfyUI is updated to the latest version along with all custom nodes. Only "caption" will not work anymore.
@@controlaltai So I already applied your workflow JSON. But when I click the queue prompt, I get an "allocate on device" error. However, if I check it and then click the queue prompt again, it works fine without any errors. I searched for the "allocate on device" error related to ComfyUI, but my error log was different from the search results: it only mentions "allocate on device" without any mention of insufficient memory, and below that it shows the code, whereas other people's error logs mention insufficient memory. Despite this, could my error also be a memory issue?
The "allocate on device" error means running out of VRAM or system RAM. If you can tell me your VRAM and system RAM, the size of the image you are trying to fix the face for, the box settings, and anything else you are trying to do, I can guide you on how to optimize the settings for your system specs.
@@controlaltai This is my system when I run the ComfyUI server: Total VRAM 6140 MB, total RAM 16024 MB. Set vram state to: NORMAL_VRAM. Device: cuda:0 NVIDIA GeForce RTX 4050 Laptop GPU : cudaMallocAsync. VAE dtype: torch.bfloat16. I'm using the Fix_Faces_Extra workflow JSON, and my images are JPG files under 1 MB. The process stops at the FaceDetailer node. I think I should optimize the FaceDetailer settings. Thanks.
Image file size does not matter, resolution does. Change the FaceDetailer setting from 1024 or 768 down to 256. That should work for you. Try that.
Does this method work with videos?
It does indeed in my testing, but the workflow is very different. I took a plane takeoff video, removed the plane completely, and reconstructed the video. I did not include it in the tutorial as it was becoming too long.
With your method, is it possible to create a 15- or 30-second scene?
With the clip extension method, yes indeed.
I tried to install inference==0.9.13 but I got an error. Should I downgrade my Python version to 3.11?
I suggest you back up your environment, then downgrade. It won't work unless you're on 3.11.
@@controlaltai Thank you, I solved the problem on 3.11.
3:37 After I installed the node, I had the error "cannot import name packaging from pkg_resources". I updated the inference and inference-gpu packages and it worked, so if anybody has the same error, try updating inference and inference-gpu.
The issue is this won't work with the latest version of ComfyUI. Python 3.12 is incompatible; you have to use an older version of Python.
Okay, found the solution. First, ensure that Python 3.12 or lower is installed with ComfyUI portable. Then go inside the ComfyUI portable folder's python_embeded folder and run this command: python.exe -m pip install setuptools==65.5.1 — a ComfyUI update installs setuptools 70.0.0, and you need to downgrade for it to work.
Best tutorial... all the other shitubers sell their workflows on Patreon, greedy bastards.
Too many processes, don't try it out. Wait for the final release... stupid CUDA 11 build.
There is a better one released, and I'm planning a tutorial for it. It's more stable. It's called MusePose.
@@controlaltai Thanks for the workflow tutorial... best youtuber.
you've given us so much info here. Thank you so much! I learned so much
There's too much info in the middle. You lost me when you were doing the upscale, downscale, and setting up the switches and such. Setting things up neatly is nice, but it's a personal preference, and in this case it's not about SUPIR at all. Say I only want to know about 2x upscaling: I would have to scrub your video back and forth, trying to trace your switch connections, and oh yeah, where does that height and width go again?
Well, watch the video right to the end, where the techniques are used, especially the cases where downscale and upscale are applied repeatedly to upscale a single photo. The switches were not added for personal preference only; some of the SUPIR techniques shown require them. You can, however, skip ahead to the next section: don't add the switch, only the 2x upscale, and connect the height and width from the bottom to the upscale factor input, wherever the switch was connected.
In the current version of the web UI (v1.9.3-4), fixing xformers (from 13:00) using the "Xformers Command" breaks the environment, resulting in popup windows again (entry point error). I went back to the step where you remove the venv and the extension, then skipped installing xformers with the command inside (venv) and instead added the --xformers argument to webui-user.bat; this installed the correct version for me after running it. The web UI starts without errors, and as of today the versions are: v1.9.3-4-g801b72b9, python 3.10.11, torch 2.1.2+cu121, xformers 0.0.23.post1, gradio 3.41.2. Cheers
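For anyone following the comment above, a minimal webui-user.bat with the --xformers flag might look like this. This is a sketch of the stock AUTOMATIC1111 launcher file, assuming a default install layout; any other args you already use belong on the same COMMANDLINE_ARGS line.

```shell
@echo off
rem Minimal webui-user.bat sketch: let the web UI install its own
rem matching xformers build instead of pinning one manually in the venv.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers

call webui.bat
```

On the next launch the web UI detects the flag and installs the xformers version matched to its bundled torch, which avoids the entry-point popup described above.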
Thank you for the object removal section
1. In the Load CLIP Vision node you're loading SDXL\pytorch_model.bin. What model would that be as of today? The ComfyUI Manager does not show this model. I figured it should be CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors, which should fit the IPAdapter model ip-adapter_sdxl.safetensors. Is this correct? 2. Also, IP Adapter had an update rendering the "Apply IPAdapter" node deprecated. I'm using IPAdapter Advanced now, with weight_type ease in-out (found a comment by you concerning this), but what about the other parameters? combine_embeds=concat, start_at=0.0, end_at=1.0, embeds_scaling=V only? Problem: the KSampler gets a lineart image from the ControlNet group (black lines, white background), but the result is always a black image with no error message involved. Why would the image generation only create black images?
For the ip-adapter_sdxl.safetensors model you should use CLIP-ViT-bigG-14 as the CLIP vision model, as you correctly said. Black images mean an issue with VRAM. For IPAdapter Advanced, set weight type to ease in-out; no other changes.
@@controlaltai Thanks for your reply! I solved the assumed VRAM issue by switching from Windows to Linux (for that purpose). Getting my RX 6750 XT running was not trivial, but I finally got around that. I ran the exact workflow that failed under Win and got a result, finally. Thanks for pointing out the possible reason for failure, as it indicated a configuration problem. Your hint motivated me to finally move to Linux for SD generations. It's faster, too.
Hello, thanks a lot. Can we add makeup to a photo we upload ourselves? How can we do that?
Yes, load it in image-to-image instead of text-to-image.
Amazing tutorial. So much value from the time invested to watch this.
Hey guys! Is there a way to change the output name of the image in the filename_prefix? I was able to do that with %KSampler.seed%, for example, but it worked only in the filename_prefix of the Video Combine node. I can't make it work with the Save Image node. I would love to have a custom name with model+cfg+steps, or even better, a custom node that prints that information on the images, so I could see it without opening each image in Comfy. Thanks a lot!
Try the save node from the WAS Node Suite.
Heads up! The Apply IPAdapter node no longer exists. It appears the replacement is now known as "IPAdapter Advanced". A lot of the other nodes have also changed names, including Load IPAdapter Model, which is now called "IPAdapter Model Loader". They're easy to find, but just be aware that searching for the same names as in this video may not work anymore.
When I download Animate Anyone Evolved through the Manager, I get an error saying "ImportError: DLL load failed while importing torch_directml_native" after restarting.