Hey, I'm Bitesized Genius.
I create Stable Diffusion content on YouTube to help viewers better understand this emerging technology and create fantastic art in a variety of ways.
My ambition is to turn BitesizedGenius into the largest community for AI-related content without wasting the viewer's time, lying to the viewer through clickbait titles, or using distracting editing, which can make learning more difficult.
All support is greatly appreciated and will contribute towards helping me improve the channel, purchase equipment, and, as one can dream, make this my full-time job.
Business Email: [email protected]
Patreon: www.patreon.com/BitesizedGenius
Ko-fi: www.ko-fi.com/bitesizedgenius
Buy Me Coffee: bmc.link/bitesizedgenius
Tutorials: www.bitesizedgenius.com/
Comments
Such a hit-and-miss extension when trying to add extra people.
I already managed to install it, but after I closed the program it can't launch anymore; it says the site can't be reached. Any suggestions?
Thanks so much for making this. Faaaaaar too many people in the space seem to assume everyone can code with Python like a god, so none of this has to be explained, yet those like me are sitting there with NO FUCKING IDEA how any of this shit works! lol. So again, thanks; just knowing these basics makes a gigantic difference.
I'm using SDXL, and your tutorial was amazing. Is there a reason I don't have Interrogate CLIP or Interrogate DeepBooru?
It's disabled by default in the settings. Can't remember exactly where, but it's still in there!
Hello, I am learning diffusion models and wanted to LoRA fine-tune SDXL. I'm using the Hugging Face script for this, but the results are not good. How do you achieve such realistic results? I want to know where I should focus for that kind of realism. Realistic Vision has limited text issues, few repetition errors such as extra fingers, and almost no distortion in human images. How?
Please, I need your help. I have two pictures and I want to change the pose: I want picture A to have the same pose as picture B. How can I do it?
But after we install it, how do we use it? Where does it show up?
Your explanation helped a lot.
really needed this, many thanks
Nice tutorial! Do you use SD Forge? I have an RTX 3090 and generating ONE picture with ControlNet is literally taking 8 MINUTES. I must be doing something wrong. I read somewhere that SD Forge is way faster for ControlNet.
Have yet to try it, but will in future!
Your tutorials are so good
For the IP-Adapter models, you go very fast in the video. Which ones are they exactly?
Good video, friend, but what is needed for the pose to be respected? Which sampler should I use? What denoise setting? It recognizes my pose but doesn't respect it in the image, and sometimes figures appear without hands. I hope you can answer.
You only need to use ControlNet for the pose to be respected; the sampler is irrelevant, and if using image-to-image, use a higher denoising strength. Also ensure you're using a pose model that includes hands, and check there are no errors in the command window when running ControlNet.
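For readers driving this from a script rather than the UI: the advice above maps onto the Automatic1111 txt2img API with the ControlNet extension's `alwayson_scripts` payload. A minimal sketch follows; the model and preprocessor names are assumptions and must match what you actually have installed.

```python
import json

# Hypothetical txt2img payload for Automatic1111's API with the ControlNet
# extension enabled. Swap "model" and "module" for whatever preprocessor and
# pose model (ideally one that includes hands) you have installed.
payload = {
    "prompt": "a person standing in a park, detailed hands",
    "negative_prompt": "missing fingers, extra limbs",
    "steps": 25,
    "denoising_strength": 0.75,  # raise this when using img2img
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "module": "openpose_full",              # assumed preprocessor name
                "model": "control_v11p_sd15_openpose",  # assumed model name
                "weight": 1.0,
            }]
        }
    },
}

# Would be sent to a locally running webui started with --api, e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(json.dumps(payload, indent=2))
```

The sampler is left to the server default, matching the reply above: pose adherence comes from the ControlNet unit, not the sampler choice.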
straight to the point, subscribed
I hate this automated voice, please change it. The content is a gem.
For some reason, nothing is making sense to me. Maybe it's the photos you were using; nothing is different about them. I don't know. I just heard "mask" a bunch of times.
Bro, you are a savior, tons of high quality tutorials
I just got into Stable Diffusion and the whole week I was generating images without knowing these techniques. Thank you.
I'm not getting an OpenPose tab at the top of my UI. Can you point to a place that will actually help fix my SD?
same issue.
What is the UI being used for these tests?
Automatic1111
@@BitesizedGenius Thanks!
This is an awesome video, and a great introduction to Yodayo. We've added so many new features since this came out, it's almost like looking at a time capsule lol. We are working on voiced bots, as well as having your bots be able to interact with one another. A background removal tool is a great idea; I'll drop it into our feedback channels. (It's also kind of funny to see my name when you pulled up the Discord screenshot lol.)
Thanks, glad to hear the service is developing well!
OutOfMemoryError: CUDA out of memory. in stablediffusion webui
This is not a good tutorial; you skip the model installation too quickly, and one is left not knowing WHICH model to download, what it does, and why. You leave people in the dark on the most complicated part of Stable Diffusion: the fucking installation of models.
Legend - Finally someone who just gives clear concise explanations on how everything works! Subbed
Hi, I've recently installed Stable Diffusion, and after I followed your steps and tried to copy generation data, I get the message "TypeError: 'NoneType' object is not iterable". Is there anything else I need to add (extensions or similar) to make it work?
It may be because of low VRAM.
Bro, at 00:57 you didn't provide the link for that,
nor do you say which of those files.
BREAK did not work for me; it did better without it.
I just noticed that my version of Automatic 1111 does not have the DPM++2M Karras or DPM++SDE Karras. Karras is only in the "Schedule type" under the generation tab. Thoughts?
I was stuck after installation on WHERE to find it in the UI. Then I saw in another video that the ControlNet tab is just below the main tab under "txt2img", waiting to be expanded. This moment isn't shown in the video.
That was a solid checkpoint tutorial man.
If you use SDXL 1.0 you need to download the additional file from Hugging Face and drop it in the same folder as the .pth shown here.
Which file do you mean?
How, in your hires fix settings, can you use hires fix checkpoints and prompts?
Awesome! Thank you so much for the tutorial!
I've gone through this a few times and updated to ControlNet v1.1.445, and no matter what, the preprocessor never shows up as ip-adaptor_clip_sd15. I've redownloaded the appropriate files from the link you provided, but it's a no-go. The correct model shows up, but not the correct preprocessor.
Me too, working in Forge. Any ideas anyone?
Watching and installed Automatic1111 in April 2024; my local install of Automatic1111 does not show the sampling method DPM++ 2M SDE Karras as it does in the video. Can someone tell me why that is?
Mine is also not showing.
Love your videos. No unnecessary face-on-camera segments that I usually had to skip, no unnecessary extra steps, straight to each feature. Sexy voice 👍
This is a game changer. Even more control! I'd advise watching it a few times, as it is loaded with information, and worth it. Thanks for your video!
Things look amazing but I can't run SDXL without getting an out of memory error. I have 8GB of AMD GPU. What can I do?
You can use main RAM, but it will be slow. Even 16GB with SDXL uses a little bit of main memory for me. AMD 6800 here.
Try SDXL Lightning models: only 4-8 steps and CFG 1-2. It's very quick.
Command: --lowvram. Right-click on the launcher, click Edit; it opens a text window and you paste it there.
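For reference, the flag mentioned above goes into the webui launcher's COMMANDLINE_ARGS variable. A minimal sketch, assuming a standard Automatic1111 install layout:

```shell
# webui-user.sh (Linux/macOS): add the low-VRAM flag here, then relaunch.
# --medvram is a lighter-weight alternative if --lowvram is too slow.
export COMMANDLINE_ARGS="--lowvram"

# On Windows, edit webui-user.bat instead, e.g.:
#   set COMMANDLINE_ARGS=--lowvram
```

Either flag trades generation speed for a smaller VRAM footprint, which is what resolves the "CUDA out of memory" errors mentioned elsewhere in this thread.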
In the prompt weighting section you didn't put colons, so it didn't interpret the weights exactly the way you intended.
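For anyone following along: Automatic1111's attention syntax uses a colon inside parentheses to set an explicit weight, while bare parentheses and brackets apply fixed multipliers. A quick illustration (the phrase itself is just a placeholder):

```
(blue eyes)       -> weight x1.1 (parentheses alone multiply by 1.1)
((blue eyes))     -> weight ~x1.21 (nesting stacks the multiplier)
(blue eyes:1.4)   -> explicit weight 1.4 (note the colon)
[blue eyes]       -> weight ~x0.91 (brackets divide by 1.1)
```

Without the colon, `(blue eyes 1.4)` is read as literal prompt text, not a weight, which is the mistake the comment above points out.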
Is there a way to control angle and perspective. Like the low angles that imply the viewer of the picture would be small and looking up at a huge object or person that towers over them?
Use prompts like "from below", "from above", "bird's-eye view", "fisheye lens", etc., and add weighting if required.
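Putting the reply above together with weighting syntax, a low-angle prompt might look like the following sketch (fragments are illustrative; results vary by model):

```
from below, low angle shot, (worm's-eye view:1.3), towering figure looming over the viewer
from above, (bird's-eye view:1.2), looking down at a tiny figure
```

Raising the weight on the angle phrase pushes the composition harder toward that perspective.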
@@BitesizedGenius Thanks dude
This is fantastic. I work for an AI image gen site, and I'm always trying to explain the principles of prompting and what ( ) does vs [ ], or how to structure a prompt from most to least important. You concisely break down so many things here, thank you!
And the BREAK keyword, too.
Is it also possible to change the hairstyle
Absolutely, will do a vid on that in future
How do you get those masks in generated results together with output image?
Thanks for this great info!!
Why does every tile have a face in it?
Lower your denoising strength.