WOW! NEW ControlNet feature DESTROYS competition!
With a major new update to ControlNet for Stable Diffusion, Reference Only has changed the game once again.
Prompt styles here: www.patreon.com/posts/sebs-hilis-79649068
Support me on Patreon to get access to unique perks! www.patreon.com/sebastiankamph
Chat with me in our community discord: / discord
My Weekly AI Art Challenges
My Stable diffusion workflow to Perfect Images
ControlNet tutorial and install guide
Famous Scenes Remade by ControlNet AI
LIVE Pose in Stable Diffusion
Control Lights in Stable Diffusion
Ultimate Stable diffusion guide
Inpainting Tutorial - Stable Diffusion
The Rise of AI Art: A Creative Revolution
7 Secrets to writing with ChatGPT (Don't tell your boss!)
Ultimate Animation guide in Stable diffusion
Dreambooth tutorial for Stable diffusion
5 tricks you're not using in Stable diffusion
Avoid these 7 mistakes in Stable diffusion
How to ChatGPT. ChatGPT explained in 1 minute
This is Adobe Firefly. AI For Professionals
Adobe Firefly Tutorial
ChatGPT Playlist
Comments: 508
Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068 Please support me on Patreon for early access to videos. It also helps me keep creating these guides: www.patreon.com/sebastiankamph
@mizutofu
A year ago
How do you get two ControlNet units in your GUI?
@UnBknT
A year ago
You have to add the styles to the prompt, btw. In the video you just selected them from the dropdown, but they're not added to the prompt until you click "add style to prompt".
@sebastiankamph
A year ago
@@UnBknT No need to click the button; they are still applied.
@142vids
A year ago
Why pay for your monthly Patreon when I can watch your free YouTube videos with adblock on? I thought we were beating the competition, no?
@sebastiankamph
A year ago
@@142vids You're free to do whatever you want. The people supporting me do it out of the kindness of their hearts, helping me keep making these videos.
Started playing with it a few hours ago. It is insane. It's nearly as good as training but without the training. It pulls faces, poses, lighting, art style, everything. I cannot believe this is only the first iteration, it is already so good. I thought Shuffle was dope but this is on a whole new level.
@Mocorn
A year ago
Exactly, "almost as good as training" is the scary part. I've been able to get a better likeness out of this reference_only model than from pretty much any of my early training attempts. There's been a bit of cherry-picking, but in some cases I've gotten 2 extremely good hits from a 4-image batch. It's crazy how good this is already!
@derek5634
A year ago
@@Mocorn Strange, because I still can't get it to create a decent copy of the original face. It always makes the new image look younger and very different from the original face.
@tamask001
A year ago
To me it looks like this method only works for people "coming out of the model". For example, if you take the seed image from this video and try to generate other images from it without Sebastian's "Digital/Oil Painting" and "Easy Negative" styles, the results are very unimpressive. I'm not saying this new ControlNet isn't super cool for some use cases, but I thought he could have been clearer about the limitations.
@ggoddkkiller1342
A year ago
I couldn't make it work with v1.1.174. txt2img is completely broken; even the hair colour doesn't match. img2img kind of works better, at least matching hair and clothes, but the faces are twisted, like something out of a horror movie. I'm using exactly the same styles and settings.
@kimjohn3877
A year ago
How do I access the free trial?
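For readers who want to drive the reference_only workflow discussed in the thread above from a script instead of the UI, Automatic1111 exposes a web API when launched with --api. Below is a minimal sketch; the ControlNet unit field names follow the sd-webui-controlnet API as I understand it, so treat them as assumptions and verify against your install's /docs page:

```python
import base64


def reference_only_payload(image_path: str, prompt: str) -> dict:
    """Build a txt2img payload with one reference_only ControlNet unit."""
    with open(image_path, "rb") as f:
        ref_b64 = base64.b64encode(f.read()).decode()
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": ref_b64,
                    "module": "reference_only",
                    "model": "None",   # reference_only needs no model file
                    "weight": 1.0,
                    "control_mode": "Balanced",
                }]
            }
        },
    }


# To send it (assumes the webui is running locally with --api):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
#                   json=reference_only_payload("face.png", "woman smiling"))
```

The "control_mode" value mirrors the UI's Balanced / "My prompt is more important" / "ControlNet is more important" radio buttons, which a later comment in this thread discusses.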
This is actually the very definition of game-changing.
@sebastiankamph
A year ago
💯
@FarbachViBrittanniaBuildPeace
A year ago
You could create a character and make a whole TV show or anime out of it.
Thank you so much! You've been pretty much the only source I've needed to learn everything about ControlNet. Great videos with clear and concise information. Keep it up!
@sebastiankamph
A year ago
Thank you very much, glad the videos have been helpful to you 😊
This has been extremely helpful in redesigning the characters for a video game I made way back in high school. I've taken my art, run it through AI, and seen it give me different variations of my work. I then pick what I like from each and draw up the final design. It's such a time-saver.
Dude... You're literally faster than me clicking the update button in SD... Have my sub!
@sebastiankamph
A year ago
Thank you kindly! 😊🌟
I only started using Stable Diffusion a bit over a week ago, and your videos are such a big help.
Wow. The ControlNet team is on a roll. They're innovating faster than OpenAI and Google. Hopefully they can keep up the momentum.
@rproctor83
A year ago
Ha, every day there are a dozen new breakthroughs!
@jan.kowalski
A year ago
@@AG-ur1lj That's why the battle for those brilliant minds is based not on ambition but on deprivation. The big players will acquire what they can, and the rest will be deprived and obscured. As always.
@jan.kowalski
A year ago
@@AG-ur1lj Powerful how? Will it scale to millions of users? Will it be safe from lawsuits, or flexible enough to attract business users? I doubt it. Microsoft or Google can wait and buy anything viable, and you, even with your brilliance, will have nothing to say. As always in history.
@jan.kowalski
A year ago
@@AG-ur1lj You haven't realized that this technology is already paywalled and regulated. You will not profit from it (above a certain level, of course) because you won't have the resources to train those tools or the licenses to use copyrighted source data. As of now, that is not a problem for big corporations, because they just take the best solutions and use them with their data. You'll probably be happy, but once more, you will not profit from it. Even if you manage to train a state-of-the-art algorithm, it will be WORSE than theirs, because they have access to all that data and those resources.
@jan.kowalski
A year ago
@@AG-ur1lj Have you downloaded terabytes of images and text, and all the copyrighted books and proprietary magazines, from the internet? I doubt it. Yet Google and Microsoft work at that scale. Since you will NEVER have access to the data, you will just become a giver of ideas to big corporations with your improvements to "open" algorithms; without data, those algorithms just don't work. I put "open" in quotes because when the open-source community produces some breakthrough algorithm, big corporations WILL patent some small improvement and you will be barred from using it. That is the reality, based on history. I'm amazed at your idealistic view of business.
Sebastian, big thanks for providing your styles. I mostly use them right at the beginning, before even prompting, and they produce beautiful results.
@sebastiankamph
A year ago
Happy to help!
The OpenPose 3D extension is great for posing: you can run it in its GUI tab, set the skeleton in three-dimensional space (together with hands and feet) and generate three images: canny, depth and openpose.
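Feeding the maps from a workflow like the one above into several ControlNet units at once looks roughly like this through the Automatic1111 API. Since the maps are already rendered, the preprocessor (module) is set to "none". The unit field names and the depth model filename are my assumptions, so check them against your own install:

```python
def controlnet_unit(image_b64: str, module: str, model: str) -> dict:
    # One ControlNet unit; module "none" skips preprocessing because the
    # control maps were already rendered by the OpenPose 3D extension.
    return {"input_image": image_b64, "module": module,
            "model": model, "weight": 1.0, "control_mode": "Balanced"}


def multi_unit_payload(prompt: str, units: list) -> dict:
    # Multiple units are just stacked in the "args" list.
    return {"prompt": prompt, "steps": 20,
            "alwayson_scripts": {"controlnet": {"args": list(units)}}}


payload = multi_unit_payload("full body portrait of a knight", [
    controlnet_unit("<base64 openpose map>", "none", "control_v11p_sd15_openpose"),
    controlnet_unit("<base64 depth map>", "none", "control_v11f1p_sd15_depth"),
])
```

The placeholder strings stand in for base64-encoded images of the rendered maps.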
Waiting for my new computer with beefy vram to arrive, watching your vids to prep, and I'm loving what I'm seeing! Thanks so much for these!
Many thanks for sharing tutorials, it's a massive time saver ;D
Thanks for the video, I love watching how you present it. Keep it up!
@sebastiankamph
A year ago
Thank you for the support! 😊
This is fantastic! Thanks so much for the heads up.
I've tried it... I can't get it to render anything even close to the likeness of the input image 😥
Amazing! Thank you for making these tutorials
Please raise your volume, I almost had a heart attack when the ad kicked in lol
Thank you for this news update!
Can't wait to try it, thx!
Unfortunately, it doesn't work for me. The generated images all look like the same person, but they don't resemble the person in my original image. It's like my image is completely ignored.
@PKBO173
A year ago
Totally the same. I'm getting a whole different face.
@user-dk9td7kl8c
4 months ago
Do you use a Mac with an M-series processor? Because I do, and there's a bug when it tries to pick up the uploaded face.
Is it possible to use this to get a different angle of a specific environment in the same style? No people or characters, just an environment.
@sebastiankamph
A year ago
Yeah, but it won't be 100%. It's like a better img2img
AMAZING!! Fantastic video! Thank you for sharing it!!
@sebastiankamph
A year ago
Glad you liked it, you superstar, you! 😊🌟
This seems like something they could really use to do multi-frame rendering for txt2video
Thank you, Sebastian. As ever, your tutorials are informative and straight to the point... and they work!
@sebastiankamph
A year ago
Happy to help, thank you for being here! 🌟
If you Hires Fix after the init image is generated, you can usually cut through the noise. Go with R-ESRGAN 4x and denoise at 0.3 or 0.2; keep that part weak. Alternatively, you can push your CFG and use Hires Fix to add additional noise and burn if you're going for a noisy style.
Very strange! I updated everything, turned everything on exactly the same way and uploaded a picture, but the result is completely random. It does NOT work!
@TheFarmingAngels
A year ago
The same. It works only with some demo pictures (perfect face, no subtle expressions, no background). And OpenPose misses the front/back pose 70% of the time.
@DedBruzon
A year ago
Yes, same thing.
I LOVE THIS FEATURE. Already got some awesome results in the first few minutes of fooling around with it.
Upon seeing this I upgraded to a 12 GB GPU this week so I could finally run ControlNet. It is indeed a literal game changer for projects that need character consistency. No more LoRA and prompting gymnastics while crossing your fingers that the next batch will render what you want. It cuts workflow to a fraction of what it was before and opens all kinds of new creative doors. I'm loving this feature!
@sebastiankamph
11 months ago
Happy to hear it's working out for you! ControlNet is life.
Love this!!! I need this. Character consistency is my biggest problem.
Wonder if people have started building graphic novels with this. Consistency in character design and style between frames is going to be really useful for something like that.
@mhelvens
A year ago
Or video. 😮
@HunterIndia
A year ago
You can already get consistent characters with textual inversion or LoRA; you can train one yourself. Especially textual inversion, which needs 8 images; any more is just useless for training a TI.
@Pahiro
A year ago
@@HunterIndia But then you'd need to train a model for each character. I suppose it's not that tall an order, but still, this will make things much easier. I should start looking for some webcomics with an AI tag; I'd love to see AI being utilized in that space.
@ColoNihilism
A year ago
That's the dream: video indeed. Scary how much GPU power would be required.
@zoybean
A year ago
@@Pahiro I'm trying, but with Blender and img2img (for finer control).
Pog, didn't notice the Update. xd ty, Seb. Had a good day.
@sebastiankamph
A year ago
You're welcome! And thank you, you too 🌟
Being able to do my characters in different 3D positions... dang, this is godlike.
@fernando749845
A year ago
This has more character consistency than many 'old-fashioned' comic books :-)
@MrErick1160
A year ago
@@fernando749845 😅😅 this is actually sad to hear
@piotrrossa8030
A year ago
@@MrErick1160 My results are completely different from the reference :D :D
@sebastiankamph
A year ago
🌟🌟
@TimeMasterOG
A year ago
@@fernando749845 Yes, but actually no... comic books stay very character-consistent unless a panel gets drawn by a different artist.
Wow, that is amazing, great video as always.
@sebastiankamph
A year ago
Glad you liked it! 😊
This is what I was waiting for! My goodness
I would love to see how they pulled this off. It seems like if they can do this, then a lot of other things we don't have yet ought to be possible, like maintaining outfits or architecture. This is perfect for making comics, though, with character coherence between frames. Maybe they could even fix the coherency issue of tiling a high-res image, depending on what they did, exactly. This is pretty crazy.
@wykydytron
A year ago
You can maintain an outfit with it: just prompt that outfit, or maybe use just the outfit here and the face in a separate ControlNet unit... you know what, I'm going to check that today.
@skittlzboi
A year ago
@@wykydytron Did you figure out how to do it? I try to use one ControlNet unit for reference and one for OpenPose but can't seem to get good results.
I did get the smile to work, but I had to include my whole prompt so my image didn't change drastically, and I added (woman smiling:1.2) at the beginning of my prompt. The posing part was changing my image too much, but I have to play some more with that. In the time since you made this video they updated ControlNet to v1.1.164. Thanks, love your videos!
@sebastiankamph
A year ago
Glad you're enjoying the videos! I had to test a bunch of stuff before I got it working, and some versions barely even worked for me. Hoping new versions will make it easier to use for all.
@monsterlair
A year ago
This is exactly my experience too. Also, "ControlNet is more important" brightens up the image for me. I can get more consistent lighting with "My prompt is more important", but that changes the image more.
@Maltebyte2
5 months ago
I'm getting nowhere fast! Might just give up altogether! I mean, the output looks nothing, nothing like the input image, and I did everything exactly the same as in the video! ;(
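The (woman smiling:1.2) trick mentioned earlier in this thread uses Automatic1111's attention syntax: wrapping a phrase as (phrase:w) multiplies the attention on it by w, so values above 1 emphasize and values below 1 de-emphasize. A tiny helper to build such prompts (the function name is my own, purely illustrative):

```python
def weight(phrase: str, w: float) -> str:
    # A1111 attention syntax: (phrase:1.2) boosts, (phrase:0.8) dampens.
    return f"({phrase}:{w})"


prompt = weight("woman smiling", 1.2) + ", portrait, oil painting"
# -> "(woman smiling:1.2), portrait, oil painting"
```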
Thank you for this excellent content!
@sebastiankamph
A year ago
Happy to help! 😊
Just wow, GAME CHANGER is the right set of words for this... just tried it and am utterly impressed. Thanks for reporting on this!!
I've followed the same steps, but my pictures come out nothing like my original. I'm enabling it and selecting 'reference_only', but the new pictures look nothing like me.
I don't have the ControlNet Unit 0 and ControlNet Unit 1 tabs. I only have "single image" and "batch" and nothing above that. Have I done something wrong? I've checked that everything is up to date.
@alexandercato7400
10 months ago
Settings - controlnet - multicontrolnet: It is set to 3 by default. If you set it to 2, it should work.
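The setting described above can also be changed outside the UI: the multi-ControlNet unit count lives in the webui's config.json. A sketch of the relevant fragment; the key name "control_net_max_models_num" is my recollection of this extension's setting key, so treat it as an assumption and confirm it under Settings - ControlNet before editing:

```json
{
    "control_net_max_models_num": 2
}
```

Restart the webui after changing it.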
Hi, very good tutorial. I tried my own image as input for ControlNet with reference_only and a simple prompt like "man is smiling", but the faces are totally different. How can I preserve the face? Thanks, Eran
Hey there. Good content; I'm learning a lot on this channel! Thank you, Sebastian. How do I bring such a face (as here) into a generated image of, say, an assassin? Do I just carry on with my prompt as I would have, and bring a face image into ControlNet?
How would you recommend getting the back of a character? I am trying to grab a depth map from both sides and combine them in blender. I guess I could do head on and then 120deg turns in either direction....
thank you for your work.
@sebastiankamph
A year ago
Thank you for your engagement! 🌟
This is quite amazing. Even better than using LoRAs, and the chance to combine LoRAs, seeds and ControlNet with reference methods, NICE... BTW, I was expecting my "Wonderwall" dad joke. I'm very disappointed, Mister Kamph (read it in a beautifully angry British Sean Connery tone).
@sebastiankamph
A year ago
You posted it after I recorded this. But I did find it very good! 😂😘
Thank you very much for another great video. I wanted to ask about the styles; it's the first time I've seen this. Is there a video where you explain what they are and how to use them, or can you tell me here quickly? Thank you
@sebastiankamph
A year ago
Check the pinned comment or the video description. Install instructions and usage are in that link.
Can I achieve something like Midjourney's blend with ControlNet? For example, a human photo and a potato, and get something in between?
Great video, amazing tool!!
@sebastiankamph
A year ago
Thank you! ControlNet is so powerful, it blows my mind. And I'm not exaggerating.
Thank you!
I have 1.66, but it will only copy the pose; the person looks nothing like the original photo... any ideas why?
@Maltebyte2
5 months ago
Same issue here! It doesn't exactly motivate me to continue. Have you found out why?
Thank you, this is very helpful 😉
Awesome tutorial! Thank you so much! However, for some reason SD ignores the second ControlNet unit and doesn't give me the pose I want. Any idea what the issue might be? Please keep making more videos!
Thanks, great video and straight to the point. Liked, subbed and commented !!!
@sebastiankamph
A year ago
The holy trinity! You're the real mvp 🌟
Bro is the best, thank you so much for saving a ton of time
Well, this wasn't quite what I was looking for, but holy hell, I got something good. I accidentally wiped my prompts and didn't know how to get them back... loading the image into PNG Info brought up my prompts/settings. So, thank you for that!!!
This is insanely useful. I've been trying for the last week to collect images for a LoRA. It can be tricky as hell, because keeping characters consistent is HARD: change just a few words and suddenly the whole piece looks like a different style. It will be SOOO easy to make a LoRA now thanks to this. What will they come up with next? Because Google and OpenAI, in my opinion, are doing a pretty "meh" job.
@TTarragon
A year ago
Yeah, this was my first thought too. By itself it's great, but it can be SO useful for training LoRAs, which, I suppose, are more accurate.
@scottyfityoga
A year ago
Hey, can you please tell me how this makes it easier to train a LoRA?
@benjaminjako
A year ago
@@scottyfityoga Easier to source images of a certain person, for example.
Hm, this one doesn't work so well for me. I've tried different kinds of photos, but they come out soooo far off, which is weird since I've used the same settings as you and tried other checkpoints, sampling methods and longer sampling steps, but still nothing close to yours. Any tips or ideas what it might be?
How is this different from img2img? I played with it and don't see a difference.
In order to have the openpose model, you have to download the models separately, right?
Do I have to follow this procedure if I want to take one image from the img2img window and apply OpenPose to it to get different variations? Or is there a simpler way?
Hey Sebastian, loving your videos. I notice that I don't have any ControlNet Units in my UI. Any advice on why/how that is set up?
@alexandercato7400
10 months ago
Settings - controlnet - multicontrolnet: It is set to 3 by default. If you set it to 2, it should work.
You are at the top of your game, Seb! Go king!
@sebastiankamph
A year ago
Thanks superstar! 🌟
Something is wrong: I have the latest version of ControlNet, but the images come out absolutely different from my control image.
Great video, thank you. I have a question: I can make a pose in img2img. When you use a batch of 4 you get 4 pictures and one pose picture. Can I save this pose? When I click on the pose image and use the save button it doesn't work; I don't get a download button as with a normal picture.
How do you get that expanded bar on the top of the screen?
If you keep injuring yourself, it's time to book an appointment to learn some shelf improvement. This looks amazing, I keep meaning to look into ControlNet more but never seem to get around to it. Cheers.
Hello sir, can you help with my query about hands and fingers deforming when creating artwork, like extra legs or extra fingers? Sometimes they go missing or aren't correct per human anatomy. I tried negative prompts, but the issue remains.
Hello there, I have a question regarding ControlNet. I have seen that, using a 3D model, you can make poses and extract them with OpenPose. In this video I learned that you can use any face as a reference and even combine it with OpenPose. Now my question: I have a whole finished 3D model of my character, e.g. a 3D anime character in Blender. It has its own face and clothing. I would pose my 3D model and take a picture of it. How can I use ControlNet so it uses the reference picture and generates an image with the same face and clothing? Is there any way?
I am facing a strange issue: I choose a model to work with, such as realismEngine_v10, load an image in ControlNet, choose canny as the preprocessor and model, and write the prompt, but I receive an error. Is the model incompatible with the new ControlNet, or is there another issue? Note that I have downloaded the new ControlNet models, such as control_v11p_sd15_openpose.
Really curious how this could also work with inpainting and img2img at the same time. exciting!
Best channel. period.
6:11 Can this be applied to the full-body pose? I noticed the generated image only focuses on half of the body.
Hello! Meanwhile, thanks for your videos, because I'm learning so much! I wanted to ask if there is a way to add real objects to an image, for example a model holding a real bag. Thank you
How did you get your styles menu subdivided like that? Is there an extension that does that or what?
Tried it, but I don't see any effect of ControlNet on the result. Does the image have to be created with the same model I am trying to use?
I can't see the ControlNet option there. I'm on the latest Stable Diffusion and I'm from India, but I don't know why I can't see it.
@sebastiankamph
A year ago
Install from extensions tab
Thanks!
@sebastiankamph
A year ago
Wow, thank you once again! Real mvp material. 💫
Awesome video 👍 Your computer is so fast at generating pictures; what are your hardware specs (CPU, GPU, RAM)?
Great job, Sebastian. Is there a website where I can try this out?
My ControlNet model dropdown shows none. How do I load a model into ControlNet? Thank you
Do you have a video on training models, or how to set up LoRAs and such? I'm a little dumb, ty
Oh, I meant to comment one of the last times: I downloaded the styles file from your link and put it in my SD folder as instructed in the link, but they aren't in my styles tab in Auto1111. Am I missing something?
I'm actually having trouble with the first part: with reference_only I get a totally different image. Does anyone know how to resolve this issue?
What I would like to do is inpainting with ControlNet. What I mean is: I have an image with a pose, I remove one arm for inpainting and I pass in another arm pose, and the inpainting is done with that new arm pose. Is this possible? What I found isn't like that.
Thanks for the update! It looks reassuring not to have to learn how to train and fine-tune. I wonder if you can just keep using the same reference face in ANY different scenario; then we've got ourselves a character mapped by seed only.
For posing, I'm thinking we can also extract a pose from an image?
I'm actually doing even more crazy things with Tile. But yeah, the reference ones are great too.
My laptop has only 4 GB of VRAM, so not a good start already 😅 I was able to generate at 1024×1024 resolution, but after updating Automatic1111 I can't generate above 512×512, and I also can't use ControlNet; every time, the VRAM usage goes through the roof. Then I upgraded to Torch 2.0, but it still didn't help. Torch 2 definitely decreased my generation time though, ngl. What should I do? I want to use ControlNet.
Has anybody figured out why there are no models coming with ControlNet v1.1.234? I tried to use it on this version and nothing worked; ControlNet was just ignored for everything (canny, pose, etc.). I could not select any models for any preprocessor, as the models dropdown list was empty. I downloaded one model for OpenPose, put it in the models folder in extensions, and now I can select this model for pose and it all started working. I installed ControlNet from Automatic1111, but it only puts the .yaml files for the models in the required folder, not the models themselves.
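For anyone hitting the same empty-models dropdown: the extension expects the .pth model files next to those .yaml files. A sketch of the expected layout, assuming a default Automatic1111 install location; the download source is the usual lllyasviel/ControlNet-v1-1 Hugging Face repo, left commented out so nothing is fetched by accident:

```shell
# Adjust WEBUI_ROOT to your install location.
WEBUI_ROOT="${WEBUI_ROOT:-$HOME/stable-diffusion-webui}"
MODEL_DIR="$WEBUI_ROOT/extensions/sd-webui-controlnet/models"
mkdir -p "$MODEL_DIR"

# Example: fetch the openpose model mentioned above (uncomment to run):
# wget -P "$MODEL_DIR" \
#   "https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose.pth"

ls "$MODEL_DIR"
```

Note that reference_only itself needs no model file; this only applies to model-backed preprocessors like canny, depth and openpose.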
Omg this is incredible
@sebastiankamph
A year ago
I can't disagree 😅
When I use ControlNet, it only produces an inverted image of the reference, even when I select reference as the control. How do I fix this?
Hi Sebastian, your videos are amazing!! Thanks very much. I have a question for you: do you think it's possible to make an AI model wear a real dress? For example, if I have a ghost-mannequin photo of a dress, can I generate a photo of it worn by an AI model? Please let me know; I'm new to this field and I think this could be very useful.
I've followed this exact process and my outputs aren't even 5% similar to my input. Something is very wrong
It's still very clunky, but we can see the future here. I want to be able to adjust it like making an MMO character: dress it however I see fit, then put the character in any scene I want, in any pose I want, talking/singing/dancing/whatever. We are so close to that now; it's so exciting!
@wykydytron
A year ago
There is a ControlNet model that allows for easy outfit swaps. My poor memory can't handle its name (it has 3 versions; the first ends in 20 in the name, if that helps). Anyway, it detects what's in the picture and paints it in corresponding colors; then you just say you want the person to have X outfit and it will change the clothes, but the rest will remain unchanged.
@aguyfromnothere
A year ago
@@wykydytron Segmentation.
I just started with Stable Diffusion. How do I get that UI and feature? I don't see it. Is there a video on how to set it up?
Folks, if I want to generate a character in a background image and have the character blend into the background, how do I do this?
How did you get multi-ControlNet? It looks like I only have the one.
If you already have other photos as JPGs, how do you change the skirts in one photo and then apply those skirts to the models in the other JPGs?
That was one of the missing features: the ability to keep the same character. Still not perfect, but we are getting there! I now wonder if it will become possible to generate a few good-looking images and train a Dreambooth on them. That way you can reuse the face only as an inpaint.
@RoboMagician
A year ago
Wondering the same thing: things like copying over styles, like a person's clothes and the patterns on the clothes, etc., to the generated images. Does Midjourney's remix do that?
There's one thing about this preprocessor: it's more resource-consuming. I'm generating a 512×768 image and setting Hires Fix to 2x. As soon as it starts to render the hires image, a "NansException: A tensor with all NaNs was produced in Unet" error occurs. It only starts to render the upscaled image if I lower the upscale factor to 1.6.