FULL Perspective Control - ControlNET in Automatic 1111 - Stable Diffusion
How-to & Style
Use ControlNet in A1111 to gain full control over perspective. You can use this with 3D models from the internet, or create your own 3D models in Blender or other software. This method lets you create different views of a similar-looking location. You can also use Multi-ControlNet to place a character into the scene.
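The workflow in the video is point-and-click in the A1111 UI, but the same request can be scripted against the WebUI's HTTP API (available when A1111 is started with the `--api` flag). A minimal sketch, assuming the ControlNet extension's JSON key names at the time of writing (`input_image`, `module`, `model`, `weight`, `guidance_start`, `guidance_end`); check your installed version, as these may differ:

```python
import base64
import json

def build_depth_payload(prompt, depth_png_bytes,
                        model="control_v11f1p_sd15_depth",
                        weight=1.0, guidance_start=0.0, guidance_end=1.0):
    """Build a txt2img request with one ControlNet depth unit.

    The model name above is a placeholder; use the depth model name
    shown in your own ControlNet dropdown.
    """
    unit = {
        # base64-encode the depth image for the JSON payload
        "input_image": base64.b64encode(depth_png_bytes).decode("ascii"),
        "module": "none",   # "none" because we supply a ready-made depth map
        "model": model,
        "weight": weight,                  # how strongly depth steers composition
        "guidance_start": guidance_start,  # fraction of steps before the unit engages
        "guidance_end": guidance_end,      # fraction of steps after which it releases
    }
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {"controlnet": {"args": [unit]}},
    }

# Usage (requires A1111 running with --api; `requests` assumed installed):
# import requests
# payload = build_depth_payload("japanese alley at night, anime style",
#                               open("depth.png", "rb").read())
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```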
#### Links from the Video ####
Join my Live Stream: kzread.infopuxTMSqC1bc
SketchFab Japanese Alley: sketchfab.com/3d-models/japanese-street-at-night-fb1bdcd71a5544d699379d2d13dd1171
Buy me a Coffee: www.buymeacoffee.com/oliviotutorials
Join my Facebook Group: facebook.com/groups/theairevolution
Join my Discord Group: discord.gg/XKAk7GUzAW
Support my Channel:
/ @oliviosarikas
Subscribe to my Newsletter for FREE: oliviotutorials.podia.com/new...
How to get started with Midjourney: • Midjourney AI - FIRST ...
Midjourney Settings explained: • Midjourney Settings Ex...
Best Midjourney Resources: • 😍 Midjourney BEST Reso...
Make better Midjourney Prompts: • Make BETTER Prompts - ...
My Facebook PHOTOGRAPHY group: / oliviotutorials.superfan
My Affinity Photo Creative Packs: gumroad.com/sarikasat
My Patreon Page: / sarikas
All my Social Media Accounts: linktr.ee/oliviotutorials
Comments: 81
Wow, such a great idea. Awesome! 👏
Wonderful tip. I've been trying to do this with sketches, but I'm excited to free myself from the tunnel background with buildings on either side stretching infinitely into the distance.
I knew those sliders in Controlnet could do wonders :-) Gr8 video mate 👌
Followed a hard guide on the internet to install Stable Diffusion and they didn't even go over xformers. Learned about it a week later, and this little command-line flag literally sped up my renders by like 45%, and I'm not joking. Some renders were taking 3 minutes (I use a lot of LoRAs) and this cut it down to like 1:30, some even faster. Thank you!!!!
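For reference, xformers is enabled with a launch flag rather than code. A sketch of the usual way to set it, assuming the stock `webui-user` launcher files; the actual speedup varies by GPU and workload:

```shell
# webui-user.sh (Linux/macOS) — enable memory-efficient attention via xformers:
export COMMANDLINE_ARGS="--xformers"

# On Windows, the equivalent line in webui-user.bat would be:
#   set COMMANDLINE_ARGS=--xformers
```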
Thanks for sharing the tip. You're awesome ;)
Woo! I asked if you could post something like this before and you did it, awesome! Thank you so much!
@OlivioSarikas
A year ago
Thank you. Did you talk to me about that in my last live stream?
@zoybean
A year ago
@@OlivioSarikas Yep, that was me!
FIRST, and I #CantWait to get into it! 💪🧠 -Thanks, Olivio!! 🎉
Great video, informative as always.
@OlivioSarikas
A year ago
Thank you very much :)
hey man, your channel rocks!
Thanks!
Heck yah, great video! I have been using game screenshots for this sort of thing too, works great!
@OlivioSarikas
A year ago
Great Idea! :) Thank you
I can't believe I never thought of using Depth to create scenery in a certain perspective!! Wow. That is actually a really good use of that feature. I've spent a long time trying to get the camera in the right spot, and I could have just done it this way. Thanks for the clever tip!
@martin-cheers
A year ago
I second that.
@cekuhnen
A year ago
I work with Blender and extract the depth image from the 3D scene. Super useful.
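If you render the Z pass yourself, note that it contains raw metric distances, while ControlNet's depth models expect an 8-bit image in the white-is-near convention (as MiDaS-style maps use). A minimal normalization sketch; `clip_far` is a hypothetical helper parameter for capping sky/background distances:

```python
import numpy as np

def z_pass_to_depth_map(z, clip_far=None):
    """Convert a raw Z-depth buffer (float distances) into an 8-bit
    depth map where near pixels are white and far pixels are black."""
    z = np.asarray(z, dtype=np.float64)
    if clip_far is not None:
        z = np.minimum(z, clip_far)        # cap huge background distances
    z_min, z_max = z.min(), z.max()
    if z_max == z_min:                     # flat buffer: return mid grey
        return np.full(z.shape, 127, dtype=np.uint8)
    norm = (z - z_min) / (z_max - z_min)   # 0 = nearest, 1 = farthest
    return ((1.0 - norm) * 255).round().astype(np.uint8)  # invert: near = white

# Usage: save with e.g. Pillow — Image.fromarray(depth_map).save("depth.png")
```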
what a brilliant idea !
@OlivioSarikas
A year ago
Thank you :)
What you really need is a nested-type default slider. Nesting sliders, or something similar, will be absolutely necessary to handle future complexities before they are simplified.
Such an interesting video. The way we can control the weight, and how that affects the final results, is something I learned today 👍
@OlivioSarikas
A year ago
Thank you :)
It's incredible. I didn't even think that you could just take a screenshot of any scene in a movie or a video game, take a recognizable map, and create your own version. I think we have all already taken a picture of our apartment and played with interior design, for example classicism or Victorian style.
Is there any way to turn the image to different (fill-in-the-blank) angles and get consistency, so it can be used for 3D scenes? For example: you take an image and keep it as a texture on the model, then move the angle a bit so that the depth map reads it a different way, but you still have an img2img step involved, which can be tuned to stay texturally consistent. Of course the UV maps will update, but you now have textures that can be used for 3D animation. Please reply to let me know lol.
Hi Olivio. I don't remember the name, but Stable Diffusion has an extension that bends the image and creates a corridor. You can create streets and whatever you want. But I have another question: do you know how to achieve stability when you create animation with the ControlNet script (img2img), or when you process a batch at once? The face can already be stabilized, but the clothes are always different. Kind of similar, but not the same. If you know how and what to do, show us, please.
Hi Olivio, is there still a way to use Stable Diffusion with all the changes and additions in Google Colab? Or even an alternative to Google Colab other than a local installation would be great! Ty
Olivio, does ControlNet generate a depth map? I was under the impression that you need to supply it with one.
How beautiful your presentation is.
Hello! As always, everything is top notch, for which you have a lot of respect. In the video you mentioned that the resulting images can be converted into 3D. Could you show us how all these pictures could be converted back into a 3D model? )))
@OlivioSarikas
A year ago
Hi, thank you. I think you misunderstood me. I don't know how to turn them back into 3D. It can be done to a certain degree, by modeling the same scene, but only for a kind of zoom effect, not an actual 3D space, as far as I know.
@CHACHILLIE
11 months ago
You can camera project them back onto the 3D model
If I want to fill an empty room with furniture using this technique, but leave the position of the fixtures and walls unchanged, how should I set it up? And what if I want it to suggest random arrangements of the furnishings (brainstorming)?
Not all that different from a project I did before: taking a picture of a person from the internet and turning it into anime-style fanart. The only things the original picture and the final product have in common are the pose of the character and the camera angle. ControlNet Depth and Canny were very important, along with ControlNet Clip Image (style).
Using the Guest mode gives very good results.
@OlivioSarikas
A year ago
Thank you, i will try that
Why does the guidance parameter define how long it's going to be used, when the guidance start parameter does that already?
I have used kind of the same method, but for interior architecture scenes with images I generated. I will send it.
I always wondered what those sliders would do-
Where are you from, Olivio? I could just listen to your accent for hours and hours. It is so cute :)
@OlivioSarikas
A year ago
Thank you. I'm from Vienna. Well, I'm from Germany, but I live in Vienna :)
What's with the annotation result preview? I've never seen that in 1111, and I don't get how it relates to the output.
Hey Olivio, I am trying to install AUTOMATIC1111, but my laptop keeps hanging. Please tell me the system requirements to run A1111 on a local machine.
Have you experimented with video, using the 3D models to walk through the scene and then plugging that into the depth control map?
👋
It's interesting because ControlNet pulls depth out of 2D, but with SD in Blender the depth is 100% calculated, so it's more accurate to use the SD plugin (which I haven't tested yet, but I assume it is underdeveloped compared to Auto's).
@cobr3545
A year ago
ControlNet uses a rendered depth map, same as Blender. It's not estimated from 2D if you supply one.
Hey, I don't have "depth" under Model for ControlNet. How did you get that?
@OlivioSarikas
A year ago
Maybe your ControlNet version is outdated, or you didn't download the depth model.
Is it also possible to create an output that doesn't look cartoony with this method?
@fluffsquirrel
A year ago
Maybe that's just the model he's using?
@OlivioSarikas
A year ago
Yes, of course. I just used anime here because it is easier to prompt for.
@fluffsquirrel
A year ago
@@OlivioSarikas Thank you!
This is probably a stupid question, but do you think it's possible for the AI to generate the same image but with different lighting? It would be awesome.
@OlivioSarikas
A year ago
Try rendering the scene first, then using Canny as the control map with that image as input, and change the daylight description. See if that helps.
@stillfangirlingtoday1468
A year ago
@@OlivioSarikas Oh, I will definitely try it! Thank you for replying!
Hi, I have ControlNet but my settings are different. For a start, there is no annotator result next to the image. Also, the preprocessor has three options: depth_leres, depth_midas and depth_zoe. Then below, the options are Control Weight, Starting Control Step, Ending Control Step, Preprocessor Resolution, Remove Near %, Remove Background %. Anyone know why my settings are not the same as his?
@kikeluzi
A year ago
Same here... And also... I think it's not working :c
@pedrodeelizalde7812
A year ago
@@kikeluzi It did work for me using depth_leres, but I had to play with the settings to get it...
@kikeluzi
A year ago
@@pedrodeelizalde7812 I'll try this one then. Thank you!!! 😁 I was using "depth_midas". *Edit: I just needed to download a model first... ;u;
clever girl....
I wish Midjourney could do something like this... generate depth maps, or use a depth map for perspective control.
@audiogus2651
A year ago
It used to come up quite a bit in the weekly chats a few months ago, and they did say it was being looked into.
@OlivioSarikas
A year ago
Yes, MJ really needs stuff like that ASAP
Do you have a video on inserting text with Stable Diffusion XL?
@OlivioSarikas
A year ago
Nope, not yet. I could make one
@elarcadenoah9000
A year ago
@@OlivioSarikas cool bro you are the chosen one
Can I add 1girl to the prompt, if I want the girl to stand in the middle of the street? 😊
@OlivioSarikas
A year ago
Yes, you can also use Multi-ControlNet to pose her exactly in a specific position.
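As a sketch of what Multi-ControlNet looks like on the API side: two units in one request, a depth unit to pin the scene perspective and an OpenPose unit to pin the character pose. Model names and key names here are placeholders/assumptions; use the models installed locally, and note that the extension's settings must allow more than one unit ("Multi ControlNet: Max models"):

```python
import base64

def controlnet_unit(image_bytes, module, model, weight=1.0):
    """One ControlNet unit in the extension's API format (key names are
    assumptions based on the extension at the time of writing)."""
    return {
        "input_image": base64.b64encode(image_bytes).decode("ascii"),
        "module": module,
        "model": model,
        "weight": weight,
    }

def multi_controlnet_payload(prompt, depth_png, pose_png):
    # Unit 1 pins the scene perspective; unit 2 pins the character pose.
    # Both model names below are placeholders, not guaranteed filenames.
    units = [
        controlnet_unit(depth_png, "none", "control_sd15_depth"),
        controlnet_unit(pose_png, "openpose", "control_sd15_openpose", weight=0.8),
    ]
    return {
        "prompt": prompt,
        "alwayson_scripts": {"controlnet": {"args": units}},
    }
```

The same payload shape goes to `/sdapi/v1/txt2img` as with a single unit; the extension simply reads every entry in the `args` list as its own ControlNet unit.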
Also worth mentioning: you used a different seed for each render.
It's now banned. Does anyone know any alternatives?
@kikeluzi
A year ago
I didn't find anything about that. ControlNet was banned? 🤔 How do you know, and where did you find that?
Meta has a new AI called Segment Anything; it goes nicely with AI-generated images for cutting out elements. Maybe worth content for you.
@OlivioSarikas
A year ago
I will have a look at that. Thank you.
Why do you lose your time with AI now... lifetime not worth it... you are old... control it... you are not happy with this... pay attention to what your life is now...
@Thagnoth
A year ago
Yes, instructor… I shall abandon all my enjoyment of technology… Thank you for showing me your text-only Amish hypnosis technique……