FULL Perspective Control - ControlNET in Automatic 1111 - Stable Diffusion

Howto & Style

Use ControlNet in A1111 to take full control over perspective. You can use this with 3D models from the internet, or create your own 3D models in Blender or other software. This method allows you to create different views of a similar-looking location. You can also use Multi-ControlNet to place a character into the scene.
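
The workflow above only needs a grayscale control image where brightness encodes distance, so you can even fake one without a 3D package. Below is a minimal numpy sketch (illustrative only, not part of any A1111 API) that builds a one-point-perspective "corridor" depth map, following the MiDaS-style near-is-bright convention that A1111's depth preprocessors output:

```python
import numpy as np

# Fake a one-point-perspective depth map: bright = near, dark = far.
H, W = 512, 512
y, x = np.mgrid[0:H, 0:W]

# Distance of each pixel from a vanishing point at the image centre,
# normalised to [0, 1]; pixels near the centre read as "far away".
cy, cx = H / 2, W / 2
dist = np.hypot((y - cy) / cy, (x - cx) / cx)
dist = np.clip(dist / dist.max(), 0.0, 1.0)

# 8-bit grayscale image: the image border (near) is bright, the
# vanishing point (far) is dark. Saved as a PNG, this could be dropped
# into the ControlNet image slot with the preprocessor set to "none",
# since the image already *is* a depth map.
depth = (dist * 255).astype(np.uint8)

print(depth[H // 2, W // 2], depth[0, 0])  # prints: 0 255
```

In A1111 you would load such an image into the ControlNet unit, pick the depth model, and skip the preprocessor, exactly as with a depth render exported from Blender.
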
#### Links from the Video ####
Join my Live Stream: kzread.infopuxTMSqC1bc
SketchFab Japanese Alley: sketchfab.com/3d-models/japan...
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
Support my Channel:
/ @oliviosarikas
Subscribe to my Newsletter for FREE: oliviotutorials.podia.com/new...
How to get started with Midjourney: • Midjourney AI - FIRST ...
Midjourney Settings explained: • Midjourney Settings Ex...
Best Midjourney Resources: • 😍 Midjourney BEST Reso...
Make better Midjourney Prompts: • Make BETTER Prompts - ...
My Facebook PHOTOGRAPHY group: / oliviotutorials.superfan
My Affinity Photo Creative Packs: gumroad.com/sarikasat
My Patreon Page: / sarikas
All my Social Media Accounts: linktr.ee/oliviotutorials

Comments: 81

  • @RetzyWilliams
    @RetzyWilliams 1 year ago

    Wow, such a great idea. Awesome! 👏

  • @jaredbeiswenger3766
    @jaredbeiswenger3766 1 year ago

    Wonderful tip. I've been trying to do this with sketches, but I'm excited to free myself from the tunnel background with buildings on either side stretching infinitely into the distance.

  • @dreamzdziner8484
    @dreamzdziner8484 1 year ago

    I knew those sliders in Controlnet could do wonders :-) Gr8 video mate 👌

  • @masterjx
    @masterjx 5 months ago

    Followed a hard guide on the internet to install Stable Diffusion and they didn't even go over xformers. Learned about it a week later, and this little line of code literally sped up my renders by like 45%, and I'm not joking. Some renders were taking 3 minutes (I use a lot of LoRAs) and this cut them down to about 1:30, some even faster. Thank you!!!!
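
For anyone looking for the "little line of code" this comment refers to: xformers is enabled in AUTOMATIC1111 via the `--xformers` command-line flag, typically set in `webui-user.sh` (or `webui-user.bat` on Windows). A minimal sketch; actual speedups vary by GPU and workload:

```shell
# webui-user.sh (Linux/macOS). The Windows equivalent in webui-user.bat is:
#   set COMMANDLINE_ARGS=--xformers
# The flag enables xformers' memory-efficient attention in AUTOMATIC1111.
export COMMANDLINE_ARGS="--xformers"
```
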

  • @nonameishere7234
    @nonameishere7234 4 months ago

    Thanks for sharing the tip. You're awesome ;)

  • @zoybean
    @zoybean 1 year ago

    Woo! I asked if you could post something like this before and you did it, awesome! Thank you so much!

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Thank you. Did you talk to me about that in my last live stream?

  • @zoybean
    @zoybean 1 year ago

    @@OlivioSarikas Yep, that was me!

  • @MarkDemarest
    @MarkDemarest 1 year ago

    FIRST, and I #CantWait to get into it! 💪🧠 -Thanks, Olivio!! 🎉

  • @coda514
    @coda514 1 year ago

    Great video, informative as always.

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Thank you very much :)

  • @temarket
    @temarket 1 year ago

    hey man, your channel rocks!

  • @sb6934
    @sb6934 1 year ago

    Thanks!

  • @audiogus2651
    @audiogus2651 1 year ago

    Heck yah, great video! I have been using game screenshots for this sort of thing too, works great!

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Great Idea! :) Thank you

  • @santosic
    @santosic 1 year ago

    I can't believe I never thought of using Depth to create scenery in a certain perspective!! Wow. That is actually a really good use of that feature. I've spent a long time trying to get the camera in the right spot, and I could have just done it this way. Thanks for the clever tip!

  • @martin-cheers
    @martin-cheers 1 year ago

    I second that.

  • @cekuhnen
    @cekuhnen 1 year ago

    I work with Blender and extract the depth image from the 3D scene. Super useful.

  • @JulienTaillez
    @JulienTaillez 1 year ago

    what a brilliant idea !

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Thank you :)

  • @nathanielblairofkew1082
    @nathanielblairofkew1082 1 year ago

    What you really need is a nested-type default slider. Nesting sliders, or something similar, will be absolutely necessary to handle future complexities before they are simplified.

  • @bryan98pa
    @bryan98pa 1 year ago

    Such an interesting video. The way we can control the weight, and how that affects the final results, is something I learned today 👍

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Thank you :)

  • @aggressiveaegyo7679
    @aggressiveaegyo7679 1 year ago

    It's incredible. I didn’t even think that you can just take a screenshot of any scene in a movie. Or a video game. Take a recognizable map and create your own version. I think that we all have already taken a picture of our apartment and played with interior design, for example, classicism or Victorian style.

  • @animestories5084
    @animestories5084 1 year ago

    Any way we can turn the image in different (like fill in the blank) angles and get consistency so it can be used for 3D scenes? For example: You take an image and keep it as a texture on the model, then move the angle a bit so that the depth map can read it a different way, but now (or still) you're still having an imgtoimg feature involved, which can be tested to stay texturally consistent. Of course, the UV maps will update, but you have now textures that can be used for 3D animation. Please reply to let me know lol.

  • @michail_777
    @michail_777 1 year ago

    Hi Olivio. I don't remember the name, but Stable Diffusion has an extension that bends the image and creates a corridor. You can create streets and whatever you want. But I have another question: do you know how to achieve stability when you create animation with the ControlNet script (img2img), or when you process a batch at once? The face can already be stabilized, but the clothes are always different. It's kind of similar, but not quite. If you know how and what to do, show us, please.

  • @RiyadJaamour
    @RiyadJaamour 1 year ago

    Hi Olivio, is there still a way to use Stable Diffusion, with all its changes and additions, in Google Colab? Or if there is an alternative to Google Colab other than a local installation, that would be great! Ty

  • @cekuhnen
    @cekuhnen 1 year ago

    Olivio, does ControlNet generate a depth map? I was under the impression that you need to supply it with one.

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    #### Links from the Video ####
    Join my Live Stream: kzread.infopuxTMSqC1bc
    SketchFab Japanese Alley: sketchfab.com/3d-models/japanese-street-at-night-fb1bdcd71a5544d699379d2d13dd1171
    Buy me a Coffee: www.buymeacoffee.com/oliviotutorials
    Join my Facebook Group: facebook.com/groups/theairevolution
    Join my Discord Group: discord.gg/XKAk7GUzAW

  • @NasserQahtani
    @NasserQahtani 1 year ago

    How beautifully you present things.

  • @FikaBakilli
    @FikaBakilli 1 year ago

    Hello! As always, everything is top notch, for which you have a lot of respect. In the video you mentioned that the resulting images can be converted into 3D. Could you show us how all these pictures could be converted back into a 3D model? )))

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Hi, thank you. I think you misunderstood me. I don't know how to turn them back into 3D. It can be done to a certain degree, by modeling the same scene, but only for a kind of zoom effect, not an actual 3D space, as far as I know.

  • @CHACHILLIE
    @CHACHILLIE 11 months ago

    You can camera project them back onto the 3D model

  • @Aristocle
    @Aristocle 9 months ago

    If I want to fill an empty room with furniture using this technique, but leave the position of the fixtures and walls unchanged, how should I set up the problem? I'd want it to suggest random arrangements of the furnishings (brainstorming).

  • @ryry9780
    @ryry9780 1 year ago

    Not all that different from a project I did before -- taking a picture of a person from the internet and turning it into an anime-style fanart. The only things that the original picture and the final product have in common are the pose of the character and the camera angle. ControlNet Depth and Canny had been very important, along with ControlNet Clip Image (style).

  • @entrypoint2009
    @entrypoint2009 1 year ago

    Using the Guest mode gives very good results.

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Thank you, i will try that

  • @Moedow
    @Moedow 1 year ago

    How does the guidance parameter define how long it's going to be used, when the guidance start parameter does that already?

  • @tarekramadan1867
    @tarekramadan1867 1 year ago

    I have used kind of the same method, but for an interior architecture scene with images I generated. I will send it.

  • @matbeedotcom
    @matbeedotcom 1 year ago

    I always wondered what those sliders would do-

  • @sirmalof3255
    @sirmalof3255 1 year ago

    Where are you from, Olivio? I could just listen to your accent for hours and hours. It is so cute :)

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Thank you. I'm from Vienna. Well, I'm from Germany, but I live in Vienna :)

  • @BlackMita
    @BlackMita 1 year ago

    What’s with the annotation result preview? I’ve never seen that in 1111 and I don’t get how it relates to the output

  • @niteshghuge
    @niteshghuge 1 year ago

    Hey Olivio, I am trying to install AUTOMATIC1111, but my laptop keeps hanging. Please tell me the system requirements for installing A1111 on a local machine.

  • @The-Inner-Self
    @The-Inner-Self 1 year ago

    Have you experimented with video using the 3d models to walk through the scene and then plug into depth control map?

  • @LouisGedo
    @LouisGedo 1 year ago

    👋

  • @digitaltutorials1
    @digitaltutorials1 1 year ago

    It's interesting, because ControlNet pulls depth out of 2D, but with SD in Blender the depth is 100% calculated, so it should be more accurate to use the SD plugin (which I haven't tested yet, but I assume it is underdeveloped compared to Auto's).

  • @cobr3545
    @cobr3545 1 year ago

    ControlNet uses a rendered depth map, same as Blender. It's not estimated from 2D if you supply one.

  • @facex7x
    @facex7x 1 year ago

    Hey, I don't have "depth" under Model for ControlNet. How did you get that?

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Maybe your ControlNet version is outdated, or you didn't download the depth model.

  • @aronhommer1942
    @aronhommer1942 1 year ago

    Is it also possible to create an output that doesn't look cartoony with this method?

  • @fluffsquirrel
    @fluffsquirrel 1 year ago

    Maybe that's just the model he's using?

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    yes, of course. I just used anime here, because it is easier to prompt for

  • @fluffsquirrel
    @fluffsquirrel 1 year ago

    @@OlivioSarikas Thank you!

  • @stillfangirlingtoday1468
    @stillfangirlingtoday1468 1 year ago

    This is probably a stupid question, but do you think it's possible for the AI to generate the same image but with different lighting? It would be awesome.

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Try rendering the scene first, then use Canny as the control map with that image as the input, and change the daylight description. See if that helps.
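
The relighting trick suggested above works because edge maps are largely lighting-invariant: the Canny control pins down scene structure while the prompt changes the illumination. As a rough illustration of what an edge control map encodes, here is a plain gradient-threshold sketch in numpy. Note this is not the actual A1111 preprocessor, which uses OpenCV's Canny detector with smoothing and hysteresis; the function name and threshold are illustrative:

```python
import numpy as np

def edge_map(img: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return a binary edge map (0/255) from a grayscale float image."""
    gy, gx = np.gradient(img.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)

# Toy scene: a bright square on a dark background. Edges appear only
# along the square's border, not in flat regions -- which is why the
# same control map can constrain renders under different lighting.
scene = np.zeros((64, 64))
scene[16:48, 16:48] = 1.0

edges = edge_map(scene)
print(edges[16, 20], edges[0, 0], edges[32, 32])  # prints: 255 0 0
```
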

  • @stillfangirlingtoday1468
    @stillfangirlingtoday1468 1 year ago

    @@OlivioSarikas Oh, I will definitely try it! Thank you for replying!

  • @pedrodeelizalde7812
    @pedrodeelizalde7812 1 year ago

    Hi, I have ControlNet but my settings are different. For a start, there is no annotator result next to the image. Also, the preprocessor has three options: depth_leres, depth_midas and depth_zoe, and below that the options are Control Weight, Starting Control Step, Ending Control Step, Preprocessor Resolution, Remove Near %, Remove Background %. Anyone know why my settings are not the same as his?

  • @kikeluzi
    @kikeluzi 1 year ago

    Same here... And also... I think it's not working :c

  • @pedrodeelizalde7812
    @pedrodeelizalde7812 1 year ago

    @@kikeluzi It did work for me using depth leres. But i had to play with settings to get it...

  • @kikeluzi
    @kikeluzi 1 year ago

    @@pedrodeelizalde7812, I'll try this one then. Thank you!!! 😁I was using "depth_midas" * edit: I just needed to download a model first... ;u;

  • @RealitySlipTV
    @RealitySlipTV 1 year ago

    clever girl....

  • @smortonmedia
    @smortonmedia 1 year ago

    I wish MidJourney could do something like this... generate depth maps or use a depth map as perspective control

  • @audiogus2651
    @audiogus2651 1 year ago

    It used to come up quite a bit in the weekly chats a few months ago, and they did say it was being looked into.

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Yes, MJ really needs stuff like that ASAP

  • @elarcadenoah9000
    @elarcadenoah9000 1 year ago

    Do you have a video on inserting text with Stable Diffusion XL?

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    Nope, not yet. I could make one

  • @elarcadenoah9000
    @elarcadenoah9000 1 year ago

    @@OlivioSarikas cool bro you are the chosen one

  • @petzme8910
    @petzme8910 1 year ago

    Can I add 1girl in the prompt, if I want the girl to stand in the middle of the street? 😊

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    yes, you can also use multi-controlnet to pose her exactly in a specific position

  • @Rasukix
    @Rasukix 1 year ago

    also worth mentioning you used a different seed for each render

  • @sakifishmam1436
    @sakifishmam1436 1 year ago

    It's now banned. Anyone know any alternatives?

  • @kikeluzi
    @kikeluzi 1 year ago

    I didn't find anything about that. ControlNET was banned? 🤔 How do you know, and where did you find that?

  • @Hazzel31337
    @Hazzel31337 1 year ago

    Meta has a new AI called Segment Anything, which goes nicely with AI-generated images to cut out elements. Maybe worth content for you.

  • @OlivioSarikas
    @OlivioSarikas 1 year ago

    I will have a look at that. thank you

  • @dezenho
    @dezenho 1 year ago

    do you lost your time with ai now ....life time not worth ....you are old ....control it ....you are not happy with this.... pay atencion what your life is now.....

  • @Thagnoth
    @Thagnoth 1 year ago

    Yes, instructor… I shall abandon all my enjoyment of technology… Thank you for showing me your text-only Amish hypnosis technique……
