Super Realistic Pictures with RealVisXL

In this video, we will review the new SDXL fine-tuned model, RealVis XL!
Although still in beta, the new model generates super realistic pictures; quality is way above the previous Realistic Vision model for SD 1.5. Let's test it together!
🤩 Get 20% more credit on Diffusion Hub using the PROMO CODE: LAURA20
💬 Social Media:
[Discord] / discord
[Patreon] patreon.com/IntelligentArt?ut...
[Instagram] / lacarnevali
[TikTok] www.tiktok.com/@lacarnevali?i...
____________________________________________________________________
🤙🏻 Learn More:
/ membership
/ lauracarnevali
📌 Links:
DiffusionHub (try it for FREE, copy the full link): diffusionhub.io?fpr=laura17
RealVis XL Model: civitai.com/models/139562/rea...
ADetailer: github.com/Bing-su/adetailer
00:07 Introduction to RealVis
00:30 Diffusion Hub
01:00 What model and VAE to use
02:39 Generate the first realistic picture
03:38 Generate the same picture using different seeds
04:13 Generate the same picture with different aspect ratios (portrait and landscape)
05:19 Generate a realistic landscape
05:51 Generate a portrait
06:52 ADetailer to improve faces/hands
#aiart #stablediffusion #generativeart #stabilityai #stablediffusiontutorial
#sdxl #diffusionhub

Comments: 21

  • @twri128 · 9 months ago

    @LaCarnevali First, thanks for producing excellent videos on Stable Diffusion! Issues with face details will most often be fixed when upscaling, as you say, but can be reduced if you stick with a resolution close to 1024^2 pixels. The latent image is 1/8th of the pixel image, and that is sometimes simply not enough for a small eye or mouth to generate properly. These resolutions are also mentioned in the paper you are referencing; Stability AI has provided the community with a subset of those in the table in the paper: 1024x1024 (1:1), 1152x896 (1.28:1), 896x1152 (0.78:1), 1216x832 (1.46:1), 832x1216 (0.68:1), 1344x768 (1.75:1), 768x1344 (0.57:1), 1536x640 (2.4:1), 640x1536 (0.42:1)
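The resolution buckets listed in the comment above can be turned into a small helper. The sketch below is purely illustrative (the list and function names are my own, not part of any library): it picks the SDXL-native bucket whose width/height ratio is closest to a requested aspect ratio.

```python
# SDXL-native resolution buckets (width, height), as listed in the comment above.
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def closest_bucket(target_ratio):
    """Return the bucket whose width/height ratio is nearest to target_ratio."""
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target_ratio))

# A 16:9 landscape request maps to 1344x768 (~1.75:1),
# and a square request maps to 1024x1024.
```

Generating at one of these bucket sizes instead of an arbitrary resolution keeps the latent grid at the sizes the model was trained on, which is the point the commenter is making.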

  • @mostafamohamed-jk5jk · 9 months ago

    The accent ❤❤❤❤

  • @Vestu · 9 months ago

    Love your channel and your Italian accent 😊

  • @jamesbriggs · 9 months ago

    Super useful thanks :)

  • @kevinehsani3358 · 9 months ago

    Thanks for the video. I take it I can download the checkpoint and VAE to my Stable Diffusion install and try it there? I've had some recent problems when I use ControlNet and was wondering if you are getting them too, and whether the updates are causing them. I have everything at the latest version. I get this error no matter which control type I use in ControlNet (1.1.14). For example, for "tile" with model "control_v11f1e_sd15_tile [a371b31b]" I get the same error as others: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)". Thanks for any feedback.

  • @LaCarnevali · 9 months ago

    Hi! ControlNet for SDXL is not yet integrated within A1111 - you should try ComfyUI if you want to try it :)
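For context on the error in the question above: SDXL text embeddings have 2048 channels, while SD 1.5 ControlNet layers expect 768-dimensional inputs, so the product of a 77x2048 matrix with a 768x320 weight matrix is undefined. A minimal pure-Python sketch of the shape rule (the `matmul_shape` helper is hypothetical, just to illustrate the mismatch):

```python
def matmul_shape(a, b):
    """Shape of a matrix product a @ b; raises if the inner dimensions differ."""
    if a[1] != b[0]:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied "
            f"({a[0]}x{a[1]} and {b[0]}x{b[1]})"
        )
    return (a[0], b[1])

# SDXL embeddings (77x2048) against an SD 1.5 layer (768x320): 2048 != 768,
# so this raises ValueError, mirroring the RuntimeError in the question.
# SD 1.5 embeddings (77x768) against the same layer multiply fine -> (77, 320).
```

This is why mixing an SDXL checkpoint with SD 1.5 ControlNet models fails regardless of which control type is selected.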

  • @kevinehsani3358 · 9 months ago

    @@LaCarnevali I used it perfectly fine for weeks

  • @ekkamailax · 5 months ago

    Is it possible to fine tune this model using the same techniques as your previous tutorial?

  • @LaCarnevali · 5 months ago

    Yes, you can. The training will be slower, so you'll want to use a GPU, and you'll need to apply some minor adjustments, like ticking the SDXL model option.

  • @AliKhan-vt3uk · 8 months ago

    ComfyUI is not working on Google Colab. Can you please make a video on that? How can I use ComfyUI with any other cloud? A complete guide 😢

  • @LaCarnevali · 8 months ago

    If you're running on Windows, you can install it locally: kzread.info/dash/bejne/pX2fxKahmKabmbw.html I'll have a look into Colab :)

  • @DrOrion · 9 months ago

    Do one on hand fixing please.

  • @LouisGedo · 9 months ago

    👋

  • @user-io8gh4yi4s · 6 months ago

    Please make a video on Kohya LoRA training for a face on Mac Apple Silicon, Laura.

  • @LaCarnevali · 6 months ago

    I have a video, but you cannot train on a Mac; you'll need to use an external GPU, e.g., Colab/ThinkDiffusion/DiffusionHub.

  • @wtfchoir · 7 months ago

    Why are you super cute?

  • @asphoarkimete9500 · 5 months ago

    Congratulations, you look great! I am currently using this model:

        from diffusers import StableDiffusionPipeline

        model_id = "SG161222/Realistic_Vision_V6.0_B1_noVAE"
        # Initialize the Stable Diffusion pipeline for image generation.
        pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=token)
        pipe.to(device)

    Do you think the RealVis model is newer and better than "Realistic_Vision_V6.0_B1_noVAE"? I saw your video about the custom trained model; how can I use a Hypernetwork with this function: "StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=token)"?
