Simple animations with Blender and Stable Diffusion - SD Experimental

Science & Technology

Generating animations with any 3D software and Stable Diffusion is easy. Explaining how takes a while.
In this Stable Diffusion Experimental tutorial, we'll see how, with the power of anime and g- I mean, with the power of Blender, keyframes and two workflows, we can get good animations without worrying about EBSynth, Adobe After Effects, etc.
All we need is a 3D scene, a way to turn stills from the scene into generated images, and an AnimateDiff / IPAdapter / ControlNet plug and play pipeline.
Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
Workflows:
- Workflow 1, turn 3D keyframe 1 into generated keyframe 1, or turn 3D keyframes 2 to X into generated keyframes 2 to X: openart.ai/workflows/risunobu...
- Workflow 2, turn generated keyframes into animations: openart.ai/workflows/risunobu...
Resources:
- IPAdapter & AnimateDiff pipeline by the one and only Latent Vision, upon which these workflows expand: • Animation with weight ...
- Adobe Mixamo: www.mixamo.com
- LCM Lora Model: huggingface.co/latent-consist...
- AnimateDiff LCM Model: civitai.com/models/326698/ani...
- QRCode Monster ControlNet Model: huggingface.co/monster-labs/c...
- ControlGIF model: huggingface.co/crishhh/animat...
- AnimateDiff nodes: github.com/Kosinkadink/ComfyU...
- More AnimateDiff resources (other models): github.com/guoyww/AnimateDiff...
Timestamps:
00:00 - Intro
01:13 - How it works
02:27 - Setting up the scene in Blender
06:48 - Generating keyframes from 3D keyframes
13:44 - Generating animations from the generated keyframes
26:18 - First Animation
26:44 - Setting up the refiner pass
28:12 - Refined Animation
28:51 - More Examples
30:05 - Outro
#stablediffusion #animatediff #blender #controlnet #mixamo #animation #adobemixamo #stablediffusiontutorial #ai #generativeai #generativeart #comfyui #comfyuitutorial #sdxllightning #3d #2d #illustration #render #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni

Comments: 23

  • @merion297 · 1 month ago

    I am seeing it but not believing it. It's incredible. Incredible is a weak word for it.

  • @emanuelec2704 · 1 month ago

    Fantastic! Keep them coming.

  • @risunobushi_ai · 1 month ago

    Thanks!

  • @M4rt1nX · 1 month ago

    Great as usual, thanks a lot. While using Blender, you can automate the interval of rendered frames by changing the step value: Render ➡ Frame Range ➡ Step
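
    The effect of that Step setting can be sketched in a few lines of Python (the `stepped_frames` helper is hypothetical, for illustration; in Blender's scripting API the equivalent setting is, to my knowledge, `scene.frame_step`):

    ```python
    def stepped_frames(frame_start: int, frame_end: int, step: int) -> list[int]:
        """Return the frame numbers Blender renders for a given
        Frame Range with Render > Frame Range > Step set to `step`."""
        # Blender's frame range is inclusive of frame_end, hence the +1.
        return list(range(frame_start, frame_end + 1, step))

    # Rendering every 10th frame of a 90-frame animation gives 9 keyframes:
    print(stepped_frames(1, 90, 10))  # → [1, 11, 21, 31, 41, 51, 61, 71, 81]
    ```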

  • @risunobushi_ai · 1 month ago

    Thanks for the heads up! I was sure there was a setting somewhere.

  • @gimperita3035 · 1 month ago

    Fantastic stuff! I own more 3D assets than I'm willing to admit, and using generative AI in this way was the idea from the beginning. I can't thank you enough, and of course Matteo as well.

  • @risunobushi_ai · 1 month ago

    Ahah, at least this is a good way to put those models to use! Glad you liked it!

  • @moritzryser · 1 month ago

    dope

  • @pandelik3450 · 1 month ago

    Since you're extracting depth from Blender rather than from photos, you could just use the Blender compositor to save depth passes for all frames to a folder and then load them into ControlNet from that folder.
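
    A minimal sketch of the folder-based loading this comment describes. The `depth_####.png` naming and the `ordered_depth_passes` helper are assumptions for illustration, not part of the video's workflow; Blender's File Output node writes frame-numbered files like this, and a ComfyUI batch image loader consumes them in order:

    ```python
    import os
    import re
    import tempfile

    def ordered_depth_passes(folder: str, prefix: str = "depth_") -> list[str]:
        """Collect frame-numbered depth passes (e.g. depth_0001.png) from a
        render output folder and return their paths sorted by frame number."""
        pattern = re.compile(rf"{re.escape(prefix)}(\d+)\.(?:png|exr)$")
        frames = []
        for name in os.listdir(folder):
            m = pattern.match(name)
            if m:
                frames.append((int(m.group(1)), os.path.join(folder, name)))
        return [path for _, path in sorted(frames)]

    # Demo with fake render output (frame numbers deliberately out of order):
    with tempfile.TemporaryDirectory() as out:
        for n in (3, 1, 10, 2):
            open(os.path.join(out, f"depth_{n:04d}.png"), "w").close()
        names = [os.path.basename(p) for p in ordered_depth_passes(out)]
        print(names)  # → ['depth_0001.png', 'depth_0002.png', 'depth_0003.png', 'depth_0010.png']
    ```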

  • @risunobushi_ai · 1 month ago

    Yeah, I debated doing that, since I was made aware of it in a previous video, but I ultimately decided on going this way because my audience is more used to ComfyUI than Blender. I didn't want to overcomplicate things in Blender, even if they might seem easy to someone who's used to it, but exporting depth directly is definitely the better way to do it.

  • @armandadvar6462 · 14 days ago

    It is so complicated, not easy 😅😮

  • @eias3d · 1 month ago

    Morning Andrea! Cool workflow! Where can I find the LoRA "LCM_pytorch_lora_weight_15.safetensors"?

  • @risunobushi_ai · 1 month ago

    Argh, that's the only model I missed in the description! I'm adding it now. You can find it here: huggingface.co/latent-consistency/lcm-lora-sdv1-5

  • @eias3d · 1 month ago

    @@risunobushi_ai Hehe

  • @arong_ · 1 month ago

    Awesome stuff! Just wondering, how are you able to use IPAdapter Plus style transfer with an SD 1.5 model like you're using? I thought that wasn't possible, and it never works for me.

  • @risunobushi_ai · 1 month ago

    Huh, I've never actually had any issue with it. I tested it with both 1.5 and SDXL when it was first updated and I didn't encounter any errors. The only thing that comes to mind is that I have collected a ton of CLIP Vision models over the past year, so maybe I have something that works with 1.5 by chance?

  • @arong_ · 1 month ago

    @@risunobushi_ai Ok, maybe. I remember Matteo also mentioned in his IPAdapter update tutorial that it wouldn't work for 1.5, but maybe it works for some, and yes, maybe you have some special tool that unlocked it. Regardless, this is great stuff; I'm loving and learning a lot from your tutorials.

  • @dannylammy · 1 month ago

    There's gotta be a better way to load those keyframes, thanks!

  • @risunobushi_ai · 1 month ago

    Yep, there is. As I say in the video, it's either this or using a batch loader node that targets a folder, but for the sake of clarity in the explanation I'd rather have all nine frames shown on video.

  • @BoomBillion · 1 month ago

    AI didn't eliminate graphic designers. It evolved them into 3D designers and AI graphic engineers.

  • @srb20012001 · 1 month ago

    As well as insanely technical node tree composers!

  • @reimagineuniverse · 1 month ago

    Great way to steal other peoples work and make it look like you did it without learning any skills

  • @risunobushi_ai · 1 month ago

    If you're talking about the ethics of generative AI, we could discuss this for days. If you're talking about the workflow, I don't know what you're getting at, since I developed it myself starting from Matteo's.
