ComfyUI: Motion Director. Training Motion LoRAs for AnimateDiff!

I know y'all wanted it ( ̄︶ ̄*))
Workflow:
github.com/C0nsumption/Consum...
Motion Director allows you to train motion LoRAs for AnimateDiff, opening up a whole world of possibilities for film, animation, and more. For more information, read the original author's repo:
github.com/ExponentialML/Anim...
Download Stable Diffusion checkpoints here:
civitai.com/models
Download AnimateDiff models and LoRAs here:
huggingface.co/guoyww/animate...
Link to example input footage:
drive.google.com/drive/folder...
Socials:
x.com/c0nsumption_
/ consumeem

Comments: 33

  • @mhfx · a month ago

    yess I love this, great tutorial

  • @rusch_meyer · 4 months ago

    Thank you so much for this!

  • @Gabriecielo · 4 months ago

    Clean and detailed, makes me confident enough to try it myself today. Thanks a lot!

  • @c0nsumption · 4 months ago

    Really happy to hear that. That was the goal. To the point, no fuss. No extra detail. No smoke and mirrors. Just: “here’s Kijai’s workflow, here’s how to jump in.” We can worry about all the nuances and stuff later 🙏🏽

  • @Gabriecielo · 4 months ago

    @c0nsumption It kept crashing every time after training for around 1 min. Maybe training consumes more VRAM than usual tasks? Anyway, still good to try and learn.

  • @luclaura1308 · 4 months ago

    Great tutorial!

  • @Elliryk_ · 4 months ago

    great tutorial my friend 🔥

  • @c0nsumption · 4 months ago

    My dude 🙌🏽 Thanks for being here :) Happy you enjoyed 👍🏽

  • @francaleu7777 · 4 months ago

    great! thank you

  • @ryanontheinside · 4 months ago

    Thanks bro! If you feel like making a follow-up video, I would watch it! More on physics

  • @Artof3drendering · 4 months ago

    Top! Thanks

  • @elowine · 4 months ago

    This is really cool! As a 3D artist/animator it makes me wonder if it's possible to use basic rendered animations as input to train. I wonder what kind of footage would work best for that, like high contrast, lots of lines, etc., to maybe make the creation of those LoRAs a lot quicker. Or maybe it prefers real-world footage. Last year I played with Deforum, which can use camera movements exported from Blender, and at the time that worked pretty well. This LoRA tech seems a few steps back in regards to workflow and control, but of course it's a different tech. And I only got glitchy LSD footage out of Deforum, but that might have improved since.

  • @ronnykhalil · 4 months ago

    Dooooood

  • @DemzOneMusic · 4 months ago

    Hey, great vid! Just curious if these will work with SDXL, as I'm having trouble getting any motion LoRA to work with SDXL?

  • @versuspl434 · 2 months ago

    Hey man, I find your tutorials the easiest to understand of all the ComfyUI tutorials on YouTube, keep it up! I was wondering, do you sell any courses on ComfyUI? Or is there any way I could pay you for an hour to help me fix an issue with generating images, or maybe you have a friend with equal or close knowledge I could pay to teach me?

  • @bkdjart · 2 months ago

    Awesome tutorial! Can you explain how to add more than one training video?

  • @prontocgtutor8163 · 4 months ago

    Awesome tutorial, thanks! Do you think multiple LoRAs of the same video could be trained on adjacent chunks, then all used in one long animation, chained together with their weights animated and blended to form a single coherent animation?

  • @voxyloids8723 · 4 days ago

    How do you train a motion module for AnimateDiff?

  • @voxyloids8723 · 3 months ago

    Thank you so much. Do I understand correctly that the AI takes a video with motion as a reference? For example, if I want to train a physics-simulation LoRA, I can simulate an example in 3D software and feed it in, but how does AnimateDiff understand what should be physically animated in the shot? Do you plan to make another video on using pretrained AnimateDiff LoRAs in img2vid? I also want to create a rotation LoRA that will keep object proportions.

  • @FightClubGarmz · 4 months ago

    Great video, thank you! Instead of a prompt, can you use a reference image?

  • @c0nsumption · 4 months ago

    ? 🤔

  • @EXO57 · 4 months ago

    Hi, thank you for the video! I get an out-of-memory error (torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory) with 12 GB of VRAM. Do you know if it's possible to train with 12 GB, and if so, how to fix that?

  • @c0nsumption · 4 months ago

    Try lowering your resolution for training, e.g. train at 512 × 384 or some other resolution under 512 × 512.
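
A rough illustration of why lowering the resolution helps: the UNet's activation memory grows with the number of latent pixels, so a smaller training resolution directly shrinks the dominant VRAM term. The numbers below are illustrative ratios, not measurements; the 8× factor is the usual Stable Diffusion 1.5 VAE downscale.

```python
# Illustrative only: activation memory scales roughly with latent pixel
# count, so 512x384 needs about 75% of what 512x512 does for that term.
def latent_pixels(width: int, height: int, vae_factor: int = 8) -> int:
    # The SD 1.5 VAE downsamples by 8x in each spatial dimension.
    return (width // vae_factor) * (height // vae_factor)

base = latent_pixels(512, 512)
for w, h in [(512, 512), (512, 384), (384, 384)]:
    print(f"{w}x{h}: {latent_pixels(w, h) / base:.0%} of the 512x512 latent size")
```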

  • @appdeveloper3895 · 4 months ago

    Hi, thank you for the info... I am getting an error: "Error occurred when executing ADMD_CheckpointLoader: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead." What could it be?

  • @c0nsumption · 4 months ago

    Did you install the AnimateDiff v3 models and their associated adapter LoRA?
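
That `'NoneType' object has no attribute 'seek'` failure typically means torch.load received no file object at all, i.e. the loader could not find the checkpoint. A quick sanity-check sketch, assuming a default ComfyUI layout and the v3 filenames from the guoyww/animatediff page linked above; adjust the paths to your install:

```python
# Minimal check that the AnimateDiff v3 files exist and actually load.
# Paths are examples for a default ComfyUI layout, not guaranteed.
from pathlib import Path
import torch

candidates = [
    Path("ComfyUI/models/animatediff_models/v3_sd15_mm.ckpt"),  # motion module
    Path("ComfyUI/models/loras/v3_sd15_adapter.ckpt"),          # adapter LoRA
]
for p in candidates:
    if not p.is_file() or p.stat().st_size == 0:
        print(f"missing or empty: {p}")
        continue
    state = torch.load(p, map_location="cpu")
    print(f"ok: {p} ({len(state)} keys)")
```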

  • @MrSaddydaddy · 4 months ago

    @c0nsumption Same error here, AnimateDiff v3 and adapter in place. I've read that new versions of ComfyUI don't need xformers anymore, but it is used to train the LoRA, is that true? :) I tried installing Comfy from scratch, portable version. First few lines of the error:

    Error occurred when executing ADMD_CheckpointLoader:
    No operator found for `memory_efficient_attention_forward` with inputs:
        query : shape=(1, 2, 1, 40) (torch.float32)
        key : shape=(1, 2, 1, 40) (torch.float32)
        value : shape=(1, 2, 1, 40) (torch.float32)
        attn_bias : <class 'NoneType'>
        p : 0.0
    `flshattF` is not supported because:
        xFormers wasn't build with CUDA support
        dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    `tritonflashattF` is not supported because:
        xFormers wasn't build with CUDA support
        dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
        triton is not available
        requires A100 GPU
    `cutlassF` is not supported because:
        xFormers wasn't build with CUDA support
    `smallkF` is not supported because:
        xFormers wasn't build with CUDA support
        max(query.shape[-1] != value.shape[-1]) > 32
        unsupported embed per head: 40
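
The decisive lines above are the repeated `xFormers wasn't build with CUDA support`: the installed xformers wheel has no CUDA kernels, so every attention backend is rejected. A minimal diagnostic sketch, assuming a CUDA GPU is present; if it raises, the usual fix is reinstalling xformers against your torch build (e.g. `pip install -U xformers`):

```python
# Sketch: verify xformers' memory-efficient attention runs on the GPU.
# Shapes mirror the failing call from the log; fp16 is used because the
# flash kernels reject fp32, as the log itself shows.
import torch
import xformers.ops as xops

q = torch.randn(1, 2, 1, 40, dtype=torch.float16, device="cuda")
out = xops.memory_efficient_attention(q, q, q)  # (batch, seq, heads, dim_per_head)
print("xformers CUDA attention OK:", out.shape)
```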

  • @MrSaddydaddy · 4 months ago

    @c0nsumption Still cannot run training: doing something very wrong, tried reinstalling... anyone had this kind of error?

  • @MrSaddydaddy · 4 months ago

    Did I miss something? :) The previous tutorials, right? Sorry if so, I will check everything.

  • @elifmiami · a month ago

    I have the same problem. Anyone fixed the issue?

  • @elowine · 4 months ago

    Oh, and a question, sorry haha. The resulting temporal LoRA, is that linked to the prompt that's used to generate the video? Or is the LoRA just the motion info? I tried to alter the prompt after one run, and now the whole workflow restarts again. I was under the impression that the motion LoRA can be used without retraining it.

  • @c0nsumption · 4 months ago

    This workflow is for training LoRAs. If you want to use the motion LoRAs without retraining, you would use them in an AnimateDiff workflow.
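
For anyone wondering what using a trained motion LoRA looks like outside ComfyUI, here is a minimal sketch with diffusers' AnimateDiffPipeline. This is not the workflow from the video: the repo ids are the public AnimateDiff v3 adapter and SD 1.5, and the LoRA path is a placeholder for whatever your training run produced.

```python
# Sketch, not the video's ComfyUI workflow: apply a trained motion LoRA
# via diffusers. "output/my_motion_lora.safetensors" is a placeholder.
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("output/my_motion_lora.safetensors", adapter_name="motion")
pipe.to("cuda")

frames = pipe("a boat sailing on the ocean", num_frames=16).frames[0]
export_to_gif(frames, "motion_lora_test.gif")
```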

  • @elowine · 4 months ago

    @c0nsumption Thanks!!