Corridor Crew Workflow For Consistent Stable Diffusion Videos
Entertainment
The Ultimate Workflow for Consistent Stable Diffusion Videos. This one's a long one, sorry lol.
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: / enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: / discord
-Instagram: / enigmatic_e
-Tik Tok: / enigmatic_e
-Twitter: / 8bit_e
_________________________________________________________________________
Runway Greenscreen: • Is This The Best Backg...
Davinci Workflow: • Davinci Resolve Workfl...
Davinci Deflicker: • Remove AI video flicke...
Kohya Info:
github.com/bmaltais/kohya_ss
Birme:
www.birme.net
LoraBasicSettings.json:
mega.nz/file/XNAB3KbR#W_-TvUy...
LoraLowVRAMSettings.json:
mega.nz/file/7BJ3EIAC#_QUUWV4...
CORRIDOR CREW
/ @corridorcrew
AITREPRENEUR
/ @aitrepreneur
Install SD: • Installing Stable Diff...
Install ControlNet: • New Stable Diffusion E...
Chapters
0:00 Intro
01:16 Corridor Crew Step Breakdown
02:21 Setting Up Clip
05:31 Removing BG
06:03 Head Tracking
09:57 Training 3D vs. Images
14:30 Installing Kohya (LORA training)
18:10 Training Model
26:49 Stable Diffusion Settings
35:03 Reverse Stabilizing
39:36 Deflickering
41:12 Final Results
Comments: 125
I am happy to see someone who is still exploring all the possibilities on getting the perfect consistent animation. Thank you for explaining everything clearly. Hopefully we will soon have an extension for getting consistent animations on SD.
Always happy to find another person to subscribe to that is making high quality, easy to follow, technical videos on Stable Diffusion!
Thanks for creating this breakdown. I've been wanting to do a deep dive into a way to stylize some of my 3D animations using stable diffusion and didn't know where to even start. I'm subbing!
Bro. You rock. What a great video. Thank you for taking your time to create this, your work is clean. Subscribed!
Love the way you always cover the multiple variables and the frustration or patience they require to master. Almost requesting that video, just a vid of total frustration, cause I know it happens.
Great tip on the fusion render speeding up the overall render. I’m new to resolve and it really did make a world of difference.
I love, love, love your channel. Congratulations for your job!
Man! This one was packed with information
love your analytical approach to getting shit done, great content
Absolutely great video!! I was thinking about using ebsynth but this method seems really fun!! Cheers
Excellent. Thanks for all the description and details.
@enigmatic_e
1 year ago
No problem!
Amazing flow, man. Pretty neat tricks.
great guide, the possibilities are really exciting
This is such a wicked tutorial! Thanks bro!
I love your videos! And thanks for the work in helping us understand how to easily create with hella tools :) Cheers from San Francisco!
@enigmatic_e
1 year ago
No problem! Bay Area, nice! I’m from San Jose but now living in Germany. Cheers
great vid, you added a lot extra from the corridors video well done
A great tip for the captioning sequence when training a LoRA: add "Dudley" to the "Prefix to add to BLIP caption" field and it will apply it to every text file, so you don't have to go in and add it to all of them.
Great video, no complaints. For some people, extra steps might help: use SD to make the 3D renders look less like 3D renders. Different outfits would increase LoRA flexibility IF that was important. So many variables. Again, thanks for sharing all of this.
Hey, just wanted to say thank you for this helpful guide! I followed it step by step (I had some problems with ControlNet; it only gave me one tab instead of multiple tabs) and so far my outcome looks pretty decent! Even without the deflicker effect in DaVinci Resolve Studio!
@enigmatic_e
1 year ago
Hey! To get multiple tabs, go to Settings, then ControlNet; there should be an option to add multiple ControlNets. Change the amount, then click Apply and restart your UI. You should have more after that.
@Daxviews
1 year ago
@@enigmatic_e Wow thanks a lot! Just found it and changed it. I hope you will continue making such great videos :D
Thank you bro!
I think what could work is drawing pupil-less/iris-less eyes, i.e. all-"white" eyes, so that in post-edit you can animate the pupil/iris in the eye socket for more consistency.
Good job
This is so helpful, thanks. Two questions: one: could you pull stills from the cosplay video, alter the stills and then use that for training? Two: is the training only for humans or could I take charcoal drawings I’ve made and train the Lora on drawing style? No figures, just technique and ‘look’
+1 for the 3d tracking tutorial
very useful thanks
thank you!
Thanks man! You're the best! I really need this for a school project I have coming up, and you've been a lifesaver! Did you ever end up making that video about 3D tracking? I need to add my 3D-designed objects to the video. Could you point me to some info on that, please? Or even better, a link to your video if you made it. Fingers crossed! Thank you!
DaVinci Resolve's Magic Mask feature allows you to easily separate objects from their backgrounds.
Great video, super helpful. I have a question: whenever I batch using ControlNet, it only produces one frame from the directory I set, despite having 200 images. Any thoughts on how to fix this?
Hey, just wanted to say your content is great and very informative! I was wondering if you knew how to fix a bad or disfigured looking face from a relatively close distance? I always get weird looking faces while using img2img with controlnet using the canny and control-canny model.
@enigmatic_e
1 year ago
Might be pushing the controlnet too much
3D tracking background is awesome,wish to see a lesson
Awesome workflow, thank you so much for making that kind of video! I'm having some trouble with reverse stabilization. Everything works just fine until I press "Unstretch" on CC Power Pin. Then my footage (the face of my character) loses its scale and becomes too small; it also "cuts" its own frame while moving (it looks like a precomposed object that is cropped because it goes out of frame)... Any idea of what might be going wrong here? Thanks a lot! 🙏
The Flicker Free After Effects plugin is giving me good results. I use the Slow Motion 2 preset and activate motion compensation. I don't like the idea of installing a whole piece of software just for a deflickering effect. If anyone does a comparison between the two to see if there is a big difference, then I might consider it if it's really better 🤣.
@enigmatic_e
1 year ago
I’ll try it and see!
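For what it's worth, the core idea behind most deflicker effects is brightness normalization against a temporal average. A crude Python/NumPy sketch of that idea (dedicated tools like Flicker Free or Resolve's deflicker add motion compensation on top, which this does not):

```python
import numpy as np

def deflicker(frames, window=5):
    """Scale each frame so its mean brightness matches a moving average.

    frames: list of float32 arrays in [0, 1]. A crude stand-in for
    dedicated deflicker plugins; window is the temporal averaging span.
    """
    means = np.array([f.mean() for f in frames])
    out = []
    for i, f in enumerate(frames):
        # Average brightness over a centered temporal window.
        lo, hi = max(0, i - window // 2), min(len(frames), i + window // 2 + 1)
        target = means[lo:hi].mean()
        gain = target / max(means[i], 1e-6)
        out.append(np.clip(f * gain, 0.0, 1.0))
    return out
```

Per-frame global gain like this removes the pumping brightness typical of frame-by-frame SD output, but it cannot fix local flicker, which is where the commercial tools earn their keep.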
I'm amazed at the number of people, such as enigmatic_e, saying "anyways" instead of anyway. I don't know if I'll ever get used to it. (Random comment of the day.)
@enigmatic_e
11 months ago
lol, habit I guess. I've never been a good speaker or writer. My years spent going back and forth between Mexico and the US probably didn't help.
Yes please. Vids on 3D tracking. Thanks man. +]
Amazing! Is there any chance you can do a video about the same flow in davinci resolve instead of after effects?
Runway ML has a handy-dandy AI background removal. I personally haven't tried it yet, but having used Roto Brush 2, I think AI just made roto-ing not a pain in the 4$$ lol
Thank you Thank you Thank you Thank you Thank you Thank you Thank you
For the level of quality that Corridor Crew had in their edit, is it necessary to make the model the same way, or can I get the same quality following your workflow? Because I'm hoping to get models unique to each person I put into it.
3d Tracking video would be cool!
Omg bro! It’s you! 🤯🤩
@enigmatic_e
1 year ago
Yeah bro! 😂
Wouldn't running optical flow tracking on the original video, then applying that as a transform backwards and forwards with blending on the generated video smooth things out? I have no idea if something like that was attempted, or how to actually implement it, but I have a feeling it would be nice :D
I'm working on my top AI channels video to pass on to my subscribers, as I have retired from the field for now... and you have definitely made the list... Just skimmed this video but you definitely go over all sides of it... from Blender to Mixamo to I don't even know some of the sites you're using... you are going deep on it... Great job my friend... Keep up the good work... I definitely recommend your channel to anyone who wants to get into generative AI work...
@enigmatic_e
1 year ago
Thank you good sir. 🙏
@judgeworks3687
1 year ago
I’m one of common sense subscribers (&enigmatic). Have learned sooooo much from both of you, thank you.
@iamYork_
1 year ago
@@judgeworks3687 thank you... I still have a lot of knowledge to pass on but am currently just bound by too many professional time constraints to upload weekly, especially on the tutorial side of it all, as a typical tutorial for me can take between 20 and 40 hours to create... Enigmatic has the crown right now in my opinion... For both beginners and more experienced users who dabble in other software... He blends them all together... Great person and talented as well... When anyone asks me about other channels to check out when it comes to generative AI for creative purposes... Enigmatic is the first channel I always recommend...
Is there a way to save your Stable Diffusion settings? Like the Noise multiplier for img2img? Also, thanks for this in-depth tutorial :D
Great video man! 👏Full of super useful info.👍 Excellent tip about stabilizing the face.👌Thanks a million.🙏🙏🙏 Still I'm facing a process issue. How do you get the images out of img2img or deforum to have an easily "keyable" background? Even when I put a sequence with flat green background as input, Stable Diffusion (img2img or deforum) draws elements on the background and apply a dull color, I can't key it afterwards in AE. I tried many prompts but with no luck. At 1:44, we can see that your output image has a plain green background, what smart sorcery did you use to achieve this?
@enigmatic_e
5 months ago
This video is quite outdated unfortunately. A lot of techniques are not necessary anymore with Animatediff. I have two videos about it on my channel.
@VouskProd
5 months ago
Thanks, I will check that. Comfy seems great, but it's a new world to install and learn (it looks too time-consuming for me right now during my current project 😅)
@enigmatic_e
5 months ago
Yea, totally get that. I wouldn't switch if you're in the middle of something. @@VouskProd
@VouskProd
5 months ago
@@enigmatic_e Yup, but still, the more I look at ComfyUI, the more it draws me in, even in the middle of a project.😅 Anyway, you've already saved my life with the face zoom trick, which worked perfectly in my case 🙏And for my not-so-green background on SD output, well, rotobrush2 is my friend 🙂
Yes, please talk about 3D tracking.
About the LoRA training: it doesn't matter whether the artwork of the character you're trying to train is in different styles or not; it will all translate over.
@enigmatic_e
1 year ago
Thanks! Good to know.
I had the same bug with DaVinci Studio; it was either 5s or 2 hours of rendering. I was mad. Thank you for the tips!
@enigmatic_e
1 year ago
No problem, it was driving me crazy too!
Using Mocha Pro is a cool idea.
Hi, I get this every time I try to generate an image from text: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. Any solutions? (I have an iMac with 64GB RAM, btw.) Thank you very much.
When I go to the GitHub page, I can't seem to find the commands you copied into PowerShell when installing Kohya (time 16:15). Has it changed, or am I confused?
Where should these LoRA settings files be located, and how do I use them? It's not very clear. Thank you in advance!
SD makes my background different; because of this I can't remove the background in After Effects. What could it be?
What should I change in your config if using an RTX 4090 (24GB VRAM) and a 16-core CPU?
@enigmatic_e
1 year ago
Not sure what would change. Maybe you could make the resolution bigger in Stable Diffusion.
Remember to redirect newcomers to the introductory videos you already have, for example the most recent one on how to install SD.
When I added the green screen video and did the tracking in AE, and then exported it as JPEG, mine still shows the green screen. How did you remove that so it comes out with a black background instead?
@enigmatic_e
1 year ago
You would have to use a color key to remove the green.
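If you'd rather key out the green outside of AE, a basic color key is only a few lines of NumPy: discard pixels whose green channel dominates red and blue. A rough sketch; the threshold is an assumption you'd tune per clip, and real keyers also handle spill and soft edges:

```python
import numpy as np

def key_green(frame: np.ndarray, threshold: int = 40) -> np.ndarray:
    """Replace strongly green pixels with black in an RGB uint8 frame.

    A pixel is keyed out when its green channel exceeds both red and
    blue by more than `threshold`.
    """
    # Cast to a signed type so channel differences don't wrap around.
    r = frame[..., 0].astype(np.int16)
    g = frame[..., 1].astype(np.int16)
    b = frame[..., 2].astype(np.int16)
    mask = (g - r > threshold) & (g - b > threshold)
    out = frame.copy()
    out[mask] = 0
    return out
```

Exporting frames keyed like this gives you the black background directly, instead of keying every frame again in AE.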
would love to see how you use blender :D
I have the model "sd15_hed.pth", but for the preprocessor, when using it I don't see the "hed.yaml". Any suggestions, anyone?
Is there a way you can create a realistic image from your own background and turn it into a 3D image? I'm new to all of this.
@enigmatic_e
1 year ago
Mmm I don’t think that’s possible at the moment.
Okay, and just to know: is it no longer possible to use the method from their original tutorial?
please make the AE+Blender 3d tracking video
@enigmatic_e
1 year ago
Working on it! 😉
Amazing stuff. Unfortunately, I don't have Nvidia. So I can't try anything that you share. Do you have some suggestions for people who use an AMD card? Thank you in advance.
@enigmatic_e
1 year ago
You might have to go with Google Colab and use it through there. I want to get into that and try to make a video for people in your situation.
@tanyasubaBg
1 year ago
@@enigmatic_e thanks it would be great
@judgeworks3687
1 year ago
This woman's tutorials are great too, and she covers using RunPod and how to run SD when you have old or low-spec computers (she doesn't run SD on her own computer). I don't know if the LoRA training works, but it seems like it would… kzread.info/dash/bejne/X2GOkpWwkqfWj9Y.html
@BrunodeSouzaLino
1 year ago
SD should work with AMD cards with ROCm support and PCIe 3 atomics. Don't expect much in the way of support, as most people think CUDA is the only framework that exists.
Could you tell me your PC configuration?
@enigmatic_e
1 year ago
I have a 3080 10GB.
Can this work for Mac?
For a better result you should have used the main prompt to describe what you want from MHA.
@enigmatic_e
9 months ago
Thank you for the advice. This video however is quite outdated now. There’s different methods that give way more consistent results now.
How did you get multiple tabs for controlnet ??
@enigmatic_e
1 year ago
Go to Settings, then ControlNet, and I think there's a setting to add more ControlNets.
@musyc1009
1 year ago
@@enigmatic_e Got it! Thanks for the instructions, and keep up the good work with the vids, you helped A LOT
Can you do an updated video on WarpFusion? Their new version is much better and way smoother!
@enigmatic_e
10 months ago
I know, I was hired to help with it 😁
@NewMateo
10 months ago
@@enigmatic_e Ahh sorry! 😅 Well you did an incredible job! that warp fusion tech is crazy good!
@enigmatic_e
10 months ago
@@NewMateo 😂 all good. Will probably do an updated tut soon
why don't you use deforum for this?
@enigmatic_e
1 year ago
Does it give different results?
@RHYTE
1 year ago
@@enigmatic_e It should give more consistency because the last frame is fed in to generate the next. However, for me it doesn't seem to work as well with ControlNet at the moment.
I can't get LoRA to work; the installation guide is completely different now on GitHub.
@GoodguyGastly
1 year ago
Same here.
@AlinkBee
1 year ago
@@GoodguyGastly x3
Nowhere in the description do you mention how to download the JSON files.
@enigmatic_e
1 year ago
Under "LoraBasicSettings.json:" there is a link to download it.
Put up a pastebin or something for the prompts, man.
Is there any real-time software that can implement AI technology like this?
@enigmatic_e
1 year ago
Not at the moment. Runway is getting close
@user-ze5jk3uc7o
1 year ago
@@enigmatic_e Thank you very much; I'm looking forward to a real-time tool. I think when it can be used live, it should be very interesting.
so complex😅😅
@enigmatic_e
1 year ago
Sorry about that 😅
I subscribed to the Corridor Crew and it didn't go into as much depth as what is being said here. Not to discourage anyone, but you may not find the answers you seek in the subscription.
@enigmatic_e
1 year ago
Do you mean you did the paid subscription with them?
@bigdaveproduction168
1 year ago
Yes. I know what you mean; now, with the evolution of Stable Diffusion, Corridor's tutorial seems to be obsolete.
This whole workflow is beyond most budgets. I don't think most small studios or individuals have the know-how and funds to create their own AI algo specific to a curated dataset of expected results, then have enough computing power to train said dataset to satisfaction in a timely manner, record video with the correct settings and repeat a series of complicated conversion steps and cleaning on a frame by frame basis using several pieces of software until the whole process is done. It's important to note that the vast majority of artists are not technical people and know very little, if any programming, even if said programming is related to their craft. Couple that with the fact that SD is in constant development and has non-existent documentation and you have a workflow which would be slower than doing the whole thing yourself to the same level of quality (keeping in mind most of the cleaning you have to do in the outputs will be already integrated in the result by the animator).
It looks bad
@enigmatic_e
1 year ago
You look amazing ❤️
@aminebelahbib
1 year ago
@@enigmatic_e I know that but thanks ♥️
I can't help but feel bad for all the animators in Japan who make less than minimum wage. I think they will be replaced by AI in the next 10 years.