Animation with weight scheduling and IPAdapter
Science and technology
About time we talked about animations again! I just released new nodes for IPAdapter and ComfyUI Essentials that make scheduling IPAdapter, prompts and ControlNet very easy and efficient.
Workflows: f.latent.vision/download/sche...
Github sponsorship: github.com/sponsors/cubiq
Support with paypal: www.paypal.me/matt3o
Twitter: / cubiq
My Discord server: / discord
Banodoco Discord server: / discord
For the LCM model you use either the beta one: huggingface.co/wangfuyun/Anim...
or the final version: huggingface.co/wangfuyun/Anim...
00:00 Intro
01:10 Prep the keyframes
04:04 Scheduled weights
13:55 Scheduled prompt, IPAdapter and ControlNet
Comments: 170
This is history in the making.
@latentvision
2 months ago
damn, I'm so old already?!
@RetzyWilliams
2 months ago
Although you will still see flicker and issues at higher resolutions; these are very simple examples.
@treedruids5776
2 months ago
@@RetzyWilliams It's still really great groundwork for the potential of what others can do with this tool.
Someone should be paying this man.
@latentvision
2 months ago
LOL, I agree! :D
Can't thank you enough for your contributions to the field. You are truly a genius!
@latentvision
2 months ago
one "thank" is enough :)
Thank you for all your hard work, Brother! Your contributions to this community have helped to elevate my content so much. I can't thank you enough.
When Matteo speaks, I listen👌👌
@MerajKhan-dh3wy
2 months ago
Kindly make a video about clothes and garments on models.
I've never clicked so fast on a YouTube thumbnail!
@latentvision
2 months ago
🤣
Amazing work Matteo as always. Proud to share italian roots with some talented guys like you.
An absolutely awesome masterclass from Maestro Latente!!... so many great tips that I cannot thank you enough!!
Tonight, playing with a workflow, I found I could get someone to (kind of) walk by putting images in the right order. This kind of baffled me; then I sat down, put the TV on, and saw this. Thank you so much for showing me what my workflow is telling me is possible. Many thanks for all your contributions.
I love you SO SO MUCH! Been waiting for this tutorial since I saw your post last week hahaha Thank you thank you
thank you for this helpful tutorial
OMG, this is a great job, thank you so much!
Thanks Matteo for this great topic, which I am not at all ready for (yet!)
Thank you, I learn new stuff from here, all the love for you
Incredible! ThankU❤
These videos are great!
This looks totally mind blowing! Thanks for sharing! Would love to watch a breakdown that is suited more for beginners, especially for the later part.
Seems like a great base to use when upscaling video. Upscale the key frames but also utilize the original animation for controlling pose or whatever. Very cool technique
Great stuff. Thank you very much for your knowledge. Have a good day!
This is a great tutorial. As a newbie to ComfyUI I found there were a lot of additional things I needed to download that weren't mentioned, such as CLIP Vision 😉
It's amazing! Very useful video, thank you
I love this guy!
This is super, Matteo. Why are you so good at this?
Matteo is the best!
There needs to be a frame that is perfectly from behind. Otherwise you'll get that crazy Popeye-jaw.
Matt30! Many thanks!
Thanks!
Wonderfull THX
The best channel for learning comfy.
I keep getting this error: "Prompt outputs failed validation - IPAdapterBatch: - Exception when validating inner node: tuple index out of range". EDIT: I did an "update all" and now this error is gone, but I got a new one: "Error occurred when executing IPAdapterBatch: cannot access local variable 'face_image' where it is not associated with a value". If I bypass the 2nd IPAdapter node, it works, so it's something it doesn't like with that node. EDIT: The problem was the IPAdapter Weights node was set to "full batch" instead of alternate, so the 2nd IPAdapter wasn't getting any images.
@digitalflick
2 months ago
same
@MassimilianoMitch
2 months ago
same problem with "face_image" error. Thanks for the solutions in edit.
@darrynrogers204
2 months ago
Thanks for the solution!
❤
Ugghh i love your brain sir ...
@latentvision
2 months ago
I knew it! the zombie apocalypse has started!
Cool! I need a lot of animation frames, so image cherry picking and manual keyframing just doesn't cut it, but this method works great for shorter and detailed animations. Suggestion: color code the nodes so it would be easier to follow. With all grey nodes, it is hard to follow, especially on mobile phone. I hope we will see more animation stuff soon;)
you are CRAZY(in the good way), OMG
Very nice! Works PERFECTLY. If you want to use the V3 motion model, simply use the HyperSD LoRA, 8 steps.
Nice, thanks! Do you think vid2vid is coming soon?
Hello Matteo, thank you for the great tool and tutorials! I have a question. I am unable to use this technique while maintaining the characteristics of the image I am using; for some reason the result comes out different from the input I created. What is the parameter that controls how much of the input image is used? Can I force it to just follow it? Cheers
will try this with my drawings🔥
My God... Matteo, my master, eternal respect to you; I am shocked by your knowledge. I just hope my 1080Ti can handle this xD Thanks once again!
I was the 1337 view. Must be a sign! (Thanks Matteo, for your great work to the community!)
Also, I cannot find the controlGIF.ckpt file.
@S.Korolev
1 month ago
any luck with it?
What do you think about adding an image interrogator after the last Images Batch Multiple and connecting it to the Prompt Schedule? It would require string formatting, but I guess it could work...
This is wonderful. Thank you always. This only works for SD1.5 models, correct?
@latentvision
2 months ago
there are a couple of SDXL models for AnimateDiff, but they don't work very well
Cool
Thank you for the in-depth video! But where do I get the ControlGIF model for the ControlNet node?
🐐🐐🐐🐐🐐🐐
Hi Matteo! Your new video is so great! I want to ask what is your PC specs (CPU, GPU, RAM)? Thanks a lot for these videos, I learned a lot!
@latentvision
2 months ago
AMD 59xx, 64GB RAM, NVIDIA 4090, running on Linux
Oh man, Matteo, thank you so much, this is what I have been looking for! Could I possibly apply batches of masks to make an animation? Like, I get a sequence of water movement, get masks of the sequence, and connect the masks to the attention mask to create other objects moving, mimicking the water movement.
@latentvision
2 months ago
yeah that would work too
My utmost gratitude man, what you're doing is insane!
I love the workflow! Is there any chance to get less movement in your second example? Like, can I tell the AnimateDiff node to decrease the movement from frame to frame?
@latentvision
1 month ago
you can run it slower by increasing the number of frames
This is brilliant, thank you for sharing! Is it possible to apply a style LoRA in the workflow? The IPAdapter gets the look pretty close, but if a custom style LoRA could be applied in conjunction with the IPAdapter, that would push things to a whole new level.
Hey Matteo, thanks so much for this. Is there a workflow for creating such consistent character images like you did with the blonde girl?
@latentvision
1 month ago
as I said in the video it's mostly prompting, but if you add an IPAdapter of the first generation, the subsequent ones will be very close to it
@simonrobson615
1 month ago
@@latentvision Thank you, I should have watched the video before asking the question :) Your videos and the time you spend developing these nodes are of huge benefit to the open AI community, thank you!
Thanks as always... I have a question: can we make it a looping video?
@latentvision
2 months ago
there's a way to make kinda looping videos in animatediff, check the main repository
Thanks for the further development! Question: as it stands now, the weighting and scheduling with IPAdapter can only be used to stitch images together into a video, working in a t2v workflow. I'm wondering if there is a way to wire the nodes for video-to-video, so that I can load a video and use IPAdapter weights + prompt travel to influence the video with different images at different times throughout the video.
@latentvision
19 days ago
yes of course, a video is just a series of images, it works just the same
@calvinherbst304
14 days ago
@@latentvision I communicated that a bit poorly; what I meant was that I'm looking for a way to use both Load Video and the weighted IPAdapter, so that different parts of the video are influenced by the different IPAdapter images at different times, instead of building the output video directly from the IPAdapter inputs.
Awesome!! I enjoy all of your videos
Very informative tutorial. When I run the fire/water workflow I get an error from the prompt scheduler: missing 4 required positional arguments: pw_a, pw_b, pw_c and pw_d. Please suggest a solution. Thanks.
I'm getting a 'TypeError: can't multiply sequence by non-int of type float' when I try your workflow?
6:54 Hey Matteo, I extracted frames from a video and placed them into a folder. Instead of using the 'Load Image' node one by one, is there any node that automatically loads images from a folder in order? The file names are in order, so it could load them up automatically. Thank you always.
@latentvision
1 month ago
check the node "load images path"
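To make the idea concrete outside ComfyUI, here is a minimal sketch of what such a node does: collect image files from a folder sorted by filename. The helper name is hypothetical, not the node's actual code; zero-padded names like 0001.png keep lexicographic order correct.

```python
# Hypothetical helper illustrating a "load images from path" node:
# list image files sorted by filename, so zero-padded names
# (0001.png, 0002.png, ...) come back in animation order.
from pathlib import Path

def ordered_frame_paths(folder: str) -> list[Path]:
    """Image files in the folder, sorted lexicographically by name."""
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() in exts)

# each path can then be opened (e.g. with Pillow) and stacked into a batch
```

If the frame numbers are not zero-padded (1.png, 10.png, 2.png), a numeric sort key would be needed instead.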
How do we find that Discord server you mentioned at the beginning?
@latentvision
2 months ago
try this discord.gg/WdpGf2tx
Always amazing!
you are geniuses
This is INCREDIBLE. Thank you!
The URL in the notes for the GIF ControlNet model does not lead to that model, unless these other motion models are the same thing by a different name.
@latentvision
2 months ago
just rename it
Great as always!!! 🎉
Hmm, when trying to use your workflow I'm getting this error: "When loading the graph, the following node types were not found: IPAdapterBatch, IPAdapterUnifiedLoader, IPAdapterWeights, IPAdapterNoise. Nodes that have failed to load will show as red on the graph." I've updated ComfyUI_IPAdapter_plus, deleted and re-cloned it, and deleted and re-downloaded it through the Manager, and I continue to get the same error each time. No module named "node helpers" is why it fails to import.
@elowine
2 months ago
Is your ComfyUI up to date? That sometimes messes things up for me. You can try a git pull inside the ComfyUI folder and after that try to update IPA again.
@siegekeebs
2 months ago
@@elowine I'll give that a try, I haven't updated in a few weeks
I can stop F5-ing now 😄 I'm 300 images in, and still no back-of-the-head image. I love the tech, I hate the prompting 😅
@luman1109
2 months ago
use the composition IPAdapter
Outstanding as usual, thanks for the great work!
Awesome work
@latentvision
2 months ago
just doing my part
Thank you!
Hey Matteo, thanks for the amazing job you're doing. Following this workflow I get an error: "only integer tensors of a single element can be converted to an index". This happens when I turn the IPAdapter Batch nodes' "weight" widgets into inputs and connect them to the IPAdapter Weights node output. Somehow, if I turn those weight inputs back into widgets, the sampler is able to process them, but of course I don't get the desired result. Do you know what this might be related to?
@latentvision
28 days ago
please post an issue on the official repository adding workflow and complete error message
controlGIF is "motion_checkpoint_less_motion" or "motion_checkpoint_more_motion" from "crishhh/animatediff_controlnet" ?
@latentvision
2 months ago
normal motion :D
@Ratinod
2 months ago
@@latentvision Most likely it's controlnet_checkpoint.ckpt from "crishhh/animatediff_controlnet"
@latentvision
2 months ago
I believe I put the link inside the workflow in a note node
@Ratinod
2 months ago
@@latentvision You're right. It turns out I was let down by my habit of reproducing what I see in your videos instead of using the ready-made workflows :).
@user-uv4vv4mk4j
2 months ago
@@latentvision I don't understand your conversation. Which model is controlGIF?
Fantastic! 🎉 wonderful video
Can someone explain how the weights strategy parameter works?
@AB-wf8ek
2 months ago
If you hook it up to a Display Any node, you'll see what the outputs are. It looks like it's a list of parameters specific to Matteo's nodes in order to generate the appropriate keyframes. Essentially it's a parametric way of calculating the keyframes, that way you can add or remove images and it will automatically adjust the keyframes accordingly. This replaces the need to use something like Batch Prompt Schedule or Batch Value Schedule nodes to manually enter in keyframe values.
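To illustrate the parametric idea described above, here is a rough standalone sketch of a "linear" weights strategy. This is an assumption about the general technique, not the IPAdapter Weights node's actual code: each of the K images gets an evenly spaced keyframe across the F frames, and its weight ramps linearly between neighbouring keyframes.

```python
# Rough sketch of a parametric "linear" weights strategy (illustration
# only, not the real node's implementation): one evenly spaced keyframe
# per image, with weights ramping linearly so that each frame's weights
# sum to 1.0 and the images crossfade smoothly.

def linear_weights(num_images: int, num_frames: int) -> list[list[float]]:
    """One weight curve per image over all animation frames (num_images >= 2)."""
    span = (num_frames - 1) / (num_images - 1)   # distance between keyframes
    keyframes = [i * span for i in range(num_images)]
    curves = []
    for k in keyframes:
        # weight is 1.0 at the image's keyframe, falling off linearly to 0
        curves.append([max(0.0, 1.0 - abs(f - k) / span) for f in range(num_frames)])
    return curves

weights = linear_weights(num_images=3, num_frames=9)
# adding or removing images automatically redistributes the keyframes
```

Because the keyframes are computed from the image count and frame count, no keyframe values need to be entered by hand, which is exactly what replaces the Batch Prompt/Value Schedule approach.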
I LOVE YOUR WORK MAN
I've got a question about the IPAdapter Weights node. If you want to "hold" one of the input images for a while instead of constantly evolving, how would one approach this? You can increase the number of frames used, but it still moves forward to the next input image; could you somehow freeze it for a few frames? Or am I asking too much now haha.
@latentvision
2 months ago
the easiest is to repeat the frame twice
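In other words, duplicating an entry in the image batch creates a plateau. A tiny sketch of where evenly spaced keyframes land when one image is repeated (filenames hypothetical):

```python
# Repeating an image in the batch makes the animation "hold" on it
# between its two keyframes instead of crossfading away from it.
images = ["front.png", "side.png", "side.png", "back.png"]  # side.png repeated
num_frames = 13
keyframes = [i * (num_frames - 1) // (len(images) - 1) for i in range(len(images))]
# frames between the two side.png keyframes stay on that image
```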
What is the software called when you were refining the images?
@latentvision
2 months ago
it's an open source software called GIMP
@goodie2shoes
2 months ago
I'm not sure, but I think Matteo mentioned GIMP in one of his earlier videos.
@Grunacho
2 months ago
Good open source tools are also Photopea and Krita 😉
I don't have sgm_uniform as a scheduler. Can someone point out how/where to get this?
Not sure if this was mentioned, but for the life of me I couldn't find the Images Batch Multiple node. It took a bit of searching (the Manager was quite unhelpful here) until I found it is part of the ComfyUI Essentials pack. Hope this helps someone.
Great update! Banodoco is indeed amazing!
Why sgm_uniform? Is Karras worse?
@ryanontheinside
19 days ago
If I remember correctly it is recommended with LCM sampler
Hey Matteo, sorry, another annoying question from me. Your workflow works like a charm and I'm having great results with the typography workflow. I've been trying to create a moment at the beginning before the first word comes in. I can do this by adding a black image in the Images Batch Multiple node before the first word, but the result is that there is no 'die off' after the second word. I've tried many things: adding 2 black frames at the end, repeating the second prompt 3 times in the Prompt Schedule From Weights Strategy node, adding more frames in the IPAdapter Weights node, but nothing seems to work. Any thoughts would be helpful. I know you're not getting paid for this, so I appreciate any help at all.
@latentvision
1 month ago
hard to say without seeing your workflow, but generally speaking you need to add a "fire" frame at the beginning (so the animation basically starts with 2 fire images) and then a black frame for the ControlNet
This tutorial is really great! Very practical! (sponsored!) But I have a small question: if I don't want the original image to change, which parameters do I need to adjust? I tried ControlNet, but it doesn't seem to work.
@latentvision
21 days ago
with AnimateDiff the original image will always change to a certain degree. You can use video2video or ControlNets, but it's not like SVD, for example, which starts from a given frame and iterates on that
Getting this error, any idea why? Required input is missing: encode_batch_size
@latentvision
2 months ago
you probably just need to refresh the page
Hey Matteo, I don't seem to find the "lcm-lora-sd15.safetensors" file anywhere online. I've followed your links in description but they bring me to .ckpt files, so I'm a bit confused here. Can you please help? Thanks a lot for your time.
@latentvision
1 month ago
search LCM LORA on huggingface
What is up with the Shutterstock watermark in the final image?
You mention a Discord channel for animation (Bannadoku or something; it's hard to hear). Can you provide a link or the correct name?
@elowine
2 months ago
banodoco, see you there :D
@AonSolarra
2 months ago
@@elowine When searching Discord communities for banadoco I get zero hits. Do you need an invite link to find it?
Hello sir, can you please help me out? IPAdapter FaceID suddenly got extremely slow and I have no idea how to fix it now. It did not use to be that slow. Do you have any idea what I could do?
@latentvision
1 month ago
please join my Discord or post an issue on GitHub; it's hard to escalate in a YouTube comment
@beatemero6718
1 month ago
@@latentvision I understand that. You are right. I will join the Discord and post it as an issue. Thank you for your work.
Does anyone know the node packs for his "Images Batch Multiple" and "IPAdapter Weights"? Thank you
@latentvision
2 months ago
ComfyUI Essentials and IPAdapter of course
@seminole3001
2 months ago
@@latentvision Thank you for the answer and your work.
@seminole3001
2 months ago
Another question... why don't you use the "everywhere" node? Did you encounter trouble with it?
@latentvision
2 months ago
@@seminole3001 it makes the workflow very difficult to follow especially when teaching. In a node system like comfy it's considered an "anti-pattern"
@seminole3001
2 months ago
@@latentvision One last question: the animateGIF model, did you rename it? I can't find a link to download it...
Hey Master Matteo! Trying here on a Mac with Apple Silicon. At the end of the run, I see this error: "RuntimeError: MPS: Unsupported Border padding mode". Probably a Mac error? :(
@latentvision
2 months ago
please report the error on github, posting the full backtrace. thanks
Awesome !!
Couldn't these setups be packaged into the program, so we just change the variables instead of facing such a steep learning curve?
@latentvision
1 month ago
they could, yes
Super cool!! keep going 👍
Looks like Kara from Detroit: Become Human :)
It's great. Thx
Hello author, regarding the Embed group ipadpt, where can I download this file?
I'm happy thanks to your video. thank you.🥰🥰🥰🥰
Maestro! ❤
Matteo: "Life's too short for slow generations" 😅👍
You are the GOAT
My efficiency-nodes-comfyui has been failing to install. What should I do? I have reinstalled it many times.