
Loki - Live Portrait - NEW TALKING FACES in ComfyUI!

UPDATE: • RotoMaker Pack | Anima...
Live Portrait generates talking heads from your image and a guiding video of a person talking. In my workflow, the source FPS and audio are preserved in your generation.
Workflow: civitai.com/mo...
Github: github.com/kij...
models: huggingface.co...
place all models inside /models/liveportrait/ - see the sketch after these links
Loki-FaceSwap: • LOKI FASTEST FACE SWAP...
Loki-MimicMotion: • LOKI - Mimic Motion - ...
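For reference, a minimal Python sketch of the model placement step; the "ComfyUI" root and "downloads" folder paths are assumptions, and the actual model files come from the HuggingFace link above:

```python
# Create the LivePortrait model folder inside a ComfyUI install and move
# the downloaded files into it. Adjust the placeholder paths to your setup.
import os
import shutil

comfy_root = "ComfyUI"  # path to your ComfyUI install
target = os.path.join(comfy_root, "models", "liveportrait")
os.makedirs(target, exist_ok=True)

downloads = "downloads"  # folder holding the files from HuggingFace
for name in os.listdir(downloads):
    shutil.move(os.path.join(downloads, name), os.path.join(target, name))
print(f"models placed in {target}")
```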
Join this channel to get access to perks:
/ @fivebelowfiveuk
discord: / discord
www.fivebelowfive.uk
- Workflow Packs:
Hyper SUPIR civitai.com/mo...
Merge Models civitai.com/mo...
cosXL Convertor civitai.com/mo...
Looped Motion civitai.com/mo...
Trio Triple Latents civitai.com/mo...
Ananke Hi-Red civitai.com/mo...
- SDXL LoRAs
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
civitai.com/mo...
- Introducing series (music/video)
Noisee.ai • Introducing Noisee.ai ...
Udio.com • Introducing Udio.com [...
suno.com • Introducing Suno V3 Music
haiper.ai • Introducing Video Gene...
- Checkpoint Merging
• Create The Best Model ...
- cosXL / cosXL-edit conversion
• Convert any SDXL model...
• Unlock CosXL with any ...
- 3D Generation
• DJZ 3D Collection
- New Diffusion Models (April '24)
Stable Cascade:
• Stable Cascade Comfy C...
• Stable Cascade in Comf...
SDXS-512:
• SDXS - New Image Gener...
cosXL & cosXL-edit:
• CosXL & CosXL-Edit - N...
- Stable Cascade series:
• Stable Cascade Workflo...
- Image Model Training
datasets • Datasets in detail - M...
colab • Updated Lora Training ...
local • Updated Lora Training ...
civitai • Stable Cascade LORA tr...
civitai • SDXL Lora Training wit...
- Music with Audacity
• Make Music with Audaci...
• Make Music with Audaci...
- DJZ custom nodes (aspectsize node)
• AspectSize (djz-nodes)...
stable diffusion cascade
stable diffusion lora training
comfyui nodes explained
comfyui video generation
comfyui tutorial 2024
best comfyui workflows
comfyui image to image
comfyui checkpoints
civitai stable diffusion tutorial

Comments: 34

  • @ArrowKnow · a month ago

    Thank you for this! I was playing with the default workflow from LivePortrait but your workflow fixed all of the issues I was having with it. Perfect timing. Love it

  • @FiveBelowFiveUK · a month ago

    Glad it helped! The credit goes to the author, as we used his nodes to fix the framerate :) Thanks so much though - this is exactly why I make mildly custom editions for my packs. I just want to share these tools and see what everyone can do!

  • @dadekennedy9712 · 21 days ago

    So good!

  • @GamingDaveUK · a month ago

    Got all excited for this as it looked to be exactly what I was looking for... a way to create an animated avatar reading along to an mp3/wav speech file... sadly it looks like it's video-to-video. Looks cool... but the search for a way to create a video from a TTS sound file continues lol

  • @FiveBelowFiveUK · a month ago

    We covered that previously: you can use Hedra to do TTS, or use your own TTS with a picture, and that will generate the talking heads as well. In this video we are specifically looking at ComfyUI, where we used Hedra to animate our puppet target character. In the previous deep dive we explored 2D puppet animation with motion-tracked talking heads. I have also recorded myself mimicking the words from an audio file, which can then drive the speaking animation :) -- it can work!

  • @DaveTheAIMad · a month ago

    @FiveBelowFiveUK Just tried Hedra and the result was really good... but it's limited to 30 seconds. Slicing the audio up could work, but I am likely to have a lot of these to do over time. The more I look into this, the more it seems like there is no local solution where you can just feed in an image and a wav/mp3 file and get a resulting video. Hedra did impress me though. I remember years ago using something called "Crazy Talk" that worked well, but you had to mask the avatar, set the face locations yourself, etc.... which honestly I would be OK with doing in ComfyUI lol. Every solution either fails (dlib for the DreamTalk node, for example) or needs a video as a driver. It's actually all rather frustrating. Maybe someone will solve it down the line.

  • @9bo_park · a month ago

    How were you able to capture your own movements and include them in the video? I’m curious about how you managed to show your captured video itself in the bottom right corner.

  • @FiveBelowFiveUK · a month ago

    I have never shown how I create my on-screen avatar; it is myself, captured using a Google Pixel 5 phone. I have also started using motion tracking with the DJI Osmo Pocket 3, which is excellent for this. The process has been refined from a multi-software Adobe method to a 100% in-ComfyUI approach. It used to be left running all night to finish a 1-minute animation, but now I can complete 600 frames in just 200 seconds. We need 30 FPS, so we are close to but not quite reaching live-rendering speed. The process is simpler now; originally it involved large sequences of images with depth/pose passes and a lot of manual rotoscoping. Before, I had to do a lot of editing and use Adobe Photoshop, Premiere and After Effects. Now I can just load the video from my cameras into the workflow and it does all the hard work, leaving me with assets to place into the scenes.

  • @sejaldatta463 · 25 days ago

    Hey, great video - you mention liquifying and using dewarp stabilizers. What nodes would you recommend in ComfyUI to help resolve this?

  • @FiveBelowFiveUK · 25 days ago

    Unfortunately I might have been unclear; AFAIK there are no nodes for that (yet haha), but I would use Adobe Premiere/After Effects, DaVinci Resolve or some other dedicated video editing software to achieve that kind of post-processing. In previous videos we have looked at using rotoscoping and motion tracking with generated 2D assets for webcam-driven puppets and things like this. Recently my efforts have been to hunt down and build some base packs to replace those actions in ComfyUI, eliminating most of the work done with paid software or online services. Short answer: we fixed that in post :)

  • @guillaumebieler7055 · a month ago

    What kind of hardware are you running this on? It's too much for my A40 Runpod instance 😅

  • @FiveBelowFiveUK · a month ago

    Even my 4090 can actually bottleneck on the CPU side with more than ~1000 frames in a single batch. This used the video input loader, and the default will use the whole source clip; if you use more than 10-20 seconds at 30 fps, it might start to struggle even with a nice setup. I split my source clips up and use the workflow like that. Alternatively, with a longer source clip, use a 600-frame cap and set the start-frame skip to 0, 600, 1200, 1800, etc., adding 600 frames each time; then you can join the results later. I'll include a walkthrough in the next Loki video; it splits the job into parts which are more manageable :)
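To make that batching scheme concrete, here is a minimal Python sketch of the idea; run_liveportrait() is a hypothetical stand-in for queueing the ComfyUI workflow with those Load Video settings, and the clip length is an example value:

```python
# Sketch of the batching described above: cap each run at 600 frames and
# advance the start-frame skip by 600 per run (0, 600, 1200, ...), then
# join the finished parts afterwards.
FRAME_CAP = 600
TOTAL_FRAMES = 1800  # example: length of the source clip in frames

def run_liveportrait(source, frame_load_cap, skip_first_frames, out):
    # hypothetical stand-in: in practice, set these values on the video
    # input loader node and queue the workflow in ComfyUI
    print(f"{source}: {frame_load_cap} frames from offset {skip_first_frames} -> {out}")

for skip in range(0, TOTAL_FRAMES, FRAME_CAP):
    run_liveportrait("talking_head.mp4", FRAME_CAP, skip, f"part_{skip:04d}.mp4")
# the part_*.mp4 results can then be joined, e.g. with ffmpeg's concat demuxer
```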

  • @adamsmith-lb9zv · 12 days ago

    What's this? "Prompt outputs failed validation: Return type mismatch between linked nodes: images, LP OUT != IMAGE. VHS_VideoCombine: Return type mismatch between linked nodes: images, LP OUT != IMAGE"

  • @FiveBelowFiveUK · 12 days ago

    Which workflow in the pack is giving this error?

  • @adamsmith-lb9zv · 11 days ago

    @FiveBelowFiveUK V12

  • @adamsmith-lb9zv · 11 days ago

    @FiveBelowFiveUK The V12 workflow; the error happens on the LivePortrait node while compositing the video. Updating and re-adding the models and so on still gives this error.

  • @FiveBelowFiveUK · a day ago

    There will be an update to this pack, because we switched the backend to MediaPipe (open source); the old ones used inswapper (research model). This can happen from time to time when the authors make significant changes to the code. Thanks for letting me know.

  • @Avalon19511 · a month ago

    How did you get one image in the results? Mine is split between the source and target.

  • @FiveBelowFiveUK · a month ago

    If you are using the workflow provided (links in description), I have made the changes shown in this video. Those changes were: 1. removed the split view (we want the best resolution for use later); 2. added FPS sync with the source video; 3. connected the audio, so the final video uses the input speech.
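For anyone replicating changes 2 and 3 outside the graph, a hedged sketch of the same idea with plain ffmpeg driven from Python; the file names and frame pattern are assumptions, and ffprobe/ffmpeg must be on PATH:

```python
# Read the source clip's frame rate, then encode the generated frames at
# that rate while copying the source audio track onto the result.
import subprocess

# e.g. returns "30/1" for a 30 fps source
fps = subprocess.check_output([
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=r_frame_rate",
    "-of", "default=noprint_wrappers=1:nokey=1", "source.mp4",
]).decode().strip()

subprocess.run([
    "ffmpeg", "-y",
    "-framerate", fps, "-i", "frames/%05d.png",  # generated frame sequence
    "-i", "source.mp4",
    "-map", "0:v:0", "-map", "1:a:0",            # video from frames, audio from source
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "-c:a", "copy",
    "-shortest", "final.mp4",
], check=True)
```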

  • @Avalon19511 · a month ago

    @FiveBelowFiveUK All good, just copied yours. Definitely not as smooth as Hedra, but it's a start :)

  • @adamsmith-lb9zv · a month ago

    Blogger, can this node only be used on Apple OS devices? The workflow runs through the nodes, but there is an error message that is not associated with MPS.

  • @sprinteroptions9490 · a month ago

    Great stuff.. works well.. but the workflow's a lot slower than the standalone when just trying out different photos to sync.. it's like it's processing the video again every time? With the demo, animating a new image takes roughly 10 seconds after a video has been processed the first time.. so the Comfy workflow takes over a minute every time no matter what.. maybe I tripped something? I dunno

  • @FiveBelowFiveUK · a month ago

    If you used my demo video head, it's quite long; it's possible to set up a frame limit and then batch the runs by moving the start frames. I used the default of the whole source clip, which might be hundreds of frames. If you see slowness in general, there is a note about ONNX support and a link to how to fix it in the LivePortrait GitHub; I believe this is to do with the ReActor backend stack, which is similar. With Loki Face Swap you should see almost instant face swapping when using a presaved face model that you loaded.

  • @Avalon19511 · a month ago

    Also, your Video Combine is different from mine; mine says image, audio, meta_batch, vae. Is it possible to change the connections?

  • @veltonhix8342 · a month ago

    Yes, right-click the node and select "convert widget to input".

  • @Avalon19511 · a month ago

    @veltonhix8342 Thank you. Any thoughts about getting one image in the results?

  • @FiveBelowFiveUK · a month ago

    Download my modified workflow from the description :) it's on Civitai.

  • @alirezafarahmandnejad6613 · a month ago

    Why is the face in my final video covered with a black box?

  • @FiveBelowFiveUK · a month ago

    This would indicate that something did not install correctly with your backend. Check the GitHub for the node you are using and see if there are any reports from other people. Two people have reported this since I launched the video. github.com/Gourieff/comfyui-reactor-node contains good advice if you have problems with InsightFace (required).

  • @alirezafarahmandnejad6613 · a month ago

    @FiveBelowFiveUK I don't think it's an InsightFace issue, because I fixed that beforehand. I don't have issues with the results coming out of other flows or nodes that include InsightFace, only this one; that's weird. I even tried the main flow and user-made ones, same issue.

  • @alirezafarahmandnejad6613 · a month ago

    @FiveBelowFiveUK Never mind bro, fixed it :) The issue was that I was using the CPU for rendering; I changed it to CUDA and now it works fine.
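For reference, a minimal PyTorch sketch of that fix; the model handle mentioned in the comment is hypothetical:

```python
# Prefer CUDA for rendering when a GPU is available instead of silently
# falling back to the CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"rendering on: {device}")
# a hypothetical model would then be moved once with: model.to(device)
```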

  • @bugsycline3798 · a month ago

    Huh?

  • @angloland4539 · 26 days ago

  • @FiveBelowFiveUK · 26 days ago

    Don't forget to check the latest video! An alternative for talking with motion.
