ComfyUI - Learn how to generate better images with Ollama | JarvisLabs

Science & Technology

In this video we learn how to use the power of LLMs, via Ollama and the ComfyUI-IF_AI nodes, to generate better images. Vishnu also walks us through how to set up Ollama on JarvisLabs instances. (A minimal prompt-refinement sketch follows the links below.)
Workflow: github.com/jarvislabsai/comfy...
Check out Ollama: ollama.com/
Check out our ComfyUI basics playlist: • ComfyUI - Getting star...
Check out our socials:
Website: jarvislabs.ai/
Discord: / discord
X: / jarvislabsai
LinkedIn: / jarvislabsai
Instagram: / jarvislabs.ai
Medium: / jarvislabs
Connect with Vishnu:
X: / vishnuvig
Linkedin: / vishnusubramanian
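
For anyone who wants to try the core idea outside ComfyUI first, here is a minimal sketch (not the IF_AI node's actual implementation) that uses the ollama Python package to turn a rough idea into a richer image prompt. The model name and system prompt are assumptions for illustration, and it expects a local `ollama serve` to be running with the model already pulled.

```python
# Minimal sketch: use a local Ollama model to refine a short user prompt
# into a more detailed image prompt, similar in spirit to what the
# ComfyUI-IF_AI nodes do. Assumes `ollama serve` is running and the model
# below has been pulled (e.g. `ollama pull llama3`).
import ollama  # pip install ollama

SYSTEM_PROMPT = (
    "You are a prompt engineer for a text-to-image model. "
    "Rewrite the user's idea as one detailed, comma-separated prompt "
    "describing subject, style, lighting and composition."
)

def refine_prompt(idea: str, model: str = "llama3") -> str:
    """Ask the local Ollama model to turn a rough idea into a rich prompt."""
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": idea},
        ],
    )
    return response["message"]["content"].strip()

if __name__ == "__main__":
    # The refined prompt can then be pasted into (or wired into) the
    # positive-prompt input of the ComfyUI workflow.
    print(refine_prompt("a cozy cabin in the mountains at night"))
```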

Comments: 28

  • @chorton53
    3 days ago

    This is really amazing!! Great work, guys!

  • @RickySupriyadi
    an hour ago

    Is this what they call a magic prompt, where the Ollama model refines the user prompt?

  • @evolv_85
    5 days ago

    Awesome. Thanks.

  • @rishabh063
    a month ago

    Hi Vishnu, great video

  • @JarvislabsAI
    a month ago

    Thanks, Rishab

  • @aimademerich
    a month ago

    Phenomenal

  • @JarvislabsAI
    a month ago

    Thanks :)

  • @hempsack
    a month ago

    I cannot get this to run on my laptop; it fails to load into ComfyUI with both the Manager and a manual install. I have fully updated Comfy, and I also ran the requirements txt file to get all the needed files, and it still fails. Any idea why? I am running a 2024 Asus ROG Strix with 64 GB of RAM and a 4090 with 16 GB of VRAM. I have all the requirements needed for AI generation.

  • @JarvislabsAI
    a month ago

    Did you try checking the error log to narrow down the issue?

  • @Ai-dl2ut
    a month ago

    Hello sir... does this IF_AI node take a lot of time? For me it's taking like 15 minutes to load for every queue, using an RTX 3060.

  • @JarvislabsAI
    a month ago

    It depends on which model you choose. Also try running Ollama directly and see how fast it is.
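
    As a rough way to follow that advice, here is a small sketch that times a generation outside ComfyUI. Assumptions: the ollama Python package is installed, the models named below have already been pulled, and the first call for each model also pays the one-off model-load cost.

```python
# Rough sketch for timing Ollama on its own (outside ComfyUI) to see
# whether the LLM step is the bottleneck. Model names are examples only;
# they must already be pulled, e.g. `ollama pull phi3`.
import time

import ollama  # pip install ollama

def time_generation(model: str, prompt: str) -> float:
    """Return the wall-clock seconds Ollama takes to answer `prompt`."""
    start = time.perf_counter()
    ollama.generate(model=model, prompt=prompt)
    return time.perf_counter() - start

if __name__ == "__main__":
    for model in ("phi3", "llama3"):
        # Note: the first call per model also includes load time.
        seconds = time_generation(model, "Describe a sunset over the ocean.")
        print(f"{model}: {seconds:.1f}s")
```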

  • @Ai-dl2ut
    a month ago

    @@JarvislabsAI Thanks, let me try that

  • @Necro-wr2tn
    a month ago

    Hey, this looks great but I have a question. How much does it cost to generate these images?

  • @JarvislabsAI
    a month ago

    This is all open-source software, so there is not much cost associated with it. If you need a GPU and the base software setup, then you would be paying for the compute. The pricing starts at $0.49 an hour, and the actual billing happens per minute. jarvislabs.ai/pricing
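
    For a rough sense of what per-minute billing means at the quoted starting rate, here is a tiny back-of-the-envelope calculation; the 20-minute session length is just an illustrative assumption, not a figure from the video.

```python
# Back-of-the-envelope cost check based on the quoted starting rate of
# $0.49/hour billed per minute; the 20-minute session is only an example.
HOURLY_RATE = 0.49          # USD per hour (quoted starting price)
minutes_used = 20           # example session length
cost = HOURLY_RATE / 60 * minutes_used
print(f"{minutes_used} min ≈ ${cost:.2f}")   # ~ $0.16
```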

  • @goodchoice4410
    a month ago

    lol

  • @Necro-wr2tn
    a month ago

    @@goodchoice4410 why lol?

  • @mufasa.alakhras
    a month ago

    How do I get the Load Checkpoint?

  • @JarvislabsAI
    a month ago

    You can double-click and search for the Load Checkpoint node. If you want the checkpoint model, you can download it from this link: huggingface.co/RunDiffusion/Juggernaut-XL-v8/tree/main
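
    If you prefer to script the download, a minimal sketch using huggingface_hub is below. The repo id comes from the link above, while the destination folder assumes a default ComfyUI layout and may need adjusting for your install.

```python
# Minimal sketch for fetching the Juggernaut-XL v8 checkpoint from the
# Hugging Face repo linked above and placing it where ComfyUI usually
# looks for checkpoints. The destination path is an assumption based on
# the default ComfyUI folder layout.
from huggingface_hub import snapshot_download  # pip install huggingface_hub

snapshot_download(
    repo_id="RunDiffusion/Juggernaut-XL-v8",
    allow_patterns=["*.safetensors"],           # only the model weights
    local_dir="ComfyUI/models/checkpoints",     # assumed ComfyUI path
)
```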

  • @mufasa.alakhras
    a month ago

    Thank you! @@JarvislabsAI

  • @impactframes
    a month ago

    Hi, thank you very much, great tutorial ❤

  • @JarvislabsAI
    a month ago

    Thanks for creating the node; looking forward to your future work 😊

  • @impactframes
    a month ago

    @@JarvislabsAI I made a big update, please check it out, along with my other nodes for talking avatars 😉. Thanks again for the tutorial ❤️

  • @JarvislabsAI
    a month ago

    @@impactframes Sure, we will look into it 🙌

  • @impactframes
    a month ago

    @@JarvislabsAI thank you :)

  • @israeldelamoblas5043
    29 days ago

    Ollama is super slow; I would like a faster version using LM Studio or similar. Thanks

  • @JarvislabsAI
    29 days ago

    Noted!

  • @RickySupriyadi
    an hour ago

    Slow or fast, doesn't that depend on which model you are using? phi3 in Ollama is blazing fast.

  • @spiffingbooks2903
    4 days ago

    The topic is interesting, but (in common with most YouTube Comfy experts) the whole presentation is confusing for the 90% of the audience that has just stumbled upon this. I think to be more successful you need to be clearer about what you want to achieve and why it's a good idea. Explain how JarvisLabs fits into this, and make it clear what resources need to be downloaded and exactly how, in the least problematic way. I don't want to appear too negative, as of course you are trying to be helpful; I'm just offering some tips on how to improve your presentation and hopefully increase subscriber numbers.
