ComfyUI - Learn how to generate better images with Ollama | JarvisLabs
Science & Technology
In this video we will learn how to use the power of LLMs, with Ollama and the Comfy_IF_AI nodes, to generate better images. Vishnu will also walk us through setting up Ollama on JarvisLabs instances.
Workflow : github.com/jarvislabsai/comfy...
Check out Ollama: ollama.com/
Check out our ComfyUI basics playlist: • ComfyUI - Getting star...
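For readers curious what this prompt refinement looks like under the hood, here is a minimal Python sketch that asks a locally running Ollama server to expand a short idea into a richer image prompt. It assumes Ollama is installed and serving on its default port; the model name (`llama3`) and the instruction wording are placeholder choices, not necessarily what the Comfy_IF_AI nodes actually use.

```python
import json
import urllib.request

# Ollama's default local REST endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(user_prompt, model="llama3"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    instruction = (
        "Rewrite the following idea as a detailed, vivid prompt "
        "for an image-generation model: " + user_prompt
    )
    return {"model": model, "prompt": instruction, "stream": False}

def refine_prompt(user_prompt, model="llama3"):
    """Send the prompt to a local Ollama server and return the refined text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(user_prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `refine_prompt("a castle at sunset")` would return the LLM's expanded prompt, which you could then wire into the positive-prompt input of a ComfyUI workflow.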
Check out our socials:
Website: jarvislabs.ai/
Discord: / discord
X: / jarvislabsai
LinkedIn: / jarvislabsai
Instagram: / jarvislabs.ai
Medium: / jarvislabs
Connect with Vishnu:
X: / vishnuvig
Linkedin: / vishnusubramanian
Comments: 28
This is really amazing!! Great work guys!
Is this what they call a magic prompt, where the Ollama model refines the user's prompt?
Awesome. Thanks.
Hi Vishnu, great video
@JarvislabsAI
A month ago
Thanks Rishab
Phenomenal
@JarvislabsAI
A month ago
Thanks :)
I cannot get this to run on my laptop; it fails to load into ComfyUI with both the Manager and a manual install. I have fully updated Comfy, and I also ran the txt file to get all the required files, and it still fails. Any idea why? I am running an Asus ROG Strix 2024 with 64 GB of RAM and a 4090 card with 16 GB of VRAM. I have all the requirements needed for AI generation.
@JarvislabsAI
A month ago
Did you try checking the error log to narrow down the issue?
Hello sir... does this IF_AI node take a lot of time? For me it's taking like 15 minutes for every queue, using an RTX 3060.
@JarvislabsAI
A month ago
It depends on what model you choose. Also try running ollama directly and see how fast it is.
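One quick way to follow that advice is to time a generation outside ComfyUI. A small, generic timing helper in Python (the `refine` call in the comment is a hypothetical function that queries your local Ollama server):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn, print how long it took, and return (result, seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{fn.__name__} took {elapsed:.2f}s")
    return result, elapsed

# Example (assumes an Ollama server and a `refine` function that queries it):
# _, seconds = timed(refine, "a castle at sunset")
```

If the direct call is already slow, the bottleneck is the model or the Ollama setup rather than the ComfyUI node.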
@Ai-dl2ut
A month ago
@@JarvislabsAI Thanks, let me try that
Hey, this looks great but I have a question. How much does it cost to generate these images?
@JarvislabsAI
A month ago
These are all open-source software, so there is not much cost associated with them. If you need a GPU and the base software setup, then you would be paying for the compute. Pricing starts at $0.49 an hour, and the actual billing happens per minute. jarvislabs.ai/pricing
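To make the per-minute billing concrete, here is a tiny Python sketch using the $0.49/hour entry price quoted above:

```python
HOURLY_RATE = 0.49  # USD per hour (entry-tier price quoted above)

def session_cost(minutes):
    """Cost of a GPU session billed per minute at the hourly rate."""
    return round(HOURLY_RATE / 60 * minutes, 4)

print(session_cost(10))  # a ten-minute session
print(session_cost(90))  # an hour and a half
```

So a typical ten-minute image-generation session at this tier costs well under ten cents.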
@goodchoice4410
A month ago
lol
@Necro-wr2tn
A month ago
@@goodchoice4410 why lol?
How do i get the Load Checkpoint?
@JarvislabsAI
A month ago
You can double-click and search for the Load Checkpoint node. If you want the checkpoint model, you can download it from this link: huggingface.co/RunDiffusion/Juggernaut-XL-v8/tree/main
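If you prefer downloading outside the browser, Hugging Face serves repo files at a predictable `resolve` URL. A small Python sketch that builds such a URL (the exact `.safetensors` filename is an assumption — check the repo's file list for the real name):

```python
REPO = "RunDiffusion/Juggernaut-XL-v8"

def hf_file_url(repo, filename, revision="main"):
    """Direct-download URL for a file in a Hugging Face model repo."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

# Filename below is an assumption -- verify it against the repo's file list:
url = hf_file_url(REPO, "juggernautXL_v8Rundiffusion.safetensors")
print(url)
```

Once downloaded, the file goes into ComfyUI's `models/checkpoints` folder so the Load Checkpoint node can find it.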
@mufasa.alakhras
A month ago
@@JarvislabsAI Thank you!
Hi, thank you very much, great tutorial ❤
@JarvislabsAI
A month ago
Thanks for creating the node, waiting for your future works 😊
@impactframes
A month ago
@@JarvislabsAI I made a big update, please check it out, along with my other nodes for talking avatars 😉, and thanks again for the tutorial ❤️
@JarvislabsAI
A month ago
@@impactframes Sure, we will look into it 🙌
@impactframes
A month ago
@@JarvislabsAI thank you :)
Ollama is super slow; I would like a faster version using LM Studio or similar. Thanks
@JarvislabsAI
29 days ago
Noted!
@RickySupriyadi
An hour ago
Slow or fast, doesn't that depend on which model you are using? phi3 in Ollama is blazing fast.
The topic is interesting, but (in common with most YouTube Comfy experts) the whole presentation is confusing for the 90% of the audience that has just stumbled upon this. I think to be more successful you need to be clearer about what you want to achieve and why it's a good idea. Explain how JarvisLabs fits into this, and make it clear exactly what resources need to be downloaded and how, in the least problematic way. I don't want to appear too negative; of course you are wanting to be helpful. I'm just trying to give some tips on how to improve your presentation and, hopefully, increase subscriber numbers as a result.