6 Ways to Run ChatGPT Alternatives on Your Machine (Including Llama3)
Science & Technology
Open-source AI and Large Language Models are getting better and better. If we can replace ChatGPT or Bard with them, we gain a great deal of privacy and can use these models in cases we previously couldn't, like when dealing with sensitive or proprietary data.
In this video, we'll learn 6 ways to run various open-source large language models locally.
🔗 Useful links:
I tried 7 ChatGPT alternatives: • I Tried 7 ChatGPT Alte...
How to train a model using CI/CD: • CI/CD Essentials for M...
Blog post with examples: semaphoreci.com/blog/local-llm
HuggingFace: huggingface.co
LangChain: www.langchain.com
Llama.cpp: github.com/ggerganov/llama.cpp
Llamafile: github.com/Mozilla-Ocho/llama...
Ollama: ollama.ai
GPT4ALL: gpt4all.io/index.html
================================================
Timestamps:
0:00 Intro
0:54 Hardware requirements
2:04 (1) How to use HuggingFace 🤗 and Transformers 🤖
8:35 (2) LangChain
11:58 (3) Llama.cpp
17:21 (4) Llamafile
20:39 (5) Ollama.ai
23:43 (6) GPT4ALL
27:10 Conclusion
================================================
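Method (1) from the timestamps, Hugging Face Transformers, boils down to a few lines of Python. This is a minimal sketch, assuming `pip install transformers torch`; the model name "gpt2" is a small stand-in for whatever Llama-family model you have access to, and `build_prompt` is a hypothetical helper, not part of the Transformers API:

```python
def build_prompt(question: str) -> str:
    # Hypothetical helper: wrap the question in a plain
    # instruction-style prompt.
    return f"Question: {question}\nAnswer:"

def generate(question: str, model: str = "gpt2") -> str:
    # Heavy import kept inside the function so the helper above
    # works even without transformers installed.
    from transformers import pipeline

    generator = pipeline("text-generation", model=model)
    out = generator(build_prompt(question), max_new_tokens=40)
    return out[0]["generated_text"]
```

Calling `generate("What is a local LLM?")` downloads the model weights on the first run, then everything executes locally.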
#llm #ai #localllm #llama2 #openai #chatgpt #nlp #machinelearning #ml #development #programming #devops #tutorial #llama3
Comments: 52
Learn how to run Llama3 on your machine. Running a ChatGPT alternative locally can be cheaper and more secure. You can ask it about private information without worrying about what happens to the data.
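Once Ollama is installed from ollama.ai and you've pulled the model with `ollama pull llama3`, it serves a local HTTP API that you can call with nothing but the Python standard library. A minimal sketch (the endpoint and payload follow Ollama's /api/generate interface; the prompt is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Payload for Ollama's /api/generate endpoint; stream=False
    # asks for the whole completion in a single JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the Ollama server running, `ask("llama3", "Why do local LLMs help with privacy?")` returns the completion — no data ever leaves your machine.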
Arrived here after feeling confused by all the links and approaches floating around on the internet. After your video, I finally feel like I understand this space. This is the best resource I've found on this topic by far.
Man, thank you so much for this walkthrough. It feels like multiple hours of my own browsing, research, and getting lost neatly packed into a 30-minute video.
@SemaphoreCI
14 days ago
Glad to hear it!
I appreciate you !! I look forward to seeing your channel grow
@SemaphoreCI
3 months ago
Thank you so much 🤗
An excellent explanatory video with valuable and useful information for using LLAMA models without the Internet to ensure complete privacy.
Excellent breakdown of user-friendly options to run LLMs.
@SemaphoreCI
4 months ago
Thank you. The space is moving so fast it's hard to keep track of everything. Exciting times.
Thanks for clarifying my long-listed doubts. Now I am able to connect the dots.
@SemaphoreCI
3 months ago
Glad it was helpful!
I appreciate that this video is a nice well rounded high-level description. The llama.cpp explanation was very helpful. Thank you very much for sharing.
@SemaphoreCI
3 months ago
Glad you enjoyed it!
Thanks for the summary! There were a lot of things in your video which I didn't know before. ♥
@SemaphoreCI
2 months ago
Thank you! I'm happy it helped
Thank you so much. Clear, concise, beautiful presentation. I look forward to engaging with more of your content.
@SemaphoreCI
3 months ago
Awesome, thank you!
Excellent presentation! Thank you
@SemaphoreCI
2 months ago
Glad you enjoyed it!
Thanks very much for your time and awesome video ❤🎉
@SemaphoreCI
3 months ago
Thanks for watching!
This is one of the best videos I've ever seen about running LLMs! Specifically, it's very user-friendly for non-coders! Maybe some tweaks to the title/description would help non-coders find this more easily.
@SemaphoreCI
25 days ago
Thank you! It's a good suggestion. I'm glad you enjoyed it.
This is fantastic stuff man - quick question - have you considered using LM Studio?
@SemaphoreCI
3 months ago
Thank you. I considered it, but I preferred to highlight open-source tools first.
Thank you for presenting this... very helpful for understanding the options.
@SemaphoreCI
14 days ago
Thank you for watching. I'm happy it helped!
Thank you for sharing, I will try some of your suggestions for open-source products. :-)
@SemaphoreCI
3 months ago
Please do!
@lalpremi
3 months ago
@@SemaphoreCI I managed to get llama.cpp running with phi-2-uncensored, love it.. fast answers are not bad.. I showed it to my 10-year-old, he wanted a 1000-page SA on cats..lol
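For anyone wanting to try the same phi-2 setup, llama.cpp also ships Python bindings (llama-cpp-python). A rough sketch, assuming `pip install llama-cpp-python` and a quantized GGUF file downloaded from Hugging Face — the model filename and the `format_question` helper are examples, not fixed names:

```python
def format_question(question: str) -> str:
    # Hypothetical helper: simple Q/A prompt format that works
    # well with small base models like phi-2.
    return f"Q: {question}\nA:"

def answer(question: str, model_path: str = "phi-2.Q4_K_M.gguf") -> str:
    # Heavy import kept inside the function; loading the model
    # happens entirely on the local machine.
    from llama_cpp import Llama

    llm = Llama(model_path=model_path)
    out = llm(format_question(question), max_tokens=256, stop=["Q:"])
    return out["choices"][0]["text"]
```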
Great video!
@SemaphoreCI
25 days ago
Thank you!
Like some other commenters have said, this video was really useful for unpacking the different options available for working with Llama models. Thank you so much! Aside: your pronunciation of the "ll" sounds like the way it's done in Argentinian Spanish... are you in Argentina?
@SemaphoreCI
25 days ago
Yes, I am. That "ll" sound is what we use to call actual llamas, which we have plenty of. I understand that in English it's more like "lama", but it's hard to remember that when I'm recording. Thank you for the kind words.
Just subbed. Which model would you recommend for training on a specific Golang repo, for example a project repo? Would it allow me to train locally by pointing to a directory? Thanks. Ah, I should have continued watching your video to the end, where you mention GPT4ALL allows you to point to a directory, which it indexes. Do you think this would work if I point it at a repo?
@SemaphoreCI
3 months ago
In my experience, GPT4ALL models are not great at coding. YMMV, but I think something like Copilot or AWS CodeWhisperer would be better for your use case.
It was very useful.
@SemaphoreCI
2 months ago
Thank you!
appreciate your good work
Where exactly did you run the gh repo clone of llama.cpp at 13:04? Thank you
Good video. Thanks
@SemaphoreCI
A month ago
Glad you liked it! Thank you!
LocalAI in Docker?
@SemaphoreCI
10 days ago
They provide a Docker image: localai.io/basics/getting_started/
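Once that container is running, LocalAI exposes an OpenAI-compatible API on port 8080, so you can query it from Python with just the standard library. A sketch, assuming the default port from the getting-started page and a model name you've configured in LocalAI (both are assumptions here):

```python
import json
import urllib.request

def build_chat_payload(model: str, content: str) -> dict:
    # OpenAI-style chat completion body, which LocalAI accepts.
    return {"model": model,
            "messages": [{"role": "user", "content": content}]}

def chat(model: str, content: str,
         base: str = "http://localhost:8080") -> str:
    data = json.dumps(build_chat_payload(model, content)).encode()
    req = urllib.request.Request(
        f"{base}/v1/chat/completions", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, existing OpenAI client code can often be pointed at the LocalAI base URL unchanged.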
@damnft8218
10 days ago
@@SemaphoreCI thank you
Wow, I'm not here yet. See you in...
@SemaphoreCI
2 months ago
Good luck!
Why are there 0 models under Conversational now?
@SemaphoreCI
2 months ago
Sorry, what do you mean?
@manasa7015
2 months ago
Hugging Face has no Conversational models now
@jesskrikra
2 months ago
He's right. The filter option for "conversational" under Natural Language Processing on Hugging Face is gone. Maybe the "question answering" filter covers that now?
@manasa7015
2 months ago
@@jesskrikra Maybe