How to Install and Test LLaMA 3 Locally [2024]

Science & Technology

After the release of Llama 3, I thought I should make a video to walk through local setup for anyone looking to use it. I hope this video helps :)
Related links:
Download Ollama: ollama.com/download
Open web ui: github.com/open-webui/open-webui
Llama3 download link: ollama.com/library/llama3
Link from video: llama.meta.com/llama3/
Release note from Meta: ai.meta.com/blog/meta-llama-3/
- - - - - - - - - - - - - - - - - - - - - -
Follow us on social networks:
Instagram: @codewithbro_
---
Support us on Patreon: @codewithbro

Comments: 40

  • @codewithbro95 · 29 days ago

    Model variants:
    - Instruct is fine-tuned for chat/dialogue use cases. Example: ollama run llama3, ollama run llama3:70b
    - Pre-trained is the base model. Example: ollama run llama3:text, ollama run llama3:70b-text
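The variant names above follow a simple pattern. A small sketch of how size and mode map to Ollama tags (the `pick_model` helper is hypothetical, not part of Ollama):

```shell
# Hypothetical helper: map a model size (8b/70b) and mode (chat/text)
# to the matching Ollama tag listed in the pinned comment.
pick_model() {
  size="$1"
  mode="$2"
  tag="llama3"                       # default: 8B instruct (chat)
  if [ "$size" = "70b" ]; then
    tag="llama3:70b"
  fi
  if [ "$mode" = "text" ]; then      # "text" = pre-trained base model
    case "$tag" in
      llama3)     tag="llama3:text" ;;
      llama3:70b) tag="llama3:70b-text" ;;
    esac
  fi
  echo "$tag"
}

pick_model 8b chat    # prints: llama3
pick_model 70b text   # prints: llama3:70b-text
```

You would then run, for example, `ollama run "$(pick_model 70b text)"`.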

  • @Knuhben · 28 days ago

    Nice video! Can you do one on how to set up a local database from PDF files, so the AI could search those PDFs and answer questions about their content?

  • @gallyyouko5993 · 27 days ago

    How can I run the original, non-quantized version of Llama 3 8B? (It is almost 15 GB.)

  • @codewithbro95 · 27 days ago

    My best suggestion is to get access to the Hugging Face repo; you will have to apply to Meta for this. Here is a helpful link: huggingface.co/meta-llama/Meta-Llama-3-8B
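As a back-of-the-envelope check on that "almost 15 GB" figure, assuming the unquantized weights are stored as fp16 (2 bytes per parameter):

```shell
# Rough size estimate for unquantized Llama 3 8B weights in fp16.
PARAMS=8000000000                      # ~8 billion parameters
BYTES=$((PARAMS * 2))                  # fp16: 2 bytes per parameter
GIB=$((BYTES / 1024 / 1024 / 1024))    # convert to GiB (integer division)
echo "~${GIB} GiB"                     # prints: ~14 GiB
```

That lines up with the ~15 GB download size of the original checkpoint.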

  • @gallyyouko5993 · 27 days ago

    @codewithbro95 I get it, but I am looking for a web UI to run it.

  • @codewithbro95 · 27 days ago

    @gallyyouko5993 You can use github.com/open-webui/open-webui; it's what I used in the video :)

  • @SirDragonClaw · 28 days ago

    How can I run the larger version of the model?

  • @codewithbro95 · 27 days ago

    ollama run llama3:70b

  • @Baly5 · 19 days ago

    I didn't really get the part on Docker, can you help me?

  • @codewithbro95 · 18 days ago

    How can I help?

  • @GiochiamoinsiemeadAndrydex · 7 days ago

    How do I change the installation location and where the model is downloaded?

  • @chintanpatel2229 · 5 days ago

    kzread.info/dash/bejne/p55luNB9gLWfqNI.html

  • @maorahuvim2108 · 25 days ago

    How can I run it with LangChain?

  • @codewithbro95 · 24 days ago

    python.langchain.com/docs/guides/development/local_llms/

  • @jesuispasla2729 · 17 days ago

    How many GB of RAM would be needed?

  • @codewithbro95 · 11 days ago

    What version do you wanna run?

  • @jesuispasla2729 · 11 days ago

    @codewithbro95 Well, the best one with 16 GB of RAM on Linux (Ubuntu).

  • @jesuispasla2729 · 11 days ago

    @codewithbro95 The best model for 16 GB RAM on Linux (Ubuntu)?
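A rough rule of thumb for that question, under the assumption that Ollama's default builds use 4-bit quantization (roughly half a byte per parameter for the weights, plus extra for the KV cache and runtime overhead):

```shell
# Rough RAM estimate for the weights of Llama 3 8B at 4-bit quantization.
PARAMS=8000000000                     # ~8 billion parameters
WEIGHT_BYTES=$((PARAMS / 2))          # 4-bit: ~0.5 bytes per parameter
WEIGHT_GIB=$((WEIGHT_BYTES / 1024 / 1024 / 1024))
echo "~${WEIGHT_GIB} GiB for weights" # prints: ~3 GiB for weights
```

By this estimate the 8B model fits comfortably in 16 GB of RAM, while the 70B variant (the same arithmetic gives roughly 35 GiB of weights) would not.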

  • @PedroHenriquePS00000 · 15 days ago

    Why don't any of these have a proper graphical interface? I hate having a black screen to stare at.

  • @codewithbro95 · 15 days ago

    You can use the web UI I showed in the video.

  • @rs-wd9or · 27 days ago

    How can we add a model?

  • @codewithbro95 · 27 days ago

    Follow the steps and run the ollama command as in the video; it will download the model to your computer.

  • @rs-wd9or · 27 days ago

    @codewithbro95 I meant there is no option to select a model in the Ollama Web UI's model bar. How can we download it there?

  • @codewithbro95 · 27 days ago

    @rs-wd9or No need to; Open WebUI integrates with Ollama automatically, so all the models you download with Ollama will be listed there.

  • @hoangroyalir · 25 days ago

    @codewithbro95 I have downloaded the Llama model using the command "ollama run llama3", but Open WebUI doesn't see the models. What should I do now? I use this command to start Open WebUI: docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

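For readers puzzling over that Docker command, here is the same Open WebUI launch command built up flag by flag so each part can be annotated (the command itself is from the comment above; it is assembled as a string here rather than executed, so Docker is only needed when you actually run it):

```shell
# Assemble the Open WebUI launch command, annotating each flag.
CMD="docker run -d"
CMD="$CMD -p 3000:8080"                                  # serve the UI on http://localhost:3000
CMD="$CMD --add-host=host.docker.internal:host-gateway"  # lets the container reach Ollama on the host
CMD="$CMD -v open-webui:/app/backend/data"               # persist chats/settings in a named volume
CMD="$CMD --name open-webui --restart always"            # container name; restart with the daemon
CMD="$CMD ghcr.io/open-webui/open-webui:main"
echo "$CMD"
```

The `--add-host` flag is the one most relevant to "Open WebUI doesn't see my models": without it, the container may be unable to reach the Ollama server running on the host.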
  • @recaia · 27 days ago

    Is it better than GPT-3.5?

  • @codewithbro95 · 27 days ago

    The 400B maybe, but it’s yet to be released!

  • @waves42069 · 25 days ago

    It's really slow.

  • @codewithbro95 · 24 days ago

    There are minimum requirements for running the model; it works pretty well on my M1 with 16 GB RAM and an 8-core GPU.

  • @-_.DI2BA._- · 24 days ago

    @codewithbro95 Does the pre-trained 400B model work on an M3 with 128 GB RAM?

  • @codewithbro95 · 24 days ago

    @-_.DI2BA._- Not sure; the 400B is yet to be released by Meta. They are still training it.

  • @Thecurioshow1 · 16 days ago

    😂😂😂😂😂

  • @viniciusmelo5652 · 16 days ago

    The content is fine, but explanation-wise...

  • @viniciusmelo5652 · 16 days ago

    @codewithbro95 When you say "just go to the documentation" or whatever, you didn't actually explain anything.

  • @codewithbro95 · 8 days ago

    @viniciusmelo5652 Thanks for the feedback, I will try my best to do better next time.

  • @benbork9835 · 28 days ago

    Stop clickbaiting, the 400B is not even out.

  • @codewithbro95 · 28 days ago

    Mark talks about it in the video?

  • @benbork9835 · 27 days ago

    @codewithbro95 If 70B is already this good, 400B is going to be crazy.

  • @tiolv1174 · 28 days ago

  • @codewithbro95 · 27 days ago

    🔥
