Ollama on Linux: Easily Install Any LLM on Your Server

Science & Technology

Ollama has just been released for Linux, which means it's now dead simple to run large language models on any Linux server you choose. I show you how to install and configure it on DigitalOcean.
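The install itself is a one-liner (a sketch assuming curl is installed; see the link below for the current instructions):

# Download and run the Ollama install script for Linux (it sets up a systemd service):
curl https://ollama.ai/install.sh | sh

# Then pull and chat with a model, e.g.:
ollama run llama2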
00:00 Installation on DigitalOcean
03:30 Running Llama2 on a Server
05:43 Calling a Model Remotely
12:26 Conclusion
#llm #machinelearning
Link: ollama.ai/download/linux
Support My Work:
Get $200 credit when you sign up to DigitalOcean: m.do.co/c/d05114f84e2f
Check out my website: www.ianwootten.co.uk
Follow me on twitter: / iwootten
Subscribe to my newsletter: newsletter.ianwootten.co.uk
Buy me a cuppa: ko-fi.com/iwootten
Learn how devs make money from Side Projects: niftydigits.gumroad.com/l/sid...
Gear I use:
14" Macbook Pro (US) - amzn.to/3ObEy8G
14" Macbook Pro (UK) - amzn.to/38Hg07d
Shure MV7 USB Mic (US) - amzn.to/3CRNUSD
Shure MV7 USB Mic (UK) - amzn.to/44rAoR4
As an affiliate I earn on qualifying purchases at no extra cost to you.

Comments: 48

  • @DataDrivenDailies · 7 months ago

    Just what I was looking for, thanks Ian!

  • @IanWootten · 7 months ago

    No problem!

  • @sto3359 · 9 months ago

    This is amazing news! I'm limited to 16GB RAM on my Macs, but not so on my Linux machines!

  • @datpspguy · 5 months ago

    I was using Ubuntu Desktop running Mixtral on Ollama so I could make API calls from my FastAPI app in VS Code, but realized I should separate them out and go headless for Ollama. I didn't realize that CORS was preventing outside calls from my dev machine, and this video helped once I found the GitHub page as well. Thanks for sharing.

  • @IanWootten · 5 months ago

    Glad to hear you sorted it!

  • @datpspguy · 5 months ago

    @IanWootten Thank you, I ended up storing the environment variable in the .conf file to bind the IP address, so it handles this automatically.
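    A minimal sketch of that setup, assuming the systemd service the Linux installer creates (values are illustrative):

      # Open an override file for the ollama service:
      sudo systemctl edit ollama

      # Add these lines, then save:
      #   [Service]
      #   Environment="OLLAMA_HOST=0.0.0.0:11434"   # listen on all interfaces, default port
      #   Environment="OLLAMA_ORIGINS=*"            # relax CORS; tighten this for anything public

      # Apply the change:
      sudo systemctl daemon-reload
      sudo systemctl restart ollama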

  • @perschinski · 15 days ago

    Great stuff, thanks a lot!

  • @timjx3675 · 8 months ago

    Mistral 7B is running really sweet on my old Asus (16GB RAM) laptop.

  • @IanWootten · 8 months ago

    Runs really fast on my MBP too, just started playing with it yesterday.

  • @timjx3675 · 8 months ago

    @IanWootten sweet

  • @rishavbharti5225 · 4 months ago

    This was a really helpful video, Ian! But I'm facing one issue: after running ollama serve, the server shuts down when I close the terminal. Please tell me if there is a way to prevent this. Thanks!
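    (In case it helps, a sketch of the usual fixes, assuming the standard Linux install: run Ollama as the systemd service rather than in your terminal, or detach it.)

      # Use the service the installer set up; it keeps running after you log out:
      sudo systemctl enable ollama
      sudo systemctl start ollama
      sudo systemctl status ollama    # check it is up

      # Or detach a foreground "ollama serve" from the terminal:
      nohup ollama serve > ollama.log 2>&1 &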

  • @PengfeiXue · 2 months ago

    Can we use Ollama to serve in production? If not, what is your suggestion?

  • @trapez_yt · 8 days ago

    I can't run it with "sudo service ollama start"; it says the following: ollama: unrecognized service
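    (The Linux installer registers Ollama as a systemd unit rather than a SysV init script, so a sketch of the systemctl equivalent, assuming the standard install:)

      sudo systemctl status ollama    # confirm the unit exists
      sudo systemctl start ollama

      # If the unit is missing, re-run the install script from ollama.ai/download/linux;
      # the unit file normally lives at /etc/systemd/system/ollama.service.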

  • @JordanCassady · 2 months ago

    Which version of Ubuntu did you choose? It seems to be missing from the video.

  • @BileGamer2002 · 4 months ago

    Hello. I'm developing an on-premises application that consumes Ollama via its API. However, after a few minutes, the Ollama server stops automatically. I would like to know if there is any way to keep it running until I stop it. Thank you very much.

  • @atrocitus777 · 4 months ago

    How does this scale for multiple users sending multiple requests at a time? Do you need to use a load balancer / reverse proxy? I don't think Ollama supports batch inference yet.

  • @jakestevens3694 · 4 months ago

    You would have to launch and run the application multiple times; the best way is to just use something like Docker. Otherwise, I believe there's the "screen" command. If I remember correctly, on Linux it lets you run applications in the CLI with multiple virtual "screens", or rather sessions; you would then want to make sure whatever port each instance uses is different from the others. Also note that the RAM each instance uses is its own, while CPU can be shared. Sharing RAM might be possible (with some tricks), but it's unlikely.
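    A rough sketch of that multi-instance idea (ports are illustrative; each instance loads its own copy of the model into RAM):

      # First instance on the default port:
      OLLAMA_HOST=127.0.0.1:11434 ollama serve &

      # Second instance on another port:
      OLLAMA_HOST=127.0.0.1:11435 ollama serve &

      # A reverse proxy (nginx, HAProxy, ...) can then spread requests across the two ports.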

  • @atrocitus777 · 4 months ago

    @jakestevens3694 What about pulling from a custom endpoint where I have my own hosted models? I want to run this on an air-gapped network that doesn't have any access to the internet, so if I could point it to an on-prem server I have, that would be awesome.

  • @74Gee · 1 month ago

    RunPod is very affordable too, from 17c per hour for an Nvidia 3080.

  • @IanWootten · 1 month ago

    Yeah, I wanted to do a comparison of all the new services appearing.

  • @peteprive1361 · 9 months ago

    I got an error while executing the curl command: Failure writing output to destination

  • @IanWootten · 9 months ago

    Weird. Perhaps try running it from a directory you are certain you have write access to.

  • @ITworld-gw9iy · 2 months ago

    For a 70B model, what server would I need to rent? The docs say at least 64GB of RAM... but there are no minimum specs in the docs for the Nvidia card. Who has experience with this?

  • @user-wr4yl7tx3w · 7 months ago

    Do you think it is safe to install it on your own laptop instead of a cloud server?

  • @IanWootten · 7 months ago

    Yes. Ollama has desktop versions too, and it doesn't send anything externally when you query it if you go that route. I have another video where I do this on my Mac.

  • @sugihwarascom · 7 months ago

    How come the model runs in 8GB of RAM? The docs themselves say it needs at least 16GB for Llama 2.

  • @IanWootten · 7 months ago

    No idea - I was going on experience using ollama rather than the model itself.

  • @blasandresayalagarcia3472 · 5 months ago

    What is the cost of web hosting Ollama or these types of LLM models?

  • @IanWootten · 5 months ago

    In this case, it'll be the price of the virtual machine you choose to install it on, so it depends on the provider.

  • @SuperRia33 · 1 day ago

    How do you connect to the server via a Python client or FastAPI for integration with projects/notebooks?

  • @IanWootten · 7 hours ago

    If you simply want to make a request to an API from Python, there are plenty of options. You can use a module from the standard library like urllib, or a popular library like requests.
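    Any HTTP client works; a quick sketch with curl against the default endpoint (the server address and model are illustrative):

      # POST a prompt to the Ollama REST API (default port 11434):
      curl http://YOUR_SERVER_IP:11434/api/generate -d '{
        "model": "llama2",
        "prompt": "Why is the sky blue?",
        "stream": false
      }'
      # Python's requests (or urllib) just needs to make the same POST with a JSON body.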

  • @AdarshSingh-rm6er · 23 days ago

    Hello Ian, it's a great video. I have a query and would be very thankful if you could help me; I've been stuck for 3 days. I'm trying to host Ollama on my server. I'm very new to Linux and don't understand what I'm doing wrong. I'm using nginx to proxy Ollama and have configured the nginx file, yet I'm getting an access denied error. I can show you the code if you want, please respond.
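    (For reference, a rough sketch of an nginx proxy in front of Ollama, assuming a Debian/Ubuntu-style layout; the domain is hypothetical, and the Host header line is a workaround some setups need because Ollama checks the request's host/origin.)

      # Put something like this in /etc/nginx/sites-available/ollama:
      #
      #   server {
      #       listen 80;
      #       server_name ollama.example.com;            # hypothetical domain
      #       location / {
      #           proxy_pass http://127.0.0.1:11434;     # Ollama's default local address
      #           proxy_set_header Host localhost:11434; # may be needed to pass Ollama's host/origin check
      #       }
      #   }

      sudo ln -s /etc/nginx/sites-available/ollama /etc/nginx/sites-enabled/ollama
      sudo nginx -t && sudo systemctl reload nginx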

  • @ankitvaghasiya3789 · 1 month ago

    thank you🦙

  • @IanWootten · 1 month ago

    No problem!

  • @jamiecropley · 7 months ago

    Has anyone got this running on anything lower than 8GB of RAM on DigitalOcean? I tried locally on my own computer with a huge prompt on a 3B model, and it only used around 1GB of RAM maximum.

  • @IanWootten · 5 months ago

    Yeah, it depends on the model itself. Ollama often lists the memory requirements on the model page, e.g. ollama.ai/library/llama2
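    As a rough illustration (the tags are from the Ollama library; memory figures are approximate, so check each model's page):

      ollama run llama2:7b      # around 8GB of RAM recommended
      ollama run orca-mini:3b   # a 3B model that fits in considerably less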

  • @VulcanOnWheels · 3 months ago

    0:08 How did you get to your pronunciation of Linux? 10:53 How could one correct the error occurring here?

  • @petermarin · 8 months ago

    Benefits of running it like this vs Docker?

  • @IanWootten · 8 months ago

    Running anything within a container will always mean the app runs slower.

  • @GenerativeAI-Guru · 7 months ago

    How do I change the IP and port for Ollama?

  • @IanWootten · 7 months ago

    Use the env var OLLAMA_HOST. e.g. OLLAMA_HOST=127.0.0.1:8001 ollama serve

  • @GenerativeAI-Guru · 7 months ago

    Thanks

  • @nickholden585 · 7 months ago

    Right now there is an issue with Ollama where, if you create a model, it spams you with "do not have permission to open Modelfile". It's super odd, because even if you give full read and execute rights to every user, or run the command with sudo, it still fails. The only viable workaround is to run it from /tmp.

  • @IanWootten · 7 months ago

    This is an issue with the current user not having access to the ollama group. There's a recommended solution posted here (though it sounds like it might not be completely resolved): github.com/jmorganca/ollama/issues/613#issuecomment-1756293841

  • @nickholden585 · 7 months ago

    @IanWootten Saw that. Even after running sudo usermod -a -G ollama $(whoami), it still won't work. The idea to run it in /tmp came from that thread, haha. Outside of this issue, the rest of the project is pretty cool IMO. Local LLMs with reinforcement learning, wifi and direct brain integration will be the future.
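    For anyone hitting the same thing, a sketch of the two workarounds from this thread (the group change only applies to new login sessions, which may be why it appears not to work; the model name is illustrative):

      # Add your user to the ollama group, then start a fresh session so it takes effect:
      sudo usermod -a -G ollama $(whoami)
      newgrp ollama                        # or log out and back in

      # Fallback from this thread: build from a world-readable directory such as /tmp:
      cp Modelfile /tmp/ && cd /tmp && ollama create mymodel -f Modelfile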

  • @davidbl1981 · 5 months ago

    Even if the killer is dead on the floor, the killer is still there and would still be a killer 😅 so the correct answer would be 3.

  • @IanWootten · 5 months ago

    A killed killer
