Using Ollama to Run Local LLMs on the Raspberry Pi 5

Science & Technology

My favourite local LLM tool, Ollama, is simple to set up and works on a Raspberry Pi 5. I check it out and compare it to some benchmarks from more powerful machines.
00:00 Introduction
00:41 Installation
02:12 Model Runs
09:01 Conclusion
Ollama: ollama.ai
Blog: www.ianwootten.co.uk/2024/01/...
Support My Work:
Check out my website: www.ianwootten.co.uk
Follow me on Twitter: twitter.com/iwootten
Subscribe to my newsletter: newsletter.ianwootten.co.uk
Buy me a cuppa: ko-fi.com/iwootten
Learn how devs make money from Side Projects: niftydigits.gumroad.com/l/sid...
Gear:
RPi 5 from Pimoroni on Amazon: amzn.to/4aoalOd
As an affiliate I earn on qualifying purchases at no extra cost to you.

Comments: 93

  • @metacob · 2 months ago

    I just got a RPi 5 and ran the new Llama 3 (ollama run llama3). I was not expecting it to be this fast for something that is on the level of GPT-3.5 (or above). On a Raspberry Pi. Wow.

  • @brando2818 · 1 month ago

    I just received my Pi, and I'm about to do the same thing. Are you doing anything else on it?

  • @sweetbb125 · 1 month ago

    I've tried running Ollama on my Raspberry Pi 5, as well as an Intel Celeron-based computer and an old Intel i7-based computer, and it worked everywhere. It is really impressive. Thank you for this video showing me how to do it!

  • @nilutpolsrobolab · 2 months ago

    Such a calm tutorial but so informative💙

  • @KDG860 · 3 months ago

    Thank u for sharing this. I am blown away.

  • @markr9640 · 4 months ago

    Really useful stuff on your videos. Subscribed 👍

  • @SocialNetwooky · 5 months ago

    As I just said on the Discord server: you might be able to squeeze a (very) tiny bit of performance by not loading the WM and just interacting with Ollama via SSH. But great that it works so well with TinyLlama! Phi-based models might work well too: Dolphin-Phi is a 2.7B model.

  • @BradleyPitts666 · 4 months ago

    I don't follow? What VM? ssh into what?

  • @SocialNetwooky · 4 months ago

    @BradleyPitts666 WM ... window manager.

  • @SocialNetwooky · 4 months ago

    @BradleyPitts666 Meh ... YouTube isn't showing my previous (phone-written) answer again, so I can't see or edit it, and this might be a near-identical answer to another one, sorry. I blame YouTube :P The edit is that I disabled even more services and got marginally faster answers. So: WM is the window manager. It uses resources (processor time and memory) while it runs; not a lot, but it isn't negligible. Disabling the WM with 'sudo systemctl disable lightdm' and rebooting is beneficial for this particular use case. Technically, just calling 'systemctl stop lightdm' would work too, but by disabling and rebooting you make sure any services lightdm started really aren't running in the background. You can then use Ollama on the command line. If you want to use it from your main system without hooking the RPi up to a monitor and plugging in a keyboard, you can enable sshd (the SSH daemon, which isn't enabled by default in the Pi OS image, AFAIK), SSH into it, and use Ollama there (that uses a marginal amount of memory though). I also disabled Bluetooth, sound.target and graphical.target, snapd (though I only stop that one, as I need it for nvim), and pipewire and pipewire-pulse (those two are disabled using 'systemctl --user disable pipewire.socket' and 'systemctl --user disable pipewire-pulse.socket'). Without any models loaded, at idle, I only have 154MB of memory used. With that configuration, TinyLlama answering 'why is the sky blue' gives me 13.02 t/s on my RPi 5, nearly a third faster than with all the unneeded services running.
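
    For anyone reproducing these tokens-per-second figures, one option is to query the local Ollama server directly and compute the rate from the timing fields it returns. Below is a minimal Python sketch, assuming the default server on localhost:11434, the requests library installed, and tinyllama already pulled:

        import requests

        # Ask the local Ollama server a question and report the generation speed.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "tinyllama", "prompt": "Why is the sky blue?", "stream": False},
            timeout=600,
        )
        data = resp.json()
        print(data["response"])

        # eval_count is tokens generated; eval_duration is reported in nanoseconds.
        print(f"{data['eval_count'] / data['eval_duration'] * 1e9:.2f} tokens/s")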

  • @DominequeTaylor · 2 days ago

    What about the new AI attachment they announced for the Pi to do AI stuff? Would this work faster?

  • @SocialNetwooky · 2 days ago

    @DominequeTaylor As far as I know it's for visual recognition, not for LLMs.

  • @isuckatthat · 5 months ago

    I've been testing llama.cpp on it and it works great as well. Although I've had to use my air purifier as a fan to keep it from overheating, even with the aftermarket cooling fan/heatsink on it.

  • @whitneydesignlabs8738 · 5 months ago

    Thanks, Ian. Can confirm. It works and is plausible. I am getting about 8-10 minutes for multi-modal image processing with LLaVA. I find the tiny models to be too dodgy for good responses, and have currently settled on Llama2-uncensored as my go-to LLM for the moment. Response times are acceptable, but I'm looking for better performance. (BTW my Pi 5 is using an NVMe drive and a HAT from Pineberry.)
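
    For reference, LLaVA can also be driven from a script rather than the interactive prompt. A rough Python sketch, assuming the ollama client library (pip install ollama), a local Ollama server, and a placeholder image path:

        import ollama  # pip install ollama

        # Ask llava to describe a local image; 'photo.jpg' is a placeholder path.
        result = ollama.generate(
            model="llava",
            prompt="Describe this image in one sentence.",
            images=["photo.jpg"],
        )
        print(result["response"])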

  • @IanWootten · 5 months ago

    Nice, I'd like to compare to see how much faster an NVMe would run these models.

  • @whitneydesignlabs8738 · 5 months ago

    @IanWootten If you want to do a test, let me know. I could run the same model and query as you, and we could compare notes. My guess is that processing time has more to do with CPU and RAM, but I'm not 100% sure. Having said that, a large (1TB+) NVMe makes storing models on the Pi convenient. Also, boot times are rather expeditious. When the Pi 5 was announced, I knew right away that I wanted to add an NVMe via the PCI Express connector. Worth the money, IMO.

  • @BillYovino · 4 months ago

    Thanks for this. So far I've tested TinyLlama, Llama2, and Gemma:2b with the question "Who's on first" (a baseball reference from a classic Abbott and Costello comedy skit). TinyLlama and Llama2 understood that it was a baseball reference, but had some bizarre ideas on how baseball works. Gemma:2b didn't understand the question, but when asked "What is a designated hitter?" it came up with an equally incorrect answer.

  • @IanWootten · 4 months ago

    Nice. I love your HAL replica. Was that done with a Raspberry Pi?

  • @BillYovino · 4 months ago

    @IanWootten Yes, a 3B+. I'm working on a JARVIS that uses the ChatGPT API, and I'm interested in performing the AI function locally. That's why I'm looking into Ollama.

  • @Augmented_AI · 4 months ago

    How do we run this in Python, for voice-to-text and text-to-speech for a voice assistant?
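
    One way to handle the middle step from Python is the ollama client library, with speech-to-text feeding the prompt and text-to-speech reading the reply. A minimal sketch, assuming pip install ollama and a model already pulled; the STT and TTS ends are left as placeholders:

        import ollama

        def ask_llm(question: str) -> str:
            """Send recognised speech to a local model and return the reply text."""
            reply = ollama.chat(
                model="tinyllama",
                messages=[{"role": "user", "content": question}],
            )
            return reply["message"]["content"]

        # A speech-to-text library would produce `question`, and a TTS engine
        # would speak the returned string.
        print(ask_llm("Why is the sky blue?"))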

  • @daveys · 3 months ago

    The Pi 5 is pretty good when you consider the cost, and what you can do with it. I picked one up recently for Python coding, and it runs Jupyter Notebook beautifully on my 4k screen. I might give the GPIO a whirl at some point in the near future.

  • @MarkSze · 5 months ago

    Might be worth trying the quantised versions of Llama 2.

  • @donmitchinson3611 · 1 month ago

    Thanks for the video and testing. I was wondering if you have tried setting num_threads=3. I can't find the video where I saw this, but I think they set it before calling Ollama, like an environment variable. It's supposed to run faster. I'm just building an RPi 5 test station now.
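
    If memory serves, the option is spelled num_thread in Ollama and can be passed per request rather than as an environment variable. A hedged Python sketch using the ollama client library; whether 3 threads actually beats the default on the Pi 5's four cores is something to benchmark:

        import ollama

        # Generate with an explicit thread count (mirrors the num_threads=3 tip above).
        result = ollama.generate(
            model="tinyllama",
            prompt="Why is the sky blue?",
            options={"num_thread": 3},
        )
        print(result["response"])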

  • @Vhbaske · 2 months ago

    In the USA, Digilent also has many Raspberry Pi 5s available!

  • @m41ek · 5 months ago

    Thanks for the video! What's your camera, please?

  • @davidkisielewski605 · 4 months ago

    Hi! I have the M.2 NVMe HAT and I am waiting for my Coral accelerator. Does anyone else run with the accelerator, and how much does it speed things up? I know what they say it does, but I am interested in real-world figures. I'll post when it arrives from Blighty.

  • @BenAulbrook · 5 months ago

    I finally got my Pi 5 yesterday and already have Ollama working with a couple of models. But I'd like to provide text-to-speech for the output on the screen, and I'm having a hard time wrapping my brain around how it works... like allowing the Ollama output from the terminal to be turned into audible speech. There are so many resources to pick from, and just getting the code/scripts working is a hurdle. I wish it was easy to install an external package and have the internal functions just "work" without having to move files and scripts around; it becomes confusing sometimes.

  • @Wolkebuch99 · 5 months ago

    Well, how about a Pi cluster where one node runs Ollama and one runs a screen reader SSH'd into the Ollama node? You could add another layer and have another node running NLP for the screen reader node, or a series of nodes connected to animatronics and sensors.

  • @davidkisielewski605 · 4 months ago

    You can run Whisper alongside your model, from what I read. TTS and STT.

  • @AlwaysCensored-xp1be · 3 months ago

    Been having fun running different LLMs. The small ones are fast, the 7B ones are slow. I have the 8GB Pi 5. The small LLMs should run on a Pi 4? TinyLlama has trouble adding 2+2. They also seem monotropic, spitting out random, vaguely related answers. I need more Pi 5s so I can network a bunch with a different LLM on each.

  • @dinoscheidt · 5 months ago

    Would love to know if the Google Coral board would provide a substantial improvement, and if Ollama can even utilise it. Also, how it would compare to a Jetson Nano. Nonetheless: thank you very much for posting this. Chirps to the birds ❤️

  • @IanWootten · 5 months ago

    That would be great to try out if I could get my hands on one.

  • @dibu28 · 2 months ago

    Also try MS Phi-2 for Python, and Gemma 2B.

  • @nmstoker · 5 months ago

    Great video, but it's not a good idea to encourage use of those all-in-one curl commands. Best to download the shell script and ideally look over it before you run it; even if you don't check it first, at least you have the file if something goes wrong.

  • @IanWootten · 5 months ago

    Yes, I've mentioned this in my other videos and in my blog post on this too.

  • @nmstoker · 5 months ago

    @IanWootten Ah, sorry, hadn't seen that. Anyway, thanks again for the video! I've subscribed to your channel as it looks great 🙂

  • @1091tube · 4 months ago

    Could the compute process be distributed, like grid compute? Four Raspberry Pis?

  • @IanWootten · 4 months ago

    Not really - a model file is downloaded to the machine using Ollama and brought into memory.

  • @technocorpus1 · 2 months ago

    Awesome! I want to try this now! Can someone tell me if it is necessary to install the model on an external SSD?

  • @IanWootten · 2 months ago

    Not necessary, but it may be faster. For all the experiments here I was just using a microSD.

  • @technocorpus1 · 2 months ago

    @IanWootten That's just amazing to me. I have a Pi 3, but am planning on upgrading to a Pi 5. After I saw your video, I downloaded Ollama onto my Windows PC. It only has 4GB of RAM, but I was still able to run several models!

  • @fontende · 5 months ago

    Maybe better to try Mozilla's genius one-file LLM container project, llamafile. I was able to run LLaVA in llamafile on my 2011 laptop (some ancient GPU) with Windows 8; it's also an image-scanning LLM. I've tested Ollama and it can't run on Windows 8.

  • @juanmesid · 5 months ago

    You're from the Discord server! Keep going.

  • @IanWootten · 5 months ago

    You mean the Ollama one? I'm on there from time to time.

  • @jdray · 5 months ago

    @IanWootten Just posted this video there. Glad to know you're part of the community.

  • @Lp-ze1tg · 3 months ago

    Was this Pi 5 using a microSD card or external storage? How big a storage size is suitable?

  • @IanWootten · 3 months ago

    Just using the microSD. I'd imagine speeds would be a fair bit better from USB or NVMe.

  • @markmonroe4154 · 5 months ago

    This is a good start - I bet the Raspberry Pi makers have a Pi 6 in the works with a better GPU to really drive these LLMs.

  • @IanWootten · 5 months ago

    No doubt they will. But the Pi 4 was released 4 years ago, so you might have to wait a while.

  • @madmax2069 · 4 months ago

    That's wishful thinking. You might as well try to figure out how to run an ADLINK Pocket AI on a Pi 5.

  • @Bigjuergo · 5 months ago

    Can you connect it with speech recognition and make TTS output with a pretrained voice model (*.index and *.pth files)?

  • @IanWootten · 5 months ago

    You probably could, but it wouldn't give a quick enough response for something like a conversation.

  • @whitneydesignlabs8738 · 5 months ago

    @IanWootten I am working on something similar, but using a Pi 4 for STT & TTS (and animatronics) and a dedicated Pi 5 for running the LLM with Ollama like Ian demonstrates. They are on the same network and use MQTT as the communication protocol. This is for a robotics project.

  • @isuckatthat · 5 months ago

    I've been trying to do this, but it's impossibly hard to get TTS set up.

  • @donniealfonso7100 · 5 months ago

    @isuckatthat Yes, not easy. I was trying to implement speech with Google WaveNet using the Data Slayer YouTube example. I put the key reference in the Pi's user .profile as an export. The script runs okay now, creating the MP3 files, but there's no speech, so I pretty much gave up as I have other fish to fry.

  • @nmstoker · 5 months ago

    @isuckatthat Have you tried espeak? It would give robotic-quality output, but it uses very little processing and works fine on a Pi.
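
    Along those lines, piping an Ollama reply into espeak takes only a few lines. A rough Python sketch, assuming the ollama client library and espeak installed (e.g. sudo apt install espeak):

        import subprocess

        import ollama

        # Generate a reply locally, then speak it with espeak (robotic, but cheap on a Pi).
        reply = ollama.generate(model="tinyllama", prompt="Tell me a one-line joke.")
        subprocess.run(["espeak", reply["response"]], check=True)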

  • @NicolasSilvaVasault · 1 month ago

    That's super impressive even if it takes quite a while to respond; it is a RASPBERRY PI.

  • @IanWootten · 1 month ago

    EXACTLY!

  • @user-vl4vo2vz4f · 4 months ago

    Please try adding a Coral module to the Pi and see the difference.

  • @madmax2069 · 4 months ago

    A Coral module is not suited for this. It lacks the RAM to really help an LLM run. What you really need is an external GPU, something like one of those ADLINK Pocket AI GPUs to hook up to the system, BUT that only has 4GB of VRAM.

  • @chetana9802 · 5 months ago

    Now let's try it on a cluster or an Ampere Altra?

  • @IanWootten · 5 months ago

    Happy to give it a try if there's one going spare!

  • @galdakaMusic · 13 days ago

    What about renewing this video with the new RPi AI HAT? Thanks

  • @IanWootten · 13 days ago

    Could do, but I don't think Ollama would be able to leverage it, plus it's not out yet.

  • @GuillermoTs · 3 months ago

    Is it possible to run on a Raspberry Pi 3?

  • @IanWootten · 3 months ago

    Maybe one of the smaller models, but it'll run a lot slower than here.

  • @anonymously-rex-cole · 3 months ago

    Is that realtime? Is that how fast it replies?

  • @IanWootten · 3 months ago

    All the text model responses are in realtime. I've only made edits when using LLaVA, since there was a 5-minute delay between hitting enter and it responding...

  • @TreeLuvBurdpu · 4 months ago

    What if you put a compute module on it or something?

  • @IanWootten · 4 months ago

    A Compute Module is an RPi in a slightly different form, so I think it would behave the same.

  • @AlexanderGriaznov · 2 months ago

    Am I the only one who noticed TinyLlama's response to "why is the sky blue?" was shitty? What the heck, rust causing the blue colour of the sky?

  • @IanWootten · 2 months ago

    Others have mentioned it in the comments too. It is a much smaller model, but there are many others to choose from (albeit possibly slower).

  • @zachhoy · 5 months ago

    I'm curious why run it on a Pi instead of a proper PC?

  • @IanWootten · 5 months ago

    To satisfy my curiosity - to see whether it's technically possible on such a low-powered, cheap machine.

  • @zachhoy · 5 months ago

    @IanWootten Thanks for the genuine response :D Yes, I can see that drive now.

  • @TreeLuvBurdpu · 4 months ago

    There are lots of videos of people running it on their PC, but if you use it all the time it will hog your PC all the time. There are several reasons you might want a dedicated host.

  • @marsrocket · 3 months ago

    What's the point of running an LLM locally if the responses are going to be nonsense? That blue sky response was ridiculous.

  • @IanWootten · 3 months ago

    The response for that one model/prompt may have been, but there are plenty of others to choose from.

  • @allurbase · 5 months ago

    How big was the image? Maybe that affected the response time. Very cool, although I'm not convinced by TinyLlama or the speed for a 7B model, but it's still crazy that we're getting close. You should try something with more power like a Jetson Nano. Thanks!!

  • @IanWootten · 5 months ago

    Less than 400KB. Might try a Jetson Nano if I get my hands on one.

  • @Tarbard · 5 months ago

    I liked tinydolphin better than tinyllama.

  • @IanWootten · 5 months ago

    Not tried it out yet.

  • @pengain4 · 2 months ago

    I dunno. It seems cheaper to buy an actual second-hand GPU to run Ollama on than to buy an RPi. [Partially] a joke. :)

  • @IanWootten · 2 months ago

    Possibly if you already have a machine. This might work out if you don't. Power consumption is next to nothing on the Pi too.

  • @blender_wiki · 5 months ago

    Too expensive for what it is. Interesting proof of concept, but absolutely useless and inefficient in a production context.

  • @arkaprovobhattacharjee8691 · 5 months ago

    This is so exciting! Can you pair this with a Coral TPU and then check the inference speed? I was wondering if that's possible.

  • @madmax2069 · 4 months ago

    The Coral TPU isn't suited for this; it lacks the RAM to do any good with an LLM. What you'd need is one of those ADLINK Pocket AI GPUs, but it only has 4GB of VRAM.

  • @arkaprovobhattacharjee8691 · 4 months ago

    @madmax2069 Makes sense.

  • @BradleyPitts666 · 4 months ago

    I have CPU usage at 380% when llama2 is responding. Has anyone else tested this?
