Use Your Self-Hosted LLM Anywhere with Ollama Web UI

Science & Technology

Take your self-hosted Ollama models to the next level with Ollama Web UI, which provides a beautiful interface and features like chat history, voice input, and user management. We'll also explore how to use this interface and the models that power it on your phone using the powerful Ngrok tool.
Watch my other Ollama videos - • Get Started with Ollama
Links:
Code from the video - decoder.sh/videos/use-your-se...
Ollama - ollama.ai
Docker - docs.docker.com/engine/install/
Ollama Web UI - github.com/ollama-webui/ollam...
Ngrok - ngrok.com/docs/getting-started/
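Quick reference - the core commands from the video, as a rough sketch (the image tag, ports, and volume name follow the Ollama Web UI README defaults and may differ for your setup):

    # Run the Ollama Web UI container against a locally installed Ollama
    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v ollama-webui:/app/backend/data \
      --name ollama-webui --restart always \
      ghcr.io/ollama-webui/ollama-webui:main

    # Then expose the UI to the internet with ngrok (free account required)
    ngrok http 3000
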
Timestamps:
00:00 - Is this free ChatGPT?
00:16 - Tools Needed
00:19 - Tools: Ollama
00:25 - Tools: Docker
00:38 - Tools: Ollama Web UI
00:55 - Tools: Ngrok
01:12 - Ollama status check
01:37 - Docker command walkthrough
04:20 - Starting the docker container
04:33 - Container status check
04:53 - Web UI Sign In
05:17 - Web UI Walkthrough
07:11 - Getting started with Ngrok
07:55 - Running Ngrok
08:29 - Ollama Web UI on our Phone!!
09:37 - Outro - What's Next?
Credits:
Wikimedia.org for the photo of Earth

Comments: 208

  • @kameit00 · 3 months ago

    Just in case you missed it, your auth token was visible when the window position on your screen changed. You might want to regenerate it. Thanks for posting your videos!

  • @decoder-sh · 3 months ago

    Good eye! That was one of the things I told myself I wouldn’t do as I started this process, and of course it’s the first thing I did 😰 But don’t worry, I regenerated it before publishing the video as part of good hygiene. Stay tuned to see what other PII I leak 😅

  • @proterotype · 3 months ago

    Good man @kameit00

  • @kornflakesss · 3 months ago

    Such a good person

  • @JarppaGuru · 2 months ago

    If we can regenerate it, it won't matter? LOL. There's nothing good about keys, they're only for tracking

  • @steftrando · 3 months ago

    See, these are the types of KZread tech videos I like to watch. This guy is clearly a very knowledgeable senior dev, and he puts more priority on the tech than on a fancy KZreadr influencer setup.

  • @decoder-sh · 3 months ago

    Thank you for watching, and for the kind words! Don't expect to see me on a sound stage anytime soon 😂

  • @mikect05 · 3 months ago

    Like how KZread used to be. Less loud intro music. Less advertising. Fewer sponsor segments. Fewer clickbait titles. Less hiding the actual valued info among wasted time. Makes sense though... if you're explicit about what your video is about, the people interested will watch; but if you make it a mystery, then hopefully anyone who thinks it might be helpful will click, and then they have to parse through... Fack, "don't recommend (Shii) channel!!"

  • @TheColdharbour · 3 months ago

    Really enjoyed this video too! Complete success, really well paced and carefully explained! Looking forward to the next one (open source LLMs) - thanks for the great tutorials! :)

  • @imadsaddik · 3 months ago

    Oh man, I don't know how and why KZread recommended your video, but I am very happy that they did. I enjoyed this video a lot

  • @decoder-sh · 3 months ago

    Happy to hear it, welcome to my channel!

  • @xXWillyxWonkaXx · 3 months ago

    I second that. Straight to the point, very sharp with the info, thank you bro

  • @decoder-sh · 2 months ago

    @ts757arse I'm thrilled to hear that! I'd love to hear more about your business if you're willing to share

  • @decoder-sh · 2 months ago

    @ts757arse Looks like it got nuked :( I need to set up a google org email with my domain so I can talk to viewers 1:1

  • @decoder-sh · 2 months ago

    @ts757arse Ah so it's like pen testing simulation and planning? Very cool, that's a necessary service. Self-hosting an uncensored model seems like the perfect use case. Nuke test still fails, but I finally set up "david at decoder dot sh"!

  • @Bearistotle_ · 4 months ago

    Amazing tutorial, all the steps are broken down and explained very well

  • @annbjer · 3 months ago

    Really cool stuff, thanks for keeping it clear and to the point. It’s awesome that experimenting with local and custom models is becoming more accessible. I’m definitely planning to give it a try and hope to design my own custom interfaces someday. Just subbed and looking forward to learning more!

  • @decoder-sh · 3 months ago

    I look forward to seeing what you create! I have some really fun videos planned, thanks for the sub :)

  • @SashaBaych · 3 months ago

    You are really good at explaining things! Thank you so much. No useless hype, just plain useful hands on information that is completely understandable.

  • @decoder-sh · 3 months ago

    Thank you so much for watching and leaving a comment! I’ll continue to do my best to make straightforward and easy to understand videos in the future 🫡

  • @chrisumali9841 · 3 months ago

    Thanks for the demo and info, have a great day

  • @RamseyLEL · 4 months ago

    Solid, detailed, and thorough video tutorial

  • @eric.o · 4 months ago

    Excellent video, super easy to follow

  • @mikect05 · 3 months ago

    So excited to find your channel... looking forward to more videos. I'm a total noob so feel a bit like I'm floating out in space.

  • @paoloavogadro7329 · 3 months ago

    Very well done, quick and clean to the point.

  • @decoder-sh · 3 months ago

    I'm glad you think so, thanks for watching!

  • @aolowude · 2 months ago

    Worked like a charm. Great walkthrough!

  • @decoder-sh · 2 months ago

    Happy to hear it!

  • @dhmkkk · 2 months ago

    What a great tutorial please keep on making more content!

  • @decoder-sh · 2 months ago

    Thanks for watching, I certainly will!

  • @WolfeByteLabs · 17 days ago

    Thanks so much for this video man. Awesome entry point to local + private llms

  • @decoder-sh · 16 days ago

    My pleasure, thanks for watching!

  • @adamtechnology3204 · 3 months ago

    This was really beneficial, thank you a lot!

  • @scott701230 · 3 months ago

    Awesomeness! Thank you for the Tutorial!

  • @decoder-sh · 3 months ago

    My pleasure, thanks for watching!

  • @anthony.boyington · 3 months ago

    Very good video and easy to follow.

  • @ronaldokun · 2 months ago

    Thank you for the exceptional tutorial!

  • @decoder-sh · 2 months ago

    My pleasure, thanks for subscribing!

  • @anand83r · 3 months ago

    Very useful, simple to understand, and very focused on the subject 👌. It's hard to find Americans like this who deliver the message without sugarcoating or too much filler content. Good job 👌. People, it's worth supporting this person 👏

  • @decoder-sh · 3 months ago

    Thank you for your support!

  • @collinsk8754 · 3 months ago

    Excellent tutorial 👏👏!

  • @decoder-sh · 3 months ago

    I’m glad you enjoyed it!

  • @rgm4646 · 12 days ago

    This works great! thanks!!

  • @decoder-sh · 7 days ago

    Thank you so much!

  • @JacobLehman-ov4eu · 25 days ago

    Thanks, very helpful and simple. I'm very new to all of this (and coding) but it really fascinates me. I would love to be able to set up an LLM with RAG and use it in the web UI so that my coworkers could test projects. I will get there, and your content is very helpful!

  • @bhagavanprasad · 19 days ago

    Excellent. thank you

  • @MacProUser99876 · 3 months ago

    Beautiful stuff, mate!

  • @decoder-sh · 3 months ago

    Cheers, thank you!

  • @synaestesia-bg3ew · 3 months ago

    @decoder-sh Everything seems to look so easy with you. I did this a month ago, but it was not so easy.

  • @decoder-sh · 3 months ago

    @synaestesia-bg3ew thank you for the kind words!

  • @CodingScot · 3 months ago

    Could it be done through Docker, Portainer, and nginx proxy manager as well?

  • @bndy0 · 3 months ago

    Ollama WebUI has been renamed to Open WebUI; a video tutorial on how to update would be helpful!

  • @decoder-sh · 3 months ago

    Looks like it's the same codebase, but I could possibly go over the migration? Appears to be just a couple of commands: github.com/open-webui/open-webui?tab=readme-ov-file#moving-from-ollama-webui-to-open-webui
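
    Roughly, the migration in that README boils down to moving the old docker volume to a new one. An untested sketch based on the linked instructions (image and volume names may have changed since):

        # Remove the old container; its data volume survives
        docker rm -f ollama-webui
        # Copy the old volume's data into a fresh one
        docker volume create open-webui
        docker run --rm -v ollama-webui:/from -v open-webui:/to alpine sh -c "cp -a /from/. /to/"
        # Start the renamed image against the new volume
        docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
          -v open-webui:/app/backend/data --name open-webui --restart always \
          ghcr.io/open-webui/open-webui:main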

  • @Uconnspartan · 3 months ago

    Great content!

  • @decoder-sh · 3 months ago

    Thanks for watching!

  • @UnchartedWorlds · 4 months ago

    Thank you keep it up! Sub made

  • @kashifrit · 22 days ago

    It's an extremely good video

  • @decoder-sh · 22 days ago

    Thank you for watching!

  • @keylanoslokj1806 · 3 months ago

    Great info. What kind of beast workstation server do you need to set up, though, to run your own GPT?

  • @decoder-sh · 3 months ago

    Depends what your needs are! If you just want to use a small model for simple tasks, any gpu in the last 5(?) years should be fine, or a beefy cpu. I’m using an M1 MacBook Pro, though I’ve also got requests for Linux demos and would be happy to show you how models run on a 2080ti

  • @Candyapplebone · 2 months ago

    Interesting. You really didn’t have to code that much to actually get it all up and running.

  • @decoder-sh · 2 months ago

    Yes indeed! There will be more coding in future videos, but in the beginning I’d like to show what’s possible without much coding experience

  • @soyhenryxyz · 3 months ago

    For cloud hosting of the Ollama web UI, which services do you suggest? Additionally, are there any services you recommend for API use to avoid installing and storing large models? appreciate any insight here and great video!

  • @simonbrennan7283 · 3 months ago

    Most people considering self hosting would be doing so because of privacy and security concerns, which I think is the target audience for this video. Cloud hosting totally defeats the purpose.

  • @decoder-sh · 3 months ago

    I don't have any recommended services at the moment, but I would like to research and create a video reviewing a few of the major providers in the near future. Ditto for API providers, I've been mostly focused on self-hosting at the moment. Some that come to mind are openAI (obviously), Mistral (mistral.ai/product/), and one that was just announced is Groq (wow.groq.com/)

  • @yashkaul802 · 23 days ago

    please make a video on deploying this on huggingface spaces or AWS ECS. Great video!

  • @baheth3elmy16 · 3 months ago

    I really liked your video, I subscribed of course. I don't think Ollama adds much with the current abundant services available for mobile.

  • @decoder-sh · 3 months ago

    Thanks for watching and subscribing! What are your current favorite LLM apps?

  • @baheth3elmy16 · 3 months ago

    @decoder-sh I use Oobabooga sometimes on its own and sometimes I use SillyTavern as a front, and Faraday, for local LLM

  • @jayadky5983 · 3 months ago

    Hey, good work mate! I wanted to know if we could self-host our Ollama API through Ngrok just as we hosted the WebUI? I am using a server to run ollama and I have to ssh in every time to use it. So, can we instead forward the ollama localhost API to ngrok and then use it on my machine?

  • @decoder-sh · 3 months ago

    Yeah you could definitely do that! Let me know how it works out for you :)

  • @ollimacp · 3 months ago

    Splendid tutorial, thanks a lot :) You got a like and a sub from me! And if I write a custom model (MemGPT + CrewAI) and want to use the WebUI, would it be better to try to get the model into an ollama modelfile, or just expose the model via an API which mimics the standard (openai)?

  • @decoder-sh · 3 months ago

    Thanks for watching! It looks like MemGPT isn't a model as much as a library that uses models (via openAI and their own endpoint) to act as agents. So a modelfile wouldn't work, but it does look like they have some instructions for connecting to a UI (oobabooga in this case: memgpt.readme.io/docs/local_llm). Best of luck, let us know how it goes!

  • @khalidkifayat · 2 months ago

    Great one. A few questions here: 1. Can you throw some light on input/output token consumption to/from the LLM? 2. How can we give this app to a client as a service provider? Thank you

  • @iseverynametakenwtf1 · 3 months ago

    This is cool. Might see if I can get LM Studio to work. Why not host your own server too?

  • @luiferreira8437 · 3 months ago

    Thanks for the video. I would like to know if it is possible to have this done with a RAG system built on ollama, and also add a diffuser model (like Stable Diffusion) to generate images

  • @decoder-sh · 3 months ago

    This is my first time hearing someone talk about combining RAG with image generation - what kind of use case do you have in mind?

  • @luiferreira8437 · 3 months ago

    @decoder-sh The idea I have is to improve model accuracy on a certain topic, while having the option to generate images if needed. One use case would be writing a book: keeping character descriptions and images consistent. I actually didn't have both in mind simultaneously, but it could be interesting

  • @decoder-sh · 3 months ago

    That seems a bit more like a knowledge graph where you update connections or attributes of entities as the model parses more text. I'll be covering some RAG topics in the near future and would like to eventually get to knowledge graphs and their use with LLMs

  • @safetime100 · 3 months ago

    Legend ❤

  • @SODKGB · 3 months ago

    I would like to make changes to the provided interface, for example hide/remove the left menu bar, change colors, change fonts, or add some graphics. Any pointers in the right direction would be great. Thinking I might need to download the web-ui and edit the source before starting docker and ollama?

  • @decoder-sh · 3 months ago

    The UI already allows you to show/hide the left menu (there's a tiny button that's hard to see, but it's there). Beyond that, yes you'd need to download their repo and manually edit their frontend code. Let me know how it turns out!

  • @SODKGB · 3 months ago

    @decoder-sh It's been a lot of hacking. At least the ollama for windows in combination with docker is fast and easy. Potential exists to use Python to send and receive content from local server and modify the content to accept variables via get or post.

  • @alizaka1467 · 3 months ago

    Can we use GPT models with this? Thanks. Great video as always

  • @decoder-sh · 3 months ago

    Do you mean OpenAI? Yes you can add your OpenAI API key to the webui in Settings. Sorry for not showing that!

  • @VimalMeena7 · 3 months ago

    Everything works fine locally, but when I run it on the internet using ngrok it shows "Ollama WebUI Backend Required", although my backend is running... On my local system I am getting responses to my queries. Please help, I am not able to resolve it.

  • @spencerfunk6697 · 1 month ago

    integration with open interpreter would be cool

  • @aimademerich · 4 months ago

    Phenomenal

  • @hmdz150 · 3 months ago

    This is amazing, does the ollama web ui work with pdf files too?

  • @decoder-sh · 3 months ago

    It does have document uploading abilities, but I haven’t looked at their code to see how that actually works under the hood. I believe it does do some naive parsing and embedding generation. Try uploading a document and asking a question about it!

  • @thegamechanger3793 · 1 month ago

    Do you need a good CPU/RAM to run this? Just trying to see whether installing docker/LLM/ngrok requires a high-end system?

  • @decoder-sh · 1 month ago

    It depends on the model you want to run. Docker & ngrok don't require many resources at all, and I've seen people run (heavily quantized) 7B models on a Raspberry Pi. I'm using an M1 MacBook, but it's overkill for smaller models.

  • @peterparker5161 · 10 days ago

    You can run Phi-3 mini quantized on an entry-level laptop with 8GB RAM. If you have 4GB VRAM, the response will be very quick.

  • @Fordtruck4sale · 3 months ago

    How does this handle multiple users wanting to load multiple different models at the same time? FIFO?

  • @decoder-sh · 3 months ago

    Yes I believe so

  • @maidenseddie1701 · 3 months ago

    Thanks for the clear video. What are the use cases for a non-technical person like me to use a self-hosted LLM? Or is this video only for developers working at businesses? Would like to understand why a person would use a self-hosted LLM when there are LLMs already there like Llama, GPT4, Claude 2.0 and Google’s Gemini? I still don’t understand the use case for using self hosted LLMs.

  • @decoder-sh · 3 months ago

    Fair question! You might self-host an LLM if you wanted to keep your data private, or if you wanted greater flexibility in which models you use, or if you didn't want to pay for API access, or if you didn't want to be constrained by OpenAI's intentionally restrictive system prompt. Let us know if you decide to give self-hosting a try!

  • @maidenseddie1701 · 2 months ago

    @decoder-sh thank you, will give it a shot. I'm a non-technical person!

  • @maidenseddie1701 · 2 months ago

    I'm trying to follow your steps but I'm stuck at the command line on Mac. I can't seem to add more than one line of code, as whenever I hit the return key the command line processes that single line. I'm unable to paste the entire code. Can you message the code so I can paste it in its entirety? Your help will be greatly appreciated!

  • @decoder-sh · 2 months ago

    @maidenseddie1701 Ah, you either need to put it all on one line OR end each line with a backslash \. This has the effect of escaping the newline character that follows it. See the code here: decoder.sh/videos/use-your-self_hosted-llm-anywhere-with-ollama-web-ui
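
    For example, here is the same command written on one line versus split across lines (a sketch; every line except the last ends with a backslash):

        # One line:
        docker run -d -p 3000:8080 -v ollama-webui:/app/backend/data --name ollama-webui ghcr.io/ollama-webui/ollama-webui:main

        # Split for readability; the backslash escapes each newline:
        docker run -d \
          -p 3000:8080 \
          -v ollama-webui:/app/backend/data \
          --name ollama-webui \
          ghcr.io/ollama-webui/ollama-webui:main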

  • @maidenseddie1701 · 2 months ago

    @decoder-sh thank you for the link, pasting the full code helped. I have other issues though, and I really want to build this out, so I will appreciate your help until I get this right! Can I DM you on LinkedIn or anywhere else? 1. The Ollama interface doesn't load the response from Llama2 when I test it with "Tell me a random fun fact about the Roman Empire." Is my computer too slow? I have 8GB RAM and am using the Chrome browser. Only 1 out of 4 attempts has returned an answer so far, so it has worked just once. 2. ngrok: the terminal keeps saying "ngrok: command not found" when I run "ngrok config check". How do I proceed? I'm desperate to make this work :)

  • @danteinferno8983 · 2 months ago

    Hi, can we have a local AI model installed on our Linux VPS and then use its API to integrate it into our WordPress website or something like that?

  • @Enkumnu · 5 days ago

    Very interesting! However, can we configure Ollama on a specific port? The default is localhost, but how do we use a server with a specific IP address (e.g., 192.168.0.10)?

  • @shobhitagnihotri416 · 3 months ago

    I am not able to understand the docker part, maybe some glitch on my MacBook. Is there any way we can do it without the use of docker?

  • @decoder-sh · 3 months ago

    It will be a bit messier, but they do provide instructions for non-Docker installation. Docker Desktop should just be a .dmg you open to install: github.com/ollama-webui/ollama-webui?tab=readme-ov-file#how-to-install-without-docker

  • @big_sock_bully3461 · 3 months ago

    Is there another way I can keep ngrok running in the background? I want to integrate it with my own personal website, so ngrok won't work. Do you have any other solution?

  • @decoder-sh · 3 months ago

    If you want to run ngrok (or anything) as a background task, you can just add "&" after the command. See here: www.digitalocean.com/community/tutorials/how-to-use-bash-s-job-control-to-manage-foreground-and-background-processes#starting-processes If you're on Linux, you could also create a service for it, which is a bit more of a sustainable way of accomplishing this.
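
    A minimal sketch of the background approach (nohup keeps ngrok alive after the terminal closes; the log file name is arbitrary):

        # Start ngrok in the background, logging to a file
        nohup ngrok http 3000 > ngrok.log 2>&1 &

        # List background jobs, or bring ngrok back to the foreground
        jobs
        fg %1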

  • @sitedev · 3 months ago

    This is nuts. Imagine if you could (you probably can) connect this with a RAG system running on the local machine which contains a business's entire knowledge base and then deploy it to your entire sales/support team.

  • @decoder-sh · 3 months ago

    You totally can! Maybe as a browser extension that integrates with gmail? I'm planning a series on RAG now, and may eventually discuss productionizing and use cases as well. Stay tuned 📺

  • @sitedev · 3 months ago

    @decoder-sh Cool. I saw another video yesterday discussing very small LLMs fine-tuned for specific function calling. I can imagine this would also be a neat method of extending the local AI to perform other tasks too (replying to requests via email etc). Have you experimented with local LLMs and function calling?

  • @matthewarchibald5118 · 1 month ago

    would it be possible to use tailscale instead of ngrok?

  • @decoder-sh · 1 month ago

    If you're just using it for yourself, or with other people that you trust to share a VPN with, then tailscale definitely works! In that case your UI address will either be localhost or whatever your tailscale dns name is. I use tailscale myself for networking my devices
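
    A sketch of that setup, assuming Tailscale is installed on both the host and your phone (the machine name is hypothetical, taken from your own tailnet):

        # On the machine running the web UI
        sudo tailscale up

        # Then, from any device on your tailnet, open the UI by machine name:
        #   http://your-machine-name:3000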

  • @Shivam-bi5uo · 2 months ago

    Can you help me? If I want to host a fine-tuned LLM, how can I do so?

  • @YorkyPoo_UAV · 2 months ago

    At first I thought it was great, but since I turned a VPN on and then off, I can't get models to load on the remote page. Also, every time I start an instance a new code is generated, so I can't keep using the same URL.

  • @nachesdios1470 · 2 months ago

    This is really cool, but for anyone that wants to try this out, be careful when exposing services on the internet:
    - Check for updates regularly
    - Try to break the app yourself first before exposing it
    - I would highly recommend monitoring activity closely

  • @michamohe · 3 months ago

    I'm on a Windows 11 machine, is there anything I would do differently with that in mind?

  • @decoder-sh · 3 months ago

    Ollama is working on windows support now! x.com/ollama/status/1757560242320408723 For now, you can still run ollama on ubuntu in windows via WSL.
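
    The WSL route, sketched (assumes the default Ubuntu distro; the install script URL is the one published on ollama.ai):

        # From an admin PowerShell: install WSL with Ubuntu
        wsl --install

        # Inside the Ubuntu shell: install and start ollama
        curl -fsSL https://ollama.ai/install.sh | sh
        ollama serve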

  • @hypergraphic · 1 month ago

    Great walk-through, although I think I will just install it on a VPS instead.

  • @decoder-sh · 1 month ago

    A VPS also works! Which would you use?

  • @leandrogoethals6599 · 3 months ago

    Oh thx man, I was tired of going through RDP with port forwarding where it ran locally ;)

  • @decoder-sh · 3 months ago

    I actually do something a little similar - I use tailscale as a VPN into my home network, then I can easily access whatever services are running. Ngrok is great for a one-off, but I use the VPN daily since I don't need to share it with anyone else.

  • @leandrogoethals6599 · 3 months ago

    @decoder-sh But don't you lose the ability to use the foreign network when connecting, when not using virtual adapters? Which is a pain on phones

  • @adamtechnology3204 · 3 months ago

    How can I see the hardware requirements for each model? Even phi doesn't give me a response back after minutes of waiting. I have a really old laptop XD

  • @rajkumar3433 · 2 months ago

    What would the deployment command be on an Azure Linux machine?

  • @mernik5599 · 1 month ago

    Please! How can I add function calling to this ollama-served web UI? And is it possible to add internet access, so that if I ask for today's news highlights it can give a summary of news from today?

  • @decoder-sh · 1 month ago

    I'm not sure if open-webui supports function calling from their UI, unfortunately

  • @Wade_NZ · 28 days ago

    My AV (Bitdefender) goes nuts and won't allow the ngrok agent to remain installed on my PC :(

  • @JenuelDev · 6 days ago

    Hi! I wanna deploy this on my own server, how do I do that?

  • @albertlan · 3 months ago

    Anyone know how to access ollama via API like you would with ChatGPT? I got the webui working, would love to be able to code on my laptop and utilize the remote PC's GPU

  • @decoder-sh · 3 months ago

    I find that the easiest way to use services on another machine is just to ssh into it. So if you have ollama serving its API on your beefy machine on port 11434, then from your local machine you'd run: ssh -L 11434:localhost:11434 beefy-user@beefy-local-ip-address. This assumes you have sshd running on your other machine, but it's not hard to set up.
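
    Concretely (hypothetical host and user names; 11434 is Ollama's default API port):

        # Forward local port 11434 to Ollama on the remote machine
        ssh -L 11434:localhost:11434 beefy-user@beefy-local-ip-address

        # In another terminal, the remote API now answers locally
        curl http://localhost:11434/api/generate -d '{"model": "phi", "prompt": "Hello"}'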

  • @albertlan · 3 months ago

    @decoder-sh How did you know my username lol. I finally got it working through nginx, but the speed was too slow to be useful unfortunately

  • @neuralgarden · 3 months ago

    Looks like Docker only works on Windows, Linux, and Intel Macs, but not M1 Macs... are there any other alternatives?

  • @decoder-sh · 3 months ago

    This video was made on an M1 Mac, docker should work!

  • @neuralgarden · 3 months ago

    @decoder-sh Oh wait, never mind, I got it to work. For some reason it says only Intel Macs on the Docker website, but I scrolled down to the bottom and found the download button for M1 Macs. Thanks, great tutorial btw.

  • @gold-junge91 · 3 months ago

    On my root server it's not working; it looks like the docker container has no access to ollama, and the troubleshooting section doesn't help.

  • @decoder-sh · 3 months ago

    Do you have any logs that you could share? Is Ollama running? What URL is listed when you go into the web UI settings and look at the "ollama api url"?

  • @bhagavanprasad · 19 days ago

    Question: the Docker image is running, but the web UI is not listing any of the models installed on my PC. How do I fix it?

  • @riseupallday · 3 days ago

    Download any model of your choice using: ollama run name_of_model

  • @Rambo51TV · 3 months ago

    Can you show how to use it offline with personal information?

  • @decoder-sh · 2 months ago

    I will have videos about this coming soon!

  • @kashifrit · 22 days ago

    Ngrok keeps changing the link every time it gets started up?

  • @decoder-sh · 22 days ago

    Yes, each session's link will be unique. It may be possible to have consistent links if you pay for their service
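
    With a reserved domain (paid plan, or the single static domain ngrok's free tier now offers), the link can be pinned. A sketch, assuming you've already reserved a name in the ngrok dashboard:

        ngrok http --domain=your-reserved-name.ngrok-free.app 3000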

  • @kevinfox9535 · 1 month ago

    I used the webui to run mistral but it's very slow. I have a 3050 with 6GB VRAM and 16GB RAM. However, I can run the ollama mistral model fine in the command prompt.

  • @ArtificialChange · 2 months ago

    My ollama won't install models and I don't know where to put them; there's no folder called models

  • @decoder-sh · 2 months ago

    Once you have ollama installed, it should manage the model files for you (you shouldn't need to put them anywhere yourself). If `ollama pull [some-model]` isn't working for you, you may need to re-install ollama

  • @ArtificialChange · 2 months ago

    @decoder-sh I will give it another try. I want to know where to put my own models

  • @shanesteven4578 · 3 months ago

    Would love to see what you could do with something like the 'Arduino GIGA R1 WiFi' with a screen, and other small devices such as ESP32 Meshtastic: LLMs being accessible on such devices, with the LLMs limited to specific subjects such as emergency services, medical, logistics, finance, administration, sales & marketing, radio communications, agriculture, math, etc.

  • @decoder-sh · 3 months ago

    As long as it has a screen and an internet connection, you can use this method to interact with your LLMs on the device!

  • @johnmyers3233 · 1 month ago

    The file I downloaded seems to be coming up with some malicious software

  • @JT-tg9uo · 3 months ago

    Everything works but I can't select a model. I can access it from my phone, etc., but cannot select a model.

  • @decoder-sh · 3 months ago

    It may be that you don't have any models installed yet? I didn't actually call that out in the video, so that's my bad! In the web ui go to settings > models, and then type in any of the model names you see here ollama.ai/library ("phi" is an easy one to start with). Let me know if that was the issue! Thanks for watching.
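
    (The terminal equivalent, with "phi" as the example model:)

        ollama pull phi
        # Quick sanity check that the model responds
        ollama run phi "Say hello in five words"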

  • @JT-tg9uo · 3 months ago

    Thank you Sir I'll give it a whirl

  • @JT-tg9uo · 3 months ago

    Yeah, it says an Ollama WebUI server connection error when trying to pull phi or any other model. But other than that it works from the phone etc.

  • @JT-tg9uo · 3 months ago

    Ollama works fine from the terminal with phi, etc. Maybe docker isn't configured right. I've never used docker before.

  • @optalgin2371 · 1 month ago

    Do you have to use 3000:8080?

  • @decoder-sh · 1 month ago

    No, you can change the docker config to use whatever host ports you want
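
    For example, to serve the UI on host port 8081 instead (a sketch; only the left half of the mapping changes, the container still listens on 8080 internally):

        docker run -d -p 8081:8080 \
          -v ollama-webui:/app/backend/data \
          --name ollama-webui ghcr.io/ollama-webui/ollama-webui:main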

  • @optalgin2371 · 28 days ago

    @decoder-sh What if I want to run the ollama server on my Windows machine and connect the OpenWebUI to that server from a different Mac machine? I've seen there's a command for using ollama on a different host, but whenever I use that command with 3000:8080 the UI page opens and I can register and change things, but it doesn't connect; however, when I use the network flag fix it doesn't even load the webui page.

  • @optalgin2371 · 28 days ago

    @decoder-sh Is there a way to use this method to connect two machines?

  • @ANIMATION_YT520 · 3 months ago

    Bro, how do you connect it to the internet for free using a domain host?

  • @decoder-sh · 3 months ago

    Do you mean how would you use a custom domain with ngrok for free? I'm not sure if that's possible, that's probably something they'd make you pay for.

  • @J4M13M1LL3R · 4 months ago

    Please wen llamas with image recognition

  • @decoder-sh · 4 months ago

    The llamas have eyes! You can use multimodal models with ollama NOW. Currently the two models that support images are llava and bakllava and both are sub 7b params I believe

  • @MrMehrd · 3 months ago

    Should we be connected to the internet?

  • @decoder-sh · 3 months ago

    Yes you'll need to be connected to the internet

  • @dvn8ter · 2 months ago

    ⭐️⭐️⭐️⭐️⭐️

  • @decoder-sh · 2 months ago

    Thanks for watching!

  • @fedorp4713 · 3 months ago

    Wow, hosting an app on a free hostname from your home, it's just like 2002.

  • @decoder-sh · 3 months ago

    Next I'll show you how to use an LLM to create your very own ringtone

  • @fedorp4713 · 3 months ago

    @decoder-sh How will that work with my pager?

  • @decoder-sh · 3 months ago

    @fedorp4713 I've seen people make music with HDDs, I'm sure we can quantize some Beach Boys to play on DTMF

  • @fedorp4713 · 3 months ago

    @decoder-sh Love it! Subbed, can't wait for the boomer pager LLM series.

  • @gabrielkasonde367 · 3 months ago

    Please add the commands to the description, thank you.

  • @decoder-sh · 3 months ago

    Good call, will do!

  • @BogdanTestsSoftware · 3 months ago

    What hardware do I need to run this container? A GPU? Ah, found it: "WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode."

  • @PhenomRom · 3 months ago

    Why didn't you put the commands in the description?

  • @decoder-sh · 3 months ago

    KZread doesn't support code blocks in the description so I spent the day writing code to generate a static site for each video, so I can post the code there. Enjoy! decoder.sh/videos/use-your-self_hosted-llm-anywhere-with-ollama-web-ui

  • @PhenomRom · 3 months ago

    Oh wow. Thank you @decoder-sh

  • @decoder-sh · 3 months ago

    @PhenomRom My pleasure! Might do a video about how to make my website too 😂

  • @ArtificialChange · 2 months ago

    Remember, your docker may look different

  • @samarbid13 · 1 month ago

    Ngrok is considered a security risk because it is closed-source, leaving users uncertain about how their data is being handled.

  • @decoder-sh · 1 month ago

    Fair enough! One could also just use a VPN of their choice (including self-hosted Wireguard) to connect their phone to the host device, and reach the webui on localhost

  • @JarppaGuru · 2 months ago

    Yes, now we can use AI to answer what it was trained on. This is question-answering. We already had Jarvis with voice LOL. Now we're back to text LOL

  • @photize · 3 months ago

    Great video, but macOS? What happened to the majority vote? You lost me there; not even a mention for the non-lemmings Nvidia crew!

  • @decoder-sh · 3 months ago

    Fair enough, I'd be happy to do some videos for Linux as well! Thanks for watching

  • @photize · 3 months ago

    @decoder-sh I'm presuming the majority to be Windows; it amazes me how many AI guys have CrApple when in the real world many are using gaming machines for investing time in AI. (Not me, I'm just an Apple hater)

  • @garbagechannel6514 · 3 months ago

    Isn't the electric bill higher than just paying for ChatGPT?

  • @decoder-sh · 3 months ago

    Depends on the price of electricity where you are, and how much you use it! But running llms locally has other benefits as well. No need for an internet connection, no vendor lock-in, no concern about sending your data to meta or openai, ability to use different models for different jobs, plus some people just like to own their whole stack. It would be interesting to figure out the electricity utilization per token for an average gpu though…

  • @garbagechannel6514 · 3 months ago

    @decoder-sh True enough, the concept is appealing but that's what holds me back atm. I was also looking at on-demand cloud servers, but it seems like it would get either very expensive or very slow if you let an instance spin up for every query. The most effective does seem to be anything with shared resources, like ChatGPT.

  • @arupde6320 · 3 months ago

    be regular

  • @decoder-sh · 3 months ago

    L1 or L2?

  • @Soniboy84 · 3 months ago

    You forgot to mention that you need a chunky computer at home running those models, potentially costing $1000s

  • @decoder-sh · 3 months ago

    It doesn't hurt! But even small models like Phi are pretty functional and don't have very high hw requirements. Plus if you're a gamer then you've already got a chunky GPU, and LLMs give you one more thing you can use it for 👨‍🔬

  • @razorree · 2 months ago

    another 'ollama' tutorial....

  • @decoder-sh · 2 months ago

    Guywhowatchesollamatutorialssayswhat

  • @arquitectoqasdetautomatiza5373 · 2 months ago

    You're the absolute best, bro, please keep uploading videos
