Unlimited AI Agents running locally with Ollama & AnythingLLM

Science & Technology

Hey everyone,
Recently in AnythingLLM Desktop, we merged in AI Agents. AI Agents are basically LLMs that do something instead of just replying. We support tool-call-enabled models like OpenAI's, and we now have a no-code way to bring AI agents to every open-source LLM via Ollama or LM Studio.
Now, with no code required, you can take any LLM and get automatic web scraping, web browsing, chart generation, RAG memory, and summarization, all autonomously and running locally.
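Under the hood, Ollama exposes a small local HTTP API (by default on port 11434) that front ends like AnythingLLM talk to. As a minimal sketch, assuming a local `ollama serve` with a pulled model such as `llama3`, a prompt round-trip looks roughly like this (the helper names are my own, not AnythingLLM's code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_payload(model: str, prompt: str) -> dict:
    # Minimal body for Ollama's /api/generate; stream=False returns one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model: str, prompt: str) -> str:
    # POST the prompt to the local server and return the generated text.
    body = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

For example, `ask_ollama("llama3", "Summarize RAG in one sentence.")` once the server is running and the model is pulled.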
If the future of AI is agents, AnythingLLM is where it is going to happen!
Download AnythingLLM: useanything.com/download
Star on Github: github.com/Mintplex-Labs/anyt...
Chapters:
0:00 Introduction to adding agents to Ollama
0:45 What is Ollama?
1:08 What is LLM Quantization?
1:28 What is an AI Agent?
2:54 How to pick the right LLM on Ollama
5:11 Pulling Ollama models and running the server
5:45 Downloading AnythingLLM Desktop
6:17 AnythingLLM - Initial setup
7:21 Sending our first chat - no RAG
8:22 Uploading a document privately
8:43 Sending a chat again but with RAG
9:10 How to add agent capabilities to Ollama
10:45 Add live web-searching to Ollama LLMs (Free)
11:41 Using agents in AnythingLLM demonstration
13:24 Agent document summarization and long-term memory
14:35 Why you should use AnythingLLM x Ollama
15:00 Star on Github, please!
15:06 Thank you

Comments: 186

  • @sergiofigueiredo1987 · 12 days ago

    @TimCarambat I had to pause the video just to leave a comment! I'm deeply impressed by the excellence and simplicity of the content presented here. It's truly remarkable to have access to such tools, created by a team that clearly demonstrates passion and a keen ear for what we all think and wish would be great to have, and at every update, distilling all of these wishes into a few simple clicks within this amazing piece of technology! I'm immensely grateful for the opportunity to experience the brilliance of the software engineering and development of AnythingLLM, especially within the context of open-source communities. Participating in the advancement of genuine and incredible open tools is a privilege. Thank you Tim! I will be promoting this project to the moon and back, because this deserves to be known.

  • @TimCarambat · 12 days ago

    This is so incredibly kind. Sharing with the team!

  • @jonathan58475 · 1 day ago

    Tim, thank you for making the world a better place with this awesome tool! :)

  • @fxstation1329 · 4 days ago

    What I love about your tutorials is that you succinctly explain all the things that come across during the tutorial. Thanks!

  • @surfkid1111 · 13 days ago

    You built an amazing piece of software. Thank god that I stumbled across this video.

  • @michaelklimpel3020 · 58 minutes ago

    Big thanks, man. This video helps a lot for me as a beginner to understand how good a local LLM is and which use cases we have. Thumbs up for this great video.

  • @liviuspinu11 · 10 days ago

    Thank you for explaining quantisation in detail for newbies.

  • @kangoclap · 4 days ago

    looking forward to utilizing AnythingLLM, it looks really awesome! congrats on creating such an impressive application! thank you!

  • @akikuro1725 · 13 days ago

    Awesome! thank you for this. looking forward to more information/details/examples on using agents w/AnythingLLM!

  • @OpenAITutor · 2 days ago

    Amazing Tim. Keep up the good work.

  • @SiliconSouthShow · 13 days ago

    Fantastic, Tim! Mine doesn't have agent config; guess I need to delete and update, I'll try that. Looks great! Keep up the good work, I love AnythingLLM, I really do!

  • @yusufaliyu9759 · 13 days ago

    Great, this will make LLMs more understandable for many people.

  • @SiliconSouthShow · 13 days ago

    @TimCarambat I'm excited to see the features you talked about work with Ollama like in the video for the agent. As of now, it's the same as before I updated, but it's exciting to think of the future.

  • @TheDrMusician · 13 days ago

    This is by far the easiest and most powerful way to use LLMs locally, full support, like and sub. And many thanks for the amazing work, especially being open source.

  • @TimCarambat · 12 days ago

    🫡

  • @MartinBlaha · 12 days ago

    Thank you! Will test it for sure. I think you guys are on the exact right path 😎👍

  • @jimg8296 · 11 days ago

    AnythingLLM is awesome. Glad to hear custom agents are on the roadmap; it's the big hole in capability. Also need a config to change the agent prompt. I scan a lot of code, and the @ is used often to define decorators.

  • @stanTrX · 12 days ago

    This is the easiest all-in-one platform. Thanks. More videos please ❤

  • @figs3284 · 11 days ago

    Incredible... gonna make building tools so much easier. Can't wait to see more agent abilities added!

  • @MaliciousCode-gw5tq · 9 days ago

    Damn... finally found the tool that I've been looking for. MAN, you saved my day. I have been crazy stuck finding a web UI for my Ollama remote server. You're a gift from heaven; keep it up, you're helping a lot of people like us. Thank you so much! ❤

  • @tunoajohnson256 · 5 days ago

    Awesome vid! Really impressed with how you presented the information. 🙏 thank you

  • @vulcan4d · 12 days ago

    This is awesome work. I looked at the other simple to install Windows front ends and stumbled on this. Pretty cool stuff and I love how you can add documents and external websites to feed it information. An offline LLM is soooooo much more preferred. The only item I don't understand is why you could just ask a regular question once you provided the document, but used @agent when asking to summarize a document.

  • @TimCarambat · 12 days ago

    IMO, I find that having a local LLM that is even **only** ~75% as good as an online alternative is just much more rewarding. Like I can be on an airplane, open my laptop, and start brainstorming with an AI. Pretty neat. The next evolution would be a local AI on your phone, but I don't think we have that tech _yet_.

  • @d.d.z. · 11 days ago

    You are amazing. Thank you 🎉

  • @FlynnTheRedhead · 13 days ago

    So training/finetuning is coming up as well? Loving the progress and process updates, keep up the great work Tim!

  • @TimCarambat · 13 days ago

    how'd you know!? We will likely make some kind of external supplemental process for fine-tuning, but at least make the tuning process easy to integrate with AnythingLLM. RAG + Fine-tune + agents = very powerful without question

  • @FlynnTheRedhead · 12 days ago

    @@TimCarambat That's awesome to hear!! I created an agent to get insider info, that's how I know of course!

  • @TimCarambat · 12 days ago

    @@FlynnTheRedhead !!!!! I thought i was hearing clicks during my phone calls!!!

  • @TokyoNeko8 · 10 days ago

    Debug mode would be ideal. Agent to scrape the web just exits without any error even though I do have search engine api defined

  • @sashkovarha · 13 days ago

    This explained the rag and agents parts I couldn't set up. Great educational content for those who are not programmers. Appreciate your explanations being without that much of "pre-supposed" know-how, that coders have - which is most tutorials on youtube... I still didn't get why there's a difference between @agent commands and just regular chat

  • @TimCarambat · 13 days ago

    In a perfect world, they are the same. AnythingLLM was originally RAG only. In the near future @agent won't be needed and agent commands will work seamlessly in the chat. So @agent is temporary for now, so you know for sure you want to possibly use some kind of tool for your prompt. Otherwise, it's just simple RAG.

  • @gillopez8660 · 12 days ago

    Wow this is amazing... I'm gonna go star you!

  • @mrinalraj4801 · 9 days ago

    Great work. Thanks a lot 🙏

  • @johnbrewer1430 · 2 days ago

    @sergiofigueiredo1987, @TimCarambat, I agree with Sergio. Wow! I have Ollama installed locally on a Windows machine in WSL. (I was leery of the Windows preview, but I may switch because NATing the Docker container is a pain.) I also pondered how to build a vector DB on my machine and integrate agents. You guys have already done it!

  • @EddieAdolf · 6 days ago

    I've been using it for months. Love it! Will you enable voice to voice soon?

  • @TimCarambat · 5 days ago

    We just did in our most recent update. TTS is live for all, STT is only live for the docker version. There are some restrictions and limitations we need to work around to get STT to fully function cross-platform. It will be solved soon

  • @aimademerich · 13 days ago

    Would love to see this run stable diffusion and comfy ui workflows

  • @star95 · 8 days ago

    Great video! I also want to know how well the RAG function of AnythingLLM performs. It's important that text, images, and papers are handled properly and meaningful chunking is achieved.

  • @zirize · 12 days ago

    I think it's a very good application, easy to use, and after testing it for a day or so, I have some wishes.
    1. Direct commands that bypass the agent LLM in agent mode. It takes time for the agent to understand the sentence and convert it into an internal command, and URL parsing sometimes fails depending on the agent. For example, a command that scrapes the specified URL and shows the result, a command that lists the currently registered documents with numbering, and a command that summarizes a document by that number instead of its full name.
    2. I wish there was a way to pre-test the settings in the options window to make sure they are correct, such as specifying the LLM or search engine.
    I hope this application is widely known and loved by many people.

  • @JacquesvanWyk · 1 day ago

    Really awesome demonstration. I am excited about agents. Would be nice to be able to build custom tools in python for agents to use.

  • @GoranMarkovic85 · 13 hours ago

    Amazing work 👏

  • @DaveEtchells · 4 days ago

    Wow, this looks *_amazing!_* I’m just starting to experiment with local LLMs and wanting to play with agents; this looks SO easy! I’m going to download and set it up right away. I’m also interested in Open Interpreter for having an AI assistant do things on my local machine. Can this interface with that, or is it really meant as a substitute/enhancement to it? (Also, how can I support your project? I gather your biz model is selling the cloud service, but my usage will be purely local. Anywhere I could send a token few bucks?)

  • @sharankumar31 · 2 days ago

    This is seriously a very neat tool 👏👏👏 Please add a feature to custom-develop agents with function calls. It will be helpful for our local automations.

  • @TimCarambat · 2 days ago

    It's shown in the UI that we will be supporting custom agents soon!

  • @madhudson1 · 13 days ago

    Been struggling to get custom agents to integrate reliably with external tooling, using frameworks like CrewAI with local LLMs. Would love a video guide explaining best practices for this.

  • @ImSlo7yHD · 1 day ago

    This is perfect, it just needs more tools and agent customization like CrewAI, and it is going to be an absolute killer for the AI industry.

  • @TimCarambat · 1 day ago

    Will be coming soon! Just carving out how agents should work within the context of AnythingLLM and should be good. Also, it would be nice to be able to just import your current CrewAI and use it in AnythingLLM - save you the work you have done so far

  • @HarpaAI · 1 day ago

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 Introduction to Ollama & AnythingLLM
    - Introduction to Ollama and AnythingLLM
    - Explanation of the Ollama application for running LLMs on local devices
    - Overview of the quantization process and agent capabilities in LLMs
    02:30 🧠 Understanding Model Quantization and Selection
    - Importance of selecting the right quantization level for LLMs
    - Differences between various quantization levels like Q1 and Q8
    - How quantization impacts model performance and reliability
    06:07 🛠 Setting up AnythingLLM with a Q8 Model and Ollama
    - Instructions for setting up Ollama with a Q8 LLM
    - Steps to download and run AnythingLLM on local devices
    - Connecting to the Ollama server and configuring privacy settings
    08:27 💬 Enhancing Model Knowledge Using RAG and Workspaces
    - Uploading documents for model referencing in a workspace
    - Improving model responses by utilizing documents in the workspace
    - Configuring workspace settings for better model performance
    11:41 🌐 Using Agents for Advanced Functionality in AnythingLLM
    - Utilizing agents to enhance LLM capabilities beyond basic text responses
    - Enabling web scraping, file generation, summarization, and memory functions
    - Integrating external services like Google for web browsing functionality
    Made with HARPA AI

  • @finessejones3109 · 2 days ago

    I'm so happy I came across your video. Thank you. I am having trouble with where to get the base link that you pasted at the 6:36 mark to install llama3.

  • @finessejones3109 · 2 days ago

    I was able to follow along from your other video to install it. Thank you, I'm now a new sub.

  • @Great_Muzik · 4 days ago

    Awesome tutorial Tim! Can this extract specific data from PDF files and save it to an Excel file?

  • @vishalchouhan07 · 6 days ago

    Hi Tim.. I am absolutely impressed with the capabilities of AnythingLLM. Just a small query: how can I deploy it on a cloud machine and serve it as a chat agent on my website? I actually want to add a few learning resources as PDFs for the RAG documents of this LLM so that my users can chat with the content of those PDFs on my website. I also want to understand how many such parallel instances of a similar scenario, but with a different set of PDFs, are possible. For instance, if I am selling ebooks as a digital product to my users, can I have unique instances autogenerated for each user based on their purchase?

  • @TimCarambat · 5 days ago

    We offer a standalone Docker image that is a multi-user version of the desktop app. It has a public chat embed that is basically a publicly accessible workspace chat window. You can deploy it in a lot of places depending on what you want to accomplish: github.com/Mintplex-Labs/anything-llm?tab=readme-ov-file#-self-hosting For this, you could do one AnythingLLM instance, multiple workspaces where each has its own set of documents, and then a chat widget for each. This would give you the end result you are looking for.

  • @red_onex--x808 · 10 days ago

    Awesome info……thx

  • @emil8367 · 3 days ago

    Many thanks for the nice introduction! Is there a way to configure this LanceDB? Is there a doc on how it's integrated with AnythingLLM?

  • @TimCarambat · 3 days ago

    There is nothing to configure, it is preinstalled and saves to the same location as the application's main storage folder!

  • @CotisoHanganu · 4 days ago

    Great things shown. Thanks for all the work and commitment. 🎉 Here is a kind of dedicated use case I am interested in getting access to: I am a mind-mapping addict. I use MindManager, which stores the mind maps in .mmap format. I would like to ask AnythingLLM to help me scan all folders for mind maps on different subjects and RAG & summarize on them, without having to export all the .mmap files to another format. Is this doable at this stage? What else would I need to have or create?

  • @marinetradeapp · 1 day ago

    Great work - thanks for sharing - Question - how can we send data to the agent via webhooks - is this a possibility?

  • @Alex29196 · 9 days ago

    Hi Tim, thank you for your dedication and effort in teaching us about local LLMs. I have a medium-spec computer with 4GB VRAM and 16GB RAM. The last time I installed ALLM, the inference speed was a bit slower compared to other alternatives. How does it perform with the new version? Thanks again.

  • @TimCarambat · 5 days ago

    Unfortunately, I doubt much would change on the inference side. When you say alternatives, what were you using? You might get slower responses in AnythingLLM vs just chatting via CLI in Ollama, but that is because we are adding that valuable context to the prompt. More tokens = more work for the LLM to respond!

  • @themax2go · 6 days ago

    very cool!!! subbed!

  • @redbaron3555 · 11 days ago

    Amazing software!! Congratulations and thank you! Very similar to the MemGPT server, but it seems easier to set up and use. I wonder whether you can save a whole company database (i.e. ERP data: products, materials, etc.) in it and be able to ask questions about it? Also, can you invoke more than one agent simultaneously?

  • @TimCarambat · 10 days ago

    In theory, this would be better delegated to some purpose-built agent that can traverse the data. Currently, we only have one-agent conversations, but the code _does_ support multi-agent. We just find it to be really messy and cumbersome when many agents at once are trying to do something and your Ollama instance is already at max use generating tokens!

  • @UrbanCha0s · 12 days ago

    Looks really good and simple. I tried PrivateGPT using conda/Poetry and could never get it to work, so I jumped into WSL for Windows connecting to Ubuntu running Ollama, via a web UI. Works great, but this just looks so much easier. Will have to give it a try. What I do like with the web UI I have is that I can select a different model, and even use multiple models at the same time.

  • @TimCarambat · 12 days ago

    Yeah, we didn't want to "rebuild" what is already built and amazing, like text-web-gen. No reason why we can't wrap around your existing efforts on those tools and just elevate that experience with additional tools like RAG, agents, etc.

  • @mrgyani · 5 days ago

    This is incredible..

  • @DanRegalia · 2 days ago

    Hey, just found you on a random youtube video suggestion. Love this concept.. A few questions, how deep into a website can this scrape? Can it read a sitemap or robots.txt and download all the data, summarize, etc? Can I hook it into different LLMs? For instance, assign agents to different LLMs? Most importantly, if we're using a vector database, can I feed it rows and rows of data to remember forever?

  • @TimCarambat · 1 day ago

    The one in the document uploader is a single site, but we have a deep website scraper as you mentioned. You can use a different LLM per workspace and also per workspace-agent. So yes. The vector database we use runs locally and is built in. It works like any other and yes does persist information - so yes to the last point as well

  • @LakerTriangle · 13 days ago

    Literally sitting here wondering this when you dropped the video

  • @lhxperimental · 13 hours ago

    13:00 Funny how in American English a question is actually a command. An LLM developed by another culture would just say yes but not store it till you command it to.

  • @elu1 · 6 days ago

    really nice!

  • @sashkovarha · 13 days ago

    Also, will there be a text to speech and speech to text option?

  • @TimCarambat · 13 days ago

    It is a pending issue at this time, yes

  • @SiliconSouthShow · 13 days ago

    @TimCarambat Hey Tim, it won't let me select anything under Workspace Agent LLM Provider even though everything is set up and working. Obviously Ollama is running, and everything else in AnythingLLM is using Ollama fine in the app, but this selection option doesn't show like yours does.

  • @leninmariyajoseph352 · 9 days ago

    Great!!!...

  • @SiliconSouthShow · 13 days ago

    wOOHOO I GOT IT NOW! I'D LOVE AN UPDATE BUTTON LOL!

  • @TimCarambat · 13 days ago

    It probably just was not refreshed yet. I think we have it on a 1 hour expiration to check so it may have been in between checks

  • @caleb.miller · 7 days ago

    Thanks for the tutorial, Tim. For some reason I am not able to get web search working. I am using the same settings you showed in the video. Can you do another video with more detail on setting up the Google search engine for this purpose?

  • @ChristianIsai · 6 days ago

    I have the same issue: the agent will answer that it doesn't need to use any function and then answers with its hallucinations. If I give it the direct order to scrape, it will throw an error about a missing key. I think it's a work in progress still.

  • @TimCarambat · 5 days ago

    Is the model just refusing to call the tool at all or when it does call the tool it says it failed?

  • @ChristianIsai · 5 days ago

    @@TimCarambat The model will tell me "no need for using any tool, I got this" and then hallucinate.

  • @deylightmedia3266 · 4 days ago

    @TimCarambat sir kindly have a for loop so that multiple agents can talk to each other in a chatroom style conversation

  • @septemberstranger · 9 days ago

    Hello! Thanks for uploading this... very helpful. I'm stuck on something though. When I try to set up agents for Ollama, it says that agents only work with OpenAI currently. When I try to scrape sites like you do in the video using Ollama, the AI tells me that it can't. Am I missing something?

  • @gammingtoch259 · 6 days ago

    I have the same issue, but I am using LM Studio as the backend.

  • @TimCarambat · 5 days ago

    You are able to use Ollama as your agent, correct? If that is the case, are you using a small quantized model? Sometimes models have issues calling tools even when they were built for that. The system we implement works well, but we don't "force" the model to call a tool; it still has to generate a valid response to call it.

  • @user-ld8sy9xu2v · 8 days ago

    Hey Tim, what is the actual folder that AnythingLLM uses to store models? I have all the models already downloaded for other apps, so I would rather just put the model in the right folder than download it again. Thanks in advance!

  • @TimCarambat · 5 days ago

    On Mac: /Library/Application Support/anythingllm-desktop/storage/models
    On Windows: /Users/user/AppData/Roaming/anythingllm-desktop/storage/models

  • @user-ld8sy9xu2v · 4 days ago

    @@TimCarambat thanks!

  • @carloscms23 · 11 days ago

    Great Work :)

  • @amulbhatia-te9jl · 17 hours ago

    Would it be possible to see a video of setting up your Ollama models in AnythingLLM? I followed these instructions but my Ollama models never load.

  • @aimademerich · 13 days ago

    Phenomenal

  • @Nicola-cc2di · 5 days ago

    @TimCarambat Can you please let me know which model AnythingLLM is using to generate embeddings, and whether it is possible to choose another one? Thanks.

  • @TimCarambat · 5 days ago

    We use huggingface.co/sentence-transformers/all-MiniLM-L6-v2 by default, 384 dimensions.
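For context on how those 384-dimension vectors get used: retrieval ranks stored chunk embeddings against the query embedding, typically by cosine similarity. A minimal pure-Python sketch, with toy 3-dim vectors standing in for real MiniLM outputs (the names are illustrative, not AnythingLLM's internals):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors of equal length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    # Rank stored chunk embeddings against a query embedding, best first.
    ranked = sorted(docs.items(), key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy 3-dim vectors stand in for real 384-dim MiniLM embeddings.
chunks = {
    "chunk-about-agents": [0.9, 0.1, 0.0],
    "chunk-about-pricing": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]
```

Here `top_k(query, chunks, k=1)` surfaces `"chunk-about-agents"`, the chunk whose vector points in nearly the same direction as the query.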

  • @AndyBerman · 13 days ago

    @TimCarambat Can this run on an old slow server and connect to ollama on a fast server, or does AnythingLLM use a lot of local CPU when invoked?

  • @TimCarambat · 13 days ago

    Actually, this is a perfect combination. AnythingLLM using an external LLM and embedder is no more overhead than just running an HTML page - seriously. The only demanding process is if you use the built-in embedder, and that is really only when you are embedding documents. Depending on the size of your documents you could crash the server with the built-in embedder. For reference, our hosted starter tier is 2vCPU and 2GB RAM and we squeak by. If it's more than that, you are golden. The vector database is so lightweight and fast it is legitimately a non-issue.

  • @user-tz1hj8em7e · 3 days ago

    Can you upload a video showing how to embed a chat widget onto a website using an LLM run locally on Ollama?

  • @RhythmRiftsDataDreams · 12 days ago

    What is the chunking method you use to create the vectors? Is there a way that the user can control the method of chunking? Say : Short, Token Size, Semantic, Long etc...

  • @TimCarambat · 10 days ago

    We currently use a static recursive chunk splitter. So basically just character counts. You can modify those chunking settings in the settings when you go to "embedder preference". So you can define max length and overlap
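A static character-count splitter with overlap, as described above, can be sketched in a few lines (an illustrative sliding window, not AnythingLLM's actual code; `max_len` and `overlap` mirror the settings mentioned):

```python
def chunk_text(text: str, max_len: int = 1000, overlap: int = 20) -> list[str]:
    # Split text into fixed-size character chunks; each chunk shares `overlap`
    # characters with the previous one so sentences cut at a boundary still
    # appear whole in at least one chunk.
    if overlap >= max_len:
        raise ValueError("overlap must be smaller than max_len")
    step = max_len - overlap
    return [text[i:i + max_len] for i in range(0, len(text), step) if text[i:i + max_len]]
```

For example, `chunk_text("abcdefghij", max_len=4, overlap=2)` yields `["abcd", "cdef", "efgh", "ghij", "ij"]`, with each neighbor pair sharing two characters.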

  • @davidgalea430 · 13 days ago

    Will not load models in the Linux version when I select local Ollama.

  • @theknowledgelenspro · 8 days ago

    Hi, I am having this issue when using an agent to summarize my PDF file: "Could not respond to message. fetch failed"

  • @user-go7xt5jd2b · 7 days ago

    Hi, could you explain how to add the Google Programmatic Access API key and search engine ID?

  • @TimCarambat · 5 days ago

    I see this question a ton. I didn't think it would be so confusing, but Google's doc that we link to can probably be a bit confusing for those who are unfamiliar with how Google/GCP works.

  • @ZeerakImran · 13 days ago

    Hi Tim. Thank you for the video. One small suggestion. Can you please make the dock icon on macos the correct size please. I won't add it to the dock because all other icons are the same size whereas anything-llm's icon is oversized. Thanks.

  • @TimCarambat · 13 days ago

    It is the exact dimensions the Apple guidelines specify with a 100px padding for 1024x1024. I literally got the layout from their published figma file!

  • @TimCarambat · 13 days ago

    In older versions it was indeed the wrong size, it should be good now as of 1.5.3

  • @ZeerakImran · 2 days ago

    @@TimCarambat hi Tim. Sorry for the late response. You're right. After seeing your message, I tried to see if I could get the program to check for an update but I wasn't able to find an option for that. So I deleted the app and downloaded the latest version which does have the icon as the right size in the dock. The icon also looks much nicer now. The app recently also showed a nice indicator for a newer version being available in the top right. That's nice too. I would quite like a white (non-grey) mode or light grey mode but that's not a top priority feature but if the app continues to get developed and all goes well, that would be lovely.

  • @TimCarambat · 2 days ago

    @@ZeerakImran Ah, this must have been a really old version! The update alert has been showing in the UI for a while, so that explains why the icon was so ugly as well! We will be adding a light mode now. So many haters on dark mode only; I personally don't get it, but who am I to say!

  • @nagisupercell · 3 days ago

    Can I edit my question and regenerate the result in AnythingLLM? I use OpenAI GPT-4o api, but I don't find the edit button in AnythingLLM UI.

  • @SebastianMuller-pz9xl · 13 days ago

    Amazing ⭐⭐⭐⭐⭐

  • @foxnyoki5727 · 10 days ago

    Does internet search work for you? I configured the agent to use a Google Custom Search Engine, but search does not return any results.

  • @TimCarambat · 10 days ago

    With some models you _might_ have to word a prompt more directly, like even explicitly asking it to call `web-browsing` and run this search. Which I know breaks the "fluidity" of conversation, but this is just a facet of the non-deterministic, non-steerable nature of LLMs and trying to get them to listen. Mostly, it's the model that needs to be better so it can follow prompts more closely, but it's also not always that simple!

  • @lucygelz · 10 days ago

    Any plans or methods to integrate text-to-speech, or just a way for the output to be spoken to you?

  • @TimCarambat · 5 days ago

    We just merged in TTS for all platforms for v.1.5.5, STT is live for docker only but we should hopefully have STT live for desktop soon. Just some technical details blocking that

  • @flb5078 · 13 days ago

    So does it work only with Ollama, or also with LM Studio, which is my LLM provider? For many people, Ollama does not work on Windows.

  • @TimCarambat · 13 days ago

    I didn't go over every provider in the window, but LM Studio is supported as well, and I was going to make a video showcasing that provider because there are many more models to choose from.

  • @TheShawn2880 · 10 days ago

    You're the best

  • @Armaan27012012 · 12 days ago

    Hi, what features are you focusing on launching in the near future?

  • @TimCarambat · 10 days ago

    For Desktop:
    - We are hoping to have an RPA/desktop macro. Basically, imagine the "LAM" from the Rabbit R1, but it runs 100% locally and works. You can "train" an LLM to use a browser by just clicking around as you normally would, do anything you could do in a browser, and have the LLM replicate that with dynamic input.
    - Workspace "sharing" via encrypted cloud, so you can create a unique link to publish pre-embedded workspaces you can share with others so they can use the same documents and embeddings.
    Docker & Desktop:
    - Agent expansion: custom tools, custom agent prompts, more LLM providers.
    - More customization for text splitting/chunking strategies.

  • @Armaan27012012 · 9 days ago

    @@TimCarambat Will it be able to go on social media sites as well?

  • @SiliconSouthShow · 13 days ago

    "Agent @agent invoked. Swapping over to agent chat. Type /exit to exit agent execution loop early." Yep, wow. I still have a paid ChatGPT and API, but it costs for everything you use, so I am cancelling it next month. Anyway, I find more use from my own offline stuff.

  • @paulagerbeek5874 · 1 day ago

    Kudos to AnythingLLM. After extensive searching for solutions with which I can use LLMs locally, including advanced features such as adding documents to the knowledge base, AnythingLLM delivered all I need in combination with Ollama and/or LM Studio. Especially because it has an API which I can use in my own applications. I do have one question: how can I make sure the answers the LLM gives are based solely on the documents I added to the workspace?

  • @Linguisticsfreak · 13 days ago

    @TimCarambat For the time being, I cannot use an agent to access my email account and deal with emails, right?

  • @TimCarambat
    @TimCarambat 10 days ago

    Correct, we don't have that kind of connector yet.

  • @ImmacHn
    @ImmacHn 10 days ago

    I'm trying to make the agent look for some information online, but it refuses. I'm using the Llama 8B at 8-bit quantization; am I missing something? I activated the agent toggles, used @agent, added Google Search and all. (I basically tried to replicate what you did in this video.)

  • @TimCarambat
    @TimCarambat 10 days ago

    I'll need to write a doc on why this happens. I'll try to summarize as succinctly as I can here why even the same inputs don't produce the same outputs. TL;DR: LLMs are non-deterministic, even with temp=0. There are also compounding variables: training, model type, even the existing chat history can change the outputs and the tools that get called! This is not unique to AnythingLLM; it is true of tool calling for any LLM. It's also why tool calling is deliberate in OpenAI's API: you basically need to force the LLM to use the tools. When given the option, as we do in AnythingLLM, you get far fewer outright refusals, but sometimes the model skips the tools altogether. It can be quite annoying, but it's better than nothing at all!
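Tim's point can be seen in a tiny sketch: when tool use is optional rather than forced, the dispatcher has to accept both a JSON tool call and a plain-text answer from the model, and nothing guarantees which one you get. The prompt format, tool name, and dispatch function below are illustrative assumptions, not AnythingLLM's actual implementation.

```python
import json

def dispatch(model_reply: str, tools: dict):
    """Try to interpret the model's reply as a JSON tool call.

    If the reply is not valid JSON, or names a tool we don't have,
    fall back to treating it as a plain-text answer, mirroring how
    an LLM given the *option* to call tools may simply skip them.
    """
    try:
        call = json.loads(model_reply)
        fn = tools[call["tool"]]
    except (json.JSONDecodeError, KeyError, TypeError):
        return {"type": "text", "content": model_reply}  # model skipped the tools
    return {"type": "tool", "content": fn(**call.get("args", {}))}

tools = {"web_search": lambda query: f"results for {query!r}"}

# A compliant reply triggers the tool...
print(dispatch('{"tool": "web_search", "args": {"query": "ollama"}}', tools))
# ...but nothing forces the model to emit one:
print(dispatch("I already know the answer: Ollama runs LLMs locally.", tools))
```

Forcing tool use (as OpenAI's `tool_choice` does) removes the second branch at the cost of the model calling tools even when it should not.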

  • @boragungor777
    @boragungor777 10 days ago

    @@TimCarambat Hi Tim, it would be nice to see the background processes when calling agents, to know what's going on.

  • @pradeepjain2872
    @pradeepjain2872 9 days ago

    Hello. I was just playing with RAG. It seems that the accuracy and results are very poor. I tried with Llama 3, WizardLM, etc. The LLM is unclear about my questions. Is the context window too short? The LLM gives answers only in hindsight.

  • @dadadies
    @dadadies 8 days ago

    Can AnythingLLM access and read all the content on your computer, or say a corporation's database, including projects, notes, and perhaps even media files? It seems that RAG can already do that to a point.

  • @TimCarambat
    @TimCarambat 5 days ago

    Not all the content; you have to upload that content to it. It doesn't just access your whole computer on install. I think people would be annoyed by that!

  • @themax2go
    @themax2go 6 days ago

    Is there a roadmap for the project?

  • @TimCarambat
    @TimCarambat 5 days ago

    I have not yet authored it! It will be at docs.useanything.com/roadmap when it is live.

  • @dissidentx
    @dissidentx 13 days ago

    Does AnythingLLM collect ANY user data that it sends to you as the developer, or make any other external connections?

  • @seasons-zd6ij
    @seasons-zd6ij 13 days ago

    This is a very important point ...

  • @TimCarambat
    @TimCarambat 10 days ago

    It's in the README. We have anonymous telemetry, and you can see exactly in the repo which "data points" are sent. TL;DR: never any chats, model specifics, heuristics, or anything like that. You can also turn it off, and that's that. No exceptions or anything. github.com/Mintplex-Labs/anything-llm?tab=readme-ov-file#telemetry--privacy

  • @dissidentx
    @dissidentx 10 days ago

    @@TimCarambat I appreciate you being straight and recognizing the importance of this. That gives people like me assurance in terms of our privacy, so we can leave those features on without worry. Thank you 👍

  • @SiliconSouthShow
    @SiliconSouthShow 13 days ago

    @TimCarambat Ah, I see: the agents won't work for us if we don't have those two on the list right now, so the way it shows in the video isn't available yet. You might want to mention that. Boo, agents don't work for me; I just use the free local Ollama stuff, nothing pay-per-use.

  • @SonGoku-pc7jl
    @SonGoku-pc7jl 13 days ago

    more more! ;)

  • @PreparelikeJoseph
    @PreparelikeJoseph 11 days ago

    What specs are good for a PC to run a good LLM? 32 GB RAM and a GTX 4070?

  • @TimCarambat
    @TimCarambat 10 days ago

    A 4070 should be plenty. Your main limitations are the quantization and parameter count. With those specs you should be able to load any Q8 8-13B model with no issue.
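As a back-of-envelope check on that sizing advice: weight memory is roughly parameter count times bits per weight divided by 8, with KV cache and activations adding a few GB on top. A minimal sketch of that rule of thumb (not an AnythingLLM or Ollama API, just arithmetic):

```python
def approx_weight_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Rough VRAM needed just for the weights: params * bits / 8, in GB.

    Real usage adds KV cache and activation overhead on top of this.
    """
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# An 8B model at Q8 (~8 bits/weight) needs roughly 8 GB for weights,
# while Q4 roughly halves that. This is why a 12 GB card like the 4070
# handles Q8 8B models and Q4 quantizations of 13B models.
print(approx_weight_gb(8, 8))   # ~8.0
print(approx_weight_gb(13, 4))  # ~6.5
```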

  • @wyohost
    @wyohost 15 hours ago

    Just went through this whole setup, and for some odd reason it keeps telling me it can't search the internet. I've tried local LLMs and the OpenAI API with GPT-4o. I also have both the Google Search API and the Serper API. Neither seems to be able to 'reach' the internet. What in the heck am I missing? I understand this stuff pretty well, and I just can't get it to search the web.

  • @DanielRolfe
    @DanielRolfe 6 days ago

    Can it be used with Elasticsearch as a vector store?

  • @TimCarambat
    @TimCarambat 5 days ago

    I had not thought of using Elasticsearch as a vector store. Do you have existing vectors that you wanted to leverage?

  • @MrAnt1V1rus
    @MrAnt1V1rus 2 days ago

    I've followed your guide, but instead of searching the web the model is hallucinating responses.

  • @GoldCaesar
    @GoldCaesar 7 days ago

    The agent settings for Ollama crash the desktop application.

  • @TimCarambat
    @TimCarambat 5 days ago

    How does it crash?

  • @mcub3988
    @mcub3988 13 days ago

    Download links are broken for some reason

  • @ntelo
    @ntelo 5 days ago

    Can I create Python Flask APIs with the AnythingLLM software?

  • @TimCarambat
    @TimCarambat 5 days ago

    You can ask an LLM to, but this app does not create other apps or run/execute arbitrary code (yet).

  • @ntelo
    @ntelo 4 days ago

    @@TimCarambat What I meant is that AnythingLLM provides an application with a GUI that can quickly set up agents and LLMs with instructions. Is there a possibility to use it as an endpoint?
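AnythingLLM does ship a developer API with per-workspace chat endpoints, which is one way to use it as an endpoint from your own code. The base URL, port, path, and payload shape below are assumptions for illustration; verify them against the API documentation exposed in your own AnythingLLM instance before relying on them.

```python
# Sketch: calling an AnythingLLM workspace as a chat endpoint from Python.
import json
import urllib.request

def build_chat_request(base_url: str, slug: str, api_key: str, message: str):
    """Build a POST request for a workspace chat (assumed endpoint shape)."""
    url = f"{base_url}/api/v1/workspace/{slug}/chat"  # assumed path
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"message": message, "mode": "chat"}).encode()
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# "my-workspace" and the port are placeholders for your own instance.
req = build_chat_request("http://localhost:3001", "my-workspace", "KEY", "Hello")
print(req.full_url)
# urllib.request.urlopen(req)  # uncomment against a running instance
```

A Flask app could wrap `build_chat_request` to proxy its own routes through a workspace.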

  • @Sage16226
    @Sage16226 6 hours ago

    If you upload a file (like the example in this video), then close AnythingLLM and open it up at a later date, will it remember that file?

  • @TimCarambat
    @TimCarambat 6 hours ago

    Yes. The vector database is the LLM's memory, and it is stored in the app itself unless you change it. It will remember that file until you remove it from the workspace.

  • @Sage16226
    @Sage16226 6 hours ago

    @@TimCarambat By the way, I was wondering if you guys would work on something similar to the Microsoft Recall feature they want to introduce, the one that takes constant screenshots of the computer. I like the idea, but I think something like that should be both open source and run locally without sending data back to the cloud. I just don't trust Microsoft enough to create something like that and not have it send some information back to them.

  • @TimCarambat
    @TimCarambat 5 hours ago

    @@Sage16226 This is something we have given a non-trivial amount of thought to. Myself and many others like the idea of Recall, but not the lack of observability you get without OSS: being able to see how it works and verify that the exact data stored is never shared outside your device. Adding this as a feature would make AnythingLLM more of an AI assistant, which is certainly the direction we are going, so you can expect that functionality, but I cannot specify a timeline at this time 😊

  • @Rkcuddles
    @Rkcuddles 18 hours ago

    Would just loooove instructions on building an agent that will query my database for me. Sick of copying schemas back and forth and correcting small errors. OK, just discovered that the latest version allows me to add DB credentials to the agent, but I can't get it to actually query anything. Huh... hope this gets more intuitive soon.

  • @renato79
    @renato79 12 days ago

    I couldn't run it on Linux (CentOS 7). I also followed some YouTube tutorials, but it didn't work.
