Tim Carambat

Software Engineer, Founder and CEO of Mintplex Labs. Creator of AnythingLLM and Senate/House Stock Watcher.

Comments

  • @gordonwoo8127 · 18 hours ago

    Thank you very much. As soon as I saw that RAG was built in and it was simple to use, I immediately started finding README PDFs on various topics to make sure I could use this tool as efficiently as possible. After my targeted PDFs are found, I plan on grabbing data from how-to guides and wikis.

  • @coryrichter3680 · 22 hours ago

    Very cool to play with, look forward to seeing where the Agents go, nice work!

  • @equilibrium964 · 1 day ago

    I have a little problem: I use AnythingLLM with TextGenerationWebUI as a backend, and when I try to use the @agent feature I always get the error message "Could not respond to message. OpenAI API key must be provided to use agents." But I want it to use my local LLM; isn't that possible?

  • @tomcat3258 · 1 day ago

    Why is this video filled with comments from users with a word and 4 letters as a name? AI generated engagement???

  • @StunMuffin · 1 day ago

    I really appreciate your time to explain a lot of things to us. ❤🎉

  • @chasisaac · 2 days ago

    If I downloaded the default Llama model, how do I install the Q8 version instead? Do I just install it???

  • @hxxzxtf · 2 days ago

    🎯 Key points for quick navigation:
    00:13 🤔 The speaker asks if you can tell him exactly what data is in your vector database, and guarantees that you can't.
    00:39 🔓 VectorAdmin is a universal GUI that allows you to manage your vector data like any other database.
    01:05 💼 VectorAdmin focuses on application, while other companies focus on infrastructure.
    02:11 📁 VectorAdmin can connect to multiple types of databases, including Pinecone and Chroma.
    04:42 🔑 When you first log in to VectorAdmin, you'll see a page with a red connection indicator because it hasn't been filled out yet.
    06:03 ⏱️ The sync Pinecone data button indicates that there's data in Pinecone that VectorAdmin doesn't know about.
    07:12 📁 You can add documents directly into your Pinecone instance using an embedding service like OpenAI.
    09:04 🔍 You can see what's in the vector database, including text chunks and embedded documents.
    10:02 💪 You can edit or delete vectors atomically without affecting other workspaces or namespaces.
    Made with HARPA AI

  • @guyjaber1628 · 2 days ago

    So I downloaded Ollama on my Mac and all, but when I got AnythingLLM, it prompted me to download Ollama too so it runs on it rather than having to run both at the same time. What's the difference?

  • @jimg8296 · 3 days ago

    Looking forward to being able to import our own skills.

  • @drakouzdrowiciel9237 · 4 days ago

    good job

  • @jimg8296 · 4 days ago

    Just tripped onto this. Awesome. Added GitHub star.

  • @Frankvegastudio · 4 days ago

    Hi there, I got stuck at getting the API… I think it's no longer free. I have a Google account, but it's asking for admin access. Please help.

  • @Frankvegastudio · 4 days ago

    Nevermind, I got it

  • @drakouzdrowiciel9237 · 5 days ago

    thx 😉

  • @iam8333 · 5 days ago

    Billionaire alert 🎉 Seriously dope content, easy to understand, effective communication.

  • @billwaterson9492 · 6 days ago

    300th comment. AND THE FIRST ONE WHOS HUMAN, HOW YOU LIKE MY AI ARMY

  • @TimCarambat · 6 days ago

    bro got AGI on YT comments

  • @Alias_Reign · 7 days ago

    I couldn’t get the web browsing feature to work. I put in my api key and search engine ID but still nothing. I’m attempting to do this with llama 3 dolphin, has anyone been successful getting this feature to work on the dolphin model?

  • @TimCarambat · 6 days ago

    What quant and param size? These two factors play into the formed JSON and ability to "listen" to the instructions to call a tool properly

  • @gurudaki · 7 days ago

    Great video! What was the Ollama external URL you pasted???

  • @bro_truth · 7 days ago

    Bro, this has to be the most comprehensive, simple, engaging and all-around entertaining video on AI I've ever watched. Your presentation, explanations, and expert-level knowledge base are all 'S' tier! Bra-freakin'-vo! Subscriber well earned and deserved! 🏆👏🏽👏🏽

  • @filipeeduardo1177 · 7 days ago

    That is really good. CrewAI is amazing, but not all users interested in LLMs have the time to configure that much stuff.

  • @TimCarambat · 6 days ago

    I was diving deep into CrewAI the other day because I would love for CrewAI users to be able to "port" their work into AnythingLLM, but I found CrewAI doesn't have a "server" or REST API; it's just a library like LangChain is, so scripts and static code :/ CrewAI+ (their hosted and paid model) is the only thing we could possibly integrate with, which is not ideal. AutoGen Studio is probably the only other AI agent framework/tool we could integrate with. SuperAgent as well.

  • @enriquebruzual1702 · 8 days ago

    @-22:52 Oh Charlie, grow up and smile this is for your dad.

  • @queerhjhj · 8 days ago

    This video's goated.

  • @enriquebruzual1702 · 8 days ago

    I love this tool. I already made several Workspaces, each with its own LLM and RAG. This video was a good how-to with an explanation. I am a Python developer and I would like to create my own agents.

  • @Naki87 · 8 days ago

    @TimCarambat I have stalled in my progress when trying to run Ollama. It sits for about 5 minutes and then PowerShell tells me that it "timed out waiting for llama runner to start - progress 1.00". Suggestions?

  • @danvorosmarty9854 · 8 days ago

    Great video and software, thanks. I am unable to get agent commands to do anything. They just hang indefinitely with no indication that anything is happening. The non-agent commands seem to work fine, but as soon as I try to use the @agent command, it does nothing. I am pretty sure I have everything configured correctly. Ideas?

  • @user-cl7vn1eg3u · 8 days ago

    The potential of this is near limitless so congratulations on this app.

  • @user-wt7pq5qc2q · 9 days ago

    Nice, can we use a web browser to connect to it? Thanks.

  • @TimCarambat · 8 days ago

    The Docker version, which is demoed here, yes

  • @JohnPamplin · 10 days ago

    @TimCarambat I'm very impressed with AnythingLLM, particularly how you can easily incorporate additional capabilities with agents. I'm trying to create a GPT chatbot in Slack without using OpenAI, and at first, I was going to use LM Studio since it has a "server" component - where you can pass it API calls and reflect the answer in a Slack bot. I've looked around and I do not see this feature in AnythingLLM - is this coming or planned? I'd love to drop everything and just use your excellent tool.

  • @TimCarambat · 8 days ago

    We have an API that runs in the background. You can make an API key and communicate with AnythingLLM's workspaces via a Slackbot to accomplish this.
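
    For illustration, a rough sketch of what such a Slackbot backend could send to that API, written in Python. The base URL, port, endpoint path, and response field below are assumptions rather than confirmed details; the API docs bundled with AnythingLLM are the source of truth.

        # Rough sketch: forward a question to an AnythingLLM workspace and return the answer.
        # Base URL, endpoint path, and payload/response fields are assumptions; verify them
        # against the API documentation shipped with your AnythingLLM install.
        import requests

        ANYTHINGLLM_URL = "http://localhost:3001"   # assumed default for the Docker/web build
        API_KEY = "YOUR_ANYTHINGLLM_API_KEY"        # generated under Settings
        WORKSPACE_SLUG = "my-workspace"             # hypothetical workspace slug

        def ask_workspace(question: str) -> str:
            resp = requests.post(
                f"{ANYTHINGLLM_URL}/api/v1/workspace/{WORKSPACE_SLUG}/chat",
                headers={"Authorization": f"Bearer {API_KEY}"},
                json={"message": question, "mode": "chat"},
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json().get("textResponse", "")

        if __name__ == "__main__":
            print(ask_workspace("Summarize the latest uploaded document."))

    A Slack bot would simply call ask_workspace() with the incoming message text and post the returned string back to the channel.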

  • @Spartacusroo · 11 days ago

    Beware connecting your LLM for live browsing and web search if you're not using a virtual machine/Docker. Your model is then open to potential security risks such as bias and malicious links purposely looking to infect local LLMs and then pivot to your local environment.

  • @TimCarambat · 10 days ago

    This would impact the local LLM's response for a given query, though, and have no long-term effects on your model or machine. Additionally, a VM or Docker would provide no additional security, since it's just reading text from a link with misinfo?

  • @KINGLIFERISM · 11 days ago

    This video got me to download it, so the marketing works... Very impressed. The software didn't impress me, though. Straightforward, but... once I connected Google it just did not do anything (the agent, that is). Disappointed. Gamer PC. Uninstalled.

  • @TimCarambat · 10 days ago

    This is likely because you use an OSS model, and as the hint says at the very top where you enabled agents, "open source LLMs' ability to call tools is entirely dependent on the model". Additionally, in this video I explained many times that I'm on a larger quantization because the default is not that adept at tool calls. And lastly, your gaming PC only determines how big or fast a model can run, not that a mid-range Q4 Llama 3 (what you downloaded) will suddenly become more capable. I think I outlined this pretty clearly in the video, and I'm not sure how much clearer I can be that tool calling with OSS models is model-dependent. I would urge you to try again with a beefier model using Ollama directly so you can access higher quantizations and get exactly the performance I am using in this video.

  • @ZeroCool22 · 11 days ago

    100 queries = 100 searches? Thanks for all your work, starred the GitHub repo.

  • @AIVisionaryLab · 11 days ago

    🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥

  • @wat7842 · 11 days ago

    Hi. Just installed and connected to Ollama that's running on a separate, slightly beefier machine with a 2060 Super GPU. The responses are very fast. I'll sub and keep checking back.

  • @TimCarambat · 10 days ago

    What kind of model? If I can ask? Always curious what people run

  • @wat7842 · 10 days ago

    @@TimCarambat i5 10400, 32GB. So far I've tried CodeLlama 7B (very fast), CodeLlama 13B (slower but okay), Llama 3 8B (very fast), CodeGemma 7B (very fast), Mistral 7B (fast), and Dolphin-Mixtral 8x7B (slooowwww). I'm brand new to it. I'm trying to put together something that will help me come up with PowerShell scripts and Linux scripts and commands, using natural language to describe what I want. ChatGPT has been really good at this; so far, mixed results with Ollama. AnythingLLM is seeming very cool, but I've barely scratched the surface.

  • @frosti7 · 11 days ago

    Awesome, but AnythingLLM won't read PDFs with OCR like ChatGPT would. Is there a multimodal model that can do that?

  • @TimCarambat · 10 days ago

    We need to support vision first so we can enable OCR!

  • @kekuramusa · 11 days ago

    Very helpful video. Thanks!

  • @jimg8296 · 11 days ago

    Been using for the past few months and is my go-to app for local RAG. Adding agents huge plus. Looking forward to being able to add my own AutoGen agents to the list with their own special tools. Thanks for the great work Tim.

  • @MedicinalMJ · 11 days ago

    I tried this but it ran ridiculously slow, and the web search features never really worked either. In comparison, running ollama / llama3 through wsl was lightning fast.

  • @TimCarambat · 11 days ago

    Why don't you just connect AnythingLLM to the Ollama running in WSL then? Sounds like you don't have CUDA installed on the host machine. Also, why even use Ollama in WSL when Ollama has a Windows app? Sounds like it was using only CPU. And if that's not the case, more tokens = slower inference, so yes, of course adding context, the foundational part of what RAG is, would result in a slower time to first token.

  • @SouthbayCreations · 12 days ago

    Today when I tried to start up AnythingLLM I'm just getting a spinning circle on the main startup screen. I've left it sitting for 30 minutes and it was still doing it. Tried rebooting but no luck, even uninstalled and reinstalled, but it still does it. Any ideas?

  • @TimCarambat · 8 days ago

    Sounds like a permission issue - like the computer is not allowing the required workers to boot

  • @SouthbayCreations · 12 days ago

    Forgive my ignorance but is this a better option than Ollama?

  • @TimCarambat · 8 days ago

    LMStudio is basically Ollama with more model support and a UI

  • @Spot120 · 12 days ago

    Yo, honestly it feels great when guys like you make your software completely free, and I also think you should keep an option for donations. After seeing guys like you, I will make something great and make it completely free to use and open source. Again, thanks dude! ❤

  • @moomoo8115 · 12 days ago

    How do I download and find the right model on a Mac M1?

  • @TimCarambat · 8 days ago

    In Ollama? You can browse their available model tags on ollama.com.
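
    If you'd rather script it than browse, Ollama's local REST API can pull a tag and list what's installed. A rough sketch in Python, assuming Ollama is running on its default port; the model tag used is just the example from the video.

        # Rough sketch: pull a model tag through Ollama's local REST API, then list installed models.
        # Assumes Ollama is running locally on its default port (11434).
        import json
        import requests

        OLLAMA_URL = "http://127.0.0.1:11434"

        def pull_model(tag: str) -> None:
            # /api/pull streams JSON progress lines until the download completes
            with requests.post(f"{OLLAMA_URL}/api/pull", json={"name": tag}, stream=True) as resp:
                resp.raise_for_status()
                for line in resp.iter_lines():
                    if line:
                        print(json.loads(line).get("status", ""))

        def list_models() -> list[str]:
            resp = requests.get(f"{OLLAMA_URL}/api/tags")
            resp.raise_for_status()
            return [m["name"] for m in resp.json().get("models", [])]

        if __name__ == "__main__":
            pull_model("llama3:8b-instruct-q8_0")  # example tag from the video; pick any tag from ollama.com
            print(list_models())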

  • @estebann · 13 days ago

    Great work! Though I can't get it to work! I am trying to replicate this but I am unable to. I am using Ollama and llama3:8b-instruct-q8_0. It seems to try to use it but...

    @agent can you please summarize DavydovTayar.pdf?
    Agent @agent invoked. Swapping over to agent chat. Type /exit to exit agent execution loop early.
    [debug]: @agent is attempting to call `document-summarizer` tool
    @agent: Looking at the available documents.
    @agent: Found 1 documents
    @agent: Grabbing all content for DavydovTayar.pdf
    @agent: Summarizing DavydovTayar.pdf...
    I apologize, but it seems that I encountered an issue while attempting to summarize the file "DavydovTayar.pdf". Unfortunately, I was unable to access the necessary information due to a missing OpenAI or Azure OpenAI API key.

    I tried the latest Docker image and the latest standalone app on macOS, to no avail. The same document can be accessed via traditional RAG in the same chat without the agent, but I am looking to replicate this video. I saw Issue 1335 on GitHub, "[FEAT]: Expand summarization to generic LLM Provider", but it is not resolved. Was anyone able to replicate this?

  • @estebann · 13 days ago

    I tried a few variations (other models, Mistral, other quants of Llama 3, other PDFs, etc.) and it still asks for an OpenAI key.

  • @SagarRana · 13 days ago

    Thank you so much. The only problem I have is I can't seem to find the AnythingLLM GitHub PDF file. Where do I download it from?

  • @TimCarambat · 8 days ago

    Oh, I just saved the README file as a PDF. You can save it as anything. That is just my example file I use since it's larger than most model contexts.

  • @SagarRana · 8 days ago

    @@TimCarambat thank you

  • @a14266 · 13 days ago

    8:28 AI cheated human... hahahha

  • @AGI2030 · 14 days ago

    Great work Tim! If using 'AnythingLLM' in the 'LLM Provider' section, can I load other LLMs that are not listed? Like the '8b-instruct-q8_0' you mention? So I don't have to run Ollama separately to load a model?

  • @TimCarambat · 8 days ago

    The default Ollama we ship with has some "basic" models. If you always want the latest and greatest models, you would need the separate Ollama. You bring up a good point, though. It would be nice to have a "custom" option where you can paste in any valid Ollama tag. The built-in version is usually behind the latest, and some models don't work with older Ollama versions, which is why that option doesn't exist yet.

  • @morganblais5046 · 14 days ago

    Guessing things have changed, but I cannot seem to find where my programmatic-access API key would be.

  • @TimCarambat · 8 days ago

    If you click Settings in the sidebar (the wrench icon), the API key option is in that settings sidebar.

  • @Rewe4life · 14 days ago

    This looks so great! Is it possible to run it on a server and access the UI via a web browser?

  • @TimCarambat · 14 days ago

    Yes, in this demo I was just using the desktop app, but we have a Docker server-based/web UI as well.

  • @Rewe4life · 14 days ago

    @@TimCarambat That sounds amazing. I will try that in the next few days. Today I've installed privateGPT and that is quite cool, but very limited in inputs. Yours looks much more flexible with all the possible inputs like links to websites and so on. I have thousands of PDF documents (scanned my file cabinet; that took months). Is it possible to load them all in and then kind of talk to my entire file cabinet?

  • @Albert-wh7gj · 14 days ago

    I like the idea, but the name "AnythingLLM" is really misleading. It is not an LLM; it is an add-on for Ollama.

  • @TimCarambat · 14 days ago

    Nope, it has an LLM inside it as well. In this demo specifically I use Ollama because lots of people do. But we have an LLM engine built in that works just like Ollama, because it is Ollama.

  • @SagarRana · 14 days ago

    What is the Ollama base URL? Nevertheless, thank you.

  • @TimCarambat · 8 days ago

    usually 127.0.0.1:11434
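
    A quick way to confirm that is the right base URL before pasting it into AnythingLLM is to hit Ollama's root endpoint, which replies with "Ollama is running" when the server is up. A small sketch, assuming the default port hasn't been changed.

        # Quick connectivity check for the Ollama base URL.
        from urllib.request import urlopen

        OLLAMA_BASE_URL = "http://127.0.0.1:11434"

        try:
            with urlopen(OLLAMA_BASE_URL, timeout=5) as resp:
                print(resp.read().decode())  # expect: "Ollama is running"
        except OSError as err:
            print(f"Ollama not reachable at {OLLAMA_BASE_URL}: {err}")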

  • @SagarRana · 8 days ago

    @@TimCarambat thank you

  • @_TheDudeAbides_ · 14 days ago

    I have tried this out a bit now and I really like it. However, I would like to import lots of documents, and not via a GUI. Is it possible somehow to use Python to connect to AnythingLLM and post text files via an API or something like that? It would be fun to pump in huge amounts of text files and ask stuff.

  • @TimCarambat · 8 days ago

    The desktop and Docker apps both ship with a full API that would enable this!
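
    For example, a bulk import loop against that API could look roughly like the sketch below. The upload endpoint path, port, and folder name are assumptions for illustration; check the bundled API docs for the exact routes before relying on this.

        # Rough sketch: walk a folder and push every supported file to an AnythingLLM
        # document upload endpoint. Endpoint path and response shape are assumptions;
        # verify them against the API docs shipped with your install.
        from pathlib import Path

        import requests

        ANYTHINGLLM_URL = "http://localhost:3001"          # assumed Docker/web default
        API_KEY = "YOUR_ANYTHINGLLM_API_KEY"
        DOCS_DIR = Path("~/my_file_cabinet").expanduser()  # hypothetical folder of text files

        headers = {"Authorization": f"Bearer {API_KEY}"}

        for path in sorted(DOCS_DIR.glob("*")):
            if path.suffix.lower() not in {".txt", ".md", ".pdf"}:
                continue
            with path.open("rb") as fh:
                resp = requests.post(
                    f"{ANYTHINGLLM_URL}/api/v1/document/upload",
                    headers=headers,
                    files={"file": (path.name, fh)},
                    timeout=300,
                )
            print(path.name, resp.status_code)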

  • @ika9 · 14 days ago

    AnythingLLM is good and very fast, thank you for providing such a useful tool. However, I'm finding it difficult to get the SQL agent to work.