Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM!

Interested in AI development? Then you are in the right place! Today I'm going to be showing you how to develop an advanced AI agent that uses multiple LLMs.
If you want to land a developer job: techwithtim.net/dev
🎞 Video Resources 🎞
Code: github.com/techwithtim/AI-Age...
Requirements.txt: github.com/techwithtim/AI-Age...
Download Ollama: github.com/ollama/ollama
Create a LlamaCloud Account to Use LlamaParse: cloud.llamaindex.ai
Info on LlamaParse: www.llamaindex.ai/blog/introd...
Understanding RAG: • Why Everyone is Freaki...
⏳ Timestamps ⏳
00:00 | Video Overview
00:42 | Project Demo
03:49 | Agents & Projects
05:44 | Installation/Setup
09:26 | Ollama Setup
14:18 | Loading PDF Data
21:16 | Using LlamaParse
26:20 | Creating Tools & Agents
32:31 | The Code Reader Tool
38:50 | Output-Parser & Second LLM
48:20 | Retry Handle
50:20 | Saving To A File
Hashtags
#techwithtim
#machinelearning
#aiagents

Comments: 144

  • @257.4MHz
    @257.4MHz · 2 months ago

    You are one of the best explainers ever, out of 50 years of listening to thousands of people trying to explain thousands of things. Also, it's raining and thundering outside and I'm creating this monster; I feel like Dr. Frankenstein

  • @justcars2454

    @justcars2454

    2 months ago

    50 years of listening and learning, I'm sure you have great knowledge

  • @samliske1482
    @samliske1482 · 2 months ago

    You are by far my favorite tech educator on this platform. Feels like you fill in every gap left by my curriculum and inspire me to go further with my own projects. Thanks for everything!

  • @bajerra9517
    @bajerra9517 · 2 months ago

    I wanted to express my gratitude for the Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM! This tutorial has been incredibly helpful in my journey to learn and apply advanced AI techniques in my projects. The clear explanations and step-by-step examples have made it easy for me to understand and implement these powerful tools. Thank you for sharing your knowledge and expertise!

  • @Batselot
    @Batselot · 2 months ago

    I was really looking forward to learning this. Thanks for the video

  • @AlexKraken
    @AlexKraken · 2 months ago

    If you keep getting timeout errors and happen to be using a somewhat lackluster computer like me, changing `request_timeout` to a larger number in these lines helped me out: llm = Ollama(model="mistral", request_timeout=3600.0) ... code_llm = Ollama(model="codellama", request_timeout=3600.0). (3600.0 is 1 hour, but a query usually takes only 10 minutes.) Thanks for the tutorial!
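The fix above can be sketched as a tiny helper. This is a minimal sketch only: `OLLAMA_TIMEOUT` is a made-up environment variable for illustration (not a real Ollama setting), and the commented-out lines assume the llama-index `Ollama` wrapper used in the video.

```python
import os

def request_timeout(default: float = 3600.0) -> float:
    """Return the LLM request timeout in seconds.

    3600.0 (1 hour) is deliberately generous: slower machines in this
    thread reported ~10 minutes per query. OLLAMA_TIMEOUT is a
    hypothetical override used here for illustration only."""
    return float(os.environ.get("OLLAMA_TIMEOUT", default))

# The value would then be passed to the wrappers from the video, e.g.:
# llm = Ollama(model="mistral", request_timeout=request_timeout())
# code_llm = Ollama(model="codellama", request_timeout=request_timeout())
```

Pulling the number out into one place means both LLMs stay in sync when you tune it for your hardware.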

  • @ricardopata8846

    @ricardopata8846

    1 month ago

    thanks mate!

  • @seanbergman8927
    @seanbergman8927 · 2 months ago

    Excellent demo! I liked seeing it built in VS Code with loops, unlike many demos that are in Jupyter notebooks and can't run this way. Regarding more demos like this: yes!! I could most definitely learn a lot from more advanced LlamaIndex agent demos. It would be great to see a demo that uses their chat agent and maintains chat state for follow-up questions. Even more advanced and awesome would be an example where the agent asks a follow-up question if it needs more information to complete a task.

  • @ChadHuffman
    @ChadHuffman · 2 months ago

    Amazing as always, Tim. Thanks for spending the time to walk through this great set of tools. I'm looking forward to trying this out with data tables and PDF articles on parsing these particular data sets to see what comes out the other side. If you want to take this in a different direction, I'd love to see how you would take PDFs on how different parts of a system work and their troubleshooting methodology and then throw functional data at the LLM with errors you might see. I suspect (like other paid LLMs) it could draw some solid conclusions. Cheers!

  • @techgiantt
    @techgiantt · 2 months ago

    Just used your code with llama 3, and made the code generator a function tool, and it was fvcking awesome. Thanks for sharing👍🏻

  • @ravi1341975
    @ravi1341975 · 2 months ago

    Wow, this is absolutely mind-blowing. Thanks Tim.

  • @ft4jemc
    @ft4jemc · 2 months ago

    Great video. I would really like to see methods that don't involve reaching out to the cloud but keep everything local.

  • @jorgitozor
    @jorgitozor · 2 months ago

    This is very clear and very instructive, so much valuable information! Thanks for your work

  • @briancoalson
    @briancoalson · 2 months ago

    Some helpful things when going through this: - Your Python version needs to be

  • @mikewebb3855

    @mikewebb3855

    1 month ago

    For me, once I installed Xcode, rerunning the package install got the llama-cpp-python wheel to build. Thanks for this note, it helped make sense of the error message.

  • @dearadulthoodhopeicantrust6155

    @dearadulthoodhopeicantrust6155

    1 month ago

    Yup, I encountered this on Windows. In VS Code, Ctrl+Shift+P opens a search bar; I searched for "interpreter" and was able to access previous versions of Python in different environments. I selected a Conda environment and opened a new terminal, checked python --version, and the selected Python version was active.

  • @garybpt
    @garybpt · 2 months ago

    This was fascinating, I'm definitely going to be giving it a whirl! I'd love to learn how something like this could be adapted to write articles using information from our own files.

  • @martin-xq7te
    @martin-xq7te · 18 days ago

    Great work Tim, you hit it on the head. What puts people off is the downloading; putting it all into a requirements file is a great idea

  • @davidtindell950
    @davidtindell950 · 1 month ago

    Thank you for this very informative video. I really like the capabilities of LlamaIndex with PDFs. I used it to process several of my own medium-size PDFs and it was very quick and correct. It would be great to have another vid on how to save and reuse the VectorStore for queries against PDFs already processed. To me this is even more important than the code generation.

  • @beautybarconn
    @beautybarconn · 1 month ago

    No idea what’s going on but I love falling asleep to these videos 😊

  • @siddharthp9216
    @siddharthp9216 · 9 days ago

    I really loved the video, please keep making videos like this

  • @tomasemilio
    @tomasemilio · 1 month ago

    Bro your videos are gold.

  • @camaycama7479
    @camaycama7479 · 2 months ago

    Awesome video, man thx a big bunch!

  • @AaronGayah-dr8lu
    @AaronGayah-dr8lu · 1 month ago

    This was brilliant, thank you.

  • @samwamae6498
    @samwamae6498 · 2 months ago

    Awesome 💯

  • @nour.mokrani
    @nour.mokrani · 2 months ago

    Thanks for this tutorial and your way of explaining, I've been looking for this. Can you also make a vid on how to build enterprise-grade generative AI with NVIDIA NeMo? That would be so interesting. Thanks again

  • @vaughanjackson2262
    @vaughanjackson2262 · 2 months ago

    Great vid. The only issue is the fact that the parsing is done externally. For RAGs ingesting sensitive data this would be a major issue.

  • @ChathurangaBW
    @ChathurangaBW · 1 month ago

    just awesome !

  • @siddharthp9216
    @siddharthp9216 · 9 days ago

    The way you explain is really good and I understood it. You code line by line; others just copy-paste and don't explain what the code is doing, but you explained everything. Really good content. Also, can you bring more tutorials using multiple agents in CrewAI with this multi local LLM setup? The OpenAI key is very expensive, and all the other channels use that; none do it with a local LLM.

  • @ben3ng933
    @ben3ng933 · 4 days ago

    This is awesome.

  • @Ari-pq4db
    @Ari-pq4db · 2 months ago

    Nice ❤

  • @SashoSuper
    @SashoSuper · 2 months ago

    Nice one

  • @seanh1591
    @seanh1591 · 2 months ago

    Tim - thanks for the wonderful video. Very well done sir!! Is there an alternative to LlamaParse to keep the parsing local?

  • @stevenheymans

    @stevenheymans

    2 months ago

    pymupdf

  • @kayoutube690
    @kayoutube690 · 1 month ago

    New subscriber here!!!

  • @Pushedrabbit699-lk6cr
    @Pushedrabbit699-lk6cr · 2 months ago

    Could you also do a video on infinite world generation using chunks for RPG-type pygame games?

  • @ricardokullock2535
    @ricardokullock2535 · 1 month ago

    The guys at llmware have some fine-tuned models for RAG and some for function calling (outputting structured data). Could be interesting to try out with this.

  • @henrylam4934
    @henrylam4934 · 2 months ago

    Thanks for the tutorial. Is there any alternative to LlamaParse that allows me to run the application completely locally?

  • @purvislewies3118
    @purvislewies3118 · 1 month ago

    Yes man... this is what I want to do, and more...

  • @mohanvenkataraman648
    @mohanvenkataraman648 · 6 days ago

    Great video tutorial/walk-through. It would be nice to determine the minimum configuration required to run it. I tried the example on a 4-core Xeon Ubuntu laptop, 16GB, with an NVIDIA Quadro M2000M / Mesa Intel HD. Sometimes it gave a bunch of errors and I had to do a cold restart. Also, the only difference between an Ollama and a non-Ollama version should be the instantiation of the LLM and the embedding model. Am I right?

  • @equious8413
    @equious8413 · 1 month ago

    "If I fix these up." My god, Tim. You know that won't scale.

  • @robertwclayton6962
    @robertwclayton6962 · 1 month ago

    Great video tutorial! Thanks 🙌 (liked and subscribed, lol) A bit of a "noob" developer here, so vids like this really help. I know it's a lot to ask, but... I was wondering if you might consider showing us how to build a more modular app, where we have separate `.py` files: one to ingest and embed our docs, another to create and/or add embeddings to a vector DB (like Chroma), and another for querying the DB. Would this be possible? It would be nice to know how to have one Python file feed data to another while also minimizing redundancy (e.g., if `chroma_db` already exists, the `query.py` file will know to load the DB and query with LlamaIndex accordingly). Even better if you can show us how to make our `query_engine` remember users' prior prompts (during a single session). Super BONUS POINTS if you can show us how to then feed the `query.py` data into a front-end interface for an interactive chat with a nice UI. Phew! That was a lot 😂

  • @blissfulDew
    @blissfulDew · 2 months ago

    Thanks for this!! Unfortunately I can't run it on my laptop; it takes forever and the AI seems confused. I guess it needs a powerful machine...

  • @billturner2112
    @billturner2112 · 2 months ago

    I liked this. Out of curiosity, why venv rather than Conda?

  • @_HodBuri_
    @_HodBuri_ · 2 months ago

    Error 404 not found - localhost - /api/chat [FIX]: If anyone else gets an error like that when trying to run the codellama agent, just run the codellama LLM in a terminal to download it; it did not download automatically for me, as he says around 29:11. Similar to what he showed at the start with Mistral (ollama run mistral), you can run this in a new terminal to download codellama: ollama run codellama
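The fix above boils down to pulling the model before the agent first calls it. A sketch of the commands, assuming a stock Ollama install (`ollama pull` downloads without dropping you into an interactive session the way `ollama run` does):

```shell
# A 404 from the local /api/chat endpoint usually means the requested
# model isn't present. Pull both models the tutorial uses up front:
ollama pull mistral
ollama pull codellama

# Confirm they show up in the local model list:
ollama list
```

After that, rerunning the agent should reach both models without the 404.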

  • @aishwarypatil8708

    @aishwarypatil8708

    2 months ago

    thanks a lot!!!!

  • @firasarfaoui2739

    @firasarfaoui2739

    1 month ago

    I love this community... thanks a lot

  • @jishh7

    @jishh7

    10 days ago

    @TechWithTim This should be pinned :D

  • @nikta456
    @nikta456 · 1 month ago

    Please create a video about production-ready AI agents!

  • @camaycama7479
    @camaycama7479 · 2 months ago

    Will Mistral Large be available? I'm wondering if the LLM availability will be kept up to date or if there are other steps to do.

  • @mredmister3014
    @mredmister3014 · 1 month ago

    Good video, but do you have a complete AI agent example with your own data, without the code formatting? This is the closest tutorial I've found to an on-premise AI agent implementation that I can understand. Thanks!

  • @JNET_Reloaded
    @JNET_Reloaded · 2 months ago

    nice

  • @willlywillly
    @willlywillly · 2 months ago

    Another great tutorial... thank you! How do I get in touch with you, Tim, for consulting?

  • @TechWithTim

    @TechWithTim

    2 months ago

    Send an email to the address listed on my YouTube about page

  • @kodiak809
    @kodiak809 · 2 months ago

    So Ollama runs locally on your machine? Can I make it cloud-based by deploying it in my backend?

  • @jay.ogayon
    @jay.ogayon · 2 months ago

    What keyboard are you using? 😊

  • @sethngetich4144
    @sethngetich4144 · 2 months ago

    I keep getting errors when trying to install the dependencies from requirements.txt

  • @I2ealTuber

    @I2ealTuber

    1 month ago

    Make sure you have the correct version of python

  • @user-zq2nr2sp7o

    @user-zq2nr2sp7o

    22 days ago

    Or better, I prefer to pip install them manually

  • @unflappableunflappable1248
    @unflappableunflappable1248 · 2 months ago

    Cool

  • @mayerxc
    @mayerxc · 2 months ago

    What are your MacBook Pro specs? I'm looking for a new computer to run LLMs locally.

  • @techgiantt

    @techgiantt

    2 months ago

    Buy a workstation with a very good Nvidia GPU so you can use CUDA. If you still want to go for a MacBook Pro, get the M2 with 32GB or 64GB of RAM. I'm using a 16" MacBook M1 with 16GB of RAM and I can only run 7-13B LLMs without crashing it.

  • @TechWithTim

    @TechWithTim

    2 months ago

    I have an M2 Max

  • @GiustinoEsposito98

    @GiustinoEsposito98

    2 months ago

    Have you ever thought about using Colab as a remote webserver with a local LLM such as Llama 3, and calling it from your PC to get predictions? I have the same problem and was thinking about solving it like this.

  • @iamderrickfoo

    @iamderrickfoo

    1 month ago

    My MacBook Pro M1 8GB hangs while running the LLM locally. Any alternatives we can learn to build with, without killing my MBP?

  • @meeFaizul
    @meeFaizul · 2 months ago

    ❤❤❤❤❤❤

  • @RolandDewonou
    @RolandDewonou · 1 month ago

    It seems multiple entries in the requirements.txt require different versions of Python and other libraries. Could you clarify which versions of what are needed for this to work?

  • @Pyth_onist
    @Pyth_onist · 2 months ago

    I did one using Llama2.

  • @giovannip.6473

    @giovannip.6473

    2 months ago

    Are you sharing it somewhere?

  • @adilzahir9921
    @adilzahir9921 · 2 months ago

    Can I use this to make an AI agent that can call customers, interact with them, and take notes of what happens? Thanks!

  • @bigbena23
    @bigbena23 · 2 months ago

    What if I don't want my data to be processed in the cloud? Is there an alternative to LlamaParse that can be run locally?

  • @anandvishwakarma933
    @anandvishwakarma933 · 2 months ago

    Hey, can you share the system configuration needed to run this application?

  • @WismutHansen
    @WismutHansen · 2 months ago

    You obviously went to the Matthew Berman School of I'll revoke this API Key before publishing this video!

  • @avxqt966
    @avxqt966 · 2 months ago

    I can't install the llama-index packages on my Windows system. Also, the 'guidance' package is showing an error

  • @maximelhuillier8964

    @maximelhuillier8964

    1 month ago

    Did you find the error?

  • @danyloustymenko7465
    @danyloustymenko7465 · 2 months ago

    What's the latency of models running locally?

  • @Marven2
    @Marven2 · 2 months ago

    Can you make a series?

  • @hamsehassan7304
    @hamsehassan7304 · 1 month ago

    Every time I try to install the requirements.txt files, it only downloads some of the content and then I get this error message: Requires-Python >=3.8.1. I'm running this on a Mac with Python 3.12.3 and I can't seem to download an older version of Python.

  • @DomenicoDiFina
    @DomenicoDiFina · 2 months ago

    Is it possible to create an agent using other languages?

  • @ofeksh
    @ofeksh · 2 months ago

    Hi Tim! GREAT JOB on pretty much everything! But I have a problem: I'm running on Windows with PyCharm, and it shows me an error when installing the requirements. Because it's PyCharm, I have two options for installing the requirements, one from within PyCharm and one from the terminal. In both options I'm seeing an error (similar, but not exactly the same). Can you please help me with it?

  • @diegoromo4819

    @diegoromo4819

    2 months ago

    You can check which Python version you have installed.

  • @ofeksh

    @ofeksh

    2 months ago

    @diegoromo4819 Hey, thank you for your response. Which version should I have? I can't find it in the video.

  • @neilpayne8244

    @neilpayne8244

    2 months ago

    @ofeksh 3.11

  • @ofeksh

    @ofeksh

    2 months ago

    @neilpayne8244 Shit, that's my version...

  • @vedantbande5682
    @vedantbande5682 · 1 month ago

    How do I know which requirements.txt dependencies are required (it's a large list)?

  • @Czarlsen
    @Czarlsen · 1 month ago

    Is there much difference between result_type = "markdown" and result_type = "text"?

  • @amruts4640
    @amruts4640 · 2 months ago

    Can you please do a video about making a GUI in Python?

  • @user-zx9pz3dn8b
    @user-zx9pz3dn8b · 1 month ago

    Why did I need to downgrade Python 3.12 to 3.11 to install requirements.txt (some dependencies require a version less than 3.12), while I see you using Python 3 with no errors?

  • @JRis44
    @JRis44 · 1 month ago

    Dang, seems I'm stuck with a 404 message at 31:57. Anyone else have that issue, or a fix for it? Maybe the dependencies already need an update?

  • @Darkvader9427
    @Darkvader9427 · 1 month ago

    Can I do the same using LangChain?

  • @joshuaarinaitwe8351
    @joshuaarinaitwe8351 · 2 months ago

    Hey Tim, great video. I have been watching your videos for some time, though I was definitely young then. I need some guidance: I'm 17 and I want to do an AI and machine learning course. Can somebody advise me?

  • @AndyPandy-ni1io
    @AndyPandy-ni1io · 19 days ago

    What am I doing wrong? When I run it, it does not work no matter what I try.

  • @radheyakhade9853
    @radheyakhade9853 · 2 months ago

    Can anyone tell me what basic things one should know before going into this video?

  • @notaras1985
    @notaras1985 · 1 month ago

    How do we know that Meta hasn't corrupted the ollama model with spyware or other malicious code?

  • @technobabble77
    @technobabble77 · 2 months ago

    I'm getting the following when I run the prompt:
    Error occured, retry #1: timed out
    Error occured, retry #2: timed out
    Error occured, retry #3: timed out
    Unable to process request, try again...
    What is this timing out on?

  • @coconut_bliss5539

    @coconut_bliss5539

    2 months ago

    Your agent is unable to reach your Ollama server. It's repeatedly trying to query your Ollama server's API on localhost, and those requests are timing out. Check whether your Ollama LLM is initializing correctly, and make sure your agent constructor contains the correct LLM argument.
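The retry behavior quoted in the question above can be sketched in isolation. This is a hypothetical stand-in for the video's retry handler, not its actual code: `query_with_retries` accepts any callable, so anything that raises `TimeoutError` can play the role of the agent query.

```python
def query_with_retries(query_fn, prompt, retries=3):
    """Retry a query that may time out, echoing the tutorial's retry loop.

    query_fn is any callable taking a prompt; TimeoutError is treated as
    retryable, any other exception propagates to the caller."""
    for attempt in range(1, retries + 1):
        try:
            return query_fn(prompt)
        except TimeoutError as exc:
            # Message spelled as in the log output quoted above.
            print(f"Error occured, retry #{attempt}: {exc}")
    return "Unable to process request, try again..."
```

Seeing all three retries fail, as in the comment, means every attempt timed out, which points at the Ollama server rather than the retry logic.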

  • @TballaJones

    @TballaJones

    2 months ago

    Do you have a VPN like NordVPN running? Sometimes that can mess up local servers

  • @adithyav6877

    @adithyav6877

    10 days ago

    Change the request_timeout to a bigger value, like 3600.0

  • @adilzahir9921
    @adilzahir9921 · 2 months ago

    What's the minimum laptop spec to run this model? Thanks!

  • @samohtGTO

    @samohtGTO

    1 month ago

    You need a good GPU to run literally any LLM

  • @dolapoadefisayomioluwole1341
    @dolapoadefisayomioluwole1341 · 2 months ago

    First to comment today 😂

  • @levinkrieger8452
    @levinkrieger8452 · 2 months ago

    First

  • @257.4MHz
    @257.4MHz · 2 months ago

    Well, I can't get it to work. It gives a 404 on /api/chat

  • @omkarkakade3438

    @omkarkakade3438

    2 months ago

    I am getting the same error

  • @mrarm4x

    @mrarm4x

    2 months ago

    You are probably getting this error because you are missing the codellama model. Run ollama pull codellama and it should fix it.

  • @ases4320
    @ases4320 · 2 months ago

    But this is not completely "local" since you need an API key, no?

  • @matteominellono

    @matteominellono

    2 months ago

    These APIs are used within the same environment or system, enabling different software components or applications to communicate with each other locally without the need to go through a network. This is common in software libraries, operating systems, or applications where different modules or plugins need to interact. Local APIs are accessed directly by the program without the latency or the overhead associated with network communications.

  • @nikta456
    @nikta456 · 1 month ago

    Problems?
    # make sure the LLM is listening
    pip install llama-index qdrant_client torch transformers
    pip install llama-index-llms-ollama
    # didn't download codellama
    ollama pull codellama
    # timeout error
    set request_timeout to 500

  • @Aiden-rz6vf
    @Aiden-rz6vf · 2 months ago

    Llama 3

  • @PANDURANG99
    @PANDURANG99 · 12 days ago

    How do I handle multiple PDFs at a time, where the PDFs contain drawings?

  • @Meir-ld2yi
    @Meir-ld2yi · 2 months ago

    Ollama Mistral works so slowly that even "hello" takes like 20 minutes

  • @neiladriangomez
    @neiladriangomez · 2 months ago

    I'll come back to this in a couple of months. Too advanced for me; my head is spinning, I cannot grasp a single bit of info 😵‍💫

  • @TechWithTim

    @TechWithTim

    2 months ago

    Haha no problem! I have some easier ones on the channel

  • @cocgamingstar6990

    @cocgamingstar6990

    2 months ago

    Me too😅

  • @alantripp6175

    @alantripp6175

    2 months ago

    I can't figure out which AI agent vendor is open for me to sign up to use.

  • @dr_harrington
    @dr_harrington · 2 months ago

    DEAL BREAKER: 17:20 "What this will do is actually take our documents and push them out to the cloud."

  • @dezly-macauley
    @dezly-macauley · 2 months ago

    I want to learn how to make an AI agent that auto-removes / auto-deletes these annoying spam s3x bot comments on useful YouTube videos like this.

  • @kazmi401
    @kazmi401 · 2 months ago

    Why does YouTube not add my comment? F*CK

  • @NathanChambers
    @NathanChambers · 2 months ago

    Using a module that requires you to upload the files or data (LlamaParse/LlamaCloud) totally defeats the purpose of self-hosting your own LLM models... Dislike just for that! It makes as little sense as putting your decentralized currency in a centralized bank.

  • @skyamar

    @skyamar

    2 months ago

    stupid orc

  • @iva1389

    @iva1389

    2 months ago

    How is that an issue? You want the ability to parse files for the model. Are you sure you've grasped the concept of agents and tools? The whole point is to have RAG locally. The decentralization comparison is simply unrelated to what has been done here.

  • @NathanChambers

    @NathanChambers

    2 months ago

    @iva1389 It is the same thing. You're taking something that allows you or your business to do things on your own without a third party, but adding a third party for no reason: a third party where your data can be hacked, stolen, or man-in-the-middle attacked. So the comparison IS VALID!

  • @NathanChambers

    @NathanChambers

    2 months ago

    @iva1389 The whole point of things like Ollama and local LLMs is to keep things in-house. Using a third party defeats the purpose of these models. Same thing as putting decentralized money in central banks, so they really are the same type of thing to do! It's like saying cocaine is bad for you, but let's go do some crack. :P

  • @TechWithTim

    @TechWithTim

    2 months ago

    Then simply don't use it and use the local loading instead. I'm just showing a great option that works incredibly well; you can obviously tweak this, and that's the idea.

  • @jaivalani4609
    @jaivalani4609 · 2 months ago

    Hi Tim, it's really simple to understand. One ask: is LlamaParse free to use, or does it need a subscription key?

  • @jaivalani4609

    @jaivalani4609

    2 months ago

    Can we use LlamaParse locally?

  • @TechWithTim

    @TechWithTim

    2 months ago

    It’s free to use!

  • @jaivalani4609

    @jaivalani4609

    2 months ago

    @TechWithTim Thanks, but does it require data to be sent to the cloud?

  • @samohtGTO

    @samohtGTO

    1 month ago

    @jaivalani4609 It does send it to the cloud; you can do 1000 pages per day on the free tier. It sends the file to the cloud and gets the markdown file back.

  • @norminemralino2260
    @norminemralino2260 · 1 month ago

    I get an error when trying to parse readme.pdf:
    Error while parsing the file '/Users/.../AI-Agent-Code-Generator/data/readme.pdf': Illegal header value b'Bearer '
    Failed to load file /Users/.../AI-Agent-Code-Generator/data/readme.pdf with error: Illegal header value b'Bearer '. Skipping...
    Any clue to what might be happening?

  • @norminemralino2260

    @norminemralino2260

    1 month ago

    I'm pretty sure it has something to do with LlamaParse(). I can't seem to reach LlamaCloud using my API key. I copied and pasted it into the .env file.

  • @norminemralino2260

    @norminemralino2260

    1 month ago

    Not sure why load_dotenv() doesn't work for me. I was able to set it using os.environ['LLAMA_CLOUD_API_KEY']
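The thread above is worth unpacking: "Illegal header value b'Bearer '" means the Authorization header was built from an empty key, i.e. LLAMA_CLOUD_API_KEY was never loaded into the environment. A hand-rolled stand-in for what `load_dotenv()` does makes this easy to debug; it is illustrative only (the real python-dotenv handles quoting, export prefixes, and more):

```python
import os

def load_env_file(path: str = ".env") -> bool:
    """Tiny stand-in for python-dotenv's load_dotenv(): read KEY=VALUE lines.

    Returns False if the file is missing, which is the usual reason the
    API key silently stays empty (run the script from the project root,
    or pass an absolute path)."""
    if not os.path.exists(path):
        return False
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables win, matching dotenv's default.
            os.environ.setdefault(key.strip(), value.strip())
    return True
```

Checking the return value (or printing `os.environ.get("LLAMA_CLOUD_API_KEY")` right after loading) quickly shows whether the `.env` file was actually found.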

  • @maximelhuillier8964
    @maximelhuillier8964 · 1 month ago

    I have this error message: [WinError 126] The specified module could not be found. Error loading "...\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\lib\shm.dll" or one of its dependencies. Can you help me?

  • @donaldhawkins6610

    @donaldhawkins6610

    1 month ago

    This is a bug and should be fixed with pytorch >= 2.3.1. If pytorch is version 2.3.0 in requirements.txt, change it to 2.3.1 or a newer release if another one is already out

  • @AndyPandy-ni1io
    @AndyPandy-ni1io · 19 days ago

    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("BAAI/bge-m3")
    from llama_index import download_loader
    download_loader("LocalDiskVectorStore")().persist(persist_dir="./storage")

  • @YeungLorentz

    @YeungLorentz

    13 days ago

    What does it do?

  • @AndyPandy-ni1io

    @AndyPandy-ni1io

    13 days ago

    @YeungLorentz My attempt at getting the script working. It turns out to be a version issue with something, can't remember what, but the tutorial is out of date so there's no point trying to follow it.

  • @entzyeung

    @entzyeung

    13 days ago

    @AndyPandy-ni1io Yeah, like the run function doesn't work anymore. I'm not too sure if there are others.