RAG with Llama-Index: Vector Stores

Science & Technology

In this third video of our series on LlamaIndex, we explore how to use different vector stores in LlamaIndex while building RAG applications. We look at a self-hosted solution (ChromaDB) and a cloud-based solution (Pinecone); a minimal code sketch of each setup follows the timestamps below.
CONNECT:
☕ Buy me a Coffee: ko-fi.com/promptengineering
|🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: / discord
▶️️ Subscribe: www.youtube.com/@engineerprom...
📧 Business Contact: engineerprompt@gmail.com
💼Consulting: calendly.com/engineerprompt/c...
Links:
Vector stores in LlamaIndex: tinyurl.com/2p877e6k
Google Colab: tinyurl.com/2s36eyb2
llamaIndex playlist: • Llama-Index
Timestamps:
[00:00] Intro
[00:21] Vector stores in llamaIndex
[01:24] Basic Setup
[02:45] Upload files
[03:45] Self-Hosted Vector Store
[09:00] Cloud-Based Vector Store
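
For reference, here is a minimal sketch of the self-hosted setup covered in the video. It assumes the pre-0.10 llama_index import style (the same era as the service_context shown in the notebook), chromadb installed, and a local ./data folder; the ./chroma_db path and "my_docs" collection name are illustrative placeholders, not names taken from the video.

```python
# Minimal sketch only; names and API version are assumptions (see note above).
import chromadb
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

# Self-hosted: persist Chroma locally so the embeddings survive restarts.
chroma_client = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = chroma_client.get_or_create_collection("my_docs")

vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Load local documents, embed them, and store the vectors in Chroma.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

print(index.as_query_engine().query("What are these documents about?"))
```

The cloud-based store follows the same pattern; only the vector_store changes. A sketch assuming a pinecone-client v2-style setup and an existing Pinecone index named "quickstart" (both assumptions):

```python
import pinecone
from llama_index.vector_stores import PineconeVectorStore

# Cloud-based: connect to an existing Pinecone index and wrap it for LlamaIndex.
pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="gcp-starter")
vector_store = PineconeVectorStore(pinecone_index=pinecone.Index("quickstart"))
```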

Comments: 24

  • @engineerprompt · 9 months ago

    Want to connect? 💼Consulting: calendly.com/engineerprompt/consulting-call 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering |🔴 Join Patreon: Patreon.com/PromptEngineering

  • @user-em4ld3zc9y · 8 months ago

    This tutorial series is great! Best one I found so far. Thank you for sharing this.

  • @gregorykarsten7350 · 9 months ago

    Great work, excellent topic. LlamaIndex opens up so many more possibilities for RAG. I'm very much interested in building a knowledge base that gets added to on a daily basis. What do you think of a knowledge graph in this context?

  • @anilshinde8025 · 8 months ago

    Great video, thanks. Waiting for the addition of a local LLM in the same code.

  • @fuba44 · 9 months ago

    This was great, love this kind of content! ❤❤❤

  • @engineerprompt · 9 months ago

    Thank you 🙏

  • @vitalis · 9 months ago

    Super interesting, looking forward to the video

  • @engineerprompt · 9 months ago

    Thank you 🙏

  • @hassentangier3891 · 9 months ago

    Awesome work, like always. Can you point to documentation or a video on how to update the ChromaDB in this context?

  • @kdlin1 · 5 months ago

    Why is an OpenAI API key needed when it does not use OpenAI? Thanks!

  • @smoq20 · 9 months ago

    I always seem to run into the problem of exclusions when using vector similarity search for RAG. E.g., when you run a query like "Tell me everything you know about dogs other than Labradors," guess which documents come back in the first 10 results (assuming you have a lot of chunks)? Yes, the ones about Labradors. Has anyone figured out a way around that yet? I've been attempting to filter out results when queries include exclusions with additional LLM passes, but only GPT-4 seems to have enough brains to do it correctly. PaLM 2 gets it right in 50% of cases.
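
A rough, illustrative sketch of the "extra LLM pass" filtering idea this comment describes, not an approach from the video. It assumes an index already persisted to Chroma as in the sketch under the description above, and the prompt wording is hypothetical.

```python
# Sketch of post-filtering retrieved chunks with an extra LLM pass (commenter's idea).
import chromadb
from llama_index import VectorStoreIndex
from llama_index.llms import OpenAI
from llama_index.vector_stores import ChromaVectorStore

# Rebuild the index from the persisted store (placeholder path/collection names).
index = VectorStoreIndex.from_vector_store(
    ChromaVectorStore(
        chroma_collection=chromadb.PersistentClient(path="./chroma_db")
        .get_or_create_collection("my_docs")
    )
)
retriever = index.as_retriever(similarity_top_k=20)  # over-retrieve, then filter
llm = OpenAI(model="gpt-4")  # per the comment, weaker models judge exclusions poorly

query = "Tell me everything you know about dogs other than Labradors."
kept = []
for result in retriever.retrieve(query):
    verdict = llm.complete(
        f"Query: {query}\n\nChunk: {result.node.get_content()}\n\n"
        "Is this chunk about something the query explicitly excludes? Answer yes or no."
    )
    if "yes" not in verdict.text.lower():
        kept.append(result)  # keep only chunks that do not hit an exclusion
```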

  • @saikashyapcheruku6103 · 5 months ago

    Is there a way to get around the rate-limit error for the OpenAI API? Additionally, why is OpenAI being used even after specifically setting the service context?

  • @arkodeepchatterjee · 9 months ago

    Please make the video comparing different embedding models.

  • @Rahul-zq8ep · 7 months ago

    Great, I understood most of the explanation in the video, but where is the RAG implementation in it? I have also created a vector_store, storage_context, index, etc. when implementing a chatbot with my data, but I am confused about how to implement RAG as an added functionality.

  • @Kishorekkube · 9 months ago

    Self-hosting? Seems interesting.

  • @toannn6674 · 9 months ago

    I have 2 million chunks of text data. I used ChromaDB but it didn't work. Can you help me?

  • @shubhamanand9095 · 9 months ago

    Can you share the full architecture diagram?

  • @user-hq6or3fh9d · 6 months ago

    Hey, I have a question: now that we have ingested our data into the vector DB, how do we retrieve answers without running the ingestion code every time?

  • @chrisksjdvs603 · 3 months ago

    Setting up the vector store as persistent should help, like he says in the video. Once you have your data stored, you just need to load the vector store to query the data, if I understand it correctly.
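
To illustrate the point above, a rough sketch of reloading a persisted Chroma collection without re-running ingestion. It assumes the same pre-0.10 llama_index API and the placeholder ./chroma_db path and "my_docs" collection name used in the sketch under the description.

```python
import chromadb
from llama_index import VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

# Re-open the collection that was persisted during ingestion.
chroma_client = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = chroma_client.get_or_create_collection("my_docs")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# Build the index directly from the existing vector store; nothing is re-embedded.
index = VectorStoreIndex.from_vector_store(vector_store)
response = index.as_query_engine().query("Ask something about the previously ingested data")
print(response)
```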

  • @scorpionrevenge · 6 months ago

    I keep receiving this error: cannot import name 'Doc' from 'typing_extensions'. I am trying to run your code in a Jupyter notebook environment. Can you please help and let me know how to create a vector DB?

  • @srikanth1107262 · 6 months ago

    Would like to have a video on a locally downloaded model (Llama 2 GGML/GGUF) using LlamaIndex to build a RAG pipeline with ChromaDB. Thank you for the videos, they help a lot.

  • @hiramcoriarodriguez1252 · 9 months ago

    Is this a LangChain competitor library?

  • @engineerprompt · 9 months ago

    Yes

  • @devikasimlai4767 · 1 month ago

    1:30 onwards
