Advanced RAG with Llama 3 in Langchain | Chat with PDF using Free Embeddings, Reranker & LlamaParse
Let's build an advanced Retrieval-Augmented Generation (RAG) system with LangChain! You'll learn how to "teach" a Large Language Model (Llama 3) to read a complex PDF document and intelligently answer questions about it. We'll break the document into small chunks, convert them into vector embeddings, and store them in a vector database for fast retrieval. We'll build our RAG using only open models (Llama 3, FlagEmbedding & MS MARCO reranker).
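The split-embed-retrieve flow described above can be sketched in plain Python. This is a toy stand-in, not the video's implementation: a bag-of-words counter replaces the real embedding model (FlagEmbedding), and a sorted list replaces the Qdrant vector database; all function names here are illustrative.

```python
import math
import re
from collections import Counter

def split_text(text, chunk_size=50, overlap=10):
    """Split a document into overlapping character chunks."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("Meta released Llama 3 in 2024. "
       "Qdrant is an open-source vector database. "
       "Rerankers reorder retrieved chunks by relevance.")
chunks = split_text(doc, chunk_size=50, overlap=10)
top = retrieve("What is Qdrant?", chunks, k=1)
print(top[0])
```

In the real pipeline, a trained embedding model produces dense vectors that capture meaning (not just word overlap), and Qdrant performs fast approximate nearest-neighbor search over them.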
Follow me on X: / venelin_valkov
AI Bootcamp: www.mlexpert.io/bootcamp
Discord: / discord
Subscribe: bit.ly/venelin-subscribe
GitHub repository: github.com/curiousily/AI-Boot...
00:00 - Intro
00:17 - Text tutorial on MLExpert.io
00:43 - Our RAG Architecture
05:11 - Google Colab Setup
06:36 - Document Parsing with LlamaParse
09:07 - Text Splitting, Vector Embeddings & Vector DB (Qdrant)
13:26 - Reranking with FlashRank
14:45 - Q&A Chain with LangChain, Llama 3 and Groq API
16:32 - Chat with the PDF
21:30 - Conclusion
Join this channel to get access to the perks and support my work:
/ @venelin_valkov
#artificialintelligence #langchain #chatbot #llama #chatgpt #llm
Comments: 8
Full-text tutorial (requires MLExpert Pro): www.mlexpert.io/bootcamp/advanced-rag-with-llama-3-in-langchain
Thanks mate! Subscribed, keep up the good work!!!
Thank you for the tutorial, very useful and easy to follow. Could you please add a UI for this RAG application so that a normal user can interact with it?
Once again - excellent material!
Thanks!
Great material! May I ask what model and hardware configuration you are using to get this performance? Thank you!
@doansai
A month ago
Just saw this! It's a remote Llama model served via the Groq API :)
I think the RetrievalQA class is deprecated. What about updating it to use create_retrieval_chain?
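A minimal sketch of what that migration might look like, assuming LangChain >= 0.1 and its `create_retrieval_chain` / `create_stuff_documents_chain` helpers. Here `llm` and `retriever` stand in for the Groq-hosted Llama 3 model and the Qdrant retriever built earlier in the video; they are not defined in this snippet.

```python
# Migration sketch: RetrievalQA -> create_retrieval_chain (LangChain >= 0.1).
# `llm` and `retriever` are assumed to exist already (Groq-hosted Llama 3
# chat model and a Qdrant-backed retriever); they are not defined here.
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the following context:\n\n{context}"),
    ("human", "{input}"),
])

# Stuff the retrieved documents into the prompt, then call the LLM.
combine_docs_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, combine_docs_chain)

response = rag_chain.invoke({"input": "What does the PDF say about revenue?"})
print(response["answer"])
```

Unlike RetrievalQA's single string output, `create_retrieval_chain` returns a dict whose "context" key also exposes the retrieved documents, which is handy for showing sources.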