Retrieval Augmented Generation (RAG) Explained: Embedding, Sentence BERT, Vector Database (HNSW)
Science and technology
Get your $5 coupon for Gradient: gradient.1stcollab.com/umarja...
In this video we explore the entire Retrieval Augmented Generation pipeline. I will start by reviewing language models, their training and inference, and then explore the main ingredient of a RAG pipeline: embedding vectors. We will see what embedding vectors are, how they are computed, and how we can compute embedding vectors for whole sentences. We will also explore what a vector database is, along with the popular HNSW (Hierarchical Navigable Small Worlds) algorithm that vector databases use to find embedding vectors given a query. Small illustrative sketches of both ideas appear below.
Download the PDF slides: github.com/hkproj/retrieval-a...
Sentence BERT paper: arxiv.org/pdf/1908.10084.pdf
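To make the embedding idea concrete before the chapters, here is a minimal sketch using the sentence-transformers library; the model name "all-MiniLM-L6-v2" is only an example and not necessarily the model used in the video.

```python
# Minimal sketch: turning sentences into fixed-size embedding vectors and
# comparing them with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model

sentences = [
    "The cat sits on the mat.",
    "A feline is resting on a rug.",
    "The stock market fell sharply today.",
]
embeddings = model.encode(sentences)  # one vector per sentence

# Semantically close sentences get a higher cosine similarity.
print(util.cos_sim(embeddings[0], embeddings[1]))  # high
print(util.cos_sim(embeddings[0], embeddings[2]))  # low
```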
Chapters
00:00 - Introduction
02:22 - Language Models
04:33 - Fine-Tuning
06:04 - Prompt Engineering (Few-Shot)
07:24 - Prompt Engineering (QA)
10:15 - RAG pipeline (introduction)
13:38 - Embedding Vectors
19:41 - Sentence Embedding
23:17 - Sentence BERT
28:10 - RAG pipeline (review)
29:50 - RAG with Gradient
31:38 - Vector Database
33:11 - K-NN (Naive)
35:16 - Hierarchical Navigable Small Worlds (Introduction)
35:54 - Six Degrees of Separation
39:35 - Navigable Small Worlds
43:08 - Skip-List
45:23 - Hierarchical Navigable Small Worlds
47:27 - RAG pipeline (review)
48:22 - Closing
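To accompany the K-NN (Naive) and Vector Database chapters, here is a minimal sketch (plain NumPy, illustrative only) of the exhaustive top-k search that HNSW is designed to approximate much faster.

```python
# Naive k-NN over stored embeddings: compare the query against every vector.
# This is the O(N) baseline that a vector database with HNSW approximates
# in roughly logarithmic time.
import numpy as np

def top_k(query: np.ndarray, db: np.ndarray, k: int = 3) -> np.ndarray:
    # Cosine similarity = dot product of L2-normalized vectors.
    q = query / np.linalg.norm(query)
    d = db / np.linalg.norm(db, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:k]  # indices of the k most similar vectors

db = np.random.randn(10_000, 384)  # pretend: stored sentence embeddings
query = np.random.randn(384)       # pretend: the embedded user query
print(top_k(query, db))
```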
Comments: 102
You are the best ML teacher I have ever had. Thanks for sharing the knowledge.
This is what a teacher with deep knowledge of what they are teaching can do. Thank you very much.
Learning becomes more interesting and fun when you have a teacher like Umar, who explains everything related to the topic so well that everyone feels like they know the complete algorithms. A big fan of your teaching methods, Umar. Thanks for making all these informative videos.
Wow, thanks a lot. This is the best explanation of RAG I have found on KZread.
Wow! I finally understood everything. I am a student in ML. I have watched already half of your videos. Thank you so much for sharing. Greetings from Jerusalem
Man, your content is awesome. Please do not stop making these videos as well as code walkthroughs.
Amazing teacher! 50 minutes flew by :)
The best explanation of RAG
Waited for such content for a while. You made my day. I think I got almost everything. So educational. Thank you Umar
What an exceptional explanation of HNSW algo ❤
Just love your videos. So much detail, but extremely well put together.
Awesome content sir, it was the best explanation I have found so far!
This was fantastic (as usual). Thanks for putting it together. It has helped my understanding no end.
One of the best channels to learn and grow
Impressively intuitive, something most explanations are not. Great video!
This was fantastic and I have learned a lot from this! Thanks a lot for putting this lesson together!
This video is really good, subscribed! You explained the topic super well. Thanks!
Amazing content and what a clear explanation. Please make more videos. Keep it up and this channel will grow like anything.
Thanks Umar. I look forward to your videos, as you explain each topic in an easy-to-understand way. I would request you to make a "BERT implementation from scratch" video.
The explanation of HNSW is excellent!
Wow! You explained everything great! Please make more videos like this
Best video ever!
Thank you so much - this is a great video. Great balance of details and explanation. I have learned a ton and have saved it down for future reference
Really amazing content! Looking forward to more such content, Umar :)
This was such a great explanation. Thank you!
Thank you so much for sharing. Looking forward to more content about NLP and LLMs.
Good explanation, thanks
Very informative, thanks
Awesome paper. Please keep posting more videos like this.
Amazing explanation!
This was super insightful, thank you very much!
Thank you very much for the detailed explanation of RAG with a vector database. I have one question: can you please explain how we build the skip list with embeddings? Basically, how do we decide which embedding goes to which level?
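For context on the level question: in a standard skip list (and in HNSW) the level of an element is usually not derived from the embedding at all; it is drawn at random so that each higher layer holds exponentially fewer elements. A rough sketch of the usual assignment, where mL is a tuning constant chosen here only for illustration:

```python
# Sketch of the random level assignment commonly used by skip lists / HNSW.
# The content of the embedding does not matter: the level is random, so that
# the fraction of elements reaching layer l shrinks geometrically with l.
import math
import random

def random_level(mL: float = 0.5) -> int:
    return int(-math.log(random.random()) * mL)

levels = [random_level() for _ in range(100_000)]
for l in range(5):
    print(l, sum(1 for x in levels if x >= l))  # counts shrink level by level
```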
Glad I've subscribed to your channel. Please do these more.
Man, keep it up! Love your content
Great explanation! Thank you so much
Hello sir, I just want to say thanks for creating very good content for us. Love from India :)
Amazing work, very clear explanation, thank you!
Thanks for making these videos🎉
awesome as usual! ty
Wow, wonderful explanation, thanks!
Excellent content!
Thank you for the excellent content!
Nice lecture, Thank you!
Thanks for making this video!
simply impressive
Wooo you are the best I have ever seen
Thank you, awesome video!
Thanks for sharing, really a great content 👏
awesome content
One of the best videos
Please bring some more content !
Hola, coming back with great content as usual.
@umarjamilai
6 months ago
Thanks 🤓😺
Excellent video! 👏👏👏
Amazing presentation! I have a couple of questions though... What size of chunks should be used when using Ada-002? Is that dependent on the embedding model? Or is it to optimize the granularity of the 'queryable' embedded vectors? And another thing: am I correct to assume that, in order to capture as much context as possible, I should embed a 'tree structure' object (like a complex object in C#, with multiple nested object properties of other types) sectioned from the most granular level all the way up to the full object (as in, first the children, then the parents, then the grandparents)?
Great content , keep doing it .
so helpful! thx for sharing
I am so glad I am subscribed to you!
This was a wonderful explanation! I understood everything and I didn't have to watch the Transformers or BERT video (I actually know nothing about them but I have dabbled with Vector DBs). I have subbed and I will definitely watch the transformer and BERT video. Thank you!❤❤ Made a little donation too. This is my first ever saying $Thanks$ on KZread haha
Hi Umar, does RAG also have the same context window limitation as the prompt engineering technique?
Cool video about RAG! You could also upload it to Bilibili; since you live in China, you should know it. :D
Thank you so much. Such a nice explanation. 😀
Salam Mr Jamil, I was wondering if it would be possible to use the BERT model provided by Apple in Core ML for sentiment analysis when talking to Siri, and then have a small GPT-2 model fine-tuned for conversation generate a response that Siri then reads out.
How would you find the number 3 at 44:01? The algorithm you described will go to 5, and then, since 5 is greater than 3, it won't go further. Am I right?
You are the BEST!
Thank you so much man..
Are we storing the sentence embeddings together with the original sentences they were created from? If not, how do we map them back (from the top-k most similar stored vectors) to the text they originated from, given that the sentence embedding loses some information when pooling is done?
@umarjamilai
6 months ago
Yes, the vector database stores the embedding and the original text. Sometimes, they do not store the original text but a reference to it (for example instead of storing the text of a tweet, you may store the ID of the tweet) and then retrieve the original content using the reference.
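To make that answer concrete, here is a small sketch of what a single vector-database record might look like; the field names are illustrative and not any specific product's schema.

```python
# Illustrative record: the embedding is stored next to the original text,
# or next to a reference (e.g. a tweet ID) that points back to it.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VectorRecord:
    id: str
    embedding: List[float]            # the sentence embedding used for search
    text: Optional[str] = None        # original chunk, if stored inline
    source_ref: Optional[str] = None  # otherwise a pointer: URL, tweet ID, doc ID

record = VectorRecord(
    id="doc-42-chunk-3",
    embedding=[0.12, -0.53, 0.08],  # truncated for the example
    source_ref="tweet:1234567890",
)
```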
Thanks
Thank YOU :)
Hey, big thanks for this awesome and super informative video! I'm really intrigued by the Siamese architecture and its connection to RAG. Could someone explain that a bit more? Am I right in saying it's used for top-K retrieval? Meaning, we create the database with the output embeddings, and then use a trained Siamese architecture to find the top-K most relevant chunks by computing similarities? Is it necessary to use this approach in every framework, or can simply computing similarity over the embeddings sometimes work effectively?
Why is the context window size limited? Is it because these models are based on transformers, and for a given transformer architecture, long-distance semantic relationship detection is bounded by the number of words / the context length?
thanks
Awesome, I completely understand RAG now, just because of you. Now I have some questions: let's say I am using the Llama 2 model, and my main concern is that I give it a PDF for context and then the user can ask questions about it, but this approach takes time during inference. So, after watching your video, what I understand is that with the RAG pipeline it is possible to store the uploaded PDF in a vector DB and then use it from there. Am I thinking about this right, and is it possible or not? Thanks.
You are legend
Thanks!
Great video!! Shouldn't 5 come after 3 in the skip list?
How do we get the target cosine similarity in the first place?
Wow, I saw the Chinese knotting on your wall ~
Thanks bro
keep it up!
💪👍 Good introduction
Legend
Great video, keep up the good work! :) Around 19:25 you're saying that the embedding for "capital" is updated during backprop. Isn't that wrong for the shown example / training run where "capital" is masked? I always thought only the embeddings associated with non-masked tokens could be updated.
@umarjamilai
6 months ago
You're right! First of all, ALL embedding vectors of the 14 tokens are updated (including the embedding associated with the MASK token). What actually happens is that the model updates the embeddings of all the surrounding words in such a way that it can rebuild the missing word next time. Plus, the model is forced to use (mostly) the embeddings of the context words to predict the masked token, since any word may be masked, so there's not much useful information in the embedding of the MASK token itself. It's easy to get confused when you make long videos like mine 😬😬 Thanks for pointing it out!
@christopherhornle4513
6 months ago
I see, didn't know that the mask token is also updated! Thank you for the quick response. You really are a remarkable person. Keep going!
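For anyone who wants to see this gradient behaviour directly, here is a tiny, self-contained toy (not BERT itself; dimensions and pooling are deliberately crude) showing that the loss on the masked position backpropagates to every input embedding, including the one at the [MASK] position.

```python
# Toy illustration: gradients from predicting the masked word reach the
# embeddings of ALL input tokens, including the [MASK] token's own embedding.
import torch
import torch.nn as nn

vocab_size, dim = 20, 8
emb = nn.Embedding(vocab_size, dim)
head = nn.Linear(dim, vocab_size)

tokens = torch.tensor([4, 7, 0])   # pretend index 0 is the [MASK] token
target = torch.tensor([11])        # the word hidden behind the mask

hidden = emb(tokens)                              # (3, dim)
logits = head(hidden.mean(dim=0, keepdim=True))   # crude pooling, demo only
loss = nn.functional.cross_entropy(logits, target)
loss.backward()

print(emb.weight.grad[tokens].abs().sum(dim=1))   # non-zero for every token
```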
So how does the LLM convert a vector back to text?
Umar, great content! Around 25:00, you say that we have a target cosine similarity. How is that target cosine similarity calculated? Because there is no mathematical way to calculate the cosine similarity between two sentences; all we can do is take a subjective guess. Can you please explain in detail how this works?
@umarjamilai
5 months ago
When you train the model, you have a dataset that maps two sentences to a score (chosen by a human being, for example on a scale from 1 to 10). This score can be used as the target for the cosine similarity. If you look at papers in this field, you'll see there are many sophisticated methods, but the training data is always labeled by a human being.
@rkbshiva
5 months ago
@@umarjamilai Understood! Thanks very much for the prompt response. It would be great if we could identify a bias-free way to do this, as the scoring from 1 to 10, especially when done by multiple people and at scale, could get biased.
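To illustrate the answer above, here is a minimal sketch of how human-labeled sentence pairs become cosine-similarity targets, using the sentence-transformers training API; the model name and the example scores are assumptions made only for the sketch.

```python
# Human scores (e.g. 0-5 in STS-style datasets) are normalized and used as the
# regression target for the cosine similarity between the two embeddings.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model

train_examples = [
    InputExample(texts=["A man is playing guitar.",
                        "Someone is playing an instrument."], label=4.2 / 5.0),
    InputExample(texts=["A man is playing guitar.",
                        "A chef is cooking pasta."], label=0.5 / 5.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)  # MSE between cos-sim and label

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```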
At 44:00, the order of the linked list is incorrect... isn't it? Because it should be 1, 3, 5, 9.
@moviesnight248
4 months ago
I have the same doubt. It should have been sorted, as per the definition.
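On the ordering question: a skip-list search only works if every level is kept sorted, which is exactly the point raised above. A small illustrative sketch of the level-by-level search over pre-built (sorted) levels:

```python
# Simplified skip-list search. Level 0 holds every value in sorted order;
# higher levels hold progressively fewer values and act as express lanes.
levels = [
    [1, 3, 5, 9, 12, 17, 21],  # level 0: all elements
    [1, 5, 12, 21],            # level 1
    [1, 12],                   # level 2 (entry point)
]

def skip_list_search(target: int) -> bool:
    value = levels[-1][0]              # start at the head of the top level
    for level in reversed(levels):
        pos = level.index(value)       # re-locate the current value on this level
        # Move right while the next value does not overshoot the target.
        while pos + 1 < len(level) and level[pos + 1] <= target:
            pos += 1
        value = level[pos]
        if value == target:
            return True
    return False

print(skip_list_search(3), skip_list_search(4))  # True False
```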
So how does the LLM convert a vector back to text?
Let's say I want to create an online semantic search tool that uses a vector DB and RAG, just like the Bing tool. Will it follow the same procedure, and what new things would I need to add to integrate it with the Internet? Also, a nicely put together video, Umar. Can you do a coding session for this one like you do for the others, e.g. build something with real-time output using RAG? Anything would be a pleasure to watch.
Do you plan to record coding Sentence BERT from scratch?
Thank you!
Half
Thanks!
You are legend
Thanks
Thanks!