What is Retrieval-Augmented Generation (RAG)?
Try RAG with watsonx → ibm.biz/BdMsRT
Learn more about RAG → ibm.biz/BdMsRt
Large language models usually give great answers, but they're limited to the training data used to create the model. Over time their answers can become incomplete--or worse, just plain wrong. One way of improving LLM results is called "retrieval-augmented generation," or RAG. In this video, IBM Senior Research Scientist Marina Danilevsky explains the LLM/RAG framework and how this combination delivers two big advantages: the model gets the most up-to-date and trustworthy facts, and you can see where the model got its info, lending more credibility to what it generates.
Get started for free on IBM Cloud → ibm.biz/sign-up-now
Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
Comments: 368
This lecturer should be given credit for such an amazing explanation.
@cosmicscattering5499
3 months ago
I was thinking the same, she explained this so clearly.
@tariqmking
2 months ago
Yes this was excellently explained, kudos to her.
@brianmi40
1 month ago
Or at least credit for being able to write backwards!
@victoriamilhoan512
10 days ago
The connection between a human answering a question in real life vs how LLMs (with or without RAG) do it was so helpful!
IBM should start a learning platform. Their videos are so good.
@XEQUTE
5 months ago
i think they already do
@srinivasreddyt9555
1 month ago
Yes, they have it already: YouTube.
@siddheshpgaikwad
29 days ago
It's a mirrored video; she wrote naturally and the video was mirrored later
@Hossam_Ahmed_
28 days ago
They have SkillsBuild, but not videos, at least for most of the content
@CaptPicard81
25 days ago
They do, I recently attended a week long AI workshop based on an IBM curriculum
I'm sure it was already said, but this video is the most thorough, simple way I've seen RAG explained on YT hands down. Well done.
Your ability to write backwards on the glass is amazing! ;-)
@jsonbourne8122
6 months ago
They flip the video
@Paul-rs4gd
3 months ago
@@jsonbourne8122 So obvious, but I did not think of it. My idea was way more complicated!
4:15 Marina combines the colors of the word prompt to emphasize her point. Nice touch
I love seeing a large company like IBM invest in educating the public with free content! You all rock!
Marina is a talented teacher. This was brief, clear and enjoyable.
Loved the simple example to describe how RAG can be used to augment the responses of LLM models.
Wow, I opened youtube coming from the ibm blog just to leave a comment. Clearly explained, very good example, and well presented as well!! :) Thank you
Very well explained!!! Thank you for your explanation of this. I’m so tired of 45-minute YouTube videos with a college-educated professional trying to explain ML topics. If you can’t explain a topic in your own language in 10 minutes or less, then you have failed either to understand it yourself or to communicate effectively.
Wow, this is the best beginner's introduction I've seen on RAG!
That's a really great explanation of RAG in terms most people will understand. I was also sufficiently fascinated by how the writing on glass was done to go hunt down the answer from other comments!
One of the easiest to understand RAG explanations I've seen - thanks.
I believe the video is slightly inaccurate. As one of the commenters mentioned, the LLM is frozen, and the act of interfacing with external sources and vector datastores is not carried out by the LLM. The following is the actual flow:
Step 1: User makes a prompt
Step 2: Prompt is converted to a vector embedding
Step 3: Nearby documents in vector space are selected
Step 4: Prompt is sent along with selected documents as context
Step 5: LLM responds with given context
Please correct me if I'm wrong.
@DJ-lo8qj
28 days ago
I’m not sure. Looking at OpenAI documentation on RAG, they have a similar flow as demonstrated in this video. I think the retrieval of external data is considered to be part of the LLM (at least per OpenAI)
@PlaytimeEntertainment
27 days ago
I do not think retrieval is part of the LLM. The LLM is the frozen model you get at the end of training; it can't be modified. Rather, after the LLM responds, you can always use that info in the next retrieval flow
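The five-step flow debated in this thread can be sketched in a few lines of Python. This is a toy, self-contained illustration: the bag-of-words "embedding" and the tiny document list are stand-ins for a real embedding model and vector store, and the final augmented prompt is what a frozen LLM would actually receive.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a bag-of-words vector. A real system uses a
    # trained embedding model, but the retrieval logic is the same.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector space" of Step 3: documents indexed by their embeddings.
DOCS = [
    "Jupiter has 95 confirmed moons as of recent counts.",
    "Saturn holds the record for the most moons in the solar system.",
    "The Moon is Earth's only natural satellite.",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]

def retrieve(prompt: str, k: int = 2) -> list[str]:
    # Steps 2-3: embed the prompt, select the nearest documents.
    q = embed(prompt)
    ranked = sorted(INDEX, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def rag_prompt(user_prompt: str) -> str:
    # Step 4: the frozen LLM only ever sees this augmented prompt;
    # it never touches the document store itself.
    context = "\n".join(retrieve(user_prompt))
    return f"Answer using this context:\n{context}\n\nQuestion: {user_prompt}"

print(rag_prompt("Which planet has the most moons?"))
```

Note that nothing here modifies the generator: updating `DOCS` and re-running `embed` refreshes the system's knowledge with no retraining, which is the point both commenters are circling.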
Please keep all these videos coming! They are so easy to understand and straightforward. Muchas gracias!
For me, this is the most easy-to-understand video to explain RAG!
Great explanation. Even the pros in the field I have never seen explain like this.
hold up - the fact that the board is flipped is the most underrated modern education marvel nobody's talking about
@RiaKeenan
3 months ago
I know, right?!
@euseikodak
3 months ago
They probably filmed it in front of a glass board and flipped the video in editing later on
@politicallyincorrect1705
3 months ago
Filmed in front of a non-reflective mirror.
@TheTomtz
1 month ago
Just write on a glass board, record it from the other side, and laterally flip the image! Simple as that.. and pls don't distract people from the content being lectured by thinking about the process behind the recording 🤣
@thewallstreetjournal5675
1 month ago
Is the board flipped, or has she been flipped?
The explanation was spot on! IBM is the go to platform to learn about new technology with their high quality content explained and illustrated with so much simplicity.
This video is highly underviewed for as informative as it is!
Good Explanation of RAG. Thanks for sharing.
this lets me understand why the embeddings used to generate the vector store are a different set from the embeddings of the LLM... Thanks, Marina!
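The separation this comment points at can be made concrete with a small sketch: the retriever's embedding model and the generator are independent components with different interfaces, so the content store can be re-embedded or refreshed without touching the LLM. Both classes below are hypothetical stand-ins, not any vendor's API.

```python
class EmbeddingModel:
    """Separate, small model that maps text to vectors for retrieval."""

    def encode(self, text: str) -> list[float]:
        # Toy: letter-frequency vector over 'a'..'z'. A real system
        # would use a trained sentence-embedding model here.
        counts = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                counts[ord(ch) - ord("a")] += 1.0
        return counts


class FrozenLLM:
    """Stand-in for the generator; it sees prompts, never raw vectors."""

    def generate(self, prompt: str) -> str:
        return f"[answer conditioned on: {prompt[:40]}...]"


# Refreshing the content store only involves the embedding model:
embedder = EmbeddingModel()
store = {doc: embedder.encode(doc) for doc in ["new article about moons"]}

# The LLM is untouched by that update; it only receives augmented prompts.
llm = FrozenLLM()
```

This is why the two sets of embeddings differ: the retriever's vectors exist to rank documents, while the LLM's internal embeddings exist to generate text, and the RAG framework never requires them to match.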
This is the best explanation I have seen so far for RAG! Amazing content!
1. Understanding the challenges with LLMs - 0:36
2. Introducing Retrieval-Augmented Generation (RAG) to solve LLM issues - 0:18
3. Using RAG to provide accurate, up-to-date information - 1:26
4. Demonstrating how RAG uses a content store to improve responses - 3:02
5. Explaining the three-part prompt in the RAG framework - 4:13
6. Addressing how RAG keeps LLMs current without retraining - 4:38
7. Highlighting the use of primary sources to prevent data hallucination - 5:02
8. Discussing the importance of improving both the retriever and the generative model - 6:01
Great video as always. Thanks for sharing.
Brilliant explanation and illustration. Thanks for your hard work putting this presentation together.
Such an amazing explanation. Thank you ma'am!
I have watched many IBM videos and this is the undoubtedly the best ! I will be searching for your videos now Marina!
Very precise and exact information on RAG in a nutshell. Thank you for saving my time.
The interesting part is not retrieval from the internet but retrieval from long-term memory, with a stated objective that builds on that memory and continually gives it "maintenance" so it stays efficient and effective to query. LLMs are awesome because, even though there are many challenges ahead, they give us a hint of what's possible; without them it would be hard to have the motivation to follow this road
Perfect explanation, understood every bit; no lags, kept it very interesting. Amazing job
Marina has done a great job explaining LLM and RAGs in simple terms.
Great explanation! The video was very didactic, congratulations!
Pretty simple explanation, thank you
Great, simple, quick explanation
Great down-the-rabbit-hole video. Very deep and understandable. IBM academy worthy in my opinion.
Great video. Thanks for sharing
Best explanation so far out of all the content on the internet.
That was excellent, simple, and elegant! Thank you!
Wow, having a lightbulb moment finally after hearing this mentioned so often. Makes more sense now!
That's what knowledge graphs are for: keeping LLMs grounded with a reliable, up-to-date source.
Great explanation. Thank you!😊
Thanks for letting us know about this feature of LLM :)
very well executed presentation. i had to think twice about how you can write in reverse but then i RAGed my system 2 :)
An amazing explanation that made RAG understandable in about 4:23 minutes!
good explanation, it's very easy to understand. this video is the first result when I search RAG on YouTube. great job ;)
Great explanation with an example. Thank you
Fantastic video and explanation. Thank you!
The ability to write backwards, much less cursive writing backwards, is very impressive!
@IBMTechnology
7 months ago
See ibm.biz/write-backwards
@jsonbourne8122
6 months ago
Left hand too!
@NishanSaliya
5 months ago
@@IBMTechnology Thanks .... I was reading the comments to check for an answer to that question!
This is so well explained! Thank you 👍🏻✅
Thank you for such a great explanation.
This was such an amazing explanation!
Amazing video, thanks IBM ❤
Very Helpful! Great explanation. thx IBM
This was explained fantastically.
Very clear explanation, much respect 🫡
Thank you, Marina Danilevsky ....
Fantastic explanation, proud to be an IBMer
Appreciate the succinct explanation. 👍
AWESOME EXPLANATION OF THE CONCEPT RAG
The explanation was very good 💯.
wow this was an amazing Explanation ,very easy to understand
Super good and clear, well done!
Great video, you guys should do one on promising tech industries
This is a really good video thank you for sharing this knowledge
This is excellent and I hope IBM does well in this space. We need a reliable, non-hype vendor.
Beautifully explained....thanks
Awesome explanation. Love you.
Amazing explanation, finally i understand it.
That's the best video about RAG that I've watched
This is a fantastic lesson video.
Excellent ! thank you for sharing this knowledge !
Amazing explanation! Thank you:)
thanks for the great explanation
Insightful, please more video like this
Very well explained and it is easily understandable to non AI person as well. Thanks.
Finally, we got a clear explanation!
BRILLIANT VIDEO thank you!
Thank you for these videos. Makes it much easier to navigate this new AI-ra of machine learning.
very good and clear explanation
Excellent explanation!
Great video, excellent explanation!
Great lessons! Nice of you to step out 🙃 and make such engaging and educative content. This is very useful in helping us with critical thinking. Thank you for sharing this video. 👍 Current AI models may impose neurotypical norms and expectations based on the data they are trained on. 🤔 Curious to see more on how IBM approaches the challenges and limitations of AI
Everything is sensible, clear, and understandable. Respect to the author.
the color coding on your whiteboard is really apt here !
thank you. very informative!
Great explanation!
Great explanation
Excellent explanation. thx
Great video! thanks for educating!
Best explanation ever
We also need the models to cross-check their own answers against the sources of information before printing the answer to the user. There is no self-control today; models just say things. "I don't know" is actually a perfectly fine answer sometimes!
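The self-check this comment asks for can be sketched as a post-hoc grounding filter: before showing an answer, measure how much of it is actually supported by the retrieved sources, and abstain when support is low. The word-overlap heuristic below is a deliberately crude assumption for illustration; real systems would use an entailment model or a second LLM pass.

```python
def supported(answer: str, sources: list[str], threshold: float = 0.6) -> bool:
    # Crude grounding check: fraction of answer words that appear in
    # at least one retrieved source document.
    words = {w.strip(".,!?").lower() for w in answer.split()}
    source_words = {w.strip(".,!?").lower() for s in sources for w in s.split()}
    if not words:
        return False
    return len(words & source_words) / len(words) >= threshold

def answer_or_abstain(answer: str, sources: list[str]) -> str:
    # Only show answers the sources back up; otherwise say so plainly.
    return answer if supported(answer, sources) else "I don't know"
```

For example, an answer whose key terms all appear in the retrieved documents passes through, while one with almost no overlap is replaced by "I don't know", which is exactly the behavior the commenter wants by default.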
Amazing work. Thanks for sharing this.
From which corpus/database are the documents retrieved? Are they up to date? And how does it know the best documents to select from a given set?
The video is short and concise, yet the delivery is very elegant. She might be the best instructor who has ever taught me. Any idea how the video was created?
Very good explanation!
nice video - great explanation!
Very well explained 🙏🏼👍
very nicely explained
Well explained!