Understanding ReACT with LangChain

Science & Technology

Colab: drp.li/aSOiF
My Links:
Twitter: sam_witteveen
LinkedIn: samwitteveen
GitHub:
github.com/samwit/langchain-t...
github.com/samwit/llm-tutorials
00:00 Intro
00:27 ReACT Paper
05:47 Code Time
06:54 CoT Chain of Reasoning
08:47 Manual ReACT
11:43 ReACT with LangChain
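
For anyone skimming, here is a minimal sketch of the manual ReACT prompting covered around 08:47 (an illustrative format and a question from the ReAct paper, not the exact notebook prompt):

```python
# Manual ReACT: hand-write the Thought/Action/Observation format and stop
# generation before the model invents its own Observation line.
from langchain.llms import OpenAI

prompt = """Answer the question with interleaved Thought, Action, Observation steps.
Actions are Search[term], Lookup[term], or Finish[answer].

Question: What profession do Nicholas Ray and Elia Kazan have in common?
Thought:"""

llm = OpenAI(temperature=0)
step = llm(prompt, stop=["\nObservation:"])  # halts right after the Action line
print(step)  # you then run the action yourself and append the Observation
```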

Comments: 86

  • @Justinwk11 (1 year ago)

    Wow Sam, you're blowing my mind with this info. You're a content wizard, I swear. This is very informative, please don't stop!

  • @EigenA (5 months ago)

    I just finished reading this paper and your video was exactly what I needed to cement it in there! Thank you!

  • @theh1ve (1 year ago)

    Once again I appreciate the time and effort you take in these videos to set things out simply so that it is easily understood. Thank you!

  • @KA-qm3qc (7 months ago)

    Brilliant explanation from the paper research up to demonstrating the ReACT CoT. Thank you again. Well done.

  • @leloulouduski (1 year ago)

    Thank you so much Sam. I was stuck with React trying to understand how it works. It is now much clearer with your very good explanations.

  • @markwolfe5782 (1 year ago)

    Great explanation, thanks for taking the time to post it!

  • @ARSH_DS007 (5 months ago)

    Some of the best video content I've seen. Great work Sam, I truly appreciate your effort and the clarity of your content.

  • @garrettsmith2256 (1 year ago)

    Fantastic high quality content. Really appreciate the work you're putting in 🙌

  • @wellspokenman81 (10 months ago)

    Good work - I appreciated the clear explanations and now feel confident using this part of LangChain. Cheers.

  • @yvesdance2306 (1 year ago)

    Great explanation! You keep it clear and simple. Thanks so much.

  • @andrewyo3374 (1 year ago)

    That is a brilliant explanation! Thank you so much, sir!

  • @tarun4705 (1 year ago)

    Really the best explanation so far of how LangChain uses ReAct prompting.

  • @samwitteveenai (1 year ago)

    Thanks, glad it was helpful.

  • @cmthimmaiah (10 months ago)

    Amazing, the only YouTuber with depth in the LLM space.

  • @IamMarcusTurner (1 year ago)

    I appreciate this a lot Sam, really good deep-dive observations.

  • @gustavoadolfocastellanosca4606 (10 months ago)

    Thanks Sam, I found your videos very informative and helpful.

  • @cyb3rs1n (5 months ago)

    Excellent explanation

  • @ahmedzahid8354 (10 months ago)

    Thanks for these videos, really helpful.

  • @prikarsartam (11 months ago)

    Awesome elaboration!

  • @TreeLuvBurdpu (4 months ago)

    Think, THEN speak. That's a good idea, also for humans.

  • @therealrichot (2 months ago)

    Incredibly helpful breakdown of this topic, thanks.

  • @samwitteveenai (1 month ago)

    Glad it was helpful!

  • @Raulvic (8 months ago)

    Thank you for sharing!

  • @jakekill8715 (1 year ago)

    Yep, output parsing is very important. I personally like to use a tool like jsonforming, with the reasoning included in the given schema.
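
In that spirit, a minimal sketch of schema-constrained output with a reasoning field, using LangChain's PydanticOutputParser (illustrative field names, not the specific tool mentioned above):

```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Step(BaseModel):
    reasoning: str = Field(description="why this action was chosen")
    action: str = Field(description="name of the tool to call")
    action_input: str = Field(description="input to pass to the tool")

parser = PydanticOutputParser(pydantic_object=Step)
# parser.get_format_instructions() is injected into the prompt, and
# parser.parse(llm_output) returns a validated Step object.
```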

  • @toddnedd2138 (1 year ago)

    Thank you for explaining the topic. If you want to train a small model to do reasoning, maybe you could look at the paper "Orca: Progressive Learning from Complex Explanation Traces of GPT-4", or wait until that model is released into the wild. The number of tools written for use with LLMs is growing fast, so you cannot mention them all in the prompt. Is there already a technique to manage this, like a database approach for searching tools by topic, or something like TreeOfTools?
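
On the database-of-tools question, one known pattern is to embed each tool's description and retrieve only the top-k relevant tools per query before building the agent prompt. A sketch (assumes `all_tools` is your full tool list):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import FAISS

# Index one document per tool, keyed back to the tool by position.
docs = [Document(page_content=t.description, metadata={"index": i})
        for i, t in enumerate(all_tools)]
vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())

def get_tools(query, k=4):
    """Return only the k tools most relevant to this query."""
    hits = vector_store.similarity_search(query, k=k)
    return [all_tools[d.metadata["index"]] for d in hits]
```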

  • @christopheprotat (1 year ago)

    First 😊. I just saw the notification, and ReAct was the one I was waiting for. Thanks as usual.

  • @Ryan-yj4sd (1 year ago)

    Great video!

  • @streamocu2929 (7 months ago)

    thank you sir ❤

  • @AlTheRize (10 months ago)

    This was super interesting - I wonder how difficult it would be to add this kind of reasoning to an open-source model with some fine-tuning.

  • @susdoge3767 (8 days ago)

    Useful, subscribed!

  • @xb3sox_ (9 months ago)

    What an incredible and useful channel, your content is awesome ⚡ It would be very nice if you shared a video about Text Generation Web UI and how to use it with MetaGPT or AutoGPT. I tried so hard to use a drop-in replacement API for OpenAI with those projects, but the output is not as expected.

  • @wayallen831 (10 months ago)

    Thanks for the great tutorial Sam! One question about output parsers I'm confused about: how does the program know which name to take if multiple names show up in the search result? For example, if asking about the POTUS, it may return the past few presidents in the text. Does the LLM get involved to figure out which is most related to the question? If not, how does the regex know which name to pass on to the next action? Thanks!

  • @nickey0207 (4 months ago)

    I love you man.

  • @msrajeswari (8 months ago)

    This is something awesome. I understood that you need to include that standard content template with any question you are going to ask. So I guess this content template can only be used to hop to websites and get info, not to do any calculations. Am I right? BTW, is there any link to understand the content template given by the ReAct paper?

  • @EddADP (9 months ago)

    Hi, thanks for this nice explanation. I had already seen the benefits of CoT prompting in ChatGPT but was wondering if there was a better way to guide it, or rather make it guide itself, to a better answer. ReACT looks way better. How can one implement this in Flowise? (Not a dev, just comfortable enough to use Flowise's drag and drop.) My previous method of using prompt chains to trigger the reasoning part now looks stupid lol.

  • @harigovind511 (1 year ago)

    Hey Sam, love your content. Could you please try out a similar experiment but using Guanaco or Falcon models?

  • @samwitteveenai (11 months ago)

    I tried the Falcon and it didn't work. I haven't tried on Guanaco.

  • @AshikaUmanga (7 months ago)

    At the end you mentioned changing the CoT prompts. I assume these are embedded in the core agent execution framework in LangChain. How can I change these CoT prompts?
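
A sketch of one way to do this in older LangChain versions (the attribute path varies between releases, and `MY_TEMPLATE` is a placeholder for your own prompt):

```python
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
print(agent.agent.llm_chain.prompt.template)         # inspect the baked-in CoT prompt
agent.agent.llm_chain.prompt.template = MY_TEMPLATE  # swap in a custom one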

  • @gubartz (1 year ago)

    One thing that concerns me is the lack of control over the number of calls to LLMs when using agents. With langchain, a single question can lead to multiple calls to LLMs, and this process remains somewhat opaque. Am I correct in understanding this?
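
Two knobs help with this concern (a sketch; assumes `tools` and an OpenAI-backed `llm` are already defined): cap the loop with `max_iterations`, and wrap the run in the OpenAI callback to see exactly how many calls and tokens were spent.

```python
from langchain.agents import initialize_agent, AgentType
from langchain.callbacks import get_openai_callback

agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         max_iterations=3,  # hard cap on Thought/Action cycles
                         verbose=True)      # print every intermediate step
with get_openai_callback() as cb:
    agent.run("How old is the president of the United States?")
    print(cb.successful_requests, "LLM calls,", cb.total_tokens, "tokens")
```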

  • @buksa7257 (7 months ago)

    Thanks for the vid. What I don't understand yet: why does it need search and lookup actions? It is already trained on a big dataset, right? So it might know Russell Crowe already. How do you define that it should not use a tool and should just use its own knowledge for answers? (My agent only wants to use tools, even when I don't think it should be necessary.)

  • @s.chandrasekhar8290 (1 year ago)

    Thank you Sam. As always, your videos are very helpful. I am trying LMQL with LangChain for the ReAct paradigm. LMQL (Language Model Query Language) is a programming language for large language model (LM) interaction. LMQL guarantees that the output text follows the expected format. It would be nice to try it with open-source LLMs.

  • @samwitteveenai (1 year ago)

    LMQL is very interesting, I have thought about making a video about it. Do you have it working with an open source model for ReACT?

  • @s.chandrasekhar8290 (1 year ago)

    Not with open-source models, only commercial ones.

  • @alkebabish (10 months ago)

    Can you explain what the output parser is doing? I don't understand why it is needed to get whatever the agent searched for; isn't that information already there in the prompt sent to the LLM?
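
The LLM only ever returns free text, so something has to turn a line like `Action: Search[David Chanoff]` into an actual tool call; that is the parser's job. A minimal regex version (illustrative, not LangChain's exact parser):

```python
import re

def parse_action(llm_output: str):
    """Extract (tool_name, tool_input) from a ReACT-style completion."""
    if "Finish[" in llm_output:
        return "finish", re.search(r"Finish\[(.*?)\]", llm_output).group(1)
    m = re.search(r"Action: (\w+)\[(.*?)\]", llm_output)
    return m.group(1), m.group(2)  # e.g. ("Search", "David Chanoff")
```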

  • @nintendo2000 (1 month ago)

    Is it possible to fine-tune a model's 'thoughts'?

  • @user-ef2pv2du3j (10 months ago)

    Thanks for your clear explanation and for showing examples!! It all makes sense to me, but as I am implementing it, I am getting a lot of errors with the REACT_DOCSTORE agent type. In debug mode, I can see that it found the answer, but it does not output it and reaches the maximum iterations.

  • @samwitteveenai (10 months ago)

    This will depend on the model you are using it with.
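
For reference, a sketch of the REACT_DOCSTORE setup being discussed (old-style LangChain API; exact import paths vary by version). `max_iterations` guards against the looping behaviour described above:

```python
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.agents.react.base import DocstoreExplorer
from langchain.docstore import Wikipedia
from langchain.llms import OpenAI

docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(name="Search", func=docstore.search, description="Search Wikipedia"),
    Tool(name="Lookup", func=docstore.lookup, description="Look up a term on the current page"),
]
agent = initialize_agent(tools, OpenAI(temperature=0),
                         agent=AgentType.REACT_DOCSTORE,
                         verbose=True, max_iterations=5)
```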

  • @kevinehsani3358 (1 year ago)

    Thanks for the great video. Would one be correct to assume this is what AutoGPT is all about, and even HuggingGPT, which probably used technical plugins for ReAct?

  • @samwitteveenai (1 year ago)

    Yes, kind of. HuggingGPT and HF Transformers Agent use something similar to this. For AutoGPT it really depends on how it is set up.

  • @raymondlei1022 (9 months ago)

    Thank you for sharing! At the end of the video you said that it won't work for most of the open-source models; does that mean we have to use GPT-4? Will Llama 2 or some other model work?

  • @samwitteveenai (9 months ago)

    That video is from a while ago; it will work with some open-source models, especially if they are fine-tuned for it.

  • @raymondlei1022 (9 months ago)

    @@samwitteveenai thank you!

  • @joxa6119 (3 months ago)

    What is the name of the research paper at 5:00?

  • @AMX0013 (1 month ago)

    Hey, I'm currently working on this and need to build an actor-critic style LLMChain, i.e. a tool-less bot that analyses transcripts. Can you do a showcase of how to set up output parsers and prompt.format() for an LLMChain use case?
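
A sketch of a tool-less critique chain along these lines (hypothetical prompt and field names):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

template = """You are a critic. Analyse the transcript below and respond with:
Critique: <your critique>
Score: <1-10>

Transcript:
{transcript}"""

prompt = PromptTemplate(template=template, input_variables=["transcript"])
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

print(prompt.format(transcript="..."))  # inspect the fully formatted prompt
result = chain.run(transcript="AGENT: Hello, how can I help? CALLER: ...")
```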

  • @saminchowdhury7995 (1 year ago)

    I dream of a day when open source models will be as good as openai models and everyone will have an assistant like this in their pockets

  • @mohammadaliraza5134 (4 months ago)

    Works well enough with some Llama models.

  • @cbusse7842 (1 year ago)

    I'd like to see you use PaLM2 for this

  • @VPopkins (10 months ago)

    I like ReAct. But not sure whether using LangChain makes it simpler or adds a level of abstraction that actually complicates it.

  • @rosszhu1660 (26 days ago)

    Thanks Sam. One year later I came back to watch this video again. Do you think this is still useful now that tools and function calling are natively supported by most LLMs, including Claude and Gemini?

  • @generationgap416 (25 days ago)

    Nothing changes because of the inclusion of function calling in particular LLM models. LLMs use functions as tools. ReAct is a sequential process: thought --> action --> (action result, i.e. observation), looped until a certain number of iterations is reached or an answer is found. Rewatch the video, my friend. One of the main things covered was how to use functions as tools.
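
That loop, as a minimal sketch (`PROMPT`, `parse_action`, and the `tools` dict are hypothetical helpers; the real AgentExecutor adds robust parsing and error handling):

```python
def react_loop(llm, tools, question, max_iterations=6):
    scratchpad = f"Question: {question}\nThought:"
    for _ in range(max_iterations):
        # The stop sequence keeps the model from inventing its own Observation.
        step = llm(PROMPT + scratchpad, stop=["\nObservation:"])
        scratchpad += step
        if "Finish[" in step:                         # answer reached
            return step.split("Finish[")[-1].rstrip("]").strip()
        action, action_input = parse_action(step)     # e.g. regex over "Action: ..."
        observation = tools[action](action_input)     # run the chosen tool
        scratchpad += f"\nObservation: {observation}\nThought:"
    return None  # hit the iteration cap without an answer
```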

  • @rosszhu1660 (25 days ago)

    @@generationgap416 Thanks my friend. I might have confused you ^_^... By functions/tools, I actually meant the LLM's native function-calling/tool-using feature, not tools defined in the user's prompt or via LangChain. OpenAI first released its function-calling feature on 2023-06-13, which this video of Sam's had no way to cover. The point is that there is no way to insert a Thought-Observation-Action prompt into OpenAI's native function-calling flow; OpenAI decides which tool to use and what to do next. Everything happens behind the scenes, but it seems to work well. I am not sure whether OpenAI uses the ReAct logic internally, but I can't control this flow.

  • @kenfink9997 (1 year ago)

    This is almost pseudo-AGI/agent design. A single prompt/chain forms a linear process, but with the steps implied at the start. Would this be improved with CAMEL/AGI/agent-style back-and-forth conversations? Moreover, I wonder if future success lies in balancing large language models with specialized models for some tasks. Could something like this have OpenAI ask StarCoder-beta or Gorilla for code or API output, and then use that to build steps, evaluate/improve code, etc.?

  • @clray123 (1 year ago)

    This is an intuitive idea, but it is a bit at odds with the history of the research field. This "specific model for specific task" approach is what AI researchers (and computer game AI developers) have been dabbling with for decades, for lack of better options. And then, bang, the big unified transformer models came about and produced general results that far surpassed such handcrafted solutions. In fact, the only reason to mess around with the handcrafting would be to save on hardware. But this might be a "penny wise, pound foolish" sort of approach when it comes to AGI (while certainly reasonable if you only want a single task done well), like trying to build a winning race car from off-the-shelf scrap parts at your local hardware shop.

  • @kenfink9997 (1 year ago)

    @@clray123 Interesting way of looking at it. Self-referentially (but with human agents) I wonder what other folks think here. Is @clray123 right? Sam? Please explain your reasoning in your answer. :)

  • @nintendo2000 (1 month ago)

    6:32 "a lot of the reasoning stuff doesn't work on the open source models" - so this was a year ago; I wonder if this is still true for newer models like Llama 3?

  • @andy111007 (1 year ago)

    Hi Sam, thanks for the amazing video. Is it possible to do the same for PDF documents? Looking forward to hearing from you. Thanks, Andy

  • @samwitteveenai (1 year ago)

    Not sure what you mean by "the same for PDF"?

  • @BoHorror (8 months ago)

    I'm just throwing things at the wall and seeing what sticks, but in order to use ReAct at a somewhat cheaper level, why not have the OpenAI API do the basic steps (lay out the general plan) and then pass this info to an open-source LLM running on a GPU to do the menial tasks? Could that possibly work?

  • @samwitteveenai (8 months ago)

    Yeah, I have found it's often better to just fine-tune the open-source model to do it all.

  • @MadhavanSureshRobos (1 year ago)

    So theoretically, if we fine-tune an open-source model to perform only these thought- and action-generating tasks, a small open-source model could potentially do all of this really, really well? The only constraint is the data, right?

  • @generationgap416 (25 days ago)

    One of the biggest drivers is the reasoning capability of the foundation LLM you are using. The model also needs function-calling abilities. Do you mean data for fine-tuning, or data for training the LLM?

  • @sagarsarma1 (1 year ago)

    If I post the Russell Crowe question to ChatGPT, it returns the correct answer. Is that model able to do ReACT reasoning on its own, without ReACT prompting?

  • @samwitteveenai (11 months ago)

    It is using the ChatGPT model.

  • @josephchin200 (1 year ago)

    How does it stop the LLM generation mid-stream?

  • @samwitteveenai (1 year ago)

    It often doesn't; it just uses the output parser to cut it off.
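
Concretely, the cut-off is a stop sequence plus the parser: the LLM call is given the observation prefix as a stop token, so the completion halts right after the Action line, and the output parser then extracts the tool call from the truncated text. A sketch:

```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
step = llm("...ReACT prompt and scratchpad so far...\nThought:",
           stop=["\nObservation:"])
# `step` ends right after the model's Action line; the parser pulls the
# tool name and input out of this truncated text.
```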

  • @clray123 (1 year ago)

    I think the model's tendency to "justify" or "reinforce" its invalid answer is the same kind of issue as an LM repeating itself ad nauseam: the simpler LMs do it on verbatim word sequences, the larger ones on verbatim sentences, and I suspect the really clever ones do it on the meanings of sentences (or paragraphs). The exact reason, as far as I'm aware, has not been truly researched, but a sequence of generated tokens stored in context tends to amplify the probability of the same (!) sequence being generated again. This is puzzling because this sort of repetition is not really found in the training datasets. So I guess something is fundamentally broken in the (transformer?) algorithm and we are desperately patching our way around it. I suspect it is also a feature, because the model is supposed to repeat certain words (e.g. refer to the actors of a story repeatedly) while suppressing repetition of others, and it really is not able to tell between "repetition which makes sense / is inevitable" and "repetition because repetition is so much fun".

  • @clray123 (1 year ago)

    Another way to put it: the problem is that the model gives equal importance to its own generated bs as to external inputs (be they prompts or the results of observations from its senses/tools). Perhaps the solution will be to teach the bastard models some self-criticism and humility (attenuate the probabilities depending on where the in-context tokens came from). There's probably already someone writing a paper about it lol.

  • @Ryan-yj4sd (1 year ago)

    Why don't you use GPT-3.5 Turbo? Isn't it better and cheaper?

  • @samwitteveenai (1 year ago)

    Yeah, I do in a lot of the videos. Two reasons for this one: 1. I didn't want to confuse people with the system prompt stuff etc. 2. The older davinci model actually often does better with ReACT. The best by far is GPT-4, but it will be a few more months before the price on that one comes down.

  • @Ryan-yj4sd (1 year ago)

    @@samwitteveenai Thanks. What are the steps to modify this for Turbo? Which video should I look at? Or would you mind pasting the system prompt modification I need to make? Just want to make it 10 times cheaper! Thanks!
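
For anyone else wondering, the swap is roughly this (a sketch; assumes `tools` is already defined, and the chat agent type wraps the ReACT prompt in chat-message format):

```python
from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
agent = initialize_agent(tools, llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
```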

  • @pensiveintrovert4318 (1 year ago)

    Have you considered that the answers are not actually better, but people perceive them as better because there is "reasoning"? You would have to create the answers up front to test this. Double-blind studies are needed.

  • @samwitteveenai (1 year ago)

    Actually, when tested on various datasets the models do much better using reasoning and tools. I do wonder about some of the newer reasoning techniques like Tree of Thoughts, though, and how well they generalize to things that weren't in the paper.

  • @dansplain2393 (10 months ago)

    Does the model ignore Wikipedia and just decide Russell Crowe won the Oscar for Gladiator? I don’t see how it made that leap…

  • @rafaeldelrey9239 (10 months ago)

    Running the doc_store example now consistently returns "Thought: Joe Biden is the 46th and current president of the United States, so the answer is 46." I asked davinci-003 to critique the thought and answer, and it said it was OK. GPT-4, on the other hand, pointed out: "No, the thought and answer are not good. The age of the president is not determined by their position in the sequence of presidents. The thought should be about finding out the current age of the president, which can be done by subtracting their birth year from the current year."

  • @samwitteveenai (10 months ago)

    Interesting, so perhaps this has changed with the new updates to which fine-tuned model they use as the default. Will try to look at it when I get a chance.

  • @klammer75 (1 year ago)

    This is awesome! Great work as always Sam, and my understanding grows by the day! Thanks to you 🥳🤔🦾
