Building a LangChain Custom Medical Agent with Memory

Science & Technology

Colab: drp.li/uZcAc
In this video I go through how to build a custom agent with memory and a custom search tool restricted to a particular web domain.
My Links:
Twitter - / sam_witteveen
Linkedin - / samwitteveen
Github:
github.com/samwit/langchain-t...
github.com/samwit/llm-tutorials
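
Editor's note: the notebook itself is linked above; the following is only a minimal sketch of the kind of agent the video describes, assuming the classic (pre-LCEL) `langchain` agent API, an `OPENAI_API_KEY` in the environment, and an illustrative `webmd.com` domain filter. Tool name and prompt wording are placeholders, not the video's exact code.

```python
# Minimal sketch: a DuckDuckGo search tool restricted to one domain,
# wrapped in a conversational ReAct agent with a sliding-window memory.
from langchain.agents import Tool, AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()

def search_webmd(query: str) -> str:
    """Run a DuckDuckGo search restricted to webmd.com."""
    return search.run(f"site:webmd.com {query}")

tools = [
    Tool(
        name="Search WebMD",
        func=search_webmd,
        description="Useful for answering medical questions. Input should be a search query.",
    )
]

# Keep the last few turns so follow-up questions have context.
memory = ConversationBufferWindowMemory(
    memory_key="chat_history", k=5, return_messages=True
)

agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)

print(agent.run("How do I treat a sprained ankle?"))
```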

Comments: 84

  • @toddnedd2138
    @toddnedd2138 1 year ago

    It's always nice when someone takes the time to explain it. This helps a lot more than just reading the documentation. Thanks!

  • @ubergraham
    @ubergraham 1 year ago

    Outstanding, Sam. As a physician this really helps me understand how you can unlock the web with just LangChain and LLMs. Bravo!

  • @autonomousreviews2521
    @autonomousreviews2521 1 year ago

    The more Python I learn, the more awesome your videos become :) Thank you for sharing!

  • @mathematicus4701
    @mathematicus4701 5 months ago

    Hey Sam! Just wanted to drop a quick note to express my sincere gratitude for your incredibly helpful videos. They’ve been a game changer for me, offering clarity and guidance when I needed it most. A thousand thanks for your hard work and dedication. Keep up the great work! 🌟 #ThankYouSam

  • @kevon217
    @kevon217 1 year ago

    This video was SO helpful for troubleshooting!

  • @pankymathur
    @pankymathur 1 year ago

    This is definitely going in my “Gem List”. Thank you!

  • @helipilot501
    @helipilot501 5 months ago

    Hi Sam, I love your videos! They are so helpful.

  • @tubingphd
    @tubingphd 1 year ago

    Another great video. Thank you Sam

  • @MadhavanSureshRobos
    @MadhavanSureshRobos 1 year ago

    Great job as usual! Adding a source like Bing to the chain might add a little edge to the response. Really useful content, keep it coming.

  • @ugyaltenzing3237
    @ugyaltenzing3237 11 months ago

    Thank you for the video! I have a use case where my agent will query a dataframe/CSV and also keep a memory buffer. The architecture I went with was to make a custom agent (same as you have done in the video) and use the dataframe agent as a tool for querying the dataframe. What are your thoughts on this?
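
Editor's note: a rough sketch of the architecture described in the comment above, assuming the pandas dataframe agent from `langchain_experimental` (its import location has moved between versions); the CSV path, tool name, and description are placeholders.

```python
# Sketch: a pandas dataframe agent exposed as a Tool inside an outer
# conversational agent that keeps the chat memory.
import pandas as pd
from langchain.agents import Tool, AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain_experimental.agents import create_pandas_dataframe_agent

llm = ChatOpenAI(temperature=0)
df = pd.read_csv("sales.csv")  # placeholder dataset

# Inner agent that knows how to query the dataframe.
df_agent = create_pandas_dataframe_agent(llm, df, verbose=False)

tools = [
    Tool(
        name="Query Sales Data",
        func=df_agent.run,
        description="Useful for answering questions about the sales CSV. Input should be a plain-English question.",
    )
]

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Outer agent with memory that decides when to call the dataframe tool.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
```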

  • @odedy9946
    @odedy9946 10 months ago

    Fantastic video Sam, thank you. I built a similar agent using custom tools. I updated the prompt to write a full, detailed report that also incorporates information from all its observations, however the final answer is always a short summary without the full details. Any idea why, and how to solve it?

  • @bingolio
    @bingolio 11 months ago

    Excellent video. One quick question: if one wanted to make a “subject matter expert”, could it be done using a similar approach without fine-tuning a model?

  • @rafaeldelrey9239
    @rafaeldelrey9239 10 months ago

    Your videos are some of the most advanced, yet you still make them comprehensible. Thanks a lot. Do you think these could be made better with GPT functions? Or does ReAct perform similarly?

  • @samwitteveenai
    @samwitteveenai 10 months ago

    Yeah, almost everything can be made better with GPT functions lately. This was recorded before they existed. Most of the principles would still be the same.

  • @GyroO7
    @GyroO7 1 year ago

    This is mind-blowing, Sam. Appreciate your work. The only observation I have is that in any medical context, sources should always be mentioned. Can you add it to the notebook or show us how to do it, please?

  • @samwitteveenai
    @samwitteveenai 1 year ago

    I will look at showing metadata and sources at some point in the future. It is not that easy in this case for webpages.

  • @clarissamarsfiels7961
    @clarissamarsfiels7961 1 year ago

    Please also consider doing these videos with local LLMs via Transformers, llama-cpp-python, AutoGPTQ etc., at least a few. I do not have an OpenAI API key, nor am I really interested in paying for one or getting one in general. However, I have successfully run several local LLMs on my hardware, and they are surprisingly good! Like, I'm not talking "gibberish good", I am talking "ChatGPT good". I think using local LLMs will give you even more freedom, especially in picking the model you actually want to use (different model, different purpose).

  • @samwitteveenai
    @samwitteveenai 1 year ago

    Can I ask what model you are using? I have done a number of no-OpenAI vids in the past and I will do more in the future for sure. I would hope that people see that the code here can be converted for any non-OpenAI model pretty easily.

  • @clarissamarsfiels7961
    @clarissamarsfiels7961 1 year ago

    @@samwitteveenai Sorry, YouTube won't let me answer you and deletes my messages as soon as I name any model. O_O

  • @bibutikoley
    @bibutikoley 3 months ago

    Hi Sam, great tutorial, and I'm learning a lot from you. Just one question on this: is it possible to add citations to the answer and list the links from which the answer was generated?

  • @mytechnotalent
    @mytechnotalent 1 year ago

    Fantastic job Sam! Loving these LangChain vids, I just wish we did not use the OpenAI API key and rather an open-source Hugging Face example. I would encourage you to go in that direction.

  • @joaoalmeida4380
    @joaoalmeida4380 1 year ago

    Same here. Can we use something like Open Assistant or another open-source model?

  • @NicolasEmbleton
    @NicolasEmbleton 11 months ago

    I second that. Llama 2 or any open-source one.

  • @paryska991
    @paryska991 10 months ago

    @@NicolasEmbleton You can do that. LangChain supports llama.cpp through the llama-cpp-python bindings, or I can link you an API wrapper for KoboldCpp (which is what I use). It seems to work so far for everything, but I am just learning LangChain too. I can try to help out and link the KoboldCpp wrapper I found if you want.
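
Editor's note: a minimal sketch of swapping in a local model via the llama-cpp-python bindings mentioned above; the GGUF path and generation parameters are placeholders, and whether a small local model follows the ReAct tool format reliably is a separate question (see Sam's later reply about Falcon-7B).

```python
# Swap ChatOpenAI for a local llama.cpp model. Model path is a placeholder.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window
    temperature=0.1,
    max_tokens=512,
    verbose=False,
)

# This `llm` can then be passed to initialize_agent() in place of ChatOpenAI.
print(llm("Q: What are common symptoms of a sprained ankle?\nA:"))
```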

  • @user-by5lp3cs7d
    @user-by5lp3cs7d 10 months ago

    Hi, thanks for this detailed explanation. I have a doubt: how do we select an agent? There are different types of agents in LangChain, so how do we choose which one to use? Is it based on the task we are working on, or can we use any of them? Could you please explain?

  • @julesgransden3736
    @julesgransden3736 3 months ago

    Hey Sam, I'm trying to do my own implementation of this, but it seems like when the agent is determining the best answer, it gets caught up in the details of WebMD. For example, it will first assess the sprained ankle, but then realize that it could be linked with a blood-clot condition and focus on that, and the final answer is an explanation of how to avoid blood clots and other diseases, when in reality all I wanted to know was what to do once my ankle is sprained. Let me know if you can help! Thank you so much!

  • @yzhang3458
    @yzhang3458 1 year ago

    Hello Sam, nice videos. I went through the example, but when it came to the part with memory I noticed the custom agent still does not recall previous conversation context (even after adding the memory components to the custom agent). Do you know why that is and how to fix it?

  • @scienceofart9121
    @scienceofart9121 9 months ago

    Hi Sam, thanks for your contributions. Could you create a new custom agent tutorial, since this one is already a little bit outdated?

  • @generationgap416
    @generationgap416 21 days ago

    He did. Check out his tutorial on LangGraph. This is important because LangGraph seems to be a better option for multi-agent setups. If you want more control over your decision flow, check that tutorial out. One notable part of it is the section about conditional edges.

  • @ninadjagtap3845
    @ninadjagtap3845 7 months ago

    When would you use AgentTokenBufferMemory? How does it differ from conversation memory?

  • @alexanderacevedo962
    @alexanderacevedo962 11 months ago

    The data source: can I use it for PDFs and internet search at the same time?

  • @vishnuajoshy1588
    @vishnuajoshy1588 2 months ago

    Thank you, this really helps. I used only the Google Serper API as a tool and the Mistral 7B model. For some questions I could see in the log that "{tool name} is not a valid tool" and it gave me a bad answer. But sometimes when I tried again it used the tool and gave me a good answer to the same question. What could be the reason? Can you please help me?

  • @user-jk6ez9ln6b
    @user-jk6ez9ln6b 1 year ago

    Nice video. I'm wondering what happens if the prompt reaches the model token limit. Do you know anything about that?

  • @samwitteveenai
    @samwitteveenai 1 year ago

    This won't normally happen in this use case with the 4k token limits, but you can use a transformation chain to trim the number of tokens, or put in a summarization chain, etc.
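
Editor's note: a hedged sketch of one way to keep the prompt under the token limit, using `ConversationSummaryBufferMemory` (not necessarily what Sam meant by a transformation chain); the 1000-token threshold is arbitrary.

```python
# Summarize older turns once the buffer grows past a token threshold,
# so the prompt stays within the model's context window.
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

llm = ChatOpenAI(temperature=0)

memory = ConversationSummaryBufferMemory(
    llm=llm,               # used to write the running summary
    max_token_limit=1000,  # older messages get summarized past this point
    memory_key="chat_history",
    return_messages=True,
)

# Pass `memory=memory` to initialize_agent() exactly as with ConversationBufferMemory.
```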

  • @riis08
    @riis08 1 year ago

    Great, thanks again for another amazing video. Can we do this example with a custom language model or any Hugging Face language model?

  • @samwitteveenai
    @samwitteveenai 1 year ago

    I tried to do it with 4 models and they all kinda sucked at it, including Falcon-7B etc. It really needs to be either a big open-source model or one fine-tuned for the task.

  • @svenandreas5947
    @svenandreas5947 1 year ago

    I really like your tutorials. Is there one where you add a tool to a RetrievalQA agent? Like sending an email / saving the communication after the human is satisfied?

  • @samwitteveenai
    @samwitteveenai 1 year ago

    I made some retrieval vids for PDF and text; the principles in those can be used for emails as well.

  • @svenandreas5947
    @svenandreas5947 1 year ago

    @@samwitteveenai Sorry for my bad English. What I was trying to explain is adding a tool to send an email, just as an example, or basically, after the human answers, performing an action that is done programmatically, not by ChatGPT.

  • @ms4sman
    @ms4sman 10 months ago

    I can't seem to find the source for this video on the GitHub repo in the description. Can you give a link to the source from this video?

  • @tusharkhatri5795
    @tusharkhatri5795 1 year ago

    Can you please do a video on building a chatbot over a company's large custom data (something like 40-50 PDFs, some audio files, some PPTs) with memory, using Pinecone? One that generates output from the given text and, if nothing is found, generates a plausible generic response using OpenAI.

  • @ilijanl
    @ilijanl 9 months ago

    Great video! I have a question: say the LLM needs more information from the user, do you know how to incorporate the "I need more info" conversation into this agent? Thanks

  • @samwitteveenai
    @samwitteveenai 9 months ago

    You would achieve that by putting it in the prompt.
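
Editor's note: purely illustrative wording for the approach Sam describes; the instruction text below is made up, and the exact attribute path for editing the prompt depends on which agent type was built.

```python
# Illustrative only: one way to get an "I need more info" turn is to append an
# instruction like this to the agent's prompt.
CLARIFY_INSTRUCTION = (
    "If you do not have enough information to answer safely, do not guess. "
    "Instead, reply with a single clarifying question for the user."
)

# For the string-prompt ReAct agents this is typically something like:
# agent.agent.llm_chain.prompt.template += "\n" + CLARIFY_INSTRUCTION
```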

  • @shuvojyotirakshit5808
    @shuvojyotirakshit5808 1 year ago

    Hello Sam, I am a med student from India. Currently I am trying to write a paper on the use of LangChain and LLM models in medical education. I have some technical difficulties. Would it be possible to contact you about this? I can't say much here. Your advice and help would mean a lot.

  • @adityahpatel
    @adityahpatel 9 months ago

    How can I create custom tools within Flowise to use them with the agent?

  • @usama5318
    @usama5318 6 months ago

    Can you please share the exact link to the code on GitHub? There are a lot of folders at the link you gave in the description.

  • @Raptorbk
    @Raptorbk 10 months ago

    Hi! Nice video. I'm having trouble implementing this. I make use of custom tools with StructuredTool.from_function(func=...) and apparently it works well, but the input of some tools is not always a single string; for one tool I need it to take a JSON object and a string, which looks fine in the action input but actually arrives all in a single string (see the sketch after this thread).

  • @reedamm
    @reedamm 10 months ago

    Hey, can you share your Discord ID or something? I was really looking for someone who knows how to make a custom tool and really need help with that.
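
Editor's note: a sketch for the multi-input tool question above, assuming a Pydantic args schema and an agent type that supports multi-input tools (e.g. the structured chat agent); the tool name, fields, and placeholder implementation are invented for illustration.

```python
# Give StructuredTool an explicit args schema so the two inputs stay separate
# fields instead of being collapsed into one string.
from pydantic import BaseModel, Field
from langchain.tools import StructuredTool

class LookupInput(BaseModel):
    filters: dict = Field(description="JSON filters for the lookup, e.g. {'category': 'suits'}")
    query: str = Field(description="Free-text search query")

def lookup(filters: dict, query: str) -> str:
    """Placeholder implementation: run a filtered search and return a string result."""
    return f"Searched for {query!r} with filters {filters}"

lookup_tool = StructuredTool.from_function(
    func=lookup,
    name="filtered_lookup",
    description="Search the catalogue with structured filters plus a free-text query.",
    args_schema=LookupInput,
)
# Note: the agent must be one that handles multi-input tools, e.g.
# AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION.
```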

  • @Ashish-th5th
    @Ashish-th5th 9 months ago

    Hey Sam, can you tell us how we can increase the length of the answer that the LLM gives after analyzing websites? The problem I am facing right now is that no matter where and how many times I mention 300 words, it keeps giving the answer in only one or two lines.

  • @samwitteveenai
    @samwitteveenai 9 months ago

    The models often won't listen to a length like that. You can either fine-tune the model for longer responses or try something like asking for multiple paragraphs. Don't forget that in this case the inputs are not huge either.

  • @user-ut8qp7et9f
    @user-ut8qp7et9f 1 year ago

    Could you share how much each of these questions costs using ReAct with an OpenAI LLM? I think it will be quite a lot.

  • @samwitteveenai
    @samwitteveenai 1 year ago

    If you are using the Chat turbo model then the cost isn't that much.

  • @reedamm
    @reedamm 10 months ago

    Hey Sam, amazing video. Just wanted to ask: is it possible to make a tool in this that could tap into an e-commerce website and show the things the user asks for?

  • @samwitteveenai
    @samwitteveenai 10 months ago

    Yeah sure, what things are you thinking of? Pricing etc.?

  • @reedamm
    @reedamm 10 months ago

    @@samwitteveenai Um, I just want that for whatever product the user asks for, it can show its image and price from a particular e-commerce website. Is this possible? If yes, please guide me a little on how I can achieve this. For example, if the user asks "Need a white suit for a wedding", then the agent should show 2-3 white suits from that website.

  • @reedamm
    @reedamm 10 months ago

    @@samwitteveenai Hey Sam, can you please help me out a little? I need to submit this project for a hackathon.

  • @samwitteveenai
    @samwitteveenai 10 months ago

    You can do this using the search like I show in the video; just change the domain to amazon.com etc.

  • @reedamm
    @reedamm 10 months ago

    @@samwitteveenai I solved it: I created an API which fetches the search results from that e-commerce website and returns a JSON object, then I used that JSON object to choose a particular product and created a tool based on it.

  • @kennt7575
    @kennt7575 1 year ago

    It’s an amazing tutorial, but I'm very curious about how you organize the memories for a multi-section chat.

  • @samwitteveenai
    @samwitteveenai 1 year ago

    what do you mean by multi-section chat?

  • @kennt7575
    @kennt7575 1 year ago

    I mean many people chatting with the model at the same time, and keeping the memory of each person who is chatting with it separate.

  • @samwitteveenai
    @samwitteveenai 1 year ago

    To do that you need to serialize the conversations, then load and save them each time, etc. I usually save to a DB like Firestore, so it is quick and easy to just keep a UUID for each conversation.
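
Editor's note: a minimal sketch of the save/load pattern Sam describes, with a plain dict standing in for a real store such as Firestore; the store, function names, and keying are placeholders.

```python
# Serialize each conversation's messages under a UUID, load them back before
# building the agent, and save them again after each turn.
import uuid
from langchain.memory import ConversationBufferMemory
from langchain.schema import messages_from_dict, messages_to_dict

store: dict[str, list] = {}  # pretend this is Firestore

def load_memory(conversation_id: str) -> ConversationBufferMemory:
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    if conversation_id in store:
        memory.chat_memory.messages = messages_from_dict(store[conversation_id])
    return memory

def save_memory(conversation_id: str, memory: ConversationBufferMemory) -> None:
    store[conversation_id] = messages_to_dict(memory.chat_memory.messages)

conversation_id = str(uuid.uuid4())
memory = load_memory(conversation_id)
# ... build the agent with this memory and run a turn ...
save_memory(conversation_id, memory)
```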

  • @kennt7575
    @kennt7575 1 year ago

    @@samwitteveenai thanks Sam, it worked this way.

  • @reeder960
    @reeder960 8 months ago

    How would I include the URL that the agent got its answer from?

  • @samwitteveenai
    @samwitteveenai 8 months ago

    You need to include it in the metadata and pass it back. I have an example of doing this in another video.

  • @binstitus3909
    @binstitus3909 5 months ago

    How can I keep the conversation context of multiple users separately?

  • @vibuvignesh7359
    @vibuvignesh7359 3 months ago

    Create a separate memory object for each user.
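
Editor's note: a sketch of the per-user memory suggestion above, keeping one in-process memory object per user ID; how you key, persist, and evict these is up to you (see the serialization sketch earlier in the thread for a persistent variant).

```python
# One ConversationBufferMemory per user, so conversations do not bleed into each other.
from langchain.memory import ConversationBufferMemory

user_memories: dict[str, ConversationBufferMemory] = {}

def get_memory(user_id: str) -> ConversationBufferMemory:
    if user_id not in user_memories:
        user_memories[user_id] = ConversationBufferMemory(
            memory_key="chat_history", return_messages=True
        )
    return user_memories[user_id]

# Build (or rebuild) the agent with get_memory(user_id) for each incoming request.
```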

  • @OfficialChatbotBuilder
    @OfficialChatbotBuilder 1 year ago

    But how is this scalable and where will the actual user interact? I feel like LangChain is a waste of time.

  • @robxmccarthy
    @robxmccarthy 1 year ago

    The front-end options depend on the end implementation. Look for "how to implement chatbots in X web framework" (Django, Grails, whatever). LangChain is a framework for building processes on top of back-end components like the LLM and vector DB.

  • @OfficialChatbotBuilder
    @OfficialChatbotBuilder 1 year ago

    @@robxmccarthy Thanks for the reply. I've yet to see any completed application with this tool to my knowledge, only a lot of people explaining what it could do in videos. Guess it will just be an exciting time as more folks begin to build 🤘

  • @miguelhermar
    @miguelhermar 11 months ago

    Very insightful video! Thanks Sam. One question: in this case you're using the pre-built DuckDuckGoSearchRun function and modifying it a bit so it narrows the search to a specific website. When working with toolkits like GmailToolkit, they probably have a pre-built function (which I can't find anywhere) with the steps to search for emails, so I'm not sure if I can customize my own function to act somewhat differently, the same way you did in this video. It's clear to me that I would need to change the descriptions of the tools so the agent can do a better job, but I'm not sure whether in this case it's advisable to change the main function built to read through emails. Also, my plan is to customize and change the agent.agent.llm_chain.prompt.template a bit, since I think it could improve search results. As you can see, I'm trying to build an email-reader chatbot that is useful for when I don't have time to read them all, so instead I'd ask this agent something like "What has Sam sent me recently?" Thanks again mate.

  • @madakuse
    @madakuse 11 months ago

    Any way to print the source as well, for reference?

  • @samwitteveenai
    @samwitteveenai 11 months ago

    Yeah, this requires using metadata on the lookups.
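
Editor's note: a sketch of one way to surface sources (not necessarily the metadata approach Sam refers to): use the DuckDuckGoSearchResults tool, whose output includes the link for each hit, so URLs flow back through the agent's observations; tool name and description are placeholders, and you would still need to instruct the agent (via the prompt) to include those links in its final answer.

```python
# DuckDuckGoSearchResults returns titles, snippets, and links, so the tool
# output carries the URLs the answer was drawn from.
from langchain.agents import Tool
from langchain.tools import DuckDuckGoSearchResults

search = DuckDuckGoSearchResults()

def search_webmd_with_sources(query: str) -> str:
    # The raw result string includes a link for each hit.
    return search.run(f"site:webmd.com {query}")

source_tool = Tool(
    name="Search WebMD",
    func=search_webmd_with_sources,
    description="Search webmd.com and return snippets together with their source links.",
)
```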

  • @clray123
    @clray123 1 year ago

    Pretty sure that the WebMD ToS forbids scraping (as should every website with any sort of worthwhile content).

  • @samwitteveenai
    @samwitteveenai 1 year ago

    Actually this is getting the info via DuckDuckGo and the sections that it brings back. But I agree; the idea was to show how people could do something like this for their own site.

  • @pensiveintrovert4318
    @pensiveintrovert4318 9 months ago

    These videos would have more utility if you could transcribe (maybe with AI) the video and interleave the transcript with screen images. The basic problem is that people will mostly remember only a couple points/names and that's it.

  • @keyser1989
    @keyser1989 7 months ago

    Where is the notebook?

  • @samwitteveenai
    @samwitteveenai 6 months ago

    Unfortunately Google took it down. It is in the GitHub repo. I have no idea why they removed it.

  • @generationgap416
    @generationgap416 21 days ago

    I've got to ask: are you, or were you ever in the past, a trained teacher, so help you God?

  • @generationgap416
    @generationgap416 21 days ago

    No, the question is not for you. The question is for Sam Witteveen.

  • @samwitteveenai
    @samwitteveenai 20 days ago

    Nope

  • @hiranga
    @hiranga 10 months ago

    Hey Sam, loving your videos - I'm having a hell of a time trying to use the new OpenAI Functions agent in a similar manner. For some reason the prompt organizes itself so that the system message is less adhered to, i.e. the prompt is delivered as: SystemMessage; ChatHistory; HumanInput.

  • @klammer75
    @klammer75 1 year ago

    You’re really amazing Sam! I’ve learned so much from watching your videos….keep them coming and you’re one of the best out there IMHO!🥳🦾😎👏🏼
