Comments

  • @genXstream · a day ago

    Which would you say is more crucial to analyzing the "correctness" of the language agent tree search result: "blah blah blah" or "yada yada yada"?

  • @AdamLucek · 6 hours ago

    I'm more partial to yada yada yada, but I can see the benefits of blah blah blah. Really comes down to your use case and desired blah-to-yada ratio.

  • @matthewturnerphd · a day ago

    Thanks for this video! Several smaller details you emphasized were things I had missed in other tutorials, and really helped me.

  • @madhudson1 · 5 days ago

    Absolutely fantastic. I've been trying to do something similar today and experienced much of the 'going rogue' with incorrect special tokens, until I followed your example.

  • @AdamLucek · 5 days ago

    Glad I could help!

  • @JoshuaMillerDev · 5 days ago

    I wonder if anyone has gone through the thought exercise of how an AI model could benefit from having anyone "fold at home" in order to build it. In other words, instead of the owner (OpenAI, Llama, whatever) dedicating servers to build an LLM, could it not be distributed such that video cards in PCs around the world contribute idle cycles toward the training? Seems like a good way for an open-source model to get off the ground, or to build using more tokens. Could even have a (minor) reward system allowing X privileged API accesses per contributing node or whatever.

  • @psousa50 · 5 days ago

    Hi Adam, thank you very much for your video; it's very helpful. I have one doubt: I thought that LangChain would take care of those special tokens for us when we are using the ChatOllama class. Am I wrong?

  • @AdamLucek · 5 days ago

    From my tests with ChatOllama, it did not cover that automatically, so you still need the special tokens to prompt it correctly as of now!

  • @psousa50 · a day ago

    @@AdamLucek I'm not sure about that. From the Ollama docs we can see that there is a raw parameter we can use when we do want to provide those special tokens. LangChain's ChatOllama class uses the Ollama endpoint (localhost:11434) and does not specify this parameter, so I think we should not include those tokens when sending a prompt.
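    A minimal sketch of the two prompting modes discussed in this thread, assuming a local Ollama server on the default localhost:11434 port and the llama3 model (ChatOllama behavior may differ between versions, so treat this as illustrative rather than definitive):

    ```python
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    # Option A: raw mode; we supply the Llama 3 special tokens ourselves,
    # and Ollama applies no prompt template of its own.
    raw_prompt = (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "You are a helpful assistant.<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        "Why is the sky blue?<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
    resp_raw = requests.post(OLLAMA_URL, json={
        "model": "llama3",
        "prompt": raw_prompt,
        "raw": True,      # bypass the built-in template
        "stream": False,
    })
    print(resp_raw.json()["response"])

    # Option B: no raw flag; Ollama applies the model's chat template,
    # so the prompt should NOT contain the special tokens.
    resp_templated = requests.post(OLLAMA_URL, json={
        "model": "llama3",
        "prompt": "Why is the sky blue?",
        "stream": False,
    })
    print(resp_templated.json()["response"])
    ```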

  • @vv1nter__ · 5 days ago

    And can it be used to communicate in Discord?

  • @user-rl9yz3rg7r · 5 days ago

    Thank you for the wonderful lecture and example source code. The example source code worked nicely in my local environment, and the test code conveniently inserted in the middle helped me understand the example a lot.

  • @ringpolitiet · 6 days ago

    Very well done, subbed. A perfect complexity project to get into LangGraph.

  • @TestMyHomeChannel · 6 days ago

    You are an awesome teacher. I am already running almost the same setup, with agents created automatically by another tool, PraisonAI, but I didn't fully follow what was going on, and many times it wasn't working and I didn't know why. I loved the way you broke down and explained everything. Looking forward to seeing more videos from you. Best wishes.

  • @OliNorwell · 6 days ago

    Very good video, I'm impressed, you got yourself a new sub. I'm not a massive fan of LangChain but your video style is very easy to follow so I'm looking forward to watching your others too. Great work.

  • @geofffane5276 · 6 days ago

    Hey, can you please share the Miro board link? Or drop it into a high-res PDF? AWESOME work btw 👍👍👍

  • @AdamLucek · 6 days ago

    Here you go! drive.google.com/file/d/1ESnrIy4c5LPOhNHRnn87Cv7DU_i0-_J9/view?usp=sharing

  • @JoshuaMillerDev · 6 days ago

    I like seeing these. Something to consider is having the web search differentiate between sponsored and non-sponsored results. I have not seen anyone tackle that yet. It seems to me that search results and LLM outputs would be more accurate when steering away from sponsored data.

  • @AdamLucek · 6 days ago

    Good idea!

  • @JoshuaMillerDev · 5 days ago

    Just FYI, doing what I mentioned could be problematic in the long run as folks use search less and LLMs more. At some point there is a potential conflict where search engines get nothing back from billions of AI crawls. Not much of a worry, but a good thought exercise.

  • @WladBlank · 6 days ago

    Interesting concept. I try to force my LLMs to produce valid JSON, and this would make that easier.
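    A small sketch of the JSON-mode approach, assuming the LangChain ChatOllama wrapper and a local llama3 model (the prompt and the 'name'/'age' keys are just illustrative):

    ```python
    from langchain_community.chat_models import ChatOllama
    from langchain_core.output_parsers import JsonOutputParser
    from langchain_core.prompts import PromptTemplate

    # format="json" turns on Ollama's JSON mode, constraining output to valid JSON.
    llm = ChatOllama(model="llama3", format="json", temperature=0)

    prompt = PromptTemplate.from_template(
        "Extract the person's name and age from the text below. "
        "Respond with a JSON object with keys 'name' and 'age'.\n\nText: {text}"
    )

    # JsonOutputParser raises if the model output is not valid JSON.
    chain = prompt | llm | JsonOutputParser()
    print(chain.invoke({"text": "Ada Lovelace was 36 when she died."}))
    ```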

  • 6 days ago

    Excellent video thank you!

  • @alejandroGTES · 7 days ago

    Awesome project! Is it possible to use another translation service, instead of ChatGPT, that doesn't require a subscription?

  • @AdamLucek · 7 days ago

    Certainly possible. The translation service could be anything, as the sentence string is all that's being passed back and forth. I just used OpenAI for a quick solution, but any service could be substituted in that step.

  • @alejandroGTES · 6 days ago

    @@AdamLucek Oh nice, I would love to see an updated version with a free alternative.

  • @amanmeghrajani1 · 7 days ago

    Loving your content; thank you for sharing this. Learning a lot! Would you be interested in making a video showing how to deploy these models and access them from inputs like WhatsApp chat or email? Would be super helpful.

  • @GeorgAubele · 7 days ago

    Thanks for the really interesting and well-done clip. I get an error at the end, when testing the whole thing: it says it hit a rate-limit exception in the duckduckgo_search module: 202 Ratelimit.

  • @AdamLucek · 7 days ago

    Seems like something is broken in the connection between DuckDuckGo and LangChain's integration; I'm getting this error too. You can use Tavily for the time being, replacing the duckduckgo lines, although you may need a Tavily API key:

        from langchain_community.tools.tavily_search import TavilySearchResults
        web_search_tool = TavilySearchResults(include_raw_content=True, search_depth='advanced', max_results=5)

    Will look into other ways to get around this.

  • @AdamLucek · 6 days ago

    Found a fix! Something's up with the recent version of the Python API. Running `pip install -U duckduckgo_search==5.3.0b4` and then restarting your environment fixed it for me :)
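    A quick way to confirm the pinned version works again, assuming LangChain's DuckDuckGo tool is what the notebook uses (the query string is arbitrary):

    ```python
    # pip install -U duckduckgo_search==5.3.0b4  (then restart the environment)
    from langchain_community.tools import DuckDuckGoSearchRun

    web_search_tool = DuckDuckGoSearchRun()
    # If the 202 Ratelimit bug is gone, this returns a string of result snippets
    # instead of raising an exception.
    print(web_search_tool.invoke("latest LangGraph release"))
    ```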

  • @GeorgAubele · 6 days ago

    @@AdamLucek Thanks, will try it

  • @GeorgAubele · 2 days ago

    @@AdamLucek Thanks! That did the trick! Even the new version 5.3.1 still has that bug ... :/

  • @jimlynch9390 · 7 days ago

    I really enjoyed this video. I've seen lots of "how to" vids on programming with local LLMs, but this is by far the best one I've viewed. I often get lost and have to re-read some of the steps; however, you moved along at exactly the right pace for me and answered pretty much all of the questions I was dreaming up while watching. Thank you! Ollama does have function calling, but this method seems more logical and easier to understand.

  • @GlobalAiServices · 7 days ago

    What's the point, SORA is not released yet. Just a waste of time!

  • @ringpolitiet · 6 days ago

    Why are you here?

  • @szpiegzkrainydeszczowcow8476 · 7 days ago

    Great job. Subscribing. Any chance you would make a video on long-term memory in a vector DB? Greetings.

  • @dr.mikeybee · 7 days ago

    Nice work

  • @bharaths5603 · 7 days ago

    You rocked it, buddy!

  • @MEvansMusic · 7 days ago

    what is used for scoring?

  • @nilamara7620 · 8 days ago

    Really impressive, that combination of these three. But to have a perfect loop, how do you deal with input audio (voice) in real time before it starts speaking the response? And another question: could the generated audio at the end be played back as an emulation of your microphone?

  • @AdamLucek · 8 days ago

    As this is currently set up, the streaming STT from AssemblyAI will transcribe and then output a final "sentence" after some variable breakpoint of no speech. It is this output that I process through the rest of the pipeline into speech. As this is more of an MVP, more could be done within that intermediate step (checks for speech, pauses, etc.) that could change how and when the translated speech is played back, or it could even be done as a separate process rather than sequentially like it happens now!
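    A rough sketch of that sequential flow. The on_final_transcript and speak helpers below are hypothetical placeholders for the AssemblyAI final-transcript callback and the text-to-speech playback step; only the OpenAI translation call is spelled out, and gpt-3.5-turbo is just an example model:

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def translate(sentence: str, target_language: str = "Spanish") -> str:
        # The sentence string is all that gets passed back and forth.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": f"Translate the user's text into {target_language}. "
                            "Reply with the translation only."},
                {"role": "user", "content": sentence},
            ],
        )
        return response.choices[0].message.content

    def speak(text: str) -> None:
        # Stand-in for whatever TTS service plays the audio; here we just print.
        print(f"[TTS would play]: {text}")

    def on_final_transcript(sentence: str) -> None:
        # Called once the streaming STT emits a final sentence after a pause.
        speak(translate(sentence))

    on_final_transcript("Hello, how are you today?")
    ```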

  • @camilocampos5900 · 8 days ago

    Awesome, dude. I've been working with Llama 3 and LangGraph to see if you can use tools with Llama 3, but you did it. You are great, cheers.

  • @AdamLucek · 8 days ago

    Glad I could help!

  • @rajesharora27 · 8 days ago

    Awesome stuff!

  • @lavamonkeymc · 8 days ago

    Question: If I have a data preprocessing agent that has access to around 20 preprocessing tools, what is the best way to go about executing them on a pandas DataFrame? Do I keep the DataFrame in the State and then pass it as input to the function? Does the agent need to have access to that DataFrame, or can we abstract that?

  • @AdamLucek · 8 days ago

    I imagine it could be abstracted out. A lot of the processing you can do with a LangGraph setup similar to these doesn't necessarily need an LLM touch at the computation/function step; you could use the LLM for logic-based routing to the right node function, which is already defined to operate on a preset dataframe.
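    One way to sketch that idea with LangGraph, keeping the DataFrame outside the graph state so only lightweight routing info flows through it. The node names, the toy DataFrame, and the keyword router (standing in for an LLM routing call) are all illustrative assumptions:

    ```python
    from typing import TypedDict

    import pandas as pd
    from langgraph.graph import StateGraph, END

    # The DataFrame lives at module level; nodes reference it directly,
    # so it never has to travel through the graph state.
    df = pd.DataFrame({"a": [1, None, 3], "b": ["x", "y", None]})

    class State(TypedDict):
        request: str  # the user instruction an LLM would normally route on

    def drop_nulls(state: State) -> dict:
        global df
        df = df.dropna()
        return {}  # no state update needed

    def fill_nulls(state: State) -> dict:
        global df
        df = df.fillna(0)
        return {}

    def route(state: State) -> str:
        # Stand-in for an LLM routing call (e.g. a JSON-mode prompt);
        # a simple keyword check picks the preprocessing node here.
        return "drop_nulls" if "drop" in state["request"] else "fill_nulls"

    workflow = StateGraph(State)
    workflow.add_node("router", lambda state: {})  # no-op; routing happens on its edges
    workflow.add_node("drop_nulls", drop_nulls)
    workflow.add_node("fill_nulls", fill_nulls)
    workflow.set_entry_point("router")
    workflow.add_conditional_edges("router", route,
                                   {"drop_nulls": "drop_nulls", "fill_nulls": "fill_nulls"})
    workflow.add_edge("drop_nulls", END)
    workflow.add_edge("fill_nulls", END)

    app = workflow.compile()
    app.invoke({"request": "drop rows with missing values"})
    print(df)
    ```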

  • @xollob · 8 days ago

    Hi Adam, great work. I've been struggling to evaluate the different agent frameworks: AutoGen, CrewAI, VRSEN, and so on. LangChain and the like seem more logical, as we can see what's happening and it is more predictable. Would it be possible to get the Miro board you built for this presentation? Greetings from France.

  • @AdamLucek · 6 days ago

    Here you go! drive.google.com/file/d/1ESnrIy4c5LPOhNHRnn87Cv7DU_i0-_J9/view?usp=sharing

  • @xollob · 3 hours ago

    @@AdamLucek Thank you so much Adam.

  • @ricardoaltamiranomarquez753 · 9 days ago

    Can you share your Miro presentation with us? Great job.

  • @AdamLucek · 6 days ago

    Here you go! drive.google.com/file/d/1ESnrIy4c5LPOhNHRnn87Cv7DU_i0-_J9/view?usp=sharing

  • @ricardoaltamiranomarquez753 · 6 days ago

    @@AdamLucek thank you very much, you are very good

  • @sanesanyo · 9 days ago

    Great work, thanks for this🙏. There is another agentic approach called self-discovery. Would be cool if you covered that as well 😊.

  • @prafulmaka7710 · 10 days ago

    Good explanation!

  • @GriffinBrown-tq9jz · 11 days ago

    Well done! Thank you, sir

  • @tyler-morrison · 11 days ago

    This breakdown is insanely helpful 👏 I've been working as a web engineer for > 10 yrs and recently started learning about AI/ML. I began my career as a self-taught dev in the good ol' jQuery days, but my lack of CS fundamentals is starting to come back and bite me. These architectural diagrams are incredibly useful for breaking down high-level concepts.

  • @AdamLucek · 9 days ago

    Glad you found this helpful! Everything I record and share is all self-taught as well; I've got no formal CS background. I just think the topic is interesting and worth sharing!

  • @tk0150 · 8 days ago

    Would you share your slides? So helpful!

  • @caokang4957 · 11 days ago

    Thank you for sharing! Great summary.

  • @PRColacino · 11 days ago

    Great video! Could you share the code?

  • @AdamLucek · 9 days ago

    Thanks! The code comes from LangChain's series on LangGraph, linked in the description. Here's a direct link to their repo github.com/langchain-ai/langgraph/tree/main/examples

  • @pinkmatter8488 · 11 days ago

    Your channel has been very valuable today in getting me situated and helping me get the hang of LLM use. I can now start thinking about project ideas to get some practice. Thank you very much!

  • @cmthimmaiah · 12 days ago

    Very nicely done, thank you for such a good presentation.

  • @user-gy7te1ql3g · 12 days ago

    Good overview. It would be very interesting to see answer-quality benchmarks for these techniques. In a lot of real business cases, time and cost matter much less than quality.

  • @kenchang3456 · 12 days ago

    This is really great info, thanks a bunch for sharing. What's really eye-opening is the run times and token counts.

  • @linuszhu · 13 days ago

    Which one do you prefer as a recommendation?

  • @AdamLucek · 12 days ago

    I would say each has different applications, and they are better used as parts of larger agent ecosystems. E.g., taking a reflection-based approach to some end validation step would be useful, while a more plan-and-execute style approach to initial generation would likely be a better first step. As with most LLM-based apps, a lot depends on what data you're using, the task/end goal you want, and your tolerance for processing time. I would more so apply the general concepts here rather than see them as strict end solutions 😁
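    To make the "parts of a larger ecosystem" point concrete, here is a minimal LangGraph skeleton where a plan-and-execute style pass does the initial generation and a reflection pass validates it, looping back only if the draft is rejected. The two node functions are stubs standing in for real LLM calls:

    ```python
    from typing import TypedDict
    from langgraph.graph import StateGraph, END

    class State(TypedDict):
        task: str
        draft: str
        approved: bool

    def plan_and_generate(state: State) -> dict:
        # Stub for a plan-and-execute style first pass (LLM calls omitted).
        return {"draft": f"Draft answer for: {state['task']}"}

    def reflect(state: State) -> dict:
        # Stub for a reflection/validation step that critiques the draft.
        return {"approved": len(state["draft"]) > 0}

    def should_finish(state: State) -> str:
        return "done" if state["approved"] else "revise"

    workflow = StateGraph(State)
    workflow.add_node("generate", plan_and_generate)
    workflow.add_node("reflect", reflect)
    workflow.set_entry_point("generate")
    workflow.add_edge("generate", "reflect")
    workflow.add_conditional_edges("reflect", should_finish,
                                   {"done": END, "revise": "generate"})

    app = workflow.compile()
    print(app.invoke({"task": "summarize the report", "draft": "", "approved": False}))
    ```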

  • @sharannagarajan4089 · 13 days ago

    All of them suck.

  • @readmarketings9061 · 13 days ago

    Do you have a better solution?

  • @missigno42 · 12 days ago

    Why?

  • @TheFocusedCoder · 13 days ago

    Really good breakdown for folks building, thanks for putting this out.

  • @matthewpublikum3114 · 13 days ago

    Where's the code? It would be nice to know the smallest LLM capable of doing the planner/task decomposition and verification.

  • @AdamLucek · 13 days ago

    The code comes from LangChain's series on LangGraph, linked in the description. Here's a direct link to their repo github.com/langchain-ai/langgraph/tree/main/examples

  • @Leonid.Shamis · 13 days ago

    Thank you, excellent explanation!

  • @Jandodev · 13 days ago

    We made a seventh, with output-focused recursive events, at my company :)

  • @PYETech · 14 days ago

    That's amazing work we have here, guys. Cheers to you, bro. Thanks!

  • @niftylius · 14 days ago

    Hello

  • @MekMoney79 · 14 days ago

    Outstanding overview of the key agentic architectures. I learned a ton, prob one of the best out atm. Thanks!

  • @suhnyllakler5842 · 14 days ago

    Adam, you have done a brilliant, easy-to-understand (by showing) masterclass!!!

  • @andydataguy · 19 days ago

    You've got a great way of describing things! Would love to hear your description of evaluation for agentic systems 🙌🏾