If we can estimate that type of LLM usage, why can't we use this interface to train or fine-tune LLMs not just to solve tasks but to make the right control decisions in some kind of workflow? Is there any research on this idea?
@palashjyotiborah9888 · 11 hours ago
Please improve the microphone quality. Why won't you do this? We have been requesting it for ages.
@brandonwinston · 9 hours ago
Also, you could just run the audio through the Adobe audio optimizer.
@darkmatter9583 · 12 hours ago
Keep going, you are doing great 🎉🎉🎉🎉❤❤❤
@ibrahimsaidi7239 · 19 hours ago
Keep up the good work Brace. Much appreciated 🙏🏾
@maxlgemeinderat9202 · 23 hours ago
Can you go into more detail about the memory checkpoint? I have difficulty understanding how I can use the chat history, e.g. in-memory history.
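One way to picture an in-memory chat history keyed by thread: a minimal stdlib sketch, not LangGraph's actual checkpointer API (the class and method names here are hypothetical). Each conversation thread appends its turns and replays only its own history on the next call.

```python
from collections import defaultdict


class InMemoryHistory:
    """Hypothetical sketch of a per-thread chat history store."""

    def __init__(self):
        # thread_id -> ordered list of (role, text) turns
        self._store = defaultdict(list)

    def append(self, thread_id, role, text):
        self._store[thread_id].append((role, text))

    def get(self, thread_id):
        # Return a copy so callers cannot mutate the stored history.
        return list(self._store[thread_id])


history = InMemoryHistory()
history.append("thread-1", "user", "Hi, I'm Max")
history.append("thread-1", "assistant", "Hello Max!")
history.append("thread-2", "user", "Unrelated conversation")

# Each thread replays only its own turns:
print(history.get("thread-1"))
```

In LangGraph the analogous idea is compiling the graph with a checkpointer and passing a `thread_id` in the invocation config so state is saved and restored per conversation; check the current docs for the exact API.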
@AmanKumar-qx2wl · 1 day ago
Nice explanation, the examples are great.
@tolorunlekedanieljesutoni4628 · 1 day ago
Hi, please, has anyone here ever worked on building a chatbot that responds to people like a particular person, i.e. a chatbot that responds or generates replies like Trump or Barack?
@jellz77 · 1 day ago
Hi Lance, great video again! Question for you: recently I've been omitting LLM function calling as a precautionary measure. I'm basically separating the LLM from my functions (like API calls) and just asking the LLM to return JSON output compliant with the params in the API function. Am I doing myself a disservice by keeping these separate?
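The pattern described above can be sketched in a few lines: the model only produces a JSON string, and your own code validates it against the expected parameters before ever touching the real API. Everything here (the `get_weather` stub, the parameter set) is a hypothetical illustration, not anyone's actual service.

```python
import json


def get_weather(city: str, unit: str = "c"):
    """The real API call lives here, fully under your control (stubbed)."""
    return {"city": city, "temp": 21, "unit": unit}


EXPECTED_PARAMS = {"city", "unit"}


def dispatch(llm_text: str):
    """Parse the model's JSON answer, validate it, then call the function."""
    args = json.loads(llm_text)
    unknown = set(args) - EXPECTED_PARAMS
    if unknown:
        raise ValueError(f"unexpected params: {unknown}")
    return get_weather(**args)


# Pretend the LLM returned this string instead of a native tool call:
result = dispatch('{"city": "Paris", "unit": "c"}')
print(result)
```

The trade-off: native function calling gives you provider-side schema enforcement and retries for free, while this manual route gives you an explicit validation boundary between the model and your side effects. Both are reasonable; the manual route just shifts the validation burden onto you.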
@journeymanaipod · 1 day ago
Great video! I've been loving this new framework
@user-wr4yl7tx3w · 1 day ago
But it doesn't seem like Fireworks is free.
@GuriLudhiana · 1 day ago
Knowledgeable
@sravan9253 · 1 day ago
If for everything you say "you can look into the notebook," why put up a video? The video runs as if it's being chased by someone.
@sabre_code · 2 days ago
A few days back I tried a lot... finally went with a Gemini model. It worked fantastically.
@rossanovinicius7373 · 2 days ago
For anyone looking to save time: not even cloning the repository makes this work. It only functions in a development environment. Any attempt to run the build fails with the error [EmptyChannelError]. LangChain seems more focused on releasing videos and new features than on ensuring functionality, and doesn't even have the courtesy to respond to those trying to resolve the issue.
@AdvogaIA · 2 days ago
Exactly!
@MalcolmJones-bossjones · 2 days ago
6:10 I had an "ah-ha" 💡💡💡 moment from what you said about grabbing different info from a trace instead of having to go directly to the run, thank you so much for that. This helps me with a problem I am currently stuck on.
@jennievo100 · 2 days ago
Excellent video! Thank you. Would you know how to handle the case where the agent goes into an infinite loop, e.g. it gets stuck at the hallucination check? I can only think of keeping track of a threshold for the number of checks, and I'm wondering if there's a more elegant way to do that in LangChain.
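The threshold idea mentioned above is usually the right instinct. A minimal sketch of a retry budget around a grader (the function names are hypothetical, and the grader is stubbed to always fail so the worst case is visible):

```python
MAX_CHECKS = 3


def hallucination_check(answer: str) -> bool:
    """Stub grader: pretend every draft is flagged (worst case)."""
    return False


def run_with_guard(question: str):
    """Regenerate until the check passes or the retry budget is exhausted."""
    for attempt in range(1, MAX_CHECKS + 1):
        answer = f"draft {attempt} for {question!r}"
        if hallucination_check(answer):
            return answer
        # otherwise loop back and regenerate
    # Graceful exit instead of an infinite loop:
    return "Sorry, I could not produce a grounded answer."


print(run_with_guard("what is RAG?"))
```

If I remember correctly, LangGraph also exposes a `recursion_limit` in the invocation config that raises an error once the graph exceeds that many steps, which acts as a global backstop on top of a per-node counter like this; verify against the current docs.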
@pragyantiwari3885 · 2 days ago
Literally, I was dealing with Llama 3 and integrating tools with it... got many errors... and now I just got this video.
@Slimshady68356 · 2 days ago
First
@GuriLudhiana · 2 days ago
Knowledgeable
@user-kj5ci9ro1p · 2 days ago
thx
@user-kj5ci9ro1p · 2 days ago
Thx
@husnainyousaf9141 · 4 days ago
First time in my life I had to watch at 0.75x speed. Worth watching.
@mahoanghai3364 · 5 days ago
Great tutorial <3
@deanchanter217 · 5 days ago
Would love to see a full end-to-end Python example with something like Reflex.
@balusubhanuprakash8045 · 5 days ago
What's this witchcraft 😵
@stanTrX · 6 days ago
Thanks. How do I take two inputs for a function (tool)?
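The short answer is that a multi-input tool is just a function with two parameters whose names are exposed in the schema the model sees, and the model's arguments come back as a dict that you splat into the call. A stdlib sketch of that idea (the `tool_schema` helper is hypothetical, not LangChain's API):

```python
import inspect


def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


def tool_schema(fn):
    """Derive a simple schema from the function signature."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": fn.__doc__,
        "parameters": list(sig.parameters),
    }


schema = tool_schema(multiply)
print(schema)  # the model sees both parameter names: ["a", "b"]
print(multiply(**{"a": 6, "b": 7}))  # the model's args dict, splatted back in
```

In LangChain itself, decorating a two-argument function with `@tool` and invoking it with a dict of args is the usual pattern; check the current tools documentation for the exact signature.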
@hectorcastro2467 · 6 days ago
Gold
@wshobson · 6 days ago
Awesome Brace! Absolutely love this!
@AdvogaIA · 6 days ago
Good video. But how could I save the messages and access them again? Since the messages are displayed in {elements} without any map, how can I access them later?
@amazingsly · 2 days ago
I have the same question. I want to store the messages in a database. Maybe someone can help.
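For persisting messages, a small SQLite table keyed by session works as a starting point. This is a generic stdlib sketch (table and column names are my own choice, not tied to any framework); swap `:memory:` for a file path to persist across restarts.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute(
    "CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT)"
)


def save(session_id, role, content):
    """Append one chat turn for a session."""
    conn.execute(
        "INSERT INTO messages VALUES (?, ?, ?)", (session_id, role, content)
    )
    conn.commit()


def load(session_id):
    """Replay all turns for a session in insertion order."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ?", (session_id,)
    )
    return rows.fetchall()


save("s1", "user", "Hello")
save("s1", "assistant", "Hi there!")
print(load("s1"))
```

On the frontend side, serializing each message to a plain record like this before storage also answers the `{elements}` question above: you keep your own list of message records and re-render the elements from it, rather than trying to read them back out of the rendered output.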
@hxxzxtf · 6 days ago
🎯 Key points for quick navigation:
00:15 📊 The retrieval process in RAG involves indexing documents, splitting them into smaller chunks, and storing their embeddings in an index.
00:41 🔍 Documents are embedded into a high-dimensional space where similar documents are located near each other.
01:36 💡 The location of a document in this space is determined by its semantic meaning or content.
02:03 🔎 Retrieval involves searching for nearby documents to a given question in this high-dimensional space.
02:56 📈 LangChain provides many different embedding models, indexes, document loaders, and splitters that can be combined to test different ways of doing indexing or retrieval.
Made with HARPA AI
@hxxzxtf · 6 days ago
🎯 Key points for quick navigation:
00:02 📹 The second video in the RAG from Scratch series focuses on indexing, a crucial component of RAG pipelines.
00:28 🔍 The goal of indexing is to retrieve documents related to a given question using numerical representations of documents.
00:53 📊 Numerical representations of documents are used for easy comparison and search, with approaches including sparse vectors and machine learning-based embedding methods.
01:08 💡 Embedding methods compress documents into fixed-length vectors that capture their meaning, allowing for efficient search and retrieval.
02:03 📈 Documents are split into smaller chunks to accommodate embedding models' limited context windows, and each chunk is compressed into a vector representation.
Made with HARPA AI
@hxxzxtf · 6 days ago
🎯 Key points for quick navigation:
00:03 📹 The "RAG from Scratch" series will cover basic principles and advanced topics for building LLM applications with LangChain.
00:15 🔒 LLMs haven't seen all data, including private or recent data, due to limited pre-training runs.
00:44 📊 LLMs have context windows that are increasing in size, representing dozens to hundreds of pages of information.
01:10 💻 Retrieval-Augmented Generation (RAG) is a popular paradigm for connecting LLMs to external data, involving three stages: indexing, retrieval, and generation.
02:06 📝 Future videos will explore methods and tricks for RAG's three basic components in detail.
Made with HARPA AI
@meditatvio5958 · 6 days ago
Where are we heading to ?!?😮😮😮
@tourtlelaser · 6 days ago
Your audio is so quiet
@hrudaykumar3344 · 6 days ago
crisp and informative series
@mahoanghai3364 · 7 days ago
Very cool <3
@drlordbasil · 7 days ago
I really love the work bud! :D Always listen to you in background tabs while coding <3
@riot121212 · 7 days ago
This is the future of UI and UX. I've been unable to stop thinking about this, and now I find that there are incredibly bright minds already implementing it. Pleasure to watch and learn :)
@mrmetaverse_ · 2 days ago
totally agree. Like imagine an older user has trouble finding the "share document" button. They ask their on screen agent, and the UI changes, removing all irrelevant buttons, and making the share button much larger. This really changes a lot. Once you know how to ask the question, you can learn faster, and do so much more. LLM chatbots and generative UI are a wonderful step towards making it easier to ask "the question".
@akimodeli · 7 days ago
It's beautiful people like you that make the world a better place. Kudos brother! 👏
@janwillemaltink2216 · 7 days ago
Awesome content. I noticed that in the README.md the git clone command has a small typo (bracesprou instead of bracesproul).
@feossandon · 7 days ago
Thank you for this content :D!
@cyAbhishek · 7 days ago
Wowwwwwww, literally what I wanted
@gitmaxd · 7 days ago
Massive applause! This was a fantastic video series with a lot of value for any skill level. This is the 'last mile' of the LangGraph learning path. The LangChain LangGraph series is a great place to start with LangGraph, then the Deep Learning "AI Agents in LangGraph" series. This one ties everything together with GenUI and Vercel.
@cyAbhishek · 7 days ago
Completely agree
@fkxfkx · 7 days ago
Very nice 👌
@darkmatter9583 · 7 days ago
❤ Huge fan, keep up the great content. I have a question: I want to get your documentation as a PDF for reading, like AWS does, to make it easier to work with the LangChain and LangGraph API documentation. Would that be possible? If not, how can I approach the problem? I feel limited because your GitHub repo doesn't show the same information as your API documentation.
@varunmehra5 · 7 days ago
WHERE IS THE PYTHON VIDEO!!!
@user-bw6qh4zz4q · 4 days ago
kzread.info/dash/bejne/lmep0a6blqW2m9o.html
@lw2519 · 7 days ago
I'm still confused about Chain and Agent: when can I use a Chain, and when should I use an Agent? Can anyone help me answer this, please? Thanks!
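A rough mental model: a chain is a fixed sequence of steps that you decide at write time, while an agent lets the model decide the next step at run time, in a loop. A stdlib sketch of the distinction (all function names are hypothetical stand-ins; in a real agent the `action` choice comes from the LLM):

```python
def retrieve(q):
    return f"docs for {q}"


def generate(q, docs):
    return f"answer({q}, {docs})"


# Chain: the route is hard-coded; every input takes the same path.
def chain(question):
    docs = retrieve(question)
    return generate(question, docs)


# Agent: a decision step (normally the LLM) picks the next tool in a loop.
def agent(question):
    state = {"question": question, "docs": None}
    while True:
        # Stand-in for the LLM's choice of next action:
        action = "retrieve" if state["docs"] is None else "generate"
        if action == "retrieve":
            state["docs"] = retrieve(state["question"])
        else:
            return generate(state["question"], state["docs"])


print(chain("what is RAG?"))
print(agent("what is RAG?"))
```

Rule of thumb: if you can write the steps down in advance, use a chain; if the number or order of steps depends on intermediate results, reach for an agent.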
@captainkirk8999 · 7 days ago
Just installed Llama 3; it cannot remember my name!
@samueljabes8267 · 8 days ago
How can I access this Jupyter notebook?
@nguyenanhnguyen7658 · 8 days ago
A rerank model trained on a relevant dataset will help, and that is the most important thing.