Hi there! I'm Dave, an AI Engineer and the founder of Datalumina. On this channel, I share practical coding tutorials to help you become better at building intelligent systems. If you want to know how I help tech professionals beyond these videos, then check out the links below!
Comments
So cool that you make such great content, with clear explanations, and are so transparent <3
I appreciate that!
I like how you are so comprehensive in covering the different branches of scenarios and possibilities, going over the trade-offs. All these delivered so systematically and articulately as well. Well done. I RARELY subscribe to tutorial channels but yours is an instant subscribe after 2 videos.
I searched for you on Google and couldn't find you, even though I've been following you for a long time. For comparison, if you search for MrBeast on Google, he comes up right away.
Great! would love to see more of these.
Combined it with FastAPI to turn it into an endpoint and call it from the frontend side. Ooooofff... much faster development for machine learning web systems.
Can you try something like this with Langfuse?
I followed along. Your system looks great, with content creation processed through your generative AI pipeline. However, I think the point of agentic systems, which isn't quite there yet, is to be able to work in a non-rigid, bottom-up way where all the subsystems communicate with each other. So it builds on the pipelines you have, which are amazing by the way 😊, and puts them all together into a system that works autonomously. The idea is to get AI to the point where it works as a team without instruction. That's the whole reason Sam and all the others are building these huge systems now. One thing I want to get my head around is non-generative AI, which is content-based. Another thing I am seriously delving into now is API endpoints of all kinds with LLM support. For some tasks they are not required, but for many where data is involved, they are. Hope this makes sense. Not here to put a dent in your wonderful work; you are great at coding and putting the AI infrastructure together. Looking forward to following along with you. AI agent workflows are the way forward now.
Thanks for the video. What kind of whiteboard tool do you use?
It's Figma.
Congratulations this is just perfect!
need more videos like this!!!!
Say you have 50,000 class transcripts and need to build a recommendation engine. What's the best approach?
Did you use a VS Code extension to generate those code comments in your file, or...?
Have you looked at VRSEN’s Agent-Swarm? Sounds like he avoids many of the pitfalls you describe here…
Excellent video. Can you go into a bit more detail on how a database for this type of information might look and operate, or any kind of automation that would be involved? You mentioned sentiment analysis and doing analytics.
How do you deal with objections to sending this 'sensitive' data to OpenAI? We are doing a project now where we have to clean the data before sending it to OpenAI, which is a big challenge. Curious to hear other people's thoughts on this...
We use Azure OpenAI. Clients are generally okay with that in our experience.
In this video, you show a list of available tools. Where can I find that list?
If you are using VS Code, you can use the "Ports" feature, which takes about 30 seconds, instead of all the hassle of ngrok. Make sure you set the port to public.
Thank you for sharing, Dave, really interesting. I think you would love LangGraph; it is made for LLM accuracy, state management, and classification.
Loved the content. What are the advantages of using this instead of function calling?
@@sumitbindra It streamlines prompt engineering, means less code, and adds automatic retries.
@@daveebbelaar makes sense. thank you
Maaaaaan.... You have THE BEST content, HANDS DOWN, for Gen AI Development. Clear, concise, every step explained, context.... Context is key... Bravo! And thanks a lot for this, it's inspiring.
Wow, thanks!
Gold
Amazing!
Why don't you just use the JSON response from OpenAI directly?
This unifies your data structures without relying on prompt engineering. You still have to provide a JSON schema when using the JSON response with OpenAI, and there is also no automated retry mechanism if it fails to load your Pydantic model afterwards. Overall, this streamlines the development experience, especially if you're working with multiple developers who might all have slightly different prompting styles for JSON. Instructor uses the JSON response and Function Calling under the hood.
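The automated retry behavior mentioned above can be sketched with plain Pydantic (v2): a validation failure raises an error that Instructor would feed back to the model for another attempt. The model and helper below are illustrative, not Instructor's actual internals:

```python
from pydantic import BaseModel, ValidationError

class Reply(BaseModel):
    category: str
    confidence: float

def validate_with_retries(attempts: list[str]) -> Reply:
    """Try each raw LLM output in turn. In Instructor, each retry would be
    a fresh completion prompted with the previous validation error."""
    last_err = None
    for raw in attempts:
        try:
            return Reply.model_validate_json(raw)
        except ValidationError as err:
            last_err = err
    raise last_err

# First "completion" is missing a field; the simulated retry fixes it.
reply = validate_with_retries([
    '{"category": "question"}',
    '{"category": "question", "confidence": 0.9}',
])
print(reply.confidence)  # 0.9
```

The win is that the schema lives in one typed model instead of being restated in every developer's prompt.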
@@daveebbelaar How good or bad is this solution compared to other alternatives like the LangChain and LlamaIndex output parsers?
@@AbdulBasit-ff6tq I don't think it's related.
This is exactly what I needed! Thanks!!
Great, man, really awesome.
Good tip, thanks!
I give no comments, no likes, no nothing. But you made me do it.
This video is so strong - I almost don't want to like & share it, because I would rather keep it as my hidden golden secret.
Hey! When I run the code I get the following error: if prompt := st.chat_input(“How can I help?”): Syntax error: invalid syntax
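For anyone hitting the same error: the curly quotes “ ” in that line are typographic characters Python can't parse (they usually sneak in when code is copied from a formatted web page), and the walrus operator `:=` additionally requires Python 3.8 or newer. A stand-alone illustration with straight quotes, where the `chat_input` stub stands in for Streamlit's `st.chat_input`:

```python
# Corrected Streamlit line, with straight quotes:
#   if prompt := st.chat_input("How can I help?"):

# The same pattern, runnable without Streamlit:
def chat_input(placeholder: str) -> str:
    return "hello"  # stand-in for the user's chat message

if prompt := chat_input("How can I help?"):
    print(f"Got: {prompt}")
```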
I agree
Is there a part II for this project? Would love to see you struggle through and find a solution for a seemingly "real world" problem :) This is really great. Always enjoy watching your videos :)
Great!! I enjoy watching your videos. I tried the hands-on code from your GitHub, but I am facing an error: ModuleNotFoundError: No module named 'pgvector_service'. Then I tried to pip install pgvector_service, but this occurred: ERROR: Could not find a version that satisfies the requirement pgvector_service (from versions: none) ERROR: No matching distribution found for pgvector_service. Do you have any idea how to overcome this?
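`pgvector_service` is almost certainly a local module inside the repository, not a PyPI package, which is why pip finds nothing. The usual fix is to run the script from the repo root, or to put the repo root on the import path first. A stdlib sketch of the path fix (the folder layout is an assumption, the module name is taken from the error):

```python
import sys
from pathlib import Path

# Assumed layout: the cloned repo root contains pgvector_service.py.
# If you run a script from a subfolder, Python won't find that module,
# so prepend the repo root to the import search path explicitly.
repo_root = Path.cwd()  # or Path("/path/to/cloned/repo")
sys.path.insert(0, str(repo_root))

# import pgvector_service  # resolves once repo_root holds pgvector_service.py
```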
You can also use the Shiny extension for creating rapid web apps.
For people experiencing the webhooks error "The callback URL or verify token couldn't be validated. Please verify the provided information or try again later.": the issue is that Meta doesn't consider some ngrok regions "safe". The solution is to try other tunneling methods like Pinggy, Serveo, etc.
4 minutes in, and STILL nothing about what A.I. is. Geez...I'm out.
I think ngrok doesn't work anymore.
If you are using Visual Studio Code, you can use Local Port Forwarding.
would this work with a deno kernel in a jupyter notebook too?
Great work. Please publish the next tutorial. Is there a GitHub repo for the code?
🔥
Absolutely fantastic! Thanks for sharing, @daveebbelaar! Can we make this work with a local LLM - e.g. Ollama?
Hi brother, this is very clear and can be done by anyone the easy way. It was amazing. Thanks, bro... 🙏
Which app do you use to record your videos?
Great video. I am not big on YouTube, but this is the first time I see someone really understanding the current state of the tech.
Here's something you can help me understand, as an intermediate-level coder learning all of the nuances of AI/ML and their applications. You're extolling the value of the directed acyclic graph approach to data processing pipelines, to avoid sending data back to earlier stages. As a fan of idempotency and functional programming, I _think_ that I somewhat understand where you're coming from in your premise. But in my studies of models, I'm also seeing a lot of buzz around the differentiation between KANs and MLPs as methodologies.

My question is this: wouldn't there be some value in using information uncovered later in the pipeline to refine what you're doing earlier on? For instance, let's say you're entertaining guests and planning to serve appetizers. A very early step might be purchasing ingredients. Later on, you realize that not all of the guests show up. If we just keep moving forward, we make more appetizers than are needed. The alternative: when fewer guests show up or RSVP, instead of making as many apps as your ingredients/plans dictate, you make fewer. Now you have fewer appetizers, and you store or freeze the ingredients you didn't use. You _could_ make them all and freeze the unused portions, but by sending the information collected later back to an earlier step, you instead have the raw ingredients to use in other recipes. This is a really lousy and forced metaphor, but it's all I could come up with off the top of my head. It just seems like there's value in the concept.

On a different level, isn't this just sort of a form of backpropagation? The ability to reinform earlier calculations with the results of later ones?
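One way to square this question with the DAG approach: the late-arriving information (fewer guests showed up) doesn't have to flow backwards; it can enter the graph as a fresh input to a later node, so the pipeline stays acyclic. A toy sketch of that idea, with all names and numbers hypothetical:

```python
def buy_ingredients(expected_guests: int) -> int:
    """Early stage: buy enough ingredients for the expected headcount."""
    return expected_guests * 3  # 3 appetizers' worth per guest

def make_appetizers(ingredients: int, actual_guests: int) -> tuple[int, int]:
    """Later stage: the updated headcount arrives here as a new input,
    instead of being sent back to the purchasing step."""
    needed = actual_guests * 3
    made = min(ingredients, needed)
    frozen = ingredients - made  # leftovers stored, not wasted
    return made, frozen

ingredients = buy_ingredients(expected_guests=10)            # 30 units
made, frozen = make_appetizers(ingredients, actual_guests=6)  # 18 made, 12 frozen
```

Backpropagation is a good analogy for training-time feedback, but at pipeline run time the same effect is usually achieved by routing new information forward into later stages rather than cycling it back.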
I am an experienced coder and my wife is a nurse. Recently she began an online part-time job: "evaluating OpenAI's language models." I was absolutely convinced it was a scam, but she's already made $500 in three days. What kind of era are we living in?
My problem is having a hard time finding a girlfriend.
AutoGen and CrewAI are, I think, more experiments than anything else. I use Python as a maestro... then some AI when I need its generative abilities, some Playwright when I need web automation... but businesses run on logic, not in a democratic way and not in a creative way. They incorporate creativity in some steps, but that's just that. A procedure is the only way to go, as it always has been, or else we get unpredictability that goes against efficiency, is bad for processes in business and in factories, and would make any type of quality control impossible.
In Jupyter, why can I still not cut and paste text in a cell using the mouse?
Cyclical/recursive algorithms are needed for many problems, which is, in part, what agentic frameworks attempt to solve. Your sequential-processing-only paradigm is applicable only to certain problems.
Awesome, dude. Came across this at the right time in my life.