Adding Depth to DSPy Programs
Science & Technology
Hey everyone! Thank you so much for watching the 3rd edition of the DSPy series, Adding Depth to DSPy Programs! This video begins with some DSPy news, such as STORM, DSPy Assertions, and Typed Signatures! We then dive into the concept of adding depth to DSPy programs, taking a closer look at what it means to have unique input-output examples for each component and how we can compose DSPy programs with different LLMs per component! Finally, we walk through two notebooks illustrating adding depth to RAG programs and a 4-layer question-to-blog-post writer!
Demo #1 Notebook: github.com/weaviate/recipes/b...
Demo #2 Notebook: github.com/weaviate/recipes/b...
You can find the examples and links to community resources / news on github.com/weaviate/recipes!
Chapters
0:00 Intro
0:50 Chapters Overview
5:06 Weaviate Recipes
5:24 DSPy News and Community Notes
13:51 Adding Depth to RAG Programs
18:40 Multi-Model DSPy Programs
20:18 DSPy Optimizers
25:30 Deep Dive Optimizers
27:55 Into the Optimizer Code!
37:48 Demo #1: Adding Depth to RAG
1:05:25 Demo #2: Questions to Blogs
1:07:48 Thank you so much for watching!
Comments: 33
dude, you provide so much alpha for us by doing these actionable, pragmatic rundowns of the documentation. Thanks again.
DSPy to the moon 👏
@connorshorten6311
2 months ago
Haha indeed, thanks Karl!
Dude, you must be getting millions in karma for this. Thanks. Great tutorial
The man is back in the game!!!
@connorshorten6311
2 months ago
Haha absolutely! Thanks Tim!
I love your energy throughout this video Connor!
Super interesting examples! I think your videos are really underrated. Your explanation is clear and concise.
Love how easy it is to plug in different models for different tasks within the same DSPy program.
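The per-component model idea praised here can be sketched in plain Python (in DSPy itself you would typically bind LMs via its settings/context configuration; the `cheap_lm` / `strong_lm` callables below are made-up stand-ins, not real API calls):

```python
# Plain-Python sketch (not DSPy's actual API) of binding each pipeline
# component to its own language model. Stand-in callables replace real LMs.

def cheap_lm(prompt: str) -> str:
    # stand-in for a small, fast model (e.g. for query rewriting)
    return f"[cheap] {prompt}"

def strong_lm(prompt: str) -> str:
    # stand-in for a larger model reserved for the final answer
    return f"[strong] {prompt}"

class Component:
    """One pipeline step bound to a specific model."""
    def __init__(self, lm, template):
        self.lm = lm
        self.template = template

    def __call__(self, **kwargs):
        # fill the prompt template and call this component's own model
        return self.lm(self.template.format(**kwargs))

# Each component gets its own model, mirroring DSPy's per-component LM setup.
rewrite_query = Component(cheap_lm, "Rewrite the query: {question}")
final_answer = Component(strong_lm, "Answer using context [{context}]: {question}")

rewritten = rewrite_query(question="What is DSPy?")
answer = final_answer(context=rewritten, question="What is DSPy?")
print(answer)
```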
Thank you Connor for these updates and "adding depth" to the DSPy topic ;) I really appreciate it and it looks like you're about to become Mr DSPy here on youtube, keep the content coming.
My head is spinning, but man, this really opens up possibilities for optimizing and overcoming all the childhood diseases of LLM inference. Thanks Connor, keep up the great work!
@truliapro7112
2 months ago
Teaching too fast for this complex topic.
Awesome video!
@connorshorten6311
2 months ago
Thanks Erika!
If DSPy can autonomously optimize prompts, what about doing the same with code on the fly? How might we go about having code examine itself, its operational efficiency, and its results, and come up with self-improvements? Could DSPy be harnessed for this task? I could see doing both at once to get increased performance across two domains: prompt + code optimization.
@connorshorten6311
2 months ago
Yeah, I think you're definitely on the right path. It's crazy how you can close the loop with synthetic data to achieve this. You could use the Python interpreter and things like `time.time() - start`, but I'm not sure how you might interface deeper performance inspections, like a CPU or lock profile, for example.
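To make this reply concrete, here's a sketch of measuring a candidate function both ways: wall-clock timing as in the `time.time() - start` idea, and a CPU profile captured as text that could be fed back to an LLM. `candidate_implementation` is a made-up stand-in for LLM-generated code:

```python
import cProfile
import io
import pstats
import time

def candidate_implementation(n: int) -> int:
    # hypothetical function an optimization loop might evaluate
    return sum(i * i for i in range(n))

# Wall-clock timing (perf_counter is the more precise choice for intervals).
start = time.perf_counter()
result = candidate_implementation(100_000)
elapsed = time.perf_counter() - start
print(f"result={result}, elapsed={elapsed:.4f}s")

# A deeper inspection: capture a CPU profile as a string, which could then
# be handed to an LLM as optimization feedback.
profiler = cProfile.Profile()
profiler.enable()
candidate_implementation(100_000)
profiler.disable()
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(3)
profile_text = buf.getvalue()
```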
24 days ago
Performance as in speed is not always the target. In order for the code to be optimizable, you would need to give it data matching the real world. If you just optimize for unrealistic dummy data, the optimized version may be faster for that use case but completely fail in the real world. I think a more realistic approach would be something where the LLM can have a discussion with you, showcase different approaches with their pros and cons, and allow you to decide.
@fkxfkx
24 days ago
@ that’s not at all realistic or imaginative. You seem to be stuck in legacy thinking. Try using your imagination.
Could someone please clarify what "parse float rating" means? Generally speaking, I admire your enthusiasm and appreciate the effort you put into your content. However, I found myself a bit perplexed by some of the new jargon and terminology. Providing clear definitions could significantly enhance comprehension for us, the audience. Keep up the excellent work. I'm eagerly looking forward to your upcoming content.
@connorshorten6311
2 months ago
Thank you so much for the kind words of encouragement! "Parse float rating" refers to extracting a float value from the initial response from an LLM. This is one way to achieve structured output parsing with LLMs; there are many others, as this is one of the biggest issues in LLM programming these days. DSPy also has DSPy Assertions with `dspy.Suggest` / `dspy.Assert`, which is similar to this two-model-call philosophy. Another idea is to first validate a response with a Pydantic schema and, if it fails, format a retry prompt -- so also two model calls in philosophy. The other approach would be something like decoding constraints deeply integrated into the LLM itself. I've settled on the two-model-call solution personally; hope it works for you as well!
Would love a video on the TRACE. Great video!
Connor be experimenting with video formats.
I think spending quite a lot of time in the DSPy code is not ideal. You have to race through it because of the time constraints. Maybe get GPT-4 to describe the code and use that to explain how it works?
Hi! Have you tried DSPy with the Google Gemini API? It gives me an authentication error with GCP.
Is there a video about optimization with gradient descent?
Can you share a link to the notebook?
@connorshorten6311
2 months ago
Hey! Just updated the description! Thanks so much!
@frazuppi4897
2 months ago
@connorshorten6311 Thanks to you for the amazing video
How can we get the metadata associated with any chunk of docs?
How do you get the final optimized prompt?
Maybe we could be dividing by 4 instead of 5.
My dude... I can tell this is an 'extra-curricular activity' that you've done for us. But there is a lot of handwaving, especially toward the end when you're getting tired. I really appreciate the video and production, but certain parts are an all-or-none type of deal. It would be good if you could take a breather and give those sections the attention and unpacking they deserve. Anyhow, thank you for what you've done so far!