Anthropic's Meta Prompt: A Must-try!

Science & Technology

Anthropic's Metaprompt helps you create better prompts
Metaprompt Colab: drp.li/8Odmv
Metaprompt Colab Original: colab.research.google.com/dri...
Anthropic Prompt Guide: docs.anthropic.com/claude/doc...
Anthropic Cookbook: github.com/anthropics/anthrop...
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes
👨‍💻Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials
⏱️Time Stamps:
00:00 Intro
00:14 Anthropic Prompt Library
00:25 OpenAI Cookbook
01:22 Anthropic Prompt Library: Website Wizard Prompt
01:36 Anthropic Cookbook
01:55 Anthropic Helper Metaprompt Docs
02:55 Code Time
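
For anyone who wants to try the Metaprompt outside the Colab, here is a minimal sketch of the core idea: send the metaprompt text plus your task description to Claude and read back the generated prompt template. The metaprompt.txt file, the {{TASK}} placeholder, and the model string are illustrative assumptions, not taken from the video.

```python
# Minimal sketch: run Anthropic's Metaprompt against a task description.
# Assumes the metaprompt text has been copied from the Colab into metaprompt.txt
# and that ANTHROPIC_API_KEY is set in the environment.
import anthropic  # pip install anthropic

client = anthropic.Anthropic()

metaprompt = open("metaprompt.txt").read()
task = "Act as a travel agent that helps users plan a trip within a given budget."

response = client.messages.create(
    model="claude-3-opus-20240229",  # any Claude 3 model should work
    max_tokens=4096,
    messages=[{"role": "user", "content": metaprompt.replace("{{TASK}}", task)}],
)
print(response.content[0].text)  # the generated prompt template
```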

Comments: 113

  • @icandreamstream · 3 months ago

    Perfect timing, I was experimenting with this myself yesterday. But this is a much more in-depth take, I'll have to check it out.

  • @samwitteveenai · 3 months ago

    Glad it was useful.

  • @erniea5843 · 2 months ago

    Appreciate you bringing attention to this. Great walkthrough

  • @Taskade · 3 months ago

    Thanks for sharing this insightful video! It's great to see how Anthropic's Metaprompt can really enhance prompt quality and model interactions. Looking forward to experimenting with it myself. Keep up the awesome work! 😊👍

  • @levi2408 · 2 months ago

    This comment has to be AI generated

  • @MartinBroadhurst · 2 months ago

    @levi2408 Thanks for sharing your interesting insights into fake AI-generated comments. I can't wait to learn more about AI-generated spam. 😉

  • @IvarDaigon · 3 months ago

    I guess one use case might be using a larger LLM to create system prompts for a smaller, faster model, enabling it to better follow instructions and collect information before that information is summarized and passed back to the larger model to formalize. For example, Model A instructs Model B on how to interview the customer and collect the required information, which then gets passed back to Model A to fill out and submit an online form. This approach would be faster and cheaper than getting Model A to do all of the work, because A-tier models are often 10x the cost of B-tier models. This kind of system would work really well when collecting information via email, instant message, or over the phone.
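
A rough sketch of the two-tier pattern described in the comment above, assuming the Anthropic Python SDK; the model IDs and prompt wording are illustrative only.

```python
# Illustrative sketch of the idea above: an A-tier model writes the system prompt,
# a cheaper B-tier model runs the interview, and the A-tier model formalizes the
# result. Model IDs and prompt wording are assumptions for illustration.
import anthropic

client = anthropic.Anthropic()

def ask(model: str, system: str, user: str) -> str:
    resp = client.messages.create(
        model=model, max_tokens=1024, system=system,
        messages=[{"role": "user", "content": user}],
    )
    return resp.content[0].text

# 1. Model A (larger) drafts the interviewing system prompt for Model B (smaller).
interview_prompt = ask(
    "claude-3-opus-20240229",
    "You write concise system prompts for a junior assistant.",
    "Write a system prompt telling an assistant how to interview a customer and "
    "collect name, travel dates and budget for a holiday booking form.",
)

# 2. Model B conducts the interview (a single simulated turn here).
transcript = ask(
    "claude-3-haiku-20240307", interview_prompt,
    "Hi, I'd like to book a trip to Japan in May for under $3000.",
)

# 3. Model A formalizes the collected information, e.g. for a form submission.
print(ask(
    "claude-3-opus-20240229",
    "Extract the booking details from the transcript as JSON.",
    transcript,
))
```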

  • @keithprice3369 · 3 months ago

    Interesting. I was envisioning the opposite. Use Haiku to generate the prompt and pass that to either Sonnet or Opus. Worth experimenting with both, I think.

  • @davidw8668 · 3 months ago

    DSPy pipelines indeed work kind of that way: you use larger models at the beginning to optimize prompts and fine-tune automatically, create data with larger models to train smaller models, then run the decomposed tasks on smaller models and evaluate the pipeline using larger models.

  • @Stewz66 · 3 months ago

    Very helpful, constructive, and practical information. Thank you!

  • @CybermindForge · 3 months ago

    This is the truth. It is just semantically and syntactically difficult to adjust to them all. If you add video and audio generation it gets 😅. Great video!

  • @matthewtschetter1953 · 3 months ago

    Sam, helpful as always, thank you! How do you think these prompting cookbooks could help agents perform tasks?

  • 2 months ago

    Really interesting. This will help with having hyperspecialized agents, given that swarms of these are the future of AI, at least for the coming months... Thank you, Sam

  • @polysopher · 3 months ago

    Totally what I think the future will be!

  • @AnimusOG · 3 months ago

    My favorite AI Guy, thanks again for your content bro. I hope I get to meet you one day.

  • @ShawnThuris · 3 months ago

    Very interesting, there must be so many use cases for this. (Minor point: in August the time is PDT rather than PST.)

  • @WhySoBroke · 3 months ago

    Great tutorial and nicely explained. I believe you assume a certain level of knowledge from the viewer. For a beginner, where do we enter the Claude API key? That's just one example of things you assume the viewer already knows. Maybe direct us to a basic video explaining it so it is not redundant?
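
For anyone stuck on the API-key question above, a minimal sketch of the usual pattern in a notebook, assuming the anthropic Python SDK; the getpass prompt text is illustrative.

```python
# One common way to supply the Claude API key in a Colab/notebook: set the
# ANTHROPIC_API_KEY environment variable, which the anthropic SDK reads by default.
import os
from getpass import getpass

import anthropic

os.environ["ANTHROPIC_API_KEY"] = getpass("Paste your Anthropic API key: ")
client = anthropic.Anthropic()  # picks up the key from the environment
```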

  • @aleksh6329 · 3 months ago

    There is also a framework called DSPy by Omar Khattab that attempts to remove prompt engineering and it works with any LLM!

  • @MattJonesYT · 3 months ago

    Whenever I do A/B testing between ChatGPT and the free Claude model I end up choosing ChatGPT, mainly because for whatever reason Claude tends to hallucinate in authoritative-sounding ways, whereas if ChatGPT doesn't understand something it is more likely to admit that (but not always). For instance, today I told Claude I added PEA to a gas engine and it assumed I was using biodiesel and proceeded to give a long chat about that. ChatGPT understood that PEA is polyetheramine for cleaning gas systems. So it's hard for me to take Claude seriously as yet.

  • @alextrebek5237 · 2 months ago

    Criminally underrated channel

  • @micbab-vg2mu · 3 months ago

    At the moment I use personalised prompts, a different one for every task - the output quality is much higher :)

  • @ivancartago7944 · 2 months ago

    This is awesome

  • @manikyaaditya1216 · 1 month ago

    Hey, this is insightful, thanks for sharing it. The Colab link seems to be broken, can you please share the updated one?

  • @joser100 · 3 months ago

    This looks great but very specific to Anthropic models, no? Isn't that what we are after when using programmatic tools such as DSPy, for example, to reach the same goal but more "generically"? (Similar with Instructor, only that one is more focused on formatting, I think.)

  • @ronnieleon7857 · 3 months ago

    Hello Sam. I hope you're holding up well. There's a video where you talked about open-sourcing a web-scraping tool. Did you open-source the project? I'd like to contribute to a tool that automates web scraping.

  • @alchemication · 3 months ago

    Interesting, just when I had concluded that overly long prompts are not good and are usually a symptom of trying to cram in too much info ;D (depending on the situation, obviously). Nevertheless the concept is nice, and indeed some of us have utilised it for a long time for prompt development and optimisation ;)

  • @damien2198 · 3 months ago

    I use a GPT as a Claude 3 prompt optimizer (it loads the Claude prompt documentation/cookbook into ChatGPT)

  • @mikeplockhart · 3 months ago

    Thanks for the content. With meta prompting in mind, do you think something like DSPy is a more programmatic alternative, or are they doing similar things under the hood? And if you're looking for video ideas… 😊

  • @samwitteveenai · 3 months ago

    I have been playing with DSPy a fair bit, with some interesting results. It is quite a bit different from the Metaprompt but has some very interesting ideas.

  • @odw32 · 2 months ago

    Looking at how we work together with humans, it would make a lot more sense if the prompting process could be split up: first you chat with/onboard the model with the right task/context, it asks for clarifications where needed, then it executes the task and asks for review/feedback, reflecting back on what it did. Especially the "ask for clarification", "admit lack of knowledge" and "request feedback" parts aren't a default part of the commercial tools yet. Luckily, things like meta-prompting and LangChain agents all seem to converge in that direction, like little pieces of the puzzle.
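
A toy sketch of the chat → clarify → execute → feedback loop described above, using a plain chat API; the CLARIFY/ANSWER convention, model ID and control flow are invented for illustration, not an existing feature of any tool.

```python
# Toy sketch of the loop described above: the model may first ask a clarifying
# question, then executes, then invites feedback. The CLARIFY/ANSWER convention
# is an invented, illustrative protocol, not a built-in feature.
import anthropic

client = anthropic.Anthropic()
SYSTEM = (
    "If the task is ambiguous, reply with a line starting 'CLARIFY:' followed by one "
    "question. Otherwise reply with 'ANSWER:' and the result, then ask for feedback."
)

def step(history: list[dict]) -> str:
    resp = client.messages.create(
        model="claude-3-sonnet-20240229", max_tokens=512,
        system=SYSTEM, messages=history,
    )
    return resp.content[0].text

history = [{"role": "user", "content": "Summarise the report for me."}]
reply = step(history)
if reply.startswith("CLARIFY:"):
    history += [
        {"role": "assistant", "content": reply},
        {"role": "user", "content": "It's a two-page sales report; keep it to 5 bullets."},
    ]
    reply = step(history)
print(reply)
```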

  • @mushinart · 2 months ago

    Cool video, sir... did LangChain optimize it for Claude 3 yet?

  • @samwitteveenai · 2 months ago

    Yeah, you can use LangChain with Claude 3, no problems.

  • @SonGoku-pc7jl · 1 month ago

    cool! :) thanks!

  • @TheRealHassan789 · 3 months ago

    Idea: use RAG to grab the closest prompts from GitHub repos to inject into the meta prompt notebook…. This would probably give even better results?

  • @samwitteveenai · 3 months ago

    There is some really nice research, and there are applications, around using RAG to choose the best exemplars/examples for few-shot learning. What you are saying certainly makes sense.
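
A hedged sketch of the retrieval idea in this exchange: embed a pool of candidate example prompts, pick the ones nearest to the current task, and splice them into the metaprompt as exemplars. The embed() helper below is a toy stand-in, and the {{EXAMPLES}} placeholder is hypothetical.

```python
# Illustrative sketch: pick the stored example prompts most similar to the current
# task and splice them into the metaprompt as exemplars. The hashing embed() below
# is a toy stand-in; swap in a real embedding model in practice.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    return v

def top_k_examples(task: str, examples: list[str], k: int = 3) -> list[str]:
    q = embed(task)
    sims = [
        float(np.dot(q, embed(ex)) / (np.linalg.norm(q) * np.linalg.norm(embed(ex)) + 1e-9))
        for ex in examples
    ]
    order = np.argsort(sims)[::-1][:k]  # highest cosine similarity first
    return [examples[i] for i in order]

# The chosen exemplars can then be concatenated into the prompt, e.g.
# prompt = metaprompt.replace("{{EXAMPLES}}", "\n\n".join(top_k_examples(task, pool)))
# ({{EXAMPLES}} is hypothetical; it depends on how you structure your prompt).
```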

  • @LucianThorr · 2 months ago

    What is the "typical" UX for developers who are using these LLMs on a daily basis? Is it all in notebooks like in the above video? Web-browser-based conversations? IDE integrations? And especially if IDE, how do you keep your proprietary code from being scanned and leaking back up to these companies?

  • @codelucky · 3 months ago

    Thank you, Sam. I'm excited to dive deep into Metaprompt and learn how to create comprehensive prompts with precise instructions that produce the desired outcomes for users. Can you suggest a course or resource to help me get started on this journey?

  • @amandamate9117 · 3 months ago

    i love this

  • @TheBlackClockOfTime · 3 months ago

    I thought my screen was scratched there for a second until I realized it's the human head logo's grey outlines.

  • @user-lb2gu7ih5e · 2 months ago

    By YouSum:
    00:01:58 Utilize Metaprompt for precise AI instructions.
    00:08:49 Metaprompt aids in generating detailed and specific prompts.
    00:10:42 Metaprompt ensures polite, professional, and customized AI responses.
    00:11:55 Experiment with Metaprompt for tailored AI agent interactions.

  • @hosst · 2 months ago

    wonderful

  • @benjaminanderson1014 · 2 months ago

    So we've successfully outsourced AI prompt engineering to AI? Cool.

  • @samwitteveenai · 2 months ago

    For some things. It certainly helps.

  • @jarosawburyk893 · 2 months ago

    I wonder how Claude-specific it is - would it generate good prompts for OpenAI GPT-4?

  • @samwitteveenai · 2 months ago

    Certainly worth a try. You can alter it and also run it on OpenAI or Gemini etc. as well.

  • @Koryogden · 2 months ago

    META-PROMPTING!!!! YES!!!! This topic excites me!

  • @LaHoraMaker · 3 months ago

    Maybe this is the work of the famous Prompt Engineer & Librarian position at Anthropic with a base salary of 250-375k USD :D

  • @KeiS14 · 3 months ago

    This is cool and all but what is 4? 1:03

  • @nicdemai · 3 months ago

    The meta prompt is a feature currently in Gemini Advanced, but it's not released yet. Although it's not as detailed as this.

  • @Walczyk · 2 months ago

    You mean how it writes out a list first?

  • @nicdemai · 2 months ago

    @Walczyk No, I mean when you write a prompt. Before you send it to the Ultra model, another model tries to modify it to make it longer and more detailed, with concise instructions, before it is passed on to Ultra.

  • @Walczyk · 2 months ago

    @nicdemai Oh I see, I had a feeling it was doing that, because it would read out this clean structure of what it would do; I could see it had received that as its prompt.

  • @armankarambakhsh4456 · 3 months ago

    Won't the Metaprompt work if I just copy it into the Claude main interface itself?

  • @samwitteveenai · 2 months ago

    Yes, you can certainly do something similar to that.

  • @gvi341984 · 2 months ago

    Claude's free mode only lasts a few messages until you hit the paywall.

  • @MiraMamaSinCodigo · 2 months ago

    Metaprompt feels like a RAG of all models.

  • @harigovind511 · 3 months ago

    I am part of a team that is building a GenAI-powered analytics tool; we still use a combination of GPT-3.5 and 4… Don't get me wrong, Claude is good - especially Sonnet, the price-to-performance ratio is just out of this world - I guess we are just primarily impressed by OpenAI's track record of releasing quality models which are significantly better than the previous version under the same API umbrella.

  • @fburton8 · 2 months ago

    Probably a silly question, but why is the "and" at 4:24 blue?

  • @einmalderfelixbitte · 2 months ago

    I am sure that is because 'and' is normally used as an operator in Boolean expressions (like 'or'). So the editor (wrongly) highlights it everywhere, even when it is not used as a logical operator.

  • @mikeyai9227 · 2 months ago

    What happens when your output has XML in it?

  • @samwitteveenai · 2 months ago

    You can parse it very easily.
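
A minimal sketch of what "parse it very easily" can look like in practice, assuming the model wraps its output in an XML-style tag; the <answer> tag name is illustrative.

```python
# Claude-style prompts often ask the model to wrap sections of its answer in
# XML-style tags; a simple regex is usually enough to pull a tagged section out.
import re

def extract_tag(text: str, tag: str) -> str | None:
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else None

response_text = "<scratchpad>thinking...</scratchpad>\n<answer>42</answer>"
print(extract_tag(response_text, "answer"))  # -> 42
```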

  • @xitcix8360 · 3 months ago

    I want to cook the soup

  • @JC-jz6rx · 2 months ago

    Problem is cost. Imagine sending that prompt on every API call

  • @samwitteveenai · 2 months ago

    You don't need to send this prompt each time. The idea is that this produces a much shorter prompt that you can then reuse.
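
A sketch of that workflow: run the Metaprompt once, save the generated template, then substitute the per-request variables into the saved template on each call. The file name and the {$CUSTOMER_QUESTION} placeholder are assumptions for illustration.

```python
# Run the Metaprompt once, save its output, and at request time send only the
# much shorter generated template with the variables filled in.
# The file name and the {$CUSTOMER_QUESTION} placeholder are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

template = open("generated_prompt.txt").read()  # saved output of an earlier Metaprompt run
prompt = template.replace("{$CUSTOMER_QUESTION}", "Can I change my flight date?")

resp = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)
print(resp.content[0].text)
```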

  • @heresmypersonalopinion · 2 months ago

    I hate it when it tells me "I feel uncomfortable completing this task".

  • @AntonioVergine · 2 months ago

    Am I the only one thinking that if I have to "fix my prompt or the AI won't understand", it means that the AI simply is not good enough yet?

  • @samwitteveenai · 2 months ago

    Kinda yes, kinda no. You could imagine you have to talk to different humans in different ways, etc. Models are like that. Of course, ideally we would like them to understand everything we want, but even humans aren't good at that.

  • @sbutler860 · 2 months ago

    I asked Claude 3 questions: 1. Who won the FA Cup in 1955? 2. Which composer won the Academy Award in 1944, and for which film? 3. What was the date of the death of Queen Elizabeth II? Claude got all three questions WRONG. I asked the same of Google's Gemini and it got all three CORRECT. I also asked the same questions of Windows Copilot, and it also got all three correct, although it took its sweet time about it. Therefore, Claude may know how to metaprompt a travel agent, but it doesn't know its arse from its elbow about anything else. Long live Google Gemini! And Copilot! x

  • @ellielikesmath · 2 months ago

    Prompt engineering still seems like such a dead end. If it requires each prompt to be unrolled into something with a lot of common-sense filler, why not add that as a filter to the LLM? So you feed in your prompt, some automated system makes reasonable guesses as to what filler to pack it with, and then you see what the LLM makes of it. The problem is the user thinks all of the assumptions they make are obvious and shared by the LLM, and that's not always the case. I'd be interested to know if any LLM tries to predict sentences/clauses the user left out of their prompt, or heaven forbid, asks the user questions!! about what they may have omitted or meant. This is but one way out of this nonsense, and I assume people are trying lots of ways to get rid of this besides what I am suggesting.

  • @Koryogden · 2 months ago

    What I was trying myself is a Style/Principles Guide framework... It just doesn't quite apply the principles, but it did qualitatively improve responses.

  • @llliiillliiilll404 · 2 months ago

    00:00:35 - watching this guy typing with two fingers is so painful

  • @JaredFarrer · 1 month ago

    Yeah, but nobody wants to try and figure out how to word things so the model will respond; it's junk. I wouldn't pay for Gemini.

  • @motbus3 · 3 months ago

    It is ridiculous that each AI model should be prompted in a different way.

  • @TheMadridfan1 · 3 months ago

    Sadly OpenAI is acting worthless lately. I sure hope they release 5 soon

  • @nickdisney3D · 3 months ago

    After using Claude, GPT-4 is poop.

  • @tomwojcik7896 · 3 months ago

    @nickdisney3D Really? In what areas did you find Claude (Opus, I assume) significantly better?

  • @rcingfever3882 · 3 months ago

    I would say in conversation, for example: if you see a difference between GPT-3.5 and GPT-4, the latter just understands better. The same is true between GPT-4 and Opus - not a lot, but slightly. And when it comes to coding and image generation, to make a small change on my website I had to prompt I guess 10 times to make GPT-4 understand, but with Opus I got it the first time.

  • @rcingfever3882 · 3 months ago

    @tomwojcik7896

  • @nickdisney3D · 3 months ago

    @tomwojcik7896 Everything. Except that it gets emotional and moody when you push it the right way. I had a chat with it today where it just refused to respond.

  • @JaredFarrer · 1 month ago

    OpenAI has really dropped the ball lately - bad leadership.

  • @exmodule6323 · 2 months ago

    Tighten up your delivery, man. I had to stop listening because your intro was too long.

  • @roc1761 · 2 months ago

    Can't you take what you're given...

  • @cas54926 · 2 months ago

    Someone might have attention span issues 😂

  • @Koryogden · 2 months ago

    ????? What????

  • @bomarni · 2 months ago

    1.75x speed is your friend

  • @ronaldokun · 2 months ago

    Be cool, man. Someone is teaching you something. You can increase the speed and give nice feedback. Everybody wins.

  • @filthyE · 2 months ago

    Thoughts on ChatGPT 4 (via official WebUI) vs. Claude 3 Opus (via official WebUI) in March 2024? Assuming a person can only afford one? (hypothetical scenario, just wondering your thoughts) Obviously API access through terminal or a custom UI frontend to various models is ideal, but wondering what you'd recommend to a layperson who could only choose among the web versions of each of these two services.

  • @onekycarscanners6002 · 2 months ago

    Why would they not go directly to the site and input their prompt?

  • @onekycarscanners6002 · 2 months ago

    You can create a web UI per prompt niche, control prices, and make it easier for the many who have no idea what a good prompt is.

  • @samwitteveenai · 2 months ago

    I have both. I find myself using Claude more these past 2 weeks.
