How Good Is Llama-3 for RAG, Routing, and Function Calling?

Science & Technology

How good is Llama-3 for RAG, query routing, and function calling? We compare the capabilities of both the 8B and 70B models on these tasks, using the Groq API to access them.
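For reference, a minimal sketch of calling both models through Groq's Python client (the model IDs and prompt are illustrative; check Groq's model list for the current names):

# pip install groq
import os
from groq import Groq

# Assumes GROQ_API_KEY is set in the environment.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

# Groq's Llama-3 model IDs at the time of recording (illustrative).
for model in ["llama3-8b-8192", "llama3-70b-8192"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Explain RAG in one sentence."}],
        temperature=0,
    )
    print(model, "->", response.choices[0].message.content)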
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become a Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Sign up for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
Notebooks
RAG, Query Routing: tinyurl.com/3s6jzmuw
Function Calling: tinyurl.com/4299fjn5
TIMESTAMPS:
[00:00] Llama-3 Beyond Benchmarks
[00:35] Setting up RAG with LlamaIndex (see the sketch after this list)
[05:15] Query Routing
[07:31] Query Routing (continued)
[10:35] Function Calling [Tool Usage] with Llama-3
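A rough sketch of what the RAG and query-routing chapters set up, using LlamaIndex with Llama-3 served by Groq (package layout per LlamaIndex v0.10+; the data directory, embedding model, and tool descriptions are placeholders, not the notebook's exact code):

# pip install llama-index llama-index-llms-groq llama-index-embeddings-huggingface
from llama_index.core import Settings, SimpleDirectoryReader, SummaryIndex, VectorStoreIndex
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.groq import Groq

# Llama-3 on Groq as the LLM; a small local model for embeddings.
Settings.llm = Groq(model="llama3-70b-8192")
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

docs = SimpleDirectoryReader("data").load_data()

# Two query engines over the same documents: summaries vs. specific facts.
summary_tool = QueryEngineTool.from_defaults(
    query_engine=SummaryIndex.from_documents(docs).as_query_engine(),
    description="Useful for summarization questions about the documents.",
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=VectorStoreIndex.from_documents(docs).as_query_engine(),
    description="Useful for retrieving specific facts from the documents.",
)

# The router has the LLM pick the right engine for each query.
router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[summary_tool, vector_tool],
)
print(router.query("Give me a high-level summary of the documents."))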
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...

Comments: 15

  • @engineerprompt · a month ago

    If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

  • @shameekm2146 · 2 months ago

    Thank you, bro. Just today I switched the LLM in my RAG setup to Llama-3 8B. It is performing really well.

  • @johnkintree763 · 2 months ago

    Excellent presentation.

  • @engineerprompt · 2 months ago

    Thank you.

  • @1981jasonkwan · 2 months ago

    I found that Llama-3-70B from Groq does not do as well on the test RAG task I ran versus a local version, so they might have quantized it heavily on Groq.

  • @engineerprompt · 2 months ago

    I have seen people saying that. That might be the case.

  • @mchl_mchl · 2 months ago

    Would love to see a reliable way to use function calling with a completely local model. I saw a fine-tuned model on HF designed for function calling, but users said it had issues. Has anyone done this locally, relatively reliably?

  • @engineerprompt · 2 months ago

    Want to learn RAG beyond the basics? Make sure to sign up here: tally.so/r/3y9bb0

  • @csowm5je · 2 months ago

    I get "No module named 'packaging'"; it does not work on Windows or WSL.

  • @Embassy_of_Jupiter · 2 months ago

    Function calling without Groq would be cool. We are looking to self-host 70B with OpenAI-compatible functions/tools. So far there is nothing promising except Trelis' models.
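(For context: servers such as vLLM or llama-cpp-python can expose an OpenAI-compatible endpoint, so the stock openai client can at least send tool schemas to a self-hosted Llama-3; how reliably the model fills them in is the open question. A sketch, with placeholder URL, model name, and a hypothetical get_weather function:)

from openai import OpenAI

# Point the standard client at a self-hosted OpenAI-compatible server (placeholder URL).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="llama-3-70b-instruct",  # whatever name the server registers
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)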

  • @_SimpleSam · 2 months ago

    Use grammars and any model can do function calling. You just need to struggle to get the BNF grammar perfect. Try to hard-code as many characters as possible; it improves the quality of the output. You'll quickly realize how inefficient the OpenAI JSON-style format is, and you'll go down the parsing rabbit hole, trying YAML, TOML, etc. Good luck!
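(For context: the grammar approach described above, sketched with llama-cpp-python's GBNF support. The model path is a placeholder, and this toy grammar only forces a fixed {"name": ..., "arguments": ...} shape; real function-call grammars get considerably hairier:)

from llama_cpp import Llama, LlamaGrammar

# Toy GBNF grammar constraining output to {"name": "...", "arguments": "..."}.
gbnf = r'''
root ::= "{" ws "\"name\":" ws str "," ws "\"arguments\":" ws str ws "}"
str  ::= "\"" [a-zA-Z0-9_ .,-]* "\""
ws   ::= [ \t\n]*
'''
grammar = LlamaGrammar.from_string(gbnf)

llm = Llama(model_path="llama-3-8b-instruct.Q4_K_M.gguf")  # placeholder path
out = llm(
    "Call a function to get the weather in Paris. Respond with JSON only.",
    grammar=grammar,
    max_tokens=128,
)
print(out["choices"][0]["text"])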

  • @engineerprompt · 2 months ago

    I have seen a few fine-tunes for function calling. Will cover some of them soon.

  • @Content_Supermarket · a month ago

    Can I run an LLM on a laptop with 4 GB of RAM and a 16-bit processor? Please tell me.

  • @premusic242 · a month ago

    You can run one, but an open-source model running locally will not perform well. Instead of running Llama-3 locally with Ollama, use the Groq API; it is insanely fast.

  • @looseman · 2 months ago

    Fully uncensored?
