How Good is Phi-3-Mini for RAG, Routing, Agents
Science & Technology
Microsoft just released their Phi-3 family of models that are SOTA for their weight class. But are they good for RAG and agent use-cases?
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
tinyurl.com/vju67pj8
TIMESTAMPS:
[00:00] Tiny but Mighty
[00:16] Beyond Benchmarks
[00:47] Building RAG with Phi-3-mini
[07:31] Query Routing
[15:40] Using Phi-3 for Agents
[18:24] Mathematical agents
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
Comments: 27
Want to learn RAG beyond the basics? Make sure to sign up here: tally.so/r/3y9bb0
@VerdonTrigance
2 months ago
You talked about Phi-3-small, but it's not released yet. Later in the video you download Phi-3-mini, which is smaller than small.
@VerdonTrigance
2 months ago
I'm actually waiting for Phi-3-small with 128k context length for talking to documents that are a mix of different types, like docx, xlsx, txt, and Python scripts. They are all relevant and I want to put them all in a RAG, though maybe routing would help with that too. But I need a really big context for that. Or I should somehow train it. The only option I know for training on a big document set is to ask another model to generate questions and then ask it to answer those questions. Anyway, any of these would be helpful.
It's not making any mistake; Meta is the real open AI. 😂😂😂
Brilliant content! I think it is more interesting to test a model by looking at practical applications rather than asking a series of questions that could be in the training data. You should consider making a series of videos in this format.
Excellent demo of Phi-3's RAG abilities. At the same time as we seek a 3-billion-parameter language model that runs well on a smartphone with at least 6 GB of RAM, we will also want a speech recognition model, and a dynamic graph neural network that can merge with a vector store to provide long-term memory.
@krisvq
2 months ago
Was thinking the same. We expect a lot from a small model.
These tests are really great! Please recommend the best LLMs for these purposes, as of the time of your tests.
I enjoyed your calm mellow speaking tone. Nice contrast to pretty much all of YT. Subscribed!
@tvwithtiffani
2 months ago
Question: Is there a local model that you would recommend for RAG? I've been building RAG systems since GPT-3 (not 3.5), and I've yet to find a local model that comes close to simply understanding what's being asked at a given point in the conversation, extracting relevant info from stuffed context, and providing a response. I could even have GPT-3 (pre-ChatGPT) quote the sentence from which it got its answer. My experience so far locally is that all of the moving parts outside of the local model have to be damn near 100% perfect to work correctly, and even then the model will muck it up somehow every now and again, to the point it's unreliable. Which models do you recommend for this specific use-case?
@engineerprompt
2 months ago
I personally like the Zephyr models if you are looking for smaller LLMs. For bigger local LLMs, Llama-3 70B is good (in my use cases), and also Command R+.
Nearly perfect! Somehow my agent does not use tools for all questions, including the ones about Meta, but the rest works :)
I don't get the concept of multiple vector stores. How do they differ? Do they store different documents? Use different embedding models? Or maybe the chunking strategies are different?
@engineerprompt
2 months ago
In this case, each store contains different docs. Imagine you have different knowledge bases for different departments and you want to retrieve info from the relevant department based only on the query.
@jaysonp9426
2 months ago
Couldn't you just add a different metadata filter, though? Is there a computational advantage to multiple vector stores?
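The routing idea discussed in this thread can be sketched in plain Python. This is a minimal sketch, assuming hypothetical department stores and a keyword-overlap scorer standing in for the real embedding- or LLM-based router used in the video:

```python
# Minimal sketch of query routing across multiple "vector stores".
# Each store holds a different department's documents; the router picks
# the store whose description best matches the query. In a real system
# the scoring would use embeddings or an LLM call, not keyword overlap.
# All store names, descriptions, and docs below are hypothetical.

STORES = {
    "hr": {
        "description": "vacation policy payroll benefits employees",
        "docs": ["Employees accrue 20 vacation days per year."],
    },
    "engineering": {
        "description": "deployment api architecture code services",
        "docs": ["Services are deployed via the CI pipeline."],
    },
}

def route(query: str) -> str:
    """Pick the store whose description shares the most words with the query."""
    words = set(query.lower().split())
    return max(
        STORES,
        key=lambda name: len(words & set(STORES[name]["description"].split())),
    )

def answer(query: str) -> str:
    store = route(query)
    # Stand-in for retrieval + generation: just return the top document.
    return STORES[store]["docs"][0]

print(route("How many vacation days do employees get?"))  # hr
```

On the metadata-filter question: a single store with a department filter is computationally equivalent for small corpora; separate stores mainly help when each department needs its own embedding model, chunking strategy, or access control.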
19:33 - The model should only decide if something is a mathematical question, and then the script should decide that it has to use a tool.
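The division of labor proposed above, where the model only classifies and the script dispatches the tool, could look roughly like this. The keyword classifier and calculator below are hypothetical stand-ins for the LLM call and the agent's real tool:

```python
import re

def is_math_question(text: str) -> bool:
    """Stand-in for the LLM: classify whether this is a mathematical question."""
    return bool(re.search(r"\d+\s*[-+*/]\s*\d+", text))

def calculator_tool(text: str) -> str:
    """Deterministic tool: evaluate the first simple binary arithmetic expression."""
    m = re.search(r"(\d+)\s*([-+*/])\s*(\d+)", text)
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    result = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
    return str(result)

def handle(query: str) -> str:
    # The script, not the model, decides to invoke the tool once the
    # model has classified the query as mathematical.
    if is_math_question(query):
        return calculator_tool(query)
    return "LLM answers directly"

print(handle("What is 12 * 7?"))  # 84
```

Keeping tool dispatch in the script makes the behavior deterministic: a small model like Phi-3-mini only has to get the binary classification right, not remember to emit a tool call.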
When running these, what caliber of computing power are we talking? Any mid- to high-end laptop, or a mid- to high-end PC rig with a good graphics card?
@engineerprompt
1 month ago
For this model, you will be able to run it on 6-8 GB of VRAM. Potentially even on CPU.
Perfect content, but the moving camera distracted me and I wasn't able to focus. I hope I can find similar content with a normal static camera view like your other videos.
Bro, please fix the sound level. It's too quiet. I'm at 100 and can barely hear anything, while all other videos are fine at 30.
Can I train it on document data or not?
@engineerprompt
2 months ago
Yes, you can fine-tune it.
Your code looks so difficult mate. But thanks 🎉