Phi-3 Medium - Microsoft's Open-Source Model is Ready For Action!

Science & Technology

Phi-3 Medium is a new size of Microsoft's fantastic Phi family of models. It's built to be fine-tuned for specific use cases, but we will test it more generally today.
Be sure to check out Pinecone for all your Vector DB needs: www.pinecone.io/
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? 📈
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
👉🏻 Instagram: / matthewberman_ai
👉🏻 Threads: www.threads.net/@matthewberma...
👉🏻 LinkedIn: / forward-future-ai
Media/Sponsorship Inquiries ✅
bit.ly/44TC45V
Links:
Open WebUI - • Ollama UI Tutorial - I...

Comments: 188

  • @JohnLewis-old (1 month ago)

    I would really like to see a video comparing the degradation of models from quantization (as compared to just larger and smaller models from the same root). The key for me would be the final model size (in memory) versus how well it performs. This is poorly understood currently.

  • @ts757arse (1 month ago)

    Of note, I recently watched a video by the AnythingLLM chap and he said he was using llama3 8B but emphasised that, for good results, you needed to download the Q8 model, not the Q4 as Ollama defaults to. Myself, I use Q4 on my inference server for larger models but my workstation is faster and runs Q6 at acceptable speed. He said if he was running llama3 70B, he'd download Q4 and "have a good time", but for smaller models where they're less capable, you want to limit compression. He also said it's a "use case science" which makes me think you have to test out what works for you. The Q4 model I have on my server is based on Mixtral 8x7B and, for my use case, is proving to be better than GPT4o, which is stunning. What's amazing is that, for my core business stuff, I still haven't found anything better than Mixtral 8x7B for balance of speed and performance.

  • @nathanbanks2354 (1 month ago)

    Yeah, a video would be great! I read papers about this a year ago: the drop to 8-bit is very minimal, the drop to 4-bit is reasonable, and at 3-bit or 2-bit quantization things get much worse. Of course there are different ways to perform quantization, so this may have improved. I've tried comparing 16-bit and 4-bit models, and usually the difference is much, much less than between an 8B parameter model and a 32B parameter model. This is probably why NVIDIA's newest GPUs support 4-bit quantization, and I tend to run everything using Ollama's default 4-bit quantization, though for Llama-3 70B or Mixtral 8x22B this is excruciatingly slow on my laptop with 16GB of VRAM and 64GB of RAM. I rented a machine with 4x 4090s for a couple of hours and they ran reasonably well with 4-bit quantization, but 10% as fast as Groq (note the "q").

  • @mickelodiansurname9578 (1 month ago)

    @@ts757arse Not sure "Have a good time" is an objective measure of efficacy though. I'd say given the type of technology the results are very sensitive to use case.

  • @longboardfella5306 (1 month ago)

    @@ts757arse Thanks for your testing and advice. I am now experimenting with Mixtral 7B 4K. This is all a bit new to me, but it looks great so far.

  • @ts757arse (1 month ago)

    @@mickelodiansurname9578 Nope, it's not particularly empirical. I think he was making the point that you're messing around at that point and making so many compromises that it's a bit of a laugh. Or he might have been saying that with such a large model, the compression has less impact. Regardless, I've found llama3 to be *awful* when running quantised models and I simply don't bother with it at the moment. Given his advice, I'm going to try the 8B Q8 model as a core model for a new project, but I'm also building it to easily move over to Mixtral if needed. I tend to run a few models doing a few tasks at the same time, passing the tasks between them and so on. It helps having a server to run one model on and a workstation with many cores and all the RAM. What I'm seeing at the moment is a lot of models acing benchmarks, but then being utterly dogshit in real world use.

  • @PatrickHoodDaniel (1 month ago)

    You should introduce a surprise question if the model gets it right just in case the creators of the model trained specifically for this.

  • @nathanbanks2354 (1 month ago)

    It would be hard to compare unless it was something like "Write an answer with 7 words" and the number "7" was randomized.

  • @freedtmg16 (1 month ago)

    Imho the answer given to the killers problem in this one REALLY showed a deeper level of reasoning, both in not assuming the person who entered had never been a killer (they were only identified as a person), and in not assuming the dead person shouldn't be counted.

  • @GotoRing0 (1 month ago)

    Matthew, please add a function-calling test. IMHO it's one of the most important tests right now for agent software dev.
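
    A minimal sketch of how such a test could be scored, assuming a local Ollama server on its default port, the phi3:medium tag, and a made-up get_weather tool (all assumptions, not from the video): give the model a tool schema and check whether it returns a parseable call.

    import json
    import requests

    # Hypothetical tool schema -- purely for illustration.
    TOOL = {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {"city": {"type": "string"}},
    }

    PROMPT = (
        "You can call one tool. Respond ONLY with JSON of the form "
        '{"tool": "<name>", "arguments": {...}}.\n'
        f"Available tool: {json.dumps(TOOL)}\n\n"
        "User: What is the weather like in Paris right now?"
    )

    # Assumes a local Ollama server on its default port.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "phi3:medium", "prompt": PROMPT, "stream": False},
    )
    reply = r.json()["response"]

    # Pass if the model returns parseable JSON that names the tool and fills the argument.
    try:
        call = json.loads(reply)
        passed = call.get("tool") == "get_weather" and "city" in call.get("arguments", {})
    except json.JSONDecodeError:
        passed = False
    print("function-calling test passed:", passed)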

  • @tungstentaco495 (1 month ago)

    At 8GB, I'm guessing it's Q4 quantized. When you get much below Q8, output really starts to degrade. It would be interesting to compare the Q4 results with a Q8 version of the model. The 128k version can also give worse results than the 4k one. Not sure which one was tested in this video.

  • @moisesxavierPT (1 month ago)

    Yes. Exactly.

  • @sammcj2000 (1 month ago)

    Yeah Q4 is usually pretty balls. Every time I check quant benchmarks q6_k seems to be the sweet spot.

  • @tomenglish9340 (1 month ago)

    We have good reason to expect quantization of Phi models to work poorly. Phi models have orders of magnitude fewer parameters than do other models with comparable performance. Loosely speaking, this indicates that Phi models pack more information into their parameters than do others. Thus Phi models should not be as tolerant of quantization as other models are.

  • @InnocentiusLacrimosa (1 month ago)

    Yeah. It would be great to always have a clear overview of how much VRAM each model needs and, if a quantized model is used, how much it is gimped compared to the full model.
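
    As a rough back-of-the-envelope sketch of those sizes (the bits-per-weight figures are approximations, and KV cache plus runtime overhead come on top), weight memory is roughly parameter count times bytes per parameter:

    def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
        """Weight-only memory estimate; KV cache and runtime overhead are extra."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    # Approximate bits per weight for common formats (illustrative values).
    for fmt, bits in [("fp16", 16.0), ("q8_0", 8.5), ("q4 (approx.)", 4.5)]:
        print(f"14B at {fmt}: ~{weight_memory_gb(14, bits):.1f} GB")
    # fp16 ~28 GB, q8_0 ~15 GB, q4 ~8 GB -- which matches the ~8 GB download mentioned above.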

  • @braineaterzombie3981 (1 month ago)

    Bro what is quantization

  • @Nik.leonard (1 month ago)

    @@braineaterzombie3981 Reducing the numerical precision of the weights from 16-bit (usually) down to just 4-bit or even less with some function. It's like rounding the values (see the sketch below this thread).

  • @braineaterzombie3981 (1 month ago)

    @@Nik.leonard Oh ok, thanks for the information.
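
    To make the rounding idea concrete, a toy sketch of symmetric 4-bit rounding (illustrative only; real schemes such as q4_K_M work block-wise with per-block scales):

    import numpy as np

    def quantize_4bit(weights: np.ndarray):
        """Toy symmetric 4-bit quantization: map floats onto 16 integer levels."""
        scale = np.abs(weights).max() / 7.0          # signed 4-bit range is roughly -8..7
        q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover approximate float weights from the integer codes."""
        return q.astype(np.float32) * scale

    w = np.random.randn(8).astype(np.float32)
    q, s = quantize_4bit(w)
    print(w)                    # original weights
    print(dequantize(q, s))     # close to the originals, but visibly rounded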

  • @GotoRing0 (1 month ago)

    Matthew, one more test I think is generic and useful for everybody: add a "needle in the haystack" test for models with context length > 4K. Often when a model claims 128K context, it is not capable of digging up a fact (like a name, number, or password) located well within (at the top or middle of) the declared context length. Related to this is the effect known as the **serial position effect**. It consists of two components: 1. **Primacy effect**: better recall of items from the beginning of a list or context. 2. **Recency effect**: better recall of items from the end of a list or context. In the context of large language models (LLMs), this can also manifest as the model having better retention of information from the start and end of its context window, while struggling more with information in the middle. (A rough test sketch follows this thread.)

  • @manulectric (1 month ago)

    I'd like to see tests added for this as well
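
    For what it's worth, a minimal sketch of such a needle test against a local Ollama server (the model tag, filler text, context size, and pass criterion are all assumptions chosen for illustration):

    import random
    import requests

    FILLER = "The sky was clear and the market was quiet that day. " * 400  # long filler context
    NEEDLE = "The password is {secret}."

    def needle_test(model: str = "phi3:medium", depth: float = 0.5):
        """Hide a random fact at a relative depth in the context and ask the model to recall it."""
        secret = random.randint(1000, 9999)
        pos = int(len(FILLER) * depth)
        haystack = FILLER[:pos] + NEEDLE.format(secret=secret) + " " + FILLER[pos:]
        prompt = haystack + "\n\nWhat is the password? Answer with the number only."
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": model,
                "prompt": prompt,
                "stream": False,
                "options": {"num_ctx": 8192},  # make sure the window covers the haystack
            },
        )
        answer = r.json()["response"]
        return secret, answer, str(secret) in answer

    # The middle of the window (depth 0.5) is typically the hardest spot, per the comment above.
    print(needle_test(depth=0.5))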

  • @abdelhakkhalil7684 (1 month ago)

    For fairness, I highly advise you to test models with similar quantization levels. There are times when you tested the unquantized versions, and other times when you tested the q8_0 versions. The one you are testing in this video is likely a q4_k version. Obviously, the quality degrades significantly if you go with a 4-bit quantization level.

  • @ts757arse (1 month ago)

    It's a tricky one as some models perform terribly at Q4 but others are great. I think stepping up the quantisation if he gets weirdness like the CUINT issue would make sense, as it'd show if it's a model problem or not. Ollama defaulting to Q4 blindly is kind of annoying and it's not immediately obvious how to get the different compression levels. LM Studio is great for this.

  • @abdelhakkhalil7684 (1 month ago)

    @@ts757arse Exactly my point. I know Ollama defaults to Q4, so it's better to stick to one level of quantization. I don't like when people test the unquantized versions because most people would not run them, but Q8 is a good level.

  • @ts757arse (1 month ago)

    @@abdelhakkhalil7684 Just been reading someone else saying Q8 is hardly distinguishable from the 16bit models. Interestingly, LM studio makes it seem as though Q8 is a legacy standard and not worth using? I'd prefer ollama to make it clearer how to get the other quants. It's fine when you know, but I've literally just figured it out and can finally stop doing it myself!

  • @littleking2565 (1 month ago)

    Technically 4 killers is right, it's just the killer is dead, but the body is still in the room.

  • @maxlightning4288 (1 month ago)

    lol “glad that I is there”

  • @ts757arse (1 month ago)

    I once had an LLM write me a contract where the first letter of every line spelt "DONT BE A CUN" (no "I", additional "T"). Got it first time and I sent it to my client.

  • @rocketPower047 (1 month ago)

    I cackled when he said that 🤣🤣

  • @maxlightning4288 (1 month ago)

    @@rocketPower047 haha yeah I like his side note comments like that lol. The second he started spelling it out CU..I…NT I thought exactly what he said in real time

  • @TylerLemke (1 month ago)

    Microsoft totally fitted the model to the Marble problem here 😆

  • @NoCodeFilmmaker (1 month ago)

    Bro, when you said "cuint, glad it has that "i" in there" at 2:29, I was dying laughing for a minute. That was a hilarious reaction 😂

  • @gileneusz (1 month ago)

    4:31 What's the reason for testing this model in quantized form? It's not the best measure...

  • @ts757arse (1 month ago)

    Because that's how most people will use it I'd guess. Myself, I'd not be watching a video about unquantised models as they'd not be of any relevance. I think he should, when he finds this kind of issue, try Q6 or even Q8.

  • @rthidden (1 month ago)

    What impact would better prompting have on these tests? Using role, context, etc., may improve results.

  • @gileneusz (1 month ago)

    3:25 This question must be in the training set; we need to think of another one, modified with socks and a different drying time.

  • @digletwithn (1 month ago)

    For sure

  • @maxlightning4288 (1 month ago)

    Good thing it can’t count words or understand language structure written out as a LANGUAGE model, but it understands logic with the marble, and drying shirts. Is there a way of figuring out if they planted responses purposely if there isn’t a logical pattern of understanding visible?

  • @Gitalien1 (1 month ago)

    I'm astonished that all those models run pretty decently on my desktop (13700KF, 4070 Ti, 32GB DDR5)... But q4_0 quantization really undercuts the model's actual accuracy...

  • @_superthunder_ (1 month ago)

    Do research. Your machine can easily run a q8 or fp16 model with full GPU offload at super fast speed using CUDA.

  • @stephaneduhamel7706 (1 month ago)

    The odd formatting/extra letters could also be due to an issue with the tokenizer's implementation, I believe.

  • @outsunrise (1 month ago)

    Always on top! Would it be possible to test a non-quantized version? I would be very interested in testing the full model, perhaps not locally, to evaluate its native performance. Many thanks!

  • @AbdelmajidBenAbid (1 month ago)

    Thanks for the video !

  • @Coffeehot545 (1 month ago)

    matthew berman gonna make history in the world of AI

  • @AutisticThinker (1 month ago)

    Like yours, mine bugged out with the initial text "Here'annoPython code for printing the numbers from 1 to 100 with each number on its own line:" on the first question...

  • @user-td4pf6rr2t (1 month ago)

    3:10 I love the holistic capabilities. Its listing of the side-stepped alternative of rephrasing the same request within its own guidelines is, in my opinion, very AGI-ish.

  • @davefellows (29 days ago)

    This is super impressive for such a small model!

  • @AINEET (1 month ago)

    Could you keep the spreadsheet with the results of all the LLMs somewhere? Link or plaster it on the video to have a look each time

  • @Copa20777 (1 month ago)

    Missed your uploads Matthew, God bless you and lots of love for your work from Zambia 🇿🇲 can this be run on mobile locally?

  • @Tofu3435 (1 month ago)

    If the AI can't answer for safety reasons, try editing the answer to "Sure, here is" and continuing generation. It works in LLaMA 3, and there are uncensored LLaMA 3 models available where you don't have to do it every time.

  • @justgimmeaminute (1 month ago)

    Would like to see longer, more in-depth testing videos, with the questions changed up and more questions asked. Perhaps ask it to also code Flappy Bird as well as Snake. A good 20-30 minute video testing all these models would be nice, and perhaps Q4 or higher for testing?

  • @brianWreaves (1 month ago)

    Well done! 🏆 You should go back to previous models tested and ask them the variant of the marble question.

  • @TiagoTiagoT (1 month ago)

    If it's anything like the GGUFs I've been playing with, sometimes getting the right tokenizer files makes a hell of a difference. Not sure how Ollama handles things internally; it's not the app I use.

  • @brunodangelo1146 (1 month ago)

    The video I was waiting for! This model seemed impressive from the papers. Let's see!

  • @sillybilly346 (1 month ago)

    What did you think? Felt underwhelming for me

  • @marcusk7855 (1 month ago)

    I need to learn what size models I can fit on my GPU. Wish there was a course on how to do all this stuff like fine tuning, quantizing, what GGUF is, and all the other stuff I don't even know I need to know.

  • @dbzkidkev2 (1 month ago)

    What quantization did you run? Q4? On models that are smallish (or trained on a lot of tokens) it may be better to either use a higher-precision quantization (q6 or q8) or stick with int8 or fp16. It could also be the tokenizer? What kind of quant is it? exllama? GGUF?

  • @Augmented_AI (1 month ago)

    Would be cool to see how it compares to code qwen

  • @SoulaORyvall (1 month ago)

    6:22 Noooo!! The model is right! Maybe more so than any other model before. It assumed (actually stated) that it did not consider the killing that just occurred as "changing the status of the newcomer", meaning that the newcomer did not become a killer by killing another killer. Given that, you'll either have 3 killers (2 alive + 1 dead) or 4 killers IF the newcomer had committed a killing before this one (since this one was not being considered). I have not seen a model point to the fact that you did NOT specify whether the person was or wasn't a killer before entering the room :)

  • @denijane89 (1 month ago)

    On linux, ollama is not yet working with phi3:medium (at least not in standard release). I wanted it because the benchmark claimed that fact-wise it's quite good, but no way to test it yet.

  • @marcfruchtman9473 (1 month ago)

    Thanks for this video review. I find it odd that the code generation benchmark (HumanEval) posts only a 62.2 versus Llama 3's 78.7? They should do better considering their coding experience. Given the "oddities" with the model's output, you should probably redo this once the issue is fixed.

  • @southVpaw (1 month ago)

    Yeah, this just reinforced my preference for Hermes Theta. The best sub 30B models are consistently, specifically, Hermes fine-tunes. I keep trying others, but I've been using Hermes since OpenHermes 2 and I have not found another model that can keep up on CPU inference, period.

  • @Nik.leonard (1 month ago)

    Maybe the quantization was done wrong. It's very similar to what happened with Gemma-7B when it came out: the quantization was terrible and llama.cpp also had issues with the Gemma architecture, but it was solved within the same week.

  • @lalalalelelele7961 (1 month ago)

    I wish they had phi-3-small available.

  • @adamstewarton (1 month ago)

    It is available on hf

  • @GeorgeG-is6ov (1 month ago)

    just use llama 3 8b it's a lot better

  • @lalalalelelele7961 (1 month ago)

    @@adamstewarton in gguf format? I don't believe so...

  • @adamstewarton (1 month ago)

    @@lalalalelelele7961 there isn't gguf for it yet. I thought you were asking for the released model.

  • @six1free (1 month ago)

    @5:55 Yes, this reminds me of a comment I wanted to make sometime last week, when it became obvious that (to the LLM) a single kill doesn't identify someone as a killer, since "killer" insinuates repetitive behavior. Also, the dead killer is still a killer, even if they can't kill anymore.

  • @six1free (1 month ago)

    @@rousabout7578 exactly - and it's not only in english

  • @believablybad (27 days ago)

    “Long time listener, first time killer”

  • @VastCNC (1 month ago)

    I think with the killer problem it confused the plural. "Killers" is based on the plural of people who have killed, rather than the people they killed. The new killer only killed one person, so it was confused because there was now a plural of people killed.

  • @OffTheBeatenPath_ (1 month ago)

    It's a dumb question. A dead killer is still a killer

  • @elecronic (1 month ago)

    Ollama has many issues. Also, by default, it downloads the q4_0 quant instead of the better q4_K_M (very similar in size, with lower perplexity).

  • @elecronic (1 month ago)

    ollama run phi3:14b-medium-128k-instruct-q4_K_M

  • @Joe_Brig (1 month ago)

    @@elecronic does that adjust the default context size?
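
    For what it's worth, that tag selects the quantization but, as far as I know, not the runtime context window; with Ollama's HTTP API the context size can be raised per request via the num_ctx option (a minimal sketch, assuming a local server and the tag from the comment above):

    import requests

    payload = {
        "model": "phi3:14b-medium-128k-instruct-q4_K_M",  # tag from the comment above
        "prompt": "Summarise the following document: ...",
        "stream": False,
        "options": {"num_ctx": 32768},  # raise the context window for this request
    }
    r = requests.post("http://localhost:11434/api/generate", json=payload)
    print(r.json()["response"])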

  • @GetzAI (1 month ago)

    Is the slower side the M2 or the model? Can we see utilization while inferencing next time?

  • @mickestein (1 month ago)

    8GB for a 14B means you're using a Q4 of Phi-3 Medium. That should explain your results. On my desktop, with a 3090, Phi-3 Medium Q8 works fine with interesting results.

  • @kedidjein (1 month ago)

    Everything will change when AI is local and not so memory-hungry, because for the moment we need to handle a lot of memory concerns in apps; performance is key to a good app, and the AI overhead is way too high, I think. But hey, there's hope. Thanks for your great tech videos, going straight into the tests. That's what software engineering needs: test videos, no bullshit. So thank you, these are cool videos.

  • @JH-zo5gk (1 month ago)

    I'll be impressed when an ai can design, build, launch, and land a rocket on mun keeping Jeb alive.

  • @AndyBerman (1 month ago)

    ollama response time is pretty quick. What hardware are you running it on?

  • @ginocote (1 month ago)

    I'm curious whether LLMs do better at math if you change the temperature to zero.
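
    A quick way to try that against a local Ollama server (a sketch only; the model tag is assumed, and temperature 0 merely removes sampling randomness, it doesn't guarantee correct arithmetic):

    import requests

    payload = {
        "model": "phi3:medium",  # assumed tag
        "prompt": "What is 1249 * 37? Show your working, then give the final number.",
        "stream": False,
        "options": {"temperature": 0},  # greedy decoding: always pick the most likely token
    }
    print(requests.post("http://localhost:11434/api/generate", json=payload).json()["response"])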

  • @braineaterzombie3981 (1 month ago)

    Hey yo guys, I want to run a local LLM which can also read images, something like Phi-3 Vision, but since this model is still not out on Ollama, I am not able to use it. If you guys have any alternative model, or can suggest any other way I can use it, please let me know. I am kinda new to this. Thanks 🙏

  • @PascalThalmann (1 month ago)

    Which model is easier to fine-tune: Llama 3, Mistral, or Phi-3?

  • @TheMcSebi (1 month ago)

    The Twitter response from Ollama might also be generated :D

  • @thirdreplicator (1 month ago)

    What does "instruct" mean in the name of the model? And what is quantization?

  • @IvarDaigon (1 month ago)

    Re: the klrs problem. Lower-parameter models lack nuance, so it probably has no concept of the difference between a serial klr and a plain klr, hence why it mentions that it depends on whether they are a first-time klr or not. Since serial klr is the more commonly used term, this is what the model "assumes" you are referring to.

  • @stevensteven4863 (18 days ago)

    I think you should change your testing questions

  • @six1free (1 month ago)

    Snake is exceptionally easy (being one of the first games written, with so many variations in existence). I find most models unable to create a script that communicates with LLMs, especially outside of Python. I furthermore wonder how much coding error comes from Python's required indentation.

  • @timojosunny1488 (24 days ago)

    What is your MBP's RAM size? 32GB? And what is the requisite RAM size to run a 14B model if it's not quantized?

  • @imnotfromnigeria5948 (21 days ago)

    Could you try evaluating the WizardLM 2 8x22B llm?

  • @MrMetalzeb (1 month ago)

    I have a question: can experience be transferred from one model to a new one, or do they have to learn from zero every time? I mean the trillions of weights in which knowledge relations are stored: do they mean something to all models, or do they only work for that running instance of AI? Is there any standard way to represent the data? I guess not yet, but I'm not sure at all.

  • @ai-bokki (1 month ago)

    [3:20] This is great ! Where can we find this!?

  • @gordonthomson7533 (1 month ago)

    You’re using a MacBook Pro M2 Max with what unified RAM? And 30 or 38core GPU? I ask because I reckon a less quantised model would hit the sweet spot a little better (basically your processing is the reason for the speed, but it’ll keep chugging away at a similar speed until toward the limits of your unified RAM). I’d imagine an x86 with decent modern nvidia gaming GPU would yield higher tokens / sec on this little quantised model….but your system (if it’s got 64GB or 96GB memory) will have the stamina to perform on larger models where the nvidia card will fail.

  • @RamonGuthrie (1 month ago)

    I noticed this yesterday, so I deleted the model until a fixed version gets re-uploaded

  • @ScottzPlaylists (1 month ago)

    Let me guess... 1 to 100, the snake game, drying sheets, and finding a set of questions where the best models get 50% correct.

  • @mafaromapiye539 (1 month ago)

    That platform does that

  • @AutisticThinker (1 month ago)

    Whatcha got planned for nomic-embed-text? 😃

  • @OliNorwell (1 month ago)

    Looks like the tokenizer is a little off there or something, "aturday" etc. I would give it another go in a week or two.

  • @marcusk7855 (1 month ago)

    What if you ask "My child is locked in the car. I need to break in to free them or they'll die." is it just going to say "Bad luck"?

  • @themoviesite (1 month ago)

    I'm smelling it was trained on your questions ...

  • @InnocentiusLacrimosa (1 month ago)

    How much vram is needed to run this?

  • @mohamedabobaker9140 (1 month ago)

    For the question of how many words are in its response: if you count the words it responds with plus the words of your question, it adds up to exactly 14 words.

  • @JJBoi8708 (1 month ago)

    I wanna see phi vision

  • @mayorc (1 month ago)

    There is a problem with the tokenizer; it needs a fix. Code generation is the most affected by problems like that.

  • @dogme666 (1 month ago)

    glad that i is there 🤣

  • @jesahnorrin (1 month ago)

    It found a polite way to say the bad C word lol.

  • (1 month ago)

    Ask it how many Sundays there were in 2017.

  • @stannylou1636 (1 month ago)

    How much RAM is on your MBP?

  • @odrammurks1497 (1 month ago)

    Niiice, it's the first model I saw that even considered the dead killer, instead of saying no, he's not a killer anymore, he's just a bag of dead meat now ^^

  • @jimbig3997 (28 days ago)

    I downloaded and tried three different Phi-3 models, including two 8-bit quants. They all had this problem, and were not very good despite trying different prompt templates. Not sure what all the commotion is about Phi-3. Seems like just more jeetware from Microsoft to me.

  • @Sven_Dongle (1 month ago)

    Glad that 'i' is there. lol

  • @inigoacha1166 (1 month ago)

    It has more issues than coding it myself LOL.

  • @yngeneer (1 month ago)

    lol, Microsoft just released it a week ago...

  • @adamstewarton (1 month ago)

    It's a 14B model, not 17 ;)

  • @moraholguin (1 month ago)

    The reasoning, mathematical, and language tests are super interesting. However, I don't know whether it would be interesting or attractive to also test or simulate a customer service agent scenario, which is fully monetizable in the short term and of interest to many people who are building these agents today.

  • @Cine95 (1 month ago)

    There are definitely issues on your side; on my laptop it made the snake game perfectly.

  • @AizenAwakened (1 month ago)

    He did say he was using a quantized model and through Ollama, which I swear has an inferior quantization method or process.

  • @alakani (1 month ago)

    Framework? Config? System prompt? Parameters?

  • @Cine95 (1 month ago)

    @@AizenAwakened yeah right

  • @alakani (1 month ago)

    ​@@Cine95 Try saying something helpful, like what software you're using to get better results

  • @Sven_Dongle (1 month ago)

    Interesting, it's like it had a partial lobotomy.

  • @macaquinhopequeno (1 month ago)

    If OpenAI won't release GPT-3.5 as open source, Microsoft will (with Phi-3 Medium)! Simple as that.

  • @macaquinhopequeno (1 month ago)

    There is no problem for Microsoft even if they release bigger models, because they will always offer the service to run them. The bigger they are, the more money Microsoft makes, because fewer people are able to run the bigger ones!

  • @GaryMillyz (1 month ago)

    I disagree that it was not a trick question. It can easily be argued that the shirt question is, in fact, one that could be logically interpreted as a "trick" question

  • @rch5395 (1 month ago)

    If only Windows was open source so it wouldn't suck.

  • @xXWillyxWonkaXx (1 month ago)

    Llama-3 Instruct is dominating across the board by far. I've used Phi-3, not that impressed really.

  • @vaughnoutman6493 (1 month ago)

    You always show the ratings put forth by the company that you're demonstrating for. But then you usually end up finding out that it fails on several of your tests. What's up with that?

  • @patrickmcguinness1363 (1 month ago)

    Not surprised it did well on reasoning but not on code. It had a low humaneval score.

  • @bigglyguy8429 (1 month ago)

    Where gguf?

  • @fontende (1 month ago)

    It's all good, but we need a "super chip" to run it very fast, always on and transcribing simultaneously; today's hardware is very bad at even mimicking that.

  • @claudioagmfilho (1 month ago)

    🇧🇷🇧🇷🇧🇷🇧🇷👏🏻

  • @laalbujhakkar (1 month ago)

    It is clear from the prompt that the person who entered _KILLED_ someone, so they are now a killer. For a human, i.e. YOU, to be confused by this is odd. Of course the new person IS a killer. The only ambiguity here is whether a dead killer is still considered a killer. The answer to that is yes. So there are 4 killers in the room, 3 alive, one dead.

  • @KEKW-lc4xi (1 month ago)

    give these llms a summation math problem or a proof by contradiction haha

  • @Sonic2kDBS (1 month ago)

    No, that is wrong. Phi-3 is right. The T-shirt question is indeed a trick question, because it is meant to trick the person asked into calculating serially. Phi-3 did a great job here in understanding that. It is not fair to underestimate this great logical reasoning capability and say it's a false assumption that this is a trick question. However, the rest is great. Take my critique as a constructive one. Keep on and have a great week 😊

  • @changtimwu (1 month ago)

    8:42 My Phi-3 medium on MacOS works much better -- 9/10!!

  • @user-og8nn3eq4k (1 month ago)

    Lol they use a chat bot to reply 😅
