Model Output Comparison with GPT/Gemini/Mistral/Llama2 - PulseRead Followup

Science & Technology

Comparing the analysis skills and output of six language models, and assessing which models are best suited to different social media analysis tasks, as part of my PulseRead project.
Models of Interest:
GPT-4-Turbo (OpenAI)
GPT-3.5-Turbo (OpenAI)
Gemini Ultra (Google)
Gemini Pro (Google)
Mistral Medium (Mistral AI)
Llama 2 7B (Meta)
Chapters:
00:00 - Intro
00:37 - Refresher
01:21 - Model Overview
01:56 - Discussion Analysis
08:16 - Commonality Analysis
15:34 - Overall Report Analysis
26:48 - Sentiment Analysis
32:03 - Conclusion & Limitations

Comments: 2

  • @anaghavelliyatt3602
    3 months ago

    Yay!!!

  • @xspydazx
    1 month ago

    For me, Mistral is the best model! Its source code is quite simple, making it easy to adjust and add new components, and many models have been cloned from it with improvements to knowledge, roles, etc. You can mess with the config and generate a model of any size (smaller ones are easy to train for single tasks; larger ones are slower to train, but there is a method!). The 7B was good enough to perform any task. With the larger models I'm not even sure there is a true difference, and I have made a few Mistral MoE variants, etc. The smaller models are fun and perform well (not just for language modelling), even slotting into existing GAME_AI (too much fun). A local LLM (no external models) is an all-round good base to begin with!
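    For readers curious what "mess with the config and generate a model of any size" can look like in practice, here is a minimal sketch using the Hugging Face transformers MistralConfig and MistralForCausalLM classes. The specific dimensions are illustrative assumptions, not values from the video or the comment.

    ```python
    # Minimal sketch: building a scaled-down Mistral-style model from a custom
    # config with Hugging Face transformers. Dimensions below are illustrative
    # assumptions, not settings from the video or the comment.
    from transformers import MistralConfig, MistralForCausalLM

    # Shrink the architecture relative to the 7B defaults (fewer layers,
    # smaller hidden size) to get a model that is cheap to train on one task.
    small_config = MistralConfig(
        vocab_size=32000,
        hidden_size=1024,          # 7B default is 4096
        intermediate_size=3584,    # 7B default is 14336
        num_hidden_layers=8,       # 7B default is 32
        num_attention_heads=16,    # 7B default is 32
        num_key_value_heads=4,     # grouped-query attention
        max_position_embeddings=4096,
    )

    # Randomly initialised weights; this model still needs pre-training or
    # fine-tuning before it is useful for anything.
    model = MistralForCausalLM(small_config)
    print(f"Parameters: {model.num_parameters() / 1e6:.1f}M")
    ```

    The same pattern scaled the other way (more layers, larger hidden size) gives the "larger, slower to train" case the comment mentions.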
