LLM OS with gpt-4o
Let's build the `LLM OS`, inspired by the great Andrej Karpathy, using the new GPT-4o model.
Can LLMs be the CPU of a new operating system and solve problems using:
💻 software 1.0 tools
🌎 internet browsing
📕 knowledge retrieval
🤖 communication with other LLMs
Code: git.new/llm-os
⭐️ Phidata: git.new/phidata
Questions on Discord: phidata.link/discord
Comments: 65
Woah, just finished the "older" video and BAM, a new one is already available and already updated! You are a great dev, thanks for making it available for everyone and for the simplicity with which you describe difficult things!
@phidata
A month ago
i aim to please :)
This is really great @phidata! Great resource for my project; I am building a system much like this, with a twist of the ACE (Autonomous Cognitive Entities) framework, and it can use any open-source models (Groq is super cool and super fast). Thank you for open-sourcing your work.
I love that you have dived in and produced a practical demo. Thanks for that and for releasing the code. Longer term, I wonder if using traditional OS concepts might be limiting and whether a more human-centric model has benefits? E.g., a minimal design that grows with the user, uses self-evaluation, and develops its own assistants and tools specific to that user's needs?
Amazing job, thank you so much for sharing it!
God bless you and your work so far. Very cool.
@phidata
A month ago
@liamlarsen9286 🙏🙏thank you for the blessing 🙏🙏
You're the man 👏
you are on top of it !
@phidata
A month ago
thank you, at your service :)
Great video! What kind of pointer do you use, the red line that disappears?
Bro, how did you put this together so fast ahahahah. You get leaked news early! Awesome video, I can't wait to try it after work.
LOVED IT
the father of LLM OS! awesome
Security. Reliability. Performance. So many concerns.
@phidata
25 days ago
fun fun fun :) i think it's good to experiment and then add those on top :)
I was able to get something similar running reliably using Llama 3 8B quantized to 4 bits. It's not quite as advanced; it doesn't have any task delegation, but I don't see a need for it so I doubt I'll add it. But I'm really happy I was able to get it to run on such a relatively "weak" model that can run locally.
Instantly subscribed, thanks
@phidata
A month ago
thank you
This is amazing! Is a Llama 3 with Groq version on your roadmap? I'm going to attempt to convert it, but I don't know if I'm as skilled...
Thank you. I'd love to see the text of the research report it wrote for you.
I'm very skeptical of all things around AI lately, but this is a really cool implementation/conceptualization of what a powerful LLM can do. I want to build one of these locally and see if I can make it an 'expert' at something niche and traditionally 'difficult' for a computer to do.
Is there a way to connect this to my VS Code? I'm trying to connect GPT-4o to my project (an e2e Playwright/TS automation framework for a Next.js project), kind of like a Copilot, but I feel Copilot sometimes doesn't give the best suggestions.
amazing!
Awesome
@phidata
A month ago
appreciate you
The links in the description don't seem to work at the moment. Great video and coding, however!
@phidata
A month ago
sorry, i tried to put them in but not sure why it's not working. everything should be under the phidata repo: git.new/phidata
Since this can see the shell, can it also write or create files? Create or edit system settings if I chose to do so?
I think GPT-4o, being a multimodal model, has been trained on millions of YouTube videos, and it will be the same for GPT-5. Just think of it: to scale up a model you train it on more data with more parameters. Since the maximum number of relevant tokens available online cannot exceed 50 trillion, the biggest source of quality data is YouTube, with over 4 billion videos. I think that's why GPT-4o is so good at agent capabilities.
@liamlarsen9286
A month ago
You must take into account that the data has to be increasingly well prompted as you grow. Because of how large the multi-parameter models are, their training/test data must become increasingly complex to actually see an increase in reasoning performance. Right now we are experiencing a plateau; as we can see, even the 400-billion-parameter model Meta is waiting on has been training for months, which means our constraints are complex data and compute (GPUs). For models to become better at reasoning tasks, they have to complete reasoning tasks in a dataset, which may be significantly more complex than just watching YouTube. They possibly created lots and lots of training data even USING GPT-4! So we may see the models become more efficient, but not grow (model collapse).
@Nakatoa0taku
A month ago
The quantity is impressive, isn't it? The quality of the data is subpar nonsense and insanity. Quality over quantity, of course. *Shakes head in disbelief* You people are bloody idiots, innit?
@Nakatoa0taku
A month ago
@@liamlarsen9286 Compute ain't the problem, dude. Just laughable, meritless, mediocre, meaningless mumbo jumbo, nothing more. Why is it you people don't grasp any of what is going on?
@liamlarsen9286
29 days ago
@@Nakatoa0taku did chatgpt help you write that
How do you recommend I add vision capabilities? I'm hoping to use this as the backend of an Android app that can take plant images and diagnose their health and living conditions. Thank you in advance! ❤️
@phidata
25 days ago
the assistant.run() function accepts images; check out github.com/phidatahq/phidata/blob/main/cookbook/assistants/vision.py#L7-L15 for an example
Fantastic! But please tell me: when it searches, does it read the pages as well? Does it scrape them, maybe with other cheaper LLM team members, or with Firecrawl, which can scrape structured data to a vector store? Does it read full pages or only the search results?
@phidata
A month ago
we can program it to work however you like :) currently it reads the page if it's non-JavaScript, but we have a Firecrawl integration so you can use that as well
@jarad4621
25 days ago
@@phidata awesome, thanks! And could you technically add other scrapers or integrations as well if needed?
Can you share with us how we can add a bunch of files (pdf, txt, docx, etc.) so the agents/assistant can respond based on the knowledge base? It's not necessary for users to upload from the front end.
@phidata
A month ago
yup, absolutely doable, will add a video on it. For now please check the docs here: docs.phidata.com/knowledge/introduction
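For anyone looking for a starting point before the video lands, a minimal sketch of loading a local folder of PDFs into a phidata knowledge base might look like this. It follows the pattern from the phidata knowledge cookbook; the folder path, collection name, and Postgres/pgvector URL are assumptions for a local dev setup, and phidata also ships text and docx knowledge bases that can be combined the same way:

```python
# Sketch: index local PDFs into a phidata knowledge base (no front-end upload).
# Assumes a local Postgres with pgvector running (e.g. phidata's dev container)
# and an OpenAI API key in the environment for embeddings.
from phi.assistant import Assistant
from phi.knowledge.pdf import PDFKnowledgeBase
from phi.vectordb.pgvector import PgVector2

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"  # assumed local dev URL

knowledge_base = PDFKnowledgeBase(
    path="data/pdfs",  # assumed local folder containing the PDF files
    vector_db=PgVector2(collection="local_docs", db_url=db_url),
)
knowledge_base.load(recreate=False)  # embed and index the documents once

assistant = Assistant(
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,  # inject retrieved chunks into the prompt
)
assistant.print_response("Answer this from the loaded documents: ...")
```

The `load(recreate=False)` call only re-indexes when the collection is empty, so it is cheap to run on every startup.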
Technically, LLM OS would work with a local Llama-3 model right? Since you do not need the "omni" multi-modal input.
@phidata
25 days ago
technically yes, but local models probably aren't yet good enough to pull this off. maybe i'll do a video testing local models with this
thank you for this valuable video. i've followed all the steps and it also launches the web UI, but it gives me this error: NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4o` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
@shadeall
A month ago
The API keys are connected and in use as well.
How big can the database be? Can I add multiple doc files?
@jarad4621
A month ago
Yes please confirm this
@phidata
A month ago
@@jarad4621 you can add as much as you like, it's a Postgres database so the limits haven't been found yet 😂
@phidata
A month ago
you can add many, many doc files. i've personally built this with 50-70 GB of data and it does fine (of course you need an HNSW index + retrieval fine-tuning) 😂
Can I use a local open-source model instead of GPT-4o? I mean, how would I code it if I can?
@BarathwajAnandan
25 days ago
The prompts might have to be tuned, I believe. I've tried multiple good repos that don't work as expected with any LLM other than the one they were built with.
why do you say that it only works with GPT-4o?
@phidata
25 days ago
gpt-4o or gpt-4-turbo, other models aren't there yet. maybe i'll try with Opus, that should probably work
is that standalone? if so, why is it buried in phidata?
@phidata
A month ago
its built using phidata :)
@six1free
A month ago
@@phidata I just realized that's also your name lol. I guess I'll have to download the whole thing then :D
That is cool, but damn that code looks like a spaghetti spiral. I really wish people would format their code properly and get rid of test things.
@phidata
25 days ago
sorry :(
@Jshicwhartz
25 days ago
@@phidata No need to be sorry lol, it's good stuff, I like it! Just don't bait people into downloading code that will eat tokens if you decide to open-source it. Otherwise, keep it up! I'll sub and wait for your next video!
langchain?
@phidata
A month ago
ohh no no
you've done so many of these and you repeat the same script instead of showing us interesting things like custom tools and using Llama through Ollama...
@phidata
10 days ago
hi, if you give me recommendations i'll make more vids. i did a few videos with llama3 and ollama but happy to do more, e.g.: kzread.info/dash/bejne/X2yCuKqae660m7A.html and groq: kzread.info/dash/bejne/q6B6p9F-ZJW5Yto.html. I actually just record what i'm working on nowadays, but i'm happy to take requests :)
wow!