GPT-4o: What They Didn't Say!
Science & Technology
While yesterday's GPT-4o announcement has been covered in detail in lots of places, I want to not only cover that but also talk about some of the things they didn't say, and what the implications are for GPT-5.
openai.com/index/hello-gpt-4o/
openai.com/index/spring-update/
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes
👨💻Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials
⏱️Time Stamps:
Comments: 70
So what was it that they didn't tell us? This is the only reason I listened...
@JG27Korny
14 days ago
clickbait
@rikschoonbeek
14 days ago
I think from 8:00 you'll hear it
@pluto9000
14 days ago
@rikschoonbeek I didn't hear it. But you saved me from watching the whole thing.
@Merlinvn82
13 days ago
GPT-4o is not actually a fine-tune of GPT-4; it's a new model trained on the same GPT-4 datasets.
@fellowshipofthethings3236
13 days ago
congratulations on being baited...
Why free? I think it's the same reason they removed the login. The more people using it, the more data they get to train on. They couldn't stay ahead of google in the data game otherwise - google has gigantic amounts of people's data. This explains why google has been so stingy with their bots. They don't need more of your data
@ondrazposukie
14 days ago
I think they just want to be as open as possible to make many people use their AI.
@neoglacius
14 days ago
exactly, why are Facebook or Google free? Because YOU'RE THE PRODUCT, now including logistical and operational data from all companies on the planet
@rikschoonbeek
14 days ago
Can't say there is a single motive. But data seems extremely valuable, so that's probably a big motive
@kamu747
14 days ago
That's not the reason. Well, not the main one; there might be something there, but... 1) It's a competition. Meta changed the pace when they decided to provide their AI for free. Others will need to offer better for free to stay in the game. 2) It has always been part of their mission to provide free services. There are altruistic reasons behind the intentions of those involved. OpenAI isn't really a company as you know it, it is a movement. A changed world is their ultimate goal. If they don't do this, the global implications are catastrophic; AI risks creating an irreversibly massive divide between classes because, believe it or not, a lot of people can't afford to pay $20. This is why OpenAI started as an NGO, but their mission was too expensive; they needed to placate some investors and monetise a little in order to be sustainable for the time being while compute was expensive. Compute just got cheaper with Nvidia's new H200, which allows them to afford to offer services to more people. However, there are certainly more advanced capabilities that paid users will benefit from later on. 3) As for user data, they no longer need it as much as you think they do.
@4l3dx
14 days ago
Their actual product is the API; ChatGPT is like the playground
It makes sense that this model is based on a different transformer (or tokenizer), because they were calling it gpt2 (gpt2-chatbot or something like that).
Re the 1.5 models and the new tokenizer but still GPT-4, I see this as comparable to the Intel “tick-tock” pattern of CPU upgrades - you’ve got a new process node, first you port the old CPU architecture to run on it - that’s the tick - and once that’s proved out, then you get your new CPU architecture running on it, and that’s the tock. Then repeat. This let them split the challenge into two different phases, and gave them something good to release at each phase.
Great content as always Sam! Excited by how this could be a teaser to a GPT5, totally agree with what you said.
great update - I am waiting for the audio version in GPT-4o - so far I use it for coding and image analysis
don't forget that GPT depends on deep-learning advancements in the scientific field to deliver something better; it's not like a regular company.
I am not sure they were ready for launch "Sorry, our systems are experiencing high volume". Shouldn't that be expected?
Gpt4o was able to refactor complex JavaScript code. I was impressed.
Great reviews. Love the channel! Voice IN! Voice OUT! Human doctors learning (better) bedside manners from machines! (Film @ 11).
the default setting I saw in their API docs for gpt4o was 2 FPS... however it can be increased... I'm thinking there's a sweet spot, but I hope it's not 2 FPS! Also, the audio API controls are not integrated yet and you have to use the old 'whisper' rigmarole of TTS and ASR
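A 2 FPS default just means grabbing one still frame every 0.5 seconds of video and sending those images to the model. A minimal sketch of that sampling idea in Python (the helper name and interface are mine for illustration, not from OpenAI's docs):

```python
# Hypothetical helper: choose the timestamps (in seconds) at which to
# sample frames from a clip before sending them to a vision model.
def frame_timestamps(duration_s: float, sample_fps: float = 2.0) -> list:
    interval = 1.0 / sample_fps            # 2 FPS -> one frame every 0.5 s
    n_frames = int(duration_s * sample_fps)
    return [round(i * interval, 3) for i in range(n_frames)]

# A 3-second clip sampled at the 2 FPS default yields 6 frames:
print(frame_timestamps(3.0))  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
```

Raising `sample_fps` trades more API tokens (and cost) for finer temporal detail, which is presumably the "sweet spot" the comment refers to.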
Wow, what an interesting time to be alive. I think it's an improvement in many ways, but only around the edges and not the core 'intelligence'. I'm seeing very similar answers to previous versions and other LLMs. Also, I see that it now does more web searches to include in the results, and it is telling me that it can store persistent information from our sessions, which seems like a big enhancement. I don't see the 10x improvement that 3.5 to 4 showed, and I suspect that they are quite a ways from achieving that in a version 5, but I'd love to be proven wrong.
This is very interesting and informative for anyone who has not seen the presentation, and I am also watching just because your voice is so calming and engaging… Thanks for bringing this to the AI Community 🎉🎉🎉🎉
Been playing with all last night and this am.. this is a world changer..
It's simple: if GPT-3.5 got replaced by this, GPT-4 will most likely be replaced by something better for paid users.
Something that I haven't seen largely discussed yet is the opportunity for __personalized tutoring__ that was demoed at OpenAI's GPT-4o announcement event. Imagine a world where every student struggling with a subject like math or physics has a personal tutor at hand to help them grasp a difficult subject. Not solving a homework problem for a student, but guiding them step by step through the solution process, so they can derive the solution on their own with minimal help. IMHO this will make the entire (on-site as well as online) tutoring industry to some degree obsolete.
@samwitteveenai
12 days ago
I agree this is huge. I know there are people working on it, but agree it is going to be one of the biggest areas for all these models.
I found it incredibly quick with a simple text completion, but it didn't actually read or do what I asked. It needed reminding to visit the URL I gave it (tool use), which I had to do several times, and it still seemed to prioritise its own out-of-date knowledge over the content it had just fetched. I need to try out all the features fully (limit hit after a few messages), but it came across as a bit too quick to churn out code without reading the initial prompt properly... it felt a bit lazy. Perhaps I just need to learn how to prompt it to get the best from it (as was the case with Claude)
Are you going to any fun events in bay area?
To me it would seem OpenAI is using multi-token prediction method with this new model, but I could be wrong. What do you think?
If the audio and image are integrated into the model and use the same neural network, how did they manage to dissociate them in the version currently available?
@samwitteveenai
12 days ago
The model will have different output heads, and they can just turn one off, etc.
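The "different heads" idea can be sketched in a few lines: one shared trunk feeds several per-modality output heads, and serving simply skips any head that isn't enabled. This is a toy illustration of the concept, not OpenAI's actual architecture; all names here are made up.

```python
# Toy sketch of a multi-head multimodal model: a shared hidden state is
# routed to every *enabled* output head, so a modality (e.g. audio) can
# be switched off at serve time without retraining anything.
class MultiHeadModel:
    def __init__(self):
        self.enabled = {"text": True, "audio": True, "image": True}

    def disable(self, modality: str) -> None:
        self.enabled[modality] = False

    def forward(self, hidden_state: str) -> dict:
        heads = {
            "text": lambda h: f"text({h})",
            "audio": lambda h: f"audio({h})",
            "image": lambda h: f"image({h})",
        }
        return {m: f(hidden_state) for m, f in heads.items() if self.enabled[m]}

model = MultiHeadModel()
model.disable("audio")            # e.g. audio output not rolled out yet
outputs = model.forward("h0")
print(sorted(outputs))            # ['image', 'text']
```

This matches what users currently see: the underlying model may be natively multimodal, while the shipped product only exposes a subset of its output heads.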
They understood the ongoing market shares competition
I have a feeling GPT-4o was trained using knowledge distillation Teacher-Student framework, with 4o being the student, and Arrakis or whatever else as multimodal teacher. 😅 I have no proof of it anyway. Also good to mention optimized tokenization process.
I wonder if Dall-E will be available for free users?
Actually Ultra 1.0 is able to do img in and out. But as usual with Google we will witness it next year
That's it, I think. It's a midpoint to version 5. Sam talked before about how multimodal would lead to better reasoning. They were running out of text data, so they clearly shifted focus to video plus audio. That resulted in Sora. Now the audio gets us this.
@Anuclano
13 days ago
If they ran out of text data, why does it have no idea what Pushkin wrote?
I'm a free user, how can I try it?
I tested it with the mobile app. It's quite amazing how fast it can respond. But the whole thing with different emotions - sounding sad, happy, excited - did not work at all. The voice was using the same tone and "emotion" every time. Did anyone have a different experience with it? Could anyone re-create what they showed in the live demo?
@XerazoX
14 days ago
the voice mode isn't updated yet
The world doesn't have enough chips to make AI cheap. The technology still requires a lot more innovation.
the free 4o access is pretty limited, especially for conversations
My GPT-4o does not do any of the new things. It is just like GPT-4.
When does it roll out in Australia?
@gavinknight8560
13 days ago
Already here
Can't build much on the free tier. 16 messages per 3 hours
14 days ago
Why would you build something on a free tier?
@74Gee
14 days ago
Well I wouldn't, but at 1:20 that's the suggestion.
So is it only free on the desktop?
@markhathaway9456
14 days ago
On Apple machines first, and others over a couple of weeks. They said the API is also free, so we'll see some apps for iOS and Android.
one eye just a illuminati thing it is
Who are these guys? Free Ilya!
spoiler: they told everything. go on to next video.
One thing they didn't say is that you can only ask 'GPT-4o' about 5 questions before being blocked for the day unless you pay up.
I would bet that GPT-4o is 3-5 times smaller than the original GPT-4, if not even smaller. There have been so many advances in the field since GPT-4 released, especially from Meta, which it would be stupid not to take advantage of. And the model being completely free backs this up: if it were similar in size to the original, going free would completely bury the company financially. And I would guess that GPT-5 will be similar in size to GPT-4 but will take advantage of every known innovation in the field, plus dozens more that OpenAI has most likely made internally, which will make it a couple of times better; with true multimodality and better memory, it will likely be the first glimpse of AGI by the summer of next year.
The deal announced by Perplexity and SoundHound ($SOUN): the platform is being used by GPT-4o.