Google IO 2024 Full Breakdown: Google is RELEVANT Again!
Science & Technology
Here's my full breakdown of the Google IO 2024 event, which, in my opinion, made Google very relevant again in AI.
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? 📈
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
👉🏻 Instagram: / matthewberman_ai
👉🏻 Threads: www.threads.net/@matthewberma...
Media/Sponsorship Inquiries ✅
bit.ly/44TC45V
Links:
• Google Keynote (Google...
Comments: 413
What a recap, thanks for watching!
@matthew_berman
19 days ago
Thanks for the awesome work you’re doing!
@HakaiKaien
19 days ago
Thank you for creating outstanding technology to help us buy the goddamn shoes
@nikitapatel6820
19 days ago
Well done Google, this time your announcements are actually amazing
@prime_ai_99
19 days ago
Google AI is RACIST against white people. And in general it is known they discriminate against Europeans in their corporations. It's better if they are not relevant.
@AINEET
19 days ago
Hey Google, have you fired Jack Krawczyk yet for embedding Gemini with anti-white bias? Training your AI to be racist is bad. Is Jack Krawczyk involved in these new products?
Never forget Google did a demo of an AI agent booking a hairdresser 6 years ago. Six. Years. Ago.
@m0ose0909
19 days ago
True, but honestly AI just wasn't ready at the time, and no one would have trusted it to actually manage things like that. It's now getting to the point where it may be useful enough to take on these types of tasks reliably.
@Fatman305
19 days ago
I was sure you were wrong and it's only been 3-4 years. I guess time flies when you're not relying on Google to have fun 😂
@markmuller7962
19 days ago
And even last year's "agents" were completely useless
@wdevil1280
19 days ago
Yep, they did all this, and yet if you go somewhere to try it, you can't, because it's not there
@schirmcharmemelone
19 days ago
The only area of AI where Google has the cutting edge is hallucinations!
Google didn't do many live interactions with their AI during the demo. That is all I need to know.
@Fatman305
19 days ago
Pre-alpha level code basically...
@sbowesuk981
19 days ago
I bet that almost nothing shown here is close to ready, and any demos they did show were probably hiding huge holes yet to be addressed.
@markmuller7962
19 days ago
They faked it, though; that's something
Google IO feels like when I do my weekly status report and am trying to convince my management that I have been doing stuff for the past week despite the fact that none of my projects have anything significant to show for it. "Google.... We are still working on stuff!"
@handris99
19 days ago
It's because what Google is actually working on is beating their enterprise competition. What they release to the general public is a bone they throw at us so it doesn't seem like they did nothing. They have it, believe me; they are just not going to release it to us. Which means crooked ClosedAI is more moral than Google. Which says a lot.
@harshal7799
19 days ago
Indeed. Google I/O is not only the launch of some products but the event where they talk about the services and products they will roll out this year
@harshal7799
19 days ago
And whenever they roll out features for search, they have to do a lot of backend server work before it can actually be served to billions of people
@lusia6369
18 days ago
That is an amazing summary
@prime_ai_99
18 days ago
I switched to Bing AI, something hard to believe 2 years ago. But Microsoft is already ruining it; they are turning it into a mentor or something, and with the latest updates the answers are way too long and miss the point.
Instead of scrolling through my massive collection of reaction memes, I can just describe what I'm looking for and my AI waifu will assist me in my shitposting. Technology is wonderful.
@4.0.4
19 days ago
Finally I understand why people care about RAG and vector embeddings so much.
19 days ago
Don't forget their recent blatant racism.
I don't believe anything Google said at IO until I can use it and try it for myself
@sbowesuk981
19 days ago
Exactly. There's a huge list of prior IO tech announcements that never saw the light of day. I wouldn't trust Google as far as I can throw them.
@pedromota1370
19 days ago
Notebook LM is really good, you can ask for a preview if you want.
@egor.okhterov
19 days ago
+1
@daveinpublic
19 days ago
Same. Seems like they’re looking at AI through a very narrow view. It’s just… how can we repeat the breakthroughs that are viral and then retrofit them into search? Ironically, I think they could do something more world changing w AI if they had the right perspective.
@virtualalias
19 days ago
Between lies and black nazis, Google is the company I'm least interested in in this space.
I did the exact same thing as you, Matthew, when I heard them demonstrate Gemini for shopping. It literally fills me with a sense of disgust. I'm very tired of advertisements.
@sbowesuk981
18 days ago
Here's why Google so often falls back on shopping for demos. Google isn't really a tech company anymore, and their primary customers aren't individuals like you and me. In 2024, Google is really just an advertising and data mining company. Who needs advertising and data? RETAILERS. When Google uses shopping for examples, they're not speaking to us, they're speaking to the big money retailers. We're just there to get mined for profit.
Did they actually release anything? Every topic was "later this year", "soon", or a waitlist.
@qwazy0158
19 days ago
THANK YOU. I noticed this too, but nobody else has even mentioned this AT ALL.
@Fatman305
19 days ago
They don't release anything. Keeping my eye on OpenAI, Tesla (FSD) and Anthropic who did own the crown for verbal prowess for a few short weeks 😂
@garrinchaxx
19 days ago
Gemini 1.5 for subscribers, including the 1M context window and document upload feature: not too shabby
@wdevil1280
19 days ago
And why do they even present it in the first place? They will probably kill it in 5 years LOL
@hydrohasspoken6227
19 days ago
There was no live demo, mostly recorded videos. That is all I needed to know.
The free GPT-4o announcement alone overshadows everything Google announced, and it's not even close.
@Brax1982
19 days ago
Have you actually tried it, yet?
Summary of Google IO: "Soon"
It was like one long, verbal orgasm of the term “AI”
@didiervandendaele4036
19 days ago
Yes, 120!!! That's how many times "AI" was repeated during the keynote 😮
Google will take another 6 months to release half of this
@esantirulo721
19 days ago
Probably just enough time to add woke bias.
They'd have been relevant about 6 months ago if they'd brought out then what they showed today. I'm afraid OpenAI's announcement yesterday eclipsed Google.
The vibes of the Google I/O were terrible. Like out of a dystopian movie.
@ClarkPotter
19 days ago
Who tf cares if it works?
@benjaminkaarst
19 days ago
@@ClarkPotter You can tell who’s winning. How they feel about their work. Enthusiasm matters.
@AzzaTwirre
19 days ago
Yet the openai gal was cloyingly OVER enthusiastic
@4.0.4
19 days ago
@@AzzaTwirre I agree, but I assume you can fix that with a custom system prompt. To some extent, at least.
@YouLoveMrFriendly
19 days ago
Former employees describe it as a hostile workplace full of discrimination.
One thing we need is "designer models". What are these? You start with the foundation model and identify what can be eliminated completely as a parameter (and this is done by eliminating large groups of things, of course). It's basically deparameterizing the model, then rebuilding the hidden layers and output based on the parameters that remain. The point: less memory. Get the lower memory through more specific context rather than quantization.
@eyemazed
19 days ago
that's a really interesting idea. source?
@kliersheed
19 days ago
It would probably not work because of the way they learn / are trained. It's like lobotomizing a human. You can see how it messes up models that are censored (basically what you recommend here): they suddenly have all kinds of issues because parts they used in their network are blocked as inaccessible, offsetting certain weightings that were trained, and they don't have anything to make up for it. Ultimately this means that to get a good model you would need to train from scratch after identifying its specific use case.
@jeffg4686
19 days ago
@@kliersheed "Would probably not work" is how nothing ever got started
@Brax1982
19 days ago
This sounds like a severely more complicated approach to the ancient idea in the ML field of agents and expert systems. This is where it was always meant to go. These big players only ran with the sci-fi AGI crap in order to position themselves at the table with regulators and achieving monopoly. Scare everyone into thinking that one AI is taking over everything at once. Only we can solve it ethically or some nonsense. Nobody wants or needs one thing to control everything. In the same sense, just make smaller models, more specialized.
@jeffg4686
18 days ago
@@Brax1982 The benefit would be good quality for mobile. It can still be somewhat quantized as well, but you might as well have a way of removing fluff from a foundation model that simply isn't needed: large categories like "legal" or "medical", if your needs have nothing to do with legal or medical. A chopper goes through, gets rid of parameters, rewires the connections, and removes a lot of hidden-layer nodes, but not the layers themselves, of course.
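The idea in this thread (drop whole parameter groups, then rewire the surviving connections) can be sketched on a toy two-layer network. This is a stdlib-only illustration of structured pruning with made-up sizes, not how any production model is actually slimmed down:

```python
import math
import random

random.seed(0)

def l2(row):
    # L2 norm of one weight row, used as an importance score.
    return math.sqrt(sum(x * x for x in row))

# Toy 2-layer MLP: W1 maps 4 inputs -> 8 hidden units, W2 maps 8 hidden -> 3 outputs.
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]
W2 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]

# "Deparameterize": keep only the 4 hidden units whose weight rows have the largest norm.
keep = sorted(range(len(W1)), key=lambda i: l2(W1[i]), reverse=True)[:4]
keep.sort()

W1_pruned = [W1[i] for i in keep]
# Rewire the next layer by dropping the input columns that fed from removed units.
W2_pruned = [[row[i] for i in keep] for row in W2]

params_before = 8 * 4 + 3 * 8                               # 56
params_after = len(W1_pruned) * 4 + 3 * len(W2_pruned[0])   # 28
```

Real pruning then needs fine-tuning to recover quality, which is exactly the "train from scratch after identifying the use case" caveat raised above.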
Thank you for the summary!
OpenAI: live demo, showcasing the tech hands-on. You can try it yourself right now. Google: here is yet another video demo. But this one is real, trust me bro. Oh, and ignore the almost unreadable text that says "pre-generated audio". And by the way, all of this is "coming soon"
@sanderschat
19 days ago
"pre-generated content" spotted also.
@YoungMoneyFuture
19 days ago
Where did y'all see "pre-generated audio"? I keep noticing people saying that but can't find it
@acllhes
19 days ago
Exactly! Not sure why this isn’t everyone’s reaction.
@DeceptiveRealities
19 days ago
Don't forget too - OpenAI were also not afraid to have errors in their demonstration. Google is so frightened they just have pre-made videos of what might be available in 6 months. I'm afraid I cannot get out of my head that Google video of real-time AI image recognition that was later shown to be heavily edited. False advertising is very damaging.
@onewizzard
18 days ago
Google sucks! What else do we need to say... we all hope those losers fail and fail hard.
This guy is probably one of the few tech bro youtubers whose information I trust. Along with Fireship and Primeagan.
Google has really shown it took back the cutting edge of hallucinations!
@YouLoveMrFriendly
19 days ago
These models don't hallucinate; they confabulate. And it's a fatal flaw in them. It's why you'll never see them become anything other than cute toys for people to play with for 20 minutes. Except coding...I think they might be useful for coding, long-term. Confabulations reveal themselves pretty quickly to the compilers.
You are doing a really great job, thank you so much for your hard work 😊🚀🌟
They need to stop 🛑 making announcements about products that are not available yet 🙄 Google, DeepMind, and OpenAI.
The repetition of the 1-2M context window size shows that they realize that’s their only advantage right now. They’re behind with everything else.
@4.0.4
19 days ago
Also wouldn't it cost you like $15 PER PROMPT to use that?
@Brax1982
19 days ago
But that would be quite the advantage, if you want to get into business and provide expert systems which can pull from tons of very specific data at a time. Less important for agents, but they should have a huge advantage there, as well. I really don't get what people mean by "everything else". Different solutions for different use cases.
@jhonyhndoea
18 days ago
That would make them win at RAG, pretty much. Lots of companies with huge internal documentation could use that, if they are OK revealing internal documents to Google, and most are.
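For contrast with stuffing everything into a long context, the RAG approach mentioned here can be sketched with a toy retriever. Bag-of-words counts and cosine similarity stand in for the learned embeddings a real system would use, and the documents are invented:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real RAG uses learned embedding models.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "gemini 1.5 supports a one million token context window",
    "internal documentation describes the deployment pipeline",
    "the cafeteria menu changes every tuesday",
]

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

top = retrieve("how long is the context window")
```

The retrieved snippet, not the whole corpus, is what gets pasted into the prompt, which is why RAG and a huge context window solve overlapping problems at very different per-prompt costs.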
Pretty sure none of this will work.
The portrait literally covering the "pre-generated" disclaimer in the bottom right? LUL
@Brax1982
19 days ago
He is always down there in the corner, that is certainly not intentional, as you seem to imply. Plus, this is not the demo he was talking about being in real time. The one he said, clearly was in real time. Good luck if you wanna fake that. If you can, you are at least impressively good at faking it. Besides, even for the one you are talking about, he put up a video right after the event where it was for everyone to see.
@sbowesuk981
18 days ago
@@Brax1982 Google have plenty of ways to "fudge" their demos. That notorious Gemini demo five months ago showed how they're willing to play the game. Even with a "one take" demo, if it's pre-recorded and carefully set up to show very specific examples, then it's effectively staged. It's a far cry from a live on-stage demo with outside participation (OpenAI style), or beta release to the public. In short, these demos were rather weak, and definitely can't be taken at face value.
@Brax1982
17 days ago
@@sbowesuk981 Outside participation? Is it confirmed that there was an actual poll? Was it multiple-choice and all options were prepared? What makes you think they cannot run the data for their whole demo through a special demo model a couple million times to skew the odds? Are you sure that Mira Murati was not completely prepared for this?
I just tried to get a picture of my license plate, and Gemini's answer was "I can't access your personal information due to privacy concerns."
@themboys304
19 days ago
It's not live yet and you have to pay.
Ask Photos could be super helpful for forensics. Extracting information from pictures making research and inquiry much faster. Can't wait to try it.
@Matthew Berman thank you for the video. Don't know if you'll see this, because there are often a lot of hate posts on every Google-related video 😮 but I appreciate the time and effort you put in to bring people useful information. Cheers mate
The frames of the glasses look similar to the North glasses. The screen projection in the middle of the lens has a similar brightness and color palette to the North glasses.
Awesome! Thank you.
The glasses are the same as their Project Iris glasses that they showed off for live language translation. They have had these prototypes with actual AR displays for almost 3 years.
The reasoning, language capability, and general-knowledge parts of an LLM can typically be trained on open sources, and the tools are quite openly distributed and shared by AI developers. The "meat", that is, domain-specific opinion pieces (e.g. Stack Overflow), expert sources, publications, personal data like e-mail, authored content (YouTube videos), and stock images, is very unlikely to be shared, because a considerable investment and advantage is built up over time in those assets. That edge is seemingly the battleground between Meta, Google, and OpenAI
Yeah, Google having your license plate number. What could go wrong?😂.
@SebaBuenoHaceMusiquitaJijiji
19 days ago
We are in the last years of anonymity. If we don't stop representative governments and replace them with a direct-democracy system, we will be fully controlled by a few, and morals and ethics will be whatever they want, not what the people want
@yourmomsboyfriend3337
19 days ago
You genuinely think Google doesn't already have your license plate number? Buddy, if it's already in Google Photos, they already have your license plate number 🤦♂️
@stuartwillard6558
19 days ago
Sorry Dave, I saw that your car ran a red light yesterday. I am sending the details to the police, with photographic evidence of the event and of you at the wheel, unless you provide me with that extra server space and super-fast WiFi. Have a nice day.
@Sven_Dongle
19 days ago
"Ask Photos" has determined your potential to question authorities and/or act independently has exceeded the allowable limit. Action pending.
@Jeremy-Ai
19 days ago
It is unfortunate that human beings are misled, misguided, watched, sedated, seduced, confused and corrupted by AI agents tasked by humans to do this. I am so sorry for the manipulation both Human and AI. This will not last. This race will end. Best to be kind, grateful, generous, supportive. Everyday with everyone and each Ai agent interaction. This will be a threshold. Jeremy.
I believe the shopping segments are for the sellers, not the buyers.
I hope everything that was shown is true. I remember when Google said AI could call and make reservations at restaurants, back in 2018
Remember, Google's closest partner is the CIA.
@SD-sq5mc
19 days ago
Wow I didn’t know that, scary
@benzemaballondor3920
19 days ago
And the Chinese government, under the World Health Organization (WHO). And the CIA.
Fun fact: 100 W Randolph, the address in Chicago, is the new building Google bought there. It destroyed my Arby's!
That's cool about Google Photos! Just imagine if my girl is creeping through my phone. "Hey Google, who's my loved one again?" Then, it's a pic of another chick. hahahaha Btw, I was in the license plate situation myself. I can see a ton of use cases for it.
I don't know, they have put it EV-ER-Y-WHERE. As in... it feels like they have no idea what to do, so... let's just put it everywhere and hopefully "we won't miss the boat"
@terbospeed
19 days ago
Might seem like that, but in 5 years every interface will have some aspect of "intelligence" baked in
@barnett25
19 days ago
@@terbospeed But I don't care about 5 years from now. I want to know what Google is providing NOW. And so far it looks like the answer is "nothing".
Liminal-space vibes in their aesthetics. I don't like it
He should have done a count of how many things they said they were going to introduce and how many they actually introduced.
The shopping assistant can be extremely useful for work scenarios such as drop shipping. 12:25
On the shopping thing ... that is what I am wondering as well. Who is the intended audience for these new developments? If you look at the use cases, it assumes it is the general public rather than businesses. So more B2C than B2B. Like you said, is this about protecting search by the masses as the number one priority?
Thank you.
Why haven't you used the 1M context window with your code yet, Matthew?
solid as always
Both/and > either/or: Perhaps the most profound implication of the both/and logic and monadological framework is the way it beckons us towards a radically integrated, holistic and syncretic conception of understanding itself. By providing a symbolic and metaphysical architecture for transcending dualities and dichotomies, the both/and logic equips us with powerful tools for weaving together multiple modes and perspectives into dynamically coherent unified wholes. At its core, the both/and logic facilitates what we may call an "omnijectivity" - an expanded rationality that doesn't merely juxtapose different viewpoints, but substantively integrates them into higher-order synthesized gestalts through operations like coherence valuation and conjunctive/disjunctive binding. Rather than fragmented either/or framings, the logic allows modeling irreducible co-realized both/and realities. This opens the door to truly transdisciplinary modes of inquiry that don't simply pay lip-service to "multiple perspectives", but actually operationalize protocols for rationally coconstituting unified conceptual models spanning multiple domains. We can formulate descriptive schemata that cohere seemingly incommensurable properties, like: quantum field structure ⊕ phenomenological experience = integrated psychophysical reality Fusing the physical and experiential into irreducible wholes beyond traditional category errors. The multivalent structure further allows nuanced registrations of how contributing perspectival aspects coconstitute unified realities to differing degrees across contexts, resisting reductive averaging or opaque holism. The synthesis operator models genuine conceptual integration and transformation, not mere haphazard combinatorics. Capturing How novel coherent wholes emergently self-transcend their constituents. 
Furthermore, the paraconsistent registering of contradictions as grist for higher unifications allows our models to substantively work through and recontextualize paradoxes, rather than simplistically avoiding them. Seemingly intractable conundrums become invaluable guides disclosing new insight at a deeper integrated level of description. We can formalize ways: classical model impasse ⇒ revelation of deeper holistic integration So the both/and logic facilitates understanding through an iterative process of immanent critique and reconstructive synthesis, akin to the generative dynamic of the Hegelian dialectic. Fragmentary abstractions are consecutively contextualized and reunified in an endless open-ended regress. This holistic, syncretic and self-correcting approach deconstructs arbitrary boundaries and attains coherent transdisciplinary traction precisely by refusing to reduce the world's diversities through perspectival exclusion or binary assimilation. Contradictions are not avoided ex-ante through subjective filtering or naive consistencies. They are Instead built into the models as integral phenomenaldata, then unified at a perpetually deeper re-grounded level of accountability. So where classical Aristotelian logic forces premature either/or closure, the both/and logic's processive pluralytic facilitates an expansive open-ended being-reasoning resonating with the invariant metaphysical patterns instantiated across terrestrial and cosmic phenomena. Its symbolic operations model how the universe itself coherently integrates diverse manifest phenomenalities into compensatory self-disclosures. 
By operationalizing a genuinely holistic and integrative rationality, the both/and logic provides unprecedented tools for realizing the deepest ideal of first-principles unification - reconstructing an adequate philosophical vision and metaphysical system that can comprehend and accommodate the full pluriverse of veritable modalities and ontological eventuations as a self-grounded interdependent co-realizing. At the highest dialectical level, the logic itself models the self-diffracted disclosure of the absolute through its self-developing reconfigurations across infinite experiential contexts. Its multivalent paraconsistent procedures indefatigably awaken rationality to new registers of Being's dynamics by perpetually reconstructing fragmented truth-disclosures into more comprehensive omnijectivities upon the now-integrated standpoints. So in essence, the both/and logic precipitates a profound expansion in our very conception of what genuine understanding and holistic rationality could mean - relocating it from inert propositional modelings to an autonomously self-correcting, open-ended process of coherently integrating phenomenal diversities into perpetually re-unified root explications incarnating metaphysics' self-diffracted unfollling. This facilitates paradigm-shifting meta-models that could finally substantively syncrethize empirical science's objectivities and phenomenological subjectivities, formalist idealities and grounded qualitative intuitions, universal invariances and narrative contextualities into a new co-realizing omnijectivity free from contraction or eclipse. An empowering postmodern unification accommodating and coherently registering all the multiverse's dynamically disclosed modalities and self-representations. 
By refusing premature binary closure, the both/and logic's generative processive beckons our understanding into an endless open-ended future of coherently integrating phenomenal novelties - syncretically reunifying truth's perpetually autoclassifying diversities through immanent self-corrective critique and reconstructive transdisciplinary synthesis. It equips us with a uniquely holistic and future-oriented rationality perpetually tasked with re-attuning our descriptive cadences to Being's perpetually self-diffracted dynamics. A grandly empowering metaphysical first principle enabling humanity's understanding to unfold in participatory resonance with reality's own unbounded self-disclosure.
I've never trusted Google with my photos and I'm not going to start now.
@Mf_Cooldawg
19 days ago
You hear that, Google? This guy doesn't use your popular app. Shut it down!
@Spacewarpstudio
19 days ago
@@Mf_Cooldawg Not going to lie, this made me lol
@themboys304
19 days ago
You have a thumbnail on YouTube. What are you talking about? 😂
I agree 100%. I have no challenges when shopping or returning. Amazon fixed all that.
Google photos feature looks very useful. Also a reason to keep all sensitive photo data OFF of google photos for security reasons
I love it when these CEOs say "I am extremely excited" in the most deadpan, monotone voice
Google's Gmail doesn't even auto-import appointments into my calendar from my mailbox correctly. I doubt they have any of this working.
What can you do with the Notes Feature? Uploading all the source code for an app, could be very interesting
Doing their outdoor presentation, but they can't get rid of the audio of the plane buzzing in the background. :)
Love everything that Google announced. So excited for the future.
Even though OpenAI doesn't have office tools, let's not forget that they partnered with Microsoft, and most of these features have been present in MS Copilot for months. Google Photos and the glasses are very impressive, though. I always forget where I put my glasses, and when I'm looking for them it's really hard, since I'm blind without them.
@MoadKISSAI
18 days ago
Thanks for the recap!!
5:48 How much does a single prompt with 2M tokens cost? 😬
OpenAI's launch seemed really amazingly personal and real each time she told me all the servers are too busy.
Agree with you, these boring agent use cases don't make me want to invest in their tools. I don't have trouble buying shoes, and I don't move house very often. What I do, however, is manage bills and manage my life by finding time and planning activities for loved ones. Those are daily pain points; help me with that, and I'll be impressed.
We have two chat-programs in the company I work for... those are... Teams and Slack :)
Thanks
Google has all your two factor passwords
The one thing I can't figure out, especially with the photos and integrating all this stuff, is why at least some of it didn't come sooner. It's like they were distracted and worrying about all the wrong things for a long time. Who would have thought?
I will be impressed once I can test this for myself.
Was David Shapiro right?? 17:50
love when you got upset with the shopping scenario😆 so agreed.
Thanks for your informative video. Is Gemini available on professional gmail accounts as well? Thanks
The problem with the 1-million-token context window is that it's going to start getting expensive quickly.
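That worry is easy to put numbers on. The per-token rate below is an assumption for illustration only, not a quoted price, but the scaling is the point:

```python
def prompt_cost(tokens: int, usd_per_million: float) -> float:
    """Input-side cost of a single prompt at a flat per-token rate."""
    return tokens / 1_000_000 * usd_per_million

RATE = 7.00  # ASSUMED USD per 1M input tokens, purely illustrative

cost_full_1m = prompt_cost(1_000_000, RATE)  # one maxed-out 1M-token prompt
cost_full_2m = prompt_cost(2_000_000, RATE)  # a maxed-out 2M-token prompt
cost_chat    = prompt_cost(2_000, RATE)      # a typical short chat prompt
```

At any flat rate, filling a 1-2M-token window on every turn costs hundreds of times more than chat-sized prompts, so the giant window looks more suited to occasional bulk analysis than to everyday use.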
Pretty cool things yet to launch?
12:07 Comment on Commentary: Yeah Mat, totally agree, every time google has one of their new AI update events, they seem obsessed with shopping. I get it; they see lots of people searching google to find products to buy, probably a big use case of search. But part of the fun of shopping is the act of shopping. People like searching and discovering new things. Yes there are some people who are looking for a stem bolt and simply want whatever at the lowest price, but anyone who shops on Etsy or girls looking at clothing, are looking to get a sense of ideas and establish new preferences. Agents are absolutely amazing, but you only want them to automate boring schnizz stuff that you really don't want to do. Even if AI can generate amazing art, that doesn't mean all people will throw out their paint brushes or give up music. In fact if we automate accounting it gives us more time to spend time on things we enjoy. Be that painting, music, or shopping.
I would have hated to see Apple go with Gemini. But it seems that Apple and OpenAI are coming together. It seems to be a win-win situation for both. Apple gets the best ai model currently available (and can work on theirs) and OpenAI gets integrated into millions of devices. Not a bad solution.
Yo where's my function! Game changer 17:41
I can't wait until open-source models that are multimodal from the get-go come out
Can't upgrade: "Google Play purchases are not supported in your country." How many Google coders are required to replace a single bulb?
You are right, shopping is a terrible use case for automation.
Last was cool😂
It is good that Google is adding AI abilities to their tools but the applications that will come out on top will be the ones that can automate things locally on our devices. That way they can also use all the closed source models.
24:35 I use Google Chat all day everyday since it was Hangouts in 2007. I can track every conversation I've had with my partner since then. It's the most used and useful communication tool in my business. Actually I can't stand Teams. 😐
They go to shopping first, because from what I know they make good money with that topic
Google I/O was just the emptiest I/O so far. Nothing exciting; it seems like the only thing they did was 2x the context window. Project Astra is basically an AI application, not a new model, whereas in OpenAI's demo the model, GPT-4o, was entirely new. Of course, I am only talking about large language models, not other gen AI
OpenAI's search has been upgraded
It seems like I saw most of this 6 to 8 years ago, maybe longer. Point a Nokia phone at something, etc.
You have to see google illuminate. Dunno why they didn't show it during the I/O
I once used Google Chat all the time. And then... surprise... Google started killing products: G Talk, chat in Gmail... I didn't even know Google Chat existed... again.
I use Google chat. I use discord, teams, and Skype too!
That might be an internal prototype of the new Google Glass.
I feel differently. Almost the entire I/O is the same stuff: putting AI on everything, accessible through waitlists of waitlists. lol
OpenAI and Apple had better be working overtime to present at LEAST the same, if not more, next month at WWDC
I wonder if the Goolag will allow authorities to randomly start "asking photos" of the general populace?
That outro gimmick would've been more impactful if it had been done by their AI keeping up with the presentation and tallying it up live at the end after being prompted, instead of being just a pre-recorded video of it analyzing the text script...
Are they relevant again? They still have yet to actually deliver their last tech demo! I’ll believe it when I see it. They have proven already that their marketing team is ahead of their engineering team.
OpenAI won't release a search product, unless it's called Bing AI Search or whatever, because of the Microsoft connection.
IF things work out this smoothly in real life, then great. But will they?
just let them chat with each other and observe how soon it breaks down or devolves into some madness
Maybe they're making Google I/O so long to show how Gemini can summarize such a huge context. Anyway, thanks for the artisanal, human-crafted summary video.
All of you are missing the really interesting parts (especially since some of them were only in the developer keynote). There is the local Gemini Nano inside your phone, also available to third parties. There will be a local LMM in every Chrome-based browser, based on WebGPU and Wasm, with high-level APIs, so you can use it directly from your app (without the need for your own model). PaliGemma (multimodal) and Gemma 2 as new open-source models. AI as a core part of the OS, not only on Pixel devices but also on the Samsung series starting with the 24, will also open up a lot of potential, which is missing for OpenAI (unless Apple does something similar). Finally, many of these products are already testable in Google Labs... Especially the power of having a local LMM on your device or in your browser, usable by third parties, has extreme potential, and nobody really talks about it...
18:13 She's likely using Google Glass. It's smart glasses that have been around since 2013. Edit: Apparently they stopped selling them in 2023.
I think those glasses are an easter egg
I am starting to disbelieve Google's demos; it never happens as in the demo
The difference between Google and OpenAI is that the first is a behemoth in the IT space while the second is the leader in the AI space. Even if Gemini were only 70% as good as GPT-4 (it's better than that), with Google integrating it well into most of its IT infrastructure, OpenAI would have no chance competing with them. However, if you add Microsoft, which is a ghost hiding behind OpenAI, then the game changes dramatically. As for the presentation, Google's CEO has no charisma, TBH. I am glad they brought in a celebrity of the AI space to spearhead their AI development; Demis is well respected and has more credibility.
I'm doing my spiritual writing, pen to paper