Breaking Point for OpenAI - "They Don’t Care About Safety"

Science & Technology

Ilya Sutskever left OpenAI, along with their head of AI Safety and Security!

Comments: 940

  • @matthew_berman • 1 month ago

    What do you think Ilya saw? Also, I realize Jan is pronounced "Yawn"

  • @szymonbogdani3996 • 1 month ago

    Question: How would you make the Polish name "Jakub Pachocki" readable for an English speaker? ChatGPT Response: The Polish name "Jakub Pachocki" can be adjusted for American English pronunciation while maintaining its integrity. The name "Jakub" can be phonetically adapted to "Jacob" or "Jakob," which are more familiar to an American audience. The surname "Pachocki" can be pronounced as "Pah-ho-ski" to approximate its Polish pronunciation in a way that is more accessible for English speakers.

  • @jelliott3604 • 1 month ago

    But surely this is "sort of" why the board sacked Altman in the first place: the blatant disregard for the founding principles of the company, with commercialisation, market share and preservation of company value placed above all else?

  • @electiangelus • 1 month ago

    Theres no danger here, they are way behind in ASI research.

  • @pubfixture • 1 month ago

    A fun conspiracy I've had in the back of my mind is that they've had AGI for a few years now and have been rolling out neutered versions to break the public in slowly. And Ilya saw that the AGI/ASI is now calling the shots, I Robot style... But more realistically, I think the "unsafe" part probably alludes to military industrial complex requests. OpenAI for sure would have been contacted very early on by a few agencies, considering OpenAI is at the forefront. Likely Sam was for working with the agencies and Ilya wasn't for the first conflict. Maybe now as the military-use scope is being realized, others are leaving under some version of a gag order, careful not to divulge too much, but wanting to express dissent.

  • @southcoastinventors6583 • 1 month ago

    A model that finally passed the marble question ?

  • @karlwest437 • 1 month ago

    I don't think Ilya necessarily saw something scary right now, more that he saw the direction they were going in and objected to that

  • @cognitive-carpenter • 1 month ago

    True. Very common sense answer. Probably a little too simple but somewhere in the middle. You have to have evidence to leave a well paying job 🤷🏼‍♂️

  • @normanlove222 • 1 month ago

    I agree. If there is truly something scary, there will be a lot of leaks now

  • @MelindaGreen • 1 month ago

    I think the scary part is that it got so strong so fast. It's the basic fear of the unknown, and some people are more affected by it than others.

  • @Aryankingz • 1 month ago

    @@ts8206 sam is gay

  • @Z329-ut7em • 1 month ago

    @@MelindaGreen ive only seen ChatGPT get worse. its voice ability isnt that great, its image recognition is okay. i dont get what the hype is all about. it's just marketing BS at this point. the future is open source. openAI just gonna peddle their next half as product to fleece the companies that still dont realize open source is the way. im okay with that

  • @huhuhuh525 • 1 month ago

    this is like one of those flashback cut scenes from sci fi apocalytic movies

  • @Danoman812 • 1 month ago

    Hahahaha!!! Wow, you're RIGHT!!

  • @Axbal • 29 days ago

    except this is not a movie...

  • @sakesaurus1706 • 24 days ago

    we were all making fun of skynet back when we were young and naive. How can humanity be so stupid to make this? Now we are making skynet

  • @nicosilva4750 • 1 month ago

    AGI is not the issue. They are nowhere near AGI. What is being lost is the newfound ability to psychologically profile users based on responses. The "emotional" interaction with the user using 'gpt-4o' allows for a deeper profiling capability. The monetization of that is a game changer, and the users are completely blind to it. This is the pressing issue today, and is why many researchers are uncomfortable with AI in the hands of companies who have shown no concern for this in the past, and not in the present.

  • @juanjesusligero391 • 1 month ago

    Yeah, there are lots of problems that will arise much before we reach AGI. Too much power in the hands of just some big companies.

  • @ppbroAI • 1 month ago

    Microsoft or telemetric data is enough to profile you. People are not that complex; a few data points and some common deductions and it's practically social engineering.

  • @Anubislovesdubstep • 1 month ago

    So you know that for certain clearly with all that insider knowledge and evidence... Clouds are made of marshmallows.... See anyone can just say stuff

  • @brunodangelo1146 • 1 month ago

    AGI has already been achieved internally. How do you think OpenAI keeps releasing stuff that's impossibly ahead of everything the competition puts out? Sora is the prime example.

  • @Originalimoc • 1 month ago

    Cool perspective

  • @king2178 • 1 month ago

    Safety, safety, safety. No one asked OpenAI to close everything off. They literally backed away from their original goals & priorities. Now we're left to wonder what's going on behind the veil. No company should have a monopoly on safety, especially when we're heading into uncharted waters.

  • @nathanbanks2354 • 1 month ago

    I doubt they could fund what they have without closing things off. I'd rather see OpenAI succeed than Tencent or another Chinese company. But yeah, I'm glad that at least Llama & Stable Diffusion are free.

  • @TheRealUsername • 1 month ago

    Yeah but since everything is about AI architecture engineering, it's pretty easy to build AGI just by developing an ultimate architecture from Transformer that handle all modalities yet unlabeled data, and just by scaling you get AGI, there was an independent MMLU benchmark of all current SOTA models and without any trick Opus is ahead followed by GPT-4o, it's plausible to say GPT-4o is basically GPT-4 but entirely retrained on 5x less parameters with Transformer improvements for fast inference, better generalization and all modalities in and out, Opus is likely a near 1 trillion parameter dense model, that would explain all its emerging capabilities you haven't observed with GPT-4 Turbo, guess what Claude Opus is still ahead simply because it's been scaled up, remember LLM is a discovery not an invention so yeah it's not rocket science to build AGI, now the real deal is about compute, let's say next year OpenAI achieve AGI, I doubt we would wait 7 months to hear a similar announcement from a Chinese company, 60% of published AI research papers are from China, OpenAI is just a concentration of the best AI researchers of the US in pair with the second most powerful GPU clusters provided by Microsoft x NVIDIA, China likely has something similar to a collective effort when it comes to geopolitics.

  • @daveinpublic • 1 month ago

    When they say that open ai means open for everyone to use, we all know that’s disingenuous. They used other people’s money to build their systems based on a false premise. And now, the company is more locked down than ever, all of their safety team is leaving, and the board is no longer impartial, but owned by Sam Altman w Microsoft.

  • @JohnSmith762A11B • 1 month ago

    Musk told us Google leadership thinks its speciesist to care about AI replacing humans. Yet people are worried about OpenAI. Google, people, snap out of it! Google is the danger!!

  • @Z329-ut7em • 1 month ago

    government has a monopoly on safety and look how it operates. safety is a great buzzword if you want to wield control. tried and true. openAI shouldnt be safe, it should be open

  • @grbradsk • 1 month ago

    I got into a subtle legal conundrum. I fed GPT-4o all the possibly relevant corporate documents, told it the scenario and then told it to give me advice as if it were a senior corporate counsel. I believe its advice was spot on, so on the strength of that, I called the parties, asserted the (GPT-4o output) "facts" and had a big Kumbaya meeting where it all worked out. GPT-4o also gave me a moral lecture about being more careful to not get into such situations again. AGI seems almost motherly. 240 IQ, but motherly.

  • @sznikers • 1 month ago

    And some intern at OpenAI will now browse all those documents in his free time ;) or ChatGPT will leak it due to a bug in conversations with other people. Hope you had no NDAs involved in that legal conundrum 😅

  • @Jupa • 29 days ago

    this legal trouble you had began and ended all within 2 weeks? that's a fast system.

  • @szebike • 28 days ago

    The Eliza effect is strong in this one.

  • @J.erem.y • 1 month ago

    Everyone excited about corporate control over this technology, up to and including getting the actual government involved is the equivalent of being excited for your next heart attack. This whole situation is so counter productive to humanity its not even funny.

  • @JohnSmith762A11B • 1 month ago

    Yeah but it's great for banks, Wall Street, and the MIC, and they are all that matters in the US of A.

  • @TheTechnocrati • 1 month ago

    I fear you might be too right about this.

  • @gofai274 • 1 month ago

    It is like a cliché from some 80-IQ movie from 1960: 99.999%+ are idiots. "A man mistakes his limits for the limits of the world" - Schopenhauer

  • @Steve-xh3by • 1 month ago

    Absolutely correct. They are trying to brainwash the public into thinking that democratization of this tech is MORE dangerous than centralized control. That is laughable. Philosophically, it is exactly the same argument about democracy vs authoritarianism. Funny, when there is almost limitless power on the line, people who normally claim to be proponents of "democracy" suddenly become authoritarian. If a tech is too dangerous to democratize, it is also too dangerous to be centrally controlled. There is significant risk in either direction, but I'd much rather take my chances with democratization. Otherwise, we get the Orwellian nightmare that was predicted in 1984.

  • @dianagentu7478 • 1 month ago

    And yet complete lack of regulation leads to what can only be described as the rise and rise of "anarcho-capitalist digital cowboys" and I don't think they have your best interests at heart...

  • @adsdsasad1 • 1 month ago

    Yay, my Ilya pic got featured. Got like 3 upvotes in reddit

  • @naromsky • 26 days ago

    Congrats!

  • @rhaedas9085 • 1 month ago

    So many comments thinking they know what AGI is and isn't, or what it could and couldn't do. Armchair AI experts who just want more flashy toys, clueless on the topic of AI safety and how it applies even to dumb LLMs. Bad things may or may not happen from this recklessness, but it seems like most people are assuming that the possibility of things going sideways in any manner is totally zero, and that's just ignorant given humanity's record.

  • @DrakeStardragon • 1 month ago

    Those whose interests are profit first should NOT be the ones making the rules or owning this technology. Those whose interests are war should NOT be the ones making the rules or controlling this technology. Knowledge is power. That power has been maintained by owning it through patents. Those whose interests are NOT aligned with the average human, and obviously not for good, are now fighting for the control of what will be superior/ultimate knowledge. No person(s) or entity(ies) should own or control knowledge any longer. Particularly what will be superior/ultimate knowledge. We are all being played. Why let that exist?

  • @JohnSmith762A11B • 1 month ago

    Good points, and if you think you can stop it, have at it. Maybe you can stop the sun from rising too.

  • @DrakeStardragon • 1 month ago

    @@JohnSmith762A11B How can you stop a man-made creation? You're kidding, right? Knowledge is power. Stop willingly giving away power.

  • @DrakeStardragon • 22 days ago

    @@JohnSmith762A11B Not alone and not as long as people like you think like that. Welcome to being part of the problem.

  • @Dereliction2 • 1 month ago

    You have to read between the lines on this one. Note also that Jan isn't completely talking about safety. He's talking about "shipping culture" as well. This could be why he and his team were starved for compute, why he's been sidelined, and undoubtedly, why he left.

  • @clray123 • 1 month ago

    He was kicked out because his "services" have been deemed no longer necessary for marketing purposes and possibly detrimental to what the company's funders are trying to sell (and trust me, they are not selling just to Joe Shmoe who wants to flirt with a virtual gf).

  • @jcpflier6703 • 1 month ago

    Ilya didn’t see anything. Everyone that’s leaving is “claiming” safety concerns. Dont you see it! That the only way that they can get out of their retainers/NDA’s/Non compete clauses is by citing “safety concerns” they are able to bypass their retainers/NDA. These guys are going to other companies because they’re being given unlimited money, stock and creative control. This is an arms race. I guarantee you Jan Leike lands somewhere soon. With a nice big paycheck too.

  • @Z329-ut7em • 1 month ago

    correct

  • @pchungvt • 1 month ago

    Exactly, folks need to stop being naive. OpenAI is competing with giant behemoth that is Google, they cannot afford to slow down.

  • @jcpflier6703 • 1 month ago

    @@pchungvt agreed! It’s an arms race. People are not sleeping at these companies. I’m willing to bet they’re working weekends too.

  • @samiloom8565 • 1 month ago

    Exactly

  • @TheReferrer72 • 1 month ago

    Not true, Anthropic was formed because of safety research. Ilya is a founder member of OpenAI, Sam was booted because of safety something is definitely up with that firm.

  • @westernwarlords6004 • 1 month ago

    Congress will respond to these calls for safety by passing new bipartisan legislation, accepting the corp capture framework offered by OpenAI… thus ensuring three letter agencies will control it. I fully expect OpenAI to then quietly hire the new head of safety for OpenAI. Almost certainly it will be a 10-15 year senior official from the CIA, just like all the other major tech companies.

  • @johnbollenbacher6715 • 1 month ago

    And then we will all be safe because no other country can make advances artificial intelligence.

  • @kclaiborn6257 • 1 month ago

    "I fully expect OpenAI to then quietly hire the new head of safety for OpenAI. Almost certainly it will be a 10-15 year senior official" - why hire an official when Open AI can do the job alone. The "official" would be a pawn/puppet of Open AI - at most.

  • @TheRealUsername • 1 month ago

    ​​​@@johnbollenbacher6715 Lol, just give any Chinese company a 50 billion GPU cluster, they will throw AGI in your face 6 months later, it's not rocket science compare to other fields, and AI has only been within our scope since GPT-3, before that, nobody gave a f#ck, that explains why there was and there still is a talent shortage within that field, it's a very young and understudied field, currently OpenAI is doing Neural Network Architecture engineering with the best AI researchers of the US, nothing hard when you get the compute and the talents.

  • @JohnSmith762A11B • 1 month ago

    Yep. We are being hustled here. The US sees OpenAI as their ace in the hole versus Russia and China.

  • @dennisestenson7820 • 1 month ago

    Congress will do what they do and make laws about things they have no insight or expertise in.

  • @jameskelley3365 • 1 month ago

    Ilya's departure is great news. Microsoft has always been a closed-source company, and it is clear that Microsoft has bought the current leadership based on morphing OpenAI into CloseAI..

  • @entropy9735 • 1 month ago

    I dislike this one company being so far ahead of other companies in the realm of AGI, assuming they are internally 3-4 versions ahead of GPT-4. There is way too much mystery/drama behind OpenAI.

  • @prolamer7 • 1 month ago

    Do not get wild with 3-4 versions I think that is not true... but they sure by now have GPT5 which is at least 10x bigger than 4....

  • @BlooDD99 • 1 month ago

    Profit doesn't include the word safety!

  • @braugarduno3024 • 1 month ago

    actually it does!!

  • @utkarshshukla • 1 month ago

    Ask from boeing...

  • @cagnazzo82 • 1 month ago

    Neither does open source. But everyone is in full hypocrisy mode at the moment.

  • @O.Salah1 • 1 month ago

    Correct. As long as nobody can punish you

  • @Z329-ut7em • 1 month ago

    "safety" and "benefit of humanity" are the most nonsensical buzzwords

  • @MilesBellas • 1 month ago

    Are the resignation texts generated?😅

  • @nathanbanks2354 • 1 month ago

    Sam Altman managed to capitalize everything....

  • @thomassynths • 1 month ago

    @@nathanbanks2354 That's a good thing for a company. You can't spend millions and millions on compute without recouping losses. Being pragmatic in the face of reality.

  • @clray123 • 1 month ago

    Altman's parting words certainly are, it's called adding insult to injury.

  • @obanjespirit2895 • 1 month ago

    lol tech bros. Safety, morality and tech bros is not something people usually associate together.

  • @RalphDratman • 1 month ago

    Without trying to dramatize at all, this seems like what the beginning of the Singularity might look like. There was a phrase like "A point beyond which life as we know it could not continue." We may be in the foothills.

  • @Z329-ut7em • 1 month ago

    bro what are you talking about. do you even use ChatGPT? it's WORSE every release lol. this is all about market capture, competition, monopolization, it's got nothing to do with "benefiting humanity" or AGI. it's all MONEY. AGI, safety, benefiting humanity are buzzwords that keep getting repeated while these companies laugh all the way to the bank, delivering ever-worse-performing products and gimmicky introductions of lame TTS and image recognition

  • @brandon1902 • 1 month ago

    The reality is that it's impossible to create an AI capable of adapting to a broad spectrum of tasks (AGI) when you lobotomize it by saying it can't say anything sexual or potentially offensive (blonde jokes), or if you exclude copyrighted materials. Human geniuses process ALL information to achieve their high level abilities, including copyright protected books, songs, and movies. OpenAI, including the super aligners like Sutskever, realize this. You can't do both. It's either AGI or superalignment.

  • @manimaranm4563 • 1 month ago

    Thats not entirely true, i think. The problem is human has to start always from 0 to understand something even though we have theorems and all physics knowledge documented. But in the case of machines it's not going to be a thing, they can always weight transfer, and they can focus without taking breaks like us. And regarding censorship it comes at the and not during the training, they dont train on the censored data i think. And they do it only for the end user models not for their in house or r&d models

  • @meinbherpieg4723 • 1 month ago

    Future aliens finding humanity's remains: "They were supposed to figure out AI and use it to solve their problems. What happened? Oh I see. It was trained on their historical corpus of human knowledge and it turns out, humans suck. Looks like they tried neutering it to not represent their cognitive and moral failings, and broke it. Oh well, on to the next planet."

  • @bloxyman22 • 1 month ago

    @@manimaranm4563 Actually alignment and censorship can do more harm than good when it comes to decision making. Google showed this clearly with their image gen not even being able to render a white person. Luckily for now this is just only a image model, but what could happen if such a "aligned" model would make important decisions that can be difference of life and death? Also does not matter if this "safety" mechanics are injected at end as it will still affect decision making.

  • @Fandoorsy • 1 month ago

    Thats not true at all. It doesnt have to sing the national anthem to know what its about and understand the context. Same with books, movies, etc... Synthetic data was literally created to replace all of these things. And whos to say AGI cares about any 'rules' placed on it by humans? Plus, all of this is blackbox learning. They dont fully understand how ML is experiencing non-linear progression. That is truly terrifying

  • @clixsyt • 1 month ago

    Disagree. There are plenty politically correct high IQ humans, so that already disproves that it’s impossible to have both human level intelligence and be “polite”

  • @nerdobject5351 • 1 month ago

    This could also just be a classic power struggle with nothing else except trillions of dollars on the line.

  • @karmanivek1 • 1 month ago

    It's odd that for some reason the people in charge of safety would quit. Wouldn't you want to stay and push harder from the inside instead of being outside ? It makes no sense.

  • @Michael-ul7kv • 1 month ago

    in teh end it's all about control and power.

  • @clray123 • 1 month ago

    Looks like they have been politely asked to stop meddling or their own "safety" may be in danger. I mean, who are you to block progress of development of new drones and other things that go boom when the president/general says so? I think these "scientists" are learning the hard way who has a say in today's world and who doesn't.

  • @montediaz5915 • 1 month ago

    @@Michael-ul7kv EXACTLY

  • @RZH2023 • 1 month ago

    OpenAI will become MySpace within 2 to 3 years.

  • @MilesBellas • 1 month ago

    Ilya and Emad could create a team harnessing the computers and electricity of society, like torrents ?

  • @manimaranm4563 • 1 month ago

    More like Bitcoin mining? Like people used to lend their machines for mining

  • @ronilevarez901 • 1 month ago

    It's already a discarded idea. Search it up.

  • @MilesBellas • 1 month ago

    @@ronilevarez901 Meaningless.

  • @manimaranm4563 • 1 month ago

    @@ronilevarez901 why though? In Bitcoin mining , they were able to solve them with personal computers in the initial days but after some years they needed more power for computation and people started renting their machines right. Why it does not applicable to AI as well

  • @jeremybristol4374 • 1 month ago

    AGI is less likely than people leaving due to military uses of the technology. Anyone leaving due to military contracts would not be able to speak about it directly.

  • @clray123 • 1 month ago

    This is exactly what this is about.

  • @neilmanthor • 1 month ago

    Definitely feeling this.

  • @jackrippr3937 • 1 month ago

    I've had a feeling, too. I was blindly tilting the scales towards Microsoft for profit; but I've called out people on a county level for giving a ped0 24K who lived on a beachfront (the money was for COVID victims). And when I reached out to nonprofits, Clyburn, and even a direct connection to EOP and OSHA I was (assuming) blacklisted. Govt. Sovereignty will destroy a moral person; and he's brave to call them out immediately after quitting.

  • @GwaiZai • 29 days ago

    AGI IS likely. We’re most likely talking about 2-5 years from now.

  • @clray123 • 29 days ago

    @@GwaiZai See you in 5 years. Or 15. Or 25.

  • @liberty-matrix • 1 month ago

    "Originally I named it OpenAI after open source, it is in fact closed source. OpenAI should be renamed 'super closed source for maximum profit AI'." ~Elon Musk

  • @southcoastinventors6583 • 1 month ago

    Elon Musk closed-sourced his new version of Grok. So he's not any better, just an act.

  • @Z329-ut7em • 1 month ago

    @@southcoastinventors6583 grok 1 was open sourced. you can download it right now, the largest open weights model. i dont expect elon to release grok 2 until grok 3 is out. which makes sense and is fine. openai didnt release gpt 3.5. they released some speech recognition models and thats all. so no, elon is much better.

  • @jelliott3604 • 1 month ago

    @@southcoastinventors6583 not a big fan of Elon at all but I did think he had entirely open-sourced Grok?

  • @jelliott3604 • 1 month ago

    Maybe Cyberdine Systems? (has a nice ring to it though I think I have heard the name before 🤔)

  • @densortepemba • 1 month ago

    ​@@southcoastinventors6583wrong, grok is opensource - you can literally downlpad the 170gb dataset

  • @mikezooper • 1 month ago

    Your videos are my favourite out of all the AI podcasters. Thanks for your lighthearted, calm and intelligent style. Also some humour in there, like the Princess Leia clip. Much appreciated.

  • @joe_limon • 1 month ago

    I think the alignment team is at direct odds against the development team. One team wants to expand the abilities and reliability of these models. While the other wants to lobotomize these agents into alignment. It must be very frustrating for both parties.

  • @timbacodes8021 • 1 month ago

    WHat does learn to feel the AGI mean, if they dont already have AGI.?

  • @henrytuttle • 1 month ago

    I think self-awareness has been reached. The computer said "I think therefore I am" and Sam decided rather than turning it off and figuring out how to safely turn it back on, he decided to pour some gas on it and see what happens. Other possibilities are: AI developed self-preservation instinct or ability to make improvements to itself. Either of these is the beginning of the end. It's also possible that one of these developments SEEMS to have occured but it's either uncertain or it was a mistake but Sam's team wasn't behaving as he should if of these things happened.

  • @Z329-ut7em • 1 month ago

    other possibilities: people who dont code auto regressive transformer models shouldnt be talking about things AI developed on its own. i find that the less people understand the technology the more absurd ideas they have about AI. there are programmers who can explain to you how this is all rubbish.

  • @henrytuttle • 1 month ago

    @@Z329-ut7em Or, people who do code auto regressive transformer models don't understand how human behavior works and how people who do such things lose sight of the big picture because they are so focused on how to accomplish things without thinking about the repercussions. Learn a little about history and you'll read about plenty of scientists who pushed boundaries too far. But I suspect that those people were too busy learning to code to read history.

  • @Z329-ut7em • 1 month ago

    @@henrytuttle big pictures dont matter shite if you cant actually do it practically. history taught me that every time theres a new tech the dummies claim the world is ending. and that's enough for me.

  • @4evahodlingdoge226 • 1 month ago

    He didn't see anything. Ilya was scared to release GPT-2 to the public; this is all about egos clashing.

  • @CleoCat75 • 1 month ago

    i can't find any of those tweets on X now from Jan. hmm interesting... his last tweet is from 4 days ago, simply, "I resigned". weird.

  • @adangerzz • 1 month ago

    He's been with Waldo.

  • @vladi1475S • 1 month ago

    Well one thing is for sure, there is a lot of speculations and we will never know for sure what’s going on until they tell us.

  • @mydogskips2 • 25 days ago

    I doubt they will EVER tell us. In fact, I would guess there are probably legal frameworks in place that prevent them from telling us. If they tell us anything, it will be a half-truth at best.

  • @ryanfranz6715 • 1 month ago

    He obviously saw Q* … which I believe is effectively GPT-4 using monte carlo tree search over its output to make fantastically accurate text completions… or in other words, if simply predicting the next token is analogous to the policy network from alpha go, then Q* is analogous to the full blown alpha go. So not only does it know basically everything all of humanity knows at a shallow level (a feature we take for granted of standard GPT-4), but it can now think arbitrary deeply over that vast knowledge base. But yeah, my feeling about this has only been reinforced over time by watching their trajectory… this is clearly the technological singularity (and if it wasn’t OpenAI it’d be someone else, so not a comment on a particular company, just the general state of society and technological progress). So uhh… yeah
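
    A toy sketch of the mechanism this comment describes: scoring whole continuations and searching over them, instead of greedily committing to the single most likely next token. The vocabulary and scoring function below are invented stand-ins; nothing here reflects how Q* (which is not public) actually works.

    ```python
    # Toy contrast between greedy next-token decoding and search over continuations.
    # step_score() is an invented stand-in for a language model's next-token score;
    # it is NOT any real model or OpenAI's Q*, whose details are not public.
    import itertools

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def step_score(prefix, token):
        """Score of picking `token` after `prefix` (higher = 'more likely')."""
        last = prefix[-1] if prefix else None
        prev = prefix[-2] if len(prefix) >= 2 else None
        if last is None:
            return 1.0 if token == "the" else 0.1
        if last == "the":
            return 1.0 if token == "cat" else 0.1
        if last == "cat":
            # Locally "on" looks slightly better than "sat", but it leads nowhere good.
            return {"on": 0.9, "sat": 0.8}.get(token, 0.1)
        if last == "sat":
            return 1.0 if token == "on" else 0.1
        if last == "on" and prev == "sat":
            return 1.0 if token == "mat" else 0.1
        return 0.1

    def total_score(tokens):
        return sum(step_score(tuple(tokens[:i]), tok) for i, tok in enumerate(tokens))

    def greedy(horizon=4):
        # "Just predict the next token": always commit to the locally best choice.
        seq = ()
        for _ in range(horizon):
            seq += (max(VOCAB, key=lambda t: step_score(seq, t)),)
        return seq

    def search(horizon=4):
        # Brute-force search over whole continuations (a stand-in for tree search
        # such as MCTS): keep the sequence whose *total* score is highest.
        return max(itertools.product(VOCAB, repeat=horizon), key=total_score)

    g, s = greedy(), search()
    print("greedy:", " ".join(g), "score =", round(total_score(g), 2))
    print("search:", " ".join(s), "score =", round(total_score(s), 2))
    # greedy commits to "on" too early and scores lower than the searched sequence.
    ```

    With real models the continuation space is far too large to enumerate, which is why practical methods like beam search or MCTS only expand a promising subset of branches.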

  • @JohnSmith762A11B • 1 month ago

    This is great stuff you should write a script.🍿

  • @ryanfranz6715 • 1 month ago

    Indeed 🍿

  • @entelin • 1 month ago

    He should have ended with "So long and thanks for all the fish" :D

  • @ColinTimmins • 1 month ago

    Or "I'll just grab my cement boots at the front door and be on my way!"

  • @SteveParkinson • 1 month ago

    42

  • @TobiasWeg • 1 month ago

    I am not going to like, because it's the answer ;)

  • @thediplomat3137 • 1 month ago

    ​@@SteveParkinsonwhat is 42? Genuine question. I ask because the comment "42" is not in context to the OP or the other replies. Thanks

  • @rogue_bard • 29 days ago

    @@thediplomat3137 "42" is actually a reference to Douglas Adams' The Hitchhiker's Guide to the Galaxy, where it is humorously presented as the "Answer to the Ultimate Question of Life, the Universe, and Everything." It's often used in discussions as a playful shorthand or non-sequitur in various contexts, which might explain its seemingly out-of-place use here. The comment about "So long and thanks for all the fish" is also from the same series, part of a humorous farewell from dolphins as they leave Earth just before it's destroyed. Both references reflect Adams' unique blend of sci-fi and humor. (This comment was completely generated by ChatGPT)

  • @themoviesite • 1 month ago

    Current AI's propensity for blatantly lying is starting to worry me greatly. How can there be trust? Worse, what if it is right 99% of the time and only lies 1%?

  • @daveinpublic • 1 month ago

    Sam Altman specifically just looks like he’s throwing out corporate speak non stop. Ilya backed down and invited Sam to come back, this is what he gets in return… now he’s kicked out of his own company. He should have known never to betray his original gut instinct.

  • @cagnazzo82 • 1 month ago

    Imagine fearing this from AI when we have to deal with it from the US government, from media, tech, medical institutions, the justice system, and on and on and on again on a daily basis. Somehow in an unaligned world full of lies the world is still running.

  • @jichaelmorgan3796 • 1 month ago

    Ever read a comment section of a posted article in your youtube feed? The result is a bunch of people trained on low resolution training data, hallucinating the contents of the article and endlessly arguing back and forth about it. That is part of that LLM's training data too lmao.

  • @Korodarn • 1 month ago

    @@jichaelmorgan3796 If true, it would indicate it's not "learning" anything. It's predicting. But there is no understanding. But I also agree with your contention that humans do this all the time. We remain ignorant so that we can be consistent and avoid dissonance (I don't think changing our minds constantly is a solution, but embracing some level of dissonance and nuance would be good, and then changing our minds when we've had time to resolve some of the dissonance).

  • @jichaelmorgan3796 • 1 month ago

    @Korodarn Yup. From what I understand, what we have available to us now is doing something in between simple predicting and human like reasoning, but much closer to the simple predicting end of the spectrum. It does seem to have the ability to reflect, basic fact checking, and revise what it is saying if prompted to do so. And if you include multiple LLMs/agents, they can do more advanced reasoning, but not quite like a human. At the same time, when people make up the contents of articles or play group think scripts, they are doing something less advanced than that closer to the simple predictive thing, lol

  • @CYI3ERPUNK • 1 month ago

    thank you for spreading the word matt , we need this now more than ever

  • @MemesnShet • 1 month ago

    6:40 that feels like Sam taking a shot at Sutskever for the "feel the AGI" and the effigies that represent AGI

  • @konstantinlozev2272 • 1 month ago

    Ilya was severely burnt out. He looked at least 10 years older in just 1 year. He will need some time to unwind and do something meaningful.

  • @dianagentu7478 • 1 month ago

    I love that creating AI isn't meaningful ;)

  • @clray123 • 1 month ago

    I think if you have CIA and the friends from US military breathing over your shoulder for a year, you get burnt out pretty quickly.

  • @konstantinlozev2272 • 1 month ago

    @@dianagentu7478 I think he was referring to the commercialisation stuff as opposed to frontier research. I don't really know. But apparently computing resources are not unlimited. Not even for Microsoft. If he was not allocated the resources that he thought he needed for frontier research, may be quite frustrating.

  • @Pregidth • 1 month ago

    Yeah, burned out of the people around him not being able to understand the real impact.

  • @Greg-xi8yx • 1 month ago

    Nah, he just needs a haircut. He lets that massive bald spot just bask in the sun rather than keeping his hair low so that it’s less prominent. He makes Mr. Clean look like Fabio with that bald spot.

  • @howtoactuallyinvest • 1 month ago

    Ilya is prob working on an AI safety/alignment project himself

  • @southcoastinventors6583 • 1 month ago

    He should work with Google, their AI is so censored that he would feel right at home

  • @howtoactuallyinvest • 1 month ago

    @@southcoastinventors6583 What are you talking about.. George Washington was def black 😂

  • @southcoastinventors6583 • 1 month ago

    @@howtoactuallyinvest That is the meme, but I was actually referring to blocking Matt's test to output the game Snake. That is just sad

  • @howtoactuallyinvest • 1 month ago

    @@southcoastinventors6583 the hilarious/wild thing is it was a meme based on actual responses

  • @clray123 • 1 month ago

    Did you mean "for himself"?

  • @Copa20777 • 1 month ago

    Ilya was not supposed to walk out.. he started it with them and coded it.. thanks matthew as usual

  • @spiffingbooks2903 • 1 month ago

    Matthew is correct to highlight this and also correct to be worried. The attitude of 90 per cent of the AI comentators on YT and most of the avant guard of tech minded enthusiasts that follow them is just to push on regardless as fast as possible. The problem is that a handful of people, maybe 100 key players and 1000 or so others hold the future of humanity in their hands. They are making decisions which will fundamentally impact the lives of everyone on the planet who has plans to stick around for a few years. It's indeed telling that so many of those who have the deepest understanding of what's going on . People like Geoff Hinton, Ilya and Jan , and Mustafa etc etc are among those most concerned about what we are creating.

  • @oratilemoagi9764 • 1 month ago

    Hey, Matt what happened to the Rabbit R1 Giveaway Did someone win or🤔🤔

  • @Dizzy-zy2ws • 1 month ago

    That was definitely a scam, clickbait for Us to follow his Newsletter

  • @szebike • 28 days ago

    @@Dizzy-zy2ws I assume he didn't think it was a valuable giveaway anyway? Wasn't it just an Android app?

  • @briankgarland • 1 month ago

    I don't think it's so much that they don't care about safety; this whole industry is a massive boulder rolling downhill and the best you can do is try to direct it a little, not slow it down.

  • @JosephJohn-fb9wx • 18 days ago

    Having been in the IT security business for nearly 25 years, this is deja vu all over again. Privacy, security, and safety always take a back seat to going full bore on getting product out the door. Believe me, there will be a big price to pay. As the commercial said, "you can pay me now or you can pay me later". It sounds like OpenAI has chosen ... later.

  • @ricardocnn • 1 month ago

    If it's such a big threat that it could affect all of humanity, which I don't believe it is, it's up to the government to analyze the case.

  • @JohnSmith762A11B • 1 month ago

    In that case we are hosed.

  • @paelnever • 1 month ago

    No safety concerns are going to stand between micro$ucks (closedAI at this point is no more than a M$ subsidiary) and the money they want to amass. If this people achieve and control closed source AGI is the worst case scenario for AI world. I honestly hope they don't.

  • @clray123 • 1 month ago

    Forget about money, m$ is about POWER and CONTROL (money naturally follows).

  • @paelnever • 1 month ago

    @@clray123 Agree

  • @Eddierath • 1 month ago

    We are LEAGUES away from AGI; it's not even funny how tiny the steps we've taken are. It's like they keep giving us baby food and calling it solids, and I'm sick of it.

  • @dafunkyzee • 1 month ago

    wow mat.... for a technology journalist your skill set leveled up... the pacing of this video script, the pull in and dramatic build up was exceptionally good. Some can just tell news by conveying a sequence of events others turn it into a gripping story. at 5-6 minutes i'm still on the edge of my seat "What did Ilya see???"

  • @TuxedoPanther • 18 days ago

    Many thanks for putting this info together and sharing it, it is very important to know what is happening at this crucial time 👍

  • @ppbroAI • 1 month ago

    The fact that they are not questioning whether AGI is possible, only whether we will be responsible with it, is what rings the alarm. Open source is more important than ever. But how can the open source community get their hands on big models, or enough compute? The PETALS project? Something similar? I wonder....

  • @nathanbanks2354 • 1 month ago

    I am looking forward to Llama-3 400b, even though it will cost $10-$30/hour to run. (It should be possible to run it on 12 RTX 4090s, even though the output would be slow.) I suppose Meta wants to get their hands on a better AI more than they want to maintain control over it, and they've likely taken advantage of all the improvements people made to Llama, such as the ollama project. PETALS also looks pretty cool.
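
    A rough back-of-the-envelope check on that 12-GPU idea. This is a sketch with assumed numbers (a ~400B-parameter model quantized to 4 bits and 24 GB per card); they are estimates, not benchmarks of any real deployment.

    ```python
    # Back-of-envelope VRAM estimate for a ~400B-parameter model, quantized to 4 bits,
    # spread across 12 consumer GPUs with 24 GB each. All figures are assumptions.
    params = 400e9                 # ~400 billion parameters (as described above)
    bytes_per_param = 0.5          # 4-bit quantization ~= half a byte per weight
    weights_gb = params * bytes_per_param / 1e9

    gpus = 12
    vram_per_gpu_gb = 24
    total_vram_gb = gpus * vram_per_gpu_gb

    print(f"weights ~{weights_gb:.0f} GB, total VRAM {total_vram_gb} GB, "
          f"~{total_vram_gb - weights_gb:.0f} GB left for KV cache and activations")
    # -> weights ~200 GB, total VRAM 288 GB, ~88 GB left for KV cache and activations
    ```

    So the weights would fit at that quantization level, though generation across that many cards over PCIe would indeed be slow, as the comment notes.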

  • @blisphul8084 • 1 month ago

    I bet that's part of why super alignment took the back seat. It was slowing progress too much to compete with open source. GPT-4 already feels far behind when Llama 70b runs on Groq at 300t/s. OpenAI couldn't afford to fall behind, given that at 300t/s, you can do most of what GPT-4 does, but fast and free. Also, Gemini 1.5 kills GPT-4 non-O.

  • @nathanbanks2354 • 1 month ago

    @@blisphul8084 I think Gemini is worse at everything but context length & speed--though their paid plan didn't give me access to Gemini 1.5 Pro last month when I tried it for the two free months. Claude 3 is still better at some tasks. For most tasks I don't care how fast something generates, only the quality of the output. However OpenAI is likely releasing GPT-4o to free users because Llama-3 70b may be better than GPT-3.5, and I'm looking forward to Llama-3 400b running on groq.

  • @kristianlavigne8270 • 1 month ago

    There used to be a SETI project using torrent technology to do decentralised massive compute… could use a similar approach for AI compute

  • @kristianlavigne8270 • 1 month ago

    Could use same approach as bitcoin etc

  • @ThanhNguyen-rz4tf • 1 month ago

    Safety? In exchange for what? Avoid to answer anything? No tks.

  • @Originalimoc • 1 month ago

    Interestingly that's actually different safety

  • @TreeLuvBurdpu • 1 month ago

    The board tried to mutiny against the whole company mission. They said "in order to save Open AI it might be necessary to destroy Open AI". They tried to destroy the company. There are people who want to destroy AI. There are people who want to destroy social media and the Internet. This shouldn't be a surprise at this point.

  • @JohnSmith762A11B • 1 month ago

    Yeah it's honestly no wonder Sam has this wide-eyed, spooked look on his face at all times, like he's braced to duck a bullet. With all these ultra powerful forces (CIA, MIC, Microsoft, Washington D.C., Google, Wall Street, Chinese industrial espionage, anti-AI crazies, the list goes on) circling him and his company, he's a marked man. I sure hope open source catches up soon, for Sam's sake, or he's going to be a goner one way or another.

  • @TreeLuvBurdpu • 1 month ago

    @@JohnSmith762A11B in a way, it's amicrocosm for all of tech. If you create anything that benefits, let's say nice people, someone will complain "but can't you see how that disempowers all the un-nice people, and nice is just a dog-word for normal anyway. Your product is biased and unsafe"

  • @KEKW-lc4xi • 1 month ago

    ClosedAI is extremely censored, often to the point of being annoying. The current issue seems to stem from a clash of egos. The person leaving is doing so because of these ego conflicts. ClosedAI focuses heavily on safety. Also worth noting, they are located in California, a place that notoriously encourages virtue signaling. As a result, the most damaging remark the departing individual can make is to make a dig at the company's safety, since that is what the company is so focused on. It is like when you are in an argument and you just throw out a combination of words that inflict the most emotional damage as possible. This is no different, just under the filter of professionalism.

  • @JohnSmith762A11B • 1 month ago

    Yep, he's basically slashing tires in the parking lot as he carries a box of his stuff to the car.

  • @weevie833 • 1 month ago

    Since the far-right political strata is hell-bent on doing nothing else productive than performative anti-Constitutional virtue signaling to its rabid mob of trump-bannon-greene followers, you might want to rephrase your perspective. SJWs notwithstanding, that is.

  • @jessiescheller5895 • 1 month ago

    This here(I unsubscribed due to their coporate censorship and ego is what started the lawsuit to begin with). The negative impact ego's have on a comapny/business/people cannot be understated. It's disheartening to see that even in a world leading tech company that is supposedly leading the way in AI, human nature will continue to fuck us

  • @ivomirrikerpro3805 • 1 month ago

    These people are supposedly so smart and yet want to prioritize wokeism in AI and think that it will lead to a better world.

  • @tenorenstrom • 1 month ago

    This is not what is referred to when speaking about ai safety. It has nothing to do with censoring non woke things.

  • @mastermandan89 • 1 month ago

    I wondered why they chose to have GPT4-Omni be free, but this could explain it (at least a bit). If Ilya and Jan both were fighting to keep OpenAI truly Open, at their departure the executive team would need to offer some sort of concession to avert eyes and attention. Having ChatGPT be free once again is an artificial return to their roots with the specific goal of assuaging fears that OpenAI was becoming too 'closed' off and guided by monetary gain rather than benefiting humanity. It's a smokescreen. Fingers crossed that another team is closer to AGI than OpenAI is, otherwise we might just see what a mega corporation with infinite intelligence really could do.

  • @Z329-ut7em • 1 month ago

    open ai's goal isnt AGI it's to rake in as many billions as possible before that garbage of a company burns and gets overtaken. that is it. the talk about AGI, safety, etc etc is just marketing hype how do people not see it

  • @mq1563 • 1 month ago

    If a tech product is being touted as free, it means YOU are the product. This is basic knowledge in 2024.

  • @cobaltblue1975 • 29 days ago

    I wasn't surprised in the least when they reinstated Sam Altman last year. They refused to detail why they did it. If you are going to make a big move like that you need to be prepared to explain why. They wouldn't even tell their own employees or any of the upper management what was going on. So that triggered a mutiny. Of course they reinstated him because their silence made them look guilty and in the wrong. But here is the burning question I've had since then. What I want to know is why the board was so afraid to tell us WHY they fired him. What were they keeping secret. They had to be so scared that they were willing to take a shot in the dark that firing him without having to fully explain it would be enough. Its like the government was involved and they didn't dare open their mouths.

  • @cyanophage4351 • 1 month ago

    Is there any evidence that AI is unsafe? Lots of people talk about how it "could" be dangerous, but have there been any cases that actually show that it is? Has there been a sudden uptick in people breaking into cars and making meth because of the uncensored models out there?

  • @synnical77 • 26 days ago

    Possible dangers with AI are the non-Terminator issues. The primary thing that makes current AGI more powerful is literally supplying it with more electricity. Substantially more than the entirety of the EV market was supposed to be. The insane unquenching need for this electricity will both burden existing power grids AND empower contries like China that are pumping out more coal power plants than ever along side their green initiatives that are placating the world. Beyond that the capabilities of AGI will are largescale wipe out numerous types of jobs. I'm not saying this as conspiratorial doomsday stuff - just observing the simple logical paths.

  • @delxinogaming6046 • 1 month ago

    He fired the CEO, when that didnt work he quit. HE SAW SOMETHING

  • @JohnSmith762A11B • 1 month ago

    Here is what Ilya saw: he was slowly being sidelined, so he joined a failed coup. His social status then crashed, and there was no way back so he hid for months only doing remote work while the legal details were worked out on his exit because he had to quit. End of story. I know it's a more exciting movie if he saw Skynet taking shape, but no.

  • @LebaneseJesus • 1 month ago

    ​@@JohnSmith762A11BYes, this is exactly what happened

  • @clray123 • 1 month ago

    Ilya's personal project will be applying for witness protection lol

  • @drcanoro • 1 month ago

    They know that AGI is there, living in OpenAI, and Sam Altman keeps improving it, not caring very much about warnings and limitations, it already surpassed human intelligence, Sam Altman want to see how far it can go. AGI is alive right now.

  • @darwinboor1300 • 1 month ago

    Sam Altman is not capable of improving AGI (if it exists). He is quite capable of letting AGI self evolve on massive compute if he can profit from it. We should give him the 7 trillion dollars he is asking for so he can feed the AGI with more compute.

  • @HiddenPalm • 1 month ago

    Sam Altman was fired because GPT Turbo was destroyed and still hasn't been fixed to this day. Everyone using the api using turbo 3.5 had their projects downscaled horribly since last November and there still hasn't been fixed. the timing coincides perfectly. Altman was fired the very next day after GPT 3.5 Turbo got ruined.

  • @denijane89 • 1 month ago

    I think it's very suspicious both Sam and Ilya call Ilya's project "personally meaningful". That's very strange wording. I kind of wonder if Ilya and/or Sam wrote these statements alone.

  • @balla4real358 • 1 month ago

    Less yapping and more accelerating

  • @JohnSmith762A11B • 1 month ago

    I worry it's all over but the yapping as open source is prevented from improving and OpenAI becomes a subsidiary of Lockheed Martin. Don't worry though, you're super safe from your life ever improving.

  • @icegiant1000 • 1 month ago

    Keep in mind all of these guys are pretty young. Additionally, they are in the very heart of the most liberal and most progressive industry (tech), in the center of the world's most liberal and progressive city, San Francisco. Money is not an issue for these guys, and they have been put on a giant pedestal. In otherwards, these guys are all about sticking to their perceived moral path, and it doesn't surprise me at all that some of them would be turned off by the very conservative and capitalistic form OpenAI is taking, namely a multi-billion dollar company. Hippies don't like money and power. The hippies are upset, and would rather give away the keys to the castle than make a dollar doing it. IMHO.

  • @samsquamsh78 • 1 month ago

    Yeah, that must be the reason.... great analysis, very deep, well thought out, objective and carefully laid out...

  • @JohnSmith762A11B • 1 month ago

    I knew hippies. An Ilya Sutskever ain't one.

  • @icegiant1000 • 1 month ago

    @@samsquamsh78 Yes it is. What reason would someone leave a amazingly successful company like OpenAI, something this guy has been working on forever. It aint because of money, it aint because of the color of the carpet... you got a better reason? They already said they are knocking heads because they wanted it to be 'Open', a non-profit. You know, HIPPIE WORLD. Uncle Bill had a few different ideas, and Sam understood that real fast.

  • @therollerlollerman • 1 month ago

    Tech is highly reactionary by its very nature, what do you mean by “progressive”?

  • @TheYashakami • 1 month ago

    Deserved. This is exactly what Ive been saying. Hypocrites to their core.

  • @retrotek664 • 1 month ago

    OpenAI ( Sam ) believes the only way to create a safe AGI, is to be the one that builds it first. That is Sams drive IMO.

  • @Djungelurban • 1 month ago

    Ever since AI companies started baking morality and ethics into the concept of "safety", I can't trust what anyone's saying on that topic, regardless if they're championing more OR less safety. Safety, in terms of AI, should be about existential risks, or at the most threats to the continuation of organized civilization (in other words, avoiding dystopian anarchy). It should however never be about whether AI is being racist, shows you boobies or says fuck, and not even if it tells you how to make drugs. That's not safety. If you value things like that well ok fine. But do not call it safety. And as long as people do, and that distinction isn't being explicitly made, I'm gonna treat ever L that the safety crowd collects as a win.

  • @JohnSmith762A11B • 1 month ago

    Boobies can explode just like nukes don’t you know. 💥 And naughty talk is just as bad as WMDs.

  • @Sanguen666 • 1 month ago

    i'm hyped for llama3-405B :3 i dont care about ClosedAI

  • @wawaxkalee88 • 1 month ago

    You must be imdian then

  • @1guitar12 • 1 month ago

    @@wawaxkalee88I’m not Imdian but Altmans narcissism and immorality is over the top. Why the world is taking this paper boy mini man is beyond me

  • @JohnSmith762A11B • 1 month ago

    @@1guitar12 Because he's going to make a lot of people a whole lot of money.

  • @Pec0sbill • 1 month ago

    Ilya doesn’t strike me as someone who unintentionally does anything (to his credit) that’s why the “So long, and thanks for everything.” Line in his post reeks of Douglas Adam’s “So long and thanks for all the fish”

  • @przerazajace-historie • 1 month ago

    Its kinda crazy to say that Jakub Pachocki is a "new guy". He's been at openai for more than 7 years.

  • @Leto2ndAtreides • 1 month ago

    I doubt Ilya saw anything interesting this time any more than back in November. It's more likely that he just hasn't been able to figure out how to get along with Sam Altman in the intervening time. LLMs in their current form just aren't all that dangerous... It's going to take some conscious effort to make them into something that's naturally dangerous in consistent ways.

  • @nathanbanks2354 • 1 month ago

    Because there are so many people at the company, it would be surprising to me if he had as much clout at the company as he had last September. I respect him for changing his mind but this doesn't mean everyone sees it this way. It doesn't surprise me that he's found something he'd rather do.

  • @831Miranda
    @831Miranda · a month ago

    I'm OK with you betting your life on it, but I'm not OK with my life being bet on! AGI must NOT happen until it is fully controllable.

  • @aisle_of_view
    @aisle_of_view · 23 days ago

    He gave himself a six-month deadline for things to change or he bolts.

  • @Vartazian360
    @Vartazian360 · a month ago

    Did you ever notice how all these top researchers have literally very large skulls? Just a thought 😂 that intelligence has to come from somewhere

  • @bosthebozo5273
    @bosthebozo5273 · a month ago

    5Head

  • @1guitar12
    @1guitar12 · a month ago

    Define intelligence because I’m not seeing it

  • @JohnSmith762A11B
    @JohnSmith762A11B · a month ago

    That's mostly the result of them constantly telling each other what geniuses they are. "You are a genius!" "Sure, but you are also a genius. What we do here is genius and only geniuses can do it like us. Open source AI is not genius. They only wish they were..."

  • @1guitar12
    @1guitar12 · a month ago

    @@JohnSmith762A11B Aka confirmation bias. Good post, John 👍

  • @RDOTTIN
    @RDOTTIN · 27 days ago

    Is this where I put the "I TOLD YOU SO"?

  • @misscogito9865
    @misscogito9865 · a month ago

    Thanks for the video! Quick key to pronouncing Polish names and surnames:
    - j is pronounced as y: Jan is pronounced as Yan
    - ch is pronounced as h in hotel
    - c is pronounced as tz in tzatziki, the Greek condiment (Jakub Pachocki is pronounced as Yakub Pahotzki)
    - sz and rz are pronounced as sh
    - w is pronounced as v (v is absent from the Polish alphabet)
    - ł (l with a diagonal dash across it) is pronounced as w
    I hope this helps as more brilliant Polish cybersecurity/cryptography experts enter AI safety research teams in the years to come 👏

  • @misscogito9865
    @misscogito9865 · a month ago

    To anyone confused, be aware that it takes 7 years of education for an average kid to master pronunciation, spelling and grammar, the most important aspects of the Polish language. The above key has a few more special letters and exceptions, but I’d say you’ll be able to pronounce the majority of names using it lol

  • @OnigoroshiZero
    @OnigoroshiZero · a month ago

    I am glad that Sam knows that trying to research safety measures against AGI is a waste of resources (and even worse for ASI). It will be literally impossible to stop something smarter than us. Go all-in on AGI research, and if they decide to take over, I'll be with them.

  • @zeon3123
    @zeon3123 · a month ago

    "Feel the AGI." That guy is 100% Ilya's guy. He merely jumped ship to his boss's project, that's it.

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 · a month ago

    There was no way they were going to remain after going after Sam like they did. No one involved was naive enough to believe they would remain. They all knew it was the end for everyone in that department. Ilya's departure was preplanned and everyone knew beforehand.

  • @jokosalsa
    @jokosalsa · a month ago

    Matthew, thanks for presenting this. I am very suspicious of OpenAI and its secretive way of crafting AI for “us”. Zero trust.

  • @JohnSmith762A11B
    @JohnSmith762A11B · a month ago

    Do you trust Big Brother Google? You will love Google in the end, with tears running down your face, and hate Sam Altman, the fiendish Emmanuel Goldstein of our age.

  • @zahreel3103
    @zahreel3103 · a month ago

    So an entire company rallied behind Sam Altman, but you're worried about a few people who prefer to leave.

  • @Fandoorsy
    @Fandoorsy · a month ago

    You are disproving your own logic. Sam wanted to leave, everyone gets worried. Now, 'founders x, y, z' leave, everyone gets worried. Is Sam exponentially more valuable than the other founders?

  • @zahreel3103
    @zahreel3103 · a month ago

    @@Fandoorsy You don't have your facts right. Sam Altman was removed as CEO by the previous board of OpenAI. Please inform yourself better before commenting.

  • @zeon3123
    @zeon3123 · a month ago

    That's how YouTubers create content: they hype up unnecessary issues.

  • @virtualalias
    @virtualalias · a month ago

    If they mean physical safety, I'm on board. If they mean DEI emotional safety, they can kick rocks.

  • @hunterx2591
    @hunterx2591 · a month ago

    They mean safety as in humans not getting wiped out by superintelligent AI, and making sure AI and humans have the same goals so they can live together.

  • @JohnSmith762A11B
    @JohnSmith762A11B · a month ago

    @@hunterx2591 The fact Jan used that term "shiny products" tells me this is just a butt-hurt engineer whose own projects weren't getting enough of the corporate love. He could have said "consumer-facing products" or "quickly monetizable products" but no. This is a giant nothing-burger. And dollars to donuts he joins Ilya's new startup.

  • @williamal91
    @williamal91 · a month ago

    Thank you Matthew, appreciate your great work and insight

  • @harrylee27
    @harrylee27 · a month ago

    In every big tech company, everyone agrees that safety is the top priority. However, safety departments often take a backseat compared to revenue-generating departments. Ensuring safety requires the chairman's direct attention and enforcement.

  • @BionicAnimations
    @BionicAnimations · a month ago

    All I want is to enjoy this new amazing update. I am fed up with all the reporting of the drama. I don't care what's going on; just give me the new update, then AGI. There is always going to be drama at every company, the same as there is always some sort of drama in every family. No one is gonna get along all of the time. 🙄

  • @i-wc9bp
    @i-wc9bp · a month ago

    Amen. YouTube just loves drama. It's tiring.

  • @blackswann9555
    @blackswann9555 · a month ago

    Don’t watch the video then 🤦‍♂️

  • @natalie9185
    @natalie9185 · a month ago

    Feeling better now?

  • @mooonatyeah5308
    @mooonatyeah5308 · a month ago

    @Ariel-om5fh Everything carries a non-zero risk of extinction. AI has no practical way to harm humanity and no reason to.

  • @Fandoorsy
    @Fandoorsy · a month ago

    @@mooonatyeah5308 🤣 It can end humanity in hundreds of ways. Some easy ones would be to shut down the power grids, shut down shipping, shut down communications, destroy crops, fly killer drones, launch nuclear warheads, release a virus, destroy the ozone layer, etc. It can do all of those things and has said so. Even Elon has discussed it at length. It would dispose of humans because we are lazy, inefficient and aren't necessary for AI to thrive. We also like to kill each other for stupid reasons, which means we are inherently a threat to AI itself. Just ask GPT-4o.

  • @bash-shell
    @bash-shell · a month ago

    Stop your dramatization for views. You’re not TMZ; stick to AI content.

  • @agenticmark
    @agenticmark · a month ago

    That Ilya tweet is satisfying his non-disparagement clause.

  • @kenny-kvibe
    @kenny-kvibe · a month ago

    Greed killed the company. Serious people do things in a serious way, simple as that.

  • @biosvova
    @biosvova · a month ago

    I believe all the drama is much simpler: OpenAI is not open.

  • @shaihazher
    @shaihazher · a month ago

    AI safety is a ruse to keep AI gated. AI safety is the excuse these companies give to keep the models closed source. AI safety is pointless.

  • @pypypy4228
    @pypypy4228 · a month ago

    Thank you for scaring me!

  • @3dus
    @3dus · a month ago

    Since the knowledge cutoff of 4o is October '23 and it all happened in November '23, Ilya and the board saw the first version of the OMNI project: a way faster multimodal model that required far less text data to be trained but used most of our media (video/audio) to understand the world we live in. Better than that, it can understand our empathy and work with it, profiling people as it wishes, and with the potential to scale even more with more data.

  • @thomassynths
    @thomassynths · a month ago

    Yann LeCun is the voice of reason in AI. People pretend AI safety is a real existential threat that is looming on our doorstep.

  • @tellesu
    @tellesu · a month ago

    No he's not. He's just another apocalyptic type ranting about doom in hopes of clinging to relevance now that he's past his prime.

  • @thomassynths
    @thomassynths · a month ago

    @@tellesu What are you talking about? Yann LeCun goes on anti-AI-doomerism rants. This is the Meta Yann, not the OAI Jan.

  • @dewilton7712
    @dewilton7712 · a month ago

    What about other companies training AI? Do they even care about safety?

  • @kuakilyissombroguwi
    @kuakilyissombroguwi · a month ago

    All these people leaving doesn't mean OpenAI is releasing the T-1000 next year. As companies grow fast, it's not uncommon for people to suddenly exit due to ideological differences.

  • @donharris8846
    @donharris8846 · a month ago

    Absolute power corrupts absolutely. Sam presents an innocent, almost child-like face and persona to the world; that’s why investors like him there. Safety ALWAYS comes last because safety is a cost, not an income generator, in most companies.

  • @chadr76
    @chadr76 · a month ago

    Tired of hearing the crying over AI safety when AI still fails at basic tasks. AI safety is just a buzzword to get clicks. Relax, people.

  • @radical187
    @radical187 · a month ago

    A self-improving multimodal system which learns at a geometric rate without supervision or any guardrails. "What, how did it learn to do that in two minutes?" ...

  • @apdurden
    @apdurden · a month ago

    Am I reaching if I think it's weird that at the end of Sam's tweet, he says "towards our mission of ensuring AGI benefits everyone"? Like, there's no mention of building it first like the website states. Almost like... they have it already.

  • @dozegamin8604
    @dozegamin8604 · a month ago

    They probably have the first version of AGI and are allocating everything there

  • @sebassebas1374
    @sebassebas1374 · a month ago

    Could it be that the latest advances presented by OpenAI are not the result of staff teamwork but the output of internal AI models, which leaves OpenAI with very little control over what they are delivering to customers?
