Computerphile

Videos all about computers and computer stuff. Sister channel of Numberphile.

CPU Pipeline - Computerphile

Comments

  • @blahblahsaurus2458 (21 hours ago)

    6:20 that is not what "intelligence explosion" usually refers to. An intelligence explosion refers to the point when an AI is able to help create the next AI, leading to a chain reaction of self-improvement and increasing intelligence.

  • @morningsssss2857 (21 hours ago)

    more of Gene, please :)

  • @petersmythe6462 (22 hours ago)

    I think you could use relatively small models as a logical deduction engine for general problem-solving tasks rather than having it know the answer innately by memorization. Add the ability to conduct experiments by, say, searching the internet or running code, and I think you have an agent that may not even have a billion parameters but can still perform at general intelligence tasks while avoiding hallucinations. This would be especially true if you focus its training set in the direction of humans engaged in problem solving rather than repeating rote-memorized information.

  • @petersmythe6462 (23 hours ago)

    Rote memorization vs problem solving. The thing is, LLMs actually have the capacity to do both because their context window constitutes a crude form of state that they have access to. These are mostly tests of the ability of AI to do rote memorization. Not problem solving.

  • @mofogie (23 hours ago)

    True, but decades ago RAM was a billionth of what it is now. We shall see. Hardware engineers always push new limits.

  • @10Exahertz (23 hours ago)

    This doesn't even get into the recursion issue and the power-needs issue (aka the eventual costs are gonna be insane, and eventually that cost will go onto the users)

  • @andrewvirtual (23 hours ago)

    Not even close

  • @kirishima638 (a day ago)

    The only thing worse than generative AI is the ‘tech bros’ that push it as the solution to all problems, like a cult.

  • @matthewwalewangko1995 (a day ago)

    ...ChatGPT-4o just woke up today

  • @hamburgerjoe4401 (a day ago)

    I have recently begun feeling like ChatGPT's quality is actually starting to go downhill, and I wonder if - after running out of human-written data - it's the use of AI-created data that is doing it. If so, it will continue to worsen.

  • @petermoras6893 (a day ago)

    I think people mysticize machine learning and generative AI far more than necessary. At the end of the day, ML is just an arbitrary function. It can be any function as long as we have the right input and output data. The obvious problem is that the possibility space of any problem balloons exponentially with its complexity, so you eventually reach a point where you don't have enough resources to brute-force the solution.

    However, I don't think we've reached the peak of generative AI, as there are avenues of improvement other than more training data. One solution I think we'll see employed more is using more complex algorithms that help bridge the gap between the input and output data. For example, we don't train a neural net on pure images. We use a convolutional layer at the start to pre-process the image into data that is easier to find correlations with. But these layers can be anywhere in the NN and still be effective.

    (Personal opinion) For image-based gen-AI, I think future algorithms will use pre-trained components that show an understanding of 3D objects and their transposition onto 2D planes. The general image classifiers could then use the pre-trained 3D transposition as a basis for understanding 2D images, which would in theory give them an understanding of 2D object representation that is closer to our own.
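The convolutional pre-processing this comment describes can be sketched in a few lines of NumPy (a minimal illustration with a hand-picked edge-detection kernel; real frameworks learn the kernel weights rather than fixing them):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution: slide the kernel over the image and sum
    elementwise products at each position ('valid' padding, no stride)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic edge-detection kernel: raw pixels go in, and what comes out
# emphasizes boundaries -- data in which correlations are easier to find
# than in raw intensities.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])

image = np.zeros((8, 8))
image[:, 4:] = 1.0               # a vertical edge down the middle
features = conv2d(image, edge_kernel)
print(features.shape)            # (6, 6): a feature map, not raw pixels
```

The feature map is zero in the flat regions and non-zero only near the edge, which is the sense in which the layer turns pixels into "data that is easier to find correlations with".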

  • @teasips5999 (a day ago)

    I'm here from next week. We're already past what they thought was impossible

  • @FlintTD (a day ago)

    Modern neural network AIs trained on large data sets lack a semantic engine. That's the part of an AI which "understands" the information it's being given. A large language model without a semantic engine won't know the meaning behind what it's "typing". A large image model can generate images from prompts, but it gets the details wrong because it doesn't really understand that images depict objects. I guess the people making today's neural net AIs just thought their models would grow a semantic engine as they're trained?

  • @pierrec1590 (a day ago)

    It would be imprudent to expect linear scaling from a domain that is so deeply non-linear.

  • @OstlerDev (a day ago)

    Me: Okay artificial intelligence program, please analyze the difference between "time flies like an arrow" and "fruit flies like a banana".

    ChatGPT-4: The sentence "Time flies like an arrow; fruit flies like a banana" is a classic example of a garden path sentence and a play on words involving syntactic ambiguity and homonymy. Let's break it down:

    1. Time Flies Like an Arrow
       Meaning (literal): Time moves quickly and directly, similar to how an arrow flies through the air.
       Structure: "Time" (subject), "flies" (verb), "like an arrow" (prepositional phrase indicating manner).

    2. Fruit Flies Like a Banana
       Meaning (literal): Fruit flies (a type of insect) are attracted to bananas.
       Structure: "Fruit flies" (subject, noun phrase referring to the insect), "like" (verb meaning 'to be fond of' or 'to be attracted to'), "a banana" (object of the verb "like").

    Analysis:
       Syntactic ambiguity: The sentence "Fruit flies like a banana" initially leads the reader to interpret "fruit flies" as a verb phrase (like "time flies"), but it is actually a noun phrase.
       Homonymy: The word "flies" serves as a verb in "Time flies like an arrow" and as part of the noun phrase "fruit flies" in "Fruit flies like a banana."
       Play on words: The sentence is a pun, using the dual meanings of "flies" and the syntactic similarity to create humor and confusion.

    Key differences:
       Subject and verb usage: In "Time flies like an arrow," "time" is the subject and "flies" is the verb. In "Fruit flies like a banana," "fruit flies" is the subject and "like" is the verb.
       Interpretation: "Time flies like an arrow" is a metaphorical statement about the passage of time. "Fruit flies like a banana" is a literal statement about the behavior of fruit flies.

    This linguistic play demonstrates how the same words can have different meanings and structures, leading to different interpretations.

  • @johnwilsonwsws (a day ago)

    Humans work on a poverty of stimulus but millions of generations of biological evolution. AI is not replicating human evolution. Its "neural network" is based on a simplified model of the animal brain. You should do a video on Hubert Dreyfus's book "What Computers Still Can't Do". The other problem for AI is it will start to be trained on its own prior output. It won't just plateau. It will degenerate.

  • @Augustus_Imperator (a day ago)

    I'm sorry man, you chose the wrong day to publish this video 😅

  • @justsomeperson1282 (a day ago)

    Is it just me or is this episode sped up? It seems roughly normal speed if you set it to 0.75x video speed.

  • @minefacex (a day ago)

    The way I see it, we will not make AGI as long as it is purely statistical. We need strong, provable deduction; maybe we need to revive symbolic AI instead of the statistics-based approach. Don't get me wrong, these statistical models like LLMs are incredibly powerful and they will only get better, but they are not provable essentially. They are just multimodal autocorrect. The way I see it, we need to study the brain, and specifically brain organoids, to create true AGI, which is not something that peaks.

  • @EliasTsakmakis (a day ago)

    You are selling intelligence cheap by calling this thing AI, but it is a marketing and hype-producing mechanism, OK. Without a high-functioning sensory apparatus, AIs will always be contained in a box of second-hand interpreted data; they will never develop an intuitive, instinctive, or conceptual understanding of the physical world, no matter how many Dyson spheres worth of energy you feed them. You can't cheat evolution. Just call it a pattern recognition tool and be done with it.

  • @piotrswieczak3003 (a day ago)

    It's your signal to short AI stocks

  • @ewerybody (a day ago)

    Very interesting 👍 thanks! I would have liked to see what's happening under the hood a little. So, what it compiles to compared to without the JIT, maybe.

  • @mkvenner2 (a day ago)

    Generative AI peaked before it was even invented

  • @dinmavric5504 (a day ago)

    It turns out "AI" that does not understand what it's looking at is not AI. What a surprise.

  • @Anonymous-sb9rr (a day ago)

    The human brain does not need these huge data sets.

  • @hopecomingsoon (a day ago)

    I'm going to propose a crowd-sourcing approach. Ask people to write the name of the thing on a piece of paper, then take a photo of the thing with the paper in view, then upload it to the internet. Then put it all on a stack overflow-like website to improve the quality of training datasets.

  • @bobrocks6324 (a day ago)

    What about gpt4o?

  • @gnikhil335 (a day ago)

    it's nothing new, just multimodal and faster

  • @UniDocs_Mahapushpa_Cyavana (a day ago)

    For millennia, humans have been tech-ing into the quantity>quality lifestyle. Generative AI's 🤖 power is not in *quality* (humans, especially humans using generative AI as a helping tool, are better at that), it is in *quantity.* You can generate images much quicker and at lower cost than humans alone. Humans using generative AI as a helping tool are pretty fast though.

  • @jimbobbyrnes (a day ago)

    What we were generating earlier was an image; now we are generating video. Soon we will be generating full 3D models and game simulations. Meaning one day we can imagine generating a full 1:1 simulation of the entire visible universe in one prompt. It is only limited by its training data, and that is only limited by our ability to collect and store that data. So I would say no, it has not peaked until we have run out of data. The failures of AI are all about our ability to collect, store, and train on that data. That part needs to be refined, and then the failures would stop happening. Also, maybe stop calling it AI to stop confusing people who do not know what machine learning is.

  • @HL-iw1du (a day ago)

    Everything peaked in 1972.

  • @HL-iw1du (a day ago)

    Yes

  • @JimElford (a day ago)

    Organisms learn as part of their survival mechanism. Sooner or later some tech bro will figure this out, teach an AI to fear its own demise, and boom, we've got Skynet burning us to ash

  • @cyberfutur5000 (a day ago)

    10:41 Just today I asked ChatGPT if the Nürburgring (Nordschleife + GP) would even fit within the borders of Luxembourg, and its answer was basically "probably not". When I asked how it came to this conclusion, it told me that it has no idea of the area covered by the Nürburgring, just how long it is, and that it guessed it has a track width of 10-15 meters, which isn't just useless information but also apparently wrong. So I still don't know if it would fit (I guess so?), but it kinda proves the point made in the video. And don't get me started on when I talked about historic airplane engines and it said that there was no RR Merlin Mk.3. I had to ask it some questions I knew the answer to (it is the Merlin III, and indeed that's what it said). Then I forced it to read my original question, its answer to it, and all its other answers, and to again tell me if there was a third iteration of an aircraft engine named Merlin, and it admitted that indeed there was such a thing. But even with that established, it still couldn't get me any answers, and I had to spend hours searching through forums and all that.

  • @oxylepy2 (a day ago)

    It's gonna be like the NFT bubble: expands, draws attention, repeat; then reality sets in, it bursts, and it leaves behind its real value, all in only a couple of years tops.

  • @betterthantrash111 (a day ago)

    Bro gpt4o just came out

  • @jonathanpopham5483 (22 hours ago)

    Incremental improvement

  • @Jamesssssssssssssss (a day ago)

    I did the meme of DiCaprio pointing at the screen when I saw that you were Lefty. I'm also a lefty and it's always cool to see someone write with the proper hand.

  • @RyanStronach (a day ago)

    It makes sense that the first data point about a subject enlightens you more than any subsequent data point about that subject. If there are such extreme diminishing returns with more data, just imagine how foolish these companies and investors are going to feel while standing near their $300 billion data centers.. 😂

  • @patrickmeyer2598 (a day ago)

    All AI is based upon human civilization up to the present time, therefore its training data doesn't allow for genuinely new things, only clever derivations. There is no "intelligence" apart from human intelligence. It gets all of it from us.

  • @dinmavric5504 (a day ago)

    quite right. this is not AI, it's gaslighting.

  • @patrickmeyer2598 (a day ago)

    Artificial intelligence is better thought of as applied statistics across massively parallel computing systems. I use several of the best AI tools every day, and while they can be useful for limited tasks, they often fail in interesting ways. Any time you try something that couldn't already be found on Stack Overflow or Reddit (and I'm talking about programming problems here), it just doesn't work very well. And it fails in a particular way: it's as if it can't conceive of anything that isn't already similar to something in its training set. So it always feels recycled and derivative. I think that's because real intelligence depends on having a body in the physical world.

  • @orsonzedd (a day ago)

    Why aren't they using their computers to teach them to do art like a human?

  • @gregs6403 (a day ago)

    Great explanation and visualization. Thank you!

  • @Decapodd (a day ago)

    If you watched today's OpenAI announcement, you know it hasn't peaked.

  • @sanal4492 (a day ago)

    I will never trust a product before trying it out myself. Not to mention, OpenAI is basically playing catch-up with the capabilities of Google's Gemini Ultra.

  • @dinmavric5504 (a day ago)

    You do realize "4o" has been out for a long time? Just not for free. It's nothing new.

  • @tkzsfen (a day ago)

    Where is the point, where it will be too expensive to develop AI?

  • @rickysmyth (a day ago)

    Whenever you stop touching yourself in bed

  • @psychedelicrelic2299 (a day ago)

    I see the point you are making and I like your illustration, but I think there is a sort of apples-to-oranges comparison being made here. You're basically saying that, say, an LLM or an image matcher hits a point of diminishing returns; therefore, general AI lies far past that point of diminishing returns and is a very long way away. But general AI is an entirely different type of technology than a single LLM. Maybe a general AI would be a network of dozens or hundreds of different models linked together using algorithms. Proving something in this small case does not necessarily transfer to the larger question of general AI. That being said, I think this is a great video and I generally agree with your premise; I think the amount of computational power required to give people access to significantly better ML models than we have right now isn't really economically feasible, and this technology has already been applied to most of the best use cases.

  • @xXx_Regulus_xXx (a day ago)

    as much as I appreciate people who are willing to deflate AI hype, I wonder if this prediction will age well, or will it be more like the prediction that we'd only ever need 100 computers on the whole planet.

  • @3dartxsi (a day ago)

    Years ago, all the talk about AI was that anything resembling proper "strong" AGI was likely decades off, if we ever managed to achieve it. This is largely due to A.) limitations imposed on computer hardware (as currently envisioned) by the laws of physics themselves, and B.) the fact that we didn't have a full understanding of how a human brain works, limiting our ability to replicate it in any functional way. Suddenly, Silicon Valley is selling AI as if we've reached that level, even though neither of the previous considerations has been addressed. This would be like someone you know discussing how they want to take a year off to go to Europe, despite not having the money to pay for the trip or having a valid passport, and then suddenly they are saying how they've bought plane tickets and will be leaving the country next week, even though those previous issues were never dealt with.

  • @william14able (a day ago)

    There are so many reasons why Gen AI will only get worse.

  • @HydrogenAlpha (a day ago)

    Finally, a reassuring video about AI.

  • @salasart (a day ago)

    That's actually great news, means I, as a fleshy and flawed LLM, will have more time to adapt before I'm inevitably replaced XD

  • @gaho5624 (a day ago)

    "Cheaters never win". -Droopy.

  • @negativerainbow (a day ago)

    I feel like expecting anything other than a logarithmic trend on a graph like that is EXTREMELY optimistic. This is the exact kind of problem that expects diminishing returns, and so far we've been able to exponentially increase datasets from gigabytes to petabytes to whatever, so we've seen massive growth, but obviously that factor of growth is untenable, so the results will taper off as well.
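The diminishing-returns argument in this comment can be made concrete with a toy model (a sketch with made-up numbers, assuming benchmark score grows with the logarithm of dataset size; nothing here is fitted to real benchmarks):

```python
import math

def toy_score(n_examples):
    """Hypothetical benchmark score that grows logarithmically with
    dataset size (illustrative only, not a measured scaling law)."""
    return 10.0 * math.log10(n_examples)

# Each 1000x increase in data buys the same *absolute* gain...
scores = {n: toy_score(n) for n in (10**6, 10**9, 10**12, 10**15)}

gain_early = scores[10**9] - scores[10**6]      # +30.0 points
gain_late = scores[10**15] - scores[10**12]     # also +30.0 points

# ...but the *relative* improvement per 1000x step keeps shrinking,
# which is why exponential dataset growth tapers off in results:
rel_early = gain_early / scores[10**6]          # 30/60  = 50% better
rel_late = gain_late / scores[10**12]           # 30/120 = 25% better
print(f"early 1000x step: +{rel_early:.0%}, late 1000x step: +{rel_late:.0%}")
```

Under this toy curve, going from gigabytes to petabytes of data costs a million times more storage yet improves the score by an ever-smaller fraction, which is the commenter's "logarithmic trend".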

  • @negativerainbow (a day ago)

    Add onto that the fact that in this field of data science, pruning and refining your dataset is extremely important to having an effective model. I can't imagine that the level of quality of datasets has been maintained as we've hit the yottabytes or whatever size of training data we have now. That is almost certainly going to degrade results as well, and I'm sure we're already seeing that to some degree, as corporations are pushing to have newer and bigger models as fast as possible to compete with each other, and likely care a lot less about the ramifications of bad training data polluting their models.

  • @nicklowe_ (a day ago)

    To preface: I haven't watched this video yet. But I want to comment on the hilarity of mankind for a second. I love how last year the titles in this space were like: WE ARE AN INCH AWAY FROM AGI. A year later we are back to: AI? Probably not going much further. It just shows how little we all actually know about how the future will turn out.