Mad Computer Scientist

This channel will cover a variety of topics in math, computer science, and technology that are of interest to me. I have an Associate's Degree in Information Technology, a Bachelor's Degree in Computer Science, and am working on a Master's Degree in Data Science.

Comments

  • @johnnyragadoo2414 · 13 hours ago

    Nice and clear. (2^0.5)^3/2 could also be interpreted as 2^1.5/2^1. Divide by subtracting the exponents, 1.5 - 1, and you get 2^0.5. A good visualization for why that works: x*x*x describes a cube. The area of one face will always be (x*x*x)^(2/3), because that leaves two "x's" out of three. The square root of the area of a cube's face is the length of an edge, so the square root of (x*x*x)^(2/3) is always going to be x. I got hung up on the order of stacked exponents fairly recently. PEMDAS is widely preached as always left to right within each category, but that's not true: stacked exponents evaluate right to left. By the way, wouldn't it be cool if YouTube supported MathJax in comments? Terrence Howard would be on cloud e^ln(3^2).
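
A quick way to check the exponent rules above is to run them in Python (the numbers are arbitrary; `**` is Python's exponent operator):

```python
import math

# (2**0.5)**3 / 2 equals 2**1.5 / 2**1, i.e. 2**(1.5 - 1) = 2**0.5
print(math.isclose((2 ** 0.5) ** 3 / 2, 2 ** 0.5))  # True

# The volume of a cube with edge x, raised to the 2/3 power, is the face area x**2
x = 7.0
volume = x * x * x
print(math.isclose(volume ** (2 / 3), x * x))  # True

# Stacked exponents associate right to left: 2**3**2 is 2**(3**2), not (2**3)**2
print(2 ** 3 ** 2)  # 512
```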

  • @markcaesar4443 · 14 hours ago

    The law of exponents, you have the power!

  • @user-ex8dk3ic3x · a day ago

    Nice to see you got this put back up.

  • @MadComputerScientist1 · 18 hours ago

    It was put back up a while ago, probably not in time to increase traffic to it. But it's okay. I didn't think I was going to win against Joe Rogan's media company if they held the claim anyway. The copyright claim was released so hopefully it will start getting promoted again.

  • @AnuragShrivastav-7058 · a day ago

    She is right, actually. Even Roger Penrose seconds this opinion. What LLMs do can be said to mimic intelligence. It's not intelligence in the sense that these language models don't understand or reason about their solutions. They are machines that spit out something based on some input, after the input passes through a series of matrix multiplications. It's artificial cleverness, but not artificial intelligence.

  • @MadComputerScientist1 · 22 hours ago

    Neither you nor Dr. Collier understood what Turing was asking when he proposed the Turing test. Here are the first few paragraphs from Computing Machinery and Intelligence, where the Turing Test was first proposed:

    "I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

    The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus: C: Will X please tell me the length of his or her hair? Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be ‘My hair is shingled, and the longest strands are about nine inches long.’ In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man can make similar remarks.

    We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’"

    Note that this question isn't about whether a machine can think, but whether a machine can behave convincingly enough that others believe it's human. The Turing Test is not without its critics; there's a brief overview here: www.geeksforgeeks.org/turing-test-artificial-intelligence/

    Modern AI applications -- with the exception of Artificial General Intelligence -- aren't trying to mimic full human intelligence, but rather one specific application of it. Our robotic vacuum cleaners, which do use machine learning, don't need natural language processing capabilities, and Alexa doesn't need a way to drive us to work each day. (We can do the latter without AI anyway.)

    So, no, Dr. Collier is not right on this issue, for two reasons. One, she assumes abstractions like computer code don't exist because they have no meaning outside of the human mind and the machines that process the text files representing those abstract concepts. Two, she didn't understand that we aren't trying to create "thinking" machines; we are trying to build machines that behave as if they can think the same way humans do.

  • @AnuragShrivastav-7058 · 21 hours ago

    @@MadComputerScientist1 I have nothing against LLMs, and personally I find them very useful tools. My issue is with how these products are marketed and how we project that AGI will be achieved by 2030. The word "intelligence" here is what I think causes a lot of people to get caught up in the hype about the capabilities of these technologies. Intelligence is something very deep, and claiming it will be achieved artificially, and by 2030 no less, is misinformation and propaganda from these greedy corporations. I have an issue with this kind of overhyping and with misinforming people who don't know much about what these things can do.

  • @MadComputerScientist1 · 21 hours ago

    I assume you understand we are on the same page on this issue. I am not a fan of TechBro CEOs putting "AI" -- usually LLMs -- into everything, because they are relying on the general public's misunderstanding of what these programs are designed to do. I get sick of people like Musk overpromising the capabilities of other forms of AI automation, such as claiming they have fully self-driving cars. LLMs are neat. People just need to stop thinking they're designed to come up with the correct answer. Even if they are trained only on accurate data, they will still come up with wrong answers.

  • @user-hb1yo5ep9y · 2 days ago

    I feel like his mind broke under some sort of "stress", and now he's trying to make sense of all the weird things running through it 😊

  • @thomasneal9291 · 6 days ago

    Geez, so many of your commenters appear absolutely clueless. THE DANGER IS CULTISM. That is the problem with people following "Terryism". If you don't understand how dangerous cults can be, go read a book on the subject; there are plenty of them. Stop being so damn ignorant.

  • @FernLovebond · 6 days ago

    We're still here? Crap.

  • @MadComputerScientist1 · 6 days ago

    I really need to do something about the sound echoing. In any case, I'm not sure where they say the world will be wiped out in 21 days. I watched the hour-long video in its entirety, and there seems to be nothing relating to the title's dire doomsday prediction.

  • @FernLovebond · 6 days ago

    @@MadComputerScientist1 Cursed clickbaiters!

  • @orlandoarellano7390 · 6 days ago

    Funny shapes don't mean shit, Terrence.

  • @electrodacus · 8 days ago

    AI is going to take over most jobs, so yes, it will replace humans in the workplace. I'm going to guess that both Angela and you think humans are more than what they actually are: biological machines. I use Claude 3.5 Sonnet, and it is a significant step up from the latest ChatGPT and Gemini, both for coding and physics. Most of the problems come from bad training data, as the internet is full of that. At this rate of progress, I don't expect it will take more than 5 years before AI exceeds humans in all tasks, and there will still be people who think we have something special.

  • @alfredosantos6669 · 8 days ago

    No worries Howard. We know that saying people are crazy is standard government protocol. Also, 99.9% of scientists agree with the people that are funding them. True story.

  • @MadComputerScientist1 · 7 days ago

    No one is funding research into why 1 * 1 = 1. Howard is breaking at least two fundamental theorems, and for him to be right, the rest of math would have to be wrong. If Howard were right about this, there would not be a single unique prime factorization for every integer greater than one. It would also break the Fundamental Theorem of Algebra.

    The prime factorization argument is easier to understand. Under actual math, the prime factorization of 24 is 2 * 2 * 2 * 3, and only that. Under Howard's math, we could break 24 down into 2 * 2 * 2 * 3, or 1, 7, 3, or 1, 1, 1, 1, 1, 1, 12. Note that there is no longer one unique factorization under Howard's math. This is how we know he's wrong; no research grant required.

    I will give Mr. Howard credit for going out and being so boldly wrong with his theories, and for starting to think about math. Now he just needs to learn why he's wrong and improve. Computer scientists, physicists, and mathematicians make math mistakes all the time. The ability to think about mathematical concepts doesn't mean a person is capable of understanding them. If Howard wants to contribute anything to math, he is going to have to go back to the point where he started failing his courses and unlearn what he has learned.

    I'm not going to solve the P = NP problem by saying N = 1 or P = 0, because stating that assertion shows I don't actually understand what the problem asks. P vs NP is a question about sets of problems: either every NP problem can be reduced to polynomial time through more efficient algorithms, showing the sets are equal and P = NP, or some NP problem can't be reduced to polynomial time, showing that P is a proper subset of NP.
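
The uniqueness argument above is easy to demonstrate with a standard trial-division factorizer (a minimal sketch, not tied to anything in the video):

```python
def prime_factors(n):
    """Trial-division prime factorization; unique for every integer > 1."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(prime_factors(24))  # [2, 2, 2, 3]
```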

  • @kimberlygerman294 · 8 days ago

    No matter who says what to prove he is wrong, he will still believe his figures.

  • @ckleber-t2p · 9 days ago

    Terrence Howard is wrong about 1+1x1=2

  • @jamesdelapena5648 · 9 days ago

    This is a really interesting project! Data science, programming, natural language processing, statistics, an application of the Euclidean distance metric in 10 dimensions. Love it

  • @MadComputerScientist1 · 9 days ago

    It would be more accurate if I added more dimensions and a higher k-value. But I don't think that would change the result much at all.
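
For anyone curious, the distance metric mentioned in the thread works the same in any number of dimensions. A minimal sketch (the vectors below are made up, and `euclidean` is a hypothetical helper, not the project's actual code):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two hypothetical 10-dimensional feature vectors
p = [1, 0, 2, 3, 1, 0, 0, 4, 2, 1]
q = [1, 1, 2, 3, 0, 0, 1, 4, 2, 0]
print(euclidean(p, q))  # 2.0
```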

  • @anthonyharty1732 · 10 days ago

    Howard is a pathological liar! Lying is what he does.

  • @GursimarSinghMiglani · 11 days ago

    Hi

  • @user-pu9hw8xi3r · 12 days ago

    Wow it's a flame reply in video format. Gross.

  • @davidhitchen5369 · 12 days ago

    I don't get why he's talking about modern physics when he doesn't seem to understand Newton's Laws. I bet he couldn't set up the equations for a 1st year physics experiment like measuring the acceleration due to gravity with Atwood's machine.

  • @luna-ltzyxienne780 · 12 days ago

    roughly speaking, the fundamental theorem of calculus states that the indefinite integral of a function is its antiderivative

  • @hansolo6831 · 13 days ago

    Ten years in the joint...

  • @MadComputerScientist1 · 13 days ago

    Dealing with Terrence Howard as much as I have this summer has apparently had adverse effects, so I need to issue the following corrections. I think my brain power diminished trying to understand the things about physics I knew he got wrong (it was worse for the PhD physicist), but here they are:

    1 x 1 x 1 = 1. It is also true that I can get 1 x 1 x 1 to equal something close to Pi with certain floating-point values. This doesn't mean the statement is true, merely that computers don't always handle decimal values well.

    1 million x 1 million really does equal 1 million squared, which is 1 trillion. I should have gone with things that were unambiguously units, like dollars or inches.

    1 + 1 = 1 is absolutely correct in Boolean algebra and does not require correction. 1 + 1 + 1 = 1 in Boolean algebra as well.

    You will see most of these mistakes towards the end of the video. The good news is I am done with Terrence Howard for a while.
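
Two of the corrections above can be demonstrated in a few lines of Python: Boolean "+" is logical OR, so 1 + 1 = 1, and binary floating point can't represent some decimal values exactly (the classic 0.1 + 0.2 case stands in for the Pi anecdote):

```python
# Boolean algebra: "+" is logical OR, so 1 + 1 = 1 and 1 + 1 + 1 = 1
print(int(True or True))          # 1
print(int(True or True or True))  # 1

# Binary floating point can't represent 0.1, 0.2, or 0.3 exactly,
# so arithmetic that looks exact on paper drifts slightly
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```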

  • @TIO540S1 · 14 days ago

    I disagree strenuously with your wish to have Michio Kaku on Rogan's channel. He was a reputable scientist a long time ago, but he's turned into a publicity hound and what amounts to a pseudoscientist.

  • @BestHolkin · 15 days ago

    Honestly, you as a CS guy do not convince me. And that is disappointing. I'm completely on her side.

  • @MadComputerScientist1 · 14 days ago

    Most of my fellow sisters don't think I make a very convincing guy either.

  • @jamesdelapena5648 · 15 days ago

    I've been debating whether or not to chime in on this, since I actually adore Dr. Collier and her channel; I think she's hilarious. But nonetheless, she does misspeak when discussing AI (I think she meant "AGI", Artificial General Intelligence, rather than AI). Regardless of my admiration for her, I do feel that professional criticism is always important, even for accomplished physicists like her, and I think your video is well done. I read the description and your pinned comment, so I know where you're coming from. I thought her video where she calls out the CEO of Zoom for discussing his vision of sending "AI clones" to all your meetings was hilarious!! (If you haven't seen that one, it's good.)

  • @MadComputerScientist1 · 15 days ago

    Yes, let's make AI clones to make the task of brushing people off easier. As long as AI remains software, there's no way to make it accountable. AGI is in its infancy, but I think what we'll find is that much like humans, making machines generalists weakens their capabilities in other areas. I also did not expect this video to do as well as it did. After all, it's just a point of disagreement on terminology.

  • @jamesdelapena5648 · 15 days ago

    Very nice! Love how you discussed variable "scope"; that tends to confuse students sometimes, in my experience. And I love the mention of COBOL at the end there, lol... can you believe that many big banks and government systems still run COBOL? We used to call it "Completely Obsolete Business-Oriented Language", even back when I first learned to code around 2003. Apparently it's not obsolete, though it certainly should be!! And yeah, they made us learn Perl when I took CS at university, which nowadays has been almost entirely replaced by Python. Programming is a wonderful way to explore prime-finding and factorization algorithms. If I'm not mistaken, many encryption algorithms are still based on factoring very large numbers, since no known classical algorithm can factor very large integers in a practical amount of time. Keep up the great coding videos!

  • @MadComputerScientist1 · 15 days ago

    A lot of COBOL is legacy code, but sometimes when a language is really good at something, there's no reason to change it. COBOL is really good at batch processing large amounts of data efficiently. Many modern languages don't even come close. They've tried to replace COBOL with Java, and a lot of businesses have. It's just that a few mainframes running mission critical applications work better using COBOL than they would with Java. Fortran is still used in scientific applications as well. Python has taken over a lot of it, but there are a lot of scientific libraries written in Fortran, and Fortran code runs faster than most Python code.

  • @Jayc5001 · 16 days ago

    As someone who just watched both videos, you are going kind of soft with your definitions. You say AI is trying to simulate human intelligence, but that's not true: it IS intelligence. What do you think your brain does when recognizing images? What do you think the model does and learns? Simple: it's an algorithm, similar to the algorithms your brain uses to classify images and objects. Our brains contain many discrete algorithms made of neuron circuits that do various tasks. AI systems, like us, can learn a valid algorithm to accomplish a goal. The purpose of the training data is to guide the model toward the correct algorithm. And unlike what you said, given perfect data and the correct model size and configuration, they can learn any algorithm or any combination of algorithms. They, like us, are Turing complete. They can learn any program, including you. All of our mental processes are algorithms. Period. We, like them, need training data, and we have many problems extrapolating beyond the data we have seen.

    The whole field of ML and AI is about function approximation: what types and sizes of algorithms various models and architectures can approximate. Not all functions have the same complexity, and some models simply can't learn some things due to physical and mathematical constraints. Right now, I literally mean right now with current tech, if you knew all of the algorithms in a human brain, you could make an RNN that does the same exact thing as the brain. Currently AGI isn't an architecture problem; it's a data, scale, and training problem. Our biggest LLMs are only about the size of a cat brain. If you swap parameters for synapses, we need models with over 100 trillion parameters. We still haven't approached human scale by a long margin, and training something of that scale with current tech would require years of training and more than the entire US GDP. That's why we don't have AGI right now.

    We are trying to find a better path than just making it human scale and brute-forcing it. You don't want to spend a whole nation's GDP on something that is suddenly obsolete in a year. Just to reiterate my point: AI intelligence is the same exact thing as human intelligence. Humans are just much larger systems trained for many years. Like some types of AI systems, we are black boxes, and the reasons we give for our motivations are often post hoc justifications that have nothing to do with why we actually did a thing. We run the same kinds of algorithms, and many things learned from neuroscience can be applied to AI systems, or vice versa. The benefit of these systems is that they DO think like us.

  • @Jayc5001 · 16 days ago

    Seriously, think about a model with over 100 trillion parameters, because a brain synapse is more complex and has more ways of communicating than a parameter. We currently don't have a company on the planet capable of simulating or running a single human brain because of the computation required, let alone the compute or data to train a system that complex. Our largest LLMs are only a measly 1% of the way to a human brain in complexity. The scale war hasn't even started yet; take a seat. That's how OP our brains are in terms of efficiency and complexity, and that's the kind of computational jump we can expect from technology in the future. Having a phone with more compute than a modern-day datacenter is physically possible in the future.

  • @MadComputerScientist1 · 16 days ago

    I think your issue is that you're looking at it from more of a biological perspective. I hope I can be as diplomatic as possible, but you're focusing on the wrong keyword. We're trying to teach computers to do tasks that normally require *human* intelligence. Artificial intelligence is an attempt to simulate *human* intelligence. No one is arguing that the brain isn't more complex than our electronic computers; most biologists and computer scientists will tell you it is. We've modeled some useful AI tools after the way our brain processes information, such as neural nets, but for other things, our brains do work that is still very hard for digital computers to do.

  • @user-ex8dk3ic3x · 16 days ago

    Excellent video. This link shows the best of what we have at the moment. It uses trial division as a component, so if trial division can be improved upon, which I believe I have done, it can be used here to speed things up further. Between us we could have something we could jointly name that would be of use. You don't have to, but I thought I'd ask. en.m.wikipedia.org/wiki/General_number_field_sieve
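
For context on the trial division mentioned above: the standard textbook refinement is to test only 2, 3, and candidates of the form 6k ± 1 up to √n (a generic sketch, not the commenter's improvement):

```python
def is_prime(n):
    """Trial division checking only 2, 3, and 6k ± 1 candidates up to sqrt(n)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    d = 5
    while d * d <= n:
        # every prime > 3 has the form 6k - 1 or 6k + 1
        if n % d == 0 or n % (d + 2) == 0:
            return False
        d += 6
    return True

print([p for p in range(2, 30) if is_prime(p)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```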

  • @MadComputerScientist1 · 16 days ago

    Maybe? If this is indeed an NP problem, showing that it can't be reduced to a more efficient algorithm would, however, solve the biggest unknown question in computer science: whether {P} = {NP} or {P} is a proper subset of {NP}.

  • @user-ex8dk3ic3x · 16 days ago

    @@MadComputerScientist1 No rush, have a think. Just increasing the efficiency of standard trial division would be a big deal, because it's used in the best we've got.

  • @MadComputerScientist1 · 16 days ago

    I think you might not know about the P vs NP problem. If I'm wrong, just ignore the rest of this. The P vs NP problem has been unsolved for over 70 years; it's one of the Millennium Problems. I think you might be unfamiliar with it based on your comments, and that's fine. Although it is both a math problem and a computer science problem, solving it has more implications for computer scientists than it does for mathematicians.

  • @user-ex8dk3ic3x · 16 days ago

    @@MadComputerScientist1 Thanks. My surname Hodge is one of the other remaining ones lol. 12 years in number theory concentrating on primes anything with crossover I've read up on but thanks.

  • @tiagodagostini · 16 days ago

    What this woman fails to grasp is the bridge from the Weierstrass theorem. Any transformation of data is a function. Intelligence is by definition a function, a very complex one. The Weierstrass theorem shows that any continuous function on a domain can be approximated by a polynomial. And what is machine learning? It is a mechanism for finding polynomials that approximate a function. So the subset of intelligence that is continuous on a given domain CAN be emulated by machine learning, since the Weierstrass theorem guarantees that such a polynomial exists. Is doing that easy? No. But it has been mathematically proven to be real and doable! That woman may have a PhD, but she is utterly clueless on this subject. Her explanation of how GPT would produce a text is as wrong as anyone could manage.
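
The Weierstrass point can be illustrated with Bernstein polynomials, the classical constructive proof of the theorem: as the degree n grows, the maximum error against a continuous function on [0, 1] shrinks (a minimal sketch; Bernstein convergence is slow, but it is guaranteed):

```python
from math import comb, pi, sin

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1], evaluated at x."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Approximate sin(pi * t) on [0, 1]; the max error shrinks as n grows,
# which is what the Weierstrass approximation theorem guarantees.
f = lambda t: sin(pi * t)
xs = [i / 100 for i in range(101)]
for n in (5, 20, 80):
    err = max(abs(f(x) - bernstein_approx(f, n, x)) for x in xs)
    print(n, round(err, 4))
```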

  • @MadComputerScientist1 · 16 days ago

    Her PhD is in physics, so this is understandable. For other people who need confirmation, here's what the Weierstrass approximation theorem is: mathworld.wolfram.com/WeierstrassApproximationTheorem.html Functions in computer science can do the same things they do in math, but they can also do more. Unless you're using Pascal, where a function only returns a value; if you don't return a value, you use a procedure.

  • @tiagodagostini · 16 days ago

    @@MadComputerScientist1 True, but given that physicists frequently use numerical methods that depend on the Weierstrass theorem, I would have thought she would have heard of it. Effectively, she did not approach the theme very "scientifically".

  • @MadComputerScientist1 · 16 days ago

    @tiagodagostini I had never heard of it until now. Most newer AI applications use linear algebra, a subject I need to buckle down and learn better; I probably shouldn't wait until I finally take the course in an upcoming semester. But I guess it would depend on her branch of physics. I can be relatively certain she's never heard of the Master Theorem for determining runtime complexity.

  • @tiagodagostini · 16 days ago

    @@MadComputerScientist1 True, but Weierstrass is also important in physics, since in modern physics the concept of entropy being bound to information is a real thing. Maybe no one helped her make the connection that intelligence is a data transformation, and therefore a function (in the mathematical sense of the word), and therefore subject to the same rules as functions. But I do get angry when people who do not understand AI say things like she said, that the AI searches for a text like the one you need and copies pieces of it, etc. I am a strong advocate that people who do not understand complex subjects should say "I THINK" before they speak.

  • @BestHolkin · 15 days ago

    This polynomial may actually be computationally infeasible. In a mathematical sense it may exist, but practically it may not. And that all rests on the assumption that the human brain can be modelled as a function. You assume it is data processing; I may assume the brain just downloads data from cosmic knowledge.

  • @mrslave41 · 16 days ago

    would you like to collaborate?

  • @MadComputerScientist1 · 16 days ago

    I have been avoiding this simply because, being on the autism spectrum -- like many people in STEM fields -- I often talk over people and do not wish to be rude. I am willing, but it looks like your content is mostly physics related while mine is not. What did you have in mind?

  • @mrslave41 · 16 days ago

    please fix your audio

  • @MadComputerScientist1 · 16 days ago

    The audio issue is?

  • @EnigmicIdentity · 17 days ago

    Her fundamental point is basically true. All you are doing is arguing semantics.

  • @TheRealSykx · 16 days ago

    When I clicked this video I suspected this; thanks for saving me the time.

  • @jimrello7878 · 17 days ago

    this channel does not exist

  • @minhuang8848 · 17 days ago

    I like Angela and her snarky tone about some academia-wide nonsense, but yeah, she tripped me up with some of the base-level, mostly nonsensical pop-criticisms of this subject. Barely started the video and the biggest one already appeared: "it's not called AI." I really, really need there to be a mass educational campaign to clear up all the misconceptions: about what intelligence means (also a very vague term being used fairly specifically as needed), and about the many different ways we had and used AI, even before Pac-Man fully anthropomorphized it for us to relate to. It's not a great sign when someone talks about these things and ends up getting stunlocked by some very basic assumptions that, frankly speaking, are just wrong and entirely founded in ignorance about how language works and why we take "shortcuts," as it were.

    I personally always avoided "AI" in relation to modern machine learning efforts (which I strictly and curtly place as having begun in 2012, for obvious reasons), simply because I can't be arsed to get sidetracked by discussions about semantics... but I happily use it when the context is clear, and I sure never stopped describing video game agents (which emulate human intelligence more often than not, especially if they really suck) as such. Doubtful I ever will, to be honest.

    Same for all the pop-scientists chiming in with all the stupid, stupid takes. I still can't get over the irony of people going into huge diatribes about "stochastic parrots" while themselves proving to be nothing more than the very basic, prototypical, nowhere-near-as-sophisticated version of what they consider "not intelligent" or "non-reasoning" agents (agents that happen to suddenly completely dominate language tasks out of nowhere)... or the same old soundbites about synthetic data and enshittification "threatening the Internet" when, really, all current research overwhelmingly hints at a healthy (i.e. huge) mix of synthetic data not only being something we can deal with, but something that can drastically improve the overall performance of many models in many domains.

    It is an incessant onslaught of bad takes grown on even worse intuitions by people with exactly zero credentials, and that's where I can't really endorse outside "experts" chiming in and abusing their authority to strengthen wrong beliefs in their audiences; the type of audience that would finally have had the chance to break free of their misconceptions if only popular youtuber X had not phoned in the one segment they're not proficient in. Still better than scamming your "students" with inanely expensive learning materials, or whatever Rick Beato is currently up to again.

  • @seanyoung247 · 17 days ago

    She's using the definition of AI that the majority of people use, which is more like Artificial General Intelligence than what a computer scientist means by AI. Of course AI from a computer scientist's standpoint exists. But when you say AI to most people they think AGI, and AGI doesn't exist.

  • @EnigmicIdentity · 17 days ago

    AGI was a term promoted just so they could call stupid matrix multiplication AI. Sells stocks. lol

  • @porteal8986 · 17 days ago

    Nah dude, it's perfectly valid to say artificial intelligence doesn't exist, because the term suggests intelligence that is artificial, not a simulation of intelligence, even if we also use the term as a misnomer for things that actually are not intelligence.

  • @LeonardoGPN · 17 days ago

    There is no simulation of intelligence: if it looks like intelligence, then it is intelligence. WTF are you trying to say?