#57 - Prof. MELANIE MITCHELL - Why AI is harder than we think

Patreon: / mlst
Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.
Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.
Framing [00:00:00]
Dartmouth AI Summer Workshop [00:07:02]
Letitia Intro to Melanie [00:09:22]
The Googleplex situation with Melanie and Douglas Hofstadter [00:14:58]
Melanie paper [00:21:04]
Note on audio quality [00:25:45]
Main show kick off [00:26:51]
AI hype [00:29:57]
On GPT-3 [00:31:46]
Melanie's "Why is AI harder than we think" paper [00:36:18]
The 3rd fallacy: Avoiding wishful mnemonics [00:42:23]
Concepts and primitives [00:47:56]
The 4th fallacy [00:51:19]
What can we learn from human intelligence? [00:53:00]
Pure intelligence [01:00:14]
Unrobust features [01:02:34]
The good things of the past in AI research [01:11:30]
Copycat [01:17:56]
Thoughts on the "neuro-symbolic camp" [01:26:49]
Type I or Type II [01:32:06]
Adversarial examples -- a fun question. [01:35:55]
How much do we want human-like (human-interpretable) features? [01:43:44]
The difficulty of creating intelligence [01:47:49]
Show debrief [01:51:24]
Pod: anchor.fm/machinelearningstre...
Panel:
Dr. Tim Scarfe
Dr. Keith Duggar
    Letitia Parcalabescu and Ms. Coffee Bean ( / aicoffeebreak )
Why AI is Harder Than We Think - Melanie Mitchell
arxiv.org/abs/2104.12871
melaniemitchell.me/aibook/
www.santafe.edu/people/profil...
/ melmitchell1
melaniemitchell.me/
#machinelearning

Comments: 126

  • @LucasDimoveo
    2 years ago

    This podcast is shockingly high quality for the viewership. I hope this channel grows much more!

  • @rodi4850
    2 years ago

    It is a very good channel! Sadly, the channels that get views are the ones with short, easily digested content.

  • @ddoust
    2 years ago

    Without a doubt, MLST is the best channel for AI practitioners - every episode is mandated work time viewing for our team. Their instinct for the right guests, the quality of the panel and the open minded ventilation of competing key issues is exemplary. Friston, Chollet, Saba, Marcus, Mitchell and Hawkins are among the spearhead thinkers for the next (and final) breakthrough. If I might humbly recommend three more: David Deutsch, Herb Roitblat and Cecilia Heyes.

  • @MachineLearningStreetTalk
    2 years ago

    Thanks David! And fantastic guest suggestions!

  • @oncedidactic
    2 years ago

    Can I work there? Lol

  • @CristianGarcia
    2 years ago

    "Machine Learning practitioners were often quick to differentiate their discipline" How differentiable are we talking?

  • @oncedidactic
    2 years ago

    Letitia was an excellent addition to the show! I love the varied perspective she brings; it really complements the panel. As always, I loved Keith's contributions as well, and together they bring a formidable physics lens. Kudos on having such an eminent guest, and thank you for all your hard work. It makes a fantastic show.

  • @haldanesghost
    6 months ago

    Her point about intelligence being limited by physical law totally caught me off guard, and I found it incredibly thought-provoking. She was a great addition for sure.

  • @marilysedevoyault465
    2 years ago

    And it is so great that you have Mr. Duggar interacting in your interviews, giving a voice to philosophy!

  • @sabawalid
    2 years ago

    Another great episode guys!!! Keep 'em coming.

  • @ChaiTimeDataScience
    2 years ago

    MLST releases a new episode on Sunday. Time to start my Monday Chores :D

  • @user-xs9ey2rd5h
    2 years ago

    Awesome episode, I'm looking forward to the one with Jeff Hawkins as well, I've learned so much from this podcast and am very glad you guys are doing what you're doing.

  • @imerovislam
    2 years ago

    Lex Fridman's Podcast led me here, I'm really glad. Wonderful content!

  • @MixedRealityMusician
    9 months ago

    Thank you for these conversations and ideas. As a musician who is looking to go into computer science and AI, there are so many questions and worries around creativity and art and it takes a lot of humility and curiosity to approach these questions with an open mind.

  • @2sk21
    2 years ago

    Very enjoyable way to spend a summer Sunday. You have had some great guests lately

  • @marilysedevoyault465
    2 years ago

    Tim Scarfe, you are such an amazing pedagogue! I wish everybody would be as good as you when explaining something!

  • @teamatalgo7
    2 years ago

    One of the best talks on the topic; congrats to the team for pulling off such amazing content. I am hooked on MLST now and binge-watching all the videos.

  • @jordan13589
    2 years ago

    Has the Jeff Hawkins episode not yet been released? I was confused by references to a previous discussion with him.

  • @TimScarfe
    2 years ago

    We will release it next; we need to get it checked with them before we publish. Sorry for the confusion.

  • @bertbrecht7540
    2 years ago

    I am 20 minutes into this video and am so inspired. Thank you so much for the hard work you all put into creating this.

  • @MachineLearningStreetTalk
    2 years ago

    Thanks so much!

  • @abby5493
    2 years ago

    Wow! Love this video! Awesome quality and so interesting 😍

  • @alexijohansen
    2 years ago

    Thanks for doing these!

  • @JohnDoe-ie9iw
    2 years ago

    I wasn't expecting this quality. So happy I found this channel

  • @EricFontenelle
    2 years ago

    1:10:48 You love you some François Chollet 😂😂

  • @MachineLearningStreetTalk
    2 years ago

    Have you noticed? 😎

  • @bethcarey8530
    2 years ago

    Agreed, 'shockingly high quality for the viewership' - you can tell the effort that goes into these productions FOR the audience's digestion and appreciation of such complex concepts across so many global experts. Thank you Letitia, Keith, Tim and of course Melanie. I particularly love one of your 'fallacies', Melanie: that 'narrow AI is on a continuum with general AI'. ML achieves so much good for focused and valuable problems - case in point, DeepMind's protein structure mapping - but conflating that with a step toward general AI does a major disservice to the advancement of that goal.

  • @ugaray96
    2 years ago

    The problem lies in which research has more financial interest (in the short term): probably downstream tasks such as translation, summarisation, object detection and more. If there were more financial interest in doing research on general intelligence, we would be seeing a whole different panorama.

  • @SatheeshKumar-V
    2 years ago

    Well said. That's a profound thought many fail to see today. I feel the same.

  • @BROHAMMER_OK
    2 years ago

    Great episode as always.

  • @nembobuldrini
    2 years ago

    Great content, guys! And I'm enjoying the framing and debrief discussion very much. The idea of factoring in time and energy efficiency reminded me of a recent talk by Robin Hiesinger on the growing of neural networks (which in turn reminded me of Ken Stanley and co.'s work on HyperNEAT - BTW, that was a great show from you guys as well!). It would be interesting to hear your take on that.

  • @dr.mikeybee
    2 years ago

    This was very interesting. Thank you.

  • @bigdoor64
    2 years ago

    Hi Tim, I feel your pain regarding this audio quality. Check out Descript if that happens again. It might be a better alternative to painfully denoising/enhancing guests' voices. Not free software, of course.

  • @Avichinky2725
    1 year ago

    Completed the entire video in 4 days. I have been practicing Machine Learning for the last 5 years and this video gave me knowledge about the things that I never encountered during my tenure. Great Podcast.

  • @Hexanitrobenzene
    2 years ago

    Replying to a note on audio quality, sampling does not happen in a microphone, it happens in an analog to digital converter on some IO chip in a computer. My guess would be that the audio driver settings somehow were incorrect.

  • @EricFontenelle
    2 years ago

    I wish you did more editing with the group talk. Love the channel and material.

  • @oncedidactic
    2 years ago

    With respect to taste and preferences, disagree! I really value the free flowing convo.

  • @caiinfoindia1511
    2 years ago

    Nice episode guys!

  • @johntanchongmin
    2 years ago

    Thanks for this video!

  • @danieleallois4633
    2 years ago

    Amazing show guys. Keep it up please :)

  • @minma02262
    2 years ago

    It is 12.3 am here and this street talk is 2.3 hours. Yes, I'm sleeping at 3 am today.

  • @crimythebold
    2 years ago

    So intelligence must be measured in watts, then... I'm so relieved that we did not create a new unit for that 😉

  • @dougewald243
    7 months ago

    Seems that the Turing Test is sufficiently ambiguous that much debate has emerged as to whether or not it's been passed. Do we have an updated & refined version of it? Do we need a Turing Test 2.0? A TT2?

  • @dr.mikeybee
    2 years ago

    We easily see what large models don't learn -- in other words how they are not like us. What we don't even begin to see is all the things they do learn that we can never learn without them, because we lack the cognitive ability. This is what we should be discussing.

  • @CandidDate
    2 years ago

    AGI right now is experiencing growing pains in the form of criticizing the current methods. It is up to us so we better get this right.

  • @satychary
    2 years ago

    Hi Tim, excellent episode! Around the 3:00 minute mark, you switch from talking about the brittleness of expert systems in the 80s, to NNs in the 90s. In between, from 1984-1994, the Cyc project happened - the largest attempt ever, to distill common sense from humans. It didn't succeed, meaning, the system (Cyc) did not become intelligent the way it was hoped [robust, flexible, generalizable etc.]. IMO, the missing "glue" is not common sense, rather, it is experience - which can only be acquired via an appropriate body+brain combination, by directly interacting with the environment.

  • @NelsLindahl
    2 years ago

    That audio did sound like a bad speakerphone recording. The content however was great.

  • @dr.mikeybee
    2 years ago

    Getting to intelligence is a function of overcoming local entropy. So organizations of atoms developed that could do that well. At this point, we are designing new organizations of atoms that don't need to overcome entropy on their own. We do that for them. Therefore, these new organizations, our computers, don't need to concentrate on overcoming entropy -- survival. They can have all their resources devoted to problem solving.

  • @kennethlloyd4878
    2 years ago

    When AI researchers speak of concepts, they often only refer to 'mental concepts' without acknowledging more abstract structures. That is a barrier to understanding and leads to anthropocentric models.

  • @TEAMPHY6
    2 years ago

    Love the sci-fi womp womp sound effects on the predictions

  • @dr.mikeybee
    2 years ago

    It seems to me that context extraction, filtering, and summarization are important areas of AI that are not getting enough attention. For example, we don't have a good English-to-SPARQL model.

  • @dr.mikeybee
    2 years ago

    Why do we assume that AGI can be built with a smaller architecture than the human brain? We won't know what our models can do until we get models some orders of magnitude larger. Evolution has had 4 billion years to create the human brain, so we should assume it is very well optimized; it's certainly very well optimized for energy usage. I truly believe we can't discount connectionist ideas until we get significantly larger models.

  • @doubtif
    2 years ago

    That short story about adversarial examples (at 1:36:00 or so) sounds like one of the central plot lines in Infinite Jest. I wonder if Hofstadter is aware of it.

  • @dr.mikeybee
    2 years ago

    There are about 2.5 billion seconds in 80 years. If you looked at one node per second, how could you ever hope to comprehend GPT-3, which has 175 billion parameters? It would take about 70 lifetimes just to inspect every node.
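
    The back-of-the-envelope arithmetic above holds up; a quick check, taking the comment's own assumptions of an 80-year lifetime and 175 billion parameters:

```python
# How many 80-year lifetimes would it take to look at one GPT-3
# parameter per second? (Figures from the comment above.)
seconds_per_lifetime = 80 * 365.25 * 24 * 60 * 60  # ~2.52 billion seconds
gpt3_parameters = 175e9                            # 175 billion parameters
lifetimes_needed = gpt3_parameters / seconds_per_lifetime
print(round(lifetimes_needed))  # 69 -- i.e. roughly 70 lifetimes
```

    Put the other way around: a single lifetime at that pace covers under 1.5% of the parameters.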

  • @balayogig6030
    2 years ago

    If possible, catch Dileep George from Vicarious AI, if you are planning any human-inspired AI episodes. Thanks - one of the nicest channels for machine learning and AI in general.

  • @Hypotemused
    2 years ago

    It's the best. I'd say ML News (with Yannic Kilcher) and Lex Fridman's podcast are on the same level - different styles but top-notch insights. But Yannic's ML News is the funniest for sure. His sarcasm and wit run deep 🥸

  • @dr.mikeybee
    2 years ago

    Distorting data is a good way to get rid of statistical shortcuts. For example, occlusion in images can remove watermarks and other incorrect correlations that might be creating over-fit models. I saw a recent example of a cancer model that got the majority of its cancer samples from a particular lab, so all those images had the same aspect ratio, whereas the non-cancer images differed. So the model was predicting from aspect ratio. LIME can help uncover this sort of wrongheadedness.

  • @dr.mikeybee
    2 years ago

    We can do analogies in transformers with pairs of sentences, and this can be done in the same model as a masked model, just using different heads.

  • @someonespotatohmm9513
    2 years ago

    About the temporal dynamics of learning: wouldn't RL fit this description, because at every timestep the network is updated ("learning") and it "determines" what the next observations are? Or am I misunderstanding something here? To me it doesn't sound like it matters whether you learn on an already existing dataset or on a dataset you "discover" as you learn; you can implement it such that both are the same.

  • @michaelguan883
    2 years ago

    How would three entities of "pure intelligence" divide $10 among themselves? Of course, $10 cannot be evenly split into three.

  • @michaelguan883
    2 years ago

    From Melanie's "Why AI is Harder Than We Think": Nothing in our knowledge of psychology or neuroscience supports the possibility that “pure rationality” is separable from the emotions and cultural biases that shape our cognition and our objectives. Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated.

  • @dr.mikeybee
    2 years ago

    It's an interesting question. If we have an intents model that chooses actions, we could have various symbolic actions that could be chosen. If you are talking about end-to-end models, the correct prompt on a large enough model could get the correct answer. As you know, we can only retrieve what has been encoded; so if that sort of scenario exists in the data, it can be found.

  • @michaelguan883
    2 years ago

    @@dr.mikeybee Actually, I was not thinking of technical aspects, but viewing "pure intelligence" from the perspective of society, economy, morality, etc. There are many dilemmas in our society, and I doubt that "pure intelligence" can ever exist, because it would still need to face those problems. In my example above, will a "pure intelligence" entity try to eliminate the other two entities so that it can maximize its income? Or will it take only $3.33 to be "fair"? Who is going to take the one cent left over? How can one entity protect its own wealth? Is it good to unite with other entities? You know what I mean.

  • @Hexanitrobenzene
    2 years ago

    Three perfect mathematicians would just give up :) Three AIs could just assign the last cent perfectly randomly; the real world always includes many such situations, so it would even out.
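
    For what it's worth, the leftover cent in this thread's example is just integer division with a remainder:

```python
# Split $10.00 (1000 cents) three ways: the even share and the leftover.
share_cents, leftover_cents = divmod(1000, 3)
print(share_cents, leftover_cents)  # 333 1 -> $3.33 each, one cent left over
```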

  • @XOPOIIIO
    2 years ago

    The fact that we are constantly failing in AI development could be explained by the anthropic principle: constantly failing to develop the fatal technology is a necessary prerequisite to our existence.

  • @dr.mikeybee
    2 years ago

    I think the biggest reason we've failed at AGI is that we haven't gotten powerful enough systems, but we're getting closer. Moreover, it's tough to say we're failing; I would say we're succeeding pretty quickly.

  • @dr.mikeybee
    2 years ago

    Specialized message passing is a kind of abstraction that obviates the need for a large section of network.

  • @user-or7ji5hv8y
    2 years ago

    It's almost comical to hear such outlier predictions in hindsight, but of course those predictions were taken with much gravity when boldly made by such titans.

  • @jgpeiro
    2 years ago

    Hey, is there a problem with the audio?

  • @MachineLearningStreetTalk
    2 years ago

    We made a comment on this at 00:25:45

  • @AICoffeeBreak
    2 years ago

    Yes. 😥 See chapter "Note on audio quality [00:25:45]"

  • @jgpeiro
    2 years ago

    @@AICoffeeBreak No no, my audio was completely muted... But after replaying the video a few times, it now works. I don't know what happened.

  • @AICoffeeBreak
    2 years ago

    @@jgpeiro I understand now what you are saying. Someone else reported having *no* audio too, and I had the problem of no image for a while. I guess it might have been a glitch from YT, since the video was too fresh and available only in SD and not HD quality at the moment we saw these errors.

  • @jgpeiro
    2 years ago

    @@AICoffeeBreak Thanks for the explanation.

  • @dr.mikeybee
    2 years ago

    We shouldn't confuse intelligence with intelligent agents. Dumb agents can fetch correct answers. In fact, we probably don't want really smart agents.

  • @EricFontenelle
    2 years ago

    1:07:03 I almost spit my coffee out thinking about that dem debate where president biden says, “No man has a right to raise a hand to a woman in anger other than in self-defense, and that rarely ever occurs.” “So we have to just change the culture, period,” Biden said. “And keep punching at it and punching it and punching at it.”

  • @QuaaludeCharlie
    1 year ago

    We have nothing to replicate the breath of life, biological replication, the pineal gland. Once we have these traits developed, A.I. will be a living being.

  • @NextFuckingLevel
    2 years ago

    I feel dirty for watching this for free

  • @dr.mikeybee
    2 years ago

    Why would we ever want synthetic self-motivated agents? Motivation is essential for survival. It's not essential for synthetic intelligence.

  • @dr.mikeybee
    2 years ago

    If we are creating and choosing agents by an evolutionary algorithm, survival might be something to optimize, but I don't recommend ever doing that. Moreover, I would say survival as an objective function should be prohibited in our designs. That is to say, if we give an agent the ability to choose objective functions programmatically, survival is one that should never be a choice.

  • @user-or7ji5hv8y
    2 years ago

    Nice forest

  • @dr.mikeybee
    2 years ago

    Adversarial systems have a serial temporal framework.

  • @_ARCATEC_
    2 years ago

    I got this 💓

  • @OneFinalTipple
    2 years ago

    When will you release the Hawkins vid?

  • @MachineLearningStreetTalk
    2 years ago

    Asap 😎

  • @OneFinalTipple
    2 years ago

    @@MachineLearningStreetTalk {Waits patiently 😒🤣}

  • @satychary
    2 years ago

    All forms of intelligence [plants, animals, group, synthetic...] can be defined to be 'considered response'.

  • @Hypotemused
    2 years ago

    A shame about Melanie's audio. Someone ought to call Dr. Krakauer and make him send mics to all SFI staff. Come on David, you're running an Airbnb for Nobel Prize winners. Give 'em a damn mic.

  • @singularity844
    2 years ago

    A machine with the same general intelligence as a human should have extremely similar biases.

  • @alcoholrelated4529
    2 years ago

    what is "go-fi"?

  • @MachineLearningStreetTalk
    2 years ago

    "Good old-fashioned AI", i.e. symbolic AI methods. This is what AI used to mean before the statistical/empirical methods, i.e. machine learning. Symbolic basically means not data-driven; rather, trying to create an AI using code and explicit knowledge.

  • @user-or7ji5hv8y
    2 years ago

    But replace music with art, which is also about emotion. Can ML create art?

  • @kitastro
    1 year ago

    SCP type example here 1:36:30

  • @chrislecky710
    2 years ago

    Humanity's intelligence is not based on logic gates; it's based on frequency, because frequency has a scale, and that scale allows for more information per connection than is possible with logic gates. Quantum computers are the only current technology with the potential to crunch that much data at once. A new type of computer framework will need to be designed to make such things possible. The issue is that at first glance such a framework will not be coherent at the beginning, as quantum AI will need to explore every possible variation to create coherence - similar to a newborn baby, who can only perceive abstract shapes, shades, and colours. Coherence will then form from the AI exploring every possible variation of frequency of every connection presented. For example, your nervous system is able to process a light touch and something painful using the same nerves because there is a variation in frequency that your brain processes. It's how our entire body works, including our brain.

  • @AICoffeeBreak
    2 years ago

    Second!

  • @oualadinle
    2 years ago

    10

  • @charcoaljohnson
    2 years ago

    Morse Code in the intro: QDEFHLY

  • @dr.mikeybee
    2 years ago

    To keep agents ethical, chosen actions need to be passed through a policy network.

  • @machinelearningdojowithtim2898
    2 years ago

    First!

  • @rohankashyap2252
    2 years ago

    Third

  • @stevenhines5550
    1 year ago

    Watched the Chomsky interview. Can't get past his estimation that after half a century this discipline has accomplished nothing. I am left to wonder, why all this effort and investment of intense brainpower? I suspect it has more to do with inventing systems which subjugate human dignity to power in service to the ruling class.

  • @da-st6ux
    2 years ago

    fifth!

  • @annaibanez2499
    1 year ago

    LOL

  • @geoffansell4388
    2 years ago

    Letitia Parcalabescu is so blemishless I thought she was AI generated at first.

  • @osman7900
    2 years ago

    It is ironic that, despite all the progress in AI, it is still not possible to repair and enhance voice recordings.

  • @MachineLearningStreetTalk
    2 years ago

    Wait till you see how we restored our recent interview with Chomsky 😀

  • @RavenAmetr
    @RavenAmetr2 жыл бұрын

    I feel that the last "fallacy" rather than addressing intellectual laziness is representing it. The body is necessary for intelligence? Cool, but what exactly does it mean? A "brain in a jar" is not intelligent, or conscious? That would be a bold statement. A virtual body or robot body would cause the emergence of intelligence? I don't think that's the point. Physical body constraints? Then which constraints are necessary and in what way? Why they cannot be programmed? Yes, I saw the video with prof. Bishop. Nothing makes sense there. I've only learned that anyone who is trying to explore human cognition from a computational standpoint, is a quasi-religious idiot, and anyone sane must avoid even thinking about it if they want to get precious Bishop's approval. Sorry for the sarcastic tone for the last part.

  • @MachineLearningStreetTalk
    2 years ago

    Have you looked at Gödel, Escher, Bach? Bishop cited it as one of his biggest inspirations, as well as the idea of intelligence being a process / emergent, and Bishop points out the "observer-relative problem" for computationalism. I don't think anyone is saying that you couldn't reproduce the emergent intelligence; rather, they are saying that the nature of the intelligence is strongly determined by the entire chain, i.e. the environment, the agent, how many sensors it has, how it interacts with the environment. So there is something "uniquely human" about our own intelligence. But as we discussed at the end, perhaps the uniqueness of the intelligence doesn't matter - if a common language emerges, or we even "discover" universal knowledge primitives like transitivity.

  • @RavenAmetr
    2 years ago

    @@MachineLearningStreetTalk Thank you for the response. No, I didn't; probably I should add it to my reading list. The "observer-relative problem" is an interesting and, AFAIK, a really old one, and yes, I do see it as a problem. If I knew the solution, I would gladly share it, but I don't, and I believe nobody does. There's also no solid proof that there cannot be a computational solution for it. I can't see that "pixie" thing or Bishop's other analogies as convincing or even relevant. Nevertheless, I find it arrogant to state incomputability based on such "proofs". In regards to something "uniquely human": I'm not sure if my intelligence is uniquely human, or human intelligence is uniquely mine ;) I'm quite sure that I am uniquely myself, and I can make a bold guess that you are too. But isn't our uniqueness a "red herring"? I don't see how my uniqueness helps me to be sentient. By the way, do you know this guy: kzread.info/dash/bejne/iqqTrKxritiqerA.html Would be awesome to see an interview with him.

  • @MachineLearningStreetTalk
    2 years ago

    @@RavenAmetr Robin Hiesinger is great, we would love to get him on the show. Thanks for the suggestion

  • @DontfallasleeZZZZ
    2 years ago

    "A 'brain in a jar' is not intelligent, or conscious? That would be a bold statement." Is the "brain in a jar" really not embodied? Sure, it may be in a jar now, but if it's based on a human brain, its design is the result of millions of years of evolution - a very embodied process. It depends how much causality you are willing to ignore. "Physical body constraints? Then which constraints are necessary and in what way? Why they cannot be programmed?" Maybe they can, but for that you need programmers, using their embodied intelligence to create the program. What the program does is a direct causal result of what the programmer's fingers do on the keyboard.

  • @RavenAmetr
    2 years ago

    @@DontfallasleeZZZZ I think you and I are talking about embodiment in different contexts. Feel free to clarify what you intend to prove, and I will try to clarify my side. Embodiment in the given context is a way to say "everything is important". And that is probably correct, but such an attitude is just not helpful, non-informative, and lazy. It is not solving anything and not describing anything. It is just "it is what it is"; there's nothing to learn, nothing to discuss. On the other hand, we could go another way and describe what it is not, what makes it different, what makes it special. Is my message clear?

  • @magnuswootton6181
    2 years ago

    Well, you're not thinking hard enough.

  • @muzzletov
    2 years ago

    Complete BS - adversarial examples exist in humans as well. We were trained over thousands of years, yet we're still susceptible to "adversarial examples". The issue is a rather fundamental one: you always have a bias, no matter what structure you are. The definition of a structure is even biased in itself. I don't know what you're even hoping for, but I guess it's some kind of sensationalism to attract more viewers - which I have no problem with; I enjoy the concept, but I don't like the sensationalism.