How AI Learns (Backpropagation 101)

Explore the fundamental process of backpropagation in artificial intelligence (AI). This video shows how neural networks learn and improve by adapting to data during each training phase. Backpropagation is crucial for calculating errors and updating the network's weights to improve the AI system's decision-making. This tutorial breaks down the core mechanics of neural network training, making them easier to understand for anyone interested in AI, machine learning, and network training. By understanding backpropagation, viewers can better grasp how neural networks evolve to process information more accurately. Keywords: Rosenblatt, AI, Artificial Intelligence, Neural Networks, Backpropagation, Machine Learning, Network Training, Data Adaptation, Error Calculation, Performance Tuning, Decision Making.
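
For readers who want to see what "calculating errors and updating weights" looks like concretely, here is a minimal sketch of backpropagation for a tiny two-layer network. This is an illustrative toy example (not the video's own code): the forward pass makes a prediction, the error is measured, and the chain rule carries that error backward to adjust every weight.

    import numpy as np

    # Toy training set: XOR inputs and targets (illustrative only)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 4))   # input -> hidden weights
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # Forward pass: compute the network's prediction
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)

        # Error at the output (derivative of squared-error loss)
        err = out - y

        # Backward pass: the chain rule assigns each weight its share of blame
        d_out = err * out * (1 - out)        # sigmoid'(z) = s * (1 - s)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Update weights a small step downhill along the gradient
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h

    print(np.round(out.ravel(), 2))  # predictions approach [0, 1, 1, 0]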

Comments: 135

  • @markheaney (4 years ago)

    This is easily the best channel on KZread.

  • @TimBorny (4 years ago)

    As always, worth the wait. You are a genius at distillation and visualization.

  • @ArtOfTheProblem (8 months ago)

    Just finished this series, please help me share it: kzread.info/dash/bejne/gXqHm5JmdrucoMo.html

  • @robertbohrer7501 (4 years ago)

    This is the best explanation of neural networks I've seen by far, and I've seen most of them.

  • @ArtOfTheProblem (4 years ago)

    Thrilled to hear this.

  • @robosergTV (3 years ago)

    3Blue1Brown is on par, I'd say.

  • @AstonVantage8 (1 month ago)

    To me this video provides a better intuitive understanding than 3Blue1Brown.

  • @austinvw1988 (3 months ago)

    WOW!! This is the only video that I've watched that made me finally get it. The inclusion of the physical dimmer switch and weights in the neural net made me finally start to grasp this concept. Thank You! 👏

  • @ArtOfTheProblem (3 months ago)

    So, so happy to hear this, glad I took the time. I need to get this video out there more.

  • @TimBorny (4 years ago)

    Seriously impressive. As someone currently applying to masters degrees in science communication, you are an inspiration. While a personal inquiry within a public forum is generally not advisable, I'm compelled to wonder if you'd be willing to be available for a brief conversation.

  • @ArtOfTheProblem (4 years ago)

    I appreciate hearing this. You can reach me at britjcruise@gmail.com

  • @baechlio (4 years ago)

    Yay!!! My favourite channel finally uploads again... To be honest, the quality of your videos makes the wait worth it.

  • @idiosinkrazijske.rutine (4 years ago)

    The highlight of this day

  • @iMamoMC (4 years ago)

    This video was great! What an awesome introduction to deep learning ^^

  • @raresmircea (4 years ago)

    This, along with everything else on this channel, is fantastic material for schools. I hope it gets noticed by teachers

  • @KipColeman (8 months ago)

    College IT professor here... we are noticing! :)

  • @rj8875 (4 years ago)

    After reading all day about TensorFlow, you just inspired me to go deeper into this subject. Thank you.

  • @ArtOfTheProblem (4 years ago)

    woo hoo!

  • @KhaliliStudios (4 years ago)

    I’m always very impressed at the novel approach to teaching these subjects - another hit, Brit!

  • @ArtOfTheProblem (4 years ago)

    Thank you for your ongoing feedback. I worked super hard on this one.

  • @AstonVantage8 (1 month ago)

    I have been watching quite a few videos trying to get an intuitive understanding of the inner workings of neural networks. Some are too simplistic, while others go way deep into the math. This video has just the right level of detail, accompanied by excellent illustrations and explanations. It is by far the best I have come across. Glad I found this channel; I look forward to the other videos.

  • @ArtOfTheProblem (1 month ago)

    Thrilled to hear it! Working on a big video on the history of RL, stay tuned.

  • @yomanos (4 years ago)

    Brilliant video, as always. The part explaining deep neural networks was particularly well done.

  • @Aksahnsh (4 years ago)

    I just don't understand why this channel is not popular.

  • @ArtOfTheProblem (4 years ago)

    I know, I kinda stopped asking myself. I know it's due to algorithm changes in some way, because my videos don't even go to subscribers much at all.

  • @Aksahnsh (4 years ago)

    @ArtOfTheProblem True, even I didn't get it in my recommendation feed. I just realized why there had been no new video from you for a long time; I had to open your channel to find it manually. Clicked the bell icon now, though.

  • @ByteNishi (4 years ago)

    @ArtOfTheProblem Please don't get disheartened. I really love your videos and eagerly wait for new ones :)

  • @roygalaasen (2 years ago)

    There are no truer words than these. It baffles me, as these videos are at least on the level of other highly popular science/math YouTubers. It feels kind of unfair. Even the videos made 8+ years ago are pieces of masterful art. Did any of the other YouTubers even exist back then? (I guess some did.)

  • @roygalaasen (2 years ago)

    @ByteNishi I am praying for the same. I am happy with the once-a-year schedule; at least there is something. Edit: I know that is a bit of an exaggeration. There are at least 4 videos per year, which seems close to what 3b1b does nowadays as well.

  • @kriztoperurmeneta7089 (4 years ago)

    This kind of content is a treasure.

  • @ccc3 (4 years ago)

    Your videos are great at making someone more curious about a subject. They have the right balance of simplification and complexity.

  • @ArtOfTheProblem (4 years ago)

    Appreciate the feedback, that's what I'm looking to do with these videos. Stay tuned for the next in this series; it took me a long while to write.

  • @TheFirstObserver (2 years ago)

    This is a well-done visual representation of artificial neural networks and how they compare to biological ones. The only thing I might add is that the real reason the "gradual" activation functions mentioned in the latter half of the video are so useful is that they are differentiable. Differentiability is what truly allowed backpropagation to shine, as the chain rule lets the error of a neuron be determined by the error of the neurons following it, rather than computing each neuron's error from the output directly each time.
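
    To make the differentiability point concrete, here is a small sketch (illustrative, not from the video) of the chain rule passing error backward through a single sigmoid unit:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # One neuron: y = sigmoid(w * x), squared-error loss L = 0.5 * (y - t)^2
        x, w, t = 1.5, 0.8, 1.0

        z = w * x
        y = sigmoid(z)

        # Chain rule: dL/dw = dL/dy * dy/dz * dz/dw
        dL_dy = y - t          # error at the output
        dy_dz = y * (1 - y)    # sigmoid'(z): defined everywhere, unlike a step
        dz_dw = x

        grad_w = dL_dy * dy_dz * dz_dw
        print(grad_w)  # the nudge applied to w; a hard step function would give 0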

  • @mehdia5176 (4 years ago)

    Beautiful work coming from a beautiful biological neural network about the beauty of artificial neural networks.

  • @zyugyzarc (1 year ago)

    Now that's a brilliant explanation of neural networks. Better than anything I've ever seen.

  • @ArtOfTheProblem (1 year ago)

    Glad you found it.

  • @srabansinha3430 (4 years ago)

    As a medical student studying neural anatomy and physiology, this is a whole new perspective to me!!! Keep teaching us more!! You are the best teacher :)

  • @ArtOfTheProblem (4 years ago)

    This means a lot, thanks for sharing.

  • @ilovett (4 years ago)

    This could be a Netflix series. Bravo.

  • @chris_1337 (4 years ago)

    Fantastic work!

  • @NoNTr1v1aL (1 year ago)

    Absolutely brilliant video!

  • @ssk081 (4 years ago)

    Great explanation of why we use a smooth activation function

  • @elektrisksitron9054 (4 years ago)

    Another amazing video!

  • @interspect_ (4 years ago)

    Great video as always!!

  • @CYON4D (4 years ago)

    Excellent video as always.

  • @karolakkolo123 (4 years ago)

    Wow! Probably the most amazing explanation on the internet. Will actual calculations be covered in the series (e.g. backpropagation calculus), or will it be mostly conceptual? (Either way, I'm sure it will be interesting and of unmatched quality.)

  • @ArtOfTheProblem (4 years ago)

    Great question, and thank you. No more details on backprop calculations (there are lots of good videos on that), in order to focus on other key insights. Stay tuned!

  • @poweruser64 (4 years ago)

    Wow. Thank you so much for this

  • @jayaganthan1 (2 years ago)

    Just wow. Awesome explanation.

  • @midhunrajr372 (4 years ago)

    What a nice presentation.

  • @JoshKings-tr2vc (2 months ago)

    This is a very well-written video and it explains things quite well. I have a question for anyone willing to answer: what would occur if we took a simple functioning neural network and added another layer to it? Would it get better in confidence or in conceptualization, or would it simply have no noticeable effect? Along the same lines, if it gave even a minor improvement (better generalization), would that be a more efficient way of training a deep neural net? Sort of like calculating the ohms each resistor takes up in a circuit by breaking it down into simpler bite-sized problems. Just things that tickle my fancy.

  • @JoshKings-tr2vc (2 months ago)

    That second question was confusing. All I'm saying is: if we have this huge neural net, why not break it down into smaller parts and optimize those for confidence, since adding more layers would supposedly make it better at generalization?
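
    One way to probe the question above empirically is to train the same task at two depths and compare. A minimal sketch (a hypothetical setup, not anything from the video) using scikit-learn:

        from sklearn.datasets import make_moons
        from sklearn.neural_network import MLPClassifier

        # Toy dataset: two interleaved half-moons (a nonlinear boundary)
        X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

        for layers in [(8,), (8, 8)]:  # one hidden layer vs. two
            net = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000,
                                random_state=0)
            net.fit(X, y)
            print(layers, "training accuracy:", net.score(X, y))

    Whether the extra layer helps depends on the task; on a problem this simple it often changes little, which is why depth is usually tuned empirically.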

  • @ByteNishi (4 years ago)

    Love your videos; can you please post more often? Thanks, your videos are always worth the wait.

  • @ArtOfTheProblem (4 years ago)

    Thanks so much. I can't possibly post more often, but what I can do is promise to continue for another 10 years :)

  • @ArtOfTheProblem (4 years ago)

    STAY TUNED: The next video will be "History of RL | How AI Learned to Feel"
    SUBSCRIBE: www.youtube.com/@ArtOfTheProblem?sub_confirmation=1
    WATCH AI series: kzread.info/head/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ

  • @acidtears (4 years ago)

    Great video! Do you have any idea how these types of neural networks would respond to visual illusions? I'm writing my thesis on neural networks and biological plausibility, and I realized that there seems to be a disconnect between human perception and the processing of neural networks. Either way, incredibly informative.

  • @sumitlahiri209 (4 years ago)

    Amazing video. It was really worth the wait. I have watched all your videos. Just awesome, I would say. The best channel for inspiring computer science enthusiasts.

  • @ArtOfTheProblem (4 years ago)

    That's really cool to hear you've watched them all. Thanks for sharing.

  • @sumitlahiri209 (4 years ago)

    @ArtOfTheProblem I watched all of them. They inspired me to take up computer science. I really love the video on the Turing machine. I share your videos in circles as well.

  • @ArtOfTheProblem (4 years ago)

    @sumitlahiri209 You can offer me no higher compliment.

  • @DaveMakes (4 years ago)

    Great work.

  • @shawnbibby (7 months ago)

    I would also love to see a video where all the terminologies are used together and defined in a single place, such as bit, node, neuron, layer, weight, deep learning, entropy, capacity, memory, etc. I am trying to write them down myself as a little glossary. Their meanings are so much greater when they are grouped together.

  • @ArtOfTheProblem (7 months ago)

    Thank you! I was thinking of making a super edit of this series, just need to scope it correctly...

  • @shawnbibby (7 months ago)

    The term "distributed representation", when compared to musical notes, makes it seem as if an image has its own resonance or signature frequency, as if we really are seeing or feeling the totality of the image of the firing neurons.

    We seem to be addicted to understanding perception from a human point of view. Imagine if we could begin to find translations to see it from animal points of view, from different sensory combinations, and from different combinations of layered senses. The potential is infinite.

    I like the addition of the linear learning machine versus one that forgets and uses feelings. It seems that by combining both memory styles you would have more unique potentialities in the flavor pot of experiences, especially when the two interact with each other, not to mention the infinite different perspectives they would each carry while traveling through time, over small and large epochs.

    I keep coming back to the encryption/decryption videos: strong encryption requires complete randomness, and the baby's babbling was seemingly random in nature, which raises the question, was it truly random, or could we simply not see the pattern from our limited perspective? What are the scale and size of the pattern? And what conceptions and perspectives need to merge to find the key to interpreting it?

  • @ArtOfTheProblem (7 months ago)

    Yes, I'd say "feeling".

  • @thisaccountisdead9060 (4 years ago)

    I'm not an expert or anything, but I had just been looking at networks. I was interested in the Erdős formula:

        degrees of separation = ln(population size) / ln(average number of friends per person)

    For example, it is thought there are something like 6 degrees of separation, with an average of 30 friends per person, among the global population. But I was also looking at Pareto distributions:

        1 - 1/(Pareto index) = ln(1 - P^n) / ln[1 - (1 - P)^n]

    where P relates to the proportion of wealthiest people and (1 - P) is the proportion of wealth they hold. For example, if 20% of people have 80% of the wealth, then P = 0.2 and (1 - P) = 0.8, with n = 1 (n can be any number; if n = 3 it gives 1% of people with 50% of the wealth), and the Pareto index would be 1.161. Whether it was a fluke I don't know; I did derive the formula as best I could rather than just guessing. But it seemed as though the following was true:

        1 - 1/(Pareto index) = 1/(degrees of separation)

    meaning that

        Pareto index = ln(population size) / [ln(population size) - ln(average number of friends per person)]

    This suggests that the more friends people have on average, the lower the wealth inequality would be, which I thought was a fascinating idea... but it also seemed as though the wealthiest actually had the most 'friends' or 'connections'. The poorest would have the fewest connections while the wealthiest would have the most; in effect, poor people would be channeling their attention toward the wealthiest. The top 1% would have an average of around 2,000 connections each (and a few million dollars) while the poorest would have as few as 1 or 2 connections each (with just a few thousand dollars, based on a share of $300 trillion). Maybe in a neural network the most dominant parts of the brain are the most connected parts? As I say, I am not an expert. I was just messing around with it.
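
    As a quick sanity check of the first formula above (a back-of-the-envelope sketch; the population and friend-count figures are assumptions, not data from the video):

        import math

        population = 7.8e9   # assumed world population
        avg_friends = 30     # assumed average friends per person

        degrees = math.log(population) / math.log(avg_friends)
        print(round(degrees, 1))   # ~6.7, close to the famous "six degrees"

        # The commenter's claimed link between the two quantities:
        pareto_index = math.log(population) / (
            math.log(population) - math.log(avg_friends))
        print(round(pareto_index, 2))  # ~1.18, near the 1.161 quoted above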

  • @fungi42021 (3 years ago)

    Always looking for new content to watch on this topic... great channel.

  • @ArtOfTheProblem (3 years ago)

    I'm so happy you found this series, as it isn't ranking well yet. I have more videos coming out in this series soon.

  • @mridhulml9238 (2 years ago)

    Wow, this is really, really great... you are really good at explaining.

  • @yagomg7790 (4 years ago)

    Best explanation on YouTube. Keep it up.

  • @ArtOfTheProblem (4 years ago)

    Appreciate the feedback.

  • @lalaliri (3 years ago)

    Amazing work! Thank you.

  • @ArtOfTheProblem (3 years ago)

    Appreciate the feedback.

  • @username4441 (4 years ago)

    11:49 And the narration model took how long to train?

  • @harryb.234 (2 months ago)

    Really cool. Haven't seen a better explanation

  • @trainer1kali (4 years ago)

    A message to the ones responsible for the choices of background music to match the mood: "you're pretty good." P.S. In fact, you are AWESOME.

  • @ArtOfTheProblem (4 years ago)

    Thank you, glad it's working.

  • @Arifi070 (4 years ago)

    Great work! However, although the artificial neural network was inspired by the workings of our brains, visualizing the network inside a head can give the wrong idea that the human brain works that way. In fact, it is not like a feedforward neural network. [Just a side note]

  • @user-eh9jo9ep5r (5 months ago)

    What input could be applied to restore neurons to their basic correct states, so that they give correct outputs?

  • @harryharpratap (4 years ago)

    Biologically speaking, what are the "weights" inside our brains? What physical part of the brain do they represent?

  • @solsticeprojekt1937 (11 months ago)

    Hi! Three years late, but at 0:26, where you say "feelings", you describe something much better explained as "realizations". The answer to the "why?" of this lies behind the saying "an image speaks a thousand words". The part that takes care of logic works in steps, sequentially, and can analyse the "whole" of a realization, just as we can put feelings and ideas into words. This works both ways, of course, but the path from words to realizations is a much, much slower one.

  • @ArtOfTheProblem (8 months ago)

    Took 2 years to finish this one, finally live. Would love your feedback: kzread.info/dash/bejne/gXqHm5JmdrucoMo.html

  • @KDOERAK (4 months ago)

    Simply excellent 👍

  • @ArtOfTheProblem (4 months ago)

    Thanks, stay tuned for more!

  • @zuhail1519 (1 year ago)

    I want to mention that I watched the video halfway, and I must say, I am a complete noob when it comes to biology, but without making things complicated for a person like me, you made it incredibly clear how amazingly our brain works and generalizes stuff, especially with your example of the short story (can you please mention the author's name? I couldn't quite catch it, and the captions aren't clear either). Thank you for making this content, I'm grateful. Jazakallah hu khayr.

  • @ArtOfTheProblem (1 year ago)

    Thrilled to have you; I'm still working on the final video in this series, so please stay tuned. Was it Borges?

  • @zuhail1519 (1 year ago)

    @ArtOfTheProblem Already have my seatbelt fastened!

  • @iamacoder8331 (1 year ago)

    Very good content.

  • @ArtOfTheProblem (1 year ago)

    More on the way, thanks.

  • @ahmadsalmankhan3200 (8 months ago)

    Amazing

  • @ArtOfTheProblem (8 months ago)

    :))

  • @user-eh9jo9ep5r (5 months ago)

    What if one layer's behaviour differs from what is expected and isn't recognised as correct, but the other layers still give outputs from various geometrical inputs at the level of sense impulses? What can be done to filter the inputs and receive correct outputs?

  • @user-eh9jo9ep5r (5 months ago)

    If a network receives an input and gives an output, and the answer isn't clear and is recognised as incorrect, could that be recognised as a network disease? And if so, could it be recognised as a consequence of influence from the outputs of another network or networks?

  • @emanuelmma2 (5 months ago)

    Amazing Video.

  • @ArtOfTheProblem (3 months ago)

    Would love it if you could help share my newest video: kzread.info/dash/bejne/Z3mXs5OCk6izdrQ.html

  • @KalimbaRlz (3 years ago)

    Excellently explained.

  • @ArtOfTheProblem (3 years ago)

    Thanks for the feedback; have you watched the whole series?

  • @KalimbaRlz (3 years ago)

    @ArtOfTheProblem Yes I did! Thank you for all the information.

  • @slazy9219 (1 year ago)

    Holy shit, this is some next-level explanation. Thank you so much!

  • @ArtOfTheProblem (1 year ago)

    Super glad you found it; still working on this series.

  • @abiakhil69 (4 years ago)

    Consensus mechanism?

  • @robosergTV (3 years ago)

    Isn't the universal approximation theorem the mathematical proof that a NN can solve and model any problem/function?

  • @ArtOfTheProblem (3 years ago)

    Right, but that is only in theory; in practice the number of neurons required makes it "practically impossible" to implement.
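
    For the curious, the theorem can be seen in miniature by letting a single hidden layer fit a smooth curve as it gets wider (an illustrative sketch, not anything from the video):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Target: one period of a sine wave
        X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
        y = np.sin(X).ravel()

        # One hidden layer; widening it improves the fit, per the theorem
        for width in [2, 10, 100]:
            net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=5000,
                               random_state=0)
            net.fit(X, y)
            print(width, "units, R^2 =", round(net.score(X, y), 3))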

  • @robosergTV (3 years ago)

    @ArtOfTheProblem True, but at the end of the video you said something like "we still don't have a mathematical proof of how NNs work."

  • @columbus8myhw (4 years ago)

    Link to the Hinton lecture?

  • @ArtOfTheProblem (4 years ago)

    kzread.info/dash/bejne/rKBtm6uTprqdoqg.html

  • @arty4679 (4 years ago)

    Does anyone know the name of the Borges story?

  • @ArtOfTheProblem (4 years ago)

    Worth reading: "Funes the Memorious".

  • @Trombonauta (6 months ago)

    1:13 Cajal is pronounced more closely to /kah'al/ than to that, FYI.

  • @AceHardy (4 years ago)

    👑

  • @user-eh9jo9ep5r (5 months ago)

    If the sensory order is destroyed or noised, something like network traffic disruption, what needs to be done to keep the whole neural network safe?

  • @lakeguy65616 (2 years ago)

    So adding hidden layers allows a NN to solve more complex problems. How many layers is too many? You are limited by the speed of the computer training the NN. I assume too many layers allow the NN to "memorize" instead of generalizing. Are there any other limits on the number of hidden layers? What about the number of neurons/nodes per layer? Is there a relationship between the number of inputs and the number of neurons/nodes in the network?

    What about the relationship between the number of rows in your dataset and the number of columns? As I understand it, the number of rows imposes a limit on the number of columns, and adding rows to your dataset allows you to expand the number of columns too. Do you agree, or do you have a different understanding?

    OUTSTANDING VIDEOS! John D Deatherage

  • @ArtOfTheProblem (2 years ago)

    Super great questions; I hope others can chime in. I just wanted to add that in theory you only need one hidden layer, if it were really, really wide, to solve any problem (see the universal approximation theorem), but in practice that doesn't work. And yes, if the network is "too deep" it will be too difficult to train, so you need a sweet spot. When it comes to how wide those layers need to be, the most interesting research to me is how narrow you can make them in order to force the network to abstract (compress/generalize) the information in the middle. You can also make the hidden layers very wide, which will cause the network to memorize instead of generalize. I didn't quite follow your column/row question, though.
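
    The "narrow middle" idea above is essentially an autoencoder bottleneck. Here is a minimal sketch of that shape (a toy illustration under assumed sizes, not the video's code):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Reconstruct the input through a narrow middle layer, forcing a
        # compressed (generalized) internal representation.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 16))   # 16 columns of toy data...
        X[:, 8:] = X[:, :8]               # ...but only 8 independent ones

        net = MLPRegressor(hidden_layer_sizes=(32, 8, 32),  # wide-narrow-wide
                           max_iter=3000, random_state=0)
        net.fit(X, X)                     # target = input (reconstruction)
        print("reconstruction R^2:", round(net.score(X, X), 3))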

  • @bicates (1 year ago)

    Eureka!

  • @abiakhil69 (4 years ago)

    Sir, any blockchain-related videos in the future?

  • @ArtOfTheProblem (4 years ago)

    Have you seen my Bitcoin video?

  • @abiakhil69 (4 years ago)

    @ArtOfTheProblem Yes sir. One of the best videos on YT.

  • @ArtOfTheProblem (4 years ago)

    @abiakhil69 I do plan a follow-up video, starting with ETH.

  • @abiakhil69 (4 years ago)

    @ArtOfTheProblem Great, sir. Another great video coming. Waiting 👍.

  • @KittyBoom360 (4 years ago)

    This might be more of a tangent to your great video, but my understanding is that intuition and logic aren't really distinct things. The former is merely more hidden in deep webs of logic, while the latter is the surface, or what is most obvious and intuitive. Ah, see the paradox? It's a false dichotomy resulting from awkward folk terms and their common definitions. I was always like the teacher's pet in college courses on logic and symbolic reasoning while majoring in philosophy, maybe partly because anything labeled "counter-intuitive" was something I would never accept until I could make it intuitive for me via study and understanding. But putting me and my possible ego aside, look at the example of a great mathematician such as Ramanujan and how he described his own process of doing math while in dream-like states. His gift for logic was indeed his gift for intuition, or vice versa, depending on your definitions.

  • @ArtOfTheProblem (4 years ago)

    Yes, I had a section in this video I cut which I kinda wish I'd left in. It was about how intuition is the foundation out of which logic grows. Kids don't learn "words first", they learn "sense first", so mathematicians are of course guided by intuition, and then they can later prove things with logic.

  • @Libertariun (7 months ago)

    14:45 ... can learn to configure THEIR connections ...

  • @kmachine5110 (4 years ago)

    KZread is a mind reader.

  • @fredsmith2277 (8 months ago)

    It's all a chain of cause and effect from start to finish. Each level or layer sharpens and zeroes in on the exact match, and the result is refined until it is locked in. The human brain compares past results to incoming stimuli, and the result is also linked by chains of associations. Take the result "it's a dog": the associations might be that dogs are furry, playful, dangerous, have a master, wag their tails when happy, and so on. But are associations unique to each separate mind?

  • @fxtech-art8242 (1 year ago)

    GPT-4

  • @ArtOfTheProblem (1 year ago)

    A lot of progress since this video :)

  • @escapefelicity2913 (4 years ago)

    Get rid of the background noise

  • @escapefelicity2913 (4 years ago)

    For anything expository, any background sound is unhelpful.

  • @I-Do-NOT-Consent-303 (8 months ago)

    After 35 seconds, you are already wrong. We do not think in sentences!!! Thinking is wordless. However, we translate our thinking into words, but this is not necessary. The point is that language is only necessary if we want to communicate with another person. Thinking comes first, NOT as a result of sentences. If you get good at meditation and centering yourself, you can drive your car without verbalizing what you are doing. You can make decisions and act on them without verbalization, internal or external! Language is only a descriptor, not the thinking faculty itself!!!

  • @ArtOfTheProblem (8 months ago)

    Check out the whole series, as I built up to this. I agree with you!

  • @vj.joseph (8 months ago)

    You are wrong within the first 40 seconds.

  • @ArtOfTheProblem (8 months ago)

    Say more!

  • @motherfucc (6 months ago)

    Good sound design.