Neural Networks and Deep Learning: Crash Course AI #3

You can learn more about CuriosityStream at curiositystream.com/crashcourse.
Today, we're going to combine the artificial neurons we created last week into an artificial neural network. Artificial neural networks are better than other methods for more complicated tasks like image recognition, and the key to their success is their hidden layers. We'll talk about how the math of these networks works and how using many hidden layers allows us to do deep learning. Neural networks are really powerful at finding patterns in data, which is why they've become one of the most dominant machine learning technologies used today.
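
To make the idea concrete, here is a minimal sketch of a one-hidden-layer network in Python with NumPy; the sizes and weights are made-up placeholders, not the trained networks discussed in the episode:

    # Minimal sketch of a neural network with one hidden layer (NumPy).
    # The weights here are random placeholders, not a trained "dog detector".
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    n_inputs, n_hidden, n_outputs = 784, 16, 2   # e.g. a 28x28 image -> dog / not-dog
    W1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))   # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_outputs, n_hidden))  # hidden -> output weights
    b2 = np.zeros(n_outputs)

    x = rng.random(n_inputs)          # stand-in for pixel brightness values
    hidden = relu(W1 @ x + b1)        # each hidden neuron: weighted sum + nonlinearity
    output = softmax(W2 @ hidden + b2)
    print(output)                     # two numbers that sum to 1, e.g. [0.51 0.49]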
Crash Course is on Patreon! You can support us directly by signing up at / crashcourse
Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:
Eric Prestemon, Sam Buck, Mark Brouwer, Timothy J Kwist, Brian Thomas Gossett, Haxiang N/A Liu, Jonathan Zbikowski, Siobhan Sabino, Zach Van Stanley, Bob Doye, Jennifer Killen, Nathan Catchings, Brandon Westmoreland, dorsey, Indika Siriwardena, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Khaled El Shalakany, SR Foxley, Sam Ferguson, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, David Noe, Shawn Arnold, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore
--
Want to find Crash Course elsewhere on the internet?
Facebook - / youtubecrashcourse
Twitter - / thecrashcourse
Tumblr - / thecrashcourse
Support Crash Course on Patreon: / crashcourse
CC Kids: / crashcoursekids
#CrashCourse #ArtificialIntelligence #MachineLearning

Comments: 177

  • @SavingSpace · 4 years ago

    By the end of this course John Green Bot will replace John Green.

  • @i_smoke_ghosts · 4 years ago

    yea dunno if this one's gonna take off mate. back to the drawing board ay

  • @fatimasalmansiddiqui1182 · 4 years ago

    I think John Green will come in at least one episode of this series though.

  • @MisterJasro · 4 years ago

    Nah, this is actually a prequel to all other Crash Course series. There never was a "real" John Green.

  • @kavinumasankar6544 · 4 years ago

    @@MisterJasro Then why is John Green in the credits?

  • @river_brook · 4 years ago

    @@kavinumasankar6544 Time travel, of course!

  • @knack8284 · 4 years ago

    Just remember, the brain perceives things through a series of guesses. so with billions of neurons doing complex statistical analysis, nobody is as bad at math as they think :)

  • @druidiron · 4 years ago

    You have clearly never seen me do math.

  • @knowledgemagnet4077 · 4 years ago

    @@druidiron egotist

  • @WolfiePH · 4 years ago

    8:56 > Answers simple question correctly > Moonwalks away. John Green Bot is basically all 1st grade elementary boys ever.

  • @radreed · 4 years ago

    Why only boys? What you tryna say

  • @maevab2923 · 4 years ago

    @@radreed because john green bot looks more like a boy than a girl and is also literally named john. Don't be so sensitive

  • @idkmy_name7705 · 1 year ago

    I wish Crash Course would make notes for the courses in written format. So I can recall the learnt materials easily. Also this series is fireee

  • @Chelsieelf · 4 years ago

    This is perfect for me to understand about AI since I'm taking Neuro! Thank you so much 💙💙

  • @lincolnpepper816 · 4 years ago

    single best explanation of neural networks I've seen.

  • @JukeboxTheGhoul · 4 years ago

    Captcha uses this to teach computers as well as checking if we're human

  • @kilianblum8161 · 4 years ago

    Neptune Productions small detail but “teach computers” is misleading. In captcha you label data to train models. A computer would be used to train the model or apply the model later, it doesn’t “learn” anything.

  • @Souchirouu · 4 years ago

    Yeah, it's kinda weird that the test to prove you're human is training computers to be able to do the same thing. So there will be a point where Captcha will change/become more complex because computers can solve the current generation ones as well as humans can.

  • @thomas.02 · 4 years ago

    @@Souchirouu imagine solving complex calculus that even wolfram alpha couldn't handle just to sign up to something

  • @JukeboxTheGhoul · 4 years ago

    @@thomas.02 There are certain things that humans are really just better at doing than computers are. Take for example AI in videogames. A human can probably execute a tactical maneuver, but they take time to process and plan. Computers can take instant action and rush the human before they have a chance to think. (I'm referring to Total War: Attila)

  • @avaavazpour2786 · 4 years ago

    So they're teaching a bot how to verify that it is not a bot. Fine logic.

  • @dejohnny2 · 4 years ago

    Jabrill, you hit a home run with this video. 5 stars dude!

  • @cameronhunt5967 · 4 years ago

    You guys should put a link to Sethbling’s video about marIO.

  • @whocares2087.1 · 4 years ago

    This is a really great series. **takes notes**

  • @stevenfeldstein6224 · 4 years ago

    I find it strange that Alex Krizhevsky (as of 12/20/19) doesn't have his own page on Wikipedia, nor does he appear in the Wikipedia pages for neural networks or machine learning, yet his work is cited over 84,000 times on Google Scholar.

  • @amanatee27 · 4 years ago

    This is a great series, thank you all for taking the time to make it! For future videos, could Jabril's audio be turned up just a bit more? Sometimes, the end of his sentences get quieter and it's harder to catch all the info. Thank you!

  • @mattkuhn6634 · 4 years ago

    Oh man, I can't wait to see you guys talk about gradient descent! Great job so far!

  • @simpleskills7222 · 4 years ago

    Crash Course is the place to be. I especially love this series. This channel has inspired me to create my own channel. It is new and I would love to get some support/guidance on how to improve.

  • @mikeywatson5654 · 4 years ago

    Keep trying dude

  • @ShaneHummus · 4 years ago

    Curiosity made me watch this crash course AI series.

  • @lamidom · 4 years ago

    It means you have more than one neuron

  • @werothegreat · 4 years ago

    You have a very mellow, soothing voice. Just wanted to say that!

  • @elihinze3161 · 4 years ago

    I need you to narrate an audiobook. Your voice is so soothing..

  • @BrainsApplied · 4 years ago

    *I love this series* ❤️❤️❤️

  • @marcelocondori7761 · 4 years ago

    Such a good and interesting explanation!

  • @anke4347 · 4 months ago

    This is a great video for complete beginners. Thank you!

  • @NawabKhan-vq5de · 4 years ago

    Amazing sir, keep it up, we are waiting for more of your tutorials...

  • @abrahammekonnen · 4 years ago

    Thanks for the video Jabril

  • @MiguelAlastreP · 4 years ago

    This is awesome. Great big picture about AI spaghetti 5:48

  • @thepowerful7593 · 4 years ago

    3% chance of that

  • @phasingout · 4 years ago

    Now i want to learn to program. Ty for this

  • @beingpras · 4 years ago

    Deep learning and understanding is really what differentiates most successful people. NO matter what fields they are in!!

  • @hugo54758 · 4 years ago

    I LOVE THIS GUY

  • @nezimar · 4 years ago

    Nice shout-out to MarI/O !

  • @JC-vu6sn · 1 year ago

    excellent lesson

  • @ammaeaar · 4 years ago

    fantastic video

  • @element9224 · 4 years ago

    Amazing job with these videos! I'm excited for the next video because I'm stuck on backpropagation. Also, did you mention biases or not?

  • @IceMetalPunk · 4 years ago

    They didn't, but a bias can be thought of as an extra weight on each layer (or an extra neuron that's always inputting 1), so it's kind of captured by the simplified discussion of weights. Also, as someone who is a professional software engineer, who has a degree in computer science, and who took several in-depth machine learning courses at uni, let me tell you: I'm still stuck on backpropagation XD The overall idea is simple enough, but I have yet to be able to ever remember the math that goes into it.
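
    A tiny sketch of that "extra input that's always 1" framing, with made-up numbers:

        import numpy as np

        x = np.array([0.2, 0.7])          # two "real" inputs
        w = np.array([0.5, -1.0])         # their weights
        b = 0.3                           # bias

        explicit = w @ x + b              # the usual formulation

        # Same neuron, with the bias folded in as an extra input fixed at 1
        x_aug = np.append(x, 1.0)
        w_aug = np.append(w, b)
        folded = w_aug @ x_aug

        print(explicit, folded)           # identical values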

  • @element9224 · 4 years ago

    IceMetalPunk yeah, I ended up just running two versions of the same network, and the one that is better gets cloned to the other with slight changes to the weights and biases. It works, but takes more computation and time (as cloning the original one with differences can make it worse). Also my network doesn't have a limit where it can only be in between 1 and 0. I use outputs of different numbers to tell me what it's saying. Ex: output 0.5-1.4 means blank, output 1.5-2.4 means blank, etc.

  • @IceMetalPunk · 4 years ago

    @@element9224 That is known as neuroevolution, my dude :) As for the output, there's functionally not really a difference between 0-1 or 0-2.4; you can always project the range onto any other range proportionally :)
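
    The proportional projection mentioned here is just a linear remap; a minimal sketch, with made-up ranges:

        def remap(value, old_lo, old_hi, new_lo, new_hi):
            """Proportionally project value from [old_lo, old_hi] onto [new_lo, new_hi]."""
            t = (value - old_lo) / (old_hi - old_lo)
            return new_lo + t * (new_hi - new_lo)

        print(remap(1.8, 0.0, 2.4, 0.0, 1.0))   # 0.75: same relative position in the new range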

  • @MaksymCzech · 4 years ago

    AI is basically math. To understand how backpropagation learning in neural nets works, you need to know your multidimensional calculus, the chain rule, and some undergraduate-level linear algebra. That's all there is to it.
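
    As a small illustration of the chain rule at work, here is a hand-computed gradient for a single sigmoid neuron with a squared-error loss (all numbers made up):

        import math

        # One neuron: z = w*x + b, y = sigmoid(z), loss L = (y - target)^2
        x, w, b, target = 1.5, 0.8, -0.3, 1.0

        z = w * x + b
        y = 1.0 / (1.0 + math.exp(-z))
        L = (y - target) ** 2

        # Chain rule: dL/dw = dL/dy * dy/dz * dz/dw
        dL_dy = 2.0 * (y - target)
        dy_dz = y * (1.0 - y)        # derivative of the sigmoid
        dz_dw = x
        dL_dw = dL_dy * dy_dz * dz_dw

        w_new = w - 0.1 * dL_dw      # one gradient-descent step on this single weight
        print(L, dL_dw, w_new)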

  • @jweezy101491 · 4 years ago

    I already know all this stuff but I love jabrils and this series is so good.

  • @hollyg.5516 · 4 years ago

    jweezy2045 had to make your absolute genius apparent, eh?

  • @evanreidel22 · 4 years ago

    I'm surprised you didn't decide to voiceover your Crash Course as well :)

  • @feyisayoolalere6059 · 4 years ago

    Could you do a video comparing how the early visual processing/feedforward process works in the brain? Thanks for being cool!

  • @remuladgryta · 4 years ago

    3:14 MarI/O? Nice!

  • @Danilego · 4 years ago

    I just love it when John Green Bot completely ignores what Jabril says and does something random lol

  • @thepowerful7593 · 4 years ago

    Lol

  • @yourbuddyunit · 4 years ago

    I'll never be able to articulate how wonderful it is to be learning this from a fellow black man. This is a blessing. Thank you infinitely my brotha. Thank you, you inspire me to be greater.

  • @jasonsoto5273 · 4 years ago

    Good vid!

  • @28MUSE · 4 years ago

    Speaking of learning things fast 💟 AI is one such industry that requires constant learning. Thanks for this video. This technology is so dynamic that if you don't catch up and stay updated, your knowledge will get outdated.

  • @reelsalih · 4 years ago

    I'm a simple human. I see cute doggos and I click.

  • @valhernandez4247 · 4 years ago

    R I literally saw the pug and I clicked 🐕

  • @julioservantes8242 · 4 years ago

    @@valhernandez4247 A pug is an abomination not a cute doggo.

  • @zhongliangcai602 · 4 years ago

    I love corgis

  • @discordtrolls5668 · 4 years ago

    No one cares

  • @gadgetboyplaysmc · 4 years ago

    I can finally hear Jabrils opening and closing his mouth while talking.

  • @CuriousIndiaCI · 4 years ago

    Thanks... Crash Course

  • @williamwebb2863 · 4 years ago

    How did I not know Jabrils did an AI crash course series?!

  • @Anonarchist · 4 years ago

    John_GreenBot learns "Dog", by the end of the series he might learn how to drive, and then DESTROY US ALL!

  • @geoffreywinn4031 · 4 years ago

    Educational!

  • @mikek4025 · 4 years ago

    What about Google's deep learning neural network used in AlphaZero and AlphaGo? That's pretty cool

  • @theodorechandra8450 · 4 years ago

    I believe that AlphaGo and AlphaZero treat each grid position in chess or Go as one input neuron; in Go they can encode a black piece as -1, an empty point as 0, and a white piece as 1. In chess each piece can be assigned a single number.
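
    A minimal sketch of that kind of encoding on a hypothetical 3x3 board (the -1/0/+1 scheme is the commenter's suggestion, not necessarily what AlphaGo actually does):

        # Hypothetical encoding of a tiny 3x3 board: black = -1, empty = 0, white = +1.
        board = [
            ["B", ".", "W"],
            [".", "W", "."],
            ["B", ".", "."],
        ]
        code = {"B": -1.0, ".": 0.0, "W": 1.0}

        # Flatten into one number per intersection -- one input neuron each.
        inputs = [code[cell] for row in board for cell in row]
        print(inputs)   # [-1.0, 0.0, 1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 0.0]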

  • @mattkuhn6634 · 4 years ago

    Alpha Go is notable less for the basic architecture of its network, and more for the way it's trained. The reason Go was considered such a problem is that its decision space is huge. Chess was comparatively simple. Chess is played on an 8x8 grid, which means it was feasible for a computer to search many moves ahead from the current board state on the fly, and then it would simply pick the move that made it most likely to win. That's why computers beat grandmasters in chess during the 90s, almost 20 years before neural networks really took off. Go, on the other hand, is a 19x19 grid, and so the decision space was much too large for that kind of brute force calculation. A simple multi-layer perceptron like this episode shows wouldn't work for this either, partly because there simply isn't enough data. You would need move-by-move dissections of hundreds of thousands of games, minimum, and probably more in the millions or even possibly billions of games. They'll get to why this is the case next week when they talk about optimization methods. It also wouldn't work because you would have no way of telling the network what makes a move "good". The solution was to use a different method of optimization called a policy gradient, or more broadly, reinforcement learning. In a sense they simulated games of Go, with the computer playing against itself. At every turn, the network takes the board state as input, and then decides on an action to take - in the case of Go, which spot to put a tile on. It starts off making decisions randomly, but you update the weights on the various actions based on its performance - it plays out whole games, probably against itself, and gives a reward to the winning set of actions, and a punishment to the losing ones, weighting them up or down respectively. Over many, many games, the system learns a policy - what action to take given a board state. Importantly, it doesn't need to know anything about what moves were made to get you into this state, nor does it have to calculate future permutations of the board. Much like the image recognizer, it learns "if I see tiles in this configuration, this move is the most likely to win." In this way, it kind of provides its own training data. Alpha Go is far more complicated than this simple description of course, and if anyone knows its architecture better than me I'd be glad to hear more about it (I haven't read that paper) but that's the essence of it.
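
    For a feel of the "reinforce rewarded actions" idea, here is a toy REINFORCE-style sketch on a made-up one-move game; it is nothing like AlphaGo's actual training setup, just the bare policy-gradient update:

        # Toy policy gradient (REINFORCE): a linear softmax policy learns, from reward
        # alone, to pick the "winning" move for a random 4-number state.
        import numpy as np

        rng = np.random.default_rng(1)
        n_state, n_actions = 4, 3
        theta = np.zeros((n_actions, n_state))          # policy weights
        secret = rng.normal(size=(n_actions, n_state))  # hidden rule deciding which move "wins"

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        lr = 0.1
        for step in range(2000):
            s = rng.normal(size=n_state)                 # "board state"
            probs = softmax(theta @ s)
            a = rng.choice(n_actions, p=probs)           # sample a move from the policy
            reward = 1.0 if a == np.argmax(secret @ s) else -1.0

            # gradient of log pi(a|s) for a linear softmax policy
            grad_logp = -np.outer(probs, s)
            grad_logp[a] += s
            theta += lr * reward * grad_logp             # reinforce rewarded moves

        # Rough check: how often does the trained policy's best guess win?
        wins = 0
        for _ in range(500):
            s = rng.normal(size=n_state)
            wins += int(np.argmax(theta @ s) == np.argmax(secret @ s))
        print(wins / 500)    # typically well above the 1/3 chance level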

  • @mariafemina · 4 years ago

    @@mattkuhn6634 wow thank you so much for the explanation!! 😍 That's why I often like comments more than actual vids

  • @mattkuhn6634 · 4 years ago

    Maria Fedotova Glad it was enjoyable! I’m finishing up grad school on this topic now, and I had a seminar last semester that covered reinforcement learning extensively, so I find it super interesting!

  • @cakezzi · 4 years ago

    Deeper neural networks with deeper layers? That's deep

  • @abrahammekonnen · 4 years ago

    So basically every layer lets you measure more parts of the picture, letting you be more accurate in classifications, right?

  • @inertiasquared6667 · 4 years ago

    Yes and no, input layers let you measure more, hidden layers allow the program to do more with the data and 'think about it' in a more complex manner. It also depends on how the weights have been calibrated as well, though. Hope I could help!

  • @ASLUHLUHCE · 4 years ago

    Every pixel is inputted at the start. Watch 3blue1brown's neural network series for a far deeper explanation.

  • @abrahammekonnen · 4 years ago

    Thanks for the answers :) guys.

  • @kevadu · 4 years ago

    One thing that was glossed over (though understandably so because this is just the intro) is how the input features are used. What he described is a simple 'fully connected' layer that treats every input pixel as a separate feature and looks at arbitrary combinations of them. But this is actually not very robust against things like translation, i.e. if you trained it on images of dogs where the dogs were all centered in the picture and then you showed it the image of a dog in which the dog is off to the side, it probably wouldn't even recognize it as a dog because the starting location of each pixel is extremely important. What almost all image recognition algorithms use today are 'convolution layers'. Rather than training neurons that look at specific pixels, they train 'filters', small grids of weights that get scanned across the image. So the specific pixels input into the filter are constantly changing but they're always going to be in the same positions relative to each other. This emphasizes relative positions of pixels over absolute positions and makes the whole algorithm a lot more robust as well as easier to train.
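
    A minimal sketch of a filter being scanned across an image (a plain cross-correlation loop; the image and kernel values are made up):

        # Minimal "filter scanned across an image" (valid cross-correlation), pure NumPy.
        import numpy as np

        image = np.arange(36, dtype=float).reshape(6, 6)   # stand-in 6x6 grayscale image
        kernel = np.array([[ 1.0, 0.0, -1.0],              # a 3x3 vertical-edge-style filter
                           [ 1.0, 0.0, -1.0],
                           [ 1.0, 0.0, -1.0]])

        kh, kw = kernel.shape
        out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        feature_map = np.zeros((out_h, out_w))

        # Slide the same small weight grid over every position of the image.
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i:i + kh, j:j + kw]
                feature_map[i, j] = np.sum(patch * kernel)

        print(feature_map.shape)   # (4, 4): one response per filter position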

  • @abrahammekonnen · 4 years ago

    @@kevadu so they basically program the pixels to be scanned as one large 'pixel' and that changes based on what you are looking at, right? (Just trying to synthesize the info u said so I can make sure I understand it)

  • @i_smoke_ghosts · 4 years ago

    very good Gibraltar! my mans

  • @i_smoke_ghosts · 4 years ago

    You know his name is not Gibraltar ay?

  • @Vishal-np9pe · 4 years ago

    Hey! Jabril, want to thank you for such an informative and easy to comprehend lecture. But the only thing is that I didn't get that gist of the math imagery. Could you help me out?

  • @nomobobby · 4 years ago

    vishal Ghulati What part did you stumble on?

  • @Vishal-np9pe · 4 years ago

    @@nomobobby How does the input layer send its input to the next layer?

  • @horisontial · 4 years ago

    John Green-bot? Hhahaha, I might be easily delighted

  • @thelastone0001 · 4 years ago

    I really like this host. I hope Jabroni sticks around for a long time.

  • @photophone5574 · 4 years ago

    3:14 uncredited video from Sethbling.

  • @user-xq5og9lt8p · 4 years ago

    World isn't just bagels and donuts. Sometimes it's bees and threes.

  • @thatafricanboii · 4 years ago

    Anyone else watches this _and_ Ted-Ed?

  • @ac3_train3r_blak34 · 4 years ago

    He's back, ladies and gentlemen!!!!

  • 4 years ago

    What was the guy saying "nope, nope" with his head and closing the laptop looking at?

  • @robellyosief8820 · 1 year ago

    Jabrill!!!

  • @ananyapujar6797 · 4 years ago

    Doesn't lighting change Face ID's recognition of people?

  • @chillsahoy2640 · 4 years ago

    Seeing the cassette made me realize that for many younger viewers, this will be a strange type of old technology they may have never seen before.

  • @LiaAwesomeness · 4 years ago

    why red green and blue and not red yellow and blue (or magenta, yellow and cyan blue)? and how do the neurons "distribute tasks"? how do they "decide" which neuron of the hidden layers focuses on what?

  • @idkmy_name7705 · 1 year ago

    i just finished high school. The maths is basically complex probability?

  • @adammorley6966 · 4 years ago

    Music theory, please?

  • @1224chrisng · 4 years ago

    Who ships Jabrills with Carykh

  • @BinaryReader · 4 years ago

    Glossing over a few details there.

  • @W0lfbaneShikaisc00l · 4 years ago

    Jabril: Hey, I'm y'bro! Yeah you are! Ahh I'm kidding it just sounds very like it.

  • @discordtrolls5668 · 4 years ago

    Bruh

  • @gabedarrett1301 · 4 years ago

    What about quantum computers?

  • @supernova5434 · 4 years ago

    It is like minecraft building, the bigger the scale the more details you get

  • @nickd7986 · 4 years ago

    Wanted Johann Bon Greenvot to fire lasers and summon magic but he slowly backed away.

  • @LashknifeTalon · 4 years ago

    I was expecting him to calculate the possibility that Jabril was a dog.

  • @shortssquad1 · 4 years ago

    Miss this line "yo guys jabrils here!"

  • @chu8 · 4 years ago

    john green bot? literally skynet

  • @thomas.02 · 4 years ago

    why can't the neurons within each hidden layer interact with each other? For example, if a neighbour neuron got a high number, that'd make another neuron (of the same layer) act differently. is that arrangement helpful or just unnecessarily complicating things?

  • @openedsaucer · 4 years ago

    Thomas Chow the network described in the video is called a feedforward network. The kind of network you're talking about does exist and is called a recurrent neural network. It's usually used for sequential data, whereas feedforward nets are used for one-off computations.

  • @thomas.02 · 4 years ago

    @@openedsaucer Where does sequential data pop up? I'll guess data about the location of a self-driving car?

  • @openedsaucer · 4 years ago

    @@thomas.02 sequential data can come in a bunch of different forms. Usually it comes in the form of a time series. For example you would use a feedforward net to classify images and you would use an RNN to classify videos, with each video being a stream of images over a period of time. You're right in that RNNs are likely used in self-driving applications, as data is captured in real time so to speak. Another place you might want to use an RNN is for something like speech to text, where the number of words/syllables in a sentence can vary. Typically you don't want to use RNNs for simple classification as it's a bit overkill. You can make feedforward nets as big/complex as you want to approximate whatever function you're trying to map.
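
    A minimal sketch of the recurrent idea: the same weights are applied at every time step and a hidden state carries context forward through the sequence (sizes and weights made up):

        import numpy as np

        rng = np.random.default_rng(2)
        n_in, n_hid = 3, 5
        W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
        W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (the recurrence)
        b_h = np.zeros(n_hid)

        sequence = rng.random((7, n_in))   # e.g. 7 time steps of 3 features each
        h = np.zeros(n_hid)                # initial hidden state

        for x_t in sequence:
            h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

        print(h)   # a summary of the whole sequence, usable by an output layer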

  • @navidb · 4 years ago

    Watched the whole thing, don't understand anything, back to eating hot dogs.

  • @ASLUHLUHCE · 4 years ago

    This isn't a very good video imo. Watch numberphile's neural network videos, and then 3blue1brown's neural network series.

  • @programmingjobesch7291 · 11 months ago

    9:12- Genuinely thought this was a rocket ship when it came on screen...😐

  • @nikolasgrosland9341 · 4 years ago

    5:46 Spaghetti is spelled incorrectly, just a heads up.

  • @16.t.badulla86 · 4 years ago

    Apart from the science the dog 🐕 is cute.

  • @sxndra.y543 · 4 years ago

    ok can we talk about how big his beanie is compared to his head

  • @JuBerryLive · 4 years ago

    Jakequaline?

  • @erkins8818 · 4 years ago

    How to implement

  • @gjinn5001 · 4 years ago

    In the past I never even cared for AI, but when the world began to change and humans adopted AI, now I should study some things for future life 😂

  • @discordtrolls5668 · 4 years ago

    I still don’t care I just gotta watch this for a class

  • @dragonface528 · 4 years ago

    jabril!!!!

  • @markorendas1790 · 4 years ago

    IM SURE THERE WILL BE A PART OF ME IN A AI VERSION AROUND ON THE INTERNET AFTER IM GONE...

  • @Bubbalikestoast · 4 years ago

    Hi

  • @freddypelo · 4 years ago

    Tibetan Monks discovered this "Neural Network" long ago. Hundreds of them chant independently parts of a prayer so it is done in one sec. The problem is that god is deaf.

  • @jeffthegangster6065 · 4 years ago

    Is green bot real or is it edited in

  • @rosswebster7877 · 4 years ago

    I'm going to guess a bit of both.

  • @jeffthegangster6065 · 4 years ago

    @@rosswebster7877 only they know

  • @jimmyshrimbe9361 · 4 years ago

    I'm pretty sure it's full CGI.

  • @jeffthegangster6065 · 4 years ago

    @@jimmyshrimbe9361 it seems too real to be edited

  • @nomobobby · 4 years ago

    I'm guessing real, but I wonder if he's actually computing the examples or if they're feeding him lines to say?

  • @Audrey-eg2zf · 4 years ago

    Yo

  • @nappyjonze · 4 years ago

    Could you guys do an African American History Crash Course?

  • @randomguy263 · 4 years ago

    But why?

  • @rich.n3215 · 4 years ago

    Bring back Crash course mythology... That's the first one I ever watched

  • @Benimation · 4 years ago

    AS A HUMAN; I LOVE() EATING() SPAGEHTTI;

  • @alanoudalhamdi8216 · 4 years ago

    How can I translate to Arabic?

  • @blacktommer3543 · 4 years ago

    At 5:48 there's a typo that says spagehtti instead of spaghetti

  • @pojokindie · 4 years ago

    Yup like hmmm I hate dog but I love cat

  • @TonyTigerTonyTiger · 4 years ago

    Contradiction. At 9:30 he says that AlexNet needed "more than 60 million neurons", but at 2:33 we can see the abstract of the paper and it says AlexNet used only 650,000 neurons.

  • @jimmyshrimbe9361 · 4 years ago

    Haha you said doggernaut.

  • @dylanparker130 · 4 years ago

    so, they got everyone to do their work for nothing? how, er... ingenious

  • @IceMetalPunk · 4 years ago

    It's not doing "their" work. It's getting people to help with the work that will help everyone.

  • @dylanparker130 · 4 years ago

    @@IceMetalPunk if a psychology researcher wishes to carry out a study on human subjects, they have to pay those subjects. yet somehow, people literally mucking in on the hard graft of research should do it for free?

  • @IceMetalPunk · 4 years ago

    @@dylanparker130 The people who are "doing it for free" are the same people who are benefiting from it directly. In your analogy, it'd be less like having free subjects and more like having other psychologists help with their research --- something that happens all the time.

  • @dylanparker130 · 4 years ago

    @@IceMetalPunk I don't believe the people who respond to crowdsourcing are typically fellow researchers, but rather interested amateurs. If they were fellow researchers, why would they agree to help other researchers in this way? To do so would be to help the competition, unless they were credited with authorship, which they would not be in such cases.

  • @IceMetalPunk · 4 years ago

    @@dylanparker130 First of all, many people in STEM fields are more concerned with advancing their field than trying to compete with others at the expense of their field. Secondly, whether you're an amateur programmer or a professional computer science researcher, advancing AI tech still helps you out. (Don't believe me? Take a look at Runway ML, a little software suite that's designed for amateur programmers, but lets you use tons of advanced machine learning algorithms and pre-trained networks without having to even understand the details of implementation if you don't want. Machine learning advancements aren't just for professionals and researchers.)

  • @donrichards1362 · 4 years ago

    What's my animal that do is Two Princes of Darkness from the future or from the past from the present

  • @FollowTheRabbitHole · 4 years ago

    No lie, I read the title as "neutral fireworks".

  • @yourbuddyunit · 4 years ago

    That's literally what neurons do. Literally. I think, Therefore I am an amalgamation of neural fireworks.

  • @th3bear01 · 4 years ago

    Kinda lame that you did not credit Sethbling for that clip of marIO.

  • @mattlangstraaat3508 · 4 years ago

    Affirmative action solved... everyone replaced by JOHN GREEN BOTS.. sounds like a dem solution to me.

  • @HassanAli-yw4kf · 4 years ago

    Play in 1.25 speed and thank me later.