Building makemore Part 4: Becoming a Backprop Ninja

Science & Technology

We take the 2-layer MLP (with BatchNorm) from the previous video and backpropagate through it manually without using PyTorch autograd's loss.backward(): through the cross entropy loss, 2nd linear layer, tanh, batchnorm, 1st linear layer, and the embedding table. Along the way, we get a strong intuitive understanding about how gradients flow backwards through the compute graph and on the level of efficient Tensors, not just individual scalars like in micrograd. This helps build competence and intuition around how neural nets are optimized and sets you up to more confidently innovate on and debug modern neural networks.
!!!!!!!!!!!!
I recommend you work through the exercise yourself, but work with it in tandem: whenever you are stuck, unpause the video and see me give away the answer. This video is not really intended to be simply watched. The exercise is here:
colab.research.google.com/dri...
!!!!!!!!!!!!
Links:
- makemore on github: github.com/karpathy/makemore
- jupyter notebook I built in this video: github.com/karpathy/nn-zero-t...
- colab notebook: colab.research.google.com/dri...
- my website: karpathy.ai
- my twitter: / karpathy
- our Discord channel: / discord
Supplementary links:
- Yes you should understand backprop: / yes-you-should-underst...
- BatchNorm paper: arxiv.org/abs/1502.03167
- Bessel’s Correction: math.oxford.emory.edu/site/mat...
- Bengio et al. 2003 MLP LM: www.jmlr.org/papers/volume3/b...
Chapters:
00:00:00 intro: why you should care & fun history
00:07:26 starter code
00:13:01 exercise 1: backproping the atomic compute graph
01:05:17 brief digression: bessel’s correction in batchnorm
01:26:31 exercise 2: cross entropy loss backward pass
01:36:37 exercise 3: batch norm layer backward pass
01:50:02 exercise 4: putting it all together
01:54:24 outro
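
A quick illustration of what "backpropagating manually" means in practice: throughout the lecture a gradient is derived by hand and then checked against what PyTorch autograd computes. Below is a minimal sketch of that pattern for a single tanh nonlinearity, using made-up tensors and torch.allclose in place of the notebook's actual comparison helper.

import torch

torch.manual_seed(42)
z = torch.randn(32, 64, requires_grad=True)
h = torch.tanh(z)
loss = (h ** 2).sum()
loss.backward()  # autograd's answer, used as the reference

# manual backward pass: dloss/dh = 2*h, and d(tanh(z))/dz = 1 - tanh(z)**2
dh = 2 * h
dz = (1 - h ** 2) * dh

print(torch.allclose(dz, z.grad))  # True (up to floating point)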

Comments: 263

  • @Davourflave
    @Davourflave a year ago

    I can say without a doubt that there are not many highly qualified, passionate researchers who are also able to teach their subject. Sharing knowledge in this way is the greatest gift a researcher can give to the world! Everyone else and I thank you for that! :)

  • @vaguebrownfox

    @vaguebrownfox 9 months ago

    I saw his previous micrograd lecture and it literally moved me to tears. I had endured the struggle of drowning in pytorch source code, trying to understand what it is that they are really doing! For someone who simply can't move past without cutting open abstractions, this is pure blessing.

  • @uniquescience7047

    @uniquescience7047 4 months ago

    Exactly the same with me @vaguebrownfox

  • @BradCordovaAI
    @BradCordovaAI a year ago

    Andrej, you are a gifted teacher. I love this teaching style: 1. Start from scratch with a simple, specific model to set the structure and ideology of the problem. 2. Add necessary and motivated complexity to get where we are today. 3. Seamlessly transfer to modern technology (e.g. PyTorch) to solve modern problems. 4. Make it all simple and compress it into the essentials without unnecessary lingo. It reinvigorates my passion for the field. Thank you very much for taking so much time to make this for free for everyone.

  • @nohcho_9548

    @nohcho_9548 a year ago

    Ky .

  • @cojocarucosmin202
    @cojocarucosmin202 a year ago

    Bro, just want to say that for the past 3 years I've been looking everywhere on the Internet for an explanation like this for backpropagation. Found all kind of things (e.g. Jacobian differentiable) but none actually made sense until today. U r the best, you bring so much value and let others light their candles at your light

  • @seanwalsh358
    @seanwalsh358 8 days ago

    I suspect this is a video I'll be coming back to for years to come. Thanks!

  • @kishantripathi4521
    @kishantripathi4521 24 days ago

    no words to explain my feelings. karpathy is just Supercalifragilisticexpialidocious

  • @kshitijbanerjee6927
    @kshitijbanerjee6927 11 months ago

    These lectures are literally GOLD. I'd pay for these, but Andrej is kind enough to give everything for free. I hope others find these gold lectures. Thank you so much for doing this. Please don't lose steam and I hope you continue to create them.

  • @weystrom
    @weystrom a year ago

    Man, what a time to be alive. Imagine how hard it would be to get this kind of information just a couple decades ago. And now it's free and easily accessible at any convenient time. Thank you, Andrey, truly.

  • @dohyun0047
    @dohyun0047 a year ago

    I am still on part 2 but I had to write this comment: your part 4 thumbnail is awesome and funny. I am very grateful for these lectures. I could feel that the artificial intelligence knowledge that was intertwined inside me became well aligned because of you.

  • @aaronwill1983
    @aaronwill1983 a year ago

    Binge worthy! Ran through all lectures back-to-back after discovering. On the edge of my seat for more. Thanks Andrej!

  • @RebeccaBrunner
    @RebeccaBrunner a year ago

    Thank you for providing a series that's so approachable but doesn't shy away from explaining the details. Also love the progression through all the impactful papers

  • @andonisudupe3446
    @andonisudupe3446 a year ago

    yes, I always wanted to be a backprop ninja, now my dream will become true, thanks Andrej!

  • @kemalware4912
    @kemalware4912 a year ago

    I will put your poster on my wall to look at you every day and remember what a great person you are. Your smile is contagious.

  • @efogleman
    @efogleman a year ago

    This lecture series is excellent. Seriously, some of the best learning resources for Neural Networks available anywhere: up-to-date, and goes deep into the details. These lectures with detailed examples and notebooks are an amazing resource. Thanks so much for this, Andrej.

  • @kimiochang
    @kimiochang a year ago

    Finally completed this one. I have to say this lecture is the most valuable one throughout all my studying of deep learning. As always, thank you Andrej for your generosity. Moving on to the next one!

  • @Themojii
    @Themojii 10 months ago

    Hello Andrej, I truly love this approach of including exercises in your video. Your suggestion to first attempt to solve the exercises and then watch as you provide the solutions is the most effective way for me personally to grasp the concepts. Thank you for your outstanding work!

  • @mohammadhomsee8640
    @mohammadhomsee8640 6 months ago

    That's incredible!!! It's impossible to share such knowledge without a very deep understanding of neural nets. I really appreciate your work. I hope we can get more videos. This is definitely a golden video!!! Thank you so much!

  • @DanteNoguez
    @DanteNoguez a year ago

    I was "taught" calculus in high school but didn't really understand anything at all. Now, after seven years of no math formal education at all, I was able to immediately understand this exercise thanks to your lecture on micrograd. You're a brilliant teacher and I'm really grateful for that!

  • @peterszilvasi752
    @peterszilvasi752 a year ago

    I really appreciate the lectures that you share with us. It is not about definitions, rote memorization, or even the exercises per se. Instead, first-principles thinking: take a big "mess" and then break it down into small manageable pieces. You do not solely demonstrate the problem-solving approaches brilliantly but also ignite curiosity to dig deeper (to go down to the level of atoms) into a specific topic. Thank you for the preparation, the passion, and the memes! :D

  • @ThemeParkTeslaCamping360
    @ThemeParkTeslaCamping360 a year ago

    Excellent Andrej!! Can't wait for your next lecture. I'm so excited and motivated 🥰

  • @nova2577
    @nova2577 9 months ago

    I spent almost a whole day digesting this video. It's definitely worth it!

  • @danielkusuma6473
    @danielkusuma6473 a year ago

    Just grateful to have the chance to learn from Andrej Karpathy. Thanks heaps, it means a lot!

  • @borismeinardus
    @borismeinardus a year ago

    Andrej is providing the world with so much value, be it through his professional work in the industry (e.g. Tesla AI) or through education. He is literally one of the greatest of all time but is so down to earth and such a sweetheart. Thank you very much for your hard work to make it easier for all the rest of us and for inspiring us! 💚

  • @BlockDesignz
    @BlockDesignz a year ago

    I come to each of these videos to like them. I can't keep up with his pace of release but I will watch all of them in due time. Thanks Andrej.

  • @DiogoSanti
    @DiogoSanti 6 months ago

    What a wonderful effort Andrej. Thanks for this!

  • @parasmaliklive
    @parasmaliklive a year ago

    Thank you Andrej. I really appreciate your work.

  • @cangozpinar
    @cangozpinar a year ago

    Thank you, thank you, thank you ... What you are doing with these videos is amazing !

  • @jayhyunjo141
    @jayhyunjo141 a year ago

    As a bioinformatician and a part-time data scientist, I should say this series is the best educational youtube video on deep neural network. Thank you for the video and offering the opportunity to learn.

  • @user-oi3be8dm8x
    @user-oi3be8dm8x a year ago

    Thanks for top-level video. Can't wait to see more. Thanks 🙏

  • @greatfate
    @greatfate a year ago

    These videos are unironically pretty fun! You're not just a genius researcher but an amazing teacher, Andrej

  • @Raix03
    @Raix03 3 months ago

    I almost completed Exercise 1 all on my own, but I had to step back for a day to refresh the basics because my college algebra was a bit rusty from 10 years of not using it. Exercises 2 and 3 totally overwhelmed me. However, when I follow your explanations, I understand everything. This is huge, because I remember that professors at my college couldn't explain complex concepts so easily. Andrej, you are a gift to this world!

  • @Nimrad780
    @Nimrad780 a year ago

    Thank you for "making everything fully explicit"!

  • @tecknowledger
    @tecknowledger a year ago

    Thanks for the videos! Please make a lot more! Please continue to share your knowledge with the world! Thanks

  • @kapitan104
    @kapitan104 a year ago

    Andrej, you are the best teacher. I am 100% sure these lectures will become CORE watching for any student who starts his ML journey. Hope we will have such lectures in CV and RL.

  • @michadaniluk9604
    @michadaniluk9604 a year ago

    Thanks Andrej for your amazing videos. Here is my implementation of finding dC without for loops: dC = F.one_hot(Xb).float().view(-1, C.shape[0]).T @ demb.view(-1, C.shape[1])

  • @nikita67493

    @nikita67493 a year ago

    Unfortunately it produces inexact results:
    C | exact: False | approximate: True | maxdiff: 9.313225746154785e-10
    The for-loop creates an exact match. Another way to do the same is to use Einstein notation (which also gives an inexact result):
    dC = torch.einsum("ijk, ijm -> km", F.one_hot(Xb, num_classes=vocab_size).float(), demb)

  • @gembancud

    @gembancud a year ago

    This is another implementation, though I don't know if it produces exact results:
    dC = torch.zeros_like(C).scatter_add_(0, Xb.view(-1,1).repeat(1,demb.shape[-1]), demb.view(-1, demb.shape[-1]))

  • @rohitsathya8099

    @rohitsathya8099 a month ago

    @nikita67493 Why do you want an exact match?

  • @ColinKiegel

    @ColinKiegel a month ago

    On my system all these implementations of dC are equivalent and only match approximately (with the same maxdiff: 5.587935447692871e-09), including the for-loop. I also came up with the same "einsum" solution:
    Xb_onehot = F.one_hot(Xb, num_classes=vocab_size).float()
    dC = torch.einsum('ija, ijb->ab', Xb_onehot, demb)  # shape: [32, 3, 27] @ [32, 3, 10] -> [27, 10]
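
    For readers following along, here is a self-contained sketch of the approaches discussed in this thread. It uses random stand-in tensors with the lecture's shapes (batch 32, block size 3, vocab 27, embedding dim 10) rather than the notebook's real Xb, C and demb, and checks the vectorized forms against the explicit scatter loop:

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    vocab_size, n_embd = 27, 10
    Xb = torch.randint(0, vocab_size, (32, 3))   # stand-in token indices
    C = torch.randn(vocab_size, n_embd)          # stand-in embedding table
    demb = torch.randn(32, 3, n_embd)            # stand-in gradient flowing into emb = C[Xb]

    # reference: explicit loop, scattering each position's gradient into its row of dC
    dC_loop = torch.zeros_like(C)
    for k in range(Xb.shape[0]):
        for j in range(Xb.shape[1]):
            dC_loop[Xb[k, j]] += demb[k, j]

    # vectorized: one-hot matmul over the flattened batch/block dimensions
    Xb_onehot = F.one_hot(Xb.view(-1), num_classes=vocab_size).float()   # (96, 27)
    dC_matmul = Xb_onehot.T @ demb.view(-1, n_embd)                      # (27, 10)

    # vectorized: einsum summing over the batch and block dimensions
    dC_einsum = torch.einsum('ijk,ijm->km', F.one_hot(Xb, num_classes=vocab_size).float(), demb)

    print(torch.allclose(dC_loop, dC_matmul, atol=1e-6), torch.allclose(dC_loop, dC_einsum, atol=1e-6))

    All three compute the same quantity; the tiny maxdiff values reported above come only from floating-point summation order.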

  • @Sickkkkiddddd
    @Sickkkkiddddd 9 months ago

    Bruh, I'd be paying a shit ton of money in education for this otherwise free knowledge if it wasn't for your videos. Thank you so much, man. I cannot believe the ease with which you explain what seemed complex to me from a distance years ago. I cannot even believe I understand this stuff, man.

  • @vulkanosaure
    @vulkanosaure a year ago

    I just finished part 2 yesterday night, and i was feeling blue that there was only 1 video left ! And this came to my notification, i just had to share my excitement :)))

  • @kaspiimal3340
    @kaspiimal3340 a year ago

    Andrej, thank you for the work you put into this (and previous) lectures❤. Thanks to you, me and a lot of other people can enjoy learning NN 😍from the best.

  • @srikika
    @srikika a year ago

    love your channel and content Andrej.. please keep more videos coming!

  • @kaushaljani814
    @kaushaljani814 9 months ago

    Pure gem...💎💎💎 Thanks Andrej for this amazing lecture.

  • @badreddinefarah1127
    @badreddinefarah1127 a year ago

    Thanks a lot Andrej, can't wait to see more 🙏🙏

  • @rmajdodin
    @rmajdodin a year ago

    Thank you Andrej for sharing your experience with us! John Carmack used exactly this learning method, as he told in his interview with Lex Fridman. In his "larval stage", he implemented the whole NN machinery, including backpropagation, in C (so really low-level :)), to make sure that he understood how stuff works!

  • @vivekakaviv
    @vivekakaviv 4 months ago

    This was very insightful. Andrej you are the best!

  • @sauloviedo2677
    @sauloviedo2677 a year ago

    Andrej is on-firee! Thanks for this awesome material!

  • @fbf3628
    @fbf3628 a year ago

    Wow! This lecture is truly incredible and i have certainly learned a ton. Thank you very much, Andrej :)

  • @lagousis
    @lagousis a year ago

    Thanks for all the time you put into that lecture!

  • @TonyStark-cp3tj
    @TonyStark-cp3tj 5 months ago

    Hey Andrej, I don't know if you'll see this, but I just wanted to thank you wholeheartedly for your awesome neural network playlist. It's by far the best and the most in-depth content on NNs I've ever come across. I really appreciate you sharing your knowledge with the community. You're the best! Excited and awaiting more such treasures!

  • @muhannadobeidat
    @muhannadobeidat a year ago

    Excellent series and delivery as usual. Thanks for all the hard work you put into this. Part of it is challenging to get through but a joy to decipher all the moving parts. I think a good understanding of the math behind back prop helps understand this. A good resource that covers this from a math perspective is Andrew Ng original Neural Net course.

  • @AlecksSubtil
    @AlecksSubtil 5 months ago

    Simply the best! Very good lessons delivered with such mastery and passion, thanks a lot for sharing

  • @hermestrismegistus9142
    @hermestrismegistus9142 a year ago

    This lecture really makes me appreciate autograd. I commend the ancient ML practitioners for surviving this brutality.

  • @martakosiv6483
    @martakosiv6483 2 months ago

    Thanks for the great content! That's the best explanation I've ever seen! Also, regarding the last backpropagation in exercise 1, I found the following method in pytorch:
    dC = torch.zeros_like(C)
    dC.index_add_(0, Xb.view(-1), demb.view(-1, demb.shape[2]))
    cmp('C', dC, C)

  • @JTMoustache
    @JTMoustache a year ago

    Love that he explains matlab as if it is not still used in 80% of labs in the world. Living in a world of tech giants will heal the matlab PTSD. This is a masterclass - I've never seen it explained so thoroughly and clearly, and I've been around. PEAK EXPERTISE

  • @sevarbg83
    @sevarbg83 10 months ago

    Have mercy Andrej, my brain hurts! :D Feels like I'll need years to digest just these few lectures.

  • @owendorsey5866
    @owendorsey5866 a year ago

    This is the first time I truly understood. Thank you!

  • @steampunkcircus
    @steampunkcircus a year ago

    A deluge of knowledge from you so often it's ridiculous. I'm absolutely certain you're a robot. Anyhow, Ninjas are awesome. Wax on Sensei!

  • @santoshk.c.1896
    @santoshk.c.1896 a year ago

    Thanks a lot Andrej for all these awesome lectures. Please enable auto generated subtitle for this lecture.

  • @DavidIvan1991
    @DavidIvan1991 a month ago

    Very useful educational videos, thanks for making and sharing them! It's interesting that Andrej also considers the shapes when backpropagating through matrix multiply, just how I came to "memorize" it :)

  • @art4eigen93
    @art4eigen93 9 months ago

    It took me days to backprop through this lecture. Phew!. got it now.

  • @mehulajax21
    @mehulajax21 a year ago

    This is exactly how I work through my coding problems as well. I also have similar thought process while developing algorithms.

  • @TheOrowa
    @TheOrowa a year ago

    I believe the loop implementing the final derivative at 1:24:21 can be vectorized if you just rewrite the selection operation as a matrix operation, then do a matmul derivative like done elsewhere in the video:
    X_e = F.one_hot(Xb, num_classes=27).float()  # convert the selection operation into a selection matrix (emb = C[Xb] is the same as X_e @ C)
    dC = (X_e.permute(0,2,1) @ demb).sum(0)  # differentiate like any other matrix operation (dC = X_e.T @ demb, with indices to track the batch dimensions)

  • @barni_7762

    @barni_7762 10 months ago

    Imo it's cleaner if you do this instead:
    Xe = F.one_hot(Xb.flatten(), num_classes=27).float().permute(1, 0)
    dC = Xe @ demb.view((-1, demb.shape[2]))
    I think this method is more understandable because it uses a 2D matmul...

  • @arashrouhani5388

    @arashrouhani5388 10 months ago

    @barni_7762 Thanks, it seems to have worked for me.

  • @user-gk8ri6ww7e

    @user-gk8ri6ww7e 9 months ago

    Very good point on the fact that C[Xb] is the same as X_e @ C. It makes things much clearer. I came to the same solution, but from the bottom up, experimenting with single records and imagining what I want to get. The final solution is:
    dC = (torch.nn.functional.one_hot(Xb, num_classes=C.shape[0]).float().swapaxes(-1,-2) @ demb).sum(0)
    And one can investigate what is going on for a single batch element:
    torch.nn.functional.one_hot(Xb[0], num_classes=C.shape[0]).T.float() @ demb[0]

  • @inar.timiryasov

    @inar.timiryasov 6 months ago

    dC = torch.einsum('abc,abg->cg', F.one_hot(Xb, vocab_size).float(), demb)

  • @amogha7332

    @amogha7332 2 months ago

    @barni_7762 Very clean solution, this is what I did too!

  • @FrozenArtStudio
    @FrozenArtStudio a year ago

    my favorite prof with new lecture

  • @yagvtt
    @yagvtt 7 months ago

    That is so useful, thank you very much for this series.

  • @ayogheswaran9270
    @ayogheswaran9270 a year ago

    Thanks a lot for making this Andrej !!!

  • @juanolano2818
    @juanolano2818 a year ago

    "...assuming that pytorch is correct..." hahahaha not only a great lecture but also with very funny nuggets. Thank you!

  • @sam.rodriguez
    @sam.rodriguez 8 months ago

    You can love people you don't know. I love you Andrej.

  • @joneskiller8
    @joneskiller8 2 months ago

    This dude is based!. I can actually cognitively map and visualize his explanations, and I am so grateful to have found him. Keep the videos coming please, and thank you so much.

  • @arjunsinghyadav4273
    @arjunsinghyadav4273 a year ago

    Sprinkling Andrej magic throughout the video - had me cracking up at 43:40

  • @frippRulez
    @frippRulez a month ago

    This one kicked my ass! The way of the ninja is not an easy path, but I really enjoyed it, it was amazing as I started to solve it myself as the lecture progressed. Maybe this is the future of education

  • @nirajs
    @nirajs a year ago

    Such a great video for really understanding the detail under the hood! And lol at the momentary disappointment at 1:16:20 just before realizing the calculation wasn't complete yet 😂

  • @anrilombard1121
    @anrilombard1121 a year ago

    Patiently waiting for part 5 :)

  • @lwtwl
    @lwtwl 11 months ago

    Btw, the "low-budget" gray block mask at the end is very creative :D

  • @cthzierp5830
    @cthzierp5830 8 months ago

    Thank you very much for an amazing series! The logit backprop derivation can be simplified a bit by realizing that log(f/g) is log f - log g. The second term is log Sum, the derivative will be 1/Sum times dSum/dxi which immediately yields the activation output. The first term is the log of an exponent, this cancels and the result has a trivial derivative of 0 or -1 when the index isn't/is the correct answer. This neatly shows that the derivative is "softmax output minus correct answer".
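
    A quick numerical check of the "softmax output minus correct answer" result described above, sketched with toy tensors rather than the lecture's notebook (shapes follow the lecture: batch 32, vocab 27):

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    logits = torch.randn(32, 27, requires_grad=True)
    Yb = torch.randint(0, 27, (32,))

    loss = F.cross_entropy(logits, Yb)  # mean negative log-likelihood over the batch
    loss.backward()                     # autograd's dloss/dlogits lands in logits.grad

    with torch.no_grad():
        dlogits = F.softmax(logits, dim=1)      # softmax output
        dlogits[torch.arange(32), Yb] -= 1.0    # minus the one-hot correct answer
        dlogits /= 32                           # the mean reduction spreads 1/N over the batch

    print(torch.allclose(dlogits, logits.grad))  # True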

  • @tecknowledger
    @tecknowledger a year ago

    Thanks Andrej! I feel like a buff doge! Just understood and backproped ~ 80% of the video and colab code from this video (downloaded and did exercises)! Colab kept occasionally throwing errors. Worked fine on local Jupyter.

  • @jonathanr4242
    @jonathanr4242 a year ago

    very nice. Thank you, Andrej.

  • @yunhuaji3038
    @yunhuaji3038 a year ago

    Hi Andrej, congrats on your "new" journey at OpenAI. Thank you very much for this series. It's extremely helpful and arguably the best learning material to go through for deep learning. I've always been looking for something like this series. It solidly deepens my understanding of neural networks even though I have been playing with them for a while. Will you continue this series after you're back at OpenAI? I look forward to seeing your future work & contribution to this community, to the following generations, and to the world.

  • @yoonhero3701
    @yoonhero3701 a year ago

    that's awesome! thank you for your passion. i'd like to be like you someday :)

  • @anrilombard1121
    @anrilombard1121 a year ago

    Can't wait to come watch this when school holiday starts!

  • @anrilombard1121

    @anrilombard1121 a year ago

    13 days later: here I am!

  • @muhammadbaqir3736
    @muhammadbaqir3736 a year ago

    01:25:00 Here is a better implementation of the code:
    dC = torch.zeros_like(C)
    dC.index_add_(0, Xb.view(-1), demb.view(-1, 10))
    Thanks to ChatGPT :)

  • @kl_moon
    @kl_moon 6 months ago

    Thank you so much for this lecture!!!!TT..It actually made my day.

  • @thasinatabashum6853
    @thasinatabashum6853 10 months ago

    I'm a 3rd year Ph.D. student and I started my Ph.D. right after my undergrad, and back then I had very little idea how all the calculations happen in neural networks. In the last three years, to learn about neural nets, I have watched lots of videos, attended lectures, completed a summer camp and courses, and also read books, papers, and blogs. But undoubtedly this is the best lecture on backprop! Thank you!

  • @CoolWorm13

    @CoolWorm13 21 days ago

    What uni are you studying at?

  • @seanconnollymv
    @seanconnollymv a year ago

    Huge fan of your videos, Andrej! I'll admit I've had to pause and watch them all twice or more, but they are so useful! Thank you! I was really excited when you started down the path of RNNs and LSTMs in your video, only to find you had other plans for us! Is there an ETA on RNN and LSTM videos? Possibly even a GAN tutorial? Again, thank you so much for these videos, they are so helpful, and your ability to teach is phenomenal.

  • @MrEmbrance
    @MrEmbrance a year ago

    Can't wait for the next video

  • @stephennfernandes
    @stephennfernandes a year ago

    Excellent content Andrej

  • @beathoven70
    @beathoven70 a year ago

    I'm so glad even Andrej forgets how the logits = h @ W2 + b2 backprop works by heart. I've really struggled to remember that as well and used the same "hack": just look at the sizes of the matrices and, knowing what dimensions I needed to get out, simply transpose the matrices accordingly, hahaha.
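
    For anyone who wants the shape-matching trick spelled out, here is a hedged sketch with toy tensors (the dimensions below roughly follow the lecture's batch/hidden/vocab sizes, but the values are random stand-ins, not the notebook's):

    import torch

    torch.manual_seed(0)
    h = torch.randn(32, 64)        # (batch, hidden)
    W2 = torch.randn(64, 27)       # (hidden, vocab)
    b2 = torch.randn(27)
    dlogits = torch.randn(32, 27)  # upstream gradient, same shape as logits

    # the transposes are forced by the shapes:
    dh = dlogits @ W2.T   # (32,27) @ (27,64) -> (32,64), matches h
    dW2 = h.T @ dlogits   # (64,32) @ (32,27) -> (64,27), matches W2
    db2 = dlogits.sum(0)  # b2 was broadcast in the forward pass, so sum over the batch

    # autograd agrees:
    h2 = h.clone().requires_grad_()
    W2b = W2.clone().requires_grad_()
    b2b = b2.clone().requires_grad_()
    (h2 @ W2b + b2b).backward(dlogits)
    print(torch.allclose(dh, h2.grad), torch.allclose(dW2, W2b.grad), torch.allclose(db2, b2b.grad))

    There is only one way to arrange the transposes so that dh, dW2 and db2 end up with the same shapes as h, W2 and b2, which is exactly why the "hack" works.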

  • @ronaldlegere
    @ronaldlegere 10 months ago

    This is one of the most valuable videos I have come across for building strong intuition about what is going on in the backpropagation. BTW My solution for dC: dC = torch.einsum('bij,bik -> jk', F.one_hot(Xb, vocab_size).float(), demb). Gotta love einsum :)

  • @markr9640
    @markr9640 a year ago

    Just Brilliant!

  • @sammyblues1979
    @sammyblues1979 a year ago

    Excellent tutorial to understand the mathematical process behind Neural net operations. Just shows how intuitively comfortable Andrej is with the fundamentals of the subject. Hats off!

  • @b0nce
    @b0nce a year ago

    Thank you so much :) It was a bit tough but very interesting task. P.S.: 1:25:47 dC can be done with dC.index_add_(0, Xb.view(-1), demb.view(-1, 10)) ;)

  • @AndrejKarpathy

    @AndrejKarpathy a year ago

    very cool, nice find, didn't know about index_add_, ty :)

  • @ArvidLunnemark

    @ArvidLunnemark a year ago

    I arrived at a very similar solution, but I didn't know about index_add_. Instead you can do:
    Xb_onehot = F.one_hot(Xb.view(-1), num_classes=C.shape[0]).float()
    dC = Xb_onehot.T @ demb.view(-1, C.shape[1])
    ty for the video :)

  • @oferyehuda6131

    @oferyehuda6131 a year ago

    can also be done with torch.einsum without the reshaping (but a little more confusion)

  • @danieljaszczyszczykoeczews2616

    @danieljaszczyszczykoeczews2616 a year ago

    I've done it with a basic approach:
    dC = torch.zeros_like(C)  # ([27, 10])
    for i, iemb in zip(Xb.view(-1).tolist(), demb.view(-1, n_embd)):  # zip of ([96]) and ([96, 10])
        dC[i] += iemb

  • @KibberShuriq

    @KibberShuriq a year ago

    @ArvidLunnemark Instead of Xb.view(-1), one could also use Xb.flatten(), which is a bit more straightforward to interpret (and I believe is just a wrapper for view() internally anyway).

  • @arielfayol7198
    @arielfayol7198 11 months ago

    Please don't stop the series 😢

  • @itsm0saan
    @itsm0saan a year ago

    Thank you so much for the lecture ;)

  • @atabakp
    @atabakp a year ago

    Thanks for the great series! What is the best practice for avoiding zeros in denominators when backpropagating? 1) Add a tiny value to the denominator? 2) Replace the zeros with a tiny value, e.g. max(denom, eps)?

  • @nickgannon7466
    @nickgannon7466 a year ago

    Hi Andrej, thanks so much for putting out these lessons, they're absolutely phenomenal. Outside of the videos you're creating, what other resources would you recommend for someone who is interested in pursuing a career in deep learning?

  • @user-vn3vd6wq7n
    @user-vn3vd6wq7n 10 months ago

    this is a masterpiece

  • @veeramahendranathreddygang1086
    @veeramahendranathreddygang1086 a year ago

    Awesome. Thank you.

  • @KadeemSometimes
    @KadeemSometimes a year ago

    You are a hero!

  • @ahmadibraheem1141
    @ahmadibraheem1141 a year ago

    Hi @Andrej, great tutorial as always. I am a bit confused as to why we need to take log of the probabilities. The reasoning that we are converting a product to sum using log doesn't seem to hold, as for loss calculation, we are plucking out only that log probability whose index matches the true class. Therefore, there is no product or sum left in the equation, just one value. I tried to make it work without taking the log, that also seems to work. Though not as well. I hope you can share some insights on this :)

  • @mdrayedbinwahed7126
    @mdrayedbinwahed7126 a year ago

    Whatteh lecture! My god was it awesome.

  • @afsarequebal
    @afsarequebal a year ago

    really grateful, thanks a lot

  • @bharathithal8299
    @bharathithal8299 a year ago

    I think they (the publishers of the BatchNorm paper) are using n during training and n-1 during testing because the test set is a subset of the same dataset that we also use for training, and since the train dataset is considerably larger than the test dataset, it's almost as if we are taking a sample set from the main dataset, which justifies taking n-1 during testing and n during training. I'm not sure about this, but it kinda makes sense.
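
    For reference, the two estimators being discussed (standard definitions, not something specific to this video), written out in PyTorch:

    import torch

    x = torch.randn(32)  # one batch's worth of activations for a single unit
    n = x.shape[0]
    mu = x.mean()
    var_biased = ((x - mu) ** 2).sum() / n          # divide by n (no Bessel's correction)
    var_unbiased = ((x - mu) ** 2).sum() / (n - 1)  # divide by n - 1 (Bessel's correction)

    print(torch.allclose(var_biased, x.var(unbiased=False)), torch.allclose(var_unbiased, x.var(unbiased=True)))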

  • @amortalbeing
    @amortalbeing a year ago

    great job.

  • @GiuseppeRomagnuolo
    @GiuseppeRomagnuolo a year ago

    Andrej, words cannot express enough gratitude for sharing these lectures. Your passion for this subject is truly inspiring and your willingness to share your knowledge speaks to your moral character. Although you recorded this lecture 5 months ago, your words continue to light up lightbulb-smiles across the globe and create intellectual connections with people all over the world. Thank you for your dedication and generosity.

Next