[Classic] Deep Residual Learning for Image Recognition (Paper Explained)

Science & Technology

#ai #research #resnet
ResNets are one of the cornerstones of modern Computer Vision. Before their invention, people were not able to scale deep neural networks beyond 20 or so layers, but with this paper's invention of residual connections, all of a sudden networks could be arbitrarily deep. This led to a big spike in the performance of convolutional neural networks and rapid adoption in the community. To this day, ResNets are the backbone of most vision models and residual connections appear all throughout deep learning.
OUTLINE:
0:00 - Intro & Overview
1:45 - The Problem with Depth
3:15 - VGG-Style Networks
6:00 - Overfitting is Not the Problem
7:25 - Motivation for Residual Connections
10:25 - Residual Blocks
12:10 - From VGG to ResNet
18:50 - Experimental Results
23:30 - Bottleneck Blocks
24:40 - Deeper ResNets
28:15 - More Results
29:50 - Conclusion & Comments
Paper: arxiv.org/abs/1512.03385
Abstract:
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Links:
KZread: / yannickilcher
Twitter: / ykilcher
Discord: / discord
BitChute: www.bitchute.com/channel/yann...
Minds: www.minds.com/ykilcher
Parler: parler.com/profile/YannicKilcher
LinkedIn: / yannic-kilcher-488534136
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Comments: 147

  • @YannicKilcher · 4 years ago

    This is a pre-recorded scheduled release :D still on break :)

  • @Phobos11 · 4 years ago

    Yannic Kilcher, a welcome surprise 😄

  • @VALedu11 · 4 years ago

    For someone like me who has ventured into neural nets recently, this explanation is a boon. It was like listening to the classics. Legendary paper and equally awesome explanation.

  • @Notshife · 4 years ago

    Yep, revisiting this classic paper in your usual style was still interesting to me. Thanks as always

  • @thomaesm · 2 years ago

    I really wanted to drop you a line to say that I really, really enjoyed your paper walkthrough; super informative and entertaining! Thank you so much for uploading this! :)

  • @li-lianang8304 · 1 year ago

    I've watched like 5 other videos explaining ResNets and this was the only video I needed. Thank you so much for explaining it so clearly!!

  • @cycman98 · 4 years ago

    Visiting old and influential papers seems like a great idea

  • @milindbebarta2226 · 1 year ago

    This is probably one of the better videos on these classic research papers on KZread. I've seen some terrible explanations but you did pretty well. Good job!

  • @DiegoJimenez-ic8by · 4 years ago

    Thanks for visiting iconic papers, great content!!!

  • @anheuser-busch · 3 years ago

    Thanks for this! And I really enjoy going through the old papers, since you can pick up things you missed when first reading them. Enjoy the break!!

  • @SunilMeena-do7xn · 3 years ago

    Thanks Yannic. Revisiting these classic papers is very helpful for beginners like me.

  • @WLeigh-pt6qs · 2 years ago

    Hey Yannic, you are such good company for learning deep learning. You lifted me out of all the struggles. Thank you for sharing your insight.

  • @timdernedde993 · 4 years ago

    Really enjoyed this video! I think going through these older papers that have had a lasting impact over multiple years gives really great insight, especially for those who are fairly new to the field like me.

  • @aadil0001 · 3 years ago

    Revisiting the classics that massively changed and forged the direction of DL research is so fun. Loved the way you explained things. So cool. Thanks a lot :)

  • @MyU2beCall · 3 years ago

    COOL! Discussing those classics is a formidable tribute to the writers and a great way to emphasize their contributions to the history of Artificial Intelligence.

  • @briancase6180 · 2 years ago

    This is a great series. I'm a very experienced software and hardware engineer who's just now getting serious about learning about ML and deep learning and the whole space. So, what really helps me at this point is not NN 101 but what the landscape is, what all the acronyms mean, and what the relative importance of various ideas and techniques is. This review of classic material is extremely helpful: it paints a picture of the world and helps me put things in their places in my mental model. Then I can dive deeper when I see something important for my current tasks and needs. Keep these coming!

  • @frederickwilliam6497 · 4 years ago

    Building hype for "Attention Is All You Need" v2! Nice selection!

  • @jingrenxu3250 · 2 years ago

    Wow, you read the author names perfectly!

  • @alandolhasz7863 · 4 years ago

    I've used Resnets quite a bit and thought I understood the paper reasonably well when I read it, but I was wrong. Great video!

  • @RefaelVivanti · 3 years ago

    Thanks, this was fun. I knew some of it, but you put it in context. Please do more of these classics. If you can, maybe something on the U-Net/fully convolutional basic papers.

  • @emmarbee · 3 years ago

    Loved it and subscribed! And yes please do more of classics!

  • @reasoning9273 · 2 years ago

    Great video! I have watched like five videos about ResNet on youtube and this one is by far the best. Thanks.

  • @zawdvfth1 · 3 years ago

    "Sadly, the world has taken the ResNet, but the world hasn't all taken the research methodology of this paper." I really appreciate your picks are not only those papers surpassing the performance of the state of the art, but also those with intriguing insights or papers inspiring us by their ways of conducting experiments and testing hypotheses. Most vanish, but residual, as it moves forward.

  • @LNJP13579 · 4 years ago

    Yannic - you are doing a superb job. Your quality content has a "lower dopamine rush" effect, so it won't go viral, but with time you will be a force to reckon with. Not many can explain with so much clarity, depth & speed (one paper daily). I have one request: if you could create an ACTIVE mapping of papers to CITATIONS (and similar metrics), I could choose the MOST RELEVANT PAPERS to watch. It would be a great time saver & would drastically improve views on the better-metric videos :)

  • @rockapedra1130 · 3 years ago

    Another excellent summary! Yannic is one of the best educators out there!

  • @scottmiller2591 · 4 years ago

    I was doing something similar for a few decades before this paper came out (no ReLU on the stage output, though). I was engaged in studies in layer by layer training, and the argument for me was "why spend all that time generating a good output for layer k, just to distort it in layer k+1?" Also, I think the physicist in me liked the notion of nonlinear perturbation of a linear model, since linear models work really well a lot of the time (MNIST, I'm looking at you). At any rate, this approach worked quite well in the time series signal processing I was doing, and when the paper came out, I read with relish to see what else they had found that was new. Unfortunately, like you I found that underneath the key idea was a heap of tricks to make the whole thing hang together which seemed to obscure how much was ResNet and how much was tricks.

  • @wamkong · 2 years ago

    Great discussion of the paper. Thanks for doing this.

  • @MiottoGuilherme · 3 years ago

    Great video! I think there is a lot of value in reviewing old papers when they are cited all the time by new ones. That is exactly the case with ResNets.

  • @dhruvgrover7416 · 3 years ago

    Loved the way you are reviewing papers.

  • @TimScarfe · 4 years ago

    I love the old papers idea! Nice video

  • @lucashou4920 · 1 year ago

    Amazing explanation. Keep up the good work!

  • @romagluskin5133 · 3 years ago

    what a fantastic summary, thank you very much !

  • @slackstation · 4 years ago

    Great paper. It must be obvious to you but, to a layman, I finally understand where the "Res" in "ResNet" comes from. Great work.

  • @OwenCampbellMoore · 4 years ago

    Love these reviews of earlier landmark papers! Thanks!!!

  • @MrjbushM · 3 years ago

    Thanks for these videos in the classics series. Not all of us have a master's or PhD degree; these classic papers help us understand the main, core ideas of deep learning - papers that were important and pushed the field forward.

  • @lolitzshelly · 3 years ago

    Thank you for this clear explanation!

  • @lilhikaru8361 · 3 years ago

    Excellent video featuring an extraordinary paper. Good job bro

  • @animeshsinha · 10 months ago

    Thank You for this beautiful explanation!!

  • @yoyoyoyo7813 · 2 years ago

    I'm struggling to understand papers, but your explanation really hand-held me through grasping this particular paper. For that, you are awesome. Thank you so much.

  • @hleyjr · 3 years ago

    Thank you for explaining it! So much easier for a beginner like me to understand

  • @bijjalanaganithin3798 · 3 years ago

    Loved the explanation. Thank you so much!

  • @yahuiz7877 · 1 year ago

    looking forward to more videos like this!

  • @sabako123 · 3 years ago

    Thank you Yannic for this great work

  • @gringo6969 · 3 years ago

    Great idea to review classic papers.

  • @shambhaviaggarwal9977 · 3 years ago

    Thank you so much! Keep making such awesome videos

  • @PetrosV5 · 3 years ago

    Amazing narration, keep up the excellent work.

  • @ahmedabbas2595 · 1 year ago

    This is beautiful! A beautiful paper and a beautiful explanation - simplicity is genius!

  • @woolfel · 4 years ago

    Nice explanation. I've read the paper before and missed a lot of details. Still more insights to learn from that paper.

  • @danbochman · 3 years ago

    Love the [Classic] series.

  • @Annachrome · 1 year ago

    Self-learning ANNs and coming across these papers is daunting - tysm!!

  • @tungvuthanh5537 · 2 years ago

    This helped me so much, big thanks to you.

  • @matthewevanusa8853 · 3 years ago

    Best explanation I have seen, nice work

  • @oncedidactic · 3 years ago

    This is really valuable tbh. Great video!

  • @housseynenadour2233 · 2 years ago

    Very insightful explanation for beginners like me. Thank you.

  • @goldfishjy95 · 3 years ago

    Thank you! This is unbelievably helpful for someone who's just starting out. Subscribed!

  • @alexandrostsagkaropoulos · 10 months ago

    Your explanations resonate so well with me that it feels like knowledge is being pushed directly into my head. Does anyone else have the same feeling?

  • @chaima7774 · 2 years ago

    Thanks for these great explanations. I'm still a beginner in deep learning, but I understood the paper very well!

  • @rodrigogoni2949 · 1 year ago

    Very clear thank you!

  • @RaviAnnaswamy · 2 years ago

    I like how you have highlighted that if a small architecture exists that can solve a problem, residual connections will help discover it from within a larger architecture - I think this is a great explanation of the power of residual connections. This has two nice implications. First, I do not need to worry about finding exactly how many layers are appropriate; I can start with a supersized architecture and let training reduce it to the subset that is needed - let the data carve out the subnetwork architecture. Second, even if the subnetwork is small, it is hard to train a small network directly; it is easier to train a larger network with more degrees of freedom which functionally reduces to the smaller network. One can distill later.

  • @user-nm7mf7uu3j · 3 years ago

    This is it!!!!! Great thanks from South Korea!!!!!

  • @itayblum3405 · 3 years ago

    Thanks so much ! This is extremely helpful

  • @__init__k917 · 3 years ago

    Would love to see more papers like these which have used unique tricks to train. I request you do more videos on papers that solve the problems of training neural networks - tips and tricks and why they work: why local response normalisation works, what the best way is to initialise your network layers for a vision task or an NLP task. In a nutshell, what works and why. 🙏

  • @xuraiis3100 · 4 years ago

    10:50 This should have been so obvious, how did I never think of it like that 😨

  • @anadianBaconator · 4 years ago

    That was a short break

  • @swayson5208 · 4 years ago

    :D

  • @YannicKilcher · 4 years ago

    It's pre-recorded :)

  • @fugufish247 · 3 years ago

    Fantastic explanation

  • @vladimirfokow6420 · 1 year ago

    Great video! Thanks

  • @aa-xn5hc · 3 years ago

    I love this series on historical papers

  • @prithvishah2618 · 1 year ago

    Very nice, thanks! :)

  • @LouisChiaki · 3 years ago

    Nice review about residual network!

  • @lenayoharna4030 · 2 years ago

    such a great explanation... tysm

  • @kamyarjanparvari4244 · 2 years ago

    Very Helpful. thanks a lot. 👍👌

  • @johngrabner · 4 years ago

    Would love a video enumerating, with explanations, all the lessons learned, organized by their importance to modern solutions.

  • @ramchandracheke · 4 years ago

    Hats off to the dedication level 💯

  • @ayushgupta1881 · 2 years ago

    Thanks a lot ! Amazing explanation :)

  • @Parisneo · 2 years ago

    I loved this paper. ResNets are still cool. Nowadays there are more complicated versions of these nets, but the ideas still pretty much hold. Nice video, by the way.

  • @lucidraisin · 4 years ago

    You are back! I was getting withdrawals lol

  • @shardulparab3102 · 4 years ago

    Another great one! I would like to request, if possible, a review of angular losses, especially ArcFace, as it has begun being adopted for multiple classification tasks - another *classic* review. Thanks!

  • @davidvc4560 · 1 year ago

    excellent explanation

  • @haniyek7811 · 3 years ago

    That was a great explanation.

  • @geethikaisurusampath · 8 months ago

    Thank you man.

  • @nathandfox · 2 years ago

    Revisiting classic papers is SO NICE for new people entering the field, to understand the history of the million tricks that get automatically applied nowadays.

  • @rippleproject7467 · 3 years ago

    I think the identity layer for a 3x3 matrix would be a diagonal of 1s instead of a 1 in the center. @Yannic Kilcher 08:50
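Both readings can be reconciled (a quick sketch, assuming PyTorch): for a dense 3x3 weight matrix the identity is indeed the diagonal of 1s, but for a 3x3 convolution kernel, which is presumably what the video refers to at 08:50, the identity is a single 1 at the center of the kernel:

```python
# Quick check of the two notions of "identity" (assumes PyTorch):
# a dense 3x3 matrix needs a diagonal of 1s; a 3x3 conv kernel needs a single 1 at its center.
import torch
import torch.nn.functional as F

v = torch.randn(3)
assert torch.allclose(torch.eye(3) @ v, v)  # matrix identity: diagonal of 1s

x = torch.randn(1, 1, 5, 5)
kernel = torch.zeros(1, 1, 3, 3)
kernel[0, 0, 1, 1] = 1.0                    # conv identity: single 1 in the center
assert torch.allclose(F.conv2d(x, kernel, padding=1), x)
```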

  • @MrMIB983 · 4 years ago

    Universal transformer please! Love your videos, great job

  • @RaviAnnaswamy · 2 years ago

    Very enjoyable, insight-filled presentation, Yannic, thanks! It almost seems like residual connections allow the network to use only the layers that don't corrupt the insight. Since every fully connected or convolutional layer is a destructive operation (a reduction) on its inputs, the signal may get distorted beyond recovery over a few blocks. By having a sideline crosswire where not only the original input but any derived computation can potentially be preserved at each step, the network is freed from the 'tyranny of transformation'. :) Both the paper and Yannic highlight that the goal shifts from 'deriving new insights from data' to 'preserving the input as long (deep) as needed': while all other types of layers in a network distort information or derive inferences from data, the residual connection preserves information and protects it from being automatically distorted, so that any information can be safely copied over to any later layer.

  • @RaviAnnaswamy · 2 years ago

    The residual connection can be seen as similar to the invention of zero in arithmetic.

  • @to33x · 3 years ago

    Came here from DongXii to support our NIO superstar, Ren Shaoqing!

  • @riccosoares1225 · 3 years ago

    Very good video

  • @sebastianamaruescalantecco7916 · 2 years ago

    Thank you very much for the explanation! I'm just starting to use pretrained nets and wondered how I could improve the performance of my models, and this video cleared up many doubts I had. Keep up the amazing work!

  • @utku_yucel · 4 years ago

    Thanks!

  • @t.lnnnnx · 3 years ago

    thank you!!

  • @MrYurecz · 4 years ago

    Very interesting

  • @julianoamadeulopesmoura5666 · 3 years ago

    I got the impression that you're a very good Chinese speaker from your pronunciation of the authors' names.

  • @gorgolyt · 3 years ago

    Little question about the connections when the shape changes: a simple 1x1 convolution can give the right depth but the feature maps would still be the original size. So I assume the 1x1 convolutions are also with stride 2?
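That reading matches the paper as I understand it: when a block halves the spatial resolution, the projection shortcut is a 1x1 convolution applied with stride 2, so the shortcut path is downsampled along with the main path. A minimal sketch (assuming PyTorch; channel counts are illustrative):

```python
# Sketch of a downsampling projection shortcut (assumes PyTorch): a 1x1 conv with stride 2
# both raises the channel count and halves the spatial size of the shortcut path.
import torch
import torch.nn as nn

shortcut = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128),
)

x = torch.randn(1, 64, 56, 56)
print(shortcut(x).shape)  # torch.Size([1, 128, 28, 28]) - matches the downsampled main path
```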

  • @herp_derpingson · 4 years ago

    24:06 I think LeNet also did something similar, but my memory fades. Legendary paper. Great work. Too bad that in the last two years we haven't seen any major breakthroughs.

  • @tylertheeverlasting · 4 years ago

    Is the large-scale use of transformers not a big breakthrough?

  • @herp_derpingson · 4 years ago

    @@tylertheeverlasting Transformers came out in 2017, if I remember it right.

  • @Tehom1 · 4 years ago

    Yes, this was interesting.

  • @seanbenhur · 3 years ago

    Please make more videos on classic papers... like YOLO, Inception!!

  • @sivuyilesifuba · 1 year ago

    Nice video

  • @duncanmays68 · 3 years ago

    I disagree with the assertion that the layers are learning “smaller” functions in ResNets. The results cited to support this claim, that the activations of the layers in the ResNets are larger than those in comparable feed-forward networks, can be caused by small weights and large biases, which L-2 regularization would encourage since it only operates on weights and not biases. The average magnitude of the weights in a layer have no relation to the complexity of the function they encode, since the weights of a layer can simply be scaled down without drastically changing this function. Moreover, in their paper on the Lottery Ticket Hypothesis, Frankle et al. find that ResNets are generally less compressible than feed-forward networks, meaning the functions they encode are more complex than in comparable feed-forward networks.

  • @norik1616 · 4 years ago

    I love how you will *not* review papers based on impact, except when you do :D JK, please mix in more [classic] papers, or whatever else you feel like - just keep the drive for ML. It's contagious! 💦 An idea: a combined review/your take on a whole class of models (e.g. MobileNet and its variants &| YOLO variants).

  • @SirDumbledore16 · 11 months ago

    that chuckle at 13:06 😂

  • @bernardoramos9409 · 4 years ago

    These skip connections were also "learned" automatically by AutoML

  • @GauravSharma-ui4yd · 4 years ago

    What is the Inception-net hypothesis? In the Xception paper, the author explains the hypothesis behind Inception-net, but I couldn't grasp it fully and got a bit lost. Can you explain it?

  • @YannicKilcher · 3 years ago

    I'm sorry I have no clue what the inception-net hypothesis is, but also I don't know too much about inception networks.
