Rethinking Pre-training and Self-Training

Science & Technology

*ERRATUM* At 9:31 I called large-scale jittering "color jittering"; it isn't an operation specifically on colors.
This video explores an interesting paper from researchers at Google AI showing that self-training outperforms both supervised and self-supervised (SimCLR) pre-training. The video explains what self-training is and how each of these methods tries to use extra data (labeled or unlabeled) to improve performance on downstream tasks. A toy sketch of the self-training loop follows the paper links below.
Thanks for watching! Please Subscribe!
Paper Links:
Rethinking Pre-training and Self-training: arxiv.org/pdf/2006.06882.pdf
OpenImages Dataset: storage.googleapis.com/openim...
RetinaNet: arxiv.org/pdf/1708.02002.pdf
Rethinking ImageNet Pre-training: arxiv.org/pdf/1811.08883.pdf
Image Classification State-of-the-Art: paperswithcode.com/sota/image...
Self-Training with Noisy Student: arxiv.org/pdf/1911.04252.pdf
Rotation Self-Supervised Learning: arxiv.org/pdf/1803.07728.pdf
POET: arxiv.org/pdf/1901.01753.pdf
ImageGPT: openai.com/blog/image-gpt/
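
Below is a minimal, hypothetical sketch of the pseudo-labeling loop at the heart of self-training: train a teacher on labeled data, pseudo-label the unlabeled data, then train a student on both. The toy model, toy tensors, and confidence threshold are illustrative assumptions, not the paper's actual object detection setup (which uses RetinaNet backbones and strong data augmentation).

```python
# Minimal self-training (pseudo-labeling) sketch: teacher -> pseudo-labels -> student.
# Toy tensors stand in for real labeled/unlabeled images.
import torch
import torch.nn as nn


def make_model(num_classes=10):
    # Stand-in classifier; the paper uses detection/segmentation backbones instead.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))


def train(model, images, labels, epochs=5, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model


# Toy data: a small labeled set and a larger unlabeled set.
labeled_x, labeled_y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
unlabeled_x = torch.randn(256, 3, 32, 32)

# 1) Train a teacher on the labeled data.
teacher = train(make_model(), labeled_x, labeled_y)

# 2) Pseudo-label the unlabeled data, keeping only confident predictions.
with torch.no_grad():
    probs = teacher(unlabeled_x).softmax(dim=1)
    conf, pseudo_y = probs.max(dim=1)
keep = conf > 0.5  # confidence threshold chosen for illustration only
pseudo_x, pseudo_y = unlabeled_x[keep], pseudo_y[keep]

# 3) Train a student on labeled + pseudo-labeled data (the paper additionally
#    applies strong augmentation such as large-scale jittering, omitted here).
student_x = torch.cat([labeled_x, pseudo_x])
student_y = torch.cat([labeled_y, pseudo_y])
student = train(make_model(), student_x, student_y)
```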

Comments: 18

  • @Batu135 · 4 years ago

    Damn, you're speaking fast!

  • @connorshorten6311 · 4 years ago

    1:19 How to use Extra Data?
    3:08 Self-Training Algorithm
    5:19 Examples of Pseudo-Labels (Semantic Segmentation)
    5:48 Comparison with Supervised and Self-Supervised Pre-training
    7:30 Feature Backbones for Object Detection
    8:40 Experiments (To be continued)

  • @DistortedV12 · 4 years ago

    Nice! Thank you for covering this Connor!

  • @connorshorten6311 · 4 years ago

    Thank you for watching!

  • @youngjin8300 · 4 years ago

    LOVE your channel.

  • @minma02262 · 3 years ago

    Awesome!

  • @0102030405Jacky · 3 years ago

    Note: 1. Self-training is more robust and performs better on several downstream tasks. 2. Pre-training still yields acceptable performance and is 1.3x-8x faster. 3. Random initialization has the best performance.

  • @philborba · 4 years ago

    Great video! Congrats! Can you make more videos on semantic segmentation? Thanks for all your videos, they are awesome!

  • @connorshorten6311 · 4 years ago

    Thank you so much! I'll look into it more, I've been really interested in the idea of pipelining semantic segmentation models with GauGAN for data augmentation. Would you mind sharing what interests you about semantic segmentation?

  • @philborba · 4 years ago

    Henry AI Labs I'm a master's student studying semantic segmentation applied to remote sensing. There are lots of new architectures and they are very exciting. It would be very nice to see a video where you explain some architectures or talk about the influence of different data augmentation techniques. On that last topic, the best part of the video you just published is the possibility of using self-training to improve accuracy; that blew my mind! We can talk more if you want to; I follow you on Twitter.

  • @blakeedwards3582 · 4 years ago

    Thank you!

  • @sayakpaul3152 · 4 years ago

    Could you elaborate on the part where you discussed self-training and meta pseudo-labels?

  • @user-to7dp2uc3o · 4 years ago

    Thank you so much

  • @connorshorten6311 · 4 years ago

    Thanks for watching!

  • @nguyentnhoang · 2 years ago

    What is AP on the y-axis at 10:45?

  • @SachinSingh-do5ju · 4 years ago

    Mannn, how much do you read a day?

  • @connorshorten6311 · 4 years ago

    Haha, thank you! I try to read for 2-4 hours right after I wake up.

  • @faizanahemad · 4 years ago

    Dude, talk a little slower. Had to slow down the vid. Great work, but go slow. Long videos are ok.
