"A General and Adaptive Robust Loss Function" Jonathan T. Barron, CVPR 2019

Science & Technology

arXiv: arxiv.org/abs/1701.03077
TensorFlow code: github.com/google-research/go...
JAX code: github.com/google-research/go...
PyTorch code: github.com/jonbarron/robust_l...
More at jonbarron.info/
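The general loss at the core of the paper interpolates between several classic robust losses as its shape parameter alpha varies. A minimal numpy sketch of the formula follows; it skips the removable singularities at alpha = 0 and alpha = 2 (handled as limits below) and the numerical stabilization done in the official repos:

```python
import numpy as np

def general_loss(x, alpha, c):
    """Barron's general robust loss rho(x, alpha, c).

    Sketch only: assumes alpha is not exactly 0 or 2; the official
    TensorFlow/JAX/PyTorch implementations handle those limits and
    numerical stability properly.
    """
    b = abs(alpha - 2.0)
    return (b / alpha) * (((x / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

# Named special cases / limits from the paper:
def l2_loss(x, c):      # alpha = 2
    return 0.5 * (x / c) ** 2

def cauchy_loss(x, c):  # alpha -> 0 (Cauchy / Lorentzian)
    return np.log(0.5 * (x / c) ** 2 + 1.0)

def welsch_loss(x, c):  # alpha -> -infinity (Welsch / Leclerc)
    return 1.0 - np.exp(-0.5 * (x / c) ** 2)

x = np.linspace(-6.0, 6.0, 5)
print(general_loss(x, alpha=1.0, c=1.0))  # Charbonnier (smoothed L1)
```

Here c sets the width of the quadratic bowl around x = 0, and alpha sweeps between the named losses; the paper's adaptive variant turns the loss into a probability distribution so that alpha can be optimized jointly with the network.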

Comments: 38

  • @WhiteDragon103 · 5 years ago

    Love the visualizations, makes it very easy to grasp.

  • @magno5157 · 5 years ago

    Barron loss function

  • @TileBitan · 3 years ago

    Lol, I didn't see the YouTuber's name till now. Jon Barron himself, lmfao! Nice work, guys, you are contributing greatly to ML

  • @sheetalborar6813 · 3 years ago

    This was an excellent video. Thank you!

  • @mehulmonisha · 5 years ago

    Phenomenal video. Cheers

  • @aniketrangrej · 4 years ago

    Great paper!! Nice work!!

  • @Shontushontu · 3 years ago

    Genius!!! You are absolutely a genius!

  • @guolarry4048 · 5 years ago

    Great work!!

  • @michaelcopeman9577 · 7 months ago

    Simply brilliant.

  • @arsenylevin582 · 5 years ago

    Cool stuff! It's great to see code along with the paper!

  • @frenchmarty7446 · a year ago

    Insanely cool

  • @arthurwu6399 · 5 years ago

    Awesome!

  • @iamsiddhantsahu · 5 years ago

    This is great, eager to read this paper!

  • @jon_barron · 5 years ago

    Thanks! The paper is here: arxiv.org/abs/1701.03077

  • @visuality2541 · 4 years ago

    this is gold

  • @alfcnz · 3 years ago

    Oh, wow, your videos are always so great! Thanks for putting it together! P.S. What do you use for the animations?

  • @jon_barron · 3 years ago

    Hah, thanks! The animations are just matplotlib and ffmpeg.

  • @alfcnz · 3 years ago

    @jon_barron it's *really* well done. Like, I'm serious. 😍

  • @sheetalborar6813 · 3 years ago

    @jon_barron that's amazing

  • @AhmedThahir2002 · 6 months ago

    Could you share the code? The animations look lovely
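Barron doesn't post his animation code in this thread. Purely as an illustration of the matplotlib + ffmpeg workflow he mentions above (a hypothetical sketch, not his script), an alpha-sweep animation could be written like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, FFMpegWriter

def general_loss(x, alpha, c=1.0):
    b = abs(alpha - 2.0)
    return (b / alpha) * (((x / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

x = np.linspace(-6.0, 6.0, 500)
fig, ax = plt.subplots()
(line,) = ax.plot(x, general_loss(x, -2.0))
ax.set(xlim=(-6, 6), ylim=(0, 5), xlabel="x", ylabel="loss")

alphas = np.linspace(-2.0, 1.9, 120)

def update(i):
    a = alphas[i]
    # Near the removable singularity at alpha = 0, use the Cauchy limit.
    y = np.log(0.5 * x ** 2 + 1.0) if abs(a) < 1e-3 else general_loss(x, a)
    line.set_ydata(y)
    ax.set_title(f"alpha = {a:.2f}")
    return (line,)

anim = FuncAnimation(fig, update, frames=len(alphas), blit=False)
anim.save("loss_sweep.mp4", writer=FFMpegWriter(fps=24))  # requires ffmpeg
```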

  • @billy818 · 5 years ago

    Wow, this makes me wanna actually do a master's in AI now. Damn, I need to start learning this (and calc :/)

  • @kristoferkrus · 9 months ago

    Sweet, looks like it works well and it's simple to use! Have you tried using a mixture of Gaussians as the output and the KL divergence as the loss function? It would be interesting to see how well that performs against your method. Granted, even a mixture of Gaussians will be sensitive to outliers unless you include some Gaussian with an extremely large standard deviation; we just need some way for the network to easily output values that can be interpreted as very large standard deviations.
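To make the suggestion above concrete (this is the commenter's idea, not anything from the paper or video), a Gaussian-mixture negative log-likelihood regression loss might be sketched as follows; the function and the two-component example are illustrative only:

```python
import numpy as np
from scipy.special import logsumexp

def gmm_nll(y, log_weights, means, log_sigmas):
    """Negative log-likelihood of y under a K-component Gaussian mixture.

    A mixture-density network would predict log_weights, means, and
    log_sigmas per sample; predicting log sigma makes it easy for the
    network to express the very large standard deviations mentioned above.
    """
    log_w = log_weights - logsumexp(log_weights)  # normalize mixture weights
    sigmas = np.exp(log_sigmas)
    log_pdf = (-0.5 * ((y - means) / sigmas) ** 2
               - log_sigmas - 0.5 * np.log(2.0 * np.pi))
    return -logsumexp(log_w + log_pdf)  # log of the mixture density, negated

# A wide second component keeps the loss from exploding on outliers:
print(gmm_nll(y=50.0,
              log_weights=np.array([2.0, 0.0]),
              means=np.array([0.0, 0.0]),
              log_sigmas=np.array([0.0, 5.0])))  # ~8.1, vs ~1250 for one narrow Gaussian
```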

  • @sheetalborar6813 · 3 years ago

    Would this loss work for classification tasks as well? It does not match the shape of the cross-entropy loss function.

  • @tenetsuoyoung1191 · 3 years ago

    Can this loss be used in super-resolution? Thank you.

  • @yasserothman4023 · 3 years ago

    @1:42 what is x?

  • @giuseppeguap7250 · 4 years ago

    How do you do those animations thoooo!

  • @jon_barron · 4 years ago

    It's all just matplotlib, avconv, and elbow grease.

  • @ta6847 · 4 years ago

    2:38 "large errors have less influence than moderate errors" i.e. large errors have less "marginal" influence than moderate errors; alternatively, large errors are penalized more than moderate errors always, but by less and less as alpha approaches infinity

  • @jon_barron · 4 years ago

    Here "influence" is a technical term from the m-estimation literature, where it means the magnitude of the derivative.

  • @ta6847 · 4 years ago

    @jon_barron Good to know!
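For reference, differentiating the general loss gives the closed form behind this definition (a derivation from the loss formula above, worth checking against the paper):

\[
\psi(x, \alpha, c) = \frac{\partial \rho}{\partial x} = \frac{x}{c^{2}} \left( \frac{(x/c)^{2}}{|\alpha - 2|} + 1 \right)^{\alpha/2 - 1}
\]

For large |x| this scales as |x|^(alpha - 1), so the influence saturates at alpha = 1 and redescends toward zero for alpha < 1.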

  • @stivstivsti · 3 years ago

    It doesn't seem to generalize log loss, does it?

  • @jon_barron · 3 years ago

    Nope, this is just looking at the losses used for regression, not classification.

  • @theshuman100 · 5 years ago

    is this loss

  • @silkwurm · 5 years ago

    Seems to me that the claims of "bounded gradient" don't apply when alpha = 2.

  • @jon_barron · 5 years ago

    That's right, the gradient is bounded iff alpha ≤ 1.
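Consistent with the derivative given earlier, the gradient magnitude grows like |x|^(alpha - 1), so it stays bounded exactly when alpha ≤ 1. A quick numerical check (an illustrative sketch, not code from the official repos):

```python
import numpy as np

def influence(x, alpha, c=1.0):
    # d(rho)/dx of the general loss; ~ |x|**(alpha - 1) for large |x|
    return (x / c ** 2) * ((x / c) ** 2 / abs(alpha - 2.0) + 1.0) ** (alpha / 2.0 - 1.0)

for alpha in (-2.0, 0.5, 1.0, 1.5):
    print(f"alpha = {alpha:+.1f}: gradient at x = 1e6 is {influence(1e6, alpha):.3g}")
# alpha < 1: gradient has redescended toward 0 (outliers are ignored)
# alpha = 1: gradient saturates near 1/c
# alpha > 1: gradient grows without bound as |x| grows
```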

  • @diwu1877 · 4 years ago

    Isn't there a tool called "Hyperas"? Why not use that?

  • @emilianorosso1113 · 3 years ago

    Have you met God?

  • @zulkafilabbas · 3 years ago

    Barron loss
