"A General and Adaptive Robust Loss Function" Jonathan T. Barron, CVPR 2019
Science & Technology
arXiv: arxiv.org/abs/1701.03077
TensorFlow code: github.com/google-research/go...
JAX code: github.com/google-research/go...
Pytorch code: github.com/jonbarron/robust_l...
More at jonbarron.info/
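For reference, a minimal NumPy sketch of the general loss from the paper, rho(x, alpha, c); this is illustrative only, and the official repos linked above additionally handle numerical stability near the singular points alpha = 0 and alpha = 2:

# Minimal sketch of Barron's general robust loss (not the official code).
# alpha = 2, 0, and -inf are singular in the general expression and are
# handled here via their analytic limits (L2, Cauchy, and Welsch).
import numpy as np

def general_loss(x, alpha, c=1.0):
    sq = (x / c) ** 2
    if alpha == 2.0:           # L2 limit: 0.5 * (x/c)^2
        return 0.5 * sq
    if alpha == 0.0:           # Cauchy / Lorentzian limit
        return np.log1p(0.5 * sq)
    if alpha == -np.inf:       # Welsch / Leclerc limit
        return 1.0 - np.exp(-0.5 * sq)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((sq / b + 1.0) ** (alpha / 2.0) - 1.0)

Setting alpha = 1 recovers the Charbonnier (smoothed L1) loss and alpha = -2 recovers Geman-McClure, matching the special cases discussed in the paper.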
Comments: 38
Love the visualizations, makes it very easy to grasp.
Barron loss function
LOL, I didn't see the YouTuber's name till now. Jon Barron himself, lmao! Nice work, guys, you are contributing greatly to ML.
This was an excellent video. Thank you!
Phenomenal video. Cheers
Great paper !! Nice work !!
Genius!!! You are absolutely a genius!
Great work !!
Simply brilliant.
cool stuff! it's great to see code along with the paper!
Insanely cool
Awesome!
This is great, eager to read this paper!
@jon_barron
5 years ago
Thanks! The paper is here: arxiv.org/abs/1701.03077
this is gold
Oh, wow, your videos are always so great! Thanks for putting it together! P.S. What do you use for the animations?
@jon_barron
3 years ago
Hah, thanks! The animations are just matplotlib and ffmpeg.
@alfcnz
3 years ago
@@jon_barron it's *really* well done. Like I'm serious. 😍
@sheetalborar6813
3 years ago
@@jon_barron that's amazing
@AhmedThahir2002
6 months ago
Could you share the code? The animations look lovely
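No animation code is linked, but a minimal sketch of the matplotlib + ffmpeg workflow Barron describes above might look like the following; the figure contents, frame range, and writer settings are purely illustrative, and it assumes ffmpeg is installed:

# Illustrative matplotlib + ffmpeg animation sketch (not Barron's code):
# sweep alpha and redraw the general loss curve each frame.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, FFMpegWriter

x = np.linspace(-6.0, 6.0, 512)
fig, ax = plt.subplots()
(line,) = ax.plot(x, 0.5 * x ** 2)
ax.set_ylim(0.0, 4.0)

def update(alpha):
    b = abs(alpha - 2.0)                 # alpha stays inside (0, 2) below
    line.set_ydata((b / alpha) * ((x ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0))
    ax.set_title(f"alpha = {alpha:.2f}")
    return (line,)

anim = FuncAnimation(fig, update, frames=np.linspace(0.1, 1.9, 90))
anim.save("loss_sweep.mp4", writer=FFMpegWriter(fps=30))  # invokes ffmpeg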
Wow, this makes me wanna actually do a master's in AI now. Damn, I need to start learning this (and calc :/)
Sweet, looks like it works well and it's simple to use! Have you tried using a mixture of Gaussians as output and KL-divergence as loss function? It would be interesting to see how well that performs against your method. Granted, even a mixture of Gaussians will be sensitive to outliers unless you include some Gaussian with extremely large standard deviation; we just need some way to enable the network to be able to easily output values that can be interpreted as very large standard deviations.
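A rough sketch of what this comment proposes, assuming a hypothetical network head that outputs per-sample mixture parameters; this is the commenter's idea expressed as a negative log-likelihood, not anything from the paper:

# Hypothetical mixture-of-Gaussians regression loss via negative
# log-likelihood, sketched with torch.distributions.
import torch
import torch.distributions as D

def mixture_nll(logits, means, log_scales, targets):
    # logits, means, log_scales: [batch, k] outputs of a (hypothetical) head.
    # targets: [batch] regression targets.
    mix = D.Categorical(logits=logits)
    # exp() keeps scales positive; letting log_scales grow large is what
    # gives the network the "very large standard deviation" escape hatch
    # for outliers that the comment mentions.
    comp = D.Normal(means, log_scales.exp())
    return -D.MixtureSameFamily(mix, comp).log_prob(targets).mean()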
Would this loss work in classification tasks as well? As it does not match the shape of the cross-entropy loss function
Can this loss be used in super-resolution? Thank you.
@1:42 what is x?
How do you do those animations thoooo!
@jon_barron
4 years ago
It's all just matplotlib, avconv, and elbow grease.
2:38 "large errors have less influence than moderate errors" i.e. large errors have less "marginal" influence than moderate errors; alternatively, large errors are penalized more than moderate errors always, but by less and less as alpha approaches infinity
@jon_barron
4 years ago
Here "influence" is a technical term from the m-estimation literature, where it means the magnitude of the derivative.
@ta6847
4 years ago
@@jon_barron Good to know!
It doesn't seem to generalize log loss, does it?
@jon_barron
3 years ago
Nope, this is just looking at the losses used for regression, not classification.
is this loss
seems to me that the claims of "bounded gradient" don't apply when alpha = 2
@jon_barron
5 years ago
That's right, the gradient is bounded iff alpha ≤ 1.
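A quick numerical check of that condition, sketched from the paper's closed-form derivative d rho/dx = (x/c^2) * ((x/c)^2/|alpha-2| + 1)^(alpha/2 - 1) (alpha = 0 and 2 excluded, since they are handled as limits); in M-estimation terms, the magnitude of this derivative is exactly the "influence" discussed above:

# Evaluate the derivative of the general loss at increasingly large errors.
import numpy as np

def dloss_dx(x, alpha, c=1.0):
    b = abs(alpha - 2.0)
    return (x / c ** 2) * ((x / c) ** 2 / b + 1.0) ** (alpha / 2.0 - 1.0)

x = np.array([1e2, 1e4, 1e6])
for alpha in [0.5, 1.0, 1.5]:
    print(alpha, dloss_dx(x, alpha))
# alpha = 0.5: influence decays toward 0 as the error grows.
# alpha = 1.0: influence saturates near 1/c (bounded).
# alpha = 1.5: influence grows like sqrt(x), unbounded, just as alpha = 2
#              grows like x. Hence bounded iff alpha <= 1.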
Isn't there a tool called "Hyperas"? Why not use that?
Have you met God?
Barron loss