Robust Regression with the L1 Norm

Science & Technology

This video discusses how least-squares regression is fragile to outliers, and how we can add robustness with the L1 norm.
Book Website: databookuw.com
Book PDF: databookuw.com/databook.pdf
These lectures follow Chapter 3 from:
"Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz
Amazon: www.amazon.com/Data-Driven-Sc...
Brunton Website: eigensteve.com
This video was produced at the University of Washington
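The fragility described above is easy to see numerically. Below is a minimal sketch (my own illustration, not code from the video or book) fitting a slope through data with one gross outlier: the least-squares (L2) fit gets dragged toward the outlier, while the least-absolute-deviations (L1) fit barely moves.

    # Least-squares (L2) vs least-absolute-deviations (L1) slope fit.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    x = np.arange(10, dtype=float)
    y = 2.0 * x + 0.1 * rng.standard_normal(10)  # true slope = 2
    y[-1] = -10.0                                # one gross outlier

    a_l2 = (x @ y) / (x @ x)                     # closed-form L2 slope
    a_l1 = minimize_scalar(lambda a: np.abs(y - a * x).sum(),
                           bounds=(-10, 10), method='bounded').x

    print(f"L2 slope: {a_l2:.3f}   L1 slope: {a_l1:.3f}")
    # The L1 slope stays near 2; the L2 slope is pulled toward the outlier.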

Comments: 17

  • @zoheirtir · 3 years ago

    This channel is one of the most important channels for me! Many thanks, Steve!

  • @3003eric · 3 years ago

    Nice video. Your channel and book are amazing! Congratulations.

  • @user-ez9ol7om8d · 7 months ago

    Along with visual aids you have explained the concept in a very understandable manner. Thanks for the video.

  • @alexandermichael3609 · 3 years ago

    Thank you, Professor. It is pretty helpful for me.

  • @Headbangnuker · 3 years ago

    I was just talking about this in a meeting. Get out of my head, Brunton.

  • @JousefM · 3 years ago

    Thumbs up Steve!

  • @JeffersonRodrigoo · 3 years ago

    Excellent!

  • @Calvin4016 · 3 years ago

    Prof. Brunton, thank you for the lecture! However, in some cases such as maximum a posteriori (MAP) and maximum likelihood estimation, under the assumption that the noise is Gaussian, minimizing the L2 norm provides the optimal solution. Usually certain heuristics such as M-estimation are applied to mitigate issues arising from outliers; in other words, the kernel is changed to a shape that can tolerate a certain amount of outliers in the system. It sounds like using the L1 norm here has very similar effects to robust kernels, where we are effectively changing the shape of the cost/error. Can you please elaborate on the differences between using (L1 norm) and (L2 norm + M-estimator), and how the L1 norm performs in applications where data uncertainty is considered? Thanks!
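For readers comparing the two approaches in the comment above: the Huber M-estimator penalty is quadratic near zero and linear in the tails, so for large residuals it behaves much like the L1 norm. A small sketch of the three penalties (illustrative only; the delta = 1.0 threshold is an arbitrary choice, not from the video):

    # Residual penalties: L2, L1, and Huber (quadratic near 0, linear in tails).
    import numpy as np

    def rho_l2(r):  return 0.5 * r**2
    def rho_l1(r):  return np.abs(r)
    def rho_huber(r, delta=1.0):
        small = np.abs(r) <= delta
        return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

    r = np.array([0.1, 1.0, 5.0, 20.0])  # residuals, including outlier-sized ones
    print(rho_l2(r))     # grows quadratically: outliers dominate the cost
    print(rho_l1(r))     # grows linearly everywhere
    print(rho_huber(r))  # matches L2 for small r, L1-like for large r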

  • @keyuchen5992 · 9 months ago

    I think you are right

  • @sutharsanmahendren1071 · 3 years ago

    Dear sir, I am from Sri Lanka and I really admire your video series. My doubt is that the L1 norm is not differentiable at zero because of the kink there. To impose sparsity, researchers use ISTA (the Iterative Soft-Thresholding Algorithm) to handle the weights when they come near zero, with a certain threshold. What are your thoughts on this?
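The soft-thresholding step mentioned in this comment is the proximal operator of the L1 norm; ISTA alternates a gradient step on the least-squares term with this shrinkage, which sidesteps the non-differentiability at zero. A minimal sketch of the operator (an illustration, not code from the video):

    # Soft-thresholding: the proximal operator of the L1 norm used by ISTA.
    import numpy as np

    def soft_threshold(w, t):
        # shrink toward zero by t; entries within [-t, t] become exactly 0
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    w = np.array([-3.0, -0.2, 0.0, 0.4, 2.5])
    print(soft_threshold(w, 0.5))  # small entries zeroed out, large ones shrunk by 0.5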

  • @haticehocam2020 · 3 years ago

    Mr. Brunton, what materials and software did you use while shooting this video?

  • @vijayendrasdm · 3 years ago

    Hi Steve, the L1 (i.e., regularized) error surface is not smooth. Are you planning to explain how we optimize such functions? Mathematical derivations would be helpful :) Thanks
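One note on the question above: the L1 objective is in fact convex, just not differentiable at zero. A standard trick is to recast least-absolute-deviations regression as a linear program over the fit coefficients x and auxiliary bounds t on the residual magnitudes. A minimal sketch using scipy.optimize.linprog (my own illustration, not the video's method):

    # L1 regression as a linear program:  min sum(t)  s.t.  -t <= A @ x - b <= t
    import numpy as np
    from scipy.optimize import linprog

    A = np.column_stack([np.ones(5), np.arange(5.0)])  # design: intercept + slope
    b = np.array([0.0, 1.1, 1.9, 3.2, 20.0])           # last entry is an outlier

    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])      # cost: sum of the t's
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    print(res.x[:n])  # [intercept, slope]; slope stays near 1 despite the outlier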

  • @pierregravel5941 · 1 year ago

    Is there any way we might generate a sampling matrix which is maximally incoherent? What if the samples are positioned randomly and maximally distant from each other? Can we add additional constraints on the sampling matrix?
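On the incoherence question: a standard result in compressed sensing is that random Gaussian measurement matrices are, with high probability, incoherent with any fixed basis. A small sketch that estimates the mutual coherence (largest normalized inner product between distinct columns) of such a matrix; the 64x256 size is an arbitrary choice for illustration:

    # Mutual coherence of a random measurement matrix.
    import numpy as np

    rng = np.random.default_rng(1)
    Theta = rng.standard_normal((64, 256))    # random measurement matrix
    Theta /= np.linalg.norm(Theta, axis=0)    # unit-norm columns
    G = np.abs(Theta.T @ Theta)               # pairwise column overlaps
    np.fill_diagonal(G, 0.0)
    print(G.max())  # mutual coherence; stays well below 1 with high probability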

  • @twk844 · 3 years ago

    Does anyone know the historical reasons for the popularity of the L2 norm? Very entertaining videos! Namaste!

  • @MrHaggyy · 1 year ago

    I think it's so popular because you need it so often. Basically everybody knows Pythagoras, or the distance between two points in 2D, and that idea dominates mechanical engineering. The whole framework of complex numbers with i = sqrt(-1) is built around the L2 norm, so all the differential equations in mechanics and electronics need it, and basic optics needs it too.

  • @alegian7934 · 3 years ago

    There is a point in each video where you lose consciousness of time passing :D
