Sparsity and the L1 Norm

Science & Technology

Here we explore why the L1 norm promotes sparsity in optimization problems. This is an incredibly important concept in machine learning, and data science more broadly, as sparsity helps us to improve robustness and prevent overfitting.
Book Website: databookuw.com
Book PDF: databookuw.com/databook.pdf
These lectures follow Chapter 3 from:
"Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz
Amazon: www.amazon.com/Data-Driven-Sc...
Brunton Website: eigensteve.com
This video was produced at the University of Washington
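
To make the sparsity-promoting effect concrete, here is a minimal numerical sketch (not from the book or video; variable names and problem sizes are illustrative) comparing the minimum-L2 and minimum-L1 solutions of an underdetermined system y = Theta s, with the L1 problem posed as a linear program:

```python
# Minimal sketch (illustrative, not from the video): compare the minimum-L2
# and minimum-L1 solutions of an underdetermined system y = Theta @ s.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 40, 100                       # 40 measurements, 100 unknowns
Theta = rng.standard_normal((n, p))
s_true = np.zeros(p)
s_true[rng.choice(p, 5, replace=False)] = rng.standard_normal(5)  # 5 nonzeros
y = Theta @ s_true

# Minimum-L2 solution: least squares via the pseudo-inverse (dense in general).
s_l2 = np.linalg.pinv(Theta) @ y

# Minimum-L1 solution: linear program  min sum(t)  s.t.  -t <= s <= t, Theta s = y.
# Decision variables are stacked as [s, t].
c = np.concatenate([np.zeros(p), np.ones(p)])
A_ub = np.block([[np.eye(p), -np.eye(p)],      #  s - t <= 0
                 [-np.eye(p), -np.eye(p)]])    # -s - t <= 0
b_ub = np.zeros(2 * p)
A_eq = np.hstack([Theta, np.zeros((n, p))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=(None, None))
s_l1 = res.x[:p]

print("nonzeros (|s_i| > 1e-6):  L2 =", np.sum(np.abs(s_l2) > 1e-6),
      " L1 =", np.sum(np.abs(s_l1) > 1e-6))
```

Typically the minimum-L2 solution is fully dense while the minimum-L1 solution recovers the few true nonzeros, which is the intuition the lecture builds geometrically.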

Comments: 57

  • @Ninjashifter · 3 years ago

    Currently losing my mind over how effective that simple diamond shape is in finding the 1-norm, I suddenly want to research topology. Amazing stuff.

  • @Eigensteve · 3 years ago

    I know, it is kind of a mind twister. It gets even more interesting in higher dimensions...

  • @danbochman · 3 years ago

    It's amazing how you can work for years with a function and still gain new insights about it and improve your intuition. Thank you so much for these videos.

  • @alexhartung2369 · 3 years ago

    Mind blown. This guy is a rare breed. Loving the content.

  • @SamAndHenryPlaz · 3 years ago

    Finally got a nibble of insight into the L1 and L2 norms they bring up in Ng's Coursera classes. Seems like it would be good to use when you want to prune the size of a network after training. I love the chunks of insight these videos give and the very understandable way they're explained.

  • @nmerrillvt · 2 years ago

    Amazing videos. The quality and clarity are above and beyond. Thanks!

  • @pevogam · 3 years ago

    These are all very visual and well-made lectures, thanks a lot for the high-quality videos and openly available materials!

  • @sahilchawla2485 · 2 years ago

    Best 11 mins spent! Thanks a lot for distinguishing your solution from the rest of the internet!

  • @aantoine5819 · 3 years ago

    This was a perfect supplement to a lecture I just watched for class

  • @PunmasterSTP · 1 year ago

    Sparsity? More like “Super explanation that’s gotta be”…one of the best I’ve ever heard on the topic. Thank you so much for making and sharing all of these incredibly high-quality videos!

  • @jasleenkaur1208 · 2 years ago

    Really the best videos... I have been reading up on dictionary creation for the last two weeks and now I understand the concept.

  • @yuhuawei1763 · 2 years ago

    Excellent explanation! Thank you!

  • @chaser27 · 3 years ago

    You've convinced me to buy your book based on these videos.

  • @Eigensteve · 3 years ago

    Awesome, hope you like it!

  • @adityamrai8508 · 3 years ago

    You are simply amazing. Keep inspiring.😎

  • @andreygrant8676 · 3 years ago

    I don't know why this came up on my suggestions list, but it was a good explanation! Thank you Steve :)

  • @TheLuceArs · 3 years ago

    Same, I was literally browsing YT suggestions while waiting overnight at the airport and saw one of these videos. Who would've known it would become the most useful night of my university years.

  • @hashirroshinvaliyaparambil70 · 3 years ago

    Hey Prof. Steve, could you please make videos about Lyapunov stability, especially for nonlinear systems?

  • @proteuswave · 2 months ago

    Great video. Also... your setup is nuts! Writing backward? Mirror image? Does not compute!

  • @hoaxuan7074 · 3 years ago

    It is quite simple to use smoothing to solve compressed sensing, though I found it quite slow. You repeatedly correct with the compressed sensing data, smooth, correct, smooth... The binomial filter is quite good. These days I would prefer to solve with a neural net in one step, which is likely to be much faster.
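
A minimal sketch of the correct-then-smooth iteration described in the comment above, assuming random Gaussian measurements y = A x and a [1, 2, 1]/4 binomial filter (all names and sizes are illustrative, not the commenter's code):

```python
# Sketch of the "correct with the data, then smooth" iteration described above.
# Assumes y = A @ x_true with a random Gaussian A; the binomial filter is [1,2,1]/4.
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)  # smooth signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

A_pinv = np.linalg.pinv(A)          # used to project back onto {x : A x = y}
x = np.zeros(n)
for _ in range(200):
    x = x + A_pinv @ (y - A @ x)    # correct: re-enforce the measurements
    x = np.convolve(x, [0.25, 0.5, 0.25], mode="same")  # smooth: binomial filter

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

This is the same alternating idea as projection-onto-constraints methods: the correction step keeps the iterate consistent with the measurements, and the smoothing step pulls it toward the class of smooth signals.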

  • @yasir9909 · 3 years ago

    Sir, do you also have a similar detailed lecture on the sparsity introduced by the l-infinity norm?

  • @EduardoGarcia-tv2fc · 4 years ago

    Are we gonna get more videos on chapter 3?

  • @varunjohnson9570 · 4 years ago

    Sir, when can we expect continuation videos for Chapter 3?

  • @nguyenngocly1484 · 3 years ago

    You can train neural networks with sparse mutations. Continuous Gray Code Optimization mutations work well. Each GPU gets the full neural model and part of the training set. Not much data needs to move around, just a sparse mutation list, costs back, and accept or reject. A fixed random pattern of sign flips is a simple random projection. If you follow it with a Walsh-Hadamard transform you get a much better random projection. The WHT acts as a system of fixed orthogonal dot products. The variance equation for linear combinations of random variables applies to dot products, and the CLT applies not just to sums but to any combination of adding and subtracting (cf. the WHT). You can have inside-out neural networks with fixed dot products and parametric (adjustable) activation functions, i.e. fast-transform fixed-filter-bank neural nets. You do a simple random projection before the net to stop the first layer taking a spectrum, and a final fast transform as a pseudo-readout layer. f(x)=x.ai x=0 , i=0 to n-1 is a suitable parametric activation function.
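
An illustrative sketch of the random projection described in the comment above: a fixed random pattern of sign flips followed by a fast Walsh-Hadamard transform (this implementation is generic, not the commenter's code):

```python
# Sketch of the random projection described above: a fixed random pattern of
# sign flips followed by a fast Walsh-Hadamard transform (WHT).
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = x.copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)            # orthonormal scaling

rng = np.random.default_rng(2)
n = 1024
signs = rng.choice([-1.0, 1.0], size=n)   # fixed random sign flips

def random_projection(v):
    return fwht(signs * v)

v = rng.standard_normal(n)
w = random_projection(v)
print(np.linalg.norm(v), np.linalg.norm(w))  # norms match: the map is orthogonal
```

Because the sign flips are a diagonal orthogonal matrix and the scaled WHT is orthonormal, the composition preserves lengths while spreading energy across all coordinates, which is what makes it useful as a cheap random projection.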

  • @umangyadav4787 · 3 years ago

    Hey Steve, I have a question: in lasso we have the regularisation term |w|. If this is changed to w^2/(1+w^2), what change does it make to the coefficients?

  • @alegian7934 · 3 years ago

    I love how the Y variable sounds like "why"

  • @NowanIlfideme · 3 years ago

    Hi, will you also be covering the mathematical background of the L1 sparsity property? I understand it intuitively, but I haven't found a good intuitive and rigorous mathematical proof/explanation of it.

  • @praveencv431 · 2 years ago

    Any formal proof that L1 norm minimisation gives the sparsest solution in higher dimensions? I was trying to find a reference for it but was unable to get one.

  • @TheMazyProduction · 3 years ago

    Is the L_infinity norm the most uniform solution?

  • @meccamiles7816 · 3 years ago

    Does someone have a simple way of describing why finding the L0 sparse solution is NP-complete?

  • @mikefischbein3230 · 2 years ago

    I keep picturing scenes from Cheers where Norm walks into the bar and everyone shouts "Norm!" Then Sam asks "What'll it be today, Norm? L1 or L2?"

  • @zrmsraggot · 2 years ago

    What's the difference between a sparse solution coming up on S1 and one on S2?

  • @meganauger354 · 4 years ago

    Sort of a random question, but what would happen if, when using the L1 norm, the s vector was perfectly parallel to one of the sides of the L1 square? Would it return a dense solution, instead of sparse? Would that mean you are in an inappropriate basis for a sparse solution? Thanks!

  • @viniciusvilela5371 · 2 years ago

    I guess that's why he said it often brings a sparse solution, not always...

  • @cDtag00 · 3 years ago

    Visually, l_{2/3} looks like an attractive choice as well, as it approximates l_0 even better than l_1 does. What's the advantage of l_1 over l_{2/3}?

  • @NowanIlfideme · 3 years ago

    I think, like the 0-norm, it's not possible to use convex optimization to find solutions with a p-norm where p < 1.

  • @cDtag00 · 3 years ago

    @@NowanIlfideme Thank you, makes sense... I was indeed thinking of neural networks and other forms of non-linear optimization

  • @deez_gainz · 3 years ago

    Does L1 norm regularization for a neural network layer also promote the sparsest weights in the layer?

  • @SamAndHenryPlaz · 3 years ago

    I'm wondering this as well. It does seem that way.

  • @xinyichen4332 · 3 years ago

    Hi, I still don’t understand why finding the L0 norm of S is NP hard. Could you please explain a bit? I’d appreciate your help. Thanks!

  • @Eigensteve · 3 years ago

    Good question. So computing the L0 norm of a vector is simple: just count all of the non-zero entries. But finding the solution "x" of a system of equations Ax=b where x has the minimum L0 norm is NP hard. You would have to search through all possible sparse vectors (you would try all vectors with only one non-zero entry, then all vectors with only 2 non-zero entries, then all vectors with only 3, and so on), until you find the sparsest solution. This is a combinatorial search over roughly 2^n possible sparsity patterns (where n is the dimension of x), which grows way too fast for modern computers to handle.
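
To make that combinatorial blow-up concrete, here is a brute-force sketch (illustrative only, not code from the book or video) that searches supports of size 1, then 2, and so on, until it finds an exact solution of Ax = b:

```python
# Brute-force illustration of minimum-L0 recovery for Ax = b: try every support
# of size k = 1, 2, ... and keep the first one that reproduces b exactly.
import itertools
import numpy as np

def sparsest_solution(A, b, tol=1e-9):
    n = A.shape[1]
    for k in range(1, n + 1):
        for support in itertools.combinations(range(n), k):   # C(n, k) candidates
            cols = list(support)
            xs, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
            if np.linalg.norm(A[:, cols] @ xs - b) < tol:
                x = np.zeros(n)
                x[cols] = xs
                return x
    return None

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 12))
x_true = np.zeros(12)
x_true[[2, 9]] = [1.0, -2.0]
b = A @ x_true
print(sparsest_solution(A, b))   # recovers the 2-sparse x_true for this tiny example
```

The nested loop visits every possible support, so the cost grows exponentially with n, which is exactly why the convex L1 relaxation is used instead.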

  • @adityagarg259 · 1 year ago

    What if the line and the one-norm diamond become parallel?

  • @davidmadsen2761 · 3 years ago

    Isn't the taxi-cab norm L1? As in the distance a car would have to drive in a grid system.

  • @alexeyl22 · 3 years ago

    "there are infinitely many asses" 😂😂😂. You said it two times and I concur

  • @johnathancorgan3994 · 2 years ago

    Steve, I love how your abbreviation of "solution" is "sol" to nth power 😏

  • @zakaryaelkhiyati8563 · 3 years ago

    neat

  • @tcratius1748 · 3 years ago

    How do you prove an L0 result if you cannot compute it? Maths is weird, yet the L1 is handy for a project I'm doing, so I should really be asking, how did you read my mind...👻 P.S. is it useful for imbalanced classes?

  • @snippletrap · 3 years ago

    It's computable but not differentiable (since non-convex). Later in the video he conflates "computable" with "tractable", but that is a mistake.

  • @tcratius1748 · 3 years ago

    @@snippletrap Hmm, as a fairly poor mathematician studying data science at uni, all this computability talk takes a whole new and incredibly sparse vocabulary that seems to differ depending on the author's preference of symbols. Oh well, just keep on keeping on. :)

  • @snippletrap · 3 years ago

    @@tcratius1748 Among computer scientists these terms are not ambiguous. "Computable" means it can be computed. Not everything can be computed -- see the Halting Problem, for instance. "Tractable" generally means that computation time is not exponential in the size of the input. Hard problems like non-convex optimization are computable but not tractable: a computer will eventually solve them, but it may take centuries to do so.

  • @tcratius1748 · 3 years ago

    @@snippletrap Yeah, I guess it didn't help that I did biology as my bachelor's. Oh well, enjoy; if you really wish to chat more I can give you my LinkedIn, otherwise cheers for the chat.

  • @enigmabloom · 3 years ago

    Shoot, this guy's from UW? I'm doing the master's for EEs there right now lol

  • @MrAlextorex · 3 years ago

    He writes backwards. How impressive!

  • @davidmadsen2761 · 3 years ago

    Probably mirrored in editing, so this is not actually how Steve would look in person.

  • @DerekWoolverton · 3 years ago

    Now I have to go searching for the formula for the l_{2/3} norm. I can guess what it is, but it seems oddly out of place compared to all the integer orders, so I'm curious where it originally came up.

  • @goodgame7474 · 3 years ago

    This comment is literally literal!

  • @MrAlextorex · 3 years ago

    He tells a nice story, but his explanation requires a lot of background knowledge, so in the end I do not feel very satisfied. It would be useful to also add more detailed tutorial links for further study.
