13.3.1 L1-regularized Logistic Regression as Embedded Feature Selection (L13: Feature Selection)


Without going into the nitty-gritty details behind logistic regression, this lecture explains how and why we can consider an L1 penalty (a modification of the loss function) as an embedded feature selection method.
Slides: sebastianraschka.com/pdf/lect...
Code: github.com/rasbt/stat451-mach...
Links to the logistic regression videos I referenced:
sebastianraschka.com/blog/202...
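
To make the idea concrete, here is a minimal sketch assuming scikit-learn (the dataset, solver, and C value are illustrative choices, not taken from the lecture): with penalty="l1", the optimizer drives some weights exactly to zero, and the features with nonzero weights are the ones the model effectively selects.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Standardize features so the L1 penalty treats them on a comparable scale.
X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

# The liblinear solver supports the L1 penalty; a smaller C means a
# stronger penalty, which zeroes out more coefficients.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

# Features whose coefficient is nonzero for at least one class are "selected".
selected = np.flatnonzero(np.any(clf.coef_ != 0, axis=0))
print("Selected feature indices:", selected)
```

Increasing C (weakening the penalty) keeps more features; decreasing it prunes more aggressively.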
-------
This video is part of my Introduction to Machine Learning course.
Next video: • 13.3.2 Decision Trees ...
The complete playlist: • Intro to Machine Learn...
A handy overview page with links to the materials: sebastianraschka.com/blog/202...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka

Comments: 5

  • @amrelsayeh4446 · 6 days ago

    @sebastian At 13:20, why does the solution between the global minimum and the penalty minimum lie where one of the weights is zero? In other words, why should it lie at a corner of the penalty contour rather than just somewhere on the line between the global minimum and the penalty minimum?

  • @AyushSharma-jm6ki · 1 year ago

    @sebastian Amazing video, thanks for sharing. I am getting a deeper understanding of these topics from your videos.

  • @arunthiru6729 · 1 year ago

    @sebastian I think using logistic regression directly for feature selection based on the respective weights/coefficients assumes that all dimensions/features are independent. I understand this is not the correct way to do it. Please advise.

  • @SebastianRaschka · 1 year ago

    Yes, this assumption is correct. ML is full of trade-offs 😅. If you cannot make this assumption, I recommend the sequential feature selection approach instead (see the sketch after the comments).

  • @wayne7936 · 2 months ago

    Thanks for pointing this out!
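
Regarding the sequential feature selection approach recommended in the reply above, here is a minimal sketch assuming scikit-learn's SequentialFeatureSelector (the course materials may use the mlxtend implementation instead; the dataset and n_features_to_select=5 are illustrative): it evaluates candidate feature subsets jointly via cross-validation, so it does not rely on interpreting individual coefficients of correlated features.

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)

# Forward selection: greedily add the feature that most improves
# cross-validated accuracy until 5 features have been chosen.
estimator = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
sfs = SequentialFeatureSelector(estimator, n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)
print("Selected feature mask:", sfs.get_support())
```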
