Machine Learning Lecture 11 "Logistic Regression" -Cornell CS4780 SP17

Cornell class CS4780. (Online version: tinyurl.com/eCornellML )
Lecture Notes: www.cs.cornell.edu/courses/cs4...
If you want to take the course for credit and obtain an official certificate, there is now a revamped version (with much higher quality videos) offered through eCornell ( tinyurl.com/eCornellML ). Note, however, that eCornell does charge tuition for this version.

Comments: 47

  • @vatsan16 · 4 years ago

    Am I the only one who raises their hand from home whenever he says "raise your hands"? :P

  • @varunjindal1520 · 3 years ago

    Me too

  • @Enem_Verse · 3 years ago

    So many professors have knowledge, but only a few have enthusiasm while teaching.

  • @sandeepreddy6295 · 3 years ago

    Awesome lectures!! Glad to have bumped into one of them; after that, spending time on the entire series felt worthwhile.

  • @smallstone626 · 4 years ago

    A fantastic lecture. Thank you professor.

  • @naifalkhunaizi4372 · 3 years ago

    Amazing professor Kilian!!

  • @rolandheinze7182 · 5 years ago

    Thanks for posting all these lectures, Dr. Weinberger. Should make Siraj Raval aware of their availability!

  • @insoucyant · 3 years ago

    Amazing Lecture!!!! Thanks a lot, Prof.

  • @flaskapp9885 · 3 years ago

    The best teacher ever

  • @nrupatunga · 4 years ago

    Hi Kilian, the flow of your lectures is awesome. How you build upon the concepts is amazing. Do you have the Matlab code shared publicly? Really cool demos.

  • @kodjigarpp · 3 years ago

    You literally saved my comprehension of Statistical Learning, thanks!

  • @ugurkap · 5 years ago

    It would also be nice to see a dataset correctly classified by Naive Bayes, and whether Logistic Regression optimizes the hyperplane even further.

  • @vatsan16 · 4 years ago

    I love the way he gets so excited when he says TADA! xD

  • @YulinZhang777 · 4 years ago

    This is great stuff. It's just funny that those are motorized chalkboards instead of dry-erase boards.

  • @taketaxisky · 4 years ago

    Interesting to learn the link between Naive Bayes and logistic regression. Thank you! For the spam email example with very high-dimensional features, logistic regression won't work, right?

  • @30saransh · a month ago

    Amazing!!!!!!!!!!!!!!!!!!!!!!!!

  • @doyourealise · 3 years ago

    9:48 this is amazing :)

  • @dhrumilshah6957 · 4 years ago

    These video lectures are great! Completed 12 in 2 days! I find them more intuitive than Andrew Ng's. Also, Prof, have you ever recorded lectures on unsupervised learning? Would love to watch those, since they are missing from this series.

  • @kilianweinberger698 · 4 years ago

    Sorry, never recorded them. But will try to, the next time I teach that course.

  • @jiviteshsharma1021 · 4 years ago

    @@kilianweinberger698 YES PLEASEEE

  • @mhsnk905 · 2 years ago

    @KilianWeinberger Unfortunately, online viewers don't have access to the course homework, but I think your claim at 20:31 is only valid if, across each dimension, the data from classes +1 and -1 happen to come from Gaussian distributions with the same variance. Otherwise, you would need quadratic terms too.

  • @sinhavaibhav · 4 years ago

    Since we are using the same form of distribution for P(Y|X) in NB and Logistic Regression, are we still making the same underlying assumption of conditional independence of Xi|Y in the case of Logistic Regression? Or does directly estimating the parameters of P(Y|X) mean that we are relaxing that assumption?

  • @JoaoVitorBRgomes · 3 years ago

    Is the distance from the Naive Bayes line to the points the best possible? Or is the line placed equally distant from the points of both classes (+1, -1)? @kilian weinberger

  • @sudhanshuvashisht8960 · 4 years ago

    I couldn't prove that Naive Bayes for continuous variables is a linear classifier, except for the case where I assumed the variance doesn't vary across the labels of y (spam, ham as an example) and only varies across the input variables x_alpha. Was anyone able to prove it?

  • @kilianweinberger698 · 4 years ago

    I believe you do have to make that assumption. Sorry, wasn't clear in the lecture.
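
To spell out that assumption (my own derivation, not from the lecture notes): if each feature x_alpha is Gaussian with class-specific means but a shared per-feature variance sigma_alpha^2, the quadratic terms cancel in the log-odds and the classifier is linear:

```latex
\log \frac{P(y=+1 \mid \mathbf{x})}{P(y=-1 \mid \mathbf{x})}
= \log \frac{P(y=+1)}{P(y=-1)}
+ \sum_{\alpha} \left[ \frac{\mu_{+,\alpha} - \mu_{-,\alpha}}{\sigma_\alpha^2}\, x_\alpha
- \frac{\mu_{+,\alpha}^2 - \mu_{-,\alpha}^2}{2\sigma_\alpha^2} \right]
= \mathbf{w}^\top \mathbf{x} + b .
```

If the variances differ per class, the x_alpha^2 terms do not cancel and the decision boundary becomes quadratic.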

  • @aajanquail4196 · 4 years ago

    I think the product at 4:43 is missing the indicator variable?

  • @jachawkvr · 4 years ago

    It was nice learning about the connection between Naive Bayes and logistic regression. However, at the moment, I am only able to see the connection between GaussianNB and logistic regression. Is there some way to derive logistic regression if the features are not real-valued?

  • @kilianweinberger698 · 4 years ago

    Yes, typically you can derive the relationship if you use a member of the exponential family to model the class conditional feature distributions in NB. Hope this helps.

  • @JoaoVitorBRgomes · 3 years ago

    So what's best for fitting P(y|x) in logistic regression: MAP or MLE?

  • @satyagv3670 · 4 years ago

    Hi Kilian, since Naive Bayes comes up with a hyperplane that separates two distributions rather than two datasets, does the same statement hold even if the input dataset is highly imbalanced? I mean, can we still proceed without balancing?

  • @kilianweinberger698 · 4 years ago

    Yes, totally. The imbalance would then be reflected in the prior distribution over the class labels, P(Y), which is incorporated in Bayes' formula.
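
A quick empirical sketch of this answer (using scikit-learn, which is my choice here, not the lecture's): with an imbalanced training set, the estimated class prior P(Y) simply reflects the class frequencies.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Highly imbalanced training set: 90% ham (-1), 10% spam (+1).
X_neg = rng.normal(-1.0, 1.0, size=(900, 2))
X_pos = rng.normal(+1.0, 1.0, size=(100, 2))
X = np.vstack([X_neg, X_pos])
y = np.array([-1] * 900 + [+1] * 100)

nb = GaussianNB().fit(X, y)

# The imbalance is captured as the estimated class prior P(Y),
# which enters the posterior through Bayes' formula.
print(nb.class_prior_)  # → [0.9 0.1]
```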

  • @xiaoweidu4667 · 3 years ago

    Weinberger is one of the best machine learning lecturers.

  • @JoaoVitorBRgomes · 3 years ago

    @kilian Weinberger: Does logistic regression also have to respect the Gauss-Markov theorem assumptions?

  • @kilianweinberger698 · 3 years ago

    Only if it is unbiased …

  • @JoaoVitorBRgomes · 3 years ago

    @@kilianweinberger698 Is that the same as saying the error term has normally distributed residuals? If so, does it have to respect the Gauss-Markov theorem? But a binary target would not have residuals, right? I can't seem to wrap my mind around this.

  • @JoaoVitorBRgomes · 3 years ago

    At circa 42:30 what is lambda? A regularization constant?

  • @kilianweinberger698 · 3 years ago

    Yes, exactly.
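
For context (notation mine, not verbatim from the board): with a Gaussian prior on the weights, the MAP estimate for logistic regression minimizes the regularized negative log-likelihood, where lambda sets the strength of the regularizer:

```latex
\min_{\mathbf{w}} \;\; \sum_{i=1}^{n} \log\!\left(1 + e^{-y_i\, \mathbf{w}^\top \mathbf{x}_i}\right) \;+\; \lambda\, \|\mathbf{w}\|_2^2
```

Larger lambda pulls the weights more strongly toward zero.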

  • @sekfook97 · 3 years ago

    Can we say that Gaussian Naive Bayes is logistic regression in the case of continuous features?

  • @kilianweinberger698 · 3 years ago

    You need to have two classes, and you need to have the same variance for both Gaussians. In the limit of infinite data (and if your modeling assumption is right) it will indeed become the same thing, but note that the two algorithms optimize the parameters differently. LR fits P(y|w,x) and NB fits P(x|y,theta). With limited data, these two approaches will “miss” the true distribution in different ways.
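
A small empirical sketch of this answer (using scikit-learn, my choice rather than the lecture's): when the data really do come from two equal-variance Gaussians and there is plenty of it, the two models end up with nearly the same decision rule even though they fit different distributions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two classes drawn from equal-variance Gaussians, so GNB's
# modeling assumption is exactly right.
n = 2000
X_pos = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))
X_neg = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([+1] * n + [-1] * n)

# NB fits P(x|y, theta) per class; LR fits P(y|w, x) directly.
nb = GaussianNB().fit(X, y)
lr = LogisticRegression().fit(X, y)

# With plenty of data and the right model, the two accuracies nearly agree.
print(nb.score(X, y), lr.score(X, y))
```

With little data, or when the Gaussian assumption is wrong, the two fits diverge, as the answer above explains.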

  • @sekfook97 · 3 years ago

    @@kilianweinberger698 Thanks for the very detailed answer. I completely missed the optimisation part before. It all starts to make sense to me now.

  • @maddai1764 · 5 years ago

    Dear Professor, can you explain a little of what you said at 0:37, about why we can't set the derivative equal to zero to find the extremum here? I mean, what does STUCK mean here?

  • @lordjagus · 5 years ago

    What it means is that you cannot find an analytical expression for the point where the derivative equals zero. It will be zero somewhere, but there is no closed-form formula into which you can just plug your data to compute that point; you have to approximate it using a numerical method.
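
A minimal sketch of what that numerical approach looks like for logistic regression (plain gradient descent; the y in {-1, +1} convention follows the lecture, but the code itself is mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Minimize sum_i log(1 + exp(-y_i w.x_i)) by gradient descent.

    There is no closed-form w that zeroes the gradient, so we
    approximate the minimizer iteratively. Labels are in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # d/dw log(1 + exp(-y w.x)) = -y * sigmoid(-y w.x) * x
        grad = -(y * sigmoid(-y * (X @ w))) @ X / len(y)
        w -= lr * grad
    return w

# Toy usage: two linearly separable point clouds.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = fit_logistic(X, y)
print(np.sign(X @ w))  # all four points get their correct label
```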

  • @maddai1764 · 5 years ago

    @lordjagus Thanks. I got that, but my question is why it's not possible.

  • @JoaoVitorBRgomes · 3 years ago

    But is logistic regression limited to linearly separable datasets?

  • @kilianweinberger698 · 3 years ago

    wait for the kernel trick :-)

  • @shrishtrivedi2652 · 3 years ago

    31:00 Logistic

  • @abunapha · 5 years ago

    Starts at 0:35
