Machine Learning Lecture 24 "Kernel Support Vector Machine" - Cornell CS4780 SP17

Lecture Notes:
www.cs.cornell.edu/courses/cs4...

Comments: 31

  • @sophiahan3889
    4 years ago

    Actual lecture starts at 15:10

  • @cge007
    1 year ago

But you will miss out on real-life insights/ideas, which you can use 😅

  • @kirtanpatel797
    4 years ago

    These are actually episodes! Full of fun and learning :)

  • @stphb6201
    4 years ago

Thank you very much for providing these lectures here! These are truly the best explanations I have heard so far.

  • @kilianweinberger698
    4 years ago

Thank you very much :-)

  • @itachi4alltime
    2 years ago

If the students in this class write spam emails, we are all doomed lol

  • @jordankuzmanovik5297
    3 years ago

Is there any way we can get the Jupyter notebooks? Thanks

  • @roniswar
    3 years ago

At around 24:40: I think x_i is already a column vector, hence (x_i)^T is a row vector. Otherwise it is the outer product, which gives a matrix, instead of the inner product, which produces a scalar.
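
    To make the shapes concrete, here is a minimal NumPy sketch (illustrative only, not from the lecture):

    ```python
    import numpy as np

    d = 3
    x_i = np.random.randn(d, 1)  # column vector, shape (d, 1)
    x_j = np.random.randn(d, 1)  # column vector, shape (d, 1)

    inner = x_i.T @ x_j  # (1, d) @ (d, 1) -> (1, 1): the inner product, a scalar
    outer = x_i @ x_j.T  # (d, 1) @ (1, d) -> (d, d): the outer product, a matrix

    print(inner.shape)  # (1, 1)
    print(outer.shape)  # (3, 3)
    ```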

  • @vatsan16
    4 years ago

    "They stopped attending their kids and loved ones in pursuit of this beauty" :D :D :D

  • @huyle3597
    1 year ago

Dear professor, do you have a reference for the primal-dual formulation of kernel SVM that contains a detailed description of the algorithm (like pseudocode, enough to actually program it)?
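
    Not an official reference, but as a starting point, here is a minimal sketch of solving the soft-margin dual with a generic QP solver (cvxopt is my assumption, and fit_kernel_svm_dual is an illustrative name, not course code):

    ```python
    import numpy as np
    from cvxopt import matrix, solvers

    def fit_kernel_svm_dual(K, y, C=1.0):
        """Soft-margin kernel SVM dual as a QP:
             min  (1/2) a^T (yy^T * K) a - 1^T a
             s.t. 0 <= a_i <= C  and  y^T a = 0
           K: (n, n) kernel matrix, y: labels in {-1, +1}."""
        n = len(y)
        P = matrix((np.outer(y, y) * K).astype(float))       # quadratic term
        q = matrix(-np.ones(n))                              # minimize -sum(a)
        G = matrix(np.vstack([-np.eye(n), np.eye(n)]))       # -a <= 0 and a <= C
        h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
        A = matrix(y.reshape(1, -1).astype(float))           # equality y^T a = 0
        b = matrix(0.0)
        alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])

        # Recover the bias from a "free" support vector (0 < alpha_k < C).
        k = int(np.argmax((alpha > 1e-6) & (alpha < C - 1e-6)))
        bias = y[k] - np.sum(alpha * y * K[:, k])
        return alpha, bias  # predict: sign(sum_i alpha_i y_i k(x_i, x) + bias)
    ```

    For worked pseudocode, Platt's SMO paper ("Sequential Minimal Optimization", 1998) and Schölkopf & Smola's "Learning with Kernels" are standard references.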

  • @chimu3056
    4 years ago

    Was Arthur not on the leaderboard? :(

  • @vatsan16
    4 years ago

    Arthur let others win. Obviously.

  • @yannickpezeu3419
    3 years ago

    Thanks

  • @kunindsahu974
    3 years ago

Could you please give some examples of kernels on graphs/molecules that have been known to work well? It'd be really helpful, thank you!

  • @sandeshhegde9143
    5 years ago

    Starts from: kzread.info/dash/bejne/hKt6k8esfKatZLg.html

  • @hamzalebbar9105
    4 years ago

If we reason the same way as in Lecture 22 (but using the soft-margin SVM's objective function instead of the squared loss), then we can prove that w can be written as a linear combination of the x_i, so the data is accessed only through the inner products x_i^T x_j (i.e., K_ij). So why don't we use the kernel in the primal problem, since access to the data in this case is only through K_ij? Am I mistaken about something?

  • @kilianweinberger698
    4 years ago

Yes, you can also derive a version of kernel SVM that way. Historically, this is not how it came about, and it is also not what people typically refer to as "kernel SVM". One difference is that if you actually solve the dual you will obtain a sparse solution (with most alphas set to zero).
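
    To see this sparsity empirically, here is a small scikit-learn sketch on toy data (my example, not the course's code): the fitted dual solution stores only the support vectors, i.e. the points with nonzero alpha.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Toy binary data: two Gaussian blobs, 100 points each.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
    y = np.array([-1] * 100 + [+1] * 100)

    clf = SVC(kernel="rbf", C=1.0).fit(X, y)  # SVC solves the dual internally
    print(clf.n_support_)        # support vectors per class: a small fraction of 100
    print(clf.dual_coef_.shape)  # (1, n_sv): only the nonzero y_i * alpha_i are kept
    ```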

  • @hamzalebbar9105
    4 years ago

@@kilianweinberger698 OK, I understand now, thank you. Honestly: free lecture recordings + written notes, and now even responses to general-audience questions. Hats off, Mr. Weinberger!!

  • @rachel2046
    2 years ago

The Chinese characters above "Depth" literally mean "Grandpa Jiang"; they might refer to the former Chinese President Jiang Zemin. The next team's name is also in Chinese; it means "Your thigh(s) has/have admitted defeat". No idea what that was referring to.

  • @Theophila-FlyMoutain
    4 months ago

For non-parametric regression, when we test on a data point, we always need the whole training dataset, right? Does that mean we need a lot of memory to save the model, i.e., the training dataset?

  • @kilianweinberger698
    4 months ago

    Exactly, that is one downside of non-parametric models. (Some keep a subset of the data, or a digest of the data, but in general the model size grows with the training set size.)
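
    As a concrete illustration, here is a tiny Nadaraya-Watson style kernel regressor (a sketch, not the course code): fit() learns no parameters, it just stores the data, so the model's memory footprint grows linearly with the training set.

    ```python
    import numpy as np

    class KernelRegressor:
        """Nadaraya-Watson kernel regression: the 'model' is the training data."""

        def __init__(self, bandwidth=1.0):
            self.bandwidth = bandwidth

        def fit(self, X, y):
            self.X_train, self.y_train = X, y  # must keep the whole training set
            return self

        def predict(self, X):
            # Squared distances between every test point and every training point.
            d2 = ((X[:, None, :] - self.X_train[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * self.bandwidth ** 2))  # RBF weights, shape (m, n)
            return (w @ self.y_train) / w.sum(axis=1)    # weighted average of labels

    X = np.linspace(0, 10, 200)[:, None]
    y = np.sin(X).ravel()
    model = KernelRegressor(bandwidth=0.5).fit(X, y)
    print(model.predict(np.array([[5.0]])))  # close to sin(5)
    ```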

  • @Theophila-FlyMoutain
    4 months ago

    @@kilianweinberger698 Thank you!

  • @opencvitk
    2 years ago

There are many prophets of machine learning, but only one God of ML: Kilian. :-)

  • @upenderphogat7501
    3 years ago

I actually wanted to ask one question: the K_ij matrix is of d*d shape, so its inverse would also be d*d, yet the shape of alpha is n*1, so in alpha = [K^-1][y] I am unable to understand how the multiplication is possible. Can someone help me out with this? Also correct me if I have understood it wrongly.

  • @jarv1s104
    3 years ago

I'm not exactly sure, but K_ij is probably n*n (because each vector x_i is d-dimensional, and we have n of them in our dataset).

  • @upenderphogat7501
    3 years ago

@@jarv1s104 Yes, it makes sense: K = X^T X, which is an n*n matrix. I got confused and thought X^T X was a d*d matrix. Thanks :)
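
    A quick shape check in NumPy (assuming the convention that the n training points are stacked as d-dimensional columns of X; the small ridge term is my addition, since the linear kernel matrix is singular when d < n):

    ```python
    import numpy as np

    n, d = 50, 3
    X = np.random.randn(d, n)  # each column is one d-dimensional point x_i
    y = np.random.randn(n)

    K = X.T @ X                # linear kernel matrix, K[i, j] = x_i^T x_j
    print(K.shape)             # (50, 50): n*n, not d*d

    # alpha = K^{-1} y is only dimensionally consistent for the n*n matrix;
    # the ridge term makes K invertible (as in kernel ridge regression).
    alpha = np.linalg.solve(K + 1e-6 * np.eye(n), y)
    print(alpha.shape)         # (50,)
    ```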

  • @palomavictoriafernandezove8485
    1 year ago

    Professor, I am taking CS 4780 right now. Are you planning on teaching this course any time soon? Thank you for these videos!

  • @kilianweinberger698
    1 year ago

    Yes, in Spring 23. Always looking for TAs.

  • @palomavictoriafernandezove8485
    1 year ago

    @@kilianweinberger698 I see. Are you only looking for undergrads? I'm actually taking 5780.

  • @neelmishra2320
    1 month ago

    RIP harambe