Support Vector Machines (SVMs): A friendly introduction

Science & Technology

For a code implementation, check out this repo:
github.com/luisguiserrano/man...
Announcement: New Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
40% discount code: serranoyt
An introduction to support vector machines (SVMs) that requires very little math (no calculus or linear algebra), only a visual mind.
This is the third in a series of three videos.
- Linear Regression: • Linear Regression: A f...
- Logistic Regression: • Logistic Regression an...
0:00 Introduction
1:42 Classification goal: split data
3:14 Perceptron algorithm
6:00 Split data - separate lines
7:05 How to separate lines?
12:01 Expanding rate
18:19 Perceptron Error
19:26 SVM Classification Error
20:34 Margin Error
25:13 Challenge - Gradient Descent
27:25 Which line is better?
28:24 The C parameter
30:16 Series of 3 videos
30:30 Thank you!
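
The chapters above walk through a perceptron-style update plus an "expanding rate" shrink. As a companion to the video (this is my own rough sketch of that technique, not code from the repo; the function and parameter names are mine), the whole loop fits in a few lines:

```python
def svm_trick(points, labels, lr=0.01, expanding_rate=0.99, epochs=500):
    """Rough sketch of the video's SVM trick: nudge the line toward
    points on the wrong side of the margin lines, then shrink a, b, c
    so the margin between ax+by+c=1 and ax+by+c=-1 widens."""
    a, b, c = 0.0, 0.0, 0.0  # the line ax + by + c = 0
    for _ in range(epochs):
        for (x, y), label in zip(points, labels):
            pred = a * x + b * y + c
            if label == 1 and pred < 1:        # blue point below ax+by+c=1
                a += lr * x
                b += lr * y
                c += lr
            elif label == -1 and pred > -1:    # red point above ax+by+c=-1
                a -= lr * x
                b -= lr * y
                c -= lr
        # expanding step: shrinking the coefficients widens the margin
        a *= expanding_rate
        b *= expanding_rate
        c *= expanding_rate
    return a, b, c

# Two blue points (label 1) and two red points (label -1)
points = [(2, 2), (3, 3), (0, 0), (-1, -1)]
labels = [1, 1, -1, -1]
a, b, c = svm_trick(points, labels)
print(a, b, c)  # the learned line ax + by + c = 0 separates the colors
```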

Comments: 129

  • @mohammedhasan6522 · 5 years ago

    As always, very nicely and easily explained. Looking forward to seeing your explanation about PCA, TSNE and some topics of Reinforcement Learning.

  • @naps9249 · 5 years ago

    The best Machine learning / Deep learning I've learnt from.

  • @JimmyGarzon · 5 years ago

    Thank you, this is fantastic! Your visual explanations are great; they've really helped me understand the intuition behind these techniques.

  • @Vikram-wx4hg · 3 years ago

    Super explanation, Luis! It's great when someone can bring out the intuition and meaning behind the mathematics in such a clear way!

  • @blesucation4417 · 6 months ago

    Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!

  • @JohnTheStun · 4 years ago

    Visual, thorough, informal - perfect!

  • @sofiayz7472 · 2 years ago

    This is the best SVM explanation! I never truly understood it until I watched your video!

  • @user-ns6qk9wj2t · 4 years ago

    The best SVM explanation I've listened to. Thank you.

  • @hichamsabah31 · 3 years ago

    Best explanation of SVM on YouTube. Keep up the good work.

  • @mudcoff · 4 years ago

    Mr. Serrano, you're the only one who explains the logic of ML and not the technicalities. Thank you.

  • @meenakshichoudhary4554 · 5 years ago

    Sir, thank you for the video; extremely well explained in a short duration. Really appreciated.

  • @obheech · 5 years ago

    Very nice explanations. May your channel flourish!

  • @MANISHMEHTAIIT · 5 years ago

    Nice Sir, best teaching style. Love the way you teach...

  • @user-eh9yd9se2d · 3 years ago

    Very insightful lecture. Thank you very much Dr Serrano.

  • @EngineeringChampion · 4 years ago

    Thank you for simplifying the concepts! I enjoyed watching this video!

  • @08ae6013 · 5 years ago

    Thank you very much for this video. As usual, you are so good at explaining complex things in a simple way. For the first time I am able to understand the motivation behind SVC and how it differs from logistic regression. Can you please make a video on SVM kernels (Polynomial, Gaussian, Radial ...)?
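
Until such a video exists, here is a minimal kernel sketch in scikit-learn (my own illustration, not from the video): data that no straight line can split becomes separable with the RBF (Gaussian) kernel.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: no straight line can separate the two classes.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)   # straight-line boundary
rbf = SVC(kernel="rbf").fit(X, y)         # Gaussian (RBF) kernel

print("linear:", linear.score(X, y))  # struggles, near chance level
print("rbf:   ", rbf.score(X, y))     # separates the circles
```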

  • @shrisharanrajaram4766 · 4 years ago

    Hats off to you, sir. Very clear on the concept.

  • @cherisykonstanz2807 · 3 years ago

    I really like your accent, could listen all day. Living legend Luis

  • @bassimeledath2224 · 5 years ago

    Legend. Keep doing what you do!

  • @johncyjoanofarc · 3 years ago

    This video should go viral.... So that ppl benefit from it.... Great teaching

  • @yasssh7835 · 4 years ago

    Best explanation! You got some skills to teach hard things in an easy way.

  • @7anonimo1 · 3 years ago

    Thank you for your work Luis!

  • @giannismaris13 · 2 years ago

    BEST explanation of SVM so far!

  • @nguyenbaodung1603 · 2 years ago

    This is terrifying, omg. You approach it so perfectly, and all the math behind it guides me to the point that I have to say WOW! Such a good observation; this video is by far gold. I love your approach at 22:56 so much: you guide me to that point and say "that's the regularization term," and I was like, omg, that's what I was trying to understand all this time, and you just explained it in a few minutes. Really appreciate it.

  • @houyao2147 · 4 years ago

    I love this so much! Explained in a very friendly way!

  • @zullyholly · 3 years ago

    Very succinct way of explaining the eta and C hyperparameters. Normally I just take things for granted and do hyperparameter tuning.

  • @rajeshvarma2162 · 2 years ago

    Thanks for your easy and understandable explanation

  • @khatiwadaAnish · 11 months ago

    You made a complex topic very easily understandable 👍👍

  • @ismailcezeri1691 · 4 years ago

    The best explanation of SVM I have ever seen

  • @johnrogers1274 · 5 years ago

    Efficient, effective and fun. Thanks very much

  • @humzaiftikhar1130 · 2 years ago

    Thank you very much for the hard work. It was so informative and well described.

  • @ignaciosanchezgendriz1457 · 11 months ago

    Luis, your videos are simply marvelous! I think about how much knowledge and clarity they required. Quote by Dejan Stojanovic: “The most complicated skill is to be simple.”

  • @karanpatel1906 · 4 years ago

    Simply awesome... even "thank you" is not enough to describe how good this video is... it explains the toughest things in a kid's language.

  • @gitadanesh7496 · 4 years ago

    Explained very simply. Thanks a lot.

  • @yeeunsong3423 · 4 years ago

    Thanks for your easy and understandable explanation:)

  • @polarbear986 · 2 years ago

    best svm explanation. Thanks a lot!

  • @AA-yk8zi · 3 years ago

    Really good explanation! thank you sir.

  • @imagnihton2 · 2 years ago

    I am way too late here...but so happy to have found a gold mine of information! Amazing explanation!!

  • @sandeepgill4282 · 2 years ago

    Thanks a lot for such a nice explanation.

  • @KundanKumar-zu1xk · 2 years ago

    As always, an excellent and easy-to-understand video.

  • @RIYASHARMA-he9vz · 3 years ago

    The nicest explanation of SVM I have ever seen.

  • @ocarerepairlab8218 · 1 year ago

    Hey Luis, I have recently come across your videos and I am blown away by your simple approach to delivering the mathematics and logic, especially the mention of the applications. A quick one: DO YOU TAKE STUDENTS? I WOULD LIKE TO ENROLL. I am most interested in the analysis of biological data and I rarely find videos as good as this. I'm simply in love with your methods!

  • @OL8able · 4 years ago

    Thanks Luis, SVM makes much sense now :)

  • @witoldsosnowski6764 · 5 years ago

    A very good explanation compared to others available on the Internet.

  • @rakhekhanna · 4 years ago

    You are an Awesome Teacher. Love you :)

  • @AnilAnvesh · 2 years ago

    Thank You for this video ❤️

  • @yousufali_28 · 5 years ago

    As always, well explained.

  • @EliezerTseytkin · 4 years ago

    Pure genius. It really takes a genius to explain these things with such extreme simplicity.

  • @ronaktiwari7041 · 3 years ago

    You are the best Luis.

  • @kimsethseu6596 · 2 years ago

    thank you for the good explanation.

  • @pushkarparanjpe · 5 years ago

    Great work!

  • @sandipansarkar9211 · 3 years ago

    Great explanation

  • @krishnanarra5578 · 3 years ago

    Awesome. I liked your videos so much that I bought your book, and the book is great too.

  • @SerranoAcademy · 3 years ago

    Thank you Krishna, so glad to hear you liked it! ;)

  • @mohameddjemai4840 · 3 years ago

    Thank you very much for this video.

  • @sriti_hikari · 3 years ago

    Thank you for that video!

  • @terryliu3635 · 1 year ago

    Great video!!!

  • @koushikkou2134 · 3 years ago

    Mate, you're a great teacher.

  • @damelilad875 · 4 years ago

    Great lecture! You should make a video on how to perform all these algorithms with the scikit-learn package in Python.
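
A minimal scikit-learn version of what the video builds by hand (my own sketch; the dataset and the C values are arbitrary choices for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two point clouds, like the blue/red sets in the video.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

# C is the trade-off the video describes: large C punishes
# classification errors hard, small C favors a wide margin.
for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C}: training accuracy {clf.score(X, y):.2f}")
```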

  • @ikramullahmohmand · 4 years ago

    Very well explained. Thanks, mate :-)

  • @drewlehe3763 · 4 years ago

    This is a great explanation of the concepts, it helped me. But isn't this video about the Support Vector Classifier and not the SVM (which uses kernelization)? The SVC uses the maximal margin classifier, with a budget parameter for errors, and the SVM uses the SVC in an expanded feature space made by kernelization.

  • @geogeo14000 · 3 years ago

    amazing work thx

  • @hanfei3468 · 4 years ago

    Thanks Luis, great video and explanation! How do you do the animation in the video?

  • @ardhidattatreyavarma5337 · 3 months ago

    awesome explanation

  • @omrygilon1099 · 4 years ago

    Great videos!

  • @todianmishtaku6249 · 2 years ago

    Superb!

  • @robertpollock8617 · 7 months ago

    Excellent!!!!!

  • @mikeczyz · 4 years ago

    great video!

  • @frankhendriks2637 · 2 years ago

    Hi Luis, thanks very much for these videos. I watch them with great pleasure, but I have some questions about this one. Each question is preceded by the moment in the video (in mm:ss) where it arises.

    14:26: For determining whether a point is correctly classified, should you compare the red points to the red (dashed) line and the blue points to the blue (dashed) line? Or should we compare all points to the black line? I assume the first, although this is not mentioned explicitly.

    22:07: The margin is different when you start with a different value of d in ax+by+c=d. Would you always start with d=1 and -1, or are there situations where you start with other values of d (see also my question below)?

    27:33: Two questions here. 1) In the second example the margin is actually not increased but decreased. Your video, however, only talks about expansion, not the opposite. How does reduction of the margin happen? Or does this only work by starting the algorithm with a smaller expansion, i.e. a smaller value of d than 1 in ax+by+c=d? 2) It seems to me that the first solution will also be the result of minimizing the log-loss function, as this maximizes the probabilities that the points are classified correctly. So the further the points are from the line in the correct area, the better, and that seems to be the case for the first solution. What, then, is the difference between this log-loss approach and the SVM approach? Do they deliver different results? If so, when would you choose one or the other? Thanks, Frank

  • @keshavkumar7769 · 4 years ago

    What an explanation. Damn good. You're great, sir. Please make some videos on XGBoost and other algorithms too.

  • @souravkumar-yu9vi · 4 years ago

    Excellent

  • @vitor613 · 3 years ago

    HOLY SHIT, BEST EXPLANATION EVER

  • @scherwinn · 5 years ago

    Clever great!

  • @andresalmodovar3473 · 3 years ago

    Hi Luis, amazing job. Just one question: could there be a typo in the criteria for misclassification of points? I think the criteria should be: for blue, ap+bq+c>-1, and for red: ap+bq+c

  • @gammaturn · 5 years ago

    Thank you very much for this amazing video. I have come across your channel only recently and I do like your way of explaining these complicated topics. I have two (hopefully not too dumb) questions regarding SVMs: 1) Given the similarity of SVMs and logistic regression, would it be a good idea to start from an LR result instead of a random line? 2) Did I understand correctly that the distance between the two lines can only increase during the search for the best solution? Wouldn't it be conceivable that at some point the combined error function decreases by decreasing the distance between the lines?

  • @SerranoAcademy · 5 years ago

    Thank you, great questions! 1. That's a good idea; it's always good to start from a good position rather than a random one. Since the two algorithms are of similar speed (complexity), I'm not sure that starting from LR is necessarily better than just doing an SVM from the start, but it's definitely worth a try. 2. Actually, in the process of moving the line, one could change the coefficients in such a way that the lines get a little closer again (for example, if a and b both increase in magnitude, the lines get closer together).

  • @STWNoman · 5 years ago

    Amazing

  • @xruan6582 · 4 years ago

    Great tutorial. (16:23) "if point is blue, and ap + bq + c > 0": I think the equation should be BLUE (to match the blue dash on the graph) rather than red. Similarly, "if point is red, and ap + bq + c < 0" should be RED (to match the red dash) instead of blue. Pardon me if I am wrong.

  • @manjunatharadhya4361 · 3 years ago

    Very Nice

  • @scientific-reasoning · 3 years ago

    Hi Luis, I like your YouTube video animations, they are great! May I ask what software you use for the animations?

  • @alyyahfoufy6222 · 4 years ago

    Hello, when we multiply the equation by the expanding rate of 0.99, should the right side of the equals sign be 0.99, 0, and -0.99? Thanks.

  • @sharkk2979 · 4 years ago

    Again i luv u!!

  • @Pulorn1 · 5 months ago

    Thank you for the good explanation. However, I missed some introduction: what is its added value compared to logistic regression? Some recommendations on when to prefer this algorithm over others would also help...

  • @iidtxbc · 5 years ago

    What is the name of the algorithm you have introduced in the lecture?

  • @samirelzein1978 · 4 years ago

    The more you speak, the better it gets. Please keep giving practical examples of applications at the end of each video.

  • @fadydawra · 4 years ago

    thank you

  • @lucycai3356 · 3 years ago

    thanks!

  • @farzadfarzadian8827 · 5 years ago

    SVM is constrained optimization, so does it need Lagrange multipliers?

  • @dante_calisthenics · 3 years ago

    Can I ask whether the step of separating the lines is only for optimizing the model? As in, when the two lines have already separated the training data, you expand them to see how wide the margin is?

  • @raviankitaava · 4 years ago

    Would be grateful if you could do explanations of Gaussian processes and hyperparameter optimisation techniques.

  • @dilipgawade9686 · 5 years ago

    Hello Sir, do we have a video on feature selection?

  • @rafaelborbacs · 4 years ago

    How do you generalize these algorithms to many dimensions? My problem has about 50 attributes instead of 2, and I need to classify data as "red or blue".
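
The math is dimension-agnostic: with 50 attributes, the line ax+by+c=0 becomes a hyperplane w·x + b = 0, and libraries handle this transparently. A sketch with made-up data (my own illustration; the dataset parameters are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 500 samples, 50 attributes, two classes ("red or blue").
X, y = make_classification(n_samples=500, n_features=50,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="linear").fit(X_train, y_train)  # a 49-D hyperplane
print("test accuracy:", clf.score(X_test, y_test))
```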

  • @aigaurav5024 · 5 years ago

    Thanku sir

  • @karanpatel1906 · 4 years ago

    Perfect

  • @sakcee · 5 months ago

    excellent

  • @dante_calisthenics · 3 years ago

    And at step 5, I think that after adding/subtracting 0.01 you would also have to do gradient descent, right?

  • @macknightxu2199 · 3 years ago

    In the loop, when do you use the parallel lines ax+by+c=1 and ax+by+c=-1?

  • 4 years ago

    So if the data is separable with a large margin, the margin error is small... even though the model produces a worse classification than a small-margin model with a high margin error. Is that correct?

  • @john22594 · 4 years ago

    Nice tutorial, thank you so much. It would help us if you added code for this algorithm.

  • @ruskinchem4300 · 3 years ago

    Hi Luis, the explanation is great, no doubt, but shouldn't the equations you wrote for the margin error be ax+by=1 and ax+by=-1?

  • @anujshah645 · 3 years ago

    In the pseudo-algorithm of the SVM, in the last step we multiply a, b, c by 0.99, but then shouldn't the right-hand side also be multiplied by 0.99, making it 0.99 and not 1? Am I missing something?
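
As I understand the video, keeping the right-hand sides fixed at ±1 is deliberate: the distance between ax+by+c=1 and ax+by+c=-1 is 2/√(a²+b²), so shrinking a, b, c against the fixed ±1 is exactly what widens the margin. Multiplying both sides by 0.99 would describe the same two lines and change nothing. A quick numeric check (my own illustration):

```python
import math

def margin_width(a, b):
    # distance between the lines ax+by+c = 1 and ax+by+c = -1
    return 2 / math.hypot(a, b)

a, b, c = 3.0, 4.0, -2.0
print(margin_width(a, b))   # 2 / 5 = 0.4

# The expanding step: coefficients shrink, thresholds stay at +1/-1.
a, b, c = 0.99 * a, 0.99 * b, 0.99 * c
print(margin_width(a, b))   # 0.4 / 0.99, i.e. the margin grew
```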

  • @keshav2136 · 4 years ago

    Best!
