The formula is wrong... he conveniently moved on. Read below; many have commented on this.
@tseckwr3783 · 6 days ago
thank you.
@B_knows_A_R_D-xh5lo · 7 days ago
classics 0:07 0:08 0:08
@felipeazank3134 · 7 days ago
Videos like this remind me of what the internet is all about: sharing knowledge. Thanks for the content. I wish the internet had stopped here.
@Ken08Odida · 7 days ago
Thank you. Perfectly simplified in 2 minutes. Now I can build on this basic understanding
@harsh_hybrid_thenx · 9 days ago
At 17:34 you said that if g is positive, then the log of a negative quantity would be infinity. Is that correct? The log of a negative quantity is not defined, right?
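For context on this question: a standard convention in convex optimization (assuming the video follows the usual log-barrier treatment) is to never actually take the log of a negative quantity; the term is instead defined to be +∞ wherever the constraint g(x) ≤ 0 is violated. A sketch of that convention:

```latex
% Extended-value convention: the log barrier is only evaluated where g(x) < 0;
% outside that region its value is taken to be +infinity by definition.
\[
\phi(x) \;=\;
\begin{cases}
-\log\bigl(-g(x)\bigr), & g(x) < 0,\\[2pt]
+\infty, & g(x) \ge 0.
\end{cases}
\]
```

So "would be infinity" presumably refers to this convention, not to evaluating the log of a negative number.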
@asmaaali8263 · 9 days ago
That was amazing, thanks 😊
@gustavgille9323 · 10 days ago
The least squares error example is beautiful!!!
@mirandac1364 · 11 days ago
This is such a great video on so many levels. May god bless the people who had a hand in making it 🙏🏻🙏🏻
@evavashisth9103 · 13 days ago
Amazing explanation. Thank you so much ☺️
@theProf-xc5pe · 13 days ago
hmm close but no cigar
@user-yf5jz3zq5n · 15 days ago
I've been following your channel for a long time) glad that everything's going great for you and that you're making progress)
@magalhaees · 17 days ago
We center the data to have a mean of 0, which allows us to match the form of the covariance matrix provided in the video
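To make the centering step concrete, here is a minimal numpy sketch (the data and variable names are illustrative, not the video's):

```python
import numpy as np

# Toy data matrix: rows are samples, columns are features (illustrative only).
X = np.random.default_rng(0).normal(size=(100, 3))

# Center each feature so it has mean 0.
X_centered = X - X.mean(axis=0)

# With centered data, the covariance matrix is simply X^T X / (n - 1),
# matching the form used when deriving PCA.
n = X_centered.shape[0]
cov = X_centered.T @ X_centered / (n - 1)
assert np.allclose(cov, np.cov(X, rowvar=False))  # sanity check

# Principal directions are the eigenvectors of the covariance matrix,
# sorted by decreasing eigenvalue (explained variance).
eigvals, eigvecs = np.linalg.eigh(cov)
components = eigvecs[:, np.argsort(eigvals)[::-1]]
```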
@mehdirexon · 17 days ago
Nice video
@RAyLV17 · 18 days ago
Man, I just noticed that you haven't uploaded any new videos in 2 years! Hope you're doing well and come back with these amazing videos <3
@user-mm8wj5hb8y · 21 days ago
Well, it seems like a pretty cool little site. I only played a bit, but I quite liked it)
@NarimanRava · 22 days ago
I watched your video, it is so informative and humble. Thank you for sharing your video, I follow you.
@weisanpang7173 · 23 days ago
Is the answer to follow-up question #1 = 4?
@andres_camarillo · 25 days ago
Amazing video. Thanks!
@tejkiranv4056 · 27 days ago
@VisuallyExplained Is the answer to the 2nd follow-up question (the median value) 2 throws? For example, take 100 throws: out of these, 16.6 would yield a 6 on the first throw, and around 41.6 would yield a 6 on their second attempt. And since we want the 50th throw (or rather the average of the 50th and 51st), it would be 2.
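In case it helps with checking the arithmetic, here is a quick Monte Carlo sketch, assuming the follow-up question asks for the median number of rolls of a fair die until the first 6 (the function and variable names are made up for illustration):

```python
import random
import statistics

def rolls_until_six(rng: random.Random) -> int:
    """Roll a fair six-sided die until a 6 appears; return how many rolls it took."""
    count = 0
    while True:
        count += 1
        if rng.randint(1, 6) == 6:
            return count

rng = random.Random(42)
samples = [rolls_until_six(rng) for _ in range(100_000)]
print("median rolls:", statistics.median(samples))
print("mean rolls:  ", statistics.mean(samples))
```

The number of rolls until the first 6 is geometric with p = 1/6, so its mean is 6; the median can be read off the simulation, or from the CDF 1 - (5/6)^k, as a check on the 100-throw argument above.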
@iskhezia · 28 days ago
I love it! Thanks for that. Can you share the code used for PCA in this video, please? I'm trying to reproduce it, but my results don't match yours, and I want to see where I'm going wrong (I didn't find it in the description or on GitHub). Thanks for the video.
@Leo-vv3jd · 29 days ago
I really liked the video and the visuals, but I think it would be better without the "generic music" in the background.
@VisuallyExplained · 29 days ago
Thank you for taking the time to post your feedback, this is very useful for the growth of this channel!
@Arthur-uw1vm · 29 days ago
At 4:57, "the happiest countries seem to be the most balanced ones" seems wrong; shouldn't it be "the most powerful ones"?
@anikdas567 · 1 month ago
Very nice animations, and well explained. But just to be a bit technical, isn't what you described called "mini-batch gradient descent"? Because for stochastic gradient descent, don't we use just one training example per iteration?? 😅😅
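For what it's worth, here is a small sketch of the distinction the comment is drawing (toy data and names, not the video's code): with batch_size = 1 the update below is what textbooks usually call "pure" stochastic gradient descent, while batch_size > 1 makes it mini-batch gradient descent; in practice "SGD" is often used loosely for both.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                    # toy features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)  # toy targets
w = np.zeros(3)
lr = 0.01
batch_size = 1   # 1 -> "pure" SGD; e.g. 32 -> mini-batch gradient descent

def grad(w, X_batch, y_batch):
    """Gradient of the mean squared error on one (mini-)batch."""
    return 2.0 * X_batch.T @ (X_batch @ w - y_batch) / len(y_batch)

for epoch in range(20):
    order = rng.permutation(len(y))
    for start in range(0, len(y), batch_size):
        idx = order[start:start + batch_size]
        w -= lr * grad(w, X[idx], y[idx])
```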
@angelo6082 · 1 month ago
You saved me for my Data mining exam tomorrow 🙏
@chrischoir3594 · 1 month ago
rubbish video
@hantiop · 1 month ago
Quick question: How do we choose the gamma parameter in the RBF kernel at 3:00? By, say, cross validation?
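One common way to pick gamma (not necessarily what the video does) is indeed cross-validation, e.g. a grid search with scikit-learn; the dataset and parameter grid below are just illustrative guesses:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # toy dataset

# 5-fold cross-validation over a log-spaced grid of gamma (and C) values.
param_grid = {"gamma": [0.01, 0.1, 1, 10, 100], "C": [0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```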
@manojcygnus9305 · 1 month ago
Basically NN is f*@k around until you find the best possible value
@DG123z · 1 month ago
It's like being less restrictive keeps you from optimizing the wrong thing and getting stuck in the wrong valley (or hill, for evolution). Feels a lot like how I kept trying to optimize being a nice guy because there were some positive responses, and without some chaos I never would have seen the other valley of being a bad boy, which has much less cost and better results.
@naveedanwer8262 · 1 month ago
Just learn how to speak slowly and you will have more views.
@snowcamo · 1 month ago
Honestly didn't really help with my questions, but I didn't expect a 3 minute video to answer them. This was very well done, the visualization was great, and everything it touched on (while brief) was concise and accurate. Subbed. <3
@1matzeplayer1 · 1 month ago
Great video!
@adnon2604 · 1 month ago
Amazing video! I could save a lot of time! Thank you very much.
@Christoo228 · 1 month ago
sagapo<3
@ashimov1970 · 1 month ago
Brilliantly Genius!
@pnachtwey · 1 month ago
How can v_k be used before it is calculated in the next line? How can one know the 'condition' if this is actual data and not a mathematical formula?
@mohammadzeinali5414 · 1 month ago
Perfect thank you
@rand4492 · 1 month ago
Perfect explanation thank you 🙏🏼
@_Lavanya-ju8yi · 1 month ago
great explanation!
@larissacury7714 · 1 month ago
That's great!
@duydangdroid · 1 month ago
3:39 It's possible to get less than 1/2 Max-Cut. If all nodes are the same color, that's 0 cuts. We would have to shuffle the list of nodes and split them equally for assignment. Independent assignments will get you something like coin-flip results without a 1/2 lower bound.
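For anyone who wants to poke at this, here is a small sketch of the random-assignment argument on a made-up graph: a single random coloring can indeed cut fewer than half the edges (even zero), but each edge is cut with probability 1/2, so the expected cut is half of all edges, and hence at least half of the maximum cut.

```python
import random

# Toy graph: edges over nodes 0..5 (illustrative only).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
nodes = {v for edge in edges for v in edge}

def random_cut(rng: random.Random) -> int:
    """Color each node independently at random and count the edges that are cut."""
    color = {v: rng.random() < 0.5 for v in nodes}
    return sum(color[u] != color[v] for u, v in edges)

rng = random.Random(0)
trials = [random_cut(rng) for _ in range(10_000)]
print("average cut:", sum(trials) / len(trials), "of", len(edges), "edges")
print("worst cut:  ", min(trials))  # can be 0, as the comment notes
```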
@yashpermalla3494 · 1 month ago
Isn’t the one who “goes first” the one on the inside?
@negarmahmoudi-wt5bg · 1 month ago
Thank you for this clear explanation.
@AndresGarcia-pv5fe · 1 month ago
Good, but why the loud-ass shopping music?
@jameskirkham5019 · 1 month ago
Amazing video thank you
@-T.K.- · 1 month ago
Awesome video! This is very very helpful (as I'm going to take the convex optimization class exam tomorrow...) However, I am a bit confused at around 6:30. You mentioned that the minimizer x goes first and the maximizer u goes second in the expression at 6:45. I think in mathematics, the expression is evaluated inside-first? So in this case the inner part, maximizer u, would be the first player, and the minimizer x would be the second. I'm not sure if I understand this correctly...
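For what it's worth, one way to reconcile this (not necessarily the video's exact framing): evaluation order and "who commits first" run in opposite directions. In min over x of max over u of L(x, u), the inner max is computed for each fixed x, which is exactly the situation where x has already been committed and u best-responds to it. Swapping the order gives the other game, and the two are related by the max-min inequality:

```latex
% The outer player commits first; the inner operator is the opponent's best response.
\[
\min_{x}\,\underbrace{\max_{u}\, L(x,u)}_{u \text{ reacts to a fixed } x}
\;\ge\;
\max_{u}\,\underbrace{\min_{x}\, L(x,u)}_{x \text{ reacts to a fixed } u}
\qquad \text{(max-min inequality, i.e.\ weak duality).}
\]
```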
@ZinzinsIA · 1 month ago
Awesome content and video editing, thank you so much. Do you have any advice for producing this kind of graphics and animation?
@johns4929 · 1 month ago
Wow, what an amazing video. I understood SVM in 2 minutes, which I didn't after watching other 15-minute tutorials.
@johngray6436 · 1 month ago
I've finally learned where the hell the Lagrangian comes from. Such a great video.
Link's broken my guy =) great vid.