Comments

  • @john_michaelz1823
    9 hours ago

    Link's broken my guy =) great vid.

  • @KyloRen-o5g
    13 hours ago

    The formula is wrong... he conveniently moved on. Read below; many have commented on this.

  • @tseckwr3783
    6 days ago

    thank you.

  • @B_knows_A_R_D-xh5lo
    7 days ago

    classics 0:07 0:08 0:08

  • @felipeazank3134
    7 days ago

    This kind of video reminds me of what the internet is all about: sharing knowledge. Thanks for the content. I wish the internet had stopped here.

  • @Ken08Odida
    7 days ago

    Thank you. Perfectly simplified in 2 minutes. Now I can build on this basic understanding

  • @harsh_hybrid_thenx
    9 days ago

    At 17:34 you said that if g is positive, then the log of a negative quantity would be infinity. Is that correct? The log of a negative quantity is not defined, right?

  • @asmaaali8263
    9 days ago

    That was amazing, thanks 😊

  • @gustavgille9323
    10 days ago

    The least squares error example is beautiful!!!

  • @mirandac1364
    11 days ago

    This is such a great video on so many levels. May god bless the people who had a hand in making it 🙏🏻🙏🏻

  • @evavashisth9103
    13 days ago

    Amazing explanation! Thank you so much ☺️

  • @theProf-xc5pe
    13 days ago

    hmm close but no cigar

  • @user-yf5jz3zq5n
    15 days ago

    I've been following your channel for a long time) glad that everything is going great for you and that you're making progress)

  • @magalhaees
    17 days ago

    We center the data to have a mean of 0, which allows us to match the form of the covariance matrix provided in the video
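
    For anyone reproducing this step, here is a minimal NumPy sketch of centering the data and forming the covariance matrix (the data and variable names are illustrative, not the video's actual code):

        import numpy as np

        X = np.random.rand(100, 2)           # toy data: 100 samples, 2 features
        X_centered = X - X.mean(axis=0)      # center each feature to mean 0

        # With centered data, the covariance matrix is simply X^T X / n
        n = X_centered.shape[0]
        cov = X_centered.T @ X_centered / n  # same as np.cov(X_centered.T, bias=True)

        # The principal directions are the eigenvectors of this matrix
        eigvals, eigvecs = np.linalg.eigh(cov)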

  • @mehdirexon
    17 days ago

    Nice video

  • @RAyLV17
    18 days ago

    Man, I just noticed that you haven't uploaded any new videos in 2 years! Hope you're doing well and come back with more of these amazing videos <3

  • @user-mm8wj5hb8y
    21 days ago

    Seems like a pretty nice little site. I only played a bit, but I quite liked it)

  • @NarimanRava
    22 days ago

    I watched your video; it is so informative and humble. Thank you for sharing your video, I follow you.

  • @weisanpang7173
    23 days ago

    Is the answer to follow-up question #1 equal to 4?

  • @andres_camarillo
    25 days ago

    Amazing video. Thanks!

  • @tejkiranv4056
    27 days ago

    @VisuallyExplained Is the answer to the 2nd follow-up question (the median value) 2 throws? For example, take 100 tries: out of these, 16.6 would yield a 6 on the first throw. Around 41.6 would yield a 6 by their second attempt. And since we want the 50th try (or rather the average of the 50th and 51st), it would be 2.
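
    For anyone who wants to check the median directly, here is a quick Monte-Carlo sketch (assuming the follow-up question asks for the median number of rolls of a fair six-sided die until the first 6 appears):

        import random
        import statistics

        def rolls_until_six():
            """Count rolls of a fair die until the first 6 shows up."""
            count = 0
            while True:
                count += 1
                if random.randint(1, 6) == 6:
                    return count

        samples = [rolls_until_six() for _ in range(100_000)]
        print("estimated median:", statistics.median(samples))
        # Exact check: the median is the smallest k with 1 - (5/6)**k >= 0.5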

  • @iskhezia
    28 days ago

    I love it! Thanks for that. Can you share the code used for PCA in this video, please? I am trying to reproduce it, but my results don't match yours, and I want to see where I'm going wrong (I didn't find it in the description or on GitHub). Thanks for the video.

  • @Leo-vv3jd
    29 days ago

    I really liked the video and the visuals, but I think it would be better without the "generic music" in the background.

  • @VisuallyExplained
    29 days ago

    Thank you for taking the time to post your feedback, this is very useful for the growth of this channel!

  • @Arthur-uw1vm
    29 days ago

    At 4:57, "the happiest country seems to be the most balanced ones" seems wrong; shouldn't it be "the most powerful ones"?

  • @anikdas567
    A month ago

    Very nice animations, and well explained. But just to be a bit technical, isn't what you described called "mini-batch gradient descent"? Because for stochastic gradient descent, don't we just use one training example per iteration? 😅😅
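
    For reference, a small sketch of the distinction being asked about, using a squared loss and illustrative names (classic SGD updates on a single example, mini-batch gradient descent on a small batch):

        import numpy as np

        def sgd_step(w, x_i, y_i, lr=0.01):
            """Classic SGD: gradient of the squared loss on ONE example."""
            grad = 2 * (w @ x_i - y_i) * x_i
            return w - lr * grad

        def minibatch_step(w, X_batch, y_batch, lr=0.01):
            """Mini-batch SGD: average gradient over a small batch."""
            grad = 2 * X_batch.T @ (X_batch @ w - y_batch) / len(y_batch)
            return w - lr * grad

        w = np.zeros(3)
        X, y = np.random.rand(32, 3), np.random.rand(32)
        w = sgd_step(w, X[0], y[0])          # one example per update
        w = minibatch_step(w, X[:8], y[:8])  # eight examples per update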

  • @angelo6082
    A month ago

    You saved me for my Data mining exam tomorrow 🙏

  • @chrischoir3594
    A month ago

    rubbish video

  • @hantiop
    A month ago

    Quick question: How do we choose the gamma parameter in the RBF kernel at 3:00? By, say, cross validation?
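
    One common way to pick gamma (not necessarily what the video used) is a grid search with cross-validation, for example with scikit-learn; the dataset and parameter grid below are just an illustration:

        from sklearn.datasets import make_moons
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVC

        X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

        # 5-fold cross-validation over a small grid of gamma (and C) values
        param_grid = {"gamma": [0.01, 0.1, 1, 10], "C": [0.1, 1, 10]}
        search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
        search.fit(X, y)
        print(search.best_params_)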

  • @manojcygnus9305
    A month ago

    Basically, NN training is f*@k around until you find the best possible value.

  • @DG123z
    A month ago

    It's like being less restrictive keeps you from optimizing the wrong thing and getting stuck in the wrong valley (or hill, for evolution). Feels a lot like how I kept trying to optimize being a nice guy because there were some positive responses, and without some chaos I never would have seen the other valley of being a bad boy, which has much less cost and better results.

  • @naveedanwer8262
    A month ago

    Just learn how to speak slowly and you will have more views.

  • @snowcamo
    A month ago

    Honestly didn't really help with my questions, but I didn't expect a 3 minute video to answer them. This was very well done, the visualization was great, and everything it touched on (while brief) was concise and accurate. Subbed. <3

  • @1matzeplayer1
    A month ago

    Great video!

  • @adnon2604
    A month ago

    Amazing video! I could save a lot of time! Thank you very much.

  • @Christoo228
    A month ago

    Sagapo ("I love you") <3

  • @ashimov1970
    A month ago

    Brilliantly Genius!

  • @pnachtwey
    A month ago

    How can v_k be used before it is calculated on the next line? How can one know the 'condition' if this is actual data and not a mathematical formula?

  • @mohammadzeinali5414
    A month ago

    Perfect thank you

  • @rand4492
    A month ago

    Perfect explanation thank you 🙏🏼

  • @_Lavanya-ju8yi
    A month ago

    great explanation!

  • @larissacury7714
    A month ago

    That's great!

  • @duydangdroid
    A month ago

    3:39 It's possible to get less than 1/2 of the max cut. If all nodes end up the same color, that's 0 cut edges. We would have to shuffle the list of nodes and split them equally for the assignment. Independent assignments will get you something like coin-flip results, without a 1/2 lower bound.
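
    To illustrate the point on a toy example: coloring each node independently at random cuts every edge with probability 1/2, so the 1/2 factor is a guarantee on the expected cut, while any single random coloring can indeed do much worse. A minimal sketch (the graph is made up for illustration):

        import random

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small illustrative graph
        nodes = {u for e in edges for u in e}

        def random_cut(edges, nodes):
            """Color each node independently at random and count cut edges."""
            color = {v: random.random() < 0.5 for v in nodes}
            return sum(color[u] != color[v] for u, v in edges)

        trials = [random_cut(edges, nodes) for _ in range(10_000)]
        print("average cut size:", sum(trials) / len(trials))   # close to len(edges) / 2
        print("worst single trial:", min(trials))               # can be far below half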

  • @yashpermalla3494
    A month ago

    Isn’t the one who “goes first” the one on the inside?

  • @negarmahmoudi-wt5bg
    A month ago

    Thank you for this clear explanation.

  • @AndresGarcia-pv5fe
    A month ago

    good but why the loud ass shopping music

  • @jameskirkham5019
    A month ago

    Amazing video thank you

  • @-T.K.-
    A month ago

    Awesome video! This is very very helpful (as I'm going to take the convex optimization class exam tomorrow...) However, I am a bit confused at around 6:30. You mentioned that the minimizer x goes first and the maximizer u goes second in the expression at 6:45. I think in mathematics, the expression is evaluated inside-first? So in this case the inner part, maximizer u, would be the first player, and the minimizer x would be the second. I'm not sure if I understand this correctly...
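
    One way to reconcile the two readings: write the inner problem as its own function, g(x) = max_u f(x, u). The outer problem min_x g(x) then chooses x, and only once x is fixed does the inner maximizer choose u in response. So the inner max is evaluated first when computing the value, but in the game interpretation the minimizer commits to x first and the maximizer, knowing x, moves second.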

  • @ZinzinsIA
    A month ago

    Awesome content and video editing, thank you so much. Do you have any advice on how to produce this kind of graphics and animation?

  • @johns4929
    A month ago

    Wow, what an amazing video. I understood SVM in 2 minutes, which I didn't after watching other 15-minute tutorials.

  • @johngray6436
    A month ago

    I've finally learned where the hell the Lagrangian comes from. Such a great video.