Soheil Feizi

Comments

  • @JabirKhan-le7xq · 6 days ago

    What is the true distribution?

  • @JabirKhan-le7xq · 6 days ago

    Thank you, Sir.

  • @freerockneverdrop1236 · 15 days ago

    The formula for the neural network in this video should be a two-level summation instead of a one-level one.
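    For readers following along: assuming the lecture writes a one-hidden-layer network in standard notation (the exact formula on the board is not reproduced here), the two-level form the commenter is asking for would be

        f(x) = \sum_{k=1}^{m} a_k \, \sigma\!\Big( \sum_{j=1}^{d} w_{kj} x_j + b_k \Big),

    with the inner sum running over input coordinates and the outer sum over hidden units.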

  • @KittyCat-lp3zy · 1 month ago

    Long live, treasure of Azerbaijan ❤

  • @MonkkSoori · 3 months ago

    At 20:20, why does Phi(Q_i) not cancel out in the numerator and denominator?
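    Assuming the passage at 20:20 concerns kernelized (linear) attention, the i-th output is

        \mathrm{Attn}(q_i) = \frac{\sum_j \phi(q_i)^\top \phi(k_j)\, v_j}{\sum_j \phi(q_i)^\top \phi(k_j)}
                           = \frac{\phi(q_i)^\top \big(\sum_j \phi(k_j)\, v_j^\top\big)}{\phi(q_i)^\top \big(\sum_j \phi(k_j)\big)}.

    The factor \phi(q_i) can be pulled out of both sums, but it contracts against different quantities in the numerator and denominator; a ratio of the form (a^\top b)/(a^\top c) does not simplify to b/c, which is why \phi(Q_i) cannot be cancelled.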

  • @janesun9008 · 3 months ago

    Thank you for sharing this lecture, prof. Great quality and easy to understand!

  • @NavaAbdolalipour · 3 months ago

    I was with you at Ghalamchi in Ardabil; years later, I came across your name while in nearby countries. I'm happy to see the successes of my former classmates.

  • @simaranjbari · 3 months ago

    Your explanation was very nice and easy to understand. Thank you!

  • @fierydino9402 · 4 months ago

    Wonderful lecture!! Thank you for sharing

  • @PradeepKumar-zy6cd · 4 months ago

    Can you please share the slides?

  • @prabhavkaula9697 · 4 months ago

    Thank you for the lecture! ☺️

  • @ax5344 · 4 months ago

    @1:58:57 You said you would explain the different procedures for generating different responses later. I did not find it before you started discussing Step 3. Could you illustrate further?

  • @ax5344 · 4 months ago

    @2:15:30 Found it. Thanks!

  • @ax5344 · 4 months ago

    When you upload the video, could you set the speed to 1.5x? Right now I'm setting it to 2x, and it is still very slow.

  • @sabujchattopadhyay · 4 months ago

    Can you please share the slides?

  • @miquelnogueralonso2576 · 4 months ago

    Can you please share the slides?

  • @bardiasafaei457 · 4 months ago

    Thank you, Soheil, for the great content and the clear explanations! Could you also share the final written notes of each session for download?

  • @ai__76 · 5 months ago

    Massive lesson! Thanks!

  • @amiltonwong · 5 months ago

    Thanks a lot for providing such an excellent lecture. Would it be possible to release the notes for study? Thanks~

  • @parhamsalar3826 · 5 months ago

    Many thanks for your excellent lectures, particularly those on diffusion models. I do have a few questions about conditional diffusion models. In cross-attention, can we treat the text vectors as the query (Q) and the image vectors as the key (K) and value (V), instead of using the image vectors as the query (Q)?
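    For concreteness, a minimal sketch of the standard setup the question refers to, assuming PyTorch; the function and tensor names are illustrative, not taken from the lecture:

        import torch
        import torch.nn.functional as F

        def cross_attention(image_tokens, text_tokens, W_q, W_k, W_v):
            # Usual conditional-diffusion convention: image latents supply
            # the queries, text embeddings supply the keys and values.
            Q = image_tokens @ W_q                    # (n_img, d)
            K = text_tokens @ W_k                     # (n_txt, d)
            V = text_tokens @ W_v                     # (n_txt, d)
            scores = Q @ K.T / (Q.shape[-1] ** 0.5)   # (n_img, n_txt)
            return F.softmax(scores, dim=-1) @ V      # one update per image token

    Swapping the roles (text as Q, image as K/V) is mechanically possible, but the output would then have one row per text token rather than per image location, so it would no longer align with the spatial feature map that the diffusion network consumes.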

  • @parisaemkani5730 · 5 months ago

    Hi, could you please recommend a good course on the basics of machine learning and deep learning for beginners?

  • @Stealph_Delta_3003 · 5 months ago

    Thanks for sharing.

  • @sdiabr6792 · 5 months ago

    Real quality content

  • @mozhganmomtaz8169 · 5 months ago

    I just want to thank you 🤗

  • @INSTIG8R · 5 months ago

    This is the best video on Swin Transformers.

  • @MrNoipe · 5 months ago

    The handwriting is difficult to read; maybe write more slowly or with a different brush?

  • @naeemkhoshnevis · 5 months ago

    Thanks for uploading these lectures.

  • @naeemkhoshnevis · 5 months ago

    Thanks for uploading the lectures.

  • @Nerraruzi · 6 months ago

    Thanks so much for sharing this updated version of the course!!

  • @shayanmohammadizadeh172 · 6 months ago

    I'm 30 minutes into the video and I've already watched 8+ ads. Really, attention is all we need!

  • @Umar-Ateeq · 4 months ago

    You can use the "adblock for youtube" extension to avoid ads.

  • @junqi7050 · 6 months ago

    Thanks, Soheil, for sharing the updated deep learning theory course. I followed Soheil's earlier lectures in 2020, where I learned the theoretical foundations of deep learning in terms of representation, generalization, and optimization. I see that Soheil's course schedule this year has shifted substantially toward state-of-the-art transformer-based technologies, such as large language models. I plan to catch up with the updated deep learning foundations course this year and really appreciate the new lecture videos.

  • @mohammadshahbazhussain2029 · 6 months ago

    Thank you for sharing it

  • @user-pz5nn2kg2j · 6 months ago

    Hi professor, I was also wondering whether you plan to add some content related to distanglement learning, like nonlinear ICA, which I think is theoretically very interesting and important.

  • @user-pz5nn2kg2j · 6 months ago

    Sorry, there's a typo. It should be 'disentanglement'.

  • @user-pz5nn2kg2j · 6 months ago

    Thanks for updating this really amazing course. I've read this semester's syllabus and find it really interesting, especially the parts on generative models and multi-modal models. Hope to see more of the latest course videos. Thanks a lot for your effort in sharing the contents of this amazing course.

  • @hesamce · 6 months ago

    Thank you for sharing the updated version of the course🙏

  • @AyushSharma-ie7tj · 1 year ago

    Really nice lecture with a very even pace. Thank you for sharing.

  • @StratosFair · 1 year ago

    Great lecture. Thank you for sharing

  • @hamedgholami261 · 1 year ago

    An explanation of "Loss landscapes and optimization in over-parameterized non-linear systems and neural networks".

  • @mojtabakolahdoozi2418 · 1 year ago

    Great lecture on largely overlooked ground! Thanks.

  • @quanguyenang1615 · 1 year ago

    Thanks for the great lectures, Prof. Soheil.

  • @sylus121 · 1 year ago

    25:00 (Bookmark)

  • @Thaumast · 1 year ago

    24:18 The loss function is sometimes denoted by an L and sometimes by the calligraphic L; are they the same? Thank you very much!
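    One common convention, which may or may not match the lecture's notation: plain L denotes the per-sample loss, while the calligraphic \mathcal{L} denotes the empirical risk averaged over the training set,

        \mathcal{L}(w) = \frac{1}{n} \sum_{i=1}^{n} L\big(f(x_i; w),\, y_i\big).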

  • @bryanbocao4906 · 1 year ago

    42:30 Could one option be the KL-divergence loss?
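    For reference, the KL-divergence loss between a target distribution p and a model distribution q is

        D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x) \log \frac{p(x)}{q(x)},

    a standard choice whenever the model outputs a probability distribution; whether it fits the objective discussed at 42:30 depends on what that part of the lecture is matching.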

  • @mskang009 · 1 year ago

    This is the best lecture on self-supervised learning I've seen on YouTube. So many thanks!

  • @sinaasadiyan · 1 year ago

    Great explanation, just subscribed!

  • @sumitsah6092 · 1 year ago

    How can we guarantee that w_t lies within the ball? If that is not the case, then we can't apply the PL inequality. Please comment.
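    A sketch of the usual argument, assuming L is \beta-smooth and satisfies the \mu-PL inequality on a ball B(w_0, R): smoothness gives \|\nabla L(w_t)\| \le \sqrt{2\beta L(w_t)}, and PL gives linear convergence L(w_t) \le (1 - \eta\mu)^t L(w_0) for step size \eta \le 1/\beta, so each step moves at most

        \|w_{t+1} - w_t\| = \eta \,\|\nabla L(w_t)\| \le \eta \sqrt{2\beta L(w_0)}\,(1 - \eta\mu)^{t/2}.

    Summing this geometric series bounds the total path length by roughly 2\sqrt{2\beta L(w_0)}/\mu, so if R is chosen at least that large, an induction shows every iterate stays inside B(w_0, R) and the PL inequality keeps applying.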

  • @zonghua · 1 year ago

    The handwriting is unclear.

  • @user-cp8uy9om7o · 1 year ago

    Amazing video. Thanks!


  • @jfjfcjcjchcjcjcj9947 · 1 year ago

    Thanks for the lecture! I think a couple of points would make the lecture more digestible if they were explained in a bit more depth. The first is the definition of interpolation: different people mean different things when referring to interpolation. I presume that by interpolation Misha here means the ability of estimators to achieve zero training error and still not overfit to the held-out test set? The second aspect that might need more explanation is @40:05, when Misha describes the difference between interpolation in accuracy and interpolation in the l2-norm. I presume that what is meant by l2-norm interpolation is that two estimators with the same l2-norm on their parameters exhibit the same generalisation capabilities; is that right?
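    On the first point, one common formalization: an estimator f interpolates the training data when

        f(x_i) = y_i \quad \text{for all } i = 1, \dots, n,

    i.e. it attains exactly zero training error; the question raised in this line of work is whether such an f can nevertheless generalize to held-out data.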