Data Science Courses

PyTorch tutorial

Comments

  • @chaowang6903
    6 days ago

    Great stuff! Do we still need an eigendecomposition to get lambda_max?
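
    A minimal NumPy sketch (hypothetical code, not from the lecture) of power iteration, which estimates lambda_max using only matrix-vector products instead of a full eigendecomposition:

      import numpy as np

      # Power iteration: repeatedly apply A and renormalize; the Rayleigh
      # quotient converges to the largest eigenvalue of a symmetric PSD matrix.
      def power_iteration(A, num_iters=500, tol=1e-12):
          v = np.random.default_rng(0).standard_normal(A.shape[0])
          v /= np.linalg.norm(v)
          lam = 0.0
          for _ in range(num_iters):
              w = A @ v
              v = w / np.linalg.norm(w)
              lam_new = v @ A @ v              # Rayleigh quotient
              if abs(lam_new - lam) < tol:
                  break
              lam = lam_new
          return lam, v

      X = np.random.default_rng(1).standard_normal((100, 5))
      C = X.T @ X / (X.shape[0] - 1)           # sample covariance matrix
      lam_max, _ = power_iteration(C)
      print(np.isclose(lam_max, np.linalg.eigvalsh(C).max()))  # True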

  • @faezehabdolinejad
    8 days ago

    Thank you, professor. It was really great.

  • @chaowang6903
    8 days ago

    Thank you so much for sharing your amazing course!

  • @akera2775
    21 days ago

    Thank you, sir, for explaining this course in an easy way. Now I have started enjoying the course ❤

  • @joshi98kishan
    27 days ago

    Thank you professor. This lecture explains exactly what I was looking for - why principal components are the eigenvectors of the sample covariance matrix.
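
    A quick NumPy sanity check of exactly that point (a sketch with synthetic data, not the lecture's code): the eigenvectors of the sample covariance matrix match the right singular vectors of the centered data up to sign:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 3))

      Xc = X - X.mean(axis=0)                  # center the data first
      C = Xc.T @ Xc / (Xc.shape[0] - 1)        # sample covariance matrix

      # Principal components = eigenvectors of C, sorted by eigenvalue.
      eigvals, eigvecs = np.linalg.eigh(C)
      pcs = eigvecs[:, np.argsort(eigvals)[::-1]]

      # Right singular vectors of the centered data agree up to sign.
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      print(np.allclose(np.abs(Vt), np.abs(pcs.T)))  # True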

  • @purushottammishra3423
    27 days ago

    I got answers to almost every "WHY?" that I had while reading books.

  • @amins6695
    1 month ago

    Amazing lectures!

  • @HelloWorlds__JTS
    1 month ago

    Immediately after explaining the importance of centering the data, he purposely neglects it in his first demo! But then he mentions this at 22:38, and in his second demo he does center the data. Great instruction, thanks!

  • @VanshRaj-pf2bm
    1 month ago

    Which kid is this lecture for?

  • @mayankagrawal7865
    2 months ago

    I am myself one of the people you are claiming to be GAN-generated. Please don't mislead people.

  • @sripradhaiyengar9980
    2 months ago

    Thank you thank you!

  • @PradeepKumar-tl7dd
    3 months ago

    Best video on PCA

  • @mahdig4739
    3 months ago

    That was great Dr. Ghodsi! Many thanks!

  • @anadianBaconator
    3 months ago

    I love this lecture! Finally I have a better overview of Transformers! Thank you so much prof!

  • @trontan642
    3 months ago

    Much clearer than my professor.

  • @MrFunasty
    3 months ago

    How can we add x (d by 1) and z (m by 1) when their shapes are different? 🙄
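
    They aren't added directly. Assuming the factor-model setup x = W z + eps (my assumption about the step in question), z is first mapped into d dimensions by a d-by-m matrix W, so the shapes match. A minimal sketch:

      import numpy as np

      # Hypothetical factor-model shapes: z (m by 1) is mapped to d
      # dimensions by W (d by m) before anything is added to it.
      d, m = 4, 2
      rng = np.random.default_rng(0)
      W = rng.random((d, m))
      z = rng.random((m, 1))
      eps = rng.standard_normal((d, 1))
      x = W @ z + eps                          # (d,1) + (d,1): shapes match
      print(x.shape)                           # (4, 1)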

  • @CS_n00b
    4 months ago

    What is the guarantee that u1, sigma1, and v1 are non-negative if A is non-negative?
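
    sigma1 is non-negative by definition of the SVD, and for an entrywise non-negative A the leading singular vectors can be chosen non-negative (Perron-Frobenius applied to A A^T and A^T A). A small NumPy check, as a sketch:

      import numpy as np

      A = np.random.default_rng(0).random((6, 4))   # entrywise non-negative

      U, S, Vt = np.linalg.svd(A)
      u1, s1, v1 = U[:, 0], S[0], Vt[0]

      # Singular values are non-negative by construction; the leading
      # singular vectors are defined only up to a shared sign, so flip
      # them into the non-negative orthant if needed.
      if u1.sum() < 0:
          u1, v1 = -u1, -v1

      print(s1 >= 0, (u1 >= 0).all(), (v1 >= 0).all())  # True True True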

  • @prateekpatel6082
    4 months ago

    Wrong derivation of the derivative of s_t w.r.t. w: it's a recursive equation, since s implicitly depends on w.
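
    A minimal scalar sketch (hypothetical setup, not the lecture's code) checking the recursive chain rule ds_t/dw = (1 - s_t^2)(s_{t-1} + w * ds_{t-1}/dw) against PyTorch autograd:

      import torch

      # Scalar RNN state: s_t = tanh(w * s_{t-1} + x_t). Since s_{t-1}
      # itself depends on w, ds_t/dw must be accumulated recursively.
      w = torch.tensor(0.5, dtype=torch.float64, requires_grad=True)
      s = torch.tensor(0.0, dtype=torch.float64)

      ds_dw = 0.0                              # running recursive derivative
      for x in [0.3, -1.2, 0.7]:
          s_prev = s
          s = torch.tanh(w * s_prev + x)
          ds_dw = (1 - s.item() ** 2) * (s_prev.item() + w.item() * ds_dw)

      s.backward()                             # autograd unrolls the recursion
      print(abs(ds_dw - w.grad.item()) < 1e-9)  # True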

  • @thomastsao7507
    4 months ago

    Excellent!

  • @Rasha-tc5bl
    4 months ago

    Very wonderful... may God grant him health and happiness, and success to him and to his children, if he has children.

  • @amirrezamohammadi
    5 months ago

    Truly enjoyed! Thanks!

  • @bsementmath6750
    5 months ago

    Prof., you used to be very verbose and expansive on the board. Why this hybrid mode of slides plus some board work? Love from Pakistan!

  • @mahsakhoshnoodi2972
    5 months ago

    Thank you for this informative lecture. I have a question, though: why is the expectation of epsilon^2, where epsilon is normally distributed with mean zero, equal to sigma^2?

  • @moodi2002
    3 months ago

    If \( e_i \sim N(0, \sigma^2) \), then by definition \( \mathbb{E}[e_i] = 0 \) and \( \mathrm{Var}[e_i] = \sigma^2 \). For any random variable, the variance decomposes as \( \mathrm{Var}[e_i] = \mathbb{E}[e_i^2] - (\mathbb{E}[e_i])^2 \). Rearranging gives \[ \mathbb{E}[e_i^2] = \mathrm{Var}[e_i] + (\mathbb{E}[e_i])^2 = \sigma^2 + 0^2 = \sigma^2, \] so the expectation of \( e_i^2 \) is exactly \( \sigma^2 \).
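
    And a quick Monte Carlo check of the same fact (a sketch with an arbitrary sigma):

      import numpy as np

      # For eps ~ N(0, sigma^2): E[eps^2] = Var[eps] + E[eps]^2 = sigma^2.
      sigma = 1.7
      eps = np.random.default_rng(0).normal(0.0, sigma, size=1_000_000)
      print(np.mean(eps ** 2), sigma ** 2)     # both approximately 2.89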

  • @prateekpatel6082
    5 months ago

    Subtle mistake in the perceptron learning part: we don't update the weights for a correctly classified point; the update happens only on misclassified points.
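
    A minimal NumPy perceptron sketch (a toy example, not the lecture's code) that makes the misclassification-only update explicit:

      import numpy as np

      def perceptron(X, y, epochs=100, lr=1.0):
          w = np.zeros(X.shape[1])
          for _ in range(epochs):
              for xi, yi in zip(X, y):
                  if yi * (w @ xi) <= 0:       # misclassified (or on boundary)
                      w += lr * yi * xi        # update ONLY in this branch
          return w

      # Linearly separable toy data; the third column acts as a bias term.
      X = np.array([[2.0, 1.0, 1.0], [1.0, 3.0, 1.0],
                    [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
      y = np.array([1, 1, -1, -1])
      w = perceptron(X, y)
      print(np.all(np.sign(X @ w) == y))       # True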

  • @user-fq3ms4bz2p
    5 months ago

    I think the prof wrote the wrong formula for the marginal P(x = x0).

  • @user-gd8bt9qs4l
    5 months ago

    He's a great one.

  • @longh
    5 months ago

    Thank you, professor! The explanation is very intuitive.

  • @chaowang6903
    5 months ago

    Great lecture on why we use the test error to estimate the true error.
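
    A small NumPy sketch of the point (synthetic data, not the lecture's example): training error is optimistic, while the error on held-out data approximates the true error:

      import numpy as np

      rng = np.random.default_rng(0)
      def make_data(n):                        # y = <x, 1> + unit-variance noise
          X = rng.standard_normal((n, 10))
          return X, X @ np.ones(10) + rng.standard_normal(n)

      X_train, y_train = make_data(30)
      w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

      X_test, y_test = make_data(100_000)      # fresh data ~ the true error
      print(np.mean((y_train - X_train @ w) ** 2))  # train MSE: below 1.0
      print(np.mean((y_test - X_test @ w) ** 2))    # test MSE: near/above 1.0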

  • @ai__76
    5 months ago

    Thanks very much

  • @ai__76
    5 months ago

    Thanks for the useful course

  • @garmdarehalborz5441
    6 months ago

    Great, thanks a lot!

  • @zeynolabedinsoleymani4591
    6 months ago

    What is the "intuition" behind using Chebyshev polynomials in GNNs rather than other orthonormal basis functions? Why do they work so well?

  • @Falconoo7383
    4 months ago

    Because the alternative, a full eigendecomposition of the graph Laplacian, is computationally very expensive.
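
    Right: a K-term Chebyshev filter needs only K sparse matrix-vector products, via the recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x), instead of an eigendecomposition of the Laplacian. A minimal sketch (hypothetical graph and coefficients):

      import numpy as np

      # ChebNet-style filtering: apply a K-term Chebyshev polynomial of
      # the rescaled Laplacian to a signal using only mat-vec products.
      def cheb_filter(L_tilde, x, theta):
          t_prev, t_curr = x, L_tilde @ x      # T_0(L)x = x, T_1(L)x = Lx
          out = theta[0] * t_prev + theta[1] * t_curr
          for k in range(2, len(theta)):
              t_next = 2 * L_tilde @ t_curr - t_prev
              out += theta[k] * t_next
              t_prev, t_curr = t_curr, t_next
          return out

      # Tiny path graph: L = D - A, rescaled so its spectrum lies in [-1, 1].
      A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
      L = np.diag(A.sum(1)) - A
      lam_max = np.linalg.eigvalsh(L).max()    # demo only; in practice bounded
      L_tilde = 2 * L / lam_max - np.eye(3)    # or estimated, never decomposed
      print(cheb_filter(L_tilde, np.array([1., 0., 0.]), [0.5, 0.3, 0.2]))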

  • @homataha5626
    6 months ago

    Are the slides available?

  • @DataScienceCoursesUW
    21 days ago

    uwaterloo.ca/data-analytics/videos-and-slides-courses

  • @AmrMoursi-sm3cl
    6 months ago

    Thanks for sharing this amazing information ❤❤❤❤

  • @user-us1jf8zd8e
    6 months ago

    Detailed mathematical formula explanation starts @47:00

  • @kiannaderi8374
    6 months ago

    thank you

  • @mohamedmarzouk2537
    6 months ago

    Thank you, very helpful and informative

  • @omidbazgir9891
    6 months ago

    Amazing lecture! Thanks for uploading the videos! Is there any way we can get access to the coding assignments?

  • @asntrk1
    6 months ago

    Variational Autoencoder: 24:52

  • @AmrMoursi-sm3cl
    7 months ago

    1000000 Thanks ❤

  • @MrMIB983
    7 months ago

    I'm so excited to finally see a Diffusion models lecture by Professor Ali. Thank you.

  • @user-us1jf8zd8e
    7 months ago

    Supervised PCA starts @57:19

  • @MrMIB983
    7 months ago

    Amazing, this topic is new in the professor's course. It would be awesome to have an RL-only course from Professor Ali. Also looking forward to seeing the amazing diffusion-model lectures.

  • @vivekrai1974
    7 months ago

    5:35 Why would it be one row of W prime? In the first case, we got a column vector of W because we multiplied a one-hot encoded vector with W. However, multiplying with h should not give one particular row as h is not a one-hot encoded vector.
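
    A shape sketch of that point (hypothetical sizes, not the lecture's code): a one-hot input selects a single row of W, while h @ W_prime mixes all rows of W_prime into a vocabulary-sized score vector, so no single row of W_prime is picked out:

      import numpy as np

      V, d = 5, 3                              # vocab size, embedding dim
      rng = np.random.default_rng(0)
      W, W_prime = rng.random((V, d)), rng.random((d, V))

      one_hot = np.eye(V)[2]
      h = one_hot @ W                          # selects exactly W[2]
      print(np.allclose(h, W[2]))              # True

      scores = h @ W_prime                     # weighted mix of all rows
      print(scores.shape)                      # (5,)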

  • @HarpaAI
    7 months ago

    🎯 Key takeaways for quick navigation:

    00:07 📚 Introduction to GPT and BERT: both are Transformer-based models. GPT stands for Generative Pre-Trained Transformer, while BERT stands for Bidirectional Encoder Representations from Transformers.
    05:26 🧠 How BERT works: BERT is a stack of encoders with multiple layers and attention heads. It is trained by masking words in sentences and predicting the masked words, which makes it bidirectional in nature.
    10:17 🏭 Applications of BERT: BERT can be used in various applications by fine-tuning the pretrained model. It is especially useful for tasks like sentiment analysis and can handle domain-specific tasks.
    14:55 🧬 Domain-specific BERT models: there are domain-specific BERT models trained for fields like bioinformatics and finance, which can be used in applications within their respective domains.
    25:09 📝 Introduction to GPT: GPT is a stack of decoder layers, where each decoder is similar to the Transformer decoder but without cross-attention. GPT is trained to predict the next word in a sequence.
    29:48 🚀 GPT's evolution: GPT models have grown over time, with each version becoming larger and more powerful in terms of parameters. GPT-3, for instance, has an enormous 175 billion parameters, making it highly capable in natural language understanding and generation.
    30:28 🧠 Introduction to GPT-4 and its size: GPT-4's size is undisclosed, with speculation on the impact of model size on performance.
    34:04 🌐 T5, combining BERT and GPT: T5 transforms various NLP problems into a text-to-text format and applies to a wide range of NLP tasks.
    44:12 🔐 Challenges in aligning language models with user intent: aligning language models with user instructions is important for ethical and practical reasons, including avoiding harmful or offensive responses.
    49:30 🎯 Steps for alignment and reinforcement learning: overview of the three alignment steps (supervised fine-tuning, reward modeling, and reinforcement learning from human feedback) and why understanding reinforcement learning matters for alignment.

    Made with HARPA AI

  • @praveenkumar-tu1sj
    7 months ago

    It is simple, sir. Please accept my humble request 🙏

  • @anadianBaconator
    3 months ago

    Learn some respect.

  • @praveenkumar-tu1sj
    7 months ago

    Kindly help me, sir. I am failing miserably.

  • @praveenkumar-tu1sj
    7 months ago

    Thanks in advance. I think it is very easy for a person who has knowledge of CNNs/NNs. For example, I have a lid-driven cavity problem and I get velocities u and v, both two-dimensional (say 33 by 33). It is time-dependent, so I want to use a CNN to predict u at t=25, providing u at t=10, 15, 20 as input, so that it can replace the actual data to get the required prediction. I will give the actual u at t=25 for comparison and add statistical regressions, loss/gain, and training plots. Thank you, sir. Please kindly help; I would be grateful and appreciative of your kindness and support.

  • @micahdelaurentis6551
    4 months ago

    pathetic

  • @adhirajbanerjee7288
    7 months ago

    Any links to the slides for this course?

  • @timandersen8030
    7 months ago

    Thank you for a new 2023 course! You are one of the best to teach the subject!!!