Attention for Neural Networks, Clearly Explained!!!

Attention is one of the most important concepts behind Transformers and Large Language Models, like ChatGPT. However, it's not that complicated. In this StatQuest, we add Attention to a basic Sequence-to-Sequence (Seq2Seq or Encoder-Decoder) model and walk through how it works and is calculated, one step at a time. BAM!!!
NOTE: This StatQuest is based on two manuscripts. 1) The manuscript that originally introduced Attention to Encoder-Decoder Models: Neural Machine Translation by Jointly Learning to Align and Translate: arxiv.org/abs/1409.0473 and 2) The manuscript that first used the Dot-Product similarity for Attention in a similar context: Effective Approaches to Attention-based Neural Machine Translation arxiv.org/abs/1508.04025
NOTE: This StatQuest assumes that you are already familiar with basic Encoder-Decoder neural networks. If not, check out the 'Quest: • Sequence-to-Sequence (...
If you'd like to support StatQuest, please consider...
Patreon: / statquest
...or...
KZread Membership: / @statquest
...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
statquest.org/statquest-store/
...or just donating to StatQuest!
www.paypal.me/statquest
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
0:00 Awesome song and introduction
3:14 The Main Idea of Attention
5:34 A worked out example of Attention
10:18 The Dot Product Similarity
11:52 Using similarity scores to calculate Attention values
13:27 Using Attention values to predict an output word
14:22 Summary of Attention
#StatQuest #neuralnetwork #attention

Comments: 387

  • @statquest
    @statquest 1 year ago

    To learn more about Lightning: lightning.ai/ Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @koofumkim4571
    @koofumkim4571 11 months ago

    “Statquest is all you need” - I really needed this video for my NLP course, and I'm glad it's out now. I got an A+ in the course; your precious videos helped a lot!

  • @statquest

    @statquest

    11 months ago

    BAM! :)

  • @atharva1509
    @atharva1509 1 year ago

    Somehow Josh always figures out what video we are going to need!

  • @yashgb

    @yashgb

    1 year ago

    Exactly, I was gonna say the same 😃

  • @statquest

    @statquest

    1 year ago

    BAM! :)

  • @yesmanic

    @yesmanic

    1 year ago

    Same here 😂

  • @MelUgaddan
    @MelUgaddan 9 months ago

    The level of explainability from this video is top-notch. I always watch your video first to grasp the concept then do the implementation on my own. Thank you so much for this work !

  • @statquest

    @statquest

    9 months ago

    Glad it was helpful!

  • @clockent
    @clockent 11 months ago

    This is awesome mate, can't wait for the next installment! Your tutorials are indispensable!

  • @statquest

    @statquest

    11 months ago

    Thank you!

  • @rajapandey2039

    @rajapandey2039

    1 month ago

    @@statquest BAM!

  • @rutvikjere6392
    @rutvikjere6392 1 year ago

    I was literally trying to understand attention a couple of days ago and Mr.BAM posts a video about it. Thanks 😊

  • @NoahElRhandour

    @NoahElRhandour

    1 year ago

    same :D absolutely insane...

  • @statquest

    @statquest

    1 year ago

    BAM! :)

  • @Travel-Invest-Repeat
    @Travel-Invest-Repeat 11 months ago

    Great work, Josh! Listening to my deep learning lectures and reading papers become way easier after watching your videos, because you explain the big picture and the context so well!! Eagerly waiting for the transformers video!

  • @statquest

    @statquest

    11 months ago

    Coming soon! :)

  • @dylancam812
    @dylancam812 1 year ago

    Dang, this came out just 2 days after my neural networks final. I'm still so happy to see this video in my feed. You do such great work Josh! Please keep it up for all the computer scientists and statisticians that love your videos and eagerly await each new post

  • @statquest

    @statquest

    1 year ago

    Thank you very much! :)

  • @Neiltxu

    @Neiltxu

    11 months ago

    @@statquest it came out 3 days before my Deep Learning and NNs final. BAM!!!

  • @statquest

    @statquest

    11 months ago

    @@Neiltxu Awesome! I hope it helped!

  • @Neiltxu

    @Neiltxu

    11 months ago

    @@statquest for sure! Your videos always help! btw, do you ship to Spain? I like the hoodies in your shop

  • @statquest

    @statquest

    11 months ago

    @@Neiltxu I believe the hoodies ship to Spain. Thank you for supporting StatQuest! :)

  • @SharingFists
    @SharingFists 11 months ago

    This channel is pure gold. I'm a machine learning and deep learning student.

  • @statquest

    @statquest

    11 months ago

    Thanks!

  • @aayush1204
    @aayush1204 8 months ago

    1 million subscribers INCOMING!!! Also huge thanks to Josh for providing such insightful videos. These videos really make everything easy to understand, I was trying to understand Attention and BAM!! found this gem.

  • @statquest

    @statquest

    8 months ago

    Thank you very much!!! BAM! :)

  • @Murattheoz
    @Murattheoz 8 months ago

    I feel like I am watching a cartoon as a kid. :)

  • @statquest

    @statquest

    8 months ago

    bam!

  • @aquater1120
    @aquater1120 1 year ago

    I was just reading the original attention paper and then BAM! You uploaded the video. Thank you for creating the best content on AI on KZread!

  • @statquest

    @statquest

    1 year ago

    Thank you very much! :)

  • @ArpitAnand-yd7tr
    @ArpitAnand-yd7tr 11 months ago

    The best explanation of Attention that I have come across so far ... Thanks a bunch❤

  • @statquest

    @statquest

    11 months ago

    Thank you very much! :)

  • @lunamita
    @lunamita 3 months ago

    Can’t thank this guy enough. He helped me get my master's degree in AI back in 2022; now I'm working as a data scientist and I still keep going back to your videos.

  • @statquest

    @statquest

    3 months ago

    BAM!

  • @ncjanardhan
    @ncjanardhan 1 month ago

    The BEST explanation of Attention models!! Kudos & Thanks 😊

  • @statquest

    @statquest

    1 month ago

    Thank you very much!

  • @benmelis4117
    @benmelis4117 1 month ago

    I just wanna let you know that this series is absolutely amazing. So far, as you can see, I've made it to the 89th video, guess that's something. Now it's getting serious tho. Again, love what you're doing here man!!! Thanks!!

  • @statquest

    @statquest

    1 month ago

    Thank you so much!

  • @benmelis4117

    @benmelis4117

    1 month ago

    @@statquest Personally, since I'm a medical student, I really can't explain how valuable it is to me that you used so many medical examples in the videos. The moment you said in one of the first videos that you are a geneticist I was sold on this series; it's one of my favorite subjects at uni, crazy interesting!

  • @statquest

    @statquest

    1 month ago

    @@benmelis4117 BAM! :)

  • @linhdinh136
    @linhdinh136 1 year ago

    Thanks for the wholesome content! Looking forward to a StatQuest video on the Transformer.

  • @statquest

    @statquest

    1 year ago

    Wow!!! Thank you so much for supporting StatQuest!!! I'm hoping the StatQuest on Transformers will be out by the end of the month.

  • @sinamon6296
    @sinamon6296 5 months ago

    Hi Mr. Josh, just wanna say that there is literally no one who makes it so easy for me to understand such complicated concepts. Thank you! Once I get a job I will make sure to give you guru dakshina! (meaning, an offering from students to their teachers)

  • @statquest

    @statquest

    5 months ago

    Thank you very much! I'm glad my videos are helpful! :)

  • @d_b_
    @d_b_ 1 year ago

    Thanks for this. The way you step through the logic is always very helpful

  • @statquest

    @statquest

    1 year ago

    Thanks!

  • @brunocotrim2415
    @brunocotrim2415 19 days ago

    Hello StatQuest, I would like to say thank you for the amazing job. This content helped me understand a lot about how Attention works, especially because visual explanations help me understand better, and the way you join the visual explanation with the verbal one while keeping it interesting is on another level. Amazing work!

  • @statquest

    @statquest

    19 days ago

    Thank you!

  • @weiyingwang2533
    @weiyingwang2533 11 months ago

    You are amazing! The best explanation I've ever found on KZread.

  • @statquest

    @statquest

    11 months ago

    Wow, thanks!

  • @won20529jun
    @won20529jun 1 year ago

    I was literally just thinking I'd love an explanation of attention by SQ!!! Thanks for all your work

  • @statquest

    @statquest

    1 year ago

    bam!

  • @familywu3869
    @familywu3869 1 year ago

    Thank you for the excellent teaching, Josh. Looking forward to the Transformer tutorial. :)

  • @statquest

    @statquest

    1 year ago

    Coming soon!

  • @tupaiadhikari
    @tupaiadhikari 10 months ago

    Thanks Professor Josh for such a great tutorial ! It was very informative !

  • @statquest

    @statquest

    10 months ago

    My pleasure!

  • @usser-505
    @usser-505 8 months ago

    The end is a classic cliffhanger for the series. You talk about how we don't need the LSTMs, and I waited an entire summer for transformers. Good job! :)

  • @statquest

    @statquest

    8 months ago

    Ha! The good news is that you don't have to wait! You can binge! Here's the link to the transformers video: kzread.info/dash/bejne/rKyF27aEaNTbqbw.html

  • @usser-505

    @usser-505

    8 months ago

    @@statquest Yeah! I already watched it when you released it. I commented on how this deep learning playlist is becoming a series! :)

  • @statquest

    @statquest

    8 months ago

    @@usser-505 bam!

  • @saschahomeier3973
    @saschahomeier3973 10 months ago

    You have a talent for explaining these things in a straightforward way. Love your videos. You have no video about Transformers yet, right?

  • @statquest

    @statquest

    10 months ago

    The transformers video is currently available to channel members and patreon supporters.

  • @rafaeljuniorize
    @rafaeljuniorize 2 months ago

    this was the most beautiful explanation I've ever had in my entire life, thank you!

  • @statquest

    @statquest

    2 months ago

    Wow, thank you!

  • @KevinKansas1
    @KevinKansas1 1 year ago

    The way you explain complex subjects in an easy-to-understand format is amazing! Do you have an idea of when you will release a video about transformers? Thank you Josh!

  • @statquest

    @statquest

    1 year ago

    I'm shooting for the end of the month.

  • @JeremyHalfon

    @JeremyHalfon

    11 months ago

    Hi Josh @@statquest, any update on this? Would definitely need it for my final tomorrow :))

  • @statquest

    @statquest

    11 months ago

    @@JeremyHalfon I'm finishing my first draft today. Hope to edit it this weekend and record next week.

  • @The-Martian73
    @The-Martian73 1 year ago

    Great, that's really what I was looking for, thanks Mr. Starmer for the explanation ❤

  • @statquest

    @statquest

    1 year ago

    bam! :)

  • @rishabhsoni
    @rishabhsoni 6 months ago

    Superb videos. One question: is the fully connected layer just the softmax layer, with no hidden layer with weights (meaning no weights are learned)?

  • @statquest

    @statquest

    6 months ago

    No, there are weights along the connections between the input and output of the fully connected layer, and those outputs are then pumped into the softmax. I apologize for not illustrating the weights in this video. However, I included them in my video on transformers, and it's the same here. Here's the link to the transformers video: kzread.info/dash/bejne/rKyF27aEaNTbqbw.html
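
    For anyone who wants to see this stage in code, here is a minimal PyTorch sketch (the sizes and numbers are illustrative assumptions, not the video's actual code): the fully connected layer has learnable weights and biases, and its outputs are then pumped into the softmax.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 2 attention values + 2 decoder LSTM outputs go in,
# and one logit per output-vocabulary token comes out.
fc = nn.Linear(in_features=4, out_features=4)      # weights and biases ARE learned here

fc_input = torch.tensor([[-0.3, 0.3, 0.9, 0.4]])   # [attention values | decoder LSTM outputs]
logits = fc(fc_input)                              # weighted sums plus biases
word_probs = torch.softmax(logits, dim=-1)         # probabilities over the output words
```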

  • @manuelcortes1835
    @manuelcortes1835 1 year ago

    I have a question that could benefit from clarification: In the final FC layer for word predictions, it is claimed that the Attention Values and 'encodings' are used as input (13:38). By 'encodings', do we mean the short term memories from the top LSTM layer in the decoder?

  • @statquest

    @statquest

    1 year ago

    Yes. We use both the attention values and the LSTM outputs (short-term memories or hidden states) as inputs to the fully connected layer.
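
    As a minimal sketch of that concatenation (the numbers are the rounded values shown in the video; the code itself is illustrative, not the video's):

```python
import torch

# The attention values and the decoder LSTM's outputs (its short-term
# memories / hidden states) are joined to form the fully connected layer's input.
attention_values = torch.tensor([[-0.3, 0.3]])   # weighted sums of the encoder outputs
decoder_hidden   = torch.tensor([[0.9, 0.4]])    # decoder LSTM outputs for <EOS>

fc_input = torch.cat([attention_values, decoder_hidden], dim=-1)   # shape: (1, 4)
```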

  • @jacobverrey4075
    @jacobverrey4075 1 year ago

    Josh - I've read the original papers and countless online explanations, and this stuff never makes sense to me. You are the one and only reason as to why I understand machine learning. I wouldn't be able to make any progress on my PhD if it wasn't for your videos.

  • @statquest

    @statquest

    1 year ago

    Thanks! I'm glad my videos are helpful! :)

  • @okay730
    @okay730 11 months ago

    I'm excited for the video about transformers. Thank you Josh, your videos are extremely helpful

  • @statquest

    @statquest

    11 months ago

    Coming soon!

  • @MartinGonzalez-wn4nr
    @MartinGonzalez-wn4nr 11 months ago

    Hi Josh, I just bought your books. It's amazing the way you explain complex things; reading the papers after watching your videos is easier. NOTE: waiting for the video on transformers

  • @statquest

    @statquest

    11 months ago

    Glad you like them! I hope the video on Transformers is out soon.

  • @mehmeterenbulut6076
    @mehmeterenbulut6076 8 months ago

    I was stunned when you started the video with a catchy jingle, man. Cheers :D

  • @statquest

    @statquest

    8 months ago

    :)

  • @rathinarajajeyaraj1502
    @rathinarajajeyaraj1502 1 year ago

    Much awaited one .... Awesome as always ..

  • @statquest

    @statquest

    1 year ago

    Thank you!

  • @capyk5455
    @capyk5455 10 months ago

    You're amazing Josh, thank you so much for all this content

  • @statquest

    @statquest

    10 months ago

    Glad you enjoy it!

  • @patrikszepesi2903
    @patrikszepesi2903 7 months ago

    Hi, great video. At 13:49 can you please explain how you get -0.3 and 0.3 for the input to the fully connected layer? Thank you

  • @statquest

    @statquest

    7 months ago

    The outputs from the softmax function are multiplied with the short-term memories coming out of the encoder's LSTM units. We then add those products together to get -0.3 and 0.3.
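
    A small numeric sketch of that step (the softmax weights and encoder outputs below are made-up values chosen so the sums land on -0.3 and 0.3; they are not the exact numbers from the video):

```python
import numpy as np

attn_weights = np.array([0.6, 0.4])               # softmax over the similarity scores
encoder_outputs = np.array([[-0.5, 0.1],          # short-term memories for "Let's" (cell 1, cell 2)
                            [ 0.0, 0.6]])         # short-term memories for "go"

# Scale each word's encoder outputs by its attention weight, then sum per LSTM cell.
attention_values = attn_weights @ encoder_outputs
print(attention_values)                           # [-0.3  0.3]
```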

  • @abrahammahanaim3859
    @abrahammahanaim3859 11 months ago

    Hey Josh your explanation is easy to understand. Thanks

  • @statquest

    @statquest

    11 months ago

    Glad it was helpful!

  • @sreerajnr689
    @sreerajnr689 1 month ago

    Your explanation is AMAZING AS ALWAYS!! I have 1 doubt. Do we do the attention calculation only on the final layer? For example, if there are 2 layers in encoder and 2 layers in decoder, we use only the outputs from 2nd layer of encoder and 2nd layer of decoder for attention estimation, right?

  • @statquest

    @statquest

    1 month ago

    I believe that is correct, but, to be honest, I don't think there is a hard rule.

  • @tupaiadhikari
    @tupaiadhikari 10 months ago

    At 13:38, are we concatenating the attention values and the output of the decoder LSTM for the translated word (EOS in this case), and then using a weight matrix of dimensions (4x4) to convert that into a dimension-4 pre-softmax output?

  • @statquest

    @statquest

    10 months ago

    yep

  • @statquest

    @statquest

    10 months ago

    If you want to see a more detailed view of what is going on at that stage, check out my video on Transformers: kzread.info/dash/bejne/rKyF27aEaNTbqbw.html In that video, I go over every single mathematical operation, rather than gloss over them like I do here.

  • @tupaiadhikari

    @tupaiadhikari

    10 months ago

    @@statquest Thank You Professor Josh for the clarifications !

  • @alexfeng75
    @alexfeng75 11 months ago

    Fantastic video, indeed! Is the attention described in the video the same as in the attention paper? I didn't see any mention of QKV in the video and would like to know whether it was omitted to simplify things or by mistake.

  • @statquest

    @statquest

    11 months ago

    Are you asking about the QKV notation that appears in the "Attention is all you need" paper? That manuscript arxiv.org/abs/1706.03762, which came out in 2017, didn't introduce the concept of attention for neural networks. Instead it introduces a more advanced topic - Transformers. The original "how to add attention to neural networks" manuscript arxiv.org/pdf/1409.0473.pdf came out in 2015 and did not use the QKV notation that appeared later in the transformer manuscript. Anyway, my video follows the original 2015 manuscript. However, I'm working on a video that covers the 2017 manuscript right now, and I've got a long section talking all about the QKV stuff in it. That said, in this video, you can think of the output from each LSTM in the decoder as a "Query", and the outputs from each LSTM in the Encoder as the "Keys" and "Values". The "Keys" are used, in conjunction with each "Query", to calculate the Similarity Scores, and the "Values" are then scaled by those scores to create the attention values.
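
    A hedged sketch of that mapping in code (made-up numbers; this 2015-style attention has no learned Q/K/V matrices, the LSTM outputs play the roles directly):

```python
import torch

query  = torch.tensor([0.9, 0.4])                 # decoder LSTM output = the "Query"
keys   = torch.tensor([[-0.5, 0.1],               # encoder LSTM outputs, one row per input word,
                       [ 0.0, 0.6]])              # acting as the "Keys"...
values = keys                                     # ...and also as the "Values"

scores  = keys @ query                            # dot-product similarity per input word
weights = torch.softmax(scores, dim=0)            # how much attention each word gets
attention_values = weights @ values               # weighted sum of the Values
```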

  • @alexfeng75

    @alexfeng75

    11 months ago

    @@statquest Thanks for the reply, Josh. Yes, I was referring to the 2017 paper. I look forward to your video covering it.

  • @rrrprogram8667
    @rrrprogram8667 1 year ago

    Excellent josh.... So finally MEGA Bammm is approaching..... Hope u r doing good...

  • @statquest

    @statquest

    1 year ago

    Yes! Thank you! I hope you are doing well too! :)

  • @abdullahhashmi654
    @abdullahhashmi654 1 year ago

    Been wanting this video for so long, gonna watch it soon!

  • @statquest

    @statquest

    1 year ago

    bam!

  • @owlrion
    @owlrion 11 months ago

    Hey! Great video, this is really helping me with neural networks at the university, do we have a date for when the transformer video comes out?

  • @statquest

    @statquest

    11 months ago

    Soon....

  • @envynoir
    @envynoir 1 year ago

    Godsend! Just what I needed! Thanks Josh.

  • @statquest

    @statquest

    1 year ago

    bam!

  • @ArpitAnand-yd7tr
    @ArpitAnand-yd7tr 11 months ago

    Really looking forward to your explanation of Transformers!!!

  • @statquest

    @statquest

    11 months ago

    Thanks!

  • @abdullahbinkhaledshovo4969
    @abdullahbinkhaledshovo4969 10 months ago

    I have been waiting for this for a long time

  • @statquest

    @statquest

    10 months ago

    Transformers comes out on monday...

  • @sciboy123
    @sciboy123 10 months ago

    I had a little confusion about the final fully connected layer. It takes in separate attention values for each input word. But doesn't this mean that the dimension of the input depends on how many input words there are (thus it would be difficult to generalize for arbitrarily long sentences)? Did I misunderstand something?

  • @statquest

    @statquest

    10 months ago

    I can see why this might be confusing, because we have 2 input words and two inputs for attention going into the final fully connected layer. However, the number of attention inputs going into the final fully connected layer is not determined by the number of input words; instead, it is determined by the number of LSTM cells we have per layer (or, alternatively, the number of output values from the LSTMs per layer). In this case, we have 2 LSTM cells in a single layer, and thus, regardless of the number of input words, there will only be 2 attention values input into the final fully connected layer. If this is confusing, review how the attention values are created at 12:58 - regardless of the number of input words, we add the scaled values together to get one sum per LSTM.
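
    A quick sketch that demonstrates this point (random stand-in numbers): no matter how many input words there are, the attention step returns one value per encoder LSTM cell, so the fully connected layer always sees the same number of attention inputs.

```python
import numpy as np

n_cells = 2                                        # LSTM cells per layer, as in the video
for n_words in (2, 5, 50):                         # different input sentence lengths
    weights = np.full(n_words, 1.0 / n_words)      # stand-in for the softmax weights
    encoder_outputs = np.random.randn(n_words, n_cells)
    attention_values = weights @ encoder_outputs   # one sum per LSTM cell
    print(n_words, attention_values.shape)         # always prints (2,)
```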

  • @rajatjain7894
    @rajatjain7894 1 year ago

    Was eagerly waiting for this video

  • @statquest

    @statquest

    1 year ago

    Bam! :)

  • @orlandopalmeira623
    @orlandopalmeira623 2 months ago

    Hello, I have a doubt. Is the initialization of the decoder's cell state and hidden state a context vector that is the representation (generated by the encoder) of the entire input sentence? And what about each hidden state (from the encoder) used in the decoder? Are they stored somehow? Thanks!!!

  • @statquest

    @statquest

    2 months ago

    1) Yes, the context vector is a representation of the entire input. 2) The hidden states in the encoder are stored for attention.
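
    In PyTorch terms, a minimal sketch of those two points might look like this (illustrative sizes, not the video's code):

```python
import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=2, hidden_size=2, batch_first=True)
decoder = nn.LSTM(input_size=2, hidden_size=2, batch_first=True)

embedded_input = torch.randn(1, 2, 2)              # 1 sentence, 2 words, 2-d embeddings
encoder_outputs, (h, c) = encoder(embedded_input)  # one hidden state per input word

# (h, c) is the context vector that initializes the decoder...
decoder_output, _ = decoder(torch.randn(1, 1, 2), (h, c))
# ...and encoder_outputs is kept around so attention can reuse every word's hidden state.
```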

  • @orlandopalmeira623

    @orlandopalmeira623

    2 months ago

    @@statquest Thanks!!

  • @miladafrasiabi5499
    @miladafrasiabi5499 9 months ago

    Thank you for the awesome video. I have a question. What does the similarity score entail in reality? I assume that the Ws and Bs are being optimized by backpropagation in order to give larger positive values to synonyms, values close to 0 to unrelated words, and large negative values to antonyms. Is this a right assumption?

  • @statquest

    @statquest

    9 months ago

    I believe that is correct. However, if there is one thing I've learned about neural networks, it's that the weights and biases are optimized only to fit the data, and the actual values may or may not make any sense beyond that specific criterion.

  • @handsomemehdi3445
    @handsomemehdi3445 8 months ago

    Hello, thank you for the video, but I am confused that some terms introduced in the original 'Attention is All You Need' paper were not mentioned in the video, for example keys, values, and queries. Furthermore, in the paper, the authors don't talk about cosine similarity or LSTMs. Can you please clarify this a little better?

  • @statquest

    @statquest

    8 months ago

    The "Attention is all you need" manuscript did not introduce the concept of attention. That does done years earlier, and that is what this video describes. If you'd like to understand the "Attention is all you need" concept of transformers, check out my video on transformers here: kzread.info/dash/bejne/rKyF27aEaNTbqbw.html

  • @imkgb27
    @imkgb27 11 months ago

    Many thanks for your great video! I have a question. You said that we calculate the similarity score between 'go' and EOS (11:30). But I think the vector (0.01,-0.10) is the context vector for "let's go" instead of "go" since the input includes the output for 'Let's' as well as the embedding vector for 'go'. It seems that the similarity score between 'go' and EOS is actually the similarity score between "let's go" and EOS. Please make it clear!

  • @statquest

    @statquest

    11 months ago

    You can talk about it either way. Yes, it is the context vector for "Let's go", but it's also the encoding, given that we have already encoded "Let's", of the word "go".

  • @souravdey1227
    @souravdey1227 1 year ago

    Had been waiting for this for months.

  • @statquest

    @statquest

    1 year ago

    The wait is over! :)

  • @hasansayeed3309
    @hasansayeed3309 11 months ago

    Amazing video Josh! Waiting for the transformer video. Hopefully it'll come out soon. Thanks for everything!

  • @statquest

    @statquest

    11 months ago

    Thanks! I'm working on it! :)

  • @chessplayer0106
    @chessplayer0106 1 year ago

    Ah excellent this is exactly what I was looking for!

  • @statquest

    @statquest

    1 year ago

    Thank you!

  • @birdropping

    @birdropping

    1 year ago

    @@statquest Can't wait for the next episode on Transformers!

  • @gordongoodwin6279
    @gordongoodwin6279 6 months ago

    fun fact - if your vectors are scaled/mean-centered, cosine similarity is geometrically equivalent to the Pearson correlation, and the dot product is the same as the covariance (un-scaled correlation).
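
    A quick numeric check of this fact (random data):

```python
import numpy as np

a = np.random.randn(100)
b = 0.5 * a + np.random.randn(100)

ac, bc = a - a.mean(), b - b.mean()                # mean-center both vectors
cosine  = (ac @ bc) / (np.linalg.norm(ac) * np.linalg.norm(bc))
pearson = np.corrcoef(a, b)[0, 1]
print(np.isclose(cosine, pearson))                 # True: they match
```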

  • @statquest

    @statquest

    6 months ago

    nice.

  • @naomilago
    @naomilago 1 year ago

    The music sung before the video is contagious ❤

  • @statquest

    @statquest

    1 year ago

    :)

  • @yizhou6877
    @yizhou6877 10 months ago

    I am always amazed by your tutorials! Thanks. And when can we expect the transformer tutorial to be uploaded?

  • @statquest

    @statquest

    10 months ago

    Tonight!

  • @yoshidasan4780
    @yoshidasan4780 6 months ago

    First of all, thanks a lot Josh! You made it so understandable for us, and I will be forever grateful to you for this!! Have a nice time! And can you please upload videos on Bidirectional LSTMs and BERT?

  • @statquest

    @statquest

    6 months ago

    I'll keep those topics in mind.

  • @AntiPolarity
    @AntiPolarity 1 year ago

    can't wait for the video about Transformers!

  • @statquest

    @statquest

    1 year ago

    Me too!

  • @user-fj2qq7cp2n
    @user-fj2qq7cp2n 11 months ago

    Thank you very much for your explanation! You are always super clear. Will the transformer video be out soon? I have a natural language processing exam in a week and I just NEED your explanation to get through it 😂

  • @statquest

    @statquest

    11 months ago

    Unfortunately I still need a few weeks to work on the transformers video... :(

  • @markus_park
    @markus_park 1 year ago

    Thanks! This was a great video!

  • @statquest

    @statquest

    1 year ago

    Thank you very much! :)

  • @luvxxb
    @luvxxb 6 months ago

    thank you so much for making these great materials

  • @statquest

    @statquest

    6 months ago

    Thanks!

  • @michaelbwin752
    @michaelbwin752 10 months ago

    Thank you for this explanation. But my question is: how, with backpropagation, are the weights and biases adjusted in a model like this? If you could explain that I would deeply appreciate it.

  • @statquest

    @statquest

    10 months ago

    Backpropagation works for models like this just like it works for simpler models. You just use a whole lot of Chain Rule to calculate the derivatives of the loss function (which is cross entropy in this case) with respect to each weight and bias. To learn more about backpropagation, see: kzread.info/dash/bejne/e4Jmus97mKyypJc.html kzread.info/dash/bejne/m62ilNydca_PmZs.html and kzread.info/dash/bejne/eX-O0bGBiKrJfNI.html To learn more about cross entropy, see: kzread.info/dash/bejne/aHWmtdusZdSucbg.html and kzread.info/dash/bejne/qnZ5yphvhpzNitI.html
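
    As a tiny sketch of that idea (toy sizes, not the video's code): an autograd library applies the chain rule automatically, so the gradient of the cross entropy loss reaches every weight and bias in the model.

```python
import torch
import torch.nn as nn

fc = nn.Linear(4, 4)                                 # stand-in for the final layer
fc_input = torch.tensor([[-0.3, 0.3, 0.9, 0.4]])     # attention values + decoder outputs
target = torch.tensor([0])                           # index of the correct output word

loss = nn.CrossEntropyLoss()(fc(fc_input), target)   # cross entropy on the logits
loss.backward()                                      # chain rule, automatically
print(fc.weight.grad.shape)                          # a gradient for every weight: (4, 4)
```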

  • @sabaaslam781
    @sabaaslam781 1 year ago

    Hi Josh! No doubt, you teach in the best way. I have a request: I have enrolled in a PhD and am going to start my work on graphs. Can you please make a video about Graph Neural Networks and their variants? Thanks.

  • @statquest

    @statquest

    1 year ago

    I'll keep that in mind.

  • @Sarifmen
    @Sarifmen 11 months ago

    13:15 so the attention for EOS is just 1 number (per LSTM cell) which combines references to all the input words?

  • @statquest

    @statquest

    11 months ago

    Yep.

  • @akashat1836
    @akashat1836 2 months ago

    Hey Josh! Firstly, thank you so much for this amazing content!! I can always count on your videos for a better explanation! I have one quick clarification to make about the input to the fully connected layer. The first two numbers we get are [scaled(input1-cell1) + scaled(input2-cell1)] and [scaled(input1-cell2) + scaled(input2-cell2)], right? And the other two numbers are the outputs of the decoder, right?

  • @statquest

    @statquest

    2 months ago

    Yes.

  • @akashat1836

    @akashat1836

    2 months ago

    @@statquest Thank you for the clarification!

  • @shaktisd
    @shaktisd 5 months ago

    I have one fundamental question about how the attention model learns. Basically, a higher attention score is given to those pairs of words that have a higher softmax(Q.K) similarity score. Now the question is how the relationship in the sentence "The cat didn't climb the tree as it was too tall" is calculated, so that the model knows that in this case "it" refers to the tree and not the cat. Is it the large amount of data that the model reads that helps it distinguish the difference?

  • @statquest

    @statquest

    5 months ago

    Yes. The more data you have, the better attention is going to work.

  • @carloschau9310
    @carloschau9310 1 year ago

    thank you sir for your brilliant work!

  • @statquest

    @statquest

    1 year ago

    Thank you!

  • @Rykurex
    @Rykurex 10 months ago

    Do you have any courses with start-to-finish projects for people who are only just getting interested in machine learning? Your explanations of the mathematical concepts have been great, and I'd be more than happy to pay for a course that implements some of these concepts into real world examples

  • @statquest

    @statquest

    10 months ago

    I don't have a course, but hope to have one one day. In the meantime, here's a list of all of my videos somewhat organized: statquest.org/video-index/ and I do have a book called The StatQuest Illustrated Guide to Machine Learning: statquest.org/statquest-store/

  • @rikki146
    @rikki146 1 year ago

    When I see a new vid from Josh, I know today is a good day! BAM!

  • @statquest

    @statquest

    1 year ago

    BAM! :)

  • @arvinprince918
    @arvinprince918 10 months ago

    hey there Josh @statquest, your videos are really awesome and super helpful, so I was wondering when your video on the transformer model will come out

  • @statquest

    @statquest

    10 months ago

    All channel members and patreon supporters have access to it right now. It will be available to everyone else in a few weeks.

  • @madjohnshaft
    @madjohnshaft 11 months ago

    I am currently taking the AI cert program from MIT - I thank you for your channel

  • @statquest

    @statquest

    11 months ago

    Thanks and good luck!

  • @theelysium1597
    @theelysium1597 11 months ago

    Since you asked for video suggestions in another video: A video about the EM and Mean Shift algorithm would be great!

  • @statquest

    @statquest

    11 months ago

    I'll keep that in mind.

  • @lequanghai2k4
    @lequanghai2k4 1 year ago

    I am still learning this, so I hope the next video comes out soon

  • @statquest

    @statquest

    1 year ago

    I'm working on it as fast as I can.

  • @tangt304
    @tangt304 8 months ago

    Another awesome video! Josh, do you plan to talk about BERT? Thank you!

  • @statquest

    @statquest

    8 months ago

    I'll keep that in mind.

  • @hassanalmaazi3768
    @hassanalmaazi3768 1 year ago

    Hello professor, great video! But could you help with the key representation here, as there are three representations: query, key, and value? I cannot identify what the key representation is here, with the outputs from the encoder as the values and the output from the decoder as the query.

  • @statquest

    @statquest

    1 year ago

    The Query, Key, and Value terminology is from the specific type of "Self Attention" that is used in Transformers. Those terms do not apply in this situation (where we are just adding attention to a standard encoder-decoder model with LSTMs). However, I'll explain what Query, Key and Values are in my upcoming StatQuest on Transformers.

  • @hassanalmaazi3768

    @hassanalmaazi3768

    1 year ago

    @@statquest Thanks for answering. Also, what about the role of window size in the attention model? Does it play the same role as in continuous bag of words, or anything special beyond that?

  • @statquest

    @statquest

    1 year ago

    @@hassanalmaazi3768 Presumably. To be honest, I don't know much about local attention at this point other than the main ideas and the fact that it wasn't used in the original Transformers.

  • @andresg3110
    @andresg3110 1 year ago

    You are on Fire! Thank you so much

  • @statquest

    @statquest

    1 year ago

    Thank you! :)

  • @umutnacak
    @umutnacak 11 months ago

    Great videos! So after watching technical videos I think complicating the math has no effect on removing bias from the model. In the future one can find a model with self-encoder-soft-attention-direct-decoder, you name it, but it's still garbage in, garbage out. Do you think there is a way to plug a fairness/bias filter into the layers so that, instead of trying to filter the output of the model, you just don't produce unfair output? It's like preventing a disease instead of looking for a cure. Obviously I'm not an expert and am just trying to get a direction for my personal ethics research out of this naive question. Thanks!

  • @statquest

    @statquest

    11 months ago

    To be honest, I'm not super sure I understand what you are asking about. However, I know that there is something called "constitutional AI" that you might be interested in.

  • @umutnacak

    @umutnacak

    11 months ago

    @@statquest Thanks for the reply. OK, this looks promising. Actually they already have a model called Claude. Not sure whether this is the thing I'm looking for or not, but it's at least a direction for me to look further into. Thanks again!

  • @thanhtrungnguyen8387
    @thanhtrungnguyen8387 11 months ago

    can't wait for the next StatQuest

  • @statquest

    @statquest

    11 months ago

    :)

  • @thanhtrungnguyen8387

    @thanhtrungnguyen8387

    11 months ago

    @@statquest I'm currently trying to fine-tune RoBERTa, so I'm really excited about the upcoming video; I hope future videos will also talk about BERT and fine-tuning BERT

  • @statquest

    @statquest

    11 months ago

    @@thanhtrungnguyen8387 I'll keep that in mind.

  • @JL-vg5yj
    @JL-vg5yj 1 year ago

    Super clutch, my final is on Thursday. Thanks a lot!

  • @statquest

    @statquest

    1 year ago

    Good luck!

  • @Xayuap
    @Xayuap 1 year ago

    weeeeee, video for tonight, thanks a lot

  • @statquest

    @statquest

    1 year ago

    :)

  • @RafaelRabinovich
    @RafaelRabinovich 1 year ago

    To really create a translator model, we would have to work a lot through linguistics, since there are differences in word order, verb conjugation, idioms, etc. Going from one language to another is a big structural challenge for coders.

  • @statquest

    @statquest

    1 year ago

    That's the way they used to do it - by using linguistics. But very few people do it that way anymore. Now pretty much all translation is done with transformers (which are just encoder-decoder networks with attention, but without the LSTMs). Improvements in translation quality are gained simply by adding more layers of attention and using larger training datasets. For more details, see: en.wikipedia.org/wiki/Natural_language_processing

  • @juliank7408
    @juliank7408 4 months ago

    Phew! Lots of things in this model; my brain feels a bit overloaded, haha. But thanks! Might have to rewatch this

  • @statquest

    @statquest

    4 months ago

    You can do it!

  • @nogur9
    @nogur9 11 months ago

    Thank you very much!

  • @statquest

    @statquest

    11 months ago

    You're welcome!

  • @Thepando20
    @Thepando20 9 months ago

    Hi, great video SQ, as always! I had the same question as @manuelcortes1835, and I understand that the encodings are the LSTM outputs. However, at 9:02 the outputs are 0.91 and 0.38; maybe I am missing something here?

  • @statquest

    @statquest

    9 months ago

    Yes, and at 13:36 they are rounded to the nearest tenth so they can fit in the small boxes. Thus 0.91 is rounded to 0.9 and 0.38 is rounded to 0.4.

  • @Thepando20

    @Thepando20

    9 months ago

    Thank you, all clear!

  • @mrstriker1847
    @mrstriker1847 1 year ago

    Please add this to the neural network playlist! Or don't, it's your video; I just want to be able to find it when I'm looking for it to study for class.

  • @statquest

    @statquest

    1 year ago

    I'll add it to the playlist, but the best place to find my stuff is here: statquest.org/video-index/

  • @hujosh8693
    @hujosh8693 3 months ago

    why is the first output of the decoder <EOS>? is that correct?

  • @statquest

    @statquest

    3 months ago

    The first output from the decoder is "vamos". The first "input" to the decoder is <EOS>. We use that just to initialize the decoder.

  • @hujosh8693

    @hujosh8693

    3 months ago

    @@statquest I just read the original seq2seq paper and it mentioned that the reversed order of input is necessary. Is that what you mean? Thank you for replying.

  • @statquest

    @statquest

    3 months ago

    @@hujosh8693 They just said that performance was improved when they used the reversed order. It still worked either way, it was just better with the reversed input. In this example, we don't reverse the input word order. However, we have to initialize the decoder with something. In the original manuscript, they use the <EOS> token to initialize the decoder, and that's what I use here. However, I could have created a dedicated "initialize decoder" token if I wanted to, and that would have also worked. I decided to use <EOS> because that kept the model as small as possible, and that allowed me to clearly illustrate what is happening.
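
    A pseudocode-style sketch of that decoding loop (every name here is hypothetical) shows why any starter token would do; <EOS> is just the one that keeps the vocabulary small:

```python
def translate(encoder_outputs, h, c, embed, decoder_step, EOS, max_len=10):
    word, output = EOS, []               # initialize the decoder with <EOS>
    for _ in range(max_len):
        # one step: embed the previous word, run the decoder LSTM + attention,
        # and take the most probable next word from the softmax
        word, h, c = decoder_step(embed(word), h, c, encoder_outputs)
        if word == EOS:                  # the decoder emits <EOS> when it is done
            break
        output.append(word)
    return output
```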

  • @hujosh8693

    @hujosh8693

    3 months ago

    @@statquest i got it, thank you for replying.

  • @user-zt8vw7df4f
    @user-zt8vw7df4f 2 months ago

    Sorry, I could not quite understand: 1. Why can the decoder outputs (0.9, 0.4) be combined with the attention values (-0.3, 0.3)? What if their total length is not four? For example, if I have 3 decoder output values and 3 attention values, the total length of the fc layer input is six, not the four shown here. 2. What does "Do some math" mean? How did (-0.3, 0.3, 0.9, 0.4) become (-0.7, 4.7, -2, -2), and why does the maximum, 0.9, correspond to -2?

  • @statquest

    @statquest

    2 months ago

    What time points, minutes and seconds, are you referring to?

  • @elmehditalbi8972
    @elmehditalbi8972 10 months ago

    Could you do a video about BERT? Architectures like these can be very helpful in NLP, and I think a lot of folks will benefit from that :)

  • @statquest

    @statquest

    10 months ago

    I've got a video on transformers coming out soon.

  • @Mars.2024
    @Mars.2024 2 months ago

    Hi :) Thanks for your great NLP playlist, but... I still couldn't understand the concepts of attention, LSTMs, encoder-decoders, RNNs, etc. :( It's all vague to me, not clear to understand. Would you please suggest one or more conceptual references about these chapters? That would be great.

  • @statquest

    @statquest

    2 months ago

    It might be easier if you just watched my video on transformers: kzread.info/dash/bejne/rKyF27aEaNTbqbw.html

  • @jarsal_firahel
    @jarsal_firahel 8 months ago

    Before, I was dumb "guitar"
    But now, people say I'm smart "guitar"
    What has changed? "guitar"
    Now I watch..... StatQueeeeeest! "guitar guitar"

  • @statquest

    @statquest

    8 months ago

    bam!

  • @kaixuan5236
    @kaixuan5236 11 months ago

    Can you do videos on transformer networks and multi-head attention? Love your vids!

  • @statquest

    @statquest

    11 months ago

    I'm working on it.

  • @sagardesai1253
    @sagardesai1253 11 months ago

    Great video, thanks

  • @statquest

    @statquest

    11 months ago

    Thanks!

  • @faysoufox
    @faysoufox 1 year ago

    Thank you for this video. Just a comment, your website didn't display well on my phone.

  • @statquest

    @statquest

    1 year ago

    Noted!

  • @notyet1213
    @notyet1213 1 year ago

    Thanks!

  • @statquest

    @statquest

    1 year ago

    You bet!

  • @nikolamarkovic9906
    @nikolamarkovic9906 1 year ago

    for this video attention is all you need

  • @statquest

    @statquest

    1 year ago

    Ha!

  • @frogloki882
    @frogloki882 1 year ago

    Another BAM!

  • @statquest

    @statquest

    1 year ago

    Thanks!