Decoder-Only Transformers, ChatGPT's specific Transformer, Clearly Explained!!!

Transformers are taking over AI right now, and quite possibly their most famous use is in ChatGPT. ChatGPT uses a specific type of Transformer called a Decoder-Only Transformer, and this StatQuest shows you how they work, one step at a time. And at the end (at 32:14), we talk about the differences between a Normal Transformer and a Decoder-Only Transformer. BAM!
NOTE: If you're interested in learning more about Backpropagation, check out these 'Quests:
The Chain Rule: • The Chain Rule
Gradient Descent: • Gradient Descent, Step...
Backpropagation Main Ideas: • Neural Networks Pt. 2:...
Backpropagation Details Part 1: • Backpropagation Detail...
Backpropagation Details Part 2: • Backpropagation Detail...
If you're interested in learning more about the SoftMax function, check out:
• Neural Networks Part 5...
If you're interested in learning more about Word Embedding, check out: • Word Embedding and Wor...
If you'd like to learn more about calculating similarities in the context of neural networks and the Dot Product, check out:
Cosine Similarity: • Cosine Similarity, Cle...
Attention: • Attention for Neural N...
If you'd like to learn more about Normal Transformers, see: • Transformer Neural Net...
If you'd like to support StatQuest, please consider...
Patreon: / statquest
...or...
KZread Membership: / @statquest
...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
statquest.org/statquest-store/
...or just donating to StatQuest!
paypal: www.paypal.me/statquest
venmo: @JoshStarmer
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
0:00 Awesome song and introduction
1:34 Word Embedding
7:26 Position Encoding
10:10 Masked Self-Attention, an Autoregressive method
22:35 Residual Connections
23:00 Generating the next word in the prompt
26:23 Review of encoding and generating the prompt
27:20 Generating the output, Part 1
28:46 Masked Self-Attention while generating the output
30:40 Generating the output, Part 2
32:14 Normal Transformers vs Decoder-Only Transformers
#StatQuest

Comments: 305

  • @statquest
    9 months ago

    To learn more about Lightning: lightning.ai/ Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @razodactyl
    8 months ago

    Bruh. This channel is criminally underrated.

  • @statquest

    8 months ago

    Thanks!

  • @razodactyl

    5 months ago

    🎉🎉🎉 Love your work!

  • @EobardUchihaThawne

    4 months ago

    Bam!

  • @rickymort135

    1 month ago

    Well then criminals should rate it more highly

  • @razodactyl

    1 month ago

    @rickymort135 I laughed. ⭐️

  • @ayush_stha
    1 month ago

    This explanation is essential for anyone looking to understand how ChatGPT works. While more in-depth exploration is necessary to grasp all the intricacies fully, I believe this explanation couldn't be better. It's exactly what I needed.

  • @statquest

    1 month ago

    Thanks! I have a video that shows how all of these calculations are done using matrix algebra coming out soon.

  • @NTesla00
    8 months ago

    Haven't had a single stats course in over 3 years but I still keep up with this channel from time to time! Neural networks are way more complex than what I've ever had to deal with, but you manage to break down even these topics into bite size pieces...Bam!!

  • @statquest

    8 months ago

    Thank you so much!!!

  • @peerbr7849
    9 months ago

    And I thought you'd stop at ChatGPT. Thanks for never stopping to learn and teach!

  • @statquest

    9 months ago

    Thank you!

  • @xspydazx

    8 months ago

    Yes it's a good series

  • @gvlokeshkumar
    7 months ago

    Quests on attention, transformer and decoder only transformer are of immeasurable value! Thank you so much! Keep the quests coming!

  • @statquest

    7 months ago

    Thanks, will do!

  • @sidereal6296
    6 months ago

    I just want to say you are AMAZING. Thank you so much. I would personally love to see a video on backprop to train this, or even just training an RNN since we saw multi dim training, but not training once we get the state machine / unrolling involved. Loved the whole series 🎉

  • @statquest

    6 months ago

    Thanks! I have notes for training an RNN, but the equations get big really fast. That said, it really is the exact same techniques presented in other videos, just a lot more of them.

  • @karlnikolasalcala8208
    6 months ago

    YOU ARE THE BEST TEACHER EVER JOSHH!! I wish you can feel the raw feeling we feel when we watch your videos

  • @statquest

    6 months ago

    bam! :)

  • @spartan9729
    8 months ago

    Oh my. Thanks for the recap, it was so necessary for this video. It made the concept extremely clear.

  • @statquest

    8 months ago

    Glad it was helpful!

  • @cheolyeonbyun9640
    8 months ago

    Congrats on 1 million subs statquest!! All the Love from Korea!!

  • @statquest

    8 months ago

    Thank you very much!!! :)

  • @dineth9d
    6 months ago

    Hey Josh, I’ve been really digging your videos! They’re not only informative and helpful for my studies, but they’re also super entertaining. In fact, you’ve played a big part in my decision to continue pursuing AI Engineering. Could you please do a video about low-rank adaptation(LoRA). I am not good with that.

  • @statquest

    6 months ago

    Thanks! I'll keep that in mind.

  • @namunamu5258
    6 months ago

    Thank you so much! It is an amazing video and I haven't seen a video teaching AI/ML techniques like this anywhere! You're talented. And my research areas span Efficient LLM (LoRA, Quantization, etc). It cannot be better if I can see those concepts

  • @statquest

    6 months ago

    Glad it was helpful!

  • @danberm1755
    8 months ago

    Thanks again Josh! I noticed that many GPTs are decoder only. Thanks for clarifying! BTW saw that Yannic had a video on history rewrites. Probably not a topic for this channel, but still pretty cool 😁

  • @statquest

    8 months ago

    Interesting!

  • @konstantinlevin8651
    8 months ago

    Woahh, this is actually cool. We appreciate it a lot Josh!

  • @statquest

    8 months ago

    Thanks!

  • @ciciparsons3651
    8 months ago

    awesome, really helpful. Can't wait for another exciting episode!!

  • @statquest

    8 months ago

    More to come!

  • @antonindusek3725
    8 months ago

    Hello Josh, I am enjoying your videos as they are helping me so much with my studies as well as entertaining me. You are kind of the reason I decided to continue studying bioinformatics. Since you are covering ChatGPT and related topics now, could you maybe make a video about the AlphaFold architecture in the future? I understand it might not be your topic of interest, but I would love to learn it more deeply (pun intended). Thanks either way!

  • @statquest

    8 months ago

    I'll keep that in mind.

  • @al8-.W
    8 months ago

    This video is proof that repetition is key when teaching advanced concepts. I've watched many similar videos in the past and could never get all of these numbers to finally make sense in my mind. With your previous transformer video, I was getting closer but somewhat got lost again with the QKV values. Having this second video to watch in a row made it clearer for me what all these numbers do and why we need them.

  • @statquest

    8 months ago

    BAM! :)

  • @gabip265
    8 months ago

    Another great video as always! Would be amazing if you could continue with Masked Language Models such as BERT in the future!

  • @statquest

    8 months ago

    I'll keep that in mind.

  • @asheeshmathur
    8 months ago

    Delighted to watch one of the most brilliant videos. Hats off. Will join the channel tomorrow, first thing. Meanwhile, do you have one on the Probability Density Function?

  • @statquest

    8 months ago

    All of my videos are organized on this page: statquest.org/video-index/

  • @asheeshmathur

    8 months ago

    Thanks, all are good, but I could not find one on the Probability Density Function. Could you please point me to that specific video?

  • @bhaskersuri1541
    5 months ago

    This the most brilliant explanation that I have seen!!!!!! You are just awesome!!!!

  • @statquest

    5 months ago

    Wow, thanks!

  • @aseemlimbu7672
    8 months ago

    Triple BAM ❤❤👌👌

  • @statquest

    8 months ago

    Hooray! :)

  • @gustavsnolle8424
    8 months ago

    What an awesome video and channel😁👍. Would you consider doing a video on deep q learning models? I believe everyone would benefit from a video on such a fundamental topic. Thank you for your invaluable work🤩

  • @statquest

    8 months ago

    I'll keep that in mind.

  • @brucewayne6744
    8 months ago

    Perfect video! Quick question, how are you drawing your lines? This line style is awesome!

  • @statquest

    8 months ago

    I do everything in "keynote".

  • @chihhaohuang9858
    3 months ago

    BAM... You really killed it. Thanks for your explanation.

  • @statquest

    3 months ago

    Thank you!

  • @YumanKumar
    2 months ago

    Amazing Explanation! Double Bam 😊👍

  • @statquest

    2 months ago

    Thank you! 😃

  • @ayseguldalgic
    3 months ago

    Hey Josh! You're a gift to this planet 😍 so thanks for these awesome explanations..

  • @statquest

    3 months ago

    Wow, thank you!

  • @colmon46
    5 months ago

    Your videos are awesome! I've never thought I could learn machine learning in such an easy way. Love from china

  • @statquest

    5 months ago

    Thank you!

  • @josephsueke
    3 months ago

    incredible! this is such a clear explanation. thank you!

  • @statquest

    3 months ago

    Thank you!

  • @mitch7w
    8 months ago

    Thanks for the excellent explanation!

  • @statquest

    8 months ago

    You are welcome!

  • @juaneshberger9567
    7 months ago

    great vids, any chance you could make videos on Q-Learning, Deep Q-Learning, and other RL Topics! Keep up the good work!

  • @statquest

    7 months ago

    I hope to.

  • @tamoghnamaitra9901
    6 months ago

    Great Video. If possible, please do a video on model fine-tuning techniques like PEFT/LoRA

  • @statquest

    6 months ago

    I'll definitely keep that in mind.

  • @julianh7305
    5 months ago

    Hi Josh, great video, as always. I was wondering if you would also make a video about Encoder-only Transformers, like Google's BERT for instance, which can also be used for a great variety of tasks.

  • @statquest

    5 months ago

    I'll keep that in mind.

  • @exoticcoder5365
    8 months ago

    Hey Josh! Would you mind making videos about graph neural networks (GNN) or graph convolutional networks (GCN), and most importantly, the graph attention network (GAT)? I have briefly gone over the maths these days. I already knew the matrix manipulation stuff, but I think with your help it would be much clearer, like your Transformer series, especially the attention mechanism in the graph attention network (GAT). Many thanks 🙏🏻🙏🏻🙏🏻🙏🏻, appreciated!

  • @statquest

    8 months ago

    I'll keep that in mind.

  • @nahiyan8

    8 months ago

    GNNs really only have two main elements to them, the aggregate function and the update function. The different choices of these two functions give rise to the different variants, GCN, GAT, etc.

  • @yuanyuan524
    8 months ago

    Thanks for clear explanation

  • @statquest

    8 months ago

    Glad it was helpful!

  • @user-se8ld5nn7o
    26 days ago

    Hey, fantastic video as usual! Getting hard to find new ways to compliment, haha. Just one quick question since you mentioned positional encoding. When generating embeddings from GPT embedding models (e.g., text-embedding-3-large), do the embeddings contain both positional encoding layer and masked-self-attention info in the numbers?

  • @statquest

    26 days ago

    I believe it's just the word embeddings.

  • @linhdinh136
    8 months ago

    Thank you, Josh, for yet another excellent video on GPT. I find myself slightly puzzled regarding the input and output used to train the Decoder-only transformer in your example. In the normal Transformer model, the training input would be "what is statquest <EOS>," and the output would be "awesome <EOS>." However, in the case of the Decoder-only model, as far as I understand, the training input remains "what is statquest <EOS>," but the output becomes "what is statquest <EOS> awesome <EOS>." Could you help to clarify this? If my understanding is correct, I'm wondering how the Decoder-only transformer knows when to stop during inference, considering that there are two <EOS> tokens within the generated response.

  • @statquest

    8 months ago

    Because the first <EOS> is technically part of the input, we just ignore it during inference. Alternatively, you could use a different token to indicate the end of the input.

  • @hanhtrannguyen7791
    1 month ago

    Great explanation. It helped me a lot. A million hearts for you!!

  • @statquest

    1 month ago

    Thank you!

  • @xspydazx
    8 months ago

    ChatGPT is still a dialog system at its heart and has many different models which it gets results from. It softmaxes the outputs according to the intent, ... so intent detection plays a large role in the ChatGPT response. The transformers are doing major work. It's super interesting, despite battling away with VB.NET!

  • @AndreasAlexandrou-to5pw
    7 months ago

    Had a couple of questions regarding word embedding:
    - Why do we represent each word using two values? Couldn't we just use a single one?
    - What is the purpose of the linear activation function, can't we just pass the summation straight to the embedder output?
    Thanks for the video!

  • @statquest

    7 months ago

    1) Yes. In these examples I use 2 because that's the minimum required for the math to be interesting enough to highlight what's really going on. However, usually people use 512 or more embedding values. 2) Yes. The activation functions serve only to be a point where we do summations.
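
    In case a picture in code helps, here is a minimal sketch of such an embedding layer (assuming PyTorch; the tiny vocabulary and the 2-value embedding size are just the toy values from the video):

```python
import torch
import torch.nn as nn

vocab = ["what", "is", "statquest", "awesome", "<EOS>"]   # toy vocabulary from the video
token_to_id = {tok: i for i, tok in enumerate(vocab)}

# An embedding layer is a learned lookup table: it is equivalent to sending a
# one-hot vector through a linear layer with no (i.e. a "linear") activation.
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=2)

ids = torch.tensor([token_to_id[t] for t in ["what", "is", "statquest", "<EOS>"]])
print(embedding(ids))   # shape: (4 tokens, 2 embedding values), all learned by backpropagation
```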

  • @jacksonrudd3886
    8 months ago

    Thank you for the incredible content. Josh, quick question for you. I didn't see you mention vertically stacking the decoders in a way where the output of one decoder is the input for the next. From the 'Illustrated Transformer' page (I can't link b.c. youtube won't let me) it seems to be a core aspect of transformers. Thanks again.

  • @statquest

    8 months ago

    Personally, I wouldn't call that a core aspect. Unlike the ability to layer attention units, feeding the output of one decoder into the input of another in a stack, did not influence how the decoder-transformer (or even an encoder-decoder transformer) was designed. In contrast, the ability to layer attention had a big influence on how the transformer was designed.

  • @jacksonrudd3886

    8 months ago

    That makes sense. I just took another look at the Attention is All You Need paper, and it corroborates your explanation. Thank you. Request: a video on the practicalities of creating and training a production LLM. The data volume, the number of parameters and how the production architecture differs from the simplified educational model provided in this video. I think this would allow the audience to better understand what simplifications were (rightly) made for the purpose of explication. Also thank you so much for what you do. You are creating some of the best educational content on the internet. I am so jealous of this upcoming generation for having teachers like you :)

  • @statquest

    8 months ago

    @@jacksonrudd3886 That's right - in the Attention is all you need paper, they just mention the stacking (N=6) in passing and don't spend any time on it. And I'm planning on making the exact video that you want me to make. It may take some time, but it's in the works.

  • @terryliu3635
    1 month ago

    Another great session, thank you!!! Quick question, how do we decide what numbers to use for the Keys and Values?

  • @statquest

    1 month ago

    For the weights? Those are determined with backpropagation: kzread.info/dash/bejne/e4Jmus97mKyypJc.html
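
    As a rough sketch of what those learned weights look like in code (assuming PyTorch and the 2-value embeddings used in the video):

```python
import torch
import torch.nn as nn

d_model = 2                                    # embedding size used in the video
W_q = nn.Linear(d_model, d_model, bias=False)  # learned Query weights
W_k = nn.Linear(d_model, d_model, bias=False)  # learned Key weights
W_v = nn.Linear(d_model, d_model, bias=False)  # learned Value weights

x = torch.randn(4, d_model)                    # 4 encoded tokens (word embedding + position)
q, k, v = W_q(x), W_k(x), W_v(x)               # Queries, Keys, and Values for every token
# Backpropagation adjusts W_q, W_k, and W_v just like every other weight in the model.
```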

  • @garychow7719
    8 months ago

    thank you! the video is really nice

  • @statquest

    8 months ago

    Glad you liked it!

  • @RayGuo-bo6nr
    7 months ago

    What a wonderful video!!! BTW, When will you publish your CD? I will buy it too😄Thanks!

  • @statquest

    7 months ago

    BAM! Thank you!

  • @hoangminhan460
    7 months ago

    that's perfect. Can you do more lectures on LLMs? Thanks a lot.

  • @statquest

    7 months ago

    I'll keep that in mind.

  • @Lzyue0092youtube
    2 months ago

    your series almost save me...love from China💥

  • @statquest

    2 months ago

    Happy to help!

  • @random-ds
    2 months ago

    Hello Josh! First of all, thank you for this great video; as usual it's very simplified and straightforward. However, I have a little question. I saw your videos on transformers and this one, but every time I feel like the output is already there, waiting to be embedded and then predicted. What I mean is: why couldn't the answer be "great" instead of "awesome"? What were the probabilities given by the model for "great" and for "awesome" to make the final prediction? Here I gave the example of one extra word (great), but in real life it's the whole dictionary of words that can be predicted. So when generating the output, does it compute the "query" and "key" of the whole dictionary of words and then hopefully the right word has the best softmax probability? Thanks in advance for the clarification.

  • @statquest

    2 months ago

    No, you only calculate the queries, keys and values for the input tokens and the output as it is generated. However, in practice, instead of training on just a few phrases, we train on all of Wikipedia. As a result, the transformer can be much more expressive.
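
    As a sketch of that generate-one-token-at-a-time loop (the model call is a hypothetical stand-in for the whole decoder-only transformer returning the id of the most likely next token):

```python
def generate(model, prompt_ids, eos_id, max_new_tokens=20):
    """Greedy generation: queries, keys, and values are only ever computed for
    the prompt plus whatever has been generated so far, never for the whole vocabulary."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        next_id = model(ids)        # hypothetical: id of the most likely next token
        ids.append(next_id)
        if next_id == eos_id:       # stop once the model generates <EOS>
            break
    return ids
```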

  • @shamshersingh9680
    1 month ago

    Hi Josh, thanks a ton for making such a simple video on such a complex topic. Can you please explain what do you mean when you say "

  • @statquest

    1 month ago

    Your comment is missing the quote that you have from the video. Could you retype it in?

  • @shamshersingh9680

    1 month ago

    @@statquest Yeah. Can you please explain - Note :- If we were training the Decoder-only transformer, then we would use the fact that we made a mistake to modify weights and biases. In contrast when we are just using the model to generate the responses, then it doesn't really matter what words come out right now.

  • @MrHummerle
    8 months ago

    Hi there! Came to YT in hope you had a nice video of Rank Robustness. Would be amazing, if you wanted to make a video about it! Keep it up! Also: nice Dinosaurs!

  • @statquest

    8 months ago

    Thanks!

  • @nobiaaaa
    7 months ago

    Great explanation! Btw, what is the manuscript that first described the original GPT?

  • @statquest

    7 months ago

    I believe it is called "Improving Language Understanding by Generative Pre-Training"

  • @victorluo1049
    1 month ago

    Hello Josh, thank you again for your video! I had one question concerning training the model on next token prediction: As training data, would you use "What is statquest <EOS>" or "What is statquest <EOS> awesome"? What I mean by that is, when training the model by feeding it an input prompt such as "What is statquest <EOS>", do you also feed the model the word that comes after it (for calculating the loss), here "awesome"?

  • @statquest

    1 month ago

    The training inputs were "What is statquest <EOS> awesome", and the labels were "is statquest <EOS> awesome <EOS>". I'm working on a video that goes through how to code a transformer and how to prepare the training data. Hopefully it will be out soon.
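
    A rough sketch of that shifting in plain Python, using the tokens from the video just for illustration:

```python
tokens = ["What", "is", "statquest", "<EOS>", "awesome", "<EOS>"]

inputs = tokens[:-1]   # "What is statquest <EOS> awesome"
labels = tokens[1:]    # "is statquest <EOS> awesome <EOS>"

for seen, target in zip(inputs, labels):
    print(f"after seeing ...{seen!r}, the model should predict {target!r}")
```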

  • @victorluo1049

    1 month ago

    @@statquest Thank you for your answer. I see that the decoder also learns to embed the input then (here, on the input <EOS>, the label is "awesome"). I'm looking forward to your next video!

  • @adam.phelps
    2 months ago

    I really enjoyed this video!

  • @statquest

    2 months ago

    Thank you!

  • @101alexmartin
    4 months ago

    Thanks for the great video, Josh. I got a question for you. What should drive my decision on which model to choose when facing a problem? In other words, how to choose between an Encoder-Decoder transformer, Decoder-only transformer or Encoder-only transformer? For instance, why ChatGPT was based on a Decoder-only model, and not on a Encoder-Decoder model or an Encoder-only model (like BERT, which has a similar application)

  • @statquest

    4 months ago

    Well, the reason ChatGPT chose Decoder-Only instead of Encoder-Decoder was that it was shown to work with half as many parameters. As for why they didn't use an Encoder-Only model, let me quote my friend and colleague, Sebastian Raschka: "In brief, encoder-style models are popular for learning embeddings used in classification tasks, encoder-decoder-style models are used in generative tasks where the output heavily relies on the input (for example, translation and summarization), and decoder-only models are used for other types of generative tasks including Q&A." magazine.sebastianraschka.com/p/understanding-encoder-and-decoder#:~:text=In%20brief%2C%20encoder%2Dstyle%20models,other%20types%20of%20generative%20tasks

  • @yasboyy
    8 months ago

    I just have a question. The Word Embedding network contains weights that were obtained with backpropagation. But on which data was it trained? Is it like a huge superset of our current "what is StatQuest awesome EOS" vocabulary?

  • @statquest

    8 months ago

    In this case, the word embedding network was trained with the same input/output sequences I used for the entire decoder-only transformer. In other words, I trained all of the weights at the same time, rather than training the word embeddings separately.

  • @NJCLM
    8 months ago

    Awesome as always from you!! Now we only need a real tutorial with Python to create a mini transformer model. Hope it is in the making, as it's on my wish list.

  • @statquest

    8 months ago

    Working on it!

  • @iProFIFA
    8 months ago

    would love to learn about bidirectional transformers next ;-)

  • @statquest

    8 months ago

    I'll keep that in mind.

  • @cristinaprecioso

    8 months ago

    @@statquest Pleeeeease, Josh!

  • @ruksharalam173
    8 months ago

    A thorough explanation 😀

  • @statquest

    8 months ago

    Thanks!

  • @ruksharalam173

    8 months ago

    @@statquest if possible, could you please do a video on structural differences between llama and GPT?

  • @statquest

    8 months ago

    @@ruksharalam173 I'll keep that in mind.

  • @zhangeluo3947
    8 months ago

    My other question is: just for training the encoder-decoder transformer, can we just do Masked Self-Attention on all the ground-truth (known) decoded tokens at the same time? Is that right?

  • @statquest

    8 months ago

    I believe that is correct

  • @OliviaB-xu1vc
    1 month ago

    Thank you so much for another great video! I did have a question -- I'm confused about why you can train word embeddings with only linear activation functions because I thought that linear activation functions wouldn't allow you to learn non-linear patterns in the data, so why wouldn't you just not use an activation function at all in that case or use only one?

  • @statquest

    1 month ago

    For word embeddings specifically, we want to learn linear relationships among the words. This is illustrated in my video on word embeddings: kzread.info/dash/bejne/qJ2O1LGnesbSiZM.html And, technically, when coding a linear activation function, you just omit the activation function.

  • @aryamansinha2932
    2 months ago

    Hello! First off, thank you for this great content. I had a question (or two): could you give an example of how the embedding neural network is trained? i.e., what is the input and output of the embedding neural network during training? The neural networks I have worked with use statements that go along the lines of "given a set of pixels, determine whether the picture is a cat or not", and I do not know what the equivalent is with embedding neural networks. And a follow-up question: can the embedding neural network be the same for an encoder-decoder model and a decoder-only model?

  • @statquest

    2 months ago

    1) We don't train the embedding layer separately from the rest of the transformer. So the inputs are what you see here as well as the ideal outputs that we use for training. 2) Once trained, yes.

  • @aryamansinha2932
    1 month ago

    One more question... why is there one common FC layer used in the decoder bit (predicting "statquest" given "what is") vs. (predicting "awesome" when given the EOS token and "what is statquest")? I would think there would be separate FC layers for both of them, since one is predicting the next word and the other is predicting the word in the middle?

  • @statquest

    1 month ago

    If you use an encoder-decoder design, you can have different fully connected layers for the different parts of the input and output. However, they decided that this simpler model, with fewer parameters, worked better.

  • @cosmicfluke3718
    5 months ago

    We dont have to ask gpt to know stat quest is awesome reply from gpt BAM!!! BAM!! BAM!!

  • @statquest

    5 months ago

    BAM! :)

  • @victorluo1049
    7 months ago

    Hello Josh, thank you very much for your videos, they are by far the most informative I have seen! I had a question regarding the training of generative transformers: Can a generative encoder-decoder transformer (we expect it to behave like GPT-3 or LLaMA) be trained with next token prediction? Because from what I understand, for inference, to generate the output, we encode the input sentence, then we feed <EOS> to the decoder (embedding layer), then we get the prediction of the first token, which we re-feed to the decoder to generate the next token, until we get an <EOS>. So we get a sentence as output. However, if the training was done with next token prediction, it means that given an input (sentence), we only try to predict the very next token, which means that we encode the input, we feed <EOS> to the decoder, we get the token prediction and that's it. In that case, the decoder's embedding layer never sees tokens other than <EOS> in the training. So during inference, how could it comprehend tokens other than <EOS>? Maybe my assumption about the decoder only receiving <EOS> during next token prediction pre-training is false.

  • @statquest

    7 months ago

    To train an encoder-decoder we do something called "teacher forcing" which allows the network to predict the next token and then continue to predict all of the other tokens that are in the desired output, one token at a time. For details on how teacher forcing works, see: kzread.info/dash/bejne/fmx8rdmeiqy1nco.html
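
    A rough sketch of the teacher forcing loop (model and loss_fn are hypothetical placeholders; the point is that the known target tokens, not the model's own predictions, are fed back in):

```python
def teacher_forcing_loss(model, input_ids, target_ids, loss_fn):
    """At step t the decoder sees the *known* target tokens 0..t-1,
    even if its own earlier predictions were wrong."""
    total_loss = 0.0
    for t in range(len(target_ids)):
        context = input_ids + target_ids[:t]    # known tokens so far
        prediction = model(context)             # hypothetical: scores for the next token
        total_loss = total_loss + loss_fn(prediction, target_ids[t])
    return total_loss
```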

  • @victorluo1049

    7 months ago

    @@statquest thank you for your answer! If I understand correctly, it is thus impossible to train an encoder-decoder on next token prediction, as we need longer outputs. In your videos, we see an encoder-decoder which is trained seq2seq for a specific task, like translation. Is it possible to build a task-agnostic (like GPT-3) encoder-decoder by pre-training it seq2seq, with next sentence prediction for example? And concerning task-agnostic decoder-only models (like GPT), is it because the encoder and decoder share the same structure and weights that it is possible to pre-train it with next token prediction? Because even if the decoder's embedding layer only sees <EOS> during training, the encoder's embedding layer sees many different tokens, and since they share weights, the decoder's embedding layer also learns

  • @victorluo1049

    7 months ago

    When I say "task agnostic model", I mean a generative model to which you can feed any prompt as input, and it will generate text as an answer, so it's not specific to any task. So my question is about which tasks we can train these models on (like next token prediction, masked language modelling), so that they can be task agnostic. Sorry if I'm not clear enough!

  • @statquest

    7 months ago

    @@victorluo1049 I think we might be using different definitions for "next token prediction". To me, "next token prediction" can be applied to long outputs because we predict the output one token at a time given the preceding input and predictions. So whatever we predict, we feed it back into the model and then predict the next token. Thus, encoder-decoder and decoder or encoder only transformers all do "next token prediction". If you are using a different definition for "next token prediction", then you might come to a different conclusion.

  • @victorluo1049

    7 months ago

    @@statquest my definition of next token prediction is predicting token n+1 with tokens 1 to n as input. For example, let's say we have tokens 1 to 10 and I use an input window of 3 tokens; then during training:
    Sample 1. Input: tokens 1 to 3. Target: token 4. Details: we encode tokens 1 to 3 and feed them to the decoder, and we also feed <EOS> to the decoder's embedding layer. Then the decoder outputs a token prediction, which we hope to be equal to token 4. (The loss is probably calculated by comparing the probability distributions?)
    Sample 2. Input: tokens 2 to 4. Target: token 5.
    Sample 3. Input: tokens 3 to 5. Target: token 6.
    Etc. So the tokens 4, 5, 6 are predicted, but separately. There is no mechanism of feeding back an output (or the true value if we use teacher forcing) to the decoder to predict the next output. So here, during each training step, the decoder only receives <EOS> as input. Which would be problematic during inference, as we are supposed to feed each predicted token (which is different from <EOS>) back to the decoder to predict the next one. I may have a misunderstanding about this, but after reading GPT's first paper, it feels like this is basically how the training works; they wrote about maximizing the likelihood L(token n+1 | token 1, ..., token n). I would have understood if, for inference, we feed the initial prompt to the encoder, get a token prediction, add it to the initial prompt, then re-feed it to the encoder to get the next token, etc... But after seeing your video I saw that it is not done by iteratively feeding the encoder, but rather by iteratively feeding the decoder, so I am a bit confused (maybe this is actually only true for models that are trained seq2seq?)

  • @vuhuynh8740
    1 month ago

    StatQuest is awesome!!

  • @statquest

    1 month ago

    double bam!!! :)

  • @jossevandekerchove1020
    6 months ago

    Can you please make a video about GNN? You are reaaallyy good at explaining

  • @statquest

    6 months ago

    I'll keep that in mind.

  • @raminziaei6411
    7 months ago

    Hi Josh! I'm a little bit confused about the whole idea of generating the input first, comparing it to the actual input, and using that to modify weights and biases in the training phase. I cannot find anywhere on the internet where it is mentioned. All I see is that masked self-attention is used on the input sequence to make contextualized versions of each word, and then they are used to generate the target tokens. Nowhere can I find that generating the input sequence and comparing it to the actual input is part of the process. Can you please clarify?

  • @statquest

    7 months ago

    What time point, minutes and seconds, are you asking about?

  • @raminziaei6411

    7 months ago

    @@statquest The section "generating the next word in the prompt" in this video. mins 23-27

  • @statquest

    7 months ago

    @@raminziaei6411 The idea of comparing the predicted input sequence to the known input sequence comes from the original manuscript that describes decoder only transformers, GENERATING WIKIPEDIA BY SUMMARIZING LONG SEQUENCES. They say: "Since the model is forced to predict the next token in the input as well as [the output] error signals are propagated from both input and output time-steps during training."

  • @raminziaei6411

    7 months ago

    @@statquest Thanks Josh. A related question would be "Does this only hold true if we are considering causal decoder transformers where we have masked self-attention for both input and output sequences? For prefix decoder transformers, where the input has bidirectional self-attention (full self-attention) and the output has masked self-attention, it should not hold true. Is that correct? I mean if the input has bidirectional self-attention, there is no point in predicting the next token in the input, since it has already seen the whole input sequence.

  • @statquest

    7 months ago

    @@raminziaei6411 I think "Encoder-Only Transformers", like BERT, use full self-attention on the input, even though they still can predict the input. However, I don't know for sure.

  • @sreerajnr689
    16 days ago

    In an encoder-decoder transformer, the encoder was trained on English and the decoder was trained on Spanish, which made it possible to do translations. But here, only English is used for both encoding and decoding, which makes it impossible to convert the English encoding to Spanish output. So here, would we use both language datasets combined to train the model, to enable it to do translations as well?

  • @statquest

    16 days ago

    Usually the tokens are just fragments of words, instead of entire words. This gives the decoder-only transformer more flexibility in terms of the vocabulary, since it can form new words it was never even trained on by combining the tokens in new ways. In this way, you can train a decoder-only transformer to translate English to Spanish.
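
    Real tokenizers (for example byte pair encoding) learn their fragments from data, but a toy greedy longest-match split shows the idea (the fragment vocabulary below is made up):

```python
def toy_tokenize(word, fragments):
    """Greedily grab the longest known fragment; fall back to single characters."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in fragments or j == i + 1:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

fragments = {"stat", "quest", "aw", "esome"}   # made-up sub-word fragments
print(toy_tokenize("statquest", fragments))    # ['stat', 'quest']
print(toy_tokenize("awesome", fragments))      # ['aw', 'esome']
```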

  • @thanhtrungnguyen8387
    8 months ago

    In 25:42, when the model generates the wrong word, it will be fixed by backpropagation if this is the training process and it will be ignored if this is the generation process, right?

  • @statquest

    8 months ago

    Yes.

  • @AbuDurum
    8 months ago

    Hey Josh. I just want to ask what software you use to make the diagrams?

  • @statquest

    8 months ago

    I use keynote and show some of my tricks here: kzread.info/dash/bejne/laaAuqyAXainmM4.html

  • @JanKowalski-dm5vr
    2 months ago

    Great video. Do I understand correctly that the DNN which is responsible for word embedding not only converts the token to its representation as a numeric vector, but also already predicts which word should be returned next?

  • @statquest

    2 months ago

    In a transformer the embedding layer alone does not predict the next word because it wasn't specifically trained to do that the way a stand alone word embedding layer (like word2vec) would.

  • @JanKowalski-dm5vr

    2 months ago

    @@statquest But if we train the whole model at the same time, then backpropagation does not change the weights of the network responsible for word embedding in such a way that they learn to predict the next word? Or don't we train this first network while learning ?

  • @statquest

    2 months ago

    @@JanKowalski-dm5vr It might. But the whole model, word embeddings and attention and everything, is trained to predict the next word, or translate, or whatever it's trained to do. So it's hard to say exactly what the word embedding layer will learn.

  • @nivethanyogarajah1493
    2 months ago

    Incredible!

  • @statquest

    2 months ago

    Thank you!

  • @zhangeluo3947
    8 months ago

    Hey sir, in terms of training that decoder-only generative transformer, each time we train on an input prompt, do we just need x1, ..., xn (all tokens besides the last <EOS>), i.e. all those tokens' Masked Self-Attention vectors, to feed into the softmax, and then observe their outputs (which are the generated tokens immediately after them) against the real (ground-truth) prompt tokens? Is that true for training only?

  • @statquest

    8 months ago

    To be honest, I'm not sure I understand your question, but for training, we compare the known tokens to the predicted tokens.

  • @zhangeluo3947

    8 months ago

    @@statquest Okay, I get that.

  • @zhangeluo3947

    8 months ago

    @@statquest By the way, my stupid question is: for decoder-only, is the training just focused on the input prompt?

  • @statquest

    8 months ago

    @@zhangeluo3947 For decoder only, we can use the input and the output for training.

  • @txxie
    5 months ago

    Thank you, your video is great! But I'm really confused about the EOS token. Why does the model keep generating new words after generating the EOS token in the prompt? Should it just stop? What is the difference between the EOS tokens in the prompt and the output?

  • @statquest

    5 months ago

    I'm not sure I understand your question. After the input prompt, we insert an EOS token so that the decoder will be correctly initialized and then we generate output tokens until a second EOS is generated.

  • @txxie

    5 months ago

    @statquest Thank you for your reply, but most LLMs such as LLaMA and GPT do not use an EOS token to initialize the generation of the output.

  • @statquest

    5 months ago

    @@txxie The versions I've seen do. And if they don't, then they presumably use some other token that fills the same role. So, you can use one special token for both, or you can use two. Either way works.

  • @Primes357
    3 months ago

    I didn't understand just one part: how are the weights to calculate Q, K and V for each word in the sentence calculated? Is it also an optimization process? If so, how is the loss function calculated?

  • @statquest

    3 months ago

    At 5:08 I say that all of the Weights in the entire transformer are determined using backpropagation. Specifically, we use cross entropy as the loss function. For more details about cross entropy, see: kzread.info/dash/bejne/aHWmtdusZdSucbg.html and kzread.info/dash/bejne/qnZ5yphvhpzNitI.html
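
    As a tiny numeric sketch of that loss (assuming PyTorch; the scores are made up):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0, 0.1, 0.3]])   # made-up scores for a 5-token vocabulary
target = torch.tensor([0])                             # index of the known next token

probs = F.softmax(logits, dim=-1)                      # turn the scores into probabilities
loss = F.cross_entropy(logits, target)                 # same value as -log(probs[0, 0])
print(probs, loss)
```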

  • @kartikchaturvedi7868
    8 months ago

    Superrrb Awesome Fantastic video

  • @statquest

    8 months ago

    Thank you!

  • @zhangeluo3947
    8 months ago

    The last question is: how does stacking all the possible different cells of K, Q and V work? By just averaging their different outputs, or are they linearly transformed by a certain matrix W0?

  • @statquest

    8 months ago

    They concatenate the outputs (the attention values) into a vector, and then run that vector through a neural network that has the same number of outputs as the word embeddings.
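
    A minimal sketch of that concatenate-then-project step (assuming PyTorch; the sizes are the toy values from the videos):

```python
import torch
import torch.nn as nn

d_model, n_heads = 2, 2
head_outputs = [torch.randn(4, d_model) for _ in range(n_heads)]  # attention values from each head

concatenated = torch.cat(head_outputs, dim=-1)   # shape: (4 tokens, n_heads * d_model)
W_o = nn.Linear(n_heads * d_model, d_model)      # projects back to the embedding size
print(W_o(concatenated).shape)                   # torch.Size([4, 2])
```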

  • @shixiancui6870
    8 months ago

    Looks like when we encode the prompts, we only need to compute K and V for each input word, then generate outputs token by token starting with EOS. Is this true? I'm a bit confused here because the previous half of your video shows that we also need to compute the whole self-attention values for the prompts, i.e. Q*K*V. Edit: maybe it's because we need to reuse the same masked self-attention cell for encoding, and cannot avoid computing Q*K*V for prompts.

  • @statquest

    8 months ago

    I'm not sure I understand your comment. We calculate the "queries" for every token in the input and the output except for the final output <EOS>.

  • @shixiancui6870

    7 months ago

    @@statquest Yes, what I meant was that, although we calculate "queries" for the input tokens, we don't use them to generate the first output token (we only use the K and V vectors in this case), am I right?

  • @statquest

    7 months ago

    @@shixiancui6870 For the very first input token, in "masked attention" only the Value numbers play a significant role. The Query and Key numbers are still used, but they always result in using 100% of the Value numbers for the first token.

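    A small numeric sketch of why that works (assuming PyTorch; the similarity scores are random, only the mask matters):

```python
import torch

scores = torch.randn(3, 3)                               # query-key similarities for 3 tokens
mask = torch.triu(torch.ones(3, 3), diagonal=1).bool()   # hide everything that comes later
weights = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1)
print(weights[0])   # tensor([1., 0., 0.]) -> the first token uses 100% of its own Value numbers
```
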
  • @Max-ry9wl
    6 months ago

    Hey Josh! I need to solve a generation task using a decoder-only model. How should I preprocess the corpus for this? I think that splitting it into 2 parts and separating the parts with a token is a good solution. But I don't understand how to train this model and calculate the loss. The input for the model is tokens_first_part + tokens_second, and output[index of sep:] of the model is compared with input[indx of:]

  • @statquest

    6 months ago

    I'll create videos on how to code transformers and decoder-only transformers soon.

  • @ishaansehgal2570
    4 months ago

    I am a bit confused about why we are encoding the input prompt and generating the next predicted word for each word in the input prompt. We don't use this information at all when generating the output part, right? For generating the output part, we just use the KQV from the input prompt and continue from there? How are the two parts connected?

  • @statquest

    4 months ago

    That is correct - we don't use the output until we get to new stuff. However, if we wanted to, we could use the early output for training (since we know what the input is, we can compare it to what the decoder generates).

  • @pfever
    15 days ago

    I don't understand why we need the residual connections.... =''( Isn't the information from the word and position encoded values already included in the masked self-attention values? Or is most of the information lost, so we need to directly add the word and position encoded values?

  • @statquest

    15 days ago

    In theory you do not need them, but in practice they make it much easier to train large neural networks, since each component can focus on its own thing without having to maintain the information that came before it.
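
    In code, a residual connection is literally just an addition around each sub-layer (a sketch; self_attention and feed_forward stand in for the real sub-layers):

```python
def decoder_block(x, self_attention, feed_forward):
    # Each sub-layer only has to learn what to *add* to the information that is
    # already flowing through the network, which makes training easier in practice.
    x = x + self_attention(x)
    x = x + feed_forward(x)
    return x
```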

  • @pfever

    15 days ago

    @@statquest Bam! Thank you!

  • @cromi4194
    6 months ago

    Wow, this series culminating in a perfect explanation of GPT is the most magnificent piece of education in the history of mankind. Explaining the very climax of data science in this understandable, step-by-step way, so I can say that I understood it, should earn you the Nobel Prize in education! I am so grateful that you never used linear algebra in any of your videos. Professors at university don't understand that using linear algebra prevents everyone from actually understanding what is going on, leaving them only learning the formula. I have an exam in Data Science on Friday in a week. Can you make a quick video about spectral clustering by Wednesday evening? I will pay you 250$! :)

  • @statquest

    6 months ago

    Thanks! If I could make a video on anything in a week, that would be a miracle. Unfortunately, all of my videos take forever to make.

  • @Nana-wu6fb
    4 months ago

    Thanks!

  • @statquest

    4 months ago

    Thank you so much for supporting StatQuest!!! BAM! :)

  • @jazzeuphoria
    8 months ago

    Thanks!

  • @statquest

    8 months ago

    HOORAY!!!! Thank you so much for supporting StatQuest! BAM! :)

  • @laurentlusinchi519
    2 months ago

    The embedding values for "what" and "Statquest" are identical before the positional encoding. Is that not a typo ?

  • @statquest

    2 months ago

    That is correct. In order to illustrate how a decoder-only transformer worked, I had to make the model as simple as possible, and, as a result, some of the nuance in the values for the weights was lost.

  • @BooleanDisorder
    3 months ago

    Dude, can you make a video on state space models like Mamba? It's super interesting!

  • @statquest

    3 months ago

    I'll keep that in mind.

  • @BooleanDisorder

    3 months ago

    Bam! @@statquest

  • @by301892
    28 days ago

    I feel it's a bit misleading that it seems the tokens of the input sequence are fed in one by one, and that when you put in the first token, it predicts the second token but just ignores it, where in reality it feeds the entire sequence to predict the next target token, and on the next iteration you append the target token to the input sequence and predict the second target token, and so on. Right?

  • @statquest

    28 days ago

    At 26:28 I state that each token in the prompt is processed simultaneously.

  • @by301892

    27 days ago

    @@statquest gotcha. Thanks for the clarification, sensei.

  • @ytpah9823
    7 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 Decoder-only Transformers are used in ChatGPT to generate responses to input prompts.
    01:48 📊 Word embedding is a common method to convert words into numbers for neural networks like Transformers.
    08:09 🌐 Positional encoding is used in Transformers to maintain word order information in input data.
    10:53 🧩 Masked self-attention in Transformers helps associate words in a sentence by calculating similarities between words.
    16:28 🧮 Softmax function is used to determine the percentage of each word's influence on encoding a given word in self-attention.
    19:56 🧠 Reusing sets of weights for queries, keys, and values allows Transformers to handle prompts of different lengths.
    23:52 🤖 Decoder-only Transformers both encode input prompts and generate responses, enabling training and evaluation.
    25:58 🧠 The decoder-only Transformer process involves several steps, including word embedding, positional encoding, masked self-attention, residual connections, and softmax for generating responses.
    29:09 🤖 Masked self-attention in a decoder-only Transformer ensures it keeps track of significant words in the input when generating the output.
    32:23 🔄 Key differences between a decoder-only Transformer and a regular Transformer include using the same components for encoding and decoding in the decoder-only Transformer, using masked self-attention all the time, and including input and output in the attention mechanism.
    34:15 📚 During training, a regular Transformer uses masked self-attention on known output tokens to learn correct generation without cheating, while a decoder-only Transformer uses masked self-attention throughout the process.

  • @statquest

    7 months ago

    bam!

  • @sreerajnr689
    16 days ago

    Is it the same network that is being used in BERT and GPT? What makes them different?

  • @statquest

    16 days ago

    BERT is an encoder-only transformer. The major difference is that in BERT, attention can look at stuff that comes before and after, instead of just before.

  • @enchanted_swiftie
    8 months ago

    You didn't use the innocent, cozy, soft bear for softmax 🧸😢 _(in most of the parts)_

  • @statquest

    8 months ago

    Good point!!! I think I need a smaller bear. :)

  • @Modern_Nandi
    6 months ago

    Brilliant

  • @statquest

    6 months ago

    Thanks!

  • @shinoo5004
    7 months ago

    Hi Josh. Would you mind making a video on retention networks?

  • @statquest

    7 months ago

    I'll keep that in mind.

  • @PetrBorkovec-wk1ux
    1 month ago

    I don't understand how it is possible to add the word embedding numbers to the positional encoding and then to the self-attention. I think it is the same as adding together, for instance, length, weight, and temperature? Could anybody help me, please?

  • @sharjeel_mazhar
    1 month ago

    Can you please make a tutorial on it using Pytorch? And maybe train it on any text dataset, so that all of us get a gist of how we can make our own decoder only transformer? Like ChatGPT but mini scale?

  • @statquest

    1 month ago

    I'm working on it.

  • @sharjeel_mazhar

    1 month ago

    @@statquest Much appreciated sir! Any estimate when that video will be up?

  • @statquest

    1 month ago

    @@sharjeel_mazhar Soon.

  • @sharjeel_mazhar

    1 month ago

    @@statquest thank you sir, I'm from Pakistan and your videos are just.... I don't know how to explain it, this stuff is diamond, I don't know how it's available to everyone for free. Hats off to you! Keep making these kinda videos for all of us who are learning Data Science. God bless you!

  • @statquest

    1 month ago

    @@sharjeel_mazhar Thank you very much!

  • @user-ff1qi2yk9q
    6 months ago

    20:56 I think the value of the word "is" is miswritten; it should be 1.1, 0.9, not 2.9, 1.3. It should not be the same as the value of the word 'what'. Thank you for your videos, by the way; your explanation is awesome.

  • @statquest

    6 months ago

    That is correct. Sorry for the typo! :)

  • @SakvaUA
    6 months ago

    So, when does one pick an encoder-decoder architecture, and when is decoder-only sufficient?

  • @statquest

    6 months ago

    It might depend on the problem, but I think the real question might be when encoder-only is best vs. when decoder-only is best. And that definitely depends on the problem. Encoder-only transformers use unmasked attention all the time, so they are best for problems where looking ahead really is needed.

  • @edphi
    6 months ago

    Damn, I have learned the whole of decoder and encoder models from start to finish, including training and deploying, but I did not understand the math until you opened the Pandora's box. Now the sine and cosine and query, key, value and everything is flying out in my head.

  • @statquest

    6 months ago

    bam?

  • @adityarajora7219
    6 months ago

    Begging, Please teach us the BERT model, BAM!!

  • @statquest

    6 months ago

    I'll keep that in mind.

  • @SriramSrinivasan-fg9nx
    3 months ago

    Hi Josh, Can you please make a video about Encoder-only Transformers, like Google's BERT

  • @statquest

    3 months ago

    I'll keep that in mind.

  • @huiwencheng4585
    4 months ago

    Thank you!

  • @statquest

    4 months ago

    Thank you so much for supporting StatQuest!!! TRIPLE BAM! :)

  • @lolololo-cx4dp
    8 months ago

    In ChatGPT we can input random gibberish that certainly doesn't exist in the training tokens, but it can still generate an answer using that gibberish. Maybe they encode random gibberish to a specific token, hmm.

  • @statquest

    8 months ago

    ChatGPT uses letters and parts of words as tokens, rather than full words and phrases. This allows it to operate on words it was not trained on.

  • @lolololo-cx4dp

    8 months ago

    @@statquest But still, I put in random letters that are unique

  • @statquest

    8 months ago

    @@lolololo-cx4dp It breaks down the input into smaller bits - one or 2 letter fragments.

  • @lolololo-cx4dp

    8 months ago

    @@statquest I think you are correct; it's still having a hard time reversing my random gibberish.

  • @acasualviewer5861
    4 months ago

    The word "similarity" may be confusing when talking about self-attention. Often when speaking of embeddings people think the embedding only encodes meaning (as in word2vec), but in Transformers these embeddings encode some notion of relationship with other words that would help predict the next word (each head encoding different aspects of this). So when self-attention compares each word with the previous word, it isn't figuring out how "similar" the word "it" is to "pizza", but rather "how related" these are based on things like grammatical rules, word order, parts of speech, or even meaning. "Similarity" may be misleading here, though strictly speaking "how related" is a type of "similarity" score but only in the ML sense. Not the usual sense. There's nothing "similar" about "it" and "pizza" but the two are related.

  • @statquest

    4 months ago

    Noted

  • @acasualviewer5861

    4 months ago

    @@statquest you have no idea how long this notion blocked my understanding of attention... because all the examples always talk about word2vec and then talk about attention, so it's easy to get bewildered by this

  • @acasualviewer5861

    4 months ago

    @@statquest By the way.. even though I've seen many explanations of transformers, I learned a key point in this video and that is how residual connections can enable the embeddings to not have to carry position information. I had never thought of it that way.

  • @statquest

    4 months ago

    @@acasualviewer5861 Happy to help! I will say that the reason I used the word "similarity" when describing attention is that it is based on an unscaled metric of "similarity". The dot-product is the unscaled cosine similarity (for details about the cosine similarity, see: kzread.info/dash/bejne/l22JkrN6dsXMfKw.html ) . So I was trying to be consistent with the mathematical terminology.
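
    A quick numeric sketch of that relationship (assuming PyTorch; the two vectors are made up):

```python
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 1.0])

dot = torch.dot(a, b)                   # unscaled similarity, as used in attention
cosine = dot / (a.norm() * b.norm())    # the same similarity scaled to the range [-1, 1]
print(dot, cosine)                      # tensor(5.) tensor(0.7071)
```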