Transformer Neural Networks, ChatGPT's foundation, Clearly Explained!!!

Transformer Neural Networks are the heart of pretty much everything exciting in AI right now. ChatGPT, Google Translate, and many other cool things are based on Transformers. This StatQuest cuts through all the hype and shows you how a Transformer works, one step at a time.
NOTE: If you're interested in learning more about Backpropagation, check out these 'Quests:
The Chain Rule: • The Chain Rule
Gradient Descent: • Gradient Descent, Step...
Backpropagation Main Ideas: • Neural Networks Pt. 2:...
Backpropagation Details Part 1: • Backpropagation Detail...
Backpropagation Details Part 2: • Backpropagation Detail...
If you're interested in learning more about the SoftMax function, check out:
• Neural Networks Part 5...
If you're interested in learning more about Word Embedding, check out: • Word Embedding and Wor...
If you'd like to learn more about calculating similarities in the context of neural networks and the Dot Product, check out:
Cosine Similarity: • Cosine Similarity, Cle...
Attention: • Attention for Neural N...
If you'd like to support StatQuest, please consider...
Patreon: / statquest
...or...
KZread Membership: / @statquest
...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
statquest.org/statquest-store/
...or just donating to StatQuest!
paypal: www.paypal.me/statquest
venmo: @JoshStarmer
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
0:00 Awesome song and introduction
1:26 Word Embedding
7:30 Positional Encoding
12:53 Self-Attention
23:37 Encoder and Decoder defined
23:53 Decoder Word Embedding
25:08 Decoder Positional Encoding
25:50 Transformers were designed for parallel computing
27:13 Decoder Self-Attention
27:59 Encoder-Decoder Attention
31:19 Decoding numbers into words
32:23 Decoding the second token
34:13 Extra stuff you can add to a Transformer
#StatQuest #Transformer #ChatGPT

Comments: 1,100

  • @statquest
    @statquest9 ай бұрын

    To learn more about Lightning: lightning.ai/ Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @NeoShameMan

    @NeoShameMan

    9 ай бұрын

    Personally I find it clearer to link embeddings to hidden classes of words. I use character sheets as a metaphor, because what attention does is not look at the word itself but at the description in its sheet, with each attention head focusing on a different part of the description, which means a word's representation has multiple attentions over different hidden classes. Then at the end we look at the sheets, transformed at each layer, to find the next word. That also allows you to explain multimodality, i.e., making sure image input and text input share the same description sheet.

  • @statquest

    @statquest

    9 ай бұрын

    @@NeoShameMan Interesting.

  • @MrMehrd

    @MrMehrd

    9 ай бұрын

    Transformers need more than one video, one for each part (multi-head attention, word embedding (sine & cosine similarity), training, etc.). I've been waiting a long time to reach the state of the art.

  • @statquest

    @statquest

    9 ай бұрын

    @@MrMehrd I thought about doing it that way - and that was the original plan. But my video on Attention convinced me that most people would rather have a single video that has everything in it all at once. However, I've provided links in this video's description to full length videos on each topic you are interested in.

  • @NeoShameMan

    @NeoShameMan

    9 ай бұрын

    @@statquest Oh, you mentioned that you don't know why that number of heads was chosen: that's a hardware optimization, i.e., they can be split across GPUs or memory pools, or reduce bandwidth, such that they can be parallelized or computed sequentially on a resource-starved machine.

  • @jediknight120
    @jediknight1209 ай бұрын

    As a Computer Science professor who teaches Machine Learning, this is probably my most anticipated video ever. I regularly use your videos to brush up on/review ML concepts myself and recommend them to my students as study aids. You explain these concepts in the clear, straightforward way that I aspire to. Thank you!

  • @statquest

    @statquest

    9 ай бұрын

    Thank you! BAM! :)

  • @yizhou6877

    @yizhou6877

    9 ай бұрын

    Me too!

  • @Daigandar

    @Daigandar

    9 ай бұрын

    @@statquest Our data analysis professor also uses your videos as references and recommends you almost every session haha. I learned about this amazing channel from him.

  • @statquest

    @statquest

    9 ай бұрын

    @@Daigandar That's awesome! BAM! :)

  • @cienciadedados

    @cienciadedados

    9 ай бұрын

    Well said. I do the same!

  • @alefalfa
    @alefalfa9 ай бұрын

    It's kinda hilarious that StatQuest videos give the impression they were meant for 5-year-olds, yet they explore legitimately complex topics. No jargon, no overcomplicated diagrams. Josh really tries to explain things and not show off his superior understanding of neural networks. Thanks Josh!

  • @statquest

    @statquest

    9 ай бұрын

    Thank you! :)

  • @ran_domness

    @ran_domness

    7 ай бұрын

    Much like Richard Feynman.

  • @williamarias815

    @williamarias815

    3 ай бұрын

    BAM!

  • @aayushsmarten
    @aayushsmarten9 ай бұрын

    This is the complet-est, precious-est, pur-est, brilliant-est video ever. Can't imagine how much work you've put into creating these illustrations. It's just brilliant. Hats off.

  • @statquest

    @statquest

    9 ай бұрын

    Wow, thank you!

  • @lumiey

    @lumiey

    8 ай бұрын

    Did you just tokenize your comment?

  • @statquest

    @statquest

    8 ай бұрын

    @@lumiey I'm not sure I understand.

  • @lumiey

    @lumiey

    8 ай бұрын

    @@statquest He just separated words like complet, est, precious, est, pur, est... like a tokenizer does (e.g. following -> follow, ing)

  • @aayushsmarten

    @aayushsmarten

    8 ай бұрын

    @@lumiey Haha

  • @AmitBhor
    @AmitBhor9 ай бұрын

    22:12 8 heads because 8-GPU clusters are common and hence can compute in parallel. The embedding dimension is 512, which leaves each head with a query size of 64. Great video 👍

  • @statquest

    @statquest

    9 ай бұрын

    Awesome!

  • @TheTimtimtimtam

    @TheTimtimtimtam

    9 ай бұрын

    Thank you

  • @jakob2946

    @jakob2946

    3 ай бұрын

    Does the second part mean that each head only gets a portion of the embeddings?

  • @oliviervangoethem9365

    @oliviervangoethem9365

    Ай бұрын

    @@jakob2946 Curious as well. I looked it up and it seems that it's not true: every head is applied to all dimensions of the embedding. This also makes more sense to me, since the word embeddings should be looked at as a whole. Please correct me if I'm wrong.

  • @tekrunner987

    @tekrunner987

    Ай бұрын

    @@oliviervangoethem9365 I don't know about more recent transformers, but in the initial architecture each attention head is applied to a projection of input embeddings, with reduced dimensionality (in the original "Attention is all you need" paper: embeddings have a dimension of 512, and each of the 8 attention heads has a dimension of 64). The reason for this is spelled out in the original paper: "Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this."
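
To make the numbers in this thread concrete, here is a minimal NumPy sketch (not from the video; only the 512 and 8 come from the "Attention Is All You Need" paper, everything else is made up) showing that each head reads the full 512-dimensional encoding but projects it into its own 64-dimensional subspace before computing attention:

    import numpy as np

    d_model, n_heads = 512, 8
    d_head = d_model // n_heads                 # 512 / 8 = 64 dimensions per head

    seq_len = 6                                 # e.g. 6 tokens in the input sentence
    x = np.random.randn(seq_len, d_model)       # stand-in for word embeddings + positional encoding

    heads = []
    for _ in range(n_heads):
        # each head has its own learned projections from all 512 dims down to 64
        W_q = np.random.randn(d_model, d_head) * 0.02
        W_k = np.random.randn(d_model, d_head) * 0.02
        W_v = np.random.randn(d_model, d_head) * 0.02
        Q, K, V = x @ W_q, x @ W_k, x @ W_v     # each is (6, 64)
        scores = Q @ K.T / np.sqrt(d_head)      # scaled dot-product similarities
        scores -= scores.max(axis=1, keepdims=True)                           # stabilize the softmax
        weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax per row
        heads.append(weights @ V)               # (6, 64) output per head

    out = np.concatenate(heads, axis=1)         # concatenating the 8 heads restores (6, 512)
    print(out.shape)                            # (6, 512)

So both points above hold at once: every head sees all 512 input dimensions, but each head's queries, keys, and values live in a reduced 64-dimensional space, and concatenating the 8 heads brings the output back to 512.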

  • @midolion8510
    @midolion85109 ай бұрын

    I can't imagine how much effort it took for AI scientists to make this model. I really admire your illustration 😀

  • @statquest

    @statquest

    9 ай бұрын

    Thank you so much 😀!

  • @bobbymath2813
    @bobbymath28134 ай бұрын

    How a model like this was created is just beyond me. There are so many different moving parts. You could write a whole book on the fully-connected network alone. Add in all the other stuff? Wow. Thank you, Josh, for explaining this so well!

  • @statquest

    @statquest

    4 ай бұрын

    Thanks! It's a little easier to understand how this model was created in the first place if you follow the whole Neural Networks playlist. You'll see how things changed, one step at a time, to eventually end up with a transformer: kzread.info/dash/bejne/daWDyMttYa_MdNo.html

  • @bobbymath2813

    @bobbymath2813

    4 ай бұрын

    @@statquest Thanks Josh! I’ll check out that playlist. What you’re doing is so special to the world, and humanity is so indebted.

  • @linhdinh136
    @linhdinh1369 ай бұрын

    Thanks, Josh, for keeping your promise to make a video about Transformers. I learned a lot and truly appreciate your effort in explaining this concept. I just placed an order to buy your book and made a donation to support the channel. I'm looking forward to more content on Machine Learning and hope to see videos about GPT and BERT models. ♥

  • @statquest

    @statquest

    9 ай бұрын

    Thank you so much!!! I really appreciate your support (TRIPLE BAM!!!). I hope to do the GPT video soon, but we'll see - the timeline is a little out of my control right now.

  • @nilson_001
    @nilson_0018 ай бұрын

    Thanks to your engaging visualization and clear explanation, I've grasped the Stanford CS224n course! Your content is neatly condensed but doesn't miss a thing. It's like you've taken all the complex concepts and served them up on a platter. Triple Bam!

  • @statquest

    @statquest

    8 ай бұрын

    Congratulations! TRIPLE BAM! :)

  • @user-rs9zs8kj7o
    @user-rs9zs8kj7o6 ай бұрын

    You're the only person on social media who can explain such complicated topics in an easy-to-understand manner. Keep it up!

  • @statquest

    @statquest

    6 ай бұрын

    Thanks, will do!

  • @gvascons
    @gvascons9 ай бұрын

    And so we reach the state-of-art!! Congrats Josh :D

  • @statquest

    @statquest

    9 ай бұрын

    Hooray! :)

  • @eating_a_cookie

    @eating_a_cookie

    9 ай бұрын

    Triple bam.

  • @pw7225

    @pw7225

    9 ай бұрын

    2017...

  • @fgfgdfgfgf
    @fgfgdfgfgf8 ай бұрын

    I've been looking for a tutorial about transformers for a long time. This is the smoothest tutorial. It does not hide any complexities (making me confident that I actually understand the concept instead of a dumbed-down version for mortals who won't ever end up using the knowledge), but it also does not get lost while explaining those complexities, and it clearly calls out what else I can learn to understand the side concepts better. Super!!!

  • @statquest

    @statquest

    8 ай бұрын

    Thank you very much! :)

  • @coolsai
    @coolsai9 ай бұрын

    BEST EVER VIDEO ABOUT CHAT GPT! I watched many videos but this video is just BAM!

  • @statquest

    @statquest

    9 ай бұрын

    Thank you!

  • @VeloFX
    @VeloFX8 ай бұрын

    The explanations in your videos are incredibly precise and efficient at the same time. There is nothing better to watch when learning any ML topic! 👍

  • @statquest

    @statquest

    8 ай бұрын

    Thank you very much! :)

  • @darshagarwal8307
    @darshagarwal83079 ай бұрын

    As always I loved the video! Thank you so much for producing such easy, fun and clear videos explaining these concepts. Always looking forward to more!

  • @statquest

    @statquest

    9 ай бұрын

    You are so welcome!

  • @MinChitXD
    @MinChitXD4 ай бұрын

    I've only been learning machine learning for a month; I'm a pure business major. I've been working as a Data Analyst for 2 months as an intern, and I believe machine learning will be essential if I want to go further in this industry. Out of all the tutorial videos I've watched, yours laid out the clearest and most concise concepts for me to understand. The videos walked me through the whole series, from neural networks, backpropagation, and cross entropy with backpropagation, to recurrent, LSTM and convolutional neural networks, and lastly, this video. I really appreciate your clear explanations and amazing storytelling; your content always makes me eager to keep learning machine learning on my own. Thanks a lot

  • @statquest

    @statquest

    4 ай бұрын

    Thank you very much! I'm glad my videos are helpful.

  • @maximeentsi2205
    @maximeentsi22059 ай бұрын

    I tried really hard to understand transformers a few months ago; I can say that this video is a must-have. Thank you Josh

  • @statquest

    @statquest

    9 ай бұрын

    Glad it was helpful!

  • @urazc5917
    @urazc59176 ай бұрын

    This video is a treasure in a world where everything is explained in 2 minutes. Thank you Josh!

  • @statquest

    @statquest

    6 ай бұрын

    Thank you very much!

  • @CharlesPayne
    @CharlesPayneАй бұрын

    Not to be a buzz kill, but I suffered a bad traumatic brain injury in my late 40's after being hit by an SUV while stopped on a motorcycle. I'm blessed I survived. At the time my job dealt with engineering and architecting IT solutions, and I was looking forward to advancing my career into AI and Machine Learning. I was in a coma for a while and I lost a lot of what I used to know. I now have learning disabilities and memory issues. I have improved some over the last years, but if I'm being honest with myself, I wouldn't want me as an engineer, so I'm trying to move into management. I'm glad I ran across these videos. I purchased the .pdf books and notebooks today and I can honestly say they are well worth it. Josh, I'm so glad you created this material. Your books and notebooks etc. are helping me slowly understand complex topics in hopes that I can stay relevant and continue to advance my career. Thanks again!

  • @statquest

    @statquest

    Ай бұрын

    TRIPLE BAM!!! Thank you so much for supporting StatQuest and I wish you the best as you continue to learn about ML and Data Science! :)

  • @Joy-dn8yz
    @Joy-dn8yz9 ай бұрын

    Words cannot describe how happy I am to be able to watch this video. You really helped me with my studies. It is you who made me so interested in AI and think that I am actually able to understand what is going on. Thank you for your simplified models. They really help when learning more complex stuff on this or that theme. But every time there's a theme I do not know, the first thing I do is go to StatQuest. Thank you, Josh!

  • @statquest

    @statquest

    9 ай бұрын

    Hooray!!! Thank you very much!

  • @kurtosis4573
    @kurtosis45738 ай бұрын

    I just finished watching almost all the videos on this channel and I have to say that this is probably the best place to learn stats and machine learning. I also bought the ML book, and it captures the essence of the teaching style on this channel really well and is very handy to go back to and quickly look up some details. You are doing great work!

  • @statquest

    @statquest

    8 ай бұрын

    Wow, thanks!

  • @meirgoldenberg5638

    @meirgoldenberg5638

    8 ай бұрын

    Which book?

  • @statquest

    @statquest

    8 ай бұрын

    @@meirgoldenberg5638 I think he is referring to my book, The StatQuest Illustrated Guide to Machine Learning at statquest.org/statquest-store/

  • @jordanmuniz6167
    @jordanmuniz61674 күн бұрын

    Your videos have to be the best instance of teaching I have ever seen! Thank you for the amazing work!

  • @statquest

    @statquest

    4 күн бұрын

    Thank you!

  • @limitlesslife7536
    @limitlesslife7536Ай бұрын

    you are a blessing for anyone who is a visual learner. You have the gift to be able to explain complex topics in easy way.

  • @statquest

    @statquest

    Ай бұрын

    Thank you!

  • @kosukenishio9670
    @kosukenishio96709 ай бұрын

    For slowpokes like me: the example assumes a total vocabulary size of 4 for each language. Thanks Josh for providing some of the best content on the subject! Finally the K, Q, V made clear sense

  • @statquest

    @statquest

    9 ай бұрын

    BAM! :)

  • @TheTimtimtimtam

    @TheTimtimtimtam

    9 ай бұрын

    Thank you from a fellow slowpoke

  • @tdv8686
    @tdv86869 ай бұрын

    OMG, I waited for it for so long!!, thank you, Josh!

  • @statquest

    @statquest

    9 ай бұрын

    bam! :)

  • @emanelsheikh6344
    @emanelsheikh63448 ай бұрын

    I've searched a lot about the transformers but seriously this is the best explanation I've ever got. Amazing!❤

  • @statquest

    @statquest

    8 ай бұрын

    Wow, thank you!

  • @michaelongmk
    @michaelongmk9 ай бұрын

    Love these Quests! Kudos for explaining these complex data science concepts in layman terms but also with great depth ❤

  • @statquest

    @statquest

    9 ай бұрын

    Thank you!

  • @user-ls9zb3dy1i
    @user-ls9zb3dy1i9 ай бұрын

    Your neural networks playlist including this video gave me an intuitive understanding of transformers in less than a week which is something that would have taken an entire semester otherwise. I stumbled onto them while searching for a better understanding of Q,K,V, which everyone seems to say is as simple as querying a database…but what does that even mean?? Your explanations are brilliant, and I will be sharing with everyone I know who wants to learn more about this topic. I look forward to future videos. Thank you!

  • @statquest

    @statquest

    9 ай бұрын

    Thank you very much!!! I really appreciate it.

  • @harryspeaks
    @harryspeaks7 ай бұрын

    Definitely the clearest walkthrough of the Transformer. It's very good that you put heavy emphasis on the parallelizability of the Transformer, since IMO it is the most important feature that made Transformers so useful

  • @statquest

    @statquest

    7 ай бұрын

    agreed!

  • @apah
    @apah9 ай бұрын

    Man oh man, the crazy timing.. I just watched your video on attention yesterday!! TRIPLE BAAAAM, you rock Josh, thanks :D

  • @statquest

    @statquest

    9 ай бұрын

    BAM! :)

  • @tangchunxin979
    @tangchunxin9799 ай бұрын

    The videos are really fantastic!!! First time ever that helps me understand every single detail!! Thank you!!! Plz keep posting!!

  • @statquest

    @statquest

    9 ай бұрын

    Thank you! Will do!

  • @TudorTatar-ny8zw
    @TudorTatar-ny8zw9 ай бұрын

    The positional encoding explanation truly was a BAM!

  • @statquest

    @statquest

    9 ай бұрын

    Hooray! :)

  • @isseym8592
    @isseym85926 ай бұрын

    As a computer science student getting into the field of NLP, I really can't thank you enough for making a video that breaks down the Transformer like this. Our uni doesn't go in depth on NLP-related topics, and even with the very brief explanations they do give, the uni expects us to have a full understanding of NLP. I can't thank you enough!

  • @statquest

    @statquest

    6 ай бұрын

    Thanks!

  • @fgh680
    @fgh6809 ай бұрын

    The most AWESOME 36 MINUTES - What an explanation of Transformers!

  • @statquest

    @statquest

    9 ай бұрын

    Thank you very much!!! BAM! :)

  • @prathameshdinkar2966
    @prathameshdinkar29664 ай бұрын

    So nicely explained! I have searched for "how transformers work" but no one on YouTube explained it with both the concept and the math! Keep the good work going 😁👍

  • @statquest

    @statquest

    4 ай бұрын

    Glad you liked it!

  • @aanchaldogra
    @aanchaldogra8 ай бұрын

    I owe my data science job to so many beautiful people on youtube, you are one of them. Thank you

  • @statquest

    @statquest

    8 ай бұрын

    Wow, thank you!

  • @vinny2688
    @vinny26889 ай бұрын

    THIS is what I've been waiting for!

  • @statquest

    @statquest

    9 ай бұрын

    Thanks!

  • @rikki146
    @rikki1469 ай бұрын

    That is a lot of stuff in a single video!! For those who are wondering, ChatGPT is a decoder-only neural network, and the main difference between an encoder and a decoder is that a decoder uses masked attention - thus ChatGPT is essentially an autoregressive model. Notice how ChatGPT generates a response in sequential order, from left to right. Anyway, good stuff!

  • @statquest

    @statquest

    9 ай бұрын

    Yep - I'd like to make a GPT video just to highlight the explicit use of masking (the self attention in the decoder in this video used masking implicitly).

  • @technicalbranch99

    @technicalbranch99

    9 ай бұрын

    @@statquest Please do that video soon :) BAM
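
As a rough sketch of the masking being discussed here (a toy example, not the video's model): masked (causal) self-attention sets the similarity scores for later tokens to minus infinity, so after the softmax each token can only attend to itself and to earlier tokens. That is what lets a decoder-only model like ChatGPT generate text strictly left to right.

    import numpy as np

    def masked_self_attention(Q, K, V):
        """Scaled dot-product attention with a causal (look-ahead) mask."""
        seq_len, d_head = Q.shape
        scores = Q @ K.T / np.sqrt(d_head)
        mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)  # True where j > i (future tokens)
        scores = np.where(mask, -np.inf, scores)                      # future tokens get -infinity
        scores -= scores.max(axis=1, keepdims=True)                   # stabilize the softmax
        weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
        return weights, weights @ V

    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(4, 8))          # 4 tokens, 8-dimensional toy encodings
    weights, _ = masked_self_attention(Q, K, V)
    print(np.round(weights, 2))                  # row i has zeros in every column after i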

  • @shaktisd
    @shaktisd4 ай бұрын

    One of the best explanations of the encoder/decoder architecture, especially the self-attention part. I really liked the way you colored Q, K, V to keep track of how things are moving. Looking forward to more such videos

  • @statquest

    @statquest

    4 ай бұрын

    Thanks! I've also got a video on Decoder-Only Transformers: kzread.info/dash/bejne/lIVppNGonLufcco.html and I'm working on one that shows the matrix algebra (color coded) of how these things are computed.

  • @shaktisd

    @shaktisd

    4 ай бұрын

    @@statquest are all these topics covered in your book ? Would love to read them in printed format

  • @statquest

    @statquest

    4 ай бұрын

    @@shaktisd They'll be in my next book.

  • @shaktisd

    @shaktisd

    4 ай бұрын

    @@statquest looking forward to the next edition.

  • @nobiaaaa
    @nobiaaaa6 ай бұрын

    Only videos like this can have "clearly explained" in the title.

  • @statquest

    @statquest

    6 ай бұрын

    bam!

  • @wd8222
    @wd82229 ай бұрын

    Best explanation I found on the whole Internet! Although I admit I needed 2 full passes. Well done Josh!

  • @statquest

    @statquest

    9 ай бұрын

    Thanks! - Yes, this video packs in a ton of information, but I couldn't figure out any other way to make it work.

  • @REV_Pika
    @REV_Pika28 күн бұрын

    It's amazing how you fit a 2-hour lecture into just 30 minutes and explain it way better. After finishing this video and realizing what I just grasped, it's mind-blowing how you can make such a complicated subject easy to understand. Thank you very much!

  • @statquest

    @statquest

    28 күн бұрын

    Glad it helped!

  • @chandraprakash934
    @chandraprakash9349 ай бұрын

    This video is amazing just as other videos of yours ! Thank you for spreading knowledge ! Eagerly waiting for upcoming videos.

  • @statquest

    @statquest

    9 ай бұрын

    Thank you!

  • @tupaiadhikari
    @tupaiadhikari8 ай бұрын

    Prof. Starmer, Thank You very much. You are an inspiration to all the aspiring Machine Learning Enthusiasts. Respect and Gratitude from India. #RESPECT

  • @statquest

    @statquest

    8 ай бұрын

    Thank you very much!

  • @williamflinchbaugh6478
    @williamflinchbaugh64788 ай бұрын

    Great video! I'd love to see a pytorch + lightning tutorial on transformers similar to the LSTM video!

  • @statquest

    @statquest

    8 ай бұрын

    That's the plan!

  • @TekeshwarHirwani
    @TekeshwarHirwani5 ай бұрын

    Best video on Transformers I have seen on KZread! Amazing! Huge respect for you

  • @statquest

    @statquest

    5 ай бұрын

    Thank you so much 😀!

  • @srikanthganta7626
    @srikanthganta76265 ай бұрын

    Thank you for such amazing illustrations! HOW I WISH I HAD THIS DURING MY STUDIES, BUT I'M JUST GLAD I GET TO LEARN THESE AS A WORKING PROFESSIONAL. THANK YOU SO MUCH FOR ALL THE CONTENT YOU MAKE. I'M SURE YOU MAKE THOUSANDS OF LIVES BETTER. YOU'RE TRULY AN INSPIRATION JOSH!

  • @statquest

    @statquest

    5 ай бұрын

    Thank you!

  • @berkk1993
    @berkk19939 ай бұрын

    I've spent a good deal of time studying attention, the critical concept behind transformers. Don't anticipate a natural understanding of the Q, K, and V parameters. We aren't entirely certain about their function; we can only hypothesize. They could still function effectively even if we used four parameters instead of three. One crucial point to remember is that our intuitive understanding of neural networks (NNs) is far from complete. The matrices for Q, K, and V aren't static; they're learned via backpropagation over lengthy training periods, thus changing over time. As a result, it's not as certain as mathematical operations like 1+1=2. The same applies to the head count in transformers; we can't definitively state whether eight is a good number or not. We don't fully grasp what each head is precisely doing; we can only speculate.

  • @GreenCowsGames

    @GreenCowsGames

    9 ай бұрын

    In visual transformers, we do understand what each head does. I guess heads trained on language are more difficult to interpret for us.

  • @nich.1918

    @nich.1918

    9 ай бұрын

    @@GreenCowsGames no, we don’t know that they do.
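
For what it's worth, the point that the Query, Key, and Value matrices are ordinary learned weights, with no fixed hand-designed meaning, can be seen in a tiny PyTorch sketch like this one (made-up sizes and a meaningless loss, just to show the weights changing with backpropagation):

    import torch

    torch.manual_seed(0)
    d_model, d_head, seq_len = 16, 4, 5

    # the Query, Key, and Value weights start out random...
    W_q = torch.nn.Linear(d_model, d_head, bias=False)
    W_k = torch.nn.Linear(d_model, d_head, bias=False)
    W_v = torch.nn.Linear(d_model, d_head, bias=False)
    optimizer = torch.optim.SGD(
        list(W_q.parameters()) + list(W_k.parameters()) + list(W_v.parameters()), lr=0.1)

    x = torch.randn(seq_len, d_model)            # toy token encodings
    before = W_q.weight.detach().clone()

    q, k, v = W_q(x), W_k(x), W_v(x)
    attention = torch.softmax(q @ k.T / d_head**0.5, dim=-1) @ v

    loss = attention.pow(2).mean()               # meaningless loss, just to drive a gradient
    loss.backward()
    optimizer.step()

    print((W_q.weight - before).abs().max())     # non-zero: whatever Q "means" is learned, not designed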

  • @patriciachang5079
    @patriciachang50799 ай бұрын

    You really explain these concepts in a clear way! Will you do more explanation videos on statistics, like the Cox model for survival? Thanks! :)

  • @statquest

    @statquest

    9 ай бұрын

    I'll keep that in mind.

  • @AntiLawyer0
    @AntiLawyer05 ай бұрын

    The best video that explains Transformer I've ever seen. Thanks for your contribution!

  • @statquest

    @statquest

    5 ай бұрын

    Thank you very much! :)

  • @andrewdouglas9559
    @andrewdouglas95597 ай бұрын

    I don't know how I'd learn DataScience/ML without this channel. Thanks so much for doing what you do!

  • @statquest

    @statquest

    7 ай бұрын

    Happy to help!

  • @vidbot4037
    @vidbot40379 ай бұрын

    HE HAS DONE IT YET AGAIN!

  • @statquest

    @statquest

    9 ай бұрын

    Thanks!

  • @matthewhaythornthwaite9910
    @matthewhaythornthwaite99107 ай бұрын

    Thanks Josh, another great video, I’ve been following your channel for years now and your videos have massively helped me to change career so huge thanks. On to the transformer network, there’s something about the positional encoding that makes me feel a little uneasy. It feels we’ve gone through great effort to train a word embedding model that can cluster similar words together in n-dimensional word embedding space (where n can be very large, often 1,000). By then applying positional encoding before our self-attention, whilst you very clearly explained with your example how important adding this information to the model is, seems to me to mess up all the effort we put into word embedding to get similar words clustered together. The word pizza, instead of being positioned in the same place can now jump around word/positional embedding space. Instead of one representation of pizza in space, it can now move around to be in many different positions, and not move locally around its own 'area' but because we add the positional encoding to the word embedding, scaled equally, it can jump around a great deal of space. To me it would seem adding this much freedom to where the word pizza can be represented in space would make it much much harder to train the model. Is my understanding correct or is there something I’m missing?

  • @statquest

    @statquest

    7 ай бұрын

    I have a couple of thoughts on this. Maybe I should make a short video called "some thoughts about positional encoding". Anyway, here they are... Thought #1: Remember the positional encoding is fixed, so the word embedding values have to take them into account when training. For example, since all of the positional encoding value are between -1 and 1, it is possible that the word embedding values will have larger magnitudes and thus, not move around a lot when position is added to them. Thought #2: Because the periods of the squiggles get larger for larger embedding positions, after about the 20th position, the position encoding values end up alternating 1 and 0 (in other words, after the 20th position, the position encoding values are 1010101....) and it is in that space, from the 20th position to the 512th position (usually word embeddings have 512 or more positions) that the word embeddings are really learned, and that the first 20 positions are mostly just for position encoding.

  • @matthewhaythornthwaite9910

    @matthewhaythornthwaite9910

    7 ай бұрын

    @@statquest Ah ok yeh that makes a lot of sense, thanks so much for taking the time to reply!

  • @matthewhaythornthwaite9910

    @matthewhaythornthwaite9910

    6 ай бұрын

    I’ve been having some additional thoughts on this and think I may have another reason (or rather an example) why adding positional encoding to the word embedding vectors makes sense, Josh if you read this, feel free to shoot it down! Take the following sentence: “The weather is bad, but my mood is good”. In this sentence the first “is” refers to the weather, whereas the second "is" refers to my mood. Without positional encoding and only word embedding, the vector for “is” being passed into the attention unit will be the same for the two instances of the word in the sentence. If we don’t use masked self-attention and compare the word “is” to every word including itself in the sentence, then the output of the word “is” in the self-attention unit I believe should be the same for both instances. Therefore, the unit will struggle to successfully differentiate the relative meaning of the two words. By adding in positional encoding prior to the self-attention unit, we’re suddenly adding context to the word. The second “is” comes straight after the word “mood”, therefore the position vector we’re adding to each of the two words should be similar. However, because the word “weather” comes 6 words before the second “is”, the positional vector we add will be quite different. Presumably this difference helps a self-attention unit to differentiate the relative meanings of the two instances of the word “is”.

  • @statquest

    @statquest

    6 ай бұрын

    @@matthewhaythornthwaite9910 That all sounds reasonable to me! BAM! :)

  • @luckusters8568

    @luckusters8568

    Ай бұрын

    @@matthewhaythornthwaite9910 Another reason why you would want to add positional encoding instead of doing something else is that it preserves the dimensionality of the encoding. Imagine a theoretical encoding which is not added (like a one-hot encoding for each sequence location), and some linear (or non-linear, for that matter) transform to combine word embedding and positional encoding. This is great in the sense that we do not pollute the embedding space with "arbitrary" offsets, but now our input sequence has to be of a fixed shape. Addition of orthogonal sinusoids guarantees a non-parametric, dimensionality-preserving encoding which does not fix the number of inputs we can give to the network. By the way, I think there is an analogy between adding positional encoding to embeddings and adding residual/skip connections to network outputs. Imagine that we have a network that is represented by the function f(x) and we have some target function F(x) which we want the network to learn. Imagine now that we modify our network to compute the function f(x) = h(x) + x (where "h(x)" is the network in front of the skip connection "h(x) + x"). Here too we pollute the output space of h(x) with the values of x. However, the network f can still learn F, so long as the network h(x) learns the function h(x) = F(x) - x (such that f(x) = h(x) + x = F(x) - x + x = F(x)). I suppose for positional encoding something similar holds (although it probably has to learn a much more difficult internal pattern), where the network f(E(x)+q) learns to associate word embedding values E(x) which are "convolved" with some known offsets q, and probably learns to deconvolve E(x) and q (into some abstract representation). Given that E(x) + q may in theory be (nearly) non-unique (i.e. E(x_1) + q_1 is approximately E(x_2) + q_2), it might still be possible for the network to deconvolve the values into the correct inputs based on the context vector C, which is calculable from the rest of the input sequence. I suppose one can't exclude that the network may sometimes get this wrong, but in practical terms, it seems to work well enough.
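
A small sketch of the sinusoidal positional encoding being discussed in this thread may help (the formula is the one from "Attention Is All You Need"; the 512 is the usual embedding size, everything else here is made up for illustration). It shows the two properties mentioned above: every value stays between -1 and 1, and for the higher embedding indices the squiggles are stretched out so much that, for short sentences, the values are essentially a constant 0, 1, 0, 1, ...:

    import numpy as np

    def positional_encoding(max_pos, d_model=512):
        pos = np.arange(max_pos)[:, None]             # word positions 0, 1, 2, ...
        i = np.arange(d_model)[None, :]               # embedding indices 0 .. d_model-1
        angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
        pe = np.zeros((max_pos, d_model))
        pe[:, 0::2] = np.sin(angle[:, 0::2])          # sine on the even indices
        pe[:, 1::2] = np.cos(angle[:, 1::2])          # cosine on the odd indices
        return pe

    pe = positional_encoding(max_pos=10)
    print(pe.min(), pe.max())                         # everything stays within [-1, 1]
    print(np.round(pe[3, :8], 2))                     # low indices change quickly with position...
    print(np.round(pe[3, 500:508], 2))                # ...high indices are ~0, 1, 0, 1 for short inputs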

  • @vohiepthanh9692
    @vohiepthanh96925 ай бұрын

    Penta BAM!!! All of your videos are extremely easy to understand in a peculiar way, they have helped me a lot, thank you very much.

  • @statquest

    @statquest

    5 ай бұрын

    Glad you like them!

  • @debayantalapatra2066
    @debayantalapatra20666 ай бұрын

    This is the best of all that is available right now on Transformers. Thank you!!

  • @statquest

    @statquest

    6 ай бұрын

    Thank you!

  • @vladimirmihajlovic1504
    @vladimirmihajlovic15049 ай бұрын

    Hey @statquest - here is a quick suggestion. Another convenient way to explain positional encoding might be by drawing a clock with minute and hour hands. Then, instead of sin() and cos() functions, you could simply track the x and y coordinates of the tips of the minute and hour hands. It gives a much more convenient intuition behind the mechanics of the encoding: (a) it shows its repetitive nature, (b) it ties encoding position to a sense of time (which is intuitive since speech is tied to time as well, and speech is the most common way we use language), (c) it explains why we use both sin() and cos() functions (to track the circular motion of the clock hand), and (d) it provides intuition on why having two pairs of sin() and cos() functions is better than just one.

  • @statquest

    @statquest

    9 ай бұрын

    That's a great idea!

  • @Ali-Aslam

    @Ali-Aslam

    8 ай бұрын

    So kind of like a unit circle?
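
For what it's worth, the clock analogy can be checked directly: each (sine, cosine) pair in the sinusoidal positional encoding is just the x and y coordinate of a point moving around a unit circle, with a slower "hand" for each higher pair of embedding indices. A quick sketch (made-up sizes, purely for illustration):

    import numpy as np

    d_model = 8
    positions = np.arange(6)                          # word positions 0..5

    for i in range(0, d_model, 2):                    # one sin/cos pair per loop
        speed = 1.0 / np.power(10000, i / d_model)    # how fast this "clock hand" turns
        x, y = np.sin(positions * speed), np.cos(positions * speed)
        radius = np.sqrt(x**2 + y**2)                 # always 1: the pair traces a unit circle
        print(f"pair {i},{i+1}: {speed:.4f} radians per word, radius = {radius.round(3)}")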

  • @abdoualgerian5396
    @abdoualgerian53969 ай бұрын

    We want more NLP material please, tiny bam!

  • @statquest

    @statquest

    9 ай бұрын

    :)

  • @user-et8es9vg5z
    @user-et8es9vg5z18 күн бұрын

    I finally decided to buy your book thinking there'd be transformers in the "Neural Network" section. But even though they're not there, I'm glad it supports you. Your content is the best popularisation I've seen. It's mainly helping me a lot to refresh and to understand things better than before as I start my internship in AI after a one-year gap.

  • @statquest

    @statquest

    17 күн бұрын

    I'm starting a book on neural networks very soon.

  • @ruicai9084
    @ruicai90849 ай бұрын

    I feel so lucky that I just started learning Transformer and found out StatQuest made a video for it one day ago!

  • @statquest

    @statquest

    9 ай бұрын

    bam!

  • @jessiondiwangan2591
    @jessiondiwangan25919 ай бұрын

    (Verse 1)
    Here we are with another quest,
    A journey through the world of stats, no less,
    Data sets in rows and columns rest,
    StatQuest, yeah, it's simply the best.
    (Chorus)
    We're diving deep, we're reaching wide,
    In the land of statistics, we confide,
    StatQuest, on a learning ride,
    With your wisdom, we abide.
    (Verse 2)
    From t-tests to regression trees,
    You make understanding these a breeze.
    Explaining variance and degrees,
    StatQuest, you got the keys.
    (Chorus)
    We're scaling heights, we're breaking ground,
    In your lessons, profound wisdom's found,
    StatQuest, with your sound,
    We'll solve the mysteries that surround.
    (Bridge)
    With bar charts, line plots, and bell curves,
    Through distributions, we observe,
    With every lesson, we absorb and serve,
    StatQuest, it's knowledge we preserve.
    (Chorus)
    We're traversing realms, we're touching sky,
    In the field of data, your guidance, we rely,
    StatQuest, with your learning tie,
    You're the statistical ally.
    (Outro)
    So here's to Josh Starmer, our guide,
    To the realm of stats, you provide,
    With StatQuest, on a high tide,
    In the world of statistics, we stride.
    (End)
    So get ready, set, quest on,
    In the realm of stats, dawn upon,
    StatQuest, till the fear's gone,
    Keep learning, till the break of dawn.

  • @statquest

    @statquest

    9 ай бұрын

    THAT IS AWESOME!!! (what are the chords?)

  • @technicalbranch99

    @technicalbranch99

    9 ай бұрын

    @@statquest I V vi IV

  • @pratyushrao7979
    @pratyushrao79793 ай бұрын

    I had never struggled so much with understanding a concept before. But you cleared all the doubts. Thank you!

  • @statquest

    @statquest

    3 ай бұрын

    Glad it helped!

  • @pratyushrao7979

    @pratyushrao7979

    3 ай бұрын

    @@statquest I actually had a doubt as I was going through, about the decoder part. In the masked multi-head attention part of the typical transformer, what inputs do we provide? And is this part only used during training?

  • @statquest

    @statquest

    3 ай бұрын

    @@pratyushrao7979 I actually talk about masking in my video on decoder-only transformers here: kzread.info/dash/bejne/lIVppNGonLufcco.html

  • @adithyakumar1111
    @adithyakumar11116 ай бұрын

    Thank you Josh for this fantastic video. One of the best videos to explain the math behind the Query, Key and Values.

  • @statquest

    @statquest

    6 ай бұрын

    Thank you!

  • @rishabhjain1468
    @rishabhjain14689 ай бұрын

    much awaited and anticipated video!!, Tysm

  • @statquest

    @statquest

    9 ай бұрын

    Thanks! :)

  • @NethaneelEdwards
    @NethaneelEdwards9 ай бұрын

    Been waiting daily for this. Here we go! Thanks!

  • @statquest

    @statquest

    9 ай бұрын

    :)

  • @hamidrezahosseinkhani5980
    @hamidrezahosseinkhani59805 ай бұрын

    It was incredible. step-by-step, clear and concise, detailed enough. great great. thank you for such an amazing video!

  • @statquest

    @statquest

    5 ай бұрын

    Glad you enjoyed it!

  • @sdsa007
    @sdsa0078 ай бұрын

    Transformers! More than meets the eye!? I think there is a lot of value in knowing this technology well! Thank you for your humor and learning support, I can't wait to return the favor!

  • @statquest

    @statquest

    8 ай бұрын

    Thanks!

  • @johnas3
    @johnas39 ай бұрын

    Thank you!! Still need some time to digest such a big concept… But worth the wait! Hooray🎉

  • @statquest

    @statquest

    9 ай бұрын

    Yep - this is a big one! :)

  • @iwokeupdead1093
    @iwokeupdead10936 ай бұрын

    I'm currently studying for job interviews and I don't know what I would do without you, thank you! When I get paid from my first job I will donate to you :)

  • @statquest

    @statquest

    6 ай бұрын

    Wow! Thank you!

  • @TheSuperFlyo
    @TheSuperFlyo7 ай бұрын

    We have been waiting for this!! Awesome

  • @statquest

    @statquest

    7 ай бұрын

    Thank you very much!

  • @okay730
    @okay7309 ай бұрын

    I HAVE BEEN WAITING SO LONG FOR THIS VIDEO TYSM

  • @statquest

    @statquest

    9 ай бұрын

    bam!

  • @kidley17
    @kidley179 ай бұрын

    Although it's way beyond my area of knowledge, I love to watch your videos; they bring me a warm nostalgic feeling from college and remind me how awesome statistics are.

  • @statquest

    @statquest

    9 ай бұрын

    That's awesome!

  • @kidley17

    @kidley17

    9 ай бұрын

    @@statquest BAM 🔥

  • @starlord3286
    @starlord32863 ай бұрын

    I like how he says "In this example we kept things super simple". Great video, thank you!

  • @statquest

    @statquest

    3 ай бұрын

    Glad you liked it!

  • @brianprzezdziecki
    @brianprzezdziecki9 ай бұрын

    Holy crap I’ve been waiting for this for months!!! Finally!

  • @statquest

    @statquest

    9 ай бұрын

    Hooray! :)

  • @silver_soul98
    @silver_soul989 ай бұрын

    was waiting for this one. thanks so much man.

  • @statquest

    @statquest

    9 ай бұрын

    Bam! :)

  • @yashsvidixit7169
    @yashsvidixit71693 ай бұрын

    A lot of hard work must have gone into these videos. And the results are these brilliant super helpful videos. Thanks a lot for these videos.

  • @statquest

    @statquest

    3 ай бұрын

    Glad you like them!

  • @adarshvemali2966
    @adarshvemali29663 ай бұрын

    What a legend, there is no better channel than this!

  • @statquest

    @statquest

    3 ай бұрын

    Thank you!

  • @jyotsnachoudhary8999
    @jyotsnachoudhary89999 ай бұрын

    Thanks a lot @Josh for this comprehensive video on Transformers. It was really helpful!

  • @statquest

    @statquest

    9 ай бұрын

    bam! :)

  • @jyotsnachoudhary8999

    @jyotsnachoudhary8999

    9 ай бұрын

    @@statquest Hey Josh, I have a doubt that I'd like your help with. I noticed that the decoded token for <EOS> is "vamos," but I expected it to be <EOS>, since the self-attention and encoder-decoder attention for <EOS> should be the highest. Could you please explain this?

  • @statquest

    @statquest

    9 ай бұрын

    @@jyotsnachoudhary8999 <EOS> is just what we use to initialize the decoding, and the network is trained to use the encoder-decoder attention to convert that to "vamos" (this transformer can also correctly translate "to go" to "ir").

  • @jyotsnachoudhary8999

    @jyotsnachoudhary8999

    9 ай бұрын

    @@statquest Ah, okay. Got it. Thanks a lot :))

  • @erikleichtenberg3950
    @erikleichtenberg39502 ай бұрын

    1 million subscribers and still taking the time to answer questions from his viewers. Absolute legend

  • @statquest

    @statquest

    2 ай бұрын

    BAM! :)

  • @luisfernando5998

    @luisfernando5998

    2 ай бұрын

    Bet it’s an AI bot answering 🤖

  • @statquest

    @statquest

    2 ай бұрын

    @@luisfernando5998 Nope - it's me. I really read all the comments and respond to as many as I can.

  • @luisfernando5998

    @luisfernando5998

    2 ай бұрын

    @@statquest do u have a team ? 🤔 how do u manage the time ? 🤯

  • @statquest

    @statquest

    2 ай бұрын

    @@luisfernando5998 It only takes about 30 minutes a day. It's not that big of a deal.

  • @Isakilll
    @Isakilll8 ай бұрын

    Just wanted to say that I understood everything about LMs (thanks to your videos), except the part on transformers cuz the video wasn't out yet ahah. Well now that my dear squash teacher explained it, everything's clear. So really THANK YOU for your hard work and dedication, it made all the difference in my understanding of Neural Networks in general

  • @statquest

    @statquest

    8 ай бұрын

    Great to hear!

  • @thomasdeneux
    @thomasdeneux5 ай бұрын

    thank you very much for this impressive work! it is so important that we can all have a grasp of how this works

  • @statquest

    @statquest

    5 ай бұрын

    Thanks!

  • @gyuio100
    @gyuio1009 ай бұрын

    Very clear and builds up the concepts in a step by step manner, rather than starting with the overall architrcture.

  • @statquest

    @statquest

    9 ай бұрын

    Thanks!

  • @dkkkkkkk
    @dkkkkkkk9 ай бұрын

    This is a masterpiece! Appreciated it, Squatch!

  • @statquest

    @statquest

    9 ай бұрын

    BAM! :)

  • @ItIsJan
    @ItIsJan9 ай бұрын

    we have been waiting for so long! thanks

  • @statquest

    @statquest

    9 ай бұрын

    bam!

  • @akarimsiddiqui7572
    @akarimsiddiqui757228 күн бұрын

    I finally found you! Thank you for this detailed yet super simple break down.

  • @statquest

    @statquest

    28 күн бұрын

    Glad it was helpful!

  • @spartan9729
    @spartan97297 ай бұрын

    This is your only video that I had to watch twice to get a complete idea of the topic. Transformers really are a decently tough topic.

  • @statquest

    @statquest

    7 ай бұрын

    This is a lot of material for one video. But people wanted a single video, rather than a series of videos making incremental steps in learning, for transformers. Personally, I would have preferred a sequence of shorter videos, each focused on just one part. That said, there is something about seeing it all at once and getting that big picture. My book on neural networks (that I'm working on right now) will try to do both - take things one step at a time and give a big picture.

  • @spartan9729

    @spartan9729

    7 ай бұрын

    @@statquest Nice. Waiting for the book in that case.

  • @manuelapacheco9129
    @manuelapacheco91297 ай бұрын

    Man, I love you for this video, thank you so much; there's absolutely no way I'd have understood all of this without your help

  • @statquest

    @statquest

    7 ай бұрын

    Glad I could help!

  • @carleanoravelzawongso
    @carleanoravelzawongso8 ай бұрын

    Please create more vids!! Your explanations are truly beautiful, such a work of art. I couldn't agree more that you are one of the most brilliant teachers of statistics and ML! Actually, I wanna hug you right now haha

  • @statquest

    @statquest

    8 ай бұрын

    Wow, thank you!

  • @JavierSanchez-yc8qo
    @JavierSanchez-yc8qo28 күн бұрын

    @statquest you are a true professional and a master of your craft. The field of ML is getting a little stronger each day bc of content like this!

  • @statquest

    @statquest

    28 күн бұрын

    Thank you very much!

  • @pranav7471
    @pranav74714 күн бұрын

    A great explanation of the Transformer. The one thing I found missing was that the decoder has masked self-attention, to prevent future embeddings from "leaking" into the current output

  • @statquest

    @statquest

    3 күн бұрын

    For an encoder-decoder transformer, masked self-attention is only used during training, which this video doesn't cover. However, I cover it in my video on Decoder-Only Transformers here: kzread.info/dash/bejne/lIVppNGonLufcco.html

  • @user-xp2gc7tm8h
    @user-xp2gc7tm8hАй бұрын

    The best and simplest video to learn Transformers, ever!

  • @statquest

    @statquest

    Ай бұрын

    Thank you!

  • @BooleanDisorder
    @BooleanDisorder2 ай бұрын

    This is so mindblowingly complex and impressive. Great video! ❤ The transformer architecture is also complex and impressive, ofc. 😊

  • @statquest

    @statquest

    2 ай бұрын

    BAM! :)

  • @theunconventionalenglishman
    @theunconventionalenglishmanАй бұрын

    I've recently discovered your channel and I love it - the songs rule. Cheers mate

  • @statquest

    @statquest

    Ай бұрын

    Thank you!

  • @abeeRidge
    @abeeRidge9 ай бұрын

    What a clean, easy to follow video!

  • @statquest

    @statquest

    9 ай бұрын

    Thank you very much! :)

  • @tudor6210
    @tudor6210Ай бұрын

    Thank you!! One of the best explanations of transformers out there.

  • @statquest

    @statquest

    Ай бұрын

    Glad you think so!

  • @bin4ry_d3struct0r
    @bin4ry_d3struct0r9 ай бұрын

    The amount of detail that went into this must've taken A LOT of work. Kudos!! On a side note: the GPT variants are decoder-only (i.e., they do not employ an encoder component).

  • @statquest

    @statquest

    9 ай бұрын

    Yep. I'd like to create a video on decoder only transformers soon.

  • @bin4ry_d3struct0r

    @bin4ry_d3struct0r

    9 ай бұрын

    @@statquest Looking forward to it!

  • @heike_p
    @heike_p3 ай бұрын

    I'm doing an advanced master's in Artificial Intelligence. This whole NN playlist has saved me while studying for my exams! Thanks a bunch!

  • @statquest

    @statquest

    3 ай бұрын

    good luck!

  • @jiayuemao4985
    @jiayuemao49854 ай бұрын

    Nice video! Thank you for explaining so clearly! As a starter, this video helps a lot.

  • @statquest

    @statquest

    4 ай бұрын

    Thanks!

  • @michaelcharlesthearchangel
    @michaelcharlesthearchangelАй бұрын

    Bravo! Excellent teaching skills! Teaching weights and biases is not easy but, by God, man, you've done it!

  • @statquest

    @statquest

    Ай бұрын

    Thank you very much!

  • @pypypy4228
    @pypypy42289 ай бұрын

    A long anticipated video! ❤

  • @statquest

    @statquest

    9 ай бұрын

    Thanks!

  • @howardhao-chunchuang6742
    @howardhao-chunchuang67425 ай бұрын

    Thank you for your wonderful work and crystal-clear explanations. Finally K, Q, & V make sense.

  • @statquest

    @statquest

    5 ай бұрын

    BAM! :)