Transformers explained | The architecture behind LLMs

Science & Technology

All you need to know about the transformer architecture: how to structure the inputs, attention (queries, keys, values), positional embeddings, and residual connections. Bonus: an overview of the differences between Recurrent Neural Networks (RNNs) and transformers.
Correction at 9:19: the order of multiplication should be the opposite: x1 (vector) * Wq (matrix) = q1 (vector). Otherwise we do not get the 1x3 dimensionality at the end. Sorry for messing up the animation!
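A quick shape check of the corrected order (illustrative NumPy, assuming the animation's 4-dimensional token embedding and 3-dimensional query):
    import numpy as np
    x1 = np.random.rand(1, 4)   # one token embedding as a row vector (1x4)
    Wq = np.random.rand(4, 3)   # query projection matrix (4x3)
    q1 = x1 @ Wq                # row vector times matrix gives the (1x3) query
    print(q1.shape)             # (1, 3); Wq @ x1 is not even defined for these shapes
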
➡️ AI Coffee Break Merch! 🛍️ aicoffeebreak.creator-spring....
Outline:
00:00 Transformers explained
00:47 Text inputs
02:29 Image inputs
03:57 Next word prediction / Classification
06:08 The transformer layer: 1. MLP sublayer
06:47 2. Attention explained
07:57 Attention vs. self-attention
08:35 Queries, Keys, Values
09:19 Correction: the order of multiplication should be x1 (vector) * Wq (matrix) = q1 (vector).
11:26 Multi-head attention
13:04 Attention scales quadratically
13:53 Positional embeddings
15:11 Residual connections and Normalization Layers
17:09 Masked Language Modelling
17:59 Difference to RNNs
Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
Dres. Trost GbR, Siltax, Vignesh Valliappan, @Mutual_Information , Kshitij
Our old Transformer explained 📺 video: • The Transformer neural...
📺 Tokenization explained: • What is tokenization a...
📺 Word embeddings: • How modern search engi...
📽️ Replacing Self-Attention: • Replacing Self-attention
📽️ Position embeddings: • Positional encodings i...
@SerranoAcademy Transformer series: • The Attention Mechanis...
📄 Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention is all you need." Advances in neural information processing systems 30 (2017).
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, pay us a coffee to help with our Coffee Bean production! ☕
Patreon: / aicoffeebreak
Ko-fi: ko-fi.com/aicoffeebreak
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔗 Links:
AICoffeeBreakQuiz: / aicoffeebreak
Twitter: / aicoffeebreak
Reddit: / aicoffeebreak
KZread: / aicoffeebreak
#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research
Music 🎵 : Sunset n Beachz - Ofshane
Video editing: Nils Trost

Comments: 95

  • @YuraCCC · 4 months ago

    Thanks for the explanation. At 9:19: shouldn't the order of multiplication be the opposite here? E.g. x1 (vector) * Wq (matrix) = q1 (vector). Otherwise I don't understand how we get the 1x3 dimensionality at the end.

  • @AICoffeeBreak · 4 months ago

    Oh, shoot, I messed up the order in the animations there. You are right. Sorry, pinning your comment.

  • @YuraCCC · 4 months ago

    No problem, thanks for clarifying that, and thanks again for the great video @AICoffeeBreak

  • @DerPylz · 4 months ago

    Wow, you've come a long way since your first "Transformer explained" video!

  • @rahulrajpvr7d · 4 months ago

    Tomorrow I have my thesis evaluation and I was thinking about watching that video again, but the YouTube algorithm suggested it without me searching for anything. Thank you, YouTube algorithm. 😅❤🔥

  • @AICoffeeBreak · 4 months ago

    It read your mind.

  • @420_gunna · 4 months ago

    Awesome video, thank you! I love the idea of you revisiting older topics -- either as a 201 or as a re-introduction. "Attention combines the representations of the input vectors' value vectors, weighted by the importance scores (computed from the query and key vectors)."
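
    To see that quoted sentence as computation, here is a minimal single-query attention sketch in NumPy; the dimensions are made up for illustration and are not the video's exact numbers:

        import numpy as np

        def attend(q, K, V):
            scores = q @ K.T / np.sqrt(K.shape[-1])           # importance of each input for this query
            weights = np.exp(scores) / np.exp(scores).sum()   # softmax over the inputs
            return weights @ V                                # weighted mix of the value vectors

        q = np.random.rand(3)         # query vector of one token
        K = np.random.rand(5, 3)      # key vectors of 5 input tokens
        V = np.random.rand(5, 3)      # value vectors of 5 input tokens
        print(attend(q, K, V).shape)  # (3,): a new representation for the query token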

  • @AICoffeeBreak · 4 months ago

    Thanks for your appreciation!

  • @abhishek-tandon · 4 months ago

    One of the best videos on transformers that I have ever watched. Views 📈

  • @AICoffeeBreak · 4 months ago

    Do you have examples of others you liked?

  • @volpir4672 · 4 months ago

    That's great. I'm a little stuck on the special mask token ... I'll keep digging. Good info; the video is a good explanation, and it allows for more experimentation instead of relying on open-source models whose components can look like a black box to noobs like me :)

  • @xyphos915 · 4 months ago

    Wow, this explanation of the difference between RNNs and Transformers at the end is what I was missing! I've always heard that Transformers are great because of parallelization but never really saw why until today. Thank you! Great video!

  • @AICoffeeBreak · 4 months ago

    Oh, this makes me happy!

  • @connorshorten6311 · 4 months ago

    Awesome! Epic Visuals!

  • @AICoffeeBreak · 4 months ago

    Thanks, Connor!

  • @DaveJ6515 · 4 months ago

    You know how to explain things. This one is not easy: I can see the amount of work that went into this video, and it was a lot. I hope that your career takes you where you deserve.

  • @AICoffeeBreak · 4 months ago

    Thanks for watching and thanks for the kind words. All the best to you as well!

  • @Thomas-gk42 · 4 months ago

    Understood about 10%, but I like these videos and intuitively feel their usefulness.

  • @AICoffeeBreak · 4 months ago

  • @mumcarpet109 · 4 months ago

    Your videos have helped a visual learner like me so much, thank you.

  • @AICoffeeBreak · 4 months ago

    Happy to hear that!

  • @MachineLearningStreetTalk · 4 months ago

    Epic as always 🤌

  • @AICoffeeBreak · 4 months ago

    Thanks, Tim!

  • @DatNgo-uk4ft · 4 months ago

    Great video!! A nice improvement over the original.

  • @AICoffeeBreak · 4 months ago

    Glad you think so!

  • @user-th2ec8ms3m · 4 months ago

    Really well done and easy to follow, thank you.

  • @AICoffeeBreak · 4 months ago

    Glad you enjoy it!

  • @cosmic_reef_17 · 4 months ago

    Thank you very much for the very clear explanations and detailed analysis of the transformer architecture. You're truly the 3blue1brown of machine learning!

  • @AICoffeeBreak · 4 months ago

  • @l.suurmeijer1382 · 4 months ago

    Absolute banger of a video. Wish I had seen this when I was learning about transformers at uni last year :-)

  • @AICoffeeBreak · 4 months ago

    Haha, glad I could help. Even if a bit late.

  • @16876 · 4 months ago

    What a thorough and much-anticipated overview, laid out so coherently. Thank you!

  • @AICoffeeBreak · 4 months ago

    Our pleasure! We should have done this video much earlier, considering that our old "Transformer explained" is our most watched video to date. 😅

  • @SamehSyedAjmal · 4 months ago

    Thank you for the video! Maybe an explanation of the Mamba architecture next?

  • @AICoffeeBreak · 3 months ago

    The Mamba and SSM beans are roasting as we speak.

  • @jonas4223 · 4 months ago

    Today I had the problem that I needed to understand how Transformers work. I searched on YouTube and found your video 20 minutes after release. What perfect timing!

  • @AICoffeeBreak · 4 months ago

    What timing indeed!

  • @manuelafernandesblancorodr6366 · 4 months ago

    What a wonderful video! Thank you so much for sharing it!

  • @AICoffeeBreak · 3 months ago

    Thank you too for this wonderful comment!

  • @darylallen2485 · 1 month ago

    Letitia, you're awesome and I look forward to learning more from you.

  • @muhammedaneesk.a4848 · 4 months ago

    Thanks for the explanation 😊

  • @AICoffeeBreak · 4 months ago

    Thanks for watching!

  • @phiphi3025 · 4 months ago

    Thanks, you helped so much with explaining Transformers to my PhD advisors.

  • @AICoffeeBreak · 4 months ago

    This is really funny. In what field are you doing your PhD? 😅

  • @mccartym86 · 3 months ago

    I think I had at least 10 aha moments watching this, and I've watched many videos on these topics. Incredible job, thank you!

  • @AICoffeeBreak · 3 months ago

    Wow, thank you for this wonderful comment!

  • @jcneto25 · 4 months ago

    The best didactic explanation of Transformers so far. Thank you for sharing it.

  • @AICoffeeBreak · 3 months ago

    Wow, thanks! Glad it's helpful.

  • @bartlomiejkubica1781 · 4 months ago

    Thank you! Finally, I'm starting to get it...

  • @ai-interview-questions · 4 months ago

    Thank you, Letitia!

  • @AICoffeeBreak · 3 months ago

    Our pleasure!

  • @dannown · 4 months ago

    Really appreciate this video.

  • @AICoffeeBreak · 4 months ago

    So glad!

  • @ArthasDKR · 4 months ago

    Excellent explanation. Thank you!

  • @AICoffeeBreak · 4 months ago

  • @axelmarora6743 · 2 months ago

    This is a very well-made explanation. I hadn't known that the feed-forward layers only receive one token at a time. Thanks for clearing that up for me! 😁

  • @HarishAkula-df8gs · 2 months ago

    Amazing explanation, thank you! I just discovered your channel and I really like how difficult topics are demystified.

  • @AICoffeeBreak · 2 months ago

    Thanks a lot!

  • @paprikar · 4 months ago

    Here we go! Thank you for the content!

  • @meguellatiyounes8659 · 4 months ago

    Well explained, as you promised.

  • @AICoffeeBreak · 4 months ago

  • @zbynekba · 4 months ago

    ❤ Letitia, thank you for the great visualization and intuition. For inspiration: in the original paper, the decoder uses the output of the encoder through a cross-attention process. Why does GPT not use an encoder? As you've mentioned, the encoder is typically used for classification, while the decoder is for text generation. They are never used in combination. Why is this the case? Missing intuition: why does the cross-attention layer inside the decoder take the values from the ENCODER's output to create the enhanced embeddings (as a weighted mix)? Intuitively, I would use the values from the DECODER.

  • @AICoffeeBreak · 4 months ago

    Thanks for your thoughts! Encoders are sometimes used in combination with decoders, right? The most famous example is the T5 architecture.

  • @zbynekba · 4 months ago

    Thanks for your prompt reply. Hence, understanding the concept and intuition behind feeding the encoder output into the decoder is essential. I found only this one video on encoder-decoder cross-attention: kzread.info/dash/bejne/dqWe05aAqMfOnso.htmlsi=gtLzNxAU0pUGyLvk In it, Lennart emphasizes that, based on the original equations, the enhanced embeddings are calculated as a weighted sum of the ENCODER values. Inside a DECODER, I would rather expect the DECODER values to pass through. Letitia, I am sure you will resolve this mystery. 🍀
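
    For intuition, a small sketch of what encoder-decoder cross-attention computes may help (illustrative NumPy with made-up sizes, not the paper's): the queries come from the decoder states, while the keys and values come from the encoder output, so each decoder position ends up with a weighted mix of encoder value vectors; the decoder's own content is carried along by the residual connection around the attention sublayer.

        import numpy as np

        d = 8
        dec = np.random.rand(4, d)    # 4 decoder positions (hypothetical)
        enc = np.random.rand(6, d)    # 6 encoder positions (hypothetical)
        Wq, Wk, Wv = (np.random.rand(d, d) for _ in range(3))

        Q = dec @ Wq                  # queries from the DECODER states
        K = enc @ Wk                  # keys from the ENCODER output
        V = enc @ Wv                  # values from the ENCODER output
        scores = Q @ K.T / np.sqrt(d)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over encoder positions
        out = weights @ V             # each decoder position mixes encoder values
        print(out.shape)              # (4, 8): one enriched vector per decoder position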

  • @Ben_D. · 2 months ago

    ...OK. After binging some of your vids, I now need to go make coffee. 😆

  • @AICoffeeBreak · 2 months ago

    Please do!

  • @Clammer999 · 25 days ago

    Thanks so much for this video. I've gone through a number of videos on transformers, and this is much easier to grasp and understand for a non-data-scientist like myself.

  • @AICoffeeBreak · 25 days ago

    You're very welcome!

  • @MuruganR-tg9yt · 3 months ago

    Thank you. Nice explanation 😊

  • @AICoffeeBreak · 3 months ago

    Thank you for your visit!

  • @pfever · 3 months ago

    Just discovered your channel and this is great! Thank you! :D

  • @AICoffeeBreak · 3 months ago

    Thank you! Hope to see you again soon in the comments.

  • @zahrashah6567 · 1 month ago

    What a wonderful explanation 😍 Just discovered your channel and I'm absolutely loving the explanations as well as the visuals 😘

  • @AICoffeeBreak · 1 month ago

    Thank you! Welcome!

  • @tildarusso · 4 months ago

    As far as I am aware, word embedding has changed from legacy static embeddings like Word2Vec/GloVe (with the famous queen = woman + king - man metaphor) to BPE and unigram tokenization. This change gave me quite a headache, as most papers do not mention any details of their "word embedding". Perhaps, Letitia, you can make a video to clarify this a bit for us.

  • @AICoffeeBreak · 4 months ago

    Great suggestion, thanks!

  • @l3nn13 · 4 months ago

    Great video!

  • @AICoffeeBreak · 4 months ago

    Thanks for the visit and for leaving a comment!

  • @LEQN · 2 months ago

    Awesome video :) Thanks!

  • @AICoffeeBreak · 2 months ago

    Thank you for watching and for your wonderful comment!

  • @kallamamran · 4 months ago

    Phew 😳

  • @ehudamitai · 4 months ago

    At 11:14, the weighted sum is the sum of 3 vectors of 3 elements each, but the result is a vector of 4 elements, which, conveniently, is the same size as the input vector. Could there be a missing step there?

  • @AICoffeeBreak · 4 months ago

    Yes, there is a missing back-transformation to 4 dimensions that I skipped. :) Well spotted!
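
    For completeness, that skipped step is typically an output projection back to the model dimension; a minimal shape sketch (illustrative numbers matching the 4-dimensional embeddings and 3-dimensional value vectors in the animation):

        import numpy as np
        mixed = np.random.rand(1, 3)   # weighted sum of the 3-dimensional value vectors
        Wo = np.random.rand(3, 4)      # output projection of the attention sublayer
        out = mixed @ Wo               # back to the 4-dimensional embedding size
        print(out.shape)               # (1, 4): same size as the input token vector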

  • @M4ciekP · 4 months ago

    How about a video explaining SSMs?

  • @AICoffeeBreak · 4 months ago

    ✍️

  • @AICoffeeBreak · 3 months ago

    Psst: this will be the video coming up in a few days. It's in editing right now.

  • @M4ciekP · 3 months ago

    Yaay! @AICoffeeBreak

  • @nmfhlbj · 2 months ago

    Hi! Can I ask how you get the dimension (d)? All I know is that dimensions appear in square matrices, and the dot product in the attention formula is Q•K^T. If we're using 1x3 matrices, we'll get a 1x1 matrix, i.e. 1 dimension, so how do you get 3? Unless it's a 3x1 matrix beforehand, so we get a 3x3, i.e. 3-dimensional, matrix. Thank you!

  • @AICoffeeBreak · 2 months ago

    Hi, if you mean the mistake at 10:00, then the problem is that I wrote matrix times vector when I should have written vector times matrix (or I could have used column vectors instead of row vectors). Is this what you mean?

  • @DaeOh · 4 months ago

    Everything makes sense except multiple attention heads. Each layer has only one set of Q, K, V, O matrices. But 8 attention heads per layer? I want to understand that.

  • @AICoffeeBreak · 4 months ago

    Think about it this way: in one layer, instead of having one head telling you what to pay attention to, you have 8. In other words, instead of one person shouting at you the things they want you to pay attention to, you have 8 people shouting at you simultaneously. This is beneficial because it has an ensembling effect (the effect of a voting parliament; think of Random Forests, which are an ensemble of Decision Trees). I do not know if this helps, but I thought I'd give explaining this another shot.
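
    If the bookkeeping helps: in the standard setup each of the 8 heads gets its own, smaller Q/K/V projections, and one output matrix mixes the concatenated head outputs back to the model dimension. A rough NumPy sketch (sizes are illustrative, not the paper's):

        import numpy as np

        d_model, n_heads = 16, 8
        d_head = d_model // n_heads                    # each head works in a smaller subspace
        X = np.random.rand(5, d_model)                 # 5 token vectors (made up)
        Wq = np.random.rand(n_heads, d_model, d_head)  # per-head query projections
        Wk = np.random.rand(n_heads, d_model, d_head)  # per-head key projections
        Wv = np.random.rand(n_heads, d_model, d_head)  # per-head value projections
        Wo = np.random.rand(n_heads * d_head, d_model) # mixes the concatenated heads

        def softmax(s):
            e = np.exp(s - s.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        heads = []
        for h in range(n_heads):                       # the 8 "people shouting" in parallel
            Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]
            heads.append(softmax(Q @ K.T / np.sqrt(d_head)) @ V)

        out = np.concatenate(heads, axis=-1) @ Wo      # concatenate heads, project back
        print(out.shape)                               # (5, 16)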

  • @benjamindilorenzo · 3 months ago

    What a great video. It could still expand more and really sum up every sub-part, connect each one to a clear visualization or a clear step of what happens to the information at each time step, and show how its "transformation" progresses over time. So I think you could redo this video and really make it monkey-proof for folks like me. But beware: the StatQuest version, for example, is too slow and too repetitive, and it also does not really capture what goes on inside the Transformer once all the steps are stacked together. Great work!

  • @josephvanname3377 · 4 months ago

    I want to train a transformer that eats a row of matrices instead of just a row of vectors.
