Transformers for beginners | What are they and how do they work
Science & Technology
Over the past five years, Transformers, a neural network architecture, have completely transformed state-of-the-art natural language processing.
*************************************************************************
For queries: You can comment in comment section or you can mail me at aarohisingla1987@gmail.com
*************************************************************************
The encoder takes the input sentence and converts it into a series of numbers called vectors, which represent the meaning of the words. These vectors are then passed to the decoder, which generates the translated sentence.
Now, the magic of the transformer network lies in how it handles attention. Instead of looking at each word one by one, it considers the entire sentence at once. It calculates a similarity score between each word in the input sentence and every other word, giving higher scores to the words that are more important for translation.
To do this, the transformer network uses a mechanism called self-attention. Self-attention allows the model to weigh the importance of each word in the sentence based on its relevance to other words. By doing this, the model can focus more on the important parts of the sentence and less on the irrelevant ones.
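The self-attention weighting described above can be sketched in a few lines of NumPy. This is a simplified illustration, not code from the video: real Transformers first project each word into separate query, key, and value vectors, whereas here the raw word vectors play all three roles, and the vectors themselves are toy values.

```python
# Minimal sketch of scaled dot-product self-attention (toy values).
import numpy as np

def self_attention(X):
    """X: (seq_len, d) matrix of word vectors; here Q = K = V = X."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # similarity of every word with every other word
    # Softmax over each row turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ X             # each output row is a weighted mix of all word vectors

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 toy "word" vectors
out = self_attention(X)
print(out.shape)  # (3, 2): one context-aware vector per word
```

Each output vector blends information from the whole sentence, with the blend determined by the similarity scores.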
In addition to self-attention, transformer networks also use something called positional encoding. Since the model treats words as individual entities, it doesn't have any inherent understanding of word order. Positional encoding helps the model to understand the sequence of words in a sentence by adding information about their position.
Once the encoder has calculated the attention scores and combined them with positional encoding, the resulting vectors are passed to the decoder. The decoder uses a similar attention mechanism to generate the translated sentence, one word at a time.
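The decoder's word-at-a-time generation can be sketched as a greedy decoding loop. This is purely illustrative: `fake_decoder` is a hypothetical stand-in that returns a canned translation, where a real trained decoder would attend over the encoder vectors and the words generated so far.

```python
# Schematic greedy decoding loop (toy stand-in for a trained decoder).
CANNED = ["le", "chat", "est", "noir", "<eos>"]

def fake_decoder(encoder_vectors, generated):
    # A real decoder would attend over encoder_vectors and `generated`.
    return CANNED[len(generated)]

def greedy_translate(encoder_vectors, max_len=10):
    generated = []
    for _ in range(max_len):
        token = fake_decoder(encoder_vectors, generated)
        if token == "<eos>":   # stop once the end-of-sequence token appears
            break
        generated.append(token)
    return generated

print(greedy_translate(encoder_vectors=None))  # ['le', 'chat', 'est', 'noir']
```

The key point is the loop structure: each new word is chosen with access to everything generated before it, which is exactly what the decoder's masked attention enforces.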
Transformers are the architecture behind GPT, BERT, and T5.
#transformers #naturallanguageprocessing #nlp
Comments: 92
This is the only video around that REALLY EXPLAINS the transformer! I immensely appreciate your step by step approach and the use of the example. Thank you so much 🙏🙏🙏
@CodeWithAarohi
3 months ago
Glad it was helpful!
@eng.reemali9214
13 days ago
exactly
I had watched 3 or 4 videos about transformers before this tutorial. Finally, this tutorial made me understand the concept of transformers. Thanks for your complete and clear explanations and your illustrative example. Specially, your description about query, key and value was really helpful.
@CodeWithAarohi
4 days ago
You're very welcome!
Very nice high-level description of the Transformer.
@CodeWithAarohi
4 months ago
Glad you think so!
Very well explained! I could instantly grasp the concept! Thank you, miss!
@CodeWithAarohi
10 months ago
Glad it was helpful!
Hello, and thank you so much. One question: I don't understand where the numbers in the word embedding and positional encoding come from.
I came across this video by accident; it's very well explained. You are doing an excellent job.
@CodeWithAarohi
A month ago
Glad it was helpful!
Great explanation! Keep uploading such nice informative content.
@CodeWithAarohi
5 months ago
Thank you, I will
It's great. I have only one query: what is the input to the masked multi-head attention? It's not clear to me; kindly guide me.
Just amazing explanation 👌
@CodeWithAarohi
6 months ago
Thanks a lot 😊
Great Explanation, Thanks
@CodeWithAarohi
A month ago
Glad it was helpful!
Very well explained, even with such a niche viewer base. Please keep making more of these!
@CodeWithAarohi
9 months ago
Thank you, I will
This is a fantastic, very good explanation. Thank you so much!
@CodeWithAarohi
10 months ago
Glad it was helpful!
Best explanation. I saw multiple videos, but this one provided a clear concept. Keep it up!
@CodeWithAarohi
3 months ago
Glad to hear that
Well explained. Before watching this video I was very confused about how transformers work, but your video helped me a lot.
@CodeWithAarohi
4 months ago
Glad my video is helpful!
Best video ever, explaining the concepts in a really lucid way, ma'am. Thanks a lot, please keep posting. I subscribed 😊🎉
@CodeWithAarohi
9 days ago
Thanks and welcome
Excellent explanation, madam... thank you so much.
@CodeWithAarohi
4 months ago
Thanks and welcome
Hello Ma’am Your AI and Data Science content is consistently impressive! Thanks for making complex concepts so accessible. Keep up the great work! 🚀 #ArtificialIntelligence #DataScience #ImpressiveContent 👏👍
@CodeWithAarohi
6 months ago
Thank you!
Thank you very much for explaining and breaking it down 😀 So far, your explanation is the easiest to understand compared to other channels. Thank you very much for making this video and sharing it with everyone ❤
@CodeWithAarohi
10 days ago
Glad it was helpful!
Great Explanation mam
@CodeWithAarohi
A month ago
Glad you liked it
excellent explanation
@CodeWithAarohi
4 months ago
Glad you liked it!
Thanks for making such an informative video. Could you please make a video on transformers for image classification or image segmentation applications?
@CodeWithAarohi
10 months ago
Will cover that soon
Very well explained
@CodeWithAarohi
7 months ago
Thanks for liking
Nice explanation Ma'am.
@CodeWithAarohi
7 months ago
Thank you! 🙂
Thanks. The concept is explained very well. Could you please add a custom example (e.g. finding similar questions) using transformers?
@CodeWithAarohi
10 months ago
Will try
Very Good Video Ma'am, Love from Gujarat, Keep it up
@CodeWithAarohi
6 months ago
Thanks a lot
Thanks Aarohi 😇
@CodeWithAarohi
6 months ago
Glad it helped!
Ma'am, we are eagerly hoping for a comprehensive Machine Learning and Computer Vision playlist. Your teaching style is unmatched, and I truly wish your channel reaches 100 million subscribers! 🌟
@CodeWithAarohi
A month ago
Thank you so much for your incredibly kind words and support!🙂 Creating a comprehensive Machine Learning and Computer Vision playlist is an excellent idea, and I'll definitely consider it for future content.
The best explanation of the transformer that I have found on the internet. Can you please make a detailed, long video on transformers with theory, mathematics, and more examples? I am not clear about the linear and softmax layers and what is done after that, how training happens, and how transformers work on test data. Can you please make a detailed video on this?
@CodeWithAarohi
2 months ago
I will try to make it after finishing the pipelined work.
@sahaj2805
2 months ago
@@CodeWithAarohi Thanks, I will wait for the detailed transformer video :)
Thank you. The concept has been explained very well. Could you please also explain how the query, key, and value vectors are calculated?
@CodeWithAarohi
10 months ago
Sure, Will cover that in a separate video.
Can you please upload the presentation?
Really very nice explanation ma'am!
@CodeWithAarohi
A month ago
Glad my video is helpful!
Can you please let us know the input to the masked multi-head attention? You just said "decoder." Can you please explain? Thanks.
Ma'am, can you please make a video on classification using multi-head attention with a custom dataset?
@CodeWithAarohi
9 months ago
Will try
Can you also talk about the purpose of the feed-forward layer? It looks like it's only there to add non-linearity. Is that right?
@abirahmedsohan3554
A month ago
Yes, you can say that, but maybe also to make the key, query, and value projections trainable.
How can I get the PDFs, ma'am?
Question about query, key, value dimensionality: given that a query is a word looking for other words to pay attention to, and a key is a word being looked at by other words, shouldn't the query and key be vectors with size equal to the number of input tokens, so that the dot product between query and key lines up each querying word with the right key positionally and yields the self-attention value for that word?
@CodeWithAarohi
3 months ago
The dimensionality of query, key, and value vectors in transformers is a hyperparameter, not directly tied to the number of input tokens. The dot product operation between query and key vectors allows the model to capture relationships and dependencies between tokens, while positional information is often handled separately through positional embeddings.
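To make the reply above concrete, here is a small NumPy sketch showing that the query/key size d_k is a free hyperparameter, independent of the sequence length. All sizes and weights are arbitrary toy choices, not values from the video.

```python
# Q/K sizes come from learned projection matrices, not from the token count.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8          # 5 tokens; note d_k != seq_len on purpose

X = rng.normal(size=(seq_len, d_model))   # token embeddings
W_q = rng.normal(size=(d_model, d_k))     # learned query projection
W_k = rng.normal(size=(d_model, d_k))     # learned key projection

Q, K = X @ W_q, X @ W_k                   # both (seq_len, d_k)
scores = Q @ K.T / np.sqrt(d_k)           # (seq_len, seq_len) attention scores
print(scores.shape)  # (5, 5): every token scored against every other token
```

The (seq_len, seq_len) shape of the score matrix comes out of the dot product itself, so the vectors never need to be sized to the token count.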
Can you please make a detailed video explaining the Attention is all you need research paper line by line, thanks in advance :)
@CodeWithAarohi
2 months ago
Noted!
Thank you, ma'am.
@CodeWithAarohi
7 months ago
Most welcome 😊
Can you please explain 22:07 onward?
Could you make a video on image classification for vision transformer, madam ?
@CodeWithAarohi
10 months ago
Sure, soon
Great video, ma'am. Could you please clarify what you said at 22:20 once again? I think there was a bit of confusion there.
@AyomideFagoroye-oe2hd
26 days ago
same here
I didn't understand what the input to the masked multi-head self-attention layer in the decoder is. Can you please explain it to me?
@CodeWithAarohi
6 months ago
In the Transformer decoder, the masked multi-head self-attention layer takes three inputs: Queries (Q), Keys (K), and Values (V).

Queries (Q): vectors representing the current positions in the sequence. They are used to determine how much attention each position should give to other positions.
Keys (K): vectors representing all positions in the sequence. They are used to calculate the attention scores between the current position (represented by the query) and all other positions.
Values (V): vectors containing information from all positions in the sequence. The values are combined based on the attention scores to produce the output for the current position.

The masking in the self-attention mechanism ensures that during training, a position cannot attend to future positions, preventing information leakage from the future. In short, the masked multi-head self-attention layer helps the decoder focus on relevant parts of the input sequence while generating the output sequence, and the masking ensures it doesn't cheat by looking at future information during training.
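The masking described in this reply can be illustrated with a tiny NumPy example (toy scores, not from the video): positions above the diagonal, i.e. the "future," are set to negative infinity before the softmax, so their attention weights come out exactly zero.

```python
# Causal mask: position i may only attend to positions <= i.
import numpy as np

seq_len = 4
scores = np.ones((seq_len, seq_len))              # pretend attention scores
mask = np.triu(np.ones((seq_len, seq_len)), k=1)  # 1s above the diagonal = future
masked = np.where(mask == 1, -np.inf, scores)     # future positions get -inf

# Softmax over each row; exp(-inf) = 0, so future weights vanish.
weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(weights[0])  # first token can only attend to itself: [1. 0. 0. 0.]
```

Because the zeroing happens inside the softmax, each row still sums to one over the positions the token is allowed to see.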
Hello ma'am, is this transformer concept the same as transformers in NLP?
@CodeWithAarohi
6 months ago
The concept of "transform" in computer vision and "transformers" in natural language processing (NLP) are related but not quite the same.
Can you please make a video on bert?
@CodeWithAarohi
3 months ago
I will try!
Could you explain it with Python code? That would be more practical. Thanks for sharing your knowledge.
@CodeWithAarohi
10 months ago
Sure, will cover that soon.
I thought this was about transformers in CV; all the explanations were in NLP.
@CodeWithAarohi
3 months ago
I recommend you understand this video first and then check this one: kzread.info/dash/bejne/pp-Or8xqhq6qadY.html After watching these two videos, you will properly understand the concept of transformers used in computer vision. Transformers in CV are based on the idea of transformers in NLP, so it's better to learn them in that order.
Gonna tell my kids this was optimus prime.
@CodeWithAarohi
A month ago
Haha, I love it! Optimus Prime has some serious competition now :)
Please use a mic; the background noise is irritating.
@CodeWithAarohi
6 months ago
Noted! Thanks for the feedback.