Word Embedding and Word2Vec, Clearly Explained!!!

Words are great, but if we want to use them as input to a neural network, we have to convert them to numbers. One of the most popular methods for assigning numbers to words is to use a Neural Network to create Word Embeddings. In this StatQuest, we go through the steps required to create Word Embeddings, and show how we can visualize and validate them. We then talk about one of the most popular Word Embedding tools, word2vec. BAM!!!
Note, this StatQuest assumes that you are already familiar with...
The Basics of how Neural Networks Work: • The Essential Main Ide...
The Basics of how Backpropagation Works: • Neural Networks Pt. 2:...
How the Softmax function works: • Neural Networks Part 5...
How Cross Entropy works: • Neural Networks Part 6...
If you'd like to support StatQuest, please consider...
Patreon: / statquest
...or...
KZread Membership: / @statquest
...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
statquest.org/statquest-store/
...or just donating to StatQuest!
www.paypal.me/statquest
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
0:00 Awesome song and introduction
4:25 Building a Neural Network to do Word Embedding
8:18 Visualizing and Validating the Word Embedding
10:42 Summary of Main Ideas
11:44 word2vec
13:36 Speeding up training with Negative Sampling
#StatQuest #word2vec

Comments: 455

  • @statquest
    @statquest 1 year ago

    To learn more about Lightning: lightning.ai/ Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @karanacharya18
    @karanacharya18 11 days ago

    In simple words, word embeddings are the by-product of training a neural network to predict the next word. By focusing on that single objective, the weights themselves (the embeddings) can be used to understand the relationships between the words. This is actually quite fantastic! As always, great video @statquest!
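
To make that idea concrete, here is a minimal sketch of the approach (using PyTorch; the tiny corpus, the 2-dimensional embeddings, and all variable names are illustrative assumptions, not taken from the video): train a network to predict the next word, then read the learned input-side weights back out as the word embeddings.

```python
import torch
import torch.nn as nn

# Toy corpus: consecutive words form (input word, next word) training pairs.
corpus = ["troll2", "is", "great", "gymkata", "is", "great"]
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
pairs = [(w2i[a], w2i[b]) for a, b in zip(corpus, corpus[1:])]

embed = nn.Embedding(len(vocab), 2)      # input-side weights: 2 numbers per word
out = nn.Linear(2, len(vocab))           # output layer: a score for each possible next word
loss_fn = nn.CrossEntropyLoss()          # softmax + cross entropy
opt = torch.optim.SGD(list(embed.parameters()) + list(out.parameters()), lr=0.1)

x = torch.tensor([a for a, _ in pairs])
y = torch.tensor([b for _, b in pairs])
for _ in range(500):                     # plain backpropagation on next-word prediction
    opt.zero_grad()
    loss_fn(out(embed(x)), y).backward()
    opt.step()

# The by-product: the trained input weights are the word embeddings.
for w in vocab:
    print(w, embed.weight[w2i[w]].detach().numpy())
```

Words used in the same contexts ("troll2" and "gymkata" both predict "is") end up with similar numbers, which is exactly the relationship described above.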

  • @statquest

    @statquest

    10 days ago

    bam! :)

  • @rishavkumar8341
    @rishavkumar8341 1 year ago

    Probably the most important concept in NLP. Thank you for explaining it so simply and rigorously. Your videos are a thing of beauty!

  • @statquest

    @statquest

    1 year ago

    Wow, thank you!

  • @exxzxxe
    @exxzxxe 2 months ago

    Josh, this is absolutely the clearest and most concise explanation of embeddings on KZread!

  • @statquest

    @statquest

    2 months ago

    Thank you very much!

  • @davins90

    @davins90

    1 month ago

    totally agree

  • @HarpitaPandian
    @HarpitaPandian 5 months ago

    Can't believe this is free to watch, your quality content really helps people develop a good intuition about how things work!

  • @statquest

    @statquest

    5 months ago

    Thanks!

  • @pichazai
    @pichazai 8 days ago

    This channel is the best ML resource on the entire internet.

  • @statquest

    @statquest

    8 days ago

    Thank you!

  • @rachit7185
    @rachit7185 1 year ago

    This channel is literally the best thing that has happened to me on youtube! Way too excited for your upcoming video on transformers, attention and LLMs. You're the best Josh ❤

  • @statquest

    @statquest

    1 year ago

    Wow, thanks!

  • @MiloLabradoodle

    @MiloLabradoodle

    1 year ago

    Yes, please do a video on transformers. Great channel.

  • @statquest

    @statquest

    1 year ago

    @@MiloLabradoodle I'm working on the transformers video right now.

  • @liuzeyu3125

    @liuzeyu3125

    1 year ago

    @@statquest Can't wait to see it!

  • @SergioPolimante
    @SergioPolimante 3 months ago

    Statquest is by far the best machine learning channel on KZread for learning the basic concepts. Nice job

  • @statquest

    @statquest

    3 months ago

    Thank you!

  • @yuxiangzhang2343
    @yuxiangzhang2343 8 months ago

    So good!!! This is literally the best deep learning tutorial series I have found… after a very long search on the web!

  • @statquest

    @statquest

    8 months ago

    Thank you! :)

  • @dreamdrifter
    @dreamdrifter 11 months ago

    Thank you Josh, this is something I've been meaning to wrap my head around for a while and you explained it so clearly!

  • @statquest

    @statquest

    11 months ago

    Glad it was helpful!

  • @awaredz007
    @awaredz007 16 days ago

    Wow!! This is the best definition of word embedding I have ever heard or seen, right at 09:35. Thanks for the clear and awesome video. You guys rock!!

  • @statquest

    @statquest

    16 days ago

    Thanks! :)

  • @harichandananeralla8099
    @harichandananeralla8099 8 months ago

    I was struggling to understand NLP and DL concepts, thinking of dropping my classes, and BAM!!! I found you, and now I'm writing a paper on neural program repair using DL techniques.

  • @statquest

    @statquest

    8 months ago

    BAM! :)

  • @ah89971
    @ah89971 8 months ago

    When I watched this, I had only one question: why have all the others failed to explain it this clearly, if they fully understood the concept?

  • @statquest

    @statquest

    8 months ago

    bam!

  • @rudrOwO

    @rudrOwO

    5 months ago

    @@statquest Double Bam!

  • @meow-mi333

    @meow-mi333

    4 months ago

    Bam the bam!

  • @tanbui7569
    @tanbui7569 8 months ago

    Damn, when I first learned about this 4 years ago, it took me two days to wrap my head around these weights and embeddings well enough to implement them in code. Just now, I needed to refresh myself on the concepts since I have not worked with them in a while, and your video illustrated what I learned (a whole 2 days back then) in just 16 minutes!! I wish this video had existed earlier!!

  • @statquest

    @statquest

    8 months ago

    Thanks!

  • @chad5615
    @chad5615 11 months ago

    Keep up the amazing work (especially the songs) Josh, you're making life easy for thousands of people!

  • @statquest

    @statquest

    11 months ago

    Wow! Thank you so much for supporting StatQuest! TRIPLE BAM!!!! :)

  • @FullStackAmigo
    @FullStackAmigo 1 year ago

    Absolutely the best explanation that I've found so far! Thanks!

  • @statquest

    @statquest

    1 year ago

    Thank you! :)

  • @mannemsaisivadurgaprasad8987
    @mannemsaisivadurgaprasad8987 6 months ago

    One of the best videos I've seen so far on Embeddings.

  • @statquest

    @statquest

    6 months ago

    Thank you!

  • @haj5776
    @haj5776 1 year ago

    The phrase "similar words will have similar numbers" in the song will stick with me for a long time, thank you!

  • @statquest

    @statquest

    1 year ago

    bam!

  • @ananpinya835
    @ananpinya835 1 year ago

    StatQuest is great! I learn a lot from your channel. Thank you very much!

  • @statquest

    @statquest

    1 year ago

    Glad you enjoy it!

  • @muthuaiswaryaaswaminathan4079
    @muthuaiswaryaaswaminathan4079 6 months ago

    Thank you so much for this playlist! Got to learn a lot of things in a very clear manner. TRIPLE BAM!!!

  • @statquest

    @statquest

    6 months ago

    Thank you! :)

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    This is the best explanation of word embedding I have come across.

  • @statquest

    @statquest

    1 year ago

    Thank you very much! :)

  • @wizenith
    @wizenith 1 year ago

    Haha, I love your opening and your teaching style! When we think something is extremely difficult to learn, everything should begin with singing a song; that makes the day more beautiful to begin with (heheh, actually I am not just teasing lol, I really like that). Thanks for sharing your thoughts with us.

  • @statquest

    @statquest

    1 year ago

    Thanks!

  • @TropicalCoder
    @TropicalCoder 8 months ago

    That was the first time I actually understood embeddings - thanks!

  • @statquest

    @statquest

    8 months ago

    bam! :)

  • @mamdouhdabjan9292
    @mamdouhdabjan9292 11 months ago

    Hey Josh. A great new series that I, and many others, would be excited to see is bayesian statistics. Would love to watch you explain the intricacies of that branch of stats. Thanks as always for the great content and keep up with the neural-network related videos. They are especially helpful.

  • @statquest

    @statquest

    11 months ago

    That's definitely on the to-do list.

  • @mamdouhdabjan9292

    @mamdouhdabjan9292

    11 months ago

    @@statquest looking forward to it.

  • @user-eq9cf4mt2s
    @user-eq9cf4mt2s 21 days ago

    Great presentation, You saved my day after watching several videos, thank you!

  • @statquest

    @statquest

    21 days ago

    Glad it helped!

  • @mycotina6438
    @mycotina6438 1 year ago

    BAM!! StatQuest never lies, it is indeed super clear!

  • @statquest

    @statquest

    1 year ago

    Thank you! :)

  • @rathinarajajeyaraj1502
    @rathinarajajeyaraj1502 1 year ago

    This is one of the best sources of information.... I always find videos a great source of visual stimulation... thank you.... infinite baaaam

  • @statquest

    @statquest

    1 year ago

    BAM! :)

  • @acandmishra
    @acandmishra 1 month ago

    Your work is extremely amazing and so helpful for new learners who want to go into the details of how Deep Learning models work, instead of just knowing what they do!! Keep it up!

  • @statquest

    @statquest

    1 month ago

    Thanks!

  • @exxzxxe
    @exxzxxe 1 month ago

    Hopefully everyone following this channel has Josh's book. It is quite excellent!

  • @statquest

    @statquest

    1 month ago

    Thanks for that!

  • @flow-saf
    @flow-saf 5 months ago

    This video explains the source of the multiple dimensions in a word embedding, in the most simple way. Awesome. :)

  • @statquest

    @statquest

    5 months ago

    Thanks!

  • @gustavow5746
    @gustavow5746 6 months ago

    The best video I've seen about this topic so far. Great content! Congrats!!

  • @statquest

    @statquest

    6 months ago

    Wow, thanks!

  • @channel_SV
    @channel_SV 1 year ago

    It's so nice to google something and realize that there is a StatQuest about your question, when you were certain there hadn't been one just a little while before.

  • @statquest

    @statquest

    1 year ago

    BAM! :)

  • @vpnserver407
    @vpnserver407 11 months ago

    Highly valuable video and book tutorial, thanks for putting this kind of special tutorial out here.

  • @statquest

    @statquest

    11 months ago

    Glad you liked it!

  • @lfalfa8460
    @lfalfa8460 5 months ago

    I love all of your songs. You should record a CD!!! 🤣 Thank you very much again and again for the elucidating videos.

  • @statquest

    @statquest

    5 months ago

    Thanks!

  • @michaelcheung6290
    @michaelcheung6290 1 year ago

    Thank you statquest!!! Finally I started to understand LSTM

  • @statquest

    @statquest

    1 year ago

    Hooray! BAM!

  • @wellwell8025
    @wellwell8025 1 year ago

    Way better than my University slides. Thanks

  • @statquest

    @statquest

    1 year ago

    Thanks!

  • @user-qc5uk6ei2m
    @user-qc5uk6ei2m 7 months ago

    Hey Josh, I'm a Brazilian student and I love watching your videos; they're such good and fun-to-watch explanations of every one of the concepts. I just wanted to say thank you, because in the last few months you made me smile in the middle of studying. So, thank you!!! (sorry for the bad English hahaha)

  • @statquest

    @statquest

    7 months ago

    Thank you so much!!! :)

  • @mahdi132
    @mahdi132 9 months ago

    Thank you sir. Your explanation is great and your work is much appreciated.

  • @statquest

    @statquest

    9 months ago

    Thanks!

  • @RaynerGS
    @RaynerGS 6 months ago

    I admire your work a lot. Salute from Brazil.

  • @statquest

    @statquest

    6 months ago

    Thank you very much! :)

  • @eamonnik
    @eamonnik 1 year ago

    Hey Josh! Loved seeing your talk at BU! Appreciate your videos :)

  • @statquest

    @statquest

    1 year ago

    Thanks so much! :)

  • @manuelamankwatia6556
    @manuelamankwatia6556 29 days ago

    This is by far the best video on embeddings. A whole university course is broken down into 15 minutes.

  • @statquest

    @statquest

    29 days ago

    Thanks!

  • @ramzirebai3661
    @ramzirebai3661 1 year ago

    Thank you so much Mr. Josh Starmer, you are the only one that makes ML concepts easy to understand. Can you, please, explain GloVe?

  • @statquest

    @statquest

    1 year ago

    I'll keep that in mind.

  • @bancolin1005
    @bancolin1005 1 year ago

    BAM! Thanks for your video, I finally realize what negative sampling means ~

  • @statquest

    @statquest

    1 year ago

    Happy to help!

  • @alfredoderodt6519
    @alfredoderodt6519 8 months ago

    You are a beautiful human! Thank you so much for this video! I was finally able to understand this concept! Thanks so much again!!!!!!!!!!!!! :)

  • @statquest

    @statquest

    8 months ago

    Glad it was helpful!

  • @saisrisai9649
    @saisrisai9649 4 months ago

    Thank you Statquest!!!!

  • @statquest

    @statquest

    4 months ago

    Any time!

  • @fouadboutaleb4157
    @fouadboutaleb4157 8 months ago

    Bro, I have my master's degree in ML, but trust me, you explain it better than my teachers ❤❤❤ Big thanks

  • @statquest

    @statquest

    8 months ago

    Thank you very much! :)

  • @p-niddy
    @p-niddy 11 months ago

    Great video! One suggestion is that you could expand on the Negative Sampling discussion by explaining how it chooses purposely unrelated (non-context) words to increase the model's accuracy in predicting related (context) words of the target word.

  • @statquest

    @statquest

    11 months ago

    It actually doesn't purposely select unrelated words. It just selects random words and hopes that the vocabulary is large enough that the probability that the words are unrelated will be relatively high.
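
As a small illustration of that answer (the toy vocabulary and function name below are made up, and real implementations often skew the draw toward frequent words, but the principle is the same):

```python
import random

def sample_negatives(vocab, positive_word, k=2, seed=0):
    """Draw k random words to use as 'do not predict' targets.

    Nothing here checks that the draws are unrelated to the positive word;
    with millions of words and phrases in the vocabulary, a random word is
    almost certainly unrelated, and that is all word2vec relies on.
    """
    rng = random.Random(seed)
    negatives = []
    while len(negatives) < k:
        word = rng.choice(vocab)
        if word != positive_word:      # the only word explicitly excluded
            negatives.append(word)
    return negatives

vocab = ["aardvark", "abandon", "gymkata", "great", "is", "troll2"]
print(sample_negatives(vocab, positive_word="is"))
```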

  • @m3ow21
    @m3ow21 11 months ago

    I love the way you teach!

  • @statquest

    @statquest

    11 months ago

    Thanks!

  • @wenqiangli7544
    @wenqiangli7544 1 year ago

    Great video for explaining word2vec!

  • @statquest

    @statquest

    1 year ago

    Thanks!

  • @danish5326
    @danish5326 8 months ago

    Thanks for enlightening us Master.

  • @statquest

    @statquest

    7 months ago

    Any time!

  • @yasminemohamed5157
    @yasminemohamed5157 1 year ago

    Awesome as always. Thank you!!

  • @statquest

    @statquest

    1 year ago

    Thank you! :)

  • @pedropaixaob
    @pedropaixaob 4 months ago

    This is an amazing video. Thank you!

  • @statquest

    @statquest

    4 months ago

    Thanks!

  • @ColinTimmins
    @ColinTimmins 7 months ago

    Thank you so much for these videos. It really helps with the visuals because I am dyslexic… Quadruple BAM!!!! lol 😊

  • @statquest

    @statquest

    7 months ago

    Happy to help!

  • @study-tp4ts
    @study-tp4ts 1 year ago

    Great video as always!

  • @statquest

    @statquest

    1 year ago

    Thanks again!

  • @LakshyaGupta-ge3wj
    @LakshyaGupta-ge3wj 6 months ago

    Absolutely mind-blowing and amazing presentation! For Word2Vec's strategy for increasing context, does it employ the 2 strategies in "addition" to the 1-output-for-1-input basic method we talked about throughout the video, or are they replacements? Basically, are we still training the model on predicting "is" for "Gymkata" in the same neural network along with predicting "is" for a combination of "Gymkata" and "great"?

  • @statquest

    @statquest

    5 months ago

    Word2Vec uses one of the two strategies presented at the end of the video.
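
For anyone curious what those two strategies look like as training data, here is a rough sketch (toy sentence and window size chosen purely for illustration) of how the "continuous bag-of-words" and "skip-gram" approaches each turn text into (input, target) pairs; per the reply above, the chosen strategy replaces the simple one-word-in, one-word-out setup rather than being added to it.

```python
def cbow_pairs(tokens, window=2):
    # CBOW: surrounding context words are the input, the center word is the target.
    pairs = []
    for i, center in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if context:
            pairs.append((context, center))
    return pairs

def skipgram_pairs(tokens, window=2):
    # Skip-gram: the center word is the input, each surrounding word is its own target.
    pairs = []
    for i, center in enumerate(tokens):
        for context_word in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
            pairs.append((center, context_word))
    return pairs

sentence = "gymkata is great".split()
print(cbow_pairs(sentence, window=1))      # [(['is'], 'gymkata'), (['gymkata', 'great'], 'is'), (['is'], 'great')]
print(skipgram_pairs(sentence, window=1))  # [('gymkata', 'is'), ('is', 'gymkata'), ('is', 'great'), ('great', 'is')]
```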

  • @exxzxxe
    @exxzxxe 2 months ago

    You ARE the Batman and Superman of machine learning!

  • @statquest

    @statquest

    2 months ago

    :)

  • @pakaponwiwat2405
    @pakaponwiwat2405 8 months ago

    Wow, Awesome. Thank you so much!

  • @statquest

    @statquest

    8 months ago

    You're very welcome!

  • @auslei
    @auslei 11 months ago

    Love this channel.

  • @statquest

    @statquest

    11 months ago

    Glad to hear it!

  • @aniketsakpal4969
    @aniketsakpal4969 11 months ago

    Just incredible!

  • @statquest

    @statquest

    11 months ago

    Thank you!

  • @c.nbhaskar4718
    @c.nbhaskar4718 1 year ago

    great stuff as usual ..BAM * 600 million

  • @statquest

    @statquest

    1 year ago

    Thank you so much! :)

  • @pushkar260
    @pushkar260 1 year ago

    That was quite informative

  • @statquest

    @statquest

    1 year ago

    BAM! Thank you so much for supporting StatQuest!!! :)

  • @minhmark.01
    @minhmark.01 1 month ago

    thanks for your tutorial!!!

  • @statquest

    @statquest

    1 month ago

    You're welcome!

  • @MaskedEngineerYH
    @MaskedEngineerYH 1 year ago

    Keep going statquest!!

  • @statquest

    @statquest

    1 year ago

    That's the plan!

  • @janapalaswathi4262
    @janapalaswathi4262 3 months ago

    Awesome explanation..

  • @statquest

    @statquest

    3 months ago

    Thanks!

  • @phobiatheory3791
    @phobiatheory3791 1 year ago

    Hi, I love your videos! They're really well explained. Could you please make a video on partial least squares (PLS)?

  • @statquest

    @statquest

    1 year ago

    I'll keep that in mind.

  • @denismarcio
    @denismarcio 1 month ago

    Extremely didactic! Congratulations.

  • @statquest

    @statquest

    1 month ago

    Thank you very much! :)

  • @tupaiadhikari
    @tupaiadhikari 10 months ago

    Great explanation. Please make a video on how we connect the output of an Embedding Layer to an LSTM/GRU to do classification for, say, Sentiment Analysis.

  • @statquest

    @statquest

    10 months ago

    I show how to connect it to an LSTM for language translation here: kzread.info/dash/bejne/fmx8rdmeiqy1nco.html
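
For readers looking specifically for the classification case, a rough sketch of wiring an embedding layer into an LSTM for something like sentiment analysis might look like this (PyTorch; the class name, vocabulary size, and all dimensions are illustrative assumptions, not something from the videos):

```python
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)    # word ids -> embedding vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, num_classes)  # last hidden state -> class scores

    def forward(self, token_ids):                # token_ids: (batch, seq_len) word indices
        vectors = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(vectors)         # h_n: final hidden state of the LSTM
        return self.classify(h_n[-1])            # (batch, num_classes) logits

model = SentimentLSTM(vocab_size=10_000)
fake_batch = torch.randint(0, 10_000, (4, 12))   # 4 "sentences" of 12 token ids each
print(model(fake_batch).shape)                   # torch.Size([4, 2])
```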

  • @tupaiadhikari

    @tupaiadhikari

    10 months ago

    @@statquest Thank You Professor Josh !

  • @avishkaravishkar1451
    @avishkaravishkar1451 5 months ago

    For those of you who find it hard to understand this video, my recommendation is to watch it at a slower pace and make notes of the same. It will really make things much more clear.

  • @statquest

    @statquest

    5 months ago

    0.5 speed bam!!! :)

  • @ishaqpaktinyar7766
    @ishaqpaktinyar7766 3 months ago

    you da bessssst, saved me alota time and confusion :..)

  • @statquest

    @statquest

    3 months ago

    Thanks!

  • @mariafernandaruizmorales2322
    @mariafernandaruizmorales2322 1 year ago

    It would also be nice to have a video about the difference between LM (linear regression models) and GLM (Generalized Linear Models). I know they're different but don't quite understand that when interpreting them or programming them in R. THAAANKS!

  • @statquest

    @statquest

    1 year ago

    Linear models are just models based on linear regression and I describe them here in this playlist: kzread.info/head/PLblh5JKOoLUIzaEkCLIUxQFjPIlapw8nU Generalized Linear Models are more "generalized" and include Logistic Regression kzread.info/head/PLblh5JKOoLUKxzEP5HA2d-Li7IJkHfXSe and a few other methods that I don't talk about, like Poisson Regression.

  • @mariafernandaruizmorales2322

    @mariafernandaruizmorales2322

    1 year ago

    @@statquest Thanks Josh!! I'll watch them all 🤗

  • @lexxynubbers
    @lexxynubbers 11 months ago

    Machine learning explained like Sesame Street is exactly what I need right now.

  • @statquest

    @statquest

    11 months ago

    bam!

  • @CaHeoMapMap
    @CaHeoMapMap 1 year ago

    So goooood! Thanks a lot!

  • @statquest

    @statquest

    1 year ago

    Glad you like it!

  • @AliShafiei-ui8tn
    @AliShafiei-ui8tn 9 months ago

    the best channel ever.

  • @statquest

    @statquest

    9 months ago

    Double bam! :)

  • @meguellatiyounes8659
    @meguellatiyounes8659 1 year ago

    My favourite topic, it's magic. Bam!!

  • @statquest

    @statquest

    1 year ago

    :)

  • @user-bd2fm9lk5b
    @user-bd2fm9lk5b 5 months ago

    Thank you Josh for this great video. I have a quick question about the Negative Sampling: If we only want to predict A, why do we need to keep the weights for "abandon" instead of just ignoring all the weights except for "A"?

  • @statquest

    @statquest

    5 months ago

    If we only focused on the weights for "A" and nothing else, then training would cause all of the weights to make every output = 1. In contrast, by adding some outputs that we want to be 0, training is forced to make sure that not every single output gets a 1.
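
A tiny numerical sketch of that point (the scores below are made up): if the loss only looks at the output for "A", a network that pushes every output toward 1 looks perfect, but adding a sampled word whose desired output is 0 penalizes that shortcut.

```python
import torch
import torch.nn as nn

# Pretend raw output scores for "A", "aardvark", and "abandon" -- all high.
logits = torch.tensor([[4.0, 3.9, 3.8]])
bce = nn.BCEWithLogitsLoss()

only_positive = bce(logits[:, :1], torch.tensor([[1.0]]))           # only "A" should be 1
with_negative = bce(logits[:, [0, 2]], torch.tensor([[1.0, 0.0]]))  # "A" = 1, "abandon" = 0

print(only_positive.item())  # tiny: pushing every output to 1 looks fine
print(with_negative.item())  # much larger: "abandon" scoring high is now penalized
```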

  • @NewMateo
    @NewMateo 1 year ago

    Great vid. So you're going to do a vid on transformer architectures? That would be incredible if so. Btw, I bought your book and finished it in like 2 weeks. Great work on it!

  • @statquest

    @statquest

    1 year ago

    Thank you! My video on Encoder-Decoders will come out soon, then Attention, then Transformers.

  • @thomasstern6814

    @thomasstern6814

    1 year ago

    @@statquest When the universe needs you most, you provide

  • @familywu3869
    @familywu3869 11 months ago

    Thank you very much for your excellent tutorials, Josh! I have a question: at around 13:30 of this video you mention multiplying by 2, and I am not sure why 2. If there are more than 2 outputs, would we multiply by the number of output nodes instead of 2? Thank you in advance for your clarification.

  • @statquest

    @statquest

    11 months ago

    If we have 3,000,000 words and phrases as inputs, and each input is connected to 100 activation functions, then we have 300,000,000 weights going from the inputs to the activation function. Then from those 100 activation function, we have 3,000,000 outputs (one per word or phrase), each with a weight. So we have 300,000,000 weights on the input side, and 300,000,000 weights on the output side, or a total of 600,000,000 weights. However, since we always have the same number of weights on the input and output sides, we only need to calculate the number of weights on one side and then just multiply that number by 2.
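
That bookkeeping, written out (using the 3,000,000-word vocabulary and 100 activation functions from the video):

```python
vocab_size = 3_000_000   # words and phrases in word2vec's vocabulary
hidden = 100             # activation functions in the hidden layer

input_side = vocab_size * hidden   # weights from every word to every activation function
output_side = hidden * vocab_size  # weights from every activation function to every output word

print(input_side, output_side, input_side + output_side)
# 300000000 300000000 600000000  -> same count on each side, hence "multiply by 2"
```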

  • @surojit9625

    @surojit9625

    9 months ago

    @@statquest Thanks for explaining! I also had the same question.

  • @jwilliams8210

    @jwilliams8210

    5 months ago

    Ohhhhhhhhh! I missed that the first time around! BTW: (Stat)Squatch and Norm are right: StatQuest is awesome!!

  • @SousanTarahomi-vh2jp
    @SousanTarahomi-vh2jp 5 months ago

    Thanks!

  • @statquest

    @statquest

    5 months ago

    Hooray!!! Thank you so much for supporting StatQuest!!! TRIPLE BAM! :)

  • @mariafernandaruizmorales2322
    @mariafernandaruizmorales2322 1 year ago

    Please make a video about the metrics for prediction performance: RMSE, MAE and R SQUARED. 🙏🏼🙏🏼🙏🏼 YOURE THE BEST!

  • @statquest

    @statquest

    1 year ago

    The first video I ever made is on R-squared: kzread.info/dash/bejne/ZHWFrc-wYZfTeLA.html NOTE: Back then I didn't know about machine learning, so I only talk about R-squared in the context of fitting a straight line to data. In that context, R-squared can't be negative. However, with other machine learning algorithms, it is possible.

  • @MrAhsan99
    @MrAhsan99 5 months ago

    Watched this video multiple times but I'm unable to understand a thing. I'm sure I am dumb and Josh is great!

  • @statquest

    @statquest

    5 months ago

    Maybe you should start with the basics for neural networks: kzread.info/dash/bejne/daWDyMttYa_MdNo.html

  • @MecchaKakkoi
    @MecchaKakkoi 1 year ago

    Nice!

  • @statquest

    @statquest

    1 year ago

    Thanks!

  • @hasansoufan
    @hasansoufan 11 months ago

    Thanks ❤

  • @statquest

    @statquest

    11 months ago

    :)

  • @smooth7041
    @smooth7041 1 year ago

    Hello. Thank you very much, great, great video. I have a question. In the negative sampling procedure we never use A = 1 as input at any step in the training process, so I am wondering when the embeddings for A get trained. I can see how the weights for A to the right of the activation functions are trained, but not the weights on the left. I can see that, because we use a lot of training steps, at some moment A will be a word we don't want to predict for some input; therefore the embeddings for A will change, however, the prediction won't be A for those steps.

  • @statquest

    @statquest

    1 year ago

    Why would we never use "A = 1" in training?

  • @user-pd1gy8xh4y
    @user-pd1gy8xh4y 8 months ago

    funny and very nicely explained.

  • @statquest

    @statquest

    8 months ago

    Thanks! 😃

  • @neemo8089
    @neemo8089 8 months ago

    Thank you so much for the video! I have one question: at 15:09, why do we only need to optimize 300 weights at each step? For one word with 100 * 2 weights? I'm not sure how to understand the '2' either.

  • @statquest

    @statquest

    8 months ago

    At 15:09 there are 100 weights going from the word "aardvark" to the 100 activation functions in the hidden layer. There are then 100 weights going from the activation functions to the sum for the word "A" and 100 weights going from the activation functions to the sum for the word "abandon". Thus, 100 + 100 + 100 = 300.
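
Counting the same thing in code (one input word, the one word we want to predict, and one sampled word we don't, as in the example at 15:09):

```python
hidden = 100        # activation functions in the hidden layer
n_outputs_kept = 2  # "A" (the word we want to predict) + "abandon" (the sampled negative)

weights_per_step = hidden + n_outputs_kept * hidden  # input-side + kept output-side weights
print(weights_per_step)  # 300, instead of 600,000,000 without negative sampling
```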

  • @neemo8089

    @neemo8089

    8 months ago

    @@statquest Thank you!

  • @MadeyeMoody492
    @MadeyeMoody492 1 year ago

    Great video! I was just wondering why the outputs of the softmax activation at 10:10 are just 1s and 0s. Wouldn't that only be the case if we applied ArgMax here, not SoftMax?

  • @statquest

    @statquest

    1 year ago

    In this example the data set is very small and, for example, the word "is" is always followed by "great", every single time. In contrast, if we had a much larger dataset, then the word "is" would be followed by a bunch of words (like "great", or "awesome" or "horrible", etc) and not followed by a bunch of other words (like "ate", or "stand", etc). In that case, the softmax would tell us which words had the highest probability of following 'is', and we wouldn't just get 1.0 for a single word that could follow the word 'is'.
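
A short sketch of that difference (the scores below are invented for illustration): softmax only collapses to a hard 1/0 pattern when one continuation overwhelmingly wins; with more varied training data you get a genuine probability distribution over the possible next words.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return e / e.sum()

# Pretend output scores for words that might follow "is".
tiny_corpus = np.array([9.0, -4.0, -4.0])   # "great" always follows "is" in a tiny dataset
big_corpus = np.array([2.0, 1.5, 0.2])      # "great", "awesome", "horrible" all observed

print(softmax(tiny_corpus).round(3))  # [1. 0. 0.] -- looks like argmax
print(softmax(big_corpus).round(3))   # [0.564 0.342 0.093] -- a real distribution
```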

  • @MadeyeMoody492

    @MadeyeMoody492

    1 year ago

    @@statquest Ohh ok, that clears it up. Thanks!!

  • @guillaumebarreau
    @guillaumebarreau 7 months ago

    Hi Josh, thank you for your excellent work! I just discovered your videos and am consuming them like a pack of crisps. I was wondering about the desired output when using the skip-gram model. When we have a word as input, the desired output is to have all the words found within the window size in any sentence of the corpus activate to 1 at the same time on the output layer, right? It is not said explicitly, but I guess it is the only way it can be.

  • @statquest

    @statquest

    7 months ago

    The outputs from a softmax function are all between 0 and 1 and add up to 1. In other words, softmax function does not allow more than one output to have a value of 1. See 12:16 for an example of outputs for the skipgram method.

  • @guillaumebarreau

    @guillaumebarreau

    7 months ago

    @@statquest, thanks for your prompt reply! You are right, I didn't look carefully enough. I guess I got confused because, after watching the video, I read other sources which seem to consider every skip-gram pair as a separate training example.

  • @gabrielrochasantana
    @gabrielrochasantana 1 month ago

    Amazing lecture, congrats. The audio was also made with NLP (Natural Language Processing), right?

  • @statquest

    @statquest

    1 month ago

    The translated overdubs were.

  • @user-rj6wc7bm8x
    @user-rj6wc7bm8x 1 year ago

    That's awesome! But how would a multilingual word2vec be trained? Would the training dataset simply include corpora from two (or more) languages? Or would additional NN infrastructure be required?

  • @statquest

    @statquest

    1 year ago

    Are you asking about something that can translate one language to another? If so, then, yes, additional infrastructure is needed and I'll describe it in my next video in this series (it's called "sequence2sequence").

  • @user-rj6wc7bm8x

    @user-rj6wc7bm8x

    1 year ago

    @@statquest not exactly, it's more like having similar words from multiple languages to be mapped within the same vector spaces. so for example King and "King" in French, German and Spanish - would appear to be the same.

  • @statquest

    @statquest

    1 year ago

    @@user-rj6wc7bm8x Hmmm... I'm not sure how that would work, because the English word "king" and the Spanish translation, "rey", would be in different contexts (for example, the English "king" would be in a phrase like "all hail the king", and the Spanish version would be in a sentence that had completely different words, even if they meant the same thing).

  • @user-cr6vg9kf2t
    @user-cr6vg9kf2t 4 months ago

    This guy really loves Troll 2!

  • @statquest

    @statquest

    4 months ago

    bam!

  • @BalintHorvath-mz7rr
    @BalintHorvath-mz7rr 2 months ago

    Awesome video! This time, I feel I'm missing one step, though. Namely, how do you train this network? I mean, I get that we want the network to be such that similar words have similar embeddings. But what is the 'Actual' we use in our loss function to measure the difference from and use backpropagation with?

  • @statquest

    @statquest

    2 months ago

    Yes

  • @balintnk

    @balintnk

    2 months ago

    @@statquest haha I feel like I didn't ask the question well :D How would the network know, without human input, that Troll 2 and Gymkata are very similar, and so that it should optimize itself so that they ultimately have similar embeddings? (What "Actual" value do we use in the loss function to calculate the residual?)

  • @statquest

    @statquest

    2 months ago

    @@balintnk We just use the context that the words are used in. Normal backpropagation plus the cross entropy loss function where we use neighboring words to predict "troll 2" and "gymkata" is all you need to use to get similar embedding values for those. That's what I used to create this video.
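
In other words, the "Actual" in the loss function is simply the neighboring word observed in the training text, written as the target class for cross entropy. A tiny sketch with made-up phrases:

```python
phrases = [["troll2", "is", "great"], ["gymkata", "is", "great"]]

# The target ("Actual") for each input word is just the word that follows it in the text.
training_pairs = [(a, b) for phrase in phrases for a, b in zip(phrase, phrase[1:])]
print(training_pairs)
# [('troll2', 'is'), ('is', 'great'), ('gymkata', 'is'), ('is', 'great')]

# "troll2" and "gymkata" share the same target, so backpropagation with cross entropy
# pushes their embedding weights toward similar values -- no human labels required.
```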

  • @robott12
    @robott12 1 year ago

    Fantastic video! How do you apply the "pencil-written" box style in PowerPoint?

  • @statquest

    @statquest

    1 year ago

    I use Keynote, and it's one of the default line types.

  • @robott12

    @robott12

    1 year ago

    @@statquest Thanks!

  • @user-ck3qk5ce9k
    @user-ck3qk5ce9k 3 months ago

    Can you do GloVe? I really enjoyed Word2Vec; it would be great to see how GloVe works... how the factorization-based method works. Thank you for this amazing content!

  • @statquest

    @statquest

    3 months ago

    I'll keep that in mind.

  • @himgos13
    @himgos13 6 months ago

    i understood word embedding in first 10 seconds

  • @statquest

    @statquest

    6 months ago

    bam!

  • @fernandofa2001
    @fernandofa2001 1 year ago

    I'm not sure if I understood correctly. Have those millions of word embeddings been preprocessed and are they public? Or are they dependent on context? I need to do a project on word clustering of movie genres and I'm not sure if this is the way to go. Any help is appreciated!

  • @statquest

    @statquest

    1 year ago

    I'm not sure this is the way to go either - these specific embeddings are usually used for processing natural language. However, you can download some publicly available embeddings here: fasttext.cc/docs/en/crawl-vectors.html
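
If those downloaded vectors do turn out to be useful, one possible way to load and explore them is sketched below (this assumes the gensim library and the English cc.en.300.vec file from fasttext.cc; both the library choice and the file name are assumptions, so check the fastText docs for your language):

```python
from gensim.models import KeyedVectors

# Assumes cc.en.300.vec has been downloaded and unpacked from fasttext.cc.
vectors = KeyedVectors.load_word2vec_format("cc.en.300.vec", binary=False)

print(vectors["movie"][:5])                    # first 5 of the 300 numbers for "movie"
print(vectors.most_similar("comedy", topn=5))  # nearest words by cosine similarity
```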

  • @lancezhang892
    @lancezhang892 6 months ago

    Hello Josh, thanks for your video. May I know if we could use a 3-neuron network to predict the next words?

  • @statquest

    @statquest

    6 months ago

    Sure

  • @jayachandrarameshkalakutag7329
    @jayachandrarameshkalakutag7329 5 months ago

    Hi Josh, firstly thank you for all your videos. I had one doubt: in skip-gram, what is the loss function the network is optimized on? In CBOW I can see that cross entropy is enough.

  • @statquest

    @statquest

    5 months ago

    I believe it's cross entropy in both.

  • @S.A_1992
    @S.A_1992 3 months ago

    Thank you so much for this video. Could you do something like this for audio embedding as well? Or how could we merge (fuse) audio and text embeddings? I really appreciate it.

  • @statquest

    @statquest

    3 months ago

    Unfortunately, I'm not familiar with audio embedding.

  • @kimsobota1324
    @kimsobota1324 5 months ago

    I appreciate the knowledge you've just shared. It explains many things to me about neural networks. I have a question though: if you are randomly assigning a value to a word, why not try something easier? For example, in Hebrew, each of the letters of the Alef-Bet is assigned a value, and these values are added together to form the sum of a word. It is the context of the word in a sentence that forms the block. Sabe? Take a look at Gematria; Hebrew has been doing this for thousands of years. Just a thought.

  • @statquest

    @statquest

    5 months ago

    Would that method result in words used in similar contexts to have similar numbers? Does it apply to other languages? Other symbols? And can we end up with multiple numbers per symbol to reflect how it can be used or modified in different contexts?

  • @kimsobota1324

    @kimsobota1324

    5 months ago

    I wish I could answer that question better than to tell you context is EVERYTHING in Hebrew, a language that has but doesn't use vowels, since all who use the language understand the consonant-based word structures. Not only that, but in the late 1890s Rabbis from Ukraine and Azerbaijan developed a mathematical code that was used to predict word structures from the Torah that were accurate to a value of 0.001%. Others have tried to apply it to other books like Alice in Wonderland and could not duplicate the result. You can find more information on the subject through a book called The Bible Code, which gives much more information as well as the formulae the Jewish mathematicians created. While it is a poor citation, I have included this Wikipedia link: en.wikipedia.org/wiki/Bible_code#:~:text=The%20Bible%20code%20(Hebrew%3A%20%D7%94%D7%A6%D7%95%D7%A4%D7%9F,has%20predicted%20significant%20historical%20events. The book is available on Amazon if you find it piques your interest. Please let me know if this helps. @@statquest

  • @kimsobota1324

    @kimsobota1324

    5 months ago

    @statquest, I had not heard from you about the Wiki?

  • @Rex389
    @Rex389 1 year ago

    Hi Josh, great video. I have one question: how are the 2-20 words that get dropped selected when doing negative sampling?

  • @statquest

    @statquest

    1 year ago

    This is answered at 13:44. We can pick a random set because the assumption is that when the vocabulary is large, the chances of selecting a similar word are small. And I believe you select a different subset each iteration, so even if you do pick a similar word, the long term effects will not be huge.

  • @Rex389

    @Rex389

    1 year ago

    @@statquest Got it. Thanks