Few Shot Learning - EXPLAINED!

Follow me on M E D I U M: towardsdatascience.com/likeli...
TIMESTAMPS
0:00 Case Study
1:37 Prior Knowledge
3:07 Disadvantage of Traditional Models
4:57 The new Model - explained
6:40 Math
9:04 Code
REFERENCES
[1] Approaches to Few Shot Learning: research.aimultiple.com/few-s...
[2] Blog on Meta Learning & Few Shot Learning (Borealis AI): www.borealisai.com/en/blog/tu...
[3] A Survey of Few Shot Learning (Section 2.4 explains the approaches pretty well in plain English): arxiv.org/pdf/1904.05046.pdf
[4] This paper on knowledge distillation has some points on few shot learning: arxiv.org/pdf/2004.05937.pdf
[5] Code: towardsdatascience.com/one-sh...

Comments: 45

  • @PD-vt9fe · 3 years ago

    Glad to see you're back. This channel deserves more than 41k subs! Keep it up!

  • @CodeEmporium · 3 years ago

    Thank you for tuning in again. Apologies for the wait. Hoping to make up for it :)

  • @IdiotDeveloper · 3 years ago

    Your videos are really informative and entertaining.

  • @atifadib · 3 years ago

    Hey this channel is my fav, glad you're back

  • @waleorki2016 · 3 years ago

    +1

  • @CodeEmporium · 3 years ago

    Much appreciated. :)

  • @kwang-jebaeg2460 · 3 years ago

    Me too. Favorite of favorite

  • @renkewang7641 · 3 years ago

    It is the only personal channel I subscribe to on YouTube.

  • @user-nr3nf5nd5m · 1 year ago

    Thank you very much for the explanation.

  • @luansouzasilva31 · 2 years ago

    Thank you very much for the explanation. But I still don't understand how to pretrain the similarity function, how to organize its inputs, etc. Can you explain a little bit more about it?

  • @user-xt2om1ev9z · 2 years ago

    Truly great explanation

  • @helloansuman · 3 years ago

    Amazing. If possible, cover the coding part as well. Good luck.

  • @linzhu5178 · 1 year ago

    So during training we still have plenty of data to train the model, including data from the same category, right? I'm a first-time learner; the name makes it sound like even during training we only have very little data, or one example per category. Thanks for the video!

  • @aliboudjema78 · 2 years ago

    Can you provide cosine similarity code using TensorFlow, please?
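
    For reference, a minimal cosine-similarity sketch in TensorFlow (the helper name is illustrative, not from the video):

    import tensorflow as tf

    def cosine_similarity(a, b, axis=-1):
        # L2-normalize both batches of embeddings, then take the dot product.
        a = tf.math.l2_normalize(a, axis=axis)
        b = tf.math.l2_normalize(b, axis=axis)
        return tf.reduce_sum(a * b, axis=axis)  # values in [-1, 1]; 1 means same direction

    Keras also ships tf.keras.losses.CosineSimilarity, which returns the negative of this value so it can be minimized directly as a loss.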

  • @rochaksaini7527 · 3 years ago

    How does zero-shot learning fit into this example?

  • @umarfarooqshaik56 · 1 year ago

    thank you!

  • @amandarios4214 · 3 years ago

    Amazing explanation!

  • @CodeEmporium · 3 years ago

    Thank you very much

  • @KlimovArtem1 · 3 years ago

    8:55 - but if it's the same network used to process images i and j, how exactly do you tune its parameters? Tuning it to make the A embedding look closer to the B embedding will also change the B embedding values at the same time.

  • @xinyuli9423 · 2 years ago

    I guess it's simpler if you just look at the math. We are doing gradient descent on the loss function, so you just take the gradient of the loss with respect to the parameters of network f. If you want more math details, you can check out this video: kzread.info/dash/bejne/ZodhuqaelrbQhLA.html&ab_channel=ShusenWang
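
    A minimal sketch of that in TensorFlow, assuming one shared encoder plus a small sigmoid head (names and layer sizes are illustrative, not the video's exact code):

    import tensorflow as tf

    # One encoder f; image i and image j both pass through the SAME weights.
    encoder = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64),                          # 64-dim embedding
    ])
    head = tf.keras.layers.Dense(1, activation="sigmoid")   # "same class?" score
    optimizer = tf.keras.optimizers.Adam(1e-3)
    bce = tf.keras.losses.BinaryCrossentropy()

    def train_step(img_i, img_j, same_label):
        # same_label: 1.0 for a matching pair, 0.0 otherwise, shaped (batch, 1).
        same_label = tf.reshape(tf.cast(same_label, tf.float32), (-1, 1))
        with tf.GradientTape() as tape:
            emb_i = encoder(img_i, training=True)   # branch A
            emb_j = encoder(img_j, training=True)   # branch B, same parameters
            pred = head(tf.abs(emb_i - emb_j))      # score from the embedding difference
            loss = bce(same_label, pred)
        # One gradient w.r.t. the shared weights; backprop sums the
        # contributions flowing through both branches automatically.
        variables = encoder.trainable_variables + head.trainable_variables
        grads = tape.gradient(loss, variables)
        optimizer.apply_gradients(zip(grads, variables))
        return loss

    Because there is only one set of weights, "tuning A to look closer to B" and the reverse both happen through that single gradient update.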

  • @quadhd1121 · 3 years ago

    Hey, can you please send me the link to the original GAN video?

  • @somerset006 · 2 years ago

    Hey, I've really enjoyed all your videos! Very nicely done at an appropriate technical level. I'd say the name of your channel is a bit misleading. It could also be affecting the number of your subscribers... Keep up the good work. Much appreciated!

  • @CodeEmporium · 2 years ago

    Thanks for the compliments! I make all sorts of videos these days. In fact recently, I've been coding a lot. Just hoping people will find the channel eventually 🙂

  • @clairewang8370 · 1 year ago

    Cute demo!!!!❤

  • @CodeEmporium · 1 year ago

    Why thank you :)

  • @McMurchie · 3 years ago

    So can someone help me: if the network can only tell whether two images are the same or not, what is the actual learning done here? Isn't it just an image-to-vector comparison? Also, how does this help with the original problem (is it Sam or not)? Thanks in advance!

  • @KlimovArtem1 · 3 years ago

    It doesn't tell whether the images are the same or not; it's trained to tell whether two different images show a similar-looking person. It does that by converting an image into a 64-dimensional vector of values that somehow describes all the features important to us, and then comparing those vectors from different images. It's trained relatively easily: you just feed it pairs of images with the same or different people in them and compare its output with the expected results. When it's wrong (most of the time during training), you backpropagate the error and tune the network weights. Eventually it learns to compare different people, not only those that were used for training.
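
    In code, building those training pairs can be as simple as the following rough NumPy sketch (the helper name and data layout are made up for illustration):

    import numpy as np

    def make_pairs(images, labels, rng=np.random.default_rng(0)):
        # Index the images belonging to each person/class once.
        idx_by_class = {c: np.where(labels == c)[0] for c in np.unique(labels)}
        pair_a, pair_b, same = [], [], []
        for i, c in enumerate(labels):
            # Positive pair: another image of the same person -> target 1.
            j = rng.choice(idx_by_class[c])
            pair_a.append(images[i]); pair_b.append(images[j]); same.append(1.0)
            # Negative pair: an image of a different person -> target 0.
            other = rng.choice([k for k in idx_by_class if k != c])
            j = rng.choice(idx_by_class[other])
            pair_a.append(images[i]); pair_b.append(images[j]); same.append(0.0)
        return np.array(pair_a), np.array(pair_b), np.array(same, dtype=np.float32)

    Each (pair_a[k], pair_b[k], same[k]) triple is then one training example for the Siamese network.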

  • @McMurchie · 3 years ago

    @KlimovArtem1 thank you very very much - that's a clear and concise explanation.

  • @lovebytoto · 1 year ago

    Thanksss

  • @AshokKumar_2216 · 3 years ago

    Wow, nice video, I learned new things. And your secondary voice is fun 😅😂🤣 Bye bye

  • @CodeEmporium · 3 years ago

    Glad it's entertaining :)

  • @KlimovArtem1 · 3 years ago

    Do the final embeddings of the trained network make any human-readable sense? Like hair color, face roundness, etc.?

  • @SabbirAhmed-nc5hh · 2 years ago

    well explained

  • @CodeEmporium · 2 years ago

    Thanks!

  • @kenonerboy · 3 years ago

    I learned I either have half a brain or just face blindness.

  • @CodeEmporium · 3 years ago

    Sorry to break it to you

  • @wonderfulvamsi · 2 years ago

    wow

  • @CodeEmporium · 2 years ago

    Very wow

  • @kryogenica4759 · 3 years ago

    What about prior knowledge? You did not go into it.

  • @KlimovArtem1 · 3 years ago

    The prior knowledge it learns here is which important features (the embedding) of the images it needs to compare. For example, once it has learned to compare a few human faces, it can then compare other humans it has never seen before (because it learned what makes them look different).

  • @jijiefan4435 · 3 years ago

    who are these models you hired 😩

  • @jijiefan4435 · 3 years ago

    Get it, models?

  • @CodeEmporium · 3 years ago

    I fell off my chair. Wait till the next one, I hired some pros that'll make you dizzy

  • @Estereos · 2 years ago

    Several amateur problems here: 1. All so-called "prior" knowledge must be handled at the preprocessing stage, like face detection, for example. First "cook" the data, then "eat". 2. There is a huge misunderstanding across the entire AI/ML community: your professors didn't teach you that there is a huge difference between an array and a vector. Not every array is a vector! Performing "similarity" functions, or any vector function, on an array is useless and you will always get an illusion of recognition. There will always be "weird" cases where you will not be able to explain the decision made by your model.

  • @Sn-nw6zb · 2 years ago

    Dude, this loss function wouldn't work at all in practice. Think about it before posting the video... Let's discuss just the positive case, for an actually similar pair: say distance = 0, then sigmoid(0) = 0.5 and loss = -log(0.5) = 0.69. And similarity = inverse(distance) = 1/(1 + distance). That's why folks use contrastive loss.
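
    For reference, a short TensorFlow sketch of contrastive loss (Hadsell et al., 2006), the alternative being referred to here - not the loss used in the video:

    import tensorflow as tf

    def contrastive_loss(is_same, distance, margin=1.0):
        # is_same: 1.0 for a matching pair, 0.0 for a non-matching pair.
        # distance: Euclidean distance between the two embeddings.
        pull = is_same * tf.square(distance)                                    # pull matches together
        push = (1.0 - is_same) * tf.square(tf.maximum(margin - distance, 0.0))  # push non-matches at least `margin` apart
        return tf.reduce_mean(pull + push)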