Llama - EXPLAINED!

ABOUT ME
⭕ Subscribe: kzread.info...
📚 Medium Blog: / dataemporium
💻 Github: github.com/ajhalthor
👔 LinkedIn: / ajay-halthor-477974bb
RESOURCES
[1 🔴] Llama 1 paper: arxiv.org/pdf/2302.13971.pdf
[2 🔴] Llama 2 paper: scontent-lax3-1.xx.fbcdn.net/...
[3 🔴] Llama code: github.com/facebookresearch/l...
[4 🔴] Where I got the decoder-only transformer architecture: ai.stackexchange.com/question...
[5 🔴] Hugging Face models on Llama that you can use for inference: huggingface.co/models?search=...
[6 🔴] Llama vs Alpaca: sapling.ai/llm/alpaca-vs-llama2
[7 🔴] Llama 2 fine-tuning notebook using QLoRA: colab.research.google.com/dri...
[8 🔴] @1littlecoder's video describing QLoRA and how you can fine-tune Llama: • 🐐Llama 2 Fine-Tune wit...
[9 🔴] AutoTrain repo for one-line training: github.com/huggingface/autotr...
[10 🔴] Video on how to use AutoTrain (@abhishekkrthakur): • The EASIEST way to fin...
PLAYLISTS FROM MY CHANNEL
⭕ Transformers from scratch playlist: • Self Attention in Tran...
⭕ ChatGPT Playlist of all other videos: • ChatGPT
⭕ Transformer Neural Networks: • Natural Language Proce...
⭕ Convolutional Neural Networks: • Convolution Neural Net...
⭕ The Math You Should Know : • The Math You Should Know
⭕ Probability Theory for Machine Learning: • Probability Theory for...
⭕ Coding Machine Learning: • Code Machine Learning
MATH COURSES (7 day free trial)
📕 Mathematics for Machine Learning: imp.i384100.net/MathML
📕 Calculus: imp.i384100.net/Calculus
📕 Statistics for Data Science: imp.i384100.net/AdvancedStati...
📕 Bayesian Statistics: imp.i384100.net/BayesianStati...
📕 Linear Algebra: imp.i384100.net/LinearAlgebra
📕 Probability: imp.i384100.net/Probability
OTHER RELATED COURSES (7 day free trial)
📕 ⭐ Deep Learning Specialization: imp.i384100.net/Deep-Learning
📕 Python for Everybody: imp.i384100.net/python
📕 MLOps Course: imp.i384100.net/MLOps
📕 Natural Language Processing (NLP): imp.i384100.net/NLP
📕 Machine Learning in Production: imp.i384100.net/MLProduction
📕 Data Science Specialization: imp.i384100.net/DataScience
📕 Tensorflow: imp.i384100.net/Tensorflow
#chatgpt #deeplearning #machinelearning #bert #gpt

Comments: 63

  • @CodeEmporium · 10 months ago

    Would you like to see more videos on Llama? Let me know. Have a wonderful day :)

  • @paisanareeprasertkul1950 · 10 months ago

    Yes, definitely. One of the best explanations of the topic!

  • @manusrivastava2047 · 10 months ago

    Great video, love the well-structured and informative nature of your videos. Would love to see how to use word embeddings from Llama 2 or another language model for transfer learning. Thanks and keep up the good work!

  • @ozne_2358 · 10 months ago

    Yes, please. More details on the code, how the parameters are initialized from the parameter file and used in the various stages.

  • @scitechtalktv9742 · 9 months ago

    I am struggling to get Llama 2 to work reliably with the Dutch language, so that you can pose questions in Dutch and have Llama 2 answer in Dutch. (This is probably because Llama 2 was trained on data that contains very little Dutch.) I have had some success using special prompts, but sometimes it unexpectedly switches back to English. What technique(s) can I use to solve this? My use case: I have Dutch texts that I want to pose questions to in Dutch by means of Retrieval Augmented Generation (RAG), using a Llama 2 LLM, and get answers in correct Dutch.
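One common workaround (a sketch only, with no guarantee for a model that saw little Dutch during pretraining) is to pin the answer language in the system prompt of every RAG call and repeat the instruction right before the answer slot. The helper below uses the Llama 2 chat template; the function name and prompt wording are illustrative:

```python
# Sketch: force Dutch answers in a RAG prompt for a Llama 2 chat model.
SYSTEM_NL = (
    "Je bent een behulpzame assistent. Antwoord ALTIJD in het Nederlands, "
    "ook als de context of de vraag Engels bevat."
)

def build_rag_prompt(context: str, question: str) -> str:
    # Llama 2 chat template: [INST] <<SYS>> system <</SYS>> user message [/INST]
    return (
        f"[INST] <<SYS>>\n{SYSTEM_NL}\n<</SYS>>\n\n"
        f"Context:\n{context}\n\n"
        f"Vraag: {question}\n"
        f"Antwoord (in het Nederlands): [/INST]"
    )

print(build_rag_prompt("...opgehaalde Nederlandse passages...", "Wat is het hoofdpunt?"))
```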

  • @user-yi8vs7lb7d · 9 months ago

    I'm waiting for the video!

  • @share4713 · 10 months ago

    The more I watch videos, the more I understand a subject. This is probably because I can now see the subject from different angles and perspectives. I now have a better intuition for transformer architectures and I can code them from scratch. Thank you.

  • @jeswer9 · 7 months ago

    Yes, please do more deep dives into the code! Super valuable video because of that part.

  • @pipinstallyp · 10 months ago

    Hey, thanks a lot for your videos. Your video on transformer attention ("Attention Is All You Need") helped me build an intuition back before transformers were really cool. It's lovely to see your video on Llama, as I actively get to fine-tune Llama on a day-to-day basis :) Much love.

  • @CodeEmporium · 10 months ago

    Super happy to hear! Thanks so much for watching :)

  • @abhijitnayak1639 · 10 months ago

    Thank you for such an insightful video. Would definitely love a deep-dive video on the architecture and code of Llama 2. Could you please also do an implementation of BERT or RoBERTa fine-tuning (the training process optimized via DeepSpeed)? Thanks again!!

  • @naevan1 · 7 months ago

    Amazing work, man. One of my favourite deep learning creators!

  • @prasadraavi390 · 7 months ago

    Beautifully explained. Thank you.

  • @prasadraavi390 · 7 months ago

    Beautifully explained. Thank you. Yes, I want to know more about its architecture too.

  • @steel-r_ua · 3 months ago

    Thanks for the great video and a GREAT way of presenting data and showing the code!

  • @spydeyftw · 9 months ago

    Good explanation with proper understanding!

  • @aurkom · 10 months ago

    Would love a deep dive into stuff like LoRA and quantization (the bitsandbytes library) as well. Perhaps doing it from scratch in PyTorch!

  • @CodeEmporium · 10 months ago

    Perfect. I have coded out the transformer from scratch using PyTorch. Maybe I'll think of a similar series for Llama :)
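Until such a series exists, here is the gist of LoRA as a minimal sketch in plain PyTorch (not the peft/bitsandbytes API): freeze the pretrained weight and learn only a low-rank update on top of it.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update B @ A."""
    def __init__(self, linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)           # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(linear.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.linear(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(2, 512))   # only A and B (a tiny fraction of weights) get gradients
```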

  • @jiaxingyu8300 · 9 months ago

    Nice explanation!

  • @andresg297 · 7 months ago

    Excellent explanation. Thank you

  • @YashVerma-ii8lx · 6 months ago

    Thank you so much for explaining, brother! It would be really great if you could do a code walkthrough video as well!

  • @dollarscholar2956 · 10 months ago

    Clear, informative, well presented. Great video!

  • @CodeEmporium · 10 months ago

    Thanks so much for commenting :)

  • @dinoscheidt · 10 months ago

    Commenting for the algorithm. Very well explained. You have a talent!

  • @CodeEmporium · 10 months ago

    Much appreciated! Thank you!

  • @dan1ar · 10 months ago

    Great video! Looking forward to a deep dive into the Llama code.

  • @CodeEmporium · 10 months ago

    Sure thing. I have it slated on my TODO list :) Thank you for watching.

  • @alexandertakele7528 · 5 months ago

    Thank you so much

  • @gopalakrishna9651 · 5 months ago

    Yes, please: a deep dive into the architecture, and a code walkthrough if possible. Thanks a lot for the video. May God's blessings be with you.

  • @naevan1 · 7 months ago

    Would you be interested in making a guide on fine-tuning Llama 2, or do you think that space is oversaturated?

  • @younessamih3188 · 10 months ago

    Very helpful! That would be great...

  • @CodeEmporium · 10 months ago

    Thanks so much! I’ll think of a deep dive as a future video / series

  • @popamaji · 10 months ago

    Please make a video about how generation works and how reinforcement learning is used in language models.

  • @popamaji · 10 months ago

    I have not implemented the code for a decoder-only model, so I have 3 questions: 1. Does it use the triangular mask? I have heard from two sources that it does, but I don't get it: since we only feed inputs and not outputs (unlike the original transformer), how does a triangular mask on input data make sense? 2. Why is it called "decoder only"? The architecture seems much closer to the encoder part of the original transformer than to its decoder part, especially since the mask is no different from the original encoder's. 3. Is it autoregressive, or can it still be an autoencoder that produces the outputs in one pass?
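On question 1, the key point is that a decoder-only model is trained on next-token prediction: the "outputs" are just the inputs shifted one position, so the triangular (causal) mask is what stops token i from peeking at tokens after i. A shape-only PyTorch sketch of that mask:

```python
import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)   # raw attention scores for one head

# Upper triangle above the diagonal = "future" positions; mask them out.
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(causal_mask, float("-inf"))

weights = torch.softmax(scores, dim=-1)  # row i puts zero weight on tokens after i
# Training: predict token i+1 from positions 0..i, all rows in parallel.
# Generation is autoregressive: sample one token, append it, run again.
```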

  • @dikshyakasaju7541 · 10 months ago

    Very informative!! Would be sick if you could dive deeper.

  • @CodeEmporium · 10 months ago

    Yes! Thanks for watching! Will think about it as a future video / series.

  • @adarshsaurabh7871 · 10 months ago

    Can you please help me? I have multiple doubts. Since all of these models are LLMs that generate the next word based on the previous words, can I fine-tune them on any type of data? For example, I'd like to make a model that can write poems and shayari for me, so can I train them for this task? Also, since Llama doesn't have an encoder, isn't that a disadvantage? And can you please make a video on encoders and decoders and their specific details? Please 🤓🤓

  • @ruksharalam173 · 10 months ago

    It'd be great if you could dig deeper into the Llama code and architecture.

  • @ajaytaneja111 · 10 months ago

    Hi Ajay, would love to hear your insights on PEFT - the theoretical aspects, of course. I have seen a lot of videos on PEFT and done some reading too, but the theoretical aspects are not well explained.

  • @CodeEmporium · 10 months ago

    Ajay! Yea for sure. I am interested to learn more about this too. I’ll read more and make some content on this soon :)

  • @ajaytaneja111 · 10 months ago

    Hi Ajay, I have been reading the Llama 2 research paper. They talk a lot about safety during pre-training, as you might have seen. Do you think they score over GPT in this aspect?

  • @CodeEmporium · 10 months ago

    Yea. That 77-page Llama 2 paper definitely makes the claim that it is safer. They have sections and infographics dedicated to showing this as well. That said, I would need to check how much of this safety is incorporated in the pre-training; I didn't think there would be much in that phase. But I haven't read the entire paper, so I may be wrong.

  • @NicholasRenotte · 10 months ago

    1.8k and closing in my boi!!!!

  • @CodeEmporium · 10 months ago

    Ma guy. I will join the ranks of the 6 digit sub counts

  • @popamaji · 10 months ago

    Is this a decoder in simplified form, or an encoder with a decoder mask?

  • @StrangeMemes52 · 10 months ago

    Wow, amazing video 😁. So how is a language model fine-tuned after training? I mean, how does this fine-tuning work?

  • @CodeEmporium · 10 months ago

    Fine-tuning is done depending on the specific task you want. In Llama chat's and ChatGPT's case, we want fine-tuning for question answering. So we feed the model a bunch of question + answer pairs, and the model parameters are "fine-tuned". Hope this helps.
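A minimal sketch of that idea with Hugging Face Transformers, using gpt2 as a stand-in since Llama 2 weights are gated; the prompt template is illustrative, not the exact Llama chat recipe:

```python
# Sketch of supervised fine-tuning on question + answer pairs.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

pair = {"question": "What is a llama?", "answer": "A South American camelid."}
text = f"Question: {pair['question']}\nAnswer: {pair['answer']}"
batch = tokenizer(text, return_tensors="pt")

# labels = input_ids: the loss is next-token prediction over the sequence,
# nudging the parameters toward reproducing the reference answer.
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()   # one "fine-tuning" gradient step (optimizer omitted)
```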

  • @DaTruAndi · 10 months ago

    I think you didn't describe RLHF fully. What you described was more SFT; you seemingly skipped mentioning the reward model explicitly. Maybe you meant it implicitly, but it could help to clarify this part of reinforcement learning.

  • @rogermenezes · 10 months ago

    He has a very good series called "ChatGPT explained" where he goes into a detailed explanation of RLHF: kzread.info/dash/bejne/kYGErJV8qafVm7g.html

  • @CodeEmporium · 10 months ago

    Yea, that's true. I mentioned this as "humans determining what is a better answer" when I probably should have said "humans determine the better answer to train the reward model(s), and this in turn is used with the original fine-tuned model to further fine-tune it, via proximal policy optimization" ~ or something along these lines. Thanks for pointing it out. I'll clarify this in some follow-up videos in the near future too.
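For the reward-model half of that pipeline, here is a minimal sketch of the pairwise preference loss; the scores below are stand-in tensors rather than outputs of a real reward model:

```python
import torch
import torch.nn.functional as F

# Human labelers pick the better of two answers; the reward model is trained
# so that score(chosen) > score(rejected) via a pairwise logistic loss.
score_chosen = torch.tensor([1.3], requires_grad=True)    # stand-in reward scores
score_rejected = torch.tensor([0.2], requires_grad=True)

loss = -F.logsigmoid(score_chosen - score_rejected).mean()
loss.backward()
# PPO then fine-tunes the LLM to maximize this learned reward while a KL
# penalty keeps it close to the supervised fine-tuned model.
```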

  • @abzs5811 · 4 months ago

    @CodeEmporium lost me, fam

  • @ajaytaneja111 · 10 months ago

    Hi Ajay, I suppose they do grouped-query attention and not multi-head attention.

  • @CodeEmporium · 10 months ago

    I'll need to check the fine-grained details. Thanks for the heads up. If so, I'll address this in that future video.
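For reference, the gist of grouped-query attention (used in the larger Llama 2 models): several query heads share one key/value head, which shrinks the KV cache. A shape-only PyTorch sketch:

```python
import torch

n_q_heads, n_kv_heads, seq, head_dim = 8, 2, 10, 64
q = torch.randn(n_q_heads, seq, head_dim)
k = torch.randn(n_kv_heads, seq, head_dim)   # only 2 KV heads are stored
v = torch.randn(n_kv_heads, seq, head_dim)

# Repeat each KV head so every group of 4 query heads shares the same K/V.
group = n_q_heads // n_kv_heads
k = k.repeat_interleave(group, dim=0)
v = v.repeat_interleave(group, dim=0)

attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1) @ v
print(attn.shape)   # torch.Size([8, 10, 64]): full output from a quarter of the KV heads
```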

  • @ajaytaneja111 · 10 months ago

    Thanks for the response, Ajay. As always, great video.

  • @tunkskabulungana46 · 3 months ago

    You said Llama is an 8-language model. Which programming languages are they? 😮

  • @azai.online · 9 months ago

    I do like Llama 2 and found it easy to use. I am using it in my own multi-application platform and it's great.