Fine-tuning Large Language Models (LLMs) | w/ Example Code

Book a call: calendly.com/shawhintalebi
This is the 5th video in a series on using large language models (LLMs) in practice. Here, I discuss how to fine-tune an existing LLM for a particular use case and walk through a concrete example with Python code.
Series Playlist: • Large Language Models ...
📰 Read more: towardsdatascience.com/fine-t...
💻 Example code: github.com/ShawhinT/KZread-B...
Final Model: huggingface.co/shawhin/distil...
Dataset: huggingface.co/datasets/shawh...
More Resources
[1] Deeplearning.ai Finetuning Large Language Models Short Course: www.deeplearning.ai/short-cou...
[2] arXiv:2005.14165 [cs.CL] (GPT-3 Paper)
[3] arXiv:2303.18223 [cs.CL] (Survey of LLMs)
[4] arXiv:2203.02155 [cs.CL] (InstructGPT paper)
[5] 🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware: huggingface.co/blog/peft
[6] arXiv:2106.09685 [cs.CL] (LoRA paper)
[7] Original dataset source - Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
--
Homepage: shawhintalebi.com/
Socials
/ shawhin
/ shawhintalebi
/ shawhint
/ shawhintalebi
The Data Entrepreneurs
🎥 KZread: / @thedataentrepreneurs
👉 Discord: / discord
📰 Medium: / the-data
📅 Events: lu.ma/tde
🗞️ Newsletter: the-data-entrepreneurs.ck.pag...
Support ❤️
www.buymeacoffee.com/shawhint
Intro - 0:00
What is Fine-tuning? - 0:32
Why Fine-tune - 3:29
3 Ways to Fine-tune - 4:25
Supervised Fine-tuning in 5 Steps - 9:04
3 Options for Parameter Tuning - 10:00
Low-Rank Adaptation (LoRA) - 11:37
Example code: Fine-tuning an LLM with LoRA - 15:40
Load Base Model - 16:02
Data Prep - 17:44
Model Evaluation - 21:49
Fine-tuning with LoRA - 24:10
Fine-tuned Model - 26:50

Comments: 225

  • @ShawhinTalebi · 7 months ago

    🔧 Fine-tuning: kzread.info/dash/bejne/l3dqqsZqmKncn9Y.html
    🤖 Build a Custom AI Assistant: kzread.info/dash/bejne/ZoZ12KytY8m9n6w.html
    👉 Series playlist: kzread.info/head/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0
    📰 Read more: towardsdatascience.com/fine-tuning-large-language-models-llms-23473d763b91?sk=fd31e7444cf8f3070d9a843a8218ddad
    💻 Example code: github.com/ShawhinT/KZread-Blog/tree/main/LLMs/fine-tuning

  • @beaux2572 · 4 months ago

    Honestly the most straightforward explanation I've ever watched. Excellent work, Shaw. Thank you. It's so rare to find good communicators like you!

  • @ShawhinTalebi · 4 months ago

    Thanks, glad it was clear 😁

  • @VivekAlamuri · 29 days ago

    Amazing video! Super clear, straightforward and LOVE the visual and mathematical explanations. You have a gift for this stuff!!

  • @junjieya · 4 months ago

    A very clear and straightforward video explaining finetuning.

  • @ShawhinTalebi · 4 months ago

    Glad it was clear :)

  • @scifithoughts3611 · 3 months ago

    Great video Shaw! It was a good balance between details and concepts. Very unusual to see this so well done. Thank you.

  • @ShawhinTalebi · 3 months ago

    Glad you enjoyed it!

  • @yoffel2196 · 5 months ago

    Wow dude, just you wait, this channel is gonna go viral! You explain everything so clearly, wish you led the courses at my university.

  • @ShawhinTalebi · 5 months ago

    Thanks for the kind words! Maybe one day 😉

  • @adarshsharma8039 · 17 days ago

    You have explained this so clearly, that even a novice in NLP can understand it.

  • @checkdgt · 2 months ago

    Just came to this video from HF and I have to say, I love the way you describe this! Thanks for the great video!

  • @ShawhinTalebi · 2 months ago

    Great to hear! Thanks for watching :)

  • @lukaboljevicboljevic · 2 months ago

    Such a great video. This is the first one I watched from you. You explain everything so nicely, and in my opinion you provided just the right amount of information - not too little, so it doesn't feel superficial and you feel like you've learned something, but not too much, so that you can take what you've learned and read more about it yourself if you're interested. Looking forward to seeing more of your content!

  • @ShawhinTalebi · 2 months ago

    Great to hear! Glad it was clear :)

  • @EigenA · 1 month ago

    Great video, I wanted to hear further discussion on mitigation techniques for overfitting. Thanks for making the video!

  • @saraesshaimi · 23 days ago

    Excellent, simple explanation, right to the point. Love it!

  • @saadati · 6 months ago

    Amazing video, Shawhin. It was quite easy to follow and everything was clearly explained. Thank you so much!

  • @ShawhinTalebi · 6 months ago

    Thanks! I'm glad it was clear and helpful

  • @arunshrestha791 · 3 days ago

    Clear Explanation, Amazing

  • @sreeramch · 2 months ago

    Thank you for the detailed explanation, line by line. Finally a place I can rely on, with a working example.

  • @ShawhinTalebi · 2 months ago

    Glad it was helpful!

  • @rubencabrera8519 · 5 months ago

    This was one of the best videos on this topic, really nice man, keep going.

  • @ShawhinTalebi · 5 months ago

    Thanks! Glad it was clear :)

  • @Kevin.Kawchak · 1 month ago

    Thank you for the discussion

  • @user-bp9dx1ir7w · 23 days ago

    Very good & simple showcase, thanks

  • @thehousehusbandcn5074 · 4 months ago

    You are the man! No BS, just good useful info

  • @ShawhinTalebi · 4 months ago

    Thanks, glad it was helpful 😁

  • @salmaelbarbori579 · 2 months ago

    Clear and straightforward to the point, thanks a lot for making this valuable content accessible on ytb💡

  • @ShawhinTalebi · 2 months ago

    Happy to help!

  • @Akshatgiri · 2 months ago

    This is gonna come handy. Thanks for breaking it down

  • @ShawhinTalebi · 2 months ago

    Happy to help!

  • @alikarooni9713 · 3 months ago

    Even though this was high level instruction, it was perfect. I can continue from here. Thanks Shahin jan!

  • @ShawhinTalebi · 3 months ago

    Glad it helped!

  • @upadisetty · 2 months ago

    Best video I've seen. Thanks a ton for sharing. Glad I found the right place.

  • @Mastin70 · 1 month ago

    Fantastic explanation.

  • @yb3134 · 2 months ago

    Very well explained

  • @user-uh7kh5ef9e · 6 months ago

    I was struggling to understand some details before this video. Thanks a lot!

  • @ShawhinTalebi · 6 months ago

    Great to hear. I’m glad it helped!

  • @bitschips · 2 months ago

    So educative, thanks a lot!

  • @simplyshorts748 · 1 month ago

    Great video! I love good explanations.

  • @tintumarygeorge9309 · 6 months ago

    Thank you, Keep up the good work

  • @ShawhinTalebi · 6 months ago

    Thanks, happy to help!

  • @ITforGood · 4 months ago

    Thanks Shaw, very helpful.

  • @ShawhinTalebi · 4 months ago

    Glad it was helpful!

  • @kevon217 · 3 months ago

    Excellent walkthrough

  • @ShawhinTalebi · 3 months ago

    🙏

  • @payam-bagheri · 6 months ago

    Great video, Shawhin!

  • @ShawhinTalebi · 5 months ago

    Thanks, glad you enjoyed it!

  • @adrianfiedler3520 · 6 months ago

    Very good video and explanation!

  • @ShawhinTalebi · 6 months ago

    Glad it helped!

  • @ramp2011 · 4 months ago

    Excellent..... Thank you for sharing

  • @ShawhinTalebi · 3 months ago

    My pleasure, glad you liked it!

  • @richardpinter9218 · 7 months ago

    Fantastic video. Thanks for the upload. Keep up the good work, you're awesome 😎

  • @ShawhinTalebi · 7 months ago

    Thanks, I’m glad you liked it 😁

  • @user-hj1to2gf8m · 2 months ago

    It was amazing... thanks for uploading, Shaw!

  • @ShawhinTalebi · 2 months ago

    Thanks, happy to help!

  • @zeusgamer5860 · 5 months ago

    Hi Shaw, amazing video - very nicely explained! It would be great if you could also do a video (with code examples) on Retrieval Augmented Generation as an alternative to fine-tuning :)

  • @ShawhinTalebi · 5 months ago

    Great suggestion. I have a few follow-up use cases planned out and RAG will definitely be part of it.

  • @BamiCake · 4 months ago

    @@ShawhinTalebi Maybe also how to fine-tune an OpenAI model too?

  • @ShawhinTalebi · 2 months ago

    Just dropped! kzread.info/dash/bejne/ZoZ12KytY8m9n6w.html

  • @xugefu · 3 days ago

    Thanks!

  • @aldotanca9430 · 5 months ago

    Very clear, thanks!

  • @ShawhinTalebi · 5 months ago

    Thanks Aldo!

  • @KaptainLuis · 4 months ago

    So nice video thank you soooo much!!❤

  • @ShawhinTalebi · 4 months ago

    Happy to help 😁

  • @srinivasguptha9538 · 2 months ago

    One thing that really stands out for me is not using Google Colab for the explanation. Explaining all the code without scrolling helps the audience better grasp the content, since it goes with the flow without waiting for code to execute, and helps the audience remember where the variables were defined. Great approach, and thanks for the amazing content!

  • @ShawhinTalebi · 2 months ago

    Thanks, that's good feedback! I'll keep this in mind for future videos.

  • @NateKrueger805 · 4 months ago

    Nicely done!

  • @ShawhinTalebi · 4 months ago

    Thanks!

  • @machireddyshyamsunder987 · 2 months ago

    Thank you very much, it is really very useful.

  • @ShawhinTalebi · 2 months ago

    Happy to help!

  • @heatherbrm · 1 month ago

    here, you earned this: 👑

  • @ShawhinTalebi · 1 month ago

    Thanks 🤴

  • @jasoncole3253 · 7 months ago

    Well done, even if I already knew all this shit it was really nice to listen to your clear explanation

  • @ShawhinTalebi · 7 months ago

    lol! Glad you enjoyed it :)

  • @yejieguo2844 · 14 days ago

    great video

  • @rbrowne4255 · 6 months ago

    Fantastic job on this overview. As for other videos, I don't see many on inference scaling, i.e. requirements for concurrency, latency, etc. What are the hardware requirements, i.e. number of GPUs per system or number of systems, etc.?

  • @ShawhinTalebi · 6 months ago

    I'm glad it was helpful :) That's a great suggestion, I will add it to my list. Thank you!

  • @iampii_1905 · 6 months ago

    Very helpful! Tysm

  • @ShawhinTalebi · 6 months ago

    Happy to help!

  • @zsmj820 · 8 days ago

    Nice video !

  • @user-ut4vj4qd9t · 6 months ago

    Thank you sooo much❤

  • @ShawhinTalebi · 5 months ago

    You're welcome 😊

  • @alex70301 · 4 months ago

    Best video on llm fine tuning. Very concise and informative.

  • @ShawhinTalebi · 4 months ago

    Thanks! Glad you liked it :)

  • @keithhickman7399 · 5 months ago

    Shaw, terrific job explaining very complicated ideas in an approachable way! One question - are there downsides to combining some of the approaches you mentioned, say, prompt engineering + fine-tuning + RAG to optimize output...how would that compare to using one of the larger OOTB LLMs with hundreds of billions of params?

  • @ShawhinTalebi · 5 months ago

    Great question. The biggest consideration is the tradeoff between cost and performance. On one side, you can use an LLM OOTB (e.g. ChatGPT), which costs nothing and has some baseline performance. On the other side, you can build a custom system using all the bells and whistles (e.g. fine-tuning, PE, and RAG), which will likely perform much better than ChatGPT but comes at significantly greater cost. Hope that helps!

  • @mookiejapan7351 · 2 months ago

    Wow! Amazing make-up! If it wasn't for the voice, I wouldn't believe this is actually David Cross!

  • @ShawhinTalebi · 2 months ago

    Haha, I was wearing jean shorts while filming this 😂

  • @Throwingness · 3 months ago

    Very good. Very fast and also easy to follow. As far as future content, keep us posted about how to do LoRA on quantized models. How can the future be anything but LoRA on quantized models?!?!?!?

  • @ShawhinTalebi · 3 months ago

    Thanks, glad you liked it. Video coming this quarter on exactly that!

  • @diamond2869 · 3 months ago

    thank you so much!

  • @ShawhinTalebi · 2 months ago

    Happy to help :)

  • @Bboreal88 · 2 months ago

    My next question after this video would be how to pack this fine-tuned model into a UI and deploy it.

  • @ShawhinTalebi · 2 months ago

    Great question. I discussed how to create a chat interface with Hugging Face + Gradio in a previous video: kzread.info/dash/bejne/nJWikpmgnNLHgso.html

  • @dendi1076 · 3 months ago

    this channel is going to hit 6 figure subscribers at this rate

  • @ShawhinTalebi · 3 months ago

    I hope so 😅

  • @totalcooljeff · 6 months ago

    Random question: how do you edit your audio clips together to make them so seamless? I don't know where to merge them. Great video, by the way 👍

  • @ShawhinTalebi · 6 months ago

    I use iMovie :)

  • @yanzhang7861 · 6 months ago

    nice video, thanks😁

  • @ShawhinTalebi · 6 months ago

    Thanks, glad you liked it :)

  • @tgyawali · 2 months ago

    I found you on YouTube just today. Your presentation style and quality of content are very good. Keep up the great work. I am very passionate about AI technology in general, have been trying to conduct basic trainings for undergraduate college students, and would love to connect and collaborate if you are interested. Thank you for doing this!

  • @ShawhinTalebi · 2 months ago

    Thanks for watching! Glad it was clear :) Feel free to set up a call if you like: calendly.com/shawhintalebi

  • @tgyawali · 2 months ago

    @@ShawhinTalebi Thank you. I will set up some time to connect.

  • @naevan1 · 6 months ago

    Hey dude, nice video. I think I'll try to fine-tune LLaMA to detect phrases and subsequently classify tweets, i.e. multiclass classification. Hope it works; I guess I'll convert the CSV to the prompt format you mentioned, like Alpaca did, and see if it works.

  • @ShawhinTalebi · 5 months ago

    Thanks! Sounds like a fun project :)

  • @user-bp9pe3qe1z · 4 months ago

    thank you so much

  • @ShawhinTalebi · 4 months ago

    Happy to help!

  • @melliott117 · 1 month ago

    Really great content. I love your balance of details and overview; it's made things easy for me as a newcomer who is interested in the details. My only criticism/advice concerns the editing to remove silence: it's great for minimizing pauses mid-sentence, but it would be helpful to have slightly more time at the end of each thought. Pausing for an extra 0.25 seconds at the end of a coherent teaching point would help greatly.

  • @ShawhinTalebi · 1 month ago

    Thanks, that's good feedback! I do get a bit heavy-handed with the edits 😅

  • @harshanaru1501 · 4 months ago

    Such a great video ! Wondering how self supervised fine tuning works. Is there any video available on that ?

  • @ShawhinTalebi · 4 months ago

    Thanks! I found this on self-supervised fine-tuning: kzread.info/dash/bejne/h6dpvKipYZm2kbg.html

  • @evan7306 · 2 days ago

    Thank you for your great tutorial! What I don't understand is how to serve the fine-tuned model as an API so we can use it on a website. Do you have any tutorial about that?

  • @pawan3133 · 10 days ago

    Thanks for the beautiful explanation!! When you said, for PEFT, "we augment the model with additional parameters that are trainable", how do we add these parameters exactly? Do we add a new layer? Also, when we say "% trainable parameters out of total parameters", doesn't that mean we are updating a certain % of the original parameters?

  • @amnakhan1159 · 4 months ago

    Hello! I'm trying to use a similar approach but for a different task. Given a paragraph, I want my model to be able to generate a set of tags associated with it for a specific use case. Not quite sure how the Auto Model would differ here and would love your thoughts on this!

  • @ShawhinTalebi · 4 months ago

    Given you have the structured dataset ready to go, you can use the example code as a jumping off point. You might want to explore alternative base models and fine-tuning approaches. For instance, before using LoRA evaluating the performance of transfer learning alone.

  • @user-qt1uk7uv9m · 4 months ago

    Nice video. I need your help to clarify a doubt. When we do PEFT-based fine-tuning, the final fine-tuned model size (in KBs/GBs) will increase by the additional parameters (base model size + additional parameters size). In this case the base model size will be smaller and the final fine-tuned model size larger. Deploying the final fine-tuned model on edge devices will be more difficult because of limited edge device resources. Is there any way adapters/LoRA can help reduce the final fine-tuned model's memory size so we can easily deploy it on edge devices? Your insights will be helpful. Currently I am working on vision foundation model deployment on an edge device, where I find it difficult to deploy because of the model's memory size and inference speed.

  • @ShawhinTalebi · 4 months ago

    Great question. PEFT methods like LoRA only reduce the number of trainable parameters not the total number of parameters. And to your point, the storage requirements actually increase in the case of LoRA! To reduce the final model size, you will need to fine-tune a smaller base model. Hope that helps!
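The storage point in the reply above can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, assuming illustrative DistilBERT-scale sizes (the numbers are not the exact specs of any particular model):

```python
# Rough LoRA parameter accounting (illustrative sizes, not exact model specs).
d = k = 768      # shape of one adapted weight matrix, d x k
n_layers = 6     # assume one adapted matrix per layer
r = 4            # LoRA rank

frozen = n_layers * d * k            # base weights: kept in the checkpoint, not trained
trainable = n_layers * r * (d + k)   # LoRA factors B (d x r) and A (r x k)

share = trainable / (frozen + trainable)
print(f"trainable share: {share:.2%}")  # ~1% for these sizes

# LoRA shrinks the *trainable* count dramatically, but the *stored* total
# grows, because the adapters are added on top of the frozen base weights.
assert frozen + trainable > frozen
```

One caveat worth noting: since the update is (W0 + BA), LoRA adapter weights can in principle be merged back into the base matrices after training, which keeps the merged model the same size as the base model.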

  • @amparoconsuelo9451 · 5 months ago

    Understood. The code was very helpful. It was not constantly scrolling and panning. But please display the full code and mention the Python version and system configuration, including folders, etc.

  • @ShawhinTalebi · 5 months ago

    Good to hear! All code and env files are available here: github.com/ShawhinT/KZread-Blog/tree/main/LLMs/fine-tuning

  • @misspanda5717 · 4 months ago

    thanks

  • @elrecreoadan878 · 6 months ago

    Would Botpress with a vector KB connected to ChatGPT be enough for Q&A? When does fine-tuning start to be needed, and is there an inexpensive way to do it with no or low code? Thank you!

  • @ShawhinTalebi · 6 months ago

    This depends on the use case. However, taking a quick-and-easy no-code approach to start is never a bad idea. It typically gives you a sense of how sophisticated approaches will pan out. Fine-tuning comes into play when the "quick-and-easy" starts to become too inconvenient (or expensive) due to the scale of the solution. Hope that helps!

  • @brucoder · 3 months ago

    Hi Shaw - this answered so many questions about specializing an LLM in concise terms, thanks! One question that I'm running up against is physical machine abilities (CPU Speed/Cores, System Memory, GPU cores and memory, and storage speeds. In my case, I have a 32/64 core/thread Epyc CPU on PCIE4.0 MB with 128GB of DDR4 RAM and a PNY/NVIDIA RTX A5000 with 24GB DDR5 VRAM and 8192 CUDA cores dedicated to ML/AI (video is via a separate RTX A2000 GPU). With that info, what should I be looking at as a starting point that will take full advantage of those specs in local mode?

  • @ShawhinTalebi · 3 months ago

    Wow that's a lot of firepower. While I'm less knowledgeable about the ML engineering side of things, I'd suggest checking out DeepSpeed: github.com/microsoft/DeepSpeed. They have several resources on training/running large models efficiently.

  • @brucoder · 3 months ago

    @@ShawhinTalebi Thanks for the pointer. And thanks for all of your output. I've picked up some great information.

  • @arthurs6405 · 1 month ago

    This was beautifully described. I wish you had provided a Linux alternative for the model.to('mps'/'cpu') call. I have a Linux workstation and a P100 GPU. Also, you did not include the means to save the newly trained model. I think most of us students would appreciate knowing how to save the model locally and to Hugging Face. Thanks for your efforts.

  • @ShawhinTalebi · 1 month ago

    I do fine-tuning on a Linux machine here: kzread.info/dash/bejne/iqSjraRspdbTe8Y.html

  • @amanpreetsingh8100 · 6 months ago

    This was a great video. I have one question though. In the LoRA demonstration (at ~14 minutes) you mention the operation (W0 + BA)x = h(x). How is the sum (W0 + BA) possible? W0 has dimensions d×k, and the output of BA would have dimensions r×r. That matrix sum is not mathematically possible. Can you elaborate on this?

  • @ShawhinTalebi · 6 months ago

    Good question! The math works out here because B is d x r and A is r x k, therefore BA will be d x k.

  • @amanpreetsingh8100 · 6 months ago

    @@ShawhinTalebi 👍
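The dimension check in this thread can be verified numerically. A minimal NumPy sketch (the matrix sizes are arbitrary, chosen only for illustration):

```python
import numpy as np

# From the reply: B is d x r and A is r x k, so BA is d x k, matching W0.
d, k, r = 8, 6, 2
rng = np.random.default_rng(0)

W0 = rng.normal(size=(d, k))  # frozen pretrained weight, d x k
B = np.zeros((d, r))          # LoRA factor, d x r (commonly initialized to zero)
A = rng.normal(size=(r, k))   # LoRA factor, r x k

delta = B @ A                 # (d x r)(r x k) -> d x k
assert delta.shape == W0.shape

x = rng.normal(size=(k,))
h = (W0 + delta) @ x          # h(x) = (W0 + BA)x is well defined

# With B initialized to zero, BA is the zero matrix, so the adapted
# model starts out identical to the base model.
assert np.allclose(h, W0 @ x)
```

This zero initialization of B is why LoRA training begins from the unmodified base model's behavior.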

  • @naehalmulazim · 19 days ago

    Greetings! Really nice tutorial! Thank you for including LoRA! I need to train an LLM on a higher-level language we wrote in C++, to produce our code. It's all private infrastructure. Time isn't an issue, but I'd like to do it locally on a Mac M2 if I can, and was considering LoRA on a tiny LLM. Is this going to be possible?

  • @ShawhinTalebi · 17 days ago

    While I haven't done that myself, it is surely possible. The challenge I've run into is that many open-source models don't work so easily on Mac, but I plan to figure that out and make a video about it.

  • @researchforumonline · 3 months ago

    Thanks

  • @ShawhinTalebi · 2 months ago

    Welcome :)

  • @devtest202 · 1 month ago

    Hi, thanks!! A question for a model in which I have more than 2,000 PDFs: do you recommend improving the handling of vector databases? When do you recommend fine-tuning, and when do you recommend a vector database?

  • @ShawhinTalebi · 1 month ago

    Great question! Generally, fine-tuning and RAG have different strengths. Fine-tuning is great when you want to endow the model with a particular style or to tailor completions for a particular use case, while RAG is good to provide the model with specialized and specific knowledge.

  • @parisaghanad8042 · 4 months ago

    thanks!

  • @ShawhinTalebi · 4 months ago

    Happy to help!

  • @samadhanpawar6554 · 6 months ago

    Can you recommend any course where I can learn to build an LLM from scratch and fine-tune it in depth?

  • @ShawhinTalebi · 6 months ago

    Paul Iusztin has some good content on that.
    Hands-on LLMs: github.com/iusztinpaul/hands-on-llms
    More resources: www.pauliusztin.me/

  • @jdiazram · 5 months ago

    Hi, nice tutorial. I have a question: is it possible to have more than one output in a supervised setup? For example: {"input": "ddddddd", "output1": ["dddd", "eeee", "ffffff"], "output2": ["xxxx", "zzzzz"]}, etc. Thx

  • @ShawhinTalebi · 5 months ago

    Good question. I'd say it depends on the use case and the meaning of the outputs. However, here are two thoughts:
    1) Concatenate "output1" and "output2" to make "output", e.g. "output1": "dddd", "eeee", "ffffff" + "output2": "xxxx", "zzzzz" = "output": "dddd", "eeee", "ffffff", "xxxx", "zzzzz".
    2) Train two models, one for "output1" and another for "output2".
    Hope that helps!
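The concatenation idea in the reply above can be sketched in a few lines. The field names and values mirror the question and are purely illustrative:

```python
# Merge multiple output fields into a single "output" field before fine-tuning.
record = {
    "input": "ddddddd",
    "output1": ["dddd", "eeee", "ffffff"],
    "output2": ["xxxx", "zzzzz"],
}

merged = {
    "input": record["input"],
    "output": record["output1"] + record["output2"],
}
print(merged["output"])  # ['dddd', 'eeee', 'ffffff', 'xxxx', 'zzzzz']
```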

  • @vitola1111 · 3 months ago

    Great video! Is the process for fine-tuning a stable diffusion model the same? I think if you made a vid on that, it'd get a lot of views as well.

  • @ShawhinTalebi · 2 months ago

    I haven't worked with stable diffusion models before, so I don't know, but that would be a great video. Thanks for the suggestion!

  • @Sebastian-di6sj · 6 months ago

    nice video! is it then at all possible to feed it large amounts of data and make it give correct answers to similar situations as the ones in the database?

  • @ShawhinTalebi · 6 months ago

    Thanks! In principle, yes that is possible with fine-tuning. In practice, this can be a challenge depending on the use case and available data.

  • @Sebastian-di6sj · 6 months ago

    @@ShawhinTalebi That was very helpful, thanks man! I will try that out. :)

  • @charismaowojoameh7681 · 1 month ago

    When trying to create an AI model that generates articles for a particular niche, is it best to gather articles on that niche and fine-tune on them, or to use OpenAI's knowledge base, just giving it some prompts?

  • @ShawhinTalebi · 1 month ago

    Good question. This depends on how you are trying to generate the article. If you have a clear structure for how the articles should be written, you can go far with an off-the-shelf model + RAG. However, if the article format is not so rigid (but you have lots of examples), fine-tuning may work best.

  • @RajatDhakal · 3 months ago

    Can I use any open-source LLM to train on, for example, my healthcare dataset, or should the LLM be one that was pre-trained on a healthcare dataset of my interest?

  • @ShawhinTalebi · 3 months ago

    Depends on the use case. If there's an existing healthcare fine-tuned model, why not use that instead of fine-tuning yourself?

  • @lauraharyo1128 · 1 month ago

    Thanks a lot for such a straightforward walkthrough! I tried a similar code for a text generation model, but I keep getting the error 'ValueError: prefetch_factor option could only be specified in multiprocessing. Let num_workers > 0 to enable multiprocessing.' Do you know why this keeps happening? I've even tried changing the torch version, but it's not working.

  • @ShawhinTalebi · 1 month ago

    Not sure what that could be. Does the machine have a GPU?

  • @lauraharyo1128 · 1 month ago

    @@ShawhinTalebi Thanks for your help! I figured out the issue was an outdated Linux kernel.

  • @sahil0094 · 1 month ago

    I know you mentioned 1k is a good number of training examples for LoRA. Is that also dependent on model size? If we are using a 70B-parameter model, will 1k training points still be enough for LoRA?

  • @ShawhinTalebi · 1 month ago

    Good question! While this will depend on the use case, 1k is a great place to start. I recommend giving it a go and evaluating whether the model performance is acceptable for your use case.

  • @Akshatgiri · 2 months ago

    Can you show us how to do transfer learning for open-source LLMs, and why that should be the first step in fine-tuning a model? Is it a more efficient way of fine-tuning?

  • @ShawhinTalebi · 2 months ago

    Great suggestion! Next video will touch on this by covering how to fine-tune open-source LLMs with QLoRA.

  • @Bboreal88 · 2 months ago

    This feature could already be available on KZread for creators. Perhaps you could build a chatbot that automatically responds to comments using Gemini. It could even learn to respond based on your videos, eliminating the need for you to upload anything or mess with fine-tuning.

  • @ShawhinTalebi · 2 months ago

    It is to some extent, as we get response recommendations in the creator studio. Using multimodal models might take this to the next level!

  • @junyehu2315 · 6 months ago

    Is there any limitation on GPU memory? I am just a student with a 3050 GPU with only 4GB of memory.

  • @ShawhinTalebi · 5 months ago

    Great question. While it may take some time, the example here should run on a CPU, so I suspect it should run fine with your GPU. Give it a try and let me know how it goes.

  • @madhu1987ful · 3 months ago

    How do we control the % of parameters being trained? Where do we specify this? Also, can you please tell me how to choose r? What are the r values: 2, 4, 8, etc.?

  • @ShawhinTalebi · 3 months ago

    When using LoRA, you control the number of trainable parameters via the r value and the target modules. These are both specified at 24:10, where r=4 and only the query layers are augmented. As for choosing r, this depends on your use case: small r means fewer parameters but (generally) worse performance, while large r means more parameters and better performance.
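In Hugging Face PEFT, both of these knobs live in `LoraConfig`. A minimal, hedged sketch; the values mirror those discussed at 24:10, but treat the exact settings as assumptions:

```python
from peft import LoraConfig, get_peft_model

# r and target_modules jointly determine the trainable-parameter count:
# each targeted d x k matrix contributes r * (d + k) trainable weights.
peft_config = LoraConfig(
    task_type="SEQ_CLS",       # sequence classification
    r=4,                       # rank of the update matrices B and A
    lora_alpha=32,             # scaling applied to the update
    lora_dropout=0.01,
    target_modules=["q_lin"],  # DistilBERT's query projection layers only
)

# model = get_peft_model(base_model, peft_config)
# model.print_trainable_parameters()  # reports the trainable %
```

After wrapping a base model, `print_trainable_parameters()` is how the "% trainable parameters out of total parameters" figure is reported.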

  • @hadianasliwa · 3 months ago

    Is there a way that DistilBERT or any other LLM can be trained for QA using a dataset that has only a text field, without any labels? I'm trying to train the LLM for QA, but my dataset has only a text field, without any labels or questions and answers.

  • @ShawhinTalebi · 3 months ago

    What does your text field consist of? Does it include questions or answers?

  • @hadianasliwa · 3 months ago

    @@ShawhinTalebi No, only raw text. You may refer to any dataset on the HF website that has only a text field & ID; I'm trying to fine-tune the model on an Arabic dataset that is only raw text. I'd appreciate it if you could make a video on:
    1. how to fine-tune a model on languages other than English (because the model is originally trained on English)
    2. how to fine-tune a model with data that only has text, and use the model for QA
    3. whether a model not originally trained on English will require pre-training and then fine-tuning

  • @ShawhinTalebi
    3 months ago

    If you only have raw text, you will likely need to do some data preprocessing to generate input-output pairs for fine-tuning. Thanks for the suggestions!!
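    As a rough illustration of that preprocessing step (the chunk size, field names, and pairing scheme here are all assumptions for the sketch, not something from the video), one minimal way to turn raw text into input-output pairs:

```python
def make_pairs(text: str, chunk_words: int = 64) -> list:
    """Split raw text into consecutive word chunks and pair each chunk
    with the following chunk as an (input, target) training example."""
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    return [{"input": a, "target": b} for a, b in zip(chunks, chunks[1:])]
```

    For QA specifically, you would more likely generate question-answer pairs from each chunk (for example, with a stronger model), but the shape of the resulting dataset is the same.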

  • @aketo8082
    5 days ago

    Thank you. Is there a chance to create one's own LLM on one's own computer? A small version? Thanks for the information.

  • @ShawhinTalebi
    3 days ago

    It depends on what you consider a "Large" Language Model. ~100M parameters is probably the practical limit for (heavy-duty) consumer hardware, at least for now.

  • @aketo8082
    3 days ago

    @@ShawhinTalebi Maybe there is a small standard LLM available that can be extended/trained/fine-tuned with one's own data, so the basic language rules are already in place. I have no idea if this is possible; that's why I ask, but it could be.

  • @umeshtiwari9249
    6 months ago

    nice

  • @ShawhinTalebi
    6 months ago

    Thanks

  • @MannyBernabe
    3 months ago

    Excellent walk-through. Thank you, Shaw! I was getting errors on the new model; switching the device worked for me:

        # Check if CUDA is available and set the device accordingly
        device = 'cuda' if torch.cuda.is_available() else 'cpu'
        model.to(device)  # Move the model to the appropriate device (GPU or CPU)

  • @ShawhinTalebi
    2 months ago

    Thanks Manny! That's a good note, I wasn't able to test the code on a non-Mac machine.

  • @user-rp7if3sv5j
    1 month ago

    Hi, when I run the part of your code with the training_args (snippet 60):

        # define training arguments
        training_args = TrainingArguments(
            output_dir=model_checkpoint + "-lora-text-classification",
            learning_rate=lr,
            per_device_train_batch_size=batch_size,
            per_device_eval_batch_size=batch_size,
            num_train_epochs=num_epochs,
            weight_decay=0.01,
            evaluation_strategy="epoch",
            save_strategy="epoch",
            load_best_model_at_end=True,
        )

    I get the following error: "ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.21.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`". The suggested pip install fixes did not work. Help! Thanks.

  • @ShawhinTalebi
    1 month ago

    Thanks for raising this. Are you using the conda environment from the GitHub?

  • @Mesenqe
    5 months ago

    This is incredible, thank you for the clear tutorial. Please subscribe to this channel. One question: can we apply LoRA to fine-tune models used in image classification or other computer vision problems? Links to read or a short tutorial would be helpful.

  • @ShawhinTalebi
    5 months ago

    Thanks, glad it was clear! Yes! LoRA is not specific to language models. Here is a guide on image classification using LoRA from HF: huggingface.co/docs/peft/task_guides/image_classification_lora

  • @Mesenqe
    5 months ago

    @@ShawhinTalebi Thank you for the link.

  • @madhu1987ful
    3 months ago

    Did you do this fine-tuning on a CPU or GPU? Can you provide details? Thanks.

  • @ShawhinTalebi
    3 months ago

    I have a Mac M1, which uses unified memory (i.e. the CPU and GPU share the same memory pool).

  • @crossray974
    4 months ago

    It all depends on the selection of the much smaller r parameter, like in PCA!

  • @ShawhinTalebi
    4 months ago

    That's right, a smaller r means fewer trainable parameters.

  • @mohsenghafari7652
    1 month ago

    Hi, please help me: how can I create a custom model from many PDFs in Persian? Thank you.

  • @ShawhinTalebi
    1 month ago

    Thanks for the question Mohsen. I haven't worked with these models in any language other than English, so I don't know the best way to build such a system.

  • @InnocenceVVX
    4 months ago

    Sound (gain) is a bit low, but great vid, bro!

  • @ShawhinTalebi
    4 months ago

    Thanks for the note, glad you liked it!

  • @TheIronMason
    1 month ago

    Could you please title these as #1-#6?

  • @ShawhinTalebi
    1 month ago

    The proper order for the series is listed here: kzread.info/head/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0