L19.6 DistilBert Movie Review Classifier in PyTorch -- Code Example
Science & Technology
Slides: sebastianraschka.com/pdf/lect...
Code: github.com/rasbt/stat453-deep...
-------
This video is part of my Introduction to Deep Learning course.
The complete playlist: • Intro to Deep Learning...
A handy overview page with links to the materials: sebastianraschka.com/blog/202...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka
Comments: 18
Thank you very much, Prof. Raschka, for the great content in this series. I walked through almost all the lectures in the series. You've always replied to my questions, which helped me better understand the topics.
@SebastianRaschka
2 years ago
my pleasure!
Your videos helped me a lot in understanding attention and transformers. Thanks a lot for these lectures!
This is a great tutorial! I like the slides and the way you talked through the transformer papers. Thank you!
Hi Sebastian Raschka, I truly enjoyed all your lectures, especially the ones about Large Language Models. I have a copy of "Attention is all you need" paper and went through it a few times but had not quite understood it until I watched your video. Thanks so much for posting these videos. Wish you the best.
Hi Love you man ❤ thanks for everything
Awesome tutorial ❤️
@SebastianRaschka
2 years ago
Thanks!!!
Is this the last lecture of the DL course?
@SebastianRaschka
2 years ago
For now, yes, but I am working on a new course ;)
If I have three classes in my sentiment labels, for example 2 is positive, 1 is negative, 0 is neutral, where should I change the last sections of the code? The error returned in the "Train model" session, in the forward part, is: IndexError: Target 2 is out of bounds. Thanks for any help.
@SebastianRaschka
1 year ago
you could keep tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') as-is and instead pass num_labels=3 when loading the model, i.e., DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=3). Btw this sounds like an ordinal problem (positive > neutral > negative), and you might also be interested in this: kzread.info/dash/bejne/d2Fhm82Ekdiwm7A.html
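A minimal sketch of why the IndexError occurs (the transformers calls in the comments are assumptions about the Hugging Face API, not code from the lecture): the default classification head has only two outputs, so target label 2 falls outside the valid range.

```python
# Assumed Hugging Face transformers usage -- num_labels is a *model* argument:
#   from transformers import DistilBertForSequenceClassification
#   model = DistilBertForSequenceClassification.from_pretrained(
#       'distilbert-base-uncased', num_labels=3)

def targets_in_range(targets, num_labels):
    """Cross-entropy requires every target label to lie in [0, num_labels)."""
    return all(0 <= t < num_labels for t in targets)

# Default binary head: label 2 is out of bounds -> IndexError during training.
print(targets_in_range([0, 1, 2], num_labels=2))  # False
# Three-output head: 0 (neutral), 1 (negative), 2 (positive) are all valid.
print(targets_in_range([0, 1, 2], num_labels=3))  # True
```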
I used it on the Yelp dataset. The accuracy didn't beat Naive Bayes. I don't know why!
@SebastianRaschka
1 year ago
Wow interesting. How many epochs did you fine-tune, and what were the respective accuracies for Naive Bayes and DistilBert?
I've tried just about everything but am getting 38% Hamming-score accuracy on my multilabel classification of a 24,000-sample dataset into 26 labels. Please suggest something.
@SebastianRaschka
2 months ago
This is actually not horrible. If you have 26 labels, and the dataset is balanced, a random classifier would only get 1/26 * 100% = 3.8% accuracy. Sounds like your classifier is 10x better than random. In any case, how does it compare to other models, e.g., Logistic Regression via sklearn (just as a sanity check)? You can use the code template from here: github.com/rasbt/LLMs-from-scratch/tree/main/ch06/03_bonus_imdb-classification
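The baseline arithmetic in the reply can be checked in a couple of lines (the 38% figure and 26 labels come from the thread above):

```python
num_labels = 26
random_acc = 100.0 / num_labels  # uniform random guess over 26 balanced classes
observed_acc = 38.0              # reported Hamming-score accuracy

print(f"random baseline: {random_acc:.1f}%")             # 3.8%
print(f"improvement: {observed_acc / random_acc:.1f}x")  # 9.9x, i.e. about 10x
```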
@runjhunsingh2348
2 months ago
@SebastianRaschka I am only trying large language models like DistilBERT etc., and I'm still not getting anywhere near that much better accuracy.
Looking to understand BERT models. I have a particular interest in DarkBERT.