L15.7 An RNN Sentiment Classifier in PyTorch
Science & Technology
Slides: sebastianraschka.com/pdf/lect...
Code notebooks: github.com/rasbt/stat453-deep...
-------
This video is part of my Introduction to Deep Learning course.
Next video: • L16.0 Introduction to ...
The complete playlist: • Intro to Deep Learning...
A handy overview page with links to the materials: sebastianraschka.com/blog/202...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka
Comments: 27
Great presentation. Have spent a couple weeks now, every night, doing your videos and hands on notebooks! And I feel I made a lot more progress than with other, less coding-oriented classes. Suggestion: define TEXT_COLUMN_NAME, LABEL_COLUMN_NAME as local variables, in all caps, and reference them as variable names everywhere.
Wonderful tutorial. Thanks.
@SebastianRaschka
1 year ago
Glad you enjoyed it!
Does nn.LSTM handle this itself, i.e., the previous step's output is the input to the next step in the network?
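For context on the question above: yes, nn.LSTM applies the recurrence internally, feeding each step's hidden state back in at the next step. A minimal sketch (layer sizes are arbitrary, not from the notebook) with the equivalent explicit loop written out via nn.LSTMCell:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# nn.LSTM iterates over the time dimension internally: the hidden state
# produced at step t is fed back in at step t+1, so no manual loop is needed.
rnn = nn.LSTM(input_size=3, hidden_size=5)
x = torch.randn(7, 2, 3)          # (seq_len=7, batch=2, features=3)
out, (h, c) = rnn(x)              # out: (7, 2, 5), h: (1, 2, 5)

# The same recurrence written out by hand with nn.LSTMCell:
cell = nn.LSTMCell(input_size=3, hidden_size=5)
h_t = torch.zeros(2, 5)
c_t = torch.zeros(2, 5)
for t in range(x.size(0)):
    h_t, c_t = cell(x[t], (h_t, c_t))  # previous state feeds the next step
```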
thanks boss!
This is really awesome stuff 🙂 Do you also have videos on the transformer/BERT architecture, and the related code?
@SebastianRaschka
1 year ago
Glad to hear you found it useful! And yes, I have some videos on transformers, incl. BERT, and a code video (DistilBERT if I recall correctly). It should all be under L19 in this playlist. For easier reference to the individual videos, here's also an overview page: sebastianraschka.com/blog/2021/dl-course.html#l19-self-attention-and-transformer-networks
Hello Prof. Raschka. What an amazing hands-on tutorial on RNNs! I have noticed one issue. At 37:26, "packed", the return value of "pack_padded_sequence", is not passed to the next layer "self.rnn". But still, this version is much better than the first one. As far as I've experimented, the reason is that when you enable sorting within a batch, the sequence lengths within each batch are very similar. This way the RNN learns much better instead of learning dummy paddings.
@SebastianRaschka
2 years ago
Thanks for the note! You are right, it should have been "self.rnn(packed)" not "self.rnn(embedded)" -- updated it in the code. Interestingly, it worked similarly well before. This is probably due to the sorting (sort_within_batch) as you described.
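A minimal sketch of the corrected pattern discussed above (layer sizes and variable names are illustrative, not the notebook's actual code): the packed sequence, not the padded embedded tensor, is what gets passed to the RNN, so the LSTM never runs over padding positions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

embedding = nn.Embedding(num_embeddings=10, embedding_dim=4, padding_idx=0)
rnn = nn.LSTM(input_size=4, hidden_size=8)

# Two sequences of lengths 5 and 3, padded with index 0; shape (seq_len, batch).
text = torch.tensor([[1, 2], [3, 4], [5, 6], [7, 0], [8, 0]])
lengths = torch.tensor([5, 3])  # must be sorted descending (enforce_sorted=True)

embedded = embedding(text)                                     # (5, 2, 4)
packed = nn.utils.rnn.pack_padded_sequence(embedded, lengths)  # skips padding

output, (hidden, cell) = rnn(packed)  # note: `packed`, not `embedded`
# hidden holds the last *real* (non-padding) step of each sequence: (1, 2, 8)
```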
Great video! Does the <unk> in the vocabulary indicate words that are not in our vocabulary? So in case our LSTM encounters an unknown word, it will be regarded as <unk>?
@SebastianRaschka
2 years ago
Yes, that's correct, all words that are not in the vocabulary will be mapped to the unknown word token '<unk>'
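A tiny pure-Python sketch of that mapping (the vocabulary here is hypothetical): any token missing from the vocabulary falls back to the index of the <unk> token during numericalization.

```python
# Hypothetical toy vocabulary; index 0 is reserved for the unknown token.
UNK_IDX = 0
vocab = {"<unk>": UNK_IDX, "<pad>": 1, "the": 2, "movie": 3, "great": 4}

def numericalize(tokens):
    # Any token not in the vocabulary is mapped to the <unk> index.
    return [vocab.get(tok, UNK_IDX) for tok in tokens]

print(numericalize(["the", "movie", "was", "great"]))  # → [2, 3, 0, 4]
```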
Thanks so much for this, I have been looking for examples of RNNs in PyTorch, and this is very clear. Has anybody figured out how to use the new torchtext API? They removed legacy, and the provided migration guide is also broken; it's been a challenge to figure out how to get this to run with the current API.
for text preprocessing you could have used a library like neattext.
@SebastianRaschka
2 years ago
Haven't heard about it before, thanks for the suggestion!
Hello Sebastian. Love your books, just keep it up that way. As I said many times, your book, along with Aurélien Géron's, is the best on the subject. I have read the second and third editions and I always keep it on my desk, although I've read it page to page... P.S. In the book, the pictures for the convolution types "same" and "valid", where you explain them, are swapped. It's an insignificant detail, but since it is repeated in the second and third editions, I thought I'd let you know. Best regards
@SebastianRaschka
3 years ago
Thanks for the kind words! Regarding the picture: Yeah, I agree. We fixed it in a reprint of the 2nd edition, but somehow the publishers reverted it back to the original version when they laid out the drafts for the 3rd edition. It's frustrating, but I will remind them to double-check this carefully next time. Thanks for the feedback!
If I am not wrong, is this a single LSTM unit used in the model?
@SebastianRaschka
1 year ago
Yes, that's correct!
How to fix this?
AttributeError: module 'torchtext' has no attribute 'legacy'
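No reply in the thread, but for context: `torchtext.legacy` was removed entirely in torchtext 0.12, which is what triggers this AttributeError on newer installs. One workaround is to pin an older torchtext release that still ships the legacy API (the exact version below is an assumption; check that it matches your installed torch version), or alternatively migrate the code to the current API as discussed in an earlier comment.

```shell
# torchtext.legacy still exists in the 0.9-0.11 release series; pin one of
# those (must be paired with a compatible torch release, e.g. torch 1.9):
pip install torchtext==0.10.0
```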
Hi, can I get the PDFs?
@SebastianRaschka
2 years ago
I made a page here with links to all the material. It's probably easiest to look it up from there: sebastianraschka.com/blog/2021/dl-course.html
Re "sm": I believe it means "small model".
@SebastianRaschka
3 months ago
Good point!