L15.7 An RNN Sentiment Classifier in PyTorch

Science & Technology

Slides: sebastianraschka.com/pdf/lect...
Code notebooks: github.com/rasbt/stat453-deep...
-------
This video is part of my Introduction to Deep Learning course.
Next video: • L16.0 Introduction to ...
The complete playlist: • Intro to Deep Learning...
A handy overview page with links to the materials: sebastianraschka.com/blog/202...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka

Comments: 27

  • @bitdribble
    @bitdribble 1 year ago

    Great presentation. I have spent a couple of weeks now, every night, doing your videos and hands-on notebooks! And I feel I made a lot more progress than with other, less coding-oriented classes. Suggestion: define TEXT_COLUMN_NAME, LABEL_COLUMN_NAME as local variables, in all caps, and reference them as variable names everywhere.

  • @vikramsandu6054
    @vikramsandu6054 1 year ago

    Wonderful tutorial. Thanks.

  • @SebastianRaschka
    @SebastianRaschka 1 year ago

    Glad you enjoyed it!

  • @abubakarali6399
    @abubakarali6399 2 years ago

    Does nn.LSTM handle it by itself, so that the previous step's output is the input to the next step in the network?
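To the question above: yes, nn.LSTM unrolls over the time dimension internally, feeding each step's hidden and cell state into the next step without any manual wiring. A minimal sketch (tensor sizes are illustrative, not from the video):

```python
import torch
import torch.nn as nn

# nn.LSTM unrolls over the sequence dimension internally: each step's
# hidden/cell state is passed to the next step automatically.
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

x = torch.randn(8, 20, 32)       # (batch, seq_len, features)
output, (h_n, c_n) = lstm(x)

print(output.shape)  # (8, 20, 64) -- hidden state at every time step
print(h_n.shape)     # (1, 8, 64)  -- final hidden state only
```

You only loop manually if you need to intervene between steps (e.g. with nn.LSTMCell); for a whole sequence, one call suffices.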

  • @saadouch
    @saadouch 2 years ago

    thanks boss!

  • @madhu1987ful
    @madhu1987ful 1 year ago

    This is really awesome stuff 🙂 Do you also have videos on the transformer/BERT architecture, and the code related to that?

  • @SebastianRaschka
    @SebastianRaschka 1 year ago

    Glad to hear you found it useful! And yes, I have some videos on transformers, incl. BERT, and a code video (DistilBERT if I recall correctly). It should all be under L19 in this playlist. For easier reference to the individual videos, here's also an overview page: sebastianraschka.com/blog/2021/dl-course.html#l19-self-attention-and-transformer-networks

  • @kafaayari
    @kafaayari 2 years ago

    Hello Prof. Raschka. What an amazing hands-on tutorial on RNNs! I noticed one issue: at 37:26, "packed", the return value of "pack_padded_sequence", is not passed to the next layer, "self.rnn". But still, this version is much better than the first one. As far as I've experimented, the reason is that when you enable sorting within a batch, the sequence lengths within each batch are very similar. This way the RNN learns much better instead of learning dummy paddings.

  • @SebastianRaschka
    @SebastianRaschka 2 years ago

    Thanks for the note! You are right, it should have been "self.rnn(packed)", not "self.rnn(embedded)" -- updated it in the code. Interestingly, it worked similarly well before. This is probably due to the sorting (sort_within_batch), as you described.
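For reference, a minimal sketch of the corrected forward pass being discussed (layer sizes and tensor names are illustrative, not taken from the notebook):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

embedding = nn.Embedding(num_embeddings=1000, embedding_dim=32)
rnn = nn.LSTM(input_size=32, hidden_size=64)

text = torch.randint(0, 1000, (20, 8))   # (seq_len, batch) of token ids
lengths = torch.randint(1, 21, (8,))     # true (unpadded) sequence lengths

embedded = embedding(text)               # (seq_len, batch, embed_dim)
packed = pack_padded_sequence(
    embedded, lengths, enforce_sorted=False)  # skips padding positions

# The fix in question: pass the *packed* sequence to the RNN,
# not the raw padded "embedded" tensor.
packed_output, (hidden, cell) = rnn(packed)
print(hidden.shape)  # (1, 8, 64)
```

With enforce_sorted=False, PyTorch sorts internally, so sort_within_batch mainly helps by grouping similar lengths into the same batch and reducing padding.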

  • @randb9378
    @randb9378 2 years ago

    Great video! Does the '<unk>' token in the vocabulary indicate words that are not in our vocabulary? So in case our LSTM encounters an unknown word, it will be regarded as '<unk>'?

  • @SebastianRaschka
    @SebastianRaschka 2 years ago

    Yes, that's correct, all words that are not in the vocabulary will be mapped to the unknown word token '<unk>'.
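The lookup behavior described in that exchange can be sketched in plain Python (the toy vocabulary below is illustrative, not from the notebook):

```python
# Toy vocabulary: index 0 is reserved for the unknown-word token.
UNK_TOKEN = '<unk>'
vocab = {UNK_TOKEN: 0, 'the': 1, 'movie': 2, 'great': 3}

def encode(tokens, vocab):
    # Any word missing from the vocabulary falls back to <unk>'s index.
    return [vocab.get(tok, vocab[UNK_TOKEN]) for tok in tokens]

print(encode(['the', 'movie', 'was', 'great'], vocab))  # [1, 2, 0, 3]
```

Here 'was' is out-of-vocabulary, so it is encoded as index 0, the '<unk>' slot; the embedding layer then learns a single shared vector for all unknown words.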

  • @donatocapitella
    @donatocapitella 1 year ago

    Thanks so much for this; I have been looking for examples of RNNs in PyTorch, and this is very clear. Has anybody figured out how to use the new torchtext API? They removed legacy, and the provided migration guide is also broken; it's been a challenge to figure out how to get this to run with the current API.

  • @Rahulsircar94
    @Rahulsircar94 2 years ago

    For text preprocessing you could have used a library like neattext.

  • @SebastianRaschka
    @SebastianRaschka 2 years ago

    Haven't heard about it before, thanks for the suggestion!

  • @milanradovanovic3693
    @milanradovanovic3693 3 years ago

    Hello Sebastian. Love your books, just keep it up. As I have said many times, your book, along with Aurélien Géron's, is the best on the subject. I have read the second and third editions and always keep it on my desk, although I've read it page to page... P.S. The "same" and "valid" convolution-type pictures, where you explain them in the book, are swapped. It's an insignificant detail, but since it is repeated in the second and third editions I thought I'd let you know. Best regards

  • @SebastianRaschka
    @SebastianRaschka 3 years ago

    Thanks for the kind words! Regarding the picture: yeah, I agree. We fixed it in a reprint of the 2nd edition, but somehow the publishers reverted it back to the original version when they laid out the drafts for the 3rd edition. It's frustrating, but I will remind them to double-check this carefully next time. Thanks for the feedback!

  • @akashghosh4766
    @akashghosh4766 1 year ago

    If I am not wrong, is this a single LSTM unit used in the model?

  • @SebastianRaschka
    @SebastianRaschka 1 year ago

    Yes, that's correct!

  • @sadikaljarif9635
    @sadikaljarif9635 8 months ago

    how to fix this??

  • @debabratasikder9448
    @debabratasikder9448 7 months ago

    AttributeError: module 'torchtext' has no attribute 'legacy'

  • @DataTheory92
    @DataTheory92 2 years ago

    Hi, can I get the PDFs?

  • @SebastianRaschka
    @SebastianRaschka 2 years ago

    I made a page here with links to all the material. It's probably easiest to look it up from there: sebastianraschka.com/blog/2021/dl-course.html

  • @abderahimmazouz2088
    @abderahimmazouz2088 4 months ago

    "sm", I believe, means small model.

  • @SebastianRaschka
    @SebastianRaschka 3 months ago

    Good point!
