Adding Self-Attention to a Convolutional Neural Network! : PyTorch Deep Learning Tutorial Section 13

Science & Technology

TIMESTAMPS:
0:00 Introduction
0:22 Attention Mechanism Overview
1:20 Self-Attention Introduction
3:02 CNN Limitations
4:09 Using Attention in CNNs
6:30 Attention Integration in CNN
9:06 Learnable Scale Parameter
10:14 Attention Implementation
12:52 Performance Comparison
14:10 Attention Map Visualization
14:29 Conclusion
In this video I show how we can add Self-Attention to a CNN in order to improve the performance of our classifier!
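
As a rough reference, here is a minimal sketch of the kind of spatial self-attention block the video builds: 1x1 convolutions produce queries, keys and values from the feature map, attention is computed between all spatial positions, and a learnable scale parameter (initialised to zero) controls how much of the attention output is mixed back in. The module name SelfAttention2d and the reduction factor are illustrative assumptions; see the linked repository below for the actual code.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a CNN feature map,
    with a learnable scale on the attention output (SAGAN-style)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        # 1x1 convs project the feature map to queries, keys and values
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable scale, initialised to 0 so the block starts as an identity
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//r)
        k = self.key(x).flatten(2)                      # (b, c//r, hw)
        v = self.value(x).flatten(2)                    # (b, c, hw)

        # Attention weights between every pair of spatial positions
        attn = torch.softmax(torch.bmm(q, k) / (q.shape[-1] ** 0.5), dim=-1)  # (b, hw, hw)

        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)

        # Residual connection: gamma controls how much attention is mixed in
        return x + self.gamma * out

# Quick shape check: the block can be dropped in after an early conv layer
feats = torch.randn(4, 64, 32, 32)
print(SelfAttention2d(64)(feats).shape)  # torch.Size([4, 64, 32, 32])
```
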
Donations
www.buymeacoffee.com/lukeditria
The corresponding code is available here!
github.com/LukeDitria/pytorch...
Discord Server:
/ discord

Comments: 9

  • @esramuab1021
    9 days ago

    thank U

  • @profmoek7813
    26 days ago

    Masterpiece. Thank you so much 💗

  • @thouys9069
    25 days ago

    Very cool stuff. Any idea how this compares to Feature Pyramid Networks, which are typically used to enrich the high-res early convolutional layers? I would imagine that the FPN works well if the thing of interest is "compact", i.e. it can be captured well by a square crop, whereas the attention would work even for non-compact things. Examples would be donuts with large holes and little dough, or long sticks, etc.

  • @LukeDitria
    25 days ago

    I believe Feature Pyramid Networks were primarily for object detection, and are a way of bringing fine-grained information from earlier layers deeper into the network with big residual connections, but they still rely on multiple conv layers to combine spatial information. What we're trying to do here is mix spatial information early in the network. With attention, the model can also choose how exactly to do that.

  • @yadavadvait
    24 days ago

    Good video! Do you think this experiment of adding the attention head so early on can extrapolate well to graph neural networks?

  • @LukeDitria
    24 days ago

    Hi, thanks for your comment! Yes, Graph Attention Networks do what you are describing!
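
As a loose illustration of the Graph Attention Networks mentioned in the reply above, here is a minimal usage sketch based on PyTorch Geometric's GATConv layer. The library choice, node features and edge list are assumptions for the example; graphs are not covered in the video itself.

```python
# Minimal Graph Attention Network layer (assumes PyTorch Geometric is installed)
import torch
from torch_geometric.nn import GATConv

x = torch.randn(5, 16)                    # 5 nodes, 16 features each
edge_index = torch.tensor([[0, 1, 2, 3],  # directed edges 0->1, 1->2, 2->3, 3->4
                           [1, 2, 3, 4]])

gat = GATConv(in_channels=16, out_channels=8, heads=4)  # attention over neighbours
out = gat(x, edge_index)
print(out.shape)  # torch.Size([5, 32])  (8 features x 4 heads, concatenated)
```
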

  • @unknown-otter
    26 days ago

    I'm guessing that adding self-attention in deeper layers would have less of an impact because each value already has a larger receptive field? If not, then why not add it at the end, where it would be less expensive? Setting aside the fact that we could incorporate it in every conv block if we had infinite compute.

  • @LukeDitria
    26 days ago

    Thanks for your comment! Yes, you are correct: in terms of combining features spatially, it won't have as much of an impact if the features already have a large receptive field. The idea is to add it as early as possible, and yes, you could add it multiple times throughout your network, though you would probably stop once your feature map is around 4x4 etc.
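
A small sketch of the placement strategy described in the reply above, reusing the SelfAttention2d module sketched near the top of the page: attention blocks are inserted while the feature maps are still spatially large and left out once they shrink to around 4x4. The layer sizes and strides are illustrative assumptions, not the video's exact architecture.

```python
import torch
import torch.nn as nn
# Assumes the SelfAttention2d module sketched earlier on this page is defined/imported.

class SmallAttentionCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),    # 32x32 -> 16x16
            nn.ReLU(),
            SelfAttention2d(32),                         # mix spatial info while the map is still large
            nn.Conv2d(32, 64, 3, stride=2, padding=1),   # 16x16 -> 8x8
            nn.ReLU(),
            SelfAttention2d(64),                         # optional second block
            nn.Conv2d(64, 128, 3, stride=2, padding=1),  # 8x8 -> 4x4: small enough, no more attention
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

logits = SmallAttentionCNN()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```
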

  • @unknown-otter
    26 days ago

    Thanks for the clarification! Great video
