Theoretical Foundations of Graph Neural Networks

Science & Technology

Deriving graph neural networks (GNNs) from first principles, motivating their use, and explaining how they have emerged along several related research lines.
Computer Laboratory Wednesday Seminar, 17 February 2021
Slide deck: petar-v.com/talks/GNN-Wednesd...
Link at Talks.cam: talks.cam.ac.uk/talk/index/15...

Comments: 57

  • @leodu561 · 3 years ago

    Petar's talks are great as always! (I remember attending his talk while at Google lol). Timestamps for those looking to rewatch specific sections :)
    0:00 - Introduction by Pietro Lio
    1:10 - Overview
    1:56 - 1. Fantastic GNNs in the Wild
    6:52 - 2. Talk Roadmap
    9:00 - 3. Towards GNNs from first principles
    10:34 - 4. Permutation invariance and equivariance
    15:42 - 5. Learning on Graphs
    20:22 - 6. Message passing on graphs
    24:34 - 7. Perspectives on GNNs
    25:42 - 7.1 Node Embedding Techniques
    29:39 - 7.2 Natural Language Processing
    31:23 - 7.3 Spectral GNNs
    41:17 - 7.4 Probabilistic Graphical Models
    45:09 - 7.5 Graph Isomorphism Testing
    48:53 - 7.6 Geometric Deep Learning
    50:23 - 7.7 Historical Concepts
    51:15 - 7.8 Computational Chemistry
    52:22 - Acknowledgements and Q&A

  • @kristofhorvath3812 · 2 years ago

    This is one of the cleanest, most sophisticated and organized scientific speeches I have ever heard...

  • @KyleCranmer · 3 years ago

    Excellent talk Petar, so useful to have these different perspectives brought together in one consistent framing.

  • @coder8i · 3 years ago

    Petar! This is solid work. Clear thinking and speaking.

  • @shawnwang5650 · 2 years ago

    Very friendly to a beginner, and there is a wealth of resources for further learning. Thanks a lot Petar!

  • @blackguardian89 · 3 years ago

    Great talk! It definitely improved my understanding about GNNs. Thank you!

  • @vladansaracpv · 2 years ago

    Beautiful presentation. Dr Velickovic is one of the best lecturers I've heard in my life. Everything he says is so clear and concise. Add his charisma on top of all that and you can understand why he attracts more and more people to study GNNs. We are so proud to have him

  • @daoudpiracha9891 · 3 years ago

    Thank you Petar for this talk!

  • @abyoussef19 · 3 years ago

    We need more lectures like this! Nice lecture!

  • @alexmorehead6723 · 3 years ago

    Thank you for the great talk, Petar!

  • @adityamishra348 · 3 years ago

    Great talk! The first 20 minutes are simply brilliant! The kind of first-principles explanation that I dream of when starting any new topic :)

  • @emmarocheteau5788 · 3 years ago

    Rewatching some of this talk - it is that good!

  • @nguyenthanhdat93 · 2 years ago

    Great presentation. Thank you for sharing, Dr Velickovic.

  • @ceevaaaaa · 3 years ago

    Thank you very much! I just completed my undergrad, and I am in the process of discovering new ideas and topics to work on and learn more about. These kinds of videos really help me (esp. as a young graduate who doesn't have much of an idea about many topics but wants to discover more).

  • @tasveerahmad5002 · 3 years ago

    Very nice lecture. Good GNN resources, tools and exposure.

  • @syleshgupta5957 · 1 year ago

    It was very informative and the slides are self-explanatory for people with a basic understanding of math equations :) Thank you!

  • @calebparks8318 · 2 years ago

    This was a great talk. Thank you!

  • @kaanyolsever1149 · 3 years ago

    Great presentation. Very entertaining and informative. Thanks Petar!

  • @stefanspalevic · 3 years ago

    Thank you, Petar!! :)

  • @GiovannaIwishyou · 3 years ago

    Thanks, Petar :)

  • @love12xfuture · 1 year ago

    Amazing talk! I like the way you connect concepts with their applied and historical context. It motivates me to try to make this talk make sense to a 7-year-old or a 107-year-old (:

  • @bayrameda · 3 years ago

    Hi Petar, this was a nice reframing of GNNs, thanks! Noting that GAT can handle non-homophilic graphs suggests this analogy to me: if propagation is error smoothing, then attention performs edge-aware smoothing (as in image processing).

  • @pw7225 · 2 years ago

    Your presentation skills have only become better since your Cambridge days. And they were stellar even then.

  • @kristofneys2349 · 3 years ago

    Thank you so much - very good and useful!

  • @amitsett8117 · 3 years ago

    Found this on reddit. Great talk.

  • @reidwyde5723 · 3 years ago

    Thank you so much!

  • @DeepFindr · 3 years ago

    Really great summary! :)

  • @sahar2003 · 3 years ago

    Good talk. Thanks.

  • @ryderbrooks1783 · 2 years ago

    This gentleman is very good at this.

  • @sacramentofwilderness6656 · 3 years ago

    Thanks for a great talk! Very interesting and inspiring! I wonder, is there any research on graph networks in a limit amenable to analytical treatment, like the NTK regime? Are there any special properties of the loss landscapes not present in more common fully-connected NNs or usual CNNs?

  • @petarvelickovic6033 · 3 years ago

    Thank you for the kind words! While it may not fully align with what you're after, I think you might find the recent paper from Xu et al. on (G)NN extrapolation very interesting: arxiv.org/abs/2009.11848 Herein, the authors make several (nicely visualised) geometric arguments about the properties of GNNs in the extrapolation regime. The main tool and setting for their analysis is, indeed, the NTK regime.

  • @amiltonwong · 3 years ago

    Hi, Petar, thanks a lot for the talk recording. Could you also release the slides of your talk?

  • @petarvelickovic6033 · 3 years ago

    They are provided in the video description now :)

  • @amiltonwong · 3 years ago

    @petarvelickovic6033 Thanks a lot again :)

  • @GorsanMestiri · 3 years ago

    Excellent talk, thank you so much! I'd be more than happy if you could share the best resources for diving into GNNs applied to combinatorial optimisation problems. 🙏

  • @petarvelickovic6033 · 3 years ago

    Thank you for the kind words! As a matter of fact, we've very recently put out a survey on GNNs for combinatorial tasks: arxiv.org/abs/2102.09544

  • @GorsanMestiri · 3 years ago

    @petarvelickovic6033 This is amazing. Thanks!

  • @kejianshi9196 · 2 years ago

    Pretty fast. Rewatching 3-4 times is helpful.

  • @NelsonOtuma · 2 years ago

    This is an interesting area of study. It would be great if you could drop a link to PDF books in the comments.

  • @JorGe-eu3wi · 3 years ago

    Excellent presentation! A question came up though: which flavour of GNN layer could we say GraphSAGE uses for its embedding algorithm? Could the learned weight matrices W be considered as fixed weight inputs of the convolutional GNN layer?

  • @petarvelickovic6033 · 3 years ago

    An excellent question -- thanks for asking! This would depend on which type of GraphSAGE we're looking at :) GraphSAGE-mean, GraphSAGE-GCN and GraphSAGE-pool are all conv-GNNs. They transform every node in isolation, then use a permutation-invariant aggregator, and do not take the receiver node into account at all. The matrix W is just part of either of the two functions (psi or phi), depending on whether it's applied to individual neighbours or aggregated vectors. On the other hand, GraphSAGE-LSTM is not permutation equivariant, and hence does not fit any of the three flavours. It is possible to 'massage' the LSTM aggregator to make it fit, however; see Janossy Pooling (Murphy et al.). Lastly, I'd note that GraphSAGE's main contribution is its scalability to inductive learning on very large graphs (as per its title ;) ) through neighbourhood sampling. Many of the embedding algorithms it proposes are not unlike models previously proposed in the literature (e.g. the GCN of Kipf & Welling).
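
To make the conv-GNN flavour described in this answer concrete, here is a minimal NumPy sketch of a GraphSAGE-mean-style layer: psi transforms each neighbour in isolation, a permutation-invariant mean aggregates the messages, and phi produces the updated node feature. The function and weight names (conv_gnn_layer, W_psi, W_phi) and the toy graph are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def conv_gnn_layer(X, A, W_psi, W_phi):
    """One conv-GNN layer. X: [n, d] node features, A: [n, n] adjacency
    with self-loops. Sketch only; shapes and nonlinearity are illustrative."""
    messages = X @ W_psi                    # psi: transform each node in isolation
    deg = A.sum(axis=1, keepdims=True)      # neighbourhood sizes
    aggregated = (A @ messages) / deg       # permutation-invariant mean aggregator
    return np.tanh(aggregated @ W_phi)      # phi: produce the updated node feature

rng = np.random.default_rng(0)
A = np.array([[1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1]], dtype=float)   # toy 5-cycle; self-loops include each node in its own mean
X = rng.normal(size=(5, 4))
H = conv_gnn_layer(X, A, rng.normal(size=(4, 3)), rng.normal(size=(3, 3)))
print(H.shape)  # (5, 3)
```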

  • @JorGe-eu3wi · 3 years ago

    @petarvelickovic6033 Thank you very much for the answer :D

  • @kndlt · 2 years ago

    Would it be possible for someone without much ML experience (but with a CS degree) to learn the theoretical part of GNNs inside out within a month?

  • @petarvelickovic6033 · 2 years ago

    It should be possible, in my opinion. Especially since GNNs are fundamentally discrete structures, they align very well with the kind of computation typically studied in a theoretical CS degree.

  • @tae898 · 3 years ago

    If you are using Transformers, you are using GNNs!
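
In that spirit, here is a rough sketch (my own illustration, not code from the talk) of single-head self-attention viewed as attentional message passing over a fully connected graph, with the attention coefficients playing the role of soft edge weights; all dimensions and weight names are assumptions for the example.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention = attentional GNN layer on a complete graph:
    every token/node attends to every other. Weights and sizes are illustrative."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise "edge" scores
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)       # softmax: soft edge weights
    return alpha @ V                                # aggregate weighted messages

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))                         # 6 tokens/nodes, 8 features
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)       # (6, 8)
```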

  • @fairuzshadmanishishir8171 · 3 years ago

    The relation between PGMs and graph NNs is not clear; could you clarify the concepts?

  • @thegreatlazydazz · 2 years ago

    Matrices that commute are jointly diagonalizable. I understand this as: if AB = BA, then A and B have the same eigenvectors? However, this cannot be true, as I commutes with any matrix, and any vector is an eigenvector of I.

  • @petarvelickovic6033 · 2 years ago

    Good catch! I didn't have the time in the talk to get into the nuances of this, but essentially, you'll have exactly the same eigenbasis if the matrices commute _and_ have no repeated eigenvalues. If you have repeated eigenvalues (as is the case for I), the theorem becomes more subtle to apply. en.m.wikipedia.org/wiki/Commuting_matrices has a list of properties w.r.t. commutativity and diagonalisation.
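
A quick numerical check of the distinct-eigenvalue case (a toy example of my own, not from the talk): the cyclic shift matrix has n distinct eigenvalues (the n-th roots of unity), so any matrix that commutes with it, such as a circulant matrix, is diagonal in its eigenbasis.

```python
import numpy as np

n = 5
A = np.roll(np.eye(n), 1, axis=1)                 # cyclic shift matrix: distinct eigenvalues
B = 2*np.eye(n) + 3*A + 1.5*np.linalg.matrix_power(A, 2)   # a circulant matrix (polynomial in A)

assert np.allclose(A @ B, B @ A)                  # A and B commute

eigvals, V = np.linalg.eig(A)                     # eigenbasis of A (the Fourier basis)
B_in_basis = np.linalg.inv(V) @ B @ V             # express B in A's eigenbasis
off_diag = B_in_basis - np.diag(np.diag(B_in_basis))
print(np.max(np.abs(off_diag)))                   # ~1e-15: B is diagonal in the same basis
```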

  • @thegreatlazydazz · 2 years ago

    I understand the assertion with no repeated eigenvalues. I thought about it a bit more: for repeated eigenvalues, it's like there exists a change of basis that diagonalizes all of them? Any unitary matrix diagonalizes I. Thanks for the reference. The shift matrix was an excellent hint for the next property.

  • @sachin63442 · 2 years ago

    Why can't you use XGBoost or decision trees for node-level classification instead of a GCN?

  • @petarvelickovic6033 · 2 years ago

    Of course you can! Sergey Ivanov et al. recently showed it's a very strong baseline: arxiv.org/abs/2101.08543

  • @sachin63442 · 2 years ago

    @petarvelickovic6033 So when should one use a GCN over XGBoost or decision trees? Not combined.

  • @petarvelickovic6033 · 2 years ago

    As far as I know, boosting and decision trees are great for dealing with data that is assumed tabular, i.e. where you don't assume your nodes are linked together. GCNs (and/or more expressive GNNs) should be used whenever you assume that the links between your data points are actually meaningful and should be exploited.

  • @insightfool · 2 years ago

    Brain.....melted

  • @kprakash9665 · 3 years ago

    Theoretically it is not clearly explained, and practically it is not explained at all. Worst university.

  • @sb-xq1sr · 1 year ago

    37:17 Why can't an adjacency matrix be eigendecomposed? AFAIK, any real symmetric matrix is diagonalizable: en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix#Real_symmetric_matrices I believe you were trying to substitute the adjacency matrix with a positive semi-definite matrix that can also express all adjacency properties. That way, the diagonal matrix Λ in the eigendecomposition only has non-negative values.

  • @petarvelickovic6033 · 1 year ago

    Thanks for your note! You are certainly correct, and this is one aspect I hadn't qualified well enough in the talk. Yes, any undirected adjacency matrix can be eigendecomposed. However, the Laplacian's eigendecomposition has nicer properties. It guarantees all eigenvalues are nonnegative (and the multiplicity of the zero eigenvalue can be used to track connected components), and the resulting eigenvectors can be directly used to approximate many interesting problems on the graph (e.g. optimal cuts).
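
A small check of these two Laplacian properties (the toy graph and numerical tolerances are my own choices for illustration): the eigenvalues are non-negative, and the multiplicity of the zero eigenvalue equals the number of connected components.

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # a triangle plus a disjoint edge (2 components)

L = np.diag(A.sum(axis=1)) - A                  # unnormalised graph Laplacian
eigvals = np.linalg.eigvalsh(L)                 # real eigenvalues, since L is symmetric

print(eigvals.min() >= -1e-12)                  # True: all eigenvalues non-negative
print(int(np.sum(np.abs(eigvals) < 1e-9)))      # 2: one zero eigenvalue per connected component
```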
