Beiyang Intelligent Computing Forum

Session 139: Distance Encoding: Designing Provably More Powerful Graph Neural Networks

April 16, 2021, 13:31

Lecture Topic:

Distance Encoding: Designing Provably More Powerful Graph Neural Networks

Speaker and Bio:

Yanbang Wang was a master's student in Computer Science at Stanford University and is now a Ph.D. student at Cornell University in the United States. His research interests include machine learning on graphs, data mining, computational social science, and big data analytics. He currently works with the Stanford Social Network Analysis Group (SNAP), supervised by Prof. Jure Leskovec. He has published multiple papers at top international conferences and in journals including NeurIPS, ICLR, WWW, LAK, IEEE TVCG, and SSHA. He was a visiting student researcher at MIT CSAIL and UIUC DMG in 2018 and 2020, respectively.

Abstract:

Learning representations of sets of nodes in a graph is crucial for applications ranging from node-role discovery to link prediction and molecule classification. In recent years, Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, the expressive power of GNNs is bounded by the 1-Weisfeiler-Lehman (WL) test, so GNNs may generate identical representations for graph substructures that are in fact very different. This talk will introduce and analyze a general class of structure-related features, Distance Encoding (DE), which helps GNNs break this limit efficiently and achieve state-of-the-art performance on many different graph representation learning tasks. The talk will also briefly cover some of the recent progress made on top of the idea of DE.
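To make the DE idea in the abstract concrete: given a target node set (for example, the two endpoints of a candidate link), every node in the graph receives an extra feature encoding its distance to that set, and this feature is concatenated with the raw node features before message passing. The sketch below is a minimal illustration of that reading, using shortest-path distance for concreteness (the DE family also admits other measures, e.g. random-walk landing probabilities); the function names and the networkx/numpy usage are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of distance encoding as an extra node feature (illustrative only).
import networkx as nx
import numpy as np

def distance_encoding(graph, target_set, max_dist=3):
    """One-hot encode each node's shortest-path distance to the target node set.
    Distances beyond max_dist (or unreachable nodes) share one "far" bucket."""
    # Shortest-path distance from each node to the closest node in target_set.
    dist = {v: max_dist + 1 for v in graph.nodes}  # default: "far" bucket
    for s in target_set:
        for v, d in nx.single_source_shortest_path_length(graph, s).items():
            dist[v] = min(dist[v], d)
    # Columns 0..max_dist hold exact distances; the last column is the "far" bucket.
    enc = np.zeros((graph.number_of_nodes(), max_dist + 2))
    for i, v in enumerate(graph.nodes):
        enc[i, min(dist[v], max_dist + 1)] = 1.0
    return enc

# Usage: augment raw node features with DE before running any message-passing GNN.
G = nx.karate_club_graph()
target = {0, 33}                        # e.g., the node pair whose link is being predicted
de_feat = distance_encoding(G, target)  # shape: (num_nodes, max_dist + 2)
x = np.random.randn(G.number_of_nodes(), 16)        # placeholder raw node features
x_augmented = np.concatenate([x, de_feat], axis=1)  # fed to the GNN instead of x
```

Because the encoding depends on the target node set, substructures that a plain 1-WL-bounded GNN could not tell apart can receive different augmented features, which is the intuition behind DE's added expressive power.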
