Title | DGI: An Easy and Efficient Framework for GNN Model Evaluation |
Authors | |
Corresponding Author | Xiao Yan |
DOI | |
Publication Date | 2023 |
Conference Name | 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) |
Proceedings Title | |
Conference Dates | AUG 06-10, 2023 |
Conference Location | Long Beach, CA |
Place of Publication | 1601 Broadway, 10th Floor, NEW YORK, NY, UNITED STATES |
Publisher | Association for Computing Machinery (ACM) |
摘要 | ["While many systems have been developed to train graph neural networks (GNNs), efficient model evaluation, which computes node embedding according to a given model, remains to be addressed. For instance, using the widely adopted node-wise approach, model evaluation can account for over 90% of the time in the end-to-end training process due to neighbor explosion, which means that a node accesses its multi-hop neighbors. The layer-wise approach avoids neighbor explosion by conducting computation layer by layer in GNN models. However, layer-wise model evaluation takes considerable implementation efforts because users need to manually decompose the GNN model into layers, and different implementations are required for GNN models with different structures.","In this paper, we present DGI-a framework for easy and efficient GNN model evaluation, which automatically translates the training code of a GNN model for layer-wise evaluation to minimize user effort. DGI is general for different GNN models and evaluation requests (e.g., computing embedding for all or some of the nodes), and supports out-of-core execution on large graphs that cannot fit in CPU memory. Under the hood, DGI traces the computation graph of GNN model, partitions the computation graph into layers that are suitable for layer-wise evaluation according to tailored rules, and executes each layer efficiently by reordering the computation tasks and managing device memory consumption. Experiment results show that DGI matches hand-written implementations of layer-wise evaluation in efficiency and consistently outperforms node-wise evaluation across different datasets and hardware settings, and the speedup can be over 1,000x."] |
Keywords | |
University Attribution | Other |
Language | English |
Related Links | [Source Record] |
Indexed By | |
Funding Projects | CUHK direct grant [4055146]; Guangdong Basic and Applied Basic Research Foundation [2021A1515110067]; Shenzhen Fundamental Research Program [20220815112848002] |
WOS Research Area | Computer Science |
WOS Categories | Computer Science, Information Systems; Computer Science, Interdisciplinary Applications; Computer Science, Theory & Methods |
WOS Accession Number | WOS:001118896305043 |
Source Database | Web of Science |
Citation Statistics | Times Cited [WOS]: 2 |
Document Type | Conference Paper |
Item Identifier | http://sustech.caswiz.com/handle/2SGJ60CL/646921 |
Collection | Southern University of Science and Technology, College of Engineering, Department of Computer Science and Engineering |
Author Affiliations | 1. AWS Shanghai AI Lab; 2. Southern University of Science and Technology; 3. TensorChord; 4. George Washington University; 5. The Chinese University of Hong Kong |
Recommended Citation (GB/T 7714) | Peiqi Yin, Xiao Yan, Jinjing Zhou, et al. DGI: An Easy and Efficient Framework for GNN Model Evaluation[C]. New York, NY, United States: Association for Computing Machinery, 2023. |
Files in This Item | No files associated with this item. |
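
Note on the two evaluation strategies described in the abstract: node-wise evaluation gathers each target node's multi-hop neighborhood separately and therefore suffers neighbor explosion, while layer-wise evaluation materializes every layer's output for all nodes before moving to the next layer. The sketch below illustrates the layer-wise idea for a plain GCN-style model in PyTorch. It is a minimal illustration under assumed inputs (a row-normalized sparse adjacency `adj`, node features `feats`, and per-layer weight matrices `weights`), not DGI's actual API or code.

    # Minimal sketch of layer-wise GNN evaluation (illustrative only; not DGI's API).
    # Assumed inputs: adj is an N x N row-normalized torch sparse adjacency matrix,
    # feats is an N x F dense feature tensor, weights is a list of dense weight matrices.
    import torch

    def layerwise_inference(adj, feats, weights, batch_size=4096):
        """Compute final-layer embeddings for ALL nodes, one layer at a time."""
        h = feats
        for layer_idx, W in enumerate(weights):
            # One graph-wide aggregation pass per layer: every node's output is
            # materialized before the next layer runs, so no node re-expands its
            # multi-hop neighborhood (no neighbor explosion).
            agg = torch.sparse.mm(adj, h)
            out = torch.empty(agg.size(0), W.size(1))
            # Apply the dense transform in mini-batches to bound peak memory.
            for start in range(0, agg.size(0), batch_size):
                end = min(start + batch_size, agg.size(0))
                z = agg[start:end] @ W
                # ReLU on all but the last layer, mirroring a typical GCN.
                out[start:end] = torch.relu(z) if layer_idx < len(weights) - 1 else z
            h = out
        return h

By contrast, node-wise evaluation would repeat aggregation work across the overlapping k-hop neighborhoods of different target nodes; that redundancy is what layer-wise evaluation (and DGI's automated translation to it) avoids.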