What Can a Node Learn from Its Neighbors in Graph Neural Networks?
Yilin Lu - University of Minnesota, Twin Cities, Minneapolis, United States
Chongwei Chen - University of Minnesota, Minneapolis, United States
Matthew Xu - University of Minnesota, Minneapolis, United States
Qianwen Wang - University of Minnesota, Minneapolis, United States
Room: Bayshore I
2024-10-13T12:30:00Z
Abstract
Graph Neural Networks (GNNs) have achieved great success in a variety of applications, from modeling protein-protein interactions in biomedical graphs to identifying fraud in social networks. However, the complex structure of graphs and the intricate inner workings of GNNs make it hard for non-AI experts to grasp their essential concepts. To address this, we present GNN 101, an educational visualization tool designed for interactive learning of GNNs. GNN 101 seamlessly integrates different levels of abstraction, including a model overview, layer operations, and detailed animations of matrix calculations, with smooth transitions between them. It offers two complementary views: a node-link view, which supports an intuitive understanding of the graph structure, and a matrix view, which provides a space-efficient and comprehensive overview of all node features and their changes across layers. GNN 101 not only reveals GNN computations in an engaging and intuitive way but also effectively demonstrates how node features are updated layer by layer by learning from their neighbors. It runs locally in web browsers using ONNX Runtime, without additional installation or setup.
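The layer-by-layer update described above can be illustrated with a minimal sketch. This is not GNN 101's implementation; it is a generic mean-aggregation message-passing layer on a toy 4-node graph, with randomly initialized weights standing in for learned parameters.

```python
import numpy as np

# Toy undirected graph with edges 0-1, 1-2, 2-3 (adjacency matrix).
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

X = np.eye(4)                       # one-hot node features (4 nodes, 4 dims)
A_hat = A + np.eye(4)               # add self-loops so each node keeps its own signal
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalized: mean over neighborhood

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 4))    # hypothetical weights for layer 1
W2 = rng.standard_normal((4, 4))    # hypothetical weights for layer 2

def gnn_layer(A_norm, X, W):
    """One message-passing step: average neighbor features, then transform + ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

H1 = gnn_layer(A_norm, X, W1)       # each node now mixes in its 1-hop neighbors
H2 = gnn_layer(A_norm, H1, W2)      # after two layers, information from 2 hops away
```

After one layer, a node's representation depends only on itself and its direct neighbors; stacking layers widens that receptive field one hop at a time, which is exactly the progression the tool animates.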