Deep learning models are at the core of artificial intelligence research today. It is well known that deep learning techniques are disruptive for Euclidean data, such as images, and for sequence data, such as text, but are not immediately applicable to graph-structured data such as social networks. This gap has driven a wave of research on deep learning for graphs, including graph representation learning, graph generation, and graph classification. These new neural network architectures for graph-structured data (graph neural networks, or GNNs for short) have performed remarkably on such tasks, as demonstrated by applications in social networks, bioinformatics, and medical informatics. Despite these successes, GNNs still face many challenges, ranging from foundational methodology to the theoretical understanding of the power of graph representation learning.
This book provides a comprehensive introduction to GNNs. It first discusses the goals of graph representation learning and then reviews the history, current developments, and future directions of GNNs. The second part presents and reviews fundamental methods and theories concerning GNNs, while the third part describes various frontiers built on GNNs. The book concludes with an overview of recent developments in a number of applications of GNNs.
This book is suitable for a wide audience including undergraduate and graduate students, postdoctoral researchers, professors and lecturers, as well as industrial and government practitioners who are new to this area or who already have some basic background but want to learn more about advanced and promising techniques and applications.
Author(s): Lingfei Wu, Peng Cui, Jian Pei, Liang Zhao
Publisher: Springer
Year: 2022
Language: English
Pages: 725
City: Singapore
Foreword
Preface
Book Website and Resources
To the Instructors
To the Readers
Acknowledgements
Editor Biography
List of Contributors
Contents
Terminologies
1 Basic Concepts of Graphs
2 Machine Learning on Graphs
3 Graph Neural Networks
Notations
Numbers, Arrays, and Matrices
Graph Basics
Basic Operations
Functions
Probabilistic Theory
Part I Introduction
Chapter 1 Representation Learning
1.1 Representation Learning: An Introduction
1.2 Representation Learning in Different Areas
1.2.1 Representation Learning for Image Processing
1.2.2 Representation Learning for Speech Recognition
1.2.3 Representation Learning for Natural Language Processing
1.2.4 Representation Learning for Networks
1.3 Summary
Chapter 2 Graph Representation Learning
2.1 Graph Representation Learning: An Introduction
2.2 Traditional Graph Embedding
2.3 Modern Graph Embedding
2.3.1 Structure-Property Preserving Graph Representation Learning
2.3.1.1 Structure Preserving Graph Representation Learning
2.3.1.2 Property Preserving Graph Representation Learning
2.3.2 Graph Representation Learning with Side Information
2.3.3 Advanced Information Preserving Graph Representation Learning
2.4 Graph Neural Networks
2.5 Summary
Chapter 3 Graph Neural Networks
3.1 Graph Neural Networks: An Introduction
3.2 Graph Neural Networks: Overview
3.2.1 Graph Neural Networks: Foundations
3.2.2 Graph Neural Networks: Frontiers
3.2.3 Graph Neural Networks: Applications
3.2.3.1 Graph Construction
3.2.3.2 Graph Representation Learning
3.2.4 Graph Neural Networks: Organization
3.3 Summary
Part II Foundations of Graph Neural Networks
Chapter 4 Graph Neural Networks for Node Classification
4.1 Background and Problem Definition
4.2 Supervised Graph Neural Networks
4.2.1 General Framework of Graph Neural Networks
4.2.2 Graph Convolutional Networks
4.2.3 Graph Attention Networks
4.2.4 Neural Message Passing Networks
4.2.5 Continuous Graph Neural Networks
4.3 Unsupervised Graph Neural Networks
4.3.1 Variational Graph Auto-Encoders
4.3.1.1 Problem Setup
4.3.1.2 Model
4.3.1.3 Discussion
4.3.2 Deep Graph Infomax
4.3.2.1 Problem Setup
4.3.2.2 Model
4.3.2.3 Discussion
4.4 Over-smoothing Problem
4.5 Summary
Chapter 5 The Expressive Power of Graph Neural Networks
5.1 Introduction
5.2 Graph Representation Learning and Problem Formulation
5.3 The Power of Message Passing Graph Neural Networks
5.3.1 Preliminaries: Neural Networks for Sets
5.3.2 Message Passing Graph Neural Networks
5.3.3 The Expressive Power of MP-GNN
5.3.4 MP-GNN with the Power of the 1-WL Test
5.4 Graph Neural Networks Architectures that are more Powerful than 1-WL Test
5.4.1 Limitations of MP-GNN
5.4.2 Injecting Random Attributes
5.4.2.1 Relational Pooling GNN (RP-GNN) (Murphy et al, 2019a)
5.4.2.2 Random Graph Isomorphic Network (rGIN) (Sato et al, 2021)
5.4.2.3 Position-aware GNN (PGNN) (You et al, 2019)
5.4.2.4 Randomized Matrix Factorization (Srinivasan and Ribeiro, 2020a; Dwivedi et al, 2020)
5.4.3 Injecting Deterministic Distance Attributes
5.4.3.1 Distance Encoding (Li et al, 2020e)
5.4.3.2 Identity-aware GNN (You et al, 2021)
5.4.4 Higher-order Graph Neural Networks
5.4.4.1 k-WL-induced GNNs (Morris et al, 2019)
5.4.4.2 Invariant and equivariant GNNs (Maron et al, 2018, 2019b)
5.4.4.3 FWL-induced GNNs (Maron et al, 2019a; Chen et al, 2019f)
5.5 Summary
Chapter 6 Graph Neural Networks: Scalability
6.1 Introduction
6.2 Preliminary
6.3 Sampling Paradigms
6.3.1 Node-wise Sampling
6.3.1.1 GraphSAGE
6.3.1.2 VR-GCN
6.3.2 Layer-wise Sampling
6.3.2.1 FastGCN
6.3.2.2 ASGCN
6.3.3 Graph-wise Sampling
6.3.3.1 Cluster-GCN
6.3.3.2 GraphSAINT
6.3.3.3 Overall Comparison of Different Models
6.4 Applications of Large-scale Graph Neural Networks on Recommendation Systems
6.4.1 Item-item Recommendation
6.4.2 User-item Recommendation
6.5 Future Directions
Chapter 7 Interpretability in Graph Neural Networks
7.1 Background: Interpretability in Deep Models
7.1.1 Definition of Interpretability and Interpretation
7.1.2 The Value of Interpretation
7.1.2.1 Model-Oriented Reasons
7.1.2.2 User-Oriented Reasons
7.1.3 Traditional Interpretation Methods
7.1.3.1 Post-Hoc Interpretation
7.1.3.2 Interpretable Modeling
7.1.4 Opportunities and Challenges
7.2 Explanation Methods for Graph Neural Networks
7.2.1 Background
7.2.2 Approximation-Based Explanation
7.2.2.1 White-Box Approximation Method
7.2.2.2 Black-Box Approximation Methods
7.2.3 Relevance-Propagation Based Explanation
7.2.4 Perturbation-Based Approaches
7.2.5 Generative Explanation
7.3 Interpretable Modeling on Graph Neural Networks
7.3.1 GNN-Based Attention Models
7.3.1.1 Attention Models for Homogeneous Graphs
7.3.1.2 Attention Models for Heterogeneous Graphs
7.3.2 Disentangled Representation Learning on Graphs
7.3.2.1 Is a Single Vector Enough?
7.3.2.2 Prototypes-Based Soft-Cluster Assignment
7.3.2.3 Dynamic Routing Based Clustering
7.4 Evaluation of Graph Neural Networks Explanations
7.4.1 Benchmark Datasets
7.4.1.1 Synthetic Datasets
7.4.1.2 Real-World Datasets
7.4.2 Evaluation Metrics
7.5 Future Directions
Chapter 8 Graph Neural Networks: Adversarial Robustness
8.1 Motivation
8.2 Limitations of Graph Neural Networks: Adversarial Examples
8.2.1 Categorization of Adversarial Attacks
Aspect 1: Property under Investigation (Attacker’s Goal)
Aspect 2: The Perturbation Space (Attacker’s Capabilities)
Aspect 3: Available Information (Attacker’s Knowledge)
Aspect 4: The Algorithmic View
8.2.2 The Effect of Perturbations and Some Insights
8.2.2.1 Transferability and Patterns
8.2.3 Discussion and Future Directions
8.3 Provable Robustness: Certificates for Graph Neural Networks
8.3.1 Model-Specific Certificates
Lower Bounds on the Worst-Case Margin
8.3.2 Model-Agnostic Certificates
Putting Model-Agnostic Certificates into Practice
8.3.3 Advanced Certification and Discussion
8.4 Improving Robustness of Graph Neural Networks
8.4.1 Improving the Graph
8.4.2 Improving the Training Procedure
8.4.2.1 Robust Training
8.4.2.2 Further Training Principles
8.4.3 Improving the Graph Neural Networks’ Architecture
8.4.3.1 Adaptively Down-Weighting Edges
8.4.3.2 Further Approaches
8.4.4 Discussion and Future Directions
8.5 Proper Evaluation in the View of Robustness
Empirical Robustness Evaluation
Provable Robustness Evaluation
8.6 Summary
Acknowledgements
Part III Frontiers of Graph Neural Networks
Chapter 9 Graph Neural Networks: Graph Classification
9.1 Introduction
9.2 Graph neural networks for graph classification: Classic works and modern architectures
9.2.1 Spatial approaches
9.2.2 Spectral approaches
9.3 Pooling layers: Learning graph-level outputs from node-level outputs
9.3.1 Attention-based pooling layers
9.3.2 Cluster-based pooling layers
9.3.3 Other pooling layers
9.4 Limitations of graph neural networks and higher-order layers for graph classification
9.4.1 Overcoming limitations
9.5 Applications of graph neural networks for graph classification
9.6 Benchmark Datasets
9.7 Summary
Chapter 10 Graph Neural Networks: Link Prediction
10.1 Introduction
10.2 Traditional Link Prediction Methods
10.2.1 Heuristic Methods
10.2.1.1 Local Heuristics
10.2.1.3 Summarization
10.2.2 Latent-Feature Methods
10.2.2.1 Matrix Factorization
10.2.2.2 Network Embedding
10.2.2.3 Summarization
10.2.3 Content-Based Methods
10.3 GNN Methods for Link Prediction
10.3.1 Node-Based Methods
10.3.1.1 Graph AutoEncoder
10.3.1.2 Variational Graph AutoEncoder
10.3.1.3 Variants of GAE and VGAE
10.3.2 Subgraph-Based Methods
10.3.2.1 The SEAL Framework
10.3.2.2 Variants of SEAL
10.3.3 Comparing Node-Based Methods and Subgraph-Based Methods
10.4 Theory for Link Prediction
10.4.1 γ-Decaying Heuristic Theory
10.4.1.1 Definition of γ-Decaying Heuristic
10.4.1.2 Katz index
10.4.1.3 PageRank
10.4.1.4 SimRank
10.4.1.5 Discussion
10.4.2 Labeling Trick
10.4.2.1 Structural Representation
10.4.2.2 Labeling Trick Enables Learning Structural Representations
10.5 Future Directions
10.5.1 Accelerating Subgraph-Based Methods
10.5.2 Designing More Powerful Labeling Tricks
10.5.3 Understanding When to Use One-Hot Features
Chapter 11 Graph Neural Networks: Graph Generation
11.1 Introduction
11.2 Classic Graph Generative Models
11.2.1 Erdős–Rényi Model
11.2.1.1 Model
11.2.1.2 Discussion
11.2.2 Stochastic Block Model
11.2.2.1 Model
11.2.2.2 Discussion
11.3 Deep Graph Generative Models
11.3.1 Representing Graphs
11.3.2 Variational Auto-Encoder Methods
11.3.2.1 The GraphVAE Family
11.3.2.2 Hierarchical and Constrained GraphVAEs
11.3.3 Deep Autoregressive Methods
11.3.3.1 GNN-based Autoregressive Model
11.3.3.2 Graph Recurrent Neural Networks (GraphRNN)
11.3.3.3 Graph Recurrent Attention Networks (GRAN)
11.3.4 Generative Adversarial Methods
11.3.4.1 Adjacency Matrix Based GAN
11.3.4.2 Random Walk Based GAN
11.4 Summary
Chapter 12 Graph Neural Networks: Graph Transformation
12.1 Problem Formulation of Graph Transformation
12.2 Node-level Transformation
12.2.1 Definition of Node-level Transformation
12.2.2 Interaction Networks
12.2.3 Spatio-Temporal Convolution Recurrent Neural Networks
12.3 Edge-level Transformation
12.3.1 Definition of Edge-level Transformation
12.3.2 Graph Transformation Generative Adversarial Networks
12.3.3 Multi-scale Graph Transformation Networks
12.3.4 Graph Transformation Policy Networks
12.4 Node-Edge Co-Transformation
12.4.1 Definition of Node-Edge Co-Transformation
12.4.1.1 Junction-tree Variational Auto-encoder Transformer
12.4.1.2 Molecule Cycle-Consistent Adversarial Networks
12.4.1.3 Directed Acyclic Graph Transformation Networks
12.4.2 Editing-based Node-Edge Co-Transformation
12.4.2.1 Graph Convolutional Policy Networks
12.4.2.2 Molecule Deep Q-networks Transformer
12.4.2.3 Node-Edge Co-evolving Deep Graph Translator
12.5 Other Graph-based Transformations
12.5.1 Sequence-to-Graph Transformation
12.5.2 Graph-to-Sequence Transformation
12.5.3 Context-to-Graph Transformation
12.6 Summary
Chapter 13 Graph Neural Networks: Graph Matching
13.1 Introduction
13.2 Graph Matching Learning
13.2.1 Problem Definition
13.2.2 Deep Learning based Models
13.2.3 Graph Neural Network based Models
13.3 Graph Similarity Learning
13.3.1 Problem Definition
13.3.2 Graph-Graph Regression Tasks
13.4 Summary
Chapter 14 Graph Neural Networks: Graph Structure Learning
14.1 Introduction
14.2 Traditional Graph Structure Learning
14.2.1 Unsupervised Graph Structure Learning
14.2.1.1 Graph Structure Learning from Smooth Signals
14.2.1.2 Spectral Clustering via Graph Structure Learning
14.2.2 Supervised Graph Structure Learning
14.2.2.1 Relational Inference for Interacting Systems
14.2.2.2 Structure Learning in Bayesian Networks
14.3 Graph Structure Learning for Graph Neural Networks
14.3.1 Joint Graph Structure and Representation Learning
14.3.1.1 Problem Formulation
14.3.1.2 Learning Discrete Graph Structures
14.3.1.3 Learning Weighted Graph Structures
14.3.2 Connections to Other Problems
14.3.2.1 Graph Structure Learning as Graph Generation
14.3.2.2 Graph Structure Learning for Graph Adversarial Defenses
14.3.2.3 Understanding Transformers from a Graph Learning Perspective
14.4 Future Directions
14.4.1 Robust Graph Structure Learning
14.4.2 Scalable Graph Structure Learning
14.4.3 Graph Structure Learning for Heterogeneous Graphs
14.5 Summary
Chapter 15 Dynamic Graph Neural Networks
15.1 Introduction
15.2 Background and Notation
15.2.1 Graph Neural Networks
15.2.2 Sequence Models
15.2.3 Encoder-Decoder Framework and Model Training
15.3 Categories of Dynamic Graphs
15.3.1 Discrete vs. Continuous
15.3.2 Types of Evolution
15.3.3 Prediction Problems, Interpolation, and Extrapolation
15.4 Modeling Dynamic Graphs with Graph Neural Networks
15.4.1 Conversion to Static Graphs
15.4.2 Graph Neural Networks for DTDGs
15.4.3 Graph Neural Networks for CTDGs
15.5 Applications
15.5.1 Skeleton-based Human Activity Recognition
15.5.2 Traffic Forecasting
15.5.3 Temporal Knowledge Graph Completion
15.6 Summary
Chapter 16 Heterogeneous Graph Neural Networks
16.1 Introduction to HGNNs
16.1.1 Basic Concepts of Heterogeneous Graphs
16.1.2 Challenges of HG Embedding
16.1.3 Brief Overview of Current Development
16.2 Shallow Models
16.2.1 Decomposition-based Methods
16.2.2 Random Walk-based Methods
16.3 Deep Models
16.3.1 Message Passing-based Methods (HGNNs)
16.3.2 Encoder-decoder-based Methods
16.3.3 Adversarial-based Methods
16.4 Review
16.5 Future Directions
16.5.1 Structures and Properties Preservation
16.5.2 Deeper Exploration
16.5.3 Reliability
16.5.4 Applications
Chapter 17 Graph Neural Networks: AutoML
17.1 Background
17.1.1 Notations of AutoGNN
17.1.2 Problem Definition of AutoGNN
17.1.3 Challenges in AutoGNN
17.2 Search Space
17.2.1 Architecture Search Space
17.2.1.1 Micro-architecture Search Space
17.2.1.2 Macro-architecture Search Space
17.2.2 Training Hyperparameter Search Space
17.2.3 Efficient Search Space
17.3 Search Algorithms
17.3.1 Random Search
17.3.2 Evolutionary Search
17.3.3 Reinforcement Learning Based Search
17.3.4 Differentiable Search
17.3.5 Efficient Performance Estimation
17.4 Future Directions
Acknowledgements
Chapter 18 Graph Neural Networks: Self-supervised Learning
18.1 Introduction
18.2 Self-supervised Learning
18.3 Applying SSL to Graph Neural Networks: Categorizing Training Strategies, Loss Functions and Pretext Tasks
18.3.1 Training Strategies
18.3.1.1 Self-training
18.3.1.2 Pre-training and Fine-tuning
18.3.1.3 Joint Training
18.3.2 Loss Functions
18.3.2.1 Classification and Regression Loss
18.3.2.2 Contrastive Learning Loss
18.3.3 Pretext Tasks
18.4 Node-level SSL Pretext Tasks
18.4.1 Structure-based Pretext Tasks
18.4.2 Feature-based Pretext Tasks
18.4.3 Hybrid Pretext Tasks
18.5 Graph-level SSL Pretext Tasks
18.5.1 Structure-based Pretext Tasks
18.5.2 Feature-based Pretext Tasks
18.5.3 Hybrid Pretext Tasks
18.6 Node-graph-level SSL Pretext Tasks
18.7 Discussion
18.8 Summary
Part IV Broad and Emerging Applications with Graph Neural Networks
Chapter 19 Graph Neural Networks in Modern Recommender Systems
19.1 Graph Neural Networks for Recommender Systems in Practice
19.1.1 Introduction
19.1.2 Classic Approaches to Predict User-Item Preference
19.1.3 Item Recommendation in User-Item Recommender Systems: A Bipartite Graph Perspective
19.2 Case Study 1: Dynamic Graph Neural Networks Learning
19.2.1 Dynamic Sequential Graph
19.2.2 DSGL: Dynamic Sequential Graph Learning
19.2.2.1 Overview
19.2.2.2 Embedding Layer
19.2.2.3 Time-Aware Sequence Encoding
19.2.2.4 Second-Order Graph Attention
19.2.2.5 Aggregation and Layer Combination
19.2.3 Model Prediction
19.2.4 Experiments and Discussions
19.2.4.1 Performance Comparison
19.2.4.2 Effectiveness of Graph Structure and Layer Combination
19.2.4.3 Effectiveness of Time-Aware Sequence Encoding
19.2.4.4 Effectiveness of Second-Order Graph Attention
19.3 Case Study 2: Device-Cloud Collaborative Learning for Graph Neural Networks
19.3.1 The Proposed Framework
19.3.1.1 MetaPatch for On-device Personalization
19.3.1.2 MoMoDistill to Enhance the Cloud Modeling
19.3.2 Experiments and Discussions
19.3.2.1 How Does DCCL Perform Compared with State-of-the-Art Methods?
19.3.2.2 Does On-Device Personalization Benefit the Cloud Model?
19.3.2.3 Iterative Characteristics of Multi-Round DCCL
19.3.2.4 Ablation Study of DCCL
19.4 Future Directions
Chapter 20 Graph Neural Networks in Computer Vision
20.1 Introduction
20.2 Representing Vision as Graphs
20.2.1 Visual Node Representation
20.2.2 Visual Edge Representation
20.2.2.1 Spatial Edges
20.2.2.2 Temporal Edges
20.3 Case Study 1: Image
20.3.1 Object Detection
20.3.2 Image Classification
20.4 Case Study 2: Video
20.4.1 Video Action Recognition
20.4.2 Temporal Action Localization
20.5 Other Related Work: Cross-media
20.5.1 Visual Caption
20.5.2 Visual Question Answering
20.5.3 Cross-Media Retrieval
20.6 Frontiers for Graph Neural Networks on Computer Vision
20.6.1 Advanced Graph Neural Networks for Computer Vision
20.6.2 Broader Area of Graph Neural Networks on Computer Vision
20.7 Summary
Chapter 21 Graph Neural Networks in Natural Language Processing
21.1 Introduction
21.2 Modeling Text as Graphs
21.2.1 Graph Representations in Natural Language Processing
21.2.2 Tackling Natural Language Processing Tasks from a Graph Perspective
21.3 Case Study 1: Graph-based Text Clustering and Matching
21.3.1 Graph-based Clustering for Hot Events Discovery and Organization
21.3.2 Long Document Matching with Graph Decomposition and Convolution
21.4 Case Study 2: Graph-based Multi-Hop Reading Comprehension
21.5 Future Directions
21.6 Conclusions
Chapter 22 Graph Neural Networks in Program Analysis
22.1 Introduction
22.2 Machine Learning in Program Analysis
22.3 A Graph Representation of Programs
22.4 Graph Neural Networks for Program Graphs
22.5 Case Study 1: Detecting Variable Misuse Bugs
22.6 Case Study 2: Predicting Types in Dynamically Typed Languages
22.7 Future Directions
Chapter 23 Graph Neural Networks in Software Mining
23.1 Introduction
23.2 Modeling Software as a Graph
23.2.1 Macro versus Micro Representations
23.2.1.1 Macro-level Representations
23.2.1.2 Micro-level Representations
23.2.2 Combining the Macro- and Micro-level
23.3 Relevant Software Mining Tasks
23.4 Example Software Mining Task: Source Code Summarization
23.4.1 A Primer on GNN-based Code Summarization
23.4.1.1 Model Input / Output
23.4.1.2 Model Architecture
23.4.1.3 Experiment
23.4.1.4 What benefit did the GNN bring?
23.4.2 Directions for Improvement
23.4.2.1 Example Micro-level Improvement
23.4.2.2 Example Macro-level Improvement
23.5 Summary
Chapter 24 GNN-based Biomedical Knowledge Graph Mining in Drug Development
24.1 Introduction
24.2 Existing Biomedical Knowledge Graphs
24.3 Inference on Knowledge Graphs
24.3.1 Conventional KG inference techniques
24.3.2 GNN-based KG inference techniques
24.4 KG-based hypothesis generation in computational drug development
24.4.1 A machine learning framework for KG-based drug repurposing
24.4.2 Application of KG-based drug repurposing in COVID-19
24.5 Future directions
24.5.1 KG quality control
24.5.2 Scalable inference
24.5.3 Coupling KGs with other biomedical data
Chapter 25 Graph Neural Networks in Predicting Protein Function and Interactions
25.1 From Protein Interactions to Function: An Introduction
25.1.1 Enter Stage Left: Protein-Protein Interaction Networks
25.1.2 Problem Formulation(s), Assumptions, and Noise: A Historical Perspective
25.1.3 Shallow Machine Learning Models over the Years
25.1.4 Enter Stage Right: Graph Neural Networks
25.1.4.1 Preliminaries
25.1.4.2 GNNs for Representation Learning
25.1.4.3 GNNs for the Link Prediction Problem
25.1.4.4 GNNs for Automated Function Prediction as a Multi-label Classification Problem
25.2 Highlighted Case Studies
25.2.1 Case Study 1: Prediction of Protein-Protein and Protein-Drug Interactions: The Link Prediction Problem
25.2.2 Case Study 2: Prediction of Protein Function and Functionally-important Residues
25.2.3 Case Study 3: From Representation Learning to Multirelational Link Prediction in Biological Networks with Graph Autoencoders
25.3 Future Directions
Chapter 26 Graph Neural Networks in Anomaly Detection
26.1 Introduction
26.2 Issues
26.2.1 Data-specific issues
26.2.2 Task-specific Issues
26.2.3 Model-specific Issues
26.3 Pipeline
26.3.1 Graph Construction and Transformation
26.3.2 Graph Representation Learning
26.3.3 Prediction
26.4 Taxonomy
26.5 Case Studies
26.5.1 Case Study 1: Graph Embeddings for Malicious Accounts Detection
26.5.2 Case Study 2: Hierarchical Attention Mechanism based Cash-out User Detection
26.5.3 Case Study 3: Attentional Heterogeneous Graph Neural Networks for Malicious Program Detection
26.5.4 Case Study 4: Graph Matching Framework to Learn the Program Representation and Similarity Metric via Graph Neural Networks
26.5.5 Case Study 5: Anomaly Detection in Dynamic Graph Using Attention-based Temporal GCN
26.5.6 Case Study 6: GCN-based Anti-Spam for Spam Review Detection
26.6 Future Directions
Chapter 27 Graph Neural Networks in Urban Intelligence
27.1 Graph Neural Networks for Urban Intelligence
27.1.1 Introduction
27.1.2 Application scenarios in urban intelligence
27.1.3 Representing urban systems as graphs
27.1.4 Case Study 1: Graph Neural Networks in urban configuration and transportation
27.1.5 Case Study 2: Graph Neural Networks in urban anomaly and event detection
27.1.6 Case Study 3: Graph Neural Networks in urban human behavior inference
27.1.7 Future Directions
References