
Guibin Zhang

Graph Sparsification via Mixture of Graphs

May 23, 2024

All Nodes are created Not Equal: Node-Specific Layer Aggregation and Filtration for GNN

May 13, 2024

DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting

Mar 05, 2024

CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph Neural Networks

Feb 22, 2024

Modeling Spatio-temporal Dynamical Systems with Neural Discrete Learning and Levels-of-Experts

Feb 06, 2024

EXGC: Bridging Efficiency and Explainability in Graph Condensation

Feb 05, 2024

Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness

Feb 02, 2024

The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive field

Aug 19, 2023