
Defang Chen

On the Trajectory Regularity of ODE-based Diffusion Sampling

May 18, 2024

Knowledge Translation: A New Pathway for Model Compression

Jan 11, 2024

Fast ODE-based Sampling for Diffusion Models in Around 5 Steps

Nov 30, 2023

Customizing Synthetic Data for Data-Free Student Learning

Jul 10, 2023

Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning

Jun 11, 2023

A Geometric Perspective on Diffusion Models

May 31, 2023

Accelerating Diffusion Sampling with Classifier-based Feature Distillation

Nov 22, 2022

Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision

Oct 25, 2022

Label-Efficient Domain Generalization via Collaborative Exploration and Generalization

Aug 07, 2022

Improving Knowledge Graph Embedding via Iterative Self-Semantic Knowledge Distillation

Jun 07, 2022