Zhenting Wang

Evaluating and Mitigating IP Infringement in Visual Generative AI

Jun 07, 2024

Towards Imperceptible Backdoor Attack in Self-supervised Learning

May 23, 2024

How to Trace Latent Generative Model Generated Images without Artificial Watermark?

May 22, 2024

Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?

Apr 10, 2024

Finding needles in a haystack: A Black-Box Approach to Invisible Watermark Detection

Mar 30, 2024

How to Detect Unauthorized Data Usages in Text-to-image Diffusion Models

Jul 06, 2023

Alteration-free and Model-agnostic Origin Attribution of Generated Images

May 29, 2023

NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models

May 28, 2023

UNICORN: A Unified Backdoor Trigger Inversion Framework

Apr 05, 2023

Backdoor Vulnerabilities in Normally Trained Deep Learning Models

Nov 29, 2022