Jinyuan Jia

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models

Apr 02, 2024
Yanting Wang, Hongye Fu, Wei Zou, Jinyuan Jia

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding

Feb 24, 2024
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, Radha Poovendran

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models

Feb 12, 2024
Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia

Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning

Jan 10, 2024
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Radha Poovendran

TextGuard: Provable Defense against Backdoor Attacks on Text Classification

Nov 25, 2023
Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, Dawn Song

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI

Oct 30, 2023
Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, Jinghui Chen

Prompt Injection Attacks and Defenses in LLM-Integrated Applications

Oct 19, 2023
Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?

Oct 02, 2023
Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu

PORE: Provably Robust Recommender Systems against Data Poisoning Attacks

Mar 26, 2023
Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong

PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees

Mar 03, 2023
Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
