Shengshan Hu

Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness
Apr 17, 2024
Hangtao Zhang, Shengshan Hu, Yichen Wang, Leo Yu Zhang, Ziqi Zhou, Xianlong Wang, Yanjun Zhang, Chao Chen

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples
Mar 19, 2024
Ziqi Zhou, Minghui Li, Wei Liu, Shengshan Hu, Yechao Zhang, Wei Wan, Lulu Xue, Leo Yu Zhang, Dezhong Yao, Hai Jin

Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks
Jan 30, 2024
Lulu Xue, Shengshan Hu, Ruizhi Zhao, Leo Yu Zhang, Shengqing Hu, Lichao Sun, Dezhong Yao

MISA: Unveiling the Vulnerabilities in Split Federated Learning
Dec 19, 2023
Wei Wan, Yuxuan Ning, Shengshan Hu, Lulu Xue, Minghui Li, Leo Yu Zhang, Hai Jin

Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations
Nov 30, 2023
Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang, Hai Jin

AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Aug 14, 2023
Ziqi Zhou, Shengshan Hu, Minghui Li, Hangtao Zhang, Yechao Zhang, Hai Jin

Downstream-agnostic Adversarial Examples
Aug 14, 2023
Ziqi Zhou, Shengshan Hu, Ruizhi Zhao, Qian Wang, Leo Yu Zhang, Junhui Hou, Hai Jin

Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples
Aug 10, 2023
Qiufan Ji, Lin Wang, Cong Shi, Shengshan Hu, Yingying Chen, Lichao Sun

Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training
Jul 19, 2023
Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin

Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning
Apr 21, 2023
Hangtao Zhang, Zeming Yao, Leo Yu Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li
