Zhan Qin

Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution

May 08, 2024
Shuo Shao, Yiming Li, Hongwei Yao, Yiling He, Zhan Qin, Kui Ren

A Causal Explainable Guardrails for Large Language Models

May 07, 2024
Zhixuan Chu, Yan Wang, Longfei Li, Zhibo Wang, Zhan Qin, Kui Ren

Going Proactive and Explanatory Against Malware Concept Drift

May 07, 2024
Yiling He, Junchi Lei, Zhan Qin, Kui Ren

Sora Detector: A Unified Hallucination Detection for Large Text-to-Video Models

May 07, 2024
Zhixuan Chu, Lei Zhang, Yichen Sun, Siqiao Xue, Zhibo Wang, Zhan Qin, Kui Ren

LLM-Guided Multi-View Hypergraph Learning for Human-Centric Explainable Recommendation

Jan 16, 2024
Zhixuan Chu, Yan Wang, Qing Cui, Longfei Li, Wenqing Chen, Sheng Li, Zhan Qin, Kui Ren

Certified Minimax Unlearning with Generalization Rates and Deletion Capacity

Dec 16, 2023
Jiaqi Liu, Jian Lou, Zhan Qin, Kui Ren

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger

Dec 03, 2023
Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin

Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey

Oct 27, 2023
Xinyu She, Yue Liu, Yanjie Zhao, Yiling He, Li Li, Chakkrit Tantithamthavorn, Zhan Qin, Haoyu Wang

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

Oct 19, 2023
Hongwei Yao, Jian Lou, Zhan Qin

SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution

Sep 25, 2023
Zhongjie Ba, Jieming Zhong, Jiachen Lei, Peng Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren
