Tamer Abuhmed


From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings

May 03, 2024
Firuz Juraev, Mohammed Abuhamad, Eric Chan-Tin, George K. Thiruvathukal, Tamer Abuhmed


Impact of Architectural Modifications on Deep Learning Adversarial Robustness

May 03, 2024
Firuz Juraev, Mohammed Abuhamad, Simon S. Woo, George K. Thiruvathukal, Tamer Abuhmed


Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks

Jul 21, 2023
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed


Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems

Jul 13, 2023
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed


Single-Class Target-Specific Attack against Interpretable Deep Learning Systems

Jul 12, 2023
Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed


Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning

Nov 29, 2022
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
