Ambra Demontis

AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples
Apr 30, 2024
Antonio Emanuele Cinà, Jérôme Rony, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Ismail Ben Ayed, Fabio Roli

Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization
Oct 12, 2023
Giuseppe Floris, Raffaele Mura, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks
Oct 12, 2023
Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks
Sep 13, 2023
Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training
Jul 01, 2023
Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Dec 12, 2022
Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
May 04, 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

Machine Learning Security against Data Poisoning: Are We There Yet?
Apr 12, 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Energy-Latency Attacks via Sponge Poisoning
Apr 11, 2022
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches
Mar 07, 2022
Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
