
Dario Pasquini


Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks

Mar 06, 2024
Dario Pasquini, Martin Strohmeier, Carmela Troncoso


Can Decentralized Learning be more robust than Federated Learning?

Mar 07, 2023
Mathilde Raynal, Dario Pasquini, Carmela Troncoso


Universal Neural-Cracking-Machines: Self-Configurable Password Models from Auxiliary Data

Jan 18, 2023
Dario Pasquini, Giuseppe Ateniese, Carmela Troncoso


On the Privacy of Decentralized Machine Learning

May 17, 2022
Dario Pasquini, Mathilde Raynal, Carmela Troncoso


Eluding Secure Aggregation in Federated Learning via Model Inconsistency

Nov 14, 2021
Dario Pasquini, Danilo Francati, Giuseppe Ateniese


Unleashing the Tiger: Inference Attacks on Split Learning

Dec 04, 2020
Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi


Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries

Oct 26, 2020
Dario Pasquini, Marco Cianfriglia, Giuseppe Ateniese, Massimo Bernaschi


Interpretable Probabilistic Password Strength Meters via Deep Learning

Apr 29, 2020
Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi


Out-domain examples for generative models

Mar 07, 2019
Dario Pasquini, Marco Mingione, Massimo Bernaschi
