Hamed Pirsiavash

One Category One Prompt: Dataset Distillation using Diffusion Models

Mar 11, 2024
Ali Abbasi, Ashkan Shahbazi, Hamed Pirsiavash, Soheil Kolouri

GeNIe: Generative Hard Negative Images Through Diffusion

Dec 05, 2023
Soroush Abbasi Koohpayegani, Anuj Singh, K L Navaneet, Hadi Jamali-Rad, Hamed Pirsiavash

Compact3D: Compressing Gaussian Splat Radiance Field Models with Vector Quantization

Nov 30, 2023
KL Navaneet, Kossar Pourahmadi Meibodi, Soroush Abbasi Koohpayegani, Hamed Pirsiavash

BrainWash: A Poisoning Attack to Forget in Continual Learning

Nov 24, 2023
Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri

NOLA: Networks as Linear Combination of Low Rank Random Basis

Oct 04, 2023
Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash

SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers

Oct 04, 2023
KL Navaneet, Soroush Abbasi Koohpayegani, Essam Sleiman, Hamed Pirsiavash

A Cookbook of Self-Supervised Learning

Apr 24, 2023
Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, Micah Goldblum

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning

Apr 04, 2023
Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, Liang Tan

Is Multi-Task Learning an Upper Bound for Continual Learning?

Oct 26, 2022
Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri

SimA: Simple Softmax-free Attention for Vision Transformers

Jun 17, 2022
Soroush Abbasi Koohpayegani, Hamed Pirsiavash
