Itay Safran

Depth Separations in Neural Networks: Separating the Dimension from the Accuracy

Feb 11, 2024
Itay Safran, Daniel Reichman, Paul Valiant

How Many Neurons Does it Take to Approximate the Maximum?

Jul 18, 2023
Itay Safran, Daniel Reichman, Paul Valiant

On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias

May 18, 2022
Itay Safran, Gal Vardi, Jason D. Lee

Optimization-Based Separations for Neural Networks

Dec 04, 2021
Itay Safran, Jason D. Lee

Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems

Jun 12, 2021
Itay Safran, Ohad Shamir

The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks

Jun 01, 2020
Itay Safran, Gilad Yehudai, Ohad Shamir

How Good is SGD with Random Shuffling?

Jul 31, 2019
Itay Safran, Ohad Shamir

Depth Separations in Neural Networks: What is Actually Being Separated?

May 26, 2019
Itay Safran, Ronen Eldan, Ohad Shamir

A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance

Jan 30, 2019
Adi Shamir, Itay Safran, Eyal Ronen, Orr Dunkelman

Spurious Local Minima are Common in Two-Layer ReLU Neural Networks

Aug 09, 2018
Itay Safran, Ohad Shamir
