Jared Tanner

Deep Neural Network Initialization with Sparsity Inducing Activations

Feb 25, 2024
Ilan Price, Nicholas Daultry Ball, Samuel C. H. Lam, Adam C. Jones, Jared Tanner

Beyond IID weights: sparse and low-rank deep Neural Networks are also Gaussian Processes

Oct 25, 2023
Thiziri Nait-Saada, Alireza Naderi, Jared Tanner

Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs

Oct 17, 2023
Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji

On the Initialisation of Wide Low-Rank Feedforward Neural Networks

Jan 31, 2023
Thiziri Nait Saada, Jared Tanner

Optimal Approximation Complexity of High-Dimensional Functions with Neural Networks

Jan 30, 2023
Vincent P. H. Goverse, Jad Hamdan, Jared Tanner

Improved Projection Learning for Lower Dimensional Feature Maps

Oct 27, 2022
Ilan Price, Jared Tanner

Tuning-free multi-coil compressed sensing MRI with Parallel Variable Density Approximate Message Passing (P-VDAMP)

Mar 08, 2022
Charles Millard, Mark Chiew, Jared Tanner, Aaron T. Hess, Boris Mailhe

Activation function design for deep networks: linearity and effective initialisation

May 17, 2021
Michael Murray, Vinayak Abrol, Jared Tanner

Dense for the Price of Sparse: Improved Performance of Sparsely Initialized Networks via a Subspace Offset

Feb 12, 2021
Ilan Price, Jared Tanner

An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks

Dec 03, 2020
Giuseppe Ughi, Vinayak Abrol, Jared Tanner
