Paul N. Whatmough
AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

Nov 10, 2021
Chuteng Zhou, Fernando Garcia Redondo, Julian Büchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, Paul N. Whatmough


Federated Learning Based on Dynamic Regularization

Nov 09, 2021
Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama


S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration

Jul 16, 2021
Zhi-Gang Liu, Paul N. Whatmough, Yuhao Zhu, Matthew Mattina


A LiDAR-Guided Framework for Video Enhancement

Mar 15, 2021
Yu Feng, Patrick Hansen, Paul N. Whatmough, Guoyu Lu, Yuhao Zhu


Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices

Feb 14, 2021
Urmish Thakker, Paul N. Whatmough, Zhigang Liu, Matthew Mattina, Jesse Beu


Information contraction in noisy binary neural networks and its implications

Feb 01, 2021
Chuteng Zhou, Quntao Zhuang, Matthew Mattina, Paul N. Whatmough


MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers

Oct 25, 2020
Colby Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas Navarro, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, Paul N. Whatmough


Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration

Sep 04, 2020
Zhi-Gang Liu, Paul N. Whatmough, Matthew Mattina


TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids

May 20, 2020
Igor Fedorov, Marko Stamenovic, Carl Jensen, Li-Chia Yang, Ari Mandell, Yiming Gan, Matthew Mattina, Paul N. Whatmough


Systolic Tensor Array: An Efficient Structured-Sparse GEMM Accelerator for Mobile CNN Inference

May 16, 2020
Zhi-Gang Liu, Paul N. Whatmough, Matthew Mattina
