Daniel Coquelin

AB-Training: A Communication-Efficient Approach for Distributed Low-Rank Learning

May 02, 2024

Harnessing Orthogonality to Train Low-Rank Neural Networks

Jan 16, 2024

Feed-Forward Optimization With Delayed Feedback for Neural Networks

Apr 26, 2023

Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations

Jan 20, 2023

HyDe: The First Open-Source, Python-Based, GPU-Accelerated Hyperspectral Denoising Package

Apr 14, 2022

Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO)

Apr 15, 2021

HeAT -- a Distributed and GPU-accelerated Tensor Framework for Data Analytics

Jul 27, 2020