Adrian Riekert

Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks

Feb 07, 2024
Arnulf Jentzen, Adrian Riekert
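
Neither optimizer named in this title is exotic: the paper concerns plain stochastic gradient descent and Adam. For reference, below is a minimal NumPy sketch of the two standard update rules. It only illustrates the methods the title refers to, not the paper's construction of local minimizers, and the hyperparameter defaults are the usual textbook ones, not values taken from the paper.

```python
import numpy as np

def sgd_step(theta, grad, lr=0.01):
    """One step of plain stochastic gradient descent."""
    return theta - lr * grad

def adam_step(theta, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step (Kingma & Ba, 2015): bias-corrected running
    estimates of the first and second moments of the gradient."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Adam state: zero moment estimates and a step counter.
theta = np.zeros(3)
state = (np.zeros(3), np.zeros(3), 0)
```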

Deep neural network approximation of composite functions without the curse of dimensionality

Apr 12, 2023
Adrian Riekert

Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations

Feb 07, 2023
Arnulf Jentzen, Adrian Riekert, Philippe von Wurstemberger

Normalized gradient flow optimization in the training of ReLU artificial neural networks

Jul 13, 2022
Simon Eberle, Arnulf Jentzen, Adrian Riekert, Georg Weiss

On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks

Dec 17, 2021
Arnulf Jentzen, Adrian Riekert

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions

Dec 13, 2021
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa

Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation

Aug 18, 2021
Simon Eberle, Arnulf Jentzen, Adrian Riekert, Georg S. Weiss

A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions

Aug 10, 2021
Arnulf Jentzen, Adrian Riekert

Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation

Jul 09, 2021
Arnulf Jentzen, Adrian Riekert

A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions

Apr 01, 2021
Arnulf Jentzen, Adrian Riekert
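
The setting of this title is concrete enough to sketch: a shallow ReLU network trained by single-sample SGD to fit a constant target function. The toy NumPy example below (network width, learning rate, and step count are arbitrary choices, not taken from the paper) only illustrates that training problem; the paper's contribution is a convergence proof, which this code does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer ReLU network: x -> v . relu(W x + b)
width = 16
W = rng.normal(size=(width, 1))
b = rng.normal(size=width)
v = rng.normal(size=width)

target = 2.0   # constant target function f(x) = 2
lr = 1e-2

for step in range(5000):
    x = rng.uniform(-1.0, 1.0, size=1)   # one sample per SGD step
    h = np.maximum(W @ x + b, 0.0)       # hidden ReLU activations
    y = v @ h                            # network output (scalar)
    err = y - target                     # d/dy of 0.5 * (y - target)**2
    mask = (h > 0).astype(float)         # ReLU derivative
    grad_v = err * h
    grad_pre = err * v * mask            # gradient w.r.t. pre-activations
    v -= lr * grad_v
    W -= lr * np.outer(grad_pre, x)
    b -= lr * grad_pre

print(v @ np.maximum(W @ np.array([0.5]) + b, 0.0))  # typically close to 2.0
```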
