"Balakrishnan": models, code, and papers
Rejoinder: On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning

Aug 07, 2020
Lin Liu, Rajarshi Mukherjee, James M. Robins

Figures 1–3.

Stochastic Adversarial Koopman Model for Dynamical Systems

Sep 10, 2021
Kaushik Balakrishnan, Devesh Upadhyay

Figures 1–4.

A Functional EM Algorithm for Panel Count Data with Missing Counts

Mar 28, 2020
Alexander Moreno, Zhenke Wu, Jamie Yap, David Wetter, Cho Lam, Inbal Nahum-Shani, Walter Dempsey, James M. Rehg

Figures 1–4.

Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in $O(\sqrt{n})$ iterations

Aug 28, 2019
Yihong Wu, Harrison H. Zhou

Figures 1–4.
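Several entries in this list study EM for two-component Gaussian mixtures. As a hedged illustration only (this is our own minimal sketch, not code from any listed paper), here is the standard randomly initialized EM iteration for the symmetric mixture 0.5·N(+θ, 1) + 0.5·N(−θ, 1), where the E- and M-steps collapse into the update θ ← mean(tanh(θ·x)·x):

```python
import numpy as np

def em_two_component(x, n_iter=200, seed=None):
    """Sketch of randomly initialized EM for 0.5*N(+theta,1) + 0.5*N(-theta,1).

    Function and parameter names are illustrative, not from the papers above.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal()  # random initialization
    for _ in range(n_iter):
        # tanh(theta * x) is the posterior expectation of each sample's
        # +1/-1 label under the current theta (E-step); averaging its
        # product with x is the M-step for this symmetric model.
        theta = np.mean(np.tanh(theta * x) * x)
    return abs(theta)  # the sign of theta is not identifiable

# Usage sketch: simulate data with true theta = 2 and recover it.
rng = np.random.default_rng(0)
labels = rng.choice([-1.0, 1.0], size=5000)
x = 2.0 * labels + rng.standard_normal(5000)
est = em_two_component(x, seed=1)
```

With well-separated components and n = 5000 samples, the estimate lands close to the true θ = 2 regardless of the random starting point, since the update has ±θ as its attracting fixed points.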

Statistical Guarantees for Estimating the Centers of a Two-component Gaussian Mixture by EM

Aug 07, 2016
Jason M. Klusowski, W. D. Brinda

Figure 1.

Average-Case Lower Bounds for Learning Sparse Mixtures, Robust Estimation and Semirandom Adversaries

Aug 08, 2019
Matthew Brennan, Guy Bresler

Figures 1–4.

Regularized EM Algorithms: A Unified Framework and Statistical Guarantees

Dec 05, 2015
Xinyang Yi, Constantine Caramanis

Figures 1–2.