Autoencoders and Generative Adversarial Networks for Anomaly Detection for Sequences

Jan 15, 2019

Stephanie Ger, Diego Klabjan


Layer Flexible Adaptive Computational Time for Recurrent Neural Networks

Dec 14, 2018

Lida Zhang, Diego Klabjan


Dynamic Prediction Length for Time Series with Sequence to Sequence Networks

Jul 02, 2018

Mark Harmon, Diego Klabjan


Online Adaptive Machine Learning Based Algorithm for Implied Volatility Surface Modeling

Jun 07, 2018

Yaxiong Zeng, Diego Klabjan


* 34 pages


Competitive Multi-agent Inverse Reinforcement Learning with Sub-optimal Demonstrations

Jun 05, 2018

Xingyu Wang, Diego Klabjan


* 31 pages, to be presented at ICML 2018


Bayesian active learning for choice models with deep Gaussian processes

May 04, 2018

Jie Yang, Diego Klabjan


k-Nearest Neighbors by Means of Sequence to Sequence Deep Neural Networks and Memory Networks

May 02, 2018

Yiming Xu, Diego Klabjan



A Stochastic Large-scale Machine Learning Algorithm for Distributed Features and Observations

Mar 29, 2018

Biyi Fang, Diego Klabjan


* 11 figures, 41 pages


* 40 pages (including Appendix), 3 tables, 3 figures


We present the first model and algorithm for L1-norm kernel PCA. While L2-norm kernel PCA has been widely studied, there has been no work on L1-norm kernel PCA. For this non-convex and non-smooth problem, we offer geometric insights through reformulations and present an efficient algorithm to which the kernel trick is applicable. To attest to the efficiency of the algorithm, we provide a convergence analysis, including a linear rate of convergence. Moreover, we prove that the output of our algorithm is a locally optimal solution to the L1-norm kernel PCA problem. We also numerically demonstrate its robustness when extracting principal components in the presence of influential outliers, as well as its runtime comparability to L2-norm kernel PCA. Lastly, we introduce its application to outlier detection and show that the L1-norm kernel PCA based model outperforms, especially for high-dimensional data.
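The kernel trick applies here because the principal direction can be written as $w = \sum_i \alpha_i \phi(x_i)$, so every projection reduces to products with the Gram matrix. As a rough illustration of that idea only (not the paper's algorithm), the sketch below runs a sign-flipping fixed-point update for a single L1-norm kernel principal component; the function name and the linear-kernel toy data are invented for this example.

```python
import numpy as np

def l1_kernel_pca_component(K, n_iter=100, seed=0):
    """Illustrative sketch: extract one L1-norm kernel principal component.

    The component is represented as w = sum_i alpha_i * phi(x_i), so all
    computations use only the Gram matrix K (the kernel trick). The loop
    greedily increases sum_j |<w, phi(x_j)>| subject to ||w|| = 1 by a
    sign-flipping fixed-point update.
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    alpha = rng.standard_normal(n)
    alpha /= np.sqrt(alpha @ K @ alpha)      # normalize so ||w|| = 1
    for _ in range(n_iter):
        s = np.sign(K @ alpha)               # signs of projections <w, phi(x_j)>
        s[s == 0] = 1.0
        alpha = s / np.sqrt(s @ K @ s)       # best w for these signs: sum_j s_j phi(x_j)
    return alpha

# Toy usage with a linear kernel on data containing one influential outlier
X = np.array([[1.0, 0.1], [2.0, -0.1], [3.0, 0.0], [0.0, 10.0]])
K = X @ X.T
alpha = l1_kernel_pca_component(K)
scores = K @ alpha                           # projection of each point onto the component
```

Because each update depends only on the signs of the projections, not their magnitudes, a single large outlier influences the component far less than in the L2-norm formulation, which is the intuition behind the robustness claim above.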

Convergence Analysis of Batch Normalization for Deep Neural Nets

May 22, 2017

Yintai Ma, Diego Klabjan


Optimization for Large-Scale Machine Learning with Distributed Features and Observations

Apr 15, 2017

Alexandros Nathan, Diego Klabjan


Many activation functions have been proposed in the past, but selecting an adequate one requires trial and error. We propose a new methodology for designing activation functions within a neural network at each layer. We call this technique an "activation ensemble" because it allows the use of multiple activation functions at each layer. This is done by introducing additional variables, $\alpha$, at each activation layer of a network so that multiple activation functions can be active at each neuron. By design, activations with larger $\alpha$ values at a neuron contribute with larger magnitude; hence, those higher-magnitude activations are "chosen" by the network. We implement activation ensembles on a variety of datasets using an array of feedforward and convolutional neural networks. The activation ensemble achieves superior results compared to traditional techniques. In addition, because of the flexibility of this methodology, we explore activation functions and the features they capture in greater depth.
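As a loose sketch of the mechanism described above (not the paper's implementation), the layer below mixes several candidate activations per neuron, weighted by softmax-normalized $\alpha$ parameters; the function names and the choice of three candidate activations are assumptions for illustration. In a real network the $\alpha$ values would be trained jointly with the other weights.

```python
import numpy as np

# Hypothetical minimal sketch of an "activation ensemble" layer: each
# neuron blends several activation functions, weighted by per-neuron
# alpha parameters (fixed here for illustration; learnable in practice).
def relu(z):    return np.maximum(z, 0.0)
def tanh(z):    return np.tanh(z)
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

ACTIVATIONS = [relu, tanh, sigmoid]

def activation_ensemble(z, alpha):
    """z: pre-activations, shape (batch, units).
    alpha: mixing parameters, shape (n_activations, units)."""
    w = np.exp(alpha - alpha.max(axis=0))          # softmax over the
    w /= w.sum(axis=0)                             # activations, per neuron
    outs = np.stack([f(z) for f in ACTIVATIONS])   # (n_act, batch, units)
    return np.einsum('au,abu->bu', w, outs)        # weighted sum per neuron

z = np.array([[-1.0, 2.0], [0.5, -0.5]])
alpha = np.zeros((3, 2))                           # equal weights of 1/3 each
out = activation_ensemble(z, alpha)                # shape (2, 2)
```

With all $\alpha$ equal, every activation contributes equally; as training pushes some $\alpha$ values up, the corresponding activations dominate the sum, which is the "choosing" behavior the abstract describes.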


Temporal Topic Analysis with Endogenous and Exogenous Processes

Jul 04, 2016

Baiyang Wang, Diego Klabjan
