Supervised Machine Learning with a Novel Kernel Density Estimator

Oct 16, 2007

Yen-Jen Oyang, Darby Tien-Hao Chang, Yu-Yen Ou, Hao-Geng Hung, Chih-Peng Wu, Chien-Yu Chen

In recent years, kernel density estimation has been exploited by computer scientists to model machine learning problems. Approaches based on kernel density estimation are of interest due to the low time complexity, either O(n) or O(n log n), of constructing a classifier, where n is the number of sampled instances. In designing a kernel density estimator, one essential issue is how fast the pointwise mean square error (MSE) and/or the integrated mean square error (IMSE) diminish as the number of sampled instances increases. In this article, it is shown that with the proposed kernel function the pointwise MSE of the density estimator can be made to converge at rate O(n^-2/3) regardless of the dimension of the vector space, provided that the probability density function at the point of interest meets certain conditions.
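
To illustrate the general idea of a KDE-based classifier (not the paper's novel kernel), the sketch below uses a standard Gaussian kernel and invented sample data: "training" amounts to storing the samples, which is where the low construction cost comes from.

```python
import math

def gaussian_kernel(u):
    # Standard Gaussian kernel; stands in here for the paper's proposed kernel.
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, samples, h):
    # Kernel density estimate at x from the samples, with bandwidth h.
    n = len(samples)
    return sum(gaussian_kernel((x - s) / h) for s in samples) / (n * h)

def classify(x, class_samples, h=0.5):
    # "Training" is just storing the per-class samples; prediction
    # compares prior-weighted density estimates at the query point.
    total = sum(len(s) for s in class_samples.values())
    scores = {c: kde(x, s, h) * len(s) / total
              for c, s in class_samples.items()}
    return max(scores, key=scores.get)

data = {"a": [0.0, 0.2, -0.1, 0.1], "b": [2.0, 2.1, 1.9, 2.2]}
print(classify(0.05, data))   # -> a
print(classify(2.05, data))   # -> b
```

The bandwidth h and the sample data are illustrative; the convergence-rate result above concerns how the estimation error of such a density estimate shrinks with n.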

* The new version includes an additional theorem, Theorem 3

A Low Complexity Algorithm with $O(\sqrt{T})$ Regret and Finite Constraint Violations for Online Convex Optimization with Long Term Constraints

Oct 05, 2016

Hao Yu, Michael J. Neely

* In the previous version, both the regret bound and the constraint violation bound are $O(\sqrt{T})$. The current version improves the constraint violation bound from $O(\sqrt{T})$ to $O(1)$, i.e., a finite constant that is independent of T, while preserving the same $O(\sqrt{T})$ regret bound

Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers

Jan 10, 2019

Daniel Liu, Ronald Yu, Hao Su

* 8 pages, 3 figures, 5 tables

MOHONE: Modeling Higher Order Network Effects in Knowledge Graphs via Network Infused Embeddings

Nov 01, 2018

Hao Yu, Vivek Kulkarni, William Wang

Parallel Restarted SGD for Non-Convex Optimization with Faster Convergence and Less Communication

Jul 17, 2018

Hao Yu, Sen Yang, Shenghuo Zhu

Orthogonal Echo State Networks and stochastic evaluations of likelihoods

Jun 13, 2017

Norbert Michael Mayer, Ying-Hao Yu

* Cogn Comput (2017) 9:379-390

Occlusion-Model Guided Anti-Occlusion Depth Estimation in Light Field

Aug 18, 2016

Hao Zhu, Qing Wang, Jingyi Yu

* 19 pages, 13 figures, pdflatex

Detail Preserving Depth Estimation from a Single Image Using Attention Guided Networks

Sep 03, 2018

Zhixiang Hao, Yu Li, Shaodi You, Feng Lu

* Published at IEEE International Conference on 3D Vision (3DV) 2018

LSTD: A Low-Shot Transfer Detector for Object Detection

Mar 05, 2018

Hao Chen, Yali Wang, Guoyou Wang, Yu Qiao

* Accepted by AAAI2018

Online Convex Optimization with Stochastic Constraints

Aug 12, 2017

Hao Yu, Michael J. Neely, Xiaohan Wei

This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d. generated at each round and are disclosed to the decision maker only after the decision is made. This formulation arises naturally when decisions are restricted by stochastic environments or deterministic environments with noisy observations. It also includes many important problems as special cases, such as OCO with long term constraints, stochastic constrained convex optimization, and deterministic constrained convex optimization. To solve this problem, this paper proposes a new algorithm that achieves $O(\sqrt{T})$ expected regret and constraint violations and $O(\sqrt{T}\log(T))$ high probability regret and constraint violations. Experiments on a real-world data center scheduling problem further verify the performance of the new algorithm.
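
The flavor of such algorithms can be illustrated with a virtual-queue update on a toy one-dimensional problem. This is a sketch of the general idea (a queue tracking accumulated constraint violation serves as an adaptive dual multiplier), with invented losses and parameters, not the paper's exact algorithm:

```python
import random

def project(x, lo=-2.0, hi=2.0):
    # Euclidean projection onto the interval [lo, hi].
    return max(lo, min(hi, x))

def run(T=2000, b=0.5, seed=0):
    # Toy instance: stochastic loss f_t(x) = (x - c_t)^2 with
    # c_t ~ Uniform(0.4, 1.6), and constraint g_t(x) = x - b <= 0.
    rng = random.Random(seed)
    eta = 1.0 / (T ** 0.5)               # O(1/sqrt(T)) step size
    x, Q, total_violation = 0.0, 0.0, 0.0
    for _ in range(T):
        c = rng.uniform(0.4, 1.6)
        grad = 2.0 * (x - c) + Q         # d/dx [ f_t(x) + Q * g_t(x) ]
        x = project(x - eta * grad)
        g = x - b
        Q = max(0.0, Q + g)              # virtual queue update
        total_violation += g
    return x, total_violation

x_final, violation = run()
```

Because the queue only ever exceeds the running sum of violations, the cumulative violation stays bounded by the final queue size, which settles near its equilibrium value; the iterate hovers near the constrained optimum x = b even though the unconstrained minimizer lies above it.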

* This paper extends our own ArXiv reports arXiv:1604.02218 (by considering more general stochastic functional constraints) and arXiv:1702.04783 (by relaxing a deterministic Slater-type assumption to a weaker stochastic Slater assumption; refining proofs; and providing high probability performance guarantees). See Introduction section (especially footnotes 1 and 2) for more details of distinctions

CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability

Jun 28, 2017

John P. Lalor, Hao Wu, Hong Yu

Item Response Theory (IRT) allows for measuring the ability of machine learning models as compared to a human population. However, it is difficult to create a large dataset to train the ability of deep neural network models (DNNs). We propose Crowd-Informed Fine-Tuning (CIFT) as a new training process, where a pre-trained model is fine-tuned with a specialized supplemental training set obtained via IRT model-fitting on a large set of crowdsourced response patterns. With CIFT we can leverage the specialized set of data obtained through IRT to inform parameter tuning in DNNs. We experiment with two loss functions in CIFT to represent (i) memorization of fine-tuning items and (ii) learning a probability distribution over potential labels that is similar to the crowdsourced distribution over labels, to simulate crowd knowledge. Our results show that CIFT improves ability for a state-of-the-art DNN model on Recognizing Textual Entailment (RTE) tasks and is generalizable to a large-scale RTE test set.
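
The two loss targets can be sketched with a toy example. The model output and crowd distribution below are invented for illustration of a 3-way RTE item: target (i) is the one-hot gold label (memorization), while target (ii) is the crowdsourced label distribution used as a soft target:

```python
import math

def cross_entropy(pred, target):
    # H(target, pred): cross-entropy of the prediction against a target
    # distribution; with a one-hot target this reduces to -log p(gold).
    eps = 1e-12
    return -sum(t * math.log(p + eps) for t, p in zip(target, pred))

# Hypothetical model output over (entailment, neutral, contradiction).
pred = [0.6, 0.3, 0.1]
gold_one_hot = [1.0, 0.0, 0.0]   # (i) memorization target
crowd_dist = [0.7, 0.2, 0.1]     # (ii) crowd-distribution target

loss_memorize = cross_entropy(pred, gold_one_hot)   # -ln(0.6) ~ 0.511
loss_crowd = cross_entropy(pred, crowd_dist)        # ~ 0.829
```

Minimizing the second loss pulls the model's output toward the full crowd distribution rather than toward a single hard label.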

* 8 pages plus references, 3 tables, 2 figures

Sequence-based Multimodal Apprenticeship Learning For Robot Perception and Decision Making

Feb 24, 2017

Fei Han, Xue Yang, Yu Zhang, Hao Zhang

Apprenticeship learning has recently attracted wide attention due to its capability of allowing robots to learn physical tasks directly from demonstrations provided by human experts. Most previous techniques assumed that the state space is known a priori or employed simple state representations that usually suffer from perceptual aliasing. Different from previous research, we propose a novel approach named Sequence-based Multimodal Apprenticeship Learning (SMAL), which is capable of simultaneously fusing temporal information and multimodal data, and of integrating robot perception with decision making. To evaluate the SMAL approach, experiments are performed using both simulations and real-world robots in challenging search and rescue scenarios. The empirical study has validated that our SMAL approach can effectively learn plans for robots to make decisions using sequences of multimodal observations. Experimental results have also shown that SMAL outperforms baseline methods that use individual images.

* 8 pages, 6 figures, accepted by ICRA'17

Exploiting Sentence Embedding for Medical Question Answering

Nov 15, 2018

Yu Hao, Xien Liu, Ji Wu, Ping Lv

* 8 pages

DeepHTTP: Semantics-Structure Model with Attention for Anomalous HTTP Traffic Detection and Pattern Mining

Oct 30, 2018

Yuqi Yu, Hanbing Yan, Hongchao Guan, Hao Zhou

Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions

Oct 25, 2018

Minhyuk Sung, Hao Su, Ronald Yu, Leonidas Guibas

* NIPS 2018

On Modular Training of Neural Acoustics-to-Word Model for LVCSR

Mar 03, 2018

Zhehuai Chen, Qi Liu, Hao Li, Kai Yu

End-to-end (E2E) automatic speech recognition (ASR) systems directly map acoustics to words using a unified model. Previous works mostly focus on E2E training of a single model which integrates the acoustic and language model into a whole. Although E2E training benefits from sequence modeling and simplified decoding pipelines, a large amount of transcribed acoustic data is usually required, and traditional acoustic and language modeling techniques cannot be utilized. In this paper, a novel modular training framework for E2E ASR is proposed that separately trains neural acoustic and language models during the training stage, while still performing end-to-end inference in the decoding stage. Here, an acoustics-to-phoneme model (A2P) and a phoneme-to-word model (P2W) are trained using acoustic data and text data, respectively. A phone synchronous decoding (PSD) module is inserted between A2P and P2W to reduce sequence lengths without precision loss. Finally, the modules are integrated into an acoustics-to-word model (A2W) and jointly optimized using acoustic data to retain the advantage of sequence modeling. Experiments on a 300-hour Switchboard task show significant improvement over the direct A2W model. Efficiency in both training and decoding also benefits from the proposed method.
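
The role of the PSD module between the two modules can be sketched as a CTC-style collapse of frame-level phoneme labels. The frame labels and toy lexicon below are invented for illustration; in the actual system both A2P and P2W are neural models:

```python
def phone_synchronous_decode(frame_phones, blank="<b>"):
    # PSD sketch: drop blanks and merge consecutive repeats, shortening
    # the sequence that the phoneme-to-word (P2W) stage must consume.
    out, prev = [], None
    for p in frame_phones:
        if p != blank and p != prev:
            out.append(p)
        prev = p
    return out

# Hypothetical frame-level A2P output for one word; a toy pronunciation
# lexicon stands in for the neural P2W module.
a2p_frames = ["<b>", "k", "k", "<b>", "ae", "ae", "t", "t", "<b>"]
lexicon = {("k", "ae", "t"): "cat"}

phones = phone_synchronous_decode(a2p_frames)
word = lexicon.get(tuple(phones), "<unk>")
```

Collapsing nine frames to three phonemes is what "reduces sequence lengths without precision loss" refers to: the phoneme identities survive, only the frame-rate redundancy is removed.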

* accepted by ICASSP2018

TransG: A Generative Mixture Model for Knowledge Graph Embedding

Sep 08, 2017

Han Xiao, Minlie Huang, Yu Hao, Xiaoyan Zhu

Semantic Image Synthesis via Adversarial Learning

Jul 21, 2017

Hao Dong, Simiao Yu, Chao Wu, Yike Guo

* Accepted to ICCV 2017
