Supervised Machine Learning with a Novel Kernel Density Estimator

Oct 16, 2007

Yen-Jen Oyang, Darby Tien-Hao Chang, Yu-Yen Ou, Hao-Geng Hung, Chih-Peng Wu, Chien-Yu Chen

In recent years, kernel density estimation has been exploited by computer scientists to model machine learning problems. Approaches based on kernel density estimation are of interest due to their low time complexity of either $O(n)$ or $O(n \log n)$ for constructing a classifier, where $n$ is the number of sampling instances. In the design of kernel density estimators, one essential issue is how fast the pointwise mean square error (MSE) and/or the integrated mean square error (IMSE) diminishes as the number of sampling instances increases. In this article, it is shown that with the proposed kernel function it is feasible to make the pointwise MSE of the density estimator converge at $O(n^{-2/3})$ regardless of the dimension of the vector space, provided that the probability density function at the point of interest meets certain conditions.
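For context, the estimator family the abstract discusses can be illustrated with the textbook Gaussian kernel density estimator (the paper's novel kernel is not reproduced here, so this is only a minimal sketch of the standard construction):

```python
import numpy as np

def gaussian_kde(samples, x, bandwidth=0.3):
    """Classical Gaussian kernel density estimate at query points x.

    This is the textbook estimator, not the paper's novel kernel; it
    illustrates the O(n) cost per query point over n sampling instances.
    """
    samples = np.asarray(samples, dtype=float)   # shape (n, d)
    x = np.asarray(x, dtype=float)               # shape (m, d)
    n, d = samples.shape
    # Pairwise squared distances between query points and samples.
    diff = x[:, None, :] - samples[None, :, :]   # (m, n, d)
    sq = np.sum(diff * diff, axis=-1)            # (m, n)
    norm = (2 * np.pi * bandwidth**2) ** (d / 2)
    kernels = np.exp(-sq / (2 * bandwidth**2)) / norm
    return kernels.mean(axis=1)                  # average over the n samples

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(2000, 1))
est = gaussian_kde(data, np.array([[0.0]]))
# The true standard-normal density at 0 is 1/sqrt(2*pi), about 0.399.
```

With a fixed bandwidth the estimate carries a small smoothing bias; the convergence-rate question in the abstract is precisely how the bandwidth should shrink with $n$ so that the pointwise MSE diminishes as fast as possible.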

Deep metric learning aims to learn a function mapping image pixels to embedding feature vectors that model the similarity between images. The majority of current approaches are non-parametric, learning the metric space directly through the supervision of similar (pairs) or relatively similar (triplets) sets of images. A difficult challenge for training these approaches is mining informative samples of images as the metric space is learned with only the local context present within a single mini-batch. Alternative approaches use parametric metric learning to eliminate the need for sampling through supervision of images to proxies. Although this simplifies optimization, such proxy-based approaches have lagged behind in performance. In this work, we demonstrate that a standard classification network can be transformed into a variant of proxy-based metric learning that is competitive against non-parametric approaches across a wide variety of image retrieval tasks. We address key challenges in proxy-based metric learning such as performance under extreme classification and describe techniques to stabilize and learn higher dimensional embeddings. We evaluate our approach on the CAR-196, CUB-200-2011, Stanford Online Products, and In-Shop datasets for image retrieval and clustering. Finally, we show that our softmax classification approach can learn high-dimensional binary embeddings that achieve new state-of-the-art performance on all datasets evaluated with a memory footprint that is the same or smaller than competing approaches.
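The core idea, that the class weight vectors of a softmax classifier can serve as proxies and the (normalized) features as retrieval embeddings, can be sketched on toy data; everything below (the 2-D blobs, the plain softmax regression) is a hypothetical stand-in for the paper's CNN setup:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two toy "classes" of 2-D features (stand-ins for CNN embeddings).
a = rng.normal([2.0, 0.0], 0.3, size=(50, 2))
b = rng.normal([-2.0, 0.0], 0.3, size=(50, 2))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)

# Softmax classifier: each row of W acts as a per-class "proxy".
W = rng.normal(0.0, 0.1, size=(2, 2))
for _ in range(200):
    logits = X @ W.T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0               # d(cross-entropy)/d(logits)
    W -= 0.1 * (p.T @ X) / len(y)                # gradient step

# Retrieval view: L2-normalize embeddings and proxies, rank by cosine.
emb = X / np.linalg.norm(X, axis=1, keepdims=True)
proxies = W / np.linalg.norm(W, axis=1, keepdims=True)
pred = (emb @ proxies.T).argmax(axis=1)          # nearest proxy per item
accuracy = (pred == y).mean()
```

After training, cosine similarity to the nearest proxy recovers the class structure, which is the sense in which classification supervision doubles as metric supervision.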

A Low Complexity Algorithm with $O(\sqrt{T})$ Regret and Finite Constraint Violations for Online Convex Optimization with Long Term Constraints

Oct 05, 2016

Hao Yu, Michael J. Neely

MOHONE: Modeling Higher Order Network Effects in Knowledge Graphs via Network Infused Embeddings

Nov 01, 2018

Hao Yu, Vivek Kulkarni, William Wang

Parallel Restarted SGD for Non-Convex Optimization with Faster Convergence and Less Communication

Jul 17, 2018

Hao Yu, Sen Yang, Shenghuo Zhu

Orthogonal Echo State Networks and stochastic evaluations of likelihoods

Jun 13, 2017

Norbert Michael Mayer, Ying-Hao Yu

Occlusion-Model Guided Anti-Occlusion Depth Estimation in Light Field

Aug 18, 2016

Hao Zhu, Qing Wang, Jingyi Yu

Detail Preserving Depth Estimation from a Single Image Using Attention Guided Networks

Sep 03, 2018

Zhixiang Hao, Yu Li, Shaodi You, Feng Lu

LSTD: A Low-Shot Transfer Detector for Object Detection

Mar 05, 2018

Hao Chen, Yali Wang, Guoyou Wang, Yu Qiao

Online Convex Optimization with Stochastic Constraints

Aug 12, 2017

Hao Yu, Michael J. Neely, Xiaohan Wei

This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d. generated at each round and are disclosed to the decision maker only after the decision is made. This formulation arises naturally when decisions are restricted by stochastic environments or deterministic environments with noisy observations. It also includes many important problems as special cases, such as OCO with long term constraints, stochastic constrained convex optimization, and deterministic constrained convex optimization. To solve this problem, this paper proposes a new algorithm that achieves $O(\sqrt{T})$ expected regret and constraint violations and $O(\sqrt{T}\log(T))$ high probability regret and constraint violations. Experiments on a real-world data center scheduling problem further verify the performance of the new algorithm.
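The virtual-queue idea behind algorithms of this kind can be sketched in one dimension; this is a generic drift-plus-penalty style illustration, not the paper's exact algorithm, and the loss $f(x) = x^2$ with stochastic constraint $c_t - x \le 0$ is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
x, Q = 0.0, 0.0            # decision and virtual queue for the constraint
eta = 1.0 / np.sqrt(T)     # O(1/sqrt(T)) step size
xs = []
for t in range(T):
    c = 0.5 + 0.1 * rng.standard_normal()  # stochastic constraint: c - x <= 0
    # Gradient of f(x) = x^2 plus queue-weighted constraint gradient (-1).
    grad = 2.0 * x + Q * (-1.0)
    x = float(np.clip(x - eta * grad, -2.0, 2.0))
    Q = max(Q + (c - x), 0.0)              # virtual queue tracks violation
    xs.append(x)
avg_x = float(np.mean(xs[T // 2:]))
# The time-averaged decision should settle near the constrained optimum 0.5.
```

The queue $Q$ grows while the constraint is violated on average and shrinks otherwise, so it acts as an adaptively tuned dual multiplier that balances regret against constraint violation.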

CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability

Jun 28, 2017

John P. Lalor, Hao Wu, Hong Yu

Item Response Theory (IRT) allows for measuring ability of Machine Learning models as compared to a human population. However, it is difficult to create a large dataset to train the ability of deep neural network models (DNNs). We propose Crowd-Informed Fine-Tuning (CIFT) as a new training process, where a pre-trained model is fine-tuned with a specialized supplemental training set obtained via IRT model-fitting on a large set of crowdsourced response patterns. With CIFT we can leverage the specialized set of data obtained through IRT to inform parameter tuning in DNNs. We experiment with two loss functions in CIFT to represent (i) memorization of fine-tuning items and (ii) learning a probability distribution over potential labels that is similar to the crowdsourced distribution over labels to simulate crowd knowledge. Our results show that CIFT improves ability for a state-of-the-art DNN model for Recognizing Textual Entailment (RTE) tasks and is generalizable to a large-scale RTE test set.
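The two fine-tuning losses contrasted above differ only in the target distribution fed to cross-entropy; a minimal sketch (the 3-class predictions and crowd distributions below are hypothetical, not from the paper's data):

```python
import numpy as np

def cross_entropy(pred, target):
    """Cross-entropy between predicted and target label distributions,
    averaged over items."""
    eps = 1e-12
    return float(-np.mean(np.sum(target * np.log(pred + eps), axis=1)))

# Hypothetical 3-class RTE-style predictions for two items.
pred = np.array([[0.7, 0.2, 0.1],
                 [0.4, 0.5, 0.1]])
# (i) Memorization: one-hot gold labels.
hard = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
# (ii) Crowd knowledge: empirical label distributions from annotators.
soft = np.array([[0.6, 0.3, 0.1],
                 [0.3, 0.6, 0.1]])
loss_hard = cross_entropy(pred, hard)
loss_soft = cross_entropy(pred, soft)
```

Fine-tuning against the soft targets penalizes the model for being more confident than the crowd, which is the mechanism by which CIFT injects crowd knowledge.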

Sequence-based Multimodal Apprenticeship Learning For Robot Perception and Decision Making

Feb 24, 2017

Fei Han, Xue Yang, Yu Zhang, Hao Zhang

Apprenticeship learning has recently attracted wide attention due to its capability of allowing robots to learn physical tasks directly from demonstrations provided by human experts. Most previous techniques assumed that the state space is known a priori or employed simple state representations that usually suffer from perceptual aliasing. Different from previous research, we propose a novel approach named Sequence-based Multimodal Apprenticeship Learning (SMAL), which is capable of simultaneously fusing temporal information and multimodal data, and of integrating robot perception with decision making. To evaluate the SMAL approach, experiments are performed using both simulations and real-world robots in challenging search and rescue scenarios. The empirical study has validated that our SMAL approach can effectively learn plans for robots to make decisions using sequences of multimodal observations. Experimental results have also shown that SMAL outperforms the baseline methods using individual images.
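The input side of such a pipeline, turning per-timestep observations from several modalities into one observation sequence, can be sketched as simple early fusion; this per-timestep concatenation and the feature dimensions are hypothetical, not SMAL's actual fusion scheme:

```python
import numpy as np

def fuse_sequence(rgb_seq, depth_seq):
    """Fuse per-timestep multimodal features into one observation sequence.

    Early-fusion sketch: concatenate each timestep's modality feature
    vectors, preserving temporal order along axis 0.
    """
    rgb_seq = np.asarray(rgb_seq, dtype=float)
    depth_seq = np.asarray(depth_seq, dtype=float)
    assert len(rgb_seq) == len(depth_seq), "modalities must be time-aligned"
    return np.concatenate([rgb_seq, depth_seq], axis=1)

rgb = np.ones((5, 8))      # 5 timesteps of 8-D visual features
depth = np.zeros((5, 4))   # 5 timesteps of 4-D depth features
seq = fuse_sequence(rgb, depth)  # one (5, 12) multimodal sequence
```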

Exploiting Sentence Embedding for Medical Question Answering

Nov 15, 2018

Yu Hao, Xien Liu, Ji Wu, Ping Lv

DeepHTTP: Semantics-Structure Model with Attention for Anomalous HTTP Traffic Detection and Pattern Mining

Oct 30, 2018

Yuqi Yu, Hanbing Yan, Hongchao Guan, Hao Zhou

Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions

Oct 25, 2018

Minhyuk Sung, Hao Su, Ronald Yu, Leonidas Guibas

On Modular Training of Neural Acoustics-to-Word Model for LVCSR

Mar 03, 2018

Zhehuai Chen, Qi Liu, Hao Li, Kai Yu

TransG: A Generative Mixture Model for Knowledge Graph Embedding

Sep 08, 2017

Han Xiao, Minlie Huang, Yu Hao, Xiaoyan Zhu

Semantic Image Synthesis via Adversarial Learning

Jul 21, 2017

Hao Dong, Simiao Yu, Chao Wu, Yike Guo

A Binary Convolutional Encoder-decoder Network for Real-time Natural Scene Text Processing

Dec 12, 2016

Zichuan Liu, Yixing Li, Fengbo Ren, Hao Yu
