Models, code, and papers for "Yu Zhou":

Cloud Computing Framework for Computer Vision Research: An Introduction

Feb 06, 2013
Yu Zhou

Cloud computing offers the potential to give scientists access to the massive computing resources often required by machine learning applications such as computer vision. This proposal aims to show which benefits the cloud can offer to medical image analysis users (including scientists, clinicians, and research institutes). Since the security and privacy of algorithms matter to most algorithm inventors, the algorithms can be hosted in the cloud so that users run them as a package without any access to view or change their internals. In other words, on the user side, users send their images to the cloud and configure the algorithm via an interface; on the cloud side, the algorithms are applied to the images and the results are returned to the user. The proposal has two parts: (1) investigate the potential of cloud computing for computer vision problems and (2) study the components of a proposed cloud-based framework for medical image analysis and develop them (depending on the length of the internship). The investigation will cover several aspects of the problem, including security, usability (for medical end users of the service), appropriate programming abstractions for vision problems, scalability, and resource requirements. In the second part, I will thoroughly study the proposed framework's components and their relations and develop them. The proposed cloud-based framework includes an integrated environment that lets scientists and clinicians access previous and current medical image analysis algorithms through a convenient user interface, without any access to the algorithms' code and procedures.
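
The send-image-and-configure contract described above can be sketched in code. This is a minimal sketch only: the `AnalysisRequest` record, the `threshold` algorithm, and the `run_algorithm` dispatcher are hypothetical placeholders, since the proposal does not specify an interface.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisRequest:
    image: list            # pixel data supplied by the user
    algorithm: str         # name of a hidden cloud-side algorithm
    params: dict = field(default_factory=dict)

# Cloud side: implementations stay private; users only see names and results.
_ALGORITHMS = {
    "threshold": lambda img, p: [1 if v >= p.get("t", 128) else 0 for v in img],
}

def run_algorithm(req: AnalysisRequest) -> list:
    """Apply the named algorithm without exposing its implementation."""
    if req.algorithm not in _ALGORITHMS:
        raise KeyError(f"unknown algorithm: {req.algorithm}")
    return _ALGORITHMS[req.algorithm](req.image, req.params)
```

The key design point is the boundary: the user supplies only data and configuration, and the algorithm registry never leaves the cloud side.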

* 3 pages 

Multi-Instance Learning by Treating Instances As Non-I.I.D. Samples

May 13, 2009
Zhi-Hua Zhou, Yu-Yin Sun, Yu-Feng Li

Multi-instance learning attempts to learn from a training set consisting of labeled bags, each containing many unlabeled instances. Previous studies typically treat the instances in the bags as independently and identically distributed. However, the instances in a bag are rarely independent, and therefore better performance can be expected if the instances are treated in a non-i.i.d. way that exploits the relations among instances. In this paper, we propose a simple yet effective multi-instance learning method, which regards each bag as a graph and uses a specific kernel to distinguish the graphs by considering the features of the nodes as well as the features of the edges that convey relations among instances. The effectiveness of the proposed method is validated by experiments.
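
A minimal sketch of the bag-as-graph idea, assuming an ε-neighborhood graph over instances, an RBF node similarity, and a crude edge-density term; the paper's actual kernel is more refined.

```python
import math

def rbf(x, y, gamma=1.0):
    # similarity between two instance feature vectors
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def edge_count(bag, eps=1.0):
    # graph edges connect instances whose distance is below eps
    n = len(bag)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if math.dist(bag[i], bag[j]) < eps
    )

def bag_kernel(bag_a, bag_b, eps=1.0, gamma=1.0):
    # node term: total pairwise similarity across the two bags
    k_node = sum(rbf(x, y, gamma) for x in bag_a for y in bag_b)
    # edge term: agreement in graph connectivity captures intra-bag relations
    k_edge = edge_count(bag_a, eps) * edge_count(bag_b, eps)
    return k_node + k_edge
```

Such a kernel can be plugged into any kernel machine (e.g. an SVM) so that bags, rather than single instances, become the objects being classified.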

* ICML, 2009 

Tunneling Neural Perception and Logic Reasoning through Abductive Learning

Feb 06, 2018
Wang-Zhou Dai, Qiu-Ling Xu, Yang Yu, Zhi-Hua Zhou

Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence. However, in current machine learning systems, the perception and reasoning modules are incompatible. Tasks requiring joint perception and reasoning ability are difficult to accomplish autonomously and still demand human intervention. Inspired by the way language experts decoded Mayan scripts by joining two abilities in an abductive manner, this paper proposes the abductive learning framework. The framework learns perception and reasoning simultaneously with the help of a trial-and-error abductive process. We present the Neural-Logical Machine as an implementation of this novel learning framework. We demonstrate that--using human-like abductive learning--the machine learns from a small set of simple hand-written equations and then generalizes well to complex equations, a feat that is beyond the capability of state-of-the-art neural network models. The abductive learning framework explores a new direction for approaching human-level learning ability.

* Corrected typos 

Learning to Generate Posters of Scientific Papers by Probabilistic Graphical Models

Feb 21, 2017
Yu-ting Qiang, Yanwei Fu, Xiao Yu, Yanwen Guo, Zhi-Hua Zhou, Leonid Sigal

Researchers often summarize their work in the form of scientific posters. Posters provide a coherent and efficient way to convey core ideas expressed in scientific papers. Generating a good scientific poster, however, is a complex and time-consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework that utilizes graphical models is proposed. Specifically, given content to display, the key elements of a good poster, including attributes of each panel and arrangements of graphical elements, are learned and inferred from data. During the inference stage, an MAP inference framework is employed to incorporate some design principles. In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster. To learn and validate our model, we collect and release a new benchmark dataset, called the NJU-Fudan Paper-Poster dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
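
A hedged sketch of what a recursive page splitting step might look like: split the page rectangle so that panel areas match requested weights, alternating the cut direction with the rectangle's longer side. The paper's algorithm chooses splits from learned attributes rather than this purely geometric rule.

```python
def split_page(rect, weights):
    """Recursively split rect = (x, y, w, h) into len(weights) panels
    whose areas are proportional to the weights."""
    x, y, w, h = rect
    if len(weights) == 1:
        return [rect]
    k = len(weights) // 2
    left, right = sum(weights[:k]), sum(weights[k:])
    frac = left / (left + right)
    if w >= h:  # cut across the longer side to avoid skinny panels
        a = (x, y, w * frac, h)
        b = (x + w * frac, y, w * (1 - frac), h)
    else:
        a = (x, y, w, h * frac)
        b = (x, y + h * frac, w, h * (1 - frac))
    return split_page(a, weights[:k]) + split_page(b, weights[k:])
```

For example, weights `[1, 1, 2]` on a 100x100 page yield three panels whose areas are 2500, 2500, and 5000.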

* 10 pages, submission to IEEE TPAMI. arXiv admin note: text overlap with arXiv:1604.01219 

On Reinforcement Learning for Full-length Game of StarCraft

Sep 23, 2018
Zhen-Jia Pang, Ruo-Ze Liu, Zhou-Yu Meng, Yi Zhang, Yang Yu, Tong Lu

StarCraft II poses a grand challenge for reinforcement learning. Its main difficulties include a huge state and action space and a long time horizon. In this paper, we investigate a hierarchical reinforcement learning approach for StarCraft II. The hierarchy involves two levels of abstraction. One is the macro-action, automatically extracted from experts' trajectories, which reduces the action space by an order of magnitude yet remains effective. The other is a two-layer hierarchical architecture, which is modular and easy to scale, enabling a curriculum transferring from simpler tasks to more complex tasks. The reinforcement training algorithm for this architecture is also investigated. On a 64x64 map with restrictive units, we achieve a winning rate of more than 99% against the difficulty level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat models, we achieve over 93% winning rate of Protoss against the most difficult non-cheating built-in AI (level-7) of Terran, trained within two days on a single machine with only 48 CPU cores and 8 K40 GPUs. The agent also shows strong generalization performance when tested against never-seen opponents, including the cheating built-in AI levels and all levels of the Zerg and Protoss built-in AI. We hope this study can shed some light on future research in large-scale reinforcement learning.
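
One plausible reading of the macro-action extraction step, sketched under the assumption that macro-actions are simply frequent fixed-length action subsequences in expert trajectories; the paper's actual procedure may differ.

```python
from collections import Counter

def extract_macros(trajectories, length=3, top_k=2):
    """Return the top_k most frequent length-`length` action subsequences;
    treating each as a single macro-action shrinks the effective action
    space the policy has to search over."""
    counts = Counter(
        tuple(traj[i:i + length])
        for traj in trajectories
        for i in range(len(traj) - length + 1)
    )
    return [macro for macro, _ in counts.most_common(top_k)]
```

The top-level policy then picks among a handful of macros (e.g. build-worker, train-army, attack) instead of hundreds of raw game actions.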


MIDAS: A Dialog Act Annotation Scheme for Open Domain Human Machine Spoken Conversations

Aug 27, 2019
Dian Yu, Zhou Yu

Dialog act prediction is an essential language comprehension task for both dialog system building and discourse analysis. Previous dialog act schemes, such as SWBD-DAMSL, are designed for human-human conversations, in which conversation partners have perfect language understanding ability. In this paper, we design a dialog act annotation scheme, MIDAS (Machine Interaction Dialog Act Scheme), targeted at open-domain human-machine conversations. MIDAS is designed to assist machines that have limited ability to understand their human partners. MIDAS has a hierarchical structure and supports multi-label annotations. We collected and annotated a large open-domain human-machine spoken conversation dataset (consisting of 24K utterances). To show the applicability of the scheme, we leverage transfer learning methods to train a multi-label dialog act prediction model, reaching an F1 score of 0.79.
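
Multi-label predictions like these are typically scored with micro-averaged F1; a self-contained sketch of that metric over per-utterance tag sets (the abstract does not state which averaging it uses).

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over multi-label sets, one set of dialog act
    tags per utterance: pool true/false positives/negatives globally."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging weights frequent tags more heavily, which suits the skewed tag distributions common in open-domain dialog.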


Experimentally detecting a quantum change point via Bayesian inference

Jan 23, 2018
Shang Yu, Chang-Jiang Huang, Jian-Shun Tang, Zhih-Ahn Jia, Yi-Tao Wang, Zhi-Jin Ke, Wei Liu, Xiao Liu, Zong-Quan Zhou, Ze-Di Cheng, Jin-Shi Xu, Yu-Chun Wu, Yuan-Yuan Zhao, Guo-Yong Xiang, Chuan-Feng Li, Guang-Can Guo, Gael Sentís, Ramon Muñoz-Tapia

Detecting a change point is a crucial task in statistics that has recently been extended to the quantum realm. A source state generator that emits a series of single photons in a default state suffers an alteration at some point and starts to emit photons in a mutated state. The problem consists in identifying the point where the change took place. In this work, we consider a learning agent that applies Bayesian inference on experimental data to solve this problem. This learning machine adjusts the measurement over each photon according to past experimental results and finds the change position in an online fashion. Our results show that the local-detection success probability can be largely improved by using such a machine learning technique. This protocol provides a tool for improvement in many applications where a sequence of identical quantum states is required.
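
A classical analogue of the Bayesian update at the heart of this scheme, assuming Bernoulli measurement outcomes with hit probability `p0` before the change and `p1` after, and a uniform prior over change positions; the quantum protocol's adaptive measurement is more involved.

```python
def changepoint_posterior(outcomes, p0, p1):
    """Posterior over change positions k (the mutated state starts at
    index k; k = len(outcomes) means no change was observed)."""
    n = len(outcomes)

    def likelihood(k):
        l = 1.0
        for i, o in enumerate(outcomes):
            p = p1 if i >= k else p0
            l *= p if o else (1 - p)
        return l

    weights = [likelihood(k) for k in range(n + 1)]
    z = sum(weights)
    return [w / z for w in weights]
```

An online agent would update this posterior after each photon and could steer the next measurement toward the currently most probable change position.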

* Phys. Rev. A 98, 040301 (2018) 

Abstractive Dialog Summarization with Semantic Scaffolds

Oct 02, 2019
Lin Yuan, Zhou Yu

The demand for abstractive dialog summarization is growing in real-world applications. For example, customer service centers or hospitals would like to summarize customer service interactions and doctor-patient interactions. However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. Directly applying previous state-of-the-art document summarization methods to dialogs exposes two significant drawbacks: informative entities such as restaurant names are difficult to preserve, and contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose the Scaffold Pointer Network (SPNet), which utilizes existing annotations on speaker role, semantic slot, and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all automatic and human evaluation metrics.
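
The entity-aware metric the paper motivates might look like the following sketch: F1 over a fixed list of critical entities, counting an entity as covered when its surface form appears in the text. The paper's actual metric is likely more sophisticated than this string match.

```python
def entity_f1(reference, summary, entities):
    """F1 over critical entities (e.g. restaurant names, times): an
    entity counts as present when its surface form appears in the text,
    case-insensitively."""
    ref = {e for e in entities if e.lower() in reference.lower()}
    hyp = {e for e in entities if e.lower() in summary.lower()}
    if not ref and not hyp:
        return 1.0  # nothing to preserve, nothing hallucinated
    tp = len(ref & hyp)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(hyp), tp / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Unlike ROUGE, this directly penalizes a summary that drops the restaurant name or invents an entity absent from the reference.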

* unpublished preprint 

Domain Adaptive Dialog Generation via Meta Learning

Jun 08, 2019
Kun Qian, Zhou Yu

Domain adaptation is an essential task in dialog system building because so many new dialog tasks are created for different needs every day. Collecting and annotating training data for these new tasks is costly, since it involves real user interactions. We propose a domain-adaptive dialog generation method based on meta-learning (DAML). DAML is an end-to-end trainable dialog system model that learns from multiple rich-resource tasks and then adapts to new domains with minimal training samples. We train a dialog system model on multiple rich-resource single-domain dialog datasets by applying the model-agnostic meta-learning algorithm to the dialog domain. The model is capable of learning a competitive dialog system for a new domain with only a few training examples in an efficient manner. The two-step gradient updates in DAML enable the model to learn general features across multiple tasks. We evaluate our method on a simulated dialog dataset and achieve state-of-the-art performance that generalizes to new tasks.
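
The two-step gradient update can be sketched with a first-order MAML step on scalar linear regression; DAML applies the full algorithm to a seq2seq dialog model, and `alpha`/`beta` here are illustrative step sizes, not the paper's settings.

```python
def fomaml_step(w, tasks, alpha=0.1, beta=0.05):
    """One meta-update: adapt w per task (inner step), then move w using
    the post-adaptation gradients (outer step, first-order MAML)."""
    def grad(w, data):  # mean-squared-error gradient for the model y = w * x
        return sum(2 * (w * x - y) * x for x, y in data) / len(data)

    meta = 0.0
    for data in tasks:
        w_fast = w - alpha * grad(w, data)  # step 1: task-specific adaptation
        meta += grad(w_fast, data)          # step 2: gradient after adaptation
    return w - beta * meta / len(tasks)
```

Iterating this update moves the shared initialization `w` toward a point from which each task can be solved with a single small inner step, which is exactly what makes few-shot adaptation to a new domain cheap.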

* Accepted as a long paper in ACL 2019 

Performance Estimation of Synthesis Flows cross Technologies using LSTMs and Transfer Learning

Nov 14, 2018
Cunxi Yu, Wang Zhou

Due to the increasing complexity of Integrated Circuits (ICs) and Systems-on-Chip (SoCs), developing high-quality synthesis flows within a short time-to-market becomes more challenging. We propose a general approach that precisely estimates the Quality-of-Result (QoR), such as delay and area, of unseen synthesis flows for specific designs. The main idea is training a Recurrent Neural Network (RNN) regressor, where the flows are inputs and QoRs are ground truth. The RNN regressor is constructed with Long Short-Term Memory (LSTM) and fully-connected layers. This approach is demonstrated with 1.2 million data points collected using 14nm, 7nm regular-voltage (RVT), and 7nm low-voltage (LVT) FinFET technologies on twelve IC designs. The accuracy of predicting the QoRs (delay and area) within one technology is ≥98.0% over ~240,000 test points. To enable accurate predictions across different technologies and different IC designs, we propose a transfer-learning approach that utilizes the model pre-trained on 14nm datasets. Our transfer-learning approach obtains estimation accuracy ≥96.3% over ~960,000 test points, using only 100 data points for training.


Sentiment Adaptive End-to-End Dialog Systems

May 13, 2018
Weiyan Shi, Zhou Yu

The end-to-end learning framework is useful for building dialog systems because of its simplicity in training and efficiency in model updating. However, current end-to-end approaches only consider user semantic inputs in learning and under-utilize other user information. We therefore propose to include user sentiment, obtained through multimodal information (acoustic, dialogic, and textual), in the end-to-end learning framework to make systems more user-adaptive and effective. We incorporate user sentiment information in both supervised and reinforcement learning settings. In both settings, adding sentiment information reduces the dialog length and improves the task success rate on a bus information search task. This work is the first attempt to incorporate multimodal user information in an adaptive end-to-end dialog system training framework, and it attains state-of-the-art performance.

* ACL 2018 

Dependency Parsing for Spoken Dialog Systems

Sep 07, 2019
Sam Davidson, Dian Yu, Zhou Yu

Dependency parsing of conversational input can play an important role in language understanding for dialog systems by identifying the relationships between entities extracted from user utterances. Additionally, effective dependency parsing can elucidate differences in language structure and usage for discourse analysis of human-human versus human-machine dialogs. However, models trained on datasets based on news articles and web data do not perform well on spoken human-machine dialog, and currently available annotation schemes do not adapt well to dialog data. Therefore, we propose the Spoken Conversation Universal Dependencies (SCUD) annotation scheme that extends the Universal Dependencies (UD) (Nivre et al., 2016) guidelines to spoken human-machine dialogs. We also provide ConvBank, a conversation dataset between humans and an open-domain conversational dialog system with SCUD annotation. Finally, to demonstrate the utility of the dataset, we train a dependency parser on the ConvBank dataset. We demonstrate that by pre-training a dependency parser on a set of larger public datasets and fine-tuning on ConvBank data, we achieved the best result, 85.05% unlabeled and 77.82% labeled attachment accuracy.

* To be presented at EMNLP 2019 

Building Task-Oriented Visual Dialog Systems Through Alternative Optimization Between Dialog Policy and Language Generation

Sep 06, 2019
Mingyang Zhou, Josh Arnold, Zhou Yu

Reinforcement learning (RL) is an effective approach to learning an optimal dialog policy for task-oriented visual dialog systems. A common practice is to apply RL on a neural sequence-to-sequence (seq2seq) framework, with the action space being the output vocabulary of the decoder. However, it is difficult to design a reward function that balances learning an effective policy against generating a natural dialog response. This paper proposes a novel framework that alternately trains an RL policy for image guessing and a supervised seq2seq model to improve dialog generation quality. We evaluate our framework on the GuessWhich task, where it achieves state-of-the-art performance in both task completion and dialog quality.


Large scale continuous-time mean-variance portfolio allocation via reinforcement learning

Jul 26, 2019
Haoran Wang, Xun Yu Zhou

We propose to solve the large-scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large-scale empirical tests using data from the S&P 500 stocks. We find that our method consistently achieves over 10% annualized returns and outperforms econometric methods and the deep RL method by large margins, for both long and medium terms of investment with monthly and daily trading.

* 15 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:1904.11392 

Continuous-Time Mean-Variance Portfolio Selection: A Reinforcement Learning Framework

May 05, 2019
Haoran Wang, Xun Yu Zhou

We approach continuous-time mean-variance (MV) portfolio selection with reinforcement learning (RL). The problem is to achieve the best tradeoff between exploration and exploitation, and is formulated as an entropy-regularized, relaxed stochastic control problem. We prove that the optimal feedback policy for this problem must be Gaussian, with time-decaying variance. We then establish connections between the entropy-regularized MV and the classical MV, including the solvability equivalence and the convergence as the exploration weighting parameter decays to zero. Finally, we prove a policy improvement theorem, based on which we devise an implementable RL algorithm. We find that our algorithm outperforms both an adaptive-control-based method and a deep neural network based algorithm by a large margin in our simulations.
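
A toy numerical check of the Gaussian-policy result: for a quadratic cost Q(u) = a u^2 + b u, the entropy-regularized (Boltzmann) policy pi(u) proportional to exp(-Q(u)/lam) is exactly Gaussian with mean -b/(2a) and variance lam/(2a), so its variance shrinks as the exploration weight lam decays. This is a one-dimensional illustration of the structure, not the paper's stochastic control derivation.

```python
import math

def boltzmann_moments(a, b, lam, lo=-20.0, hi=20.0, n=40000):
    """Numerically integrate pi(u) ~ exp(-(a*u^2 + b*u)/lam) with the
    midpoint rule and recover the policy's mean and variance."""
    du = (hi - lo) / n
    z = m1 = m2 = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * du
        w = math.exp(-(a * u * u + b * u) / lam)
        z += w          # normalizer
        m1 += w * u     # first moment (unnormalized)
        m2 += w * u * u # second moment (unnormalized)
    mean = m1 / z
    return mean, m2 / z - mean * mean
```

With a = 1, b = -2, lam = 0.5, the recovered mean is -b/(2a) = 1 and the variance is lam/(2a) = 0.25, matching the closed form.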

* 39 pages, 5 figures 

An Efficient Network Intrusion Detection System Based on Feature Selection and Ensemble Classifier

Apr 02, 2019
Yu-Yang Zhou, Guang Cheng

Since the Internet is so pervasive in daily life, countering cyber threats, especially attack detection, is a challenging area of research in the field of cyber security. Intrusion detection systems (IDSs) are essential entities in a network topology, aiming to safeguard the integrity and availability of sensitive assets in the protected systems. Although many supervised and unsupervised learning approaches from the fields of machine learning and pattern recognition have been used to increase the efficacy of IDSs, dealing with the many redundant and irrelevant features in high-dimensional datasets remains a problem for network anomaly detection. To this end, we propose a novel methodology combining the benefits of correlation-based feature selection (CFS) and the bat algorithm (BA) with an ensemble classifier based on C4.5, Random Forest (RF), and Forest by Penalizing Attributes (Forest PA), which can classify both common and rare types of attacks with high accuracy and efficiency. The experimental results, using a novel intrusion detection dataset, namely CIC-IDS2017, reveal that our CFS-BA-Ensemble method contributes more critical features and significantly outperforms individual approaches, achieving high accuracy and a low false alarm rate. Moreover, compared with the majority of existing state-of-the-art and legacy techniques, our approach exhibits better performance under several classification metrics: classification accuracy, f-measure, attack detection rate, and false alarm rate.
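
CFS ranks feature subsets by the standard merit heuristic Merit_S = k * r_cf / sqrt(k + k(k-1) * r_ff), which rewards features correlated with the class and penalizes inter-feature redundancy; a sketch, with the search (here, the bat algorithm) left out.

```python
import math

def cfs_merit(feat_class_corr, feat_feat_corr):
    """CFS merit of a subset of k features: k times the mean
    feature-class correlation, divided by sqrt(k + k*(k-1) times the
    mean pairwise feature-feature correlation)."""
    k = len(feat_class_corr)
    r_cf = sum(feat_class_corr) / k
    pairs = [
        feat_feat_corr[i][j]
        for i in range(k)
        for j in range(i + 1, k)
    ]
    r_ff = sum(pairs) / len(pairs) if pairs else 0.0
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)
```

A redundant pair of features (high inter-correlation) scores lower than an independent pair with the same class correlations, which is exactly the pressure that prunes redundant features before the ensemble is trained.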


Incorporating Structured Commonsense Knowledge in Story Completion

Nov 01, 2018
Jiaao Chen, Jianshu Chen, Zhou Yu

The ability to select an appropriate story ending is the first step towards perfect narrative comprehension. Story ending prediction requires not only the explicit clues within the context, but also implicit knowledge (such as commonsense) to construct a reasonable and consistent story. However, most previous approaches do not explicitly use background commonsense knowledge. We present a neural story ending selection model that integrates three types of information: narrative sequence, sentiment evolution, and commonsense knowledge. Experiments show that our model outperforms state-of-the-art approaches on a public dataset, the ROCStory Cloze Task, and the performance gain from adding the additional commonsense knowledge is significant.


A Theoretical Framework of Approximation Error Analysis of Evolutionary Algorithms

Oct 26, 2018
Jun He, Yu Chen, Yuren Zhou

In the empirical study of evolutionary algorithms, the solution quality is evaluated by either the fitness value or approximation error. The latter measures the fitness difference between an approximation solution and the optimal solution. Since the approximation error analysis is more convenient than the direct estimation of the fitness value, this paper focuses on approximation error analysis. However, it is straightforward to extend all related results from the approximation error to the fitness value. Although the evaluation of solution quality plays an essential role in practice, few rigorous analyses have been conducted on this topic. This paper aims at establishing a novel theoretical framework of approximation error analysis of evolutionary algorithms for discrete optimization. This framework is divided into two parts. The first part is about exact expressions of the approximation error. Two methods, Jordan form and Schur's triangularization, are presented to obtain an exact expression. The second part is about upper bounds on approximation error. Two methods, convergence rate and auxiliary matrix iteration, are proposed to estimate the upper bound. The applicability of this framework is demonstrated through several examples.


Statistical and Computational Guarantees of Lloyd's Algorithm and its Variants

Dec 07, 2016
Yu Lu, Harrison H. Zhou

Clustering is a fundamental problem in statistics and machine learning. Lloyd's algorithm, proposed in 1957, is still possibly the most widely used clustering algorithm in practice due to its simplicity and empirical performance. However, there has been little theoretical investigation of the statistical and computational guarantees of Lloyd's algorithm. This paper is an attempt to bridge this gap between practice and theory. We investigate the performance of Lloyd's algorithm on clustering sub-Gaussian mixtures. Under an appropriate initialization for labels or centers, we show that Lloyd's algorithm converges to an exponentially small clustering error after an order of $\log n$ iterations, where $n$ is the sample size. The error rate is shown to be minimax optimal. For the two-mixture case, we only require the initializer to be slightly better than a random guess. In addition, we extend Lloyd's algorithm and its analysis to community detection and crowdsourcing, two problems that have received a lot of attention recently in statistics and machine learning. Two variants of Lloyd's algorithm are proposed, respectively, for community detection and crowdsourcing. On the theoretical side, we provide statistical and computational guarantees for the two algorithms, and the results improve upon some previous signal-to-noise ratio conditions in the literature for both problems. Experimental results on simulated and real data sets demonstrate competitive performance of our algorithms compared to state-of-the-art methods.
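
For reference, Lloyd's algorithm itself is a short alternation of assignment and re-centering (one-dimensional here for brevity); as the analysis above emphasizes, the quality of the initial centers is what governs convergence.

```python
def lloyd(points, centers, iters=20):
    """Lloyd's algorithm: repeatedly assign each point to its nearest
    center, then recompute each center as its cluster's mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)
            clusters[i].append(p)
        centers = [
            sum(c) / len(c) if c else centers[i]  # keep empty clusters' centers
            for i, c in enumerate(clusters)
        ]
    return centers
```

On two well-separated 1-D clusters, even a rough initialization snaps to the cluster means within a single iteration, illustrating the paper's point that a slightly-better-than-random start suffices in the two-mixture case.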


Weak Evolvability Equals Strong Evolvability

Feb 17, 2010
Yang Yu, Zhi-Hua Zhou

An updated version will be uploaded later.

* This paper has been withdrawn by the authors 
