Models, code, and papers for "Yue Wang":

Respect Your Emotion: Human-Multi-Robot Teaming based on Regret Decision Model

Sep 18, 2019
Longsheng Jiang, Yue Wang

When modeling human decision-making in the context of human-robot teaming, the emotional aspect of the human is often ignored. Nevertheless, the influence of emotion is, in some cases, not only undeniable but also beneficial. This work studies the human-like characteristics brought by the regret emotion to one-human-multi-robot teaming in the application of domain search. In this application, the task-management load is outsourced to the robots to reduce the human's workload, freeing the human for more important work. The regret decision model is first used by each robot to decide whether to request human service, and is then extended to optimally queue the requests from multiple robots. For the movement of the robots in the domain search, we design a path-planning algorithm based on dynamic programming for each robot. Simulations show that the human-like characteristics, namely risk-seeking and risk-aversion, indeed bring appealing effects for balancing workload and performance in the human-multi-robot team.

* IEEE 15th International Conference on Automation Science and Engineering (CASE), Vancouver, BC, Canada, August 22-26, 2019 
* 8 pages, 4 figures, conference 
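A regret-theoretic choice rule of the kind described above can be sketched as follows. This is an illustrative assumption, not the paper's exact model: each option is scored by its utility plus a nonlinear regret/rejoice term against the best forgone alternative, with a hypothetical sensitivity parameter `beta`.

```python
import numpy as np

def regret_value(u_choice, u_alt, beta=0.3):
    """Regret-adjusted value of a choice: its utility plus a rejoice
    (diff > 0) or regret (diff < 0) term against the forgone alternative,
    amplified nonlinearly. `beta` is an illustrative parameter."""
    diff = u_choice - u_alt
    return u_choice + diff + beta * np.sign(diff) * diff ** 2

def choose(utilities):
    """Pick the option whose regret-adjusted value against the best
    alternative is highest."""
    utilities = np.asarray(utilities, dtype=float)
    values = []
    for i, u in enumerate(utilities):
        alt = np.max(np.delete(utilities, i)) if len(utilities) > 1 else 0.0
        values.append(regret_value(u, alt))
    return int(np.argmax(values))
```

With `beta > 0` the quadratic term exaggerates large regrets, which is one way risk-averse behavior can emerge from such a model.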

EmotionX-HSU: Adopting Pre-trained BERT for Emotion Classification

Jul 23, 2019
Linkai Luo, Yue Wang

This paper describes our approach to EmotionX-2019, the shared task of SocialNLP 2019. To detect the emotion of each utterance in two datasets, drawn from the TV show Friends and the Facebook chat log EmotionPush, we propose a two-step deep-learning methodology: (i) encode each utterance into a sequence of vectors that represent its meaning; and (ii) use a simple softmax classifier to predict which of four candidate emotions the utterance carries. Because labeled utterances are scarce, we utilise a well-trained model, BERT, to transfer part of the knowledge learned from a large corpus to our model. We then fine-tune our model until it fits the in-domain data well. The performance of the proposed model is evaluated by micro-F1 scores: 79.1% and 86.2% on the test sets of Friends and EmotionPush, respectively. Our model ranks 3rd among 11 submissions.
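The second step, a simple softmax classifier over utterance encodings, can be sketched as below. The random vectors stand in for BERT utterance encodings, and the dimensions and learning rate are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stand-ins for BERT utterance encodings (dim 8) and 4 emotion labels.
X = rng.normal(size=(64, 8))
y = rng.integers(0, 4, size=64)

# Step (ii): a simple softmax classifier trained by gradient descent
# on the cross-entropy loss.
W = np.zeros((8, 4))
for _ in range(200):
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0        # gradient of cross-entropy wrt logits
    W -= 0.1 * (X.T @ p) / len(y)

probs = softmax(X @ W)                    # per-utterance emotion distribution
```

In the actual system the encoder would be fine-tuned jointly with this head rather than frozen.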

Deep Learning for Genomics: A Concise Overview

May 08, 2018
Tianwei Yue, Haohan Wang

Advancements in genomic research, such as high-throughput sequencing techniques, have driven modern genomic studies into "big data" disciplines. This data explosion constantly challenges the conventional methods used in genomics. In parallel with the urgent demand for robust algorithms, deep learning has succeeded in a variety of fields such as vision, speech, and text processing. Yet genomics poses unique challenges for deep learning, since we expect from it a superhuman intelligence that explores beyond our knowledge to interpret the genome. A powerful deep learning model should rely on insightful utilization of task-specific knowledge. In this paper, we briefly discuss the strengths of different deep learning models from a genomic perspective, so as to fit each particular task with a proper deep architecture, and remark on practical considerations in developing modern deep learning architectures for genomics. We also provide a concise review of deep learning applications in various aspects of genomic research, and point out potential opportunities and obstacles for future genomics applications.

* Invited chapter for Springer Book: Handbook of Deep Learning Applications 

Opinion Recommendation using Neural Memory Model

Feb 06, 2017
Zhongqing Wang, Yue Zhang

We present opinion recommendation, a novel task of jointly predicting a custom review, together with a rating score, that a certain user would give to a certain product or service, given the existing reviews and rating scores given to that product or service by other users, and the reviews the user has given to other products and services. A characteristic of opinion recommendation is its reliance on multiple data sources for multi-task joint learning, which is a strength of neural models. We use a single neural network to model users and products, capturing their correlation and generating customised product representations using a deep memory network, from which customised ratings and reviews are constructed jointly. Results show that our opinion recommendation system gives ratings closer to real user ratings than Yelp's own ratings are, and our methods give better results than several pipeline baselines that use state-of-the-art sentiment rating and summarization systems.

Learning Structural Changes of Gaussian Graphical Models in Controlled Experiments

Mar 15, 2012
Bai Zhang, Yue Wang

Graphical models are widely used in scientific and engineering research to represent conditional independence structures between random variables. In many controlled experiments, environmental changes or external stimuli can alter the conditional dependence between the random variables, potentially producing significant structural changes in the corresponding graphical models. It is therefore of great importance to be able to detect such structural changes from data, so as to gain novel insights into where and how the structural changes take place and to help the system adapt to the new environment. Here we report an effective learning strategy to extract structural changes in Gaussian graphical models using l1-regularization-based convex optimization. We discuss the properties of the problem formulation and introduce an efficient implementation based on the block coordinate descent algorithm. We demonstrate the principle of the approach on a numerical simulation experiment, and then apply the algorithm to the modeling of gene regulatory networks under different conditions, obtaining promising and biologically plausible results.

* Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010) 
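As an illustrative sketch (not the paper's joint formulation, which estimates the change directly), one can fit a sparse precision matrix per condition with off-the-shelf l1-regularized covariance estimation and difference them; large entries in the difference flag where the conditional dependence changed:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

def estimate_precision(X, alpha=0.1):
    """Sparse precision (inverse covariance) estimate via l1-regularized
    maximum likelihood; nonzero off-diagonals are edges of the graph."""
    model = GraphicalLasso(alpha=alpha).fit(X)
    return model.precision_

# Condition A: x2 depends on x0; condition B: that edge is absent.
n = 500
A = rng.normal(size=(n, 3)); A[:, 2] += 0.8 * A[:, 0]
B = rng.normal(size=(n, 3))

# Entries with large magnitude mark changed conditional dependencies.
diff = estimate_precision(A) - estimate_precision(B)
```

Differencing two separate estimates is a crude baseline; joint estimation, as in the paper, shares information across conditions and is more statistically efficient.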

PRNet: Self-Supervised Learning for Partial-to-Partial Registration

Oct 29, 2019
Yue Wang, Justin M. Solomon

We present a simple, flexible, and general framework titled Partial Registration Network (PRNet), for partial-to-partial point cloud registration. Inspired by recently-proposed learning-based methods for registration, we use deep networks to tackle non-convexity of the alignment and partial correspondence problems. While previous learning-based methods assume the entire shape is visible, PRNet is suitable for partial-to-partial registration, outperforming PointNetLK, DCP, and non-learning methods on synthetic data. PRNet is self-supervised, jointly learning an appropriate geometric representation, a keypoint detector that finds points in common between partial views, and keypoint-to-keypoint correspondences. We show PRNet predicts keypoints and correspondences consistently across views and objects. Furthermore, the learned representation is transferable to classification.

* NeurIPS 2019 

Deep Closest Point: Learning Representations for Point Cloud Registration

May 08, 2019
Yue Wang, Justin M. Solomon

Point cloud registration is a key problem for computer vision applied to robotics, medical imaging, and other applications. This problem involves finding a rigid transformation from one point cloud into another so that they align. Iterative Closest Point (ICP) and its variants provide simple and easily-implemented iterative methods for this task, but these algorithms can converge to spurious local optima. To address local optima and other difficulties in the ICP pipeline, we propose a learning-based method, titled Deep Closest Point (DCP), inspired by recent techniques in computer vision and natural language processing. Our model consists of three parts: a point cloud embedding network; an attention-based module combined with a pointer generation layer to approximate combinatorial matching; and a differentiable singular value decomposition (SVD) layer to extract the final rigid transformation. We train our model end-to-end on the ModelNet40 dataset and show in several settings that it performs better than ICP, its variants (e.g., Go-ICP, FGR), and the recently-proposed learning-based method PointNetLK. Beyond providing a state-of-the-art registration technique, we evaluate the suitability of our learned features transferred to unseen objects. We also provide preliminary analysis of our learned model to help understand whether domain-specific and/or global features facilitate rigid registration.
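The closed form that the SVD layer computes for the final rigid transformation is the classical Kabsch/Procrustes solution; a minimal sketch, assuming correspondences are already known:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (given point-to-point correspondences), via the SVD-based
    Kabsch/Procrustes solution — the same closed form a differentiable
    SVD layer evaluates on soft correspondences."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In DCP the correspondences are soft (produced by the attention/pointer module), but the SVD step is the same, which is what makes the whole pipeline end-to-end differentiable.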

The Scaling Limit of High-Dimensional Online Independent Component Analysis

Nov 07, 2017
Chuang Wang, Yue M. Lu

We analyze the dynamics of an online algorithm for independent component analysis in the high-dimensional scaling limit. As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measure of the target feature vector and the estimates provided by the algorithm will converge weakly to a deterministic measure-valued process that can be characterized as the unique solution of a nonlinear PDE. Numerical solutions of this PDE, which involves two spatial variables and one time variable, can be efficiently obtained. These solutions provide detailed information about the performance of the ICA algorithm, as many practical performance metrics are functionals of the joint empirical measures. Numerical simulations show that our asymptotic analysis is accurate even for moderate dimensions. In addition to providing a tool for understanding the performance of the algorithm, our PDE analysis also provides useful insight. In particular, in the high-dimensional limit, the original coupled dynamics associated with the algorithm will be asymptotically "decoupled", with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent. Exploiting this insight to design new algorithms for achieving optimal trade-offs between computational and statistical efficiency may prove an interesting line of future research.

* 10 pages, 3 figures, 31st Conference on Neural Information Processing Systems (NIPS 2017) 

Using Dynamic Embeddings to Improve Static Embeddings

Nov 07, 2019
Yile Wang, Leyang Cui, Yue Zhang

How to build high-quality word embeddings is a fundamental research question in the field of natural language processing. Traditional methods such as Skip-Gram and Continuous Bag-of-Words learn static embeddings by training lookup tables that translate words into dense vectors. Static embeddings are directly useful for solving lexical semantics tasks, and can be used as input representations for downstream problems. Recently, contextualized embeddings such as BERT have been shown to be more effective than static embeddings as NLP input embeddings. Such embeddings are dynamic, calculated according to a sentential context using a network structure. One limitation of dynamic embeddings, however, is that they cannot be used without a sentence-level context. We explore the advantages of dynamic embeddings for training static embeddings, using contextualized embeddings to facilitate the training of static embedding lookup tables. Results show that the resulting embeddings outperform existing static embedding methods on various lexical semantics tasks.
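One simple way to distill dynamic embeddings into a static table, offered here as an illustrative baseline rather than the paper's training method, is to average each word's contextualized vectors over all the contexts it appears in:

```python
import numpy as np
from collections import defaultdict

def distill_static(contextual_batches):
    """Build a static lookup table by averaging each token's
    contextualized vectors across sentences. `contextual_batches` is an
    iterable of (tokens, vectors) pairs, one per sentence, where vectors
    would come from a contextualized encoder such as BERT."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for tokens, vectors in contextual_batches:
        for tok, vec in zip(tokens, vectors):
            sums[tok] = vec if sums[tok] is None else sums[tok] + vec
            counts[tok] += 1
    return {tok: sums[tok] / counts[tok] for tok in sums}
```

Averaging collapses a word's senses into one vector; the appeal of training the lookup table against the contextual vectors, as the paper does, is that the objective can weight contexts more carefully than a plain mean.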

Syntax-Enhanced Self-Attention-Based Semantic Role Labeling

Oct 24, 2019
Yue Zhang, Rui Wang, Luo Si

As a fundamental NLP task, semantic role labeling (SRL) aims to discover the semantic roles of each predicate within a sentence. This paper investigates how to incorporate syntactic knowledge into the SRL task effectively. We present different approaches to encoding syntactic information derived from dependency trees of different quality and representations; we propose a syntax-enhanced self-attention model and compare it with two other strong baseline methods; and we conduct experiments with newly published deep contextualized word representations as well. The experimental results demonstrate that, with proper incorporation of high-quality syntactic information, our model achieves new state-of-the-art performance on the Chinese SRL task on the CoNLL-2009 dataset.

Enhancing Model Interpretability and Accuracy for Disease Progression Prediction via Phenotype-Based Patient Similarity Learning

Sep 26, 2019
Yue Wang, Tong Wu, Yunlong Wang, Gao Wang

Models have been proposed to extract temporal patterns from longitudinal electronic health records (EHR) for clinical predictive models. However, the common relations among patients (e.g., receiving the same medical treatments) are rarely considered. In this paper, we propose to learn patient-similarity features as phenotypes from the aggregated patient-medical-service matrix using non-negative matrix factorization. On real-world medical claim data, we show that the learned phenotypes are coherent within each group, and are also explanatory and indicative of targeted diseases. We conducted experiments to predict diagnoses for Chronic Lymphocytic Leukemia (CLL) patients. Results show that the phenotype-based similarity features improve prediction over multiple baselines, including logistic regression, random forest, convolutional neural networks, and more.

* 12 pages, accepted by Pacific Symposium on Biocomputing (PSB) 2020 
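The phenotype-learning step can be sketched with off-the-shelf non-negative matrix factorization; the matrix below is synthetic stand-in data, and the number of phenotypes is an arbitrary assumption:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Stand-in for an aggregated patient x medical-service count matrix.
V = rng.poisson(2.0, size=(30, 12)).astype(float)

# Factorize V ≈ W @ H. W holds each patient's loadings on k latent
# phenotypes (usable as similarity features); H defines each phenotype
# over services. Non-negativity keeps both factors interpretable.
model = NMF(n_components=4, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)
H = model.components_
```

Rows of `W` can then be fed to a downstream classifier (e.g., logistic regression) as the phenotype-based similarity features.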

X-LineNet: Detecting Aircraft in Remote Sensing Images by a pair of Intersecting Line Segments

Jul 29, 2019
Haoran Wei, Wang Bing, Zhang Yue

In the field of aircraft detection, tremendous progress has been made, motivated by the development of deep convolutional neural networks (DCNNs). At present, most state-of-the-art models based on DCNNs are top-down approaches that make wide use of the anchor mechanism. Their high accuracy relies on enumerating massive potential object locations in the form of rectangular bounding boxes, which is wasteful and imprecise. In this paper, we present a novel aircraft detection model that works bottom-up: it formulates the task as detecting two intersecting line segments inside each target and grouping them, hence the name X-LineNet. By learning finer-grained visual grammar information about aircraft, X-LineNet produces detection results with more concrete details and higher accuracy. Building on these advantages, we design a novel form of detection result, the pentagonal mask, which has less redundancy and represents airplanes in remote sensing images better than a rectangular box.

Super-resolution based generative adversarial network using visual perceptual loss function

Apr 24, 2019
Xuan Zhu, Yue Cheng, Rongzhi Wang

In recent years, perceptual-quality-driven super-resolution methods have shown satisfactory results. However, super-resolved images still suffer from uncertain texture details and unpleasant artifacts. To ameliorate these problems, we build a novel perceptual loss function composed of a morphological-component adversarial loss, a color adversarial loss, and a salient-content loss. The adversarial losses constrain the color and morphological-component distributions of super-resolved images, while the salient-content loss highlights the perceptual similarity of feature-rich regions. Experiments show that the proposed method achieves significant improvements in perceptual index and visual quality compared with state-of-the-art methods.

* 11 pages, 7 figures, 1 table 

Accelerating Minibatch Stochastic Gradient Descent using Typicality Sampling

Mar 11, 2019
Xinyu Peng, Li Li, Fei-Yue Wang

Machine learning, especially deep neural networks, has developed rapidly in fields including computer vision, speech recognition, and reinforcement learning. Although minibatch SGD is one of the most popular stochastic optimization methods for training deep networks, it shows a slow convergence rate due to the large noise in its gradient approximation. In this paper, we attempt to remedy this problem by building a more efficient batch selection method based on typicality sampling, which reduces the error of gradient estimation in conventional minibatch SGD. We analyze the convergence rate of the resulting typical-batch SGD algorithm and compare its convergence properties with those of minibatch SGD. Experimental results demonstrate that our batch selection scheme works well and that more complex minibatch SGD variants can also benefit from the proposed batch selection strategy.

* 10 pages, 4 figures, for journal 
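A batch selection scheme in this spirit might look like the following hypothetical sketch; the distance-to-mean typicality score is a crude stand-in, not the paper's estimator:

```python
import numpy as np

def typicality_batch(X, batch_size, typical_frac=0.5, rng=None):
    """Select a minibatch mixing high-density ("typical") samples with
    uniformly random ones. Typicality is scored here by a crude
    distance-to-mean proxy — an illustrative assumption; a real scheme
    would use a proper density estimate."""
    rng = rng if rng is not None else np.random.default_rng()
    n_typ = int(batch_size * typical_frac)
    dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
    typical = np.argsort(dist)[:n_typ]            # samples nearest the mode
    rest = np.setdiff1d(np.arange(len(X)), typical)
    random_part = rng.choice(rest, batch_size - n_typ, replace=False)
    return np.concatenate([typical, random_part])
```

The random portion keeps the gradient estimate from collapsing onto only the densest region, mirroring the bias/variance trade-off such schemes must manage.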

Deep Learning Based Autoencoder for Interference Channel

Feb 18, 2019
Dehao Wu, Maziar Nekovee, Yue Wang

Deep learning (DL) based autoencoders have shown great potential to significantly enhance physical-layer performance. In this paper, we present a DL-based autoencoder for the interference channel. Based on a characterization of a k-user Gaussian interference channel, where interference is classified from weak to very strong according to a coupling parameter α, a DL neural network (NN) based autoencoder is designed to train on the data set and decode the received signals. The performance of such a DL autoencoder under different interference scenarios is studied, with α known or partially known, where we assume α is predictable but may vary by up to 10% at the training stage. The results demonstrate that the DL-based approach has a significant capability to mitigate the effects of a poor signal-to-noise ratio (SNR) and a high interference-to-noise ratio (INR). However, the enhancement depends on the knowledge of α as well as the interference level. The proposed DL approach performs well with up to a 10% offset in α at the weak interference level. For the strong and very strong interference channels, the offset of α needs to be constrained to less than 5% and 2%, respectively, to maintain performance similar to when α is known.

* 6 pages, 10 figures, IEEE WiOpt 2019 submitted. arXiv admin note: text overlap with arXiv:1710.05312, arXiv:1206.0197 by other authors 

Intention Oriented Image Captions with Guiding Objects

Nov 19, 2018
Yue Zheng, Yali Li, Shengjin Wang

Although existing image caption models can produce promising results using recurrent neural networks (RNNs), it is difficult to guarantee that an object we care about appears in the generated descriptions, for example when the object is inconspicuous in the image. The problem becomes even harder when such objects did not appear in the training stage. In this paper, we propose a novel approach for generating image captions with guiding objects (CGO). When the object is present in the image, CGO constrains the model to include the human-concerned object in the generated description while maintaining fluency. Instead of generating the sequence from left to right, we start the description with a selected object and generate the other parts of the sequence based on this object. To achieve this, we design a novel framework combining two LSTMs running in opposite directions. We demonstrate the characteristics of our method on MSCOCO by generating descriptions for each detected object in images. With CGO, we can extend description ability to objects neglected in image caption labels, and provide a set of more comprehensive and diverse descriptions for an image. CGO shows clear advantages when applied to the task of describing novel objects. We show experimental results on both the MSCOCO and ImageNet datasets. Evaluations show that our method outperforms state-of-the-art models on this task with an average F1 of 75.8, leading to better descriptions in terms of both content accuracy and fluency.

Human-Robot Trust Integrated Task Allocation and Symbolic Motion planning for Heterogeneous Multi-robot Systems

Oct 26, 2018
Huanfei Zheng, Zhanrui Liao, Yue Wang

This paper presents a human-robot-trust-integrated task allocation and motion planning framework for multi-robot systems (MRS) performing a set of tasks concurrently. A set of parallel task specifications is conjoined with the MRS to synthesize a task allocation automaton. Each transition of the task allocation automaton is associated with the human's total trust in the corresponding robots. Here, the human-robot trust model is constructed as a dynamic Bayesian network (DBN) that considers individual robot performance, a safety coefficient, human cognitive workload, and an overall evaluation of the task allocation. A task allocation path with maximum encoded human-robot trust can then be searched for, based on the current trust value of each robot, in the task allocation automaton. Symbolic motion planning (SMP) is implemented for each robot once it obtains its sequence of actions. The task allocation path can be intermittently updated with this DBN-based trust model. The overall strategy is demonstrated by a simulation with 5 robots and 3 parallel subtask automata.
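The trust-maximizing search over the task allocation automaton can be sketched as a longest-path dynamic program; the automaton encoding and trust values below are toy assumptions, not the paper's DBN-derived quantities:

```python
def max_trust_path(transitions, start, goal):
    """Find the path through a task-allocation automaton that maximizes
    total trust. `transitions` maps a state to a list of
    (next_state, trust) pairs; the automaton is assumed acyclic, as when
    each transition allocates one remaining task. Returns
    (total_trust, path)."""
    memo = {}

    def solve(state):
        if state == goal:
            return 0.0, [goal]
        if state in memo:
            return memo[state]
        options = []
        for nxt, trust in transitions.get(state, []):
            sub_trust, sub_path = solve(nxt)
            options.append((trust + sub_trust, [state] + sub_path))
        # Dead ends get -inf so they are never chosen over a valid path.
        memo[state] = (max(options, key=lambda o: o[0])
                       if options else (float("-inf"), [state]))
        return memo[state]

    return solve(start)
```

Re-running this search with updated trust values is one way the allocation path could be intermittently revised as the DBN trust estimates evolve.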

A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds

Sep 29, 2018
Hongxin Wang, Jigen Peng, Shigang Yue

Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the fly's visual system, a class of specific neurons, called small target motion detectors (STMDs), has been identified as showing exquisite selectivity for small target motion. Some STMDs have also demonstrated directional selectivity, meaning they respond strongly only to their preferred motion direction. Directional selectivity is an important property of these STMD neurons that could contribute to tracking small targets, such as mates, in flight. However, little has been done to systematically model these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network (DSTMD) for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity by correlating signals relayed from two pixels. A lateral inhibition mechanism is then implemented over the spatial field for the size selectivity of STMD neurons. Extensive experiments showed that the proposed neural network not only accords with current biological findings, i.e., showing directional preferences, but also works reliably in detecting small targets against cluttered backgrounds.

* 14 pages, 21 figures 
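The lateral inhibition mechanism can be sketched as a center-surround (difference-of-Gaussians) filter, a common simplification of such models; the sigma values below are arbitrary assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lateral_inhibition(frame, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround lateral inhibition via a difference of Gaussians:
    a small bright target survives the subtraction, while broad
    structure is suppressed by its own surround. Negative responses are
    rectified to zero."""
    center = gaussian_filter(frame, sigma_center)
    surround = gaussian_filter(frame, sigma_surround)
    return np.clip(center - surround, 0.0, None)
```

On a uniform background with a single bright pixel, the background cancels exactly and only the small target yields a response, which is the size-selectivity effect the abstract describes.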

A Feedback Neural Network for Small Target Motion Detection in Cluttered Backgrounds

Jul 10, 2018
Hongxin Wang, Jigen Peng, Shigang Yue

Small target motion detection is critical for insects searching for and tracking mates or prey, which always appear as small dim speckles in the visual field. A class of specific neurons, called small target motion detectors (STMDs), has been characterized by exquisite sensitivity to small target motion. Understanding and analyzing the visual pathway of STMD neurons is beneficial for designing artificial visual systems for small target motion detection. Feedback loops have been widely identified in visual neural circuits and play an important role in target detection. However, it is unclear whether a feedback loop exists in the STMD visual pathway, or whether a feedback loop could significantly improve the detection performance of STMD neurons. In this paper, we propose a feedback neural network for small target motion detection against naturally cluttered backgrounds. To form a feedback loop, the model output is temporally delayed and relayed to a previous neural layer as a feedback signal. Extensive experiments showed significant improvement of the proposed feedback neural network over existing STMD-based models for small target motion detection.

* 10 pages, 5 figures 
