Models, code, and papers for "Ping Wang":

LSANet: Feature Learning on Point Sets by Local Spatial Attention

May 14, 2019
Lin-Zhuo Chen, Xuan-Yi Li, Deng-Ping Fan, Ming-Ming Cheng, Kai Wang, Shao-Ping Lu

Directly learning features from the point cloud has become an active research direction in 3D understanding. Existing learning-based methods usually construct local regions from the point cloud and extract the corresponding features using shared Multi-Layer Perceptrons (MLPs) and max pooling. However, most of these processes do not adequately take the spatial distribution of the point cloud into account, limiting the ability to perceive fine-grained patterns. We design a novel Local Spatial Attention (LSA) module to adaptively generate attention maps according to the spatial distribution of local regions. The feature learning process, which integrates these attention maps, can effectively capture the local geometric structure. We further propose the Spatial Feature Extractor (SFE), which constructs a branch architecture to better aggregate the spatial information with the associated features in each layer of the network. The experiments show that our network, named LSANet, achieves performance on par with or better than the state-of-the-art methods when evaluated on challenging benchmark datasets. The source code is available at https://github.com/LinZhuoChen/LSANet.
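
As a rough illustration of the LSA idea described above, the following PyTorch-style sketch generates attention weights from the relative coordinates of each local region and uses them to reweight neighbour features before pooling. The layer sizes, the softmax over neighbours, and all names here are our assumptions, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class LocalSpatialAttentionSketch(nn.Module):
        """Hypothetical LSA-style block: attention maps from local spatial layout."""
        def __init__(self, feat_dim, hidden=32):
            super().__init__()
            self.attn_mlp = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, feat_dim))

        def forward(self, rel_xyz, feats):
            # rel_xyz: (B, N, K, 3) coordinates of the K neighbours relative to each centre
            # feats:   (B, N, K, C) features of those neighbours, with C == feat_dim
            attn = torch.softmax(self.attn_mlp(rel_xyz), dim=2)  # attention over neighbours
            weighted = attn * feats                              # spatially-aware reweighting
            return weighted.max(dim=2).values                    # (B, N, C) pooled region feature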


  Click for Model/Code and Paper
The iterative convolution-thresholding method (ICTM) for image segmentation

Apr 24, 2019
Dong Wang, Xiao-Ping Wang

In this paper, we propose a novel iterative convolution-thresholding method (ICTM) that is applicable to a range of variational models for image segmentation. A variational model usually minimizes an energy functional consisting of a fidelity term and a regularization term. In the ICTM, the interface between two different segment domains is implicitly represented by their characteristic functions. The fidelity term is then usually written as a linear functional of the characteristic functions, and the regularization term is approximated by a functional of characteristic functions in terms of heat kernel convolution. This allows us to design an iterative convolution-thresholding method to minimize the approximate energy. The method is simple, efficient and enjoys the energy-decaying property. Numerical experiments show that the method is easy to implement, robust and applicable to various image segmentation models.
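
To make the iteration concrete, here is a minimal two-phase sketch in the spirit of the ICTM, assuming a Chan-Vese-style fidelity term and a Gaussian filter standing in for the heat-kernel convolution; parameter names and the stopping rule are ours, not the paper's.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ictm_two_phase(image, lam=1.0, tau=1.0, n_iter=100):
        u = (image > image.mean()).astype(float)     # initial characteristic function
        for _ in range(n_iter):
            if u.sum() == 0 or u.sum() == u.size:    # degenerate partition, stop
                break
            c1 = image[u > 0.5].mean()               # region means for the fidelity term
            c2 = image[u <= 0.5].mean()
            fidelity = (image - c1) ** 2 - (image - c2) ** 2
            # heat-kernel convolution approximating the regularization (length) term
            reg = lam * np.sqrt(np.pi / tau) * gaussian_filter(1.0 - 2.0 * u, sigma=np.sqrt(tau))
            u_new = (fidelity + reg < 0).astype(float)   # pointwise thresholding step
            if np.array_equal(u_new, u):
                break
            u = u_new
        return u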

* 13 pages, 4 figures 

  Click for Model/Code and Paper
Recovery of Sparse Signals Using Multiple Orthogonal Least Squares

Dec 31, 2015
Jian Wang, Ping Li

We study the problem of recovering sparse signals from compressed linear measurements. This problem, often referred to as sparse recovery or sparse reconstruction, has generated a great deal of interest in recent years. To recover the sparse signals, we propose a new method called multiple orthogonal least squares (MOLS), which extends the well-known orthogonal least squares (OLS) algorithm by allowing multiple indices ($L$ of them) to be chosen per iteration. Owing to the inclusion of multiple support indices in each selection, the MOLS algorithm converges in far fewer iterations and improves the computational efficiency over the conventional OLS algorithm. Theoretical analysis shows that MOLS ($L > 1$) performs exact recovery of all $K$-sparse signals within $K$ iterations if the measurement matrix satisfies the restricted isometry property (RIP) with isometry constant $\delta_{LK} < \frac{\sqrt{L}}{\sqrt{K} + 2 \sqrt{L}}.$ The recovery performance of MOLS in the noisy scenario is also studied. It is shown that stable recovery of sparse signals can be achieved with the MOLS algorithm when the signal-to-noise ratio (SNR) scales linearly with the sparsity level of the input signals.
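
A rough numpy sketch of the greedy loop follows: at every iteration the $L$ columns with the best OLS-style selection scores are added to the support, a least-squares fit is recomputed, and the residual is updated. The scoring rule and stopping criterion are simplified stand-ins for the paper's algorithm.

    import numpy as np

    def mols(Phi, y, K, L=2):
        m, n = Phi.shape
        support, residual = [], y.copy()
        for _ in range(K):
            if support:
                Ps = Phi[:, support]
                P_perp = np.eye(m) - Ps @ np.linalg.pinv(Ps)  # project off the current span
            else:
                P_perp = np.eye(m)
            proj = P_perp @ Phi
            norms = np.linalg.norm(proj, axis=0)
            norms[norms == 0] = np.inf
            scores = np.abs(proj.T @ residual) / norms        # OLS-style selection metric
            scores[support] = -np.inf                         # never re-pick chosen indices
            support.extend(np.argsort(scores)[-L:].tolist())  # add L indices per iteration
            x_hat, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ x_hat
            if np.linalg.norm(residual) < 1e-10:
                break
        x = np.zeros(n)
        x[support] = x_hat
        return x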


  Click for Model/Code and Paper
Deep Self-Learning From Noisy Labels

Aug 20, 2019
Jiangfan Han, Ping Luo, Xiaogang Wang

ConvNets achieve good results when trained on clean data, but learning from noisy labels significantly degrades performance and remains challenging. Unlike previous works that are constrained by many conditions, making them infeasible for real noisy cases, this work presents a novel deep self-learning framework to train a robust network on real noisy datasets without extra supervision. The proposed approach has several appealing benefits. (1) Different from most existing work, it does not rely on any assumption about the distribution of the noisy labels, making it robust to real noise. (2) It does not need extra clean supervision or an auxiliary network to help training. (3) A self-learning framework is proposed to train the network in an iterative end-to-end manner, which is effective and efficient. Extensive experiments on challenging benchmarks such as Clothing1M and Food101-N show that our approach outperforms its counterparts in all empirical settings.

* Accepted by IEEE International Conference on Computer Vision (ICCV) 2019 

  Click for Model/Code and Paper
Optimistic Adaptive Acceleration for Optimization

Mar 04, 2019
Jun-Kun Wang, Xiaoyun Li, Ping Li

We consider a new variant of AMSGrad. AMSGrad (Reddi et al., 2018) is a popular adaptive gradient-based optimization algorithm that is widely used in training deep neural networks. Our new variant of the algorithm assumes that mini-batch gradients in consecutive iterations have some underlying structure, which makes the gradients sequentially predictable. By exploiting this predictability and ideas from the field of optimistic online learning, the new algorithm can accelerate convergence and enjoys a tighter regret bound. We conduct experiments on training various neural networks on several datasets to show that the proposed method speeds up convergence in practice.
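
The flavour of the method can be sketched as follows (numpy, heavily simplified): the usual AMSGrad update is applied to an auxiliary iterate, and an extra "optimistic" step is taken along a prediction of the next mini-batch gradient. The predictor is left to the caller here, and all names and defaults are our assumptions rather than the paper's exact update.

    import numpy as np

    class OptimisticAMSGradSketch:
        def __init__(self, dim, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
            self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
            self.m = np.zeros(dim)
            self.v = np.zeros(dim)
            self.v_hat = np.zeros(dim)

        def step(self, w_aux, grad, grad_pred):
            # standard AMSGrad moment updates
            self.m = self.b1 * self.m + (1 - self.b1) * grad
            self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
            self.v_hat = np.maximum(self.v_hat, self.v)   # the max step that defines AMSGrad
            denom = np.sqrt(self.v_hat) + self.eps
            w_aux = w_aux - self.lr * self.m / denom      # update the auxiliary iterate
            # optimistic step: also move along the predicted next gradient
            w = w_aux - self.lr * grad_pred / denom
            return w, w_aux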


  Click for Model/Code and Paper
Pruning Filter via Geometric Median for Deep Convolutional Neural Networks Acceleration

Nov 01, 2018
Yang He, Ping Liu, Ziwei Wang, Yi Yang

Previous works used a "smaller-norm-less-important" criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of these two requirements. Unlike previous methods, FPGM compresses CNN models by determining and pruning those filters with redundant information via the Geometric Median (GM), rather than those with "relatively less" importance. When applied to two image classification benchmarks, our method validates its usefulness and strengths. Notably, on CIFAR-10, FPGM reduces more than 52% of the FLOPs on ResNet-110 with even a 2.69% relative accuracy improvement. Besides, on ILSVRC-2012, FPGM reduces more than 42% of the FLOPs on ResNet-101 without a top-5 accuracy drop, which advances the state of the art.
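
The selection rule can be illustrated with a short numpy sketch: the geometric median of a layer's filters is approximated here by the filter minimizing the total distance to all the others, and the filters closest to it are marked for pruning. This is a simplification of FPGM for illustration only; the prune ratio, shapes and names are assumptions.

    import numpy as np

    def fpgm_prune_indices(weight, prune_ratio=0.3):
        """weight: conv weights of shape (out_channels, in_channels, k, k)."""
        filters = weight.reshape(weight.shape[0], -1)
        dists = np.linalg.norm(filters[:, None, :] - filters[None, :, :], axis=-1)
        gm_index = np.argmin(dists.sum(axis=1))        # proxy for the geometric median
        n_prune = int(prune_ratio * len(filters))
        # the filters nearest the geometric median carry the most redundant information
        return np.argsort(dists[gm_index])[1:n_prune + 1]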

* 9 pages 

  Click for Model/Code and Paper
Optimal Subsampling for Large Sample Logistic Regression

Mar 07, 2018
HaiYing Wang, Rong Zhu, Ping Ma

For massive data, the family of subsampling algorithms is popular to downsize the data volume and reduce computational burden. Existing studies focus on approximating the ordinary least squares estimate in linear regression, where statistical leverage scores are often used to define subsampling probabilities. In this paper, we propose fast subsampling algorithms to efficiently approximate the maximum likelihood estimate in logistic regression. We first establish consistency and asymptotic normality of the estimator from a general subsampling algorithm, and then derive optimal subsampling probabilities that minimize the asymptotic mean squared error of the resultant estimator. An alternative minimization criterion is also proposed to further reduce the computational cost. The optimal subsampling probabilities depend on the full data estimate, so we develop a two-step algorithm to approximate the optimal subsampling procedure. This algorithm is computationally efficient and has a significant reduction in computing time compared to the full data approach. Consistency and asymptotic normality of the estimator from a two-step algorithm are also established. Synthetic and real data sets are used to evaluate the practical performance of the proposed method.
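
A hedged sketch of the two-step idea, using scikit-learn for both fits: a uniform pilot subsample gives a rough estimate, subsampling probabilities are formed from it (here a simplified |y - p|·||x|| score), and a weighted fit on the informative subsample gives the final estimate. The subsample sizes and the exact probability formula are our assumptions, not the paper's derivation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def two_step_subsample_fit(X, y, r0=1000, r=5000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)
        # step 1: uniform pilot subsample and pilot estimate
        pilot_idx = rng.choice(n, size=r0, replace=False)
        pilot = LogisticRegression(max_iter=1000).fit(X[pilot_idx], y[pilot_idx])
        p = pilot.predict_proba(X)[:, 1]
        # step 2: approximate optimal subsampling probabilities and weighted refit
        scores = np.abs(y - p) * np.linalg.norm(X, axis=1)
        probs = scores / scores.sum()
        idx = rng.choice(n, size=r, replace=True, p=probs)
        weights = 1.0 / probs[idx]                     # inverse-probability weights (up to a constant)
        return LogisticRegression(max_iter=1000).fit(X[idx], y[idx], sample_weight=weights)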


  Click for Model/Code and Paper
GSLAM: Initialization-robust Monocular Visual SLAM via Global Structure-from-Motion

Oct 19, 2017
Chengzhou Tang, Oliver Wang, Ping Tan

Many monocular visual SLAM algorithms are derived from incremental structure-from-motion (SfM) methods. This work proposes a novel monocular SLAM method which integrates recent advances made in global SfM. In particular, we present two main contributions to visual SLAM. First, we solve the visual odometry problem by a novel rank-1 matrix factorization technique which is more robust to the errors in map initialization. Second, we adopt a recent global SfM method for the pose-graph optimization, which leads to a multi-stage linear formulation and enables L1 optimization for better robustness to false loops. The combination of these two approaches generates more robust reconstruction and is significantly faster (4X) than recent state-of-the-art SLAM systems. We also present a new dataset recorded with ground truth camera motion in a Vicon motion capture room, and compare our method to prior systems on it and established benchmark datasets.

* 3DV 2017 Project Page: https://frobelbest.github.io/gslam 

  Click for Model/Code and Paper
Predicting the Quality of Short Narratives from Social Media

Jul 08, 2017
Tong Wang, Ping Chen, Boyang Li

An important and difficult challenge in building computational models for narratives is the automatic evaluation of narrative quality. Quality evaluation connects narrative understanding and generation, as generation systems need to evaluate their own products. To circumvent difficulties in acquiring annotations, we employ upvotes in social media as an approximate measure of story quality. We collected 54,484 answers from a crowd-powered question-and-answer website, Quora, and then used active learning to build a classifier that labeled 28,320 answers as stories. To predict the number of upvotes without the use of social network features, we create neural networks that model textual regions and the interdependence among regions, which serve as strong benchmarks for future research. To the best of our knowledge, this is the first large-scale study of automatic evaluation of narrative quality.

* 7 pages, 2 figures. Accepted at the 2017 IJCAI conference 

  Click for Model/Code and Paper
Object Proposal with Kernelized Partial Ranking

May 18, 2017
Jing Wang, Jie Shen, Ping Li

Object proposals are an ensemble of bounding boxes with high potential to contain objects. In order to determine a small set of proposals with high recall, a common scheme is extracting multiple features followed by a ranking algorithm, which however incurs two major challenges: 1) the ranking model often imposes pairwise constraints between proposals, rendering the problem far from an efficient training/testing phase; 2) linear kernels are utilized due to the computational and memory bottleneck of training a kernelized model. In this paper, we remedy these two issues by suggesting a kernelized partial ranking model. In particular, we demonstrate that i) our partial ranking model reduces the number of constraints from $O(n^2)$ to $O(nk)$, where $n$ is the number of all potential proposals for an image but we are only interested in the top-$k$ of them that have the largest overlap with the ground truth; ii) we permit non-linear kernels in our model, which are often superior to the linear classifier in terms of accuracy. To mitigate the computational and memory issues, we introduce a consistent weighted sampling (CWS) paradigm that approximates the non-linear kernel as well as facilitates efficient learning. In fact, as we will show, training a linear CWS model amounts to learning a kernelized model. Extensive experiments demonstrate that, equipped with the non-linear kernel and the partial ranking algorithm, recall at the top-$k$ proposals can be substantially improved.
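
For reference, consistent weighted sampling itself can be sketched in a few lines of numpy, along the lines of Ioffe's (2010) improved CWS; the exact variant used in the paper may differ, and the function name and seeding scheme are ours.

    import numpy as np

    def cws_sample(weights, seed=0):
        """One (index, t) hash of a non-negative feature vector (ICWS-style)."""
        dim = len(weights)
        rng = np.random.default_rng(seed)     # same seed -> same r, c, beta across inputs
        r = rng.gamma(2.0, 1.0, size=dim)
        c = rng.gamma(2.0, 1.0, size=dim)
        beta = rng.uniform(0.0, 1.0, size=dim)
        idx = np.flatnonzero(weights > 0)
        t = np.floor(np.log(weights[idx]) / r[idx] + beta[idx])
        y = np.exp(r[idx] * (t - beta[idx]))
        a = c[idx] / (y * np.exp(r[idx]))
        k = np.argmin(a)                      # the minimum determines the sample
        return int(idx[k]), int(t[k])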

* Pattern Recognition, 2017 

  Click for Model/Code and Paper
Identifying Outliers using Influence Function of Multiple Kernel Canonical Correlation Analysis

Jun 01, 2016
Md Ashad Alam, Yu-Ping Wang

Imaging genetics research has essentially focused on discovering unique and co-association effects, while typically ignoring the identification of outliers or atypical objects in genetic as well as non-genetic variables. Identifying significant outliers is an essential and challenging issue for imaging genetics and multiple-source data analysis, and identified outliers need to be examined for transcription errors. First, we address the influence function (IF) of the kernel mean element, the kernel covariance operator, the kernel cross-covariance operator, kernel canonical correlation analysis (kernel CCA) and multiple kernel CCA. Second, we propose an IF of multiple kernel CCA, which can be applied to more than two datasets. Third, we propose a visualization method to detect influential observations from multiple sources of data based on the IF of kernel CCA and multiple kernel CCA. Finally, the proposed methods are capable of analyzing outliers of subjects usually found in biomedical applications, in which the dimensionality is large. To examine the outliers, we use the stem-and-leaf display. Experiments on both synthesized and imaging genetics data (e.g., SNP, fMRI, and DNA methylation) demonstrate that the proposed visualization can be applied effectively.

* arXiv admin note: substantial text overlap with arXiv:1602.05563 

  Click for Model/Code and Paper
A Convolutional Neural Network with Mapping Layers for Hyperspectral Image Classification

Aug 26, 2019
Rui Li, Zhibin Pan, Yang Wang, Ping Wang

In this paper, we propose a convolutional neural network with mapping layers (MCNN) for hyperspectral image (HSI) classification. The proposed mapping layers map the input patch into a low-dimensional subspace using multilinear algebra. We use the mapping layers to reduce the spectral and spatial redundancy while retaining most of the energy of the input. The features extracted by the mapping layers also reduce the number of subsequent convolutional layers needed for feature extraction. Our MCNN architecture avoids the phenomenon, common in deep learning models for HSI classification, of accuracy declining as layers are added, and its effective mapping layers also save training time. Furthermore, we apply 3-D convolutional kernels in the convolutional layers to extract spectral-spatial features for HSI. We tested MCNN on the Indian Pines, University of Pavia and Salinas datasets, achieving classification accuracies of 98.3%, 99.5% and 99.3%, respectively. Experimental results demonstrate that the proposed MCNN can significantly improve classification accuracy and save considerable training time.
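
The mapping-layer idea can be illustrated with a tiny numpy sketch: the HSI patch (height × width × bands) is contracted along each mode with a factor matrix, as in a Tucker-style multilinear projection. The factor matrices here are random placeholders, whereas in MCNN they would be learned; the output sizes are arbitrary assumptions.

    import numpy as np

    def mapping_layer(patch, out_h=5, out_w=5, out_b=30, seed=0):
        """patch: HSI patch of shape (height, width, bands)."""
        rng = np.random.default_rng(seed)
        h, w, b = patch.shape
        U1 = rng.standard_normal((out_h, h))
        U2 = rng.standard_normal((out_w, w))
        U3 = rng.standard_normal((out_b, b))
        # mode-n products: contract each tensor mode with its factor matrix
        return np.einsum('ih,jw,kb,hwb->ijk', U1, U2, U3, patch)   # (out_h, out_w, out_b)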

* Under review at IEEE Trans. Geosci. Remote Sens. 

  Click for Model/Code and Paper
A Translate-Edit Model for Natural Language Question to SQL Query Generation on Multi-relational Healthcare Data

Jul 28, 2019
Ping Wang, Tian Shi, Chandan K. Reddy

Electronic health record (EHR) data contains most of the important patient health information and is typically stored in a relational database with multiple tables. One important way for doctors to make use of EHR data is to retrieve intuitive information by posing a sequence of questions against it. However, due to the large amount of information stored in it, effectively retrieving patient information from EHR data in a short time is still challenging for medical experts, since it requires a good understanding of a query language to access the database. We tackle this challenge by developing a deep learning based approach that can translate a natural language question on multi-relational EHR data into its corresponding SQL query, which is referred to as a Question-to-SQL generation task. Most of the existing methods cannot solve this problem since they primarily focus on questions related to a single table under a table-aware assumption, whereas in our problem the questions asked by clinicians may relate to multiple, unspecified tables. In this paper, we first create a new question-to-query dataset designed for healthcare, named MIMICSQL, based on a publicly available electronic medical database, to perform the Question-to-SQL generation task. To address the challenge of generating queries on multi-relational databases from natural language questions, we propose a TRanslate-Edit Model for Question-to-SQL query generation (TREQS), which adopts a sequence-to-sequence model to directly generate the SQL query for a given question, and further edits it with an attentive-copying mechanism and task-specific look-up tables. Both quantitative and qualitative experimental results indicate the flexibility and efficiency of our proposed method in tackling the challenges that are unique to MIMICSQL.
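
The attentive-copying mechanism mentioned above can be summarized, in the spirit of pointer-generator decoding, by an output distribution that mixes generating a token from the vocabulary with copying one from the question; this restatement and its notation are ours and may differ from the exact TREQS formulation:

    $P(w) = p_{\text{gen}} \, P_{\text{vocab}}(w) + (1 - p_{\text{gen}}) \sum_{i:\, x_i = w} a_i$

where $p_{\text{gen}}$ is a learned generate-versus-copy switch, $a_i$ are attention weights over the question tokens $x_i$, and the task-specific look-up tables further constrain copied table and column names.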


  Click for Model/Code and Paper
LeafNATS: An Open-Source Toolkit and Live Demo System for Neural Abstractive Text Summarization

May 28, 2019
Tian Shi, Ping Wang, Chandan K. Reddy

Neural abstractive text summarization (NATS) has received a lot of attention in the past few years from both industry and academia. In this paper, we introduce an open-source toolkit, namely LeafNATS, for training and evaluation of different sequence-to-sequence based models for the NATS task, and for deploying the pre-trained models to real-world applications. The toolkit is modularized and extensible in addition to maintaining competitive performance in the NATS task. A live news blogging system has also been implemented to demonstrate how these models can aid blog/news editors by providing them suggestions of headlines and summaries of their articles.

* Accepted by NAACL-HLT 2019 demo track 

  Click for Model/Code and Paper
Machine Learning for Survival Analysis: A Survey

Aug 15, 2017
Ping Wang, Yan Li, Chandan K. Reddy

Accurately predicting the time of occurrence of an event of interest is a critical problem in longitudinal data analysis. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. Such a phenomenon is called censoring, which can be effectively handled using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome this censoring issue. In addition, many machine learning algorithms have been adapted to effectively handle survival data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the representative statistical methods along with the machine learning techniques used in survival analysis and provide a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and illustrate several successful applications in various real-world domains. We hope that this paper will provide a more thorough understanding of the recent advances in survival analysis and offer some guidelines on applying these approaches to solve new problems that arise in applications with censored data.


  Click for Model/Code and Paper
A Method for Expressing and Displaying the Vehicle Behavior Distribution in Maintenance Work Zones

Apr 25, 2019
Qun Yang, Zhepu Xu, Saravanan Gurupackiam, Ping Wang

Maintenance work zones on the road network affect the normal travel of vehicles and increase the risk of traffic accidents. Traffic characteristic analysis in maintenance work zones is a basis for related research such as layout design, traffic control and safety assessment. Due to the difficulty of acquiring microscopic vehicle behaviour data, traditional traffic characteristic analysis has mainly focused on macroscopic characteristics. With the development of data acquisition technology, it has become much easier to obtain large amounts of microscopic behaviour data, which lays a good foundation for analysing traffic characteristics from a new point of view. This paper puts forward a method for expressing and displaying the distribution of vehicle behaviour in maintenance work zones. Using portable devices for acquiring microscopic vehicle behaviour data, a large amount of data can be obtained. Based on this data, an endpoint detection technique automatically extracts the segments of the behaviour data with violent fluctuations, which are the segments where vehicles perform behaviours such as acceleration or turning. Using support vector machine classification, the specific behaviour type of each extracted segment can be identified, and together with a data combination method, a total of ten types of behaviour can be identified. Kernel density analysis is then used to cluster the different behaviour types of all passing vehicles and show their distribution on maps. With this method, how vehicles travel through maintenance work zones and how different vehicle behaviours are distributed within them can be displayed intuitively on maps, which provides a novel traffic characteristic and can shed light on related research such as safety assessment and design methods.
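
As a rough sketch of the classify-then-map part of this pipeline (skipping the endpoint detection and feature extraction steps), an SVM labels each extracted segment and a kernel density estimate summarizes where each behaviour type occurs; the library choices and all names here are illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVC
    from scipy.stats import gaussian_kde

    def classify_and_map_density(train_feats, train_labels, seg_feats, seg_positions):
        clf = SVC(kernel='rbf').fit(train_feats, train_labels)
        pred = clf.predict(seg_feats)                   # behaviour type of each segment
        densities = {}
        for label in np.unique(pred):
            pts = seg_positions[pred == label]          # (n, 2) map coordinates of this behaviour
            if len(pts) > 2:
                densities[label] = gaussian_kde(pts.T)  # spatial density to draw on the map
        return pred, densities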

* 14 pages, 12 figures, 1 table 

  Click for Model/Code and Paper
Multimodal Sparse Classifier for Adolescent Brain Age Prediction

Apr 01, 2019
Peyman Hosseinzadeh Kassani, Alexej Gossmann, Yu-Ping Wang

The study of healthy brain development helps to better understand the brain transformations and connectivity patterns that occur from childhood to adulthood. This study presents a sparse machine learning solution across whole-brain functional connectivity (FC) measures of three sets of data derived from resting-state functional magnetic resonance imaging (rs-fMRI) and task fMRI data, including a working memory n-back task (nb-fMRI) and an emotion identification task (em-fMRI). These multi-modal image data are collected on a sample of adolescents from the Philadelphia Neurodevelopmental Cohort (PNC) for the prediction of brain age. Due to the extremely large variable-to-instance ratio of the PNC data, a high-dimensional matrix with several irrelevant and highly correlated features is generated, and hence a pattern learning approach is necessary to extract significant features. We propose a sparse learner based on the residual errors along the estimation of an inverse problem for the extreme learning machine (ELM) neural network. The purpose of the approach is to overcome the overlearning problem by pruning several redundant features and their corresponding output weights. The proposed multimodal sparse ELM classifier based on residual errors (RES-ELM) is highly competitive in terms of classification accuracy compared to its counterparts, such as the conventional ELM and the sparse Bayesian learning ELM.
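
To give a flavour of the ELM-with-pruning idea, the sketch below fits a basic ELM and then drops the hidden units whose removal barely changes the training residual; this crude residual-based pruning is only an illustration under our own assumptions and is not the RES-ELM algorithm itself.

    import numpy as np

    def elm_with_pruning(X, y, n_hidden=200, keep=100, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        H = np.tanh(X @ W)                               # random hidden-layer feature map
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        base = np.linalg.norm(y - H @ beta)
        # score each hidden unit by the residual increase caused by removing it
        scores = np.array([np.linalg.norm(y - np.delete(H, j, 1) @ np.delete(beta, j, 0))
                           for j in range(n_hidden)]) - base
        keep_idx = np.argsort(scores)[-keep:]            # keep the most useful units
        beta_k, *_ = np.linalg.lstsq(H[:, keep_idx], y, rcond=None)
        return W[:, keep_idx], beta_k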


  Click for Model/Code and Paper
Towards Understanding Regularization in Batch Normalization

Sep 30, 2018
Ping Luo, Xinjiang Wang, Wenqi Shao, Zhanglin Peng

Batch Normalization (BN) improves both convergence and generalization in training neural networks. This work explains these phenomena theoretically. We analyze BN using a basic block of neural networks consisting of a kernel layer, a BN layer, and a nonlinear activation function. This basic network helps us understand the impact of BN in three aspects. First, by viewing BN as an implicit regularizer, BN can be decomposed into population normalization (PN) and gamma decay as an explicit regularization. Second, the learning dynamics of BN and this regularization show that training converges with a large maximum and effective learning rate. Third, the generalization of BN is explored using statistical mechanics. Experiments demonstrate that BN in convolutional neural networks shares the same traits of regularization as the above analyses.
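
For reference, the batch normalization transform analysed here is the standard one below (this restatement is ours); the paper's contribution is the decomposition of its effect into population normalization plus an explicit gamma-decay regularizer:

    $\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma\,\hat{x}_i + \beta,$

where $\mu_B$ and $\sigma_B^2$ are the mean and variance of the current mini-batch and $\gamma$, $\beta$ are learned parameters.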

* Preprint. Work in progress. 17 pages 

  Click for Model/Code and Paper
A Semantic QA-Based Approach for Text Summarization Evaluation

Apr 11, 2018
Ping Chen, Fei Wu, Tong Wang, Wei Ding

Many Natural Language Processing and Computational Linguistics applications involve the generation of new texts based on existing texts, such as summarization, text simplification and machine translation. However, a serious problem has haunted these applications for decades: how to automatically and accurately assess the quality of their outputs. In this paper, we present some preliminary results on one especially useful and challenging problem in NLP system evaluation: how to pinpoint the content differences between two text passages (especially large passages such as articles and books). Our idea is intuitive and very different from existing approaches. We treat one text passage as a small knowledge base and ask it a large number of questions to exhaustively identify all content points in it. By comparing the correctly answered questions from two text passages, we are able to compare their content precisely. An experiment using the 2007 DUC summarization corpus clearly shows promising results.

* AAAI 2018 

  Click for Model/Code and Paper