Models, code, and papers for "Jie Wang":

Deep learning based mood tagging for Chinese song lyrics

May 23, 2019
Jie Wang, Xinyan Zhao

Listening to music has long been, and will remain, an indispensable part of daily life, and in recent years sentiment analysis of music has been widely used in information retrieval and personalized recommendation systems. Building on advances in deep learning, this paper seeks an effective approach to mood tagging of Chinese song lyrics. To this end, both machine-learning and deep-learning models are studied and compared. A CNN-based model with pre-trained word embeddings is shown to effectively extract the distribution of emotional features in Chinese lyrics, scoring at least 15 percentage points higher than traditional machine-learning methods (i.e., TF-IDF+SVM and LIWC+SVM) and 7 percentage points higher than other deep-learning models (i.e., RNN, LSTM). A corpus of more than 160,000 lyrics is leveraged to pre-train the word embeddings and boost mood-tagging performance.
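
As a rough illustration of the kind of model described, here is a minimal TextCNN sketch with a frozen pre-trained embedding layer; the vocabulary size, filter widths, and number of mood classes are placeholder assumptions, not the paper's configuration.

```python
# A minimal TextCNN sketch with a frozen pre-trained embedding layer.
# All sizes below (vocab, sequence length, filters, mood classes) are
# placeholder assumptions, not the paper's configuration.
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, max_len, num_moods = 50000, 300, 200, 4
# Stand-in for embeddings pre-trained on the ~160k-lyric corpus.
pretrained_vectors = np.random.rand(vocab_size, embed_dim).astype("float32")

inputs = tf.keras.Input(shape=(max_len,), dtype="int32")
x = tf.keras.layers.Embedding(
    vocab_size, embed_dim,
    embeddings_initializer=tf.keras.initializers.Constant(pretrained_vectors),
    trainable=False,  # keep the pre-trained embedding fixed
)(inputs)
# Parallel convolutions over 2-, 3-, and 4-gram windows, max-pooled over time.
pooled = []
for width in (2, 3, 4):
    conv = tf.keras.layers.Conv1D(128, width, activation="relu")(x)
    pooled.append(tf.keras.layers.GlobalMaxPooling1D()(conv))
x = tf.keras.layers.Concatenate()(pooled)
outputs = tf.keras.layers.Dense(num_moods, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```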


Theme-aware generation model for Chinese lyrics

May 23, 2019
Jie Wang, Xinyan Zhao

With the rapid development of neural networks, deep learning has been extended to various natural language generation tasks, such as machine translation, dialogue generation, and even literary creation. In this paper, we propose a theme-aware language generation model for Chinese music lyrics, which greatly improves the theme connectivity and coherence of the generated paragraphs. A multi-channel sequence-to-sequence (seq2seq) model encodes themes and previous sentences as global and local contextual information, and an attention mechanism is incorporated into sequence decoding, enabling the model to fuse context into the text it predicts next. To prepare an appropriate training corpus, LDA (Latent Dirichlet Allocation) is applied for theme extraction. The generated lyrics are grammatically correct and semantically coherent with the selected themes, which offers a valuable modeling method for other fields, including multi-turn chatbots and long paragraph generation.
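
For the LDA theme-extraction step, a minimal sketch with scikit-learn might look as follows; the corpus and the number of themes are illustrative assumptions, and Chinese lyrics would first need word segmentation (e.g., with jieba) before vectorization.

```python
# Hypothetical corpus and parameters; not the authors' pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

lyrics = [
    "wind rain street lonely night",      # stand-ins for segmented lyrics
    "mountain river moon hometown road",
    "dance floor lights beat tonight",
]
vec = CountVectorizer()
bow = vec.fit_transform(lyrics)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(bow)
themes = lda.transform(bow).argmax(axis=1)  # dominant theme id per lyric
```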


Semantic WordRank: Generating Finer Single-Document Summarizations

Sep 12, 2018
Hao Zhang, Jie Wang

We present Semantic WordRank (SWR), an unsupervised method for generating an extractive summary of a single document. Built on a weighted word graph with semantic and co-occurrence edges, SWR scores sentences using an article-structure-biased PageRank algorithm with a Softplus function adjustment, and promotes topic diversity using spectral subtopic clustering under the Word Mover's Distance metric. We evaluate SWR on the DUC-02 and SummBank datasets and show that SWR produces better summaries than state-of-the-art algorithms over DUC-02 under common ROUGE measures. We then show that, under the same measures over SummBank, SWR outperforms each of the three human annotators (a.k.a. judges) and compares favorably with the combined performance of all judges.
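
The score-propagation step can be sketched as a biased PageRank power iteration; the placement of the Softplus adjustment and the structure bias below are guesses from the abstract, not the paper's exact formulation.

```python
# `W` is a toy nonnegative edge-weight matrix; `bias` favors early units,
# standing in for the paper's article-structure bias.
import numpy as np

def biased_pagerank(W, bias, d=0.85, iters=100):
    W = np.log1p(np.exp(W))                   # Softplus adjustment of weights
    P = W / W.sum(axis=0, keepdims=True)      # column-stochastic transitions
    r = np.full(W.shape[0], 1.0 / W.shape[0])
    b = bias / bias.sum()
    for _ in range(iters):
        r = (1 - d) * b + d * P @ r           # teleport to the biased prior
    return r

W = np.abs(np.random.default_rng(0).standard_normal((30, 30)))
scores = biased_pagerank(W, bias=np.linspace(2.0, 1.0, 30))
```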

* 12 pages, accepted by IDEAL2018 

DTATG: An Automatic Title Generator based on Dependency Trees

Oct 01, 2017
Liqun Shao, Jie Wang

We study automatic title generation for a given block of text and present a method called DTATG to generate titles. DTATG first extracts a small number of central sentences that convey the main meanings of the text and are in a suitable structure for conversion into a title. DTATG then constructs a dependency tree for each of these sentences and removes certain branches using a Dependency Tree Compression Model we devise. We also devise a title test to determine if a sentence can be used as a title. If a trimmed sentence passes the title test, then it becomes a title candidate. DTATG selects the title candidate with the highest ranking score as the final title. Our experiments showed that DTATG can generate adequate titles. We also showed that DTATG-generated titles have higher F1 scores than those generated by the previous methods.
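
A toy version of the trimming idea using spaCy's dependency parses is sketched below; the set of pruned relations is hypothetical, and the learned Dependency Tree Compression Model and title test are not reproduced.

```python
# Hypothetical branch-pruning rules, for illustration only.
import spacy

nlp = spacy.load("en_core_web_sm")
PRUNE = {"advcl", "appos", "parataxis"}   # example branch types to drop

def trim(sentence):
    doc = nlp(sentence)
    dropped = set()
    for tok in doc:
        if tok.dep_ in PRUNE:
            dropped.update(t.i for t in tok.subtree)  # remove whole branch
    return " ".join(t.text for t in doc if t.i not in dropped)

print(trim("The model, a small parser, trims clauses when they are minor."))
```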

* 8 pages, accepted by KDIR 2016 

Scalable Compression of Deep Neural Networks

Aug 26, 2016
Xing Wang, Jie Liang

Deep neural networks generally involve layers with millions of parameters, making them difficult to deploy and update on devices with limited resources, such as mobile phones and other smart embedded systems. In this paper, we propose a scalable representation of the network parameters, so that different applications can select the most suitable bit rate of the network based on their own storage constraints. Moreover, when a device needs to upgrade to a high-rate network, the existing low-rate network can be reused, and only some incremental data need to be downloaded. We first hierarchically quantize the weights of a pre-trained deep neural network to enforce weight sharing. Next, we adaptively select the bits assigned to each layer given the total bit budget. After that, we retrain the network to fine-tune the quantized centroids. Experimental results show that our method achieves scalable compression with graceful degradation in performance.
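
The first stage, quantizing a layer's weights to enforce sharing, can be sketched with k-means; the layer shape and the 4-bit codebook below are illustrative, and the paper's hierarchical codebooks, per-layer bit allocation, and centroid retraining are omitted.

```python
# One layer's weight quantization via k-means (illustrative sizes only).
import numpy as np
from sklearn.cluster import KMeans

w = np.random.randn(256, 128)                    # a pre-trained layer's weights
km = KMeans(n_clusters=16, n_init=10).fit(w.reshape(-1, 1))  # 2**4 centroids
codes = km.labels_.astype(np.uint8)              # store 4-bit indices only
centroids = km.cluster_centers_.ravel()
w_hat = centroids[codes].reshape(w.shape)        # shared-weight reconstruction
```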

* 5 pages, 4 figures, ACM Multimedia 2016 

Safe Screening for Multi-Task Feature Learning with Multiple Data Matrices

May 15, 2015
Jie Wang, Jieping Ye

Multi-task feature learning (MTFL) is a powerful technique for boosting predictive performance by learning multiple related classification/regression/clustering tasks simultaneously. However, solving the MTFL problem remains challenging when the feature dimension is extremely large. In this paper, we propose a novel screening rule, based on the dual projection onto convex sets (DPC), to quickly identify the inactive features, i.e., those that have zero coefficients in the solution vectors across all tasks. An appealing property of DPC is that it is safe, in the sense that the detected inactive features are guaranteed to have zero coefficients in the solution vectors across all tasks. Thus, by removing the inactive features from the training phase, we may obtain substantial savings in computational cost and memory usage without sacrificing accuracy. To the best of our knowledge, this is the first screening rule applicable to sparse models with multiple data matrices. A key challenge in deriving DPC is to solve a nonconvex problem. We show that the global optimum can be found efficiently via a properly chosen parametrization of the constraint set. Moreover, DPC has very low computational cost and can be integrated with any existing solvers. We have evaluated the proposed DPC rule on both synthetic and real data sets. The experiments indicate that DPC is very effective in identifying the inactive features, especially for high-dimensional data, which leads to a speedup of up to several orders of magnitude.


Two-Layer Feature Reduction for Sparse-Group Lasso via Decomposition of Convex Sets

Oct 15, 2014
Jie Wang, Jieping Ye

Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique for simultaneously discovering group and within-group sparse patterns by using a combination of the $\ell_1$ and $\ell_2$ norms. However, in large-scale applications, the complexity of the regularizers entails great computational challenges. In this paper, we propose a novel Two-Layer Feature REduction method (TLFre) for SGL via a decomposition of its dual feasible set. The two-layer reduction quickly identifies the inactive groups and the inactive features, respectively, which are guaranteed to be absent from the sparse representation and can be removed from the optimization. Existing feature reduction methods are applicable only to sparse models with one sparsity-inducing regularizer; to the best of our knowledge, TLFre is the first that is capable of dealing with multiple sparsity-inducing regularizers. Moreover, TLFre has very low computational cost and can be integrated with any existing solvers. We also develop a screening method, called DPC (DecomPosition of Convex set), for the nonnegative Lasso problem. Experiments on both synthetic and real data sets show that TLFre and DPC improve the efficiency of SGL and nonnegative Lasso by several orders of magnitude.


Learning directed acyclic graphs via bootstrap aggregating

Jun 09, 2014
Ru Wang, Jie Peng

Probabilistic graphical models are graphical representations of probability distributions, with applications in many fields including biology, the social sciences, linguistics, and neuroscience. In this paper, we propose learning directed acyclic graphs (DAGs) via bootstrap aggregating; the proposed procedure is named DAGBag. Specifically, an ensemble of DAGs is first learned on bootstrap resamples of the data, and an aggregated DAG is then derived by minimizing the overall distance to the entire ensemble. A family of metrics based on the structural Hamming distance is defined on the space of DAGs (over a given node set) and is used for aggregation. In the high-dimension, low-sample-size setting, a graph learned on a single data set often has an excessive number of false positive edges due to over-fitting of the noise. Aggregation overcomes over-fitting through variance reduction and thus greatly reduces false positives. We also develop an efficient implementation of the hill climbing search algorithm for DAG learning, which makes the proposed method computationally competitive in the high-dimensional regime. The DAGBag procedure is implemented in the R package dagbag.
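
A simplified sketch of the aggregation step follows: learn one DAG per bootstrap resample, then return the ensemble member with the smallest total structural Hamming distance to the others. Restricting the aggregate to an ensemble member (a medoid) simplifies the paper's minimization, and `learn_dag` is a placeholder for any DAG learner returning a 0/1 adjacency matrix.

```python
# Medoid-style bagging of DAGs under the structural Hamming distance.
import numpy as np

def shd(A, B):
    """Structural Hamming distance between adjacency matrices."""
    return int(np.sum(A != B))

def dag_bag(data, learn_dag, n_boot=50, seed=0):
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    dags = [learn_dag(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    totals = [sum(shd(A, B) for B in dags) for A in dags]
    return dags[int(np.argmin(totals))]   # member closest to the ensemble
```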


War: Detecting adversarial examples by pre-processing input data

May 15, 2019
Hua Wang, Jie Wang, Zhaoxia Yin

Deep neural networks (DNNs) have demonstrated outstanding performance in many fields such as image classification and speech recognition. However, DNN image classifiers are susceptible to interference from adversarial examples, which ultimately leads to incorrect classification output. This paper therefore proposes a method based on War (WebP compression and resize) to detect adversarial examples. With WebP compression at its core, the method first compresses the input image and then appropriately resizes the compressed image, so that the label of an adversarial example changes, thereby revealing the presence of an adversarial image. Experimental results show that the proposed method can effectively resist IFGSM, DeepFool, and C&W attacks: its recognition accuracy is more than 10% higher than that of the HGD method, and its detection success rate on adversarial examples is 5% higher than that of the Feature Squeezing method. The method effectively reduces small noise perturbations in adversarial images and accurately detects adversarial examples from the change in the sample label, while maintaining the identification accuracy of original samples.
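
The detection pipeline reduces to a label-flip test after WebP compression and resizing, sketched below; `classify` is a placeholder classifier, and the quality factor and resize scale are illustrative, not the paper's tuned settings.

```python
# Label-flip test after WebP compression and resizing (illustrative).
import io
from PIL import Image

def war_detect(image: Image.Image, classify, quality=75, scale=0.9):
    label_before = classify(image)
    buf = io.BytesIO()
    image.save(buf, format="WEBP", quality=quality)  # lossy WebP compression
    buf.seek(0)
    filtered = Image.open(buf)
    w, h = filtered.size
    filtered = filtered.resize((int(w * scale), int(h * scale)))
    label_after = classify(filtered)
    return label_before != label_after   # label flip => likely adversarial
```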

* in Chinese 

Random Sampling for Fast Face Sketch Synthesis

Aug 11, 2017
Nannan Wang, Xinbo Gao, Jie Li

Exemplar-based face sketch synthesis plays an important role in both digital entertainment and law enforcement. It generally consists of two parts: neighbor selection and reconstruction weight representation. The main computational cost of exemplar-based face sketch synthesis methods lies in the neighbor selection process. State-of-the-art face sketch synthesis methods perform neighbor selection online, in a data-driven manner, by $K$ nearest neighbor ($K$-NN) search, which increases synthesis time. Moreover, since these methods must traverse the whole training dataset for neighbor selection, their computational complexity grows with the scale of the training database, limiting their scalability. In this paper, we propose a simple but effective offline random sampling scheme in place of online $K$-NN search to improve synthesis efficiency. Extensive experiments on public face sketch databases demonstrate the superiority of the proposed method over state-of-the-art methods in terms of both synthesis quality and time consumption. The proposed method can be extended to other heterogeneous face image transformation problems such as face hallucination. We release the source code of our proposed methods and the evaluation metrics for future study online: http://www.ihitworld.com/RSLCR.html.
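
The core replacement is a one-time offline random sample of training patches in place of a full online $K$-NN search; the shapes and sample size below are illustrative, and the reconstruction-weight step of the synthesis pipeline is omitted.

```python
# Offline random sampling in place of full online K-NN search.
import numpy as np

rng = np.random.default_rng(0)
train_patches = rng.standard_normal((100000, 64))   # training photo patches

# Offline: draw a fixed random subset once, before any test image arrives.
candidates = train_patches[rng.choice(100000, size=800, replace=False)]

def neighbors(test_patch, k=10):
    # At synthesis time only the small pre-sampled set is searched.
    d = np.linalg.norm(candidates - test_patch, axis=1)
    return candidates[np.argsort(d)[:k]]
```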


Object Proposal with Kernelized Partial Ranking

May 18, 2017
Jing Wang, Jie Shen, Ping Li

Object proposals are an ensemble of bounding boxes with high potential to contain objects. In order to determine a small set of proposals with high recall, a common scheme is to extract multiple features followed by a ranking algorithm, which however incurs two major challenges: {\bf 1)} the ranking model often imposes pairwise constraints between proposals, rendering the problem far from an efficient training/testing phase; {\bf 2)} linear kernels are utilized due to the computational and memory bottleneck of training a kernelized model. In this paper, we remedy these two issues by proposing a {\em kernelized partial ranking model}. In particular, we demonstrate that {\bf i)} our partial ranking model reduces the number of constraints from $O(n^2)$ to $O(nk)$, where $n$ is the number of all potential proposals for an image but we are only interested in the top-$k$ of them that have the largest overlap with the ground truth; {\bf ii)} we permit non-linear kernels in our model, which are often superior to linear classifiers in terms of accuracy. To mitigate the computational and memory issues, we introduce a consistent weighted sampling (CWS) paradigm that approximates the non-linear kernel and facilitates efficient learning. In fact, as we show, training a linear CWS model amounts to learning a kernelized model. Extensive experiments demonstrate that, equipped with the non-linear kernel and the partial ranking algorithm, recall at top-$k$ proposals can be substantially improved.

* Pattern Recognition, 2017 

Social Trust Prediction via Max-norm Constrained 1-bit Matrix Completion

Apr 24, 2015
Jing Wang, Jie Shen, Huan Xu

Social trust prediction addresses the significant problem of exploring interactions among users in social networks. Naturally, this problem can be formulated in the matrix completion framework, with each entry indicating trust or distrust. However, the social trust problem poses two challenges: 1) the observed data are sign (1-bit) measurements; 2) they are typically sampled non-uniformly. Most previous matrix completion methods do not handle these two issues well. Motivated by recent progress on the max-norm, we propose to solve the problem with a 1-bit max-norm constrained formulation. Since the max-norm is not easy to optimize, we utilize a reformulation of the max-norm that facilitates an efficient projected gradient descent algorithm. We demonstrate the superiority of our formulation on two benchmark datasets.
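
A minimal sketch of a reformulation-based solver: factor $M = UV^\top$, bound the row norms of $U$ and $V$ (which controls the max-norm), and run projected gradient descent on a logistic loss over the observed signs. The rank, radius, step size, and iteration count are illustrative, not the paper's settings.

```python
# Projected gradient descent under a max-norm-style factorization.
import numpy as np

def project_rows(X, R):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.minimum(1.0, R / np.maximum(norms, 1e-12))

def one_bit_maxnorm(S, mask, rank=10, R=1.0, lr=0.1, iters=200, seed=0):
    """S: +/-1 observed signs; mask: 1 where an entry is observed."""
    rng = np.random.default_rng(seed)
    n, m = S.shape
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    for _ in range(iters):
        M = U @ V.T
        G = -(mask * S) / (1.0 + np.exp(S * M))   # logistic-loss gradient
        Gu, Gv = G @ V, G.T @ U
        U = project_rows(U - lr * Gu, R)          # projected gradient steps
        V = project_rows(V - lr * Gv, R)
    return U @ V.T
```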


Learning to Rank Binary Codes

Oct 21, 2014
Jie Feng, Wei Liu, Yan Wang

Binary codes have been widely used in vision problems as a compact feature representation that achieves both space and time advantages. Various methods have been proposed to learn data-dependent hash functions that map a feature vector to a binary code. However, considerable information is inevitably lost during the binarization step, which also causes ambiguity in measuring sample similarity with the Hamming distance. Besides, the learned hash functions cannot be changed after training, which makes them incapable of adapting to new data outside the training set. To address both issues, in this paper we propose a flexible bitwise weight learning framework based on the binary codes obtained by state-of-the-art hashing methods, and incorporate the learned weights into the weighted Hamming distance computation. We then formulate the proposed framework as a ranking problem and leverage the Ranking SVM model to tackle the weight learning offline. The framework is further extended to an online mode that updates the weights each time new data arrive, thereby making it scalable to large and dynamic data sets. Extensive experimental results demonstrate significant performance gains from using binary codes with bitwise weighting in image retrieval tasks. It is appealing that the online weight learning achieves accuracy comparable to its offline counterpart, which makes our approach practical for realistic applications.
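
Once per-bit weights are available, ranking is just a weighted Hamming distance. In the sketch below the weights are random stand-ins for those the paper learns with a Ranking SVM, and the code length and database size are illustrative.

```python
# Ranking binary codes by a bitwise-weighted Hamming distance.
import numpy as np

bits = 64
rng = np.random.default_rng(0)
database = rng.integers(0, 2, size=(10000, bits), dtype=np.uint8)
weights = rng.random(bits)              # stand-in for learned bitwise weights

def weighted_hamming_rank(query, topk=10):
    mismatch = database ^ query         # 1 where bits differ
    scores = mismatch @ weights         # weighted Hamming distances
    return np.argsort(scores)[:topk]    # indices of the closest codes

print(weighted_hamming_rank(database[0]))
```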


Lasso Screening Rules via Dual Polytope Projection

Oct 15, 2014
Jie Wang, Peter Wonka, Jieping Ye

Lasso is a widely used regression technique to find sparse representations. When the dimension of the feature space and the number of samples are extremely large, solving the Lasso problem remains challenging. To improve the efficiency of solving large-scale Lasso problems, El Ghaoui and his colleagues have proposed the SAFE rules which are able to quickly identify the inactive predictors, i.e., predictors that have $0$ components in the solution vector. Then, the inactive predictors or features can be removed from the optimization problem to reduce its scale. By transforming the standard Lasso to its dual form, it can be shown that the inactive predictors include the set of inactive constraints on the optimal dual solution. In this paper, we propose an efficient and effective screening rule via Dual Polytope Projections (DPP), which is mainly based on the uniqueness and nonexpansiveness of the optimal dual solution due to the fact that the feasible set in the dual space is a convex and closed polytope. Moreover, we show that our screening rule can be extended to identify inactive groups in group Lasso. To the best of our knowledge, there is currently no "exact" screening rule for group Lasso. We have evaluated our screening rule using synthetic and real data sets. Results show that our rule is more effective in identifying inactive predictors than existing state-of-the-art screening rules for Lasso.
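
Stripped of the derivation, the basic DPP test has a simple form: starting from the dual optimum at a larger parameter $\lambda_0$, the nonexpansiveness of projection bounds how far the dual optimum at $\lambda$ can move, and any predictor whose worst-case dual correlation stays below 1 is certified inactive. A simplified sketch, assuming `beta0` solves the Lasso at `lam0`:

```python
# Simplified form of the basic DPP screening test, stated without proof.
import numpy as np

def dpp_screen(X, y, beta0, lam0, lam):
    theta0 = (y - X @ beta0) / lam0             # dual optimum at lam0
    radius = abs(1.0 / lam - 1.0 / lam0) * np.linalg.norm(y)
    corr = np.abs(X.T @ theta0)                 # current dual correlations
    col_norms = np.linalg.norm(X, axis=0)
    return corr + col_norms * radius < 1.0      # True => certified inactive
```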


Scaling SVM and Least Absolute Deviations via Exact Data Reduction

Oct 25, 2013
Jie Wang, Peter Wonka, Jieping Ye

The support vector machine (SVM) is a widely used method for classification. Although many efforts have been devoted to develop efficient solvers, it remains challenging to apply SVM to large-scale problems. A nice property of SVM is that the non-support vectors have no effect on the resulting classifier. Motivated by this observation, we present fast and efficient screening rules to discard non-support vectors by analyzing the dual problem of SVM via variational inequalities (DVI). As a result, the number of data instances to be entered into the optimization can be substantially reduced. Some appealing features of our screening method are: (1) DVI is safe in the sense that the vectors discarded by DVI are guaranteed to be non-support vectors; (2) the data set needs to be scanned only once to run the screening, whose computational cost is negligible compared to that of solving the SVM problem; (3) DVI is independent of the solvers and can be integrated with any existing efficient solvers. We also show that the DVI technique can be extended to detect non-support vectors in the least absolute deviations regression (LAD). To the best of our knowledge, there are currently no screening methods for LAD. We have evaluated DVI on both synthetic and real data sets. Experiments indicate that DVI significantly outperforms the existing state-of-the-art screening rules for SVM, and is very effective in discarding non-support vectors for LAD. The speedup gained by DVI rules can be up to two orders of magnitude.


Efficient Mixed-Norm Regularization: Algorithms and Safe Screening Methods

Jul 16, 2013
Jie Wang, Jun Liu, Jieping Ye

Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the $\ell_{1,q}$ norm with $q > 1$ is attractive in many applications of regression and classification in that it facilitates group sparsity in the model. The resulting optimization problem is, however, challenging to solve due to the inherent structure of the mixed-norm regularization. Existing work deals with the special cases $q = 1, 2, \infty$, and cannot be easily extended to the general case. In this paper, we propose an efficient algorithm based on the accelerated gradient method for solving the general $\ell_{1,q}$-regularized problem. One key building block of the proposed algorithm is the $\ell_{1,q}$-regularized Euclidean projection ($\mathrm{EP}_{1,q}$). Our theoretical analysis reveals the key properties of $\mathrm{EP}_{1,q}$ and illustrates why $\mathrm{EP}_{1,q}$ for general $q$ is significantly more challenging to solve than the special cases. Based on this analysis, we develop an efficient algorithm for $\mathrm{EP}_{1,q}$ by solving two zero finding problems. To further improve the efficiency of solving large dimensional mixed-norm regularized problems, we propose a screening method which is able to quickly identify the inactive groups, i.e., groups that have $0$ components in the solution. This may lead to a substantial reduction in the number of groups entered into the optimization. An appealing feature of our screening method is that the data set needs to be scanned only once to run the screening. Compared to the cost of solving the mixed-norm regularized problems, the computational cost of our screening test is negligible. The key to the proposed screening method is an accurate sensitivity analysis of the dual optimal solution as the regularization parameter varies. Experimental results demonstrate the efficiency of the proposed algorithm.

* arXiv admin note: text overlap with arXiv:1009.4766 

Improving Gated Recurrent Unit Based Acoustic Modeling with Batch Normalization and Enlarged Context

Nov 26, 2018
Jie Li, Yahui Shan, Xiaorui Wang, Yan Li

The use of future contextual information is typically helpful for acoustic modeling. Recently, we proposed an RNN model called minimal gated recurrent unit with input projection (mGRUIP), in which a context module, namely temporal convolution, is specifically designed to model the future context. This model, mGRUIP with context module (mGRUIP-Ctx), has been shown to utilize the future context effectively, with quite low model latency and computational cost. In this paper, we continue to improve mGRUIP-Ctx with two revisions: applying batch normalization (BN) methods and enlarging the model context. Experimental results on two Mandarin ASR tasks (8400 hours and 60K hours) show that the revised mGRUIP-Ctx outperforms LSTM by a large margin (11% to 38%). It even performs slightly better than a superior BLSTM on the 8400h task, with 33M fewer parameters and just 290ms model latency.

* ISCSLP 2018 

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization

Aug 14, 2018
Gang Wang, Georgios B. Giannakis, Jie Chen

Neural networks with ReLU activations have achieved great empirical success in various domains. However, existing results for learning ReLU networks either pose assumptions on the underlying data distribution (e.g., Gaussian), or require the network size and/or training set size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise, this contribution presents a novel stochastic gradient descent (SGD) algorithm that can provably train any single-hidden-layer ReLU network to global optimality, despite the presence of infinitely many bad local minima and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resulting iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of effective (non-zero) updates to be performed. Furthermore, generalization guarantees are developed for ReLU networks trained with the novel SGD. These guarantees highlight a fundamental difference (at least in the worst case) between learning a ReLU network and learning a leaky ReLU network in terms of sample complexity. Numerical tests using synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.
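
For orientation, a vanilla version of the training setup (hinge loss, SGD, one hidden ReLU layer with fixed output weights) is sketched below; the paper's noise-injection mechanism and its optimality guarantees are not reproduced.

```python
# Vanilla SGD on the hinge loss of a two-layer ReLU network,
# with linearly separable synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, d, k, lr = 200, 5, 16, 0.1
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d))        # linearly separable labels

W = 0.1 * rng.standard_normal((k, d))          # hidden ReLU layer (trained)
v = np.where(np.arange(k) < k // 2, 1.0, -1.0) / k  # fixed output weights

for epoch in range(50):
    for i in rng.permutation(n):
        h = np.maximum(W @ X[i], 0.0)          # ReLU activations
        if y[i] * (v @ h) < 1.0:               # hinge-loss margin violation
            W += lr * y[i] * np.outer(v * (h > 0), X[i])
```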

* 23 pages, 5 figures, work in progress 

Pyramid Attention Network for Semantic Segmentation

Jul 15, 2018
Hanchao Li, Pengfei Xiong, Jie An, Lingxue Wang

A Pyramid Attention Network (PAN) is proposed to exploit the impact of global contextual information in semantic segmentation. Unlike most existing works, we combine attention mechanisms and spatial pyramids to extract precise dense features for pixel labeling, instead of complicated dilated convolutions and artificially designed decoder networks. Specifically, we introduce a Feature Pyramid Attention module that applies a spatial pyramid attention structure to the high-level output and combines it with global pooling to learn a better feature representation, and a Global Attention Upsample module on each decoder layer that provides global context as guidance for low-level features in selecting category localization details. The proposed approach achieves state-of-the-art performance on the PASCAL VOC 2012 and Cityscapes benchmarks, with a new record mIoU of 84.0% on PASCAL VOC 2012, trained without the COCO dataset.
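
One of the two modules can be sketched roughly: a Global-Attention-Upsample-style block in which globally pooled high-level features gate the low-level features. The channel counts and exact operations below are guesses from the abstract's description, not the paper's definition.

```python
# A GAU-style decoder block (illustrative shapes and ops).
import tensorflow as tf

def gau(low, high, channels):
    g = tf.keras.layers.GlobalAveragePooling2D(keepdims=True)(high)
    g = tf.keras.layers.Conv2D(channels, 1, activation="sigmoid")(g)
    low = tf.keras.layers.Conv2D(channels, 3, padding="same")(low)
    low = low * g                                # channel-wise global guidance
    up = tf.keras.layers.UpSampling2D(2)(
        tf.keras.layers.Conv2D(channels, 1)(high))
    return low + up                              # fused decoder features

low = tf.random.normal((1, 64, 64, 256))    # low-level, higher resolution
high = tf.random.normal((1, 32, 32, 512))   # high-level, lower resolution
out = gau(low, high, channels=256)          # -> (1, 64, 64, 256)
```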


Spectral Network Embedding: A Fast and Scalable Method via Sparsity

Jun 13, 2018
Jie Zhang, Yan Wang, Jie Tang, Ming Ding

Network embedding aims to learn low-dimensional representations of nodes in a network while preserving the network structure and its inherent properties. It has attracted tremendous attention recently due to significant progress on downstream network learning tasks such as node classification, link prediction, and visualization. However, most existing network embedding methods suffer from expensive computation due to the large volume of networks. In this paper, we propose a $10\times \sim 100\times$ faster network embedding method, called Progle, which elegantly exploits the sparsity of online networks together with spectral analysis. Progle first constructs a \textit{sparse} proximity matrix and trains the network embedding efficiently via sparse matrix decomposition. It then introduces a network propagation pattern via spectral analysis to incorporate local and global structure information into the embedding. Besides, the model can be generalized to efficiently integrate network information into other insufficiently trained embeddings. Benefiting from sparse spectral network embedding, our experiments on four different datasets show that Progle outperforms or is comparable to the state-of-the-art unsupervised approaches DeepWalk, LINE, node2vec, GraRep, and HOPE in accuracy, while being $10\times$ faster than the fastest word2vec-based method. Finally, we validate the scalability of Progle on both real large-scale networks and synthetic networks of multiple scales.
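
The sparsity-first idea can be illustrated in a few lines: factor a sparse proximity matrix with a truncated SVD instead of densifying it. The proximity choice (a one-step transition matrix) and the dimensions are assumptions, and Progle's spectral propagation step is omitted.

```python
# Embedding via truncated SVD of a sparse proximity matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

A = sp.random(5000, 5000, density=1e-3, format="csr", random_state=0)
A = A + A.T                                    # symmetric sparse "network"
deg = np.asarray(A.sum(axis=1)).ravel() + 1e-12
P = sp.diags(1.0 / deg) @ A                    # sparse proximity matrix
U, s, _ = svds(P, k=128)                       # sparse matrix factorization
embedding = U * np.sqrt(s)                     # one 128-d vector per node
```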

