Research papers and code for "An Tang":
We present Matrix Krasulina, an algorithm for online k-PCA that generalizes the classic Krasulina's method (Krasulina, 1969) from the vector to the matrix case. We show, both theoretically and empirically, that the algorithm naturally adapts to data low-rankness and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that, despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance-reduction step, for the k-PCA problem a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
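
The abstract does not quote the exact update rule, but a minimal numpy sketch of one natural matrix generalization of Krasulina's rule (move the d x k subspace estimate along x x^T W, with the component already explained by the current subspace projected out) might look like this; the dimensions, step-size schedule, and synthetic data are illustrative assumptions.

```python
import numpy as np

def matrix_krasulina_step(W, x, eta):
    """One stochastic update of a d x k subspace estimate W from sample x."""
    xw = x @ W                                    # k-dim coefficients of x on W
    # residual of x after projecting onto the current column span of W
    resid = x - W @ np.linalg.solve(W.T @ W, xw)
    return W + eta * np.outer(resid, xw)

# toy run on synthetic, intrinsically rank-k data (illustrative values)
rng = np.random.default_rng(0)
d, k, n = 50, 3, 5000
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]
X = rng.normal(size=(n, k)) @ basis.T
W = rng.normal(size=(d, k))
for t, x in enumerate(X):
    W = matrix_krasulina_step(W, x, eta=1.0 / (10 + t))
```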

This paper shows that pairwise PageRank orders emerge from two-hop walks. The main tool is a specially designed sign-mirror function and a parameter curve, whose low-order derivative information implies pairwise PageRank orders with high probability. We study the pairwise correct rate by placing the Google matrix $\textbf{G}$ in a probabilistic framework, where $\textbf{G}$ may be equipped with different random ensembles for model-generated or real-world networks with sparse, small-world, or scale-free features; the proof combines mathematical and numerical evidence. We believe that the underlying spectral distribution of the aforementioned networks is responsible for the high pairwise correct rate. Moreover, the perspective of this paper naturally leads to an $O(1)$ algorithm for any single pairwise PageRank comparison, assuming that both $\textbf{A}=\textbf{G}-\textbf{I}_n$, where $\textbf{I}_n$ denotes the identity matrix of order $n$, and $\textbf{A}^2$ are on hand (e.g., constructed offline in an incremental manner). On this basis it is easy to extract the top-$k$ list in $O(kn)$ time, making it possible for the PageRank algorithm to handle super-large-scale datasets in real time.
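
The sign-mirror construction and the $O(1)$ comparison test themselves are not reproduced here. For reference, the quantity being ordered is the PageRank vector, computable by standard power iteration; the column-stochastic convention for $\textbf{G}$ below is an assumption, and `pairwise_order` just reads off the exact answer a fast test should reproduce.

```python
import numpy as np

def pagerank(G, tol=1e-10, max_iter=1000):
    """Power iteration for the stationary vector: G @ pi = pi,
    assuming G is column-stochastic (columns sum to 1)."""
    n = G.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = G @ pi
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

def pairwise_order(pi, i, j):
    """Reference pairwise PageRank order between pages i and j."""
    return i if pi[i] >= pi[j] else j
```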

* 29 pages, 2 figures
E-commerce personalization aims to provide individualized offers, product recommendations, and other content to customers based on their interests. The foundation of any personalization effort is customer segmentation: grouping customers together according to identifiable attributes such as geolocation, gender, age, and interests. Personality quizzes turn out to be a powerful tool for customer segmentation, since they actively ask customers questions, and marketers use them as an effective method of generating leads and increasing e-commerce sales. In this paper, we study the problem of how to select and sequence a group of quiz questions so as to optimize the quality of customer segmentation. In particular, we use conditional entropy to measure the utility of a given group of quiz questions, and we model user behavior when interacting with a sequence of quiz questions as a Markov process. We then develop a series of question allocation strategies with provable performance bounds.
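
The paper's Markov-process model and provable strategies are not spelled out in the abstract; as a generic illustration of the conditional-entropy objective, the hypothetical sketch below greedily adds the question that most reduces the entropy of the segment label given the answers collected so far.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def greedy_questions(answers, segments, budget):
    """answers[q][u]: user u's answer to question q; segments[u]: u's segment.
    Greedily minimize H(segment | chosen answers) -- an illustration,
    not the paper's allocation strategy."""
    chosen = []
    for _ in range(budget):
        def cond_entropy(q):
            groups = {}
            for u in range(len(segments)):
                key = tuple(answers[p][u] for p in chosen + [q])
                groups.setdefault(key, []).append(segments[u])
            n = len(segments)
            return sum(len(g) / n * entropy(g) for g in groups.values())
        best = min((q for q in range(len(answers)) if q not in chosen),
                   key=cond_entropy)
        chosen.append(best)
    return chosen
```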

We describe classical analogues to quantum algorithms for principal component analysis and nearest-centroid clustering. Given sampling assumptions, our classical algorithms run in time polylogarithmic in the input size, matching the runtime of the quantum algorithms up to polynomial slowdown. These algorithms are evidence that their corresponding problems do not yield exponential quantum speedups. To build our classical algorithms, we use the same techniques as in our previous work dequantizing a quantum recommendation-systems algorithm. We thus provide further evidence for the strength of classical $\ell^2$-norm sampling assumptions as a replacement for quantum state preparation assumptions in the machine learning domain.
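
The core classical primitive here is $\ell^2$-norm (length-squared) sampling: drawing index $i$ with probability $|v_i|^2/\|v\|^2$, exactly the measurement probabilities of the quantum state $|v\rangle$. A minimal sketch:

```python
import numpy as np

def l2_sample(v, rng, size=1):
    """Sample indices i with probability v[i]**2 / ||v||**2 --
    the classical stand-in for preparing and measuring |v>."""
    p = v**2 / (v @ v)
    return rng.choice(len(v), size=size, p=p)

rng = np.random.default_rng(0)
v = np.array([3.0, 4.0])            # indices drawn with probs 9/25 and 16/25
print(l2_sample(v, rng, size=10))
```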

* 5 pages
A recommendation system suggests products to users based on data about user preferences. It is typically modeled as the problem of completing an $m\times n$ matrix of small rank $k$. We give the first classical algorithm to produce a recommendation in $O(\text{poly}(k)\text{polylog}(m,n))$ time, which is an exponential improvement on previous algorithms that run in time linear in $m$ and $n$. Our strategy is inspired by a quantum algorithm by Kerenidis and Prakash: like the quantum algorithm, instead of reconstructing a user's full list of preferences, we only seek a randomized sample from the user's preferences. Our main result is an algorithm that samples high-weight entries from a low-rank approximation of the input matrix in time independent of $m$ and $n$, given natural sampling assumptions on that input matrix. As a consequence, we show that Kerenidis and Prakash's quantum machine learning (QML) algorithm, one of the strongest candidates for provably exponential speedups in QML, does not in fact give an exponential speedup over classical algorithms.
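
The full algorithm is not reproduced here; as a sketch of the access model it relies on, length-squared row sampling (in the spirit of Frieze-Kannan-Vempala low-rank approximation) builds a small sketch of the matrix in time independent of its dimensions, assuming row norms are available.

```python
import numpy as np

def sample_rows(A, s, rng):
    """Draw s row indices with probability ||A_i||^2 / ||A||_F^2 and
    rescale so the sketch S satisfies E[S.T @ S] = A.T @ A."""
    row_norms = (A * A).sum(axis=1)
    p = row_norms / row_norms.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    S = A[idx] / np.sqrt(s * p[idx, None])
    return idx, S
```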

* 35 pages
Segments that span contiguous parts of inputs, such as phonemes in speech, named entities in sentences, and actions in videos, occur frequently in sequence prediction problems. Segmental models, a class of models that explicitly hypothesize segments, have allowed the exploration of rich segment features for sequence prediction. However, segmental models suffer from slow decoding, hampering the use of computationally expensive features. In this thesis, we introduce discriminative segmental cascades, a multi-pass inference framework that allows us to improve accuracy by adding higher-order features and neural segmental features while maintaining efficiency. We also show that, instead of including more features to obtain better accuracy, segmental cascades can be used to speed up training and decoding. Segmental models, like conventional speech recognizers, are typically trained in multiple stages: in the first stage, a frame classifier is trained with manual alignments, and in the second stage, segmental models are trained with manual alignments and the outputs of the frame classifier. However, obtaining manual alignments is time-consuming and expensive. We explore end-to-end training for segmental models with various loss functions, and show how end-to-end training with marginal log loss can eliminate the need for detailed manual alignments. We draw connections between the marginal log loss and a popular end-to-end training approach called connectionist temporal classification. We present a unifying framework for various end-to-end graph-search-based models, such as hidden Markov models, connectionist temporal classification, and segmental models. Finally, we discuss possible extensions of segmental models to large-vocabulary sequence prediction tasks.
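
As background for why decoding is slow: segmental (semi-Markov) Viterbi search maximizes over segment boundaries, durations, and labels. A minimal first-order sketch with a pluggable segment scorer, where the duration loop is what makes rich segment features expensive:

```python
def segmental_viterbi(T, labels, max_dur, seg_score):
    """best[t] = score of the best segmentation of frames [0, t).
    seg_score(s, t, y): score of a segment over frames [s, t) with label y.
    Runtime O(T * max_dur * |labels|)."""
    best = [0.0] + [float("-inf")] * T
    back = [None] * (T + 1)
    for t in range(1, T + 1):
        for d in range(1, min(t, max_dur) + 1):
            for y in labels:
                cand = best[t - d] + seg_score(t - d, t, y)
                if cand > best[t]:
                    best[t], back[t] = cand, (t - d, y)
    segs, t = [], T
    while t > 0:
        s, y = back[t]
        segs.append((s, t, y))
        t = s
    return best[T], segs[::-1]

# toy usage: favor segments whose length matches the label's "ideal" duration
score = lambda s, t, y: -abs((t - s) - y)
print(segmental_viterbi(T=6, labels=[2, 3], max_dur=4, seg_score=score))
```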

* Thesis
We derive the limiting distribution for the largest eigenvalues of the adjacency matrix for a stochastic blockmodel graph when the number of vertices tends to infinity. We show that, in the limit, these eigenvalues are jointly multivariate normal with bounded covariances. Our result extends the classic result of Füredi and Komlós on the fluctuation of the largest eigenvalue for Erdős-Rényi graphs.
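
A quick empirical illustration of the claim (an assumption-laden simulation, not from the paper): sample adjacency matrices from a two-block SBM and look at the fluctuation of the largest eigenvalues across draws; their standard deviations stay O(1) as predicted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 500, np.array([[0.5, 0.2], [0.2, 0.3]])   # illustrative block model
z = rng.integers(0, 2, size=n)                    # block memberships
P = B[z][:, z]                                    # edge probabilities

tops = []
for _ in range(30):
    U = rng.random((n, n))
    A = np.triu(U < P, 1)                         # upper-triangular coin flips
    A = (A + A.T).astype(float)                   # symmetric 0/1 adjacency
    vals = np.linalg.eigvalsh(A)
    tops.append(vals[-2:])                        # two largest eigenvalues
tops = np.array(tops)
print(tops.mean(axis=0), tops.std(axis=0))        # O(1) fluctuations
```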

* 13 pages, 2 figures
This paper reviews the state of the art of semantic change computation, an emerging research field in computational linguistics, and proposes a framework that summarizes the literature by identifying and expounding five essential components of the field: diachronic corpus, diachronic word sense characterization, change modelling, evaluation data, and data visualization. Despite the potential of the field, the review shows that current studies mainly focus on testing hypotheses proposed in theoretical linguistics and that several core issues remain to be solved: the need for diachronic corpora of languages other than English, the need for comprehensive evaluation data, the comparison and construction of approaches to diachronic word sense characterization and change modelling, and further exploration of data visualization techniques for hypothesis justification.

* 2018, Natural Language Engineering
* This is a review of the state of the art of semantic change computation, submitted to Natural Language Engineering
TF.Learn is a high-level Python module for distributed machine learning inside TensorFlow. It provides an easy-to-use Scikit-learn-style interface to simplify the process of creating, configuring, training, evaluating, and experimenting with machine learning models. TF.Learn integrates a wide range of state-of-the-art machine learning algorithms built on top of TensorFlow's low-level APIs for small- to large-scale supervised and unsupervised problems. The module aims to bring machine learning to non-specialists through a general-purpose high-level language, as well as to researchers who want to implement, benchmark, and compare their new methods in a structured environment. Emphasis is put on ease of use, performance, documentation, and API consistency.
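
A usage sketch in the style of the TF.Learn (tf.contrib.learn) interface described above; the exact call signatures varied across TensorFlow 1.x versions, and the toy data is made up, so treat this as illustrative rather than canonical.

```python
import numpy as np
from tensorflow.contrib import learn  # TF.Learn, as shipped in TF 1.x

# toy 4-feature, 3-class data (illustrative)
x_train = np.random.rand(120, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=120)

feature_columns = learn.infer_real_valued_columns_from_input(x_train)
classifier = learn.DNNClassifier(hidden_units=[10, 20, 10],
                                 n_classes=3,
                                 feature_columns=feature_columns)
classifier.fit(x_train, y_train, steps=200)   # Scikit-learn-style fit
print(classifier.evaluate(x_train, y_train)["accuracy"])
```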

This work focuses on top-k recommendation in domains where the underlying data distribution shifts over time. We propose to learn a time-dependent bias for each item on top of any existing recommendation engine. Such a bias learning process alleviates data sparsity in constructing the engine and at the same time captures recent trend shifts observed in the data. We present an alternating optimization framework to solve the bias learning problem, and develop methods to handle a variety of commonly used recommendation evaluation criteria, as well as large numbers of items and users in practice. The proposed algorithm is examined, both offline and online, using real-world data sets collected from the largest retailer worldwide. Empirical results demonstrate that bias learning can almost always boost recommendation performance. We encourage other practitioners to adopt it as a standard component in recommender systems where temporal dynamics are the norm.
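
The alternating optimization itself is not in the abstract; a hypothetical sketch of the serving side shows the deployment pattern: each item's base score from the existing engine is adjusted by its learned time-dependent bias for the current period before the top-k cut.

```python
import numpy as np

def rerank_top_k(base_scores, item_bias, k):
    """base_scores[i]: score from the existing engine for item i;
    item_bias[i]: learned time-dependent bias for the current period."""
    adjusted = base_scores + item_bias
    return np.argsort(-adjusted)[:k]

scores = np.array([0.9, 0.7, 0.6, 0.5])
bias = np.array([-0.3, 0.0, 0.25, 0.0])   # e.g., item 2 is trending up
print(rerank_top_k(scores, bias, k=2))    # -> [2 1]
```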

* 10 pages
Recently, fully-connected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics. For classification tasks, most of these "deep learning" models employ the softmax activation function for prediction and minimize cross-entropy loss. In this paper, we demonstrate a small but consistent advantage of replacing the softmax layer with a linear support vector machine: learning then minimizes a margin-based loss instead of the cross-entropy loss. While there have been various combinations of neural nets and SVMs in prior work, our results using L2-SVMs show that simply replacing softmax with linear SVMs gives significant gains on the popular deep learning datasets MNIST and CIFAR-10 and on the ICML 2013 Representation Learning Workshop's facial expression recognition challenge.
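
The change is confined to the top layer's objective: instead of softmax cross-entropy, minimize a one-vs-rest squared hinge (L2-SVM) loss on the last layer's raw linear outputs. A numpy sketch of that loss:

```python
import numpy as np

def l2_svm_loss(scores, labels, n_classes):
    """Squared hinge loss on raw linear outputs (no softmax).
    Targets are +/-1 in a one-vs-rest encoding, as in the L2-SVM."""
    t = -np.ones((len(labels), n_classes))
    t[np.arange(len(labels)), labels] = 1.0
    margins = np.maximum(0.0, 1.0 - t * scores)
    return (margins ** 2).sum(axis=1).mean()

scores = np.array([[2.0, -0.5, -1.2], [0.3, 0.8, -2.0]])
print(l2_svm_loss(scores, labels=np.array([0, 1]), n_classes=3))
```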

* Contribution to the ICML 2013 Challenges in Representation Learning Workshop
This paper revisits the classic iterative proportional scaling (IPS) from a modern optimization perspective. In contrast to the criticisms made in the literature, we show that, based on a coordinate descent characterization, IPS can be slightly modified to deliver coefficient estimates and, from a majorization-minimization standpoint, IPS can be extended to handle log-affine models with features that are not necessarily binary-valued or nonnegative. Furthermore, state-of-the-art optimization techniques such as block-wise computation, randomization, and momentum-based acceleration can be employed to obtain more scalable IPS algorithms, as well as regularized variants of IPS for concurrent feature selection.
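
For reference, classic IPS on a two-way contingency table alternately rescales rows and columns to match target margins; the coordinate-descent and MM extensions discussed above build on this loop.

```python
import numpy as np

def ipf(table, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Classic iterative proportional scaling on a 2-way table."""
    X = table.astype(float).copy()
    for _ in range(max_iter):
        X *= (row_targets / X.sum(axis=1))[:, None]   # match row margins
        X *= (col_targets / X.sum(axis=0))[None, :]   # match column margins
        if np.abs(X.sum(axis=1) - row_targets).max() < tol:
            break
    return X

seed = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ipf(seed, row_targets=np.array([10.0, 20.0]),
          col_targets=np.array([12.0, 18.0])))
```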

This paper presents a simple but effective density-based outlier detection approach based on local kernel density estimation (KDE). A Relative Density-based Outlier Score (RDOS) is introduced to measure the local outlierness of objects, in which the density distribution at the location of an object is estimated with a local KDE method over the extended nearest neighbors of the object. Instead of using only the $k$ nearest neighbors, we also consider the reverse nearest neighbors and shared nearest neighbors of an object for density estimation. Some theoretical properties of the proposed RDOS, including its expected value and false-alarm probability, are derived. A comprehensive experimental study on both synthetic and real-life data sets demonstrates that our approach is more effective than state-of-the-art outlier detection methods.
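
A compact, simplified sketch of the RDOS idea: estimate each object's density with a Gaussian KDE over its neighborhood, then score it by the ratio of its neighbors' average density to its own. The sketch uses only $k$-nearest neighbors; the paper's extended set additionally includes reverse and shared nearest neighbors, and the kernel choice here is an assumption.

```python
import numpy as np

def rdos(X, k=5, h=1.0):
    """Relative density-based outlier score (simplified sketch).
    Scores well above 1 indicate likely outliers."""
    n, d = X.shape
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    knn = np.argsort(D, axis=1)[:, 1:k + 1]           # skip self at index 0
    norm = (2 * np.pi * h * h) ** (-d / 2)
    dens = np.array([
        norm * np.exp(-D[i, knn[i]] ** 2 / (2 * h * h)).mean()
        for i in range(n)
    ])
    return np.array([dens[knn[i]].mean() / dens[i] for i in range(n)])

X = np.vstack([np.random.default_rng(0).normal(size=(30, 2)), [[6.0, 6.0]]])
print(rdos(X).round(2)[-1])   # the appended far-away point scores high
```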

* 22 pages, 14 figures, submitted to Pattern Recognition Letters
In this paper, we present a new wrapper feature selection approach based on Jensen-Shannon (JS) divergence, termed feature selection with maximum JS-divergence (FSMJ), for text categorization. Unlike most existing feature selection approaches, the proposed FSMJ approach is based on real-valued features, which provide more information for discrimination than the binary-valued features used in conventional approaches. We show that FSMJ is a greedy approach and that the JS-divergence monotonically increases as more features are selected. We conduct several experiments on real-life data sets, comparing with state-of-the-art feature selection approaches for text categorization. The superior performance of the proposed FSMJ approach demonstrates its effectiveness and further indicates its wide potential applications in data mining.
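
For reference, the Jensen-Shannon divergence that the selection criterion maximizes, between two discrete distributions (e.g., class-conditional feature distributions); the greedy wrapper loop itself mirrors the usual forward-selection pattern and is omitted.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in bits) between two distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * np.log2(a / b)).sum()
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# identical distributions -> 0; disjoint support -> 1 bit
print(js_divergence([0.5, 0.5, 0.0], [0.0, 0.0, 1.0]))  # ~1.0
```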

* 8 pages, 6 figures, World Congress on Intelligent Control and Automation, 2016
Knowledge graph reasoning, which aims at predicting missing facts by reasoning over the observed facts, is critical to many applications. The problem has been widely explored by traditional logic rule-based approaches and by recent knowledge graph embedding methods. A principled logic rule-based approach is the Markov Logic Network (MLN), which is able to leverage domain knowledge with first-order logic and meanwhile handle its uncertainty. However, inference in MLNs is usually very difficult due to the complicated graph structures. Different from MLNs, knowledge graph embedding methods (e.g., TransE, DistMult) learn effective entity and relation embeddings for reasoning, which are much more effective and efficient, but they are unable to leverage domain knowledge. In this paper, we propose the probabilistic Logic Neural Network (pLogicNet), which combines the advantages of both methods. A pLogicNet defines the joint distribution of all possible triplets by using a Markov logic network with first-order logic, which can be efficiently optimized with the variational EM algorithm. In the E-step, a knowledge graph embedding model is used to infer the missing triplets, while in the M-step, the weights of logic rules are updated based on both the observed and predicted triplets. Experiments on multiple knowledge graphs demonstrate the effectiveness of pLogicNet over many competitive baselines.
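
The embedding component used in the E-step can be as simple as DistMult, which scores a triplet (h, r, t) by a trilinear product of the head, relation, and tail embeddings; a minimal sketch (the variational EM loop and rule-weight updates are not reproduced here, and the sizes are illustrative):

```python
import numpy as np

def distmult_score(E, R, h, r, t):
    """DistMult triplet score: sum_i e_h[i] * w_r[i] * e_t[i].
    E: (num_entities, dim) entity embeddings; R: (num_relations, dim)
    relation embeddings. Higher scores = more plausible triplets."""
    return (E[h] * R[r] * E[t]).sum()

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 16))
R = rng.normal(size=(10, 16))
print(distmult_score(E, R, h=3, r=1, t=42))
```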

Prevalent approaches to the Chinese word segmentation task mostly rely on Bi-LSTM neural networks. However, Bi-LSTM-based methods have some inherent drawbacks: they are hard to parallelize, inefficient at applying dropout to inhibit overfitting, and poor at capturing character information at distant positions in long sentences. In this work, we propose a sequence-to-sequence transformer model for Chinese word segmentation based on a type of convolutional neural network called the temporal convolutional network (TCN). The model uses a temporal convolutional network as the encoder and a single fully-connected layer as the decoder, applies dropout to inhibit overfitting, captures character information at distant positions in a sentence by stacking encoder layers, attaches a conditional random field (CRF) model to train the parameters, and uses the Viterbi algorithm to infer the final segmentation. Experiments on traditional and simplified Chinese corpora show that the model matches the segmentation performance of Bi-LSTM-based methods while offering far better parallelism.
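
The abstract does not give the layer details; as a generic illustration of the TCN building block, a dilated causal convolution lets each output see only past inputs, and stacking layers with growing dilation is what reaches distant characters. A minimal sketch with assumed toy weights:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution: output[t] depends only on x[<= t].
    Stacking such layers with doubling dilation grows the receptive
    field exponentially -- how a TCN encoder reaches distant positions."""
    k, pad = len(w), dilation * (len(w) - 1)
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([
        sum(w[i] * xp[pad + t - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

# output[t] = 0.5 * x[t] + 0.5 * x[t - 2]
print(causal_dilated_conv([1, 2, 3, 4, 5], w=[0.5, 0.5], dilation=2))
```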

Constrained submodular maximization has been extensively studied in recent years. In this paper, we study adaptive robust optimization with nearly submodular structure (ARONSS). Our objective is to randomly select a subset of items that maximizes the worst-case value of several reward functions simultaneously. Our work differs from existing studies in two ways: (1) we study the robust optimization problem in the adaptive setting, i.e., one needs to adaptively select items based on the feedback collected from picked items, and (2) our results apply to a broad range of reward functions characterized as $\epsilon$-nearly submodular functions. We first analyze the adaptivity gap of ARONSS and show that the gap between the best adaptive solution and the best non-adaptive solution is bounded. We then propose two algorithms that achieve bounded approximation ratios.

Object detection frameworks play a crucial role in autonomous driving. In this paper, we introduce the real-time object detection framework You Only Look Once (YOLOv1) and the related improvements in YOLOv2. We further explore the capability of YOLOv2 by applying its pre-trained model to object detection tasks in specific traffic scenes. The four artificially designed traffic scenes are single-car, single-person, frontperson-rearcar and frontcar-rearperson.

In this paper, we propose a time-efficient approach to generate safe, smooth, and dynamically feasible trajectories for quadrotors in obstacle-cluttered environments. By using uniform B-splines to represent trajectories, we transform trajectory planning into a graph-search problem over B-spline control points in a discretized space. A strict convex hull property of B-splines is derived to guarantee the dynamical feasibility of the entire trajectory. A novel non-uniform kinodynamic search strategy is adopted, in which the step length is dynamically adjusted during the search according to the Euclidean signed distance field (ESDF), so that the trajectory achieves reasonable time allocation and stays away from obstacles. Non-static initial and goal states are allowed, so the method can be used for online local replanning as well as global planning. Extensive simulation and hardware experiments show that our method achieves higher performance compared with the state-of-the-art method.
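
For reference, evaluating one span of a uniform cubic B-spline from four control points with the standard basis matrix; since the evaluated point is a convex combination of the control points, each span stays inside their convex hull, which is the property used to certify feasibility segment by segment. The control points below are illustrative.

```python
import numpy as np

# standard uniform cubic B-spline basis matrix
M4 = np.array([[1, 4, 1, 0],
               [-3, 0, 3, 0],
               [3, -6, 3, 0],
               [-1, 3, -3, 1]]) / 6.0

def bspline_point(ctrl, u):
    """Position on one cubic span from 4 control points, u in [0, 1].
    The basis weights are nonnegative and sum to 1, so the point is a
    convex combination of ctrl's rows (the convex hull property)."""
    U = np.array([1.0, u, u * u, u ** 3])
    return (U @ M4) @ ctrl

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
print(bspline_point(ctrl, 0.5))
```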

* 7 pages, 6 figures, conference
On the one hand, fake news articles are nowadays easily propagated through various online media platforms and have become a major threat to the trustworthiness of information. On the other hand, our understanding of the language of fake news is still minimal. Incorporating the hierarchical discourse-level structure of fake and real news articles is one crucial step toward a better understanding of how these articles are structured. Nevertheless, this has rarely been investigated in the fake news detection domain and faces tremendous challenges. First, existing methods for capturing discourse-level structure rely on annotated corpora, which are not available for fake news datasets. Second, how to extract useful information from such discovered structures is another challenge. To address these challenges, we propose the Hierarchical Discourse-level Structure for Fake news detection (HDSF) framework. HDSF learns and constructs a discourse-level structure for fake/real news articles in an automated and data-driven manner. Moreover, we identify insightful structure-related properties that can explain the discovered structures and boost our understanding of fake news. Experiments show the effectiveness of the proposed approach, and further structural analysis suggests that real and fake news present substantial differences in their hierarchical discourse-level structures.

* Accepted to 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics June 2-7, 2019 Minneapolis, USA