Bach in 2014: Music Composition with Recurrent Neural Network

Dec 14, 2014

I-Ting Liu, Bhiksha Ramakrishnan

**Click to Read Paper**

Protein-protein interaction extraction is a key precondition for constructing protein knowledge networks and is very important for biomedical research. This paper extracts directional protein-protein interactions from biological text using an SVM-based method. Experiments were evaluated on the LLL05 corpus with good results. The results show that dependency features are important for protein-protein interaction extraction and that features related to the interaction word are effective for judging the interaction direction. Finally, we analyze the effects of different features and outline plans for future work.
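The directional classification step described above can be sketched as a linear SVM over dependency-parse features. In this minimal sketch the feature names, toy examples, and training scheme (hinge-loss subgradient descent, used to keep the code self-contained) are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

# Toy candidate protein pairs: each example is a set of binary features
# drawn from the sentence's dependency parse (feature names are invented).
# Label 1 means the interaction direction is A -> B, 0 means B -> A.
examples = [
    ({"dep_path:nsubj-interact-obj", "word:activates"}, 1),
    ({"dep_path:obj-interact-nsubj", "word:activates"}, 0),
    ({"dep_path:nsubj-interact-obj", "word:binds"}, 1),
    ({"dep_path:obj-interact-nsubj", "word:inhibits"}, 0),
]

vocab = sorted({f for feats, _ in examples for f in feats})
index = {f: i for i, f in enumerate(vocab)}

def vectorize(feats):
    """One-hot encode a feature set against the training vocabulary."""
    x = np.zeros(len(vocab))
    for f in feats:
        if f in index:
            x[index[f]] = 1.0
    return x

X = np.stack([vectorize(feats) for feats, _ in examples])
y = np.array([1.0 if label == 1 else -1.0 for _, label in examples])

# Linear SVM trained by subgradient descent on the regularized hinge loss.
w, b, reg, lr = np.zeros(X.shape[1]), 0.0, 0.01, 0.1
for _ in range(200):
    for xi, yi in zip(X, y):
        margin = yi * (w @ xi + b)
        w -= lr * (reg * w - (yi * xi if margin < 1 else 0.0))
        if margin < 1:
            b += lr * yi

def predict_direction(feats):
    return "forward" if vectorize(feats) @ w + b > 0 else "reverse"
```

On these separable toy examples the classifier recovers the direction from the dependency-path feature, loosely mirroring the abstract's finding that dependency features drive the extraction.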

* This paper has been withdrawn by the author due to its lack of academic value

**Click to Read Paper**

Further properties of the forward-backward envelope with applications to difference-of-convex programming

Oct 18, 2016

Tianxiang Liu, Ting Kei Pong

In this paper, we further study the forward-backward envelope first introduced in [28] and [30] for problems whose objective is the sum of a proper closed convex function and a twice continuously differentiable possibly nonconvex function with Lipschitz continuous gradient. We derive sufficient conditions on the original problem for the corresponding forward-backward envelope to be a level-bounded and Kurdyka-{\L}ojasiewicz function with an exponent of $\frac12$; these results are important for the efficient minimization of the forward-backward envelope by classical optimization algorithms. In addition, we demonstrate how to minimize some difference-of-convex regularized least squares problems by minimizing a suitably constructed forward-backward envelope. Our preliminary numerical results on randomly generated instances of large-scale $\ell_{1-2}$ regularized least squares problems [37] illustrate that an implementation of this approach with a limited-memory BFGS scheme usually outperforms standard first-order methods such as the nonmonotone proximal gradient method in [35].
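As a concrete illustration of the object being minimized, the forward-backward envelope of an $\ell_1$ regularized least squares problem can be evaluated directly, since the proximal mapping of the $\ell_1$ norm is soft-thresholding. The sketch below uses arbitrary random data (not the paper's test instances) and checks the standard property that the envelope minorizes the original objective for step sizes in $(0, 1/L)$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))      # arbitrary problem data
b_vec = rng.standard_normal(20)
lam = 0.1                              # l1 regularization weight

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
gamma = 0.5 / L                        # any gamma in (0, 1/L)

def f(x):
    return 0.5 * np.sum((A @ x - b_vec) ** 2)

def grad_f(x):
    return A.T @ (A @ x - b_vec)

def prox_l1(u, t):
    """Proximal mapping of t*||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def fbe(x):
    """Forward-backward envelope of f + lam*||.||_1 at x."""
    u = x - gamma * grad_f(x)
    z = prox_l1(u, gamma * lam)
    moreau = lam * np.abs(z).sum() + np.sum((z - u) ** 2) / (2 * gamma)
    return f(x) - 0.5 * gamma * np.sum(grad_f(x) ** 2) + moreau

x = rng.standard_normal(50)
# The envelope minorizes the original objective F = f + lam*||.||_1.
assert fbe(x) <= f(x) + lam * np.abs(x).sum()
```

Since `fbe` is a smooth function of `x` (for this choice of problem), it can be handed to a smooth solver such as limited-memory BFGS, which is the strategy the abstract describes; the data above are random placeholders.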

* Theorem 3.3 is added. Included numerical tests on oversampled DCT matrix

**Click to Read Paper**

Constructing Narrative Event Evolutionary Graph for Script Event Prediction

May 16, 2018

Zhongyang Li, Xiao Ding, Ting Liu

* This paper has been accepted by IJCAI 2018

**Click to Read Paper**

Preference-based performance measures for Time-Domain Global Similarity method

Nov 08, 2017

Ting Lan, Jian Liu, Hong Qin

**Click to Read Paper**

Improvement of training set structure in fusion data cleaning using Time-Domain Global Similarity method

Jun 30, 2017

Jian Liu, Ting Lan, Hong Qin

**Click to Read Paper**

Improving Fully Convolution Network for Semantic Segmentation

Nov 28, 2016

Bing Shuai, Ting Liu, Gang Wang

**Click to Read Paper**

Aspect Level Sentiment Classification with Deep Memory Network

Sep 24, 2016

Duyu Tang, Bing Qin, Ting Liu

* published in EMNLP 2016

**Click to Read Paper**

Image Segmentation Using Hierarchical Merge Tree

Jul 31, 2016

Ting Liu, Mojtaba Seyedhosseini, Tolga Tasdizen

* IEEE Trans. Image Processing 25 (2016) 4596-4607

**Click to Read Paper**

Latent Feature Based FM Model For Rating Prediction

Oct 29, 2014

Xudong Liu, Bin Zhang, Ting Zhang, Chang Liu

* 4 pages, 3 figures, Large Scale Recommender Systems: workshop of RecSys 2014

**Click to Read Paper**

A Neural Multi-Task Learning Framework to Jointly Model Medical Named Entity Recognition and Normalization

Dec 14, 2018

Sendong Zhao, Ting Liu, Sicheng Zhao, Fei Wang

* AAAI-2019

**Click to Read Paper**

Sequence-to-Sequence Data Augmentation for Dialogue Language Understanding

Jul 04, 2018

Yutai Hou, Yijia Liu, Wanxiang Che, Ting Liu

* Accepted By COLING2018

**Click to Read Paper**

A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems

May 26, 2018

Tianxiang Liu, Ting Kei Pong, Akiko Takeda

We consider a class of nonconvex nonsmooth optimization problems whose objective is the sum of a smooth function and a finite number of nonnegative proper closed possibly nonsmooth functions (whose proximal mappings are easy to compute), some of which are further composed with linear maps. This kind of problem arises naturally in various applications when different regularizers are introduced for inducing simultaneous structures in the solutions. Solving these problems, however, can be challenging because of the coupled nonsmooth functions: the corresponding proximal mapping can be hard to compute, so that standard first-order methods such as the proximal gradient algorithm cannot be applied efficiently. In this paper, we propose a successive difference-of-convex approximation method for solving such problems. In this algorithm, we approximate the nonsmooth functions by their Moreau envelopes in each iteration. Making use of the simple observation that Moreau envelopes of nonnegative proper closed functions are continuous {\em difference-of-convex} functions, we can then approximately minimize the approximation function by first-order methods with suitable majorization techniques. These first-order methods can be implemented efficiently thanks to the fact that the proximal mapping of {\em each} nonsmooth function is easy to compute. Under suitable assumptions, we prove that the sequence generated by our method is bounded and that any accumulation point is a stationary point of the objective. We also discuss how our method can be applied to concrete applications such as nonconvex fused regularized optimization problems and simultaneously structured matrix optimization problems, and illustrate the performance numerically for these two specific applications.
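The "simple observation" above can be checked numerically in one dimension: for g = |.| (nonnegative, proper, closed), the Moreau envelope is the Huber function, and it decomposes as a difference of two convex functions, x^2/(2*lam) minus a convex remainder. The sketch below is purely illustrative, with an arbitrary parameter choice:

```python
import numpy as np

lam_m = 0.5  # Moreau envelope parameter (arbitrary choice)

def moreau_env_abs(x):
    """Moreau envelope of g = |.|: min_y |y| + (x - y)^2 / (2*lam_m).
    The minimizer is given by soft-thresholding."""
    y = np.sign(x) * max(abs(x) - lam_m, 0.0)
    return abs(y) + (x - y) ** 2 / (2 * lam_m)

xs = np.linspace(-3, 3, 201)
env = np.array([moreau_env_abs(x) for x in xs])

# Difference-of-convex decomposition: env(x) = x^2/(2*lam_m) - h(x), where
# h(x) = x^2/(2*lam_m) - env(x) is convex (a supremum of affine functions).
h = xs ** 2 / (2 * lam_m) - env

# Numerical convexity check: second differences of h are nonnegative.
assert (h[:-2] - 2 * h[1:-1] + h[2:] >= -1e-9).all()

# For g = |.| the envelope coincides with the Huber function.
huber = np.where(np.abs(xs) <= lam_m,
                 xs ** 2 / (2 * lam_m),
                 np.abs(xs) - lam_m / 2)
assert np.allclose(env, huber)
```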

**Click to Read Paper**

A refined convergence analysis of pDCA$_e$ with applications to simultaneous sparse recovery and outlier detection

Apr 19, 2018

Tianxiang Liu, Ting Kei Pong, Akiko Takeda

We consider the problem of minimizing a difference-of-convex (DC) function, which can be written as the sum of a smooth convex function with Lipschitz gradient, a proper closed convex function and a continuous possibly nonsmooth concave function. We refine the convergence analysis in [38] for the proximal DC algorithm with extrapolation (pDCA$_e$) and show that the whole sequence generated by the algorithm is convergent when the objective is level-bounded, {\em without} imposing differentiability assumptions on the concave part. Our analysis is based on a new potential function, which we assume to be a Kurdyka-{\L}ojasiewicz (KL) function. We also establish a relationship between our KL assumption and the one used in [38]. Finally, we demonstrate how the pDCA$_e$ can be applied to a class of simultaneous sparse recovery and outlier detection problems arising from robust compressed sensing in signal processing and least trimmed squares regression in statistics. Specifically, we show that the objectives of these problems can be written as level-bounded DC functions whose concave parts are {\em typically nonsmooth}. Moreover, for a large class of loss functions and regularizers, the KL exponent of the corresponding potential function is shown to be 1/2, which implies that the pDCA$_e$ is locally linearly convergent when applied to these problems. Our numerical experiments show that the pDCA$_e$ usually outperforms the proximal DC algorithm with nonmonotone linesearch [24, Appendix A] in both CPU time and solution quality for this particular application.
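A minimal sketch of a pDCA$_e$-style iteration for an $\ell_{1-2}$ regularized least squares problem of the kind mentioned above (DC objective 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2)). The random toy data, FISTA-type extrapolation schedule, and iteration count are illustrative assumptions, not the paper's exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))     # random toy instance
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)    # 5-sparse ground truth
b_vec = A @ x_true
lam = 0.05                             # weight of ||x||_1 - ||x||_2

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
step = 1.0 / L

def soft(u, t):
    """Proximal mapping of t*||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def obj(x):
    return (0.5 * np.sum((A @ x - b_vec) ** 2)
            + lam * (np.abs(x).sum() - np.linalg.norm(x)))

x_prev = x_cur = np.zeros(100)
t_prev = t_cur = 1.0
for _ in range(500):
    beta = (t_prev - 1.0) / t_cur                  # FISTA-type extrapolation
    y = x_cur + beta * (x_cur - x_prev)
    nrm = np.linalg.norm(x_cur)
    # Subgradient of the subtracted concave part lam*||.||_2, taken at x_cur.
    xi = lam * x_cur / nrm if nrm > 0 else np.zeros_like(x_cur)
    x_prev = x_cur
    x_cur = soft(y - step * (A.T @ (A @ y - b_vec) - xi), step * lam)
    t_prev, t_cur = t_cur, (1.0 + np.sqrt(1.0 + 4.0 * t_cur ** 2)) / 2.0

# The objective should have decreased substantially from the zero start.
assert obj(x_cur) < obj(np.zeros(100))
```

Each iteration linearizes only the concave part at the current iterate and takes an extrapolated proximal gradient step, so the per-iteration cost matches the plain proximal gradient method.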

**Click to Read Paper**

A Deep Neural Network for Chinese Zero Pronoun Resolution

Sep 26, 2017

Qingyu Yin, Weinan Zhang, Yu Zhang, Ting Liu

* Accepted by IJCAI 2017

**Click to Read Paper**

Unsupervised Total Variation Loss for Semi-supervised Deep Learning of Semantic Segmentation

Aug 07, 2018

Mehran Javanmardi, Mehdi Sajjadi, Ting Liu, Tolga Tasdizen

**Click to Read Paper**