Models, code, and papers for "Jiaya Jia":

Dense Scattering Layer Removal

Oct 13, 2013
Qiong Yan, Li Xu, Jiaya Jia

We propose a new model, together with an advanced optimization scheme, to separate a thick scattering medium layer from a single natural image. It handles challenging underwater scenes and images taken in fog and sandstorms, all of which suffer from significantly reduced visibility. Our method addresses a critical issue -- that is, originally unnoticeable impurities are greatly magnified after the scattering layer is removed -- with transmission-aware optimization. We introduce non-local structure-aware regularization to properly constrain transmission estimation without introducing halo artifacts. A selective-neighbor criterion is presented to convert the unconventional constrained optimization problem into an unconstrained one that can be solved efficiently.
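
For orientation, single-image scattering removal is commonly built on the formation model I = J*t + A*(1 - t). The minimal NumPy sketch below shows only how a scene estimate is recovered once a transmission map and airlight are available; the paper's contribution, the transmission-aware, non-locally regularized estimation of t itself, is not reproduced here.

```python
import numpy as np

def remove_scattering_layer(image, transmission, airlight, t_floor=0.1):
    """Invert the standard scattering formation model I = J*t + A*(1 - t).

    image        : HxWx3 float array in [0, 1], the observed degraded image
    transmission : HxW float array in (0, 1], the estimated transmission map
    airlight     : length-3 array, the estimated atmospheric/ambient light A
    t_floor      : lower bound on t to avoid magnifying noise in dense regions
    """
    t = np.clip(transmission, t_floor, 1.0)[..., None]  # HxWx1 for broadcasting
    scene = (image - airlight) / t + airlight            # recover J
    return np.clip(scene, 0.0, 1.0)
```
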

* 10 pages, 10 figures, SIGGRAPH Asia 2013 Technical Briefs 

ESSP: An Efficient Approach to Minimizing Dense and Nonsubmodular Energy Functions

May 19, 2014
Wei Feng, Jiaya Jia, Zhi-Qiang Liu

Many recent advances in computer vision have demonstrated the impressive power of dense and nonsubmodular energy functions in solving visual labeling problems. However, minimizing such energies is challenging: none of the existing techniques (such as s-t graph cut, QPBO, BP and TRW-S) can do this well on its own. In this paper, we present an efficient method, ESSP, to optimize binary MRFs with arbitrary pairwise potentials, which may be nonsubmodular and densely connected. We also provide a comparative study of our approach and several recent promising methods, from which we make recommendations on combining the existing methods that perform best in different situations for this challenging problem. Experimental results validate that, for dense and nonsubmodular energy functions, the proposed approach usually obtains lower energies than the best combination of other techniques in comparable running time.
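
For context, the sketch below spells out the kind of objective being minimized: a binary MRF energy with unary terms and arbitrary, possibly nonsubmodular and densely connected, pairwise terms. It only evaluates the energy of a labeling; it is not the ESSP optimization procedure.

```python
import numpy as np

def mrf_energy(labels, unary, pairwise):
    """Evaluate a binary MRF energy E(x) = sum_i u_i(x_i) + sum_{ij} p_ij(x_i, x_j).

    labels   : length-n sequence of {0, 1} assignments
    unary    : (n, 2) array; unary[i, l] is the cost of giving node i label l
    pairwise : dict mapping an edge (i, j) to a 2x2 cost table; with dense,
               possibly nonsubmodular connectivity this may hold O(n^2) edges
    """
    labels = np.asarray(labels, dtype=int)
    energy = unary[np.arange(len(labels)), labels].sum()
    for (i, j), table in pairwise.items():
        energy += table[labels[i], labels[j]]
    return float(energy)
```
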

* 9 pages, 11 figures 

Fast Point R-CNN

Aug 16, 2019
Yilun Chen, Shu Liu, Xiaoyong Shen, Jiaya Jia

We present a unified, efficient and effective framework for point-cloud-based 3D object detection. Our two-stage approach utilizes both a voxel representation and raw point cloud data to exploit their respective advantages. The first-stage network, with the voxel representation as input, consists only of light convolutional operations and produces a small number of high-quality initial predictions. The coordinates and indexed convolutional features of each point in an initial prediction are effectively fused with an attention mechanism, preserving both accurate localization and context information. The second stage works on the interior points with their fused features to further refine the prediction. Our method is evaluated on the KITTI dataset, in terms of both 3D and Bird's Eye View (BEV) detection, and achieves state-of-the-art results at a 15 FPS detection rate.
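
A rough, hypothetical sketch of the fusion idea described above: per-point coordinates are concatenated with indexed convolutional features and reweighted by a learned attention gate. Layer names and sizes are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PointFeatureFusion(nn.Module):
    """Illustrative attention-gated fusion of point coordinates and conv features."""
    def __init__(self, feat_dim, out_dim):
        super().__init__()
        in_dim = feat_dim + 3
        self.attn = nn.Sequential(nn.Linear(in_dim, in_dim), nn.Sigmoid())
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, coords, feats):
        # coords: (N, 3) point coordinates, feats: (N, feat_dim) indexed features
        x = torch.cat([coords, feats], dim=-1)
        return self.proj(self.attn(x) * x)   # channel-wise attention gating
```
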


Sequential Context Encoding for Duplicate Removal

Oct 20, 2018
Lu Qi, Shu Liu, Jianping Shi, Jiaya Jia

Duplicate removal is a critical step for producing a reasonable number of predictions in prevalent proposal-based object detection frameworks. Albeit simple and effective, most previous algorithms utilize a greedy process without making sufficient use of properties of the input data. In this work, we design a new two-stage framework to effectively select the appropriate proposal candidate for each object. The first stage suppresses most of the easy negative object proposals, while the second stage selects true positives in the reduced proposal set. The two stages share the same network structure, i.e., an encoder and a decoder formed as recurrent neural networks (RNN) with global attention and a context gate. The encoder scans proposal candidates sequentially to capture global context information, which is then fed to the decoder to extract the optimal proposals. In our extensive experiments, the proposed method outperforms other alternatives by a large margin.
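
A minimal sketch of the sequential-context idea, assuming per-proposal feature vectors are already available: a GRU encoder scans the proposals to build global context and a small head rescores each one. The paper's full encoder-decoder with global attention and context gate is not reproduced.

```python
import torch
import torch.nn as nn

class DuplicateRemovalRNN(nn.Module):
    """Illustrative sequential encoder for proposal rescoring (keep/suppress)."""
    def __init__(self, in_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hidden_dim, batch_first=True)
        self.keep_score = nn.Linear(hidden_dim, 1)

    def forward(self, proposal_feats):
        # proposal_feats: (batch, num_proposals, in_dim),
        # e.g. box geometry + detection score + appearance feature per proposal
        context, _ = self.encoder(proposal_feats)       # sequential global context
        return self.keep_score(context).squeeze(-1)     # keep/suppress logit per proposal
```
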

* Accepted in NIPS 2018 

Semi-parametric Image Synthesis

Apr 29, 2018
Xiaojuan Qi, Qifeng Chen, Jiaya Jia, Vladlen Koltun

We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network. The synthesis is performed by a deep network that draws on the provided photographic material. Experiments on multiple semantic segmentation datasets show that the presented approach yields considerably more realistic images than recent purely parametric techniques. The results are shown in the supplementary video at https://youtu.be/U4Q98lenGLQ
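
A hedged sketch of the nonparametric retrieval step, under the simplifying assumption that the memory bank stores (binary segment mask, segment image) pairs and that a query is a binary mask for one semantic class; the actual retrieval and canvas composition in the paper are more involved.

```python
import numpy as np

def retrieve_reference_segments(query_mask, memory_bank, top_k=3):
    """Return the top-k photographic segments whose masks best overlap the query.

    query_mask  : HxW boolean array, the region of one class in the test layout
    memory_bank : list of (segment_mask, segment_image) pairs from training images
    """
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    scored = sorted(memory_bank, key=lambda s: iou(s[0], query_mask), reverse=True)
    return [img for _, img in scored[:top_k]]   # source material for the deep network
```
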

* Published at the Conference on Computer Vision and Pattern Recognition (CVPR 2018) 

Automatic Real-time Background Cut for Portrait Videos

Apr 28, 2017
Xiaoyong Shen, Ruixing Wang, Hengshuang Zhao, Jiaya Jia

In this paper, we solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further reduce segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing, with each module designed for both efficiency and accuracy. We build a portrait dataset of 8,000 images with high-quality labeled maps for training and testing. To further improve performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications.


Hierarchical Saliency Detection on Extended CSSD

Aug 04, 2015
Jianping Shi, Qiong Yan, Li Xu, Jiaya Jia

Complex structures commonly exist in natural images. When an image contains small-scale high-contrast patterns in either the background or the foreground, saliency detection can be adversely affected, resulting in erroneous and non-uniform saliency assignment; this issue is a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. Instead of varying patch sizes or downsizing images, we measure region-based scales. The final saliency values are inferred by optimally combining the saliency cues across scales using hierarchical inference. Through our inference model, single-scale information is selected to obtain a saliency map. Our method improves detection quality on many images that traditional approaches cannot handle well. We also construct the extended Complex Scene Saliency Dataset (ECSSD) of complex but general natural images.

* 14 pages, 15 figures 

Aggregation via Separation: Boosting Facial Landmark Detector with Semi-Supervised Style Translation

Aug 18, 2019
Shengju Qian, Keqiang Sun, Wayne Wu, Chen Qian, Jiaya Jia

Facial landmark detection, or face alignment, is a fundamental task that has been extensively studied. In this paper, we investigate a new perspective on facial landmark detection and demonstrate that it leads to further notable improvement. Given that any face image can be factored into a style space, which captures lighting, texture and image environment, and a style-invariant structure space, our key idea is to leverage the disentangled style and shape spaces of each individual to augment existing structures via style translation. With these augmented synthetic samples, our semi-supervised model surprisingly outperforms the fully-supervised one by a large margin. Extensive experiments verify the effectiveness of our idea with state-of-the-art results on the WFLW, 300W, COFW, and AFLW datasets. The proposed structure is general and can be assembled into any face alignment framework. The code is made publicly available at https://github.com/thesouthfrog/stylealign.

* Accepted to ICCV 2019 

Associatively Segmenting Instances and Semantics in Point Clouds

Feb 28, 2019
Xinlong Wang, Shu Liu, Xiaoyong Shen, Chunhua Shen, Jiaya Jia

A 3D point cloud describes a real scene precisely and intuitively. To date, how to segment the diverse elements in such an informative 3D scene has rarely been discussed. In this paper, we first introduce a simple and flexible framework to segment instances and semantics in point clouds simultaneously. Then, we propose two approaches that make the two tasks take advantage of each other, leading to a win-win situation. Specifically, we make instance segmentation benefit from semantic segmentation by learning a semantic-aware point-level instance embedding. Meanwhile, semantic features of points belonging to the same instance are fused to make more accurate per-point semantic predictions. Our method largely outperforms the state-of-the-art method in 3D instance segmentation, along with a significant improvement in 3D semantic segmentation. Code has been made available at: https://github.com/WXinlong/ASIS.
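
A loose sketch of the mutual-aid idea, with hypothetical layer names: the instance-embedding branch consumes semantic-aware features and the semantic branch consumes instance-aware features. The paper's additional step of aggregating semantic features within each predicted instance is omitted.

```python
import torch
import torch.nn as nn

class AssociativeSegHead(nn.Module):
    """Illustrative two-branch head where semantics and instances help each other."""
    def __init__(self, in_dim, embed_dim=5, num_classes=13):
        super().__init__()
        self.sem_feat = nn.Linear(in_dim, in_dim)
        self.ins_feat = nn.Linear(in_dim, in_dim)
        self.sem_head = nn.Linear(in_dim, num_classes)
        self.ins_head = nn.Linear(in_dim, embed_dim)

    def forward(self, point_feats):                 # point_feats: (num_points, in_dim)
        f_sem = torch.relu(self.sem_feat(point_feats))
        f_ins = torch.relu(self.ins_feat(point_feats))
        ins_embed = self.ins_head(f_ins + f_sem)    # semantic-aware instance embedding
        sem_logits = self.sem_head(f_sem + f_ins)   # instance-aware semantic prediction
        return sem_logits, ins_embed
```
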

* Accepted by CVPR2019 

Convolutional Neural Pyramid for Image Processing

Apr 07, 2017
Xiaoyong Shen, Ying-Cong Chen, Xin Tao, Jiaya Jia

We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding, yet the corresponding regression networks either stack many layers or apply large kernels to achieve this, which is computationally very costly. Our pyramid structure greatly enlarges the receptive field without sacrificing computational efficiency. Extra benefits include adaptive network depth and progressive upsampling for quasi-real-time testing on VGA-size input. Our method benefits a broad set of applications, such as depth/RGB image restoration, completion, noise/artifact removal, edge refinement, image filtering, image enhancement and colorization.
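
An illustrative pyramid regressor, not the exact CNP architecture, showing how processing downsampled copies of the input and progressively upsampling them enlarges the effective receptive field without deep stacks or large kernels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPyramid(nn.Module):
    """Illustrative multi-level regressor with coarse-to-fine upsampling."""
    def __init__(self, channels=3, width=16, levels=3):
        super().__init__()
        self.levels = levels
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(levels)])
        self.merge = nn.Conv2d(width, width, 3, padding=1)
        self.out = nn.Conv2d(width, channels, 3, padding=1)

    def forward(self, x):
        feats = []
        for lvl in range(self.levels):
            scaled = F.avg_pool2d(x, 2 ** lvl) if lvl > 0 else x   # cheaper coarse levels
            feats.append(self.blocks[lvl](scaled))
        fused = feats[-1]
        for lvl in range(self.levels - 2, -1, -1):                  # progressive upsampling
            fused = F.interpolate(fused, size=feats[lvl].shape[-2:],
                                  mode='bilinear', align_corners=False)
            fused = self.merge(fused + feats[lvl])
        return self.out(fused)
```
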


ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation

Apr 18, 2016
Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, Jian Sun

Large-scale data is of crucial importance for learning semantic segmentation models, but annotating per-pixel masks is a tedious and inefficient procedure. We note that for the topic of interactive image segmentation, scribbles are very widely used in academic research and commercial software, and are recognized as one of the most user-friendly ways of interacting. In this paper, we propose to use scribbles to annotate images, and develop an algorithm to train convolutional networks for semantic segmentation supervised by scribbles. Our algorithm is based on a graphical model that jointly propagates information from scribbles to unmarked pixels and learns network parameters. We present competitive object semantic segmentation results on the PASCAL VOC dataset by using scribbles as annotations. Scribbles are also favored for annotating stuff (e.g., water, sky, grass) that has no well-defined shape, and our method shows excellent results on the PASCAL-CONTEXT dataset thanks to extra inexpensive scribble annotations. Our scribble annotations on PASCAL VOC are available at http://research.microsoft.com/en-us/um/people/jifdai/downloads/scribble_sup

* accepted by CVPR 2016 

Understanding and Diagnosing Visual Tracking Systems

Apr 23, 2015
Naiyan Wang, Jianping Shi, Dit-Yan Yeung, Jiaya Jia

Several benchmark datasets for visual tracking research have been proposed in recent years. Despite their usefulness, whether they are sufficient for understanding and diagnosing the strengths and weaknesses of different trackers remains questionable. To address this issue, we propose a framework that breaks a tracker down into five constituent parts, namely the motion model, feature extractor, observation model, model updater, and ensemble post-processor. We then conduct ablative experiments on each component to study how it affects the overall result. Surprisingly, our findings are at odds with some common beliefs in the visual tracking research community. We find that the feature extractor plays the most important role in a tracker. On the other hand, although the observation model is the focus of many studies, we find that it often brings no significant improvement. Moreover, the motion model and model updater contain many details that can affect the result. Also, the ensemble post-processor can improve the result substantially when the constituent trackers have high diversity. Based on our findings, we put together some very elementary building blocks into a basic tracker that is competitive in performance with state-of-the-art trackers. We believe our framework can provide a solid baseline when conducting controlled experiments for visual tracking research.
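
A schematic of the five-part decomposition, with a hypothetical interface: swapping one component at a time mirrors the ablation protocol described above.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModularTracker:
    """Illustrative decomposition of a tracker into the five studied parts."""
    motion_model: Callable        # proposes candidate boxes around the last state
    feature_extractor: Callable   # turns an image patch into a feature vector
    observation_model: Callable   # scores candidates against the target model
    model_updater: Callable       # decides how/when to update the target model
    ensemble_postprocessor: Optional[Callable] = None  # optional fusion of trackers

    def track(self, frame, state, target_model):
        candidates = self.motion_model(state)
        feats = [self.feature_extractor(frame, box) for box in candidates]
        scores = [self.observation_model(f, target_model) for f in feats]
        best = max(range(len(candidates)), key=lambda i: scores[i])
        target_model = self.model_updater(target_model, feats[best], scores[best])
        return candidates[best], target_model
```
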


STD: Sparse-to-Dense 3D Object Detector for Point Cloud

Jul 22, 2019
Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, Jiaya Jia

We present a new two-stage 3D object detection framework, named Sparse-to-Dense 3D Object Detector (STD). The first stage is a bottom-up proposal generation network that uses the raw point cloud as input and generates accurate proposals by seeding each point with a new spherical anchor. It achieves high recall with less computation than prior works. Then, PointsPool is applied to generate proposal features by transforming their interior point features from a sparse expression into a compact representation, which saves even more computation time. In the second stage, box prediction, we implement a parallel intersection-over-union (IoU) branch to increase awareness of localization accuracy, resulting in further improved performance. We conduct experiments on the KITTI dataset and evaluate our method in terms of 3D object and Bird's Eye View (BEV) detection. Our method outperforms other state-of-the-art methods by a large margin, especially on the hard set, with an inference speed of more than 10 FPS.

* arXiv admin note: text overlap with arXiv:1812.05276 

Attribute-Driven Spontaneous Motion in Unpaired Image Translation

Jul 02, 2019
Ruizheng Wu, Xin Tao, Xiaodong Gu, Xiaoyong Shen, Jiaya Jia

Current image translation methods, albeit effective at producing high-quality results in various applications, still do not account for much geometric transformation. In this paper, we propose a spontaneous motion estimation module, along with a refinement module, to learn attribute-driven deformation between source and target domains. Extensive experiments and visualization demonstrate the effectiveness of these modules. We achieve promising results on unpaired image translation tasks and enable interesting applications with the spontaneous motion basis.


IPOD: Intensive Point-based Object Detector for Point Cloud

Dec 13, 2018
Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, Jiaya Jia

We present a novel 3D object detection framework, named IPOD, based on the raw point cloud. It seeds an object proposal for each point, which is the basic element. This paradigm provides high recall and high fidelity of information, offering a suitable way to process point cloud data. We design an end-to-end trainable architecture in which the features of all points within a proposal are extracted from the backbone network and combined into a proposal feature for final bounding-box inference. These features, carrying both context information and precise point cloud coordinates, yield improved performance. We conduct experiments on the KITTI dataset, evaluating our performance in terms of 3D object detection, Bird's Eye View (BEV) detection and 2D object detection. Our method achieves new state-of-the-art results, showing a great advantage on the hard set.


Image Inpainting via Generative Multi-column Convolutional Neural Networks

Oct 20, 2018
Yi Wang, Xin Tao, Xiaojuan Qi, Xiaoyong Shen, Jiaya Jia

In this paper, we propose a generative multi-column network for image inpainting. The network synthesizes different image components in a parallel manner within one stage. To better characterize global structures, we design a confidence-driven reconstruction loss, while an implicit diversified MRF regularization is adopted to enhance local details. The multi-column network, combined with the reconstruction and MRF losses, propagates local and global information derived from context to the target inpainting regions. Extensive experiments on challenging street-view, face, natural-object and scene images show that our method produces visually compelling results even without the previously common post-processing.
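
A simplified sketch of a confidence-driven reconstruction loss: pixels near the hole boundary are weighted more than pixels deep inside the hole by propagating confidence from the known region. The exact weighting scheme and the MRF regularization from the paper are not reproduced.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_l1(pred, target, mask, iters=4):
    """Spatially weighted L1 reconstruction loss (illustrative).

    pred, target : (N, C, H, W) tensors
    mask         : (N, 1, H, W) tensor, 1 inside the hole and 0 on known pixels
    """
    known = 1.0 - mask
    conf = known.clone()
    kernel = torch.ones(1, 1, 3, 3, device=pred.device) / 9.0
    for _ in range(iters):                                  # diffuse confidence inward
        conf = F.conv2d(conf, kernel, padding=1).clamp(max=1.0)
    weight = conf * mask + known                            # known pixels keep weight 1
    return (weight * (pred - target).abs()).mean()
```
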

* Accepted in NIPS 2018 

Path Aggregation Network for Instance Segmentation

Sep 18, 2018
Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia

The way that information propagates in neural networks is of great importance. In this paper, we propose the Path Aggregation Network (PANet), aiming at boosting information flow in the proposal-based instance segmentation framework. Specifically, we enhance the entire feature hierarchy with accurate localization signals in lower layers by bottom-up path augmentation, which shortens the information path between lower layers and the topmost features. We present adaptive feature pooling, which links the feature grid and all feature levels so that useful information in each level propagates directly to the following proposal subnetworks. A complementary branch capturing different views of each proposal is created to further improve mask prediction. These improvements are simple to implement, with subtle extra computational overhead. Our PANet reaches 1st place in the COCO 2017 Challenge Instance Segmentation task and 2nd place in the Object Detection task without large-batch training. It is also state-of-the-art on MVD and Cityscapes. Code is available at https://github.com/ShuLiu1993/PANet
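
A simplified reading of the bottom-up path augmentation branch, assuming four FPN levels with matching channel counts and spatial sizes that halve exactly between levels; the adaptive feature pooling and mask branch are not shown.

```python
import torch
import torch.nn as nn

class BottomUpPathAugmentation(nn.Module):
    """Builds augmented levels N_{i+1} = conv(downsample(N_i) + P_{i+1})."""
    def __init__(self, channels=256, num_levels=4):
        super().__init__()
        self.down = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, stride=2, padding=1)
             for _ in range(num_levels - 1)])
        self.fuse = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1)
             for _ in range(num_levels - 1)])

    def forward(self, pyramid):            # pyramid: [P2, P3, P4, P5], fine to coarse
        out = [pyramid[0]]                 # N2 = P2
        for i in range(len(pyramid) - 1):
            n = self.fuse[i](self.down[i](out[-1]) + pyramid[i + 1])
            out.append(n)
        return out                         # [N2, N3, N4, N5]
```
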

* Accepted to CVPR 2018 

ICNet for Real-Time Semantic Segmentation on High-Resolution Images

Aug 20, 2018
Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, Jiaya Jia

We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications, yet poses the fundamental difficulty of cutting a large portion of the computation required for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide an in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent-quality results on challenging datasets such as Cityscapes, CamVid and COCO-Stuff.
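
A rough sketch in the spirit of a cascade feature fusion unit: a coarse branch is upsampled, refined with a dilated convolution, and summed with a projected finer branch. Channel counts and the auxiliary supervision used in ICNet are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeFeatureFusion(nn.Module):
    """Fuses a low-resolution feature map into a higher-resolution one."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.low_conv = nn.Sequential(
            nn.Conv2d(low_ch, out_ch, 3, padding=2, dilation=2, bias=False),
            nn.BatchNorm2d(out_ch))
        self.high_conv = nn.Sequential(
            nn.Conv2d(high_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch))

    def forward(self, low, high):
        # upsample the coarse branch to the finer branch's resolution, then fuse
        low = F.interpolate(low, size=high.shape[-2:], mode='bilinear',
                            align_corners=False)
        return F.relu(self.low_conv(low) + self.high_conv(high))
```
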

* ECCV 2018 

SegStereo: Exploiting Semantic Information for Disparity Estimation

Jul 31, 2018
Guorun Yang, Hengshuang Zhao, Jianping Shi, Zhidong Deng, Jiaya Jia

Disparity estimation for binocular stereo images finds a wide range of applications. Traditional algorithms may fail in featureless regions, which could be handled with high-level clues such as semantic segments. In this paper, we suggest that appropriately incorporating semantic cues can greatly rectify predictions in commonly used disparity estimation frameworks. Our method conducts semantic feature embedding and regularizes semantic cues as a loss term to improve disparity learning. Our unified model, SegStereo, employs semantic features from segmentation and introduces a semantic softmax loss, which helps improve the prediction accuracy of disparity maps. The semantic cues work well in both unsupervised and supervised settings. SegStereo achieves state-of-the-art results on the KITTI Stereo benchmark and produces decent predictions on both the CityScapes and FlyingThings3D datasets.
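
An illustrative combined objective in the spirit of the description above: a supervised disparity term plus a semantic softmax (cross-entropy) term acting as a regularizer. The loss forms and weight are placeholders, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def segstereo_style_loss(pred_disp, gt_disp, seg_logits, seg_labels, sem_weight=0.1):
    """Disparity regression loss plus a semantic cross-entropy regularizer.

    pred_disp, gt_disp : (N, 1, H, W) disparity maps
    seg_logits         : (N, num_classes, H, W) segmentation logits
    seg_labels         : (N, H, W) long tensor of class indices
    """
    disp_loss = F.smooth_l1_loss(pred_disp, gt_disp)       # supervised disparity term
    sem_loss = F.cross_entropy(seg_logits, seg_labels)     # semantic softmax loss
    return disp_loss + sem_weight * sem_loss
```
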

* Accepted to ECCV 2018 
