Models, code, and papers for "Chengyu Guo":

DeepStego: Protecting Intellectual Property of Deep Neural Networks by Steganography

Mar 13, 2019
Zheng Li, Ge Han, Shanqing Guo, Chengyu Hu

Deep Neural Networks (DNNs) have shown great success in various challenging tasks. Training these networks is computationally expensive and requires vast amounts of training data. Therefore, it is necessary to design a technology that protects the intellectual property (IP) of a model and allows external verification of model ownership in a black-box way. Previous studies either fail to meet the black-box requirement or have not dealt with several forms of security and legal problems. In this paper, we propose a novel steganographic scheme for watermarking Deep Neural Networks during training. To our knowledge, this is the first feasible scheme for protecting DNNs that resolves these security and legal problems. We demonstrate experimentally that such a watermark has no obvious influence on the main task of the model and can successfully verify ownership of the model. Furthermore, we show that the scheme is robust by simulating it in a realistic setting.
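The abstract's central requirement, black-box ownership verification, can be illustrated with a minimal sketch: the owner keeps a secret trigger set (inputs with pre-assigned labels), queries a suspect model through its prediction interface only, and claims ownership if the match rate exceeds a threshold. All names and the threshold below are illustrative assumptions, not the paper's actual method or API.

```python
import numpy as np

def verify_ownership(model_predict, trigger_inputs, trigger_labels, threshold=0.9):
    """Black-box check: model_predict maps a batch of inputs to predicted labels.

    Returns (ownership_claimed, match_rate). The threshold is an assumed
    hyperparameter; a real scheme would also bound the false-positive rate.
    """
    predictions = np.asarray(model_predict(trigger_inputs))
    match_rate = float(np.mean(predictions == np.asarray(trigger_labels)))
    return match_rate >= threshold, match_rate

# Toy usage: a "watermarked model" that reproduces the trigger labels passes.
triggers = np.random.rand(10, 4)          # secret trigger inputs
labels = np.arange(10) % 3                # pre-assigned trigger labels
owned, rate = verify_ownership(lambda x: labels.copy(), triggers, labels)
```

Only the prediction interface is used, which is what makes the verification black-box: no access to weights or gradients is assumed.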

* Some experiments need to be done 

Extended Local Binary Patterns for Efficient and Robust Spontaneous Facial Micro-Expression Recognition

Jul 22, 2019
Chengyu Guo, Jingyun Liang, Geng Zhan, Zhong Liu, Matti Pietikäinen, Li Liu

Facial Micro-Expressions (MEs) are spontaneous, involuntary facial movements that occur when a person experiences an emotion but deliberately or unconsciously attempts to conceal his or her genuine emotions. Recently, ME recognition has attracted increasing attention due to its potential applications such as clinical diagnosis, business negotiation, interrogation and security. However, it is expensive to build large-scale ME datasets, mainly due to the difficulty of naturally inducing spontaneous MEs. This limits the application of deep learning techniques, which require large amounts of training data. In this paper, we propose a simple, efficient yet robust descriptor called Extended Local Binary Patterns on Three Orthogonal Planes (ELBPTOP) for ME recognition. ELBPTOP consists of three complementary binary descriptors: LBPTOP and two novel ones, Radial Difference LBPTOP (RDLBPTOP) and Angular Difference LBPTOP (ADLBPTOP), which explore the local second-order information along the radial and angular directions contained in ME video sequences. ELBPTOP is a novel ME descriptor inspired by the unique and subtle facial movements. It is computationally efficient and only marginally increases the cost of computing LBPTOP, yet is extremely effective for ME recognition. In addition, by introducing Whitened Principal Component Analysis (WPCA) to ME recognition for the first time, we can further obtain more compact and discriminative feature representations and achieve significant computational savings. Extensive experimental evaluation on three popular spontaneous ME datasets, SMIC, CASMEII and SAMM, shows that our proposed ELBPTOP approach significantly outperforms the previous state of the art on all three evaluated datasets. Our proposed ELBPTOP achieves 73.94% on CASMEII, which is 6.6% higher than the state of the art on this dataset. More impressively, ELBPTOP increases recognition accuracy from 44.7% to 63.44% on the SAMM dataset.
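The three binary codes the abstract names can be sketched on a single 2D image plane: the classic LBP sign test against the center pixel, a radial-difference code comparing the same angular neighbor at two radii, and an angular-difference code comparing adjacent neighbors on the same radius. This is a simplified, single-plane illustration under my own assumptions (nearest-neighbor sampling, radii 1 and 2, 8 neighbors); the paper computes these codes on the three orthogonal planes (XY, XT, YT) of a video volume and aggregates histograms, which is not reproduced here.

```python
import numpy as np

def sample(img, y, x, r, p, P):
    # Nearest-neighbour sampling of the p-th of P neighbours at radius r.
    ang = 2 * np.pi * p / P
    yy = int(round(y + r * np.sin(ang)))
    xx = int(round(x + r * np.cos(ang)))
    return img[yy, xx]

def elbp_codes(img, r1=1, r2=2, P=8):
    """Return (lbp, rd, ad) 8-bit code maps over the valid interior of img."""
    H, W = img.shape
    lbp = np.zeros((H, W), np.uint8)  # classic LBP: neighbour vs centre
    rd = np.zeros((H, W), np.uint8)   # radial difference: outer vs inner ring
    ad = np.zeros((H, W), np.uint8)   # angular difference: adjacent neighbours
    for y in range(r2, H - r2):
        for x in range(r2, W - r2):
            c = img[y, x]
            for p in range(P):
                n1 = sample(img, y, x, r1, p, P)
                n2 = sample(img, y, x, r2, p, P)
                n1_next = sample(img, y, x, r1, (p + 1) % P, P)
                if n1 >= c:
                    lbp[y, x] |= 1 << p
                if n2 >= n1:
                    rd[y, x] |= 1 << p
                if n1 >= n1_next:
                    ad[y, x] |= 1 << p
    return lbp, rd, ad
```

The radial and angular codes reuse the neighbor samples already fetched for LBP, which matches the abstract's claim that the extensions only marginally increase the cost of computing LBPTOP.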
