ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks

Jul 23, 2015

Francesco Visin, Kyle Kastner, Kyunghyun Cho, Matteo Matteucci, Aaron Courville, Yoshua Bengio

**Click to Read Paper and Get Code**

4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks

May 15, 2019

Christopher Choy, JunYoung Gwak, Silvio Savarese

In many robotics and VR/AR applications, 3D-videos are readily-available sources of input (a continuous sequence of depth images, or LIDAR scans). However, those 3D-videos are processed frame-by-frame either through 2D convnets or 3D perception algorithms. In this work, we propose 4-dimensional convolutional neural networks for spatio-temporal perception that can directly process such 3D-videos using high-dimensional convolutions. For this, we adopt sparse tensors and propose the generalized sparse convolution that encompasses all discrete convolutions. To implement the generalized sparse convolution, we create an open-source auto-differentiation library for sparse tensors that provides extensive functions for high-dimensional convolutional neural networks. We create 4D spatio-temporal convolutional neural networks using the library and validate them on various 3D semantic segmentation benchmarks and proposed 4D datasets for 3D-video perception. To overcome challenges in the 4D space, we propose the hybrid kernel, a special case of the generalized sparse convolution, and the trilateral-stationary conditional random field that enforces spatio-temporal consistency in the 7D space-time-chroma space. Experimentally, we show that convolutional neural networks with only generalized 3D sparse convolutions can outperform 2D or 2D-3D hybrid methods by a large margin. Also, we show that on 3D-videos, 4D spatio-temporal convolutional neural networks are robust to noise, outperform 3D convolutional neural networks and are faster than the 3D counterpart in some cases.

* CVPR'19

* 9 pages, 2 figures

**Click to Read Paper and Get Code**
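The abstract above describes a "generalized sparse convolution" in which output features are computed only at occupied coordinates and the kernel is defined over an arbitrary set of coordinate offsets (which is what allows hybrid, non-cubic kernels). A toy sketch of that idea on 4D (x, y, z, t) coordinates is shown below. This is an illustrative re-implementation for intuition only, not the paper's MinkowskiEngine API; the function name `sparse_conv`, the dict-based data layout, and the feature sizes are all assumptions.

```python
import numpy as np


def sparse_conv(coords_feats, kernel, offsets):
    """Toy generalized sparse convolution.

    coords_feats: dict mapping coordinate tuples -> input feature vectors.
    kernel: dict mapping offset tuples -> (C_in x C_out) weight matrices.
    offsets: iterable of coordinate offsets defining the kernel shape.
             Any set of offsets is allowed, which is what makes the
             convolution "generalized": hybrid kernels are just a
             different offset set.
    Output features are produced only at occupied input coordinates.
    """
    out = {}
    for c in coords_feats:
        acc = None
        for off in offsets:
            # Neighbor coordinate reached by this kernel offset.
            nbr = tuple(ci + oi for ci, oi in zip(c, off))
            if nbr in coords_feats and off in kernel:
                term = coords_feats[nbr] @ kernel[off]
                acc = term if acc is None else acc + term
        if acc is not None:
            out[c] = acc
    return out


# Minimal usage: two occupied 4D points and a 3-offset "cross" kernel
# along the x axis.
rng = np.random.default_rng(0)
C_in, C_out = 3, 2
pts = {(0, 0, 0, 0): rng.normal(size=C_in),
       (1, 0, 0, 0): rng.normal(size=C_in)}
offsets = [(0, 0, 0, 0), (1, 0, 0, 0), (-1, 0, 0, 0)]
kernel = {off: rng.normal(size=(C_in, C_out)) for off in offsets}
out = sparse_conv(pts, kernel, offsets)
print(len(out), out[(0, 0, 0, 0)].shape)
```

Note how the cost scales with the number of occupied coordinates rather than the volume of the 4D grid, which is the point of using sparse tensors for 3D-video input.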

Characterizing Types of Convolution in Deep Convolutional Recurrent Neural Networks for Robust Speech Emotion Recognition

Jan 13, 2018

Che-Wei Huang, Shrikanth S. Narayanan

* Revised Submission to IEEE Transactions

**Click to Read Paper and Get Code**

Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

Dec 27, 2016

Zewang Zhang, Zheng Sun, Jiaqi Liu, Jingwen Chen, Zhao Huo, Xiao Zhang

* 11 pages, 13 figures

**Click to Read Paper and Get Code**

A Non-Technical Survey on Deep Convolutional Neural Network Architectures

Mar 06, 2018

Felix Altenberger, Claus Lenz

* 17 pages (incl. references), 23 Postscript figures, uses IEEEtran

**Click to Read Paper and Get Code**

Graph Edge Convolutional Neural Networks for Skeleton Based Action Recognition

May 31, 2018

Xikun Zhang, Chang Xu, Xinmei Tian, Dacheng Tao

**Click to Read Paper and Get Code**

Learning a Virtual Codec Based on Deep Convolutional Neural Network to Compress Image

Jan 16, 2018

Lijun Zhao, Huihui Bai, Anhong Wang, Yao Zhao

* 11 pages, 7 figures

**Click to Read Paper and Get Code**

Recent Advances in Convolutional Neural Networks

Oct 19, 2017

Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Li Wang, Gang Wang, Jianfei Cai, Tsuhan Chen

* Pattern Recognition, Elsevier

**Click to Read Paper and Get Code**

Convexified Convolutional Neural Networks

Sep 04, 2016

Yuchen Zhang, Percy Liang, Martin J. Wainwright

* 29 pages

**Click to Read Paper and Get Code**

Combining Recurrent and Convolutional Neural Networks for Relation Classification

May 24, 2016

Ngoc Thang Vu, Heike Adel, Pankaj Gupta, Hinrich Schütze

* NAACL 2016

**Click to Read Paper and Get Code**

Ensemble of Convolutional Neural Networks Trained with Different Activation Functions

May 15, 2019

Gianluca Maguolo, Loris Nanni, Stefano Ghidoni

**Click to Read Paper and Get Code**

Extension of Convolutional Neural Network with General Image Processing Kernels

Jan 16, 2019

Jay Hoon Jung, Yousun Shin, YoungMin Kwon

* TENCON 2018

* 4 pages, 6 figures

**Click to Read Paper and Get Code**

Generalizing the Convolution Operator in Convolutional Neural Networks

Jul 14, 2017

Kamaledin Ghiasi-Shirazi

**Click to Read Paper and Get Code**

Deep learning has been widely applied and brought breakthroughs in speech recognition, computer vision, and many other domains. The involved deep neural network architectures and computational issues have been well studied in machine learning. But there lacks a theoretical foundation for understanding the approximation or generalization ability of deep learning methods generated by the network architectures such as deep convolutional neural networks having convolutional structures. Here we show that a deep convolutional neural network (CNN) is universal, meaning that it can be used to approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough. This answers an open question in learning theory. Our quantitative estimate, given tightly in terms of the number of free parameters to be computed, verifies the efficiency of deep CNNs in dealing with large dimensional data. Our study also demonstrates the role of convolutions in deep CNNs.

**Click to Read Paper and Get Code**
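The universality result described in the abstract above can be stated in one common form as follows. The notation here is ours, chosen for illustration, and need not match the paper's: $\Omega$ is the input domain, $\mathcal{H}_J$ the hypothesis space of CNNs of depth $J$, and $\|\cdot\|_{C(\Omega)}$ the uniform norm.

```latex
% Hedged statement of CNN universality: for any continuous target
% function and any tolerance, a sufficiently deep CNN approximates
% the target within that tolerance in the uniform norm.
\[
  \forall f \in C(\Omega),\ \Omega \subset [-1,1]^d \text{ compact},\quad
  \forall \varepsilon > 0:\quad
  \exists J \in \mathbb{N},\ \exists f_J \in \mathcal{H}_J
  \ \text{such that}\ \
  \|f - f_J\|_{C(\Omega)} \le \varepsilon .
\]
```

The quantitative version mentioned in the abstract additionally bounds the number of free parameters needed to reach a given $\varepsilon$, which is what certifies the efficiency of deep CNNs on high-dimensional data.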

Kernel-based Translations of Convolutional Networks

Mar 19, 2019

Corinne Jones, Vincent Roulet, Zaid Harchaoui

**Click to Read Paper and Get Code**

Bayesian Convolutional Neural Networks

Sep 10, 2018

Kumar Shridhar, Felix Laumann, Adrian Llopart Maurin, Marcus Liwicki

* arXiv admin note: text overlap with arXiv:1704.02798 by other authors

* 9 pages

**Click to Read Paper and Get Code**

Training convolutional neural networks with megapixel images

Apr 16, 2018

Hans Pinckaers, Geert Litjens

* Submitted to MIDL 2018

**Click to Read Paper and Get Code**