Zheng Shou

An Efficient COarse-to-fiNE Alignment Framework @ Ego4D Natural Language Queries Challenge 2022

Nov 16, 2022
Zhijian Hou, Wanjun Zhong, Lei Ji, Difei Gao, Kun Yan, Wing-Kwong Chan, Chong-Wah Ngo, Zheng Shou, Nan Duan

CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding

Sep 22, 2022
Zhijian Hou, Wanjun Zhong, Lei Ji, Difei Gao, Kun Yan, Wing-Kwong Chan, Chong-Wah Ngo, Zheng Shou, Nan Duan

Searching for Two-Stream Models in Multivariate Space for Video Recognition

Aug 30, 2021
Xinyu Gong, Heng Wang, Zheng Shou, Matt Feiszli, Zhangyang Wang, Zhicheng Yan

Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization

Jul 15, 2020
Junting Pan, Siyu Chen, Zheng Shou, Jing Shao, Hongsheng Li

SF-Net: Single-Frame Supervision for Temporal Action Localization

Mar 20, 2020
Fan Ma, Linchao Zhu, Yi Yang, Shengxin Zha, Gourab Kundu, Matt Feiszli, Zheng Shou

LPAT: Learning to Predict Adaptive Threshold for Weakly-supervised Temporal Action Localization

Oct 25, 2019
Xudong Lin, Zheng Shou, Shih-Fu Chang

CDSA: Cross-Dimensional Self-Attention for Multivariate, Geo-tagged Time Series Imputation

May 23, 2019
Jiawei Ma, Zheng Shou, Alireza Zareian, Hassan Mansour, Anthony Vetro, Shih-Fu Chang

DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition

Jan 11, 2019
Zheng Shou, Zhicheng Yan, Yannis Kalantidis, Laura Sevilla-Lara, Marcus Rohrbach, Xudong Lin, Shih-Fu Chang

Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks

Oct 30, 2018
Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang

Temporal Convolution Based Action Proposal: Submission to ActivityNet 2017

Sep 26, 2018
Tianwei Lin, Xu Zhao, Zheng Shou
