Chenglin Xu

KS-Net: Multi-band joint speech restoration and enhancement network for 2024 ICASSP SSI Challenge

Feb 02, 2024
Guochen Yu, Runqiang Han, Chenglin Xu, Haoran Zhao, Nan Li, Chen Zhang, Xiguang Zheng, Chao Zhou, Qi Huang, Bing Yu

Typing to Listen at the Cocktail Party: Text-Guided Target Speaker Extraction

Oct 15, 2023
Xiang Hao, Jibin Wu, Jianwei Yu, Chenglin Xu, Kay Chen Tan

L-SpEx: Localized Target Speaker Extraction

Feb 21, 2022
Meng Ge, Chenglin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

Selective Hearing through Lip-reading

Jun 14, 2021
Zexu Pan, Ruijie Tao, Chenglin Xu, Haizhou Li

Target Speaker Verification with Selective Auditory Attention for Single and Multi-talker Speech

Apr 02, 2021
Chenglin Xu, Wei Rao, Jibin Wu, Haizhou Li

Multi-stage Speaker Extraction with Utterance and Frame-Level Reference Signals

Nov 19, 2020
Meng Ge, Chenglin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks

Jul 02, 2020
Jibin Wu, Chenglin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan

SpEx: Multi-Scale Time Domain Speaker Extraction Network

Apr 17, 2020
Chenglin Xu, Wei Rao, Eng Siong Chng, Haizhou Li

I4U Submission to NIST SRE 2018: Leveraging from a Decade of Shared Experiences

Apr 16, 2019
Kong Aik Lee, Ville Hautamaki, Tomi Kinnunen, Hitoshi Yamamoto, Koji Okabe, Ville Vestman, Jing Huang, Guohong Ding, Hanwu Sun, Anthony Larcher, Rohan Kumar Das, Haizhou Li, Mickael Rouvier, Pierre-Michel Bousquet, Wei Rao, Qing Wang, Chunlei Zhang, Fahimeh Bahmaninezhad, Hector Delgado, Jose Patino, Qiongqiong Wang, Ling Guo, Takafumi Koshinaka, Jiacen Zhang, Koichi Shinoda, Trung Ngo Trong, Md Sahidullah, Fan Lu, Yun Tang, Ming Tu, Kah Kuan Teh, Huy Dat Tran, Kuruvachan K. George, Ivan Kukanov, Florent Desnous, Jichen Yang, Emre Yilmaz, Longting Xu, Jean-Francois Bonastre, Chenglin Xu, Zhi Hao Lim, Eng Siong Chng, Shivesh Ranjan, John H. L. Hansen, Massimiliano Todisco, Nicholas Evans

Optimization of Speaker Extraction Neural Network with Magnitude and Temporal Spectrum Approximation Loss

Mar 24, 2019
Chenglin Xu, Wei Rao, Eng Siong Chng, Haizhou Li
