Junyi Peng
Target Speech Extraction with Pre-trained Self-supervised Learning Models

Feb 17, 2024
Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldřich Plchot, Shoko Araki, Jan Černocký

Probing Self-supervised Learning Models with Target Speech Extraction

Feb 17, 2024
Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldřich Plchot, Takanori Ashihara, Shoko Araki, Jan Černocký

Improving Speaker Verification with Self-Pretrained Transformer Models

May 17, 2023
Junyi Peng, Oldřich Plchot, Themos Stafylakis, Ladislav Mošner, Lukáš Burget, Jan Černocký

Probing Deep Speaker Embeddings for Speaker-related Tasks

Dec 14, 2022
Zifeng Zhao, Ding Pan, Junyi Peng, Rongzhi Gu

Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters

Oct 28, 2022
Junyi Peng, Themos Stafylakis, Rongzhi Gu, Oldřich Plchot, Ladislav Mošner, Lukáš Burget, Jan Černocký

An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification

Oct 03, 2022
Junyi Peng, Oldřich Plchot, Themos Stafylakis, Ladislav Mošner, Lukáš Burget, Jan Černocký
