
Jinhwan Park

On the compression of shallow non-causal ASR models using knowledge distillation and tied-and-reduced decoder for low-latency on-device speech recognition

Dec 15, 2023
Nagaraj Adiga, Jinhwan Park, Chintigari Shiva Kumar, Shatrughan Singh, Kyungmin Lee, Chanwoo Kim, Dhananjaya Gowda

Macro-block dropout for improved regularization in training end-to-end speech recognition models

Dec 29, 2022
Chanwoo Kim, Sathish Indurti, Jinhwan Park, Wonyong Sung

S-SGD: Symmetrical Stochastic Gradient Descent with Weight Noise Injection for Reaching Flat Minima

Sep 05, 2020
Wonyong Sung, Iksoo Choi, Jinhwan Park, Seokhyun Choi, Sungho Shin

Single Stream Parallelization of Recurrent Neural Networks for Low Power and Fast Inference

Mar 30, 2018
Wonyong Sung, Jinhwan Park

FPGA-Based Low-Power Speech Recognition with Recurrent Neural Networks

Sep 30, 2016
Minjae Lee, Kyuyeon Hwang, Jinhwan Park, Sungwook Choi, Sungho Shin, Wonyong Sung

FPGA Based Implementation of Deep Neural Networks Using On-chip Memory Only

Aug 29, 2016
Jinhwan Park, Wonyong Sung
