| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| MultiSpeech: Multi-speaker text to speech with transformer | M Chen, X Tan, Y Ren, J Xu, H Sun, S Zhao, T Qin, TY Liu | arXiv preprint arXiv:2006.04664, 2020 | 115 | 2020 |
| Token-level ensemble distillation for grapheme-to-phoneme conversion | H Sun, X Tan, JW Gan, H Liu, S Zhao, T Qin, TY Liu | arXiv preprint arXiv:1904.03446, 2019 | 77 | 2019 |
| LightPAFF: A two-stage distillation framework for pre-training and fine-tuning | K Song, H Sun, X Tan, T Qin, J Lu, H Liu, TY Liu | arXiv preprint arXiv:2004.12817, 2020 | 19 | 2020 |
| Knowledge distillation from BERT in pre-training and fine-tuning for polyphone disambiguation | H Sun, X Tan, JW Gan, S Zhao, D Han, H Liu, T Qin, TY Liu | 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019 | 14 | 2019 |