Title, authors, and venue | Cited by | Year |
🤗 Transformers: State-of-the-art natural language processing. T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ... EMNLP 2020 (Demo), 38-45, 2020 | 15199* | 2020 |
Multitask prompted training enables zero-shot task generalization. V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ... ICLR 2022, 2021 | 1519 | 2021 |
BLOOM: A 176B-parameter open-access multilingual language model. T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ... arXiv preprint arXiv:2211.05100, 2023 | 1482 | 2023 |
🤗 Datasets: A Community Library for Natural Language Processing. Q Lhoest, AV del Moral, Y Jernite, A Thakur, P von Platen, S Patil, ... EMNLP 2021 (Demo), 2021 | 494* | 2021 |
BERT Loses Patience: Fast and Robust Inference with Early Exit. W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei. NeurIPS 2020, 2020 | 302 | 2020 |
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts. SH Bach, V Sanh, ZX Yong, A Webson, C Raffel, NV Nayak, A Sharma, ... ACL 2022 (Demo), 2022 | 292 | 2022 |
Baize: An open-source chat model with parameter-efficient tuning on self-chat data. C Xu, D Guo, N Duan, J McAuley. EMNLP 2023, 2023 | 237 | 2023 |
BERT-of-Theseus: Compressing BERT by progressive module replacing. C Xu, W Zhou, T Ge, F Wei, M Zhou. EMNLP 2020, 7859-7869, 2020 | 213 | 2020 |
StarCoder 2 and The Stack v2: The next generation. A Lozhkov, R Li, LB Allal, F Cassano, J Lamy-Poirier, N Tazi, A Tang, ... arXiv preprint arXiv:2402.19173, 2024 | 91 | 2024 |
A survey on model compression and acceleration for pretrained language models. C Xu, J McAuley. AAAI 2023, 2023 | 79* | 2023 |
BERT learns to teach: Knowledge distillation with meta learning. W Zhou, C Xu, J McAuley. ACL 2022, 7037-7049, 2022 | 76 | 2022 |
RepoBench: Benchmarking repository-level code auto-completion systems. T Liu, C Xu, J McAuley. ICLR 2024, 2023 | 59 | 2023 |
LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval. C Xu, D Guo, N Duan, J McAuley. ACL 2022 (Findings), 2022 | 45 | 2022 |
Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression. C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei. EMNLP 2021, 2021 | 44 | 2021 |
Small models are valuable plug-ins for large language models. C Xu, Y Xu, S Wang, Y Liu, C Zhu, J McAuley. ACL 2024 (Findings), 2023 | 42 | 2023 |
Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders. Y Duan, C Xu, J Pei, J Han, C Li. ACL 2020, 253-262, 2020 | 39 | 2020 |
DLocRL: A deep learning pipeline for fine-grained location recognition and linking in tweets. C Xu, J Li, X Luo, J Pei, C Li, D Ji. The Web Conference (WWW) 2019, 3391-3397, 2019 | 36 | 2019 |
LongCoder: A Long-Range Pre-trained Language Model for Code Completion. D Guo, C Xu, N Duan, J Yin, J McAuley. ICML 2023, 2023 | 35 | 2023 |
A survey on dynamic neural networks for natural language processing. C Xu, J McAuley. EACL 2023 (Findings), 2023 | 27 | 2023 |
Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot Classification. H Wang, C Xu, J McAuley. NAACL 2022, 2022 | 25 | 2022 |