GLM-130B: An Open Bilingual Pre-trained Model. A Zeng, X Liu, Z Du, Z Wang, H Lai, M Ding, Z Yang, Y Xu, W Zheng, X Xia, et al. ICLR 2023. | 992* | 2022 |
AgentBench: Evaluating LLMs as Agents. X Liu, H Yu, H Zhang, Y Xu, X Lei, H Lai, Y Gu, H Ding, K Men, K Yang, et al. ICLR 2024. | 214 | 2023 |
ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools. Team GLM: A Zeng, B Xu, B Wang, C Zhang, D Yin, D Zhang, D Rojas, G Feng, et al. arXiv preprint arXiv:2406.12793. | 164* | 2024 |
AgentTuning: Enabling Generalized Agent Abilities for LLMs. A Zeng, M Liu, R Lu, B Wang, X Liu, Y Dong, J Tang. Findings of ACL 2024. | 94 | 2023 |
FastMoE: A Fast Mixture-of-Expert Training System. J He, J Qiu, A Zeng, Z Yang, J Zhai, J Tang. arXiv preprint arXiv:2103.13262. | 85 | 2021 |
WebGLM: Towards an Efficient Web-Enhanced Question Answering System with Human Preferences. X Liu, H Lai, H Yu, Y Xu, A Zeng, Z Du, P Zhang, Y Dong, J Tang. KDD 2023, 4549-4560. | 70 | 2023 |
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding. Y Bai, X Lv, J Zhang, H Lyu, J Tang, Z Huang, Z Du, X Liu, A Zeng, L Hou, et al. arXiv preprint arXiv:2308.14508. | 61 | 2023 |
xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein. B Chen, X Cheng, Y Geng, S Li, X Zeng, B Wang, J Gong, C Liu, A Zeng, et al. bioRxiv, 2023.07.05.547496. | 58 | 2023 |
BaGuaLu: Targeting Brain Scale Pretrained Models with over 37 Million Cores. Z Ma, J He, J Qiu, H Cao, Y Wang, Z Sun, L Zheng, H Wang, S Tang, et al. Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of …, 2022. | 53 | 2022 |
CogDL: An Extensive Toolkit for Deep Learning on Graphs. Y Cen, Z Hou, Y Wang, Q Chen, Y Luo, X Yao, A Zeng, S Guo, P Zhang, et al. arXiv preprint arXiv:2103.00959. | 53* | 2021 |
CritiqueLLM: Scaling LLM-as-Critic for Effective and Explainable Evaluation of Large Language Model Generation. P Ke, B Wen, Z Feng, X Liu, X Lei, J Cheng, S Wang, A Zeng, Y Dong, et al. arXiv preprint arXiv:2311.18702. | 29 | 2023 |
Understanding Emergent Abilities of Language Models from the Loss Perspective. Z Du, A Zeng, Y Dong, J Tang. NeurIPS 2024. | 20 | 2024 |
VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents. X Liu, T Zhang, Y Gu, IL Iong, Y Xu, X Song, S Zhang, H Lai, X Liu, H Zhao, et al. arXiv preprint arXiv:2408.06327. | 6 | 2024 |
CritiqueLLM: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation. P Ke, B Wen, A Feng, X Liu, X Lei, J Cheng, S Wang, A Zeng, Y Dong, et al. Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024. | 6 | 2024 |
APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding. M Liu, A Zeng, B Wang, P Zhang, J Tang, Y Dong. arXiv preprint arXiv:2401.06761. | 6 | 2024 |
ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline. Y Xu, X Liu, X Liu, Z Hou, Y Li, X Zhang, Z Wang, A Zeng, Z Du, W Zhao, et al. arXiv preprint arXiv:2404.02893. | 4 | 2024 |
ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback. Z Hou, Y Niu, Z Du, X Zhang, X Liu, A Zeng, Q Zheng, M Huang, H Wang, et al. arXiv preprint arXiv:2404.00934. | 2 | 2024 |
Revisiting Parallel Context Windows: A Frustratingly Simple Alternative and Chain-of-Thought Deterioration. K Yang, X Liu, K Men, A Zeng, Y Dong, J Tang. arXiv preprint arXiv:2305.15262. | 1 | 2023 |
Scaling Speech-Text Pre-training with Synthetic Interleaved Data. A Zeng, Z Du, M Liu, L Zhang, S Jiang, Y Dong, J Tang. arXiv preprint arXiv:2411.17607. | | 2024 |