Aohan Zeng
GLM-130B: An Open Bilingual Pre-trained Model
A Zeng, X Liu, Z Du, Z Wang, H Lai, M Ding, Z Yang, Y Xu, W Zheng, X Xia, ...
ICLR 2023, 2022
AgentBench: Evaluating LLMs as agents
X Liu, H Yu, H Zhang, Y Xu, X Lei, H Lai, Y Gu, H Ding, K Men, K Yang, ...
arXiv preprint arXiv:2308.03688, 2023
FastMoE: A Fast Mixture-of-Expert Training System
J He, J Qiu, A Zeng, Z Yang, J Zhai, J Tang
arXiv preprint arXiv:2103.13262, 2021
AgentTuning: Enabling Generalized Agent Abilities For LLMs
A Zeng, M Liu, R Lu, B Wang, X Liu, Y Dong, J Tang
arXiv preprint arXiv:2310.12823, 2023
WebGLM: Towards an efficient web-enhanced question answering system with human preferences
X Liu, H Lai, H Yu, Y Xu, A Zeng, Z Du, P Zhang, Y Dong, J Tang
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and …, 2023
BaGuaLu: targeting brain scale pretrained models with over 37 million cores
Z Ma, J He, J Qiu, H Cao, Y Wang, Z Sun, L Zheng, H Wang, S Tang, ...
Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of …, 2022
CogDL: An extensive toolkit for deep learning on graphs
Y Cen, Z Hou, Y Wang, Q Chen, Y Luo, X Yao, A Zeng, S Guo, P Zhang, ...
arXiv preprint arXiv:2103.00959, 2021
xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein
B Chen, X Cheng, Y Geng, S Li, X Zeng, B Wang, J Gong, C Liu, A Zeng, ...
bioRxiv 2023.07.05.547496, 2023
LongBench: A bilingual, multitask benchmark for long context understanding
Y Bai, X Lv, J Zhang, H Lyu, J Tang, Z Huang, Z Du, X Liu, A Zeng, L Hou, ...
arXiv preprint arXiv:2308.14508, 2023
CritiqueLLM: Scaling LLM-as-critic for effective and explainable evaluation of large language model generation
P Ke, B Wen, Z Feng, X Liu, X Lei, J Cheng, S Wang, A Zeng, Y Dong, ...
arXiv preprint arXiv:2311.18702, 2023
Understanding emergent abilities of language models from the loss perspective
Z Du, A Zeng, Y Dong, J Tang
arXiv preprint arXiv:2403.15796, 2024
ChatGLM-6B
A Zeng, X Liu, Z Du, Z Wang, H Lai, M Ding, Z Yang, Y Xu, W Zheng, X Xia, ...
March 3, 2023
APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
M Liu, A Zeng, B Wang, P Zhang, J Tang, Y Dong
arXiv preprint arXiv:2401.06761, 2024
ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline
Y Xu, X Liu, X Liu, Z Hou, Y Li, X Zhang, Z Wang, A Zeng, Z Du, W Zhao, ...
arXiv preprint arXiv:2404.02893, 2024
ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback
Z Hou, Y Niu, Z Du, X Zhang, X Liu, A Zeng, Q Zheng, M Huang, H Wang, ...
arXiv preprint arXiv:2404.00934, 2024
Revisiting Parallel Context Windows: A Frustratingly Simple Alternative and Chain-of-Thought Deterioration
K Yang, X Liu, K Men, A Zeng, Y Dong, J Tang
arXiv preprint arXiv:2305.15262, 2023