Lianmin Zheng
TVM: An automated end-to-end optimizing compiler for deep learning
T Chen, T Moreau, Z Jiang, L Zheng, E Yan, H Shen, M Cowan, L Wang, ...
13th USENIX Symposium on Operating Systems Design and Implementation (OSDI …, 2018
Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality
WL Chiang, Z Li, Z Lin, Y Sheng, Z Wu, H Zhang, L Zheng, S Zhuang, ...
2023
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
L Zheng, WL Chiang, Y Sheng, S Zhuang, Z Wu, Y Zhuang, Z Lin, Z Li, ...
Advances in Neural Information Processing Systems 36, 2024
Learning to optimize tensor programs
T Chen, L Zheng, E Yan, Z Jiang, T Moreau, L Ceze, C Guestrin, ...
Advances in Neural Information Processing Systems 31, 2018
Ansor: Generating High-Performance Tensor Programs for Deep Learning
L Zheng, C Jia, M Sun, Z Wu, CH Yu, A Haj-Ali, Y Wang, J Yang, D Zhuo, ...
14th USENIX Symposium on Operating Systems Design and Implementation (OSDI …, 2020
A hardware–software blueprint for flexible deep learning specialization
T Moreau, T Chen, L Vega, J Roesch, E Yan, L Zheng, J Fromm, Z Jiang, ...
IEEE Micro 39 (5), 8-16, 2019
MAgent: A many-agent reinforcement learning platform for artificial collective intelligence
L Zheng, J Yang, H Cai, M Zhou, W Zhang, J Wang, Y Yu
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
Efficient memory management for large language model serving with PagedAttention
W Kwon, Z Li, S Zhuang, Y Sheng, L Zheng, CH Yu, J Gonzalez, H Zhang, ...
Proceedings of the 29th Symposium on Operating Systems Principles, 611-626, 2023
Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning
L Zheng, Z Li, H Zhang, Y Zhuang, Z Chen, Y Huang, Y Wang, Y Xu, ...
16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), 2022
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
Y Sheng, L Zheng, B Yuan, Z Li, M Ryabinin, B Chen, P Liang, C Re, ...
International Conference on Machine Learning, 2023
ActNN: Reducing training memory footprint via 2-bit activation compressed training
J Chen, L Zheng, Z Yao, D Wang, I Stoica, M Mahoney, J Gonzalez
International Conference on Machine Learning, 1803-1813, 2021
A unified optimization approach for CNN model inference on integrated GPUs
L Wang, Z Chen, Y Liu, Y Wang, L Zheng, M Li, Y Wang
Proceedings of the 48th International Conference on Parallel Processing, 1-10, 2019
AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving
Z Li, L Zheng, Y Zhong, V Liu, Y Sheng, X Jin, Y Huang, Z Chen, H Zhang, ...
arXiv preprint arXiv:2302.11665, 2023
H2O: Heavy-Hitter Oracle for efficient generative inference of large language models
Z Zhang, Y Sheng, T Zhou, T Chen, L Zheng, R Cai, Z Song, Y Tian, C Ré, ...
Advances in Neural Information Processing Systems 36, 2024
TensorIR: An abstraction for automatic tensorized program optimization
S Feng, B Hou, H Jin, W Lin, J Shao, R Lai, Z Ye, L Zheng, CH Yu, Y Yu, ...
Proceedings of the 28th ACM International Conference on Architectural …, 2023
TenSet: A large-scale program performance dataset for learned tensor compilers
L Zheng, R Liu, J Shao, T Chen, JE Gonzalez, I Stoica, A Haj-Ali
Thirty-fifth Conference on Neural Information Processing Systems Datasets …, 2021
Size-to-depth: a new perspective for single image depth estimation
Y Wu, S Ying, L Zheng
arXiv preprint arXiv:1801.04461, 2018
GACT: Activation compressed training for generic network architectures
X Liu, L Zheng, D Wang, Y Cen, W Chen, X Han, J Chen, Z Liu, J Tang, ...
International Conference on Machine Learning, 14139-14152, 2022
Optimizing deep learning workloads on ARM GPU with TVM
L Zheng, T Chen
Proceedings of the 1st on Reproducible Quality-Efficient Systems Tournament …, 2018
LMSYS-Chat-1M: A large-scale real-world LLM conversation dataset
L Zheng, WL Chiang, Y Sheng, T Li, S Zhuang, Z Wu, Y Zhuang, Z Li, ...
arXiv preprint arXiv:2309.11998, 2023