Haibin Lin
Bytedance
Verified email at bytedance.com
Title · Cited by · Year
ResNeSt: Split-Attention Networks
H Zhang, C Wu, Z Zhang, Y Zhu, Z Zhang, H Lin, Y Sun, T He, J Mueller, ...
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
886 · 2022
Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, ...
International Conference on Learning Representations, 2019
430 · 2019
Self-Driving Database Management Systems.
A Pavlo, G Angulo, J Arulraj, H Lin, J Lin, L Ma, P Menon, TC Mowry, ...
CIDR 4, 1, 2017
242 · 2017
GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing
J Guo, H He, T He, L Lausen, M Li, H Lin, X Shi, C Wang, J Xie, S Zha, ...
Journal of Machine Learning Research, 2019
153 · 2019
Temporal-Contextual Recommendation in Real-Time
Y Ma, BM Narayanaswamy, H Lin, H Ding
KDD 2020, 2020
41 · 2020
Is Network the Bottleneck of Distributed Training?
Z Zhang, C Chang, H Lin, Y Wang, R Arora, X Jin
SIGCOMM NetAI, 2020
31 · 2020
ResNeSt: Split-Attention Networks (arXiv)
H Zhang, C Wu, Z Zhang, Y Zhu, Z Zhang, H Lin, Y Sun, T He, J Mueller, ...
arXiv preprint arXiv:2004.08955, 2020
31 · 2020
Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates
C Xie, O Koyejo, I Gupta, H Lin
NeurIPS Workshop on Optimization for Machine Learning, 2019
25 · 2019
Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs. CoRR abs/1909.01315 (2019)
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, C Ma, ...
arXiv preprint arXiv:1909.01315, 2019
21 · 2019
CSER: Communication-efficient SGD with Error Reset
C Xie, S Zheng, OO Koyejo, I Gupta, M Li, H Lin
Advances in Neural Information Processing Systems 33, 2020
20 · 2020
Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources
H Lin, H Zhang, Y Ma, T He, Z Zhang, S Zha, M Li
arXiv preprint arXiv:1904.12043, 2019
15 · 2019
Accelerated Large Batch Optimization of BERT Pretraining in 54 minutes
S Zheng, H Lin, S Zha, M Li
arXiv preprint arXiv:2006.13484, 2020
14 · 2020
Compressed Communication for Distributed Training: Adaptive Methods and System
Y Zhong, C Xie, S Zheng, H Lin
arXiv preprint arXiv:2105.07829, 2021
5 · 2021
dPRO: A Generic Performance Diagnosis and Optimization Toolkit for Expediting Distributed DNN Training
H Hu, C Jiang, Y Zhong, Y Peng, C Wu, Y Zhu, H Lin, C Guo
Proceedings of Machine Learning and Systems 4, 623-637, 2022
1 · 2022
Espresso: Revisiting Gradient Compression from the System Perspective
Z Wang, H Lin, Y Zhu, TS Ng
arXiv preprint arXiv:2205.14465, 2022
2022
dPRO: A Generic Profiling and Optimization System for Expediting Distributed DNN Training
H Hu, C Jiang, Y Zhong, Y Peng, C Wu, Y Zhu, H Lin, C Guo
arXiv preprint arXiv:2205.02473, 2022
2022
Dive into Deep Learning for Natural Language Processing
H Lin, X Shi, L Lausen, A Zhang, H He, S Zha, A Smola
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
2019
Just-in-Time Dynamic-Batching
S Zha, Z Jiang, H Lin, Z Zhang
Conference on Neural Information Processing Systems, 2018
2018
Articles 1–18