Qi Meng
Principal Researcher, Microsoft Research AI4Science
Verified email at microsoft.com
Title · Cited by · Year
LightGBM: A highly efficient gradient boosting decision tree
G Ke, Q Meng, T Finley, T Wang, W Chen, W Ma, Q Ye, TY Liu
Advances in neural information processing systems 30, 2017
10975 · 2017
R-Drop: Regularized dropout for neural networks
L Wu, J Li, Y Wang, Q Meng, T Qin, W Chen, M Zhang, TY Liu
Advances in Neural Information Processing Systems 34, 10890-10905, 2021
338 · 2021
Asynchronous stochastic gradient descent with delay compensation
S Zheng, Q Meng, T Wang, W Chen, N Yu, ZM Ma, TY Liu
International Conference on Machine Learning, 4120-4129, 2017
287 · 2017
A communication-efficient parallel algorithm for decision tree
Q Meng, G Ke, T Wang, W Chen, Q Ye, ZM Ma, TY Liu
Advances in Neural Information Processing Systems 29, 2016
142 · 2016
Convergence analysis of distributed stochastic gradient descent with shuffling
Q Meng, W Chen, Y Wang, ZM Ma, TY Liu
Neurocomputing 337, 46-57, 2019
129 · 2019
PriorGrad: Improving conditional denoising diffusion models with data-dependent adaptive prior
S Lee, H Kim, C Shin, X Tan, C Liu, Q Meng, T Qin, W Chen, S Yoon, ...
arXiv preprint arXiv:2106.06406, 2021
79 · 2021
An efficient Lorentz equivariant graph neural network for jet tagging
S Gong, Q Meng, J Zhang, H Qu, C Li, S Qian, W Du, ZM Ma, TY Liu
Journal of High Energy Physics 2022 (7), 1-22, 2022
62 · 2022
Target transfer Q-learning and its convergence analysis
Y Wang, Y Liu, W Chen, ZM Ma, TY Liu
Neurocomputing 392, 11-22, 2020
50 · 2020
Reinforcement learning with dynamic Boltzmann softmax updates
L Pan, Q Cai, Q Meng, W Chen, L Huang, TY Liu
arXiv preprint arXiv:1903.05926, 2019
39 · 2019
SE(3) equivariant graph neural networks with complete local frames
W Du, H Zhang, Y Du, Q Meng, W Chen, N Zheng, B Shao, TY Liu
International Conference on Machine Learning, 5583-5608, 2022
37 · 2022
The implicit bias for adaptive optimization algorithms on homogeneous neural networks
B Wang, Q Meng, W Chen, TY Liu
International Conference on Machine Learning, 10849-10858, 2021
27 · 2021
G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space
Q Meng, S Zheng, H Zhang, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:1802.03713, 2018
27 · 2018
Asynchronous accelerated stochastic gradient descent
Q Meng, W Chen, J Yu, T Wang, Z Ma, TY Liu
IJCAI, 1853-1859, 2016
25 · 2016
Asynchronous stochastic proximal optimization algorithms with variance reduction
Q Meng, W Chen, J Yu, T Wang, ZM Ma, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017
24 · 2017
Machine-learning nonconservative dynamics for new-physics detection
Z Liu, B Wang, Q Meng, W Chen, M Tegmark, TY Liu
Physical Review E 104 (5), 055302, 2021
23 · 2021
UniDrop: A simple yet effective technique to improve Transformer without extra cost
Z Wu, L Wu, Q Meng, Y Xia, S Xie, T Qin, X Dai, TY Liu
arXiv preprint arXiv:2104.04946, 2021
23 · 2021
Capacity control of ReLU neural networks by basis-path norm
S Zheng, Q Meng, H Zhang, W Chen, N Yu, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 5925-5932, 2019
23 · 2019
Positively scale-invariant flatness of ReLU neural networks
M Yi, Q Meng, W Chen, Z Ma, TY Liu
arXiv preprint arXiv:1903.02237, 2019
22 · 2019
Dynamic of stochastic gradient descent with state-dependent noise
Q Meng, S Gong, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:2006.13719, 2020
13 · 2020
Conversations powered by cross-lingual knowledge
W Sun, C Meng, Q Meng, Z Ren, P Ren, Z Chen, M de Rijke
Proceedings of the 44th International ACM SIGIR Conference on Research and …, 2021
11 · 2021