Wide neural networks of any depth evolve as linear models under gradient descent. J Lee, L Xiao, S Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, et al. Advances in Neural Information Processing Systems 32, 2019. Cited by 1166.
Dynamical isometry and a mean field theory of CNNs: how to train 10,000-layer vanilla convolutional neural networks. L Xiao, Y Bahri, J Sohl-Dickstein, S Schoenholz, J Pennington. International Conference on Machine Learning, 5393-5402, 2018. Cited by 385.
Bayesian deep convolutional neural networks with many channels are Gaussian processes. R Novak, L Xiao, Y Bahri, J Lee, G Yang, DA Abolafia, J Pennington, et al. International Conference on Learning Representations (ICLR), 2019. Cited by 384*.
Neural Tangents: fast and easy infinite neural networks in Python. R Novak, L Xiao, J Hron, J Lee, AA Alemi, J Sohl-Dickstein, et al. arXiv preprint arXiv:1912.02803, 2019. Cited by 263.
Dataset distillation with infinitely wide convolutional networks. T Nguyen, R Novak, L Xiao, J Lee. Advances in Neural Information Processing Systems 34, 5186-5198, 2021. Cited by 252.
Finite versus infinite neural networks: an empirical study. J Lee, S Schoenholz, J Pennington, B Adlam, L Xiao, R Novak, et al. Advances in Neural Information Processing Systems 33, 15156-15172, 2020. Cited by 230.
Provable benefit of orthogonal initialization in optimizing deep linear networks. W Hu, L Xiao, J Pennington. arXiv preprint arXiv:2001.05992, 2020. Cited by 142.
Disentangling trainability and generalization in deep neural networks. L Xiao, J Pennington, S Schoenholz. International Conference on Machine Learning, 10462-10472, 2020. Cited by 125*.
The surprising simplicity of the early-time learning dynamics of neural networks. W Hu, L Xiao, B Adlam, J Pennington. Advances in Neural Information Processing Systems 33, 17116-17128, 2020. Cited by 78.
Beyond human data: scaling self-training for problem-solving with language models. A Singh, JD Co-Reyes, R Agarwal, A Anand, P Patil, X Garcia, PJ Liu, et al. arXiv preprint arXiv:2312.06585, 2023. Cited by 66.
Precise learning curves and higher-order scalings for dot-product kernel regression. L Xiao, H Hu, T Misiakiewicz, Y Lu, J Pennington. Advances in Neural Information Processing Systems 35, 4558-4570, 2022. Cited by 48.
Uniform estimates for bilinear Hilbert transforms and bilinear maximal functions associated to polynomials. X Li, L Xiao. American Journal of Mathematics 138 (4), 907-962, 2016. Cited by 42.
Small-scale proxies for large-scale transformer training instabilities. M Wortsman, PJ Liu, L Xiao, K Everett, A Alemi, B Adlam, JD Co-Reyes, et al. arXiv preprint arXiv:2309.14322, 2023. Cited by 39.
Exploring the uncertainty properties of neural networks' implicit priors in the infinite-width limit. B Adlam, J Lee, L Xiao, J Pennington, J Snoek. International Conference on Learning Representations (ICLR), 2020. Cited by 23.
Endpoint estimates for one-dimensional oscillatory integral operators. L Xiao. Advances in Mathematics 316, 255-291, 2017. Cited by 22.
Maximal decay inequalities for trilinear oscillatory integrals of convolution type. PT Gressman, L Xiao. Journal of Functional Analysis 271 (12), 3695-3726, 2016. Cited by 22.
Eigenspace restructuring: a principle of space and frequency in neural networks. L Xiao. Conference on Learning Theory, 4888-4944, 2022. Cited by 21.
Bilinear Hilbert transforms associated with plane curves. J Guo, L Xiao. The Journal of Geometric Analysis 26, 967-995, 2016. Cited by 17.
Fast neural kernel embeddings for general activations. I Han, A Zandieh, J Lee, R Novak, L Xiao, A Karbasi. Advances in Neural Information Processing Systems 35, 35657-35671, 2022. Cited by 15.
Sharp estimates for trilinear oscillatory integrals and an algorithm of two-dimensional resolution of singularities. L Xiao. arXiv preprint arXiv:1311.3725, 2013. Cited by 13*.