Roman Novak
Google Brain
Verified email at google.com - Homepage
Title
Cited by
Year
Deep Neural Networks as Gaussian Processes
J Lee, Y Bahri, R Novak, SS Schoenholz, J Pennington, J Sohl-Dickstein
International Conference on Learning Representations (ICLR) 2018, 2017
Cited by 857 · Year 2017
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
J Lee, L Xiao, SS Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ...
Advances in Neural Information Processing Systems (NeurIPS) 32, 8570–8581, 2019
Cited by 723 · Year 2019
Sensitivity and Generalization in Neural Networks: an Empirical Study
R Novak, Y Bahri, DA Abolafia, J Pennington, J Sohl-Dickstein
International Conference on Learning Representations (ICLR) 2018, 2018
Cited by 373 · Year 2018
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
R Novak, L Xiao, J Lee, Y Bahri, G Yang, J Hron, D Abolafia, J Pennington, ...
International Conference on Learning Representations (ICLR) 2019, 2018
Cited by 260 · Year 2018
Neural Tangents: Fast and Easy Infinite Neural Networks in Python
R Novak, L Xiao, J Hron, J Lee, AA Alemi, J Sohl-Dickstein, ...
International Conference on Learning Representations (ICLR) 2020, 2019
Cited by 183 · Year 2019
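
Neural Tangents is released as an open-source Python/JAX library. As a minimal sketch of how it is typically used (the architecture, widths, data shapes, and variable names below are illustrative assumptions, not taken from the paper):

    from jax import random
    import neural_tangents as nt
    from neural_tangents import stax

    # A fully-connected architecture; kernel_fn gives its infinite-width kernels.
    init_fn, apply_fn, kernel_fn = stax.serial(
        stax.Dense(512), stax.Relu(),
        stax.Dense(512), stax.Relu(),
        stax.Dense(1),
    )

    key1, key2, key3 = random.split(random.PRNGKey(0), 3)
    x_train = random.normal(key1, (20, 10))
    y_train = random.normal(key2, (20, 1))
    x_test = random.normal(key3, (5, 10))

    # Closed-form NNGP and NTK kernels of the corresponding infinite-width network.
    kernels = kernel_fn(x_test, x_train, ('nngp', 'ntk'))

    # Exact infinite-width predictions (GP posterior mean / infinite-time gradient descent).
    predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
    y_nngp, y_ntk = predict_fn(x_test=x_test, get=('nngp', 'ntk'))
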
Finite versus infinite neural networks: an empirical study
J Lee, SS Schoenholz, J Pennington, B Adlam, L Xiao, R Novak, ...
Neural Information Processing Systems (NeurIPS) 2020, 2020
Cited by 123 · Year 2020
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 100 · Year 2022
Dataset Distillation with Infinitely Wide Convolutional Networks
T Nguyen, R Novak, L Xiao, J Lee
Neural Information Processing Systems (NeurIPS) 2021, 2021
Cited by 73 · Year 2021
Infinite attention: NNGP and NTK for deep attention networks
J Hron, Y Bahri, J Sohl-Dickstein, R Novak
International Conference on Machine Learning (ICML) 2020, 2020
Cited by 64 · Year 2020
On the infinite width limit of neural networks with a standard parameterization
J Sohl-Dickstein, R Novak, SS Schoenholz, J Lee
arXiv preprint arXiv:2001.07301, 2020
Cited by 32 · Year 2020
Improving the Neural Algorithm of Artistic Style
R Novak, Y Nikulin
arXiv preprint arXiv:1605.04603, 2016
Cited by 29 · Year 2016
Exploring the Neural Algorithm of Artistic Style
Y Nikulin, R Novak
arXiv preprint arXiv:1602.07188, 2016
Cited by 29 · Year 2016
Iterative Refinement for Machine Translation
R Novak, M Auli, D Grangier
Bay Area Machine Learning Symposium (BayLearn) 2017, 2016
Cited by 24 · Year 2016
Exact posterior distributions of wide Bayesian neural networks
J Hron, Y Bahri, R Novak, J Pennington, J Sohl-Dickstein
ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning; BayLearn 2020, 2020
Cited by 19 · Year 2020
Fast finite width neural tangent kernel
R Novak, J Sohl-Dickstein, SS Schoenholz
International Conference on Machine Learning (ICML) 2022, 2021
Cited by 15 · Year 2021
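
This entry concerns the empirical (finite-width) neural tangent kernel, which the same library exposes through nt.empirical_ntk_fn. A brief hedged sketch, with an illustrative network and data shapes assumed here rather than taken from the paper:

    from jax import random
    import neural_tangents as nt
    from neural_tangents import stax

    # A small finite-width network; apply_fn(params, x) can be any JAX-differentiable model.
    init_fn, apply_fn, _ = stax.serial(stax.Dense(256), stax.Relu(), stax.Dense(1))

    key1, key2, key3 = random.split(random.PRNGKey(0), 3)
    _, params = init_fn(key1, (-1, 10))  # batch of 10-dimensional inputs
    x1 = random.normal(key2, (8, 10))
    x2 = random.normal(key3, (4, 10))

    # Empirical NTK: inner products of Jacobians of apply_fn with respect to params.
    ntk_fn = nt.empirical_ntk_fn(apply_fn)
    ntk = ntk_fn(x1, x2, params)  # expected shape (8, 4) after tracing the output axis
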
Fast Neural Kernel Embeddings for General Activations
I Han, A Zandieh, J Lee, R Novak, L Xiao, A Karbasi
Neural Information Processing Systems (NeurIPS) 2022, 2022
Cited by 3 · Year 2022
Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling
J Hron, R Novak, J Pennington, J Sohl-Dickstein
International Conference on Machine Learning (ICML) 2022, 2022
Cited by 1 · Year 2022