Boxin Wang
Research Scientist at NVIDIA
Verified email at nvidia.com - Homepage
Title · Cited by · Year
Towards efficient data valuation based on the Shapley value
R Jia, D Dao, B Wang, FA Hubis, N Hynes, NM Gürel, B Li, C Zhang, ...
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
358 · 2019
Efficient task-specific data valuation for nearest neighbor algorithms
R Jia, D Dao, B Wang, FA Hubis, NM Gurel, B Li, C Zhang, CJ Spanos, ...
arXiv preprint arXiv:1908.08619, 2019
181 · 2019
Reinforcement-learning based portfolio management with augmented asset movement prediction states
Y Ye, H Pei, B Wang, PY Chen, Y Zhu, J Xiao, B Li
Proceedings of the AAAI Conference on Artificial Intelligence 34 (01), 1112-1119, 2020
125 · 2020
InfoBERT: Improving robustness of language models from an information theoretic perspective
B Wang, S Wang, Y Cheng, Z Gan, R Jia, B Li, J Liu
arXiv preprint arXiv:2010.02329, 2020
102 · 2020
Adversarial GLUE: A multi-task benchmark for robustness evaluation of language models
B Wang, C Xu, S Wang, Z Gan, Y Cheng, J Gao, AH Awadallah, B Li
arXiv preprint arXiv:2111.02840, 2021
99 · 2021
G-PATE: Scalable differentially private data generator via private aggregation of teacher discriminators
Y Long, B Wang, Z Yang, B Kailkhura, A Zhang, C Gunter, B Li
Advances in Neural Information Processing Systems 34, 2965-2977, 2021
82* · 2021
DecodingTrust: A comprehensive assessment of trustworthiness in GPT models
B Wang, W Chen, H Pei, C Xie, M Kang, C Zhang, C Xu, Z Xiong, R Dutta, ...
arXiv preprint arXiv:2306.11698, 2023
70 · 2023
T3: Tree-autoencoder constrained adversarial text generation for targeted attack
B Wang, H Pei, B Pan, Q Chen, S Wang, B Li
arXiv preprint arXiv:1912.10375, 2019
70 · 2019
SemAttack: Natural textual attacks via different semantic spaces
B Wang, C Xu, X Liu, Y Cheng, B Li
arXiv preprint arXiv:2205.01287, 2022
35* · 2022
DataLens: Scalable privacy-preserving training via gradient compression and aggregation
B Wang, F Wu, Y Long, L Rimanic, C Zhang, B Li
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications …, 2021
35 · 2021
Exploring the limits of domain-adaptive training for detoxifying large-scale language models
B Wang, W Ping, C Xiao, P Xu, M Patwary, M Shoeybi, B Li, ...
Advances in Neural Information Processing Systems 35, 35811-35824, 2022
30 · 2022
Shall we pretrain autoregressive language models with retrieval? A comprehensive study
B Wang, W Ping, P Xu, L McAfee, Z Liu, M Shoeybi, Y Dong, O Kuchaiev, ...
arXiv preprint arXiv:2304.06762, 2023
16 · 2023
Uncovering the connections between adversarial transferability and knowledge transferability
K Liang, JY Zhang, B Wang, Z Yang, S Koyejo, B Li
International Conference on Machine Learning, 6577-6587, 2021
15 · 2021
Certifying out-of-domain generalization for black-box functions
MG Weber, L Li, B Wang, Z Zhao, B Li, C Zhang
International Conference on Machine Learning, 23527-23548, 2022
12 · 2022
Can Public Large Language Models Help Private Cross-device Federated Learning?
B Wang, YJ Zhang, Y Cao, B Li, HB McMahan, S Oh, Z Xu, M Zaheer
arXiv preprint arXiv:2305.12132, 2023
11 · 2023
Incorporating external POS tagger for punctuation restoration
N Shi, W Wang, B Wang, J Li, X Liu, Z Lin
arXiv preprint arXiv:2106.06731, 2021
10 · 2021
End-to-end robustness for sensing-reasoning machine learning pipelines
Z Yang, Z Zhao, H Pei, B Wang, B Karlas, J Liu, H Guo, B Li, C Zhang
arXiv preprint arXiv:2003.00120, 2020
8 · 2020
Improving certified robustness via statistical learning with logical reasoning
Z Yang, Z Zhao, B Wang, J Zhang, L Li, H Pei, B Karlaš, J Liu, H Guo, ...
Advances in Neural Information Processing Systems 35, 34859-34873, 2022
6 · 2022
InstructRetro: Instruction tuning post retrieval-augmented pretraining
B Wang, W Ping, L McAfee, P Xu, B Li, M Shoeybi, B Catanzaro
arXiv preprint arXiv:2310.07713, 2023
4 · 2023
FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data
W Chu, C Xie, B Wang, L Li, L Yin, H Zhao, B Li
arXiv preprint arXiv:2207.10265, 2022
4 · 2022