Shaoqi Wang
Scalable distributed DL training: Batching communication and computation
S Wang, A Pi, X Zhou
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 5289-5296, 2019
Cited by 38 · 2019
Performance Isolation of Data-Intensive Scale-out Applications in a Multi-tenant Cloud
P Lama, S Wang, X Zhou, D Cheng
2018 IEEE International Parallel and Distributed Processing Symposium (IPDPS …, 2018
Cited by 28 · 2018
An efficient and non-intrusive GPU scheduling framework for deep learning training systems
S Wang, OJ Gonzalez, X Zhou, T Williams, BD Friedman, M Havemann, ...
SC20: International Conference for High Performance Computing, Networking …, 2020
Cited by 27 · 2020
Aggressive synchronization with partial processing for iterative ML jobs on clusters
S Wang, W Chen, A Pi, X Zhou
Proceedings of the 19th International Middleware Conference, 253-265, 2018
Cited by 22 · 2018
Pufferfish: Container-driven elastic memory management for data-intensive applications
W Chen, A Pi, S Wang, X Zhou
Proceedings of the ACM Symposium on Cloud Computing, 259-271, 2019
Cited by 19 · 2019
Overlapping communication with computation in parameter server for scalable DL training
S Wang, A Pi, X Zhou, J Wang, CZ Xu
IEEE Transactions on Parallel and Distributed Systems 32 (9), 2144-2159, 2021
Cited by 18 · 2021
Dependency-aware network adaptive scheduling of data-intensive parallel jobs
S Wang, W Chen, X Zhou, L Zhang, Y Wang
IEEE Transactions on Parallel and Distributed Systems 30 (3), 515-529, 2018
Cited by 18 · 2018
Characterizing scheduling delay for low-latency data analytics workloads
W Chen, A Pi, S Wang, X Zhou
2018 IEEE International Parallel and Distributed Processing Symposium (IPDPS …, 2018
Cited by 18 · 2018
Elastic parameter server: Accelerating ML training with scalable resource scheduling
S Wang, A Pi, X Zhou
IEEE Transactions on Parallel and Distributed Systems 33 (5), 1128-1143, 2021
Cited by 16 · 2021
Semantic-aware workflow construction and analysis for distributed data analytics systems
A Pi, W Chen, S Wang, X Zhou
Proceedings of the 28th International Symposium on High-Performance Parallel …, 2019
Cited by 13 · 2019
Improving utilization and parallelism of Hadoop cluster by elastic containers
Y Xu, W Chen, S Wang, X Zhou, C Jiang
IEEE INFOCOM 2018-IEEE Conference on Computer Communications, 180-188, 2018
Cited by 12 · 2018
OS-augmented oversubscription of opportunistic memory with a user-assisted OOM killer
W Chen, A Pi, S Wang, X Zhou
Proceedings of the 20th International Middleware Conference, 28-40, 2019
Cited by 11 · 2019
Network-adaptive scheduling of data-intensive parallel jobs with dependencies in clusters
S Wang, X Zhou, L Zhang, C Jiang
2017 IEEE International Conference on Autonomic Computing (ICAC), 155-160, 2017
Cited by 11 · 2017
Addressing skewness in iterative ML jobs with parameter partition
S Wang, W Chen, X Zhou, SY Chang, M Ji
IEEE INFOCOM 2019-IEEE Conference on Computer Communications, 1261-1269, 2019
Cited by 9 · 2019
Memory at your service: Fast memory allocation for latency-critical services
A Pi, J Zhao, S Wang, X Zhou
Proceedings of the 22nd International Middleware Conference, 185-197, 2021
Cited by 6 · 2021
A maximum entropy based reordering model for Mongolian-Chinese SMT with morphological information
Z Yang, M Li, Z Zhu, L Chen, L Wei, S Wang
2014 International Conference on Asian Language Processing (IALP), 175-178, 2014
Cited by 4 · 2014
Flashbyte: Improving memory efficiency with lightweight native storage
J Zhao, A Pi, S Wang, X Zhou
2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet …, 2021
Cited by 2 · 2021
A mutual iterative enhancement model for simultaneous comparable corpora and bilingual lexicons construction
Z Zhu, X Zeng, S Zheng, X Sun, S Wang, S Weng
Ninth Workshop on Building and Using Comparable Corpora, 27, 2016
Cited by 2 · 2016
Toward Scalable Distributed Machine Learning on Data-Parallel Clusters
S Wang
University of Colorado Colorado Springs, 2020
2020
Improving Bilingual Lexicon Extraction Performance from Comparable Corpora via Optimizing Translation Candidate Lists
S Wang, M Li, Z Zhu, Z Yang, S Weng
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language …, 2014
2014