Olatunji Ruwase
Microsoft Research
Verified email at microsoft.com
Title · Cited by · Year
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1623 · 2023
ZeRO: Memory optimizations toward training trillion parameter models
S Rajbhandari, J Rasley, O Ruwase, Y He
SC20: International Conference for High Performance Computing, Networking …, 2020
Cited by 1204 · 2020
DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters
J Rasley, S Rajbhandari, O Ruwase, Y He
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020
Cited by 1111 · 2020
A practical dynamic buffer overflow detector
O Ruwase, MS Lam
NDSS 2004, 159-169, 2004
Cited by 549 · 2004
Phi-3 technical report: A highly capable language model locally on your phone
M Abdin, J Aneja, H Awadalla, A Awadallah, AA Awan, N Bach, A Bahree, ...
arXiv preprint arXiv:2404.14219, 2024
Cited by 539 · 2024
Accelerating deep convolutional neural networks using specialized hardware
K Ovtcharov, O Ruwase, JY Kim, J Fowers, K Strauss, ES Chung
Microsoft Research Whitepaper 2 (11), 1-4, 2015
Cited by 532 · 2015
ZeRO-Offload: Democratizing billion-scale model training
J Ren, S Rajbhandari, RY Aminabadi, O Ruwase, S Yang, M Zhang, D Li, ...
2021 USENIX Annual Technical Conference (USENIX ATC 21), 551-564, 2021
Cited by 369 · 2021
ZeRO-Infinity: Breaking the GPU memory wall for extreme scale deep learning
S Rajbhandari, O Ruwase, J Rasley, S Smith, Y He
Proceedings of the international conference for high performance computing …, 2021
Cited by 321 · 2021
DeepSpeed-Inference: Enabling efficient inference of transformer models at unprecedented scale
RY Aminabadi, S Rajbhandari, AA Awan, C Li, D Li, E Zheng, O Ruwase, ...
SC22: International Conference for High Performance Computing, Networking …, 2022
Cited by 278 · 2022
Flexible hardware acceleration for instruction-grain program monitoring
S Chen, M Kozuch, T Strigkos, B Falsafi, PB Gibbons, TC Mowry, ...
ACM SIGARCH Computer Architecture News 36 (3), 377-388, 2008
Cited by 208 · 2008
Performance modeling and scalability optimization of distributed deep learning systems
F Yan, O Ruwase, Y He, T Chilimbi
Proceedings of the 21st ACM SIGKDD International Conference on Knowledge …, 2015
Cited by 114 · 2015
Parallelizing dynamic information flow tracking
O Ruwase, PB Gibbons, TC Mowry, V Ramachandran, S Chen, M Kozuch, ...
Proceedings of the twentieth annual symposium on Parallelism in algorithms …, 2008
Cited by 93 · 2008
Toward accelerating deep learning at scale using specialized hardware in the datacenter
K Ovtcharov, O Ruwase, JY Kim, J Fowers, K Strauss, ES Chung
2015 IEEE Hot Chips 27 Symposium (HCS), 1-38, 2015
Cited by 91 · 2015
Ditto: a system for opportunistic caching in multi-hop wireless networks
FR Dogar, A Phanishayee, H Pucha, O Ruwase, DG Andersen
Proceedings of the 14th ACM international conference on Mobile computing and …, 2008
Cited by 77 · 2008
HyperDrive: Exploring hyperparameters with POP scheduling
J Rasley, Y He, F Yan, O Ruwase, R Fonseca
Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference, 1-13, 2017
Cited by 65 · 2017
Page overlays: An enhanced virtual memory framework to enable fine-grained memory management
V Seshadri, G Pekhimenko, O Ruwase, O Mutlu, PB Gibbons, MA Kozuch, ...
ACM SIGARCH Computer Architecture News 43 (3S), 79-91, 2015
Cited by 64 · 2015
DeepSpeed-Chat: Easy, fast and affordable RLHF training of ChatGPT-like models at all scales
Z Yao, RY Aminabadi, O Ruwase, S Rajbhandari, X Wu, AA Awan, ...
arXiv preprint arXiv:2308.01320, 2023
Cited by 53 · 2023
Neural network training performance optimization framework
TA Chilimbi, O Ruwase, S Rajbhandari, M Carbin, Y He
US Patent App. 14/986,186, 2017
Cited by 44 · 2017
ZeRO++: Extremely efficient collective communication for giant model training
G Wang, H Qin, SA Jacobs, C Holmes, S Rajbhandari, O Ruwase, F Yan, ...
arXiv preprint arXiv:2306.10209, 2023
Cited by 36 · 2023
SERF: Efficient scheduling for fast deep neural network serving via judicious parallelism
F Yan, O Ruwase, Y He, E Smirni
SC'16: Proceedings of the International Conference for High Performance …, 2016
Cited by 35 · 2016
Articles 1–20