Title, authors, and venue | Cited by | Year |
Efficient parallelization of H.264 decoding with macro block level scheduling J Chong, N Satish, B Catanzaro, K Ravindran, K Keutzer 2007 IEEE International Conference on Multimedia and Expo, 1874-1877, 2007 | 121 | 2007 |
Parallel scalability in speech recognition K You, J Chong, Y Yi, E Gonina, CJ Hughes, YK Chen, W Sung, K Keutzer IEEE Signal Processing Magazine 26 (6), 124-135, 2009 | 74 | 2009 |
Data-parallel large vocabulary continuous speech recognition on graphics processors J Chong, Y Yi, A Faria, N Satish, K Keutzer Proceedings of the 1st Annual Workshop on Emerging Applications and Many …, 2008 | 72 | 2008 |
A fully data parallel WFST-based large vocabulary continuous speech recognition on a graphics processing unit J Chong, E Gonina, Y Yi, K Keutzer Tenth Annual Conference of the International Speech Communication Association, 2009 | 57 | 2009 |
Extensible and scalable time triggered scheduling W Zheng, J Chong, C Pinello, S Kanajan, A Sangiovanni-Vincentelli Fifth International Conference on Application of Concurrency to System …, 2005 | 54 | 2005 |
Classification, customization, and characterization: Using MILP for task allocation and scheduling A Davare, J Chong, Q Zhu, DM Densmore, AL Sangiovanni-Vincentelli Systems Research, 2006 | 42 | 2006 |
Belief propagation by message passing in junction trees: Computing each message faster using GPU parallelization L Zheng, O Mengshoel, J Chong arXiv preprint arXiv:1202.3777, 2012 | 37 | 2012 |
Efficient on-the-fly hypothesis rescoring in a hybrid GPU/CPU-based large vocabulary continuous speech recognition engine J Kim, J Chong, IR Lane INTERSPEECH, 1035-1038, 2012 | 28 | 2012 |
Opportunities and challenges of parallelizing speech recognition J Chong, G Friedland, A Janin, N Morgan, C Oei Proceedings of the 2nd USENIX Conference on Hot Topics in Parallelism …, 2010 | 28 | 2010 |
Acceleration of market value-at-risk estimation M Dixon, J Chong, K Keutzer Proceedings of the 2nd Workshop on High Performance Computational Finance, 1-8, 2009 | 26 | 2009 |
Method and system for parallel statistical inference on highly parallel platforms J Chong, Y Yi, EI Gonina US Patent 8,566,259, 2013 | 23 | 2013 |
Efficient automatic speech recognition on the GPU J Chong, E Gonina, K Keutzer GPU Computing Gems Emerald Edition, 601-618, 2011 | 21 | 2011 |
Apparatus and method for sharing a functional unit execution resource among a plurality of functional units J Chong, C Olson, GF Grohoski US Patent 7,353,364, 2008 | 21 | 2008 |
Exploring recognition network representations for efficient speech inference on highly parallel platforms J Chong, E Gonina, K You, K Keutzer INTERSPEECH, 1489-1492, 2010 | 19 | 2010 |
Parallelizing speaker-attributed speech recognition for meeting browsing G Friedland, J Chong, A Janin 2010 IEEE International Symposium on Multimedia, 121-128, 2010 | 17 | 2010 |
Monte Carlo-based financial market value-at-risk estimation on GPUs MF Dixon, T Bradley, J Chong, K Keutzer GPU Computing Gems Jade Edition, 337-353, 2012 | 15 | 2012 |
Scalable HMM based inference engine in large vocabulary continuous speech recognition J Chong, K You, Y Yi, E Gonina, C Hughes, W Sung, K Keutzer 2009 IEEE International Conference on Multimedia and Expo, 1797-1800, 2009 | 15 | 2009 |
Pattern-oriented application frameworks for domain experts to effectively utilize highly parallel manycore microprocessors J Chong University of California, Berkeley, 2010 | 13 | 2010 |
Methods for hybrid GPU/CPU data processing I Lane, J Chong, J Kim US Patent 9,558,748, 2017 | 12 | 2017 |
Recognition of Tibetan wood block prints with generalized hidden Markov and kernelized modified quadratic distance function F Hedayati, J Chong, K Keutzer Proceedings of the 2011 Joint Workshop on Multilingual OCR and Analytics for …, 2011 | 12 | 2011 |