Jesse Thomason
Cited by
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
M Shridhar, J Thomason, D Gordon, Y Bisk, W Han, R Mottaghi, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
ProgPrompt: Generating situated robot task plans using large language models
I Singh, V Blukis, A Mousavian, A Goyal, D Xu, J Tremblay, D Fox, ...
2023 IEEE International Conference on Robotics and Automation (ICRA), 11523 …, 2023
Experience Grounds Language
Y Bisk, A Holtzman, J Thomason, J Andreas, Y Bengio, J Chai, M Lapata, ...
arXiv preprint arXiv:2004.10151, 2020
Vision-and-dialog navigation
J Thomason, M Murray, M Cakmak, L Zettlemoyer
Conference on Robot Learning (CoRL), 2019
Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild
J Thomason, S Venugopalan, S Guadarrama, K Saenko, R Mooney
Proceedings of the Twenty Fifth International Conference on Computational …, 2014
Learning to Interpret Natural Language Commands through Human-Robot Dialog
J Thomason, S Zhang, R Mooney, P Stone
Proceedings of the 24th International Joint Conference on Artificial …, 2015
BWIBots: A platform for bridging the gap between AI and human–robot interaction research
P Khandelwal, S Zhang, J Sinapov, M Leonetti, J Thomason, F Yang, ...
The International Journal of Robotics Research, 2017
TEACh: Task-driven Embodied Agents that Chat
A Padmakumar, J Thomason, A Shrivastava, P Lange, A Narayan-Chen, ...
arXiv preprint arXiv:2110.00534, 2021
Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
J Thomason, J Sinapov, M Svetlik, P Stone, RJ Mooney
Proceedings of the Twenty-Fifth International Joint Conference on Artificial …, 2016
Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions
J Gu, E Stefani, Q Wu, J Thomason, XE Wang
arXiv preprint arXiv:2203.12667, 2022
Shifting the Baseline: Single Modality Performance on Visual Navigation & QA
J Thomason, D Gordon, Y Bisk
Conference of the North American Chapter of the Association for …, 2019
Improving Grounded Natural Language Understanding through Human-Robot Dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
International Conference on Robotics and Automation (ICRA), 2019
Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion
A Suglia, Q Gao, J Thomason, G Thattai, G Sukhatme
arXiv preprint arXiv:2108.04927, 2021
Prosodic entrainment and tutoring dialogue success
J Thomason, HV Nguyen, D Litman
International Conference on Artificial Intelligence in Education, 750-753, 2013
Opportunistic active learning for grounding natural language descriptions
J Thomason, A Padmakumar, J Sinapov, J Hart, P Stone, RJ Mooney
Conference on Robot Learning, 67-76, 2017
Jointly improving parsing and perception for natural language commands through human-robot dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
Journal of Artificial Intelligence Research 67, 325-372, 2020
Language grounding with 3D objects
J Thomason, M Shridhar, Y Bisk, C Paxton, L Zettlemoyer
Conference on Robot Learning, 1691-1701, 2022
Interpreting Black Box Models via Hypothesis Testing
C Burns, J Thomason, W Tansey
Foundations of Data Science (FODS), 2020
Prospection: Interpretable Plans From Language By Predicting the Future
C Paxton, Y Bisk, J Thomason, A Byravan, D Fox
International Conference on Robotics and Automation (ICRA), 2019
RMM: A Recursive Mental Model for Dialog Navigation
HR Roman, Y Bisk, J Thomason, A Celikyilmaz, J Gao
arXiv preprint arXiv:2005.00728, 2020