If you are coming to EMNLP, let’s talk. Happy to chat more about the papers (anything on Document Understanding and multi-modal AI) and other things. https://lnkd.in/ef9yPS6v #emnlp2024
Assistant Professor @ Arizona State University | Postdoc UPenn | PhD University of Utah | AI, NLP, Data Science
EMNLP 2024 results are out, and we got three papers in. Congrats to the students :) A first paper for many of them. Vatsal, Pranshu, Suyash, Jainit, Kunal, Srija, and Adnan, congrats 🎉 🎊 🍾 🎈

Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets (main)
Vatsal Gupta, Pranshu Pandya, Tushar Kataria, Vivek Gupta, Dan Roth
tldr: make models robust on multiple adversarial challenge sets simultaneously
arxiv: https://lnkd.in/ei-65Y6g

Knowledge-Aware Reasoning over Multimodal Semi-structured Tables (findings)
Suyash Vardhan Mathur, Jainit Bafna, Kunal Kartik, Harshita Khandelwal, Manish Shrivastava, Vivek Gupta, Mohit Bansal, Dan Roth
tldr: introduces the novel task of reasoning over multimodal tabular data, text + images in a structure (a first of its kind)
arxiv: https://lnkd.in/e5ycusfn

Unraveling the Truth: Do LLMs really Understand Charts? A Deep Dive into Consistency and Robustness (findings)
Srija Mukhopadhyay, Adnan Qidwai, Aparna Garimella, Pritika Ramu, Vivek Gupta, Dan Roth
tldr: VLMs, although seemingly very accurate on charts, are not robust and consistent in performance
arxiv: https://lnkd.in/eFFA-sVX

We also got a workshop paper at BlackboxNLP 2024 @EMNLP; here are the details:

Enhancing Question Answering on Charts Through Effective Pre-training Tasks
Ashim Gupta, Vivek Gupta, Shuo Zhang, Yujie He, Ning Zhang, Shalin Shah, Ph.D.
tldr: models are not robust and consistent on charts; can we use pre-training on simple auxiliary tasks to enhance model robustness?
arxiv: https://lnkd.in/egbRHm-4