Leveraging visual knowledge in language tasks: An empirical study on intermediate pre-training for cross-modal knowledge transfer

W Jin, DH Lee, C Zhu, J Pujara, X Ren - arXiv preprint arXiv:2203.07519, 2022 - arxiv.org
Pre-trained language models remain far from human performance on tasks that require understanding the properties (e.g., appearance, measurable quantity) and affordances of everyday objects, since text corpora lack such information due to reporting bias. In this work, we study whether integrating visual knowledge into a language model can fill that gap. We investigate two types of knowledge transfer: (1) text knowledge transfer, using image captions that may contain enriched visual knowledge, and (2) cross-modal knowledge transfer, using both images and captions with vision-language training objectives. On five downstream tasks that plausibly require visual knowledge, we perform extensive empirical comparisons over the presented objectives. Our experiments show that visual knowledge transfer can improve performance in both low-resource and fully supervised settings.
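The "text knowledge transfer" variant described in the abstract amounts to intermediate pre-training: continuing a text-only objective such as masked language modeling on caption text before fine-tuning on downstream tasks. As a rough illustration only, and not the paper's exact recipe, the sketch below uses HuggingFace Transformers; the base model, the `captions.txt` file, and all hyperparameters are assumptions for the example.

```python
# Sketch: intermediate MLM pre-training on image captions (assumed setup,
# not the authors' exact configuration).
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Assume captions.txt holds one image caption per line (e.g. COCO captions).
captions = load_dataset("text", data_files={"train": "captions.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64)

tokenized = captions["train"].map(tokenize, batched=True,
                                  remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-captions",
                           num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # intermediate pre-training; fine-tune on downstream tasks next
```

The cross-modal variant would instead pair each caption with its image and train with vision-language objectives (e.g., masked modeling conditioned on image features), which requires a multimodal encoder rather than the text-only model shown here.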