Improving text auto-completion with next phrase prediction

DH Lee, Z Hu, RKW Lee - arXiv preprint arXiv:2109.07067, 2021 - arxiv.org
Language models such as GPT-2 have performed well on constructing syntactically sound sentences for the text auto-completion task. However, such models often require considerable training effort to adapt to specific writing domains (e.g., medical). In this paper, we propose an intermediate training strategy to enhance pre-trained language models' performance on the text auto-completion task and quickly adapt them to specific domains. Our strategy includes a novel self-supervised training objective called Next Phrase Prediction (NPP), which encourages a language model to complete a partial query with enriched phrases, ultimately improving the model's text auto-completion performance. Preliminary experiments show that our approach outperforms the baselines in auto-completion for the email and academic writing domains.
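The abstract does not detail how NPP training examples are built, but since the objective is self-supervised, one plausible reading is that sentences are segmented into phrases and turned into (partial query, next phrase) pairs for further pre-training. The sketch below illustrates that idea only; the phrase-splitting heuristic and the names split_into_phrases and make_npp_pairs are illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch of constructing NPP-style self-supervised pairs.
# Assumption: a sentence is chunked into phrases, and each prefix of
# phrases becomes a "partial query" whose target is the next phrase.
import re
from typing import List, Tuple

def split_into_phrases(sentence: str) -> List[str]:
    # Naive stand-in for a real phrase chunker: split on commas
    # and a few coordinating words. The paper's actual segmentation
    # is not specified in the abstract.
    parts = re.split(r",| and | but | which ", sentence)
    return [p.strip() for p in parts if p.strip()]

def make_npp_pairs(sentence: str) -> List[Tuple[str, str]]:
    """Build (partial query, next phrase) training pairs."""
    phrases = split_into_phrases(sentence)
    return [(" ".join(phrases[:i]), phrases[i])
            for i in range(1, len(phrases))]

if __name__ == "__main__":
    text = ("Language models such as GPT-2 have performed well, "
            "but they often require considerable training effort "
            "to adapt to specific writing domains.")
    for prefix, target in make_npp_pairs(text):
        print(f"query:  {prefix!r}")
        print(f"target: {target!r}\n")
```

Pairs like these could then be used to fine-tune a causal language model (e.g., GPT-2) with the standard cross-entropy loss computed on the target phrase, which matches the abstract's framing of NPP as an intermediate training stage before domain-specific auto-completion.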