Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning

Chujie Zheng, Pei Ke, Zheng Zhang, Minlie Huang


Abstract
It has always been an important yet challenging problem to control language models to avoid generating texts with undesirable attributes, such as toxic language and unnatural repetition. We introduce Click for controllable text generation, which needs no modification to the model architecture and facilitates out-of-the-box use of trained models. It employs a contrastive loss on sequence likelihood, which fundamentally decreases the generation probability of negative samples (i.e., generations with undesirable attributes). It also adopts a novel likelihood ranking-based strategy to construct contrastive samples from model generations. On the tasks of language detoxification, sentiment steering, and repetition reduction, we show that Click outperforms strong baselines of controllable text generation and demonstrate the superiority of Click’s sample construction strategy.
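
For intuition, the following is a minimal PyTorch sketch of a margin-based contrastive loss on sequence likelihood, in the spirit of the approach the abstract describes. The function names, the length normalization, and the margin hyperparameter are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def sequence_log_likelihood(logits, labels, pad_id):
    # Token-level log-probabilities of the labels under the model.
    # logits: (batch, seq_len, vocab); labels: (batch, seq_len).
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    # Mask out padding and average over the sequence length.
    mask = (labels != pad_id).float()
    return (token_ll * mask).sum(-1) / mask.sum(-1).clamp(min=1)

def contrastive_sequence_loss(pos_ll, neg_ll, margin=1.0):
    # Hinge loss: push the likelihood of positive (desirable) generations
    # above that of negative (undesirable) ones by at least `margin`.
    return F.relu(margin - pos_ll + neg_ll).mean()

Per the abstract, the positive/negative pairs would come from the model's own generations, ranked by likelihood; the sketch above only captures the margin idea, presumably combined with standard language-modeling training.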
Anthology ID: 2023.findings-acl.65
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1022–1040
URL: https://aclanthology.org/2023.findings-acl.65
DOI: 10.18653/v1/2023.findings-acl.65
Cite (ACL): Chujie Zheng, Pei Ke, Zheng Zhang, and Minlie Huang. 2023. Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1022–1040, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning (Zheng et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.65.pdf