ChatGPT as a Commenter to the News: Can LLMs Generate Human-Like Opinions?

  • Conference paper
  • First Online:
Disinformation in Open Online Media (MISDOOM 2023)

Abstract

ChatGPT, GPT-3.5, and other large language models (LLMs) have drawn significant attention since their release, and their abilities have been investigated for a wide variety of tasks. In this research we investigate to what extent GPT-3.5 can generate human-like comments on Dutch news articles. We define human likeness as 'not distinguishable from human comments', approximated by the difficulty of automatically classifying comments as human or GPT-generated. We analyze human likeness across multiple prompting techniques: zero-shot, few-shot, and context prompts, each for two generated personas. We found that our fine-tuned BERT models can easily distinguish human-written comments from GPT-3.5-generated comments, with none of the prompting methods performing noticeably better than the others. Further analysis showed that human comments consistently exhibit higher lexical diversity than GPT-generated comments. This indicates that although generative LLMs can produce fluent text, their capability to create human-like opinionated comments is still limited.
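The lexical-diversity finding can be illustrated with a minimal sketch. The paper uses dedicated lexical-richness tooling; the simple type-token ratio below, with toy Dutch comments that are illustrative rather than drawn from the corpus, only conveys the idea:

```python
# Minimal sketch: comparing lexical diversity of two comments via the
# type-token ratio (TTR) -- the fraction of tokens that are unique.

def type_token_ratio(text: str) -> float:
    """Ratio of unique tokens to total tokens (simple whitespace split)."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Toy examples: a varied "human" comment vs. a repetitive "generated" one.
human = "dit artikel mist context en de cijfers kloppen volgens mij niet"
generated = "dit is een goed artikel en dit is een goed punt"

print(type_token_ratio(human) > type_token_ratio(generated))  # prints True
```

A whitespace TTR is also sensitive to comment length, which is why richer measures (e.g. MTLD) are preferred in practice for texts of varying size.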


Notes

  1. https://chrome.google.com/webstore/detail/save-page-we/dhhpefjklgkmgeafimnjhojgjamoafof
  2. https://www.crummy.com/software/BeautifulSoup/bs4/doc/
  3. https://huggingface.co/docs/transformers/training
  4. https://github.com/raydentseng/generated_opinions
  5. https://github.com/lsys/lexicalrichness
  6. https://shap.readthedocs.io/en/latest/
  7. https://openai.com/gpt-4


Author information

Correspondence to Suzan Verberne.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tseng, R., Verberne, S., van der Putten, P. (2023). ChatGPT as a Commenter to the News: Can LLMs Generate Human-Like Opinions? In: Ceolin, D., Caselli, T., Tulin, M. (eds) Disinformation in Open Online Media. MISDOOM 2023. Lecture Notes in Computer Science, vol 14397. Springer, Cham. https://doi.org/10.1007/978-3-031-47896-3_12

  • DOI: https://doi.org/10.1007/978-3-031-47896-3_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47895-6

  • Online ISBN: 978-3-031-47896-3

  • eBook Packages: Computer Science (R0)
