
Journal of Responsible Technology 13 (2023) 100060


ChatGPT and the rise of generative AI: Threat to academic integrity?


Damian Okaibedi Eke
School of Computer Science and Informatics, De Montfort University, Leicester, United Kingdom

Keywords: ChatGPT; Large language models; OpenAI; Academic integrity; Generative AI

Abstract

The emergence of OpenAI's ChatGPT has put an intense spotlight on Generative AI (Gen-AI) systems and their possible impacts on academic integrity. This paper provides an overview of the current arguments around ChatGPT and academic integrity and concludes that, although these technologies are capable of revolutionising academia, the way ChatGPT and other generative AI systems are used could undermine academic integrity. Institutional and multi-stakeholder efforts are therefore required to ensure that the risks to academic integrity are mitigated and the benefits of these technologies maximised.

1. Introduction

The emergence of OpenAI's ChatGPT1 has put an intense spotlight on Generative AI (Gen-AI) systems and their possible impacts on academic integrity. Generative AI systems are designed to generate content or output (such as text, images, audio, simulations, video and code) from the data they are trained on. Although ChatGPT is neither the first Gen-AI system ever developed nor the first by OpenAI, it represents a breakthrough in generative AI technology. In many academic quarters, concerns about academic integrity have been raised (Stokel-Walker, 2022). This is fascinating considering that it is not the first AI-powered text generator. A number of AI text/content generators for diverse content are available, including but not limited to Rytr,2 Jasper,3 CopyAI,4 Writesonic,5 Kafkai,6 Copysmith,7 Peppertype,8 Articoolo,9 Article Forge10 and Copymatic.11 The question then is: what is different about ChatGPT that raises such serious concerns?

For a clearer perspective, let us first understand what ChatGPT is. ChatGPT is a large language model (LLM) that uses deep learning to generate human-like text in response to prompts. It was released on the 30th of November 2022 as OpenAI's latest iteration of its large language models capable of having 'intelligent' conversations. It is part of the family of Generative Pre-trained Transformer (GPT) models from the California-based company, preceded by GPT-1, launched in 2018 (Radford et al., 2018), GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020). In 2021, OpenAI also released DALL·E, a Gen-AI system for generating images from text. However, ChatGPT differs from the previous models in many ways. Most importantly, it is different from GPT-3, which is designed to perform a wide range of natural language processing (NLP) tasks such as language translation, text summarisation, question answering, generation of creative writing (such as poetry or fiction) and generation of high-quality long- or short-form copy (such as blog posts). ChatGPT, on the other hand, is built from the GPT-3 language model and has unique use cases (such as generating responses in dialogues/conversations, explaining complex subjects, concepts or themes, and generating new code or fixing errors in existing code). Overall, ChatGPT has more use cases than GPT-3 and, as with many other technologies, it has logical malleability, which means that it can be fine-tuned for a variety of language tasks.

E-mail address: [email protected].


1 https://chat.openai.com/chat
2 https://rytr.me/
3 https://www.jasper.ai/
4 https://www.copy.ai/?via=start
5 https://writesonic.com/
6 https://kafkai.com/en/
7 https://copysmith.ai/#a_aid=start
8 https://www.peppertype.ai/
9 http://articoolo.com/
10 https://www.articleforge.com/
11 https://copymatic.ai/

https://doi.org/10.1016/j.jrt.2023.100060

Available online 20 February 2023


2666-6596/© 2023 The Author(s). Published by Elsevier Ltd on behalf of ORBIT. This is an open access article under the CC BY license
(http://creativecommons.org/licenses/by/4.0/).

ChatGPT's capabilities have been hailed as 'scary good' by proponents and described as "prolific and highly effective and still learning" (Gleason, 2022). Additionally, unlike many AI-powered content generators, it is freely available to all users. The inherent capabilities of ChatGPT have been demonstrated in reports that it has successfully passed a law school exam (Choi et al., 2023) and a Master of Business Administration (MBA) exam (Terwiesch, 2023). A judge in Colombia has also admitted that a court decision12 was informed by ChatGPT.

However, critics have pointed out that, as a large language model, ChatGPT is neither "particularly innovative" nor "revolutionary" because similar systems have been developed in the past. Others have observed that, despite its fluent and persuasive texts, the system still "lacks the ability to truly understand the complexity of human language and conversation" (Bogost, 2022). To be fair to the developers, a number of limitations of the system are made clear to users. It is clearly stated that it can occasionally generate incorrect information, produce harmful instructions or biased content, and has limited knowledge because of the data it was trained on. Amidst the range of issues that ChatGPT raises, this commentary only explores whether it undermines academic integrity. It also provides recommendations on how academia can be proactive in responding to the challenges ChatGPT and Gen-AI systems raise.

2. Threat to academic integrity?

So far, the experience of academics with ChatGPT is that it correctly answers questions often asked of undergraduate and postgraduate students (Lock, 2022), including questions requiring coding skills (Scharth, 2022). The general fear is that students as well as researchers can start outsourcing their writing to ChatGPT. If some early responses to university-level essay questions are anything to go by, professors and lecturers should be worried about the future of essays as a form of assessment. According to Stokel-Walker (2022), some of the responses "are so lucid, well-researched and decently referenced". Although it has its limitations and ethical shortcomings (Birhane & Raji, 2022), like so many other language models (Eliot, 2022; Weidinger et al., 2021), it is a tool with broad implications for academic integrity.

According to the International Center for Academic Integrity (2021), academic integrity is defined as a commitment to six fundamental values: honesty, trust, fairness, respect, responsibility and courage. As such, when a person uses ChatGPT to generate essays or other forms of written text that are then passed off as original work, it violates the core principles of academic integrity. ChatGPT raises similar concerns to the well documented commercial 'contract cheating' in higher education (Newton, 2018). The only difference is that ChatGPT is free and easily accessible to all users. It also offers users the opportunity of interaction. Users can tweak their queries to see how different the responses can be. This means that it is possible to generate several different texts/essays and pick the best of the lot. One academic was recently quoted in a Nature commentary (Stokel-Walker, 2022) as saying that "at the moment, it's looking a lot like the end of essays as an assignment for education". The concern in academia, however, is not limited to its open and free availability; it is also rooted in the lack of tools to detect use of this viral chatbot. Turnitin, Unicheck, PlagScan, Noplag and other plagiarism checkers are often used to maintain academic integrity. This gap is, and should be, a source of concern that needs attention. It is also critical to reflect on whether using ChatGPT for academic papers or assignments can constitute plagiarism in the moral sense of "theft of intellectual property". Whose intellectual property is stolen when ChatGPT output is passed off as original work? Who is damaged by this act? While I agree that using ChatGPT without proper acknowledgement goes against the fundamental values of academic integrity, the whole plagiarism debate is a little more complex.

I also admit that the hype around ChatGPT and its real-life capabilities can either alarm or excite people in academia. The concern in academia goes beyond whether it is a bad or good technology. ChatGPT is the very definition of a disruptive technology. It is here and it is about to disrupt both the ontology and epistemology of academia, science and teaching. That means that academia is about to reconsider what constitutes knowledge and how it can be acquired. The challenge then becomes: how can this technology be embraced and applied effectively, safely and responsibly? Whether ChatGPT is a morally neutral technology or an existential part of the normative moral order is not the focus of this commentary. This does not mean that ChatGPT does not raise other ethical issues beyond academic integrity, or that these concerns do not matter. There are a number of ethical issues surrounding large language models already identified in the literature (Bender et al., 2021). The emerging stories of the human cost of building ChatGPT raise great concern (Perrigo, 2023). However, these are not the focus of this essay. What is clear from what we know about this technology so far is that it could be used in ways that undermine academic integrity. The question then is: what can academia do about this?

3. What can academia do?

There are a number of things academia needs to do, including but not limited to considering the opportunities and challenges ChatGPT and other LLMs present, and understanding ways of maximising these opportunities while mitigating challenges to academic integrity.

3.1. Consider academic opportunities and challenges of ChatGPT

Academia needs to take ChatGPT seriously. By academia, I mean the ecosystem that facilitates the pursuit of research, teaching and scholarship in general. This includes academic and research institutions, academic publishers and funders. ChatGPT and other generative AI systems are revolutionary, and academia needs to be ready to be part of that revolution. It is not sustainable to ban, reject or dismiss it. This is a technology that presents opportunities for teaching, research and innovation. Using ChatGPT can become an efficient and time-saving way of carrying out academic activities. From lesson plan design, task creation and writing to the provision of inspiration and ideas, ChatGPT can help both teachers and students to improve teaching and learning experiences. It can also be used to improve research. For instance, it can be a tool for quick and easy generation of data for many types of research. It can also be used as an analysis tool as well as a writing assistant for research reports.

However, the responsible use of ChatGPT in academia faces significant challenges, particularly owing to potential misuses that constitute threats to academic integrity. First, its usage without appropriate acknowledgement is currently not reflected in the academic integrity policies or statements of academic institutions and publishers. This needs to change. In addition, many people in academia (researchers, teachers and students) still do not know how to use the system optimally, let alone responsibly. There is a great need for education.

Second, a harmonised and responsible way of acknowledging the use of ChatGPT is yet to be established. A number of research papers have listed ChatGPT as an author (Stokel-Walker, 2023). However, both Nature (Nature, 2023) and Science (Thorp, 2023) have made their stance clear that no LLM can be accepted as a credited author in their journals. The current lack of guidance for users on how to acknowledge the use of ChatGPT raises a lot of concerns.

12 https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling#:~:text=A%20judge%20in%20Colombia%20has,costs%20of%20his%20medical%20treatment.


Third, a tested, validated and accepted tool to identify dishonest use of AI text generators in academia is not yet available. That means it is still easy to pass off an output from ChatGPT as original work without detection. To address this challenge, OpenAI has developed a free tool (AI Text Classifier13) trained to distinguish between AI-written and human-written texts. Unfortunately, OpenAI itself has described this as an 'imperfect tool' and warned that it should not be used as a primary decision-making tool. How academic institutions and publishers will implement OpenAI's imperfect tool or develop better tools remains unclear.

Fourth, there is a greater concern for institutions in low- and middle-income countries where Turnitin and other plagiarism tools are yet to be integrated as measures for academic integrity. Technical integration of these tools costs money which many of the institutions in these countries do not have. ChatGPT could thus exacerbate an already documented problem of cheating in these places (Farahat, 2022). It presents a global challenge that requires a solution that can work for everybody: a less expensive, safe, sustainable and responsible solution.

3.2. Consider actionable steps to achieve responsible use of ChatGPT and other Gen-AI systems in academia

Responsible use of generative AI systems in academia entails the development of implementation approaches that maximise their capabilities while mitigating threats to academic integrity. I believe the first thing for academia to do is to identify the use of AI-generated texts without acknowledgement as a form of academic dishonesty. While this is implied in current academic integrity policies, it needs to be made clearer to staff, students and, in the case of journals, potential authors what values are violated when AI-generated texts are used as original work. Furthermore, there are many ways ChatGPT and other AI text generators can be integrated into academic activities, from assessment and research to teaching. Knowledge of these approaches remains very low in academia. Generative AI systems are changing the world students are being prepared for. It is therefore the responsibility of academic institutions to prepare students for a world that is effectively being revolutionised by LLMs.

Capacity development, for both staff and students, on the diverse use cases of ChatGPT is therefore necessary in relevant institutions. Staff who are expected to identify dishonest uses of the tool should, at a minimum, understand how it works.

Furthermore, for universities to preserve the current assessment methods based on written essays, there is a need to create a reliable tool that can detect AI-generated texts. However, designing such a tool and incorporating it into effective or reliable assessment approaches will require a lot of funding and the support or buy-in of OpenAI or other creators of these language models. It may also take time to develop, and in the meantime AI-generated texts may already be finding their way into academic assessments. On the other hand, this may be an opportunity to reconsider the future of essay writing as a form of assessment, as Donald (2018) has suggested. There are already calls to fundamentally change assessment methods: a change from assessing finished essays to assessing the critical thinking involved in the process. Teaching students to become good essay writers is important, but is understanding the process not more important than the finished product? The integration of ChatGPT into teaching students critical thinking and writing should be a viable consideration. Where essays are absolutely necessary, oral exams can be used more widely as a supplement for better assessment.

This is not a problem limited to academic essays and students. There are also risks of this happening within the wider academic life: journal and conference papers, reports, blogs, dissertations, books and other forms of academic writing. However, there is an argument to be made that the system could allow people to play to their strengths and increase the quality of their academic outputs. With all its limitations and imperfections, ChatGPT can become an effective learning companion. For instance, it can generate good ideas and texts that can in turn be perfected by users. It is not an authoritative academic voice, and neither is it 100% accurate, but it can be a good academic assistant. Devising ways of referencing its use or application is therefore a necessity to ensure its responsible use in academia. For users who want to maintain the tenets of academic integrity before technical tools for identifying cheating are developed, referencing ChatGPT could involve documenting the date of generation and the prompts used, and limiting direct quotation to one paragraph.

Additionally, the possibility of ChatGPT writing or correcting code calls for a reimagining of technical coding assessments. So far, it has proven capable of writing functioning code from custom prompts, which could help students answer basic data structures and algorithms questions. I therefore suggest that oral interviews should be not a supplementary but a major part of such assessments. This will give an opportunity to test students' knowledge of the code and its functions.

4. Conclusion

I argue that the way ChatGPT and other AI-powered text generators are used could undermine academic integrity. They are also capable of revolutionising academia. It is the responsibility of all of us to ensure that the risks to academic integrity are mitigated while the benefits are maximised. This needs a multi-stakeholder effort, from technical developers and policy makers in academic institutions to publishers, professors, lecturers and students. Academic writing, essay assignments and technical coding assessments may not be dead, but it is time to reimagine them and make critical changes to ensure sustainable integrity in academia. In summary, academic institutions need to do a number of things:

• Embrace ChatGPT as an essential part of pedagogy and research.
• Establish ChatGPT training and capacity building for both staff and students to maximise its benefits and ensure responsible use. Providing the necessary support and resources to both staff and students can help to mitigate possible risks to academic integrity.
• Review their academic integrity policies and make necessary changes to reflect current AI trends and possibilities.
• Work with relevant bodies (including but not limited to journal editors and publishers) to co-create effective ways of acknowledging the use of ChatGPT and other AI tools in academic texts.
• Work towards developing cost-effective and trusted tools for identifying possible dishonest use of AI tools in academia globally.

Finally, OpenAI and other large language model creators should be willing to work with academia to achieve the responsible use of AI-powered text generators. OpenAI's move to develop the 'imperfect' classifier is a welcome development but not sufficient to address academic integrity concerns. The company's current engagement with educators in the US is also commendable. However, such engagement should be extended to stakeholders in academia in other parts of the world, particularly those from low- and middle-income countries. A multi-stakeholder endeavour is needed to co-create solutions to maintain academic integrity. This may include a redefinition of what constitutes academic achievement and impact, and novel ways of measuring them.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

13 https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

References


Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 610–623). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
Birhane, A., & Raji, I. D. (2022). ChatGPT, Galactica, and the progress trap [WWW document]. WIRED. URL https://www.wired.com/story/large-language-models-critique/ (accessed 12.20.22).
Bogost, I. (2022). ChatGPT is dumber than you think [WWW document]. The Atlantic. URL https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/ (accessed 2.6.23).
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., & Askell, A. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. (2023). ChatGPT goes to law school. Available at SSRN.
Donald, A. (2018). Is the 'time of the assessed essay' over? Teaching blog, University of Sussex Business School. URL https://blogs.sussex.ac.uk/business-school-teaching/2018/11/14/is-the-time-of-the-assessed-essay-over/ (accessed 12.20.22).
Eliot, L. (2022). AI ethics and the future of where large language models are heading [WWW document]. Forbes. URL https://www.forbes.com/sites/lanceeliot/2022/08/30/ai-ethics-asking-aloud-whether-large-language-models-and-their-bossy-believers-are-taking-ai-down-a-dead-end-path/ (accessed 12.19.22).
Farahat, A. (2022). Elements of academic integrity in a cross-cultural Middle Eastern educational system: Saudi Arabia, Egypt, and Jordan case study. International Journal for Educational Integrity, 18, 1–18. https://doi.org/10.1007/s40979-021-00095-5
Gleason, N. (2022). ChatGPT and the rise of AI writers: How should higher education respond? [WWW document]. THE Campus, Times Higher Education. URL https://www.timeshighereducation.com/campus/chatgpt-and-rise-ai-writers-how-should-higher-education-respond (accessed 12.19.22).
International Center for Academic Integrity (2021). The fundamental values of academic integrity (3rd ed.). Available at: https://academicintegrity.org/resources/fundamental-values (accessed 10.10.2022).
Lock, S. (2022). What is AI chatbot phenomenon ChatGPT and could it replace humans? [WWW document]. The Guardian. URL https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans (accessed 12.20.22).
Nature. (2023). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature, 613, 612. https://doi.org/10.1038/d41586-023-00191-1
Newton, P. M. (2018). How common is commercial contract cheating in higher education and is it increasing? A systematic review. Frontiers in Education, 3. https://doi.org/10.3389/feduc.2018.00067
Perrigo, B. (2023). Exclusive: The $2 per hour workers who made ChatGPT safer [WWW document]. Time. URL https://time.com/6247678/openai-chatgpt-kenya-workers/ (accessed 2.6.23).
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1, 9.
Scharth, M. (2022). The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive [WWW document]. The Conversation. URL http://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908 (accessed 12.20.22).
Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays — should professors worry? Nature. https://doi.org/10.1038/d41586-022-04397-7
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613, 620–621. https://doi.org/10.1038/d41586-023-00107-z
Terwiesch, C. (2023). Would Chat GPT3 get a Wharton MBA? A prediction based on its performance in the operations management course. Mack Institute for Innovation Management, The Wharton School, University of Pennsylvania.
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379, 313. https://doi.org/10.1126/science.adg7879
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., ..., Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv:2112.04359 [cs].
