Generative Artificial Intelligence (AI) in Higher Education
Michal Bobula
University of the West of England, UK
Abstract
This paper explores recent advancements and implications of artificial intelligence (AI)
technology, with a specific focus on Large Language Models (LLMs) like ChatGPT 3.5,
within the realm of higher education. Through a comprehensive review of the academic
literature, this paper highlights the unprecedented growth of these models and their wide-
reaching impact across various sectors. The discussion sheds light on the complex issues
and potential benefits presented by LLMs, providing a comprehensive overview of the
field's current state.
In the context of higher education, the paper explores the challenges and opportunities
posed by LLMs. These include issues related to educational assessment, potential threats
to academic integrity, privacy concerns, the propagation of misinformation, Equity,
Diversity, and Inclusion (EDI) aspects, copyright concerns and inherent biases within the
models. While these challenges are multifaceted and significant, the paper emphasises
the availability of strategies to address them effectively and facilitate the successful
adoption of LLMs in educational settings.
The paper also underscores the need for HEIs to review and modify their study
programmes to align with the evolving educational landscape influenced by emerging
technologies.
Keywords: ChatGPT; large language models; higher education; policy; AI literacy; EDI;
bias; academic integrity; misinformation; privacy; copyright; opportunities.
Introduction
This paper focuses on the impact of generative AI, such as ChatGPT, on Higher Education
Institutions (HEIs), and offers a systematic review of the main themes related to generative
AI and its impact on higher education. The aim is to provide academics and academic
institutions with guidance on how to navigate through the considerations associated with
the technology and inform their practices. The importance of addressing the topic is
evident since generative AI has 'taken the world by storm, with notable tension transpiring
in the field of education' (Lim et al., 2023). Various authors recognise threats associated
with this technology related to plagiarism and academic integrity (Crawford et al., 2023;
Eke, 2023; Stokel-Walker, 2022; Amani et al., 2023), bias (Cooper, 2023; Dwivedi et al.,
2023; Rich and Gureckis, 2019), inaccuracy and misinformation (Currie, 2023; van Dis et
al., 2023), data and privacy concerns (Wang et al., 2023; Tredinnick and Laybats, 2023),
copyright (Strowel, 2023), and broader impact on skills and education. Besides mentioning
challenges for HEIs, some authors also see opportunities for student learning, feedback,
and assessment design (Sullivan et al., 2023; Rasul et al., 2023; Kasneci et al., 2023;
Crawford et al., 2023), emphasising the positive role AI can play in education. Others call
for education, guidelines, and regulation (Rudolph et al., 2023). The Quality Assurance
Agency for Higher Education (QAA, 2023) acknowledges the challenges AI tools like
ChatGPT pose to academic integrity, while recognising their potential to facilitate
deeper learning and enhance educational inclusivity and accessibility. Furthermore, the
Department for Education (2023) stresses the ethical use of AI tools in education,
emphasising data privacy, preventing malpractice, and using professional judgment to
validate AI-generated content. While emerging technologies like generative AI have a role,
a rigorous, knowledge-focused curriculum remains vital to prepare students, including
teaching proper and safe use of these tools.
Capabilities of ChatGPT
Researchers have benchmarked ChatGPT against university assessments. On law school
final examinations, ChatGPT achieved an average grade of C+, indicating its capability to
address complex legal queries to a certain extent (Choi et al., 2023). Terwiesch (2023)
analysed ChatGPT's capability in an MBA
Operations Management final exam. ChatGPT displayed competence in operations
management and case analysis but struggled with basic calculations and intricate
problems, potentially due to difficulty in deeply comprehending and applying concepts.
However, it does adjust responses based on human cues.
Rudolph et al. (2023a) found ChatGPT adept at elucidating concepts like quantum
computing with real-world instances. However, it has drawbacks: a word cap, inability to
craft visuals, and sporadic network glitches. While it can generate a 500-word essay, the
content often lacks depth and proper citations. Nikolic et al. (2023) further benchmarked
ChatGPT in varied engineering tests, revealing mixed performance across assessment
types. ChatGPT outperformed most students on a common economics course test in the
USA. Compared to student results, ChatGPT scored in the 91st percentile in
Microeconomics and the 99th in Macroeconomics. This exam is a standard multiple-choice
comprehension test of economics with textbook answers, so ChatGPT's results should not
be unexpected (Geerling et al., 2023). Finally, Gilson et al. (2023) evaluated the
performance of ChatGPT on questions within the scope of the United States Medical
Licensing Examination and demonstrated that the model attained a score comparable to
that of a third-year medical student. Moreover, the authors underscored ChatGPT's
capability to offer logical coherence and contextual information in the majority of its
responses.
It is vital to recognise that these tests were from early 2023. Since then, ChatGPT has
undergone enhancements, boasting increased computational power, internet browsing,
plug-in access, and a new model featuring advanced reasoning and creativity (OpenAI,
2023a). Choi et al. (2023) address optimising ChatGPT through prompt engineering,
emphasising tone, word limits, citation clarity, and essay structure. Succinct directives and
grasping the model's context significantly boost its efficacy. Moreover, throughout 2023,
various closed-source and open-source models were released, with ChatGPT 4.0 being
the top performer, while the performance of various other models matched or even
surpassed that of ChatGPT 3.5 according to certain benchmarks. The availability of these
models, many of which are free to use (though the free variants come with limitations),
further complicates the landscape of AI content generation (Chen et al., 2023).
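To make this concrete, the sketch below illustrates the kind of prompt engineering Choi et
al. (2023) describe: tone, word limit, structure, and citation requirements are stated
explicitly in the instruction sent to the model. It is a minimal illustration using OpenAI's
Python library, not a method taken from the cited studies; the model name, essay
question, and citation style are assumptions for demonstration only.

# A minimal sketch of prompt engineering against the OpenAI chat API.
# Assumes the 'openai' Python package (v1+) and an OPENAI_API_KEY
# environment variable; the prompt content is a hypothetical example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Succinct directives: tone, word cap, structure, and citation expectations
# are all stated explicitly rather than left for the model to infer.
prompt = (
    "Write a 300-word answer in a formal academic tone to the question: "
    "'What are the limits of vicarious liability?' "
    "Structure the answer as issue, rule, application, conclusion. "
    "Cite cases in OSCOLA style and flag any citation you are unsure of."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the ChatGPT 3.5 family discussed in this paper
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # a lower temperature keeps the answer focused
)
print(response.choices[0].message.content)

Rewording such directives and comparing the outputs is itself a useful classroom exercise
in showing students how sensitive the models are to instruction quality.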
At the same time, the challenges that generative AI poses to academic integrity present
an opportunity for educators to revisit and update their assessment strategies, making the
process more relevant and engaging.
Available solutions
In order to respond, HEIs need to create clear policies for the deployment of AI, a vital
step towards cultivating a learning environment where this technology is adopted
responsibly and with transparency (Gimpel et al., 2023). Secondly, there is a need to
review and update misconduct policies to clarify what uses of AI may be counted as
plagiarism and what are best practices to ensure academic integrity is upheld (Lim et al.,
2023; Ventayen, 2023). Students may be required to report the aids used during a
course: for example, listing the tools used, their fields of application, and the prompts
entered into AI tools such as ChatGPT. Exceptions can be made to
the rules outlined, which will be communicated to the students in advance (Gimpel et al.,
2023). However, this approach, while logical, may be challenging to implement as it would
require additional resources to assess students’ submissions. Moreover, there would be
the need for collaboration and oversight to ensure that educators allow their students to
use AI tools in a coherent way to avoid situations where using AI for a specific purpose is
allowed and encouraged in one module but prohibited in another.
Further, HEIs need to ensure that the students are familiar with the policies related to
academic integrity and comprehend the potential repercussions of engaging in academic
misconduct (Atlas, 2023). Greater effort must be placed on future-proofing subjects and
degrees (Crawford et al., 2023), and educators should review and adapt their current
assessment practices (Nikolic et al., 2023; Sansom, 2023; Ventayen, 2023; Currie, 2023;
Amani et al., 2023). However, little is known about teachers' knowledge and skills to
integrate AI-based tools (Celik, 2023). Thus, it is also pivotal to support academic staff
and ensure their AI readiness (Wang et al., 2023); educators, in turn, need to accept the
necessity of adapting to technological change and be given the autonomy to do so in
ways that fit their respective modules and fields (Lim et al., 2023). They also need
sufficient resources and training to implement the changes (Rudolph et al., 2023). Overall,
great challenges lie ahead for HEIs, which need to recognise the importance of adapting
to a world with universally available AI by ensuring leadership and the availability of
sufficient and appropriate resourcing.
As discussed in this paper, numerous issues arise when using AI in education. Therefore,
it becomes crucial for teachers to initially focus on understanding what AI can offer them
and how they can adapt to an education system enhanced by AI (Hrastinski et al., 2019).
Finally, from the practical perspective, it is important to be transparent and collaborative
and ensure a consistent approach in utilising AI.
Copyright concerns
One of the main concerns related to ChatGPT and other LLMs is the risk of violating
intellectual property rights. As ChatGPT is trained using a large amount of text data, such
as books, articles, and other written materials, some of the training data may be
copyrighted (Karim, 2023). In the realm of AI development, the prevailing belief is that the
larger the training data set, the better the results. OpenAI's GPT-2 model was trained
using a data set that comprised 40 gigabytes of text. GPT-3, the model upon which
ChatGPT is built, utilised a significantly larger data set of 570 GB. The size of the data set
used for OpenAI's most recent model, GPT-4, has not been disclosed (Heikkilä, 2023).
Therefore, using generative AI leads to substantial risks of accidental plagiarism and
copyright infringements (Gimpel et al., 2023).
For instance, it has been reported that two authors have filed a lawsuit against OpenAI,
alleging that their copyrighted books were used to train the AI model, ChatGPT, without
their permission. The authors claim that the AI was able to generate very accurate
summaries of their novels, which they believe indicates their works were unlawfully
ingested and used in the training process, and that OpenAI has unfairly profited from
stolen writing and ideas (Creamer, 2023). In a separate case, Google, along with its parent
company Alphabet and AI subsidiary DeepMind, has been accused in a lawsuit of scraping
user data without consent and violating copyright laws to train its AI products. A similar
class action lawsuit had previously been filed against OpenAI (Dixit, 2023). The case may
hinge on whether courts view the use of copyrighted material in this way as 'fair use' or as
simple unauthorised copying.
Further, it is important to note that copyright law differs between jurisdictions. The US operates on the
principle that once something is public, it loses its privacy, which is not aligned with
European legal principles. According to OpenAI, their models undergo training using
publicly accessible content, licensed content, and content reviewed by humans. However,
this falls short of the standards set by the GDPR, which gives individuals, as ‘data subjects’,
certain rights, including the right to be informed about the collection and use of their data,
as well as the right to have their data removed from systems, regardless of whether it was
initially public (Heikkilä, 2023).
These copyright concerns are valid but should be addressed by the appropriate
regulators; universities, meanwhile, should recognise the issues and advocate for clarity
regarding the training data underpinning LLMs. Initially, raising awareness among
students and faculty
about potential copyright violations is vital. Over time, institutions should align with
providers who can verify their training data's copyright legitimacy. For instance, Adobe's
'Firefly', a generative AI image tool, guarantees commercial usage safety, offering IP
indemnification as it is trained on company-owned images (Gold, 2023). It has been
reported that technology companies are in talks with leading media organisations about
using their news content for AI training (Criddle et al., 2023). Google has updated its
privacy terms, which now clearly state its right to use publicly available data for AI model
training, essentially providing the company a 'legal' pass to harness user-generated data.
Critics argue
that Google leverages its dominant position to sidestep potential intellectual property
lawsuits and access invaluable data for AI enhancement without incurring costs (Dixit,
2023). Finally, Bloomberg, the financial information provider used by over 300
universities worldwide (Bloomberg, 2023a), is developing a new generative AI model
trained on an extensive variety of financial data owned by the company (Bloomberg,
2023b). While this model is likely to be available only to subscribing HEIs, it shows the
possibility of data providers developing LLMs using their own data, thereby effectively
tackling copyright-related concerns. Universities need to monitor these developments and
endorse such models and initiatives.
Bias and equity, diversity and inclusion (EDI)
ChatGPT has been widely criticised for producing biased and discriminatory content,
which has been caused by using training data that reflects the biases of society
(Weinberger, 2019; Dwivedi et al., 2023). It appears that conversational AI often replicates
and even intensifies the same biases that frequently mislead humans, including
availability, selection, and confirmation biases (van Dis et al., 2023). Overall, LLMs can
perpetuate and amplify existing biases and unfairness in society, which can negatively
impact teaching and learning processes and outcomes (Abdelghani et al., 2023). It is
crucial to understand that ChatGPT does not operate based on ethical principles, nor can
it discern between right and wrong, or truth and falsehood. This tool merely gathers data
from the databases and texts it processes online, thereby inheriting any cognitive biases
present in that information (Sabzalieva and Valentini, 2023). OpenAI has recognised this
issue and claims to have fine-tuned ChatGPT 4.0, achieving an 82% reduction in the
model's tendency to produce disallowed content compared to its predecessor. Additionally,
GPT-4 is 29% more aligned with company safety policies on sensitive data handling
(OpenAI, 2023b). Yet, Zou et al. (2023) found a way to circumvent the guardrails of nearly
all open-source LLMs, which are designed to prevent harmful outputs. Furthermore,
Hartmann et al. (2023) highlight ChatGPT's pro-environmental, left-libertarian bias.
While AI poses challenges, it also offers solutions to bolster equality, diversity, and
inclusion. Chatbots can promote inclusivity for disadvantaged students and those with
diverse backgrounds or disabilities. They can handle course queries, direct students to
resources, provide materials suited to different learning styles (Gupta and Chen, 2022),
and simplify complex ideas, benefiting those with communication challenges (Hemsley et
al., 2023). ChatGPT offers opportunities to enhance academic success for diverse student
groups. Non-native English speakers can use ChatGPT for grammar feedback and as a
semi-translator for complex terms, aiding comprehension (Sullivan et al., 2023). For
neurodivergent students, AI can assist in time management, information processing, and
thought organisation (McMurtrie, 2023). AI can also support adaptive writing and highlight
essential information in various formats (Kasneci et al., 2023). Moreover, the fact that AI
may produce biased responses represents a valuable opportunity to raise awareness and
engage in a discussion about inherent biases which are present in society and therefore
reflected in the LLM’s training data (Heaven, 2023).
Staff and students also need to be made aware of the biases inherent in these models.
HEIs must also draft AI policies that cater to the diverse needs of the entire academic
community (McMurtrie, 2023). In conclusion, while the ability of AI to bolster EDI
is clear, the risks associated with AI use can be managed. However, it requires HEIs to
invest significant time and effort to devise and implement suitable practices to achieve
their objectives.
Information privacy
Another obstacle to adopting ChatGPT lies in concerns over privacy and security. Given
the vast data processed by ChatGPT's machine learning algorithms, it is vulnerable to
cyberattacks, risking unauthorised access or misuse of sensitive information (Dwivedi et
al., 2023). Concerns also arise from how ChatGPT uses the information from its
interactions (Azaria et al., 2023). Tlili et al. (2023) examined OpenAI's policies and found
that while conversations with ChatGPT are stored and used to improve its performance,
the specifics of storage and use are not entirely clear.
The privacy implications of ChatGPT are particularly relevant for learners and educators
who may lack in-depth knowledge of technology and privacy. Young learners could
unintentionally share personal details with ChatGPT, underscoring the need to protect the
privacy of users, particularly the younger demographic. Regulators in Europe and the US
echo these concerns. In 2023, Italy restricted ChatGPT access due to privacy and age-
verification issues. OpenAI responded by addressing and clarifying these privacy matters
(McCallum, 2023). The US Federal Trade Commission is also scrutinising OpenAI for
potential false or harmful statements about real individuals and its data privacy methods
(BBC News, 2023). Similarly, Japanese authorities have voiced privacy concerns,
emphasising the need to weigh these against the benefits of generative AI (Kaur, 2023).
HEIs must ensure that staff and students are well-informed about AI-related privacy
concerns, emphasising the avoidance of sharing personal or sensitive data. It is vital for
HEIs to stay updated on AI advancements, collaborating with providers that uphold
transparent privacy policies. By doing so, institutions can maintain best practices in AI
utilisation. Although HEIs should recognise the potential for privacy breaches, their role is
not to police AI systems but to foster a secure learning environment that benefits from AI.
Regulation should be left to the designated authorities, and HEIs should follow the
resulting regulations and guidelines. In summary, while privacy issues are genuine, they
should not
hinder AI's integration into higher education.
Misinformation
Prior to ChatGPT, people relied on various methods to filter information, such as verifying
the content directly, assessing the knowledge level of the creator, and evaluating language
rigour, format correctness, and text length as indicators of reliability. However, ChatGPT’s
content generation capabilities excel in these aspects, potentially creating a false sense of
reliability. Users may fall into the trap of blindly trusting the content generated by ChatGPT
(Wu et al., 2023). In fact, LLMs are prone to hallucinations, a phenomenon where the AI
model generates inaccurate or outright false information. This is particularly true when
asking for a literature reference as generative AI can fabricate convincing titles and
authors that do not exist (Burger et al., 2023). Hallucinations from LLMs can mislead and
potentially harm users seeking personal advice (Khowaja et al., 2023). Companies
developing LLMs are aware of these issues and are working to address them (O'Brien,
2023).
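Because fabricated references are among the most common hallucinations, simple
verification workflows can help. The sketch below shows one illustrative approach, not a
validated method: each AI-supplied citation is checked against the public Crossref API
and flagged when no bibliographic match is returned; the sample references and the
simple match rule are assumptions for demonstration.

# An illustrative screen for hallucinated references using the public
# Crossref API (https://api.crossref.org). Assumes the 'requests' package.
from typing import Optional

import requests

def crossref_lookup(citation: str, timeout: float = 10.0) -> Optional[dict]:
    """Return the closest Crossref match for a free-text citation, or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=timeout,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

# Hypothetical output from an LLM that needs checking.
references = [
    "van Dis et al. (2023) ChatGPT: five priorities for research. Nature.",
    "Smith, J. (2021) An entirely plausible paper the model may have invented.",
]

for ref in references:
    match = crossref_lookup(ref)
    if match is None:
        print(f"NO MATCH (possible hallucination): {ref}")
    else:
        title = match.get("title", ["<untitled>"])[0]
        print(f"Closest match for '{ref[:40]}...': {title}")

A lookup of this kind cannot prove that a reference is genuine or correctly attributed, but it
cheaply surfaces candidates for the manual checking that professional judgement requires
(Department for Education, 2023).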
Opportunities
The widespread integration of LLMs in higher education can offer substantial advantages
for students and faculty. As tools like ChatGPT become more common at work, graduates
must be equipped with the right knowledge and skills to use them adeptly. This entails
understanding AI capabilities and limitations, and their ethical and societal implications.
This skill development could be scaffolded and progressively developed through strategic
curriculum design and embedded into assessments (Cradle, 2023). Thus, embedding AI
literacy in graduate skills can enhance their employability in a rapidly evolving job market.
First, HEIs must educate students on navigating AI-related challenges, as detailed earlier
in this paper. Next, they should introduce basic prompt engineering, enabling effective
model communication. Most crucially, HEIs need to develop use cases for generative AI
models in educational settings that are relevant for graduate jobs and enhance graduate
outcomes as well as the student journey. By following these steps, HEIs can fully embed
AI literacy in graduate skillsets, boosting graduates' employability and readiness for the
swiftly changing employment landscape.
Developing use cases for generative AI is a significant challenge for educational providers.
LLMs possess notable capabilities for education, but these are yet to be fully explored.
Academic literature offers various examples of such capabilities. Mollick (2022)
recommends having students assess ChatGPT's responses or compare a ChatGPT-
produced research paper with the original to bolster their critical thinking skills.
Summary
The rapid advancement and spread of generative AI and LLMs, most notably represented
by ChatGPT 3.5, are a testament to the transformative nature of modern AI. Their
multifaceted attributes not only underscore heightened efficiency but also democratise AI,
making it an accessible tool for diverse audiences. Its applications, ranging from simple
text generation to intricate code creation, mark a new era where technology is both an aid
and a collaborator.
One of the immediate impacts has been on education. This paper outlines the challenges
associated with wide-scale adoption of LLMs, but also offers solutions and presents the
opportunities the technology creates.
Generative AI and LLMs, with their capacity for personalised lessons and engaging with
complex subjects, hold the potential to transform traditional classroom settings. Integrating
these AI models into curricula can nurture creativity, promote critical thinking, and prepare
students for a future where AI collaboration is commonplace in professional life. Through
aiding research, boosting writing skills, and enabling innovative problem-solving, this
technology is set to enhance the educational landscape, and HEIs need to adopt and take
advantage of this technology.
Acknowledgements
The author used the following generative AI tools in the preparation of this manuscript:
ChatGPT 3.5 and Claude 2. The task performed by both tools included proofreading of the
manuscript. ChatGPT 3.5 API in combination with AgentGPT was used to search for
relevant publications, news stories, and themes during the initial stage of the project.
References
Abdelghani, R., Wang, Y.H., Yuan, X., Wang, T., Sauzéon, H., and Oudeyer, P.Y. (2023)
GPT-3-driven pedagogical agents for training children's curious question-asking
skills. arXiv:2211.14228v6. https://doi.org/10.48550/arXiv.2211.14228.
Atlas, S. (2023) ChatGPT for Higher Education and Professional Development: A Guide to
Conversational AI. Available at: https://digitalcommons.uri.edu/cba_facpubs/548
(Accessed: 15 July 2023).
BBC News (2023) 'ChatGPT owner in probe over risks around false answers', BBC News.
Available at: https://www.bbc.com/news/business-66196223 (Accessed: 2 August
2023).
Bendix, T. (2023) ‘TikTok UK statistics (2023): everything you need to know’, Social
Films. [blog] Available at: https://www.socialfilms.co.uk/blog/tiktok-uk-statistics
(Accessed 29 June 2023).
Burger, B., Kanbach, D.K., Kraus, S., Breier, M., and Corvello, V. (2023) ‘On the use of AI-
based tools like ChatGPT to support management research’, European Journal of
Innovation Management, 26(7), pp.233-241. https://doi.org/10.1108/EJIM-02-2023-
0156.
Centre for Teaching and Learning (2023) Four lessons from ChatGPT: Challenges and
opportunities for educators. University of Oxford. Available at:
https://www.ctl.ox.ac.uk/article/four-lessons-from-chatgpt-challenges-and-
opportunities-for-educators (Accessed: 13 July 2023).
Center for Information Technology and Society (2023) Why we fall for fake news.
University of California Santa Barbara. Available at: https://www.cits.ucsb.edu/fake-
news/why-we-fall (Accessed: 03 July 2023).
Chen, H., Jiao, F., Li, X., Qin, C., Ravaut, M., Zhao, R., Xiong, C. and Joty, S. (2023)
ChatGPT’s One-year Anniversary: Are Open-Source Large Language Models
Catching up? Available at: https://arxiv.org/abs/2311.16989. (Accessed: 15 January
2024).
Crawford, J., Cowling, M., and Allen, K., A. (2023) ‘Leadership is needed for ethical
ChatGPT: Character, assessment, and learning using artificial intelligence (AI)’,
Journal of University Teaching and Learning Practice, 20(3).
https://doi.org/10.53761/1.20.3.02.
Creamer, E. (2023) 'Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their
books', The Guardian, 5 July. Available at:
https://www.theguardian.com/books/2023/jul/05/authors-file-a-lawsuit-against-
openai-for-unlawfully-ingesting-their-books (Accessed: 05 July 2023).
Currie, G.M. (2023) ‘Academic integrity and artificial intelligence: is ChatGPT hype, hero
or heresy?’, Seminars in Nuclear Medicine. Available at:
https://www.sciencedirect.com/science/article/abs/pii/S0001299823000363
(Accessed: 13 June 2023).
Dalalah, D., and Dalalah, O.M.A. (2023) ‘The false positives and false negatives of
generative AI detection tools in education and academic research: The case of
ChatGPT’, The International Journal of Management Education, 21(2), 100822.
Department for Education, (2023) Ethics, Transparency and Accountability Framework for
Automated Decision-Making. GOV.UK. Available at:
https://www.gov.uk/government/publications/ethics-transparency-and-
accountability-framework-for-automated-decision-making/ethics-transparency-and-
accountability-framework-for-automated-decision-making (Accessed: 28 June
2023).
Dixit, P. (2023) 'Google faces lawsuit alleging data scraping and copyright violations for AI
development', Business Today, 12 July. Available at:
https://www.businesstoday.in/technology/news/story/google-faces-lawsuit-alleging-
data-scraping-and-copyright-violations-for-ai-development-389340-2023-07-12
(Accessed: 12 July 2023).
Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah,
A.M., Koohang, A. et al. (2023) ‘Opinion Paper: “So what if ChatGPT wrote it?”
Multidisciplinary perspectives on opportunities, challenges and implications of
generative conversational AI for research, practice and policy’, International Journal
of Information Management, 71, 102642.
https://doi.org/10.1016/j.ijinfomgt.2023.102642.
Eke, D.O. (2023) ‘ChatGPT and the rise of generative AI: Threat to academic integrity?’,
Journal of Responsible Technology, 13, 100060.
https://doi.org/10.1016/j.jrt.2023.100060.
Else, H. (2023) ‘Abstracts written by ChatGPT fool scientists’, Nature, 613, p.423.
Available at: https://www.nature.com/articles/d41586-023-00056-7 (Accessed: 29
May 2023).
Feizi, S., and Huang, F. (2023) Is AI-Generated Content Actually Detectable? College of
Computer, Mathematical, and Natural Sciences, University of Maryland. Available
at: https://cmns.umd.edu/news-events/news/ai-generated-content-actually-
detectable (Accessed 4 July 2023).
Franzoni, V. (2023) ‘From Black Box to Glass Box: Advancing Transparency in Artificial
Intelligence Systems for Ethical and Trustworthy AI’, Computational Science and Its
Applications – ICCSA 2023 Workshops. ICCSA 2023. Lecture Notes in Computer
Science, vol 14107. Springer, Cham. Available at:
https://link.springer.com/chapter/10.1007/978-3-031-37114-1_9 (Accessed 4 July
2023).
Geerling, W., Mateer, G. D., Wooten, J., and Damodaran, N. (2023) 'ChatGPT has
Mastered the Principles of Economics: Now What?', SSRN. Available at:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4356034 (Accessed: 29 July
2023).
Gilson, A., Safranek, C., Huang, T., Socrates, V., Chi, L., Taylor, R.A. and Chartash, D.
(2023) ‘How Does ChatGPT Perform on the United States Medical Licensing
Examination? The Implications of Large Language Models for Medical Education
and Knowledge Assessment’, JMIR Medical Education, 9. Available at:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9947764/ (Accessed 03 July 2023).
Gimpel, H., Hall, K., Decker, S., Eymann, T., Lämmermann, L., Maedche, A., Röglinger,
M., Ruiner, C., et al. (2023) Unlocking the Power of Generative AI Models and
Systems such as GPT-4 and ChatGPT for Higher Education A Guide for Students
and Lecturers. https://doi.org/10.13140/RG.2.2.20710.09287/2.
Gisondi, M.A., Barber, R., Faust, J.S., Raja, A., Strehlow, M.C., Westafer, L.M. and
Gottlieb, M. (2022) ‘A Deadly Infodemic: Social Media and the Power of COVID-19
Misinformation’, Journal of Medical Internet Research, 24(2), e35552.
https://doi.org/10.2196/35552.
Gold, J. (2023) ‘Adobe offers copyright indemnification for Firefly AI-based image app
users’, Computerworld, 8 June. Available at:
https://www.computerworld.com/article/3699053/adobe-offers-copyright-
indemnification-for-firefly-ai-based-image-app-users.html (Accessed: 12 July 2023).
Gonsalves, D. (2023) ‘On ChatGPT: what promise remains for multiple choice
assessment?’, Journal of Learning Development in Higher Education, Issue 27.
https://doi.org/10.47408/jldhe.vi27.1009.
Gupta, S. and Chen, Y. (2022) ‘Supporting Inclusive Learning Using Chatbots? A Chatbot-
Led Interview Study’, Journal of Information Systems Education, 33(1), pp.98-108.
Available at: https://aisel.aisnet.org/jise/vol33/iss1/11 (Accessed 1 August 2023).
Haensch, A., Ball, S., Herklotz, M. and Kreuter, F. (2023) Seeing ChatGPT Through
Students’ Eyes: An Analysis of TikTok Data. Available at:
https://arxiv.org/abs/2303.05349 (Accessed: 30 June 2023).
Haleem, A., Javaid, M. and Singh, R.P. (2022) 'An era of ChatGPT as a significant
futuristic support tool: A study on features, abilities, and challenges', BenchCouncil
Transactions on Benchmarks, Standards and Evaluations, 2(4).
https://doi.org/10.1016/j.tbench.2023.100089.
Hartmann, J., Schwenzow, J. and Witte, M. (2023) The political ideology of conversational
AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian
orientation. https://doi.org/10.48550/arXiv.2301.01768.
Heaven, W. D. (2023) 'ChatGPT is going to change education, not destroy it', MIT
Technology Review. Available at:
https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-
destroy-education-openai/ (Accessed: 30 July 2023).
Heikkilä, M. (2023) ‘OpenAI’s hunger for data is coming back to bite it’, MIT Technology
Review. Available at:
https://www.technologyreview.com/2023/04/19/1071789/openais-hunger-for-data-is-
coming-back-to-bite-it/ (Accessed: 10 July 2023).
Hemsley, B., Power, E., and Given, F. (2023) ‘Will AI tech like ChatGPT improve inclusion
for people with communication disability?’, The Conversation. Available at:
https://theconversation.com/will-ai-tech-like-chatgpt-improve-inclusion-for-people-
with-communication-disability-196481 (Accessed: 11 July 2023).
Javaid, M., Haleem, A., Singh, R.P., Khan, S., and Khan, I.H. (2023) ‘Unlocking the
opportunities through ChatGPT tool towards ameliorating the education system’,
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 3(2),
100115. Available at:
https://www.sciencedirect.com/science/article/pii/S2772485923000327#b23
(Accessed: 10 July 2023).
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F.,
Gasser, U., Groh, G., et al. (2023) ‘ChatGPT for good? On opportunities and
challenges of large language models for education’, Learning and Individual
Differences, 103, p.102274. https://doi.org/10.1016/j.lindif.2023.102274.
Kaur, D. (2023) 'ChatGPT in Japan: concerns and issues', Tech Wire Asia. Available at:
https://techwireasia.com/2023/06/after-italy-japan-has-its-eyes-on-chatgpt-over-
data-privacy-concerns/ (Accessed: 2 August 2023).
Khowaja, S.A., Khuwaja, P., and Dev, K. (2023) ChatGPT Needs SPADE (Sustainability,
PrivAcy, Digital divide, and Ethics) Evaluation: A Review. ArXiv. Available at:
https://arxiv.org/pdf/2305.03123.pdf (Accessed: 2 August 2023).
Lancaster, T. (2023) ‘Artificial intelligence, text generation tools and ChatGPT – does
digital watermarking offer a solution?’ International Journal for Educational Integrity.
Available at: https://edintegrity.biomedcentral.com/articles/10.1007/s40979-023-
00131-6 (Accessed: 2 August 2023).
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., and Zou, J. (2023) ‘GPT detectors are
biased against non-native English writers’, Patterns. Available at:
https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7 (Accessed: 12 July
2023).
Lim, W.M., Gunasekara, A., Pallant, J.L., Pallant, J.I., and Pechenkina, E. (2023)
‘Generative AI and the future of education: Ragnarök or reformation? A paradoxical
perspective from management educators’, The International Journal of
Management Education, 21(2), 100790. Available at:
https://www.sciencedirect.com/science/article/pii/S1472811723000289 (Accessed:
09 July 2023).
McCallum, S. (2023) 'ChatGPT accessible again in Italy', BBC News. Available at:
https://www.bbc.com/news/technology-65431914 (Accessed: 2 August 2023).
McMurtrie, B. (2023) 'How ChatGPT could help or hurt students with disabilities', The
Chronicle of Higher Education. Available at: https://www.chronicle.com/article/how-
chatgpt-could-help-or-hurt-students-with-disabilities (Accessed: 30 July 2023).
Mills, A. (2023a) 'AI Text Generators: Sources to Stimulate Discussion Among Teachers',
Google Docs. Available at:
https://docs.google.com/document/d/1V1drRG1XlWTBrEwgGqd-
cCySUB12JrcoamB5i16-Ezw/edit#heading=h.qljyuxlccr6 (Accessed: 12 July 2023).
Mollick, E. (2022) 'ChatGPT Is a Tipping Point for AI', Harvard Business Review. Available
at: https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai (Accessed: 24 June
2023).
Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G.M., Grundy, S., Lyden, S., Neal,
P., and Sandison, C. (2023) 'ChatGPT versus engineering education assessment: a
multidisciplinary and multi-institutional benchmarking and analysis of this generative
artificial intelligence tool to investigate assessment integrity', European Journal of
Engineering Education, 48(4), pp. 559-614.
https://doi.org/10.1080/03043797.2023.2213169.
O'Brien, M. (2023) Chatbots sometimes make things up. Is AI’s hallucination problem
fixable? AP News. Available at: https://apnews.com/article/artificial-intelligence-
hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4
(Accessed: 31 June 2023).
Rasul, T., Nair, S., Kalendra, D., Robin, M., Santini, F. de O., Ladeira, W. J., Sun, M., Day,
et al. (2023) ‘The role of ChatGPT in higher education: Benefits, challenges, and
future research directions’, Journal of Applied Learning and Teaching. 6(1).
https://doi.org/10.37074/jalt.2023.6.1.29.
Reuters (2023) ChatGPT sets record for fastest-growing user base - analyst note.
Available at: https://www.reuters.com/technology/chatgpt-sets-record-
fastest-growing-user-base-analyst-note-2023-02-01/ (Accessed: 15 June 2023).
Rich, A.S. and Gureckis, T.M. (2019) ‘Lessons for artificial intelligence from the study of
natural stupidity’, Nature Machine Intelligence, 1(4), pp.174-180. Available at:
https://www.nature.com/articles/s42256-019-0038-z (Accessed: 13 July 2023).
Rudolph, J., Tan, S., and Tan, S. (2023a) ‘ChatGPT: Bullshit spewer or the end of
traditional assessments in higher education?’, Journal of Applied Learning and
Teaching. 6(1). https://doi.org/10.37074/jalt.2023.6.1.9.
Rudolph, J., Tan, S. and Tan, S. (2023b) ‘War of the chatbots: Bard, Bing Chat, ChatGPT,
Ernie and beyond. The new AI gold rush and its impact on higher education’, Journal
of Applied Learning and Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1.23.
Sadasivan, V.S., Kumar, A., Balasubramanian, S., Wang, W. and Feizi, S. (2023) Can AI-
Generated Text be Reliably Detected? https://doi.org/10.48550/arXiv.2303.11156.
Stokel-Walker, C. (2022) ‘AI bot ChatGPT writes smart essays — should professors
worry?’, Nature. Available at: https://www.nature.com/articles/d41586-022-04397-7
(Accessed: 11 July 2023).
Stokel-Walker, C. and Van Noorden, R. (2023) ‘What ChatGPT and generative AI mean for
science’, Nature. Available at: https://pubmed.ncbi.nlm.nih.gov/36747115/
(Accessed 12 July 2023).
Sullivan, M., Kelly, A., and McLaughlan, P. (2023) ‘ChatGPT in higher education:
Considerations for academic integrity and student learning’, Journal of Applied
Learning and Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1.17.
Tlili, A., Shehata, B., Adarkwah, M.A., Bozkurt, A., Hickey, D.T., Huang, R., and
Agyemang, B. (2023) ‘What if the devil is my guardian angel: ChatGPT as a case
study of using chatbots in education’, Smart Learning Environments, 10(1), p.15.
https://doi.org/10.1186/s40561-023-00237-x.
Tredinnick, L., and Laybats, C. (2023) ‘The dangers of generative artificial intelligence’,
Business Information Review, 40(2). https://doi.org/10.1177/02663821231183756.
van Dis, E.A.M., Bollen, J., Zuidema, W., van Rooij, R., and Bockting, C.L. (2023)
‘ChatGPT: five priorities for research’, Nature, 614, pp.224-226. Available at:
https://doi.org/10.1038/d41586-023-00288-7 (Accessed: 13 June 2023).
Ventayen, R.J.M. (2023) 'OpenAI ChatGPT Generated Results: Similarity Index of Artificial
Intelligence-Based Contents', Advances in Intelligent Systems and Computing.
Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4332664
(Accessed: 9 July 2023).
Wang, T., Lund, B.D., Marengo, A., Pagano, A., Mannuru, N.R., Teel, Z.A. and Pange, J.
(2023) ‘Exploring the potential impact of artificial intelligence (AI) on international
students in higher education: generative AI, chatbots, analytics, and international
student success’, Applied Sciences, 13(11), 6716.
https://doi.org/10.3390/app13116716.
Wu, X., Duan, R. and Ni, J. (2023) ‘Unveiling Security, Privacy, and Ethical Concerns of
ChatGPT’, Department of Electrical and Computer Engineering, Queen’s University,
Kingston, Canada. Available at: https://arxiv.org/pdf/2307.14192.pdf (Accessed 3
August 2023).
Yu, H., and Guo, Y. (2023) ‘Generative AI in Education: Current Status and Future
Prospects’, Frontiers in Education. https://doi.org/10.3389/feduc.2023.1183162.
Zou, A., Wang, Z., Kolter, J.Z. and Fredrikson, M. (2023) Universal and Transferable
Adversarial Attacks on Aligned Language Models.
https://doi.org/10.48550/arXiv.2307.15043.
Author details
Michal Bobula is a Senior Lecturer and Programme Leader for Accounting and Finance at
the University of the West of England in Bristol, UK (UWE Bristol). He is a member of the
university AI community of practice and co-leads the efforts to incorporate generative AI
into teaching, learning, and assessment strategies in the College of Business and Law.
Licence
©2024 The Author(s). This is an open-access article distributed under the terms of the
Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author
and source are credited. See http://creativecommons.org/licenses/by/4.0/. Journal of
Learning Development in Higher Education (JLDHE) is a peer-reviewed open access
journal published by the Association for Learning Development in Higher Education
(ALDinHE).