Generative Artificial Intelligence (AI) in Higher Education: A Comprehensive Review of Opportunities, Challenges and Implications
Working Paper
Michal Bobula
University of the West of England, Bristol, UK
Abstract:
This paper explores recent advancements and implications of artificial intelligence (AI)
technology, with a specific focus on Large Language Models (LLMs) like ChatGPT 3.5 within
the realm of higher education. Through a comprehensive review of academic literature, the
paper highlights the unprecedented growth of these models and their wide-reaching impact
across various sectors. The discussion sheds light on the opportunities and complex
challenges presented by LLMs, providing a comprehensive overview of the field's current
state.
In the context of higher education, the paper explores the challenges and opportunities
posed by LLMs. These include issues related to educational assessment, potential threats to
academic integrity, privacy concerns, the propagation of misinformation, Equity, Diversity,
and Inclusion (EDI) aspects, and inherent biases within the models. While these challenges
are multifaceted and significant, the paper emphasizes the availability of strategies to
address them effectively and facilitate the successful adoption of LLMs in educational
settings.
Furthermore, the paper recognizes the potential opportunities to transform higher education.
It emphasizes the need to update assessment policies, develop guidelines for staff and
students, scaffold AI skills development, and find ways to leverage technology in the
classroom. By proactively pursuing these steps, higher education institutions (HEIs) can
harness the full potential of LLMs while managing their adoption responsibly.
In conclusion, the paper urges HEIs to allocate appropriate resources to handle the adoption
of LLMs effectively. This includes ensuring staff AI readiness and taking steps to modify their
study programmes to align with the evolving educational landscape influenced by emerging
technologies.
Keywords: Generative AI, ChatGPT, large language models, higher education, policy, AI
literacy, equality, diversity, inclusion, bias, plagiarism, academic integrity, misinformation,
privacy, copyrights, opportunities.
Disclaimer: Generative AI technology has been utilised to check the following document for errors and to
assist the author in the initial research of the subject.
1. Introduction
This paper focuses on the impact of generative AI, such as ChatGPT, on Higher
Education Institutions (HEIs), and offers a systematic review of the main themes
related to generative AI and its impact on higher education. The aim is to provide
academics and academic institutions with guidance on how to navigate through the
considerations associated with the technology and inform their practices. The
importance of addressing the topic is evident since generative AI has 'taken the
world by storm, with notable tension transpiring in the field of education' (Lim et al.,
2023). Various authors recognize threats associated with this technology related to
plagiarism and academic integrity (Crawford et al., 2023; Eke, 2023; Stokel-Walker,
2022; Amani et al., 2023), bias (Cooper, 2023; Dwivedi et al., 2023; Rich and
Gureckis, 2019), inaccuracy and misinformation (Currie, 2023; van Dis et al., 2023),
data and privacy concerns (Wang et al., 2023; Tredinnick and Laybats, 2023),
copyright (Strowel, 2023), and the broader impact on skills and education. Besides
mentioning challenges for HEIs, some authors also see opportunities for student
learning, feedback, and assessment design (Sullivan et al., 2023; Rasul et al., 2023;
Kasneci et al., 2023; Crawford et al., 2023), emphasizing the positive role AI can
play in education. Others call for education and guidelines and regulations (Rudolph
et al., 2023). The Quality Assurance Agency (QAA, 2023) for Higher Education
acknowledges the challenges AI tools like ChatGPT pose to academic integrity,
while recognizing their potential to facilitate deeper learning and enhance
educational inclusivity and accessibility. Furthermore, the Department for Education
(2023) stresses the ethical use of AI tools in education, emphasizing data privacy,
preventing malpractice, and using professional judgment to validate AI-generated
content. While emerging technologies like generative AI have a role, a rigorous,
knowledge-focused curriculum remains vital to prepare students, including teaching
proper and safe use of these tools.
2. Academic integrity and assessment
In a study at the University of Minnesota Law School, ChatGPT was tasked with
answering actual law school exams: 95 multiple choice and 12 essay questions.
Using the school's regular grading method, ChatGPT achieved an average grade of
C+, indicating its capability to address complex legal queries to a certain extent
(Choi et al., 2023).
Terwiesch (2023) analysed ChatGPT's capability in an MBA Operations
Management final exam. ChatGPT displayed competence in operations
management and case analysis but struggled with basic calculations and intricate
problems, potentially due to difficulty in deeply comprehending and applying
concepts. However, it does adjust responses based on human cues.
Rudolph et al. (2023a) found ChatGPT adept at elucidating concepts like quantum
computing with real-world instances. However, it has drawbacks: a word cap,
inability to craft visuals, and sporadic network glitches. While it can generate a 500-
word essay, the content often lacks depth and proper citations. Nikolic et al. (2023)
further benchmarked ChatGPT in varied engineering tests, revealing mixed
performance across assessment types.
It's vital to recognize that these tests were from early 2023. Since then, ChatGPT
has undergone enhancements, boasting increased computational power, internet
browsing, plug-in access, and a new model featuring advanced reasoning and
creativity (OpenAI, 2023a). Choi et al. (2023) discuss optimizing ChatGPT through
prompt engineering, emphasizing tone, word limits, citation clarity, and essay
structure. Succinct directives and grasping the model's context significantly boost its
efficacy.
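To illustrate, the sketch below shows how such directives can be encoded when calling a model programmatically via OpenAI's Python client (v1.x). This is a hypothetical example: the model name, prompts and constraints are placeholders, not the prompts tested by Choi et al. (2023).

# Illustrative sketch of prompt engineering: constraining tone, word limit,
# citation expectations, and essay structure, along the lines Choi et al.
# (2023) recommend. Assumes the official `openai` Python client (v1.x) with
# an API key in the OPENAI_API_KEY environment variable; the model name is
# a placeholder.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "You are a law student answering an exam question. "
    "Write in a formal, analytical tone."
)
user_prompt = (
    "Question: Discuss the enforceability of oral contracts.\n"
    "Constraints:\n"
    "- Maximum 500 words.\n"
    "- Structure the answer as: issue, rule, application, conclusion.\n"
    "- Cite at least two leading cases, giving full case names.\n"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)

Succinct, explicit constraints of this kind are what prompt engineering amounts to in practice: the clearer the directive, the more predictable the output.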
ChatGPT is known for its human-like writing, with studies showing even expert
scientists can struggle to distinguish between its AI-generated abstracts and those
by humans (Else, 2023). Trained on extensive data, it is competent across
disciplines. While ChatGPT can undeniably be used for academic cheating, at the
time of writing there is limited actual evidence of mass-scale cheating taking
place. A survey in January 2023 of over a thousand university students revealed
that more than one-third were using ChatGPT for their assessments. Among these,
75% believed it was cheating but still used it (Intelligent, 2023). Increasing reliance
on this tool amplifies concerns about academic honesty and plagiarism risk (Stokel-
Walker, 2022). Tlili et al. (2023) found an overwhelmingly positive Twitter sentiment
about ChatGPT's educational use. Haensch et al. (2023) analysed the top 100
English TikTok videos related to ChatGPT, totaling 250 million views. Prominent
topics were essay writing, coding, and evasion techniques. Given TikTok's 3.7 million
active UK users, with 26% aged 18-24 (Bendix, 2023), the potential for covert
cheating is significant. Moreover, both Tlili et al. (2023) and Haensch et al. (2023)
note that there is very little discourse about the negative aspects of using generative
AI in education, suggesting that systems such as ChatGPT may be used without
much reflection or evaluation.
The literature identifies multiple solutions to the above problem. Gonsalves (2023)
suggests that educators using multiple-choice tests for assessment may consider
outsmarting the system by taking advantage of the current limitations of generative
AI platforms, such as their inability to interpret visual media or to understand up-to-date
information beyond their training data. For instance, multiple-choice questions could
incorporate visual elements, such as images, figures, or charts, and require students
to interact with these elements. Educators can also use a series of related questions that
demand correct answers before progressing, known as conditional logic branching
questions (a minimal sketch of this mechanism follows this paragraph). Additionally,
questions that require students to apply concepts to current
events or recent case studies can challenge the AI's ability to provide accurate
responses. Moreover, making all answer options plausible and relevant can
necessitate a deeper understanding of the subject matter to identify the correct
answer. However, designing such tests may be more time-consuming, difficult to do
in certain subjects, and as technology improves, attempts to out-design chatbots
might prove fruitless in the long term (Mills, 2023). For instance, while ChatGPT 3.5
does not have internet access, OpenAI gives its newer model, GPT-4, internet
access through the Bing search engine or third-party plug-ins (Wiggers, 2023).
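As a concrete illustration of conditional logic branching, here is a minimal Python sketch; the questions, options and command-line interaction are hypothetical placeholders, not an implementation from Gonsalves (2023). Each follow-up question is unreachable until its prerequisite is answered correctly.

# Minimal sketch of a conditional-logic-branching quiz: each question can
# gate follow-up questions, so later items stay locked until the
# prerequisite is answered correctly.
from dataclasses import dataclass, field

@dataclass
class Question:
    prompt: str
    options: dict[str, str]          # option letter -> option text
    correct: str                     # letter of the correct option
    unlocks: list["Question"] = field(default_factory=list)

def run_branching_quiz(first: Question) -> int:
    """Walk the question chain; stop at the first wrong answer."""
    score, queue = 0, [first]
    while queue:
        q = queue.pop(0)
        print(q.prompt)
        for letter, text in q.options.items():
            print(f"  {letter}) {text}")
        if input("Answer: ").strip().lower() != q.correct:
            break                    # wrong answer: gated questions stay locked
        score += 1
        queue.extend(q.unlocks)      # a correct answer unlocks the follow-ups
    return score

# Hypothetical two-step chain: the second question only appears after the
# first is answered correctly.
q2 = Question("Given that elasticity, what happens to revenue if price rises?",
              {"a": "Revenue falls", "b": "Revenue rises"}, "a")
q1 = Question("The demand curve discussed in this week's seminar was:",
              {"a": "Elastic", "b": "Inelastic"}, "a", unlocks=[q2])

if __name__ == "__main__":
    print(f"Score: {run_branching_quiz(q1)}")

The design point is simply that gated items remain invisible until earned, which makes it harder to paste an entire test into a chatbot in one go.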
Educators could also reconsider the types of questions they ask students, placing
emphasis on those that demand analysis rather than mere recollection of rules and
definitions (Choi et al., 2023). Other authors echo this sentiment and recommend
avoiding assignments and exams that are overly formulaic, to the point where it
becomes indistinguishable if they were completed by a computer. Instead, the focus
should be on crafting assessments that encourage the development of students'
creative and critical thinking skills (Rudolph et al., 2023a; Stokel-Walker, 2022).
Another option is to execute certain assessments during class (Rudolph et al.,
2023a), create innovative and varying assessment formats (Gimpel et al., 2023;
Cooper, 2023; Nikolic et al., 2023), enable students to freely express their genuine
interests through writing, ensuring their voices are heard and their opinions hold
significance (McMurtrie, 2022), and create authentic assessments that are
meaningful, intrinsically motivating students to use their knowledge and skills in a
way that mirrors real-world contexts (Rudolph et al., 2023b; QAA, 2023; Sullivan et
al., 2023). Finally, educators can introduce assessment methods where students are
assessed based on their approach, process, draft submissions, reflections, and interaction
with the content, rather than solely on the end result (Smith and Francis, 2023). Overall,
the emergence of generative AI technology and its associated threat to academic
integrity present an opportunity for educators to revisit and update their assessment
strategies, making the process more relevant and engaging.
In order to respond, HEIs first need to create clear policies for the deployment of AI,
a vital step towards cultivating a learning environment where the technology
is adopted responsibly and with transparency (Gimpel et al., 2023). Secondly, there
is the need to review and update misconduct policies to clarify what uses of AI may
be counted as plagiarism and what are best practices to ensure academic integrity is
upheld (Lim et al., 2023; Ventayen, 2023). Students may be required to report the
aids used during a course: for example, listing the tools used, their fields of application,
and recording the prompts submitted to AI tools such as ChatGPT (a sketch of such a
declaration follows this paragraph). Exceptions can be made to the rules outlined,
provided they are communicated to the students in advance (Spannagel, 2023). However, this
approach, while logical, may be challenging to implement as it would require
additional resources to assess students’ submissions. Moreover, there would be the
need for collaboration and oversight to ensure that educators allow their students to
use AI tools in a coherent way to avoid situations where using AI for a specific
purpose is allowed and encouraged in one module but prohibited in another.
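For illustration, such a declaration could be captured as a simple structured record. The following Python sketch shows a hypothetical format; the field names are illustrative, not a format prescribed by Spannagel (2023).

# Hypothetical structure for a student AI-use declaration: which tool was
# used, for what purpose, and with which prompts.
from dataclasses import dataclass

@dataclass
class AIUseDeclaration:
    tool: str            # e.g. "ChatGPT (GPT-4)"
    application: str     # what the tool was used for
    prompts: list[str]   # the prompts submitted to the tool

declaration = AIUseDeclaration(
    tool="ChatGPT (GPT-4)",
    application="Brainstorming an essay outline; grammar feedback on a draft",
    prompts=[
        "Suggest an outline for an essay on contract enforceability.",
        "Check the grammar of the following paragraph: ...",
    ],
)
print(declaration)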
Further, HEIs need to ensure that the students are familiar with the policies related to
academic integrity and comprehend the potential repercussions of engaging in
academic misconduct (Atlas, 2023). Greater effort must be placed on future-proofing
subjects and degrees (Crawford et al., 2023) and educators should review and adapt
their current assessment practices (Nikolic et al., 2023; Sansom, 2023; Ventayen,
2023; Currie, 2023; Amani et al., 2023; Lo, 2023). However, little is known about
teachers' knowledge and skills to integrate AI-based tools (Celik, 2023). Thus, it is
also pivotal to support academic staff, ensure their AI readiness (Wang et al., 2023),
and give them the autonomy to adapt to technological change in ways that fit their
respective modules and fields (Lim et al., 2023). They also need sufficient resources
and training to implement the changes (Rudolph et al., 2023). Overall, great
challenges lie ahead for HEIs, which must recognise the importance of adapting to a
world with universally available AI by ensuring leadership and sufficient, appropriate
resourcing.
3. Copyright concerns
One of the main concerns related to ChatGPT and other LLMs is the risk of violating
intellectual property rights. As ChatGPT is trained using a large amount of text data,
such as books, articles, and other written materials, some of the training data may be
copyrighted (Karim, 2023). In the realm of AI development, the prevailing belief is
that the larger the training data set, the better the results. OpenAI's GPT-2 model
was trained using a data set that comprised 40 gigabytes of text. GPT-3, the model
upon which ChatGPT is built, utilized a significantly larger data set of 570 GB. The
size of the data set used for OpenAI's most recent model, GPT-4, has not been
disclosed (Heikkilä, 2023). Therefore, using generative AI carries substantial risks
of accidental plagiarism and copyright infringement (Gimpel et al., 2023).
For instance, it has been reported that two authors have filed a lawsuit against
OpenAI, alleging that their copyrighted books were used to train the AI model,
ChatGPT, without their permission. The authors claim that the AI was able to
generate very accurate summaries of their novels, which they believe indicates their
works were unlawfully ingested and used in the training process, and that OpenAI
has unfairly profited from stolen writing and ideas (Creamer, 2023). In a separate
case, Google, along with its parent company Alphabet and AI subsidiary DeepMind,
has been accused in a lawsuit of scraping user data without consent and violating
copyright laws to train its AI products. A similar class action lawsuit had previously
been filed against OpenAI (Dixit, 2023). The case may hinge on whether courts view
the use of copyrighted material in this way as 'fair use' or as simple unauthorized
copying.
The copyright concerns are valid but should be addressed by the appropriate
regulators, while universities should recognize these issues and advocate for clarity
regarding the training data underpinning LLMs. Initially, raising awareness among
students and faculty about potential copyright violations is vital. Over time,
institutions should align with providers who can verify their training data's copyright
legitimacy. For instance, Adobe's 'Firefly', a generative AI image tool, guarantees
commercial usage safety, offering IP indemnification as it is trained on company-
owned images (Gold, 2023). It has been reported that technology companies are in
talks with leading media organizations about using their news content for AI training
(Criddle et al., 2023). Google has updated its privacy terms, which now clearly state
its right to use publicly available data for AI model training. This essentially provides
the company with a 'legal' pass to harness user-generated data. Critics argue that
Google leverages its dominant position to sidestep potential intellectual property
lawsuits and access invaluable data for AI enhancement without incurring costs (Dixit, 2023). Finally,
Bloomberg, the financial information provider used by over 300 universities
worldwide (Bloomberg, 2023a), is developing a new generative AI model, trained on
an extensive variety of financial data owned by the company (Bloomberg, 2023b).
While this model is likely to be available only to subscribing HEIs, it shows the
possibility of data providers developing LLMs using their own data, thereby
effectively tackling copyright-related concerns. Universities need to
monitor these developments and endorse such models and initiatives.
4. Bias, Equality, Diversity and Inclusion (EDI)
ChatGPT has been widely criticised for producing biased and discriminatory content,
which has been caused by using training data that reflects the biases of society
(Weinberger, 2019; Dwivedi et al., 2023). It appears that conversational AI often
replicates and even intensifies the same biases that frequently mislead humans,
including availability, selection, and confirmation biases (van Dis et al., 2023).
Overall, LLMs can perpetuate and amplify existing biases and unfairness in society,
which can negatively impact teaching and learning processes and outcomes
(Abdelghani et al., 2023). It's crucial to understand that ChatGPT does not operate
based on ethical principles, nor can it discern between right and wrong, or truth and
falsehood. This tool merely gathers data from the databases and texts it processes
online, thereby inheriting any cognitive biases present in that information (Sabzalieva
and Valentini, 2023). OpenAI has recognized this issue and claims to have fine-
tuned GPT-4, achieving an 82% reduction in the model's tendency to produce
disallowed content compared to its predecessor. Additionally, GPT-4 is 29% more
aligned with company safety policies on sensitive data handling (OpenAI, 2023b).
Yet, Zou et al. (2023) found a way to circumvent the guardrails of nearly all open-
source LLMs, which are designed to prevent harmful outputs. Furthermore,
Hartmann et al. (2023) highlight ChatGPT's pro-environmental, left-libertarian bias.
While AI poses challenges, it also offers solutions to bolster equality, diversity, and
inclusion. Chatbots can promote inclusivity for disadvantaged students and those
with diverse backgrounds or disabilities. They can handle course queries, direct
students to resources, provide materials suited to different learning styles (Gupta
and Chen, 2022), and simplify complex ideas, benefiting those with communication
challenges (Hemsley et al., 2023). ChatGPT offers opportunities to enhance
academic success for diverse student groups. Non-native English speakers can use
ChatGPT for grammar feedback and as a semi-translator for complex terms, aiding
comprehension (Sullivan et al., 2023). For neurodivergent students, AI can assist in
time management, information processing, and thought organization (McMurtrie,
2023). AI can also support adaptive writing and highlight essential information in
various formats (Kasneci et al., 2023). Moreover, the fact that AI may produce
biased responses represents a valuable opportunity to raise awareness and engage
in a discussion about inherent biases which are present in society and therefore
reflected in the LLM’s training data (Heaven, 2023).
5. Information Privacy
Another obstacle to adopting ChatGPT lies in concerns over privacy and security.
Given the vast data processed by ChatGPT's machine learning algorithms, it's
vulnerable to cyberattacks, risking unauthorized access or misuse of sensitive
information (Dwivedi et al., 2023). Concerns also arise from how ChatGPT uses the
information from its interactions (Azaria et al., 2023). Tlili et al. (2023) examined
OpenAI's policies and found that while conversations with ChatGPT are stored and
used to improve its performance, the specifics of storage and use are not entirely
clear.
The privacy implications of ChatGPT are particularly relevant for learners and
educators who may lack in-depth knowledge of technology and privacy. Young
learners could unintentionally share personal details with ChatGPT, underscoring the
need to protect the privacy of users, particularly the younger demographic.
Regulators in Europe and the US echo these concerns. In 2023, Italy restricted
ChatGPT access due to privacy and age-verification issues. OpenAI responded by
addressing and clarifying these privacy matters (McCallum, 2023). The US Federal
Trade Commission is also scrutinising OpenAI for potential false or harmful
statements about real individuals and its data privacy methods (BBC News, 2023).
Similarly, Japanese authorities have voiced privacy concerns, emphasizing the need
to weigh these against the benefits of generative AI (Kaur, 2023).
HEIs must ensure that staff and students are well-informed about AI-related privacy
concerns, emphasizing the avoidance of sharing personal or sensitive data. It's vital
for HEIs to stay updated on AI advancements, collaborating with providers that
uphold transparent privacy policies. By doing so, institutions can maintain best
practices in AI utilisation. Regulatory bodies offer essential guidelines that HEIs
should follow. Although HEIs should recognise the potential for privacy breaches,
their role is not to police AI systems but to foster a secure learning environment that
benefits from AI. Regulation should be left to the designated authorities. In summary,
while privacy issues are genuine, they should not hinder AI's integration into higher
education.
6. Misinformation
To overcome the limitations of LLMs, it's crucial to inform users about potential
inaccuracies, or hallucinations, and not to rely entirely on AI-generated outputs.
Students who use such content should strive to verify its accuracy and take
responsibility for any factual errors. Moreover, fictional references, which are often
markers of AI-generated content, can help educators pinpoint students passing off
AI-generated work as their own. Rigorous reference checking could serve as one
method of enforcing academic integrity, although this would necessitate additional
time for scrutiny. It remains vital for universities to teach the fundamentals of each
subject and to supervise the use of AI while equipping their students with a solid
foundation in pertinent skills and knowledge. Once students have the necessary
knowledge and skills, they can evaluate the outputs generated by AI, weeding out
inaccuracies and leveraging the beneficial and helpful components of the responses
offered by the technology. This approach fosters the application of critical thinking in
their studies.
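As one illustration of how part of such reference checking could be automated, the sketch below looks each cited DOI up in the public Crossref API; a DOI that Crossref cannot resolve is a candidate fabricated reference. This is a sketch under stated assumptions: the references include DOIs, the third-party requests library is installed, and the DOI and contact address shown are placeholders.

# Sketch of automated reference checking against the public Crossref API.
# An unresolvable DOI is flagged for manual verification.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref resolves the DOI to a registered work."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "reference-checker (mailto:staff@example.edu)"},  # placeholder contact
        timeout=10,
    )
    return resp.status_code == 200

cited_dois = ["10.1234/placeholder.doi"]  # placeholder DOI extracted from a submission
for doi in cited_dois:
    status = "found" if doi_exists(doi) else "NOT FOUND - verify manually"
    print(f"{doi}: {status}")

A check of this kind only confirms that a cited work exists; whether it actually supports the student's claim still requires human judgement.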
7. Opportunities
8. Summary
The rapid advancement and spread of generative AI and LLMs, most notably
represented by ChatGPT 3.5, is a testament to the transformative nature of modern
AI. These models not only deliver heightened efficiency but also democratize AI,
making it an accessible tool for diverse audiences. Their applications, ranging from
simple text generation to intricate code creation, mark a new era where technology
is both an aid and a collaborator.
One of the immediate impacts has been on education, and this paper outlines the
challenges associated with the wide-scale adoption of LLMs, but also offers solutions
and presents opportunities for institutions and educators to leverage the technology
to enhance the learning experience. Firstly, HEIs must update their assessment offence
policies and review their current assessment strategies. Secondly, they need to
educate both students and staff to manage various ethical challenges, as it remains
imperative to strike a rational balance, nurturing innovation while safeguarding
ethical integrity. Thirdly, HEIs must explore use cases and the potential of generative
AI to be integrated into their syllabi, offering students the opportunity to learn how
to use LLMs in an academic context and, in the future, in professional settings. This
involves equipping students with these skills through scaffolding, progressively
developing AI competence throughout university courses. HEIs need to realise the
scale of the AI transition and devote adequate resources to ensure staff AI readiness,
prepare to make the necessary changes to their teaching, learning and assessment
practices, and assess the need to review module and programme learning
outcomes. Currently, generative AI is still a relatively novel technology; therefore, the
question of how to successfully integrate it within higher education remains
unanswered. The academic literature points to multiple opportunities; however,
educators need to be given the autonomy and training necessary to start experimenting
with generative AI. These efforts must be supervised, coordinated and evaluated,
involving an open dialogue with the relevant stakeholders. To achieve that, creating
academic or management roles responsible for this transition should be considered.