Generative Artificial Intelligence (AI) in Higher Education: A Comprehensive Review of Opportunities, Challenges and Implications


Working Paper

Michal Bobula
University of the West of England, Bristol, UK
Abstract:

This paper explores recent advancements and implications of artificial intelligence (AI)
technology, with a specific focus on Large Language Models (LLMs) like ChatGPT 3.5 within
the realm of higher education. Through a comprehensive review of academic literature, the
paper highlights the unprecedented growth of these models and their wide-reaching impact
across various sectors. The discussion sheds light on the opportunities and complex
challenges presented by LLMs, providing a broad overview of the field's current state.
In the context of higher education, the paper explores the challenges and opportunities
posed by LLMs. These include issues related to educational assessment, potential threats to
academic integrity, privacy concerns, the propagation of misinformation, Equality, Diversity and Inclusion (EDI) aspects, and inherent biases within the models. While these challenges
are multifaceted and significant, the paper emphasizes the availability of strategies to
address them effectively and facilitate the successful adoption of LLMs in educational
settings.
Furthermore, the paper recognizes the potential opportunities to transform higher education.
It emphasizes the need to update assessment policies, develop guidelines for staff and
students, scaffold AI skills development, and find ways to leverage technology in the
classroom. By proactively pursuing these steps, higher education institutions (HEIs) can
harness the full potential of LLMs while managing their adoption responsibly.
In conclusion, the paper urges HEIs to allocate appropriate resources to handle the adoption
of LLMs effectively. This includes ensuring staff AI readiness and taking steps to modify their
study programmes to align with the evolving educational landscape influenced by emerging
technologies.

Keywords: Generative AI, ChatGPT, large language models, higher education, policy, AI
literacy, equality, diversity, inclusion, bias, plagiarism, academic integrity, misinformation,
privacy, copyright, opportunities.

Disclaimer: Generative AI technology has been utilised to check the following document for errors and to assist the author in the initial research of the subject.
1. Introduction

Recent advancements in artificial intelligence (AI) technology, particularly Large Language Models (LLMs), have attracted unprecedented attention with the introduction of ChatGPT 3.5 – the chatbot that became available to the general public in November 2022. The app took the world by surprise and became the fastest-growing online application in history, reaching an estimated 100 million monthly active users just two months after its launch (Reuters, 2023). This record-breaking growth is part of a broader trend observed over the past few decades: technology adoption rates are accelerating, with inventions reaching massive numbers of users in increasingly shorter periods of time.

ChatGPT is capable of generating human-like responses using natural language, essentially amounting to an expert system that is available on demand. The user communicates with the system using natural-language prompts, which are then interpreted by the statistical model. This model uses its training data to produce an answer (Kim et al., 2023), an interaction illustrated in the sketch below. While chatbots are not a new invention, ChatGPT has brought several advancements to generative AI, including improved contextual understanding, language generation, task adaptability, and multilingual proficiency. It can understand the context of a conversation and generate relevant responses, making it more effective at mimicking human-like interactions. Its advanced language generation capabilities allow it to produce coherent, contextually accurate, and grammatically correct text. Furthermore, ChatGPT can be fine-tuned for specific tasks or domains, increasing its versatility across various industries and applications (Ray, 2023). The scientific community acknowledges its capabilities and its potential to significantly impact scientific research and various industries, including IT, education, travel, tourism, transport, hospitality, finance, and marketing. However, various authors point to its limitations, controversies, and future challenges (Stokel-Walker and Van Noorden, 2023; Ray, 2023; Dwivedi et al., 2023).
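
To make this prompt-based interaction concrete, the following minimal sketch sends a single prompt to an LLM through a chat-style API. It is an illustration only, assuming the openai Python client (version 1.x) and an API key supplied via the OPENAI_API_KEY environment variable; nothing in it is specific to the studies cited in this paper.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The prompt is plain natural language; the model interprets it
    # statistically and generates a response from patterns in its
    # training data.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful teaching assistant."},
            {"role": "user", "content": "Explain prompt engineering in two sentences."},
        ],
    )
    print(response.choices[0].message.content)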

This paper focuses on the impact of generative AI, such as ChatGPT, on Higher
Education Institutions (HEIs), and offers a systematic review of the main themes
related to generative AI and its impact on higher education. The aim is to provide
academics and academic institutions with guidance on navigating the considerations associated with the technology and on informing their practices. The
importance of addressing the topic is evident since generative AI has 'taken the
world by storm, with notable tension transpiring in the field of education' (Lim et al.,
2023). Various authors recognize threats associated with this technology related to
plagiarism and academic integrity (Crawford et al., 2023; Eke, 2023; Stokel-Walker,
2022; Amani et al., 2023), bias (Cooper, 2023; Dwivedi et al., 2023; Rich and
Gureckis, 2019), inaccuracy and misinformation (Currie, 2023; van Dis et al., 2023),
data and privacy concerns (Wang et al., 2023; Tredinnick and Laybats, 2023),
copyright (Strowel, 2023), and broader impact on skills and education. Besides
mentioning challenges for HEIs, some authors also see opportunities for student
learning, feedback, and assessment design (Sullivan et al., 2023; Rasul et al., 2023;
Kasneci et al., 2023; Crawford et al., 2023), emphasizing the positive role AI can play in education. Others call for education, guidelines, and regulation (Rudolph et al., 2023). The Quality Assurance Agency for Higher Education (QAA, 2023)
acknowledges the challenges AI tools like ChatGPT pose to academic integrity,
while recognizing their potential to facilitate deeper learning and enhance
educational inclusivity and accessibility. Furthermore, the Department for Education
(2023) stresses the ethical use of AI tools in education, emphasizing data privacy,
preventing malpractice, and using professional judgment to validate AI-generated
content. While emerging technologies like generative AI have a role, a rigorous,
knowledge-focused curriculum remains vital to prepare students, including teaching
proper and safe use of these tools.

2. Plagiarism, Academic Integrity and Assessment Design

The possibility of using generative AI as a plagiarism tool affecting academic integrity appears to be the main concern. The first step in assessing the threat of plagiarism and its potential impact on academic integrity is to examine the capabilities of LLMs.

In a study at the University of Minnesota Law School, ChatGPT was tasked with
answering actual law school exams: 95 multiple choice and 12 essay questions.
Using the school's regular grading method, ChatGPT achieved an average grade of
C+, indicating its capability to address complex legal queries to a certain extent (Choi et al., 2023).
Terwiesch (2023) analysed ChatGPT's capability in an MBA Operations
Management final exam. ChatGPT displayed competence in operations
management and case analysis but struggled with basic calculations and intricate
problems, potentially due to difficulty in deeply comprehending and applying
concepts. However, it does adjust responses based on human cues.

Rudolph et al. (2023a) found ChatGPT adept at elucidating concepts like quantum
computing with real-world instances. However, it has drawbacks: a word cap,
inability to craft visuals, and sporadic network glitches. While it can generate a 500-
word essay, the content often lacks depth and proper citations. Nikolic et al. (2023)
further benchmarked ChatGPT in varied engineering tests, revealing mixed
performance across assessment types.

ChatGPT also performed strongly on a common economics course test in the USA. Compared to student results, ChatGPT scored in the 91st percentile in Microeconomics and the 99th in Macroeconomics. This exam is a standard multiple-choice comprehension test of economics with textbook answers, so ChatGPT's results should not be unexpected (Geerling et al., 2023).

Finally, Gilson et al. (2023) evaluate the performance of ChatGPT on questions within the scope of the United States Medical Licensing Examination and demonstrate that the model attains a score comparable to that of a third-year medical student. Moreover, the authors underscore the capability of ChatGPT to offer logical coherence and contextual information in the majority of its responses.

It's vital to recognize that these tests were from early 2023. Since then, ChatGPT has undergone enhancements, boasting increased computational power, internet browsing, plug-in access, and a new model featuring advanced reasoning and creativity (OpenAI, 2023a). Choi et al. (2023) address optimizing ChatGPT through prompt engineering, emphasizing tone, word limits, citation clarity, and essay structure. Succinct directives and a grasp of the model's context significantly boost its efficacy.
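
As a purely hypothetical illustration of these techniques (not an example drawn from Choi et al.), a single well-structured prompt can bundle several such directives:

    Adopt a formal academic tone. Write a 300-word answer structured as an
    introduction, two argument paragraphs, and a conclusion. Cite at least
    two sources in Harvard style and do not exceed the word limit.

Each added directive narrows the space of acceptable outputs, which is why prompts of this kind tend to produce more focused and predictable responses.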

ChatGPT is known for its human-like writing, with studies showing that even expert scientists can struggle to distinguish between its AI-generated abstracts and those written by humans (Else, 2023). Trained on extensive data, it is competent across disciplines. While ChatGPT can undeniably be used for academic cheating, at the time of writing there is limited actual evidence of mass-scale cheating taking place. A survey in January 2023 of over a thousand university students revealed that nearly one-third were using ChatGPT for their assessments. Among these, 75% believed it was cheating but still used it (Intelligent, 2023). Increasing reliance on this tool amplifies concerns about academic honesty and plagiarism risk (Stokel-Walker, 2022). Tlili et al. (2023) found an overwhelmingly positive Twitter sentiment about ChatGPT's educational use. Haensch et al. (2023) analysed the top 100 English-language TikTok videos related to ChatGPT, totaling 250 million views. Prominent topics were essay writing, coding, and evasion techniques. Given TikTok's 3.7 million active UK users, with 26% aged 18-24 (Bendix, 2023), the potential for covert cheating is significant. Moreover, both Tlili et al. (2023) and Haensch et al. (2023) note that there is very little discourse about the negative aspects of using generative AI in education, meaning that systems such as ChatGPT may be used without much reflection or evaluation.

Traditional plagiarism-detection tools are rendered ineffective in this context, as AI is becoming increasingly proficient at imitating human-like text, and AI-generated text is deemed original and thus undetectable (Dwivedi et al., 2023), leaving detectors with a performance that is only marginally better than a random classifier (Feizi and Huang, 2023). Moreover, the existing AI detection tools are prone to false positives and false negatives due to variable human writing styles (Dalalah and Dalalah, 2023). Even if potentially feasible solutions emerge, such as credible AI detection tools and watermarking of AI-generated output, students may opt to edit the AI-generated outputs or use other tools to make the results less identifiable as machine-generated (Lancaster, 2023). Finally, there are reports of bias against non-native English writers due to the design of detection tools that look for low-perplexity writing as a marker of AI-generated text, penalising non-native speakers with limited linguistic expressions (Liang et al., 2023). The same study finds that such detection tools can also be bypassed by changing prompting strategy.
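
To illustrate the mechanism Liang et al. (2023) criticise, the sketch below computes the perplexity of a passage under GPT-2 using the Hugging Face transformers library. Detectors of this kind flag low-perplexity (highly predictable) text as likely machine-generated, which is exactly why plain, formulaic writing by non-native speakers can be misclassified. This is a minimal sketch of the general technique, assuming the transformers and torch packages, not a reconstruction of any specific commercial detector.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Exponentiated cross-entropy of the text under the model:
        # lower values mean the text is more predictable.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(enc.input_ids, labels=enc.input_ids)
        return torch.exp(out.loss).item()

    # A detector built on this signal would flag low-perplexity passages
    # as probable AI output, regardless of who actually wrote them.
    print(perplexity("The results of the study are presented below."))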

The literature identifies multiple solutions to the above problem. Gonsalves (2023) suggests that educators using multiple-choice tests for assessment may consider outsmarting the system by taking advantage of the current limitations of generative AI platforms, such as their inability to interpret visual media or understand up-to-date information beyond their training data. For instance, multiple-choice questions could incorporate visual elements, such as images, figures, or charts, and require students to interact with these elements. Educators can also use a series of related questions that demand correct answers before progressing, known as conditional logic branching questions. Additionally, questions that require students to apply concepts to current events or recent case studies can challenge the AI's ability to provide accurate responses. Moreover, making all answer options plausible and relevant can necessitate a deeper understanding of the subject matter to identify the correct answer. However, designing such tests may be more time-consuming, difficult to do in certain subjects, and, as technology improves, attempts to out-design chatbots might prove fruitless in the long term (Mills, 2023). For instance, while ChatGPT 3.5 does not have internet access, OpenAI allows its newer model GPT-4 internet access through the Bing search engine or third-party plug-ins (Wiggers, 2023).
Educators could also reconsider the types of questions they ask students, placing emphasis on those that demand analysis rather than mere recollection of rules and definitions (Choi et al., 2023). Other authors echo this sentiment and recommend avoiding assignments and exams that are so formulaic that it becomes indistinguishable whether they were completed by a computer. Instead, the focus should be on crafting assessments that encourage the development of students' creative and critical thinking skills (Rudolph et al., 2023a; Stokel-Walker, 2022). Other options include conducting certain assessments during class (Rudolph et al., 2023a), creating innovative and varied assessment formats (Gimpel et al., 2023; Cooper, 2023; Nikolic et al., 2023), enabling students to freely express their genuine interests through writing, ensuring their voices are heard and their opinions hold significance (McMurtrie, 2022), and creating authentic assessments that are meaningful, intrinsically motivating students to use their knowledge and skills in ways that mirror real-world contexts (Rudolph et al., 2023b; QAA, 2023; Sullivan et al., 2023). Finally, educators can introduce assessment methods where students are assessed on their approach, process, draft submissions, reflections, and interaction with the content, rather than solely on the end result (Smith and Francis, 2023). Overall, the emergence of generative AI technology and its associated threat to academic integrity present an opportunity for educators to revisit and update their assessment strategies, making the process more relevant and engaging.
In order to respond, HEIs need to create clear policies for the deployment of AI, a vital step towards cultivating a learning environment where this technology is adopted responsibly and with transparency (Gimpel et al., 2023). Secondly, there is a need to review and update misconduct policies to clarify which uses of AI may count as plagiarism and what the best practices are for upholding academic integrity (Lim et al., 2023; Ventayen, 2023). Students may be required to report the aids used during a course, for example by listing the tools, their fields of application, and the prompts entered into AI tools such as ChatGPT; an illustrative declaration format is sketched below. Exceptions can be made to the rules outlined, provided they are communicated to the students in advance (Spannagel, 2023). However, this approach, while logical, may be challenging to implement, as it would require additional resources to assess students' submissions. Moreover, there would be a need for collaboration and oversight to ensure that educators allow their students to use AI tools in a coherent way, avoiding situations where using AI for a specific purpose is allowed and encouraged in one module but prohibited in another.
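
By way of illustration, a declaration along the lines Spannagel (2023) suggests might ask students for entries such as the following; the format is hypothetical and would need to be adapted to each institution's own policy:

    Tool used:        ChatGPT (GPT-3.5)
    Applied to:       brainstorming the essay structure; grammar feedback on drafts
    Prompts recorded: "Suggest an outline for an essay on fiscal policy."
                      "Check the following paragraph for grammatical errors: ..."
    Verbatim reuse:   none; the outline was adapted and the final wording is my own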

Further, HEIs need to ensure that students are familiar with the policies related to academic integrity and comprehend the potential repercussions of engaging in academic misconduct (Atlas, 2023). Greater effort must be placed on future-proofing subjects and degrees (Crawford et al., 2023), and educators should review and adapt their current assessment practices (Nikolic et al., 2023; Sansom, 2023; Ventayen, 2023; Currie, 2023; Amani et al., 2023; Lo, 2023). However, little is known about teachers' knowledge and skills to integrate AI-based tools (Celik, 2023). Thus, it is also pivotal to support academic staff and ensure their AI readiness (Wang et al., 2023), to accept the need to adapt to technological change, and to give staff the autonomy to do so in ways that fit their respective modules and fields (Lim et al., 2023). They need sufficient resources and training to implement the changes (Rudolph et al., 2023). Overall, significant challenges lie ahead for HEIs, which need to recognise the importance of adapting to a world of universally available AI by ensuring leadership and the availability of sufficient and appropriate resourcing.

As discussed in this paper, numerous issues arise when using AI in education. Therefore, it becomes crucial for teachers to initially focus on understanding what AI can offer them and how they can adapt to an education system enhanced by AI (Hrastinski et al., 2019). Finally, from a practical perspective, it is important to be transparent and collaborative and to ensure a consistent approach to utilising AI.

3. Copyright Concerns

One of the main concerns related to ChatGPT and other LLMs is the risk of violating
intellectual property rights. As ChatGPT is trained using a large amount of text data,
such as books, articles, and other written materials, some of the training data may be
copyrighted (Karim, 2023). In the realm of AI development, the prevailing belief is
that the larger the training data set, the better the results. OpenAI's GPT-2 model
was trained using a data set that comprised 40 gigabytes of text. GPT-3, the model
upon which ChatGPT is built, utilized a significantly larger data set of 570 GB. The
size of the data set used for OpenAI's most recent model, GPT-4, has not been
disclosed (Heikkilä, 2023). Therefore, using generative AI carries substantial risks of accidental plagiarism and copyright infringement (Gimpel et al., 2023).

For instance, it has been reported that two authors have filed a lawsuit against
OpenAI, alleging that their copyrighted books were used to train the AI model,
ChatGPT, without their permission. The authors claim that the AI was able to
generate very accurate summaries of their novels, which they believe indicates their
works were unlawfully ingested and used in the training process, and that OpenAI
has unfairly profited from stolen writing and ideas (Creamer, 2023). In a separate
case, Google, along with its parent company Alphabet and AI subsidiary DeepMind,
has been accused in a lawsuit of scraping user data without consent and violating
copyright laws to train its AI products. A similar class action lawsuit had previously
been filed against OpenAI (Dixit, 2023). The case may hinge on whether courts view
the use of copyrighted material in this way as 'fair use' or as simple unauthorized
copying.

Further, it is important to note that copyright law differs between jurisdictions. The US operates on the principle that once something is public, it loses its privacy, which is not aligned with European legal principles. According to OpenAI, their models undergo training using publicly accessible content, licensed content, and content reviewed by humans. However, this falls short of the standards set by the GDPR, which gives individuals, as "data subjects", certain rights, including the right to be informed about the collection and use of their data, as well as the right to have their data removed from systems, regardless of whether it was initially public (Heikkilä, 2023).

The copyright concerns are valid but should be addressed by the appropriate regulators, while universities should recognize these issues and advocate for clarity regarding the training data underpinning LLMs. Initially, raising awareness among students and faculty about potential copyright violations is vital. Over time, institutions should align with providers who can verify the copyright legitimacy of their training data. For instance, Adobe's 'Firefly', a generative AI image tool, guarantees commercial usage safety, offering IP indemnification because it is trained on company-owned images (Gold, 2023). It has been reported that technology companies are in talks with leading media organizations about using their news content for AI training (Criddle et al., 2023). Google's updated privacy terms now clearly state its right to use publicly available data for AI model training. This essentially provides the company a 'legal' pass to harness user-generated data. Critics argue that Google leverages its dominant position to sidestep potential intellectual property lawsuits and access invaluable data for AI enhancement without incurring costs (Dixit, 2023). Finally, Bloomberg – the financial information provider used by over 300 universities worldwide (Bloomberg, 2023a) – is developing a new generative AI model trained on an extensive variety of financial data owned by the company (Bloomberg, 2023b). While this model is likely to be available only to subscribing HEIs, it shows the possibility of data providers developing LLMs using their own data, thereby effectively tackling copyright-related concerns. Universities need to monitor these developments and endorse such models and initiatives.

4. Equality, Diversity and Inclusion (EDI)

ChatGPT has been widely criticised for producing biased and discriminatory content, a consequence of training data that reflects the biases of society (Weinberger, 2019; Dwivedi et al., 2023). It appears that conversational AI often
replicates and even intensifies the same biases that frequently mislead humans,
including availability, selection, and confirmation biases (van Dis et al., 2023).
Overall, LLMs can perpetuate and amplify existing biases and unfairness in society,
which can negatively impact teaching and learning processes and outcomes
(Abdelghani et al., 2023). It's crucial to understand that ChatGPT does not operate
based on ethical principles, nor can it discern between right and wrong, or truth and
falsehood. This tool merely gathers data from the databases and texts it processes
online, thereby inheriting any cognitive biases present in that information (Sabzalieva
and Valentini, 2023). OpenAI has recognized this issue and claims to have fine-tuned GPT-4, achieving an 82% reduction in the model's tendency to produce disallowed content compared to its predecessor. Additionally, GPT-4 is 29% more aligned with company safety policies on sensitive data handling (OpenAI, 2023b).
Yet, Zou et al. (2023) found a way to circumvent the guardrails of nearly all open-
source LLMs, which are designed to prevent harmful outputs. Furthermore,
Hartmann et al. (2023) highlight ChatGPT's pro-environmental, left-libertarian bias.

While AI poses challenges, it also offers solutions to bolster equality, diversity, and
inclusion. Chatbots can promote inclusivity for disadvantaged students and those
with diverse backgrounds or disabilities. They can handle course queries, direct
students to resources, provide materials suited to different learning styles (Gupta
and Chen, 2022), and simplify complex ideas, benefiting those with communication
challenges (Hemsley et al., 2023). ChatGPT offers opportunities to enhance
academic success for diverse student groups. Non-native English speakers can use
ChatGPT for grammar feedback and as a semi-translator for complex terms, aiding
comprehension (Sullivan et al., 2023). For neurodivergent students, AI can assist in
time management, information processing, and thought organization (McMurtrie,
2023). AI can also support adaptive writing and highlight essential information in
various formats (Kasneci et al., 2023). Moreover, the fact that AI may produce
biased responses represents a valuable opportunity to raise awareness and engage
in a discussion about inherent biases which are present in society and therefore
reflected in the LLM’s training data (Heaven, 2023).

Generative AI, when used appropriately in education, offers significant potential to enhance EDI and promote inclusive learning. With the rise of various models, it is imperative for HEIs to ensure equal access to technology. This includes guaranteeing the inclusivity of tools like ChatGPT and addressing digital disparities (Lim et al., 2023). Furthermore, it is crucial to inform both students and staff about the limitations of these models. HEIs must also draft AI policies that cater to the diverse needs of the entire academic community (McMurtrie, 2023). In conclusion, while the ability of AI to bolster EDI is clear, the risks associated with AI use can be managed; doing so, however, requires HEIs to invest significant time and effort to devise and implement suitable practices to achieve their objectives.

5. Information Privacy

Another obstacle to adopting ChatGPT lies in concerns over privacy and security.
Given the vast data processed by ChatGPT's machine learning algorithms, it's
vulnerable to cyberattacks, risking unauthorized access or misuse of sensitive
information (Dwivedi et al., 2023). Concerns also arise from how ChatGPT uses the information from its interactions (Azaria et al., 2023). Tlili et al. (2023) examined
OpenAI's policies and found that while conversations with ChatGPT are stored and
used to improve its performance, the specifics of storage and use are not entirely
clear.

The privacy implications of ChatGPT are particularly relevant for learners and
educators who may lack in-depth knowledge of technology and privacy. Young
learners could unintentionally share personal details with ChatGPT, underscoring the
need to protect the privacy of users, particularly the younger demographic.
Regulators in Europe and the US echo these concerns. In 2023, Italy restricted
ChatGPT access due to privacy and age-verification issues. OpenAI responded by
addressing and clarifying these privacy matters (McCallum, 2023). The US Federal
Trade Commission is also scrutinising OpenAI for potential false or harmful
statements about real individuals and its data privacy methods (BBC News, 2023).
Similarly, Japanese authorities have voiced privacy concerns, emphasizing the need
to weigh these against the benefits of generative AI (Kaur, 2023).

HEIs must ensure that staff and students are well-informed about AI-related privacy
concerns, emphasizing the avoidance of sharing personal or sensitive data. It's vital
for HEIs to stay updated on AI advancements, collaborating with providers that
uphold transparent privacy policies. By doing so, institutions can maintain best
practices in AI utilisation. Regulatory bodies offer essential guidelines that HEIs
should follow. Although HEIs should recognise the potential for privacy breaches,
their role is not to police AI systems but to foster a secure learning environment that
benefits from AI. Regulation should be left to the designated authorities. In summary,
while privacy issues are genuine, they should not hinder AI's integration into higher
education.

6. Misinformation

Prior to ChatGPT, people relied on various methods to filter information, such as verifying the content directly, assessing the knowledge level of the creator, and evaluating language rigor, format correctness, and text length as indicators of reliability. However, ChatGPT's content generation capabilities excel in these
aspects, potentially creating a false sense of reliability. Users may fall into the trap of
blindly trusting the content generated by ChatGPT (Wu et al., 2023). In fact, LLMs
are prone to hallucinations, a phenomenon where the AI model generates inaccurate
or outright false information. This is particularly true when asking for a literature
reference as generative AI can fabricate convincing titles and authors that do not
exist (Burger et al., 2023). Hallucinations from LLMs can mislead and potentially
harm users seeking personal advice (Khowaja et al., 2023). Companies developing
LLMs are aware of these issues and are working to address them (O'Brien, 2023).

To overcome the limitations of LLMs, it's crucial to inform users about potential inaccuracies, or hallucinations, and not to rely entirely on AI-generated outputs. Students who use such content should strive to verify its accuracy and take responsibility for any factual errors. Moreover, fictional references, which are often markers of AI-generated content, can help educators pinpoint students falsely presenting AI-generated work as their own. Rigorous reference checking could serve as one method of enforcing academic integrity, although this would necessitate additional time for scrutiny (see the sketch below). It remains vital for universities to teach the fundamentals of each subject and to supervise the use of AI while equipping their students with a solid foundation in pertinent skills and knowledge. Once students have the necessary knowledge and skills, they can evaluate the outputs generated by AI, weeding out inaccuracies and leveraging the beneficial and helpful components of the responses offered by the technology. This approach fosters the application of critical thinking in their studies.
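
As one way to partially automate such checks, the sketch below queries the public CrossRef REST API for a cited title and lists the closest bibliographic matches; a fabricated reference typically returns no close match among them. This is a minimal illustration assuming the requests package; a real workflow would add fuzzy title matching and consult additional databases.

    import requests

    def crossref_matches(cited_title: str, rows: int = 3) -> list[str]:
        # Ask CrossRef for the records closest to the cited title.
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": cited_title, "rows": rows},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        return [item["title"][0] for item in items if item.get("title")]

    # Print the top candidate matches for manual comparison with the citation.
    for title in crossref_matches("Attention is all you need"):
        print(title)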

There's concern that generative AI might inadvertently propagate misinformation. The Covid-19 pandemic illustrated how AI can mass-produce questionable content and amplify misinformation through social media bots (Gisondi et al., 2023). This risk implies that even well-intentioned users might unknowingly spread fake news.
While users should verify AI-provided information, many still fall victim to
misinformation (Lim et al., 2023), due to cognitive biases, reliance on headlines, and the persistence of false information (Center for Information Technology and Society, 2023). Therefore, it is vital to educate students on information quality, enhance
critical thinking, and raise awareness about AI’s potential to spread misinformation.

7. Opportunities

The widespread integration of LLMs in higher education can offer substantial advantages for students and faculty. As tools like ChatGPT become more common at work, graduates must be equipped to use them adeptly, with the right knowledge and skills to apply these tools effectively. This entails understanding AI capabilities, limitations, and their ethical and societal implications. This skill development could be scaffolded and progressively developed through strategic curriculum design and embedded into assessments (Cradle, 2023). Thus, embedding AI literacy in graduate skills can enhance graduates' employability in a rapidly evolving job market.

First, HEIs must educate students on navigating AI-related challenges, as detailed earlier in this paper. Next, they should introduce basic prompt engineering, enabling effective communication with the models. Most crucially, HEIs need to develop use cases for generative AI models in educational settings that are relevant to graduate jobs and that help enhance graduate outcomes as well as the student journey. By following these steps, HEIs would fully include AI literacy in graduate skillsets, boosting students' employability and readiness for the swiftly changing employment landscape.

Developing use cases for generative AI is a significant challenge for educational providers. LLMs possess notable capabilities for education, but these are yet to be fully explored. The academic literature offers various examples of such capabilities. Mollick (2022) recommends having students assess ChatGPT's responses or compare a ChatGPT-produced research paper with the original to bolster their critical thinking skills. Generative AI platforms can be viewed as a valuable reservoir of ideas and motivation, providing a starting point for students, as they can assist with topic brainstorming and creativity when developing project ideas (Javaid et al., 2023). Gilson et al. (2023) note that ChatGPT's initial answer could prompt further questioning and encourage students to apply their knowledge and reasoning skills, while Rudolph et al. (2023) state that generative AI could be used as an aid to improve writing and research skills. ChatGPT is a versatile tool that can produce more than just text: it is capable of generating computer code snippets in different programming languages, crafting tables and lists, and creating Excel formulas. While these outputs may require some editing and verification, ChatGPT undeniably serves as a valuable time-saving resource (Centre for Teaching and Learning, 2023); an illustrative example of such generated code follows at the end of this section. Overall, generative AI may support research, writing, and data analysis, which effectively opens new possibilities for educators and students, potentially unleashing their digital creativity. This technology may assist in the creation of inventive assessments that promote creativity and critical thinking skills, contributing to a comprehensive and meaningful evaluation of learning outcomes (Rasul et al., 2023). From the student perspective, AI-powered tools can assist in problem solving (Rudolph et al., 2023) and help foster the creative process (McMurtrie, 2023). Finally, generative AI may provide personalized learning opportunities, for instance by developing educational materials and content customized to a student's individual interests, abilities, and learning objectives, while also offering immediate feedback and advice, assisting students in rectifying mistakes and enhancing their study techniques (Javaid et al., 2023; Yu and Guo, 2023).

8. Summary
The rapid advancement and spread of generative AI and LLMs, most notably represented by ChatGPT 3.5, are a testament to the transformative nature of modern AI. Their multifaceted capabilities not only deliver heightened efficiency but also democratize AI, making it an accessible tool for diverse audiences. Their applications, ranging from simple text generation to intricate code creation, mark a new era where technology is both an aid and a collaborator.

One of the immediate impacts has been on education, and this paper outlines the challenges associated with a wide-scale adoption of LLMs, but also offers solutions and presents opportunities for institutions and educators to leverage the technology to enhance the learning experience. Firstly, HEIs must update their assessment offence policies and review their current assessment strategy. Secondly, they need to educate both students and staff to manage various ethical challenges, as it remains imperative to strike a rational balance, nurturing innovation while safeguarding ethical integrity. Thirdly, HEIs must explore use cases and the potential for generative AI to be integrated into their syllabi, offering students the opportunity to learn how to use LLMs in an academic context and, in the future, in professional settings. This involves equipping students with such skills through scaffolding and progressively developing AI skills throughout university courses. HEIs need to realise the scale of the AI transition and devote adequate resources to ensure staff AI readiness, prepare to make the necessary changes to their teaching, learning, and assessment practices, and assess the need to review module and programme learning outcomes. Generative AI is still a relatively novel technology; therefore, the question of how to successfully integrate it within higher education remains unanswered. The academic literature points to multiple opportunities; however, educators need to be given the autonomy and training necessary to start experimenting with generative AI. These efforts must be supervised, coordinated, and evaluated, involving an open dialogue with the relevant stakeholders. To achieve this, creating academic or management roles responsible for the transition should be considered.

From personalized lesson plans to interactive engagement with complex subjects, generative AI and LLMs have the potential to change traditional classroom settings. Moreover, the integration of generative AI models into the curriculum will help foster creativity, encourage critical thinking, and prepare students for a future where collaboration with AI is a common aspect of professional life. Whether it is aiding in research, enhancing writing skills, or facilitating innovative problem-solving, the educational landscape will undoubtedly be enriched by the capabilities brought forth by this technology.
9. References
Abdelghani, R., Wang, Y.-H., Yuan, X., Wang, T., Sauzéon, H., & Oudeyer, P.-Y. (2023).
ChatGPT for good? On opportunities and challenges of large language models for
education. Learning and Individual Differences. Available at:
https://www.sciencedirect.com/science/article/pii/S1041608023000195 (Accessed: 07 July
2023).
Atlas, S. (2023) 'ChatGPT for Higher Education and Professional Development: A Guide to
Conversational AI', Available at: https://digitalcommons.uri.edu/cba_facpubs/548 (Accessed:
15 July 2023).
Awdry, R. and Ives, B., 2022. International Predictors of Contract Cheating in Higher
Education. Journal of Academic Ethics, pp.1-20.
Azaria, A., Azoulay, R. and Reches, S., 2023. ChatGPT is a Remarkable Tool—For Experts.
(online) Available at: https://arxiv.org/pdf/2306.03102.pdf (Accessed 2 August 2023).
BBC News (2023) 'ChatGPT owner in probe over risks around false answers', BBC News.
Available at: https://www.bbc.com/news/business-66196223 (Accessed: 2 August 2023).
Bloomberg (2023a) 'The Terminal on Campus', Bloomberg Professional Services. Available
at: The Terminal on Campus | Bloomberg Professional Services (Accessed: 12 July 2023).
Bloomberg (2023b) 'Introducing BloombergGPT, Bloomberg’s 50-billion parameter large
language model, purpose-built from scratch for finance', Bloomberg LP. Available at:
Introducing BloombergGPT, Bloomberg’s 50-billion parameter large language model,
purpose-built from scratch for finance | Press | Bloomberg LP (Accessed: 12 July 2023).
Brimble, M., 2016. 'Why Students Cheat: An Exploration of the Motivators of Student
Academic Dishonesty in Higher Education', in Handbook of Academic Integrity. Springer, pp.
365-382.
Burger, B., Kanbach, D.K., Kraus, S., Breier, M. and Corvello, V., (2023). On the use of AI-
based tools like ChatGPT to support management research. European Journal of Innovation
Management, 26(7), pp.233-241. Available at: https://www.emerald.com/insight/1460-
1060.htm (Accessed: 11 July 2023).
Celik, I., 2023. Towards Intelligent-TPACK: An empirical study on teachers’ professional
knowledge to ethically integrate artificial intelligence (AI)-based tools into education.
Computers in Human Behavior, (online) 138. Available at:
https://doi.org/10.1016/j.chb.2022.107468 (Accessed: 11 July 2023).
Centre for Teaching and Learning, (2023). "Four lessons from ChatGPT: Challenges and
opportunities for educators." University of Oxford. Available at:
https://www.ctl.ox.ac.uk/article/four-lessons-from-chatgpt-challenges-and-opportunities-for-
educators (Accessed: 13 July 2023).
Center for Information Technology and Society, 2023. Why we fall for fake news. University
of California Santa Barbara. Available at: https://www.cits.ucsb.edu/fake-news/why-we-fall
(Accessed: 03 July 2023).
Cradle. (2023) 'ChatGPT webinar #1 - What do we need to know now?' (Video). YouTube.
Available at: https://www.youtube.com/watch?v=mCCqf6tHI24 (Accessed: 3 July 2023).
Creamer, E. (2023) 'Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their
books', The Guardian, 5 July. Available at:
https://www.theguardian.com/books/2023/jul/05/authors-file-a-lawsuit-against-openai-for-
unlawfully-ingesting-their-books (Accessed: 05 July 2023).
Dalalah, D., Dalalah, O.M.A. (2023). The false positives and false negatives of generative AI
detection tools in education and academic research: The case of ChatGPT. The
International Journal of Management Education, 21(2), 100822.
Department for Education, (2023). Ethics, Transparency and Accountability Framework for
Automated Decision-Making. GOV.UK. Available at:
https://www.gov.uk/government/publications/ethics-transparency-and-accountability-
framework-for-automated-decision-making/ethics-transparency-and-accountability-
framework-for-automated-decision-making (Accessed: 28 June 2023).
Dixit, P. (2023). 'Google faces lawsuit alleging data scraping and copyright violations for AI
development', Business Today, 12 July. Available at:
https://www.businesstoday.in/technology/news/story/google-faces-lawsuit-alleging-data-
scraping-and-copyright-violations-for-ai-development-389340-2023-07-12 (Accessed: 12
July 2023).
Feizi, S., Huang, F., 2023. Is AI-Generated Content Actually Detectable? College of
Computer, Mathematical, and Natural Sciences | University of Maryland. (online) Available
at: https://cmns.umd.edu/news-events/news/ai-generated-content-actually-detectable
(Accessed 4 July 2023).
Franzoni, V., 2023. From Black Box to Glass Box: Advancing Transparency in Artificial
Intelligence Systems for Ethical and Trustworthy AI. In: Computational Science and Its
Applications – ICCSA 2023 Workshops. ICCSA 2023. Lecture Notes in Computer Science,
vol 14107. Springer, Cham. Available at: https://link.springer.com/chapter/10.1007/978-3-
031-37114-1_9 (Accessed 4 July 2023).
Geerling, W., Mateer, G. D., Wooten, J., & Damodaran, N. (2023) 'ChatGPT has Mastered
the Principles of Economics: Now What?', SSRN. Available at:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4356034 (Accessed: 29 July 2023)
Gimpel, H., Hall, K., Decker, S., Eymann, T., Lämmermann, L., Maedche, A., Röglinger, M.,
Ruiner, C., Schoch, M., Schoop, M., Urbach, N. & Vandirk, S., 2023. Unlocking the Power of
Generative AI Models and Systems such as GPT-4 and ChatGPT for Higher Education A
Guide for Students and Lecturers. (online) Available at:
https://doi.org/10.13140/RG.2.2.20710.09287/2 (Accessed: 4 August 2023).
Gonsalves, D. (2023) 'On ChatGPT: what promise remains for multiple choice assessment?',
Journal of Learning Development in Higher Education, Issue 27, April 2023.
Gold, J. (2023). 'Adobe offers copyright indemnification for Firefly AI-based image app
users', Computerworld, 8 June. Available at:
https://www.computerworld.com/article/3699053/adobe-offers-copyright-indemnification-for-
firefly-ai-based-image-app-users.html (Accessed: 12 July 2023).
Gupta, S. and Chen, Y., 2022. Supporting Inclusive Learning Using Chatbots? A Chatbot-
Led Interview Study. Journal of Information Systems Education, (online) 33(1), pp.98-108.
Available at: https://aisel.aisnet.org/jise/vol33/iss1/11 (Accessed 1 August 2023).
Hartmann, J., Schwenzow, J. and Witte, M., 2023. The political ideology of conversational
AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv
preprint arXiv:2301.01768.
Heaven, W. D. (2023) 'ChatGPT is going to change education, not destroy it', MIT
Technology Review. Available at:
https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-
education-openai/ (Accessed: 30 July 2023).
Heikkilä, M., 2023. OpenAI’s hunger for data is coming back to bite it. MIT Technology
Review, (online) Available at:
https://www.technologyreview.com/2023/04/19/1071789/openais-hunger-for-data-is-coming-
back-to-bite-it/ (Accessed: 10 July 2023).
Hemsley, B., Power, E., & Given, F. (2023). Will AI tech like ChatGPT improve inclusion for
people with communication disability? The Conversation. https://theconversation.com/will-ai-
tech-like-chatgptimprove-inclusion-for-people-with-communicationdisability-196481
(Accessed: 11 July 2023).
Holmes, J. (2023) 'Universities warn against using ChatGPT for assignments', BBC News.
Available at: https://www.bbc.com/news/uk-england-bristol-64785020 (Accessed: 4 July
2023).
Chakraborty, S., Bedi, A.S., Zhu, S., An, B., Manocha, D., Huang, F. 2023. On the
Possibilities of AI-Generated Text Detection. arXiv preprint arXiv:2304.00010. (online)
Available at: https://arxiv.org/abs/2304.04736 (Accessed 4 July 2023).
Intelligent.com, (2023). Nearly 1 in 3 College Students Have Used ChatGPT on Written
Assignments. Available at: https://www.intelligent.com/nearly-1-in-3-college-students-have-
used-chatgpt-on-written-assignments/ (Accessed: 10 July 2023).
Javaid, M., Haleem, A., Singh, R.P., Khan, S., Khan, I.H., 2023. "Unlocking the opportunities
through ChatGPT Tool towards ameliorating the education system." BenchCouncil
Transactions on Benchmarks, Standards and Evaluations, 3(2), pp.100115. (Online)
Available at: https://www.sciencedirect.com/science/article/pii/S2772485923000327#b23
(Accessed: 10 July 2023).
Kaur, D. (2023) 'ChatGPT in Japan: concerns and issues', Tech Wire Asia. Available at:
https://techwireasia.com/2023/06/after-italy-japan-has-its-eyes-on-chatgpt-over-data-privacy-
concerns/ (Accessed: 2 August 2023).
Khowaja, S.A., Khuwaja, P., & Dev, K. (2023) 'ChatGPT Needs SPADE (Sustainability,
PrivAcy, Digital divide, and Ethics) Evaluation: A Review', ArXiv. Available at:
https://arxiv.org/pdf/2305.03123.pdf (Accessed: 2 August 2023).
Lancaster, T. (2023). Artificial intelligence, text generation tools and ChatGPT – does digital
watermarking offer a solution? International Journal for Educational Integrity. Available at:
https://edintegrity.biomedcentral.com/articles/10.1007/s40979-023-00131-6 (Accessed: 2
August 2023).
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased
against non-native English writers. Patterns. Available at:
https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7 (Accessed: 12 July 2023).
Mills, A. (2023a) 'AI Text Generators: Sources to Stimulate Discussion Among Teachers',
Google Docs. Available at: https://docs.google.com/document/d/1V1drRG1XlWTBrEwgGqd-
cCySUB12JrcoamB5i16-Ezw/edit#heading=h.qljyuxlccr6 (Accessed: 12 July 2023).
McCallum, S. (2023) 'ChatGPT accessible again in Italy', BBC News. Available at:
https://www.bbc.com/news/technology-65431914 (Accessed: 2 August 2023).
McKinsey Global Institute, (2023a). Generative AI and the future of work in America.
Available at: https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-
work-in-america (Accessed: 4 August 2023).
McMurtrie, B. (2023) 'How ChatGPT Could Help or Hurt Students With Disabilities', The
Chronicle of Higher Education. Available at: https://www.chronicle.com/article/how-chatgpt-
could-help-or-hurt-students-with-disabilities (Accessed: 30 July 2023).
McKinsey Global Institute, (2023b). The productivity potential of AI: A new framework for
estimating the potential of artificial intelligence to enhance human productivity. Available at:
https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/the-productivity-
potential-of-ai-a-new-framework-for-
Mollick, E. (2022) 'ChatGPT Is a Tipping Point for AI', Harvard Business Review. (online)
Available at: https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai (Accessed: 24 June
2023).
Molnar, K.K. and Kletke, M.G., 2012. Does the type of cheating influence undergraduate
students’ perceptions of cheating?. Journal of Academic Ethics, 10, pp.201-212.
Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G.M., Grundy, S., Lyden, S., Neal, P.
& Sandison, C. (2023) 'ChatGPT versus engineering education assessment: a
multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial
intelligence tool to investigate assessment integrity', European Journal of Engineering
Education, 48(4), pp. 559-614. DOI: 10.1080/03043797.2023.2213169.
Newton, P.M. and Essex, K., 2022. How common is cheating in online exams and did it
increase during the COVID-19 pandemic? A Systematic Review. Available at:
https://doi.org/10.21203/rs.3.rs-2187710/v1 (Accessed: 24 June 2023).
O'Brien, M. (2023). Chatbots sometimes make things up. Is AI’s hallucination problem
fixable? AP News. Available at: https://apnews.com/article/artificial-intelligence-hallucination-
chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4 (Accessed: 31 June
2023).
Orru, G., Piarulli, A., Conversano, C., & Gemignani, A. (2023). Human-like problem-solving
abilities in large language models using ChatGPT. Frontiers in Artificial Intelligence.
Available at: https://www.frontiersin.org/articles/10.3389/frai.2023.1199350/full (Accessed: 1
August 2023).
OpenAI (2023b) GPT-4. Available at: https://openai.com/research/gpt-4 (Accessed: 1 August
2023).
Rigby, D., Burton, M., Balcombe, K., Bateman, I. and Mulatu, A., 2015. Contract cheating &
the market in essays. Journal of Economic Behavior & Organization, 111, pp.23-37.
Reuters. (2023). "ChatGPT sets record for fastest-growing user base - analyst note".
Reuters. Available at: https://www.reuters.com/technology/chatgpt-sets-record-fastest-
growing-user-base-analyst-note-2023-02-01/ (Accessed: 15 June 2023).
Sabzalieva, E. and Valentini, A. (2023) 'ChatGPT and Artificial Intelligence in higher
education: Quick start guide', UNESCO International Institute for Higher Education in Latin
America and the Caribbean (IESALC). Available at: https://www.iesalc.unesco.org/wp-
content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-
Start-guide_EN_FINAL.pdf (Accessed: 24 June 2023).
Sansom, C. (2023) 'Sustaining Innovation in Research: Innovations and Issues around
Generative AI', University of London. Available at: https://www.london.ac.uk/centre-online-
distance-education/blog/generative-ai (Accessed: 29 June 2023).
Shin, R. (2023) Humiliated lawyers fined $5,000 for submitting ChatGPT hallucinations in
court: ‘I heard about this new site, which I falsely assumed was, like, a super search engine’.
Fortune. Available at: https://fortune.com/2023/06/23/lawyers-fined-filing-chatgpt-
hallucinations-in-court/ (Accessed: 31 July 2023).
Smith, D. P. and Francis, N. J. (2023). Generative AI in assessment. Available at:
https://network.febs.org/posts/generative-ai-in-assessment (Accessed: 12 September 2023).
The British Academy, (2022). 'AI and work: evidence synthesis', The British Academy, p. 28.
Available at: AI and work: evidence synthesis (Accessed: 5 August 2023).
The Verge, 2023. Google’s AI chatbot Bard makes factual error in first demo. The Verge.
Available at: https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-
error-exoplanet-demo (Accessed: 2 August 2023).
Ventayen, R.J.M. (2023) 'OpenAI ChatGPT Generated Results: Similarity Index of Artificial
Intelligence-Based Contents', Advances in Intelligent Systems and Computing. Available at:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4332664 (Accessed: 9 July 2023).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., &
Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing
systems (pp. 5998-6008).
Weinberger, D., 2019. How machine learning pushes us to define fairness. Harvard
Business Review, 11, pp.2-6.
Wiggers, K. (2023) 'OpenAI connects ChatGPT to the internet', TechCrunch, 23 March.
Available at: https://techcrunch.com/2023/03/23/openai-connects-chatgpt-to-the-internet/
(Accessed: 5 July 2023).
Wood, P. (2023) 'Oxford and Cambridge ban ChatGPT over plagiarism fears but other
universities embrace AI bot', iNews. Available at: https://inews.co.uk/news/oxford-
cambridge-ban-chatgpt-plagiarism-universities-2178391 (Accessed: 4 July 2023).
Wu, X., Duan, R. & Ni, J. (2023). Unveiling Security, Privacy, and Ethical Concerns of
ChatGPT. Department of Electrical & Computer Engineering, Queen’s University, Kingston,
Canada. Available at: https://arxiv.org/pdf/2307.14192.pdf (Accessed 3 August 2023).
Yu, H., Guo, Y. (2023). Generative AI in Education: Current Status and Future Prospects. In
Frontiers in Education. Retrieved from
https://www.frontiersin.org/articles/10.3389/feduc.2023.1183162/full#
Zou, A., Wang, Z., Kolter, J.Z. and Fredrikson, M., 2023. Universal and Transferable
Adversarial Attacks on Aligned Language Models. (online) Available at:
https://doi.org/10.48550/arXiv.2307.15043 (Accessed: 1 July 2023).
