
Technological Forecasting & Social Change 199 (2024) 123076

Contents lists available at ScienceDirect

Technological Forecasting & Social Change


journal homepage: www.elsevier.com/locate/techfore

The effects of artificial intelligence applications in educational settings: Challenges and strategies
Omar Ali a, Peter A. Murray b, Mujtaba Momin c, Yogesh K. Dwivedi d, e, *, Tegwen Malik f

a College of Business and Entrepreneurship, Abdullah Al Salem University, Kuwait
b University of Southern Queensland, Toowoomba, Qld 4350, Australia
c College of Business Administration, American University of the Middle East, Egaila 54200, Kuwait
d Digital Futures for Sustainable Business & Society Research Group, School of Management, Swansea University, Bay Campus, Fabian Bay, Swansea, UK
e Symbiosis International (Deemed University), Pune, Maharashtra, India
f School of Management, Swansea University, Swansea SA1 8EN, UK

A R T I C L E  I N F O

Keywords: ChatGPT; Artificial intelligence; Challenges; Strategies; Education sector

A B S T R A C T

With the continuous intervention of AI tools in the education sector, new research is required to evaluate the viability and feasibility of extant AI platforms to inform various pedagogical methods of instruction. The current manuscript explores the cumulative published literature to date in order to evaluate the key challenges that influence the implications of adopting AI models in the education sector. The researchers present works both in favour of and against AI-based applications within the academic milieu. A total of 69 articles from a 618-article population was selected from diverse academic journals between 2018 and 2023. After a careful review of the selected articles, the manuscript presents a classification structure based on five distinct dimensions: user, operational, environmental, technological, and ethical challenges. The current review recommends the use of ChatGPT as a complementary teaching-learning aid, including the need to afford customized and optimized versions of the tool for the teaching fraternity. The study addresses an important knowledge gap as to how AI models enhance knowledge within educational settings. For instance, the review discusses inter alia a range of AI-related effects on learning, from the need for creative prompts, training on diverse datasets and genres, and incorporation of human input, to data confidentiality and the elimination of bias. The study concludes by recommending strategic solutions to the emerging challenges identified, while summarizing ways to encourage wider adoption of ChatGPT and other AI tools within the education sector. The insights presented in this review can act as a reference for policymakers, teachers, technology experts and stakeholders, and facilitate the means for wider adoption of ChatGPT in the education sector more generally. Moreover, the review provides an important foundation for future research.

* Corresponding author at: Digital Futures for Sustainable Business & Society Research Group, School of Management, Swansea University, Bay Campus, Fabian Bay, Swansea, UK.
E-mail addresses: [email protected] (O. Ali), [email protected] (P.A. Murray), [email protected] (M. Momin), [email protected] (Y.K. Dwivedi), [email protected] (T. Malik).

https://doi.org/10.1016/j.techfore.2023.123076
Received 16 June 2023; Received in revised form 1 December 2023; Accepted 2 December 2023; Available online 14 December 2023.
0040-1625/© 2023 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

1. Introduction

Educational and academic practices have been exposed to significant and far-reaching technological advancements in recent times, no better exemplified than by the recent intervention of Artificial Intelligence (Tuomi, 2018). The swift technological research and embedded innovation in machine learning sciences has accelerated the introduction of language generation models (Dwivedi et al., 2021). This has further led to the advancement of content generation technologies and innovation pertaining to digital content development and script development using embedded AI technologies such as the ChatGPT generative model (Hu, 2023). Progression and integration of deep learning (DL) and reproducible AI technologies has led to the creation of digital artifacts and relics which systematically integrate audio-visual inputs, movable graphics and other digital and script commands. This is achieved by duly scrutinizing training inputs and synchronizing between various data patterns and designs (Abukmeil et al., 2021; Gui et al., 2023).

Contemporary published literature has acknowledged two main generative AI technologies: the Generative Adversarial Network (GAN) and the Generative Pre-trained Transformer (GPT) (Vaswani et al., 2017; Abukmeil et al., 2021; Brown et al., 2020; Hu, 2023; Gui et al., 2023). Presently, GAN is a GAI-enabled technology that uses dual neural networks (Karras et al., 2021). Whereas the discriminator network aids in evaluating the genuineness and authenticity of the generated content, the generator network, which is an assemblage of GPT and GAN, can generate complex data such as the graphics of a human face. This iterative verification and the corroborative protocols continue until the discriminator network can no longer discriminate between the synthetic and the real content. The synthetic is then acknowledged as genuine and authentic (Jovanović and Campbell, 2022). GAN technology is primarily reliable for generation, graphics, and video (Hu, 2023). Generative-modeling artificial intelligence (GAI) such as ChatGPT is an unmonitored or moderately monitored machine learning framework that integrates manmade content artifacts with the intervention of statistics and probabilities (Jovanović and Campbell, 2022). At its most basic however, what is completely unclear with the revelation of generative AI models is how they can be used in ways that are not only innovative, but also safe, ethical, and reliable (Jain et al., 2023). These important oversights in AI generative model innovation suggest that scholars have stopped short in reviewing the assortment of challenges that can be identified in extant research, particularly within educational settings. With these facts in mind, this review paper has the following objectives. First, the authors appraise the existing literature by identifying complex patterns and challenges that remain unresolved in the science and practice of ChatGPT generative models. Second, the manuscript evaluates the for and against arguments in using AI generative models.

Generative AI models, and ChatGPT in particular, use Natural Language Processing (NLP) to recite and yield human-like transcripts in diverse dialects. These dialects are enabled to exhibit creative content while scripting texts. The AI platform is enabled to create voluminous content, from a few lines, to ballads and couplets, to a complete research article. Such content is convincing in almost all the themes that have substantial content on web and public platforms. Additionally, these models are empowered to engage clients in conversations resembling human dialogue; illustrations include customer-support chatbots or fictitious charismatic plots in computerized/electronic games (Pavlik, 2023; Rese and Tränkner, 2024). A much more erudite, better-trained, and advanced GPT-3 has been introduced recently (Brown et al., 2020). This AI version has 175 billion parameters (Cooper, 2021), wherein it can boost task-specific and objective features that can become highly efficacious through modern calibration (Brown et al., 2020). Brown et al. (2020) opined that GPT-3 is ten-fold more sophisticated compared to any preceding non-sparse language model. This version is developed as the foundational NLP engine that improves the earlier language-enabled model of ChatGPT, which has fascinated many diverse fields of ontology, inter alia from academic education (Qu et al., 2022; Williams, 2023), to engineering (Qadir, 2022), to broadcasting and journalism (Pavlik, 2023), across different fields of medicine (García-Peñalvo et al., 2020), and in many business domains related to money transactions, finance, and economics (Fallahi et al., 2022; Alshater, 2022; Terwiesch, 2023).

Sizable language models such as GPT-3 garner substantial progressions in NLP, where prototypes are proficient in processing colossal transcripts and script data that can yield texts, answers, questions, and a bouquet of script-related tasks; these outcomes are achieved with proficiency and intelligence similar to that of a human being (Floridi and Chiriatti, 2020). Notably, key developments in the sphere of transformer architectures and their usage (Devlin et al., 2018; Tay et al., 2023), and fundamental responsive machinery (Vaswani et al., 2017), significantly enrich the capacity of auto-regressive, self-controlled language schemas to leverage long-term adjuncts in natural-language scripts. The transformer architecture used in GPT-3 (Vaswani et al., 2017) relies on the self-attention apparatus to weigh the contribution of the whole input sequence while generating predictions (a minimal illustration of this computation is sketched at the end of this section). The architecture thus empowers the model to enhance the association among texts, their articulation to a context and the script, irrespective of their locus and location.

Additionally, a significant structural progress is the practice of primarily training the model system on a considerable dataset before calibrating it for a particular task. This pre-training has been pivotal for enhancing the functioning of an array of linguistic syntactical functions (Hughes et al., 2016; Alzubaidi et al., 2021). Moreover, Bidirectional Encoder Representations from Transformers (BERT) is a pre-trained transformer-based encoder model commissioned on distinct and diverse NLP tasks, with the capability of performing sentence classification, question answering, and named entity recognition (Devlin et al., 2018). Indeed, GPT-3 and ChatGPT comprise contemporary evolutions and specific advancements where they have been instructed on much larger datasets and data availability. Advancements include scripts and amassing information from the web, which have proven to be efficient on a spectrum of natural-language tasks oscillating from question-answering to scripting and writing comprehensive essays, based on the nature and peculiarity of commands received (Floridi and Chiriatti, 2020). Furthermore, contemporaneous functions have aimed at calibrating these NLP technologies on smaller datasets where transfer learning applications have been rendered to new pertinent challenges (Kasneci et al., 2023; Baidoo-Anu and Ansah, 2023).

The recent past has witnessed the advancement and adoption of large language models. However, the advancement in AI tools foregrounds the embedded challenges of these technologies (Dwivedi et al., 2023a; Dwivedi et al., 2023b; Kasneci et al., 2023; Kshetri et al., 2023; Baidoo-Anu and Ansah, 2023; Richey Jr et al., 2023). Some of these include the inability to decipher the complex and challenging nexus of predictions made by these models in the background. Further, moral contagions embody these complex systems, which exhibit both predictable and unprecedented consequences across diverse contexts and industry milieus. For instance, the abuse of AI technology for immoral and unethical purposes has to be systematically anticipated and the consequences taken into consideration in model design. Taken together, such technologies will broaden the horizons, applications, relevance, and recognition of NLP. However, there has to be a systematic intervention addressing these challenges and related ethical considerations. This becomes increasingly germane when applying AI tools as learning aids for increasing know-how within relevant academic fields. Scholars suggest that consolidated and synergized research by the academic and professional fraternity is required to address such ethical and application-oriented challenges (van Dis et al., 2023). While some literature is generated in the public domain (viz. open forum posts), third-party information is unreliable and unauthentic. Thus, unanimous scrutiny of the AI concept and its consequences can only be accepted when it is an outcome of empirical and systematic research deliberations. Similar to earlier language-driven models, the current review identifies a number of research gaps and challenges that need to be explored before ChatGPT users can be confident about the knowledge produced. Following a detailed and thorough review of contemporary scholarly studies, this review asks the following main research question: What are the key challenges of harnessing ChatGPT NLP applications in the education sector and what strategies can be implemented to address them?

The manuscript is structured as follows. Section 2 explores the background and the technological features of ChatGPT. This is followed by Section 3, where the authors outline a detailed discussion of the research methods used to conduct the review. Next, the challenges to the technology are discussed in Section 4. In Section 5, various educational strategies are identified that help to address the challenges presented. Future research directions are discussed in Section 6, including recommendations by which the scientific community can better support the progression of generative models. The limitations of the study and the conclusion to the review are discussed and outlined respectively in Section 6.
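As a brief aside, the self-attention computation referred to above can be illustrated with a minimal sketch. This example is not taken from the paper or from GPT-3 itself; the single attention head, dimensions and random matrices are simplifying assumptions made purely for illustration.

```python
# Minimal, illustrative single-head scaled dot-product self-attention (NumPy).
# All sizes and weights are arbitrary placeholders, not values from GPT-3.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token vectors; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # pairwise token similarities
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ V                          # every output mixes all positions

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (5, 4): one vector per token
```

Because the attention weights couple every position with every other position, the output for each token can draw on the whole input sequence, which is the property highlighted above for transformer-based language models.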


2. Related literature

2.1. Artificial intelligence overview

AI or machine intelligence is an area of computer science where machines are programmed with the ability to perform intelligent tasks that are usually undertaken by humans (Dwivedi et al., 2023c; Tsang et al., 2020; Ali et al., 2023; Pan and Nishant, 2023). Computers and machines use AI techniques to understand, analyze, and learn from data through specifically designed algorithms (Sasubilli et al., 2020; Richey Jr et al., 2023). For example, with modern AI technologies, cameras can automatically recognize faces, while computers can translate from one language to another (Sasubilli et al., 2020). AI has been founded as an academic discipline since the 1950s and, since then, it has been significantly researched in areas such as NLP, learning, reasoning and various knowledge domains. More recently, AI has been transformed with the expansion of its research beyond computer science, with recent developments drawing from broad areas such as psychology, linguistics, and philosophy (Ali et al., 2023). Consequently, AI has been applied in various areas such as education, e-commerce, robotics, navigation, healthcare, agriculture, military, marketing and gaming consoles. More specifically, widely adopted AI applications include search engines such as Google, recommender systems such as Netflix, self-driving cars such as Tesla, and human speech recognition systems such as Siri and Alexa. In general, AI methods can be broadly categorized in these areas as machine learning (Bernardini et al., 2021), robotics and NLP (Murray et al., 2019), computer vision (Jahan and Tripathi, 2021), and big data (Hossen and Karmoker, 2020).

Classification and clustering are two major techniques used in AI machine learning. Both algorithms use data such as numbers, text, images and videos as input (Jahan and Tripathi, 2021). Classification algorithms (such as neural networks, decision trees and Bayesian networks) use huge amounts of data as training datasets. There are two types of classification algorithms: supervised and unsupervised learning (Uddin et al., 2019). Supervised learning uses labeled data vectors during training; by contrast, unsupervised learning algorithms do not use labels. Both methods use class labels during the testing phase. In machine learning, clustering algorithms are used for unsupervised learning and do not need any class label data, whereas prediction algorithms are trained using historical data to develop forecasting models (Libbrecht and Noble, 2015). Several algorithms are used in classification, clustering and prediction (Elbasi et al., 2021).

AI and its sub-areas such as robotics, the Internet of Things (IoT), and machine learning can have significant impacts on society. AI technology can improve human life quality, making life easier, safer and more productive (Chaturvedi et al., 2023; Malik et al., 2021; Hradecky et al., 2022). There are several application areas of AI that make human life easier, such as face recognition for security, automation for industry, NLP for translation, and robotics for homes (Herath and Mittal, 2022). AI has transformed our society to move into the Industry 4.0 revolution due to the IoT, cloud computing, robotics, cyber physical systems and machine-to-machine communication (Votto et al., 2021). When used effectively, smart automation and interconnectivity can allow people to save time, manage work flexibly and increase collaboration (Ahsan and Siddique, 2022).

2.2. Generative AI types

Generative AI can be defined as a technology that (1) leverages deep learning models to (2) generate human-like content (e.g., images, words) in response to (3) complex and varied prompts (e.g., languages, instructions, questions) (Lim et al., 2023). Generative AI models are AI platforms that generate a variety of outputs based on massive training datasets, neural networks and deep learning architecture, and prompts from users (Nirala et al., 2022). Depending on the type of generative AI model, various models can generate images, translate text into image outputs and vice-versa, synthesize speech and audio, create original video content, and generate synthetic data (Porkodi et al., 2022). Although there are many different subsets and new formats of generative AI models emerging, the two primary designs are generative adversarial networks (GANs) and transformer-based models. With a GAN, the components of the model include two different neural networks: the generator and the discriminator. The generator compiles content based on user inputs and training data, while the discriminator model evaluates generated content against "real" examples to determine which output is real or accurate (Gonog and Zhou, 2019). With the transformer-based model, encoders and/or decoders are built into the platform to decode the tokens or blocks of content that have been segmented based on user inputs (Li et al., 2022).

The primary difference between generative and discriminative AI models is that the former can create new content and outputs based on their training (Qadir, 2023). Discriminative modeling, on the other hand, is primarily used to classify existing data through supervised learning (Van Engelen and Hoos, 2020). As an example, a protein classification tool would operate on a discriminative model, while a protein generator would run on a generative AI model. Generative models are designed to create something new, while predictive AI models are set up to make predictions based on data that already exists. Continuing with the example above, a tool that predicts the next segment of amino acids in a protein molecule would work through a predictive AI model, while a protein generator requires a generative AI model approach (Thomas et al., 2023).

In addition, another AI model, the Variational Autoencoder (VAE), is used for text and audio generation; a VAE is a generative model that encodes data into an embedded space and then decodes it to reconstruct the original content. VAE models use distinct probabilistic combinations of input data to generate new content (Yadav et al., 2021). Similarly, Autoregressive Models (ARM) generate one unit element at a time, deriving cues from the earlier generated element (Bai et al., 2021). This regressive, one-unit-at-a-time function aids in creating contextual and yet coherent content (GPT is one of these types). The Recurrent Neural Network (RNN) is also an AI model that processes sequential data by predicting the next unit element from the previous element; unlike ARM, they are neural networks and lack the potential to generate long sequences of data; functional improvements are currently being developed to overcome the limitations of RNN (Chen et al., 2019). Transformer-based models have gained wider acceptance since, unlike RNN, they can handle long output sequences of data by creating elaborate, coherent and contextual content. Flow-based generative models have the capacity to portray the data distribution by inverting the metamorphosis between the prompt and generated output. These models aid in generating data as well as density estimation of the data generated.
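As an illustrative sketch only (not code from any of the systems reviewed), the generator/discriminator interplay described earlier in this subsection can be expressed as a single GAN training step. The network sizes, optimizers, and the random batch standing in for real data are assumptions chosen simply to keep the example self-contained.

```python
# Minimal, illustrative GAN training step (PyTorch); sizes, optimizers and the
# random "real" batch are placeholder assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 64
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(batch, data_dim)            # stand-in for a batch of real samples
noise = torch.randn(batch, latent_dim)

# Discriminator step: learn to label real samples 1 and generated samples 0.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: produce samples the discriminator scores as real.
fake = G(noise)
loss_g = bce(D(fake), torch.ones(batch, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

In practice the two networks are trained over many such alternating steps until the discriminator can no longer reliably separate generated content from real examples.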


It is very important to acknowledge the power of generative AI with its related concepts. In line with our definition, it is worth noting that generative AI has the unique ability to not only provide a response but also generate the content in that response, going beyond the human-like interactions in conversational AI (Lim et al., 2022). In addition, generative AI can create new responses beyond its explicit programming, whereas conversational AI typically relies on predefined responses. However, not all generative AI is conversational, and not all conversational AI lacks the ability to generate content (Lim et al., 2023). Augmented AI models, such as ChatGPT, combine both generative and conversational AI to enhance their capabilities (Dwivedi et al., 2023a). Additional background details about ChatGPT are discussed next.

2.3. Generative AI models and background to ChatGPT

Generative AI is a distinct class of AI and an incredibly powerful technology that has been popularized by ChatGPT (Lim et al., 2023). Open Artificial Intelligence (OpenAI) is the source of ChatGPT, which is a format of a large language model (Abdullah et al., 2022) structured to synthesize human-like text based on an array of input commands. ChatGPT is useful for a spectrum of Natural Language Processing assignments such as script generation, comprehension of those scripts and conversations, and their translation (Kirmani, 2023). ChatGPT achieved one million users within a week of its release on November 30, 2022 (Altman, 2022; Mollman, 2022; Hu, 2023), surprising users with the degree of sophistication and human-like intelligence exhibited on specific prompts and commands. This gathered the attention of social media, news and research-oriented platforms like Nature (Stokel-Walker, 2022; Metz, 2022). The technology can process multiple, complex and comprehensive tasks, including drafting articles (Stokel-Walker, 2022), summarizing content, scripting to address distinct criteria, and performing specialized functions such as drafting and de-bugging computer code. This has led to many eclectic responses from experts in academic and educational settings, as the application has not only been disruptive but revolutionary in terms of its scholarly and pedagogical capacity (Williams, 2023). At its most basic, the program has dramatically increased user ability to create knowledge and the means by which it is accessed (Lucy and Bamman, 2021). As the authors discuss below however, opponents are vocal that the knowledge produced creates many ethical dilemmas, thus compromising human ingenuity and the quality of the teaching and learning process (Williams, 2023).

As discussed, the presence of diverse generative AI aids in the creation of image, audio, text and visual content; this has increased the usefulness of AI to academics as well as practitioners of diverse domains. That is, diverse AI tools with their embedded functional and operational utility have the latent ability to create credible content within a fraction of the time taken in traditional learning models (Kar et al., 2023). They can also be customized to initiate sequential prompts. While current generative AI aids may look revolutionary, their origin can be attributed to the 1960s when chatbots were invented. Mostly, AI models became pervasive around 2014 with the inception of GANs, which are a typology of machine learning algorithms (Behrad and Abadeh, 2022). This breakthrough technology and its application in diverse fields has led to its adoption in a variety of technical, intellectual, business and operational contexts, e.g., the movie industry and academic writing inter alia. Transformer technology, which is based on machine learning, has revolutionized learning by enabling the training of AI models on colossal content without the necessity of labelling the content in advance. These transformer-aided models create a nexus of prompts across pages, datasets, books and input chapters, compared to their predecessors, which were restricted to searching for sentences and words. Thus, AI models have the ability to revolutionize nascent fields like biotechnology, for instance, since they can create connections across bio-codes, bio-chemicals, proteins and DNA strands. Large Language Models, of which ChatGPT is a type, have unearthed a plethora of generative AI models that can create images, audio and visual content from trivial prompts (Kumar, 2023).

ChatGPT materializes a transformer architecture, a computer-enabled neural nexus that accentuates the NLP abilities of an artificial system. This architecture is connected to a large input dataset and is integrated to devise transcripts based on its data source through creative blending (Baidoo-Anu and Ansah, 2023). The application processes the input command and furnishes an output one unit of transcript at a time. Each output unit is attached to the preceding unit-output in the string, in connection with the linkage prompt/command that was rendered (a simplified sketch of this unit-by-unit generation appears at the end of this section). The application materializes attention machinery to focus on the most relevant segments of the input to produce an output that is intelligent, comprehensive and customized to the input received. ChatGPT can be seasoned and made comprehensive on specialized actions like contextual dialogue-generation or query-resolution systems by offering an additional command of task-specific input, improvising it for the specific NLP application. The mechanism can be configured for distinct dialects and vernaculars by customizing the input datasets or by prototyping it with specific language computer codes (Kasneci et al., 2023).

The COVID-19 epidemic has been highly disruptive in educational settings in relation to how educational content was delivered, with most major academic institutions transitioning to electronic, remote, and online learning to conform to social distancing guidelines (Chatzipanagiotou and Katsarou, 2023). Significant change to this extent was a disruptive shift to digital and online pedagogy, as institutions and the international community mandated quick adoption of the technology-enabled teaching-learning process in the face of increasing adversity (Coghlan et al., 2021; Henderson et al., 2022). The pandemic had effectively curtailed the face-to-face learning system, meaning a major transition and unprecedented embrace of technology were required in the teaching-learning space. This has included the acceptance of online virtual conversational platforms such as Google Classroom, Teams, Zoom, and other different video tools (e.g. online conferences), which extended to other pedagogical tools like e-books, videos, and interactive activities (Chatzipanagiotou and Katsarou, 2023). Use of sophisticated learning management systems, like Moodle and Google Suite, has further empowered the teaching-learning fraternity with new teaching aids. Thus, the advent of different e-learning platforms has revolutionized the delivery of education, which has had to become more agile to invite the participation of students, including remotely.

Moreover, the global pandemic invited the need for more self-dependence and asynchronous learning systems. Here, learners now require substantial independence in how they learn and the speed at which they learn. Artificial intelligence and generative models such as ChatGPT account for the new requirements and convenience of learning and, at least in theory, can embody a learner's socio-cultural background. However, this AI transition has magnified many pedagogical issues related to quality, including the digital divide between those who have access to sophisticated technology and those who don't (Cain, 2023). Generative AI models have brought to the surface other drawbacks as well, such as restricted interaction, a dearth of academic readiness, and issues of ethics and poor accountability (Baidoo-Anu and Ansah, 2023; Nguyen et al., 2023; Yan et al., 2023; Stahl and Eke, 2024). Holistically, the pandemic catalyzed the adoption of technology in education while underscoring problems of learner-inclusivity and accessibility (Chatzipanagiotou and Katsarou, 2023). These facts have brought a greater focus on global education systems. For instance, the education system is invested with the responsibility to perennially adapt, evolve and bridge the gap between stakeholder (student, teacher, and parent) needs during challenging times (Schiff, 2021), yet many challenges remain in adopting generative AI models such as ChatGPT within educational settings more generally.

While these advancements may seem to be 'breakthroughs', generative AI is still in a nascent stage. Similar to any breakthrough technology, the introduction of AI models embodies many biases, data accuracy problems, cognitive hallucinations and failures. While the progression of AI has the potential to revolutionize many diverse fields of ontology, such as educational settings and research domains more generally, managing the quality of outputs remains a work in progress, given how generative models can create chatbots, deep fakes, movie dubbing, scripted emails/formal content, art/videos and others.

In summary, the introduction of ChatGPT grew remarkably at the juncture of the receding pandemic, with its innovative features advancing quickly into many industries and related fields such as education settings. However, as the technology has evolved, many new challenges and ethical anomalies can now be identified, bringing to the forefront emerging paradoxes that are not easily solved. Our discussions thus far suggest that generative AI tools are disruptive yet represent a transformative technological intervention. However, generative AI tools such as ChatGPT are currently being debated and scientifically explored as technology experts and advanced users consider ways that they can be universally adopted.
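To make the unit-by-unit generation described above concrete, here is a deliberately simplified, illustrative sketch. The hard-coded bigram table is a stand-in assumption, not how ChatGPT is implemented; it only shows how each new unit is conditioned on the preceding output and appended to it.

```python
# Toy autoregressive generation: each unit is predicted from the preceding unit
# and appended to the running output. The bigram table is a placeholder for a
# trained language model and is purely illustrative.
BIGRAMS = {
    "<s>": "students", "students": "ask", "ask": "questions",
    "questions": "about", "about": "generative", "generative": "AI",
}

def generate(start="<s>", max_units=10):
    output = [start]
    for _ in range(max_units):
        nxt = BIGRAMS.get(output[-1])      # condition only on the preceding unit
        if nxt is None:                    # no continuation known: stop
            break
        output.append(nxt)                 # attach the new unit to the output so far
    return " ".join(output[1:])

print(generate())  # "students ask questions about generative AI"
```

In an actual large language model, the table lookup is replaced by a learned probability distribution over a large vocabulary, computed by the transformer from the entire sequence generated so far.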


In what follows, the authors delve into the key challenges that threaten their adoption, with the discussions focused mainly on the education sector.

3. Research methodology

Watson (2015) and Ali et al. (2018) were the seminal papers in the area for conducting systematic and scoping reviews. Here, the protocols and processes for identifying, selecting, and evaluating the literature have been established, bringing to light the relevance of specific research parameters. The researchers have carefully constructed the review process in such a way that it is highly resourceful (Tranfield et al., 2003), systematic, independent and rigorous (Boell and Cecez-Kecmanovic, 2015). Following the reviews of Kitchenham and Charters (2007) and Ali et al. (2018; 2020b), the current manuscript flows through the stages of planning, execution, and summarizing, where a detailed explanation is next outlined.

3.1. Planning stage

First, a planning stage consolidates the need for a review and a study of this magnitude for the development of an area of discipline. Despite studies on critical challenges in using ChatGPT, academic investigations and systematic reviews about this generative AI tool have been underdeveloped. Consequently, the current paper entails a comprehensive investigation into the existing literature and information on the effects of generative AI and ChatGPT in extant research and practice. Second, the planning stage enabled the researchers to identify the research question: What are the key challenges of harnessing ChatGPT NLP applications in the education sector and what strategies can be implemented to address them?

The researchers augmented the automated search strategy with a manual intelligible review process. The initial stage included a search-engine automated exploration in diverse electronic databases and repositories. Subsequently, a manual analysis of assorted publications was made (Golder et al., 2014). Based on research terms, the researchers scanned Science Direct, Web of Science, IEEE, Emerald, Scopus, and the ACM digital library to collect relevant data for the review process. Moreover, a systematic assemblage of methods was used to filter and restrict non-relevant research publications (McLean and Antony, 2014). The manual process required the researchers to read each article's title and abstract (Golder et al., 2014), followed by a complete reading of the article to ascertain its relevance to the scope of research outlined across the key themes (Ali et al., 2018). More details are represented in Table 1.

The review protocol served as a foundation for developing both practical and theoretical views on generative AI models. The process then led to a review of an initial content classification model (Ngai and Wat, 2002), where articles were clustered and catalogued and a framework developed. The structure entailed the process of packeting research themes and identifying crucial aspects of the key challenges of using ChatGPT in the education sector. For instance, many key challenges started to emerge from the process, such as poor human-AI interface, restricted understanding, bias in training input-data, the stifling of creativity, data privacy and security, cost of training and maintenance, and sustainable usage. Each of these challenges is detailed later in this review.

3.2. Execution stage

For this phase, the planning phase was used to filter relevant articles for the staged review process. The methodology sequenced for the review study included: (1) identifying the search terms and text in a perennial format, which delved into using exclusive and distinctive technical terms recognized in the sphere of research (Hu and Bai, 2014); the keywords identified were ("challenge(s)" OR "issue(s)" OR "barrier(s)" OR "obstacle(s)" OR "consideration(s)") AND ("ChatGPT" OR "AI" OR "NLP") AND ("education" OR "university" OR "school"); (2) the database was further scrutinized by filtering tools to enhance the relevance of the search yield, with a temporal constraint between 2018 and 2023 (Zhang et al., 2014); (3) following this, a manual check included scanning the title and abstract to further specify the configuration of the results (Pucher et al., 2013); (4) articles screened in Stage 3 underwent detailed analysis comprising reading the full-text article, where the researchers filtered and distinguished between relevant knowledge, information, and theory related to the discipline under investigation (Shea et al., 2007); (5) finally, a quality assessment standard was applied to ensure that all the research articles screened up to and including Stage 4 were relevant and contributed to the formulation of this review manuscript (Hu and Bai, 2014). The quality evaluation comprised the creation and acceptance of quality evaluation criteria to warrant that the screened papers qualified for the minimum quality standard (Hu and Bai, 2014). Taken together, the criteria that were adapted from Sadoughi et al. (2020) and Ali et al. (2018, 2021) included: (1) a statement of research objectives, (2) that the embedded research questions and challenges were stated sequentially, (3) that review data was described and made available, (4) that a comprehensive description of the research method, and a substantial explanation of its presentation and execution, was available, and (5) that the research outcome was relevant to the research questions. Comprehensive details of the research article selection stages and the results are illustrated in Table 1.

Table 1. Stages of article selection and results.
Stage 1: Search the literature using specific terms or keywords. Actions: identification of search keywords (challenges; strategies; ChatGPT; AI; education sector). Result: 618 articles.
Stage 2: Applied filtering tools within the database. Actions: apply database filters (language; year of publication; area of study). Result: 233 articles.
Stage 3: Exclusion of articles based on their title and abstract. Actions: reading title and abstract (review the title; review the abstract). Result: 124 articles.
Stage 4: Exclusion of articles based on full-text review. Actions: reading full articles (review the whole article). Result: 87 articles.
Stage 5: Exclusion of articles based on their quality. Actions: quality evaluation (research objectives; research questions; research problem; research data used; adopted study methodology; research results and outcomes). Result: 69 articles.
Total articles accepted (based on the 5 stages): 69.

3.3. Summarizing stage

The review was undertaken from February 2nd, 2023, to April 3rd, 2023, following the sequence and stages represented in Stage 1. The preliminary database search yielded a total of 618 articles. Following the review process, however, only a total of 69 articles were considered for the review, as illustrated in Table 2.
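Before turning to the categorization framework in Table 2, the staged screening reported in Table 1 can be recapped in a short, purely illustrative sketch; the stage labels are paraphrased and the counts are those reported in this section.

```python
# Illustrative recap of the five screening stages and counts reported in Table 1.
stages = [
    ("Stage 1: keyword search across the six databases", 618),
    ("Stage 2: database filters (language, 2018-2023, area of study)", 233),
    ("Stage 3: title and abstract screening", 124),
    ("Stage 4: full-text review", 87),
    ("Stage 5: quality evaluation", 69),
]

previous = None
for label, remaining in stages:
    removed = "" if previous is None else f" (removed {previous - remaining})"
    print(f"{label}: {remaining} articles{removed}")
    previous = remaining
```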

Table 2
Categorization framework (Domain: ChatGPT Challenges).

Category: User Challenges

Sub-category: Absence of human interaction.
Description: The lack of human interaction during the use of such an AI platform renders the user experience excessively mundane and mechanical.
Examples: increasing use of technology; decrease in face-to-face communication; lack of social interaction.
Sources: Gong et al. (2018); Rapanta et al. (2020); Baber (2021a, 2021b); Gao (2021); Bernius et al. (2022); Diederich et al. (2022); Kasneci et al. (2023).

Sub-category: Restrained understanding.
Description: This AI-based assistance tool works on the data that it has been trained on, which might lead to its limited understanding of the contexts being discussed.
Examples: difficulty in understanding natural language; limitation in content knowledge.
Sources: Perelman (2020); Wang et al. (2020); Buhalis and Volchek (2021); Bernius et al. (2022); Omoge et al. (2022); Raković et al. (2022); Kim et al. (2022); Sheth et al. (2022); Kasneci et al. (2023); Baidoo-Anu and Ansah (2023).

Sub-category: Little creativity.
Description: The absence of imaginative stimulus owing to the nature of this tool manifests in an explicit lack of creativity.
Examples: limitations in learning approaches; lack of novelty; potential for overreliance.
Sources: Pappas and Giannakos (2021); Chen and Wen (2021); Xia (2021); Stevenson et al. (2022); Placed et al. (2022); Kasneci et al. (2023); Biswas (2023); O'Connor (2023); Lund et al. (2023).

Sub-category: Restrained contextual understanding.
Description: The data fed into ChatGPT is collated from a wide variety of sources and hence may lack contextual background.
Examples: ambiguity in language; lack of background knowledge; inability to interpret non-verbal cues; limited ability to adapt to new contexts.
Sources: Niño (2020); Simkute et al. (2021); Miao and Wu (2021); Liu et al. (2021); Diederich et al. (2022); Atlas (2023); Floridi (2023); Dwivedi et al. (2023a); Kasneci et al. (2023).

Category: Operational Challenges

Sub-category: Cost of training the model.
Description: The success of this AI tool is dependent on its recency and training; the perennial need for such training data can be an expensive input.
Examples: expertise; training data; computational resources; ongoing maintenance.
Sources: Chen et al. (2020); Okonkwo and Ade-Ibijola (2021); Hu (2021); Bogina et al. (2022); Dwivedi et al. (2023a); Kasneci et al. (2023).

Sub-category: Cost of maintenance.
Description: The data used by large language models has to be regularly updated and vetted for accuracy. Such data maintenance tasks are also high-cost tasks.
Examples: technical maintenance; data; user feedback; model re-training.
Sources: Gao (2021); Bernius et al. (2022); Haleem et al. (2022); Agomuoh and Larsen (2023); Kasneci et al. (2023); Baidoo-Anu and Ansah (2023); Sigalov and Nachmias (2023); Polak and Morgan (2023).

Sub-category: Inadequate ability to personalize instruction.
Description: ChatGPT in its present form appears to lack personalization and adequate customization options. However, ChatGPT will become more customizable in the near future.
Examples: limited information about the student; inability to provide feedback; limited flexibility; limited interactivity.
Sources: Dehouche (2021); Gao (2021); Ahsan et al. (2022); Kasneci et al. (2023); Baidoo-Anu and Ansah (2023); Eysenbach (2023); Gilson et al. (2023); Cotton et al. (2023).

Category: Environmental Challenges

Sub-category: Sustainable usage.
Description: The growing popularity of this large language model creates the need for huge computing and processing capacity. The need for servers and processors for this purpose poses a new challenge to sustainable computing.
Examples: energy consumption.
Sources: Patterson et al. (2021); Kasneci et al. (2023).

Category: Technological Challenges

Sub-category: Data privacy.
Description: Since ChatGPT is gaining popularity as a 'go-to' solution for a wide variety of problems, from content generation to coding, users are required to share details that may potentially compromise their privacy.
Examples: data breaches; privacy policies; consent; data collection and use.
Sources: Bundy et al. (2019); Breidbach and Maglio (2020); Williamson and Eynon (2020); Stahl (2021); Okonkwo and Ade-Ibijola (2021); Belk (2021); Irons and Crick (2022); Selwyn (2022); Dwivedi et al. (2023a); Kasneci et al. (2023).

Sub-category: Data security.
Description: With the exponential growth of its user base, this AI platform is likely to attract the attention of malicious players seeking to benefit from the vulnerabilities in the system.
Examples: cyberattacks; compliance; data storage; authentication.
Sources: Geko and Tjoa (2018); Okonkwo and Ade-Ibijola (2021); Stahl (2021); Deng and Lin (2023); Dwivedi et al. (2023a); Kasneci et al. (2023).
Category: Ethical Challenges

Sub-category: Partiality in training data.
Description: Since this large language model is founded on input data sourced from the internet, a lot of prevailing bias makes its way into ChatGPT. This bias is further perpetuated through the responses and solutions shared by ChatGPT.
Examples: lack of diversity in training data; reinforcing stereotypes.
Sources: Akter et al. (2021); Potts et al. (2021); Stahl (2021); Bender et al. (2021); Böhm et al. (2022); Yang (2022); Chen et al. (2023); Weissglass (2023); Hartmann et al. (2023); Krügel et al. (2023); Dwivedi et al. (2023a); Getahun (2023); Chin et al. (2023).

Sub-category: Dependency on data.
Description: This AI-based tool is dependent on the data being fed to the model and hence it is constrained by this dependence.
Examples: quality of data; limitations in training data; ongoing training; potential for overfitting.
Sources: Sarker (2021); Shen et al. (2021); Asselman et al. (2021); Ouyang et al. (2022); Hamilton (2022); Heikkilä (2023); Bjork (2023); Kasneci et al. (2022); Tlili et al. (2023); Dwivedi et al. (2023a); Kasneci et al. (2023).

Fig. 1 illustrates the final number of articles selected for the present review study. Specifically, based on the initial search process (keywords), 618 unique articles were identified. After applying filters, the number of articles was reduced to 233 articles. The researchers then conducted a manual review to identify articles irrelevant to the study. In this process, the researchers focused on both empirical and conceptual articles that were directly related to the topic of this research. As a result, 109 articles were removed, and 124 articles remained. Next, the full article reviewing process was performed. After reading and reviewing the full articles, another 37 irrelevant articles were removed, which resulted in 87 remaining articles. Finally, after checking the quality assessment criteria, such as the objectives, the research questions, the description of the collected data, the methodology applied, the technique used to analyze the data and the presentation of the results, 18 articles were removed, reducing the number of articles to 69.

Fig. 1. Research process of this review.

3.3.1. Article distribution by publication year

Fig. 2 depicts the total number of selected publications scanned in this review analysis over the years. This review study discovered that the most articles were published between 2021 and 2023, with 21 articles, while the fewest were published in 2019, with only one article. The majority of the publications were sourced from 2021 to 2023, indicating more recent interest.

Fig. 2. Publications by year.

The results of a comprehensive examination of the challenges of using ChatGPT in education and science-related journal papers were presented and discussed. The categorization framework was used with five categories considered: user, operational, environmental, technological, and ethical concerns. The systematic process enabled the emergence of the most important challenges of adopting and using any of the innovation tools provided in ChatGPT. Table 2 also illustrates the results of the review by key themes and a comprehensive research framework that can be used for exploring the challenges presented.

In summary, the researchers conducted a categorical and systematic selection of research methods in the current study by reviewing, collecting, cataloguing and describing the major themes of investigation. The review now breaks down each of the broad challenges with a granular discussion of the more specific challenges and barriers.

4. ChatGPT key challenges discussion

While many benefits appear on the surface level for education institutions, many downsides and potential challenges need to be addressed. This review now addresses each of the themes that were identified in the Table 2 categorization framework.

4.1. User challenges

4.1.1. Absence of human interaction

While ChatGPT has harnessed worldwide interest, certain challenges have to be addressed, such as the lack of human interaction (Diederich et al., 2022; Kasneci et al., 2023). Applications such as generative AI models and ChatGPT lack adeptness in rendering human interaction comparable to language models. They do not accommodate the idea of a human instructor. Increasing use of technological applications, particularly AI, represents a significant concern in all education institutions. While the presence of technology has revolutionized the way learning is imparted and information is accessed, it has not been mindful of the value of in-person communiqué and collaboration benefits, which is central to a well-versed learning system. The seminal works of Rapanta et al. (2020) found that pupils who received personalized human feedback and support from instructors exhibited superior educational accomplishments and engagement in the teaching-learning process, relative to those who relied on automated, digitized academic programs and nodes (Gao, 2021; Bernius et al., 2022). That is, an important understanding of technology intervention in humane disciplines is the human interaction component, which is pivotal in the learning process. Baber (2021a, 2021b) found that learning aspirants in an online course received inferior results relative to their peers who participated in the same course in a traditional classroom. This would suggest the need to engage learners in a holistic learning process, highlighting the need for greater collaborative learning and social interaction experiences. In the era of educational digitization, concerns like these have become a focal concern for educators across the globe (Kasneci et al., 2023).



Similarly, Gong et al. (2018) found that blended learning environments, i.e. combining face-to-face and online learning, led to greater commitment and academic gratification among participants compared to learners who relied on technology alone. Moreover, much research attributes academic excellence to personalized coaching and a blended learning atmosphere, much less to learning restricted to computer interactions (Bernius et al., 2022). Much research suggests the need for increased social engagement on the part of the learner in education settings (Rapanta et al., 2020). While technology can facilitate the educational journey and harness information as a support function, it cannot substitute for personalized tuition and human interaction in the teaching-learning process (Diederich et al., 2022).

4.1.2. Restrained understanding

According to extant research, the statistical patterns in the data on which an AI application is trained are fundamental to generative model functioning. That is, generative models are completely ignorant of the knowledge concepts they are helping students with (Perelman, 2020; Kasneci et al., 2023). Ideally, a knowledge generative model should be very specific to students' needs and aspirations (Wang et al., 2020; Bernius et al., 2022; Baidoo-Anu and Ansah, 2023). Thus, these structures should have the intelligence to sense individual specific needs and deliver outputs accordingly. Recent studies suggest that generative model-based instruction requires greater sophistication to be able to rationalize bespoke students' needs and knowledge requests (Kasneci et al., 2023). Some challenges associated with limited understanding in education settings are as follows: (1) Difficulty in understanding natural language: as ChatGPT is structured with a restricted capacity to understand NLP to decode and generate human language output, the current technology is not yet sophisticated enough to commensurately shuttle between natural and human language and share specific outputs (Buhalis and Volchek, 2021; Omoge et al., 2022). Thus, instances have been reported where ChatGPT misconstrues or fails to comprehend students' inquiries, influencing output quality (Kasneci et al., 2023). Moreover, an ill-informed answer may render the response perplexing and create confusion on the part of students, while diminishing the power of technology in the academic sphere (Raković et al., 2022). (2) Limitations in content knowledge: while ChatGPT can engender unlimited replies, there is an embedded fundamental issue of limited access to trained data/content, which restricts the quality of outputs received. This is particularly germane on parameters of novelty, such that an incorrect output is offered to students in a relatively unexplored domain or gamut of study (Kim et al., 2022). In addition, while ChatGPT can engender custom-made learning content and answer queries (in this case, those generated by students), it is currently incapable of offering


personalized encouragement and consideration to individualized needs in the same way as traditional education settings (Sheth et al., 2022).

4.1.3. Little creativity

One of the significant challenges faced by ChatGPT relates to the lack of innovative output quality (Lund et al., 2023; Kasneci et al., 2023). This is largely due to the single source of training data input that the mechanism has received. Generative models rely on the input training data source, and although they modulate the patterns of output, they systematically generate monotonous and non-creative content. This curtails the innovation and uniqueness of replies (Pappas and Giannakos, 2021; Biswas, 2023). Chen and Wen (2021) moreover established that a generative model-based tune composition system had a regulated capability to produce unprecedented, novel and distinct tunes. While some creativity can therefore be observed within limited contexts, significant drawbacks such as plagiarism and violation of copyrights restrict the unique aspect of creative content. While ChatGPT can be fine-tuned and personalized to configure specific learning content and answer student queries, it is incapable of dealing with resourceful and ingenious problem-solving contexts such as critical thinking, which is a pre-requisite in the education system (Kasneci et al., 2023). Several other challenges also need to be noted as follows: (1) Limitations in learning approaches: ChatGPT spawns responses based on the restrictive training data rendered. While it can respond to forthright questions, it cannot deal with contextual problem solving, innovation, and establishing a critical mindset such that it might help students find creative solutions (Mantelero, 2018; Kasneci et al., 2023). (2) Lack of novelty: input and training data are the primary sources of ChatGPT responses; thus, expecting unprecedented, innovative solutions to unprecedented queries is most likely a distal expectation (Xia, 2021; O'Connor, 2023) among learners. (3) Potential for overreliance: generative AI could be expected to impair students' self-dependence. Given that it is easy for a learner to access the application, a sense of overreliance may inhibit learner self-dependence and creative ways of problem-solving and lateral thinking (Stevenson et al., 2022; Placed et al., 2022).

4.1.4. Dependency on data

Yet another detriment to the use of ChatGPT is its dependence on data archives and primordial data (Tlili et al., 2023; Dwivedi et al., 2023a). Applications like these that are generative in nature are trained on significant amounts of input data and are highly dependent on the compilation of data to maintain quality (Kasneci et al., 2023). If the content is insufficient as far as quantity or quality, then this means that the output reproduced will be deficient in some way. Roumeliotis and Tselikas (2012) suggested that such a generative model-based query-response mechanism accomplished poor responses when the input training content lacked relevance to the context and nature of the task for which the content had to be generated. Thus, ChatGPT mandates enormous sets of data to train the model. Further, the accuracy and effectiveness of such input data has to be contemplated. Input from an unauthentic and unverified source may influence the quality of data output. Some of the challenges associated with dependency on data in ChatGPT are discussed as follows: (1) Quality of data: the efficiency of the application in question is heavily dependent on the accuracy of the responses generated as an output. If the input content is inaccurate, incomplete, or prejudiced, it can lead to incongruous and inappropriate output. A study by Shen et al. (2021), for instance, found that models trained on low-quality data can result in significant deterioration of output quality and thus call into question the whole performance of generative AI models and their applications. Thus, the data source needs to be well-researched, substantiated, and authentic. Without this, any future data generated through ChatGPT will result in another genre of biased, unauthentic data. This might spawn a chain reaction of the unverified data string leading to inaccurate output, significantly influencing the quality of knowledge in educational settings. (2) Limitations in training data: with the paucity of data on certain knowledge parameters a distinct possibility, the inefficiency of the applications can surface. This may lead, at its most basic, to questions of accuracy, resulting in a lack of comprehensive understanding and bringing into question the mission of education (Tlili et al., 2023). Further, generative models have embedded challenges of generating the same nature of data demanded by the input, and may return incorrect output related to novel and unexplored themes. (3) Ongoing training: to keep the application relevant, it needs to replenish itself with content revision and training data as an intermittent function. This could be perplexing, as the timely availability of novel data in palatable formats may require colossal infrastructure to collect and process the data (Ouyang et al., 2022). Moreover, depending on others for the creation of data and labelling it as 'training' data for the generative system is a tricky task to address. (4) Potential for overfitting: a so-called efficient generative AI model runs the risk of being so befitting to specific training data within a context that it limits its usability, thus restricting its capacity to be categorized and theorized to varied and newer contexts (Asselman et al., 2021). Thus, its relevance to the academic stream is questionable, requiring greater substantiation through new research. Further, these regenerative mechanisms need a continuous flow of input data from parallel, specialized fields of ontology (Kasneci et al., 2023). This may need significant investments into the future and the creation of infrastructure that might be established in a parallel economy.

4.1.5. Categorical, contextual comprehension

Contextuality and application-orientation are pivotal to different fields of education, with many academic disciplines solely dependent on application-oriented computer applications, management studies, astrophysics, inter alia. In the absence of applying AI generative models to fit specific academic disciplines and fields of ontology, significant challenges remain as to how best to use AI (Kasneci et al., 2023). Generative models have an inherent challenge of not being receptive and sensitive to contexts, which can completely make the content incongruent, irrelevant and thus unusable. Taken together, the contextualization of content, poor human interaction, a lack of interface, and continuous quality data input intervention compromise the quality of ChatGPT in the educational field (Diederich et al., 2022). For instance, Miao and Wu (2021) and Liu et al. (2021) suggest that a generative model-based input conversational system has an embedded limitation on interpreting the context and its suitability to an input string. As the technology is relatively novel, the absence of requisite skills for data redemption by teachers can lead to significant and newer challenges, especially in the context of education.

The application has the embedded issue of contextualizing the input question, which may lead to inaccurate, irrelevant, and confusing responses, restricting its usability. Some of the challenges associated with the lack of contextual understanding in ChatGPT are discussed as follows: (1) Ambiguity in language: the use of natural language can be ambiguous to a computing device, as the coder of the input content is human and contextuality has an integral role to play in modulating the nature of output expected. ChatGPT may not be able to precisely construe the milieu of a query, cascading as irrelevant or inappropriate responses. That is, if not interpreted correctly with due consideration, the knowledge generated may bring into question the whole regenerative process. For instance, Niño (2020) established that contextual misunderstandings in machine learning, translation, and interpretation can translate as errors in output quality. (2) Lack of background knowledge: differences in background knowledge/input data between ChatGPT and human tutors can juxtapose the inability of generative models to provide exact, authentic, reliable, and complete replies to scholarly queries (Simkute et al., 2021). The mechanistic and non-human understanding of the language model can lead to ambiguous, confusing, and irrelevant mining of data, which may mostly be inadequate for teaching-learning quality. (3) Inability to interpret non-verbal cues: the inability of generative models to recognize non-verbal cues such as facial expressions or tone of voice can impair it from recognizing

(3) Inability to interpret non-verbal cues: The inability of generative models to recognize non-verbal cues such as facial expressions or tone of voice can impair them from recognizing the meaning and contextuality of the input (Kasneci et al., 2023); thus, an output void of emotions and human feelings may not be palatable in every scenario.

4.2. Operational challenges

4.2.1. Cost of training methods
The adoption of large language and generative technologies may create infrastructure and economic burdens for educational institutions, particularly those with restricted financial resources (Kasneci et al., 2023). Moreover, the application model necessitates momentous computational resources and specialized expertise which may not be feasible for these institutions. Particular issues can be categorized as follows: (1) Computational resources: Training will consume significant computational infrastructure, i.e., processing power, speed, memory, storage, and security, which may be a significant drawback for low-budget and unfunded/under-funded educational institutions (Okonkwo and Ade-Ibijola, 2021). This reality brings to the forefront problems of universal AI adoption. (2) Expertise: Developing the AI model requires expertise in NLP, machine learning, and data science, meaning that finding expert intelligence is expensive (Hu, 2021). (3) Training data: As discussed earlier, the input training data is fundamental to the functioning of ChatGPT, and if derived from quality sources it can be expensive to acquire and maintain. Investments will be required in data collection, annotation, and curation as well as in the development of tools and processes for managing training data (Bogina et al., 2022).

4.2.2. Maintenance costs
Ongoing maintenance and debugging are a necessity for this dynamic generative model (Agomuoh and Larsen, 2023). Once the model is deployed, continued maintenance is required for optimal performance. Some of the challenges associated with the cost of maintenance in ChatGPT include: (1) Technical maintenance: The model requires ongoing maintenance, especially software updates, bug fixes, performance, and optimization work. This could be expensive and time consuming and depends on technical know-how which may not exist in some institutions (Baidoo-Anu and Ansah, 2023). (2) Data maintenance: The AI model requires continuous data maintenance such as data cleaning, data annotation, and data quality monitoring. Thus, technical maintenance costs could be exorbitant within specific educational settings (Sigalov and Nachmias, 2023). (3) User feedback: User feedback is an integral input for the modulation of the application (Bernius et al., 2022), which might also be expensive and time-consuming in the initial years of its introduction (Gao, 2021). This challenge could be escalated by the number of licensed users of the application and the complexity of the educational setting (Haleem et al., 2022; Baidoo-Anu and Ansah, 2023). (4) Model retraining: Given the pace of knowledge renewal in the 21st century, data input and output can become redundant very quickly, suggesting that to maintain the precision of the model, it has to be updated and retrained in a timely manner by discarding obsolete parameters while onboarding new ones. This can be economically inefficient and time-consuming, particularly for large and complex models (Polak and Morgan, 2023).

4.2.3. Instructional input personalization
A fundamental challenge for AI generative model applications in the academic sphere is the restricted ability to personalize commands and instructions (Kasneci et al., 2023). That is, generative models cannot interpret personalized instructions/commands to cater for individuals' needs (Baidoo-Anu and Ansah, 2023; Eysenbach, 2023), as the machine-driven mechanisms are not equipped to render customized services. As ChatGPT cannot cater to the personalized learning needs and experiences of each pupil, its effectiveness as an educational tool is questionable. Some of the challenges associated with the limited ability to personalize instructions in ChatGPT include: (1) Limited information about students: In the absence of granular information concerning student needs such as learning formats, interests, and preferences, including strengths and challenges, the capacity of the application to offer a holistic and wholesome learning experience is ambiguous and questionable. In these circumstances, ChatGPT's usability for 'personalized' learning experiences and student inclusivity is under question (Eysenbach, 2023). (2) Inability to provide feedback: ChatGPT cannot harness feedback that is customized to individual learning needs within a context, meaning that the AI tool is not currently viable for many educational institutions and their constituents (Gao, 2021). Comprehensively, it fails to offer individualized feedback on students' learning methods and challenges (Ahsan et al., 2022; Baidoo-Anu and Ansah, 2023), which currently can only be offered by a human tutor. (3) Limited flexibility: AI tools more generally fail to synergize the ever-transitioning needs of student cohorts and their latent learning needs, further diminishing their capacity to offer personalized learning experiences customized to individual students' distinctive learning aspirations (Gilson et al., 2023; Cotton et al., 2023). (4) Limited interactivity: Personalized learning experiences with the social and interactive nature of learning are limited and questioned (Dehouche, 2021; Kasneci et al., 2023).

4.3. Environmental challenges

4.3.1. Sustainable usage
The sustainability and ongoing usage of this application/model in the education sector is a very real question for end-users (Kasneci et al., 2023). High energy consumption, infrastructure maintenance, and environmental deterioration represent critical objections that need to be addressed. Thus, energy-efficient infrastructure and collaborative storage (e.g., cloud), powered by renewable and eco-friendly energy sources, are required for ecologically sustainable operations in education settings (Patterson et al., 2021). With the evolution of environmental consciousness and the human development index, technological advancement is widely regarded as a threat to the environment. For instance, one of the significant reasons for the tardy adoption of Bitcoin is its significant effect on the environment, suggesting that ChatGPT developers need to find ways to reduce its carbon footprint (Kasneci et al., 2023). For its continuous use in the education sector, the consolidated efforts of teachers, institutions, policy-makers, and administrators should be aimed at reducing the immediate and long-term impact of this technology on ecological and environmental grounds. These actions have to be aimed at maintaining the application and its technical derivatives for their sustained and ethical use in the classroom (Kasneci et al., 2023).

4.4. Technological challenges

4.4.1. Data privacy
Another significant challenge is the privacy of pupil information (Irons and Crick, 2022). Issues include: (1) Data breaches: Student data stored in insecure data connections or servers can escalate the threat of unauthorized access (Kasneci et al., 2023; Williamson et al., 2020), which may also increase the threat of crimes and data forgery, plagiarism, and copyright violation. Further, plagiarism and copyright violation are yet another significant challenge that educational institutions will need to address before the application is adopted in mainstream educational settings. (2) Privacy policies: Proliferation of the use of ChatGPT in education may mandate policy shifts to foster ease of use and address embedded challenges with generative technologies. This can be time-consuming and challenging, especially in contexts where there is a regulatory and policy vacuum (Williamson and Eynon, 2020).
(3) Consent: In continuation of the previous element, the adoption of ChatGPT in academia may need the informed consent of parents and their wards depending on their age, to comply with regulations and privacy laws (Stahl, 2021). In situations where a student is not old enough, gaining careful consent from guardians will also be a significant challenge (Selwyn, 2022). (4) Data collection and use: The dual process of monitoring data usage and data access, particularly for educational institutions, may lead to information breaches, data collection transparency issues, and embedded technical problems (Kasneci et al., 2023).

4.4.2. Data security
Data security closely follows data privacy. Data security is an embedded problem within education settings (Okonkwo and Ade-Ibijola, 2021; Dwivedi et al., 2023a), since data breaches are exploited by fraudulent individuals and groups. Here, problems are also challenging as follows: (1) Cyberattacks: In the event of mainstream usage, cyberattacks will compromise students' data storage. Indeed, data security should be a strength of data servers such that users feel secure that no unauthorized access will occur (Kasneci et al., 2023). (2) Authentication: If adopted, ChatGPT may have to establish a suitable screening mechanism for personal and institutional authentication of user data including necessary filters and camouflage. This would invite substantial investments in infrastructure and technical know-how (Agapito, 2023), which might be in its nascence owing to the sophistication of these language-enabled models and systems. (3) Compliance: A revamp of regulations to comply with currently existing data security standards and regulations associated with the adoption of ChatGPT in educational settings may be complex and time-consuming (Stahl, 2021; Kasneci et al., 2023). For example, the General Data Protection Regulation (GDPR) mandates that user institutions should execute suitable technical and institutional measures to protect user data (Geko and Tjoa, 2018). (4) Data storage: The processing of generative models will mandate storage of large amounts of user data. This makes all data vulnerable to data breaches and other security infringements (Deng and Lin, 2023), requiring significant investment in secure storage machinery, continuous data development, security retention, and deletion policies (Kasneci et al., 2023).

4.5. Ethical challenges

4.5.1. Partiality in training data
Scholars note the significant amount of partiality in the input training data used to train the model (Akter et al., 2021; Böhm et al., 2022; Dwivedi et al., 2023a) as follows: (1) Reinforcing stereotypes: In cases where the input data is biased or based on prejudicial language, this situation will influence the quality of responses such that compromised output will not meet students' learning needs (Bender et al., 2021; Getahun, 2023). Recent studies found that queries related to mental patients were highly biased around stereotypes, thus compromising data quality (Chin et al., 2023; Potts et al., 2021). This raises an important question about the suitability of ChatGPT for learning systems where training data quality is critical to the usability of the system for a particular purpose (Hamilton, 2022; Heikkilä, 2023). (2) Absence of diversity in data: One complex challenge concerns input-trained data generating like-natured outputs (Weissglass, 2022). For example, a ChatGPT model trained on data customized for a certain set of audiences may fail to furnish precise or comprehensive answers to learners outside of that discipline or specialization (Bjork, 2023). Thus, partial or inappropriate data is being rendered to learners, bringing to the forefront many existing disparities in the whole education system (Chen et al., 2023). This portrays the need for preparing and fine-tuning the input content (training) data for downstream tasks. Scholars suggest that generative models specific to disciplines have to be dissimilar, distinct, and representative of that specific group of learners or individuals (Kasneci et al., 2023). However, this strategy may make the data unusable for other more general users (Yang, 2022; Hartmann et al., 2023). Timely and frequent scrutiny and analysis of the application's suitability and functioning on distinct profiles and assemblies of users can help to diagnose and eradicate gaps and embedded predispositions (Stahl, 2021). The human component of the whole system is integral, requiring a greater need for monitoring, determining the input quality and accounting for bias (Krügel et al., 2023).

Our discussions thus far suggest that many adoption challenges exist related to the adoption of AI technology in general and ChatGPT in particular. For instance, while human input is critical for creative, intelligent and high-quality contextual output, administrators and designers need to place problems of data privacy and confidentiality at the forefront of potential solutions. Consequently, it will be necessary to develop a range of strategies and solutions that at least in part help to address many of these AI adoption problems and challenges discussed earlier. In respect of the current paper and based on emerging research illustrated in Table 2 earlier, we next outline what these strategies might look like in respect of the education sector.

5. Strategies to support the education sector in using ChatGPT

5.1. Strategies related to inadequate human interaction

Strategies to address the challenges of poor human interaction in the institutionalization of ChatGPT should be considered within the context of other teaching aids, tools and educational strategies. ChatGPT should be considered as a complementary aid and not as a substitute for human interaction, which we discuss next (Gong et al., 2018; Gao, 2021; Jalil et al., 2023; Dwivedi et al., 2023a). That is, based on the emerging themes in Table 2 and the subsequent discussion, it is possible to develop what some of these strategies might look like based on the emerging challenges identified thus far.

• Blended learning: Blended learning is a format of education consisting of online learning blended with face-to-face instruction. By combining ChatGPT with other teaching methods such as in-person lectures, group discussions, and collaborative learning activities, educators can help ensure that students have opportunities to interact with their peers and instructors in a more social and engaging learning environment (Gong et al., 2018). Here, ChatGPT should act as a support function to the person-driven education system.
• Personalized learning: Personalized learning is a bespoke instructional method that caters to individual student needs and learning styles (Dwivedi et al., 2023a). By using ChatGPT as a means to provide pupils with individualized feedback, guidance, and support, educators can help to create a more adaptive and engaging process that is tailored to the needs of each student.
• Collaborative learning: This format is a sought-after educational aid which emphasizes group work and teamwork. ChatGPT could be a very innovative tool to help facilitate group discussions, peer feedback, and collaborative learning activities, which educators could harness to offer opportunities for interaction, engagement and collaborative learning with fellow learners and peers (Jalil et al., 2023). In fact, collaborative forms of learning might increase social and teamwork skills because of the opportunity for teamwork.
• Use of ChatGPT as a learning tool: Teachers can use this application/platform to complement traditional teaching methodologies rather than as a replacement for them (Dwivedi et al., 2023a). By using ChatGPT to afford learners additional resources and feedback, with careful supervision, educators can help to create a collaborative, interactive, customized and engaging learning experience while building reflective human interaction as a consequence of using the model.
5.2. Strategies related to limited understanding

Some strategies that can help to overcome the limited understanding of ChatGPT in the education sector include the following:

• Pre-training on educational data: To improve the accuracy of ChatGPT in the educational domain, pre-training on educational data can be implemented with timely updates and maintenance. This approach mandates training of the model on large and diverse datasets of educational texts, not restricted to textbooks, lectures, and educational videos (Sallam, 2023). Such modification strategies should help the model to assimilate, encode, better understand and generate responses to educational queries, which will enhance the quality of output responses.
• Use of knowledge graphs: Knowledge graphs can be used to represent and store knowledge about a particular domain within the education sector (Chicaiza and Valdiviezo-Diaz, 2021; Kasneci et al., 2023). Knowledge graphs can be beneficial for improving application understanding by providing ChatGPT with additional knowledge about a topic within a context. This can advance the accuracy of the responses generated by ChatGPT by empowering the tool to better decode, understand and react to inquiries related to the education domain.
• Fine-tuning on specific tasks: Fine-tuning ChatGPT on specific tasks or topics can help improve its understanding and accuracy in these areas (Kasneci et al., 2023). For example, by fine-tuning the model on specific educational tasks such as answering questions about a particular topic or providing feedback on student writing, ChatGPT can be trained to generate more accurate and relevant responses (an illustrative sketch is provided after this list).
• Human-in-the-loop approach: The human-in-the-loop approach involves incorporating human intelligence input into the model training process (Wu et al., 2022). This approach can be used to help advance the understanding of ChatGPT by allowing humans to correct errors and provide feedback to the model, thereby increasing the precision and relevance of the model's output responses.
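To make the fine-tuning strategy more concrete, the sketch below shows one possible workflow using the open-source Hugging Face transformers and datasets libraries. The model name, the two example question-answer pairs, and the hyperparameters are illustrative assumptions rather than settings reported in the reviewed studies; an institution would substitute its own curated, curriculum-aligned corpus.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Hypothetical, hand-written educational examples; a real deployment would use a
# vetted dataset prepared and reviewed by teachers.
qa_pairs = [
    {"question": "What is photosynthesis?",
     "answer": "Photosynthesis is the process by which plants convert light into chemical energy."},
    {"question": "Define supply and demand.",
     "answer": "Supply and demand describe how prices respond to the availability of and desire for a good."},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # small open model as a stand-in
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def to_features(example):
    # Concatenate prompt and answer so the model learns the expected response format.
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=256)

dataset = Dataset.from_list(qa_pairs).map(to_features)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="edu-finetune", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=5e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # yields a checkpoint adapted to the educational task
```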
5.3. Strategies related to absence of creativity

Strategies that can help minimize the absence of creative input in generative models include the following:

• Incorporating creative prompts: One strategy to promote creativity in ChatGPT responses is to incorporate creative prompts into the training data (Kasneci et al., 2023). Creative prompts comprise a component/criterion set of iterations where users have the liberty of choosing the output that suits their specific need. One downside is that this may only be relevant to mature users of pedagogy and education.
• Training on diverse genres: Training ChatGPT on a diverse range of genres can help promote creativity in the model's responses (Haleem et al., 2022). By incorporating diverse genres, ChatGPT can learn to generate responses that are more imaginative and creative. Moreover, a 'naivety scale' criterion could be introduced to help users modulate the complexity of content derived from the application (a sketch of one such control follows this list).
• Incorporating human input: Incorporating human input into the training process can help promote creativity in ChatGPT responses, as noted earlier (Cooper, 2023). By allowing humans to review and provide feedback on the model's responses, ChatGPT can learn to generate more creative and imaginative responses.
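One way a user-facing creativity or 'naivety' control could be realized is sketched below, assuming the Hugging Face transformers library and a small open model; mapping the slider to the sampling temperature is our illustrative assumption, not a documented ChatGPT feature.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # stand-in open model

def creative_response(topic: str, creativity: float) -> str:
    """creativity in [0, 1]: higher values let the model sample more freely."""
    prompt = f"Write an unexpected classroom activity about {topic}:"
    output = generator(prompt, max_new_tokens=60, do_sample=True,
                       temperature=0.5 + creativity)     # 0.5 (cautious) to 1.5 (adventurous)
    return output[0]["generated_text"]

print(creative_response("photosynthesis", creativity=0.8))
```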
5.4. Strategies related to dependency on data

Strategies that can help to reduce ChatGPT's dependency on data in the education sector include the following:

• Incorporating domain-specific knowledge: Incorporating domain-specific knowledge in the input/training data can help to reduce the reliance on general training data (Zhu et al., 2023). By providing domain-specific knowledge, ChatGPT can be programmed to cater to relevant and authentic data pertaining to a specific need.
• Transfer learning: Transfer learning enables the application model to learn from pre-trained models, thus reducing the dependency on general data knowledge (Kasneci et al., 2023), which might also reduce the magnitude of training data and operational effort.
• Active learning: Active learning is a technique that can decrease the breadth and volume of data required for model training (Budd et al., 2021). In this format, the model is iteratively trained on small chunks of data which are refined and added iteratively such that performance and output quality improve.
• Data augmentation: This technique can help to increase the amount of data available for training, which can reduce the dependency on data (Maharana et al., 2022). Data augmentation involves spawning new data from currently existing data by making small modifications. Though this technique could be exposed to allegations of salami slicing, measures can be taken to limit the resemblance of augmented items to their sources (a simple illustration follows this list).
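As a minimal, purely rule-based illustration of data augmentation, the sketch below expands a small educational corpus by swapping hand-listed synonyms. The synonym table and sentences are assumptions chosen only to show the mechanics; production pipelines would rely on richer paraphrasing or back-translation.

```python
import random

# Hypothetical synonym table; real pipelines would use curated lexical resources.
SYNONYMS = {"student": ["learner", "pupil"], "teacher": ["tutor", "instructor"],
            "exam": ["assessment", "test"]}

def augment(sentence: str, n_variants: int = 2) -> list[str]:
    """Generate paraphrased variants by swapping known synonyms."""
    variants = []
    for _ in range(n_variants):
        words = []
        for word in sentence.split():
            key = word.lower().strip(".,")
            words.append(random.choice(SYNONYMS[key]) if key in SYNONYMS else word)
        variants.append(" ".join(words))
    return variants

corpus = ["The student asked the teacher about the exam."]
augmented = corpus + [v for s in corpus for v in augment(s)]
print(augmented)   # original sentence plus near-duplicates that enlarge the training pool
```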
5.5. Strategies related to training and maintenance expenditures

Overcoming the costs of training and maintenance should include the following:

• Leveraging open-source resources: Open-source resources can help reduce the cost of training and maintenance (Kasneci et al., 2023). By using open-source ChatGPT-style models and code, educational institutions can avoid the cost of developing a custom solution from scratch.
• Using pre-trained models: Pre-trained models can reduce the amount of training required for a specific task (Han et al., 2021). These models are trained on large datasets and can be tailored to specialized tasks, reducing the amount of training required; timely updates would also be needed for pre-trained models (a short example of reusing an off-the-shelf model follows this list).
• Using cloud-based services: Cloud-based services can help to reduce the cost of maintenance by outsourcing the management of the infrastructure to a third-party provider (Ali et al., 2022). This approach can also reduce the need for in-house IT staff, resulting in reduced costs.
• Prioritizing maintenance: Prioritizing maintenance is critical to avoiding long-term costs (Kasneci et al., 2023). Regular maintenance can help to identify and fix problems before they become overly expensive. Prioritizing maintenance is also integral to stamping out plagiarism and to ensuring secure user identity and data security.
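The sketch below illustrates the pre-trained, open-source route using the Hugging Face transformers library: an off-the-shelf extractive question-answering model is reused directly rather than trained from scratch. The model name and the course snippet are illustrative assumptions.

```python
from transformers import pipeline

# Reuse an openly licensed, already-trained QA model instead of funding custom training.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = ("Blended learning combines online instruction with face-to-face teaching, "
           "allowing ChatGPT-style tools to act as a complementary aid.")
result = qa(question="What does blended learning combine?", context=context)
print(result["answer"])   # e.g. "online instruction with face-to-face teaching"
```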

5.6. Strategies related to inadequate contextual understanding

Improving the ability of generative models to facilitate greater contextual understanding should include the following:

• Multi-task learning: This technique empowers designers to improve the model's understanding of context. In this format, the model is trained to juggle and deliver between multiple parallel tasks, which can help it to learn more about the context (Kasneci et al., 2023). Multi-task learning should significantly reduce problems associated with the contextual irrelevance of ChatGPT in the education sector. In negotiating a way forward, at least within a non-technical process, designers could work with educational providers to create the different user or learner contexts that a string of code might apply to. For instance, if a user types in 'team learning', the application will search training data for all it knows about team learning. However, if team learning is advanced to multiple contexts with different interpretations and then embedded in training data, output quality related to multi-tasking should improve.
• Pre-processing the data: Pre-processing the data can help to provide the model with additional contextual information (Dwivedi et al., 2023a). This can include adding metadata to the data, such as author or publication date, or using techniques such as named entity recognition to identify important entities in the text (a brief sketch follows this list).
• Interactive learning: Interactive learning involves allowing users to provide feedback to the model in real-time, which can help it to improve its understanding of the context.
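A minimal sketch of the pre-processing idea is shown below, assuming the open-source spaCy library and its small English model (installed via `python -m spacy download en_core_web_sm`); the metadata fields and the sample passage are illustrative.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def with_metadata(passage: str, author: str, published: str) -> dict:
    """Attach source metadata and automatically detected entities to a text passage."""
    doc = nlp(passage)
    return {
        "text": passage,
        "author": author,
        "published": published,
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
    }

record = with_metadata("Isaac Newton published the Principia in 1687.",
                       author="course-pack", published="2023-09-01")
print(record["entities"])   # e.g. [('Isaac Newton', 'PERSON'), ('1687', 'DATE')]
```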
5.7. Strategies related to the limited ability to personalize instruction

Currently, the inability of ChatGPT to personalize instruction can be addressed in the following ways:

• Using student-specific data: One way to improve the ability to personalize instruction is to use student-specific data such as previous records of performance, individual interests, and learning style (Kasneci et al., 2023). This data can be used to customize the teaching method to specific individual needs and must account for informed consent from either the student or their guardians.
• Implementing adaptive learning systems: These systems use machine learning algorithms to probe student data and fine-tune instruction to individual needs in real-time (Zhou et al., 2021). The inclusion of tutors in this process could be one technique to consider while preparing the system for user-specific functions.
• Using natural language processing (NLP): NLP techniques can be used to analyze student writing and provide feedback that is tailored to individual needs (Bernius et al., 2022). For example, NLP can be used to identify areas where learners require iterative practice and instructional material, and to supply targeted exercises that improve their skills (a simple sketch follows this list).
• Incorporating human instructors: While ChatGPT can be useful for providing personalized instruction, it is important to also incorporate human instructors into the learning process (Kasneci et al., 2023). Human instructors can offer greater guidance, support, and advice that is bespoke and personalized to each student's needs.
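The sketch below gives a deliberately simple example of NLP-style feedback on student writing using only surface features (sentence length, draft length, presence of linking words). The thresholds are invented for illustration, and any real deployment would pair such output with a human tutor's review.

```python
import re

def writing_feedback(essay: str) -> list[str]:
    """Return rule-based, personalized suggestions for a student draft."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = essay.split()
    avg_len = len(words) / max(len(sentences), 1)
    feedback = []
    if avg_len > 25:
        feedback.append("Sentences average over 25 words; consider splitting long sentences.")
    if len(words) < 150:
        feedback.append("The draft is quite short; expand the argument with examples.")
    if not any(w.lower() in {"because", "therefore", "however"} for w in words):
        feedback.append("Add linking words (because, therefore, however) to signal reasoning.")
    return feedback or ["Structure looks reasonable; a tutor should still review content accuracy."]

print(writing_feedback("AI helps students. It is useful."))
```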
5.8. Strategies related to sustainable usage

Strategies that will help to achieve sustainable usage of ChatGPT in the education sector include the following:

• Prioritizing energy efficiency: Using energy-efficient hardware and software can help to reduce the environmental impact of using ChatGPT (Qadir, 2023). This would need the involvement of complementary sectors to gain new knowledge of research and practice, ensuring that hardware addresses the sustainability of the ecosystem.
• Developing ethical guidelines: Developing ethical guidelines for the specific use of ChatGPT in the education sector will help ensure its sustainable use over time and that it will not be harmful socially and environmentally (Mhlanga, 2023). The collaborative efforts of all stakeholders, such as regulatory bodies and policy-makers, should be considered in relation to formulating these ethical guidelines.
• Encouraging responsible use: Encouraging responsible use of ChatGPT among students and staff can help to minimize the impact of its use on the environment (Michel-Villarreal et al., 2023). Teachers can be an integral part of this process to ensure responsible use of technology occurs.
• Promoting alternative solutions: Alternative education solutions and pedagogical practices continue to exist (Dwivedi et al., 2023a). Students should nevertheless be empowered to make complementary use of generative models as the AI features of these programs continue to improve.

5.9. Strategies related to data security and privacy

Strategic solutions related to data security and privacy issues can be addressed as follows:

• Implementing strong authentication and access controls: Implementing robust authentication and access restrictions is one technique to protect data security and privacy (Gupta et al., 2023). This involves employing multi-factor authentication, role-based access management, and strong password criteria to provide authorized access to student data (a brief sketch follows this list).
• Regularly updating security systems: Security systems must be regularly updated to prevent cyberattacks (Gupta et al., 2023). This includes applying software patches and using up-to-date antivirus and malware detection software.
• Monitoring and logging: Monitoring and logging are important strategies for identifying security breaches and preventing unauthorized access to student data (Dwivedi et al., 2023a). These logs can also help to identify any data privacy violations and allow for quick remediation.
• Educating staff and students: Educating staff and students about data security and privacy is essential for creating a culture of cybersecurity (Alshaikh, 2020). This includes teaching users about password management, phishing problems, and other types of cyberattacks.
• Conducting regular risk assessments: Timely and routine risk assessments (Kasneci et al., 2023) can support the identification of vulnerabilities in the system and ensure that appropriate measures are taken to prevent data breaches and cyberattacks.
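A minimal sketch of role-based access control combined with audit logging for a ChatGPT-style tutoring service is given below; the roles, permissions, user names, and record identifiers are all illustrative assumptions rather than a prescribed design.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical role-to-permission mapping for student records.
PERMISSIONS = {"teacher": {"read_profile", "read_progress"},
               "student": {"read_own_progress"},
               "admin": {"read_profile", "read_progress", "export_data"}}

def access_student_record(user: str, role: str, action: str, student_id: str) -> bool:
    """Allow the action only if the role grants it, and log every attempt for later audits."""
    allowed = action in PERMISSIONS.get(role, set())
    audit_log.info("%s user=%s role=%s action=%s student=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user, role, action,
                   student_id, allowed)
    return allowed

access_student_record("t.smith", "teacher", "read_progress", "S-1042")   # permitted
access_student_record("j.doe", "student", "export_data", "S-1042")       # denied but logged
```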

5.10. Strategies related to bias in training data

Our research suggests that pre-existing bias in training data can be addressed as follows:

• Diverse data collection: Diverse data collection involves collecting data from a range of sources and perspectives to ensure that the input training data is representative of the population (Bogina et al., 2022); it should also be frequently updated to accommodate user needs.
• Human-in-the-loop approach: A human-in-the-loop approach can also be used to help eliminate bias in the data (Wu et al., 2022). By allowing humans to review and provide feedback on the training data, biases can be identified and corrected, leading to more accurate and inclusive responses.
• Regular data audits: Regular data audits involve reviewing the training data on a timely basis to eradicate bias and prejudice that could have been introduced over time (Ayinde et al., 2023). By reviewing the data regularly, generative models will be able to continuously generate accurate and quality outputs (a simple audit sketch follows this list).
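As a minimal illustration of a recurring data audit, the sketch below counts how often identity-related terms co-occur with negative descriptors in candidate training sentences. The term lists and example sentences are assumptions chosen only to show the mechanics, and flagged items would be reviewed by humans rather than removed automatically.

```python
from collections import Counter

NEGATIVE_TERMS = {"unstable", "dangerous", "lazy"}
IDENTITY_TERMS = {"women", "men", "immigrants", "patients"}

def cooccurrence_audit(corpus: list[str]) -> Counter:
    """Count identity terms appearing in the same sentence as a negative descriptor."""
    flags = Counter()
    for sentence in corpus:
        words = {w.lower().strip(".,") for w in sentence.split()}
        if words & NEGATIVE_TERMS:
            for term in words & IDENTITY_TERMS:
                flags[term] += 1
    return flags

sample = ["Patients with depression are unstable.",        # should be flagged for review
          "Immigrants enrich classroom discussions."]
print(cooccurrence_audit(sample))   # Counter({'patients': 1})
```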
Taken together, the classification framework in Table 2 has enabled the researchers to identify how the gaps and challenges of generative AI models such as ChatGPT can be addressed by matching strategies. While the strategies outlined are specifically related to the education sector, they might also be generalized as potential solutions for the challenges faced by other sectors, e.g., the manufacturing sector, given that common concerns such as data security, contextual understanding, and personalized instruction may be shared across institutions and industry settings. Although the current review has been based on the compilation of current and extant research, our findings represent potential pathways by which future research endeavors can be explored. The authors hope, however, that this review might move extant research forward by helping to mitigate the significant challenges posed by generative AI models. We now turn to a discussion of what these future directions look like.

6. Conclusion

The novelty of ChatGPT as a generative model of AI for intellectual output creation has been one of the most significant innovations of recent times, and this review responded to calls for a more well-researched literature on the theme. Given that machine learning has enabled hi-tech content generation and innovation pertaining to digital content initiation, this process has naturally progressed to sophisticated AI technology as a constant theme for most digitally-dependent fields of inquiry. We explored in the current study, for instance, how AI generative models create artificial relics by scanning through the input training specimens that are used to train input data. We also outlined how these features are explored through ChatGPT as an example of a generative model, leading to the development of a comprehensive classification framework which was outlined in Table 2. We next identified the strategies that can be implemented as solutions to the challenges presented. We noted, for instance, that ChatGPT has the potential to transform the educational sector through digital means, and we analyzed what this process might look like on the basis of the challenges presented.

6.1. Limitations and future research directions

Several limitations can be observed for the current review. First, the manuscript and search process categorically focused on published articles in peer-reviewed journals and reputable databases. Thus, we have not looked at books and book chapters and have used a limited longitudinal design over a designated time period. This process may have eliminated other seminal and related digital and technology related literatures. Second, our methodology could be sequenced for future investigations by including a broader number of keywords, data repositories, and unreported/unpublished data. Use of such literature should be cautiously undertaken by applying specific quality parameters. This review is also limited by the emerging themes presented and the authors' interpretation of these. For instance, other authors might have different interpretations of data nuances to the extent that they search for a different range of articles. This would also be similar for other emerging themes. Moreover, the authors have taken a broad-church approach to reviewing the available literature by focusing on the findings of studies related to broader educational fields. Thus, our review findings are limited by generalizing the findings to more general features of a specific educational effect such as learner experiences or the use of personalized human tutors. Accordingly, these limitations may result in future opportunities for scholars in subsequent research by focusing on the specific effects of generative AI models within a unique educational institution such as schools, as well as a more micro-focused methodology on particular features of the application.

Extant studies, including the current research, suggest that ChatGPT is becoming pivotal in all walks of life, inter alia education and many more. The current study indicates an emerging requirement for a detailed study on ChatGPT and what this might look like for different technological advances into the near future. Moreover, consistent research endeavors are required to substantiate the field of research and practice. The current review can aid in the development of a unified theoretical framework to act as a reference point for future ontological fields of inquiry and focused investigations. To this end, the authors hope that the gaps and challenges identified might motivate other scholars to empirically test all or parts of the framework in future research. Future research, for instance, might explore the effects of generative models on other theoretical designs such as information systems, organizational learning, and in health-related areas such as electronic health records. Scholars might explore the convergence of different theories of innovation such as the Technology Acceptance Model (TAM) and the Transfer of Technology (TOT) model, for example by exploring how these mid-range theories influence the technical, operational and organizational nuances of AI-enabled models. The future generation of researchers might investigate the promising dimensions of ChatGPT. While this area of investigation is still in its nascence, its holistic adoption in varied sectors has been swift and integral. While generative AI has experienced a slower uptake in the education sector, there is little doubt that AI technology could revolutionize the way education is offered, dispensed, aggregated and assimilated.

CRediT authorship contribution statement

Omar Ali: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing. Peter A. Murray: Conceptualization, Investigation, Resources, Supervision, Writing – review & editing. Mujtaba Momin: Conceptualization, Resources, Writing – review & editing. Yogesh K. Dwivedi: Conceptualization, Supervision, Writing – original draft, Writing – review & editing. Tegwen Malik: Writing – review & editing.

Declaration of competing interest

There are no conflicts of interest for this manuscript.

Data availability

No data was used for the research described in the article.

References

Abdullah, M., Madain, A., Jararweh, Y., 2022. ChatGPT: fundamentals, applications and social impacts. In: The Ninth International Conference on Social Networks Analysis, Management and Security (SNAMS), Milan, Italy, pp. 1–8.
Abukmeil, A., Ferrari, S., Genovese, A., Piuri, V., Scotti, F., 2021. A survey of unsupervised generative models for exploratory data analysis and representation learning. ACM Comput. Surv. 54 (5), 1–40.
Agapito, J.J., 2023. User perceptions and privacy concerns related to using ChatGPT in conversational AI systems. Available at SSRN: https://ssrn.com/abstract=4440366.
Agomuoh, F., Larsen, L., 2023. ChatGPT: how to use the AI chatbot that's changing everything. Digital Trends. https://www.digitaltrends.com/computing/how-to-use-openai-chatgpt-text-generation-chatbot/.
Ahsan, M.M., Siddique, Z., 2022. Industry 4.0 in healthcare: a systematic review. Int. J. Inform. Manag. Data Insights 2 (1), 100079.
Ahsan, K., Akbar, S., Kam, B., 2022. Contract cheating in higher education: a systematic literature review and future research agenda. Assess. Eval. High. Educ. 47 (4), 523–539.
Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y.K., D'Ambra, J., Shen, K.N., 2021. Algorithmic bias in data-driven innovation in the age of AI. Int. J. Inform. Manag. 60, 102387. https://doi.org/10.1016/J.IJINFOMGT.2021.102387.
Ali, O., Shrestha, A., Soar, J., Wamba, S.F., 2018. Cloud computing-enabled healthcare opportunities, issues, and applications: a systematic review. Int. J. Inform. Manag. 43, 146–158.
Ali, O., Jaradat, A., Kulakli, A., Abuhalimeh, A., 2021. A comparative study: Blockchain technology utilization benefits, challenges and functionalities. IEEE Access 9, 12730–12749.
Ali, O., Shrestha, A., Jaradat, A., AlAhmad, A., 2022. An evaluation of key adoption factors towards using the fog technology. Big Data Cognitive Computing 6 (3). https://doi.org/10.3390/bdcc6030081.
Ali, O., Abdelbaki, W., Shrestha, A., Elbasi, E., Alryalat, M.A.A., Dwivedi, Y.K., 2023. A systematic literature review of artificial intelligence in the healthcare sector: benefits, challenges, methodologies, and functionalities. J. Innov. Knowl. 8 (1), 100333. https://doi.org/10.1016/j.jik.2023.100333.
Alshaikh, M., 2020. Developing cybersecurity culture to influence employee behavior: a practice perspective. Comput. Secur. 98, 102003. https://doi.org/10.1016/j.cose.2020.102003.
Alshater, M., 2022. Exploring the role of artificial intelligence in enhancing academic performance: a case study of ChatGPT. SSRN. https://ssrn.com/abstract=4312358.
Altman, S., 2022. Twitter. December 4. https://twitter.com/sama/status/1599668808285028353?s=20&t=j5ymf1tUeTpeQuJKlWAKaQ.
Alzubaidi, L., Zhang, J., Humaidi, A.J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M.A., Al-Amidie, M., Farhan, L., 2021. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data 8 (53), 1–74.
Asselman, A., Khaldi, M., Aammou, S., 2021. Enhancing the prediction of student performance based on the machine learning XGBoost algorithm. Interact. Learn. Environ. 1–20. https://doi.org/10.1080/10494820.2021.1928235.
Atlas, S., 2023. Chat GPT for Higher Education and Professional Development: A Guide to Conversational AI, 1st ed., Vol. 1. The University of Rhode Island. https://digitalcommons.uri.edu/cba_facpubs/548.
Ayinde, L., Wibowo, M.P., Ravuri, B., Emdad, F.B., 2023. ChatGPT as an important tool in organizational management: A review of the literature. Bus. Inf. Rev. https://doi.org/10.1177/02663821231187991.
Baber, H., 2021a. Social Interaction and Effectiveness of the Online Learning - A Moderating Role of Maintaining Social Distance During the Pandemic COVID-19.
Baber, H., 2021b. Asian Education and Development Studies. Available at SSRN: https://ssrn.com/abstract=3746111.
Bai, Y., Yi, J., Tao, J., Tian, Z., Wen, Z., Zhang, S., 2021. Fast end-to-end speech recognition via non-autoregressive models and cross-modal knowledge transferring from BERT. IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 1897–1911. https://doi.org/10.1109/TASLP.2021.3082299.
Baidoo-Anu, D., Ansah, O.L., 2023. Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN Electron. J. https://doi.org/10.2139/ssrn.4337484.
Behrad, F., Abadeh, M.S., 2022. An overview of deep learning methods for multimodal medical data mining. Expert Syst. Appl. 200, 117006. https://doi.org/10.1016/j.eswa.2022.117006.
Belk, R., 2021. Ethical issues in service robotics and artificial intelligence. Serv. Ind. J. 41, 13–14.
Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S., 2021. On the dangers of stochastic parrots: Can language models be too big? In: FAccT 2021 - Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623.
Bernardini, M., Romeo, L., Frontoni, E., Amini, M.R., 2021. A semi-supervised multi-task learning approach for predicting short-term kidney disease evolution. IEEE J. Biomed. Health Inform. 25 (10), 3983–3994.
Bernius, J.P., Krusche, S., Bruegge, B., 2022. Machine learning based feedback on textual student answers in large courses. Computers and Education: Artificial Intelligence 3. https://doi.org/10.1016/j.caeai.2022.100081.
Biswas, S., 2023. ChatGPT and the future of medical writing. Radiology 307 (2), e223312. https://doi.org/10.1148/radiol.223312.
Bjork, C., 2023. ChatGPT threatens language diversity. More needs to be done to protect our differences in the age of AI. The Conversation. February 9. https://theconversation.com/chatgpt-threatens-language-diversity-more-needs-to-be-done-to-protect-our-differences-in-the-age-of-ai-198878.
Boell, S.K., Cecez-Kecmanovic, D., 2015. On being 'systematic' in literature reviews in IS. J. Inf. Technol. 30 (2), 161–173.
Bogina, V., Hartman, A., Kuflik, T., Shulner-Tal, A., 2022. Educating software and AI stakeholders about algorithmic fairness, accountability, transparency and ethics. Int. J. Artif. Intell. Educ. 32 (3), 808–833.
Böhm, S., Carrington, M., Cornelius, N., de Bruin, B., Greenwood, M., Hassan, L., Jain, T., Karam, C., Kourula, A., Romani, L., Riaz, S., Shaw, D., 2022. Ethics at the centre of global and local challenges: thoughts on the future of business ethics. J. Bus. Ethics 180 (3), 835–861.
Breidbach, C., Maglio, P., 2020. Accountable algorithms? The ethical implications of data-driven business models. J. Serv. Manag. https://doi.org/10.1108/JOSM-03-2019-0073 (ahead-of-print).
Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.M., Wu, J., Winter, C., Amodei, D., 2020. Language models are few-shot learners. Adv. Neural Inf. Proces. Syst. 33, 1877–1901.
Budd, S., Robinson, E.C., Kainz, B., 2021. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med. Image Anal. 71, 102062. https://doi.org/10.1016/j.media.2021.102062.
Buhalis, D., Volchek, K., 2021. Bridging marketing theory and big data analytics: the taxonomy of marketing attribution. Int. J. Inf. Manag. 56, 102253. https://doi.org/10.1016/j.ijinfomgt.2020.102253.
Bundy, A., et al., 2019, November 28. Explainable AI. Retrieved from https://royalsociety.org/topicspolicy/projects/explainable-ai/.
Cain, W., 2023. AI emergence in education: exploring formative tensions across scholarly and popular discourse. J. Interact. Learn. Res. 34 (2), 239–273. https://www.learntechlib.org/primary/p/222352/.
Chaturvedi, R., Verma, S., Das, R., Dwivedi, Y.K., 2023. Social companionship with artificial intelligence: recent trends and future avenues. Technol. Forecast. Soc. Chang. 193, 122634.
Chatzipanagiotou, P., Katsarou, E., 2023. Crisis management, school leadership in disruptive times and the recovery of schools in the post COVID-19 era: a systematic literature review. Education Sciences 13 (2), 1–29.
Chen, N., Wen, G., 2021. Music composition feasibility using a quality classification model based on artificial intelligence. Aggress. Violent Behav., 101632. https://doi.org/10.1016/j.avb.2021.101632.
Chen, J., Jing, H., Chang, Y., Liu, Q., 2019. Gated recurrent unit based recurrent neural network for remaining useful life prediction of nonlinear deterioration process. Reliability Engineering & System Safety 185, 372–382. https://doi.org/10.1016/j.ress.2019.01.006.
Chen, X., Xie, H., Zou, D., Hwang, G.-J., 2020. Application and theory gaps during the rise of artificial intelligence in education. Computers and Education: Artificial Intelligence 1, 100002. https://doi.org/10.1016/j.caeai.2020.100002.
Chen, Y., Jensen, S., Albert, L.J., Gupta, S., Lee, T., 2023. Artificial intelligence (AI) student assistants in the classroom: designing chatbots to support student success. Inf. Syst. Front. 25 (1), 161–182.
Chicaiza, J., Valdiviezo-Diaz, P., 2021. A comprehensive survey of knowledge graph-based recommender systems: technologies, development, and contributions. Information 12 (6), 232. https://doi.org/10.3390/info12060232.
Chin, H., Lima, G., Shin, M., Zhunis, A., Cha, C., Choi, J., Cha, M., 2023. User-chatbot conversations during the COVID-19 pandemic: study based on topic modeling and sentiment analysis. J. Med. Internet Res. 25, e40922. https://doi.org/10.2196/40922.
Coghlan, S., Miller, T., Paterson, J., 2021. Good proctor or "big brother"? Ethics of online exam supervision technologies. Philosophy and Technology 34 (4), 1581–1606.
Cooper, K., 2021. OpenAI GPT-3: Everything You Need to Know. Springboard. November 1. https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/.
Cooper, G., 2023. Examining science education in ChatGPT: an exploratory study of generative artificial intelligence. J. Sci. Educ. Technol. 32, 444–452. https://doi.org/10.1007/s10956-023-10039-y.
Cotton, D.R.E., Cotton, P.A., Shipway, J.R., 2023. Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 1–12. https://doi.org/10.1080/14703297.2023.2190148.
Dehouche, N., 2021. Plagiarism in the age of massive generative pre-trained transformers (GPT-3): "the best time to act was yesterday. The next best time is now". Ethics in Science and Environmental Politics 21. https://doi.org/10.3354/esep00195.
Deng, J., Lin, Y., 2023. The benefits and challenges of ChatGPT: an overview. Frontiers in Computing and Intelligent Systems 2 (2), 81–83.
Devlin, J., Chang, M.W., Lee, K., Toutanova, K., 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Computation and Language). https://arxiv.org/abs/1810.04805.
Diederich, S., Brendel, A.B., Morana, S., Kolbe, L., 2022. On the design of and interaction with conversational agents: an organizing and assessing review of human-computer interaction research. J. Assoc. Inf. Syst. 23 (1), 96–138.
Dwivedi, Y.K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Williams, M.D., 2021. Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management 57, 101994.
Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah, A.M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M.A., Al-Busaidi, A.S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Wright, R., 2023a. "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642.
Dwivedi, Y.K., Pandey, N., Currie, W., Micu, A., 2023b. Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: practices, challenges and research agenda. Int. J. Contemp. Hosp. Manag. https://doi.org/10.1108/IJCHM-05-2023-0686.
Dwivedi, Y.K., Sharma, A., Rana, N.P., Giannakis, M., Goel, P., Dutot, V., 2023c. Evolution of artificial intelligence research in technological forecasting and social change: research topics, trends, and future directions. Technological Forecasting and Social Change 192, 122579.
Elbasi, E., Mathew, S., Topcu, A.E., Abdelbaki, W., 2021. A survey on machine learning and internet of things for COVID-19. IEEE World AI IoT Congress, 0115–0120.
Eysenbach, G., 2023. The role of ChatGPT, generative language models, and artificial intelligence in medical education: A conversation with ChatGPT and a call for papers. JMIR Medical Education 9, e46885. https://doi.org/10.2196/46885.
Fallahi, S., Mellquist, A., Mogren, O., Listo Zec, E., Algurén, P., Hallquist, L., 2022. Financing solutions for circular business models: exploring the role of business ecosystems and artificial intelligence. Bus. Strateg. Environ. https://doi.org/10.1002/bse.3297.
Floridi, L., 2023. AI as agency without intelligence: on ChatGPT, large language models, and other generative models. Philosophy & Technology 36 (1), 15. https://doi.org/10.1007/s13347-023-00621-y.
Floridi, L., Chiriatti, M., 2020. GPT-3: its nature, scope, limits, and consequences. Mind. Mach. 30 (4), 681–694.
Gao, J., 2021. Exploring the feedback quality of an automated writing evaluation system Pigai. Int. J. Emerg. Technol. Learn. 16 (11), 322. https://doi.org/10.3991/ijet.v16i11.19657.
García-Peñalvo, F.J., Corell, A., Abella-García, V., Grande, M., 2020. Online assessment in higher education in times of COVID-19. Education in the Knowledge Society (EKS) 21 (0), 26. https://doi.org/10.14201/eks.23013.
Geko, M., Tjoa, S., 2018. An ontology capturing the interdependence of the general data protection regulation (GDPR) and information security. Proceedings of the Central European Cybersecurity Conference, 1–6. https://doi.org/10.1145/3277570.3277590.
Getahun, H., 2023. ChatGPT Could Be Used for Good, but Like Many Other AI Models, It's Rife With Racist and Discriminatory Bias. The Insider. January 26. https://www.insider.com/chatgpt-is-like-many-other-ai-models-rife-with-bias-2023-1.
Gilson, A., Safranek, C.W., Huang, T., Socrates, V., Chi, L., Taylor, R.A., Chartash, D., 2023. How does ChatGPT perform on the United States medical licensing examination? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education 9, e45312. https://doi.org/10.2196/45312.
Golder, S., Loke, Y.K., Zorzela, L., 2014. Comparison of search strategies in systematic reviews of adverse effects to other systematic reviews. Health Info. Libr. J. 31, 92–105.
Gong, L., Liu, Y., Zhao, W., 2018. Using learning analytics to promote student engagement and achievement in blended learning. In: Proceedings of the 2nd International Conference on E-Education, E-Business and E-Technology (ICEBT 2018), pp. 19–24.
Gonog, L., Zhou, Y., 2019. A review: generative adversarial networks. In: The 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), Xi'an, China, pp. 505–510. https://doi.org/10.1109/ICIEA.2019.8833686.
Gui, J., Sun, Z., Wen, Y., Tao, D., Ye, J., 2023. A review on generative adversarial networks: algorithms, theory, and applications. IEEE Trans. Knowl. Data Eng. 35 (4), 3313–3332.
Gupta, M., Akiri, C., Aryal, K., Parker, E., Praharaj, L., 2023. From ChatGPT to ThreatGPT: impact of generative AI in cybersecurity and privacy. IEEE Access 11, 80218–80245. https://doi.org/10.1109/ACCESS.2023.3300381.
Haleem, A., Javaid, M., Singh, R.P., 2022. An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges. BenchCouncil Transactions on Benchmarks, Standards and Evaluations 2 (4), 100089. https://doi.org/10.1016/j.tbench.2023.100089.
Hamilton, I., 2022. Don't worry about AI becoming sentient. Do worry about it finding new ways to discriminate against people. Bus. Insid. July 18. https://www.businessinsider.com/ai-discrimination-bias-worse-problem-than-sentience-2022-6?IR=T.
Han, X., Zhang, Z., Ding, N., Gu, Y., Liu, X., Huo, Y., Qiu, J., Yao, Y., Zhang, A., Zhang, L., Han, W., Huang, M., Jin, Q., Lan, Y., Liu, Y., Liu, Z., Lu, Z., Qiu, X., Song, R., Tang, J., Wen, J.R., Yuan, J., Zhao, W.X., Zhu, J., 2021. Pre-trained models: past, present and future. AI Open 2, 225–250. https://doi.org/10.1016/j.aiopen.2021.08.002.
Hartmann, J., Schwenzow, J., Witte, M., 2023. The political ideology of conversational AI: converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. SSRN Electron. J. https://doi.org/10.2139/ssrn.4316084.
Heikkilä, M., 2023. How OpenAI is trying to make ChatGPT safer and less biased. MIT Technology Review. February 21. https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/.
Henderson, M., Chung, J., Awdry, R., Mundy, M., Bryant, M., Ashford, C., Ryan, K., 2022. Factors associated with online examination cheating. Assess. Eval. High. Educ. 1–15. https://doi.org/10.1080/02602938.2022.2144802.
agenda. International Journal of Information Management 102716. https://doi.org/10.1016/j.ijinfomgt.2023.102716.
Kumar, V., 2023. Digital enablers. In: The Economic Value of Digital Disruption. Management for Professionals. Springer, Singapore. https://doi.org/10.1007/978-981-19-8148-7_1.
Li, I., Pan, J., Goldwasser, J., Verma, N., Wong, W.P., Nuzumlalı, M.Y., Rosand, B., Li, Y., Zhang, M., Chang, D., Taylor, R.A., Krumholz, H.M., Radev, D., 2022. Neural language processing for unstructured data in electronic health records: A review. Computer Science Review 46, 100511. https://doi.org/10.1016/j.cosrev.2022.100511.
Libbrecht, M., Noble, W., 2015. Machine learning applications in genetics and genomics. Nat. Rev. Genet. 16, 321–332. https://doi.org/10.1038/nrg3920.
Lim, W.M., Kumar, S., Verma, S., Chaturvedi, R., 2022. Alexa, what do we know about conversational commerce? Insights from a systematic literature review. Psychol. Mark. 39 (6), 1129–1155. https://doi.org/10.1002/mar.21654.
Lim, W.M., Gunasekara, A., Pallant, J.L., Pallant, J.I., Pechenkina, E., 2023. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education 21 (2), 100790. https://doi.org/10.1016/j.ijme.2023.100790.
Liu, M., Bao, X., Liu, J., Zhao, P., Shen, Y., 2021. Generating emotional response by conditional variational auto-encoder in open-domain dialogue system. Neurocomputing 460, 106–116.
Lucy, L., Bamman, D., 2021. Gender and representation bias in GPT-3 generated stories. The Third Workshop on Narrative Understanding, 48–55. https://doi.org/10.18653/
Herath, H.M.K.K.M.B., Mittal, M., 2022. Adoption of artificial intelligence in smart cities: v1/2021.nuse-1.5.
a comprehensive review. International Journal of Information Management Data Lund, B.D., Wang, T., Mannuru, N.R., Nie, B., Shimray, S., Wang, Z., 2023. ChatGPT and
Insights 2 (1), 100076. a new academic reality: artificial intelligence-written research papers and the ethics
Hossen, M.S., Karmoker, D., 2020. Predicting the probability of Covid-19 recovered in of the large language models in scholarly publishing. J. Assoc. Inf. Sci. Technol. 74
south Asian countries based on healthy diet pattern using a machine learning (5), 570–581.
approach. In: The 2nd International Conference on Sustainable Technologies for Maharana, K., Mondal, S., Nemade, B., 2022. A review: data pre-processing and data
Industry 4.0, pp. 1–6. augmentation techniques. Global Transitions Proceedings 3 (1), 91–99. https://doi.
Hradecky, D., Kennell, J., Cai, W., Davidson, R., 2022. Organizational readiness to adopt org/10.1016/j.gltp.2022.04.020.Majanja, M. K. (2020). The status of electronic
artificial intelligence in the exhibition sector in Western Europe. International teaching within South African LIS Education. Library Management, 41(6/7), 317-
Journal of Information Management 65, 102497. https://doi.org/10.1016/j. 337.
ijinfomgt.2022.102497. Malik, N., Tripathi, S.N., Kar, A.K., Gupta, S., 2021. Impact of artificial intelligence on
Hu, J., 2021. Teaching evaluation system by use of machine learning and artificial employees working in industry 4.0 led organizations. Int. J. Manpow. 43 (2),
intelligence methods. Int. J. Emerg. Technol. Learn. (IJET) 16 (05), 87. https://doi. 334–354.
org/10.3991/ijet.v16i05.20299. Mantelero, A., 2018. AI and Big Data: a blueprint for a human right, social and ethical
Hu, K., 2023. ChatGPT Sets Record for Fastest-Growing User Base. Reuters. Available at: impact assessment. Computer Law & Security Review 34 (4), 754–772.
https://www.reuters.com/technology/chatgpt-sets-record-fastest-growinguser-base- McLean, R., Antony, J., 2014. Why continuous improvement initiatives fail in
analyst-note-2023-02-01/. manufacturing environments? A systematic review of the evidence. Int. J. Product.
Hu, Y., Bai, G., 2014. A systematic literature review of cloud computing in eHealth. Perform. Manag. 63 (3), 370–376.
Health Informatics-An International Journal 3 (4), 11–20. Metz, C., 2022. The New Chatbots Could Change the World. Can You Trust Them? The
Hughes, D.L., Dwivedi, Y.K., Rana, N.P., Simintiras, A.C., 2016. Information systems New York Times. December 10. https://www.nytimes.com/2022/12/10/technolo
project failure–analysis of causal links using interpretive structural modelling. gy/ai-chat-bot-chatgpt.html.
Production Planning & Control 27 (16), 1313–1333. Mhlanga, D., 2023. Open AI in Education, the Responsible and Ethical Use of ChatGPT
Irons, A., Crick, T., 2022. Cybersecurity in the digital classroom: Implications for Towards Lifelong Learning. SSRN available at SSRN: https://ssrn.com/abstract
emerging policy, pedagogy and practice. In: The Emerald Handbook of Higher =4354422. or. https://doi.org/10.2139/ssrn.4354422.
Education in a Post-Covid World: New Approaches and Technologies for Teaching Miao, J., Wu, J., 2021. Multi-turn dialogue model based on the improved hierarchical
and Learning. Emerald Publishing Limited, pp. 231–244. https://doi.org/10.1108/ recurrent attention network. Int. J. Eng. Model. https://doi.org/10.31534/
978-1-80382-193-120221011. engmod.2021.2.ri.02d.
Jahan, R., Tripathi, M.M., 2021. Brain tumor detection using machine learning in MR Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D.E., Thierry-Aguilera, R.,
images. In: The 10th IEEE International Conference on Communication Systems and Gerardou, F.S., 2023. Challenges and opportunities of generative AI for higher
Network Technologies, pp. 664–668. education as explained by ChatGPT. Education Sciences 13 (9), 856. https://doi.org/
Jain, S., Basu, S., Ray, A., Das, R., 2023. Impact of irritation and negative emotions on 10.3390/educsci13090856.
the performance of voice assistants: netting dissatisfied customers’ perspectives. Mollman, S., 2022. ChatGPT Gained 1 Million Users in Under a week. Here's Why the AI
International Journal of Information Management 72, 102662. Chatbot is Primed to Disrupt Search as We Know It. Yahoo Finance. December 10.
Jalil, S., Rafi, S., LaToza, T., Moran, K., Lam, W., 2023. ChatGPT and software testing https://finance.yahoo.com/news/chatgpt-gained-1-million-followers-224523258.
education: promises and perils. In: IEEE International Conference on Software html.
Testing. https://doi.org/10.48550/arXiv.2302.03287. February 7. Murray, M., Macedo, M., Glynn, C., 2019. Delivering health intelligence for healthcare
Jovanović, M., Campbell, M., 2022. Generative artificial intelligence: trends and services. In: The 1st International Conference on Digital Data Processing, pp. 88–91.
prospects. Computer 55 (10), 107–112. Ngai, E.W.T., Wat, F.K.T., 2002. A literature review and classification of electronic
Kar, A.K., Varsha, P.S., Rajan, S., 2023. Unravelling the impact of generative artificial commerce research. Inf. Manag. 39 (5), 415–429.
intelligence (GAI) in industrial applications: a review of scientific and grey Nguyen, A., Ngo, H.N., Hong, Y., Dang, B., Nguyen, B.P.T., 2023. Ethical principles for
literature. Glob. J. Flex. Syst. Manag. 24, 659–689. https://doi.org/10.1007/ artificial intelligence in education. Educ. Inf. Technol. 28 (4), 4221–4241.
s40171-023-00356-x. Niño, A., 2020. Exploring the use of online machine translation for independent language
Karras, T., Laine, S., Aila, T., 2021. A style-based generator architecture for generative learning. Res. Learn. Technol. 28 (0) https://doi.org/10.25304/rlt.v28.2402.
adversarial networks. IEEE Trans. Pattern Anal. Mach. Intell. 43 (12), 4217–4228. Nirala, K.K., Singh, N.K., Purani, V.S., 2022. A survey on providing customer and public
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., administration based services using AI: chatbot. Multimedia Tools and Application
Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., 81, 22215–22246. https://doi.org/10.1007/s11042-021-11458-y.
Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., O’Connor, S., 2023. Open artificial intelligence platforms in nursing education: tools for
Kasneci, G., 2023. ChatGPT for good? On opportunities and challenges of large academic progress or abuse? Nurse Educ. Pract. 66, 103537 https://doi.org/
language models for education. Learn. Individ. Differ. 103, 102274 https://doi.org/ 10.1016/j.nepr.2022.103537.
10.1016/j.lindif. 2023.102274. Okonkwo, C.W., Ade-Ibijola, A., 2021. Chatbots applications in education: a systematic
Kim, J., Merrill Jr., K., Xu, K., Kelly, S., 2022. Perceived credibility of an AI instructor in review. Computers and Education: Artificial Intelligence 2, 100033. https://doi.org/
online education: the role of social presence and voice features. Comput. Hum. 10.1016/j.caeai.2021.100033.
Behav. 136, 107383 https://doi.org/10.1016/j.chb.2022. 107383. Omoge, A.P., Gala, P., Horky, A., 2022. Disruptive technology and AI in the banking
Kirmani, A.R., 2023. Artificial intelligence-enabled science poetry. ACS Energy Lett. 8 industry of an emerging market. Int. J. Bank Mark. 40 (6), 1217–1247.
(1), 574–576. Ouyang, F., Zheng, L., Jiao, P., 2022. Artificial intelligence in online higher education: a
Kitchenham, B., Charters, S., 2007. Guidelines for Performing Systematic Literature systematic review of empirical research from 2011 to 2020. Education and
Reviews in Software Engineering. Keele University, UK. Information Technologies 27 (6), 7893–7925.
Krügel, S., Ostermaier, A., Uhl, M., 2023. The Moral Authority of ChatGPT (Computers Pan, S.L., Nishant, R., 2023. Artificial intelligence for digital sustainability: an insight
and Society). https://arxiv.org/abs/2301.07098. into domain-specific research and future directions. International Journal of
Kshetri, N., Dwivedi, Y.K., Davenport, T.H., Panteli, N., 2023. Generative artificial Information Management 72, 102668.
intelligence in marketing: applications, opportunities, challenges, and research Pappas, I.O., Giannakos, M.N., 2021. Rethinking learning design in IT education during a
pandemic. Frontiers in Education 6. https://doi.org/10.3389/feduc.2021.652856.

16
O. Ali et al. Technological Forecasting & Social Change 199 (2024) 123076

Patterson, D.A., Gonzalez, J., Le, Q.V., Liang, C., Munguía, L.M., Rothchild, D., So, D.R., Stokel-Walker, C., 2022. AI bot ChatGPT writes smart essays — should professors worry?
Texier, M., Dean, J., 2021. Carbon emissions and large neural network training. Nature. https://doi.org/10.1038/d41586-022-04397-7.
ArXiv 1–22 abs/2104.10350. Tay, Y., Dehghani, M., Bahri, D., Metzler, D., 2023. Efficient transformers: a survey. ACM
Pavlik, J.V., 2023. Collaborating with ChatGPT: considering the implications of Comput. Surv. 55 (6), 1–28.
generative artificial intelligence for journalism and media education. Journalism & Terwiesch, C., 2023. Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its
Mass Communication Educator 78 (1), 84–93. Performance in the Operations Management Course.
Perelman, L., 2020. The BABEL generator and E-Rater: 21st Century writing constructs Thomas, M., Bender, A., de Graaf, C., 2023. Integrating structure-based approaches in
and automated essay scoring (AES). Journal of Writing Assessment 13 (1), 1–10. generative molecular design. Curr. Opin. Struct. Biol. 79, 102559 https://doi.org/
Placed, J.A., Strader, J., Carrillo, H., Atanasov, N.A., Indelman, V., Carlone, L., 10.1016/j.sbi.2023.102559.
Castellanos, J.A., 2022. A survey on active simultaneous localization and mapping: Tlili, A., Shehata, B., Adarkwah, M.A., Bozkurt, A., Hickey, D.T., Huang, R.,
state of the art and new frontiers. ArXiv 1–20 abs/2207.00254. Agyemang, B., 2023. What if the devil is my guardian angel: ChatGPT as a case study
Polak, M.P., Morgan, D., 2023. Extracting Accurate Materials Data from Research Papers of using chatbots in education? Smart Learning Environments 10 (1), 15. https://doi.
with Conversational Language Models and Prompt Engineering — Example of org/10.1186/s40561-023-00237-x.
ChatGPT. http://arxiv.org/abs/2303.05352. Tranfield, D., Denyer, D., Smart, P., 2003. Towards a methodology for developing
Porkodi, S.P., Sarada, V., Maik, V., Gurushankar, K., 2022. Generic image application evidence-informed management knowledge by means of systematic review. Br. J.
using GANs (generative adversarial networks): a review. Evol. Syst. https://doi.org/ Manag. 14 (3), 207–222.
10.1007/s12530-022-09464-y. Tsang, K.C.H., Pinnock, H., Wilson, A.M., Shah, S.A., 2020. Application of machine
Potts, C., Ennis, E., Bond, R.B., Mulvenna, M.D., McTear, M.F., Boyd, K., Broderick, T., learning to support self-management of asthma with mHealth. In: The 42nd Annual
Malcolm, M., Kuosmanen, L., Nieminen, H., Vartiainen, A.K., Kostenius, C., International Conference of the IEEE Engineering in Medicine & Biology Society,
Cahill, B., Vakaloudis, A., McConvey, G., O’Neill, S., 2021. Chatbots to support pp. 5673–5677.
mental wellbeing of people living in rural areas: can user groups contribute to co- Tuomi, I., 2018. The Impact of Artificial Intelligence on Learning, Teaching, and
design? Journal of Technology in Behavioral Science 6 (4), 652–665. Education. The Joint Research Centre (JRC), the European Commission’s science and
Pucher, K.K., Boot, N.M.W.M., De Vries, N.K., 2013. Systematic review. Health Educ. 113 knowledge service, pp. 1–47.
(5), 372–391. Uddin, S., Khan, A., Hossain, M.E., Moni, M.A., 2019. Comparing different supervised
Qadir, J., 2022. Engineering Education in the Era of ChatGPT: Promise and Pitfalls of machine learning algorithms for disease prediction. BMC Medical Informatics
Generative AI for Education. https://doi.org/10.36227/techrxiv.21789434. Decision Making 19 (281), 1–16. https://doi.org/10.1186/s12 911-019-1004-8.
Qadir, J., 2023. Engineering education in the era of ChatGPT: promise and pitfalls of Van Dis, E.A.M., Bollen, J., Zuidema, W., van Rooij, R., Bockting, C.L., 2023. ChatGPT:
generative AI for education. In: IEEE Global Engineering Education Conference five priorities for research. Nature 614 (7947), 224–226. https://doi.org/10.1038/
(EDUCON), Kuwait, Kuwait, 2023, pp. 1–9. https://doi.org/10.1109/EDU d41586-023-00288-7.
CON54358.2023.10125121. Van Engelen, J.E., Hoos, H.H., 2020. A survey on semi-supervised learning. Mach. Learn.
Qu, J., Zhao, Y., Xie, Y., 2022. Artificial intelligence leads the reform of education 109, 373–440. https://doi.org/10.1007/s10994-019-05855-6.
models. Syst. Res. Behav. Sci. 39 (3), 581–588. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł.
Raković, M., Bernacki, M.L., Greene, J.A., Plumley, R.D., Hogan, K.A., Gates, K.M., ukasz, Polosukhin, I., 2017. Attention is all you need. In: Guyon, I., Von Luxburg, U.,
Panter, A.T., 2022. Examining the critical role of evaluation and adaptation in self- Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (Eds.), Advances in
regulated learning. Contemp. Educ. Psychol. 68, 102027 https://doi.org/10.1016/j. Neural Information Processing Systems, vol. 30. Curran Associates, Inc.. In: https
cedpsych.2021.102027. ://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd05
Rapanta, C., Botturi, L., Goodyear, P., Guàrdia, L., Koole, M., 2020. Online university 3c1c4a845aa-Paper.pdf
teaching during and after the covid-19 crisis: refocusing teacher presence and Votto, A.M., Valecha, R., Najafirad, P., Rao, H.R., 2021. Artificial intelligence in tactical
learning activity. Postdigital Science and Education 2 (3), 923–945. human resource management: A systematic literature review. International Journal
Rese, A., Tränkner, P., 2024. Perceived conversational ability of task-based of Information Management Data Insights 1 (2), 100047.
chatbots–which conversational elements influence the success of text-based Wang, W., Chen, Y., Heffernan, N., 2020. A generative model-based tutoring system for
dialogues? International Journal of Information Management 74, 102699. math word problems. arXiv 1–20 preprint arXiv:2010.04.
Richey Jr., R.G., Chowdhury, S., Davis-Sramek, B., Giannakis, M., Dwivedi, Y.K., 2023. Watson, R.T., 2015. Beyond being systematic in literature reviews in IS. Journal of
Artificial intelligence in logistics and supply chain management: A primer and Information Technology 30 (2), 185–187.
roadmap for research. J. Bus. Logist. https://doi.org/10.1111/jbl.12364. Weissglass, D.E., 2022. Contextual bias, the democratization of healthcare, and medical
Roumeliotis, K.I., Tselikas, N.D, 2023. ChatGPT and Open-AI Models: A Preliminary artificial intelligence in low- and middle-income countries. Bioethics 36 (2),
Review. Future Internet 15 (6), 1–24. https://doi.org/10.3390/fi15060192. 201–209.
Sadoughi, F., Ali, O., Erfannia, L., 2020. Evaluating the factors that influence cloud Williams, C., 2023. Hype, or the future of learning and teaching? 3 Limits to AI's ability
technology adoption—comparative case analysis of health and non-health sectors: a to write student essays (https://kar.kent.ac.uk/99505; K Law). https://kar.kent.ac.
systematic review. Health Informatics J. 26 (2), 1363–1391. uk/99505/.
Sallam, M., 2023. ChatGPT utility in healthcare education, research, and practice: Williamson, B., Eynon, R., 2020. Historical threads, missing links, and future directions
systematic review on the promising perspectives and valid concerns. Healthcare 11 in AI in education. Learn. Media Technol. 45 (3), 223–235.
(6), 887. https://doi.org/10.3390/healthcare11060887. Williamson, B., Bayne, S., Shay, S., 2020. The datafication of teaching in higher
Sarker, I.H., 2021. Deep Learning: a comprehensive overview on techniques, taxonomy, education: critical issues and perspectives. Teach. High. Educ. 25 (4), 351–365.
applications and research directions. SN Computer Science 2, 420–440. Wu, X., Xiao, L., Sun, Y., Zhang, J., Ma, T., He, L., 2022. A survey of human-in-the-loop
Sasubilli, S.M., Kumar, A., Dutt, V., 2020. Machine learning implementation on medical for machine learning. Futur. Gener. Comput. Syst. 135, 364–381. https://doi.org/
domain to identify disease insights using TMS. The International Conference on 10.1016/j.future.2022.05.014.
Advances in Computing and Communication Engineering 1–4. Xia, P., 2021. Design of personalized intelligent learning assistant system under artificial
Schiff, D., 2021. Out of the laboratory and into the classroom: the future of artificial intelligence background. In: The International Conference on Machine Learning and
intelligence in education. AI Soc. 36, 331–348. Big Data Analytics for IoT Security and Privacy. SPIOT 2020, pp. 194–200. https://
Selwyn, N., 2022. The future of AI and education: some cautionary notes. Eur. J. Educ. doi.org/10.1007/978-3-030-62743-0_27.
57 (4), 620–631. Yadav, R., Sardana, A., Namboodiri, V.P., Hegde, R.M., 2021. Speech prediction in silent
Shea, B.J., Grimshaw, J.M., Wells, G.A., Boers, M., Andersson, N., Hamel, C., 2007. videos using variational autoencoders. In: IEEE International Conference on
Development of AMSTAR: a measurement tool to assess the methodological quality Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada,
of systematic reviews. BMC Med. Res. Methodol. 7 (10), 1–7. pp. 7048–7052. https://doi.org/10.1109/ICASSP39728.2021.9414040.
Shen, Z., Fu, H., Shen, J., Shao, L., 2021. Modeling and enhancing low-quality retinal Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Gašević, D., 2023.
fundus images. IEEE Trans. Med. Imaging 40 (3), 996–1006. Practical and ethical challenges of large language models in education: a systematic
Sheth, J.N., Jain, V., Roy, G., Chakraborty, A., 2022. AI-driven banking services: the next scoping review. Br. J. Educ. Technol. https://doi.org/10.1111/bjet.13370.
frontier for a personalized experience in the emerging market. Int. J. Bank Mark. 40 Yang, S., 2022. The Abilities and Limitations of ChatGPT. Anaconda Perspectives.
(6), 1248–1271. December 10. https://www. anaconda.com/blog/the-abilities-and-limitations-o
Sigalov, S.E., Nachmias, R., 2023. Investigating the potential of the semantic web for f-chatgpt.
education: exploring Wikidata as a learning platform. Educ. Inf. Technol. https:// Zhang, H., Xu, X., Xiao, J., 2014. Diffusion of e-government: a literature review and
doi.org/10.1007/s10639-023-11664-1. directions for future directions. Gov. Inf. Q. 31, 631–636.
Simkute, A., Luger, E., Jones, B., Evans, M., Jones, R., 2021. Explainability for experts: a Zhou, H., She, C., Deng, Y., Dohler, M., Nallanathan, A., 2021. Machine learning for
design framework for making algorithms supporting expert decisions more massive industrial internet of things. IEEE Wirel. Commun. 28 (4), 81–87. https://
explainable. Journal of Responsible Technology 7–8, 100017. https://doi.org/ doi.org/10.1109/MWC.301.2000478.
10.1016/j.jrt.2021.100017. Zhu, X., Liu, B., Yao, L., Ding, Z., Zhu, C., 2023. TGR: neural-symbolic ontological
Stahl, B.C., 2021. Artificial Intelligence for a Better Future, , 1st edvol. 1. Springer reasoner for domain-specific knowledge graphs. Appl. Intell. https://doi.org/
International Publishing. https://doi.org/10.1007/978-3-030-69978-9. 10.1007/s10489-023-04834-8.
Stahl, B.C., Eke, D., 2024. The ethics of ChatGPT–exploring the ethical issues of an
emerging technology. International Journal of Information Management 74,
Omar Ali is an Assistant Professor in information systems at Abdullah Al Salem University, Kuwait, where he currently teaches business process and systems, and operations management. He received his PhD in Management Information Systems (MIS) from the University of Southern Queensland (USQ), Australia, and holds two master's degrees from the University of Wollongong: one in Information and Communication Technology and one, by research, in Information Systems and Technology.
His research interests include RFID, cloud computing, blockchain, artificial intelligence, security, and systems analysis and design. He reviews for many leading journals, such as Government Information Quarterly, Information Systems Management, and Behaviour and Information Technology, and has published in top international journals, including International Journal of Information Management, Technological Forecasting and Social Change, Journal of Innovation and Knowledge, Government Information Quarterly, Information Technology and People, and Behaviour and Information Technology.

Peter A. Murray works in the School of Management and Enterprise at the University of Southern Queensland. His research interests focus on two main themes: 1) high-performance work and leadership studies, and 2) organizational transformation and diversity. Peter does applied research on employee engagement and team learning in local government settings, which follows much of his earlier research on organizational learning in large public and private corporations. He is also a regular reviewer for many leading journals, and his published work appears in highly ranked outlets such as Supply Chain Management, Human Resource Management Journal, Asia Pacific Journal of Human Resources, International Journal of Human Resource Management, Management Learning and many others. Peter has a strong history of organizational development consultancy and training for very large corporate organizations throughout Australia.

Mujtaba Momin is an Assistant Professor in Human Resource Management (HRM) at the American University of the Middle East, Kuwait (in affiliation with Purdue University, Indiana, USA). He previously worked at Prince Salman Bin Abdul-Aziz University, Kingdom of Saudi Arabia. In addition to broader areas of interdisciplinary relevance, his research interests include technology and HRM; employability skills enhancement; entrepreneurship; CSR; developing the industry-academia interface; and interpersonal and communication skills enhancement.

Yogesh K. Dwivedi is a Professor of Digital Marketing and Innovation and Founding Director of the Digital Futures for Sustainable Business & Society Research Group at the School of Management, Swansea University, Wales, UK. In addition, he holds a Distinguished Research Professorship at Symbiosis International (Deemed University), Pune, India. Professor Dwivedi is also currently leading the International Journal of Information Management as its Editor-in-Chief. His research interests lie at the interface of Information Systems (IS) and Marketing, focusing on consumer adoption and diffusion of emerging digital innovations, digital government, and digital and social media marketing, particularly in the context of emerging markets. Professor Dwivedi has published more than 500 articles in a range of leading academic journals and conferences that are widely cited (more than 65 thousand times according to Google Scholar). He has been named on the annual Highly Cited Researchers™ lists from Clarivate Analytics for 2020, 2021 and 2022. Professor Dwivedi is an Associate Editor of the Journal of Business Research, European Journal of Marketing, and Government Information Quarterly, and a Senior Editor of the Journal of Electronic Commerce Research.

Tegwen Malik has an applied project management and multi-disciplinary background. Her focus for many years has been on research and innovation in both the sciences and management, with a particular interest in international standards, sustainability and bio-inspiration. In her work she regularly turns to nature's evolutionary solutions to harness and inspire greener and more sustainable solutions and innovations.