
Education and Information Technologies

https://doi.org/10.1007/s10639-023-12333-z

Factors influencing students’ intention to adopt and use ChatGPT in higher education: A study in the Vietnamese context
Greeni Maheshwari1

Received: 20 June 2023 / Accepted: 30 October 2023

© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023

Abstract
ChatGPT, an extensively recognised language model created by OpenAI, has gained
significant prominence across various industries, particularly in education. This
study aimed to investigate the factors that influence students’ intentions to adopt and
utilise ChatGPT for their academic studies. The study used a Structural Equation
Model (SEM) for analysing the data gathered from 108 participants, comprising
both undergraduate and postgraduate students enrolled in public and private univer-
sities in Vietnam. The findings indicated that students’ inclination to adopt ChatGPT
(referred to as adoption intention or AI) was influenced by their perception of its
user-friendliness (PEU). However, the perceived usefulness (PU) of ChatGPT did
not have a direct impact on students’ adoption intention; instead, it had an indirect
influence through personalisation (with a positive effect) and interactivity (with a
negative effect). Importantly, there was no significant indirect effect of PU on AI
mediated by perceived trust and perceived intelligence. This study is one of the ini-
tial empirical inquiries into ChatGPT adoption within an Asian context, providing
valuable insights in this emerging area of research. As the use of ChatGPT by stu-
dents becomes increasingly inevitable, educational institutions should carefully con-
sider integrating it into the assessment process. It is crucial to design assessments
that encourage responsible usage of ChatGPT, preserving students’ critical think-
ing abilities and creativity in their assessment writing. Moving forward, educators
will play a pivotal role by offering clear guidelines and instructions that set out the
appropriate and ethical use of artificial intelligence tools in assessments.

Keywords ChatGPT · Human–computer interface · Adoption intention · Vietnam · Higher education

* Greeni Maheshwari
  [email protected]
1 Department of Management, The Business School, RMIT University, Ho Chi Minh City, Vietnam


1 Introduction

The integration of artificial intelligence (AI) enabled systems in organisations is experiencing rapid growth, fundamentally transforming businesses
and expanding their capacities into areas that were conventionally associ-
ated with human participation (Miller, 2018). AI pertains to the ability of
a system to effectively process data, acquire knowledge from it, and adjust
accordingly to meet specific demands (Kaplan & Haenlein, 2019). While
this concept has received considerable focus recently, its roots can be traced
back to the late twentieth century when researchers aimed to define AI as
the capability of machines, robots, computers, or systems to execute tasks
in a manner comparable to humans (Simmons & Chappell, 1988). Initially,
AI received little scholarly interest and application. However, with the emer-
gence of Big Data and advancements in computing capacity, AI has gained
prominence, not only in the corporate world but also among the general population (Haenlein & Kaplan, 2019). The success of AI systems, particularly in
deep learning models, showcases the transformative nature of this technol-
ogy and its potential to bring about radical changes across various industries
and communities (Samek et al., 2017). Given the fast pace of AI innovation,
it is essential to foster an interdisciplinary landscape encompassing societal,
institutional, and technological adaptations, thereby opening up opportuni-
ties for innovation and development across fields such as science, technol-
ogy, business, and government/public sectors (Dignum, 2020).
AI holds tremendous promise and potential in the education sector, offer-
ing support for formal instruction and lifelong learning (Nalbant, 2021). The
advent of AI technologies has already significantly impacted the educational
landscape, providing students in higher education with new skills and foster-
ing collaborative learning environments, signalling significant changes for
the future (Kuleto et al., 2021). Consequently, the future of higher education
is intertwined with advancements in emerging technologies, with AI serving
as a foundational element (Popenici & Kerr, 2017). AI’s potential utilisation
within higher education can be categorised into four primary domains: profiling
and prediction, intelligent instruction systems, assessment and evaluation, and
adaptive systems with a focus on personalization (Zawacki-Richter et al., 2019).
AI-powered educational tools that possess adaptability, inclusivity, customisa-
tion, engagement, and efficiency have the capability to improve accessibility
and enjoyment of learning for students (Nalbant, 2021). Furthermore, AI-driven
advancements in education administration and internal structures present new prospects and complexities in the teaching and learning field (Popenici & Kerr,
2017). Given the advancements of AI in the field of education, it has become
imperative to investigate and comprehend the influence of AI tools in the edu-
cational sector (Pothen, 2022).
There are various AI tools/technologies available and students’ choices
among these AI tools and technologies are influenced by factors such as
their specific learning needs, academic goals, preferences for personalized


instruction, and the urgency of their information requirements. The availability of features, ease of use, and alignment with their educational objectives also play significant roles in shaping their decisions. A summary of why students choose a particular technology and what factors influence their decisions is provided in Table 1.
Recent advancements in chatbot development indicate that it is increasingly feasible for users to communicate with technology, as individuals become more accustomed to interacting with digital entities (Smutny & Schreiberova, 2020). Chatbots,
as communication simulation programs, aim to mimic intelligent spoken or
written conversations by providing responses to users’ questions or prompts
(Dahiya, 2017). Chatbots are used in diverse sectors such as education, market-
ing, customer service, training and technical assistance (Smutny & Schreiber-
ova, 2020). A chatbot that has recently gained widespread recognition across industries and demographics is the Chat Generative Pre-Trained Transformer (ChatGPT), developed by OpenAI and launched in November 2022 (Liebrenz et al., 2023). The launch attracted tremendous global attention, with the chatbot surpassing one million users within just five days of its initial release (Rudolph et al., 2023). This AI chatbot utilises a sophisticated natural language processing paradigm, and its output is often indistinguishable from human writing (Aydın & Karaarslan,
2023). Built upon a transformer design and incorporating a self-attention
mechanism, ChatGPT produces natural responses by understanding the con-
text of the conversation (Aydın & Karaarslan, 2023). While ChatGPT offers
numerous benefits, such as increased efficiency, improved accuracy, and cost
savings, concerns regarding security and limited capabilities have also been
raised (Deng & Lin, 2022). The popularity of ChatGPT has prompted other technology conglomerates to join the race in developing intelligent chatbots, with offerings such as Google Bard (Rahaman et al., 2023) and Microsoft Bing (Oxford Analytica, 2023), indicating that these types of chatbots are expected to become prominent across all sectors. The introduction of ChatGPT has
particularly piqued the interest of the education sector. Several studies have
sought to assess ChatGPT’s ability to answer questions in various exam for-
mats within the education system (Gilson et al., 2023; Nisar & Aslam, 2023).
Additionally, several studies have investigated the real-world implications of
ChatGPT in education, focusing on the viewpoints of educational institutions
and educators (Pavlik, 2023; Qadir, 2022; Zhai, 2022). However, while there
are existing studies examining student intentions to use AI platforms for edu-
cational purposes (Al-Sharafi et al., 2022; Ni & Cheung, 2023; Pillai et al.,
2023; Sing et al., 2022), there has been a lack of empirical research exploring
ChatGPT, especially in the education sector.

1.1 Objective of the study

Table 1  Summary of various AI tools and the factors influencing students’ usage decisions

| AI tool / technology | Why students might choose it | Factors influencing students’ decisions |
| ChatGPT (Fuchs, 2023) | Accessibility, immediate answers, conversational interface | Familiarity with text-based chat, need for quick information |
| Intelligent Tutoring Systems (Kurni et al., 2023) | Personalized subject-specific instruction, adaptive content | Academic goals, desire for structured learning, subject-specific needs |
| Massive Open Online Courses (MOOCs) (Al-Mekhlafi et al., 2022) | Comprehensive courses, certification | Desire for formal learning, comprehensive course content |
| Virtual Tutors (Sass & Ali, 2023) | High personalization, subject-specific expertise | Subject-specific needs, desire for one-on-one guidance |
| Educational Chatbots (Kuhail et al., 2023) | Quick responses, assistance | Ease of use, availability |
| AI-powered educational technologies (Nazaretsky et al., 2022) | Evaluation and feedback | Assessment requirements, progress tracking |
| Learning Management Systems (LMS) (Ashrafi et al., 2022) | Course management, content organization | Institutional adoption, course requirements |
| Adaptive Learning Platforms (Clark et al., 2022) | Customized learning paths, skill-building | Personalized learning preferences, skill development goals |
| Collaborative AI Tools (Ng et al., 2023) | Enhanced collaboration and teamwork skills | Group projects, collaborative learning objectives |

Despite the availability of studies examining student intention to use AI platforms for educational purposes (Al-Sharafi et al., 2022; Ni & Cheung, 2023; Pillai et al.,
2023; Sing et al., 2022), there is a notable gap in the research when it comes to
exploring ChatGPT from the perspectives of learners, specifically in terms of usage
intention and behaviour among higher education students. To understand the studies
conducted on ChatGPT so far, a literature search was conducted in Scopus up to June 17, 2023, using relevant keywords such as "ChatGPT" AND "Higher Education" OR "Academi*" OR "Tertiary teaching"; this yielded a total of 100 studies. Among these, only one empirical study on ChatGPT was identified: a study conducted by Strzelecki (2023) in Poland, which explores students’ acceptance and actual usage of ChatGPT. The majority of existing research on ChatGPT consists of commentary, reviews, editorials, or broad discussions, underscoring how few empirical investigations have been conducted in this specific area thus far.
Considering this gap, the present empirical investigation aims to extend the ChatGPT literature by examining the current behaviours and attitudes of higher education students towards their intentions and actual usage of ChatGPT.
Understanding how students utilise ChatGPT is a crucial foundational step for edu-
cators to develop suitable strategies in teaching, delivering assessments, providing
support, and increasing awareness of the possibility of plagiarism among students.
This understanding will enable effective communication with students regarding
the appropriate use of ChatGPT and equip educators with the knowledge to identify
unethical behaviours facilitated by AI. To the best of the author’s knowledge, no empirical study has been conducted in the Asian region to date on this topic. Therefore, this
study holds significance as it serves as one of the pioneering empirical investiga-
tions conducted in Asia. It contributes to the limited global body of research that
explores the factors influencing students’ adoption intentions and actual usage of
ChatGPT. Accordingly, the primary objective of this study is to offer empirical evidence that enhances the current theoretical discussions in the literature concerning
this significant research topic.
The study is organised as follows: The subsequent section provides a theoretical
background, followed by Section 3 which presents the literature review and hypoth-
eses development, proposing the conceptual framework that guides this research.
Sections 4 and 5 outline the methodology and present the study’s results. In Sec-
tion 6, the findings are discussed, and the study concludes with a summary, practi-
cal implications, and theoretical contributions. The limitations of the study are also
addressed in the last section.

2 Theoretical background

2.1 Technology acceptance model (TAM)

The technology acceptance model (TAM) is based on the psychological theories of reasoned action and planned behaviour and has become a crucial framework for comprehending the factors that influence whether people will embrace or refuse a technology (Marangunić & Granić, 2015). Perceived usefulness and perceived ease of use are two


critical factors within the technology acceptance model, which aids in understanding users’ adoption of technology (Davis, 1989). Perceived usefulness is defined as the extent to which an individual believes that utilising a specific system would enhance their job performance (Davis, 1985). Perceived ease of use, on the other hand, refers to the degree to which an individual believes that using a particular system would require minimal physical and mental effort (Davis, 1985). In the revised technology acceptance model, the authors introduced the concept of behavioural intention to explain how users’ behaviours in a technological system can be influenced by perceived ease of use
and perceived usefulness (Venkatesh & Davis, 1996). As AI continues to experience
significant growth across various industries, the technology acceptance model has pro-
vided theoretical insights into the increasing adoption of AI. This adoption is no longer
limited to the information system sector but has expanded to encompass a wide range
of industries that leverage technological advancements for transformative purposes. For
example, the application of the technology acceptance model has been utilised to sys-
tematically review AI implementation in the healthcare sector in the United Arab Emir-
ates, revealing successful practices for AI adoption (Alhashmi et al., 2019). Similarly,
the technology acceptance model, incorporating the constructs of perceived usefulness
and perceived ease of use, has been employed to explain AI acceptance among farmers
in the German agriculture sector, with the aim of enhancing crop production (Mohr &
Kühl, 2021).

2.2 Theory of planned behaviour (TPB) model

In addition to employing the TAM model, this study also incorporates the theory of planned behaviour (TPB) model to investigate how higher education students utilise ChatGPT in their learning. The theory of planned behaviour, introduced in 1985 by psychologist Icek Ajzen (1985) as an extension of the theory of reasoned action, has since gained significant prominence in the fields of social psychology and human behaviour sciences (Conner & Armitage, 1998). According to the
TPB model, motivational elements that affect behaviour are captured by intentions,
which also serve as indicators of how much effort a person is prepared to put forth
to carry out the action (Ajzen, 1991). The theory of planned behaviour is extensively
used in sociology and behaviour research, including student learning behaviours that
adopt new technology in higher education, such as readiness for a new mobile learning
approach (Cheon et al., 2012), employing blended learning (Anthony Jnr et al., 2020),
and adoption of advanced technology systems in library use (Rahmat et al., 2022). Behavioural intention has been found in numerous studies to significantly impact students’ actual adoption of AI platforms in learning. A study conducted by Almahri et al. (2020)
found a noteworthy and positive correlation between the intention of UK university stu-
dents to use AI chatbots and their actual usage of these chatbots to support their studies.
Another investigation by Hu (2022) focused on the smart learning context and revealed
that behavioural intention had a significant influence on the transformative learning
experiences of undergraduate students within smart environments. Therefore, the TAM
and TPB models serve as the fundamental frameworks underpinning the design of this
research study.


3 Review of literature and hypotheses formation

3.1 Perceived ease of use, perceived usefulness, and adoption intentions

The original technology acceptance model proposed by Davis (1985) establishes the
notion that perceived ease of use plays a pivotal role in influencing the perceived
usefulness within the user motivation domain. This relationship has been confirmed
by various studies. For instance, research conducted on Chinese students regard-
ing their experiences with intelligent tutoring systems demonstrated a significant
impact of perceived ease of use on perceived usefulness (Cao et al., 2021). Sim-
ilarly, a study involving secondary school students in China found that perceived
ease of use positively influenced the perceived usefulness of intelligent tutoring systems (Ni & Cheung, 2022). Furthermore, in the context of adopting AI-based personal intelligent agents, the intention of US college students to adopt was prominently associated with the perceived ease of use and usefulness of the agent (Moussawi et al., 2021). Hence, based on the aforementioned rationale, this study hypothesises that perceived ease of use influences perceived usefulness, forming the foundation for the following hypothesis:

H1a: The perceived ease of use positively influences the perceived usefulness of ChatGPT.

As online education has grown, Chinese college students’ inclination to utilise intelligent tutoring systems has been positively linked to their perceptions of ease of use and usefulness (Cao et al., 2021). Similarly, Akour et al. (2022) employed the
technology acceptance framework in their study and found that perceived ease of
use and usefulness significantly influenced students’ intention to adopt the metaverse
system in their learning experiences. Additionally, Hu (2022) conducted a study on
Taiwanese university students in the context of AI-supported smart learning envi-
ronments and confirmed that both perceived ease of use and perceived usefulness
directly impacted students’ behavioural intention to use the smart learning environ-
ment. Drawing upon these findings, the following hypothesis is proposed:

H1b: The adoption intention of ChatGPT is positively influenced by the perceived ease of use.

3.2 Perceived usefulness, perceived trust, interactivity, personalisation, and perceived intelligence

Baby and Kannammal (2020) expanded the TAM model with a trust construct to investigate users’ beliefs regarding new technologies. Perceived trust
refers to the level of trust users have in the reliability and security of the technology


they are using (Komiak & Benbasat, 2006). Trust in human–computer interactions
has a significant impact on technology usage and computer-based platforms. It helps
reduce risks, uncertainties, and anxieties associated with technological interactions,
promotes positive and meaningful experiences with technology, and facilitates the
development of healthy relationships with related systems (Gulati et al., 2019). In
the context of the technology acceptance model and its application to understand the
usage of AI, trust perception indirectly influences the behavioural intention to use
AI technology and its associated applications (Choung et al., 2023). In the context
of AI, trust is viewed from two perspectives. The first perspective involves a col-
lection of specific beliefs, known as trusting beliefs, which encompass aspects such
as goodwill, competence, morality, and predictability. These beliefs shape an indi-
vidual’s perception of trust in AI systems. The second perspective relates to the will-
ingness of one party to rely on another in a potentially risky or uncertain situation,
known as trusting intention (Siau & Wang, 2018). Trust was also proven to directly
increase behavioural intention towards AI-provided services in the healthcare sector
(Liu & Tao, 2022; Mohd Rahim et al., 2022). When individuals perceive a tech-
nology as useful, it can positively influence their trust in the technology (Aw et al.,
2019; Harrigan et al., 2021). Therefore, trust has the potential to act as a mediator
between perceived usefulness and adoption intention. By instilling a sense of confi-
dence, reducing perceived risks, and increasing the likelihood of technology adop-
tion, trust plays a crucial role in influencing users’ intention to adopt a particular
technology. Building upon these foundations, the following hypothesis is formulated
for this study:

H2a: Perceived usefulness influences the adoption intention of ChatGPT, mediated via perceived trust.

In addition to the original constructs of the Technology Acceptance Model (TAM), such as perceived ease of use and perceived usefulness, researchers have
expanded the model by incorporating additional constructs to enhance the under-
standing of technology adoption in diverse contexts. One such extension of the
TAM, proposed by Almaiah et al. (2016), emphasises the significance of person-
alisation and interactivity within the technology system. Personalisation (PSN) is
tailoring the technology to provide a personal experience to the users, which can
help them meet their individual needs (Almaiah et al., 2016). Baby and Kannammal
(2020) added the trust construct to explain new technology usage. The study by Liu
and Tao (2022), targeting AI systems in particular, proposed an extended use of
personalisation and trust alongside the technology acceptance model’s constructs. In
information systems, personalisation is defined differently in different fields; how-
ever, on a conceptual level, it can be broadly defined as catering to different objects
to meet the personal needs of people in different sectors (Fan & Poole, 2006).
Conversely, interactivity (IN) pertains to the capacity of AI tools to engage in
interactions with users, leading to a sense of responsibility and adaptability (Kang
et al., 2016; Pillai et al., 2023). In the educational context, instructional interactiv-
ity is perceived from the students’ standpoint and is only established once a mes-
sage loop, involving communication from the learner and back, is completed (Yacci,


2000). Chatbots and users exchange information and interact so that the chatbot can understand what the user is attempting to say and provide the best possible answer. Technology adoption can be high when there is high interactivity with the technology, as this helps users to engage better. Therefore, interactivity and personalisation have been employed as
factors to examine the connection between AI interactive technology and users, as
they have the potential to significantly influence users’ decisions. The interactivity
of AI tools influences users’ perceptions of their usefulness (Arghashi & Yuksel,
2022; Liaw & Huang, 2013). When ChatGPT is interactive, capable of engaging
in meaningful conversations, understanding user needs, and providing relevant and helpful responses, users may be more likely to perceive it as a useful tool. Like-
wise, when users perceive chatbots as having the ability to comprehend their spe-
cific requirements and deliver appropriate and personalised responses, it may have a
positive impact on their intention to adopt and utilise the technology. Consequently,
this study proposes the following two hypotheses:

H2b: The relationship between perceived usefulness and the adoption intention of
ChatGPT is mediated by interactivity.
H2c: Perceived usefulness affects the adoption intention of ChatGPT mediated via
personalisation.
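The mediation logic behind hypotheses such as H2a–H2d can be illustrated with a minimal regression sketch on synthetic data. This is an illustration only, not the paper's analysis: the study estimates these paths jointly with SEM on survey responses, and the variable names and coefficients below are invented for the demonstration. The indirect effect of perceived usefulness (PU) on adoption intention through a mediator such as personalisation (PSN) is the product of the path from PU to PSN (a) and the path from PSN to adoption intention (b), estimated while controlling for the direct path (c').

```python
import numpy as np

# Synthetic illustration of a single mediation path (PU -> PSN -> adoption intention).
# The "true" path coefficients are chosen arbitrarily for the demo.
rng = np.random.default_rng(42)
n = 1000
pu = rng.normal(size=n)                                     # perceived usefulness
psn = 0.6 * pu + rng.normal(scale=0.5, size=n)              # personalisation (a-path = 0.6)
ai = 0.5 * psn + 0.1 * pu + rng.normal(scale=0.5, size=n)   # adoption intention (b = 0.5, c' = 0.1)

def ols(X, y):
    """Return OLS coefficients for y ~ X, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(pu, psn)[1]                        # estimated PU -> PSN path
coefs = ols(np.column_stack([psn, pu]), ai)
b, c_prime = coefs[1], coefs[2]            # PSN -> AI (b) and direct PU -> AI (c')
indirect = a * b                           # mediated effect of PU on adoption intention

print(f"a = {a:.2f}, b = {b:.2f}, c' = {c_prime:.2f}, indirect = {indirect:.2f}")
```

With this seed and sample size the estimates land close to the generating values, giving an indirect effect near 0.6 × 0.5 = 0.3 alongside a small direct effect. In the study itself, the analogous paths are estimated simultaneously within the SEM, and it is the significance of these indirect effects that H2a–H2d test.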

Another factor with notable potential to impact students’ ChatGPT adoption intention is perceived intelligence (PI). PI represents the user’s perception of the
chatbots’ effectiveness, independence, comprehension, and ability to provide rele-
vant output through natural language interactions (Bawack, 2021). PI further relates
to competence, efficiency and usage, and ability of the AI tool to provide effective
output (Pillai & Sivathanu, 2020; Yu, 2020). The notion of PI originated alongside
advancements in AI and has been closely linked to the uncanny valley theory (Gray
& Wegner, 2012; Ho & Macdorman, 2010). Perceiving a technology’s intelligence
involves evaluating how closely it can replicate human cognitive behaviours (Chuah
et al., 2021). In the study by Moussawi et al. (2021), perceived intelligence was used as an extended determinant in the technology acceptance framework. AI technologies and their applications may show more or less random behaviours during interactions with users, and users in turn interpret and evaluate these behaviour patterns as intelligence (Bartneck et al., 2009). Therefore, the PI of technology is
based on its competence to deliver an outcome expected by the users during the
interaction (Bartneck et al., 2009). The PI of ChatGPT can act as a mediating fac-
tor between perceived usefulness and adoption intention. When users perceive Chat-
GPT as intelligent and able to engage in meaningful and intelligent conversations,
it may positively affect their intention to adopt and use this interactive AI tool. The
below hypothesis is designed based on these arguments from the literature:

H2d: Perceived intelligence plays a mediating role in the relationship between the
perceived usefulness of ChatGPT and the intention to adopt it.

In the study by Chai et al. (2020), students’ perception of AI usefulness contributed to their intention to make use of AI knowledge and utilise the technology for


various purposes. It also fostered the intention to learn how to use AI adequately
among the researched students, signalling the emerging rise of AI practices among
the young generation in the future (Sing et al., 2022). The study conducted by Al-
Sharafi et al. (2022) further confirmed the significance of perceived usefulness in
influencing the continued use of chatbots for educational purposes among students
in Malaysian public universities. Building upon the foundations of the technology
acceptance model and drawing on empirical evidence from AI adoption in educa-
tion, this study puts forth the following hypothesis:

H2e: Perceived usefulness positively influences the adoption intention of ChatGPT.

3.3 Perceived trust and adoption intentions

Kumar et al. (2022) extended the technology acceptance model and found that
trust had a positive indirect effect on students’ intention to adopt AI technology in
higher education, specifically in the context of blockchain. Trust in AI was found
to be prominent and have a direct impact on South Indian university students and
their intention to use AI-enabled platforms, including the AI job application process
(Kandoth & Shekhar, 2022). Kim et al. (2021) conducted a study and identified a
direct impact of trust on adoption intention in consumer responses towards AI, which
includes the intention to use platforms and devices that involve AI. In the higher
education context, Mohd Rahim et al. (2022) conducted a study and discovered
that perceived trust had a direct and positive impact on the behavioural intention of
Malaysian students to adopt AI chatbot platforms for their university learning. This
direct association between trust and AI-powered platforms was also observed in a
sample of US students (Moussawi et al., 2021). As a result, it is anticipated that stu-
dents’ intention to use ChatGPT will be positively influenced by their perceived trust
in this chatbot. Based on the above premise, the below hypothesis is formulated:

H3: Perceived trust positively influences the adoption intention of ChatGPT.

3.4 Interactivity and adoption intentions

In South Korea, Kim et al. (2022) conducted a study that revealed a significant posi-
tive relationship between users’ perceived interactivity with an AI device and their
intention to purchase and use the device. The presence of highly interactive avatars
in a smart online platform was found to promote users’ intention to use and interact
with the platform in real-time, as determined in a study by Etemad-Sajadi (2016). In
terms of impact on the student population, a high level of interactivity on web plat-
forms helped elevate college students’ intention to participate in physical activities
(Lu et al., 2014). In the computer-mediated environment of college and university
education, interactivity also contributes greatly to the student experience and inten-
tion to use virtual reality–based MR training systems (Chang et al., 2018). Further-
more, the interactivity of the learning system was discovered to influence the behav-
ioural intention of students and the actual use of the system among undergraduates


in Malaysia (Baleghi-Zadeh et al., 2017). Based on this existing knowledge, interactivity is hypothesised in this study to contribute to a rise in the adoption intention of ChatGPT.

H4: Interactivity positively influences the adoption intention of ChatGPT.

3.5 Personalisation and adoption intentions

The primary goal of personalisation is to deliver responses by understanding user requirements and contexts (Ho, 2006). Personalisation of AI technology plays
a noteworthy role in accepting and preparing for its usage behaviours across vari-
ous backgrounds. The study conducted by Liu and Tao (2022) in China discovered
that personalisation positively influenced the behavioural intention to utilise AI-
based smart healthcare services. Particularly within a rising need for learner-based
personalised education, technological advancement offers a foundation for digital
learning environments to form the systematic adoption of instruction for each stu-
dent (Tetzlaff et al., 2021). Therefore, AI has the potential to address this challenge
effectively due to its flexibility. To enhance learning outcomes and efficiency, an AI-
powered system that provides personalised learning assistance is designed to capture
students’ evolving behaviours and individualised information as input (Ingkavara
et al., 2022). Personalised technology in education holds great promise for students,
both in terms of their engagement in learning and the possibility of better learning
results (Tetzlaff et al., 2021). In practical application, Krouska et al. (2022) con-
ducted a study that investigated the effect of personalisation in an educational tech-
nology system and found a significantly positive impact on students’ learning using
that system. In the context of Indian higher education students, Pillai et al. (2023)
found that the personalisation of teaching chatbots had a statistically significant pos-
itive impact on the students’ intention to adopt chatbot usage. Therefore, based on
these findings, this study proposes that personalisation will positively contribute to students' intention to use the AI chatbot ChatGPT in their learning.

H5: Personalisation positively influences the adoption intention of ChatGPT.

3.6 Perceived Intelligence and adoption intentions

Moussawi and Koufaris (2019) conducted a study that found an indirect impact
of perceived intelligence on the intention to continue using AI-based personal
assistant technologies, such as virtual voice assistants like Siri and Alexa. This
implies that users’ perception of the intelligence exhibited by these virtual assis-
tants influences their willingness to continue using them. Users' perception of the intelligence of AI-based virtual assistants in the form of chatbots was shown to significantly affect their intention to use them, depending on how responsive the assistant is to prompts and how well it recognises the user's voice (Bawack, 2021). Perceived intelligence of AI has recently been gaining attention in the education sector, although the limited knowledge still leaves room for further


exploration. Moussawi et al. (2021) discovered that perceived intelligence among college students indirectly influenced their intention to adopt AI technology and
its associated agents, primarily through the mediating factor of perceived useful-
ness. Similarly, in the context of using a teacher chatbot to facilitate learning,
students’ perception of the chatbot’s intelligence was a significant determinant of
their intention to use it, as it was linked to receiving instant and accurate answers
and feedback (Pillai et al., 2023). Therefore, based on these findings, this study
proposes a relationship between students’ perception of ChatGPT’s intelligence
and their adoption intention of ChatGPT.

H6: Perceived intelligence positively influences the adoption intention of ChatGPT.

3.7 Adoption intentions and actual usage

Actual usage refers to the behaviours of individuals using ChatGPT after they have adopted it (Pillai et al., 2023). It signifies the degree to which individuals actively interact with and make use of the chatbot for diverse objectives. Individuals with a strong adoption intention are more likely to explore and actively use ChatGPT than those with a weak adoption intention. Adoption intention sets the initial motivation and predisposition for individuals to engage with the technology, which can influence their subsequent usage behaviour (Wei et al., 2021). Hence, adoption intention is strongly related to the actual usage of technology-enhanced tools, as supported in the literature (Pillai et al., 2023; Hoque & Sorwar, 2017). Based on this discussion, the study formulates the last hypothesis as below:

Fig. 1  Study’s proposed theoretical model


H7: Adoption intentions positively influence actual usage of ChatGPT.

The proposed research model, consisting of all seven hypotheses of the study, is illustrated in Fig. 1.

4 Methodology

4.1 Survey instrument design and data collection

The survey instrument employed for this study was adopted from the constructs of the TAM and TPB models and further informed by the literature on various studies of chatbot adoption and usage. The survey instrument consisted of a first section with several demographic questions, followed by sections on TAM components such as perceived ease of use (PEU) and perceived usefulness (PU). The next section contained questions related to perceived trust (PT), interactivity (IN), personalisation (PSN), and perceived intelligence (PI). The last section of the questionnaire contained questions on the TPB components of ChatGPT adoption intention (AI) and its actual usage (AU). The dependent variable considered for this study is AU, while the independent variables consisted of the remaining seven variables. The survey questions for all items were designed on a scale of 1 to 7 (1 = extremely disagree to 7 = extremely agree). PEU consisted of a six-item scale, while PU was made up of a five-item scale. PT, IN, PSN, and PI comprised four items, five items, three items, and five items, respectively. There were six items in the AI construct, while AU contained a four-item scale. The survey instrument items were drawn from past studies, as illustrated in Table 2. The factor loadings of each item are also presented in Table 2. All items of the constructs were retained except for AI, where one item was removed due to a factor loading lower than 0.4.
Before data collection, an a priori sample size was calculated using the online calculator suggested by Soper (2021) for structural equation modelling to identify the appropriate sample size needed for this study. Considering an anticipated effect size of 0.4, which is commonly used as the average effect in educational research (Cohen, 1988), the calculator recommended a sample size of 89. After estimating the required sample size and obtaining ethics approval for the study, data collection was initiated. To recruit participants, the survey links were posted on public social media platforms. Participants were required to meet two inclusion criteria: (1) being an undergraduate or postgraduate student and (2) studying at a university in Vietnam. The data was collected from undergraduate and postgraduate students enrolled in different universities in Vietnam, and there was a total of 108 respondents for this study, which was deemed sufficient as per the a priori sample size calculation. As all survey items in the questionnaire were mandatory, there were no incomplete or missing data.

Table 2  Mean, standard deviation, skewness, kurtosis, factor loading, reliability, convergent validity and divergent validity of study’s constructs
Construct with items and adopted source Mean S.D Skewness Kurtosis Factor loading CA CR AVE

Perceived Ease of Use (PEU) (Davis, 1989) 5.061 0.957 −0.017 1.963 0.872 0.904 0.612
PEU1: ChatGPT would be flexible and easy to use −0.55 1.161 0.761
PEU2: ChatGPT would be easy to access for my studies −0.51 1.002 0.831
PEU3: It would be easy and clear to interact with ChatGPT −0.432 1.255 0.820
PEU4: I believe skills needed to use ChatGPT is easy −0.247 0.56 0.701
PEU5: It would be easy to access study-related information from ChatGPT −0.489 1.253 0.785
PEU6: It would be easier to get things done and solve queries using ChatGPT than if not using it −0.286 0.259 0.791
Perceived usefulness (PU) (Davis, 1989) 4.748 1.201 −0.433 3.400 0.918 0.939 0.758
PU1: ChatGPT is useful for my studies −0.292 0.128 0.876
PU2: I feel that if I used ChatGPT, I would learn better −0.472 0.038 0.891
PU3: ChatGPT would answer all my queries and provides answers as per my expectations −0.157 0.141 0.781
PU4: ChatGPT would help me to improve the efficiency and quality of learning −0.606 0.479 0.922
PU5: ChatGPT would provide prompt access to study issues irrespective of my location −0.685 1.045 0.875
Perceived Trust (PT) (Tarhini et al., 2017) 3.798 1.273 0.111 1.338 0.939 0.957 0.847
PT1: I feel Interaction with ChatGPT would be secure enough −0.102 0.401 0.871
PT2: I trust that my activities while interacting with ChatGPT would be private and safe −0.049 0.133 0.957
PT3: I feel that my details available with ChatGPT would be kept confidential −0.112 0.08 0.927
PT4: Overall, I feel that nobody will be able to access my personal information from ChatGPT −0.057 0.224 0.925
Interactivity (IN) (Etemad-Sajadi, 2016) 4.473 1.084 −0.030 0.900 0.864 0.905 0.656
IN1: ChatGPT would allow me to interact with it and get the information required −0.352 1.551 0.749
IN2: Interactive features of ChatGPT would meet my needs in learning −0.598 1.359 0.809
IN3: ChatGPT would help to get the required information easily without asking the professor 0.086 −0.116 0.798
IN4: ChatGPT would provide the required information without writing an email to my university −0.003 −0.331 0.803

IN5: ChatGPT would be efficient for interacting and fulfilling my learning needs −0.56 0.673 0.884
Personalisation (PSN) (Komiak & Benbasat, 2006) 4.323 1.182 −0.034 0.181 0.833 0.900 0.751
PSN1: ChatGPT would understand my individual learning needs 0.066 0.447 0.829
PSN2: ChatGPT would provide learning information based on my personal needs −0.307 0.221 0.888
PSN3: I feel that ChatGPT provide learning materials as per my individual needs −0.223 0.489 0.881
Perceived Intelligence (PI) (Pillai et al., 2023) 4.185 1.055 −0.331 0.742 0.842 0.891 0.620
PI1: ChatGPT is competent for teaching −0.176 0.338 0.789
PI2: ChatGPT is knowledgeable to answer my queries −0.042 0.086 0.828


PI3: ChatGPT would assist me in learning −0.877 2.335 0.763
PI4: I feel that ChatGPT is intelligent as a teacher in class 0.23 −0.409 0.756
PI5: I feel that ChatGPT gives sensible answers −0.505 0.661 0.8
Adoption Intention (AI) (Tarhini et al., 2017) 4.870 1.121 −0.299 0.518 0.914 0.937 0.750
AI1: I intend to use the ChatGPT for my studies and exam if my college allows this −0.473 0.155 0.863
AI2: Given a chance, I intend to use ChatGPT to answer my queries related to the study −0.407 0.36 0.904
AI3: I feel that I would be using ChatGPT for my studies −0.573 0.781 0.916
AI5: I feel I would interact with ChatGPT for my studies −0.409 0.586 0.863
AI6: I am willing to use ChatGPT if the possible rewards are high enough −0.27 0.144 0.777
Actual Usage (AU) (Chow et al., 2012) 3.993 1.438 0.055 -0.306 0.875 0.914 0.728
AU1: I use ChatGPT on a daily basis 0.446 −0.555 0.827
AU2: I use ChatGPT frequently 0.12 −0.651 0.912
AU3: I use ChatGPT for learning very often 0.028 −0.825 0.917
AU4: I use ChatGPT as it is able to support my unattended and urgent questions related to academic matters −0.653 0.826 0.745

CA Cronbach’s alpha (α); CR Composite reliability; AVE Average variance extracted; S.D Standard deviation


4.2 Structural equation model (SEM)

The data collected for this study were analysed using a Structural Equation Model (SEM) to examine the seven proposed hypotheses. SEM was chosen as an appropri-
ate analysis method as it combines Exploratory Factor Analysis (EFA) and multiple
regression analysis, allowing for the exploration of causal relationships between var-
iables (Maheshwari, 2022). Prior to conducting the hypothesis testing using SEM,
several tests were performed, as discussed in the results and analysis section.
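As an illustration only (the study specified its model in AMOS, and this sketch is not that exact specification), the hypothesised measurement and structural model could be written in lavaan/semopy-style model syntax, using the construct and item names from Table 2; one plausible way to encode the mediated paths (H2a–H2d) is to regress each mediator on PU:

```
# Measurement model (reflective indicators, names from Table 2)
PEU =~ PEU1 + PEU2 + PEU3 + PEU4 + PEU5 + PEU6
PU  =~ PU1 + PU2 + PU3 + PU4 + PU5
PT  =~ PT1 + PT2 + PT3 + PT4
IN  =~ IN1 + IN2 + IN3 + IN4 + IN5
PSN =~ PSN1 + PSN2 + PSN3
PI  =~ PI1 + PI2 + PI3 + PI4 + PI5
AI  =~ AI1 + AI2 + AI3 + AI5 + AI6
AU  =~ AU1 + AU2 + AU3 + AU4

# Structural model (H1a; mediator paths for H2a–H2d; H1b, H2e, H3–H6; H7)
PU  ~ PEU
PT  ~ PU
IN  ~ PU
PSN ~ PU
PI  ~ PU
AI  ~ PEU + PU + PT + IN + PSN + PI
AU  ~ AI
```

The `=~` lines define each latent construct from its survey items, and the `~` lines correspond one-to-one with the hypothesised paths in Fig. 1.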

5 Results and analysis

5.1 Analysis of descriptive statistics and correlations among variables

First, the participants' characteristics were analysed, showing that the average age of participants was approximately 21 years. The majority of participants (79%) identified as female. Among the students, 98% pursued undergraduate degrees, with 33% studying in public universities and the remaining two-thirds attending private universities. Next, the descriptive statistics of all the constructs were calculated (as in Table 2), and the results exhibited that the highest mean was for the PEU variable, with a value of 5.1, and the lowest for PT, with a value of 3.8, which shows that students do not yet completely trust the ChatGPT tool, although they found it easy to use. The means of all variables were above 3.5 (on a scale of 1 to 7).
Significant correlations were observed between the dependent variable (AU) and all the independent variables, as evidenced by the findings presented in Table 3. The highest correlation of AU was with AI (r = 0.75) and the lowest with PT (r = 0.40); this resonates with the above finding on mean values, which shows that students' trust in using ChatGPT in their education is yet to be built.

Table 3  Variables correlations and discriminant validity

                                 1       2       3       4       5       6       7       8       VIF
1. Actual Usage (AU)             0.853
2. Perceived Ease of Use (PEU)   .557**  0.783                                                   2.390
3. Perceived Usefulness (PU)     .723**  .706**  0.870                                           4.965
4. Personalisation (PSN)         .562**  .431**  .672**  0.866                                   2.822
5. Interactivity (IN)            .619**  .646**  .764**  .764**  0.810                           3.885
6. Perceived Trust (PT)          .402**  .355**  .443**  .490**  .483**  0.921                   1.463
7. Perceived Intelligence (PI)   .717**  .538**  .794**  .594**  .691**  .493**  0.788           3.109
8. Adoption Intentions (AI)      .746**  .594**  .713**  .523**  .579**  .400**  .638**  0.853   2.208

** Significant correlation at the 0.01 level (2-tailed)
The diagonal (bold in the original) represents the square root of AVE, as a measure of discriminant validity


5.2 Testing the measurement model using Confirmatory Factor Analysis (CFA)

To validate the model before applying the Structural Equation Model (SEM), the
CFA was conducted using AMOS software, version 28. The fit measures, includ-
ing Cmin/df = 1.610, IFI = 0.899, TLI = 0.880, CFI = 0.896, and RMSEA = 0.077,
met the recommended threshold levels proposed by Hair et al. (2011), indicating
that the model was considered a good fit.

5.3 Assessment of variables for normality, multicollinearity, validity, and reliability

Once the model was validated, the skewness and kurtosis measures were used
to assess the normality of the data for both the independent and dependent vari-
ables. The obtained skewness and kurtosis values fell within the accepted range
of −2 to +2 and −7 to +7, respectively, indicating a normal distribution of the data
(Hair et al., 2010) (as shown in Table 2). Additionally, to examine the presence of
multicollinearity among the variables, the variance inflation factor (VIF) was calculated. The VIF values for all variables were below the threshold of 5 (Mahesh-
wari, 2021), indicating the absence of multicollinearity in the dataset (Table 2).
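As a generic sketch (using synthetic data, not the study's dataset), the VIF for each predictor can be obtained from the diagonal of the inverse of the predictors' correlation matrix, which equals 1/(1 − R²) for each predictor regressed on the others:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictor matrix: 108 observations, three predictors,
# with x1 and x2 deliberately correlated and x3 independent
x1 = rng.normal(size=108)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=108)
x3 = rng.normal(size=108)
X = np.column_stack([x1, x2, x3])

# VIF_j = 1 / (1 - R^2_j), equivalently the j-th diagonal element
# of the inverse of the predictors' correlation matrix
corr = np.corrcoef(X, rowvar=False)
vif = np.diag(np.linalg.inv(corr))

print(np.round(vif, 2))  # values near 1 indicate little multicollinearity
```

Here the correlated pair (x1, x2) produces inflated VIFs, while the independent x3 stays near 1; values below 5 are taken as acceptable, matching the threshold applied in the study.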
The reliability and validity of the model were assessed in the next step. Internal
reliability was examined using Cronbach’s alpha (CA), while construct reliabil-
ity (CR) was further evaluated. Discriminant validity was assessed by calculating
the square root of the average variance extracted (AVE) (Gonzalez-Tamayo et al.,
2023). The CA values for the variables ranged from 0.83 to 0.94, while the CR values for the constructs ranged from 0.89 to 0.96 (as presented in Table 2). These
values exceeded the cutoff value of 0.7 (Hair et al., 2011), indicating high internal
reliability. Moreover, the AVE values for all constructs surpassed the threshold of
0.5 (Hair et al., 2011) (Table 2). Discriminant validity was confirmed by compar-
ing the square root values of AVE with the correlations between each construct,
both horizontally and vertically (as indicated in bold on the diagonal in Table 3)
and none of the correlations were higher than the square root values of AVE.
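For illustration, composite reliability and AVE can be recomputed from a construct's standardised factor loadings; applying the usual formulas to the six PEU loadings in Table 2 reproduces the reported values:

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardised factor loadings, illustrated with the PEU
# loadings reported in Table 2.
peu_loadings = [0.761, 0.831, 0.820, 0.701, 0.785, 0.791]

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

cr = composite_reliability(peu_loadings)
ave = average_variance_extracted(peu_loadings)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, sqrt(AVE) = {ave ** 0.5:.3f}")
```

With the published loadings this yields CR = 0.904 and AVE ≈ 0.612–0.613 (the small difference from Table 2 reflects rounding of the loadings), and √AVE ≈ 0.783 matches the PEU diagonal entry used for discriminant validity in Table 3.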

5.4 Common method bias

Before proceeding with hypothesis testing using SEM, an assessment of common method bias was conducted as a final step. Common method bias can be present
in self-reported surveys, and measures were taken to mitigate this bias through
procedural and statistical approaches (Olarewaju et al., 2023). Procedurally, par-
ticipants were assured of the anonymity and confidentiality of their data and were
encouraged to provide honest responses to the survey questions. Furthermore, a
statistical test known as Harman’s one-factor test was employed. The results of
this test indicated that a single factor accounted for only 45.3% of the explained


variance, which fell below the threshold of 50% (Podsakoff et al., 2012). This
suggests that common method bias was not a major concern in the study.
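Harman's one-factor test is commonly approximated by an unrotated single-factor extraction; a minimal sketch (synthetic data, numpy only, and a PCA-style approximation rather than the exact factoring a statistics package would use) takes the largest eigenvalue of the item correlation matrix as the variance attributable to a single factor:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic responses: 108 respondents, 10 survey items sharing one
# common factor plus item-specific noise (a stand-in for real data)
common = rng.normal(size=(108, 1))
items = 0.6 * common + 0.8 * rng.normal(size=(108, 10))

# Proportion of total variance captured by the first (unrotated)
# factor of the item correlation matrix
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
first_factor_share = eigvals[0] / eigvals.sum()

# Common method bias is flagged when a single factor explains more
# than 50% of the variance (Podsakoff et al., 2012)
print(f"first factor explains {first_factor_share:.1%} of variance")
```

In the study, the corresponding share was 45.3%, below the 50% cutoff, so common method bias was judged not to be a major concern.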

5.5 Hypotheses testing results from SEM

Once all the necessary tests including CFA, normality, reliability, validity, and com-
mon method bias were conducted, SEM was employed to test the seven designed
hypotheses of the study using AMOS, version 28. The model fit was evaluated and found to be good, as indicated by the following values: Cmin/df = 1.644,
IFI = 0.890, TLI = 0.873, CFI = 0.887, and RMSEA = 0.079. All these measures fell
within the threshold values recommended by Hair et al. (2011). The results of the
hypothesis testing are presented in Table 4 and Fig. 2.
The study's first hypotheses (H1a and H1b) were designed to examine the impact of perceived ease of use on both the adoption intention of ChatGPT and perceived usefulness. The findings indicate that PEU significantly and positively influenced both variables, with a stronger influence on PU (β = 0.815) than on AI (β = 0.269). The next set of hypotheses (H2a, H2b, H2c, H2d, H2e) examined the direct influence of PU on AI and its indirect relationships with AI mediated by PT, IN, PSN, and PI. The results highlighted that PU does not influence AI directly but has a significant indirect effect on AI via IN and PSN; the indirect effect via IN was negative (β = −0.577), whereas the influence via PSN was positive (β = 0.334). The remaining indirect influences of PU on AI, via PT and PI, were insignificant. The subsequent four hypotheses (H3, H4, H5, H6) aimed to assess the direct impact of PT, IN, PSN, and PI on AI, respectively. The results suggest that PT and PI do not influence students' AI, while IN and PSN significantly influence AI, with IN having a negative effect (β = −0.604) and PSN having a positive effect (β = 0.443). The last hypothesis (H7) of the study aimed to examine the impact of AI on AU, and the results indicated

Table 4  Path analysis results (from Structural Equation Model)

Hypothesis   Path             Coefficients   Results
H1a          PEU → PU         0.815***       Supported
H1b          PEU → AI         0.269*         Supported
H2a          PU → PT → AI     −0.028         Not Supported
H2b          PU → IN → AI     −0.577*        Supported
H2c          PU → PSN → AI    0.334*         Supported
H2d          PU → PI → AI     0.489          Not Supported
H2e          PU → AI          0.429          Not Supported
H3           PT → AI          −0.057         Not Supported
H4           IN → AI          −0.604*        Supported
H5           PSN → AI         0.443*         Supported
H6           PI → AI          0.603          Not Supported
H7           AI → AU          0.968***       Supported

* p < .05, ** p < .01, *** p < .001


Fig. 2  Results from SEM of study’s proposed theoretical model

a significant and positive influence of AI on AU (β = 0.968); this effect is the highest amongst all the significant effects observed in this research.
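The mediated paths reported above (e.g. H2c, PU → PSN → AI) are estimated as products of two path coefficients. A minimal bootstrap sketch of such an indirect effect, on synthetic data with hypothetical variable names (the study itself obtained these estimates from the full SEM, not from simple regressions), looks like this:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 108  # same sample size as the study; the data here is synthetic

# Hypothetical mediation chain: PU -> PSN -> AI
pu = rng.normal(size=n)
psn = 0.7 * pu + rng.normal(scale=0.7, size=n)
ai = 0.5 * psn + rng.normal(scale=0.8, size=n)

def path(y, x):
    """Slope of a simple OLS regression of y on x."""
    return np.polyfit(x, y, 1)[0]

# Point estimate of the indirect effect: product of the two paths
a, b = path(psn, pu), path(ai, psn)
indirect = a * b

# Percentile bootstrap for a 95% confidence interval
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(path(psn[idx], pu[idx]) * path(ai[idx], psn[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An indirect effect is judged significant when the bootstrap interval excludes zero, which parallels the significance decisions reported for H2a–H2d in Table 4.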

6 Discussion

This research aimed to explore the adoption intentions and actual usage of ChatGPT
among higher education students in Vietnam. The initial findings indicate a positive
influence of students’ perceived ease of use (PEU) on their adoption intentions (AI)
towards ChatGPT. This positive relationship between PEU and adoption intentions
has been observed in previous studies by Ni and Cheung (2022) and Hu (2022).
Additionally, the study revealed a direct impact of PEU on the perceived useful-
ness (PU) of the system as perceived by the students. This finding is not surpris-
ing considering that ChatGPT offers a user-friendly interface and prompt responses,
enhancing students’ comfort and engagement. Similar results have been reported in
studies conducted by Cao et al. (2021) and Moussawi et al. (2021).
Unlike previous studies (Al-Sharafi et al., 2022; Sing et al., 2022) that indicated
a positive relationship between perceived usefulness (PU) and adoption intentions
(AI), the present study yielded contrasting results, as perceived usefulness was not
found to have a significant impact on students’ adoption intentions towards Chat-
GPT. This inconsistency may be attributed to several factors that warrant considera-
tion. One possible reason is that students in this study may have limited familiarity
with ChatGPT, as the system is still in its infancy. Because of this limited exposure, students may not fully understand the potential applications and benefits that ChatGPT can provide. Consequently, their perceived usefulness of
the tool might be lower, leading to weaker adoption intentions. Additionally, stu-
dents may already be using alternative technological tools that they find more com-
fortable or effective for their needs. This existing preference for alternative tools


could diminish their perception of the usefulness of ChatGPT, resulting in weaker adoption intentions. The quality and accuracy of the responses generated by the sys-
tem could have been another factor that impacted the perceived usefulness of Chat-
GPT. If students have experienced instances where the AI tool provided inaccurate
or unreliable responses, it could raise doubts about the trustworthiness and over-
all usefulness of ChatGPT. Such doubts could contribute to the weaker influence of
perceived usefulness on adoption intentions.
Although no direct effect of perceived usefulness (PU) on adoption intentions
(AI) was observed in this study, significant indirect effects were found through the
mediating variables of interactivity and personalisation. The indirect effect of PU on
AI through interactivity was found to be negative, while the effect through person-
alisation was positive. These findings differ from previous studies (Baleghi-Zadeh
et al., 2017; Etemad-Sajadi, 2016) which demonstrated a positive influence of inter-
activity on behavioural intention. The negative effect of interactivity in this study
can be explained by the possibility that the interactivity provided by ChatGPT did not meet user expectations or needs in terms of responsiveness, efficiency, or effectiveness; this mismatch may have reduced perceptions of usefulness and weakened adoption intentions. Another
explanation could be that students had preconceived notions about the level of inter-
activity ChatGPT could offer, and when those expectations were not met, it nega-
tively impacted their perception of usefulness and adoption intentions. Conversely,
the indirect positive influence of perceived usefulness (PU) on adoption intentions
(AI) through personalisation can be attributed to students benefiting from person-
alised recommendations, suggestions, or solutions provided by ChatGPT. This per-
sonalisation likely contributed to their perception of ChatGPT as a time-saving and
effective tool. The perception of efficiency and effectiveness in personalised interac-
tions may have enhanced the students’ perception of usefulness and, consequently,
driven their adoption intentions.
In contrast to the indirect effects mediated by interactivity and personalisation,
this study did not find significant indirect effects of perceived usefulness mediated
via perceived trust and perceived intelligence on adoption intentions. Despite the
significance of perceived trust and perceived intelligence in technology acceptance,
they may have been perceived as secondary or less influential compared to other
factors in the specific context of ChatGPT usage among university students as this
tool is comparatively new. Students might have prioritised practical benefits and
personalisation over trust or intelligence, considering ChatGPT as a recent tool that
requires time to build trust. The results of this study contrast with the findings of
Moussawi et al. (2021), where perceived intelligence was found to have an indirect
impact on the intention to adopt AI technology through perceived usefulness among
college students.
The study revealed that personalisation and interactivity serve as significant
mediators, and further to this, these variables also have notable direct impacts on
students’ adoption intentions (AI). Specifically, personalisation was found to have
a positive influence, while interactivity had a negative influence. These findings are


consistent with the results reported in similar research conducted by Krouska et al.
(2022) and Pillai et al. (2023), where personalisation positively influenced technol-
ogy adoption intentions among students. However, the findings regarding interac-
tivity in this study contradict those reported in the literature. Etemad-Sajadi's
(2016) study, for instance, found that online real-time interactivity encouraged users
to increase their intention to use and interact with an online smart platform. The
contrasting results regarding interactivity could be attributed to several factors.
Firstly, it is possible that the interactivity provided by ChatGPT in this study did
not meet the users’ expectations or requirements, leading to a negative influence on
adoption intentions. Furthermore, the impact of interactivity on adoption intentions
can be influenced by various contextual factors and specific characteristics of the
technology being examined. Each study may have employed different interactivity
measures or investigated different technologies, leading to divergent findings.
In line with the findings regarding the mediating role of perceived trust and per-
ceived intelligence, this study further found that these variables also have insignifi-
cant direct effects on students’ adoption intentions (AI). This contradicts the find-
ings from various studies in the literature that have demonstrated a direct effect of
artificial intelligence tools on adoption intentions (Kandoth & Shekhar, 2022; Kim
et al., 2021; Mohd Rahim et al., 2022). However, these divergent findings can be
attributed to the novelty of ChatGPT, as users may require more time to build the
necessary trust in the technology. Likewise, prior research has documented a direct
impact of perceived intelligence on technology adoption (Balakrishnan et al., 2022;
Pillai et al., 2023). However, the results from this study contradict those findings,
potentially due to ChatGPT’s limitations, such as its inability to provide the latest
information. These limitations might have affected the adoption of ChatGPT among
students. It is important to acknowledge that ChatGPT is a relatively novel technol-
ogy, and users may have certain expectations and concerns regarding its trustworthi-
ness and intelligence. Building trust in AI systems takes time and requires consistent
positive experiences. The constraints associated with ChatGPT, such as the presence
of outdated information, could have influenced students’ perceptions of its utility
and, subsequently, their intentions to adopt it.
The last finding of the study demonstrated a positive relationship between stu-
dents’ adoption intentions (AI) and the actual usage (AU) of ChatGPT. This result
is consistent with previous research conducted by Wei et al. (2021) and Pillai et al.
(2023), which also investigated the connection between adoption intentions and the
actual usage of chatbots. The positive correlation between adoption intentions and
actual usage can be attributed to students’ confidence in adopting and using Chat-
GPT effectively. When students have a strong intention to adopt the technology and
believe in their competence to utilise it, they are more likely to translate their adop-
tion intentions into actual usage. The association between adoption intentions and
actual usage suggests that students who are motivated to adopt ChatGPT and have a
positive perception of its usefulness are more inclined to engage with the technology
in their studies. This finding underscores the importance of fostering positive adop-
tion intentions among students as a means to drive actual usage of ChatGPT.
The study investigated the determinants of ChatGPT adoption among students and the broader relevance of its findings. Notably, the students' readiness to embrace


ChatGPT depended strongly on their perception of its user-friendliness, a factor with implications extending beyond higher education. User-friendliness matters in workplaces as well, where seamless AI integration is pivotal for user acceptance.
The need for AI technologies to cater to users’ specific needs and offer interactive
and engaging experiences emerges as a universal principle driving adoption. In
various work settings, AI tools that provide personalized support and foster engag-
ing interactions are more likely to garner employee enthusiasm. Further, this study's findings might be applicable in K-12 education, where students might also be
inclined to use ChatGPT. Lastly, it is crucial to highlight the importance of promot-
ing responsible AI usage among employees and students to maximise the benefits of
these technologies in their respective domains.

7 Conclusion, implications, and limitations of the study

ChatGPT, a relatively new technology, is still in its infancy and is gradually being explored and adopted in academia. This research aimed to provide
a deeper understanding of the factors that impact students’ adoption intentions and
actual usage of ChatGPT in their studies. The results of this study provide insights
into the factors that influence students’ behaviours and emphasise the importance of
perceived ease of use, usefulness, interactivity, personalisation, and adoption intentions in shaping students' utilisation of ChatGPT for their academic pursuits. The signifi-
cance of perceived ease of use became apparent, suggesting that students are more
inclined to adopt and utilise ChatGPT when they perceive it as user-friendly and con-
venient. The study also revealed the impact of perceived usefulness on adoption inten-
tions, with indirect effects mediated by interactivity (negative) and personalisation
(positive). This suggests that students’ perceptions of the usefulness of ChatGPT may
be influenced by the interactive features it offers (which students are not much famil-
iar with) and the extent to which it can be personalised to their needs. The insights
gained from this study have practical implications for educational institutions in deciding whether to utilise ChatGPT in educational settings. By recognising the
importance of factors like ease of use, usefulness, interactivity, personalisation, and
adoption intentions, the educational institutions can implement effective strategies to
enhance student engagement with proper utilisation of ChatGPT. The theoretical and
practical implications of the study are discussed in the following sections.
Given the ethical concerns surrounding AI in education, institutions can play a
pivotal role in ensuring responsible AI usage. To address ethical considerations,
institutions should establish clear guidelines and codes of conduct for both students
and educators. These guidelines can outline the appropriate and responsible use of
AI tools, emphasizing the importance of critical thinking and creativity in assess-
ments. Additionally, ongoing training and awareness programs can educate students
and educators about the ethical implications of AI in education, fostering a culture
of responsible technology usage. By navigating these ethical considerations thought-
fully, educational institutions can harness the benefits of AI while upholding ethical
standards.


7.1 Theoretical implications

This empirical study stands out as the first of its kind to investigate adoption inten-
tions and actual usage of ChatGPT, specifically within the context of students’ stud-
ies in Asia. While previous research on ChatGPT has predominantly comprised
literature reviews or general discussions, this study adds significant value by offer-
ing insights derived from primary data analysis. By examining the perspectives
and behaviours of students, this study fills a significant gap in the current literature and deepens our understanding of the factors that influence the adoption and usage of ChatGPT in educational institutions in Vietnam. The theoretical implications of this study are significant, as the results emphasise the importance of several factors in shaping adoption intentions and actual usage of ChatGPT. Notably, perceived ease of use, perceived usefulness, personalisation, and interactivity emerged as critical determinants that significantly impact students' attitudes and behaviours towards ChatGPT. By providing empirical evidence of the impact of these factors within the context of this new technology, this study makes a valuable contribution to the advancement of existing theoretical frameworks developed for various AI tools.

7.2 Practical implications

The study provides practical insights that can guide educational institutions in effec-
tively integrating AI tools like ChatGPT into educational settings. The positive
impact of perceived ease of use on adoption intentions underscores the significance
of user-friendly interfaces and responsive systems. However, when incorporating ChatGPT into the educational landscape, it is crucial for educational institutions to consider the implications for assessments. Since students' use of ChatGPT in their studies may be inevitable, careful thought should be given to its integration into the assessment process. It is essential to design assessments in a manner that
encourages responsible use of ChatGPT and preserves students’ critical thinking
skills and creativity in assessment writing. Educators play a significant role in this
regard by providing clear guidelines and instructions that outline the appropriate and
ethical use of AI tools during assessments. By ensuring that students understand the
intended purpose and limitations of ChatGPT, educators can foster responsible and ethical use of the tool in academic assessments.

7.3 Study limitations and directions for future research

This study has a few limitations, which in turn suggest directions for future research. First
and foremost, it is important to note that this research was conducted within a spe-
cific context in Vietnam and with a relatively limited sample size. As a result, the
generalizability of the findings to other contexts or user groups in different coun-
tries may be constrained. To enhance the external validity of the findings, future
research could consider conducting studies in diverse educational settings and with
larger sample sizes. Secondly, it is worth noting that the study relied on self-reported data, which could be susceptible to response biases. As a result, the participants' actual adoption intentions and usage patterns of ChatGPT may not have been fully
captured. To address this limitation, future research could employ mixed methods
approaches that combine surveys with interviews or observational data collection.
This would allow for a more comprehensive understanding of students’ actual usage
behaviours and experiences with ChatGPT. Thirdly, future studies could consider
comparative research between different countries or different cohorts of students
(e.g., school versus university) to examine potential variations in ChatGPT adoption
intentions and usage. Next, the limited familiarity with ChatGPT could potentially
have influenced students’ perceptions and adoption intentions in several ways. Stu-
dents who are less familiar with the technology may rely more on initial impressions
or preconceived notions, which could shape their attitudes and intentions. Hence,
this limited familiarity of the students in our study represents a snapshot of their initial experiences; future research could explore in greater depth how this familiarity evolves as students gain more exposure to ChatGPT and how it influences their long-term adoption behaviours and attitudes. Further, to understand the long-term impact of ChatGPT in higher education, future research could employ longitudinal studies to gain valuable insights at different points in time. Lastly, cultural, educational,
and contextual factors may influence students’ perceptions and behaviours related to
ChatGPT and investigating these differences can also contribute to a more nuanced
understanding of the topic.

Data availability The data used in this research cannot be made publicly available due to restrictions out-
lined in the ethics approval letter.

Declarations
Conflict of interest There is no conflict of interest.

Competing interests The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

