https://doi.org/10.1007/s10639-023-12333-z
Greeni Maheshwari1
Abstract
ChatGPT, an extensively recognised language model created by OpenAI, has gained
significant prominence across various industries, particularly in education. This
study aimed to investigate the factors that influence students’ intentions to adopt and
utilise ChatGPT for their academic studies. The study used a Structural Equation
Model (SEM) for analysing the data gathered from 108 participants, comprising
both undergraduate and postgraduate students enrolled in public and private univer-
sities in Vietnam. The findings indicated that students’ inclination to adopt ChatGPT
(referred to as adoption intention or AI) was influenced by their perception of its
user-friendliness (PEU). However, the perceived usefulness (PU) of ChatGPT did
not have a direct impact on students’ adoption intention; instead, it had an indirect
influence through personalisation (with a positive effect) and interactivity (with a
negative effect). Importantly, there was no significant indirect effect of PU on AI
mediated by perceived trust and perceived intelligence. This study is one of the ini-
tial empirical inquiries into ChatGPT adoption within an Asian context, providing
valuable insights in this emerging area of research. As the use of ChatGPT by stu-
dents becomes increasingly inevitable, educational institutions should carefully con-
sider integrating it into the assessment process. It is crucial to design assessments
that encourage responsible usage of ChatGPT, preserving students’ critical think-
ing abilities and creativity in their assessment writing. Moving forward, educators
will play a pivotal role by offering clear guidelines and instructions that set out the
appropriate and ethical use of artificial intelligence tools in the assessments.
* Greeni Maheshwari
[email protected]
1 Department of Management, The Business School, RMIT University, Ho Chi Minh City, Vietnam
Education and Information Technologies
1 Introduction
Table 1 Summary of various AI tools and the factors impacting students' usage decisions

- ChatGPT (Fuchs, 2023). Why chosen: accessibility, immediate answers, conversational interface. Influencing factors: familiarity with text-based chat, need for quick information.
- Intelligent Tutoring Systems (Kurni et al., 2023). Why chosen: personalized subject-specific instruction, adaptive content. Influencing factors: academic goals, desire for structured learning, subject-specific needs.
- Massive Open Online Courses (MOOCs) (Al-Mekhlafi et al., 2022). Why chosen: comprehensive courses, certification. Influencing factors: desire for formal learning, comprehensive course content.
- Virtual Tutors (Sass & Ali, 2023). Why chosen: high personalization, subject-specific expertise. Influencing factors: subject-specific needs, desire for one-on-one guidance.
- Educational Chatbots (Kuhail et al., 2023). Why chosen: quick responses, assistance. Influencing factors: ease of use, availability.
- AI-Powered educational technologies (Nazaretsky et al., 2022). Why chosen: evaluation and feedback. Influencing factors: assessment requirements, progress tracking.
- Learning Management Systems (LMS) (Ashrafi et al., 2022). Why chosen: course management, content organization. Influencing factors: institutional adoption, course requirements.
- Adaptive Learning Platforms (Clark et al., 2022). Why chosen: customized learning paths, skill-building. Influencing factors: personalized learning preferences, skill development goals.
- Collaborative AI Tools (Ng et al., 2023). Why chosen: enhanced collaboration and teamwork skills. Influencing factors: group projects, collaborative learning objectives.
2023; Sing et al., 2022), there is a notable gap in the research when it comes to
exploring ChatGPT from the perspectives of learners, specifically in terms of usage
intention and behaviour among higher education students. To understand the studies
conducted on ChatGPT so far, a literature search was done until June 17, 2023, in
Scopus using relevant keywords such as "ChatGPT" AND "Higher Education" OR
"Academi*" OR "Tertiary teaching" and this yielded a total of 100 studies. How-
ever, among these 100 studies, only one empirical study on ChatGPT was identi-
fied, indicating the limited number of empirical investigations conducted in this spe-
cific area. Notably, this one empirical study was conducted by Strzelecki (2023) in
Poland on ChatGPT stands out as it explores the students’ acceptance and actual
usage of ChatGPT. The majority of existing research on ChatGPT primarily consists
of commentary, reviews, editorials, or broad discussions, with only one empirical
study conducted in this particular research area thus far.
Considering the existing research gap, this empirical investigation aims to fill
this gap in the ChatGPT literature by examining the current behaviours and attitudes
of higher education students towards their intentions and actual usage of ChatGPT.
Understanding how students utilise ChatGPT is a crucial foundational step for edu-
cators to develop suitable strategies in teaching, delivering assessments, providing
support, and increasing awareness of the possibility of plagiarism among students.
This understanding will enable effective communication with students regarding
the appropriate use of ChatGPT and equip educators with the knowledge to identify
unethical behaviours facilitated by AI. To the best of our knowledge, no empirical study has been conducted in the Asian region to date on this topic. Therefore, this
study holds significance as it serves as one of the pioneering empirical investiga-
tions conducted in Asia. It contributes to the limited global body of research that
explores the factors influencing students’ adoption intentions and actual usage of
ChatGPT. Therefore, the primary objective of this study is to offer empirical evi-
dence that enhances the current theoretical discussions in the literature concerning
this significant research topic.
The study is organised as follows: The subsequent section provides a theoretical
background, followed by Section 3 which presents the literature review and hypoth-
eses development, proposing the conceptual framework that guides this research.
Sections 4 and 5 outline the methodology and present the study’s results. In Sec-
tion 6, the findings are discussed, and the study concludes with a summary, practi-
cal implications, and theoretical contributions. The limitations of the study are also
addressed in the last section.
2 Theoretical background
critical factors within the technology acceptance model, which aids in understanding
users’ adoption of technology (Davis, 1989). Perceived usefulness is defined as the extent to which an individual believes that utilizing a specific system would enhance their job performance (Davis, 1985). Perceived ease of use, on the other hand, refers to the degree to which an individual believes that using a particular system would require minimal physical and mental effort (Davis, 1985). In the revised technology acceptance model, the authors introduced the concept of behavioural intention to explain how
users’ behaviours in a technological system can be influenced by perceived ease of use
and perceived usefulness (Venkatesh & Davis, 1996). As AI continues to experience
significant growth across various industries, the technology acceptance model has pro-
vided theoretical insights into the increasing adoption of AI. This adoption is no longer
limited to the information system sector but has expanded to encompass a wide range
of industries that leverage technological advancements for transformative purposes. For
example, the application of the technology acceptance model has been utilised to sys-
tematically review AI implementation in the healthcare sector in the United Arab Emir-
ates, revealing successful practices for AI adoption (Alhashmi et al., 2019). Similarly,
the technology acceptance model, incorporating the constructs of perceived usefulness
and perceived ease of use, has been employed to explain AI acceptance among farmers
in the German agriculture sector, with the aim of enhancing crop production (Mohr &
Kühl, 2021).
In addition to employing the TAM model, this study also incorporates the theory of
planned behaviour (TPB) model to investigate how higher education students utilise
ChatGPT in their learning. The theory of planned behaviour is an extension of the theory of reasoned action, which was introduced by psychologist Icek Ajzen (1985) and has since gained significant prominence in the fields of social psychology and human behaviour sciences (Conner & Armitage, 1998). According to the
TPB model, motivational elements that affect behaviour are captured by intentions,
which also serve as indicators of how much effort a person is prepared to put forth
to carry out the action (Ajzen, 1991). The theory of planned behaviour is extensively
used in sociology and behaviour research, including student learning behaviours that
adopt new technology in higher education, such as readiness for a new mobile learning
approach (Cheon et al., 2012), employing blended learning (Anthony Jnr et al., 2020),
and adoption of advanced technology systems in library use (Rahmat et al., 2022). Behavioural intention has extensively been found to significantly impact students’ actual adoption of AI platforms in learning. A study conducted by Almahri et al. (2020)
found a noteworthy and positive correlation between the intention of UK university stu-
dents to use AI chatbots and their actual usage of these chatbots to support their studies.
Another investigation by Hu (2022) focused on the smart learning context and revealed
that behavioural intention had a significant influence on the transformative learning
experiences of undergraduate students within smart environments. Therefore, the TAM
and TPB models serve as the fundamental frameworks underpinning the design of this
research study.
The original technology acceptance model proposed by Davis (1985) establishes the
notion that perceived ease of use plays a pivotal role in influencing the perceived
usefulness within the user motivation domain. This relationship has been confirmed
by various studies. For instance, research conducted on Chinese students regard-
ing their experiences with intelligent tutoring systems demonstrated a significant
impact of perceived ease of use on perceived usefulness (Cao et al., 2021). Sim-
ilarly, a study involving secondary school students in China found that perceived
ease of use positively influenced the perceived usefulness of intelligent tutoring systems (Ni & Cheung, 2022). Furthermore, in the context of adopting AI personal intelligent agents, US college students' intention to adopt was prominently associated with the perceived ease of use and usefulness of the agent (Moussawi et al., 2021). Hence, based on the aforementioned rationale, this study posits a positive impact of perceived ease of use on perceived usefulness, forming the foundation for the following hypothesis:
H1a: The perceived ease of use positively influences the perceived usefulness of ChatGPT.
The trust construct was expanded upon in the TAM model by Baby and Kannam-
mal (2020) to investigate users’ beliefs regarding new technologies. Perceived trust
refers to the level of trust users have in the reliability and security of the technology
they are using (Komiak & Benbasat, 2006). Trust in human–computer interactions
has a significant impact on technology usage and computer-based platforms. It helps
reduce risks, uncertainties, and anxieties associated with technological interactions,
promotes positive and meaningful experiences with technology, and facilitates the
development of healthy relationships with related systems (Gulati et al., 2019). In
the context of the technology acceptance model and its application to understand the
usage of AI, trust perception indirectly influences the behavioural intention to use
AI technology and its associated applications (Choung et al., 2023). In the context
of AI, trust is viewed from two perspectives. The first perspective involves a col-
lection of specific beliefs, known as trusting beliefs, which encompass aspects such
as goodwill, competence, morality, and predictability. These beliefs shape an indi-
vidual’s perception of trust in AI systems. The second perspective relates to the will-
ingness of one party to rely on another in a potentially risky or uncertain situation,
known as trusting intention (Siau & Wang, 2018). Trust was also proven to directly
increase behavioural intention towards AI-provided services in the healthcare sector
(Liu & Tao, 2022; Mohd Rahim et al., 2022). When individuals perceive a tech-
nology as useful, it can positively influence their trust in the technology (Aw et al.,
2019; Harrigan et al., 2021). Therefore, trust has the potential to act as a mediator
between perceived usefulness and adoption intention. By instilling a sense of confi-
dence, reducing perceived risks, and increasing the likelihood of technology adop-
tion, trust plays a crucial role in influencing users’ intention to adopt a particular
technology. Building upon these foundations, the following hypothesis is formulated
for this study:
2000). Chatbots and users exchange information and interact so that the chatbot can understand what the user is attempting to say and provide the best possible answer. Technology adoption can be high when there is high interactivity with the technology, as this helps users to engage better. Therefore, interactivity and personalisation have been employed as
factors to examine the connection between AI interactive technology and users, as
they have the potential to significantly influence users’ decisions. The interactivity
of AI tools influences users’ perceptions of their usefulness (Arghashi & Yuksel,
2022; Liaw & Huang, 2013). When ChatGPT is interactive, capable of engaging
in meaningful conversations, understanding user needs, and providing relevant and
helpful responses, users may be more likely to perceive it as a useful tool. Likewise, when users perceive chatbots as having the ability to comprehend their specific requirements and deliver appropriate and personalised responses, it may have a
positive impact on their intention to adopt and utilise the technology. Consequently, this study proposes the following hypotheses:
H2b: The relationship between perceived usefulness and the adoption intention of
ChatGPT is mediated by interactivity.
H2c: Perceived usefulness affects the adoption intention of ChatGPT mediated via
personalisation.
H2d: Perceived intelligence plays a mediating role in the relationship between the
perceived usefulness of ChatGPT and the intention to adopt it.
various purposes. It also fostered the intention to learn how to use AI adequately
among the researched students, signalling the emerging rise of AI practices among
the young generation in the future (Sing et al., 2022). The study conducted by Al-
Sharafi et al. (2022) further confirmed the significance of perceived usefulness in
influencing the continued use of chatbots for educational purposes among students
in Malaysian public universities. Building upon the foundations of the technology
acceptance model and drawing on empirical evidence from AI adoption in educa-
tion, this study puts forth the following hypothesis:
Kumar et al. (2022) extended the technology acceptance model and found that
trust had a positive indirect effect on students’ intention to adopt AI technology in
higher education, specifically in the context of blockchain. Trust in AI was found
to be prominent and have a direct impact on South Indian university students and
their intention to use AI-enabled platforms, including the AI job application process
(Kandoth & Shekhar, 2022). Kim et al. (2021) conducted a study and identified a
direct impact of trust in adoption intention on consumer response towards AI, which
includes the intention to use platforms and devices that involve AI. In the higher
education context, Mohd Rahim et al. (2022) conducted a study and discovered
that perceived trust had a direct and positive impact on the behavioural intention of
Malaysian students to adopt AI chatbot platforms for their university learning. This
direct association between trust and AI-powered platforms was also observed in a
sample of US students (Moussawi et al., 2021). As a result, it is anticipated that stu-
dents’ intention to use ChatGPT will be positively influenced by their perceived trust
in this chatbot. Based on the above premise, the below hypothesis is formulated:
In South Korea, Kim et al. (2022) conducted a study that revealed a significant posi-
tive relationship between users’ perceived interactivity with an AI device and their
intention to purchase and use the device. The presence of highly interactive avatars
in a smart online platform was found to promote users’ intention to use and interact
with the platform in real-time, as determined in a study by Etemad-Sajadi (2016). In
terms of impact on the student population, a high level of interactivity on web plat-
forms helped elevate college students’ intention to participate in physical activities
(Lu et al., 2014). In the computer-mediated environment of college and university
education, interactivity also contributes greatly to the student experience and inten-
tion to use virtual reality–based MR training systems (Chang et al., 2018). Further-
more, the interactivity of the learning system was discovered to influence the behav-
ioural intention of students and the actual use of the system among undergraduates
Moussawi and Koufaris (2019) conducted a study that found an indirect impact
of perceived intelligence on the intention to continue using AI-based personal
assistant technologies, such as virtual voice assistants like Siri and Alexa. This
implies that users' perception of the intelligence exhibited by these virtual assistants influences their willingness to persistently utilise them. Users' perceived level of intelligence of AI-based virtual assistants in the form of chatbots was proven to significantly impact their intention to use them, based on how sensitive the assistant is to prompts and how well it can recognise the user's voice (Bawack, 2021). Perceived intelligence of AI has recently been gaining attention
in the education sector, albeit the limited knowledge still allows room for more
Actual usage refers to the behaviours of individuals using ChatGPT after they have adopted it (Pillai et al., 2023). It signifies the degree to which individuals actively interact with and make use of the chatbot for diverse objectives. Individuals with a strong adoption intention are more likely to explore and
actively use the ChatGPT than those with weak adoption intention. Adoption
intention sets the initial motivation and predisposition for individuals to engage
with the technology, which can influence their subsequent usage behaviour (Wei
et al., 2021). Hence, adoption intention has a strong relationship with the actual usage of technology-enhanced tools, as supported in the literature (Pillai et al., 2023; Hoque & Sorwar, 2017). Based on this discussion, the study formulates the last hypothesis as below:
The proposed research model, consisting of all seven hypotheses of the study, is illustrated in Fig. 1.
4 Methodology
The survey instrument employed for this study was adapted from the constructs of the TAM and TPB models and from the literature on various studies of chatbot adoption and usage. The instrument opened with a section of demographic questions, followed by sections on TAM components such as perceived ease of use (PEU) and perceived usefulness (PU). The next
section was on questions related to perceived trust (PT), interactivity (IN), per-
sonalisation (PSN), and perceived intelligence (PI). The last section of the ques-
tionnaire was regarding the questions on TPB components of ChatGPT adop-
tion intention (AI) and its actual usage (AU). The dependent variable considered for this study is AU, while the remaining seven variables served as independent variables. The survey questions for all items were designed on a scale
of 1 to 7 (1 = extremely disagree to 7 = extremely agree). PEU consisted of a
six-item scale, while PU was made up of a five-item scale. PT, IN, PSN, and PI comprised four items, five items, three items, and five items, respectively.
There were six items in AI construct, while AU contained four-item scales. The
survey instrument items were used from past studies as illustrated in Table 2.
The factor loadings of each item are also presented in Table 2. All items of the constructs were retained except for AI, where one item was removed due to a factor loading lower than 0.4.
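The item-retention rule described above can be sketched in a few lines. This is an illustrative example only: the item names and loading values below are hypothetical, not the study's reported loadings (those appear in Table 2).

```python
# Illustrative sketch: retain only items whose standardised factor loading
# meets the 0.4 cutoff used in the study. Loadings below are hypothetical.
CUTOFF = 0.4

# Hypothetical loadings for a six-item adoption-intention (AI) scale.
loadings = {
    "AI1": 0.81, "AI2": 0.77, "AI3": 0.35,  # AI3 falls below the cutoff
    "AI4": 0.72, "AI5": 0.69, "AI6": 0.74,
}

retained = {item: l for item, l in loadings.items() if l >= CUTOFF}
dropped = sorted(set(loadings) - set(retained))
print(f"retained {len(retained)} items; dropped: {dropped}")
```

With these hypothetical values, one item is dropped and five are retained, mirroring the study's removal of a single low-loading AI item.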
Before data collection, a priori sample size was calculated using an online
calculator suggested by Soper (2021) for structural equation modelling to iden-
tify the appropriate sample size needed for this study. Considering an antici-
pated effect size of 0.4, which is commonly used as the average effect in educa-
tional research (Cohen, 1988), the calculator recommended a sample size of 89.
After this estimation of the required sample size and obtaining ethics approval for
the study, the data collection was initiated. To recruit participants, the survey
links were posted on public social media platforms. Participants were required to
meet two inclusion criteria: (1) being an undergraduate or postgraduate student
and (2) studying in a university in Vietnam. The data was collected from under-
graduate and postgraduate students enrolled in different universities in Vietnam
and there was a total of 108 respondents for this study, which was deemed sufficient as per the a priori sample size calculation. As all survey items in the questionnaire were mandatory, there were no incomplete or missing data.
Table 2 Mean, standard deviation, skewness, kurtosis, factor loading, reliability, convergent validity and divergent validity of study’s constructs
Construct with items and adopted source Mean S.D Skewness Kurtosis Factor loading CA CR AVE
Perceived Ease of Use (PEU) (Davis, 1989) 5.061 0.957 −0.017 1.963 0.872 0.904 0.612
PEU1: ChatGPT would be flexible and easy to use −0.55 1.161 0.761
PEU2: ChatGPT would be easy to access for my studies −0.51 1.002 0.831
PEU3: It would be easy and clear to interact with ChatGPT −0.432 1.255 0.820
PEU4: I believe skills needed to use ChatGPT is easy −0.247 0.56 0.701
PEU5: It would be easy to access study-related information from ChatGPT −0.489 1.253 0.785
PEU6: It would be easier to get things done and solve queries using ChatGPT than if not −0.286 0.259 0.791
using it
Perceived usefulness (PU) (Davis, 1989) 4.748 1.201 −0.433 3.400 0.918 0.939 0.758
PU1: ChatGPT is useful for my studies −0.292 0.128 0.876
PU2: I feel that if I used ChatGPT, I would learn better −0.472 0.038 0.891
PU3: ChatGPT would answer all my queries and provides answers as per my expectations −0.157 0.141 0.781
PU4: ChatGPT would help me to improve the efficiency and quality of learning −0.606 0.479 0.922
PU5: ChatGPT would provide prompt access to study issues irrespective of my location −0.685 1.045 0.875
Perceived Trust (PT) (Tarhini et al., 2017) 3.798 1.273 0.111 1.338 0.939 0.957 0.847
PT1: I feel Interaction with ChatGPT would be secure enough −0.102 0.401 0.871
PT2: I trust that my activities while interacting with ChatGPT would be private and safe −0.049 0.133 0.957
PT3: I feel that my details available with ChatGPT would be kept confidential −0.112 0.08 0.927
PT4: Overall, I feel that nobody will be able to access my personal information from Chat- −0.057 0.224 0.925
GPT
Interactivity (IN) (Etemad-Sajadi, 2016) 4.473 1.084 −0.030 0.900 0.864 0.905 0.656
IN1: ChatGPT would allow me to interact with it and get the information required −0.352 1.551 0.749
IN2: Interactive features of ChatGPT would meet my needs in learning −0.598 1.359 0.809
IN3: ChatGPT would help to get the required information easily without asking the professor 0.086 −0.116 0.798
IN4: ChatGPT would provide the required information without writing an email to my university −0.003 −0.331 0.803
IN5: ChatGPT would be efficient for interacting and fulfilling my learning needs −0.56 0.673 0.884
Personalisation (PSN) (Komiak & Benbasat, 2006) 4.323 1.182 −0.034 0.181 0.833 0.900 0.751
PSN1: ChatGPT would understand my individual learning needs 0.066 0.447 0.829
PSN2: ChatGPT would provide learning information based on my personal needs −0.307 0.221 0.888
PSN3: I feel that ChatGPT provide learning materials as per my individual needs −0.223 0.489 0.881
Perceived Intelligence (PI) (Pillai et al., 2023) 4.185 1.055 −0.331 0.742 0.842 0.891 0.620
PI1: ChatGPT is competent for teaching −0.176 0.338 0.789
CA Cronbach’s alpha (α); CR Composite reliability; AVE Average variance extracted; S.D Standard deviation
The data collected for this study were analysed using a Structural Equation Model (SEM) to examine the seven proposed hypotheses. SEM was chosen as an appropriate analysis method as it combines Exploratory Factor Analysis (EFA) and multiple
regression analysis, allowing for the exploration of causal relationships between var-
iables (Maheshwari, 2022). Prior to conducting the hypothesis testing using SEM,
several tests were performed, as discussed in the results and analysis section.
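Several of the study's hypotheses (H2b, H2c, H2d) concern indirect, mediated effects. A common way to estimate such an effect is the product-of-coefficients approach with a percentile bootstrap. The sketch below is illustrative only: the study itself fitted the full model in AMOS, whereas here the data are simulated and the two paths are estimated with simple regressions (the second path is not partialled for the direct effect, for brevity).

```python
# Sketch: estimating an indirect (mediated) effect such as PU -> IN -> AI
# via product-of-coefficients, with a percentile bootstrap CI.
# All data are simulated; variable names mirror the study's constructs.
import numpy as np

rng = np.random.default_rng(0)
n = 108                                   # sample size used in the study
pu = rng.normal(size=n)                   # perceived usefulness (standardised)
inter = 0.5 * pu + rng.normal(size=n)     # interactivity, partly driven by PU
ai = 0.6 * inter + 0.1 * pu + rng.normal(size=n)  # adoption intention

def slope(x, y):
    """OLS slope of y on x (with intercept), via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(pu, inter)     # path PU -> IN
b = slope(inter, ai)     # path IN -> AI (simple regression, for brevity)
indirect = a * b         # product-of-coefficients indirect effect

# Percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(slope(pu[idx], inter[idx]) * slope(inter[idx], ai[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval excludes zero, the mediated path is deemed significant; this is the same logic the SEM software applies to the PU-via-IN and PU-via-PSN paths reported later.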
To validate the model before applying the Structural Equation Model (SEM), the
CFA was conducted using AMOS software, version 28. The fit measures, includ-
ing Cmin/df = 1.610, IFI = 0.899, TLI = 0.880, CFI = 0.896, and RMSEA = 0.077,
met the recommended threshold levels proposed by Hair et al. (2011), indicating
that the model was considered a good fit.
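The reported fit indices are related by simple formulas. The sketch below shows how Cmin/df and RMSEA connect, using the standard sample RMSEA formula; the degrees of freedom are an assumed illustrative value (the paper reports only the ratio), and the result will not reproduce the reported 0.077 exactly because Cmin/df = 1.610 is itself rounded.

```python
# Sketch: given the chi-square statistic, its degrees of freedom and the
# sample size, Cmin/df and RMSEA follow directly.
import math

n = 108            # respondents in the study
df = 200           # hypothetical degrees of freedom, for illustration only
chi2 = 1.610 * df  # reverse-engineered from the reported Cmin/df = 1.610

cmin_df = chi2 / df
# Standard sample RMSEA: sqrt(max(chi2 - df, 0) / (df * (N - 1)))
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
print(f"Cmin/df = {cmin_df:.3f}, RMSEA = {rmsea:.3f}")
```

Note that for a fixed Cmin/df ratio and sample size, this RMSEA formula is independent of the assumed df, which is why the sketch recovers a value close to the reported 0.077 despite the hypothetical df.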
Once the model was validated, the skewness and kurtosis measures were used
to assess the normality of the data for both the independent and dependent vari-
ables. The obtained skewness and kurtosis values fell within the accepted range
of -2 to + 2 and -7 to + 7, respectively, indicating a normal distribution of the data
(Hair et al., 2010) (as shown in Table 2). Additionally, to examine the presence of
multicollinearity among the variables, the variance inflation factor (VIF) was calculated. The VIF values for all variables were below the threshold of 5 (Maheshwari, 2021), indicating the absence of multicollinearity in the dataset (Table 2).
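The two screening checks described above can be sketched as follows, on simulated item scores (the data below are illustrative, not the study's): (1) skewness within -2 to +2 and excess kurtosis within -7 to +7, and (2) VIF below 5, where each variable's VIF is 1/(1 - R²) from regressing it on the remaining variables.

```python
# Sketch of the normality and multicollinearity screens, on simulated data.
import numpy as np

rng = np.random.default_rng(42)
n = 108
# Three moderately correlated "constructs" on a 1-7 scale, illustrative only.
base = rng.normal(size=n)
X = np.column_stack([
    np.clip(4 + base + rng.normal(scale=1.0, size=n), 1, 7),
    np.clip(4 + 0.6 * base + rng.normal(scale=1.0, size=n), 1, 7),
    np.clip(4 + rng.normal(scale=1.2, size=n), 1, 7),
])

def skewness(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

def vif(X, j):
    """VIF of column j: regress it on the other columns, VIF = 1/(1 - R^2)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

normal_ok = all(
    -2 <= skewness(X[:, j]) <= 2 and -7 <= excess_kurtosis(X[:, j]) <= 7
    for j in range(X.shape[1])
)
vifs = [vif(X, j) for j in range(X.shape[1])]
print(normal_ok, [round(v, 2) for v in vifs])
```

Both checks pass on this simulated data, matching the pattern the study reports for its actual variables.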
The reliability and validity of the model were assessed in the next step. Internal
reliability was examined using Cronbach’s alpha (CA), while construct reliabil-
ity (CR) was further evaluated. Discriminant validity was assessed by calculating
the square root of the average variance extracted (AVE) (Gonzalez-Tamayo et al.,
2023). The CA values for the variables ranged from 0.86 to 0.94, while the CR
values for the constructs ranged from 0.90 to 0.96 (as presented in Table 2). These
values exceeded the cutoff value of 0.7 (Hair et al., 2011), indicating high internal
reliability. Moreover, the AVE values for all constructs surpassed the threshold of
0.5 (Hair et al., 2011) (Table 2). Discriminant validity was confirmed by compar-
ing the square root values of AVE with the correlations between each construct,
both horizontally and vertically (as indicated in bold on the diagonal in Table 3)
and none of the correlations were higher than the square root values of AVE.
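The reliability and validity figures above can be reproduced from the standardised loadings. The sketch below computes AVE and composite reliability from the six PEU loadings reported in Table 2 (the results land on the reported 0.612 and 0.904 up to rounding), then applies a Fornell-Larcker style check; the correlation value used in that check is hypothetical, since the full correlation matrix sits in Table 3.

```python
# Sketch: composite reliability (CR) and average variance extracted (AVE)
# from standardised loadings, plus a Fornell-Larcker discriminant check.
peu_loadings = [0.761, 0.831, 0.820, 0.701, 0.785, 0.791]  # from Table 2

sum_l = sum(peu_loadings)
sum_l2 = sum(l * l for l in peu_loadings)
k = len(peu_loadings)

ave = sum_l2 / k                                  # mean squared loading
cr = sum_l ** 2 / (sum_l ** 2 + (k - sum_l2))     # composite reliability
print(f"AVE = {ave:.3f}, CR = {cr:.3f}")          # close to Table 2's 0.612, 0.904

# Fornell-Larcker criterion: sqrt(AVE) must exceed the construct's
# correlation with every other construct (0.55 is a hypothetical value).
sqrt_ave = ave ** 0.5
hypothetical_corr_with_pu = 0.55
discriminant_ok = sqrt_ave > hypothetical_corr_with_pu
```

That the recomputed AVE and CR match the tabled values is a useful sanity check that the reported loadings and reliability statistics are internally consistent.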
variance, which fell below the threshold of 50% (Podsakoff et al., 2012). This
suggests that common method bias was not a major concern in the study.
Once all the necessary tests including CFA, normality, reliability, validity, and com-
mon method bias were conducted, SEM was employed to test the seven designed
hypotheses of the study using AMOS, version 28. The model’s fitness was evaluated
and found to be a good fit, as indicated by the following values: Cmin/df = 1.644,
IFI = 0.890, TLI = 0.873, CFI = 0.887, and RMSEA = 0.079. All these measures fell
within the threshold values recommended by Hair et al. (2011). The results of the
hypothesis testing are presented in Table 4 and Fig. 2.
The study's first pair of hypotheses (H1a and H1b) was designed to examine the impact
of perceived ease of use on both the adoption intention of ChatGPT and perceived
usefulness. The findings indicate that PEU was significant and positively influ-
enced both variables, with the highest influence on PU (β = 0.815) compared to AI
(β = 0.269). The next set of hypotheses (H2a, H2b, H2c, H2d, H2e) examined the direct influence of PU on AI and an indirect relationship mediated by PT, IN, PSN, and PI on
AI. The results highlighted that PU does not influence AI directly but has an indirect
significant effect on AI via IN and PSN, but the indirect effect via IN was negative
(β = −0.577), whereas the influence was positive via PSN (β = 0.334). The rest of
the remaining indirect influences of PU on AI, via PT and PI, were insignificant. The subsequent
four hypotheses (H3, H4, H5, H6) aimed to assess the direct impact of PT, IN, PSN,
and PI on AI, respectively. The results suggest that PT and PI do not influence students' AI, while IN and PSN significantly influence AI, with IN having a negative
effect (β = -0.604) and PSN having a positive effect (β = 0.443). The last hypothesis
(H7) of the study aimed to examine the impact of AI on AU, and results indicated
6 Discussion
This research aimed to explore the adoption intentions and actual usage of ChatGPT
among higher education students in Vietnam. The initial findings indicate a positive
influence of students’ perceived ease of use (PEU) on their adoption intentions (AI)
towards ChatGPT. This positive relationship between PEU and adoption intentions
has been observed in previous studies by Ni and Cheung (2022) and Hu (2022).
Additionally, the study revealed a direct impact of PEU on the perceived useful-
ness (PU) of the system as perceived by the students. This finding is not surpris-
ing considering that ChatGPT offers a user-friendly interface and prompt responses,
enhancing students’ comfort and engagement. Similar results have been reported in
studies conducted by Cao et al. (2021) and Moussawi et al. (2021).
Unlike previous studies (Al-Sharafi et al., 2022; Sing et al., 2022) that indicated
a positive relationship between perceived usefulness (PU) and adoption intentions
(AI), the present study yielded contrasting results, as perceived usefulness was not
found to have a significant impact on students’ adoption intentions towards Chat-
GPT. This inconsistency may be attributed to several factors that warrant considera-
tion. One possible reason is that students in this study may have limited familiarity
with ChatGPT, as the system is still in its infancy stage. Because of their limited
exposure, students may have a limited understanding of the potential applications
and benefits that ChatGPT can provide. Consequently, their perceived usefulness of
the tool might be lower, leading to weaker adoption intentions. Additionally, stu-
dents may already be using alternative technological tools that they find more com-
fortable or effective for their needs. This existing preference for alternative tools
consistent with the results reported in similar research conducted by Krouska et al.
(2022) and Pillai et al. (2023), where personalisation positively influenced technology adoption intentions among students. However, the findings regarding interactivity in this study contradict those reported in the literature. Etemad-Sajadi’s
(2016) study, for instance, found that online real-time interactivity encouraged users
to increase their intention to use and interact with an online smart platform. The
contrasting results regarding interactivity could be attributed to several factors.
Firstly, it is possible that the interactivity provided by ChatGPT in this study did
not meet the users’ expectations or requirements, leading to a negative influence on
adoption intentions. Furthermore, the impact of interactivity on adoption intentions
can be influenced by various contextual factors and specific characteristics of the
technology being examined. Each study may have employed different interactivity
measures or investigated different technologies, leading to divergent findings.
In line with the findings regarding the mediating role of perceived trust and per-
ceived intelligence, this study further found that these variables also have insignifi-
cant direct effects on students’ adoption intentions (AI). This contradicts the find-
ings from various studies in the literature that have demonstrated a direct effect of
artificial intelligence tools on adoption intentions (Kandoth & Shekhar, 2022; Kim
et al., 2021; Mohd Rahim et al., 2022). However, these divergent findings may be attributed to the novelty of ChatGPT, as users may require more time to build the
necessary trust in the technology. Likewise, prior research has documented a direct
impact of perceived intelligence on technology adoption (Balakrishnan et al., 2022;
Pillai et al., 2023). However, the results from this study contradict those findings,
potentially due to ChatGPT’s limitations, such as its inability to provide the latest
information. These limitations might have affected the adoption of ChatGPT among
students. It is important to acknowledge that ChatGPT is a relatively novel technol-
ogy, and users may have certain expectations and concerns regarding its trustworthi-
ness and intelligence. Building trust in AI systems takes time and requires consistent
positive experiences. The constraints associated with ChatGPT, such as the presence
of outdated information, could have influenced students’ perceptions of its utility
and, subsequently, their intentions to adopt it.
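The mediation results discussed here rest on indirect effects, i.e. products of path coefficients, which are conventionally judged by bootstrap confidence intervals. The sketch below illustrates that logic for a positive indirect path on synthetic data; the effect sizes, noise levels, and resampling scheme are assumptions for illustration only, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 108  # matches the study's sample size

# Synthetic PU -> PSN -> AI chain with a positive indirect effect
# and no direct PU effect (illustrative values only).
PU = rng.normal(size=n)
PSN = 0.5 * PU + rng.normal(scale=0.8, size=n)   # a-path
AI = 0.4 * PSN + rng.normal(scale=0.8, size=n)   # b-path

def indirect_effect(idx):
    """Product of the a-path and b-path slopes on one resample."""
    a = np.linalg.lstsq(np.column_stack([np.ones(len(idx)), PU[idx]]),
                        PSN[idx], rcond=None)[0][1]
    # The b-path controls for PU, as in a standard mediation model.
    Xb = np.column_stack([np.ones(len(idx)), PSN[idx], PU[idx]])
    b = np.linalg.lstsq(Xb, AI[idx], rcond=None)[0][1]
    return a * b

# Percentile bootstrap of the indirect effect.
boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A bootstrap interval that excludes zero indicates a significant indirect effect, whereas an interval straddling zero corresponds to non-significant mediation of the kind reported for perceived trust and perceived intelligence.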
The last finding of the study demonstrated a positive relationship between stu-
dents’ adoption intentions (AI) and the actual usage (AU) of ChatGPT. This result
is consistent with previous research conducted by Wei et al. (2021) and Pillai et al.
(2023), which also investigated the connection between adoption intentions and the
actual usage of chatbots. The positive correlation between adoption intentions and
actual usage can be attributed to students’ confidence in adopting and using Chat-
GPT effectively. When students have a strong intention to adopt the technology and
believe in their competence to utilise it, they are more likely to translate their adoption intentions into actual usage. The association between adoption intentions and
actual usage suggests that students who are motivated to adopt ChatGPT and have a
positive perception of its usefulness are more inclined to engage with the technology
in their studies. This finding underscores the importance of fostering positive adop-
tion intentions among students as a means to drive actual usage of ChatGPT.
The study investigated the determinants of ChatGPT adoption among students and their broader relevance. Notably, students showed a readiness to embrace ChatGPT even though the technology is still in its infancy and is only gradually being explored and adopted in academia. This research aimed to provide
a deeper understanding of the factors that impact students’ adoption intentions and
actual usage of ChatGPT in their studies. The results of this study provide insights
into the factors that influence students’ behaviours and emphasise the importance of
perceived ease of use, usefulness, interactivity, personalization, and adoption inten-
tions in shaping their utilization of ChatGPT for their academic pursuits. The signifi-
cance of perceived ease of use became apparent, suggesting that students are more
inclined to adopt and utilise ChatGPT when they perceive it as user-friendly and convenient. The study also revealed that perceived usefulness influenced adoption intentions only indirectly, with effects mediated by interactivity (negative) and personalization (positive). This suggests that students’ perceptions of the usefulness of ChatGPT may
be influenced by the interactive features it offers (with which students are not yet very familiar) and the extent to which it can be personalised to their needs. The insights
gained from this study have practical implications for educational institutions deciding whether and how to utilise ChatGPT in educational settings. By recognising the
importance of factors like ease of use, usefulness, interactivity, personalisation, and
adoption intentions, educational institutions can implement effective strategies to
enhance student engagement with proper utilisation of ChatGPT. The theoretical and
practical implications of the study are discussed in the following sections.
Given the ethical concerns surrounding AI in education, institutions can play a
pivotal role in ensuring responsible AI usage. To address ethical considerations,
institutions should establish clear guidelines and codes of conduct for both students
and educators. These guidelines can outline the appropriate and responsible use of
AI tools, emphasizing the importance of critical thinking and creativity in assess-
ments. Additionally, ongoing training and awareness programs can educate students
and educators about the ethical implications of AI in education, fostering a culture
of responsible technology usage. By navigating these ethical considerations thought-
fully, educational institutions can harness the benefits of AI while upholding ethical
standards.
7.1 Theoretical implications
This empirical study stands out as the first of its kind to investigate adoption inten-
tions and actual usage of ChatGPT, specifically within the context of students’ stud-
ies in Asia. While previous research on ChatGPT has predominantly comprised
literature reviews or general discussions, this study adds significant value by offer-
ing insights derived from primary data analysis. By examining the perspectives
and behaviours of students, this study fills a significant gap in the current literature
and deepens our understanding of the factors that impact the adoption and usage of
ChatGPT in educational institutions in Vietnam. The theoretical implications of this
study are significant, as the results emphasise the importance of several factors in
shaping adoption intentions and actual usage of ChatGPT. Notably, perceived ease
of use, perceived usefulness, personalisation, and interactivity emerged as critical
determinants that significantly impact students’ attitudes and behaviours towards
ChatGPT. By providing empirical evidence of the impact of these factors within
the context of this new technology, this study makes a valuable contribution to the
advancement of existing theoretical frameworks developed for various AI tools.
7.2 Practical implications
The study provides practical insights that can guide educational institutions in effec-
tively integrating AI tools like ChatGPT into educational settings. The positive
impact of perceived ease of use on adoption intentions underscores the significance
of user-friendly interfaces and responsive systems. However, while incorporating
ChatGPT into the educational landscape, it is crucial for educational institutions to
consider the implications for assessments. While it may be inevitable for students
to use ChatGPT in their studies, careful thought should be given to its integration
into the assessment process. It is essential to design assessments in a manner that
encourages responsible use of ChatGPT and preserves students’ critical thinking
skills and creativity in assessment writing. Educators play a significant role in this
regard by providing clear guidelines and instructions that outline the appropriate and
ethical use of AI tools during assessments. By ensuring that students understand the
intended purpose and limitations of ChatGPT, educators can foster a responsible and
ethical usage of ChatGPT in academic assessments.
This study has a few limitations, which suggest directions for future research. First
and foremost, it is important to note that this research was conducted within a spe-
cific context in Vietnam and with a relatively limited sample size. As a result, the
generalizability of the findings to other contexts or user groups in different coun-
tries may be constrained. To enhance the external validity of the findings, future
research could consider conducting studies in diverse educational settings and with
larger sample sizes. Secondly, it is worth noting that the study relied on self-reported data.
Data availability The data used in this research cannot be made publicly available due to restrictions out-
lined in the ethics approval letter.
Declarations
Conflict of interest There is no conflict of interest.
Competing interests The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. Springer.
Ajzen, I. (1991). The theory of planned behavior. Organisational Behavior and Human Decision Pro-
cesses, 50(2), 179–211.
Akour, I. A., Al-Maroof, R. S., Alfaisal, R., & Salloum, S. A. (2022). A conceptual framework for deter-
mining metaverse adoption in higher institutions of gulf area: An empirical study using hybrid
SEM-ANN approach. Computers and Education: Artificial Intelligence, 3, 100052.
Alhashmi, S., Salloum, S. A., & Mhamdi, C. (2019). Implementing artificial intelligence in the United
Arab Emirates healthcare sector: An extended technology acceptance model. Int. J. Inf. Technol.
Lang. Stud, 3(3), 27–42.
Almahri, F. A. J., Bell, D., & Merhi, M. (2020). Understanding student acceptance and use of chatbots in
the United Kingdom universities: a structural equation modeling approach. 2020 6th International
Conference on Information Management (ICIM).
Almaiah, M. A., Jalil, M. A., & Man, M. (2016). Extending the TAM to examine the effects of quality
features on mobile learning acceptance. Journal of Computers in Education, 3, 453–485.
Al-Mekhlafi, A. B. A., Othman, I., Kineber, A. F., Mousa, A. A., & Zamil, A. M. (2022). Modeling the
impact of massive open online courses (MOOC) implementation factors on continuance intention
of students: PLS-SEM approach. Sustainability, 14(9), 5342.
Al-Sharafi, M. A., Al-Emran, M., Iranmanesh, M., Al-Qaysi, N., Iahad, N. A., & Arpaci, I. (2022).
Understanding the impact of knowledge management factors on the sustainable use of AI-based
chatbots for educational purposes using a hybrid SEM-ANN approach. Interactive Learning Envi-
ronments, 1–20 (ahead-of-print)
Anthony Jnr, B., Kamaludin, A., Romli, A., Raffei, A. F. M., Phon, D. N. A. L. E., Abdullah, A., Ming,
G. L., Shukor, N. A., Nordin, M. S., & Baba, S. (2020). Predictors of blended learning deployment
in institutions of higher learning: Theory of planned behavior perspective. The International Jour-
nal of Information and Learning Technology, 37(4), 179–196.
Arghashi, V., & Yuksel, C. A. (2022). Interactivity, Inspiration, and Perceived Usefulness! How retail-
ers’ AR apps improve consumer engagement through flow. Journal of Retailing and Consumer
Services, 64, 102756.
Ashrafi, A., Zareravasan, A., Rabiee Savoji, S., & Amani, M. (2022). Exploring factors influencing stu-
dents’ continuance intention to use the learning management system (LMS): A multi-perspective
framework. Interactive Learning Environments, 30(8), 1475–1497.
Aw, E. C. X., Basha, N. K., Ng, S. I., & Sambasivan, M. (2019). To grab or not to grab? The role of trust
and perceived value in on-demand ridesharing services. Asia Pacific Journal of Marketing and
Logistics, 31(5), 1442–1465.
Aydın, Ö., & Karaarslan, E. (2023). Is ChatGPT leading generative AI? What is beyond expectations?
Baby, A., & Kannammal, A. (2020). Network Path Analysis for developing an enhanced TAM model: A
user-centric e-learning perspective. Computers in Human Behavior, 107, 106081.
Balakrishnan, J., Abed, S. S., & Jones, P. (2022). What is the role of meta-UTAUT factors, perceived
anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services?
Technological Forecasting and Social Change, 180, 121692.
Baleghi-Zadeh, S., Ayub, A. F. M., Mahmud, R., & Daud, S. M. (2017). The influence of system interac-
tivity and technical support on learning management system utilisation. Knowledge Management &
E-Learning, 9(1), 50.
Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomor-
phism, animacy, likeability, perceived intelligence, and perceived safety of robots. International
Journal of Social Robotics, 1, 71–81.
Bawack, R. E. (2021). How Perceived intelligence affects consumer adoption of AI-based voice assis-
tants: An affordance perspective. PACIS.
Cao, J., Yang, T., Lai, I. K.-W., & Wu, J. (2021). Student acceptance of intelligent tutoring systems dur-
ing COVID-19: The effect of political influence. The International Journal of Electrical Engineer-
ing & Education, 00207209211003270.
Chai, C. S., Wang, X., & Xu, C. (2020). An extended theory of planned behavior for the modeling of Chi-
nese secondary school students’ intention to learn artificial intelligence. Mathematics, 8(11), 2089.
Chang, C.-W., Heo, J., Yeh, S.-C., Han, H.-Y., & Li, M. (2018). The effects of immersion and interactiv-
ity on college students’ acceptance of a novel VR-supported educational technology for mental
rotation. IEEE Access, 6, 66590–66599.
Cheon, J., Lee, S., Crooks, S. M., & Song, J. (2012). An investigation of mobile learning readiness
in higher education based on the theory of planned behavior. Computers & Education, 59(3),
1054–1064.
Choung, H., David, P., & Ross, A. (2023). Trust in AI and Its Role in the Acceptance of AI Technologies.
International Journal of Human-Computer Interaction, 39(9), 1727–1739.
Chow, M., Herold, D. K., Choo, T. M., & Chan, K. (2012). Extending the technology acceptance model
to explore the intention to use Second Life for enhancing healthcare education. Computers & Edu-
cation, 59(4), 1136–1144.
Chuah, S.H.-W., Aw, E.C.-X., & Yee, D. (2021). Unveiling the complexity of consumers’ intention to use
service robots: An fsQCA approach. Computers in Human Behavior, 123, 106870.
Clark, R. M., Kaw, A. K., & Braga Gomes, R. (2022). Adaptive learning: Helpful to the flipped class-
room in the online environment of COVID? Computer Applications in Engineering Education,
30(2), 517–531.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum.
Conner, M., & Armitage, C. J. (1998). Extending the theory of planned behavior: A review and avenues
for further research. Journal of Applied Social Psychology, 28(15), 1429–1464.
Dahiya, M. (2017). A tool of conversation: Chatbot. International Journal of Computer Sciences and
Engineering, 5(5), 158–161.
Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information
systems: Theory and results. Massachusetts Institute of Technology.
Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Informa-
tion Technology. MIS Quarterly, 13(3), 319–340.
Deng, J., & Lin, Y. (2022). The benefits and challenges of ChatGPT: An overview. Frontiers in Com-
puting and Intelligent Systems, 2(2), 81–83.
Dignum, V. (2020). AI is multidisciplinary. AI Matters, 5(4), 18–21.
Etemad-Sajadi, R. (2016). The impact of online real-time interactivity on patronage intention: The use
of avatars. Computers in Human Behavior, 61, 227–232.
Fan, H., & Poole, M. S. (2006). What is personalisation? Perspectives on the design and implementa-
tion of personalisation in information systems. Journal of Organizational Computing and Elec-
tronic Commerce, 16(3–4), 179–202.
Fuchs, K. (2023). Exploring the opportunities and challenges of NLP models in higher education: Is
Chat GPT a blessing or a curse? Frontiers in Education, 8, 1166682. Frontiers.
Gilson, A., Safranek, C., Huang, T., Socrates, V., Chi, L., & Taylor, R. (2023). How Does ChatGPT
Perform on the Medical Licensing Exams? The Implications of Large Language Models for
Medical Education and Knowledge Assessment. https://doi.org/10.1101/2022.12,23. medRxiv.
Gonzalez-Tamayo, L. A., Maheshwari, G., Bonomo-Odizzio, A., Herrera-Avilés, M., & Krauss-
Delorme, C. (2023). Factors influencing small and medium size enterprises development and
digital maturity in Latin America. Journal of Open Innovation: Technology, Market, and Com-
plexity, 100069.
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the
uncanny valley. Cognition, 125(1), 125–130.
Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development, and evaluation of a human-computer
trust scale. Behavior & Information Technology, 38(10), 1004–1015.
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and
future of artificial intelligence. California Management Review, 61(4), 5–14.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: A global
perspective. Pearson Prentice Hall.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed, a silver bullet. Journal of Mar-
keting Theory and Practice, 19(2), 139–152.
Harrigan, M., Feddema, K., Wang, S., Harrigan, P., & Diot, E. (2021). How trust leads to online pur-
chase intention founded in perceived usefulness and peer communication. Journal of Consumer
Behaviour, 20(5), 1297–1312.
Ho, C. C., & MacDorman, K. F. (2010). Revisiting the uncanny valley theory: Developing and validat-
ing an alternative to the Godspeed indices. Computers in Human Behavior, 26(6), 1508–1518.
Ho, S. Y. (2006). The attraction of internet personalisation to web users. Electronic Markets, 16(1),
41–50.
Hoque, R., & Sorwar, G. (2017). Understanding factors influencing the adoption of mHealth by the
elderly: An extension of the UTAUT model. International Journal of Medical Informatics, 101,
75–84.
Hu, Y.-H. (2022). Effects and acceptance of precision education in an AI-supported smart learning
environment. Education and Information Technologies, 27(2), 2013–2037.
Ingkavara, T., Panjaburee, P., Srisawasdi, N., & Sajjapanroj, S. (2022). The use of a personalised
learning approach to implementing self-regulated online learning. Computers and Education:
Artificial Intelligence, 3, 100086.
Kandoth, S., & Shekhar, S. K. (2022). Social influence and intention to use AI: The role of per-
sonal innovativeness and perceived trust using the parallel mediation model. Forum Scientiae
Oeconomia.
Kang, M., Shin, D. H., & Gong, T. (2016). The role of personalisation, engagement, and trust in
online communities. Information Technology & People, 29(3), 580–596.
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the inter-
pretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1),
15–25.
Kim, J., Giroux, M., & Lee, J. C. (2021). When do you trust AI? The effect of number presenta-
tion detail on consumer trust and acceptance of AI recommendations. Psychology & Marketing,
38(7), 1140–1155.
Kim, J., Kang, S., & Bae, J. (2022). Human likeness and attachment effect on the perceived interactiv-
ity of AI speakers. Journal of Business Research, 144, 797–804.
Komiak, S. Y. X., & Benbasat, I. (2006). The Effects of Personalization and Familiarity on Trust and
Adoption of Recommendation Agents. MIS Quarterly, 30(4), 941–960.
Krouska, A., Troussas, C., & Sgouropoulou, C. (2022). Mobile game-based learning as a solution in
COVID-19 era: Modeling the pedagogical affordance and student interactions. Education and
Information Technologies, 27(1), 229–241.
Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with educational chatbots: A
systematic review. Education and Information Technologies, 28(1), 973–1018.
Kuleto, V., Ilić, M., Dumangiu, M., Ranković, M., Martins, O. M., Păun, D., & Mihoreanu, L. (2021).
Exploring opportunities and challenges of artificial intelligence and machine learning in higher
education institutions. Sustainability, 13(18), 10424.
Kumar, N., Singh, M., Upreti, K., & Mohan, D. (2022). Blockchain adoption intention in higher educa-
tion: role of trust, perceived security, and privacy in technology adoption model. Proceedings of
International Conference on Emerging Technologies and Intelligent Systems: ICETIS 2021 (Vol-
ume 1).
Kurni, M., Mohammed, M. S., & Srinivasa, K. G. (2023). Intelligent tutoring systems. A beginner’s
guide to introduce artificial intelligence in teaching and learning (pp. 29–44). Springer Interna-
tional Publishing.
Liaw, S. S., & Huang, H. M. (2013). Perceived satisfaction, perceived usefulness, and interactive learning
environments as predictors to self-regulation in e-learning environments. Computers & Education,
60(1), 14–24.
Liebrenz, M., Schleifer, R., Buadze, A., Bhugra, D., & Smith, A. (2023). Generating scholarly con-
tent with ChatGPT: ethical challenges for medical publishing. The Lancet. Digital Health, 5(3),
e105–e106.
Liu, K., & Tao, D. (2022). The roles of trust, personalisation, loss of privacy, and anthropomorphism in
public acceptance of smart healthcare services. Computers in Human Behavior, 127, 107026.
Lu, Y., Kim, Y., Dou, X. Y., & Kumar, S. (2014). Promote physical activity among college students:
Using media richness and interactivity in web design. Computers in Human Behavior, 41, 40–50.
Maheshwari, G. (2021). Factors affecting students’ intentions to undertake online learning: An empirical
study in Vietnam. Education and Information Technologies, 26(6), 6629–6649.
Maheshwari, G. (2022). Entrepreneurial intentions of university students in Vietnam: Integrated model of
social learning, human motivation, and TPB. The International Journal of Management Education,
20(3), 100714
Marangunić, N., & Granić, A. (2015). Technology acceptance model: A literature review from 1986 to
2013. Universal Access in the Information Society, 14, 81–95.
Miller, S. M. (2018). AI: Augmentation, more so than automation. Asian Management Insights, 5(1),
1–20.
Mohd Rahim, N. I., Iahad, N. A., Yusof, A. F., & Al-Sharafi, M. A. (2022). AI-based chatbots adoption
model for higher-education institutions: A hybrid PLS-SEM-neural network modelling approach.
Sustainability, 14(19), 12726.
Mohr, S., & Kühl, R. (2021). Acceptance of artificial intelligence in German agriculture: An application
of the technology acceptance model and the theory of planned behavior. Precision Agriculture,
22(6), 1816–1844.
Moussawi, S., & Koufaris, M. (2019). Perceived intelligence and perceived anthropomorphism of per-
sonal intelligent agents: Scale development and validation. Proceedings of the 52nd Hawaii Inter-
national Conference on System Sciences
Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthro-
pomorphism affect the adoption of personal intelligent agents. Electronic Markets, 31, 343–364.
Nalbant, K. G. (2021). The importance of artificial intelligence in education: A short review. Journal of
Review in Science and Engineering, 2021, 1–15.
Nazaretsky, T., Ariely, M., Cukurova, M., & Alexandron, G. (2022). Teachers’ trust in AI-powered edu-
cational technology and a professional development program to improve it. British Journal of Edu-
cational Technology, 53(4), 914–931.
Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2023). A review of AI teach-
ing and learning from 2000 to 2020. Education and Information Technologies, 28(7), 8445–8501.
Ni, A., & Cheung, A. (2023). Understanding secondary students’ continuance intention to adopt AI-
powered intelligent tutoring system for English learning. Education and Information Technologies,
28(3), 3191–3216.
Nisar, S., & Aslam, M. S. (2023). Is ChatGPT a good tool for T&CM students in studying pharmacol-
ogy? Available at SSRN 4324310.
Olarewaju, A. D., Gonzalez-Tamayo, L. A., Maheshwari, G., & Ortiz-Riaga, M. C. (2023). Journal of
Small Business and Enterprise Development, 30(3), 475–500.
Oxford Analytica. (2023). ChatGPT dramatically fuels corporate interest in AI. Emerald Expert
Briefings.
Pavlik, J. V. (2023). Collaborating With ChatGPT: considering the implications of generative artificial
intelligence for journalism and media education. Journalism and Mass Communication Educator,
78(1), 84–93.
Pillai, R., Sivathanu, B., Metri, B., & Kaushik, N. (2023). Students’ adoption of AI-based teacher-bots
(T-bots) for learning in higher education. Information Technology & People (West Linn, Or.)
(ahead-of-print)
Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. Interna-
tional Journal of Contemporary Hospitality Management, 32(10), 3199–3226.
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science
research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569.
Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning
in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 1–13.
Pothen, A. S. (2022). Artificial intelligence and its increasing importance. In J. Karthikeyan, T. S. Hie,
& N. Y. Jin (Eds.), Learning Outcomes of Classroom Research (pp. 74–81). L Ordine Nuovo
Publication.
Qadir, J. (2022). Engineering education in the era of chatGPT: promise and pitfalls of generative AI
for education. In IEEE Global Engineering Education Conference (EDUCON) proceedings. IEEE.
Rahaman, M., Ahsan, M., Anjum, N., Rahman, M., & Rahman, M. N. (2023). The AI race is on! Goog-
le’s bard and OpenAI’s ChatGPT head to head: An opinion article. Mizanur and Rahman, Md Nafi-
zur, The AI Race is on.
Rahmat, T. E., Raza, S., Zahid, H., Abbas, J., Mohd Sobri, F. A., & Sidiki, S. N. (2022). Nexus between
integrating technology readiness 2.0 index and students’ e-library services adoption amid the
COVID-19 challenges: Implications based on the theory of planned behavior. Journal of Education
and Health Promotion, 11(1), 50–50.
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in
higher education? Journal of Applied Learning and Teaching, 6(1), 1–22.
Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visu-
alising, and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
Sass, T., & Ali, S. M. (2023). Virtual Tutoring Use and Student Achievement Growth. Georgia Policy
Labs Reports.
Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cut-
ter Business Technology Journal, 31(2), 47–53.
Simmons, A. B., & Chappell, S. G. (1988). Artificial intelligence definition and practice. IEEE Journal of
Oceanic Engineering, 13(2), 14–42.
Sing, C. C., Teo, T., Huang, F., Chiu, T. K., & Xing Wei, W. (2022). Secondary school students’ inten-
tions to learn AI: Testing moderation effects of readiness, social good and optimism. Educational
Technology Research and Development, 70(3), 765–782.
Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the
Facebook Messenger. Computers & Education, 151, 103862.
Soper, D. S. (2021). A-priori sample size calculator for structural equation models [Software]. 2021.
Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students’ acceptance
and use of technology. Interactive Learning Environments, 1–14. (ahead-of-print)
Tarhini, A., Masa’deh, R. E., Al-Busaidi, K. A., Mohammed, A. B., & Maqableh, M. (2017). Factors influencing students’ adoption of e-learning: A structural equation modeling approach. Journal of International Education in Business, 10(2), 164–182.
Tetzlaff, L., Schmiedek, F., & Brod, G. (2021). Developing personalised education: A dynamic frame-
work. Educational Psychology Review, 33, 863–882.
Venkatesh, V., & Davis, F. D. (1996). A model of the antecedents of perceived ease of use: Development
and test. Decision Sciences, 27(3), 451–481.
Wei, J., Vinnikova, A., Lu, L., & Xu, J. (2021). Understanding and predicting the adoption of mobile fit-
ness apps: Evidence from China. Health Communication, 36(8), 950–961.
Yacci, M. (2000). Interactivity demystified: A structural definition for distance education and intelligent
computer-based instruction. Educational Technology, 40(4), 5–16.
Yu, C. E. (2020). Humanlike robots as employees in the hotel industry: Thematic content analysis of
online reviews. Journal of Hospitality Marketing & Management, 29(1), 22–38.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on
artificial intelligence applications in higher education–where are the educators? International Jour-
nal of Educational Technology in Higher Education, 16(1), 1–27.
Zhai, X. (2022). ChatGPT user experience: Implications for education. Available at SSRN 4312418.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under
a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and
applicable law.