Integrating the adapted UTAUT model with moral obligation, trust and perceived risk to predict ChatGPT adoption for assessment support: A survey with students

Chung Yee Lai a,*, Kwok Yip Cheung b, Chee Seng Chan c, Kuok Kei Law d

a Lee Shau Kee School of Business and Administration, Hong Kong Metropolitan University, Hong Kong
b Department of Accounting, The Hong Kong University of Science and Technology, Hong Kong
c Brittany Universite, France
d NUCB Business School, Japan
ABSTRACT

ChatGPT stands out among other AI technology tools to provide assessment support in the higher education context. This exploratory study investigates the motivators and barriers which influence the use intention of ChatGPT for assessment support among undergraduates in Hong Kong. We tested the adapted and extended UTAUT model using the structural equation modeling approach. Through a self-report online questionnaire, we received usable responses from 483 undergraduates of eight Hong Kong universities.

In a highly competitive Chinese context, assessment results hold immense significance in determining the value and merit of students. The findings reveal that trust acts as the strongest determinant, while moral obligation and perceived risk serve as major inhibitors of students' acceptance of ChatGPT in assessments. Perceived risk does not mediate the relationship between trust and students' behavioral intention. Both performance expectancy and effort expectancy positively influence behavioral intention to use ChatGPT for assessment support. However, social influence is found to be statistically insignificant.

The results contribute to the education literature by extending the adapted UTAUT model with additional variables for ChatGPT adoption in the assessment context. They offer practical insights to promote ethical and optimal use of ChatGPT in the higher education context.
* Corresponding author. Hong Kong Metropolitan University, Lee Shau Kee School of Business and Administration, 30 Good Shepherd Street, Homantin, Kowloon, Hong Kong.
E-mail addresses: [email protected] (C.Y. Lai), [email protected] (K.Y. Cheung), [email protected] (C.S. Chan), [email protected] (K.K. Law).
https://doi.org/10.1016/j.caeai.2024.100246
Received 4 October 2023; Received in revised form 22 May 2024; Accepted 23 May 2024
Available online 27 May 2024
2666-920X/© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
C.Y. Lai et al. Computers and Education: Artificial Intelligence 6 (2024) 100246
tools and techniques.

Last, ChatGPT's ability to generate instant and context-relevant responses is valuable in addressing challenging questions during assessments. LLMs can display step-by-step reasoning in science subjects, which can be effectively employed to solve questions through various prompting strategies (Liang et al., 2023). Alternatively, it can also act like a search engine which enables users to input inquiries informally or incompletely, allowing students to search and extract information efficiently. Further, it can also generate outputs such as essays, case study analyses, and computer code. Learners can thereby allocate more effort to meaningful analytical tasks in preparing assignments. Students can use ChatGPT as a scaffolding tool for their preliminary draft, followed by manual error correction and reference updates, to create final outputs in the form of essays, reports, research papers and dissertations in an appropriate genre, tone and style (Choi, Hickman, Monahan, & Schwarcz, 2023; Zohery, 2023).

Given ChatGPT's strong capability to generate fluent responses, there has been a remarkable surge in usage, and students will inevitably leverage this cutting-edge technology to support their assessment endeavors. Existing research indicated that 95 percent of student participants held a positive attitude towards chatbot adoption (Malmström et al., 2023) and were familiar with ChatGPT (Singh et al., 2023). In a survey by Study.com (2023), almost 90 percent of student respondents regularly employed ChatGPT to assist them with their assignments, whereas around half of them admitted using it in take-home quizzes and essays.

1.1. ChatGPT for assessment support in Hong Kong

Educators in Hong Kong typically agree that AI-powered tools have great potential for assisting students' learning (Fung, 2023; Yiu, 2023) but hold distinctive views on the application of ChatGPT in assessments because of ethical concerns. At the early launch of ChatGPT, only The Hong Kong University of Science and Technology and The Education University of Hong Kong kept an open mind to the use of ChatGPT for assessment support (Mok, 2023). Among the eight universities offering programs funded by the University Grants Committee in Hong Kong, four initially banned the use of ChatGPT in assessments because of issues related to fairness, data privacy and security, information accuracy, and bias (Chow, 2023; Fung, 2023).

After deliberation, universities in Hong Kong have allowed students to adopt ChatGPT for supporting their assessments if they are able to present evidence of their responsible and ethical use of the AI chatbot. For instance, they are required to seek prior approval from instructors, include a statement of declaration, and articulate their reflections on the learning process. At least five universities have begun offering students free access to ChatGPT, either directly through OpenAI or by providing their own version of the AI chatbot via the Microsoft Azure OpenAI platform. This has significantly increased the accessibility of this AI technology for students.

To foster the broader adoption of ChatGPT in assessment settings, it is crucial to gain a thorough understanding of the factors which contribute to and hinder the acceptance of ChatGPT. Empirical studies have evidenced that trust (TR), perceived risk (PR), and the morality of individuals are important predictors of users' technology acceptance (Albayati, 2024; Lee & Song, 2013; Milchram et al., 2023). However, despite the wide application of ChatGPT in education, few studies have focused on the application of this novel technology in the assessment context (Gamage et al., 2023; Zuckerman et al., 2023). In view of this, the current research studies students' acceptance of ChatGPT (GPT-3.5), the free version of ChatGPT which is publicly available.

TR is an important factor of interaction (Cheung & Lai, 2022) and a strong motivator of behavioral intention to use AI-powered chatbots (Viswanath et al., 2023). Traditionally, students search for reference materials for their assignments from the internet, hardcopies of course materials and books, where they can evaluate the reliability of the information based on the source channel, author, language rigor and date of publication. However, ChatGPT relies on algorithms that generate responses using internet and published data which can be inaccurate and biased, making it difficult to ensure the veracity of the information (Eysenbach, 2023). Due to ChatGPT's content generation capabilities, language fluency and lack of originality, students do not have access to information for reliability judgement. The lack of trust in ChatGPT may affect students' intention to interact with it for assessment support.

PR relates to students' concerns about ChatGPT's safety, security risks and ethical implications. The adoption of ChatGPT in assessments raises concerns regarding privacy and security risks. Since the training of AI chatbots often entails the collection and processing of extensive data from diverse sources (Wu et al., 2023b), ChatGPT has been criticized by the Italian data-protection authority for its large-scale collection of data from users without a legal basis (McCallum, 2023). Further, the input by students in ChatGPT may be regenerated as responses, which exposes students to potential safety and privacy risks when using ChatGPT for assessment support.

Additionally, the use of ChatGPT is associated with ethical considerations which need to be carefully addressed. If students effortlessly copy and incorporate the instant responses into their work, they may fail to demonstrate their subject knowledge (Lai & Cheung, 2023), or to develop reasoning, problem solving and critical thinking skills through the assessments (Memarian & Doleck, 2023a). Unless explicitly requested, ChatGPT typically does not provide any sources of information, despite its knowledge being derived from training text with a defined endpoint (Rahman et al., 2023). Some advanced text generated by ChatGPT can evade detection by current plagiarism detection technologies, providing an unfair advantage over learners who complete assessments independently. These raise concerns about fairness and plagiarism and undermine academic integrity (Eke et al., 2023; Memarian & Doleck, 2023b). Students who irresponsibly use ChatGPT during assessments expose themselves to the risk of receiving a score deduction or grading penalty, which may negatively affect their intention to embrace the latest AI technology.

The moral perception of a technology plays a crucial role in shaping users' acceptance of it (Wu, Huang, & Li, 2023). However, students typically lack knowledge about the responsible use of ChatGPT because of its novelty. In a study of around six thousand Swedish university students, over 60 percent of participants agreed that employing chatbots during assessments could be regarded as cheating (Malmström et al., 2023). Further, more than 80 percent of them expressed uncertainty regarding their universities' guidelines or regulations about responsible use of AI technology. While ChatGPT criticisms often center on its failure to acknowledge original sources and authors in generated responses, few students are educated on properly crediting ideas sourced through reverse searching to avoid plagiarism (Halaweh, 2023). Learners with distinctive levels of perceived responsibility for moral behaviors are likely to demonstrate different acceptance of ChatGPT use in assessment. Those who consider ChatGPT morally acceptable for assessment support are more likely to display a higher level of acceptance. Conversely, students who recognize the unethical implications may feel a moral obligation (MO) to refrain from using it, as doing so is perceived as morally wrong, and their intention to adopt it is diminished.

1.2. Research gap and questions

The results of this study may differ from previous research on other technologies for two main reasons. Firstly, ChatGPT stands out from other AI technologies due to its novelty and groundbreaking generative capabilities. As ChatGPT provides revolutionary support in assessments, it is conceivable that students may embrace the technology primarily due to its exceptional performance, even if they have reservations about the TR and PR associated with it. This could potentially yield different outcomes when compared to empirical studies conducted on other
technologies. However, there is a scarcity of studies that have specifically explored the acceptance of ChatGPT (Choudhury & Shamszare, 2023).

Secondly, the education system in Hong Kong is characterized by an intense focus on academic excellence (Dou & Shek, 2022). Within a highly competitive culture, the success of students is heavily dependent on the results of high-stakes assessments, which are used to determine their eligibility for increasingly limited educational opportunities (Kennedy, 2007). While primary and secondary school students often have the option to seek assessment support from their parents or instructors at large-scale tutorial centers, the availability of personal tutors or tutorial centers specifically catering to university students is relatively limited in Hong Kong. Additionally, the specialized and complex course content at university level can pose challenges for university students in seeking assessment support, possibly leading to a reliance on ChatGPT as a means of obtaining prompt assistance. In this context, it is possible that some students might prioritize the perceived benefits of using ChatGPT in assessments over the potential risks, which may lead them to overlook their MO. Hence, investigating the utilization of ChatGPT within the context of Hong Kong universities becomes an intriguing endeavor.

Therefore, we propose that the drivers and barriers to ChatGPT use for assessments are specific to the higher education context and may differ from those of other technologies. Consequently, the following research questions are formulated:

RQ 1 What are the primary determinants and deterrents influencing students' ChatGPT acceptance for assessment support?
RQ 2 To what extent do the determinants and deterrents influence students' ChatGPT acceptance for assessment support?

Our study contributes in the following four significant ways:

(i) Though ChatGPT represents a novel AI-based tool with immense potential to revolutionize knowledge acquisition and drive changes in assessment practices, research on AI chatbots in education is still at an early stage (Yilmaz & Yilmaz, 2023). While most studies are on the general usage of ChatGPT, our study serves as an exploratory investigation into the predictors of behavioral intention (BI) to use ChatGPT for assessment support.
(ii) This study pioneers the adaptation and extension of the UTAUT model to investigate the impact of MO, TR, and PR on ChatGPT acceptance for assessment support within the higher education context. It contributes to the empirical literature on AI chatbot adoption by proposing a novel model comprising six factors which explain the intention to use ChatGPT in assessments.
(iii) Through exploring users' psychological factors, this study provides practical implications for educators to foster responsible use of ChatGPT among students. Drawing from our findings, we urge university committees to develop guidelines, organize seminars, and conduct workshops to promote and regulate the ethical use of ChatGPT.
(iv) Our findings highlight the barriers to undergraduates' acceptance of ChatGPT for assessment support and provide valuable insights for AI developers on potential improvements, thus facilitating wider adoption of this technology within the education context.

The rest of this paper is organized as follows. Section 2 provides the literature review on the roles of assessments and ChatGPT for assessments in the Chinese context, as well as the theoretical framework of this study. Section 3 develops the hypotheses to be tested. Section 4 presents the research methods and materials. Section 5 presents and analyzes the results. Section 6 discusses the results and their research implications. Section 7 provides a conclusion, limitations, and suggestions for future studies.

2. Literature review

2.1. Assessments in higher education: challenges and ethical considerations in the age of ChatGPT

Assessments play a crucial role in higher education as tools to evaluate students' learning progress and outcomes (Cotton et al., 2023; Fischer et al., 2023). Common forms of educational assessment include formative assessments and summative assessments, which involve the collection of evidence about students' domain-specific abilities (Meijer et al., 2020). Formative assessments, usually in the form of in-class discussions, video quizzes, short reflection assignments and peer reviews, involve ongoing evaluation of students' learning, focusing on understanding and improvement. Summative assessments, such as academic papers, projects, presentations, reports, quizzes, and examinations, are employed to evaluate students' performance and compare it against a benchmark at the end of an instructional period (Khadled & El Khatib, 2019).

In the Chinese high-stakes context, assessments serve a significant role in reflecting the worth and quality of individuals (Li, 2009). As students actively participate in various assessment tasks, they encounter unanticipated obstacles that require them to explore and deliberate upon different approaches (Fischer et al., 2023). This process involves leveraging prior knowledge and making innovative decisions to address the given problem, consequently leading to the ongoing enhancement of their evaluative judgment as a "feel" for what constitutes quality in the field of study (Dall'Alba, 2018). From the Chinese perspective, assessments have the primary objective of fostering students' holistic development, guiding them towards becoming recognized as socially esteemed "better people" (Brown et al., 2011). This stands in contrast to the western tradition, which tends to encourage a clear separation between affective and academic aspects when reporting assessment results (Friedman & Frisbie, 1995).

Although ChatGPT can be a valuable tool for assisting students in excelling in assessments, it can also assist university students to cheat (Gamage et al., 2023). By providing human-like and ready-to-use text for entire essays, presentation scripts, and assignments, ChatGPT can replace the need for students to engage in critical thinking and original work. This opens the door for students to cheat by submitting responses generated by ChatGPT as their own. Unfortunately, this undermines the principle of fairness in educational measurement and presents challenges for instructors in detecting and discouraging cheating (Illingworth, 2023; Jiang et al., 2024). It also compromises the fundamental objectives of higher education, which seek to educate and challenge students, potentially leading to a devaluation of degrees (Cotton et al., 2023). Ultimately, engaging in such dishonest behavior hinders students' personal growth and contradicts the basic objective of assessments to develop students as "better people".

Recently, efforts are underway to identify AI-generated text in students' assignments and online examinations through upgrading the functions of digital tools like GPTZero (Farrokhnia et al., 2023) and similarity detection software like Turnitin (Kohnke et al., 2023). However, the detectability of ChatGPT-generated answers to assessment questions remains uncertain (Ibrahim et al., 2023) since they are written on the spot and do not appear in a static sense (Steele, 2023). Relying solely on these temporary measures can potentially result in a continuous cycle of evasion and detection, failing to address the core utility of ChatGPT or the broader ethical considerations associated with the use of technology (Kohnke et al., 2023).

2.2. Unified theory of acceptance and use of technology (UTAUT)

The Unified Theory of Acceptance and Use of Technology (UTAUT) has been a classic model for investigating technology acceptance in different sectors and geographical contexts. This comprehensive model was compiled from eight models and was validated by Venkatesh et al.
(2003) with four core predictors of the intention to adopt technology: performance expectancy (PE), effort expectancy (EE), social influence (SI) and facilitating conditions (FC).

Users' perception of PE can be affected by different factors such as perceived usefulness, the compatibility of a technology, and its impact on job performance (Brachten et al., 2021). EE is influenced by individuals' prior experience with the technology, its complexity, perceived technical capability, and the availability of support for its adoption (Muriithi et al., 2016). The source of SI can be users' significant others, including friends, classmates, colleagues, teachers, and family members (Venkatesh et al., 2003). FC encompasses the resources, training and technical support from instructors to the users for using the technology (Oye et al., 2014). The model suggests that the intention to adopt a technology is the determinant of actual use. The UTAUT constructs explain 56 percent of the variance in BI and 40 percent of the variance in technology use in the organizational setting. The impact of the predictors is moderated by individuals' age, gender, experience and voluntariness of use (Venkatesh et al., 2003). Nevertheless, most empirical studies used only a subset of the UTAUT model, with moderators frequently being dropped (Dwivedi et al., 2023).

Recently, there has been rising interest in applying the UTAUT model to understand ChatGPT acceptance in various settings. Menon and Shilpa (2023) conducted semi-structured interviews with Indian users. The results indicated that PE, EE, SI and FC are strong predictors of ChatGPT adoption. Budhathoki et al. (2024) and Strzelecki (2023) examined the acceptance of ChatGPT among students from state universities in the UK, Nepal, and Poland. Their findings revealed that PE, EE, and SI emerged as influential factors predicting students' BI to utilize ChatGPT. Foroughi et al. (2023) investigated determinants of ChatGPT acceptance for educational purposes among university students in Malaysia. The findings showed that PE and EE were significant drivers of the use intention of ChatGPT. However, in Habibi et al.'s (2023) study, EE was found to be an insignificant determinant of ChatGPT use in learning among university students in Indonesia. Despite the important role of ChatGPT in assessment support, no empirical research has focused on ChatGPT adoption in the assessment setting.

In this study, we postulate that factors of the UTAUT model such as PE, EE, and SI are direct determinants of students' intention to accept ChatGPT in learning. As discussed in the introduction, we also examine the role of MO and PR, as well as the direct and indirect influence of trust on the use intention of ChatGPT.

3. Hypothesis development

3.1. Moral obligation (MO)

MO is defined as the feeling of guilt or the personal obligation to perform or not to perform an act (Beck & Ajzen, 1991). Previous studies have indicated that when individuals engaged in unethical behavior, they considered not only external social pressures, but also internal feelings of MO and responsibility (Beck & Ajzen, 1991; Cronan & Al-Rafee, 2008). Extant research has demonstrated that MO was a significant determinant of BI (Beck & Ajzen, 1991; Conner & Armitage, 1998), and that individuals might consider MO when forming intentions to engage in unethical behavior such as digital piracy (Cronan & Al-Rafee, 2008; Higgins et al., 2008). Individuals with high moral and ethical standards were less likely to engage in opportunistic behavior (Gupta et al., 2004).

The use of ChatGPT in assessment settings raises concerns about the lack of sources and access to original articles, thereby posing challenges related to academic integrity. It is debatable whether the responses generated by ChatGPT are entirely original or instead consist of paraphrases of sources without proper citation, as the LLM was trained on internet texts with multiple sources (Gravel et al., 2023). From this perspective, certain universities considered the use of AI-powered chatbots for assessment support as a form of plagiarism (Lai & Cheung, 2023). Given the unsolved ethical issues associated with ChatGPT use, students may perceive themselves as immoral if they use ChatGPT for preparing their assessments. In this study, MO refers to students' feelings about their responsibility to avoid using ChatGPT to support their assessments. Thus, the following relationship is hypothesized:

H1. Moral obligation has a negative effect on students' behavioral intention to use ChatGPT for assessment support.

3.2. Trust (TR)

TR is defined as students' perceptions about the trustworthiness (i.e. integrity, ability and benevolence) and reliability of the technology (Alalwan et al., 2017). It refers to one's readiness to accept vulnerability based on positive expectations about the intentions or the acts of others in an environment characterized by interdependence and risk (Ennew & Sekhon, 2007). Trust in competence enhances the confidence with which actors make decisions about behaviors (Cheung & Lai, 2022).

TR was identified as a key factor of university students' technology adoption (Albayati, 2024; Chao, 2019; Jo, 2023; Wu & Chiu, 2018). Wu and Chiu (2018) found that users' level of trust played a vital role in their e-learning adoption decisions when they were not familiar with a system. In the ChatGPT context, Jo (2023) conducted a questionnaire survey of university students in Korea and identified trust as a crucial factor of ChatGPT acceptance for general usage. When university students trusted that their information was secure and that the benefits of ChatGPT adoption outweighed the risks, they had a higher tendency to use it in academic settings. Rahman et al. (2023) examined the role of trust in ChatGPT acceptance among university students in Bangladesh. They found that trust serves as a moderating factor influencing the relationship between perceived enjoyment and attitudes towards ChatGPT adoption for learning, which subsequently predicts their BI to use it. However, the impact of trust on ChatGPT adoption for assessment support remains a research gap in the literature.

In this study, TR refers to students' understanding of the reliability and trustworthiness of ChatGPT in supporting their assessments. Consistent with prior studies, we conjecture that users' TR is an important factor of ChatGPT acceptance for assessment support, and the following hypothesis is set:

H2. Trust has a positive effect on students' behavioral intention to use ChatGPT for assessment support.

3.3. Perceived risk (PR)

PR is defined as individuals' perception about the uncertainty and seriousness of consequences associated with a behavior (Bauer, 1967). Featherman and Pavlou (2003) suggested a broader definition of PR as people's understanding of the potential for loss when pursuing desired outcomes of using a technology. PR arises when individuals engage in situations with uncertain consequences or potential negative outcomes due to wrong or poor decisions.

Previous studies demonstrated that the PR of privacy and data security reduces users' satisfaction, which negatively affects their continued use intention (Cheng & Jiang, 2020; Ikkatai et al., 2022). For instance, Hanafizadeh et al. (2014) found that the greater the risk of using a new technology, the lower was an individual's intention to use it. Albayati (2024) indicated that security and privacy concerns exert a positive influence on trust, serving as a significant predictor of students' behavioral intention to utilize ChatGPT as a regular assistance tool. Contrarily, Hsu and Shiue (2008) conducted a consumer survey in Taiwan that yielded contrasting findings. Their results indicated that the PR associated with purchasing pirated software had no influence on the intention to pay for legal alternatives. This outcome suggests that consumers may hold the belief that they are not exposed to significant risks of prosecution and penalties.
ChatGPT has been accused of preparing misleading and fraudulent content (Lai & Cheung, 2023). Recent research has consistently highlighted ChatGPT's tendency to generate hypothetical literature reviews with conspiracy theories, fictitious citations and non-existent references in academic papers and theses (Rahman et al., 2023; Zhou et al., 2023). The AI chatbot has been found to frequently blend reputable author names with a sound research track record, an interesting title, and a relevant journal (Gravel et al., 2023). The improper use of fake references by students exposes them to the risk of receiving mark penalties in their assessments.

In this study, we define PR as individuals' perception of the uncertainty, seriousness and potential for loss (including privacy) associated with using ChatGPT for assessment support. We postulate that the PR of ChatGPT reduces students' intention to use it for learning purposes. Thus, the following relationship is hypothesized:

H3. Perceived risk has a negative effect on students' behavioral intention to use ChatGPT for assessment support.

PR is an important component of trust models, as trust may help with individuals' risk assessment (Cheung & Lai, 2022). Various studies have examined the relationship between risk and trust in the context of technology (Albayati, 2024; Bugshan & Attar, 2020; Lee & Song, 2013; Mou et al., 2017). In a Korean online technology context, Lee and Song (2013) found TR to be an indirect antecedent of BI through PR. Following their study, we hypothesize that students' trust in ChatGPT may influence their PR and be an indirect determinant of ChatGPT's use intention for assessment support. Therefore, the following hypothesis is set:

H4. Perceived risk has a mediating effect on the relationship between trust and students' behavioral intention to use ChatGPT for assessment support.

3.3.1. UTAUT determinants and behavioral intention to use ChatGPT
The adapted UTAUT model is also used to explore the determinants of students' use intention of ChatGPT at universities in Hong Kong. In this study, we focus on the UTAUT determinants PE, EE, and SI.

3.4. Performance expectancy (PE)

Prior research has consistently identified PE as a strong motivator of BI to use educational technology (Ahmed et al., 2022; Chao, 2019). Budhathoki et al. (2024) and Strzelecki (2023) suggested that PE positively correlates with ChatGPT acceptance for academic use among university students. However, there are criticisms of the generalizability and insufficient original insights of ChatGPT responses. If students perceive its responses as general insights without the support of domain expertise, they may be reluctant to adopt it in assessments. In the present study, PE refers to individuals' beliefs about the usefulness of ChatGPT for assessment support. We postulate that students' PE for the ChatGPT functions, such as performance enhancement, learning efficiency and usefulness for improving course results, positively influences BI to use it in assessments. Accordingly, we hypothesize the following:

H5. Performance expectancy has a positive effect on students' behavioral intention to use ChatGPT for assessment support.

(Budhathoki et al., 2024; Strzelecki, 2023). Nonetheless, Habibi et al. (2023) failed to show a significant relationship between EE and ChatGPT use for higher education learning. Following most of the prior studies, we assume EE is a strong motivator of BI to use ChatGPT for assessment support. Thus, we formulate the following hypothesis:

H6. Effort expectancy has a positive effect on students' behavioral intention to use ChatGPT for assessment support.

3.6. Social influence (SI)

SI represents "the extent to which an individual perceives those important others believe he or she should use the new system" (Venkatesh et al., 2003, p. 450). SI has been frequently identified as a crucial factor of e-learning technology adoption among university students (Ahmed et al., 2022; Raman & Thannimalai, 2021). In the ChatGPT context, SI has shown strong predictive power of ChatGPT acceptance for general use in the higher education context (Strzelecki, 2023). In this study, SI refers to the degree to which individuals perceive that important others, such as classmates, friends and instructors, believe they should use ChatGPT for assessment support. We expect that if the important others agree that students should use ChatGPT, they will be encouraged to use it. We therefore propose the following hypothesis:

H7. Social influence has a positive effect on students' behavioral intention to use ChatGPT for assessment support.

3.7. Behavioral intention (BI)

Ajzen (1991) defines BI as an indicator of a person's readiness to perform a specific behavior. Numerous studies have shown that BI plays a crucial role in actual behaviors (Burke, 2002). In this study, BI is associated with undergraduates' readiness to use ChatGPT for assessment support. It is the only endogenous variable in the model and is determined by the other, exogenous, variables.

The extended UTAUT model to be tested in this study is presented in Fig. 1.

4. Methods and materials

4.1. Research method

This study adopted a structural equation modeling (SEM) approach which investigates the relationships among the latent variables adapted and extended from the UTAUT framework. The SEM approach is a multivariate technique which combines confirmatory factor analysis (CFA) with path analysis (Vanneste et al., 2013). Initially, we followed Hair et al. (2019) in conducting a CFA to assess the conformity of the individual and collective constructs with content, convergent, and discriminant validity requirements. Additionally, a common method bias test was performed alongside the measurement model to ensure the absence of biased data. Then, SEM was performed to examine the causal relationships among all the constructs and to test the structural model.

4.2. Survey instrument

There are two parts to the questionnaire. The first part consisting of
3.5. Effort expectancy (EE) 28 descriptions was designed to examine the determinants of partici
pants’ intention to use ChatGPT and the second part was to collect de
EE refers to “the degree of ease associated with using the system” mographic information. The 28 manifest items, the corresponding
(Venkatesh et al., 2003, p. 450). In the initial use of technology, the level descriptions for measuring the seven latent variables and the sources of
of ease significantly influenced the acceptance behavior (Cimperman past studies are presented in English (Table 1). The descriptions were
et al., 2013). In this study, we define EE as the energy required to learn modified from previous studies to better accommodate our investigation
how to use ChatGPT and interact with it for assessment support. Prior on the use of ChatGPT in assessment setting. Participants were asked to
studies in education have shown that increased levels of energy indicate their levels of agreement towards the descriptions based on a 5-
discourage students from using ChatGPT in academic context point Likert scale which ranges from “strongly disagree” to “strongly
5
C.Y. Lai et al. Computers and Education: Artificial Intelligence 6 (2024) 100246
agree”. The questionnaire does not explicitly specify the types of as the measurement model. The GOF indices of the measurement model
sessments, allowing the term “assessments” to encompass both forma and the recommended values from literatures are listed in Table 3.
tive and summative assessments. From the values above, we conclude that the measurement model
The survey items were set to obtain quantitative data about students’ has good fit of data and requires no adjustments.
drivers and inhibitors of ChatGPT adoption for assessment support. The second step of CFA is to examine the reliability and convergent
validity of each construct. The results of CFA are presented in Table 4.
From Table 4, the factor loadings for all 28 items are more than 0.5,
4.3. Sampling and data collection
which demonstrate that the meanings of the factors are preserved by
these indicators. The reported average variance extracted (AVE) values
The questionnaire survey was conducted using convenience sam
range from 0.666 to 0.769, which exceeds the commonly used threshold
pling to collect data in July 2023, after a pilot study on 50 students. The
of 0.5 as recommended by Fornell & Larcker (1981). Since the composite
questionnaire was randomly distributed via the Qualtrics survey plat
reliability (CR) values, ranging from 0.888 to 0.930, are above the
form (www.qualtrics.com) to students in eight universities in Hong
suggested threshold of 0.6 (Bagozzi & Yi, 1988), indicating the mea
Kong. Because of the complexity involved in estimating the total pop
surement is reliable. On the other hand, the Cronbach’s alpha values
ulation size, Hair et al.’s (2019) criteria were adopted to determine the
range from 0.847 to 0.906, all exceeding the recommended threshold of
sample size, i.e., 5 to 10 responses per item. As there are 28 items in the
0.7 (Nunnally & Bernstein, 1994). This shows that the manifest in
questionnaire, a total of 140–280 sample size is considered adequate for
dicators of each latent construct are closely related and have sufficiently
this study. Out of the 538 questionnaires distributed, 496 participants
low measurement error.
reported prior experience with ChatGPT, which accounts for 92.19% of
This research further examined the discriminant validity of the
the total participants. Thirteen questionnaires were eliminated due to
constructs, following the criteria proposed by Fornell & Larcker (1981).
incomplete information, resulting in a total of 483 useable question
This approach suggests that the square root of the AVE for each construct
naires being collected. Since the sample size is larger than 280, it is
should be greater than the correlations between that construct and the
considered sufficient for the study. No information revealing survey
other constructs in the model.
respondents’ identity was collected. The information collected from
The results presented in Table 5 show that the square root of the AVE
respondents is kept highly confidential and the electronic data files
value for each construct is greater than the correlations between that
containing their information are password-protected.
construct and the other constructs. This provides evidence of discrimi
nant validity, indicating that the constructs in the model are distinct and
5. Results
measure different concepts.
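The Fornell–Larcker condition described above can be checked mechanically. A minimal Python sketch using the AVE values reported in Table 4 and the inter-construct correlations reported in Table 5 (the helper name is ours, not the authors' code):

```python
# Fornell-Larcker check: the square root of each construct's AVE must
# exceed its absolute correlation with every other construct.
# AVE values transcribed from Table 4; correlations from Table 5.
AVE = {"BI": 0.730, "PE": 0.769, "MO": 0.756, "PR": 0.769,
       "EE": 0.767, "SI": 0.666, "TR": 0.711}
CORR = {
    ("PE", "BI"): 0.127, ("MO", "BI"): -0.162, ("MO", "PE"): 0.039,
    ("PR", "BI"): -0.101, ("PR", "PE"): -0.039, ("PR", "MO"): -0.021,
    ("EE", "BI"): 0.103, ("EE", "PE"): 0.139, ("EE", "MO"): -0.048,
    ("EE", "PR"): 0.134, ("SI", "BI"): 0.053, ("SI", "PE"): 0.154,
    ("SI", "MO"): 0.090, ("SI", "PR"): -0.038, ("SI", "EE"): 0.020,
    ("TR", "BI"): 0.137, ("TR", "PE"): 0.008, ("TR", "MO"): -0.018,
    ("TR", "PR"): -0.047, ("TR", "EE"): 0.087, ("TR", "SI"): 0.037,
}

def fornell_larcker_holds(ave, corr):
    """True if sqrt(AVE) beats every absolute inter-construct correlation."""
    return all(abs(r) < min(ave[a], ave[b]) ** 0.5
               for (a, b), r in corr.items())

print(fornell_larcker_holds(AVE, CORR))  # prints True for these values
```

With the reported values, the largest absolute correlation (0.162) is well below the smallest square-rooted AVE (0.816 for SI), so the criterion is satisfied for every construct pair.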
C.Y. Lai et al. Computers and Education: Artificial Intelligence 6 (2024) 100246
Table 1
Survey item details.

Moral Obligation (modified from Quoc et al., 2020)
MO1 (1). It would not be morally wrong for me to complete the assessments with the help of ChatGPT.
MO2 (2). I would not feel guilty if I search information from ChatGPT for reference when completing assessments.
MO3 (3). I would not feel guilty if I directly copy the answers from ChatGPT when completing assessments.
MO4 (4). Using ChatGPT to complete the assessments does not go against my moral principles.

Trust (modified from Gefen et al., 2003, and Chao, 2019)
TR1 (5). I believe that ChatGPT is trustworthy for completing my assessments.
TR2 (6). I trust the response quality of ChatGPT for completing my assessments.
TR3 (7). I am confident that the technology provider of ChatGPT is honest.
TR4 (8). Even if not monitored, I would trust the assessments could be done right with the ChatGPT functions.

Perceived Risk (modified from Quoc et al., 2020, and Chao, 2019)
PR1 (9). I would receive a mark penalty for plagiarism if I use ChatGPT to complete the assessments.
PR2 (10). If I used ChatGPT for completing the assessments, I would probably be caught.
PR3 (11). I might be disappointed if I complete the assessments with the help of ChatGPT.
PR4 (12). I think using ChatGPT to complete the assessments puts my privacy at risk.

Performance Expectancy (modified from the UTAUT model, Venkatesh et al., 2003, and the UTAUT 2 model, Venkatesh et al., 2012)
PE1 (13). I find ChatGPT useful for completing the assessments.
PE2 (14). Using ChatGPT helps me to complete the assessments more quickly.
PE3 (15). Using ChatGPT improves my performance in assessments.
PE4 (16). Using ChatGPT increases the chances of improving my course results.

Effort Expectancy (modified from the UTAUT model, Venkatesh et al., 2003)
EE1 (17). Learning how to use ChatGPT is easy for me.
EE2 (18). I find ChatGPT easy to use in assisting my completion of assessments.
EE3 (19). I find it easy to become skillful at using ChatGPT to complete the assessments.
EE4 (20). My interaction with ChatGPT is clear and understandable.

Social Influence (modified from the UTAUT model, Venkatesh et al., 2003)
SI1 (21). People who are important to me think that I should complete the assessments with the help of ChatGPT.
SI2 (22). People who affect my learning behavior think that I should complete the assessments with the help of ChatGPT.
SI3 (23). My peers (such as classmates and friends) think that I should complete the assessments with the help of ChatGPT.
SI4 (24). My instructors whose opinions I value prefer that I complete the assessments with the help of ChatGPT.

Behavioral Intention (modified from the UTAUT model, Venkatesh et al., 2003, and the UTAUT 2 model, Venkatesh et al., 2012)
BI1 (25). I intend to use ChatGPT in my future assessments.
BI2 (26). I plan to complete the assessments with the help of ChatGPT in the next two months.
BI3 (27). I predict I will use ChatGPT in my future assessments.
BI4 (28). I plan to continue to complete the assessments with the help of ChatGPT frequently.
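For analysis, each respondent's Likert answers are typically recoded numerically and averaged over the four items of each construct. A minimal illustrative sketch (the item grouping follows Table 1; the label set, function and variable names are our own assumptions, not the authors' code):

```python
# Recode 5-point Likert labels to scores and average each construct's items.
# The intermediate labels (e.g. "neutral") are assumed; the paper only
# states that the scale runs "strongly disagree" to "strongly agree".
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}
CONSTRUCTS = {
    "MO": ["MO1", "MO2", "MO3", "MO4"], "TR": ["TR1", "TR2", "TR3", "TR4"],
    "PR": ["PR1", "PR2", "PR3", "PR4"], "PE": ["PE1", "PE2", "PE3", "PE4"],
    "EE": ["EE1", "EE2", "EE3", "EE4"], "SI": ["SI1", "SI2", "SI3", "SI4"],
    "BI": ["BI1", "BI2", "BI3", "BI4"],
}

def construct_scores(response):
    """Average the numeric Likert scores of each construct's items."""
    return {c: sum(LIKERT[response[i]] for i in items) / len(items)
            for c, items in CONSTRUCTS.items()}

# Example: a respondent answering "agree" to all 28 items scores 4.0
# on every construct.
demo = {item: "agree" for items in CONSTRUCTS.values() for item in items}
print(construct_scores(demo)["PE"])  # prints 4.0
```

In the actual study the constructs were modeled as latent variables with item loadings estimated by CFA rather than simple averages; the sketch only shows how raw questionnaire responses map onto the seven constructs.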
The results indicate that the model is free from common method bias.

5.4. Hypotheses testing using structural equation modelling

After the measurement model had been validated, we examined the causal effects of PE, EE, SI, TR, PR and MO on BI in the proposed model. Table 6 presents the path coefficients and related values for the paths estimated with the PLS algorithm. H1 (path MO → BI) has the highest t-value (t = 3.956), while H7 (path SI → BI) has the lowest (t = 0.641). All the proposed hypotheses are supported except H4 and H7.

The results indicate a negative and significant relationship of MO to BI (beta = −0.166; t = 3.956; p < 0.01), supporting H1. The results also report a positive and significant correlation of TR with BI (beta = 0.119; t = 2.770; p < 0.01), supporting H2. Similarly, the findings support H3, as they indicate a negative relationship of PR to BI (beta = −0.104; t = 2.326; p < 0.05). However, the findings do not support H4, as the relationship of TR to PR is not significant (beta = −0.047; t = 0.945; p > 0.10).

The results also reflect a significant positive relationship between PE and BI (beta = 0.111; t = 2.639; p < 0.01), supporting H5. As for the relationship between EE and BI (beta = 0.083; t = 1.703; p < 0.10), the results provide evidence to marginally support H6. However, H7 is not supported, as the findings indicate an insignificant relationship between SI and BI (beta = 0.041; t = 0.641; p > 0.10). Fig. 2 presents the structural equation model for the relationships among the constructs with path coefficients.

6. Discussion and implications

This study has developed an integrated model, an extended UTAUT model, to analyze the drivers and inhibitors of ChatGPT adoption for assessment support. The structural equation model was employed to test these hypotheses. The results indicate that MO, TR, PR, PE and EE are significant determinants of the adoption of ChatGPT: increased levels of TR and PE, as well as lower effort (EE) required to use it, encourage students to use ChatGPT, while MO and PR discourage students from using it in assessments. SI does not have a significant impact on BI, and PR does not significantly mediate the relationship between TR and BI to use ChatGPT for assessment support.

MO and TR are, respectively, the most significant inhibitor and predictor of ChatGPT adoption, which is consistent with prior studies on technology acceptance. When adopting the latest AI technology for supporting assessments, users consider not only perceived social pressures but also their personal sense of responsibility and obligation (Beck & Ajzen, 1991; Cronan & Al-Rafee, 2008). Similar to Gupta et al.'s (2004) results, users who adhere to high moral and ethical standards have less intention to use technology opportunistically.

In Chinese culture, good assessment results not only signify students' academic excellence but also serve as an indication of being a more valuable and accomplished individual. As a result, such students are granted higher status among their peers and families (Brown et al., 2011). Engaging in unethical use of ChatGPT can evoke feelings of moral obligation and concerns about social judgment and reputational harm among students. These concerns contradict their goals of personal development and seeking social recognition through assessments. If students perceive that using ChatGPT to assist in completing assessments is morally wrong (e.g., regarded as cheating or plagiarism), they are likely to have stronger feelings of responsibility to refrain from using ChatGPT and less intention to use it in assessments.

TR enhances intention to use a technology when users believe that the technology is trustworthy and reliable (Alalwan et al., 2017). This study exhibits findings consistent with Jo (2023) and Rahman et al. (2023)
about ChatGPT acceptance for academic purposes, indicating that participants who demonstrate confidence in ChatGPT and trust in its response quality tend to show a higher BI to utilize ChatGPT in their assessment preparation. Since data transparency and consent are important concerns among students (Memarian & Doleck, 2023b), ethical and explainable AI models are essential for developing trust among users. If students lack trust in ChatGPT and an understanding of how it collects or generates data, they may perceive it as unsafe to use in assessments. Thus, it is essential to develop advanced mechanisms that enhance the transparency and security of generative AI models and improve the explainability of their responses (Dwivedi et al., 2023).

PE is another significant predictor of ChatGPT acceptance, with a positive relationship to BI to use ChatGPT in assessment settings. In the education context, users are more likely to adopt a new technology if its use helps them perform a specific task (Ahmed et al., 2022; Budhathoki et al., 2024; Chao, 2019; Strzelecki, 2023). Powered by a large language model (LLM), ChatGPT stands out among existing chatbots due to its foundation on the Generative Pre-trained Transformer (GPT) architecture. This allows ChatGPT to weigh the impact of individual input words on each output word and establish relationships between all words within a sentence instantly (Vaswani et al., 2017). The scale and performance of an LLM are influenced by the size of the datasets it has been trained on, the amount of computational resources used during training, and the total number of parameters it can support (Kaplan et al., 2020). The more parameters an LLM possesses, the greater its capacity to absorb and learn from expansive datasets, and the more complicated the tasks it can handle (Lozić & Štular, 2023). In contrast, earlier assistants such as Alexa (Amazon), Siri (Apple) and Cortana (Microsoft), which do not use LLMs, are constrained to pre-determined dialogue flows or simplistic question-answer formats (Kuhail et al., 2023). With its 175 billion parameters, ChatGPT stands apart in its capability to demonstrate fluid and naturalistic conversational abilities that closely resemble human-to-human interactions (Susnjak, 2022).

Known for its exceptional writing and language abilities, ChatGPT can assist students in completing assessment tasks effectively and efficiently by extracting main points from lecture notes or papers, generating summaries, providing translation, prompting answers to questions, and commenting on grammar, coherence and

Table 2
Demographic profile of participants (N = 483).

Gender: Male 169 (34.99%); Female 314 (65.01%).
Age: 18–20 years old 236 (48.86%); 21–22 years old 182 (37.68%); above 22 years old 65 (13.46%).
University attended: City University of Hong Kong (CityU) 65 (13.46%); The Hong Kong University of Science and Technology (HKUST) 68 (14.08%); Hong Kong Baptist University (HKBU) 78 (16.15%); Lingnan University (LU) 36 (7.45%); The Education University of Hong Kong (EdUHK) 46 (9.52%); The University of Hong Kong (HKU) 71 (14.70%); The Chinese University of Hong Kong (CUHK) 57 (11.80%); The Hong Kong Polytechnic University (PolyU) 62 (12.84%).
Programme discipline: Business 138 (28.57%); Education 42 (8.70%); Engineering/Information Technology 49 (10.14%); Medicine/Health/Nursing 70 (14.49%); Arts/Journalism/Language 59 (12.22%); Science 37 (7.66%); Law 20 (4.14%); Social Science/Social Work/Psychology 41 (8.49%); Others 27 (5.59%).

Table 3
Goodness of fit indices of the CFA model.

Fit index | Modified model | Recommended value | Source
CFI | 0.996 | ≥0.90 | Bagozzi and Yi (1988); Byrne (2013)
TLI | 0.996 | ≥0.90 | Hair et al. (2019); Ho (2006)
NFI | 0.884 | ≥0.90 | Bagozzi and Yi (1988)
RMSEA | 0.013 | ≤0.10 | Schumacker and Lomax (2010)

Table 4
Results of confirmatory factor analysis (N = 483).

Construct | Item loadings | AVE | CR | Cronbach's alpha
Moral Obligation | MO1 0.883; MO2 0.816; MO3 0.902; MO4 0.875 | 0.756 | 0.925 | 0.892
Trust | TR1 0.819; TR2 0.862; TR3 0.843; TR4 0.848 | 0.711 | 0.908 | 0.869
Perceived Risk | PR1 0.900; PR2 0.912; PR3 0.841; PR4 0.851 | 0.769 | 0.920 | 0.903
Performance Expectancy | PE1 0.890; PE2 0.835; PE3 0.885; PE4 0.895 | 0.769 | 0.930 | 0.901
Effort Expectancy | EE1 0.907; EE2 0.751; EE3 0.947; EE4 0.890 | 0.767 | 0.929 | 0.906
Social Influence | SI1 0.811; SI2 0.701; SI3 0.840; SI4 0.900 | 0.666 | 0.888 | 0.847
Behavioral Intention | BI1 0.887; BI2 0.853; BI3 0.852; BI4 0.824 | 0.730 | 0.915 | 0.878

Table 5
Discriminant validity of measurement model.

   | BI | PE | MO | PR | EE | SI | TR
BI | 0.854 |
PE | 0.127 | 0.877 |
MO | −0.162 | 0.039 | 0.870 |
PR | −0.101 | −0.039 | −0.021 | 0.877 |
EE | 0.103 | 0.139 | −0.048 | 0.134 | 0.877 |
SI | 0.053 | 0.154 | 0.090 | −0.038 | 0.020 | 0.816 |
TR | 0.137 | 0.008 | −0.018 | −0.047 | 0.087 | 0.037 | 0.843

Note: Diagonals show the square root of the AVE; other entries are correlations.

Table 6
Hypotheses results.

Hypothesis | Beta | p-value | t-value | Decision
H1: MO → BI | −0.166 | 0.000*** | 3.956 | Supported
H2: TR → BI | 0.119 | 0.006*** | 2.770 | Supported
H3: PR → BI | −0.104 | 0.020** | 2.326 | Supported
H4: TR → PR | −0.047 | 0.345 | 0.945 | Not supported
H5: PE → BI | 0.111 | 0.008*** | 2.639 | Supported
H6: EE → BI | 0.083 | 0.089* | 1.703 | Supported
H7: SI → BI | 0.041 | 0.522 | 0.641 | Not supported

Note: *p < 0.10, **p < 0.05, ***p < 0.01.
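As a cross-check, the p-values and significance stars in Table 6 follow from the reported t-values under a two-tailed standard-normal approximation (our reading of the PLS bootstrap output; the helper functions below are illustrative, not the authors' code):

```python
from math import erf, sqrt

def two_tailed_p(t):
    # Two-tailed p-value for a t-statistic under a standard-normal
    # approximation: p = 2 * (1 - Phi(|t|)), with Phi built from erf.
    return 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))

def stars(p):
    # Star convention of Table 6: * p < 0.10, ** p < 0.05, *** p < 0.01.
    return "***" if p < 0.01 else "**" if p < 0.05 else "*" if p < 0.10 else ""

for h, t in [("H1", 3.956), ("H2", 2.770), ("H3", 2.326), ("H4", 0.945),
             ("H5", 2.639), ("H6", 1.703), ("H7", 0.641)]:
    p = two_tailed_p(t)
    print(f"{h}: p = {p:.3f}{stars(p)}")
```

Running this reproduces the p-value column of Table 6 (e.g., t = 2.326 yields p ≈ 0.020, and t = 0.945 yields p ≈ 0.345), which is also consistent with the in-text reporting of H3 at p < 0.05 and H6 at p < 0.10.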
writing style (Halaweh, 2023). Once students perceive that ChatGPT is effective in improving their course performance, they are more likely to adopt it in assessments. However, if they are unsatisfied with the performance of ChatGPT because they find the generated content inaccurate, biased or selective, they will have less intention to use it for completing the assessments.

The relationship between PR and BI to use ChatGPT is also found to be negative and significant. Our findings indicate that participants are sensitive to the potential risks and penalties associated with ChatGPT usage. When students perceive that using ChatGPT could lead to being caught or result in mark deductions, their intention to use it in assessments decreases. These results are consistent with prior studies showing that the higher the PR associated with adopting a new technology, the lower the individual's intention to use it (Albayati, 2024; Cheng & Jiang, 2020; Hanafizadeh et al., 2014; Ikkatai et al., 2022). If individuals feel that their conversations with chatbots are not secure and private, they have less intention to explore them freely and use them frequently (Albayati, 2024). This can be attributed to students' aversion to uncertainty and the potential for serious negative consequences (Cheng & Jiang, 2020; Hanafizadeh et al., 2014). These findings differ from those of Hsu and Shiue (2008), who discovered that individuals may believe that they are not exposed to risks or penalties when engaging in unethical technological behaviors. The results of this study suggest that PR can serve as a limiting factor in students' adoption of ChatGPT in the assessment setting.

As expected, EE shows a positive correlation with the adoption of ChatGPT. Our findings are consistent with most prior studies, which have shown that increased levels of effort required to adopt a system can discourage users from using ChatGPT in a general academic context (Budhathoki et al., 2024; Strzelecki, 2023). The study presents contrasting findings to those of Habibi et al. (2023), who were unable to establish a significant relationship between EE and BI to use ChatGPT for learning in general. When students perceive ChatGPT as user-friendly and requiring minimal learning time, they are more likely to have a higher intention to use it for assessments. Furthermore, their intention to utilize ChatGPT for completing assessments increases when they
perceive it as effective in reducing their workload.

Contrary to our expectations, SI does not exhibit a significant relationship with the adoption of ChatGPT. These results are not in line with Strzelecki's (2023) study on ChatGPT use for general academic purposes, and they diverge from previous research indicating that users are more likely to adopt e-learning technologies when they receive agreement from significant others regarding their technological behaviors (Ahmed et al., 2022; Raman & Thannimalai, 2021). One possible explanation is that in the Hong Kong context, students may prioritize achieving high marks in assessments over the agreement of important others. Therefore, even if important others do not approve of their use of ChatGPT, students may still be enticed to use it to improve their course performance.

The results of this study do not align with Lee and Song (2013), who suggested trust was an indirect predictor of new technology adoption. We do not identify a significant mediating impact of PR on the relationship between TR and BI. This can be attributed to students' tendency to prioritize the usefulness of ChatGPT in improving their course performance over concerns related to PR. If students discover that ChatGPT can remarkably increase their assessment scores, they may overlook the potential risks. Similarly, if students perceive ChatGPT as ineffective, they are unlikely to utilize it in assessments, irrespective of the level of PR. These findings align with the competitive learning environment and result-driven learning culture prevalent in Hong Kong.

In response to research question 1, our findings align with the majority of the existing literature on technology adoption (Ahmed et al., 2022; Budhathoki et al., 2024; Chao, 2019; Jo, 2023; Rahman et al., 2023; Strzelecki, 2023), highlighting that TR, PE and EE are significant determinants of BI to use ChatGPT for assessment support among Hong Kong undergraduates. On the other hand, MO and PR are deterrents of it, which is consistent with the results of prior studies (Albayati, 2024; Cheng & Jiang, 2020; Gupta et al., 2004; Hanafizadeh et al., 2014; Ikkatai et al., 2022). Interestingly, PR is not identified as a mediator between TR and students' BI to use ChatGPT for assessment support, which is contrary to the existing literature on TR and technology adoption (Lee & Song, 2013). It is also surprising to identify SI as an insignificant factor in ChatGPT acceptance in the assessment context, which does not align with the results of previous studies on ChatGPT adoption for general academic purposes (Strzelecki, 2023).

Regarding research question 2, we observed that TR and PE are the strongest and second strongest predictors of student participants' intention to use ChatGPT in assessments. To increase students' intention to use ChatGPT, further development and improvement of this chatbot are needed to ensure highly accurate answers. Addressing issues such as the tendency to invent questionable claims or false references, as well as ensuring balanced and diverse training data, will enhance students' trust in ChatGPT and their expectations regarding its performance in supporting assessments. Universities can also educate students on the importance of verifying sources and information generated by ChatGPT, enhancing trust and demonstrating its value in acquiring knowledge and improving academic outcomes.

Additionally, the empirical data show that EE has a positive and significant relationship with BI to adopt ChatGPT for assessment support. In view of its impact, university instructors can organize training workshops to educate students on how to efficiently generate relevant answers by providing precise background information, clear keywords, and specific prompts in their interactions with the chatbot. This approach will make ChatGPT more user-friendly and improve students' acceptance of this latest chatbot for supporting assessments.

MO acts as the strongest deterrent to students' adoption of ChatGPT. Within the Chinese context, assessments serve a vital role in reflecting the worth and quality of individuals (Li, 2009), with the ultimate goals of helping them become better people and receive social recognition (Brown et al., 2011). Engaging in unethical use of AI tools contradicts these objectives and may potentially jeopardize students' reputation, which diminishes their intention to adopt ChatGPT in supporting their assessments.

Recent research has shown that many students have a limited understanding of the rules and guidelines regarding the responsible use of AI tools like ChatGPT (Malmström et al., 2023). They often overlook the possibility and significance of including a declaration acknowledging the use of AI tools as a preventive measure against plagiarism. While regularly updating guidelines and exploring measures against improper use of AI tools, university committees are recommended to promote ethical AI adoption through seminars and workshops, educating students about the appropriate use of ChatGPT and raising awareness of associated ethical concerns. They can also reinforce students' understanding of the notion of plagiarism, which involves presenting someone else's ideas as one's own without giving proper credit to the original source (Atlas, 2023). Experiential scenarios and case studies are useful for enhancing students' understanding of AI ethics and adoption complexities. These initiatives will help students understand what plagiarism is, why applying ChatGPT in assessments is sometimes immoral, and how to use ChatGPT responsibly: mitigating excessive reliance on ChatGPT, declaring the use of ChatGPT in assessments, and including proper citations in their assessment answers, ultimately yielding long-lasting benefits in students' educational journeys (Welding, 2023).

PR is identified as the second strongest inhibitor of ChatGPT adoption in assessments among students. Students are less inclined to adopt this new chatbot technology if they perceive privacy risks, fear being caught for plagiarism, or anticipate penalties for its use. University committees should formulate clear guidelines regarding ChatGPT usage, promoting proper referencing habits and requiring a declaration statement on the use of ChatGPT in assignments. Workshops should be organized to highlight the importance of evaluating the reliability and relevancy of generated responses, thereby helping individuals avoid potential misinterpretations or reliance on hallucinated content. Complying with these guidelines will help students avoid mark penalties and the consequences of plagiarism. Instructors should design formative assessments that evaluate students' learning progress at different stages and incorporate real-time invigilated tasks to replace take-home written tasks such as essays in summative assessments. Alternatively, the ChatGPT developer should implement robust data protection protocols to mitigate intentional or unintentional breaches of the confidentiality of personal data. Transparent disclosure of data gathering and processing methods can help address students' privacy concerns, reducing the PR associated with ChatGPT adoption for assessment support.

7. Conclusion, limitations and suggestions for future studies

7.1. Conclusion

ChatGPT has emerged as a popular tool in higher education. With the increasing mainstream exposure of this AI tool, we anticipate more widespread and frequent ChatGPT adoption for supporting assessments among university students in the future. It is crucial to understand the determinants and deterrents that influence students' BI to use ChatGPT in assessments.

This research provides valuable insights into the acceptance patterns of undergraduate students towards ChatGPT in the assessment setting. The findings highlight the significance of MO, TR, PR, PE and EE in predicting undergraduates' ChatGPT acceptance for assessment purposes. By examining the relationships between these factors and students' BI to use ChatGPT for assessment support, this study aims to bridge the gap between the existing education literature on AI chatbot acceptance and the latest developments in the field.

The results provide practical value to educators for designing measures that promote the ethical use of ChatGPT in assisting students' assessments. It is crucial to educate students on how to ethically and morally leverage the advantages of ChatGPT in supporting their assessments while mitigating potential drawbacks. This can be achieved
by enhancing their understanding of the importance of proper citation, adhering to ethical principles, preventing improper use of AI technology, and minimizing the risks associated with ChatGPT usage in assessments.

Further development and improvement of the chatbot are necessary to ensure high answer accuracy. Addressing issues such as questionable claims and false references, as well as ensuring balanced and diverse training data, will enhance trust in ChatGPT and students' expectations of its performance in supporting assessments. Additionally, enhancing the user-friendliness of ChatGPT can promote broader adoption of the technology in the assessment context. Contrary to previous research on the acceptance of ChatGPT for various purposes, SI was not found to be a significant factor in the adoption of ChatGPT by undergraduate students to support their assessments. This indicates that the decision to use ChatGPT in assessments tends to be a personal choice, and that peer influence and social contexts do not play a crucial role in this specific setting.

7.2. Limitations and suggestions for future studies

This study has certain limitations. First, the generalizability of the results is limited due to the specific population sample used. Future studies can be conducted with larger samples covering more diverse populations, including secondary school and postgraduate students. Additionally, researchers can compare the motivators of ChatGPT adoption among different age groups and between undergraduate and postgraduate students. Second, as this study adopts a cross-sectional design, the data collected only reflect participants' intention to use ChatGPT for assessment support at the time of completing the online questionnaires. To gain a deeper understanding of the long-term use and effects of ChatGPT, future studies should employ a longitudinal design to capture students' early-stage and later-stage experiences with ChatGPT. Third, this research investigated the factors affecting students' acceptance of only the free version of ChatGPT (GPT-3.5) for supporting their assessments. Additional comparative studies can examine how the acceptance factors differ between the free and paid versions of ChatGPT, as well as between ChatGPT and other powerful LLM-based chatbots, such as Mistral and Llama, in the context of supporting student assessments. Last, this study relies on a self-reported online questionnaire survey as the primary data collection method.

analysis, Conceptualization. Chee Seng Chan: Supervision, Methodology, Conceptualization. Kuok Kei Law: Writing – review & editing, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The authors thank all the students who participated in the study and the reviewers who provided their valuable feedback.

Abbreviations

AI Artificial intelligence
AVE Average variance extracted
BI Behavioral intention
ChatGPT Chat Generative Pre-trained Transformer
CR Composite reliability
EE Effort expectancy
GOF Goodness of fit
GPT Generative Pre-trained Transformer
LLM Large language model
MO Moral obligation
PE Performance expectancy
PR Perceived risk
SI Social influence
TR Trust
UTAUT Unified Theory of Acceptance and Use of Technology
VIFs Variance inflation factors

References

Ahmed, R. R., Štreimikienė, D., & Štreimikis, J. (2022). The extended UTAUT model and learning management system during COVID-19: Evidence from PLS-SEM and conditional process modeling. Journal of Business Economics and Management, 23(1), 82–104.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human
method. Incorporating face-to-face interviews in future studies can Decision Processes, 50(2), 179–211.
Alalwan, A., Dwivedi, Y. K., & Rana, N. P. (2017). Factors influencing adoption of mobile
provide an opportunity for in-depth exploration of the factors that in
banking by Jordanian bank customers: Extending UTAUT2 with trust. International
fluence students’ acceptance of ChatGPT for assessment support. Journal of Information Management, 37(3), 99–110.
Albayati, H. (2024). Investigating undergraduate students’ perceptions and awareness of
using ChatGPT as a regular assistance tool: A user acceptance perspective study.
8. Statement on open data and ethics
Computers and Education: Artificial Intelligence, 6. Article 100203.
Atlas, S. (2023). ChatGPT for higher education and professional development: A guide to
The study was approved by the Research Ethics Committee of Brit conversational AI. Retrieved from https://digitalcommons.uri.edu/cba_facpubs/
tany Université with ID: BU/REC-01072023b. Informed consent was 548/. (Accessed 26 July 2023).
Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation model. Journal of
obtained from all participants, and their privacy rights were strictly the Academy of Marketing Science, 16(1), 74–94.
observed. The participants were protected by hiding their personal in Bauer, R. A. (1967). Consumer behavior as risk taking. In F. C. Donald (Ed.), Risk taking
formation during the research process. They knew that the participation and information handling in consumer behavior (pp. 23–33). Boston: Harvard
University Press.
was voluntary and they could withdraw from the study at any time. Beck, L., & Ajzen, I. (1991). Predicting dishonest actions using the theory of planned
There is no potential conflict of interest in this study. The data can be behavior. Journal of Research in Personality, 25(3), 285–301.
obtained by sending request e-mails to the corresponding author. Brachten, F., Kissmer, T., & Stieglitz, S. (2021). The acceptance of chatbots in an
enterprise context – a survey study. International Journal of Information Management,
60, Article 102375.
Funding Brown, T. L. G., Hui, K. F. S., Yu, W. M. F., & Kennedy, J. K. (2011). Teachers’
conceptions of assessment in Chinese contexts: A tripartite model of accountability,
improvement, and irrelevance. International Journal of Educational Research, 50(5–6),
This research did not receive any specific grant from funding
307–320.
agencies in the public, commercial, or not-for-profit sectors. Budhathoki, T., Zirar, A., Njoya, E. T., & Timsina, A. (2024). ChatGPT adoption and
anxiety: A cross-country analysis utilising the unified theory of acceptance and use of
technology (UTAUT). Studies in Higher Education, 1–16. https://doi.org/10.1080/
CRediT authorship contribution statement
03075079.2024.2333937
Bugshan, H., & Attar, R. W. (2020). Social commerce information sharing and their
Chung Yee Lai: Writing – review & editing, Writing – original draft, impact on consumers. Technological Forecasting and Social Change, 153. Article
Validation, Methodology, Investigation, Formal analysis, Data curation, 119875 https://doi.org/10.1016/j.techfore.2019.119875.
Burke, R. R. (2002). Technology and the customer interface: What consumers want in the
Conceptualization. Kwok Yip Cheung: Writing – review & editing, physical and virtual store. Journal of the Academy of Marketing Science, 30(4),
Writing – original draft, Software, Methodology, Investigation, Formal 411–432.
11
C.Y. Lai et al. Computers and Education: Artificial Intelligence 6 (2024) 100246
Byrne, B. M. (2013). Structural equation modeling with EQS: Basic concepts, applications, and programming. New York: Routledge.
Chao, C. M. (2019). Factors determining the behavioral intention to use mobile learning: An application and extension of the UTAUT model. Frontiers in Psychology, 10, Article 1652.
Cheng, Y., & Jiang, H. (2020). How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use. Journal of Broadcasting & Electronic Media, 64(4), 592–614.
Cheung, K. Y., & Lai, C. Y. (2022). External auditors' trust and perceived quality of interactions. Cogent Business & Management, 9(1), Article 2085366. https://doi.org/10.1080/23311975.2022.2085366
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. (2023). ChatGPT goes to law school. Journal of Legal Education, 71(3), 387–400. https://doi.org/10.2139/ssrn.4335905
Choudhury, A., & Shamszare, H. (2023). Investigating the impact of user trust on the adoption and use of ChatGPT: Survey analysis. Journal of Medical Internet Research, 25, Article e47184.
Chow, F. (2023). Hong Kong Baptist University begins ChatGPT trial for teaching staff, but professors wary over lack of guidelines for AI use. South China Morning Post. Retrieved from https://www.scmp.com/news/hong-kong/education/article/3221176/hong-kong-baptist-university-begins-chatgpt-trial-teaching-staff-professors-wary-over-lack?utm_source=rss_feed/. (Accessed 6 September 2023).
Cimperman, M., Brenčič, M. M., Trkman, P., & Stanonik, M. D. L. (2013). Older adults' perceptions of home telehealth services. Telemedicine and e-Health, 19(10), 786–790.
Conner, M., & Armitage, C. J. (1998). Extending the theory of planned behavior: A review and avenues for further research. Journal of Applied Social Psychology, 28, 1429–1464.
Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education & Teaching International. https://doi.org/10.1080/14703297.2023.2190148
Cronan, T. P., & Al-Rafee, S. (2008). Factors that influence the intention to pirate software and media. Journal of Business Ethics, 78, 527–545.
Dall'Alba, G. (2018). Evaluative judgement for learning to be in a digital world. In D. Boud, R. Ajjawi, P. Dawson, & J. Tai (Eds.), Developing evaluative judgement in higher education: Assessment for knowing and producing quality work (pp. 18–27). Abingdon: Routledge. https://doi.org/10.4324/9781315109251-3
Dou, D., & Shek, D. T. L. (2022). Hong Kong high school students' perceptions of the new secondary school curriculum. Frontiers in Pediatrics, 10, Article 881515.
Dowling, M., & Lucey, B. (2023). ChatGPT for (finance) research: The Bananarama conjecture. Finance Research Letters, 53, Article 103662.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Richter, A., Rowe, F., Sarker, S., Stahl, B. C., Tiwari, M. K., van der Aalst, W., Venkatesh, V., Viglia, G., Wade, M., Walton, P., Wirtz, J., … Wright, R. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, Article 102642.
Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, Article 100060.
Ennew, C., & Sekhon, H. (2007). Measuring trust in financial services: The trust index. Consumer Policy Review, 17(2), 62–68.
Eysenbach, G. (2023). The role of ChatGPT, generative language models and artificial intelligence in medical education: A conversation with ChatGPT - and a call for papers (preprint). JMIR Medical Education, 9, Article e46885. https://doi.org/10.2196/46885
Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education & Teaching International, 61(3), 460–474. https://doi.org/10.1080/14703297.2023.2195846
Featherman, M. S., & Pavlou, P. A. (2003). Predicting e-services adoption: A perceived risk facets perspective. International Journal of Human-Computer Studies, 59, 451–474.
Fischer, J., Bearman, M., Boud, D., & Tai, J. (2023). How does assessment drive learning? A focus on students' development of evaluative judgement. Assessment & Evaluation in Higher Education, 49(2), 233–245. https://doi.org/10.1080/02602938.2023.2206986
Fornell, C., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research, 18(3), 382–388. https://doi.org/10.1177/002224378101800313
Foroughi, B., Senali, M. G., Iranmanesh, M., Khanfar, A., Ghobakhloo, M., Annamalai, N., & Naghmeh-Abbaspour, B. (2023). Determinants of intention to use ChatGPT for educational purposes: Findings from PLS-SEM and fsQCA. International Journal of Human-Computer Interaction, 1–20. https://doi.org/10.1080/10447318.2023.2226495
Friedman, S. J., & Frisbie, D. A. (1995). The influence of report cards on the validity of grades reported to parents. Educational and Psychological Measurement, 55(1), 5–26.
Fung, K. N. (2023). At least 6 universities provide free use of ChatGPT for teachers and students, and PolyU teachers can also use the graphic conversion program. Hong Kong Economic Times, TOPick. Retrieved from https://topick.hket.com/article/3584257/. (Accessed 8 October 2023).
Gamage, K. A. A., Dehideniya, S. C. P. K., Xu, Z., & Tang, X. (2023). ChatGPT and higher education assessments: More opportunities than concerns? Journal of Applied Learning and Teaching, 6(2), 1–12.
Gefen, D., Karahanna, E., & Straub, D. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90.
Gravel, J., D'Amours-Gravel, M., & Osmanlliu, E. (2023). Learning to fake it: Limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clinic Proceedings: Digital Health, 1(3), 226–234.
Gupta, P. B., Gould, S. J., & Pola, B. (2004). "To pirate or not to pirate": A comparative study of the ethical versus other influences on the consumer's software acquisition-mode decision. Journal of Business Ethics, 55(3), 255–274.
Habibi, A., Muhaimin, M., Danibao, B. K., Wibowo, Y. G., Wahyuni, S., & Octavia, A. (2023). ChatGPT in higher education learning: Acceptance and use. Computers and Education: Artificial Intelligence, 5, Article 100190. https://doi.org/10.1016/j.caeai.2023.100190
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2019). Multivariate data analysis (8th ed.). Boston: Cengage.
Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), Article ep421. https://doi.org/10.30935/cedtech/13036
Hanafizadeh, P., Behboudi, M., Koshksaray, A. A., & Tabar, M. J. S. (2014). Mobile-banking adoption by Iranian bank clients. Telematics and Informatics, 31, 62–78.
Higgins, G. E., Wolfe, S. E., & Marcum, C. D. (2008). Digital piracy: An examination of three measurements of self-control. Deviant Behavior, 29(5), 440–461.
Ho, R. (2006). Handbook of univariate and multivariate data analysis and interpretation with SPSS. New York: Chapman & Hall/CRC.
Hsu, J. L., & Shiue, C. W. (2008). Consumers' willingness to pay for non-pirated software. Journal of Business Ethics, 81(4), 715–732. https://doi.org/10.1007/s10551-007-9543-9
Ibrahim, H., Liu, F., Asim, R., Battu, B., Benabderrahmane, S., Alhafni, B., Adnan, W., Alhanai, T., AlShebli, B., Baghdadi, R., Bélanger, J. J., Beretta, E., Celik, K., Chaqfeh, M., Daqaq, M. F., El Bernoussi, Z., Fougnie, D., de Soto, B. G., Gandolfi, A., … & Zaki, Y. (2023). Perception, performance, and detectability of conversational artificial intelligence across 32 university courses. Scientific Reports, 13, Article 12187.
Ikkatai, Y., Hartwig, T., Takanashi, N., & Yokoyama, H. M. (2022). Octagon measurement: Public attitudes toward AI ethics. International Journal of Human-Computer Interaction, 38(17), 1589–1606.
Illingworth, S. (2023). ChatGPT: Students could use AI to cheat, but it's a chance to rethink assessment altogether. The Conversation. Retrieved from https://theconversation.com/chatgpt-students-could-use-ai-to-cheat-but-its-a-chance-to-rethink-assessment-altogether-198019/. (Accessed 20 September 2023).
Jaimovitch-López, G., Ferri, C., Hernández-Orallo, J., Martínez-Plumed, F., & Ramírez-Quintana, M. J. (2022). Can language models automate data wrangling? Machine Learning, 1–30. https://doi.org/10.1007/s10994-022-06259-9
Jiang, Y., Hao, J., Fauss, M., & Li, C. (2024). Detecting ChatGPT-generated essays in a large-scale writing assessment: Is there a bias against non-native English speakers? Computers and Education, Article 105070. https://doi.org/10.1016/j.compedu.2024.105070
Jo, H. (2023). Decoding the ChatGPT mystery: A comprehensive exploration of factors driving AI language model adoption. Information Development, 1–21. https://doi.org/10.1177/02666669231202764
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint. https://arxiv.org/abs/2001.08361
Kennedy, K. J. (2007). Barriers to innovative school practice: A socio-cultural framework for understanding assessment practices in Asia. Paper presented at the Redesigning Pedagogy: Culture, Understanding and Practice Conference. Singapore: Nanyang Technological University.
Khadled, S., & El Khatib, S. (2019). Summative assessment in higher education: A feedback for better learning outcomes. The International Arab Conference on Quality Assurance in Higher Education (IACQA' 2019). El Khiâra, Lebanon.
Kock, N. (2015). Common method bias in PLS-SEM: A full collinearity assessment approach. International Journal of e-Collaboration, 11(4), 1–10. https://doi.org/10.4018/ijec.2015100101
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537–550. https://doi.org/10.1177/00336882231162868
Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with educational chatbots: A systematic review. Education and Information Technologies, 28, 973–1018.
Lai, C. Y., & Cheung, K. Y. (2023). Exploring the role of intrinsic motivation in ChatGPT adoption to support active learning: An extension of the technology acceptance model. Computers and Education: Artificial Intelligence, 5, Article 100178. https://doi.org/10.1016/j.caeai.2023.100178
Lee, J., & Song, C. (2013). Effects of trust and perceived risk on user acceptance of a new technology service. Social Behavior and Personality, 41(4), 587–598. https://doi.org/10.2224/sbp.2013.41.4.587
Li, J. (2009). Learning to self-perfect: Chinese beliefs about learning. In C. K. K. Chan, & N. Rao (Eds.), Revisiting the Chinese learner: Changing contexts, changing education (pp. 35–69). Hong Kong: The University of Hong Kong, Comparative Education Research Centre/Springer.
Liang, Y., Zou, D., Xie, H., & Wang, F. L. (2023). Exploring the potential of using ChatGPT in physics education. Smart Learning Environments, 10, Article 52. https://doi.org/10.1186/s40561-023-00273-7
Lozić, E., & Štular, B. (2023). Fluent but not factual: A comparative analysis of ChatGPT and other AI chatbots' proficiency and originality in scientific writing for humanities. Future Internet, 15(10), Article 336.
Malmström, H., Stöhr, C., & Ou, A. W. (2023). Chatbots and other AI for learning: A survey of use and views among university students in Sweden. Chalmers Studies in Communication and Learning in Higher Education, 1. https://doi.org/10.17196/cls.csclhe/2023/01
McCallum, S. (2023). ChatGPT banned in Italy over privacy concerns. BBC News. Retrieved from https://www.bbc.com/news/technology-65139406.amp/. (Accessed 30 August 2023).
Meijer, H., Hoekstra, R., Brouwer, J., & Strijbos, J. (2020). Unfolding collaborative learning assessment literacy: A reflection on current assessment methods in higher education. Assessment & Evaluation in Higher Education, 45(8), 1222–1240.
Memarian, B., & Doleck, T. (2023a). ChatGPT in education: Methods, potentials, and limitations. Computers in Human Behavior: Artificial Humans, 1(2), Article 100022.
Memarian, B., & Doleck, T. (2023b). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, Article 100152. https://doi.org/10.1016/j.caeai.2023.100152
Menon, D., & Shilpa, K. (2023). "Chatting with ChatGPT": Analyzing the factors influencing users' intention to use the Open AI's ChatGPT using the UTAUT model. Heliyon, 9, Article e20962.
Milchram, C., Van de Kaa, G., Doorn, N., & Künneke, R. (2023). Moral values as factors for social acceptance of smart grid technologies. Sustainability, 10(8), Article 2703. https://doi.org/10.3390/su10082703
Mohammed, A. A. Q., Al-ghazzali, A., & Alqohfa, K. A. S. (2023). Exploring ChatGPT uses in higher studies: A case study of Arab postgraduates in India. Journal of English Studies in Arabia, 2(2), 9–17.
Mok, L. (2023). Hong Kong Education University approves use of ChatGPT in coursework despite bans by two other schools. Retrieved from https://hongkongfp.com/2023/03/24/hong-kong-education-university-approves-use-of-chatgpt-in-coursework-despite-bans-by-two-other-schools/. (Accessed 23 August 2023).
Mou, J., Shin, D. H., & Cohen, J. F. (2017). Tracing college students' acceptance of online health services. International Journal of Human–Computer Interaction, 33(5), 371–384. https://doi.org/10.1080/10447318.2016.1244941
Muriithi, P., Horner, D., & Pemberton, L. (2016). Factors contributing to adoption and use of information and communication technologies within research collaborations in Kenya. Information Technology for Development, 22(1), 84–100.
Nunnally, J., & Bernstein, I. (1994). Psychometric theory. McGraw-Hill.
Oye, N. D., Iahad, N. A., & Ab Rahim, N. (2014). The history of UTAUT model and its impact on ICT acceptance and usage by academicians. Education and Information Technologies, 19, 251–270. https://doi.org/10.1007/s10639-012-9189-9
Perez, S. (2023). AI app Character.ai is catching up to ChatGPT in the US. Yahoo!finance. Retrieved from https://finance.yahoo.com/news/ai-app-character-ai-catching-173509867.html/. (Accessed 13 October 2023).
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
Quoc, T. P., Nhut, M. D., & Duc, T. N. (2020). Factors affecting on the digital piracy behavior: An empirical study in Vietnam. Journal of Theoretical and Applied Electronic Commerce Research, 15(2), 122–135.
Rahman, M., Terano, H. J. R., Rahman, N., Salamzadeh, A., & Rahaman, S. (2023). ChatGPT and academic research: A review and recommendations based on practical examples. Journal of Education, Management and Development Studies, 3(1), 1–12.
Raman, A., & Thannimalai, R. (2021). Factors impacting the behavioural intention to use e-learning at higher education amid the COVID-19 pandemic: UTAUT2 model. Psychological Science and Education, 26(3), 82–93.
Roumeliotis, K. I., & Tselikas, N. D. (2023). ChatGPT and open-AI models: A preliminary review. Future Internet, 15(6), Article 192. https://doi.org/10.3390/fi15060192
Schumacker, R. E., & Lomax, R. G. (2010). A beginner's guide to structural equation modeling (3rd ed.). New York: Routledge/Taylor and Francis Group.
Singh, H., Tayarani-Najaran, M. H., & Yaqoob, M. (2023). Exploring computer science students' perception of ChatGPT in higher education: A descriptive and correlation study. Education Sciences, 13(9), Article 924. https://doi.org/10.3390/educsci13090924
Steele, J. L. (2023). To GPT or not GPT? Empowering our students to learn with AI. Computers and Education: Artificial Intelligence, 5, Article 100160. https://doi.org/10.1016/j.caeai.2023.100160
Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology. Interactive Learning Environments, 1–13.
Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv preprint. https://doi.org/10.48550/arXiv.2212.09292
Vanneste, J. L., Yu, J., Cornish, D. A., Tanner, D. J., Windner, R., Chapman, J. R., … Dowlut, S. (2013). Identification, virulence, and distribution of two biovars of Pseudomonas syringae pv. actinidiae in New Zealand. Plant Disease, 97(6), 708–719.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems (pp. 1–15). Curran Associates, Inc.
Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Venkatesh, V., Thong, J., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.
Viswanath, A., Joshi, A., Nim, S., & Das, S. (2023). Determinants and consequences of trust in AI-based customer service chatbots. Service Industries Journal, 43(2), 1–34.
Welding, L. (2023). Half of college students say using AI on schoolwork is cheating or plagiarism. Retrieved from https://www.bestcolleges.com/research/college-students-ai-tools-survey/. (Accessed 1 September 2023).
Wu, I. L., & Chiu, M. L. (2018). Examining supply chain collaboration with determinants and performance impact: Social capital, justice, and technology use perspectives. International Journal of Information Management, 39(C), 5–19.
Wu, W., Huang, X., & Li, X. (2023). Technology moral sense: Development, reliability, and validity of the TMS scale in Chinese version. Frontiers in Psychology, 14, Article 1056569.
Wu, X., Duan, R., & Ni, J. (2023). Unveiling security, privacy, and ethical concerns of ChatGPT. arXiv preprint. https://doi.org/10.48550/arXiv.2307.14192
Yilmaz, R., & Yilmaz, F. G. K. (2023). The effect of generative artificial intelligence (AI)-based tool use on students' computational thinking skills, programming self-efficacy and motivation. Computers and Education: Artificial Intelligence, 4, Article 100147. https://doi.org/10.1016/j.caeai.2023.100147
Yiu, W. (2023). 2 universities in Hong Kong embrace use of ChatGPT, other AI tools to boost quality of teaching, learning. South China Morning Post. Retrieved from https://www.scmp.com/news/hong-kong/education/article/3212304/2-universities-hong-kong-embrace-use-chatgpt-other-ai-tools-boost-quality-teaching-learning?campaign=3212304&module=perpetual_scroll_0&pgtype=article/. (Accessed 29 August 2023).
Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & Choudhury, M. D. (2023). Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions. Paper presented at the CHI Conference on Human Factors in Computing Systems, Hamburg, Germany. https://doi.org/10.1145/3544548.3581318
Zohery, M. (2023). ChatGPT in academic writing and publishing: A comprehensive guide. In Artificial intelligence in academia, research and science: ChatGPT as a case study (pp. 10–61). Achtago Publishing. https://doi.org/10.5281/zenodo.7803703
Zuckerman, M., Flood, R., Tan, R. J. B., Kelp, N., Ecker, D. J., Menke, J., & Lockspeiser, T. (2023). ChatGPT for assessment writing. Medical Teacher, 45(11), 1224–1227. https://doi.org/10.1080/0142159X.2023.2249239