1 s2.0 S2949882124000586 Main

Download as pdf or txt
Download as pdf or txt
You are on page 1of 10

Computers in Human Behavior: Arti cial Humans 2 (2024) 100098

Contents lists available at ScienceDirect

Computers in Human Behavior: Artificial Humans


journal homepage: www.journals.elsevier.com/computers-in-human-behavior-artificial-humans

Understanding AI Chatbot adoption in education: PLS-SEM analysis of user


behavior factors
Md Rabiul Hasan a,* , Nahian Ismail Chowdhury b, Md Hadisur Rahman b, Md Asif Bin Syed b,
JuHyeong Ryu b
a
Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, 27695, USA
b
Industrial and Management Systems Engineering, West Virginia University, Morgantown, WV, 26505, USA

A R T I C L E I N F O A B S T R A C T

INDEX TERMS: The integration of Artificial Intelligence (AI) into education is a recent development, with chatbots emerging as a
Chatbot noteworthy addition to this transformative landscape. As online learning platforms rapidly advance, students
ChatGPT need to adapt swiftly to excel in this dynamic environment. Consequently, understanding the acceptance of
Google BARD
chatbots, particularly those employing Large Language Models (LLM) such as Chat Generative Pretrained
Industry 5.0
Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is of paramount importance.
PLS-SEM
Technology acceptance model Investigating how students accept and view chatbots is essential to directing their incorporation into Industry 4.0
Technology readiness Index and enabling a smooth transition to Industry 5.0’s customized and human-centered methodology. However,
Industry 4.0 existing research on chatbots in education has overlooked key behavior-related aspects, such as Optimism,
LLM Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, creating a
significant literature gap. To address this gap, this study employs Partial Least Squares Structural Equation
Modeling (PLS-SEM) to investigate the determinant of chatbots adoption in education among students, consid­
ering the Technology Readiness Index and Technology Acceptance Model. Utilizing a five-point Likert scale for
data collection, we gathered a total of 185 responses, which were analyzed using R-Studio software. We
established 12 hypotheses to achieve its objectives. The results showed that Optimism and Innovativeness are
positively associated with Perceived Ease of Use and Perceived Usefulness. Conversely, Discomfort and Insecurity
negatively impact Perceived Ease of Use, with only Insecurity negatively affecting Perceived Usefulness.
Furthermore, Perceived Ease of Use, Perceived Usefulness, Interaction and Engagement, Accuracy, and
Responsiveness all significantly contribute to the Intention to Use, whereas Transparency and Ethics have a
negative impact on Intention to Use. Finally, Intention to Use mediates the relationships between Interaction,
Engagement, Accuracy, Responsiveness, Transparency, Ethics, and Perception of Decision Making. These find­
ings provide insights for future technology designers, elucidating critical user behavior factors influencing
chatbots adoption and utilization in educational contexts.

1. Introduction further augmented their capabilities (van Dis et al., 2023). In the
contemporary AI landscape, chatbots also denote a computer program
The advent of artificial intelligence (AI) has transformed human’s capable of engaging in conversations with users, be it through speech or
interactions with technology significantly. AI-driven machines now text (Chetlen et al., 2019). Consequently, these interactive chatbots have
possess the capability of comprehending human language and gained significant attention in recent years in different areas, including
responding to inquiries. Chatbots, which simulate human-like conver­ education. Hwang and Chang (Hwang & Chang, 2021) have highlighted
sations and provide personalized assistance, have gained immense the advantages of employing chatbots, including enhanced learning ef­
popularity. Recent advancement in data set quality, size, and sophisti­ ficiency, real-time interaction, and improved peer communication skills.
cated techniques for fine-tuning these models with human input have Chocarro et al. (Chocarro et al., 2023) investigated the acceptance of

* Corresponding author.
E-mail addresses: [email protected] (M.R. Hasan), [email protected] (N.I. Chowdhury), [email protected] (M.H. Rahman), [email protected]
(M.A.B. Syed), [email protected] (J. Ryu).

https://doi.org/10.1016/j.chbah.2024.100098
Received 4 March 2024; Received in revised form 16 August 2024; Accepted 13 October 2024
Available online 13 October 2024
2949-8821/© 2024 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
M.R. Hasan et al. Computers in Human Behavior: Arti cial Humans 2 (2024) 100098

chatbots of teachers in education, employing the Technology Accep­ To assess the effectiveness of LLM-based chatbots among graduate-
tance Model (TAM). They found that teachers are more inclined to level students or those considering graduate studies, we will utilize
accept chatbots if they perceived them as user-friendly and beneficial. Partial Least Squares Structural Equation Modeling (PLS-SEM). This
Stathakarou et al. (Stathakarou et al., 2020) also investigated students’ statistical technique is well-suited to evaluating complex relationships
perceptions of chatbots in healthcare education, discovering that chat­ and latent constructs within a model. With PLS-SEM, we can measure
bots have the potential to support learning and enhance academic per­ the direct effects of these behavior aspects and explore the interplay
formance. Foroughi et al. (Foroughi et al., 2023) examined the Intention between them. This approach offers a comprehensive view of chatbots’
to Use (ITU) of ChatGPT, revealing that performance and effort expec­ effectiveness in an educational context. In addition, by using PLS-SEM,
tancy, learning value, and hedonic motivation significantly impact its we can quantitatively validate and gain a deeper understanding of
adoption. Chatbots have gained attention not only within classroom how these factors influence chatbots’ use and perceived impact in ed­
learning environments but have also in academic research. ucation and bridges the gap in the literature and contributes to a more
Researchers are using machine learning to develop chatbots for ed­ comprehensive understanding of LLM-based chatbots’ potential role and
ucation. Følstad and Brandtzæg (Følstad & Brandtzæg, 2017) pointed improvements in the educational setting. The objectives of this research
out that machine learning and natural language technology are more work are as follows.
used among researchers in the development of chatbots for education
purposes with the advancement of AI in the field of academic research. • Combining TAM (Perceived ease of use and perceived usefulness)
Lin et al. (Lin et al., 2023) demonstrated that there is a trend to improve and TR personality trait dimensions (discomfort, innovativeness,
the response accuracy of the chatbots in order to make the interaction optimism, and insecurity) for the TRAM model to investigate the
more interactive, like human conversation. LLM’s powered chatbots will adoption of chatbots and intention to use by the PLS-SEM approach.
lead to a new generation of search engines that can provide detailed and • Extending the TRAM model to understand students’ experiences is
informative answers to complex user questions (Grant & Metz, 2022). In one of its first kinds to investigate constructs like interaction and
May 2021, Google introduced its LLM, later announced as Google BARD engagement, accuracy and responsiveness, transparency and ethics,
(Hoppner & Streatfeild, 2023). On the other hand, ChatGPT creates the and perception of decision-making.
possibilities of AI-powered educational systems, which have gained • Finally, evaluating the mediation effects of intention to use with the
exponential interest in recent months. In a recent study, Mogavi et al., relationship among the Interaction, Engagement, Accuracy,
(2023) reported that ChatGPT is mostly popular in higher education, Responsiveness, Transparency, Ethics, and Perception of Decision
while the most discussed topics of ChatGPT are its productivity, effi­ Making (PDM).
ciency, and ethics (Mogavi et al., 2023). After evaluating the perfor­
mance of OpenAI, ChatGPT and Google Bard against human ratings, The paper is organized as follows. Section 2 describes the theoretical
Abdolvahab Khademi (Khademi, 2023) reported that their background and hypothesis development. We discuss the research
inter-reliability, as measured by Intraclass correlation (ICC), was low. It methodology in section 3. Following this, in section 4, the results and
indicates that compared to human ratings, considered the gold standard, findings of this study have been presented. Section 5 discusses the im­
they did not perform well. Another study conducted by Rahsepar et al. plications of this study and future research directions and concludes in
(Rahsepar et al., 2023) in 2023 compared the accuracy of ChatGPT 3.5, section 6.
Google Bard, Bing, and Google search engines in responding to a lung
cancer questionnaire. The results showed that ChatGPT 3.5 had the 2. Theoretical background and hypothesis development
highest accuracy rate of 70.8%, compared to Google Bard (51.7%), Bing
(61.7%), and Google search engine (55%). However, only some systems 2.1. Technology acceptance model (TAM)
achieved 100% consistency (Rahsepar et al., 2023). However, it is
important to note that this study only focused on the accuracy of re­ Many theoretical models help to understand individual behaviors
sponses to a lung cancer questionnaire and did not explore other fields toward new (Lin et al., 2023). TAM has been widely adopted as a
like education or academic search results. theoretical framework to understand individual behaviors toward new
Conversely, the responsible application of AI remains a concern in technology usage (McCoy et al., 2007). TAM was first introduced by
AI-powered enterprise platforms, including ChatGPT, Google BARD, Davis in 1989, which deals with PU, PEOU, and user acceptance of in­
Notion AI, Jasper, Microsoft Bing, among others (Dwivedi et al., 2023) formation technology (Davis, 1989). Previous research has acknowl­
(Ooi et al., 2023) (Anagnostou et al., 2022). Hoi (Hoi, 2023) argued that edged the effectiveness of the TAM model and has broadened its scope to
various AI principles should be explored, like transparency, reliability encompass external variables crucial for adopting technology (McCoy
and security, ethics, and sustainability. Alsharhan et al. (Alsharhan et al., 2007) (Wu & Lederer, 2009) (Scherer et al., 2019). Since the
et al., 2023) conducted a literature review on chatbots adoption and initial introduction of this model, several expansions have been imple­
demonstrated that the theories that dominate the explanation of chat­ mented across diverse technologies (Harris & Rogers, 2023).
bots adoption are the technology acceptance model, social presence
theory, and the concept of computers as social actors. They also noted 2.2. Technology readiness (TR)
that while numerous studies have scrutinized the intention to use
chatbots, relatively few have investigated their actual usage and sus­ Technology Readiness is defined by A. Parasuraman (Parasuraman,
tained intention. Trust and privacy are another concern when it comes to 2000) as people’s propensity to embrace and use new technologies for
the word chatbots. Considering it, Lappeman et al. (Lappeman et al., accomplishing goals in home life and at work’. An individual’s pre­
2023) investigated the trust and digital privacy concerns for banking paredness to embrace technology can be assessed using the Technology
chatbots services which found that privacy concerns notably impact user Readiness Index (TRI) scale, developed by Parasuraman. The follow-up
self-disclosure, resulting in a negative correlation. This development has study by Parasuraman and Colby (Parasuraman & Colby, 2015) classi­
increased interest in studying user perceptions of AI-powered chatbots fied technology adoption into four categories: Innovativeness, Opti­
and their future applications. Although, to the best of our knowledge, mism, Discomfort, and Insecurity, which is multi-faceted. Of the four
some previous research work is trying to investigate user behavior to­ categories, Innovativeness and Optimism have a positive impact, as in­
wards chatbots and their adoption among the previous research works, dividuals are more likely to embrace new technology. On the other
there needs to be more research that reflects the user perceptions to­ hand, discomfort, and insecurity act as barriers to TR (Sohaib et al.,
wards chatbots adoption among students in the academic field in a ho­ 2020). According to some researchers, Innovativeness and Optimism are
listic approach. motivating factors that encourage TR, while Optimism and Discomfort

2
M.R. Hasan et al. Computers in Human Behavior: Arti cial Humans 2 (2024) 100098

act as inhibitors, decreasing an individual’s TR (Blut & Wang, 2020). In with a natural tendency to distrust and be skeptical of technology often
2008, Lam et al. (Lam et al., 2008) conducted an assessment that cate­ assume that there are more risks than benefits associated with it, leading
gorized TR into four dimensions and analyzed the impact of each them to avoid using it (Blut & Wang, 2020). Venkatesh et al. (Venkatesh
dimension individually. The assessment highlighted the importance of et al., 2012) highlighted the importance of trust in influencing in­
TR development and its potential outcomes. This study investigates the dividuals’ technology adoption behavior. As a result, feeling insecure
impact of TR, which includes optimism, innovativeness, discomfort, and can harm both PU and PEOU. The study puts forward two hypotheses,
insecurity, on TAM, which pertains to Perceived Ease of Use (PEOU) and Hypothesis H4a and Hypothesis H4b. Furthermore, it is crucial to
Perceived Usefulness (PU). Different researchers explored the integra­ consider the three other hypotheses that pertain to the PEOU, PU, and
tion of TR and TAM in their research (Lin et al., 2023) (Koivisto et al., ITU chatbots in education. Hypothesis, H5, Hypothesis, H6 and Hy­
2016) (Godoe & Johansen, 2012). pothesis, H7 were proposed regarding the use of chatbots in education.
Researchers have also combined the TAM model with other personalized
2.2.1. Optimism (OP) development approaches, such as TR (Lin et al., 2023). Fig. 1 below
Parasuraman (Parasuraman, 2000) defines optimism as having a shows the exploration of the TRAM model (TR + TAM) in the first part of
positive outlook toward technology and firmly believing that it em­ this study.
powers individuals with greater control, flexibility, and efficiency in Further investigation was conducted to explore how ITU mediates
their daily lives. People with an optimistic outlook are better equipped other factors. The study’s independent variables include Interaction and
to handle negative outcomes, enhancing technology’s PEOU and PU Engagement (IE), Accuracy and Speed (AS), and Transparency and
(Sohaib et al., 2020). Based on the information given, someone with an Ethics (TE). In contrast, the dependent variable is the PDM with the
optimistic outlook would view the integration of chatbots as a simple intention to serve as the mediator, as shown in Fig. 2.
and beneficial process in education. According to this knowledge, we
developed Hypothesis H1a and Hypothesis H1b. 2.2.5. Interaction and engagement (IE)
IE is crucial when using chatbots in education. Past research has
2.2.2. Innovativeness (IN) demonstrated that online interaction and engagement have a significant
Innovativeness pertains to an individual’s readiness to explore novel impact on various positive student outcomes, such as satisfaction and
ideas or methods (Lam et al., 2008). According to Connolly and Kick motivation (Onofrei et al., 2022) (Kuo et al., 2010, pp. 593–600) (Kuo
(Connolly & Kick, 2015), innovativeness refers to individuals who enjoy et al., 2014). A positive correlation exists between interaction, engage­
taking risks and finding joy in experimenting with new ideas. Sohaib ment, and intention to use and Hypothesis H8 is proposed.
et al. (Sohaib et al., 2020) found that innovativeness has a clear and
positive effect on both PU and PEOU. This supports the idea that being 2.2.6. Accuracy and responsiveness (AR)
innovative can lead to an increased perception of a product or service’s When it comes to chatbots, users expect accurate and timely re­
practicality and ease of use. There are two theories put forth: Hypothesis sponses. Kerlyl et al. (Kerlyl et al., 2007) emphasized the importance of
H2a and Hypothesis H2b. chatbots learner models in education. Malik et al. (Malik et al., 2020)
defined responsiveness as the readiness to offer fast help and services.
2.2.3. Discomfort (Dis) Interestingly, they found no significant association between respon­
It is imperative to acknowledge that technology can induce siveness and user intention to use chatbots. Therefore, Hypothesis H9 is
discomfort when an individual feels inundated by it and senses a lack of proposed to investigate further.
authority over its usage (Sohaib et al., 2020). Experiencing discomfort
can make it harder to embrace and adopt new technologies, and thus, try 2.2.7. Transparency and ethics (TE)
to avoid it (Parasuraman, 2000). Therefore, discomfort has a negative Chatbots can be opaque, making it hard for users to comprehend
impact on the perceived usefulness (PU) and PEOU of a technology (Blut their decision-making and answer-generating processes, which raises
& Wang, 2020). Hypothesis H3a and Hypothesis H3b are formulated concerns about transparency (Kooli, 2023). Mozafari et al. (Mozafari
concerning the use of chatbots in education. et al., 2020) highlighted that disclosing chatbots leads to positive out­
comes for certain service types, allowing firms to achieve transparency.
2.2.4. Insecurity (INS) Ethics and AI are closely related. Nowadays, chatbots are based on LLM.
Insecurity pertains to a lack of trust in technology, which arises from Virginia Dignum (Dignum, 2018) argued that ethical considerations
doubts about its capability to function correctly and worries about the should be integrated into the design of AI, and ethical reasoning capa­
possible adverse effects it might cause (Parasuraman, 2000). People bilities should be incorporated into the behavior of artificial

Fig. 1. TRAM (TAM + TR) model.

3
M.R. Hasan et al. Computers in Human Behavior: Arti cial Humans 2 (2024) 100098

Fig. 2. Conceptual framework and mediating effect.

autonomous systems. Hypothesis H10 is proposed to investigate TE is


Table 1
correlated with the ITU of chatbots.
List of hypotheses.

2.2.8. Perception of decision-making (PDM) No. Hypothesis No. Hypothesis

While conventional chatbots only provide information, chatbots Hypothesis Optimism is positively Hypothesis Proposes that the
with expert decision-making abilities can solve complex problems. Hsu H1a related to the PEOU of H6 PEOU of chatbots is
chatbots. positively related to
et al. (Hsu et al., 2023) discovered that chatbots possessing expert
the ITU of chatbots in
decision-making knowledge significantly improve students’ learning education.
achievements. Several studies have explored chatbots’ decision-making Hypothesis Optimism is positively Hypothesis Suggests that the PU of
in various industries, such as healthcare and finance (Ali et al., 2022) H1b linked to the PU of H7 chatbots is positively
(Reicherts et al., 2022) (Alaaeldin et al., 2021). The study explores chatbots in education. related to the ITU of
chatbots in education.
decision-making perspectives with PDM as the dependent variable and Hypothesis Innovativeness positively Hypothesis IE affects the ITU.
ITU as the mediator. The hypotheses being investigated are, Hypothesis H2a correlates with the PEOU H8
H11a, Hypothesis H11b, Hypothesis H11c and Hypothesis H12. towards the ITU of
All the hypotheses are described and presented also in Table 1. chatbots in education.
Hypothesis Innovativeness is Hypothesis A positive correlation
H2b positively associated H9 between AR and the
3. Research method with the PU towards the ITU of chatbots.
ITU of chatbots in
In this study, PLS-SEM was adopted for the study’s exploratory na­ education.
ture. The primary appeal of PLS-SEM is that the method enables re­ Hypothesis There is a negative Hypothesis TE is correlated with
H3a correlation between H10 the ITU of chatbots.
searchers to estimate complex models with many constructs, indicator discomfort and the PEOU
variables, and structural paths without imposing distributional as­ of chatbots, affecting the
sumptions on the data (Hair et al., 2019). This method can be adopted ITU.
for small sample sizes with the absence of distributional assumptions. Hypothesis Discomfort is negatively Hypothesis The ITU mediates the
H3b related to the PU of H11a relationship between
The method was applied for both analyzing the TRAM model and
chatbots, which also IE and PDM of
evaluating the mediation effect, as it has the ability to validate the impacts the ITU. education.
measurement model and test the structural model hypothesis. SEMinR Hypothesis Proposes that insecurity Hypothesis The ITU mediates the
library in RStudio 2023.06.2 + 561 was used to perform the analysis. H4a negatively impacts the H11b relationship between
The study received ethical approval from West Virginia University in PEOU of chatbots in AR and PDM of
education, affecting the education.
Morgantown. The protocol number associated with the approval is ITU.
2304759788. Hypothesis Suggests that insecurity Hypothesis The ITU mediates the
H4b has a negative impact on H11c relationship between
the PU of chatbots in TE and PDM of
3.1. Data collection
education, and this also education.
affects the ITU.
A data collection survey was distributed among the participants in Hypothesis Suggests that the PEOU Hypothesis The ITU positively
the education field. The survey has been circulated among the graduate H5 of chatbots is positively H12 influences the PDM of
related to the PU. education.
and undergraduate students from April 2023 through July 2023 for data
collection. The survey questionnaire was developed based on the
developed hypothesis and distributed among students using Qualtrics. statistically many psychological constructs like attitudes, beliefs, and
According to Likert’s 1932 original study, there are an infinite preferences (Joshi et al., 2015). Again, the person being interviewed can
number of definable attitudes, which can be grouped into clusters. This easily read out the entire list of scale descriptions when using a 5-point
cluster in a scale is a widely applied instrument in psychological scale (Dawes, 2008). So, a five-point Likert Scale was used to collect the
research because it is flexible and can be adjusted to measure

4
M.R. Hasan et al. Computers in Human Behavior: Arti cial Humans 2 (2024) 100098

responses for each question. (5) Strongly agree, (4) Somewhat agree, (3) evaluates a construct’s convergent validity, the average variance
Neither agree nor disagree, (2) Somewhat disagree, (1) Strongly extracted (AVE). The minimum acceptable AVE is 0.50 or higher (Hair
disagree. et al., 2019). Hair (Hair et al., 2019) recommended the assessment of
heterotrait-monotrait ratio (HTMT) of the correlations proposed by
3.2. User demographics Henseler et al. (Henseler et al., 2015) to evaluate discriminant validity;
the fourth criterion to assess measurement model validation. Henseler
A total of one hundred and eighty-three people participated in the et al. (Henseler et al., 2015) propose a threshold value of 0.90 for
survey. However, only one hundred and forty-two data were used for structural models with constructs that are conceptually very similar, but
analysis as the remaining forty-one responses were incomplete. Among when constructs are conceptually more distinct, a lower, more conser­
the finalized participants, 58% (82) were male and 42% (60) were fe­ vative, threshold value is suggested, such as 0.85.
male. 35% (50) of participants were in the 18–24 age range and 65%
(92) were in the age range of 25–34. On the percentage of Ethnicity
4.1. Measurement model validation
among the participants, Asian was 78%, followed by white 14%, Black
or African American 4%, and other 10%, respectively. ChatGPT (96%)
Bartlett test of sphericity, which is suggested by Bartlett, a chi-square
was the dominant interactive AI or chatbots that participants used, along
test to assess the interpretable factors if the data matrix contains
with interactive AI that ranges from Google BARD, JASPER, Notion AI,
meaningful information to evaluate the correlation matrix (Yim, 2019).
Midjourney, and Snapchat AI to Upwork BOT. Most of the participants
Factor analysis requires a minimum of a statistically significant
had a Bachelor’s degree, 52% (74), 20% (28) had a Master’s degree,
chi-square value (p ≤ 0.05) (Tinsley & Tinsley, 1987). In this study the
14% (20) were in Ph.D., 4% (5) were doing Post-doc and 11% (15) re­
P-value is found 1.034796e-205 which is less than 0.05. An additional
ported as other degrees. Actual usage frequency was collected in the
metric to evaluate the level of suitability for factor analysis and the
survey. 27% (38) of participants responded that they use chatbots
intercorrelation between variables is the Kaiser-Meyer-Olkin Measure of
multiple times a day, 4% (6) of participants use once a day, 26% (37) use
Sampling Adequacy (KMO MSA) (Yim, 2019). It is commonly accepted
few times a week, 3% (4) use once a week, 25% (35) use chatbots few
that data is appropriate for factor analysis when the KMO value is
times a month, 9% (13) use once a month and 6% (9) responded their
greater than 0.8 (Jr et al., 2009). The KMO score in our investigation is
usage frequency as other. All the demographic information is presented
0.837, indicating that the factor analysis of the data was appropriate to
in Fig. 3.
move forward with the PLS-SEM technique.
The indicator loading’s reliability was over 0.708 for all the latent
4. Results
constructs. Table 2 represents the alpha, ρC, and ρA of the variables. It is
apparent from this table that all these values are over the recommended
The assessment of the measurement model involves examining the
value of 0.7, and ρA lies between alpha and ρC. In addition, the AVE
indicator loading’s reliability, internal consistency reliability, conver­
value of all the variables is over 0.5.
gent validity, and discriminant validity (Hair et al., 2019). Indicator
Table 3 shows the HTMT values of the measurement model, and all
loading’s reliability has a recommended threshold value of 0.708 or
the numbers are less than the recommended value of 0.85. These results
above (Hair et al., 2019). The second step is assessing internal consis­
validate the TRAM measurement model.
tency reliability; a higher value generally indicates a higher reliability.
Jöreskog’s (Jöreskog, 1970) composite reliability (ρC) is often used for
internal consistency reliability, with values ranging from 0.7 to 0.9 are Table 2
Reliability and validity of the measurement model.
considered satisfactory to good. However, values higher than 0.95 are
undesirable since they indicate that the items are redundant, reducing alpha ρC AVE ρA
construct validity (Diamantopoulos et al., 2012). Cronbach’s alpha is OP 0.814 0.877 0.642 0.824
another measure of internal consistency reliability that assumes similar IN 0.705 0.870 0.771 0.722
thresholds (Hair et al., 2019). However, Cronbach’s alpha may be too DIS 0.743 0.882 0.790 0.828
INS 0.773 0.869 0.689 0.807
conservative, and composite reliability may be too liberal. Therefore, PEOU 0.779 0.900 0.818 0.799
Dijkstra and Henseler (Dijkstra & Henseler, 2015) proposed ρA as an PU 0.834 0.923 0.857 0.837
improvised measure of construct reliability, which usually lies between ITU 0.717 0.840 0.638 0.729
Cronbach’s alpha and composite reliability. The third criterion

Fig. 3. Demographic data.

5
M.R. Hasan et al. Computers in Human Behavior: Arti cial Humans 2 (2024) 100098

Table 3 Table 4
Discriminant validity (HTMT) of the measurement model. Structural model.
OP IN DIS INS PEOU PU ITU Hypothesis Path Path Value Std. t Value 5% CI 95% CI

OP – – – – – – – H1a OP - > PEOU 0.40 0.10 5.95 0.41 0.72


IN 0.736 – – – – – – H1b OP - > PU 0.46 0.05 14.36 0.69 0.87
DIS 0.039 0.179 – – – – – H2a IN - > PEOU 0.09 0.11 3.93 0.24 0.59
INS 0.089 0.196 0.116 – – – – H2b IN - > PU 0.24 0.08 9.24 0.57 0.81
PEOU 0.563 0.411 0.138 0.152 – – – H3a DIS - > PEOU − 0.09 0.08 1.65 0.06 0.33
PU 0.776 0.693 0.083 0.101 0.503 – – H3b DIS - > PU − 0.03 0.07 1.27 0.05 0.26
ITU 0.775 0.680 0.294 0.177 0.452 0.698 – H4a INS - > PEOU − 0.13 0.08 1.88 0.08 0.34
H4b INS - > PU − 0.01 0.04 2.34 0.08 0.22
H5 PEOU - > PU 0.12 0.11 4.68 0.32 0.68
4.2. Hypothesis testing H6 PEOU - > ITU 0.14 0.11 4.13 0.28 0.65
H7 PU - > ITU 0.49 0.08 9.22 0.57 0.82

After validating the measurement model, PLS-SEM was assessed to


test the structural model and the hypothesis. The standard assessment measurement model and structural model were validated using the PLS-
criteria for the structural model are the coefficient of determination (R2) SEM. From Tables 5 and 6, it has been found that all the criteria meet or
(Hair et al., 2019). The R2 ranges from 0 to 1, with higher values indi­ exceed the threshold values, thus validating the measurement model.
cating a greater explanatory power. The recommended R2 values of 0.75, 0.5, and 0.25 refer to satisfactory, moderate, and weak explanatory power, respectively. The R2 results indicate that the model explains 23% of the variance in perceived ease of use (PEOU), 48% in perceived usefulness (PU), and 32% in the intention to use chatbots (ITU). These values indicate a weak level of explanatory power. The path coefficients were evaluated with t-tests computed by performing the bootstrapping technique at a significance level of 10%. Bootstrapping is a nonparametric method for testing coefficients, i.e., path coefficients and outer factor weights, by estimating their standard errors (Sohaib et al., 2020). The threshold t-values at significance levels of 10%, 5%, and 1% are 1.65, 1.96, and 2.58, respectively. Fig. 4 and Table 4 show the bootstrapped results and the t-values. According to the results, we fail to reject hypotheses H1a to H7, except for H3b. The findings show that optimism and innovativeness have significant positive effects on the perceived ease of use and perceived usefulness of chatbots. Furthermore, discomfort and insecurity have negative effects on perceived ease of use, and insecurity has a negative effect on the perceived usefulness of chatbots, as indicated by the negative path coefficient values. However, there is not enough evidence to support hypothesis H3b; therefore, discomfort does not have a significant effect on perceived usefulness. In addition, perceived ease of use has a positive, significant effect on the perceived usefulness of chatbots in education; we therefore fail to reject hypothesis H5. Moreover, perceived ease of use and perceived usefulness both have significant direct effects on the intention to use chatbots in education; we therefore fail to reject H6 and H7.

The next phase of the research focused on evaluating users' expectations of the chatbots and how these expectations influence the intention to use chatbots and the perception of decision-making. A similar assessment was conducted for the mediating measurement model; its reliability and validity results are reported in Table 5. Furthermore, in a similar way, the structural model was assessed by path coefficients. The R2 values show a weak explanatory power of the model, with 38% (ITU) and 21% (PDM). Fig. 5 and Table 7 show the bootstrapped results of the mediating structural model. According to the results, this study fails to reject hypotheses H8 and H9: interaction and engagement (H8) and accuracy and responsiveness (H9) have direct, positive, significant effects on the intention to use. We also fail to reject hypothesis H10, whose path coefficient shows that transparency and ethics have a negative effect on the intention to use chatbots. Lastly, the results show that the intention to use has a positive, significant effect on the perception of decision-making; we therefore fail to reject hypothesis H12. Table 8 summarizes the results of hypotheses H1a to H10 and H12.

Table 5
Reliability and validity of the mediating measurement model.

Construct   alpha   ρC      AVE     ρA
IE          0.713   0.874   0.776   0.721
AR          0.791   0.903   0.824   0.849
TE          0.742   0.884   0.792   0.790
ITU         0.717   0.841   0.639   0.722
PDM         0.855   0.912   0.776   0.868

4.3. Mediation effect analysis

In this study, the intention to use was examined as a mediator between interaction and engagement (H11a), accuracy and responsiveness (H11b), transparency and ethics (H11c), and the perception of decision-making.
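The percentile-bootstrap logic behind this kind of mediation test (an indirect effect a×b through the mediator, estimated alongside the direct effect c′) can be sketched in Python. Everything below is illustrative: the data are synthetic, the effect sizes are made up, and plain OLS stands in for the study's PLS-SEM estimation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic construct scores: X -> M -> Y with no true direct X -> Y path,
# mimicking full mediation (e.g., IE -> ITU -> PDM). Illustrative only.
n = 300
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(size=n)   # mediator (e.g., ITU)
Y = 0.5 * M + rng.normal(size=n)   # outcome (e.g., PDM)

def effects(x, m, y):
    """OLS stand-ins: a (x->m), b (m->y given x), c_prime (direct x->y)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([m, x, np.ones_like(x)])
    b, c_prime, _ = np.linalg.lstsq(design, y, rcond=None)[0]
    return a * b, c_prime            # indirect effect, direct effect

indirect, direct = [], []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    ab, c = effects(X[idx], M[idx], Y[idx])
    indirect.append(ab)
    direct.append(c)

ci_indirect = np.percentile(indirect, [2.5, 97.5])
ci_direct = np.percentile(direct, [2.5, 97.5])

# Full mediation: the indirect effect's CI excludes zero while the direct
# effect's CI includes it.
print("indirect 95% CI:", ci_indirect)
print("direct   95% CI:", ci_direct)
```

In the study itself the bootstrap is run over the full latent-variable model; this sketch isolates only the confidence-interval logic used to classify an effect as fully mediated.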

Fig. 4. Structural model results.
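The bootstrapped significance screening reported in Fig. 4 and Table 4 (t = estimate divided by the bootstrap standard error, compared against 1.65, 1.96, and 2.58) can be sketched as follows. The data are synthetic, and a standardized regression slope is an assumed stand-in for a PLS path coefficient, not the study's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores for two constructs (illustrative data, not the study's).
n = 200
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)

def path_coefficient(x, y):
    """Standardized slope: a simple stand-in for a PLS path coefficient."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float((xs * ys).mean())

# Bootstrap: resample cases with replacement and re-estimate each time.
boot = np.array([
    path_coefficient(x[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(5000))
])

estimate = path_coefficient(x, y)
t_value = abs(estimate) / boot.std(ddof=1)  # |estimate| / bootstrap SE

# Thresholds used in the paper: 1.65 (10%), 1.96 (5%), 2.58 (1%).
for level, threshold in [("10%", 1.65), ("5%", 1.96), ("1%", 2.58)]:
    print(f"significant at {level}: {t_value > threshold}")
```

The same comparison against the 1.65/1.96/2.58 cutoffs is what separates the "significant" from "insignificant" rows in the hypothesis summary tables.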


Table 6
Discriminant validity (HTMT) of the mediating measurement model.

       IE      AR      TE      ITU     PDM
IE     –       –       –       –       –
AR     0.844   –       –       –       –
TE     0.109   0.214   –       –       –
ITU    0.778   0.688   0.284   –       –
PDM    0.293   0.252   0.160   0.572   –

In the first part of the mediation effect analysis, the indirect effects from IE, AR, and TE to PDM, with ITU as the mediator, were significant. Direct paths from IE, AR, and TE to PDM were then added, and the bootstrapped model was evaluated to test the significance of the direct effects. The results show that the direct effect from IE to PDM is 0.013, with a 95% confidence interval of [−0.243; 0.216]. As this interval includes zero, the direct effect is not significant. We therefore conclude that ITU fully mediates the relationship between IE and PDM, and we fail to reject H11a. Similarly, the direct effect from AR to PDM is 0.031, with a 95% confidence interval of [−0.248; 0.195]; this direct effect is also not significant. Therefore, ITU fully mediates the relationship between AR and PDM, and we fail to reject H11b. Finally, the direct effect from TE to PDM is 0.035, with a 95% confidence interval of [−0.194; 0.143]. This interval also includes zero, so the direct effect is not significant, and ITU fully mediates the relationship between TE and PDM; we fail to reject H11c. Table 9 summarizes the results of these three hypotheses.

Table 7
Mediating structural model testing.

Hypothesis   Path          Path value   Std.    t value   2.5% CI   97.5% CI
H8           IE -> ITU      0.377       0.085   9.137     0.596     0.930
H9           AR -> ITU      0.269       0.093   7.429     0.492     0.854
H10          TE -> ITU     −0.128       0.115   2.468     0.108     0.554
H12          ITU -> PDM     0.458       0.108   5.295     0.353     0.774

Table 8
Hypothesis result summary.

No.    t value   Findings                   Decision
H1a    5.95      Positive and significant   Can not reject
H1b    14.36     Positive and significant   Can not reject
H2a    3.93      Positive and significant   Can not reject
H2b    9.24      Positive and significant   Can not reject
H3a    1.65      Negative and significant   Can not reject
H3b    1.27      Insignificant              Reject
H4a    1.88      Negative and significant   Can not reject
H4b    2.34      Negative and significant   Can not reject
H5     4.68      Positive and significant   Can not reject
H6     4.13      Positive and significant   Can not reject
H7     9.22      Positive and significant   Can not reject
H8     9.137     Positive and significant   Can not reject
H9     7.429     Positive and significant   Can not reject
H10    2.468     Negative and significant   Can not reject
H12    5.295     Positive and significant   Can not reject

Table 9
Mediation hypothesis result summary.

No.     95% CI             Findings
H11a    [−0.243; 0.216]    Full mediation
H11b    [−0.248; 0.195]    Full mediation
H11c    [−0.194; 0.143]    Full mediation

5. Discussion

This study aims to determine interactive AI's technological acceptance and readiness in education from a human-factors perspective. The study has shown that technology acceptance behaviors significantly influence user intention to use chatbots. The findings from the PLS-SEM analysis show that optimism and innovativeness positively influence the perceived ease of use and perceived usefulness of chatbots. In simple terms, a positive outlook towards chatbots encourages users to engage with the technology. On the contrary, discomfort and insecurity related to using chatbots negatively influence perceived ease of use, and insecurity negatively influences the perceived usefulness of such technology. These negative traits therefore discourage users from engaging with interactive AI technology. However, it could not be inferred that discomfort has any significant effect on perceived usefulness, due to a lack of evidence. Moreover, perceived ease of use and perceived usefulness motivate user intention to use interactive AI technology. There is very little research on chatbots from the standpoint of student behavior, and the current study's findings about how perceived ease of use affects

Fig. 5. Mediating structural model result.
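Discriminant validity in Table 6 is assessed with the heterotrait-monotrait (HTMT) ratio of correlations (Henseler et al., 2015). A minimal sketch of the computation follows; the two constructs, item counts, and loadings are assumptions chosen for illustration, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic indicators: three items per construct, driven by two
# independent latent factors (illustrative, not the survey data).
n = 250
f1, f2 = rng.normal(size=(2, n))
items_a = np.stack([0.8 * f1 + 0.4 * rng.normal(size=n) for _ in range(3)])
items_b = np.stack([0.8 * f2 + 0.4 * rng.normal(size=n) for _ in range(3)])

def htmt(a, b):
    """Heterotrait-monotrait ratio of (absolute) correlations."""
    r = np.abs(np.corrcoef(np.vstack([a, b])))
    ka = len(a)
    hetero = r[:ka, ka:].mean()                    # between-construct block
    upper = lambda m: m[np.triu_indices_from(m, k=1)]
    mono_a = upper(r[:ka, :ka]).mean()             # within construct A
    mono_b = upper(r[ka:, ka:]).mean()             # within construct B
    return float(hetero / np.sqrt(mono_a * mono_b))

# Values well below the common 0.85/0.90 cutoffs indicate the constructs
# are empirically distinct.
print(round(htmt(items_a, items_b), 3))
```

For strongly overlapping constructs the ratio approaches 1, which is why Table 6's IE-AR value of 0.844 sits close to the conventional 0.85 cutoff.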


attitudes and perceived usefulness are consistent with those of a number of prior studies (Foroughi et al., 2023; Chocarro et al., 2023; Stathakarou et al., 2020; Hussein, 2017; Malik et al., 2021). The study's findings are particularly important to the developers and stakeholders of the technology and draw attention to the user's technology readiness dimensions influencing the use of interactive AI.

The study's findings align with the hypotheses demonstrated by Parasuraman and Colby (2015). According to these authors, optimism and innovativeness are motivators, influencing users to engage more with new technology, whereas discomfort and insecurity are inhibitors to accepting and engaging with newer technology; the Technology Readiness Index (TRI) is therefore a well-defined predictor of technology-related intentional behavior.

Furthermore, the study provides insight into other aspects of responsible AI and user concerns in adopting the new technology. The analysis found that interaction and engagement positively influence user intention to use chatbots. In addition, accuracy and responsiveness positively influence users' intention to use chatbots. However, transparency and ethics have a negative effect on the intention to use chatbots. Furthermore, the intention to use fully mediates the decision-making process when using interactive AI chatbots with respect to the concerns mentioned above. As mentioned previously, one implication of this study is identifying users' expectations when using chatbots and utilizing them to improve the design of, and interaction with, chatbot technology. Developers and policymakers should design interactive chatbots with these user concerns in mind.

Although our study provides valuable insights, cross-sectional surveys and self-reported measures may introduce biases; longitudinal data might be useful to reduce this bias. Nonetheless, the study's findings still contribute to our understanding of the topic and can serve as a foundation for further research. Future research can explore chatbot use by conducting qualitative interviews to assess user behavior. One limitation of the study is the small sample size in the data collection; future studies can focus on collecting larger data sets to solidify the conclusions. Future research can also explore cultural differences in the adoption and perception of chatbots by investigating how varying cultural contexts might influence the readiness for and acceptance of emerging interactive AI technology. The study's findings can also be compared with those of other analytical methods, such as artificial neural networks (Sohaib et al., 2020); similar studies with different methods can help to shed light on the strengths and weaknesses of the methods used.

6. Conclusion

The present study was designed to determine the effect of technology readiness dimensions on using and adopting emerging interactive AI technology. This study also contributes the following findings on chatbots in education: 1. Both discomfort and insecurity have a negative impact on the perceived ease of use of chatbots, but for the perceived usefulness of chatbots, insecurity is negatively related, not discomfort. The interactive AI industry should therefore address and mitigate users' discomfort and insecurity towards using interactive AI and increase users' engagement; methods such as user feedback loops, user-centered testing, and user empathy exercises should be evaluated extensively to address common issues and create more user-friendly AI interfaces. 2. Perceived ease of use, perceived usefulness, interaction and engagement, and accuracy and responsiveness have significant direct effects on the intention to use, while transparency and ethics have a negative effect. 3. The intention to use has a positive, significant effect on the perception of decision-making. 4. The intention to use mediates the relationships between interaction and engagement, accuracy and responsiveness, transparency and ethics, and the perception of decision-making. This research model and its outcomes will benefit future studies by researchers, students adopting chatbots, and technology designers. In addition, the findings of the study encourage collaboration between researchers and industry professionals in refining AI technology by sharing research insights and industry experiences to create more effective and user-friendly interactive AI solutions. A wide range of stakeholders with an interest in LLM-based chatbots, across research, education, healthcare, and other associated fields, may find this study of interest because of its broad scope.

CRediT authorship contribution statement

Md Rabiul Hasan: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Nahian Ismail Chowdhury: Writing – review & editing, Writing – original draft, Investigation, Formal analysis, Data curation. Md Hadisur Rahman: Writing – review & editing, Writing – original draft, Visualization, Conceptualization. Md Asif Bin Syed: Visualization. JuHyeong Ryu: Writing – review & editing, Supervision.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

APPENDIX

TABLE 10
Survey Questionnaires (each item rated on a 5-point scale, 1-5)

Optimism (Parasuraman, 2000)
- Interactive AI/Chatbots using the newest technologies are much more convenient to use.
- Interactive AI/Chatbot makes me more efficient in my academics/study.
- I prefer to use the most advanced Interactive AI/Chatbot technology available.
- I feel confident that Interactive AI/Chatbot will follow through with what I instructed it to do.

Innovativeness (Parasuraman, 2000)
- I keep up with the latest Interactive AI/Chatbot developments.
- In general, I am among the first in my circle to acquire a new Interactive AI/Chatbot when it appears.

Discomfort (Parasuraman, 2000)
- Interactive AI/Chatbot always seems to fail at the worst possible time.
- Sometimes, I think that Interactive AI/Chatbots are not designed for use by ordinary people.

Insecurity (Parasuraman, 2000)
- You worry that information you send over the Interactive AI/Chatbot will be seen by other people.
- Interactive AI/Chatbot innovations always seem to hurt a lot of people by making their skills obsolete.
- Whenever something gets automated, I need to check carefully that the machine or computer is not making mistakes.

Perceived ease of use (Sohaib et al., 2020)
- Learning to use Interactive AI/Chatbot would be easy for me.
- Usage of Interactive AI/Chatbot is clear and understandable to me.

Perceived usefulness (Sohaib et al., 2020)
- The use of Interactive AI/Chatbot enables me to finish my work more quickly.
- The use of Interactive AI/Chatbot increases my productivity.

Interaction and engagement (Onofrei et al., 2022; Kuo et al., 2010, pp. 593-600; Kuo et al., 2014)
- I expect future AI to be more engaging and interactive than the existing ones.
- I am excited to use future AI/Chatbot for its engaging and interactive capabilities.

Accuracy and responsiveness (Chen et al., 2021; Singh & Thakur, 2020)
- I expect future AI/Chatbot to be more accurate and faster (in the data it provides) than the existing ones.
- I am excited to use future AI/Chatbots for their responsiveness and accuracy (data) capabilities.

Transparency and ethics (Mozafari et al., 2020; Dignum, 2018)
- Interactive AI/Chatbots are still not trustworthy in their interactions.
- Interactive AI/Chatbots and their features exhibit pervasive technological uncertainties.

Intention to Use (Sohaib et al., 2020)
- I intend to use an Interactive AI/Chatbot when it becomes widely available.
- Whenever possible, I intend to frequently use an Interactive AI/Chatbot in my daily life.
- I intend to use an Interactive AI/Chatbot when it is legally accepted for my works/institutions.

Perception of decision-making (Rezaei, 2015; Ali et al., 2022; Reicherts et al., 2022)
- I use an Interactive AI/Chatbot to assist me in making decisions.
- I utilize the Interactive AI/Chatbot to verify my decisions as I make them.
- I am open to making decisions based on the suggestions given by Interactive AI/Chatbot.

References

Alaaeldin, R., Asfoura, E., Kassem, G., & Abdel-Haq, M. S. (2021). Developing chatbot system to support decision making based on Big data analytics. Journal of Management Information and Decision Sciences, 24(2), 1-15 [Online]. Available: https://www.proquest.com/docview/2516962161/abstract/3C53B03C87C84E69PQ/1. (Accessed 12 August 2023).

Ali, S. R., Dobbs, T. D., & Whitaker, I. S. (2022). Using a ChatBot to support clinical decision-making in free flap monitoring. Journal of Plastic, Reconstructive & Aesthetic Surgery, 75(7), 2387-2440. https://doi.org/10.1016/j.bjps.2022.04.072

Alsharhan, A., Al-Emran, M., & Shaalan, K. (2023). Chatbot adoption: A multiperspective systematic review and future research agenda. IEEE Transactions on Engineering Management, 1-13. https://doi.org/10.1109/TEM.2023.3298360

Anagnostou, M., et al. (2022). Characteristics and challenges in the industries towards responsible AI: A systematic literature review. Ethics and Information Technology, 24(3), 37. https://doi.org/10.1007/s10676-022-09634-1

Blut, M., & Wang, C. (2020). Technology readiness: A meta-analysis of conceptualizations of the construct and its impact on technology usage. Journal of the Academy of Marketing Science, 48(4), 649-669. https://doi.org/10.1007/s11747-019-00680-8

Chen, J.-S., Le, T.-T.-Y., & Florence, D. (2021). Usability and responsiveness of artificial intelligence chatbot on online customer experience in e-retailing. International Journal of Retail & Distribution Management, 49(11), 1512-1531. https://doi.org/10.1108/IJRDM-08-2020-0312

Chetlen, A., Artrip, R., Drury, B., Arbaiza, A., & Moore, M. (2019). Novel use of chatbot technology to educate patients before breast biopsy. Journal of the American College of Radiology, 16(9), Part B, 1305-1308. https://doi.org/10.1016/j.jacr.2019.05.050

Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2023). Teachers' attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users' characteristics. Educational Studies, 49(2), 295-313. https://doi.org/10.1080/03055698.2020.1850426

Connolly, A., & Kick, A. (2015). What differentiates early organization adopters of bitcoin from non-adopters? AMCIS 2015 proceedings [Online]. Available: https://aisel.aisnet.org/amcis2015/AdoptionofIT/GeneralPresentations/46.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008

Dawes, J. (2008). Do data characteristics change according to the number of scale points used? An experiment using 5-point, 7-point and 10-point scales. International Journal of Market Research, 50(1), 61-104. https://doi.org/10.1177/147078530805000106

Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P., & Kaiser, S. (2012). Guidelines for choosing between multi-item and single-item scales for construct measurement: A predictive validity perspective. Journal of the Academy of Marketing Science, 40(3), 434-449. https://doi.org/10.1007/s11747-011-0300-3

Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20(1), 1-3. https://doi.org/10.1007/s10676-018-9450-z

Dijkstra, T. K., & Henseler, J. (2015). Consistent partial least squares path modeling. MIS Quarterly, 39(2), 297-316 [Online]. Available: https://www.jstor.org/stable/26628355. (Accessed 5 November 2023).

Dwivedi, Y. K., et al. (2023). Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, Article 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Følstad, A., & Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24(4), 38-42. https://doi.org/10.1145/3085558

Foroughi, B., et al. (2023). Determinants of intention to use ChatGPT for educational purposes: Findings from PLS-SEM and fsQCA. International Journal of Human-Computer Interaction, 0(0), 1-20. https://doi.org/10.1080/10447318.2023.2226495

Godoe, P., & Johansen, T. S. (2012). Understanding adoption of new technologies: Technology readiness and technology acceptance as an integrated concept. 3(1), Article 1. https://doi.org/10.5334/jeps.aq

Grant, N., & Metz, C. (2022). A new Chat bot is a "code red" for Google's search business. The New York Times [Online]. Available: https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html. (Accessed 9 August 2023).

Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2-24. https://doi.org/10.1108/EBR-11-2018-0203

Harris, M. T., & Rogers, W. A. (2023). Developing a healthcare technology acceptance model (H-TAM) for older adults with hypertension. Ageing and Society, 43(4), 814-834. https://doi.org/10.1017/S0144686X21001069

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115-135. https://doi.org/10.1007/s11747-014-0403-8

Hoi, S. C. H. (2023). Responsible AI for trusted AI-powered enterprise platforms. In Proceedings of the sixteenth ACM international conference on web search and data mining (WSDM '23) (pp. 1277-1278). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3539597.3575784

Hoppner, T., & Streatfeild, L. (2023). ChatGPT, Bard & Co.: An introduction to AI for competition and regulatory lawyers. Rochester, NY. https://doi.org/10.2139/ssrn.4371681

Hsu, T.-C., Huang, H.-L., Hwang, G.-J., & Chen, M.-S. (2023). Effects of incorporating an expert decision-making mechanism into chatbots on students' achievement, enjoyment, and anxiety. Educational Technology & Society, 26(1), 218-231 [Online]. Available: https://www.jstor.org/stable/48707978. (Accessed 11 August 2023).

Hussein, Z. (2017). Leading to intention: The role of attitude in relation to technology acceptance model in E-learning. Procedia Computer Science, 105, 159-164. https://doi.org/10.1016/j.procs.2017.01.196


Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2009). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Pearson.

Hwang, G.-J., & Chang, C.-Y. (2021). A review of opportunities and challenges of chatbots in education. Interactive Learning Environments, 0(0), 1-14. https://doi.org/10.1080/10494820.2021.1952615

Jöreskog, K. G. (1970). Simultaneous factor analysis in several populations. ETS Research Bulletin Series, 1970(2), i-31. https://doi.org/10.1002/j.2333-8504.1970.tb00790.x

Joshi, A., Kale, S., Chandel, S., & Pal, D. K. (2015). Likert scale: Explored and explained. British Journal of Applied Science & Technology, 7(4), Article 4. https://doi.org/10.9734/BJAST/2015/14975

Kerly, A., Hall, P., & Bull, S. (2007). Bringing chatbots into education: Towards natural language negotiation of open learner models. In R. Ellis, T. Allen, & A. Tuson (Eds.), Applications and innovations in intelligent systems XIV (pp. 179-192). London: Springer. https://doi.org/10.1007/978-1-84628-666-7_14

Khademi, A. (2023). Can ChatGPT and Bard generate aligned assessment items? A reliability analysis against human performance. JALT, 6(1). https://doi.org/10.37074/jalt.2023.6.1.28

Koivisto, K., Makkonen, M., Frank, L., & Riekkinen, J. (2016). Extending the technology acceptance model with personal innovativeness and technology readiness: A comparison of three models. Presented at the Bled eConference, Moderna Organizacija [Online]. Available: https://jyx.jyu.fi/handle/123456789/52061. (Accessed 9 August 2023).

Kooli, C. (2023). Chatbots in education and research: A critical examination of ethical implications and solutions. Sustainability, 15(7), Article 7. https://doi.org/10.3390/su15075614

Kuo, Y.-C., Walker, A., & Schroder, K. (2010). Interaction and other variables as predictors of student satisfaction in online learning environments. Presented at the Society for Information Technology & Teacher Education International Conference. Association for the Advancement of Computing in Education (AACE) [Online]. Available: https://www.learntechlib.org/primary/p/33407/. (Accessed 11 August 2023).

Kuo, Y.-C., Walker, A. E., Belland, B. R., Schroder, K. E. E., & Kuo, Y.-T. (2014). A case study of integrating interwise: Interaction, internet self-efficacy, and satisfaction in synchronous online learning environments. Irrodl, 15(1), 161-181. https://doi.org/10.19173/irrodl.v15i1.1664

Lam, S. Y., Chiang, J., & Parasuraman, A. (2008). The effects of the dimensions of technology readiness on technology acceptance: An empirical analysis. Journal of Interactive Marketing, 22(4), 19-39. https://doi.org/10.1002/dir.20119

Lappeman, J., Marlie, S., Johnson, T., & Poggenpoel, S. (2023). Trust and digital privacy: Willingness to disclose personal information to banking chatbot services. Journal of Financial Services Marketing, 28(2), 337-357. https://doi.org/10.1057/s41264-022-00154-z

Lin, C.-C., Huang, A. Y. Q., & Yang, S. J. H. (2023). A review of AI-driven conversational chatbots implementation methodologies and challenges (1999-2022). Sustainability, 15(5), Article 5. https://doi.org/10.3390/su15054012

Malik, P., Gautam, S., & Srivastava, S. (2020). A study on behaviour intention for using chatbots. In 2020 8th international conference on reliability, infocom technologies and optimization (trends and future directions) (ICRITO) (pp. 332-338). https://doi.org/10.1109/ICRITO48877.2020.9197782

Malik, R., Shrama, A., Trivedi, S., & Mishra, R. (2021). Adoption of chatbots for learning among university students: Role of perceived convenience and enhanced performance. International Journal of Emerging Technologies in Learning (iJET), 16(18), 200-212 [Online]. Available: https://www.learntechlib.org/p/220124/. (Accessed 8 January 2024).

McCoy, S., Galletta, D. F., & King, W. R. (2007). Applying TAM across cultures: The need for caution. European Journal of Information Systems, 16(1), 81-90. https://doi.org/10.1057/palgrave.ejis.3000659

Mogavi, R. H., et al. (2023). Exploring user perspectives on ChatGPT: Applications, perceptions, and implications for AI-integrated education. arXiv. https://doi.org/10.48550/arXiv.2305.13114

Mozafari, N., Weiger, W., & Hammerschmidt, M. (2020). The chatbot disclosure dilemma: Desirable and undesirable effects of disclosing the non-human identity of chatbots. ICIS 2020 proceedings [Online]. Available: https://aisel.aisnet.org/icis2020/hci_artintel/hci_artintel/6.

Onofrei, G., Filieri, R., & Kennedy, L. (2022). Social media interactions, purchase intention, and behavioural engagement: The mediating role of source and content factors. Journal of Business Research, 142, 100-112. https://doi.org/10.1016/j.jbusres.2021.12.031

Ooi, K.-B., et al. (2023). The potential of generative artificial intelligence across disciplines: Perspectives and future directions. Journal of Computer Information Systems, 0(0), 1-32. https://doi.org/10.1080/08874417.2023.2261010

Parasuraman, A. (2000). Technology Readiness Index (TRI): A multiple-item scale to measure readiness to embrace new technologies. Journal of Service Research, 2(4), 307-320. https://doi.org/10.1177/109467050024001

Parasuraman, A., & Colby, C. L. (2015). An updated and streamlined Technology Readiness Index: TRI 2.0. Journal of Service Research, 18(1), 59-74. https://doi.org/10.1177/1094670514539730

Rahsepar, A. A., Tavakoli, N., Kim, G. H. J., Hassani, C., Abtin, F., & Bedayat, A. (2023). How AI responds to common lung cancer questions: ChatGPT versus Google Bard. Radiology, 307(5), Article e230922. https://doi.org/10.1148/radiol.230922

Reicherts, L., Park, G. W., & Rogers, Y. (2022). Extending chatbots to probe users: Enhancing complex decision-making through probing conversations. In Proceedings of the 4th conference on conversational user interfaces (CUI '22) (pp. 1-10). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3543829.3543832

Rezaei, S. (2015). Segmenting consumer decision-making styles (CDMS) toward marketing practice: A partial least squares (PLS) path modeling approach. Journal of Retailing and Consumer Services, 22, 1-15. https://doi.org/10.1016/j.jretconser.2014.09.001

Scherer, R., Siddiq, F., & Tondeur, J. (2019). The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers' adoption of digital technology in education. Computers & Education, 128, 13-35. https://doi.org/10.1016/j.compedu.2018.09.009

Singh, S., & Thakur, H. K. (2020). Survey of various AI chatbots based on technology used. In 2020 8th international conference on reliability, infocom technologies and optimization (trends and future directions) (ICRITO) (pp. 1074-1079). https://doi.org/10.1109/ICRITO48877.2020.9197943

Sohaib, O., Hussain, W., Asif, M., Ahmad, M., & Mazzara, M. (2020). A PLS-SEM neural network approach for understanding cryptocurrency adoption. IEEE Access, 8, 13138-13150. https://doi.org/10.1109/ACCESS.2019.2960083

Stathakarou, N., et al. (2020). Students' perceptions on chatbots' potential and design characteristics in healthcare education. https://doi.org/10.3233/SHTI200531

Tinsley, H. E. A., & Tinsley, D. J. (1987). Uses of factor analysis in counseling psychology research. Journal of Counseling Psychology, 34(4), 414-424. https://doi.org/10.1037/0022-0167.34.4.414

van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224-226. https://doi.org/10.1038/d41586-023-00288-7

Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157-178. https://doi.org/10.2307/41410412

Wu, J., & Lederer, A. (2009). A meta-analysis of the role of environment-based voluntariness in information technology acceptance. MIS Quarterly, 33(2), 419-432. https://doi.org/10.2307/20650298

Yim, M.-S. (2019). A study on factor analytical methods and procedures for PLS-SEM (partial least squares structural equation modeling). The Journal of Industrial Distribution & Business, 10(5), 7-20. https://doi.org/10.13106/ijidb.2019.vol10.no5.7
