
International Journal of Human–Computer Interaction

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/hihc20

AI in the Workplace: Examining the Effects of ChatGPT on Information Support and Knowledge Acquisition

Hyeon Jo & Do-Hyung Park

To cite this article: Hyeon Jo & Do-Hyung Park (12 Nov 2023): AI in the Workplace: Examining
the Effects of ChatGPT on Information Support and Knowledge Acquisition, International
Journal of Human–Computer Interaction, DOI: 10.1080/10447318.2023.2278283

To link to this article: https://doi.org/10.1080/10447318.2023.2278283

Published online: 12 Nov 2023.


AI in the Workplace: Examining the Effects of ChatGPT on Information Support and Knowledge Acquisition

Hyeon Jo (a) and Do-Hyung Park (b)

(a) Headquarters, HJ Institute of Technology and Management, Bucheon, Republic of Korea; (b) Graduate School of Business IT, Kookmin University, Seoul, Republic of Korea

CONTACT: Do-Hyung Park, [email protected], Graduate School of Business IT, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul, 02707, Republic of Korea

© 2023 Taylor & Francis Group, LLC

ABSTRACT

The rapid development and widespread adoption of artificial intelligence (AI) systems like ChatGPT in the workplace underscores the need for a comprehensive understanding of their implications for worker productivity, learning, and the determinants of their usage. This study elucidates the antecedents and outcomes of using ChatGPT among workers, employing a sample of 351 participants across various industries, aged between 20 and 40. Structural equation modeling (SEM) was employed for the analysis, providing a robust perspective on the relationships between variables. The study discovered that perceived intelligence and self-learning of ChatGPT hold significant positive associations with information support and knowledge acquisition, both of which further influence the utilitarian benefits perceived by users, thereby shaping their intention to use ChatGPT. Utilitarian benefits were found to significantly enhance the intention to use ChatGPT but did not directly impact its actual usage. The intention to use ChatGPT, however, was found to positively correlate with its actual use. Among demographic factors, gender and age were identified as significantly influencing actual use, whereas the industry of users did not hold a significant impact. With these findings, this study offers a more profound understanding of how AI systems like ChatGPT can contribute to workplace productivity, and presents strategic insights for practitioners looking to effectively implement these technologies.

KEYWORDS
ChatGPT; information support; knowledge acquisition; utilitarian benefits; technology adoption

1. Introduction

The rapid progression of artificial intelligence (AI) technology has prompted a multitude of ground-breaking advancements across diverse fields. A notable innovation in this realm is the emergence of ChatGPT, a sophisticated language model developed and trained by OpenAI, which is gaining substantial traction in various contexts (King, 2023; Lund & Wang, 2023; Sallam, 2023). The prowess of ChatGPT lies in its remarkable capacity to comprehend and generate text that mirrors human conversation, rendering it a powerful tool that can potentially revolutionize communication and information processes across sectors (Dwivedi et al., 2023). ChatGPT is capable of comprehending complex instructions, responding to queries, and even learning from user feedback (Cardon et al., 2023; Chen et al., 2022). In particular, the application of ChatGPT in the workplace has been a subject of considerable interest (Salah et al., 2023; Uddin et al., 2023). With its ability to deliver accurate, rapid, and contextually appropriate responses, ChatGPT presents an opportunity to significantly augment work efficiency and productivity (Badini et al., 2023; Peres et al., 2023). It can serve as a powerful assistant, aiding in tasks ranging from managing correspondence and scheduling meetings to providing information support and facilitating self-learning. However, the extent of the actual use of ChatGPT by workers and the factors influencing such use remain unclear, justifying the need for the present study.

ChatGPT boasts a variety of features that make it a valuable tool in the workplace. In the realm of human resources, ChatGPT can be utilized to streamline recruitment processes (Marr, 2023). It is capable of conducting initial candidate screening through intelligent conversation, reducing the workload on HR professionals. Within project management, ChatGPT has been utilized as an assistant for scheduling and tracking tasks (Prieto et al., 2023). It can send reminders, update calendars, and even draft emails, thus aiding in coordinating tasks and managing time more effectively. ChatGPT has also made inroads into the domain of content creation and documentation (Liebrenz et al., 2023). From drafting reports and minutes of meetings to creating content for social media and websites, the adoption of AI-powered text generation is on the rise (Emenike & Emenike, 2023). In addition, it can act as a real-time transcribing tool in meetings, thereby ensuring accurate record-keeping (George & George, 2023).

The use of ChatGPT in the workplace is likely affected by several factors. A holistic understanding of the use of ChatGPT in the workplace necessitates considering these factors in a specific order. The AI's intelligence forms the basis of its capabilities, which in turn influences the quality of information and knowledge support it can provide. The consequent benefits that workers perceive from using ChatGPT can then influence their intention to use the system and, eventually, their actual use. Starting with perceived intelligence, this concept refers to how users perceive the artificial intelligence of a system (Pillai et al., 2022; Rafiq et al., 2022). The more intelligent the AI system is perceived to be, the more likely it is that users will trust and use the system. In the context of ChatGPT, its advanced language model could be seen as intelligent if it is capable of understanding user inputs and providing relevant responses. Next, self-learning capability refers to the system's ability to learn from user interactions and improve its performance over time (Chen et al., 2022). AI systems like ChatGPT that have strong self-learning capabilities can provide more personalized and relevant responses to users, thereby enhancing user satisfaction and increasing intention to use and actual use. Information support, meanwhile, refers to the system's ability to provide users with useful and relevant information (Lee et al., 2022). In a workplace context, information support from AI systems can aid decision-making processes and increase work efficiency, thereby motivating users to continue using the system. Knowledge acquisition is another crucial factor, as the ability to gain knowledge by using an AI system can increase users' task performance and productivity (Al-Emran & Teo, 2020). This in turn can lead to higher user satisfaction and an increased likelihood of future system use. Finally, utilitarian benefits are the practical benefits that users can gain from using a system (Cheng & Jiang, 2020). These benefits, such as time savings and cost efficiency, can increase users' perceived value of the system, leading to increased intention to use and actual use. By taking these factors into account, this study can achieve a more holistic grasp of what drives workers' intention to use and actual use of ChatGPT.

Despite growing interest in AI applications in the workplace, existing research has mostly focused on broad implications and theoretical projections. Empirical studies examining the actual use of AI, like ChatGPT, by workers are scarce. Moreover, few studies have explored the factors affecting this use, creating a gap in our understanding. Addressing this gap, this study seeks to empirically explore the use of ChatGPT by workers, the factors influencing it, and the consequent benefits. The study's originality lies in its in-depth, empirical exploration of ChatGPT use in the workplace, which could guide both research and practice in this burgeoning field.

2. Literature review

2.1. ChatGPT and organization

ChatGPT, developed by OpenAI, has become a prominent tool in both personal and organizational settings owing to its sophisticated abilities in processing human language (Qin et al., 2023). It is a model that is trained to predict the next word in a sentence, thereby producing text that mirrors human conversation based on the received input (Alberts et al., 2023). Its ability to generate coherent and contextually relevant responses makes it a powerful tool for various applications. In the context of organizations, ChatGPT has been used in various capacities. One of its significant applications is in customer service, where it can answer customer queries in real time, reducing the workload of customer service personnel and increasing efficiency (Chowdhury et al., 2023). Moreover, the ability of ChatGPT to learn from past interactions makes it more effective over time, as it becomes better adapted to handle the specific queries common in the organization (George & George, 2023; Okonkwo & Ade-Ibijola, 2021). Furthermore, organizations are using ChatGPT for internal communication (Carvalho & Ivanov, 2023) and project management tasks (Dwivedi et al., 2023; Prieto et al., 2023). In addition, ChatGPT has shown potential in areas such as data analysis and report generation, where it can interpret data and provide insightful summaries, aiding in decision-making processes (Hassani & Silva, 2023; Sharma & Dash, 2023; Xue et al., 2023). However, despite these promising applications, there is limited empirical research on how workers use ChatGPT in their daily tasks and what factors influence this usage. This study aims to address this gap in the literature by examining the antecedents of ChatGPT usage in the workplace.

2.2. Technology adoption

One of the theoretical foundations of this research is the technology acceptance model (TAM), which postulates that perceived usefulness and ease of use affect users' intention to use technology and its actual usage (Davis, 1989). This research expands TAM to consider additional factors relevant to the context of advanced AI technology like ChatGPT. First, this study considers the perceived intelligence of ChatGPT. As an AI technology, ChatGPT's perceived intelligence can influence users' attitudes toward the system, its perceived usefulness, and their intention to use it (Rafiq et al., 2022). High perceived intelligence can make ChatGPT appear more capable, enhancing users' confidence in the tool and encouraging its use. Second, self-learning is an essential feature of AI systems, enabling them to improve over time based on user interactions (Rijsdijk et al., 2007). The capability of self-learning can make ChatGPT more useful to users over time, thus promoting usage. Third, information support and knowledge acquisition have been identified as critical factors affecting technology usage in the workplace (Beer & Mulder, 2020; Ra et al., 2019). Workers are likely to use technology that supports them in finding relevant information and acquiring knowledge efficiently (Joseph & Arun, 2021; Ngoc Thang & Anh Tuan, 2020; Rizun, 2017). Finally, utilitarian benefits are considered, aligning with TAM's concept of perceived usefulness. Utilitarian benefits refer to the practical benefits that users get from using a system, such as saving time or increasing productivity (McLean & Osei-Frimpong, 2019). The more utilitarian benefits users perceive, the more likely they are to use the system. This research integrates these factors into an exhaustive framework to elucidate employees' intent to use and actual utilization of ChatGPT.
2.3. Uses and gratification theory (UGT)

UGT has long been a well-established perspective in the study of media use. It suggests that people actively gravitate towards media and technology that optimally satisfy their diverse requirements (Boudkouss & Djelassi, 2021; Katz et al., 1973; Shin, 2011). The central assumption of UGT is that people approach their media usage with specific objectives in mind and that they make deliberate decisions to use specific media based on expected gratifications or benefits. UGT has been adopted in various contexts, including studies related to traditional media, the Internet, social media, and even technology use (Busalim et al., 2021; Cheng, 2021; Kamble et al., 2020; Ruggiero, 2000; Stafford et al., 2004; Whiting & Williams, 2013). For instance, research by Sundar and Limperos (2013) used UGT to investigate the factors affecting users' preferences for certain technological interfaces over others. They found that the expected utilitarian and hedonic benefits significantly influenced users' interface preferences. In a similar vein, Stafford et al. (2004) adopted UGT to explain individuals' use of the Internet and found that the perceived benefits, including information-seeking, entertainment, and social interaction, significantly predicted Internet use. More recently, UGT has been applied to the field of AI (Marjerison et al., 2022; Xie et al., 2022). For example, McLean and Osei-Frimpong (2019) employed UGT to investigate users' acceptance of AI-based virtual assistants. They found that users' utilitarian benefits, symbolic benefits, social presence, and social interaction significantly influenced their acceptance of these virtual assistants. Applying UGT to the study of ChatGPT usage in the workplace can provide a robust theoretical foundation for understanding workers' motivations and the benefits they seek from using ChatGPT. Based on UGT, this study posits that workers actively use ChatGPT when they perceive its intelligence, learn autonomously, and acquire knowledge and information that could provide utilitarian benefits to their work, ultimately influencing their intention to use and actual use of ChatGPT.

3. Research model and hypotheses

This research endeavors to empirically investigate the utilization of ChatGPT among workers, along with the factors that exert influence and the ensuing advantages. The distinctiveness of this study lies in its comprehensive empirical examination of ChatGPT's application within workplace contexts, a contribution that holds potential implications for both research and practical applications in this emerging field. Specifically, the primary objective of this study is to enrich our comprehension of AI interaction, with a focal point on the distinctive attributes inherent in ChatGPT. The research probes into the underlying mechanisms governing the actual usage of ChatGPT, leveraging the unique characteristics of perceived intelligence and self-learning capability. These investigations encompass a broader spectrum of benefits, including information support and knowledge acquisition, culminating in the actual utilization of ChatGPT. Figure 1 illustrates the theoretical model.

Figure 1. Theoretical model.

3.1. Perceived intelligence

Perceived intelligence of chatbots is considered as the user's perception of the artificial intelligence's knowledge, competence, and cognitive abilities (Mirnig et al., 2017; Yu, 2020). Prior studies have established a notable association between perceived intelligence and users' adoption of and satisfaction with AI systems like ChatGPT (Pillai & Sivathanu, 2020; Rafiq et al., 2022). Users who perceive ChatGPT as intelligent may find the platform more supportive and capable of aiding in problem-solving (Orrù et al., 2023). Similarly, Russo et al. (2021) stated that perceived intelligence can affect the user's perception of the AI's ability to facilitate knowledge acquisition. Users who recognize chatbots as intelligent tools are more likely to utilize them for learning and extracting useful information (Adamopoulou & Moussiades, 2020; Okonkwo & Ade-Ibijola, 2021). They may believe that an intelligent system can present diverse resources for knowledge generation. Given the above discussion and the significance of perceived intelligence in the AI-user interaction context, it is reasonable to examine its impact on information support and knowledge acquisition among workers. Thus, this study suggests the following hypotheses.

H1a. Perceived intelligence of ChatGPT positively affects information support.

H1b. Perceived intelligence of ChatGPT positively affects knowledge acquisition.


3.2. Self-learning

Self-learning refers to the ability of artificial intelligence systems like ChatGPT to learn and improve over time from interactions and experiences (Chen et al., 2022). The self-learning capability of AI systems can enhance their effectiveness in providing valuable information support (Collins et al., 2021). According to Seo et al. (2021), AI systems with stronger self-learning capabilities are perceived as more supportive, providing more accurate and personalized solutions to users' problems. Likewise, the self-learning ability of AI is critical to knowledge acquisition. According to Bajwa et al. (2021), an AI system that continuously learns and updates its knowledge can better assist users in generating and acquiring new knowledge. Self-learning in AI systems provides a dynamic learning environment, enabling users to gain insights from varied resources and adapt to their changing needs (Chen et al., 2022). Given the pivotal role of self-learning in AI systems and its potential influence on information support and knowledge acquisition, it is imperative to examine its impact. Therefore, this study puts forth the following hypotheses for examination.

H2a. Self-learning of ChatGPT positively affects information support.

H2b. Self-learning of ChatGPT positively affects knowledge acquisition.

3.3. Information support

Informational support pertains to assistance that comes in the form of recommendations, insights, directives, or counsel for resolving issues (Cohen & Wills, 1985). In the context of AI, information support is described as the delivery of relevant and helpful information to provide guidance and advice for problem-solving (Lee et al., 2022). This provision of support is crucial in enhancing utilitarian benefits, essentially the functional advantages users gain from employing AI (Gupta, 2020). When an AI system like ChatGPT provides relevant and timely information support, users can leverage it to improve efficiency and productivity in their daily activities (Dennehy et al., 2023). This increases the perceived utilitarian benefits of using the system. Moreover, Ashfaq et al. (2020) demonstrated that the provision of information support also positively influences a user's intention to use AI. They argued that users who receive adequate information support are likely to find the system more helpful, thus increasing their willingness to continue using it. The substantial effect of information support on utilitarian benefits and users' intention to use AI indicates the necessity to study its impact further. Consequently, this research proposes the following hypotheses.

H3a. Information support of ChatGPT positively affects utilitarian benefits.

H3b. Information support of ChatGPT positively affects intention to use.

3.4. Knowledge acquisition

Knowledge acquisition is the process through which users gain new insights or knowledge from AI systems like ChatGPT (Arpaci et al., 2020). The effectiveness of information systems (ISs) in facilitating knowledge acquisition significantly impacts the perceived usefulness (a key element of utilitarian benefits) users gain from the system (Al-Emran et al., 2018). According to Al-Emran and Teo (2020), the ability to generate new insights and understandings contributes to users' productivity and efficiency, enhancing the practical benefits of using the IS. As users learn more and acquire relevant knowledge from the system, the practical utility of the system to the user increases. Furthermore, chatbots significantly improve the efficiency and potency of micro-learning processes, increasing information gathering and retention by as much as 20% (Vázquez-Cano et al., 2021). The ability of AI to facilitate learning and knowledge acquisition increases its perceived usefulness, prompting greater user engagement and sustained use (Al-Sharafi et al., 2022). Given the implications of knowledge acquisition for utilitarian benefits and users' intention to use AI, it is crucial to further explore its impact. Hence, this research proposes the following hypotheses.

H4a. Knowledge acquisition via ChatGPT positively affects utilitarian benefits.

H4b. Knowledge acquisition via ChatGPT positively affects intention to use.

3.5. Utilitarian benefits

Utilitarian benefits refer to the practical advantages or functional value users gain from employing an AI chatbot like ChatGPT (Cheng & Jiang, 2020). Studies have shown that when users perceive tangible benefits from a system, their intention to use it in the future rises (Dinh & Park, 2023; Kim & Oh, 2011; McLean & Osei-Frimpong, 2019). For instance, when workers find that using ChatGPT improves efficiency or increases productivity, they are likely to develop a stronger intention to continue using it. This suggests a direct relationship between the utilitarian benefits perceived and the intention to use the system (Venkatesh et al., 2012). Similarly, the utilitarian benefits of an AI system can also affect its actual usage. Users who perceive significant practical value from using ChatGPT may be motivated to continue using it (DeLone & McLean, 2003; Mishra & Shukla, 2020). Therefore, it is reasonable to posit that the tangible benefits that users gain from using ChatGPT would directly influence its actual use. Given the implications of utilitarian benefits for both the intention to use and actual use of AI, this study suggests the following hypotheses.

H5a. Utilitarian benefits derived from ChatGPT positively affect intention to use it.

H5b. Utilitarian benefits derived from ChatGPT positively affect its actual use.
3.6. Intention to use

Intention to use refers to a user's plan or decision to use an AI system such as ChatGPT in the future (Venkatesh et al., 2012). There is a robust body of research suggesting a strong link between a user's intention to use technology and their actual use of that technology (Davis, 1989; DeLone & McLean, 2003; Jeyaraj, 2021; Tao, 2009). Additionally, several scholars have confirmed that behavioral intention is a significant factor in determining the actual usage of AI chatbots (Gatzioufa & Saprikis, 2022). Given the consistency of these findings in the literature, it is reasonable to hypothesize that a user's intention to use ChatGPT would positively impact their actual usage of the system. Thus, this study suggests the following hypothesis.

H6. Intention to use ChatGPT positively affects its actual use.

3.7. Control variables

Prior research has suggested that gender, age, and industry can influence the actual use of technologies like AI systems (Følstad et al., 2021; Kelly et al., 2022; Terblanche & Kidd, 2022; Venkatesh et al., 2003). For instance, Terblanche and Kidd (2022) found that age and gender differences can influence chatbot adoption and use. Their findings indicated that younger individuals and males were more likely to accept and use new technologies. Moreover, the industry of users can also influence the actual use of technology. Certain industries may have a higher dependency on AI technologies for operational tasks, leading to a higher rate of actual use (Kelly et al., 2022). Given the potential influence of these user characteristics on the actual use of AI technologies, it is worthwhile to investigate their impact. Hence, this study considers the gender, age, and industry of users as control variables, and suggests the following control hypotheses.

CV1. The gender of the user affects the actual use of ChatGPT.

CV2. The age of the user affects the actual use of ChatGPT.

CV3. The type of industry the user is engaged in affects the actual use of ChatGPT.

4. Research methodology

4.1. Participants

This study aimed to examine the usage of ChatGPT and the factors influencing its adoption among working individuals. To obtain a comprehensive and representative dataset, a cross-sectional survey was conducted. The survey was outsourced to Hankook Research, a renowned polling company, to ensure the professional and systematic collection of data. The choice of a third-party organization like Hankook Research is justified by its expertise in collecting unbiased data and its ability to reach a broad and diverse pool of respondents. The sampling approach implemented in this research adheres to the stratified random sampling technique, which is a form of probability sampling (Bryman, 2016). The population is first divided into distinct strata or subgroups, and the subsequent selection of subjects from each stratum is executed randomly and proportionally (Creswell & Creswell, 2017). In this specific study, the strata were determined based on three demographic indicators: gender, age (limited to those in their 20s to 40s, who are known to use ChatGPT more frequently), and industry. The delineation of these strata is reflective of the diverse workforce that engages with ChatGPT (Radford, 2019). The stratified random sampling method ensures a reduction in sampling error and a representation of all subgroups within the population (Bhattacherjee, 2012). It enhances statistical efficiency and provides a more accurate reflection of the population (Saunders et al., 2019). This quality is crucial to this study, given the aim to comprehend the utilization of ChatGPT across varied demographic clusters in the workplace.

The survey distribution and collection were conducted over approximately ten days, from May 23 to June 1, 2023. This timeframe was selected to provide ample opportunity for participants to respond, while still ensuring a timely data collection process. Hankook Research disseminated an online questionnaire to their registered panelists, collecting responses from individuals who fulfilled the study's inclusion criteria. The initial part of the questionnaire consisted of qualifying questions, including optional ones pertaining to ChatGPT usage, age, and industry. This strategy enabled us to gather a sample composed of individuals in their 20s to 40s who had prior experience utilizing ChatGPT in their professional contexts. Upon collection, a filtering process was applied to the dataset to ensure the quality of responses: any incomplete or inconsistent responses were discarded to maintain the robustness of the analysis. This process is vital to minimize potential biases and inaccuracies in the dataset and, consequently, in the study's findings. Finally, 351 responses were used for analysis. The final sample size is adequately powered to conduct meaningful statistical analyses, providing a solid foundation for understanding the influence of different factors on the intention to use and actual use of ChatGPT in the workplace.
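To make the stratified design described above concrete, the following is a minimal illustrative sketch of a proportional stratified draw in Python. It is an illustration only, not the authors' procedure: the sampling frame, the column names gender, age_group, and industry, and the sampling fraction are all hypothetical.

```python
# Illustrative sketch of proportional stratified random sampling (not the authors' code).
# Assumes a sampling frame in a pandas DataFrame with hypothetical columns
# "gender", "age_group", and "industry" that identify each respondent's stratum.
import pandas as pd

def stratified_sample(frame: pd.DataFrame, frac: float, seed: int = 42) -> pd.DataFrame:
    """Draw the same fraction from every gender x age_group x industry stratum."""
    return (
        frame.groupby(["gender", "age_group", "industry"], group_keys=False)
        .apply(lambda stratum: stratum.sample(frac=frac, random_state=seed))
        .reset_index(drop=True)
    )

# Example usage with a hypothetical panel file:
# panel = pd.read_csv("panel.csv")
# sample = stratified_sample(panel, frac=0.05)  # 5% of each stratum
```

Because every stratum is sampled at the same rate, the drawn sample mirrors the population's joint distribution of gender, age group, and industry, which is the property the text above attributes to the design.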

Table 1. Demographic features of respondents (N = 351).

Demographics                          Item                            Frequency   Percentage (%)
Gender                                Male                            176         50.1
                                      Female                          175         49.9
Age                                   20s                             118         33.6
                                      30s                             117         33.3
                                      40s                             116         33.0
Industry                              Manufacturing                   79          22.5
                                      Construction                    14          4.0
                                      Information and Communication   49          14.0
                                      Distribution                    29          8.3
                                      Law                             9           2.6
                                      Medical and Health              29          8.3
                                      Education                       61          17.4
                                      Service                         81          23.1
Use Frequency                         Strongly Disagree               8           2.3
("I use ChatGPT frequently.")         Disagree                        41          11.7
                                      Somewhat Disagree               51          14.5
                                      Neither Agree nor Disagree      126         35.9
                                      Somewhat Agree                  78          22.2
                                      Agree                           33          9.4
                                      Strongly Agree                  14          4.0

The demographics of the respondents, as depicted in Table 1, were almost evenly distributed between male and female respondents, with males making up 50.1% (N = 176) and females 49.9% (N = 175) of the sample. The respondents' ages were also evenly distributed across three age groups: 20s, 30s, and 40s, accounting for 33.6% (N = 118), 33.3% (N = 117), and 33.0% (N = 116) of the total respondents, respectively. This balanced age distribution is particularly important as it allows the research to investigate potential age-related differences in perceptions and usage of ChatGPT. Regarding industry, respondents were spread across various sectors, highlighting the versatility of ChatGPT's use across different work contexts. The industries with the most respondents were service (23.1%, N = 81), manufacturing (22.5%, N = 79), and education (17.4%, N = 61). Other represented industries included information and communication (14.0%, N = 49), distribution (8.3%, N = 29), medical and health (8.3%, N = 29), construction (4.0%, N = 14), and law (2.6%, N = 9). In summary, the sample incorporated a diverse set of demographic and industry backgrounds, ensuring a broad exploration of user interactions with ChatGPT across different contexts. Most respondents (35.9%) chose "Neither Agree nor Disagree" when asked about their frequent use of ChatGPT, while a significant portion (22.2%) somewhat agreed, and fewer respondents either strongly agreed (4.0%) or strongly disagreed (2.3%).

4.2. Measurements

The development of the research instrument involved several stages to ensure the accuracy, relevance, and reliability of the survey items. Each construct in this study was operationalized with multiple items adopted from existing validated scales. For instance, perceived intelligence was examined using various items, including "I believe that ChatGPT is competent," which is a reflection of the work by Pillai and Sivathanu (2020). Similarly, the self-learning construct was captured through several items, one of which was "The ChatGPT's ability is enhanced through learning," referencing the research of Chen et al. (2022). To assess information support, the research employed items such as "ChatGPT gives me suggestions and advice on problem-solving," inspired by the work of Lee et al. (2022). For knowledge acquisition, an example item is "ChatGPT enables me to generate new knowledge based on my existing understanding," which was influenced by the study of Al-Sharafi et al. (2022). When measuring utilitarian benefits, the research used items like "I find ChatGPT useful in my daily life," based on the work of McLean and Osei-Frimpong (2019). Intention to use was evaluated with items such as "I intend to use ChatGPT in the future," in accordance with the work of Venkatesh et al. (2012) and Davis (1989). Finally, the construct actual use was examined through items such as "I use ChatGPT frequently," which is consistent with the research of DeLone and McLean (2003) and Mishra and Shukla (2020). These items, among others, were used to provide a comprehensive understanding of the constructs under investigation. This study adapted these items to the specific context of interaction with ChatGPT, ensuring that they were comprehensible and relevant to the study participants.

The development of the questionnaire underwent a rigorous process to ensure the validity and reliability of the instrument across language translations. Initially, the questionnaire was developed in English, drawing from the theoretical foundations and concepts integral to the study. Upon completion, the English version of the questionnaire was sent to a language expert for translation into Korean to cater to the local respondents. The translated Korean questionnaire was then subjected to a back-translation process. In this process, an independent researcher, who was proficient in both English and Korean but was not privy to the original English questionnaire, translated the Korean version back into English. This step was taken to check the consistency and equivalence of the translation. The back-translated English version was subsequently compared to the original English version by the research team. Any discrepancies were thoroughly examined, and necessary revisions were made to ensure conceptual and semantic equivalence across both versions. This meticulous translation and back-translation process was instrumental in validating the accuracy and cross-language reliability of the questionnaire, which is crucial to maintaining the integrity and robustness of the data collected for the study. The items were then subjected to a pilot test with a small group of participants representative of the study's target population to ascertain the clarity and appropriateness of the questions. Based on the pilot test feedback, slight modifications were made to the phrasing of some items to improve clarity and understanding. The final instrument was a self-administered questionnaire using a 7-point Likert scale, ranging from 1 (Strongly Disagree) to 7 (Strongly Agree), for the respondents to express their agreement or disagreement with each statement. Table A1 describes the list of construct items and corresponding sources.
4.3. Analysis procedure

This study utilized the partial least squares (PLS) method, which is ideal for handling formative factors and a large number of constructs. The PLS method has been widely employed in the IS field, as noted by Hair et al. (2012). The choice of PLS was grounded in its robustness, coupled with its relatively few restrictions on data distribution and sample size, as stated by Falk and Miller (1992). The PLS method's strengths make it an apt choice for this study, given the characteristics of the collected data and the constructs under investigation. The data analysis was conducted in two main stages to ensure the soundness and validity of the findings. The first stage was the evaluation of the measurement model, which included checking the convergent validity and reliability of the constructs, as well as assessing their discriminant validity. This step is crucial for confirming the reliability and validity of the measurement items and ensuring that they accurately capture the constructs they are intended to measure. The second stage involved the evaluation of the structural model, which allowed for testing the proposed relationships between the constructs. By examining the path coefficients and their statistical significance, the study could identify the strengths of the relationships between constructs and determine whether the hypothesized associations were supported by the data.

5. Results

5.1. Common method bias (CMB)

To account for the potential of CMB in this study, several procedural and statistical remedies were employed. CMB refers to the spurious correlation that might occur when both predictor and criterion variables are collected from the same source at the same time (Podsakoff et al., 2003). This is a pertinent concern in survey research, as it might inflate the true associations between constructs. Procedurally, the survey design incorporated several recommended practices to mitigate CMB. The anonymity of respondents was assured to reduce evaluation apprehension and potential bias in their responses. Moreover, questions related to different constructs were mixed in the survey rather than grouped, to prevent respondents from inferring relationships between constructs. Additionally, the survey instructions emphasized the importance of honest and independent responses, minimizing the likelihood of socially desirable responses. Statistically, a post hoc Harman's single-factor test was performed to investigate the presence of CMB. This involves an exploratory factor analysis on all the items in the survey to examine the unrotated factor solution. The results revealed that the first factor did not explain the majority of the covariance observed among the variables (Podsakoff et al., 2003), indicating that CMB is unlikely to seriously compromise the findings of this study. Moreover, the variance inflation factor (VIF) for each predictor in the regression models was well below the threshold of 10, further suggesting that CMB is not a significant concern (Kock, 2015). In conclusion, while it is impossible to eliminate the risk of CMB in survey research, the procedural and statistical remedies implemented in this study have minimized this concern and bolstered the validity of the findings.
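As a concrete illustration of the two statistical checks just described, the sketch below approximates Harman's single-factor test with the variance share of the first unrotated principal component and computes variance inflation factors. It is a hedged sketch, not the authors' code, and it assumes the item responses sit in a pandas DataFrame with one column per survey item (hypothetical names).

```python
# Hedged sketch of the CMB checks (not the authors' code).
# first_factor_share() approximates Harman's single-factor test via the variance
# explained by the first unrotated principal component of the standardized items;
# vif_table() returns a VIF per predictor (values well below 10 ease CMB concerns).
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

def first_factor_share(items: pd.DataFrame) -> float:
    """Variance explained by the first unrotated component of the standardized items."""
    standardized = (items - items.mean()) / items.std(ddof=0)
    return float(PCA(n_components=1).fit(standardized).explained_variance_ratio_[0])

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    """Variance inflation factor for each predictor, with an intercept included."""
    X = predictors.assign(_const=1.0)  # intercept column required by the VIF computation
    return pd.Series(
        {col: variance_inflation_factor(X.values, X.columns.get_loc(col))
         for col in predictors.columns}
    )

# Usage (hypothetical): first_factor_share(item_responses) well below 0.5 and
# vif_table(construct_scores).max() well below 10 would mirror the checks reported above.
```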

Table 2. Factor analysis and reliability.


Construct Items Mean St. Dev. Factor Loading Cronbach’s Alpha CR AVE
Perceived Intelligence PIE1 5.151 1.072 0.874 0.883 0.928 0.811
PIE2 5.040 1.176 0.911
PIE3 5.017 1.243 0.916
Self-learning SLN1 4.840 1.197 0.906 0.844 0.906 0.764
SLN2 4.798 1.170 0.901
SLN3 5.074 1.081 0.811
Information Support ISP1 4.655 1.176 0.871 0.826 0.896 0.742
ISP2 4.652 1.191 0.892
ISP3 4.635 1.230 0.819
Knowledge Acquisition KAQ1 4.983 1.281 0.875 0.890 0.932 0.819
KAQ2 5.094 1.167 0.933
KAQ3 5.028 1.139 0.907
Utilitarian Benefits UTB1 5.080 1.124 0.896 0.906 0.941 0.841
UTB2 5.074 1.147 0.934
UTB3 5.063 1.158 0.921
Intention to Use ITU1 5.350 1.152 0.937 0.917 0.948 0.859
ITU2 5.194 1.255 0.943
ITU3 5.145 1.296 0.899
Actual Use AUS1 4.370 1.344 0.888 0.735 0.883 0.790
AUS2 4.083 1.330 0.890
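For reference, the reliability and convergent-validity indices reported in Table 2 are conventionally computed from the standardized loadings λ_i of a construct's k items and the item variances; the expressions below are the standard textbook definitions, not formulas reproduced from the article.

```latex
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^{2}}{s_T^{2}}\right),
\qquad
\mathrm{CR} \;=\; \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2}}
                       {\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2} + \sum_{i=1}^{k}\bigl(1-\lambda_i^{2}\bigr)},
\qquad
\mathrm{AVE} \;=\; \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
```

Here s_i² is the variance of item i, s_T² is the variance of the summed scale, and λ_i is the standardized loading of item i. The Fornell-Larcker criterion applied below (Table 3) then requires the square root of each construct's AVE to exceed that construct's correlations with all other constructs.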
5.2. Measurement model

The measurement model of this study was evaluated by examining the reliability, convergent validity, and discriminant validity of the constructs. The results of this evaluation are presented in Table 2. In terms of reliability, Cronbach's alpha values for all constructs surpass the recommended threshold of 0.70, indicating satisfactory internal consistency (Nunnally, 1978). Moreover, the composite reliability (CR) values of all constructs are also above the recommended value of 0.70, further confirming the reliability of the constructs (Hair et al., 2006).

Regarding convergent validity, the average variance extracted (AVE) values of all constructs are greater than the recommended value of 0.50 (Fornell & Larcker, 1981), suggesting that more than half of the variance of the items is accounted for by their respective constructs. Additionally, all factor loadings are well above the suggested cutoff of 0.70 (Hair et al., 2006), providing further evidence of convergent validity.

The measurement model's discriminant validity was evaluated using both the Fornell-Larcker criterion and the heterotrait-monotrait ratio (HTMT). According to the Fornell-Larcker criterion, the square root of a construct's AVE should exceed its correlations with other constructs (Fornell & Larcker, 1981). Table 3 presents the findings that demonstrate the fulfillment of the Fornell-Larcker criterion. Specifically, the diagonal values (square roots of the AVEs) are greater than the off-diagonal values in the corresponding rows and columns, which demonstrates satisfactory discriminant validity among constructs. The HTMT is a more stringent approach to testing discriminant validity (Henseler et al., 2015). HTMT values should be below 0.90 to establish discriminant validity. The results in Table 4 show that all HTMT values are below the threshold, further confirming the discriminant validity of the constructs. These findings demonstrate good discriminant validity, confirming that the constructs in this study are distinct from each other.

Table 3. Correlation matrix and discriminant assessment.

Constructs                  1      2      3      4      5      6      7
1. Perceived Intelligence   0.900
2. Self-learning            0.614  0.874
3. Information Support      0.665  0.581  0.861
4. Knowledge Acquisition    0.647  0.591  0.677  0.905
5. Utilitarian Benefits     0.715  0.561  0.639  0.726  0.917
6. Intention to Use         0.599  0.528  0.640  0.708  0.731  0.927
7. Actual Use               0.412  0.365  0.531  0.435  0.435  0.569  0.889

Note: The values on the diagonal represent the square root of the AVE.

Table 4. HTMT matrix.

Constructs                  1      2      3      4      5      6      7
1. Perceived Intelligence
2. Self-learning            0.712
3. Information Support      0.773  0.695
4. Knowledge Acquisition    0.729  0.684  0.787
5. Utilitarian Benefits     0.802  0.643  0.734  0.804
6. Intention to Use         0.668  0.601  0.731  0.780  0.780
7. Actual Use               0.511  0.465  0.679  0.537  0.565  0.692

5.3. Hypothesis test

Structural equation modeling (SEM) using PLS was employed to test the relationships posited in our research model. The significance of the path coefficients was determined using the bootstrapping method, a technique recommended by Chin (1998) for PLS-SEM, and the significance of the relationships between constructs was determined by generating t-values through a bootstrapping procedure with 5,000 resamples. The results of the structural model, as illustrated in Figure 2, provide information about the relationships between the constructs and the extent to which the hypotheses of this study were supported.

Figure 2. PLS analysis result.

The results are as expected, with perceived intelligence showing a significant correlation with both information support (b = 0.495, t = 7.167, f² = 0.299) and knowledge acquisition (b = 0.456, t = 6.449, f² = 0.249), thus supporting hypotheses H1a and H1b. Consistent with expectations, self-learning exerts a significant influence on both information support (b = 0.278, t = 3.813, f² = 0.094) and knowledge acquisition (b = 0.311, t = 4.121, f² = 0.116), corroborating hypotheses H2a and H2b. As anticipated, information support has a significant positive impact on both utilitarian benefits (b = 0.274, t = 4.850, f² = 0.094) and intention to use (b = 0.187, t = 3.335, f² = 0.045), which supports H3a and H3b. As proposed, knowledge acquisition has a significant positive effect on both utilitarian benefits (b = 0.540, t = 9.620, f² = 0.365) and intention to use (b = 0.291, t = 4.023, f² = 0.088), strongly supporting H4a and H4b. As hypothesized, utilitarian benefits show a significant correlation with intention to use (b = 0.400, t = 5.274, f² = 0.181), lending credence to H5a. Contrary to expectations, utilitarian benefits are not significantly associated with actual use (b = 0.083, t = 1.194, f² = 0.005), leading to the rejection of H5b. As expected, intention to use is significantly associated with actual use (b = 0.510, t = 7.265, f² = 0.184), thereby supporting H6. Gender (b = -0.181, t = 2.087, f² = 0.012) and age (b = -0.099, t = 2.306, f² = 0.015) have a significant effect on actual use, while industry (b = 0.005, t = 0.124, f² = 0.000) does not affect actual use. Overall, the structural model explained approximately 34.0 percent of the variance in actual use.

The effect sizes provide an objective quantitative measure of the extent of the relationships among the constructs in our model. The f-square statistics revealed the significance of several paths in the model (Cohen, 1988). For example, the relationships between perceived intelligence and both information support and knowledge acquisition were found to be strong, with effect sizes of 0.299 and 0.249, respectively. These results emphasize the importance of perceived intelligence in influencing users' perceptions of information support and knowledge acquisition from ChatGPT. Furthermore, the strong effect size (0.365) of knowledge acquisition on utilitarian benefits indicates that acquiring knowledge from ChatGPT substantially contributes to its utilitarian benefits. On the other hand, relationships such as those between utilitarian benefits and actual use and between industry and actual use had very low effect sizes, indicating their limited influence. Table 5 shows the results of the SEM analysis.

Table 5. Analysis of path coefficients.

H    Cause                   Effect                  Coefficient   T-value   p-value   F-square   Hypothesis
H1a  Perceived Intelligence  Information Support     0.495***      7.167     0.000     0.299      Supported
H1b  Perceived Intelligence  Knowledge Acquisition   0.456***      6.449     0.000     0.249      Supported
H2a  Self-learning           Information Support     0.278***      3.813     0.000     0.094      Supported
H2b  Self-learning           Knowledge Acquisition   0.311***      4.121     0.000     0.116      Supported
H3a  Information Support     Utilitarian Benefits    0.274***      4.850     0.000     0.094      Supported
H3b  Information Support     Intention to Use        0.187***      3.335     0.001     0.045      Supported
H4a  Knowledge Acquisition   Utilitarian Benefits    0.540***      9.620     0.000     0.365      Supported
H4b  Knowledge Acquisition   Intention to Use        0.291***      4.023     0.000     0.088      Supported
H5a  Utilitarian Benefits    Intention to Use        0.400***      5.274     0.000     0.181      Supported
H5b  Utilitarian Benefits    Actual Use              0.083         1.194     0.233     0.005      Not Supported
H6   Intention to Use        Actual Use              0.510***      7.265     0.000     0.184      Supported
CV1  Gender                  Actual Use              -0.181*       2.087     0.037     0.012      Supported
CV2  Age                     Actual Use              -0.099*       2.306     0.021     0.015      Supported
CV3  Industry                Actual Use              0.005         0.124     0.902     0.000      Not Supported

Note: For the variables, the values were assigned as follows: Gender (Male = 1, Female = 2); Industry (1 = Manufacturing, 2 = Construction, 3 = Information and Communication, 4 = Distribution, 5 = Law, 6 = Medical and Health, 7 = Education, 8 = Service).
*: p < 0.05; **: p < 0.01; ***: p < 0.001.
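For reference, the F-square column in Table 5 reports the f² effect size, which expresses how much the explained variance of an endogenous construct drops when a given predictor is omitted from the model; the definition below and the usual benchmarks of 0.02, 0.15, and 0.35 for small, medium, and large effects follow Cohen (1988) and are not expressions reproduced from the article.

```latex
f^{2} \;=\; \frac{R^{2}_{\text{included}} - R^{2}_{\text{excluded}}}{1 - R^{2}_{\text{included}}}
```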

6. Discussion

The findings from this study corroborate prior research in many ways, with some divergences and new insights as well.

6.1. The effect of perceived intelligence on information support and knowledge acquisition

The positive association between the perceived intelligence of ChatGPT and information support and knowledge acquisition aligns with the assertions of Orrù et al. (2023) and Russo et al. (2021), respectively. Users likely perceive a more intelligent system to provide better support and facilitate more knowledge acquisition. This interpretation is consistent with the expectation that a more competent AI system would be more capable of offering valuable insights and learning materials.

6.2. The effect of self-learning on information support and knowledge acquisition

The significant correlation between ChatGPT's self-learning capabilities and information support and knowledge acquisition adds to the growing body of literature stressing the importance of AI adaptability and learning over time (Collins et al., 2021; Seo et al., 2021). In terms of information support, a self-learning AI can better understand the context and nuances of user queries over time, leading to more accurate and personalized responses. As the AI system learns and evolves, it is more likely to provide appropriate suggestions, guidance, or advice to help users navigate their problems. This in turn improves the perceived information support from the AI. On the other hand, self-learning capabilities can also influence knowledge acquisition. As ChatGPT learns from its interactions and the variety of resources at its disposal, it enhances its ability to generate new, relevant, and personalized knowledge. This could involve introducing users to new concepts or providing more nuanced explanations based on their previous interactions. Consequently, users are more likely to feel that they can acquire new knowledge or expand their existing knowledge base through their interaction with the AI system.

6.3. The effect of information support on utilitarian benefit and intention to use

The empirical results showing the positive impact of information support on utilitarian benefits and intention to use complement existing literature, such as the studies by Gupta (2020) and Ashfaq et al. (2020). When users perceive this support as useful, it directly contributes to the utilitarian benefits that they derive from using the AI system. Utilitarian benefits refer to practical and functional benefits, such as saving time, increasing efficiency, or helping users solve problems. So, if users find the information provided by ChatGPT to be useful, helpful, and relevant, they are more likely to perceive a higher level of utilitarian benefit from using AI. The significant influence of information support on users' intention to continue using ChatGPT can also be understood in this light. If users find AI to be a useful tool that provides valuable information support, they are more likely to develop the intention to continue using it in the future. This suggests that they see value in the tool and wish to continue leveraging its benefits.
6.4. The effect of knowledge acquisition on utilitarian benefit and intention to use

The results identified a significant positive effect of knowledge acquisition on utilitarian benefits and intention to use. Previous research has emphasized the importance of knowledge acquisition for practical benefits (Al-Emran et al., 2018; Al-Emran & Teo, 2020) and intention to sustain use (Al-Sharafi et al., 2022). When users can effectively acquire new knowledge through ChatGPT, this directly contributes to utilitarian benefits, which are the practical advantages users gain from AI. These benefits could include the ability to make better-informed decisions, solve problems more effectively, or streamline tasks that would otherwise require substantial time and effort. Thus, a positive relationship between knowledge acquisition and utilitarian benefits suggests that an AI's capacity to facilitate learning is a critical aspect of its value to users. Similarly, if users feel they are successfully gaining new knowledge from their interactions with ChatGPT, they are likely to have a higher intention to continue using it. Intention to use ChatGPT represents a user's plan or decision to continue leveraging this AI in the future. If users see ChatGPT as a valuable learning tool, they are likely to continue using it, increasing their ongoing engagement with the system.

6.5. The effect of utilitarian benefit on intention to use and actual use

The findings unveiled that utilitarian benefits significantly influence the intention to use, confirming the observations of prior works (Dinh & Park, 2023; Kim & Oh, 2011; McLean & Osei-Frimpong, 2019). This relationship between utilitarian benefits and intention to use suggests that the more users find practical value in ChatGPT, the more likely they are to want to continue using it. If users believe that using ChatGPT helps them accomplish tasks more quickly, increases their productivity, or even simplifies complex tasks, they are likely to develop a stronger intention to use it.

The finding that utilitarian benefits do not significantly affect the actual use of ChatGPT may initially appear counterintuitive, as one would expect that practical benefits derived from using an AI system would directly lead to its increased use. This diverges from the findings of DeLone and McLean (2003) and Mishra and Shukla (2020), who found a direct impact of utility on actual use. While prior research has consistently noted the influence of perceived usefulness on intention to use (Ailawadi et al., 2005; Gatzioufa & Saprikis, 2022; Rapp et al., 2021), this study interestingly reveals that despite a significant impact of utilitarian benefits on intention to use, such benefits do not necessarily translate into actual use. This finding prompts a reconsideration of the assumptions in the extant literature and calls for further research to bridge the gap between users' intentions and their actual behavior in the context of AI interactions.

The discrepancy between the positive influence of utilitarian benefits on the intention to use ChatGPT and its lack of direct impact on actual usage could indeed be influenced by various factors. Firstly, in terms of company policy for information security or confidentiality, some organizations that utilize AI systems like ChatGPT in their workplaces may have strict policies regarding information security and data confidentiality. Employees may recognize the utilitarian benefits of ChatGPT, such as increased productivity or improved decision-making, leading to a positive intention to use the tool. However, due to concerns about data privacy or confidentiality, employees might be hesitant to fully utilize the system for certain tasks, especially those involving sensitive information. Secondly, the user interface (UI) of ChatGPT may also play a role in the discrepancy between intention and actual use. While employees might understand the potential benefits of ChatGPT in theory, they may find the actual interface challenging to navigate or not user-friendly. Employees might have thought it required a high level of usage skill, such as prompt engineering. This unfamiliarity or discomfort with the UI could deter them from using the system regularly, despite their positive intentions. Thirdly, in terms of technological constraints, technical limitations or system glitches can hinder the smooth functioning of ChatGPT, affecting its actual use. Frequent technical issues or slow response times might discourage employees from incorporating ChatGPT into their regular workflow, even if they recognize its utilitarian benefits. The discrepancy could also be examined from a variety of other perspectives, such as organizational culture and training, integration with existing workflows, or the perception of AI as a substitute rather than a complement. Further investigation into these factors through surveys, interviews, or experiments could provide deeper insights into the specific reasons behind the discrepancy and offer practical guidance for practitioners seeking to implement AI technologies effectively in the workplace.

6.6. The effect of intention to use on actual use

The results showing a strong association between intention to use and actual use align well with previous literature (Venkatesh et al., 2012; Davis, 1989), reinforcing the pivotal role of user intentions in influencing their behavior.

6.7. The control effect of gender, age, and industry on actual use

Concerning the role of user demographics, this study offers valuable insights. The results reveal a significant correlation between age and actual use, and between gender and actual use, contrary to some previous findings that suggested a more complex or non-significant relationship (McLean & Osei-Frimpong, 2019; Nguyen et al., 2021). This study thus contributes to a more nuanced understanding of the demographic factors influencing AI use, underlining the need for more tailored approaches in AI design and implementation.

The negative correlation between gender (where male = 1, female = 2) and actual use suggests that male users might be more likely to use ChatGPT frequently, consistent with the gender differences in technology adoption reported by Venkatesh et al. (2000) and Li et al. (2008).
INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION 11

The finding that male users employ ChatGPT more frequently than female users may be attributed to a variety of reasons, including cultural, societal, and educational factors. There is a possibility that this difference is a reflection of the long-standing gender digital divide, where men are traditionally more likely to adopt and use technology compared to women (Hilbert, 2011). Additionally, it might be that men are generally more inclined towards using technological tools in their daily tasks due to societal or job-related needs. This difference might also be influenced by the gender disparity prevalent in certain sectors, such as IT or engineering, where AI tools like ChatGPT are more commonly used. It is crucial, however, to note that these explanations are conjectural and could differ based on the cultural and social context of the user base. Future research could focus on understanding the reasons behind this disparity, which can lead to actionable steps towards encouraging more balanced usage of such technology across genders.

The effect of age on actual use aligns with prior research suggesting that younger users are more likely to adopt new technologies (De Cicco et al., 2020; Kasilingam, 2020). This tendency can be attributed to a multitude of factors, key among them being exposure and familiarity with technology from an early age, which promotes a higher level of comfort and proficiency (Morris & Venkatesh, 2000). Furthermore, this group's increased propensity for exploration and openness to change contributes to this trend (Venkatesh et al., 2016). These factors combined lead to a quicker and more frequent adoption and use of ChatGPT by younger users compared to their older counterparts.

The lack of significant effect of industry on actual use suggests that ChatGPT's use might be universal across different sectors. This is a crucial finding and indicates the versatile nature of the ChatGPT technology, revealing that it can be adopted across various sectors without significant variances. This can be interpreted to mean that ChatGPT's application is not confined to any particular industry, and thus, its utility is universal. It is possibly due to ChatGPT's adaptive learning capabilities and its wide-ranging functionalities that cater to diverse professional requirements. These findings align with the studies by Brynjolfsson and McAfee (2014) and Chui et al. (2018), who found that digital technologies, including AI, have broad implications across different industries.

The current study offers theoretical, pedagogical, and technical insights. From a theoretical standpoint, our results underscore the importance of enhancing the perceived intelligence and self-learning capabilities of AI systems. Technologies like prompt engineering, designed to evolve from user interactions, are more likely to produce accurate and relevant responses. This capability can strengthen both information support and knowledge acquisition for professionals. The strong correlation between information support and AI's utilitarian benefits underscores the importance of seamlessly integrating AI into employees' daily workflows. Articulating these benefits—such as task efficiency, improved operations, and enhanced job performance—can influence user intentions and foster system adoption.

From a pedagogical perspective, organizations should recognize AI systems as crucial tools for both knowledge acquisition and information provision. Implementing focused training sessions and workshops will ensure that employees can effectively utilize these systems, promoting an active learning environment. Moreover, using AI systems as knowledge repositories can facilitate quick access to vital insights, aiding informed decision-making across departments.

On the technical side, user feedback is essential for ongoing AI system improvements. By understanding user experiences and addressing any adoption disparities due to demographic factors, organizations can adjust AI functionalities to meet diverse user needs. Given the evolving nature of AI systems, regular updates based on user feedback ensure that the system's capabilities align with user expectations. Furthermore, as AI becomes more prevalent, ethical considerations become paramount. When adopting AI technologies, such as ChatGPT, organizations must adhere to ethical standards, especially concerning data privacy, transparency, and fairness. It is vital for AI systems to respect user privacy and offer clear explanations for their responses.

7. Conclusion

In our exploration of ChatGPT's use within professional environments, our data, sourced from workers aged 20–40 across various industries, highlighted a positive correlation between users' perceptions of ChatGPT's intelligence and self-learning capabilities, and its role in offering informational support and facilitating knowledge acquisition. This, in turn, significantly influenced their perception of its utilitarian benefits, shaping their intention to integrate the tool into their daily tasks. Interestingly, while perceived benefits were influential, they did not consistently translate to actual usage; the overall intent proved a more accurate predictor. Demographically, younger and male users showed a higher inclination towards ChatGPT, while the specific industry in which they worked remained inconsequential to their usage patterns.

This study offers several key theoretical contributions. First, it delves into the unique self-learning feature of ChatGPT, distinct from many AI systems. ChatGPT's chat interface visibly demonstrates its ongoing learning to users, emphasizing the importance of user awareness in AI adoption. Second, while past studies examined AI competence and learning ability separately, our research integrates both within one model. We show that perceived intelligence influences information support more than knowledge acquisition. In contrast, self-learning affects knowledge acquisition more. This comprehensive approach enriches our comprehension of how AI capabilities lead to varied user results. Lastly, we spotlight two vital dimensions of ChatGPT's utility: information support and knowledge acquisition. The former includes functions like translation and summarization, while the latter offers new insights, combining data from varied domains to provide users with unique knowledge, such as responses to queries.
This categorization paves the way for future explorations into the efficacy of generative algorithms in specific contexts.

The results of this study offer significant practical contributions for the design, deployment, and marketing of AI technologies, including ChatGPT. Firstly, the pronounced influence of perceived intelligence on information support and knowledge acquisition implies that practitioners should prioritize enhancing their AI system's perceived intelligence. Enhancements could focus on improving the system's accuracy, speed, and contextual responsiveness to user queries (Haque & Li, 2023). For instance, developing the AI to answer intricate questions, providing precise information, and showcasing an in-depth grasp of language can amplify perceived intelligence (De Angelis et al., 2023). Secondly, the value of self-learning in AI is underscored by our findings. By advancing and showcasing these capabilities, the system's utility can be bolstered, enriching user experiences. Developers might prioritize refining machine learning algorithms for adaptability and personalized responses (George & George, 2023; Paul et al., 2023), which companies can spotlight in marketing efforts. Thirdly, insights into the relationship between utilitarian benefits, intent, and actual use are noteworthy. While the utility is vital, barriers preventing AI integration into daily routines should be addressed, possibly through intuitive user interfaces or simplified implementation. Tools like user training, FAQs, or a dedicated support system might mitigate technical challenges. Fourthly, this research offers insights for training and integration programs. Given the weight of perceived intelligence and self-learning in influencing the system's perceived value, onboarding should emphasize these features. Demos or tutorials can exhibit the system's evolving learning and aptitude for accurate, context-sensitive responses. Educating users about AI's productivity potential can foster engagement and sustained use. Lastly, the role of demographics in AI utilization requires attention. With older and female users less frequently using AI, features or strategies catering to these demographics might be beneficial. Whether through intuitive interfaces, tailored tutorials, or demographic-focused marketing, emphasizing the system's utility is key. Interestingly, our findings indicate the industry doesn't dictate AI use, suggesting ChatGPT's versatility across sectors. Practitioners should, therefore, present their AI solutions as adaptable assets for diverse sectors, from education to finance.

Despite providing insightful contributions, this study is not without limitations, which in turn present avenues for future research. Firstly, while this study focused on variables such as perceived intelligence and self-learning capabilities, it did not consider other important factors such as trust, privacy concerns, or prior experience with AI systems, which could also significantly impact user interactions with AI. Moreover, the present study opens up new avenues for future research. The unexpected finding of the non-significant relationship between utilitarian benefits and actual use warrants further exploration to identify potential moderating or mediating variables. In the context of AI's self-learning capability, more research could be undertaken to examine how this factor interacts with other dimensions of AI perception. Moreover, the mixed results about demographic influences suggest that future research might delve deeper into the intersectionality of demographics and other factors (such as tech literacy and individual differences) to better understand the dynamics of AI adoption. Secondly, the study primarily adopted a quantitative approach. Although this approach provides valuable statistical insights, it might overlook the nuanced perceptions and experiences of users, which could be captured through qualitative methods. Future research could, therefore, employ a mixed-methods approach, combining surveys with interviews or focus groups, to delve deeper into user perceptions and experiences. Thirdly, another significant limitation is the study's inability to account for the diversity of user contexts, such as specific work scenarios in which ChatGPT is employed (documentation, programming, marketing, etc.). This limitation primarily arises from the need for a significantly larger and more diverse sample to effectively represent each context. Additionally, the study relied on a self-reported methodology, which inherently poses challenges in objectively measuring the frequency of ChatGPT usage. Although the variable "industry" was introduced to partially account for the context, it was found to be non-significant, suggesting that the influence of the workplace environment may be less critical in the adoption and use of AI technologies like ChatGPT. We anticipate future research will refine the methodology, perhaps incorporating more objective usage measures or accounting for additional variables. Lastly, the generalizability of the findings may be limited due to the specific sample used in this study. Our study relies on survey responses from 351 South Koreans, which may limit the generalizability of the findings to a broader global context. We recognize the importance of considering cultural aspects that could influence individuals' perceptions and behaviors related to AI systems like ChatGPT. To enhance the generalizability of our results, we acknowledge the necessity of including participants from diverse countries and ethnicities in future studies. By incorporating a more diverse sample, we can gain a better understanding of the cross-cultural variations in the adoption and use of ChatGPT in the workplace. Moreover, we primarily focus on office workers, and we acknowledge that different groups, such as university students, middle and high school students familiar with technology, and seniors over 50, may interact with ChatGPT in distinct ways. Including these varied demographic groups in our research can provide valuable insights into the differences in technological familiarity and problem-solving approaches across different age groups.

Disclosure statement

No potential conflict of interest was reported by the author(s).

ORCID

Hyeon Jo http://orcid.org/0000-0001-7442-4736
Do-Hyung Park http://orcid.org/0000-0002-7278-5228
References

Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications, 2(53), 100006. https://doi.org/10.1016/j.mlwa.2020.100006
Ailawadi, K. L., Kopalle, P. K., & Neslin, S. A. (2005). Predicting competitive response to a major policy change: Combining game-theoretic and empirical analyses. Marketing Science, 24(1), 12–24. https://doi.org/10.1287/mksc.1040.0077
Al-Emran, M., Mezhuyev, V., & Kamaludin, A. (2018). Students' perceptions towards the integration of knowledge management processes in M-learning systems: A preliminary study. International Journal of Engineering Education, 34(2), 371–380.
Al-Emran, M., & Teo, T. (2020). Do knowledge acquisition and knowledge sharing really affect e-learning adoption? An empirical study. Education and Information Technologies, 25(3), 1983–1998. https://doi.org/10.1007/s10639-019-10062-w
Al-Sharafi, M. A., Al-Emran, M., Iranmanesh, M., Al-Qaysi, N., Iahad, N. A., & Arpaci, I. (2022). Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interactive Learning Environments, 1–20. https://doi.org/10.1080/10494820.2022.2075014
Alberts, I. L., Mercolli, L., Pyka, T., Prenosil, G., Shi, K., Rominger, A., & Afshar-Oromieh, A. (2023). Large language models (LLM) and ChatGPT: What will the impact on nuclear medicine be? European Journal of Nuclear Medicine and Molecular Imaging, 50(6), 1549–1552. https://doi.org/10.1007/s00259-023-06172-w
Arpaci, I., Al-Emran, M., & Al-Sharafi, M. A. (2020). The impact of knowledge management practices on the acceptance of Massive Open Online Courses (MOOCs) by engineering students: A cross-cultural comparison. Telematics and Informatics, 54, 101468. https://doi.org/10.1016/j.tele.2020.101468
Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the determinants of users' satisfaction and continuance intention of AI-powered service agents. Telematics and Informatics, 54, 101473. https://doi.org/10.1016/j.tele.2020.101473
Badini, S., Regondi, S., Frontoni, E., & Pugliese, R. (2023). Assessing the capabilities of ChatGPT to improve additive manufacturing troubleshooting. Advanced Industrial and Engineering Polymer Research, 6(3), 278–287. https://doi.org/10.1016/j.aiepr.2023.03.003
Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021). Artificial intelligence in healthcare: Transforming the practice of medicine. Future Healthcare Journal, 8(2), e188–e194. https://doi.org/10.7861/fhj.2021-0095
Beer, P., & Mulder, R. H. (2020). The effects of technological developments on work and their implications for continuous vocational education and training: A systematic review. Frontiers in Psychology, 11, 918. https://doi.org/10.3389/fpsyg.2020.00918
Bhattacherjee, A. (2012). Social science research: Principles, methods and practices. Open University Press.
Boudkouss, H., & Djelassi, S. (2021). Understanding in-store interactive technology use: A uses and gratifications theory (UGT) perspective. International Journal of Retail & Distribution Management, 49(12), 1621–1639. https://doi.org/10.1108/IJRDM-11-2020-0459
Bryman, A. (2016). Social research methods. Oxford University Press.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.
Busalim, A. H., Ghabban, F., & Hussin, A. R. C. (2021). Customer engagement behaviour on social commerce platforms: An empirical study. Technology in Society, 64(3), 101437. https://doi.org/10.1016/j.techsoc.2020.101437
Cardon, P., Getchell, K., Carradini, S., Fleischmann, A. C., & Stapp, J. (2023). Generative AI in the workplace: Employee perspectives of ChatGPT benefits and organizational policies. SocArXiv. https://doi.org/10.31235/osf.io/b3ezy
Carvalho, I., & Ivanov, S. (2023). ChatGPT for tourism: Applications, benefits and risks. Tourism Review. https://doi.org/10.1108/TR-02-2023-0088
Chen, Q., Gong, Y., Lu, Y., & Tang, J. (2022). Classifying and measuring the service quality of AI chatbot in frontline service. Journal of Business Research, 145(5), 552–568. https://doi.org/10.1016/j.jbusres.2022.02.088
Cheng, Y.-M. (2021). Will robo-advisors continue? Roles of task-technology fit, network externalities, gratifications and flow experience in facilitating continuance intention. Kybernetes, 50(6), 1751–1783. https://doi.org/10.1108/K-03-2020-0185
Cheng, Y., & Jiang, H. (2020). How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use. Journal of Broadcasting & Electronic Media, 64(4), 592–614. https://doi.org/10.1080/08838151.2020.1834296
Chin, W. W. (1998). The partial least squares approach to structural equation modeling. Lawrence Erlbaum.
Chowdhury, N., Abdoulsalam Awais, O., & Aktar, S. (2023). Improving customer care with ChatGPT: A case study. https://doi.org/10.5281/zenodo.7699658
Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., & Malhotra, S. (2018). Notes from the AI frontier: Insights from hundreds of use cases. McKinsey Global Institute, 2, 267.
Cohen, J. (1988). Statistical power analysis. Erlbaum.
Cohen, S., & Wills, T. A. (1985). Stress, social support, and the buffering hypothesis. Psychological Bulletin, 98(2), 310–357. https://doi.org/10.1037/0033-2909.98.2.310
Collins, C., Dennehy, D., Conboy, K., & Mikalef, P. (2021). Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management, 60(4), 102383. https://doi.org/10.1016/j.ijinfomgt.2021.102383
Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Frontiers in Public Health, 11, 1166120. https://doi.org/10.3389/fpubh.2023.1166120
De Cicco, R., Silva, S. C., & Alparone, F. R. (2020). Millennials' attitude toward chatbots: An experimental study in a social relationship perspective. International Journal of Retail & Distribution Management, 48(11), 1213–1233. https://doi.org/10.1108/IJRDM-12-2019-0406
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748
Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y. K., Mäntymäki, M., & Pappas, I. O. (2023). Artificial intelligence (AI) and information systems: Perspectives to responsible AI. Information Systems Frontiers, 25(1), 1–7. https://doi.org/10.1007/s10796-022-10365-3
Dinh, C.-M., & Park, S. (2023). How to increase consumer intention to use chatbots? An empirical analysis of hedonic and utilitarian motivations on social presence and the moderating effects of fear across generations. Electronic Commerce Research, 1–41. https://doi.org/10.1007/s10660-022-09662-5
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
Emenike, M. E., & Emenike, B. U. (2023). Was this title generated by ChatGPT? Considerations for artificial intelligence text-generation software programs for chemists and chemistry educators. Journal of Chemical Education, 100(4), 1413–1418. https://doi.org/10.1021/acs.jchemed.3c00063
Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. University of Akron Press.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.2307/3151312
Følstad, A., Araujo, T., Law, E. L.-C., Brandtzaeg, P. B., Papadopoulos, S., Reis, L., Baez, M., Laban, G., McAllister, P., Ischen, C., Wald, R., Catania, F., Meyer von Wolff, R., Hobert, S., & Luger, E. (2021). Future directions for chatbot research: An interdisciplinary research agenda. Computing, 103(12), 2915–2942. https://doi.org/10.1007/s00607-021-01016-7
Gatzioufa, P., & Saprikis, V. (2022). A literature review on users' behavioral intention toward chatbots' adoption. Applied Computing and Informatics, 1(1), 9–23. https://doi.org/10.1108/ACI-01-2022-0021
George, A. S., & George, A. H. (2023). A review of ChatGPT AI's impact on several business sectors. Partners Universal International Innovation Journal, 1(1), 9–23. https://doi.org/10.5281/zenodo.7644359
Gupta, A. (2020). Introduction to AI chatbots. IJERT, 9(7), 255–258. https://doi.org/10.17577/IJERTV9IS070143
Hair, J., Anderson, R., & Tatham, B. R. (2006). Multivariate data analysis (6th ed.). Pearson Education.
Hair, J. F., Sarstedt, M., Ringle, C. M., & Mena, J. A. (2012). An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science, 40(3), 414–433. https://doi.org/10.1007/s11747-011-0261-6
Haque, M. A., & Li, S. (2023). The potential use of ChatGPT for debugging and bug fixing. EAI Endorsed Transactions on AI and Robotics, 2(1), e4. https://doi.org/10.4108/airo.v2i1.3276
Hassani, H., & Silva, E. S. (2023). The role of ChatGPT in data science: How AI-assisted conversational interfaces are revolutionizing the field. Big Data and Cognitive Computing, 7(2), 62. https://doi.org/10.3390/bdcc7020062
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8
Hilbert, M. (2011). The end justifies the definition: The manifold outlooks on the digital divide and their practical usefulness for policy-making. Telecommunications Policy, 35(8), 715–736. https://doi.org/10.1016/j.telpol.2011.06.012
Jeyaraj, A. (2021). Rethinking the intention to behavior link in information technology use: Critical review and research directions. International Journal of Information Management, 59(2), 102345. https://doi.org/10.1016/j.ijinfomgt.2021.102345
Joseph, R. P., & Arun, T. M. (2021). Models and tools of knowledge acquisition. In S. Patnaik, K. Tajeddini, & V. Jain (Eds.), Computational management: Applications of computational intelligence in business management (pp. 53–67). Springer International Publishing. https://doi.org/10.1007/978-3-030-72929-5_3
Kamble, S. S., Gunasekaran, A., Ghadge, A., & Raut, R. (2020). A performance measurement system for industry 4.0 enabled smart manufacturing system in SMMEs: A review and empirical investigation. International Journal of Production Economics, 229, 107853. https://doi.org/10.1016/j.ijpe.2020.107853
Kasilingam, D. L. (2020). Understanding the attitude and intention to use smartphone chatbots for shopping. Technology in Society, 62(C), 101280. https://doi.org/10.1016/j.techsoc.2020.101280
Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. Public Opinion Quarterly, 37(4), 509–523. https://doi.org/10.1086/268109
Kelly, S., Kaye, S.-A., & Oviedo-Trespalacios, O. (2022). A multi-industry analysis of the future use of AI chatbots. Human Behavior and Emerging Technologies, 2022(1), 1–14. https://doi.org/10.1155/2022/2552099
Kim, B., & Oh, J. (2011). The difference of determinants of acceptance and continuance of mobile data services: A value perspective. Expert Systems with Applications, 38(3), 1798–1804. https://doi.org/10.1016/j.eswa.2010.07.107
King, M. R. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16(1), 1–2. https://doi.org/10.1007/s12195-022-00754-8
Kock, N. (2015). WarpPLS 5.0 user manual. ScriptWarp Systems.
Lee, C. T., Pan, L.-Y., & Hsieh, S. H. (2022). Artificial intelligent chatbots as brand promoters: A two-stage structural equation modeling-artificial neural network approach. Internet Research, 32(4), 1329–1356. https://doi.org/10.1108/INTR-01-2021-0030
Li, S., Glass, R., & Records, H. (2008). The influence of gender on new technology adoption and use–mobile commerce. Journal of Internet Commerce, 7(2), 270–289. https://doi.org/10.1080/15332860802067748
Liebrenz, M., Schleifer, R., Buadze, A., Bhugra, D., & Smith, A. (2023). Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. The Lancet Digital Health, 5(3), e105–e106. https://doi.org/10.1016/S2589-7500(23)00019-5
Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News.
Marjerison, R. K., Zhang, Y., & Zheng, H. (2022). AI in e-commerce: Application of the use and gratification model to the acceptance of chatbots. Sustainability, 14(21), 14270. https://doi.org/10.3390/su142114270
Marr, B. (2023). The 7 best examples of how ChatGPT can be used in human resources (HR). Forbes. Retrieved June 1, 2023, from https://www.forbes.com/sites/bernardmarr/2023/03/07/the-7-best-examples-of-how-chatgpt-can-be-used-in-human-resources-hr/?sh=62a6bd6b4a82
McLean, G., & Osei-Frimpong, K. (2019). Hey Alexa … examine the variables influencing the use of artificial intelligent in-home voice assistants. Computers in Human Behavior, 99, 28–37. https://doi.org/10.1016/j.chb.2019.05.009
Mirnig, N., Stollnberger, G., Miksch, M., Stadler, S., Giuliani, M., & Tscheligi, M. (2017). To err is robot: How humans assess and act toward an erroneous social robot. Frontiers in Robotics and AI, 4, 21. https://doi.org/10.3389/frobt.2017.00021
Mishra, A., & Shukla, A. (2020). Psychological determinants of consumer's usage, satisfaction, and word-of-mouth recommendations toward smart voice assistants. In S. K. Sharma, Y. K. Dwivedi, B. Metri, & N. P. Rana (Eds.), Re-imagining diffusion and adoption of information technology and systems: A continuing conversation. Cham.
Morris, M. G., & Venkatesh, V. (2000). Age differences in technology adoption decisions: Implications for a changing work force. Personnel Psychology, 53(2), 375–403. https://doi.org/10.1111/j.1744-6570.2000.tb00206.x
Ngoc Thang, N., & Anh Tuan, P. (2020). Knowledge acquisition, knowledge management strategy and innovation: An empirical study of Vietnamese firms. Cogent Business & Management, 7(1), 1786314. https://doi.org/10.1080/23311975.2020.1786314
Nguyen, D. M., Chiu, Y.-T. H., & Le, H. D. (2021). Determinants of continuance intention towards banks' chatbot services in Vietnam: A necessity for sustainable development. Sustainability, 13(14), 7625. https://doi.org/10.3390/su13147625
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). McGraw-Hill Book Company.
Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2(2), 100033. https://doi.org/10.1016/j.caeai.2021.100033
Orrù, G., Piarulli, A., Conversano, C., & Gemignani, A. (2023). Human-like problem-solving abilities in large language models using ChatGPT. Frontiers in Artificial Intelligence, 6, 1199350. https://doi.org/10.3389/frai.2023.1199350
Paul, J., Ueno, A., & Dennis, C. (2023). ChatGPT and consumers: Benefits, pitfalls and future research agenda. Wiley Online Library.
Peres, R., Schreier, M., Schweidel, D., & Sorescu, A. (2023). On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. International Journal of Research in Marketing, 40(2), 269–275. https://doi.org/10.1016/j.ijresmar.2023.03.001
Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. International Journal of Contemporary Hospitality Management, 32(10), 3199–3226. https://doi.org/10.1108/IJCHM-04-2020-0259
Pillai, S. G., Kim, W. G., Haldorai, K., & Kim, H.-S. (2022). Online food delivery services and consumers' purchase intention: Integration of theory of planned behavior, theory of perceived risk, and the elaboration likelihood model. International Journal of Hospitality Management, 105(2), 103275. https://doi.org/10.1016/j.ijhm.2022.103275
Podsakoff, P. M., MacKenzie, M., Scott, B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
Prieto, S. A., Mengiste, E. T., & García de Soto, B. (2023). Investigating the use of ChatGPT for the scheduling of construction projects. Buildings, 13(4), 857. https://doi.org/10.3390/buildings13040857
Qin, C., Zhang, A., Zhang, Z., Chen, J., Yasunaga, M., & Yang, D. (2023). Is ChatGPT a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476.
Ra, S., Shrestha, U., Khatiwada, S., Yoon, S. W., & Kwon, K. (2019). The rise of technology and impact on skills. International Journal of Training Research, 17(Suppl 1), 26–40. https://doi.org/10.1080/14480220.2019.1629727
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
Rafiq, F., Dogra, N., Adil, M., & Wu, J.-Z. (2022). Examining consumer's intention to adopt AI-chatbots in tourism using partial least squares structural equation modeling method. Mathematics, 10(13), 2190. https://doi.org/10.3390/math10132190
Rapp, A., Curti, L., & Boldi, A. (2021). The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. International Journal of Human-Computer Studies, 151(3), 102630. https://doi.org/10.1016/j.ijhcs.2021.102630
Rijsdijk, S. A., Hultink, E. J., & Diamantopoulos, A. (2007). Product intelligence: Its conceptualization, measurement and impact on consumer satisfaction. Journal of the Academy of Marketing Science, 35(3), 340–356. https://doi.org/10.1007/s11747-007-0040-6
Rizun, M. (2017). Information systems for knowledge workers support. Informatyka Ekonomiczna, 4(46), 121–130. https://doi.org/10.15611/ie.2017.4.10
Ruggiero, T. E. (2000). Uses and gratifications theory in the 21st century. Mass Communication & Society, 3(1), 3–37. https://doi.org/10.1207/S15327825MCS0301_02
Russo, C., Madani, K., & Rinaldi, A. M. (2021). Knowledge acquisition and design using semantics and perception: A case study for autonomous robots. Neural Processing Letters, 53(5), 3153–3168. https://doi.org/10.1007/s11063-020-10311-x
Salah, M., Alhalbusi, H., Abdelfattah, F., & Ismail, M. M. (2023). Chatting with ChatGPT: Investigating the impact on psychological well-being and self-esteem with a focus on harmful stereotypes and job anxiety as moderator. https://doi.org/10.21203/rs.3.rs-2610655/v2
Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. https://doi.org/10.3390/healthcare11060887
Saunders, M. N. K., Lewis, P., & Thornhill, A. (2019). Research methods for business students. Pearson.
Seo, K., Tang, J., Roll, I., Fels, S., & Yoon, D. (2021). The impact of artificial intelligence on learner–instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18(1), 54. https://doi.org/10.1186/s41239-021-00292-9
Sharma, P., & Dash, B. (2023). Impact of big data analytics and ChatGPT on cybersecurity [Paper presentation]. 2023 4th International Conference on Computing and Communication Systems (I3CS). https://doi.org/10.1109/I3CS58314.2023.10127411
Shin, D.-H. (2011). Understanding e-book users: Uses and gratification expectancy model. New Media & Society, 13(2), 260–278. https://doi.org/10.1177/1461444810372163
Stafford, T. F., Stafford, M. R., & Schkade, L. L. (2004). Determining uses and gratifications for the Internet. Decision Sciences, 35(2), 259–288. https://doi.org/10.1111/j.00117315.2004.02524.x
Sundar, S. S., & Limperos, A. M. (2013). Uses and grats 2.0: New gratifications for new media. Journal of Broadcasting & Electronic Media, 57(4), 504–525. https://doi.org/10.1080/08838151.2013.845827
Tao, D. (2009). Intention to use and actual use of electronic information resources: Further exploring technology acceptance model (TAM). AMIA Annual Symposium Proceedings, 2009, 629–633.
Terblanche, N., & Kidd, M. (2022). Adoption factors and moderating effects of age and gender that influence the intention to use a non-directive reflective coaching chatbot. SAGE Open, 12(2), 215824402210961. https://doi.org/10.1177/21582440221096136
Uddin, S. J., Albert, A., Ovid, A., & Alsharef, A. (2023). Leveraging ChatGPT to aid construction hazard recognition and support safety education and training. Sustainability, 15(9), 7121. https://doi.org/10.3390/su15097121
Vázquez-Cano, E., Mengual-Andrés, S., & López-Meneses, E. (2021). Chatbot to improve learning punctuation in Spanish and to enhance open and flexible learning environments. International Journal of Educational Technology in Higher Education, 18(1), 33. https://doi.org/10.1186/s41239-021-00269-8
Venkatesh, V., Morris, M. G., & Ackerman, P. L. (2000). A longitudinal field investigation of gender differences in individual technology adoption decision-making processes. Organizational Behavior and Human Decision Processes, 83(1), 33–60. https://doi.org/10.1006/obhd.2000.2896
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Venkatesh, V., Thong, J. Y., & Xu, X. (2016). Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the Association for Information Systems, 17(5), 328–376. https://doi.org/10.17705/1jais.00428
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
Whiting, A., & Williams, D. (2013). Why people use social media: A uses and gratifications approach. Qualitative Market Research: An International Journal, 16(4), 362–369. https://doi.org/10.1108/QMR-06-2013-0041
Xie, C., Wang, Y., & Cheng, Y. (2022). Does artificial intelligence satisfy you? A meta-analysis of user gratification and user satisfaction with AI-powered chatbots. International Journal of Human–Computer Interaction, 1–11. https://doi.org/10.1080/10447318.2022.2121458
Xue, V. W., Lei, P., & Cho, W. C. (2023). The potential impact of ChatGPT in clinical and translational medicine. Clinical and Translational Medicine, 13(3), e1216. https://doi.org/10.1002/ctm2.1216
Yu, C.-E. (2020). Humanlike robots as employees in the hotel industry: Thematic content analysis of online reviews. Journal of Hospitality Marketing & Management, 29(1), 22–38. https://doi.org/10.1080/19368623.2019.1592733

About the authors

Hyeon Jo, a KAIST alumnus, PhD, leads research at HJ Technology and Management. His work spans industry 4.0, smart lighting, IT security, and web analytics, with multiple publications across various academic journals.

Do-Hyung Park received his doctoral degree in MIS from KAIST Business School. Currently, he is the professor at Graduate School of Business IT of Kookmin University. He is also in charge of Customer Experience Laboratory (CXLab.). His research interests are customer behavior, customer analytics, and experience design.
Appendix

Table A1. List of constructs and items.

Perceived Intelligence (Pillai and Sivathanu, 2020)
  PIE1. I believe that ChatGPT is competent.
  PIE2. I consider ChatGPT to be knowledgeable.
  PIE3. I perceive ChatGPT as intelligent.

Self-learning (Chen et al., 2022)
  SLN1. The ChatGPT's ability is enhanced through learning.
  SLN2. The ChatGPT can become better through learning.
  SLN3. The ChatGPT can learn from past experiences.

Information Support (Lee et al., 2022)
  ISP1. ChatGPT gives me suggestions and advice on problem-solving.
  ISP2. ChatGPT delivers information appropriate to my situation.
  ISP3. ChatGPT tells me where I can go to get help.

Knowledge Acquisition (Al-Sharafi et al., 2022)
  KAQ1. ChatGPT enables me to generate new knowledge based on my existing understanding.
  KAQ2. ChatGPT helps me acquire knowledge from various sources.
  KAQ3. ChatGPT aids in acquiring the knowledge that suits my needs.

Utilitarian Benefits (McLean and Osei-Frimpong, 2019)
  UTB1. I find ChatGPT useful in my daily life.
  UTB2. Using ChatGPT enables me to accomplish things more quickly.
  UTB3. Using ChatGPT enhances my productivity.

Intention To Use (Venkatesh et al., 2012; Davis, 1989)
  ITU1. I intend to use ChatGPT in the future.
  ITU2. I plan to use ChatGPT in the workplace.
  ITU3. I aim to continue to use ChatGPT frequently.

Actual Use (DeLone and McLean, 2003; Mishra and Shukla, 2020)
  AUS1. I use ChatGPT frequently.
  AUS2. I consider myself a frequent user.
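As a purely illustrative companion to Table A1, the sketch below shows one simple way the item codes could be aggregated into construct scores and screened for internal consistency in survey data. It is not the measurement-model assessment reported in the paper (which relies on PLS-SEM loadings, composite reliability, and AVE); the column names are assumed to follow the item codes above, and the data file is hypothetical.

```python
# Illustrative sketch: assumes a respondent-by-item data frame whose columns follow
# the Table A1 item codes (PIE1..PIE3, SLN1..SLN3, ...). Not the study's own analysis.
import pandas as pd

CONSTRUCTS = {
    "perceived_intelligence": ["PIE1", "PIE2", "PIE3"],
    "self_learning": ["SLN1", "SLN2", "SLN3"],
    "information_support": ["ISP1", "ISP2", "ISP3"],
    "knowledge_acquisition": ["KAQ1", "KAQ2", "KAQ3"],
    "utilitarian_benefits": ["UTB1", "UTB2", "UTB3"],
    "intention_to_use": ["ITU1", "ITU2", "ITU3"],
    "actual_use": ["AUS1", "AUS2"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classic Cronbach's alpha for a block of Likert items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def construct_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each construct's items into a single score per respondent."""
    scores = {name: responses[cols].mean(axis=1) for name, cols in CONSTRUCTS.items()}
    return pd.DataFrame(scores)

# Example usage with a hypothetical file of Likert responses:
# responses = pd.read_csv("survey_responses.csv")
# for name, cols in CONSTRUCTS.items():
#     print(name, round(cronbach_alpha(responses[cols]), 3))
# scores = construct_scores(responses)
```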
