
Electronic Commerce Research and Applications 55 (2022) 101199


Should the chatbot “save itself” or “be helped by others”? The influence of
service recovery types on consumer perceptions of recovery satisfaction
Mengmeng Song a, Jingzhe Du b, Xinyu Xing a, Jian Mou c,*

a College of Tourism, Hainan University, Haikou, China
b School of Economics and Management, Beijing Jiaotong University, Beijing, China
c School of Business, Pusan National University, Republic of Korea

Keywords: Service recovery; Chatbot self-recovery; Perceived value; Perceived privacy risk; Level of robot intelligence

A B S T R A C T

The application of artificial intelligence is considered essential to adapting to a new cycle of industrial transformation and technological advancement in many fields and industries. The extensive use of artificial intelligence technology is expected to improve the level and quality of services provided by the companies adopting it. In this study, we propose a novel approach to self-recovery by chatbot systems after service failures based on social response theory. Moreover, we explore differences in consumer perceptions of different service recovery types and their impact on recovery satisfaction, and discuss whether the intelligence of the computational agent also has an effect. We present the results of three scenario-based experiments, which demonstrate the positive effect of chatbot self-recovery on consumer satisfaction, and show the mediating paths of service recovery types in terms of perceived functional value and privacy risks as well as the boundary condition of the level of robot intelligence. This work expands the range of applications of chatbots in the service industry and provides a new framework for the governance of artificial intelligence.

1. Introduction

In recent years, the development and application of artificial intelligence (AI), machine learning, and natural language processing technology has motivated many organizations to employ chatbots to perform service tasks and facilitate transactions instead of human employees (Wamba et al., 2021). For example, companies such as Expedia, Taobao, Lego, and Facebook presently use chatbots to provide online services. Text-based chatbots operating over instant messaging platforms can reach more than 2.5 billion people (Sheehan et al., 2020). The chatbot market is expected to reach a value of $102.29 billion by 2026 (Huang and Dootson, 2022). On e-commerce platforms in China, chatbots primarily act as service agents for text communication with consumers. Therefore, in this study, we consider a chatbot that operates as a text-based software robot that can simulate interpersonal dialogue through natural language processing (Wirtz et al., 2018). The underlying algorithms of this technology create a typical dialogue outline and a text corpus of chats between customers and chatbots to train agents to realize such communication (Hasal et al., 2021). Compared with traditional self-service technology with only functional features, chatbots are equipped with additional social, emotional, and relational elements (Mozafari et al., 2022), which can enable them to engage in anthropomorphic conversations and provide consumers with better service experiences and a sense of interactivity.

Although the use of AI technology has increased substantially in the service industry, service failures remain relatively common in human–computer interaction (Lei et al., 2021; Lu et al., 2020; You et al., 2020). Michaud (2018) observed the application of chatbots and found that consumers' demands and expectations are not always simple and direct, and chatbots may fail to understand the information conveyed. The differences in the content and quality of customer conversations with chatbots compared to conversations with human employees can drive customers to adopt a negative attitude toward chatbots (Adam et al., 2021). If appropriate recovery measures are not performed to improve consumers' satisfaction, these service failures pose major obstacles for consumers to continue to use intelligent service applications. For example, the world's first AI hotel, the Henn na Hotel in Japan, has received a massive number of consumer complaints because the facility's AI and robotic systems were not able to handle service failures independently, resulting in a great deal of lost business and a negative reputation among consumers (Gale and Mochizuki, 2019).

* Corresponding author at: School of Business, Pusan National University, 2, Busandaehak-ro 63beon-gil, Geumjeong-gu, Busan 46241, Republic of Korea.
E-mail address: [email protected] (J. Mou).

https://doi.org/10.1016/j.elerap.2022.101199
Received 21 March 2022; Received in revised form 2 September 2022; Accepted 9 September 2022
Available online 13 September 2022
1567-4223/© 2022 Elsevier B.V. All rights reserved.

Outcomes of this nature should be avoided to prevent a series of serious consequences caused by service failures of computational service agents, such as customers canceling their services or abandoning the use of certain functions (Kuang et al., 2022), increased dissatisfaction with the platform (Hong and Williams, 2019), or negative effects on reputation through word-of-mouth (Crisafulli and Singh, 2017). Moreover, due to practical considerations such as cost and human labor requirements, chatbots must independently carry out proper self-recovery operations in a timely manner. Therefore, the types of service recovery that are most effective in improving consumer satisfaction after a service failure on the part of a chatbot should be investigated, starting from the initial recovery after the failure. Previous studies have classified service recovery and verified its effectiveness in terms of factors such as recovery cost (Boshoff, 1997), attribution of responsibility (Lee and Cranage, 2014), and consumer engagement (Bagherzadeh et al., 2020). However, in situations involving chatbot service failures, these classifications do not necessarily apply. In this study, we divide service recovery types into chatbot self-recovery vs human recovery from the perspective of service providers and explore their effectiveness with respect to consumer satisfaction with recovery processes. Owing to their convenience, accessibility, and ease of use, self-recovery by chatbots can save consumers the time and effort needed to wait in a service queue for a human operator, and can also avoid issues associated with limited soft skills or fatigue on the part of human employees, which can lead to mistakes in decision making (Babic et al., 2020). Similarly, AI-based chatbots can provide consumers with a positive experience through natural dialogue and interaction, improve the reliability and trustworthiness of organizations, and lead to increased profits for enterprises (Cheng and Jiang, 2020). Therefore, further research is needed on the value of chatbots in service recovery interactions.

Longoni and Cian (2020) showed that the operation of chatbots depends more on fact, rationality, logic, and overall cognitive evaluation than on other factors. Such computational agents provide continuous services and can be programmed to learn through the use of personal data (Um et al., 2020), so they are able to provide service recovery more accurately than humans and in a timely manner. However, poor communication during human interactions with chatbots can be expected to have a negative impact on the intention to use chatbots again, leading to increased consumer skepticism toward the technology in general. Research by Castelo et al. (2019) showed that consumers believe robots lack the emotional capabilities required to perform tasks involving subjectivity, intuition, and emotion. Hence, consumers generally prefer human employees to provide these services (de Kervenoael et al., 2020). In addition, although chatbots provide consumers with convenient services, their massive collection of information also poses a consumer privacy concern (Bouhia et al., 2022). Therefore, investigating how consumers perceive differences between chatbots and human employees participating in service recovery is important.

As a service provider, the intelligence level of a chatbot determines the range of responsibilities it can suitably undertake and the industries in which it can operate. A weak AI lacks the self-regulation abilities and modes of thinking of a human employee, and such systems can only replace humans in front-line work limited to performing many repetitive and simple tasks. In contrast, highly intelligent chatbots can enhance the experience of consumers during human–computer interaction. Computational systems designed along these lines can apply logic, recognize certain patterns to appear to understand facts about the world, and act in human-like ways, and they can perform autonomous learning to collaborate with humans (Huang and Rust, 2018). The level of intelligence of robots providing services can affect consumer perceptions and behaviors (Um et al., 2020). Therefore, levels of robot intelligence that more positively impact consumer attitudes and behaviors are worth exploring.

To solve the problem of robot service failure and determine recovery strategies that enterprises can adopt to improve consumer satisfaction, in the present work we focus on exploring the impact of different service recovery types (chatbot self-recovery vs human recovery) on consumer recovery satisfaction. Our primary research question provides a foundation for future research on robotics (Belanche et al., 2019), and addresses the core problem of this challenge. Additionally, we analyze in detail the differences in the perceived value (functional and experience values) and privacy risk of the two service recovery types. Finally, this study highlights the moderating role of the level of intelligence of robot agents on perceived value and privacy risk. As a complementary research question, we consider the mediators and moderators of the above relationship. We aim to further improve the consumer experience and enhance consumer recovery satisfaction by realizing an effective combination of online service platform chatbots and human employees. Moreover, this work clarifies the necessity of AI governance and data privacy protection, helps enterprises to deeply understand the value and risks of AI development, and provides suggestions for AI management and related decision-making in the enterprise.

The remainder of this study is structured as follows. Section 2 reviews related articles on chatbot self-recovery and human recovery interactions, perceived value, perceived privacy risk, and the level of robot intelligence. The hypotheses of this work are presented in Section 3. Section 4 describes the experiments and presents the results. Next, a summary of the findings, contributions, and implications is given in Section 5. Finally, in Section 6, we discuss the limitations of this work and suggest some possible avenues for future research.

2. Literature review

2.1. Service recovery and chatbot self-recovery

According to the definition of service failure provided by Chen and Tussyadiah (2021), chatbot service failures occur when the service performance of a chatbot does not meet consumer expectations. Consumers generally exhibit negative attitudes and lower repurchase intentions when confronted with service failure (Kuang et al., 2022). Therefore, research on AI needs to focus on timely recovery and consumer retention after service failures. Service recovery is a series of actions taken by service providers to change the dissatisfied state of consumers (Spreng et al., 1995). In fact, most studies have considered symbolic recovery (also known as mental or psychological recovery) and utilitarian recovery (also known as material or tangible recovery) to study the effects of service recovery (Miller et al., 2000; Roschk and Gelbrich, 2014; You et al., 2020). Symbolic service recovery (such as an apology, sympathy, or a promise) refers to a verbal admission of wrongdoing and the provision of social and psychological compensation to consumers (Bagozzi, 1975). Utilitarian service recovery refers to the provision of material compensation for consumers through economic expenditures such as compensation or discounts (Smith et al., 1999). On this basis, many researchers have used apologies and a series of compensation combinations as different recovery strategies to study consumer responses (Boshoff, 1997; Webster and Sundaram, 1998; Wirtz and Mattila, 2004), and largely found that service recovery works better when it actually makes up for consumers' losses (Roschk and Gelbrich, 2014). In addition, some studies have divided service recovery types according to the attribution of responsibility (Lee and Cranage, 2014), consumer participation (Bagherzadeh et al., 2020), recovery participants (Ho et al., 2020), and recovery information (Ho et al., 2020). In the context of chatbot service failures, Choi et al. (2021) discussed the effectiveness of robot recovery from the perspective of robot anthropomorphism and concluded that humanoids (but not nonhumanoids) can effectively apologize and explain the cause of a failure to enhance satisfaction, and verified the positive effect of human intervention for nonhumanoid robots recovering from inadequate situations. However, the differences between chatbot self-recovery and human recovery have received relatively little attention from the perspective of different service providers.

With the rapid advances in new technologies, AI plays an increasingly essential role in service recovery (Ho et al., 2020; Hu et al., 2021; Choi et al., 2021). Increasingly, enterprises are using chatbots to replace humans for many tasks in areas such as customer service and product delivery because of their advantages in terms of speed, consistency, complex data processing, and uninterrupted work capabilities (Leo and Huh, 2020). On the basis of previous studies on service recovery types, this study divides service recovery into chatbot self-recovery and human recovery from the perspective of service providers to focus on the differences between chatbots and human employees in service recovery. Chatbot self-recovery refers to instructing consumers how to deal with service failures through a technical interface and providing consumers with a solution path for basic service failures (Meuter et al., 2000). Human recovery generally recovers services by apologizing to consumers, answering consumer questions again, and offering refunds or compensation (Nikbin et al., 2016). In contrast, AI typically learns from the data it receives and acts according to a pre-set program (Lv et al., 2021). Thus, chatbots can improve services based on past interactions with users, continuously evolving their intelligence and providing human-like services (Marikyan et al., 2022). However, human employees are not always able to serve consumers one-on-one in a timely manner because of the detailed, trivial, and high-volume nature of service tasks. Currently, the service functionality of chatbots may be more easily recognized by consumers. Therefore, how chatbot self-recovery and human recovery can improve consumers' communication effectiveness and satisfaction after service failures is worth exploring.

2.2. Social response theory

In the context of human–computer interaction, when verbal and non-verbal anthropomorphic design cues are provided, such as chatting ability and a physical appearance or avatar, respectively, people respond socially to conversational agents (Adam et al., 2021). A chatbot is a virtual conversational agent that can communicate with consumers in real time through messaging interfaces and text. Simultaneously, its static orientation also has anthropomorphic characteristics. Hence, when consumers encounter chatbots, they typically treat them as social actors, using social rules to respond (Moon, 2003). In the literature on social response theory, by reviewing a series of experimental studies, Nass and Moon (2000) confirmed that individuals apply gender and racial stereotypes, politeness, and other behaviors involved in reacting to human beings to human–computer interaction, and proposed possible explanations for the causes of people's social reactions to computers. Based on this work, the applicability of the theory of social response to artificial agents in virtual embodiments has been widely verified and has become a focus of research. Holzwarth et al. (2006) pointed out that an avatar enhances the personification of technology and drives consumers to react to avatars as human beings. Therefore, avatars can positively influence the purchasing process along the lines of human sales agents. Similarly, Adam et al. (2021) found that, from the perspective of chatbot design, anthropomorphic cues can stimulate users' perception of social presence, thus making users more likely to comply with requests made by chatbots. In addition, Chattaraman et al. (2019) explored psychological phenomena such as individuality, politeness, and flattery when individuals interacted with virtual agents based on social response theory (Holzwarth et al., 2006; Nass and Moon, 2000). However, these works did not consider the key role of consumers' social response when chatbots perform service recovery. Therefore, in this study, we introduce social response theory to the scenario of chatbot recovery after a service failure, explore the differences in perception between chatbot self-recovery and human recovery, and consider the effect on recovery satisfaction. Chatbots have an undisputed advantage over human employees, given that they work 24/7 at a lower cost. However, compared with human employees, consumers have less experience interacting with chatbots, which leads to lower consumer comfort (Hasal et al., 2021). In addition, chatbot responses are based on algorithms, which are often not customized to meet subjective or unconventional customer needs (Huang and Dootson, 2022). The differences between chatbot and human contexts may lead to different perceptions of, and responses to, the recovery behaviors of the two types of service providers. Perceived functional value, perceived experience value, and perceived privacy risk are three important factors affecting consumer perception in human–computer interactions with chatbots. Therefore, in this study, we discuss the internal influence mechanism of recovery types on consumer satisfaction from these three perspectives.

2.3. Consumer perception

Perceived value refers to consumers' overall evaluation of the utility of the products (or services) they receive (Sweeney and Soutar, 2001), which is based on their perception of the products' or services' attributes, performance, and results (Yoo and Park, 2016). The study of perceived value has mainly focused on the influence of consumers' perceived value on their determinations of satisfaction and their behavior in the e-commerce environment (Yang et al., 2021; Yoo et al., 2010). In addition, the key role of perceived value in the field of AI has gradually been explored. Lalicic and Weismayer (2021) found that perceived consumer value can be used as a mediating variable between consumer experience and AI services. Previous studies have ignored the differences in service functionality and experience when chatbots and human employees perform service recovery. In this study, considering service function and nature in an online context, perceived value is divided into perceived functional value and perceived experience value. Perceived functional value describes the effectiveness and functionality of services provided to consumers, such as successful hotel reservations and information feedback. Perceived experience value describes the quality of experience in the service process, such as a sincere attitude or friendly service. Therefore, we consider perceived functional and experience values as part of the psychological perception of consumers during chatbot self-recovery and human recovery, and construct a key path to improving recovery satisfaction.

Privacy and security issues related to AI systems have attracted a great deal of attention as a topic of research (Shin et al., 2022c). Privacy risk is the extent to which consumers suffer from the potential disclosure of personal information (Vimalkumar et al., 2021). When consumers accept services provided by chatbots, they must provide sufficient personal information, which may violate their privacy and lead to misuse or sharing with unauthorized third parties (Cheng and Jiang, 2020). In addition, the lack of fairness, accountability, transparency, and explainability in AI operating algorithms may be expected to heighten consumers' privacy concerns (Shin et al., 2022b). Companies' ignorance of privacy issues may be expected to make consumers trust the companies that provide such products or services less (Martin, 2018). Moreover, some studies have found that perceived risk can trigger negative emotions in consumers and reduce service recovery satisfaction (Wei, 2021). This shows that consumers' perceived privacy risk has a negative impact on companies. Previous studies of perceived privacy risk focused on online contexts to explore its impact on consumer purchasing intentions (Chen and Huang, 2017) and acceptance of technology (Wang and Lin, 2017; Featherman et al., 2010). However, few researchers have explored consumers' perceived privacy risk with respect to chatbots versus human employees and its impact on subsequent attitudes and behaviors. Regarding the perceived privacy risk of chatbots, Cheng and Jiang (2020) unilaterally confirmed that the perceived privacy risk associated with chatbots reduces user satisfaction, without comparing chatbots and human employees. Therefore, it seems worthwhile to explore consumers' perceived privacy risk and satisfaction evaluation of chatbot self-recovery compared to human recovery.

2.4. Robot intelligence level

The level of intelligence of a robot measures its ability to achieve goals (Iantovics et al., 2018). Robots can perform tasks such as data curation as assistants to human employees. They can also become partners of human employees, processing large amounts of data quickly and effectively to enhance human intuition and creativity (Babic et al., 2021). The more capable the robot, the higher the level of intelligence perceived by consumers (Bartneck et al., 2009). Some researchers have argued that the randomness of robot behavior (Bartneck et al., 2009) and its level of logic (McCarthy, 2007) are important criteria for classifying the level of intelligence of a robotic system. Winfield (2017) explored levels of robot intelligence and proposed four types of intelligence: morphological, swarm, individual, and social. Huang and Rust (2018) proposed that the service quality of robots has gradually improved, evolving from mechanical intelligence to intuitive intelligence, and finally to emotional intelligence. Most previous studies have focused on the definition and classification of robot intelligence, and some researchers have developed a measurement scale of perceived intelligence, which has contributed to the progress of robot monitoring (Bartneck et al., 2009). Research on the functional evaluation and perceived value of robots with different levels of intelligence is somewhat lacking in the relevant literature. Hence, in the present work, we explore whether robots with different intelligence levels produce differences in consumers' perceived value and perceived privacy risk, in order to promote the suitability of AI in the service industry.

3. Research hypothesis

3.1. Service recovery types and recovery satisfaction

Miscommunication can damage public perception of chatbots. In response to service failures, enterprises tend to adopt appropriate service recoveries to improve consumer satisfaction (Min et al., 2021; Weitzl and Hutzinger, 2019). A study (Sheehan et al., 2020) found no perceptual difference between error-free chatbots and chatbots seeking clarification. In other words, it makes sense for companies to invest in chatbots for service delivery and self-recovery, because they can be expected to reduce labor costs and improve efficiency in the enterprise (Hu et al., 2021; Choi et al., 2020). Chatbot self-recovery refers to instructing consumers to deal with service failures through a computational agent's technical interface and providing consumers with a solution to handle basic service failures (Meuter et al., 2000). Human recovery is driven by emotions and provides consumers with an explanation as well as a flexible and differentiated recovery (Nikbin et al., 2016). In contrast to human employees, who use subjective thought to solve problems, chatbots handle service failures according to predetermined algorithms without human involvement (Tussyadiah, 2020; Ngai et al., 2021), and they can thus provide uninterrupted services to consumers.

Chatbots are as efficient as skilled workers and four times as efficient as less-experienced workers (Luo et al., 2019). Although chatbots are unable to perform customer service completely and independently, using chatbots substantially automates customer service tasks (Nishant et al., 2020), which helps companies scale. Moreover, chatbots do not experience frustration and fatigue as humans do, and they can communicate with consumers in a friendly manner, which avoids the loss of business due to poor service attitudes or slow responses (Luo et al., 2019). In addition, AI recommendations, which are based on fact, rationality, logic, and overall computational operation, are considered to be more capable of evaluating the utility values of attributes than human recommendation, and thus produce utility-centric recommendations (Longoni and Cian, 2020). Therefore, when service failures occur, the consistency and convenient service system of chatbots are preferred by consumers over human recovery, and consumers are hence more satisfied with chatbot self-recovery. Thus, we propose the following hypothesis.

H1. In the context of chatbot service failures, consumers are more satisfied with chatbot self-recovery than with human recovery.

3.2. Mediating effect of perceived value in service recovery types and recovery satisfaction

Service recovery is an action taken by firms to appease dissatisfied consumers (Kenney, 1995). In the context of service failures, the behavior of chatbot self-recovery is a huge advance in the AI revolution (Choi et al., 2021; Ho et al., 2020; Hu et al., 2021). According to social response theory, when people encounter a computer or software program as a displayed interface, voice system, or agent, they treat it as a social entity and often respond to the computer (Moon, 2000). Therefore, whether they are faced with chatbot self-recovery or human recovery, consumers will have specific perceptions and evaluations that vary with respect to different service providers (Hu et al., 2021). From the perspective of service function and nature, this study divides perceived value into perceived functional value and perceived experience value. Specifically, perceived functional value reflects consumers' psychological perceptions and evaluation of service function results after receiving service recovery, whereas perceived experience value represents consumers' value perception and psychological state during service contact and interaction processes.

From a technical point of view, machines exhibit obvious efficiency advantages in handling objective, procedural, and repetitive tasks. This efficiency advantage, which is based on computer programs and algorithms, is beyond the reach of human employees (Castelo et al., 2019). Kim et al. (2021) found that people have positive attitudes toward functional chatbots designed to perform repetitive tasks and improve work efficiency. Therefore, compared with human employees, chatbots perform better at task-based services and can make up for the losses caused by service failures more efficiently, increasing their perceived functional value to consumers. Yoo et al. (2010) proposed that consumers' value assessments affect their judgments of satisfaction; that is, when consumers perceive that a recovery achieves its functional goals, their satisfaction with the situation increases. Thus, the following hypothesis is proposed.

H2. In the context of chatbot service failures, the impact of service recovery type on recovery satisfaction is mediated by perceived functional value. That is, consumers assign a higher perceived functional value to chatbot self-recovery than to human recovery, which leads to higher consumer recovery satisfaction.

While chatbots seem capable of capturing consumers' attention, their natural language systems cannot yet mimic the full range of intelligent human conversation. From the perspective of service experience, adding the expression of emotion to robots is widely accepted as very challenging (Chiang et al., 2022). Emotion and experience are inherent characteristics of human beings, whereas the emotions expressed by chatbots (such as regret for a service failure) are only a programmed response (Hu et al., 2021), which cannot be considered social in nature (Belanche et al., 2020). Specifically, in human–computer interaction, when consumers mention subjective and unconventional content, chatbots cannot respond flexibly (Castelo et al., 2019). Compared with chatbots, human employees more accurately identify and understand consumers' needs during service recovery (de Kervenoael et al., 2020) and provide personalized feedback on their thoughts and emotions (Dabholkar and Bagozzi, 2002). In particular, human employees are able to show a sincere attitude based on empathy (de Kervenoael et al., 2020), which undoubtedly promotes a stronger perceived experience value for consumers in recovery.

The interactive experience and intimacy of consumers in the process of recovery is an important factor affecting consumer satisfaction (Chiang et al., 2022; Min et al., 2015). That is, the higher the perceived value experienced by consumers, the higher their recovery satisfaction.

Therefore, when faced with recovery involving humans, consumers generate a stronger perceived experience value based on the flexibility and subjectivity of human language and behavior. In contrast, when faced with chatbot self-recovery, consumers exhibit less perceived experience value and have lower recovery satisfaction. Thus, the following hypothesis is proposed.

H3. In the context of chatbot service failures, the impact of service recovery type on recovery satisfaction is mediated by perceived experience value. That is, consumers have a lower perceived experience value with chatbot self-recovery than with human recovery, which leads to lower consumer recovery satisfaction.

3.3. Mediating effect of perceived privacy risk in service recovery types and recovery satisfaction

The advancement of AI technology has made consumers realize that personal information not only brings convenience and value, but also potential risks. People are often conflicted in their use of new technology. On the one hand, they want to take advantage of the data processing power provided by chatbots; on the other, they want to keep the personal information involved confidential and remain in control (Bouhia et al., 2022). In service recovery, service providers inevitably have access to consumers' personal information or shopping preferences, and hence consumers are concerned that these shopping details may be stolen by third parties (Vimalkumar et al., 2021). According to social response theory, people tend to apply social rules and expectations to computers unconsciously (Nass and Moon, 2000); they relate to computers in the same way they relate to humans. That is, consumers have privacy concerns when encountering both robots and human employees. In fact, this perceived privacy risk is involved in the entire interaction process between customers and service providers. This study focuses on consumers' perceived privacy risk in the process of recovery after service failure. A previous study (Dwivedi et al., 2021) found that chatbots exacerbated individuals' privacy concerns about losing control of their data. Chatbots exhibit superior "memory" and data processing capabilities, and these data not only identify specific people, but also determine their geographic location and behavioral or psychological characteristics, and eventually track their daily schedules or habitual preferences (Fan et al., 2022). At the same time, chatbots learn from these data and optimize their automation by analyzing input data to make predictions. With the constant input of customer data, they can make more accurate predictions (Shin, 2021a). A large amount of data forms the main basis that enables chatbots to provide services. However, as these data are not always properly anonymized, consumers have no way of knowing how their personal information may be used and how algorithms work (Hasal et al., 2021; Shin et al., 2022b), and they logically worry about data theft and misuse by third parties (Kedra and Gossec, 2020). This means that chatbot data collection could raise privacy concerns. Human employees offer a limited service that makes it difficult for them to track various information about their customers and make predictions. They also have limited working time and energy, and hence encounter difficulty in memorizing and processing big data. As a result, consumers exhibit higher perceived privacy risks in chatbot self-recovery than in recovery with human employees. Previous studies have observed that the consumer perception of privacy risk has a negative impact on recovery satisfaction. Wei (2021) confirmed that consumers' perceived risk reduces their satisfaction. Therefore, we also argue that the perceived privacy risk posed to consumers by chatbot self-recovery reduces consumer recovery satisfaction. Thus, the following hypothesis is proposed.

H4. In the context of chatbot service failures, the impact of service recovery type on recovery satisfaction is mediated by perceived privacy risk. That is, consumers have a higher perceived privacy risk with chatbot self-recovery than with human recovery, which leads to lower consumer recovery satisfaction.

3.4. Moderating effect of the level of robot intelligence on chatbot self-recovery and perceived value

The level of a robot's intelligence is used to measure its ability to achieve goals (Iantovics et al., 2018). As chatbot technology becomes more sophisticated, it can advance from simply answering common questions to handling more complex and critical service requests (Mozafari et al., 2022). Different intelligence levels in a robot give consumers different capability perceptions (Bartneck et al., 2009). The greater the perception of the intelligence of a robot, the more people expect the robot to be effective and useful in providing the desired service (Moussawi et al., 2021). Huang and Rust (2018) defined AI with analytical intelligence as "weak AI," and AI with intuitive intelligence as "strong AI." Babic et al. (2021) believed that robots with different intelligence levels cooperate with human employees in different ways. Robots with high intelligence levels can not only serve as assistants to human employees, but can also supervise human employees and collaborate with humans. Studies have shown that chatbots can process users' emotional requests almost as well as human employees (Lee et al., 2020). Hence, this study argues that robots with a high level of intelligence can handle complex service failure situations and respond as if empathically to consumers. In contrast, robots with a low level of intelligence can only respond mechanically. As the level of a robot's intelligence improves, its abilities to analyze, deal with problems, and provide emotional and social services all increase.

In the context of service failures, robots with high intelligence can reliably respond to the service environment and complete complex task-based service recovery, enabling consumers to perceive a stronger functional value. At the same time, robots with high intelligence can also imitate humans to understand and respond to consumers' emotions, thus improving consumers' perceived experience value. Because robots do not participate in human recovery, the impact of robot intelligence is not significant there. Thus, the following hypotheses are proposed.

H5. In the context of chatbot service failures, intelligence level moderates the impact of chatbot self-recovery on perceived functional value. A higher intelligence level increases the perceived functional value generated by consumers in chatbot self-recovery. In human recovery, there is no significant difference.

H6. In the context of chatbot service failures, intelligence level moderates the impact of chatbot self-recovery on perceived experience value. A higher intelligence level increases the perceived experience value generated by consumers in chatbot self-recovery. In human recovery, there is no significant difference.

3.5. Moderating effect of the level of robot intelligence on chatbot self-recovery and perceived privacy risk

As their conversational capabilities have improved, chatbots have begun to request and process more and more sensitive personal information. Robots with high levels of intelligence can learn and make decisions autonomously, like humans (Pagallo, 2013). They have powerful databases and sensitive information-capturing functions, which can obtain data and information instantaneously and then compare millions of situations to make the best decision (Wirtz et al., 2018). Similarly, such robots are able to identify consumers and provide personalized services at scale with negligible marginal cost (Nishant et al., 2020). Thus, the service and recovery responses provided by robots are based on consumer information and data. The smarter the robot and the more information it has access to, the better it would be expected to perform. Consumers are worried that they will lose control of their personal information because AI may share the data it collects and stores with distant clouds, which is likely to threaten the privacy and security of consumers. Therefore, robots with a high level of intelligence increase a user's perceived privacy risk while providing a high-quality experience.

Fig. 1. Research model.

Robots do not participate in the recovery process of human employees, and hence the impact of robot intelligence levels on consumers' perceived privacy risk is not significant when human recovery occurs. Thus, the following hypothesis is proposed.

H7. In the context of chatbot service failures, intelligence level moderates the impact of chatbot self-recovery on perceived privacy risk. A higher intelligence level increases the perceived privacy risk generated by consumers in chatbot self-recovery. In human recovery, there is no significant difference.

4. Experiment design and analysis of results

Three scenario-based experiments were conducted to collect data and examine the above hypotheses (Fig. 1). Experiment 1 tested the influence of service recovery type (chatbot self-recovery vs human recovery) on consumer recovery satisfaction to verify H1. Experiment 2 assessed the mediating roles of perceived value and perceived privacy risk in the impact of service recovery type on consumer recovery satisfaction to verify H2, H3, and H4. Experiment 3 studied how the level of robot intelligence (high vs low) moderates the impact of service recovery type on perceived value and perceived privacy risk, to verify H5, H6, and H7. Subjects were recruited from sojump.com (one of the largest online survey companies in China). This platform is comparable to the Amazon Mechanical Turk portal. All participants in the survey received 1 yuan as a reward. Using sojump.com to collect data saves considerable time and manpower. At the same time, a majority of users of the platform are scientific research workers, which can ensure the quality of the questionnaire. However, this method also has some limitations. Online participants may not be able to experience the scene in an immersive way. Therefore, we set a situational immersion question to test the participants' immersion in the experimental situation (Table 1).

4.1. Experiment 1: Influence of service recovery types on consumer recovery satisfaction

4.1.1. Experimental method and process
Experiment 1 used a single-factor (service recovery type: chatbot self-recovery vs human recovery) between-subject design. The stimulus materials for the service recovery types were composed of two parts, including written materials (Table 2) and screenshots of a conversation with a consumer on an online hotel platform for chatbot self-recovery (Fig. 2) and for human recovery (Fig. 3). Subjects were randomly divided into two groups and were asked to read the corresponding scenario and answer questions about service recovery type, level of situational immersion, recovery satisfaction, and demographic information.

Young people are more familiar with the new high-tech form of robot service. Therefore, young people were selected as the main subjects of the investigation. In Experiment 1, a total of 107 valid questionnaires were collected, with 58.9 % coming from women. The majority of the subjects (72.0 %) were between the ages of 18 and 24. Furthermore, 72.0 % of the subjects had booked hotels online at least twice in the past year, 74.8 % had communicated with a chatbot at least once when booking hotels online, and 53.3 % of subjects had encountered chatbot service failures when booking hotels online.

4.1.2. Variable measurement
In terms of the manipulation test, the independent variable was measured using questions such as "I think the type of service recovery in the above situation belongs to" (1 = chatbot self-recovery, 5 = human recovery) and "I think the type of service agent who contributed to the service recovery in the above situation belongs to" (1 = chatbot service agent, 5 = human service agent). The level of situational immersion was tested with the statement "I can imagine myself as the main character of the above situation." The recovery satisfaction scale was adapted from the well-established four-item scale of Kim et al. (2009), for example, "Overall, I am satisfied with the service recovery I received." All measures used a 5-point Likert scale, with 1 = strongly disagree and 5 = strongly agree.

4.1.3. Data results and discussion

4.1.3.1. Manipulation test. A one-sample t-test was used to test the level of situational immersion, revealing that subjects had a high immersion in the pre-defined situation (M = 4.06, SD = 0.684; t = 15.960, p < 0.001). A one-way ANOVA showed a significant difference in the judgment of subjects on the service recovery type (F(1, 105) = 100.040, p < 0.001). Subjects in the chatbot self-recovery group thought that the service recovery type was more likely to be chatbot self-recovery (M = 1.750, SD = 1.140), whereas subjects in the human recovery group believed it was more likely to be human recovery (M = 4.142, SD = 1.328). These results show that the manipulation of the service recovery types was successful. Reliability tests were performed for the service recovery type (α = 0.934) and recovery satisfaction (α = 0.770) items. All exhibited high reliability.
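These manipulation checks rest on two standard tests. The following is a minimal Python sketch of the same computations; the data are simulated to match the reported means and standard deviations (the study's raw responses are not available), and comparing immersion against the 5-point scale midpoint of 3 is our assumption, since the paper does not state the reference value.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated placeholder responses, not the study's raw data.
immersion = rng.normal(4.06, 0.684, 107)    # situational immersion ratings
chatbot_grp = rng.normal(1.750, 1.140, 54)  # manipulation item, chatbot group
human_grp = rng.normal(4.142, 1.328, 53)    # manipulation item, human group

# One-sample t-test of immersion against the assumed scale midpoint.
t, p = stats.ttest_1samp(immersion, popmean=3)
print(f"immersion: t = {t:.3f}, p = {p:.4g}")

# One-way ANOVA on the manipulation item across the two groups;
# with two groups this is equivalent to an independent-samples t-test.
F, p = stats.f_oneway(chatbot_grp, human_grp)
print(f"recovery-type check: F(1, 105) = {F:.3f}, p = {p:.4g}")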

Table 1
Classification of service recovery type.

Bagherzadeh et al., 2020
Study focus: The process of customer participation in service failures preceding co-creation in service recovery.
Theory: Attribution theory; equity theory; expectation disconfirmation theory.
Key findings: When customers receive service recovery, their satisfaction increases. Joint service recovery has a greater positive impact on satisfaction than firm recovery or customer recovery alone.

Choi et al., 2021
Study focus: How do consumers react to robot service failures and recovery?
Theory: Social exchange theory; mental accounting theory.
Key findings: Humanoids (but not nonhumanoids) can effectively apologize and explain the cause of a failure to enhance satisfaction.

Ho et al., 2020
Study focus: Human employees, service robots, and other customers: does it matter who helps customers after a service failure?
Theory: Role congruity theory.
Key findings: When customers receive service recovery from other customers, they rate the service experience lower than a service recovery experience provided by the company (employees and service robots). Additionally, when firms offer instrumental recovery (vs informational recovery), it results in more favorable service evaluations. This impact does not exist when service recovery is provided by other customers.

Lee & Cranage, 2014
Study focus: The effect of negative online word-of-mouth on consumer buying behavior: the roles of opinion consensus and organizational response strategies.
Theory: Impression formation theory; attribution theory; information processing theory.
Key findings: When identification with negative word of mouth is low, potential consumers are less likely to develop negative attitudes when an organization offers both adaptive and external defensive responses in an apology. When identification with negative word of mouth is high, potential consumers are more likely to develop negative attitudes when the organization offers an apology combined with a defensive response than when the organization offers an apology combined with an adaptive response or no response.

Miller et al., 2000
Study focus: Service recovery: a framework and empirical investigation.
Theory: Previous literature.
Key findings: The success of service recovery depends on different measures of customer outcome, a mixture of psychological and tangible recovery efforts, and attention to operational details.

Roschk & Gelbrich, 2014
Study focus: Identifying appropriate compensation types for service failures.
Theory: Resource exchange theory.
Key findings: The strongest recovery effect mostly emerges when compensation represents a resource similar to the failure it is supposed to offset.

Wirtz & Mattila, 2004
Study focus: Consumer responses to compensation, speed of recovery, and apology after encountering service failure.
Theory: Justice theory.
Key findings: Compensation has no effect on satisfaction when recovery is immediate and an apology is given, or when no apology is given and the response is delayed. For joint recovery (delayed recovery and apology, or immediate recovery without apology), providing compensation is effective in increasing satisfaction.

You et al., 2020
Study focus: When and why appreciation is better than apology in redressing service failures.
Theory: Self-verification theory; sociometer theory; self-enhancement theory.
Key findings: For multiple service failures, appreciation is more effective than apology in consumer satisfaction after recovery, and verbal appreciation (vs apology) from service providers enhances the self-esteem of consumers, which leads to positive results such as satisfaction and word-of-mouth recommendation.

Table 2
Written materials of experiment 1.

Chatbot self-recovery: Please imagine that you now need to book a room in Hotel A through OTA. The chatbot soon assists you. However, in the process of communication, the chatbot encounters a program error, and you must wait in line to communicate with a human employee. Therefore, you choose to continue to communicate with the chatbot, and finally the chatbot completes the reservation through self-recovery.

Human recovery: Please imagine that you now need to book a room in Hotel A through OTA. The chatbot soon assists you. However, in the process of communication, the chatbot encounters a program error. Finally, you communicate with the human service agent after waiting in line and complete the reservation.

4.1.3.2. Hypothesis tests. We used a one-way ANOVA to test the influence of service recovery type on recovery satisfaction. The results showed that service recovery type had a significant impact on recovery satisfaction (F(1, 105) = 4.570, p = 0.002). Specifically, recovery satisfaction in the chatbot self-recovery group (M = 4.097, SD = 0.531) was significantly higher than that in the human recovery group (M = 3.788, SD = 0.450). Hence, H1 was verified.

Fig. 2. Screen-based materials of the chatbot self-recovery in Experiment 1.

Fig. 3. Screen-based materials of the human recovery in Experiment 1.

4.1.3.3. Discussion. Experiment 1 explored the effect of service recovery type on consumer recovery satisfaction. Compared with human recovery, consumers were more satisfied with chatbot self-recovery. H1 was supported. The next experiment focused on the internal mechanism of the impact of service recovery type on recovery satisfaction and further explored the mediating effects of perceived value and privacy risk.

Experiment 2: Mediating effects of perceived value and perceived privacy risk

Experiment 2 used a single-factor (chatbot self-recovery vs human recovery) between-subject design. The stimulus materials for the service recovery types were composed of two parts, including written materials (Table 3) and screenshots of a conversation with a consumer on an online shopping platform for chatbot self-recovery (Fig. 4) and for human recovery (Fig. 5). Subjects were randomly assigned to read one of the two scenarios and asked to answer questions separately. Furthermore, related question items were added to the variable measurement to investigate the mediating roles of perceived functional value and perceived experience value to verify H2–H4.

In Experiment 2, a total of 104 valid questionnaires were provided, and 64.4 % of the respondents were women. Among them, the majority of the subjects (84.6 %) were between the ages of 18 and 24. Furthermore, 98.1 % of the subjects had shopped online at least twice in the past year, 94.2 % of the subjects had communicated with a chatbot at least once when shopping online, and 81.7 % of subjects had encountered chatbot service failures when shopping online.

Table 3
Written materials of experiment 2.

Chatbot self-recovery: Please imagine that you are going to buy a pair of wireless Bluetooth headphones from a shopping platform. After placing the order, you find that the address is wrong, and you ask the service agent to change the address. The chatbot of this platform assists you. After you ask the question, the chatbot does not understand you and asks you to repeat the question in a different way. Soon after, the chatbot provides the order details and ways to modify them, and the modification is successful.

Human recovery: Please imagine that you are going to buy a pair of wireless Bluetooth headphones from a shopping platform. After placing the order, you find that the address is wrong, and you ask the service agent to change the address. The chatbot of this platform assists you. After you ask the question, the chatbot does not understand you. You must request assistance from a human service agent. After waiting, you successfully change the address through the human service agent.

4.1.4. Variable measurement
In Experiment 2, the independent variable and the measures of situational immersion and recovery satisfaction followed those of Experiment 1. In addition, the scale of perceived functional value followed that of Roig et al. (2006), which included four items such as "the service recovery provided by the service agent was effective." Moreover, the scale of perceived experience value followed that of Choi et al. (2020), which consisted of six items such as "the service agent was able to provide sympathy and care." The scale of perceived privacy risk followed that of Song et al. (2022), which consisted of statements such as "I'm worried about the misuse of my personal information."

4.1.5. Data results and discussion

4.1.5.1. Manipulation test. A one-sample t-test was used to test the level of situational immersion, revealing that subjects had a high immersion in the pre-defined situation (M = 3.950, SD = 0.928; t = 10.458, p < 0.001). A one-way ANOVA revealed a significant difference in the judgment of the subjects on the type of service recovery (F(1, 102) = 80.827, p < 0.001). Subjects in the chatbot self-recovery group expected that the service recovery type was more likely to be chatbot self-recovery (M = 2.157, SD = 1.076), whereas subjects in the human recovery group expected that it was more likely to be human recovery (M = 4.120, SD = 1.150). The results showed that the manipulation test of the service recovery types was successful. Internal consistency (Cronbach's α) tests were performed for perceived functional value (α = 0.866), perceived experience value (α = 0.919), perceived privacy risk (α = 0.866), and recovery satisfaction (α = 0.773), as shown in Appendix A. In addition, the Cronbach's α of service recovery type (α = 0.799) was also measured. The Cronbach's α values of all variables indicated high reliability. The independent variable was manipulated by the experiment scenarios; therefore, it was excluded from the following analysis, and data for the independent variable are not reported in Appendix A. Construct reliability tests were also performed. The average variance extracted (AVE) for the above variables was above 0.5 in all cases, and the composite reliabilities (CR) were above 0.7, showing good construct reliability. Moreover, the item loadings of the above variables were between 0.776 and 0.906, all greater than 0.5 (see Appendix A).
sympathy and care.” The scale of perceived privacy risk followed that of

Fig. 4. Screen-based materials of the chatbot self-recovery in Experiment 2.


Fig. 5. Screen-based materials of the human-involved recovery in Experiment 2.

Table 4
Decomposition of the mediating effect of perceived functional value.

                    Effect    BootSE    BootLLCI    BootULCI
Direct effects      −0.289    0.090     −0.467      −0.111
Mediating effects   −0.121    0.070     −0.280      −0.008
Total effects       −0.409    0.105     −0.618      −0.201

Table 5
Decomposition of the mediating effect of perceived privacy risk.

                    Effect    BootSE    BootLLCI    BootULCI
Direct effects      −0.270    0.104     −0.477      −0.063
Mediating effects   −0.139    0.068     −0.292      −0.029
Total effects       −0.409    0.105     −0.618      −0.201
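The Effect, BootSE, BootLLCI, and BootULCI columns in Tables 4 and 5 come from the percentile bootstrap that SPSS PROCESS (model 4) performs; an indirect effect is deemed significant when its bootstrap confidence interval excludes zero. A minimal numpy re-implementation of the indirect (mediating) effect, run on simulated data rather than the study's, illustrates where these quantities come from:

import numpy as np

def coef(X, y):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bootstrap_mediation(x, m, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        i = rng.integers(0, len(x), len(x))               # resample cases
        a = coef(x[i], m[i])[1]                           # path X -> M
        b = coef(np.column_stack([x[i], m[i]]), y[i])[2]  # path M -> Y given X
        boots.append(a * b)
    boots = np.asarray(boots)
    lo, hi = np.percentile(boots, [2.5, 97.5])            # BootLLCI, BootULCI
    return boots.mean(), boots.std(ddof=1), lo, hi        # Effect, BootSE, CI

# Simulated illustration with 104 cases, mirroring the sample size above.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 104).astype(float)     # recovery type (0/1)
m = -0.4 * x + rng.normal(0, 1, 104)          # mediator
y = 0.5 * m - 0.3 * x + rng.normal(0, 1, 104) # recovery satisfaction
print(bootstrap_mediation(x, m, y))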

4.1.5.2. Hypothesis tests. We used a one-way ANOVA to test the impact of service recovery type on recovery satisfaction. The results show that service recovery type had a significant impact on recovery satisfaction (F(1, 102) = 15.166, p < 0.001). Specifically, the recovery satisfaction of the chatbot self-recovery group (M = 3.819, SD = 0.519) was significantly higher than that of the human recovery group (M = 3.410, SD = 0.553), and H1 was again supported.

In Experiment 2, service recovery type was treated as the antecedent variable; perceived functional value, perceived experience value, and perceived privacy risk were each used as the mediating variables; and recovery satisfaction was the outcome variable. The mediating effects were analyzed using SPSS PROCESS (model 4). The data (Table 4) show that after adding perceived functional value as the mediating variable, the influence of service recovery type on recovery satisfaction remained significant (t = −3.217, p = 0.002), and service recovery type had a significant impact on perceived functional value (t = −2.070, p = 0.041). Compared with human recovery, chatbot self-recovery resulted in a higher perceived functional value. The impact of perceived functional value on recovery satisfaction was also significant (t = 6.709, p < 0.001): the higher the subjects' perceived functional value, the higher their recovery satisfaction. The mediation path of "service recovery type—perceived functional value—recovery satisfaction" was significant (indirect effect = −0.121, LLCI = −0.280, ULCI = −0.008), and perceived functional value acted as a partial mediator, thus supporting H2.

After adding perceived experience value as the mediating variable, the impact of service recovery type on recovery satisfaction remained significant (t = −4.363, p < 0.001). However, the influence of service recovery type on perceived experience value was not significant (t = 0.851, p = 0.397), and the indirect effect of "service recovery type—perceived experience value—recovery satisfaction" was not significant (indirect effect = 0.029, LLCI = −0.054, ULCI = 0.088). Hence, H3 was rejected.

As shown in Table 5, when perceived privacy risk was added as the mediating variable, the impact of service recovery type on recovery satisfaction remained significant (t = −2.590, p = 0.011), and the impact of service recovery type on perceived privacy risk was significant (t = 3.607, p = 0.001). Compared with chatbot self-recovery, human recovery caused subjects to perceive a higher privacy risk, which was not consistent with H4. The impact of perceived privacy risk on recovery satisfaction was also significant (t = −3.967, p < 0.001): the lower the subjects' perceived privacy risk, the higher their recovery satisfaction. The mediation path of "service recovery type—perceived privacy risk—recovery satisfaction" was significant (indirect effect = −0.139, LLCI = −0.292, ULCI = −0.029), and perceived privacy risk acted as a partial mediator. Hence, H4 was partially verified.
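For readers who want to see the mechanics behind Tables 4 and 5, the following Python sketch mirrors the logic of PROCESS model 4 with a percentile bootstrap: path a regresses the mediator on the recovery-type dummy, path b regresses the outcome on the mediator controlling for recovery type, and the indirect effect a × b is judged significant when its bootstrap interval excludes zero. The data and variable names below are synthetic placeholders; this is not the authors' code or data.

```python
# Illustrative sketch, not the authors' code: percentile-bootstrap mediation
# in the style of SPSS PROCESS model 4. x is a recovery-type dummy, m a
# mediator (e.g., perceived functional value), y recovery satisfaction.
import numpy as np

def ols_coefs(X, y):
    """Return OLS coefficients with the intercept first."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def mediation_bootstrap(x, m, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, boots = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample respondents
        a = ols_coefs(x[idx], m[idx])[1]               # path a: x -> m
        b = ols_coefs(np.column_stack([x[idx], m[idx]]), y[idx])[2]  # path b: m -> y | x
        boots[i] = a * b                               # indirect effect a*b
    direct = ols_coefs(np.column_stack([x, m]), y)[1]  # c': x -> y controlling m
    total = ols_coefs(x, y)[1]                         # c: x -> y
    llci, ulci = np.percentile(boots, [2.5, 97.5])     # 95 % bootstrap CI
    return {"direct": direct, "indirect": boots.mean(),
            "total": total, "boot_llci": llci, "boot_ulci": ulci}

# Synthetic demo data (the paper's raw data are not public).
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 104).astype(float)              # 0 = chatbot, 1 = human
m = 3.9 - 0.4 * x + rng.normal(0, 0.6, 104)            # mediator
y = 1.5 - 0.3 * x + 0.6 * m + rng.normal(0, 0.5, 104)  # outcome
print(mediation_bootstrap(x, m, y))                    # CI excluding 0 => significant
```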


4.1.5.3. Discussion. Experiment 2 verified the mediating effects of perceived functional value and perceived privacy risk. Moreover, the perceived functional value of consumers was higher and the perceived privacy risk was lower in chatbot self-recovery than in human recovery; hence, H2 was supported, H4 was partly supported, and H3 was rejected. On this basis, Experiment 3 examined the moderating effect of the level of robot intelligence on the influence of service recovery types on perceived functional value and perceived privacy risk to provide theoretical guidance for AI governance.

Experiment 3: Moderating role of robot intelligence level in the effect of service recovery type on perceived functional value and perceived privacy risk.

4.1.6. Experimental method and process

Experiment 3 used a 2 (service recovery: chatbot self-recovery vs human recovery) × 2 (level of robot intelligence: high vs low) between-subject design to test the moderating effect of the robot's level of intelligence on the impact of service recovery types on perceived functional value and perceived privacy risk. In Experiment 3, the stimulation materials were different from those of the first two experiments. One stimulus was written materials, as shown in Table 6. The other was screenshots that depicted four scenarios (Figs. 6–9). Subjects were randomly divided into four groups and were asked to read the corresponding scenario and answer questions about service recovery type, level of situational immersion, perceived functional value, perceived privacy risk, robot intelligence level, and demographic information.

In Experiment 3, a total of 258 valid questionnaires were provided, and 55 % of the respondents were women. Among them, the majority of the subjects (72.9 %) were between the ages of 18 and 30. Furthermore, 73.3 % of the subjects had booked tickets online at least twice in the past year, 69.4 % of the subjects had communicated with a chatbot at least once when booking tickets online, and 56.6 % of subjects had encountered chatbot service failures when booking tickets online.

Table 6
Written materials of experiment 3.

Low level of robot intelligence, chatbot self-recovery: Please imagine that you need to obtain a reimbursement voucher for your airline ticket from an online ticketing platform. You ask the service agent for details, and the chatbot of this platform assists you. After you ask the question, the chatbot does not understand you and asks you to repeat the question in a different way. After that, the chatbot soon provides a way to issue an invoice, and you follow the chatbot's instructions to successfully issue a reimbursement voucher.

Low level of robot intelligence, human recovery: Please imagine that you need to obtain a reimbursement voucher for your airline ticket from an online ticketing platform. You ask the service agent for details, and the chatbot of this platform assists you. After you ask the question, the chatbot does not understand you and asks you to repeat the question in a different way. Eventually, you choose to ask a human service agent for help, and successfully issue a reimbursement voucher under the guidance of the human service agent.

High level of robot intelligence, chatbot self-recovery: Please imagine that you need to obtain a reimbursement voucher for your airline ticket from an online ticketing platform. You ask the service agent for details, and the chatbot of this platform assists you. After you ask the question, the chatbot does not understand you, but offers guesses based on your needs. After that, you fill in the information under the chatbot's guidance and successfully issue a reimbursement voucher.

High level of robot intelligence, human recovery: Please imagine that you need to obtain a reimbursement voucher for your airline ticket from an online ticketing platform. You ask the service agent for details, and the chatbot of this platform assists you. After you ask the question, the chatbot does not understand you, but offers guesses based on your needs. Eventually, you choose to ask a human service agent for help, and successfully issue a reimbursement voucher under the guidance of the human service agent.

4.1.7. Variable measurement

In Experiment 3, the measures of the independent variable, level of situational immersion, perceived functional value, and perceived privacy risk were the same as those in Experiment 2. Because the results of Experiment 2 show that H3 was not verified, that is, the impact of service recovery type on perceived experience value was not significant, we did not explore the moderating effect of robot intelligence level on the impact of service recovery type on perceived experience value in Experiment 3. In addition, the scale of robot intelligence level followed the research of Bartneck et al. (2009), consisting of five items such as "the chatbot has the ability to solve problems" (see Appendix B).

4.1.8. Data results and discussion

4.1.8.1. Manipulation test. A one-sample t-test was used to test the level of situational immersion, revealing that the subjects had a high immersion in the pre-defined situation (M = 4.06, SD = 0.809; t = 21.014, p < 0.001). A one-way ANOVA showed a significant difference in the judgment of the subjects on the type of service recovery (F(1, 256) = 777.777, p < 0.001). Subjects in the chatbot self-recovery group thought that the service recovery type was more likely to be chatbot self-recovery (M = 1.562, SD = 0.682), whereas subjects in the human recovery group believed it was more likely to be human recovery (M = 4.342, SD = 0.892). Similarly, there was a significant difference in the judgment of the subjects on the level of intelligence of the robots (F(1, 256) = 70.208, p < 0.001). Subjects in the low robot intelligence group thought that the level of robot intelligence was more likely to be low (M = 2.732, SD = 0.851), whereas subjects in the high robot intelligence group believed it was more likely to be high (M = 3.136, SD = 0.894). These results showed that the manipulations of service recovery type and robot intelligence level were successful. The internal consistency (Cronbach's α) tests were performed for service recovery type (α = 0.900), perceived functional value (α = 0.919), perceived privacy risk (α = 0.924) and level of robot intelligence (α = 0.890), which are shown in Appendix B. In addition, the Cronbach's α of service recovery type (α = 0.799) was also measured. All Cronbach's α values exhibited high reliability. The independent variable was manipulated by the experiment scenarios; hence, it was excluded from the following analysis, and data for the independent variable are not reported in Appendix B. Construct reliability tests were also performed: the AVE and CR met the standards, showing good construct reliability. Besides, the item loadings of the above variables were greater than 0.5 (see Appendix B).

4.1.8.2. Hypothesis tests. In Experiment 3, service recovery type was designated as the independent variable; perceived functional value and perceived privacy risk were each used as the outcome variables; and the level of robot intelligence was the moderating variable. The moderating effect was analyzed using a two-way ANOVA. Fig. 10 depicts the perceived functional value of subjects for the different service recovery types at low and high robot intelligence levels. The interaction effect between service recovery type and robot intelligence level was significant (F(1, 254) = 6.374, p = 0.012). In the chatbot self-recovery condition, the robot intelligence level exhibited a significant effect on perceived functional value (Mlow = 3.08, Mhigh = 3.85, F(1, 120) = 16.951, p < 0.001). In the human recovery condition, the effect of robot intelligence level was not significant (Mlow = 1.91, Mhigh = 2.12, F(1, 134) = 2.979, p = 0.087). Thus, H5 was supported.
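The two-way ANOVA and simple-effects tests just reported can be outlined in code as follows. This is a sketch on synthetic stand-in data with hypothetical column names (recovery, intelligence, pfv), not the authors' analysis script; the interaction term plays the role of the reported F(1, 254) tests, and the within-condition comparison mirrors the simple-effects F-tests.

```python
# Illustrative sketch, not the authors' code: 2 x 2 ANOVA with a
# simple-effects follow-up, as used for H5 and H7.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

# Synthetic stand-in data (the paper's raw data are not public).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "recovery": np.repeat(["chatbot", "human"], 128),
    "intelligence": np.tile(np.repeat(["low", "high"], 64), 2),
})
df["pfv"] = rng.normal(3.0, 0.8, len(df)) + np.where(
    (df["recovery"] == "chatbot") & (df["intelligence"] == "high"), 0.8, 0.0)

# Two-way ANOVA: the interaction term corresponds to the reported F-test.
model = ols("pfv ~ C(recovery) * C(intelligence)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Simple effect of intelligence level within the chatbot self-recovery cells.
chatbot = df[df["recovery"] == "chatbot"]
print(stats.f_oneway(chatbot.loc[chatbot["intelligence"] == "low", "pfv"],
                     chatbot.loc[chatbot["intelligence"] == "high", "pfv"]))
```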


Fig. 6. Screen-based materials of the chatbot self-recovery and low robot intelligence level in Experiment 3.

In addition, Fig. 11 depicts the perceived privacy risk of subjects for the different service recovery types at low and high robot intelligence levels. The interaction effect between service recovery type and robot intelligence level was significant (F(1, 254) = 4.794, p = 0.029). In the chatbot self-recovery condition, the robot intelligence level had a significant effect on perceived privacy risk (Mlow = 2.68, Mhigh = 3.26, F(1, 120) = 9.672, p = 0.002). In the human recovery condition, the effect of robot intelligence level was not significant (Mlow = 3.85, Mhigh = 3.95, F(1, 134) = 0.703, p = 0.403). Hence, H7 was supported.

4.1.8.3. Discussion. Experiment 3 verified the moderating effect of the level of robot intelligence on the impact of service recovery types on perceived functional value and perceived privacy risk, supporting H5 and H7. In particular, the higher the level of robot intelligence, the stronger both the perceived functional value and the perceived privacy risk for consumers in chatbot self-recovery. Regardless of the level of robot intelligence, there was no significant difference in the perceived functional value and perceived privacy risk for consumers in human recovery. Because the impact of service recovery types on perceived experience value was not significant, H6 was rejected.

5. Conclusion and implications

5.1. Conclusion

Starting with the types of service recovery (chatbot self-recovery vs human recovery) and based on the perceived characteristics of human–computer interaction in the context of chatbot service failure, this research seeks to find a balance between chatbots and human employees in service recovery. The influence of service recovery types on recovery satisfaction was examined through three situational experiments, and the following conclusions were drawn. (1) Compared with human recovery, chatbot self-recovery enhances consumers' affirmation of services. The main reason for using AI as a replacement for human workers is to consider the benefits of automation (Sheehan et al., 2020). With the development of AI technology, robots can learn to handle various problems through continuous learning and feedback. Therefore, when service failure occurs, the form and result of chatbot-based service recovery can more effectively improve consumer recovery satisfaction. (2) The action mechanism of service recovery types includes two paths: perceived functional value and perceived privacy risk. On the one hand, the high-efficiency and high-quality services of chatbot self-recovery deepened consumers' positive perception of chatbot services. On the other hand, consumers exhibited lower privacy concerns because of the evidently mechanical nature of chatbots; by contrast, human employees are more likely to give away users' private information for their own gain (Song et al., 2022). In service failure, the type of recovery does not significantly impact consumers' perceived experience value, possibly because the developed chatbot language and responses were very similar to those of humans and the severity of the service failures was low; as a result, we found no significant difference between chatbots and humans in handling service failures. Therefore, in contrast to human recovery, chatbot self-recovery leads to a higher perceived functional value and lower perceived privacy risk among consumers, increasing recovery satisfaction. (3) Consumers' perception of chatbot self-recovery is affected by the level of robot intelligence. The higher the level of intelligence of the robot, the stronger the consumer's perceived functional value and perceived privacy risk associated with chatbot self-recovery. This conclusion demonstrates the important role of robot intelligence level in the process of chatbot design and application, and reveals its influence on the psychological mechanism of consumers in human–computer interaction. It also expands the research scope of AI governance. However, the level of robot intelligence has little effect on the perceived experience value of consumers with human recovery. During this process, consumers tend to generate psychological perceptions based on the responses and behaviors of the human


Fig. 7. Screen-based materials of the chatbot self-recovery and high robot intelligence level in Experiment 3.

Fig. 8. Screen-based materials of the human recovery and low robot intelligence level in Experiment 3.

employees, rather than focusing on the characteristics of the chatbots that have failed to serve them.

5.2. Theoretical contribution

In this study, we have focused on chatbot self-recovery as a novel type of recovery that advances the related research on recovery


Fig. 9. Screen-based materials of the human recovery and high robot intelligence level in Experiment 3.

mechanisms after chatbot service failures. Previous studies suggested that human recovery is the main source of service recovery (Ho et al.,
2020; Nikbin et al., 2016), but with the intensity and immediacy of
service tasks, human employees are not always able to respond to consumers in a timely, one-to-one manner. Currently, the exclusively
service-oriented nature of chatbots is more likely to be recognized by
consumers. In contrast to previous findings, this study demonstrates that
chatbot self-recovery is more likely to improve the recovery satisfaction
of consumers when chatbot service failures occur. This study consolidates the outstanding value of chatbot self-recovery and confirms the
advantages and strength of chatbots in performing service recovery
independently and in a timely manner.
Based on social response theory, we have explored the unique path of
different perceptions driven by human or chatbot recovery after chatbot
service failures, and integrated chatbot self-recovery strategies with
robot intelligence levels to construct a complete theoretical model.
Fig. 10. Effect of service recovery type and robot intelligence level on perceived functional value.

Previous studies mainly discussed the influence of anthropomorphic cues from chatbots on consumer emotions and interaction experiences
based on social response theory (Adam et al., 2021), and applied this
theory to the attribution of chatbot service failures (Belanche et al.,
2020). However, in this study, we use social response theory to explore how service recovery by chatbots and human employees affects consumer satisfaction with service recovery, which expands the scope of applications of social response theory in human–computer interaction. By tracing the causes of consumer satisfaction with service recovery through perceived value and perceived risk, it also supplements an important source of consumer satisfaction induced by AI. Moreover, the debate over how to determine the balance between chatbots and humans continues. We found that the services provided by chatbots are more efficient and functional than those rendered by human beings, with lower perceived privacy risks and higher recovery satisfaction. The research results lay a theoretical foundation for chatbot service recovery and promote research on consumer perceptions of chatbots.
Fig. 11. Effect of service recovery type and robot intelligence level on perceived privacy risk.

Robots of different intelligence levels may be expected to drive consumers to experience different human–computer interactions. Previous studies focused on the problem of collaboration between robot intelligence and humans (Babic et al., 2021). However, this study
introduced robot intelligence as a moderating variable in the service
failure study of robots, which enriches the boundary conditions for


service recovery after chatbot service failure. Previous studies have found that robots of different intelligence levels cooperate with human employees in different ways, and the smarter the robot, the better it is able to build trust with humans and quickly process large amounts of different data in the workplace in a consistent manner (Babic et al., 2021). We found that as robot intelligence increases, the robot's ability to solve problems increases and the consumer's perceived functional value during chatbot self-recovery increases, which is in line with the results of previous research. However, consumers also perceived the storage and analysis of large amounts of user data by robots with a high level of intelligence as a privacy risk (Venkatesh et al., 2021). These results provide theoretical support for further understanding the effect of AI attributes on consumer perceptions and privacy concerns, and establish the research value of robot intelligence level.

5.3. Management implications

This study provides guidance and reference for the arrangement and management of chatbots and human employees in the service industry, mitigating the insufficiency of only investing in human employees for recovery as in past approaches (Harrison-Walker, 2019), and provides a foundation for the further development of these technologies. When a chatbot service fails, the functional attributes of human–computer interaction should be fully utilized to provide consumers with efficient service recovery and reduce manual investment to quickly solve problems and obtain positive affirmations from consumers. This not only reduces the utilization rate of human employees and cuts labor costs, but also improves consumers' recovery satisfaction.

Enterprises should focus on the human–computer interaction experience, enhance the feedback speed and communication ability of chatbot employees, and mitigate consumer privacy concerns. When chatbot services fail, companies should make full use of the advantages of chatbots to enhance users' sense of control over the technology in the service interaction (Hsu et al., 2021; Meuter et al., 2000; Pitardi et al., 2022). Furthermore, to ensure that consumers' personal privacy is not threatened and to improve consumers' trust in chatbots (Shin, 2021b), it is necessary to formulate and guide the implementation of relevant policies and ethical principles for chatbots, to enhance the fairness, accountability, transparency, and explainability of chatbot governance (Shin, 2022; Shin et al., 2021), and to establish consumer privacy protection as an unshakable principle for chatbot services. In addition, considering the factor of robot intelligence level, chatbots carry lower perceived privacy risks than human employees at both levels of robot intelligence. Hence, in service interactions that often require consumers to disclose private information, enterprises should appropriately reduce the use of human employees and add chatbots.

Enterprises should understand the balance point of the level of robot intelligence and attempt to improve chatbots' abilities to effectively handle problems without causing excessive privacy concerns to better promote human–robot collaboration. According to the needs of different roles in human–robot collaboration, chatbots with different levels of intelligence can improve cooperation in human–robot collaboration, promote the continuous use of chatbots, and provide guidance and strategies for accurately remedying chatbot service failures to facilitate the coexistence of chatbots and human employees in the enterprise.

6. Limitation and future prospects

In this study, we have considered text-based chatbots, which are the most commonly implemented form on e-commerce platforms in China. However, voice-based chatbots (for example, Apple's Siri and Amazon's Alexa) have been developed to perform user tasks in the role of intelligent personal assistants (Ford and Palmer, 2019). Therefore, the effect of service recovery and customer response for voice-based chatbots should be considered in future research. Second, in the scenario used in this study, the severity of chatbot service failures was not high. In a more serious service failure context, whether the recovery satisfaction of chatbot self-recovery would remain higher than that of human recovery needs further verification. Third, as an emerging technology, in addition to their level of intelligence, chatbots have many other characteristics and attributes that may also have different effects on consumer perceptions, such as identity roles, levels of anthropomorphism, and whether the chatbot's identity is flawed. Finally, consumers with different values and personality traits have


different preferences for chatbots and human employees, which undoubtedly affects their perception of service providers. These values and personality traits include novelty-seeking (Um et al., 2020), cultural dimensions (Shin et al., 2022a), and implicit personality (Puzakova et al., 2013). In the future, the boundary conditions of the conclusions can also be extended by studying populations with different characteristics.

7. Author statement

Mengmeng Song and Xinyu Xing conceived the study and were responsible for the design, data collection and data analysis. Mengmeng Song, Jingzhe Du and Jian Mou were responsible for findings interpretation. Jian Mou also contributed to hypothesis and research model development, discussion and contributions, and helped with polishing the writing of the manuscript.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Acknowledgement

This work was supported by the Natural Science Foundation of China Project (No. 72062015; No. 620RC561) and the Hainan Province Key R&D Program (ZDYF2022GXJS007, ZDYF2022GXJS010).

Appendix

Table A

                                                                          PFV      PEV      PPR      RS       Item loadings
AVE                                                                       0.712    0.716    0.772    0.601
CR                                                                        0.908    0.938    0.910    0.857
Cronbach's α                                                              0.866    0.919    0.866    0.773
PFV1: The service recovery provided by the service agent is effective.    0.283   −0.018    0.011   −0.134   0.803
PFV2: The service quality of the service agent is acceptable.             0.292   −0.017    0.018   −0.116   0.854
PFV3: The results of the service received were as expected.               0.239   −0.005   −0.013   −0.023   0.846
PFV4: The results of the service recovery meet my needs.                  0.310   −0.026   −0.015   −0.143   0.870
PEV1: The service agent can give me sympathy and care.                   −0.121    0.199   −0.111    0.184   0.776
PEV2: The service agent seems to show a sincere interest in solving it.  −0.062    0.248   −0.111    0.031   0.906
PEV3: The service agent seems to provide friendly services.               0.034    0.229   −0.044   −0.135   0.887
PEV4: The service agent seems to provide pleasant service.                0.032    0.173    0.001   −0.041   0.802
PEV5: The service agent seems to make consumers feel comfortable.         0.056    0.212   −0.022   −0.128   0.875
PEV6: The service agent seems to communicate with me smoothly.           −0.013    0.193   −0.030   −0.006   0.823
PPR1: I'm worried about personal information being leaked and sold to third parties.   0.040   −0.077   0.405   −0.077   0.871
PPR2: I'm worried about the misuse of my personal information.           −0.028   −0.103    0.426    0.055   0.879
PPR3: I'm worried about personal information being obtained by unknown individuals and companies without authorization.   −0.022   −0.045   0.368   −0.008   0.886
RS1: Overall, I am satisfied with the service recovery I received.        0.042   −0.036    0.023    0.268   0.777
RS2: I am satisfied with the way service failure was resolved.           −0.012    0.015   −0.072    0.355   0.849
RS3: The service agent responded better than expected to the service failure.   0.084   0.022   0.008   0.147   0.710
RS4: I have a more positive attitude towards the shop now.               −0.170   −0.068    0.014    0.549   0.759

Note(s): PFV = Perceived functional value; PEV = Perceived experience value; PPR = Perceived privacy risk; RS = Recovery satisfaction.

Table B

                                                                          PFV      PPR      LRI      Item loadings
AVE                                                                       0.786    0.847    0.683
CR                                                                        0.936    0.943    0.915
Cronbach's α                                                              0.919    0.924    0.890
PFV1: The service recovery provided by the service agent is effective.    0.285    0.118   −0.051   0.814
PFV2: The service quality of the service agent is acceptable.             0.306    0.036   −0.043   0.935
PFV3: The results of the service received were as expected.               0.291    0.005   −0.033   0.912
PFV4: The results of the service recovery meet my needs.                  0.270   −0.029   −0.014   0.880
PPR1: I'm worried about personal information being leaked and sold to third parties.   0.037   0.352   −0.015   0.914
PPR2: I'm worried about the misuse of my personal information.            0.052    0.370   −0.033   0.948
PPR3: I'm worried about personal information being obtained by unknown individuals and companies without authorization.   0.040   0.351   −0.041   0.899
LRI1: The chatbot has the ability to solve problems.                     −0.007   −0.002    0.215   0.745
LRI2: The chatbot is very knowledgeable.                                 −0.049   −0.019    0.264   0.876
LRI3: The chatbot is smart.                                              −0.067   −0.051    0.271   0.873
LRI4: The chatbot is responsible.                                        −0.040   −0.020    0.244   0.815
LRI5: The chatbot can be sensitive to information.                       −0.003   −0.032    0.238   0.817

Note: LRI = Level of robot intelligence.

References

Adam, M., Wessel, M., Benlian, A., 2021. AI-based chatbots in customer service and their effects on user compliance. Electr. Markets 31 (2), 427–445.
Babic, B., Chen, D.L., Evgeniou, T., Fayard, A.L., 2021. The better way to onboard AI. Harvard Bus. Rev. 98 (4), 56–65.
Bagherzadeh, R., Rawal, M., Wei, S., Torres, J.L.S., 2020. The journey from customer participation in service failure to co-creation in service recovery. J. Retail. Consumer Services 54, 102058.
Bagozzi, R.P., 1975. Marketing as exchange. J. Market. 39 (4), 32–39.
Bartneck, C., Kulić, D., Croft, E., Zoghbi, S., 2009. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Social Robotics 1 (1), 71–81.
Belanche, D., Casaló, L.V., Flavián, C., Schepers, J., 2019. Service robot implementation: a theoretical framework and research agenda. Serv. Ind. J. 1–23.
Belanche, D., Casaló, L.V., Flavián, C., Schepers, J., 2020. Robots or frontline employees? Exploring customers' attributions of responsibility and stability after service failure or success. J. Service Manage. 31 (2), 267–289.
Boshoff, C., 1997. An experimental study of service recovery options. Int. J. Service Ind. Manage. 8 (2), 110–130.
Bouhia, M., Rajaobelina, L., PromTep, S., Arcand, M., Ricard, L., 2022. Drivers of privacy concerns when interacting with a chatbot in a customer service encounter. Int. J. Bank Market. (ahead-of-print).
Castelo, N., Bos, M.W., Lehmann, D.R., 2019. Task-dependent algorithm aversion. J. Mark. Res. 56 (5), 809–825.
Chattaraman, V., Kwon, W.S., Gilbert, J.E., Ross, K., 2019. Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Comput. Hum. Behav. 90, 315–330.
Chen, Y.S., Huang, S.Y., 2017. The effect of task-technology fit on purchase intention: The moderating role of perceived risks. J. Risk Res. 20 (11), 1418–1438.
Chen, Y., Tussyadiah, I.P., 2021. Service failure in peer-to-peer accommodation. Ann. Tourism Res. 88, 103156.
Cheng, Y., Jiang, H., 2020. How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use. J. Broadcast. Electr. Media 64 (4), 592–614.
Chiang, A.H., Trimi, S., Lo, Y.J., 2022. Emotion and service quality of anthropomorphic robots. Technol. Forecast. Soc. Chang. 177, 121550.
Choi, Y., Choi, M., Oh, M., Kim, S., 2020. Service robots in hotels: understanding the service quality perceptions of human-robot interaction. J. Hospitality Market. Manage. 29 (6), 613–635.
Choi, S., Mattila, A.S., Bolton, L.E., 2021. To err is human(-oid): how do consumers react to robot service failure and recovery? J. Service Res. 24 (3), 354–371.
Crisafulli, B., Singh, J., 2017. Service failures in e-retailing: Examining the effects of response time, compensation, and service criticality. Comput. Hum. Behav. 77, 413–424.
Dabholkar, P.A., Bagozzi, R.P., 2002. An attitudinal model of technology-based self-service: moderating effects of consumer traits and situational factors. J. Acad. Mark. Sci. 30 (3), 184–201.
de Kervenoael, R., Hasan, R., Schwob, A., Goh, E., 2020. Leveraging human-robot interaction in hospitality services: Incorporating the role of perceived value, empathy, and information sharing into visitors' intentions to use social robots. Tourism Manage. 78, 104042.
Dwivedi, Y.K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Williams, M.D., 2021. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manage. 57, 101994.
Fan, H., Han, B., Gao, W., 2022. (Im)balanced customer-oriented behaviors and AI chatbots' efficiency-flexibility performance: The moderating role of customers' rational choices. J. Retail. Consumer Serv. 66, 102937.
Featherman, M.S., Miyazaki, A.D., Sprott, D.E., 2010. Reducing online privacy risk to facilitate e-service adoption: the influence of perceived ease of use and corporate credibility. J. Serv. Mark. 24 (3), 219–229.
Ford, M., Palmer, W., 2019. Alexa, are you listening to me? An analysis of Alexa voice service network traffic. Pers. Ubiquit. Comput. 23 (1), 67–79.
Gale, A., Mochizuki, T., 2019. Robot hotel loses love for robots. Wall Street J.
Harrison-Walker, L.J., 2019. The critical role of customer forgiveness in successful service recovery. J. Bus. Res. 95, 376–391.
Hasal, M., Nowaková, J., Ahmed Saghair, K., Abdulla, H., Snášel, V., Ogiela, L., 2021. Chatbots: Security, privacy, data protection, and social aspects. Concurr. Comput.: Practice Experience 33 (19), e6426.
Ho, T.H., Tojib, D., Tsarenko, Y., 2020. Human staff vs. service robot vs. fellow customer: Does it matter who helps your customer following a service failure incident? Int. J. Hospitality Manage. 87, 102501.
Holzwarth, M., Janiszewski, C., Neumann, M.M., 2006. The influence of avatars on online consumer shopping behavior. J. Market. 70 (4), 19–36.
Hong, J.W., Williams, D., 2019. Racism, responsibility and autonomy in HCI: Testing perceptions of an AI agent. Comput. Hum. Behav. 100, 79–84.
Hsu, P.F., Nguyen, T.K., Huang, J.Y., 2021. Value co-creation and co-destruction in self-service technology: A customer's perspective. Electron. Commer. Res. Appl. 46, 101029.
Hu, Y., Min, H., Su, N., 2021. How sincere is an apology? Recovery satisfaction in a robot service failure context. J. Hospit. Tourism Res. 45 (6), 1022–1043.
Huang, Y.S.S., Dootson, P., 2022. Chatbots and service failure: When does it lead to customer aggression. J. Retail. Consumer Services 68, 103044.


Huang, M.H., Rust, R.T., 2018. Artificial intelligence in service. J. Service Res. 21 (2), 155–172.
Iantovics, L.B., Rotar, C., Niazi, M.A., 2018. MetrIntPair—a novel accurate metric for the comparison of two cooperative multiagent systems intelligence based on paired intelligence measurements. Int. J. Intell. Syst. 33 (3), 463–486.
Kedra, J., Gossec, L., 2020. Big Data and artificial intelligence: Will they change our practice? Joint Bone Spine 87 (2), 107–109.
Kim, T.T., Kim, W.G., Kim, H.B., 2009. The effects of perceived justice on recovery satisfaction, trust, word-of-mouth, and revisit intention in upscale hotels. Tourism Manage. 30 (1), 51–62.
Kim, J., Merrill Jr, K., Collins, C., 2021. AI as a friend or assistant: The mediating role of perceived usefulness in social AI vs. functional AI. Telematics Inform. 64, 101694.
Kuang, D., Ma, B., Wang, H., 2022. The relative impact of advertising and referral reward programs on the post-consumption evaluations in the context of service failure. J. Retail. Consumer Services 65, 102849.
Lalicic, L., Weismayer, C., 2021. Consumers' reasons and perceived value co-creation of using artificial intelligence-enabled travel service agents. J. Bus. Res. 129, 891–901.
Lee, M.C., Chiang, S.Y., Yeh, S.C., Wen, T.F., 2020. Study on emotion recognition and companion Chatbot using deep neural network. Multimedia Tools Appl. 79 (27), 19629–19657.
Lee, C.H., Cranage, D.A., 2014. Toward understanding consumer processing of negative online word-of-mouth communication: the roles of opinion consensus and organizational response strategies. J. Hospit. Tourism Res. 38 (3), 330–360.
Lei, S.I., Shen, H., Ye, S., 2021. A comparison between chatbot and human service: customer perception and reuse intention. Int. J. Contemp. Hospitality Manage. 8 (5), 12–14.
Leo, X., Huh, Y.E., 2020. Who gets the blame for service failures? Attribution of responsibility toward robot versus human service providers and service firms. Comput. Hum. Behav. 113, 106520.
Longoni, C., Cian, L., 2020. Artificial intelligence in utilitarian vs. hedonic contexts: The "word-of-machine" effect. J. Market. 86 (1), 91–108.
Lu, V.N., Wirtz, J., Kunz, W.H., Paluch, S., Gruber, T., Martins, A., Patterson, P.G., 2020. Service robots, customers and service employees: what can we learn from the academic literature and where are the gaps? J. Service Theory Practice 30 (3), 361–391.
Luo, X., Tong, S., Fang, Z., Qu, Z., 2019. Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Sci. 38 (6), 937–947.
Lv, X., Liu, Y., Luo, J., Liu, Y., Li, C., 2021. Does a cute artificial intelligence assistant soften the blow? The impact of cuteness on customer tolerance of assistant service failure. Ann. Tourism Res. 87, 103114.
Marikyan, D., Papagiannidis, S., Rana, O.F., Ranjan, R., Morgan, G., 2022. "Alexa, let's talk about my productivity": The impact of digital assistants on work productivity. J. Bus. Res. 142, 572–584.
Martin, K., 2018. The penalty for privacy violations: How privacy violations impact trust online. J. Bus. Res. 82, 103–116.
McCarthy, J., 2007. From here to human-level AI. Artif. Intell. 171 (18), 1174–1182.
Meuter, M.L., Ostrom, A.L., Roundtree, R.I., Bitner, M.J., 2000. Self-service technologies: understanding customer satisfaction with technology-based service encounters. J. Market. 64 (3), 50–64.
Michaud, L.N., 2018. Observations of a new chatbot: drawing conclusions from early interactions with users. IT Prof. 20 (5), 40–47.
Miller, J.L., Craighead, C.W., Karwan, K.R., 2000. Service recovery: a framework and empirical investigation. J. Oper. Manage. 18 (4), 387–400.
Min, K.S., Jung, J.M., Ryu, K., 2021. Listen to their heart: Why does active listening enhance customer satisfaction after a service failure? Int. J. Hospitality Manage. 96, 102956.
Min, H., Lim, Y., Magnini, V.P., 2015. Factors affecting customer satisfaction in responses to negative online hotel reviews: The impact of empathy, paraphrasing, and speed. Cornell Hospitality Quart. 56 (2), 223–231.
Moon, Y., 2000. Intimate exchanges: Using computers to elicit self-disclosure from consumers. J. Consumer Res. 26 (4), 323–339.
Moon, Y., 2003. Don't blame the computer: When self-disclosure moderates the self-serving bias. J. Consumer Psychol. 13 (1–2), 125–137.
Moussawi, S., Koufaris, M., Benbunan-Fich, R., 2021. How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets 31 (2), 343–364.
Mozafari, N., Weiger, W.H., Hammerschmidt, M., 2022. Trust me, I'm a bot – repercussions of chatbot disclosure in different service frontline settings. J. Service Manage. 33 (2), 221–245.
Nass, C., Moon, Y., 2000. Machines and mindlessness: Social responses to computers. J. Social Issues 56 (1), 81–103.
Ngai, E.W., Lee, M.C., Luo, M., Chan, P.S., Liang, T., 2021. An intelligent knowledge-based chatbot for customer service. Electron. Commer. Res. Appl. 50, 101098.
Nikbin, D., Marimuthu, M., Hyun, S.S., 2016. Influence of perceived service fairness on relationship quality and switching intention: An empirical study of restaurant experiences. Curr. Issues Tourism 19 (10), 1005–1026.
Nishant, R., Kennedy, M., Corbett, J., 2020. Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. Int. J. Inf. Manage. 53, 102104.
Pagallo, U., 2013. Robots in the cloud with privacy: A new threat to data protection? Comp. Law Security Rev. 29 (5), 501–508.
Pitardi, V., Bartikowski, B., Osburg, V.S., Yoganathan, V., 2022. Effects of gender congruity in human-robot service interactions: The moderating role of masculinity. Int. J. Inf. Manage. 102489.
Puzakova, M., Kwak, H., Rocereto, J.F., 2013. When humanizing brands goes wrong: The detrimental effect of brand anthropomorphization amid product wrongdoings. J. Market. 77 (3), 81–100.
Roig, J.C.F., Garcia, J.S., Tena, M.A.M., Monzonis, J.L., 2006. Customer perceived value in banking services. Int. J. Bank Market. 24 (5), 266–283.
Roschk, H., Gelbrich, K., 2014. Identifying appropriate compensation types for service failures: A meta-analytic and experimental analysis. J. Service Res. 17 (2), 195–211.
Sheehan, B., Jin, H.S., Gottlieb, U., 2020. Customer service chatbots: Anthropomorphism and adoption. J. Bus. Res. 115, 14–24.
Shin, D., 2021a. Embodying algorithms, enactive artificial intelligence and the extended cognition: You can see as much as you know about algorithm. J. Inf. Sci. 0165551520985495.
Shin, D., 2021b. The perception of humanness in conversational journalism: An algorithmic information-processing perspective. New Media Soc. 1461444821993801.
Shin, D., 2022. How do people judge the credibility of algorithmic sources? AI Soc. 37 (1), 81–96.
Shin, D., Rasul, A., Fotiadis, A., 2021. Why am I seeing this? Deconstructing algorithm literacy through the lens of users. Internet Res.
Shin, D., Chotiyaputta, V., Zaid, B., 2022a. The effects of cultural dimensions on algorithmic news: how do cultural value orientations affect how people perceive algorithms? Comput. Hum. Behav. 126, 107007.
Shin, D., Kee, K.F., Shin, E.Y., 2022b. Algorithm awareness: Why user awareness is critical for personal privacy in the adoption of algorithmic platforms? Int. J. Inf. Manage. 65, 102494.
Shin, D., Zaid, B., Biocca, F., Rasul, A., 2022c. In platforms we trust? Unlocking the black-box of news algorithms through interpretable AI. J. Broadcast. Electr. Media 1–22.
Smith, A.K., Bolton, R.N., Wagner, J., 1999. A model of customer satisfaction with service encounters involving failure and recovery. J. Mark. Res. 36 (3), 356–372.
Song, M., Xing, X., Duan, Y., Cohen, J., Mou, J., 2022. Will artificial intelligence replace human customer service? The impact of communication quality and privacy risks on adoption intention. J. Retail. Consumer Services 66, 102900.
Spreng, R.A., Harrell, G.D., Mackoy, R.D., 1995. Service recovery: Impact on satisfaction and intentions. J. Serv. Mark. 9 (1), 15–23.
Sweeney, J.C., Soutar, G.N., 2001. Consumer perceived value: The development of a multiple item scale. J. Retail. 77 (2), 203–220.
Tussyadiah, I., 2020. A review of research into automation in tourism: Launching the annals of tourism research curated collection on artificial intelligence and robotics in tourism. Ann. Tourism Res. 81, 102883.
Um, T., Kim, T., Chung, N., 2020. How does an intelligence chatbot affect customers compared with self-service technology for sustainable services? Sustainability 12 (12), 5119.
Venkatesh, V., Hoehle, H., Aloysius, J.A., Nikkhah, H.R., 2021. Being at the cutting edge of online shopping: Role of recommendations and discounts on privacy perceptions. Comput. Hum. Behav. 121, 106785.
Vimalkumar, M., Sharma, S.K., Singh, J.B., Dwivedi, Y.K., 2021. 'Okay Google, what about my privacy?': User's privacy perceptions and acceptance of voice based digital assistants. Comput. Hum. Behav. 120, 106763.
Wamba, S.F., Bawack, R.E., Guthrie, C., Queiroz, M.M., Carillo, K.D.A., 2021. Are we preparing for a good AI society? A bibliometric review and research agenda. Technol. Forecast. Soc. Chang. 164, 120482.
Wang, E.S.T., Lin, R.L., 2017. Perceived quality factors of location-based apps on trust, perceived privacy risk, and continuous usage intention. Behav. Inf. Technol. 36 (1), 2–10.
Webster, C., Sundaram, D.S., 1998. Service consumption criticality in failure recovery. J. Bus. Res. 41 (2), 153–159.
Wei, J., 2021. The impacts of perceived risk and negative emotions on the service recovery effect for online travel agencies: the moderating role of corporate reputation. Front. Psychol. 12.
Weitzl, W.J., Hutzinger, C., 2019. Rise and fall of complainants' desires: The role of pre-failure brand commitment and online service recovery satisfaction. Comput. Hum. Behav. 97, 116–129.
Winfield, A.F., 2017. How intelligent is your intelligent robot? arXiv preprint arXiv:1712.08878.
Wirtz, J., Mattila, A.S., 2004. Consumer responses to compensation, speed of recovery and apology after a service failure. Int. J. Service Ind. Manage. 15 (2), 150–166.
Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S., Martins, A., 2018. Brave new world: service robots in the frontline. J. Service Manage. 29 (5), 907–931.
Yang, F., Tang, J., Men, J., Zheng, X., 2021. Consumer perceived value and impulse buying behavior on mobile commerce: The moderating effect of social influence. J. Retail. Consumer Services 63, 102683.
Yoo, W.S., Lee, Y., Park, J., 2010. The role of interactivity in e-tailing: Creating value and increasing satisfaction. J. Retail. Consumer Services 17 (2), 89–96.
Yoo, J., Park, M., 2016. The effects of e-mass customization on consumer perceived value, satisfaction, and loyalty toward luxury brands. J. Bus. Res. 69 (12), 5775–5784.
You, Y., Yang, X., Wang, L., Deng, X., 2020. When and why saying "thank you" is better than saying "sorry" in redressing service failures: the role of self-esteem. J. Market. 84 (2), 133–150.

