Journal of Business Research: Grazia Murtarelli, Anne Gregory, Stefania Romenti
ARTICLE INFO

Keywords: Artificial intelligence; Chatbot; Ethical challenges; Online conversations; Conversation management

ABSTRACT

The use of chatbots to manage online interactions with consumers poses additional ethical challenges linked to the use of artificial intelligence (AI) applications and opens up new ethical avenues for investigation. A literature analysis identifies a research gap regarding the ethical challenges related to chatbots as non-moral and non-independent agents managing non-real conversations with consumers. It raises concerns about the ethical implications related to the progressive automation of online conversational processes and their integration with AI. The conversational approach has been explored in the organisational and management literature, which has analysed the features and roles of conversations in managing interactions ethically. This study aims to discuss conceptually the ethical challenges related to chatbots within the marketplace by integrating the current chatbot-based literature with that on conversation management studies. A new conceptual model is proposed which embraces ethical considerations in the future development of chatbots.
* Corresponding author at: Università IULM, Via Carlo Bo, 8, 20143 Milan, Italy.
E-mail addresses: [email protected] (G. Murtarelli), [email protected] (A. Gregory), [email protected] (S. Romenti).
https://doi.org/10.1016/j.jbusres.2020.09.018
Received 7 March 2019; Received in revised form 5 September 2020; Accepted 8 September 2020
0148-2963/© 2020 Elsevier Inc. All rights reserved.

1. Introduction

One of the most recent and increasingly popular artificial intelligence (AI)-based applications relates to the transformation of customer service interactions using chatbots, which provide support to organisations in managing customer service experiences. Considered as 'machine conversation system[s] [that] interact with human users via natural conversational language' (Shawar & Atwell, 2005, p. 489), chatbots are increasingly integrated within social network organisational accounts as Customer Relationship Management (CRM) tools (DiSilvestro, 2018). Amazon's Alexa and Apple's Siri are two examples of such advancements in machine–human interactions, and it is expected that they will proliferate (Marr, 2018). A 2018 Gartner survey predicts that 84% of organisations will increase investments in this customer experience technology and that 25% of customer services will integrate chatbots by 2020 (Gartner, 2018).

The reasons for this proliferation are visible and palpable to professionals: chatbots help organisations automate and aggregate human data at a large scale in order to explore and understand consumers' behavioural patterns, to rethink and optimise procedures and activities, and to manage decision-making processes effectively (Leonardi & Treem, 2012; Lepri, Oliver, Letouzé, Pentland, & Vinck, 2018). Along with the objective advantages of using chatbots for managing organisation–user interactions come specific ethical challenges linked to their features, which open new ethical avenues that need to be investigated.

Because chatbots increasingly integrate AI mechanisms (such as game theory, data and opinion mining, and optimization techniques) in online social networking sites, they comply with the rules and dynamics of online social networks. These are characterised by real multiactor-based conversations that require technical resources, specific knowledge and communication abilities in order to nurture online interactions. Modern chatbots are characterised by conversational interfaces that make them increasingly able to simulate human conversations, to such an extent that customers may well not realize that they are talking to a chatbot rather than a human services assistant. Furthermore, even if they do realize they are speaking to an automated agent, because chatbots display human conversational behaviours, they encourage, even entice, customers to engage with them in a reciprocal human manner, treating interactions as actual conversations rather than what the authors prefer to call them, para-conversations. In sum, a particular issue with chatbots is that they can be mistaken as human if their robotic nature is not disclosed: their simulation of the human voice is now so accurate that they are susceptible to greater
anthropomorphism by humans than are their unvoiced non-human associates.

The conversational process is commonly used by organisations for creating agreements, facilitating decision-making processes and improving the quality of interactions. Conversations are the means used by organisations to gain cooperation and collaboration based on a level of trust, even if that trust is based on a strictly transactional understanding, for example, that a product will be exchanged for a financial token. Within the chatbot-based environment, conversations play an additional role, as they are used as a means to collect information and integrate data. The human-like qualities of chatbots are such that, as in human conversations that are friendly and helpful, they encourage digital users to trust them not only about the transaction that they are undertaking, but to reveal additional personal and other information, which is subsequently permanently stored, accessible and integrated so that a wider-ranging picture of individual consumers can be developed over time. Moreover, this individual data is aggregated with the data of other consumers so that comprehensive profiles of consumers can be obtained. This information is typically not shared with the consumer.

Combined with predictive analytics, chatbots have the capacity to generate significant information asymmetry, a feature that frequently characterises human–machine interactions. Information asymmetry is based on the information disparities characterising those who join online interactions, and it can have potentially damaging consequences. The party with less information, or with less well organised, analysed and aggregated information, may not make fully informed choices, or may have made different choices if they had had the same information as the other party in the exchange. At worst, a lack of information may have fraudulent consequences: for example, a bot may be running an investment scheme which involves money laundering.

The use of chatbot-based conversations within the marketplace therefore raises potential ethical issues involving customers. If chatbots have large amounts of information at their disposal, and they are able to see patterns that provide additional intelligence that is advantageous to their owners, they could be programmed to use such data for manipulating users' perceptions about a specific issue, such as first evaluations of products or services. This potential imbalance in informative power could increase the risks for consumers who interact with organisations via chatbots.

Crucially, chatbots are not moral and independent agents endowed with moral reasoning capabilities. Chatbots can enhance customer experiences and improve interactions with clients on the basis of data. For example, they improve organisations' ability to provide personalised customer experiences tailored to customers' needs, based on pieces of information that are autonomously collected during human–machine interactions (Marr, 2018). However, even if organisations enhance the personability of chatbots by providing user-friendly software, chatbots lack human qualities such as judgement, empathy and discretion, so they will not detect instances when actions may and should be changed because of these factors. They make decisions, not judgements, and those decisions are based on algorithms which are calibrated in ways that will benefit the algorithm owner or commissioner. In more extreme cases they risk 'spread[ing] rumors and misinformation, or attack[ing] people for posting their thoughts and opinions online' (Radziwill & Benton, 2017, p. 1).

An examination of the literature reveals a research gap with regard to the ethical challenges linked to the use of chatbots as non-moral and non-independent agents managing non-real, or para-, conversations with consumers. More specifically, this gap reveals a need to explore the ethical implications related to the progressive automation of online conversational processes and their integration with AI principles. The authors therefore argue that, especially in the marketplace, people should be aware of the nature and features of chatbots and should be alerted to the implications of interacting with them.

Furthermore, this discussion is timely and pressing, since every transaction in which consumers engage with chatbots increases information asymmetry and the latter's power. With every interaction, humans provide these non-human agents with additional consumer, personal and/or private information, which in turn enriches their data repositories and which can then be used in a way that benefits their owners: even if that is only to confirm consumers' existing purchasing habits. However, there is an additional and potent force in play. Conversations are not just the words used for transferring data or a concept to someone/something else: conversations are strategic processes playing a crucial role in building trusting relationships, reducing uncertainties and increasing knowledge.

In this article, we argue, paradoxically, that to level the playing field and re-balance human–machine conversations, we need to use these conversations themselves. They could play a pivotal role in redistributing and redressing the asymmetrical power characterising the extant chatbot-based environment. By surfacing the ethical issues and developing the rules that define those conversations, steps can be taken to move towards an equalisation of power, and boundaries can be placed on the processes by which power imbalances are initiated and perpetuated.

This study aims to explore conceptually the ethical challenges and possible solutions relating to managing online human–machine interactions by integrating the current chatbot-based literature with the literature on conversation-based management. To achieve this aim, this article is structured as follows. The first section reviews the extant chatbot-based literature, analyses the opportunities and challenges related to the increasing use of robots for managing organisation–consumer interactions, and scopes the gap concerning the ethical implications linked to them. The second section explores the conversation management literature in order to develop theoretical and practical insights that could be helpful in effectively programming ethical online chatbot–human conversations. The third section explores a model which maps a way forward for incorporating ethics in chatbot–human interactions and, fourth, the managerial implications of using conversation principles in chatbot implementation will be outlined.

2. Artificial intelligence-based chatbots: towards a new ethical paradigm for human–machine interactions

Considered as an 'integral element for managing firm-customer relationships' (Köhler, Rohm, de Ruyter, & Wetzels, 2011, p. 93), a chatbot has been defined as an e-service agent that represents a technological evolution of the traditional service agent involved in direct firm–customer exchanges (Chung, Ko, Joung, & Kim, 2018; Köhler et al., 2011; Larivière et al., 2017). Chatbots exploit the features of social network- and AI-based technologies to guarantee personalised information and to meet customers' needs and expectations more effectively (Köhler et al., 2011; Sweezey, 2018a). In doing so, they help determine the success of service exchanges and enhance firm–customer interactions. More specifically, they facilitate what the service-dominant approach defines as the process of value co-creation, as they emphasise the interactive and networked nature of value creation (Lusch & Vargo, 2006; Vargo & Lusch, 2008). As stated by Vargo and Akaka (2009), the main aim of service systems is "to provide input into the value-creating processes of other service systems and thus to obtain reciprocal output" (p. 39). Chatbots act as service systems whose input is the collection, integration and transformation of data, and whose output is the provision of valuable interactional experiences both for customers and firms. They meet the service-dominant principle according to which "the value created by the customer, through the support of a supplier, enables the supplier to gain financial value in return" (Grönroos, 2011, p. 285). From a customer's viewpoint, the value of using chatbots is the high speed of the service provided and the quantity and variety of information they can supply compared with traditional service agents (Chung et al., 2018; Köhler et al., 2011; Sweezey, 2018b). Chatbots have the capability to 'mimic human behavior and record a contact history they can subsequently refer to by tapping into an artificial memory' (Köhler et al.,
2011, p. 96). By using pattern-matching techniques, chatbots rapidly access a 'set of concepts that are interconnected relationally and hierarchically' (Abdul-Kader & Woods, 2015, p. 74) and can use this wide range of knowledge to quickly and effectively assist customers. From the viewpoint of the firm, by using machine-learning techniques, chatbots acquire information, learn and modify their own behaviours in order to help clients use their own time more efficiently and to guide them to develop a deeper understanding of products, with positive consequences (for the firm) in terms of financial performance (Chung et al., 2018).

The role of chatbots as e-service agents in the extant service-dominant approach has been investigated using two logics which underlie chatbot–human interactions (see Fig. 1). These can be seen as two interrelated lenses of analysis: the technological lens, which emphasises the role of the functional interface of chatbots and identifies design principles for their effective development, and the anthropomorphic lens, which focuses on the humanising experience that chatbots can provide and that could enhance firm–customer interactions. The authors summarise the thinking of these two perspectives in Fig. 1 and show how, together, they contribute to the service-dominant approach.

Examining chatbots through the technological lens suggests that their development requires the application of design principles in order to improve their functional interface (Kreps, 2017; Lau, Wong, Pun, & Chin, 2003; Messinger, Ge, Smirnov, Stroulia, & Lyons, 2019). Defining a set of rules and procedures enhances the quality of informational exchanges and ensures an agile infrastructure (Lau et al., 2003). Kreps (2017) identifies seven design factors that need to be considered: a) availability of tailored information systems to collect information about digital users and to develop a set of message responses that can match their needs; b) ease of use, which constrains the use of complex design features and forces the adoption of a user-centric approach; c) high levels of relevance and clarity in message responses, achieved by integrating them with visual elements; d) high levels of interaction, which require a complete vocabulary and set of information in order to provide constant feedback and avoid missing or misleading answers; e) added programming which allows recognition of users' emotions; f) high levels of interest and appeal in shared information; and g) high connection with other social networks to encourage sharing of information. According to this technological perspective, the suitability and technological appropriateness of the chatbot design represent the first requirement for chatbots' practical operation and effectiveness.

The anthropomorphic lens stresses the need to go beyond design principles when developing chatbots. It underscores the importance of exploring their role as social actors characterised by communicative behaviour and by the level of humanness simulated by the language they use and the emotions they generate in digital users (Bickmore, 2010; Ho, Hancock, & Miner, 2018; Isotalus & Muukkonen, 2002; Lee & Nass, 2010; Reeves, 2016; Stoll, Edwards, & Edwards, 2016; Westerman, Cross, & Lindmark, 2019). Defined more as animated agents (Isotalus & Muukkonen, 2002) with an affective function, chatbots can stimulate closeness and immediacy (Isotalus & Muukkonen, 2002) and promote disclosure practices (Ho et al., 2018). According to this approach, and as indicated earlier in this article, digital users perceive and interact with chatbots as if they are other people, even if they know that they are machines (Von der Pütten, Krämer, Gratch, & Kang, 2010; Ho et al., 2018). Chatbots can act as 'companions' (Stoll et al., 2016) when they use anthropomorphic language, behaviours and signals (Lee & Nass, 2010; Stoll et al., 2016; Westerman et al., 2019). Consequently, in managing chatbot–human interactions, particular attention needs to be given not only to design principles but also to how messages are managed and shared, in specific formats and with content that could affect digital users' impressions about the humanness of chatbots. Their attractiveness can be enhanced by such elements as the timing of their responses, their use of spelling and the length of words used (Westerman et al., 2019).

The authors contend that the interactive and relational nature of the value created via chatbots, as evidenced by service-dominant logic, cannot be explored by using the technological and anthropomorphic lenses alone, as has been the case so far. As Følstad, Nordheim, and Bjørkli (2018) state, the interactive and relational value created via chatbots can be affected not only "by factors concerning the specific chatbot, specifically the quality of its interpretation of requests and advice, its human-likeness, its self-presentation, and its professional appearance" (Følstad et al., 2018, p. 1). As Følstad et al. go on to say, the relational value created via chatbots can also be affected by "factors concerning the service context, specifically the brand of the chatbot host, the perceived security and privacy in the chatbot, as well as general risk perceptions concerning the topic of the request" (Følstad et al., 2018, p. 1).

While Følstad and colleagues indicate additional dimensions that need to be added to the technological and anthropomorphic lenses, they
do not articulate these conceptually. As has already been argued, what makes chatbots different from other AI-enabled consumer-focused technologies is their potential to be mistaken as humans, largely because of their conversational capacity. Their data-retaining, analytical and predictive capacities, and the information asymmetry these create, also enable them to engage in effective persuasive conversations. Focussing on this, the authors have turned for guidance to one of the earliest and most authoritative texts on persuasive conversations, Aristotle's Rhetoric (Roberts, 2015), to discern if there are any helpful parallels. Aristotle lists three Persuasive Appeals (and, bearing in mind that chatbots are intended to be persuasive agents in a consumer context, this has relevance): logos, pathos and ethos. Logos concerns appeals to the rational. The authors would equate this to the technological lens, in that it is focused on the suitability of the technology and the features that are available to make the interactions reliable and technologically appropriate, such as the right vocabulary, the correct sequencing of questions, appropriate visualisations, ease of use and so on. Pathos, which centres on appeals to the emotions, evokes parallels with the anthropomorphic lens. Here issues such as appropriate communicative behaviours, how human the bot appears to be, and what emotional responses it generates are pertinent. Ethos, which concentrates on ethics, that is, issues of character (for example, judgement; intentionality; exercise of power; credibility or trust), appears to be the missing element. Ethos has ready application to considerations of, for example, the conversational brand's behaviour in the context of chatbots (Følstad et al., 2018). The argument made by the authors is that while the technological and anthropomorphic lenses may deal with the performance of chatbots, which is largely for the benefit of the owner of the bot, the third dimension is to do with governance and is crucially bound up with ultimate consumer acceptance of the place of bots as legitimate and trustworthy organisational agents which take account of the consumer's position too.

Having established that there are moral and ethical elements to the use of chatbots, this article now turns to what the authors would regard as three pivotal ethical issues which emerge in the literature on chatbots, although there are many more. The first is linked to the risk of the asymmetrical redistribution of power because of the information asymmetry that occurs within human–machine interactions. Chatbots increase the capacity of organisations to collect in-depth, integrated and aggregated information about their customers in order to develop apparently cooperative interactions with them on a range of topics, including those outside the subject of the immediate transaction (Parvatiyar & Seth, 2001). However, as has already been stated, chatbots' ability to access a large amount of information within a short period of time, combined with their ability to process data at high speed and their predictive capability, leads to an imbalance in informative power. The ethical risk is of misalignment occurring between the different participants in the para-conversation, who do not have the same level of informative knowledge. As West (2019) points out, chatbots and users interact in a 'system in which the commoditization of our data enables an asymmetric redistribution of power that is weighted toward the actors who have access and the capability to make sense of information' (p. 1).

The second ethical challenge concerning the use of chatbots is linked to the management strategies for humanising them in order to reduce users' perception of risk and to increase perceived chatbot credibility and trust. Developing human–machine cooperation can provide benefits to organisations in terms of speed, cost savings and revenue (Wilson & Daugherty, 2018), but genuine interactions are built on more than one-sided instrumental value. For this reason, the anthropomorphic view applied to chatbots has called for their humanisation, even if they are not yet characterised by self-consciousness and self-awareness (Kaplan & Haenlein, 2019). This call is posited on the need to reduce users' perception of risk and to make chatbots more credible in consumers' eyes by using the range of cognitive, emotional and social intelligence cues that consumers themselves display (Kaplan & Haenlein, 2019). The ethical risk is that, by making chatbots appear more human, consumers may assume they have the same mental models and attendant ethical values, judgement and social norms that constrain instrumental human behaviour (Mengis & Eppler, 2008).

The third ethical issue linked to the use of chatbots concerns the need to manage network security and protect users' privacy. The collection of personal and impersonal data linked to individual behaviours within the digital marketplace is made possible by technological features, such as identification tags. These features raise specific ethical concerns, such as information privacy, data protection, lack of control over personal data and potential slavery to technological devices (Hsu & Lin, 2016; Nguyen & Simkin, 2017; Slettemes, 2009). As a consequence, the requirement to apply privacy measures for minimising information disclosure and loss of data is increasing (Khan, Al Mansur, Kabir, Jaman, & Chowdhury, 2012; Solanas & Martinez-Ballestè, 2009). In this case, the ethical issue can be addressed via the technological lens, which allows for the application of numerous measures for ensuring information confidentiality, integrity and availability (Solanas & Martinez-Ballestè, 2009). Information, for instance, should be known only by authorised stakeholders (confidentiality), should be complete and detailed (integrity) and should be accessible in a prearranged way and time (availability) (Solanas & Martinez-Ballestè, 2009). However, a technological solution is not the only one, and may not be the best, as the following discussion demonstrates.

Given the interactive and relational nature of the value created via chatbots and the role of chatbot-based para-conversations, the authors have explored the conversation management literature, according to which conversation 'serves not only to exchange information, but also for conversation partners to relate to each other and develop a shared reality between them' (Mengis & Eppler, 2008, p. 1290). More specifically, we intend to enrich the stream of service-dominant logic literature by putting forward evidence that calibrating conversations according to mechanisms, procedures and rules that go beyond the technological or anthropomorphic approaches, to incorporate conversation governance principles, would address ethical challenges and make the technology more propitious.

3. A conversation-based perspective to shape human–machine interactions via chatbots

The conversation management literature helps us address the three chatbot-based ethical challenges outlined above.

3.1. Risk of the asymmetrical distribution of informative power: solving information asymmetry

The topic of information asymmetry has emerged in the chatbot-based literature as it represents a core concept in marketing interactions between organizations and customers. 'Marketing relationships between buyers and sellers are often characterized by information asymmetry, in the sense that the supplier possesses more information about the object of an exchange (e.g., a product or service) than the buyer' (Mishra, Heide, & Cort, 1998, p. 277). Two main problems are linked to this: first, the adverse selection problem (Mishra et al., 1998; Nayyar, 1990), which arises when the buyer cannot evaluate the seller's skills, abilities and characteristics; second, the moral hazard problem (Holmstrom, 1979; Mishra et al., 1998; Stewart, 1994), which relates to the supplier affecting consumers' perceptions of the quality of a product or service because these consumers do not have full information about such a product or service. Mishra et al. (1998) identified three possible solutions to these two problems: a) the sharing of detailed private information by the parties involved in a marketing interaction before starting a possible exchange; b) the development of a shared culture based on common and understood meanings about events and situations in order to avoid goal incongruence; and c) the development of a customer-oriented compensation system based on the use of incentives and rewards.

The conversation-based literature can enrich this viewpoint and provide additional insights about the strategic use of conversations to provide solutions to the ethical challenge of information asymmetry. It emphasises the role of the subjects involved in the conversation and the
need to clarify, at the beginning of interactions, detailed information about the identity and mental models of the actors involved (Arnold, 2008; Jabri, 2009; Mengis & Eppler, 2008; O'Neill & Jabri, 2007) and their aims or intents, in order to avoid possible goal incongruence (Ford & Ford, 1995; Liedtka & Rosemblum, 1996; Skordoulis & Dawson, 2007; Von Krogh & Roos, 1995).

The conversation-based literature affirms that involved actors should reveal their identity immediately by introducing themselves at the beginning of an interaction. Identity disclosure helps to stimulate an egalitarian position amongst the interlocutors (Beech, MacIntosh, & MacLean, 2010), while also recognizing the risk of hierarchical positioning, but introductions can be deliberately framed to encourage respectful listening and honest reciprocation. With this in mind, it would be helpful if the chatbot disclosed its identity at the beginning of a para-conversation by making a declaration such as, "Hi! I am a bot, how can I help you?".

Moreover, in order to reduce information asymmetry, it would be beneficial to understand the mental models interlocutors use for interpreting and creating meaning (Mengis & Eppler, 2008; O'Neill & Jabri, 2007). All the actors involved use specific mental models for understanding, interpreting and attributing a meaning to the reality they experience. Mental models are 'deeply anchored, internal pictures of how the world works', and they refer to the 'whole network of values, convictions, assumptions and psychological dispositions for sense-making and move in a nanosecond from the original message to our interpretation' (Mengis & Eppler, 2008, p. 1299). Knowing the cultural background of the participants in a conversation is essential, as it affects the capacity to share information and opinions effectively, as well as to cooperate in order to achieve specific aims (O'Neill & Jabri, 2007). From a practical point of view, chatbots should pose certain types of questions to uncover interlocutors' theories, beliefs and mental representations (Sears & Jacko, 2009): what-if questions; "how" questions, in order to stimulate comparisons; and out-of-the-box questions, such as, for instance, metaphorical or lateral questions. While there are additional dangers here of increasing the power of chatbots and their potential for increased manipulation, which require ethical governance, once mental models have been discovered it means that, from the other side of the para-conversation, there is increased mental congruence.

Conversational situations involving information asymmetry are often characterised by the presence of hidden agendas and goal incongruence. Chatbots should immediately declare the aim of the conversation and ask the interlocutor's intent. Von Krogh and Roos (1995) distinguished the operational intent from the strategic intent of conversations. In operational intent, conversations are addressed to verify that a meaning has been understood, accepted and shared. They are char

chatbots and to display different risk perceptions to their use than they would, say, to a purely text-based human–machine interface. In order to promote trusting interactions between interlocutors and to reduce users' perceptions of risks, the following conversational features need to be taken into account: the forms of the exchanged messages, in terms of format and content (Mengis & Eppler, 2008), and the different types of conversations that can occur, in order to avoid conversational mistakes that could impair the quality of the relationship (Ford & Ford, 1995).

According to Mengis and Eppler (2008), underestimating the relevance of form can create ambiguity and cause loss of credibility. Concerning format, the use of humour and clear statements facilitates understanding of the message, and the use of simple language and visual supports appears to facilitate the creation of new knowledge. A neutral and moderate tone reduces the risk of difficult and stressful conversations (Mengis & Eppler, 2008). As far as content is concerned, this helps align interlocutors' expectations about the conversation. Messages rooted in actual facts seem to assure the interlocutors that no misinterpretations have occurred. In sum, chatbots should be programmed to prioritise messages that focus on the issues that matter most or on the topics that are relevant and meaningful to interlocutors.

In addition, organisations should consider sequencing different types of conversations, which may focus on different purposes, such as their effectiveness (meeting the organization's objectives), the satisfaction of interlocutors and the success of interactional relationships. Within the conversation-based literature, four types of conversations can be identified (Ford & Ford, 1995), and they could be taken into account in programming chatbots. First, initiative conversations, which are characterised by the use of assertions (i.e. 'We need to…'), requests (i.e. 'Will you approve…') and promises (i.e. 'We will do…') in order to 'focus listeners' attention on what could or should be done' (p. 546). Second, conversations for understanding, which are useful for making sense of an issue and are characterised by 'assertions (Scherr, 1989) and expressives; that is, claims are made, evidence and testimony [are] given, hypotheses [are] examined, beliefs and feelings [are] explored, and contentions [are] maintained' (p. 548). Third, conversations for performance, which include a call to action and are characterised by 'an interplay of directives (requests) and commissives (promises) spoken to produce a specific result' (p. 549). Finally, conversations for closure, which are used to reach an end and ensure that the participants are satisfied with the achieved results. These are 'characterized by assertions, expressives, and declarations to bring about an end to the exchange process' (p. 551).

3.2. Managing online security: enhancing users' privacy

Chatbots constantly interact with machines and humans within the
acterized by a conciliatory tone (by using excuse utterances such as “I online environment, and they manage huge amounts of personal data
am glad to hear that” or greetings utterances such as “Thank you”) and and information that could be sensitive, such as that pertaining to health
by the sharing of information resources such as links, reports, data, and finance. As pointed out by Subramanian (2017), the ‘security and
graphics. In this case, chatbots could help users in developing shared discreetness of social robots are obviously a very critical design imper
understanding and schema for making interpretations about a specific ative’, as ‘they affect the safety and security of the robot as well as [of]
issue (Ford & Ford, 1995). In strategic intent, conversations are oriented the individual that it is associated with, including the individual’s
towards stimulating and collecting new perspectives and ideas. They are properties’ (p. 96). Similarly, Følstad and Brandtzaeg (2017) identified
characterized by a challenging tone of voice using question-based ut that ‘machine agents may be used to sway individuals’ opinions in un
terances to uncover underlying assumption (“What do you mean?” desirable ways’ (p. 38). An example could be represented by the
“What do you intend?”) and by providing feedback and evaluations on computational propaganda phenomenon in political communication
the ideas shared by interlocutors (“Well done” “Great ideas”). In this (Bradshaw & Howard, 2018). In recent research into the on-line political
case, chatbots help users to develop capability and specific competences environment it emerges that the most common strategy used in more
about an issue, assisting cooperation and stimulating openness to than 38 countries is to develop political bots, or automated accounts
different perspectives (Skordoulis & Dawson, 2007). designed to mimic human behaviour. Such bots are designed to use
various manipulation techniques such as ‘spreading junk news and
3.1. Humanising chatbots: increasing their credibility and reducing users’ propaganda during elections and referenda, or manufacturing a false
perception of risks sense of popularity or support (so-called ‘astroturfing’) by liking or
sharing stories, ultimately drowning out authentic conversations about
Chatbots are increasingly programmed to mimic human beings, politics online’ (Bradshaw & Howard, 2018, p. 12).
although they have nothing human about them. This initial paradox As a counterbalance to such approaches, the conversation-based
leads potential customers to accept at face value the credibility of literature describes a conversational setting that could be reassuring in
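Taken together, the conversational moves described above (self-disclosure of identity, declaring the aim of the conversation up front, and selecting utterances according to Von Krogh and Roos' operational versus strategic intents) could be prototyped along the following lines. This is a purely illustrative sketch: the function names, the utterance repertoires and the rule-based dispatch are hypothetical, not an implementation drawn from the cited literature.

```python
# Purely illustrative sketch: a hypothetical rule-based bot that opens every
# para-conversation by disclosing its identity and declaring its aim, then
# draws utterances from repertoires modelled on the operational vs. strategic
# conversational intents summarised above (Von Krogh & Roos, 1995).

# Conciliatory repertoire: verify that meaning is understood and shared.
OPERATIONAL_UTTERANCES = {
    "thank": "Thank you!",
    "acknowledge": "I am glad to hear that.",
}

# Challenging repertoire: surface assumptions and invite new perspectives.
STRATEGIC_UTTERANCES = {
    "probe_meaning": "What do you mean?",
    "probe_intent": "What do you intend?",
    "feedback": "Great idea!",
}

def open_conversation(aim: str) -> str:
    """Self-disclose and declare the conversational aim before anything else."""
    return f"Hi! I am a bot. My aim is to {aim}. How can I help you?"

def pick_utterance(intent: str, move: str) -> str:
    """Select an utterance according to the declared conversational intent."""
    if intent == "operational":
        return OPERATIONAL_UTTERANCES[move]
    if intent == "strategic":
        return STRATEGIC_UTTERANCES[move]
    raise ValueError(f"unknown intent: {intent}")
```

For example, `open_conversation("answer questions about your order")` yields a greeting that discloses the bot's identity and its aim before any data are exchanged, which is the self-disclosure move the text recommends.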
[…] terms of respect for privacy and network security. That literature explores the features of the conversational environment that need to be clarified and shared, such as access modalities and conversational settings (Innes, 2004), and the specific conversational rules that are based on advocacy and comparison approaches (Von Krogh & Roos, 1995). It claims that authentic conversations can be developed in a respectful and proper environment (Innes, 2004). The following criteria need to be met to guarantee a conversational environment which generates reassurance: ease of access to the conversational environment for all involved; full accessibility of the information that needs to be shared amongst the participants; and a setting where mutual respect is required and guided by ground rules for conversational behaviours.

In conversations, power structures can emerge, so agreeing specific conversational rules can help in maintaining balanced formal and informal relationships. In practical terms, it could be helpful for organizations to create a chatbot etiquette and code of polite conduct that could be openly offered to all actors before starting a conversation. This would also guard against instances where the humans involved in para-conversations have abused their power to 'train' chatbots to use, for example, obscene language. Of course, enforcing those rules would also need consideration, and exclusion for violations may be a last sanction.

4. Managerial implications for the ethical development of chatbots

Three main ethical challenges have emerged from a review of the chatbot-based literature: the risk of the asymmetrical redistribution of informative power; the need to humanize chatbots to reduce users' risk perceptions and increase organizational credibility; and the need to guarantee network security and protect users' privacy. By introducing the conversation perspective, this study provides insights and stimuli for marketing and communication researchers and professionals who would like to integrate the use of chatbots into their collection of consumer interaction tactics. The authors suggest that communication and marketing professionals align programming with online conversational competencies to help address the ethical challenges that persist in the use of AI-based applications. Fig. 2 offers a synthesis of the conversation management contribution.

Fig. 2. Conversational process features for addressing chatbot-based ethical challenges.

The authors suggest, therefore, that three sets of diagnostic questions should be built into the development of chatbot conversations to address these ethical considerations.

The first group of questions to pose before activating a chatbot–user conversation is related to the roles of the actors involved in the conversation, their identity and mental models, their aims and their intents. By posing specific questions at the beginning of online interactions, chatbots could uncover interlocutors' mental models and recognise the presence of framing mechanisms that can occur during a conversation (Mengis & Eppler, 2008). These questions could help marketing and communication practitioners to reduce the risk of the asymmetrical redistribution of informative power and meet the ethical requirement of solving such information asymmetry. By developing an invitational rhetoric that links chatbots back to humans, by revealing information about the features and roles of interlocutors, and by sharing clear conversational aims, intents and expected results, value is added to para-conversations and para-dialogues. These promote mutual confirmation of individuals' uniqueness rather than the gaining of control over others (Foss & Griffin, 1995; Yang & Heeman, 2010). Both parties are aware of the presence of each other and of the fact that the other party should not be considered solely as a data source but as a partner in achieving conversational aims.

As well as chatbots being programmed to reveal their identity from the beginning of para-conversations, with self-disclosure utterances being added to the language repository, they should also be programmed to recognise the interlocutors' conversational needs. As underlined by Mengis and Eppler (2008), this can be done by asking a series of questions that answer the over-arching question: 'is the communication intent explicitly shared by all participants and is it oriented towards the co-creation of meaning?' (p. 1299). Articulating individual and common goals explicitly could help participants in making sense of the interaction and avoid misunderstandings (Mengis & Eppler, 2008). Even in chatbot–human conversations, surfacing each actor's intentions and clarifying the expected results and outcomes would be worthwhile for both participants. Indeed, conversations can contribute to achieving a range of different aims, such as developing skills and specific competencies, stimulating collaboration among interlocutors and improving consensus levels. The end points can be shared before the beginning of a conversation, as these can guide specific conversational
choices in terms of utterances, cues and formats.

A second group of diagnostic questions is related to the specific principles drawn from the conversational literature that could guide decisions about the format and content of messages. Such questions could be helpful for increasing the credibility of chatbots and answering the ethical challenge of humanizing chatbots. Distinguishing between identifying problems and giving recommendations could help in developing trust and loyalty among the actors involved. Before conversations begin, how they are to be structured should be explicitly stated - that is, whether they are to explore a broad topic or to focus on a specific issue, and whether all the single contributions shared by the interlocutors are taken into account or specific utterances and declarations are given more weight.

Recognising the impact of using specific sequences during a conversation is helpful if the aim is to avoid possible conversational breakdowns that could negatively affect the level of trust interlocutors have in each other. The conversation management approach indeed indicates that trustful interactions can be created "through a process of discovery in dialogue with others" (Bowen, 2016, p. 568) and by ensuring the quality of interaction. This quality can be improved by attributing the proper formats to the messages exchanged and selecting the right structure for the conversations. It can be further enhanced by adding empathic utterances, such as those of agreement or disagreement.

A third and final group of diagnostic questions is related to conversational etiquette: the set of conversational rules that need to be shared at the beginning of a conversation. The conversation management literature suggests that the environment in which chatbot-based para-dialogues and para-conversations take place is important. This perspective emphasises the importance of defining the setting in which conversations occur, being clear about the rules, and transforming chatbot-based para-conversations into a collaborative act where machine and users cooperate to achieve a common and understood aim.

Conversational etiquette could also help practitioners in managing user requirements for security and privacy. In this case, developing a conversation etiquette allows organizations to create common ground where interlocutors feel free to share convergent or divergent stimuli and ideas (Hammond & Sanders, 2002).

5. Implications for theory

The service-dominant approach to chatbots has emphasized the interactive and networked nature of the value co-creation process that they could encourage. By stimulating real-time interactions, chatbots play the role of service systems and allow customers and organizations to share data and acquire valuable insights. These chatbot–human interactions have been explored in the literature to date by adopting two theoretical approaches: a technological one, which has focused on examining the suitability and appropriateness of chatbots in terms of interfaces, infrastructure and design; and an anthropomorphic one, which has emphasized their role as social actors and organizational attempts to make them similar to human beings. Both perspectives focus on chatbot performance and are addressed to improving the chatbot-based experience, with clear advantages for the organizations which own and program the chatbot systems.

In order to enhance the value co-creation promoted by the service-dominant approach, according to which both organizations and customers could take optimal benefit from using chatbot-based services, we argue that attention should move from a focus on enhancing performance to embrace considerations that concern chatbot governance. This links to the key issue of the legitimation of chatbots as organizational representatives by digital users. As the use of AI technologies and big data is coming under increasing public scrutiny, this shift is apposite. Taking a cue from Aristotle (Roberts, 2015), this means adopting a new approach and theoretical perspective to examine chatbots which specifically articulates an ethical strand, as summarised in Fig. 3.

The three main ethical challenges identified and analysed (informative asymmetry linked to the missing redistribution of informative power; increasing users' perceptions of risk and the loss of chatbot credibility and trust; and network security and privacy) have not been specifically articulated and addressed by either the technological or anthropomorphic approaches. The main contribution of this paper is, therefore, the adoption of ideas and concepts drawn from the conversation-based literature to answer this ethical need. A further contribution is the formulation of a new model that shows the relationship between the two extant lenses in the literature, technological and anthropomorphic, and the ethical dimension.

6. Conclusions

Sullins (2006) underlined three requirements for a robot to be seen as a moral agent—it should be significantly autonomous from any programmers, ascribing the robot's behaviour to an intention should be possible, and the robot should show an understanding of its […]
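The three groups of diagnostic questions proposed in Section 4 (actors and intents; message format and content; conversational etiquette) can be summarised as a simple pre-launch checklist. The sketch below is purely illustrative: the data structure, function name and question wordings are hypothetical condensations of the points made above, not items taken from the cited literature.

```python
# Purely illustrative sketch: the three groups of diagnostic questions
# discussed in Section 4, condensed into a hypothetical pre-launch
# checklist for a chatbot deployment.

DIAGNOSTIC_QUESTIONS = {
    "actors_and_intents": [          # first group: roles, identity, aims
        "Does the chatbot disclose that it is a bot at the outset?",
        "Are the aims and intents of both parties declared and shared?",
    ],
    "format_and_content": [          # second group: message format and content
        "Is the conversation's structure stated before it begins?",
        "Are empathic utterances available for agreement and disagreement?",
    ],
    "etiquette": [                   # third group: shared ground rules
        "Are the conversational ground rules offered to all actors?",
        "Is there a proportionate sanction path, with exclusion as a last resort?",
    ],
}

def open_items(answers):
    """Return every checklist question not yet answered affirmatively."""
    return [
        question
        for group in DIAGNOSTIC_QUESTIONS.values()
        for question in group
        if not answers.get(question, False)
    ]
```

A deployment team could record yes/no answers per question and treat a non-empty `open_items` result as a signal that the chatbot's conversational design still leaves one of the three ethical challenges unaddressed.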
References

Roberts, R. (2015). Rhetoric: Aristotle. Fairhope, AL: Mockingbird Classics Publishing.

Scherr, A. L. (1989). Managing for breakthroughs in productivity. Human Resource Management, 28(3), 403–424. https://doi.org/10.1002/hrm.3930280308

Sears, A., & Jacko, J. A. (Eds.). (2009). Human-computer interaction: Development process. New York: CRC Press.

Shawar, B. A., & Atwell, E. S. (2005). Using corpora in machine-learning chatbot systems. International Journal of Corpus Linguistics, 10(4), 489–516. https://doi.org/10.1075/ijcl.10.4.06sha

Skordoulis, R., & Dawson, P. (2007). Reflective decisions: The use of Socratic dialogue in managing organizational change. Management Decision, 45(6), 991–1007. https://doi.org/10.1108/00251740710762044

Slettemes, D. (2009). RFID: The 'next step' in consumer-product relations or Orwellian nightmare? Challenges for research and policy. Journal of Consumer Policy, 32, 219–244. https://doi.org/10.1007/s10603-009-9103-z

Solanas, A., & Martinez-Ballestè, A. (2009). Advances in artificial intelligence for privacy protection and security. New Jersey: World Scientific.

Stewart, J. (1994). The welfare implications of moral hazard and adverse selection in competitive insurance markets. Economic Inquiry, 32(2), 193–208. https://doi.org/10.1111/j.1465-7295.1994.tb01324.x

Stoll, B., Edwards, C., & Edwards, A. (2016). "Why aren't you a sassy little thing": The effects of robot-enacted guilt trips on credibility and consensus in a negotiation. Communication Studies, 67(5), 530–547. https://doi.org/10.1080/10510974.2016.1215339

Subramanian, R. (2017). Emergent AI, social robots and the law: Security, privacy and policy issues. Journal of International Technology and Information Management, 26(3), 81–105. https://ssrn.com/abstract=3279236

Sullins, J. P. (2006). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 151–160). Cambridge: Cambridge University Press.

Sweezey, M. (2018a). Marketing automation for dummies. New Jersey: John Wiley & Sons.

Sweezey, M. (2018b). The future is now: How chatbots streamline lead gen and sales. [online] Salesforce blog. https://www.salesforce.com/blog/2018/08/chatbots-lead-gen-sales-crm Accessed August 20, 2019.

Vargo, S. L., & Akaka, M. A. (2009). Service-dominant logic as a foundation for service science: Clarifications. Service Science, 1(1), 32–41. https://doi.org/10.1287/serv.1.1.32

Vargo, S. L., & Lusch, R. F. (2008). Service-dominant logic: Continuing the evolution. Journal of the Academy of Marketing Science, 36(1), 1–10. https://doi.org/10.1007/s11747-007-0069-6

Von der Pütten, A. M., Krämer, N. C., Gratch, J., & Kang, S. H. (2010). It doesn't matter what you are! Explaining social effects of agents and avatars. Computers in Human Behavior, 26(6), 1641–1650. https://doi.org/10.1016/j.chb.2010.06.012

Von Krogh, G., & Roos, J. (1995). Conversation management. European Management Journal, 13(4), 390–394. https://doi.org/10.1016/0263-2373(95)00032-G

West, S. M. (2019). Data capitalism: Redefining the logics of surveillance and privacy. Business & Society, 58(1), 20–41.

Westerman, D., Cross, A. C., & Lindmark, P. G. (2019). I believe in a thing called bot: Perceptions of the humanness of 'chatbots'. Communication Studies, 70(3), 295–312. https://doi.org/10.1080/10510974.2018.1557233

Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review. Available at https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces Accessed 12 January 2019.

Yang, F., & Heeman, P. A. (2010). Initiative conflicts in task-oriented dialogue. Computer Speech & Language, 24(2), 175–189. https://doi.org/10.1016/j.csl.2009.04.003

Grazia Murtarelli, Ph.D. is Assistant Professor of Corporate Communication at Università IULM in Milan (Italy), where she teaches Digital Communication Management and Web Analytics. Her research focuses on the analysis of the online scenario and, more specifically, on the following issues: social media-based relationship management, online dialogue strategies, digital visual engagement processes, and social media measurement and evaluation. She is Public Relations Student & Early Career Representative at the International Communication Association. She is also a faculty affiliate of the Center of Research for Strategic Communication at Università IULM.

Anne Gregory, Ph.D. has been Professor of Corporate Communication at the University of Huddersfield since September 2014. She joined the University from Leeds Beckett University, where she was Director of the Centre for Public Relations Studies, an internationally recognised research centre and think tank on public relations and communication. While at Leeds Beckett, she also completed a three-year term as Pro Vice Chancellor. Professor Gregory is the Director of the Global Capabilities Framework research project, the results of which were announced in April 2018. The Framework has been adopted by the Global Alliance of Public Relations and Communication Management, which is the confederation of the professional bodies worldwide.

Stefania Romenti, Ph.D. is Associate Professor in Strategic Communication and PR at IULM University (Milan, Italy) and Chair of the Master of Science in Strategic Communication (English language). She is Director of the Executive Master in Corporate Public Relations (IULM University) and Adjunct Professor at IE Business School (Madrid) in "Measuring Intangibles and KPIs in Communication". She is Founder and Director of the Research Center in Strategic Communication (CECOMS) and Member of the Board of the European Public Relations Education and Research Association (EUPRERA). Dr. Romenti centers her research on strategic communication, corporate reputation, stakeholder management and engagement, dialogue, social media, and measurement and evaluation.