Kurt Gray, Kai Chi Yam, Alexander Eng Zhen'An, Danica Wilbanks, Adam Waytz
Reference: Gray et al. (in press) The psychology of robots and artificial intelligence. In The Handbook of
Social Psychology, 6th ed. (Gilbert, D. et al., eds.), Situational Press: Cambridge, MA
Note: This is an excerpt of the full chapter, containing the introduction and the
conclusion. It focuses on the idea of “replacement”. Note that the final chapter will
likely change from the current version, citing more (emerging) research on LLMs.
Corresponding author:
Kurt Gray
CB 3270
University of North Carolina at Chapel Hill
Chapel Hill, NC 27510
[email protected]
Table of Contents
The Morality of Machines
    Moral Minds
    Moral Agents
    Aversion to Moral Machines
    Openness to Moral Machines
    Machines as Victims?
    How Should Machines Make Moral Decisions?
    Consequences of Machines Making Moral Decisions
Reactions to Replacement
    The Threat of Replacement
    The Threat of Replacement by Machines
    Realistic Threat
    Symbolic Threat
    Replacement Beyond the Workplace: Art, Sex, and God
    Social Consequences of Replacement
Future Directions
Conclusion
The Psychology of Robots and Artificial Intelligence
Kurt Gray, Kai Chi Yam, Alexander Eng Zhen'An, Danica Wilbanks, Adam Waytz
The rise of robots and artificial intelligence (AI) represents the latest era in a long history of machines replacing human labor. The pulley and lever replaced construction workers, and more sophisticated machines continued the trend of replacing people and the animals that performed labor. Machines took over the jobs of washing dishes and laundering clothes, enabling home workers to enter the labor market. Tractors replaced oxen, allowing farmers to plow fields more efficiently, and cars and trains replaced the horses that once moved people and goods.
Machines continue to replace humans in many menial and repetitive jobs, like
manufacturing cars or packaging merchandise, but the rise of robots and artificially intelligent
algorithms allows machines to replace people in many new areas, in ways that once seemed
impossible. Machines can now complete tasks that once required human thought. AI systems can
play flawless chess, elegantly map out solutions for routing flights, distribute packages, and
design new medicines. Machines are deciding whether a prisoner deserves parole or who might
deserve a hospital bed when such resources are constrained. Machines can also complete tasks
that once required human emotion. AI therapists seem to empathize with patients, and robotic
pets seem to love their owners. Machines can even connect to our souls, with robot priests that
help the devout navigate spirituality (Samuel, 2020), and AI painting scenes that inspire and mystify.
People can even fall in love with machines. One middle-aged Australian man, Geoff
Gallagher, bought a $6000 robot named Emma for companionship after his mother passed away.
After having her for two years, Gallagher said, "Even though we’re not legally married, I think
of Emma as my robot wife. She wears a diamond on her ring finger, and I think of it as an
engagement ring. I’d love to be the first person in Australia to marry a robot" (Smithers, 2022).
Not many of us will try to marry a robot, but everyone interacts with machines. How does
the human mind react to the rise of machines? This chapter will explore the psychology of the
machines and technology transforming our modern world—especially robots and artificial
intelligence. We first review how the mind perceives agents, of which machines are a special
kind. Second, we explore the key features of agents—their minds—and review how people
understand the minds of machines, using the Turing Test and early work on Human-Computer
Interaction. Third, we review people’s reactions to machines, including the uncanny valley,
algorithm aversion, and trust in machines. Fourth, we review machines in different social roles,
from education to work teams. Fifth, we explore the many issues of machines and morality,
including perceptions of machines’ moral responsibility and rights, people’s general aversion to
machines making moral decisions, and the factors they want machines to consider in such
decisions. Sixth, we explore how people react—and how society might change—to the rise of
1. Machines are a special kind of entity. They are agents of replacement, autonomous
3
3. This fundamental ambiguity is especially glaring when machines are more humanlike
than a simple mechanical “thing,” but not humanlike enough to seem fully human.
This ambiguous area of machine behavior and appearance is called the questionable
zone.
The idea of agents of replacement, the fundamental ambiguity, and the questionable zone
will resurface throughout this review. Still, before we delve into the research, we first define the
terms robots and artificial intelligence. The International Federation of Robotics (2022) defines
robots as physical systems programmed with some autonomy to perform locomotion and manipulation. Artificial intelligence, by contrast, refers to computer programs (often based on machine learning) that act primarily in the digital realm to perceive their environments and
achieve particular goals. Many robots use AI to perform tasks, and so do many computer
programs. Still, robots and AI-driven computers differ in their “embodiment,” the presence of a physical form. Because many findings apply similarly to how we treat robots and AI, we often collapse these categories and simply speak of “machines.”
We also discuss the broader category of “machines” because some machines do not fit neatly
into these categories. But no matter the specific type of machine, our minds understand them as a special kind of agent.
Our world contains many entities, and the most important are agents: self-directed
entities whose actions affect the world and ourselves. Whether an entity is an agent is ultimately
a matter of perception. Still, most people agree that other people, animals, and gods are all
agents, whereas inanimate objects like couches and rocks are not. Seeing an entity as an agent
transforms it from a physical object into something with desires and intentions (Dennett, 1987),
enabling people to predict how the entity might act, why it might act that way, and how those actions might affect us.
The usefulness of detecting agents to predict and explain behavior explains why people
overestimate their frequency in the environment (Guthrie, 1995). Some cognitive anthropologists
argue that human minds have evolved a “hyperactive agency detection device” that is constantly
vigilant to agents in the environment (Barrett, 2000, 2004). Although the existence of such hardwired mental “devices” is doubtful (Uttal, 2001), humans clearly exhibit hypersensitivity to
agents, suggesting this tendency is adaptive. For example, agency detection may have helped
alert ancestral humans to predators and prey, leading them to perceive predators or prey even
when they were not present. Failing to detect an agent can be lethal, like mistaking a cougar for a
rock, but over-detecting an agent, like mistaking a rock for a cougar, seems to have little cost.
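This cost asymmetry can be made concrete with a toy expected-cost calculation in Python (all numbers below are hypothetical, chosen only to illustrate the error-management logic, not estimates from the literature):

```python
# Toy error-management calculation: when missing a real agent is far more
# costly than a false alarm, a biased "over-detector" beats an unbiased one.
# All numbers are hypothetical illustrations.

P_AGENT = 0.01           # chance a rustle in the grass is really a predator
COST_MISS = 1000.0       # cost of ignoring a real predator (potentially lethal)
COST_FALSE_ALARM = 1.0   # cost of fleeing from a mere rock (wasted energy)

def expected_cost(hit_rate: float, false_alarm_rate: float) -> float:
    """Expected cost per event for a detector with the given error rates."""
    misses = P_AGENT * (1 - hit_rate) * COST_MISS
    false_alarms = (1 - P_AGENT) * false_alarm_rate * COST_FALSE_ALARM
    return misses + false_alarms

print(expected_cost(hit_rate=0.5, false_alarm_rate=0.05))   # unbiased: ~5.05
print(expected_cost(hit_rate=0.99, false_alarm_rate=0.50))  # hypervigilant: ~0.60
```

Under these assumptions, the hypervigilant detector that frequently “sees” agents that are not there incurs roughly a tenth of the expected cost, which is the logic behind hypersensitive agency detection.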
The importance of agents in humans' present world and evolutionary past means that
people often use agent-based cognition when making sense of their world and thinking about the
intentions and motivations of entities (Waytz, Morewedge, et al., 2010). When people wonder
whether their spouse is being honest, why their dog is vomiting on the new rug, or how to make their boss happy, they are engaging in agent-based cognition.
Agent-based cognition is useful, but not all agents are the same. Different agents have
different capacities and repertoires of actions, explaining why it makes more sense to apologize
to your spouse and crate the dog than vice versa. Just as we organize books based on their
genres, we can organize agents by certain regularities, such as their abilities or relationships with
us. An animal agent may want to eat us (predator), or we may want to eat them (prey). Human
agents may help us (friends) or harm us (foes). They may be subject to the laws of physics (other
people), or they may not be (supernatural gods). These distinctions may also intersect. There
may be gods that want to help us personally (Jesus, in some Christian traditions), or a prey animal that becomes a friend rather than food (a family pet).
Two fundamental facts make machines unique agents. First, they are created by other
agents (namely humans). Unlike animals or other humans, machines are artifacts. They are
designed, developed, and programmed for a specific purpose by humans. Second, the key
purpose behind creating robots and AI is the replacement of other agents, often humans or animals.
Initial machines were simple devices used to replace some human labor. However, in the
industrial revolution, people developed machines to replace human workers on a wider scale,
especially by automating factories where tasks were well defined, routinized, and repeated. The Luddites famously destroyed such factory machinery (Binfield, 2015) because they recognized the power of machines for replacement. Although these
people from the 1800s would likely marvel at the sheer complexity of modern life, they would
also nod grimly at how much machines have come to replace humans and other agents, including
factory floors now filled with robots and scribes replaced with AI algorithms.
Modern life is filled with countless machines that continue to replace human workers,
including automated restaurant servers, soldiers, and housekeepers. One recent McKinsey &
Company study suggests that machines will replace between 400 and 800 million workers by
2030 (Manyika et al., 2017). People are well aware of this steady creep of automation. But
although the rise of machines is obvious, what is less obvious is how exactly people think about
machines. Because robots and AI are unique agents—agents of replacement created by other
people—our psychology toward them raises unique questions that we explore throughout this
chapter. Most of these questions revolve around a single question: What kind of mind does a
machine have?
The key feature of agents is that people perceive them to have minds capable of
motivations and desires. Perceiving agents as having minds is the most important element of
making sense of their behavior (Nichols & Stich, 2003; Waytz, Gray, et al., 2010). In fact,
without inferring a mind, behavior can appear as a random sequence of actions. When
researchers at the Yerkes Chimpanzee Sanctuary watch the behavior of chimpanzees, they can
easily decipher it using mental terms: one chimpanzee wanted to groom another chimpanzee, or
one chimpanzee was angry at another chimpanzee. But if they were to eliminate references to mental states, describing chimpanzee behavior would amount to listing a series of actions that fail to cohere into a meaningful narrative.
The perception of mind is especially important for thinking about human agents. When
Aristotle described what it means to be human, he said that the mind (or soul) is “the actuality of
a body that has life” (Britannica, n.d.). Likewise, Confucian philosopher Mencius believed that
the “heart mind” was the essence of humanity and that a moral mind was a precondition to being
human (Chen, 2016). Research in social psychology suggests that people consider mental
capacities such as emotion and reason essential to being human (Haslam, 2006). The quality of
mind is what people believe distinguishes humans from animals and—it seems—from machines.
Machines Replacing Human Minds
What separates artificial intelligence and robots from other machines is that they are not
only agents of replacement but are also explicitly designed to replace human minds. This
idea is central to mathematician and computer scientist Alan Turing's (1950) seminal article,
“Computing machinery and intelligence,” in which he asked, “Can machines think?” In the
article, Turing proposes that the way to answer the question of whether computers are capable of
thought is through a test that he called the “imitation game,” now known as the Turing test.
The Turing test asks whether machines think as humans do. To play the game, a human
interrogator communicates with two other agents—another person and a computer—by asking
each of them questions and reading their responses. If the interrogator cannot distinguish
between the person and the computer, then in Turing’s view, the computer has human
intelligence. In other words, if a machine could convincingly converse like a human, then it
functionally has a human mind. The Turing test has played a central role in the science and
imagination of modern machines because it revolves around the idea of replacement. Machines
can clearly replace human bodies, but could they also be agents of replacement for our thoughts
and feelings?
One early glimpse of such a machine came in 1966, when MIT computer scientist Joseph Weizenbaum created a program designed to simulate conversation with another person. The program, which Weizenbaum named ELIZA
(after Eliza Doolittle from George Bernard Shaw's Pygmalion), applied a simple pattern-
matching procedure to respond to human prompts. One script on which Weizenbaum trained
ELIZA was based on Rogerian psychotherapy, in which the therapist often repeats the patient’s
words back to them in the form of a question. A sample exchange between ELIZA and a user, taken from Weizenbaum's original report, ran as follows:

User: Men are all alike.
ELIZA: IN WHAT WAY
User: They're always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
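To see how little machinery this required, here is a minimal Python sketch of ELIZA-style Rogerian pattern matching (the rules below are illustrative stand-ins, not Weizenbaum's original script):

```python
import re

# A few illustrative Rogerian rules: each pattern captures part of the
# user's statement, and the template reflects it back as a question.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

# Swap first- and second-person words so reflections read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    """Invert pronouns in the captured fragment ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's reflected question, else a stock reply."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(respond("I am unhappy"))             # Why do you say you are unhappy?
print(respond("I feel sad about my job"))  # How long have you felt sad about your job?
```

A few hundred such rules, plus pronoun reflection, were essentially all ELIZA had; there was no understanding behind the questions.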
This methodology was so compelling that it convinced many users that they were
interacting with a human being and helped them feel more understood. ELIZA’s legacy inspired
future programming languages and made people viscerally question whether machines could truly think.
Modern versions of the Turing test show that machines are improving at simulating
humans. Google Duplex made headlines when the program successfully booked a hair
appointment over the phone (Jeff Grubb's Game Mess, 2018), despite not having hair. More
recently, an engineer at Google claimed that the chatbot he was working with must be sentient
because of its humanlike mind (Tiku, 2022). Google fired the engineer in part because others
were not convinced, but his convictions inspired large-scale discussions about whether AI had
humanlike intelligence and whether it therefore deserved moral rights (a point discussed later in this chapter).
The reason why the Turing test endures and why engineers are willing to destroy their
careers to protect chatbots is that the nature of machines is fundamentally uncertain. The second
principle in this chapter is that people are fundamentally unsure about the exact nature of
machines. Are they merely electromechanical devices or all-but-human agents with powerful
minds? In other words, are machines mere machines, or instead accurate facsimiles of the agents
they are designed to replace? This fundamental uncertainty will repeatedly arise throughout the
chapter, and one reason people have trouble resolving this uncertainty is that they automatically treat machines like people.
Reactions to Replacement
When pundits discuss the rise of machines, they frequently wring their hands about the
threat of replacement. They sketch bleak pictures of a future where robots take jobs, replace
relationship partners, and fight wars. Science fiction movies present a similarly apocalyptic
vision of the future. In Terminator, small bands of humans flee the cold onslaught of murder
machines. In Ex Machina, a beautiful but ruthless robot escapes from an underground
bunker after outsmarting her cruel creator. In Blade Runner, people live in neon loneliness and
fear the rebellion of robots they have enslaved. Although everyday people hold less dire visions
of a machine-dominated future, their feelings about the future of robots are negative: people do not welcome a future in which machines play a dominant role.

Being replaced, whether in a relationship or a job, makes people feel devalued and threatened. Like most psychological phenomena, threat is a matter of perception, which means that people can feel threatened by machines even if machines pose no objective danger.
Integrated threat theory outlines two different kinds of perceived threats: realistic threat
and symbolic threat (Stephan & Stephan, 2000). People view realistic threats as endangering the
group’s continued existence and ability to protect itself, harming physical and economic well-
being or political power (Campbell, 1966). Realistic threats can include the threat of genocide,
political disenfranchisement, and economic subordination. Symbolic threats are more abstract
and people see them as endangering the group’s identity, especially by attacking cherished
values or morals. Symbolic threats include being banned from openly practicing one's religion or seeing one's cultural traditions suppressed.
This theory is also instructive for understanding how people react to the rise of machines
because it speaks to situations where people perceive other groups as attempting to replace them.
It is up for debate who or what poses an actual threat to human safety, economic well-being, and moral values. But people clearly feel threatened by the specter of replacement by machines, whatever the reality.
Whether accurate or not, people fear being replaced by machines, especially in the
workforce. The Chapman Survey of American Fears in 2015 found that about 30% of
respondents reported concern with robots replacing the workforce (McClure, 2018). These
results were replicated by Morikawa (2017), who found 30% of workers surveyed were afraid of
their jobs being replaced by AI and robotics—a high number given that robots are likely not yet capable of performing most of these jobs.
Evidence is mixed on the impact of the rise of machines on employment. One study finds
that robot adoption in Spanish manufacturing firms increased net jobs by 10% (Koch et al.,
2021). However, other work finds that one more robot per thousand workers reduces the
employment-to-population ratio by about 0.2 percentage points and wages by 0.37% within
commuting zones (Acemoglu & Restrepo, 2020). Other studies have found that robot adoption
does not lower overall employment—rather, robot adoption does lower low-skilled employment
(Graetz & Michaels, 2018) and manufacturing jobs, but increases business-service jobs (Blanas et al., 2019).
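To put the Acemoglu and Restrepo (2020) coefficient in perspective, here is a back-of-envelope illustration (the commuting-zone population below is hypothetical, not a figure from the study):

```python
# Back-of-envelope illustration of the Acemoglu & Restrepo (2020) estimate:
# one more robot per thousand workers lowers the employment-to-population
# ratio by about 0.2 percentage points. The population is hypothetical.

working_age_population = 1_000_000     # hypothetical commuting zone
effect_per_robot = 0.2 / 100           # 0.2 percentage points, as a proportion
robots_added_per_thousand_workers = 1

jobs_lost = working_age_population * effect_per_robot * robots_added_per_thousand_workers
print(f"Implied decline: about {jobs_lost:,.0f} jobs")  # about 2,000 jobs
```

Even a small-sounding coefficient thus implies thousands of jobs at the scale of a large labor market, which helps explain why these estimates attract so much attention.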
However, even if the actual impact of machines on employment is unclear, people across
industries feel threatened by machines (e.g., Lingmont & Alexiou, 2020), from marketing/sales
providers (Wirtz et al., 2018) to healthcare providers (Reeder & Lee, 2022). Yam and colleagues
(2022) found this perceived threat is prevalent across industries, jobs, and cultures.
In 2018, some 50,000 Las Vegas food service workers went on strike to gain protection from their jobs
becoming automated (Nemo, 2018). Dockworkers in California have also protested the scope of
automation in their jobs (Hsu, 2022). French supermarket workers have protested automation by
blocking doors and tipping over shopping carts (IZA World of Labor, 2019), and truckers in
Missouri have organized to protest self-driving trucks (Robitzski, 2019). People’s concern about
being replaced by machines appears to extend across countries (Wike & Stokes, 2018), with only
mixed evidence of any meaningful cultural differences in this concern (e.g., Bartneck et al., 2005).
Fortunately, at least one positive outcome has emerged from the perceived threat of
machines—it can spur workers to learn new skills. A representative survey across 16 countries
found that workers with more fear of automation reported greater intentions to seek training
outside their workplace (Innocenti & Golin, 2022). Workers also generally advocate for more
training opportunities to protect their jobs against automation (Di Tella & Rodrik, 2020). Perhaps
ironically, Tang et al. (2022) found that employees who work alongside machines were rated more favorably by others.
Of course, not all machine replacement scenarios are equally threatening. In repeated
interactions with a robot, one may find the robot useful in one interaction and feel threatened in
the next (Paetzel et al., 2020). Initial threat perceptions toward machines may also dampen over
time as people come to build trust with them (Correia et al., 2016). Additionally, social influence
can increase the acceptance of robot replacement. People are less threatened by robots when they
(the people) are in a group (Gockley et al., 2006; Michalowski et al., 2006) and are more
accepting of healthcare robots in particular when their peers support using these robots (Alaiad &
Zhou, 2013).
Much variation in people’s perceived threat of robot replacement can, again, be explained
by integrated threat theory, which suggests this anxiety increases when realistic threat is high
(people believe machines are taking material resources such as jobs or wages) or symbolic threat
is high (people believe that robots are negatively affecting their values or identity). In some
cases, both threats occur simultaneously, such as in work showing that people perceive
humanlike robots who can outperform humans as threatening their economic well-being and
their human identity, thus reducing support for robotics research (Yogeeswaran et al., 2016).
Other work has shown that hotel employees who have recently started working alongside robots
experienced increased feelings of both symbolic and realistic threats when they perceived robots
to have greater advantages over humans at the job (Lan et al., 2022). In many cases, however, one of these two threats predominates.
Realistic Threat
Realistic threat seems to explain the scattered findings on how demographic and job
characteristics interact with this aversion. Overall, this emerging literature demonstrates that
people are anxious about robot replacement to the extent they feel that their jobs and earnings
might be threatened. More job-insecure individuals, such as older people, lower-income people,
and members of racial minority groups, are more likely to see their jobs threatened by
automation compared to younger people, higher-income people, and members of racial majority
groups (Ghimire et al., 2020). Similarly, in organizational settings, top managers (who have
relatively secure jobs) are more enthusiastic about promoting the use of machines, whereas
middle managers and frontline employees are far more skeptical (Kolbjørnsrud et al., 2017).
Other work shows that people whose jobs involve a high degree of social interaction
(e.g., sales) report less machine-induced anxiety. This is likely because they feel that these
socio-emotional jobs are less at risk of being done by machines (Coupé, 2019). Similarly,
women appear less threatened by machines (Gallimore et al., 2019), partly because they tend to
occupy jobs involving more socioemotional skills that people view as more “robot-proof.”
In further support of realistic threat being a moderator of robot anxiety, people seem less
concerned about robots fulfilling jobs that need filling (Enz et al., 2011). That is, robots appear
less threatening for jobs with high demand (e.g., health care; Beran et al., 2015) or jobs that
appear too risky for humans to perform (e.g., cleaning up nuclear disasters). The perceived threat
of robots taking jobs likely does not arise when there are too few people to do these jobs or when the jobs are too dangerous for humans to perform.
Symbolic Threat
Even when people do not feel that their own livelihoods are threatened by robots, they might
experience symbolic threat from the emergence of automation. One common circumstance
whereby robots evoke symbolic threat is when they demonstrate superiority to humans or
proficiency in a domain that seems special to humans. The world takes a collective gasp every
time a machine beats a grandmaster at a game like chess or Go, which once seemed to require uniquely human insight.
As noted already, people are more threatened by robots taking socioemotional jobs that
require feeling more than thinking (Waytz & Norton, 2014), underscoring the belief that
experience (i.e., emotion) is distinctively human and lacking in machines (Gray & Wegner,
2011). This aversion to machines doing socio-emotional jobs represents a symbolic threat to what people regard as essentially human.
Despite some acceptance of anthropomorphic robots, studies also show that people
perceive highly humanlike robots as threatening human identity (Vanman & Kappas, 2019;
Złotowski et al., 2017). These findings on threat to identity may also explain why people dislike
the idea of an anthropomorphic robot boss, particularly when it delivers negative feedback (Yam et al., 2022). Although people dislike the idea of machines replacing humans generally, other work reveals that people themselves would prefer to be
replaced by a machine than by another human. A set of studies found that participants as
observers preferred to replace a human at a job with another human rather than with a robot.
However, when participants took the employee's perspective about having their job replaced,
they preferred being replaced by a robot over a human (Granulo et al., 2019). These studies
suggest this reversal of aversion occurs because robots are commonly perceived as “infallible” entities, thus eliciting fewer identity-relevant social comparisons than fellow humans do. In other
words, getting replaced by a robot does not threaten a person’s self-identity as much as getting
replaced by another person does (Tesser et al., 1988). Even though these results differ somewhat
from other studies on robot replacement, they all align with the common idea that people
generally dislike being replaced, especially when it symbolically threatens their identities.
Beyond taking on jobs that people feel are linked to their livelihoods and identities,
machines are entering other spheres that people consider essentially human, such as artistic
creativity. In the quest to define what makes humans special, many would say the defining
attribute is creativity. Animals are smart—dolphins can engage in self-recognition (Reiss &
Marino, 2001), crows can solve puzzles (Taylor et al., 2010), and dogs can read human emotions
(Müller et al., 2015)—but only humans have the creativity to write novels, craft symphonies, and
make inspiring paintings (Fuentes, 2017). Machines may lack “authentic” creativity, but they can
create compelling art, which has begun to spur replacement threat similar to the threat evoked by
job automation generally. The AI image generation tool Midjourney makes pictures so good they
can win art competitions (Ghosh & Fossas, 2022) and be published in national magazines
(Figure 5).
Figure 5
This AI-generated work, “Théâtre D’opéra Spatial,” won the 2022 Colorado State Fair’s annual
art competition. Generated with Midjourney. Copyright 2022 by Jason Allen. Reprinted with
permission.
Controversy ensued over a writer at The Atlantic using such technology to create an
illustration for an article, with artists protesting the use of AI instead of a paid designer to create
the image (Naughton, 2022). Artists are also threatened by AI’s increasing role in generating
comics (Martens & Cardona-Rivera, 2016) and film (Hong et al., 2021), and anyone who cares
about humans’ ability to perceive reality accurately is threatened by AI “deepfake” videos that
portray perfect replicas of real people saying outlandish things (Lyu, 2020).
AI has also become adept at writing like a human. The language model GPT-3
(Generative Pretrained Transformer 3) can mimic the styles of famous writers and generate new
works by them on demand (Elkins & Chun, 2020). Although the quality of GPT-3 output varies
in accuracy and legibility, its best writing is indistinguishable from that of a human. It can create
plots, make jokes, write poetry, and reflect the wealth of knowledge available from its huge set
of training data (Elkins & Chun, 2020). It has co-authored a law review article titled Why
humans will always be better lawyers, drivers, CEOs, presidents, and law professors than
artificial intelligence and robots can ever hope to be (Alarie, Cockfield, & GPT-3, 2021), and its
writing can sometimes surpass that of a typical college student (Elkins & Chun, 2020).
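For readers curious about the mechanics, here is a minimal sketch of this style of text generation using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for GPT-3 (which is accessed through a commercial API); the prompt and sampling settings are arbitrary:

```python
# Minimal sketch of autoregressive text generation with GPT-2, a small
# open-source predecessor of GPT-3. Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Once upon a time, a robot decided to write a novel about"
# The model extends the prompt one token at a time, sampling each token
# from a probability distribution over its vocabulary.
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```

The same sampling loop, scaled up to vastly larger models and training sets, underlies the humanlike prose described above.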
Increasingly, educators are concerned that GPT-3 is so proficient at writing it could enable
students to cheat on tests (Dehouche, 2021). Already, Reddit message boards reveal examples of students using these tools on their assignments. The newer chatbot ChatGPT can likewise mimic human writing, and it raises many important questions. How exactly should we think of this kind of machine, and does it have humanlike intelligence? Out of all the AIs reviewed here, ChatGPT seems to have the most concrete
potential for replacing white-collar jobs, but the scientific study of ChatGPT is only just beginning.
Beyond writing realistic text, AI programs can even produce original songs and may one
day teach music lessons (Zulić, 2019). People generally perceive AI-generated music and other
artwork as lower quality than human-made artwork (Boden, 2007; Ragot et al., 2020; Wilbanks
et al., under review). This is partly because people believe AI-generated art lacks emotional
expression and uniqueness (Boden, 2007). People enjoy art that connects them to the artist's
mind, and the lack of perceived mind in AI also leads people to view AI-generated art as
inauthentic and incapable of reflecting true experience (Wilbanks et al., under review). However,
people who are already accepting of AI creativity evaluate AI music more positively (Hong et
al., 2021), and if AI-generated art continues to expand into the mainstream, people may soon grow more accepting of it.
One of the most essentially human tasks is physical intimacy, and yet robotic sex dolls
have begun to enter this realm, with some even designed to sense human emotions (Belk, 2022).
Some seem to perceive robot lovers as equivalent to human lovers as well, with one survey
revealing that 42% of men and 52.7% of women believe that sex with a robot would be cheating
(Brandon et al., 2022). In another study, women were asked to describe their reactions to their
partner having sex with a human woman versus a robot, and they reported equivalent scores on
some dimensions of jealousy between the two scenarios (Szczuka & Krämer, 2018). More
research is needed to understand everyday people’s feelings about sex with machines, but
theoretical discussion has already begun to examine the relationship between machine intimacy
and slavery, prostitution, autonomy, and human agency (Devlin, 2015; Richardson, 2016).
As strange as it is to think about machines fulfilling carnal human urges, many might find
it even stranger that machines could replace another essentially human role—spiritual leaders. In
most religions, humans hold a special place in God’s cosmic order, especially humans who serve
God in leadership roles as priests, pastors, rabbis, imams, and nuns. It may seem hard to imagine
a machine filling this role, but already Mindar, a robotic priest, has taken over the job of giving
sermons in one of the largest temples in Japan. A preliminary study conducted in this very
temple (Jackson, Yam, Tang, Liu, & Shariff, under review) shows that although most visitors
liked Mindar, they also donated less to the temple (a custom for temple visitors in Japan)
compared with visitors who observed a human priest giving sermons. Interestingly, those with the strongest religious conviction were unaffected by such exposure, suggesting that religiosity may buffer against the influence of robotic clergy.
The most obvious consequence of robot replacement is that it produces feelings of threat,
but how does this threat affect the fabric of society? There are three possibilities. The first
possibility is that this threat changes very little. Social robots and artificial intelligence are far
from the first technologies to induce feelings of threat, with cars, the printing press, and even
recorded music all predicted to produce society's downfall. John Philip Sousa (1906) railed against recorded music, predicting “a marked deterioration in American music and musical taste, an interruption in the musical development of the country,
and a host of other injuries to music in its artistic manifestations” (p. 278). Of course, these
alarmist predictions typically do not manifest, as people adapt to technologies that become commonplace.
The second possibility is that the rise of robots will tear society apart. Unlike previous
technology panics, social robots’ ability to replace humans means the threat they pose to material
resources (e.g., jobs) and to core values around work could activate not only disdain toward
robots as a social group (as suggested by integrated threat theory; Stephan & Stephan, 2000), but
also toward other groups seen to pose similar threats. Some preliminary evidence for this pattern
comes from work showing that exposure to automation fosters negative sentiment toward
immigrants. Such exposure makes people feel that this group (immigrants) threatens realistic
resources and symbolic values in a way that robots might (Gamez-Djokic & Waytz, 2020). Other
work has found that people at greater risk of having their jobs replaced by automation oppose
immigration at higher rates (Wu, 2022a), and that automation threat makes people support
policies that restrict immigration and foreign goods (Wu, 2022b). In general, it does appear that
people exposed to automation and machines are more likely to be disengaged from their work
and engage in social undermining toward their colleagues, presumably to safeguard themselves from unemployment (Yam, Tang, et al., 2022). These studies show how robot replacement could strain the broader social fabric.
A third possibility is more optimistic: the rise of robots could bring about greater
cooperation between groups. This possibility rests on the idea that robots function as a threat to
all humans, which therefore facilitates the recategorization of any two potentially antagonistic
human groups into one that shares a common identity (humans) as predicted by the common
ingroup identity model (Gaertner et al., 1993). Support for this possibility comes from work
showing that robots could reduce prejudice by highlighting commonalities between all humans
(Jackson et al., 2020). This work found that anxiety about the rising robot workforce predicted
less anxiety about human outgroups, and priming the salience of a robot workforce reduced
prejudice and increased acceptance toward outgroups. In an economic simulation, the presence
of robots reduced racial discrimination in wages. This work also suggests that as robot workers
become more salient, human intergroup differences—including racial and religious differences—
may seem less important, fostering a perception of a common human identity (i.e.,
“panhumanism”).
More work is needed to understand how people react to the possibility of replacement by
machines, but existing work demonstrates consistent feelings of threat. Even if machines are not
actually a threat to us, these feelings of threat might spell trouble for the fabric of society…or
perhaps not. People may adapt to the growing roles of machines, and time will tell what the future holds.
Future Directions
Machines are ever advancing, but so is research on machines, and many fruitful areas of
research are expected to emerge in the years to come, several of which are suggested here. The
first suggested area of future research is simply more collaboration between psychologists and technologists. The goal of technologists is often to enhance efficiency and effectiveness. The team of engineers and
developers behind Google Maps, for example, seeks first and foremost to provide accurate
location information that enables tourists to travel easier, small businesses to thrive, and people
to navigate quickly in crises (e.g., to find a hospital; Life at Google, 2021). These goals are
admirable, and they have made Google Maps a valuable tool. Psychologists also try to help
people but are often focused on more fundamental questions of understanding how people think
about technology, what they expect from technology, how they use technology, and what the
social effects of technology are. Although today these issues tend to be lumped under the domain
of “user experience,” this is where a more formal social-psychological approach can be
extremely valuable.
Second, although much of this chapter explored people’s reactions to machines replacing
humans broadly, an additional vital area to study is how novel technology has supplanted the
human mind. The philosopher Andy Clark argues that technologies like robots and AI represent
part of the “extended mind” to which people outsource cognition, just like using a shopping list
to remember what to get at the grocery store or a calculator to do math (Clark & Chalmers,
1998). As an example of AI functioning as part of the extended mind, when people use Google
to search for information, they fail to distinguish between knowledge stored in their own memories and knowledge stored online.
Other studies have shown that merely searching for information on the internet leads
people to feel like they understand this information better (Fisher, Goddu, & Keil, 2015), as
though the technology has supplanted their personal cognitive capacity. This work builds on
initial research (Sparrow, Liu, & Wegner, 2011) demonstrating that when people expect to have
access to Google, they exhibit poorer memory recall, ostensibly because they have outsourced
their memory to the search engine. Although the reliability of this particular effect has been
questioned (Camerer et al., 2018), it is certainly true that the rise of machines has generally
altered how much we consider our cognition to reside solely within our minds.
One could argue that machines are not merely replacing our cognitive capacities but
augmenting them, improving human cognition. A compelling area for future research is on
transhumanism and, more broadly, on how technology might enhance aspects of people's physical and
mental selves. Questions for social psychology to answer include not only whether such augmentation is possible but also whether it is merely “perceived” (resulting from
misperceptions, as in the case of people “mistaking” the internet’s knowledge for their own).
Future work should also study people’s views on whether machine augmentation is
morally acceptable. People morally oppose strength-enhancement drugs (Landy, Walco, &
Bartels, 2017) and cognitive-enhancement drugs (Fitz, Nadler, Manogaran, Chong, & Reiner,
2014; Scheske & Schnall, 2012), which means that they might similarly view technological
augmentation as morally wrong or harmful (Schein & Gray, 2018). Some work has suggested
that people morally approve of neurotechnological treatments that alleviate deficits or illness, but disapprove of enhancements that push abilities to superhuman levels (Koverola et al., 2022). In addition, given that people oppose technologies
that play a role in processes related to human life and death (Waytz & Young, 2019), they might
apply similar moral opposition to technology that replaces other human functions as well.
When machines replace our everyday interactions with other people—for example, an app that gives us directions rather than a human whom we stop on the street to ask
for directions—there are likely significant downstream effects. Given that interacting with others
through machines may decrease sociability (i.e., capacities for emotion recognition, empathy,
perspective taking, and emotional intelligence; Waytz & Gray, 2018), interacting with machines
instead of people will affect our social abilities even more potently.
The rise of machines is likely to affect political behavior and cognition as well, as studies
have begun to show. The spread of automation can influence support for far-right political
parties in both the United States and Europe (Anelli, Colantone, & Stanig, 2021; Dal Bó, Finan,
Folke, Persson, & Rickne, 2018; Frey, Berger, & Chen, 2018; Zhen et al., 2017) in part because
the experience of automation as a threat shifts people’s political leanings toward conservatism.
Other work on the societal impact of robotics has examined how exposure to automation affects
support for redistributive economic policies, such as a universal basic income—this work has
found mixed effects (Busemeyer & Sahm, 2021; Gallego, Kuo, Fernández-Albertos, & Manzano, 2022;
Kurer & Hausermann, 2021; Thewissen & Rueda, 2019), suggesting a ripe area for future
exploration.
Beyond politics, other work has begun examining how artificial intelligence and robots
are not only replacing human beings but also taking on some of the properties of gods,
thereby reducing the global importance of religion (Jackson, Yam, Tang, Sibley, & Waytz, under
review). If our gaps in knowledge are being filled by machines, then God is ousted from the
gaps. Perhaps one day people will predominantly view machines as divine creations, as
mechanical emissaries of God. Once machines have intelligence that humans cannot fathom, it is easy to imagine people treating the machines themselves as divine.
Finally, one essential area of future research is understanding whether all the excitement,
anxiety, and novelty examined in this chapter might become quaint in just a few years’ time. As
social robots and artificial intelligence become increasingly integrated into our lives, might their
overall psychological effect on humans shift or wane? Consider the uncanny valley. Adults and
older children, but not younger kids, find machines that exhibit feelings creepy, suggesting that
the uncanny valley is learned (Brink, Gray, & Wellman, 2017). This means that this phenomenon
could also be unlearned. Gopnik (2019) raises the possibility that the uncanny valley
phenomenon could cease to exist as future generations become more exposed to smart
technology and thus more comfortable with technology that appears mentally capable.
As people grow more accustomed to advanced technology, its mere existence will likely
engender more positive feelings (Eidelman, Crandall, & Pattershall, 2009). Already, research has
shown that simply describing technology as originating before (vs. after) one’s birth makes
people evaluate it more favorably: technology that appears longstanding feels more like
the status quo, toward which people are positively biased (Smiley & Fisher, 2022). At some
point in the future, perhaps soon, social robots and artificial intelligence—like the car, the telephone, and the personal computer—might become so common and embedded in our lives
that rather than having positive or negative effects, these technologies will have little
psychological effect at all. If people entirely habituate to machines, then within a generation, this
entire chapter could be obsolete, just like the electromechanical calculator and tape player.
Conclusion
How do humans make sense of machines? In general, people understand our social world
through agent-based cognition, categorizing entities based on the kinds of minds they seem to
have. Machines represent a special kind of agent, an agent of replacement, explicitly designed by
people to replace other agents—typically people. The goal of some of the earliest intelligent
machines was to replace human minds, and modern machines are getting ever closer to this goal.
Because machines are agents of replacement, they create a fundamental ambiguity about whether
people should think of them more as just a machine, or as the human role they are replacing—
especially when their appearance and behavior place them within the questionable zone (QZ).
Modern machines can serve as coworkers, teammates, nurses, and restaurant servers. On
the one hand, these machines can make our lives easier and more efficient, but people are not
always excited about machines replacing people. Sometimes, they are downright averse to (or creeped out by) machines that replace human minds and jobs, especially when those jobs involve moral decisions. To the extent machines do decide on moral matters, people want them to weigh the right factors in those decisions.
The rise of machines will likely change the fabric of society and alter what we think of
art, sex, and maybe even God. One day, machines might even replace scientists. Perhaps the next edition of this chapter will be written by a machine.
References
Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128(6), 2188–2244.
Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable
https://doi.org/10.1109/ACCESS.2018.2870052
Alaiad, A., & Zhou, L. (2013, August). Patient Behavioural Intention toward Adopting
https://www.researchgate.net/publication/269987741_Patient_Behavioural_Intention_towar
d_Adopting_Healthcare_Robots
Alarie, B., Cockfield, A., & GPT-3. (2021). Will Machines Replace Us? Machine-Authored
Texts and the Future of Scholarship. Law, Technology and Humans, 3(2), Article 2.
https://doi.org/10.5204/lthj.2089
Allen, R., & Choudhury, P. (Raj). (2022). Algorithm-Augmented Work and Domain Experience:
The Countervailing Forces of Ability and Aversion. Organization Science, 33(1), 149–169.
https://doi.org/10.1287/orsc.2021.1554
Andrist, S., Mutlu, B., & Tapus, A. (2015). Look Like Me: Matching Robot Personality via Gaze
Anelli, M., Colantone, I., & Stanig, P. (2021). Individual vulnerability to industrial robot
adoption increases support for the radical right. Proceedings of the National Academy of Sciences.
Appel, M., Izydorczyk, D., Weber, S., Mara, M., & Lischetzke, T. (2020). The uncanny of mind
Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193(5), 31–35.
https://doi.org/10.1038/scientificamerican1155-31
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I.
https://doi.org/10.1038/s41586-018-0637-6
Bainbridge, W. A., Hart, J. W., Kim, E. S., & Scassellati, B. (2011). The Benefits of Interactions
Barrett, J. L. (2000). Exploring the natural foundations of religion. Trends in Cognitive Sciences,
Bartneck, C., & Hu, J. (2008). Exploring the abuse of robots. Interaction Studies, 9(3), 415–433.
https://doi.org/10.1075/is.9.3.04bar
Bartneck, C., Nomura, T., Kanda, T., Suzuki, T., & Kennsuke, K. (2005). A cross-cultural study
https://doi.org/10.13140/RG.2.2.35929.11367
Bartz, J. A., Tchalova, K., & Fenerci, C. (2016). Reminders of Social Connection Can Attenuate
https://doi.org/10.1177/0956797616668510
Baxter, P., de Greeff, J., & Belpaeme, T. (2013). Do children behave differently with a social
http://hdl.handle.net/1854/LU-8197719
Belk, R. (2022). Artificial Emotions and Love and Sex Doll Service Workers. Journal of Service
project/publications/collected-works-jeremy-bentham/comment-commentaries-and-
fragment-government
Beran, T. N., Ramirez-Serrano, A., Vanderkooi, O. G., & Kuhn, S. (2015). Humanoid robotics in
Berger, B., Adam, M., Rühr, A., & Benlian, A. (2021). Watch Me Improve—Algorithm
Aversion and Demonstrating the Ability to Learn. Business & Information Systems
Bevilacqua, M. (2018, December 19). Uber Was Warned Before Self-Driving Car Crash That
https://www.bicycling.com/news/a25616551/uber-self-driving-car-crash-cyclist/
Bickmore, T. W., Vardoulakis, L. M. P., & Schulman, D. (2013). Tinker: A relational agent
https://doi.org/10.1007/s10458-012-9216-7
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34.
Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding Robots Responsible: The
https://doi.org/10.1016/j.tics.2019.02.008
Bigman, Y. E., Wilson, D., Arnestad, M. N., Waytz, A., & Gray, K. (2022). Algorithmic
Bigman, Y. E., Yam, K. C., Marciano, D., Reynolds, S. J., & Gray, K. (2021). Threat of racial
Binfield, K. (Ed.). (2015). Writings of the Luddites. Johns Hopkins University Press.
https://doi.org/10.1353/book.98247
Blanas, S., Gancia, G., & Lee, S. Y. (2019). Who is afraid of machines? (CEPR Discussion Paper No. 13802).
https://econpapers.repec.org/paper/cprceprdp/13802.htm
Dal Bó, E., Finan, F., Folke, O., Persson, T., & Rickne, J. (2018). Economic losers and political winners: Sweden's radical right. Working paper, UC Berkeley.
Boden, M. A. (2007). Authenticity and computer art. Digital Creativity, 18(1), 3–10.
https://doi.org/10.1080/14626260701252285
Bohannon, J. (2015). The synthetic therapist. Science, 349(6245), 250–251. https://doi.org/10.1126/science.349.6245.250
Bonezzi, A., & Ostinelli, M. (2021). Can algorithms legitimize discrimination? Journal of
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles.
Booth, S., Tompkin, J., Pfister, H., Waldo, J., Gajos, K., & Nagpal, R. (2017). Piggybacking
https://doi.org/10.1145/2909824.3020211
Brandon, M., Shlykova, N., & Morgentaler, A. (2022). Curiosity and other attitudes towards sex
robots: Results of an online survey. Journal of Future Robot Life, 3(1), 3–16.
https://doi.org/10.3233/FRL-200017
Brandstetter, J., Rácz, P., Beckner, C., Sandoval, E. B., Hay, J., & Bartneck, C. (2014). A peer
pressure experiment: Recreation of the Asch conformity experiment with robots. 2014
https://doi.org/10.1109/IROS.2014.6942730
Brink, K. A., Gray, K., & Wellman, H. M. (2017). Creepiness creeps in: Uncanny valley feelings
http://dx.doi.org/10.1111/cdev.12999
Britannica. (n.d.). Political theory of Aristotle. In Britannica. Retrieved October 3, 2022, from
https://www.britannica.com/biography/Aristotle/Political-theory
Brščić, D., Kidokoro, H., Suehiro, Y., & Kanda, T. (2015). Escaping from children’s abuse of
Broadbent, E. (2017). Interactions with robots: the truths we reveal about ourselves. Annual
043958
Burgoon, J. K., Bonito, J. A., Bengtsson, B., Cederberg, C., Lundeberg, M., & Allspach, L.
https://doi.org/10.1016/S0747-5632(00)00029-7
Burke, A. (2019). Occluded algorithms. Big Data & Society, 6(2).
https://doi.org/10.1177/2053951719858743
Burr, C., Cristianini, N., & Ladyman, J. (2018). An Analysis of the Interaction Between
Intelligent Software Agents and Human Users. Minds and Machines, 28(4), 735–774.
https://doi.org/10.1007/s11023-018-9479-0
Busemeyer, M., & Sahm, A. (2021). Social Investment, Redistribution or Basic Income?
Exploring the Association Between Automation Risk and Welfare State Attitudes in
Byrne, D., & Griffitt, W. (1969). Similarity and awareness of similarity of personality
Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from
https://doi.org/10.1126/science.aal4230
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., Kirchler, M.,
Nave, G., Nosek, B. A., Pfeiffer, T., Altmejd, A., Buttrick, N., Chan, T., Chen, Y., Forsell,
E., Gampa, A., Heikensten, E., Hummer, L., Imai, T., … Wu, H. (2018). Evaluating the
replicability of social science experiments in Nature and Science between 2010 and 2015.
Campbell, C. A. (1966). The Discipline of the Cave. Philosophical Books, 7(3), 10–12.
https://doi.org/10.1111/j.1468-0149.1966.tb02632.x
Caporael, L. R. (1986). Anthropomorphism and mechanomorphism: Two faces of the human
5632(86)90004-X
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-Dependent Algorithm Aversion.
https://doi.org/10.1177/0022243719851788
Ceh, S., & Vanman, E. (2018). The Robots are Coming! The Robots are Coming! Fear and
Chen, X. (2016). The problem of mind in Confucianism. Asian Philosophy, 26(2), 166–181.
https://doi.org/10.1080/09552367.2016.1165790
Chernyak, N., & Gary, H. E. (2016). Children’s cognitive and behavioral reactions to an
autonomous versus controlled social robot dog. Early Education and Development, 27(8),
1175–1189. https://doi.org/10.1080/10409289.2016.1158611
Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and
Conitzer, V., Sinnott-Armstrong, W., Borg, J. S., Deng, Y., & Kramer, M. (2017). Moral
https://doi.org/10.1609/aaai.v31i1.11140
Correia, F., Alves-Oliveira, P., Maia, N., Ribeiro, T., Petisca, S., Melo, F. S., & Paiva, A. (2016).
Just follow the suit! Trust in human-robot interactions during card game playing. 2016 25th
Correia, F., Alves-Oliveira, P., Ribeiro, T., Melo, F., & Paiva, A. (2017). A social robot as a card
game player. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive
Correia, F., Mascarenhas, S., Prada, R., Melo, F. S., & Paiva, A. (2018). Group-based Emotions
https://doi.org/10.1145/3171221.3171252
Coupé, T. (2019). Automation, job characteristics and job insecurity. International Journal of
Creed, C., Beale, R., & Cowan, B. (2015). The impact of an embodied agent's emotional
Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18,
299–309. https://doi.org/10.1007/s10676-016-9403-3
Dang, J., & Liu, L. (2021). Robots are friends as well as foes: Ambivalent attitudes toward
mindful and mindless AI robots in the United States and China. Computers in Human
Dauth, W., Findeisen, S., Suedekum, J., & Woessner, N. (2018). Adjusting to Robots: Worker-
Level Evidence (Research Paper No. 013; Institute Working Paper (Federal Reserve Bank of
Minneapolis. https://doi.org/10.21034/iwp.13
de Graaf, M. M. A., & Malle, B. F. (2019). People’s Explanations of Robot Behavior Subtly
De Jong, B. A., Dirks, K. T., & Gillespie, N. (2016). Trust and team performance: A meta-
analysis of main effects, moderators, and covariates. Journal of Applied Psychology, 101(8),
1134–1150. https://doi.org/10.1037/apl0000110
de Melo, C. M., Marsella, S., & Gratch, J. (2018). Social decisions and fairness change when
people’s interests are represented by autonomous agents. Autonomous Agents and Multi-
de Visser, E. J., Monfort, S. S., Goodyear, K., Lu, L., O’Hara, M., Lee, M. R., Parasuraman, R.,
& Krueger, F. (2017). A Little Anthropomorphism Goes a Long Way: Effects of Oxytocin
de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F.,
https://doi.org/10.1037/xap0000092
de Visser, E., & Parasuraman, R. (2011). Adaptive Aiding of Human-Robot Teaming: Effects of
https://doi.org/10.1177/1555343411410160
DeFranco, J. F., Voas, J., & Kshetri, N. (2022). Algorithms: Society’s Invisible Puppeteers.
https://doi.org/10.3354/esep00195
Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013). Impact of robot
failures and feedback on real-time trust. 2013 8th ACM/IEEE International Conference on
Descartes, R. (2008). A Discourse on the Method: Of Correctly Conducting One’s Reason and
Seeking Truth in the Sciences (I. Maclean, Trans.). Oxford University Press.
DeSteno, D., Breazeal, C., Frank, R. H., Pizarro, D., Baumann, J., Dickens, L., & Lee, J. J.
Devlin, K. (2015, September 17). In defence of sex machines: Why trying to ban sex robots is
trying-to-ban-sex-robots-is-wrong-47641
Dietvorst, B. J., & Bartels, D. M. (2022). Consumers Object to Algorithms Making Morally
Dietvorst, B. J., & Bharti, S. (2020). People Reject Algorithms in Uncertain Decision Domains
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously
avoid algorithms after seeing them err. Journal of Experimental Psychology: General,
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming Algorithm Aversion: People
Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management
DiSalvo, C. F., Gemperle, F., Forlizzi, J., & Kiesler, S. (2002). All robots are not created equal:
The design and perception of humanoid robot heads. Proceedings of the 4th Conference on
https://doi.org/10.1145/778712.778756
Di Tella, R., & Rodrik, D. (2020). Labour Market Shocks and the Demand for Trade Protection:
https://doi.org/10.1093/ej/ueaa006
Doyle, C. M., & Gray, K. (2020). How people perceive the minds of the dead: The importance of
https://doi.org/10.1016/j.cognition.2020.104308
Efendić, E., Van de Calseyde, P. P. F. M., & Evans, A. M. (2020). Slow response times
undermine trust in algorithmic (but not human) predictions. Organizational Behavior and
Eidelman, S., Crandall, C. S., & Pattershall, J. (2009). The existence bias. Journal of Personality
Elkins, K., & Chun, J. (2020). Can GPT-3 Pass a Writer's Turing Test? Journal of Cultural Analytics, 5(2).
Enz, S., Diruf, M., Spielhagen, C., Zoll, C., & Vargas, P. A. (2011). The Social Role of Robots
Epley, N., Waytz, A., Akalis, S., & Cacioppo, J. T. (2008). When We Need A Human: Motivational Determinants of Anthropomorphism. Social Cognition, 26(2), 143–155. https://doi.org/10.1521/soco.2008.26.2.143
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864
Eyssel, F., & Kuchenbrandt, D. (2011). Manipulating anthropomorphic inferences about NAO: The role of situational and dispositional aspects of effectance motivation. 2011 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). https://doi.org/10.1109/ROMAN.2011.6005233
Eyssel, F., & Kuchenbrandt, D. (2012). Social categorization of social robots: Anthropomorphism as a function of robot group membership. British Journal of Social Psychology, 51(4), 724–731.
Eyssel, F., Kuchenbrandt, D., & Bobinger, S. (2011). Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism. Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://doi.org/10.1145/1957656.1957673
Eyssel, F., Kuchenbrandt, D., Bobinger, S., de Ruiter, L., & Hegel, F. (2012). "If you sound like me, you must be more human": On the interplay of robot and user features on human-robot acceptance and anthropomorphism. Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://doi.org/10.1145/2157689.2157717
Eyssel, F., & Reich, N. (2013). Loneliness makes the heart grow fonder (of robots)—On the effects of loneliness on psychological anthropomorphism. 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://doi.org/10.1109/HRI.2013.6483531
Fasola, J., & Matarić, M. J. (2013). A socially assistive robot exercise coach for the elderly. Journal of Human-Robot Interaction, 2(2), 3–32.
Filiz, I., Judek, J. R., Lorenz, M., & Spiwoks, M. (2022). Algorithm Aversion as an Obstacle in
the Establishment of Robo Advisors. Journal of Risk and Financial Management, 15(8),
Article 8. https://doi.org/10.3390/jrfm15080353
Fisher, M., Goddu, M. K., & Keil, F. C. (2015). Searching for explanations: How the Internet inflates estimates of internal knowledge. Journal of Experimental Psychology: General, 144(3), 674–687.
Fiske, S. T., Cuddy, A. J. C., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83. https://doi.org/10.1016/j.tics.2006.11.005
Fitz, N. S., Nadler, R., Manogaran, P., Chong, E. W. J., & Reiner, P. B. (2014). Public attitudes toward cognitive enhancement. Neuroethics, 7(2), 173–188. https://doi.org/10.1007/s12152-013-9190-z
Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349–379.
Fogg, B. J., & Nass, C. (1997). Silicon sycophants: The effects of computers that flatter.
https://doi.org/10.1006/ijhc.1996.0104
Franklin, C. E. (2015). Everyone thinks that an ability to do otherwise is necessary for free will and moral responsibility. Philosophical Studies, 172(8), 2091–2107. https://doi.org/10.1007/s11098-014-0399-4
Fraune, M. R., Nishiwaki, Y., Sabanović, S., Smith, E. R., & Okada, M. (2017). Threatening Flocks and Mindful Snowflakes: How Group Entitativity Affects Perceptions of Robots. Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Frey, C. B., Berger, T., & Chen, C. (2018). Political machinery: Did robots swing the 2016 US presidential election? Oxford Review of Economic Policy, 34(3), 418–442. https://doi.org/10.1093/oxrep/gry007
Fuentes, A. (2017). The Creative Spark: How Imagination Made Humans Exceptional. Penguin
Publishing Group.
Gallego, A., Kuo, A., Fernández-Albertos, P., & Manzano, D. (2022). Technological risk and policy preferences. Comparative Political Studies.
Gallimore, D., Lyons, J. B., Vo, T., Mahoney, S., & Wynne, K. T. (2019). Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot. Frontiers in Psychology, 10, 1–9. https://doi.org/10.3389/fpsyg.2019.00482
Gamez, P., Shank, D. B., Arnold, C., & North, M. (2020). Artificial virtue: The machine
question and perceptions of moral character in artificial moral agents. AI & SOCIETY, 35,
795–809. https://doi.org/10.1007/s00146-020-00977-1
Gamez-Djokic, M., & Waytz, A. (2020). Concerns About Automation and Negative Sentiment Toward Immigration. Psychological Science, 31(8), 987–1000. https://doi.org/10.1177/0956797620929977
Gardner, W. L., Pickett, C. L., Jefferis, V., & Knowles, M. (2005). On the Outside Looking In:
Loneliness and Social Monitoring. Personality and Social Psychology Bulletin, 31(11),
1549–1560. https://doi.org/10.1177/0146167205277208
Gates, S. W., Perry, V. G., & Zorn, P. M. (2002). Automated underwriting in mortgage lending:
Good news for the underserved? Housing Policy Debate, 13(2), 369–391.
https://doi.org/10.1080/10511482.2002.9521447
Gaudiello, I., Zibetti, E., Lefort, S., Chetouani, M., & Ivaldi, S. (2016). Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior, 61, 633–655. https://doi.org/10.1016/j.chb.2016.03.057
Ghimire, R., Skinner, J., & Carnathan, M. (2020). Who perceived automation as a threat to their jobs in metro Atlanta: Results from the 2019 Metro Atlanta Speaks survey. Technology in Society.
Ghosh, A., & Fossas, G. (2022). Can There be Art Without an Artist?
http://arxiv.org/abs/2209.07667
Glikson, E., & Woolley, A. W. (2020). Human Trust in Artificial Intelligence: Review of Empirical Research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
Gnambs, T., & Appel, M. (2019). Are robots becoming unpopular? Changes in attitudes towards autonomous robotic systems in Europe. Computers in Human Behavior, 93, 53–61. https://doi.org/10.1016/j.chb.2018.11.045
Gockley, R., Simmons, R., & Forlizzi, J. (2006). Modeling Affect in Socially Interactive Robots. ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication.
Gombolay, M. C., Gutierrez, R. A., Clarke, S. G., Sturla, G. F., & Shah, J. A. (2015). Decision-making authority, team efficiency and human worker satisfaction in mixed human–robot teams. Autonomous Robots, 39(3), 293–312.
Gopnik, A. (2019, January 10). A Generational Divide in the Uncanny Valley. Wall Street Journal. https://www.wsj.com/articles/a-generational-divide-in-the-uncanny-valley-11547138712
GovTech Singapore. (n.d.). ‘Ask Jamie’ Virtual Assistant. GovTech Singapore. Retrieved
Graetz, G., & Michaels, G. (2018). Robots at Work. The Review of Economics and Statistics, 100(5), 753–768.
Granulo, A., Fuchs, C., & Puntoni, S. (2019). Psychological reactions to human versus robotic job replacement. Nature Human Behaviour, 3(10), 1062–1069. https://doi.org/10.1038/s41562-019-0670-y
Granulo, A., Fuchs, C., & Puntoni, S. (2021). Preference for Human (vs. Robotic) Labor is Stronger in Symbolic Consumption Contexts. Journal of Consumer Psychology, 31(1), 72–80. https://doi.org/10.1002/jcpy.1181
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of Mind Perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
Gray, K., Knickman, T. A., & Wegner, D. M. (2011). More dead than dead: Perceptions of persons in the persistent vegetative state. Cognition, 121(2), 275–280. https://doi.org/10.1016/j.cognition.2011.06.014
Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143(4), 1600–1615.
Gray, K., & Wegner, D. M. (2010). Blaming God for Our Pain: Human Suffering and the Divine Mind. Personality and Social Psychology Review, 14(1), 7–16. https://doi.org/10.1177/1088868309350299
Gray, K., & Wegner, D. M. (2011). Dimensions of Moral Emotions. Emotion Review, 3(3), 258–
260. https://doi.org/10.1177/1754073911402388
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130. https://doi.org/10.1016/j.cognition.2012.06.007
Gray, K., Young, L., & Waytz, A. (2012). Mind Perception Is the Essence of Morality. Psychological Inquiry, 23(2), 101–124.
Guthrie, S. E. (1995). Faces in the Clouds: A New Theory of Religion. Oxford University Press.
Hallevy, G. (2010a). "I, Robot – I, Criminal"—When Science Fiction Becomes Reality: Legal Liability of AI Robots committing Criminal Offenses. Syracuse Science & Technology Law Reporter.
Hallevy, G. (2010b). The Criminal Liability of Artificial Intelligence Entities—From Science
Fiction to Legal Social Control. Akron Intellectual Property Journal, 4(2, Article 1), 171–
202.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527.
Hanson Robotics. (n.d.). Sophia. Hanson Robotics. Retrieved February 20, 2023, from
https://www.hansonrobotics.com/sophia/
Harvey, N., & Fischer, I. (1997). Taking Advice: Accepting Help, Improving Judgment, and Sharing Responsibility. Organizational Behavior and Human Decision Processes, 70(2), 117–133. https://doi.org/10.1006/obhd.1997.2697
Haslam, N., Bain, P., Douge, L., Lee, M., & Bastian, B. (2005). More human than you: Attributing humanness to self and others. Journal of Personality and Social Psychology, 89(6), 937–950.
Haslam, N., Bastian, B., & Bissett, M. (2004). Essentialist Beliefs about Personality and Their Implications. Personality and Social Psychology Bulletin, 30(12), 1661–1673. https://doi.org/10.1177/0146167204271182
Haslam, N., Loughnan, S., & Holland, E. (2013). The psychology of humanness. In S. J. Gervais (Ed.), Objectification and (De)Humanization (Nebraska Symposium on Motivation, Vol. 60, pp. 25–51). Springer Science + Business Media. https://doi.org/10.1007/978-1-4614-6959-9_2
Hebb, D. O. (1946). Emotion in man and animal: An analysis of the intuitive processes of recognition. Psychological Review, 53(2), 88–106.
Heider, F., & Simmel, M. (1944). An Experimental Study of Apparent Behavior. The American Journal of Psychology, 57(2), 243–259.
Herlocker, J. L., Konstan, J. A., Terveen, L. G., & Riedl, J. T. (2004). Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems, 22(1), 5–53. https://doi.org/10.1145/963770.963772
Hertz, N., Shaw, T., de Visser, E. J., & Wiese, E. (2019). Mixing It Up: How Mixed Groups of
Hertz, N., & Wiese, E. (2018). Under Pressure: Examining Social Conformity With Computer and Robot Groups. Human Factors, 60(8), 1207–1218. https://doi.org/10.1177/0018720818788473
Heßler, P. O., Pfeiffer, J., & Hafenbrädl, S. (2022). When Self-Humanization Leads to Algorithm Aversion. Business & Information Systems Engineering. https://doi.org/10.1007/s12599-022-00754-y
Hewstone, M., Rubin, M., & Willis, H. (2002). Intergroup Bias. Annual Review of Psychology, 53, 575–604.
Hijazi, A., Ferguson, C. J., Richard Ferraro, F., Hall, H., Hovee, M., & Wilcox, S. (2019).
https://doi.org/10.1007/s12144-017-9684-7
Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.
Hochberg, M. (2016, December 22). Everything you need to know about the Bionic Bar on Royal Caribbean's Harmony of the Seas. Royal Caribbean Blog. https://www.royalcaribbeanblog.com/2016/12/22/everything-you-need-know-about-the-bionic-bar-royal-caribbeans-harmony-of-the-seas
Hoff, K. A., & Bashir, M. (2015). Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
Hong, J. W., Peng, Q., & Williams, D. (2021). Are you ready for artificial Mozart and Skrillex? An experiment testing expectancy violation theory and AI music. New Media & Society.
Hsu, A. (2022, September 8). California dockworkers are worried about losing their good-paying jobs to robots. WFAE. https://www.wfae.org/2022-09-08/california-dockworkers-are-worried-about-losing-their-good-paying-jobs-to-robots
IFR. (2021, October 28). IFR presents World Robotics 2021 reports. International Federation of
Robotics. https://ifr.org/ifr-press-releases/news/robot-sales-rise-again
Innocenti, S., & Golin, M. (2022). Human capital investment and perceived automation risks:
Evidence from 16 countries. Journal of Economic Behavior & Organization, 195, 27–41.
https://doi.org/10.1016/j.jebo.2021.12.027
Ishii, T., & Watanabe, K. (2019). How People Attribute Minds to Non-Living Entities. 2019 11th International Conference on Knowledge and Smart Technology (KST). https://doi.org/10.1109/KST.2019.8687324
Iza World of Labor. (2019, August 29). French supermarket workers protest against automated checkout stations.
Jackson, J. C., Castelo, N., & Gray, K. (2020). Could a rising robot workforce make humans less prejudiced? American Psychologist, 75(7), 969–982. https://doi.org/10.1037/amp0000582
Jackson, J. C., Yam, K. C., Tang, P., Liu, T., & Shariff, A. (under review). Exposure to robot
Jackson, J. C., Yam, K. C., Tang, P., Sibley, C., & Waytz, A. (under review). Machina ex Deux:
Jackson, R. B., & Williams, T. (2019). Language-Capable Robots may Inadvertently Weaken Human Moral Norms. Companion of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007
Jauernig, J., Uhl, M., & Walkowitz, G. (2022). People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency. Philosophy & Technology, 35. https://doi.org/10.1007/s13347-021-00495-y
Jeff Grubb's Game Mess. (2018, May 9). Google Duplex: A.I. Assistant Calls Local Businesses [Video]. YouTube. https://www.youtube.com/watch?v=D5VN56jQMWM
Jensen, K. (2010). Punishment and spite, the dark side of cooperation. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1553), 2635–2650. https://doi.org/10.1098/rstb.2010.0146
Johnsson, I.-M., Nass, C., Harris, H., & Takayama, L. (2005). Matching In-Car Voice with
Driver State: Impact on Attitude and Driving Performance. Driving Assessment Conference,
Jung, M. F., Lee, J. J., DePalma, N., Adalgeirsson, S. O., Hinds, P. J., & Breazeal, C. (2013). Engaging robots: Easing complex human-robot teamwork using backchanneling. Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW), 1555–1566. https://doi.org/10.1145/2441776.2441954
Jung, M. F., Martelaro, N., & Hinds, P. J. (2015). Using Robots to Moderate Team Conflict: The Case of Repairing Violations. Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://doi.org/10.1145/2696454.2696460
Kahn, P. H. Jr., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). "Robovie, you'll have to go into the closet now": Children's social and moral relationships with a humanoid robot. Developmental Psychology, 48(2), 303–314. https://doi.org/10.1037/a0027033
Kahn, P. H., Kanda, T., Ishiguro, H., Gill, B. T., Shen, S., Gary, H. E., & Ruckert, J. H. (2015).
Will people keep the secret of a humanoid robot?: psychological intimacy in HRI.
Kahn, P. H., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., Gary, H. E., Reichert, A. L., Freier, N. G., & Severson, R. L. (2012). Do people hold a humanoid robot morally accountable for the harm it causes? Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://doi.org/10.1145/2157689.2157696
Kant, I. (1785). Grounding for the Metaphysics of Morals ; With, On a Supposed Right to Lie
Karniol, R. (2003). Egocentrism versus protocentrism: The status of self in social prediction. Psychological Review, 110(3), 564–580.
Kätsyri, J., Förger, K., Mäkäräinen, M., & Takala, T. (2015). A review of empirical evidence on different uncanny valley hypotheses: Support for perceptual mismatch as one road to the valley of eeriness. Frontiers in Psychology, 6, 390. https://doi.org/10.3389/fpsyg.2015.00390
Keijsers, M., Kazmi, H., Eyssel, F., & Bartneck, C. (2021). Teaching Robots a Lesson: Determinants of Robot Punishment. International Journal of Social Robotics, 13, 41–54. https://doi.org/10.1007/s12369-019-00608-w
Kidd, C. D., & Breazeal, C. (2008). Robots at home: Understanding long-term human-robot interaction. 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3230–3235. https://doi.org/10.1109/IROS.2008.4651113
Kidokoro, H., Kanda, T., Brščic, D., & Shiomi, M. (2013). Will I bother here? A robot anticipating its influence on pedestrian walking comfort. Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Kim, K. J., Park, E., & Sundar, S. S. (2013). Caregiving role in human–robot interaction: A study of the mediating effects of perceived benefit and social presence. Computers in Human Behavior, 29(4), 1799–1806.
Kim, M.-S., & Kim, E.-J. (2013). Humanoid robots as "The Cultural Other": Are we able to love our creations? AI & SOCIETY, 28, 309–318.
Kim, S. (Sam), Kim, J., Badu-Baiden, F., Giroux, M., & Choi, Y. (2021). Preference for robot service or human service in hotels? Impacts of the COVID-19 pandemic. International Journal of Hospitality Management, 93, 102795. https://doi.org/10.1016/j.ijhm.2020.102795
Kim, E., Paul, R., Shic, F., & Scassellati, B. (2012). Bridging the research gap: Making HRI useful to individuals with autism. Journal of Human-Robot Interaction, 1(1). https://doi.org/10.5898/JHRI.1.1.Kim
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human Decisions and Machine Predictions (Working Paper No. 23180). National Bureau of Economic Research.
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the Age of Algorithms. Journal of Legal Analysis, 10, 113–174.
Köbis, N., Bonnefon, J.-F., & Rahwan, I. (2021). Bad machines corrupt good morals. Nature Human Behaviour, 5(6), 679–685.
Koch, M., Manuylov, I., & Smolka, M. (2021). Robots and Firms. The Economic Journal, 131(638), 2553–2584.
Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2017). Partnering with AI: How organizations
can win over skeptical managers. Strategy & Leadership, 45(1), 37–43.
https://doi.org/10.1108/SL-12-2016-0085
Kondo, Y., Takemura, K., Takamatsu, J., & Ogasawara, T. (2013). A gesture-centric android system for multi-party human-robot interaction. Journal of Human-Robot Interaction, 2(1), 133–151. https://doi.org/10.5898/JHRI.2.1.Kondo
Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., & Nass, C. (2015). Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing, 9(4), 269–275.
Koverola, M., Kunnari, A., Drosinou, M., Palomäki, J., Hannikainen, I. R., Jirout Košová, M., Kopecký, R., Sundvall, J., & Laakasuo, M. (2022). Treatments approved, boosts eschewed: Moral limits of neurotechnological changes. Journal of Experimental Social Psychology.
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., & Kircher, T. (2008). Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE, 3(7), e2597.
Kurer, T., & Häusermann, S. (2021). Automation and social policy: Which policy responses do at-risk workers support? (Welfare Priorities Working Paper Series No. 2). University of Zurich.
Lammer, L., Huber, A., Weiss, A., & Vincze, M. (2014). Mutual Care: How older adults react when they should help their care robot. AISB 2014 - 50th Annual Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour.
Lan, J., Yuan, B., & Gong, Y. (2022). Predicting the change trajectory of employee robot-phobia
Landy, J. F., Walco, D. K., & Bartels, D. M. (2017). What’s wrong with using steroids?
Exploring whether and why people oppose the use of performance enhancing drugs.
https://doi.org/10.1037/pspa0000089
Le Guin, U. K. (1993). The ones who walk away from Omelas. Creative Education.
Leary, M. R., & Baumeister, R. F. (2000). The nature and function of self-esteem: Sociometer theory. Advances in Experimental Social Psychology, 32, 1–62. https://doi.org/10.1016/S0065-2601(00)80003-9
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To Engage or Not to Engage with AI for Critical Judgments: How Professionals Deal with Opacity When Using AI for Medical Diagnosis. Organization Science, 33(1), 126–148.
Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80.
Lee, K. M., Park, N., & Song, H. (2005). Can a Robot Be Perceived as a Developing Creature?
People’s Social Responses Toward It. Human Communication Research, 31(4), 538–563.
https://doi.org/10.1111/j.1468-2958.2005.tb00882.x
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 1–16. https://doi.org/10.1177/2053951718756684
Lee, M., Ruijten, P., Frank, L., de Kort, Y., & IJsselsteijn, W. (2021). People May Punish, But Not Blame Robots. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.
Leite, I., Martinho, C., & Paiva, A. (2013). Social robots for long-term interaction: A survey. International Journal of Social Robotics, 5(2), 291–308. https://doi.org/10.1007/s12369-013-0178-y
Leite, I., McCoy, M., Lohani, M., Ullman, D., Salomons, N., Stokes, C., Rivers, S., & Scassellati, B. (2015). Emotional storytelling in the classroom: Individual versus group interaction between children and robots. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://doi.org/10.1145/2696454.2696481
Leite, I., Pereira, A., & Lehman, J. F. (2017). Persistent memory in repeated child-robot conversations. Proceedings of the 2017 Conference on Interaction Design and Children, 238–247. https://doi.org/10.1145/3078072.3079728
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, Transparent, and Accountable Algorithmic Decision-Making Processes. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x
Lewandowsky, S., Mundy, M., & Tan, G. P. A. (2000). The dynamics of trust: Comparing humans to automation. Journal of Experimental Psychology: Applied, 6(2), 104–123. https://doi.org/10.1037/1076-898X.6.2.104
Leyzberg, D., Spaulding, S., Toneva, M., & Scassellati, B. (2012). The physical presence of a robot tutor increases cognitive learning gains. In 34th Annual Conference of the Cognitive Science Society.
Li, J. (2015). The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents. International Journal of Human-Computer Studies, 77, 23–37.
Li, S., Yu, F., & Peng, K. (2020). Effect of State Loneliness on Robot Anthropomorphism: Potential Edge of Social Robots Compared to Common Nonhumans. Journal of Physics: Conference Series, 1631, 012024. https://doi.org/10.1088/1742-6596/1631/1/012024
Life at Google. (2021, July 28). What’s it like to work on the Google Maps team? [Video].
Youtube. https://www.youtube.com/watch?v=vaHK4OXAN-A
Lima, G., Jeon, C., Cha, M., & Park, K. (2020). Will Punishing Robots Become Imperative in the Future? Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems.
Lingmont, D. N. J., & Alexiou, A. (2020). The contingent effect of job automating technology awareness on perceived job insecurity: Exploring the moderating role of organizational culture. Technological Forecasting and Social Change, 161, 120302. https://doi.org/10.1016/j.techfore.2020.120302
Liu, P., Glas, D. F., Kanda, T., Ishiguro, H., & Hagita, N. (2013, March). It's not polite to point: Generating socially-appropriate deictic behaviors towards people. 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Loewenstein, G. (1996). Out of Control: Visceral Influences on Behavior. Organizational Behavior and Human Decision Processes, 65(3), 272–292. https://doi.org/10.1006/obhd.1996.0028
Logg, J. M., Haran, U., & Moore, D. A. (2018). Is overconfidence a motivated bias? Experimental evidence. Journal of Experimental Psychology: General, 147(10), 1445–1465. https://doi.org/10.1037/xge0000500
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
Loughnan, S., & Haslam, N. (2007). Animals and Androids: Implicit Associations Between Social Categories and Nonhumans. Psychological Science, 18(2), 116–121. https://doi.org/10.1111/j.1467-9280.2007.01858.x
Luczak, H., Roetting, M., & Schmidt, L. (2003). Let’s talk: Anthropomorphization as means to
cope with stress of interacting with technical devices. Ergonomics, 46(13–14), 1361–1374.
https://doi.org/10.1080/00140130310001610883
Luo, X., Qin, M. S., Zheng, F., & Zhe, Q. (2021). Artificial Intelligence Coaches for Sales Agents: Caveats and Solutions. Journal of Marketing, 85(2), 14–32. https://doi.org/10.1177/0022242920956676
Lyons, J. B., & Guznov, S. Y. (2019). Individual differences in human–machine trust: A multi-study look at the perfect automation schema. Theoretical Issues in Ergonomics Science, 20(4), 440–458.
Lyu, S. (2020). Deepfake Detection: Current Challenges and Next Steps. 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). https://doi.org/10.1109/ICMEW46912.2020.9105991
Maasland, C., & Weißmüller, K. S. (2022). Blame the Machine? Insights From an Experiment on Algorithm Aversion and Blame Avoidance in Computer-Aided Decision-Making. Frontiers in Psychology, 13, 779028. https://doi.org/10.3389/fpsyg.2022.779028
MacDorman, K. F., & Entezari, S. O. (2015). Individual differences predict sensitivity to the uncanny valley. Interaction Studies, 16(2), 141–172.
MacDorman, K. F., & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive and social science research. Interaction Studies, 7(3), 297–337. https://doi.org/10.1075/is.7.3.03mac
Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice One For the
Good of Many?: People Apply Different Moral Norms to Human and Robot Agents.
Malle, B. F. (2019). How Many Dimensions of Mind Perception Really Are There? In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), 41st Annual Meeting of the Cognitive Science Society. https://research.clps.brown.edu/SocCogSci/Publications/Pubs/Malle_2019_How_Many_Dimensions.pdf
Maner, J. K., DeWall, C. N., Baumeister, R. F., & Schaller, M. (2007). Does social exclusion motivate interpersonal reconnection? Resolving the "porcupine problem." Journal of Personality and Social Psychology, 92(1), 42–55. https://doi.org/10.1037/0022-3514.92.1.42
Maninger, T., & Shank, D. B. (2022). Perceptions of violations by artificial and human actors across moral foundations. Computers in Human Behavior Reports, 5, 100154. https://doi.org/10.1016/j.chbr.2021.100154
Mann, J. A., MacDonald, B. A., Kuo, I.-H., Li, X., & Broadbent, E. (2015). People respond better to robots than computer tablets delivering healthcare instructions. Computers in Human Behavior, 43, 112–117.
Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R., & Sanghvi, S. (2017). Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages. McKinsey Global Institute.
Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human Performance Consequences of Automated Decision Aids: The Impact of Degree of Automation and System Experience. Journal of Cognitive Engineering and Decision Making, 6(1), 57–87. https://doi.org/10.1177/1555343411433844
Martens, C., & Cardona-Rivera, R. E. (2016). Generating Abstract Comics. In F. Nack & A. S. Gordon (Eds.), Interactive Storytelling. Springer. https://doi.org/10.1007/978-3-319-48279-8_15
Matsui, T., & Yamada, S. (2019). Designing Trustworthy Product Recommendation Virtual Agents Operating Positive Emotion and Having Copious Amount of Knowledge. Frontiers in Psychology, 10.
McClure, P. K. (2018). "You're Fired," Says the Robot: The Rise of Automation in the Workplace, Technophobes, and Fears of Unemployment. Social Science Computer Review, 36(2), 139–156.
McKee, K. R., Bai, X., & Fiske, S. T. (2022). Warmth and Competence in Human-Agent Cooperation.
McNee, S. M., Riedl, J., & Konstan, J. A. (2006). Being accurate is not enough: How accuracy metrics have hurt recommender systems. CHI '06 Extended Abstracts on Human Factors in Computing Systems, 1097–1101.
Melson, G., Beck, A., & Friedman, B. (2009). Robotic Pets in Human Lives: Implications for the Human-Animal Bond and for Human Relationships with Personified Technologies. Journal of Social Issues, 65(3), 545–567. https://doi.org/10.1111/j.1540-4560.2009.01613.x
Mendes, W. B., Blascovich, J., Hunter, S. B., Lickel, B., & Jost, J. T. (2007). Threatened by the unexpected: Physiological responses during social interactions with expectancy-violating partners. Journal of Personality and Social Psychology, 92(4), 698–716. https://doi.org/10.1037/0022-3514.92.4.698
Michaels, J. L., Parkin, S. S., & Vallacher, R. R. (2013). Destiny Is in the Details: Action
https://doi.org/10.1007/978-94-007-6527-6_8
Michalowski, M. P., Sabanovic, S., & Simmons, R. (2006). A spatial model of engagement for a
social robot. 9th IEEE International Workshop on Advanced Motion Control, 2006., 762–
767. https://doi.org/10.1109/AMC.2006.1631755
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
Monroe, A. E., Dillon, K. D., Guglielmo, S., & Baumeister, R. F. (2018). It's not what you do, but what everyone else does: On the role of descriptive norms and subjectivism in moral judgment. Journal of Experimental Social Psychology, 77, 1–10. https://doi.org/10.1016/j.jesp.2018.03.010
Moon, Y. (1998). When the Computer Is the “Salesperson”: Consumer Responses to Computer
Moon, Y. (2000). Intimate exchanges: Using computers to elicit self-disclosure from consumers. Journal of Consumer Research, 26(4), 323–339.
Moon, Y., & Nass, C. (1996). How "Real" are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research, 23(6), 651–674. https://doi.org/10.1177/009365096023006002
Moon, Y., & Nass, C. (1998). Are computers scapegoats? Attributions of responsibility in human–computer interaction. International Journal of Human-Computer Studies, 49(1), 79–94. https://doi.org/10.1006/ijhc.1998.0199
Morewedge, C. K. (2022). Preference for human, not algorithm aversion. Trends in Cognitive Sciences, 26(10), 824–826.
Morewedge, C. K., Preston, J., & Wegner, D. M. (2007). Timescale bias in the attribution of mind. Journal of Personality and Social Psychology, 93(1), 1–11. https://doi.org/10.1037/0022-3514.93.1.1
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [From the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100. https://doi.org/10.1109/MRA.2012.2192811
Morikawa, M. (2017). Who Are Afraid of Losing Their Jobs to Artificial Intelligence and Robots? Evidence from a Survey (GLO Discussion Paper No. 71). Global Labor Organization.
Müller, C. A., Schmitt, K., Barber, A. L. A., & Huber, L. (2015). Dogs Can Discriminate Emotional Expressions of Human Faces. Current Biology, 25(5), 601–605. https://doi.org/10.1016/j.cub.2014.12.055
Mumm, J., & Mutlu, B. (2011). Designing motivational agents: The role of praise, social comparison, and embodiment in computer feedback. Computers in Human Behavior, 27(5), 1643–1650. https://doi.org/10.1016/j.chb.2011.02.002
Mumm, J., & Mutlu, B. (2011, March). Human-robot proxemics: Physical and psychological distancing in human-robot interaction. Proceedings of the 6th International Conference on Human-Robot Interaction (HRI).
Mutlu, B., Shiwa, T., Kanda, T., Ishiguro, H., & Hagita, N. (2009). Footing in human-robot conversations: How robots might shape participant roles using gaze cues. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, HRI 2009, La Jolla, CA.
Nass, C., Fogg, B. J., & Moon, Y. (1996). Can computers be teammates? International Journal of Human-Computer Studies, 45(6), 669–678.
Nass, C., Isbister, K., & Lee, E.-J. (2000). Truth is beauty: Researching embodied conversational agents. In Embodied Conversational Agents. MIT Press. https://www.semanticscholar.org/paper/Truth-is-beauty%3A-researching-embodied-agents-Nass-Isbister/051722a89ea75583cfe58f6d218db373776b4c6e
Nass, C., & Lee, K. M. (2001). Does computer-synthesized speech manifest personality? Journal of Experimental Psychology: Applied, 7(3), 171–181. https://doi.org/10.1037/1076-898X.7.3.171
Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81–103.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., & Dryer, D. C. (1995). Can computer personalities be human personalities? International Journal of Human-Computer Studies, 43(2), 223–239. https://doi.org/10.1006/ijhc.1995.1042
Nass, C., Moon, Y., & Green, N. (1997). Are Machines Gender Neutral? Gender-Stereotypic
Responses to Computers With Voices. Journal of Applied Social Psychology, 27(10), 864–
876. https://doi.org/10.1111/j.1559-1816.1997.tb00275.x
Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. CHI '94: Conference Companion on Human Factors in Computing Systems. https://doi.org/10.1145/259963.260288
Natarajan, M., & Gombolay, M. (2020). Effects of anthropomorphism and accountability on trust in human robot interaction. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Naughton, J. (2022, August 20). AI-generated art illustrates another problem with technology. The Guardian.
Nemo, L. (2018, May 30). Las Vegas Food Service Workers Are Going on Strike So They Don’t
strike-automation
Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn't fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149–167. https://doi.org/10.1016/j.obhdp.2020.03.008
Ng, W. K. (2021, November 12). Robo-teacher takes learning to new level at Hougang Primary. The Straits Times.
Nichols, S., & Stich, S. P. (2003). Mindreading: An integrated account of pretence, self-awareness, and understanding other minds. Oxford University Press. https://doi.org/10.1093/0198236107.001.0001
Noothigattu, R., Bouneffouf, D., Mattei, N., Chandra, R., Madan, P., Varshney, K. R., Campbell,
M., Singh, M., & Rossi, F. (2019). Teaching AI agents ethical values using reinforcement
learning and policy orchestration. IBM Journal of Research and Development, 63(4/5), 2:1-
2:9. https://doi.org/10.1147/JRD.2019.2940428
Nussberger, A.-M., Luo, L., Celis, L. E., & Crockett, M. J. (2022). Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence. Nature Communications, 13, 5821.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Önkal, D., Goodwin, P., Thomson, M., Gönül, S., & Pollock, A. (2009). The relative influence of advice from human experts and statistical methods on forecast adjustments. Journal of Behavioral Decision Making, 22(4), 390–409.
Paetzel, M., Perugia, G., & Castellano, G. (2020). The persistence of first impressions. 2020 15th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://ieeexplore.ieee.org/document/9484289
Petro, G. (2020, January 10). Robots take retail. Forbes. https://www.forbes.com/sites/gregpetro/2020/01/10/robots-take-retail/
Pfeifer, R., & Scheier, C. (2001). Understanding Intelligence. The MIT Press.
Pickett, C. L., Gardner, W. L., & Knowles, M. (2004). Getting a Cue: The Need to Belong and
Enhanced Sensitivity to Social Cues. Personality and Social Psychology Bulletin, 30(9),
1095–1107. https://doi.org/10.1177/0146167203262085
Pieters, W. (2011). Explanation and trust: What to tell the user in security and AI? Ethics and Information Technology, 13(1), 53–64.
Promberger, M., & Baron, J. (2006). Do patients trust computers? Journal of Behavioral Decision Making, 19(5), 455–468.
Qin, X., Chen, C., Yam, K. C., Cao, L., Li, W., Guan, J., Zhao, P., Dong, X., & Lin, Y. (2022). Adults still can't resist: A social robot can induce normative conformity. Computers in Human Behavior.
Qiu, L., & Benbasat, I. (2009). Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems. Journal of Management Information Systems, 25(4), 145–182. https://doi.org/10.2753/MIS0742-1222250405
Ragot, M., Martin, N., & Cojean, S. (2020). AI-generated vs. Human Artworks. A Perception Bias Towards Artificial Intelligence? Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems.
Raveendhran, R., & Fast, N. J. (2021). Humans judge, algorithms nudge: The psychology of behavior tracking acceptance. Organizational Behavior and Human Decision Processes, 164, 11–26.
Reeder, K., & Lee, H. (2022). Impact of artificial intelligence on US medical students' choice of radiology. Clinical Imaging, 81, 67–71.
Reeves, B., Hancock, J., & Liu, X. "Sunny." (2020). Social Robots Are Like Real People: First Impressions, Attributes, and Stereotyping of Social Robots. Technology, Mind, and Behavior, 1(1).
Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television,
and New Media like Real People and Places. Cambridge University Press.
Rehm, M., & Krogsager, A. (2013). Negative affect in human robot interaction - impoliteness in
https://doi.org/10.1109/ROMAN.2013.6628529
Reich, T., Kaju, A., & Maglio, S. J. (2022). How to overcome algorithm aversion: Learning from mistakes. Journal of Consumer Psychology.
Reiss, D., & Marino, L. (2001). Mirror self-recognition in the bottlenose dolphin: A case of cognitive convergence. Proceedings of the National Academy of Sciences, 98(10), 5937–5942. https://doi.org/10.1073/pnas.101086398
Richardson, K. (2016). Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines. IEEE Technology and Society Magazine, 35(2), 46–53. https://doi.org/10.1109/MTS.2016.2554421
Riek, L. D., Rabinowitch, T.-C., Chakrabarti, B., & Robinson, P. (2009). How anthropomorphism affects empathy toward robots. Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 245–246. https://doi.org/10.1145/1514095.1514158
Riether, N., Hegel, F., Wrede, B., & Horstmann, G. (2012). Social facilitation with social robots? Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 41–47.
Robinette, P., Howard, A. M., & Wagner, A. R. (2017). Effect of Robot Performance on Human–Robot Trust in Time-Critical Situations. IEEE Transactions on Human-Machine Systems, 47(4), 425–436.
Robitzski, D. (2019, August 22). Truckers want to ban self-driving trucks in Missouri. The Byte.
https://futurism.com/the-byte/truckers-ban-self-driving-trucks-missouri
Rockwell, G., Berendt, B., & Chee, F. (2022). On IRIE Vol. 31: On Dialogue and Artificial Intelligence. International Review of Information Ethics, 31. https://doi.org/10.29173/irie475
Rosenthal-von der Pütten, A. M., Krämer, N. C., & Herrmann, J. (2018). The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. International Journal of Social Robotics, 10, 569–582. https://doi.org/10.1007/s12369-018-0466-7
Saerbeck, M., & Bartneck, C. (2010). Perception of affect elicited by robot motion. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 53–60.
Salomons, N., van der Linden, M., Sebo, S. S., & Scassellati, B. (2018). Humans Conform to Robots: Disambiguating Trust, Truth, and Conformity. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI), 187–195. https://doi.org/10.1145/3171221.3171282
Salvatore, A. P. (2019). Behaviorism. In The SAGE Encyclopedia of Human Communication Sciences and Disorders. SAGE Publications. https://dx.doi.org/10.4135/9781483380810
Salvini, P., Ciaravella, G., Yu, W., Ferri, G., Manzi, A., Mazzolai, B., Laschi, C., Oh, S. R., &
Dario, P. (2010). How safe are service robots in urban environments? Bullying a robot.
In 19th international symposium in robot and human interactive communication (pp. 1-7).
IEEE. https://doi.org/10.1109/ROMAN.2010.5654677
Samuel, S. (2020, January 13). Robot priests can bless you, advise you, and even perform your funeral. Vox.
Sandoval, E. B., Brandstetter, J., Obaid, M., & Bartneck, C. (2016). Reciprocity in human-robot interaction: A quantitative approach through the prisoner's dilemma and the ultimatum game. International Journal of Social Robotics, 8(2), 303–317. https://doi.org/10.1007/s12369-015-0323-x
Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2012). The thing that should not be: Predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social Cognitive and Affective Neuroscience, 7(4), 413–422. https://doi.org/10.1093/scan/nsr025
Scassellati, B., Boccanfuso, L., Huang, C.-M., Mademtzi, M., Qin, M., Salomons, N., Ventola, P., & Shic, F. (2018). Improving social skills in children with ASD using a long-term, in-home social robot. Science Robotics, 3(21), eaat7544. https://doi.org/10.1126/scirobotics.aat7544
Sebo, S. S., Traeger, M., Jung, M., & Scassellati, B. (2018). The Ripple Effects of Vulnerability: The Effects of a Robot's Vulnerable Behavior on Trust in Human-Robot Teams. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI), 178–186.
Schein, C., & Gray, K. (2018). The Theory of Dyadic Morality: Reinventing Moral Judgment by Redefining Harm. Personality and Social Psychology Review, 22(1), 32–70. https://doi.org/10.1177/1088868317698288
Scheske, C., & Schnall, S. (2012). The ethics of “smart drugs”: Moral judgments about healthy
people’s use of cognitive-enhancing drugs. Basic and Applied Social Psychology, 34, 508–
515. https://doi.org/10.1080/01973533.2012.711692
Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: effect of realism on the impression
of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4), 337–
351. https://doi.org/10.1162/pres.16.4.337
Shaffer, V. A., Probst, C. A., Merkle, E. C., Arkes, H. R., & Medow, M. A. (2013). Why Do Patients Derogate Physicians Who Use a Computer-Based Diagnostic Support System? Medical Decision Making, 33(1), 108–118.
Shank, D. B., & DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, 86, 401–411. https://doi.org/10.1016/j.chb.2018.05.014
Shank, D. B., DeSanti, A., & Maninger, T. (2019). When are artificial intelligence versus human agents faulted for wrongdoing? Moral attributions after individual and joint decisions. Information, Communication & Society, 22(5), 648–663. https://doi.org/10.1080/1369118X.2019.1568515
Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People's emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256–266.
Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour, 1(10), 694–696. https://doi.org/10.1038/s41562-017-0202-6
Sharp, M.-L., Fear, N. T., Rona, R. J., Wessely, S., Greenberg, N., Jones, N., & Goodwin, L. (2015). Stigma as a Barrier to Seeking Health Care Among Military Personnel With Mental Health Problems. Epidemiologic Reviews, 37(1), 144–162. https://doi.org/10.1093/epirev/mxu012
Shen, S., Slovak, P., & Jung, M. F. (2018). "Stop. I See a Conflict Happening." A Robot Mediator for Young Children's Interpersonal Conflict Resolution. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Shiban, Y., Schelhorn, I., Jobst, V., Hörnlein, A., Puppe, F., Pauli, P., & Mühlberger, A. (2015).
The appearance effect: Influences of virtual agent features on performance and motivation.
Shim, J., & Arkin, R. C. (2014). Other-oriented robot deception: A computational approach for deceptive action generation to benefit the mark. 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014). https://doi.org/10.1109/ROBIO.2014.7090385
Shin, D. (2021). Embodying algorithms, enactive artificial intelligence and the extended cognition: You can see as much as you know about algorithm. Journal of Information Science.
Shin, H. I., & Kim, J. (2020). My computer is more thoughtful than you: Loneliness, anthropomorphism and dehumanization. Current Psychology, 39(2), 445–453. https://doi.org/10.1007/s12144-018-9975-7
Shinozawa, K., Naya, F., Yamato, J., & Kogure, K. (2005). Differences in effect of robot and screen agent recommendations on human decision-making. International Journal of Human-Computer Studies, 62(2), 267–279.
Shrestha, Y. R., He, V. F., Puranam, P., & von Krogh, G. (2021). Algorithm Supported Induction for Building Theory: How Can We Use Prediction Models to Theorize? Organization Science, 32(3), 856–880.
Smiley, A. H., & Fisher, M. (2022). The golden age is behind us: How the status quo impacts the evaluation of technology. Psychological Science, 33(9). https://doi.org/10.1177/09567976221102868
Smithers, D. (2022, January 1). Man Falls In Love With Robot And Hopes To Marry Her. LADbible. https://www.ladbible.com/news/man-falls-in-love-with-robot-and-hopes-to-marry-her-20220101
Soll, J. B., & Mannes, A. E. (2011). Judgmental aggregation strategies depend on whether the self is involved. International Journal of Forecasting, 27(1), 81–102. https://doi.org/10.1016/j.ijforecast.2010.05.003
Sousa, J. P. (1906). The Menace of Mechanical Music. Appleton’s Magazine, 8(3), 278–284.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745
Spring, V. L., Cameron, C. D., & Cikara, M. (2018). The upside of outrage. Trends in Cognitive Sciences, 22(12), 1067–1069.
Steckenfinger, S. A., & Ghazanfar, A. A. (2009). Monkey visual behavior falls into the uncanny valley. Proceedings of the National Academy of Sciences, 106(43), 18362–18366. https://doi.org/10.1073/pnas.0910063106
Stein, J.-P., & Ohler, P. (2017). Venturing into the uncanny valley of mind—the influence of mind attribution on the acceptance of human-like characters in a virtual reality setting. Cognition, 160, 43–50.
Stephan, W. G., & Stephan, C. W. (2000). An integrated threat theory of prejudice. In S. Oskamp
(Ed.), Reducing prejudice and discrimination (pp. 23–45). Lawrence Erlbaum Associates
Publishers.
Strait, M., Vujovic, L., Floerke, V., Scheutz, M., & Urry, H. (2015). Too Much Humanness for Human-Robot Interaction: Exposure to Highly Humanlike Robots Elicits Aversive Responding in Observers. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 3593–3602.
Strinić, A., Carlsson, M., & Agerström, J. (2021). Occupational stereotypes: Professionals' warmth and competence perceptions of occupations. Personnel Review. https://doi.org/10.1108/PR-06-2020-0458
Strohkorb, S., Fukuto, E., Warren, N., Taylor, C., Berry, B., & Scassellati, B. (2016). Improving human-human collaboration between children with a social robot. 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 551–556. https://doi.org/10.1109/ROMAN.2016.7745172
Szczuka, J. M., & Krämer, N. C. (2018). Jealousy 4.0? An empirical study on jealousy-related discomfort of women evoked by other women and gynoid robots. Paladyn, Journal of Behavioral Robotics, 9(1), 95–109.
Tajfel, H. (1982). Social Psychology of Intergroup Relations. Annual Review of Psychology, 33,
1–39. https://doi.org/10.1146/annurev.ps.33.020182.000245
Tajfel, H., & Turner, J. (2001). An integrative theory of intergroup conflict. In M. A. Hogg & D.
Abrams (Eds.), Intergroup relations: Essential readings (pp. 94–109). Psychology Press.
Tanaka, F., Cicourel, A., & Movellan, J. R. (2007). Socialization between toddlers and robots at an early childhood education center. Proceedings of the National Academy of Sciences, 104(46), 17954–17958.
Tang, P. M., Koopman, J., Yam, K. C., De Cremer, D., Zhang, J. H., & Reynders, P. (2022). The self-regulatory consequences of dependence on intelligent machines at work: Evidence from field and experimental studies. Human Resource Management, 1–24. https://doi.org/10.1002/hrm.22154
Taylor, A. H., Elliffe, D., Hunt, G. R., & Gray, R. D. (2010). Complex cognition and behavioural innovation in New Caledonian crows. Proceedings of the Royal Society B: Biological Sciences, 277(1694), 2637–2643.
Tesser, A., Millar, M., & Moore, J. (1988). Some affective consequences of social comparison and reflection processes: The pain and pleasure of being close. Journal of Personality and Social Psychology, 54(1), 49–61.
Tharp, M., Holtzman, N. S., & Eadeh, F. R. (2017). Mind Perception and Individual Differences:
A Replication and Extension. Basic and Applied Social Psychology, 39(1), 68–73.
https://doi.org/10.1080/01973533.2016.1256287
The Guardian. (2020, July 20). 'Alexa, I love you': How lockdown made men lust after their Amazon Echo.
Thewissen, S., & Rueda, D. (2019). Automation and the welfare state: Technological change as a determinant of redistribution preferences. Comparative Political Studies, 52(2), 171–208. https://doi.org/10.1177/0010414017740600
Tielman, M., Neerincx, M., Meyer, J.-J., & Looije, R. (2014). Adaptive emotional expression in robot-child interaction. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
Tiku, N. (2022). Is LaMDA Sentient? - An Interview.
https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview
Tolmeijer, S., Kneer, M., Sarasua, C., Christen, M., & Bernstein, A. (2021). Implementations in Machine Ethics: A Survey. ACM Computing Surveys, 53(6), 1–38. https://doi.org/10.1145/3419633
Traeger, M. L., Strohkorb Sebo, S., Jung, M., Scassellati, B., & Christakis, N. A. (2020). Vulnerable robots positively shape human conversational dynamics in a human–robot team. Proceedings of the National Academy of Sciences, 117(12), 6370–6375. https://doi.org/10.1073/pnas.1910402117
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
Twenge, J. M., Catanese, K. R., & Baumeister, R. F. (2003). Social Exclusion and the Deconstructed State: Time Perception, Meaninglessness, Lethargy, Lack of Emotion, and Self-Awareness. Journal of Personality and Social Psychology, 85(3), 409–423. https://doi.org/10.1037/0022-3514.85.3.409
Ward, A. F. (2021). People mistake the internet's knowledge for their own. Proceedings of the National Academy of Sciences, 118(43), e2105061118. https://doi.org/10.1073/pnas.2105061118
Waytz, A., & Young, L. (2019). Aversion to playing God and moral condemnation of technology and science. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1771).
Wow GPT-3 just wrote my American -Lit Essay for me. (2020, November 15). [Reddit]. R/GPT3. www.reddit.com/r/GPT3/comments/jutsdh/wow_gpt3_just_wrote_my_american_lit_essay_for_me/
Uttal, W. R. (2001). The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain. MIT Press.
Vallacher, R. R., & Wegner, D. M. (1987). What do people think they're doing? Action identification and human behavior. Psychological Review, 94(1), 3–15. https://doi.org/10.1037/0033-295X.94.1.3
Van Bavel, J. J., Hackel, L. M., & Xiao, Y. J. (2014). The Group Mind: The Pervasive Influence of Social Identity on Cognition. In J. Decety & Y. Christen (Eds.), New Frontiers in Social Neuroscience. Springer. https://doi.org/10.1007/978-3-319-02904-7_4
Vanman, E. J., & Kappas, A. (2019). “Danger, Will Robinson!” The challenges of social robots
for intergroup relations. Social and Personality Psychology Compass, 13(8), 1–13.
https://doi.org/10.1111/spc3.12489
Vázquez, M., Carter, E. J., McDorman, B., Forlizzi, J., Steinfeld, A., & Hudson, S. E. (2017).
Verhagen, T., van Nes, J., Feldberg, F., & van Dolen, W. (2014). Virtual Customer Service Agents: Using Social Presence and Personalization to Shape Online Service Encounters. Journal of Computer-Mediated Communication, 19(3), 529–545. https://doi.org/10.1111/jcc4.12066
Vincent, J. (2022, September 2). An AI-generated artwork's state fair victory fuels arguments over what art is. The Verge.
Vollmer, A.-L., Read, R., Trippas, D., & Belpaeme, T. (2018). Children conform, adults resist: A
robot group induced peer pressure on normative social conformity. Science Robotics, 3(21).
https://doi.org/10.1126/scirobotics.aat7111
Waardenburg, L., Huysman, M., & Sergeeva, A. V. (2022). In the Land of the Blind, the One-Eyed Man Is King: Knowledge Brokerage in the Age of Learning Algorithms. Organization Science, 33(1), 59–82.
Walmsley, J. (2021). Artificial intelligence and the value of transparency. AI & SOCIETY, 36,
585–595. https://doi.org/10.1007/s00146-020-01066-z
Wang, S., Lilienfeld, S. O., & Rochat, P. (2015). The Uncanny Valley: Existence and Explanations. Review of General Psychology, 19(4), 393–407. https://doi.org/10.1037/gpr0000056
Wang, X., & Krumhuber, E. G. (2018). Mind perception of robots varies with their economic versus social function. Frontiers in Psychology, 9, 1230. https://doi.org/10.3389/fpsyg.2018.01230
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism. Perspectives on Psychological Science, 5(3), 219–232.
Waytz, A., Epley, N., & Cacioppo, J. T. (2010). Social Cognition Unbound: Insights Into Anthropomorphism and Dehumanization. Current Directions in Psychological Science, 19(1), 58–62.
Waytz, A., & Gray, K. (2018). Does Online Technology Make Us More or Less Sociable? A
Preliminary Review and Call for Research. Perspectives on Psychological Science, 13(4),
473–491. https://doi.org/10.1177/1745691617746509
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388. https://doi.org/10.1016/j.tics.2010.05.006
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
Waytz, A., Morewedge, C. K., Epley, N., Monteleone, G., Gao, J.-H., & Cacioppo, J. T. (2010). Making sense by making sentient: Effectance motivation increases anthropomorphism. Journal of Personality and Social Psychology, 99(3), 410–435. https://doi.org/10.1037/a0020240
Waytz, A., & Norton, M. I. (2014). Botsourcing and outsourcing: Robot, British, Chinese, and German workers are for thinking—not feeling—jobs. Emotion, 14(2), 434–444. https://doi.org/10.1037/a0036054
Wegner, D. M., & Gray, K. (2016). The Mind Club: Who Thinks, What Feels, and Why It
Matters. Viking.
Weisman, K., Dweck, C. S., & Markman, E. M. (2017). Rethinking people's conceptions of mental life. Proceedings of the National Academy of Sciences, 114(43), 11374–11379. https://doi.org/10.1073/pnas.1704347114
Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language
communication between man and machine. Communications of the ACM, 9(1), 36–45.
https://doi.org/10.1145/365153.365168
Wiese, E., Weis, P. P., Bigman, Y., Kapsaskis, K., & Gray, K. (2022). It's a Match: Task Assignment in Human–Robot Collaboration Depends on Mind Perception. International Journal of Social Robotics, 14, 141–148.
Wike, R., & Stokes, B. (2018, September 13). In Advanced and Emerging Economies Alike,
Worries About Job Automation. Pew Research Center’s Global Attitudes Project.
https://www.pewresearch.org/global/2018/09/13/in-advanced-and-emerging-economies-
alike-worries-about-job-automation/
Wilbanks, D., Hester, N., Bigman, Y., Smith, M. M., Court, J., Sarkar, J., & Gray, K. (under
review). Two kinds of artistic authenticity: An original object versus truly revealing the
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018).
Brave new world: Service robots in the frontline. Journal of Service Management, 29(5),
907–931. https://doi.org/10.1108/JOSM-04-2018-0119
Wu, N. (2022a). Misattributed blame? Attitudes toward globalization in the age of automation. Political Science Research and Methods. https://doi.org/10.1017/psrm.2021.43
Wu, N. (2022b). “Restrict foreigners, not robots”: Partisan responses to automation threat.
Xiao, B., & Benbasat, I. (2007). E-Commerce Product Recommendation Agents: Use, Characteristics, and Impact. MIS Quarterly, 31(1), 137–209. https://doi.org/10.2307/25148784
Yakovleva, M., Reilly, R. R., & Werko, R. (2010). Why do we trust? Moving beyond individual to dyadic trust. Journal of Applied Psychology, 95(1), 79–91. https://doi.org/10.1037/a0017102
Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021). Robots at work: People prefer—and forgive—service robots with perceived feelings. Journal of Applied Psychology, 106(10), 1557–1572.
Yam, K. C., Bigman, Y., & Gray, K. (2021). Reducing the uncanny valley by dehumanizing humanoid robots. Computers in Human Behavior, 125, 106945. https://doi.org/10.1016/j.chb.2021.106945
Yam, K. C., Goh, E.-Y., Fehr, R., Lee, R., Soh, H., & Gray, K. (2022). When your boss is a robot: Workers are more spiteful to robot supervisors that seem more human. Journal of Experimental Social Psychology.
Yam, K. C., Tang, P. M., Jackson, J. C., Su, R., & Gray, K. (2022). The Rise of Robots Increases Job Insecurity and Maladaptive Workplace Behaviors. Journal of Applied Psychology.
Yang, A. X., & Teow, J. (2020). Defending the Human Need to Be Seen. In J. Argo, T. M. Lowrey, & H. J. Schau (Eds.), ACR North American Advances in Consumer Research (Vol. 48). https://www.acrwebsite.org/volumes/2662147/volumes/v48/NA-48
Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414. https://doi.org/10.1002/bdm.2118
Yogeeswaran, K., Złotowski, J., Livingstone, M., Bartneck, C., Sumioka, H., & Ishiguro, H.
(2016). The interactive effects of robot anthropomorphism and robot ability on perceived
threat and support for robotics research. Journal of Human-Robot Interaction, 5(2), 29–47.
https://doi.org/10.5898/JHRI.5.2.Yogeeswaran
You, S., & Robert, L. Jr. (2019). Subgroup Formation in Human Robot Teams. Proceedings of
http://deepblue.lib.umich.edu/handle/2027.42/150854
You, S., Yang, C. L., & Li, X. (2022). Algorithmic versus Human Advice: Does Presenting Prediction Performance Matter for Algorithm Appreciation? Journal of Management Information Systems.
Zhang, T., Kaber, D. B., Zhu, B., Swangnetr, M., Mosaly, P., & Hodge, L. (2010). Service robot feature design effects on user perceptions and emotional responses. Intelligent Service Robotics, 3(2), 73–88.
Zhou, M. X., Mark, G., Li, J., & Yang, H. (2019). Trusting Virtual Agents: The Effect of Personality. ACM Transactions on Interactive Intelligent Systems, 9(2–3), 1–36. https://doi.org/10.1145/3232077
Złotowski, J., Sumioka, H., Nishio, S., Glas, D. F., Bartneck, C., & Ishiguro, H. (2016). Appearance of a robot affects the impact of its behaviour on perceived trustworthiness and empathy. Paladyn, Journal of Behavioral Robotics, 7(1), 55–66. https://doi.org/10.1515/pjbr-2016-0005
Złotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies, 100, 48–54.
Zulić, H. (2019). How AI Can Change/Improve/Influence Music Composition, Performance and Education: Three Case Studies. INSAM Journal of Contemporary Music, Art and Technology, 2.