

Int J of Soc Robotics (2015) 7:347–360 DOI 10.1007/s12369-014-0267-6

Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction

Jakub Złotowski · Diane Proudfoot ·Kumar Yogeeswaran · Christoph Bartneck

Accepted: 23 October 2014 / Published online: 9 November 2014
© Springer Science+Business Media Dordrecht 2014

Abstract Anthropomorphism is a phenomenon that describes the human tendency to see human-like shapes in the environment. It has considerable consequences for people's choices and beliefs. With the increased presence of robots, it is important to investigate the optimal design for this technology. In this paper we discuss the potential benefits and challenges of building anthropomorphic robots, from both a philosophical perspective and from the viewpoint of empirical research in the fields of human–robot interaction and social psychology. We believe that this broad investigation of anthropomorphism will not only help us to understand the phenomenon better, but can also indicate solutions for facilitating the integration of human-like machines in the real world.

Keywords Human–robot interaction · Anthropomorphism · Uncanny valley · Contact theory · Turing · Child-machines

J. Złotowski (B) · C. Bartneck
HIT Lab NZ, University of Canterbury, Christchurch, New Zealand
e-mail: [email protected]

C. Bartneck
e-mail: [email protected]

D. Proudfoot
Department of Philosophy, University of Canterbury, Christchurch, New Zealand
e-mail: [email protected]

K. Yogeeswaran
Department of Psychology, University of Canterbury, Christchurch, New Zealand
e-mail: [email protected]

1 Introduction

Anthropomorphism has a major impact on human behaviour, choices, and even laws. Based on evidence for the presence of mind, captive chimpanzees were granted limited human rights in Spain [2]. Moreover, people often refer to our planet as 'Mother Earth', and anthropomorphism is often used in discussions regarding environmental issues [151]. People regularly make anthropomorphic attributions when describing their surrounding environment, including animals [44], moving geometrical figures [82] or weather patterns [77]. Building on this common tendency, anthropomorphic form has been used to design technology [52] and sell products [1,4].

Anthropomorphic design is an especially important topic for the design of robots due to their higher anthropomorphizability [91]. The increasing number of industrial and service robots raises the question of how to design this technology in order to increase its efficiency and effectiveness. One of the main themes in the field of human–robot interaction (HRI) tries to address that issue. However, taking a broader perspective that involves related fields could foster the discussion. In this paper we present viewpoints of empirical work from HRI and social psychology, and a philosophical discourse on the issue of designing anthropomorphic technology.

In the next section of this paper we present perspectives from different fields on the process of anthropomorphization and how the general public's view differs from scientific knowledge. In Sect. 3 we include a broad literature review of research on anthropomorphism in the field of HRI. In Sect. 4 we discuss why creating human-like robots can be beneficial and what opportunities it creates for HRI. Section 5 is dedicated to a discussion of potential problems that human-like technology might elicit. In Sect. 6 we propose solutions to some of these problems that could be applied in HRI.


2 Why Do We Anthropomorphize?

From a psychological perspective, the central questions about anthropomorphism are: what explains its origin, and what explains its persistence? Psychologists and anthropologists have explained the origin of anthropomorphism as an adaptive trait, for example with respect to theistic religions. They speculate that early hominids who interpreted ambiguous shapes as faces or bodies improved their genetic fitness, by making alliances with neighbouring tribes or by avoiding threatening neighbours and predatory animals [10,22–24,27,75]. But what explains the persistence of anthropomorphizing? Here theorists hypothesize that there are neural correlates of anthropomorphizing [123,138], specific anthropomorphizing mechanisms (e.g. the hypothesized hypersensitive agency detection device or HADD [11,12]), and diverse psychological traits that generate anthropomorphizing behaviour [28]. They also suggest (again in the case of theistic religions) that confirmation bias [10] or difficulties in challenging anthropomorphic interpretations of the environment may underlie the persistence of anthropomorphizing [115].

Epley et al. [53] proposed a theory identifying three psychological factors that affect when people anthropomorphize non-human agents:

1. Elicited agent knowledge—due to people's much richer knowledge regarding humans compared with non-human agents, people are more likely to use anthropomorphic explanations of non-human agents' actions until they create an adequate mental model of non-humans.

2. Effectance motivation—when people are motivated to explain or understand an agent's behaviour, the tendency to anthropomorphize increases.

3. Sociality motivation—people who lack social connection with other humans often compensate for this by treating non-human agents as if they were human-like [54].

This theory has also been successfully applied in the context of HRI [56].
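To make the structure of this three-factor account concrete, the following Python sketch encodes it as a toy additive score. The factor names come from Epley et al. [53]; the 0–1 scaling, the equal weights, and the anthropomorphism_tendency function are purely illustrative assumptions, since the theory itself only specifies the direction in which each factor pushes.

```python
from dataclasses import dataclass

@dataclass
class AnthropomorphismFactors:
    """Three factors from Epley et al. [53]; the 0..1 scale is our assumption."""
    elicited_agent_knowledge: float  # how strongly knowledge about humans is activated
    effectance_motivation: float     # motivation to explain/predict the agent
    sociality_motivation: float      # need for social connection

def anthropomorphism_tendency(f: AnthropomorphismFactors) -> float:
    """Toy additive combination with equal weights; an illustrative
    assumption, not a quantitative claim made by the theory."""
    return (f.elicited_agent_knowledge
            + f.effectance_motivation
            + f.sociality_motivation) / 3.0

# A lonely novice meeting an unpredictable robot scores high on all three
# factors; an expert roboticist with rich mental models scores low.
novice = AnthropomorphismFactors(0.9, 0.8, 0.7)
expert = AnthropomorphismFactors(0.2, 0.3, 0.1)
print(f"novice: {anthropomorphism_tendency(novice):.2f}")  # 0.80
print(f"expert: {anthropomorphism_tendency(expert):.2f}")  # 0.20
```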

Although people make anthropomorphic attributions to various types of non-human agents, not all agents are anthropomorphized in the same way. Anthropomorphization of animals is distinct from the tendency to anthropomorphize artifacts, such as cars or computers [40]. There are gender differences in this tendency when the target is an animal, with females more likely to make anthropomorphic attributions than males. However, when anthropomorphizing machines, males and females are equally likely to exhibit this tendency.

2.1 A Philosophical Perspective on Anthropomorphism

From a philosophical perspective, the central questions about anthropomorphism are: can we make a principled distinction between justified and unjustified anthropomorphism, and if so how? Is anthropomorphizing (philosophically) justified? If anthropomorphism is a 'natural' behaviour, this question may seem odd. Moreover, some researchers in AI have argued that there is not anything to the notion of mind but an evolved human tendency to anthropomorphize; an entity's having a 'soul' is nothing over and above the tendency of observers to see the entity in this way. On this view, humans are 'natural-born dualists' [26, p. xiii]. However, the intuition that anthropomorphizing can be illicit is strong—for example, my automobile is not a person, even if I attribute human and personal characteristics (and have affective responses) to it. Researchers in AI claim, either explicitly or implicitly, that their machines have cognitive and affective characteristics, and this is particularly true in the case of anthropomorphic robots (and anthropomorphic software agents). These assertions require philosophical analysis and evaluation, along with empirical investigation. Determining under what conditions the anthropomorphizing of machines is justified and under what conditions unjustified is central to the question whether 'expressive' or 'emotional' robots actually have emotions (see e.g. [3,6]). It is also key to the growing debates within AI about the ethical use of artificial systems.

In this regard, a combination of philosophical analysis and experimental work is required, but to our knowledge has not been carried out. In the large body of experimental work on human reactions to anthropomorphic robots, responses on standard questionnaires are commonly taken to demonstrate that subjects identify a robot's displays or movements as (for example) expressions of the fundamental human emotions—happiness, sadness, disgust, and so on (see e.g. [34]). The robot is said to smile or frown. However, taking these responses (in forced choices) at face value ignores the possibility that they are elliptical for the subjects' actual views. To use an analogy, it is common when discussing fictions to omit logical prefixes such as 'I imagine that ...' or 'Make-believedly ...'—for example, we say 'Sherlock Holmes lives at 221B Baker Street' when we mean 'In the fiction Sherlock Holmes lives at 221B Baker Street'. Something similar may be occurring in discussions of anthropomorphic robots; saying that the robot has a 'happy' expression might be shorthand for the claim (for example) that if the robot were a human, it would have a happy expression. A fine-grained 'philosophical' experiment might allow us to find out if this is the case. Experimental philosophy has gained ground in some areas of traditional a priori argument such as ethics; it might be used in AI to enable more accurate analysis of human reactions to anthropomorphic robots.


2.2 Science Fiction as a Proxy for the General Public's Perception

It can be argued that almost all the prior knowledge about robots that participants bring to HRI studies stems from the media. An extensive discussion of how robots are portrayed in the media is available elsewhere [14]; here we summarize it only briefly. Two main story types run through the media's treatment of robots. One is that robots want to be like humans (e.g. Mr. Data); the other is that robots, once a superior level of intelligence and power is achieved, will want to kill or enslave humanity (e.g. Lore). These rather negative views on the future of human–robot relationships are based on the media industry's need to produce engaging stories. Fear is the single most used method to engage the audience. A future world in which humans and robots live happily side by side is rare; the TV show Futurama and the movie Robot and Frank come to mind as shining exceptions. The stories presented in the media that focus on robots can be categorized according to whether the body and/or the mind of the robot is similar to humans. If we take Mr. Data again as an example, he does look very much like a human, but his mind functions differently. From this the writers can form engaging themes, such as Data's quest to understand humor and emotions. And we are surprised when Data emerges from the bottom of a lake without any special gear: his highly human-like form makes us believe that he might also need oxygen, which he does not. In summary, the media has used robots extensively, and most of the knowledge and expectations that people on the street have are based on these portrayals rather than on the scientific literature.

3 Anthropomorphism in HRI

Reeves and Nass [116], in their classic work on the 'Computers are Social Actors' paradigm, showed that people engage in social interaction with various types of media. Therefore, designers of interactive technologies could improve this interaction by building on the chronic tendency of people to anthropomorphize their environment. Due to their higher anthropomorphizability and physical autonomy in a natural human environment, robots are especially well suited to benefit from anthropomorphism [91]. Furthermore, physical presence in the real world (rather than being merely virtual) is an important factor that can also increase the anthropomorphic quality of robots [92]. The mere presence of a robot was found to lead to the social facilitation effect [120]. Moreover, when playing a game against a robotic opponent, people may utilize similar strategies as when they play against a human [137]. They also hold robots more accountable for their actions than other non-human objects [87]. This tendency cannot be observed when the opponent is a disembodied computer. On the other hand, Levin et al. [95] suggest that people initially equate robots and disembodied computers in terms of intentionality. However, when they focus on the intentional behaviour of a robot, this tendency can be overridden.

3.1 Factors Affecting Anthropomorphism

It is important to remember that anthropomorphism is affected not only by physical appearance. Hegel et al. [81] created a typology of signals and cues that robots emit during interaction and which can affect their perceived human-likeness. Choi and Kim [41] proposed that anthropomorphism of robots involves three components: appearance, human–robot interaction, and the accordance between the former two. The distinction between anthropomorphic form in appearance and in behaviour can also be found in the model presented by von Zitzewitz et al. [155].

External appearance can influence the perception of an object [129]. According to Fong et al. [65], we can classify robots based on their appearance into four categories: anthropomorphic, zoomorphic, caricatured, and functional. In the field of robotics there is an increasing tendency to build robots that resemble humans in their appearance. In recent years we can observe an increased number of robots that are built with legs rather than wheels [39]. Some researchers suggest that, in order to create robots with an anthropomorphic appearance that are capable of engaging in interaction with humans in a way analogous to human–human interaction, it is necessary to build robots with features that enable them to perceive the world similarly to humans, i.e. using two cameras (in place of eyes) and two microphones (ears) [133]. DiSalvo et al. [49] state that it is the presence of certain features and the dimensions of the head that have a major impact on the perception of a humanoid's head as human-like (Fig. 1). Anthropomorphic form in appearance has even been attributed to flying robots [43].

However, research into anthropomorphism in the field of HRI has not been limited to the anthropomorphic form of a robot's appearance. HRI factors were found to be even more important than embodiment in the perceived humanness of robots [90]. Kahn et al. [86] presented six benchmarks in HRI that constitute essential features affecting robots' perceived human-likeness: autonomy, imitation, intrinsic moral value, moral accountability, privacy, and reciprocity. Previous studies proposed other factors, such as verbal [132] and non-verbal [100,122] communication, the perceived 'emotions' of the robot [59], the intelligence of the machine [15] or its predictability [60]. Moreover, robots that exhibit typically human behaviours, such as cheating, are also perceived as more human-like [131]. There is a philosophical question whether such behaviour really makes robots more human-like or whether it is instead necessary for them to 'truly' feel emotions and have intentions.


Fig. 1 Robots with different anthropomorphic features in appearance. From the left: Telenoid, Robovie R2, Geminoid HI2, Papero, NAO

However, Turkle [144] points out that the behaviour of robots is more important than their inner states for them to be treated as companions.

Furthermore, anthropomorphism is the result not only of a robot's actions, but also of an observer's characteristics, such as motivation [53], social background [55], gender [61], and age [88]. Moreover, the social relationship between a robot and a human can affect the degree to which a robot is attributed humanness. People apply social categorizations to robots, and those machines that are perceived as ingroup members are also anthropomorphized more strongly than outgroup robots [57,93]. Therefore, it should not be surprising that a robot that has the physical appearance of a member of another race is treated as a member of an outgroup and perceived as less human-like by people with racial prejudices [58]. There is also empirical evidence that mere HRI can lead to increased anthropomorphization of a robot.

3.2 Consequences of Anthropomorphizing Robots

Despite the multiple ways in which we can make robots more human-like, anthropomorphism should not be a goal in its own right. People differ in their preferences regarding the appearance of a robot. These differences can have cultural [55] or individual (personality) [134] origins. Goetz et al. [70] emphasized that, rather than aiming to create the most human-like robots, embodiment should be designed in a way that matches the robots' tasks. Anthropomorphism has multiple desirable and undesirable consequences for HRI. A robot's embodiment affects the perception of its intelligence and intentionality on neuropsychological and behavioural levels [80]. Anthropomorphic or zoomorphic robots attract more visual attention than inanimate robots [9]. Furthermore, similar perceptual processes are involved when observing the movement of a humanoid and of a human [104]. Based on the physical appearance of a robot, people attribute different personality traits to it [149]. Moreover, people use cues, such as a robot's origin or the language that it speaks, in order to create a mental model of the robot's mind [94].

People behave differently when interacting with a pet robot and with a humanoid robot. Although they provide commands in the same way to both types of robots, they differ in the type of feedback; in the case of a humanoid robot it is much more formal and touch-avoiding [7]. Similarly, Kanda et al. [89] found that the physical appearance of a robot does not affect the verbal behaviour of humans, but its effect shows in more subtle ways in humans' non-verbal behaviour, such as the preferred interaction distance or the delay in response. This finding was further supported by Walters et al. [148], who showed that the comfortable approach distance is affected by the robot's voice. Furthermore, androids can be as persuasive in HRI as humans [102], which could be used to change people's behaviour in ways useful to the robot.

4 Benefits and Opportunities of Anthropomorphic Robots

From the literature review presented in the previous section, it becomes clear that there are multiple ways in which robots can be designed in order to create an impression of human-likeness. This creates an opportunity to positively impact HRI by building on the phenomenon of anthropomorphism. DiSalvo and Gemperle [48] suggested that the four main reasons for designing objects with anthropomorphic shapes are: keeping things the same (objects which historically had anthropomorphic forms maintain this appearance as a convention), explaining the unknown (anthropomorphic shapes can help to explain products with new functionality), reflecting product attributes (using anthropomorphic shapes to emphasize a product's attributes) and projecting human values (influencing the experience of a product via the socio-cultural context of the user).

4.1 Facilitation of HRI

The practical advantage of building anthropomorphic robots is that it facilitates human–machine interaction (see e.g. [62,63,69,147]).


It also creates familiarity with a robotic system [41] and builds on established human skills, developed in social human–human interactions [129]. A human-like machine enables an untrained human user to understand and predict the machine's behaviour—animatronic toys and entertainment robots are an obvious example, but anthropomorphizing is valuable too in the case of industrial robots (see e.g. Rod Brooks's Baxter).1 Believability is particularly important in socially assistive robotics (see [136]). In addition, where a machine requires individualized training, a human-like appearance encourages human observers to interact with the machine and so produces more training opportunities than might otherwise be available [32].

Considering that social robots may be used in public spaces in the future, there is a need to ensure that people will treat them properly, i.e. not destroy them in an act of vandalism. We already know that people are less reluctant to punish robots than human beings [16], although another study did not show any difference in punishment between a dog-like robot AIBO and a human partner [118]. Furthermore, lighter, but nevertheless still negative, forms of abuse and impoliteness towards a robot can occur when a robot is placed in a social environment [117]. Therefore, it is necessary to counteract these negative behaviours towards a robot. Anthropomorphism could be used in order to increase people's willingness to care about the well-being of robots. Robots that are human-like in both appearance and behaviour are treated less harshly than machine-like robots [17,20]. This could be related to higher empathy expressed towards anthropomorphic robots, as their appearance and behaviour can facilitate the process of relating to them [119]. A robot that expresses 'emotions' could also be treated as more human-like [59], which could change people's behaviour.

Depending on a robot’s task, different levels of anthro-pomorphism might be required. A robot’s embodimentaffects the perception of the robot as an interaction part-ner [64]. The physical appearance of the robot is oftenused to judge its knowledge [109]. Therefore, by manip-ulating the robot’s appearance it is possible to elicit dif-ferent levels of information from people, i.e. less if con-versation efficiency is desired or more when the robotshould receive a robust amount of detailed feedback. Fur-thermore, people comply more with a robot whose degreeof anthropomorphism matches the level of a task’s seri-ousness [70]. In the context of educational robots that areused to teach human pupils, a robot that can employ socialsupportive behaviour while teaching can lead to superiorperformance by students [121]. Moreover, in some cul-

1 See http://www.rethinkrobotics.com/resources/videos/.


4.2 Anthropomorphism as a Psychological Test-bed

From a psychological perspective, human-like robots present a way to test theories of psychological and social development. It may be possible to investigate hypotheses about the acquisition (or deficit) of cognition and affect, in particular the development of theory of mind (TOM) abilities [29,71,125,126], by modeling the relevant behaviours on robots (e.g. [127]). Doing so would enable psychological theories to be tested in controlled, standardized conditions, without (it is assumed) ethical problems regarding consent and treatment of infant human subjects. Here practical and theoretical research goals are linked: devices such as robot physiotherapists must be able to identify their human clients' interests and feelings and to respond appropriately—and so research on the acquisition of TOM abilities is essential to building effective service robots.

4.3 Philosophical Origins of Human-like Robots

From a philosophical perspective, two striking ideas appear in the AI literature on anthropomorphic robots. First, there is the notion of building a socially intelligent robot (see e.g. [29,33]). This replaces AI's grand aim of building a human-level intelligent machine (or Artificial General Intelligence (AGI)) with the language and intellectual abilities of a typical human adult—a project that, despite some extravagant claims in the 1980s, has not succeeded. Instead the (still-grand) goal is to construct a machine that can interact with human beings or other machines, responding to normal social cues. A notable part of this is the aim to build a machine with the cognitive and affective capacities of a typical human infant (see e.g. [128,130]). For several researchers in social and developmental robotics, this involves building anthropomorphic machines [85]. Second, there is the notion that there is nothing more to the development of intentionality than anthropomorphism. We unwittingly anthropomorphize human infants; a carer interprets a baby's merely reflexive behaviour as (say) social smiling, and by smiling in return encourages the development of social intelligence in the baby. Some robotics researchers suggest that, if human observers interact with machines in ways analogous to this carer-infant exchange, the result will be intelligent machines (see e.g. [29,36]). Such interaction will be easiest, it is implied, if it involves anthropomorphic robots. The combination of these two ideas gives AI a new take on the project of building a thinking machine.

Many concepts at the centre of this work in current AI were present at the birth of the field, in Turing's writings on machine intelligence. Turing [139,141] theorized that the cortex of the human infant is a learning machine, to be organized by a suitable process of education (to become a universal machine), and that simulating this process is the route to a thinking machine.


He described the child machine, a machine that is to learn as human infants learn (see [113]). Turing also emphasized the importance of embodiment, particularly human-like embodiment [139,141]—he did, though, warn against the (hypothesized) uncanny valley [99] (see Sect. 5.2 below), saying that too human-like machines would have 'something like the unpleasant quality of artificial flowers' [143, p. 486]. For Turing, the project of building a child machine has both psychological and philosophical benefits. Concerning the former, attempting to construct a thinking machine will help us, he said, to find out how human beings think [140, p. 486]. Concerning the latter, for Turing the concept of the child machine is inextricably connected to the idea of a genuine thinking thing: the machine that learns to generalize from past education can properly be said to have 'initiative' and to make 'choices' and 'decisions', and so can be regarded as intelligent rather than a mere automaton [141, p. 429], [142, p. 393].

5 Disadvantages of Anthropomorphism in HRI

Despite the numerous advantages that anthropomorphism brings to HRI, there are also some drawbacks related to human-like design and task performance. Anthropomorphism is not a solution in itself, but a means of facilitating an interaction [52]. When a robot's human-likeness has a negative effect on interaction, it should be avoided. For example, during medical check-ups conducted by a robot, patients felt less embarrassed with a machine-like robot than with a more humanoid robot [20]. Furthermore, the physical presence of a robot results in a decreased willingness of people to disclose an undesirable behaviour compared to a projected robot [92]. These findings suggest that a machine-like form could be beneficial, as patients might provide additional information—which otherwise they might have tried to hide, if they thought it embarrassing—that could help toward a correct diagnosis. Moreover, providing an anthropomorphic form to a robot might not be sufficient to facilitate people's interaction with it. People engage more in HRI when it is goal-oriented rather than a purely social interaction [8].

Furthermore, a robot’s anthropomorphism leads to dif-ferent expectations regarding its capabilities and behaviourcompared to machine-like robots. People expect that human-like robots will follow human social norms [135]. Therefore,a robot that does not have the required capabilities to doso can decrease the satisfaction of their human partners inHRI. Although in the short term this can be counter-balancedby the higher reward-value due to the robot’s anthropomor-phic appearance [135]. In the context of search and rescue,people felt calmer when a robot had a non-anthropomorphic

appearance; considering that such an interaction context ishighly stressful for humans, apparently a robot’s machine-like-aspects are more desirable [25]. A similar preferencewas shown in the case of robots that are designed to interactin crowded urban environments [67]. People indicate that inthe first place a robot should be functional and able to com-plete its tasks correctly and only in the second place doesits enjoyable behaviour matter [96]. In addition, a robot’smovement does not need to be natural because in some con-texts people may prefer caricatured and exaggerated behav-iour [150]. There are also legal questions regarding anthropo-morphic technology that must be addressed. Android sciencehas to resolve the issue of the moral status of androids—or unexpected ramifications might hamper the field in thefuture [37].

5.1 The Risks of Anthropomorphism in AI

The disadvantages of building anthropomorphic robots also include the following, in ascending order of seriousness for AI. First, it has been claimed, the phenomenon of anthropomorphic robots (at least as portrayed in fiction) encourages the general public to think that AI has advanced further than it has in reality—and to misidentify AI as concerned only with human-like systems. Jordan Pollack remarked, for example, 'We cannot seem to convince the public that humanoids and Terminators are just Hollywood special effects, as science-fictional as the little green men from Mars!' [108, p. 50]. People imagine that service robots of the future will resemble robots from literature and movies [101].

The second problem arises specifically for those researchers in social and developmental robotics whose aim is to build anthropomorphic robots with the cognitive or affective capacities of the human infant. Several theorists claim that focusing on human-level and human-like AI is a hindrance to progress in AI: research should focus on the 'generic' concept of intelligence, or on mindless intelligence [108], rather than on the parochial goal of human intelligence (see e.g. [66,79]). To quote Pollack again, AI behaves 'as if human intelligence is next to godliness' [108, p. 51]. In addition, critics say, the difficult goal of human-like AI sets an unrealistic standard for researchers (e.g. [42]). If sound, such objections would apply to the project of building a child machine.

The third difficulty arises for any attempt to build a socially intelligent robot. This is the forensic problem of anthropomorphism—the problem of how we are reliably able to detect intelligence in machines, given that the tendency to anthropomorphize leads us to find intelligence almost everywhere [110,112]. Researchers in AI have long anthropomorphized their machines, and anthropomorphic robots can prompt fantasy and make-believe in observers and researchers alike.


Such anthropomorphizing is not 'innocent': instead it introduces a bias into judgements of intelligence in machines and so renders these judgements suspect.2 Even at the beginning of the field, in 1948, Turing said that playing chess against a 'paper' machine (i.e. a simulation of machine behaviour by a human being using paper and pencil) 'gives a definite feeling that one is pitting one's wits against something alive' [141, p. 412]. His descriptions of his own machines were sometimes extravagantly anthropomorphic—he said, for example, that his child machine could not be sent to school 'without the other children making excessive fun of it' [139, pp. 460–1]—but they were also plainly tongue-in-cheek. He made it clear, when talking of 'emotional' communication between human and child-machine3 (the machine was to be organised by means of 'pain' and 'pleasure' inputs), that this did 'not presuppose any feelings on the part of the machine' [139], [141, p. 461]. In Turing's vocabulary, 'pain' is just the term for a signal that cancels an instruction in the machine's table.

Anthropomorphizing leaves AI with no trustworthy way of testing for intelligence in artificial systems. At best, the anthropomorphizing of machines obscures both AI's actual achievements and how far it has to go in order to produce genuinely intelligent machines. At worst, it leads researchers to make plainly false claims about their creations; for example, Yamamoto described his robot vacuum cleaner Sozzy as 'friendly' [153], and Hogg, Martin, and Resnick said that Frantic, their Braitenberg-like creature made of Lego bricks, 'does nothing but think' [84].

In a classic 1976 paper entitled 'Artificial Intelligence Meets Natural Stupidity', Drew McDermott advised scientists to use 'colourless' or 'sanitized' technical descriptions of their machines in place of unreflective and misleading psychological expressions [98, p. 4]. (McDermott's target was 'wishful mnemonics' [98, p. 4], but anthropomorphizing in AI goes far beyond such shorthand.) Several researchers in social robotics can be seen as in effect attempting to follow McDermott's advice with respect to their anthropomorphic robots. These researchers refrain from saying that their 'expressive' robots have emotions, and instead say that they have emotional behaviour. Kismet, Cynthia Breazeal's famous (now retired) interactive 'expressive' robot, was said (without scare-quotes) to have a 'smile on [its] face', a 'sorrowful expression', a 'fearful expression', a 'happy and interested expression', a 'contented smile', a 'big grin', and a 'frown' ([31, pp. 584–588], [30,35]). However, this vocabulary is not sufficiently sanitized: for example, to say that a machine smiles is to say that the machine has an intent, namely to communicate, and an inner state, typically happiness.4

2 Daniel Dennett uses the notion of 'innocent' anthropomorphizing in [46].
3 Turing distinguished between communication by 'pain' and 'pleasure' inputs and 'unemotional' communication by means of 'sense stimuli' [141]; for analysis see [114].

Here the forensic problem of anthropomorphism reemerges. We need a test for expressiveness, as much as for intelligence, that is not undermined by our tendency to anthropomorphize.

5.2 Uncanny Valley

Anthropomorphism not only affects how people behave towards robots, but also whether they will accept them in natural human environments. The relation between the physical appearance of robots and their acceptance has recently received major interest in the field of HRI. Despite this, there are still many unanswered questions, and most efforts are devoted to the uncanny valley theory [99]. This theory proposes a non-linear relationship between a robot's degree of anthropomorphism and its likeability. With increased human-likeness a robot's likeability also increases; yet when a robot closely resembles a human, but is not identical, this produces a strong negative emotional reaction. Once a robot's appearance is indistinguishable from that of a human, the robot is liked as much as a human being [99].
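Since Mori's proposal is qualitative and specifies no equation, the shape it describes can be illustrated with a toy curve: affinity rises with human-likeness, dips sharply just short of full human-likeness, and recovers at indistinguishability. In the Python sketch below, the functional form (a linear rise minus a Gaussian dip) and all constants are our own illustrative assumptions, not part of the theory.

```python
import numpy as np

def toy_affinity(h: np.ndarray) -> np.ndarray:
    """Toy uncanny-valley curve over human-likeness h in [0, 1].
    Linear rise minus a Gaussian 'valley' centred just below full
    human-likeness; the form and constants are illustrative only."""
    rise = h
    dip = 0.9 * np.exp(-((h - 0.85) ** 2) / (2 * 0.05 ** 2))
    return rise - dip

h = np.linspace(0.0, 1.0, 11)
for hi, a in zip(h, toy_affinity(h)):
    print(f"human-likeness {hi:.1f} -> affinity {a:+.2f}")
# Affinity climbs with h, plunges near h = 0.85 (the valley), and
# recovers by h = 1.0, where the robot is indistinguishable from a human.
```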

It has been suggested that neurological changes are responsible for the uncanny valley phenomenon [124]. Furthermore, it is the perceived higher ability of anthropomorphic robots to have experience that makes people particularly uncomfortable with human-like technology [73]. However, in spite of its popularity, empirical evidence for the uncanny valley theory is relatively sparse. Some studies did not find evidence supporting the hypothesis [19], while others suggest that the relation between likeability and appearance might have a different shape, one that resembles a cliff more than a valley [18]. We believe that future research should address three key issues: defining the terminology, finding entities that lie between the deepest point of the uncanny valley and the human level, and investigating the uncanny valley in studies that involve actual HRI.

Up to now, multiple terms have been used in place of the Japanese term used by Mori to describe the uncanny valley. This reduces the comparability of the studies. Moreover, other researchers point out that even the human-likeness axis of the graph is not well-defined [38]. Efforts are spent on trying to find a term that would fit the hypothesized shape of the valley rather than on creating a hypothesis that fits the data. It is also possible that the term used by Mori might not be the most appropriate one and that the problem does not lie only in the translation.

The uncanny valley hypothesis suggests that when a robot crosses the deepest point of the uncanny valley its likeability will suddenly increase. However, to date, no entity has been shown to exist that is similar enough to a human for it to fit this description.

4 This is not to suggest that what makes a 'smile' into a smile is some feeling—an inner state—in the robot. See [111,113].


We propose that work on the process opposite to anthropomorphism could fill that niche. It has been suggested that dehumanization, which is the deprivation of human qualities in real human beings, is such a process [78]. Work on dehumanization shows which characteristics are perceived as critical for the perception of others as human; their elimination leads to people being treated as if they were not fully human. The studies of dehumanization show that there are two distinct senses of humanness—uniquely human characteristics and human nature. Uniquely human characteristics distinguish humans from other species and reflect attributes such as intelligence, intentionality, or secondary emotions. On the other hand, people deprived of human nature are perceived as automata, lacking in primary emotions, sociability, or warmth. These two dimensions map well onto the proposed dimensionality of mind-attribution, which was found to involve agency and experience [72]. Therefore, we could explore the uncanny valley, not by trying to reach the human level starting from a machine, but rather by using humans that are perceived as lacking some human qualities. There is also some empirical evidence that anthropomorphism is a multi-dimensional phenomenon [156].

In addition, all previous studies of the uncanny valley hypothesis have used either static images or videos of robots. The question remains how well these findings generalize to actual HRI. It is possible that the uncanny valley would have no effect on HRI, or that its effect would be limited to the very first few seconds of interaction. Studies of the uncanny valley phenomenon in computer graphics indicate that it might be related to the exposure to a specific agent [47]: increased familiarity with an agent could be associated with decreased uncanniness felt as a result of its appearance. The physical appearance of a robot is not the most important factor in anthropomorphism [90]. Furthermore, the perception of human-likeness changes during interaction [68]. It is possible that the uncanny valley might lead to people being less willing to engage in interaction. However, we believe that more effort should be put into interaction design rather than physical-appearance design, since the relationship between the former and the uncanny valley needs further empirical research.

6 Overcoming the Problems of Anthropomorphic Technology

Even if we accept the uncanny valley as it was proposed by Mori, there are certain reasons why the consequences for the acceptance of anthropomorphic robots are not as profound as the theory indicated. In non-laboratory conditions people rarely reported an eerie feeling when interacting with a geminoid [21]. Furthermore, at least for computer graphics, there are guidelines regarding the creation of anthropomorphic heads that can reduce the unnaturalness of agents' faces [97]. Finally, people find robots' performance much more important than their appearance [76], which further emphasizes that whether a robot performs its task correctly is of greater importance than how it looks.

6.1 Facilitating Positive Attitudes Toward Eerie Machines

If the uncanny valley has a lasting effect on HRI, it is beneficial to consider how the acceptance of eerie machines could be facilitated. Previous work in HRI shows that people can perceive robots as either ingroup or outgroup members [57] and even apply racial prejudice towards them [58]. Therefore, the theoretical foundations for the integration of highly human-like robots could build on the extensive research examining how to facilitate positive intergroup relations between humans belonging to differing national, ethnic, sexual, or religious groups. The topic of intergroup relations has been heavily investigated by social psychologists worldwide since World War II. While some of this early work was interested in understanding the psychological factors that led to the events of the Holocaust, Gordon Allport's seminal work on the nature of prejudice [5] provided the field with a larger platform to examine the basic psychological factors underlying stereotyping, prejudice, and discrimination.

From several decades of research on the topic, the field has not only shed light on the varied ways in which intergroup bias manifests itself in everyday life [45,74], but has also helped us better understand the economic, motivational, cognitive, evolutionary, and ideological factors that drive intergroup bias and conflict between social groups [83]. In addition, the field has identified several social psychological approaches and strategies that can be used to reduce prejudice, stereotyping, and discrimination toward outgroups (i.e. groups to which we do not belong). These strategies range from interventions that promote positive feelings and behaviour toward outgroups through media messages [50,105,152], recategorization of outgroups into a common superordinate group [50,51], and valuing diversity and what each subgroup can contribute to the greater good [145,146,154], to promoting positive contact with members of the outgroup [50,106,107], among others.

In the context of HRI, these social psychological strategies may be used to promote positive HRI and favorable social attitudes toward robots. For example, from over fifty years of empirical research on intergroup contact, there is strong empirical evidence that positive contact with an outgroup can reduce prejudice or negative feelings toward the outgroup [106,107]. Such positive contact between two groups has been shown to be particularly beneficial when four conditions are met: (a) within a given situation, the perceived status of the two groups must be equal; (b) they must have common goals; (c) cooperation between the two groups must occur; and (d) positive contact between the groups must be perceived as sanctioned by authority. Such intergroup contact may reduce outgroup prejudice for many different reasons [106,107], one reason being that positive contact allows one to learn more about the outgroup. In the context of HRI, one may, therefore, expect that negative attitudes towards human-like robots may stem from their perceived unfamiliarity and unpredictability. Although they look human-like, people cannot be sure whether machines will behave like a human being. Increased familiarity with such technology might thereby lead to decreased uncertainty regarding its actions and in turn reduce negative feelings toward it. Empirical research is needed in order to establish whether intergroup contact can facilitate greater acceptance of anthropomorphic robots. More broadly, such social psychological research may offer insight into understanding when and why people may feel unfavorably toward robots, while offering practical strategies that can be considered in HRI as a means to promote greater social acceptance of robots in our increasingly technological world.

6.2 Limiting the Risks Associated with Anthropomorphic Technology

Of the criticism that anthropomorphic robots (in fiction at least) encourage the general public to think that AI has progressed further than in actuality, we can simply say that this may underestimate the public's good sense. The objection, against those researchers aiming to build a child-machine, that human-like AI is a mistaken and unproductive goal can also be answered. For example, the real target of this complaint may be, not human-level or human-like AI as such, but rather symbolic AI as a means of attaining human-level AI (see [110]). Behaviour-based approaches may escape this complaint. Moreover, the assumption that there is such a thing as generic intelligence, and that this is the proper subject of study for researchers in computational intelligence, begs an important question. Perhaps our concept of intelligence just is drawn from the paradigm examples of thinking things—human beings.

This leaves the forensic problem of anthropomorphism. In general, AI requires a distinction between a mere machine and a thinking machine, and this distinction must be proof against the human tendency to anthropomorphize. This is exactly what Turing's imitation game provides (see [110,112]). The game disincentivizes anthropomorphism: an observer (i.e. interrogator) who anthropomorphizes a contestant increases the chances of making the embarrassing mistake of misidentifying a computer as a human being. The behaviour of interrogators (in early Loebner Prize Contests, where a series of machine and human contestants were interviewed individually) shows that observers go out of their way to avoid this awkward error, to the extent that they misidentify human beings as computers. In addition, the imitation game controls for the effect of the tendency to anthropomorphize; in simultaneous interviews with a machine and a human contestant, an observer's propensity to anthropomorphize (which we can assume to be present equally in both interviews) cannot advantage one contestant over the other. Turing's test is proofed against the human tendency to anthropomorphize machines.
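The bias-cancellation argument can be illustrated with a toy Monte Carlo simulation. In the Python sketch below, every quantity (the 'human-likeness impression' scores, the noise level, the size of the bias) is a hypothetical stand-in chosen only to show the structural point: a bias added equally to both contestants' interviews drops out of the comparison, leaving the interrogator's accuracy unchanged.

```python
import random

def trial(machine: float, human: float, bias: float) -> bool:
    """One simultaneous imitation-game interview pair. Each contestant
    produces a noisy 'human-likeness impression'; the interrogator's
    anthropomorphizing bias is added to BOTH impressions, since the
    interrogator does not know which contestant is which.
    Returns True if the interrogator correctly picks the human."""
    impression_machine = machine + bias + random.gauss(0, 0.1)
    impression_human = human + bias + random.gauss(0, 0.1)
    return impression_human > impression_machine

def accuracy(bias: float, trials: int = 100_000) -> float:
    # Hypothetical underlying scores: the human reads slightly more human.
    return sum(trial(0.5, 0.6, bias) for _ in range(trials)) / trials

for b in (0.0, 0.3, 0.9):  # increasing anthropomorphizing bias
    print(f"bias={b:.1f} -> accuracy={accuracy(b):.3f}")
# All three accuracies agree up to sampling noise: a constant offset
# applied to both contestants cancels in the comparison.
```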

But how is anthropomorphism-proofing to be applied to judgements of intelligence or affect in anthropomorphic robots? Much of the engineering of these robots is intended to make them visibly indistinguishable from a human being. An open imitation game, where both contestants—a hyper-realistic anthropomorphic robot and an actual human being—are seen by the interrogator, would provide a disincentive to anthropomorphizing and a control on the tendency to anthropomorphize. However, interrogators in this game might well focus on characteristics of the contestants that Turing labeled 'irrelevant' disabilities—qualities immaterial to the question whether a machine can think, such as a failure 'to shine in beauty competitions' [139, p. 442]. An interrogator might, for example, concentrate on the functioning of a contestant's facial muscles or the appearance of the skin. This open game, although anthropomorphism-proofed, would fail as a test of intelligence or affect in machines. On the other hand, in the standard game, where both the robot and the human contestants are hidden, much of the robot's engineering would be irrelevant to its success or failure in the game—for example, David Hanson's robot Einstein's 'eyebrows' [103] surely do not contribute to a capacity for cognition or affect. This is why Turing said that there is 'little point in trying to make a "thinking machine" more human by dressing it up in ... artificial flesh' [139, p. 442] and hoped that 'no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics such as the shape of the human body' [143, p. 486].

In sum, AI requires some means of anthropomorphism-proofing judgements of intelligence or affect in anthropomorphic robots—otherwise it lacks a distinction between justified and unjustified anthropomorphizing. An open Turing test will test for the wrong things. A standard Turing test will suffice, but seems to be inconsistent with the growing trend for hyper-realistic and eerie robots.

7 Conclusions

In this paper we have discussed the widespread tendency of people to anthropomorphize their surroundings and, in particular, how this affects HRI. Our understanding of its impact on HRI is still in its infancy. However, there is no doubt that it creates new opportunities and poses problems that can have profound consequences for the field of HRI and the acceptance of robotic technology.

Anthropomorphism is not limited to the appearance of a robot; the design of a robotic platform must also consider a robot's interaction with humans as an important factor. Accordance between these factors is necessary for a robot to maintain its human-like impression. A well-designed system can facilitate interaction, but it must match the specific task given to a robot. For people it is more important that a robot does its job accurately than how it looks. However, we have presented multiple examples where anthropomorphic form in appearance and behaviour can help a robot to perform its tasks successfully by eliciting desired behaviours from human interaction partners.

On the other hand, the development of anthropomorphic robots comes at certain costs. People expect them to adhere to human norms and have much higher expectations regarding their capabilities compared to robots with a machine-like appearance. The uncanny valley hypothesis suggests that there is repulsion toward highly human-like machines that are still distinguishable from humans. However, in this paper we have shown the main shortcomings of previous work that might hamper the applicability of this theory in HRI. Future research should focus on investigating this phenomenon in real HRI rather than by using images or videos. Moreover, work on the opposite process, dehumanization, can help us to understand the relationship between acceptance and anthropomorphism better. In addition, in order to facilitate the integration of human-like robots, we propose to employ strategies from the area of intergroup relations that are used to facilitate positive relations between human subgroups.

We have also shown that the phenomenon of anthropomorphic robots generates challenging philosophical and psychological questions. In order for the field of AI to progress further, it is necessary to acknowledge them. These challenges can not only affect how the general public perceives current and future directions of research on anthropomorphic and intelligent systems, but might also determine how far the field can go. It remains to be seen whether the field can successfully address these problems.

References

1. Aaker J (1997) Dimensions of brand personality. J Mark Res34:347–356

2. Abend L (2008) In spain, human rights for apes. TIMEcom http://www.time.com/time/world/article/08599(1824206)

3. Adolphs R (2005) Could a robot have emotions? Theoretical perspectives from social cognitive neuroscience. In: Arbib M, Fellous JM (eds) Who needs emotions: the brain meets the robot. Oxford University Press, Oxford, pp 9–28

4. Aggarwal P, McGill A (2007) Is that car smiling at me? Schema congruity as a basis for evaluating anthropomorphized products. J Consum Res 34(4):468–479

5. Allport GW (1954) The nature of prejudice. Addison-Wesley, Reading

6. Arbib MA, Fellous JM (2004) Emotions: from brain to robot. Trends Cogn Sci 8(12):554–561

7. Austermann A, Yamada S, Funakoshi K, Nakano M (2010) How do users interact with a pet-robot and a humanoid. In: Conference on human factors in computing systems – proceedings, Atlanta, pp 3727–3732

8. Baddoura R, Venture G, Matsukata R (2012) The familiar as a key-concept in regulating the social and affective dimensions of HRI. In: IEEE-RAS international conference on humanoid robots, Osaka, pp 234–241

9. Bae J, Kim M (2011) Selective visual attention occurred in change detection derived by animacy of robot's appearance. In: Proceedings of the 2011 international conference on collaboration technologies and systems, CTS 2011, pp 190–193

10. Barrett J (2004) Why would anyone believe in God? AltaMira Press, Lanham

11. Barrett J (2007) Cognitive science of religion: what is it and why is it? Relig Compass 1(6):768–786

12. Barrett JL (2000) Exploring the natural foundations of religion. Trends Cogn Sci 4(1):29–34

13. Bartneck C (2008) Who like androids more: Japanese or US Americans? In: Proceedings of the 17th IEEE international symposium on robot and human interactive communication, RO-MAN, Munich, pp 553–557

14. Bartneck C (2013) Robots in the theatre and the media. In: Design & semantics of form & movement (DeSForM 2013), Philips, pp 64–70

15. Bartneck C, Hu J (2008) Exploring the abuse of robots. Interact Stud 9(3):415–433

16. Bartneck C, Rosalia C, Menges R, Deckers I (2005) Robot abuse: limitation of the media equation. In: Proceedings of the Interact 2005 workshop on agent abuse, Rome

17. Bartneck C, Reichenbach J, Carpenter J (2006) Use of praise and punishment in human–robot collaborative teams. In: RO-MAN 2006: the 15th IEEE international symposium on robot and human interactive communication (IEEE Cat No. 06TH8907), Piscataway

18. Bartneck C, Kanda T, Ishiguro H, Hagita N (2007) Is the uncanny valley an uncanny cliff? In: Proceedings of the IEEE international workshop on robot and human interactive communication, Jeju, pp 368–373

19. Bartneck C, Kanda T, Ishiguro H, Hagita N (2009) My robotic doppelganger – a critical look at the uncanny valley theory. In: 18th IEEE international symposium on robot and human interactive communication, RO-MAN 2009. IEEE, pp 269–276

20. Bartneck C, Bleeker T, Bun J, Fens P, Riet L (2010) The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn 1:1–7

21. Becker-Asano C, Ogawa K, Nishio S, Ishiguro H (2010) Exploring the uncanny valley with geminoid HI-1 in a real-world application. In: Proceedings of the IADIS international conference interfaces and human computer interaction 2010, IHCI. Proceedings of the IADIS international conference game and entertainment technologies 2010, part of the MCCSIS 2010, Freiburg, pp 121–128

22. Bering J (2005) Origins of the social mind: evolutionary psychology and child development. The Guilford Press, New York

23. Bering J (2010) The god instinct. Nicholas Brealey, London

24. Bering JM (2006) The folk psychology of souls. Behav Brain Sci 29(05):453–462


25. Bethel CL, Salomon K, Murphy RR (2009) Preliminary results: humans find emotive non-anthropomorphic robots more calming. In: Proceedings of the 4th ACM/IEEE international conference on human–robot interaction, HRI'09, pp 291–292

26. Bloom P (2005) Descartes' baby: how the science of child development explains what makes us human. Basic Books, New York

27. Boyer P (2001) Religion explained. Basic Books, New York

28. Boyer P (2003) Religious thought and behaviour as by-products of brain function. Trends Cogn Sci 7(3):119–124

29. Breazeal C (1998) Early experiments using motivations to regulate human–robot interaction. In: AAAI fall symposium on emotional and intelligent: the tangled knot of cognition. Technical report FS-98-03, pp 31–36

30. Breazeal C (2000) Sociable machines: expressive social exchange between humans and robots. Dissertation, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

31. Breazeal C (2001) Affective interaction between humans and robots. In: Kelemen J, Sosik P (eds) ECAL 2001. LNAI vol 2159. Springer-Verlag, Berlin, pp 582–591

32. Breazeal C (2006) Human–robot partnership. IEEE Intell Syst 21(4):79–81

33. Breazeal C, Fitzpatrick P (2000) That certain look: social amplification of animate vision. In: Proceedings of the AAAI fall symposium on socially intelligent agents: the human in the loop

34. Breazeal C, Scassellati B (2000) Infant-like social interactions between a robot and a human caregiver. Adapt Behav 8(1):49–74

35. Breazeal C, Scassellati B (2001) Challenges in building robots that imitate. In: Dautenhahn K, Nehaniv CL (eds) Imitation in animals and artifacts. MIT Press, Cambridge

36. Breazeal C, Buchsbaum D, Gray J, Gatenby D, Blumberg B (2005) Learning from and about others: towards using imitation to bootstrap the social understanding of others by robots. Artif Life 11(1–2):31–62

37. Calverley DJ (2006) Android science and animal rights, does an analogy exist? Connect Sci 18(4):403–417

38. Cheetham M, Suter P, Jancke L (2011) The human likeness dimension of the "uncanny valley hypothesis": behavioral and functional MRI findings. Front Hum Neurosci 5:126

39. Chew S, Tay W, Smit D, Bartneck C (2010) Do social robots walk or roll? In: Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and Lecture notes in bioinformatics), LNAI, vol 6414. Springer, Singapore, pp 355–361

40. Chin M, Sims V, Clark B, Lopez G (2004) Measuring individual differences in anthropomorphism toward machines and animals. In: Proceedings of the human factors and ergonomics society annual meeting, vol 48. SAGE Publications, Thousand Oaks, pp 1252–1255

41. Choi J, Kim M (2009) The usage and evaluation of anthropomorphic form in robot design. In: Undisciplined! Design research society conference 2008, Sheffield Hallam University, Sheffield, 16–19 July 2008

42. Cohen PR (2005) If not Turing's test, then what? AI Mag 26(4):61

43. Cooney M, Zanlungo F, Nishio S, Ishiguro H (2012) Designing a flying humanoid robot (FHR): effects of flight on interactive communication. In: Proceedings of the IEEE international workshop on robot and human interactive communication, Paris, pp 364–371

44. Darwin C (1872/1998) The expression of the emotions in man and animals. Oxford University Press, New York

45. Dasgupta N (2004) Implicit ingroup favoritism, outgroup favoritism, and their behavioral manifestations. Soc Justice Res 17(2):143–169

46. Dennett DC, Dretske F, Shurville S, Clark A, Aleksander I, Cornwell J (1994) The practical requirements for making a conscious robot. Philos Trans R Soc Lond Ser A 349(1689):133–146

47. Dill V, Flach LM, Hocevar R, Lykawka C, Musse SR, Pinho MS (2012) Evaluation of the uncanny valley in CG characters. In: 12th international conference on intelligent virtual agents, IVA 2012, September 12, 2012–September 14, 2012. Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and Lecture notes in bioinformatics), LNAI, vol 7502. Springer Verlag, Berlin, pp 511–513

48. DiSalvo C, Gemperle F (2003) From seduction to fulfillment: the use of anthropomorphic form in design. In: Proceedings of the 2003 international conference on designing pleasurable products and interfaces, DPPI '03. ACM, New York, pp 67–72

49. DiSalvo CF, Gemperle F, Forlizzi J, Kiesler S (2002) All robots are not created equal: the design and perception of humanoid robot heads. In: Proceedings of the conference on designing interactive systems: processes, practices, methods, and techniques. DIS, London, pp 321–326

50. Dovidio J, Gaertner S (1999) Reducing prejudice: combating intergroup biases. Curr Dir Psychol Sci 8(4):101–105

51. Dovidio J, Gaertner S, Saguy T (2009) Commonality and the complexity of "we": social attitudes and social change. Pers Soc Psychol Rev 13(1):3–20

52. Duffy BR (2003) Anthropomorphism and the social robot. Robot Auton Syst 42(3–4):177–190

53. Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114(4):864–886

54. Epley N, Akalis S, Waytz A, Cacioppo J (2008) Creating social connection through inferential reproduction: loneliness and perceived agency in gadgets, gods, and greyhounds. Psychol Sci 19(2):114–120

55. Evers V, Maldonado HC, Brodecki TL, Hinds PJ (2008) Relational vs. group self-construal: untangling the role of national culture in HRI. In: HRI 2008: Proceedings of the 3rd ACM/IEEE international conference on human–robot interaction: living with robots, Amsterdam, pp 255–262

56. Eyssel F, Kuchenbrandt D (2011) Manipulating anthropomorphic inferences about NAO: the role of situational and dispositional aspects of effectance motivation. In: Proceedings of the IEEE international workshop on robot and human interactive communication, pp 467–472

57. Eyssel F, Kuchenbrandt D (2012) Social categorization of social robots: anthropomorphism as a function of robot group membership. Br J Soc Psychol 51(4):724–731

58. Eyssel F, Loughnan S (2013) "It don't matter if you're black or white"? Effects of robot appearance and user prejudice on evaluations of a newly developed robot companion. In: 5th international conference on social robotics, ICSR 2013, October 27, 2013–October 29, 2013. Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and Lecture notes in bioinformatics), LNAI, vol 8239. Springer Verlag, Berlin, pp 422–431

59. Eyssel F, Hegel F, Horstmann G, Wagner C (2010) Anthropomorphic inferences from emotional nonverbal cues: a case study. In: Proceedings of the IEEE international workshop on robot and human interactive communication, Viareggio, pp 646–651

60. Eyssel F, Kuchenbrandt D, Bobinger S (2011) Effects of anticipated human–robot interaction and predictability of robot behavior on perceptions of anthropomorphism. In: HRI 2011: Proceedings of the 6th ACM/IEEE international conference on human–robot interaction, Lausanne, pp 61–67

61. Eyssel F, Kuchenbrandt D, Bobinger S, De Ruiter L, Hegel F (2012) "If you sound like me, you must be more human": on the interplay of robot and user features on human–robot acceptance and anthropomorphism. In: HRI'12: Proceedings of the 7th annual ACM/IEEE international conference on human–robot interaction, pp 125–126

62. Fasola J, Mataric MJ (2012) Using socially assistive human–robot interaction to motivate physical exercise for older adults. Proc IEEE 100:2512–2526

63. Feil-Seifer D, Mataric MJ (2011) Automated detection and classification of positive vs. negative robot interactions with children with autism using distance-based features. In: HRI 2011: Proceedings of the 6th ACM/IEEE international conference on human–robot interaction, pp 323–330

64. Fischer K, Lohan KS, Foth K (2012) Levels of embodiment: linguistic analyses of factors influencing HRI. In: HRI'12: Proceedings of the 7th annual ACM/IEEE international conference on human–robot interaction, pp 463–470

65. Fong T, Nourbakhsh I, Dautenhahn K (2003) A survey of socially interactive robots. In: IROS 2002, September 30, 2002. Robotics and autonomous systems, vol 42. Elsevier, pp 143–166

66. Ford K, Hayes P (1998) On computational wings: rethinking the goals of artificial intelligence. Sci Am Presents 9(4):79

67. Forster F, Weiss A, Tscheligi M (2011) Anthropomorphic design for an interactive urban robot – the right design approach? In: HRI 2011: Proceedings of the 6th ACM/IEEE international conference on human–robot interaction, Lausanne, pp 137–138

68. Fussell SR, Kiesler S, Setlock LD, Yew V (2008) How people anthropomorphize robots. In: HRI 2008: Proceedings of the 3rd ACM/IEEE international conference on human–robot interaction: living with robots, Amsterdam, pp 145–152

69. Giullian N, Ricks D, Atherton A, Colton M, Goodrich M, Brinton B (2010) Detailed requirements for robots in autism therapy. In: Conference proceedings: IEEE international conference on systems, man and cybernetics, pp 2595–2602

70. Goetz J, Kiesler S, Powers A (2003) Matching robot appearance and behavior to tasks to improve human–robot cooperation. In: Proceedings of the 12th IEEE international workshop on robot and human interactive communication (ROMAN 2003), pp 55–60

71. Gold K, Scassellati B (2007) A Bayesian robot that distinguishes "self" from "other". In: Proceedings of the 29th annual meeting of the cognitive science society

72. Gray HM, Gray K, Wegner DM (2007) Dimensions of mind perception. Science 315(5812):619

73. Gray K, Wegner D (2012) Feeling robots and human zombies: mind perception and the uncanny valley. Cognition 125(1):125–130

74. Greenwald A, Banaji M (1995) Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychol Rev 102(1):4–27

75. Guthrie S (1995) Faces in the clouds: a new theory of religion. Oxford University Press, Oxford

76. Hancock PA, Billings DR, Schaefer KE, Chen JYC (2011) A meta-analysis of factors affecting trust in human–robot interaction. Hum Factors 53(5):517–527

77. Hard R (2004) The Routledge handbook of Greek mythology: based on HJ Rose's "Handbook of Greek Mythology". Psychology Press, New York

78. Haslam N (2006) Dehumanization: an integrative review. Pers Soc Psychol Rev 10(3):252–264

79. Hayes P, Ford K (1995) Turing test considered harmful. In: IJCAI (1), pp 972–977

80. Hegel F, Krach S, Kircher T, Wrede B, Sagerer G (2008) Understanding social robots: a user study on anthropomorphism. In: Proceedings of the 17th IEEE international symposium on robot and human interactive communication, RO-MAN, pp 574–579

81. Hegel F, Gieselmann S, Peters A, Holthaus P, Wrede B (2011) Towards a typology of meaningful signals and cues in social robotics. In: Proceedings of the IEEE international workshop on robot and human interactive communication, pp 72–78

82. Heider F, Simmel M (1944) An experimental study of apparent behavior. Am J Psychol 57(2):243–259

83. Hewstone M, Rubin M, Willis H (2002) Intergroup bias. Annu Rev Psychol 53:575–604

84. Hogg DW, Martin F, Resnick M (1991) Braitenberg creatures. Epistemology and Learning Group, MIT Media Laboratory, Cambridge

85. Ishiguro H (2006) Android science: conscious and subconscious recognition. Connect Sci 18(4):319–332

86. Kahn P, Ishiguro H, Friedman B, Kanda T (2006) What is a human? Toward psychological benchmarks in the field of human–robot interaction. In: The 15th IEEE international symposium on robot and human interactive communication, 2006. ROMAN 2006, pp 364–371

87. Kahn PH Jr, Kanda T, Ishiguro H, Gill BT, Ruckert JH, Shen S, Gary HE, Reichert AL, Freier NG, Severson RL (2012) Do people hold a humanoid robot morally accountable for the harm it causes? In: HRI'12: Proceedings of the 7th annual ACM/IEEE international conference on human–robot interaction, Boston, pp 33–40

88. Kamide H, Kawabe K, Shigemi S, Arai T (2013) Development of a psychological scale for general impressions of humanoid. Adv Robot 27(1):3–17

89. Kanda T, Miyashita T, Osada T, Haikawa Y, Ishiguro H (2005) Analysis of humanoid appearances in human–robot interaction. In: 2005 IEEE/RSJ international conference on intelligent robots and systems. IROS, Edmonton, pp 62–69

90. Kiesler S, Goetz J (2002) Mental models and cooperation with robotic assistants. In: Proceedings of the conference on human factors in computing systems, pp 576–577

91. Kiesler S, Hinds P (2004) Introduction to this special issue on human–robot interaction. Hum-Comput Interact 19(1):1–8

92. Kiesler S, Powers A, Fussell SR, Torrey C (2008) Anthropomorphic interactions with a robot and robot-like agent. Soc Cogn 26(2):169–181

93. Kuchenbrandt D, Eyssel F, Bobinger S, Neufeld M (2013) When a robot's group membership matters. Int J Soc Robot 5(3):409–417

94. Lee SL, Lau I, Kiesler S, Chiu CY (2005) Human mental models of humanoid robots. In: Proceedings of the 2005 IEEE international conference on robotics and automation, ICRA 2005, pp 2767–2772

95. Levin D, Killingsworth S, Saylor M, Gordon S, Kawamura K (2013) Tests of concepts about different kinds of minds: predictions about the behavior of computers, robots, and people. Hum Comput Interact 28(2):161–191

96. Lohse M (2011) Bridging the gap between users' expectations and system evaluations. In: Proceedings of the IEEE international workshop on robot and human interactive communication, pp 485–490

97. MacDorman KF, Green RD, Ho CC, Koch CT (2009) Too real for comfort? Uncanny responses to computer generated faces. Comput Hum Behav 25(3):695–710

98. McDermott D (1976) Artificial intelligence meets natural stupidity. ACM SIGART Bull 57:4–9

99. Mori M (1970) The uncanny valley. Energy 7(4):33–35

100. Mutlu B, Yamaoka F, Kanda T, Ishiguro H, Hagita N (2009) Nonverbal leakage in robots: communication of intentions through seemingly unintentional behavior. In: Proceedings of the 4th ACM/IEEE international conference on human robot interaction, HRI '09. ACM, New York, pp 69–76

101. Oestreicher L, Eklundh KS (2006) User expectations on human–robot co-operation. In: Robot and human interactive communication, 2006. The 15th IEEE international symposium on ROMAN 2006. IEEE, pp 91–96

102. Ogawa K, Bartneck C, Sakamoto D, Kanda T, Ono T, Ishiguro H (2009) Can an android persuade you? In: RO-MAN 2009: the 18th IEEE international symposium on robot and human interactive communication, Piscataway, pp 516–521

103. Oh JH, Hanson D, Kim WS, Han IY, Kim JY, Park IW (2006) Design of android type humanoid robot Albert HUBO. In: 2006 IEEE/RSJ international conference on intelligent robots and systems, pp 1428–1433

104. Oztop E, Chaminade T, Franklin D (2004) Human-humanoid interaction: is a humanoid robot perceived as a human? In: 2004 4th IEEE/RAS international conference on humanoid robots, vol 2, pp 830–841

105. Paluck E (2009) Reducing intergroup prejudice and conflict using the media: a field experiment in Rwanda. J Pers Soc Psychol 96(3):574–587

106. Pettigrew T (1998) Intergroup contact theory. Annu Rev Psychol 49:65–85

107. Pettigrew T, Tropp L (2006) A meta-analytic test of intergroup contact theory. J Pers Soc Psychol 90(5):751–783

108. Pollack JB (2006) Mindless intelligence. IEEE Intell Syst 21(3):50–56

109. Powers A, Kramer ADI, Lim S, Kuo J, Lee SL, Kiesler S (2005) Eliciting information from people with a gendered humanoid robot. In: Proceedings of the IEEE international workshop on robot and human interactive communication, vol 2005, Nashville, pp 158–163

110. Proudfoot D (2011) Anthropomorphism and AI: Turing's much misunderstood imitation game. Artif Intell 175(5):950–957

111. Proudfoot D (2013a) Can a robot smile? Wittgenstein on facial expression. In: Racine TP, Slaney KL (eds) A Wittgensteinian perspective on the use of conceptual analysis in psychology. Palgrave Macmillan, Basingstoke, pp 172–194

112. Proudfoot D (2013b) Rethinking Turing's test. J Philos 110(7):391–411

113. Proudfoot D (2015) Turing's child-machines. In: Bowen J, Copeland J, Sprevak M, Wilson R (eds) The Turing guide: life, work, legacy. Oxford University Press, Oxford

114. Proudfoot D (2014) Turing's three senses of "emotional". Int J Synth Emot 5(2):7–20

115. Pyysiäinen I (2004) Religion is neither costly nor beneficial. Behav Brain Sci 27(06):746–746

116. Reeves B, Nass C (1996) The media equation. Cambridge University Press, New York

117. Rehm M, Krogsager A (2013) Negative affect in human robot interaction – impoliteness in unexpected encounters with robots. In: 2013 IEEE RO-MAN, pp 45–50

118. Reichenbach J, Bartneck C, Carpenter J (2006) Well done, robot! The importance of praise and presence in human–robot collaboration. In: Proceedings of the IEEE international workshop on robot and human interactive communication, Hatfield, pp 86–90

119. Riek LD, Rabinowitch TC, Chakrabarti B, Robinson P (2008) How anthropomorphism affects empathy toward robots. In: Proceedings of the 4th ACM/IEEE international conference on human–robot interaction, HRI'09, San Diego, pp 245–246

120. Riether N, Hegel F, Wrede B, Horstmann G (2012) Social facilitation with social robots? In: Proceedings of the seventh annual ACM/IEEE international conference on human–robot interaction, HRI '12. ACM, Boston, pp 41–48

121. Saerbeck M, Schut T, Bartneck C, Janse MD (2010) Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. In: Proceedings of the conference on human factors in computing systems, vol 3, pp 1613–1622

122. Salem M, Eyssel F, Rohlfing K, Kopp S, Joublin F (2011) Effects of gesture on the perception of psychological anthropomorphism: a case study with a humanoid robot. Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and Lecture notes in bioinformatics), LNAI, vol 7072. Springer, Berlin

123. Saver JL, Rabin J (1997) The neural substrates of religious experience. J Neuropsychiatry Clin Neurosci 9(3):498–510

124. Saygin AP, Chaminade T, Ishiguro H, Driver J, Frith C (2012) The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Soc Cogn Affect Neurosci 7(4):413–422

125. Scassellati B (2000) How robotics and developmental psychology complement each other. In: NSF/DARPA workshop on development and learning

126. Scassellati B (2002) Theory of mind for a humanoid robot. Auton Robots 12(1):13–24

127. Scassellati B (2007) How social robots will help us to diagnose, treat, and understand autism. Springer tracts in advanced robotics, vol 28, pp 552–563

128. Scassellati B, Crick C, Gold K, Kim E, Shic F, Sun G (2006) Social development [robots]. Comput Intell Mag IEEE 1(3):41–47

129. Schmitz M (2011) Concepts for life-like interactive objects. In: Proceedings of the fifth international conference on tangible, embedded, and embodied interaction, TEI '11. ACM, New York, pp 157–164

130. Shic F, Scassellati B (2007) Pitfalls in the modeling of developmental systems. Int J Hum Robot 4(2):435–454

131. Short E, Hart J, Vu M, Scassellati B (2010) No fair!! An interaction with a cheating robot. In: 5th ACM/IEEE international conference on human–robot interaction, HRI 2010, Osaka, pp 219–226

132. Sims VK, Chin MG, Lum HC, Upham-Ellis L, Ballion T, Lagattuta NC (2009) Robots' auditory cues are subject to anthropomorphism. In: Proceedings of the human factors and ergonomics society, vol 3, pp 1418–1421

133. Spexard T, Haasch A, Fritsch J, Sagerer G (2006) Human-like person tracking with an anthropomorphic robot. In: Proceedings of the IEEE international conference on robotics and automation, vol 2006, Orlando, pp 1286–1292

134. Syrdal DS, Dautenhahn K, Woods SN, Walters ML, Koay KL (2007) Looking good? Appearance preferences and robot personality inferences at zero acquaintance. In: AAAI spring symposium – technical report, vol SS-07-07, Stanford, pp 86–92

135. Syrdal DS, Dautenhahn K, Walters ML, Koay KL (2008) Sharing spaces with robots in a home scenario – anthropomorphic attributions and their effect on proxemic expectations and evaluations in a live HRI trial. In: AAAI fall symposium – technical report, vol FS-08-02, Arlington, pp 116–123

136. Tapus A, Mataric MJ, Scassellati B (2007) Socially assistive robotics [grand challenges of robotics]. IEEE Robotics Autom Mag 14(1):35–42

137. Torta E, Van Dijk E, Ruijten PAM, Cuijpers RH (2013) The ultimatum game as measurement tool for anthropomorphism in human–robot interaction. In: 5th international conference on social robotics, ICSR 2013, October 27, 2013–October 29, 2013. Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and Lecture notes in bioinformatics), LNAI, vol 8239. Springer Verlag, pp 209–217

138. Trimble M, Freeman A (2006) An investigation of religiosity and the Gastaut-Geschwind syndrome in patients with temporal lobe epilepsy. Epilepsy Behav 9(3):407–414

139. Turing A (1950) Computing machinery and intelligence. Mind 59(236):433–460

140. Turing A (2004a) Can digital computers think? In: Copeland BJ (ed) The essential Turing. Oxford University Press, Oxford

141. Turing A (2004b) Intelligent machinery. In: Copeland BJ (ed) The essential Turing. Oxford University Press, Oxford


142. Turing A (2004c) Lecture on the automatic computing engine. In: Copeland BJ (ed) The essential Turing. Oxford University Press, Oxford

143. Turing A, Braithwaite R, Jefferson G, Newman M (2004) Can automatic calculating machines be said to think? In: Copeland J (ed) The essential Turing. Oxford University Press, Oxford

144. Turkle S (2010) In good company? On the threshold of robotic companions. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins Publishing Company, Amsterdam/Philadelphia, pp 3–10

145. Verkuyten M (2006) Multicultural recognition and ethnic minority rights: a social identity perspective. Eur Rev Soc Psychol 17(1):148–184

146. Vorauer J, Gagnon A, Sasaki S (2009) Salient intergroup ideology and intergroup interaction. Psychol Sci 20(7):838–845

147. Wade E, Parnandi AR, Mataric MJ (2011) Using socially assistive robotics to augment motor task performance in individuals post-stroke. In: IEEE international conference on intelligent robots and systems, pp 2403–2408

148. Walters ML, Syrdal DS, Koay KL, Dautenhahn K, Te Boekhorst R (2008) Human approach distances to a mechanical-looking robot with different robot voice styles. In: Proceedings of the 17th IEEE international symposium on robot and human interactive communication, RO-MAN, pp 707–712

149. Walters ML, Koay KL, Syrdal DS, Dautenhahn K, Te Boekhorst R (2009) Preferences and perceptions of robot appearance and embodiment in human–robot interaction trials. In: Adaptive and emergent behaviour and complex systems – proceedings of the 23rd Convention of the society for the study of artificial intelligence and simulation of behaviour, AISB 2009, Edinburgh, pp 136–143

150. Wang E, Lignos C, Vatsal A, Scassellati B (2006) Effects of head movement on perceptions of humanoid robot behavior. In: HRI 2006: Proceedings of the 2006 ACM conference on human–robot interaction, vol 2006, pp 180–185

151. Wilson EO (2006) The creation: an appeal to save life on earth. Wiley Online Library, Hoboken

152. Wittenbrink B, Judd C, Park B (2001) Spontaneous prejudice in context: variability in automatically activated attitudes. J Pers Soc Psychol 81(5):815–827

153. Yamamoto M (1993) Sozzy: a hormone-driven autonomous vacuum cleaner. In: International Society for Optical Engineering, vol 2058, pp 212–213

154. Yogeeswaran K, Dasgupta N (2014) The devil is in the details: abstract versus concrete construals of multiculturalism differentially impact intergroup relations. J Pers Soc Psychol 106(5):772–789

155. von Zitzewitz J, Boesch PM, Wolf P, Riener R (2013) Quantifying the human likeness of a humanoid robot. Int J Soc Robot 5(2):263–276

156. Złotowski J, Strasser E, Bartneck C (2014) Dimensions of anthropomorphism: from humanness to humanlikeness. In: Proceedings of the 2014 ACM/IEEE international conference on human–robot interaction, HRI '14. ACM, New York, pp 66–73

Jakub Złotowski is a Ph.D. candidate at the HIT Lab NZ of the University of Canterbury, New Zealand, and a cooperative researcher at ATR (Kyoto, Japan). He received an MSc degree in Interactive Technology from the University of Tampere (Finland) in 2010. He previously worked as a research fellow at the University of Salzburg (Austria) on an EU FP7 project, Interactive Urban RObot (IURO). His research focus is on anthropomorphism and social aspects of Human-Robot Interaction.

Diane Proudfoot is Associate Professor of Philosophy at the University of Canterbury, New Zealand, and Co-Director of the Turing Archive for the History of Computing, the largest web collection of digital facsimiles of original documents by Turing and other pioneers of computing. She was educated at the University of Edinburgh, the University of California at Berkeley, and the University of Cambridge. She has numerous print and online publications on Turing and Artificial Intelligence. Recent examples include: 'Rethinking Turing's Test', Journal of Philosophy, 2013; 'Anthropomorphism and AI: Turing's much misunderstood imitation game', Artificial Intelligence, 2011; and, with Jack Copeland, 'Alan Turing, Father of the Modern Computer', The Rutherford Journal, 2011.

Kumar Yogeeswaran is a Lecturer of Social Psychology (equivalent to Assistant Professor) at the University of Canterbury, New Zealand. He earned his PhD in Social Psychology at the University of Massachusetts Amherst. His primary research examines the complexities of achieving national unity in the face of ethnic diversity, while identifying new strategies that help reduce intergroup conflict in pluralistic societies. His secondary research interests lie in the application of social psychology to the fields of law, politics, communication, and technology.

Christoph Bartneck is an associate professor and director of postgraduate studies at the HIT Lab NZ of the University of Canterbury. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of Social Robotics, Design Science, and Multimedia Applications. He has worked for several international organizations including the Technology Centre of Hannover (Germany), LEGO (Denmark), Eagle River Interactive (USA), Philips Research (Netherlands), ATR (Japan), Nara Institute of Science and Technology (Japan), and The Eindhoven University of Technology (Netherlands). Christoph is a member of the New Zealand Institute for Language Brain & Behavior, the IFIP Work Group 14.2 and ACM SIGCHI.
