The customer experience in service industries has been radically transformed by technologies such as artificial intelligence (AI), robotics, and augmented reality. In particular, service robots are increasingly replacing human service agents, especially in retail and hospitality. However, the unintended consequences of such technologies include new opportunities for crime: historically, replacing employees with self-service technologies produced sharp rises in theft in servicescapes. More recently, the introduction of self-service checkouts gave rise to new types of shoplifters, including customers who would not normally shoplift but saw an opportunity afforded by the new technology. Furthermore, it is not uncommon for humans to abuse robots; examples include a drunk person attacking a security service robot, children kicking and punching a kindergarten robot, and attempts to steal from delivery robots. In short, consumers may alter their behaviour in costly and unfavourable ways as human service agents are replaced by service robots.
Nevertheless, there is relatively little research exploring the unintended consequences of introducing technology-infused service experiences to customers. By adding service robots to the service experience, organisations may unintentionally remove a human element that safeguards servicescapes from opportunistic customer misbehaviour. It is therefore critical to understand how the perceived humanness of different service robots affects customer behaviour. This study investigates whether making a service robot seem more human reduces the likelihood of people engaging in deviant consumer behaviour, and whether this relationship is influenced by feelings of empathy, the perceived risk of getting caught, and negative attitudes toward robots. Four hypotheses were developed accordingly.
A robot’s attributes collectively influence its perceived humanness, e.g. physical or virtual form, humanoid or non-humanoid appearance, and analytical or socio-emotional tasks. Perceived humanness is the extent to which an individual is seen as having characteristics typical of humans; service robots evoke it through human-like features such as a face, arms, legs, or warmth.
Method and sample
Scenario-based experiments were conducted that manipulated the humanness of a service agent in a banking environment (from self-service technology, to robot, to human employee) across seven conditions: an ATM, a subtly humanised ATM, an explicitly humanised ATM, a humanised service robot (cute), a humanised service robot (mechanical), a humanised service robot (android), and a human bank teller (see Figure 1). Data were collected on the likelihood of consumer misbehaviour, empathy towards the service robot, perceived risk of being caught and punished, and negative attitudes towards robots. The sample consisted of 553 participants from the US, recruited via Amazon MTurk. They completed a 5–10-minute survey on Qualtrics and were compensated US$1.20. All survey items were measured on a seven-point Likert scale.
Key finding
Consumers are less likely to have opportunistic theft intentions when a robot looks more human, an effect partially explained by their empathy for the robot and their heightened sense of perceived risk (see the sketch after this list):
- When customers begin to humanise service robots, one important emotional outcome is empathy, which can increase moral and prosocial behaviours, hence suppressing the urge to act on any opportunity the servicescape presents for theft.
- Also, as the perceived humanness of service robots increases, a cognitive outcome is a heightened perceived risk of being caught while misbehaving. When service robots are humanised to the degree that they are considered capable guardians (like human security guards), the perceived risk of being caught and punished reduces consumers’ deviant behaviours.
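One standard way to formalise “partially explained by” is a parallel-mediation model with empathy and perceived risk as simultaneous mediators; the specification below is an illustrative reading of the finding, not a model quoted from the article.

```latex
% Illustrative parallel mediation:
% X = perceived humanness, M1 = empathy, M2 = perceived risk, Y = theft intention
\begin{align*}
M_1 &= a_1 X + e_1 \\
M_2 &= a_2 X + e_2 \\
Y   &= c' X + b_1 M_1 + b_2 M_2 + e_3
\end{align*}
```

Under this reading, the indirect effects (a1·b1 via empathy and a2·b2 via perceived risk) would be negative, and “partially explained” means the direct path c′ remains non-zero.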
The moderating effect of negative attitudes towards robots on the above negative relationship (between service robots’ perceived humanness and consumers’ deviant behaviour) is only significant when consumers hold either weak or strong negative attitudes. Notably, then, a strong negative view of robots can make people less likely to misbehave when dealing with a robot that seems more human.
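Under a standard moderated-regression reading (again an assumption about the model’s form, not a specification quoted from the article), negative attitudes enter as an interaction term:

```latex
% Illustrative moderation:
% X = perceived humanness, W = negative attitudes towards robots, Y = deviant behaviour
\begin{align*}
Y &= \beta_0 + \beta_1 X + \beta_2 W + \beta_3 X W + e \\
\frac{\partial Y}{\partial X} &= \beta_1 + \beta_3 W
\end{align*}
```

The finding corresponds to this conditional slope of humanness being significant only at low and high values of W, the pattern a Johnson-Neyman (floodlight) analysis surfaces.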
Recommendations
Managers have two levers for mitigating deviant customer behaviour intentions towards service robots:
- Risk lever: Introduce service robots as guardians that carry a perceived risk of being caught and punished comparable to that posed by human employees, e.g. by designing them with compliance and security features.
- Empathy lever: Humanise the design of service robots to evoke empathy in customers and signal social closeness, e.g. through more human-like features and speech. Managers should also consider retrofitting such humanising changes onto existing technologies like self-service checkouts to minimise costs.
However, businesses should be aware of the potential costs of employing service robots, including acquisition, programming, and maintenance costs, as well as costs related to deviant consumer behaviour stemming from lower perceived guardianship in the servicescape. In addition, while robots may save wage costs, managers need to compare the costs associated with different guardians (human or robot) and evaluate which is more cost-effective.
Before deploying service robots widely, businesses are recommended to test robot service agents extensively in market trials to identify and mitigate any unintended consequences.
Finally, businesses may estimate potential ROI by considering total revenue and the multi-year costs associated with different types of guardians, and checking whether those costs are less than 0.92% of revenue (the reported losses from crime in the retail sector).
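As a back-of-envelope illustration of that check, the Python sketch below compares a guardian’s annual cost against the 0.92%-of-revenue crime-loss benchmark; the helper name and all monetary figures are hypothetical assumptions, not figures from the study.

```python
# Illustrative cost-effectiveness check against the 0.92%-of-revenue benchmark.
# All names and monetary figures here are assumptions, not study data.

CRIME_LOSS_RATE = 0.0092  # reported retail losses from crime: 0.92% of revenue

def guardian_is_cost_effective(annual_revenue: float, annual_guardian_cost: float) -> bool:
    """True if the guardian costs less per year than the expected crime losses."""
    return annual_guardian_cost < annual_revenue * CRIME_LOSS_RATE

revenue = 10_000_000  # assumed US$10M annual revenue -> US$92k expected crime loss

# Assumed annual costs: robot = amortised acquisition + programming + maintenance;
# human = wages for equivalent guardianship coverage.
print(guardian_is_cost_effective(revenue, 60_000))   # robot guardian:  True  (60k < 92k)
print(guardian_is_cost_effective(revenue, 110_000))  # human guardian:  False (110k > 92k)
```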
More information
The research article is also available on eprints.