Exploring consumer appraisals of deepfake messaging

Project dates: 2022 - Ongoing

This research explores the marketing implications of ‘deepfakes’ – a form of AI-facilitated synthetic media. It provides new conceptual foundations for this emerging technology as a communication innovation and aims to be among the first to offer empirical evidence on consumer perceptions of deepfakes in marketing contexts.

Why is this important?

Our digital lives are increasingly shaped by synthetic media: content that is automatically and artificially generated. Deepfakes are a new form of synthetic media, produced by deep neural networks trained via machine learning. These networks can be trained to automatically merge, combine, superimpose, and replace video, images, and audio to create hyper-realistic yet fake content. Deepfakes are beginning to be leveraged within marketing communications, for example in video advertisements. However, because of their high realism, deepfakes can be difficult for humans to discern as inauthentic, meaning their use can be perceived as potentially deceptive. Conceptual and empirical research is needed to evaluate the marketing implications of deepfakes and to understand the factors that may influence how consumers process deepfake messaging.
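
For readers unfamiliar with the underlying technology, the sketch below illustrates, in PyTorch, the shared-encoder, dual-decoder autoencoder design commonly associated with early face-swap deepfake tools: one encoder learns general facial structure, while a separate decoder per identity reconstructs that person's appearance, so decoding person A's features with person B's decoder produces a "swapped" face. This is a minimal, hypothetical illustration only, not the method examined in this research; the layer sizes, identities, and random input tensor are assumptions for demonstration.

```python
# Minimal, illustrative sketch of a shared-encoder / dual-decoder face-swap
# autoencoder. Layer sizes are arbitrary; real deepfake pipelines are far larger
# and are trained on many aligned face crops per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()        # shared across identities
decoder_a = Decoder()      # would be trained only on faces of person A
decoder_b = Decoder()      # would be trained only on faces of person B

face_a = torch.rand(1, 3, 64, 64)       # random stand-in for a cropped face of person A
swapped = decoder_b(encoder(face_a))    # A's encoded features decoded as person B
print(swapped.shape)                    # torch.Size([1, 3, 64, 64])
```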

What did we find out?

The first published paper was conceptual in nature and provided new insight into the marketing implications of this emerging technology.

  • Deepfakes offer unique opportunities for marketers through enhanced personalisation of marketing communications, augmentation of the perceived emotional intelligence of chatbots, and enriched experiential engagement for customers. However, deepfakes also carry highly malicious potential through their use in disinformation campaigns, identity theft and fraud, and brand sabotage.
  • Deepfakes are uniquely positioned compared with other marketing communication practices, such as chatbots, native advertising, and spokescharacters: they are differentiated by combining high human realism with high technologically facilitated intelligence.
  • Our conceptual framework proposes that deepfakes can protect the interests of both customers and organisations.
  • To further guide marketers, we provide propositions and research questions focused on three key areas of marketing practice: 1) customer and self-service, 2) disclosure, and 3) smart and personalised advertising.

The second paper (under review) conducts a systematic, multidisciplinary review of the deepfake literature to synthesise a holistic deepfake definition, identify who creates and who is depicted by deepfakes, and understand the value creation and destruction potential of deepfakes as a new communication innovation for various actors within the digital ecosystem.

  • A holistic deepfake definition was proposed, integrating multidisciplinary perspectives through a thematic analysis of existing definitions in the literature.
  • Deepfake creators most commonly could not be explicitly identified; where they could, they were typically individual content creators (e.g., YouTubers) or transformative actors (e.g., social causes).
  • Public figures, celebrities, and actors were most frequently depicted by deepfakes.
  • Value creation opportunities were most associated with the creation of epistemic, functional, and emotional value for customers and functional value for organisations.
  • Value destruction opportunities were most associated with deepfakes destroying epistemic value for both customers and organisations.

What we aim to do

The third paper (in progress) investigates the roles of realism, disclosure, and psychological traits in shaping marketing outcomes. Through a series of experiments, we examine how these factors influence consumers' appraisal of a deepfake-altered video advertisement, their value appraisals of the advertised product, and their subsequent engagement with the product and advertisement. Results of the studies comprising this paper are forthcoming.


To learn more about this project, please email best@qut.edu.au.


Chief Investigators

Publications


Rich Media

QUT News, 2023 | When, not if, business hit by new deepfake scams: be prepared 

tickerNews, 2023 | Deepfakes are taking over Hollywood

tickerNews, 2023 | Will deepfakes change election results? 

BEST Conference 2021 Presentation | “Deepfakes: a state of play and interactions with behavioural biases and value”