DMRC academic discusses how to combat visual mis/disinformation online
Journalism Practice, one of the top 15 communication journals out of more than 500, according to Google Scholar, recently interviewed the DMRC’s Dr TJ Thomson on its ‘The J Word’ podcast about combatting visual mis/disinformation online. Listen to highlights here.
Thomson, a chief investigator in the Centre's digital publics and digital inclusion and participation programs, joined the podcast alongside US academics Drs Teri Finneman and Rebecca Nee. Journalism Practice associate editor Dr Ted Gutsche hosted the show.
Thomson was invited on the show after co-publishing, with A/Prof Dan Angus and Dr Paula Dootson, 'Visual Mis/disinformation in Journalism and Public Communications: Current Verification Practices, Challenges, and Future Opportunities'. In the piece, Thomson and his co-authors note that, according to a 2019 ICFJ report, only about 25 per cent of journalists worldwide, on average, use any type of social media verification tool.
Verification practices for visual content
Thomson discussed extant online image verification practices, like reverse-image searching, which lets a user upload an image and potentially find other instances of it elsewhere on the web. However, this approach works only if an intact, original version of the image, as well as any manipulated versions, is already online. It is further limited because search engines do not crawl the entire web and because reverse-image search can be fooled relatively easily. For example, mirroring an image from left to right can return a completely different set of results than searching for the image in its native orientation.
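To illustrate how fragile this kind of matching can be, here is a minimal sketch, assuming Pillow and the third-party imagehash package are installed and using a hypothetical local file, photo.jpg. It shows how a simple left-to-right mirror changes an image's perceptual hash, which is the sort of fingerprint a naive similarity lookup relies on:

```python
# A minimal sketch of why mirroring can defeat naive image matching.
# Assumes Pillow and `imagehash` are installed; `photo.jpg` is hypothetical.
from PIL import Image, ImageOps
import imagehash

original = Image.open("photo.jpg")
mirrored = ImageOps.mirror(original)  # flip left-to-right

h1 = imagehash.phash(original)
h2 = imagehash.phash(mirrored)

# Subtracting two hashes gives their Hamming distance. A large distance
# means a hash-based lookup treats the mirror as a different image entirely.
print(h1, h2, "distance:", h1 - h2)
```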
Examining image metadata can help assess whether an image was made at the claimed time and location and under the claimed production circumstances. However, metadata can themselves be manipulated, and they are stripped when images are uploaded to social media sites like Facebook, Instagram, and Twitter.
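For those who want to inspect metadata themselves, here is a minimal sketch using Pillow's EXIF reader; photo.jpg is again a hypothetical local file, and an image saved from a social media platform will often yield no tags at all precisely because that data is stripped on upload:

```python
# A minimal sketch of reading EXIF metadata with Pillow.
# `photo.jpg` is a hypothetical local file.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("photo.jpg").getexif()
for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)  # e.g. DateTime, Make, Model
    print(f"{name}: {value}")
```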
Thomson also discussed more forensic image analysis practices, both manual and digital, but noted that such analysis can require deep technical knowledge, including an understanding of light, optics, geometry, and computer science.
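Error level analysis (ELA) is one widely known digital forensic technique of this kind, though not necessarily one discussed on the podcast. The sketch below, assuming Pillow and a hypothetical photo.jpg, recompresses a JPEG at a known quality and differences it against the original; regions whose compression history differs from the rest of the image stand out as brighter and may warrant closer inspection:

```python
# A minimal sketch of error level analysis (ELA) with Pillow.
# Recompressing a JPEG and differencing it against the original highlights
# regions whose compression history differs, which can flag pasted-in edits.
import io
from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")  # hypothetical file

buffer = io.BytesIO()
original.save(buffer, "JPEG", quality=90)  # resave at a known quality
recompressed = Image.open(buffer)

ela = ImageChops.difference(original, recompressed)
ela = ela.point(lambda p: min(255, p * 15))  # amplify faint differences
ela.save("ela.png")  # brighter regions recompressed differently
```

ELA is a screening aid rather than proof: images that have been resaved many times can light up everywhere, so its output still needs expert interpretation.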
Verification challenges for visual content
Thomson said three primary challenges exist when trying to identify visual mis/disinformation. The first is the difficulty of defining what counts as mis/disinformation when it comes to visual content. The second is the scale (3.2 billion photos and 720,000 hours of video a day) and speed (live or near-live) at which visuals are produced, shared, and need to be fact-checked. The third is the significant technical skill and know-how required to perform manual or automated forensic image analysis.
Opportunities for verifying visual content
At the individual level, and for people without specialised skills, Thomson recommends paying attention to the account posting or re-sharing the content in question, including its history and affiliations on the platform. He also recommends paying attention to the production and presentation circumstances, when possible. Sometimes this can be done through a direct conversation with the person posting or sharing the questionable content. 'If you have time, ask if they can show you another angle of the same scene', Thomson said. 'Oftentimes if it is a fake, they're not going to have different outtakes; they're not going to have different angles, different perspectives. So trying to have that conversation to get a little bit more confidence before you click the retweet button can be quite helpful because visuals are really, really potent and really efficient vehicles of meaning'.
At the organisational level, and for those with more specialised skills, Thomson highlighted the opportunity for news organisations to work with more computer science-minded folks. The training data that developers use to train detection algorithms are paramount, but they are often very clinical or atypical of journalistic imagery. By working together with news organisations or wire services, developers can create more representative training data that lead to greater accuracy in detecting faked or manipulated visuals online.
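As a rough illustration of what such a collaboration might feed into, here is a minimal sketch assuming PyTorch and torchvision are installed and a hypothetical data/ folder with authentic/ and manipulated/ subfolders of newsroom imagery supplied by a wire service; it is a generic fine-tuning loop for a binary detector, not the pipeline described in the paper:

```python
# A minimal sketch of fine-tuning an off-the-shelf classifier on newsroom
# imagery. Assumes a hypothetical data/{authentic,manipulated} folder layout.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)  # one folder per class
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # authentic vs manipulated

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, for illustration only
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

The point of the sketch is the data, not the model: swapping clinical benchmark images for representative journalistic ones is what the collaboration Thomson describes would change.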
‘Ultimately, more robust and authentic training data, coupled with critical thinking and greater awareness, will help people have better tools to fight this kind of thing’, Thomson said.