GenAI Research Catchup: 11 September 2024

This is the latest in a series of posts that recap our fortnightly research catchups. We hold these meetings at the QUT Kelvin Grove campus, and they are open to anyone at QUT interested in Generative AI-related research. Contact the lab (genailab@qut.edu.au) if you would like to be added to our mailing list for the meetings.

Our current meeting format (subject to change, and dependent on fortnightly volunteers to fill each section!) includes three sections:

  • A discussion of some recent newsworthy event pertaining to GenAI
  • A gentle explainer for some term or concept in the GenAI research literature, and
  • A share of someone’s in-progress or recently published research work on GenAI

FYI, these posts will also be co-produced with the assistance of various GenAI tools!


At this week’s GenAI research catchup, Patrik Wikstrom shared his work on using Whisper and Gemini for automated content framing analysis, and the development of a human-in-the-loop verification system.

Automated Content Framing with Patrik Wikstrom

Patrik Wikstrom demonstrated an innovative approach to analysing social media content using Google’s Gemini and OpenAI’s Whisper. The system collects videos from platforms like TikTok and uses Gemini’s advanced capabilities to extract key information such as the main activity, location, emotions, and whether the content touches on political or sensitive topics. Throughout this process, Gemini provides a narrative description of the video’s content and performs a framing analysis based on communication theory. This automated approach allows researchers to quickly process and categorise large volumes of social media content.
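The extraction step described above can be sketched as a structured-output prompt plus a parser. This is an illustrative Python sketch, not Wikstrom’s actual pipeline: the field names, prompt wording, and helper functions are assumptions, and the Whisper transcription and Gemini calls are indicated only in comments.

```python
import json

# Illustrative coding schema -- these field names are assumptions,
# not the team's actual codebook.
FIELDS = [
    "main_activity", "location", "emotions",
    "political_or_sensitive", "narrative_description", "framing",
]

def build_framing_prompt(transcript: str) -> str:
    """Assemble a prompt asking the model to code one video as JSON.

    In the full pipeline the transcript would come from Whisper and the
    prompt would be sent to Gemini; here we only build and parse text.
    """
    return (
        "You are coding a social media video for a framing analysis.\n"
        "From the transcript below, return a JSON object with exactly "
        "these keys: " + ", ".join(FIELDS) + ".\n"
        "For 'framing', name the dominant frame in the communication-"
        "theory sense (e.g. conflict, human interest, responsibility).\n\n"
        f"Transcript:\n{transcript}"
    )

def parse_coding(reply: str) -> dict:
    """Parse the model's JSON reply, tolerating a markdown code fence."""
    text = reply.strip()
    if text.startswith("```"):
        text = text.strip("`")
        if "\n" in text:  # drop the "json" language tag line
            text = text.split("\n", 1)[1]
    record = json.loads(text)
    missing = [k for k in FIELDS if k not in record]
    if missing:
        raise ValueError(f"reply missing keys: {missing}")
    return record
```

Asking for a fixed JSON schema and validating the keys on the way back is what makes the output machine-readable at scale; malformed replies fail loudly rather than silently corrupting the dataset.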


However, the system is not without limitations. Wikstrom noted that while Gemini excels at identifying surface-level content, it often misses important contextual information, such as recognising specific politicians or understanding the broader political landscape. To address this, Wikstrom and his team are developing a human-in-the-loop verification system, where human coders can review and correct the AI’s analysis. This hybrid approach aims to combine the speed and scale of AI analysis with human expertise to ensure accuracy and reliability. The project opens up exciting possibilities for large-scale content analysis in communication research, while also highlighting the ongoing challenges of context and nuance in AI-driven analysis.
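One way to organise such a hybrid workflow is a simple review queue, where each video’s AI labels are held as provisional until a human coder verifies or overrides them. The sketch below is hypothetical (the record shape and method names are assumptions, not the team’s actual system):

```python
from dataclasses import dataclass, field

@dataclass
class CodedVideo:
    """One video's labels, AI-coded first and human-verified later."""
    video_id: str
    ai_coding: dict                                   # labels from the model
    human_coding: dict = field(default_factory=dict)  # coder's corrections
    verified: bool = False

    def review(self, corrections: dict) -> None:
        """A human coder confirms the AI labels, overriding any
        fields supplied in `corrections` (e.g. a missed politician)."""
        self.human_coding = {**self.ai_coding, **corrections}
        self.verified = True

    def final(self) -> dict:
        """Human-verified labels when available, otherwise raw AI output."""
        return self.human_coding if self.verified else self.ai_coding
```

Keeping the AI and human labels side by side, rather than overwriting the AI output, also lets the team measure where the model systematically misses context.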