GenAI Research Catchup: 17th July 2024

This is the first in a series of posts that will recap our fortnightly research catchups. We hold these meetings at the QUT Kelvin Grove campus, and they are open to anyone at QUT interested in Generative AI related research. Contact the lab (genailab@qut.edu.au) if you would like to be added to our mailing list for the meetings.

Our current meeting format (subject to change, and dependent on fortnightly volunteers to fill each section!) includes three sections:

  • A discussion of a recent newsworthy event pertaining to GenAI
  • A gentle explainer for some term or concept in the GenAI research literature, and
  • A share of someone’s in-progress or recently published research work on GenAI

FYI, these posts will also be co-produced with the assistance of various GenAI tools!


This week’s GenAI research catchup featured thought-provoking presentations from Jiaru Tang and our own William He.

(Image: Bernhard Lang | Getty Images)

The world of AI Generated dancing TikTok cats with Jiaru Tang

Jiaru Tang, currently engaged in her PhD project “Investigating Artificial Intelligence-generated Content Production in the Chinese Platform Economy”, shared fascinating insights into the recent emergence of particular AI-generated content farms on the platform TikTok. Her presentation highlighted how AI is transforming content creation and consumption on social media, particularly on TikTok, where trends evolve rapidly and user engagement is paramount. In her PhD research, Jiaru has recently noted an increase in AI-generated content proliferating on TikTok, ranging from dancing cat memes to Joe Rogan conspiracy theory disinformation deepfakes. She discussed the implications of AI in content creation, raising questions about authenticity and the role of human creativity. The seamless integration of AI-generated elements into user-generated content, including AIGC features promoted by the TikTok platform itself, blurs the lines between human and machine creativity.

The discussion touched on the cultural impact of AI on TikTok, and the fact that AI-generated content can, will, and already does shape cultural norms and behaviors. Concerns were raised about the authenticity of content and whether audiences can distinguish between human-made and AI-generated creations. There was also a friendly debate about the future of content creation and the ethical considerations it entails, particularly regarding the originality and ownership of AI-generated works. Participants shared examples of popular TikTok trends that were either initiated or heavily influenced by AI, illustrating how pervasive and influential AI-generated content has become.

All about “Alignment” with William He

William He, research engineer at the GenAI Lab, this week discussed the concept of “Alignment” in AI, shedding light on its significance and challenges in the field of AI development. “Alignment” (broadly, the process of ensuring that an AI system’s goals and behaviors are aligned with human values and intentions) can be conceptualized in different ways by different communities. For instance, researchers in ‘AI Safety’ consider the sub-problems of ‘inner’ vs. ‘outer’ alignment, while lately the term ‘Alignment’ is increasingly being co-opted by marketing hype. One of the significant challenges in achieving alignment of any form is the complexity and ambiguity of the notion of ‘human values’, and the difficulty of translating them into a format that AI systems can comprehend and act upon.

The discussion touched on the issue of who decides what constitutes “human values” and how these values are integrated into AI systems. There were examples of misalignment incidents where AI systems produced flawed outcomes due to flawed training data. The conversation also touched on current research efforts aimed at improving alignment, including techniques like reinforcement learning from human feedback (RLHF) and the development of more transparent AI models.
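For readers curious about how RLHF connects preferences to training signals: a common first step is fitting a reward model to human preference labels using a Bradley-Terry loss, which rewards the model for scoring the human-preferred response above the rejected one. The sketch below is illustrative only (the function name and scalar-reward simplification are our own, not from the talk):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood that the 'chosen' response
    is preferred over the 'rejected' one, given scalar reward scores.

    Illustrative sketch: real reward models score token sequences with a
    neural network and minimize this loss averaged over many labeled pairs.
    """
    # Sigmoid of the reward margin; the loss shrinks as the margin grows.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the model's scores agree with the human label, the loss is small;
# when they disagree, it is large.
agree = preference_loss(2.0, -1.0)
disagree = preference_loss(-1.0, 2.0)
```

A reward model trained this way is then used as the optimization target for a reinforcement-learning step that nudges the language model toward responses humans tend to prefer.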