This is the latest in a series of posts that will recap our fortnightly research catchups. We hold these meetings at the QUT Kelvin Grove campus, and they are open to anyone at QUT interested in Generative AI-related research. Contact the lab (genailab@qut.edu.au) if you would like to be added to our mailing list for the meetings.
Our current meeting format (subject to change, and dependent on fortnightly volunteers to fill each section!) includes three sections:
- A discussion of some recent newsworthy event pertaining to GenAI
- A gentle explainer for some term or concept in the GenAI research literature, and
- A share of someone’s in-progress or recently published research work on GenAI
FYI, these posts will also be co-produced with the assistance of various GenAI tools!
This week’s GenAI research catchup had Dr. Tegan Cohen sharing on an ongoing GDPR lawsuit against ChatGPT for privacy violations, Prof. Nicolas Suzor and Lucinda Nelson unpacking the concept of ‘Toxicity’, and Brett Fyfield sharing a recent paper on using ChatGPT in teaching across School of Design courses at QUT.
None Of Your Business lawsuit against OpenAI with Tegan Cohen
Tegan Cohen discussed an ongoing GDPR lawsuit against OpenAI, brought by the European NGO NOYB (None of Your Business). The case centers on ChatGPT’s tendency to fabricate information, specifically birthdates, when prompted about individuals. NOYB argues that this violates the GDPR’s principle of accuracy, which requires personal data to be accurate and up-to-date. The case raises important questions about whether language models like ChatGPT are fundamentally incompatible with certain GDPR requirements, or if data protection regulations need updating to address AI-generated content.
The lawsuit highlights several interesting legal and technical challenges. While the specific complaint, which focuses on fabricated birthdates, may seem trivial, it could have far-reaching implications for how privacy laws are applied to AI-generated outputs. The case also brings attention to the technical feasibility of compliance, as OpenAI claims it cannot selectively remove false information without blocking all outputs about an individual. This situation puts OpenAI in a position similar to Google’s “right to be forgotten” dilemma, where they must balance privacy concerns with freedom of expression and access to information. As the case progresses, it may shape the legal toolkit available to address AI-generated misinformation and privacy violations in the future.
“Toxicity” with Nic Suzor and Lucinda Nelson
Nic Suzor and Lucinda Nelson discussed “toxicity” in AI models. They highlighted that while many companies claim to have sophisticated toxicity filters running parallel to their main language models, these filters often fall short in practice. The speakers emphasized three major problems with current toxicity classification systems: poor detection of implicit hate speech, inability to understand context (especially in counter-speech), and lack of sensitivity to power dynamics in language use.
They shared examples from their own research where popular toxicity models from tech giants like Google, Microsoft, and OpenAI failed to accurately classify content. They argued that many of these models essentially perform “tone policing” rather than effectively identifying harmful content. Interestingly, their research suggests that large language models (LLMs) could potentially perform better at this task when properly fine-tuned. While challenges remain in convincing industry players to adopt better standards, the researchers see potential for improvement through engagement with policymakers, tech companies, and advocacy groups.
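As a rough illustration of the kind of off-the-shelf toxicity scoring being critiqued here, the sketch below queries a publicly available classifier via the Hugging Face transformers library. This is a minimal sketch using our own illustrative model choice (unitary/toxic-bert) and example sentences, not the speakers’ actual method or data; it simply shows how surface-level classifiers are typically invoked, and why counter-speech that quotes a harmful claim can be scored much like the claim itself.

```python
# A minimal sketch, not the speakers' method: scoring two sentences with a
# publicly available toxicity classifier (unitary/toxic-bert) via the
# Hugging Face transformers pipeline. Classifiers that rely on surface
# features often score counter-speech quoting a harmful claim similarly to
# the claim itself, illustrating the context problem discussed above.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

examples = [
    "People like you are too stupid to vote.",                        # direct insult
    "Telling people they are 'too stupid to vote' is dehumanising.",  # counter-speech
]

for text in examples:
    result = classifier(text)[0]  # returns [{"label": ..., "score": ...}]
    print(f"{result['label']:>10}  {result['score']:.2f}  {text}")
```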
Using ChatGPT as a virtual assistant in the design studio with Brett Fyfield
Brett Fyfield shared his recent paper that discusses the experiences of integrating ChatGPT into design education at QUT. In the courses discussed, ChatGPT was deliberately incorporated into classroom activities, such as using it to write theory definitions which students then assessed against rubrics. This approach aimed to acknowledge the tool’s presence while teaching students to critically evaluate AI-generated content. Fyfield and his colleagues are continuing to refine their integration approaches and are now exploring how professional designers incorporate generative AI into their workflows.
Fyfield also discussed the evolution of student attitudes towards AI tools in his classes. In his motion typography unit, he adapted assessments to allow students to use any combination of found, stock, or AI-generated images, which enhanced creative freedom. Some students experimented with AI-generated content before reverting to traditional practices, while others successfully incorporated it into their final projects. Fyfield emphasized that the goal is to use these tools to augment creativity and improve the quality of outcomes. He’s currently working on refining assessment rubrics to better evaluate AI-assisted work and is particularly interested in exploring the differences between how photographers capture people in social situations and how AI represents them.