GenAI Research Catchup: 28 August 2024

This is the latest in a series of posts that will recap our fortnightly research catchups. We hold these meetings at the QUT Kelvin Grove campus, and they are open to anyone at QUT interested in Generative AI related research. Contact the lab (genailab@qut.edu.au) if you would like to be added to our mailing list for the meetings.

Our current meeting format (subject to change, and dependent on fortnightly volunteers to fill each section!) includes three sections:

  • A discussion of some recent newsworthy event pertaining to GenAI
  • A gentle explainer for some term or concept in the GenAI research literature, and
  • A share of someone’s in-progress or recently published research work on GenAI

FYI, these posts will also be co-produced with the assistance of various GenAI tools!


This week’s GenAI research catchup featured our own Aaron Snoswell sharing on the new ‘Prompt Caching’ feature available in Anthropic’s Claude models, and Michael Dezuanni sharing recent findings on GenAI literacy from the Adult Media Literacy survey.

Prompt Caching with Aaron Snoswell

Prompt caching is an emerging technique in AI that can significantly reduce computation time and costs for large language models like Claude. As explained by Aaron Snoswell, prompt caching allows the initial prompt to be processed once and its internal representations (called “activations”) to be stored. These cached activations can then be reused for subsequent queries, potentially reducing costs by up to 90% for large-scale applications. This technique is particularly beneficial when analyzing large datasets with a consistent prompt, such as processing thousands of emails or documents.
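
As a rough illustration of the “consistent prompt, many documents” workflow, the sketch below shows how a cacheable prompt can be marked up using Anthropic’s Python SDK. It is a minimal sketch only: the model name, the beta header, and the SHARED_PROMPT / analyse_email names are assumptions based on Anthropic’s documentation at the time of writing, so check the current docs for exact details.

```python
# A minimal sketch of calling Claude with prompt caching enabled, based on
# Anthropic's Python SDK and the prompt-caching beta announced in August 2024.
# The model name, beta header, and helper below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A large, fixed prompt (instructions, reference documents, examples, ...)
# that will be reused across many queries.
SHARED_PROMPT = "You are analysing customer emails. <long instructions and examples here>"

def analyse_email(email_text: str) -> str:
    """Analyse one email, reusing the cached shared prompt on repeat calls."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        # Beta header required when the feature launched; newer SDK versions may not need it.
        extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
        system=[
            {
                "type": "text",
                "text": SHARED_PROMPT,
                # Mark this block as cacheable: identical prefixes in later calls
                # are read from the cache instead of being reprocessed.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": email_text}],
    )
    return response.content[0].text
```

Because only the short user message changes between calls, the bulk of the input tokens (the shared prefix) can be served from the cache rather than reprocessed each time.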

While prompt caching offers substantial benefits, it comes with some limitations and considerations. Only a fixed prompt prefix can be cached, and that prefix must be reused exactly; any change to the cached portion necessitates reprocessing. The pricing model also introduces new calculations for users, with companies like Anthropic charging a premium to write a prompt into the cache but substantially less for subsequent cache reads. Despite these complexities, prompt caching represents a significant advancement in making large language models more efficient and accessible for a wider range of applications, potentially enabling the use of more advanced models at a scale that was previously cost-prohibitive.
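
To make the pricing trade-off concrete, here is a back-of-the-envelope comparison. The rates are assumptions for illustration (a base input rate, a 25% premium for cache writes, and a 90% discount for cache reads, roughly in line with Anthropic’s announcement at the time); actual prices vary by model and may have changed, and this only counts the shared prefix, not the per-call user messages or outputs.

```python
# Back-of-the-envelope cost comparison for a large shared prompt reused many
# times. The rates below are illustrative assumptions, not current pricing.
BASE_INPUT = 3.00                  # $ per million input tokens (base rate)
CACHE_WRITE = 1.25 * BASE_INPUT    # writing the prompt into the cache costs a premium
CACHE_READ = 0.10 * BASE_INPUT     # reading it back is heavily discounted

def cost_without_cache(prompt_tokens: int, num_calls: int) -> float:
    """Full shared prompt reprocessed on every call."""
    return prompt_tokens * num_calls * BASE_INPUT / 1_000_000

def cost_with_cache(prompt_tokens: int, num_calls: int) -> float:
    """Shared prompt cached on the first call, read from the cache thereafter."""
    first_call = prompt_tokens * CACHE_WRITE / 1_000_000
    later_calls = prompt_tokens * (num_calls - 1) * CACHE_READ / 1_000_000
    return first_call + later_calls

# A 100,000-token shared prompt reused across 1,000 documents:
print(cost_without_cache(100_000, 1_000))  # ~$300 on the shared prompt alone
print(cost_with_cache(100_000, 1_000))     # ~$30 -- roughly the "up to 90%" saving
```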

Adult GenAI Literacy with Michael Dezuanni

Prof. Michael Dezuanni from QUT shared findings from this year’s Adult Media Literacy survey, in which nearly 4,000 adult Australians revealed surprising insights into the adoption and perception of generative AI tools. Despite the buzz surrounding AI, only about 40% of respondents reported using text-based AI services, with a mere 16% having experience with AI image generators. The primary motivation for those who did engage with AI was curiosity, followed by a desire to simplify work or study tasks. Interestingly, the survey uncovered significant disparities in AI usage and attitudes across different demographic groups. Younger, more educated individuals living in metropolitan areas were far more likely to use and have positive sentiments towards AI compared to older, less educated, or rural populations.

The survey also highlighted a general wariness towards AI among Australians. Only 7% of respondents agreed that generative AI is exciting and would improve their lives, while just 14% expressed a strong interest in learning more about AI tools. These findings suggest a notable gap between the hype surrounding AI and its actual adoption and perception by the general public. Furthermore, the research team observed that the patterns of digital inclusion and exclusion seen with previous technologies are being replicated with AI, raising concerns about the potential for AI to exacerbate existing social and economic divides. As AI becomes more integrated into everyday software and workplaces, addressing these disparities in access and literacy will be crucial to ensure equitable participation in an AI-driven future.