BRAG Meeting Thursday 1st June 2023

The fortnightly BRAG meeting will be held this Thursday 01/06 at 1 pm via Zoom/GP-Y801. This week we will have presentations by @Edgar and @Joshua.

Zoom Link: https://qut.zoom.us/j/86982060024?pwd=dHFHdm44R2NjYWNEbWhwTmFWQ09PZz09

Password: brag@QUT (if prompted)

Edgar’s talk

Title: Unsupervised anomaly detection in spatio-temporal stream network data

Authors: Joint work with Jay M. Ver Hoef, Erin E. Peterson, James McGree, Cesar A. Villa, Catherine Leigh, Ryan Turner, Cameron Roberts, and Kerrie Mengersen

Abstract: Digital sensors for water quality monitoring provide near real-time data but are prone to technical anomalies that impact statistical inference and decision making. We propose a framework for detecting anomalies in stream network data. We explore the effectiveness of state-of-the-art spatio-temporal methods, capturing anomalies in the residuals via posterior predictive distributions and Hidden Markov Models. A case study from the Herbert River in Queensland, Australia demonstrates that these models perform well in detecting multiple types of anomalies and can be used to improve automatic detection in near real time.

Josh’s talk

Title: Bayesian score calibration for approximate models

Authors: Joint work with David J Warne, David J Nott, and Christopher Drovandi.

Talk overview: I will present on Bayesian Score Calibration, a new method for fast simulation-based inference using approximate models. If time permits, I will do a live code demo with the BayesScoreCal.jl package (https://github.com/bonStats/BayesScoreCal.jl/).
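For anyone who would like to follow along with the demo, here is a minimal setup sketch (assuming a recent Julia installation; the package's exported functions are not shown here, only the standard Pkg install from the GitHub URL above):

    using Pkg
    Pkg.add(url="https://github.com/bonStats/BayesScoreCal.jl/")  # install directly from GitHub
    using BayesScoreCal  # load the package ahead of the live demo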

Abstract: Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be highly challenging since the corresponding likelihood function is often intractable and model simulation may be computationally burdensome. Fortunately, in many of these situations, it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to base Bayesian inference directly on the surrogate, but this can result in bias and poor uncertainty quantification. We propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification. We do this by optimising a transform of the approximate posterior that maximises a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We demonstrate good performance of the new method on several examples of increasing complexity. https://arxiv.org/abs/2211.05357
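As a rough schematic of the kind of objective involved (generic notation assumed here, not quoted from the paper): given pairs $(\theta_m, \tilde{y}_m)$ simulated from the complex model and approximate posterior samples for each $\tilde{y}_m$, one seeks a transform $f$ maximising an average scoring rule,

$$
f^\star = \arg\max_{f} \; \frac{1}{M} \sum_{m=1}^{M} S\!\big(f_{\#}\,\hat{\pi}(\cdot \mid \tilde{y}_m),\, \theta_m\big),
$$

where $\hat{\pi}(\cdot \mid \tilde{y}_m)$ is the approximate posterior, $f_{\#}$ denotes its push-forward under $f$, and $S$ is a strictly proper scoring rule (e.g., the energy score). The adjusted posterior for the observed data is then $f^\star_{\#}\,\hat{\pi}(\cdot \mid y_{\mathrm{obs}})$.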