Panel:

  • Jon Forster (University of Warwick)
  • Arkadiusz Wisniowski (University of Manchester)
  • Fred Piel (Imperial College London)
  • George Ploubidis (University College London)

ONS representatives:

  • Dominic Webber (ONS chair)
  • Duncan Elliott (ONS)
  • Aidan Metcalfe (ONS)
  • Joe Southam-Gisby (ONS)
  • Petya Kozhuharova (ONS)
  • Pratibha Vellanki (ONS)
  • Salah Merad (ONS)

Technical Overview – Presented by Duncan Elliott and Salah Merad

Hierarchical models and aggregate uncertainty:

DE presented the proposed approach of using a single run of the Dynamic Population Model (DPM) for point estimates and uncertainty at cell level, and the multiple draws approach for estimating uncertainty at aggregate levels (where the single run approach is not appropriate). DE and SM presented results comparing the precision at cell level under the single run and multiple draws approaches. The ONS view was that the single run approach performed well at cell level compared with the multiple draws approach. The panel commented that, ideally, the multiple draws approach should be used at all levels of estimation, and there was some concern that the proposed approach was being selected simply because it provides higher precision at cell level than the multiple draws approach. The presenters responded that simulation testing of the single run approach showed that the credible intervals at cell level had the correct coverage probability.
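
For context, coverage of credible intervals is typically checked in simulation by repeatedly generating data from a known truth, fitting the model, computing the interval, and recording how often it contains the true value; nominal 95% intervals should cover close to 95% of the time. A minimal illustrative sketch of such a check, assuming a simple normal posterior as a stand-in for the DPM (the model and values here are hypothetical, not the DPM itself):

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical setup: a known true cell count and a simple model whose
    # posterior in each simulated run is approximated by a normal distribution
    # centred on a noisy point estimate. This is a stand-in, not the DPM.
    true_count = 500.0     # assumed true cell-level count
    posterior_sd = 20.0    # assumed posterior standard deviation
    n_simulations = 2000   # number of simulated runs
    z = 1.96               # multiplier for an equal-tailed 95% interval

    covered = 0
    for _ in range(n_simulations):
        estimate = rng.normal(true_count, posterior_sd)  # one run's point estimate
        lower, upper = estimate - z * posterior_sd, estimate + z * posterior_sd
        covered += (lower <= true_count <= upper)

    print(f"Empirical coverage: {covered / n_simulations:.3f} (nominal 0.95)")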

The panel also discussed measurement error uncertainty in data models for mid-year estimates (MYE). DE and SM presented results in which the ONS used the uncertainty of the PR in 2011 and the SPD in 2021 (derived from the coverage ratio models in step 1). The panel commented on the double counting of uncertainty in step 2 data models for MYE, which was highlighted at the previous meeting, and asked why we did not simply use an exact data model for MYE in 2011 and 2021 in the multiple draws approach. ONS colleagues replied that, with the current software, an exact data model cannot be specified for population (these are reserved for births and deaths only). However, normal data models were tested with very small standard deviations (e.g. 1), and the results showed unexpected patterns, as the uncertainty at aggregate levels increased at time points further from census years. The chair suggested that these results be presented at the next meeting.
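
For reference, the distinction discussed can be written in generic notation (y_t denoting the observed count for year t and P_t the true population; this is illustrative shorthand rather than the DPM's own specification):

    exact data model:    y_t = P_t
    normal data model:   y_t ~ Normal(P_t, σ²), with a very small σ (e.g. σ = 1)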

ABPE Sensitivity Analysis – Presented by Joe Southam-Gisby

JSG presented an overview of the models being used to test the sensitivity of the admin-based population estimates (ABPEs) and of the parameters that have already been tested (covering tests of both the system models and the data models). Results were presented from a case study of one of the parameter checks (a version of the DPM without the latest version of the SPD) and how it compared with a benchmark (the DPM run used for the latest published ABPEs). JSG then presented the engagement with internal stakeholders and the Office for Statistics Regulation (OSR) on how the current parameters fulfil the OSR requirement, for accreditation, of testing the sensitivity of the ABPEs. JSG also presented an overview of the plans for the second phase of research and the parameters prioritised for testing next.

The panel commented that this is not stress testing of the model itself, but rather of the ABPE approach. They supported the overall set of parameters being tested and planned for testing, but emphasised that uncertainty ranges should also be tested as part of this research. The panel also supported the idea that the ONS should investigate multivariate parameter testing, and recommended that the ONS test the relationships between the priors and their variances; this will give users confidence that the right model has been selected. Testing the outputs between steps 1 and 2 of the DPM was also recommended.

Simulations – Presented by Aidan Metcalfe

AM presented simulation results that showed modest levels of bias in inflows and outflows at ages where the levels of flows are low. AM asked whether these levels of bias were acceptable. The panel responded that zero bias would not be expected; however, it was not immediately obvious how to determine what level is acceptable.

There was some discussion about the patterns of Mean Squared Error (MSE) and bias presented. Some of these patterns were difficult to interpret, and it was suggested that the MSE also be shown decomposed into variance and squared bias.
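
For reference, the suggested decomposition is the standard identity for an estimator θ̂ of θ:

    MSE(θ̂) = Var(θ̂) + Bias(θ̂)²

so plotting the variance and squared-bias components separately may help explain the patterns seen in the overall MSE.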

In response to the question of how to test the full model (steps 1 and 2 with the multiple draws approach), the panel commented that we should try to fit a model in which cohorts are estimated jointly, even if it takes a very long time; this was one of the recommendations from the OSR review of ABPEs.

Action:

Present the results showing unexpected uncertainty patterns from normal data models with very small standard deviations. – Duncan Elliott and Salah Merad

Next meeting: Wednesday 9 July 2025