Author: Alexandra Christenson and Torunn Jegleim 

Purpose 

It’s common practice in performance management contexts to assess the quality of an outcome using a “traffic-light” system: Red, Amber or Green, also known as a RAG status. This paper outlines the proposed RAG status methodology for live returns during the 2021 Census collection.

Background

During the 2019 Rehearsal the Response Chasing Algorithm (RCA) compared live and expected return rates to identify areas with shortfalls in returns (Meirinhos, 2019b). Shortfalls were assigned a RAG status, which was visible in the 2019 RCA Dashboard (See Annex F). However, the evaluation of the 2019 Rehearsal concluded that the RAG status needed to be further developed to be more informative.

The aim of the improved RAG status methodology is to:

  1. Give an overview of how the collection operation is performing against the census quality targets
  2. Flag which issues need actioning and where: low response and/or high variability, depending on the geography level

In developing the new methodology, Census Statistical Design (CSD) consulted other business areas within the ONS (Question and Questionnaire Design, and Methodology) as well as other national statistics agencies: Stats Canada, Stats NZ, the US Census Bureau and the Australian Bureau of Statistics.

Discussion

For the 2021 Census, the ONS has committed to achieving key quality targets: an overall response of 94%, at least 80% response in each local authority, and minimised variability, proposed as 90% of LSOAs in an LA falling within 10% of the response mean (Martyna, 2020). To understand whether we are on track to reach these targets, a tool measuring this is needed.

The RAG status is designed to act as a decision support tool for the governance of the census collection operation. The RAG status will be widely visible in the future 2021 Census data dashboard, which is planned to be shared across teams and in daily governance meetings. It is therefore imperative that the RAG status methodology is fit for purpose: flagging issues that need actioning, and being transparent about how issues are flagged.

For the 2021 Census, the following is proposed for each geography level:

  • Lower Super Output Area: RAG status determined by the shortfall in live versus expected returns (proposed thresholds in Annex A).
  • Team Leader Area: RAG status determined by the shortfall in live versus expected returns (proposed thresholds in Annex A).
  • Local Authority: RAG status determined by both the return shortfall and the variability in return rates within the local authority. The relative importance of return rate and variability is adjusted throughout the operation, and the two are combined to create a single RAG status (see Annex A).
  • Regional: average RAG status score for the LAs making up the region (proposed final score thresholds in Annex A).
  • National: no coloured RAG or calculation, but key figures shown for the forecasted overall return rate for England & Wales, and the number of LAs forecasted to reach 80% overall response out of the total number of LAs.
  • Online: monitor the online proportion of response, and RAG status this against targets at local authority and national level.

Lower Super Output Area (LSOA) and Team Leader Area (TLA) RAG status

The predictive modelling and response-maximising strategies are conducted and targeted at LSOA level, so returns need to be monitored at this level. TLAs are the operational geographies for field staff, each representing the work area of up to 12 census field officers. The RAG status at LSOA and TLA level will be a simple measure of live versus expected returns against the thresholds outlined in Annex A.
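
As a minimal illustration of this simple measure, the sketch below assigns a RAG colour from the live-versus-expected shortfall; the threshold values used here are placeholders rather than the proposed Annex A values.

```python
def lsoa_tla_rag(live_return_rate: float, expected_return_rate: float,
                 amber_threshold: float = 0.02, red_threshold: float = 0.05) -> str:
    """Assign a RAG colour from the shortfall between live and expected returns.

    The amber/red thresholds are illustrative placeholders; the proposed
    values are set out in Annex A.
    """
    shortfall = expected_return_rate - live_return_rate
    if shortfall >= red_threshold:
        return "Red"
    if shortfall >= amber_threshold:
        return "Amber"
    return "Green"

# Example: an LSOA returning 58% against an expected 63% flags as Red here.
print(lsoa_tla_rag(live_return_rate=0.58, expected_return_rate=0.63))
```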

The Field Prioritisation Algorithm (Meirinhos, 2019a) will be working at an OA level to minimise variability within each LA, and so, by improving response in the worst-performing OAs, it will implicitly reduce the spread within and across LSOAs. However, there are no explicit quality targets for variability within LSOAs and, given the (average) number of LSOAs per LA, it would be neither practical nor informative to measure; variability is therefore not included in the RAG status at LSOA level.

Local Authority (LA) RAG status

Given that the census variability and response targets are set at LA level, this geography provides a sensible level at which to introduce an enhanced calculation to determine RAG status. With 336 LAs across England and Wales, this RAG status will be crucial for flagging issues for action: interventions or further ad hoc analysis.

The RAG status at LA level will be determined by two components:

  1. Return Rate Difference (RRD): measured as the difference between live and expected returns
  2. Variability (V%): measured as the proportion of LSOAs in the LA with a return rate falling within 10% of the LA's mean return rate (Martyna, 2020), as sketched below
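
A minimal sketch of the V% component is given below; it assumes return rates are supplied as fractions and interprets “within 10%” relative to the LA mean, which is an assumption rather than a definition taken from Martyna (2020).

```python
from statistics import mean

def variability_pct(lsoa_return_rates: list[float]) -> float:
    """Proportion of LSOAs whose return rate falls within 10% of the LA mean.

    Assumes rates are fractions (e.g. 0.83 for 83%) and that "within 10%"
    means within +/-10% of the LA mean return rate.
    """
    la_mean = mean(lsoa_return_rates)
    within = [r for r in lsoa_return_rates if abs(r - la_mean) <= 0.10 * la_mean]
    return len(within) / len(lsoa_return_rates)

# Example: four of the five LSOAs sit within 10% of the mean, so V% = 0.8.
print(variability_pct([0.80, 0.83, 0.85, 0.86, 0.60]))
```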

Within each LA, each component is assigned a daily score from 1 (best) to 3 (worst) based on the proposed thresholds (Annex A). Each component's score is then multiplied by a weight reflecting the stage of the collection operation, giving a final equation of:

(RRD score x daily weight) + (V% score x daily weight) = Final RAG score

The purpose of the weights is to accurately show which issues can be actioned. For example, until field staff go live, we have no means by which to target variability issues, so flagging a potential problem before then is redundant. As the weights always add up to 1.0 (or 100%), the range of possible final scores will always be between 1 and 3 (final score thresholds in Annex A).
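
As a minimal sketch of this combination, assuming the component scores have already been assigned from the Annex A thresholds:

```python
def final_rag_score(rrd_score: int, v_score: int,
                    rrd_weight: float, v_weight: float) -> float:
    """Combine the two LA component scores into the final RAG score.

    Assumes scores run from 1 (best) to 3 (worst) and that the two daily
    weights sum to 1.0, which keeps the final score between 1 and 3.
    """
    assert abs(rrd_weight + v_weight - 1.0) < 1e-9, "daily weights must sum to 1.0"
    return rrd_score * rrd_weight + v_score * v_weight

# Example with early-operation weights: (1 x 0.9) + (3 x 0.1) = 1.2
print(final_rag_score(rrd_score=1, v_score=3, rrd_weight=0.9, v_weight=0.1))
```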

We propose that the component measuring variability will have a low weight (0.1) until tranche 2 field staff commence work (Census Day + 2), at which point the weights begin to gradually change until the two components are at an equal weight of 0.5 three weeks before the end of collection. The last three weeks will have a constant equal weight of 0.5 applied to both components (see table in Annex A).

In a hypothetical scenario, these would be the results:

Component    Day   Value   Score   Weight   Weighted score   Final score
RRD            7   0.2       1      0.9         0.9              1.2
V%             7   0.86      3      0.1         0.3
RRD           50   0.2       1      0.5         0.5              2.0
V%            50   0.86      3      0.5         1.5

Whilst the component values remain the same in this scenario, the changing weights place more emphasis on V% on day 50 compared with day 7, bringing the final score up from 1.2 to 2.0 and shifting the RAG status from green to amber. This is not to say that variability issues are unimportant prior to Census Day + 2, but that they are not heavily weighted while they cannot be actioned.

In determining the weighting strategy and thresholds, special attention has been paid to ensuring that the weights and thresholds minimise RAG status volatility over time. The above methodology has been tested using 2019 Census Rehearsal data as well as predicted V% data from the Field Operation Simulation (FOS) (Ward et al., 2019).

Other approaches for Local Authority RAG status:

Alternative approaches to calculating an LA RAG status, such as using flat weights or a risk impact table instead of a final score, have been explored (Annexes C, D and E).

However, the risk impact table simulations flag issues as red from the first day of collection in both the FOS output data (Ward et al., 2019) and the rehearsal data (Annexes D and E). Furthermore, the simulations using flat weights either flag everything as green (although we know this was not the case during the rehearsal) or flag everything as amber/red long before we are able to take action to rectify the issues (Annex C).

This suggests that neither approach is suitable given the purpose of the RAG status.

Regional and National RAG status:

At the regional level, the proposed approach is to calculate the average of the final RAG scores for the LAs belonging to the region. In this way, the regional RAG status reflects the same components as the LA RAG status without the need to aggregate the underlying measures or change thresholds, and the final score follows the same RAG thresholds as the LA level (see Annex A).
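
A minimal sketch of this roll-up, using placeholder score-to-colour cut points rather than the proposed Annex A final score thresholds:

```python
from statistics import mean

def regional_rag(la_final_scores: list[float]) -> tuple[float, str]:
    """Average the LA final RAG scores in a region and map to a colour.

    The cut points below are illustrative placeholders; the proposed final
    score thresholds are set out in Annex A.
    """
    regional_score = mean(la_final_scores)
    if regional_score >= 2.5:
        colour = "Red"
    elif regional_score >= 1.5:
        colour = "Amber"
    else:
        colour = "Green"
    return regional_score, colour

# Example: three LAs scoring 1.2, 2.0 and 1.4 average to roughly 1.53 (Amber here).
print(regional_rag([1.2, 2.0, 1.4]))
```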

To track the overall progress of the census collection operation, we propose not to provide a coloured RAG or calculation at national level. Instead, viewers will see three measures indicating progress against the overall response, local authority response and variability targets: the overall final forecasted return rate, the number of LAs forecasted to reach an 80% response rate, and the number of LAs reaching the variability target (Martyna, 2020). The ad hoc team in CSD will also be available for more thorough weekly analysis of the national picture.
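
A minimal sketch of these three national measures, assuming LA-level forecasts are available as simple records; the field names and the household-weighted overall figure are assumptions, not part of the proposed design:

```python
def national_headline_figures(la_forecasts: list[dict]) -> dict:
    """Summarise the three national progress measures from LA-level forecasts.

    Assumes each record holds a household count, a forecasted final return
    rate and a forecasted V% for one LA, expressed as fractions; field names
    are illustrative, and the overall figure is a household-weighted average.
    """
    total_households = sum(la["households"] for la in la_forecasts)
    overall_forecast = sum(
        la["forecast_return_rate"] * la["households"] for la in la_forecasts
    ) / total_households
    return {
        "forecast_overall_return_rate": overall_forecast,
        "las_forecast_to_reach_80pct": sum(
            1 for la in la_forecasts if la["forecast_return_rate"] >= 0.80
        ),
        "las_reaching_variability_target": sum(
            1 for la in la_forecasts if la["forecast_v_pct"] >= 0.90
        ),
        "total_las": len(la_forecasts),
    }
```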

Online RAG status:

We will monitor the online proportion of response and RAG status it against targets that, at local authority level, take account of the proportion of paper questionnaire initial contacts and an expected level of mode switching, and that, at national level, sum to our overall quality target for online response.

Conclusion

This paper has outlined the proposed method to derive a RAG status at all geography levels during live operations, as well as the proposed approach for tracking online response.

An informative RAG status is imperative for managing the census collection operation: if the programme is in danger of missing any of the quality targets, this needs to be flagged promptly. The purpose is to display which issues need to be actioned and where. The approach presented offers a more informative way of doing this than previously, whilst still acknowledging that human intervention will be needed to perform more thorough analysis during live operations.

List of Annexes

  • Annex A: Proposed thresholds for all geography levels and weights for LA
  • Annex B: RAG status simulations using proposed methodology
  • Annex C: RAG status simulation using proposed thresholds and constant weights
  • Annex D: RAG status simulation using a risk impact table method 1
  • Annex E: RAG status simulation using a risk impact table method 2
  • Annex F: 2019 RCA Dashboard maps with RAG status

(The annexes are contained in the attached downloadable document.)

References

Martyna, Kamila (2020) EAP138: Variability Target for Response Rates in Collection https://share.sp.ons.statistics.gov.uk/sites/cen/csod/CSOD_Stats_Design/Statistical_Design/Presentations/Design_Authority_Board_2011_variability_v2.docx

Meirinhos, Victor (2019a) EAP115: Field Prioritisation Algorithm

Meirinhos, Victor (2019b) EAP114: Independent Methodological Review: Response Chasing Algorithm https://www.statisticsauthority.gov.uk/wp-content/uploads/2020/06/EAP114-Independent-Methodological-Review-Response-Chasing-Algorithm.pdf

Ward, K., Barber, P., Priestly, M., Fraser, O. (2019) EAP117: Simulating Census Operations to inform Resource Decisions