Dear Mr Wragg,

I write in response to the Committee’s call for evidence on the Civil Service People Survey. We have focused our evidence on questions in the Terms of Reference regarding survey design, delivery and validity of results, from the perspective of our role as the Office for National Statistics in administering high-quality national surveys including the census.

Survey design

Anonymity

Staff surveys, and surveys in general, can adopt different strategies to protect respondents’ privacy. These can range from anonymising responses by removing any information that connects responses to respondents, to ensuring that analysis derived from the survey does not lead to the disclosure of identity.

Full anonymisation can limit analysis. For example, if different age groups have a different experience of working in an organisation, this would not be highlighted if the age field were removed to protect privacy. Therefore, best practice is to seek a compromise by using a range of measures to protect privacy. These include:

  • De-identification removes fields that are highly likely to identify an individual, such as their name and address, while keeping fields such as age that do not directly identify one person. Grouping answers limits direct identification further, for example by using age ranges rather than dates of birth, or referring to branches rather than teams. All of these measures group small numbers of people together to limit the identification of any one person while maximising the benefit of the survey.
  • Use of identifiers rather than names provides additional protection. Only a very few people would have the technical ability and access rights to link respondents to responses, and if there were a breach the person responsible would be easily identified because they would leave a digital footprint.
  • Open rather than targeted invitation provides respondents with more control over their responses. An open invitation to the target population allows a respondent to provide all their information without it connecting back to a database. A targeted invitation provides a respondent with a code that connects them, and only them, to the sampling frame; this could come at the expense of privacy if other measures are not taken, and could affect responses and the response rate if people are concerned that they can be identified.
  • Segregation of duties enables a significant reduction in the number of people who have access to identifiable information. Analysts are not granted access to personal identifiers and users of analysis are only granted access to aggregated data.
  • Statistical Disclosure Control is a process by which analytical outputs are checked to ensure that they cannot lead to the re-identification of individuals. There are a number of methods that can be used, including suppressing small numbers and swapping cells in ways that keep the headline summary correct. We would not recommend completing cross-sectional analysis when there are low numbers in a category, as this might enable identification, especially when it is possible to link to other information in the public domain. A simple illustration of small-cell suppression is sketched after this list.
  • Summarising and controlling access to free text are important to ensure that respondents who provide information that could be used to identify themselves or others are protected. This is particularly important when respondents use the survey as an opportunity to raise issues which require careful handling, such as safeguarding. It is best practice to have a safeguarding policy that provides clear guidance and oversight as to when privacy should be breached to protect individuals.
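By way of illustration, the following is a minimal sketch of the small-cell suppression mentioned above, written in Python with purely made-up counts and a hypothetical threshold of five; a real disclosure control process would also consider secondary suppression and the other methods described.

```python
import pandas as pd

# Illustrative (made-up) counts of responses by grade and answer category.
counts = pd.DataFrame({
    "grade": ["AO", "AO", "EO", "EO", "SEO", "SEO"],
    "answer": ["agree", "disagree"] * 3,
    "n": [120, 35, 48, 3, 2, 60],
})

THRESHOLD = 5  # hypothetical suppression threshold

# Primary suppression: counts below the threshold are replaced with a
# marker so that small groups of respondents cannot be singled out.
counts["n_published"] = counts["n"].where(counts["n"] >= THRESHOLD, other="[c]")

print(counts)
```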

In addition, it is good practice to carry out privacy impact assessments and to make privacy notices and technical guides readily available.

Survey design and delivery

To ensure the best design and delivery of a survey, you may want to be aware of the following:

  • Continuity – staff surveys such as the People Survey are repeated every year so that changes can be tracked and compared over time. To achieve the objectives, the survey needs to be relatively stable, and changes carefully considered and implemented. When a question is discontinued or changed significantly, the time series is ‘broken’, and a new measure is tracked. This is sometimes necessary to ensure the survey remains relevant and useful.
  • Comparability – when a central requirement is the ability to compare performance across the civil service, an essential feature of the survey is consistency: the same survey, with the same questions, must be used by all organisations.
  • Comprehension – questions should be pre-tested to ensure that they are understood as intended and that the wording is suitable for all respondents. As far as possible, the survey should use harmonised standards that are available to government departments, or reuse questions that are commonly used. These questions have been tested, and the practice enables comparison with other data.
  • Scope – the topics covered by the survey are varied. To better understand them, questions in a ‘block’ touch on subsets of the topic. The survey designers must consider the length of the survey and the impact that may have on the quality of responses, on participation, and on whether respondents follow through to the end and complete the survey. The usual recommendation for an online survey is that it should be completed in around 20 minutes.
  • Mode of collection – how responses are collected is determined by cost, how quickly results are needed, participant preference and the influence different modes have on the responses provided. For example, when people complete a survey online, which is the cheapest collection mode, they tend to complete it quickly and may be less reflective than in an interviewer-led survey, where the interaction between people allows answers to be explained and probed further.
  • Inclusivity – the survey ought to be inclusive by design; this refers to the overarching study design but also to the design of the questions themselves and the interfaces that respondents interact with. For example, the online survey should be designed to meet accessibility standards so that it does not limit participation through design. We should be inclusive in the questions we ask, ensuring that the available answer options collect data that represents the population being surveyed. Having multiple modes of collection available increases access to the survey and in turn increases representation in the data.

Who should be involved?

Developing and delivering a survey of the scale of the People Survey is a multidisciplinary task that requires the involvement of many professionals to ensure it delivers on analytical and business objectives.

  • Policy and Analysis users – it is essential to involve those who will be using the results, to understand their requirements and to ensure that the data being collected answers their policy questions.
  • Methodologists and Data Architects – the data that underpins the analysis that responds to the policy questions needs to be designed and architected so that it meets data standards and methodological requirements. This step is crucial to ensure that the data collected is fit for purpose, can be used, reused and linked (for example to the data from previous years).
  • Survey designers – as with all surveys, it is crucial to involve questionnaire design experts in the development of the questions and the survey, to ensure it meets and balances user needs. As part of their professional input, survey designers will review whether questions are clear, appropriate, representative, inclusive and accessible, by involving groups across the civil service and asking for their views. They will test questions to ensure that they meet the requirements. We would prioritise cognitive testing to check understanding and interpretation, to mitigate any potential quality issues in the data ahead of going live and so that results can be explained clearly following analysis.
  • Survey developers and user experience designers – whether data is collected online, by an interviewer or using a paper questionnaire, the survey flow and the user interface must be designed and tested to meet industry standards and to ensure that the survey is accessible to everyone. The survey can be sent to the Digital Accessibility Centre (https://digitalaccessibilitycentre.org/) for testing.
  • Procurement – whether the survey is commissioned internally or externally, the specification must be understood and agreed by all parties with subsequent changes governed appropriately. The successful bidder must be able to meet the required standards.
  • Supplier – at the appropriate stage it is essential to build a strong working relationship with the supplier and especially with the technical delivery team. The supplier will be a survey expert with a wealth of experience and should be able to deliver the specified requirements as well as advise on innovation.
  • Communication and dissemination teams – the survey must be promoted by central and local teams to encourage participation. In addition to advertising the survey, the communication can include descriptions of how data will be used, what the benefit of the survey will be and why it is worth taking part. As well as communicating the results, it is necessary to ensure methods and processes are transparent so that people know what to read into them, and importantly what not to read into them. For communication teams to support the survey they must be given all the relevant information from design to analysis.

Relevance of metrics

The information included in the People Survey should be based on the needs of data users and the departments that will use it. As mentioned above, these can be ascertained through consultation with policy users. Comparison over time is always an important aspect of any regular survey, and we would recommend keeping question sets as comparable as possible from year to year, with changes, when needed, following a transparent methodological review. Finally, some terms used within questions may be interpreted differently depending on the department; again, this could be improved through consultation.

Periodically the topics covered will change and be affected by wider issues. A good example is the need to monitor the experience of working in the civil service throughout and following the pandemic. When adding or changing a metric, it is important to communicate and explain the changes, especially at the reporting stage.

Some departments may also need to consider organisational changes and how they would like them reported against previous years.

Validity of results

Quality assurance

It would be difficult to quality assure the information provided via the People Survey. There are limited sources to cross-check the information, but these could include exit interviews and/or any internal departmental staff surveys. One approach could be a quality follow-up survey with a sample of respondents, similar to what we do with the census to quality assure that data.

Non-response bias impact

Non-response bias can have a significant impact. It can distort results, and it is linked with wider issues: people who do not respond typically have a reason not to engage, and those reasons are of particular interest to those collecting the data but remain unseen.

It also means that the data will not be representative and, as a result, any policy changes might not address the real issues. Methodological solutions include weighting and imputation, and require comparing the population of respondents to the population of civil servants using the data that is available through HR departments. For example, if fewer people aged under 30 respond to the survey, the responses of those who have replied could be given a greater weight. Any weighting strategy would need to be transparent and carefully considered, making explicit the assumption that the people who have responded do indeed represent those who have not.
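To illustrate the weighting idea described above, the following minimal sketch (in Python, with purely hypothetical age bands and counts) computes a weight for each group as its population share divided by its respondent share; a real weighting strategy would be considerably more sophisticated and would account for several characteristics at once.

```python
# Hypothetical population profile (e.g. from HR data) and respondent counts.
population = {"under 30": 20_000, "30-49": 50_000, "50 and over": 30_000}
respondents = {"under 30": 2_000, "30-49": 9_000, "50 and over": 6_000}

pop_total = sum(population.values())
resp_total = sum(respondents.values())

# Under-represented groups receive a weight greater than one, so that the
# weighted results better reflect the profile of the whole civil service.
weights = {
    band: (population[band] / pop_total) / (respondents[band] / resp_total)
    for band in population
}

for band, weight in weights.items():
    print(f"{band}: weight {weight:.2f}")
```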

Survey Delivery

Strengths and weaknesses

Strengths and weaknesses are mostly the result of trade-offs. For example, while the People Survey is relatively long, risking attrition, lower response rate and haste in completion, it does allow for more detailed analysis on many topics.

As discussed in this submission, using a consistent survey across the Civil Service enables efficiency, comparison between organisations, sharing of good practice and analysis over time, while limiting bespoke design on issues that may be of interest to specific departments.

Again, as noted above, while the survey is not weighted, which enables quick access to the results, this does have an impact on how confident we can be that respondents represent the civil service as a whole. The survey is reported as percentages of respondents rather than percentages of the population, and users can break down the results further to compare responses from different groups. This is a pragmatic approach which is clearly communicated.

A mixture of both quantitative and qualitative data collection could improve the quality of the analysis and the usefulness of the survey. The People Survey is quantitative, with a few open questions capturing free text. Other qualitative methods, such as in-depth interviews and focus group discussions, can be used alongside the People Survey to enhance understanding of the results. These can be either in addition to, or instead of, some of the questions in the survey.

Finally, the survey is accompanied by a tool that enables quick analysis and comparisons, disseminated to all participating organisations; this is a strength.

My colleague Sarah Henry, Director of Methodology at the ONS, looks forward to discussing this further with the Committee on 13 September. Please do let us know if you have any questions ahead of then.

Yours sincerely,

Professor Sir Ian Diamond