Here’s How You Can Fix Common Non-Sampling Errors

No one likes errors. And in survey research, they can make or break the data you’re collecting and have devastating downstream effects on organizational decision-making.

Errors in survey research are often grouped into two main categories: sampling errors and non-sampling errors.

  • Sampling errors are those that occur when you are putting together your sample, or the group of people you will survey. A sampling error is anything that prevents your sample from truly representing your target population.

  • Non-sampling errors, by contrast, are errors that occur during (or after) the survey. They can be systematic or random, but they aren’t determined by your sample; instead, they are problems that arise within the process of collecting data itself.

Both are important to avoid, but in this blog post, we’re focusing on non-sampling errors that can occur during fieldwork: enumerator effects, nonresponse errors, and response errors. We will explain how they happen, how to recognize them, and share effective strategies you can employ to avoid or combat these errors and collect your best data ever.

Let’s dive in.

Error #1: Enumerator (or interviewer) effects

If your survey involves enumerators or interviewers, you’ve probably encountered this one. Enumerator effects refer to data inconsistencies that arise because of who is asking the questions or how they ask them. You can think of enumerator effects as a catch-all for the way that human differences and realities can unwittingly inject bias into a survey’s data.

Why this error happens:

  1. Enumerator behavior: Inconsistent tone, pacing, or emphasis on certain questions in one interview but not the next can lead respondents to interpret questions differently. For example, an enumerator who rushes through questions they consider unimportant can inadvertently cause respondents to interpret those questions very differently from respondents whose interviewers paced all their questions equally.

  2. Respondent reactions to enumerators: Like enumerators, respondents are human, with their own unconscious biases. This can mean they answer questions differently depending on the interviewer’s gender, age, ethnicity, or perceived social status. This category of enumerator effect is especially pernicious in surveys on sensitive subjects, like health status or socially undesirable behavior.

How to Prevent Enumerator Effects

  1. Invest in training field teams.
    Teach enumerators how their behavior can influence responses, model neutral interviewing techniques, and conduct mock interviews to reinforce best practices. It is also highly recommended that field managers create training manuals that field teams can refer to when they are actually out in the field conducting interviews.

  2. Conduct random spot checks.
    If possible, send managers into the field to directly observe some interviews! Field teams do not need to be informed beforehand of when a spot check will occur, but they should know that spot checks will be implemented during data collection as part of overall data quality measures. This lets you directly observe whether field teams are using best practices, or whether an enumerator might benefit from additional training.

  3. Craft clear, unambiguous survey questions.
    Simpler questions minimize the need for explanation and interpretation, which works to reduce this type of bias. Include clarifying notes within your digital forms if needed, and train enumerators to read questions exactly as they are written, with no added personal embellishments.
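As an aside on spot checks (point 2 above): if you want them to be genuinely unpredictable, the selection itself should be random. The short Python sketch below is purely illustrative; the list of scheduled interviews and the sample size are assumptions, not outputs of any real survey platform.

```python
import random

# Hypothetical sketch: randomly choose which scheduled interviews a field
# manager will observe, so enumerators cannot predict the spot checks.

# Assumed list of 50 scheduled interviews, labeled interview-001 .. interview-050.
scheduled_interviews = [f"interview-{i:03d}" for i in range(1, 51)]

rng = random.Random(2024)  # fixed seed only so this example is reproducible
to_observe = sorted(rng.sample(scheduled_interviews, k=5))  # observe 5 of 50
```

In real fieldwork you would omit the fixed seed so that no one, including the field team, can anticipate which interviews will be observed.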

Want to improve your question design? Check out these resources on writing great survey questions:

How to write survey questions for research – with examples (Blog post)
How to write survey questions that get real answers (Survey & Beyond podcast episode)

Error #2: Non-response errors

Even the best-designed surveys suffer from non-response errors. These occur when people in your sample don’t participate at all or fail to answer every question in a survey.

In practice, this looks like respondents not answering calls, not being available at home for their interview, or dropping out partway through a survey. In email surveys, non-response can also happen when respondents ignore survey links or emails.

Why non-response error happens:

There are many reasons!

People can forget an interview, decide after agreeing to participate in a survey that they actually don’t want to, or have unexpected circumstances arise that make participation impossible. In the low-income global communities where many academic researchers conduct studies, long surveys can be a financial burden that takes time away from work or caretaking.

As you can imagine, all of these reasons mean that non-response errors can’t be completely eliminated. But you can minimize them with a few proven strategies, especially when you use digital data collection tools like SurveyCTO.

How to Reduce Non-Response Errors

  1. Send reminders.
    For online surveys, automated reminder emails are the go-to way to prompt respondents to complete a survey. For in-person or phone interviews, your team can use follow-up calls, SMS messages, or rescheduling options when possible. Choose the communication method best suited to your respondents’ daily lives and context. For example, find out which messaging app is most popular in the areas where your survey is taking place, and use that as your main channel for reminders.

Did you know that WhatsApp is the most commonly-used messaging app in the world? Learn how Innovations for Poverty Action used SurveyCTO’s WhatsApp plug-in to increase survey response rates in this webinar.

  2. Effectively communicate your survey’s purpose.
    As part of your survey, make sure you explain why this data is being collected and how each respondent’s input contributes to positive real-world outcomes. People are more likely to make an effort to participate when they understand how their participation counts.

  3. Offer appropriate incentives.
    Small incentives, either monetary or non-monetary, have been shown to boost survey participation rates. If you decide to implement incentives, be mindful of the cultural context of the incentives you offer. And don’t forget to be transparent about what you are offering and why you’re using incentives.

  4. Keep it short.
    Long or repetitive surveys are known to lead to respondent fatigue and attrition. Trim unnecessary questions and focus on collecting data that will truly add value to your research or project.
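As a concrete illustration of strategy 1 above, drafting reminders for non-respondents can be partially automated. The Python sketch below is hypothetical: the respondent fields (`name`, `contact`, `completed`) and the `draft_reminders` helper are illustrative assumptions, not features of any particular survey platform. It filters a respondent list for people who haven’t completed the survey and drafts a reminder message for each.

```python
# Hypothetical sketch: pick out non-respondents and draft reminder messages.
# The field names ("name", "completed", "contact") are illustrative
# assumptions, not part of any real survey platform's data model.

def draft_reminders(respondents, survey_name):
    """Return a reminder message for every respondent who hasn't finished."""
    reminders = []
    for person in respondents:
        if not person["completed"]:
            message = (
                f"Hello {person['name']}, this is a friendly reminder to "
                f"complete the '{survey_name}' survey. Thank you!"
            )
            reminders.append({"contact": person["contact"], "message": message})
    return reminders

respondents = [
    {"name": "Amina", "contact": "+100000001", "completed": True},
    {"name": "Bilal", "contact": "+100000002", "completed": False},
]

reminders = draft_reminders(respondents, "Community Health Survey")
# Only Bilal, who has not completed the survey, gets a reminder.
```

In practice, each drafted message would then be handed off to whatever channel suits your respondents’ context, such as an SMS gateway, a messaging app, or email.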

Error #3: Response errors

Response errors occur when respondents provide inaccurate answers, either intentionally or unintentionally. Today, we’ll focus on the intentional version of this error, where respondents consciously choose to misreport, withhold, or obfuscate information in their answers.

Why response errors happen:

  1. Privacy and security concerns: Respondents often fear that their answers could be leaked or compromised, and used against them.

  2. Anonymity concerns: Even with secure software, respondents might hesitate to share honest responses to sensitive questions if the survey identifies them by full name, address, and so on.

  3. Social desirability bias: People want to present themselves positively. This can also come up in the context of enumerator effects, where respondents give different answers to enumerators they see as having a higher social status than their own. In response errors, social desirability bias shows up when respondents answer questions in ways that make them feel they are behaving correctly. For example, respondents taking a survey on their exercise and eating habits might overreport healthy behaviors or underreport unhealthy ones.

How to Prevent Response Errors

  1. Use robust data security, and communicate it to respondents.
    Use secure data collection platforms (like SurveyCTO) that offer data encryption and granular user access controls to protect data. And tell respondents about your security measures! Respondents who are confident that their answers are truly private can feel more comfortable being candid. This is especially true for surveys on sensitive subjects.

  2. Offer anonymity (or partial anonymity) whenever possible.
    If your survey doesn’t absolutely require you to fully identify respondents, consider collecting responses anonymously! If that’s not possible, consider what information is absolutely needed vs. wanted—perhaps you need to collect family names, but not first names; maybe you could just use initials. Or perhaps you need to identify the town where respondents are from, but not document their entire home address.

  3. Explain the benefits of honesty.
    Communication is key here. We recommend incorporating a note for respondents at the beginning of surveys on why truthful, complete answers are necessary for meaningful impact. For example, letting respondents know that a full, accurate picture of their community’s health needs will lead to the right healthcare investments can encourage respondents who might be hesitant or uncomfortable answering health-related questions honestly.
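To make the partial-anonymity approach in strategy 2 concrete, here is a minimal Python sketch. The record fields and the choice to keep `town` while dropping `name` and `address` are illustrative assumptions; adapt them to what your study actually requires. Each respondent’s name is replaced with a short, salted hash pseudonym:

```python
import hashlib

# Hypothetical sketch: pseudonymize respondent records before analysis.
# Field names and the keep/drop choices below are illustrative assumptions.

def pseudonymize(record, salt):
    """Replace identifying fields with a stable, salted pseudonym."""
    digest = hashlib.sha256((salt + record["name"]).encode("utf-8")).hexdigest()
    return {
        "respondent_id": digest[:12],  # short, stable pseudonym
        "town": record["town"],        # coarse location is kept
        "answers": record["answers"],  # survey responses are kept
        # "name" and "address" are deliberately not carried over
    }

record = {
    "name": "Jane Doe",
    "address": "12 Example Street",
    "town": "Springfield",
    "answers": {"q1": "yes"},
}

safe = pseudonymize(record, salt="keep-this-salt-secret")
```

The salt must be kept secret and stored separately from the data; otherwise names could be recovered by brute force, and the pseudonyms would no longer protect anyone.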

Final thoughts

Survey errors happen, but with the right strategies and plans, researchers and data collection professionals can dramatically reduce those errors and mitigate their effects. Through the right training, questionnaire design, security practices, and respondent communication, you can ensure that your surveys capture high-quality data.

Survey error is a big topic, and this article only covers non-sampling errors. But start by reducing these errors, and you’ll already be ahead in collecting higher-quality data for better decisions and, ultimately, a better world.

Marta Costa

Senior Product Specialist

Marta is a member of the Customer Success team for Dobility. She helps users working at NGOs, nonprofits, survey firms, universities and research institutes achieve their objectives using SurveyCTO, and works on new ways to help users get the most out of the platform.

Marta has worked in international development consultancy and research, supporting and coordinating impact evaluations, monitoring and evaluation projects, and data collection processes at the national level in areas such as education, energy access, and financial inclusion.