To test for non-response bias, compare the replies of those who respond to the first mailing of a questionnaire with the replies of those who respond only to successive mailings. If there is no significant difference between the two groups, we can be more confident that the results from the initial sample are accurate estimates of the item's value for the population as a whole.
If those who reply to the first mailing are more likely than later responders to send answers indicating that the item is important to them, the initial sample overstates the item's importance, and the estimate should be adjusted downward accordingly.
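The wave comparison described above can be sketched with toy data. This is a minimal illustration, assuming a hypothetical dataset where each respondent records which mailing they answered and an importance rating on a 1-5 scale; the numbers are made up.

```python
# Wave analysis for non-response bias: compare early vs. late responders.
from statistics import mean

# Hypothetical responses: "wave" is which mailing the person answered.
responses = [
    {"wave": 1, "importance": 5}, {"wave": 1, "importance": 4},
    {"wave": 1, "importance": 5}, {"wave": 2, "importance": 3},
    {"wave": 2, "importance": 2}, {"wave": 2, "importance": 3},
]

wave1 = [r["importance"] for r in responses if r["wave"] == 1]
wave2 = [r["importance"] for r in responses if r["wave"] == 2]

# Late responders serve as a rough proxy for non-responders: a large gap
# between the wave means suggests the first-wave estimate is biased.
gap = mean(wave1) - mean(wave2)
print(f"wave 1 mean: {mean(wave1):.2f}, wave 2 mean: {mean(wave2):.2f}, gap: {gap:.2f}")
```

With real data one would apply a significance test (e.g. a two-sample t-test) rather than eyeballing the gap.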
Non-response bias can also arise because responders and non-responders differ on some aspect(s) of interest, which would lead to biased estimates if not controlled for. For example, if smokers are less likely to reply to surveys, this would result in an underestimate of the prevalence of smoking among the general population.
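The smoking example can be made concrete with arithmetic. The figures below are hypothetical, chosen only to show how differential response rates distort a prevalence estimate.

```python
# Toy illustration: if smokers respond at a lower rate than non-smokers,
# the observed prevalence understates the true prevalence.
true_rate = 0.30            # assumed true smoking prevalence
resp_rate_smokers = 0.25    # assumed response rate among smokers
resp_rate_nonsmokers = 0.50 # assumed response rate among non-smokers

# Expected composition of the achieved sample:
smokers_responding = true_rate * resp_rate_smokers
nonsmokers_responding = (1 - true_rate) * resp_rate_nonsmokers
observed_rate = smokers_responding / (smokers_responding + nonsmokers_responding)

print(f"true prevalence: {true_rate:.0%}, observed prevalence: {observed_rate:.1%}")
```

Here a true rate of 30% appears as roughly 17.6% in the achieved sample, purely because of who chose to respond.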
Non-response bias can be reduced by contacting potential respondents through multiple methods and by offering compensation for the time spent completing the survey.
In conclusion, non-response bias is an important factor to consider when interpreting survey findings. Although several techniques exist for reducing non-response bias, it may not be possible to completely avoid this problem.
Non-response bias can also be investigated by contrasting the characteristics of respondents who returned completed surveys with those of non-respondents. If the two groups differ, some form of selection bias may have occurred: people willing to spend their time filling out a survey are likely to have an interest in the topic being studied. This kind of bias can lead us to overestimate or underestimate a statistic, depending on what respondents chose to share.
For example, if we were to compare the characteristics of respondents who took part in a health study with those who did not, we might find that participation was correlated with having better health or being more interested in health issues. In this case, we would need to make assumptions about how much worse off, or how much more interested, the non-participants were before concluding that our results were unaffected by selection bias.
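Those assumptions can be explored with a simple sensitivity analysis. The sketch below uses hypothetical numbers: 60% of the sample responded, with a mean health score of 70 on a 0-100 scale, and we vary the assumed gap between respondents and the unobserved non-respondents.

```python
# Sensitivity analysis: how would the full-population mean change under
# different assumptions about non-respondents' health scores?
response_rate = 0.60     # assumed fraction of the sample that responded
respondent_mean = 70.0   # assumed mean health score among respondents

results = {}
for assumed_gap in (0, 5, 10, 15):  # non-respondents score this much lower
    nonrespondent_mean = respondent_mean - assumed_gap
    # Population mean is a mixture of the two groups, weighted by size.
    population_mean = (response_rate * respondent_mean
                       + (1 - response_rate) * nonrespondent_mean)
    results[assumed_gap] = population_mean
    print(f"assumed gap {assumed_gap:2d}: population mean = {population_mean:.1f}")
```

If the conclusion survives across the plausible range of gaps, selection bias is less of a concern; if it flips, the study cannot distinguish the result from an artifact of non-response.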
In general, statistical methods can be used to adjust survey data for non-response bias. Two common approaches are post-stratification and response-propensity weighting. Post-stratification involves re-weighting the responses of those who did respond so that the weighted sample matches known population distributions (for example, age or sex distributions from a census) on variables related to both the likelihood of responding and the outcome of interest.
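Post-stratification can be sketched in a few lines. This is a minimal example with made-up data: the population is assumed to be 50% under-40 and 50% 40-and-over, but under-40s are under-represented among respondents, so each respondent's answer is up- or down-weighted until the weighted sample matches the known population shares.

```python
# Post-stratification: reweight respondents to match known population shares.
respondents = [
    {"stratum": "under40", "value": 1}, {"stratum": "under40", "value": 0},
    {"stratum": "40plus", "value": 1}, {"stratum": "40plus", "value": 1},
    {"stratum": "40plus", "value": 0}, {"stratum": "40plus", "value": 1},
]
population_share = {"under40": 0.5, "40plus": 0.5}  # assumed known, e.g. from a census

n = len(respondents)
sample_share = {s: sum(r["stratum"] == s for r in respondents) / n
                for s in population_share}
# Weight = population share / sample share for each stratum.
weights = {s: population_share[s] / sample_share[s] for s in population_share}

unweighted_mean = sum(r["value"] for r in respondents) / n
weighted_mean = sum(weights[r["stratum"]] * r["value"] for r in respondents) / n
print(f"unweighted: {unweighted_mean:.3f}, post-stratified: {weighted_mean:.3f}")
```

The weighted estimate equals the average of the stratum means weighted by population shares, which corrects the over-representation of the 40-and-over group.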
Response biases can significantly affect the validity of questionnaires and surveys. All of these "artifacts" of survey and self-report research have the potential to undermine the validity of a measure or a study. They include refusals, dropouts, and misunderstandings of the questions.
Refusal rates can vary depending on how well informed people are about a study and what is required of them. The proportion of invited participants who decline to take part is the refusal rate, and it should be noted when calculating attrition (see below).
Dropout rates refer to the proportion of participants who withdraw from a study before it has ended. Participants may drop out for a variety of reasons, including illness or tiring of taking part in the research. If many people drop out of a study early, this may indicate that the questions being asked are not important to them or that something is wrong with the way the study is being conducted. Dropout rates should be noted when calculating retention (see below).
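The bookkeeping for these rates is straightforward. The counts below are hypothetical, chosen only to show how refusal, dropout, and retention rates relate to one another.

```python
# Refusal, dropout, and retention rates from recruitment counts (made-up data).
invited = 200    # people asked to participate
refused = 40     # declined at recruitment
enrolled = invited - refused
completed = 120  # still in the study at the end

refusal_rate = refused / invited            # share of invitees who declined
dropout_rate = (enrolled - completed) / enrolled  # share of enrollees who withdrew
retention_rate = completed / enrolled       # share of enrollees who completed

print(f"refusal: {refusal_rate:.0%}, dropout: {dropout_rate:.0%}, "
      f"retention: {retention_rate:.0%}")
```

Note that dropout and retention are computed over those who enrolled, not over everyone invited, so the two always sum to 100%.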
Response biases can also mask real associations: some true relationships go undetected because one or both of the variables involved were measured with biases acting in opposite directions.