
Frequent Survey Responders on Online Panels


FMG regularly conducts online surveys as a means of data collection. We recognize that, like all survey modes, online surveys have their limitations, in many cases due to how the online panel provider procures its sample of respondents. One potential problem is that online panel members differ in their propensities to participate in surveys: some respondents take numerous surveys every week or month, while others take few or none. This phenomenon of "frequent survey responders" (Coen, Lorch, & Piekarski, 2005) concerns survey methodologists because taking numerous surveys may affect the way respondents answer survey questions. Along these lines, the survey research community has voiced two concerns about respondents who take a large number of surveys:

  • First, frequent survey responders ("FSRs") may be more interested than others in taking online surveys. Though FSRs may be engaged with the survey material, researchers worry that their answers may be altered by "practice effects" that emerge as a result of repeated survey-taking. As one researcher put it, "the intuition is that…panelists' self-reported attitude and behaviors are changed over time by their regular participation in surveys" (Dennis, 2001). Indeed, several researchers have documented that observing or measuring a phenomenon can change it (Zwane et al., 2011; Williams, Block, & Fitzsimons, 2006).

    The empirical evidence to support this assertion is unfortunately tenuous because the findings are mixed and outdated. To provide just two illustrative examples, in his study using the GfK KnowledgePanel, a probability-based online panel, Clinton (2001) found that FSRs were more likely to report daily news consumption behavior than others. In contrast, using the comScore Networks panel, an opt-in or non-probability panel, Fulgoni (2005) found that FSRs and less-frequent responders did not differ with regard to their grocery shopping behavior reports. These diverging results are unsurprising since the study designs and behavioral outcomes of interest differed considerably.
  • The second concern is that FSRs may take numerous surveys because they are motivated by earning incentive payments, and therefore may focus more on quick completion than on giving well-considered responses. Respondents may thus engage in survey satisficing, or taking shortcuts to avoid the cognitive work required to answer the survey questions (see, e.g., Krosnick, 1991; Krosnick et al., 2000). This may lead them to exhibit undesirable responding behaviors that degrade data quality, such as speeding, straightlining, or item nonresponse (see the sketch after this list).

    Though survey methodologists agree that these types of responding behaviors are indeed problematic, some researchers have found that FSRs are no more likely to engage in them than other respondents (see, e.g., Coen et al., 2005). Again, however, it is unclear how comparable or generalizable these findings are across studies.
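
Because these behaviors leave traces in the response data and paradata, they can be flagged programmatically after fieldwork. Below is a minimal sketch in Python/pandas of one way to flag speeding, straightlining, and item nonresponse at the respondent level; the column names (duration_sec, the q1-q5 grid) and the cutoffs are hypothetical and should be calibrated to your own instrument.

```python
import pandas as pd

# Hypothetical rating battery used for the straightlining and
# item-nonresponse checks; substitute your own grid items.
GRID_ITEMS = ["q1", "q2", "q3", "q4", "q5"]

def flag_satisficing(df: pd.DataFrame) -> pd.DataFrame:
    """Add boolean data-quality flags to a respondent-level DataFrame."""
    out = df.copy()
    # Speeding: completion time under half the sample median.
    out["flag_speeder"] = out["duration_sec"] < 0.5 * out["duration_sec"].median()
    # Straightlining: a single distinct answer across the whole grid.
    out["flag_straightliner"] = out[GRID_ITEMS].nunique(axis=1) == 1
    # Item nonresponse: more than 20% of grid items left blank.
    out["flag_item_nonresponse"] = out[GRID_ITEMS].isna().mean(axis=1) > 0.20
    return out
```

The half-median speeding rule and the 20% missingness cutoff are illustrative conventions, not standards; whichever thresholds you choose, document them and check how sensitive your flags are to them.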

In a cursory review of 16 empirical studies about FSRs, I came across at least 14 different online panels, and at least three different definitions of FSRs. Furthermore, each study asked respondents about different attitudinal and/or behavioral topics ranging from politics to electric cars.

Although the survey methodology community has a sense that FSRs are problematic, we do not have a firm grasp on the scale of the problem or the conditions under which problems might emerge. We need researchers to continue conducting these studies and sharing their results so that we can aggregate across them to get a better sense of the true scope of the problem.

In the meantime, there are a number of precautions a researcher can take to monitor FSRs and lessen the likelihood that they introduce undesirable measurement error into the survey data:

  1. Use screener questions to exclude respondents who have previously participated in studies similar to yours. Research has demonstrated that repeat participation in surveys on the same topic can sometimes introduce bias into survey data (see, e.g., Marsh et al., 2010; Halpern-Manners, Warren, & Torche, 2014).
  2. Ensure that your online panel provider is taking steps to monitor the number of surveys its panelists are completing. Professional online panel providers should engage in routine quality control checks to ensure that they are not over-surveying their respondents. Many panels place limits on the number of surveys their panelists may complete, so make sure you know the protocol for your panel of choice. Researchers should note, however, that a given panel provider only knows how many surveys a respondent has taken on its own panel, so the problem of respondents taking numerous surveys across multiple panels (cross-membership) is not addressed by any single panel's quality control procedures.
  3. Obtain panel data on survey response frequency for your respondents. Online panels already collect respondent-level data on how many surveys each panelist takes; if these data are of interest to you, be sure to negotiate their availability (and pricing) prior to data collection.
  4. Consider using quotas by survey response frequency. Setting quotas by response-frequency categories, just as you would for other key variables, explicitly controls how many FSRs will be permitted to complete your study (see the first sketch after this list).
  5. Analyze key results by some measure of panel-reported frequency of survey response. A sensitivity analysis comparing results with and without the questionable group of FSRs lets you detect whether frequency of survey response is related to response distributions on key variables (see the second sketch after this list).
  6. Always approach study design from the Total Survey Error (TSE) framework (see, e.g., Groves et al., 2004). Keep in mind that FSRs are only one possible source of error; researchers should also think carefully about other sources of error that can affect estimates from online surveys, particularly coverage error, sampling error, and nonresponse error.
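
A lightweight way to think about precaution 4 is to treat response-frequency categories like any other quota cell. The Python sketch below assumes the panel can report each incoming respondent's surveys completed in the past month; the category boundaries and targets are hypothetical.

```python
# A minimal quota-tracking sketch by response-frequency category.
# The category boundaries and targets below are hypothetical.
QUOTAS = {"low (0-2)": 400, "medium (3-9)": 400, "high (10+)": 200}
counts = {cell: 0 for cell in QUOTAS}

def frequency_cell(surveys_last_month: int) -> str:
    """Map a panel-reported frequency to a hypothetical quota cell."""
    if surveys_last_month <= 2:
        return "low (0-2)"
    if surveys_last_month <= 9:
        return "medium (3-9)"
    return "high (10+)"

def admit(surveys_last_month: int) -> bool:
    """Admit a respondent only while their frequency cell is still open."""
    cell = frequency_cell(surveys_last_month)
    if counts[cell] >= QUOTAS[cell]:
        return False  # cell full: screen the respondent out
    counts[cell] += 1
    return True
```

In practice you would configure these cells in your survey platform or with your panel provider rather than hand-rolling a tracker; the point is simply that frequency categories can be quota-controlled like any demographic variable.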
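
For precaution 5, the sensitivity analysis can be as simple as computing key estimates three ways: full sample, excluding FSRs, and FSRs alone. Here is a minimal Python/pandas sketch, assuming a panel-reported surveys_last_month column and a hypothetical FSR cutoff of ten or more surveys per month.

```python
import pandas as pd

def sensitivity_check(df: pd.DataFrame, outcome: str, cutoff: int = 10) -> pd.DataFrame:
    """Compare summary statistics for a key outcome with and without FSRs.

    The surveys_last_month column and the 10+/month cutoff are
    hypothetical; use whatever frequency measure your panel supplies.
    """
    is_fsr = df["surveys_last_month"] >= cutoff
    return pd.DataFrame({
        "full sample": df[outcome].describe(),
        "excluding FSRs": df.loc[~is_fsr, outcome].describe(),
        "FSRs only": df.loc[is_fsr, outcome].describe(),
    })
```

If the full-sample and excluding-FSRs columns tell the same substantive story, frequency of response is unlikely to be driving your key results; if they diverge, you have detected exactly the relationship this precaution is designed to surface.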

References

  1. Clinton, J. D. (2001). Panel bias from attrition and conditioning: A case study of the Knowledge Networks panel. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Montreal, Canada.
  2. Coen, T., Lorch, J., & Piekarski, L. (2005). The effects of survey frequency on panelists' responses. Survey Sampling International White Paper. Retrieved from http://www.surveysampling.com/ssi-media/Corporate/white_papers/The-Effects-of-Survey-Frequency-on-Panelists-Responses.image.
  3. Dennis, M. (2001). Are internet panels creating professional respondents? Marketing Research, 13(2): 34-38.
  4. Fulgoni, G. (2005). The 'professional respondent' problem in online survey panels today. Presented at the Marketing Research Association Annual Conference, Chicago, IL.
  5. Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey Methodology. Hoboken, NJ: John Wiley & Sons.
  6. Halpern-Manners, A., Warren, J. R., & Torche, F. (2014). Panel conditioning in a longitudinal study of illicit behaviors. Public Opinion Quarterly, 78(3): 565-590.
  7. Marsh, K., Daves, R. P., Anderson, A., Turner, S., White, H. A., & Everett, S. E. (2010). The priming directive: Priming effects in media awareness measures on a national probability panel. Paper presented at the 65th Annual Conference of the American Association for Public Opinion Research, Chicago, IL.
  8. Williams, P., Block, L. G., & Fitzsimons, G. J. (2006). Simply asking questions about health behaviors increases both healthy and unhealthy behaviors. Social Influence, 1(2): 117-127.
  9. Zwane, A. P., et al. (2011). Being surveyed can change later behavior and related parameter estimates. Proceedings of the National Academy of Sciences, 108(5): 1821-1826.

