1.1 Advantages of probability-based, online survey panels
The EIPS infrastructure combines the advantages of the online survey mode with the probability sampling methods typically used in traditional survey modes. By combining the “best features of both worlds”, it allows for collecting high-quality data at greatly reduced costs.

Online panels have become increasingly popular in the survey industry because they have important advantages over traditional survey modes, such as face-to-face or telephone surveys. Online panels facilitate individual-level longitudinal research, because panellists can easily be recontacted for repeated surveys. Online surveys are also convenient for respondents, who can complete them at the time and place of their choice, and cost-efficient for the organizer, owing to reduced interviewer costs and the ease with which panellists can be activated for repeated surveys.

However, today’s commercial polls often rely on online panels whose panellists are recruited via non-probability methods. This means that panellists are not recruited through a random sample of the general population, but, for example, by clicking on pop-up banners or by signing up through a website. Such recruitment strategies are inherently selective, yielding samples heavily skewed towards the internet-savvy, educated and politically vocal parts of the population. The EIPS breaks with this low-quality sampling practice of commercial online panels and consists solely of panels based on a probability sample of the general population of a country. These probability-based online panels hold samples that are representative of the general population (Callegaro et al. 2014).
Indeed, numerous studies have shown that data stemming from probability-based online panels are more accurate than data from non-probability-based online panels and are on par with probability-based face-to-face and telephone surveys (Revilla and Saris 2012; Baker et al. 2013; Yeager et al. 2011), at lower costs (Blom, Gathmann, and Krieger 2015).
Callegaro, Mario, Ana Villar, J. Krosnick, and D. Yeager. 2014. “A Critical Review of Studies Investigating the Quality of Data Obtained with Online Panels.” John Wiley & Sons.
Revilla, Melanie A., and Willem E. Saris. 2012. “A Comparison of the Quality of Questions in a Face-to-Face and a Web Survey.” International Journal of Public Opinion Research 25 (2): 242–53.
Baker, Reg, J. Michael Brick, Nancy A. Bates, Mike Battaglia, Mick P. Couper, Jill A. Dever, Krista J. Gile, and Roger Tourangeau. 2013. “Summary Report of the AAPOR Task Force on Non-Probability Sampling.” Journal of Survey Statistics and Methodology 1 (2): 90–143.
Yeager, David S., Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang. 2011. “Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples.” Public Opinion Quarterly 75 (4): 709–47.
Blom, Annelies G., Christina Gathmann, and Ulrich Krieger. 2015. “Setting up an Online Panel Representative of the General Population: The German Internet Panel.” Field Methods 27 (4): 391–408. doi:10.1177/1525822X15574494.