doi:10.5477/cis/reis.191.25-42
Survey Quality in Digital Society:
Advances and Setbacks
La calidad de la encuesta en la sociedad digital: Avances y retrocesos
M.ª Ángeles Cea D’Ancona
Citation
Cea D’Ancona, M.ª Ángeles (2025). «Survey Quality in Digital Society: Advances and Setbacks». Revista Española de Investigaciones Sociológicas, 191: 25-42. (doi: 10.5477/cis/reis.191.25-42)
M.ª Ángeles Cea D’Ancona: Universidad Complutense de Madrid | maceada@ucm.es
Almost two decades have passed since the publication of the article “La senda tortuosa de la ‘calidad’ de la encuesta” (REIS, 111), and even longer since the publication of two comprehensive monographs on survey errors (Groves, 1989; Biemer and Lyberg, 2003). An update is therefore necessary, in a context of declining face-to-face surveys and growing online and mixed survey methods, even though, as early as 2003, Biemer and Lyberg stated that these types of surveys were the norm (p. 208). These mainly consist of panel surveys that begin in person and are subsequently completed by telephone or self-completion. The objective is to reduce the economic and time costs of research by applying methods that are cheaper than face-to-face surveys, and also to mitigate non-response, coverage and measurement errors by combining sampling frames, sample selection procedures and modes of questionnaire administration. One recent illustration is offered by the European Social Survey (ESS). Round 12 of this survey will be mixed in 2025/2026: half of the sample will be face-to-face, and the other half will self-complete a questionnaire via web and postal mail. Round 13 (2027/2028) will move to the self-completion version only, depending on the impact of Round 12 on its time series.
On the other hand, according to ESOMAR (2023), the digital world has become the main source of information collection. In Spain, 70 % of all the studies conducted in 2022 (and analyzed by Insights+Analytics) were quantitative, with 28 % of all information collection conducted electronically, 31 % online/mobile quantitative, 7 % via telephone and only 4 % face-to-face. Of these, 37 % were panel surveys. These percentages are similar to those found globally, where 35 % of all studies were online/mobile quantitative. This digital expansion would not have been possible without technological advances, which have facilitated access to the Internet and mobile devices even for the older population and those of a lower socio-economic status.
This article examines advances in survey methodology and their translation into quality improvements, based on the results of empirical research published over the past decade in scientific journals that specialize in surveys. The overview begins by considering the potential and limitations of online surveys and mixed methods. It then goes on to detail the various survey errors within the theoretical framework of total survey error.
Potential and limitations of online surveys and mixed survey methods in social research
The expansion of web or online surveys is largely explained by their lower cost, which makes it possible to increase sample size and dispersion. Their expansion is also due to their use as a follow-up strategy to reduce unit non-response and to their application in panel studies. This is accompanied by the relative speed of data transmission, since data are stored directly in electronic format, as is the case with computer-assisted telephone (CATI) and face-to-face (CAPI) surveys. This helps reduce coding errors and speeds up the preparation and analysis of survey data. Added to these potentialities is greater flexibility in questionnaire design (question and answer formats) through the use of multimedia software, and the fact that the survey is self-completed. Self-completion is associated with reduced social desirability bias, since responses are not given to an interviewer either in person or via telephone (Fricker et al., 2005; Gooch and Vavreck, 2019). But the interviewer’s absence may have a negative impact, producing lower-quality, more hasty responses (increased primacy bias [1] and item non-response), since no one is present to motivate the individual to respond, clarify doubts or follow up on the information collection (Cernat and Revilla, 2020; Heerwegh and Loosveldt, 2008).
There are other possibilities and limitations, given that self-completed online surveys (CAWI: Computer Assisted Web Interviewing) may be completed using a mobile device (smartphone or tablet). This permits the integration of responses on attitudes and behaviors with specific behavioral data collected passively through sensors (GPS locations, accelerometers, devices for measuring physical activity, stress, etc.). The integration of subjective and objective data improves the measurement of behaviors by providing data that are less susceptible to recall errors and social desirability bias (Keusch and Conrad, 2022; Link et al., 2014; Struminskaya et al., 2020). On the negative side, acceptance of these data collection methods is low (Wenz and Keusch, 2023). The same occurs with the option to take photos and to track mobile device usage (such as web pages visited), which may also complement (and even replace) the data collected via surveys. Participants must remember to use it for each requested event, which requires ongoing motivation and commitment. Furthermore, they may choose to report only certain activities, generating a differential exclusion of the events in question, in addition to storage limitations, which may lead to data loss.
The main handicap of the online survey continues to be non-response (Elevelt, Lugtig and Toepoel, 2019 [2]; Jäckle et al., 2019; Struminskaya et al., 2021a), which compromises its quality and its possibilities of inference. This challenge is compounded by coverage errors (discussed in the next section) and by the limitations of completing the survey on a mobile device. The questionnaire must have a format that facilitates responding on a small touch screen, and the use of the sensors installed on the device must be authorized.
While answering survey questions or taking photographs allows an individual to control the information that is provided, for other activities (such as GPS location) the only control is to turn off data collection for privacy reasons. Studies on the willingness to perform additional tasks on a mobile device as part of a survey conclude that the predisposition is greater for tasks where the content being transmitted can be controlled (such as photographs) than for those that collect data automatically (such as GPS location) (Revilla, Couper and Ochoa, 2019; Revilla et al., 2016 [3]; Wenz, Jäckle and Couper, 2019; Wenz and Keusch, 2023 [4]). It has also been observed that people who use their device more intensively (measured by the frequency of application downloads and the number of applications used) are more predisposed to participate in mobile data collection tasks than those who are concerned about the privacy and security of the data that they provide. Participation is also affected by the organization sponsoring the study and by its duration, as with other survey methods, favoring studies that take less time and are sponsored by universities (Struminskaya et al., 2021b). Allowing the person to choose the mode (voice, text, video) of responding to the survey also encourages participation by increasing satisfaction with the survey (Conrad et al., 2017).
Regarding mixed survey methods (combining online self-completion with in-person or telephone administration), economic reasons and an increase in response rates in cross-sectional and panel studies also encourage their expansion. These methods reach a more diverse population, reducing the coverage and non-response errors that undermine a study’s representativeness (Cornesse and Bosnjak, 2018; Jäckle, Lynn and Burton, 2015; Lugtig et al., 2011). Some surveys offer the option of choosing the preferred mode of being surveyed from the outset, while in others the mode is assigned depending on the response propensities of each population group during fieldwork. This latter approach has the advantage of applying the mode with the highest response probability for the given population (Cornesse and Bosnjak, 2018). However, the response rate of online surveys continues to be a weak point, even when those surveyed are professionals with full Internet access and high education levels (Cea D’Ancona and Valles, 2021) [5].
Progress in compliance with quality criteria within the framework of total survey error
The logistic and discriminant regression models obtained in the Social Perception of Surveys (III) study, conducted by the Center for Sociological Research (CIS) in 2017 (Cea D’Ancona, 2022), reveal that trust in surveys depends on the utility attributed to them. This utility is connected to the representativeness of the sample and the validity of the data provided, as well as to surveys being considered beneficial for people. From these results, it may be concluded that the degree of compliance with quality requirements can determine participation in a survey, depending on the reliability attributed to the data that it provides. But what determines the quality of the survey? Although there is consensus that a low response rate decreases quality, a high rate is not synonymous with quality (Eckman and Koch, 2019), since quality depends on various errors.
When assessing survey quality, the theoretical frame of reference is total survey error, which, as Lyberg (2012) indicates, allows the survey to be optimized by minimizing the accumulated size of all sources of error, given the budgetary limitations. It comprises the different sources of error that make survey estimates deviate from actual values (Groves, 1989; Groves and Lyberg, 2010; Lyberg and Stukel, 2017). It includes non-observation errors, which affect the selection of the sample to be analyzed: coverage errors (not including the entire study population), sampling errors (the sample does not represent the population), and non-response errors (of the unit or of the item). Although these are the most frequently analyzed errors, with specific formulas that quantify their incidence (Groves, 1989), a complete analysis of quality also includes measurement, observation or response errors. The latter relate to the representativeness of the information provided by the survey. They are affected by the survey method applied, especially when addressing topics that are susceptible to social desirability bias (Cea D’Ancona, 2017; Heerwegh and Loosveldt, 2008; Kreuter, Presser and Tourangeau, 2008; Zhang et al., 2017), as well as by the design of the questionnaire, the mediation of the interviewer (when applicable), the attitude of the person surveyed and the treatment of the survey data (editing, coding of open questions, recording, weighting, imputation, tabulation, statistical modelling, etc.). Therefore, advances in survey quality involve both groups of errors.
Incidence of non-observation errors on sample representativeness
The growing use of online surveys has been accompanied by a debate on the representativeness of the samples that ultimately complete them. This does not only refer to coverage errors, given the lack of sampling frames of Internet users that would make it possible to apply probability sampling in surveys of the general population (even using web-push). The representativeness of non-probability samples and low response rates are also the subject of debate.
Coverage errors exist when certain units of the population of interest do not have the opportunity to be surveyed because they are not included in the sampling frame (Groves et al., 2009). The size of this error depends on the proportion of the population that is not covered and on how much the non-covered differ from the covered, especially in characteristics related to the topic of the survey. A significant decline is nevertheless being detected in the overrepresentation of highly educated people among the population that accesses the Internet (Sterrett et al., 2017). Still, a certain level of skill is needed to complete online questionnaires, and this may negatively affect the participation of less educated individuals and those with less interest in the topic of the survey. This is referred to as the “digital divide” in differential access to and use of new technologies. Coverage error therefore continues to be the source that most attenuates the representativeness of online surveys directed at the general population, although it also exists in other surveys (such as telephone surveys with exclusive sampling of landlines or mobile phones). This error increases in online surveys completed with mobile phones, since participation involves having the device, the ability to use it for the requested task, and the willingness to consent to sharing one’s data (Antoun et al., 2019; Couper et al., 2018; Keusch et al., 2023; Keusch et al., 2019; Wenz, Jäckle and Couper, 2019). Reducing it implies providing access to mobile phones, connection to a mobile Internet service, and support during the self-completion process.
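In its usual total survey error formulation (e.g., Groves, 1989), the bias that undercoverage introduces in a sample mean combines the two elements just mentioned:

\[ \bar{Y}_C - \bar{Y} = \frac{N_{nc}}{N}\,\left(\bar{Y}_C - \bar{Y}_{nc}\right), \]

where \(N_{nc}/N\) is the proportion of the population excluded from the sampling frame and \(\bar{Y}_C\) and \(\bar{Y}_{nc}\) are the means of the covered and non-covered populations. The error only vanishes when the non-covered are a negligible share of the population or do not differ on the survey variable.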
When online surveys are completed by panels of volunteers, this adds to the debate regarding the representativeness of samples selected via non-probabilistic methods. This is especially the case when individuals recruit themselves in response to survey advertisements, a common practice in non-probability surveys (Callegaro et al., 2014; Cornesse et al., 2020). These advertisements tend to attract people with specific sociodemographic profiles, values and habits, who may be participating in several online panels simultaneously (Tourangeau, Conrad and Couper, 2013). They are distinguished by their greater political knowledge and their preference for center-left parties and policies (Karp and Luehiste, 2015; Valentino et al., 2020), and by a lower presence of individuals over the age of sixty-five (Loosveldt and Sonck, 2008). This deteriorates the representativeness of the sample and leads to biased estimates (Bethlehem, 2010; Chang and Krosnick, 2009; Cornesse and Bosnjak, 2018; Wang et al., 2015). Furthermore, it has been observed that individuals who actively participate in various panels may even provide erroneous data to increase their economic compensation (Toepoel, Das and Soest, 2008; Cornesse and Bosnjak, 2018).
While probabilistic sampling makes it possible to estimate the precision of sample estimates, with confidence intervals and margins of error (Kish, 1965), non-probabilistic sampling (mainly convenience samples) does not allow the occurrence of errors to be calibrated at each stage of the sample design. All that can be verified is the closeness of the final sample to the study population in terms of specific characteristics, as occurs with non-probabilistic quota sampling, which is designed to ensure that the sample coincides with the population in key demographic parameters. To the extent that this holds, inferences made from quota samples will be accurate (Cornesse et al., 2020).
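For a simple random sample, for example, the margin of error of an estimated proportion takes the familiar textbook form (cf. Kish, 1965):

\[ e = z_{\alpha/2}\sqrt{\frac{p\,(1-p)}{n}}, \]

where \(p\) is the estimated proportion, \(n\) the sample size and \(z_{\alpha/2}\) the critical value (1.96 for a 95 % confidence level). No analogous design-based expression is available for convenience samples, which is precisely what the debate above turns on.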
But even probability samples may be inaccurate, given the variations in the probability that certain groups of the population will end up participating in the survey, producing systematic (non-random) non-response. The statistical adjustments used to reduce systematic biases in probability samples are also applied to non-probability samples, both as global adjustments and for specific results. These include propensity score weighting, which is applied once survey data collection has been completed and uses data from the population or from a large probability sample as a reference. Typically, a logistic regression model based on demographic, behavioral and attitudinal variables measured in both data sets is used to predict the probability that a particular unit belongs to the non-probability sample. Each unit is then weighted by the inverse of the predicted probability derived from these propensity models (Lee, 2006; Valliant and Dever, 2011). Sample matching, on the other hand, attempts to form a balanced non-probabilistic sample by selecting units from a very large frame (such as the list of members of a voluntary participation panel) on the basis of a series of characteristics that match those of the units in the reference probability sample (Bethlehem, 2016). The comparison procedure relies on a distance metric (such as the Euclidean distance) to identify the closest match between pairs of units on the set of common features. Matching is conducted before the survey begins in order to reduce differences between the non-probability sample and the population on key variables. Unlike propensity weighting, sample matching is not an explicit weighting technique but rather a method that attempts to balance the non-probabilistic sample. In both cases, however, there is no guarantee that biases in non-probability samples will be completely eliminated (Cornesse et al., 2020; Little et al., 2020).
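As an illustration of the propensity-weighting logic just described, the following sketch (with hypothetical file and variable names, not data from any of the studies cited here) fits a logistic regression on a stacked file containing a probability reference sample and a volunteer web panel, and then weights the panel cases by the inverse of their predicted propensity:

```python
# Sketch of propensity-score weighting for a nonprobability web sample.
# File and column names are illustrative assumptions; covariates are assumed numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression

reference = pd.read_csv("probability_reference.csv")   # large probability sample
volunteer = pd.read_csv("nonprobability_panel.csv")    # opt-in web panel

covariates = ["age", "education", "political_interest"]

# Stack both samples and flag membership in the nonprobability sample.
combined = pd.concat([reference[covariates], volunteer[covariates]], ignore_index=True)
member = [0] * len(reference) + [1] * len(volunteer)

# Predict the probability of belonging to the nonprobability sample.
model = LogisticRegression(max_iter=1000).fit(combined, member)
p_hat = model.predict_proba(volunteer[covariates])[:, 1]

# Weight each volunteer case by the inverse of its predicted propensity,
# then rescale so the weights sum to the sample size.
volunteer["weight"] = 1.0 / p_hat
volunteer["weight"] *= len(volunteer) / volunteer["weight"].sum()
```

Variants of this adjustment exist (e.g., weighting by the odds of the propensity, or combining it with calibration); the sketch only shows the inverse-propensity version described above.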
After reviewing the available empirical evidence, Cornesse et al. (2020) upheld the recommendation to continue relying on probability-based surveys. Lavrakas et al. (2022) did the same after comparing online panels that administered the same questionnaire to eight independent national samples. They also recommend greater transparency on the part of survey companies. The availability of reports describing the methodology used to collect and process the data is considered of the utmost importance in determining whether the individuals surveyed are actually representative of their population.
Regarding non-response errors (of the unit and of the item), it should be reiterated that response rate is only weakly associated with this error (Groves et al., 2008; Groves and Peytcheva, 2008). Relatively low response rates may accurately reflect the population if those who complete the survey differ only randomly from the non-respondents (Bethlehem, Cobben and Schouten, 2011; Cornesse and Bosnjak, 2018). The incidence of non-response on survey quality depends on the profiles of the respondents, their connection with the topic at hand, the interest it arouses in the population to be surveyed (Groves, Presser and Dipko, 2004; Keusch, 2013) and its sensitivity (Couper et al., 2010; Tourangeau and Yan, 2007). Surveys addressing highly stigmatized behaviors tend to be answered less frequently by those who engage most in such behaviors, undermining their representativeness (Plutzer, 2019).
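The standard deterministic expression for non-response bias (e.g., Groves, 1989) makes this explicit:

\[ \bar{y}_r - \bar{y}_n = \frac{n_{nr}}{n}\,\left(\bar{y}_r - \bar{y}_{nr}\right), \]

where \(\bar{y}_n\) is the mean of the full selected sample, \(\bar{y}_r\) and \(\bar{y}_{nr}\) are the means of respondents and non-respondents, and \(n_{nr}/n\) is the non-response rate. A high response rate only guarantees low bias when the second factor, the respondent/non-respondent difference, is also small, which is why response rate alone is a poor proxy for quality.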
Telephone surveys repeatedly show that older people are overrepresented, while in online surveys they are underrepresented, along with those of a lower socioeconomic status (Bech and Kristensen, 2009; Couper, 2000; Roster et al., 2004). On the other hand, those who are most active in their community tend to be more participative in surveys, since they perceive them as a public good and consider participation a prosocial behavior (Beller and Geyer, 2021; Groves, Singer and Corning, 2000). This is in line with the conclusion that altruistic values predict survey participation (Groves, Cialdini and Couper, 1992). The survey structure (type and format of the survey questions) also contributes, together with the guarantees of privacy and confidentiality that are provided.
If the survey is online, the probability of participating is also influenced by physical capacity (vision, ability to respond) and familiarity with digital devices, in addition to the type of data to be collected, since requiring the download of an application tends to reduce willingness to participate (Jäckle et al., 2019; Wenz, Jäckle and Couper, 2019). To avoid this, it is recommended that additional instructions or screenshots be provided on how to access the app store and how to download and install the app on the device. When the individual is not sufficiently familiar with the computer or device, or uses it less intensively, it is recommended that an interviewer be available to offer assistance through a support hotline. And, to ensure that security is not a concern, the invitation letter should set out the guarantees of confidentiality and highlight the importance of participating in the survey. Additional reminders are also sent in over-surveyed populations and panel studies (Struminskaya et al., 2021b; Mol, 2017). Reminders sent via text message (SMS) have been shown to be more effective at increasing the response rate, since they are better at attracting attention and at establishing legitimacy (Andreadis, 2020; Kocar, 2022).
Regarding the questionnaire, survey duration, the difficulty of the questions, the content of the first question and the use of a progress bar (in online surveys) are all related to response rate (Liu and Wronski, 2018). The time of year appears to have an impact (better in September and in winter), as does the day of the week (Monday, followed by Tuesday), compared to Saturday and Sunday, when it is less likely that an online survey will be completed (Fang et al., 2021). Completion tends to be postponed until Monday, due to family and domestic obligations, as well as the need to disconnect from cognitively demanding activities.
The incidence of non-response on survey quality, which magnifies other errors in sample representativeness, may be reduced through various actions. These include offering incentives in online surveys (Becker, Möser and Glauser, 2019; Göritz, 2006), giving respondents the option of reviewing and deleting data that they do not wish to transmit to the researcher (Wenz and Keusch, 2023), and including in the invitation letter a link to an app store to facilitate safe downloading (Lawes et al., 2022).
Once data have been collected, statistical adjustments are applied to reduce the negative impact of non-response, as with other non-observation errors. These include weights that correct for sociodemographic differences between the final sample and the population. Their effectiveness depends on how closely the selected variables are related to the survey topic and to the propensity to respond to it, and on the quality of the auxiliary data available. These data include population statistics (census, population register, etc.), administrative data (if records can be linked), and data from commercial sources containing characteristics of neighborhoods and housing units (West et al., 2015).
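A minimal sketch of the simplest such adjustment, cell weighting to known population shares, is shown below; the categories, shares and values are invented for illustration only:

```python
# Cell (post-stratification) weighting: respondents in each sex-by-age cell
# receive a weight equal to the population share of the cell divided by its
# sample share. All figures are illustrative.
import pandas as pd

sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M"],
    "age_group": ["18-34", "35-64", "18-34", "65+", "65+", "35-64"],
})

# Known population shares for each cell (e.g., from a census or register).
population_share = {
    ("F", "18-34"): 0.14, ("F", "35-64"): 0.25, ("F", "65+"): 0.12,
    ("M", "18-34"): 0.15, ("M", "35-64"): 0.24, ("M", "65+"): 0.10,
}

# Observed share of each cell in the final sample.
sample_share = sample.groupby(["sex", "age_group"]).size() / len(sample)

# Weight = population share / sample share for the respondent's cell.
sample["weight"] = [
    population_share[(s, a)] / sample_share[(s, a)]
    for s, a in zip(sample["sex"], sample["age_group"])
]
print(sample)
```

More elaborate procedures (raking to several margins, calibration) follow the same logic of aligning the weighted sample with known population totals.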
An auxiliary source of information is the interviewer’s observations on the characteristics of the population surveyed, when interviewers mediate the data collection. Compared to the characteristics available at the area level, observations on the housing unit can provide information of interest for the survey and for the weighting adjustments. On the downside, these observations may vary greatly between observers and may lack the necessary quality. Their application requires additional training for the interviewers, and their observations should be accompanied by photographs to be reviewed as a group in order to reduce variation between interviewers (Ren et al., 2022). It should also be considered that these observations tend to capture observable classification variables that are not always key to the survey. Their utility in reducing non-response error therefore depends on how closely they relate to the topic of the survey and on their not constituting value judgments. Virtual observations via Google Street View, in turn, are subject to coverage problems (fewer images in non-urban areas) and to a time lag with respect to the date when the images were taken (Vercruyssen and Loosveldt, 2017).
Impact of measurement errors on the representativeness of the information
Observational or measurement errors are the deviations of responses from actual values (Groves, 1989; Couper, 2000). Their size may be affected by decisions made during survey design, from the selection of the method to the precise formulation of questions and answers, which can distort survey results and lead to erroneous conclusions (Saris and Revilla, 2016). In a recent study, Poses et al. (2021) quantified average measurement quality at 0.65 for 67 questions in the European Social Survey across forty-one country-language groups: 65 % of the observed variance came from the latent concepts of interest, while 35 % was due to measurement error. Previously, DeCastellarnau and Revilla (2017) found estimates of measurement quality between 0.60 and 0.89 for the questions of the fifth wave of the online Norwegian Citizen Panel.
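In the measurement tradition these studies follow, the quality of a survey question can be read as the share of observed variance attributable to the concept of interest rather than to measurement error:

\[ q^2 = \frac{\operatorname{Var}(T)}{\operatorname{Var}(Y)} = 1 - \frac{\operatorname{Var}(E)}{\operatorname{Var}(Y)}, \]

where \(Y\) is the observed answer, \(T\) the contribution of the latent concept and \(E\) the measurement error. An average quality of 0.65 thus means that 65 % of the variance in the answers reflects the concepts being measured and the remaining 35 % reflects error.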
Regarding the question-answer process, Tourangeau, Rips and Rasinski (2000) suggested that the quality of the response depends on the thoroughness of four cognitive steps: understanding the question, retrieving relevant information from memory, formulating a judgment and selecting a response. Biases that lead to unrealistic responses are often referred to as response effects. These include random or inattentive responses and insufficient effort in responding (Maniaci and Rogge, 2014). Regardless of the content of the question, when little effort is made in the response process they include acquiescence bias (the tendency to agree regardless of the question asked), primacy bias (selecting the first reasonable response option) and recency bias (choosing the last). While recency bias is more present in telephone surveys, primacy bias is more prevalent in self-completed surveys (Christian, Dillman and Smyth, 2007).
In addition, there are errors caused by the order of the survey questions and by their content (questions referring to the past and those vulnerable to social desirability bias). These may be affected by interest in the survey topic (Anduiza and Galais, 2016) and by the educational level of the individual surveyed. In general, measurement errors are more frequent among individuals with a lower educational level. The exception is social desirability bias, which is more common among those with a higher educational level. They are more likely to perceive the intention behind the question and to respond differently depending on the survey method applied, with self-completed modes faring better (Cea D’Ancona, 2017; Chang and Krosnick, 2009; Heerwegh and Loosveldt, 2008; Kreuter, Presser and Tourangeau, 2008; Zhang et al., 2017).
Given their complexity and the different factors involved in the response, measurement errors are difficult to control, although studies have examined how to minimize them and improve survey design (Callegaro, Manfreda and Vehovar, 2015; Couper, 2008; Tourangeau, Conrad and Couper, 2013). Although online surveys are cost-effective, fast and easy to implement, data quality (in terms of measurement) is compromised when questions are answered at random or with low motivation to correctly interpret their content and comply with the survey instructions. This raises questions about the quality of their measurements, due to the lack of the control exercised in face-to-face interviews and their greater vulnerability to acquiescence bias (Fricker et al., 2005; Zhang and Conrad, 2014), as well as to errors facilitated by typing with the fingertips on a small virtual keyboard (when answering via mobile device). This has led to the length of answers to open questions being considered an indicator of satisficing (Mavletova and Couper, 2013).
As self-completion instruments, online surveys have the advantage of allowing respondents to decide when to answer the questions and to verify relevant information before answering. This creates less pressure (than in telephone surveys) to provide fast answers, resulting in more accurate responses to knowledge questions and to those referring to the past (Braunsberger, Wybenga and Gates, 2007; Fricker et al., 2005). They also encourage the reporting of socially undesirable opinions or behaviors, unlike telephone surveys, which are more vulnerable to social desirability (Chang and Krosnick, 2009; Christian, Dillman and Smyth, 2007; Kreuter, Presser and Tourangeau, 2008). Face-to-face surveys, on the other hand, favor a better relationship between the interviewer and the respondent as well as the validation of the survey’s legitimacy (Jäckle, Roberts and Lynn, 2010), and therefore carry a lower risk of social desirability bias than telephone surveys (Hope et al., 2022).
Among the actions used to reduce careless responses (less common among women and those with a higher educational level) are items that verify whether attention is being paid when answering the survey questions (Berinsky, Margolis and Sances, 2014). Online surveys are more dependent on questionnaire design, since no interviewer is present to clarify the questions and encourage respondents to answer them. Visual stimuli increase motivation and participation by making the survey more fun and entertaining (Bărbulescu and Cernat, 2012; Liu et al., 2015; Mavletova, 2015). Other mobile-specific design improvements include the use of user-friendly input tools and the avoidance of formats that make responding more difficult (sliders, drop-down boxes that become selectors), as well as the application of responsive web design to adapt the questionnaire to different screen sizes (Antoun, Couper and Conrad, 2017).
In face-to-face and telephone surveys, the interviewer’s performance can increase measurement errors. Although interviewers may help decrease the difficulty of the task by reducing the cognitive demands of the question (offering clarifications on questions and answers), they may also introduce errors when formulating questions and recording answers (West and Blom, 2017). It has also been observed that sociodemographic (mis)matches between the interviewer and the respondent may affect unit and item non-response in face-to-face surveys (Bittmann, 2020; Durrant et al., 2010). The main theoretical framework explaining this is the liking principle (Groves, Cialdini and Couper, 1992), which suggests that people prefer to interact with those they like on the basis of their sociodemographic characteristics, attitudes or beliefs. Matching by gender, age, educational level and skin color has been observed to increase cooperation and participation in the survey (Blanchard, 2022; Durrant et al., 2010; Vercruyssen, Wuyts and Loosveldt, 2017). In contrast, social distance theory argues that too much distance (in sociodemographic terms) between the interviewer and the respondent will result in biased responses (Dohrenwend, Colombotos and Dohrenwend, 1968). In panel surveys, keeping the same interviewer has been found to foster the confidence of the individuals surveyed and the sincerity of their responses (Kühne, 2018).
The observations noted after the interview may be used as indicators of response quality, including the degree of understanding and cooperation of the questionnaire respondent (as in the European Social Survey or the surveys conducted by the CIS). They help to identify potential faults in data quality. While these observations typically focus on non-response errors, they are useful for adjusting for unit non-response and panel attrition (West, Kreuter and Trappmann, 2014). However, they may be subject to interviewer variance effects and measurement errors (Sinibaldi, Durrant and Kreuter, 2013), as previously mentioned. Likewise, the bias that interviewers may introduce in the selection of cases (sample units) must be considered. Since they are usually evaluated by the response rates obtained, selecting households or individuals with a greater probability of completing the survey makes them more productive, especially when payment is received per completed questionnaire. Commonly used quality control measures, such as verifications (telephone re-interviews or brief return visits to confirm that the interview took place), audio recordings and time stamps, do not necessarily detect deviations from protocol, leading to a dangerous situation: artificially high response rates, obtained because hard-to-contact cases are not recorded as non-respondents, indicate that selection has been manipulated and that the data may not represent the population (Eckman and Koch, 2019). To avoid this manipulation in the selection of sample units, sampling methods should be used that minimize the interviewers’ discretion in selection, their training and supervision should be improved, and it should be ensured that they do not feel pressured to attain high response rates. In addition, extra quality controls should be applied to those who complete many interviews on the first contact, even verifying their behavior using GPS devices.
Finally, the length of the interview can also affect the quality of the response (Olson and Peytchev, 2007; Roberts et al., 2019; Vandenplas, Beullens and Loosveldt, 2019). Measurements of duration, pace (minutes per question) and speed (questions per minute), calculated from paradata, serve as indicators of interviewer performance. Those who deviate the most from the standardized interview protocol and speed up the interview contribute the most to this component of measurement error (Olson, Smyth and Kirchner, 2020; Vandenplas, Beullens and Loosveldt, 2019; Wuyts and Loosveldt, 2022).
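These indicators are easy to derive from timestamp paradata. The following sketch (with invented interview records and field names) computes duration, pace and speed per interview and averages them by interviewer, so that those who deviate most from the group norm can be flagged for closer supervision:

```python
# Duration, pace and speed computed from timestamp paradata (illustrative data).
import pandas as pd

paradata = pd.DataFrame({
    "interviewer_id": [101, 101, 102],
    "start_time": pd.to_datetime(["2024-03-01 10:00", "2024-03-01 12:00",
                                  "2024-03-01 10:05"]),
    "end_time": pd.to_datetime(["2024-03-01 10:45", "2024-03-01 12:32",
                                "2024-03-01 11:00"]),
    "n_questions": [60, 60, 60],
})

paradata["duration_min"] = (paradata["end_time"]
                            - paradata["start_time"]).dt.total_seconds() / 60
paradata["pace"] = paradata["duration_min"] / paradata["n_questions"]    # minutes per question
paradata["speed"] = paradata["n_questions"] / paradata["duration_min"]   # questions per minute

# Average performance by interviewer; unusually high speeds warrant review.
print(paradata.groupby("interviewer_id")[["pace", "speed"]].mean())
```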
In short, interviewer variance is a key and quantifiable component of measurement error. This variance includes all deviations from the overall mean response resulting from the interviewer’s combination of physical characteristics, interview style and questionnaire completion (such as writing down the literal answer to open questions, correctly marking answers to closed questions, or not skipping any question). Its effect on the response may be random (different errors in each interview) or systematic (present in all of the interviews conducted); in the latter case, it has a greater impact on survey quality. Reducing it requires increasing the number of interviewers, so that poor performance affects fewer questionnaires and the error tends to remain random rather than systematic, in addition to intensifying interviewer training and supervision. Wuyts and Loosveldt (2022) recommend audio recordings at the start of fieldwork in order to remove, or retrain, the interviewers with the worst interview practices. They also suggest the use of “trace” data on their course of action, such as keystrokes, which register all of the entries made with the keyboard, mouse and touch screen. As with interview time data, their collection is free and can be used to flag suspicious practices. Circumstances in which the interviewer’s specific characteristics may affect the response should also be assessed, especially when the survey topic is directly related to some visible characteristic and the surveyed individual hides his/her response because it may be considered offensive or embarrassing, as indicated previously by Fowler and Mangione (1990).
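A standard way of quantifying this component in the interviewer-effects literature (not specific to the studies cited here) expresses the loss of precision as a design effect:

\[ \mathit{deff}_{\mathrm{int}} = 1 + (\bar{m} - 1)\,\rho_{\mathrm{int}}, \]

where \(\bar{m}\) is the average number of interviews per interviewer and \(\rho_{\mathrm{int}}\) is the intra-interviewer correlation of responses. Increasing the number of interviewers reduces \(\bar{m}\) and hence the impact that any systematic interviewer behaviour has on the variance of the estimates, which is the rationale for the recommendation above.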
The survey continues to be the predominant methodological strategy for obtaining large volumes of information to describe and understand the formation of public opinion, changes over time, and the links between the attitudes and behaviors of the population. However, to achieve its objectives, it must provide credible data for those who finance, use and analyze it, and it must meet minimum quality criteria that justify its high cost.
Over the past decade, both advances and setbacks have occurred, driven by the desire to reduce the economic cost and the time required to obtain information from surveys. Undoubtedly, computing advances are contributing to the digitalization of the survey and to lowering its cost, accompanied by quality improvements through the reduction of errors in questionnaire administration, response recording and interviewer performance (when recording data from the interview). However, these advances do not solve everything. Notable problems persist in the form of non-observation and measurement errors, which are not always resolved by the application of mixed surveys, due to incompatibilities in sampling frames, sample selection procedures (probabilistic and otherwise), questionnaire design and the comparability of responses. This is especially the case for subjective topics and those vulnerable to social desirability bias.
Completing surveys with mobile devices is not free of problems either. Improvements in connectivity, battery life, mobile interfaces (easier text entry), questionnaire design and objective data collection may increase the dominance of these devices in social research, although they are not a panacea. Major handicaps remain in their use, not helped by the fact that no mediator is present during information collection.
The review of empirical research conducted in this work raises the question of what should be prioritized: the availability of data in a relatively short period of time at minimum cost, or a quality survey, albeit with higher economic and time costs. With the information provided here, readers can draw their own conclusions.
Andreadis, Ioannis (2020). “Text Message (SMS) Pre-Notifications, Invitations and Reminders for Web Surveys”. Survey Methods: Insights from the Field, 8. doi: 10.13094/SMIF-2020-00019
Anduiza, Eva and Galais, Carol (2016). “Answering without Reading: IMCS and Strong Satisficing in Online Surveys”. International Journal of Public Opinion Research, 29(3): 497-519. doi:10.1093/ijpor/edw007
Antoun, Christopher; Couper, Mick P. and Conrad, Frederick G. (2017). “Effects of Mobile Versus PC Web on Survey Response Quality: A Crossover Experiment in a Probability Web Panel”. Public Opinion Quarterly, 81(1): 280-306. doi:10.1093/poq/nfw088
Antoun, Christopher; Conrad, Frederick G.; Couper, Mick P. and West, Brady T. (2019). “Simultaneous Estimation of Multiple Sources of Error in a Smartphone-Based Survey”. Journal of Survey Statistics and Methodology, 7(1): 93-117. doi:10.1093/jssam/smy002
Bărbulescu, Marinică and Cernat, Alexandru (2012). “The Impact of Pictures on Best-Worst Scaling in Web Surveys”. International Review of Social Research, 2(3): 79-93. doi:10.1515/irsr-2012-0028
Bech, Mickael and Kristensen, Morten Bo (2009). “Differential Response Rates in Postal and Web-Based Surveys in Older Respondents”. Survey Research Methods, 3(1): 1-6. doi:10.18148/srm/2009.v3i1.592
Becker, Rolf; Möser, Sara and Glauser, David (2019). “Cash vs. Vouchers vs. Gifts in Web Surveys of a Mature Panel Study”. Social Science Research, 81: 221-234. doi: 10.1016/j.ssresearch.2019.02.008
Beller, Johannes and Geyer, Siegfried (2021). “Personal Values Strongly Predict Study Dropout”. Survey Research Methods, 15(3): 269-280. doi:10.18148/srm/2021.v15i3.7801
Berinsky, Adam J.; Margolis, Michele F. and Sances, Michael W. (2014). “Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys”. American Journal of Political Science, 58(3): 739-753. doi: 10.1111/ajps.12081
Bethlehem, Jelke (2010). “Selection Bias in Web Surveys”. International Statistical Review, 78(2): 161-188. doi: 10.1111/j.1751-5823.2010.00112.x
Bethlehem, Jelke (2016). “Solving the Nonresponse Problem with Sample Matching?”. Social Science Computer Review, 34: 59-77. doi:10.1177/0894439315573926
Bethlehem, Jelke; Cobben, Fannie and Schouten, Barry (2011). Handbook of Nonresponse in Household Surveys. Hoboken: John Wiley & Sons.
Biemer, Paul P. and Lyberg, Lars E. (2003). Introduction to Survey Quality. Hoboken: John Wiley & Sons.
Bittmann, Felix (2020). “The More Similar, the Better? How (Mis)Match Between Respondents and Interviewers Affects Item Nonresponse and Data Quality in Survey Situations”. Survey Research Methods, 14(3): 301-323. doi:10.18148/srm/2020.v14i3.7621
Blanchard, Maxime (2022). “Skin Tones and Polarized Politics: How Skin Color Differences Between Interviewers and Respondents Influence Survey Answers in Bolivia”. International Journal of Public Opinion Research, 34(1). doi:10.1093/ijpor/edac007
Braunsberger, Karin; Wybenga, Hans and Gates, Roger (2007). “A Comparison of Reliability Between Telephone and Web-based Surveys”. Journal of Business Research, 60(7): 758-764. doi:10.1016/j.jbusres.2007.02.015
Callegaro, Mario; Baker, Reg; Bethlehem, Jelke; Göritz, Anja S.; Krosnick, John A. and Lavrakas, Paul J. (2014). Online Panel Research: A Data Quality Perspective. UK: John Wiley and Sons.
Callegaro, Mario; Manfreda, Katja L. and Vehovar, Vasja (2015). Web Survey Methodology. London: Sage.
Cea D’Ancona, M.ª Ángeles (2017). “Measuring Multiple Discrimination Through Survey Methodology”. Social Science Research, 67: 239-251. doi:10.1016/j.ssresearch.2017.04.006
Cea D’Ancona, M.ª Ángeles (2022). “Calidad, Confianza y Participación en Encuestas”. Papers, 107(4): 1-27. doi:10.5565/rev/papers.3074.
Cea D’Ancona, Mª Ángeles and Valles Martínez, Miguel S. (2021). “Multiple Discrimination: From Perceptions and Experiences to Proposals for Anti-discrimination Policies”. Social & Legal Studies, 30(6): 937-958. doi: 10.1177/0964663920983534
Cernat, Alexandru and Revilla, Melanie (2020). “Moving from Face-to-Face to a Web Panel: Impacts on Measurement Quality”. Journal of Survey Statistics and Methodology, 9(4): 1-19. doi: 10.1093/jssam/smaa007
Chang, Linchiat and Krosnick, Jon A. (2009). “National Surveys via RDD Telephone Interviewing Versus the Internet: Comparing Sample Representativeness and Response Quality”. Public Opinion Quarterly, 73(4): 641-678. doi: 10.1093/poq/nfp075
Christian, Leah M.; Dillman, Don A. and Smyth, Jolene D. (2007). The Effects of Mode and Format on Answers to Scalar Questions in Telephone and Web Surveys. In: J. M. Lepkowski; C. Tucker; M. Brick et al. (eds.). Advances in Telephone Survey Methodology. Hoboken: Wiley & Sons.
Conrad, Frederick G.; Schober, Michael F.; Antoun, Christopher; Yan, H. Yanna; Hupp, Andrew L.; Johnston, Michael; Ehlen, Patrick; Vickers, Lucas and Zhang, Chan (2017). “Respondent Mode Choice in a Smartphone Survey”. Public Opinion Quarterly, 81(1): 307-337. doi: 10.1093/poq/nfw097
Cornesse, Carina and Bosnjak, Michael (2018). “Is There an Association Between Survey Characteristics and Representativeness? A Meta-analysis”. Survey Research Methods, 12(1): 1-13. doi: 10.18148/srm/2018.v12i1.7205
Cornesse, Carina; Blom, Annelies G.; Dutwin, David; Krosnick, Jon A; De Leeuw, Edith D.; Legleye, Stéphane; Pasek, Josh; Pennay, Darren; Phillips, Benjamin; Sakshaug, Joseph W.; Struminskaya, Bella and Wenz, Alexander (2020). “A Review of Conceptual Approaches and Empirical Evidence on Probability and Nonprobability Sample Survey Research”. Journal of Survey Statistics and Methodology, 8(1): 4-36. doi: 10.1093/jssam/smz041
Couper, Mick P. (2000). “Web Surveys: A Review of Issues and Approaches”. Public Opinion Quarterly, 64(4): 464-494. doi: 10.1086/318641
Couper, Mick P. (2008). Designing Effective Web Surveys. New York: Cambridge University Press.
Couper, Mick P.; Singer, Eleanor; Conrad, Frederick G. and Groves, Robert M. (2010). “Experimental Studies of Disclosure Risk, Disclosure Harm, Topic Sensitivity, and Survey Participation”. Journal of Official Statistics, 26: 287-300.
Couper, Mick P.; Gremel, Garret; Axinn, William; Guyer, Heidi; Wagner, James and West, Brady T. (2018). “New Options for National Population Surveys: The Implications of Internet and Smartphone Coverage”. Social Science Research, 73: 221-235. doi: 10.1016/j.ssresearch.2018.03.008
DeCastellarnau, Anna and Revilla, Melanie (2017). “Two Approaches to Evaluate Measurement Quality in Online Surveys”. Survey Research Methods, 11(4): 415-433. doi: 10.18148/srm/2017.v11i4.7226
Dohrenwend, Barbara S.; Colombotos, John and Dohrenwend, Bruce (1968). “Social Distance and Interviewer Effects”. Public Opinion Quarterly, 32(3): 410-422. doi: 10.1086/267624
Durrant, Gabriele B.; Groves, Robert M.; Staetsky, Laura and Steele, Fiona (2010). “Effects of Interviewer Attitudes and Behaviors on Refusal in Household Surveys”. Public Opinion Quarterly, 74(1): 1-36. doi: 10.1093/poq/nfp098
Eckman, Stephanie and Koch, Achim (2019). “Interviewer Involvement in Sample Selection Shapes the Relationship Between Response Rates and Data Quality”. Public Opinion Quarterly, 83(2): 313-337. doi: 10.1093/poq/nfz012
Elevelt, Anne; Lugtig, Peter and Toepoel, Vera (2019). “Doing a Time Use Survey on Smartphones Only: What Factors Predict Nonresponse at Different Stages of the Survey Process?”. Survey Research Methods, 13(2): 195-213. doi: 10.18148/srm/2019.v13i2.7385
ESOMAR (2023). “Global Market Research Report”. Press Release. Available at: https://www.ia-espana.org/wp-content/uploads/2022/10/Ndp-datos-sector-2021.pdf, accessed February 19, 2024.
Fang, Qixiang; Burger, Joep; Meijers, Ralph and Berkel, Kees van (2021). “The Role of Time, Weather and Google Trends in Understanding and Predicting Web Survey Response”. Survey Research Methods, 15(1): 1-25. doi: 10.18148/srm/2021.v15i1.7633
Fowler, Floyd J. and Mangione, Thomas W. (1990). Standardized Survey Interviewing: Minimizing Interviewer-Related Error. London: Sage.
Fricker, Scott; Galesic, Mirta; Tourangeau, Roger and Yan, Ting (2005). “An Experimental Comparison of Web and Telephone Surveys”. Public Opinion Quarterly, 69(3): 370-392. doi: 10.1093/poq/nfi027
Gooch, Andrew and Vavreck, Lynn (2019). “How Face-to-Face Interviews and Cognitive Skill Affect Item Non-Response”. Political Science Research and Methods, 7(1): 143-162. doi: 10.1017/psrm.2016.20
Göritz, Anja S. (2006). “Incentives in Web Studies”. International Journal of Internet Science, 1(1): 58-70.
Groves, Robert M. (1989). Survey Errors and Survey Costs. Hoboken: John Wiley and Sons.
Groves, Robert M.; Cialdini, Robert B. and Couper, Mick P. (1992). “Understanding the Decision to Participate in a Survey”. Public Opinion Quarterly, 56(4): 475-495. doi: 10.1086/269338
Groves, Robert M.; Singer, Eleanor and Corning, Amy (2000). “Leverage-Saliency Theory of Survey Participation”. Public Opinion Quarterly, 64(3): 299-308. doi: 10.1086/317990
Groves, Robert M.; Presser, Stanley and Dipko, Sarah (2004). “The Role of Topic Interest in Survey Participation Decisions”. Public Opinion Quarterly, 68(1): 2-31. doi: 10.1093/poq/nfh002
Groves, Robert M.; Brick, Michael; Couper, Mick P.; Kalsbeek, William; Harris-Kojetin, Brian; Kreuter, Frauke; Pennell, Beth-Ellen; Raghunathan, Trivellore; Schouten, Barry; Smith, Tom; Tourangeau, Roger; Bowers, Ashley; Jans, Matthew; Kennedy, Courtney; Levenstein, Rachel; Olson, Kristen; Peytcheva, Emilia; Ziniel, Sonja and Wagner, James (2008). “Issues Facing the Field: Alternative Practical Measures of Representativeness of Survey Respondent Pools”. Survey Practice, 1(3): 1-6. doi: 10.29115/SP-2008-0013
Groves, Robert M. and Peytcheva, Emilia (2008). “The Impact of Nonresponse Rates on Nonresponse Bias”. Public Opinion Quarterly, 72(2): 167-189. doi: 10.1093/poq/nfn011
Groves, Robert M.; Fowler, Floyd J.; Couper, Mick P.; Lepkowski, James L.; Singer, Eleanor and Tourangeau, Roger (2009). Survey Methodology. New York: John Wiley & Sons.
Groves, Robert M. and Lyberg, Lars (2010). “Total Survey Error: Past, Present, and Future”. Public Opinion Quarterly, 74(5): 849-879. doi: 10.1093/poq/nfq065
Heerwegh, Dirk and Loosveldt, Geert (2008). “Face-to-Face Versus Web Surveying in a High-Internet-Coverage Population”. Public Opinion Quarterly, 72(5): 836-846. doi: 10.1093/poq/nfn045
Hope, Steven; Campanelli, Pamela; Nicolaas, Gerry; Lynn, Peter and Jäckle, Annette (2022). “The Role of the Interviewer in Producing Mode Effects”. Survey Research Methods, 16(2): 207-226. doi: 10.18148/srm/2022.v16i2.7771
Jäckle, Annette; Roberts, Caroline and Lynn, Peter (2010). “Assessing the Effect of Data Collection Mode on Measurement”. International Statistical Review, 78(1): 3-20. doi: 10.1111/j.1751-5823.2010.00102.x
Jäckle, Annette; Lynn, Peter and Burton, Jonathan (2015). “Going Online with a Face-to-Face Household Panel”. Survey Research Methods, 9(1): 57-70. doi: 10.18148/srm/2015.v9i1.5475
Jäckle, Annette; Burton, Jonathan; Couper, Mick P. and Lessof, Carli (2019). “Participation in a Mobile App Survey to Collect Expenditure Data as Part of a Large-Scale Probability Household Panel”. Survey Research Methods, 13(1): 23-44. doi: 10.18148/srm/2019.v1i1.7297
Karp, Jeffrey A. and Luehiste, Maarja (2015). “Explaining Political Engagement with Online Panels”. Public Opinion Quarterly, 80(3): 666-693. doi: 10.1093/poq/nfw014
Keusch, Florian (2013). “The Role of Topic Interest and Topic Salience in Online Panel Web Surveys”. International Journal of Market Research, 55(1): 59-80. doi:10.2501/IJMR-2013-007
Keusch, Florian; Struminskaya, Bella; Antoun, Christopher; Couper, Mick P. and Kreuter, Frauke (2019). “Willingness to Participate in Passive Mobile Data Collection”. Public Opinion Quarterly, 83(1): 210-235. doi: 10.1093/poq/nfz007
Keusch, Florian and Conrad, Frederick G. (2022). “Using Smartphones to Capture and Combine Self-Reports and Passively Measured Behavior in Social Research”. Journal of Survey Statistics and Methodology, 10(4): 863-885. doi: 10.1093/jssam/smab035
Keusch, Florian; Bähr, Sebastian; Haas, Georg-Christoph; Kreuter, Frauke and Trappmann, Mark (2023). “Coverage Error in Data Collection Combining Mobile Surveys with Passive Measurement Using Apps”. Sociological Methods & Research, 52(2): 841-878. doi: 10.1177/0049124120914924
Kish, Leslie (1965). Survey Sampling. New York: John Wiley & Sons.
Kocar, Sebastian (2022). “Survey Response in RDD-Sampling SMS-Invitation Web-Push Study”. Survey Research Methods, 16(3): 283-299. doi: 10.18148/srm/2022.v16i3.7846
Kreuter, Frauke; Presser, Stanley and Tourangeau, Roger (2008). “Social Desirability Bias in CATI, IVR, and Web Surveys”. Public Opinion Quarterly, 72(5): 847-865. doi: 10.1093/poq/nfn063
Kühne, Simon (2018). “From Strangers to Acquaintances? Interviewer Continuity and Socially Desirable Responses in Panel Surveys”. Survey Research Methods, 12(2): 121-146. doi: 10.18148/srm/2018.v12i2.7299
Lavrakas, Paul J.; Pennay, Darren; Neiger, Dina and Phillips, Benjamin (2022). “Comparing Probability-Based Surveys and Nonprobability Online Panel Surveys in Australia”. Survey Research Methods, 16(2): 241-266. doi: 10.18148/srm/2022.v16i2.7907
Lawes, Mario; Hetschko, Clemens; Sakshaug, Joseph W. and Grießemer, Stephan (2022). “Contact Modes and Participation in App-Based Smartphone Surveys”. Social Science Computer Review, 40(5): 1076-1092. doi: 10.1177/0894439321993832
Lee, Sunghee (2006). “Propensity Score Adjustments as a Weighting Scheme for Volunteer Panel Web Surveys”. Journal of Official Statistics, 22: 329-349.
Link, Michael W.; Murphy, Joe; Schober, Michael F.; Buskirk, Trent D.; Hunter Childs, Jennifer and Langer Tesfaye, Casey (2014). “Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys”. Public Opinion Quarterly, 78(4): 779-787. doi: 10.1093/poq/nfu054
Little, Roderick J. A.; West, Brady T.; Boonstra, Phillip S. and Hu, Jingwei (2020). “Measures of the Degree of Departure from Ignorable Sample Selection”. Journal of Survey Statistics and Methodology, 8(5): 932-964. doi: 10.1093/jssam/smz023
Liu, Mingnan; Kuriakose, Noble; Cohen, Jon and Cho, Sarah (2015). “Impact of Web Survey Invitation Design on Survey Participation, Respondents, and Survey Responses”. Social Science Computer Review, 34(5): 631-644. doi: 10.1177/0894439315605606
Liu, Mingnan and Wronski, Laura (2018). “Examining Completion Rates in Web Surveys via Over 25,000 Real-World Surveys”. Social Science Computer Review, 36(1): 116-124. doi: 10.1177/0894439317695581
Loosveldt, Geert and Sonck, Nathalie (2008). “An Evaluation of the Weighting Procedures for an Online Access Panel Survey”. Survey Research Methods, 2(2): 93-105. doi: 10.18148/srm/2008.v2i2.82
Lugtig, Peter; Lensvelt-Mulders, Gerty J.L.M.; Frerichs, Remco and Greven, Assyn (2011). “Estimating Nonresponse Bias and Mode Effects in a Mixed-Mode Survey”. International Journal of Market Research, 53(5): 669-686. doi: 10.2501/IJMR-53-5-669-686
Lyberg, Lars E. (2012). “Survey quality”. Survey Methodology, 38(2): 107-130.
Lyberg, Lars E. and Stukel, Diana M. (2017). The Roots and Evolution of the Total Survey Error Concept. In: P.P. Biemer; E. de Leeuw; S. Eckman; B. Edwards; F. Kreuter; L. E. Lyberg; N. C. Tucker and B. T. West (eds.) Total Survey Error in Practice. New York: Wiley.
Maniaci, Michael R. and Rogge, Ronald D. (2014). “Caring About Carelessness: Participant Inattention and its Effects on Research”. Journal of Research in Personality, 48: 61-83. doi: 10.1016/j.jrp.2013.09.008
Mavletova, Aigul (2015). “Web Surveys Among Children and Adolescents: Is There a Gamification Effect?”. Social Science Computer Review, 33(3): 372-398. doi: 10.1177/0894439314545316
Mavletova, Aigul and Couper, Mick P. (2013). “Sensitive Topics in PC Web and Mobile Web Surveys”. Survey Research Methods, 7(3): 191-205. doi: 10.18148/srm/2013.v7i3.5458
Mol, Christof van (2017). “Improving Web Survey Efficiency”. International Journal of Social Research Methodology, 20(4): 317-327. doi: 10.1080/13645579.2016.1185255
Olson, Kristen and Peytchev, Andy (2007). “Effect of Interviewer Experience on Interview Pace and Interviewer Attitudes”. Public Opinion Quarterly, 71(2): 273-286. doi: 10.1093/poq/nfm007
Olson, Kristen; Smyth, Jolene D. and Kirchner, Antje (2020). “The Effect of Question Characteristics on Question Reading Behaviors in Telephone Surveys”. Journal of Survey Statistics and Methodology, 8(4): 636-666. doi: 10.1093/jssam/smz031
Plutzer, Eric (2019). “Privacy, Sensitive Questions, and Informed Consent: Their Impacts on Total Survey Error, and the Future of Survey Research”. Public Opinion Quarterly, 83(1): 169-184. doi: 10.1093/poq/nfz017
Poses, Carlos; Revilla, Melanie; Asensio, Marc; Schwarz, Hannah and Weber, Wiebke (2021). “Measurement Quality of 67 Common Social Sciences Questions Across Countries/Languages”. Survey Research Methods, 15(3): 235-256. doi: 10.18148/srm/2021.v15i3.7816
Ren, Weijia; Krenzke, Tom; West, Brady T. and Cantor, David (2022). “An Evaluation of the Quality of Interviewer and Virtual Observations and Their Value for Potential Nonresponse Bias Reduction”. Survey Research Methods, 16(1): 97-131. doi: 10.18148/srm/2022.v16i1.7767
Revilla, Melanie; Toninelli, Daniele; Ochoa, Carlos and Loewe, Germán (2016). “Do Online Access Panels Need to Allow and Adapt Surveys to Mobile Devices?”. Internet Research, 26(5): 1209-1227. doi: 10.1108/IntR-02-2015-0032
Revilla, Melanie; Couper, Mick. P. and Ochoa, Carlos (2019). “Willingness of Online Panelists to Perform Additional Tasks”. Methods, Data, Analyses, 13(2): 223-252. doi: 10.12758/mda.2018.01
Roberts, Caroline; Gilbert, Emily; Allum, Nick and Eisner, Léïla (2019). “Research Synthesis: Satisficing in Surveys”. Public Opinion Quarterly, 83(3): 598-626. doi: 10.1093/poq/nfz035
Roster, Catherine A.; Rogers, Robert D.; Albaum, Gerald and Klein, Darin (2004). “A Comparison of Response Characteristics from Web and Telephone Surveys”. International Journal of Market Research, 46(3): 359-374. doi: 10.1177/147078530404600301
Saris, Willem E. and Revilla, Melanie (2016). “Correction for Measurement Errors in Survey Research”. Social Indicators Research, 127(3): 1005-1020. doi: 10.1007/s11205-015-1002-x
Sinibaldi, Jennifer; Durrant, Gabriele B. and Kreuter, Frauke (2013). “Evaluating the Measurement Error of Interviewer Observed Paradata”. Public Opinion Quarterly, 77(1): 173-193. doi: 10.1093/poq/nfs062
Sterrett, David; Malato, Dan; Benz, Jennifer; Tompson, Trevor and English, Ned (2017). “Assessing Changes in Coverage Bias of Web Surveys in the United States”. Public Opinion Quarterly, 81(1): 338-356. doi: 10.1093/poq/nfx002
Struminskaya, Bella; Lugtig, Peter; Keusch, Florian and Höhne, Jan Karem (2020). “Augmenting Surveys with Data from Sensors and Apps”. Social Science Computer Review, 0(0): 1-13. doi: 10.1177/0894439320979951
Struminskaya, Bella; Lugtig, Peter; Toepoel, Vera; Schouten, Barry; Giesen, Deirdre and Dolmans, Ralph (2021a). “Sharing Data Collected with Smartphone Sensors”. Public Opinion Quarterly, 85(1): 423-462. doi: 10.1093/poq/nfab025
Struminskaya, Bella; Toepoel, Vera; Lugtig, Peter; Haan, Marieke; Luiten, Annemieke and Schouten, Barry (2021b). “Understanding Willingness to Share Smartphone-Sensor Data”. Public Opinion Quarterly, 84(1): 725-759. doi: 10.1093/poq/nfab025
Toepoel, Vera; Das, Marcel and Soest, Arthur van (2008). “Effects of Design in Web Surveys: Comparing Trained and Fresh Respondent”. Public Opinion Quarterly, 72(5): 985-1007. doi: 10.1093/poq/nfn060
Tourangeau, Roger; Rips, Lance and Rasinski, Kenneth (2000). The Psychology of Survey Response. Cambridge: Cambridge University Press.
Tourangeau, Roger and Yan, Ting (2007). “Sensitive Questions in Surveys”. Psychological Bulletin, 133(5): 859-883. doi: 10.1037/0033-2909.133.5.859
Tourangeau, Roger; Conrad, Fredrick G. and Couper, Mick P. (2013). The Science of Web Surveys. New York: Oxford University Press.
Valentino, Nicholas A.; Zhirkov, Kirill; Hillygus, D. Sunshine and Guay, Brian (2020). “The Consequences of Personality Biases in Online Panels for Measuring Public Opinion”. Public Opinion Quarterly, 84(2): 446-468. doi: 10.1093/poq/nfaa026
Valliant, Richard and Dever, Jill A. (2011). “Estimating Propensity Adjustments for Volunteer Web Surveys”. Sociological Methods and Research, 40(1): 105-137. doi: 10.1177/0049124110392533
Vandenplas, Caroline; Beullens, Koen and Loosveldt, Geert (2019). “Linking Interview Speed and Interviewer Effects on Target Variables in Face-to-Face Surveys”. Survey Research Methods, 13(3): 249-265. doi: 10.18148/srm/2019.v13i3.7321
Vercruyssen, Anina and Loosveldt, Geert (2017). “Using Google Maps and Google Street View to Validate Interviewer Observations and Predict Non-response”. Survey Research Methods, 11(3): 345-360. doi: 10.18148/srm/2017.v11i3.6301
Vercruyssen, Anina; Wuyts, Celine and Loosveldt, Geert (2017). “The Effect of Sociodemographic (Mis)match Between Interviewers and Respondents on Unit and Item Nonresponse in Belgium”. Social Science Research, 67: 229-238. doi: 10.1016/j.ssresearch.2017.02.007
Wang, Wei; Rothschild, David; Goel, Sharad and Gelman, Andrew (2015). “Forecasting Elections with Non-Representative Polls”. International Journal of Forecasting, 31(3): 980-991. doi: 10.1016/j.ijforecast.2014.06.001
Wenz, Alexander; Jäckle, Annette and Couper, Mick P. (2019). “Willingness to Use Mobile Technologies for Data Collection in a Probability Household Panel”. Survey Research Methods, 13(1): 1-22. doi: 10.18148/srm/2019.v1i1.7298
Wenz, Alexander and Keusch, Florian (2023). “Increasing the Acceptance of Smartphone-Based Data Collection”. Public Opinion Quarterly, 87(2): 357-388. doi: 10.1093/poq/nfad019
West, Brady T.; Kreuter, Frauke and Trappmann, Mark (2014). “Is the Collection of Interviewer Observations Worthwhile in an Economic Panel Survey?”. Journal of Survey Statistics and Methodology, 2(2): 159-181. doi: 10.1093/jssam/smu002
West, Brady T.; Wagner, James; Hubbard, Frost and Gu, Haoyu (2015). “The Utility of Alternative Commercial Data Sources for Survey Operations and Estimation”. Journal of Survey Statistics and Methodology, 3(2): 240-264. doi: 10.1093/jssam/smv004
West, Brady T. and Blom, Annelies G. (2017). “Explaining Interviewer Effects”. Journal of Survey Statistics and Methodology, 5(2): 175-211. doi: 10.1093/jssam/smw024
Wuyts, Celine and Loosveldt, Geert (2022). “Interviewer Performance in Slices or by Traces”. Survey Research Methods, 16(2): 147-163. doi: 10.18148/srm/2022.v16i2.7672
Zhang, Chan and Conrad, Frederick (2014). “Speeding in Web Surveys”. Survey Research Methods, 8(2): 127-135. doi: 10.18148/srm/2014.v8i2.5453
Zhang, XiaoChi; Kuchinke, Lars; Woud, Marcella L.; Velten, Julia and Margraf, Jürgen (2017). “Survey Method Matters”. Computers in Human Behavior, 71: 172-180. doi: 10.1016/j.chb.2017.02.006
[1] Randomizing the response options makes this bias random rather than systematic.
[2] In their Time Use Study, 43 % of panel members responded positively to the invitation to participate in the smartphone version, and only 29 % completed all stages of the study; their sociodemographic profile differed from that of those who did not participate in some of the tasks, such as recording GPS data.
[3] Their study shows that willingness to use GPS varies by country: from 30 % of respondents in Mexico to 17 % in Portugal; in Spain, 24 %.
[4] Based on the Technology Acceptance Model, the willingness to download a smartphone app was examined among 1876 members of the NORC AmeriSpeak Panel. Willingness increased in studies where participants could control data collection, for example by temporarily disabling it or reviewing the data before submission.
[5] In the survey of 7989 teachers and researchers from public and private Spanish universities randomly selected for the MEDIM II project (CSO2016-75946-R), 1667 ultimately completed it after receiving six reminders.
RECEPTION: April 5, 2024
REVIEW: October 18, 2024
ACCEPTANCE: December 2, 2024