Introduction

Survey research is a common tool for collecting data about public and expert opinion across the social science disciplines. Well-designed surveys enable researchers to test theories about population perceptions and behavior. Surveys are often used to study academic scientists and their collaboration patterns and publication behaviors (López-Navarro et al., 2015; Tsai et al., 2016; Youtie et al., 2014). As with all research methods, survey design and data collection have weaknesses and limitations. Survey administration varies across paper-and-pencil, face-to-face, telephone, and web formats, but many common practices hold across survey modes, including formatting and labeling to signal the reputation of those conducting the survey; pre-notification of survey invitations; clear human subjects statements and specification of confidentiality and data use; mixed-mode contacts; and follow-up communications to improve response rates and reduce nonresponse bias. The effectiveness of these various efforts is always under study (cf. Dillman et al., 2014; Marsden & Wright, 2010; Wolf et al., 2016).

Surveys of academic scientists present a unique set of opportunities and challenges compared to surveys of the general public. First, academic scientists are much more likely to recognize the affiliation of the researchers and the funding sources. Second, they are more likely to quickly assess their rights as human research subjects and to recognize the low risk associated with completing a survey. Third, in the case of web surveys, academic researchers are more likely to have access to and literacy with digital technologies. In our experience over the past 20 years of surveying academic scientists about program implementation, research funding, productivity, research networks, and university administration, they are also more likely to participate in surveys, perhaps because of a sense of obligation to the funding agency, feelings of reciprocity toward other researchers, or a desire to share their work experiences. Given academic scientists' higher-than-average response rates and fewer digital divide issues, they are an ideal population for testing web survey implementation methods.

Best practice in survey administration aims to reduce common sources of survey error, including sampling, coverage, nonresponse, measurement, and processing error (Groves, 1989). A pre-notification letter alerting sampled individuals that they will be asked to participate in a study is an important mechanism for improving response rates, increasing the legitimacy of the research, and assuring potential respondents that the research is high quality. Research consistently shows that alert letters improve response rates (Dillman et al., 2014; Tourangeau et al., 2013). In the case of web surveys, mailing alert letters on official letterhead has the added benefit of offering a mixed-mode contact, which is also associated with increased response rates (Sue & Ritter, 2012) and may be particularly valuable when conducting research with professional groups such as scientists.

The ASU Center for Science, Technology, and Environmental Policy Studies (CSTEPS) regularly implements web surveys with mailed alert letters (Kim et al., 2017; Siciliano et al., 2018) and thus was interested in understanding whether response rates would be affected if we shifted to email alert letters or discontinued alert letters altogether. In spring 2020, with the work-from-home orders issued in response to COVID-19, postal alert letters were unlikely to reach research participants. While research indicates that mailed alert letters produce higher response rates than no alert letter (Bandilla et al., 2012; Kaplowitz et al., 2004), there is little research indicating whether an alert letter sent by email would have the same intended effect as one mailed by traditional post. The research team therefore decided to collect data on the effects of sending, or not sending, an alert email. We report results from three randomized experiments designed to investigate the effects of receiving an alert email on web survey response rates in a sample of university-based scientists. We test the effects of receiving an alert email compared to no alert email, as well as variation based on the timing of the alert email.

Literature and hypotheses

Fully electronic surveys are the fastest-growing form of survey research across the globe, and related research continues to emerge as technology evolves (Dillman et al., 2014; Sue & Ritter, 2012). Many elements of web survey best practice are intended to improve survey quality, reduce sources of error (e.g., coverage, nonresponse, measurement), and increase response rates. Survey software should provide flexible, easy-to-use templates, keep the data under the investigators' control, and minimize costs (Dillman et al., 2014). Considerable research shows that making multiple contacts improves the chances that sampled persons will agree to participate (Dillman et al., 2014).

Alert letters are pre-notices informing sampled individuals of the intent and date of the upcoming survey. Mailing hard copies of an alert letter prior to a web survey establishes the legitimacy of the project, primes the recipients for the survey's arrival, and enables an additional, mixed-mode contact with respondents (Sue & Ritter, 2012). Kaplowitz et al. (2004) demonstrated that postcard pre-notifications in a survey of college students produced a significantly higher response rate than no pre-notification. Sending an alert letter via postal mail produced a significantly higher response rate than sending the letter through email (Crawford et al., 2004). The mailed pre-notification letter not only alerts sampled individuals that the email invitation is coming but can also serve as a secondary connection with the respondent should the initial email invitation be routed to a spam folder or blocked by a server.

While alert letters have many benefits, they are costly. First, there is the expense of postage. Producing the alert letter additionally includes the cost of letterhead, envelopes, and printing, and the labor to print, fold, and stuff envelopes. Another, often underestimated, cost of mailing alert letters is the labor required to acquire complete mailing addresses, which is much easier with small, known samples than with large random samples. For example, addresses are readily obtained for surveys within organizations where letters can be distributed through payroll or office mailboxes, surveys of students whose addresses are part of their records, or surveys of homeowners whose addresses are public record. If one of the primary reasons for the growth of web surveys is convenience and lower cost, mailing alert letters adds substantial inconvenience and cost.

The use of online pre-notices or email alert letters is less common, and research on the mode of pre-notification is limited (Dillman et al., 2014; Tourangeau et al., 2013). Bosnjak et al. (2008) compared short message service (SMS), email, and no pre-notice in a college survey; the SMS pre-notice produced a higher response rate than either the email or no pre-notice. One reason for the lower response rate associated with an email alert letter could be the failure to receive or read the message due to the widespread use of spam filters (Tourangeau et al., 2013). Bandilla et al. (2012) found that while any mailed letter is more effective than email alone, the pre-notification only had an effect for the email invitation group, implying a mixed-mode effect. Web survey guides advise pre-notifying potential respondents online before sending out the invitation in order to maximize response rates (Sue & Ritter, 2012).

Following survey best practice, in previous web surveys of academic scientists we have sent a mailed alert letter printed on official university letterhead, with handwritten signatures and postage stamps (rather than scanned postal codes). These personalized efforts have been shown to help surveyors establish a connection with respondents and increase response rates (Dillman et al., 2014). The mixed-mode approach adds a postal letter that is not subject to spam filters, typos in email addresses, or other technical errors that can occur with web surveys. An alert letter may prompt the potential respondent to contact the research team or go directly to the survey website, and it signals that the survey is run by experienced professionals (Callegaro et al., 2015). The mailed letter helps establish the legitimacy and authority of the research team and ultimately increases the response rate when contacting first-time research participants (Vehovar et al., 2002). Due to COVID-19 stay-at-home orders, this mode of pre-notification was temporarily unavailable, as scientists were not likely to be at their university office addresses. This created an opportunity to test the effects of an electronic alert letter as compared to no pre-notification contact.

Hypothesis 1

Receiving an email alert letter will positively affect response rates to a web survey, as compared to no pre-notification.

We also tested the effects of the timing of email pre-notifications. The literature suggests that the timing of the survey implementation process can affect the response rate, although an exact schedule is rarely given. According to Dillman et al. (2014), researchers should allow adequate time for recipients to read the content, but not so much time that the request is forgotten. A shorter interval may thus work best: respondents have enough time to learn about the survey, but not so much time that they forget about it. We therefore expect that response rates will differ based on the timing of the alert letter:

Hypothesis 2

There will be a significant difference in response rates based on the timing of the pre-notification email, with response rates greater for shorter intervals between pre-notification and survey invitation.

Material and methods

The experiment was conducted as part of a project that regularly implements online surveys of academic scientists. The sampling frame for the three surveys, which covered the topics of (1) the effects of COVID-19 on academic research, (2) US visa and immigration issues affecting the scientific community, and (3) general science policy questions, included all full-time faculty with PhDs in four fields of science—biology and genetics, civil and environmental engineering, biochemistry, and geography—at 81 randomly selected Carnegie-designated Research Extensive and Intensive (R1) universities in the United States (US). For each survey, universities were randomly sampled from within eight stratified geographic regions in the US. For each sampled university, we visited the department websites and collected the name and contact information of tenured and tenure-track faculty (assistant, associate, and full professors), research professors and instructors with PhDs in each field. The final samples included contact information for 1968, 2443, and 2436 scientists, respectively.

The experiment was designed to test the effects of receiving an electronic alert letter and the timing of the alert letter on response rates. There were three conditions:

  • Condition 1: No electronic pre-notification letter

  • Condition 2: Electronic pre-notification letter sent one week before the survey invitation

  • Condition 3: Electronic pre-notification letter sent a few days before the survey invitation

Individuals in the sampling frame for each survey were randomly assigned across the three conditions. Individuals within each field, for each survey, were assigned a random number using MS Excel. The randomly assigned numbers were then sorted from lowest to highest. Within each field of science, the first third was assigned to condition 1, the second third to condition 2, and the final third to condition 3. Table 1 reports the random assignment of the sample by condition. The experiments were pre-registered at AsPredicted (Footnote 1).
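
As a rough illustration, the stratified random assignment described above could be scripted as follows. This is a minimal sketch in Python rather than MS Excel, and the column names ("field", "email") and the DataFrame structure are illustrative assumptions, not the project's actual files.

    import numpy as np
    import pandas as pd

    def assign_conditions(frame: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
        """Split the sampling frame into thirds within each field of science."""
        rng = np.random.default_rng(seed)
        frame = frame.copy()
        frame["rand"] = rng.random(len(frame))        # one random number per person
        frame = frame.sort_values(["field", "rand"])  # sort lowest to highest within field
        # Rank each person within their field, then map the first, second, and final
        # thirds to conditions 1 (no pre-notice), 2 (week notice), and 3 (days notice).
        within_rank = frame.groupby("field").cumcount()
        field_size = frame.groupby("field")["rand"].transform("size")
        frame["condition"] = (within_rank * 3 // field_size).astype(int) + 1
        return frame.drop(columns="rand")

    # Example with a toy sampling frame:
    # assigned = assign_conditions(pd.DataFrame({"email": [...], "field": [...]}))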

Table 1 Random assignment of sample by field of science and condition

We verified that participants were randomly assigned to each condition ("Appendix 1"). An analysis of condition allocation across the covariates (gender and geographic region) shows statistically significant differences for some of the regions. We control for this possible bias by including these covariates in our model specifications.
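
A covariate balance check of the kind reported in Appendix 1 can be sketched as a chi-square test of independence between the assigned condition and each covariate. This is an illustrative sketch only; the column names ("condition", "gender", "region") are assumptions.

    import pandas as pd
    from scipy.stats import chi2_contingency

    def balance_checks(frame: pd.DataFrame, covariates=("gender", "region")) -> pd.DataFrame:
        """Chi-square test of condition allocation against each covariate."""
        rows = []
        for cov in covariates:
            table = pd.crosstab(frame[cov], frame["condition"])
            stat, p_value, dof, _ = chi2_contingency(table)
            rows.append({"covariate": cov, "chi2": stat, "df": dof, "p": p_value})
        return pd.DataFrame(rows)

A small p-value flags imbalance on that covariate; as noted above, such covariates are then retained as controls in the regression models.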

Electronic pre-notification letters were sent using Yet Another Mail Merge (YAMM), a Gmail add-on for mail merges. The text of the alert letter is presented in "Appendix 2". The web survey was administered using Sawtooth® software. For all three surveys, the electronic pre-notification letters were sent five days apart. The survey administration calendar is reported in Table 2.

Table 2 Survey administration calendar
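
For researchers without access to a mail-merge add-on such as YAMM, the pre-notification emails could also be sent with a short script. The sketch below is a hypothetical, generic alternative, not the workflow used in this study; the SMTP host, sender address, contact file, and message text are placeholder assumptions.

    import csv
    import smtplib
    from email.message import EmailMessage

    TEMPLATE = (
        "Dear Dr. {last_name},\n\n"
        "In a few days you will receive an email invitation to participate in a "
        "short web survey conducted by our research team. We would greatly "
        "appreciate your participation.\n"
    )

    def send_prenotifications(contacts_csv: str, sender: str, smtp_host: str) -> None:
        """Send one personalized pre-notification email per contact."""
        with smtplib.SMTP(smtp_host) as smtp, open(contacts_csv, newline="") as fh:
            for row in csv.DictReader(fh):  # expects columns: email, last_name
                msg = EmailMessage()
                msg["From"] = sender
                msg["To"] = row["email"]
                msg["Subject"] = "Upcoming survey invitation"
                msg.set_content(TEMPLATE.format(last_name=row["last_name"]))
                smtp.send_message(msg)

    # Example (placeholder values):
    # send_prenotifications("contacts.csv", "survey-team@example.edu", "localhost")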

All study variables are binary. For our analyses, the dependent variable was the individual response to each survey (1 = yes, 0 = no). Using the AAPOR (2016) response rate formula RR2, response rates were calculated as complete responses as a percentage of all eligible sampled cases (i.e., excluding uncontactable and ineligible cases).
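
For reference, the standard AAPOR RR2 definition can be written as follows, where I denotes complete interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-respondents, and UH/UO cases of unknown eligibility. This is a paraphrase of the published formula; the exact disposition coding applied in this study may differ slightly.

    \mathrm{RR2} = \frac{I + P}{(I + P) + (R + NC + O) + (UH + UO)}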

The three experimental conditions resulted in 67% of each sample receiving an email alert for all three surveys: one-third of each sample received the pre-notification two days before the actual invitation, and one-third received it one week before. The control variables were gender, academic field, and US region. Table 3 reports descriptive information for all study variables.

Table 3 Descriptive Statistics: study variables, by survey

Differences in survey response by experimental condition were examined for each survey using crosstabulations and chi-square tests. Analyses were performed separately to investigate each hypothesis. We then estimated logistic regression models for each survey to examine our hypotheses after adjusting for the potential effects of gender, academic field, and geographic region. The response variable equals 1 if the participant responded to the survey and 0 otherwise.
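
A minimal sketch of these analyses in Python is shown below: a chi-square test of response by pre-notification condition, followed by a covariate-adjusted logistic regression reported as odds ratios. The column names ("responded", "prenotified", "gender", "field", "region") are illustrative assumptions.

    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency
    import statsmodels.formula.api as smf

    def analyze(frame: pd.DataFrame):
        """Chi-square test of response by condition, then a covariate-adjusted logit."""
        # Crosstabulation and chi-square test (Hypothesis 1)
        table = pd.crosstab(frame["prenotified"], frame["responded"])
        chi2, p_value, dof, _ = chi2_contingency(table)
        # Logistic regression; C() dummy-codes the categorical covariates
        model = smf.logit(
            "responded ~ prenotified + C(gender) + C(field) + C(region)",
            data=frame,
        ).fit(disp=False)
        odds_ratios = np.exp(model.params)  # exponentiated coefficients, as in Tables 6 and 7
        return chi2, p_value, model, odds_ratios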

Results

Response rates for the three surveys were as follows: COVID-19 survey, 19%; Visa and Immigration survey, 15%; and Science Policy Questions survey, 17%.

To examine hypothesis 1, we compared survey participation across persons assigned to receive any pre-notification vs. those assigned to the no pre-notification condition. Results of this analysis are presented in Table 4. Just under 20 percent of persons who received an alert email participated in survey one (19.6%) while 16.6 percent of those who did not receive pre-notification participated. For survey two, 15 percent of persons who received an alert email participated compared to 15.7 percent of those who did not receive the pre-notification. Eighteen percent of persons who received a pre-notification for survey three participated, while 14.1 percent of those who did not get an alert email participated in the survey.

Table 4 Tabulation comparing responses for those who received an alert email or not

Although there is an apparent trend, chi-square tests revealed no significant effect of pre-notification on survey participation for surveys one and two. However, survey three had a significant chi-square value, indicating a difference in response rates between those who received a pre-notification and those who did not (chi2 = 2.679, Pr = 0.017). This lends support to hypothesis 1 of an association between pre-notification and increased survey response. We also computed the appropriate effect size, Yule's Q, for each survey and for the pooled results; the effects, where present, are relatively modest.
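
For reference, Yule's Q for a 2 × 2 table with cell counts a, b, c, and d is the odds-ratio-based association measure (where OR = ad/bc is the odds ratio)

    Q = \frac{ad - bc}{ad + bc} = \frac{\mathrm{OR} - 1}{\mathrm{OR} + 1},

which ranges from -1 to 1, with values near 0 indicating a weak association.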

We next investigated whether response rates varied based on the timing of the email alert letter. To test hypothesis 2, we compared the responses of the subsample randomly assigned to receive an electronic pre-notification a week in advance with the responses of the subsample assigned to receive an electronic pre-notification a few days in advance. The results comparing pre-notification timing and survey response are presented in Table 5. We found no significant difference in response rates between those receiving the alert email a week early (response rates of 19.8%, 14.3%, and 18.8% for surveys 1–3, respectively) and those receiving it a few days early (response rates of 19.4%, 15.7%, and 18.8%, respectively).

Table 5 Tabulation comparing responses by prenotification timing

We then estimated logit regression models predicting survey response by experimental condition while controlling for gender, academic field, rank, and region of the country. The results of these models are presented in Tables 6 and 7. The first set of models (Table 6) predicts the likelihood of sampled persons responding to the survey when receiving any pre-notification alert email. The second set of models (Table 7) predicts the likelihood of responding to the survey depending on the timing of the pre-notification.

Table 6 Logit Regression predicting response to hypothesis 1: Alert email (odds ratios)
Table 7 Logit Regression predicting response to hypothesis 2: Alert timing (odds ratios)

We found partial support for hypothesis one, that receiving an electronic pre-notification increases survey participation. For survey one, persons who received an electronic pre-notification had 1.24 times the odds of participating in the survey compared to their counterparts who did not receive a pre-notification (p-value = 0.097). For survey three, those who received an email alert pre-notification had 1.31 times the odds of participating compared to their counterparts who did not receive an email alert (p-value = 0.027). Pooling all three surveys, persons who received an electronic pre-notification had 1.16 times the odds of participating relative to their counterparts (p-value = 0.037). Taken as a whole, the findings indicate that electronic pre-notification produces a modest increase in survey participation.

The second experimental comparison was also in the expected direction, with fewer days of advance notice improving the likelihood of responding by about one-third of a percentage point; however, the findings were not statistically significant. Three covariates were significantly associated with survey participation for some of the surveys: assistant professors, women, and biologists were all more likely to participate in survey one and in the pooled regression. For all three surveys and the pooled model, only a small amount of variance in response was explained (pseudo R2 < 0.03). Thus, while other factors may better explain why people respond to surveys, alert letters do play a small role.

Discussion

The goal of the study was to determine whether receiving an electronic alert pre-notification, and the timing of that notice, affects response rates to web surveys. Receiving an email alert resulted in a statistically significant improvement in response rates: for surveys one and three and the pooled regression, the significant findings were in the expected direction, with an email alert improving the likelihood of responding to the survey. Additionally, other factors, including gender, field of science, and rank, significantly influenced response rates.

There are several limitations of the survey design and models. First, omitted factors may also explain response or nonresponse to the survey, including the level of involvement and engagement with the survey topic and the timing of dissemination (Sue & Ritter, 2012). It is possible that response rates varied due to interest in a specific topic (e.g., COVID-19, immigration and visa policy). Second, COVID-19 has changed many academics' schedules, workloads, and stress levels, which could differentially affect their likelihood of responding, especially by rank and gender. Third, because academic scientists primarily work online and can readily assess the legitimacy of a survey (e.g., the academic institution, the Institutional Review Board protocol), the effect of an alert letter on confirming the legitimacy or reputation of a survey request, and thus on driving response rates, may be lower than with other populations.

A fourth limitation is that reliance on email to communicate makes it more difficult to ensure that every sampled individual received and opened the email pre-notification message and the invitation to participate in the survey. Because the academic scientists in this sample regularly use email for work and have full access to digital services, an email alert may be more successful with this sample than it would be with a general population survey. It remains possible that some in the sample did not receive either the pre-notification or the invitation due to spam filters or other digital barriers (Bandilla et al., 2012; Tourangeau et al., 2013). Our survey-monitoring program indicates that 694 (10%) of the individuals sampled across all three surveys did not open any of the emails about the surveys (pre-notification or otherwise). It is possible that academics were not checking their email during the study period due to travel, fieldwork, or work-from-home pressures. We conducted analyses with and without these 694 non-contacted individuals included in the denominator and found that nonresponse due to lack of email contact did not vary across experimental conditions.

Previous research shows that postal mail pre-notifications positively influence response rates to web surveys (Bandilla et al., 2012; Kaplowitz et al., 2004). Our findings suggest that email alert pre-notifications can also be effective for improving response rates. The relative effectiveness of email pre-notifications compared to pre-notifications sent by other modes, however, remains unresolved. Further investigation is needed to compare the effects of receiving a paper mail pre-notification letter, an email alert letter, and no alert message.

It will also be important to compare the effects of email pre-notification on response rates with larger and more diverse samples. We note that this sample of academic scientists represents frequent email users with regular access to the Internet. The effect of email alert letters could be lower for the general public and for those who are not regular Internet users.

This study was motivated by COVID-19 restrictions and the concern that our inability to send mailed pre-notification letters would negatively affect response rates from our sample of academic scientists, as office mailing addresses were temporarily inaccessible due to work-from-home orders. These and similar circumstances can constrain survey researchers and limit academic progress by restricting contact options and potentially adding to response biases. Ongoing experimentation is necessary as we continue to adapt research methods to new social and cultural realities.