Weekly Comments from Dale Martin

Dale Martin
City Manager
Fernandina Beach
August 23, 2019 – 8:15 a.m.

Following the recent National Citizen Survey, a few people questioned the process of the survey. I contacted Ms. Damema Mann, Director of National Engagement at the National Research Center (NRC), to solicit her response to some of the local concerns. Please note that most of the information she provides is included in the Technical Appendices distributed (but typically not read) as part of the survey results.

Hi Dale,

Thanks for sharing this article and the comments from the City Commissioner.

The Technical Appendices report has detailed methodology (including information on the sampling/household selection process) in Appendix C: Detailed Survey Methods. I encourage you to direct folks there, and of course, I’m happy to answer any follow-up/clarification questions anyone may have. It’s always our goal to use best practices in survey research, be transparent with our methods and get you data that is an accurate reflection of your population as a whole. Here’s some information from that section that directly addresses how households are selected (and, yes, all households within Fernandina Beach’s limits were eligible to be selected):

Selecting Survey Recipients

“Sampling” refers to the method by which households were chosen to receive the survey. All households within the City of Fernandina Beach were eligible to participate in the survey. A list of all households within the zip codes serving Fernandina Beach was purchased from Go-Dog Direct, based on updated listings from the United States Postal Service. Since the zip codes that serve Fernandina Beach households may also serve addresses that lie outside the community, the exact geographic location of each housing unit was compared to community boundaries using the most current municipal boundary file (updated quarterly), and addresses located outside the City of Fernandina Beach boundaries were removed from consideration.

To choose the 2,900 survey recipients, a systematic sampling method was applied to the list of households previously screened for geographic location. Systematic sampling is a procedure whereby every Nth household is selected from the complete list of eligible households, giving each a known probability of selection, until the appropriate number of households has been chosen. Multi-family housing units were selected at a higher rate because residents of this type of housing typically respond to surveys at lower rates than those in single-family housing units. Figure 1 (in the Technical Appendices) displays a map of the households selected to receive the survey. In general, because of the random sampling techniques used, the sampling density will closely mirror the overall housing unit density (which may differ from the population density). While the theory of probability assumes no bias in selection, there may be minor variations in practice (meaning an area containing 15% of the housing units might receive slightly more or less than 15% of the surveys).

An individual within each household was then selected using the birthday method, which asks the “person whose birthday has most recently passed” to complete the questionnaire. The underlying assumption is that day of birth has no relationship to the way people respond to surveys. This instruction was contained in the cover letter accompanying the questionnaire.
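For readers curious about the mechanics, here is a minimal sketch of the systematic selection described above. It is illustrative only: the screened-list size and the random starting point are assumptions, not NRC's actual parameters, and the real procedure also over-samples multi-family units.

```python
import random

def systematic_sample(households, sample_size):
    """Select every Nth entry from the geographically screened list,
    starting at a random offset, so every household has a known
    probability of selection."""
    step = len(households) / sample_size   # the sampling interval "N"
    start = random.uniform(0, step)        # random start avoids periodic bias
    return [households[int(start + i * step)] for i in range(sample_size)]

# Illustrative numbers: an assumed screened list of 12,000 addresses,
# from which 2,900 recipients are drawn, as in the report.
screened = [f"address-{i}" for i in range(12000)]
recipients = systematic_sample(screened, 2900)
print(len(recipients))  # 2900
```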

Survey Administration and Response

Selected households received three mailings, one week apart, beginning on May 3rd, 2019. The first mailing was a prenotification postcard announcing the upcoming survey. The next mailing contained a letter from the Mayor inviting the household to participate, a questionnaire and a postage-paid return envelope. The final mailing contained a reminder letter, another survey and a postage-paid return envelope. The second cover letter asked those who had not completed the survey to do so and those who had already done so to refrain from turning in another survey. The survey was available in English. Completed surveys were collected over the following seven weeks.

About 8% of the 2,900 surveys mailed were returned because the housing unit was vacant or the postal service was unable to deliver the survey as addressed. Of the remaining 2,680 households that received the survey, 1,055 completed it, providing an overall response rate of 39%. The response rate was calculated using the American Association for Public Opinion Research's (AAPOR) response rate #2 for mailed surveys of unnamed persons.
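The arithmetic behind those figures is simple to verify (a sketch; the undeliverable count of 220 is inferred from the totals reported above):

```python
# Reproducing the reported response-rate arithmetic (AAPOR response rate #2:
# completed surveys divided by presumed-eligible addresses).
mailed = 2900
undeliverable = 220                  # "about 8%" vacant/undeliverable (2900 - 2680)
eligible = mailed - undeliverable    # 2,680 households assumed to have received it
completed = 1055

print(f"Response rate: {completed / eligible:.0%}")  # Response rate: 39%
```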

Confidence Intervals

It is customary to describe the precision of estimates made from surveys by a “level of confidence” and accompanying “confidence interval” (or margin of error). A traditional level of confidence, and the one used here, is 95%. The 95% confidence interval can be of any width; it quantifies the sampling error, or imprecision, of the survey results that arises because some residents’ opinions are relied on to estimate all residents’ opinions.

The margin of error for the City of Fernandina Beach survey is no greater than plus or minus three percentage points around any given percent reported for all respondents (1,055 completed surveys).

For subgroups of responses, the margin of error increases because the number of respondents for the subgroup is smaller. 
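Both statements follow from the standard formula for a 95% confidence interval around a proportion, margin = 1.96 × √(p(1 − p)/n), evaluated at the worst case p = 0.5. A quick check, using a hypothetical subgroup of 250 respondents to show how the interval widens:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n responses;
    p = 0.5 gives the worst case (the widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"All respondents (n=1055): +/-{margin_of_error(1055):.1%}")       # +/-3.0%
print(f"Subgroup (n=250, hypothetical): +/-{margin_of_error(250):.1%}")  # +/-6.2%
```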

Survey Validity

The question of survey validity has two parts: 1) how can a community be confident that the results from those who completed the questionnaire are representative of the results that would have been obtained had the survey been administered to the entire population? and 2) how closely do the perspectives recorded on the survey reflect what residents really believe or do?

To answer the first question, the best survey research practices were used for the resources spent to ensure that the results from the survey respondents reflect the opinions of residents in the entire community. These practices include:

  • Using a mail-out/mail-back methodology, which typically gets a higher response rate than phone for the same dollars spent. A higher response rate lessens the worry that those who did not respond are different than those who did respond.
  • Selecting households at random within the community to receive the survey to ensure that the households selected to receive the survey are representative of the larger community.
  • Over-sampling multi-family housing units to improve response from hard-to-reach, lower income or younger apartment dwellers.
  • Selecting the respondent within the household using an unbiased sampling procedure; in this case, the “birthday method.” The cover letter included an instruction requesting that the respondent in the household be the adult (18 years old or older) who most recently had a birthday, irrespective of year of birth.
  • Contacting potential respondents three times to encourage response from people who may have different opinions or habits than those who would respond with only a single prompt.
  • Inviting response in a compelling manner (using appropriate letterhead/logos and a signature of a visible leader) to appeal to recipients’ sense of civic responsibility.
  • Providing a pre-addressed, postage-paid return envelope.
  • Offering the survey in Spanish or another language when requested by a given community.
  • Weighting the results to reflect the demographics of the population (a simplified illustration follows this list).
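The Technical Appendices describe the actual weighting scheme; as a simplified illustration of the idea (single-variable post-stratification with made-up shares; the real NCS weighting balances several demographic variables at once):

```python
# Simplified post-stratification on one variable, for illustration only.
# If renters are 45% of the population but only 30% of respondents, each
# renter response is up-weighted (and each owner response down-weighted)
# so the weighted sample mirrors the population.
population_share = {"rent": 0.45, "own": 0.55}   # assumed census figures
sample_share     = {"rent": 0.30, "own": 0.70}   # assumed respondent mix

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'rent': 1.5, 'own': 0.7857...}: each renter counts 1.5x
```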

The answer to the second question, about how closely the perspectives recorded on the survey reflect what residents really believe or do, is more complex. Resident responses to surveys are influenced by a variety of factors. For questions about service quality, residents’ expectations for service quality play a role, as does the “objective” quality of the service provided, the way the resident perceives the entire community (that is, the context in which the service is provided), the scale on which the resident is asked to record his or her opinion and, of course, the opinion itself that the resident holds about the service.

Similarly, a resident’s report of certain behaviors is colored by what he or she believes is the socially desirable response (e.g., reporting tolerant behaviors toward “oppressed groups,” likelihood of voting for a tax increase for services to poor people, or use of alternative modes of travel to work besides the single-occupancy vehicle), his or her memory of the actual behavior (if it is not a question speculating about future actions, like a vote), and his or her confidence that he or she can be honest without suffering negative consequences (thus the need for anonymity), as well as the actual behavior itself.

How closely survey results come to recording the way a person really feels or behaves often is measured by the coincidence of reported behavior with observed current behavior (e.g., driving habits), reported intentions to behave with observed future behavior (e.g., voting choices) or reported opinions about current community quality with objective characteristics of the community (e.g., feelings of safety correlated with rates of crime). There is a body of scientific literature that has investigated the relationship between reported behaviors and actual behaviors.

Well-conducted surveys, by and large, do capture true respondent behaviors or intentions to act with great accuracy. Predictions of voting outcomes tend to be quite accurate using survey research, as do reported behaviors that are not about highly sensitive issues (e.g., family abuse or other illegal or morally sanctioned activities). For self-reports about highly sensitive issues, statistical adjustments can be made to correct for the respondents’ tendency to report what they think the “correct” response should be.

Research on the correlation between resident opinion about service quality and “objective” ratings of service quality varies, with some studies showing stronger relationships than others. NRC’s own research has demonstrated that residents who report the lowest ratings of street repair live in communities with objectively worse street conditions than those who report high ratings of street repair (based on road quality, delay in street repair and number of road repair employees). Similarly, the lowest-rated fire services appear to be “objectively” worse than the highest-rated fire services (expenditures per capita, response time, “professional” status of firefighters, and breadth of services and training provided). Resident opinion commonly reflects objective performance data but is an important measure on its own. NRC principals have written, “If you collect trash three times a day but residents think that your trash haul is lousy, you still have a problem.”

In conclusion, Ms. Mann wrote:

I hope this helps! I think the article shared has some valid concerns about the pitfalls of poorly conducted surveys or poorly crafted survey questions/responses – we do not feel those concerns are applicable to The NCS and/or to the work we’ve done for Fernandina Beach. Also, to clarify another question in the column – the survey wasn’t conducted online – it was by mail only, per the City’s request.

I’m happy to provide any additional detail that may be helpful, or answer any other questions.

On behalf of the City Commission, I would like to thank all of the residents who took the time to complete the survey.

3 Comments

Dave Lott
4 years ago

Some take the position that the 39% response rate is not truly representative (often citing that they didn’t get the survey), but then how do they justify the 26% voter turnout rate in the November 2017 city election? A 39% response rate for a mailed survey as detailed as this one is a strong response rate. http://www.readexresearch.com/mail-survey-response-rate/
While it is certainly true that the results can be interpreted in different ways and that the wording of a question can create some bias in the answer, the important thing is that there is transparency in the survey form and the manner of collection, and that is certainly the case with this survey.

Frank Quigley
4 years ago
Reply to  Dave Lott

That’s valid. I’d like to see the actual questions (I will search online to see if I can find them). I have had a lot of experience with this in the mass media field. Such surveys can be very illuminating, yet also limiting. It depends. Their results are but one data set added to the decision-making mix. I’m not saying surveys are a dark art (in some cases they may be), just that these are tools that need to be understood and taken in full context.

Alas, yes, the bottom line is that if idea boosters and nay-sayers only spend time on social media and don’t participate in the democratic process – especially voting – that’s on them.

Steve Vogel
4 years ago
Reply to  Frank Quigley

“Alas, yes, the bottom line is that if idea boosters and nay-sayers only spend time on social media and don’t participate in the democratic process – especially voting – that’s on them.”

AMEN!