The End of Survey Science?


The following was written by David Banks, Professor of Statistics at Duke University and member of the NISS Board of Trustees.

Survey science is a lot like global warming. We all realize that we are watching a slow-motion disaster, but very little is being done to manage the consequences.

The culprit, of course, is declining response rates. Two recent articles in the New York Times describe facets of the problem. The most recent, by Cliff Zukin, a professor of public policy and political science at Rutgers and a past president of the American Association for Public Opinion Research, focuses on political polling (http://www.nytimes.com/2015/06/21/opinion/sunday/whats-the-matter-with-polling.html).

The other article, by David Leonhardt, a Pulitzer Prize winning journalist, describes the difficulty in estimating the unemployment rate (http://www.nytimes.com/2014/08/26/upshot/a-new-reason-to-question-the-official-unemployment-rate.html).

But the underlying issues are identical, they are growing, and they affect all survey work.

The path to the present took many steps and there is shared responsibility. In the 1970s, the Vietnam War cost the US government the trust of many young people, and over the next four decades, that disaffection spread to other demographic sectors (minorities, gun owners, Democrats/Republicans, and if the current primary season is indicative, now almost everyone alive). Also, the rise of marketing expanded the number of surveys that were fielded, and commercial phone banks intruded into the home, so that many people now refuse to answer questions.

No doubt there are other factors---perhaps more distractions and less time, or growing incivility, or frustration with perceived powerlessness. And for me, poorly worded survey questions are annoying, and diminish my inclination to participate.

There are data quality problems too. The Office of Management and Budget is keen to minimize survey burden, so questionnaires tend to be short and possibly perfunctory. Also, there are political groups that seek to limit the amount and kind of information that can be gathered; e.g., in 2011 the Canadian census long form was made voluntary (but the new Liberal government has reversed that for 2016).

NISS has been a major player in the survey science arena. Its partnerships with NASS and the Census Bureau are the most prominent examples, but there are others. The work that NISS has done concerns government surveys rather than commercial ones, and it is about establishment or farm surveys rather than surveys of private citizens. Nonetheless, the problems of declining response rates persist. And the quality of establishment survey data, based on forms completed by busy people with few incentives for accuracy, is surely uneven.

Given these headwinds, I no longer believe that conventional polling is useful, especially if it pertains to any important or controversial topic. We need to find new ways to get the information that is needed for evidence-based governance.

As an alternative, perhaps survey scientists need to adopt something like the Nielsen family system. Instead of a probability sample with large rates of non-response, one could recruit respondents who are demographically matched to the population. These people would be paid a significant sum to complete a detailed questionnaire on a monthly basis, probably with a rotating panel design to assist in longitudinal studies.

One advantage is efficiency. A typical survey instrument must ask a lot of basic questions (e.g., age, gender, marital status, education) before getting to the meaningful questions (Are you voting for Bernie Sanders? Do you use heroin?). But with a stable of respondents, the basic demographics get collected once, rather than on every questionnaire, so there is room for more questions.

A second advantage is trust. With long-term paid respondents, I think it is more likely that they will answer honestly and carefully, and be open to sensitive questions. Also, one can probably ask more complex questions. For example, randomized response is rarely used in the field, since most people don't understand the privacy protection obtained from the instruction: "Toss a coin secretly. If it is Heads, say yes, and otherwise answer honestly." But with a stable of respondents, they might come to understand the logic and trust the protection.
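To see why the coin protocol protects privacy while still yielding a usable estimate: if a proportion p of the population truly holds the sensitive attribute, then the probability of a "yes" is 0.5 + 0.5p, so the analyst can recover p as 2 × (observed yes-rate) − 1, even though no individual "yes" reveals anything. A minimal simulation (with a made-up 10% true prevalence, chosen only for illustration) sketches the idea:

```python
import random

random.seed(42)

def randomized_response(truth: bool) -> bool:
    """One respondent follows the coin protocol:
    heads -> say 'yes' regardless; tails -> answer honestly."""
    if random.random() < 0.5:  # secret coin toss came up heads
        return True
    return truth

# Hypothetical population in which 10% truly hold the sensitive attribute.
true_rate = 0.10
n = 100_000
answers = [randomized_response(random.random() < true_rate) for _ in range(n)]

# Under the protocol, P(yes) = 0.5 + 0.5 * true_rate,
# so an unbiased estimate of the prevalence is 2 * P(yes) - 1.
p_yes = sum(answers) / n
estimate = 2 * p_yes - 1
print(f"observed yes-rate {p_yes:.3f}, estimated prevalence {estimate:.3f}")
```

The estimate comes at a price in variance (half the answers carry no information), which is one more reason a stable, trusting panel that tolerates such protocols would be valuable.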

Finally, in this kind of data collection framework, it is easy to ask different questions each month, and add new questions on short notice. This allows a business model in which the survey administrator can sell questions to academics. A psychologist could ask about pet ownership; a criminologist could ask about recent burglaries. And since the answers can quickly be cross-tabulated against previous responses, the academics might discover that people tend to acquire dogs for protection after a robbery.
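The cross-tabulation step is straightforward once answers to different months' questions are linked by respondent. A toy sketch, using entirely made-up responses to the hypothetical burglary and pet-ownership questions above:

```python
from collections import Counter

# Hypothetical linked panel responses: each tuple is
# (reported a burglary last year?, acquired a dog since?).
responses = [
    (True, True), (True, True), (True, False), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False), (False, False),
]

# Cross-tabulate the criminologist's question against the psychologist's.
counts = Counter(responses)
dog_rate = {}
for burgled in (True, False):
    yes = counts[(burgled, True)]
    no = counts[(burgled, False)]
    dog_rate[burgled] = yes / (yes + no)
    print(f"burglary={burgled}: {dog_rate[burgled]:.0%} acquired a dog")
```

In this fabricated example the dog-acquisition rate is higher among burglary victims; the point is only that linked panel data makes such comparisons a one-line operation rather than a new survey.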

This suggestion is, of course, speculative. But I don't think it is conjectural to point out that the classic survey methods we were taught, and still preach, are no longer capable of providing actionable information on many important questions. And I hope smart people will seriously consider alternative data gathering mechanisms that might once have seemed heretical.

Monday, February 1, 2016 by Jamie Nunnelly