In this installment of the SSNB/NSRN Methods Blog series, psychologist Will Gervais introduces us to the unmatched count technique for survey research. This technique is designed to allow survey takers to give more honest answers to awkward questions (e.g. Do you believe in God?) and to allow researchers to make more accurate population-level estimates of socially sensitive phenomena (e.g. the prevalence of atheism).
You’re sitting at home one night watching Rick and Morty or House of Cards, or whatever you’re into. The phone rings. Someone wants to ask you some questions for a survey. As a benevolent human being, you agree to participate. The nice person on the other end of the line asks you a series of questions: age, gender, ethnicity, voting preferences. And then…
“Do you believe in God?”
You give an answer and move on. Eventually, the survey ends. You go back to watching interdimensional travel or political machinations or whatever.
The people on the other end of the line compile and aggregate your answers with the answers from many others, probably at least a thousand and balanced across demographic categories to be nationally representative. Then a report comes out claiming that 10-11% of people in the US don’t believe in God (Gallup, 2016). You’re one data point in there, somewhere.
So far, so good. Or is it? I would argue that your answer to that question might not accurately tell us whether you believe in God, and, consequently, those national percentages might not be accurate.
Let’s say you said “No, I don’t believe in God.” Given the significant stigma against religious disbelief in the US, I would be inclined to say you probably don’t believe in God. But if you answered that you do believe in God, there are at least two distinct possibilities:
- You believe in God.
- You actually don’t believe in God but aren’t comfortable telling a stranger that you don’t believe in God.
In other words, nationally representative telephone polls are probably biased when it comes to socially sensitive questions, including belief in God. Answers reflect both actual beliefs and also tendencies to consciously or unconsciously give the “right” (nice, friendly, socially acceptable) answer.
This sounds straightforward, but it leaves scientists like me, and the many others trying to understand how religious beliefs evolved, are culturally transmitted, and affect people’s lives, in a pickle. Nationally representative polls ostensibly give us the best evidence out there about what people do and don’t believe. But we (should) also know that self-reports need to be taken with a grain of salt. Psychologists, sociologists, and others have grappled with this problem for decades (e.g. Roese & Jamieson 1993).
One school of thought says that we should turn away from self-reports and try to develop implicit measures of cognition that can tell us a lot about people’s underlying psychological tendencies, which presumably affect explicit beliefs at some point (see Järnefelt’s previous post).
Another school of thought says that we can still ask people about their beliefs, but we should do so in a way that gives people an “out,” by which they can tell us about their beliefs in an indirect way, with pressures to appear socially desirable somewhat mitigated. There are a number of these methodological tools out there, and my current favorite is the unmatched count technique (Raghavarao & Federer 1979; Coutts & Jann 2011). It’s a way to (hopefully) get less biased population estimates of the prevalence of things that people don’t want to tell strangers over the phone.
The technique goes like this. You randomly split your sample into two groups. Let’s call them the Baseline Group and the Experimental Group. You give the folks in each group a list of statements and ask them to tell you how many of those statements are true of them. Nobody has to tell you which statements are true, just how many in total. Most of the statements are the same across groups, but the Experimental Group gets a bonus statement, which is your key item of interest. Like so:
| Baseline | Experimental |
| --- | --- |
| How many of the following statements are true for you? | How many of the following statements are true for you? |
| 1. I own a unicycle | 1. I own a unicycle |
| 2. I have been to Delaware | 2. I have been to Delaware |
| 3. I brush my teeth regularly | 3. I brush my teeth regularly |
| 4. I like the beach | 4. I like the beach |
| 5. I have a university degree | 5. I have a university degree |
| | 6. BONUS STATEMENT |
| Answer: 1 2 3 4 5 | Answer: 1 2 3 4 5 6 |
Because the first five options are identical, any difference in average scores between the two groups should reflect the proportion of people in the experimental condition for whom the BONUS STATEMENT is true. For example, if the bonus statement was “I have walked on the moon,” we would expect that the averages in the two groups would be identical. After all, nobody in our sample (presumably) has walked on the moon. If the bonus statement was “I was born on Earth,” we would expect the average Experimental score to be 1 point higher than the average Baseline score, as presumably everyone in our sample was born on Earth.
The benefits of the technique shine through when you include a socially sensitive item as the bonus statement. If the bonus statement is “I have smoked crack cocaine” and the average Experimental score is 0.14 higher than the average Baseline score, we can indirectly infer that 14% of people in our sample have smoked crack cocaine: the only difference between the two lists is that bonus statement, so it must account for the extra 0.14. Crucially, not a single participant in this study has to tell us that they have smoked crack, and we can’t “out” any crack users. We just make indirect population-level inferences.
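To see the arithmetic in action, here is a minimal simulation sketch in Python. The base rates for the five innocuous items and the 14% prevalence of the sensitive bonus item are made-up numbers for illustration, not figures from any real survey:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up base rates for the five innocuous statements
# (unicycle, Delaware, teeth, beach, degree).
item_probs = np.array([0.02, 0.30, 0.90, 0.70, 0.35])
true_prevalence = 0.14  # assumed prevalence of the sensitive bonus item
n_per_group = 5000

# Each respondent reports only a COUNT of true statements, never which ones.
baseline = rng.binomial(1, item_probs, size=(n_per_group, 5)).sum(axis=1)
experimental = (
    rng.binomial(1, item_probs, size=(n_per_group, 5)).sum(axis=1)
    + rng.binomial(1, true_prevalence, size=n_per_group)  # the bonus item
)

# The difference in group means estimates the bonus item's prevalence.
estimate = experimental.mean() - baseline.mean()
print(f"Estimated prevalence: {estimate:.3f}")  # close to 0.14, plus noise
```

One practical caveat worth noting: because the estimate is a difference between two noisy group means, it is less precise than a direct question asked of the same number of people, so the technique tends to need larger samples.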
In terms of belief in God, picture the following example:
| Baseline | Experimental |
| --- | --- |
| How many of the following statements are true for you? | How many of the following statements are true for you? |
| 1. I own a unicycle | 1. I own a unicycle |
| 2. I have been to Delaware | 2. I have been to Delaware |
| 3. I brush my teeth regularly | 3. I brush my teeth regularly |
| 4. I like the beach | 4. I like the beach |
| 5. I have a university degree | 5. I have a university degree |
| | 6. I do not believe in God |
| Answer: 1 2 3 4 5 | Answer: 1 2 3 4 5 6 |
If we observe a difference between the two groups’ average scores, it indirectly tells us what proportion of our sample doesn’t believe in God. And, unlike with the telephone poll, not a single person has to out themselves as an atheist to a stranger over the phone.
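For readers who want to see how such an inference might be quantified, here is a hypothetical helper that computes the prevalence estimate along with a normal-approximation 95% confidence interval. The function name and the tiny example counts are my own illustrative inventions; the interval is just the standard difference-in-means calculation, not a procedure prescribed by the technique itself:

```python
import numpy as np

def uct_estimate(baseline_counts, experimental_counts):
    """Estimate the bonus item's prevalence from two groups' item counts,
    with a normal-approximation 95% confidence interval."""
    b = np.asarray(baseline_counts, dtype=float)
    e = np.asarray(experimental_counts, dtype=float)
    diff = e.mean() - b.mean()
    # Standard error of a difference between two independent means.
    se = np.sqrt(e.var(ddof=1) / len(e) + b.var(ddof=1) / len(b))
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Usage with made-up counts (real studies would have hundreds per group):
est, (low, high) = uct_estimate([2, 3, 1, 2, 3, 2], [3, 3, 2, 4, 2, 3])
print(f"Estimated proportion of nonbelievers: {est:.2f}, "
      f"95% CI ({low:.2f}, {high:.2f})")
```

With so few respondents per group the interval is very wide, which illustrates the caveat above: the technique's anonymity comes at the price of statistical precision.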
So, what does this method tell us about belief in God and how can it help our scholarship? Maxine Najle and I have data from a nationally representative sample of people in the US. The paper is still in the sausage-making factory that is academic publishing, and we are collecting additional data to double check our results, so we unfortunately can’t release the full results just yet. But don’t be all that surprised to see a new paper claiming that Gallup telephone polls might be underestimating the number of atheists in the USA by tens of millions.
While obtaining more accurate population-level estimates of the prevalence of atheism is beneficial to a range of scholarly endeavors, it is of paramount importance for the testing and development of existing and emerging theories of religion (Boyer 2001; Norenzayan 2013; Norris & Inglehart 2004), as they make different predictions about how prevalent atheism should be and in which environments it should flourish. The unmatched count technique can be a very useful addition to our methodological toolkit for addressing such questions.
References
Boyer, P. (2001). Religion Explained: The Evolutionary Origins of Religious Thought. New York: Basic Books.
Coutts, E., & Jann, B. (2011). Sensitive questions in online surveys: Experimental results for the randomized response technique (RRT) and the unmatched count technique (UCT). Sociological Methods & Research, 40(1), 169-193.
Gallup. (2016). Religion. Retrieved from http://www.gallup.com/poll/1690/Religion.aspx
Norenzayan, A. (2013) Big Gods: How Religion Transformed Cooperation and Conflict. Princeton: Princeton University Press.
Norris, P. & Inglehart, R. (2004). Sacred and Secular: Religion and Politics Worldwide. Cambridge: Cambridge University Press.
Raghavarao, D., & Federer, W. T. (1979). Block total response as an alternative to the randomized response method in surveys. Journal of the Royal Statistical Society: Series B (Methodological), 41(1), 40-45.
Roese, N. J., & Jamieson, D. W. (1993). Twenty years of bogus pipeline research: A critical review and meta-analysis. Psychological Bulletin, 114(2), 363-375.
Will Gervais (Assistant Professor, University of Kentucky) is an evolutionary and cultural psychologist who is interested in why people believe what they believe about the world. His research focuses on the cognitive, evolutionary, and cultural forces that facilitate supernatural beliefs—and how these beliefs, in turn, affect cognition, evolution, and culture. Specifically, a lot of Will’s research focuses on atheists: who are they, why are they atheists, how many of them are there, and how do people view them? A comprehensive understanding of human nature needs to account for religion, and a mature science of religion needs to account for religious disbelief.