Does failure to understand probability also increase belief in God?

A recently published study, Shenhav et al. (2011), “Divine Intuition: Cognitive Style Influences Belief in God,” presents evidence on the relationship between intuitive thinking and belief in God. This post argues that it is not obvious the observational results aren’t really driven by general intelligence. Because general intelligence is measured using two short, noisy subcomponents of a much longer IQ test, it is reasonable to guess that much of the variation in correct answers to the cognitive style questions remains correlated with general intelligence even after conditioning on the available IQ test scores. Using GSS data and methods similar to Shenhav’s Study 1, it also appears that failing to understand basic notions of probability predicts belief in God after holding intelligence measures fixed.

In some waves of the U.S. General Social Survey (GSS), respondents were asked two questions designed to determine whether they understand elementary notions in probability theory (search for “odds1” and “odds2” in the codebook for details). Does “lack of statistical thinking” predict belief in God the way intuitive thinking does?

                                            (1)         (2)         (3)
Dependent variable: R's confidence in the existence of God

knowsprob                                -0.240***   -0.205***   -0.149**
                                          (-6.03)     (-3.78)     (-2.83)
degree==high school                                   -0.054      -0.021
                                                      (-0.62)     (-0.24)
degree==junior college                                -0.119      -0.128
                                                      (-0.96)     (-1.08)
degree==bachelor                                      -0.346**    -0.278*
                                                      (-3.07)     (-2.46)
degree==graduate                                      -0.457**    -0.517***
                                                      (-3.17)     (-3.66)
number words correct in vocabulary test               -0.065***   -0.073***
                                                      (-3.65)     (-4.10)

Observations                               3667        2067        2067

t statistics in parentheses. * p < 0.05, ** p < 0.01, *** p < 0.001

The table shows results from three OLS regressions. The dependent variable is a six-point scale of strength of belief in God (see “God” in the codebook for details), not unlike the continuous (and probably heavily censored, though we can’t tell because no summary statistics are reported) version used in the paper. Model 1 includes the sum of correct answers to the two probability questions (“knowsprob”), analogous to the paper’s sum of three intuition questions, plus year dummies (not reported). Model 2 adds dummies for highest level of education and the number of correct responses on a vocabulary quiz, which play the role of the paper’s controls for education and IQ test scores. Model 3 adds a host of demographics, including a quadratic in year of birth, immigrant status, sex, marital status, children, and region effects. These are much richer controls than were available to Shenhav et al., and the sample size here is about an order of magnitude larger than in the paper.

The results show that getting the probability questions wrong is a statistically significant and substantively large predictor of greater belief in God, even after holding everything in model 3 constant. Each correct probability answer is associated with a 0.24-point decrease on the six-point belief scale unconditionally (p < 0.001), and a 0.15-point decrease conditional on everything (p < 0.01). Further, higher education levels and more correct vocabulary responses continue to predict lower belief in God even after conditioning on each other, knowsprob, and all other characteristics.

It is possible that, holding general intelligence constant, failing to understand probability really is associated with belief in God. But it seems plausible that part or all of the association between wrong answers to the probability questions and higher belief in God can instead be attributed to lower general intelligence: “knowsprob,” “wordsum,” and educational attainment are all noisy measures of the same thing, and when we throw them all into a regression model each independently predicts the outcome.
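To see how several noisy proxies of a single latent trait can each appear to independently predict an outcome, here is a minimal sketch (variable names and noise scales are illustrative, not taken from the GSS): the outcome depends only on the latent trait, yet all three proxies earn similar nonzero coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
g = rng.normal(size=n)           # latent general intelligence
# three equally noisy proxies of the same latent trait (illustrative scales)
knowsprob = g + rng.normal(size=n)
wordsum = g + rng.normal(size=n)
educ = g + rng.normal(size=n)
# the outcome depends only on the latent trait, not on any proxy directly
belief = -g + rng.normal(size=n)

X = np.column_stack([np.ones(n), knowsprob, wordsum, educ])
beta, *_ = np.linalg.lstsq(X, belief, rcond=None)
# each proxy soaks up part of g, so all three coefficients come out negative
print(beta[1:])
```

With these symmetric noise scales each slope converges to about -0.25, even though no proxy has any causal effect, because each carries independent information about the latent trait.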

To get a grip on the magnitude of the bias induced by the measurement error, I tried a little simulation. Draw a variable IQ on U[0,1], representing percentiles of the distribution of IQ. Simulate 28 responses to the matrix reasoning test by drawing a normal with mean zero and standard deviation 1.5, adding it to the IQ percentile, and assigning a correct response when the sum is greater than unity (this produces a test score with a correlation with IQ of about 0.6). Do likewise with the three cognitive style questions, except that, to roughly reproduce the correlations in the paper, the noise is standard normal. Finally, let belief in God be (1-IQ) plus a standard normal, censored at zero and one to reproduce the limited nature of the dependent variable used in the paper (the authors do not comment on the properties of the belief variable and do not present summary statistics, so it is not possible to roughly match the proportion of censored responses). By construction it is only IQ, and not intuitive thinking, which drives belief in God. Simulate datasets of size n=300, roughly the sample size in the paper, and do so 1,000 times.
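The simulation just described can be sketched as follows (my reconstruction, not the original code; the regression pairs the intuitive-answer count with the noisy IQ test score, mirroring the paper's conditional specification, and uses 1.96 as the two-sided 5% critical value):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(n=300):
    iq = rng.uniform(0, 1, n)  # latent IQ percentile
    # 28 matrix-reasoning items: correct when IQ percentile + N(0, 1.5) noise > 1
    iq_test = (iq[:, None] + rng.normal(0, 1.5, (n, 28)) > 1).sum(axis=1)
    # 3 cognitive-style items: "intuitive" answer when IQ + N(0, 1) noise <= 1
    intuitive = (iq[:, None] + rng.normal(0, 1.0, (n, 3)) <= 1).sum(axis=1)
    # belief driven only by latent IQ, censored at zero and one
    belief = np.clip((1 - iq) + rng.normal(0, 1, n), 0, 1)
    return iq, iq_test, intuitive, belief

def t_on_intuitive(iq_test, intuitive, belief):
    """t-ratio on intuitive answers in OLS of belief on intuitive + IQ score."""
    X = np.column_stack([np.ones_like(belief), intuitive, iq_test])
    beta, *_ = np.linalg.lstsq(X, belief, rcond=None)
    resid = belief - X @ beta
    sigma2 = resid @ resid / (len(belief) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
    return beta[1] / se[1]

tstats, r_iq_test, r_int_belief = [], [], []
for _ in range(1000):
    iq, iq_test, intuitive, belief = simulate_once()
    tstats.append(t_on_intuitive(iq_test, intuitive, belief))
    r_iq_test.append(np.corrcoef(iq, iq_test)[0, 1])
    r_int_belief.append(np.corrcoef(intuitive, belief)[0, 1])
tstats = np.asarray(tstats)

print("mean corr(IQ, IQ test):      ", round(np.mean(r_iq_test), 2))
print("mean corr(intuitive, belief):", round(np.mean(r_int_belief), 2))
print("rejection rate at nominal 5%:", round((tstats > 1.96).mean(), 3))
```

On a typical seed the average correlations land near the figures discussed next, and well over 5% of the t-ratios clear the 5% critical value even though intuitive thinking has no effect by construction.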

Across the 1,000 replications the average correlations are pretty similar to those reported in the paper. Intuitive answers have a correlation of 0.09 with belief in God, versus 0.13 in the paper. IQ and the IQ test score have a correlation of 0.62, similar to that reported in the literature. Intuitive answers have a correlation with the IQ test score of -0.22, versus -0.27 in the paper. The coefficient on INTUITIVE is substantially biased up, and this graph shows there is a pretty big size distortion:

The blue density is the standard normal density, from which we would be drawing t-ratios if the parameter estimate were unbiased. The red density shows the realized distribution in the presence of the bias induced by measurement error. The red vertical line marks the upper critical value at 5% size, so all draws to the right of this line are statistically different from zero at 5% size. Realized size at a nominal size of 5% is over 15%. The z-stat in the paper is 2.23 (the square root of the reported F-stat of 5.0); 9.3% of the simulated t-ratios were larger than that. There is a pretty good chance of finding associations as large as those reported in the paper even if the true effect is zero.

None of this means that the paper’s conclusions are mistaken. I have not even commented on Study 3, in which subjects are experimentally primed into various cognitive styles, and pointing out that an estimator is biased up does not, of course, mean its true value is zero. But the bias here is quite bad and the observational evidence should be reconsidered.

Finally, tangentially, I want to highlight a weird semi-related aspect of the results:

[Image: the paper's correlation table for Study 2.]

This is the only table of results for Study 2. The table shows unconditional correlations, except for the two numbers in brackets, which are (I think) correlations between those variables after residualizing with respect to variables 3, 4, 6, and 7. Here is the weird result: apparently the correlations between belief in God and intuitive or correct responses are unaffected by conditioning on IQ and the personality measures (e.g., 0.135 unconditionally, 0.138 conditionally), unlike the results reported in the table above, in which the effect of failing to understand probability is substantially diminished when we condition on education. We can read off the table that intuitive responses are correlated with matrix reasoning (-0.272) and with vocabulary (-0.213), but are basically uncorrelated with the personality measures. So the IQ measures and intuitive responses are correlated, but conditioning on the IQ measures does not affect the correlation between belief and intuitive responses. That pattern can only obtain if the effect of the IQ measures on belief is roughly zero after conditioning on intuitive responses, which would be very surprising, and I would think such a result would be flagged in the paper. Since the paper does not report a full set of regression results, we cannot determine exactly what's going on.
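As a rough check on that claim, treat matrix reasoning as a single control and solve the partial-correlation formula for the belief-IQ correlation implied by the reported numbers (the paper actually residualizes on four variables, so this one-control version is only an approximation):

```python
import math

r_bi = 0.135    # belief-intuitive, unconditional (from the paper's table)
r_bi_c = 0.138  # belief-intuitive after conditioning
r_it = -0.272   # intuitive-matrix reasoning, unconditional

def partial(r_bt):
    # partial correlation of belief and intuitive given one control, where
    # r_bt is the belief-control correlation and r_it the intuitive-control one
    return (r_bi - r_bt * r_it) / math.sqrt((1 - r_bt**2) * (1 - r_it**2))

# partial() is increasing in r_bt over this range, so bisect for the value
# that reproduces the reported conditional correlation
lo, hi = -0.9, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if partial(mid) < r_bi_c:
        lo = mid
    else:
        hi = mid
r_bt_implied = 0.5 * (lo + hi)
print(round(r_bt_implied, 3))  # essentially zero
```

The implied belief-IQ correlation is about -0.008: taking the reported numbers at face value, the IQ measure would have to be essentially unrelated to belief, which is exactly the surprising pattern described above.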


Copyright © 2017 M. Christopher Auld