Often a researcher will conduct two tests - the result from one will be statistically significant, and the other won't. However, they'll often fail to note that the difference between the two results isn't itself statistically significant (I don't even want to think about how many times I've done this). Andrew Gelman at Statistical Modeling, Causal Inference, and Social Science has a very nice and simple (given that we are dealing with statistics, after all) example of this phenomenon (and yes, that's the singular form):
Let me explain. Consider two experiments, one giving an estimated effect of 25 (with a standard error of 10) and the other with an estimate of 10 (with a standard error of 10). The first is highly statistically significant (with a p-value of 1.2%) and the second is clearly not statistically significant (with an estimate that is no bigger than its s.e.). What about the difference? The difference is 15 (with a s.e. of sqrt(10^2+10^2)=14.1), which is clearly not statistically significant! (The z-score is only 1.1.)

Click here for the whole post, and a hat tip to Mahalanobis for the link.
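To make the arithmetic concrete, here is a minimal Python sketch of the numbers in Gelman's example, assuming the two estimates are independent and a normal (z-test) approximation applies; the use of scipy here is my choice, not something from the original post.

```python
from math import sqrt
from scipy.stats import norm

# Two estimated effects with their standard errors (Gelman's example)
est1, se1 = 25, 10
est2, se2 = 10, 10

def two_sided_p(estimate, se):
    """Two-sided p-value for a normal z-test of the estimate against zero."""
    z = estimate / se
    return 2 * norm.sf(abs(z))

print(two_sided_p(est1, se1))  # ~0.012 -> "statistically significant"
print(two_sided_p(est2, se2))  # ~0.317 -> "not significant"

# The difference between the estimates and its standard error
# (independent estimates assumed, so the variances add)
diff = est1 - est2
se_diff = sqrt(se1**2 + se2**2)    # ~14.1
print(diff / se_diff)              # z ~ 1.1 -> not significant
print(two_sided_p(diff, se_diff))  # ~0.29
```

The point falls out of the last two lines: even though one estimate clears the usual 5% threshold and the other doesn't, the difference between them is well within one and a half standard errors of zero.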