Here is the true statement about confidence intervals (taken from the paper):
"If we were to repeat the experiment over and over, then 95% of the time the confidence intervals contain the true mean."
The true mean is the actual mean of the population we are trying to measure. The sample mean is the average of the values that were actually measured.
I wrote a bit of Matlab code to try to relate the sample mean (which we can measure) to the confidence interval.
My question was the following: if we perform an experiment once and compute the 95% confidence interval of the mean (e.g. [0.1, 0.4]), what is the probability that, if we repeat the experiment, the new sample mean will fall within the previously computed confidence interval? (See the Matlab code below.)
Yesterday, I would have said 95%. But I would have been wrong.
It turns out that there is ONLY an 83% probability that the new sample mean (based on 100 samples) will fall within the previously computed 95% confidence interval, and this number does not depend on the number of samples taken (the result is the same with n = 1000).
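Since the original Matlab code is not reproduced here, the following is a Python sketch of the same kind of simulation (the standard-normal population, the sample size of 100, and the number of trials are my own assumptions, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

n = 100          # observations per experiment (as in the post)
trials = 20000   # number of (experiment, replication) pairs
z = 1.96         # two-sided 95% critical value, normal approximation

hits = 0
for _ in range(trials):
    # First experiment: compute the 95% CI for the mean.
    x1 = rng.standard_normal(n)
    half_width = z * x1.std(ddof=1) / np.sqrt(n)
    lo, hi = x1.mean() - half_width, x1.mean() + half_width
    # Repeat the experiment: does the new sample mean land in that CI?
    x2 = rng.standard_normal(n)
    hits += lo <= x2.mean() <= hi

print(hits / trials)  # close to 0.83, not 0.95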
For the wider 99% confidence interval, this probability rises only to 93%.
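These figures can also be derived analytically; this is a standard calculation (treating the population variance as known and using the large-sample z interval), not something taken from the original post:

```latex
% Two independent sample means, each based on n observations:
%   \bar{x}_1, \bar{x}_2 \sim N(\mu, \sigma^2/n),
% so their difference satisfies
%   \bar{x}_2 - \bar{x}_1 \sim N(0,\, 2\sigma^2/n).
% The CI half-width is z_{\alpha/2}\,\sigma/\sqrt{n}, hence
P\left(\bar{x}_2 \in \mathrm{CI}_1\right)
  = P\left(\left|\bar{x}_2 - \bar{x}_1\right| \le z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right)
  = 2\,\Phi\!\left(\frac{z_{\alpha/2}}{\sqrt{2}}\right) - 1
% For z_{\alpha/2} = 1.96 (95% CI): 2\Phi(1.386) - 1 \approx 0.834.
% For z_{\alpha/2} = 2.576 (99% CI): 2\Phi(1.822) - 1 \approx 0.931.
% The sample size n cancels, which is why the probability
% does not depend on the number of samples taken.
```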
So a confidence interval provides less information about where a replication's sample mean will land than one might expect.
Similar misunderstandings have been documented for p-values (Gigerenzer 2004). I just found a related discussion here:
Gigerenzer, G. Mindless statistics. J. Socio. Econ. 33, 587–606 (2004).