A random sample of 100 male university students in the UK had their heights measured. The mean height was 70.4 inches and the standard deviation was 2.4 inches. What can we deduce, from that tiny sample, about the mean height of the million or so male university students in the UK?

Let us call this unknown mean m. One simple answer is that m is about 70.4 inches. But that is inaccurate (m won’t be exactly 70.4) and imprecise (we are not saying anything about how close to 70.4 m is likely to be).

A confidence interval is a way of giving an indication of the accuracy and precision of our estimate.

Because we know something about how variable individual heights are (we have a standard deviation from the sample), we also know something about how variable mean heights will be. In this case, we want the standard deviation for a mean in a sample of size 100; it will be about 0.24 inches (calculated by dividing 2.4 by the square root of the sample size, 100). And we know that, in many situations, random values lie within 2 standard deviations either side of their mean about 95% of the time. So in this case we can say, with 95% confidence, that the sample mean will be within 2 × 0.24 inches of m, the mean for the million students.

In short, m = 70.4 ± 0.48 with 95% confidence. (To avoid appearing to be too precise, we might well round 0.48 to 0.5 and then say 69.9 < m < 70.9 with 95% confidence.)
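The little calculation above can be sketched in a few lines of Python, using only the numbers quoted in the text:

```python
import math

# Sample statistics from the text
n = 100        # sample size
mean = 70.4    # sample mean height (inches)
sd = 2.4       # sample standard deviation (inches)

# Standard deviation of the sample mean: sd / sqrt(n)
se = sd / math.sqrt(n)          # 0.24 inches

# 95% interval via the "2 standard deviations" rule used in the text
margin = 2 * se                 # 0.48 inches
lower, upper = mean - margin, mean + margin

print(f"m = {mean} ± {margin:.2f} with 95% confidence")
print(f"{lower:.2f} < m < {upper:.2f}")
```

Running this prints the interval 69.92 < m < 70.88, which rounds to the 69.9 < m < 70.9 quoted above.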

Of course there are a lot of assumptions and a fair bit of statistical theory in that little calculation. But the essence of it is that we can state an interval for m and say how confident we are that our statement is correct.

Confidence intervals, unfortunately, are often misunderstood, so it is worth making a few further points.

• There is nothing particularly important about 95%. If we gave a wider interval we would be more confident that our statement about m was correct. For example, another calculation shows that m = 70.4 ± 0.62 with 99% confidence. A narrower interval would carry less confidence.
• The width of the confidence interval will depend on the sample size – larger samples give more reliable information. If the figures quoted had been obtained from a sample of size 400 rather than 100, the width of the interval would be halved: m = 70.4 ± 0.24 with 95% confidence.
• It is tempting to ask why we can’t have 100% confidence. Well, we can: m takes any value from minus infinity to plus infinity with 100% confidence. Not very useful!
• And why can’t we have a single value for m? Again, we can: m = 70.4, but with zero confidence! Also not very useful.
• Finally, we have to ask exactly what we mean by ‘confidence’ here. The statement ‘m = 70.4 ± 0.48 with 95% confidence’ gives an estimate for m that has been calculated using a technique which gets it right 95% of the time. That is, we know that in the long run 95% of intervals constructed in this way will contain the true value. In this particular case, however, we cannot know whether m lies in the calculated interval or not. We can be 95% confident that it does, but we have to accept that we might be in one of the 5% of cases in which we have got it wrong. That is in the very nature of statistics and uncertainty.
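The long-run interpretation in the final point can be checked by simulation. The sketch below invents a population with a known mean (the true value is of course unknown in practice), draws many samples of size 100, builds a "2 standard deviations" interval from each, and counts how often the interval actually contains the true mean:

```python
import random

# Illustrative simulation: the population values here are assumed,
# purely so that we know the "true" mean the intervals are chasing.
random.seed(0)
true_mean, true_sd = 70.0, 2.4
n, trials = 100, 2000

hits = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    sample_mean = sum(sample) / n
    se = true_sd / n ** 0.5
    # Does this interval contain the true mean?
    if sample_mean - 2 * se <= true_mean <= sample_mean + 2 * se:
        hits += 1

print(f"{hits / trials:.1%} of intervals contained the true mean")
```

The proportion comes out close to 95%: most of the 2,000 intervals capture the true value, but a few per cent do not, and in any single case we cannot tell which kind we have.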

Footnote

An online opinion poll today shows support for the Conservative party running at 36%. In the small print it says “the margin of error is 3%”. This is the standard way in which polling companies report what are, usually, 95% confidence intervals. A more statistical way of saying the same thing would be:

With 95% confidence, support for the Conservatives is currently in the range 36% ± 3%
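A quick sketch of where a margin of error like that comes from. The poll's sample size is not stated in the text, so n = 1000 is assumed here as a typical figure for such polls; the standard error of a proportion p from a sample of size n is sqrt(p(1 − p)/n):

```python
import math

p = 0.36    # reported support for the Conservatives
n = 1000    # assumed sample size (not given in the text)

# Standard error of a sample proportion: sqrt(p(1-p)/n)
se = math.sqrt(p * (1 - p) / n)

# 95% margin via the "2 standard deviations" rule
margin = 2 * se

print(f"margin of error ≈ {margin:.1%}")   # about 3%
```

With a sample of around 1,000, the margin works out at roughly 3%, matching the small print.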