Probability (Part 2) – Confidence

Continuing on the probability theme, I want to touch on confidence – that is, the confidence that a probability (or statistical conclusion) is correct.  I’m not going to include equations to calculate confidence limits as they can be found in almost any textbook on statistics.  Rather, I want to try to draw out an understanding of what “confidence” means in this context, because it’s an idea that is easy to make harder than it should be.

Let’s consider the coin-tossing experiment I discussed previously – toss a coin 100 times and predict how often you will get exactly 50 heads and 50 tails.  Actually, let’s start with tossing it once: the probability you will get heads is 0.5.  How confident are you that you are right?  If the coin is truly fair (i.e. no bias for either side, and no cheating) then I think you would be 100% confident of a 0.5 probability.  Two tosses and there is a p=0.25 (the more technical way to say a 0.25 probability, or a 25% chance) of two heads: the coin can land HH, HT, TH or TT, and only one of those four equally likely options is what I need.  So, for 100 tosses, the possibilities are… far too many to list and count by hand to see how many come down with exactly 50 heads.
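
A computer can do that counting for us, though.  Here is a short Python sketch (not from the original post; the function name is my own) that counts the sequences with the binomial coefficient and divides by the number of equally likely sequences:

```python
from math import comb

# Probability of exactly k heads in n tosses of a fair coin:
# count the sequences containing k heads (the binomial coefficient)
# and divide by the 2**n equally likely sequences.
def prob_exact_heads(n, k):
    return comb(n, k) / 2**n

print(prob_exact_heads(2, 2))     # 0.25 -- the HH case out of HH, HT, TH, TT
print(prob_exact_heads(100, 50))  # roughly 0.0796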

With the experiment I did previously, we found that only 8% of the 100-toss runs gave exactly 50 heads.  The probability of getting exactly 50 heads therefore looks like it will be around 0.08 (8 in 100), but how confident are we of that?  Having run the experiment 100,000 times, I’m quite confident that probability is accurate.  It would be possible to calculate a number but, for simplicity (and I’m trying to keep this simple), we only need to recognise that by running the trial more times we increase our confidence.
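
To make that concrete, here is a rough simulation sketch (illustrative only, and the function name is mine): the estimated probability settles down towards the true value as the number of trials grows, which is what gaining confidence looks like in practice.

```python
import random

def estimate_prob_50_heads(trials):
    """Estimate P(exactly 50 heads in 100 tosses) by repeating the experiment."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.randint(0, 1) for _ in range(100))
        if heads == 50:
            hits += 1
    return hits / trials

# The estimate wobbles less as the number of trials grows -- that is
# what "more confidence" means here.
for trials in (100, 1_000, 10_000, 100_000):
    print(trials, estimate_prob_50_heads(trials))
```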

A second way to look at confidence is to consider holding up a jar of sweets and asking you to guess how many there are (you could pick any number from your head that seemed right to you).  If I made it easier and asked you to guess a range of answers (i.e. from what you thought the minimum was to what you thought the maximum was), you would be more likely to guess in the right range – you would certainly be more confident of your guess being right.  If I asked you to do this first with, say, 50% confidence and then with 98% confidence (I’ll not ask you to say what 50% or 98% confidence really means – let’s just agree that you should be right a lot more often with 98%), I expect you’ll expand the range for the second guess – after all, a wider range means you’re more likely to be right.

Go to Wikipedia and look up confidence interval (the statistical term) and you can find as many equations or formulae as you like but, at its simplest (and I like to keep things simple), confidence is a measure of how right you think you are.
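
As one example of those formulae, here is a sketch of the simple normal-approximation (Wald) interval for a proportion, applied to the 8% figure from above (the z values are standard-normal quantiles, and the numbers are only for illustration).  Notice how the 98% interval comes out wider than the 50% one, just like the second guess at the sweet jar.

```python
from math import sqrt

# Normal-approximation (Wald) interval for a proportion:
# p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n),
# where z is the standard-normal quantile for the chosen confidence level.
def wald_interval(successes, n, z):
    p_hat = successes / n
    half_width = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

n = 100_000                  # number of 100-toss experiments
successes = int(0.08 * n)    # roughly 8% of them gave exactly 50 heads

print("50% interval:", wald_interval(successes, n, z=0.674))
print("98% interval:", wald_interval(successes, n, z=2.326))  # wider, as expected
```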

One of the most common uses of the expression comes with hypothesis testing, where you are expected to be 95% confident the result didn’t arise by chance (which is where the maths comes in).  However, there’s nothing magical about 95% – it’s just a commonly agreed standard for confidence (1 chance in 20 that you’re wrong).
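
As an illustration (the 60-heads figure below is made up, not from the original experiment), here is a sketch of a simple two-sided test of whether a coin is fair, comparing the resulting p-value against the usual 0.05 cut-off:

```python
from math import comb

def two_sided_p_value(heads, n):
    """Probability, for a fair coin, of a result at least as far from n/2
    as the one observed (a simple two-sided binomial test)."""
    gap = abs(heads - n / 2)
    return sum(comb(n, k) / 2**n
               for k in range(n + 1) if abs(k - n / 2) >= gap)

# Suppose a coin gave 60 heads in 100 tosses.
p = two_sided_p_value(60, 100)
print(p)  # about 0.057 -- just above 0.05, so at the usual 95% level
          # we can't rule out plain chance.
```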

(First posted as a blog on 27th September 2011)
