
Lie Detector Tests
May 01, 2006
Rebecca Goldin, Ph.D.
A perfect illustration of sensitivity versus specificity

The Washington Post today reported that the use of lie detectors has little basis in science. We think their article is a great illustration of the difference between sensitivity and specificity.

Sensitivity is the likelihood that someone who is lying will take a lie detector test and fail. Specificity is the likelihood that someone who is not lying will take the test and pass. These sound like they are measuring the same thing – the accuracy of the test – but they are in fact quite different measurements. Unfortunately, the lie detector has a poor track record on both.
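In terms of raw counts, the two rates come straight from the four cells of a test's results. A minimal sketch in Python (the function names are ours, just for illustration):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of actual liars the test correctly flags (true positive rate)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of honest people the test correctly clears (true negative rate)."""
    return true_neg / (true_neg + false_pos)
```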

For most medical tests, the more sensitive a test is, the less specific, and vice versa. Another way to think of it: a more sensitive test casts a wide net in order to correctly flag as many positives as it can, and in doing so typically tags more people who are negative as positive (false positives) – in other words, the test is less specific. A more specific test, which demands stronger evidence before flagging anyone, lets more genuine positives slip through as false negatives – in other words, it is less sensitive.

But theoretical considerations aside, take a look at how the numbers are affected by sensitivity and specificity.

The Washington Post gave an example: suppose 10,000 people take a lie detector test, of which 10 are spies (and the rest honest folk). Research shows that almost 1,600 people would fail the test, and two spies would pass. Suppose that exactly 1,598 out of 9,990 innocent people fail the test. These are false positives – people who tested positive for lying when they were not. The false positive rate is 1598/9990, or about 16 percent; that means the specificity is 84 percent. On the other hand, eight out of ten spies failed the test – that is a sensitivity of 80 percent. With 10,000 people taking the test, of which 10 are spies, the percentage of spies in the failed pool is 8 out of 1,606, or about half a percent. That makes it pretty hard to find the spies!
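The Post's arithmetic can be checked directly. A quick sketch using the numbers above (the variable names are ours):

```python
# The Washington Post's scenario: 10,000 tested, 10 of them spies.
population = 10_000
spies = 10
innocents = population - spies                  # 9,990 honest people

false_positives = 1_598                         # honest people who fail
true_positives = 8                              # spies who fail (2 spies pass)

specificity = 1 - false_positives / innocents   # about 0.84
sensitivity = true_positives / spies            # 0.80

failed = false_positives + true_positives       # 1,606 people fail in total
spy_share = true_positives / failed             # about 0.005, half a percent
```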

On the other hand, if there had been 1,000 spies in this pool of 10,000 people, then 16 percent of the 9,000 innocents – a total of 1,440 people – would be designated as liars when they are not. And 800 spies would be labeled as liars. That would be a total of 2,240 designated liars, of which 800 are actually spies. About 36 percent of the designated liars would actually be spies. That’s a better rate – but still not so great. Even worse, about 200 spies would still be left among the remaining group of people who pass the lie detector test!
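The same arithmetic, run at both prevalences, shows how the share of real spies among the flagged group depends on how many spies there are to begin with. A sketch under the rates above (function and variable names are ours):

```python
def spy_share_among_failed(spies, population=10_000,
                           sensitivity=0.80, false_positive_rate=0.16):
    """Fraction of the people who fail the test who are actually spies."""
    innocents = population - spies
    true_positives = sensitivity * spies                 # spies who fail
    false_positives = false_positive_rate * innocents    # honest people who fail
    return true_positives / (true_positives + false_positives)

# With 10 spies, roughly half a percent of those who fail are spies;
# with 1,000 spies, roughly 36 percent are.
```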

Overall, we give lie detector tests a failing grade, as does the Post. They are neither sensitive nor specific, tagging innocent federal workers as liars while letting many skilled liars into trusted government positions.