An article on the use of biometrics to identify terrorists in airports. The key point in the article:
Suppose this "magically-effective" face-recognition software is 99.99 percent accurate. That is, if someone is a terrorist, there is a 99.99 percent chance that the software will indicate "terrorist," and if someone is not a terrorist, there is a 99.99 percent chance that the software will indicate "non-terrorist." Assume that one in one billion flyers, on average, is a terrorist. Is the software any good?
No. At those rates the software will generate roughly 100,000 false alarms for every one real terrorist. And every false alarm still means that all the security people go through all of their security procedures. Because the population of non-terrorists is so much larger than the number of terrorists, the test is useless. This result is counterintuitive and surprising, but it is correct. The false alarms in this kind of system render it mostly useless. It's "The Boy Who Cried Wolf" increased roughly 100,000-fold.
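The arithmetic behind that claim can be sketched in a few lines of Python. The rates and base rate are the article's hypothetical figures; the variable names are mine:

```python
# Base-rate arithmetic for the article's hypothetical screening system.
travelers = 1_000_000_000          # total flyers
terrorists = 1                     # 1 in a billion, per the article
sensitivity = 0.9999               # P(flagged | terrorist)
specificity = 0.9999               # P(not flagged | non-terrorist)

true_positives = terrorists * sensitivity
false_positives = (travelers - terrorists) * (1 - specificity)

# Bayes' theorem: probability that a flagged traveler is actually a terrorist
posterior = true_positives / (true_positives + false_positives)

print(f"false alarms per real terrorist: {false_positives:,.0f}")  # ~100,000
print(f"P(terrorist | flagged): {posterior:.4%}")                  # ~0.001%
```

In other words, even with 99.99% accuracy, a traveler flagged by this system has only about a 1-in-100,000 chance of actually being a terrorist.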
Of course, that's assuming that you can get a system that is 99.99% accurate.
Start with the statement that terrorists are 1 in 1,000,000,000 of the travelers passing through airports.
Let's look at the 1 terrorist out of those billion. At a 99.99% detection rate, the system will almost certainly flag that terrorist.
But then consider the other 999,999,999 travelers who aren't terrorists. If the system is 99.99% accurate, then the flip side is that 0.01% of the time it will incorrectly label one of these travelers as a terrorist when they are not in fact a terrorist.
So 999,999,999 times 0.01% equals roughly 100,000 innocent travelers picked out of the crowd as terrorists. Of course, after 10 or 15 minutes of confusion, each traveler will (probably) be able to prove their innocence. At 15 minutes apiece, that's about 25,000 hours of screening time (3,125 eight-hour shifts) spent to pick out the one terrorist.
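The staffing cost above follows directly from the same numbers; here is the calculation, assuming (as the text does) about 15 minutes to clear each false alarm:

```python
# Time spent resolving false alarms, assuming ~15 minutes each.
false_alarms = 999_999_999 * 0.0001     # non-terrorists wrongly flagged
minutes_per_alarm = 15
total_hours = false_alarms * minutes_per_alarm / 60
shifts = total_hours / 8                # eight-hour shifts

print(f"{total_hours:,.0f} hours, or {shifts:,.0f} eight-hour shifts")
```

If clearing an alarm takes only 10 minutes instead of 15, the total drops by a third, but it remains tens of thousands of staff-hours per terrorist caught.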