Scamming cashiers to prey on shoppers

From this article by the Seattle Times:

An unusually creepy and clever form of identity theft has cropped up in Washington.

Here's how it works: Con artists prowl retail aisles on the lookout for victims. When a customer steps up to make a purchase, the thief pulls out a cell phone and calls the cashier. Posing as store security, the thief indicates there's a past problem with the customer.

Wanting to protect the store, the cashier then requests additional information from the customer, passing along driver's-license and credit-card numbers to "security."

This was an (apparently) isolated incident at the Sears in Shoreline, WA.

Sears spokeswoman Jan Drummond yesterday said the Shoreline incident was the first reported case of its kind among the chain's 870 department stores.

Drummond said the store had started an investigation immediately and was "still working on some aspects."

Drummond also said the store was reinforcing training of all retail employees to maintain confidentiality of information and was instructing them "not to provide information over the phone, no matter who the individual says they are."

The Shoreline cashier acted reasonably under the circumstances, Drummond maintained.

"This is a very clever thing," Drummond said of the scam. "It's very difficult to stay one step ahead of these guys and figure out where they will find the next vulnerability."

Shortening the bungee cord

Nothing against Microsoft, but I loved the metaphor in this quote from the ZDNet article, "Microsoft confronts security fears", regarding changes in Microsoft's security plans:

[Gartner analyst John Pescatore] likened Microsoft's [past security] approach to running a bungee-jumping concession. "You probably ought to make the rubber band a little short," he said. "What Microsoft has always done in the past is give a really big rubber band and say, 'Oops, we heard a splat. Here's how you can shorten the rubber band.'"

You can have security without giving up liberty

From the 30 Sept 2001 issue of Crypto-Gram:


Security and privacy [and liberty] are not two sides of a teeter-totter. This association is simplistic and largely fallacious. It's easy and fast, but less effective, to increase security by taking away liberty. However, the best ways to increase security are not at the expense of privacy and liberty.

It's easy to refute the notion that all security comes at the expense of liberty. Arming pilots, reinforcing cockpit doors, and teaching flight attendants karate are all examples of security measures that have no effect on individual privacy or liberties. So are better authentication of airport maintenance workers, or dead-man switches that force planes to automatically land at the closest airport, or armed air marshals traveling on flights.

Liberty-depriving security measures are most often found when system designers failed to take security into account from the beginning. They're Band-aids, and evidence of bad security planning. When security is designed into a system, it can work without forcing people to give up their freedoms.

Here's an example: securing a room. Option one: convert the room into an impregnable vault. Option two: put locks on the door, bars on the windows, and alarm everything. Option three: don't bother securing the room; instead, post a guard in the room who records the ID of everyone entering and makes sure they should be allowed in.

Option one is the best, but is unrealistic. Impregnable vaults just don't exist, getting close is prohibitively expensive, and turning a room into a vault greatly lessens its usefulness as a room. Option two is the realistic best; combine the strengths of prevention, detection, and response to achieve resilient security. Option three is the worst. It's far more expensive than option two, and the most invasive and easiest to defeat of all three options. It's also a sure sign of bad planning; designers built the room, and only then realized that they needed security. Rather than spend the effort installing door locks and alarms, they took the easy way out and invaded people's privacy.

Airport biometrics to find terrorists

An article on the use of biometrics to identify terrorists in airports. The key point in the article:

Suppose this "magically-effective" face-recognition software is 99.99 percent accurate. That is, if someone is a terrorist, there is a 99.99 percent chance that the software would indicate "terrorist," and if someone is not a terrorist, there is a 99.99 percent chance that the software would indicate "non-terrorist." Assume that one in ten million flyers, on average, is a terrorist. Is the software any good?

No. The software will generate 1,000 false alarms for every one real terrorist. And every false alarm still means that all the security people go through all of their security procedures. Because the population of non-terrorists is so much larger than the number of terrorists, the test is useless. This result is counterintuitive and surprising, but it is correct. The false alarms in this kind of system render it mostly useless. It's "The Boy Who Cried Wolf" increased 1,000-fold.

Of course, that's assuming that you can get a system that is 99.99% accurate.

If you're wondering, here's the math behind Bruce's numbers:

Start with the statement that terrorists are 1 in 10,000,000 of the travelers passing through airports.

Let's look at the one terrorist out of those ten million. At a 99.99% detection rate, the software easily picks up the terrorist.

But then consider the other 9,999,999 travelers who aren't terrorists. If the system is 99.99% accurate, the flip side is that 0.01% of the time it will incorrectly label one of these travelers as a terrorist when they are not, in fact, a terrorist.

So 9,999,999 times 0.01% is roughly 1,000 innocent travelers picked out of the crowd as terrorists. Of course, after 10 or 15 minutes of confusion, each traveler will (probably) be able to prove their innocence. At 15 minutes apiece, that's about 250 hours of screeners' time (more than thirty eight-hour shifts) spent clearing false alarms for every one terrorist picked out.
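If you want to check the figures yourself, here is the same back-of-the-envelope calculation as a short Python snippet. The numbers come straight from the quote above; the script is just my own sanity check, not anything from Bruce's article.

```python
# Base-rate arithmetic for the face-recognition example above.
base_rate = 1 / 10_000_000   # one terrorist per ten million flyers
accuracy = 0.9999            # 99.99% correct for terrorists and non-terrorists alike

flyers = 10_000_000          # one batch of flyers containing one terrorist
terrorists = flyers * base_rate                        # 1
true_alarms = terrorists * accuracy                    # ~1 (the terrorist is flagged)
false_alarms = (flyers - terrorists) * (1 - accuracy)  # ~1,000 innocent flyers flagged

print(f"False alarms per real terrorist: {false_alarms / true_alarms:,.0f}")
print(f"Chance that any given alarm is real: {true_alarms / (true_alarms + false_alarms):.2%}")
# Output: roughly 1,000 false alarms per real terrorist, so any single alarm has
# only about a 0.1% chance of being the real thing.
```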

Privacy vs. terrorism

Like most civilized citizens of the world, I extend my condolences to those more personally affected by the attacks in New York and Washington, DC.

We now begin a tricky dance of deciding what actions we will take to thwart terrorist plans before they come to fruition. Many have thoughtfully pointed out the mistakes we made during WWII when we crossed the line, particularly the internment of innocent US citizens whose grandparents happened to have come from Japan. We don't want to make similar mistakes in our treatment of US citizens who happen to be Muslims or to be of Middle Eastern heritage.

One of the subtler questions is what we might give up to combat terrorism, particularly personal privacy on the Internet. People will naturally look for easy answers to the problems posed by these recent terrorist attacks, and many will blame the Internet. It is a new technology, and people tend to be scared of new technology. But completely surrendering our rights to surveillance technologies is not the solution to terrorism.

I particularly liked this post in an online forum:

Read More …

Web authentication with outbound call

Authentify|Register is a new product that authenticates Internet users with a two-factor technique that includes web interaction plus an automated outbound telephone call to that user. During the call, the user must enter data on the telephone keypad and have his or her voice data recorded. Businesses can use this additional information to better authenticate users and have a better audit trail.
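To make the flow concrete, here is a minimal sketch of the general out-of-band (web plus phone) pattern described above. This is not Authentify's actual API; the helper functions and names below are placeholders I made up to illustrate the idea.

```python
# Sketch of an out-of-band (web + telephone) verification flow.
# NOTE: place_call() and store_audit_record() are placeholder stubs, not a real
# telephony or database API; a production system would use an actual provider.
import secrets

def place_call(phone_number: str, prompt: str) -> None:
    """Placeholder for dialing the user through a telephony provider."""
    print(f"[call] dialing {phone_number}: {prompt}")

def store_audit_record(phone_number: str, voice_sample: bytes) -> None:
    """Placeholder for persisting the voice recording as part of the audit trail."""
    print(f"[audit] kept {len(voice_sample)} bytes of voice data for {phone_number}")

def start_verification(phone_number: str) -> str:
    """Show a one-time code in the user's browser and place an outbound call."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    place_call(phone_number, "Please key in the code displayed in your browser.")
    return code                               # displayed on the web page

def complete_verification(expected_code: str, keyed_digits: str,
                          phone_number: str, voice_sample: bytes) -> bool:
    """Check the digits keyed in during the call and keep the recording."""
    if not secrets.compare_digest(expected_code, keyed_digits):
        return False
    store_audit_record(phone_number, voice_sample)
    return True

# Example: code = start_verification("555-0100"); the user keys the code into the
# phone, and complete_verification(code, keyed_digits, "555-0100", recording) decides.
```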

Biometrics in hospitals

Another naive article (5 Apr 2001) about using biometrics in hospitals to protect sensitive patient data. The product comes from MedcomSoft, a Canadian company.

How is this any better than a password? In fact, it's worse than a password for two reasons:

  1. If your password is "cracked", you can change it. On the other hand, if there's some kind of bug in the fingerprint reader's software that lets someone capture the digitized image of your fingerprint, then you're screwed. Hmm, actually switching to another finger could work ten times. But then what?

  2. What about people with no fingers? What about people who lose fingers?



Sigh… Bruce Schneier has covered all this ground in his Crypto-Gram newsletter and his book, Secrets & Lies.