Should all locks have keys? Phones, Castles, Encryption, and You.

Passing a law that requires companies to build devices with digital keyholes that only good-guys can use is the same as passing a law that says the value of π (pi) must be exactly 3.

Here's an excellent short video about the literal impossibility of such laws, and the enormous risks of going ahead anyway.  Unlike real-world keyholes, where the bad-guy must be physically present at each keyhole they want to break through, in the digital world each bad-guy can simultaneously attack millions of digital keyholes from the other side of the world.  The end of the video says it best: "Anyone who says otherwise [that digital keyholes can be built which allow only angel good-guys while blocking demon bad-guys] is either ignorant of the mathematics, or less of an angel than they appear."

There's no math in the video, just really good explanation.

Misunderstanding the security "problem"

It's surprisingly easy to misunderstand the "problem" you are trying to solve.

Take this example (my emphasis added):

… searching airplane pilots [at checkpoints] regularly elicits howls of laughter among amateur security watchers. What they don't realize is that the issue is not whether we should trust pilots, airplane maintenance technicians or people with clearances. The issue is whether we should trust people who are dressed as pilots, wear airplane-maintenance-tech IDs or claim to have clearances.

"Arms race" between Iraqi insurgents and US military

A fascinating look at the "arms race" underway between the Iraqi insurgents, and the US military.  From a Newsweek article:

Counterinsurgency experts are alarmed by how fast the other side's tactics can evolve. A particularly worrisome case is the ongoing arms race over improvised explosive devices. The first IEDs were triggered by wires and batteries; insurgents waited on the roadside and detonated the primitive devices when Americans drove past. After a while, U.S. troops got good at spotting and killing the triggermen when bombs went off. That led the insurgents to replace their wires with radio signals. The Pentagon, at frantic speed and high cost, equipped its forces with jammers to block those signals, accomplishing the task this spring. The insurgents adapted swiftly by sending a continuous radio signal to the IED; when the signal stops or is jammed, the bomb explodes. The solution? Track the signal and make sure it continues. Problem: the signal is encrypted. Now the Americans are grappling with the task of cracking the encryption on the fly and mimicking it—so far, without success. Still, IED casualties have dropped, since U.S. troops can break the signal and trigger the device before a convoy passes. That's the good news. The bad news is what the new triggering system says about the insurgents' technical abilities.

(I found this via Bruce Schneier's security blog, which I highly recommend.)

Nanny-in-the-Middle Attack

"Man-in-the-Middle" attacks occur in the "real" world, not just in computer security. In this case, it was a Nanny-in-the-Middle…
Security Notes from All Over: Man-in-the-Middle Attack


The phrase "man-in-the-middle attack" is used to describe a computer attack
where the adversary sits in the middle of a communications channel
between two people, fooling them both. It is an important attack, and
causes all sorts of design considerations in communications protocols.

But it's a real-life attack, too. Here's a story of a woman who posts an ad
requesting a nanny. When a potential nanny responds, she asks for
references for a background check. Then she places another ad, using
the reference material as a fake identity. She gets a job with the good
references — they're real, although for another person — and then
robs the family who hires her. And then she repeats the process.

Look what's going on here. She inserts herself in the middle of a
communication between the real nanny and the real employer, pretending
to be one to the other. The nanny sends her references to someone she
assumes to be a potential employer, not realizing that it is a
criminal. The employer receives the references and checks them, not
realizing that they don't actually belong to the person who is sending
them.

It's a nasty piece of crime.

The San Francisco Chronicle carried the full story.
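The trust failure in the story above can be sketched in a few lines of code. This is a toy simulation, not anything from the article: the names, data, and the check itself are all illustrative assumptions. The point it demonstrates is that the employer's verification tests the references, not the identity of whoever delivers them — which is exactly the gap a man-in-the-middle exploits.

```python
# Toy sketch of the nanny-in-the-middle attack (all names illustrative).
# The "protocol" here trusts whoever carries the message, which is the flaw.

def nanny_send_references():
    """The real nanny hands over her genuine references."""
    return {"name": "Real Nanny", "references": ["ref1", "ref2"]}

def employer_check(applicant):
    """The employer verifies the references themselves,
    but never binds them to the person presenting them."""
    return applicant["references"] == ["ref1", "ref2"]

# Mallory sits in the middle: she receives the nanny's packet,
# swaps in her own identity, and forwards it to the employer.
packet = nanny_send_references()
forged = {"name": "Mallory", "references": packet["references"]}

print(employer_check(forged))  # True: the references check out,
                               # even though the carrier is an impostor
```

The fix in real protocols is the same as in real life: bind the credential to the identity (signatures, photos, in-person verification) so it can't be replayed by a third party sitting in the middle.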

Clever credit card scam (on the phone)

From Bruce Schneier's Crypto-Gram newsletter:


This one is clever.

You receive a telephone call from someone purporting to be from your credit
card company. They claim to be from something like the security and
fraud department, and question you about a fake purchase for some
amount close to $500.

When you say that the purchase wasn't
yours, they tell you that they're tracking the fraudsters and that you
will receive a credit. They tell you that the fraudsters are making
fake purchases on cards for amounts just under $500, and that they're
on the case.

They know your account number. They know your name
and address. They continue to spin the story, and eventually get you to
reveal the three extra numbers on the back of your card.

That's all they need. They then start charging your card for amounts just
under $500. When you get your bill, you're unlikely to call the credit
card company because you already know that they're on the case and that
you'll receive a credit.

It's a really clever social engineering
attack. They have to hit a lot of cards fast and then disappear,
because otherwise they can be tracked, but I bet they've made a lot of
money so far.

Using device location for wireless security

What if you used the location of a wireless device as part of authentication/authorization?

Location-based security for wireless apps (Nov 2002)

However, for this to solve any problems, the location technology would have to be network-based, not device-based. You can't use device-based location (like GPS) because you don't trust the device in the first place; you can't be sure that somebody isn't spoofing their location in the device's response. So passive (for the device) network-based solutions (triangulation, time difference of arrival, RF fingerprinting, …) are the only ones you can rely on, because they are much harder to "spoof".
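A minimal sketch of that trust model: the network forms its own independent estimate of the device's position (e.g. from time difference of arrival at multiple towers) and only grants access when it agrees with whatever the device claims. The function names and the 100-metre threshold are illustrative assumptions, not from the linked article.

```python
import math

def distance_m(a, b):
    """Planar distance in metres between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def authorize(device_claim, network_estimate, max_drift_m=100.0):
    """Grant access only when the network's own passive estimate agrees
    with the device's claimed position. A device spoofing its GPS reading
    can lie in device_claim, but it can't move the network's estimate."""
    return distance_m(device_claim, network_estimate) <= max_drift_m

# Honest device: its claim matches the tower-side estimate.
print(authorize((10.0, 20.0), (12.0, 21.0)))      # True

# Spoofing device: claims to be on-site, but triangulation disagrees.
print(authorize((10.0, 20.0), (5000.0, 9000.0)))  # False
```

The design point is that the untrusted party contributes no input the decision depends on: the network's estimate is computed entirely from signals the device can't easily forge.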

Security: how well does it fail?

Excerpts from an article in the Sept 2002 issue of "The Atlantic":

Indeed, he [Bruce Schneier] regards the national push for a high-tech salve for security anxieties as a reprise of his own early and erroneous beliefs about the transforming power of strong crypto. The new technologies have enormous capacities, but their advocates have not realized that the most critical aspect of a security measure is not how well it works but how well it fails.

[Here's an example of measuring how "good" security is by how well it fails]

… at Sea-Tac Airport, someone ran through the metal detector and disappeared onto the little subway that runs among the terminals. Although the authorities quickly identified the miscreant, a concession stand worker, they still had to empty all the terminals and re-screen everyone in the airport, including passengers who had already boarded planes. Masses of unhappy passengers stretched back hundreds of feet from the checkpoints. Planes by the dozen sat waiting at the gates.

In Seattle a single slip-up shut down the entire airport, which delayed flights across the nation. Sea-Tac had no adequate way to contain the damage from a breakdown — such as a button installed near the x-ray machines to stop the subway, so that idiots who bolt from checkpoints cannot disappear into another terminal. The shutdown would inconvenience subway riders, but not as much as being forced to go through security again after a wait of several hours. An even better idea would be to place the x-ray machines at the departure gates, as some are in Europe, in order to scan each group of passengers closely and minimize inconvenience to the whole airport if a risk is detected — or if a machine or a guard fails.