Bruce Schneier is an expert in almost everything technical; what's nice is that he writes well, and intelligently. You can sign up for his monthly mailings here and view current and past issues here.
"Underwriters Laboratories (UL) is an independent testing organization that rates electrical equipment, safes, and a whole lot of other things. It all started in 1893, when William Henry Merrill was called in to find out why the Palace of Electricity at the Columbian Exposition in Chicago kept catching on fire (not the best way to tout the wonders of electricity). After making the exhibit safe, he realized he had a business model on his hands. He approached insurance underwriters with the idea of an independent testing lab. They were all sick of paying for electricity fires, and took him up on the deal. Eventually, if your electrical equipment wasn't UL certified, you couldn't get insurance.
Today, UL rates many different things. Safes, for example, are rated based on time and materials. A "TL-15" rating means that the safe is secure against a burglar limited to safecracking tools and 15 minutes' working time. Other ratings certify the safe for longer periods of time, or against burglars with blowtorches and explosives. These ratings are not theoretical; actual hotshot safecrackers, employed by UL, take actual safes and test them. If a company comes out with a new version of a safe, it has to get it retested -- the rating does not carry forward.
Applying this sort of thinking to computer networks -- firewalls, operating systems, Web servers -- is a natural idea. And the newly formed Center for Internet Security plans to implement it. I'll talk about the general idea first, and then the specifics.
I don't believe that this is a good idea, certainly not now and possibly not ever. First, network security is too much of a moving target. Safes are easy. Safecracking tools don't change much. Maybe someone invents a hotter torch. Or someone else invents a more sensitive microphone. But most of the time, techniques of safecracking remain constant. Not so with the Internet. There are always new vulnerabilities, new attacks, new countermeasures. There are a couple of dozen new vulnerabilities each week in major software products; any rating is likely to become obsolete within months, if not weeks.
Second, network security is much too hard to test. Again, safes are easy. Breaking into them requires skill, but is reasonably straightforward. Modern software is obscenely complex: there are an enormous number of features, configurations, implementations. And then there are interactions between different products, different vendors, and different networks. In the past, I've written extensively about complexity and the impossibility of testing security. For now, suffice it to say that testing any reasonably sized software product would cost millions of dollars, and wouldn't guarantee anything at the end. And worse, if you updated the product you'd have to test it all over again. "