Four senators (Rockefeller, Bayh, Nelson, and Snowe) have recently introduced S.773, the Cybersecurity Act of 2009. While there are some good parts to the bill, many of the substantive provisions are poorly thought out at best. The bill attempts to solve non-problems, and it assumes that research results can be commanded into being by an act of Congress. Beyond that, there are parts of the bill whose purpose is mysterious, or whose content bears no relation to its title.
Let’s start with the good stuff. Section 2 summarizes the threat. If anything, it understates it. Section 3 calls for the establishment of an advisory committee to the president on cybersecurity issues. Perhaps that’s Just Another Committee; on the other hand, it reports to the president and “shall advise the President on matters relating to the national cybersecurity program and strategy”. That’s good—but whether or not the president (any president!) actually listens to and understands its recommendations is another matter entirely.
Section 10 (“Promoting Cybersecurity Awareness”) and Section 13 (“Cybersecurity Competition and Challenge”) are innocuous, though I’m not convinced they’ll do much good. (I suspect that folks reading this blog already realize this, but I’ll state it explicitly anyway: the odds of anyone, whether in a “challenge” or not, finding a magic solution to our computer security problems are exactly zero. Most of the problems we have are due to buggy code, and there is no single cause of that and no single cure. In fact, I seriously doubt that there is any true solution; buggy code is the oldest unsolved problem in computer science, and I expect it to remain that way.)
As an academic, I am, of course, in favor of more research dollars (Section 11). Once again, I’ll state the obvious: I would hope to benefit if that provision is enacted.
Section 21, on “International Norms and Cybersecurity Deterrance [sic] Measures”, is problematic. I don’t think that lack of international norms or cooperation—to the extent that this provision might actually accomplish something—is much of a problem. But what does the bill mean by “deterrence”? There is no substance in that section. The proper role of government in dealing with cyber threats from abroad is indeed worthy of discussion; I’ve written about this elsewhere. This bill is silent on it, except for the title of this section.
I’m intrigued by Section 15, on risk management. The proper role of liability and insurance in cybersecurity has long been a topic of discussion; I would very much like to see a full-blown study of the question (probably by the National Academies), but I don’t think that can be done in one year.
I don’t know why the bill allots three years (Section 9) to implement DNSSEC; NIST already has that project well underway for the .gov zone. It would be good if the root zone and .com were signed, but I don’t think that that’s NIST’s responsibility. Calling for a review of IANA’s contracts to run the root zone of the DNS is just plain wrong; while IANA does administer the root zone, it does so under ICANN’s direction. ICANN is an international organization; a legislative attempt to wrest control of the root for the United States would not be well-received.
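For readers who want to check a zone’s signing status for themselves, here is a minimal sketch, assuming the third-party dnspython package is installed (the zone names are just examples). It asks whether a zone publishes DNSKEY records; publishing them is a necessary, though not sufficient, condition for a DNSSEC deployment:

```python
# Minimal sketch: does a zone publish DNSKEY records?
# Assumes the third-party dnspython package; zone names are examples.
import dns.resolver

def zone_has_dnskey(zone: str) -> bool:
    """True if the zone publishes DNSKEY records (necessary, not
    sufficient, for a working DNSSEC deployment)."""
    try:
        answer = dns.resolver.resolve(zone, "DNSKEY")
        return len(answer) > 0
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

for zone in (".", "gov.", "com."):
    print(zone, zone_has_dnskey(zone))
```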
So much for the sections I like. The bad parts of this bill, I fear, outweigh the good parts.
Section 17 has a good title—“Authentication and Civil Liberties Report”—but it worries me. It calls for a study on the feasibility of “an identity management and authentication program ... for government and critical infrastructure information systems and networks.” Such a system is a bad idea.
The idea seems to have come from the “Securing Cyberspace for the 44th Presidency” report, written earlier. True, this bill calls for “appropriate civil liberties and privacy protections”, but a centralized authentication system is likely to lead to serious security risks. As a National Academies study noted, “A centralized password system, a public key system, or a biometric system would be much more likely to pose security and privacy hazards than would decentralized versions of any of these.” (Disclaimer: I was part of the committee that wrote that report. Naturally, I’m not representing the Academy in this posting.) The 44th Presidency report wanted to ensure that certain actions were strongly tied to authorized individuals, but this approach simply won’t accomplish that goal. I say that for many reasons; for now, I’ll mention just one: consider the effect of a tailored virus that infected the computer of someone who is supposed to control critical infrastructure systems. That virus could do anything it wanted, with the proper person’s credentials.
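To make that last point concrete, here is a toy model (all names and interfaces are hypothetical; this sketches the argument, not any real system). However strong the login check, any code running inside the authorized user’s session, a tailored virus included, can invoke the same authenticated interface:

```python
# Toy model (hypothetical names): strong authentication binds a session
# to a person, but any code running in that session acts with the same
# credentials -- including malware.

class ControlSystem:
    def __init__(self):
        self._sessions = set()

    def login(self, user: str, credential: str) -> str:
        # Imagine an arbitrarily strong check here: smart cards,
        # biometrics, one-time tokens.  The result is still a session.
        token = f"session-for-{user}"
        self._sessions.add(token)
        return token

    def open_valve(self, token: str) -> str:
        if token not in self._sessions:
            raise PermissionError("not authenticated")
        return "valve opened"

scada = ControlSystem()
token = scada.login("operator", "correct-credential")  # legitimate login

def virus_payload():
    # The malware never attacks the authentication system; it simply
    # reuses the live, legitimately established session.
    return scada.open_valve(token)

print(virus_payload())  # "valve opened" -- with the proper person's credentials
```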
Sections 6 and 18 are seriously flawed. For one thing, they make the assumption that there is some easily-distinguished set of crucial networks. I doubt that there is. The 1999 National Academies report Trust in Cyberspace (again, I was on the committee; again, I’m speaking only for myself) stated
The study committee believes that implementing a single MEII [“Minimum Essential Information Infrastructure”] for the nation would be misguided and infeasible. An independent study conducted by RAND (Anderson et al., 1998) also arrives at this conclusion. One problem is the incompatibilities that inevitably would be introduced as nonhardened parts of NISs are upgraded to exploit new technologies. NISs [“Networked Information Systems”] constantly evolve to exploit new technology, and an MEII that did not evolve in concert would rapidly become useless.
A second problem with a single national MEII is that “minimum” and “essential” depend on context and application (see Box 5.1), so one size cannot fit all. For example, water and power are essential services. Losing either in a city for a day is troublesome, but losing it for a week is unacceptable, as is having either out for even a day for an entire state. A hospital has different minimum information needs for normal operation (e.g., patient health records, billing and insurance records) than it does during a civil disaster. Finally, the trustworthiness dimensions that should be preserved by an MEII depend on the customer: local law enforcement agents may not require secrecy in communications when handling a civil disaster but would in day-to-day crime fighting.
Looking more narrowly, we come to the same conclusion. Suppose that we only wanted to protect the water, power, and communications systems, and hence their networks, while other networks were under attack. How would spare parts be ordered, if the vendors’ factory networks weren’t functioning? Where would fuel come from, if trucking and shipping company networks were not protected? Could these companies even communicate with their employees, given how many rely on commercial ISPs for telecommuting? For that matter, these companies themselves rely on commercial ISPs to link their various locations. The ability of the President to “declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network” would be of dubious utility. (The political and social wisdom of granting such power is itself an interesting question; for today at least, I’ll concentrate on the technical issues.)
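A toy dependency graph makes the interdependence point concrete. The edges below are illustrative inventions, not a model of any real sector; a simple reachability check shows that the “protected” systems depend transitively on networks nobody would put on a critical-infrastructure list:

```python
# Toy dependency graph (illustrative edges only): what does "water"
# transitively depend on?
from collections import deque

depends_on = {
    "water":          ["power", "parts vendors", "commercial ISPs"],
    "power":          ["parts vendors", "fuel logistics", "commercial ISPs"],
    "parts vendors":  ["commercial ISPs", "banking"],
    "fuel logistics": ["trucking networks", "commercial ISPs"],
}

def transitive_deps(start: str) -> set:
    """Breadth-first search over the dependency graph."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for dep in depends_on.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(transitive_deps("water")))
# Every one of these networks has to stay up for "water" to stay up --
# including several nobody would label critical infrastructure.
```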
Section 6 has other questionable provisions. 6(a)(1) calls for research on cybersecurity metrics. Research is a fine thing, and security metrics are an active research area, but why should asking NIST to focus on the topic produce new answers? I’ve asserted that the most interesting question—how secure a given piece of software is—is not answerable, even in principle. Known weaknesses (see 6(a)(2) and 6(a)(3)) aren’t very interesting; if a site hasn’t fixed them, it’s generally because of overriding concerns, such as budget, backwards compatibility, or the sheer difficulty of updating a large-scale production system without breaking the applications you’re trying to run.
6(a)(4) is just strange. Yes, configuration management is difficult and security-relevant. That doesn’t mean that a standard configuration language would solve such problems. Why, for example, should the proper security settings for a web browser bear any relationship whatsoever to the settings of a laptop’s built-in firewall? Neither bears any particular relationship to permission settings for a database server, let alone permissions within the database itself. It’s not just comparing apples to oranges; it’s comparing apples to magnetic alloys of neodymium or some such. There might be a small benefit to having one parser, but the real problem is the policies being configured, not the language. Perhaps the goal is to make it easy to swap out one box and get a different model that does the same thing, but that simply won’t work; the new box will have different concepts, and hence different secure-configuration requirements.
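Here is a small illustration of why a shared syntax buys so little. The two “configurations” below are hypothetical, written as plain Python dictionaries to stand in for any standardized language; one parser could read both, but their vocabularies are disjoint, so neither policy says anything about the other device:

```python
# Hypothetical settings for two devices, expressed in one "standard"
# notation.  The syntax is shared; the semantics are not.
browser_config = {
    "javascript": "enabled",
    "third_party_cookies": "blocked",
    "plugin_whitelist": ["pdf-viewer"],
}

firewall_config = {
    "default_policy": "deny",
    "allow_inbound": [{"port": 22, "proto": "tcp", "from": "10.0.0.0/8"}],
}

# There is no meaningful way to apply one policy to the other device,
# or to ask whether one is "as secure as" the other.
print(set(browser_config) & set(firewall_config))  # set(): nothing in common
```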
The same can be said for 6(a)(5), on standard software configurations. (That and some other sections apply to “grantees”, among others. Does that mean that NIST will have to set standards for NetBSD, to accommodate people like me? Or does it mean that I can’t run NetBSD, despite the threat posed by software monocultures?)
A vulnerability specification language (6(a)(6)) isn’t a bad idea, though I note that such a thing is inherently OS-dependent. Two things are necessary, though, to make it useful: sufficient knowledge of which components are implicated, and sufficient knowledge of what the vulnerability is, to permit realistic assessment of the actual risk to a given site. I’ll give a concrete example: the system I’m typing this on has a version of Ghostscript with a buffer overflow when processing PDF documents. Yes, that sounds serious—except that I never use Ghostscript to read external PDFs; I use a variety of other programs. To me, then, there is no risk.
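A minimal sketch of that assessment step, with an entirely hypothetical record format, might look like the following; the point is that “installed and vulnerable” and “actually at risk” are different questions:

```python
# Hypothetical vulnerability record, applied to what a site actually
# uses -- the Ghostscript example from the text.
from dataclasses import dataclass

@dataclass
class VulnRecord:
    package: str
    affected_component: str   # e.g., the PDF interpreter
    description: str

@dataclass
class SiteUsage:
    installed: set            # packages present on the system
    components_in_use: dict   # package -> components actually exercised

def risk_applies(vuln: VulnRecord, site: SiteUsage) -> bool:
    if vuln.package not in site.installed:
        return False
    used = site.components_in_use.get(vuln.package, set())
    return vuln.affected_component in used

vuln = VulnRecord("ghostscript", "pdf-interpreter",
                  "buffer overflow when processing PDF documents")
site = SiteUsage({"ghostscript"}, {"ghostscript": {"postscript-rendering"}})

print(risk_applies(vuln, site))  # False: installed, but PDFs are read elsewhere
```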
Section 6(a)(7) sounds great—national compliance standards for all software—but it’s doomed. We’ve been down that road before, from the Orange Book to the Common Criteria. All of these projects tried to establish standards and evaluation criteria for trusted software systems. The problem is that building and testing such systems, and going through external evaluations, are slow and expensive processes. Far fewer systems were evaluated than should have been, because purchasers wanted to buy cheap commercial hardware and software. The result was an endless set of waivers. Is the government willing to pay premium prices for all of its systems? Let me rephrase the question: will each and every government agency be willing to spend its own budget dollars on such systems, and will Congress appropriate enough money? Allow me to express serious doubt. “C2 by ‘92” (an attempt by DoD to enforce minimal levels of security via use of C2-level systems by 1992) never went anywhere; I don’t think this one will succeed, either. There are many further reasons for skepticism—who will pay for private-sector deployments, what security model is appropriate (the Orange Book was geared to the military classification model, which is simply wrong for most civilian use), whether the flaws are in the OS at all, and more—and we can’t just legislate useful, usable standards into being. Legislation may be appropriate when we know the goal (we don’t), or when we have good reason to believe we’ll know it and can reach it within not very many years. Neither is the case here.
I could go on and on. Section 7, for example, calls for licensing of cybersecurity professionals. What is that supposed to do? The big flaws lie not in the ways we configure our firewalls and crypto boxes; rather, they’re in the software we choose to run, and in management that doesn’t listen to (or doesn’t understand) security warnings. Are the authors of the legislation concerned about sabotage by security folks who aren’t trustworthy? I’d start by worrying about supply chain vulnerabilities in hardware and software.
It’s fair to ask what I would recommend instead. Suppose I were drafting a bill or an executive order. Suppose (heaven help us all) President Obama appointed me as his National Cybersecurity Advisor. What would I suggest? A full answer would call for a much longer post than this; indeed, it would probably take at least a full-fledged technical paper and perhaps a book. The short answer is that just as there is no royal road to geometry, there is no presidential or Congressional road to cybersecurity. You have to do it step by step, system by system. Things we can do today—more cryptography, following industry best practices, and so on—are the low-hanging fruit; while we should do more of these, such things are demonstrably insufficient. A more drastic move is to accept that there are some things we just can’t do safely at any reasonable cost: the complexity will get us. We need to be more humble in our designs. At this point, to a first approximation all computer systems are interconnected; we cannot realistically hope to limit the spread of certain attacks. We can make progress if and only if we accept that as the starting point, ask “what then?”, and build our systems accordingly.
The five-cent version of my answer would be: the issue is areas where cooperation is very rudimentary, exists only on paper, or doesn’t exist at all. And such areas do exist.
There are MLATs, and there is an MLAT-plus-harmonized-regulations framework like the Council of Europe Convention on Cybercrime. As for the countries missing from it: you read the list and go, “oh, country X and country Y are not there, and there is a huge volume of threats where we could do with actual cooperation from those countries.”
It goes on and on, mostly in that vein. And it has become depressingly familiar over the past several years: I have seen for myself where it works, and I have seen where it doesn’t work at all.
Thanks for a nice article. However, while concentrating on technical issues makes it easier to highlight some pitfalls, it leaves the discussion resting entirely on common sense. That is to say, it is illusory to blame a software bug without analyzing the software’s specs. By common sense we can easily assume that, say, a browser should just browse the web, but what does such a spec mean?
The very first finding states
In the text, that was for C=America, but it holds for most countries. By common sense, Congress worries about national security, but what are the boundaries of cyberspace? The article says enough about protecting only a part of it.
As an individual, I couldn’t object much if the US (or China, or any national government) wrested control of the Internet (security against cyberspace). However, I’m curious to learn how that could happen the other way around (security inside cyberspace), as it’s bound to be the largest experiment in direct democracy ever.