Developing Norms, Prioritizing Responses
The last few months have brought multiple revelations about widespread cybersecurity breaches impacting both powerful governments and thousands of private companies.
First it was the SolarWinds hack: As Ars Technica reported, the U.S. Cybersecurity and Infrastructure Security Agency “determined that this threat poses a grave risk to the Federal Government and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations.” Multiple other countries were impacted, too.
Then we read about the Microsoft Exchange attacks: by mid-March, CNN reported that security experts “estimated that at least 20,000 US-based Exchange servers remain unpatched and vulnerable to exploitation, and as many as 80,000 around the globe…. The number of attempted attacks against organizations has been doubling every two to three hours, according to Check Point Research, which monitors the internet for malicious activity.”
The White House is now considering how to respond to those attacks: “’When not one but two cyberhacks have gone undetected by the federal government in such a short period of time, it’s hard to say that we don’t have a problem’… Biden administration officials said they would seek a deeper partnership with the private sector,” reports the New York Times. Officials plan to develop “a real-time threat sharing arrangement, whereby private companies would send threat data to a central repository where the government could pair it with intelligence from the National Security Agency, the C.I.A. and other spy shops, to provide a far earlier warning than is possible today.”
An article in Lawfare draws distinctions between the two operations and argues that those distinctions should inform the U.S. response:
[F]rom both a normative and a strategic point of view, not all hacks are the same. Admittedly, delineating international norms in cyberspace is a difficult and ambiguous exercise, but if there is a clear lesson from these two recent attacks, it is that the U.S. government must try to do so.
Then, on March 18th, a report from Google’s Project Zero team detailed how a “team of advanced hackers exploited no fewer than 11 zero-day vulnerabilities in a nine-month campaign that used compromised websites to infect fully patched devices running Windows, iOS, and Android.” Ars Technica coverage noted that the “breadth and abundance of exploits for known vulnerabilities sets the group apart.”
A day later, another article highlighted a new attack against critical vulnerabilities in servers sold by a company called F5 Networks. “When security researchers weren't busy attending to the unfolding Exchange mass compromise, many of them warned that it was only a matter of time before the F5 vulnerabilities also came under attack,” wrote Dan Goodin. “Now, that day has come.”
In a response to the SolarWinds attack, security expert Bruce Schneier explained that “[c]ybersecurity is expensive. Cybersecurity to defend against nation-state operations like SolarStorm is very expensive. But cyber-insecurity can end up costing even more. We as a country need to decide when and how we are willing to pay.” He added, however, that it’s “unlikely that the federal government will enforce strict security standards for technology procurement—which is what we need to prevent a repeat of SolarStorm.”
Maybe those procurement standards are the first step that needs to be implemented immediately—even ahead of (or in conjunction with) threat-sharing arrangements between government agencies and private companies, and ahead of more complex conversations about international norms in cyberspace.
Given all of the things that are now internet-connected, from energy grids to pacemakers and from baby monitors and voice assistants to almost all of our means of communication and most of our means of travel (let alone the “internet of battlefield things”), cybersecurity is now one of the conditions required for the common good.
The cost of cyber-insecurity is already too high.