So Far, So Bad
Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.
It’s been a tough few weeks in cybersecurity news. First, Facebook disclosed a massive breach: attackers had exploited a combination of three bugs in the company’s software, which allowed them to gain complete access to at least 50 million users’ accounts. Fifty million. Worse, because so many people had taken advantage of Facebook’s Single Sign-On feature, which invited them to use their Facebook identity to log into various other services around the web, the hackers now had access to those people’s other accounts, too. Moreover, as Wired magazine’s Issie Lapowsky details in an article titled “The Facebook Hack Exposes an Internet-Wide Failure,” the harm was magnified by the lax security measures implemented by many other sites. Lapowsky reports on a paper published by researchers from the University of Illinois at Chicago:
… perhaps the most staggering finding in the paper is that people don't necessarily need to have logged into third-party sites with Facebook to be exposed. Say, for example, you logged onto a website with the same email address that's associated with your Facebook account. If an attacker tries to log onto that same website using Facebook's Single Sign-On, the researchers found that some sites… will associate the two accounts.
"If you have a Facebook account, even if you’ve never used it to log into any other website... an attacker could still use the Facebook token and get access to a user’s account on third-party websites” [notes one of the researchers].
Pause a moment to take in the implications of that. Say that you have been a fairly cautious user of the internet, somewhat informed about cybersecurity issues, and that you chose not to use Single Sign-On at all. Given the way some services set up their sites, your caution is irrelevant. They will assume that you want convenience—and override your caution.
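The account-linking failure the researchers describe can be sketched in a few lines. This is a hypothetical, simplified illustration—all names and structures here are invented, not taken from any actual site’s code—of the vulnerable pattern: a site that silently merges a Single Sign-On login with an existing password-based account that happens to share the same email address.

```python
# Hypothetical sketch of the vulnerable account-linking pattern described
# above: an SSO login is silently associated with an existing account that
# shares the same email, even though that account's owner never opted in.
# All names are invented for illustration.

accounts = {
    "alice@example.com": {
        "email": "alice@example.com",
        "password_hash": "…",   # Alice only ever used a password
        "linked_sso": None,     # she never enabled Single Sign-On
    }
}

def verify_token_and_get_email(token):
    # Stand-in for a real call that verifies the token with the SSO
    # provider and returns the email address it attests to.
    return token["email"]

def login_with_facebook_token(token):
    """Log a user in via SSO.

    The flaw: if an account with the same email already exists, it is
    linked and accessed automatically. An attacker holding a stolen or
    forged token for that email now controls Alice's account, despite
    her never having used Single Sign-On.
    """
    email = verify_token_and_get_email(token)
    account = accounts.get(email)
    if account is None:
        # New user: create a fresh SSO-backed account (fine).
        account = {"email": email, "password_hash": None,
                   "linked_sso": "facebook"}
        accounts[email] = account
    else:
        # Vulnerable step: silent association overrides the user's choice.
        account["linked_sso"] = "facebook"
    return account
```

A safer design would treat the matching email as a prompt, not a proof: require the account owner to confirm the link (for example, by entering the account password or clicking an emailed confirmation) before merging the two identities.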
Then, yesterday, the news broke about a bug that had allowed developers to access some Google+ users’ data without their consent. In a blog post, Google explained what they had uncovered as part of an internal audit: “Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API. The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public” [emphasis added]. In other words, the users’ friends’ non-public data could have been handed over to developers, even if those friends (i.e. other Google+ users) had in no way consented to that.
The blog post was published after The Wall Street Journal had reported on the bug. The Wall Street Journal also reported that Google had considered making the bug public back in March, after it had patched it, but that internal deliberations had raised concerns about regulatory repercussions.
So far, Google has claimed that the bug was not exploited before it was found and patched. Cybersecurity experts are pointing out that this means we are talking about vulnerability disclosure, which brings with it its own norms and ethical dilemmas (a few years ago, as part of a panel discussion on ethical hacking, we had an interesting discussion about the ethics of vulnerability disclosure). So this scandal is similar to the Cambridge Analytica one in the sense that it involves the ability to hand over, without consent, the data of users’ friends; it’s different, though, because in this case there was no Cambridge Analytica (as far as we know so far, at least) to take advantage of that option.
Again, though, people’s choices about what to keep private, rather than make public, were overridden by the services they used. In Google’s case, the overriding was apparently accidental, rather than company policy. The massive recent Facebook breach was also the result of failure, not of Facebook’s intent. What’s particularly dispiriting, though, is that these are companies that care deeply about cybersecurity, have every interest in protecting their users’ data from unauthorized access, and have massive resources focused on that effort. So the message that rings out to users is “Go ahead—change your privacy settings. Inform yourself, and make other choices, too, in an effort to protect your information. It may or may not matter, though. And, if we fail, we might not tell you about it. Why make you more aware of the risks?”
Last year, I published an article in Slate’s Future Tense; it was titled “It’s Cybersecurity Awareness Month. Do You Feel More Cybersecure and Aware Yet?” After reports of massive breaches at Yahoo, the SEC, Equifax, and the NSA, I wrote, “One hopeful outcome of this slew of failures is that legislators are becoming not just aware but angry, and appear to be ready to propose some legislative measures in response.” I am now more aware of how naïve that was. Federal legislators, at least, are angry—but not about cybersecurity. California, on the other hand, has passed an Internet-of-Things cybersecurity law, but it’s limited in scope. I was ready to argue that limited protection is still better than none, until I read Bruce Schneier’s evaluation of it: “If I have a house with 50 unlocked windows, you just secured the one in the second bedroom.” He prefaced that with “Hooray for doing something, but it’s a small piece of a very large problem.”
Given our society’s dependence on the internet, cybersecurity is both a very large problem and a question of common good. As our Center materials explain it,
Examples of particular common goods or parts of the common good include an accessible and affordable public health care system, an effective system of public safety and security, peace among the nations of the world, a just legal and political system, an unpolluted natural environment, and a flourishing economic system.
To that list, we should add “a more secure internet.”
October is Cybersecurity Awareness Month. The journalists, at least, are doing their part to make us more aware. Hooray for doing something!
Photo by amika_san, used without modification under a Creative Commons license.