Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag "hacking."
  •  The Ethics of Encryption, After the Paris Attacks

    Friday, Nov. 20, 2015

    The smoldering debate about the ethics of encryption has burst into flame anew following last week's Paris attacks. Early reports about the attacks, at least in the U.S., included claims that the attackers had used encrypted apps to communicate. On Monday, the director of the CIA said that “this is a time for particularly Europe, as well as here in the United States, for us to take a look and see whether or not there have been some inadvertent or intentional gaps that have been created in the ability of intelligence and security services to protect the people….” Also on Monday, Computerworld reports, Senator Feinstein told a reporter that she had “met with chief counsels of most of the biggest software companies to find legal ways that would allow intelligence authorities to break encryption when monitoring terrorism. ‘I have asked for help,’ Feinstein said. ‘I haven't gotten any help.’”

    At the same time, cybersecurity experts are arguing, anew, that there is no way to allow selective access to encrypted materials without also providing a way for bad actors to access such materials—thus endangering the privacy and security of all those who use online tools for communication. In addition, a number of journalists are debunking the initial claims that encryption played a part in the Paris terror attacks (see Motherboard’s “How the Baseless ‘Terrorists Communicating Over PlayStation 4’ Rumor Got Started”), and questioning the assertion that weakening US-generated encryption tools is necessary in order for law enforcement to thwart terrorism (see Wired’s “After Paris Attacks, What the CIA Director Gets Wrong About Encryption”). But the initial claims, widely reported, are already cited in calls for new regulations (in the Washington Post, Brian Fung argues that “[i]f government surveillance expands after Paris, the media will be partly to blame”).

    As more details from the investigation into the Paris attacks and their aftermath come to light, it now appears that the attackers in fact didn’t encrypt at least some of their communications. However, even the strongest supporters of encryption concede that terrorists have used it and will probably use it again in their efforts to camouflage their communications. The question is how to respond to that.

    The ethics of generating and deploying encryption tools doesn’t lend itself to an easy answer. Perhaps the best evidence for that is the fact that the U.S. government helps fund the creation and widespread dissemination of such tools. As Computerworld’s Matt Hamblen reports,

    The U.S.-financed Open Technology Fund (OTF) was created in 2012 and supports privately built encryption and other apps to "develop open and accessible technologies to circumvent censorship and surveillance, and thus promote human rights and open societies," according to the OTF's website.

    In one example, the OTF provided $1.3 million to encryption app maker Open Whisper Systems in 2013 and 2014. The San Francisco-based company produced Signal, Redphone and TextSecure smartphone apps to provide various encryption capabilities.
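
    (An illustrative aside: the end-to-end encryption such apps provide can be sketched in a few lines of Python, here using the open-source PyNaCl library—my choice for illustration only. This is not Signal's actual protocol (the real thing adds forward secrecy through its "double ratchet," among much else), but it shows the basic property at stake: the service relaying the messages never holds a key that can read them.)

        # A minimal sketch of end-to-end encryption, using the PyNaCl library.
        # Illustrative only: this is NOT the Signal Protocol.
        from nacl.public import PrivateKey, Box

        alice_private = PrivateKey.generate()
        bob_private = PrivateKey.generate()

        # Each party needs only the *public* half of the other's key pair.
        alice_to_bob = Box(alice_private, bob_private.public_key)
        ciphertext = alice_to_bob.encrypt(b"the relay never sees this in the clear")

        # Bob decrypts with his private key plus Alice's public key; the
        # service carrying the ciphertext cannot read it, and neither can
        # anyone else who intercepts it in transit.
        bob_from_alice = Box(bob_private, alice_private.public_key)
        assert bob_from_alice.decrypt(ciphertext) == b"the relay never sees this in the clear"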

    The same tools that are intended to “promote human rights and open societies” can be used by terrorists, too. So far, all the cybersecurity experts seem to agree that there is no way to provide encryption backdoors that could be used only by the “good guys”: see, for example, the recently released “Keys Under Doormats” paper, whose authors argue that

    The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
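
    To make the experts' objection concrete, here is a deliberately crude sketch (in Python, using the widely available "cryptography" library) of the simplest imaginable key-escrow scheme. No real exceptional-access proposal is this simple, but the sketch shows the structural problem the paper describes: every message's session key is also wrapped under one master key, so whoever obtains that single key, whether by court order, insider leak, or foreign hack, can read everything.

        # A deliberately crude key-escrow sketch; no real "exceptional access"
        # design works this way, but the single point of failure is the same.
        from cryptography.fernet import Fernet

        escrow_key = Fernet.generate_key()  # the one "exceptional access" master key

        def encrypt_with_escrow(message: bytes):
            """Encrypt for the recipient, plus an escrow copy of the session key."""
            session_key = Fernet.generate_key()
            ciphertext = Fernet(session_key).encrypt(message)
            # The session key is itself wrapped under the escrow key so that
            # "authorized" parties can recover it later.
            wrapped_key = Fernet(escrow_key).encrypt(session_key)
            return ciphertext, wrapped_key

        ct, wrapped = encrypt_with_escrow(b"meet at noon")

        # Anyone who obtains the single escrow key recovers every session key
        # ever wrapped with it, and with those, every message.
        stolen_session_key = Fernet(escrow_key).decrypt(wrapped)
        print(Fernet(stolen_session_key).decrypt(ct))  # b'meet at noon'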

    At a minimum, these difficult problems have to be addressed carefully, with full input from the people who best understand the technical challenges. Vilifying the developers of encryption tools, and failing to recognize that they are in fact helping us uphold our values, is unwarranted.


    Photo by woodleywonderworks, used without modification under a Creative Commons license.

  •  Et tu, Barbie?

    Wednesday, Oct. 14, 2015

    In a smart city, in a smart house, a little girl got a new Barbie. Her parents, who had enough money to afford a rather pricey doll, explained to the girl that the new Barbie could talk—could actually have a conversation with the girl. Sometime later, alone in her room with her toys, the little girl, as instructed, pushed on the doll’s belt buckle and started talking. After a few minutes, she wondered what Barbie would answer if she said something mean—so she tried that.

    Later, the girl’s mother accessed the app that came with the new doll and listened to her daughter’s conversation. The mom then went to the girl’s room and asked her why she had been mean to Barbie. The little girl learned something—about talking, about playing, about technology, about her parents.

    Or maybe I should have written all of the above in the future tense—because “Hello Barbie,” according to media reports, does not hit the stores until next month.

    After reading several articles about “Hello Barbie,” I decided to ask a few folks here at the university for their reactions to this new high-tech toy. (I read, think, and write all the time about privacy, so I wanted some feedback from folks who mostly think about other stuff.)  Mind you, the article I’d sent them as an introduction was titled “Will Barbie Be Hackers’ New Plaything?”—so I realize it wasn’t exactly a neutral way to start the conversation. With that caveat, though, here is a sample of the various concerns that my colleagues expressed.

    The first reaction came via email: “There is a sci-fi thriller in there somewhere…” (Thriller, yes, I thought to myself, though not sci-fi anymore.)

    The other concerns came in person.  From a parent of grown kids: the observation that these days parents seem to want to know absolutely everything about their children, and that that couldn’t be healthy for either the parents or the kids. From the dad of a 3-year-old girl: “My daughter already loves Siri; if I gave her this she would stop talking to anybody else!” From a woman thinking back: “I used to have to talk for my doll, too…” The concerns echoed those raised in much of the media coverage of Hello Barbie—that she will stifle the imagination that kids deploy when they have to provide both sides of a conversation with their toys, or that she will violate whatever privacy children still have.

    But I was particularly struck by a paragraph in a Mashable article that described in more detail how the new doll/app combo will work:

    "When a parent goes through the process of setting up Hello Barbie via the app, it's possible to control the settings and manually approve or delete potential conversation topics. For example, if a child doesn’t celebrate certain holidays like Christmas, a parent can chose to remove certain lines from Barbie's repertoire."

    Is the question underlying all of this, really, one of control? Who will ultimately control Hello Barbie? Will it be Mattel? Will it be ToyTalk, the San Francisco company providing the “consumer-grade artificial intelligence” that enables Hello Barbie’s conversations? The parents who buy the doll? The hackers who might break in? The courts that might subpoena the recordings of the children’s chats with the doll?

    And when do children get to exercise control? When and how do they get to develop autonomy if even well-intentioned people (hey, corporations are people, too, now) listen in to—and control—even the conversations that the kids are having when they play, thinking they’re alone? (“…ToyTalk says that parents will have ‘full control over all account information and content,’ including sharing recordings on Facebook, YouTube, and Twitter,” notes an ABC News article; “data is sent to and from ToyTalk’s servers, where conversations are stored for two years from the time a child last interacted with the doll or a parent accessed a ToyTalk account,” points out the San Francisco Chronicle.)
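
    (Read literally, that retention rule means the two-year clock restarts with the most recent of the two events. A small sketch, assuming that reading; the dates and names are invented for illustration:)

        # Hypothetical reading of the retention rule the Chronicle describes:
        # the two-year clock runs from the *later* of the two events.
        from datetime import datetime, timedelta

        RETENTION = timedelta(days=365 * 2)

        def deletion_date(last_child_interaction, last_parent_access):
            """Conversations expire two years after the later of the two events."""
            return max(last_child_interaction, last_parent_access) + RETENTION

        print(deletion_date(datetime(2015, 10, 14), datetime(2016, 3, 1)))
        # 2018-03-01 00:00:00 -- each parental check-in extends the retention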

    What do kids learn when they realize that those conversations they thought were private were actually being recorded, played back, and shared with a business’s partners or their parents’ friends? All I can hope is that the little girls who receive Hello Barbie will, as a result, grow up to be privacy activists—or, better yet, tech developers and designers who understand, deeply, the importance of privacy by design.

    Photo by Mike Licht, used without modification under a Creative Commons license.


  •  Trust, Self-Criticism, and Open Debate

    Tuesday, Mar. 17, 2015

    Photo: President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)

    Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity.  Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats.  In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”

    Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”

    Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”

    Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”

    Many commentators have observed that if the goal is increased collaboration between the public and private sectors on issues related to cybersecurity, the biggest obstacle is a lack of trust. Things are not likely to get better as long as that anger and that lack of trust are left unaddressed.  If President Obama is right in noting that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.


  •  Internet Access Is a Privilege

    Sunday, Apr. 21, 2013

    What would our lives be like if we no longer had access to the Internet?  How much good would we lose?  How much harm would we be spared?  Is Internet access a right?  These days, whether or not we think of access to it as a right, many of us take the Internet for granted.  In this brief video, Apple co-founder A. C. "Mike" Markkula Jr. looks at the big picture, argues that Internet use is a privilege, and considers ways to minimize some of the harms associated with it, while fully appreciating its benefits.

    In an op-ed published in the New York Times last year, Vint Cerf (who is often described as one of the "fathers of the Internet" and is currently a vice president and chief Internet evangelist for Google) argued along similar lines:

    "As we seek to advance the state of the art in technology and its use in society, [engineers] must be conscious of our civil responsibilities in addition to our engineering expertise.  Improving the Internet is just one means, albeit an important one, by which to improve the human condition. It must be done with an appreciation for the civil and human rights that deserve protection--without pretending that access itself is such a right."