
Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by tag: ethics.
  •  Pols, the Press, and Choreographed Spontaneity

    Wednesday, Nov. 25, 2015

    Note: The author of this blog post, Miriam Schulman, is the Associate Director/COO of the Markkula Center for Applied Ethics. This piece first appeared in Real Clear Politics on November 13, 2015.

    Call me a fusty former English teacher, but I was perplexed when, without much apparent irony, the New York Times reported this week that President Obama’s digital media team is “building up a social media presence for Mr. Obama in his own voice.” They do this, in part, by writing tweets for him.

    It’s not exactly the same as downloading a paper from the Internet, but fielding a team of social media mavens that purports to speak in the president’s “own voice” does raise questions about the authenticity the White House seems intent on mimicking. And the fact that the media have covered the president’s tweets as though they are, well, the president’s tweets, adds another layer of disingenuousness to the story.

    In fact, the article indicated that Obama rarely does his own social media posting. But it went on to cite as an example of the president’s own voice the now famous tweet “Cool clock, Ahmed,” which went out after high-schooler Ahmed Mohamed was arrested for bringing a digital clock to school that was mistaken for a bomb.

    Although the Times indicated that White House officials would not say who wrote the tweet, it was reported in every account I read as though it came directly from the president’s mouth.

    On CNN:

    "Cool clock, Ahmed," Obama tweeted. "Want to bring it to the White House? We should inspire more kids like you to like science. It's what makes America great."

    On PBS:

    President Obama jumped in yesterday, too, inviting [Ahmed] to visit with a tweet that read: “Cool clock, Ahmed. Want to bring it to the White House?”

    In the Daily Mail:

    And then, President Barack Obama personally extended an invitation for the teenager to bring his ‘cool clock’ to the White House.

    I’m reminded of a story Joan Didion tells in her exposé of political journalism, “Political Fictions.” At a campaign stop during the presidential election of 1988, candidate Michael Dukakis got out of an airplane in San Diego and played a “spontaneous” game of catch with his campaign manager on the runway. The moment was so obviously staged that the TV crews called it “tarmac arrival with ball tossing.” In fact, Dukakis had played a similar “casual” game of catch at a bowling alley in Ohio—and then restaged it when it turned out that CNN hadn’t gotten the shot.

    That politicians are always trying to shape their image is not so surprising, but what shocked Didion was the way this moment was covered by the press—as though it were a real example of what a regular guy Dukakis was.

    I suppose at some level we all know that political identities are constructed things. Certainly we’re aware that many utterances by politicians were written by others. George H.W. Bush’s “thousand points of light” line was written by Peggy Noonan. John Kennedy’s famous “Ask not what your country can do for you” was borrowed from a former headmaster.

    And watching the recent round of presidential debates, only the very naïve could fail to see the hand of handlers in the carefully crafted and often totally unresponsive answers to questions. These are well-rehearsed snippets designed to show toughness or compassion, wonkiness or relatability—whether the candidate has these qualities or not.

    Even Donald Trump, whose followers love what they believe is his shoot-from-the-hip honesty, seems to have given his @realDonaldTrump Twitter handle over to staffers and interns. At least that’s who he blamed when his account tweeted a photo of an American flag draped over two men wearing Nazi uniforms, or when it retweeted and then deleted this message about Hillary Clinton: “If Hillary Clinton can’t satisfy her husband what makes her think she can satisfy America?”

    Okay, @realDonaldTrump is apparently not really Donald Trump. But can we not rely on the press to help us sift the fabricated from the real? When, as the Times reported, Obama’s digital team is working to “bring a sense of spontaneity and accessibility to one of the world’s most choreographed and constricted positions,” should reporters not help us to see that the tweets are themselves part of the choreography?

    Photo by Esther Vargas, used without modification under a Creative Commons license.

  •  The Ethics of Encryption, After the Paris Attacks

    Friday, Nov. 20, 2015

    The smoldering debate about the ethics of encryption has burst into flame anew following the Paris attacks last week. Early reports about the attacks, at least in the U.S., included claims that the attackers had used encrypted apps to communicate. On Monday, the director of the CIA said that “this is a time for particularly Europe, as well as here in the United States, for us to take a look and see whether or not there have been some inadvertent or intentional gaps that have been created in the ability of intelligence and security services to protect the people….” Also on Monday, Computerworld reported, Senator Feinstein told a reporter that she had “met with chief counsels of most of the biggest software companies to find legal ways that would allow intelligence authorities to break encryption when monitoring terrorism. ‘I have asked for help,’ Feinstein said. ‘I haven't gotten any help.’”

    At the same time, cybersecurity experts are arguing, anew, that there is no way to allow selective access to encrypted materials without also providing a way for bad actors to access such materials, too—thus endangering the privacy and security of all those who use online tools for communication. In addition, a number of journalists are debunking the initial claims that encryption played a part in the Paris terror attacks (see Motherboard’s “How the Baseless ‘Terrorists Communicating Over PlayStation 4’ Rumor Got Started”), and questioning the assertion that weakening US-generated encryption tools is necessary in order for law enforcement to thwart terrorism (see Wired’s “After Paris Attacks, What the CIA Director Gets Wrong About Encryption”). But the initial claims, widely reported, are already cited in calls for new regulations (in the Washington Post, Brian Fung argues that “[i]f government surveillance expands after Paris, the media will be partly to blame”).

    As more details from the investigation into the Paris attacks and their aftermath come to light, it now appears that the attackers in fact didn’t encrypt at least some of their communications. However, even the strongest supporters of encryption concede that terrorists have used it and will probably use it again in their efforts to camouflage their communications. The question is how to respond to that.

    The ethics of generating and deploying encryption tools doesn’t lend itself to an easy answer. Perhaps the best evidence for that is the fact that the U.S. government helps fund the creation and widespread dissemination of such tools. As Computerworld’s Matt Hamblen reports,

    The U.S.-financed Open Technology Fund (OTF) was created in 2012 and supports privately built encryption and other apps to "develop open and accessible technologies to circumvent censorship and surveillance, and thus promote human rights and open societies," according to the OTF's website.

    In one example, the OTF provided $1.3 million to encryption app maker Open Whisper Systems in 2013 and 2014. The San Francisco-based company produced Signal, Redphone and TextSecure smartphone apps to provide various encryption capabilities.

    The same tools that are intended to “promote human rights and open societies” can be used by terrorists, too. So far, all the cybersecurity experts seem to agree that there is no way to provide encryption backdoors that could be used only by the “good guys”: see, for example, the recently released “Keys under Doormats” paper, whose authors argue that

    The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

    At a minimum, these difficult problems have to be addressed carefully, with full input from the people who best understand the technical challenges. Vilifying the developers of encryption tools and failing to recognize that they are indeed helping in our efforts to uphold our values is unwarranted.
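
    The experts’ technical point can be made concrete with a toy example. Here is a minimal sketch in Python (it uses the third-party “cryptography” package; the key-escrow design shown is purely illustrative, not any actual proposal):

    # A toy illustration of why an "exceptional access" (key escrow)
    # key is, mathematically, just another key.
    from cryptography.fernet import Fernet

    # Normal case: only the communicating parties hold the key.
    user_key = Fernet.generate_key()
    ciphertext = Fernet(user_key).encrypt(b"meet at noon")

    # Exceptional access: a copy of the user's key is stored,
    # encrypted under a government/vendor escrow key.
    escrow_key = Fernet.generate_key()
    escrowed_copy = Fernet(escrow_key).encrypt(user_key)

    # Whoever holds the escrow key can recover the message...
    recovered_key = Fernet(escrow_key).decrypt(escrowed_copy)
    print(Fernet(recovered_key).decrypt(ciphertext))  # b'meet at noon'

    # ...and "whoever" includes anyone who steals or compels that key;
    # nothing in the math restricts it to the "good guys."

    The escrow key in the sketch unlocks every user’s traffic at once, which is exactly the kind of concentrated, hard-to-govern target the “Keys under Doormats” authors warn about.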


    Photo by woodleywonderworks, used without modification under a Creative Commons license.

  •  Metaphors of Big Data

    Friday, Nov. 6, 2015

    Hot off the (digital) presses: This morning, Re/code ran a short piece by me, titled “Metaphors of Big Data.”

    In the essay, I argue that the metaphors currently used to describe “big data” fail to acknowledge the vast variety of vast datasets that are now being collected and processed.  I argue that we need new metaphors.

    Strangers have long had access to some details about most of us—our names, phone numbers and even addresses have been fairly easy to find, even before the advent of the Internet. And marketers have long created, bought and sold lists that grouped customers based on various differentiating criteria. But marketers didn’t use to have access to, say, our search topics, back when we were searching in libraries, not Googling. The post office didn’t ask us to agree that it was allowed to open our letters and scan them for keywords that would then be sold to marketers that wanted to reach us with more accurately personalized offers. We would have balked. We should balk now.

    The link will take you to the piece on the Re/code site, but I hope you’ll come back and respond to it in the blog comments!


    Photo by Marc_Smith, used without modification under a Creative Commons license.


  •  Coverage of the Privacy Crimes Symposium

    Thursday, Oct. 29, 2015

    Note: The author of this blog post, Brent Tuttle, CIPP/US E, is a third-year law student at Santa Clara University’s School of Law; he is pursuing a Privacy Law certificate. This piece first appeared in The Advocate, the law school's student-run newspaper.

    On October 6th, SCU Law’s High Tech Law Institute, the Markkula Center for Applied Ethics, and the Santa Clara District Attorney’s Office hosted the first ever “Privacy Crimes: Definition and Enforcement” half-day conference. The Electronic Frontier Foundation (EFF), the International Association of Privacy Professionals (IAPP), and the Identity Theft Council (ITC) also sponsored the free event. It brought together practitioners, academics, and students to discuss several important questions that both civil and criminal legal professionals face in the digital age.  For example, what is a privacy crime? What is being done to enforce the laws addressing these privacy crimes? Furthermore, how can we balance privacy interests in the criminal justice system? 

    After opening remarks from Santa Clara District Attorney Jeffrey Rosen, Daniel Suvor gave the keynote address. Mr. Suvor is the Chief of Policy to the Attorney General of California, Kamala Harris, and former Senior Director of the Office of Cabinet Affairs at the White House. Mr. Suvor discussed his work with the California Attorney General’s Office and elaborated on the AG’s stance regarding the current state of privacy crimes. 

    Mr. Suvor spoke of the California AG’s efforts to combat cyber-crimes.  He noted that California was the first state to have a data breach notification law, implemented in 2003. Mr. Suvor also discussed a recent settlement between the CA Attorney General and Houzz, Inc. that is the first of its kind in the United States. Among other things, the terms of the settlement require Houzz, Inc. to appoint a Chief Privacy Officer who will oversee the company’s compliance with privacy laws and report privacy concerns to the CEO and/or other senior executives. 

    The California Attorney General has also increased privacy enforcement through the creation of an E-Crime Unit in 2011 to prosecute identity theft, data intrusion, and crimes involving the use of technology. To date, the E-Crime Unit has conducted several investigations involving piracy and online counterfeit operations, and has shut down illegal streaming websites. Mr. Suvor noted a recent priority for the Unit: the prosecution of cyber exploitation, commonly known as “revenge porn.”

    Mr. Suvor clarified that the AG’s Office adamantly believes the term “revenge porn” is a misnomer. The Office takes the position that the term “cyber exploitation” is more appropriate for two reasons. First, porn is generally created for public consumption, whereas “revenge porn” was not created with a public audience in mind. Second, the Office does not give any credence to the notion that the publisher of non-consensual porn has any legitimate interest in vengeance or revenge in carrying out such heinous acts. He noted that cyber exploitation is a serious nationwide epidemic and that California law expressly prohibits this conduct under California Penal Code section 647. To tackle this problem, the Office is collaborating with the private sector. Mr. Suvor reported that Google, Facebook, Twitter, Reddit, and others have since adopted policies that will help victims combat cyber exploitation.

    Following Mr. Suvor’s keynote, Irina Raicu, Director of Internet Ethics at the Markkula Center for Applied Ethics, moderated a panel titled “What Is a Privacy Crime?” The well-rounded group of panelists consisted of Hanni Fakhoury, Senior Staff Attorney from the Electronic Frontier Foundation; Tom Flattery, Santa Clara County’s Deputy District Attorney; and Susan Freiwald, a Professor at the University of San Francisco School of Law. 

    Ms. Freiwald opened the panel by acknowledging how hard it is to define a privacy crime. Privacy interests are amorphous: to some, privacy is the right to be left alone; others seek privacy in their communications or in their autonomy; expectations and concerns vary from individual to individual. She did, however, draw a sharp distinction between privacy crimes and privacy torts: with a crime, the State itself has an interest in punishing the offender.

    Ms. Freiwald also urged the audience to proceed with caution when defining privacy crimes, stressing, for example, due process considerations: we must ensure that legislation specifies the prohibited conduct so that people have notice of what exactly is illegal, what the relevant level of culpability is, whether a privacy crime must be subjectively or objectively harmful, and what defenses may be available to those accused. Furthermore, she noted that protecting some people from privacy crimes could conflict with the First Amendment; in this respect, she urged that we find a proper balance between protecting an individual’s privacy and leaving room for freedom of speech and freedom of the press.

    The co-panelists echoed Ms. Freiwald’s concerns. Deputy District Attorney Tom Flattery shed light on how the Penal Code helps protect privacy, but also recognized gaps that it does not address. While the Penal Code covers cases in which one individual harms another, it does not address what Mr. Flattery referred to as “commercial surveillance”: private companies using deceptive terms of service to invasively collect data on their users.

    Mr. Flattery went into detail about the common use of the California Penal Code to deal with privacy crimes. Specifically, section 502 contains anti-hacking provisions that differentiate criminal activity by what an individual does with the data after gaining unauthorized access. For example, if someone merely gained unauthorized access to a social media or email account and did nothing with this data, that person would be subject to Penal Code § 502(c)(7), though a first offense is only an infraction, in the same vein as a speeding or parking ticket. However, if the individual used the information, then Penal Code § 502(c)(2) elevates the charge to a misdemeanor or felony. Mr. Flattery encouraged the audience to think about what the term “use” means in the context of the Code. Does this code section only apply when an individual uses the information for financial gain, or does sharing the data with a group of friends also constitute a “use”? Mr. Flattery stated that these questions don’t really have “good clean answers,” which leaves citizens without a bright-line rule in a context that will only become more important over time.

    Another area of concern Mr. Flattery highlighted was the increasing theft of medical IDs and electronic medical records. In these instances, people go into a hospital or medical treatment facility and assume someone else’s identity to obtain free healthcare services under a stolen alias. As medical records increasingly become electronic, when the victim of this crime later comes into the hospital with a legitimate medical emergency, his or her electronic medical record is full of inaccurate information. In these cases, the identity theft can be life-threatening: the record may document that someone using the victim’s name received a particular medication two weeks prior, when in fact the actual patient is dangerously allergic to such treatment.

    Mr. Fakhoury brought a distinct perspective to the debate, though one the other panelists largely shared. His takeaway was that, when defining and addressing privacy crimes, we “need to chill out a little bit and think these things through.” Rather than adding more legislation, he stressed that we should examine whether the current California Penal Code sections could be used to address the problem; he believes the current code could fix at least some of the new problems society is facing with “privacy crimes.” For example, addressing Mr. Flattery’s previous remarks about medical ID theft, Mr. Fakhoury noted that the general identity theft statute already applies, so he questioned why we would need another law to handle the problem. He also emphasized the potential downsides of an abundance of new and unnecessary legislation: sloppily drafted bills with ambiguous language are left for courts to interpret, and can end up covering more conduct than was originally intended.

    Not entirely against new legislation, Mr. Fakhoury urged support for CalECPA, aka SB-178 (which was signed by the Governor late last week). This new law provides citizens with privacy protections against law enforcement. Mr. Fakhoury distinguished this piece of legislation from others that might be quick to criminalize privacy crimes, as he believes it provides law enforcement with tools to get sensitive digital information, but it also protects the public by requiring law enforcement to get a search warrant beforehand. 

    Santa Clara County’s Supervising District Attorney Christine Garcia-Sen moderated the next panel, “What’s Being Done to Enforce Laws Addressing Privacy Crimes?” Attorney Ingo Brauer, Santa Clara County Deputy District Attorney Vishal Bathija, and Erica Johnstone of Ridder, Costa & Johnstone LLP participated in an hour-long discussion of the obstacles and successes practitioners face in enforcing laws against privacy crimes.

    Mr. Bathija highlighted the fact that victims are frequently so embarrassed by these privacy crimes that they are hesitant to have the humiliating moments revisited in court proceedings and enforcement. He used the example of a sexual assault case in which an underage female had been exchanging sexually explicit photos with another person. Before the case went to trial, the victim realized that the details of her sexual assault would be heard by the jury. Understandably, she vocally expressed her concern that she didn’t want other people to know what the offender had subjected her to.

    Erica Johnstone was quick to point out that a huge difficulty in litigating “revenge porn” or “cyber exploitation,” is the expense of doing so. Many firms cannot accept clients without a retainer fee of $10,000. If the case goes to court, a plaintiff can easily accrue a bill of $25,000, and if the party wants to litigate to get a judgment, the legal bill can easily exceed $100,000. This creates a barrier whereby most victims of cyber exploitation cannot afford to hire a civil litigator. Ms. Johnstone shared her experience of working for pennies on the dollar in order to help victims of these crimes, but stressed how time- and labor-intensive the work was. 

    Ms. Johnstone also pointed out the flawed rationale of using copyright law to combat revenge porn. Unless the victim is also the person who took the picture, the victim has no copyright in the photo. In addition, the non-consensual content often goes viral so quickly that copyright takedown notices cannot effectively tackle the problem. She described one case in which a client and her mother spent 500 hours sending Digital Millennium Copyright Act takedown notices to websites. She also spoke on the issue of search results still displaying content that had been taken down, but was pleased to announce that Google and Bing had altered their practices: their updated policies allow a victim to go straight to the search engines and provide all the URLs where the revenge porn is located, at which point the search engines will de-list those links from their query results. Ms. Johnstone also applauded California prosecutors for their enforcement of revenge porn cases and said they were “setting a high bar” that other states have yet to match.

    As a defense attorney, Ingo Brauer expressed his frustration with the Stored Communications Act, a law that governs access to stored digital content. He noted that while prosecutors are able to obtain digital content information under the SCA, the law does not provide the same access to all parties, such as defense and civil attorneys. Mr. Brauer stressed that in order for our society to ensure due process, digital content information must be available to both prosecutors and defense attorneys; failure to provide equal access could result in wrongful prosecutions and miscarriages of justice.

    All three panelists were also adamant about educating others and raising awareness surrounding privacy crimes. In many instances, victims of revenge porn and other similar offenses are not aware of the remedies available to them or are simply too embarrassed to come forward. However, they noted that California offers more legal solutions than most states, both civilly and criminally. Their hope is that as the discussion surrounding privacy crimes becomes more commonplace, the protections afforded to victims will be utilized as well.

    The conference closed out with the panel “Balancing Privacy Interests in the Criminal Justice System.” Santa Clara Superior Court Judge Shelyna V. Brown, SCU Assistant Clinical Professor of Law Seth Flagsberg, and Deputy District Attorney Deborah Hernandez all participated on the panel moderated by SCU Law Professor Ellen Kreitzberg. 

    This is a particularly sensitive area, as both victims and the accused are entitled to certain privacy rights within the legal system, yet prioritizing or balancing those interests is difficult. For example, Judge Brown stated that, in a hypothetical sexual assault case where the defense sought the victim’s psychological records, she would want to know whether the records had any relevance to the actual defense. She stressed that the privacy rights of the victim must be fairly weighed against the defendant’s right to fully cross-examine and confront his or her accusers. And even when the information is relevant, she noted, the court must often decide whether all of it should be released and whether it should be released under seal.

    Overall, the Privacy Crimes conference served as an excellent resource for those interested in this expanding field. EFF Senior Staff Attorney Hanni Fakhoury stated, “This was a really well put together event. You have a real diversity of speakers and diversity of perspectives. I think what’s most encouraging is to have representatives from the District Attorney’s Office and the Attorney General’s Office, not only laying out how they see these issues, but being in an audience to hear civil libertarians and defense attorneys discuss their concerns. Having...very robust pictures, I think it’s great for the University and it’s great for the public interest as a whole to hear the competing viewpoints.”  

    Videos, photos, and resources from the event

  •  On Snowden, Civil Disobedience, and Whistleblower Protection

    Friday, Oct. 23, 2015
    A video technician monitors a computer screen as National Security Agency leaker Edward Snowden appears on a live video feed broadcast from Moscow at an event sponsored by the ACLU Hawaii in Honolulu on Saturday, Feb. 14, 2015. (AP Photo/Marco Garcia)

    During the recent Democratic presidential debate, when asked for her views about Edward Snowden, Hillary Clinton replied that “he could have been a whistleblower. He could have gotten all of the protections of being a whistleblower. He could have raised all the issues that he has raised. And I think there would have been a positive response to that."

    She repeated that claim during at least one campaign event afterward, even though, by then, several reports had deemed it either outright false or, in the case of PolitiFact, cautiously, “Mostly False.” In the New Yorker, John Cassidy was more direct: “Hillary Clinton Is Wrong about Edward Snowden.” As he explains, there is a whistleblower protection statute that applies to federal employees but not to those in the intelligence agencies, and there is a statute that provides a path for intelligence agency employees to report certain matters to Congress, but which provides no protection to those doing the reporting. PolitiFact and others have pointed, in addition, to an Executive Order signed by President Obama, which purports to expand whistleblower protections to intelligence agency employees but not to contractors like Snowden. President Obama himself cited a policy directive that he said applied in Snowden’s case, but the National Whistleblowers Center, in a 2013 post analyzing that directive, concluded that it

    fails to provide protection for whistleblowers and creates bad precedent. The Directive has already been used effectively by the White House to create an illusion that intelligence agency whistleblowers have rights and creates a pretext to oppose effective Congressional action to enact an actual law that would protect intelligence community employees.

    And all of the analyses note that there are no whistleblower exceptions that would have protected Snowden from criminal prosecution. Snowden’s case is often compared to that of NSA employee Thomas Drake (who did face a prosecution later described by the judge in the case as “unconscionable”)—but that comparison is never raised by those who argue that Snowden could have taken advantage of whistleblower protections and chose not to.

    Given the maze of statutes and executive orders and policy directives relevant to the claim about whistleblower protection, it’s easy to understand that laypeople’s eyes might glaze over at protracted exegesis of that issue. But a presidential candidate who is also a lawyer doesn’t have that excuse—especially when analyses debunking that claim have been appearing for years.

    It’s telling that the debate about the ethics of Snowden’s actions continues, now making its way into the presidential campaign. Back in early 2014, my colleague David DeCosse (the director of the Campus Ethics program at the Markkula Center for Applied Ethics) organized an event titled “Conscience, Edward Snowden, and the Internet: Has Civil Disobedience Gone Too Far?” David and I both spoke at that event, and our comments were followed by lots of questions and comments from the audience gathered at SCU. (David later also wrote a piece on that topic, titled “Edward Snowden and the Moral Worth of Civil Disobedience,” which was published in the Religion and Ethics Newsweekly.)

    As you can see in this summary of the event, David and I agreed on some things and disagreed on many others. Like Clinton, David argued that Snowden should not have fled the U.S., or should have come back to face the legal consequences of his actions. In his essay, David praises Dr. Martin Luther King, Jr.’s “conviction that all those engaging in civil disobedience must be willing to accept legal punishment for their actions. At bottom, this concern was a way to reaffirm the value of the law in itself. Moreover,” David argues, “submitting to such punishment was also a way to affirm by word and deed the moral good of the political community.”

    Is there another question that should be asked, however, before we assert that the ethical course of action, for a person involved in civil disobedience, is to submit to the punishment that the law allots for his/her actions? Should we first ask about the fairness and proportionality of the punishment involved? Are those considerations completely irrelevant to an assessment of the ethics of the decision to blow the whistle and flee? Because, under the Espionage Act (the law under which Snowden has been charged, as he knew he would be), Snowden could face the death penalty or life in prison. What happens when civil disobedience poses a stark choice between martyrdom and no action? In addition, the Espionage Act does not include an ethical balancing test. It makes no exceptions for whistleblowers—for their intent, for the magnitude of the public good that may be achieved through their disclosures, or for the lack of more protective law-abiding ways for whistleblowers to inform the public (or at least some portion of the government outside of the Executive branch). In the eyes of that law, someone like Snowden is exactly the same as someone who would sell national secrets for private gain. Because the law has no whistleblower exception, defendants convicted in recent trials under the Espionage Act have not been allowed to even mention their motives at trial. Is this ethical?

    If we decide that the definition of civil disobedience includes the requirement that those who break the law must submit to the punishment imposed by the law, without questioning the morality of the process or of the punishment involved, then Snowden’s actions don’t constitute civil disobedience. That, however, doesn’t change the fact that he could not have “gotten all of the protections of being a whistleblower.” To continue to assert that, as Hillary Clinton and others seem willing to do, is to subvert, through misinformation, the important conversation that we should continue to have about both the ethics of Snowden’s choices and the ethics of our own laws. The “moral good of the political community” (as David DeCosse put it) demands an evaluation of both.


  •  Et tu, Barbie?

    Wednesday, Oct. 14, 2015

    In a smart city, in a smart house, a little girl got a new Barbie. Her parents, who had enough money to afford a rather pricey doll, explained to the girl that the new Barbie could talk—could actually have a conversation with the girl. Sometime later, alone in her room with her toys, the little girl, as instructed, pushed on the doll’s belt buckle and started talking. After a few minutes, she wondered what Barbie would answer if she said something mean—so she tried that.

    Later, the girl’s mother accessed the app that came with the new doll and listened to her daughter’s conversation. The mom then went to the girl’s room and asked her why she had been mean to Barbie. The little girl learned something—about talking, about playing, about technology, about her parents.

    Or maybe I should have written all of the above using future tense—because “Hello Barbie,” according to media reports, does not hit the stores until next month.

    After reading several articles about “Hello Barbie,” I decided to ask several folks here at the university for their reactions to this new high-tech toy. (I read, think, and write all the time about privacy, so I wanted some feedback from folks who mostly think about other stuff.)  Mind you, the article I’d sent them as an introduction was titled “Will Barbie Be Hackers’ New Plaything?”—so I realize it wasn’t exactly a neutral way to start the conversation. With that caveat, though, here is a sample of the various concerns that my colleagues expressed.

    The first reaction came via email: “There is a sci-fi thriller in there somewhere…” (Thriller, yes, I thought to myself, though not sci-fi anymore.)

    The other concerns came in person. From a parent of grown kids: the observation that these days parents seem to want to know absolutely everything about their children, and that that couldn’t be healthy for either the parents or the kids. From the dad of a 3-year-old girl: “My daughter already loves Siri; if I gave her this she would stop talking to anybody else!” From a woman thinking back: “I used to have to talk for my doll, too…” The concerns echoed those raised in much of the media coverage of Hello Barbie—that she will stifle the imagination that kids deploy when they have to provide both sides of a conversation with their toys, or that she will violate whatever privacy children still have.

    But I was particularly struck by a paragraph in a Mashable article that described in more detail how the new doll/app combo will work:

    "When a parent goes through the process of setting up Hello Barbie via the app, it's possible to control the settings and manually approve or delete potential conversation topics. For example, if a child doesn’t celebrate certain holidays like Christmas, a parent can chose to remove certain lines from Barbie's repertoire."

    Is the question underlying all of this, really, one of control? Who will ultimately control Hello Barbie? Will it be Mattel? Will it be ToyTalk, the San Francisco company providing the “consumer-grade artificial intelligence” that enables Hello Barbie’s conversations? The parents who buy the doll? The hackers who might break in? The courts that might subpoena the recordings of the children’s chats with the doll?

    And when do children get to exercise control? When and how do they get to develop autonomy if even well-intentioned people (hey, corporations are people, too, now) listen in to—and control—even the conversations that the kids are having when they play, thinking they’re alone? (“…ToyTalk says that parents will have ‘full control over all account information and content,’ including sharing recordings on Facebook, YouTube, and Twitter,” notes an ABC News article; “data is sent to and from ToyTalk’s servers, where conversations are stored for two years from the time a child last interacted with the doll or a parent accessed a ToyTalk account,” points out the San Francisco Chronicle.)

    What do kids learn when they realize that the conversations they thought were private were actually being recorded, played back, and shared with business partners or parents’ friends? All I can hope is that the little girls who receive Hello Barbie will, as a result, grow up to be privacy activists—or, better yet, tech developers and designers who will understand, deeply, the importance of privacy by design.

    Photo by Mike Licht, used without modification under a Creative Commons license.


  •  Privacy Crimes Symposium: A Preview

    Monday, Oct. 5, 2015
    Daniel Suvor

    Tomorrow, Santa Clara University will host a free half-day symposium titled “Privacy Crimes: Definition and Enforcement.” The event is co-sponsored by the Santa Clara District Attorney’s Office, the High Tech Law Institute, and the Markkula Center for Applied Ethics. (Online registration is now closed, but if you’d still like to attend, you can email the organizers.)

    The event will open with remarks from Santa Clara DA Jeff Rosen and a keynote by Daniel Suvor, the California attorney general’s current policy advisor. A recent Fusion article detailing the latest efforts to criminalize and prosecute “revenge porn” quotes Suvor, who explains that the attorney general “sees this as the next front in the violence against women category of crime. … She sees it as the 21st century incarnation of domestic violence and assaults against women, now taken online.”

    The Fusion article also points out that California was “the second state to put a revenge porn law on the books…. In the past two years, 23 other states have followed suit.”

    Of course, revenge porn is not the only crime that impacts privacy, and legislative responses are not the only way to combat such crimes.  The Privacy Crimes symposium will feature panel discussions that will address a broad variety of related questions: How are privacy interests harmed? Why (and when) should we turn to criminal law in response? What types of criminal charges are currently used in the prosecutions that involve such harms? Are current laws sufficiently enforced? Are the current laws working well? Should some laws be changed? Do we need new ones? Are there other ways that would also work (or work better) to minimize privacy harms? Are there better ways to protect competing privacy interests in the criminal justice system?

    We are looking forward to a thought-provoking discussion and many more questions from audience members! And we are grateful to the International Association of Privacy Professionals, the Electronic Frontier Foundation, and the Identity Theft Council for their help in publicizing this event.

  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
    (AP Photo/Damian Dovarganes)

    As the number of people who are downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS9 Prompts Backlash”), as well as many other publications.

    Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”

    As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony”). And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.

    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).

    Many of the recent articles present both sides of the ethics debate. However, most claim that the main reasons users install ad blockers are the desire to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (and especially those who understand how the online advertising ecosystem works) install ad blockers: for many of those users, the primary concerns are the tracking behind “targeted” ads, and the meteoric growth of “malvertising”—advertising used as a vector for malware.

    When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if tracking were somehow inherent in advertising. But the two are not the same, and it is important that we reject this false equivalence. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: when the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). Now, it may not be clear exactly what the respondents meant by “misuse of personal information,” but it is certainly not a reference to either annoying ads or clogged bandwidth.
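
    The distinction is easy to see in how blocking tools actually work: a filter can target tracking hosts separately from ad hosts. Here is a minimal, hypothetical sketch in Python (the host names are made up; real blockers, which draw on filter lists such as EasyList for ads and EasyPrivacy for trackers, use far richer matching rules):

    # Hypothetical host lists -- real tools use, e.g., EasyList (ads)
    # and EasyPrivacy (trackers), with far more sophisticated rules.
    TRACKER_HOSTS = {"tracker.example", "analytics.example"}
    AD_HOSTS = {"ads.example"}

    def allow_request(host: str, block_trackers: bool = True,
                      block_ads: bool = False) -> bool:
        """Decide whether the browser should fetch from this host."""
        if block_trackers and host in TRACKER_HOSTS:
            return False
        if block_ads and host in AD_HOSTS:
            return False
        return True

    # A privacy-focused configuration can refuse the tracker while
    # still letting a non-tracking ad load: the two are separable.
    print(allow_request("tracker.example"))  # False
    print(allow_request("ads.example"))      # True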
    As for the rise of “malvertising,” it was that development that led me to tell a Mashable reporter that, if it continues unabated, we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.

    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some of those who argue that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  A Personal Privacy Policy

    Wednesday, Sep. 2, 2015

    This essay first appeared in Slate's Future Tense blog in July 2015.

    Dear Corporation,

    You have expressed an interest in collecting personal information about me. (This interest may have been expressed by implication, in case you were attempting to collect such data without notifying me first.) Since you have told me repeatedly that personalization is a great benefit, and that advertising, search results, news, and other services should be tailored to my individual needs and desires, I’ve decided that I should also have my own personalized, targeted privacy policy. Here it is.

    While I am glad that (as you stated) my privacy is very important to you, it’s even more important to me. The intent of this policy is to inform you how you may collect, use, and dispose of personal information about me.

    By collecting any such information about me, you are agreeing to the terms below. These terms may change from time to time, especially as I find out more about ways in which personal information about me is actually used and I think more about the implications of those uses.

    Note: You will be asked to provide some information about yourself. Providing false information will constitute a violation of this agreement.

    Scope: This policy covers only me. It does not apply to related entities that I do not own or control, such as my friends, my children, or my husband.

    Age restriction and parental participation: Please specify if you are a startup; if so, note how long you’ve been in business. Please include the ages of the founders/innovators who came up with your product and your business model. Please also include the ages of any investors who have asserted, through their investment in your company, that they thought this product or service was a good idea.

    Information about you. For each piece of personal information about me that you wish to collect, analyze, and store, you must first disclose the following: a) Do you need this particular piece of information in order for your product/service to work for me? If not, you are not authorized to collect it. If yes, please explain how this piece of information is necessary for your product to work for me. b) What types of analytics do you intend to perform with this information? c) Will you share this piece of information with anyone outside your company? If so, list each entity with which you intend to share it, and for what purpose; you must update this disclosure every time you add a new third party with which you’d like to share. d) Will you make efforts to anonymize the personal information that you’re collecting? e) Are you aware of the research that shows that anonymization doesn’t really work because it’s easy to put together information from several categories and/or several databases and so figure out the identity of an “anonymous” source of data? f) How long will you retain this particular piece of information about me? g) If I ask you to delete it, will you, and if so, how quickly? Note: by “delete” I don’t mean “make it invisible to others”—I mean “get it out of your system entirely.”

    Please be advised that, like these terms, the information I’ve provided to you may change, too: I may switch electronic devices; change my legal name; have more children; move to a different town; experiment with various political or religious affiliations; buy products that I may or may not like, just to try something new or to give to someone else; etc. These terms (as amended as needed) will apply to any new data that you may collect about me in the future: your continued use of personal information about me constitutes your acceptance of this.

    And, of course, I reserve all rights not expressly granted to you.
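
    A concrete aside on point (e) above: researchers (most famously Latanya Sweeney) have shown that a few “quasi-identifiers,” such as ZIP code, birth date, and sex, can re-identify supposedly anonymous records by linking datasets. A minimal sketch in Python, using toy, made-up data:

    # Toy, made-up data illustrating a "linkage attack": joining an
    # "anonymized" dataset to a public roster on quasi-identifiers.
    anonymized_records = [
        {"zip": "95053", "dob": "1970-07-31", "sex": "F", "diagnosis": "asthma"},
    ]
    public_roster = [
        {"name": "Jane Doe", "zip": "95053", "dob": "1970-07-31", "sex": "F"},
        {"name": "John Roe", "zip": "95050", "dob": "1980-01-15", "sex": "M"},
    ]

    QUASI_IDENTIFIERS = ("zip", "dob", "sex")
    for record in anonymized_records:
        for person in public_roster:
            if all(record[k] == person[k] for k in QUASI_IDENTIFIERS):
                # The "anonymous" record now has a name attached.
                print(person["name"], "->", record["diagnosis"])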

    Photo by Perspecsys Photos, used without modification under a Creative Commons license.

  •  Internet Ethics: Fall 2015 Events

    Tuesday, Sep. 1, 2015

    Fall will be here soon, and with it come three MCAE events about three interesting Internet-related ethical (and legal) topics. All of the events are free and open to the public; links to more details and registration forms are included below, so you can register today!

    The first, on September 24, is a talk by Santa Clara Law professor Colleen Chien, who recently returned from her appointment as White House senior advisor for intellectual property and innovation. Chien’s talk, titled “Tech Innovation Policy at the White House: Law and Ethics,” will address several topics—including intellectual property and innovation (especially the efforts toward patent reform); open data and social change; and the call for “innovation for all” (i.e., innovation in education, the problem of connectivity deserts, the need for tech inclusion, and more). Co-sponsored by the High Tech Law Institute, this event is part of our ongoing “IT, Ethics, and Law” lecture series, which recently included presentations on memory, forgiveness, and the “right to be forgotten”; ethical hacking; and the ethics of online price discrimination. (If you would like to be added to our mailing list for future events in this series, please email us.)

    The second, on October 6, is a half-day symposium on privacy law and ethics in the criminal justice system. Co-sponsored by the Santa Clara District Attorney’s office and the High Tech Law Institute, “Privacy Crimes: Definition and Enforcement” aims to better define the concept of “privacy crimes,” assess how such crimes are currently being addressed in the criminal justice system, and explore how society might better respond to them—through new laws, different enforcement practices, education, and other strategies. The conference will bring together prosecutors, defense attorneys, judges, academics, and victims’ advocates to discuss three main questions: What is a “privacy crime”? What’s being done to enforce laws that address such crimes? And how should we balance the privacy interests of the people involved in the criminal justice system? The keynote speaker will be Daniel Suvor, chief of policy for California’s Attorney General Kamala Harris. (This event will qualify for 3.5 hours of California MCLE, as well as IAPP continuing education credit; registration is required.)

    Finally, on October 29 the Center will host Antonio Casilli, associate professor of digital humanities at Telecom Paris Tech. In his talk, titled “How Can Somebody Be A Troll?,” Casilli will ask some provocative questions about the line between actual online trolls and, as he puts it, “rightfully upset Internet users trying to defend their opinions.” In the process, he will discuss the arguments of a new generation of authors and scholars who are challenging the view that trolling is a deviant behavior or the manifestation of perverse personalities; such writers argue that trolling reproduces anthropological archetypes; highlights the intersections of different Internet subcultures; and interconnects discourses around class, race, and gender.

    Each of the talks and panels will conclude with a question-and-answer period. We hope to see you this fall and look forward to your input!

    (And please spread the word to any other folks you think might be interested.)