Santa Clara University


Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag "accountability."
  •  The Ethics of Encryption, After the Paris Attacks

    Friday, Nov. 20, 2015

    The smoldering ongoing debate about the ethics of encryption has burst into flame anew following the Paris attacks last week. Early reports about the attacks, at least in the U.S., included claims that the attackers had used encrypted apps to communicate. On Monday, the director of the CIA said that “this is a time for particularly Europe, as well as here in the United States, for us to take a look and see whether or not there have been some inadvertent or intentional gaps that have been created in the ability of intelligence and security services to protect the people…." Also on Monday, Computerworld reports, Senator Feinstein told a reporter that she had “met with chief counsels of most of the biggest software companies to find legal ways that would allow intelligence authorities to break encryption when monitoring terrorism. ‘I have asked for help,’ Feinstein said. ‘I haven't gotten any help.’”

    At the same time, cybersecurity experts are arguing, anew, that there is no way to allow selective access to encrypted materials without also providing a way for bad actors to access such materials, too—thus endangering the privacy and security of all those who use online tools for communication. In addition, a number of journalists are debunking the initial claims that encryption played a part in the Paris terror attacks (see Motherboard’s “How the Baseless ‘Terrorists Communicating Over PlayStation 4’ Rumor Got Started”), and questioning the assertion that weakening US-generated encryption tools is necessary in order for law enforcement to thwart terrorism (see Wired’s “After Paris Attacks, What the CIA Director Gets Wrong About Encryption”). But the initial claims, widely reported, are already cited in calls for new regulations (in the Washington Post, Brian Fung argues that “[i]f government surveillance expands after Paris, the media will be partly to blame”).

    As more details from the investigation into the Paris attacks and their aftermath come to light, it now appears that the attackers in fact didn’t encrypt at least some of their communications. However, even the strongest supporters of encryption concede that terrorists have used it and will probably use it again in their efforts to camouflage their communications. The question is how to respond to that.

    The ethics of generating and deploying encryption tools doesn’t lend itself to an easy answer. Perhaps the best evidence for that is the fact that the U.S. government helps fund the creation and widespread dissemination of such tools. As Computerworld’s Matt Hamblen reports,

    The U.S.-financed Open Technology Fund (OTF) was created in 2012 and supports privately built encryption and other apps to "develop open and accessible technologies to circumvent censorship and surveillance, and thus promote human rights and open societies," according to the OTF's website.

    In one example, the OTF provided $1.3 million to encryption app maker Open Whisper Systems in 2013 and 2014. The San Francisco-based company produced Signal, Redphone and TextSecure smartphone apps to provide various encryption capabilities.

    The same tools that are intended to “promote human rights and open societies” can be used by terrorists, too. So far, all the cybersecurity experts seem to agree that there is no way to provide encryption backdoors that could be used only by the “good guys”: see, for example, the recently released “Keys under Doormats” paper, whose authors argue that

    The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

    At a minimum, these difficult problems have to be addressed carefully, with full input from the people who best understand the technical challenges. Vilifying the developers of encryption tools, and failing to recognize that they are in fact helping our efforts to uphold our values, is unwarranted.
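
    For readers who want to see why that technical claim holds, here is a minimal illustration (a sketch only, in Python, using the open-source cryptography library's Fernet recipe; it is not the protocol used by Signal or TextSecure). The point is simply that a decryption key decrypts no matter who holds it: an escrowed "exceptional access" key is another copy of the same power, and every place that copy lives becomes a new target.

        # A sketch of why "access for the good guys only" is hard to engineer:
        # any copy of a decryption key works equally well for anyone who obtains it.
        from cryptography.fernet import Fernet  # pip install cryptography

        key = Fernet.generate_key()        # shared by the communicating parties
        channel = Fernet(key)
        ciphertext = channel.encrypt(b"meet at the usual place")

        # An "exceptional access" scheme adds a third key-holder (an escrow agent,
        # a vendor, a court). Mathematically, that copy is indistinguishable in
        # power from the original; so is a stolen copy of it.
        escrowed_copy = key
        print(Fernet(escrowed_copy).decrypt(ciphertext))  # b'meet at the usual place'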


    Photo by woodleywonderworks, used without modification under a Creative Commons license.

  •  Coverage of the Privacy Crimes Symposium

    Thursday, Oct. 29, 2015

    Note: The author of this blog post, Brent Tuttle, CIPP/US E, is a third-year law student at Santa Clara University’s School of Law; he is pursuing a Privacy Law certificate. This piece first appeared in The Advocate--the law school's student-run newspaper.

    On October 6th, SCU Law’s High Tech Law Institute, the Markkula Center for Applied Ethics, and the Santa Clara District Attorney’s Office hosted the first ever “Privacy Crimes: Definition and Enforcement” half-day conference. The Electronic Frontier Foundation (EFF), the International Association of Privacy Professionals (IAPP), and the Identity Theft Council (ITC) also sponsored the free event. It brought together practitioners, academics, and students to discuss several important questions that both civil and criminal legal professionals face in the digital age.  For example, what is a privacy crime? What is being done to enforce the laws addressing these privacy crimes? Furthermore, how can we balance privacy interests in the criminal justice system? 

    After opening remarks from Santa Clara District Attorney Jeffrey Rosen, Daniel Suvor gave the keynote address. Mr. Suvor is the Chief of Policy to the Attorney General of California, Kamala Harris, and former Senior Director of the Office of Cabinet Affairs at the White House. Mr. Suvor discussed his work with the California Attorney General’s Office and elaborated on the AG’s stance regarding the current state of privacy crimes. 

    Mr. Suvor spoke of the California AG’s efforts to combat cyber-crimes.  He noted that California was the first state to have a data breach notification law, implemented in 2003. Mr. Suvor also discussed a recent settlement between the CA Attorney General and Houzz, Inc. that is the first of its kind in the United States. Among other things, the terms of the settlement require Houzz, Inc. to appoint a Chief Privacy Officer who will oversee the company’s compliance with privacy laws and report privacy concerns to the CEO and/or other senior executives. 

    The California Attorney General has also increased privacy enforcement through the creation of an E-Crime Unit in 2011 to prosecute identity theft, data intrusion, and crimes involving the use of technology. To date, the E-Crime Unit has conducted several investigations involving piracy and online counterfeiting operations, and has shut down illegal streaming websites. Mr. Suvor noted a recent priority for the Unit: the prosecution of cyber exploitation, commonly known as “revenge porn.” 

    Mr. Suvor clarified that the AG’s Office adamantly believes the term “revenge porn” is a misnomer. The Office takes the position that the term “cyber exploitation” is more appropriate for two reasons. First, porn is generally created for public consumption, whereas “revenge porn” was not created with a public audience in mind. Second, the Office does not give any credence to the notion that the publisher of non-consensual porn has any legitimate interest in vengeance or revenge in carrying out such heinous acts. He noted that cyber exploitation is a serious nationwide epidemic and that California law expressly prohibits this conduct under California Penal Code section 647. To tackle the problem, the Office is collaborating with the private sector. Mr. Suvor reported that Google, Facebook, Twitter, Reddit, and others have adopted policies that will help victims combat cyber exploitation.

    Following Mr. Suvor’s keynote, Irina Raicu, Director of Internet Ethics at the Markkula Center for Applied Ethics, moderated a panel titled “What Is a Privacy Crime?” The well-rounded group of panelists consisted of Hanni Fakhoury, Senior Staff Attorney from the Electronic Frontier Foundation; Tom Flattery, Santa Clara County’s Deputy District Attorney; and Susan Freiwald, a Professor at the University of San Francisco School of Law. 

    Ms. Freiwald opened the panel by acknowledging how hard it is to define a privacy crime. Privacy interests are amorphous: to some, privacy is the right to be left alone; others seek privacy in their communications or in their autonomy; expectations and concerns vary from person to person. She did, however, draw a sharp distinction between privacy crimes and privacy torts: with crimes, the State itself has an interest in punishing the offender. 

    Ms. Freiwald also urged the audience to proceed with caution when defining privacy crimes. She stressed, for example, due process considerations: we must ensure that legislation specifies the prohibited conduct so that people have notice of exactly what is illegal, what the relevant level of culpability is, whether a privacy crime must be subjectively or objectively harmful, and what defenses may be available to those accused. She also noted that protecting some people from privacy crimes could conflict with the First Amendment; in this respect, she urged that we find a proper balance between protecting an individual’s privacy and leaving room for freedom of speech and freedom of the press. 

    The co-panelists echoed Ms. Freiwald’s concerns and statements. Deputy District Attorney Tom Flattery shed light on how the Penal Code helps protect privacy, but also recognized that there are gaps that it does not address. While the Penal Code combats matters where one individual does something to harm another individual, it does not address matters Mr. Flattery referred to as “commercial surveillance,” where private companies use deceptive terms of service to invasively collect data on their users. 

    Mr. Flattery went into detail about the common use of the California Penal Code to deal with privacy crimes. Specifically, section 502 contains anti-hacking provisions that differentiate criminal activity by what an individual does with the data after gaining unauthorized access. For example, if someone merely gained unauthorized access to a social media or email account and did nothing with this data, that person would be subject to Penal Code § 502(c)(7), though a first offense is only considered an infraction, in the same vein as a speeding or parking ticket. However, if the individual used the information, then Penal Code § 502(c)(2) elevates the charge to a misdemeanor or felony. Mr. Flattery encouraged the audience to think about what the term “use” means in the context of the Code. Does this code section only apply when an individual uses the information to obtain financial gain, or does sharing this data with a group of friends also constitute a “use”? Mr. Flattery stated that these questions don’t really have “good clean answers,” which leaves citizens without a bright-line rule in a context that will become increasingly important over time. 

    Another area of concern Mr. Flattery highlighted was the increasing theft of medical IDs and electronic medical records. In these instances, people go into a hospital or medical treatment facility and assume someone else’s identity to obtain free healthcare services under the stolen alias. As medical records increasingly become electronic, when the victim of this crime later comes into the hospital with a legitimate medical emergency, his or her electronic medical record is full of inaccurate information. In these cases, the identity theft can be life-threatening: a patient’s record may document that someone using their name received a particular medication two weeks prior, when in fact the actual patient is fatally allergic to that treatment. 

    Mr. Fakhoury brought a different perspective to the debate, though one the other panelists largely agreed with. His takeaway was that when defining and addressing privacy crimes, we “need to chill out a little bit and think these things through.” Rather than adding more legislation, he stressed that we should examine whether the current California Penal Code sections could already be used to address the problem. Mr. Fakhoury believes the current penal code can address at least some of the new problems society is facing with “privacy crimes.” For example, responding to Mr. Flattery’s earlier remarks about medical ID theft, he noted that the general identity theft statute already provides a statutory remedy, so he questioned why we would need another law to handle the problem. Mr. Fakhoury also emphasized the risks of piling on new and unnecessary legislation: sloppily drafted bills with ambiguous language are left for courts to interpret, and can end up covering more conduct than was originally intended. 

    Not entirely against new legislation, Mr. Fakhoury urged support for CalECPA, aka SB-178 (which was signed by the Governor late last week). This new law provides citizens with privacy protections against law enforcement. Mr. Fakhoury distinguished this piece of legislation from others that might be quick to criminalize privacy crimes, as he believes it provides law enforcement with tools to get sensitive digital information, but it also protects the public by requiring law enforcement to get a search warrant beforehand. 

    Santa Clara County’s Supervising District Attorney Christine Garcia-Sen moderated the next panel, “What’s Being Done to Enforce Laws Addressing Privacy Crimes?” Attorney Ingo Brauer, Santa Clara County Deputy District Attorney Vishal Bathija, and Erica Johnstone of Ridder, Costa & Johnstone LLP all participated in an hour-long talk that discussed the obstacles and successes practitioners are facing in enforcing privacy crimes. 

    Mr. Bathija highlighted the fact that victims are frequently so embarrassed by these privacy crimes that they are hesitant to draw further attention to the humiliating moments through court proceedings and enforcement. He used the example of a sexual assault case in which an underage female had been exchanging sexually explicit photos with another person. Before the case went to trial, the victim realized that the details of her sexual assault would be heard by the jury. Understandably, she made clear that she didn’t want other people to know that she had been subjected to this conduct by the offender.

    Erica Johnstone was quick to point out that a huge difficulty in litigating “revenge porn” or “cyber exploitation,” is the expense of doing so. Many firms cannot accept clients without a retainer fee of $10,000. If the case goes to court, a plaintiff can easily accrue a bill of $25,000, and if the party wants to litigate to get a judgment, the legal bill can easily exceed $100,000. This creates a barrier whereby most victims of cyber exploitation cannot afford to hire a civil litigator. Ms. Johnstone shared her experience of working for pennies on the dollar in order to help victims of these crimes, but stressed how time- and labor-intensive the work was. 

    Ms. Johnstone also pointed out the flawed rationale in using copyright law to combat revenge porn. Unless the victim is also the person who took the picture, the victim has no copyright in the photo. In addition, the non-consensual content often goes viral so quickly that it is impossible to employ copyright takedown notices to effectively tackle this problem. She described one case where a client and her mother spent 500 hours sending Digital Millennium Copyright Act takedown notices to websites. She also spoke on the issue of search results still displaying content that had been taken down, but was pleased to announce that Google and Bing had altered their practices. These updated policies allow a victim to go straight to search engines and provide them with all URLs where the revenge porn is located, at which point the search engines will automatically de-list all of the links from their query results. Ms. Johnstone also applauded California prosecutors in their enforcement of revenge porn cases and said they were “setting a high bar” that other states have yet to match. 

    As a defense attorney, Ingo Brauer expressed his frustration with the Stored Communications Act, a law that safeguards digital content. He noted that while prosecutors are able to obtain digital content information under the SCA, the law does not provide the same access for all parties, for example defense and civil attorneys. Mr. Brauer stressed that in order for our society to ensure due process, digital content information must be available to both prosecutors and defense attorneys. Failure to provide equal access to digital content information could result in wrongful prosecutions and miscarriages of justice. 

    All three panelists were also adamant about educating others and raising awareness surrounding privacy crimes. In many instances, victims of revenge porn and other similar offenses are not aware of the remedies available to them or are simply too embarrassed to come forward. However, they noted that California offers more legal solutions than most states, both civilly and criminally. Their hope is that as the discussion surrounding privacy crimes becomes more commonplace, the protections afforded to victims will be utilized as well.

    The conference closed out with the panel “Balancing Privacy Interests in the Criminal Justice System.” Santa Clara Superior Court Judge Shelyna V. Brown, SCU Assistant Clinical Professor of Law Seth Flagsberg, and Deputy District Attorney Deborah Hernandez all participated on the panel moderated by SCU Law Professor Ellen Kreitzberg. 

    This area is particularly sensitive because both victims and the accused are entitled to certain privacy rights within the legal system, yet prioritizing or balancing those interests is difficult. For example, Judge Brown said that in a hypothetical sexual assault case in which the defense sought the victim’s psychological records, she would want to know whether the records had any relevance to the actual defense. She stressed that the privacy rights of the victim must be fairly weighed against the defendant’s right to fully cross-examine and confront his or her accusers. And even if the information is relevant, she noted, one must often still decide whether all of it should be released, and whether it should be released under seal.

    Overall, the Privacy Crimes conference served as an excellent resource for those interested in this expanding field. EFF Senior Staff Attorney Hanni Fakhoury stated, “This was a really well put together event. You have a real diversity of speakers and diversity of perspectives. I think what’s most encouraging is to have representatives from the District Attorney’s Office and the Attorney General’s Office, not only laying out how they see these issues, but being in an audience to hear civil libertarians and defense attorneys discuss their concerns. Having...very robust pictures, I think it’s great for the University and it’s great for the public interest as a whole to hear the competing viewpoints.”  

    Videos, photos, and resources from the event

  •  On Snowden, Civil Disobedience, and Whistleblower Protection

    Friday, Oct. 23, 2015
    A video technician monitors a computer screen as National Security Agency leaker Edward Snowden appears on a live video feed broadcast from Moscow at an event sponsored by the ACLU Hawaii in Honolulu on Saturday, Feb. 14, 2015. (AP Photo/Marco Garcia)

    During the recent Democratic presidential debate, when asked for her views about Edward Snowden, Hillary Clinton replied that “he could have been a whistleblower. He could have gotten all of the protections of being a whistleblower. He could have raised all the issues that he has raised. And I think there would have been a positive response to that."

    She repeated that claim during at least one campaign event afterward, even though, by then, several reports had deemed it either outright false or, in the case of PolitiFact, cautiously, “Mostly False.” In the New Yorker, John Cassidy was more direct: “Hillary Clinton Is Wrong about Edward Snowden.” As he explains, there is a whistleblower protection statute that applies to federal employees but not to those in the intelligence agencies, and there is a statute that provides a path for intelligence agency employees to report certain matters to Congress, but which provides no protection to those doing the reporting. PolitiFact and others have pointed, in addition, to an Executive Order signed by President Obama, which purports to expand whistleblower protections to intelligence agency employees but not to contractors like Snowden. President Obama himself cited a policy directive that he said applied in Snowden’s case, but the National Whistleblowers Center, in a 2013 post analyzing that directive, concluded that it

    fails to provide protection for whistleblowers and creates bad precedent. The Directive has already been used effectively by the White House to create an illusion that intelligence agency whistleblowers have rights and creates a pretext to oppose effective Congressional action to enact an actual law that would protect intelligence community employees.

    And all of the analyses note that there are no whistleblower exceptions that would have protected Snowden from criminal prosecution. Snowden’s case is often compared to that of NSA employee Thomas Drake (who did face a prosecution later described by the judge in the case as “unconscionable”)—but that comparison is never raised by those who argue that Snowden could have taken advantage of whistleblower protections and chose not to.

    Given the maze of statutes and executive orders and policy directives relevant to the claim about whistleblower protection, it’s easy to understand that laypeople’s eyes might glaze over at protracted exegesis of that issue. But a presidential candidate who is also a lawyer doesn’t have that excuse—especially when analyses debunking that claim have been appearing for years.

    It’s telling that the debate about the ethics of Snowden’s actions continues, now making its way into the presidential campaign. Back in early 2014, my colleague David DeCosse (the director of the Campus Ethics program at the Markkula Center for Applied Ethics) organized an event titled “Conscience, Edward Snowden, and the Internet: Has Civil Disobedience Gone Too Far?” David and I both spoke at that event, and our comments were followed by lots of questions and comments from the audience gathered at SCU. (David later also wrote a piece on that topic, titled “Edward Snowden and the Moral Worth of Civil Disobedience,” which was published in the Religion and Ethics Newsweekly.)

    As you can see in this summary of the event, David and I agreed on some things and disagreed on many others. Like Clinton, David argued that Snowden should not have fled the U.S., or should have come back to face the legal consequences of his actions. In his essay, David praises Dr. Martin Luther King, Jr.’s “conviction that all those engaging in civil disobedience must be willing to accept legal punishment for their actions. At bottom, this concern was a way to reaffirm the value of the law in itself. Moreover,” David argues, “submitting to such punishment was also a way to affirm by word and deed the moral good of the political community.”

    Is there another question that should be asked, however, before we assert that the ethical course of action, for a person involved in civil disobedience, is to submit to the punishment that the law allots for his/her actions? Should we first ask about the fairness and proportionality of the punishment involved? Are those considerations completely irrelevant to an assessment of the ethics of the decision to blow the whistle and flee? Because, under the Espionage Act (the law under which Snowden has been charged, as he knew he would be), Snowden could face the death penalty or life in prison. What happens when civil disobedience poses a stark choice between martyrdom and no action? In addition, the Espionage Act does not include an ethical balancing test. It makes no exceptions for whistleblowers—for their intent, for the magnitude of the public good that may be achieved through their disclosures, or for the lack of more protective law-abiding ways for whistleblowers to inform the public (or at least some portion of the government outside of the Executive branch). In the eyes of that law, someone like Snowden is exactly the same as someone who would sell national secrets for private gain. Because the law has no whistleblower exception, defendants convicted in recent trials under the Espionage Act have not been allowed to even mention their motives at trial. Is this ethical?

    If we decide that the definition of civil disobedience includes the requirement that those who break the law must submit to the punishment imposed by the law, without questioning the morality of the process or of the punishment involved, then Snowden’s actions don’t constitute civil disobedience. That, however, doesn’t change the fact that he could not have “gotten all of the protections of being a whistleblower.” To continue to assert that, as Hillary Clinton and others seem willing to do, is to subvert, through misinformation, the important conversation that we should continue to have about both the ethics of Snowden’s choices and the ethics of our own laws. The “moral good of the political community” (as David DeCosse put it) demands an evaluation of both.


  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
    (AP Photo/Damian Dovarganes)

    As the number of people who are downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS9 Prompts Backlash”), as well as many other publications.

    Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”
    As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony.”) And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.
    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).
    Many of the recent articles present both sides of the ethics debate. However, most of the articles on the topic claim that the main reasons that users are installing ad blockers are the desires to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (and especially those who understand how the online advertising ecosystem works) install ad blockers: For many of those users, the primary concerns are the tracking behind “targeted” ads, and the meteoric growth of “malvertising”—advertising used as a vector for malware.
    When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if the tracking is somehow inherent in advertising. But the two are not the same, and it is important that we reject this false either/or proposition. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: When the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). Now, it may not be clear exactly what the respondents meant by “misuse of personal information,” but that is certainly not a reference to either annoying ads or clogged bandwidth.
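
    (To make the distinction concrete: most blocking tools work from filter lists, deciding request by request whether to let a resource load, and a rule can target the third-party tracking infrastructure behind an ad without saying anything about advertising as such. The toy sketch below, in Python and with made-up domain names, is only meant to illustrate that logic, not how any particular ad blocker is actually built.)

        # Toy filter-list matcher; the domain names are hypothetical.
        from urllib.parse import urlparse

        TRACKER_RULES = ["tracker.example", "analytics.example", "ads.example/pixel"]

        def should_block(request_url: str, page_domain: str) -> bool:
            """Block requests that match a filter rule AND come from a third party
            (a domain other than the site the user is actually reading)."""
            host = urlparse(request_url).hostname or ""
            third_party = not host.endswith(page_domain)
            matches_rule = any(rule.split("/")[0] in host for rule in TRACKER_RULES)
            return third_party and matches_rule

        # A first-party image loads; a third-party tracking pixel does not.
        print(should_block("https://news.example/logo.png", "news.example"))          # False
        print(should_block("https://tracker.example/pixel?uid=123", "news.example"))  # True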
    As for the rise of “malvertising,” it was that development that led me to say to a Mashable reporter that if this continues unabated we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.
    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some of those who argue that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  Nothing to Hide? Nothing to Protect?

    Wednesday, Aug. 19, 2015

    Despite numerous articles and at least one full-length book debunking the premises and implications of this particular claim, “I have nothing to hide” is still a common reply offered by many Americans when asked whether they care about privacy.

    What does that really mean?

    An article by Conor Friedersdorf, published in The Atlantic, offers one assessment. It is titled “This Man Has Nothing to Hide—Not Even His Email Password.” (I’ll wait while you consider changing your email password right now, and then decide to do it some other time.) The piece details Friedersdorf’s interaction with a man named Noah Dyer, who responded to the writer’s standard challenge—"Would you prove [that you have nothing to hide] by giving me access to your email accounts, … along with your credit card statements and bank records?"—by actually providing all of that information. Friedersdorf then considers the ethical implications of Dyer’s philosophy of privacy-lessness, while carefully navigating the ethical shoals of his own decisions about which of Dyer’s information to look at and which to publish in his own article.

    Admitting to a newfound though limited respect for Dyer’s commitment to drastic self-revelation, Friedersdorf ultimately reaches, however, a different conclusion:

    Since Dyer granted that he was vulnerable to information asymmetries and nevertheless opted for disclosure, I had to admit that, however foolishly, he could legitimately claim he has nothing to hide. What had never occurred to me, until I sat in front of his open email account, is how objectionable I find that attitude. Every one of us is entrusted with information that our family, friends, colleagues, and acquaintances would rather that we kept private, and while there is no absolute obligation for us to comply with their wishes—there are, indeed, times when we have a moral obligation to speak out in order to defend other goods—assigning the privacy of others a value of zero is callous.

    I think it is more than callous, though. It is an abdication of our responsibility to protect others, whose calculations about disclosure and risk might be very different from our own. Saying “I have nothing to hide” is tantamount to saying “I have nothing and no one to protect.” It is either an acknowledgment of a very lonely existence or a devastating failure of empathy and imagination.

    As Friedersdorf describes him, Dyer is not a hermit; he has interactions with many people, at least some of whom (including his children) he appears to care about. And, in his case, his abdication is not complete; it is, rather, a shifting of responsibility. Because while he did disclose much of his personal information (which of course included the personal details of many others who had not been consulted, and whose “value system,” unlike his own, may not include radical transparency), Dyer wrote to Friedersdorf, the reporter, “[a]dditionally, while you may paint whatever picture of me you are inclined to based on the data and our conversations, I would ask you to exercise restraint in embarrassing others whose lives have crossed my path…”

    In other words, “I have nothing to hide; please hide it for me.”

    “I have nothing to hide” misses the fact that no person is an island, and much of every person’s data is tangled, interwoven, and created in conjunction with other people’s.

    The theme of the selfishness or lack of perspective embedded in the “nothing to hide” response is echoed in a recent commentary by lawyer and privacy activist Malavika Jayaram. In an article about India’s Aadhaar ID system, Jayaram quotes Edward Snowden, who in a Reddit AMA session once said that “[a]rguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Jayaram builds on that, writing that the “nothing to hide” argument “locates privacy at an individual (some would say selfish) level and ignores the collective, societal benefits that it engenders and protects, such as the freedom of speech and association.”

    She rightly points out, as well, that the “’nothing to hide’ rhetoric … equates a legitimate desire for space and dignity to something sinister and suspect” and “puts the burden on those under surveillance … , rather than on the system to justify why it is needed and to implement the checks and balances required to make it proportional, fair, just and humane.”

    But there might be something else going on, at the same time, in the rhetorical shift from “privacy” to “something to hide”—a kind of deflection, of finger-pointing elsewhere: There, those are the people who have “something to hide”—not me! Nothing to see here, folks who might be watching. I accept your language, your framing of the issue, and your conclusions about the balancing of values or rights involved. Look elsewhere for troublemakers.

    Viewed this way, the “nothing to hide” response is neither naïve nor simplistically selfish; it is an effort—perhaps unconscious—at camouflage. The opposite of radical transparency.

    The same impetus might present itself in a different, also frequent response to questions about privacy and surveillance: “I’m not that interesting. Nobody would want to look at my information. People could look at information about me and it would all be banal.” Or maybe that is, for some people, a reaction to feelings of helplessness. If every day people read articles advising them about steps to take to protect their online privacy, and every other day they read articles explaining how those defensive measures are defeated by more sophisticated actors, is it surprising that some might try to reassure themselves (if not assure others) that their privacy is not really worth breaching?

    But even if we’re not “interesting,” whatever that means, we all do have information, about ourselves and others, that we need to protect. And our society gives us rights that we need to protect, too--for our sake and others'.

    Photo by Hattie Stroud, used without modification under a Creative Commons license.

  •  Applying Applied Ethics -- on Yik Yak

    Friday, Jun. 26, 2015

    Earlier this week, the associate director of the Markkula Center for Applied Ethics, Miriam Schulman, published a blog post about one of the center's recent campus projects. "If we want to engage with students," she wrote, "we have to go where they are talking, and this year, that has been on Yik Yak." To read more about this controversial app and a creative way to use it in a conversation about applied ethics, see "Yik Yak: The Medium and the Message." (And consider subscribing to the "All About Ethics" blog, as well!)


  •  How Google Can Illuminate the "Right to Be Forgotten" Debate: Two Requests

    Thursday, May. 14, 2015


    Happy Birthday, Right-to-Have-Certain-Results-De-Listed-from-Searches-on-Your-Own-Name,-Depending-on-the-Circumstances!

    It’s now been a year since the European Court of Justice shocked (some) people with a decision that has mistakenly been described as announcing a “right to be forgotten.”

    Today, 80 Internet scholars sent an open letter to Google asking the company to release additional aggregate data about its implementation of the court decision. As they explain,

    The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.

    We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the [“right to be forgotten”] in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.

    Although Google has released a Transparency Report with some aggregate data and some examples of the delinking decisions reached so far, the signatories find that effort insufficient. “Beyond anecdote,” they write,

    we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.

    For now, they add, the participants in the delisting debate “do battle in a data vacuum, with little understanding of the facts.”

    More detailed data is certainly much needed. What remains striking, in the meantime, is how little understanding of the facts many people continue to have about what the decision itself mandates. A year after the decision was issued, an associate editor for Engadget, for example, still writes that, as a result of it, “if Google or Microsoft hides a news story, there may be no way to get it back.” 

    To “get it back”?! Into the results of a search on a particular person’s name? Because that is the entire scope of the delinking involved here—when the delinking does happen.

    In response to a request for comment on the Internet scholars’ open letter, a Google spokesman told The Guardian that “it’s helpful to have feedback like this so we can know what information the public would find useful.” In that spirit of helpful feedback, may I make one more suggestion?

    Google’s RTBF Transparency Report (updated on May 14) opens with the line, “In a May 2014 ruling, … the Court of Justice of the European Union found that individuals have the right to ask search engines like Google to remove certain results about them.” Dear Googlers, could you please add a line or two explaining that “removing certain results” does not mean “removing certain stories from the Internet, or even from the Google search engine”?

    Given the anniversary of the decision, many reporters are turning to the Transparency Report for information for their articles. This is a great educational opportunity. With a line or two, while it weighs its response to the important request for more detailed reporting on its actions, Google could already improve the chances of a more informed debate.

    [I’ve written about the “right to be forgotten” a number of times: chronologically, see “The Right to Be Forgotten, Or the Right to Edit?” “Revisiting the ‘Right to Be Forgotten,’” “The Right to Be Forgotten, The Privilege to Be Remembered” (that one published in Re/code), “On Remembering, Forgetting, and Delisting,” “Luciano Floridi’s Talk at Santa Clara University,” and, most recently, “Removing a Search Result: An Ethics Case Study.”]

    (Photo by Robert Scoble, used without modification under a Creative Commons license.)


  •  Is Facebook Becoming a Better Friend?

    Thursday, Apr. 30, 2015
    This Feb. 8, 2012 photo shows workers inside of Facebook headquarters in Menlo Park, Calif. (AP Photo/Paul Sakuma)

    Good friends understand boundaries and keep your secrets.  You can’t be good friends with someone you don’t trust.

    Facebook, the company that made “friend” a verb and invited you to mix together your bosom buddies, relatives, acquaintances, classmates, lovers, co-workers, exes, teachers, and who-knows-who-else into one group it called “friends,” and has been helping you stay in touch with all of them and prompting you to reveal lots of things to all of them, is taking some steps to become more trustworthy.

    Specifically, as TechCrunch and others recently reported, as of April 30 Facebook’s modified APIs no longer allow apps to collect data about their users’ Facebook “friends” along with data about the users themselves—something apps often did until now, frequently without the users (or their friends) realizing it.*

    As TechCrunch’s Josh Constine puts it, “Some users will see [this] as a positive move that returns control of personal data to its rightful owners. Just because you’re friends with someone, doesn’t mean you necessarily trust their judgment about what developers are safe to deal with. Now, each user will control their own data destiny.” Moreover, with Facebook’s new APIs, each user will have more “granular control” over what permissions he or she grants to an app in terms of data collection or other actions—such as permission to post to his or her Newsfeed. Constine writes that

    Facebook has now instituted Login Review, where a team of its employees audit any app that requires more than the basic data of someone’s public profile, list of friends, and email address. The Login Review team has now checked over 40,000 apps, and from the experience, created new, more specific permissions so developers don’t have to ask for more than they need. Facebook revealed that apps now ask an average of 50 percent fewer permissions than before.

    These are important changes, specifically intended by Facebook to increase user trust in the platform. They are certainly welcome steps. However, Facebook might ponder the first line of Constine’s TechCrunch article, which reads, “It was always kind of shady that Facebook let you volunteer your friends’ status updates, check-ins, location, interests and more to third-party apps.” Yes, it was. It should have been obvious all along that users should “control their own data destiny.” Facebook’s policies and lack of clarity about what they made possible turned many of us who used it into somewhat inconsiderate “friends.”
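
    (From a developer’s side, the change looks roughly like the sketch below; this is an illustrative Python snippet, not Facebook’s documentation, and the token and API-version values are placeholders. The app now receives only the fields the user has granted it, and a request for the user’s friends returns only those friends who have themselves authorized the app, rather than the friends’ own profile data.)

        # Illustrative only; requires a real user access token obtained via Facebook Login.
        import requests  # pip install requests

        ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # placeholder
        GRAPH = "https://graph.facebook.com/v2.3"

        # The app sees only the specific fields the user agreed to share.
        me = requests.get(f"{GRAPH}/me",
                          params={"fields": "id,name,email",
                                  "access_token": ACCESS_TOKEN}).json()

        # Friends' data is no longer exposed through this user's login: the list
        # contains only friends who also use (and have authorized) the same app.
        friends = requests.get(f"{GRAPH}/me/friends",
                               params={"access_token": ACCESS_TOKEN}).json()

        print(me, friends)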

    Are there other policies that continue to have that effect? So many of our friendship-related actions are now prompted and shaped by the design of the social platforms on which we perform them—and controlled even more by algorithms such as the Facebook one that determines which of our friends’ posts we see in our Newsfeed (no, they don’t all scroll by in chronological order; what you see is a curated feed, in which the parameters for curation are not fully disclosed and keep changing).

    Facebook might be becoming a better, more trustworthy friend (though a “friend” that, according to The Atlantic, made $5 billion last year by showing you ads, “more than [doubling] digital ad revenue over the course of two years”). Are we becoming better friends, though, too? Or should we be clamoring for even more transparency and more changes that would empower us to be that?

    *  We warned about this practice in our Center’s module about online privacy: “Increasingly, you may… be allowing some entities to collect a lot of personal information about all of your online ‘friends’ by simply clicking ‘allow’ when downloading applications that siphon your friends' information through your account. On the flip side, your ‘friends’ can similarly allow third parties to collect key information about you, even if you never gave that third party permission to do so.” Happily, we’ll have to update that page now…


  •  Harrison Bergeron in Silicon Valley

    Wednesday, Apr. 1, 2015
    Certain eighth graders I know have been reading “Harrison Bergeron,” so I decided to re-read it, too. The short story, by Kurt Vonnegut, describes a dystopian world in which, in an effort to make all people equal, a government imposes countervailing handicaps on all citizens who are somehow naturally gifted: beautiful people are forced to wear ugly masks; strong people have to carry around weights in proportion to their strength; graceful people are hobbled; etc. In order to make everybody equal, in other words, all people are brought to the lowest common denominator. The title character, Harrison Bergeron, is particularly gifted and therefore particularly impaired. As Vonnegut describes him,
    … Harrison's appearance was Halloween and hardware. Nobody had ever borne heavier handicaps. He had outgrown hindrances faster than the H-G men could think them up. Instead of a little ear radio for a mental handicap, he wore a tremendous pair of earphones, and spectacles with thick wavy lenses. The spectacles were intended to make him not only half blind, but to give him whanging headaches besides.
    Scrap metal was hung all over him. Ordinarily, there was a certain symmetry, a military neatness to the handicaps issued to strong people, but Harrison looked like a walking junkyard. In the race of life, Harrison carried three hundred pounds.
    And to offset his good looks, the H-G men required that he wear at all times a red rubber ball for a nose, keep his eyebrows shaved off, and cover his even white teeth with black caps at snaggle-tooth random.
    In classroom discussions, the story is usually presented as a critique of affirmative action. Such discussions miss the fact that affirmative action aims to level the playing field, not the players.
    In the heart of Silicon Valley, in a land that claims to value meritocracy but ignores the ever more sharply tilted playing field, “Harrison Bergeron” seems particularly inapt. But maybe it’s not. Maybe it should be read, but only in conjunction with stories like CNN’s recent interactive piece titled “The Poor Kids of Silicon Valley.” Or the piece by KQED’s Rachel Myrow, published last month, which notes that 30% of Silicon Valley’s population lives “below self-sufficiency standards,” and that “the income gap is wider than ever, and wider in Silicon Valley than elsewhere in the San Francisco Bay Area or California.”
    What such (nonfiction, current) stories make clear is that we are, in fact, already hanging weights and otherwise hampering people in our society.  It’s just that we don’t do it to those particularly gifted; we do it to the most vulnerable ones. The kids who have to wake up earlier because they live far from their high school and have to take two buses since their parents can’t drive them to school, and who end up sleep-deprived and less able to learn—the burden is on them. The kids who live in homeless shelters and whose brains might be impacted, long-term, by the stress of poverty—the burden is on them.  The people who work as contractors with limited or no benefits—the burden is on them. The parents who have to work multiple jobs, can’t afford to live close to work, and have no time to read to their kids—the burden is on all of them.
    In a Wired article about a growing number of Silicon Valley “techie” parents who are opting to home-school their kids, Jason Tanz expresses some misgivings about the subject but adds,
    My son is in kindergarten, and I fear that his natural curiosity won’t withstand 12 years of standardized tests, underfunded and overcrowded classrooms, and constant performance anxiety. The Internet has already overturned the way we connect with friends, meet potential paramours, buy and sell products, produce and consume media, and manufacture and deliver goods. Every one of those processes has become more intimate, more personal, and more meaningful. Maybe education can work the same way.
    Set aside the question of whether those processes have indeed become more intimate and meaningful; let’s concentrate on a different question about the possibility that, with the help of the Internet, education might “work the same way”: For whom?
    Are naturally curious and creative kids being hampered by standardized tests and underfunded and overcrowded classrooms? Well then, in Silicon Valley, some of those kids will be homeschooled. The Wired article quotes a homeschooling parent who optimistically foresees a day “when you can hire a teacher by the hour, just as you would hire a TaskRabbit to assemble your Ikea furniture.” And what happens to the kids of the TaskRabbited teacher? If Harrison Bergeron happens to be one of them, he will be further hampered, and nobody will check whether the weight of his burden is proportional to anything.
    Meritocracy is a myth when social inequality becomes as vast as it has become in Silicon Valley. Teaching “Harrison Bergeron” to eighth graders in this environment is a cruel joke.
    (Photo by Ken Banks, cropped, used under a Creative Commons license.)
  •  Trust, Self-Criticism, and Open Debate

    Tuesday, Mar. 17, 2015
    President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)

    Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity.  Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats.  In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”

    Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”

    Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”

    Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”

    If the goal is increased collaboration between the public and private sectors on issues related to cybersecurity, the obstacle that most hampers it, as many commentators have observed, is a lack of trust. Things are not likely to get better as long as the anger and the lack of trust are left unaddressed. If President Obama is right that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.

