
Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “anonymity.”
  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
    (AP Photo/Damian Dovarganes)

    As the number of people downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS9 Prompts Backlash”), and in many other publications.

    Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”
    As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony”). And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.
    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).
    Many of the recent articles present both sides of the ethics debate. However, most of them claim that the main reasons users install ad blockers are the desire to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (especially those who understand how the online advertising ecosystem works) install ad blockers: for many of those users, the primary concerns are the tracking behind “targeted” ads and the meteoric growth of “malvertising”—advertising used as a vector for malware.
    When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if tracking were somehow inherent in advertising. But the two are not the same, and it is important that we reject this false either/or proposition. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: when the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). Now, it may not be clear exactly what the respondents meant by “misuse of personal information,” but it is certainly not a reference to either annoying ads or clogged bandwidth.
    As for the rise of “malvertising,” it was that development that led me to say to a Mashable reporter that if this continues unabated we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.
    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some of those who argue that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).
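
    To make the quoted mechanism concrete, here is a minimal sketch (in Python; every attribute name, value, and rule is hypothetical) of the kind of granular targeting that ad networks offer buyers, and that malvertisers can repurpose to deliver payloads only to precisely profiled visitors:

        # Hypothetical sketch of granular ad targeting, per the Wired
        # excerpt above. A buyer (or a malvertiser posing as one)
        # specifies a precise visitor profile; the network serves the
        # creative only to matching visitors, which lets an attacker
        # reach specific targets while remaining invisible to others.

        campaign_targeting = {  # every attribute here is hypothetical
            "os": "Windows 7",
            "browser": "IE 9",
            "country": "US",
            "search_keyword": "banking",
        }

        def should_serve(visitor_profile: dict, targeting: dict) -> bool:
            """Serve the creative only if every targeting attribute
            matches the visitor's observed profile."""
            return all(visitor_profile.get(key) == value
                       for key, value in targeting.items())

        visitor = {"os": "Windows 7", "browser": "IE 9",
                   "country": "US", "search_keyword": "banking"}
        print(should_serve(visitor, campaign_targeting))  # True: served

    The narrower the profile, the fewer people ever see the malicious creative, which is precisely the concealment the Wired piece describes.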

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?
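
    For illustration only, a more nuanced tool might block tracking without blocking advertising as such. Here is a minimal sketch, with hypothetical host names, of a filter that stops third-party requests to known tracking hosts while letting untracked ads load:

        # Minimal sketch (hypothetical hosts) of a blocker aimed at
        # tracking rather than at advertising itself: third-party
        # requests to known tracking hosts are blocked, while content
        # and ads served without trackers pass through.

        KNOWN_TRACKERS = {
            "tracker.example",  # hypothetical profiling host
            "beacon.example",   # hypothetical analytics beacon
        }

        def should_block(request_host: str, page_host: str) -> bool:
            """Block only third-party requests to known tracking hosts."""
            third_party = request_host != page_host
            return third_party and request_host in KNOWN_TRACKERS

        print(should_block("tracker.example", "news.example"))
        # True: tracking request blocked
        print(should_block("adserver.example", "news.example"))
        # False: not a known tracker, so this ad still loads

    Tracker-focused filter lists work on a similar principle at far larger scale; the hard part is maintaining the list of tracking hosts, which is one reason most users settle for blunter tools.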

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  Nothing to Hide? Nothing to Protect?

    Wednesday, Aug. 19, 2015

    Despite numerous articles and at least one full-length book debunking the premises and implications of this particular claim, “I have nothing to hide” is still a common reply offered by many Americans when asked whether they care about privacy.

    What does that really mean?

    An article by Conor Friedersdorf, published in The Atlantic, offers one assessment. It is titled “This Man Has Nothing to Hide—Not Even His Email Password.” (I’ll wait while you consider changing your email password right now, and then decide to do it some other time.) The piece details Friedersdorf’s interaction with a man named Noah Dyer, who responded to the writer’s standard challenge—"Would you prove [that you have nothing to hide] by giving me access to your email accounts, … along with your credit card statements and bank records?"—by actually providing all of that information. Friedersdorf then considers the ethical implications of Dyer’s philosophy of privacy-lessness, while carefully navigating the ethical shoals of his own decisions about which of Dyer’s information to look at and which to publish in his own article.

    Despite admitting to a newfound (though limited) respect for Dyer’s commitment to drastic self-revelation, Friedersdorf ultimately reaches a different conclusion:

    Since Dyer granted that he was vulnerable to information asymmetries and nevertheless opted for disclosure, I had to admit that, however foolishly, he could legitimately claim he has nothing to hide. What had never occurred to me, until I sat in front of his open email account, is how objectionable I find that attitude. Every one of us is entrusted with information that our family, friends, colleagues, and acquaintances would rather that we kept private, and while there is no absolute obligation for us to comply with their wishes—there are, indeed, times when we have a moral obligation to speak out in order to defend other goods—assigning the privacy of others a value of zero is callous.

    I think it is more than callous, though. It is an abdication of our responsibility to protect others, whose calculations about disclosure and risk might be very different from our own. Saying “I have nothing to hide” is tantamount to saying “I have nothing and no one to protect.” It is either an acknowledgment of a very lonely existence or a devastating failure of empathy and imagination.

    As Friedersdorf describes him, Dyer is not a hermit; he has interactions with many people, at least some of whom (including his children) he appears to care about. And, in his case, the abdication is not complete; it is, rather, a shifting of responsibility. While he did disclose much of his personal information (which of course included the personal details of many others who had not been consulted, and whose “value systems,” unlike his own, may not include radical transparency), Dyer wrote to Friedersdorf, the reporter: “[a]dditionally, while you may paint whatever picture of me you are inclined to based on the data and our conversations, I would ask you to exercise restraint in embarrassing others whose lives have crossed my path…”

    In other words, “I have nothing to hide; please hide it for me.”

    “I have nothing to hide” misses the fact that no person is an island: much of every person’s data is tangled with, interwoven with, and created in conjunction with other people’s.

    The theme of the selfishness or lack of perspective embedded in the “nothing to hide” response is echoed in a recent commentary by lawyer and privacy activist Malavika Jayaram. In an article about India’s Aadhaar ID system, Jayaram quotes Edward Snowden, who in a Reddit AMA session once said that “[a]rguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Jayaram builds on that, writing that the “nothing to hide” argument “locates privacy at an individual (some would say selfish) level and ignores the collective, societal benefits that it engenders and protects, such as the freedom of speech and association.”

    She rightly points out, as well, that the “‘nothing to hide’ rhetoric … equates a legitimate desire for space and dignity to something sinister and suspect” and “puts the burden on those under surveillance …, rather than on the system to justify why it is needed and to implement the checks and balances required to make it proportional, fair, just and humane.”

    But there might be something else going on, at the same time, in the rhetorical shift from “privacy” to “something to hide”—a kind of deflection, of finger-pointing elsewhere: There, those are the people who have “something to hide”—not me! Nothing to see here, folks who might be watching. I accept your language, your framing of the issue, and your conclusions about the balancing of values or rights involved. Look elsewhere for troublemakers.

    Viewed this way, the “nothing to hide” response is neither naïve nor simplistically selfish; it is an effort—perhaps unconscious—at camouflage. The opposite of radical transparency.

    The same impetus might present itself in a different, also frequent response to questions about privacy and surveillance: “I’m not that interesting. Nobody would want to look at my information. People could look at information about me and it would all be banal.” Or maybe that is, for some people, a reaction to feelings of helplessness. If every day people read articles advising them about steps to take to protect their online privacy, and every other day they read articles explaining how those defensive measures are defeated by more sophisticated actors, is it surprising that some might try to reassure themselves (if not assure others) that their privacy is not really worth breaching?

    But even if we’re not “interesting,” whatever that means, we all do have information, about ourselves and others, that we need to protect. And our society gives us rights that we need to protect, too—for our sake and for others’.

    Photo by Hattie Stroud, used without modification under a Creative Commons license.

  •  Applying Applied Ethics -- on Yik Yak

    Friday, Jun. 26, 2015

    Earlier this week, the associate director of the Markkula Center for Applied Ethics, Miriam Schulman, published a blog post about one of the center's recent campus projects. "If we want to engage with students," she wrote, "we have to go where they are talking, and this year, that has been on Yik Yak." To read more about this controversial app and a creative way to use it in a conversation about applied ethics, see "Yik Yak: The Medium and the Message." (And consider subscribing to the "All About Ethics" blog, as well!)


  •  On Remembering, Forgetting, and Delisting

    Friday, Feb. 20, 2015
    Over the last two weeks, Julia Powles, a law and technology researcher at the University of Cambridge, has published two interesting pieces on privacy, free speech, and the “right to be forgotten”: “Swamplands of the Internet: Speech and Privacy” and “How Google Determined Our Right to Be Forgotten” (the latter co-authored by Enrique Chaparro). Both are very much worth reading, especially for folks whose work affects the privacy rights (or preferences, if you prefer) of people around the world.
    Today, a piece that I wrote, which also touches on the “right to be forgotten,” was published in Re/code. It’s titled “The Right to Be Forgotten, the Privilege to Be Remembered.” I hope you’ll read that, too!
    And earlier in February, Google’s Advisory Council issued its much-anticipated report on the issue, which seeks to clarify the outlines of the debate surrounding it and offers suggestions for the implementation of “delisting.”
    One of the authors of that report, Professor Luciano Floridi, will be speaking at Santa Clara University on Wednesday, 2/25, as part of our “IT, Ethics and Law” lecture series.  Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford and the Director of Research of the Oxford Internet Institute. His talk is titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age.” The event is free and open to the public; if you live in the area and are interested in memory, free speech, and privacy online, we hope you will join us and RSVP!
    [And if you would like to be added to our mailing list for the lecture series—which has recently hosted panel presentations on ethical hacking, the ethics of online price discrimination, and privacy by design and software engineering ethics—please email] 
    Photo by Minchioletta, used without modification under a Creative Commons license.
  •  “Practically as an accident”: on “social facts” and the common good

    Thursday, Oct. 30, 2014


    In the Los Angeles Review of Books, philosopher Evan Selinger takes issue with many of the conclusions (and built-in assumptions) compiled in Dataclysm—a new book by Christian Rudder, who co-founded the dating site OKCupid and now heads the site’s data analytics team. While Selinger’s whole essay is really interesting, I was particularly struck by his comments on big data and privacy. 

    “My biggest issue with Dataclysm,” Selinger writes,
    lies with Rudder’s treatment of surveillance. Early on in the book he writes: ‘If Big Data’s two running stories have been surveillance and money, for the last three years I’ve been working on a third: the human story.’ This claim about pursuing a third path isn’t true. Dataclysm itself is a work of social surveillance.
    It’s tempting to think that different types of surveillance can be distinguished from one another in neat and clear ways. If this were the case, we could say that government surveillance only occurs when organizations like the National Security Agency do their job; corporate surveillance is only conducted by companies like Facebook who want to know what we’re doing so that they can effectively monetize our data and devise strategies to make us more deeply engaged with their platform; and social surveillance only takes place in peer-to-peer situations, like parents monitoring their children’s phones, romantic partners scrutinizing each other’s social media feeds….
    But in reality, surveillance is defined by fluid categories.
    While each category of surveillance might include both ethical and unethical practices, the point is that the boundaries separating the categories are porous, and the harms associated with surveillance might seep across all of them.
    Increasingly, when corporations like OKCupid or Facebook analyze their users’ data and communications in order to uncover “social facts,” they claim to be acting in the interest of the common good, rather than pursuing self-serving goals. They claim to give us clear windows into our society. The subtitle of Rudder’s book, for example, is “Who We Are (When We Think No One’s Looking).” As Selinger notes,
    Rudder portrays the volume of information… as a gift that can reveal the truth of who we really are. … [W]hen people don’t realize they’re lab rats in Rudder’s social experiments, they reveal habits—‘universals,’ he even alleges… ‘Practically as an accident,’ Rudder claims, ‘digital data can now show us how we fight, how we love, how we age, who we are, and how we’re changing.’
    Of course, Rudder should confine his claims to the “we” who use OKCupid (a 2013 study by the Pew Research Center found that 10% of Americans report having used an online dating service). Facebook has a stronger claim to having a user base that reflects all of “us.” But there are other entities that sit on even vaster data troves than Facebook’s, even more representative of U.S. society overall. What if a governmental organization were to decide to pursue the same selfless goals, after carefully ensuring that the data involved would be anonymized and presented only in the aggregate (akin to what Rudder claims to have done)?
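    (For concreteness, “presented only in the aggregate” typically means something like the following sketch, with hypothetical records and a hypothetical suppression threshold: individual records are never released, and groups too small to hide in are dropped.)

        # Hypothetical sketch of "anonymized, aggregate-only" reporting:
        # individual records never leave the dataset, and groups smaller
        # than a minimum size are suppressed, so that no published
        # statistic points back to an identifiable person. Whether this
        # actually suffices to protect privacy is itself contested.

        from collections import Counter

        MIN_GROUP_SIZE = 5  # hypothetical suppression threshold

        records = [  # hypothetical per-user records
            {"age_bracket": "18-24"}, {"age_bracket": "18-24"},
            {"age_bracket": "18-24"}, {"age_bracket": "18-24"},
            {"age_bracket": "18-24"}, {"age_bracket": "25-34"},
        ]

        counts = Counter(r["age_bracket"] for r in records)
        report = {group: n for group, n in counts.items()
                  if n >= MIN_GROUP_SIZE}
        print(report)  # {'18-24': 5}; the lone '25-34' record is suppressed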
    In the interest of better “social facts,” of greater insight into our collective mindsets and behaviors, should we encourage (or indeed demand) that the NSA publish “Who Americans Are (When They Think No One’s Watching)”? To be followed, perhaps, by a series of “Who [Insert Various Other Nationalities] Are (When They Think No One’s Watching)”? Think of all the social insights and common good that would come from that!
    In all seriousness, as Selinger rightly points out, the surveillance behind such no-notice-no-consent research comes at great cost to society:
    Rudder’s violation of the initial contextual integrity [underpinning the collection of OKCupid user data] puts personal data to questionable secondary, social use. The use is questionable because privacy isn’t only about protecting personal information. People also have privacy interests in being able to communicate with others without feeling anxious about being excessively monitored. … [T]he resulting apprehension inhibits speech, stunts personal growth, and possibly even disinclines people from experimenting with politically relevant ideas.
    With every book subtitled “Who We Are (When We Think No One’s Looking),” we, the real we, become more wary, more likely to assume that someone’s always looking. And as many members of societies that have lived with excessive surveillance have attested, that’s not a path to achieving the good life.
    Photo by Henning Muhlinghaus, used without modification under a Creative Commons license.


  •  Who (or What) Is Reading Whom: An Ongoing Metamorphosis

    Thursday, Oct. 23, 2014
    If you haven’t already read the Wall Street Journal article titled “Your E-Book Is Reading You,” published in 2012, it’s well worth your time. It might even be worth a second read, since our understanding of many Internet-related issues has changed substantially since 2012.
    I linked to that article in a short piece of mine, published yesterday in Re/code: “Metamorphosis.” I hope you’ll read that, too—and we’d love to get your comments on that story, either at Re/code or in the Comments section here!
    And finally, just a few days ago, a new paper by Jules Polonetsky and Omer Tene (both from the Future of Privacy Forum) was released through SSRN: “Who Is Reading Whom Now: Privacy in Education from Books to MOOCs.” This is no bite-sized exploration, but an extensive overview of the promises and challenges of technology-driven innovations in education—including the ethical implications of the uses of both “small data” and “big data” in this particular context.
    To play with yet another title—there are significant and ongoing shifts in “the way we read now”…

    Photo by Jose Antonio Alonso, used without modification under a Creative Commons license.

  •  Protecting Privacy and Society

    Monday, Apr. 15, 2013

    Consumer and business data is increasingly moving to the "cloud," and people are clamoring for protection of that data.  However, as Symantec's President, CEO, and Chairman of the Board Steve Bennett points out in this clip, "maximum privacy" is really anonymity, and some people use anonymity as a shield for illegal and unethical behavior.  How should cloud service providers deal with this dilemma?  What is their responsibility to their customers, and to society at large?  How should good corporate citizens respond when they are asked to cooperate with law enforcement? 

    Providers of cloud services are all faced with this dilemma; as Ars Technica recently reported, for example, Verizon took action when it discovered child pornography in one of its users' accounts.

  •  On Designing a More Ethical Internet

    Monday, Mar. 18, 2013

    Over the following weeks, this video series will present the views of several Silicon Valley tech leaders on some of the key issues in Internet ethics today. This first entry, however, sets the context of the conversation. What does it mean to live well by means of the Internet? In what ways can the Internet help us live well, or make it more difficult to live well? In this brief video, Santa Clara University Associate Professor Shannon Vallor looks at the Internet through a philosopher's lens. She argues that, now that the Internet has become a medium through which we live a large portion of our lives, we all need to think about Internet ethics much more broadly and deeply—and that the people who devise Internet tools and services should think not only about meeting users' immediate desires and needs, but also about doing so in a way that promotes a good life.

    We invite you to sign up (via email or RSS feed) to be notified as a new video clip is posted each week, and we look forward to your comments!