Santa Clara University

Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following posts are tagged "benefit."
  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
     (AP Photo/Damian Dovarganes)

    As the number of people who are downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS9 Prompts Backlash”), as well as many other publications.

     Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”
     As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony”). And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.
    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).
     Many of the recent articles present both sides of the ethics debate. Most of them, however, claim that users install ad blockers mainly to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (especially those who understand how the online advertising ecosystem works) install ad blockers: for many such users, the primary concerns are the tracking behind “targeted” ads and the meteoric growth of “malvertising”—advertising used as a vector for malware.
     When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if tracking were somehow inherent in advertising. But the two are not the same, and it is important that we reject this false equivalence. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: when the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). Now, it may not be clear exactly what the respondents meant by “misuse of personal information,” but it is certainly not a reference to either annoying ads or clogged bandwidth.
    As for the rise of “malvertising,” it was that development that led me to say to a Mashable reporter that if this continues unabated we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.
    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.
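     To make the quoted mechanism concrete, here is a small, purely hypothetical sketch (the field names and the campaign structure are invented for illustration and are not drawn from any real ad network's API) of how the same targeting knobs built for marketers can let an attacker serve an exploit-laden ad only to the narrow slice of visitors most likely to be vulnerable, while reviewers and everyone else see an ordinary creative.

         from dataclasses import dataclass

         @dataclass
         class Visitor:
             browser: str    # e.g., an outdated browser version
             os: str         # operating system
             country: str    # geolocation the ad network has inferred
             keywords: set   # search terms the network has profiled

         def matches(visitor, campaign):
             """True if this visitor fits the campaign's (hypothetical) targeting rules."""
             return (visitor.browser in campaign["browsers"]
                     and visitor.os in campaign["os"]
                     and visitor.country in campaign["countries"]
                     and bool(visitor.keywords & campaign["keywords"]))

         # The attacker fills in the same knobs a marketer would, but aims them at
         # visitors whose software profile suggests they are exploitable.
         malicious_campaign = {
             "browsers": {"OldBrowser 8"},
             "os": {"UnpatchedOS"},
             "countries": {"US"},
             "keywords": {"banking", "tax refund"},
         }

         def serve_ad(visitor):
             # Everyone outside the target profile (including the network's reviewers)
             # sees a benign creative, which is part of why malvertising is hard to catch.
             if matches(visitor, malicious_campaign):
                 return "exploit-laden creative"
             return "benign creative"

         print(serve_ad(Visitor("OldBrowser 8", "UnpatchedOS", "US", {"tax refund"})))  # exploit-laden
         print(serve_ad(Visitor("ModernBrowser", "PatchedOS", "US", {"news"})))         # benign

     The point of the sketch is the concealment the Wired piece describes: the malicious payload never reaches the people most likely to notice and report it.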

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some of those who argue that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  Are You A Hysteric, Or A Sociopath? Welcome to the Privacy Debate

    Tuesday, Oct. 7, 2014


    Whether you’re reading about the latest data-mining class action lawsuit through your Google Glass or relaxing on your front porch waving at your neighbors, you probably know that there’s a big debate in this country about privacy.  Some say privacy is important. Some say it’s dead.  Some say kids want it, or not. Some say it’s a relatively recent phenomenon whose time, by the way, has passed—a slightly opaque blip in our history as social animals. Others say it’s a human right without which many other rights would be impossible to maintain.

    It’s a much-needed discussion—but one in which the tone is often not conducive to persuasion, and therefore progress.  If you think concerns about information privacy are overrated and might become an obstacle to the development of useful tools and services, you may hear yourself described as a [Silicon Valley] sociopath or a heartless profiteer.  If you believe that privacy is important and deserves protection, you may be called a “privacy hysteric.”
     It’s telling that privacy advocates are so often called “hysterics”—a term associated more commonly with women, and with a surfeit of emotion and lack of reason.  (Privacy advocates are also called “fundamentalists” or “paranoid”—again implying belief not based in reason.)  And even when such terms are not directly deployed, the tone often suggests them. In a 2012 Cato Institute policy analysis titled “A Reasonable Response to the Privacy ‘Crisis,’” for example, Larry Downes writes about the “emotional baggage” invoked by the term “privacy,” and advises, “For those who naturally leap first to legislative solutions, it would be better just to fume, debate, attend conferences, blog, and then calm down before it’s too late.”  (Apparently debate, like fuming and attending conferences, is just a harmless way to let off steam—as long as it doesn’t lead to such hysteria as class-action lawsuits or actual attempts at legislation.)
    In the year following Edward Snowden’s revelations, the accusations of privacy “hysteria” or “paranoia” seemed to have died down a bit; unfortunately, they might be making a comeback. The summary of a recent GigaOm article, for example, accuses BuzzFeed of “pumping up the hysteria” in its discussion of ad beacons installed—and quickly removed—in New York.
     On the other hand, those who oppose privacy-protecting legislation or who argue that other values or rights might trump privacy sometimes find themselves diagnosed, too—if not as sociopaths, then at least as belonging on the “autism spectrum”: disregarding social norms, unable to empathize with others.
    Too often, the terms thrown about by some on both sides in the privacy debate suggest an abdication of the effort to persuade. You can’t reason with hysterics and sociopaths, so there’s no need to try. You just state your truth to those others who think like you do, and who cheer your vehemence.
    But even if you’re a privacy advocate, you probably want the benefits derived from collecting and analyzing at least some data sets, under some circumstances; and even if you think concerns about data disclosures are overblown, you still probably don’t disclose everything about yourself to anyone who will listen.
     If information is power, privacy is a defensive shell against that power.  It is an effort to modulate vulnerability.  (The more vulnerable you feel, the more likely you are to understand the value of privacy.)  So privacy is an inherent part of all of our lives; the question is how to deploy it best.  In light of new technologies that create new privacy challenges, and new methodologies that seek to maximize benefits while minimizing harms (e.g., “differential privacy”), we need to be able to discuss this complicated balancing act—without charged rhetoric making the debate even more difficult.
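     For readers who have not encountered “differential privacy,” here is a minimal illustrative sketch of the underlying idea (the function, the parameter choice, and the toy data are invented for illustration; real systems rely on carefully audited libraries and settings): an analyst can still learn an approximate aggregate, while calibrated random noise keeps the answer from revealing much about any single person.

         import random

         def private_count(values, predicate, epsilon=0.5):
             """Noisy count of how many values satisfy predicate.
             The difference of two exponentials yields Laplace noise with scale 1/epsilon,
             the standard mechanism for a counting query (sensitivity 1)."""
             true_count = sum(1 for v in values if predicate(v))
             noise = random.expovariate(epsilon) - random.expovariate(epsilon)
             return true_count + noise

         ages = [34, 29, 41, 52, 38, 61, 27]
         # The aggregate stays useful (around 3 people are over 40, give or take the noise),
         # but the answer alone cannot reveal whether any one person was in the data set.
         print(round(private_count(ages, lambda a: a > 40), 1))

     Real deployments are far more careful than this, but the sketch shows the kind of balancing act at issue: useful aggregate knowledge, bounded individual exposure.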
    If you find yourself calling people privacy-related names (or writing headlines or summaries that do that, even when the headlined articles themselves don’t), please rephrase.
     Photo by Tom Tolkien, unmodified, used under a Creative Commons license.
  •  The Disconnect: Accountability and Consequences Online

    Sunday, Apr. 28, 2013

     Do we need more editorial control on the Web?  In this brief clip, the Chairman, President, and Chief Executive Officer of Seagate Technology, Stephen Luczo, argues that we do.  He also cautions that digital media channels sometimes unwittingly lend a gloss of credibility to stories that don't deserve it (as was recently demonstrated in the coverage of the Boston bombing).  Luczo views this as a symptom of a broader breakdown of the links among responsibility, accountability, and consequences in the online world.  Is the much-vaunted freedom of the Internet diminishing the amount of substantive feedback that we get for doing something positive—or negative—for society?

    Chad Raphael, Chair of the Communication Department and Associate Professor at Santa Clara University, responds to Luczo's comments:

    "It's true that the scope and speed of news circulation on the Internet worsens longstanding problems of countering misinformation and holding the sources that generate it accountable.  But journalism's traditional gatekeepers were never able to do these jobs alone, as Senator Joseph McCarthy knew all too well.  News organizations make their job harder with each new round of layoffs of experienced journalists.

     There are new entities emerging online that can help fulfill these traditional journalistic functions, but we need to do more to connect, augment, and enshrine them in online news spaces. Some of these organizations, such as News Trust, crowdsource the problem of misinformation by enlisting many minds to review news stories and alert the public to inaccuracy and manipulation.  Their greatest value may be as watchdogs who can sound the alarm on suspicious material.  Other web sites rely on trained professionals to evaluate political actors' claims.  They can pick up tips from multiple watchdogs, some of them more partisan than others, and evaluate those tips as fair-minded judges.  We need them to expand their scope beyond checking politicians to include other public actors.  The judges could also use some more robust programs for tracking the spread of info-viruses back to their sources, so they can be identified and exposed quickly.  We also need better ways to publicize the online judges' verdicts.

    If search engines and other news aggregators aim to organize the world's information for us, it seems within their mission to let us know what sources, stories, and news organizations have been more and less accurate over time.  Even more importantly, aggregators might start ranking better performing sources higher in their search results, creating a powerful economic incentive to get the story right rather than getting it first.

    Does that raise First Amendment concerns? Sure. But we already balance the right to free speech against other important rights, including reputation, privacy, and public safety.  And the Internet is likely to remain the Wild West until Google, Yahoo!, Digg, and other news aggregators start separating the good, the bad, and the ugly by organizing information according to its credibility, not just its popularity."

    Chad Raphael

  •  Internet Access Is a Privilege

    Sunday, Apr. 21, 2013

    What would our lives be like if we no longer had access to the Internet?  How much good would we lose?  How much harm would we be spared?  Is Internet access a right?  These days, whether or not we think of access to it as a right, many of us take the Internet for granted.  In this brief video, Apple co-founder A. C. "Mike" Markkula Jr. looks at the big picture, argues that Internet use is a privilege, and considers ways to minimize some of the harms associated with it, while fully appreciating its benefits.

    In an op-ed published in the New York Times last year, Vint Cerf (who is often described as one of the "fathers of the Internet" and is currently a vice president and chief Internet evangelist for Google) argued along similar lines:

    "As we seek to advance the state of the art in technology and its use in society, [engineers] must be conscious of our civil responsibilities in addition to our engineering expertise.  Improving the Internet is just one means, albeit an important one, by which to improve the human condition. It must be done with an appreciation for the civil and human rights that deserve protection--without pretending that access itself is such a right."