Santa Clara University


Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “data collection.”
  •  Metaphors of Big Data

    Friday, Nov. 6, 2015

    Hot off the (digital) presses: This morning, Re/code ran a short piece by me, titled “Metaphors of Big Data.”

    In the essay, I argue that the metaphors currently used to describe “big data” fail to acknowledge the vast variety of vast datasets that are now being collected and processed.  I argue that we need new metaphors.

    Strangers have long had access to some details about most of us—our names, phone numbers and even addresses have been fairly easy to find, even before the advent of the Internet. And marketers have long created, bought and sold lists that grouped customers based on various differentiating criteria. But marketers didn’t use to have access to, say, our search topics, back when we were searching in libraries, not Googling. The post office didn’t ask us to agree that it was allowed to open our letters and scan them for keywords that would then be sold to marketers that wanted to reach us with more accurately personalized offers. We would have balked. We should balk now.

    The link will take you to the piece on the Re/code site, but I hope you’ll come back and respond to it in the blog comments!


    Photo by Marc_Smith, used without modification under a Creative Commons license.


  •  Et tu, Barbie?

    Wednesday, Oct. 14, 2015

    In a smart city, in a smart house, a little girl got a new Barbie. Her parents, who had enough money to afford a rather pricey doll, explained to the girl that the new Barbie could talk—could actually have a conversation with the girl. Sometime later, alone in her room with her toys, the little girl, as instructed, pushed on the doll’s belt buckle and started talking. After a few minutes, she wondered what Barbie would answer if she said something mean—so she tried that.

    Later, the girl’s mother accessed the app that came with the new doll and listened to her daughter’s conversation. The mom then went to the girl’s room and asked her why she had been mean to Barbie. The little girl learned something—about talking, about playing, about technology, about her parents.

    Or maybe I should have written all of the above using future tense—because “Hello Barbie,” according to media reports, does not hit the stores until next month.

    After reading several articles about “Hello Barbie,” I decided to ask several folks here at the university for their reactions to this new high-tech toy. (I read, think, and write all the time about privacy, so I wanted some feedback from folks who mostly think about other stuff.)  Mind you, the article I’d sent them as an introduction was titled “Will Barbie Be Hackers’ New Plaything?”—so I realize it wasn’t exactly a neutral way to start the conversation. With that caveat, though, here is a sample of the various concerns that my colleagues expressed.

    The first reaction came via email: “There is a sci-fi thriller in there somewhere…” (Thriller, yes, I thought to myself, though not sci-fi anymore.)

    The other concerns came in person.  From a parent of grown kids: the observation that these days parents seem to want to know absolutely everything about their children, and that that couldn’t be healthy for either the parents or the kids. From a dad of a 3-year-old girl: “My daughter already loves Siri; if I gave her this she would stop talking to anybody else!” From a woman thinking back: “I used to have to talk for my doll, too…” The concerns echoed those raised in much of the media coverage of Hello Barbie—that she will stifle the imagination that kids deploy when they have to provide both sides of a conversation with their toys, or that she will violate whatever privacy children still have.

    But I was particularly struck by a paragraph in a Mashable article that described in more detail how the new doll/app combo will work:

    "When a parent goes through the process of setting up Hello Barbie via the app, it's possible to control the settings and manually approve or delete potential conversation topics. For example, if a child doesn’t celebrate certain holidays like Christmas, a parent can chose to remove certain lines from Barbie's repertoire."

    Is the question underlying all of this, really, one of control? Who will ultimately control Hello Barbie? Will it be Mattel? Will it be ToyTalk, the San Francisco company providing the “consumer-grade artificial intelligence” that enables Hello Barbie’s conversations? The parents who buy the doll? The hackers who might break in? The courts that might subpoena the recordings of the children’s chats with the doll?

    And when do children get to exercise control? When and how do they get to develop autonomy if even well-intentioned people (hey, corporations are people, too, now) listen in to—and control—even the conversations that the kids are having when they play, thinking they’re alone? (“…Toy Talk says that parents will have ‘full control over all account information and content,’ including sharing recordings on Facebook, YouTube, and Twitter,” notes an ABC News article; “data is sent to and from ToyTalk’s servers, where conversations are stored for two years from the time a child last interacted with the doll or a parent accessed a ToyTalk account,” points out the San Francisco Chronicle.)

    What do kids learn when they realize that those conversations they thought were private were actually being recorded, played back, and shared with either the business’s partners or the parents’ friends? All I can hope is that the little girls who will receive Hello Barbie will, as a result, grow up to be privacy activists—or, better yet, tech developers and designers who will understand, deeply, the importance of privacy by design.

    Photo by Mike Licht, used without modification under a Creative Commons license.


  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
    (AP Photo/Damian Dovarganes)

    As the number of people who are downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS9 Prompts Backlash”), as well as many other publications.

    Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”
    As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony.”) And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.
    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).
    Many of the recent articles present both sides of the ethics debate. However, most of the articles on the topic claim that the main reasons that users are installing ad blockers are the desire to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (and especially those who understand how the online advertising ecosystem works) install ad blockers: For many of those users, the primary concerns are the tracking behind “targeted” ads and the meteoric growth of “malvertising”—advertising used as a vector for malware.
    When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if the tracking is somehow inherent in advertising. But the two are not the same, and it is important that we reject this false equivalence. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: When the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). Now, it may not be clear exactly what the respondents meant by “misuse of personal information,” but that is certainly not a reference to either annoying ads or clogged bandwidth.
    As for the rise of “malvertising,” it was that development that led me to say to a Mashable reporter that if this continues unabated we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.
    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some of those who argue that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  A Personal Privacy Policy

    Wednesday, Sep. 2, 2015

    This essay first appeared in Slate's Future Tense blog in July 2015.

    Dear Corporation,

    You have expressed an interest in collecting personal information about me. (This interest may have been expressed by implication, in case you were attempting to collect such data without notifying me first.) Since you have told me repeatedly that personalization is a great benefit, and that advertising, search results, news, and other services should be tailored to my individual needs and desires, I’ve decided that I should also have my own personalized, targeted privacy policy. Here it is.

    While I am glad that (as you stated) my privacy is very important to you, it’s even more important to me. The intent of this policy is to inform you how you may collect, use, and dispose of personal information about me.

    By collecting any such information about me, you are agreeing to the terms below. These terms may change from time to time, especially as I find out more about ways in which personal information about me is actually used and I think more about the implications of those uses.

    Note: You will be asked to provide some information about yourself. Providing false information will constitute a violation of this agreement.

    Scope: This policy covers only me. It does not apply to related entities that I do not own or control, such as my friends, my children, or my husband.

    Age restriction and parental participation: Please specify if you are a startup; if so, note how long you’ve been in business. Please include the ages of the founders/innovators who came up with your product and your business model. Please also include the ages of any investors who have asserted, through their investment in your company, that they thought this product or service was a good idea.

    Information about you: For each piece of personal information about me that you wish to collect, analyze, and store, you must first disclose the following: a) Do you need this particular piece of information in order for your product/service to work for me? If not, you are not authorized to collect it. If yes, please explain how this piece of information is necessary for your product to work for me. b) What types of analytics do you intend to perform with this information? c) Will you share this piece of information with anyone outside your company? If so, list each entity with which you intend to share it, and for what purpose; you must update this disclosure every time you add a new third party with which you’d like to share. d) Will you make efforts to anonymize the personal information that you’re collecting? e) Are you aware of the research that shows that anonymization doesn’t really work because it’s easy to put together information from several categories and/or several databases and so figure out the identity of an “anonymous” source of data? f) How long will you retain this particular piece of information about me? g) If I ask you to delete it, will you, and if so, how quickly? Note: by “delete” I don’t mean “make it invisible to others”—I mean “get it out of your system entirely.”

    Please be advised that, like these terms, the information I’ve provided to you may change, too: I may switch electronic devices; change my legal name; have more children; move to a different town; experiment with various political or religious affiliations; buy products that I may or may not like, just to try something new or to give to someone else; etc. These terms (as amended as needed) will apply to any new data that you may collect about me in the future: your continued use of personal information about me constitutes your acceptance of this.

    And, of course, I reserve all rights not expressly granted to you.

    Photo by Perspecsys Photos, used without modification under a Creative Commons license.

  •  Luciano Floridi’s Talk at Santa Clara University

    Tuesday, Mar. 10, 2015


    In the polarized debate about the so-called “right to be forgotten” prompted by an important decision issued by the European Court of Justice last year, Luciano Floridi has played a key role. Floridi, who is Professor of Philosophy and Ethics of Information at the University of Oxford and Director of Research of the Oxford Internet Institute, accepted Google’s invitation to join its advisory council on that topic. While the council was making its way around seven European capitals pursuing both expert and public input, Professor Floridi (the only ethicist in the group) wrote several articles about his evolving understanding of the issues involved—including “Google's privacy ethics tour of Europe: a complex balancing act”; “Google ethics tour: should readers be told a link has been removed?”; “The right to be forgotten – the road ahead”; and “Right to be forgotten poses more questions than answers.”
    Last month, after the advisory council released its much-anticipated report, Professor Floridi spoke at Santa Clara University (his lecture was part of our ongoing “IT, Ethics, and Law” lecture series). In his talk, titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age,” Floridi embedded his analysis of the European court decision into a broader exploration of the nature of memory itself; the role of memory in the European philosophical tradition; and the relationship among memory, identity, forgiveness, and closure. As Floridi explained, the misnamed “right to be forgotten” is really about closure, which is in turn not about forgetting but about “rightly managing your past memory.”
    Here is the video of that talk. We hope that it will add much-needed context to the more nuanced conversation that is now developing around the balancing of the rights, needs, and responsibilities of all of the stakeholders involved in this debate, as Google continues to process the hundreds of thousands of requests for de-linking submitted so far in the E.U.
    If you would like to be added to our “IT, Ethics, and Law” mailing list in order to be notified of future events in the lecture series, please email


  •  “Practically as an accident”: on “social facts” and the common good

    Thursday, Oct. 30, 2014


    In the Los Angeles Review of Books, philosopher Evan Selinger takes issue with many of the conclusions (and built-in assumptions) compiled in Dataclysm—a new book by Christian Rudder, who co-founded the dating site OKCupid and now heads the site’s data analytics team. While Selinger’s whole essay is really interesting, I was particularly struck by his comments on big data and privacy. 

    “My biggest issue with Dataclysm,” Selinger writes,
    lies with Rudder’s treatment of surveillance. Early on in the book he writes: ‘If Big Data’s two running stories have been surveillance and money, for the last three years I’ve been working on a third: the human story.’ This claim about pursuing a third path isn’t true. Dataclysm itself is a work of social surveillance.
    It’s tempting to think that different types of surveillance can be distinguished from one another in neat and clear ways. If this were the case, we could say that government surveillance only occurs when organizations like the National Security Agency do their job; corporate surveillance is only conducted by companies like Facebook who want to know what we’re doing so that they effectively monetize our data and devise strategies to make us more deeply engaged with their platform; and social surveillance only takes place in peer-to-peer situations, like parents monitoring their children’s phones, romantic partners scrutinizing each other’s social media feeds….
    But in reality, surveillance is defined by fluid categories.
    While each category of surveillance might include both ethical and unethical practices, the point is that the boundaries separating the categories are porous, and the harms associated with surveillance might seep across all of them.
    Increasingly, when corporations like OKCupid or Facebook analyze their users’ data and communications in order to uncover “social facts,” they claim to be acting in the interest of the common good, rather than pursuing self-serving goals. They claim to give us clear windows into our society. The subtitle of Rudder’s book, for example, is “Who We Are (When We Think No One’s Looking).” As Selinger notes,
    Rudder portrays the volume of information… as a gift that can reveal the truth of who we really are. … [W]hen people don’t realize they’re lab rats in Rudder’s social experiments, they reveal habits—‘universals,’ he even alleges…  ‘Practically as an accident,’ Rudder claims, ‘digital data can now show us how we fight, how we love, how we age, who we are, and how we’re changing.’
    Of course, Rudder should confine his claims to the “we” who use OKCupid (a 2013 study by the Pew Research Center found that 10% of Americans report having used an online dating service). Facebook has a stronger claim to having a user base that reflects all of “us.”  But there are other entities that sit on even vaster data troves than Facebook’s, even more representative of U.S. society overall. What if a governmental organization were to decide to pursue the same selfless goals, after carefully ensuring that the data involved would be carefully anonymized and presented only in the aggregate (akin to what Rudder claims to have done)?
    In the interest of better “social facts,” of greater insight into our collective mindsets and behaviors, should we encourage (or indeed demand) that the NSA publish “Who Americans Are (When They Think No One’s Watching)”? To be followed, perhaps, by a series of “Who [Insert Various Other Nationalities] Are (When They Think No One’s Watching)”? Think of all the social insights and common good that would come from that!
    In all seriousness, as Selinger rightly points out, the surveillance behind such no-notice-no-consent research comes at great cost to society:
    Rudder’s violation of the initial contextual integrity [underpinning the collection of OKCupid user data] puts personal data to questionable secondary, social use. The use is questionable because privacy isn’t only about protecting personal information. People also have privacy interests in being able to communicate with others without feeling anxious about being excessively monitored. … [T]he resulting apprehension inhibits speech, stunts personal growth, and possibly even disinclines people from experimenting with politically relevant ideas.
    With every book subtitled “Who We Are (When We Think No One’s Looking),” we, the real we, become more wary, more likely to assume that someone’s always looking. And as many members of societies that have lived with excessive surveillance have attested, that’s not a path to achieving the good life.
    Photo by Henning Muhlinghaus, used without modification under a Creative Commons license.


  •  Who (or What) Is Reading Whom: An Ongoing Metamorphosis

    Thursday, Oct. 23, 2014
    If you haven’t already read the Wall Street Journal article titled “Your E-Book Is Reading You,” published in 2012, it’s well worth your time. It might even be worth a second read, since our understanding of many Internet-related issues has changed substantially since 2012.
    I linked to that article in a short piece that I wrote, which was published yesterday in Re/Code: “Metamorphosis.”  I hope you’ll read that, too—and we’d love to get your comments on that story either at Re/Code or in the Comments section here!
    And finally, just a few days ago, a new paper by Jules Polonetsky and Omer Tene (both from the Future of Privacy Forum) was released through SSRN: “Who Is Reading Whom Now: Privacy in Education from Books to MOOCs.” This is no bite-sized exploration, but an extensive overview of the promises and challenges of technology-driven innovations in education—including the ethical implications of the uses of both “small data” and “big data” in this particular context.
    To play with yet another title—there are significant and ongoing shifts in “the way we read now”…

    Photo by Jose Antonio Alonso, used without modification under a Creative Commons license.

  •  Questions about Mass Surveillance

    Tuesday, Oct. 14, 2014

    Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy.  (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc.  Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”

    The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments.  However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved.  And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.”  Microsoft’s General Counsel, Brad Smith, went one step further:  “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”

    That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by the Santa Clara Magazine.  One of the things I had asked was whether the newly-revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, "[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?"

    And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, "[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?"

    We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church committee and to policy changes.

    Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps.  But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues.  Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”

    Illustration by Stuart Bradford

  •  Mobile Technology and Social Media: Ethical Implications

    Sunday, May. 12, 2013

    The adoption of mobile devices and the use of social media are both growing quickly around the world.  In emerging markets in particular, mobile devices have become “life tools”—used for telemedicine, banking, education, communication, and more.  These developments give rise to new ethical challenges.  How should mobile devices be used for data collection among vulnerable populations?  Can apps that bring great benefits also cause unintended harm?  And who should address these concerns?  In this brief video, tech entrepreneur and professor Radha Basu argues that the debate should include the manufacturers of mobile devices and the app developers, but also the young people who will be most affected by these new developments.