Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

  •  Cookies and Privacy: A Delicious Counter-Experiment

    Monday, Nov. 17, 2014

     

    Last month, a number of stories in publications such as ProPublica, Mashable, Slate, and The Smithsonian Magazine covered an “experiment” by artist Risa Puno, who asked attendees at an art festival to disclose bits of personal information about themselves in exchange for cookies.  ProPublica described the event as a “highly unscientific but delicious experiment” in which “380 New Yorkers gave up sensitive personal information—from fingerprints to partial Social Security numbers—for a cookie.” Of course, we are given no count of the number of people who refused the offer, and the article notes that “[j]ust under half—or 162 people—gave what they said were the last four digits of their Social Security numbers”—with that rather important “what they said” caveat casually buried mid-sentence.

    “To get a cookie,” according to the ProPublica story, “people had to turn over personal data that could include their address, driver's license number, phone number and mother's maiden name”—the accuracy of most of which, of course, Puno could also not confirm.
     
    All of this is shocking only if one assumes that people are not capable of lying (especially to artists offering cookies). But the artist declared herself shocked, and ProPublica somberly concluded that “Puno's performance art experiment highlights what privacy experts already know: Many Americans are not sure how much their personal data is worth, and that consumer judgments about what price to put on privacy can be swayed by all kinds of factors.”
     
    In this case, I am at least thankful for the claim that the non-experiment “highlights,” rather than “proves” something. Other stories, however, argued that the people convinced to give up information “demonstrated just how much their personal information was worth.” The Smithsonian argued that the “artistic experiment is confirmation of the idea that people really just have no sense of what information and privacy is worth other than, variably, a whole lot, or, apparently, a cookie.” The headline in The Consumerist blared, “Forget Computer Cookies: People Happily Give Up Personal Data For The Baked Kind” (though, in all fairness, The Consumerist article did highlight the “what they said” bit, and noted that the “finely-honed Brooklynite sense of modern irony may have played a role, too. Plenty of purchasers didn’t even eat their cookies…. They ‘bought’ them so they could post photos on Twitter and Instagram saying things like, ‘Traded all my personal data for a social media cookie’…”—which suggests rather more awareness than Puno gives people credit for).
     
    In any case, prompted by those stories, I decided that a flip-side “artistic experiment” was in order. Last week, together with my partner in privacy-protective performance art—Robert Henry, who is Santa Clara University’s Chief Information Security Officer—I set up a table in between the campus bookstore and the dining area.  Bob had recently sent out a campus-wide email reminding people to change their passwords, and we decided that we would offer folks cookies in return for password changes. We printed out a sign that read “Treats for Password Changes,” and we set out two types of treats: cookies and free USB drives. The USB drives all came pre-loaded with a file explaining the security dangers associated with picking up free USB drives. The cookies came pre-loaded with chocolate chips.
     
    We are now happy to report our results. First, a lot of people don’t trust any offers of free cookies. We got a lot of very suspicious looks. Second, within the space of about an hour and a half, about 110 people were easily convinced to change one of their passwords—something that is a good privacy/security practice in itself—in exchange for a cookie. Does this mean people do care about privacy? (To anticipate your question: some people pulled out their phones or computers and appeared to be changing a password right there; others promised to change a password when they got to their computer; we have no way of knowing if they did—just like Puno had no way of knowing whether much of the “information” she got was true. Collected fingerprints aside…) Third, cookies were much, much more popular than the free USB drives. Of course, the cookies were cheaper than the USB drives. Does this mean that people are aware of the security dangers posed by USB drives and are willing to “pay” for privacy?
     
    Responses from the students, parents, and others who stopped to talk with us and enjoy the soft warm chocolate-chip cookies ranged from “I’m a cryptography student and I change my passwords every three months” to “I only have one password—should I change that?” to “I didn’t know you were supposed to change passwords” to “But I just changed my password in response to your email” (which made Bob really happy). It was, if nothing else, an educational experience—in some cases for us, in others for them.
     
    So what does our “artistic experiment” prove? Absolutely nothing, of course—just like Puno’s “experiment,” which prompted so much coverage. (Or maybe they both prove that people like free cookies.)
     
    The danger with projects like hers, though, is that their “conclusions” are often echoed in discussions about business, regulation, or public policy in general: If people give up personal information for a cookie, the argument goes, why should we protect privacy? That is the argument that needs to be refuted—again and again. Poll after poll finds that people say they do value their privacy, are deeply concerned by its erosion, and want more laws to protect it; but some refuse to believe them and turn, instead, to “evidence” from silly “experiments.” As long as that continues, we need more flip-side “experiments”—complete, of course, with baked goods.
     
  •  Dickens on Big Data

    Thursday, Nov. 6, 2014
    This essay first appeared in Re/Code in May 2014.
     
     While some writers like to imagine what Plato would have said about the Googleplex and other aspects of current society, when it comes to life regulated and shaped by data and algorithms, Charles Dickens is the one to ask. His novel Hard Times is subtitled “For these times,” and his exploration of oversimplification through numbers certainly makes that subtitle apt again.
     
    Hard Times is set in a fictional Victorian mill town in which schools and factories are purportedly run based on data and reason. In Dickens's day, the Utilitarians proposed a new take on ethics, and social policies drew on utilitarian views. Hard Times has often been called a critique of utilitarianism; however, its critique is not directed primarily at the goal of maximizing happiness and minimizing harm, but at the focus on facts/data/measurable things at the expense of everything else. Dickens was excoriating what today we would call algorithmic regulation and education.
     
    Explaining the impact of “algorithmic regulation,” social critic Evgeny Morozov writes about
    … the construction of “invisible barbed wire” around our intellectual and social lives. Big data, with its many interconnected databases that feed on information and algorithms of dubious provenance, imposes severe constraints on how we mature politically and socially. The German philosopher Jürgen Habermas was right to warn — in 1963 — that “an exclusively technical civilization … is threatened … by the splitting of human beings into two classes — the social engineers and the inmates of closed social institutions.”
     
    Dickens’s Hard Times is concerned precisely with the social engineers and the inmates of closed social institutions. Its prototypical “social engineer” is Thomas Gradgrind — a key character who advocates data-based scientific education (and nothing else):
    Thomas Gradgrind, sir. A man of realities. A man of fact and calculations. … With a rule and a pair of scales, and the multiplication table always in his pocket, sir, ready to weigh and measure any parcel of human nature, and tell you exactly what it comes to. It is a mere question of figures, a case of simple arithmetic.
     
    Today he would carry a cellphone in his pocket instead of a multiplication table, but he is otherwise a modern man: a proponent of a certain way of looking at the world, through big data and utopian algorithms.
     
    Had Dickens been writing today, would he have set his book in Silicon Valley? Writers like Dave Eggers do, in novels like The Circle, which explores more recent efforts at trying out theoretical social systems on vast populations.
     
    Morozov frequently does, too, as in his article “The Internet Ideology: Why We Are Allowed to Hate Silicon Valley,” where he argues that the “connection between the seeming openness of our technological infrastructures and the intensifying degree of control [by corporations, by governments, etc.] remains poorly understood.”
     
    Dickens was certainly concerned by the intensifying control he observed in the ethos of his age. “You,” says one of his educators to a young student, “are to be in all things regulated and governed … by fact. We hope to have, before long, a board of fact, composed of commissioners of fact, who will force the people to be a people of fact, and of nothing but fact. You must discard the word Fancy altogether. You have nothing to do with it.”
     
    Fancy, wonder, imagination, creativity — all qualities that can’t be accurately quantified — may indeed be downplayed (even if unintentionally) in a fully quantified educational system. In such a system, what happens to the questions that can’t be answered once and for all?
     
    As Dickens puts it, “Herein lay the spring of the mechanical art and mystery of educating the reason without stooping to the cultivation of the sentiments and affections. Never wonder. By means of addition, subtraction, multiplication, and division, settle everything somehow, and never wonder.”
     
    And what about the world beyond the school? In Hard Times, the other people subjected to “algorithmic regulation” are the workers of Coketown. Describing this fictional Victorian mill town (after having visited a real one), Dickens writes:
    Fact, fact, fact, everywhere in the material aspect of the town; fact, fact, fact, everywhere in the immaterial. The … school was all fact, and the school of design was all fact, and the relations between master and man were all fact, and everything was fact between the lying-in hospital and the cemetery, and what you couldn’t state in figures, or show to be purchaseable in the cheapest market and saleable in the dearest, was not, and never should be, world without end, Amen.
     
    The overarching point, for Dickens, is that many of the most important aspects of human life are not measurable with precision and not amenable to algorithmically designed policies. “It is known,” he writes in Hard Times:
    … to the force of a single pound weight, what the engine will do; but not all the calculators of the National Debt can tell me the capacity for good or evil, for love or hatred, for patriotism or discontent, for the decomposition of virtue into vice, or the reverse, at any single moment in the soul of one of these its quiet servants…. There is no mystery in it; there is an unfathomable mystery in the meanest of them, for ever.
     
    Does the exponentially greater power of our “calculators” challenge that perception? Does big data mean “no mystery,” even in human beings?
     
    We are buffeted by claims that Google or Facebook or some other data-collection entities know us better than we know ourselves (or at least better than our spouses do); we are implementing predictive policing; we ponder algorithmic approaches to education.
     
    As we do so, Dickens’s characters, like Thomas Gradgrind’s daughter Louisa, call out a warning from another time when we tried this approach. In a moment of crisis, Louisa (who has been raised on a steady diet of facts) tells Gradgrind, “With a hunger and thirst upon me, father, which have never been for a moment appeased; with an ardent impulse toward some region where rules, and figures, and definitions were not quite absolute; I have grown up, battling every inch of my way.”
     
    In Hard Times, all the children who are shaped by the utilitarian fact-based approach, with no room for wonder and fancy, are stifled and stilted. Eventually, even Gradgrind realizes this. “I only entreat you to believe … I have meant to do right,” he tells his daughter. Dickens adds, “He said it earnestly, and to do him justice he had. In gauging fathomless deeps with his little mean excise-rod, and in staggering over the universe with his rusty stiff-legged compasses, he had meant to do great things.” But this is not a Disney ending; Gradgrind’s remorse does not reverse the damage done to Louisa and others like her. And the lives of the algorithmically governed people of Coketown are miserable.
     
    In a recent Scientific American blog post, psychologist Adam Waytz uses the term “quantiphobia” in reference to the claim that creativity is unquantifiable. He describes himself as “bugged” by such claims, and wonders “where such quantiphobia originates.” He then writes that
    … both neural and self-report evidence show that people tend to represent morals like preferences more than like facts. Getting back to the issue of quantiphobia, my sense is that when numbers are appended to issues with moral relevance, this moves them out of the realm of preference and into the realm of fact, and this transition unnerves us.
     
    Is it irrational to be “unnerved” by this transition? What Waytz fails to address is the assumption that numbers or facts provide greater or more objective truth than unquantified “preferences” do. Of course, the process through which “numbers are appended to issues” is itself subjective — expressive of preferences. What we choose to measure, and how, is subjective. How we analyze the resulting numbers is subjective. The movement into the realm of fact is not equivalent to a movement into the realm of truth. The refusal to append numbers to certain things is not “quantiphobia”—it is wisdom.
     
    This is not to dismiss the very real benefits that can be derived from big-data analytics and algorithmic functions in many contexts. We can garner those and still acknowledge that certain things may be both extremely important and unmeasurable, and that our policies and approaches should reflect that reality.
     
    Dickens throws down a gauntlet for our times: “Supposing we were to reserve our arithmetic for material objects, and to govern these awful unknown quantities [i.e., human beings] by other means!”
     
    Dear Reader, if you are a “quant,” please read Hard Times.  And no, don’t count the lines.
     
    (Photo by Wally Gobetz, used without modification under a Creative Commons license.)
     
  •  “Practically as an accident”: on “social facts” and the common good

    Thursday, Oct. 30, 2014

     

    In the Los Angeles Review of Books, philosopher Evan Selinger takes issue with many of the conclusions (and built-in assumptions) compiled in Dataclysm—a new book by Christian Rudder, who co-founded the dating site OKCupid and now heads the site’s data analytics team. While Selinger’s whole essay is really interesting, I was particularly struck by his comments on big data and privacy. 

    “My biggest issue with Dataclysm,” Selinger writes,
     
    lies with Rudder’s treatment of surveillance. Early on in the book he writes: ‘If Big Data’s two running stories have been surveillance and money, for the last three years I’ve been working on a third: the human story.’ This claim about pursuing a third path isn’t true. Dataclysm itself is a work of social surveillance.
     
    It’s tempting to think that different types of surveillance can be distinguished from one another in neat and clear ways. If this were the case, we could say that government surveillance only occurs when organizations like the National Security Agency do their job; corporate surveillance is only conducted by companies like Facebook who want to know what we’re doing so that they effectively monetize our data and devise strategies to make us more deeply engaged with their platform; and social surveillance only takes place in peer-to-peer situations, like parents monitoring their children’s phones, romantic partners scrutinizing each other’s social media feeds….
     
    But in reality, surveillance is defined by fluid categories.
     
    While each category of surveillance might include both ethical and unethical practices, the point is that the boundaries separating the categories are porous, and the harms associated with surveillance might seep across all of them.
     
    Increasingly, when corporations like OKCupid or Facebook analyze their users’ data and communications in order to uncover “social facts,” they claim to be acting in the interest of the common good, rather than pursuing self-serving goals. They claim to give us clear windows into our society. The subtitle of Rudder’s book, for example, is “Who We Are (When We Think No One’s Looking).” As Selinger notes,
     
    Rudder portrays the volume of information… as a gift that can reveal the truth of who we really are. … [W]hen people don’t realize they’re lab rats in Rudder’s social experiments, they reveal habits—‘universals,’ he even alleges…  ‘Practically as an accident,’ Rudder claims, ‘digital data can now show us how we fight, how we love, how we age, who we are, and how we’re changing.’
     
    Of course, Rudder should confine his claims to the “we” who use OKCupid (a 2013 study by the Pew Research Center found that 10% of Americans report having used an online dating service). Facebook has a stronger claim to having a user base that reflects all of “us.”  But there are other entities that sit on even vaster data troves than Facebook’s, even more representative of U.S. society overall. What if a governmental organization were to decide to pursue the same selfless goals, after carefully ensuring that the data involved would be anonymized and presented only in the aggregate (akin to what Rudder claims to have done)?
     
    In the interest of better “social facts,” of greater insight into our collective mindsets and behaviors, should we encourage (or indeed demand) that the NSA publish “Who Americans Are (When They Think No One’s Watching)”? To be followed, perhaps, by a series of “Who [Insert Various Other Nationalities] Are (When They Think No One’s Watching)”? Think of all the social insights and common good that would come from that!
     
    In all seriousness, as Selinger rightly points out, the surveillance behind such no-notice-no-consent research comes at great cost to society:
     
    Rudder’s violation of the initial contextual integrity [underpinning the collection of OKCupid user data] puts personal data to questionable secondary, social use. The use is questionable because privacy isn’t only about protecting personal information. People also have privacy interests in being able to communicate with others without feeling anxious about being excessively monitored. … [T]he resulting apprehension inhibits speech, stunts personal growth, and possibly even disinclines people from experimenting with politically relevant ideas.
     
    With every book subtitled “Who We Are (When We Think No One’s Looking),” we, the real we, become more wary, more likely to assume that someone’s always looking. And as many members of societies that have lived with excessive surveillance have attested, that’s not a path to achieving the good life.
     
    Photo by Henning Muhlinghaus, used without modification under a Creative Commons license.

     

  •  Who (or What) Is Reading Whom: An Ongoing Metamorphosis

    Thursday, Oct. 23, 2014
     
    If you haven’t already read the Wall Street Journal article titled “Your E-Book Is Reading You,” published in 2012, it’s well worth your time. It might even be worth a second read, since our understanding of many Internet-related issues has changed substantially since 2012.
     
    I linked to that article in a short piece that I wrote, which was published yesterday in Re/Code: “Metamorphosis.”  I hope you’ll read that, too—and we’d love to get your comments on that story either at Re/Code or in the Comments section here!
     
    And finally, just a few days ago, a new paper by Jules Polonetsky and Omer Tene (both from the Future of Privacy Forum) was released through SSRN: “Who Is Reading Whom Now: Privacy in Education from Books to MOOCs.” This is no bite-sized exploration, but an extensive overview of the promises and challenges of technology-driven innovations in education—including the ethical implications of the uses of both “small data” and “big data” in this particular context.
     
    To play with yet another title—there are significant and ongoing shifts in “the way we read now”…
     

    Photo by Jose Antonio Alonso, used without modification under a Creative Commons license.

  •  Questions about Mass Surveillance

    Tuesday, Oct. 14, 2014


    Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy.  (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc.  Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”

    The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments.  However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved.  And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.”  Microsoft’s General Counsel, Brad Smith, went one step further:  “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”

    That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by the Santa Clara Magazine.  One of the things I had asked was whether the newly revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, “[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?”

    And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, “[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?”

    We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church committee and to policy changes.

    Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps.  But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues.  Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”

    Illustration by Stuart Bradford

  •  Are You A Hysteric, Or A Sociopath? Welcome to the Privacy Debate

    Tuesday, Oct. 7, 2014

     

    Whether you’re reading about the latest data-mining class action lawsuit through your Google Glass or relaxing on your front porch waving at your neighbors, you probably know that there’s a big debate in this country about privacy.  Some say privacy is important. Some say it’s dead.  Some say kids want it, or not. Some say it’s a relatively recent phenomenon whose time, by the way, has passed—a slightly opaque blip in our history as social animals. Others say it’s a human right without which many other rights would be impossible to maintain.

    It’s a much-needed discussion—but one in which the tone is often not conducive to persuasion, and therefore progress.  If you think concerns about information privacy are overrated and might become an obstacle to the development of useful tools and services, you may hear yourself described as a [Silicon Valley] sociopath or a heartless profiteer.  If you believe that privacy is important and deserves protection, you may be called a “privacy hysteric.”
     
    It’s telling that privacy advocates are so often called “hysterics”—a term associated more commonly with women, and with a surfeit of emotion and lack of reason.  (Privacy advocates are also called “fundamentalists” or “paranoid”—again implying belief not based in reason.)  And even when such terms are not directly deployed, the tone often suggests them. In a 2012 Cato Institute policy analysis titled “A Reasonable Response to the Privacy ‘Crisis,’” for example, Larry Downes writes about the “emotional baggage” invoked by the term “privacy,” and advises, “For those who naturally leap first to legislative solutions, it would be better just to fume, debate, attend conferences, blog, and then calm down before it’s too late.”  (Apparently debate, like fuming and attending conferences, is just a harmless way to let off steam—as long as it doesn’t lead to such hysteria as class-action lawsuits or actual attempts at legislation.)
     
    In the year following Edward Snowden’s revelations, the accusations of privacy “hysteria” or “paranoia” seemed to have died down a bit; unfortunately, they might be making a comeback. The summary of a recent GigaOm article, for example, accuses BuzzFeed of “pumping up the hysteria” in its discussion of ad beacons installed—and quickly removed—in New York.
     
    On the other hand, those who oppose privacy-protecting legislation or who argue that other values or rights might trump privacy sometimes find themselves diagnosed, too–if not as sociopaths, then at least as belonging on the “autism spectrum”: disregardful of social norms, unable to empathize with others.
     
    Too often, the terms thrown about by some on both sides in the privacy debate suggest an abdication of the effort to persuade. You can’t reason with hysterics and sociopaths, so there’s no need to try. You just state your truth to those others who think like you do, and who cheer your vehemence.
     
    But even if you’re a privacy advocate, you probably want the benefits derived from collecting and analyzing at least some data sets, under some circumstances; and even if you think concerns about data disclosures are overblown, you still probably don’t disclose everything about yourself to anyone who will listen.
     
    If information is power, privacy is a defensive shell against that power.  It is an effort to modulate vulnerability.  (The more vulnerable you feel, the more likely you are to understand the value of privacy.)  So privacy is an inherent part of all of our lives; the question is how to deploy it best.  In light of new technologies that create new privacy challenges, and new methodologies that seek to maximize benefits while minimizing harms (e.g., “differential privacy”), we need to be able to discuss this complicated balancing act—without charged rhetoric making the debate even more difficult.
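     
    For readers curious about the mechanics, a technique like “differential privacy” publishes aggregate answers with carefully calibrated random noise, so that no single person’s data can be teased back out of the result. Below is a minimal sketch, in Python, of one standard building block, the Laplace mechanism; the count and the epsilon value are invented for illustration, and this is a generic textbook sketch rather than any particular company’s implementation.
     
        # Laplace mechanism: answer a counting query with noise calibrated so
        # that adding or removing any one person changes the answer's
        # distribution only slightly (the differential-privacy guarantee).
        import random

        def noisy_count(true_count: int, epsilon: float) -> float:
            sensitivity = 1.0  # one person changes a count by at most 1
            scale = sensitivity / epsilon  # smaller epsilon: more noise, more privacy
            # The difference of two exponential draws is Laplace-distributed.
            noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
            return true_count + noise

        # Hypothetical example: report roughly how many of 500 users share some
        # trait, without letting the published figure pin down any individual.
        print(noisy_count(true_count=137, epsilon=0.5))
     
    Run it a few times: the published number bounces around the true one, and the smaller the epsilon, the wider the bounce and the stronger the privacy protection.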
     
    If you find yourself calling people privacy-related names (or writing headlines or summaries that do that, even when the headlined articles themselves don’t), please rephrase.
     
    Photo by Tom Tolkien, unmodified, used under a Creative Commons license: https://creativecommons.org/licenses/by/2.0/legalcode
     
     
  •  Should You Watch? On the Responsibility of Content Consumers

    Tuesday, Sep. 23, 2014

    This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely).  They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels.  And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.

     
    These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
    “Would You Watch [the beheading videos]?” ask CNN and ABC News. “Should You Watch the Ray Rice Assault Video?” asks Shape magazine. “Should We Look—Or Look Away?” asks Canada’s National Post. And, in a broader article about the “consequences and import of ubiquitous, Internet-connected photography” (and video), The Atlantic’s Robinson Meyer reflects on all three of the cases noted above; his piece is titled “Pics or It Didn’t Happen.”
    Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
    What do you think?
    Would it be unethical to watch the videos discussed above? Why?
    Would it be unethical to look at the photos discussed above? Why?
    Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all?  If so, which of them would you watch, or refuse to watch, and why?
     
    Photo by Matthew Montgomery, unmodified, used under a Creative Commons license.
  •  Revisiting the “Right to Be Forgotten”

    Tuesday, Sep. 16, 2014

    Media coverage of the implementation of the European Court decision on de-indexing certain search results has been less pervasive than the initial reporting on the decision itself, back in May.  At the time, much of the coverage had framed the issue in terms of clashing pairs: E.U. versus U.S.; privacy versus free speech.  In The Guardian, an excellent overview of the decision described the “right to be forgotten” as a “cultural shibboleth.”

    (I wrote about it back then, too, arguing that many of the stories about it were rife with mischaracterizations and false dilemmas.)

    Since then, most of the conversation about online “forgetting” seems to have continued on parallel tracks—although with somewhat different clashing camps.  On one hand, many journalists and other critics of the decision (on both sides of the Atlantic) have made sweeping claims about a resulting “Internet riddled with memory holes” and articles “scrubbed from search results”; one commentator wrote that the court decision raises the question, “can you really have freedom of speech if no one can hear what you are saying?”

    On the other hand, privacy advocates (again on both sides of the Atlantic) have been arguing that the decision is much narrower in scope than has generally been portrayed and that it does not destroy free speech; that Google is not, in fact, the sole and ultimate arbiter of the determinations involved in the implementation of the decision; and that even prior to the court’s decision Google search results were selective, curated, and influenced by various countries’ laws.  Recently, FTC Commissioner Julie Brill urged “thought leaders on both sides of the Atlantic to recognize that, just as we both deeply value freedom of expression, we also have shared values concerning relevance in personal information in the digital age.”

    Amid this debate, in late June, Google developed and started to use its own process for complying with the decision.  But Google has also convened an advisory council that will take several months to consider evidence (including public input from meetings held in seven European capitals—Madrid, Rome, Paris, Warsaw, Berlin, London, and Brussels), before producing a report that would inform the company’s current efforts.  Explaining the creation of the council, the company noted that it is now required to balance “on a case-by-case basis, an individual’s right to be forgotten with the public’s right to information,” and added, “We want to strike this balance right. This obligation is a new and difficult challenge for us, and we’re seeking advice on the principles Google ought to apply…. That’s why we’re convening a council of experts.”

    The advisory council (to whom any and all can submit comments) has been posting videos of the public meetings online. However, critics have taken issue with the group’s members (selected by Google itself), with the other presenters invited to participate at the meetings (again screened and chosen by Google), and, most recently, with its alleged rebuffing of questions from the general public. So far, many of the speakers invited to the meetings have raised questions about the appropriateness of the decision itself.

    In this context, one bit of evidence makes its own public comment:  Since May, according to Google, the company has received more than 120,000 de-indexing requests. Tens of thousands of Europeans have gone to the trouble of submitting a form and the related information necessary to request that a search of their name not include certain results.

    And, perhaps surprisingly (especially given most of the coverage of the decision in the U.S.), a recent poll of American Internet users, by an IT security research firm, found that a “solid majority” of them—61%—were “in favor of a ‘right to be forgotten’ law for US citizens.”

    But this, too, may speak differently to different audiences. Some will see it as evidence of a vast pent-up need that had had no outlet until now. Others will see it as evidence of the tens of thousands of restrictions and “holes” that will soon open up in the Web.

    So—should we worry about the impending “memory holes”?

    In a talk entitled “The Internet with a Human Face,” American Web developer Maciej Ceglowski argues that “the Internet somehow contrives to remember too much and too little at the same time.” He adds,

    in our elementary schools in America, if we did something particularly heinous, they had a special way of threatening you. They would say: “This is going on your permanent record.”

    It was pretty scary. I had never seen a permanent record, but I knew exactly what it must look like. It was bright red, thick, tied with twine. Full of official stamps.

    The permanent record would follow you through life, and whenever you changed schools, or looked for a job or moved to a new house, people would see the shameful things you had done in fifth grade. 

    How wonderful it felt when I first realized the permanent record didn’t exist. They were bluffing! Nothing I did was going to matter! We were free!

    And then when I grew up, I helped build it for real.

    But while a version of the “permanent record” is now real, it is also true that much content on the Internet is already ephemeral. The phenomenon of “link rot,” for example, affects even important legal documents.  And U.K. law professor Paul Bernal has argued that we should understand the Internet as “organic, growing and changing all the time,” and that it’s a good thing that this is so. “Having ways to delete information [online] isn’t the enemy of the Internet of the people,” Bernal writes, “much as an enemy of the big players of the Internet.”
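
    That ephemerality is easy to observe for yourself. Here is a minimal link-rot checker, sketched in Python with only the standard library; the URLs in the list are placeholders, not citations from any actual document.

        # Minimal link-rot checker: report which cited URLs still resolve.
        import urllib.request

        cited_urls = [
            "https://example.com/some-cited-page",  # placeholder URLs
            "https://example.org/another-source",
        ]

        for url in cited_urls:
            try:
                # HEAD request: we only care whether the resource is reachable.
                req = urllib.request.Request(url, method="HEAD")
                with urllib.request.urlopen(req, timeout=10) as resp:
                    print("OK  ", resp.status, url)
            except OSError as err:
                # urllib's URLError (covering 404s, DNS failures, and timeouts)
                # is a subclass of OSError; any of these is link rot in action.
                print("DEAD", err, url)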

    Will Google, one of the “big players of the Internet,” hear views like Bernal’s, too? It remains to be seen; Google’s “European grand tour,” as another UK law professor has dubbed it, will conclude on November 4th.

    Photograph by derekb, unmodified, under a Creative Commons license. https://creativecommons.org/licenses/by-nc/2.0/legalcode

  •  Singing in the Shower: Privacy in the Age of Facebook

    Tuesday, Sep. 9, 2014
     
    It is a truth universally acknowledged, that the amount and kinds of information that people post on Facebook mean that people don’t care about privacy.
     
    Like many other “truths” universally acknowledged, this one is wrong, in a number of ways.
     
    First, not everybody is on Facebook. So to justify, say, privacy-invasive online behavioral advertising directed at everyone on the Internet by pointing to the practices of a subset of Internet users is wrong.
     
    Second, it’s wrong to generalize about “Facebook users,” too. Many Facebook users take advantage of various privacy settings and use the platform to interact only with friends and family members. So it makes sense for them to post on Facebook the kind of personal, private things that people have always shared with friends and family.
     
    Still—most Facebook users have hundreds of “friends”: some are close; some are not; some are relatives barely known; some are friends who have grown distant over time. Does it make sense to share intimate things with all of them?
     
    There are several answers to that, too. The privacy boundaries that people draw around themselves vary. What may seem deeply intimate and private to one person might not seem that way to someone else—and vice versa. That doesn’t mean that people who post certain things “don’t care about privacy”—it means they would define “private” differently than others would.  And even when people do post things that they would consider intimate on Facebook, that doesn’t mean they post everything. Some people like singing in choirs; that doesn’t mean they’d be OK with being spied on while singing in the shower.
     
    Third, we need to acknowledge the effects of the medium itself. Take, say, a Facebook user who has 200 “friends.” Were all those friends to be collected in one room (the close and the distant friends, the old and the recently befriended, the co-workers, the relatives, the friends of friends whose “friend requests” were accepted simply to avoid awkwardness, etc.), and were the user to be given a microphone, he or she might refrain from announcing what he ate for dinner, or reciting a song lyric that ran through her mind, or revealing an illness or a heartbreak, or subjecting the entire audience to a slide show of vacation pictures. But for the Facebook user sitting alone in a room, facing a screen, the audience is at least partially concealed. He or she knows that it’s there—is even hoping for some comments in response to posts—or at least some “likes”… But the mind conjures, at best, a subset of the tens or hundreds of those “friended.” If that. Because there is, too, something about the act of typing a “status update” that echoes, for some of us, the act of writing in a journal. (Maybe a diary with a friendly, ever-shifting companion Greek chorus?) The medium misleads.
     
    So no, people who post on Facebook are not being hypocritical when they say (as most people do) that they care about privacy. (It bears noting that in a recent national survey by the Pew Research Center, 86% of internet users said they had “taken steps online to remove or mask their digital footprints.”)
     
    It’s high time to let the misleading cliché about privacy in the age of Facebook go the way of other much-repeated statements that turned out to be neither true nor universally acknowledged. And it’s certainly time to stop using it as a justification for practices that violate privacy. If you haven’t been invited to join the singer in the shower, stay out.
     
  •  More to Say about Internet Ethics

    Tuesday, Sep. 2, 2014
     
    Welcome back!
     
    As the summer of 2014 draws to a close, people are debating the merits of hashtag activism (and pouring buckets of ice water on their heads); Facebook is appending a “Satire” tag to certain stories; new whistleblowers are challenging pervasive governmental surveillance online; and Twitter is struggling to remove posts that include graphic images of the tragic beheading of a U.S. journalist. The Internet continues to churn out ethics-related questions.  New issues keep arising, new facets of “old” issues are continually revealed, and Silicon Valley is frequently mistakenly perceived as a monolithic entity with little interest in the ethical ramifications of the technology it produces.
     
    But our community is neither monolithic nor uninterested.  Back in 2013, for example, the Internet Ethics program at the Markkula Center for Applied Ethics started a blog called “Internet Ethics: Views from Silicon Valley,” with the goal of offering 10 brief videos in which Silicon Valley pioneers and leaders would address some key ethical issues related to the role of the Internet in modern life. While that project was completed (and those videos, featuring the co-founders of Apple and Adobe Systems, the Executive Chairman of NetApp, the CEOs of VMware and Seagate, and more, remain available on our website and our YouTube channel), we have decided to restart the blog.
     
    We hope to be a platform for a multiplicity of Silicon Valley voices and demonstrate that applied ethics is everybody’s business—not just the purview of philosophers or philanthropists.
     
    We aim to blog about once a week, with entries by various staff members of the Markkula Center for Applied Ethics, as well as other Santa Clara University faculty members (and perhaps some students, too!). We look forward to your comments, and we hope to host a robust conversation around such topics as big data ethics, online privacy, the Internet of Things, Net neutrality, the “right to be forgotten,” cyberbullying, the digital divide, sentiment analysis, the impact of social media, online communities, digital journalism, diversity in tech, and more. We will also post advance notice of various ethics-related events taking place on campus, free and open to the public.
     
    If you’d like to be notified as new entries are posted, please subscribe today!  (There’s an email subscription box to the right, or an RSS feed at the top of the blog.)  You can also follow the Internet Ethics program on Twitter at @IEthics, and the Center overall either on Facebook or on Twitter at @mcaenews.
     
    And to those of you who had been subscribed already, again, welcome back!