Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “ethics.”
  •  The Ethics of Encryption

    Wednesday, Feb. 25, 2015
     
     
    One of the programs organized by the Markkula Center for Applied Ethics is a Business and Organizational Ethics Partnership that brings together Silicon Valley executives and scholars. Earlier this month, the partnership’s meeting included a panel discussion on the ethics of encryption. The panelists were David J. Johnson, Special Agent in Charge of the San Francisco Division of the FBI; Marshall Erwin, a senior staff analyst at Mozilla and fellow at Stanford’s Center for Internet and Society; and Jonathan Mayer, Cybersecurity Fellow at the Center for International Security and Cooperation and Junior Affiliate Scholar at the Center for Internet and Society.
     
    Of course, since then, the conversation about encryption has continued: President Obama discussed it, for example, in an interview that he gave when he came to Silicon Valley to advocate for increased cooperation between tech companies and the government; NSA Director Mike Rogers was challenged on that topic at a recent cybersecurity conference; and Hillary Clinton and others continued to hope for a middle-ground solution. However, as the Washington Post recently put it, “political leaders appear to be re-hashing the same debate in search of a compromise solution that technical experts say does not exist.”
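     
    To see why technologists keep saying that, it helps to make the objection concrete. Below is a deliberately toy sketch in Python (using the third-party cryptography library; the variable names are mine, and this describes no actual proposal): any “exceptional access” scheme amounts to a second copy of a decryption key, and the encrypted data then remains private only as long as every holder of that copy stays honest and uncompromised.

```python
# pip install cryptography
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()   # the user's secret key
escrow_copy = user_key             # "exceptional access": another party holds the same secret

token = Fernet(user_key).encrypt(b"private message")

# The escrow holder can decrypt everything the user encrypts, so the
# message's security now depends on every holder of the key. That, in a
# nutshell, is the experts' objection: there is no key that only
# "the good guys" can use.
assert Fernet(escrow_copy).decrypt(token) == b"private message"
```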
    (In the photo, L-R: Irina Raicu, Jonathan Mayer, Marshall Erwin, and David J. Johnson)
  •  On Remembering, Forgetting, and Delisting

    Friday, Feb. 20, 2015
     
    Over the last two weeks, Julia Powles, who is a law and technology researcher at the University of Cambridge, has published two interesting pieces on privacy, free speech, and the “right to be forgotten”: “Swamplands of the Internet: Speech and Privacy,” and “How Google Determined Our Right to Be Forgotten” (the latter co-authored by Enrique Chaparro). They are both very much worth reading, especially for folks whose work impacts the privacy rights (or preferences, if you prefer) of people around the world.
     
    Today, a piece that I wrote, which also touches on the “right to be forgotten,” was published in Re/code. It’s titled “The Right to Be Forgotten, the Privilege to Be Remembered.” I hope you’ll read that, too!
     
    And earlier in February, Google’s Advisory Council issued its much-anticipated report on the issue, which seeks to clarify the outlines of the debate surrounding it and offers suggestions for the implementation of “delisting.”
     
    One of the authors of that report, Professor Luciano Floridi, will be speaking at Santa Clara University on Wednesday, 2/25, as part of our “IT, Ethics and Law” lecture series. Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford and Director of Research at the Oxford Internet Institute. His talk is titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age.” The event is free and open to the public; if you live in the area and are interested in memory, free speech, and privacy online, we hope you will join us and RSVP!
     
    [And if you would like to be added to our mailing list for the lecture series—which has recently hosted panel presentations on ethical hacking, the ethics of online price discrimination, and privacy by design and software engineering ethics—please email ethics@scu.edu.] 
     
    Photo by Minchioletta, used without modification under a Creative Commons license.
  •  On Spirituality, Social Justice, and Social Media

    Thursday, Jan. 22, 2015

     

    Christine Cate is a recent graduate of Santa Clara University, where she majored in Public Health Science with a minor in Biology. She has worked at the Markkula Center for Applied Ethics as the Character Education intern for the Character Based Literacy Program since October 2012. A version of this piece first appeared in November 2014 in the blog of the Ignatian Solidarity Network. Christine is a member of the Network’s social media team, focusing on contemporary issues of social justice and spirituality.

    Sometimes, reading the news makes my stomach turn. Every day, headlines about sexual assault, racism, immigration, poverty, or infectious disease are intermingled with stories on Kim Kardashian’s newest racy cover, snow storms on the East Coast, and political speculations. The media is constantly bombarding us with stories ranging in importance from superficial fluff to deeply divisive topics.

    The never-ending availability of news is positive in one sense, as the public is becoming more “informed,” but it also has its consequences. The media is desensitizing us to critical social issues like violence, racism, and sexism, while simultaneously flooding our feeds with stories of naked celebrities trying to break the internet or the most expensive Starbucks drink ever. Inane news stories focusing on things like which celebrity unfollowed whom on Instagram this week distract us from being able to critically observe and understand the world in which we live. Even political news stories can contain sensational levels of bias that make an objective comprehension of situations nearly impossible. And the flood is hard to escape; anyone active on social media knows how often links to news articles show up among personal updates and advertisements. Individuals who aren’t constantly connected to social media, rare as they may be, are still saturated with current events from radio, print, and advertising outlets. In today’s society it takes real effort not to know what is going on in the world, and ignorance may be just as harmful as news intoxication.

    Both the lack of current event literacy and the over-saturation of news are serious problems in our world, as media is one of the most powerful influences in society today. After returning from the Ignatian Family Teach-In that took place in November 2014 in Virginia and Washington, D.C., I found myself reflecting on the role that news and social media play in our lives, and how that impacts both our spirituality and capacity to enact social justice.

    At the Teach-In, in the rare moments between keynote speakers and breakout sessions, large projection screens and television monitors displayed live updates of tweets with the #IFTJ14 hashtag. Multiple photographers scurried around the crowded conference room, and cameras recorded every speaker for the online live stream. The slogan for this year’s Teach-In was “Uprooting Injustice, Sowing Truth, Witnessing Transformation.” The issues of immigration reform, divestment from fossil fuels, and Central American legislation were highlighted, and the 25th anniversary of the UCA martyrs received special recognition. Over the course of Saturday and Sunday, conference attendees were challenged to view these issues, as well as other pressing issues like the criminal justice system and racism in society, through a lens of spirituality and social justice. During presentations, audience members tweeted out perspectives or quotes that they felt were especially eye-opening or striking, their tweets flying out into cyberspace and appearing shortly after on the illuminated screens.

    The reach of the Teach-In is hard to fathom. With an estimated 1,500 attendees, the majority of them active on social media, it isn’t a stretch to say that tens of thousands of people were indirectly exposed to the messages of the Teach-In through media sources. The goal of the Teach-In was to give voice to the voiceless, to highlight areas in our collective history and present realities that need change, and I think that goal was accomplished spectacularly. Social media amplified the messages spoken at the Teach-In and expanded the audience beyond the physical attendees.

    But amid the masses of news stories already flooding the eyes and minds of people today, is social media enough to make a change? How many news readers are intentional in what and how they read news stories? How many social media users are intentionally aware of their influence, and use their accounts as platforms to share morally important or challenging news stories? How many people are harnessing the power of social media to identify injustice, spread truth, and incite action for transformation?
     
    There are plenty of examples of social media bringing faith into daily rhetoric. The hashtag #blessed is popular on Instagram and Twitter, and there are hundreds of accounts that exist solely to post encouraging scripture passages, quotes, or other spirituality-related content. Spirituality and faith have become trendy in certain spheres, with social media users around the world able to share prayers and encourage and inspire from afar. But rarely do faithful social media users (in both senses of the word) connect their spirituality, social media reach, and social justice.
               
    What would it look like if the culture of mainstream news and social media changed to include the combination of spirituality and social justice? Would the voices of the oppressed and marginalized be heard more? Would people be more willing to confront the uncomfortable problems in our societies and work for positive change? Or would we just become desensitized to it, as we have to news coverage of war and violence? Can the integration of spirituality and social media be a powerful tool to expose injustices, spread truth, and document change?
     
    I don’t have answers to these questions, not yet. I am now far more aware of my social media presence and interaction with news outlets, and would like to be more intentional in how I read news stories and pass them along to my sphere of influence. I think that by critically analyzing news stories, and calling out the biases we have grown so accustomed to, we can change the way information is transmitted in society. I think that by integrating spirituality and social justice on a conscious level with how we use social media platforms, we will be able to uproot injustice, sow truth, and witness transformation.
     
    (Photo by Werner Kunz, used without modification under a Creative Commons license.)
     

     

  •  “It’s Been a Great Year!”

    Friday, Jan. 16, 2015
     
    Was 2014 a great year for Facebook? That depends, of course, on which measures or factors you choose to look at. The number of videos in users’ newsfeeds more than tripled. The number of monthly active Facebook users reached 1.35 billion, and it is still going up. Last June, however, Facebook took a drubbing in the media when reports about its controversial research on “emotional contagion” brought the term “research ethics” into worldwide conversations. In response, Facebook announced that it would put in place enhanced review processes for its studies of users, and that newly hired engineers would receive training in research ethics when they joined the company.
     
    Then, in December, Facebook offered its users a way to share with their friends an overview of their year (their Facebook year, at least). It was a mini-photo album: a collection of photos from one’s account, curated by Facebook (and no, the pre-selected photos were not the most “liked” ones). While the albums were customizable, they showed up in users’ newsfeeds with a pre-filled cover photo and the tagline “It’s Been a Great Year! Thanks for being a part of it.”
     
    Now, Facebook chooses things like taglines very, very carefully. Deliberately. This was not a throwaway line. But, as you may already know by now, a father whose six-year-old daughter died last year—and who was repeatedly faced with her smiling photo used as the cover of his suggested “It’s Been a Great Year!” album—wrote a blog post that went viral, decrying what he termed “inadvertent algorithmic cruelty” and adding, “If I could fix one thing about our industry, just one thing, it would be that: to increase awareness of and consideration for the failure modes, the edge cases, the worst-case scenarios.” Many publications picked up the story.
     
    Apologies were then exchanged. But many other Facebook users felt the same pain, and did not receive an apology. And some were maybe reminded of the complaints that accompanied the initial launch of Facebook’s “Look Back Video” feature in early February 2014. As TechCrunch noted then, “[a]lmost immediately after launch, many users were complaining about the photos that Facebook auto-selected. Some had too many photos of their exes. Some had sad photos that they’d rather not remember as a milestone.” On February 7, TechCrunch reported that a “quick visit to the Facebook Look Back page now shows a shiny new edit button.”
     
    Come December, the “year-in-review” album was customizable. But the broader lesson about “the failure modes, the edge cases, the worst-case scenarios” was apparently not learned, or was forgotten between February and December, despite the many sharp intervening critiques of the way Facebook treats its users.
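     
    The “failure modes” point is, at bottom, an engineering one: a celebratory default encodes an assumption about the user’s year. As a purely hypothetical sketch (none of these function names or signals come from Facebook’s code), here is what a curation routine that fails safe might look like; the interesting part is not the selection logic but the existence of the branches that decline to assume:

```python
def choose_recap_cover(photos, account_signals):
    """Pick a cover photo for an auto-generated "year in review," or decline.

    Hypothetical sketch only: the cheerful default ("assume it was a
    great year") is itself a design decision, and the edge cases
    deserve an explicit code path.
    """
    # Signals suggesting the celebratory framing could cause harm.
    if account_signals.get("memorialized") or account_signals.get("reported_loss"):
        return None  # fail safe: ask the user rather than assume

    candidates = [p for p in photos if not p.get("flagged_sensitive")]
    if not candidates:
        return None

    # Selection criterion left deliberately simple; as noted above,
    # Facebook's own pre-selection was not simply the most-"liked" photos.
    return candidates[0]

photos = [{"id": 1, "flagged_sensitive": True}, {"id": 2}]
print(choose_recap_cover(photos, {"memorialized": False}))  # -> {'id': 2}
```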
     
    In October, Santa Clara University professor Shannon Vallor and I wrote an op-ed arguing that Facebook’s response to the firestorm surrounding the emotional contagion study was too narrowly focused on research ethics. We asked, “What about other ethical issues, not research-related, that Facebook’s engineers are bound to encounter, perhaps even more frequently, in their daily work?” The year-in-review app demonstrates that the question is very much still in play. You can read our op-ed, which was published by the San Jose Mercury News, here.
     
    Here’s hoping for a better year.
     
    Photo by FACEBOOK(LET), used without modification under a Creative Commons license.
     
  •  “Practically as an accident”: on “social facts” and the common good

    Thursday, Oct. 30, 2014

     

    In the Los Angeles Review of Books, philosopher Evan Selinger takes issue with many of the conclusions (and built-in assumptions) compiled in Dataclysm—a new book by Christian Rudder, who co-founded the dating site OKCupid and now heads the site’s data analytics team. While Selinger’s whole essay is really interesting, I was particularly struck by his comments on big data and privacy. 

    “My biggest issue with Dataclysm,” Selinger writes,
     
    lies with Rudder’s treatment of surveillance. Early on in the book he writes: ‘If Big Data’s two running stories have been surveillance and money, for the last three years I’ve been working on a third: the human story.’ This claim about pursuing a third path isn’t true. Dataclysm itself is a work of social surveillance.
     
    It’s tempting to think that different types of surveillance can be distinguished from one another in neat and clear ways. If this were the case, we could say that government surveillance only occurs when organizations like the National Security Agency do their job; corporate surveillance is only conducted by companies like Facebook who want to know what we’re doing so that they effectively monetize our data and devise strategies to make us more deeply engaged with their platform; and social surveillance only takes place in peer-to-peer situations, like parents monitoring their children’s phones, romantic partners scrutinizing each other’s social media feeds….
     
    But in reality, surveillance is defined by fluid categories.
     
    While each category of surveillance might include both ethical and unethical practices, the point is that the boundaries separating the categories are porous, and the harms associated with surveillance might seep across all of them.
     
    Increasingly, when corporations like OKCupid or Facebook analyze their users’ data and communications in order to uncover “social facts,” they claim to be acting in the interest of the common good, rather than pursuing self-serving goals. They claim to give us clear windows into our society. The subtitle of Rudder’s book, for example, is “Who We Are (When We Think No One’s Looking).” As Selinger notes,
     
    Rudder portrays the volume of information… as a gift that can reveal the truth of who we really are. … [W]hen people don’t realize they’re lab rats in Rudder’s social experiments, they reveal habits—‘universals,’ he even alleges… ‘Practically as an accident,’ Rudder claims, ‘digital data can now show us how we fight, how we love, how we age, who we are, and how we’re changing.’
     
    Of course, Rudder should confine his claims to the “we” who use OKCupid (a 2013 study by the Pew Research Center found that 10% of Americans report having used an online dating service). Facebook has a stronger claim to having a user base that reflects all of “us.” But there are other entities that sit on even vaster data troves than Facebook’s, even more representative of U.S. society overall. What if a governmental organization were to decide to pursue the same selfless goals, after ensuring that the data involved would be carefully anonymized and presented only in the aggregate (akin to what Rudder claims to have done)?
     
    In the interest of better “social facts,” of greater insight into our collective mindsets and behaviors, should we encourage (or indeed demand) that the NSA publish “Who Americans Are (When They Think No One’s Watching)”? To be followed, perhaps, by a series of “Who [Insert Various Other Nationalities] Are (When They Think No One’s Watching)”? Think of all the social insights and common good that would come from that!
     
    In all seriousness, as Selinger rightly points out, the surveillance behind such no-notice-no-consent research comes at great cost to society:
     
    Rudder’s violation of the initial contextual integrity [underpinning the collection of OKCupid user data] puts personal data to questionable secondary, social use. The use is questionable because privacy isn’t only about protecting personal information. People also have privacy interests in being able to communicate with others without feeling anxious about being excessively monitored. … [T]he resulting apprehension inhibits speech, stunts personal growth, and possibly even disinclines people from experimenting with politically relevant ideas.
     
    With every book subtitled “Who We Are (When We Think No One’s Looking),” we, the real we, become more wary, more likely to assume that someone’s always looking. And as many members of societies that have lived with excessive surveillance have attested, that’s not a path to achieving the good life.
     
    Photo by Henning Muhlinghaus, used without modification under a Creative Commons license.

     

  •  Who (or What) Is Reading Whom: An Ongoing Metamorphosis

    Thursday, Oct. 23, 2014
     
    If you haven’t already read the Wall Street Journal article titled “Your E-Book Is Reading You,” published in 2012, it’s well worth your time. It might even be worth a second read, since our understanding of many Internet-related issues has changed substantially since 2012.
     
    I linked to that article in a short piece that I wrote, which was published yesterday in Re/Code: “Metamorphosis.”  I hope you’ll read that, too—and we’d love to get your comments on that story either at Re/Code or in the Comments section here!
     
    And finally, just a few days ago, a new paper by Jules Polonetsky and Omer Tene (both from the Future of Privacy Forum) was released through SSRN: “Who Is Reading Whom Now: Privacy in Education from Books to MOOCs.” This is no bite-sized exploration, but an extensive overview of the promises and challenges of technology-driven innovations in education—including the ethical implications of the uses of both “small data” and “big data” in this particular context.
     
    To play with yet another title—there are significant and ongoing shifts in “the way we read now”…
     

    Photo by Jose Antonio Alonso, used without modification under a Creative Commons license.

  •  Questions about Mass Surveillance

    Tuesday, Oct. 14, 2014


    Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy. (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc. Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”

    The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments.  However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved.  And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.”  Microsoft’s General Counsel, Brad Smith, went one step further:  “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”

    That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by Santa Clara Magazine. One of the things I had asked was whether the newly revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, “[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?”

    And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, “[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?”

    We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church Committee and to policy changes.

    Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps.  But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues.  Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”

    Illustration by Stuart Bradford

  •  Are You A Hysteric, Or A Sociopath? Welcome to the Privacy Debate

    Tuesday, Oct. 7, 2014

     

    Whether you’re reading about the latest data-mining class action lawsuit through your Google Glass or relaxing on your front porch waving at your neighbors, you probably know that there’s a big debate in this country about privacy.  Some say privacy is important. Some say it’s dead.  Some say kids want it, or not. Some say it’s a relatively recent phenomenon whose time, by the way, has passed—a slightly opaque blip in our history as social animals. Others say it’s a human right without which many other rights would be impossible to maintain.

    It’s a much-needed discussion—but one in which the tone is often not conducive to persuasion, and therefore progress.  If you think concerns about information privacy are overrated and might become an obstacle to the development of useful tools and services, you may hear yourself described as a [Silicon Valley] sociopath or a heartless profiteer.  If you believe that privacy is important and deserves protection, you may be called a “privacy hysteric.”
     
    It’s telling that privacy advocates are so often called “hysterics”—a term associated more commonly with women, and with a surfeit of emotion and lack of reason. (Privacy advocates are also called “fundamentalists” or “paranoid”—again implying belief not based in reason.) And even when such terms are not directly deployed, the tone often suggests them. In a 2012 Cato Institute policy analysis titled “A Reasonable Response to the Privacy ‘Crisis,’” for example, Larry Downes writes about the “emotional baggage” invoked by the term “privacy,” and advises, “For those who naturally leap first to legislative solutions, it would be better just to fume, debate, attend conferences, blog, and then calm down before it’s too late.” (Apparently debate, like fuming and attending conferences, is just a harmless way to let off steam—as long as it doesn’t lead to such hysteria as class-action lawsuits or actual attempts at legislation.)
     
    In the year following Edward Snowden’s revelations, the accusations of privacy “hysteria” or “paranoia” seemed to have died down a bit; unfortunately, they might be making a comeback. The summary of a recent GigaOm article, for example, accuses BuzzFeed of “pumping up the hysteria” in its discussion of ad beacons installed—and quickly removed—in New York.
     
    On the other hand, those who oppose privacy-protecting legislation or who argue that other values or rights might trump privacy sometimes find themselves diagnosed, too—if not as sociopaths, then at least as belonging on the “autism spectrum”: disregardful of social norms, unable to empathize with others.
     
    Too often, the terms thrown about by some on both sides in the privacy debate suggest an abdication of the effort to persuade. You can’t reason with hysterics and sociopaths, so there’s no need to try. You just state your truth to those others who think like you do, and who cheer your vehemence.
     
    But even if you’re a privacy advocate, you probably want the benefits derived from collecting and analyzing at least some data sets, under some circumstances; and even if you think concerns about data disclosures are overblown, you still probably don’t disclose everything about yourself to anyone who will listen.
     
    If information is power, privacy is a defensive shell against that power. It is an effort to modulate vulnerability. (The more vulnerable you feel, the more likely you are to understand the value of privacy.) So privacy is an inherent part of all of our lives; the question is how to deploy it best. In light of new technologies that create new privacy challenges, and new methodologies that seek to maximize benefits while minimizing harms (e.g., “differential privacy”), we need to be able to discuss this complicated balancing act—without charged rhetoric making the debate even more difficult.
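     
    For readers curious about that last term: “differential privacy” is a mathematical framing of exactly this balancing act. Here is a minimal sketch of its textbook building block, the Laplace mechanism; the names and numbers are illustrative only, not any particular company’s implementation. The idea is to answer aggregate questions while adding just enough random noise that no single person’s presence in the data can be inferred from the answer.

```python
import random

def dp_count(records, predicate, epsilon):
    """Answer "how many records satisfy predicate?" with
    epsilon-differential privacy.

    A counting query has sensitivity 1 (one person joining or leaving
    the data set changes the true count by at most 1), so Laplace
    noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) noise: the difference of two independent
    # exponential draws with rate epsilon is Laplace-distributed.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Smaller epsilon means more noise: stronger privacy, less accuracy.
ages = [23, 35, 47, 29, 61, 52]
print(dp_count(ages, lambda age: age >= 40, epsilon=0.5))
```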
     
    If you find yourself calling people privacy-related names (or writing headlines or summaries that do that, even when the headlined articles themselves don’t), please rephrase.
     
    Photo by Tom Tolkien, unmodified, used under a Creative Commons license: https://creativecommons.org/licenses/by/2.0/legalcode
     
     
  •  Should You Watch? On the Responsibility of Content Consumers

    Tuesday, Sep. 23, 2014

    This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely).  They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels.  And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.

     
    These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
    “Would You Watch [the beheading videos]?” ask CNN and ABC News. “Should You Watch the Ray Rice Assault Video?” asks Shape magazine. “Should We Look—Or Look Away?” asks Canada’s National Post. And, in a broader article about the “consequences and import of ubiquitous, Internet-connected photography” (and video), The Atlantic’s Robinson Meyer reflects on all three of the cases noted above; his piece is titled “Pics or It Didn’t Happen.”
    Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
    What do you think?
    Would it be unethical to watch the videos discussed above? Why?
    Would it be unethical to look at the photos discussed above? Why?
    Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all?  If so, which of them would you watch, or refuse to watch, and why?
     
    Photo by Matthew Montgomery, unmodified, used under a Creative Commons license.
  •  Revisiting the "Right to Be Forgotten"

    Tuesday, Sep. 16, 2014

    Media coverage of the implementation of the European Court decision on de-indexing certain search results has been less pervasive than the initial reporting on the decision itself, back in May. At the time, much of the coverage had framed the issue in terms of clashing pairs: E.U. versus U.S.; privacy versus free speech. In The Guardian, an excellent overview of the decision described the “right to be forgotten” as a “cultural shibboleth.”

    (I wrote about it back then, too, arguing that many of the stories about it were rife with mischaracterizations and false dilemmas.)

    Since then, most of the conversation about online “forgetting” seems to have continued on parallel tracks—although with somewhat different clashing camps. On one hand, many journalists and other critics of the decision (on both sides of the Atlantic) have made sweeping claims about a resulting “Internet riddled with memory holes” and articles “scrubbed from search results”; one commentator wrote that the court decision raises the question, “can you really have freedom of speech if no one can hear what you are saying?”

    On the other hand, privacy advocates (again on both sides of the Atlantic) have been arguing that the decision is much narrower in scope than has generally been portrayed and that it does not destroy free speech; that Google is not, in fact, the sole and ultimate arbiter of the determinations involved in the implementation of the decision; and that even prior to the court’s decision Google search results were selective, curated, and influenced by various countries’ laws.  Recently, FTC Commissioner Julie Brill urged “thought leaders on both sides of the Atlantic to recognize that, just as we both deeply value freedom of expression, we also have shared values concerning relevance in personal information in the digital age.”

    Amid this debate, in late June, Google developed and started to use its own process for complying with the decision. But Google has also convened an advisory council that will take several months to consider evidence (including public input from meetings held in seven European capitals—Madrid, Rome, Paris, Warsaw, Berlin, London, and Brussels) before producing a report that will inform the company’s current efforts. Explaining the creation of the council, the company noted that it is now required to balance, “on a case-by-case basis, an individual’s right to be forgotten with the public’s right to information,” and added, “We want to strike this balance right. This obligation is a new and difficult challenge for us, and we’re seeking advice on the principles Google ought to apply…. That’s why we’re convening a council of experts.”

    The advisory council (to whom any and all can submit comments) has been posting videos of the public meetings online. However, critics have taken issue with the group’s members (selected by Google itself), with the other presenters invited to participate at the meetings (again screened and chosen by Google), and, most recently, with its alleged rebuffing of questions from the general public. So far, many of the speakers invited to the meetings have raised questions about the appropriateness of the decision itself.

    In this context, one bit of evidence makes its own public comment:  Since May, according to Google, the company has received more than 120,000 de-indexing requests. Tens of thousands of Europeans have gone through the trouble of submitting a form and the related information necessary to request that a search of their name not include certain results.  

    And, perhaps surprisingly (especially given most of the coverage of the decision in the U.S.), a recent poll of American Internet users by an IT security research firm found that a “solid majority” of them—61%—were “in favor of a ‘right to be forgotten’ law for US citizens.”

    But this, too, may speak differently to different audiences. Some will see it as evidence of a vast pent-up need that had had no outlet until now. Others will see it as evidence of the tens of thousands of restrictions and “holes” that will soon open up in the Web.

    So—should we worry about the impending “memory holes”?

    In a talk entitled “The Internet with a Human Face,” American Web developer Maciej Ceglowski argues that “the Internet somehow contrives to remember too much and too little at the same time.” He adds,

    in our elementary schools in America, if we did something particularly heinous, they had a special way of threatening you. They would say: “This is going on your permanent record.”

    It was pretty scary. I had never seen a permanent record, but I knew exactly what it must look like. It was bright red, thick, tied with twine. Full of official stamps.

    The permanent record would follow you through life, and whenever you changed schools, or looked for a job or moved to a new house, people would see the shameful things you had done in fifth grade. 

    How wonderful it felt when I first realized the permanent record didn’t exist. They were bluffing! Nothing I did was going to matter! We were free!

    And then when I grew up, I helped build it for real.

    But while a version of the “permanent record” is now real, it is also true that much content on the Internet is already ephemeral. The phenomenon of “link rot,” for example, affects even important legal documents. And U.K. law professor Paul Bernal has argued that we should understand the Internet as “organic, growing and changing all the time,” and that it’s a good thing that this is so. “Having ways to delete information [online] isn’t the enemy of the Internet of the people,” Bernal writes, “much as [it may be] an enemy of the big players of the Internet.”
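     
    “Link rot” is easy to observe firsthand. As a rough sketch, assuming nothing beyond Python’s standard library (the URL below is a placeholder), a checker needs only to ask whether a link still resolves cleanly:

```python
import urllib.request

def looks_rotten(url, timeout=10):
    """Rough link-rot check: True if the URL no longer resolves cleanly.

    A HEAD request keeps the check cheap; a more careful checker would
    fall back to GET for servers that reject HEAD, and would distinguish
    temporary outages from genuinely dead links.
    """
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout):
            return False  # reachable; urlopen raises on 4xx/5xx statuses
    except Exception:
        return True  # HTTP error, DNS failure, timeout, etc.

print(looks_rotten("https://example.com/"))  # placeholder URL
```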

    Will Google, one of the “big players of the Internet,” hear such views, too? It remains to be seen; Google’s “European grand tour,” as another U.K. law professor has dubbed it, will conclude on November 4th.

    Photograph by derekb, unmodified, under a Creative Commons license. https://creativecommons.org/licenses/by-nc/2.0/legalcode
