Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following posts have been filtered by the tag “privacy.”
  •  BroncoHack 2015 (Guest Post)

    Friday, May 8, 2015

    Last weekend, Santa Clara University hosted BroncoHack 2015—a hackathon organized by the OMIS Student Network, with the goal of creating “a project that is innovative in the arenas of business and technology” while also reflecting the theme of “social justice.” The Markkula Center for Applied Ethics was proud to be one of the co-sponsors of the event.

    The winning project was “PrivaSee”—a suite of applications that helps prevent the leakage of sensitive and personally identifiable student information from schools’ networks. In the words of its creators, “PrivaSee offers a web dashboard that allows schools to monitor their network activity, as well as a mobile application that allows parents to stay updated about their kids’ digital privacy. A network application that sits behind the router of a school's network continuously monitors the network packets, classifies threat levels, and notifies the school administration (web) and parents (mobile) if it discovers student data being leaked out of the network, or if there are any unauthorized apps or services being used in the classrooms that could potentially syphon private data. For schools, it offers features like single dashboard monitoring of all kids and apps. For parents, it provides the power of on-the-move monitoring of all their kids’ privacy and the ability to chat with school administration in the event of any issues. Planned extensions like 'privacy bots' will crawl the Internet to detect leaked data of students who might have found ways to bypass a school's secure networks. The creators of PrivaSee believe that cybersecurity issues in connected learning environments are a major threat to kids' safety, and they strive to create a safer ecosystem.”
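
    PrivaSee’s code has not been released, so the following is purely an illustrative sketch of the detect-classify-notify loop the team describes; every pattern, function name, and alert channel here is hypothetical rather than drawn from the actual product.

        import re

        # Illustrative patterns for personally identifiable student information.
        # A real classifier would need far more robust detection than regexes.
        PII_PATTERNS = {
            "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        }

        def classify_threat(payload):
            """Return the PII categories that appear in an outbound payload."""
            return [name for name, pat in PII_PATTERNS.items() if pat.search(payload)]

        def notify(channels, findings):
            """Stand-in for the web dashboard and mobile alerts described above."""
            for channel in channels:
                print(f"[{channel}] possible leak: {', '.join(findings)}")

        def monitor(outbound_payloads):
            """Inspect outbound traffic; alert the school and parents on a hit."""
            for payload in outbound_payloads:
                findings = classify_threat(payload)
                if findings:
                    notify(["school-dashboard", "parent-app"], findings)

        monitor([
            "GET /homework HTTP/1.1",                             # benign
            "POST /form email=jane@example.edu ssn=123-45-6789",  # leaks PII
        ])

    A real deployment would of course inspect live network packets rather than strings and use much richer classifiers, but the basic flow (watch outbound traffic, classify what it finds, alert both the school dashboard and the parent app) is the one the team describes.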

    From the winning team:

    "Hackathons are always fun and engaging. Personally, I put this one at the top of my list. I feel lucky to have been part of this energetic, multi-talented team, and I will never forget the fun we had. Our preparations started a week ago, brainstorming various ideas. We kick-started the event with analysis of our final idea and the impact it can create, rather than worrying about any technical challenges that might hit us. We divided our work, planned our approach, and enjoyed every moment while shaping our idea to a product. Looking back, I am proud to attribute our success to my highly motivated and fearless team with an unending thirst to bring a vision to reality. We are looking forward to testing our idea in real life and helping to create a safer community." - Venkata Sai Kishore Modalavalasa, Computer Science & Engineering Graduate Student, Santa Clara University

    "My very first hackathon, and an amazing experience indeed! The intellectually charged atmosphere, the intense coding, and the serious competition kept us on our toes throughout the 24 hours. Kudos to ‘Cap'n Sai,’ who guided us and helped take the product to near perfection. Kudos to the rest of my teammates, who coded diligently through the night. And finally, thank you to the organizers and sponsors of BroncoHack 2015, for having provided us with a platform to turn an idea into a functional security solution that can help us make a difference." - Ashish Nair, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco-hack was the first hackathon I ever attended, and it turned to be an amazing experience. After pondering over many ideas, we finally decided to stick with our app: 'PrivaSee'. The idea was to come up with a way to protect kids from sending sensitive digital information that can potentially be compromised over the school’s network. Our objective was to build a basic working model (minimum viable product) of the app. It was a challenge to me because I was not experienced in the particular technical skill-set that was required to build my part of the app. This experience has most definitely strengthened my ability to perform and learn in high pressure situations. I would definitely like to thank the organizers for supporting us throughout the event. They provided us with whatever our team needed and were very friendly about it. I plan to focus on resolving more complicated issues that still plague our society and carry forward and use what I learnt from this event." - Manish Kaushik, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco Hack 2015 was my first Hackathon experience. I picked up working with Android App development. Something that I found challenging and fun to do was working with parse cloud and Android Interaction. I am really happy that I was able to learn and complete the hackathon. I also find that I'm learning how to work and communicate effectively in teams and within time bounds. Everyone in the team comes in with different skill levels and you really have to adapt quickly in order to be productive as a team and make your idea successful within 24hrs." - Prajakta Patil, Computer Science & Engineering Graduate Student, Santa Clara University

    "I am extremely glad I had this opportunity to participate in Bronco Hack 2015. It was my first ever hackathon, and an eye-opening event for me. It is simply amazing how groups of individuals can come up with such unique and extremely effective solutions for current issues in a matter of just 24 hours. This event helped me realize that I am capable of much more than I expected. It was great working with the team we had, and special thanks to Captain Sai for leading the team to victory. " - Tanmay Kuruvilla, Computer Science & Engineering Graduate Student, Santa Clara University

    Congratulations to all of the BroncoHack participants—and yes, BroncoHack will return next spring!

  •  A New Ethics Case Study

    Friday, Apr. 24, 2015
    A Google receptionist works at the front desk in the company's office in this Oct. 2, 2006, file photo. (AP Photo/Mark Lennihan, File)

    In October 2014, Google inaugurated a Transparency Report detailing its implementation of the European court decision generally (though mistakenly) described as being about “the right to be forgotten.” To date, according to the report, Google has received more than 244,000 requests for removals of URLs from certain searches involving names of EU residents. Aside from such numbers, the Transparency Report includes examples of requests received, noting in each case whether or not Google complied with the request.

    The “right to be forgotten” decision and its implementation have raised a number of ethical issues. Given that, we thought it would be useful to draw up an ethics case study fleshing out those issues; we published it yesterday: see “Removing a Search Result: An Ethics Case Study.”

    What would you decide, if you were part of the decision-making team tasked with evaluating the request described in the case study?

     

  •  Grant from Intel's Privacy Curriculum Initiative Will Fund New SCU Course

    Friday, Mar. 27, 2015

    Exciting news! A new course now being developed at Santa Clara University, funded by a $25,000 grant from Intel Corporation's Privacy Curriculum Initiative, will bring together engineering, business, and law students to address topics such as privacy by design, effective and accurate privacy policies, best-practice cybersecurity procedures, and more. Ethics will be an important part of the discussion, and the curriculum will be developed by the High Tech Law Institute in conjunction with Santa Clara University’s School of Engineering, the Leavey School of Business, and the Markkula Center for Applied Ethics.

    More details here!

     

  •  Trust, Self-Criticism, and Open Debate

    Tuesday, Mar. 17, 2015
    President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)

    Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity.  Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats.  In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”

    Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”

    Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”

    Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”

    If the goal is increased collaboration between the public and private sectors on issues related to cybersecurity, the biggest obstacle, as many commentators have observed, is a lack of trust. Things are not likely to get better as long as that anger and mistrust are left unaddressed.  If President Obama is right in noting that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.

     

  •  Luciano Floridi’s Talk at Santa Clara University

    Tuesday, Mar. 10, 2015

     

     
    In the polarized debate about the so-called “right to be forgotten” prompted by an important decision issued by the European Court of Justice last year, Luciano Floridi has played a key role. Floridi, who is Professor of Philosophy and Ethics of Information at the University of Oxford and Director of Research of the Oxford Internet Institute, accepted Google’s invitation to join its advisory council on that topic. While the council was making its way around seven European capitals pursuing both expert and public input, Professor Floridi (the only ethicist in the group) wrote several articles about his evolving understanding of the issues involved—including “Google's privacy ethics tour of Europe: a complex balancing act”; “Google ethics tour: should readers be told a link has been removed?”; “The right to be forgotten – the road ahead”; and “Right to be forgotten poses more questions than answers.”
     
    Last month, after the advisory council released its much-anticipated report, Professor Floridi spoke at Santa Clara University (his lecture was part of our ongoing “IT, Ethics, and Law” lecture series). In his talk, titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age,” Floridi embedded his analysis of the European court decision into a broader exploration of the nature of memory itself; the role of memory in the European philosophical tradition; and the relationship among memory, identity, forgiveness, and closure. As Floridi explained, the misnamed “right to be forgotten” is really about closure, which is in turn not about forgetting but about “rightly managing your past memory.”
     
    Here is the video of that talk. We hope that it will add much-needed context to the more nuanced conversation that is now developing around the balancing of the rights, needs, and responsibilities of all of the stakeholders involved in this debate, as Google continues to process the hundreds of thousands of requests for de-linking submitted so far in the E.U.
     
    If you would like to be added to our “IT, Ethics, and Law” mailing list in order to be notified of future events in the lecture series, please email ethics@scu.edu.

     

  •  On Remembering, Forgetting, and Delisting

    Friday, Feb. 20, 2015
     
    Over the last two weeks, Julia Powles, who is a law and technology researcher at the University of Cambridge, has published two interesting pieces on privacy, free speech, and the “right to be forgotten”: “Swamplands of the Internet: Speech and Privacy,” and “How Google Determined Our Right to Be Forgotten” (the latter co-authored by Enrique Chaparro). They are both very much worth reading, especially for folks whose work impacts the privacy rights (or preferences, if you prefer) of people around the world.
     
    Today, a piece that I wrote, which also touches on the “right to be forgotten,” was published in Re/code. It’s titled “The Right to Be Forgotten, the Privilege to Be Remembered.” I hope you’ll read that, too!
     
    And earlier in February, Google’s Advisory Council issued its much-anticipated report on the issue; the report seeks to clarify the outlines of the surrounding debate and offers suggestions for the implementation of “delisting.”
     
    One of the authors of that report, Professor Luciano Floridi, will be speaking at Santa Clara University on Wednesday, 2/25, as part of our “IT, Ethics and Law” lecture series.  Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford and the Director of Research of the Oxford Internet Institute. His talk is titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age.” The event is free and open to the public; if you live in the area and are interested in memory, free speech, and privacy online, we hope you will join us and RSVP!
     
    [And if you would like to be added to our mailing list for the lecture series—which has recently hosted panel presentations on ethical hacking, the ethics of online price discrimination, and privacy by design and software engineering ethics—please email ethics@scu.edu.] 
     
    Photo by Minchioletta, used without modification under a Creative Commons license.
  •  “Practically as an accident”: on “social facts” and the common good

    Thursday, Oct. 30, 2014

     

    In the Los Angeles Review of Books, philosopher Evan Selinger takes issue with many of the conclusions (and built-in assumptions) compiled in Dataclysm—a new book by Christian Rudder, who co-founded the dating site OKCupid and now heads the site’s data analytics team. While Selinger’s whole essay is really interesting, I was particularly struck by his comments on big data and privacy. 

    “My biggest issue with Dataclysm,” Selinger writes,
     
    lies with Rudder’s treatment of surveillance. Early on in the book he writes: ‘If Big Data’s two running stories have been surveillance and money, for the last three years I’ve been working on a third: the human story.’ This claim about pursuing a third path isn’t true. Dataclysm itself is a work of social surveillance.
     
    It’s tempting to think that different types of surveillance can be distinguished from one another in neat and clear ways. If this were the case, we could say that government surveillance only occurs when organizations like the National Security Agency do their job; corporate surveillance is only conducted by companies like Facebook who want to know what we’re doing so that they effectively monetize our data and devise strategies to make us more deeply engaged with their platform; and social surveillance only takes place in peer-to-peer situations, like parents monitoring their children’s phones, romantic partners scrutinizing each other’s social media feeds….
     
    But in reality, surveillance is defined by fluid categories.
     
    While each category of surveillance might include both ethical and unethical practices, the point is that the boundaries separating the categories are porous, and the harms associated with surveillance might seep across all of them.
     
    Increasingly, when corporations like OKCupid or Facebook analyze their users’ data and communications in order to uncover “social facts,” they claim to be acting in the interest of the common good, rather than pursuing self-serving goals. They claim to give us clear windows into our society. The subtitle of Rudder’s book, for example, is “Who We Are (When We Think No One’s Looking).” As Selinger notes,
     
    Rudder portrays the volume of information… as a gift that can reveal the truth of who we really are. … [W]hen people don’t realize they’re lab rats in Rudder’s social experiments, they reveal habits—‘universals,’ he even alleges…  ‘Practically as an accident,’ Rudder claims, ‘digital data can now show us how we fight, how we love, how we age, who we are, and how we’re changing.’
     
    Of course, Rudder should confine his claims to the “we” who use OKCupid (a 2013 study by the Pew Research Center found that 10% of Americans report having used an online dating service). Facebook has a stronger claim to having a user base that reflects all of “us.”  But there are other entities that sit on even vaster data troves than Facebook’s, even more representative of U.S. society overall. What if a governmental organization were to decide to pursue the same selfless goals, after carefully ensuring that the data involved would be carefully anonymized and presented only in the aggregate (akin to what Rudder claims to have done)?
     
    In the interest of better “social facts,” of greater insight into our collective mindsets and behaviors, should we encourage (or even require) the NSA to publish “Who Americans Are (When They Think No One’s Watching)”? To be followed, perhaps, by a series of “Who [Insert Various Other Nationalities] Are (When They Think No One’s Watching)”? Think of all the social insights and common good that would come from that!
     
    In all seriousness, as Selinger rightly points out, the surveillance behind such no-notice-no-consent research comes at great cost to society:
     
    Rudder’s violation of the initial contextual integrity [underpinning the collection of OKCupid user data] puts personal data to questionable secondary, social use. The use is questionable because privacy isn’t only about protecting personal information. People also have privacy interests in being able to communicate with others without feeling anxious about being excessively monitored. … [T]he resulting apprehension inhibits speech, stunts personal growth, and possibly even disinclines people from experimenting with politically relevant ideas.
     
    With every book subtitled “Who We Are (When We Think No One’s Looking),” we, the real we, become more wary, more likely to assume that someone’s always looking. And as many members of societies that have lived with excessive surveillance have attested, that’s not a path to achieving the good life.
     
    Photo by Henning Muhlinghaus, used without modification under a Creative Commons license.

     

  •  Are You A Hysteric, Or A Sociopath? Welcome to the Privacy Debate

    Tuesday, Oct. 7, 2014

     

    Whether you’re reading about the latest data-mining class action lawsuit through your Google Glass or relaxing on your front porch waving at your neighbors, you probably know that there’s a big debate in this country about privacy.  Some say privacy is important. Some say it’s dead.  Some say kids want it, or not. Some say it’s a relatively recent phenomenon whose time, by the way, has passed—a slightly opaque blip in our history as social animals. Others say it’s a human right without which many other rights would be impossible to maintain.

    It’s a much-needed discussion—but one in which the tone is often not conducive to persuasion, and therefore progress.  If you think concerns about information privacy are overrated and might become an obstacle to the development of useful tools and services, you may hear yourself described as a [Silicon Valley] sociopath or a heartless profiteer.  If you believe that privacy is important and deserves protection, you may be called a “privacy hysteric.”
     
    It’s telling that privacy advocates are so often called “hysterics”—a term associated more commonly with women, and with a surfeit of emotion and lack of reason.  (Privacy advocates are also called “fundamentalists” or “paranoid”—again implying belief not based in reason.)  And even when such terms are not directly deployed, the tone often suggests them. In a 2012 Cato Institute policy analysis titled “A Reasonable Response to the Privacy ‘Crisis,’” for example, Larry Downes writes about the “emotional baggage” invoked by the term “privacy,” and advises, “For those who naturally leap first to legislative solutions, it would be better just to fume, debate, attend conferences, blog, and then calm down before it’s too late.”  (Apparently debate, like fuming and attending conferences, is just a harmless way to let off steam—as long as it doesn’t lead to such hysteria as class-action lawsuits or actual attempts at legislation.)
     
    In the year following Edward Snowden’s revelations, the accusations of privacy “hysteria” or “paranoia” seemed to have died down a bit; unfortunately, they might be making a comeback. The summary of a recent GigaOm article, for example, accuses BuzzFeed of “pumping up the hysteria” in its discussion of ad beacons installed—and quickly removed—in New York.
     
    On the other hand, those who oppose privacy-protecting legislation or who argue that other values or rights might trump privacy sometimes find themselves diagnosed, too—if not as sociopaths, then at least as belonging on the “autism spectrum”: disregardful of social norms, unable to empathize with others.
     
    Too often, the terms thrown about by some on both sides in the privacy debate suggest an abdication of the effort to persuade. You can’t reason with hysterics and sociopaths, so there’s no need to try. You just state your truth to those others who think like you do, and who cheer your vehemence.
     
    But even if you’re a privacy advocate, you probably want the benefits derived from collecting and analyzing at least some data sets, under some circumstances; and even if you think concerns about data disclosures are overblown, you still probably don’t disclose everything about yourself to anyone who will listen.
     
    If information is power, privacy is a defensive shell against that power.  It is an effort to modulate vulnerability.  (The more vulnerable you feel, the more likely you are to understand the value of privacy.)  So privacy is an inherent part of all of our lives; the question is how to deploy it best.  In light of new technologies that create new privacy challenges, and new methodologies that seek to maximize benefits while minimizing harms (e.g. “differential privacy”), we need to be able to discuss this complicated balancing act—without charged rhetoric making the debate even more difficult.
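
    For readers unfamiliar with the term, “differential privacy” refers to techniques that add carefully calibrated random noise to aggregate statistics, so that the presence or absence of any one person’s data is statistically masked. The following is a toy sketch only, with made-up numbers and no claim about any deployed system:

        import math
        import random

        def noisy_count(true_count, epsilon):
            """Release a count plus Laplace noise of scale 1/epsilon.

            One person joining or leaving the data changes a count by at most 1,
            so noise at this scale yields epsilon-differential privacy for the
            count; smaller epsilon means more noise, i.e. stronger privacy and
            less accuracy.
            """
            u = random.random() - 0.5  # uniform on [-0.5, 0.5)
            noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
            return true_count + noise

        # Publish "how many users report trait X" without making any single
        # user's presence in the data detectable.
        print(noisy_count(412, epsilon=0.1))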
     
    If you find yourself calling people privacy-related names (or writing headlines or summaries that do that, even when the headlined articles themselves don’t), please rephrase.
     
    Photo by Tom Tolkien, unmodified, used under a Creative Commons license: https://creativecommons.org/licenses/by/2.0/legalcode
     
     
  •  Should You Watch? On the Responsibility of Content Consumers

    Tuesday, Sep. 23, 2014

    This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely).  They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels.  And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.

     
    These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
    “Would You Watch [the beheading videos]?” ask CNN and ABC News. “Should You Watch the Ray Rice Assault Video?” asks Shape magazine. “Should We Look—Or Look Away?” asks Canada’s National Post. And, in a broader article about the “consequences and import of ubiquitous, Internet-connected photography” (and video), The Atlantic’s Robinson Meyer reflects on all three of the cases noted above; his piece is titled “Pics or It Didn’t Happen.”
    Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
    What do you think?
    Would it be unethical to watch the videos discussed above? Why?
    Would it be unethical to look at the photos discussed above? Why?
    Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all?  If so, which of them would you watch, or refuse to watch, and why?
     
    Photo by Matthew Montgomery, unmodified, used under a Creative Commons license.
  •  Revisiting the "Right to Be Forgotten"

    Tuesday, Sep. 16, 2014

    Media coverage of the implementation of the European Court decision on de-indexing certain search results has been less pervasive than the initial reporting on the decision itself, back in May.  At the time, much of the coverage had framed the issue in terms of clashing pairs: E.U. versus U.S.; privacy versus free speech.  In The Guardian, an excellent overview of the decision described the “right to be forgotten” as a “cultural shibboleth.”

    (I wrote about it back then, too, arguing that many of the stories about it were rife with mischaracterizations and false dilemmas.)

    Since then, most of the conversation about online “forgetting” seems to have continued on parallel tracks—although with somewhat different clashing camps.  On one hand, many journalists and other critics of the decision (on both sides of the Atlantic) have made sweeping claims about a resulting “Internet riddled with memory holes” and articles “scrubbed from search results”; one commentator wrote that the court decision raises the question, “can you really have freedom of speech if no one can hear what you are saying?”

    On the other hand, privacy advocates (again on both sides of the Atlantic) have been arguing that the decision is much narrower in scope than has generally been portrayed and that it does not destroy free speech; that Google is not, in fact, the sole and ultimate arbiter of the determinations involved in the implementation of the decision; and that even prior to the court’s decision Google search results were selective, curated, and influenced by various countries’ laws.  Recently, FTC Commissioner Julie Brill urged “thought leaders on both sides of the Atlantic to recognize that, just as we both deeply value freedom of expression, we also have shared values concerning relevance in personal information in the digital age.”

    Amid this debate, in late June, Google developed and started to use its own process for complying with the decision.  But Google has also convened an advisory council that will take several months to consider evidence (including public input from meetings held in seven European capitals--Madrid, Rome, Paris, Warsaw, Berlin, London, and Brussels), before producing a report that would inform the company’s current efforts.  Explaining the creation of the council, the company noted that it is now required to balance “on a case-by-case basis, an individual’s right to be forgotten with the public’s right to information,” and added, “We want to strike this balance right. This obligation is a new and difficult challenge for us, and we’re seeking advice on the principles Google ought to apply…. That’s why we’re convening a council of experts.”

    The advisory council (to whom any and all can submit comments) has been posting videos of the public meetings online. However, critics have taken issue with the group’s members (selected by Google itself), with the other presenters invited to participate at the meetings (again screened and chosen by Google), and, most recently, with its alleged rebuffing of questions from the general public. So far, many of the speakers invited to the meetings have raised questions about the appropriateness of the decision itself.

    In this context, one bit of evidence makes its own public comment:  Since May, according to Google, the company has received more than 120,000 de-indexing requests. Tens of thousands of Europeans have gone through the trouble of submitting a form and the related information necessary to request that a search of their name not include certain results.  

    And, perhaps surprisingly (especially given most of the coverage of the decision in the U.S.), a recent poll of American Internet users by an IT security research firm found that a “solid majority” of them—61%—were “in favor of a ‘right to be forgotten’ law for US citizens.”

    But this, too, may speak differently to different audiences. Some will see it as evidence of a vast pent-up need that had had no outlet until now. Others will see it as evidence of the tens of thousands of restrictions and “holes” that will soon open up in the Web.

    So—should we worry about the impending “memory holes”?

    In a talk entitled “The Internet with a Human Face,” American Web developer Maciej Ceglowski argues that “the Internet somehow contrives to remember too much and too little at the same time.” He adds,

    in our elementary schools in America, if we did something particularly heinous, they had a special way of threatening you. They would say: “This is going on your permanent record.”

    It was pretty scary. I had never seen a permanent record, but I knew exactly what it must look like. It was bright red, thick, tied with twine. Full of official stamps.

    The permanent record would follow you through life, and whenever you changed schools, or looked for a job or moved to a new house, people would see the shameful things you had done in fifth grade. 

    How wonderful it felt when I first realized the permanent record didn’t exist. They were bluffing! Nothing I did was going to matter! We were free!

    And then when I grew up, I helped build it for real.

    But while a version of the “permanent record” is now real, it is also true that much content on the Internet is already ephemeral. The phenomenon of “link rot,” for example, affects even important legal documents.  And U.K. law professor Paul Bernal has argued that we should understand the Internet as “organic, growing and changing all the time,” and that it’s a good thing that this is so. “Having ways to delete information [online] isn’t the enemy of the Internet of the people,” Bernal writes, “much as [it may be] an enemy of the big players of the Internet.”

    Will Google, one of the “big players on the Internet,” hear such views, too? It remains to be seen; Google’s “European grand tour,” as another U.K. law professor has dubbed it, will conclude on November 4th.

    Photograph by derekb, unmodified, under a Creative Commons license. https://creativecommons.org/licenses/by-nc/2.0/legalcode
