Santa Clara University


Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “accountability.”
  •  How Google Can Illuminate the "Right to Be Forgotten" Debate: Two Requests

    Thursday, May. 14, 2015

     

    Happy Birthday, Right-to-Have-Certain-Results-De-Listed-from-Searches-on-Your-Own-Name-,-Depending-on-the-Circumstances!

    It’s now been a year since the European Court of Justice shocked (some) people with a decision that has mistakenly been described as announcing a “right to be forgotten.”

    Today, 80 Internet scholars sent an open letter to Google asking it to release additional aggregate data about the company’s implementation of the court decision. As they explain,

    The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.

    We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the [“right to be forgotten”] in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.

    Although Google has released a Transparency Report with some aggregate data and some examples of the delinking decisions reached so far, the signatories find that effort insufficient. “Beyond anecdote,” they write,

    we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.

    For now, they add, the participants in the delisting debate “do battle in a data vacuum, with little understanding of the facts.”

    More detailed data is certainly needed. What remains striking, in the meantime, is how little understanding many people continue to have of what the decision itself mandates. A year after the decision was issued, an associate editor for Engadget, for example, still writes that, as a result of it, “if Google or Microsoft hides a news story, there may be no way to get it back.”

    To “get it back”?! Into the results of a search on a particular person’s name? Because that is the entire scope of the delinking involved here—when the delinking does happen.

    In response to a request for comment on the Internet scholars’ open letter, a Google spokesman told The Guardian that “it’s helpful to have feedback like this so we can know what information the public would find useful.” In that spirit of helpful feedback, may I make one more suggestion?

    Google’s RTBF Transparency Report (updated on May 14) opens with the line, “In a May 2014 ruling, … the Court of Justice of the European Union found that individuals have the right to ask search engines like Google to remove certain results about them.” Dear Googlers, could you please add a line or two explaining that “removing certain results” does not mean “removing certain stories from the Internet, or even from the Google search engine”?

    Given the anniversary of the decision, many reporters are turning to the Transparency Report for information for their articles. This is a great educational opportunity. With a line or two, while it weighs its response to the important request for more detailed reporting on its actions, Google could already improve the chances of a more informed debate.

    [I’ve written about the “right to be forgotten” a number of times: chronologically, see “The Right to Be Forgotten, Or the Right to Edit?” “Revisiting the ‘Right to Be Forgotten,’” “The Right to Be Forgotten, The Privilege to Be Remembered” (that one published in Re/code), “On Remembering, Forgetting, and Delisting,” “Luciano Floridi’s Talk at Santa Clara University,” and, most recently, “Removing a Search Result: An Ethics Case Study.”]

    (Photo by Robert Scoble, used without modification under a Creative Commons license.)

     

  •  Is Facebook Becoming a Better Friend?

    Thursday, Apr. 30, 2015
    This Feb. 8, 2012 photo shows workers inside of Facebook headquarters in Menlo Park, Calif. (AP Photo/Paul Sakuma)

    Good friends understand boundaries and keep your secrets.  You can’t be good friends with someone you don’t trust.

    Facebook, the company that made “friend” a verb and invited you to mix together your bosom buddies, relatives, acquaintances, classmates, lovers, co-workers, exes, teachers, and who-knows-who-else into one group it called “friends”—and that has been helping you stay in touch with all of them and prompting you to reveal lots of things to all of them—is taking some steps to become more trustworthy.

    Specifically, as TechCrunch and others recently reported, as of April 30 Facebook’s modified APIs will no longer allow apps to collect data both from their users and from their users’ Facebook “friends”—something many apps did until now, often without the users (or their friends) realizing it.*

    As TechCrunch’s Josh Constine puts it, “Some users will see [this] as a positive move that returns control of personal data to its rightful owners. Just because you’re friends with someone, doesn’t mean you necessarily trust their judgment about what developers are safe to deal with. Now, each user will control their own data destiny.” Moreover, with Facebook’s new APIs, each user will have more “granular control” over what permissions he or she grants to an app in terms of data collection or other actions—such as permission to post to his or her Newsfeed. Constine writes that

    Facebook has now instituted Login Review, where a team of its employees audit any app that requires more than the basic data of someone’s public profile, list of friends, and email address. The Login Review team has now checked over 40,000 apps, and from the experience, created new, more specific permissions so developers don’t have to ask for more than they need. Facebook revealed that apps now ask an average of 50 percent fewer permissions than before.

    These are important changes, specifically intended by Facebook to increase user trust in the platform. They are certainly welcome steps. However, Facebook might ponder the first line of Constine’s TechCrunch article, which reads, “It was always kind of shady that Facebook let you volunteer your friends’ status updates, check-ins, location, interests and more to third-party apps.” Yes, it was. It should have been obvious all along that users should “control their own data destiny.” Facebook’s policies and lack of clarity about what they made possible turned many of us who used it into somewhat inconsiderate “friends.”

    Are there other policies that continue to have that effect? So many of our friendship-related actions are now prompted and shaped by the design of the social platforms on which we perform them—and controlled even more by algorithms such as the Facebook one that determines which of our friends’ posts we see in our Newsfeed (no, they don’t all scroll by in chronological order; what you see is a curated feed, in which the parameters for curation are not fully disclosed and keep changing).

    Facebook might be becoming a better, more trustworthy friend (though a “friend” that, according to The Atlantic, made $5 billion last year by showing you ads, “more than [doubling] digital ad revenue over the course of two years”). Are we becoming better friends, though, too? Or should we be clamoring for even more transparency and more changes that would empower us to be that?

    *  We warned about this practice in our Center’s module about online privacy: “Increasingly, you may… be allowing some entities to collect a lot of personal information about all of your online ‘friends’ by simply clicking ‘allow’ when downloading applications that siphon your friends' information through your account. On the flip side, your ‘friends’ can similarly allow third parties to collect key information about you, even if you never gave that third party permission to do so.” Happily, we’ll have to update that page now…

     

  •  Harrison Bergeron in Silicon Valley

    Wednesday, Apr. 1, 2015
     
    Certain eighth graders I know have been reading “Harrison Bergeron,” so I decided to re-read it, too. The short story, by Kurt Vonnegut, describes a dystopian world in which, in an effort to make all people equal, a government imposes countervailing handicaps on all citizens who are somehow naturally gifted: beautiful people are forced to wear ugly masks; strong people have to carry around weights in proportion to their strength; graceful people are hobbled; etc. In order to make everybody equal, in other words, all people are brought to the lowest common denominator. The title character, Harrison Bergeron, is particularly gifted and therefore particularly impaired. As Vonnegut describes him,
     
    … Harrison's appearance was Halloween and hardware. Nobody had ever borne heavier handicaps. He had outgrown hindrances faster than the H-G men could think them up. Instead of a little ear radio for a mental handicap, he wore a tremendous pair of earphones, and spectacles with thick wavy lenses. The spectacles were intended to make him not only half blind, but to give him whanging headaches besides.
    Scrap metal was hung all over him. Ordinarily, there was a certain symmetry, a military neatness to the handicaps issued to strong people, but Harrison looked like a walking junkyard. In the race of life, Harrison carried three hundred pounds.
    And to offset his good looks, the H-G men required that he wear at all times a red rubber ball for a nose, keep his eyebrows shaved off, and cover his even white teeth with black caps at snaggle-tooth random.
     
    In classroom discussions, the story is usually presented as a critique of affirmative action. Such discussions miss the fact that affirmative action aims to level the playing field, not the players.
     
    In the heart of Silicon Valley, in a land that claims to value meritocracy but ignores the ever more sharply tilted playing field, “Harrison Bergeron” seems particularly inapt. But maybe it’s not. Maybe it should be read, but only in conjunction with stories like CNN’s recent interactive piece titled “The Poor Kids of Silicon Valley.” Or the piece by KQED’s Rachel Myrow, published last month, which notes that 30% of Silicon Valley’s population lives “below self-sufficiency standards,” and that “the income gap is wider than ever, and wider in Silicon Valley than elsewhere in the San Francisco Bay Area or California.”
     
    What such (nonfiction, current) stories make clear is that we are, in fact, already hanging weights and otherwise hampering people in our society. It’s just that we don’t do it to the particularly gifted; we do it to the most vulnerable. The kids who have to wake up earlier because they live far from their high school and have to take two buses since their parents can’t drive them to school, and who end up sleep-deprived and less able to learn—the burden is on them. The kids who live in homeless shelters and whose brains might be impacted, long-term, by the stress of poverty—the burden is on them. The people who work as contractors with limited or no benefits—the burden is on them. The parents who have to work multiple jobs, can’t afford to live close to work, and have no time to read to their kids—the burden is on all of them.
     
    In a Wired article about a growing number of Silicon Valley “techie” parents who are opting to home-school their kids, Jason Tanz expresses some misgivings about the subject but adds,
     
    My son is in kindergarten, and I fear that his natural curiosity won’t withstand 12 years of standardized tests, underfunded and overcrowded classrooms, and constant performance anxiety. The Internet has already overturned the way we connect with friends, meet potential paramours, buy and sell products, produce and consume media, and manufacture and deliver goods. Every one of those processes has become more intimate, more personal, and more meaningful. Maybe education can work the same way.
     
    Set aside the question of whether those processes have indeed become more intimate and meaningful; let’s concentrate on a different question about the possibility that, with the help of the Internet, education might “work the same way”: For whom?
     
    Are naturally curious and creative kids being hampered by standardized tests and underfunded and overcrowded classrooms? Well then, in Silicon Valley, some of those kids will be homeschooled. The Wired article quotes a homeschooling parent who optimistically foresees a day “when you can hire a teacher by the hour, just as you would hire a TaskRabbit to assemble your Ikea furniture.” And what happens to the kids of the TaskRabbited teacher? If Harrison Bergeron happens to be one of them, he will be further hampered, and nobody will check whether the weight of his burden is proportional to anything.
     
    Meritocracy is a myth when social inequality becomes as vast as it has become in Silicon Valley. Teaching “Harrison Bergeron” to eighth graders in this environment is a cruel joke.
     
    (Photo by Ken Banks, cropped, used under a Creative Commons license.)
  •  Trust, Self-Criticism, and Open Debate

    Tuesday, Mar. 17, 2015
    President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)

    Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity.  Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats.  In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”

    Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”

    Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”

    Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”

    If the goal is increased collaboration between the public and private sectors on issues related to cybersecurity, the biggest obstacle, as many commentators have observed, is a lack of trust. Things are not likely to get better as long as the anger and lack of trust are left unaddressed. If President Obama is right in noting that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.

     

  •  Luciano Floridi’s Talk at Santa Clara University

    Tuesday, Mar. 10, 2015

     

     
    In the polarized debate about the so-called “right to be forgotten” prompted by an important decision issued by the European Court of Justice last year, Luciano Floridi has played a key role. Floridi, who is Professor of Philosophy and Ethics of Information at the University of Oxford and Director of Research of the Oxford Internet Institute, accepted Google’s invitation to join its advisory council on that topic. While the council was making its way around seven European capitals pursuing both expert and public input, Professor Floridi (the only ethicist in the group) wrote several articles about his evolving understanding of the issues involved—including “Google's privacy ethics tour of Europe: a complex balancing act”; “Google ethics tour: should readers be told a link has been removed?”; “The right to be forgotten – the road ahead”; and “Right to be forgotten poses more questions than answers.”
     
    Last month, after the advisory council released its much-anticipated report, Professor Floridi spoke at Santa Clara University (his lecture was part of our ongoing “IT, Ethics, and Law” lecture series). In his talk, titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age,” Floridi embedded his analysis of the European court decision into a broader exploration of the nature of memory itself; the role of memory in the European philosophical tradition; and the relationship among memory, identity, forgiveness, and closure. As Floridi explained, the misnamed “right to be forgotten” is really about closure, which is in turn not about forgetting but about “rightly managing your past memory.”
     
    Here is the video of that talk. We hope that it will add much-needed context to the more nuanced conversation that is now developing around the balancing of the rights, needs, and responsibilities of all of the stakeholders involved in this debate, as Google continues to process the hundreds of thousands of requests for de-linking submitted so far in the E.U.
     
    If you would like to be added to our “IT, Ethics, and Law” mailing list in order to be notified of future events in the lecture series, please email ethics@scu.edu.

     

  •  Questions about Mass Surveillance

    Tuesday, Oct. 14, 2014


    Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy. (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc. Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”

    The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments.  However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved.  And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.”  Microsoft’s General Counsel, Brad Smith, went one step further:  “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”

    That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by the Santa Clara Magazine. One of the things I had asked was whether the newly revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, "[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?"

    And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, "[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?"

    We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church committee and to policy changes.

    Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps.  But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues.  Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”

    Illustration by Stuart Bradford

  •  Should You Watch? On the Responsibility of Content Consumers

    Tuesday, Sep. 23, 2014

    This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely).  They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels.  And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.

     
    These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
    “Would You Watch [the beheading videos]?” ask CNN and ABC News. “Should You Watch the Ray Rice Assault Video?” asks Shape magazine. “Should We Look—Or Look Away?” asks Canada’s National Post. And, in a broader article about the “consequences and import of ubiquitous, Internet-connected photography” (and video), The Atlantic’s Robinson Meyer reflects on all three of the cases noted above; his piece is titled “Pics or It Didn’t Happen.”
    Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
    What do you think?
    Would it be unethical to watch the videos discussed above? Why?
    Would it be unethical to look at the photos discussed above? Why?
    Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all?  If so, which of them would you watch, or refuse to watch, and why?
     
    Photo by Matthew Montgomery, unmodified, used under a Creative Commons license.
  •  Revisiting the "Right to Be Forgotten"

    Tuesday, Sep. 16, 2014

     Media coverage of the implementation of the European Court decision on de-indexing certain search results has been less pervasive than the initial reporting on the decision itself, back in May.  At the time, much of the coverage had framed the issue in terms of clashing pairs: E.U. versus U.S; privacy versus free speech.  In The Guardian, an excellent overview of the decision described the “right to be forgotten” as a “cultural shibboleth.”

    (I wrote about it back then, too, arguing that many of the stories about it were rife with mischaracterizations and false dilemmas.)

    Since then, most of the conversation about online “forgetting” seems to have continued on parallel tracks—although with somewhat different clashing camps. On one hand, many journalists and other critics of the decision (on both sides of the Atlantic) have made sweeping claims about a resulting “Internet riddled with memory holes” and articles “scrubbed from search results”; one commentator wrote that the court decision raises the question, “can you really have freedom of speech if no one can hear what you are saying?”

    On the other hand, privacy advocates (again on both sides of the Atlantic) have been arguing that the decision is much narrower in scope than has generally been portrayed and that it does not destroy free speech; that Google is not, in fact, the sole and ultimate arbiter of the determinations involved in the implementation of the decision; and that even prior to the court’s decision Google search results were selective, curated, and influenced by various countries’ laws.  Recently, FTC Commissioner Julie Brill urged “thought leaders on both sides of the Atlantic to recognize that, just as we both deeply value freedom of expression, we also have shared values concerning relevance in personal information in the digital age.”

    Amid this debate, in late June, Google developed and started to use its own process for complying with the decision. But Google has also convened an advisory council that will take several months to consider evidence (including public input from meetings held in seven European capitals: Madrid, Rome, Paris, Warsaw, Berlin, London, and Brussels), before producing a report that would inform the company’s current efforts. Explaining the creation of the council, the company noted that it is now required to balance “on a case-by-case basis, an individual’s right to be forgotten with the public’s right to information,” and added, “We want to strike this balance right. This obligation is a new and difficult challenge for us, and we’re seeking advice on the principles Google ought to apply…. That’s why we’re convening a council of experts.”

    The advisory council (to whom any and all can submit comments) has been posting videos of the public meetings online. However, critics have taken issue with the group’s members (selected by Google itself), with the other presenters invited to participate at the meetings (again screened and chosen by Google), and, most recently, with its alleged rebuffing of questions from the general public. So far, many of the speakers invited to the meetings have raised questions about the appropriateness of the decision itself.

    In this context, one bit of evidence makes its own public comment:  Since May, according to Google, the company has received more than 120,000 de-indexing requests. Tens of thousands of Europeans have gone through the trouble of submitting a form and the related information necessary to request that a search of their name not include certain results.  

    And, perhaps surprisingly (especially given most of the coverage of the decision in the U.S.), a recent poll of American Internet users by an IT security research firm found that a “solid majority” of them (61%) were “in favor of a ‘right to be forgotten’ law for US citizens.”

    But this, too, may speak differently to different audiences. Some will see it as evidence of a vast pent-up need that had had no outlet until now. Others will see it as evidence of the tens of thousands of restrictions and “holes” that will soon open up in the Web.

    So—should we worry about the impending “memory holes”?

    In a talk entitled “The Internet with a Human Face,” American Web developer Maciej Ceglowski argues that “the Internet somehow contrives to remember too much and too little at the same time.” He adds,

    in our elementary schools in America, if we did something particularly heinous, they had a special way of threatening you. They would say: “This is going on your permanent record.”

    It was pretty scary. I had never seen a permanent record, but I knew exactly what it must look like. It was bright red, thick, tied with twine. Full of official stamps.

    The permanent record would follow you through life, and whenever you changed schools, or looked for a job or moved to a new house, people would see the shameful things you had done in fifth grade. 

    How wonderful it felt when I first realized the permanent record didn’t exist. They were bluffing! Nothing I did was going to matter! We were free!

    And then when I grew up, I helped build it for real.

    But while a version of the "permanent record" is now real, it is also true that much content on the Internet is already ephemeral. The phenomenon of "link rot," for example, affects even important legal documents.  And U.K. law professor Paul Bernal has argued that we should understand the Internet as "organic, growing and changing all the time," and that it's a good thing that this is so. "Having ways to delete information [online] isn't the enemy of the Internet of the people," Bernal writes, "much as it may be an enemy of the big players of the Internet."

    Will Google, one of those "big players of the Internet," hear such views, too? It remains to be seen; Google's "European grand tour," as another U.K. law professor has dubbed it, will conclude on November 4th.

    Photograph by derekb, unmodified, under a Creative Commons license. https://creativecommons.org/licenses/by-nc/2.0/legalcode

  •  The Disconnect: Accountability and Consequences Online

    Sunday, Apr. 28, 2013

    Do we need more editorial control on the Web?  In this brief clip, Stephen Luczo, Chairman, President, and Chief Executive Officer of Seagate Technology, argues that we do.  He also cautions that digital media channels sometimes unwittingly lend a gloss of credibility to stories that don't deserve it (as was recently demonstrated in the coverage of the Boston bombing).  Luczo views this as a symptom of a broader breakdown of the connection among responsibility, accountability, and consequences in the online world.  Is the much-vaunted freedom of the Internet diminishing the amount of substantive feedback that we get for doing something positive--or negative--for society?

    Chad Raphael, Chair of the Communication Department and Associate Professor at Santa Clara University, responds to Luczo's comments:

    "It's true that the scope and speed of news circulation on the Internet worsens longstanding problems of countering misinformation and holding the sources that generate it accountable.  But journalism's traditional gatekeepers were never able to do these jobs alone, as Senator Joseph McCarthy knew all too well.  News organizations make their job harder with each new round of layoffs of experienced journalists.

    There are new entities emerging online that can help fulfill these traditional journalistic functions, but we need to do more to connect, augment, and enshrine them in online news spaces. Some of these organizations, such as News Trust, crowdsource the problem of misinformation by enlisting many minds to review news stories and alert the public to inaccuracy and manipulation.  Their greatest value may be as watchdogs who can sound the alarm on suspicious material.  Other web sites, such as FactCheck.org, rely on trained professionals to evaluate political actors' claims.  They can pick up tips from multiple watchdogs, some of them more partisan than others, and evaluate those tips as fair-minded judges.  We need them to expand their scope beyond checking politicians to include other public actors.  The judges could also use some more robust programs for tracking the spread of info-viruses back to their sources, so they can be identified and exposed quickly.  We also need better ways to publicize the online judges' verdicts. 

    If search engines and other news aggregators aim to organize the world's information for us, it seems within their mission to let us know what sources, stories, and news organizations have been more and less accurate over time.  Even more importantly, aggregators might start ranking better performing sources higher in their search results, creating a powerful economic incentive to get the story right rather than getting it first.

    Does that raise First Amendment concerns? Sure. But we already balance the right to free speech against other important rights, including reputation, privacy, and public safety.  And the Internet is likely to remain the Wild West until Google, Yahoo!, Digg, and other news aggregators start separating the good, the bad, and the ugly by organizing information according to its credibility, not just its popularity."

    Chad Raphael