

Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “media.”
  •  The Ethics of Encryption, After the Paris Attacks

    Friday, Nov. 20, 2015

    The smoldering ongoing debate about the ethics of encryption has burst into flame anew following the Paris attacks last week. Early reports about the attacks, at least in the U.S., included claims that the attackers had used encrypted apps to communicate. On Monday, the director of the CIA said that “this is a time for particularly Europe, as well as here in the United States, for us to take a look and see whether or not there have been some inadvertent or intentional gaps that have been created in the ability of intelligence and security services to protect the people…." Also on Monday, Computerworld reports, Senator Feinstein told a reporter that she had “met with chief counsels of most of the biggest software companies to find legal ways that would allow intelligence authorities to break encryption when monitoring terrorism. ‘I have asked for help,’ Feinstein said. ‘I haven't gotten any help.’”

    At the same time, cybersecurity experts are arguing, anew, that there is no way to allow selective access to encrypted materials without also providing a way for bad actors to access such materials—thus endangering the privacy and security of all those who use online tools for communication. In addition, a number of journalists are debunking the initial claims that encryption played a part in the Paris terror attacks (see Motherboard’s “How the Baseless ‘Terrorists Communicating Over PlayStation 4’ Rumor Got Started”), and questioning the assertion that weakening US-generated encryption tools is necessary for law enforcement to thwart terrorism (see Wired’s “After Paris Attacks, What the CIA Director Gets Wrong About Encryption”). But the initial claims, widely reported, are already cited in calls for new regulations (in the Washington Post, Brian Fung argues that “[i]f government surveillance expands after Paris, the media will be partly to blame”).

    As more details from the investigation into the Paris attacks and their aftermath come to light, it now appears that at least some of the attackers’ communications were, in fact, not encrypted. However, even the strongest supporters of encryption concede that terrorists have used it and will probably use it again in their efforts to camouflage their communications. The question is how to respond to that.

    The ethics of generating and deploying encryption tools doesn’t lend itself to an easy answer. Perhaps the best evidence for that is the fact that the U.S. government helps fund the creation and widespread dissemination of such tools. As Computerworld’s Matt Hamblen reports,

    The U.S.-financed Open Technology Fund (OTF) was created in 2012 and supports privately built encryption and other apps to "develop open and accessible technologies to circumvent censorship and surveillance, and thus promote human rights and open societies," according to the OTF's website.

    In one example, the OTF provided $1.3 million to encryption app maker Open Whisper Systems in 2013 and 2014. The San Francisco-based company produced Signal, Redphone and TextSecure smartphone apps to provide various encryption capabilities.

    The same tools that are intended to “promote human rights and open societies” can be used by terrorists, too. So far, all the cybersecurity experts seem to agree that there is no way to provide encryption backdoors that could be used only by the “good guys”: see, for example, the recently released “Keys under Doormats” paper, whose authors argue that

    The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
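
    The technical objection can be made concrete with a toy example. Here is a minimal sketch in Python (using the open-source "cryptography" library; the escrow arrangement is hypothetical, and nothing below depicts how Signal or any real system works) of why an "exceptional access" key is, cryptographically, just another key: whoever holds a copy can decrypt everything it protects.

    ```python
    # Toy illustration: an escrowed "exceptional access" key is
    # indistinguishable from a stolen key.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # Two users share a symmetric key for their conversation.
    conversation_key = Fernet.generate_key()
    channel = Fernet(conversation_key)
    ciphertext = channel.encrypt(b"meet at the usual place")

    # Normal decryption by the intended recipient:
    print(channel.decrypt(ciphertext))

    # "Exceptional access" means keeping a copy of the key somewhere
    # else (an escrow database, a master key, and so on).
    escrow_copy = conversation_key

    # Whoever obtains that copy -- a court-supervised investigator or
    # an attacker who breached the escrow -- decrypts identically:
    print(Fernet(escrow_copy).decrypt(ciphertext))
    ```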

    At a minimum, these difficult problems have to be addressed carefully, with full input from the people who best understand the technical challenges. It is unwarranted to vilify the developers of encryption tools, or to fail to recognize that they, too, are helping us uphold our values.


    Photo by woodleywonderworks, used without modification under a Creative Commons license.

  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
    (AP Photo/Damian Dovarganes)

    As the number of people who are downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS 9 Prompts Backlash”), as well as many other publications.

    Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”
    As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony.”) And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.
    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).
    Many of the recent articles present both sides of the ethics debate. Most, however, claim that the main reasons users install ad blockers are the desire to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (especially those who understand how the online advertising ecosystem works) install ad blockers: for many of those users, the primary concerns are the tracking behind “targeted” ads and the meteoric growth of “malvertising”—advertising used as a vector for malware.
    When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if tracking were somehow inherent in advertising. But the two are not the same, and it is important to reject this conflation. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: when the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). It may not be clear exactly what the respondents meant by “misuse of personal information,” but it is certainly not a reference to either annoying ads or clogged bandwidth.
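
    That advertising and tracking are separable is easy to show in code. The following sketch is purely illustrative (the domain lists are hypothetical stand-ins for the real, community-maintained filter lists such as EasyList and EasyPrivacy that actual blockers use): a blocker that filters requests by destination can refuse trackers while letting untracked ads through.

    ```python
    # A toy request filter illustrating that blocking trackers and
    # blocking ads are separable decisions. The domain sets below are
    # hypothetical stand-ins for real filter lists.
    from urllib.parse import urlparse

    TRACKER_DOMAINS = {"tracker.example", "analytics.example"}  # tracking
    AD_DOMAINS = {"ads.example", "banners.example"}             # plain ads

    def allow_request(url: str, block_ads: bool, block_trackers: bool) -> bool:
        """Return True if the outgoing request should be allowed."""
        host = urlparse(url).hostname or ""
        if block_trackers and host in TRACKER_DOMAINS:
            return False
        if block_ads and host in AD_DOMAINS:
            return False
        return True

    # A privacy-focused configuration blocks tracking but not all ads:
    print(allow_request("https://tracker.example/pixel.gif", False, True))  # False
    print(allow_request("https://ads.example/banner.png", False, True))     # True
    ```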
    As for the rise of “malvertising,” it was that development that led me to say to a Mashable reporter that if this continues unabated we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.
    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some of those who argue that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  The Social Network of Discourse and Discomfort

    Friday, Jun. 19, 2015

    Ello, the social media platform that was prominently (if briefly) touted last year as the “anti-Facebook,” is reinventing itself for mobile. Twitter is reinventing itself, too. Pinterest is reinventing itself into a store. And the anti-“anti-Facebook,” i.e. Facebook, is constantly reinventing itself.

    But the real “anti-Facebook” is described by the director of MIT’s Center for Civic Media, Ethan Zuckerman, in the transcript of a wide-ranging discussion recently held under the auspices of the Carnegie Council on Ethics in International Affairs. Zuckerman notes that one of his students, Sands Fish,

    is trying to build social networks designed to make you uncomfortable. Basically, the first thing he does is he takes away the choice of friends. You no longer have a choice about who is going to be your friend. You are going to interact with people whom he thinks you should be interacting with, as a way of sort of challenging us. Will anyone use this? It's a good question. This is why you do this at research universities rather than going out and getting venture capital for it.
    Initially, the idea of a social platform designed to make users uncomfortable seems amusing, or maybe closer to a conceptual art project than a real social network. But at a time when scholars warn about “filter bubbles” (and companies that might be blamed for them try to calm the worries, or at least deflect responsibility), a time when we seem to either surround ourselves with like-minded people or get sucked into the “spiral of silence” and stop talking about controversial topics, such a network could become a fascinating training ground. Might it lead to constructive ways to engage with people who have different experiences and preferences, hold different beliefs, etc., yet still need to function together, as people in a pluralistic society do?
    Would people willingly submit themselves to discomfort by participating in such a network? Would folks who join such a network be the ones already more comfortable with (or even attracted to) conflict and diversity? Or is it a question of degrees—the degree of discomfort, the degree of diversity, and the degree of thoughtfulness of the conversations that might ensue?
    Zuckerman addresses this issue:
    A lot of my theories around this suggest that you need bridge figures. You need people whom you have one thing in common with, but something else that is very different. I spend a ton of my life right now working on technology and innovation in sub-Saharan Africa. I work with people whom I don't have a lot in common with in terms of where we grew up, who we know, where we are from, but we have a lot in common in terms of what we do day to day, how we interact with technological systems, the things that we care about. That gives us a common ground that we are able to work on.
    Would the designer of the network of discomfort provide us with bridge figures? Or would serendipity offer some?
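    (As a purely hypothetical illustration of what such a network might optimize for—nothing here describes Fish's actual design—Zuckerman's "bridge figure" idea can even be written down as a toy matching rule: prefer partners who share at least one attribute but differ on the rest.)

    ```python
    # A toy "bridge figure" matcher: prefer partners who share some,
    # but little, common ground. Hypothetical illustration only;
    # not Sands Fish's actual design.

    def bridge_score(a: dict, b: dict) -> int:
        """Higher when two profiles share something yet differ on the rest."""
        shared = sum(1 for k in a if a[k] == b.get(k))
        differing = sum(1 for k in a if a[k] != b.get(k))
        # Require at least one shared attribute, then reward difference.
        return differing if shared >= 1 else -1

    alice = {"work": "technology", "region": "US", "politics": "left"}
    bob   = {"work": "technology", "region": "Ghana", "politics": "right"}
    carol = {"work": "technology", "region": "US", "politics": "left"}

    # Bob shares Alice's day-to-day work but little else -- a bridge figure.
    print(bridge_score(alice, bob))    # 2
    print(bridge_score(alice, carol))  # 0
    ```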
    One final thought: in some ways, for some people, Facebook itself has become the kind of social network that Fish (Zuckerman’s student) is apparently trying to design. When your relatives or co-workers send you “friend” requests, do you still “have a choice about who is going to be your friend”? (Much has been written about how conversations on Facebook have deteriorated as the users have amassed vast numbers of “friends” from diverse parts and periods of their lives; and many commentators have suggested that this kind of blended audience has driven teens, at least, to other social networks not yet co-opted by their parents and teachers). Maybe the key distinction in the MIT project would be that participants would, as Zuckerman describes it, “interact with people whom [the network designer] thinks [they] should be interacting with.” The anti-Facebook would provide us more thoughtfully curated discomfort.
    Photo by Kevin Dooley, used without modification under a Creative Commons license.


  •  How Google Can Illuminate the "Right to Be Forgotten" Debate: Two Requests

    Thursday, May. 14, 2015


    Happy Birthday, Right-to-Have-Certain-Results-De-Listed-from-Searches-on-Your-Own-Name,-Depending-on-the-Circumstances!

    It’s now been a year since the European Court of Justice shocked (some) people with a decision that has mistakenly been described as announcing a “right to be forgotten.”

    Today, 80 Internet scholars sent an open letter to Google asking the company to release additional aggregate data about the company’s implementation of the court decision.  As they explain,

    The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.

    We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the [“right to be forgotten”] in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.

    Although Google has released a Transparency Report with some aggregate data and some examples of the delinking decisions reached so far, the signatories find that effort insufficient. “Beyond anecdote,” they write,

    we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.

    For now, they add, the participants in the delisting debate “do battle in a data vacuum, with little understanding of the facts.”

    More detailed data is certainly much needed. What remains striking, in the meantime, is how little understanding of the facts many people continue to have about what the decision itself mandates. A year after the decision was issued, an associate editor for Engadget, for example, still writes that, as a result of it, “if Google or Microsoft hides a news story, there may be no way to get it back.” 

    To “get it back”?! Into the results of a search on a particular person’s name? Because that is the entire scope of the delinking involved here—when the delinking does happen.
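
    The narrow scope is easy to model. In the hypothetical sketch below (all names and URLs are invented), delisting removes a URL only from the results for a query on a particular person's name; the page itself stays online and remains findable through every other query.

    ```python
    # A toy search index illustrating the scope of "delisting" under
    # the 2014 CJEU ruling: a result is removed only for queries on a
    # specific person's name. The page stays online and findable by
    # other queries. All names and URLs here are hypothetical.

    index = {
        "jane doe": ["news.example/old-story", "janedoe.example/bio"],
        "1998 fraud case": ["news.example/old-story"],
    }

    def delist(name_query: str, url: str) -> None:
        """Remove one URL from the results for one name query only."""
        index[name_query] = [u for u in index[name_query] if u != url]

    delist("jane doe", "news.example/old-story")

    print(index["jane doe"])         # ['janedoe.example/bio'] -- delisted here
    print(index["1998 fraud case"])  # ['news.example/old-story'] -- still listed
    ```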

    In response to a request for comment on the Internet scholars’ open letter, a Google spokesman told The Guardian that “it’s helpful to have feedback like this so we can know what information the public would find useful.” In that spirit of helpful feedback, may I make one more suggestion?

    Google’s RTBF Transparency Report (updated on May 14) opens with the line, “In a May 2014 ruling, … the Court of Justice of the European Union found that individuals have the right to ask search engines like Google to remove certain results about them.” Dear Googlers, could you please add a line or two explaining that “removing certain results” does not mean “removing certain stories from the Internet, or even from the Google search engine”?

    Given the anniversary of the decision, many reporters are turning to the Transparency Report for information for their articles. This is a great educational opportunity. While it weighs its response to the important request for more detailed reporting on its actions, Google could, with a line or two, already improve the chances of a more informed debate.

    [I’ve written about the “right to be forgotten” a number of times: chronologically, see “The Right to Be Forgotten, Or the Right to Edit?” “Revisiting the ‘Right to Be Forgotten,’” “The Right to Be Forgotten, The Privilege to Be Remembered” (that one published in Re/code), “On Remembering, Forgetting, and Delisting,” “Luciano Floridi’s Talk at Santa Clara University,” and, most recently, “Removing a Search Result: An Ethics Case Study.”]

    (Photo by Robert Scoble, used without modification under a Creative Commons license.)


  •  Luciano Floridi’s Talk at Santa Clara University

    Tuesday, Mar. 10, 2015


    In the polarized debate about the so-called “right to be forgotten” prompted by an important decision issued by the European Court of Justice last year, Luciano Floridi has played a key role. Floridi, who is Professor of Philosophy and Ethics of Information at the University of Oxford and Director of Research of the Oxford Internet Institute, accepted Google’s invitation to join its advisory council on that topic. While the council was making its way around seven European capitals pursuing both expert and public input, Professor Floridi (the only ethicist in the group) wrote several articles about his evolving understanding of the issues involved—including “Google's privacy ethics tour of Europe: a complex balancing act”; “Google ethics tour: should readers be told a link has been removed?”; “The right to be forgotten – the road ahead”; and “Right to be forgotten poses more questions than answers.”
    Last month, after the advisory council released its much-anticipated report, Professor Floridi spoke at Santa Clara University (his lecture was part of our ongoing “IT, Ethics, and Law” lecture series). In his talk, titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age,” Floridi embedded his analysis of the European court decision into a broader exploration of the nature of memory itself; the role of memory in the European philosophical tradition; and the relationship among memory, identity, forgiveness, and closure. As Floridi explained, the misnamed “right to be forgotten” is really about closure, which is in turn not about forgetting but about “rightly managing your past memory.”
    Here is the video of that talk. We hope that it will add much-needed context to the more nuanced conversation that is now developing around the balancing of the rights, needs, and responsibilities of all of the stakeholders involved in this debate, as Google continues to process the hundreds of thousands of requests for de-linking submitted so far in the E.U.
    If you would like to be added to our “IT, Ethics, and Law” mailing list in order to be notified of future events in the lecture series, please email


  •  On Remembering, Forgetting, and Delisting

    Friday, Feb. 20, 2015
    Over the last two weeks, Julia Powles, who is a law and technology researcher at the University of Cambridge, has published two interesting pieces on privacy, free speech, and the “right to be forgotten”: “Swamplands of the Internet: Speech and Privacy,” and “How Google Determined Our Right to Be Forgotten” (the latter co-authored by Enrique Chaparro). They are both very much worth reading, especially for folks whose work impacts the privacy rights (or preferences, if you prefer) of people around the world.
    Today, a piece that I wrote, which also touches on the “right to be forgotten,” was published in Re/code. It’s titled “The Right to Be Forgotten, the Privilege to Be Remembered.” I hope you’ll read that, too!
    And earlier in February, Google’s Advisory Council issued its much-anticipated report on the issue, which seeks to clarify the outlines of the debate surrounding it and offers suggestions for the implementation of “delisting.”
    One of the authors of that report, Professor Luciano Floridi, will be speaking at Santa Clara University on Wednesday, 2/25, as part of our “IT, Ethics and Law” lecture series.  Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford and the Director of Research of the Oxford Internet Institute. His talk is titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age.” The event is free and open to the public; if you live in the area and are interested in memory, free speech, and privacy online, we hope you will join us and RSVP!
    [And if you would like to be added to our mailing list for the lecture series—which has recently hosted panel presentations on ethical hacking, the ethics of online price discrimination, and privacy by design and software engineering ethics—please email] 
    Photo by Minchioletta, used without modification under a Creative Commons license.
  •  On Spirituality, Social Justice, and Social Media

    Thursday, Jan. 22, 2015


    Christine Cate is a recent graduate of Santa Clara University, where she majored in Public Health Science with a minor in Biology. She has worked at the Markkula Center for Applied Ethics as the Character Education intern for the Character Based Literacy Program since October 2012. A version of this piece first appeared in November 2014 in the blog of the Ignatian Solidarity Network. Christine is a member of the Network’s social media team, focusing on contemporary issues of social justice and spirituality.

    Sometimes, reading the news makes my stomach turn. Every day, headlines about sexual assault, racism, immigration, poverty, or infectious disease are intermingled with stories on Kim Kardashian’s newest racy cover, snow storms on the East Coast, and political speculations. The media is constantly bombarding us with stories ranging in importance from superficial fluff to deeply divisive topics.

    The never-ending availability of news is positive in one sense, as the public is becoming more “informed,” but it also has its consequences. The media is desensitizing us to critical social issues like violence, racism, and sexism, while simultaneously flooding our feeds with stories of naked celebrities trying to break the internet or the most expensive Starbucks drink ever. Inane news stories focusing on things like which celebrity unfollowed whom on Instagram this week distract us from critically observing and understanding the world in which we live. Even political news stories can contain sensational levels of bias that make an objective comprehension of events nearly impossible. And it’s hard to escape; anyone active on social media knows how often links to news articles show up among personal updates and advertisements. Individuals who aren’t constantly connected to social media, rare as they may be, are still saturated with current events from radio, print, and advertising outlets. In today’s society it takes real effort not to know what is going on in the world, and ignorance may be just as harmful as news-intoxication.

    Both the lack of current event literacy and the over-saturation of news are serious problems in our world, as media is one of the most powerful influences in society today. After returning from the Ignatian Family Teach-In that took place in November 2014 in Virginia and Washington, D.C., I found myself reflecting on the role that news and social media play in our lives, and how that impacts both our spirituality and capacity to enact social justice.

    At the Teach-In, in the rare moments between keynote speakers and breakout sessions, large projection screens and television monitors displayed live updates of tweets with the #IFTJ14 hashtag. Multiple photographers scurried around the crowded conference room, and cameras recorded every speaker for the online live stream. The slogan for this year’s Teach-In was “Uprooting Injustice, Sowing Truth, Witnessing Transformation.” The issues of immigration reform, divestment from fossil fuels, and Central American legislation were highlighted, and special recognition was given to the 25th anniversary of the UCA martyrs. Over the course of Saturday and Sunday, conference attendees were challenged to view these issues, as well as other powerful issues like the criminal justice system and racism in society, through a lens of spirituality and social justice. During presentations, audience members tweeted out perspectives or quotes that they felt were especially eye-opening or striking, with their tweets flying out into cyberspace and appearing shortly after on the illuminated screens.

    The reach of the Teach-In is hard to fathom. With an estimated 1,500 attendees, the majority of them active on social media, it wouldn’t be a stretch to say that tens of thousands of people were indirectly exposed to the messages of the Teach-In through media sources. The goal of the Teach-In was to give voice to the voiceless, to highlight areas in our collective history and present realities that need change, and I think that goal was accomplished spectacularly. Social media amplified the messages spoken at the Teach-In, and expanded the audience well beyond the physical attendees.

    But amid the masses of news stories already flooding the eyes and minds of people today, is social media enough to make a change? How many news readers are intentional in what and how they read news stories? How many social media users are intentionally aware of their influence, and use their accounts as platforms to share morally important or challenging news stories? How many people are harnessing the power of social media to identify injustice, spread truth, and incite action for transformation?
    There are plenty of examples of social media bringing faith into daily rhetoric. The hashtag #blessed is popular on Instagram and Twitter, and there are hundreds of accounts that exist solely to post encouraging scripture passages, quotes, or other spirituality-related content. Spirituality and faith have become trendy in certain spheres, with social media users around the world able to share prayers and encourage and inspire from afar. But rarely do faithful social media users (in both senses of the word) connect their spirituality, social media reach, and social justice.
    What would it look like if the culture of mainstream news and social media changed to include the combination of spirituality and social justice? Would the voices of the oppressed and marginalized be heard more? Would people be more willing to confront the uncomfortable problems in our societies and work for positive change? Or would we just become desensitized to it, as we have to news coverage of war and violence? Can the integration of spirituality and social media be a powerful tool to expose injustices, spread truth, and document change?
    I don’t have answers to these questions, not yet. I am now far more aware of my social media presence and interaction with news outlets, and would like to be more intentional in how I read news stories and pass them along to my sphere of influence. I think that by critically analyzing news stories, and calling out the biases to which we have become so accustomed, we can change the way information is transmitted in society. I think that by integrating spirituality and social justice on a conscious level with how we use social media platforms we will be able to uproot injustice, sow truth, and witness transformation.
    (Photo by Werner Kunz, used without modification under a Creative Commons license.)


  •  Should You Watch? On the Responsibility of Content Consumers

    Tuesday, Sep. 23, 2014

    This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely).  They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels.  And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.

    These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
    “Would You Watch [the beheading videos]?” ask CNN and ABC News. “Should You Watch the Ray Rice Assault Video?” asks Shape magazine. “Should We Look—Or Look Away?” asks Canada’s National Post. And, in a broader article about the “consequences and import of ubiquitous, Internet-connected photography” (and video), The Atlantic’s Robinson Meyer reflects on all three of the cases noted above; his piece is titled “Pics or It Didn’t Happen.”
    Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
    What do you think?
    Would it be unethical to watch the videos discussed above? Why?
    Would it be unethical to look at the photos discussed above? Why?
    Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all?  If so, which of them would you watch, or refuse to watch, and why?
    Photo by Matthew Montgomery, used without modification under a Creative Commons license.
  •  The Disconnect: Accountability and Consequences Online

    Sunday, Apr. 28, 2013

    Do we need more editorial control on the Web?  In this brief clip, the Chairman, President, and Chief Executive Officer of Seagate Technology, Stephen Luczo, argues that we do.  He also cautions that digital media channels sometimes unwittingly lend a gloss of credibility to some stories that don't deserve it (as was recently demonstrated in the coverage of the Boston bombing).  Luczo views this as a symptom of a broader breakdown among responsibility, accountability, and consequences in the online world.  Is the much-vaunted freedom of the Internet diminishing the amount of substantive feedback that we get for doing something positive--or negative--for society?

    Chad Raphael, Chair of the Communication Department and Associate Professor at Santa Clara University, responds to Luczo's comments:

    "It's true that the scope and speed of news circulation on the Internet worsens longstanding problems of countering misinformation and holding the sources that generate it accountable.  But journalism's traditional gatekeepers were never able to do these jobs alone, as Senator Joseph McCarthy knew all too well.  News organizations make their job harder with each new round of layoffs of experienced journalists.

    There are new entities emerging online that can help fulfill these traditional journalistic functions, but we need to do more to connect, augment, and enshrine them in online news spaces. Some of these organizations, such as News Trust, crowdsource the problem of misinformation by enlisting many minds to review news stories and alert the public to inaccuracy and manipulation.  Their greatest value may be as watchdogs who can sound the alarm on suspicious material.  Other web sites rely on trained professionals to evaluate political actors' claims.  They can pick up tips from multiple watchdogs, some of them more partisan than others, and evaluate those tips as fair-minded judges.  We need them to expand their scope beyond checking politicians to include other public actors.  The judges could also use some more robust programs for tracking the spread of info-viruses back to their sources, so they can be identified and exposed quickly.  We also need better ways to publicize the online judges' verdicts.

    If search engines and other news aggregators aim to organize the world's information for us, it seems within their mission to let us know what sources, stories, and news organizations have been more and less accurate over time.  Even more importantly, aggregators might start ranking better performing sources higher in their search results, creating a powerful economic incentive to get the story right rather than getting it first.

    Does that raise First Amendment concerns? Sure. But we already balance the right to free speech against other important rights, including reputation, privacy, and public safety.  And the Internet is likely to remain the Wild West until Google, Yahoo!, Digg, and other news aggregators start separating the good, the bad, and the ugly by organizing information according to its credibility, not just its popularity."

    Chad Raphael
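
    Raphael's ranking suggestion can also be sketched in code. As a purely hypothetical illustration (the weighting formula and the numbers are invented, not anything an aggregator is known to use), an aggregator could fold a source's track record into its ranking so that accuracy outweighs popularity:

    ```python
    # A toy ranking function for Raphael's proposal: weight a source's
    # historical accuracy alongside popularity, so getting the story
    # right outranks merely getting it first. Weights are hypothetical.

    def rank_score(popularity: float, accuracy: float, w_accuracy: float = 0.7) -> float:
        """Combine normalized popularity and track-record accuracy (both 0..1)."""
        return (1 - w_accuracy) * popularity + w_accuracy * accuracy

    sources = {
        "fast-but-sloppy.example": rank_score(popularity=0.9, accuracy=0.4),
        "careful-desk.example":    rank_score(popularity=0.5, accuracy=0.95),
    }

    # The more accurate source now ranks higher despite lower popularity:
    for name, score in sorted(sources.items(), key=lambda kv: -kv[1]):
        print(f"{score:.2f}  {name}")
    ```

    Even a crude weighting like this would shift the economics Raphael describes: sources would gain rank by being right, not merely by being first.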