Santa Clara University


Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

  •  Should You Watch? On the Responsibility of Content Consumers

    Tuesday, Sep. 23, 2014

    This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely).  They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels.  And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.

     
    These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
    “Would You Watch [the beheading videos]?” ask CNN and ABC News. “Should You Watch the Ray Rice Assault Video?” asks Shape magazine. “Should We Look—Or Look Away?” asks Canada’s National Post. And, in a broader article about the “consequences and import of ubiquitous, Internet-connected photography” (and video), The Atlantic’s Robinson Meyer reflects on all three of the cases noted above; his piece is titled “Pics or It Didn’t Happen.”
    Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
    What do you think?
    Would it be unethical to watch the videos discussed above? Why?
    Would it be unethical to look at the photos discussed above? Why?
    Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all?  If so, which of them would you watch, or refuse to watch, and why?
     
    Photo by Matthew Montgomery, unmodified, used under a Creative Commons license.
  •  Revisiting the "Right to Be Forgotten"

    Tuesday, Sep. 16, 2014

    Media coverage of the implementation of the European Court decision on de-indexing certain search results has been less pervasive than the initial reporting on the decision itself, back in May.  At the time, much of the coverage had framed the issue in terms of clashing pairs: E.U. versus U.S.; privacy versus free speech.  In The Guardian, an excellent overview of the decision described the “right to be forgotten” as a “cultural shibboleth.”

    (I wrote about it back then, too, arguing that many of the stories about it were rife with mischaracterizations and false dilemmas.)

    Since then, most of the conversation about online “forgetting” seems to have continued on parallel tracks—although with somewhat different clashing camps.  On one hand, many journalists and other critics of the decision (on both sides of the Atlantic) have made sweeping claims about a resulting “Internet riddled with memory holes” and articles “scrubbed from search results”; one commentator wrote that the court decision raises the question, “can you really have freedom of speech if no one can hear what you are saying?”

    On the other hand, privacy advocates (again on both sides of the Atlantic) have been arguing that the decision is much narrower in scope than has generally been portrayed and that it does not destroy free speech; that Google is not, in fact, the sole and ultimate arbiter of the determinations involved in the implementation of the decision; and that even prior to the court’s decision Google search results were selective, curated, and influenced by various countries’ laws.  Recently, FTC Commissioner Julie Brill urged “thought leaders on both sides of the Atlantic to recognize that, just as we both deeply value freedom of expression, we also have shared values concerning relevance in personal information in the digital age.”

    Amid this debate, in late June, Google developed and started to use its own process for complying with the decision.  But Google has also convened an advisory council that will take several months to consider evidence (including public input from meetings held in seven European capitals--Madrid, Rome, Paris, Warsaw, Berlin, London, and Brussels), before producing a report that would inform the company’s current efforts.  Explaining the creation of the council, the company noted that it is now required to balance “on a case-by-case basis, an individual’s right to be forgotten with the public’s right to information,” and added, “We want to strike this balance right. This obligation is a new and difficult challenge for us, and we’re seeking advice on the principles Google ought to apply…. That’s why we’re convening a council of experts.”

    The advisory council (to whom any and all can submit comments) has been posting videos of the public meetings online. However, critics have taken issue with the group’s members (selected by Google itself), with the other presenters invited to participate at the meetings (again screened and chosen by Google), and, most recently, with its alleged rebuffing of questions from the general public. So far, many of the speakers invited to the meetings have raised questions about the appropriateness of the decision itself.

    In this context, one bit of evidence makes its own public comment:  Since May, according to Google, the company has received more than 120,000 de-indexing requests. Tens of thousands of Europeans have gone through the trouble of submitting a form and the related information necessary to request that a search of their name not include certain results.  

    And, perhaps surprisingly (especially given most of the coverage of the decision in the U.S.), a recent poll of American Internet users by an IT security research firm found that a “solid majority” of them—61%--were “in favor of a ‘right to be forgotten’ law for US citizens.”

    But this, too, may speak differently to different audiences. Some will see it as evidence of a vast pent-up need that had had no outlet until now. Others will see it as evidence of the tens of thousands of restrictions and “holes” that will soon open up in the Web.

    So—should we worry about the impending “memory holes”?

    In a talk entitled “The Internet with a Human Face,” American Web developer Maciej Ceglowski argues that “the Internet somehow contrives to remember too much and too little at the same time.” He adds,

    in our elementary schools in America, if we did something particularly heinous, they had a special way of threatening you. They would say: “This is going on your permanent record.”

    It was pretty scary. I had never seen a permanent record, but I knew exactly what it must look like. It was bright red, thick, tied with twine. Full of official stamps.

    The permanent record would follow you through life, and whenever you changed schools, or looked for a job or moved to a new house, people would see the shameful things you had done in fifth grade. 

    How wonderful it felt when I first realized the permanent record didn’t exist. They were bluffing! Nothing I did was going to matter! We were free!

    And then when I grew up, I helped build it for real.

    But while a version of the “permanent record” is now real, it is also true that much content on the Internet is already ephemeral. The phenomenon of “link rot,” for example, affects even important legal documents.  And U.K. law professor Paul Bernal has argued that we should understand the Internet as “organic, growing and changing all the time,” and that it’s a good thing that this is so. “Having ways to delete information [online] isn’t the enemy of the Internet of the people,” Bernal writes, “much as an enemy of the big players of the Internet.”

    Will Google, one of the “big players on the internet,” hear such views, too? It remains to be seen; Google’s “European grand tour,” as another U.K. law professor has dubbed it, will conclude on November 4th.

    Photograph by derekb, unmodified, under a Creative Commons license. https://creativecommons.org/licenses/by-nc/2.0/legalcode

  •  Singing in the Shower: Privacy in the Age of Facebook

    Tuesday, Sep. 9, 2014
     
    It is a truth universally acknowledged, that the amount and kinds of information that people post on Facebook mean that people don’t care about privacy.
     
    Like many other “truths” universally acknowledged, this one is wrong, in a number of ways.
     
    First, not everybody is on Facebook. So to justify, say, privacy-invasive online behavioral advertising directed at everyone on the Internet by pointing to the practices of a subset of Internet users is wrong.
     
    Second, it’s wrong to generalize about “Facebook users,” too. Many Facebook users take advantage of various privacy settings and use the platform to interact only with friends and family members. So it makes sense for them to post on Facebook the kind of personal, private things that people have always shared with friends and family.
     
    Still—most Facebook users have hundreds of “friends”: some are close; some are not; some are relatives barely known; some are friends who have grown distant over time. Does it make sense to share intimate things with all of them?
     
    There are several answers to that, too. The privacy boundaries that people draw around themselves vary. What may seem deeply intimate and private to one person might not seem that way to someone else—and vice versa. That doesn’t mean that people who post certain things “don’t care about privacy”—it means they would define “private” differently than others would.  And even when people do post things that they would consider intimate on Facebook, that doesn’t mean they post everything. Some people like singing in choirs; that doesn’t mean they’d be OK with being spied on while singing in the shower.
     
    Third, we need to acknowledge the effects of the medium itself. Take, say, a Facebook user who has 200 “friends.” Were all those friends to be collected in one room (the close and the distant friends, the old and the recently befriended, the co-workers, the relatives, the friends of friends whose “friend requests” were accepted simply to avoid awkwardness, etc.), and were the user to be given a microphone, he or she might refrain from announcing what he ate for dinner, or reciting a song lyric that ran through her mind, or revealing an illness or a heartbreak, or subjecting the entire audience to a slide show of vacation pictures. But for the Facebook user sitting alone in a room, facing a screen, the audience is at least partially concealed. He or she knows that it’s there—is even hoping for some comments in response to posts—or at least some “likes”… But the mind conjures, at best, a subset of the tens or hundreds of those “friended.” If that. Because there is, too, something about the act of typing a “status update” that echoes, for some of us, the act of writing in a journal. (Maybe a diary with a friendly, ever-shifting companion Greek chorus?) The medium misleads.
     
    So no, people who post on Facebook are not being hypocritical when they say (as most people do) that they care about privacy. (It bears noting that in a recent national survey by the Pew Research Center, 86% of internet users said they had “taken steps online to remove or mask their digital footprints.”)
     
    It’s high time to let the misleading cliché about privacy in the age of Facebook go the way of other much-repeated statements that turned out to be neither true nor universally acknowledged. And it’s certainly time to stop using it as a justification for practices that violate privacy. If you haven’t been invited to join the singer in the shower, stay out.
     
  •  More to Say about Internet Ethics

    Tuesday, Sep. 2, 2014
     
    Welcome back!
     
    As the summer of 2014 draws to a close, people are debating the merits of hashtag activism (and pouring buckets of ice water on their heads); Facebook is appending a “Satire” tag to certain stories; new whistleblowers are challenging pervasive governmental surveillance online; and Twitter is struggling to remove posts that include graphic images of the tragic beheading of a U.S. journalist. The Internet continues to churn out ethics-related questions.  New issues keep arising, new facets of “old” issues are continually revealed, and Silicon Valley is frequently mistakenly perceived as a monolithic entity with little interest in the ethical ramifications of the technology it produces.
     
    But our community is neither monolithic nor uninterested.  Back in 2013, for example, the Internet Ethics program at the Markkula Center for Applied Ethics started a blog called “Internet Ethics: Views from Silicon Valley,” with the goal of offering 10 brief videos in which Silicon Valley pioneers and leaders would address some key ethical issues related to the role of the Internet in modern life. While that project was completed (and those videos, featuring the co-founders of Apple and Adobe Systems, the Executive Chairman of NetApp, the CEOs of VMware and Seagate, and more, remain available on our website and our YouTube channel), we have decided to restart the blog.
     
    We hope to be a platform for a multiplicity of Silicon Valley voices and demonstrate that applied ethics is everybody’s business—not just the purview of philosophers or philanthropists.
     
    We aim to blog about once a week, with entries by various staff members of the Markkula Center for Applied Ethics, as well as other Santa Clara University faculty members (and perhaps some students, too!). We look forward to your comments, and we hope to host a robust conversation around such topics as big data ethics, online privacy, the Internet of Things, Net neutrality, the “right to be forgotten,” cyberbullying, the digital divide, sentiment analysis, the impact of social media, online communities, digital journalism, diversity in tech, and more. We will also post advance notice of various ethics-related events taking place on campus, free and open to the public.
     
    If you’d like to be notified as new entries are posted, please subscribe today!  (There’s an email subscription box to the right, or an RSS feed at the top of the blog.) You can also follow the Internet Ethics program on Twitter at @IEthics, and the Center overall either on Facebook or on Twitter at @mcaenews.
     
    And to those of you who had been subscribed already, again, welcome back!
     
     
     
  •  The Disconnect: Accountability and Consequences Online

    Sunday, Apr. 28, 2013

    Do we need more editorial control on the Web?  In this brief clip, the Chairman, President, and Chief Executive Officer of Seagate Technology, Stephen Luczo, argues that we do.  He also cautions that digital media channels sometimes unwittingly lend a gloss of credibility to some stories that don't deserve it (as was recently demonstrated in the coverage of the Boston bombing).  Luczo views this as a symptom of a broader breakdown among responsibility, accountability, and consequences in the online world.  Is the much-vaunted freedom of the Internet diminishing the amount of substantive feedback that we get for doing something positive--or negative--for society?

    Chad Raphael, Chair of the Communication Department and Associate Professor at Santa Clara University, responds to Luczo's comments:

    "It's true that the scope and speed of news circulation on the Internet worsens longstanding problems of countering misinformation and holding the sources that generate it accountable.  But journalism's traditional gatekeepers were never able to do these jobs alone, as Senator Joseph McCarthy knew all too well.  News organizations make their job harder with each new round of layoffs of experienced journalists.

    There are new entities emerging online that can help fulfill these traditional journalistic functions, but we need to do more to connect, augment, and enshrine them in online news spaces. Some of these organizations, such as News Trust, crowdsource the problem of misinformation by enlisting many minds to review news stories and alert the public to inaccuracy and manipulation.  Their greatest value may be as watchdogs who can sound the alarm on suspicious material.  Other web sites, such as FactCheck.org, rely on trained professionals to evaluate political actors' claims.  They can pick up tips from multiple watchdogs, some of them more partisan than others, and evaluate those tips as fair-minded judges.  We need them to expand their scope beyond checking politicians to include other public actors.  The judges could also use some more robust programs for tracking the spread of info-viruses back to their sources, so they can be identified and exposed quickly.  We also need better ways to publicize the online judges' verdicts. 

    If search engines and other news aggregators aim to organize the world's information for us, it seems within their mission to let us know what sources, stories, and news organizations have been more and less accurate over time.  Even more importantly, aggregators might start ranking better performing sources higher in their search results, creating a powerful economic incentive to get the story right rather than getting it first.

    Does that raise First Amendment concerns? Sure. But we already balance the right to free speech against other important rights, including reputation, privacy, and public safety.  And the Internet is likely to remain the Wild West until Google, Yahoo!, Digg, and other news aggregators start separating the good, the bad, and the ugly by organizing information according to its credibility, not just its popularity."

    Chad Raphael

  •  Internet Access Is a Privilege

    Sunday, Apr. 21, 2013

    What would our lives be like if we no longer had access to the Internet?  How much good would we lose?  How much harm would we be spared?  Is Internet access a right?  These days, whether or not we think of access to it as a right, many of us take the Internet for granted.  In this brief video, Apple co-founder A. C. "Mike" Markkula Jr. looks at the big picture, argues that Internet use is a privilege, and considers ways to minimize some of the harms associated with it, while fully appreciating its benefits.

    In an op-ed published in the New York Times last year, Vint Cerf (who is often described as one of the "fathers of the Internet" and is currently a vice president and chief Internet evangelist for Google) argued along similar lines:

    "As we seek to advance the state of the art in technology and its use in society, [engineers] must be conscious of our civil responsibilities in addition to our engineering expertise.  Improving the Internet is just one means, albeit an important one, by which to improve the human condition. It must be done with an appreciation for the civil and human rights that deserve protection--without pretending that access itself is such a right."

  •  Protecting Privacy and Society

    Monday, Apr. 15, 2013

    Consumer and business data is increasingly moving to the "cloud," and people are clamoring for protection of that data.  However, as Symantec's President, CEO, and Chairman of the Board Steve Bennett points out in this clip, "maximum privacy" is really anonymity, and some people use anonymity as a shield for illegal and unethical behavior.  How should cloud service providers deal with this dilemma?  What is their responsibility to their customers, and to society at large?  How should good corporate citizens respond when they are asked to cooperate with law enforcement? 

    Providers of cloud services are all faced with this dilemma; as Ars Technica recently reported, for example, Verizon took action when it discovered child pornography in one of its users' accounts.

  •  The Need for Accuracy Online

    Monday, Apr. 8, 2013

    The Internet has surely surpassed the expectations of its pioneers.  As a communication medium, it is unparalleled in scope and impact.  However, the ease of publication in the Web 2.0 world has created new ethical dilemmas.  In this brief video, Adobe Chairman of the Board Charles Geschke points out the gap between what Internet users expect to receive (i.e., factual and accurate information) and what they too often get instead.  Is it the user's responsibility to judge which sources to access on the Web, and how much to rely on them?  Or is it the publishers of information who have a duty to strive to be accurate?

    Below, Sally Lehrman (Knight Ridder/San Jose Mercury News Endowed Chair in Journalism and the Public Interest at Santa Clara University, and a Markkula Center for Applied Ethics Scholar) responds to Geschke's comments.  Add your own responses in the "Comments" section!

    "The Internet has certainly opened up opportunities for anyone to publish whatever they want.  In some ways, the proliferation of voices is good.  It provides access to ideas and perspectives that traditional news gatherers might miss. It also can put pressure on news organizations to get things right.  But, as Mr. Geschke points out, it's hard to tell when the information packaged like news on the Internet is really just marketing or propaganda.  That's why brands like the New York Times, Wall Street Journal, and local sites such as Patch.com and your own local newspaper are valuable.  Their reporting can be trusted.

    Ethical traditions in journalism ensure multiple sources and careful attention to facts.  But many people have come to expect their news for free, and feet-on-the-ground reporting and fact-checking are expensive.  That makes it very difficult for true news operations to survive.  Unfortunately, we're seeing a decline in quality as a result.  The public must learn to discern--and value--quality news.  One way is to learn more about traditional journalism ethics guidelines, found (on the Internet!) on sites such as www.spj.org/ethics.asp and www.rtdna.org/channel/ethics."

    Sally Lehrman

  •  Privacy Tradeoffs Online

    Tuesday, Apr. 2, 2013

    New technologies often bring both benefits and unintended consequences.  The same is true of laws aimed at new technologies.  In this brief clip, NetApp's Executive Chairman Dan Warmenhoven discusses the development of GPS-tracking technology and the ethical issues associated with the aggregation of GPS data into large databases.  Using HIPAA as an example, he then argues that data protection efforts can go too far, leaving us with inefficient outcomes.  How do we strike the right balance between benefits and harms?

  •  The Right to Be Forgotten

    Monday, Mar. 25, 2013

    "Total interconnectedness," very cheap data storage, and powerful search technologies come together to create a new set of ethical questions.  Do we have a right to access and correct the data in our profiles?  Do we have a right to be "forgotten" by the Internet?  In this brief video, Reputation.com co-founder Owen Tripp asks us to consider the impact of the Internet's long memory on those among us who are most vulnerable.  Below, Evan Selinger--Associate Professor in the Department of Philosophy at the Rochester Institute of Technology--responds to Tripp's comments:

    "Owen Tripp is moved by the ideas driving the 'right to be forgotten' movements. For the reasons he gives, we all should be, too.  In the age of big data, the permanent record threat we're confronted with as kids takes on a new and more ominous meaning.  Our digital dossiers expand all the time, in both obvious and unclear ways, and through processes that are transparent as well as surreptitious.  Now that unprecedented amounts of information are readily available about what we've done and what makes us tick, lamentable incidents and statements can follow us everywhere with the crushing weight of Jacob Marley's chains. With the past always present, time--as Shakespeare's Hamlet exclaimed--is out of joint.

    When citizens become open books, it becomes awfully tempting to manage heightened publicity with overly cautious and risk-averse behavior.  With enough fear, we'll lose out on more than opportunity. Our character can be diminished, perhaps with timorousness shifting from vice to virtue.  As David Hoffman, Director of Security Policy and Global Privacy Officer at Intel Corporation, contends, society thus needs solutions that safeguard a limited "right to fail" without encouraging reckless or anti-social behavior, or the problems that come from historical amnesia or revisionism.  At stake is nothing less than securing adequate space for social experimentation, the "breathing room" (to borrow a phrase from privacy scholar Julie Cohen) that enables people to learn and grow.

    While the right to be forgotten appears to be gaining traction in Europe, there are numerous challenges ahead, not least because the road from privacy interest to privacy right can be long and winding.  In the United States concern has been expressed over how legal enforcement of a robust right for individuals to control personal information could run afoul of First Amendment speech protections and squash innovation by subjecting companies like Google and Facebook to bureaucratic procedures that, practically speaking, are unworkable, and further burdened by the prospect of overly punitive sanctions.  Furthermore, as numerous scholars suggest, the notion of so-called "personal information" is hard to pin down in an age of networked citizens where lots of data involves or affects other people, implicating what law professor Sonja West aptly calls the "story of us." Finally, while the market can indeed provide helpful services, we shouldn't lose sight of the fact that when privacy protection is commodified, greater burden is placed on lower income people.  Freedom and peace of mind become purchasing power privilege."

    Evan Selinger -- Twitter: @evanselinger
