
Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “internet.”
  •  Death and Facebook

    Wednesday, Jul. 29, 2015

    A number of recent articles have noted Facebook’s introduction of a feature that allows users to designate “legacy contacts” for their accounts. In an extensive examination titled “Where Does Your Facebook Account Go When You Die?,” writer Simon Davis explains that, until recently, when Facebook was notified that one of its users had died, the company would “memorialize” that person’s account (in part in order to keep the account from being hacked). What “memorialization” implies has changed over time. Currently, according to Davis, memorialized accounts retain the privacy and audience settings last set by the user, while the contact information and ability to post status updates are stripped out. Since February, however, users can also designate a “legacy contact” person who “can perform certain functions on a memorialized account.” As Davis puts it, “Now, a trusted third party can approve a new friend request by the distraught father or get the mother’s input on a different profile image.”

    Would you give another person the power to add new “friends” to your account or change the profile image after your death? That raises a more basic question: what is a Facebook account?

    In his excellent article, Davis cites Vanessa Callison-Burch, the Facebook product manager who is primarily responsible for the newly-added legacy account feature. Explaining some of the thinking behind it, she argues that a Facebook account “is a really important part of people’s identity and is a community space. Your Facebook account is incredibly personalized. It’s a community place for people to assemble and celebrate your life.” She adds that “there are certain things that that community of people really need to be supported in that we at Facebook can’t make the judgment call on.”

    While I commend Facebook for its (new-found?) modesty in feature design, and its recognition that the user’s wishes matter deeply, I find myself wondering about that description of a Facebook account as “a community space.” Is it? I’ve written elsewhere that posting on Facebook “echoes, for some of us, the act of writing in a journal.” A diary is clearly not a “community space.” On Facebook, however, commenters on one user’s posts get to comment on other commenters’ comments, and entire conversations develop among a user’s “friends.” Sometimes friends of friends “friend” each other.  So, yes, a community is involved. But no, the community’s “members” don’t get to decide what your profile picture should be, or whether or not you should “friend” your dad. Who should?

    In The Guardian, Stuart Heritage explores that question in a much lighter take on the subject of “legacy contacts,” titled “To my brother I leave my Facebook account ... and any chance of dignity in death.” As he makes clear, “nominating a legacy contact is harder than it looks.”

    Rather than simply putting that responsibility on a trusted person, Simon Davis suggests that Facebook should give users the opportunity to create an advance directive with specific instructions about their profile: “who should be able to see it, who should be able to send friend requests, and even what kind of profile picture or banner image the person would want displayed after death.” That alternative would respect the user’s autonomy even more than the current “legacy contact” does.

    But there is another option that perhaps respects that autonomy the most: Facebook currently also allows a user to check a box specifying that his or her account be simply deleted after his or her death. Heritage writes that “this is a hard button to click. It means erasing yourself.” Does it? Maybe it just signals a different perspective on Facebook. Maybe, for some, a Facebook account is neither an autobiography nor a guest book. Maybe the users who choose that delete option are not meanly destroying a “community space,” but ending a conversation.

    Photo by Lori Semprevio, used without modification under a Creative Commons license.

  •  IoT: The Internet of Trees

    Friday, Jul. 17, 2015

    Ethics is about living the good life, and, for many of us, trees are an important part of that good life (and not just because we like breathing).  This becomes clear in an article titled “When You Give a Tree an Email Address,” in which The Atlantic’s Adrienne LaFrance writes about a project undertaken by the city of Melbourne.  As LaFrance explains, “[o]fficials assigned the trees ID numbers and email addresses in 2013 as part of a program designed to make it easier for citizens to report problems like dangerous branches.”  As it turned out, however, quite a few citizens chose, instead, to write messages addressed directly to particular trees.

    Some of the messages quoted by LaFrance are quite moving.  On May 21, 2015, for example, a message to “Golden Elm, Tree ID 1037148” read, “I’m so sorry you’re going to die soon. It makes me sad when trucks damage your low hanging branches. Are you as tired of all this construction work as we are?” Other messages are funny. (All, by definition, are whimsical. How else do you write to a tree?) But the best part, perhaps, is that the trees sometimes write back.  For example, in January 2015, a Willow Leaf Peppermint answered a query about its gender. “Hello,” it began,

    I am not a Mr or a Mrs, as I have what’s called perfect flowers that include both genders in my flower structure, the term for this is Monoicous. [Even trees generate run-ons.] Some trees species have only male or female flowers on individual plants and therefore do have genders, the term for this is Dioecious. Some other trees have male flowers and female flowers on the same tree. It is all very confusing and quite amazing how diverse and complex trees can be. 

    Kind regards,

    Mr and Mrs Willow Leaf Peppermint (same Tree)

    Should we rethink the possibilities of the acronym “IoT”? With the coming of the much-anticipated “Internet of Things,” will trees eventually notify city officials directly when they’re about to tip over, when a branch has scraped a car, or when a good percentage of their fruit is ripe?

    In the meantime, is it pessimistic to worry that hackers might break into the trees’ email accounts and start sending offensive responses, or distribute spam instead of pollen?

    For now, the article made me think of a famous poem by Joyce Kilmer, “Trees,” which was published in 1913. With apologies, here is my take on the Internet of Trees:

     

    I thought that I would never see

    An email written by a tree.

     

    A tree whose hungry eyes are keen

    Upon a gadget’s glowing screen;

     

    A tree that doesn’t choose to Skype

    But lifts her leafy arms to type;

     

    A tree that may in Summer share

    Selfies with robins in her hair;

     

    Within whose bosom drafts might end;

    Who intimately lives with “Send.”

     

    Poems are made by fools like me,

    But emails come, now, from a tree.

     

    Photo by @Doug88888, used without modification under a Creative Commons license.

  •  Internet Values?

    Tuesday, Jun. 30, 2015

    "1.     The Internet’s architecture is highly unusual.

    2.       The Internet’s architecture reflects certain values.

    3.       Our use of the Net, based on that architecture, strongly encourages the adoption of those values.

    4.       Therefore, the Internet tends to transform us and our institutions in ways that reflect those values.

    5.       And that’s a good thing."

    The quoted list above comprises the premises that undergird an essay by David Weinberger, recently published in The Atlantic, titled “The Internet That Was (And Still Could Be).” Weinberger, who is the co-author of The Cluetrain Manifesto (and now a researcher at Harvard’s Berkman Center for Internet & Society), argues that the Internet’s architecture “values open access to information, the democratic and permission-free ability to read and to post, an open market of ideas and businesses, and provides a framework for bottom-up collaboration among equals.” However, he notes, in what he calls the “Age of Apps” most Internet users don’t directly encounter that architecture:

    In the past I would have said that so long as this architecture endures, so will the transfer of values from that architecture to the systems that run on top of it. But while the Internet’s architecture is still in place, the values transfer may actually be stifled by the many layers that have been built on top of it.

    Moreover, if people think, for example, that the Internet is Facebook, then the value transfer may be not just stifled but shifted: what they may be absorbing are Facebook’s values, not the Internet’s. However, Weinberger describes himself as still ultimately optimistic about the beneficial impact of the Internet. In light of the layers that obscure its architecture and its built-in values, he offers a new call to action: “As the Internet’s architecture shapes our behavior and values less and less directly, we’re going to have to undertake the propagation of the values embedded in that architecture as an explicit task” (emphasis added).

    It’s interesting to consider this essay in conjunction with the results of a poll reported recently by the Pew Research Center. In a study of people from 32 developing and emerging countries, the Pew researchers found that

    [t]he aspect of the internet that generates the greatest concern is its effect on a nation’s morals. Overall, a median of 42% say the internet has a negative influence on morality, with 29% saying it has a positive influence. The internet’s influence on morality is seen as the most negative of the five aspects tested in 28 of the 32 countries surveyed. And in no country does a majority say that the influence of the internet on morality is a positive.

    It should be noted at the outset that not all of those polled described themselves as internet users—and that Pew reports that a “major subgroup that sees the internet positively is internet users themselves” (though, as a different study shows, millions of people in some developing countries mistakenly identify themselves as non-users when they really do use the Internet).

    Interesting distinctions emerge among the countries surveyed, as well. In Nigeria, Pew reports, 50% of those polled answered that “[i]ncreasing use of the Internet in [their] country has had a good influence on morality.” In Ghana, only 29% did. In Vietnam, 40%. In China, 25%. In Tunisia, 17%. In Russia, 13%.

    The Pew study, however, did not attempt to provide a definition of “morality” before posing that question. It would have been interesting (and would perhaps be an interesting future project) to ask users in other countries what they perceive as the values embedded in the Internet. Would they agree with Weinberger’s list? And how might they respond to an effort to clarify and propagate those values explicitly, as Weinberger suggests? For non-users of the Internet, in other countries, is the motivation purely a lack of access, or is it a rejection of certain values, as well?

    If a clash of values is at issue, it involves a generational aspect, too: the Pew report notes that in many of the countries surveyed, “young people (18-34 years old) are much more likely to say that the internet has a good influence compared with older people (ages 35+).” This, the report adds, “is especially true on its influence of morality.”

    Photo by Blaise Alleyne, used without modification under a Creative Commons license.

  •  "Harrison Bergeron" in Silicon Valley -- Part II

    Friday, May. 22, 2015

    A few weeks ago, I wrote about Kurt Vonnegut’s short story “Harrison Bergeron.” In the world of that story, the year is 2081, and, in an effort to render all people “equal,” the government imposes handicaps on all those who are somehow better than average. One of the characters, George, whose intelligence is “way above normal,” has “a little mental handicap radio in his ear.”

    As George tries to concentrate on something,

    [a] buzzer sounded in George’s head. His thoughts fled in panic, like bandits from a burglar alarm.

    "That was a real pretty dance, that dance they just did," said Hazel.

    "Huh" said George.

    "That dance-it was nice," said Hazel.

    "Yup," said George. He tried to think a little about the ballerinas. … But he didn't get very far with it before another noise in his ear radio scattered his thoughts.

    George winced. So did two out of the eight ballerinas.

    Hazel saw him wince. Having no mental handicap herself, she had to ask George what the latest sound had been.

    "Sounded like somebody hitting a milk bottle with a ball peen hammer," said George.

    "I'd think it would be real interesting, hearing all the different sounds," said Hazel a little envious. "All the things they think up."

    "Um," said George.

    "Only, if I was Handicapper General, you know what I would do?" said Hazel. … "I'd have chimes on Sunday--just chimes. Kind of in honor of religion."

    "I could think, if it was just chimes," said George.

    Re-reading the story, I thought about the work of the late professor Cliff Nass, whose “pioneering research into how humans interact with technology,” as the New York Times described it, “found that the increasingly screen-saturated, multitasking modern world was not nurturing the ability to concentrate, analyze or feel empathy.”

    If we have little “mental handicap radios” in our ears, these days, it’s usually because we put them there—or on our eyes, or wrists, or just in our hands—ourselves (though some versions are increasingly required by employers or schools). Still, like the ones in the story, they are making it more difficult for all of us to focus on key tasks, to be present for our loved ones, to truly take in and respond to our surroundings.

    In anticipation of the Memorial Day weekend, I wish you a few days of lessened technological distractions. And, if you have some extra time, you might want to read some of Professor Nass’ research.

     

  •  How Google Can Illuminate the "Right to Be Forgotten" Debate: Two Requests

    Thursday, May. 14, 2015

     

    Happy Birthday, Right-to-Have-Certain-Results-De-Listed-from-Searches-on-Your-Own-Name-,-Depending-on-the-Circumstances!

    It’s now been a year since the European Court of Justice shocked (some) people with a decision that has mistakenly been described as announcing a “right to be forgotten.”

    Today, 80 Internet scholars sent an open letter to Google asking the company to release additional aggregate data about its implementation of the court decision. As they explain,

    The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.

    We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the [“right to be forgotten”] in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.

    Although Google has released a Transparency Report with some aggregate data and some examples of the delinking decisions reached so far, the signatories find that effort insufficient. “Beyond anecdote,” they write,

    we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.

    For now, they add, the participants in the delisting debate “do battle in a data vacuum, with little understanding of the facts.”

    More detailed data is certainly much needed. What remains striking, in the meantime, is how little understanding of the facts many people continue to have about what the decision itself mandates. A year after the decision was issued, an associate editor for Engadget, for example, still writes that, as a result of it, “if Google or Microsoft hides a news story, there may be no way to get it back.” 

    To “get it back”?! Into the results of a search on a particular person’s name? Because that is the entire scope of the delinking involved here—when the delinking does happen.
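
    To make that scope concrete, here is a minimal sketch, in Python and with invented data and names, of what name-scoped delisting means. It is purely illustrative and reflects nothing about Google’s actual systems; the point is only that a “delisted” page stays in the index and still surfaces for every other query, and is suppressed solely in results for a search on the delisted name.

```python
# Purely illustrative sketch of name-scoped delisting: invented data,
# not Google's actual implementation. A "delisted" page is not removed
# from the index; it is only suppressed for searches on the person's name.

# Hypothetical index of (url, text) pairs.
INDEX = [
    ("news.example/story-123", "Jane Doe settles decade-old debt case"),
    ("news.example/story-456", "Local bakery wins award"),
]

# Hypothetical delisting table: name query -> URLs suppressed for that name.
DELISTED = {"jane doe": {"news.example/story-123"}}

def search(query: str) -> list[str]:
    """Return matching URLs, minus any name-scoped suppressions."""
    q = query.lower()
    hits = [url for url, text in INDEX if q in text.lower()]
    return [url for url in hits if url not in DELISTED.get(q, set())]

# A search on the person's name omits the story...
assert "news.example/story-123" not in search("Jane Doe")
# ...but any other query still finds it; nothing has left the index.
assert "news.example/story-123" in search("debt case")
```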

    In response to a request for comment on the Internet scholars’ open letter, a Google spokesman told The Guardian that “it’s helpful to have feedback like this so we can know what information the public would find useful.” In that spirit of helpful feedback, may I make one more suggestion?

    Google’s RTBF Transparency Report (updated on May 14) opens with the line, “In a May 2014 ruling, … the Court of Justice of the European Union found that individuals have the right to ask search engines like Google to remove certain results about them.” Dear Googlers, could you please add a line or two explaining that “removing certain results” does not mean “removing certain stories from the Internet, or even from the Google search engine”?

    Given the anniversary of the decision, many reporters are turning to the Transparency Report for information for their articles. This is a great educational opportunity. With a line or two, while it weighs its response to the important request for more detailed reporting on its actions, Google could already improve the chances of a more informed debate.

    [I’ve written about the “right to be forgotten” a number of times: chronologically, see “The Right to Be Forgotten, Or the Right to Edit?” “Revisiting the ‘Right to Be Forgotten,’” “The Right to Be Forgotten, The Privilege to Be Remembered” (that one published in Re/code), “On Remembering, Forgetting, and Delisting,” “Luciano Floridi’s Talk at Santa Clara University,” and, most recently, “Removing a Search Result: An Ethics Case Study.”]

    (Photo by Robert Scoble, used without modification under a Creative Commons license.)

     

  •  Harrison Bergeron in Silicon Valley

    Wednesday, Apr. 1, 2015
     
    Certain eighth graders I know have been reading “Harrison Bergeron,” so I decided to re-read it, too. The short story, by Kurt Vonnegut, describes a dystopian world in which, in an effort to make all people equal, a government imposes countervailing handicaps on all citizens who are somehow naturally gifted: beautiful people are forced to wear ugly masks; strong people have to carry around weights in proportion to their strength; graceful people are hobbled; etc. In order to make everybody equal, in other words, all people are brought to the lowest common denominator. The title character, Harrison Bergeron, is particularly gifted and therefore particularly impaired. As Vonnegut describes him,
     
    … Harrison's appearance was Halloween and hardware. Nobody had ever born heavier handicaps. He had outgrown hindrances faster than the H-G men could think them up. Instead of a little ear radio for a mental handicap, he wore a tremendous pair of earphones, and spectacles with thick wavy lenses. The spectacles were intended to make him not only half blind, but to give him whanging headaches besides.
    Scrap metal was hung all over him. Ordinarily, there was a certain symmetry, a military neatness to the handicaps issued to strong people, but Harrison looked like a walking junkyard. In the race of life, Harrison carried three hundred pounds.
    And to offset his good looks, the H-G men required that he wear at all times a red rubber ball for a nose, keep his eyebrows shaved off, and cover his even white teeth with black caps at snaggle-tooth random.
     
    In classroom discussions, the story is usually presented as a critique of affirmative action. Such discussions miss the fact that affirmative action aims to level the playing field, not the players.
     
    In the heart of Silicon Valley, in a land that claims to value meritocracy but ignores the ever more sharply tilted playing field, “Harrison Bergeron” seems particularly inapt. But maybe it’s not. Maybe it should be read, but only in conjunction with stories like CNN’s recent interactive piece titled “The Poor Kids of Silicon Valley.” Or the piece by KQED’s Rachel Myrow, published last month, which notes that 30% of Silicon Valley’s population lives “below self-sufficiency standards,” and that “the income gap is wider than ever, and wider in Silicon Valley than elsewhere in the San Francisco Bay Area or California.”
     
    What such (nonfiction, current) stories make clear is that we are, in fact, already hanging weights and otherwise hampering people in our society. It’s just that we don’t do it to those particularly gifted; we do it to the most vulnerable ones. The kids who have to wake up earlier because they live far from their high school and have to take two buses since their parents can’t drive them to school, and who end up sleep-deprived and less able to learn—the burden is on them. The kids who live in homeless shelters and whose brains might be impacted, long-term, by the stress of poverty—the burden is on them. The people who work as contractors with limited or no benefits—the burden is on them. The parents who have to work multiple jobs, can’t afford to live close to work, and have no time to read to their kids—the burden is on all of them.
     
    In a Wired article about a growing number of Silicon Valley “techie” parents who are opting to home-school their kids, Jason Tanz expresses some misgivings about the subject but adds,
     
    My son is in kindergarten, and I fear that his natural curiosity won’t withstand 12 years of standardized tests, underfunded and overcrowded classrooms, and constant performance anxiety. The Internet has already overturned the way we connect with friends, meet potential paramours, buy and sell products, produce and consume media, and manufacture and deliver goods. Every one of those processes has become more intimate, more personal, and more meaningful. Maybe education can work the same way.
     
    Set aside the question of whether those processes have indeed become more intimate and meaningful; let’s concentrate on a different question about the possibility that, with the help of the Internet, education might “work the same way”: For whom?
     
    Are naturally curious and creative kids being hampered by standardized tests and underfunded and overcrowded classrooms? Well then, in Silicon Valley, some of those kids will be homeschooled. The Wired article quotes a homeschooling parent who optimistically foresees a day “when you can hire a teacher by the hour, just as you would hire a TaskRabbit to assemble your Ikea furniture.” And what happens to the kids of the TaskRabbited teacher? If Harrison Bergeron happens to be one of them, he will be further hampered, and nobody will check whether the weight of his burden is proportional to anything.
     
    Meritocracy is a myth when social inequality becomes as vast as it has become in Silicon Valley. Teaching “Harrison Bergeron” to eighth graders in this environment is a cruel joke.
     
    (Photo by Ken Banks, cropped, used under a Creative Commons license.)
  •  Trust, Self-Criticism, and Open Debate

    Tuesday, Mar. 17, 2015
    President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)

    Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity.  Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats.  In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”

    Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”

    Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”

    Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”

    Many commentators have observed that what most hampers the push toward increased collaboration between the public and private sectors on cybersecurity is a lack of trust. Things are not likely to get better as long as that anger and lack of trust are left unaddressed. If President Obama is right in noting that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.

     

  •  On Remembering, Forgetting, and Delisting

    Friday, Feb. 20, 2015
     
    Over the last two weeks, Julia Powles, who is a law and technology researcher at the University of Cambridge, has published two interesting pieces on privacy, free speech, and the “right to be forgotten”: “Swamplands of the Internet: Speech and Privacy,” and “How Google Determined Our Right to Be Forgotten” (the latter co-authored by Enrique Chaparro). They are both very much worth reading, especially for folks whose work impacts the privacy rights (or preferences, if you prefer) of people around the world.
     
    Today, a piece that I wrote, which also touches on the “right to be forgotten,” was published in Re/code. It’s titled “The Right to Be Forgotten, the Privilege to Be Remembered.” I hope you’ll read that, too!
     
    And earlier in February, Google’s Advisory Council issued its much-anticipated report on the issue, which seeks to clarify the outlines of the debate surrounding it and offers suggestions for the implementation of “delisting.”
     
    One of the authors of that report, Professor Luciano Floridi, will be speaking at Santa Clara University on Wednesday, 2/25, as part of our “IT, Ethics and Law” lecture series.  Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford and the Director of Research of the Oxford Internet Institute. His talk is titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age.” The event is free and open to the public; if you live in the area and are interested in memory, free speech, and privacy online, we hope you will join us and RSVP!
     
    [And if you would like to be added to our mailing list for the lecture series—which has recently hosted panel presentations on ethical hacking, the ethics of online price discrimination, and privacy by design and software engineering ethics—please email ethics@scu.edu.] 
     
    Photo by Minchioletta, used without modification under a Creative Commons license.
  •  Questions about Mass Surveillance

    Tuesday, Oct. 14, 2014


    Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy.  (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc.  Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”

    The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments.  However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved.  And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.”  Microsoft’s General Counsel, Brad Smith, went one step further:  “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”

    That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by the Santa Clara Magazine.  One of the things I had asked was whether the newly-revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, “[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?”

    And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, "[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?"

    We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church committee and to policy changes.

    Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps.  But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues.  Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”

    Illustration by Stuart Bradford

  •  Should You Watch? On the Responsibility of Content Consumers

    Tuesday, Sep. 23, 2014

    This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely).  They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels.  And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.

     
    These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
    “Would You Watch [the beheading videos]?” ask CNN and ABC News. “Should You Watch the Ray Rice Assault Video?” asks Shape magazine. “Should We Look—Or Look Away?” asks Canada’s National Post. And, in a broader article about the “consequences and import of ubiquitous, Internet-connected photography” (and video), The Atlantic’s Robinson Meyer reflects on all three of the cases noted above; his piece is titled “Pics or It Didn’t Happen.”
    Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
    What do you think?
    Would it be unethical to watch the videos discussed above? Why?
    Would it be unethical to look at the photos discussed above? Why?
    Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all?  If so, which of them would you watch, or refuse to watch, and why?
     
    Photo by Matthew Montgomery, unmodified, used under a Creative Commons license.