Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.
President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)
Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity. Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats. In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”
Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”
Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”
Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”
If the goal is increased collaboration between the public and private sectors on issues related to cybersecurity, many commentators have observed that the chief obstacle to it is a lack of trust. Things are not likely to get better as long as that anger and mistrust are left unaddressed. If President Obama is right in noting that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.
Over the last two weeks, Julia Powles, who is a law and technology researcher at the University of Cambridge, has published two interesting pieces on privacy, free speech, and the “right to be forgotten”: “Swamplands of the Internet: Speech and Privacy,” and “How Google Determined Our Right to Be Forgotten” (the latter co-authored by Enrique Chaparro). They are both very much worth reading, especially for folks whose work impacts the privacy rights (or preferences, if you prefer) of people around the world.
And earlier in February, Google’s Advisory Council issued its much-anticipated report on the issue, which seeks to clarify the outlines of the debate surrounding it and offers suggestions for the implementation of “delisting.”
[And if you would like to be added to our mailing list for the lecture series—which has recently hosted panel presentations on ethical hacking, the ethics of online price discrimination, and privacy by design and software engineering ethics—please email firstname.lastname@example.org.]
Last week, Senator Ron Wyden of Oregon, long-time member of the Select Committee on Intelligence and current chairman of the Senate Finance Committee, held a roundtable on the impact of governmental surveillance on the U.S. digital economy. (You can watch a video of the entire roundtable discussion here.) While he made the case that the current surveillance practices have hampered both our security and our economy, the event focused primarily on the implications of mass surveillance for U.S. business—corporations, entrepreneurs, tech employees, etc. Speaking at a high school in the heart of Silicon Valley, surrounded by the Executive Chairman of Google, the General Counsels of Microsoft and Facebook, and others, Wyden argued that the current policies around surveillance were harming one of the most promising sectors of the U.S. economy—and that Congress was largely ignoring that issue. “When the actions of a foreign government threaten red-white-and-blue jobs, Washington [usually] gets up in arms,” Wyden noted, but “no one in Washington is talking about how overly broad surveillance is hurting the US economy.”
The focus on the economic impact was clearly intended to present the issue of mass surveillance through a new lens—one that might engage those lawmakers and citizens who had not been moved, perhaps, by civil liberties arguments. However, even in this context, the discussion frequently turned to the “personal” implications of the policies involved. And in comments both during and after the panel discussion, Wyden expressed his deep concern about the particular danger posed by the creation and implementation of “secret law.” Microsoft’s General Counsel, Brad Smith, went one step further: “We need to recognize,” he said, “that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies, and even the intelligence community itself.”
That brought me back to some of the questions I raised in 2013 (a few months after the Snowden revelations first became public), in an article published by Santa Clara Magazine. One of the things I had asked was whether the newly revealed surveillance programs might “change the perception of the United States to the point where they hamper, more than they help, our national security.” In regard to secret laws, even if those were to be subject to effective Congressional and court oversight, I wondered, "[i]s there a level of transparency that U.S. citizens need from each branch of the government even if those branches are transparent to one another? In a democracy, can the system of checks and balances function with informed representatives but without an informed public? Would such an environment undermine voters’ ability to choose [whom to vote for]?"
And, even more broadly, in regard to the dangers inherent in indiscriminate mass surveillance, "[i]n a society in which the government collects the metadata (and possibly much of the content) of every person’s communications for future analysis, will people still speak, read, research, and act freely? Do we have examples of countries in which mass surveillance coexisted with democratic governance?"
We know that a certain level of mass surveillance and democratic governance did coexist for a time, very uneasily, in our own past, during the Hoover era at the FBI—and the revelations of the realities of that coexistence led to the Church Committee and to policy changes.
Will the focus on the economic impact of current mass governmental surveillance lead to new changes in our surveillance laws? Perhaps. But it was Facebook’s general counsel who had (to my mind) the best line of last week’s roundtable event. When a high-school student in the audience asked the panel how digital surveillance affects young people like him, who want to build new technology companies or join growing ones, one panelist advised him to just worry about creating great products, and to let people like the GCs worry about the broader issues. Another panelist told him that he should care about this issue because of the impact that data localization efforts would have on future entrepreneurs’ ability to create great companies. Then, Facebook’s Colin Stretch answered. “I would say care about it for the reasons you learned in your Civics class,” he said, “not necessarily the reasons you learned in your computer science class.”
This fall, Internet users have had the opportunity to view naked photographs of celebrities (which were obtained without approval, from private iCloud accounts, and then—again without consent—distributed widely). They were also able to watch journalists and an aid worker being beheaded by a member of a terrorist organization that then uploaded the videos of the killings to various social media channels. And they were also invited to watch a woman being rendered unconscious by a punch from a football player who was her fiancé at the time; the video of that incident was obtained from a surveillance camera inside a hotel elevator.
These cases have been accompanied by heated debates around the issues of journalism ethics and the responsibilities of social media platforms. Increasingly, though, a question is arising about the responsibility of the Internet users themselves—the consumers of online content. The question is, should they watch?
Many commentators have argued that to watch those videos or look at those pictures is a violation of the privacy of the victims depicted in them; that not watching is a sign of respect; or that the act of watching might cause new harm to the victims or to people associated with them (friends, family members, etc.). Others have argued that watching the beheading videos is necessary “if the depravity of war is to be understood and, hopefully, dealt with,” or that watching the videos of Ray Rice hitting his fiancée will help change people’s attitudes toward domestic violence.
What do you think?
Would it be unethical to watch the videos discussed above? Why?
Would it be unethical to look at the photos discussed above? Why?
Are the three cases addressed above so distinct from each other that one can’t give a single answer about them all? If so, which of them would you watch, or refuse to watch, and why?
But our community is neither monolithic nor uninterested. Back in 2013, for example, the Internet Ethics program at the Markkula Center for Applied Ethics started a blog called “Internet Ethics: Views from Silicon Valley,” with the goal of offering 10 brief videos in which Silicon Valley pioneers and leaders would address some key ethical issues related to the role of the Internet in modern life. While that project was completed (and those videos, featuring the co-founders of Apple and Adobe Systems, the Executive Chairman of NetApp, the CEOs of VMware and Seagate, and more, remain available on our website and our YouTube channel), we have decided to restart the blog.
We hope to be a platform for a multiplicity of Silicon Valley voices and demonstrate that applied ethics is everybody’s business—not just the purview of philosophers or philanthropists.
We aim to blog about once a week, with entries by various staff members of the Markkula Center for Applied Ethics, as well as other Santa Clara University faculty members (and perhaps some students, too!). We look forward to your comments, and we hope to host a robust conversation around such topics as big data ethics, online privacy, the Internet of Things, Net neutrality, the “right to be forgotten,” cyberbullying, the digital divide, sentiment analysis, the impact of social media, online communities, digital journalism, diversity in tech, and more. We will also post advance notice of various ethics-related events taking place on campus, free and open to the public.
If you’d like to be notified as new entries are posted, please subscribe today! (There’s an email subscription box to the right, or an RSS feed at the top of the blog.) You can also follow the Internet Ethics program on Twitter at @IEthics, and the Center overall either on Facebook or on Twitter at @mcaenews.
And to those of you who had been subscribed already, again, welcome back!
Do we need more editorial control on the Web? In this brief clip, the Chairman, President, and Chief Executive Officer of Seagate Technology, Stephen Luczo, argues that we do. He also cautions that digital media channels sometimes unwittingly lend a gloss of credibility to stories that don't deserve it (as was recently demonstrated in the coverage of the Boston bombing). Luczo views this as a symptom of a broader breakdown in the connection among responsibility, accountability, and consequences in the online world. Is the much-vaunted freedom of the Internet diminishing the amount of substantive feedback that we get for doing something positive--or negative--for society?
Chad Raphael, Chair of the Communication Department and Associate Professor at Santa Clara University, responds to Luczo's comments:
"It's true that the scope and speed of news circulation on the Internet worsens longstanding problems of countering misinformation and holding the sources that generate it accountable. But journalism's traditional gatekeepers were never able to do these jobs alone, as Senator Joseph McCarthy knew all too well. News organizations make their job harder with each new round of layoffs of experienced journalists.
There are new entities emerging online that can help fulfill these traditional journalistic functions, but we need to do more to connect, augment, and enshrine them in online news spaces. Some of these organizations, such as News Trust, crowdsource the problem of misinformation by enlisting many minds to review news stories and alert the public to inaccuracy and manipulation. Their greatest value may be as watchdogs who can sound the alarm on suspicious material. Other web sites, such as FactCheck.org, rely on trained professionals to evaluate political actors' claims. They can pick up tips from multiple watchdogs, some of them more partisan than others, and evaluate those tips as fair-minded judges. We need them to expand their scope beyond checking politicians to include other public actors. The judges could also use some more robust programs for tracking the spread of info-viruses back to their sources, so they can be identified and exposed quickly. We also need better ways to publicize the online judges' verdicts.
If search engines and other news aggregators aim to organize the world's information for us, it seems within their mission to let us know what sources, stories, and news organizations have been more and less accurate over time. Even more importantly, aggregators might start ranking better performing sources higher in their search results, creating a powerful economic incentive to get the story right rather than getting it first.
Does that raise First Amendment concerns? Sure. But we already balance the right to free speech against other important rights, including reputation, privacy, and public safety. And the Internet is likely to remain the Wild West until Google, Yahoo!, Digg, and other news aggregators start separating the good, the bad, and the ugly by organizing information according to its credibility, not just its popularity."
What would our lives be like if we no longer had access to the Internet? How much good would we lose? How much harm would we be spared? Is Internet access a right? These days, whether or not we think of access to it as a right, many of us take the Internet for granted. In this brief video, Apple co-founder A. C. "Mike" Markkula Jr. looks at the big picture, argues that Internet use is a privilege, and considers ways to minimize some of the harms associated with it, while fully appreciating its benefits.
"As we seek to advance the state of the art in technology and its use in society, [engineers] must be conscious of our civil responsibilities in addition to our engineering expertise. Improving the Internet is just one means, albeit an important one, by which to improve the human condition. It must be done with an appreciation for the civil and human rights that deserve protection--without pretending that access itself is such a right."
Consumer and business data is increasingly moving to the "cloud," and people are clamoring for protection of that data. However, as Symantec's President, CEO, and Chairman of the Board Steve Bennett points out in this clip, "maximum privacy" is really anonymity, and some people use anonymity as a shield for illegal and unethical behavior. How should cloud service providers deal with this dilemma? What is their responsibility to their customers, and to society at large? How should good corporate citizens respond when they are asked to cooperate with law enforcement?
Providers of cloud services are all faced with this dilemma; as Ars Technica recently reported, for example, Verizon took action when it discovered child pornography in one of its users' accounts.
The Internet has surely surpassed the expectations of its pioneers. As a communication medium, it is unparalleled in scope and impact. However, the ease of publication in the Web 2.0 world has created new ethical dilemmas. In this brief video, Adobe Chairman of the Board Charles Geschke points out the gap between what Internet users expect to receive (i.e., factual and accurate information) and what they too often get instead. Is it the user's responsibility to judge which sources to access on the Web, and how much to rely on them? Or do the publishers of information have a duty to strive for accuracy?
Below, Sally Lehrman (Knight Ridder/San Jose Mercury News Endowed Chair in Journalism and the Public Interest at Santa Clara University, and a Markkula Center for Applied Ethics Scholar) responds to Geschke's comments. Add your own responses in the "Comments" section!
"The Internet has certainly opened up opportunities for anyone to publish whatever they want. In some ways, the proliferation of voices is good. It provides access to ideas and perspectives that traditional news gatherers might miss. It also can put pressure on news organizations to get things right. But, as Mr. Geschke points out, it's hard to tell when the information packaged like news on the Internet is really just marketing or propaganda. That's why brands like the New York Times, Wall Street Journal, and local sites such as Patch.com and your own local newspaper are valuable. Their reporting can be trusted.
Ethical traditions in journalism ensure multiple sources and careful attention to facts. But many people have come to expect their news for free, and feet-on-the-ground reporting and fact-checking are expensive. That makes it very difficult for true news operations to survive. Unfortunately, we're seeing a decline in quality as a result. The public must learn to discern--and value--quality news. One way is to learn more about traditional journalism ethics guidelines, found (on the Internet!) on sites such as www.spj.org/ethics.asp and www.rtdna.org/channel/ethics."
New technologies often bring both benefits and unintended consequences. The same is true of laws aimed at new technologies. In this brief clip, NetApp's Executive Chairman Dan Warmenhoven discusses the development of GPS-tracking technology and the ethical issues associated with the aggregation of GPS data into large databases. Using HIPAA as an example, he then argues that data protection efforts can go too far, leaving us with inefficient outcomes. How do we strike the right balance between benefits and harms?