The Markkula Center for Applied Ethics presents a series of brief videos on key issues in Internet ethics, as identified by Silicon Valley leaders. The participants include the co-founders of Adobe and Reputation.com, as well as the CEOs of Symantec and Seagate. A new video will be posted each week over the course of 10 weeks.
As we wrap up the "Internet Ethics: Views from Silicon Valley" series, we hope that its videos, comments, and related articles will continue to spark conversations about ethical challenges that arise in the online context. All of the clips in the series will remain available on YouTube, as well as on the Markkula Center for Applied Ethics website.
We leave you with one more issue to consider: The development of "the Internet of Things." In this brief clip, Owen Tripp, the co-founder of Reputation.com, addresses the balancing act between convenience and privacy in our connected world. As a recent Wired Magazine article notes, many of us now go about our lives "surrounded by tiny, intelligent devices that capture data about how we live and what we do." Whether we call it "The Internet of Things," the "Sensor Revolution," or "The Programmable World," we still need to consider the ethical implications of this new reality.
Kim Polese, a Silicon Valley entrepreneur and innovator, addresses a new and growing digital divide: the one between those who have high-speed wired broadband access to the Internet in their homes, and those who don't. Many of the services that we increasingly rely on in our daily lives require such access; Polese argues that the lack of affordable high-speed broadband access magnifies the inequalities in our society, keeping both necessities and opportunities out of reach for many Americans.
In a recent New York Times article, law professor Susan Crawford agrees with this assessment: she describes "[h]igh capacity fiber connections to homes and businesses" as "a social good" (as well as a business imperative). Both Polese and Crawford call for increased regulatory oversight in order to bring about affordable and widespread broadband access in the U.S.
The adoption of mobile devices and the use of social media are both growing quickly around the world. In emerging markets in particular, mobile devices have become “life tools”—used for telemedicine, banking, education, communication, and more. These developments give rise to new ethical challenges. How should mobile devices be used for data collection among vulnerable populations? Can apps that bring great benefits also cause unintended harm? And who should address these concerns? In this brief video, tech entrepreneur and professor Radha Basu argues that the debate should include not only the manufacturers of mobile devices and the app developers, but also the young people who will be most affected by these new developments.
A recent study claims that “[b]ring-your-own-device strategies are the single most radical change to the economics and culture of client computing in a decade.” As people increasingly bring their own (mostly mobile) devices into the workplace, and use them for both personal and professional activities, new challenges arise—for the employees as well as their employers. In this brief video, Pat Gelsinger, the CEO of VMware, discusses some of the implications of BYOD for data security and personal privacy. Can people enjoy the convenience of BYOD without compromising either the assets of their employers or their own personal privacy? Gelsinger argues that this is a problem that can be addressed through a technical solution, and challenges tech companies to devise products and services that will protect both the corporations and the individuals who adopt the BYOD strategy.
Below, Brian Buckley, Lecturer in the Philosophy Department and Incoming Director of Pre-Law Advising at Santa Clara University, comments on some of the ethical questions raised by the video:
“Mr. Gelsinger’s discussion of mobile devices and privacy raises at least three ethical questions about balance in the use of such devices. These questions are either implicit in or complementary to his assertion that this is a 'technology problem' that may be solved in part by firewalls, etc.
The first question involves information protection: What is the optimal balance between (a) introducing mobile technologies into the workplace and (b) preserving the value corporations have in their proprietary data? Companies innovate, do research, and invest in order to secure for themselves information that is profitable to their ongoing enterprise. This connection between protection and incentive is thus essential to promoting corporate growth and advancing stakeholder interests. After all, what company would invest corporate funds in research and development, etc., if the rewards of that investment were not secure? And yet these rewards are put in jeopardy when certain devices are brought into a workplace without safeguards. A balance here therefore upholds the very foundations of a business model that captures value, and it must be taken seriously.
The second question involves employee privacy: What balance is optimal between (a) the introduction of mobile devices to the workplace and (b) the employees’ reasonable expectations of privacy? An employee enters the workforce expecting the necessary inconveniences of steady employment—the commute, the hours, etc. But it seems problematic to claim that part of those inconveniences might include an erosion of certain realms of privacy. The newer devices, however, could thrust employees into the public domain without their consent, or otherwise capture their personal data. This constitutes a breach of personal boundaries that is not part of the employment contract and is a serious offense to respect for persons. The firewall that Mr. Gelsinger proposed would, however, work both ways—protecting both the intellectual property and assets of the corporation as well as the personal information of the employee. However such a firewall is fashioned, a balance must be struck that never promotes or celebrates the use of mobile devices at the expense of employee protections.
The third question, a broader one suggested by the video, involves the saturation of technology and its possible effect on character: What is the optimal balance between (a) the use of mobile data devices in employee life and (b) the increasing reliance of employers and employees on such devices? Mr. Gelsinger worries about privacy matters and the proprietary issues, but it seems that another reasonable concern involves the expected overuse of mobile devices. Employees may increasingly feel the need to incorporate the mobile data into their private life—to work on projects on the weekend, on vacation, etc. Furthermore, the mobile quality of various applications, with their accompanying data, might incentivize employers to either expect or reward such behavior. It is not hard to imagine the employer who is 'very sorry' to interrupt the vacation or who 'just this one time' wants the employee to give up her weekend, etc. Mobile data technologies might encourage this. I acknowledge that this also can be a good that allows people to stay at home with a special needs child, etc. But if character formation (both vices and virtues) is based on daily habits, it seems reasonable to worry about the extent to which mobile data devices may devalue relationships, professionalism, and communities and instead incentivize workaholism.”
Do we need more editorial control on the Web? In this brief clip, the Chairman, President, and Chief Executive Officer of Seagate Technology, Stephen Luczo, argues that we do. He also cautions that digital media channels sometimes unwittingly lend a gloss of credibility to stories that don't deserve it (as was recently demonstrated in the coverage of the Boston Marathon bombing). Luczo views this as a symptom of a broader breakdown in the relationship among responsibility, accountability, and consequences in the online world. Is the much-vaunted freedom of the Internet diminishing the amount of substantive feedback that we get for doing something positive--or negative--for society?
Chad Raphael, Chair of the Communication Department and Associate Professor at Santa Clara University, responds to Luczo's comments:
"It's true that the scope and speed of news circulation on the Internet worsens longstanding problems of countering misinformation and holding the sources that generate it accountable. But journalism's traditional gatekeepers were never able to do these jobs alone, as Senator Joseph McCarthy knew all too well. News organizations make their job harder with each new round of layoffs of experienced journalists.
There are new entities emerging online that can help fulfill these traditional journalistic functions, but we need to do more to connect, augment, and enshrine them in online news spaces. Some of these organizations, such as News Trust, crowdsource the problem of misinformation by enlisting many minds to review news stories and alert the public to inaccuracy and manipulation. Their greatest value may be as watchdogs who can sound the alarm on suspicious material. Other web sites, such as FactCheck.org, rely on trained professionals to evaluate political actors' claims. They can pick up tips from multiple watchdogs, some of them more partisan than others, and evaluate those tips as fair-minded judges. We need them to expand their scope beyond checking politicians to include other public actors. The judges could also use some more robust programs for tracking the spread of info-viruses back to their sources, so they can be identified and exposed quickly. We also need better ways to publicize the online judges' verdicts.
If search engines and other news aggregators aim to organize the world's information for us, it seems within their mission to let us know what sources, stories, and news organizations have been more and less accurate over time. Even more importantly, aggregators might start ranking better performing sources higher in their search results, creating a powerful economic incentive to get the story right rather than getting it first.
Does that raise First Amendment concerns? Sure. But we already balance the right to free speech against other important rights, including reputation, privacy, and public safety. And the Internet is likely to remain the Wild West until Google, Yahoo!, Digg, and other news aggregators start separating the good, the bad, and the ugly by organizing information according to its credibility, not just its popularity."
What would our lives be like if we no longer had access to the Internet? How much good would we lose? How much harm would we be spared? Is Internet access a right? These days, whether or not we think of access to it as a right, many of us take the Internet for granted. In this brief video, Apple co-founder A. C. "Mike" Markkula Jr. looks at the big picture, argues that Internet use is a privilege, and considers ways to minimize some of the harms associated with it, while fully appreciating its benefits.
"As we seek to advance the state of the art in technology and its use in society, [engineers] must be conscious of our civil responsibilities in addition to our engineering expertise. Improving the Internet is just one means, albeit an important one, by which to improve the human condition. It must be done with an appreciation for the civil and human rights that deserve protection--without pretending that access itself is such a right."
Consumer and business data is increasingly moving to the "cloud," and people are clamoring for protection of that data. However, as Symantec's President, CEO, and Chairman of the Board Steve Bennett points out in this clip, "maximum privacy" is really anonymity, and some people use anonymity as a shield for illegal and unethical behavior. How should cloud service providers deal with this dilemma? What is their responsibility to their customers, and to society at large? How should good corporate citizens respond when they are asked to cooperate with law enforcement?
Providers of cloud services are all faced with this dilemma; as Ars Technica recently reported, for example, Verizon took action when it discovered child pornography in one of its users' accounts.
The Internet has surely surpassed the expectations of its pioneers. As a communication medium, it is unparalleled in scope and impact. However, the ease of publication in the Web 2.0 world has created new ethical dilemmas. In this brief video, Adobe Chairman of the Board Charles Geschke points out the gap between what Internet users expect to receive (i.e., factual and accurate information) and what they too often get instead. Is it the user's responsibility to judge which sources to access on the Web, and how much to rely on them? Or do the publishers of information have a duty to strive for accuracy?
Below, Sally Lehrman (Knight Ridder/San Jose Mercury News Endowed Chair in Journalism and the Public Interest at Santa Clara University, and a Markkula Center for Applied Ethics Scholar) responds to Geschke's comments. Add your own responses in the "Comments" section!
"The Internet has certainly opened up opportunities for anyone to publish whatever they want. In some ways, the proliferation of voices is good. It provides access to ideas and perspectives that traditional news gatherers might miss. It also can put pressure on news organizations to get things right. But, as Mr. Geschke points out, it's hard to tell when the information packaged like news on the Internet is really just marketing or propaganda. That's why brands like the New York Times, Wall Street Journal, and local sites such as Patch.com and your own local newspaper are valuable. Their reporting can be trusted.
Ethical traditions in journalism ensure multiple sources and careful attention to facts. But many people have come to expect their news for free, and feet-on-the-ground reporting and fact-checking are expensive. That makes it very difficult for true news operations to survive. Unfortunately, we're seeing a decline in quality as a result. The public must learn to discern--and value--quality news. One way is to learn more about traditional journalism ethics guidelines, found (on the Internet!) on sites such as www.spj.org/ethics.asp and www.rtdna.org/channel/ethics."
New technologies often bring both benefits and unintended consequences. The same is true of laws aimed at new technologies. In this brief clip, NetApp's Executive Chairman Dan Warmenhoven discusses the development of GPS-tracking technology and the ethical issues associated with the aggregation of GPS data into large databases. Using HIPAA as an example, he then argues that data protection efforts can go too far, leaving us with inefficient outcomes. How do we strike the right balance between benefits and harms?
"Total interconnectedness," very cheap data storage, and powerful search technologies come together to create a new set of ethical questions. Do we have a right to access and correct the data in our profiles? Do we have a right to be "forgotten" by the Internet? In this brief video, Reputation.com co-founder Owen Tripp asks us to consider the impact of the Internet's long memory on those among us who are most vulnerable. Below, Evan Selinger--Associate Professor in the Department of Philosophy at the Rochester Institute of Technology--responds to Tripp's comments:
"Owen Tripp is moved by the ideas driving the "right to be forgotten" movements. For the reasons he gives, we all should be, too. In the age of big data, the permanent record threat we're confronted with as kids takes on a new and more ominous meaning. Our digital dossiers expand all the time, in both obvious and unclear ways, and through processes that are transparent as well as surreptitious. Now that unprecedented amounts of information are readily available about what we've done and what makes us tick, lamentable incidents and statements can follow us everywhere with the crushing weight of Jacob Marley's chains. With the past always present, time--as Shakespeare's Hamlet exclaimed--is out of joint.
When citizens become open books, it becomes awfully tempting to manage heightened publicity with overly cautious and risk-averse behavior. With enough fear, we'll lose out on more than opportunity. Our character can be diminished, perhaps with timorousness shifting from vice to virtue. As David Hoffman, Director of Security Policy and Global Privacy Officer at Intel Corporation, contends, society thus needs solutions that safeguard a limited "right to fail" without encouraging reckless or anti-social behavior, or the problems that come from historical amnesia or revisionism. At stake is nothing less than securing adequate space for social experimentation, the "breathing room" (to borrow a phrase from privacy scholar Julie Cohen) that enables people to learn and grow.
While the right to be forgotten appears to be gaining traction in Europe, there are numerous challenges ahead, not least because the road from privacy interest to privacy right can be long and winding. In the United States concern has been expressed over how legal enforcement of a robust right for individuals to control personal information could run afoul of First Amendment speech protections and squash innovation by subjecting companies like Google and Facebook to bureaucratic procedures that, practically speaking, are unworkable, and further burdened by the prospect of overly punitive sanctions. Furthermore, as numerous scholars suggest, the notion of so-called "personal information" is hard to pin down in an age of networked citizens where lots of data involves or affects other people, implicating what law professor Sonja West aptly calls the "story of us." Finally, while the market can indeed provide helpful services, we shouldn't lose sight of the fact that when privacy protection is commodified, greater burden is placed on lower income people. Freedom and peace of mind become purchasing power privilege."