Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

  •  Trust, Self-Criticism, and Open Debate

    Tuesday, Mar. 17, 2015
    President Barack Obama speaks at the White House Summit on Cybersecurity and Consumer Protection in Stanford, Calif., Friday, Feb. 13, 2015. (AP Photo/Jeff Chiu)

    Last November, the director of the NSA came to Silicon Valley and spoke about the need for increased collaboration among governmental agencies and private companies in the battle for cybersecurity.  Last month, President Obama came to Silicon Valley as well, and signed an executive order aimed at promoting information sharing about cyberthreats.  In his remarks ahead of that signing, he noted that the government “has its own significant capabilities in the cyber world” and added that when it comes to safeguards against governmental intrusions on privacy, “the technology so often outstrips whatever rules and structures and standards have been put in place, which means the government has to be constantly self-critical and we have to be able to have an open debate about it.”

    Five days later, on February 19, The Intercept reported that back in 2010 “American and British spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe….” A few days after that, on February 23, at a cybersecurity conference, the director of the NSA was confronted by the chief information security officer of Yahoo in an exchange which, according to the managing editor of the Just Security blog, “illustrated the chasm between some leading technology companies and the intelligence community.”

    Then, on March 10th, The Intercept reported that in 2012 security researchers working with the CIA “claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.” Xcode’s product manager reacted on Twitter: “So. F-----g. Angry.”

    Needless to say, it hasn’t been a good month for the push toward increased cooperation. However, to put those recent reactions in a bit more historical context, in October 2013, it was Google’s chief legal officer, David Drummond, who reacted to reports that Google’s data links had been hacked by the NSA: "We are outraged at the lengths to which the government seems to have gone to intercept data from our private fibre networks,” he said, “and it underscores the need for urgent reform." In May 2014, following reports that some Cisco products had been altered by the NSA, Mark Chandler, Cisco’s general counsel, wrote that the “failure to have rules [that restrict what the intelligence agencies may do] does not enhance national security ….”

    Many commentators have observed that the biggest obstacle to increased collaboration between the public and private sectors on cybersecurity is a lack of trust. Things are not likely to get better as long as the anger and lack of trust are left unaddressed. If President Obama is right in noting that, in a world in which technology routinely outstrips rules and standards, the government must be “constantly self-critical,” then high-level visits to Silicon Valley should include that element, much more openly than they have until now.

     

  •  Luciano Floridi’s Talk at Santa Clara University

    Tuesday, Mar. 10, 2015

     

     
    In the polarized debate about the so-called “right to be forgotten” prompted by an important decision issued by the European Court of Justice last year, Luciano Floridi has played a key role. Floridi, who is Professor of Philosophy and Ethics of Information at the University of Oxford and Director of Research of the Oxford Internet Institute, accepted Google’s invitation to join its advisory council on that topic. While the council was making its way around seven European capitals pursuing both expert and public input, Professor Floridi (the only ethicist in the group) wrote several articles about his evolving understanding of the issues involved—including “Google's privacy ethics tour of Europe: a complex balancing act”; “Google ethics tour: should readers be told a link has been removed?”; “The right to be forgotten – the road ahead”; and “Right to be forgotten poses more questions than answers.”
     
    Last month, after the advisory council released its much-anticipated report, Professor Floridi spoke at Santa Clara University (his lecture was part of our ongoing “IT, Ethics, and Law” lecture series). In his talk, titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age,” Floridi embedded his analysis of the European court decision into a broader exploration of the nature of memory itself; the role of memory in the European philosophical tradition; and the relationship among memory, identity, forgiveness, and closure. As Floridi explained, the misnamed “right to be forgotten” is really about closure, which is in turn not about forgetting but about “rightly managing your past memory.”
     
    Here is the video of that talk. We hope that it will add much-needed context to the more nuanced conversation that is now developing around the balancing of the rights, needs, and responsibilities of all of the stakeholders involved in this debate, as Google continues to process the hundreds of thousands of requests for de-linking submitted so far in the E.U.
     
    If you would like to be added to our “IT, Ethics, and Law” mailing list in order to be notified of future events in the lecture series, please email ethics@scu.edu.

     

  •  The Ethics of Encryption

    Wednesday, Feb. 25, 2015
     
     
    One of the programs organized by the Markkula Center for Applied Ethics is a Business and Organizational Ethics Partnership that brings together Silicon Valley executives and scholars. Earlier this month, the partnership’s meeting included a panel discussion on the ethics of encryption. The panelists were David J. Johnson, Special Agent in Charge of the San Francisco Division of the FBI; Marshall Erwin, a senior staff analyst at Mozilla and fellow at Stanford’s Center for Internet and Society; and Jonathan Mayer, Cybersecurity Fellow at the Center for International Security and Cooperation and Junior Affiliate Scholar at the Center for Internet and Society.
     
    Of course, since then, the conversation about encryption has continued: President Obama discussed it, for example, in an interview that he gave when he came to Silicon Valley to advocate for increased cooperation between tech companies and the government; NSA Director Mike Rogers was challenged on that topic at a recent cybersecurity conference; and Hillary Clinton and others continued to hope for a middle-ground solution. However, as the Washington Post recently put it, “political leaders appear to be re-hashing the same debate in search of a compromise solution that technical experts say does not exist.”
    (In the photo, L-R: Irina Raicu, Jonathan Mayer, Marshall Erwin, and David J. Johnson)
  •  On Remembering, Forgetting, and Delisting

    Friday, Feb. 20, 2015
     
    Over the last two weeks, Julia Powles, who is a law and technology researcher at the University of Cambridge, has published two interesting pieces on privacy, free speech, and the “right to be forgotten”: “Swamplands of the Internet: Speech and Privacy,” and “How Google Determined Our Right to Be Forgotten” (the latter co-authored by Enrique Chaparro). They are both very much worth reading, especially for folks whose work impacts the privacy rights (or preferences, if you prefer) of people around the world.
     
    Today, a piece that I wrote, which also touches on the “right to be forgotten,” was published in Re/code. It’s titled “The Right to Be Forgotten, the Privilege to Be Remembered.” I hope you’ll read that, too!
     
    And earlier in February, Google’s Advisory Council issued its much-anticipated report on the issue, which seeks to clarify the outlines of the debate surrounding it and offers suggestions for the implementation of “delisting.”
     
    One of the authors of that report, Professor Luciano Floridi, will be speaking at Santa Clara University on Wednesday, 2/25, as part of our “IT, Ethics and Law” lecture series.  Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford and the Director of Research of the Oxford Internet Institute. His talk is titled “Recording, Recalling, Retrieving, Remembering: Memory in the Information Age.” The event is free and open to the public; if you live in the area and are interested in memory, free speech, and privacy online, we hope you will join us and RSVP!
     
    [And if you would like to be added to our mailing list for the lecture series—which has recently hosted panel presentations on ethical hacking, the ethics of online price discrimination, and privacy by design and software engineering ethics—please email ethics@scu.edu.] 
     
    Photo by Minchioletta, used without modification under a Creative Commons license.
  •  Covering Sexism in Tech

    Thursday, Feb. 5, 2015
     
    The lack of diversity in the ranks of Silicon Valley tech companies has been a subject for debate for quite some time. It gathered more steam last year, when a number of companies including Apple, Google, and Twitter released their employment numbers, but many people had been writing about, and working for, increased diversity for years. (I wrote about it, too, in MarketWatch, back in June 2013.) And, as a recent San Francisco Chronicle article noted, with the issue now in the spotlight, a number of startups are “seeking to turn Silicon Valley’s diversity problem into profit—helping tech companies find, recruit, and retain a diverse workforce, usually for a hefty fee. Still other companies have recently added diversity services to those already offered. In tech, diversity is now for sale.”
     
    In the midst of these developments, last week’s Newsweek cover article “What Silicon Valley Thinks of Women” seemed a bit of a throwback. The controversial cover that went along with it, which got even more attention than the article itself, seemed even more of a throwback. Many pixels were spilled, in media both social and traditional, in arguments between those who thought the cover itself was sexist and those who felt that it reflected and therefore drew attention to sexism, and/or that it was effective simply because it drew lots of attention to the article and the magazine.
     
    TechCrunch writer Alexia Tsotsis summed up one side of the debate in an article titled “What (Some) Silicon Valley Women Think of Newsweek” (complete with provocative spoof of the provocative Newsweek cover). She first took Newsweek to task for “[b]andwagoning on after lengthy articles in The New York Times and a very public campaign to collect startup diversity data,” and pointed out that the piece recycles the “requisite tech sexism horror stories.” (In fact, one of the most shocking direct quotes included in the Newsweek article, about women not having “mastered” linear thinking, attributed to a Silicon Valley investor, was also quoted word for word in a July 2014 article in Wired UK. With sexism being so rampant, surely there are other shocking quotes to be found?)
     
    Then, Tsotsis turns to the Newsweek cover, which she describes as “a visceral gut punch”: “It portrays a woman without eyes, in a short skirt, getting her behind clicked on by a big black cursor. (Whoever these Silicon Valley people are who are thinking about women like this, they are not doing it on their phone or a tablet.)”  She then argues that “Newsweek’s faceless and sexualized symbol of women in tech is… basic and reductive. We have worked so hard to broaden the scope of what we can be, in Silicon Valley, in the world, and here comes Newsweek… with an image that bluntly, sloppily trivializes how painfully that progress was won.”
     
    And that is, indeed, the issue.  Maybe stereotypical cartoons can draw attention to a critique of stereotypes (though would it really make sense to illustrate an issue like, say, anti-Semitism by creating a new anti-Semitic cartoon?), but the tone of this particular cover is not critical. It’s playful. It’s breezy. The “woman” on the cover is clearly young, and dressed in a very short red dress (whose hem is being lifted by the oversized cursor). She is also wearing very high-heeled red shoes.  She has no eyes or nose, but she does have a very red mouth—and she looks back—almost gamely—over her shoulder at the cursor lifting her skirt. If the piece is about “what Silicon Valley thinks about women,” the cover visually “quotes” the misogynists.
     
    Ironically, though the article itself focuses on the story of one particular startup founded by two women to illustrate the problem of gender discrimination in Silicon Valley, the reader has to scroll down quite a ways before getting to a photo of those two women entrepreneurs. What if those two real women had been on the cover? Their experiences were apparently seen as interesting and representative enough to illustrate the broader issue, but their faces weren’t. Instead, the key image of the piece (which, the cliché goes, is worth a thousand words) gives voice to those who demean such women.
     
    As Tsotsis notes, the cover “gets to subconsciously influence a bunch of kids accompanying their parents on trips to the grocery … perpetuating what it purportedly denounces — It makes women feel excluded, sexualized and degraded as it tries to point out how bad it is to exclude, sexualize, and degrade women.”
     
    In response to such criticism, the author of the Newsweek article came to the defense of the cover’s designers; she said the backlash was “totally misguided” and added that “For… people to be coming out being outraged by an image as opposed to actual [sexist] behavior is just petty.” That’s a false dichotomy, though: one—or many—can clearly be outraged both by sexist behavior and by a cover that trivializes (and perhaps perpetuates) the problem. Maybe the next time Newsweek writes about the lack of diversity in Silicon Valley (and yes, we need many more such articles, to keep the attention on an ugly reality that will take some time to change), it will take a page from a different startup highlighted by the San Francisco Chronicle: Gap Jumpers “sells software that helps tech companies evaluate job candidates based on talent alone. The company offers different skills tests to vet people applying for jobs—a blind audition conducted via computer.” Some cursors aim to lift women, rather than skirts.
     
    Photo by streunna4, used without modification under a Creative Commons license.
     
  •  On Spirituality, Social Justice, and Social Media

    Thursday, Jan. 22, 2015

     

    Christine Cate is a recent graduate of Santa Clara University, where she majored in Public Health Science with a minor in Biology. She has worked at the Markkula Center for Applied Ethics as the Character Education intern for the Character Based Literacy Program since October 2012. A version of this piece first appeared in November 2014 in the blog of the Ignatian Solidarity Network. Christine is a member of the Network’s social media team, focusing on contemporary issues of social justice and spirituality.

    Sometimes, reading the news makes my stomach turn. Every day, headlines about sexual assault, racism, immigration, poverty, or infectious disease are intermingled with stories on Kim Kardashian’s newest racy cover, snow storms on the East Coast, and political speculations. The media is constantly bombarding us with stories ranging in importance from superficial fluff to deeply divisive topics.

    The never-ending availability of news is positive in one sense, as the public is becoming more “informed,” but it also has its consequences. The media is desensitizing us to critical social issues like violence, racism, and sexism, while simultaneously flooding our feeds with stories of naked celebrities trying to break the internet or the most expensive Starbucks drink ever. Inane news stories focusing on things like which celebrity unfollowed whom on Instagram this week distract us from being able to critically observe and understand the world in which we live. Even political news stories can contain sensational levels of bias that make getting an objective comprehension of situations nearly impossible. And it’s nearly impossible to escape; anyone active on social media knows how often links to news articles show up among personal updates and advertisements. Individuals who aren’t constantly connected to social media, rare as they may be, are still saturated with current events from radio, print, and advertising outlets. In our current society, it takes real effort not to know what is going on in the world, and ignorance may be just as harmful as news-intoxication.

    Both the lack of current event literacy and the over-saturation of news are serious problems in our world, as media is one of the most powerful influences in society today. After returning from the Ignatian Family Teach-In that took place in November 2014 in Virginia and Washington, D.C., I found myself reflecting on the role that news and social media play in our lives, and how that impacts both our spirituality and capacity to enact social justice.

    At the Teach-In, in the rare moments between keynote speakers and breakout sessions, large projection screens and television monitors displayed live updates of tweets with the #IFTJ14 hashtag. Multiple photographers scurried around the crowded conference room, and cameras recorded every speaker for the online live stream. The slogan for this year’s Teach-In was “Uprooting Injustice, Sowing Truth, Witnessing Transformation.” The issues of immigration reform, divestment from fossil fuels, and Central American legislation were highlighted, along with a special recognition of the 25th anniversary of the UCA martyrs. Over the course of Saturday and Sunday, conference attendees were challenged to view these issues, as well as other powerful issues like the criminal justice system and racism in society, through a lens of spirituality and social justice. During presentations, audience members tweeted out perspectives or quotes that they felt were especially eye-opening or striking, with their tweets flying out into cyberspace and appearing shortly after on the illuminated screens.

    The reach of the Teach-In is hard to fathom. With an estimated 1,500 attendees, and the majority of them active on social media, it wouldn’t be a stretch to say that tens of thousands of people were indirectly exposed to the messages of the Teach-In through media sources. The goal of the Teach-In was to give voice to the voiceless, to highlight areas in our collective history and present realities that need change, and I think that goal was accomplished spectacularly. Social media amplified the messages spoken at the Teach-In, and expanded the audience beyond just physical attendees.

    But amid the masses of news stories already flooding the eyes and minds of people today, is social media enough to make a change? How many news readers are intentional in what and how they read news stories? How many social media users are intentionally aware of their influence, and use their accounts as platforms to share morally important or challenging news stories? How many people are harnessing the power of social media to identify injustice, spread truth, and incite action for transformation?
     
    There are plenty of examples of social media bringing faith into daily rhetoric. The hashtag #blessed is popular on Instagram and Twitter, and there are hundreds of accounts that exist solely to post encouraging scripture passages, quotes, or other spirituality-related content. Spirituality and faith have become trendy in certain spheres, with social media users around the world able to share prayers and encourage and inspire from afar. But rarely do faithful social media users (in both senses of the word) connect their spirituality, social media reach, and social justice.
               
    What would it look like if the culture of mainstream news and social media changed to include the combination of spirituality and social justice? Would the voices of the oppressed and marginalized be heard more? Would people be more willing to confront the uncomfortable problems in our societies and work for positive change? Or would we just become desensitized to it, as we have to news coverage of war and violence? Can the integration of spirituality and social media be a powerful tool to expose injustices, spread truth, and document change?
     
    I don’t have answers to these questions, not yet. I am far more aware of my social media presence and interaction with news outlets, and would like to be more intentional in how I read news stories and pass them along to my sphere of influence. I think by critically analyzing news stories, and calling out the biases that we have become so accustomed to, we can change the way information is transmitted in society. I think that by integrating spirituality and social justice on a conscious level with how we use social media platforms we will be able to uproot injustice, sow truth, and witness transformation.
     
    (Photo by Werner Kunz, used without modification under a Creative Commons license.)
     

     

  •  “It’s Been a Great Year!”

    Friday, Jan. 16, 2015
     
    Was 2014 a great year for Facebook? That depends, of course, on which measures or factors you choose to look at. The number of videos in users’ newsfeeds more than tripled. The number of monthly active Facebook users is 1.35 billion, and going up. Last June, however, Facebook took a drubbing in the media when reports about its controversial research on “emotional contagion” brought the term “research ethics” into worldwide conversations. In response, Facebook announced that it would put in place enhanced review processes for its studies of users, and that newly hired engineers would receive training in research ethics when joining the company.
     
    Then, in December, Facebook offered its users a way to share with their friends an overview of their year (their Facebook year, at least). It was a mini-photo album: a collection of photos from one’s account, curated by Facebook (and no, the pre-selected photos were not the most “liked” ones). While customizable, the personalized albums showed up in users’ newsfeeds with a pre-filled cover photo and the tagline “It’s Been a Great Year! Thanks for being a part of it.”
     
    Now, Facebook chooses things like taglines very, very carefully. Deliberately. This was not a throwaway line. But, as you may know by now, a father whose six-year-old daughter died last year—and who was repeatedly faced with her smiling photo used as the cover of his suggested “It’s Been a Great Year!” album—wrote a blog post that went viral, decrying what he termed “inadvertent algorithmic cruelty” and adding, “If I could fix one thing about our industry, just one thing, it would be that: to increase awareness of and consideration for the failure modes, the edge cases, the worst-case scenarios.” Many publications picked up the story.
     
    Apologies were then exchanged. But many other Facebook users felt the same pain, and did not receive an apology. And some were maybe reminded of the complaints that accompanied the initial launch of Facebook’s “Look Back Video” feature in early February 2014. As TechCrunch noted then, “[a]lmost immediately after launch, many users were complaining about the photos that Facebook auto-selected. Some had too many photos of their exes. Some had sad photos that they’d rather not remember as a milestone.” On February 7, TechCrunch reported that a “quick visit to the Facebook Look Back page now shows a shiny new edit button.”
     
    Come December, the “year-in-review” album was customizable. But the broader lesson about “the failure modes, the edge cases, the worst-case scenarios” was apparently not learned, or was forgotten between February and December, despite the many sharp intervening critiques of the way Facebook treats its users.
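     
    To make that lesson concrete, here is one way a developer might encode such guards. This is a minimal, hypothetical sketch in Python, not Facebook’s actual code: the function, the fields, and the idea of a “flagged_sensitive” signal are all invented for illustration (Facebook has not published how its selection actually worked).

```python
# Hypothetical sketch of edge-case guards for an auto-generated
# "year in review" album. Every name and field is invented for
# illustration; this is not Facebook's actual code or behavior.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Photo:
    url: str
    like_count: int
    flagged_sensitive: bool  # e.g., linked to a loss or a memorialized account

def build_year_in_review(photos: List[Photo], opted_in: bool) -> Optional[dict]:
    """Return an album suggestion, or None when silence is safer."""
    if not opted_in:
        # Guard 1: never push the album unprompted; wait to be asked.
        return None

    safe = [p for p in photos if not p.flagged_sensitive]
    if not safe:
        # Guard 2: if no photo is clearly safe, show nothing rather than guess.
        return None

    # Placeholder heuristic; the real feature's criteria were not public.
    cover = max(safe, key=lambda p: p.like_count)
    return {
        "cover": cover.url,
        # Guard 3: a neutral tagline instead of presuming a great year.
        "tagline": "Your year in photos",
    }
```

    The specific checks matter less than the habit they represent: asking, before shipping an automated feature, what it will do to the user for whom the year was not great.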
     
    In October, Santa Clara University professor Shannon Vallor and I wrote an op-ed arguing that Facebook’s response to the firestorm surrounding the emotional contagion study was too narrowly focused on research ethics. We asked, “What about other ethical issues, not research-related, that Facebook's engineers are bound to encounter, perhaps even more frequently, in their daily work?” The year-in-review app demonstrates that the question is very much still in play. You can read our op-ed, which was published by the San Jose Mercury News, here.
     
    Here’s hoping for a better year.
     
    Photo by FACEBOOK(LET), used without modification under a Creative Commons license.
     
  •  #Compassion

    Thursday, Jan. 8, 2015

    For the 2014-2015 school year, the overarching theme being explored by various programs of the Markkula Center for Applied Ethics is “Compassion.” Fittingly, the Center’s first program on this theme was a talk entitled “What Is Compassion? A Philosophical Overview.”

    Led by emeritus philosophy professor William J. Prior, the event turned out to be less of a talk and more of a spirited conversation. The professor had set it up that way—by handing out a one-pager with a brief description of the Good Samaritan parable and a number of questions to be answered by the audience. “In doing the following exercise,” he began, “I’d like you to try to forget everything you think you know about compassion and about this very famous story.” He also asked the audience to ignore the story’s religious underpinnings, and focus on its philosophical aspects.  After several questions that focused the reader’s attention on certain elements of the story, Prior asked, “Based on the reading of the text and your own interpretation of that text, what is compassion?”

    My scribbled notes reply, “Recognition of suffering and action to alleviate it.” As it turns out, that’s a bit different than many of the dictionary definitions of compassion (some of which Prior had also collected and distributed to the crowd). Most of those were variations of a two-part definition that involved a) recognition/consciousness of suffering, and b) desire to alleviate that suffering.

    But the Good Samaritan story argues for more than just desire. The two people who walked by the man who had been left “half dead” before the Good Samaritan found him might have felt a desire to help—we don’t know; however, for whatever reason, they didn’t act on it.  The Samaritan cared for the man’s wounds, took him to shelter at an inn, and even gave money to the innkeeper for the man’s continued care.

    The discussion of the Samaritan’s acts raised the issue of what level of action might be required. If action is required as part of compassion, is any action enough?

    And, I wondered, what does compassion look like online?

    As I am writing this, social media is flooded with references to the heartbreaking killings at the French satirical magazine Charlie Hebdo. People are using #JeSuisCharlie, #CharlieHebdo, and other hashtags to express solidarity with satirists, respect, sorrow, anger, support for free speech, opposition to religious extremism. But they are also using social media, and blogs, and online maps, and other online tools, to organize demonstrations—to draw each other out into the cold streets in a show of support for the victims and for their values. Do these actions reflect compassion?

    We often hear the online world described as a place of little compassion. But we also know that people contribute to charities online, offer support and understanding in comments on blogs or on social media posts, click “like…” Is clicking “like” action enough? Is tweeting with the #bringbackourgirls hashtag enough? Is re-tweeting? Are there some actions online so ephemeral and without cost that they communicate desire to help but don’t rise to the level of compassion?

    Would the Good Samaritan have been compassionate if he had seen the wounded man lying on the ground and raised awareness of the need by tweeting about it? (“Man beaten half to death on the road to Jericho. #compassion”) Does compassionate action vary depending on our proximity to the need? On the magnitude of the need? On our own ability to help?

    I am left with lots of questions, some of which I hope to ask during the Q&A following next week’s “Ethics at Noon” talk by the Chair of SCU’s Philosophy department, Dr. Shannon Vallor (author of 21st Century Virtue: Cultivating the Technomoral Self, as well as of our module on software engineering ethics, the Stanford Encyclopedia of Philosophy article on social networking and ethics, and more). Professor Vallor’s talk, which will be held on Thursday, January 15, is titled “Life Online and the Challenge of Compassion.” The talk is free and open to the public; feel free to join us and ask your own questions! 

  •  Ethical Hacking and the Ethics of Disclosure

    Tuesday, Dec. 23, 2014

     

    Whether we call it “ethical hacking,” “penetration testing,” “vulnerability analysis,” “cyberoffense,” or “cybersecurity research,” we are talking about an increasingly important field rich in remunerative employment, intellectual challenges, and ethical dilemmas.

    As a recent Washington Post article noted, this is a “controversial area of technology: the teaching and practice of what is loosely called ‘cyberoffense.’ In a world in which businesses, the military and governments rely on computer systems that are potentially vulnerable, having the ability to break into those systems provides a strategic advantage.” The Post adds, “Unsurprisingly, ethics is a big issue in this field.”
     
    (Also unsurprisingly, perhaps, the coverage of ethics included in cyberoffense courses at various universities—at least as described in the article—is deeply underwhelming. In many engineering and computer science courses, ethics is barely mentioned; discussion of ethics, when it does happen, is often left to a separate course, removed from the substance and skills that the students are actually mastering.)
     
    Last month, as part of the “IT, Ethics, and Law” lecture series co-sponsored by the Markkula Center for Applied Ethics and the High Tech Law Institute, Santa Clara University hosted a panel discussion about ethical hacking. The panelists were Marisa Fagan (Director of Crowd Ops at Bugcrowd), Manju Mude (Chief Security Officer at Splunk), Abe Chen (Director of Information and Product Security at Tesla Motors), Alex Wheeler (Director of R&D at Accuvant), and Seth Schoen (Senior Staff Technologist at the Electronic Frontier Foundation). The topics ranged from an effort to define “ethical hacking” to a review of current bug bounty practices and employment opportunities for ethical hackers, to a discussion about the ethics of teaching cyberoffense in colleges and universities, and more.
     
    A particularly interesting chunk of the conversation addressed the ethical issues associated with disclosures of discovered vulnerabilities. Rather than try to summarize it, I’ve included an audio clip of that discussion below. Unfortunately, the participants are (mostly) not identified by name; I can tell you, though, that the voices you hear, in order, are those of yours truly (who moderated), and then Seth, Alex, Seth, Abe, Marisa, Abe, Alex, Marisa, and me again.
     
    As it happens, the one participant who is not heard in this clip is Manju Mude—so it bears noting that Manju contributed significantly throughout the panel (including steering the conversation, right after this clip, to the related topic of hacktivism), and that she was a driving force behind the convening of the whole event, and provided invaluable help in reaching out to the other panelists. I will take this opportunity to thank all of them again, and hope that you will appreciate their insights on the topic of the ethics of disclosure:
     
     
    [For more on the topic of ethical decision-making in general, please see the Markkula Center for Applied Ethics' framework for ethical decision making--and, for an introduction to its key concepts, download the free companion app!]
     
    [In the photo, left to right: Seth Schoen, Marisa Fagan, Abe Chen, Alex Wheeler, Manju Mude, Irina Raicu]
     
     
  •  Content versus Conversation

    Tuesday, Dec. 16, 2014
     
    Last month, at the pii2014 conference held in Silicon Valley (where “pii” stands for “privacy, identity, innovation”), one interesting session was a conversation between journalist Kara Swisher and the co-founders of Secret—one of a number of apps that allow users to communicate anonymously.  Such apps have been criticized by some as enabling cruel comments and cyberbullying; other commentators, however, like Rachel Metz in the MIT Tech Review, have argued that “[s]peaking up in these digital spaces can bring out the trolls, but it’s often followed by compassion from others, and a sense of freedom and relief.”
     
    During the conversation with David Byttow and Chrys Bader-Wechseler, Swisher noted that Secret says it is not a media company—but, she argued, it does generate content through its users. Secret’s co-founders pushed back. They claimed that what happens on their platform are conversations, not “content.” Secret messages are ephemeral, they noted; they disappear soon after being posted (how soon is not clear). We’ve always had great, passionate conversations with people, they said, without having those conversations recorded forever; Secret, they argued, is just a new way to do that.
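     
    Their ephemerality claim is easier to picture with a sketch. What follows is a minimal, hypothetical Python sketch assuming a simple time-to-live model; since Secret did not disclose how (or how soon) posts expire, the 48-hour window and every name here are invented for illustration.

```python
# Hypothetical sketch of time-to-live (TTL) ephemerality: posts
# expire and are purged on read. The 48-hour window and all names
# are invented; Secret never said how soon posts actually disappear.

import time
from dataclasses import dataclass, field
from typing import List

TTL_SECONDS = 48 * 3600  # assumed lifetime, purely illustrative

@dataclass
class EphemeralPost:
    text: str
    created_at: float = field(default_factory=time.time)

    def expired(self) -> bool:
        return time.time() - self.created_at > TTL_SECONDS

class Feed:
    """An anonymous feed that forgets: expired posts vanish on read."""

    def __init__(self) -> None:
        self._posts: List[EphemeralPost] = []

    def post(self, text: str) -> None:
        # No author field at all: a post is a voice, not a byline.
        self._posts.append(EphemeralPost(text))

    def visible(self) -> List[str]:
        # Purge expired posts so nothing lingers as durable "content."
        self._posts = [p for p in self._posts if not p.expired()]
        return [p.text for p in self._posts]
```

    The design choice the co-founders were gesturing at lives in that last method: if expired posts are purged before anything is read, the platform never accumulates the durable archive that the word “content” implies.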
     
    Those comments left me thinking about the term “social media” itself. What does “media” mean in this context? I’m pretty sure that most Facebook or Twitter users don’t see themselves as content-creators for media companies. They see themselves, I would guess, as individuals engaged in conversations with other individuals. But those conversations do get treated like media content in many ways. We keep hearing about social media platforms collecting the “data” or “content” created by their users, analyzing that content, tweaking it to “maximize engagement,” using it as fodder for behavioral research, etc.
     
    There are other alternatives for online conversations, of course. Texting and emailing are never claimed to constitute “content creation” for media companies. But texts and email conversations are targeted, directed. They have an address line, which has to be filled in.
     
    Tools like Secret, however, enable a different kind of interaction. If I understand it correctly, this is more like shouting out a window and—more often than not—getting some response (from people you know, or people in your area).  It’s hoping to be heard, and maybe acknowledged, but not seen, not known.
     
    A reporter for Re/code, Nellie Bowles, once wrote about a “real-life” party organized through Secret. Some of the conversations that took place at that party were pretty odd; some were interesting; but none of them became “content” until Bowles wrote about them.
     
    Calling social media posts “content” turns them into a commodity, and makes them sound less personal. Calling them parts of a conversation is closer, I think, to what most people perceive them to be, and reminds us of social norms that we have around other people’s conversations—even if they’re out loud, and in public.
     
    It’s a distinction worth keeping in mind. 
     
    Photo by Storebukkebruse, used without modification under a Creative Commons license.