Santa Clara University

Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

  •  Privacy Crimes Symposium: A Preview

    Monday, Oct. 5, 2015
    Daniel Suvor

    Tomorrow, Santa Clara University will host a free half-day symposium titled “Privacy Crimes: Definition and Enforcement.” The event is co-sponsored by the Santa Clara District Attorney’s Office, the High Tech Law Institute, and the Markkula Center for Applied Ethics. (Online registration is now closed, but if you’d still like to attend, you can email

    The event will open with remarks from Santa Clara DA Jeff Rosen and a keynote by Daniel Suvor, who is the California attorney general’s current policy advisor. A recent Fusion article detailing the latest efforts to criminalize and prosecute “revenge porn” quotes Suvor, who explains that the attorney general “sees this as the next front in the violence against women category of crime. … She sees it as the 21st century incarnation of domestic violence and assaults against women, now taken online.”

    The Fusion article also points out that California was “the second state to put a revenge porn law on the books…. In the past two years, 23 other states have followed suit.”

    Of course, revenge porn is not the only crime that impacts privacy, and legislative responses are not the only way to combat such crimes.  The Privacy Crimes symposium will feature panel discussions that will address a broad variety of related questions: How are privacy interests harmed? Why (and when) should we turn to criminal law in response? What types of criminal charges are currently used in the prosecutions that involve such harms? Are current laws sufficiently enforced? Are the current laws working well? Should some laws be changed? Do we need new ones? Are there other ways that would also work (or work better) to minimize privacy harms? Are there better ways to protect competing privacy interests in the criminal justice system?

    We are looking forward to a thought-provoking discussion and many more questions from audience members! And we are grateful to the International Association of Privacy Professionals, the Electronic Frontier Foundation, and the Identity Theft Council for their help in publicizing this event.

  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
    (AP Photo/Damian Dovarganes)

    As the number of people who are downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS9 Prompts Backlash”), as well as many other publications.

    Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”
    As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony.”) And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.
    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).
    Many of the recent articles present both sides of the ethics debate. However, most of the articles on the topic claim that the main reasons that users are installing ad blockers are the desires to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (and especially those who understand how the online advertising ecosystem works) install ad blockers: For many of those users, the primary concerns are the tracking behind “targeted” ads, and the meteoric growth of “malvertising”—advertising used as a vector for malware.
    When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if the tracking is somehow inherent in advertising. But the two are not the same, and it is important that we reject this false equivalence. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: When the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). Now, it may not be clear exactly what the respondents meant by “misuse of personal information,” but that is certainly not a reference to either annoying ads or clogged bandwidth.
    As for the rise of “malvertising,” it was that development that led me to say to a Mashable reporter that if this continues unabated we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.
    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some of those who argue that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  A Personal Privacy Policy

    Wednesday, Sep. 2, 2015

    This essay first appeared in Slate's Future Tense blog in July 2015.

    Dear Corporation,

    You have expressed an interest in collecting personal information about me. (This interest may have been expressed by implication, in case you were attempting to collect such data without notifying me first.) Since you have told me repeatedly that personalization is a great benefit, and that advertising, search results, news, and other services should be tailored to my individual needs and desires, I’ve decided that I should also have my own personalized, targeted privacy policy. Here it is.

    While I am glad that (as you stated) my privacy is very important to you, it’s even more important to me. The intent of this policy is to inform you how you may collect, use, and dispose of personal information about me.

    By collecting any such information about me, you are agreeing to the terms below. These terms may change from time to time, especially as I find out more about ways in which personal information about me is actually used and I think more about the implications of those uses.

    Note: You will be asked to provide some information about yourself. Providing false information will constitute a violation of this agreement.

    Scope: This policy covers only me. It does not apply to related entities that I do not own or control, such as my friends, my children, or my husband.

    Age restriction and parental participation: Please specify if you are a startup; if so, note how long you’ve been in business. Please include the ages of the founders/innovators who came up with your product and your business model. Please also include the ages of any investors who have asserted, through their investment in your company, that they thought this product or service was a good idea.

    Information about you: For each piece of personal information about me that you wish to collect, analyze, and store, you must first disclose the following: a) Do you need this particular piece of information in order for your product/service to work for me? If not, you are not authorized to collect it. If yes, please explain how this piece of information is necessary for your product to work for me. b) What types of analytics do you intend to perform with this information? c) Will you share this piece of information with anyone outside your company? If so, list each entity with which you intend to share it, and for what purpose; you must update this disclosure every time you add a new third party with which you’d like to share. d) Will you make efforts to anonymize the personal information that you’re collecting? e) Are you aware of the research that shows that anonymization doesn’t really work because it’s easy to put together information from several categories and/or several databases and so figure out the identity of an “anonymous” source of data? f) How long will you retain this particular piece of information about me? g) If I ask you to delete it, will you, and if so, how quickly? Note: by “delete” I don’t mean “make it invisible to others”—I mean “get it out of your system entirely.”

    Please be advised that, like these terms, the information I’ve provided to you may change, too: I may switch electronic devices; change my legal name; have more children; move to a different town; experiment with various political or religious affiliations; buy products that I may or may not like, just to try something new or to give to someone else; etc. These terms (as amended as needed) will apply to any new data that you may collect about me in the future: your continued use of personal information about me constitutes your acceptance of this.

    And, of course, I reserve all rights not expressly granted to you.

    Photo by Perspecsys Photos, used without modification under a Creative Commons license.

  •  Internet Ethics: Fall 2015 Events

    Tuesday, Sep. 1, 2015

    Fall will be here soon, and with it come three MCAE events about three interesting Internet-related ethical (and legal) topics. All of the events are free and open to the public; links to more details and registration forms are included below, so you can register today!

    The first, on September 24, is a talk by Santa Clara Law professor Colleen Chien, who recently returned from her appointment as White House senior advisor for intellectual property and innovation. Chien’s talk, titled “Tech Innovation Policy at the White House: Law and Ethics,” will address several topics—including intellectual property and innovation (especially the efforts toward patent reform); open data and social change; and the call for “innovation for all” (i.e. innovation in education, the problem of connectivity deserts, the need for tech inclusion, and more). Co-sponsored by the High Tech Law Institute, this event is part of our ongoing “IT, Ethics, and Law” lecture series, which recently included presentations on memory, forgiveness, and the “right to be forgotten”; ethical hacking; and the ethics of online price discrimination. (If you would like to be added to our mailing list for future events in this series, please email

    The second, on October 6, is a half-day symposium on privacy law and ethics and the criminal justice system. Co-sponsored by the Santa Clara District Attorney’s office and the High Tech Law Institute, “Privacy Crimes: Definition and Enforcement” aims to better define the concept of “privacy crimes,” assess how such crimes are currently being addressed in the criminal justice system, and explore how society might better respond to them—through new laws, different enforcement practices, education, and other strategies. The conference will bring together prosecutors, defense attorneys, judges, academics, and victims’ advocates to discuss three main questions: What is a “privacy crime”? What’s being done to enforce laws that address such crimes? And how should we balance the privacy interests of the people involved in the criminal justice system? The keynote speaker will be Daniel Suvor, chief of policy for California’s Attorney General Kamala Harris. (This event will qualify for 3.5 hours of California MCLE, as well as IAPP continuing education credit; registration is required.)

    Finally, on October 29 the Center will host Antonio Casilli, associate professor of digital humanities at Telecom Paris Tech. In his talk, titled “How Can Somebody Be A Troll?,” Casilli will ask some provocative questions about the line between actual online trolls and, as he puts it, “rightfully upset Internet users trying to defend their opinions.” In the process, he will discuss the arguments of a new generation of authors and scholars who are challenging the view that trolling is a deviant behavior or the manifestation of perverse personalities; such writers argue that trolling reproduces anthropological archetypes; highlights the intersections of different Internet subcultures; and interconnects discourses around class, race, and gender.

    Each of the talks and panels will conclude with question-and-answer periods. We hope to see you this fall and look forward to your input!

    (And please spread the word to any other folks you think might be interested.)


  •  Nothing to Hide? Nothing to Protect?

    Wednesday, Aug. 19, 2015

    Despite numerous articles and at least one full-length book debunking the premises and implications of this particular claim, “I have nothing to hide” is still a common reply offered by many Americans when asked whether they care about privacy.

    What does that really mean?

    An article by Conor Friedersdorf, published in The Atlantic, offers one assessment. It is titled “This Man Has Nothing to Hide—Not Even His Email Password.” (I’ll wait while you consider changing your email password right now, and then decide to do it some other time.) The piece details Friedersdorf’s interaction with a man named Noah Dyer, who responded to the writer’s standard challenge—"Would you prove [that you have nothing to hide] by giving me access to your email accounts, … along with your credit card statements and bank records?"—by actually providing all of that information. Friedersdorf then considers the ethical implications of Dyer’s philosophy of privacy-lessness, while carefully navigating the ethical shoals of his own decisions about which of Dyer’s information to look at and which to publish in his own article.

    Admitting to a newfound though limited respect for Dyer’s commitment to drastic self-revelation, Friedersdorf nevertheless reaches a different conclusion:

    Since Dyer granted that he was vulnerable to information asymmetries and nevertheless opted for disclosure, I had to admit that, however foolishly, he could legitimately claim he has nothing to hide. What had never occurred to me, until I sat in front of his open email account, is how objectionable I find that attitude. Every one of us is entrusted with information that our family, friends, colleagues, and acquaintances would rather that we kept private, and while there is no absolute obligation for us to comply with their wishes—there are, indeed, times when we have a moral obligation to speak out in order to defend other goods—assigning the privacy of others a value of zero is callous.

    I think it is more than callous, though. It is an abdication of our responsibility to protect others, whose calculations about disclosure and risk might be very different from our own. Saying “I have nothing to hide” is tantamount to saying “I have nothing and no one to protect.” It is either an acknowledgment of a very lonely existence or a devastating failure of empathy and imagination.

    As Friedersdorf describes him, Dyer is not a hermit; he has interactions with many people, at least some of whom (including his children) he appears to care about. And, in his case, his abdication is not complete; it is, rather, a shifting of responsibility. Because while he did disclose much of his personal information (which of course included the personal details of many others who had not been consulted, and whose “value system,” unlike his own, may not include radical transparency), Dyer wrote to Friedersdorf, the reporter, “[a]dditionally, while you may paint whatever picture of me you are inclined to based on the data and our conversations, I would ask you to exercise restraint in embarrassing others whose lives have crossed my path…”

    In other words, “I have nothing to hide; please hide it for me.”

    “I have nothing to hide” misses the fact that no person is an island, and much of every person’s data is tangled with, interwoven with, and created in conjunction with other people’s.

    The theme of the selfishness or lack of perspective embedded in the “nothing to hide” response is echoed in a recent commentary by lawyer and privacy activist Malavika Jayaram. In an article about India’s Aadhar ID system, Jayaram quotes Edward Snowden, who in a Reddit AMA session once said that “[a]rguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Jayaram builds on that, writing that the “nothing to hide” argument “locates privacy at an individual (some would say selfish) level and ignores the collective, societal benefits that it engenders and protects, such as the freedom of speech and association.”

    She rightly points out, as well, that the “’nothing to hide’ rhetoric … equates a legitimate desire for space and dignity to something sinister and suspect” and “puts the burden on those under surveillance … , rather than on the system to justify why it is needed and to implement the checks and balances required to make it proportional, fair, just and humane.”

    But there might be something else going on, at the same time, in the rhetorical shift from “privacy” to “something to hide”—a kind of deflection, of finger-pointing elsewhere: There, those are the people who have “something to hide”—not me! Nothing to see here, folks who might be watching. I accept your language, your framing of the issue, and your conclusions about the balancing of values or rights involved. Look elsewhere for troublemakers.

    Viewed this way, the “nothing to hide” response is neither naïve nor simplistically selfish; it is an effort—perhaps unconscious—at camouflage. The opposite of radical transparency.

    The same impetus might present itself in a different, also frequent response to questions about privacy and surveillance: “I’m not that interesting. Nobody would want to look at my information. People could look at information about me and it would all be banal.” Or maybe that is, for some people, a reaction to feelings of helplessness. If every day people read articles advising them about steps to take to protect their online privacy, and every other day they read articles explaining how those defensive measures are defeated by more sophisticated actors, is it surprising that some might try to reassure themselves (if not assure others) that their privacy is not really worth breaching?

    But even if we’re not “interesting,” whatever that means, we all do have information, about ourselves and others, that we need to protect. And our society gives us rights that we need to protect, too, for our sake and others'.

    Photo by Hattie Stroud, used without modification under a Creative Commons license.

  •  What Does An Engineer Look Like?

    Friday, Aug. 14, 2015
    Over the past year, our Center's Hackworth Engineering Ethics Fellows have developed ethics case studies drawn from the experiences of Santa Clara University engineering alumni.


    What does an engineer look like? In the East European country where I grew up, an engineer looked like my mom, my dad, my stepmom, and many (if not most) of their male and female friends. But I’ve lived in Silicon Valley a long time now…

    If the question about what an engineer looks like has conjured up the image of a young white guy (possibly not particularly well dressed), or even if it didn’t, you should take a look at some of the more than 75,000 photos posted on Twitter in the last two weeks or so under the hashtag “#ILookLikeAnEngineer.” If you’re not a Twitter user, you can still see some of the photos incorporated into the many articles that have already detailed the efforts of Isis Anchalee (articles that have appeared in the New York Times, USA Today, Time, The Guardian, CNN, NPR, the Washington Post, the Boston Globe, the San Francisco Chronicle, the Christian Science Monitor, the Los Angeles Times, TechCrunch, etc.). Anchalee is the engineer who came up with the hashtag after a billboard with her image, depicting her simply as one of the engineers working for a particular company, was met with disbelief and a lot of commentary.

    The CEO of the company for which she works noted afterward that, for that recruitment campaign, they had chosen “a diverse sample of the engineers at OneLogin, but… had expected that Peter [another engineer featured in the campaign] with his top hat and hacker shirt would be the most controversial one, not Isis, simply because she is female.”

    A guy in a top hat? Sure, we can accept he’s an engineer. Probably a good one, too: creative! But a good-looking young woman? Now people had to wonder, analyze the choices reflected in the ad, contest its implications.

    In response to that response, on August 1st Anchalee published a post on Medium and posted an image of herself, holding up a sign with the hashtag, on Twitter. Soon, other women engineers joined her, posting their own photos and comments with the hashtag. Then engineers from other underrepresented groups joined in, too. And then larger groups of engineers—like the women engineers of Tesla, and Snapchat, and 200 women engineers who work at Google. In that last picture, it’s a bit hard to see the faces—but the image is still powerful, in a different way than the individual ones.

    I asked several Santa Clara University engineering professors what they thought about the “#ILookLikeAnEngineer” effort (which is no longer just a hashtag campaign meant to raise awareness; yesterday evening, people who aim to increase diversity in engineering met in San Francisco to discuss further actions). All of them said they appreciated it, precisely because images do sway people; they had all scrolled through the images, enjoying the kaleidoscope of faces.

    A couple of years ago, I wrote an article about the need to “power the search” for women in tech. Since then, there have been some hopeful advances: Harvey Mudd College, for example, announced earlier this year that the percentage of women graduating from its computer science program had increased “from 12 percent to approximately 40 percent in five years.” Last year, various media reports highlighted the fact that, for the first time, more women than men had enrolled in at least one introductory computer science class at U.C. Berkeley. And I have recently sat in on a software engineering ethics graduate-level class at Santa Clara, noting with pleasure how many of the students in it were women.

    But, even if the “pipeline” problem is being more aggressively addressed these days, the retention problem remains daunting. Vast numbers of women engineers leave the profession—and that is largely due to the way in which women engineers are treated, too often, at work. Many of the comments that accompany “#ILookLikeAnEngineer” posts detail small (and not-small-at-all) slights, repeated comments and actions that signal to women that they just don’t belong in engineering. With its smiling, hopeful images, that is exactly the message that the #ILookLikeAnEngineer effort counters: It’s a repeated assertion, by a variety of people, that they do belong.

    In January of this year, Newsweek ran a cover story about sexism in the Silicon Valley tech world. It seems telling, in retrospect, that the main controversy that it generated was about the issue’s cover—in particular about its depiction of a woman.  Images do matter.

    The images still adding up under the #ILookLikeAnEngineer hashtag add something new to the narrative about women in tech. (This is one of the things that social media, and in particular Twitter with its hashtag curation, does well: give users a sense of both the intimate impact and the social scope of a particular issue.) Whether they and similar efforts will eventually lead to workplaces in which people are no longer surprised by the faces of women engineers, and to a society that pictures a woman just as readily as it does a man when asked what an engineer looks like, remains to be seen.

    Here’s looking to a time when an engineer like Anchalee will have to don a top hat and a “hacker” t-shirt to have any chance at stirring controversy.


  •  Death and Facebook

    Wednesday, Jul. 29, 2015

    A number of recent articles have noted Facebook’s introduction of a feature that allows users to designate “legacy contacts” for their accounts. In an extensive examination titled “Where Does Your Facebook Account Go When You Die?,” writer Simon Davis explains that, until recently, when Facebook was notified that one of its users had died, the company would “memorialize” that person’s account (in part in order to keep the account from being hacked). What “memorialization” implies has changed over time. Currently, according to Davis, memorialized accounts retain the privacy and audience settings last set by the user, while the contact information and ability to post status updates are stripped out. Since February, however, users can also designate a “legacy contact” person who “can perform certain functions on a memorialized account.” As Davis puts it, “Now, a trusted third party can approve a new friend request by the distraught father or get the mother’s input on a different profile image.”

    Would you give another person the power to add new “friends” to your account or change the profile image, after your death? Which raises the question: what is a Facebook account?

    In his excellent article, Davis cites Vanessa Callison-Burch, the Facebook product manager who is primarily responsible for the newly-added legacy account feature. Explaining some of the thinking behind it, she argues that a Facebook account “is a really important part of people’s identity and is a community space. Your Facebook account is incredibly personalized. It’s a community place for people to assemble and celebrate your life.” She adds that “there are certain things that that community of people really need to be supported in that we at Facebook can’t make the judgment call on.”

    While I commend Facebook for its (new-found?) modesty in feature design, and its recognition that the user’s wishes matter deeply, I find myself wondering about that description of a Facebook account as “a community space.” Is it? I’ve written elsewhere that posting on Facebook “echoes, for some of us, the act of writing in a journal.” A diary is clearly not a “community space.” On Facebook, however, commenters on one user’s posts get to comment on other commenters’ comments, and entire conversations develop among a user’s “friends.” Sometimes friends of friends “friend” each other.  So, yes, a community is involved. But no, the community’s “members” don’t get to decide what your profile picture should be, or whether or not you should “friend” your dad. Who should?

    In The Guardian, Stuart Heritage explores that question in a much lighter take on the subject of “legacy contacts,” titled “To my brother I leave my Facebook account ... and any chance of dignity in death.” As he makes clear, “nominating a legacy contact is harder than it looks.”

    Rather than simply putting that responsibility on a trusted person, Simon Davis suggests that Facebook should give users the opportunity to create an advance directive with specific instructions about their profile: “who should be able to see it, who should be able to send friend requests, and even what kind of profile picture or banner image the person would want displayed after death.” That alternative would respect the user’s autonomy even more than the current “legacy contact” does.

    But there is another option that perhaps respects that autonomy the most: Facebook currently also allows a user to check a box specifying that his or her account be simply deleted after his or her death. Heritage writes that “this is a hard button to click. It means erasing yourself.” Does it? Maybe it just signals a different perspective on Facebook. Maybe, for some, a Facebook account is neither an autobiography nor a guest book. Maybe the users who choose that delete option are not meanly destroying a “community space,” but ending a conversation.

    Photo by Lori Semprevio, used without modification under a Creative Commons license.

  •  IoT: The Internet of Trees

    Friday, Jul. 17, 2015

    Ethics is about living the good life, and, for many of us, trees are an important part of that good life (and not just because we like breathing).  This becomes clear in an article titled “When You Give a Tree an Email Address,” in which The Atlantic’s Adrienne LaFrance writes about a project undertaken by the city of Melbourne.  As LaFrance explains, “[o]fficials assigned the trees ID numbers and email addresses in 2013 as part of a program designed to make it easier for citizens to report problems like dangerous branches.”  As it turned out, however, quite a few citizens chose, instead, to write messages addressed directly to particular trees.

    Some of the messages quoted by LaFrance are quite moving.  On May 21, 2015, for example, a message to “Golden Elm, Tree ID 1037148” read, “I’m so sorry you’re going to die soon. It makes me sad when trucks damage your low hanging branches. Are you as tired of all this construction work as we are?” Other messages are funny. (All, by definition, are whimsical. How else do you write to a tree?) But the best part, perhaps, is that the trees sometimes write back.  For example, in January 2015, a Willow Leaf Peppermint answered a query about its gender. “Hello,” it began,

    I am not a Mr or a Mrs, as I have what’s called perfect flowers that include both genders in my flower structure, the term for this is Monoicous. [Even trees generate run-ons.] Some trees species have only male or female flowers on individual plants and therefore do have genders, the term for this is Dioecious. Some other trees have male flowers and female flowers on the same tree. It is all very confusing and quite amazing how diverse and complex trees can be. 

    Kind regards,

    Mr and Mrs Willow Leaf Peppermint (same Tree)

    Should we rethink the possibilities of the acronym “IoT”? With the coming of the much-anticipated “Internet of Things,” will trees eventually notify the city officials directly when they’re about to tip over, or a branch has scraped a car, or a good percentage of their fruits are ripe?

    In the meantime, is it pessimistic to worry that hackers might break into the trees’ email accounts and start sending offensive responses, or distribute spam instead of pollen?

    For now, the article made me think of a famous poem by Joyce Kilmer, “Trees,” which was published in 1913. With apologies, here is my take on the Internet of Trees:


    I thought that I would never see

    An email written by a tree.


    A tree whose hungry eyes are keen

    Upon a gadget’s glowing screen;


    A tree that doesn’t choose to Skype

    But lifts her leafy arms to type;


    A tree that may in Summer share

    Selfies with robins in her hair;


    Within whose bosom drafts might end;

    Who intimately lives with “Send.”


    Poems are made by fools like me,

    But emails come, now, from a tree.


    Photo by @Doug88888, used without modification under a Creative Commons license.

  •  Internet Values?

    Tuesday, Jun. 30, 2015

    "1.     The Internet’s architecture is highly unusual.

    2.       The Internet’s architecture reflects certain values.

    3.       Our use of the Net, based on that architecture, strongly encourages the adoption of those values.

    4.       Therefore, the Internet tends to transform us and our institutions in ways that reflect those values.

    5.       And that’s a good thing."

    The quoted list above comprises the premises that undergird an essay by David Weinberger, recently published in The Atlantic, titled “The Internet That Was (And Still Could Be).” Weinberger, who is the co-author of The Cluetrain Manifesto (and now a researcher at Harvard’s Berkman Center for Internet & Society), argues that the Internet’s architecture “values open access to information, the democratic and permission-free ability to read and to post, an open market of ideas and businesses, and provides a framework for bottom-up collaboration among equals.” However, he notes, in what he calls the “Age of Apps” most Internet users don’t directly encounter that architecture:

    In the past I would have said that so long as this architecture endures, so will the transfer of values from that architecture to the systems that run on top of it. But while the Internet’s architecture is still in place, the values transfer may actually be stifled by the many layers that have been built on top of it.

    Moreover, if people think, for example, that the Internet is Facebook, then the value transfer may be not just stifled but shifted: what they may be absorbing are Facebook’s values, not the Internet’s. However, Weinberger describes himself as still ultimately optimistic about the beneficial impact of the Internet. In light of the layers that obscure its architecture and its built-in values, he offers a new call to action: “As the Internet’s architecture shapes our behavior and values less and less directly, we’re going to have to undertake the propagation of the values embedded in that architecture as an explicit task” (emphasis added).

    It’s interesting to consider this essay in conjunction with the results of a poll reported recently by the Pew Research Center. In a study of people from 32 developing and emerging countries, the Pew researchers found that

    [t]he aspect of the internet that generates the greatest concern is its effect on a nation’s morals. Overall, a median of 42% say the internet has a negative influence on morality, with 29% saying it has a positive influence. The internet’s influence on morality is seen as the most negative of the five aspects tested in 28 of the 32 countries surveyed. And in no country does a majority say that the influence of the internet on morality is a positive.

    It should be noted at the outset that not all of those polled described themselves as internet users—and that Pew reports that a “major subgroup that sees the internet positively is internet users themselves” (though, as a different study shows, millions of people in some developing countries mistakenly identify themselves as non-users when they really do use the Internet).

    Interesting distinctions emerge among the countries surveyed, as well. In Nigeria, Pew reports, 50% of those polled answered that “[i]ncreasing use of the Internet in [their] country has had a good influence on morality.” In Ghana, only 29% did. In Vietnam, 40%. In China, 25%. In Tunisia, 17%. In Russia, 13%.

    The Pew study, however, did not attempt to provide a definition of “morality” before posing that question. It would have been interesting (and would perhaps be an interesting future project) to ask users in other countries what they perceive as the values embedded in the Internet. Would they agree with Weinberger’s list? And how might they respond to an effort to clarify and propagate those values explicitly, as Weinberger suggests? As for those in other countries who do not use the Internet, is their non-use purely a matter of access, or does it also reflect a rejection of certain values?

    If a clash of values is at issue, it involves a generational aspect, too: the Pew report notes that in many of the countries surveyed, “young people (18-34 years old) are much more likely to say that the internet has a good influence compared with older people (ages 35+).” This, the report adds, “is especially true of its influence on morality.”

    Photo by Blaise Alleyne, used without modification under a Creative Commons license.

  •  Applying Applied Ethics -- on Yik Yak

    Friday, Jun. 26, 2015

    Earlier this week, the associate director of the Markkula Center for Applied Ethics, Miriam Schulman, published a blog post about one of the center's recent campus projects. "If we want to engage with students," she wrote, "we have to go where they are talking, and this year, that has been on Yik Yak." To read more about this controversial app and a creative way to use it in a conversation about applied ethics, see "Yik Yak: The Medium and the Message." (And consider subscribing to the "All About Ethics" blog, as well!)