Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “ethics.”
  •  Internet Ethics: Fall 2015 Events

    Tuesday, Sep. 1, 2015

    Fall will be here soon, and with it come three MCAE events about three interesting Internet-related ethical (and legal) topics. All of the events are free and open to the public; links to more details and registration forms are included below, so you can register today!

    The first, on September 24, is a talk by Santa Clara Law professor Colleen Chien, who recently returned from her appointment as White House senior advisor for intellectual property and innovation. Chien’s talk, titled “Tech Innovation Policy at the White House: Law and Ethics,” will address several topics—including intellectual property and innovation (especially the efforts toward patent reform); open data and social change; and the call for “innovation for all” (i.e. innovation in education, the problem of connectivity deserts, the need for tech inclusion, and more). Co-sponsored by the High Tech Law Institute, this event is part of our ongoing “IT, Ethics, and Law” lecture series, which recently included presentations on memory, forgiveness, and the “right to be forgotten”; ethical hacking; and the ethics of online price discrimination. (If you would like to be added to our mailing list for future events in this series, please email ethics@scu.edu.)

    The second, on October 6, is a half-day symposium on privacy law and ethics and the criminal justice system. Co-sponsored by the Santa Clara District Attorney’s office and the High Tech Law Institute, “Privacy Crimes: Definition and Enforcement” aims to better define the concept of “privacy crimes,” assess how such crimes are currently being addressed in the criminal justice system, and explore how society might better respond to them—through new laws, different enforcement practices, education, and other strategies. The conference will bring together prosecutors, defense attorneys, judges, academics, and victims’ advocates to discuss three main questions: What is a “privacy crime”? What’s being done to enforce laws that address such crimes? And how should we balance the privacy interests of the people involved in the criminal justice system? The keynote speaker will be Daniel Suvor, chief of policy for California’s Attorney General Kamala Harris. (This event will qualify for 3.5 hours of California MCLE, as well as IAPP continuing education credit; registration is required.)

    Finally, on October 29 the Center will host Antonio Casilli, associate professor of digital humanities at Telecom Paris Tech. In his talk, titled “How Can Somebody Be A Troll?,” Casilli will ask some provocative questions about the line between actual online trolls and, as he puts it, “rightfully upset Internet users trying to defend their opinions.” In the process, he will discuss the arguments of a new generation of authors and scholars who are challenging the view that trolling is a deviant behavior or the manifestation of perverse personalities; such writers argue that trolling reproduces anthropological archetypes; highlights the intersections of different Internet subcultures; and interconnects discourses around class, race, and gender.

    Each of the talks and panels will conclude with a question-and-answer period. We hope to see you this fall and look forward to your input!

    (And please spread the word to any other folks you think might be interested.)

     

  •  Nothing to Hide? Nothing to Protect?

    Wednesday, Aug. 19, 2015

    Despite numerous articles and at least one full-length book debunking the premises and implications of this particular claim, “I have nothing to hide” is still a common reply offered by many Americans when asked whether they care about privacy.

    What does that really mean?

    An article by Conor Friedersdorf, published in The Atlantic, offers one assessment. It is titled “This Man Has Nothing to Hide—Not Even His Email Password.” (I’ll wait while you consider changing your email password right now, and then decide to do it some other time.) The piece details Friedersdorf’s interaction with a man named Noah Dyer, who responded to the writer’s standard challenge—"Would you prove [that you have nothing to hide] by giving me access to your email accounts, … along with your credit card statements and bank records?"—by actually providing all of that information. Friedersdorf then considers the ethical implications of Dyer’s philosophy of privacy-lessness, while carefully navigating the ethical shoals of his own decisions about which of Dyer’s information to look at and which to publish in his own article.

    While admitting to a newfound, though limited, respect for Dyer’s commitment to drastic self-revelation, Friedersdorf ultimately reaches a different conclusion:

    Since Dyer granted that he was vulnerable to information asymmetries and nevertheless opted for disclosure, I had to admit that, however foolishly, he could legitimately claim he has nothing to hide. What had never occurred to me, until I sat in front of his open email account, is how objectionable I find that attitude. Every one of us is entrusted with information that our family, friends, colleagues, and acquaintances would rather that we kept private, and while there is no absolute obligation for us to comply with their wishes—there are, indeed, times when we have a moral obligation to speak out in order to defend other goods—assigning the privacy of others a value of zero is callous.

    I think it is more than callous, though. It is an abdication of our responsibility to protect others, whose calculations about disclosure and risk might be very different from our own. Saying “I have nothing to hide” is tantamount to saying “I have nothing and no one to protect.” It is either an acknowledgment of a very lonely existence or a devastating failure of empathy and imagination.

    As Friedersdorf describes him, Dyer is not a hermit; he has interactions with many people, at least some of whom (including his children) he appears to care about. And, in his case, his abdication is not complete; it is, rather, a shifting of responsibility. Because while he did disclose much of his personal information (which of course included the personal details of many others who had not been consulted, and whose “value system,” unlike his own, may not include radical transparency), Dyer wrote to Friedersdorf, the reporter, “[a]dditionally, while you may paint whatever picture of me you are inclined to based on the data and our conversations, I would ask you to exercise restraint in embarrassing others whose lives have crossed my path…”

    In other words, “I have nothing to hide; please hide it for me.”

    “I have nothing to hide” misses the fact that no person is an island, and that much of every person’s data is tangled with, interwoven with, and created in conjunction with other people’s.

    The theme of the selfishness or lack of perspective embedded in the “nothing to hide” response is echoed in a recent commentary by lawyer and privacy activist Malavika Jayaram. In an article about India’s Aadhar ID system, Jayaram quotes Edward Snowden, who in a Reddit AMA session once said that “[a]rguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Jayaram builds on that, writing that the “nothing to hide” argument “locates privacy at an individual (some would say selfish) level and ignores the collective, societal benefits that it engenders and protects, such as the freedom of speech and association.”

    She rightly points out, as well, that the “’nothing to hide’ rhetoric … equates a legitimate desire for space and dignity to something sinister and suspect” and “puts the burden on those under surveillance … , rather than on the system to justify why it is needed and to implement the checks and balances required to make it proportional, fair, just and humane.”

    But there might be something else going on, at the same time, in the rhetorical shift from “privacy” to “something to hide”—a kind of deflection, of finger-pointing elsewhere: There, those are the people who have “something to hide”—not me! Nothing to see here, folks who might be watching. I accept your language, your framing of the issue, and your conclusions about the balancing of values or rights involved. Look elsewhere for troublemakers.

    Viewed this way, the “nothing to hide” response is neither naïve nor simplistically selfish; it is an effort—perhaps unconscious—at camouflage. The opposite of radical transparency.

    The same impetus might present itself in a different, also frequent response to questions about privacy and surveillance: “I’m not that interesting. Nobody would want to look at my information. People could look at information about me and it would all be banal.” Or maybe that is, for some people, a reaction to feelings of helplessness. If every day people read articles advising them about steps to take to protect their online privacy, and every other day they read articles explaining how those defensive measures are defeated by more sophisticated actors, is it surprising that some might try to reassure themselves (if not assure others) that their privacy is not really worth breaching?

    But even if we’re not “interesting,” whatever that means, we all do have information, about ourselves and others, that we need to protect. And our society gives us rights that we need to protect, too, for our sake and others'.

    Photo by Hattie Stroud, used without modification under a Creative Commons license.

  •  What Does An Engineer Look Like?

    Friday, Aug. 14, 2015
    Over the past year, our Center's Hackworth Engineering Ethics Fellows have developed ethics case studies drawn from the experiences of Santa Clara University engineering alumni.

     

    What does an engineer look like? In the Eastern European country where I grew up, an engineer looked like my mom, my dad, my stepmom, and many (if not most) of their male and female friends. But I’ve lived in Silicon Valley a long time now…

    If the question about what an engineer looks like has conjured up the image of a young white guy (possibly not particularly well dressed), or even if it didn’t, you should take a look at some of the more than 75,000 photos posted on Twitter in the last two weeks or so under the hashtag “#ILookLikeAnEngineer.” If you’re not a Twitter user, you can still see some of the photos incorporated into the many articles that have already detailed the efforts of Isis Anchalee (articles that have appeared in the New York Times, USA Today, Time, The Guardian, CNN, NPR, the Washington Post, the Boston Globe, the San Francisco Chronicle, the Christian Science Monitor, the Los Angeles Times, TechCrunch, etc.). Anchalee is the engineer who came up with the hashtag after a billboard with her image, depicting her simply as one of the engineers working for a particular company, was met with disbelief and a lot of commentary.

    The CEO of the company for which she works noted afterward that, for that recruitment campaign, they had chosen “a diverse sample of the engineers at OneLogin, but… had expected that Peter [another engineer featured in the campaign] with his top hat and hacker shirt would be the most controversial one, not Isis, simply because she is female.”

    A guy in a top hat? Sure, we can accept he’s an engineer. Probably a good one, too: creative! But a good-looking young woman? Now people had to wonder, analyze the choices reflected in the ad, contest its implications.

    In response to that response, on August 1st Anchalee published a post on Medium and posted an image of herself, holding up a sign with the hashtag, on Twitter. Soon, other women engineers joined her, posting their own photos and comments with the hashtag. Then engineers from other underrepresented groups joined in, too. And then larger groups of engineers—like the women engineers of Tesla, and Snapchat, and 200 women engineers who work at Google. In that last picture, it’s a bit hard to see the faces—but the image is still powerful, in a different way than the individual ones.

    I asked several Santa Clara University engineering professors what they thought about the “#ILookLikeAnEngineer” effort (which is no longer just a hashtag campaign meant to raise awareness; yesterday evening, people who aim to increase diversity in engineering met in San Francisco to discuss further actions). All of them said they appreciated it, precisely because images do sway people; they had all scrolled through the images, enjoying the kaleidoscope of faces.

    A couple of years ago, I wrote an article about the need to “power the search” for women in tech. Since then, there have been some hopeful advances: Harvey Mudd College, for example, announced earlier this year that the percentage of women graduating from its computer science program had increased “from 12 percent to approximately 40 percent in five years.” Last year, various media reports highlighted the fact that, for the first time, more women than men had enrolled in at least one introductory computer science class at U.C. Berkeley. And I have recently sat in on a graduate-level software engineering ethics class at Santa Clara, noting with pleasure how many of the students in it were women.

    But, even if the “pipeline” problem is being more aggressively addressed these days, the retention problem remains daunting. Vast numbers of women engineers leave the profession—and that is largely due to the way in which women engineers are treated, too often, at work. Many of the comments that accompany “#ILookLikeAnEngineer” posts detail small (and not-small-at-all) slights, repeated comments and actions that signal to women that they just don’t belong in engineering. With its smiling, hopeful images, that is exactly the message that the #ILookLikeAnEngineer effort counters: It’s a repeated assertion, by a variety of people, that they do belong.

    In January of this year, Newsweek ran a cover story about sexism in the Silicon Valley tech world. It seems telling, in retrospect, that the main controversy that it generated was about the issue’s cover—in particular about its depiction of a woman.  Images do matter.

    The images still adding up under the #ILookLikeAnEngineer hashtag add something new to the narrative about women in tech. (This is one of the things that social media, and in particular Twitter with its hashtag curation, does well: give users a sense of both the intimate impact and the social scope of a particular issue.) Whether they and similar efforts will eventually lead to workplaces in which people are no longer surprised by the faces of women engineers, and to a society that pictures a woman just as readily as it does a man when asked what an engineer looks like, remains to be seen.

    Here’s looking to a time when an engineer like Anchalee will have to don a top hat and a “hacker” t-shirt to have any chance of stirring controversy.

     

  •  Death and Facebook

    Wednesday, Jul. 29, 2015

    A number of recent articles have noted Facebook’s introduction of a feature that allows users to designate “legacy contacts” for their accounts. In an extensive examination titled “Where Does Your Facebook Account Go When You Die?,” writer Simon Davis explains that, until recently, when Facebook was notified that one of its users had died, the company would “memorialize” that person’s account (in part in order to keep the account from being hacked). What “memorialization” implies has changed over time. Currently, according to Davis, memorialized accounts retain the privacy and audience settings last set by the user, while the contact information and ability to post status updates are stripped out. Since February, however, users can also designate a “legacy contact” person who “can perform certain functions on a memorialized account.” As Davis puts it, “Now, a trusted third party can approve a new friend request by the distraught father or get the mother’s input on a different profile image.”

    Would you give another person the power to add new “friends” to your account or change the profile image after your death? Which raises the question: what is a Facebook account?

    In his excellent article, Davis cites Vanessa Callison-Burch, the Facebook product manager who is primarily responsible for the newly-added legacy account feature. Explaining some of the thinking behind it, she argues that a Facebook account “is a really important part of people’s identity and is a community space. Your Facebook account is incredibly personalized. It’s a community place for people to assemble and celebrate your life.” She adds that “there are certain things that that community of people really need to be supported in that we at Facebook can’t make the judgment call on.”

    While I commend Facebook for its (new-found?) modesty in feature design, and its recognition that the user’s wishes matter deeply, I find myself wondering about that description of a Facebook account as “a community space.” Is it? I’ve written elsewhere that posting on Facebook “echoes, for some of us, the act of writing in a journal.” A diary is clearly not a “community space.” On Facebook, however, commenters on one user’s posts get to comment on other commenters’ comments, and entire conversations develop among a user’s “friends.” Sometimes friends of friends “friend” each other.  So, yes, a community is involved. But no, the community’s “members” don’t get to decide what your profile picture should be, or whether or not you should “friend” your dad. Who should?

    In The Guardian, Stuart Heritage explores that question in a much lighter take on the subject of “legacy contacts,” titled “To my brother I leave my Facebook account ... and any chance of dignity in death.” As he makes clear, “nominating a legacy contact is harder than it looks.”

    Rather than simply putting that responsibility on a trusted person, Simon Davis suggests that Facebook should give users the opportunity to create an advance directive with specific instructions about their profile: “who should be able to see it, who should be able to send friend requests, and even what kind of profile picture or banner image the person would want displayed after death.” That alternative would respect the user’s autonomy even more than the current “legacy contact” does.

    But there is another option that perhaps respects that autonomy the most: Facebook currently also allows a user to check a box specifying that his or her account be simply deleted after his or her death. Heritage writes that “this is a hard button to click. It means erasing yourself.” Does it? Maybe it just signals a different perspective on Facebook. Maybe, for some, a Facebook account is neither an autobiography nor a guest book. Maybe the users who choose that delete option are not meanly destroying a “community space,” but ending a conversation.

    Photo by Lori Semprevio, used without modification under a Creative Commons license.

  •  Internet Values?

    Tuesday, Jun. 30, 2015

    "1.     The Internet’s architecture is highly unusual.

    2.       The Internet’s architecture reflects certain values.

    3.       Our use of the Net, based on that architecture, strongly encourages the adoption of those values.

    4.       Therefore, the Internet tends to transform us and our institutions in ways that reflect those values.

    5.       And that’s a good thing."

    The quoted list above comprises the premises that undergird an essay by David Weinberger, recently published in The Atlantic, titled “The Internet That Was (And Still Could Be).” Weinberger, who is the co-author of The Cluetrain Manifesto (and now a researcher at Harvard’s Berkman Center for Internet & Society), argues that the Internet’s architecture “values open access to information, the democratic and permission-free ability to read and to post, an open market of ideas and businesses, and provides a framework for bottom-up collaboration among equals.” However, he notes, in what he calls the “Age of Apps” most Internet users don’t directly encounter that architecture:

    In the past I would have said that so long as this architecture endures, so will the transfer of values from that architecture to the systems that run on top of it. But while the Internet’s architecture is still in place, the values transfer may actually be stifled by the many layers that have been built on top of it.

    Moreover, if people think, for example, that the Internet is Facebook, then the value transfer may be not just stifled but shifted: what they may be absorbing are Facebook’s values, not the Internet’s. However, Weinberger describes himself as still ultimately optimistic about the beneficial impact of the Internet. In light of the layers that obscure its architecture and its built-in values, he offers a new call to action: “As the Internet’s architecture shapes our behavior and values less and less directly, we’re going to have to undertake the propagation of the values embedded in that architecture as an explicit task” (emphasis added).

    It’s interesting to consider this essay in conjunction with the results of a poll reported recently by the Pew Research Center. In a study of people from 32 developing and emerging countries, the Pew researchers found that

    [t]he aspect of the internet that generates the greatest concern is its effect on a nation’s morals. Overall, a median of 42% say the internet has a negative influence on morality, with 29% saying it has a positive influence. The internet’s influence on morality is seen as the most negative of the five aspects tested in 28 of the 32 countries surveyed. And in no country does a majority say that the influence of the internet on morality is a positive.

    It should be noted at the outset that not all of those polled described themselves as internet users—and that Pew reports that a “major subgroup that sees the internet positively is internet users themselves” (though, as a different study shows, millions of people in some developing countries mistakenly identify themselves as non-users when they really do use the Internet).

    Interesting distinctions emerge among the countries surveyed, as well. In Nigeria, Pew reports, 50% of those polled answered that “[i]ncreasing use of the Internet in [their] country has had a good influence on morality.” In Ghana, only 29% did. In Vietnam, 40%. In China, 25%. In Tunisia, 17%. In Russia, 13%.

    The Pew study, however, did not attempt to provide a definition of “morality” before posing that question. It would have been interesting (and would perhaps be an interesting future project) to ask users in other countries what they perceive as the values embedded in the Internet. Would they agree with Weinberger’s list? And how might they respond to an effort to clarify and propagate those values explicitly, as Weinberger suggests? For non-users of the Internet, in other countries, is the motivation purely a lack of access, or is it a rejection of certain values, as well?

    If a clash of values is at issue, it involves a generational aspect, too: the Pew report notes that in many of the countries surveyed, “young people (18-34 years old) are much more likely to say that the internet has a good influence compared with older people (ages 35+).” This, the report adds, “is especially true on its influence of morality.”

    Photo by Blaise Alleyne, used without modification under a Creative Commons license.

  •  Applying Applied Ethics -- on Yik Yak

    Friday, Jun. 26, 2015

    Earlier this week, the associate director of the Markkula Center for Applied Ethics, Miriam Schulman, published a blog post about one of the center's recent campus projects. "If we want to engage with students," she wrote, "we have to go where they are talking, and this year, that has been on Yik Yak." To read more about this controversial app and a creative way to use it in a conversation about applied ethics, see "Yik Yak: The Medium and the Message." (And consider subscribing to the "All About Ethics" blog, as well!)

     

  •  The Social Network of Discourse and Discomfort

    Friday, Jun. 19, 2015

    Ello, the social media platform that was prominently (if briefly) touted last year as the “anti-Facebook,” is reinventing itself for mobile. Twitter is reinventing itself, too. Pinterest is reinventing itself into a store. And the anti-“anti-Facebook,” i.e. Facebook, is constantly reinventing itself.

    But the real “anti-Facebook” is described by the director of MIT’s Center for Civic Media, Ethan Zuckerman, in the transcript of a wide-ranging discussion recently held under the auspices of the Carnegie Council on Ethics in International Affairs. Zuckerman notes that one of his students, Sands Fish,

    is trying to build social networks designed to make you uncomfortable. Basically, the first thing he does is he takes away the choice of friends. You no longer have a choice about who is going to be your friend. You are going to interact with people whom he thinks you should be interacting with, as a way of sort of challenging us. Will anyone use this? It's a good question. This is why you do this at research universities rather than going out and getting venture capital for it.
     
    Initially, the idea of a social platform designed to make users uncomfortable seems amusing, or maybe closer to a conceptual art project than a real social network. But at a time when scholars warn about “filter bubbles” (and companies that might be blamed for them try to calm the worries, or at least deflect responsibility), a time when we seem to either surround ourselves with like-minded people or get sucked into the “spiral of silence” and stop talking about controversial topics, such a network could become a fascinating training ground. Might it lead to constructive ways to engage with people who have different experiences and preferences, hold different beliefs, etc., yet still need to function together, as people in a pluralistic society do?
     
    Would people willingly submit themselves to discomfort by participating in such a network? Would folks who join such a network be the ones already more comfortable with (or even attracted to) conflict and diversity? Or is it a question of degrees—the degree of discomfort, the degree of diversity, and the degree of thoughtfulness of the conversations that might ensue?
     
    Zuckerman addresses this issue:
    A lot of my theories around this suggest that you need bridge figures. You need people whom you have one thing in common with, but something else that is very different. I spend a ton of my life right now working on technology and innovation in sub-Saharan Africa. I work with people whom I don't have a lot in common with in terms of where we grew up, who we know, where we are from, but we have a lot in common in terms of what we do day to day, how we interact with technological systems, the things that we care about. That gives us a common ground that we are able to work on.
     
    Would the designer of the network of discomfort provide us with bridge figures? Or would serendipity offer some?
     
    One final thought: in some ways, for some people, Facebook itself has become the kind of social network that Fish (Zuckerman’s student) is apparently trying to design. When your relatives or co-workers send you “friend” requests, do you still “have a choice about who is going to be your friend”? (Much has been written about how conversations on Facebook have deteriorated as users have amassed vast numbers of “friends” from diverse parts and periods of their lives, and many commentators have suggested that this kind of blended audience has driven teens, at least, to other social networks not yet co-opted by their parents and teachers.) Maybe the key distinction in the MIT project would be that participants would, as Zuckerman describes it, “interact with people whom [the network designer] thinks [they] should be interacting with.” The anti-Facebook would provide us with more thoughtfully curated discomfort.
     
    Photo by Kevin Dooley, used without modification under a Creative Commons license.

     

  •  Privacy and Diversity

    Friday, Jun. 12, 2015
     
    Teams that work on privacy-protective features for our online lives are much more likely to be effective if those teams are diverse, in as many ways as possible.
     
    Here is what led me to this (maybe glaringly obvious) insight:
     
    First, an event that I attended at Facebook’s headquarters, called “Privacy@Scale,” which brought together academics, privacy practitioners (from both the legal and the tech sides), regulators, and product managers. (We had some great conversations.)
     
    Second, a study that was recently published with much fanfare (and quite a bit of tech media coverage) by the International Association of Privacy Professionals, showing that careers in privacy are much more likely than others to provide gender pay parity—and including the observation that there are more women than men in the ranks of Chief Privacy Officers.
     
    Third, a story from a law student who had interned on the privacy team of a large Silicon Valley company. She mentioned sitting in a meeting and thinking to herself that something being proposed as a feature would never have been accepted in the culture that she came from—that it would in fact have been somewhat taboo, and might have upset people if it were broadly implemented rather than offered as an opt-in—and realizing that none of the other members of the team understood this.
     
    And fourth, a question that several commenters asked earlier this year when Facebook experienced its “It’s Been a Great Year” PR disaster (after a developer wrote about the experience of seeing his daughter’s face auto-inserted by Facebook algorithms under a banner reading “It’s Been a Great Year!” when in fact his daughter had died that year): Had there been any older folks on the team that released that feature? If not, would the perspective of some older team members have tempered the roll-out, provided a word of caution?
     
    Much has been said, for a long time, about how it’s hard to “get privacy right” because privacy is all about nuance and gray areas, and conceptions of privacy vary so much among individuals, cultures, contexts, etc.  Given that, it makes sense that diverse teams working on privacy-enhancing features would be better able to anticipate and address problems. Not all problems, of course—diversity would not be a magic solution. It would, however, help.
     
    Various studies have recently shown that diversity on research teams leads to better science, that cultural diversity on global virtual teams has a positive effect on decision-making, that meaningful gender diversity in the workplace improves companies’ bottom line, and that “teams do better when they are composed of people with the widest possible range of personalities, even though it takes longer for such psychologically diverse teams to achieve good cooperation.”
     
    In Silicon Valley, the talk about team building tends to be about “culture fit” (or, in more sharply critical terms, about “broculture”). As it turns out, though, the right “culture fit” for a privacy team should probably include diversity (of background, gender, age, skills, and even personality), combined with an understanding that one’s own perspectives are not universal; the ability to listen; and curiosity about and respect for difference.
     
    Photo by Sean MacEntee, used without modification under a Creative Commons license.
     
  •  Which Students? Which Rights? Which Privacy?

    Friday, May. 29, 2015

     

    Last week, researcher danah boyd, who has written extensively about young people’s attitudes toward privacy (and debunked many pervasive “gut feelings” about those attitudes and related behaviors), wrote a piece about the several bills now working their way through Congress that aim to protect “student privacy.” boyd is not impressed. While she agrees that reform of current educational privacy laws is much needed, she writes, "Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate."
     
    boyd highlights four different “threat models” and argues that the proposed bills do nothing to address two of those: the “Consumer Finance Threat Model,” in which student data would “fuel the student debt ecosystem,” and the “Criminal Justice Threat Model,” in which such data would help build “new policing architectures.”
     
    As boyd puts it, “the risks that we’re concerned about are shaped by the fears of privileged parents.”
     
    In a related post called “Students: The one group missing from student data privacy laws and bills,” journalist Larry Magid adds that the proposed bills “are all about parental rights but only empower students once they turn 18.” Referencing boyd’s research, he broadens the conversation to argue that “[i]t’s about time we start to respect privacy, free speech rights and intellectual property rights of children.”
     
    While free speech and property rights are important, the protection of privacy in particular is essential for the full development of the self. The fact that children and young people need some degree of privacy not just from government or marketers but from their own well-intentioned family members has been particularly obscured by pervasive tropes like “young people today don’t care about privacy.”
     
    Of course, one way to combat those false tropes is to talk to young people directly. Just ask them: are there some things they keep to themselves, or share only with a few close friends or family members? And no, the fact that some of them post lots of things on social media that their elders might not does not mean that they “don’t care about privacy.” It just means that privacy boundaries vary—from generation to generation, from culture to culture, from context to context, from individual to individual.
     
    The best recent retort to statements about young people and privacy comes from security expert Bruce Schneier, who answered a question from an interviewer with some questions of his own: "Who are all these kids who are growing up without the concept of digital privacy? Is there even one? … All people care deeply about privacy—analog, digital, everything—and kids are especially sensitive about privacy from their parents, teachers, and friends. … Privacy is a vital aspect of human dignity, and we all value it."
     
    Given that, boyd’s critique of current efforts aimed at protecting student privacy is a call to action: Policy makers (and, really, all of us) need to better understand the true threats, and to better protect those who are most vulnerable in a “hypersurveilled world.”

     

    Photo by Theen Moy, used without modification under a Creative Commons license.

  •  "Harrison Bergeron" in Silicon Valley -- Part II

    Friday, May. 22, 2015

    A few weeks ago, I wrote about Kurt Vonnegut’s short story “Harrison Bergeron.” In the world of that story, the year is 2081, and, in an effort to render all people “equal,” the government imposes handicaps on all those who are somehow better than average. One of the characters, George, whose intelligence is “way above normal,” has “a little mental handicap radio in his ear.”

    As George tries to concentrate on something,

    “[a] buzzer sounded in George's head. His thoughts fled in panic, like bandits from a burglar alarm.

    "That was a real pretty dance, that dance they just did," said Hazel.

    "Huh" said George.

    "That dance-it was nice," said Hazel.

    "Yup," said George. He tried to think a little about the ballerinas. … But he didn't get very far with it before another noise in his ear radio scattered his thoughts.

    George winced. So did two out of the eight ballerinas.

    Hazel saw him wince. Having no mental handicap herself, she had to ask George what the latest sound had been.

    "Sounded like somebody hitting a milk bottle with a ball peen hammer," said George.

    "I'd think it would be real interesting, hearing all the different sounds," said Hazel a little envious. "All the things they think up."

    "Um," said George.

    "Only, if I was Handicapper General, you know what I would do?" said Hazel. … "I'd have chimes on Sunday--just chimes. Kind of in honor of religion."

    "I could think, if it was just chimes," said George.

    Re-reading the story, I thought about the work of the late professor Cliff Nass, whose “pioneering research into how humans interact with technology,” as the New York Times described it, “found that the increasingly screen-saturated, multitasking modern world was not nurturing the ability to concentrate, analyze or feel empathy.”

    If we have little “mental handicap radios” in our ears, these days, it’s usually because we put them there—or on our eyes, or wrists, or just in our hands—ourselves (though some versions are increasingly required by employers or schools). Still, like the ones in the story, they are making it more difficult for all of us to focus on key tasks, to be present for our loved ones, to truly take in and respond to our surroundings.

    In anticipation of the Memorial Day weekend, I wish you a few days of lessened technological distractions. And, if you have some extra time, you might want to read some of professor Nass’ research.