Santa Clara University


Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

  •  “It’s Been a Great Year!”

    Friday, Jan. 16, 2015
    Was 2014 a great year for Facebook? That depends, of course, on which measures or factors you choose to look at. The number of videos in users’ newsfeeds more than tripled. The number of monthly active Facebook users is 1.35 billion, and going up. Last June, however, Facebook took a drubbing in the media when reports about its controversial research on “emotional contagion” brought the term “research ethics” into worldwide conversations. In response, Facebook announced that it would put in place enhanced review processes for its studies of users, and that newly hired engineers would receive training in research ethics when they joined the company.
    Then, in December, Facebook offered its users a way to share with their friends an overview of their year (their Facebook year, at least). It was a mini-photo album: a collection of photos from one’s account, curated by Facebook (and no, the pre-selected photos were not the most “liked” ones). Although the albums were customizable, they showed up in users’ newsfeeds with a pre-filled cover photo and the tagline “It’s Been a Great Year! Thanks for being a part of it.”
    Now, Facebook chooses things like taglines very, very carefully. Deliberately. This was not a throwaway line. But, as you may know by now, a father whose six-year-old daughter died last year—and who was repeatedly faced with her smiling photo used as the cover of his suggested “It’s Been a Great Year!” album—wrote a blog post that went viral, decrying what he termed “inadvertent algorithmic cruelty” and adding, “If I could fix one thing about our industry, just one thing, it would be that: to increase awareness of and consideration for the failure modes, the edge cases, the worst-case scenarios.” Many publications picked up the story.
    Apologies were then exchanged. But many other Facebook users felt the same pain, and did not receive an apology. And some were maybe reminded of the complaints that accompanied the initial launch of Facebook’s “Look Back Video” feature in early February 2014. As TechCrunch noted then, “[a]lmost immediately after launch, many users were complaining about the photos that Facebook auto-selected. Some had too many photos of their exes. Some had sad photos that they’d rather not remember as a milestone.” On February 7, TechCrunch reported that a “quick visit to the Facebook Look Back page now shows a shiny new edit button.”
    Come December, the “year-in-review” album was customizable. But the broader lesson about “the failure modes, the edge cases, the worst-case scenarios” was apparently not learned, or was forgotten between February and December, despite the many sharp intervening critiques of the way Facebook treats its users.
    In October, Santa Clara University professor Shannon Vallor and I wrote an op-ed arguing that Facebook’s response to the firestorm surrounding the “emotional contagion” study was too narrowly focused on research ethics. We asked, “What about other ethical issues, not research-related, that Facebook's engineers are bound to encounter, perhaps even more frequently, in their daily work?” The year-in-review app demonstrates that the question is very much still in play. You can read our op-ed, which was published by the San Jose Mercury News, here.
    Here’s hoping for a better year.
    Photo by FACEBOOK(LET), used without modification under a Creative Commons license.
  •  #Compassion

    Thursday, Jan. 8, 2015

    For the 2014-2015 school year, the overarching theme being explored by various programs of the Markkula Center for Applied Ethics is “Compassion.” Fittingly, the Center’s first program on this theme was a talk entitled “What Is Compassion? A Philosophical Overview.”

    Led by emeritus philosophy professor William J. Prior, the event turned out to be less of a talk and more of a spirited conversation. The professor had set it up that way—by handing out a one-pager with a brief description of the Good Samaritan parable and a number of questions to be answered by the audience. “In doing the following exercise,” he began, “I’d like you to try to forget everything you think you know about compassion and about this very famous story.” He also asked the audience to ignore the story’s religious underpinnings, and focus on its philosophical aspects.  After several questions that focused the reader’s attention on certain elements of the story, Prior asked, “Based on the reading of the text and your own interpretation of that text, what is compassion?”

    My scribbled notes reply, “Recognition of suffering and action to alleviate it.” As it turns out, that’s a bit different than many of the dictionary definitions of compassion (some of which Prior had also collected and distributed to the crowd). Most of those were variations of a two-part definition that involved a) recognition/consciousness of suffering, and b) desire to alleviate that suffering.

    But the Good Samaritan story argues for more than just desire. The two people who walked by the man who had been left “half dead” before the Good Samaritan found him might have felt a desire to help—we don’t know; however, for whatever reason, they didn’t act on it.  The Samaritan cared for the man’s wounds, took him to shelter at an inn, and even gave money to the innkeeper for the man’s continued care.

    The discussion of the Samaritan’s acts raised the issue of what level of action might be required. If action is required as part of compassion, is any action enough?

    And, I wondered, what does compassion look like online?

    As I am writing this, social media is flooded with references to the heartbreaking killings at the French satirical magazine Charlie Hebdo. People are using #JeSuisCharlie, #CharlieHebdo, and other hashtags to express solidarity with satirists, respect, sorrow, anger, support for free speech, and opposition to religious extremism. But they are also using social media, and blogs, and online maps, and other online tools, to organize demonstrations—to draw each other out into the cold streets in a show of support for the victims and for their values. Do these actions reflect compassion?

    We often hear the online world described as a place of little compassion. But we also know that people contribute to charities online, offer support and understanding in comments on blogs or on social media posts, click “like…” Is clicking “like” action enough? Is tweeting with the #bringbackourgirls hashtag enough? Is re-tweeting? Are there some actions online so ephemeral and without cost that they communicate desire to help but don’t rise to the level of compassion?

    Would the Good Samaritan have been compassionate if he had seen the wounded man lying on the ground and raised awareness of the need by tweeting about it? (“Man beaten half to death on the road to Jericho. #compassion”) Does compassionate action vary depending on our proximity to the need? On the magnitude of the need? On our own ability to help?

    I am left with lots of questions, some of which I hope to ask during the Q&A following next week’s “Ethics at Noon” talk by the Chair of SCU’s Philosophy department, Dr. Shannon Vallor (author of 21st Century Virtue: Cultivating the Technomoral Self, as well as of our module on software engineering ethics, the Stanford Encyclopedia of Philosophy article on social networking and ethics, and more). Professor Vallor’s talk, which will be held on Thursday, January 15, is titled “Life Online and the Challenge of Compassion.” The talk is free and open to the public; feel free to join us and ask your own questions! 

  •  Ethical Hacking and the Ethics of Disclosure

    Tuesday, Dec. 23, 2014


    Whether we call it “ethical hacking,” “penetration testing,” “vulnerability analysis,” “cyberoffense,” or “cybersecurity research,” we are talking about an increasingly important field rich in remunerative employment, intellectual challenges, and ethical dilemmas.

    As a recent Washington Post article noted, this is a “controversial area of technology: the teaching and practice of what is loosely called ‘cyberoffense.’ In a world in which businesses, the military and governments rely on computer systems that are potentially vulnerable, having the ability to break into those systems provides a strategic advantage.” The Post adds, “Unsurprisingly, ethics is a big issue in this field.”
    (Also unsurprisingly, perhaps, the coverage of ethics included in cyberoffense courses at various universities—at least as described in the article—is deeply underwhelming. In many engineering and computer science courses, ethics is barely mentioned; discussion of ethics, when it does happen, is often left to a separate course, removed from the substance and skills that the students are actually mastering.)
    Last month, as part of the “IT, Ethics, and Law” lecture series co-sponsored by the Markkula Center for Applied Ethics and the High Tech Law Institute, Santa Clara University hosted a panel discussion about ethical hacking. The panelists were Marisa Fagan (Director of Crowd Ops at Bugcrowd), Manju Mude (Chief Security Officer at Splunk), Abe Chen (Director of Information and Product Security at Tesla Motors), Alex Wheeler (Director of R&D at Accuvant), and Seth Schoen (Senior Staff Technologist at the Electronic Frontier Foundation). The topics ranged from an effort to define “ethical hacking” to a review of current bug bounty practices and employment opportunities for ethical hackers, to a discussion about the ethics of teaching cyberoffense in colleges and universities, and more.
    A particularly interesting chunk of the conversation addressed the ethical issues associated with disclosures of discovered vulnerabilities. Rather than try to summarize it, I’ve included an audio clip of that discussion below. Unfortunately, the participants are (mostly) not identified by name; I can tell you, though, that the voices you hear, in order, are those of yours truly (who moderated), and then Seth, Alex, Seth, Abe, Marisa, Abe, Alex, Marisa, and me again.
    As it happens, the one participant who is not heard in this clip is Manju Mude—so it bears noting that Manju contributed significantly throughout the panel (including steering the conversation, right after this clip, to the related topic of hacktivism), and that she was a driving force behind the convening of the whole event, as well as an invaluable help in reaching out to the other panelists. I will take this opportunity to thank all of them again, and hope that you will appreciate their insights on the topic of the ethics of disclosure:
    [For more on the topic of ethical decision-making in general, please see the Markkula Center for Applied Ethics' framework for ethical decision making--and, for an introduction to its key concepts, download the free companion app!]
    [In the photo, left to right: Seth Schoen, Marisa Fagan, Abe Chen, Alex Wheeler, Manju Mude, Irina Raicu]
  •  Content versus Conversation

    Tuesday, Dec. 16, 2014
    Last month, at the pii2014 conference held in Silicon Valley (where “pii” stands for “privacy, identity, innovation”), one interesting session was a conversation between journalist Kara Swisher and the co-founders of Secret—one of a number of apps that allow users to communicate anonymously.  Such apps have been criticized by some as enabling cruel comments and cyberbullying; other commentators, however, like Rachel Metz in the MIT Tech Review, have argued that “[s]peaking up in these digital spaces can bring out the trolls, but it’s often followed by compassion from others, and a sense of freedom and relief.”
    During the conversation with David Byttow and Chrys Bader-Wechseler, Swisher noted that Secret says it is not a media company—but, she argued, it does generate content through its users. Secret’s co-founders pushed back. They claimed that what happens on their platform are conversations, not “content.” Secret messages are ephemeral, they noted; they disappear soon after being posted (how soon is not clear). We’ve always had great, passionate conversations with people, they said, without having those conversations recorded forever; Secret, they argued, is just a new way to do that.
    Those comments left me thinking about the term “social media” itself. What does “media” mean in this context? I’m pretty sure that most Facebook or Twitter users don’t see themselves as content-creators for media companies. They see themselves, I would guess, as individuals engaged in conversations with other individuals. But those conversations do get treated like media content in many ways. We keep hearing about social media platforms collecting the “data” or “content” created by their users, analyzing that content, tweaking it to “maximize engagement,” using it as fodder for behavioral research, etc.
    There are other channels for online conversations, of course. Texting and emailing are never claimed to constitute “content creation” for media companies. But texts and email conversations are targeted, directed. They have an address line, which has to be filled in.
    Tools like Secret, however, enable a different kind of interaction. If I understand it correctly, this is more like shouting out a window and—more often than not—getting some response (from people you know, or people in your area).  It’s hoping to be heard, and maybe acknowledged, but not seen, not known.
    A reporter for Re/Code, Nellie Bowles, once wrote about a “real-life” party organized through Secret. Some of the conversations that took place at that party were pretty odd; some were interesting; but none of them became “content” until Bowles wrote about them.
    Calling social media posts “content” turns them into a commodity, and makes them sound less personal. Calling them parts of a conversation is closer, I think, to what most people perceive them to be, and reminds us of social norms that we have around other people’s conversations—even if they’re out loud, and in public.
    It’s a distinction worth keeping in mind. 
    Photo by Storebukkebruse, used without modification under a Creative Commons license.
  •  Readings in Big Data Ethics - Updated List

    Wednesday, Dec. 10, 2014
    A recent Politico article about online education cites David Hoffman, Intel’s Director of Security Policy and Global Privacy Officer, who argues that we are “entering the phase of data ethics.” Increasingly, we hear the term “big data ethics”—often in articles that highlight various ways in which current big-data-related practices (collection, processing, sharing, selling, use for predictive purposes, etc.) are problematic.
    Below is a collection of readings that address those issues (updated on 12/10/14). It is by no means exhaustive, but we hope it will provide a useful starting point for conversations about big data ethics. If you would like to suggest further additions to this list, please include them in the Comments section below, email them to us, or tweet them to @IEthics. Thank you!
    What's Up With Big Data Ethics?
    By Jonathan H. King and Neil M. Richards
    What Big Data Needs: A Code of Ethical Practices
    By Jeffrey F. Rayport
    Big Data Is Our Generation’s Civil Rights Issue, and We Don’t Know It
    By Alistair Croll
    Injustice In, Injustice Out: Social Privilege in the Creation of Data
    By Jeffrey Alan Johnson
    Big Data and Its Exclusions
    By Jonas Lerman
    Big Data Are Made by (And Not Just a Resource for) Social Science and Policy-Making
    By Solon Barocas
    Big Data, Big Questions: Metaphors of Big Data
    By Cornelius Puschmann and Jean Burgess
    View from Nowhere: On the cultural ideology of big data
    By Nathan Jurgenson
    Big Data, Small Politics [podcast]
    By Evgeny Morozov
    The Hidden Biases in Big Data
    By Kate Crawford
    The Anxieties of Big Data
    By Kate Crawford
    How Big Data Is Unfair
    By Moritz Hardt
    Unfair! Or Is It? Big Data and the FTC’s Unfairness Jurisdiction
    By Dennis Hirsch
    How Big Data Can be Used to Fight Unfair Hiring Practices
    By Dustin Volz
    Big Data’s Disparate Impact
    By Solon Barocas and Andrew D. Selbst
    Big Data and the Underground Railroad
    By Alvaro M. Bedoya
    Punished for Being Poor: The Problem with Using Big Data in the Justice System
    By Jessica Pishko
    The Ethics of Big Data in Higher Education
    By Jeffrey Alan Johnson
    The Chilling Implications of Democratizing Big Data
    By Woodrow Hartzog and Evan Selinger
    Big Data: Seizing Opportunities, Preserving Values [White House Report]
    By John Podesta et al.
    Data and Discrimination: Collected Essays
    Edited by Seeta Peña Gangadharan with Virginia Eubanks and Solon Barocas


    Photo by Terry Freedman, used without modification under a Creative Commons license.

  •  Movie Review: Citizenfour

    Tuesday, Nov. 25, 2014
    Sona Makker is a second-year law student at Santa Clara University’s School of Law, in the process of earning a Privacy Law certificate. This piece first appeared in The Advocate--the law school's student-run newspaper.
    When The Guardian first leaked the story about the National Security Agency’s surveillance programs, I was sitting in a conference room at one of the largest privacy conferences in the world. I couldn’t help but laugh at the irony. I was surrounded by some of the world’s leading experts in this field, who have written texts and treatises on the current state of privacy law in this country. Surveillance wasn’t on the agenda for this conference, but of course, since that day, government surveillance has remained at the top of the public’s agenda.
    To some, the man behind the NSA revelations, Edward Snowden, is a hero; to others he is a traitor. Whatever you may believe, I recommend seeing Laura Poitras’ latest documentary-- Citizenfour-- which follows the story of NSA whistleblower Edward Snowden during the moments leading up to the Guardian story that exposed the U.S. government’s secret collection of Verizon cellphone data.
    The majority of the film takes place in a hotel room in Hong Kong. Snowden contacted Poitras through encrypted channels. Only after a series of anonymous e-mail exchanges did the two finally trust that the other was really who they said they were-- “assume your adversary is capable of 3 billion guesses per second,” he wrote her. Poitras and Snowden were eventually joined by Guardian reporter Glenn Greenwald, whom Snowden had contacted under the pseudonym “Citizenfour.”
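    As an aside, that “3 billion guesses per second” warning can be made concrete with a little arithmetic. The sketch below (purely illustrative; the guessing rate is the figure quoted in the film, and the entropy values are assumptions chosen for illustration) estimates the worst-case time to exhaust every passphrase of a given entropy at that rate, which is why Snowden insisted on long, high-entropy passphrases rather than ordinary passwords.

    ```python
    # Back-of-the-envelope estimate of exhaustive-search time at the
    # guessing rate Snowden cites. Real attacks use dictionaries and
    # patterns rather than pure brute force, so this is an upper bound
    # on the attacker's work, not a security guarantee.

    GUESSES_PER_SECOND = 3_000_000_000  # "3 billion guesses per second"
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def years_to_exhaust(entropy_bits: int) -> float:
        """Worst-case years to try all 2**entropy_bits possibilities."""
        return (2 ** entropy_bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

    for bits in (40, 64, 80, 128):
        print(f"{bits:3d} bits of entropy: {years_to_exhaust(bits):.3g} years")
    ```

    A 40-bit secret (roughly an eight-character lowercase password) falls in minutes at that rate, while each additional bit of entropy doubles the attacker's work.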
    Snowden guides the journalists through the piles and piles of NSA documents as they strategize how to publish and inform the American public about the government snooping programs: Verizon, AT&T, and other telecom companies sharing phone records with the NSA; FBI access to data from private web companies like Yahoo and Google; and the PRISM program that authorized the collection of e-mail, text messages, and voicemails of both foreigners and US citizens. Snowden appears very calm and quiet as he unveils all of this.
    Snowden worried that “personality journalism” would end up making the story about him, rather than the substance of his revelations. When Greenwald’s stories were published in the Guardian, the three sat together and watched as the media reacted and the story unfolded on TV. “We are building the biggest weapon for oppression in the history of mankind,” said Snowden.
    The film also contextualizes the leaks, providing background on the extent of government surveillance. Poitras interviewed William Binney, a former NSA employee who also blew the whistle -- “a week after 9/11, they began actively spying on everyone in this country,” he says. She also includes C-SPAN footage of former NSA chief Keith Alexander, who flatly denied any kind of snooping programs to Congress.
    There is a perfect scene (almost too perfect) where Poitras films Snowden’s reaction to a fire alarm that went off during one of their meetings in the hotel. It was a routine test, but Snowden questions whether or not someone staged it. The timing “seems fishy,” he says. Is the room bugged? As the viewer you start to question whether it was actually a test too, but then you ask yourself “is that even possible?” It seems so outlandish, straight out of a scene from 24 or something. With that, Poitras effectively prompts the viewer to think that the whole thing, the snooping, the surveillance, it all seems outlandish, but clearly, the evidence proves otherwise.
    I am optimistic that the law can serve as a powerful counterweight to mass surveillance, but this cannot happen without continued public pressure. The Internet is changing how we live and how we interact with our social institutions. Institutions—how we structure our everyday lives and how we produce social order—are not written in stone, but are mutable and capable of evolving alongside our own evolution as social beings. This evolution is dependent upon the will and foresight of those who are willing to speak up. Citizenfour puts a human face to Snowden, and Poitras does so without painting him as a hero or a villain, but just as a twenty-something concerned citizen whom many can relate to. “This is the first time people can see who Snowden really is,” said Glenn Greenwald after the film’s premiere. “You can decide what you think about him."
    Photo by Mike Mozart, used without modification under a Creative Commons license.
  •  Cookies and Privacy: A Delicious Counter-Experiment

    Monday, Nov. 17, 2014


    Last month, a number of stories in publications such as ProPublica, Mashable, Slate, and The Smithsonian Magazine covered an “experiment” by artist Risa Puno, who asked attendees at an art festival to disclose bits of personal information about themselves in exchange for cookies. ProPublica described the event as a “highly unscientific but delicious experiment” in which “380 New Yorkers gave up sensitive personal information—from fingerprints to partial Social Security numbers—for a cookie.” Of course, we are given no count of the number of people who refused the offer, and the article notes that “[j]ust under half—or 162 people—gave what they said were the last four digits of their Social Security numbers”—with that rather important “what they said” caveat casually buried mid-sentence.

    “To get a cookie,” according to the ProPublica story, “people had to turn over personal data that could include their address, driver's license number, phone number and mother's maiden name”—the accuracy of most of which, of course, Puno could also not confirm.
    All of this is shocking only if one assumes that people are not capable of lying (especially to artists offering cookies). But the artist declared herself shocked, and ProPublica somberly concluded that “Puno's performance art experiment highlights what privacy experts already know: Many Americans are not sure how much their personal data is worth, and that consumer judgments about what price to put on privacy can be swayed by all kinds of factors.”
    In this case, I am at least thankful for the claim that the non-experiment “highlights,” rather than “proves” something. Other stories, however, argued that the people convinced to give up information “demonstrated just how much their personal information was worth.” The Smithsonian argued that the “artistic experiment is confirmation of the idea that people really just have no sense of what information and privacy is worth other than, variably, a whole lot, or, apparently, a cookie.” The headline in The Consumerist blared, “Forget Computer Cookies: People Happily Give Up Personal Data For The Baked Kind” (though, in all fairness, The Consumerist article did highlight the “what they said” bit, and noted that the “finely-honed Brooklynite sense of modern irony may have played a role, too. Plenty of purchasers didn’t even eat their cookies…. They ‘bought’ them so they could post photos on Twitter and Instagram saying things like, ‘Traded all my personal data for a social media cookie’…”—which suggests rather more awareness than Puno gives people credit for).
    In any case, prompted by those stories, I decided that a flip-side “artistic experiment” was in order. Last week, together with my partner in privacy-protective performance art—Robert Henry, who is Santa Clara University’s Chief Information Security Officer—I set up a table in between the campus bookstore and the dining area.  Bob had recently sent out a campus-wide email reminding people to change their passwords, and we decided that we would offer folks cookies in return for password changes. We printed out a sign that read “Treats for Password Changes,” and we set out two types of treats: cookies and free USB drives. The USB drives all came pre-loaded with a file explaining the security dangers associated with picking up free USB drives. The cookies came pre-loaded with chocolate chips.
    We are now happy to report our results. First, a lot of people don’t trust any offers of free cookies. We got a lot of very suspicious looks. Second, within the space of about an hour and a half, about 110 people were easily convinced to change one of their passwords—something that is a good privacy/security practice in itself—in exchange for a cookie. Does this mean people do care about privacy? (To anticipate your question: some people pulled out their phones or computers and appeared to be changing a password right there; others promised to change a password when they got to their computer; we have no way of knowing if they did—just like Puno had no way of knowing whether much of the “information” she got was true. Collected fingerprints aside…) Third, cookies were much, much more popular than the free USB drives. Of course, the cookies were cheaper than the USB drives. Does this mean that people are aware of the security dangers posed by USB drives and are willing to “pay” for privacy?
    Responses from the students, parents, and others who stopped to talk with us and enjoy the soft warm chocolate-chip cookies ranged from “I’m a cryptography student and I change my passwords every three months” to “I only have one password—should I change that?” to “I didn’t know you were supposed to change passwords” to “But I just changed my password in response to your email” (which made Bob really happy). It was, if nothing else, an educational experience—in some cases for us, in others for them.
    So what does our “artistic experiment” prove? Absolutely nothing, of course—just like Puno’s “experiment,” which prompted so much coverage. (Or maybe they both prove that people like free cookies.)
    The danger with projects like hers, though, is that their “conclusions” are often echoed in discussions about business, regulation, or public policy in general: If people give up personal information for a cookie, the argument goes, why should we protect privacy? That is the argument that needs to be refuted—again and again. Poll after poll finds that people say they do value their privacy, are deeply concerned by its erosion, and want more laws to protect it; but some refuse to believe them and turn, instead, to “evidence” from silly “experiments.” As long as that continues, we need more flip-side “experiments”—complete, of course, with baked goods.
  •  Dickens on Big Data

    Thursday, Nov. 6, 2014
    This essay first appeared in Re/Code in May 2014.
     While some writers like to imagine what Plato would have said about the Googleplex and other aspects of current society, when it comes to life regulated and shaped by data and algorithms, Charles Dickens is the one to ask. His novel Hard Times is subtitled “For these times,” and his exploration of oversimplification through numbers certainly makes that subtitle apt again.
    Hard Times is set in a fictional Victorian mill town in which schools and factories are purportedly run based on data and reason. In Dickens’s days, the Utilitarians proposed a new take on ethics, and social policies drew on utilitarian views. Hard Times has often been called a critique of utilitarianism; however, its critique is not directed primarily at the goal of maximizing happiness and minimizing harm, but at the focus on facts/data/measurable things at the expense of everything else. Dickens was excoriating what today we would call algorithmic regulation and education.
    Explaining the impact of “algorithmic regulation,” social critic Evgeny Morozov writes about
    … the construction of “invisible barbed wire” around our intellectual and social lives. Big data, with its many interconnected databases that feed on information and algorithms of dubious provenance, imposes severe constraints on how we mature politically and socially. The German philosopher Jürgen Habermas was right to warn — in 1963 — that “an exclusively technical civilization … is threatened … by the splitting of human beings into two classes — the social engineers and the inmates of closed social institutions.”
    Dickens’s Hard Times is concerned precisely with the social engineers and the inmates of closed social institutions. Its prototypical “social engineer” is Thomas Gradgrind — a key character who advocates data-based scientific education (and nothing else):
    Thomas Gradgrind, sir. A man of realities. A man of fact and calculations. … With a rule and a pair of scales, and the multiplication table always in his pocket, sir, ready to weigh and measure any parcel of human nature, and tell you exactly what it comes to. It is a mere question of figures, a case of simple arithmetic.
    Today he would carry a cellphone in his pocket instead of a multiplication table, but he is otherwise a modern man: A proponent of a certain way of looking at the world, through big data and utopian algorithms.
    Had Dickens been writing today, would he have set his book in Silicon Valley? Writers like Dave Eggers do, in novels like The Circle, which explores more recent efforts at trying out theoretical social systems on vast populations.
    Morozov frequently does, too, as in his article “The Internet Ideology: Why We Are Allowed to Hate Silicon Valley,” where he argues that the “connection between the seeming openness of our technological infrastructures and the intensifying degree of control [by corporations, by governments, etc.] remains poorly understood.”
    Dickens was certainly concerned by the intensifying control he observed in the ethos of his age. “You,” says one of his educators to a young student, “are to be in all things regulated and governed … by fact. We hope to have, before long, a board of fact, composed of commissioners of fact, who will force the people to be a people of fact, and of nothing but fact. You must discard the word Fancy altogether. You have nothing to do with it.”
    Fancy, wonder, imagination, creativity — all qualities that can’t be accurately quantified — may indeed be downplayed (even if unintentionally) in a fully quantified educational system. In such a system, what happens to the questions that can’t be answered once and for all?
    As Dickens puts it, “Herein lay the spring of the mechanical art and mystery of educating the reason without stooping to the cultivation of the sentiments and affections. Never wonder. By means of addition, subtraction, multiplication, and division, settle everything somehow, and never wonder.”
    And what about the world beyond the school? In Hard Times, the other people subjected to “algorithmic regulation” are the workers of Coketown. Describing this fictional Victorian mill town (after having visited a real one), Dickens writes:
    Fact, fact, fact, everywhere in the material aspect of the town; fact, fact, fact, everywhere in the immaterial. The … school was all fact, and the school of design was all fact, and the relations between master and man were all fact, and everything was fact between the lying-in hospital and the cemetery, and what you couldn’t state in figures, or show to be purchaseable in the cheapest market and saleable in the dearest, was not, and never should be, world without end, Amen.
    The overarching point, for Dickens, is that many of the most important aspects of human life are not measurable with precision and not amenable to algorithmically designed policies. “It is known,” he writes in Hard Times:
    … to the force of a single pound weight, what the engine will do; but not all the calculators of the National Debt can tell me the capacity for good or evil, for love or hatred, for patriotism or discontent, for the decomposition of virtue into vice, or the reverse, at any single moment in the soul of one of these its quiet servants…. There is no mystery in it; there is an unfathomable mystery in the meanest of them, for ever.
    Does the exponentially greater power of our “calculators” challenge that perception? Does big data mean “no mystery,” even in human beings?
    We are buffeted by claims that Google or Facebook or some other data-collection entities know us better than we know ourselves (or at least better than our spouses do); we are implementing predictive policing; we ponder algorithmic approaches to education.
    As we do so, Dickens’s characters, like Thomas Gradgrind’s daughter Louisa, call out a warning from another time when we tried this approach. In a moment of crisis, Louisa (who has been raised on a steady diet of facts) tells Gradgrind, “With a hunger and thirst upon me, father, which have never been for a moment appeased; with an ardent impulse toward some region where rules, and figures, and definitions were not quite absolute; I have grown up, battling every inch of my way.”
    In Hard Times, all the children who are shaped by the utilitarian fact-based approach, with no room for wonder and fancy, are stifled and stilted. Eventually, even Gradgrind realizes this. “I only entreat you to believe … I have meant to do right,” he tells his daughter. Dickens adds, “He said it earnestly, and to do him justice he had. In gauging fathomless deeps with his little mean excise-rod, and in staggering over the universe with his rusty stiff-legged compasses, he had meant to do great things.” But this is not a Disney ending; Gradgrind’s remorse does not reverse the damage done to Louisa and others like her. And the lives of the algorithmically governed people of Coketown are miserable.
    In a recent Scientific American blog post, psychologist Adam Waytz uses the term “quantiphobia” in reference to the claim that creativity is unquantifiable. He describes himself as “bugged” by such claims, and wonders “where such quantiphobia originates.” He then writes that
    … both neural and self-report evidence show that people tend to represent morals like preferences more than like facts. Getting back to the issue of quantiphobia, my sense is that when numbers are appended to issues with moral relevance, this moves them out of the realm of preference and into the realm of fact, and this transition unnerves us.
    Is it irrational to be “unnerved” by this transition? What Waytz fails to address is the assumption that numbers or facts provide greater or more objective truth than unquantified “preferences” do. Of course, the process through which “numbers are appended to issues” is itself subjective — expressive of preferences. What we choose to measure, and how, is subjective. How we analyze the resulting numbers is subjective. The movement into the realm of fact is not equivalent to a movement into the realm of truth. The refusal to append numbers to certain things is not “quantiphobia”—it is wisdom.
    This is not to dismiss the very real benefits that can be derived from big-data analytics and algorithmic functions in many contexts. We can garner those and still acknowledge that certain things may be both extremely important and unmeasurable, and that our policies and approaches should reflect that reality.
    Dickens throws down a gauntlet for our times: “Supposing we were to reserve our arithmetic for material objects, and to govern these awful unknown quantities [i.e., human beings] by other means!”
    Dear Reader, if you are a “quant,” please read Hard Times.  And no, don’t count the lines.
    (Photo by Wally Gobetz, used without modification under a Creative Commons license.)



  •  “Practically as an accident”: on “social facts” and the common good

    Thursday, Oct. 30, 2014


    In the Los Angeles Review of Books, philosopher Evan Selinger takes issue with many of the conclusions (and built-in assumptions) compiled in Dataclysm—a new book by Christian Rudder, who co-founded the dating site OKCupid and now heads the site’s data analytics team. While Selinger’s whole essay is really interesting, I was particularly struck by his comments on big data and privacy. 

    “My biggest issue with Dataclysm,” Selinger writes,
    lies with Rudder’s treatment of surveillance. Early on in the book he writes: ‘If Big Data’s two running stories have been surveillance and money, for the last three years I’ve been working on a third: the human story.’ This claim about pursuing a third path isn’t true. Dataclysm itself is a work of social surveillance.
    It’s tempting to think that different types of surveillance can be distinguished from one another in neat and clear ways. If this were the case, we could say that government surveillance only occurs when organizations like the National Security Agency do their job; corporate surveillance is only conducted by companies like Facebook who want to know what we’re doing so that they effectively monetize our data and devise strategies to make us more deeply engaged with their platform; and social surveillance only takes place in peer-to-peer situations, like parents monitoring their children’s phones, romantic partners scrutinizing each other’s social media feeds….
    But in reality, surveillance is defined by fluid categories.
    While each category of surveillance might include both ethical and unethical practices, the point is that the boundaries separating the categories are porous, and the harms associated with surveillance might seep across all of them.
    Increasingly, when corporations like OKCupid or Facebook analyze their users’ data and communications in order to uncover “social facts,” they claim to be acting in the interest of the common good, rather than pursuing self-serving goals. They claim to give us clear windows into our society. The subtitle of Rudder’s book, for example, is “Who We Are (When We Think No One’s Looking).” As Selinger notes,
    Rudder portrays the volume of information… as a gift that can reveal the truth of who we really are. … [W]hen people don’t realize they’re lab rats in Rudder’s social experiments, they reveal habits—‘universals,’ he even alleges…  ‘Practically as an accident,’ Rudder claims, digital data can now show us ‘how we fight, how we love, how we age, who we are, and how we’re changing.’
    Of course, Rudder should confine his claims to the “we” who use OKCupid (a 2013 study by the Pew Research Center found that 10% of Americans report having used an online dating service). Facebook has a stronger claim to having a user base that reflects all of “us.”  But there are other entities that sit on even vaster data troves than Facebook’s, even more representative of U.S. society overall. What if a governmental organization were to decide to pursue the same selfless goals, after carefully ensuring that the data involved would be carefully anonymized and presented only in the aggregate (akin to what Rudder claims to have done)?
    In the interest of better “social facts,” of greater insight into our collective mindsets and behaviors, should we encourage (or indeed demand) that the NSA publish “Who Americans Are (When They Think No One’s Watching)”? To be followed, perhaps, by a series of “Who [Insert Various Other Nationalities] Are (When They Think No One’s Watching)”? Think of all the social insights and common good that would come from that!
    In all seriousness, as Selinger rightly points out, the surveillance behind such no-notice-no-consent research comes at great cost to society:
    Rudder’s violation of the initial contextual integrity [underpinning the collection of OKCupid user data] puts personal data to questionable secondary, social use. The use is questionable because privacy isn’t only about protecting personal information. People also have privacy interests in being able to communicate with others without feeling anxious about being excessively monitored. … [T]he resulting apprehension inhibits speech, stunts personal growth, and possibly even disinclines people from experimenting with politically relevant ideas.
    With every book subtitled “Who We Are (When We Think No One’s Looking),” we, the real we, become more wary, more likely to assume that someone’s always looking. And as many members of societies that have lived with excessive surveillance have attested, that’s not a path to achieving the good life.
    (Photo by Henning Muhlinghaus, used without modification under a Creative Commons license.)


  •  Who (or What) Is Reading Whom: An Ongoing Metamorphosis

    Thursday, Oct. 23, 2014
    If you haven’t already read the Wall Street Journal article titled “Your E-Book Is Reading You,” published in 2012, it’s well worth your time. It might even be worth a second read, since our understanding of many Internet-related issues has changed substantially since 2012.
    I linked to that article in a short piece that I wrote, which was published yesterday in Re/Code: “Metamorphosis.”  I hope you’ll read that, too—and we’d love to get your comments on that story either at Re/Code or in the Comments section here!
    And finally, just a few days ago, a new paper by Jules Polonetsky and Omer Tene (both from the Future of Privacy Forum) was released through SSRN: “Who Is Reading Whom Now: Privacy in Education from Books to MOOCs.” This is no bite-sized exploration, but an extensive overview of the promises and challenges of technology-driven innovations in education—including the ethical implications of the uses of both “small data” and “big data” in this particular context.
    To play with yet another title—there are significant and ongoing shifts in “the way we read now”…

    (Photo by Jose Antonio Alonso, used without modification under a Creative Commons license.)