Santa Clara University


Ethical Issues in the Online World

Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.

The following postings have been filtered by the tag “privacy.”
  •  Privacy Crimes Symposium: A Preview

    Monday, Oct. 5, 2015
    Daniel Suvor

    Tomorrow, Santa Clara University will host a free half-day symposium titled “Privacy Crimes: Definition and Enforcement.” The event is co-sponsored by the Santa Clara District Attorney’s Office, the High Tech Law Institute, and the Markkula Center for Applied Ethics. (Online registration is now closed, but if you’d still like to attend, you can email …)

    The event will open with remarks from Santa Clara DA Jeff Rosen and a keynote by Daniel Suvor, who is the California attorney general’s current policy advisor. A recent Fusion article detailing the latest efforts to criminalize and prosecute “revenge porn” quotes Suvor, who explains that the attorney general “sees this as the next front in the violence against women category of crime. … She sees it as the 21st century incarnation of domestic violence and assaults against women, now taken online.”

    The Fusion article also points out that California was “the second state to put a revenge porn law on the books…. In the past two years, 23 other states have followed suit.”

    Of course, revenge porn is not the only crime that impacts privacy, and legislative responses are not the only way to combat such crimes.  The Privacy Crimes symposium will feature panel discussions that will address a broad variety of related questions: How are privacy interests harmed? Why (and when) should we turn to criminal law in response? What types of criminal charges are currently used in the prosecutions that involve such harms? Are current laws sufficiently enforced? Are the current laws working well? Should some laws be changed? Do we need new ones? Are there other ways that would also work (or work better) to minimize privacy harms? Are there better ways to protect competing privacy interests in the criminal justice system?

    We are looking forward to a thought-provoking discussion and many more questions from audience members! And we are grateful to the International Association of Privacy Professionals, the Electronic Frontier Foundation, and the Identity Theft Council for their help in publicizing this event.

  •  The Ethics of Ad-Blocking

    Wednesday, Sep. 23, 2015
    (AP Photo/Damian Dovarganes)

    As the number of people who are downloading ad-blocking software has grown, so has the number of articles discussing the ethics of ad-blocking. And interest in the subject doesn’t seem to be waning: a recent article in Mashable was shared more than 2,200 times, and articles about the ethics of ad-blocking have also appeared in Fortune (“You shouldn’t feel bad about using an ad blocker, and here’s why” and “Is using ad blockers morally wrong? The debate continues”), Digiday (“What would Kant do? Ad blocking is a problem, but it’s ethical”), The New York Times (“Enabling of Ad Blockers in Apple’s iOS9 Prompts Backlash”), as well as many other publications.

    Mind you, this is not a new debate. People were discussing it in the xkcd forum in 2014. The BBC wrote about the ethics of ad blocking in 2013. Back in 2009, Farhad Manjoo wrote about what he described as a more ethical “approach to fair ad-blocking”; he concluded his article with the lines, “Ad blocking is here to stay. But that doesn't have to be the end of the Web—just the end of terrible ads.”
    As it turns out, in 2015, we still have terrible ads (see Khoi Vinh’s blog post, “Ad Blocking Irony”). And, as a recent report by PageFair and Adobe details, the use of ad blockers “grew by 48% during the past year, increasing to 45 million average monthly active users” in the U.S. alone.
    In response, some publishers are accusing people who install (or build) ad blockers of theft. They are also accusing them of breaching their “implied contracts” with sites that offer ad-supported content (but see Marco Arment’s recent blog post, “The ethics of modern web ad-blocking,” which demolishes this argument, among other anti-blocker critiques).
    Many of the recent articles present both sides of the ethics debate. However, most of the articles on the topic claim that the main reasons that users are installing ad blockers are the desires to escape “annoying” ads or to improve browsing speeds (since ads can sometimes slow downloads to a crawl). What many articles leave out entirely, or gloss over in a line or two, are two other reasons why people (and especially those who understand how the online advertising ecosystem works) install ad blockers: For many of those users, the primary concerns are the tracking behind “targeted” ads, and the meteoric growth of “malvertising”—advertising used as vectors for malware.
    When it comes to the first concern, most of the articles about the ethics of ad-blocking simply conflate advertising and tracking—as if the tracking is somehow inherent in advertising. But the two are not the same, and it is important that we reject this false equivalence. If advertisers continue to push for more invasive consumer tracking, ad blocker usage will surge: When the researchers behind the PageFair and Adobe 2015 report asked “respondents who are not currently using an ad blocking extension … what would cause them to change their minds,” they found that “[m]isuse of personal information was the primary reason to enable ad blocking” (see p. 12 of the report). Now, it may not be clear exactly what the respondents meant by “misuse of personal information,” but that is certainly not a reference to either annoying ads or clogged bandwidth.
    As for the rise of “malvertising,” it was that development that led me to say to a Mashable reporter that if this continues unabated we might all eventually end up with an ethical duty to install ad blockers—in order to protect ourselves and others who might then be infected in turn.
    Significantly, the dangers of malvertising are connected to those of the more “benign” tracking. As a Wired article explains,

    it is modern, more sophisticated ad networks’ granular profiling capabilities that really create the malvertising sweet spot. Today ad networks let buyers configure ads to appear according to Web surfers’ precise browser or operating system types, their country locations, related search keywords and other identifying attributes. Right away we can see the value here for criminals borrowing the tactics of savvy marketers. … Piggybacking on rich advertising features, malvertising offers persistent, Internet-scale profiling and attacking. The sheer size and complexity of online advertising – coupled with the Byzantine nature of who is responsible for ad content placement and screening – means attackers enjoy the luxury of concealment and safe routes to victims, while casting wide nets to reach as many specific targets as possible.

    As one cybersecurity expert tweeted, sarcastically rephrasing the arguments of some who contend that installing ad-blocking software is unethical, “If you love content then you must allow random anonymous malicious entities to run arbitrary code on your devices” (@thegrugq).

    Now, if you clicked on the link to the Wired article cited above, you might or might not have noticed a thin header above the headline. The header reads, “Sponsor content.” Yup, that entire article is a kind of advertising, too. A recent New York Times story about the rise of this new kind of “native advertising” is titled “With Technology, Avoiding Both Ads and the Blockers.” (Whether such “native experiences” are better than the old kind of ads is a subject for another ethics debate; the FTC recently held a workshop about this practice and came out with more questions than answers.)

    Of course, not all online ads incorporate tracking, not all online ads bring malware, and many small publishers are bearing the brunt of a battle about practices over which they have little (if any) control. Unfortunately, for now, the blocking tools available are blunt instruments. Does that mean, though, that until the development of more nuanced solutions, the users of ad-supported sites should continue to absorb the growing privacy and security risks?

    Bottom line: discussing the ethics of ad-blocking without first clarifying the ethics of the ecosystem in which it has developed (and the history of the increasing harms that accompany many online ads) is misleading.

  •  Internet Ethics: Fall 2015 Events

    Tuesday, Sep. 1, 2015

    Fall will be here soon, and with it come three MCAE events about three interesting Internet-related ethical (and legal) topics. All of the events are free and open to the public; links to more details and registration forms are included below, so you can register today!

    The first, on September 24, is a talk by Santa Clara Law professor Colleen Chien, who recently returned from her appointment as White House senior advisor for intellectual property and innovation. Chien’s talk, titled “Tech Innovation Policy at the White House: Law and Ethics,” will address several topics—including intellectual property and innovation (especially the efforts toward patent reform); open data and social change; and the call for “innovation for all” (i.e., innovation in education, the problem of connectivity deserts, the need for tech inclusion, and more). Co-sponsored by the High Tech Law Institute, this event is part of our ongoing “IT, Ethics, and Law” lecture series, which recently included presentations on memory, forgiveness, and the “right to be forgotten”; ethical hacking; and the ethics of online price discrimination. (If you would like to be added to our mailing list for future events in this series, please email …)

    The second, on October 6, is a half-day symposium on privacy law and ethics and the criminal justice system. Co-sponsored by the Santa Clara District Attorney’s Office and the High Tech Law Institute, “Privacy Crimes: Definition and Enforcement” aims to better define the concept of “privacy crimes,” assess how such crimes are currently being addressed in the criminal justice system, and explore how society might better respond to them—through new laws, different enforcement practices, education, and other strategies. The conference will bring together prosecutors, defense attorneys, judges, academics, and victims’ advocates to discuss three main questions: What is a “privacy crime”? What’s being done to enforce laws that address such crimes? And how should we balance the privacy interests of the people involved in the criminal justice system? The keynote speaker will be Daniel Suvor, chief of policy for California’s Attorney General Kamala Harris. (This event will qualify for 3.5 hours of California MCLE, as well as IAPP continuing education credit; registration is required.)

    Finally, on October 29 the Center will host Antonio Casilli, associate professor of digital humanities at Telecom Paris Tech. In his talk, titled “How Can Somebody Be A Troll?,” Casilli will ask some provocative questions about the line between actual online trolls and, as he puts it, “rightfully upset Internet users trying to defend their opinions.” In the process, he will discuss the arguments of a new generation of authors and scholars who are challenging the view that trolling is a deviant behavior or the manifestation of perverse personalities; such writers argue that trolling reproduces anthropological archetypes; highlights the intersections of different Internet subcultures; and interconnects discourses around class, race, and gender.

    Each of the talks and panels will conclude with question-and-answer periods. We hope to see you this fall and look forward to your input!

    (And please spread the word to any other folks you think might be interested.)


  •  Nothing to Hide? Nothing to Protect?

    Wednesday, Aug. 19, 2015

    Despite numerous articles and at least one full-length book debunking the premises and implications of this particular claim, “I have nothing to hide” is still a common reply offered by many Americans when asked whether they care about privacy.

    What does that really mean?

    An article by Conor Friedersdorf, published in The Atlantic, offers one assessment. It is titled “This Man Has Nothing to Hide—Not Even His Email Password.” (I’ll wait while you consider changing your email password right now, and then decide to do it some other time.) The piece details Friedersdorf’s interaction with a man named Noah Dyer, who responded to the writer’s standard challenge—"Would you prove [that you have nothing to hide] by giving me access to your email accounts, … along with your credit card statements and bank records?"—by actually providing all of that information. Friedersdorf then considers the ethical implications of Dyer’s philosophy of privacy-lessness, while carefully navigating the ethical shoals of his own decisions about which of Dyer’s information to look at and which to publish in his own article.

    Admitting to a newfound though limited respect for Dyer’s commitment to drastic self-revelation, Friedersdorf ultimately reaches, however, a different conclusion:

    Since Dyer granted that he was vulnerable to information asymmetries and nevertheless opted for disclosure, I had to admit that, however foolishly, he could legitimately claim he has nothing to hide. What had never occurred to me, until I sat in front of his open email account, is how objectionable I find that attitude. Every one of us is entrusted with information that our family, friends, colleagues, and acquaintances would rather that we kept private, and while there is no absolute obligation for us to comply with their wishes—there are, indeed, times when we have a moral obligation to speak out in order to defend other goods—assigning the privacy of others a value of zero is callous.

    I think it is more than callous, though. It is an abdication of our responsibility to protect others, whose calculations about disclosure and risk might be very different from our own. Saying “I have nothing to hide” is tantamount to saying “I have nothing and no one to protect.” It is either an acknowledgment of a very lonely existence or a devastating failure of empathy and imagination.

    As Friedersdorf describes him, Dyer is not a hermit; he has interactions with many people, at least some of whom (including his children) he appears to care about. And, in his case, his abdication is not complete; it is, rather, a shifting of responsibility. Because while he did disclose much of his personal information (which of course included the personal details of many others who had not been consulted, and whose “value system,” unlike his own, may not include radical transparency), Dyer wrote to Friedersdorf, the reporter, “[a]dditionally, while you may paint whatever picture of me you are inclined to based on the data and our conversations, I would ask you to exercise restraint in embarrassing others whose lives have crossed my path…”

    In other words, “I have nothing to hide; please hide it for me.”

    “I have nothing to hide” misses the fact that no person is an island, and that much of every person’s data is tangled with, interwoven with, and created in conjunction with other people’s.

    The theme of the selfishness or lack of perspective embedded in the “nothing to hide” response is echoed in a recent commentary by lawyer and privacy activist Malavika Jayaram. In an article about India’s Aadhar ID system, Jayaram quotes Edward Snowden, who in a Reddit AMA session once said that “[a]rguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Jayaram builds on that, writing that the “nothing to hide” argument “locates privacy at an individual (some would say selfish) level and ignores the collective, societal benefits that it engenders and protects, such as the freedom of speech and association.”

    She rightly points out, as well, that the “’nothing to hide’ rhetoric … equates a legitimate desire for space and dignity to something sinister and suspect” and “puts the burden on those under surveillance … , rather than on the system to justify why it is needed and to implement the checks and balances required to make it proportional, fair, just and humane.”

    But there might be something else going on, at the same time, in the rhetorical shift from “privacy” to “something to hide”—a kind of deflection, of finger-pointing elsewhere: There, those are the people who have “something to hide”—not me! Nothing to see here, folks who might be watching. I accept your language, your framing of the issue, and your conclusions about the balancing of values or rights involved. Look elsewhere for troublemakers.

    Viewed this way, the “nothing to hide” response is neither naïve nor simplistically selfish; it is an effort—perhaps unconscious—at camouflage. The opposite of radical transparency.

    The same impetus might present itself in a different, also frequent response to questions about privacy and surveillance: “I’m not that interesting. Nobody would want to look at my information. People could look at information about me and it would all be banal.” Or maybe that is, for some people, a reaction to feelings of helplessness. If every day people read articles advising them about steps to take to protect their online privacy, and every other day they read articles explaining how those defensive measures are defeated by more sophisticated actors, is it surprising that some might try to reassure themselves (if not assure others) that their privacy is not really worth breaching?

    But even if we’re not “interesting,” whatever that means, we all do have information, about ourselves and others, that we need to protect. And our society gives us rights that we need to protect, too—for our sake and others’.

    Photo by Hattie Stroud, used without modification under a Creative Commons license.

  •  Death and Facebook

    Wednesday, Jul. 29, 2015

    A number of recent articles have noted Facebook’s introduction of a feature that allows users to designate “legacy contacts” for their accounts. In an extensive examination titled “Where Does Your Facebook Account Go When You Die?,” writer Simon Davis explains that, until recently, when Facebook was notified that one of its users had died, the company would “memorialize” that person’s account (in part in order to keep the account from being hacked). What “memorialization” implies has changed over time. Currently, according to Davis, memorialized accounts retain the privacy and audience settings last set by the user, while the contact information and ability to post status updates are stripped out. Since February, however, users can also designate a “legacy contact” person who “can perform certain functions on a memorialized account.” As Davis puts it, “Now, a trusted third party can approve a new friend request by the distraught father or get the mother’s input on a different profile image.”

    Would you give another person the power to add new “friends” to your account or change the profile image after your death? Which raises the question: what is a Facebook account?

    In his excellent article, Davis cites Vanessa Callison-Burch, the Facebook product manager who is primarily responsible for the newly-added legacy account feature. Explaining some of the thinking behind it, she argues that a Facebook account “is a really important part of people’s identity and is a community space. Your Facebook account is incredibly personalized. It’s a community place for people to assemble and celebrate your life.” She adds that “there are certain things that that community of people really need to be supported in that we at Facebook can’t make the judgment call on.”

    While I commend Facebook for its (new-found?) modesty in feature design, and its recognition that the user’s wishes matter deeply, I find myself wondering about that description of a Facebook account as “a community space.” Is it? I’ve written elsewhere that posting on Facebook “echoes, for some of us, the act of writing in a journal.” A diary is clearly not a “community space.” On Facebook, however, commenters on one user’s posts get to comment on other commenters’ comments, and entire conversations develop among a user’s “friends.” Sometimes friends of friends “friend” each other.  So, yes, a community is involved. But no, the community’s “members” don’t get to decide what your profile picture should be, or whether or not you should “friend” your dad. Who should?

    In The Guardian, Stuart Heritage explores that question in a much lighter take on the subject of “legacy contacts,” titled “To my brother I leave my Facebook account ... and any chance of dignity in death.” As he makes clear, “nominating a legacy contact is harder than it looks.”

    Rather than simply putting that responsibility on a trusted person, Simon Davis suggests that Facebook should give users the opportunity to create an advance directive with specific instructions about their profile: “who should be able to see it, who should be able to send friend requests, and even what kind of profile picture or banner image the person would want displayed after death.” That alternative would respect the user’s autonomy even more than the current “legacy contact” does.

    But there is another option that perhaps respects that autonomy the most: Facebook currently also allows a user to check a box specifying that his or her account be simply deleted after his or her death. Heritage writes that “this is a hard button to click. It means erasing yourself.” Does it? Maybe it just signals a different perspective on Facebook. Maybe, for some, a Facebook account is neither an autobiography nor a guest book. Maybe the users who choose that delete option are not meanly destroying a “community space,” but ending a conversation.

    Photo by Lori Semprevio, used without modification under a Creative Commons license.

  •  Privacy and Diversity

    Friday, Jun. 12, 2015
    Teams that work on privacy-protective features for our online lives are much more likely to be effective if those teams are diverse, in as many ways as possible.
    Here is what led me to this (maybe glaringly obvious) insight:
    First, an event that I attended at Facebook’s headquarters, called “Privacy@Scale,” which brought together academics, privacy practitioners (from both the legal and the tech sides), regulators, and product managers. (We had some great conversations.)
    Second, a study that was recently published with much fanfare (and quite a bit of tech media coverage) by the International Association of Privacy Professionals, showing that careers in privacy are much more likely than others to provide gender pay parity—and including the observation that there are more women than men in the ranks of Chief Privacy Officers.
    Third, a story from a law student who had interned on the privacy team of a large Silicon Valley company, who mentioned sitting in a meeting and thinking to herself that something being proposed as a feature would never have been accepted in the culture that she came from—would in fact have been somewhat taboo, and might have upset people if it were broadly implemented, rather than offered as an opt-in—and realizing that none of the other members of the team understood this.
    And fourth, a question that several commenters asked earlier this year when Facebook experienced its “It’s Been a Great Year” PR disaster (after a developer wrote about the experience of seeing his daughter’s face auto-inserted by Facebook algorithms under a banner reading “It’s Been a Great Year!” when in fact his daughter had died that year): Had there been any older folks on the team that released that feature? If not, would the perspective of some older team members have tempered the roll-out, provided a word of caution?
    Much has been said, for a long time, about how it’s hard to “get privacy right” because privacy is all about nuance and gray areas, and conceptions of privacy vary so much among individuals, cultures, contexts, etc.  Given that, it makes sense that diverse teams working on privacy-enhancing features would be better able to anticipate and address problems. Not all problems, of course—diversity would not be a magic solution. It would, however, help.
    Various studies have recently shown that diversity on research teams leads to better science, that cultural diversity on global virtual teams has a positive effect on decision-making, that meaningful gender diversity in the workplace improves companies’ bottom line, and that “teams do better when they are composed of people with the widest possible range of personalities, even though it takes longer for such psychologically diverse teams to achieve good cooperation.”
    In Silicon Valley, the talk about team building tends to be about “culture fit” (or, in more sharply critical terms, about “broculture”). As it turns out, though, the right “culture fit” for a privacy team should probably include diversity (of background, gender, age, skills, and even personality), combined with an understanding that one’s own perspectives are not universal; the ability to listen; and curiosity about and respect for difference.
    Photo by Sean MacEntee, used without modification under a Creative Commons license.
  •  Which Students? Which Rights? Which Privacy?

    Friday, May. 29, 2015


    Last week, researcher danah boyd, who has written extensively about young people’s attitudes toward privacy (and debunked many pervasive “gut feelings” about those attitudes and related behaviors), wrote a piece about the several bills now working their way through Congress that aim to protect “student privacy.” boyd is not impressed. While she agrees that reform of current educational privacy laws is much needed, she writes, "Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate."
    boyd highlights four different “threat models” and argues that the proposed bills do nothing to address two of those: the “Consumer Finance Threat Model,” in which student data would “fuel the student debt ecosystem,” and the “Criminal Justice Threat Model,” in which such data would help build “new policing architectures.”
    As boyd puts it, “the risks that we’re concerned about are shaped by the fears of privileged parents.”
    In a related post called “Students: The one group missing from student data privacy laws and bills,” journalist Larry Magid adds that the proposed bills “are all about parental rights but only empower students once they turn 18.” Referencing boyd’s research, he broadens the conversation to argue that “[i]t’s about time we start to respect privacy, free speech rights and intellectual property rights of children.”
    While free speech and property rights are important, the protection of privacy in particular is essential for the full development of the self. The fact that children and young people need some degree of privacy not just from government or marketers but from their own well-intentioned family members has been particularly obscured by pervasive tropes like “young people today don’t care about privacy.”
    Of course, one way to combat those false tropes is to talk to young people directly. Just ask them: are there some things they keep to themselves, or share only with a few close friends or family members? And no, the fact that some of them post lots of things on social media that their elders might not does not mean that they “don’t care about privacy.” It just means that privacy boundaries vary—from generation to generation, from culture to culture, from context to context, from individual to individual.
    The best recent retort to statements about young people and privacy comes from security expert Bruce Schneier, who answered a question from an interviewer with some questions of his own: "Who are all these kids who are growing up without the concept of digital privacy? Is there even one? … All people care deeply about privacy—analog, digital, everything—and kids are especially sensitive about privacy from their parents, teachers, and friends. … Privacy is a vital aspect of human dignity, and we all value it."
    Given that, boyd’s critique of current efforts aimed at protecting student privacy is a call to action: Policy makers (and, really, all of us) need to better understand the true threats, and to better protect those who are most vulnerable in a “hypersurveilled world.”


    Photo by Theen Moy, used without modification under a Creative Commons license.

  •  BroncoHack 2015 (Guest Post)

    Friday, May. 8, 2015

    Last weekend, Santa Clara University hosted BroncoHack 2015—a hackathon organized by the OMIS Student Network, with the goal of creating “a project that is innovative in the arenas of business and technology” while also reflecting the theme of “social justice.” The Markkula Center for Applied Ethics was proud to be one of the co-sponsors of the event.

    The winning project was “PrivaSee”—a suite of applications that helps prevent the leakage of sensitive and personally identifiable student information from schools’ networks. In the words of its creators, “PrivaSee offers a web dashboard that allows schools to monitor their network activity, as well as a mobile application that allows parents to stay updated about their kids’ digital privacy. A network application that sits behind the router of a school's network continuously monitors the network packets, classifies threat levels, and notifies the school administration (web) and parents (mobile) if it discovers student data being leaked out of the network, or if there are any unauthorized apps or services being used in the classrooms that could potentially syphon private data. For schools, it offers features like single dashboard monitoring of all kids and apps. For parents, it provides the power of on-the-move monitoring of all their kids’ privacy and the ability to chat with school administration in the event of any issues. Planned extensions like 'privacy bots' will crawl the Internet to detect leaked data of students who might have found ways to bypass a school's secure networks. The creators of PrivaSee believe that cybersecurity issues in connected learning environments are a major threat to kids' safety, and they strive to create a safer ecosystem.”
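
    To make the architecture the team describes a bit more concrete, here is a minimal sketch (our illustration, not PrivaSee’s actual code) of the kind of packet-monitoring loop such a network application might run. It assumes the Python scapy library; the PII patterns, the "eth0" interface name, and the notify() hook are all placeholders.

    # Minimal illustrative sketch, not PrivaSee's actual code: watch packets on the
    # school-facing interface and flag payloads that look like student PII.
    # Requires scapy and root privileges; patterns, interface, and notify() are assumptions.
    import re
    from scapy.all import sniff, Raw, IP

    PII_PATTERNS = {
        "ssn": re.compile(rb"\b\d{3}-\d{2}-\d{4}\b"),      # e.g. 123-45-6789
        "student_id": re.compile(rb"\bSID-\d{6}\b"),       # hypothetical ID format
    }

    def notify(threat_level, label, packet):
        # Stand-in for the web-dashboard (school) and mobile (parent) alerts described above.
        print(f"[{threat_level}] possible {label} leak: "
              f"{packet[IP].src} -> {packet[IP].dst}")

    def inspect(packet):
        # Only unencrypted payloads can be scanned this simply.
        if packet.haslayer(IP) and packet.haslayer(Raw):
            payload = bytes(packet[Raw].load)
            for label, pattern in PII_PATTERNS.items():
                if pattern.search(payload):
                    notify("HIGH", label, packet)

    if __name__ == "__main__":
        # "eth0" stands in for the router-side interface of the school network.
        sniff(iface="eth0", filter="ip", prn=inspect, store=False)

    A real deployment would, of course, need far more than a regex scan of raw payloads—handling encrypted traffic, classifying threat levels, and grappling with the privacy questions raised by the monitoring itself.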

    From the winning team:

    "Hackathons are always fun and engaging. Personally, I put this one at the top of my list. I feel lucky to have been part of this energetic, multi-talented team, and I will never forget the fun we had. Our preparations started a week ago, brainstorming various ideas. We kick-started the event with analysis of our final idea and the impact it can create, rather than worrying about any technical challenges that might hit us. We divided our work, planned our approach, and enjoyed every moment while shaping our idea to a product. Looking back, I am proud to attribute our success to my highly motivated and fearless team with an unending thirst to bring a vision to reality. We are looking forward to testing our idea in real life and helping to create a safer community." - Venkata Sai Kishore Modalavalasa, Computer Science & Engineering Graduate Student, Santa Clara University

    "My very first hackathon, and an amazing experience indeed! The intellectually charged atmosphere, the intense coding, and the serious competition kept us on our toes throughout the 24 hours. Kudos to ‘Cap'n Sai,’ who guided us and helped take the product to near perfection. Kudos to the rest of my teammates, who coded diligently through the night. And finally, thank you to the organizers and sponsors of BroncoHack 2015, for having provided us with a platform to turn an idea into a functional security solution that can help us make a difference." - Ashish Nair, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco-hack was the first hackathon I ever attended, and it turned to be an amazing experience. After pondering over many ideas, we finally decided to stick with our app: 'PrivaSee'. The idea was to come up with a way to protect kids from sending sensitive digital information that can potentially be compromised over the school’s network. Our objective was to build a basic working model (minimum viable product) of the app. It was a challenge to me because I was not experienced in the particular technical skill-set that was required to build my part of the app. This experience has most definitely strengthened my ability to perform and learn in high pressure situations. I would definitely like to thank the organizers for supporting us throughout the event. They provided us with whatever our team needed and were very friendly about it. I plan to focus on resolving more complicated issues that still plague our society and carry forward and use what I learnt from this event." - Manish Kaushik, Computer Science & Engineering Graduate Student, Santa Clara University

    "Bronco Hack 2015 was my first Hackathon experience. I picked up working with Android App development. Something that I found challenging and fun to do was working with parse cloud and Android Interaction. I am really happy that I was able to learn and complete the hackathon. I also find that I'm learning how to work and communicate effectively in teams and within time bounds. Everyone in the team comes in with different skill levels and you really have to adapt quickly in order to be productive as a team and make your idea successful within 24hrs." - Prajakta Patil, Computer Science & Engineering Graduate Student, Santa Clara University

    "I am extremely glad I had this opportunity to participate in Bronco Hack 2015. It was my first ever hackathon, and an eye-opening event for me. It is simply amazing how groups of individuals can come up with such unique and extremely effective solutions for current issues in a matter of just 24 hours. This event helped me realize that I am capable of much more than I expected. It was great working with the team we had, and special thanks to Captain Sai for leading the team to victory. " - Tanmay Kuruvilla, Computer Science & Engineering Graduate Student, Santa Clara University

    Congratulations to all of the BroncoHack participants—and yes, BroncoHack will return next Spring!

  •  A New Ethics Case Study

    Friday, Apr. 24, 2015
    A Google receptionist works at the front desk in the company's office in this Oct. 2, 2006, file photo. (AP Photo/Mark Lennihan, File)

    In October 2014, Google inaugurated a Transparency Report detailing its implementation of the European court decision generally (though mistakenly) described as being about “the right to be forgotten.” To date, according to the report, Google has received more than 244,000 requests for removals of URLs from certain searches involving names of EU residents. Aside from such numbers, the Transparency Report includes examples of requests received—noting, in each case, whether or not Google complied with the request.

    The “right to be forgotten” decision and its implementation have raised a number of ethical issues. Given that, we thought it would be useful to draw up an ethics case study that would flesh out those issues; we published that yesterday: see “Removing a Search Result: An Ethics Case Study.”

    What would you decide, if you were part of the decision-making team tasked with evaluating the request described in the case study?


  •  Grant from Intel's Privacy Curriculum Initiative Will Fund New SCU Course

    Friday, Mar. 27, 2015

    Exciting news! A new course now being developed at Santa Clara University, funded by a $25,000 grant from Intel Corporation's Privacy Curriculum Initiative, will bring together engineering, business, and law students to address topics such as privacy by design, effective and accurate privacy policies, best‐practice cybersecurity procedures, and more. Ethics will be an important part of the discussion, and the curriculum will be developed by the High Tech Law Institute in conjunction with Santa Clara University’s School of Engineering, the Leavey School of Business, and the Markkula Center for Applied Ethics.

    More details here!

