Ethical Issues in the Online World
Welcome to the blog of the Internet Ethics program at the Markkula Center for Applied Ethics, Santa Clara University. Program Director Irina Raicu will be joined by various guests in discussing the ethical issues that arise continuously on the Internet; we hope to host a robust conversation about them, and we look forward to your comments.
Wednesday, Jul. 29, 2015
A number of recent articles have noted Facebook’s introduction of a feature that allows users to designate “legacy contacts” for their accounts. In an extensive examination titled “Where Does Your Facebook Account Go When You Die?,” writer Simon Davis explains that, until recently, when Facebook was notified that one of its users had died, the company would “memorialize” that person’s account (in part in order to keep the account from being hacked). What “memorialization” implies has changed over time. Currently, according to Davis, memorialized accounts retain the privacy and audience settings last set by the user, while the contact information and ability to post status updates are stripped out. Since February, however, users can also designate a “legacy contact” person who “can perform certain functions on a memorialized account.” As Davis puts it, “Now, a trusted third party can approve a new friend request by the distraught father or get the mother’s input on a different profile image.”
Would you give another person the power to add new “friends” to your account or change the profile image, after your death? Which raises the question: what is a Facebook account?
In his excellent article, Davis cites Vanessa Callison-Burch, the Facebook product manager who is primarily responsible for the newly-added legacy account feature. Explaining some of the thinking behind it, she argues that a Facebook account “is a really important part of people’s identity and is a community space. Your Facebook account is incredibly personalized. It’s a community place for people to assemble and celebrate your life.” She adds that “there are certain things that that community of people really need to be supported in that we at Facebook can’t make the judgment call on.”
While I commend Facebook for its (new-found?) modesty in feature design, and its recognition that the user’s wishes matter deeply, I find myself wondering about that description of a Facebook account as “a community space.” Is it? I’ve written elsewhere that posting on Facebook “echoes, for some of us, the act of writing in a journal.” A diary is clearly not a “community space.” On Facebook, however, commenters on one user’s posts get to comment on other commenters’ comments, and entire conversations develop among a user’s “friends.” Sometimes friends of friends “friend” each other. So, yes, a community is involved. But no, the community’s “members” don’t get to decide what your profile picture should be, or whether or not you should “friend” your dad. Who should?
In The Guardian, Stuart Heritage explores that question in a much lighter take on the subject of “legacy contacts,” titled “To my brother I leave my Facebook account ... and any chance of dignity in death.” As he makes clear, “nominating a legacy contact is harder than it looks.”
Rather than simply putting that responsibility on a trusted person, Simon Davis suggests that Facebook should give users the opportunity to create an advance directive with specific instructions about their profile: “who should be able to see it, who should be able to send friend requests, and even what kind of profile picture or banner image the person would want displayed after death.” That alternative would respect the user’s autonomy even more than the current “legacy contact” does.
But there is another option that perhaps respects that autonomy the most: Facebook currently also allows a user to check a box specifying that his or her account be simply deleted after his or her death. Heritage writes that “this is a hard button to click. It means erasing yourself.” Does it? Maybe it just signals a different perspective on Facebook. Maybe, for some, a Facebook account is neither an autobiography nor a guest book. Maybe the users who choose that delete option are not meanly destroying a “community space,” but ending a conversation.
Photo by Lori Semprevio, used without modification under a Creative Commons license.
Friday, Jul. 17, 2015
Ethics is about living the good life, and, for many of us, trees are an important part of that good life (and not just because we like breathing). This becomes clear in an article titled “When You Give a Tree an Email Address,” in which The Atlantic’s Adrienne LaFrance writes about a project undertaken by the city of Melbourne. As LaFrance explains, “[o]fficials assigned the trees ID numbers and email addresses in 2013 as part of a program designed to make it easier for citizens to report problems like dangerous branches.” As it turned out, however, quite a few citizens chose, instead, to write messages addressed directly to particular trees.
Some of the messages quoted by LaFrance are quite moving. On May 21, 2015, for example, a message to “Golden Elm, Tree ID 1037148” read, “I’m so sorry you’re going to die soon. It makes me sad when trucks damage your low hanging branches. Are you as tired of all this construction work as we are?” Other messages are funny. (All, by definition, are whimsical. How else do you write to a tree?) But the best part, perhaps, is that the trees sometimes write back. For example, in January 2015, a Willow Leaf Peppermint answered a query about its gender. “Hello,” it began,
I am not a Mr or a Mrs, as I have what’s called perfect flowers that include both genders in my flower structure, the term for this is Monoicous. [Even trees generate run-ons.] Some trees species have only male or female flowers on individual plants and therefore do have genders, the term for this is Dioecious. Some other trees have male flowers and female flowers on the same tree. It is all very confusing and quite amazing how diverse and complex trees can be.
Mr and Mrs Willow Leaf Peppermint (same Tree)
Should we rethink the possibilities of the acronym “IoT”? With the coming of the much-anticipated “Internet of Things,” will trees eventually notify the city officials directly when they’re about to tip over, or a branch has scraped a car, or a good percentage of their fruits are ripe?
In the meantime, is it pessimistic to worry that hackers might break into the trees’ email accounts and start sending offensive responses, or distribute spam instead of pollen?
For now, the article made me think of a famous poem by Joyce Kilmer, “Trees,” which was published in 1913. With apologies, here is my take on the Internet of Trees:
I thought that I would never see
An email written by a tree.
A tree whose hungry eyes are keen
Upon a gadget’s glowing screen;
A tree that doesn’t choose to Skype
But lifts her leafy arms to type;
A tree that may in Summer share
Selfies with robins in her hair;
Within whose bosom drafts might end;
Who intimately lives with “Send.”
Poems are made by fools like me,
But emails come, now, from a tree.
Photo by @Doug88888, used without modification under a Creative Commons license.
Tuesday, Jun. 30, 2015
"1. The Internet’s architecture is highly unusual.
2. The Internet’s architecture reflects certain values.
3. Our use of the Net, based on that architecture, strongly encourages the adoption of those values.
4. Therefore, the Internet tends to transform us and our institutions in ways that reflect those values.
5. And that’s a good thing."
The quoted list above comprises the premises that undergird an essay by David Weinberger, recently published in The Atlantic, titled “The Internet That Was (And Still Could Be).” Weinberger, who is the co-author of The Cluetrain Manifesto (and now a researcher at Harvard’s Berkman Center for Internet & Society), argues that the Internet’s architecture “values open access to information, the democratic and permission-free ability to read and to post, an open market of ideas and businesses, and provides a framework for bottom-up collaboration among equals.” However, he notes, in what he calls the “Age of Apps” most Internet users don’t directly encounter that architecture:
In the past I would have said that so long as this architecture endures, so will the transfer of values from that architecture to the systems that run on top of it. But while the Internet’s architecture is still in place, the values transfer may actually be stifled by the many layers that have been built on top of it.
Moreover, if people think, for example, that the Internet is Facebook, then the value transfer may be not just stifled but shifted: what they may be absorbing are Facebook’s values, not the Internet’s. However, Weinberger describes himself as still ultimately optimistic about the beneficial impact of the Internet. In light of the layers that obscure its architecture and its built-in values, he offers a new call to action: “As the Internet’s architecture shapes our behavior and values less and less directly, we’re going to have to undertake the propagation of the values embedded in that architecture as an explicit task” (emphasis added).
It’s interesting to consider this essay in conjunction with the results of a poll reported recently by the Pew Research Center. In a study of people from 32 developing and emerging countries, the Pew researchers found that
[t]he aspect of the internet that generates the greatest concern is its effect on a nation’s morals. Overall, a median of 42% say the internet has a negative influence on morality, with 29% saying it has a positive influence. The internet’s influence on morality is seen as the most negative of the five aspects tested in 28 of the 32 countries surveyed. And in no country does a majority say that the influence of the internet on morality is a positive.
It should be noted at the outset that not all of those polled described themselves as internet users—and that Pew reports that a “major subgroup that sees the internet positively is internet users themselves” (though, as a different study shows, millions of people in some developing countries mistakenly identify themselves as non-users when they really do use the Internet).
Interesting distinctions emerge among the countries surveyed, as well. In Nigeria, Pew reports, 50% of those polled answered that “[i]ncreasing use of the Internet in [their] country has had a good influence on morality.” In Ghana, only 29% did. In Vietnam, 40%. In China, 25%. In Tunisia, 17%. In Russia, 13%.
The Pew study, however, did not attempt to provide a definition of “morality” before posing that question. It would have been interesting (and would perhaps be an interesting future project) to ask users in other countries what they perceive as the values embedded in the Internet. Would they agree with Weinberger’s list? And how might they respond to an effort to clarify and propagate those values explicitly, as Weinberger suggests? For non-users of the Internet, in other countries, is the motivation purely a lack of access, or is it a rejection of certain values, as well?
If a clash of values is at issue, it involves a generational aspect, too: the Pew report notes that in many of the countries surveyed, “young people (18-34 years old) are much more likely to say that the internet has a good influence compared with older people (ages 35+).” This, the report adds, “is especially true on its influence of morality.”
Photo by Blaise Alleyne, used without modification under a Creative Commons license.
Friday, Jun. 26, 2015
Earlier this week, the associate director of the Markkula Center for Applied Ethics, Miriam Schulman, published a blog post about one of the center's recent campus projects. "If we want to engage with students," she wrote, "we have to go where they are talking, and this year, that has been on Yik Yak." To read more about this controversial app and a creative way to use it in a conversation about applied ethics, see "Yik Yak: The Medium and the Message." (And consider subscribing to the "All About Ethics" blog, as well!)
Friday, Jun. 19, 2015
Ello, the social media platform that was prominently (if briefly) touted last year as the “anti-Facebook,” is reinventing itself for mobile. Twitter is reinventing itself, too. Pinterest is reinventing itself into a store. And the anti-“anti-Facebook,” i.e. Facebook, is constantly reinventing itself.
But the real “anti-Facebook” is described by the director of MIT’s Center for Civic Media, Ethan Zuckerman, in the transcript of a wide-ranging discussion recently held under the auspices of the Carnegie Council on Ethics in International Affairs. Zuckerman notes that one of his students, Sands Fish,
is trying to build social networks designed to make you uncomfortable. Basically, the first thing he does is he takes away the choice of friends. You no longer have a choice about who is going to be your friend. You are going to interact with people whom he thinks you should be interacting with, as a way of sort of challenging us. Will anyone use this? It's a good question. This is why you do this at research universities rather than going out and getting venture capital for it.
Initially, the idea of a social platform designed to make users uncomfortable seems amusing, or maybe closer to a conceptual art project than a real social network. But at a time when scholars warn about “filter bubbles” (and companies who might be blamed for them try to calm the worries, or at least deflect responsibility), a time when we seem to either surround ourselves with like-minded people or get sucked into the “spiral of silence” and stop talking about controversial topics, such a network could become a fascinating training ground. Might it lead to constructive ways to engage with people who have different experiences and preferences, hold different beliefs, etc., yet still need to function together, as people in a pluralistic society do?
Would people willingly submit themselves to discomfort by participating in such a network? Would folks who join such a network be the ones already more comfortable with (or even attracted to) conflict and diversity? Or is it a question of degrees—the degree of discomfort, the degree of diversity, and the degree of thoughtfulness of the conversations that might ensue?
Zuckerman addresses this issue:
A lot of my theories around this suggest that you need bridge figures. You need people whom you have one thing in common with, but something else that is very different. I spend a ton of my life right now working on technology and innovation in sub-Saharan Africa. I work with people whom I don't have a lot in common with in terms of where we grew up, who we know, where we are from, but we have a lot in common in terms of what we do day to day, how we interact with technological systems, the things that we care about. That gives us a common ground that we are able to work on.
Would the designer of the network of discomfort provide us with bridge figures? Or would serendipity offer some?
One final thought: in some ways, for some people, Facebook itself has become the kind of social network that Fish (Zuckerman’s student) is apparently trying to design. When your relatives or co-workers send you “friend” requests, do you still “have a choice about who is going to be your friend”? (Much has been written about how conversations on Facebook have deteriorated as the users have amassed vast numbers of “friends” from diverse parts and periods of their lives; and many commentators have suggested that this kind of blended audience has driven teens, at least, to other social networks not yet co-opted by their parents and teachers.) Maybe the key distinction in the MIT project would be that participants would, as Zuckerman describes it, “interact with people whom [the network designer] thinks [they] should be interacting with.” The anti-Facebook would provide us with more thoughtfully curated discomfort.
Friday, Jun. 12, 2015
Teams that work on privacy-protective features for our online lives are much more likely to be effective if those teams are diverse, in as many ways as possible.
Here is what led me to this (maybe glaringly obvious) insight:
First, an event that I attended at Facebook’s headquarters, called “Privacy@Scale,” which brought together academics, privacy practitioners (from both the legal and the tech sides), regulators, and product managers. (We had some great conversations.)
Second, a study that was recently published with much fanfare (and quite a bit of tech media coverage) by the International Association of Privacy Professionals, showing that careers in privacy are much more likely than others to provide gender pay parity—and including the observation that there are more women than men in the ranks of Chief Privacy Officers.
Third, a story from a law student who had interned on the privacy team of a large Silicon Valley company, who mentioned sitting in a meeting and thinking to herself that something being proposed as a feature would never have been accepted in the culture that she came from—would in fact have been somewhat taboo, and might have upset people if it were broadly implemented, rather than offered as an opt-in—and realizing that none of the other members of the team understood this.
And fourth, a question that several commenters asked earlier this year when Facebook experienced its “It’s Been a Great Year” PR disaster (after a developer wrote about the experience of seeing his daughter’s face auto-inserted by Facebook algorithms under a banner reading “It’s Been a Great Year!” when in fact his daughter had died that year): Had there been any older folks on the team that released that feature? If not, would the perspective of some older team members have tempered the roll-out, provided a word of caution?
Much has been said, for a long time, about how it’s hard to “get privacy right” because privacy is all about nuance and gray areas, and conceptions of privacy vary so much among individuals, cultures, contexts, etc. Given that, it makes sense that diverse teams working on privacy-enhancing features would be better able to anticipate and address problems. Not all problems, of course—diversity would not be a magic solution. It would, however, help.
In Silicon Valley, the talk about team building tends to be about “culture fit” (or, in more sharply critical terms, about “broculture”). As it turns out, though, the right “culture fit” for a privacy team should probably include diversity (of background, gender, age, skills, and even personality), combined with an understanding that one’s own perspectives are not universal; the ability to listen; and curiosity about and respect for difference.
Photo by Sean MacEntee, used without modification under a Creative Commons license.
Friday, May. 29, 2015
Last week, researcher danah boyd, who has written extensively about young people’s attitudes toward privacy (and debunked many pervasive “gut feelings” about those attitudes and related behaviors), wrote a piece about the several bills now working their way through Congress that aim to protect “student privacy.” boyd is not impressed. While she agrees that reform of current educational privacy laws is much needed, she writes, "Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate."
boyd highlights four different “threat models” and argues that the proposed bills do nothing to address two of those: the “Consumer Finance Threat Model,” in which student data would “fuel the student debt ecosystem,” and the “Criminal Justice Threat Model,” in which such data would help build “new policing architectures.”
As boyd puts it, “the risks that we’re concerned about are shaped by the fears of privileged parents.”
In a related post called “Students: The one group missing from student data privacy laws and bills,” journalist Larry Magid adds that the proposed bills “are all about parental rights but only empower students once they turn 18.” Referencing boyd’s research, he broadens the conversation to argue that “[i]t’s about time we start to respect privacy, free speech rights and intellectual property rights of children.”
While free speech and property rights are important, the protection of privacy in particular is essential for the full development of the self. The fact that children and young people need some degree of privacy not just from government or marketers but from their own well-intentioned family members has been particularly obscured by pervasive tropes like “young people today don’t care about privacy.”
Of course, one way to combat those false tropes is to talk to young people directly. Just ask them: are there some things they keep to themselves, or share only with a few close friends or family members? And no, the fact that some of them post lots of things on social media that their elders might not does not mean that they “don’t care about privacy.”
It just means that privacy boundaries vary—from generation to generation, from culture to culture, from context to context, from individual to individual.
The best recent retort to statements about young people and privacy comes from security expert Bruce Schneier, who answered a question from an interviewer with some questions of his own: "Who are all these kids who are growing up without the concept of digital privacy? Is there even one? … All people care deeply about privacy—analog, digital, everything—and kids are especially sensitive about privacy from their parents, teachers, and friends. … Privacy is a vital aspect of human dignity, and we all value it."
Given that, boyd’s critique of current efforts aimed at protecting student privacy is a call to action: Policy makers (and, really, all of us) need to better understand the true threats, and to better protect those who are most vulnerable in a “hypersurveilled world.”
Photo by Theen Moy, used without modification under a Creative Commons license.
Friday, May. 22, 2015
A few weeks ago, I wrote about Kurt Vonnegut’s short story “Harrison Bergeron.” In the world of that story the year is 2081, and, in an effort to render all people “equal,” the government imposes handicaps on all those who are somehow better than average. One of the characters, George, whose intelligence is "way above normal," has "a little mental handicap radio in his ear.”
As George tries to concentrate on something,
“[a] buzzer sounded in George's head. His thoughts fled in panic, like bandits from a burglar alarm.
"That was a real pretty dance, that dance they just did," said Hazel.
"Huh" said George.
"That dance-it was nice," said Hazel.
"Yup," said George. He tried to think a little about the ballerinas. … But he didn't get very far with it before another noise in his ear radio scattered his thoughts.
George winced. So did two out of the eight ballerinas.
Hazel saw him wince. Having no mental handicap herself, she had to ask George what the latest sound had been.
"Sounded like somebody hitting a milk bottle with a ball peen hammer," said George.
"I'd think it would be real interesting, hearing all the different sounds," said Hazel, a little envious. "All the things they think up."
"Um," said George.
"Only, if I was Handicapper General, you know what I would do?" said Hazel. … "I'd have chimes on Sunday--just chimes. Kind of in honor of religion."
"I could think, if it was just chimes," said George.
Re-reading the story, I thought about the work of the late professor Cliff Nass, whose “pioneering research into how humans interact with technology,” as the New York Times described it, “found that the increasingly screen-saturated, multitasking modern world was not nurturing the ability to concentrate, analyze or feel empathy.”
If we have little “mental handicap radios” in our ears, these days, it’s usually because we put them there—or on our eyes, or wrists, or just in our hands—ourselves (though some versions are increasingly required by employers or schools). Still, like the ones in the story, they are making it more difficult for all of us to focus on key tasks, to be present for our loved ones, to truly take in and respond to our surroundings.
In anticipation of the Memorial Day weekend, I wish you a few days of lessened technological distractions. And, if you have some extra time, you might want to read some of professor Nass’ research.
Thursday, May. 14, 2015
Happy Birthday, Right-to-Have-Certain-Results-De-Listed-from-Searches-on-Your-Own-Name-,-Depending-on-the-Circumstances!
It’s now been a year since the European Court of Justice shocked (some) people with a decision that has mistakenly been described as announcing a “right to be forgotten.”
Today, 80 Internet scholars sent an open letter to Google asking the company to release additional aggregate data about the company’s implementation of the court decision. As they explain,
The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.
We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the [“right to be forgotten”] in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.
Although Google has released a Transparency Report with some aggregate data and some examples of the delinking decisions reached so far, the signatories find that effort insufficient. “Beyond anecdote,” they write,
we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.
For now, they add, the participants in the delisting debate “do battle in a data vacuum, with little understanding of the facts.”
More detailed data is certainly much needed. What remains striking, in the meantime, is how little understanding of the facts many people continue to have about what the decision itself mandates. A year after the decision was issued, an associate editor for Engadget, for example, still writes that, as a result of it, “if Google or Microsoft hides a news story, there may be no way to get it back.”
To “get it back”?! Into the results of a search on a particular person’s name? Because that is the entire scope of the delinking involved here—when the delinking does happen.
In response to a request for comment on the Internet scholars’ open letter, a Google spokesman told The Guardian that “it’s helpful to have feedback like this so we can know what information the public would find useful.” In that spirit of helpful feedback, may I make one more suggestion?
Google’s RTBF Transparency Report (updated on May 14) opens with the line, “In a May 2014 ruling, … the Court of Justice of the European Union found that individuals have the right to ask search engines like Google to remove certain results about them.” Dear Googlers, could you please add a line or two explaining that “removing certain results” does not mean “removing certain stories from the Internet, or even from the Google search engine”?
Given the anniversary of the decision, many reporters are turning to the Transparency Report for information for their articles. This is a great educational opportunity. With a line or two, while it weighs its response to the important request for more detailed reporting on its actions, Google could already improve the chances of a more informed debate.
[I’ve written about the “right to be forgotten” a number of times: chronologically, see “The Right to Be Forgotten, Or the Right to Edit?” “Revisiting the ‘Right to Be Forgotten,’” “The Right to Be Forgotten, The Privilege to Be Remembered” (that one published in Re/code), “On Remembering, Forgetting, and Delisting,” “Luciano Floridi’s Talk at Santa Clara University,” and, most recently, “Removing a Search Result: An Ethics Case Study.”]
(Photo by Robert Scoble, used without modification under a Creative Commons license.)
Friday, May. 8, 2015
Last weekend, Santa Clara University hosted BroncoHack 2015—a hackathon organized by the OMIS Student Network, with the goal of creating “a project that is innovative in the arenas of business and technology” while also reflecting the theme of “social justice.” The Markkula Center for Applied Ethics was proud to be one of the co-sponsors of the event.
The winning project was “PrivaSee”—a suite of applications that helps prevent the leakage of sensitive and personally identifiable student information from schools’ networks. In the words of its creators, “PrivaSee offers a web dashboard that allows schools to monitor their network activity, as well as a mobile application that allows parents to stay updated about their kids’ digital privacy. A network application that sits behind the router of a school's network continuously monitors the network packets, classifies threat levels, and notifies the school administration (web) and parents (mobile) if it discovers student data being leaked out of the network, or if there are any unauthorized apps or services being used in the classrooms that could potentially syphon private data. For schools, it offers features like single dashboard monitoring of all kids and apps. For parents, it provides the power of on-the-move monitoring of all their kids’ privacy and the ability to chat with school administration in the event of any issues. Planned extensions like 'privacy bots' will crawl the Internet to detect leaked data of students who might have found ways to bypass a school's secure networks. The creators of PrivaSee believe that cybersecurity issues in connected learning environments are a major threat to kids' safety, and they strive to create a safer ecosystem.”
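To make the team’s description more concrete: the core loop they describe (inspect each network packet, classify its threat level, and raise an alert when student data appears to be leaking) can be sketched in a few lines. This is purely illustrative, not PrivaSee’s actual code; the detection patterns, threat levels, and function names below are all hypothetical simplifications of what a real system would need.

```python
import re

# Hypothetical detectors for sensitive student data; a real system
# would use far richer pattern matching and deep packet inspection.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "student_id": re.compile(r"\bSID:\s*\d{7}\b"),
}

# Illustrative threat levels, keyed by how many distinct detectors fire.
LEVELS = {0: "none", 1: "low", 2: "medium", 3: "high"}

def classify_packet(payload: str) -> dict:
    """Report which detectors matched a payload and a coarse threat level."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(payload)]
    return {"matches": hits, "threat": LEVELS[min(len(hits), 3)]}

def monitor(packets):
    """Yield an alert (for the dashboard or a parent's phone) per leaky packet."""
    for i, payload in enumerate(packets):
        result = classify_packet(payload)
        if result["matches"]:
            yield {"packet": i, **result}
```

In PrivaSee’s architecture, something like `monitor` would sit behind the school router, with alerts fanned out to the web dashboard for administrators and the mobile app for parents.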
From the winning team:
"Hackathons are always fun and engaging. Personally, I put this one at the top of my list. I feel lucky to have been part of this energetic, multi-talented team, and I will never forget the fun we had. Our preparations started a week ago, brainstorming various ideas. We kick-started the event with analysis of our final idea and the impact it can create, rather than worrying about any technical challenges that might hit us. We divided our work, planned our approach, and enjoyed every moment while shaping our idea to a product. Looking back, I am proud to attribute our success to my highly motivated and fearless team with an unending thirst to bring a vision to reality. We are looking forward to testing our idea in real life and helping to create a safer community." - Venkata Sai Kishore Modalavalasa, Computer Science & Engineering Graduate Student, Santa Clara University
"My very first hackathon, and an amazing experience indeed! The intellectually charged atmosphere, the intense coding, and the serious competition kept us on our toes throughout the 24 hours. Kudos to ‘Cap'n Sai,’ who guided us and helped take the product to near perfection. Kudos to the rest of my teammates, who coded diligently through the night. And finally, thank you to the organizers and sponsors of BroncoHack 2015, for having provided us with a platform to turn an idea into a functional security solution that can help us make a difference." - Ashish Nair, Computer Science & Engineering Graduate Student, Santa Clara University
"Bronco-hack was the first hackathon I ever attended, and it turned to be an amazing experience. After pondering over many ideas, we finally decided to stick with our app: 'PrivaSee'. The idea was to come up with a way to protect kids from sending sensitive digital information that can potentially be compromised over the school’s network. Our objective was to build a basic working model (minimum viable product) of the app. It was a challenge to me because I was not experienced in the particular technical skill-set that was required to build my part of the app. This experience has most definitely strengthened my ability to perform and learn in high pressure situations. I would definitely like to thank the organizers for supporting us throughout the event. They provided us with whatever our team needed and were very friendly about it. I plan to focus on resolving more complicated issues that still plague our society and carry forward and use what I learnt from this event." - Manish Kaushik, Computer Science & Engineering Graduate Student, Santa Clara University
"Bronco Hack 2015 was my first Hackathon experience. I picked up working with Android App development. Something that I found challenging and fun to do was working with parse cloud and Android Interaction. I am really happy that I was able to learn and complete the hackathon. I also find that I'm learning how to work and communicate effectively in teams and within time bounds. Everyone in the team comes in with different skill levels and you really have to adapt quickly in order to be productive as a team and make your idea successful within 24hrs." - Prajakta Patil, Computer Science & Engineering Graduate Student, Santa Clara University
"I am extremely glad I had this opportunity to participate in Bronco Hack 2015. It was my first ever hackathon, and an eye-opening event for me. It is simply amazing how groups of individuals can come up with such unique and extremely effective solutions for current issues in a matter of just 24 hours. This event helped me realize that I am capable of much more than I expected. It was great working with the team we had, and special thanks to Captain Sai for leading the team to victory. " - Tanmay Kuruvilla, Computer Science & Engineering Graduate Student, Santa Clara University
Congratulations to all of the BroncoHack participants—and yes, BroncoHack will return next Spring!