Santa Clara University

Business Ethics in the News

A discussion on the week's top business ethics stories by Professor Kirk O. Hanson, Executive Director of the Markkula Center for Applied Ethics, and Patrick Coutermarsh, Fellow in Applied Ethics and recent graduate of Santa Clara University.

The following postings have been filtered by the category Data Security.
  •  GOOGLE: Are App Developers On the Hook?

    Thursday, Jan. 23, 2014
    Monday, Google removed two Chrome browser extensions (think “apps added to your web browser”) from its store after they were found to be installing unwanted software and redirecting users to affiliate links. The two extensions, “Tweet this Page” and “Send to Feedly,” began as legitimate services, created by individual developers and offered free of charge. In both cases, the original developer sold the extension to a company that then took advantage of existing subscribers to disseminate ads. Send to Feedly’s founder, Amit Agarwal, sold his extension, used by 30,000 people, to an unidentified party. “It was a 4-figure offer for something that had taken an hour to create and I agreed to the deal,” says Amit. He has since published a blog post apologizing to existing users and stating that taking the deal was a bad decision. While many corporations publish apps and extensions, a great many of these services are made by nonbusiness entities and offered free of charge. Do independent developers have the same obligations to their users as corporations? Is Amit Agarwal correct in calling his decision a bad one?
     

      Kirk: Anytime you have 30,000 people using a product, you have an obligation not to sell out to someone who might corrupt it or change it in ways that exploit users. Agarwal and others like him clearly want to cash out, and rightfully so. But the glaring problem here is that Agarwal did not identify whom he was dealing with. In this case, it seems the buyers refused to identify themselves, or at least made it very hard to learn who they were. That alone is enough to say Agarwal should have passed on the deal.

      Patrick: First, kudos to Amit for acknowledging his role in the situation. I get where Amit was coming from: “I’m just a guy that made an extension… I don’t have customers.” But the way I see it, when Amit entered the market to sell the extension, those existing users became “paying customers”; that is, he was then using them as leverage to get a deal. With that, I think certain obligations emerge; at the least, Amit should have announced the change in ownership to existing users.

    Google pulls malware Twitter and Feedly extensions from Chrome (The Guardian)

    I Sold a Chrome Extension but it was a bad decision (Digital Inspiration)

    A Framework for Thinking Ethically (Markkula Center for Applied Ethics)

     


  •  CONFIDE: Disappearing Messages in the Workplace?

    Monday, Jan. 13, 2014
    Private messaging apps, led by Snapchat, are becoming immensely popular as a way of sending messages to friends and family, and a new app is now looking to bring private messaging into the workplace. Confide, made for Apple devices, allows users to send vanishing text messages (as opposed to Snapchat’s pictures), which disappear immediately after being read. Many are already raising red flags, fearing that disappearing messages will allow individuals, and even corporations, to systematically erase any record of improper communications or behavior, including insider trading, workplace discrimination, and messages that constitute sexual harassment. Moreover, messaging services like this will without a doubt undermine the legal discovery process, which has effectively revealed wrongdoing in recent years through email history. Then again, these “off the record” conversations go on regardless, and new tech developments always open doors for possible misuse. Should employers allow these private messaging apps in the workplace? Would you condemn a company that provides this service for its employees?
     

      Kirk: While business ethics involves many things, managing an ethical business often comes down to effectively managing incentives. Offering this capability will undoubtedly give the message, inadvertently or otherwise, that you can say or do anything as long as you don’t get caught. This is particularly troubling, given that we are in a time where investigations of corporate wrongdoing are heavily dependent on email records. No company can afford to permit this type of communication without putting the state of its ethical culture at risk.

      Patrick: The way I see it, companies would not be at fault for allowing this type of service to be used, although I would find it troubling if a company adopted disappearing messages as a provided service. “Off the record” conversations between individuals happen and there’s no stopping them. Yes, this technology makes having these conversations easier, but the price of technological advancement is often the possibility of misuse. On the other hand, if a company paid for or encouraged the use of this service, I would no longer see it as an off-the-record discussion, but rather an official forum provided by the corporation, one that should be subject to review just like phone and email communications.

    A Snapchat for executives? (Washington Post)

    A Framework for Thinking Ethically (The Markkula Center for Applied Ethics)

     


  •  SNAPCHAT: Hacker Attack or Public Service?

    Tuesday, Jan. 7, 2014
    Snapchat, one of the hottest startups of 2013, is under heavy fire this week over a security breach that compromised the usernames and phone numbers of 4.6 million Snapchat users. “Gibson Security,” a group of unidentified “white hat” hackers that first uncovered the vulnerabilities, warned Snapchat privately in August to no avail, leading Gibson to publish a detailed account of the security flaws online. On New Year’s Eve, a different group of hackers used Gibson’s information to “steal” user information, and then posted the usernames and phone numbers (partially redacted) on its own website to raise awareness of the issue.

    Snapchat’s CEO, Evan Spiegel, responded with a cryptic tweet, stating that Snapchat was working with law enforcement, and later called the incident an “attack” and “abuse” of its system. Numerous journalists have criticized Snapchat for ignoring the initial warnings, the lack of apology, and for depicting the “hack” as a malicious attempt, as opposed to the benevolent effort many believe it to be. Nonetheless, Snapchat’s system was “attacked,” and millions of users’ private information was published online. Should the actions of Gibson and the other hacker group be seen as abuses of the system or as a public service to be lauded?

      Kirk: I’ve never had much affection for “white hat” activists, especially when they facilitate the misuse of private and confidential information. These groups often do more harm than good, even when their intentions are in the right place. Real “white hat” groups should be able to accomplish their goals without publicly revealing data or methods for exploiting security weaknesses. Snapchat’s failure to respond to the warnings needed to be addressed (it has since created a dedicated email address for security-related messages), but Gibson Security is not blameless here.

      Patrick: I see the upside of “white hats”: when done right, they provide a counterbalance that keeps corporations and governments honest. The flip side is that there is no counterbalance for the hackers themselves, no accountability, and often no way to prosecute these white hat groups, all of which should make the public hesitant to fully embrace them. My concern is over the publishing of the “recipe” for hacking the system: why wasn’t that also partially redacted? I think it was lucky that another supposedly white hat group was the one to capitalize on the loopholes.

    Snapchat Breach Exposes Weak Security (Times)

    Snapchat GibSec Full Disclosure (Gibson Security)

    SnapchatDB ("second hacker group")

    A Framework for Thinking Ethically (Markkula Center for Applied Ethics)

     


  •  MUGSHOTS: How Much Control Do You Have Over Your Online Identity?

    Monday, Oct. 7, 2013

    Of the many emerging online industries, this one may be a surprise: online mug shot databases. At sites like JustMugshots, visitors can search these databases — which include anyone who has been arrested — and view mug shots, names, arrest locations, ages, and charges. The owners of these websites claim they are providing a valuable public service; “Everyone has a right to know if your babysitter has been arrested” is a common catchphrase. The flip side of this “service” is the crippling effect these databases have on those with arrest records: even people who were never convicted, or who had their records expunged, find their job prospects severely damaged. The sites depend on this debilitating effect for revenue, charging anywhere from $40 to $400 to remove a mug shot. In the wake of a great deal of criticism, the mug shot sites are quick to argue that arrest documents, including mug shots, are public record, and that preventing their publication would be unconstitutional. Are there some types of public information that should not be actively promoted or monetized? Is charging for the removal of the photos a legitimate business practice, or the equivalent of extortion?

      Kirk: Of course there is public information that shouldn’t be actively promoted. The primary concern with the emergence of “mug shot” sites is that they don’t tell the full story. Take, for example, a person who was wrongly accused of a crime, or otherwise a victim of circumstance or error. These sites offer no protection for such a person, other than to empty their pockets and give in to the site owners’ shakedown tactics. The integrity of public records leaves no room for the ulterior motives of these mug shot sites.

      Patrick: I take the site owners’ Constitutional claims to have considerable weight. Yes, this behavior is certainly exploitative and probably does more harm than good, but freedom of the press covers even those who are in it for a quick buck. Kudos to Google for adjusting its algorithm to push mug shot sites down in search results, and to PayPal, Discover, MasterCard, and the other companies that refuse to do business with them. Mug shot sites are free to publish what they like, but that doesn’t mean we have to give them the forum to do so.

    Mugged by a Mug Shot Online

    A Framework for Thinking Ethically

     


  •  BOOZALLEN: Rethinking Corporate Policy Toward Whistleblowers

    Friday, Jun. 21, 2013

    Edward Snowden, the National Security Agency whistleblower, has been fired from his job at Booz Allen Hamilton. This month, Snowden went public with details of the NSA’s PRISM surveillance program, which he obtained through his work at the firm. Booz Allen released a statement confirming that Snowden had been terminated for “violations of the firm’s code of ethics and firm policy.” With its primary business involving highly sensitive government information, it is no surprise that Booz Allen places a premium on discretion. Nonetheless, news of the NSA’s PRISM program has been embraced by much of the public, and has sparked calls for open debate on the program from members of Congress and President Obama alike. Whistleblowing is often detrimental to a firm’s short-term financial position, but it has proved to be a valuable practice, from society’s perspective, in keeping firms and governments accountable. Did Booz Allen handle Snowden’s whistleblowing case correctly? Should companies leave room for principled whistleblowing on some issues?

      Kirk: It would be very hard to construct a policy that allowed employees to violate some obligations of confidentiality or specific performance for clients. Snowden had opportunities to raise his concerns internally within Booz Allen, or to resign and end his complicity with a system he felt was unethical. If Snowden felt he had an obligation to violate his own contractual obligation to secrecy, he should be willing to be prosecuted and stand trial. Civil disobedience is most powerful when it demonstrates the whistleblower’s willingness to pay a price to get the word out.

      Patrick: Firms like Booz Allen would not exist if they included a “whistleblower clause,” as their business is predicated on secrecy. On the other hand, there is great concern over the efficacy of “internal whistleblowing” and its fairness to the whistleblower. The company retains the power to sweep both the issue and the whistleblower under the rug, by stripping the whistleblower of responsibility and power over an extended period of time. Ideally, the public would band together to provide a safety net for whistleblowers, allowing them to speak up despite lacking company support; but can we trust the masses to get these things right?

    Booz Allen Fires NSA Whistleblower Following Leaks

    A Framework for Thinking Ethically

     


  •  PRISM: Should Firms Participate in Government Data Programs?

    Thursday, Jun. 13, 2013

    Silicon Valley companies such as Google, Apple, and Facebook are under tremendous pressure over their participation in the NSA’s PRISM program. The Washington Post’s story breaking the news of PRISM claimed that the NSA and FBI had direct and unfettered access to the servers of nine major Internet companies, a claim some of those companies have since denied. To salvage users’ trust, a number of these companies are petitioning the Attorney General for permission to make public the types of requests they have received from the NSA as well as the percentage of those requests they have complied with. The hope is that disclosure will dispel the public perception that the NSA has direct access to company servers, and instead portray their participation as both legal and limited. Transparency about the nature of their involvement in PRISM is a positive first step toward regaining user trust, but these companies still find themselves in a double bind between assisting matters of national security and respecting their users’ privacy. Going forward, should companies participate in these national security programs, and to what extent must users be informed?

      Kirk: We are badly in need of a new national debate over what the Patriot Act has authorized. Data on our phone calls, email, shopping, travels, and web surfing sits in the servers of private companies. The key questions are what data companies can keep, how much aggregation of data from different sources will be permitted (data mining, big data), and when the government will be allowed to look at and “mine” the data. Threats to individual privacy are many. The impacts of losing our privacy are not well understood. For now, as much “transparency” as possible, and some resistance to overbroad government requests, constitute a good ethical stance, in my view.

      Patrick: I agree with Kirk, as much transparency as possible is the first step. Moving forward, there are a number of things that companies should be doing to preempt future ethical dilemmas between national security and user privacy. First, technology firms should form a coalition to establish a unified stance on this issue. That way, individual firms are not “bullied” by government agencies into sharing user data, and a baseline for future instances will be in place. This baseline will allow users to have a better understanding of the way their data will be used, and will place responsibility on individual firms for making known their policies through user agreements.

    U.S., British intelligence mining data from nine U.S. Internet companies in broad secret program

    Google, Facebook, Microsoft push US for public disclosure of security requests

    A Framework for Thinking Ethically

     


  •  CISPA: Are Privacy Contracts Only as Good as the Law that Mandates Them?

    Monday, Apr. 22, 2013

    Thursday, the House of Representatives passed CISPA (the Cyber Intelligence Sharing and Protection Act), part of which allows companies to break privacy contracts and share consumers' personal information with other firms and the US government for “cyber security purposes.” The bill allows companies such as Facebook and Twitter to share account information, messages, and other user data they determine to be cyber threat information, and protects them from lawsuits if they do. They can do this even when there is no warrant for the information, and even when the company has signed a contract explicitly stating it would not disclose personal information. While the bill must still survive the Senate and a potential veto from the President, many consumer rights advocates are up in arms. Provided that CISPA is passed into law, are companies ethically obligated to honor their preexisting contracts with users?

      Kirk: This bill is a significant threat to personal privacy. There are tough choices to be made regarding the threat of terrorism, but, as written, this bill goes much too far. Any company could arbitrarily decide what is cyber threat information and then release it without being accountable for the breach of privacy. Privacy guarantees from Facebook, Google, AT&T, and Comcast would be worth next to nothing. It is not surprising some companies are supporting it; they get immunity from lawsuits for violating privacy contracts.

      Patrick: I agree, this bill goes way too far. Nonetheless, if it passes, it seems unreasonable to expect companies to opt out of the “free pass” of being insulated from civil lawsuits. The emphasis then becomes how a company handles the transition and the procedure it uses for deeming content “cyber threat” relevant. Companies are obligated to be transparent with users, most importantly by notifying existing users that the privacy agreement and terms of service have changed, and to use discretion in releasing information. Hopefully the Senate corrects the House of Representatives’ mistake.

    CISPA Vote: House Passes Cybersecurity Bill To Let Companies Break Privacy Contracts

    A Framework for Thinking Ethically

     


  •  GOOGLE: Creating a Company Culture that Respects Private Data Despite Profiting from its Collection

    Thursday, Mar. 14, 2013

    On March 12th, Google agreed to pay a $7 million fine for collecting personal data while recording for its Google Maps Street View feature. Google's mobile vans, in addition to filming "street views," were also collecting emails, medical and financial records, and passwords from unprotected wireless networks as they passed by. This is not the first time Google has been penalized for privacy transgressions; last year it was fined $22 million for bypassing security settings in Apple's Safari browser. Many are concerned that a $7 million fine is not enough to force Google to change, pointing to the company's net profit of $32 million per day. As part of the settlement, Google has agreed to offer employee education on privacy, invest in educating the public on securing their wireless networks, and destroy the data collected by the Street View cars. Given the value of "big data" to Google, company managers face a dilemma in determining where to draw the line between data they should collect and data collection that violates privacy: a growing concern in light of emerging technologies that allow even more opportunities for data collection. What directives should Google's management give its employees?

      Kirk: This is a classic dilemma where the company's self-interest and strategy threaten the public interest in personal privacy. Google must create a respect for privacy among all its managers and employees. And it must create a system for reviewing decisions, like letting Street View vans collect such data, before they are implemented. Such a system would demonstrate a companywide commitment to respecting user privacy while offering clear guidelines to employees. Many observers think Google has failed to do either.

      Patrick: Speaking from the perspective of a college student, I believe many of my peers share my sentiment: I am not too concerned about Google and firms like it collecting data from their users. Google offers a number of incredibly useful products (Gmail, YouTube, and many others) free of charge. Advertising, and the data collection that enhances it, allows these products to be free; in consequence, if you are using these products you should reasonably expect data to be collected. In this case, though, Google was in the wrong because it collected data from people who were not even using its products. While communication with employees is important, the key for Google is to work toward transparency with its users, letting them know exactly what they are agreeing to when they use Google products.

    Google Pays Fine Over Street View Privacy Breach

    A Framework for Thinking Ethically

     


  •  CYBER ATTACKS: Should Companies Admit They've Been Hacked?

    Sunday, Feb. 24, 2013

    Cyber attacks on American companies have become increasingly common, but not all companies respond to security breaches the same way. Companies such as Facebook, Twitter, and Apple have voluntarily gone public with their security troubles. Others, including Exxon Mobil, Coca-Cola, and Baker Hughes, have continued to deny cyber attacks despite reports to the contrary. The U.S. government has encouraged transparency about cyber attacks as part of a wider effort to protect American intellectual property. Advocates of disclosing breaches claim it will set a precedent for other companies to get more active in fighting cyber attacks. Most company lawyers advise against disclosure, pointing to potential shareholder lawsuits, embarrassment, and fear of inciting future attacks. Health and insurance companies must disclose breaches of patient information, and publicly traded companies must disclose an incident that affects earnings. What policy should companies adopt when dealing with a cyber security breach?

      Kirk: The common good demands a united effort by public and private institutions to fight cyber attacks. Companies owe it to the public to admit they've been hacked and to use their experience toward improving efforts against hacking. Anything short of full participation will guarantee that cyber attacks continue to be a problem, with companies picked off one by one as they stand silent. Due to the sheer number of incidents, the stigma of being hacked has decreased dramatically, opening the door for more companies to come forward. It's time for companies to think of the common good over protecting their own tail.

      Patrick: The focus here should be on the legal system, not the victims of cyber attacks. Hacked companies are being further victimized: they are pressured to disclose security breaches while being inadequately protected from the liability that comes with disclosure. This is not to say companies should not be held accountable for a reasonable amount of preventative security, but the U.S. government is sending companies mixed messages. If the federal government really wants collaboration from hacked companies, it should consider offering anonymous participation in its current initiatives, as well as insulating companies from unwarranted shareholder lawsuits.

    Some Victims of Online Hacking Edge Into the Light

    A Framework for Thinking Ethically

     
