“Causing Harm” explores the different types of harm that may be caused to people or groups and the potential reasons we may have for justifying these harms.

1. The students interviewed for this video disagree about which type of harm is the worst: physical, emotional, psychological, financial, or reputational. Which do you think is the worst, and why?

2. Can you think of an example of when you have been harmed? Was this harm ethically justifiable? Was it not? Explain how.

3. The video claims that we should not cause harm to others unless we are willing to suffer the same harm ourselves. Do you agree?

4. In what situation(s) would you knowingly cause harm? How would the benefits outweigh the harm?

5. Do you think an institution such as a business or government can be held accountable for causing harm in the same way an individual can be? Support your position.

6. Are you supportive of governments or institutions taking actions that may cause harm to some but would likely benefit many? How is this justified? Why is it permissible?

7. Can you think of other instances when taking such actions is not ethical?

Case Studies

In 2013, Edward Snowden, a computer expert and former CIA systems administrator, released confidential government documents to the press, revealing the existence of government surveillance programs. According to many legal experts, and to the U.S. government, his actions violated the Espionage Act of 1917, which identified the leaking of state secrets as an act of treason. Yet despite the fact that he broke the law, Snowden argued that he had a moral obligation to act. He justified his “whistleblowing” by stating that he had a duty “to inform the public as to that which is done in their name and that which is done against them.” According to Snowden, the government’s violation of privacy had to be exposed, regardless of its legality.

Many agreed with Snowden. Jesselyn Radack of the Government Accountability Project defended his actions as ethical, arguing that he acted from a sense of public good. Radack said, “Snowden may have violated a secrecy agreement, which is not a loyalty oath but a contract, and a less important one than the social contract a democracy has with its citizenry.” Others argued that even if he was legally culpable, he was not ethically culpable because the law itself was unjust and unconstitutional.

The Attorney General of the United States, Eric Holder, did not find Snowden’s rationale convincing. Holder stated, “He broke the law. He caused harm to our national security and I think that he has to be held accountable for his actions.”

Journalists were conflicted about the ethical implications of Snowden’s actions. The editorial board of The New York Times stated, “He may have committed a crime…but he has done his country a great service.” In an op-ed in the same newspaper, Ed Morrissey argued that Snowden was not a hero but a criminal: “by leaking information about the behavior rather than reporting it through legal channels, Snowden chose to break the law.” Morrissey argued that Snowden should be prosecuted because he broke a law “intended to keep legitimate national-security data and assets safe from our enemies; it is intended to keep Americans safe.”


Discussion Questions

1. What values are in conflict in this case? What harm did Snowden cause? What benefits did his actions bring?

2. Do you agree that Snowden’s actions were ethically justified even if legally prohibited? Why or why not? Make an argument by weighing the competing values in this case.

3. If you were in Snowden’s position, what would you have done and why?

4. Would you change your position if you knew that Snowden’s leak would lead to a loss of life among CIA operatives? What about if it would save lives?

5. Is there a circumstance in which you think whistleblowing would be ethically ideal? How about ethically prohibited?

Bibliography

Whistle-Blowers Deserve Protection Not Prison
http://www.nytimes.com/roomfordebate/2013/06/11/in-nsa-leak-case-a-whistle-blower-or-a-criminal/whistle-blowers-deserve-protection-not-prison

Eric Holder: If Edward Snowden were open to plea, we’d talk
http://www.politico.com/story/2014/01/eric-holder-edward-snowden-plea-102530.html

Edward Snowden: Whistleblower
http://www.nytimes.com/2014/01/02/opinion/edward-snowden-whistle-blower.html?_r=0

Edward Snowden Broke the Law and Should Be Prosecuted
http://www.nytimes.com/roomfordebate/2013/06/11/in-nsa-leak-case-a-whistle-blower-or-a-criminal/edward-snowden-broke-the-law-and-should-be-prosecuted

In the context of health care in the United States, the value of autonomy and liberty was cogently expressed by Justice Benjamin Cardozo in Schloendorff v. Society of New York Hospital (1914), when he wrote, “Every human being of adult years and sound mind has a right to determine what shall be done with his own body.” This case established the principle of informed consent, which has become central to the ethics of modern medical practice. However, a number of events since 1914 have illustrated how the autonomy of patients may be overridden. In Buck v. Bell (1927), Justice Oliver Wendell Holmes wrote that the involuntary sterilization of “mental defectives,” then a widespread practice in the U.S., was justified, stating, “Three generations of imbeciles are enough.” Another example, the Tuskegee Syphilis Study, in which African-American males were denied life-saving treatment for syphilis as part of a scientific study of the natural course of the disease, began in 1932 and was not stopped until 1972.

Providing advice related to topics of bioethics, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research stated, “Informed consent is rooted in the fundamental recognition—reflected in the legal presumption of competency—that adults are entitled to accept or reject health care interventions on the basis of their own personal values and in furtherance of their own personal goals.” But what of circumstances where patients are deemed incompetent through judicial proceedings, and where someone else is designated to make decisions on behalf of a mentally incompetent individual?

Consider the following case:

A middle-aged man was involuntarily committed to a state psychiatric hospital because he was considered dangerous to others due to severe paranoid thinking. His violent behavior was controlled only by injectable medications, which were initially administered against his will. He had been declared mentally incompetent, and the decisions to approve the use of psychotropic medications were made by his adult son, who had been awarded guardianship and who held medical power of attorney.

While the medications suppressed the patient’s violent agitation, they had little impact on his paranoid symptoms. His chances of being able to return to his home community appeared remote. However, a new drug was introduced into the hospital formulary which, if used with this patient, offered a strong possibility that he could return home. The drug, however, was available only in pill form, and the patient’s paranoia included fears that others would try to poison him. The suggestion was made to grind up the pill and administer the drug surreptitiously by mixing it into pudding.

Hospital staff checked with the patient’s son and obtained informed consent from him. The “personal values and…personal goals” of the son and other family members were seen to substitute for those of the mentally incompetent patient—and these goals included the desire for the patient to live outside of an institution and close to loved ones in the community. This was the explicitly stated rationale for the son’s agreeing to the proposal to hide the medication in food. However, staff were uncomfortable about deceiving the patient, despite having obtained informed consent from the patient’s guardian.

Discussion Questions

1. In the case study above, do you think the ends justify the means? In other words, does the goal of discharging the patient from an institutional setting into normal community living justify deceiving him? Explain your reasoning.

2. Do you think it is ever ethically permissible to deceive clients? Under what circumstances? Why or why not?

3. To what degree should family members or legal guardians have full capacity to make decisions or give consent on behalf of those under their care? Explain.

4. Do you think severely mentally ill people retain any rights “to determine what shall be done with [their] own [bodies]”? Why or why not?

5. Are there risks in surreptitiously medicating a paranoid patient? Would this confirm the patient’s delusions of being “poisoned” by others or escalate his resistance to treatment? Are these risks worth taking in view of the potential to dramatically improve his mental functioning and reduce his suffering?

6. Since psychiatric patients have the right to treatment, does the strategy of surreptitiously administering medications serve this goal? Do you think this is ethically justifiable? Why or why not?

7. Does the history of forcible treatment of persons with disabilities and other powerless populations affect how you view this case? Explain.

Bibliography

The Nazi Doctors: Medical Killing and the Psychology of Genocide
http://www.worldcat.org/title/nazi-doctors-medical-killing-and-the-psychology-of-genocide/oclc/264730584

Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present
http://www.worldcat.org/title/medical-apartheid-the-dark-history-of-medical-experimentation-on-black-americans-from-colonial-times-to-the-present/oclc/61131882

Imbeciles: The Supreme Court, American Eugenics, and the Sterilization of Carrie Buck
http://www.worldcat.org/title/imbeciles-the-supreme-court-american-eugenics-and-the-sterilization-of-carrie-buck/oclc/911171862

Texas Administrative Code, Chapter 404, Subchapter E: Rights of Persons Receiving Mental Health Services
http://texreg.sos.state.tx.us/public/readtac$ext.ViewTAC?tac_view=5&ti=25&pt=1&ch=404&sch=E&rl=Y

A History and a Theory of Informed Consent
http://www.worldcat.org/title/history-and-theory-of-informed-consent/oclc/228168485

Enduring and Emerging Challenges of Informed Consent
http://www.nejm.org/doi/full/10.1056/NEJMra1411250

Chapter “Consent to Medical Care: The Importance of Fiduciary Context” in The Ethics of Consent: Theory and Practice
http://www.worldcat.org/title/ethics-of-consent-theory-and-practice/oclc/312625462

CASES; Advice Rejoins Consent
http://www.nytimes.com/2002/07/02/health/cases-advice-rejoins-consent.html

Making Health Care Decisions: The Ethical and Legal Implications of Informed Consent in the Patient-Practitioner Relationship
http://www.worldcat.org/title/making-health-care-decisions-a-report-on-the-ethical-and-legal-implications-of-informed-consent-in-the-patient-practitioner-relationship/oclc/8922324

In many ways, social media platforms have brought great benefits to our societies by expanding and diversifying the ways people communicate with each other, yet these platforms also have the power to cause harm. Posting hurtful messages about other people is a form of harassment known as cyberbullying. Some acts of cyberbullying may not only be defamatory but may also lead to serious consequences. In 2010, Rutgers University student Tyler Clementi jumped to his death a few days after his roommate used a webcam to observe and tweet about Tyler’s sexual encounter with another man. Jane Clementi, Tyler’s mother, stated, “In this digital world, we need to teach our youngsters that their actions have consequences, that their words have real power to hurt or to help. They must be encouraged to choose to build people up and not tear them down.”

In 2013, Idalia Hernández Ramos, a middle school teacher in Mexico, was a victim of cyber harassment. After discovering that one of her students had tweeted that the teacher was a “bitch” and a “whore,” Hernández confronted the girl during a lesson on social media etiquette. When Hernández asked why the girl would post such hurtful messages, which could harm the teacher’s reputation, the student meekly replied that she had been upset at the time. The teacher responded that she, too, was very upset by the student’s actions. Demanding a public apology in front of the class, Hernández stated that she would not allow “young brats” to call her those names. Hernández uploaded a video of the confrontation online, where it attracted much attention.

While Hernández was subject to cyber harassment, some felt she went too far by confronting the student in the classroom and posting the video for the public to see, raising concerns over the privacy and rights of the student. Sameer Hinduja, who writes for the Cyberbullying Research Center, notes, “We do need to remain gracious and understanding towards teens when they demonstrate immaturity.” Confronting instances of a teenager venting her anger may infringe upon her basic rights to freedom of speech and expression. Yet, as Hinduja explains, teacher and student were both perpetrators and victims of cyber harassment. All the concerns of both parties must be considered and, as Hinduja wrote, “The worth of one’s dignity should not be on a sliding scale depending on how old you are.”

Discussion Questions

1. In trying to teach the student a lesson about taking responsibility for her actions, did the teacher go too far and become a bully? Why or why not? Does she deserve to be fired for her actions?

2. What punishment does the student deserve? Why?

3. Who is the victim in this case: the teacher or the student? Was one victimized more than the other? Explain.

4. Do victims have the right to defend themselves against bullies? What if they go through the proper channels to report bullying and it doesn’t stop?

5. How should compassion play a role in judging others’ actions?

6. How are factors like age and gender used to “excuse” unethical behavior (e.g., “Boys will be boys” or “She’s too young/old to understand that what she did is wrong”)? Can you think of any other factors that are sometimes used to excuse unethical behavior?

7. How is cyberbullying similar to or different from face-to-face bullying? Is one more harmful than the other? Explain.

8. Do you know anyone who has been the victim of cyberbullying? What types of harm did this person experience?


Bibliography

Teacher Suspended After Giving Student a Twitter Lesson
http://www.cnn.com/2013/09/12/world/americas/mexico-teacher-twitter/index.html

Pros and Cons of Social Media in the Classroom
http://campustechnology.com/Articles/2012/01/19/Pros-and-Cons-of-Social-Media-in-the-Classroom.aspx?Page=1

How to Use Twitter in the Classroom
http://thenextweb.com/twitter/2011/06/23/how-to-use-twitter-in-the-classroom/

Twitter Is Turning Into a Cyberbullying Playground
http://www.takepart.com/article/2012/08/08/twitter-turning-cyberbullying-playground

Can Social Media and School Policies Be “Friends”?
http://www.ascd.org/publications/newsletters/policy-priorities/vol17/num04/Can-Social-Media-and-School-Policies-be-%C2%A3Friends%C2%A3%C2%A2.aspx

What Are the Free Expression Rights of Students in Public Schools Under the First Amendment?
http://www.firstamendmentschools.org/freedoms/faq.aspx?id=12991

Teacher Shames Student in Classroom After Student Bullies Teacher on Twitter
http://cyberbullying.us/teacher-shames-student-in-classroom-after-student-bullies-teacher-on-twitter/

The Therac-25 was a state-of-the-art linear accelerator developed by Atomic Energy of Canada Limited (AECL) and the French company CGR to provide radiation treatment to cancer patients. It was the most computerized and sophisticated radiation therapy machine of its time. With the aid of an onboard computer, the device could select multiple treatment-table positions and set the type and strength of the energy beam chosen by the operating technician. AECL sold eleven Therac-25 machines, which were used in the United States and Canada beginning in 1982.

Unfortunately, between 1985 and 1987 six accidents occurred in which patients received massive overdoses of radiation, several of them fatal (Leveson & Turner 1993). Patients reported being “burned by the machine,” and some technicians passed these reports along, but the company believed such burns were impossible. Reports to the manufacturer resulted in inadequate repairs to the system and in assurances that the machines were safe. Lawsuits were filed, yet no investigations took place. The Food and Drug Administration (FDA) later found that the company lacked an adequate reporting structure for following up on reported accidents. The machine was recalled in 1987 for an extensive redesign of its safety features, software, and mechanical interlocks.

There were two earlier versions of the Therac-25 unit, the Therac-6 and the Therac-20, which were built from the CGR company’s other radiation units, the Neptune and the Sagittaire. The Therac-6 and Therac-20 units included a microcomputer that made patient data entry easier, but they could operate without the onboard computer. These units had built-in safety interlocks, positioning guides, and mechanical features that prevented radiation exposure if there was a positioning problem with the patient or with components of the machine. Some of the Therac-20 software was duplicated and carried over to the Therac-25. The Therac-6 and Therac-20 were clinically tested machines with an excellent safety record. They relied primarily on hardware for safety controls, whereas the Therac-25 relied primarily on software.

On February 6, 1987, the FDA ordered all of the machines shut down until permanent repairs could be made. Although AECL was quick to state that a “fix” was in place and that the machines were now safer, that was not the case. After this incident, Leveson and Turner (1993) compiled public information from AECL, the FDA, and various regulatory agencies and concluded that record keeping had been inadequate when the software was designed. The software was inadequately tested, and “patches” were carried over from earlier versions of the machine. AECL’s assumption that the problems had been detected and corrected was premature and unproven. Furthermore, AECL had great difficulty reproducing the conditions under which the issues were experienced in the clinics. The FDA restructured its reporting requirements for radiation equipment after these incidents.

As computers become ubiquitous and come to control increasingly significant and complex systems, people are exposed to greater harms and risks. The issue of accountability arises when a community expects its agents to stand behind the quality of their work. Nissenbaum (1994) argues that responsibility in our computerized society is systematically undermined, and that this is a disservice to the community. This concern has grown with the number of critical life services controlled by computer systems in the governmental, airline, and medical arenas.

According to Nissenbaum, there are four barriers to accountability: the problem of many hands, “bugs” in the system, the computer as a scapegoat, and ownership without liability. The problem of many hands refers to the fact that many groups of people (programmers, engineers, etc.) at various levels of a company are typically involved in the creation of a computer program and have input into the final product. When something goes wrong, no one individual can be clearly held responsible, and it is easy for each person involved to rationalize that he or she is not responsible for the final outcome because of the small role played. This occurred with the Therac-25, which had two prominent software errors, a failed microswitch, and fewer safety features than earlier versions of the device. In the Therac-25 accidents, the claim that bugs cause errors only under certain conditions was used as a cover for careless programming, a lack of testing, and a lack of safety features built into the system. The fact that computers “always have problems with their programming” cannot be used as an excuse for overconfidence in a product, unclear or ambiguous error messages, or improper testing of individual components of the system. Another potential obstacle is ownership of proprietary software and an unwillingness to share “trade secrets” with investigators whose job it is to protect the public (Nissenbaum 1994).
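The “bugs” barrier can be made concrete. One of the Therac-25 flaws described by Leveson and Turner involved a one-byte shared flag that the setup task incremented rather than set to a fixed nonzero value; whenever the counter rolled over to zero, a safety check was silently skipped. The sketch below is a simplified illustration of that failure mode, not AECL’s actual code; the class and method names are invented for the example.

```python
# Simplified illustration of the Therac-25 one-byte counter flaw
# (hypothetical names; not AECL's actual code).

class SetupLoop:
    def __init__(self):
        self.flag = 0  # 8-bit flag: nonzero is meant to signal "verify setup"

    def tick(self):
        # Incrementing instead of assigning a constant lets the value
        # wrap around; emulate one-byte (mod 256) arithmetic.
        self.flag = (self.flag + 1) % 256

    def safety_check_runs(self):
        # The check was gated on the flag being nonzero, so a wrapped
        # value of 0 silently skipped verification of the setup.
        return self.flag != 0

loop = SetupLoop()
skipped = 0
for _ in range(512):
    loop.tick()
    if not loop.safety_check_runs():
        skipped += 1
print(skipped)  # every 256th pass skips the check: 2 of 512
```

Assigning a fixed nonzero value instead of incrementing would have removed the wraparound entirely, which is why the flaw reads as careless programming rather than an unavoidable “bug.”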

The Therac-25 incident involved what has been called one of the worst computer bugs in history (Lynch 2017), though it was largely a matter of overall design issues rather than a specific coding error. Therac-25 is a glaring example of what can go wrong in a society that is heavily dependent on technology.

Discussion Questions

1. Who should be responsible for the errors in a medical device?

2. What moral responsibility do creators of software have for the adverse consequences that flow from flaws in that software?

3. What steps are creators of software morally required to take to minimize the risk that they will sell flawed software with dangerous consequences?

4. What should constitute FDA approval of a medical device? Should the benefit outweigh the harm? Should the device be 100% safe prior to approval? Should FDA approval guidelines take into consideration novel therapies for protected populations such as children or patients with rare conditions?

5. Should updated medical devices be reviewed by the FDA as new devices or as improvements on an older design? If reviewed as an improvement, at what point can or should a device be subject to a full review process? If reviewed as a novel device, how might this affect the production of modified or improved devices and the companies that produce medical devices?

Bibliography