Psychology in the News

May 19, 2009

A certain sense of morality

Filed under: decision making, emotions, evolution — intro2psych @ 9:00 am

By Victoria Velasco

train wreck by woodleywonderworks


The train problem consists of two scenarios. In the first, one must pull a lever to direct a moving train away from five people and toward one person; in the second, one must push a person under a train, thereby stopping it in time to save five people. In a wide survey, many people regarded the action in the first scenario as ethical; however, an overwhelming number of subjects strongly disputed the morality of the second scenario, yet they were unable to articulate the ethical difference between it and the first (Hauser, Cushman & Young, 1997). In both situations, one is asked to harm one person for the good of the community. The source of this inconsistency is, according to a recent article by Steven Pinker, a set of universal morals.

In a recent study, fMRI scans monitored brain activity while subjects were presented with the “train problem” (Greene & Cohen, 2001). In all subjects considering the first scenario, only the area of the frontal lobes linked to logic showed any heightened activity. When subjects were presented with the second scenario, however, activity also appeared in the medial area of the frontal lobes, which is linked to interpersonal emotions, and in the anterior cingulate cortex, which registers conflicts between competing urges. These findings, together with those of the previous study, illustrate a moral battle between emotion and logic, and the universal victory of emotions.

Another experiment on universal morality, focused on rhesus monkeys, illustrates both a sense of community and the avoidance of harm to community members (Masserman, Wechkin & Terris, 1964). Operator monkeys were trained to pull one of two chains to receive food, depending on whether a red or a blue light was signaled. On the fourth day of the experiment, the monkeys were paired, and when the operator monkey pulled a chain, its partner received a shock. Two-thirds of the monkeys showed discretion in pulling the chains, especially after receiving shocks themselves or if they had previously interacted with their partner, and many of the monkeys avoided pulling the chains even to feed themselves.

The ubiquity of the cerebral response to wrongdoing suggests some evolutionary benefit to morality. Psychologists Jonathan Haidt and Jesse Graham argued that all evolved morals fit into five broad categories: avoidance of harm, fairness, a sense of community, respect for authority, and purity (Haidt & Graham, 2006). Although these are distinctly human ideals, they are also represented in animals, illustrating their evolutionary benefits. The experiment on rhesus monkeys (Masserman, Wechkin & Terris, 1964) reflects avoidance of harm; dominance hierarchies reflect respect for authority; animal communities inherently emphasize fairness and reciprocation; and the avoidance of certain foods reflects the importance of purity.

This new concept of a universal and unwritten moral code could lead to major changes in the way social interactions and ethics are studied. With further exploration, these universal morals could prove to be the foundation of anything from holding a door open for another person to the world’s major religions. Perhaps in time, we will be able to better understand the motivations behind our instinctual moral responses.


Greene, J. D., & Cohen, J. D. (2001, September). An fMRI investigation of emotional engagement in moral judgment. Science.

Haidt, J., & Graham, J. (2006). Planet of the Durkheimians: Where community, authority, and sacredness are foundations of morality. Social Science Research Network.

Hauser, M., Cushman, F., & Young, L. (1997). A dissociation between moral judgments and justifications. Mind and Language, 22.

Masserman, J. H., Wechkin, S., & Terris, W. (1964). “Altruistic” behavior in rhesus monkeys. American Journal of Psychiatry, December 1964, 584–585.

Pinker, S. (2008, January 13). The moral instinct. The New York Times Magazine.


  1. I found this article particularly interesting. I studied the train (or, as it was originally presented, the trolley) problem in a philosophy class I took at Vassar, and while we attempted to find a solution to why it is morally permissible in one case but not the other, this study took a more psychological/physiological approach to the problem, showing that in one case we are only using logic, while in the second we use emotion and our connection with other people. One could say that this implies that, logically, in all cases we should kill the one to save the five, but it is possible that morality is also tied up in our emotions, so this doesn’t necessarily solve the philosophical problem.

    Comment by Will Jobs — May 19, 2009 @ 11:22 am

  2. I think that it is fascinating that the two choices in “the train problem” were related to different parts of the frontal lobe. Both responses require killing one person and saving five, but it is so interesting to me that the difference between pulling a lever and pushing someone in front of the train, though causing the same effect, elicits such different responses.

    Also, this article reminded me of a study I learned about in high school, the details of which can be found here: Since 2/3 of participants finished out the study and administered the highest voltage, this study showed that people were willing to follow the orders of someone they perceived to be in charge, even at the expense of morality. I think that this study provides an interesting way to look at how our instinctual moral responses work, since seeing where something fails helps in figuring out how it works.

    Comment by Hannah E. — October 5, 2009 @ 5:23 pm

  3. An interesting study would be how the “universal morals” of a group of individuals survive if unchecked by a larger community. In the Milgram experiment (explained in the comment above) there is an instigator who edges the participant into shocking a bystander. I would be interested to see how a group reacts without said instigation. A study along these lines, called the “Stanford Prison Experiment,” was conducted in the 1970s, wherein a group of 24 individuals was split in half, with one side as guards and the other as prisoners. The purpose of the experiment was to replicate and observe prison systems and their psychological effects on prisoners and guards. However, by the sixth day the guards had developed sadistic tendencies and begun abusing the other participants, and the experiment had to be concluded early to prevent serious physical and psychological damage. It would be an interesting experiment to observe, if one could be developed without risk to any participants.


    Comment by Dakota House — October 11, 2009 @ 5:34 pm

  4. After reading this article, I did some further research on the topic and came across a study in which people with damage to their ventromedial prefrontal cortex were compared with healthy subjects in how they would respond to situations such as the train scenario listed above. Researchers found that those with damage to their VMPC reacted based on “cold reasoning” rather than through their emotions much more often than those without VMPC damage.

    While the “cold reasoning” response might seem devoid of emotion, it should not, however, be an indicator of the morals of the subjects. In both train scenarios, one person will die in order to save the lives of five people. While subjects without brain damage might hesitate to throw someone to their death under the train, this is only due to their emotional guilt in the situation, not their broader morals.

    Comment by Sarah Morrison — October 26, 2009 @ 10:08 pm

  5. This issue of a “sense of morality,” especially with regard to the train scenario, is interesting in its ties to Tversky and Kahneman’s Prospect Theory, one example of formal reasoning. In their study, Tversky and Kahneman looked at risk-seeking and risk-averse behavior in choices of the same expected utility, and at how people behave depending on the framing of these choices – if a choice was worded as a gain, people demonstrated risk aversion, but if it was worded as a loss, they demonstrated risk-seeking, even if the ends of both choices were the same. A subject posed with a choice is more likely to take a big risk if they see it as avoiding a loss than if they see it as securing a gain. Tversky and Kahneman’s Prospect Theory proved highly important in economics, especially financial economics. What the study looked at, though, was the effect of changing the wording of a choice in a hypothetical situation in economic terms – how subjects’ choices in losses vs. gains depended on aversion to losses, even when the expected utility was equal. What if Tversky and Kahneman had looked not just at loss aversion but at a sense of morality as well? Like their subjects, people given the train scenario show an aversion to “loss” – actively pushing one person in front of the train – and prefer a “gain” – actively saving five people, even if one person is affected (and killed). Perhaps in looking at risk aversion in economic terms, they should have taken into consideration who would be affected by the behavior: a father of four with a 20-year mortgage may be understandably more averse to financial loss, and his behavior when faced with a loss vs. gain scenario would clearly be informed by his own sense of responsibility for others, his own sense of morality. Their experiment, then, would fit very well into this issue of universal morality if the subject variable had not been constant – if each subject had different obligations, responsibilities, and moral codes. In that sense, Tversky and Kahneman’s Prospect Theory is virtually an economic or financial theory of universal morality.

    Watkins, Thayer. Kahneman and Tversky’s Prospect Theory. San Jose State University Economics Department. Retrieved November 17, 2009.

    Comment by Elena Hershey — November 17, 2009 @ 9:41 pm

  6. There was another interesting study in which a problem was presented in two different ways. The subjects were asked which of two procedures they would use to deal with a disease: one would give a 100% chance of saving only 200 of the 600 infected people (the other 400 of whom would die), while the other would give a 1/3 chance of saving everyone but a 2/3 chance that every infected person would die. The experimenters found that the way they framed the question was very important, as it was in the example at the beginning of this article. If they quantified the choices in terms of how many people would be saved, most people chose to definitely save 200 people. If they put it in terms of how many would die, most people chose to take the risk to try to avoid any deaths at all.
    In one of Tversky and Kahneman’s specific studies (as part of the project mentioned in a comment above), they presented a question involving money. In the first explanation, they asked if people would rather definitely gain $250, or take a risk and have a 25% chance of getting $1000 but a 75% chance of getting nothing. In the second explanation, they asked if people would rather lose $750 for sure, or risk it and have a 25% chance of losing nothing with a 75% chance of losing $1000. They found that when it was stated as a gain, more people wanted the sure thing, but when it was presented as a loss, more people wanted to take a chance.
    As the train example shows, the way a question is framed is very important. This is certainly a very interesting thing to think about. I enjoyed reading this article.
    Here is a link to some information on the study, if anyone is interested:
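    The equivalence of the two framings in the money example above can be checked directly. This is a minimal editorial sketch, not part of the original study; the dollar figures are those quoted in the comment, and the helper function is illustrative:

```python
# Expected values for the gain/loss framings quoted above.
# Figures come from the comment; the helper is illustrative only.

def expected_value(outcomes):
    """Sum of probability-weighted payoffs for (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Gain framing: a sure $250 vs. a 25% chance of $1000 (75% chance of nothing).
sure_gain = expected_value([(1.0, 250)])
risky_gain = expected_value([(0.25, 1000), (0.75, 0)])

# Loss framing: a sure -$750 vs. a 75% chance of -$1000 (25% chance of nothing).
sure_loss = expected_value([(1.0, -750)])
risky_loss = expected_value([(0.75, -1000), (0.25, 0)])

print(sure_gain, risky_gain)  # 250.0 250.0 -- identical in expectation
print(sure_loss, risky_loss)  # -750.0 -750.0 -- yet people choose differently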

    Comment by Sahara Kruidenier — November 24, 2009 @ 8:37 pm

  7. This article surprised me, because most human beings live by “mob mentality” and will succumb to authority. Although we see animals as more savage and lacking compassion, the experiment with the rhesus monkeys shows that they do in fact have qualities we thought to be uniquely human. However, if you observe the results of the obedience-to-authority experiments performed by Stanley Milgram, you can see that humans may not be as compassionate as we think. He conducted these controversial experiments at Yale University in 1961–1962 with New Haven residents. He chose subjects to be the “teachers” and administer shocks of up to 450 V when the “learners,” who were really actors, got a question wrong. Shockingly, 65% of the teachers had no problem giving pleading learners shocks up to 450 V. Does this reflect on our sense of morality or on our need for social cohesion?

    Comment by pysch10502student — December 8, 2009 @ 1:10 pm

  8. Here’s more information on Milgram and his study:

    Comment by pysch10502student — December 8, 2009 @ 1:11 pm

  9. The sense of detachment associated with the train problem reminds me of Stanley Milgram’s famous (or infamous?) 1961 experiment, in which he tested people’s capacity to take orders that grew increasingly grotesque in nature. Subjects were asked to play “teachers,” and a confederate was assigned the role of “student.” The student was asked a series of word-association questions, and whenever they answered incorrectly the teacher was required to administer a shock to the student, from a low 15 V to a lethal 450 V. Since the teachers couldn’t see the student, and they were being pressed by authority, they obeyed to horrifying levels of voltage. Of course, the student was not actually being shocked, and the cries of pain were tape-recorded. This degree of separation (being in separate rooms) was key to the teachers’ continuance of the torture. It is similar to subjects’ propensity to choose option #1 of the train scenario, in which a lever is pulled. Each degree of separation from the actual harm being caused allows the person in power to have fewer qualms about what they are going to do.

    Comment by Katie De Heras — December 11, 2009 @ 11:10 pm

  10. Another point to consider in this same dilemma is bias. A study showed that subjects presented with similar dilemmas often made biased decisions. For instance, when a subject was told that ten people were in danger but that the sacrifice of eight would save the remaining two, the subject was hesitant to sacrifice the eight lives. However, if forty lives were at stake and sacrificing eight would save thirty-two, then the decision suddenly became much clearer to the subject – sacrificing the eight lives seemed like the moral thing to do – although in both situations eight lives would be sacrificed.

    It is interesting how the perception of the “value of life” changes based on figures, even though the figures, at least those of lives that will become collateral damage, are not changing. This shows a bias for numbers, and a perception of larger numbers as more compelling.
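    The shift described above tracks the proportion of the group that survives. A minimal sketch, using only the numbers quoted in the comment:

```python
# Fraction of the at-risk group that survives in each version of the dilemma.
# Eight lives are sacrificed in both cases; only the group size changes.

def fraction_saved(at_risk, sacrificed):
    return (at_risk - sacrificed) / at_risk

small_group = fraction_saved(10, 8)  # 0.2 -- only 2 of 10 survive
large_group = fraction_saved(40, 8)  # 0.8 -- 32 of 40 survive
print(small_group, large_group)
```

    The identical sacrifice of eight lives reads very differently when it is 80% of the group rather than 20%.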

    Comment by Daniele Selby — March 6, 2010 @ 12:16 am

  11. I feel as though each individual’s goal, whether pursued consciously or unconsciously, is to reach a point of self-actualization (Maslow, 1971). In some way, the majority of us take pleasure in seeing ourselves as the superhero and not the villain. In the first scenario we are simply making a choice and performing a very impersonal action, i.e. pulling a lever; in the second scenario, however, we are performing a much harder task that requires direct contact.

    In an experiment done by Stanley Milgram, participants had to administer shocks to other participants because they were following orders from the experimenter. When the cries of the victims, i.e. the reality of the situation, got louder, the task got harder and harder, and many participants quit. So perhaps the real question should be:

    is it easier to pull a lever or to push a person under a train?

    Maslow, A. H. (1971). The farther reaches of human nature. New York: Viking Press.

    Milgram, S. (1974). Obedience to authority. New York: Harper & Row.

    Comment by Alyssa Pratt — March 24, 2010 @ 11:17 pm

  12. Somewhat coincidentally, I found this post at around the same time I started to read a new book by one of the researchers mentioned in this article, Jonathan Haidt (his new book is entitled “The Righteous Mind”). He discusses the train situation (which he calls “the trolley dilemma,” though it is otherwise identical), which Greene & Cohen examined in a 2001 study. Greene & Cohen devised 20 new ethical dilemmas that involved direct physical harm (push someone in front of/under the trolley) as well as 20 that involved impersonal harm (pull a lever, push a button, etc.). Subjects were then placed in an fMRI scanner and presented with the stories electronically, using buttons to indicate their decision to harm or not to harm. Overall, Greene & Cohen’s results show almost immediate activation of the regions of the brain that preside over emotions, and “high activity in these areas correlates with the kinds of moral judgments or decisions that people ultimately make” (66). When we decide whether or not we would be willing to push someone into the path of an oncoming train in the hopes of saving more lives, no matter how much time we take to analyze the issue, perhaps our emotions are the best-suited adviser.

    Comment by Matt Allan — April 17, 2012 @ 4:11 pm

  13. While there may be some “ubiquity of the cerebral response to wrongdoing” and an evolutionary instinct to maintain community and avoid directly harming others, this instinct is frequently overpowered by other pressures. These include the power of situations to facilitate self-preservation (I doubt starving monkeys would have refrained from shocking another monkey to get food) and obedience to authority. Stanley Milgram demonstrated that subjects would administer what they believed to be very painful shocks to another person when that person was removed from view and a psychologist was pressuring them to continue. Morality was less of a priority than appeasing the experimenters, which shows social pressures are not a black-and-white hierarchy. Both obedience and avoiding harm to other community members are important evolutionary habits, and obedience is frequently even part of preserving community. Situational differences have more to do with so-called “ethical” choices than any clear definition of wrongdoing.

    The ethical train scenario posed by Hauser, Cushman, and Young does not prove universal morals. It may just reflect Tversky and Kahneman’s prospect theory of formal reasoning, in which people prefer options framed as gains and avoid options framed as losses. Pulling a lever to save five people and sacrifice one (a gain) sounds more appealing than pushing someone under a train to save five (a loss, and a more personal involvement). The articulated reasoning about these ethical differences is irrelevant, because there is no ethical difference, just a reasoning preference.

    Comment by Charlene Button — May 7, 2012 @ 7:51 pm
