Would you rather shock yourself or a stranger for profit?

Behind the scenes of a 'shocking' new study on human altruism


A recent study suggests most people would rather harm themselves than a stranger for profit. Lead author Molly Crockett takes us behind the scenes of the research

How much money would you give up to prevent a stranger’s pain? And how does this compare to what you’d pay to prevent your own pain?

With colleagues at University College London I addressed this question in a recent study. We were interested in quantifying how much people care about others, relative to themselves. A lack of concern for others’ suffering lies at the heart of many psychiatric disorders such as psychopathy, so developing precise laboratory measures of empathy and altruism will be important for probing the brain processes that underlie antisocial behavior.

We brought 80 pairs of volunteers into the lab and led them to different rooms so they couldn’t see or talk to each other. They drew lots to determine which would be the “decider”, and which the “receiver”. The decider then made a series of decisions between different amounts of money and different amounts of moderately painful electric shocks. The decider always received the money, but sometimes the shocks were for the decider, and sometimes the shocks were for the receiver. By observing the deciders’ choices we were able to calculate how much money they were willing to sacrifice to prevent shocks to either themselves or to the receiver.

We found that, on average, people were willing to sacrifice about twice as much money to prevent another person from being shocked as to prevent themselves from being shocked. For example, they would give up £8 to prevent 20 shocks to another person but only £4 to prevent 20 shocks to themselves. These results are surprising because most previous laboratory studies of altruism suggested that people care about themselves far more than about others.
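To make the arithmetic concrete, here is a minimal sketch in Python using only the example figures quoted above (not the study’s raw data): the implied “exchange rate” is simply the money sacrificed divided by the number of shocks prevented.

```python
# Illustrative only: uses the example figures quoted above, not the study's raw data.
# The "exchange rate" is the money a decider gives up per shock prevented.

def exchange_rate(money_sacrificed_gbp: float, shocks_prevented: int) -> float:
    """Return the money (in £) sacrificed per shock prevented."""
    return money_sacrificed_gbp / shocks_prevented

rate_other = exchange_rate(8.0, 20)  # shocks would have gone to the receiver
rate_self = exchange_rate(4.0, 20)   # shocks would have gone to the decider

print(f"£{rate_other:.2f} per shock to spare the other person")  # £0.40
print(f"£{rate_self:.2f} per shock to spare themselves")         # £0.20
print(f"Ratio (other/self): {rate_other / rate_self:.1f}")       # 2.0
```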

Our findings have received widespread media coverage, and, as often happens, some aspects of the work have been misconstrued. So I thought I’d provide some extra background here to help clear up some of these misconceptions.

How did you administer pain in the laboratory? Is this ethical?

We used an electric stimulation device called a Digitimer to deliver electric shocks to the left wrist of our volunteers. Shocks delivered by this device can range from imperceptible to intolerably painful, depending on the electric current level. Importantly, the shocks are safe and don’t cause any damage to the skin. To ensure that no volunteer received a shock that was intolerably painful, we always began our experiment with a thresholding procedure that has been used in many previous studies. During thresholding we start by delivering a shock at a very low current level – 0.1 milliamps (mA) – that is almost imperceptible. We then gradually increase the current level, shock by shock, and the volunteer rates each shock on a scale from 0 (imperceptible) to 10 (intolerable). We stop increasing the current once the volunteer’s rating reaches a 10. For the shocks used in the experiment we use a current level that corresponds to a rating of 8 out of 10, so the shocks are unpleasant, but not intolerable. Subjectively, they feel a bit like a bee sting that lasts for half a second, or like running your hand momentarily under very hot water.
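For readers curious how such a calibration might look in code, below is a minimal sketch of an ascending thresholding procedure of this kind. The deliver_shock function is a hypothetical stand-in for the device interface, and the step size and starting current are illustrative assumptions, not the study’s exact protocol.

```python
# Illustrative sketch of an ascending pain-thresholding procedure.
# deliver_shock() is a hypothetical stand-in for the stimulation hardware;
# the 0.1 mA start and step size are assumptions, not the study's exact settings.

def deliver_shock(current_ma: float) -> None:
    print(f"(delivering a {current_ma:.1f} mA shock)")  # placeholder for the hardware call

def threshold_volunteer(start_ma: float = 0.1, step_ma: float = 0.1,
                        target_rating: int = 8) -> float:
    """Increase the current step by step until the volunteer rates a shock 10/10
    (intolerable), then return the tested current rated closest to target_rating."""
    current = start_ma
    ratings = {}  # current level (mA) -> subjective rating (0-10)
    while True:
        deliver_shock(current)
        rating = int(input("Rate that shock from 0 (imperceptible) to 10 (intolerable): "))
        ratings[current] = rating
        if rating >= 10:
            break
        current += step_ma
    # Use the current whose rating was closest to the target (8 out of 10 here).
    return min(ratings, key=lambda c: abs(ratings[c] - target_rating))

# test_current = threshold_volunteer()  # current level used for the experiment's shocks
```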

The thresholding procedure is necessary because there are large individual differences in pain thresholds. Whilst one person might find a current level of 3 mA intolerable, another might find it barely perceptible. In our studies we observed pain thresholds ranging from 0.4 mA to more than 10 mA. At low subjective levels the shocks are not at all unpleasant; many find low-level shocks interesting or even pleasurably stimulating. So one should be skeptical of studies that claim to administer unpleasant or painful shocks but do not use a thresholding procedure, because such studies run the risk that many of their participants did not actually find the shocks unpleasant.

Thresholding is also important for ethical reasons. Our procedures have undergone extensive review by the university’s research ethics committee, and central to the ethics of our study is ensuring that no one receives a shock beyond their own tolerance threshold. If one delivers a standard current level to all volunteers – say, 3 mA – without performing thresholding first, one runs the risk that this will exceed the tolerance threshold of a subset of volunteers. All volunteers in our study are fully informed of the procedures involved before they consent to taking part, and they are free to withdraw from the study at any time without penalty. By following these procedures we are able to deliver unpleasant stimuli in the laboratory in an ethical manner.

Do your results prove that altruism is “hard-wired” or innate?

Nope. Our experiment can say nothing about the extent to which altruism is innate versus learned through experience. Addressing this question is actually quite difficult; to “prove” that a given behavior is innate is next to impossible. One source of relevant evidence comes from studies on infants. If a behavior can be observed in very young infants, this implies that it may be innate since infants have had very little time to learn through experience. Studies by researchers at Yale and the University of British Columbia have shown that even 3-month-old infants show a preference for helpful characters over harmful characters, suggesting that the roots of morality may be innate. But our study was conducted on adults aged 18-35, so they would have had plenty of time to learn about the moral costs of harming others.

But surely people in your study only behaved altruistically because they knew they were being observed?

This is unlikely for several reasons. First, we anticipated that people might care about their reputations and so in our instructions to participants we emphasized that their decisions would be confidential. Participants were alone whilst making their decisions, and only identifiable by an ID number, so their name and other identifying details were never linked to their decisions.

Second, in a separate part of the study we gave participants the opportunity to donate money to charity. In this context people were quite selfish, keeping on average 80% of the money for themselves. If the laboratory setting had induced altruism simply because people felt they were being observed, we would have seen altruistic behavior in the charity decisions as well. The fact that we didn’t suggests that participants understood their decisions were private, and that when making decisions about pain for themselves and others, most people genuinely preferred to avoid harming others more than harming themselves.

Do your results prove that altruism actually exists?

First, I’d point out that lab experiments are not necessary to demonstrate the existence of human altruism – examples of selfless acts of kindness toward strangers abound in the real world. And previous lab studies have shown that humans, monkeys, and even rats are sometimes willing to sacrifice personal benefits to spare another’s suffering.

An open question, however, is to what extent altruistic behaviors are motivated by a “true” concern for the well-being of others, versus more self-serving motives such as the desire to boost one’s reputation or even the pleasant feeling that results from being kind. Although I’m fairly confident that the volunteers in our recent study were not making altruistic choices out of concern for their reputation, we cannot rule out the possibility that they behaved altruistically in order to avoid feeling guilty, or to feel good about themselves, rather than because they truly cared about the suffering of others.

But is it even worth asking the question of whether “true” altruism actually exists? Stanford neuroscientist Jamil Zaki argues not:

Attempts to identify true altruism often boil down to redacting motivation from behavior altogether. The story goes that in order to be pure, helping others must dissociate from personal desire (to kiss up, look good, feel rewarded, and so forth). But it is logically fallacious to think of any human behavior as amotivated. De facto, when people engage in actions, it is because they want to. Second... critics of “impure” altruism chide helpers for acting in human ways, for instance by doing things that feel good. The ideal, then, seems to entail acting altruistically while not enjoying those actions one bit. To me, this is no ideal at all. I think it’s profound and downright beautiful to think that our core emotional makeup can be tuned towards others, causing us to feel good when we do. Color me selfish, but I’d take that impure altruism over a de-enervated, floating ideal any day.

In my view, the importance of asking whether “true” altruism exists is dwarfed by the importance of understanding the mechanics of altruistic behavior. Uncovering the computations that factor into moral decisions could suggest ways to intervene and encourage people to be more altruistic. We need to do a lot more research in order to understand precisely how moral decision-making works, but the methods we’re developing can help to tease apart the factors that push people towards altruism vs. selfishness.
