Is Social Psychology Biased Against Republicans?

On January 27, 2011, from a stage in the middle of the San Antonio Convention Center, Jonathan Haidt addressed the participants of the annual meeting of the Society for Personality and Social Psychology. The topic was an ambitious one: a vision for social psychology in the year 2020. Haidt began by reviewing the field that he is best known for, moral psychology. Then he threw a curveball. He would, he told the gathering of about a thousand social-psychology professors, students, and post-docs, like some audience participation. By a show of hands, how would those present describe their political orientation? First came the liberals: a “sea of hands,” comprising about eighty per cent of the room, Haidt later recalled. Next, the centrists or moderates. Twenty hands. Next, the libertarians. Twelve hands. And last, the conservatives. Three hands.

Social psychology, Haidt went on, had an obvious problem: a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity. It discouraged conservative students from joining the field, and it discouraged conservative members from pursuing certain lines of argument. It also introduced bias into research questions, methodology, and, ultimately, publications. The topics that social psychologists chose to study and how they chose to study them, he argued, suffered from homogeneity. The effect was limited, Haidt was quick to point out, to areas that concerned political ideology and politicized notions, like race, gender, stereotyping, and power and inequality. “It’s not like the whole field is undercut, but when it comes to research on controversial topics, the effect is most pronounced,” he later told me. (Haidt has now put his remarks in more formal terms, complete with data, in a paper forthcoming this winter in Behavioral and Brain Sciences.)

Haidt was far from the first to voice concern over the liberal slant in academia, broadly speaking, and in social psychology in particular. He was, however, the first to do it quite so visibly—and the reactions were vocal. At first, Haidt was pleased. “People responded very constructively,” he said. “They listened carefully, took it seriously. That speaks very well for the field. I’ve never felt as if raising this issue has made me into a pariah or damaged me in any way.” For the most part, his colleagues have continued to support his claims—or, at least, the need to investigate them further. Some, however, reacted with indignation.

The critique started with data. True, there was little doubt that conservatives in the world of psychology were few. A 2012 survey of social psychologists throughout the country found a fourteen-to-one ratio of Democrats to Republicans. But, skeptics asked, where were the hard numbers pointing to bias, whether in the selection of professionals or in the publication process? Anecdotal evidence, the Harvard psychologist Daniel Gilbert pointed out, proved nothing. Maybe liberals simply wanted to become professors more often than conservatives did. “Liberals may be more interested in new ideas, more willing to work for peanuts, or just more intelligent,” he wrote. The N.Y.U. political psychologist John Jost made the point even more strongly, calling Haidt’s remarks “armchair demography.” Jost wrote, “Haidt fails to grapple meaningfully with the question of why nearly all of the best minds in science find liberal ideas to be closer to the mark with respect to evolution, human nature, mental health, close relationships, intergroup relations, ethics, social justice, conflict resolution, environmental sustainability, and so on.”

The views on the other side are equally strong. When I asked Paul Bloom, a psychologist at Yale who edits the journal where Haidt’s paper will appear, what he thought of the research, he pointed out what he believed to be a major inconsistency in the field’s responses. “There’s often a lot of irony in this area,” he said. “The same people who are exquisitely sensitive to discrimination in other areas are often violently antagonistic when it comes to political ideology, bringing up clichéd arguments that they wouldn’t accept in other domains: ‘They aren’t smart enough.’ ‘They don’t want to be in the field.’ ”

The Nobel Prize-winning behavioral economist Daniel Kahneman called Haidt’s work “great” and “a real service.” The University of British Columbia psychologist Steven Heine pointed out, “Science benefits from diverse perspectives, and key advances often occur when ideas slip across disciplinary borders. But many invisible norms and practices in a field can discourage the mingling of diverse ideas.” Political homogeneity, he went on, comes at “a substantial cost” to research quality. Conservative viewpoints, the Florida State University psychologist Roy Baumeister added, “would inform and elevate how we understand a huge part of life and of culture.”

So, apart from generating strong emotions, what do the data actually say?

A year after Haidt’s lecture, the Tilburg University psychologists Yoel Inbar and Joris Lammers published the results of a series of surveys conducted with approximately eight hundred social psychologists—all members of the Society for Personality and Social Psychology. In the first survey, they repeated a more detailed version of Haidt’s query: How did the participants self-identify politically? The question, however, was asked separately regarding social, economic, and foreign-policy issues. Haidt, they found, was both wrong and right. Yes, the vast majority of respondents reported themselves to be liberal in all three areas. But the percentages varied. Regarding economic affairs, approximately nineteen per cent called themselves moderates, and eighteen per cent, conservative. On foreign policy, just over twenty-one per cent were moderate, and ten per cent, conservative. It was only on the social-issues scale that the numbers reflected Haidt’s fears: more than ninety per cent reported themselves to be liberal, and just under four per cent, conservative.

When Inbar and Lammers contacted S.P.S.P. members a second time, six months later, they found that the second element of Haidt’s assertion—that the climate in social psychology was harsh for conservative thinkers—was on point. This time, after revealing their general political leanings, the participants were asked about the environment in the field: How hostile did they think it was? Did they feel free to express their political ideas? As the degree of conservatism rose, so, too, did the hostility that people experienced. Conservatives really were significantly more afraid to speak out. Meanwhile, the liberals thought that tolerance was high for everyone. The more liberal they were, the less they thought discrimination of any sort would take place.

As a final step, the team asked each person a series of questions to see how willing she would personally be to do something that could be considered discrimination against a conservative. Here, an interesting disconnect emerged between self-perception—does my field discriminate?—and responses about hypothetical behaviors. Over all, close to nineteen per cent reported that they would have a bias against a conservative-leaning paper; twenty-four per cent, against a conservative-leaning grant application; fourteen per cent, against inviting a conservative to a symposium; and thirty-seven and a half per cent, against choosing a conservative as a future colleague. Respondents continued to insist that no discrimination existed, yet their answers about how they would act belied that idealized reality.

Haidt hadn’t planned on continuing his research after his initial speech. He had meant to raise the issues and wait for others to take over. But, two months after the convention, he came across the thoughts of a libertarian graduate student (now a co-author of Haidt’s new paper), published on the S.P.S.P. listserv. José Duarte hadn’t been at the S.P.S.P. meeting, he told me when we spoke. But the subsequent attention sparked his interest. He had long been fascinated by the methodological issues that stemmed from a lack of political skepticism in the field, and he also believed that he had been rejected from one graduate program because of his political views. Duarte has written about that experience, and Haidt has compiled a list of other student accounts on his personal Web page. The problem, Haidt decided, was widespread enough that it merited further research.

Perhaps even more potentially problematic than negative personal experience is the possibility that bias may influence research quality: its design, execution, evaluation, and interpretation. In 1975, Stephen Abramowitz and his colleagues sent a fake manuscript to eight hundred reviewers from the American Psychological Association—four hundred more liberal ones (fellows of the Society for the Psychological Study of Social Issues and editors of the Journal of Social Issues) and four hundred less liberal (social and personality psychologists who didn’t fit either of the other criteria). The paper detailed the psychological well-being of student protesters who had occupied a college administration building and compared them to their non-activist classmates. In one version, the study found that the protesters were more psychologically healthy. In another, it was the more passive group that emerged as mentally healthier. The rest of the paper was identical. And yet, the two papers were not evaluated identically. A strong favorable reaction was three times more likely when the paper echoed one’s political beliefs—that is, when the more liberal reviewers read the version that portrayed the protesters as healthier.

More than twenty years later, the University of Pennsylvania marketing professor J. Scott Armstrong conducted a meta-analysis of all studies of peer review conducted since (and including) Abramowitz’s, to determine whether there was, in fact, a systemic problem. He concluded that the peer-review system was highly unfair and discouraged innovation. Some of the reasons stemmed from known biases: papers from more famous institutions, for instance, were judged more favorably than those from unknown ones, and those authored by men were viewed more favorably than those by women if the reviewers were male, and vice versa. But others had to do with the less visible bias of belief—the type of bias implied in Haidt’s argument and Duarte’s methodological concerns. “Findings that conflict with current beliefs are often judged to have deficits,” Armstrong wrote. The questions at issue here aren’t ones that can be easily tested as empirical fact, like whether the sky is green or whether French fries make you skinny. They concern the nebulous realm of philosophical and ideological convictions about the way the world should be.

One early study had psychologists review abstracts that were identical except for the result, and found that participants “rated those in which the results were in accord with their own beliefs as better.” Another found that reviewers rejected papers with controversial findings because of “poor methodology” while accepting papers with identical methods if they supported more conventional beliefs in the field. Yet a third, involving both graduate students and practicing scientists, showed that research was rated as significantly higher in quality if it agreed with the rater’s prior beliefs. When Armstrong and the Drake University professor Raymond Hubbard followed publication records at sixteen American Psychological Association journals over a two-year period, comparing rejected with published papers—the journals’ editors had agreed to share submitted materials—they found that papers on controversial topics were reviewed far more harshly. Only one controversial paper, in fact, had managed to receive positive reviews from all of its reviewers. There was a secret, however, about that one. “The editor revealed that he had been determined to publish the paper, so he had sought referees that he thought would provide favorable reviews,” Armstrong wrote.

All these studies and analyses are classic examples of confirmation bias: when it comes to questions of subjective belief, we more easily believe the things that mesh with our general world view. When something clashes with our vision of how things should be, we look immediately for the flaws. That, in a sense, is the heart of Haidt’s concern. If findings that rub liberals the wrong way can’t be reviewed impartially—and if those that match their ideals are given more lenient treatment—we have a problem. In a review of the literature on bias in interpreting research results, the social psychologist Robert MacCoun (then at Berkeley and now at Stanford) found that “biased research interpretation is a common phenomenon, and an overdetermined one.” It could be intentional, but often it was the result of motivational, under-the-radar biases, subtle shifts in interpretation imperceptible to the researchers themselves. The bias is especially strong when we’re confronted with topics that affect us or our group identity directly—something that ideological debates often do.

Haidt believes that the problems begin with the selection and formulation of research topics. In his paper, he and his co-authors review how liberal values can influence the choice of topics and methods of research. What questions, for instance, do researchers choose to tackle? The ones likely to get traction are those that most resonate with the researchers—after all, psychologists often jest that research is little more than “me-search.” Given a homogeneity of views, topics can fall into ruts because conflicting approaches won’t be considered. Empirically, viewpoint diversity is one of the most effective ways of attaining creative and innovative breakthroughs in any field; its absence tends to produce the opposite result.

The framing of research questions, too, can become biased without conscious consideration. One study that Haidt and his co-authors analyzed, for instance, found that individuals who believe that social systems should be organized in hierarchies were more likely to make unethical decisions, and that those who scored high on a scale of willingness to submit to authority were more likely to go along with those decisions. At first glance, that seems perfectly reasonable. But, when Haidt and his colleagues dug deeper, they found that the study design was stacked in favor of the outcome. “Unethical decisions” here meant not formally taking the side of a female colleague in a sexual-harassment complaint, or placing your company’s well-being above some non-specific environmental harm that the company’s activities might be causing. The values of feminism and environmentalism, Haidt argues, are embedded in the very definition of ethics here. But what if the people who said that they were against this “ethical” behavior simply wanted more information? The vignettes provided no color or context. Could it be ethical to wait to find out more before taking sides? Could the environmental harm caused by the company be relatively minor compared with the cost to shareholders?

There’s a simple test of whether a question is objective or ideologically loaded: what, in 1994, the political psychologist Phil Tetlock termed the turnabout test. Imagine the opposite of your question. If it sounds loaded, your original phrasing probably is, too. So, if the premise of a study is to look at something like the “denial of the irrationality of many religious beliefs,” turn it around to be “the denial of the benefits of church attendance.” Something like the “denial of the economic inequality caused by a strong concentration of wealth” becomes the “denial of the benefits of free-market capitalism.” The point isn’t that researchers need more conservative values. It’s that they need to avoid value-driven formulations in the first place if they want an objective assessment of a question.

Eldridge Cleaver, a leader of the Black Panthers, once remarked, “Too much agreement kills a chat.” It is, in other words, boring. It doesn’t challenge thought in the same way as an argument. The lack of political diversity in social psychology in no way means the resulting research is bad or flawed. What it does mean is that it’s limited. Certain areas aren’t being explored, certain questions aren’t being asked, certain ideas aren’t being challenged—and, in some cases, certain people aren’t being given a chance to speak up.

There is a case to be made that, despite the imbalance, no formal changes are needed, and that, on the whole, for all its problems, social psychology continues to function remarkably well and regularly produces high-quality research. Controversial work gets done. Even studies that directly challenge the field—like Haidt’s—are publicized and inspire healthy debate. In a sense, that’s the best evidence against him: if he’s able to make his point and the psychology world is willing to listen, the problem can’t be so deep. Perhaps people just need to be more sensitive and aware.

And yet the evidence for more substantial bias, against both individuals and research topics and directions, is hard to dismiss—and the hostility that some social psychologists have expressed toward the data suggests that self-correction may not be an adequate remedy. Haidt believes that the way forward is through a system of affirmative action: engaging in extra recruitment and consciousness-raising efforts for political conservatives in much the same way as for women or ethnic minorities. That approach, however, misses the fundamental nature of the problem: if the underlying issues in research aren’t addressed, adding conservatives won’t help. (That’s not to mention that affirmative action is itself an ideologically fraught topic; this would be a case of one hot-button issue being used to solve another.) Instead, social psychology needs to take the tools that the field itself has shown to reduce bias in individuals in other circumstances and apply them to ideology.

One of the foremost researchers of bias, Kahneman, has pointed out that knowing about a bias isn’t enough to make it disappear from your decision-making calculus. He suggests, instead, instituting a system that is as objective as possible wherever bias may enter into a choice. For instance, when interviewing job candidates, have a scorecard and a checklist that are identical from person to person. Use them, rather than a personal impression, to inform your decision. Often, he has found, people end up with a disconnect between their subjective impressions and the hard data. Perhaps a similar, objective system needs to be in place to help prevent ideological bias from creeping into evaluations of candidates and research alike.
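
To make the idea concrete, here is a minimal sketch of what such a fixed scorecard might look like in code. The criteria, the five-point scale, and the names are hypothetical illustrations, not drawn from Kahneman’s own materials; the point is only that every candidate is rated on the same card, and that no total exists until every criterion has been scored.

```python
# A minimal sketch of a structured, Kahneman-style evaluation: every candidate
# is rated on the same fixed criteria, and the decision rests on the totals
# rather than on a global impression. Criteria and scale are hypothetical.
from dataclasses import dataclass, field

CRITERIA = ("methodological rigor", "statistical training", "writing",
            "teaching", "collegiality")

@dataclass
class Scorecard:
    candidate: str
    scores: dict = field(default_factory=dict)  # criterion -> score, 1 to 5

    def rate(self, criterion: str, score: int) -> None:
        if criterion not in CRITERIA:
            raise ValueError(f"not on the card: {criterion}")
        if not 1 <= score <= 5:
            raise ValueError("scores run from 1 to 5")
        self.scores[criterion] = score

    def total(self) -> int:
        # Refuse to produce a total until every criterion is rated, so no
        # candidate is judged on a partial, impression-driven card.
        missing = set(CRITERIA) - set(self.scores)
        if missing:
            raise ValueError(f"unrated criteria: {missing}")
        return sum(self.scores.values())
```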

For studies and grants, though, even that may not be enough. What is needed in those cases is a blinding of the peer-review system—both of applicants’ names and personal backgrounds and of the hypotheses (or findings) of their research. If you want to research Democrats and Republicans, say—or any other ideologically loaded groups—call them Purples and Oranges for the duration of the paper. The methods and research design can then be evaluated without ideological predispositions. Blind peer review for papers and grants would also mitigate a number of other bias problems, including biases against certain people, institutions, and long-held ideas. As for ideologically sensitive papers that have already been published, blind their data as well and reanalyze the premises and conclusions, pairing them with Tetlock’s turnabout tests. Is the opposite approach nonsensical? Chances are, then, that this one is, too.
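
As an illustration, here is a minimal sketch of what blinding ideological labels in a dataset might look like before it goes out for review or reanalysis. The file name, column name, and labels are all hypothetical; the mechanics are simply a sealed key that maps loaded group names to neutral ones.

```python
# A minimal sketch of label-blinding: ideologically loaded group names are
# replaced with neutral stand-ins before reviewers or re-analysts see the
# data. The file name, column name, and labels here are all hypothetical.
import csv
import random

groups = ["Democrat", "Republican"]
random.shuffle(groups)  # randomize which group receives which neutral label
blinding_key = {groups[0]: "Purple", groups[1]: "Orange"}

with open("responses.csv", newline="") as src, \
        open("responses_blinded.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["party"] = blinding_key[row["party"]]  # e.g., Democrat -> Purple
        writer.writerow(row)

# The key is sealed away from reviewers and opened only after evaluation.
with open("blinding_key.txt", "w") as f:
    f.write(repr(blinding_key))
```

A reviewer then sees only Purples and Oranges, labels that carry no ideological freight to react to; the key is unsealed, and the original names restored, only after the evaluation is complete.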

Whatever the ultimate solution, social psychologists are well-equipped to fix the problem: they are, after all, only falling prey to the same bias that they’ve so often identified in others. “We’re social psychologists,” Haidt said. “We’re in the best possible position to understand and address hidden bias. And, if you care about psychological science, that’s what we have to do.”