Groups are often said to believe things. For instance, we talk about PETA believing that factory farms should be abolished, the Catholic Church believing that the Pope is infallible, and the US government believing that people have the right to free speech. But how can we make sense of a group believing something?

This is an important question, from both a theoretical and a practical point of view. If we don’t understand what group belief is, then we won’t be able to grasp what it means to say that a group knows or should have known something. This matters a great deal, given that belief, knowledge, and culpable ignorance are intimately connected to moral and legal responsibility.

For instance, if the Bush Administration believed that Iraq did not have weapons of mass destruction, then not only did the Administration lie to the public in saying that it did, but it is also fully culpable for the hundreds of thousands of lives needlessly lost in the Iraq war. And if BP should have known that its Deepwater Horizon oil rig was in need of repairs, then it is responsible for the vast quantity of oil that spilled into the Gulf of Mexico.

Despite the importance of this question, the topic of group belief has received surprisingly little attention in the philosophical literature. So far, the majority of those who have addressed it favor an inflationary approach on which groups are treated as entities with “minds of their own.” That is to say, groups are something more than the mere collection of their members, and group belief is something more than their individual beliefs. This rather bold view is typically motivated by arguments that claim a group can be properly said to believe something even when not a single one of its members believes it. A classic example of this sort of case is where a group decides to let a view “stand” as what the group thinks, despite the fact that none of its members actually holds the view in question.

For instance, suppose that the Philosophy Department at a university is deliberating about the final candidate to whom it will extend admission to its graduate program. After hours of discussion, all of the members jointly agree that Jane Smith is the most qualified candidate from the remaining pool of applicants. However, not a single member of the department actually believes this; instead, they all think that Jane Smith is the candidate who is most likely to be approved by the administration. Here, it is argued that the Philosophy Department itself believes that Jane Smith is the most qualified candidate for admission, even though none of the members holds this belief. This attribution of belief to the group is supported by looking at its actions: the group asserts that Jane Smith is the most qualified candidate, it defends this position in conversation with administrators, it heavily recruits her to join the department, and so on. Why does the group do all of this? The most natural explanation of how the group behaves is that it really does think Jane is the best candidate—and this can be true even if each group member would deny it individually.

This argument has led some philosophers to say that a group’s believing something should be understood in terms of the members of the group intentionally and openly jointly accepting it, where it is possible to accept something without believing it. The Philosophy Department above, then, believes that Jane Smith is the most qualified candidate for admission, even though none of the members holds this belief, precisely because they jointly agree to let this position stand as the group’s.

There is, however, what I take to be a decisive objection to this way of thinking about group belief. Groups lie, and they do so with some frequency: a cursory review of recent news pulls up stories about the lies of Halliburton, Enron, the Bush Administration, and various pharmaceutical companies. And no matter how we understand group lies, a minimum condition is that a group must state what it believes to be false.

Here is a paradigmatic group lie, slightly fictionalized from a real case: Philip Morris, one of the largest tobacco companies in the world, is aware of the massive amounts of scientific evidence revealing not only the addictiveness of smoking, but also its links with lung cancer. While all of the members of the board of directors of the company believe this conclusion, they all openly decide and then jointly accept that, because of what is at stake financially, the official position of Philip Morris is that smoking is neither highly addictive nor detrimental to one’s health. This claim is then published in all of their advertising materials and defended against objections.

Herein lies the problem with the joint acceptance account: an adequate view of group belief should be able to tell the difference between a group’s stating its belief and a group’s lying. On the joint acceptance account, however, the actions of Philip Morris in the case above make it the case that the group believes that smoking is neither highly addictive nor detrimental to one’s health. The relevant members of the company—namely, the board of directors—not only jointly accept this proposition, but also support it through their public statements, actions, planning, and so on. But surely the most natural way to think of what the company is doing is that it is lying about the health risks of smoking. Philip Morris says what it does, not because the company genuinely believes that smoking isn’t dangerous, but because it wants to deceive others into believing it.

Because the joint acceptance account confuses group belief with group lying, we should look for a new way to think about group belief.