If your doctor recommended a drug whose manufacturer’s consulting fees financed his summer home, would that give you pause? Would you trust a stockbroker who wanted to sell you on a risky mutual fund that gave him a commission for every sale? How about a public official touting a new energy technology made by a company she invests in?

They’re questions worth asking, because conflicts of interest like these are commonplace. In just about any profession (medicine or real estate, accounting or academia), people giving information and advice may carry agendas that bias their judgments, or find themselves in situations where duty and personal benefit clash.

Ideally, all of us would be unconflicted actors, working in the best interests of the people we serve. In reality, though, we all navigate a sea of competing desires, and some of these create financial or social pressures that interfere with our objectivity. In some cases, the consequences of these conflicts are severe enough that industries have established rules for managing them.

Within many fields, one solution has emerged: require people to disclose any ties that might sway their judgment. Such transparency, the rationale goes, encourages those in authority to behave more ethically, and lets those relying on their guidance take the bias into consideration.

But recent research by experimental psychologists is uncovering some uncomfortable truths: Disclosure doesn’t solve problems the way we think it does, and in fact it can actually backfire. Coming clean about conflicts of interest, they find, can promote less ethical behavior by advisers. And though most of us assume we’d cast a skeptical eye on advice from a doctor, stockbroker, or politician with a personal stake in our decision, disclosure about conflicts may actually lead us to make worse choices.

“None of us are saying that transparency is a bad thing,” says Daylian Cain, a behavioral economist at Yale University. “But almost always, it fails to work as well as we think it does.” By assuming that disclosure is always a benefit, he and his colleagues argue, regulators may be failing to address the real problems caused by conflicts of interest. In fact, biases are rooted deep in our psychology, and can’t be dispelled with a simple confession. Policies of disclosure, far from being a panacea, may be drawing attention away from the much harder work of removing conflicts and making sure that people’s advice and their interests align.

One of the most popular, and least costly, solutions is disclosure. The notion is that requiring experts to put everything on the table should give them an incentive to behave ethically and avoid tarnishing their reputation: Transparency begets honesty. But work by Cain, in collaboration with Don Moore at the University of California, Berkeley, and George Loewenstein at Carnegie Mellon University, finds that disclosure can have the opposite effect.

Cain, Loewenstein, and Moore conducted a series of experiments meant to mimic a situation in which a person in authority (such as a doctor, consultant, or real estate broker) is giving advice that influences another person’s decision. Certain study participants were required to make an estimate, evaluating the prices of houses, for instance. Meanwhile, other participants were selected to serve as experts: They were given additional information with which to advise the estimators. When these experts were put in a conflicted situation (they were paid according to how high the estimator guessed), they gave worse advice than if they were paid according to the accuracy of the estimate.

No surprise there: People with a conflict gave biased advice to benefit themselves. But the twist came when the researchers required the experts to disclose this conflict to the people they were advising. Instead of the transparency encouraging more responsible behavior in the experts, it actually caused them to inflate their numbers even more. In other words, disclosing the conflict of interest, far from being a solution, actually made advisers act in a more self-serving way.

“We call it moral licensing,” Moore says. “After having behaved honestly and virtuously, you then feel licensed to indulge in being a little bit bad.” Other recent findings on ethical behavior, he says, show that people compensate for virtuous acts with vice, and vice versa. “People behave as if they have a moral ‘set point,’” Moore says. Indeed, it appeared that disclosing a conflict of interest gave people a green light to behave unethically, as if they were absolved from having to consider others’ interests.

In effect, what the experts were doing was passing the buck on managing their bias to the people they were advising. What, then, about the other half of disclosure’s supposed benefits? Does disclosing a conflict of interest enable the people receiving advice to take that information with the proper grain of salt? Research again suggests the answer is no.

Sunita Sah, a researcher at Duke University’s Fuqua School of Business, has conducted experiments focusing on doctor-patient interactions, in which a doctor prescribes a medication but discloses a financial interest in the company that makes the drug. As expected, most people said such a disclosure would decrease their trust in the advice. But in practice, oddly enough, people were actually more likely to comply with the advice when the doctor’s bias was disclosed. Sah says that people feel an increased pressure to take the advice to avoid insinuating that they distrust their doctor.

Sah sees people complying with biased advice as a way of helping their advisers, even in one-off interactions between strangers participating in a study. “People feel pressure to behave generously even if it’s not in their best interest,” she says. In these situations, she says, “instead of being a warning, disclosure places this burden on the very people it’s supposed to protect.”

At a recent conference on conflicts of interest at Harvard Law School, Harvard psychologist Mahzarin Banaji said that the core problem is a fundamental misunderstanding about the pervasiveness and power of bias. We assume we’re in command of our preferences and decisions, but psychology and cognitive science have shown that much of our decision-making occurs unconsciously. Banaji pointed out that we have preferences for everything from politically similar people to the letters in our own names. “There is no ‘neutral’ at the implicit or unconscious level,” she said.

This disconnect results in policies that underestimate not only the prevalence of bias, but also its burden on society. “The big missing ingredient is that people don’t understand how dangerous conflicts of interest are in the first place,” Cain says. He points out that people’s decisions are easily influenced by information they receive beforehand, even if they know the information to be incorrect, irrelevant, or biased. This phenomenon, called anchoring, has been shown time and again in psychological experiments. Thus, experts can’t simply overlook their own personal interests, and those who get advice can’t easily discount experts’ prejudices, even if they want to.

Personal connection adds a further layer of complexity. Francesca Gino and her colleagues at Harvard Business School have found that people who are prescribed medicines by their own doctors are less likely to recognize the potential dangers of their doctors’ conflicts of interest. Although most of us recognize that conflicts of interest are a problem in the abstract, we don’t want to acknowledge them in people we know. That’s because we don’t see bias as something that affects good, intelligent people. But in fact, Gino says, “there are lots of very subtle factors that can push us to cross ethical boundaries without us realizing that these factors are having an effect.”

If disclosure is as ineffective, or even counterproductive, as these studies suggest, is there any hope for it as a tool? Some studies suggest that disclosure of conflicts of interest works better when people on the receiving end are well informed; it might, for example, work better among colleagues than between doctors and patients. Sah’s research, meanwhile, points to a number of ways disclosures can be improved. She found that people were more likely to discount biased advice from doctors if disclosures were made by a third party, if they were not made face-to-face, or if patients had a “cooling off” period to reconsider their decisions.

Even if these fixes make disclosure more effective, the true implication of these studies is that transparency is not a blanket solution to problems of corruption. “Regulators should be looking harder at eliminating conflicts,” Cain says. Unfortunately, requiring disclosure is much easier than changing the status quo. As he puts it, “I’d rather tell you I’m on the gravy train than get off it.”

Furthermore, as Moore admits, in some cases the high costs of eliminating conflicts of interest may not be worth the effort. But in circumstances where conflicts cause harm, changing the system could be worthwhile. Regulators, Moore says, need to look for ways to structure systems so that experts’ personal interests are matched with the interests of those they are helping. “Restructuring to align interests is difficult,” he says, “but when you do it, it can be beautiful.”

Courtney Humphries is a freelance writer in Boston and the author of “Superdove: How the Pigeon Took Manhattan...And the World.”

© Copyright 2011 Globe Newspaper Company.