October 31, 2005

Porn Stats

Come to think of it, there hasn't ever been a good pornography discussion on this site—and this post probably isn't it—but some of the facts laid out in this American Sexuality article on the subject seem worth tucking away for future reference:

From research and the testimony of women who have been prostituted and used in pornography, we know that childhood sexual assault (which often leads victims to see their value in the world primarily as the ability to provide sexual pleasure for men) and economic hardship (a lack of meaningful employment choices at a livable wage) are key factors in many women’s decisions to enter the sex industry. (For a good summary of this evidence see Margaret Baldwin’s article "Split at the Root: Prostitution and Feminist Discourses of Law Reform," 5 Yale Journal of Law & Feminism 47, 1992.) We know how women in the sex industry—not all, but many—routinely dissociate to cope with what they do. We know that in one study of 130 street prostitutes, 68% met the diagnostic criteria for post-traumatic stress disorder. (For details on this, see the work of Melissa Farley, "Prostitution, Violence, and Post-Traumatic Stress Disorder.")

Clip. Good magazine, too, although I'm a little dismayed that left-wing critiques of Queer Eye for the Straight Guy haven't really advanced much in the past two years.

Alito and Abortion

So Samuel A. Alito, Jr. will be the new Supreme Court dude. Emphasis on "dude". Or emphasis on "fascist". Whatever. Anyway, I've been reading his infamous dissent in Planned Parenthood v. Casey, the one in which he upheld a spousal notification law for abortions, and it's important to hash this out. The conservative defense of Alito will be that it wasn't his job to decide whether the law was good public policy or not, merely to decide whether it was constitutional; and on the latter, he was upholding what he thought were the precedents on abortion at the time. That's not implausible; see this passage:

Taken together, Justice O’Connor’s [earlier] opinions reveal that an undue burden does not exist unless a law (a) prohibits abortion or gives another person the authority to veto an abortion or (b) has the practical effect of imposing "severe limitations," rather than simply inhibiting abortions "to some degree" or inhibiting "some women."

In Alito's defense, it's sometimes hard to figure out exactly what Sandra Day O'Connor intends in her opinions—often only she knows for sure—and prior to Planned Parenthood, the Supreme Court had placed restrictions on abortion that, while not "severe," probably did prevent some women from getting abortions. So Alito's ruling partially stems from previous Supreme Court sloppiness, it seems. Meanwhile, the plaintiffs who opposed the spousal notification law had not shown that the 5 percent of women who don't notify their husbands would in fact be harmed by the new law. (The law leaves an out for women who have "reason to believe that notification is likely to result in the infliction of bodily injury upon her.") On one level, then, Alito's opinion is sort of reasonable.



But on another level, it's not. It's ridiculous. It's dangerous. It's wrong. According to Alito, because only a small number of women might face an "undue burden" in theory, and even that's not known for sure, the law is just fine and dandy? What kind of legal principle is that? The Supreme Court obviously disagreed with Alito, noting that regardless of whether 95 percent of women would be unharmed by the law, "[l]egislation is measured for consistency with the Constitution by its impact on those whose conduct it affects." And that includes women potentially affected.



This all matters very, very much because in an upcoming abortion case, Ayotte v. Planned Parenthood, the Supreme Court will decide just this sort of dry procedural issue: whether litigants need to show that an abortion restriction places an "undue burden" on women in the abstract—and is therefore unconstitutional—or must show that it places an "undue burden" in a particular case. Alito would appear to side with the latter view, and a ruling this way would make it very hard for women to challenge abortion restrictions (litigants would have to show that parts of the law affect them personally), and the net effect would be that Roe v. Wade, for all practical purposes, would be crippled—states could leave restrictions on the books for many years before ever being challenged.



The political issue here is that over 70 percent of Americans support spousal notification laws, and if Democrats try to fight on that terrain, they could well lose. [EDIT: Sorry, I didn't intend this to mean that they shouldn't even try to convince people otherwise; they should.] But there's so much more at stake here.

October 29, 2005

Feed the Beast

"Starve the beast"—Grover Norquist's theory that cutting taxes and running massive deficits will somehow force Congress to rein in spending—obviously looks flimsy after the high-spending Reagan and Bush years. Even the Cato Institute agrees. But via AngryBear, this old Daniel Shaviro article makes the point that cutting taxes and running huge deficits not only fails to curb spending, but also increases the influence of interest groups:

The increased fiscal gap also makes future government policy far less predictable. Having a looming debt of that size will stir every interest group in Washington to try to influence future policy. It won't be possible to take any government commitment for granted for more than a few years. With even Social Security and Medicare likely to be on the chopping block eventually, no group or lobby will be able to rely on political inertia to protect what it now has. That is an enviable state for members of Congress set on gaining campaign funds, but a worrisome situation for the rest of us.

Judging from the past four years, this seems anecdotally true—out-of-control deficits lead not only to more spending, but worse spending, tilted heavily towards lobbyists and other interest groups. Alternatively, one could look at it this way: when various political interests see certain groups rewarded with tax cuts, they feel they should be rewarded too. So the squabbling begins. Meanwhile, so long as government spending is being paid for with borrowing and future taxes rather than present-day taxes, it's very easy for Congress to spend more than it otherwise would. (Which is why, of course, big-government liberals have usually opposed balanced budgets.) Ultimately, it's probably easiest to restrain government spending after tax increases—since that sets a general tone of fiscal austerity, and perhaps makes it easier for Congressmen to turn down spending requests. Divided government probably helps too.



Obviously, though, from a Republican perspective, Bush- and Reagan-style tax cuts do accomplish two very important things: first, they redistribute wealth upwards—rather than tax the rentier class to pay for spending, the government just borrows from them instead and pays them interest for their generosity (during the Reagan years, that interest was paid with payroll tax hikes on workers); and second, the tax cuts put Democrats in a bind by forcing them to raise taxes if and when they come to power, which hurts them politically, and forces the "responsible" party to rein in its own preferred spending programs (the tax-cutting party, meanwhile, can spend more freely). It's all very clever, at least so long as deficits don't collapse the economy and prompt a socialist takeover of government. But that looks increasingly unlikely.

The Other Intelligence Story

Hm. What to write about? Scooter Libby? Nah, not dull enough. Oh, what about John Negroponte's new "National Intelligence Strategy," released last week? Yes, that's the dullness we need. Important, though, especially since the new report doesn't have much to say about one of, I think, the United States' most persistent intelligence weaknesses over the past decade. More on that in a bit. For now, it's enough to note that Negroponte's report is full of resolutions to coordinate this and integrate that and improve the other, and it's all focused, naturally, on using intelligence for combating terrorism, stopping WMD proliferation, and, uh, democracy promotion. Apart from the last, which is bizarre, this is all pretty uncontroversial and likely useful—up to a point.



The idea that we need to "improve and integrate" our intelligence is always a nice platitude, but it's worth stepping back and asking how we got to where we are. In this day and age, curiously, every major foreign policy move needs to be backed, it seems, by ultra-solid intelligence—as if to give the public a veneer of objectivity for what are often simply judgment calls, hunches, guesses. That alone puts a very high degree of natural political pressure on what is otherwise an inherently imprecise process. The Bush administration may have been particularly flagrant about "stove-piping" CIA reports during the march to war in Iraq, true, but any policymaker who gets it in his or her head to pursue a course of action will have to do the same to some degree or other, because he or she will want backing from the intelligence agencies, and they can often provide no such thing.



Now Congress can always re-jigger the ways in which the various agencies are set up, as it recently did, and maybe that will alleviate some of the pressure from here, but the implicit "politicization" of intelligence can never go away completely. In a better world, policymakers would just acknowledge that intelligence is highly imperfect even in the best of times, and tell voters that ultimately, their national security decisions are primarily judgment calls, rather than obvious conclusions borne of intelligence. So many charades could be dispensed with. (Including the idea that ordinary citizens aren't informed enough to form an opinion on foreign policy decisions—a fiction that should have been buried by the Iraq war.) This cultural change will never happen, of course. So the real problem is that foreign policies that put an impossibly high burden on intelligence—the Bush doctrine and preventive war come to mind—will likely fail more often than not.



(As a side note, if only because I don't know where else to stick this, Chaim Kaufmann's "Threat Inflation and the Failure of the Marketplace of Ideas" is one of the better essays on intelligence failure during the run-up to the Iraq war; he notes, among other things, that the tendency of experts to avoid treating what they think they see as a scientific hypothesis—which would entail making predictions—was an especially ingrained failure. No amount of shuffling or re-jiggering will fix this.)



At any rate, these are badly disjointed thoughts, sorry, but I promised to say a bit about one of the most glaring and overlooked types of intelligence failure throughout U.S. history: namely, our poor ability to predict how other countries—or other people, period—will react to our actions abroad. Robert Jervis has written a few papers on this, but to put it another way, U.S. policymakers rarely seem to be able to figure out how other countries see the world, a blind spot which, during the Cold War, was more serious than the various mistaken analyses about missile gaps or mineshaft gaps or the like. The problem is that this sort of "empathy" is very difficult to improve—trying to figure out the near-infinite set of calculations and beliefs other actors might have will always be close to impossible.



A few examples from history: In the 1950s, the U.S. failed to understand that Stalin would invade Korea—at the time, his strategy was remarkably opaque. More recently, in 1994, the Cedras junta in Haiti for some reason didn't take the Clinton administration's warnings to abdicate seriously until an invasion force was actually in the air. (Did they not think Clinton meant it? Why?) Ditto a few years later, when the Clinton administration couldn't understand why Milosevic wouldn't back down from Kosovo in the face of NATO threats. Nor did the Bush administration make any apparent attempt to understand why, in 2002, Saddam might have been acting the way he did—for instance, keeping the status of his WMDs ambiguous to fool Iran. But so long as the U.S. has a poor handle on the beliefs and calculations of other world leaders, especially its adversaries, coercive diplomacy will tend to fail. (And we haven't even touched on terrorist groups....)

October 27, 2005

Holy Nanotech Batman

Check out the list of potential new nanobiotechnology weapons under development by the U.S. military. Let's see, we've got: ultralight body armor; "artificial muscles" built of nanomaterial; nanotech sensors capable of detecting individual molecules; camouflage suits that automatically heal a wounded soldier. It's every adolescent's comic-book fantasy! Now all we need to do is start a war or two to test this stuff out…



No, really, it's pretty stunning to see how much research money is poured into weapons. Over a third of NIAID's basic research is now in biodefense, and that's a growing share of a shrinking research budget. "Biodefense" is much like missile defense, only infinitely more lunatic, and in practice ends up creating ever more deadly biological weapons—necessary to test out the defenses, see—potentially kicking off a bioweapons arms race. Meanwhile, there's no money left for flu vaccines, or much of anything else. We are insane. Human beings are insane. But at least we'll have cool armor.

Assume the Worst

In the New Republic today, Clay Risen has a smart analysis of the infamous Wal-Mart memo that surfaced yesterday—which described ways the company could reduce its health care costs while appearing to care more about its workers:

The memo is the result of a study carried out in coordination with McKinsey, the elite consulting firm--and it shows in its fantastic grasp of [Wal-Mart's] numbers and abysmal conception of the workers who make them possible. One proposal would replace the current 401(k) program, into which the company puts a fixed percentage of the employee's wage, with a matching program, in which the company's contribution is equal to the employee's (this on top of the proposed cut in company contributions, from 4 percent to 3 percent). From a cost-savings point of view, this is a brutally efficient strategy--after all, the average Wal-Mart employee makes $17,500 a year. How many are going to set aside 3 percent of that for retirement? What's amazing, though, is that the memo's author, Susan Chambers, seems to believe that employees would actually like this reduction in benefits, because, for those who can somehow afford to take full advantage, it "would help Associates better prepare for retirement."



Then there is the proposal to shift all employees into health-savings plans, replacing traditional insurance with tax-free bank accounts in which both employees and the company set aside money; they then use that money to pay for doctor visits, prescriptions, and so on. Again, from a coldly rational point of view, this makes certain sense: The more financial responsibility employees bear in their health-care costs, the less they are likely to spend. The problem is that, again, poorly paid employees are unlikely to make the sort of contributions necessary to cover expenses. Moreover, it's much easier for the company to quietly adjust its own contributions to employee health downward, a fact sneakily acknowledged by the memo (though instead of proposing a check it merely recommends more p.r.: "Wal-Mart will have to be sophisticated and forceful in communicating this change").

That's the crux of it: Wal-Mart will use some nifty gimmicks to slash its workers' health and retirement benefits and then just pretend that this counts as an improvement. Ultimately, of course, this won't work. Wal-Mart's critics have bullshit detectors like few other groups of people on the planet, and always, always, always assume the worst about the store. The company will never appease its "well-funded and well-organized" attackers until it actually starts offering substantial benefits for workers. Although, do note, Wal-Mart executives are probably paranoid that the critics want to destroy the company altogether, rather than merely improve the lives of its workers, so maybe Wal-Mart thinks that there are no steps ever worth taking—because its enemies will never be appeased. Surely it doesn't help when lunatic lefties start writing posts like "Abolish the Corporation," either.



Alternatively, of course, Wal-Mart could solve its problems by lobbying for some sort of government-run health insurance, which would relieve the company of the burden of covering workers in the first place. It probably will end up doing this, although it won't lobby for single-payer, but rather for the GOP's plans for government-financed Health Savings Accounts, high-deductible insurance, and tax credits, along with a phase-out of the employer-health tax deduction; two steps that I think would be very bad for actual people, but on the other hand would let Wal-Mart and other big companies wash their hands of handling health insurance without having to pay taxes for some sort of single-payer system. We'll see.

Breakdown

No larger moral lurking here, but Dexter Filkins' New York Times Magazine story on Iraq from last weekend was a really good read. Since the topic of the hour seems to be whether the occupation was doomed from the start or could have succeeded with a more competent helmsman, these four paragraphs should do the trick:

The tough tactics employed by Sassaman's battalion had their effect. Attacks in the Sunni villages like Abu Hishma, wrapped in barbed wire, dropped sharply. And his men succeeded in retaking Samarra. Winning the long-term allegiance of the Iraqis in those areas was another matter, however. If many Iraqis in the Sunni Triangle were ever open to the American project - the Shiite cities like Balad excepted - very few of them are anymore. Majool Saadi Muhammad, 49, a tribal leader in Abu Hishma, said that he had harbored no strong feelings about the Americans when they arrived in April 2003 and was proud to have three sons serving in the new American-backed Iraqi Army. Then the raids began, and many of Abu Hishma's young men were led away in hoods and cuffs. In early 2004, he said, Sassaman led a raid on his house, kicking in the doors and leaving the place a shambles. "There is no explanation except to humiliate," Muhammad told me. "I really hate them."



In retrospect, it is not clear what strategy, if any, would have won over Sunni towns like Samarra and Abu Hishma. Crack down, and the Iraqis grew resentful; ease up, and the insurgents came on strong. As Sassaman pointed out, the Americans poured $7 million of reconstruction money into Samarra, and even today, the town is not completely under American control.



But there is another reason American commanders shy from using violence on civilians: the effects it has on their own men. Pittard, the American commander in Baquba, says that he was careful not to give his men too much leeway in using nonlethal force. It wasn't just that he regarded harsh tactics as self-defeating. He feared his men could get out of control. "We were not into reprisals," Pittard says. "It's a fine line. If you are not careful, your discipline will break down."



In most of the 20th century's guerrilla wars, the armies of the countries battling the insurgents have suffered serious breakdowns in discipline. This was true of the Americans in Vietnam, the French in Algeria and the Soviets in Afghanistan. Martin van Creveld, a historian at Hebrew University of Jerusalem, says that soldiers in the dominant army often became demoralized by the frustrations of trying to defeat guerrillas. Nearly every major counterinsurgency in the 20th century failed. "The soldiers fighting the insurgents became demoralized because they were the strong fighting the weak," van Creveld says. "Everything they did seemed to be wrong. If they let the weaker army kill them, they were idiots. If they attacked the smaller army, they were seen as killers. The effect, in nearly every case, is demoralization and breakdowns of discipline."


October 26, 2005

Abolish the Corporation! Er, Maybe.

Steven Greenhouse's New York Times article and Time's new story on pensions are both worth reading.



How did we get into this mess? I mean, the quick 'n' dirty answer is that in the postwar era, short-sighted union leaders bargained with employers for corporate benefits rather than stumping en masse for universal health care and super-Social Security. But why, in this day and age, are companies still forced to worry about health care and retirement funds? They shouldn't have to do it; the system only encourages Wal-Mart-esque behavior, and makes it hard for businesses to compete globally. Luckily, good liberals have an exit strategy: the government should handle health care and retirement, so that corporations can get back to what they do best, focusing on profit-making. GM would no longer have to operate as a "social insurance system that sells cars to finance itself," and the business of America could be business once again. Only the state can free the market from these heavy chains, say liberals. On most days, I'd agree with this. But is all this really the best way to go, or only yet another short-sighted solution to a longstanding problem? Let's digress for a bit.



There is, of course, no such thing as a truly "free" market, only types of markets designed by the state, and it's hard to figure out what the ideal design really is. The framers of the Constitution never predicted that the corporation as we know it would ever exist—at the time of the American Revolution, there were only franchises charted by legislatures for public purposes. The Jacksonians, aiming to curb corruption, later revised the corporate charter and opened it up to all comers, but never intended to exempt corporations from the common law or social responsibility. It wasn't until the late 1800s that New Jersey changed all that, rewriting its charter laws to allow corporations to do whatever they damn well pleased. Soon all the major corporations were flocking to New Jersey, and states were forced to compete with each other for lax charters (thanks especially to several Supreme Court decisions protecting charters and declaring corporations "persons" entitled to full constitutional protection—including out-of-state recognition). Toss in decades of lavish federal subsidies and voila, we've got corporate America. Hence the modern "free market" that conservatives have fought so hard to protect against "state intrusion."



True legal originalists would overturn Santa Clara County vs. Southern Pacific Railroad Company—which gave corporations 14th amendment protection—as an unconscionable act of judicial activism, though don't expect anything along these lines today (nor, necessarily, should there be). But that's just to say that ultimately nothing stops citizens from revising corporate charter law, if needed. Charters aren't sacred. That leaves the question of whether to do so. The situation we have today, in which firms like Wal-Mart and GM are obliged to cater to shareholders (in theory at least), but somehow got lumped with these other profit-draining social responsibilities, like paying for prescription drugs—is untenable. It's no surprise, then, that, as Time details, companies are raiding and shedding their pensions and getting the government to bail them out of their obligations to retirees whenever possible. It's what they're "supposed" to do.



Again, one response is to say, "Enough, enough" and just make the government assume primary responsibility for all these profit-draining obligations. Set up basic universal healthcare and mandatory savings accounts, and let corporations offer extra benefits only insofar as they need to compete for workers. Taxpayers will foot the rest. That way, unions can stop haggling with employers over premiums and deductibles and focus instead on wages and workplace conditions. It's a far more rational and stable system than what we have now, true. But why we think it will be any more sustainable, over the long haul, than the postwar bargain struck fifty years ago is a good question. Isn't it likely that, even if we had a single-payer health care system and "mandatory savings accounts", companies would still offer extra benefits to workers, only to blow the whole thing up down the road when they decide it's no longer profitable to support increasingly long-living retirees?



Alternatively—and you see this proposal at WTO protests or in Multinational Monitor from time to time—we could start rewriting corporate charters, drastically, and require companies to worry about this stuff. Always seems iffy, but you know. Perhaps in the end we could even hack away at a good deal of government regulation; there'd be no need if companies were beholden to civil authority, as was the case in the 19th century, and required to meet their social responsibilities in whatever manner they find most efficient. What would that mean for health care? I don't know. State-run health insurance might still be the best way to go. Fine. Perhaps charter revision would prove far more useful in other areas, like environmental conduct. (I'm not sold on the current "corporate responsibility" trend underway.) But corporations were originally designed by the people and for the people; why not have them act that way? "Ah," one will say, "but that sort of thing would never work in The Globalized Economy™; companies couldn't compete!" Or: "Fool! We tried this already; it was called 'Fascist Italy.'" Yeah, yeah. Still, the idea that companies should just do their thing while government can swoop in later and pick up the mess looks less and less appealing by the day.

October 25, 2005

Against Homework

Question of the day: Is homework even necessary? Ayelet Waldman demands answers:

I also learned from professor Cooper -- aka the homework guru -- that there is no correlation between how much homework young children do and how well they comprehend material or perform on tests. [n.b., see also this study.] Why? … Because their attention spans are just too short -- they can't tune out external stimuli to focus on material. Second, younger children cannot tell the difference between the hard stuff and the easy stuff. They'll spend 15 minutes beating their heads against a difficult problem, and leave themselves no time to copy their spelling words. Finally, young children do not know how to self-test. They haven't the faintest idea when they're making mistakes, so in the end they don't actually learn the correct answers. It isn't until middle school and high school that the relationship between homework and school achievement becomes apparent.



So why the hell do Zeke and I have to spend every afternoon gnashing our teeth… The reasons, Cooper says, extend beyond Zeke's achievement in this particular grade. Apparently, by slaving over homework with my son, I am expressing to him how important school is. … When younger kids are given homework, Cooper says, it can also help them understand that all environments are learning ones, not just the classroom. For example, by helping calculate the cost of items on a trip to the grocery store, they can learn about math. The problem is, none of my children's assignments have this real-world, enjoyable feel to them. My children have never been assigned Cooper's favorite reading task -- the back of the Rice Krispies box.



The final, and perhaps most important, reason to assign homework to young children, says Cooper, is to help them develop study habits and time management skills that they'll need to succeed later on in their academic careers. If you wait until middle school to teach them these skills, they'll be behind. I suppose this makes sense. Spending their afternoons slaving over trigonometry and physics will come as no surprise to my kids. By the time they're in seventh grade they won't even remember what it's like to spend an idle afternoon.

I guess that settles that: Everyone go out and play. Seriously. Also, let me call bullshit on Dr. Cooper and doubt very much that homework "help[s children] develop study habits and time management skills." Generalizing from a single experience here: when I was in elementary school, I remember very distinctly cutting corners on virtually all of my homework. Math problems would get scribbled frantically in pencil during homeroom. (In fact, what little creativity I have owes entirely to those ingenious, sweaty-fingered minutes spent trying to make it appear as if I had thought very hard about, say, problem #23(a) but just couldn't get the answer.) The spelling workbook, I quickly discovered, didn't need to be filled out at all—if you worried about grades you could always recoup your losses by getting the "bonus" spelling words on quizzes right. "Homework" always denoted something to do as little of as physically possible. Ever since, I've had terrible study skills, and while I blame my own laziness, all that useless homework gets part of the blame.



But let's do Waldman one better and say it flat out: homework is most likely evil. Yes, evil. Any educational system that relies on parents at home to help with the "learning process" will only end up perpetuating inequality, as long as some parents can help their kids and some cannot; as long as some parents can speak English and some cannot. And homework, for all its uselessness, is far more likely to put undue stress on family life than anything else. Of course, let's also be honest: the whole point of public school isn't to turn students into well-educated citizens but rather to produce good consumers and dutiful worker bees—people with short attention spans who follow authority, care deeply about status, and will attend with all due diligence to humiliatingly pointless tasks. Get used to working overtime, kid, you'll need it. In that regard, homework is indispensable.

Innuendo

In the midst of all that praise he was heaping on Ben Bernanke, I'm glad that Brad DeLong took the time to let us know what he really thinks ...

More Brains, Igor

The New York Times has a very good piece today on the "brain drain" phenomenon among developing countries, wherein the most talented and educated workers in the Third World emigrate to the United States or Europe or other wealthy countries, thus leaving their home countries with very little in the way of human capital, and no way to exit the vicious cycle that caused people to leave in the first place:

Most experts agree that the exodus of skilled workers from poor countries is a symptom of deep economic, social and political problems in their homelands and can prove particularly crippling in much needed professions in health care and education.



Jagdish Bhagwati, an economist at Columbia University who migrated from India in the late 1960's, said immigrants were often voting with their feet when they departed from countries that were badly run and economically dysfunctional. They get their government's attention by the act of leaving….



But some scholars are asking whether the brain drain may also fuel a vicious downward cycle of underdevelopment - and cost poor countries the feisty people with the spark and the ability to resist corruption and incompetent governance.

Remittances back home from expatriate workers make up some of the difference—and these payments are usually spent more effectively than foreign aid—but not enough. Interestingly, the "powerhouses" of the developing world—China, India, Indonesia, Brazil—don't suffer from brain drain, with less than 5 percent of their skilled citizens living in OECD countries.



Some suggest that OECD countries should restrict skilled immigration. One response would be that in some sense we already do; strict licensing requirements here in the United States already put up staggeringly high informal tariffs on the importation of doctors, lawyers, economists, and other professionals. Quick example: Several years ago the federal government paid New York hospitals $400 million to train fewer doctors out of concern for "oversupply"; blue-collar protectionists never had it so good. These barriers, by the way, dwarf our rather small tariffs on goods that "free traders" tend to worry so much about. But that's only part of it. On the other hand, the United States, Britain, Canada, and Australia really do actively seek out many other sorts of skilled workers from abroad, especially in more technical fields, and this seems to hurt developing countries the most.



So what to do? Only a handful of countries have been successful in luring their emigrés back home. Bhagwati has suggested that developing countries should tax their expatriates. Creating networks among entrepreneurs might offer one solution—I know of at least one example in Latin America where the government sets up links between researchers abroad and workers at home to share knowledge. Set up something like Craigslist for really smart expatriates. Ultimately, the best thing to do would be to figure out how to get the poorest countries in the world to start growing—just as China, India, Indonesia, and Brazil have done—but the first person who figures out a fail-proof way to do that will get a very nice prize indeed.

"It was just a day like any other day"

From the New York Times' obituary of Rosa Parks:

Over the years myth tended to obscure the truth about Mrs. Parks. One legend had it that she was a cleaning woman with bad feet who was too tired to drag herself to the rear of the bus. Another had it that she was a "plant" by the National Association for the Advancement of Colored People.



The truth, as she later explained, was that she was tired of being humiliated, of having to adapt to the byzantine rules, some codified as law and others passed on as tradition, that reinforced the position of blacks as something less than full human beings.



"She was fed up," said Elaine Steele, a longtime friend and executive director of the Rosa and Raymond Parks Institute for Self Development. "She was in her 40's. She was not a child. There comes a point where you say, 'No, I'm a full citizen, too. This is not the way I should be treated.' "

Right. Similar "caveats" (Parks was a NAACP plant!) seem to be wending their way through the internet, and I'm still not sure what the "point" of these myths is; in truth, they don't matter very much. Yes, Parks was handpicked—agreed to be handpicked—by civil rights leaders to become the poster child for the Montgomery bus boycotts. So what? That's always made her even more of a hero, I think; to have agreed to set her life aside and stand at the forefront of a movement. From the obit: "Her act of civil disobedience, what seems a simple gesture of defiance so many years later, was in fact a dangerous, even reckless move in 1950's Alabama. In refusing to move, she risked legal sanction, even harm." To put it lightly. No amount of mythmaking can denigrate that.



As many people know, nine months before Parks refused to move, a fifteen-year-old—fifteen!—named Claudette Colvin did much the same thing on a Montgomery bus; the case she ended up filing in court along with three other women, Browder v. Gayle, eventually became the one in which the Supreme Court struck down bus segregation. Initially, the NAACP wanted to organize a boycott around Colvin's case, but backed off because they didn't think she made for a suitable enough poster child—Colvin was allegedly several months pregnant, and "prone to outbursts." Or perhaps the timing just wasn't right—mass movements are always sensitive to timing. (Baton Rouge had staged the first bus boycotts two years earlier, but that had been forgotten.) Parks, as a member of the NAACP, was Colvin's mentor, sat in on the decision about whether to boycott after the younger girl was arrested, and was eventually inspired by her example to do the same nine months later. That this was how a movement sprouted—with two women inspired by each other—is no less sweeping a story than the traditional tale of one brave person sparking a wildfire.



Presumably those civil rights leaders were right that the nation needed to see Rosa Parks—"one of the finest citizens of Montgomery"—at the head of the boycotts rather than Colvin, who might have more easily been slimed by reactionaries who think a movement can be discredited by attacking the private lives of the people who lead it. Not much has changed in the last fifty years, in that regard. At any rate, none of this can minimize what Parks did; that wouldn't be possible.

October 24, 2005

Electing to Fight

John M. Owen IV has a Foreign Affairs review of Electing to Fight: Why Emerging Democracies Go to War, which argues exactly what the title suggests:

According to Mansfield and Snyder, in countries that have recently started to hold free elections but that lack the proper mechanisms for accountability (institutions such as an independent judiciary, civilian control of the military, and protections for opposition parties and the press), politicians have incentives to pursue policies that make it more likely that their countries will start wars. In such places, politicians know they can mobilize support by demanding territory or other spoils from foreign countries and by nurturing grievances against outsiders. As a result, they push for extraordinarily belligerent policies.



Even states that develop democratic institutions in the right order -- adopting the rule of law before holding elections -- are very aggressive in the early years of their transitions, although they are less so than the first group and more likely to eventually turn into full democracies.

The historical record bears this out, it seems. Owen wonders if, on this theory, "a democratic Iraq [will be] no less bellicose" than Saddam Hussein's regime, as various factions in the near future "compete for popularity by stirring up nationalism against one or more of Iraq's neighbors." This doesn't seem so implausible—I could see an Iraqi government with a large Sadrist presence getting all up in some neighbor's face; Jordan, perhaps—but it does sort of seem like the least of Iraq's concerns right now. On the other hand, a rapid push for democratization in the Middle East—if and when it ever comes—would make this sort of chaotic outcome all the more likely. But as Josh Marshall once suggested, perhaps this was the plan all along.

All Hail Our New Chairman?

This isn't really the place to come for Federal Reserve commentary, but maybe I can provide a few knee-jerk lefty complaints about the new Fed chief, Ben Bernanke. He's undoubtedly a smart guy, and all the center-left blogs like him, but this just looks like more of the same. He's a fan of "formal inflation targeting," eh? As best I can tell from his 1999 spat with James K. Galbraith, Bernanke doesn't take this to mean that the Fed should sacrifice everything else under the sun—including employment growth—at the altar of Always Low Prices, but Gerald Epstein argues here that that's what inflation targeting tends to mean in practice. That inflation-obsessed monetary theorists in the U.S. wrongly insisted that the rate of unemployment could never go below 6.5 percent during the 1980s, letting wages stagnate and poverty rise, makes Scooter Libby's high crimes and misdemeanors look rather flimsy in comparison.



Moreover, Epstein argues, moderate rates of inflation, up to about 20 percent, "have no predictable negative consequences on the real economy," so perhaps the Fed obsession is misguided after all. As far as I can tell, no one seems to know for sure whether or not inflation would hurt the poor, but that's probably not the question to ask; instead, let's debate: what sort of monetary policy would be better for the least well-off, and the rest of us? Or rather: Why not have the Fed stop fretting about inflation—within limits—and instead focus on promoting full employment, investment, and GDP growth? Good question. The answer is to follow the money:

One likely explanation is that a focus on fighting inflation and keeping it low and stable is in the interest of the rentier groups in these countries. Epstein and Power (2003) present new calculations of rentier incomes in the OECD countries supporting the view that in many countries, higher real interest rates and lower inflation increase the rentier shares of income.

Ah, rentiers. The argument against Epstein, I take it, is that theoretically a central banker just can't use inflation to boost employment, because people aren't dumb: they'll soon catch on to what the bank's doing and plan accordingly, nothing will change when inflation strikes, and soon we're on the path towards stagflation. Hence the virtues of a hawk like Greenspan—or Bernanke. In reply, the dying herd of old Keynesians might say eh, this isn't really a concern, since the real inflationary dangers come not from full employment, which is usually a good thing, but from stagnant growth, since during a slowdown monopolistic enterprises will start raising prices to recoup their fixed costs. Certainly Big Pharma and Big Insurance have been doing just that recently, so score one for the dying herd.



I'm not even fractionally smart enough to know who's right in all of this, so I'll just leave it at that and admit that my bias is towards Epstein. His suggestion for "real targeting" makes sense on the surface, although for the Fed to be truly democratic, the whole institution itself will probably have to be rejiggered so that ordinary citizens get actual input into central bank decision-making. That obviously won't happen in my lifetime, but surely the least we can do is be bitter about it, no? This isn't really the place to come for Federal Reserve commentary, but maybe Iprovide a few knee-jerk lefty complaints about the new Fed chief, Ben Bernanke. He's undoubtedly a smart guy, and all the center-left blogs like him , but this just looks like more of the same. He's a fan of "formal inflation targeting," eh? As best I can tell from his 1999 spat with James K. Galbraith, Bernanke doesn't take this to mean that the Fed should sacrifice everything else under the sun—including employment growth—at the altar of Always Low Prices, but Gerald Epstein argues here that that's what inflation targeting tends to mean in practice. That inflation-obsessed monetary theorists in the U.S. wrongly insisted that the rate of unemployment could never go below 6.5 percent during the 1980s, letting wages stagnate and poverty rise, makes Scooter Libby's high crimes and misdemeanors look rather flimsy in comparison.Moreover, Epstein argues, moderate rates of inflation, up to about 20 percent, "have no predictable negative consequences on the real economy," so perhaps the Fed obsession is misguided after all. As far as I can tell, no one seems to know for sure whether or not inflation would hurt the poor, but that's probably not to question to ask, instead let's debate: what sort of monetary policy would be better for the least well-off, and the rest of us? 
Or rather: Why not have the Fed stop fretting about inflation—within limits—and instead focus on promoting full employment, investment, and GDP growth? Good question. The answer is to follow the money:Ah, rentiers. The argumentEpstein, I take it, is that theoretically a central banker justuse inflation to boost employment because people aren't dumb, they'll soon catch on to what the bank's doing and plan accordingly, nothing will change when inflation strikes, and soon we're on the path towards stagflation. Hence the virtues of a hawk like Greenspan—or Bernanke. In reply, the dying herd of old Keynesians might say eh, this isn't really a concern, since the real inflationary dangers come not from full employment, which is usually a good thing, but from stagnant growth, since during a slowdown monopolistic enterprises will start raising prices to recoup their fixed costs. Certainly Big Pharma and Big Insurance have been doing just that recently, so score one for the dying herd.I'm not even fractionally smart enough to know who's right in all of this, so I'll just leave it at that and admit that my bias is towards Epstein. His suggestion for "real targeting" makes sense on the surface, although for the Fed to be truly democratic, the whole institution itself will probably have to be rejiggered so that ordinary citizens get actual input into central bank decision-making. That obviously won't happen in my lifetime, but surely the least we can do is be bitter about it, no?
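Epstein's "follow the money" point is easy to see in kitchen-table terms: a rentier's income depends on the real interest rate, so a regime of low inflation plus decent nominal rates fattens the rentier share, while surprise inflation eats it. A toy sketch, with numbers I made up purely for illustration:

```python
# Toy arithmetic (my own hypothetical numbers, not Epstein's): what a
# bondholder actually earns is the real return, given by the Fisher relation.

def real_return(nominal_rate, inflation):
    """Exact Fisher relation: real = (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal_rate) / (1 + inflation) - 1

low_inflation = real_return(0.05, 0.02)   # about +2.9% real: bondholders do fine
high_inflation = real_return(0.08, 0.10)  # about -1.8% real: bondholders lose

assert low_inflation > 0 > high_inflation
```

Nothing here proves Epstein right, of course; it just shows why bondholders, unlike wage-earners, have a direct stake in keeping inflation low and stable.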

October 23, 2005

Adventure!

I was flipping through a copy of National Geographic Adventure yesterday, and figured, hey, some of this stuff is worth linking to on the ol' blog. The cover feature's about a guy who backpacked along some or all (I forget) of the Great Wall of China. Some good factoids in there: The GWoC took 1800 years to build, you can't actually see it from space, and the reason that the Mongols of old would get so ornery and conquer stuff every now and again was because Mongolia is prey to the occasional freak super-blizzard, called a zud, that would wipe out all their livestock. You'd be ornery too.

The other good story was about how climate change has made it difficult for grizzly bears in ANWR to find food these days, so now they're out for blood... human blood! No, really, they never used to attack people, but now they do. Best part comes at the end when, shortly after being chased by a bear, a once-fuzzy-wuzzy environmentalist vows to go on a shooting spree the next time he sees one. Cool pictures, too.

Who's Doing the Damage?

Oh hilarious. Here's the much mocked Maggie Gallagher's take on the real problem with legalizing gay marriage:

If the principle behind SSM is institutionalized in law… then people like me who think marriage is the union of husband and wife importantly related to the idea that children need moms and dads will be treated in society and at law like bigots.

Awww… poor thing. Sign gay marriage into law and suddenly people might not be allowed to gay-bash on the radio anymore, for fear of sounding like bigots and having their broadcast licenses revoked (I really don't think she needs to worry here); and future schoolteachers will brainwash their students into thinking that the gay rights debates of yore pitted a few noble crusaders for equality against a wall of old-fashioned and mostly stodgy bigots. Liberal elites can be so cruel! Really, though, I wanted to comment on this, somewhat less-goofy, paragraph:

The most important fault line in the marriage debate is between a) people who think SSM will help a small number of gay couples and not affect anyone else and b) people like me who think this is going to change fundamentally the nature of marriage.

Is that really the fault line? Neither of these propositions is testable unless we just go for it and legalize gay marriage—or, alternatively, we could just look at Europe's experience and note that Option A looks like the likely result. One could also throw in a third option—that gay marriage will change marriage, yes, but for the better. I don't see why this argument's any less plausible than the other two. Insofar as legalizing gay marriage can send out a signal that being married is preferable to not being married, it could easily strengthen the institution, which, I take it, is Andrew Sullivan's argument. That's why you have more than a few feminists on the left opposed to the whole idea, seeing as how it would bolster what they see as a patriarchal and mostly oppressive institution. And they're probably right.



But let's also take Gallagher's fears seriously for a second. My guess is that keeping gay marriage illegal will do far more to erode marriage than anything else in the near future. Corporations and states, after all, are increasingly creating partner benefits for gay couples—it's hard to stop the states from doing this, and even harder to stop companies from doing it. (I guess you could try to pass an amendment, but that seems difficult.) And once there are benefit systems in place for gay couples, straight couples may as well sign on too, forgoing marriage. If companies increasingly extend healthcare and retirement benefits to "domestic partners," well, that's one less incentive for everyone else to get married, isn't it? I think Jonathan Rauch once warned that without gay marriage, "every unmarried gay couple"—especially those with kids—"will become a walking billboard for the joys of co-habitation." Not good for Gallagher. This seems like the greater "threat" to marriage, and unless we plan on banning all gay people everywhere from even looking at each other—and even in America this seems like a daunting task—allowing gay marriage is probably the best way to avert the inevitable "erosion" at work here.



We can add another loop too. Just as Gallagher seems to fear, young people increasingly do seem to see the backlash against gay rights as a form of bigotry. How much respect will those kids have for an institution they see as discriminatory? Not much, one would think. This should really be what gets Gallagher nervous. Granted, it's near-impossible to test any of these arguments—I guess we can see what happens in Massachusetts and, inevitably, California in the coming years—though my gut feeling is that it would be impossible for gay couples to screw up marriage any more than straight couples have already done.



(Granted, in real life I think it's right to allow gay marriage even if it does somehow affect straight couples—just like it was right to end racial discrimination among employers even if the net effect is to pull down white wages—but this seems to be one of those cases where doing what's right and doing what's beneficial for the majority are actually aligned.)

October 21, 2005

Balancing Act

This is over a year old, but Stephen Brooks and William Wohlforth of Dartmouth have written a very interesting (draft) paper asking whether other countries are engaging in "soft balancing" against the United States. Prior to the Iraq war, many liberal analysts worried that too much unilateralism from America would provoke other nations—especially Europe, China, and Russia—to start banding together and counterbalancing that loud, honking hegemon across the Atlantic. U.S. conservatives, meanwhile, viewed France and Germany's opposition to the war as stemming from a desire to constrain American power. On this view, what started as "soft" balancing—a bit of stubbornness at the Security Council—would soon lead to hard opposition. As Bill O'Reilly said on the Daily Show just a few nights ago, "France is the enemy!"



So is this true? Brooks and Wohlforth say probably not. It's hard to distinguish, granted, between explicit "balancing" and normal moves made by other countries, for reasons of their own, that just so happen to inconvenience or hurt the United States. But real "balancing" would mean that Europe and Russia and China were taking moves that are only coming about because the United States is the pre-eminent power in the world, and they fear that; moves they wouldn't pursue otherwise.



This probably isn't the case. Jacques Chirac and Gerhard Schroeder opposed the Iraq war partly because they genuinely thought it was a bad idea, quite rightly, and partly because opposition was popular domestically. Likewise, Russia's recent "strategic partnerships" with India and China may look menacing, but they aren't really intended to counter U.S. power in any meaningful way. (All three countries are pursuing economic modernization, and since that entails working with U.S.-controlled financial institutions, they still need to cozy up to the hegemon.) Meanwhile, Russia's recent arms sales to India and China, along with its support for Iran's nuclear program, mostly stem from its desperate need to slow the rapid decline of its defense sector, which is in a bad way. That's why Vladimir Putin can call nuclear proliferation the "main threat of the 21st century" and still fund the Bushehr reactor in Iran. He's sincere about the former, no doubt, but that reactor contract means 20,000 jobs at home.



The EU's proposal for defense cooperation, meanwhile, is meant to complement, not counter, American military power. Again, people like Chirac may say otherwise for public consumption at home, but in reality, the EU is actually weakening its ability to balance against the United States—by foregoing investment in advanced defense technology—in order to create a rapid reaction force that can help the U.S. by dealing with Balkans-style problems. Given that the U.S. and the EU are currently working together on Iran, it's obvious that their interests are mostly aligned.



In short, people like O'Reilly are wrong. No one's balancing against the U.S.; not yet. Though it still seems that the U.S. should avoid unilateralism when possible, because ill will makes cooperation on other issues difficult. Also, notice that France and Germany have a serious dilemma here. The more that they use the language of balancing—the more that they talk about "checking American power," even when they obviously intend to do no such thing—the more the U.S. will discount their specific objections to policies. Chirac and Schroeder may have had good reason to believe that Iraq was a flawed idea, but U.S. policymakers were inclined to dismiss their objections as knee-jerk anti-Americanism. That's bad. Likewise, if U.S. leaders believe that, say, France and Germany want to work through international institutions only in order to check American power, then the U.S. will be less likely to pursue multilateralism.

Preposterous Universe

One of these days, I'll actually be able to wrap my head around those extra dimensions in space that string theorists always talk about. One of these days.

The simplest way to hide extra dimensions from view is to imagine that they are "compactified"—curled up into a tiny ball (or other geometrical configuration) with an extent much smaller than what can be probed by current experimental apparatus. In the 1990s, however, a new possibility arose, as scientists came to appreciate the role of "branes" in higher-dimensional physics. A brane, generalizing the concept of a membrane, is simply an extended object: A string is a one-dimensional brane, a membrane is a two-dimensional brane, and so on, up to however many dimensions may exist. A remarkable feature of such objects is that particles may be confined to them, unable to escape into the surrounding space. We can therefore imagine that our visible world is a three-dimensional brane, embedded in a larger universe into which we simply can't reach.



Gravity, as the curvature of spacetime itself, is the one force that is hard to confine to a brane; the extra dimensions must therefore have some feature that prevents gravity from appearing higher-dimensional. (For example, in four spatial dimensions, the gravitational force would fall off as the distance cubed, rather than the distance squared.) One possibility, proposed by Nima Arkani-Hamed, Savas Dimopoulos and Georgi ("Gia") Dvali, is that the extra dimensions curl up into a ball that is small without being too small—perhaps as large as a millimeter across in each direction. Randall, in collaboration with Raman Sundrum, showed that an extra dimension could be infinitely big, if the higher-dimensional space was appropriately "warped" (hence the title of her book).

That's from a review of Lisa Randall's new book, Warped Passages.
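The parenthetical about force laws is just Gauss's law at work: the flux from a point source spreads over a sphere whose surface area grows as r^(d-1), so the force falls off as 1/r^(d-1) in d spatial dimensions. A quick sketch of the arithmetic (my own illustration, not from the review):

```python
# By Gauss's law, a point source's flux spreads over a (d-1)-sphere whose
# area grows as r^(d-1), so the force falls off as 1/r^(d-1) in d spatial
# dimensions.

def force_falloff(r, d):
    """Relative force at distance r in d spatial dimensions, with F(1) = 1."""
    return 1.0 / r ** (d - 1)

# In 3 spatial dimensions, doubling the distance quarters the force (inverse
# square); in 4, it drops by a factor of 8 (the "distance cubed" falloff),
# which is why large extra dimensions must somehow hide gravity's extra reach.
assert force_falloff(2, 3) == 0.25
assert force_falloff(2, 4) == 0.125
```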

Share Your Toys!

The other day, I wrote that it might be time to socialize drug research, or at least take a kangaroo hop in that direction. (Granted, the Pharma lobby would never let this happen, but let's keep things unrealistic for now.) In comments, serial catowner pointed out that the drug industry isn't in any way innovative these days, which I definitely agree with, and JimPortlandOR pointed to the Bayh-Dole Act of 1980 as a culprit mucking things up. Good point. So here's a suggestion.



To recap: The Bayh-Dole Act, in essence, transferred the patents for all pharmaceutical inventions made with the help of federal research grants to the universities and small businesses where they were made. No longer would taxpayers own the research that the government had paid for; it would be in private hands from now on. Many people credit this act with spawning the multi-billion dollar biotech industry, since in 1979, only 5 percent of government-held patents had ever been developed—because companies didn't want to risk commercializing them if they didn't own the patents. Bayh-Dole fixed that, in theory. But it also made academic institutions much more unwilling to share their research with other scientists, instead spending their time seeking out licenses with private business in order to earn millions. Out with the altar of Hermes, in with Mammon.



Whether or not Bayh-Dole was warranted at the time—if nothing else, it helped many research universities reap windfalls—it's certainly not having a positive effect on drug innovation today. Between 2000 and 2003, the average number of "new molecular entities," or genuinely new drugs (as opposed to "me-too" drugs) dropped to eight a year, and few of them were by the major corporations. More tellingly, drug companies have often made relatively little progress on any number of important diseases in the past few decades. There's virtually nothing out there to treat MS, or Parkinson's, or Alzheimer's. Diabetes treatments have stalled. Cancer medications aren't really going anywhere. Perhaps that's just because these things are intrinsically difficult. But one theory is that, because government-funded research institutions now worry more about cashing in on their inventions, they spend more time hoarding their research, groveling for contracts, and litigating over patents than they do collaborating fruitfully with other scientists. Plus, Bayh-Dole inflates the price of drugs—drugs researched with taxpayer money.



The thing is, as Clifton Leaf pointed out in a recent Forbes article, there's another "high-technology, university-incubated industry" that's doing perfectly fine without anything like Bayh-Dole: the computer industry. The IT industry still has patents, of course, but companies and research institutions are much more generous in licensing their technology, and inter-company sharing is much more widespread. In part because of all this sharing, computer prices keep going down, Moore's law is awesome, and innovation after innovation keeps cropping up. Meanwhile, entrepreneurs and researchers at universities don't need restrictive patent rules as incentive to innovate: Leaf points out that the "$50K Competition" at MIT, which offers a mere $50,000 in seed money for innovative business plans, "has showcased some notable winners—and losers" over the years, including Ask Jeeves and Akamai. Smart people will always find ways to bring good ideas to the market, and, the more widely ideas are shared, the more stuff they'll probably invent. No reason the pharmaceutical industry should be any different.



Now here's the kicker: technically the Bayh-Dole Act empowers federal agencies to ensure that new technologies—gene analysis, cell lines, research techniques—are being shared as widely as possible. But the NIH has never once used this power. As well, Bayh-Dole technically allows the government to use its taxpayer-funded research royalty-free, but it's never done that. One wonders: What the hell? Government reticence on both these measures essentially acts as a taxpayer subsidy to Pfizer and Bayer, and hinders innovation. I guess that's the point, but it sucks. I honestly don't know whether it's time to repeal Bayh-Dole altogether. As a "compromise," though, an amendment could be passed that forces the government to do those two things—and requires scientists to license their patents as widely as possible—at minimum.

October 20, 2005

Simplify, Simplify

The consensus seems to be that the Tax Reform Commission's proposals for, uh, tax reform won't actually go anywhere, and they're mostly just ideas for "discussion" rather than things the Bush administration will actually end up backing. (With sinking poll numbers and Karl Rove potentially out of commission, it's hard to see the president finding the gumption to cap the mortgage-interest deduction, ya know?) So let's "discuss." I realize no one on the House Ways & Means Committee plans on asking me, but here's one way to take a more progressive stab at tax simplification:

Find some way to slowly phase out the mortgage-interest deduction. Robert Shapiro has argued that it doesn't actually benefit home-buyers, since sellers just bid up the price of houses until they exactly offset the cost of the deduction, so in essence, it just acts as a taxpayer subsidy to the construction and real estate industries. Is that really worth it? Any phase-out would lower home values, though, so this step makes for thorny politics, but that's why you…



Simplify and expand the family tax credit. Rep. Rahm Emanuel has proposed a simplified, refundable tax credit available to all working taxpayers with children that would replace the EITC, Child Credit, Additional Child Credit, and Child and Dependent Care Credit—cutting away about 200 pages of the tax code. This would cost an extra $200 billion over ten years, which is a lot, but doable once we get to…



Earlier this year, David Cay Johnston reported on a paper by two tax experts noting that a number of investors overstate the price of stocks, businesses, and real estate, because they're allowed to report their capital gains and losses on the honor system, unlike wage-earners. Actual verification and enforcement of these reports could recoup at least $250 billion over the next decade.



It also seems like a good idea to consolidate, simplify, and expand, as Paul Weinstein, Jr., has suggested, both the various college subsidies into one single College Tax Credit, and the various tax savings vehicles—IRAs or 401(k)s—into a single and transferable universal pension account. Simplifies a lot, and good for all involved. I realize people like Paul Krugman have argued that college isn't for everyone, but might as well try to raise the numbers. Weinstein lists a bunch of corporate loopholes and tax deductions we could close to pay for these parts. Works for me.

That's not so hard. Those aren't earth-shaking steps, but they're all good, liberal things to do, and they do simplify the tax code quite a bit, especially for working families. I don't really see the point in repealing the alternative-minimum tax (AMT), which is there to ensure that the very wealthy won't exploit loopholes and dodge taxes; if the AMT is falling on too many middle-class families then just raise the threshold and reform, rather than eliminate, it. I also don't really know how one would simplify capital gains taxation, which is obviously at the heart of any reform, but I'm sure there are decent ways to go about it. Oh yeah, and most of the Bush tax cuts are going to have to be repealed (for a start) to avoid fiscal disaster in the long run, but that's another story...
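Shapiro's capitalization argument about the mortgage-interest deduction can be put in back-of-envelope terms: if every bidder gets the deduction, every bidder can afford to pay more, so roughly the present value of the tax saving ends up baked into the sale price. A toy sketch, with all numbers my own hypotheticals:

```python
# Back-of-envelope sketch of the capitalization claim (all inputs are
# hypothetical): the price premium is roughly the present value of the
# annual tax saving from deducting mortgage interest.

def deduction_premium(loan, rate, tax_rate, years, discount):
    """Rough PV of the interest deduction on an interest-only loan."""
    annual_saving = loan * rate * tax_rate
    return sum(annual_saving / (1 + discount) ** t for t in range(1, years + 1))

# $300k loan, 6% interest, 25% marginal tax rate, 30 years, 6% discount rate:
premium = deduction_premium(300_000, 0.06, 0.25, 30, 0.06)
# on these made-up numbers, roughly $60k of the sale price is capitalized subsidy
```

If prices really do adjust this way, the deduction mostly transfers money to sellers and builders rather than making houses cheaper for buyers, which is the nub of Shapiro's complaint.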



Is Liberal Interventionism Dead?

Sam Rosenfeld and Matt Yglesias have a new TAP article arguing that the war in Iraq—or at least the "liberal hawk" idea that Iraq could be made into a democracy at the barrel of a gun—was always doomed to fail, and it wasn't just because Bush utterly botched it. They say that even if the war had been sold and fought exactly as the liberal hawks wanted—as a way to turn Iraq into a liberal democracy—with a different, more competent administration, it still would have failed.



Well, agreed. The United States has never shown much interest in democracy-building, it's never been very good at it, and as I noted earlier in the week, the success of our nation-building adventures abroad has usually depended on internal factors in the occupied country, rather than the competence of our plans. That was as true of the American South in 1865 as it was of Kosovo in 1999. Still, I think the TAP piece sells the idea of liberal interventionism somewhat short:

Intervening requires us to take sides and to live with the empowerment of the side we took. Tensions between Kosovar and Serb, Muslim and Croat, Sunni and Shiite are not immutable hatreds, and it’s hardly the case that such conflicts can never be resolved. But they cannot be resolved by us. Outside parties can succeed in smoothing the path for agreement, halting an ongoing genocide, or preventing an imminent one by securing autonomy for a given area. But only the actual parties to a conflict can bring it to an end. No simple application of more outside force can make conflicting parties agree in any meaningful way or conjure up social forces of liberalism, compromise, and tolerance where they don’t exist or are too weak to prevail.

That's obviously true of the United States' military, which has classically been good primarily at smashing things, although our twenty-year-old soldiers have adapted to "mission creep" unbelievably well in Iraq. But Donald Rumsfeld wants to make the military even more focused on smashing things—as opposed to people like Thomas Barnett, who wants to see a more fully developed "SysAdmin" side—and regardless of what you want to call it, the



But the United Nations complicates the tale somewhat, since its peacekeeping forces have actually succeeded in reconciling a large number of post-conflict nations. The post-WWII UN operation in the Congo and the post-Cold War peacekeeping missions in Namibia, El Salvador, Mozambique, Eastern Slavonia, Sierra Leone, and East Timor should all count as successes—the UN disarmed the parties, demobilized militias, held relatively free and fair elections, and put the countries on a path towards sustained civil peace. So in one sense, outside forces can "make conflicting parties agree in [a] meaningful way," and if those UN missions didn't conjure up, as TAP puts it, "social forces of liberalism, compromise, and tolerance," they at least pointed the way down that path. Those countries, save for the Congo, are all peaceful democracies today. We know it can work because it's been done.



On the other hand, even the UN can't seem to stop a country on the brink of disintegration from doing so, but it's hard to tell how much of that failure has come from the sheer difficulty of the task and how much from poor implementation. The original UN peacekeeping mission in Somalia obviously flopped, but it was also severely undermanned. Same with the initial UN force in Bosnia. (Could a more robust operation—say, 20,000 more troops and American commanders—have averted many of the Balkan crises later in the 1990s? Who knows?) The UN actually enforced (rather than just "kept") the peace in Eastern Slavonia and East Timor, both successfully, when it had enough troops. So I don't think I'm quite as ready to say "it's impossible", although a good deal of modesty and skepticism is absolutely crucial here. I think the United States is inherently awful at nation-building right now, yes. But that says as much about the United States and its military as it does about the inherent impossibility of peacekeeping and nation-building, and it's worth, I think, trying to disentangle the two.
And sad to say, but the mere existence of a profit-seeking military-industrial complex made problems like the looting of the Iraqi treasury pretty much inevitable. There's no reason to think an invasion run by George Packer or Peter Beinart c