A cursory Twitter search for any number of hate-filled phrases too vulgar to print here reveals a deep well of nastiness and ugly racism. In the United States, users are protected under the First Amendment. But what happens when a user's hate speech violates the law in other countries?

A French judge ruled Thursday that Twitter would have to reveal the details of users who post racist or offensive tweets. The social network has not decided if it will comply, and insists that it is only subject to laws in the United States, where it maintains offices and stores information.

It all started with a case brought by the Union of Jewish Students of France (UEJF), which claimed that pseudonymous users behind the hashtag #unbonjuif (#agoodjew) had violated French laws that prohibit racist and inflammatory speech. Twitter agreed to remove the offending tweets, which has long been its policy when laws in foreign countries are broken. On Thursday, however, the high court in Paris ordered Twitter to hand over the account information of offenders to authorities. Furthermore, the social network must also "roll out as part of its French platform" a new notice system that is "easily accessible and visible" for flagging questionable content. Failure to comply within two weeks will result in a €1,000 fine per day.

A spokesperson for Twitter said Thursday that the company is reviewing its legal options. "It is a big deal because it shows the conflict between laws in France and laws in the U.S., and how difficult it can be for companies doing business around the world," Françoise Gilbert, a French lawyer who represents Silicon Valley companies on both sides of the Atlantic, told The New York Times. On the plus side for Twitter, the company (unlike Facebook and Google) doesn't maintain offices in France and, according to the Times, "does not face the prosecution of its employees" there.

This isn't Twitter's first free-speech controversy. The company made headlines in October when it complied with the German government's request to block access to the account of a neo-Nazi group accused of anti-Semitism. Just as controversially, it blocked a Financial Times journalist who lashed out at NBC over its Olympic coverage last summer and posted the email address of one of the network's executives. In 2011, the company agreed to help British authorities unmask a California man who used an anonymous account to defame members of a British town council.

It's a disturbing trend for free-speech advocates, wrote Mathew Ingram at GigaOm late last year. "More than anything, these kind of cases reinforce how much influence private entities like Twitter and Google now have over what information we receive (or are able to distribute), and the responsibility that this power imposes on them."

But even when corporate Twitter hangs back, the Twitter community has its own methods of self-policing in the United States. In November, Jezebel controversially published a slideshow outing users who had tweeted the N-word to express their distaste for a newly re-elected President Obama. (The blog even alerted a few of the offenders' schools, leading to suspensions.) And other accounts, like @YesYoureRacist, are similarly dedicated to shaming users who post racist tweets.

Whether through peer pressure or lawsuits, we're seeing a pushback against racist material on Twitter, particularly against users who hide behind anonymous account names. And some say that's a positive development. "The internet is real," wrote Matt Buchanan at Buzzfeed, in a post titled "Why social media shaming is okay."

When you say things on the Internet now, they carry real weight and meaning. That evolution is a good thing, mostly. But reality has a price, and it is consequence. If you didn't know that already, you should now. [Buzzfeed]