Updated. Today was just another Saturday morning in blog land when Robert Scoble, the well-known tech startup enthusiast, went to post a comment on a Facebook post written by Carnegie Mellon student (and TechCrunch commenter extraordinaire) Max Woolf about the nature of today’s tech blogging scene. Scoble’s comment itself was pretty par for the course — generally agreeing with Woolf’s sentiments and adding in his own two cents.

But when Scoble went to click post, he received an odd error message:

“This comment seems irrelevant or inappropriate and can’t be posted. To avoid having comments blocked, please make sure they contribute to the post in a positive way.”

Now, Facebook makes no apologies for working to create a safe and clean environment on its corner of the web by shutting down abusive or harassing behavior, content such as pornography, or general spamming of the system. This particular method of policing “inappropriate” comments may be new, but it would fall within the same realm.

But even so, this instance seems to be a very strange enactment of any kind of Facebook policy. Scoble posted his original comment in its entirety on his Google+ page, and it’s clear that it contains no profanity or even any obvious argumentative language.

Of course, what makes a comment “positive” or “negative” is a very subjective thing. Since Facebook is a global site, and what is acceptable in one culture is offensive in another, the company generally relies on a combination of software algorithms and notifications from other users to identify inappropriate behavior. This seems to show a glitch in that system.

This could be similar to what happened to film critic Roger Ebert back in January 2011, when Facebook temporarily disabled Ebert’s blog because of an allegedly “abusive comment.” It turns out that Ebert’s blog never actually contained objectionable content — a number of Facebook users had flagged his page as “abusive” after he wrote a critical tweet about Ryan Dunn, an actor who died in a drunk driving accident. It could be that Robert Scoble has been similarly flagged by other Facebook users, for reasons justified or not.

Scoble’s a pretty popular guy on the web, so not surprisingly his Google+ post about the incident attracted more than 100 comments within the first hour after he posted it. Several other people there report having seen the same message in recent days, and one person named Steven Streight wrote that his Facebook commenting ability was recently “temporarily limited” because of comments he says were similarly benign, such as “I’m a married man.” TechCrunch commenters have weighed in on this post as well to recount similar experiences.

Not surprisingly, a number of people are seeing this as an example of censorship — a word that almost always has negative connotations in the tech world.

We’ve reached out to Facebook for more information on what this policy means, how it is powered, and what specific words or behaviors it is meant to filter. We’ll update this post if we hear anything back and as the situation develops.

Update: A Facebook policy spokesperson emailed the following explanation:

“To protect the millions of people who connect and share on Facebook every day, we have automated systems that work in the background to maintain a trusted environment and protect our users from bad actors who often use links to spread spam and malware. These systems are so effective that most people who use Facebook will never encounter spam. They’re not perfect, though, and in rare instances they make mistakes. This comment was mistakenly blocked as spammy, and we have already started to make adjustments to our classifier. We look forward to learning from rare cases such as these to make sure we don’t repeat the same mistake in the future. For more information about our spam prevention systems, please see this blog post: https://blog.facebook.com/blog.php?post=403200567130.”

Also, my colleague Josh Constine has written a detailed report on Facebook’s explanation on the situation, which can be found here.