In the late ’90s, early bloggers expected the level of discourse to be high. In fact, intelligent commenting was seen as a path to gaining respect in the blogging community. At the beginning of 1999, there were only about two dozen blogs (which were mainly lists of interesting Web sites), but as the number exploded, it became hard for bloggers to follow the fragmenting conversations. In 2000, the blog service Blogger introduced permalinks, which allowed each blog entry to have its own URL, and in 2002, Movable Type implemented the TrackBack, which automatically alerted an author that a permalink from his blog had been posted elsewhere. The TrackBack was meant, at least in part, to blur the lines between commenters and writers; the conversation surrounding one blog post no longer needed to be relegated to the comments section, but could be sprinkled across disparate blogs with the TrackBack as its link. That was great, in theory. But while conversations were the model for interactions, the technology couldn’t sustain what real conversations required.

What killed the promise of openness in early blog culture was the blizzard of link spam that hit in the mid-2000s. Planting millions of links to penis-enlargement ads in the comments of the most innocuous of blogs could trick Google’s bots into ranking certain penis-enlargement services higher when actual humans searched “penis enlargement” (which, apparently, they do). Cleaning spam out of comment sections became a bigger headache for bloggers than policing the trolls. A lot of them disabled comments for good.

The invasion of spam highlighted the problems with early blog moderation, which grew out of a culture shaped by attitudes and tools inherited from the free-speech cowboys of the B.B.S. era. It took site owners years to realize that they weren’t merely providing platforms and soapboxes but creating communities out of lines of text, which requires a more subtle approach. The Web forum MetaFilter, for instance, which is known for its positive commenting culture, depends on a 24/7 team of moderators. “People come to us all the time and say, ‘Here’s a problem with people behaving badly, we want a tech solution,’ ” says Paul Bausch, a MetaFilter developer. “We tell them that human problems require human judgment.”

Yet high-traffic sites continue to leave comments unmoderated or use imperfect automated moderation. Only a few seem to have tried user-moderation systems like the one developed by Slashdot’s creator, Rob Malda. Founded in 1997, Slashdot rapidly began to suffer from what Malda called “signal-to-noise-ratio problems” as tens of thousands of users showed up. Rather than embracing the chaos (a hallmark of Usenet, another digital communications channel) or locking things down with moderators (as e-mail lists did), Malda figured out a way for users to moderate one another. Moderation became like jury duty, something you were called to do.

In my view, the worst places to visit aren’t the comment jungles of 4chan or YouTube, but the overly manicured comment lawns of some newspapers. Papers have mistakenly treated comments as the digital equivalents of letters to the editor. “We’ve got a 160-year tradition of no comments on our stories in the newspaper, so it’s not surprising it took a little bit of time to get comfortable with that idea,” Adee of The Chicago Tribune says. (At The Times, select articles, including this one, are open for comments, which are moderated by humans.)

Talking to people at newspapers makes it seem as if the future of comments is all social log-ins and filtering algorithms. But these are really just tools for putting a lid on commenting culture’s excesses, not rethinking the relationship between creators and commenters in more fundamental ways.

A step in that direction is annotation, in which reactions, corrections and elaborations are placed directly on the text itself. This could, perhaps dangerously, put commenters on the same plane as writers and reporters, who spend days or weeks or months learning about a subject. One example is Medium, which allows readers to make notes at the paragraph level. (Unlike a Wikipedia entry, where users can edit the text, the article remains intact.) Writers might balk at this, but look at it this way: people are more likely to comment on what’s in the text, which may prompt them to actually read it before commenting.