Unless we take action, chatbots could seriously endanger our democracy, and not just when they go haywire.

The most obvious risk is that we are crowded out of our own deliberative processes by systems that are too fast and too ubiquitous for us to keep up with. Who would bother to join a debate where every contribution is ripped to shreds within seconds by a thousand digital adversaries?

A related risk is that wealthy people will be able to afford the best chatbots. Prosperous interest groups and corporations, whose views already enjoy a dominant place in public discourse, will inevitably be in the best position to capitalize on the rhetorical advantages afforded by these new technologies.

And in a world where, increasingly, the only feasible way of engaging in debate with chatbots is to deploy other chatbots with the same speed and facility, the worry is that in the long run we’ll be effectively excluded from our own party. To put it mildly, the wholesale automation of deliberation would be an unfortunate development in democratic history.

Recognizing the threat, some groups have begun to act. The Oxford Internet Institute’s Computational Propaganda Project provides reliable scholarly research on bot activity around the world. Innovators at Robhat Labs now offer applications to reveal who is human and who is not. And social media platforms themselves — Twitter and Facebook among them — have become more effective at detecting and neutralizing bots.

But more needs to be done.

A blunt approach — call it disqualification — would be an all-out prohibition of bots on forums where important political speech takes place, and punishment for the humans responsible. The Bot Disclosure and Accountability Bill introduced by Senator Dianne Feinstein, Democrat of California, proposes something similar. It would amend the Federal Election Campaign Act of 1971 to prohibit candidates and political parties from using any bots intended to impersonate or replicate human activity for public communication. It would also stop PACs, corporations and labor organizations from using bots to disseminate messages advocating candidates, which would be considered “electioneering communications.”

A subtler method would involve mandatory identification: requiring all chatbots to be publicly registered, to state at all times the fact that they are chatbots, and to disclose the identity of their human owners and controllers. Again, the Bot Disclosure and Accountability Bill would go some way to meeting this aim, requiring the Federal Trade Commission to force social media platforms to introduce policies requiring users to provide “clear and conspicuous notice” of bots “in plain and clear language,” and to police breaches of that rule. The main onus would be on platforms to root out transgressors.