So Hertzberg introduced another bill this year, the first of its kind in the United States, that would compel automated social media accounts to identify themselves as bots; in other words, to disclose their nonpersonhood. Because bots are only effective if they seem convincingly human. Right?

Depending on how you define them, bots have been around since before most of us were using the internet. Their presence online was considered fairly benign, if it was considered at all, until 2016, when they were among the host of factors used to explain away the election of President Donald Trump. Since then, bots have become, for many people, a digital boogeyman: a viral weapon that can be wielded to influence political opinions, fool advertisers, prank unknowing social media users and get bad hashtags to trend. (They are also the lifeblood of many of the users we call influencers.)

Last week, Twitter announced it would remove tens of millions of suspicious accounts to crack down on the bots that can be bought (through third parties) by users who want to inflate their follower counts. The company also said last month that it had been "locking" almost 10 million suspicious accounts per week and removing others for violating its anti-spam policies.

Still, bots are easy to make and widely employed, and social media companies are under no legal obligation to get rid of them. A law that discourages their use could help, but experts aren't sure how the one Hertzberg is trying to push through in California might work. For starters, would bots be forced to identify themselves in every Facebook post? In their Instagram bios? In their Twitter handles?

The measure, SB-1001, a version of which has already left the Senate floor and is working its way through the state's Assembly, also doesn't mandate that tech companies enforce the regulation. And it's unclear how a bill specific only to California would apply to a global internet.

Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, applauded the spirit of the law but was not as sold on its letter. "This is groundbreaking legislation," he said. "We are on a trajectory where reality, the very fabric of the information we see, can be altered in an unprecedented fashion. When that's done, as the law says, with an intent to mislead, that's a huge problem."

But "you don't want to measure twice, regulate once," he said. "You don't want to put the wrong laws on the books and have unintended consequences."

Jeremy Gillula, a technologist at the Electronic Frontier Foundation who has been critical of the bill since its inception, said the first version was "a little like trying to treat the flu using chemotherapy." "Not only will it not fix the thing you're trying to fix," he said earlier this month. "It'll cause a lot of collateral damage at the same time."

The bill was drafted by Common Sense Media, a nonprofit that provides consumer ratings about the age-appropriateness of movies and TV shows, in collaboration with the Center for Humane Technology, a group of former employees of big tech companies, including Google and Facebook, who have banded together to regulate their former employers.

Neither Hertzberg nor Jim Steyer, chief executive of Common Sense Media, was overly concerned with criticism when interviewed about the bill in June. The senator called skepticism "the drivel of people who want to stop progress." He said that the analysis he had seen had been influenced by lobbyists and was flat-out wrong.

But after they were interviewed and the bill moved through the Assembly's committees, the content of the proposed law changed substantially. The definition of a bot grew more precise (from "online account" to "automated online account on an online platform"), and language that recommended an online platform for reporting bots was scrapped. Furthermore, the bill now asks only bots that are hoping to sell consumers goods and services, or to influence votes in an election, to identify themselves as bots.

But even with the changes, the bill raises significant constitutional questions, said Ryan Calo, a co-director of the Tech Policy Lab at the University of Washington, and Madeline Lamo, a former fellow at the lab. Lamo said that language in the bill about bots "influencing a vote in an election" ran into a problem that has plagued campaign finance regulations and election-related speech laws: it can be difficult to distinguish speech about political issues from speech explicitly intended to influence voters.

Furthermore, she noted, the bill was simply not crafted to address the problem it had in mind. Insofar as bots have had sway over political views, they have acted at scale, with thousands of automated accounts working to spread a diverse array of messages. It's hard to imagine, she said, that requiring individual accounts to identify themselves in a single state would do much to sap the strength of bot armies.

All parties agree that the bill illustrates the difficulty that lawmakers have in crafting legislation that effectively addresses the problems constituents confront online. As the pace of technological development has raced ahead of government, the laws on the books — not to mention some lawmakers' understanding of technology — have remained comparatively stagnant. And, as Twitter's action last week demonstrates, technology companies have the power to change dynamics on their platforms directly and at the scale that those problems require. Turning a bill into a law can take a long time. And then the law runs the risk of being inexact.

New York Times