Bots, automated scripts that run tasks throughout the internet, have altered our digital landscape.

While Congress has done essentially nothing, many states—including Maryland, New York, and Washington—are drafting regulations that would attempt to rein in bots. In California, two bills—AB 1950 and SB 1001—would force Silicon Valley giants to identify which accounts are not “natural” humans.

“Consumers of social media don't really know who's pushing information on them,” Assemblymember Marc Levine, the author of AB 1950, told me in a phone call. “This bill is not going to solve all of our problems, but it's going to strike at the heart of it, which is that we need reasonable regulations on how this technology is being used. We know that we cannot trust the technology companies to regulate themselves.”

The bill would require tech companies to brand bots with a disclaimer, linking them and all online advertising purchases to a “verified human,” according to a statement on Levine’s website. But some have accused the bill of being a “phony” solution. “[T]he problem isn’t one of self-regulation, but of rapid technological advancement,” Steven Greenhut, western region director of the free market think tank R Street Institute, wrote in the OC Register.

Levine said he “loves” the criticism, and invites others to point out how the bill can be strengthened, but insists that tech giants need to be reined in.

“If you took Silicon Valley’s word for it, they’re just kids in hoodies in garages making magic to make our lives easier. And we know that's not the case,” Levine said. “These are highly-educated individuals that are making more money than has ever been made since God created light. These are the most profitable corporations in world history. They are the biggest, most influential corporate special-interest that has ever existed.”

Facebook, Google, and Twitter did not respond to repeated requests for comment.

Sen. Bob Hertzberg, the author of SB 1001, feels similarly. His bill, introduced just six days after Levine’s, covers much of the same territory. It would make it illegal for a bot to communicate with someone with “the intention of misleading and without clearly and conspicuously disclosing that the bot is not a natural person.”

“This bill wouldn’t require the removal of all of these accounts, because it would just be impossible. Bots are proliferating at such an improbable speed that it is foolish to think we can curb their production,” Hertzberg said in an email. “Plus, not all automated accounts are bad—for example, the USGS automatically tweets the location and magnitude of earthquakes worldwide right when they occur.”

In fact, Hertzberg has his own Twitter bot: @Bot_Hertzberg. It automatically retweets the senator and posts when Senate floor sessions are about to start.
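The disclosure rule at the heart of SB 1001 is simple enough to picture in code. Below is a minimal, hypothetical Python sketch of how an automated account like @Bot_Hertzberg might label its posts to comply; the function name and the disclosure wording are illustrative assumptions, not anything the bill actually prescribes.

```python
# Hypothetical sketch: how a bot might satisfy an SB 1001-style rule
# by conspicuously disclosing that its posts are automated.
# The label text and function name are illustrative assumptions.

DISCLOSURE = "[Automated account: this post was not written by a natural person]"

def compose_bot_post(message: str, limit: int = 280) -> str:
    """Prepend a conspicuous bot disclosure, trimming the message to fit."""
    room = limit - len(DISCLOSURE) - 1  # one space between label and message
    if room <= 0:
        raise ValueError("disclosure label leaves no room for the message")
    trimmed = message if len(message) <= room else message[: room - 1] + "…"
    return f"{DISCLOSURE} {trimmed}"

if __name__ == "__main__":
    print(compose_bot_post("Senate floor session starts at 9 a.m."))
```

Actually posting the labeled text would then go through the platform’s own API; the sketch stops short of that to stay self-contained.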

The idea for bills like these has been bubbling for a while, according to Shum Preston, National Director of Advocacy and Communications at Common Sense Kids Action, one of the major sponsors of SB 1001. It began with a New York Times op-ed, “Please Prove You’re Not A Robot,” written last summer by Columbia Law School professor Tim Wu, which proposed so-called _Blade Runner_ laws that would make it illegal for a bot to pose as a human.

“I don't think anybody in this country believes that Google, Twitter, and Facebook right now are making good-faith efforts to crack down on bots,” Preston said. “The idea that a company could just put up fraudulent information or empower it…just shows that our laws haven't caught up in Washington, D.C. We don't expect much out of Washington in the near term, which is why California is turning into a very interesting place where a lot of the nuts and bolts of tech policy are being worked out.”

Levine is also behind AB 2182, which would create the California Data Protection Authority, protecting users by allowing them to erase their data when no longer using a service. It would also prohibit websites from “conducting potentially harmful experiments on nonconsenting users,” such as the infamous, secretive experiments Facebook executed on some users. With problems like the Cambridge Analytica catastrophe, such laws may become more necessary—but as others have said, it may already be too late.

Follow Troy Farah on Twitter.