We’ve all done or said stupid things online at one point or another. But if you’re a Californian under 18, you’ll soon have the right to delete those stupid things. California has enacted the first state law requiring companies, websites, and app developers to give users under the age of 18 the option to delete a post.

The law (SB 568), which was signed by Governor Jerry Brown on Monday and takes effect on January 1, 2015, imposes a new requirement on Web companies and app makers. Those firms must:

Provide clear instructions to a minor who is a registered user of the operator’s Internet website, online service, online application, or mobile application on how the user may remove or, if the operator prefers, request and obtain the removal of content or information posted on the operator’s Internet Web site, online service, online application, or mobile application.

The law has received praise from its primary sponsor.

“This is a groundbreaking protection for our kids who often act impetuously with postings of ill-advised pictures or messages before they think through the consequences,” said State Senator Darrell Steinberg (D-Sacramento), in a statement. “They deserve the right to remove this material that could haunt them for years to come.”

Modest protection

However, some legal analysts have quibbles with exactly how the law’s language was drafted.

The “eraser” provision is part of a new, larger law—Privacy Rights for California Minors in the Digital World—that was added to California’s business code. The first section of this “Digital World” law focuses on not allowing sites and services that are “directed to minors” to compile or disclose information about a minor if that compilation or disclosure “is for the purpose of marketing or advertising products or services to that minor.”

But what exactly is a site “directed to minors”? And wouldn’t Facebook and Twitter, which both already allow all users (regardless of age) to delete posts and tweets, already be in compliance? Neither Sen. Steinberg’s office nor Twitter immediately responded to Ars’ request for comment on this point, while Facebook declined to comment.

“While the Center for Democracy and Technology understands that SB 568 was motivated by the best of intentions, we remain concerned that the bill's focus on sites ‘directed to minors’ under the age of 18 will leave operators of websites that are popular with older teens and young adults uncertain of their obligations under the law,” Emma Llansó, a staff attorney at the Center for Democracy and Technology, wrote to Ars in an e-mail.

“Our primary concern is that this legal uncertainty will discourage operators from developing content and services tailored to younger users," Llansó continued, "and will lead popular sites and services that may appeal to minors to prohibit users under 18 from using their services. (We've seen a similar reaction to the federal Children's Online Privacy Protection Act, which covers websites directed to children under the age of 13: even sites that are clearly general-audience sites, or intended for adults, will include prohibitions in their Terms of Service barring minors under 13 from using their service, out of an abundance of caution.)”

She also pointed out that the law as it’s currently written may require companies to collect even more data on teens—such as location and age—so that a company can determine whether it is, in fact, in compliance. Still, it’s likely that California, by virtue of being the most populous American state, will set a new national standard.

Still, Woodrow Hartzog, a professor and privacy law expert at the Cumberland School of Law at Samford University in Alabama, told Ars that while the law may not be perfect, it’s better than nothing.

“While it’s true that it would be exceptionally difficult if not impossible in many instances to remove every occurrence of problematic content from the Internet, I wouldn’t be so quick to say this limitation means that the law is doomed to failure,” he told Ars.

“In many cases, individuals don’t need to remove every single instance of a piece of content from the Internet in order to be protected. Perhaps they just need to remove the most prominent or popular occurrences. Sometimes adequate protection can come not just from being completely ‘erased,’ but also from having personal information that’s out there being so tucked away in unpopular or forgotten corners of the Internet that it’s unlikely to be found. It’s not clear how well this bill will be able to provide that kind of modest protection, but it’s a possibility that’s more promising, attainable, and palatable than being able to completely erase yourself from the Web.”