US Attorney General William Barr testifies before the Senate Judiciary Committee on "The Justice Department's Investigation of Russian Interference with the 2016 Presidential Election" on Capitol Hill in Washington, DC, on May 1, 2019. Mandel Ngan | AFP | Getty Images

The Justice Department is hosting a forum for academics, nonprofit leaders and industry advocates to discuss the future of a law that has shielded tech companies from legal liability for their users' posts since its enactment in 1996.

For critics of the tech industry, Section 230 of the Communications Decency Act has come to symbolize the exceptional treatment from the government that has fueled the growth of a small number of players. For tech companies, the law represents the internet's founding values of openness and free expression, while also allowing them to remove the most insidious speech without stumbling into a legal minefield.

Attorney General William Barr aligned himself with the skeptics, telling a gathering of the National Association of Attorneys General in December that the department was "studying Section 230 and its scope."

"Section 230 has been interpreted quite broadly by the courts," Barr said, according to a transcript of his remarks. "Today, many are concerned that Section 230 immunity has been extended far beyond what Congress originally intended. Ironically, Section 230 has enabled platforms to absolve themselves completely of responsibility for policing their platforms, while blocking or removing third-party speech — including political speech — selectively, and with impunity."

Here are the key things to know about this piece of legislation that's the subject of Wednesday's DOJ forum:

What is Section 230 and why was it enacted?

Section 230 was introduced by Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif., as a way of protecting tech companies from becoming legally liable for their users' content if they opted to moderate it.

The law followed a court ruling against the online platform Prodigy. An investment firm sued Prodigy after one of the platform's anonymous users accused it of fraud. Prodigy argued it wasn't responsible for its users' speech, but the court found that because the platform moderated some of its users' posts, it should be treated more like a publisher, which can be held legally liable for misleading or harmful content it publishes. The ruling galvanized Cox and Wyden to introduce what would become Section 230.

The law allows companies to engage in "good Samaritan" moderation of "objectionable" material without being treated like a publisher or speaker under the law. That's what allows platforms like Twitter, Facebook and Google's YouTube to take down terrorist content or harassing messages while still enjoying other legal protections.

It's also been essential for these companies to achieve massive scale. If they were liable for everything users posted, they'd either have to vet every piece of content before it went live, which would dramatically increase expenses and create delays, or give up all moderation, which would make for a worse user experience.

Why do some people want to change the law?

In recent years, Washington has begun to sour on the tech industry after a series of complaints about privacy and the growing power of a few key players. As politicians and the general public have awakened to the vast power of the large tech companies, they've begun to see Section 230 as a key contributor to that power.

Lawmakers on both sides of the aisle have publicly questioned the broad scope of Section 230. Once a way to protect upstart tech firms, the law now provides a legal shield to some of the most valuable companies in the world. Some fear tech companies lack the incentives to combat misinformation on their platforms as technology that makes it easier to fake video and voices becomes more advanced.

Some conservatives believe Section 230 has aided tech companies' ability to censor speech they don't agree with. There's little evidence mainstream tech firms systematically discriminate against certain ideologies, but they have at points removed politically charged posts, sometimes in error, only to apologize and reinstate them later.

Such claims of bias inspired Missouri Republican Sen. Josh Hawley's proposed revision to Section 230 that would tie the law's promise of immunity to a regular audit proving tech companies' algorithms and content-removal practices are "politically neutral."

What do the law's defenders say?

Tech companies have vigorously defended Section 230, testifying to Congress repeatedly about how it allows them to remove the most objectionable content from their platforms and protects start-ups from being sued out of existence.

Wyden still stands by Section 230, writing in a Washington Post op-ed Monday that efforts to repeal it would punish small start-ups rather than giants like Facebook and Google. Wyden said corporations lobbying for changes to Section 230 are doing so to find "an advantage against big tech companies."

"Whenever laws are passed to put the government in control of speech, the people who get hurt are the least powerful in society," Wyden wrote, referencing SESTA-FOSTA, a 2018 law that made an exception to Section 230 for platforms hosting sex work ads. The law was billed as a way to mitigate sex trafficking, but opponents, including many sex workers, say it made consensual sex work less safe since those engaging can no longer vet their clients in advance and from behind a screen.

How could the law change?