New laws criminalising internet companies that fail to quickly remove "abhorrent" videos and images passed Federal Parliament today, despite heavy criticism from the legal community, rights groups and the technology industry.

Key points:

The new law has passed the Senate and is expected in the Lower House on Thursday

Labor has concerns the bill will not achieve its intended purpose

The legitimate sharing of violent content could be stymied, such as in human rights abuse cases

The bill, aimed at preventing the "weaponising of social media platforms" by terrorists and criminals, is part of the political response to the Christchurch terror attack.

The massacre was livestreamed on Facebook for 17 minutes, and online platforms have struggled to remove it entirely.

"Mainstream media cannot live broadcast the horror of Christchurch or other violent crimes and neither should social media be able to do so," Minister for Communications Mitch Fifield said in a statement after the law passed.

However, the laws represented a "serious step" that required proper consultation, said Law Council president Arthur Moses SC in a statement.

"As we know, laws formulated as a knee-jerk reaction to a tragic event do not necessarily equate to good legislation and can have myriad unintended consequences," he said.

StartupAUS chief executive Alex McCauley echoed the sentiment in an email: "Between this and the encryption laws, we're starting to see a trend towards jumping into anti-tech legislation in a knee-jerk fashion."

The Sharing of Abhorrent Violent Material Bill threatens those that host and stream content with financial penalties and jail time for a range of offences.

Social media companies risk fines of up to 10 per cent of the platform's annual turnover if they fail to remove violent content "expeditiously".

The law defines "abhorrent violent material" as videos that show terrorist attacks, murders or rapes.

If companies become aware of such content on their platform, they must also inform the Australian Federal Police within a reasonable timeframe.

Scott Farquhar, co-founder of one of Australia's biggest technology companies, Atlassian, weighed in on Twitter, saying the Government was creating confusion and threatening jobs.

"Without any consultation, the government aims to rush through legislation aimed at 'abhorrent violent material'," he wrote.

"Let me be clear, no-one wants this material on the internet.

"But the legislation is flawed and will unnecessarily cost jobs and damage our tech industry."

Claims legislation could breach international law

The proposed laws moved swiftly from idea to reality.

The legislation was formally announced by Prime Minister Scott Morrison on Saturday, and the Digital Industry Group Inc (DIGI), which represents Facebook, Google and Twitter, received the draft legislation on Tuesday.

Senator Jordon Steele-John said the Greens had not been given a copy of the bill even by Wednesday morning.

"We have not seen it. The sector has not seen it. None of the experts in Australia ... have seen hide nor hair of this legislation," he said.

The technology sector was concerned the new laws could have far-reaching consequences.

In a letter to Government, shown to the ABC, DIGI director Sunita Bose expressed concern the bill could require US internet providers operating in Australia to breach American law, among other issues.

"There are laws in the United States, where all DIGI founding members are headquartered, that forbid companies from sharing certain types of information, specifically content data, with law enforcement agencies outside of the US," Ms Bose wrote.

Prime Minister Scott Morrison said the laws were about "keeping Australians safe by forcing social media companies to step up". (ABC News: Marco Catalano)

Despite Labor joining with the Government to pass the bill, Shadow Attorney-General Mark Dreyfus said in a statement that the proposed laws were flawed.

"Labor has serious concerns that this bill has been poorly drafted and will not achieve its intended purpose," he said.

He said that if Labor forms government after the upcoming election, the bill would be referred to the Parliamentary Joint Committee on Intelligence and Security for review.

Critics warn of unintended consequences

While social media companies have largely evaded major regulation in Australia, some legal experts fear the laws are too much of a "big stick".

Under the legislation, for example, the eSafety Commissioner can issue a notice to a platform, advising of the existence of abhorrent content.

While such notices are a good idea, Dr Andre Oboler, chief executive of the Online Hate Prevention Institute, said the proposed law as written presumes guilt on the part of social media companies.

If content can be accessed "at the time the notice was issued", then the organisation is considered to have recklessly allowed the content to be available, unless it can prove otherwise.

"You need to create a system of trust and cooperation between government and technology companies," Dr Oboler said.

There is also concern the bill could affect the legitimate sharing of violent content.

Senator Steele-John suggested it could curtail the sharing of content on social media that shows human rights abuses, for example.

"If we don't have public interest safeguards, there is a definite possibility it could be used to take down videos of refugees being mistreated on Manus Island," he said.

"Nobody is suggesting in the aftermath of Christchurch that there does not need to be action."

This risk was also raised by the Law Council's Arthur Moses, who said it could "silence and criminalise whistleblowers trying to bring attention to violent atrocities occurring overseas".

Facebook executives have been on a media offensive in the weeks since Christchurch, seeking to assure users and governments that the company is acting against white supremacist content.

In an open letter, Facebook Chief Operating Officer Sheryl Sandberg said the company was considering restricting who can use its livestream function, among other steps.