Twitter claims it is banning most terrorist-related accounts before they even tweet, as the prime minister, Scott Morrison, pushes tech companies to be more transparent about what they are doing to fight terrorism online.

At the G7 on Monday, Morrison announced an undisclosed amount of funding to the OECD to develop voluntary transparency reporting protocols for social media companies to prevent, detect and remove terrorist and violent extremist content.

“This work will establish standards and provide clarity about how online platforms are protecting their users, and help deliver commitments under the Christchurch Call to implement regular and transparent public reporting in a way that is measurable and supported by clear methodology,” Morrison said in a statement.

“Digital industry will benefit from establishing a global level playing field. The project will assist to reduce the risk of further unilateral action at national levels, avoid fragmentation of the regulatory landscape and reduce reporting burdens for online platforms.”

Facebook and Twitter already voluntarily publish biannual or quarterly transparency reports on the content and accounts they remove, covering terrorism, child exploitation material, copyright-infringing material and legal requests from governments.

It is unclear at this stage how Morrison’s proposal would differ from what the companies already do. Guardian Australia has sought comment from the prime minister’s office.

Facebook declined to comment on the prime minister’s proposal, but in a statement a spokesperson for Twitter pointed to the company’s efforts in its existing transparency report.

“During the last reporting period, a total of 166,513 accounts were suspended for violations related to promotion of terrorism, which is a reduction of 19% from the volume shared in the previous reporting period,” the spokesperson said.

“Of those suspensions, 91% consisted of accounts flagged by internal, purpose-built technological tools. The trend we are observing year-on-year is a steady decrease in terrorist organisations attempting to use our service.”

The spokesperson said Twitter was able to ban terrorist accounts by detecting behaviour patterns during registration, and that most were banned during the account setup stage, before they had a chance to tweet.

In its latest transparency report, Facebook said the amount of content it removed rose from 4.7m pieces in the final quarter of 2018 to 6.4m in the first quarter of 2019, because the site had stepped up its action on such material.

Morrison also used the G7 on the weekend to re-announce plans for internet service providers in Australia to block websites hosting terrorist or violent extremist content during events similar to the Christchurch terror attack.

Several ISPs blocked sites hosting the Christchurch livestream at the time of the massacre, but did so voluntarily without any legislative backing.

The government is now looking to codify this practice so that in future the eSafety commissioner can issue determinations to block sites hosting such material.

“This new protocol will better equip our agencies to rapidly detect and shut down the sharing of dangerous material online, even as a crisis may still be unfolding,” the minister for home affairs, Peter Dutton, said in a statement.

The eSafety commissioner currently has no oversight of whether sites blocked during the Christchurch attack are still being blocked. A spokesperson said it was a matter for the ISPs.

One of the sites that had hosted the material, 8chan, has gone dark since DDoS protection provider Cloudflare was pressured to drop the site. But experts have warned 8chan’s users have migrated to the dark web or encrypted messaging apps.

The eSafety commissioner, Julie Inman-Grant, and the Queensland police service’s Jonathan Rouse also called for companies such as Facebook and Google to ensure that messages sent through the end-to-end encrypted messaging apps they are developing can still be scanned for child abuse material.

“There is a deeply held sentiment among law enforcement agencies and governments around the globe that there needs to be some transparency and robust explanations about how they will mitigate risks to children and the fight against online child exploitation – so that they do not create a secondary Dark Web through their messaging services,” an eSafety commissioner spokesperson told Guardian Australia.