Both Facebook and Google have previously denied responsibility for the content published on their sites, invoking the Communications Decency Act in the US to overcome lawsuits accusing them of enabling terrorism and spreading extremist views. But calls for big tech to be regulated have grown in recent years following a spate of controversial incidents, the most recent of which was the live-streaming of the mass shooting in New Zealand on Facebook.

Google, meanwhile, has been called out for the spread of conspiracy theories on YouTube, and Twitter has long grappled with toxic abuse on its site. Executives from all three companies have also appeared before Congress in relation to Russian activity on their respective platforms during the 2016 US election.

The new measures form part of the "Online Harms White Paper," a joint proposal from the UK's Department for Digital, Culture, Media and Sport (DCMS) and the Home Office, and have received the blessing of Prime Minister Theresa May.

"The internet can be brilliant at connecting people across the world - but for too long these companies have not done enough to protect users, especially children and young people, from harmful content," said May in a statement. "That is not good enough, and it is time to do things differently. We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe."

Earlier this year, the DCMS referred to Facebook's senior management as "digital gangsters" in its report on fake news online. It added that CEO Mark Zuckerberg had shown "wilful contempt" toward the UK parliament by twice failing to appear before the committee. Facebook-owned Instagram was also recently forced to blur self-harm images on its app in the UK following the suicide of British schoolgirl Molly Russell. Her parents said her death came as a result of viewing images of self-harm on Instagram and Pinterest.