TikTok users under the age of 16 will soon no longer be able to send or receive direct messages through the hugely popular video-sharing app.

From 30 April, new online safety measures introduced by the Chinese-owned app will stop children from using the Direct Messaging feature to contact other users.

TikTok’s head of safety Cormac Keenan explained that the ban is aimed at “going one step further” than its existing restrictions, which already prevent users from receiving unsolicited messages from people who are not friends with them on the app.

“As part of our commitment to improve safety on TikTok, we are introducing new restrictions on who can use our Direct Messaging feature,” he said.

“Direct Messaging is an amazing tool that enables people to make new friends and connections no matter where they are in the world. But despite its potential for good, we understand the potential for misuse.”

Andy Burrows, head of online child safety at the NSPCC, praised the firm’s “proactive” step and called on other social media firms to do the same.

“This is a bold move by TikTok as we know that groomers use direct messaging to cast the net widely and contact large numbers of children,” he said.

“Offenders are taking advantage of the current climate to target children spending more time online, but this shows proactive steps can be taken to make sites safer and frustrate groomers from being able to exploit unsafe design choices.”

Since launching in 2016, TikTok has been downloaded more than 1.5 billion times, according to figures from app analytics firm Sensor Tower.

Last year, it was one of the most-downloaded apps in the world, proving particularly popular with younger demographics.

Its massive popularity has brought with it increased scrutiny on how it is protecting young people from privacy breaches and online abuse.

In February, TikTok announced a variety of safety measures that allow parents to control what their children see on the platform.

The family safety mode links children’s accounts to their parents’, meaning controls on screen time and the type of content that appears on their feed can be enforced.