When The Times alerted YouTube that its system was circulating family videos to people seemingly motivated by sexual interest in children, the company removed several but left up many others, including some apparently uploaded by fake accounts.

The recommendation system itself also immediately changed, no longer linking some of the revealing videos together. YouTube said this was probably a result of routine tweaks to its algorithms, rather than a deliberate policy change.

Jennifer O’Connor, YouTube’s product director for trust and safety, said the company was committed to eradicating the exploitation of children on its platform and had worked nonstop since February on improving enforcement. “Protecting kids is at the top of our list,” she said.

But YouTube has not put in place the one change that researchers say would prevent this from happening again: turning off its recommendation system on videos of children, though the platform can identify such videos automatically. The company said that because recommendations are the biggest traffic driver, removing them would hurt “creators” who rely on those clicks. It did say it would limit recommendations on videos that it deems to put children at risk.

Down the Rabbit Hole

YouTube has described its recommendation system as artificial intelligence that is constantly learning which suggestions will keep users watching. These recommendations, it says, drive 70 percent of views, but the company does not reveal details of how the system makes its choices.

Some studies have found what researchers call a “rabbit hole effect”: The platform, they say, leads viewers to incrementally more extreme videos or topics, a progression thought to keep them hooked.

Watch a few videos about makeup, for example, and you might get a recommendation for a viral makeover video. Watch clips about bicycling, and YouTube might suggest videos of shocking bike race crashes.