The changes will “take some time” to put in place, he added.

Daniel J. Reidenberg, the executive director of the suicide prevention group Save.org, said that he had helped advise Facebook on the decision over the past week or so and that he applauded the company for taking the problem seriously.

Mr. Reidenberg said that because the company was now making a nuanced distinction between graphic and nongraphic content, there would need to be plenty of moderation around what sort of image crosses the line. Because the topic is so sensitive, artificial intelligence probably will not suffice, Mr. Reidenberg said.

“You might have someone who has 150 scars that are healed up — it still gets to be pretty graphic,” he said in an interview. “This is all going to take humans.”

In Instagram’s statement, Mr. Mosseri said the site would continue to consult experts on other strategies for minimizing the potentially harmful effects of such content, including the use of a “sensitivity screen” that would blur nongraphic images related to self-harm.

He said Instagram was also exploring ways to direct users who are searching for and posting about self-harm to organizations that can provide help.

This is not the first time Facebook has had to grapple with how to handle threats of suicide on its site. In early 2017, several people live-streamed their suicides on Facebook, prompting the social network to ramp up its suicide prevention program. More recently, Facebook has used algorithms and user reports to flag possible suicide threats to local police agencies.

April C. Foreman, a psychologist and a member of the American Association of Suicidology’s board, said in an interview that there was not a large body of research indicating that barring graphic images of self-harm would be effective in alleviating suicide risk.