Building a platform for support and inclusivity

By Francesco Fogu, Hitomi Hayashi-Branson, and Lauren Wong

People come to Instagram to connect, create, and share with others, and we want them to feel comfortable expressing themselves freely and authentically. As members of the Well-being team, our mission is to help Instagram stay a safe and supportive place for our community. And as content strategists, designers, and researchers focused on Well-being, our job is to understand both the good and bad experiences people may have, so we can amplify the good and support people through the bad. The problems we design for include mental health, where we address issues like self-harm and social comparison; hate speech; and bullying.

We’ve spent the better part of a year focusing on bullying because we know how deeply it can affect people’s self-esteem, well-being, and sense of safety on Instagram. It’s a tough problem to tackle because it comes in many forms: from posting embarrassing photos of someone without their permission, to insulting them or their family, to threatening them with physical harm. What’s even more challenging is that we often don’t have all the context, so we may assume people are bullies when they’re actually being bullied themselves. That’s why the solutions we build, and the words we use, need to show care for what people may be going through, without making them feel victimized.

As we create solutions for people, we hope to empower them to stop bullying when and where it happens — and to work towards preventing it from happening in the first place.

Preventing Bullying

Changing Behavior Through Education

One approach we’re taking is to prevent bullying by giving people the opportunity to play a role in keeping Instagram a safe place for themselves and others.

To equip people with the tools they need to do this, we developed a simple warning that nudges them during possible moments of conflict and asks them to reconsider posting comments that might be mean or offensive. It’s powered by machine learning, which detects comments similar to ones that have been reported in the past. The warning pairs simple, friendly language with visual cues to create an educational opportunity at a critical moment: right before sharing. By intervening there, it gives people who are on the verge of posting a hurtful comment time to reflect and reconsider.
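To make the mechanism concrete, here is a minimal sketch of how a pre-posting nudge like this could be wired together. Everything in it is an illustrative assumption rather than Instagram’s actual system: the real feature relies on a trained classifier, while this sketch substitutes a toy token-overlap similarity against a couple of hypothetical examples of reported comments, and the names (REPORTED_EXAMPLES, looks_hurtful, the threshold, the warning copy) are invented for illustration.

```python
# Minimal sketch, not Instagram's implementation. The real system uses a
# learned model; a toy token-overlap similarity against a few hypothetical
# "previously reported" comments stands in for it here.

# Hypothetical stand-ins for comments that were reported in the past.
REPORTED_EXAMPLES = [
    "you are so ugly and stupid",
    "nobody likes you, just leave",
]

SIMILARITY_THRESHOLD = 0.5  # assumed cutoff for showing the nudge


def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets: a crude proxy for 'similar to
    comments that have been reported in the past'."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta and tb else 0.0


def looks_hurtful(comment: str) -> bool:
    """True if the draft comment resembles previously reported ones."""
    return any(
        similarity(comment, example) >= SIMILARITY_THRESHOLD
        for example in REPORTED_EXAMPLES
    )


def submit_comment(comment: str, confirmed: bool = False) -> str:
    """Intercept right before sharing. The nudge is a prompt, not a
    block: if the person confirms, the comment is still posted."""
    if looks_hurtful(comment) and not confirmed:
        return "Are you sure you want to post this?"  # subtle, non-blocking nudge
    return "Posted."


if __name__ == "__main__":
    print(submit_comment("you are so ugly and stupid"))                  # triggers the nudge
    print(submit_comment("you are so ugly and stupid", confirmed=True))  # posts anyway
    print(submit_comment("love this photo!"))                            # posts normally
```

Note the design choice in submit_comment: the check runs at the last moment before sharing, and a confirmed resubmission goes through untouched, which matches the emphasis on education and keeping people in control rather than blocking them outright.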

But because we’re intervening at moments when people may be acting emotionally or reactively, we need to consider what they may be thinking or feeling right then: are they putting down a stranger, or standing up for their family? For this reason, the warning is designed to be subtle rather than disruptive. We also use non-judgmental language that is transparent about why the warning appeared, educates rather than accuses, and leaves people in control of what happens next.