I’ve never liked the term “real life” to signify “not online”. In an era when social media is central to much human interaction, and a phone can bring the whole world into our palms, the idea that what happens on the internet is somehow “less real” feels like it’s missing the point.

This is not least the case for victims of online abuse; sites like Facebook and Twitter have become breeding grounds for very real hate. New research by the disability charity Leonard Cheshire, released today, shows online disability hate crime has soared in the last year, with recorded incidents up by almost a third. It’s important to be careful with such stats: while an increase in reports could indicate a rise in incidents, it might also represent proactive police forces and more survivors willing to speak out. At the same time, the charity stresses such a rise is likely the “tip of the iceberg” as it is a notoriously underreported area; for instance, traditional reporting methods, such as the phone, may not be accessible for some disabled people.


Listen to what some of the victims told the charity about their experiences and you get an insight into what online abuse means. One young woman with extensive facial scarring spoke of being repeatedly mocked in public, with children on local school buses banging on the windows to get her attention as she went by. Some passersby took her photograph and posted it on social media; others then posted hateful comments about her and tagged her to ensure she saw the abuse.

This is “real life”, in which strangers abuse you in the “outside world” and enable others to add to the hate on their phones. It is no less serious, of course, if the abuse “only” takes place online.

What is being done about this? The government has announced a plan to make Britain “the safest place in the world to go online”, suggesting social media platforms should be held responsible not only for removing illegal content but also for removing so-called “legal but harmful” content, including cyberbullying.

‘Katie Price went in front of MPs last year in a bid to make online abuse a specific criminal offence.’ Photograph: PA

It’s an important distinction. The line between illegal and simply unpleasant is often presented as a murky one when it comes to social media, with much abuse levelled at victims not even crossing sites’ own rules, let alone getting attention from police. As Twitter’s head of public policy and government, Nick Pickles, has said: “One of the things we struggle with a lot is that it is possible to be offensive without breaking our rules.”

“Offensive” makes it sound as if the problem is with an oversensitive victim, rather than with targeted harassment. In fact, the issue is not that online abuse is typically subjective, but that the law – and social media guidelines – appear completely unequipped to deal with it. When Katie Price went in front of MPs last year in a bid to make online abuse a specific criminal offence – describing how her son Harvey, who has multiple disabilities, has been a repeated victim of “horrific” trolling – politicians admitted the law on online abuse is “not fit for purpose” to protect disabled people.

That Twitter didn’t immediately ban the user who was recently reported for what amounted to a rape threat against the Labour MP Jess Phillips shows that this is a problem that crosses over to a host of other marginalised groups. But you don’t need to buy into the patronising myth that disabled people are inherently fragile to acknowledge that certain groups, such as those with learning disabilities, are particularly vulnerable to abuse. This only increases when you consider that disabled people often rely on social media, whether to feel less isolated or to find a much-needed community. Log on to see abusive messages on Facebook and it can feel like a bully breaking into your safe haven.


Facebook and Twitter both admitted to MPs last year that they could do more to protect disabled people from online abuse, and pledged to look into measures such as making the reporting functions easier to access. Better regulation and swifter action from social media companies are vital. But just as importantly, we surely have to tackle the attitudes that lead to the abuse in the first place. This includes the deeply embedded prejudice against disabled people that still sees us dehumanised and ridiculed.

That Facebook recently came under fire for an employee saying “some people find it disturbing to see pictures of disabled people” as a reason for blocking a disability campaigner’s page is a sign that those making the rules are as open to bigotry as their users. (Facebook later apologised.)

Human beings abusing one another is hardly a new phenomenon, but we are living in an era in which toxic rhetoric, thanks to social media, proliferates more quickly than ever. Whether it’s abuse against disabled people or another target, the need to tackle online hate is all too real.

• Frances Ryan is a Guardian columnist