The words "hate speech" are being thrown around a lot in 2018. Thanks to the politically motivated de-platforming of Alex Jones, those words have become a label applied to anyone with a conservative or right-wing opinion on something.

The problem with the categorisation of hate speech is that nobody can seem to agree on what it is or what constitutes it.

To try to understand the meaning of the term, I did a tonne of reading and research, collecting together a wide variety of definitions, and still I am as confused as everyone else.

Let's start with one of America's closest allies, Australia. Surprisingly, Australia has some clear-cut laws around hate speech, primarily under its Racial Discrimination Act of 1975.

Section 18C of the RDA makes it unlawful for a person to do an act in public if it is reasonably likely to "offend, insult, humiliate or intimidate" a person of a certain race, colour or national or ethnic origin, and the act was done because of one or more of those characteristics. Exemptions are provided in section 18D, including acts relating to artistic works, genuine academic or scientific purposes, fair reporting, and fair comment on matters of public interest.

In the context of the RDA in Australia, it's common sense. If someone intends to hurt someone based on one or more of the aforementioned traits such as race, it's considered a violation.

However, take note in section 18D where it specifically says there are exemptions for artistic works, academic or scientific purposes, fair reporting and fair comment on matters of public interest.

The Wikipedia article on Hate speech laws in Australia specifically mentions that a part of the Criminal Code Act 1995 refers to the internet and hate speech.

Section 474.17 of the Criminal Code Act 1995 makes it an offence to use a carriage service such as the Internet in a manner which reasonable persons would regard as menacing, harassing or offensive. Federal criminal law, therefore, is available to address racial vilification where the element of threat or harassment is also present, although it does not apply to material that is merely offensive.

Take note of the last part of that section, "it does not apply to material that is merely offensive". This section is saying that if you're using the internet to threaten or harass someone based on their race or religion, that's a violation of the act. However, if you're merely stating an opinion some would find offensive, it is not.

In the more generic and broader Wikipedia article Hate speech, hate speech is defined as:

Hate speech is speech that attacks a person or group on the basis of attributes such as race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity.

It then goes on to say that hate speech laws differ country-by-country, as does what can be constituted as hate speech. Then there is this interesting tidbit:

In some countries, hate speech is not a legal term. And additionally in some countries, including the United States, hate speech is constitutionally protected.

As I went further and further down the rabbit hole, including the discovery of some European countries which have specific laws preventing even publicly saying something bad about a religion or race (even if it is only an opinion), I realised that the definition is muddied and differs quite a bit.

Now we get into the meat and potatoes of censorship and hate speech: Facebook.

What definition does Facebook see as fitting for hate speech? It all goes back to 2013, way before hate speech and fake news were hot-button issues, when a post was published titled Controversial, Harmful and Hateful Speech on Facebook.

We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).

This isn't necessarily related to hate speech, but it is ironic and comedic to see just how much Facebook has changed its attitude towards what it censors in just five years (amazing what political influence can do, huh?).

As we continue reading, we get to the definition Facebook used to define hate speech:

While there is no universally accepted definition of hate speech, as a platform we define the term to mean direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease. We work hard to remove hate speech quickly, however there are instances of offensive content, including distasteful humor, that are not hate speech according to our definition. In these cases, we work to apply fair, thoughtful, and scalable policies. This approach allows us to continue defending the principles of freedom of self-expression on which Facebook is founded.

Facebook's current community standards define hate speech a little differently to how it was defined back in 2013.

We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.

It's interesting to note that in 2013, Facebook had specific definitions of what it believed hate speech was. In 2018, the definition Facebook uses has become quite broad and non-specific, allowing it to remove anyone for simply using words like disgusting or ugly in relation to someone.

In a late 2017 blog post, Facebook goes into detail about how it polices, defines and enforces hate speech censorship on its platform. It's an interesting read for quite a few reasons, and there are a few takeaways I want to share.

Our current definition of hate speech is anything that directly attacks people based on what are known as their “protected characteristics” — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

Once again, Facebook specifically says hate speech is anything that "directly attacks people based on what are known as their 'protected characteristics'". I don't think anyone would disagree with Facebook's definition here; attacking someone with intent is hate speech.

Often a policy debate becomes a debate over hate speech, as two sides adopt inflammatory language. This is often the case with the immigration debate, whether it’s about the Rohingya in South East Asia, the refugee influx in Europe or immigration in the US. This presents a unique dilemma: on the one hand, we don’t want to stifle important policy conversations about how countries decide who can and can’t cross their borders. At the same time, we know that the discussion is often hurtful and insulting.

When Mark Zuckerberg was asked back in April how Facebook defines hate speech, he dodged the question. It's apparent that not even Facebook really knows what hate speech is and, as is evident from the numerous false takedowns, it's not as easy as left-wing liberals believe it is.