SINGAPORE: Facebook will remove content that is “locally illegal” if it receives a court order, the social media giant said on Thursday (Mar 22). But it does not have a policy that requires all content posted to be “true, verified and accurate”, so as not to be placed in a position where it has to be the arbiter of the truth.



In a marathon three-hour exchange during the public hearings of the Select Committee on Deliberate Online Falsehoods, Facebook’s vice-president of public policy for Asia Pacific Simon Milner made that point, after being questioned repeatedly by Home Affairs and Law Minister K Shanmugam.



In the course of the exchange, Mr Milner also admitted that Facebook had made a “wrong call” and should have informed its users earlier about the breach of its policies involving political consultancy Cambridge Analytica.



When pressed by Mr Shanmugam on whether Facebook would act to remove falsehoods, Mr Milner said that Facebook does not have a policy that says everything posted has to be “true, verified and accurate”.



“We do not have a policy that says everything has to be true, and we do not put ourselves in a position of deciding what is true,” he said, in response to Mr Shanmugam’s questions.



Mr Milner later added that if something is shown to be false through a court process, and Facebook gets a court order, it will respect that order.



“If we get a court order telling us that something is locally illegal, we will take it down. That’s as far as I can go,” he said.



Mr Shanmugam pointed out that the courts can only act based on legislation.



“Do you not realise then that the natural and logical conclusion, based on your policy that you will not yourself take down falsehoods unless there is a court order, combined with the fact that court orders can only be given pursuant to legislation, means that if a state wants falsehoods to be taken down, that can only be done through legislation, vis-a-vis Facebook,” he asked.



In response, Mr Milner explained that defining what constitutes a deliberate online falsehood is “tremendously difficult”. But that does not mean that Facebook is not doing anything about it, he said.



“Just because we do not have a policy saying that everything on FB has to be true, so that we’re not put in a position of being the arbiter of the truth, we are nonetheless taking lots of steps to address this issue,” he said.



“We’re concerned about a rush to legislate, and legislation that is enacted in haste is often legislation that is regretted at length,” he added.



On this issue of being the “arbiter of the truth”, social media companies Twitter and Google, as well as industry association Asia Internet Coalition (AIC), took a similar position. Representatives from the three organisations were also making oral representations to the committee.



Google, in its written representation, stated that it is “not positioned to evaluate disputes related to facts or characterisations laid out in (the) news article”.

It also pointed out that for both Google Search and Google News, claims that a particular article’s content is inaccurate will “generally not result in its removal unless pursuant to a valid legal request”.



During the hearing, Social and Family Development Minister Desmond Lee asked Google’s representative, Irene Jay Liu, to clarify that it does not have a policy to ascertain truth, or a policy to remove content that is “clearly shown to be untrue”.

“For searches, we do not host the content, we just point to it,” responded Ms Liu, who is Google’s News Lab lead for Asia Pacific.

“Our mission is to point to high quality sources of information. If a particular news article - that in someone’s view - requires a correction, we wouldn’t be able to correct it, because it is another company’s content.”

When asked again by Mr Lee to clarify that Google is not in a position to discern if content is truthful or a deliberate online falsehood, and that Google would let a legal party “make that determination” for them, and comply with legal authority, Ms Liu responded: “We stand by our submission.”

DEBATE ON SELF-REGULATION, ADEQUACY OF LAWS

In their written submission, AIC and the tech firms had said a “stringent self-regulatory approach”, executed in coordination and cooperation with the authorities, would be more appropriate than legislation.



On that, Mr Lee asked if Twitter has a policy to discern what is true and what is a deliberate online falsehood. He also wanted to confirm that Twitter does not have a policy of taking down information that is known and proven to be false unless there is a legal requirement to do so.

In response, Twitter’s director of public policy and philanthropy for Asia Pacific Kathleen Mary Helen Reen referred to the roll-out of a "hateful conduct policy" across the platform in November 2016. Twitter subsequently updated the policy last December and created new rules related to violent extremist groups.

“We have detailed much more commitment to making sure that incitement, encouragement, harassment and the inference of that kind of racism is not allowed on our platform,” she said. “We have zero tolerance for it.”

Hence, in Singapore’s context, Ms Reen said that there are many instances when Twitter’s policies can be applied to circumstances involving deliberate online falsehoods.

As such, it sees the question of a court order as “a forced binary” that goes against the company’s commitment and constant updates to its policies.

When asked about Twitter’s responsiveness to pressing situations and how it measures up to its commitment, Ms Reen said the company is “congruent”.



Referring to Mr Lee’s examples, which included a fake Twitter account masquerading as the Tennessee Republican Party during the 2016 United States elections, Ms Reen said that the social media platform has since made more than 30 changes to its product and policies. It is also undertaking special measures in anticipation of the upcoming mid-term elections in the US.

“We could progressively measure and say that we not only are congruent, but that moment has passed in 2016.”

Mr Lee also had a question for AIC’s managing director Jeff Paine about the adequacy of Singapore’s legislative framework that can be used to counter deliberate online falsehoods.

In his written representation, Mr Paine wrote that the boundaries of free speech and expression in the Singapore context are already well-defined by existing Singapore laws, which are “widely thought to be comprehensive enough” to address the issue.

Mr Lee cited Singapore Management University’s law school dean Goh Yihan’s written submission about gaps in existing laws in Singapore.

To that, Mr Paine said that deliberate online falsehoods are “fairly new things” and legislation surrounding them should not be rushed. He added that many of the existing laws in Singapore are aligned with the community guidelines and policies of its members. “So we didn’t really see that as a major issue with respect to any kind of legislation on DOFs.”



When asked by Mr Shanmugam whether he accepts that his statement is inaccurate and that there are gaps that are not covered by current legislation, Mr Paine eventually said that “there could be gaps”.



FACEBOOK WILL CONSIDER BANNING FOREIGN CURRENCY PAYMENTS FOR POLITICAL ADVERTISEMENTS

In response to questions from Mr Shanmugam, Facebook’s Mr Milner said that the company does not currently have a policy on banning foreign currency payments for political advertisements. But he stressed that it will consider doing so.



The social media giant had admitted in September 2017, ahead of a hearing by a Senate Intelligence Committee investigating Russia’s interference in the election, that Russian-linked ad buyers had spent US$150,000 on thousands of US political ads during the 2016 presidential campaign.

Mr Milner explained that the reason Facebook does not have such a ban in place is due to the difficulty of defining what a political advertisement is.



“Most of the ads, in the case of the US, may not have been classified as political ads under the jurisdiction there because they did not endorse a candidate,” he said. “It’s not because we don’t want to try to address the issue, it’s just that it is actually not simple.”

But Mr Milner said that Facebook can “take actions to ensure that only people who can advertise that kind of content are based in the country concerned”.

STATEMENTS NEED TO BE TESTED AGAINST REALITY: SHANMUGAM

At the end of the hearing, Mr Shanmugam raised the example of a disturbing graphic cartoon posted on Twitter with the hashtag “#DeportallMuslims”. Twitter had said the cartoon was not in breach of its hateful conduct policy when asked about it during an inquiry into hate crimes by the UK’s Home Affairs Committee last year.

The graphic cartoon depicts a group of male, ethnic minority migrants tying up and abusing a half-naked white woman, while stabbing her baby to death.

“If this is not in breach of hateful conduct policy, I find it difficult to understand what else can be,” said Mr Shanmugam. “The various beautiful statements you made … have to be tested against the reality.”

He added that the cartoon is “way beyond” what Singapore will tolerate.

“For us as a multiracial society, putting up something like this about Muslims would be completely unacceptable as a matter of law. If the law does not cover this, then we will need to have law that covers this.

“No amount of protestation that self-regulation will be enough is going to wash,” said Mr Shanmugam.

