Of the 17,807 content restrictions made by Facebook globally between January and July 2019, the highest share, over 31 per cent, originated from Pakistan, according to the platform's latest transparency report released on Wednesday.

Facebook restricted 5,690 items within Pakistan during the first half of 2019, compared with 4,174 items in the second half of 2018.

During the reporting period, the volume of content restrictions based on local law decreased globally by 50pc from 35,972 to 17,807. Of the total volume, 58pc of restrictions originated from Pakistan and Mexico.


Facebook said it restricted access in Pakistan to items reported by the Pakistan Telecommunication Authority as allegedly violating local laws prohibiting blasphemy, anti-judiciary content, defamation, and condemnation of the country's independence.

“Upon a routine review of our actions, we determined that we restricted access to 17 items in error during this period, including 11 items that should have been deleted for violating the Community Standards and six items on which we should have taken no action. We have corrected these mistakes,” it said.

In January 2019, Facebook received a formal takedown request from the PTA, alleging that two Facebook posts constituted illegal obscenity under Section 37 of the Prevention of Electronic Crime Act (PECA). The posts linked to an article discussing wife swapping and swingers events.


The platform added that neither of the reported posts violated Facebook's Community Standards. However, pursuant to the PTA's request and following an assessment of local law, Facebook restricted access to the posts within Pakistan and notified the affected users.

According to the breakdown of the content restricted in Pakistan, Facebook suspended 5,376 posts, 128 pages and groups, six profiles and two comments.

On Instagram, the platform restricted a total of 178 items: 171 posts and seven accounts. This is a massive jump from the same period last year, when Facebook restricted only nine items on Instagram.

The government's requests to Facebook also spiked in the period under review, reaching the highest level ever: the authorities sent 1,849 data requests seeking the data of 2,594 users/accounts. Of the total requests, 1,674 were accompanied by legal process.

Requests received from governments are accompanied by legal processes, like a search warrant. Facebook discloses account records solely in accordance with its terms of service and applicable law.

In emergencies, law enforcement may submit requests without legal process. Based on the circumstances, Facebook may voluntarily disclose information to law enforcement where it has a good-faith reason to believe the matter involves an imminent risk of serious physical injury or death. Pakistan sent 175 emergency disclosure requests in the reporting period.


The platform complied with 51 per cent of the government's requests.

The platform also accepts government requests to preserve account information pending receipt of formal legal process.

“When we receive a preservation request, we will preserve a temporary snapshot of the relevant account information but will not disclose any of the preserved records unless and until we receive formal and valid legal process,” it explained.

During January-July 2019, the Pakistan government sent the platform 400 preservation requests, specifying 483 users/accounts.

Content restricted by Facebook for Pakistan over the years.

According to Shmyla Khan of the Digital Rights Foundation (DRF), local laws are repeatedly used to silence dissent and criticism of state institutions, and Facebook’s reproduction of the same criteria results in worryingly high censorship. "While Facebook is not bound to follow local laws in Pakistan, we have seen that compliance has steadily risen over the years. This is particularly worrying when it comes to content restrictions based on the stated criteria: 'prohibiting blasphemy, anti-judiciary content, defamation, and condemnation of the country's independence'," she told Dawn.

Global trends

In the first half of 2019, government requests for user data increased by 16pc from 110,634 to 128,617. Of the total volume, the US continued to submit the largest number of requests, followed by India, the UK, Germany and France.

During the reporting period, Facebook identified 67 disruptions of services in 15 countries, compared to 53 disruptions in nine countries in the second half of 2018. India topped the list of countries with the most disruptions in the world with 40 incidents of suspension of internet services.

Facebook also took down 3,234,393 pieces of content based on 568,836 copyright reports, 255,222 pieces of content based on 96,501 trademark reports and 821,727 pieces of content based on 101,582 counterfeit reports.

3.2bn fake accounts removed

According to its latest content moderation report released on Wednesday, Facebook said it had removed 3.2 billion fake accounts between April and September this year, along with millions of posts depicting child abuse and suicide.

That is more than double the number of fake accounts taken down during the same period last year, when 1.55 billion accounts were removed, according to the report.


The world's biggest social network also disclosed for the first time how many posts it removed from popular photo-sharing app Instagram, which disinformation researchers have identified as a growing area of concern over fake news.

It removed more than 11.6 million pieces of content depicting child nudity and sexual exploitation of children on Facebook and 754,000 pieces on Instagram during the third quarter.

Facebook also added data on actions it took around content involving self-harm for the first time in the report. It said it had removed about 2.5 million posts in the third quarter that depicted or encouraged suicide or self-injury.