Facebook recently released its Government Request Report for the second half of 2015 (July–December). Facebook started this exercise in 2013 as an effort to share information about the requests it receives from governments around the world. The four data points shared in the public domain in this report are:
- Requests for User Data
- User Accounts Referenced
- Requests where some data was produced (as a % of requests received)
- Content Restrictions (included since 2013 H2 report)
According to the 2015 H2 report, the number of government requests for user data rose by about 13%, from 41,214 to 46,710 requests globally. This rise can be attributed to the increased number of user data requests from the US, India, the UK, France and Germany; however, it is lower than the 15% rise recorded in 2015 H1.
The user data requests from the top ten countries account for close to 86% of all the user data requests that Facebook receives. The list is led by the US, with India second, followed by the UK and Germany. Myanmar comes in last on this list, with no requests made to Facebook for user data. The graph below gives the country-wise requests for user data in 2015 H2.
In a more India-specific context, we see that the actual number of user data requests (5,561) is at its highest since 2013. India also shows an increase of about 12% in the number of user accounts referenced, from 6,268 in 2015 H1 to 7,018 in 2015 H2 (these requests may cover only basic subscriber information, such as name and length of service, or IP address logs and actual account content, as the nature of the request warrants). Facebook’s Law Enforcement Guidelines are well defined, which is probably why Indian law enforcement agencies have not been able to get access to user data for almost half of the requests they have sent to Facebook. In contrast, US law enforcement agencies get access to user data in about 80% of requests.
What the Indian authorities lose in user data requests, they make up for in content restrictions. Simply put, these are requests to Facebook to remove content that violates “local” law. In India, Facebook restricted access to content reported primarily by law enforcement agencies and the Indian Computer Emergency Response Team (CERT-In) within the Ministry of Communications and Information Technology. Content was flagged by these agencies as anti-religious content and hate speech that could cause unrest and disharmony within India. In 2016, Facebook changed its policy following the Supreme Court judgement striking down Sec 66A of the Information Technology Act (2000). It will now “cease to act upon legal requests to remove access to content unless received by way of a binding court order and/or a notification by an authorized agency”. What this means is that while content can still be removed for being reported as “offensive”, Facebook will ask for a legal notice before removing content that is flagged as “illegal”, whether by law enforcement agencies or NGOs.
In 2015 H2, France led the world with a staggering 37,695 requests to remove content, up from 295 in 2015 H1. This is primarily because 32,100 of these requests related to the removal of a single image that violated French laws on human dignity, in the aftermath of the November 2015 terrorist attacks in Paris. India comes in second with 14,971 requests, a drop from the 15,155 requests in 2015 H1. Interestingly, if we factor out the repeated count of the French requests, India contributes 63% of content removal requests, a drop of 10% from 2015 H1. Turkey, Israel and Germany join France and India as the top five countries, which together account for close to 99% of all global requests for content removal.
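As a quick sanity check on the figures quoted above, the following minimal Python sketch recomputes two of them from the raw numbers in the report as cited in this article (no other data is assumed):

```python
# Sanity-check two figures from Facebook's 2015 H2 Government Request Report,
# using only the raw numbers quoted in the article.

h1_user_requests = 41_214  # global user data requests, 2015 H1
h2_user_requests = 46_710  # global user data requests, 2015 H2

growth_pct = (h2_user_requests - h1_user_requests) / h1_user_requests * 100
print(f"Global growth in user data requests: {growth_pct:.1f}%")  # ~13.3%

france_total = 37_695   # French content removal requests, 2015 H2
single_image = 32_100   # requests tied to one image after the Paris attacks
france_adjusted = france_total - single_image
print(f"French requests excluding the repeated image: {france_adjusted}")  # 5595
```

The adjusted French figure (5,595) is well below India's 14,971 removal requests, which is why India dominates the content removal picture once the single repeated image is factored out.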
The Indian Information Technology Act in general, and the Rules to the IT Act, 2011 in particular, facilitate this covert form of online censorship: Facebook is treated as an “intermediary” and is liable for prosecution if it fails to remove content flagged for being grossly harmful, harassing, blasphemous, defamatory, obscene, pornographic, pedophilic, libelous, invasive of another’s privacy, hateful, or racially or ethnically objectionable, disparaging, relating to or encouraging money laundering or gambling, or otherwise unlawful in any manner whatever. The vague terms used in these Rules, combined with the fact that India has no legislative framework guaranteeing a right to privacy for information shared over the internet, make it easier for law enforcement agencies to actively monitor and seek removal of content posted online.
It would be interesting if Facebook released granular details about the type of legislation invoked by law enforcement agencies when asking for content removal. This would give us a clear idea of what is now being used by these agencies in the absence of Sec 66A. Further, it would also be important to see which types of content generate the most removal requests. This may help us fine-tune the provisions of the IT Act so that it becomes an enabler of the growth of internet access in the country, and not another law that allows governments to force censorship on netizens.
Data sourced from the Facebook Government Request Report.
Title image: wallpaperswa
Author: Ranjeet Rane
This article was originally published on The Dialogue and is republished here with the author's consent.