Publisher: APCNews, 25 October 2019

A damning new report from the United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, raises alarm about the rise of the digital welfare state, which uses data and technologies to automate, predict, identify, surveil, detect, target and punish the poor.

The report, recently presented to the UN General Assembly, covers a range of uses of digital technologies in welfare systems – digital identification, automated programmes for assessing eligibility and for calculating and paying benefits, and fraud prevention and detection using algorithms to score risks and classify needs. The report picked up on a number of the issues that APC flagged in our joint submission with Derechos Digitales and Media Matters for Democracy, including:

Concerns about the digitisation of welfare systems excluding people who do not have meaningful access to the internet or digital skills, who are often the same people most in need of social protection services.

The potential for discrimination arising from automated decision making through the use of algorithms and artificial intelligence in digital welfare systems.

Concern that big technology companies operate in an almost rights-free zone, and are leading the design and operation of significant parts of the digital welfare state with little to no government oversight.

Digital rights are often dismissed as a luxury issue that only people in the global North or who have their basic needs met can be concerned with. Philip Alston's report demonstrates how far this is from the truth. By operating in a rights-free zone, digital welfare systems harm the most vulnerable in society, and strip them of their dignity and basic needs. For example, unrestricted data matching is used to expose and punish the slightest irregularities in the records of welfare beneficiaries, sometimes denying them critical services. Refined surveillance options subject welfare beneficiaries to around-the-clock monitoring. These practices can have a disproportionate effect on already marginalised groups, such as women and transgender persons.

Examples abound of digital welfare systems being implemented without due consideration of the human rights of beneficiaries. In Chile, for instance, a biometric identification system was implemented for the school meal programme, requiring learners to provide their fingerprints to obtain the meal benefits. While the Supreme Court ruled that the use of biometrics of minors must be subject to the consent of their parents or legal guardians, the practical implication is that children being fed has been made contingent on this consent being provided. Critically, people must be able to give meaningful consent to whether and how their data is used, and governments must always ensure that a non-digital option is available. "The need to deliver basic services is often used as an excuse to subject the most vulnerable groups to full exclusion or surveillance in order to maintain an unequal social structure in which the exercise of fundamental rights is restricted to a few: technology is implemented by states to monitor the poor, while local elites can turn to private providers to maintain greater control over their information," says Jamila Venturini, regional coordinator at Derechos Digitales. "It is urgent that these initiatives adopt a comprehensive human rights approach," she added.

In India, each person claiming support from the welfare system was required to have an Aadhaar, a unique 12-digit number linked to their biometric and demographic data, introduced in 2009. In 2018, the Supreme Court ruled that Aadhaar enrolment is not mandatory for the provision of public services. However, in practice, citizens who had already linked their Aadhaar number prior to the ruling were unable to switch to an alternative option. "Aadhaar is essential to access several welfare mechanisms, and for people categorised as below the poverty line this includes monthly food rations for a household (especially if there is a child at risk of malnourishment), subsidised medical services, gas cylinders needed for cooking, disability pension and stipends, and even scholarships for people from oppressed castes and economically weaker sections of society," explained Namita Aavriti, co-manager of the APC Women's Rights Programme. "People who are already vulnerable due to economic and social factors are trading their data and privacy to access some of the most basic amenities. Added to this is that the failure of Aadhaar to function as it should – mismatched fingerprints, lost accounts, failure in verification, frauds and leaks – has led to the loss and cancellation of welfare services and suffering for many people, and also a devaluing of their right to privacy."

The report outlines a series of recommendations to make digital technologies work for social protection, which APC supports, such as bringing digital welfare systems in line with human rights, including by regulating the technology companies on which they rely. It cites the need to protect economic, social and cultural rights, and for welfare systems to stop demanding that people demonstrate they "deserve" social protections. Likewise, digital technologies should not be employed in the welfare state to surveil, target, harass and punish beneficiaries, especially the most vulnerable among them.

The values underpinning and shaping the new technologies are unavoidably skewed by the fact that there is "a diversity crisis in the AI sector across gender and race." Those designing AI systems in general, as well as those focused on the welfare state, are overwhelmingly white, male, well-off, and from the global North, says Alston. To counteract these biases and to ensure that human rights considerations are adequately taken into account, the practices underlying the creation, auditing and maintenance of data should be subjected to very careful scrutiny.

The report stresses the need for transparency and broad-based inputs into policy-making processes. As our joint submission stated, meaningful engagement of those directly affected by the welfare system is imperative to ensure that these measures are suitable to their needs, are effective, and will be utilised.

Alston cautions against treating the digital welfare state as inevitable, reminding us that these systems are developed through a series of policy choices in which values such as dignity, choice, self-respect, autonomy, self-determination and privacy are traded away without ever being factored into the overall equation. He concludes that the real digital welfare state revolution should focus on how the existing welfare system could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged. Governments must treat this as a much-needed wake-up call, and ensure that digital welfare programmes have a legal basis, safeguard the rights and dignity of the people they are meant to support, are non-discriminatory, and are transparent.

Read the full joint submission by APC, Derechos Digitales and Media Matters for Democracy to the United Nations Special Rapporteur on extreme poverty and human rights in response to the call for inputs on digital technology, social protection and human rights here.