TWN Info Service on UN Sustainable Development (Oct19/03)
24 October 2019
Third World Network

World stumbling zombie-like into a digital welfare dystopia
Published in SUNS #9001 dated 21 October 2019

Geneva, 18 Oct (Kanaga Raja) - As humankind moves, perhaps inexorably, towards the digital welfare future, it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia, a UN human rights expert has warned.

This is one of the main conclusions highlighted by Mr Philip Alston, the UN Special Rapporteur on extreme poverty and human rights, in a report to be presented to the UN General Assembly in New York on 18 October.

Expressing concern over the emergence of the "digital welfare state", the UN expert said that such a future would be one in which: unrestricted data matching is used to expose and punish the slightest irregularities in the records of welfare beneficiaries (while assiduously avoiding such measures in relation to the well-off); ever more refined surveillance options enable around-the-clock monitoring of beneficiaries; conditions are imposed on recipients that undermine individual autonomy and choice in relation to sexual and reproductive matters, and in relation to food, alcohol, drugs and much else; and highly punitive sanctions can be imposed on those who step out of line.

"Digital welfare states thereby risk becoming a Trojan Horse for neoliberal hostility towards social protection and regulation", said the Special Rapporteur.

"Moreover, empowering governments in countries with significant rule of law deficits by endowing them with the level of control and the potential for abuse provided by these biometric ID systems should send shudders down the spine of anyone even vaguely concerned to ensure that the digital age will be a human rights friendly one".

The digital welfare state is either already a reality or is emerging in many countries across the globe. In these states, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish, Mr Alston noted.

Big Tech operates in an almost human rights-free zone, which is especially problematic when the private sector takes a leading role in designing, constructing, and even operating significant parts of the digital welfare state, he said.

He recommended that, instead of obsessing about fraud, cost savings, sanctions, and market-driven definitions of efficiency, the starting point should be how welfare budgets could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged.

In this context, the Special Rapporteur called for the regulation of digital technologies, including artificial intelligence (AI), to ensure compliance with human rights, and for a rethinking of the positive ways in which the digital welfare state could be a force for the achievement of vastly improved systems of social protection.


Meanwhile, commenting on the report by the Special Rapporteur, seven human rights groups on Thursday said that governments should heed the call of the Special Rapporteur to fully integrate human rights protections into their efforts to digitize and automate welfare benefits and services.

The human rights groups are Access Now, AlgorithmWatch, Amnesty International, Child Poverty Action Group, Human Rights Watch, Irish Council for Civil Liberties, and Privacy International.

"The UN expert's findings show that automating welfare services poses unique and unprecedented threats to welfare rights and privacy," said Amos Toh, senior artificial intelligence and human rights researcher at Human Rights Watch (HRW), in a statement.

"Using technology to administer welfare has risks and is not a panacea for rights-based reforms that safeguard the dignity and autonomy of society's most vulnerable people," Toh added.

According to the human rights groups, in the first UN global survey of digital welfare systems, Alston found that governments increasingly rely on automated decision-making and other data-driven technologies to verify the identity of welfare beneficiaries, assess their eligibility for various services, calculate benefit amounts, and detect welfare fraud.

But the use of these technologies can create serious harm to human rights, the groups said.

The automation of key welfare functions without sufficient transparency, due process, and accountability raises the spectre of mass violations of welfare rights.

In the United Kingdom, errors in the Real Time Information System, which calculates benefits payments based on earnings information reported to the tax authority, have caused potentially catastrophic delays and reductions in benefit payments for impoverished families.

Design flaws in automated fraud detection systems in Australia and the United States have also triggered debt notices to scores of beneficiaries, wrongfully accusing them of welfare fraud, they noted.

"Automated decision-making should be made more transparent, as highlighted by the Rapporteur, in three important ways," said Matthias Spielkamp, executive director of AlgorithmWatch.

"Citizens need to be able to understand what policies are implemented using algorithms and automation. The administration has to keep a register of all complex automation processes it uses that directly affect citizens. Also, there needs to be transparency of responsibility, so that people know who to contact to challenge a decision."

The groups endorsed the Special Rapporteur's recommendation that governments should establish laws ensuring that the private sector incorporates transparency, accountability, and other human rights safeguards in the development and sale of technologies to facilitate the delivery of welfare services and benefits.


According to the report by the Special Rapporteur, "the era of digital governance is upon us. In high and middle income countries, electronic voting, technology-driven surveillance and control including through facial recognition programs, algorithm-based predictive policing, the digitization of justice and immigration systems, online submission of tax returns and payments, and many other forms of electronic interactions between citizens and different levels of government are becoming the norm."

And in lower-income countries, national systems of biometric identification are laying the foundations for comparable developments, especially in systems to provide social protection, or "welfare", to use a shorthand term.

Invariably, improved welfare provision, along with enhanced security, is one of the principal goals invoked to justify the deep societal transformations and vast expenditures involved in moving a country's entire population not just onto a national unique biometric identity card system but onto linked centralized systems providing a wide array of government services and goods, ranging from food and education to health care and special services for the ageing or those with disabilities.

The result is the emergence of the "digital welfare state" in many countries across the globe. In these states, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish, said Mr Alston.

Commentators have predicted "a future in which government agencies could effectively make law by robot", and it is clear that new forms of governance are emerging which rely significantly on the processing of vast quantities of digital data from all available sources, use predictive analytics to foresee risk, automate decision-making, and remove discretion from human decision-makers.

In such a world, citizens become ever more visible to their governments, but not the other way around.

According to the Special Rapporteur, welfare is an attractive entry point not just because it takes up a major share of the national budget or affects such a large proportion of the population but because digitization can be presented as an essentially benign initiative.

The embrace of the digital welfare state is presented as an altruistic and noble enterprise designed to ensure that citizens benefit from new technologies, experience more efficient government, and enjoy higher levels of well-being.

Often, however, the digitization of welfare systems has been accompanied by deep reductions in the overall welfare budget, a narrowing of the beneficiary pool, the elimination of some services, the introduction of demanding and intrusive forms of conditionality, the pursuit of behavioural modification goals, the imposition of stronger sanctions regimes, and a complete reversal of the traditional notion that the state should be accountable to the individual.

These other outcomes are promoted in the name of efficiency, targeting, incentivizing work, rooting out fraud, strengthening responsibility, encouraging individual autonomy, and responding to the imperatives of fiscal consolidation.

Through the invocation of what are often ideologically charged terms, neoliberal economic policies are seamlessly blended into what are presented as cutting-edge welfare reforms, which in turn are often facilitated, justified, and shielded by new digital technologies, said Mr Alston.

Although the latter are presented as being "scientific" and neutral, they can reflect values and assumptions that are far removed from, and may be antithetical to, the principles of human rights, he cautioned.

Despite the enormous stakes involved not just for millions of individuals but for societies as a whole, these issues have, with a few notable exceptions, garnered remarkably little attention.

The mainstream tech community has been guided by official preoccupations with efficiency, budget savings, and fraud detection.

According to the Special Rapporteur, the welfare community has tended to see the technological dimensions as separate from the policy developments, rather than as being integrally linked.

And those in the human rights community concerned with technology have understandably been focused instead on concerns such as the emergence of the surveillance state, the potentially fatal undermining of privacy, the highly discriminatory impact of many algorithms, and the consequences of the emerging regime of surveillance capitalism.

But the threat of a digital dystopia is especially significant in relation to the emerging digital welfare state, Mr Alston argued.


The Special Rapporteur cited several instances in the welfare context, in which digital innovation has been most prominently used.

For example, he said automated programs are increasingly used to assess eligibility in many countries.

An especially instructive case was the automation of eligibility decisions in Ontario in 2014 through the Social Assistance Management System (SAMS) which relied on Curam, a customizable off-the-shelf IBM software package, also used in welfare programs in Canada, the United States, Germany, Australia and New Zealand.

In 2015, the Ontario Auditor-General reported 1,132 cases of errors in eligibility determinations and payment amounts under SAMS, involving about $140 million. The total expenditure on SAMS by late 2015 was $290 million.

The new system reportedly led caseworkers to resort to subterfuges to ensure that beneficiaries were fairly treated, made decisions very difficult to understand, and created significant additional work for staff.

The calculation and payment of benefits is increasingly done through digital technologies without the involvement of caseworkers and other human decision-makers, Mr Alston pointed out.

While such systems offer many potential advantages, the Special Rapporteur said that he also received information about prominent examples of system errors or failures that generated major problems for large numbers of beneficiaries.

These included the "Robodebt" fiasco in Australia, the Real Time Information system in the United Kingdom, and the SAMS system in Canada.

Electronic payment cards or debit cards are also increasingly being issued to welfare recipients, noted Mr Alston.

Fraud and error in welfare systems can potentially involve very large sums of money and have long been a major concern for governments, he acknowledged.

It is thus unsurprising that many of the digital welfare systems that have been introduced have been designed with a particular emphasis on the capacity to match data from different sources in order to expose deception and irregularities on the part of welfare applicants.

Nevertheless, evidence from country missions undertaken by the Special Rapporteur, along with other cases examined, suggests that the magnitude of these problems is frequently overstated and that there is sometimes a wholly disproportionate focus on this particular dimension of the complex welfare equation.

Images of supposedly wholly undeserving individuals receiving large government welfare payments, such as Ronald Reagan's "welfare queen" trope, have long been used by conservative politicians to discredit the very concept of social protection.

The risk is that the digital welfare state provides endless possibilities for taking surveillance and intrusion to new and deeply problematic heights, said the Special Rapporteur.

He noted that risk calculation is inevitably at the heart of the design of welfare systems and digital technologies can achieve very high levels of sophistication in this regard.

In addition to fraud detection and prevention, child protection has been a major focus in this area, as illustrated by examples from countries as diverse as the United States, New Zealand, the United Kingdom, and Denmark.

Governments have also applied these techniques to determine whether unemployment assistance will be provided and at what level.

A prominent scheme of this kind in Poland was held unconstitutional, but an algorithm-based system in Austria continues to categorize unemployed jobseekers to determine the support they will receive from government job centers.

Many other areas of the welfare state will also be affected by new technologies used to score risks and classify needs, said Mr Alston.

While such approaches offer many advantages, it is also important to take account of the problems that can arise.

First, there are many issues raised by determining an individual's rights on the basis of predictions derived from the behaviour of a general population group.

Second, the functioning of the technologies and how they arrive at a certain score or classification is often secret, thus making it difficult to hold governments and private actors to account for potential rights violations.

Third, risk-scoring and need categorization can reinforce or exacerbate existing inequalities and discrimination.

Communications that previously took place in person, by phone or by letter are increasingly being replaced by online applications and interactions, he also said.

Various submissions to the Special Rapporteur cited problems with the Universal Credit system in the United Kingdom, including difficulties linked to a lack of internet access and/or digital skills, and the extent to which online portals can create confusion and obfuscate legal decisions, thereby undermining the right of claimants to understand and appeal decisions affecting their social rights.

Similar issues were also raised in relation to other countries, including Australia and Greece.

Another problem is the likelihood that once the entire process of applying for and maintaining benefits is moved online, the situation invites further digital innovation.

In 2018, Sweden was forced to roll back a complex digital system used by the Employment Service to communicate with jobseekers, after problems meant that as many as 15% of the system's decisions were probably incorrect.

Digital technologies, including artificial intelligence, have huge potential to promote the many benefits that are consistently cited by their proponents, said Mr Alston.

They are already doing so for those who are economically secure and can afford to pay for the new services.

They could also make an immense positive difference in improving the well-being of the less well-off members of society, but this will require deep changes in existing policies.

The leading role in any such effort will have to be played by governments through appropriate fiscal policies and incentives, regulatory initiatives, and a genuine commitment to designing the digital welfare state, not as a Trojan Horse for neoliberal hostility towards welfare and regulation, but as a way to ensure a decent standard of living for everyone in society, the Special Rapporteur underlined.


Egalitarianism is a consistent theme of the technology industry, as exemplified by Facebook's aim "to give people the power to build community and bring the world closer together".

But at the macro level, Big Tech has been a driver of growing inequality and has facilitated the creation of a "vast digital underclass", said Mr Alston.

For its part, he noted, the digital welfare state sometimes gives beneficiaries the option to go digital or to continue using more traditional techniques. But in reality, policies such as "digital by default" or "digital by choice" are usually transformed into "digital only" in practice.

This in turn exacerbates or creates major disparities among different groups. A lack of digital literacy leads to an inability to use basic digital tools at all, let alone effectively and efficiently.

Limited or no access to the internet poses huge problems for a great many people. Additional barriers arise for individuals who have to pay high prices for internet access, travel long distances or absent themselves from work to get online, visit public facilities such as libraries in order to get access, or obtain assistance from staff or friends to navigate the systems.

And while the well-off may have instant access to up-to-date, easy-to-use computers and other hardware, as well as fast and efficient broadband, the least well-off are far more likely to be severely disadvantaged by out-of-date equipment and time-consuming, unreliable digital connections.

Submissions to the Special Rapporteur from a wide range of countries emphasized the salience of these different problems.

Both in the Global North and the Global South, many individuals, and especially those living in poverty, do not have a reliable internet connection at home, cannot afford such a connection, are not digitally skilled or confident, or are otherwise inhibited in communicating with authorities online, said Mr Alston.

He said that the United Kingdom provides an example of a wealthy country in which, even in 2019, 11.9 million people (22% of the population) do not have the "essential digital skills" needed for day-to-day life. An additional 19% cannot perform fundamental tasks such as turning on a device or opening an app.

In addition, 4.1 million adults (8%) are offline because of fears that the internet is an insecure environment; of those, proportionately, almost half are from a low-income household and almost half are under sixty years of age.

These problems are compounded by the fact that when digital technologies are introduced in welfare states, their distributive impact is often not a significant focus of governments.

In addition, vulnerable individuals are not commonly involved in the development of IT systems and the IT professionals are often ill-equipped to anticipate the sort of problems that are likely to arise.

Programs often assume, without justification, that individuals will have ready access to official documents and be able to upload them, that they will have a credit history or broader digital financial footprint, or even that their fingerprints will be readable, which is often not the case for those whose working lives have involved unremitting manual labour.

In terms of digital welfare policy, several conclusions emerge, said the Special Rapporteur.

First, there should always be a genuine non-digital option available.

Second, programs that aim to digitize welfare arrangements should be accompanied by programs designed to promote and teach the needed digital skills and to ensure reasonable access to the necessary equipment as well as effective online access.

Third, in order to reduce the harm caused by incorrect assumptions and mistaken design choices, digital welfare systems should be co-designed by their intended users and evaluated in a participatory manner.

Many of the programs used to promote the digital welfare state have been designed by the very same companies that are so deeply resistant to abiding by human rights standards, the Special Rapporteur noted.

Moreover, those companies and their affiliates are increasingly relied upon to design and implement key parts of the actual welfare programs.

It is thus evident that the starting point for efforts to ensure human rights-compatible outcomes in digital welfare states is to ensure, through governmental regulation, that technology companies are legally required to respect applicable international human rights standards, he concluded.