Artificial Administration in Action: the Robo-Debt Scandal
This is the second post in a series. The introductory post is here.
In Bureaucratic Justice: Managing Social Security Disability Claims,[1] Professor Jerry Mashaw set out three influential “models” of administrative justice.[2] Further iterations have been suggested by Michael Adler[3] and Robert Kagan,[4] but for present purposes Mashaw’s models are sufficient.[5]
- The bureaucratic rationality model features administrative decision-makers who gather relevant information – about applicants for benefits, taxpayers, or licence holders – and make decisions based on that information, generally in a cost-effective manner.
- The professional treatment model is associated with decision-making structures in which professionals, such as doctors or social workers, play an important, indeed dominant, role: the norms of their professional culture dictate how information is gathered and how the law is applied to the information gathered.
- The moral judgment model is most closely associated with adjudication by an independent and impartial decision-maker.
In general, reliance on artificial administration can shift the administrative justice model from professional treatment or moral judgment to bureaucratic rationality. When a decision-making structure “is coded as a program”, the result is that the technology “automatically determines the range of possible action”.[6]
A Member of Parliament wisely and presciently observed, in the context of a statutory provision authorising computerised social welfare decision-making, that the effect of artificial administration will often be to dehumanise those individuals who deal with governmental decision-makers:
What is important here is…the fact that the complainant feels even more strongly that he is not dealing with a group of human beings who in any sense understand the human problems that he may be experiencing. He will feel that he is dealing with a monolith that will take on the appearance of a large network of computers producing pieces of paper – sometimes correctly, sometimes not – with never a human hand behind them. The long-term effects of this increasing depersonalisation…are, and will become, even greater than many imagine…The ethic of all of us…should be to re-integrate as fully as possible into the wider community individuals who receive benefit, by making them feel that they are not pawns in the hands of a monolith.[7]
Indeed, several recent examples of the introduction of, and reliance on, artificial administration support the proposition that bureaucratic rationality comes to displace professional treatment or moral judgment as the operative administrative justice model, with unhappy consequences.
In this post, I deal with Australia’s robo-debt scandal.
Ensuring that recipients of social welfare benefits receive appropriate amounts of state support is a constant challenge, given the complexity of contemporary legislative and regulatory structures.[8] Equally, ensuring that the distribution of benefits, or the repayment of overpaid benefits, does not cause hardship to vulnerable individuals has long been a challenge for social welfare decision-making structures.[9] Having human decision-makers involved in the process, certainly at the point of enforcing the recovery of overpayments, ensures that professional treatment is the operative administrative justice model. A shift from professional treatment to bureaucratic rationality, prompted by the introduction of artificial administration, was at the heart of Australia’s recent robo-debt scandal. The allure of using artificial administration to secure efficiency gains and cost savings motivated the shift. From the perspective of the individuals receiving benefits, the outcome was a Kafkaesque dystopia, not a utopia of streamlined public administration.
The trigger was the introduction of the federal government’s Online Compliance Intervention (OCI) programme. Recipients of income support payments from the Department of Human Services had for many years been subject to a verification process based on collaboration between the Department and the Australian Taxation Office. Departmental officials, working under the Centrelink brand name, reviewed the recorded lump-sum annual taxation income against the Department’s own fortnightly income records to check for discrepancies. As is common in such systems, variations in a recipient’s income – which may arise in a variety of ways, especially for seasonal or casual workers[10] – may affect the entitlement to income support. Accordingly, the verification system was designed to ensure that recipients were not receiving overpayments of benefits. If they were, enforcement action could be initiated.
When operation of the verification system was transferred from officials to recipients of welfare benefits, who were made responsible for inputting accurate data on an online portal and were subject to automatic enforcement measures without human oversight, the robo-debt scandal erupted. The paradigm shift readily facilitated the generation of information for the initiation of enforcement action: the volume rose from 20,000 a year to 20,000 a week.[11][12] But this increase in efficiency came at a cost.
Previously, discrepancies liable to have caused an overpayment were verified by Centrelink officials, but this human oversight was replaced by artificial administration. First, “the responsibility for checking and clarifying income information…shifted from the department to current and former recipients of Centrelink payments” and second, “recipients [were] directed to an online portal to check the information and provide supporting evidence of their fortnightly income, dating back to 2010 for some people”;[13] some recipients could be asked to account for discrepancies which had occurred many years in the past, for which most would typically no longer have accurate records.[14]
Where recipients could not provide information in respect of the discrepancies, an algorithm was applied:
[The algorithm] calculates the average of an individual’s annual income reported to the Australian Tax Office by their employer over twenty-six fortnightly periods and compares it with the fortnightly earnings reported to Centrelink by the welfare recipient. All welfare recipients are required to declare their gross earnings (income accrued before tax and other deductions) within this fourteen-day period. Any discrepancy between the two figures is interpreted by the algorithm as proof of undeclared or underreported income, from which a notice of debt is automatically generated. Previously, these inconsistencies would be handled by Centrelink staff, who would call up your employer, confirm the amount you received in fortnightly payments, and cross-index that figure with the one calculated in the system. But the automation of the debt recovery process has outsourced authority from humans to the algorithm itself.[15]
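To make the averaging problem concrete, the following is a minimal sketch, in Python, of the logic Terzis describes. It is an illustration under stated assumptions, not Centrelink’s actual implementation (which was never published): the function names and the worker’s figures are hypothetical.

```python
# Hypothetical sketch of the income-averaging logic described above.
# Not the actual OCI code; names and figures are illustrative only.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Spread the ATO's annual income figure evenly across 26 fortnights."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(annual_ato_income: float,
                       declared_fortnightly: list[float]) -> list[int]:
    """Return the fortnights in which declared earnings fall short of the
    smoothed average -- the gap the algorithm treated as undeclared income,
    triggering an automatically generated debt notice."""
    average = averaged_fortnightly_income(annual_ato_income)
    return [i for i, earned in enumerate(declared_fortnightly)
            if earned < average]

# A seasonal worker earns $26,000, all of it in the first ten fortnights,
# and declares every cent accurately. Averaging still flags the sixteen
# off-season fortnights, because $26,000 / 26 = $1,000 exceeds the $0
# actually earned (and declared) in each of them.
declared = [2600.0] * 10 + [0.0] * 16
print(flag_discrepancies(26000.0, declared))  # fortnights 10 through 25
```

The sketch illustrates why seasonal and casual workers who declared their income accurately could nonetheless receive debt notices: averaging manufactures a discrepancy wherever income is unevenly distributed across the year.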
Many individuals who logged onto the online portal when a discrepancy was flagged simply accepted the averaged income figure provided by the algorithm, perhaps because they presumed it was accurate[16] or because they did not have access to the information necessary to correct the discrepancy.[17] Centrelink did not do any modelling in advance of the introduction of the new system to assess “how many debts were likely to be over-calculated as opposed to under-calculated.”[18] If individuals did not respond, they were subject to an additional penalty, even those “who may not have had an adequate opportunity to provide a reasonable excuse, for example if they did not receive the initial letter, or did not understand the connection between reasonable excuse and the recovery fee”.[19]
Moreover, an automated system will collect a debt in the run-up to Christmas or other financially difficult periods;[20] it will be unable to account for the literacy, numeracy or disability challenges which may prevent some recipients from engaging with the online portal;[21] and it will collect a debt from individuals who are vulnerable or at risk.[22] Centrelink does, admittedly, flag vulnerable individuals and does not initiate enforcement proceedings against them. The point is that there are often individuals on the cusp of vulnerability, whose precarious circumstances could be taken into account by a human official but cannot be perceived by an algorithm.
Evidently, had humans been manning the controls, the situation would have been very different; at least recipients of overpayments would have been able to explain their circumstances and receive guidance on what to do next. As Basic Rights Queensland submitted to a Senate inquiry: “There has been frustration at having to argue their case by talking to the computer and over the difficulty in trying to discuss this with an actual person”.[23]
The displacement[24] of professional treatment by bureaucratic rationality plainly had an important impact on the acceptability of the OCI system. In part, the robo-debt scandal is attributable to a poorly designed computer system. Even taking design flaws into account, however, automating a decision process which previously had been operated by human officials created significant problems. One way to view those problems is as the replacement of professional treatment by bureaucratic rationality as the operative administrative justice model, with little effort made to find a compromise position. This is hardly a model for artificial administration.
[1] Yale University Press, New Haven, 1983.
[2] Joe Tomlinson and Robert Thomas, “Administrative justice – A primer for policymakers and those working in the system” UK Administrative Justice Institute (September 9, 2016).
[3] “A Socio-Legal Approach to Administrative Justice” (2003) 25 Law & Policy 323.
[4] “Varieties of Bureaucratic Justice” in Nicholas Parrillo ed., Administrative Law from the Inside Out: Essays on the Themes in the Work of Jerry Mashaw (Cambridge UP, 2016).
[5] In “A Socio-Legal Approach to Administrative Justice” (2003) 25 Law & Policy 323, Adler argues, for instance, that models associated with the rise of New Public Management need to be incorporated into Mashaw’s schema. This is a good point, well put and well taken. But these models are arguably more closely related to internal management and redress systems than to front-line interactions between decision-makers and individuals (and the same can be said of Kagan’s focus on institutional culture: “Varieties of Bureaucratic Justice” in Nicholas Parrillo ed., Administrative Law from the Inside Out: Essays on the Themes in the Work of Jerry Mashaw (Cambridge UP, 2016)). The replacement of human decision-making with market-based forms of resource allocation is, admittedly, an exception and can be conceived of as an additional administrative justice model. But the displacement of humans by market mechanisms really is a separate topic to the rise of artificial administration.
[6] A Aneesh, “Global Labor: Algocratic Modes of Organization” (2009) 27 Sociological Theory 347, at p. 356. See also Andrew Burton-Jones, “What Have We Learned from the Smart Machine?” (2014) 24 Information and Organization 71, at pp. 76-77.
[7] Quoted from Standing Committee B, October 28, 1997 (Oliver Letwin MP) in Andrew Le Sueur, “Robot Government: Automated Decision-Making and its Implications for Parliament” in Alexander Horne and Andrew Le Sueur eds., Parliament: Legislation and Accountability (Hart Publishing, Oxford, 2016), at p. 189.
[8] See e.g. Neville Harris, “Simplification of social security: the international experience” [2008] Journal of Social Security Law 12.
[9] See e.g. Goldberg v Kelly 397 U.S. 254 (1970); Mathews v Eldridge 424 U.S. 319 (1976).
[10] See e.g. Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at para. 2.86.
[11] Commonwealth Ombudsman, Centrelink’s automated debt raising and recovery system, April 2017, at para. 2.4.
[12] See e.g. “Centrelink urged to stop collecting welfare debts after compliance system errors”, Guardian Australia, 14 December 2016.
[13] Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at para. 2.21.
[14] See e.g. “Top QC slams Centrelink’s robo-debt program as ‘elaborate sham’”, Sydney Morning Herald, 2 December 2018. Cf Terry Carney, “The New Digital Future for Welfare: Debts without Legal Proofs or Moral Authority” [2018] University of New South Wales Law Journal Forum 1, at pp. 5-6, on the lawfulness of placing an onus on a recipient.
[15] Gillian Terzis, “Austerity is an Algorithm” (2017) 3 Logic: https://logicmag.io/03-austerity-is-an-algorithm/
[16] Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at para. 3.10. See also Terry Carney, “The New Digital Future for Welfare: Debts without Legal Proofs or Moral Authority” [2018] University of New South Wales Law Journal Forum 1, at pp. 8-9.
[17] Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at paras. 2.110-2.111; Commonwealth Ombudsman, Centrelink’s automated debt raising and recovery system, April 2017, at paras. 3.22-3.24.
[18] Commonwealth Ombudsman, Centrelink’s automated debt raising and recovery system, April 2017, at para. 3.5.
[19] Commonwealth Ombudsman, Centrelink’s automated debt raising and recovery system, April 2017, at para. 3.8.
[20] “Centrelink urged to stop collecting welfare debts after compliance system errors”, Guardian Australia, 14 December 2016; Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at paras. 3.24-3.27.
[21] Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at paras. 3.43-3.55.
[22] Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at paras. 3.33-3.42.
[23] Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, at para. 1.24. See also the comment of the Chief Executive Officer of the Queensland Council of Social Service at para. 1.28: “QCOSS supports the recovery of incorrect income support payments. These are public funds and there must be a level of accountability. The public expects that, but it also expects that the government will treat all people with an appropriate level of respect and in a dignified manner”.
[24] Review and appeal systems exist in respect of Centrelink decisions, giving those able and willing to take the opportunity the chance to have professional treatment or moral judgment exercised in a given case: Senate Standing Committee on Community Affairs (Australia), Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative, 21 June 2017, chapter 4. If the opportunity is not taken, however, the debt enforcement process commences: ibid., chapter 5.