Information Technology and Administrative Justice

The appropriate scope of the use of sophisticated computer programmes by government agencies is likely to be one of the most important topics in administrative law and administrative justice in the next decade. Indeed, it is already attracting significant scholarly and political attention, from the use of automated technologies to recover overpayments of welfare benefits, to the use of algorithms to set appropriate sentences and conditions of release for prison inmates, to (potentially) the use of machine learning techniques to help distinguish genuine from non-genuine refugee claimants.

Figuring out how such systems can be integrated into existing regulatory schemes is an important and difficult task. Artificial intelligence, using algorithms or machine learning, certainly has some place in the machinery of government, but clarity is needed about the settings in which it can properly be used. Thinking about models of administrative justice is a useful first step in determining the appropriate scope of the use of information technology in public administration.

In Bureaucratic Justice: Managing Social Security Disability Claims (Yale UP, 1983), Professor Jerry Mashaw set out three influential “models” of administrative justice. These can be summarised as follows:[1]

The bureaucratic rationality model features administrative decision-makers who gather relevant information – about applicants for benefits, taxpayers, or licence holders – and make decisions based on that information, generally in a cost-effective manner.

The professional treatment model is associated with decision-making structures in which professionals, such as doctors or social workers, play an important, indeed dominant, role: the norms of their professional culture dictate how information is gathered and how the law is applied to it.

The moral judgment model is most closely associated with adjudication by independent and impartial decision-makers.

The use of information technology is least likely to upset the bureaucratic rationality model. Indeed, given the focus on cost-effective and efficient decision-making, artificial intelligence – in the form of automation, computer-powered algorithmic decision-making or machine learning – could profitably be used where bureaucratic rationality is the operative model of administrative justice.

Less scope exists for the use of information technology where moral judgment is the operative model. Machine learning is not an obvious fit here – although one could imagine sophisticated neural nets capable of making independent and impartial judgments, they would struggle to communicate the reasons for their conclusions in the manner associated with moral judgment. However, simple automation and algorithmic decision-making could be applied in this context, to the extent that they do not compromise the contextual interpretation and the independent and impartial adjudication characteristic of the moral judgment model (albeit that automation bias would have to be guarded against).

Information technology seems destined to play only a very limited role where professional treatment is the operative model of administrative justice. With the emphasis on professional culture and personalised treatment, it is difficult to see how automation, algorithms or machine learning could play a significant role beyond, perhaps, basic information gathering for a final, human decision (with, again, concerns about automation bias likely to arise).

The controversies that have arisen to date in respect of the use of information technology tend to involve a mismatch between the technology and the administrative justice model promised to or expected by individuals. In the Australian “robo-debt” affair involving Centrelink, individuals expected professional treatment but found themselves on the receiving end of bureaucratic rationality. Concerns about the use of sentencing algorithms can be understood as concerns about bureaucratic rationality displacing moral judgment. And one of the worries expressed about the potential use of algorithms in Canadian immigration adjudication is that moral judgment would be replaced by bureaucratic rationality.

The point here is that reflection on administrative justice models – on what precisely a given regulatory scheme is trying to achieve – is likely to be a prerequisite to the successful implementation of information technology in public administration.
