Judicial Review of AI-Aided Decisions: Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464

No sooner had I started to play around with Chat GPT-4 to write administrative decisions than the Federal Court issued a decision on the use of AI in the administrative decision-making process.

In Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464, Brown J considered whether using an AI system (Chinook) in writing a decision was lawful. He concluded that there was no unreasonableness or procedural unfairness in doing so when preparing a negative decision on a work permit application:

As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.

Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov (at paras. 24, 28).

There is little detail about the applicant’s argument concerning Chinook, but the decision at issue here (at para. 3) looks suspiciously like the one Chat GPT-4 concocted for me.

That said, Brown J must be right that the final decision here was that of the officer, not the machine. The machine assisted the officer in writing the decision, but there was no evidence to suggest the machine actually made the decision. This was not a situation in which the officer had sub-delegated his decision: he fed the machine inputs to allow it to produce a readable (and reasonable) decision. At most, the reasons are the machine’s, not the decision. And there is nothing inherently problematic about people other than the decision-maker influencing or contributing to the reasons, as long as the decision is that of the decision-maker: Weerasinghe v. Canada (Minister of Employment and Immigration), [1994] 1 F.C. 330; Bovbel v. Canada (Minister of Employment and Immigration), [1994] 2 F.C. 563. In addition, it is hard to see that any procedural unfairness was visited upon the applicant, who was given an opportunity to make his case.

Nonetheless, it is difficult to know exactly how Chinook might have been used here; indeed, it is not clear that Chinook was involved in writing the decision at all. Chinook has been in the news before, and in response to parliamentary questions, the Department of Immigration, Refugees and Citizenship explained its purpose as follows:

Chinook is a Microsoft Excel-based tool developed by Immigration, Refugees and Citizenship Canada (IRCC) for temporary resident application processing to increase efficiency and to improve client service by decreasing the impacts of system and broadband latency, thus improving processing times…

The Immigration Refugee Protection Act and Regulations (IRPA/IRPR) are the basis for the decisions rendered by officers. Decisions are based on the officer review and made by the officer. Chinook does not change the way decisions are made; it is always an officer – not the Chinook spreadsheet – that makes final decisions and provides the rationale for them. Chinook supports decision-makers and IRCC staff and is not used to make or recommend decisions….

Chinook is a tool designed to simplify the visual representation of a client’s information. It does not utilize artificial intelligence (AI), nor advanced analytics for decision-making, and there are no built in decision-making algorithms. IRCC’s advanced analytics and automation systems operate independently of Chinook, and they are reviewed routinely to ensure they are working as intended and align with the Government of Canada’s Directive on Automated Decision-Making

Whatever the position on Chinook’s role in writing decisions, given that Chat GPT-4 is available and willing to help administrative decision-makers, it would be useful to know how IRCC officers are trained and instructed in the use of AI to assist in decision writing. The process of writing reasons is important: it focuses the mind on relevant considerations and may lead the writer to reconsider the initial decision they are minded to take. Will we ever know?

Unfortunately, the Directive on Automated Decision-Making is not triggered here. The Directive (which has led to the disclosure of information about sophisticated AI decision-making in the immigration space) applies only to systems “used to recommend or make an administrative decision about a client” (s. 5.2).

There might be a case for disclosure under rule 317(1) of the Federal Courts Rules of internal policies relating to how Chinook is deployed, but it would be necessary to demonstrate that this is “material relevant to an application” and that it is “in the possession of a tribunal whose order is the subject of the application”.

Over and above these soft and hard law provisions, IRCC could simply disclose this information proactively. If decision-makers are routinely relying on AI to assist in the making or writing of decisions, I suggest that proactive disclosure would be appropriate, if nothing else to enhance public confidence in the decision-making process.

UPDATE: Via Will Tao, here are some additional thoughts, including a link to an affidavit filed by IRCC in another case. The affidavit explains that Chinook does generate language for decisions, although officers can modify it (Will thinks the officer probably did modify the language in this case):

The notes generator presents standard language that Decision-Makers may select, review, and modify to fit the circumstances of an Application, in preparing reasons for refusal. Standard language is not automatically populated by the tool. Decision-Makers may choose not to use any standard language. If Decision-Makers choose to use standard language, the language may be modified to address further the specific circumstances of an Application and certain standard language selections require Decision-Makers to enter customized language. The standard language allows both positive and negative factors of an Application to be noted. The negative factors are linked to refusal grounds, as per the Immigration and Refugee Protection Act and the Immigration and Refugee Protection Regulations.
