Reviewing the Use of Artificial Intelligence: Mehrara v. Canada (Citizenship and Immigration), 2024 FC 1554
The use of artificial intelligence in public administration has recently been a source of significant debate. One issue that arises, in the context of judicial review specifically, is whether and to what extent an individual is entitled to know about the use of artificial intelligence in administrative decision-making. The answer given in Mehrara v. Canada (Citizenship and Immigration), 2024 FC 1554 — albeit in the context of a technology that probably should not be described as artificial intelligence — is “not much at all”.
Here, Battista J addressed the use of Chinook in a case where the applicants’ study permit and visa applications were refused. The reasons were as follows:
Although PA is traveling without their spouse, PA will be accompanied by their child. The ties to their home country are weakened with the intended travel to Canada involving their immediate family. I have concerns that the remaining ties to Iran are not sufficiently great to motivate departure from Canada, especially considering the current socio-economic situation in this country.
The applicant is 38 years old self-employed in workshop management. I fail to see how the proposed program adequately demonstrates a logical progression of studies and career. It is unclear why the applicant would choose to study if currently self-employed and having to maintain a business in the country of residence. In view of these facts, I am not satisfied that the proposed studies make sense in the context of the applicant’s economic background, prior studies and career plans. I am also not satisfied that the professional ties in country of residence would be sufficient following the studies to compel departure from Canada.
Weighing the factors in this application. I am not satisfied that the applicant will depart Canada at the end of the period authorized for their stay.
It turned out that this decision was unreasonable as it flew in the face of the “detailed and specific evidence” provided by the applicants (at para. 37).
But that’s not what we are here for! The reasons also noted that the file had been “processed with the assistance of Chinook 3+“. As Battista J explained:
Officers used “Module 3” of Chinook during their decision-making; they use “Module 4” to make the decision. Module 3 contains columns that categorize information by, for example, age, marital status, etc. Officers can review application materials contemporaneously while using Chinook. Module 4 provides officers with pre-generated reasons for refusal that can be tailored to an applicant’s circumstances (at para. 9).
Understandably, the applicants wanted to know more about this technological assistance. They argued that by failing to provide any background information about how Chinook was used in this case, the decision-maker failed to file an appropriately full record. They also argued that the decision-maker fettered his discretion by using the Chinook software.
Battista J held that the Federal Court was not, in a case like this one, the place where more information would be forthcoming.
In the first place, the fettering argument was “speculative”:
The Officer’s failure to produce Chinook training materials is not reasonably connected to her capacity to render an independent decision on the substance of the Applicants’ applications. The similarity of her affidavit to another officer’s affidavit in a separate application does not imply that the affidavit is not based on her personal experience or knowledge; legal counsel who assisted in drafting the application may have chosen the similar wording. Finally, the Applicants’ estimate of time available to the Officer for assessing their applications was based on the Applicants’ untested assumptions and is disputed by the Respondent (at para. 44).
Importantly, the mention of Chinook in the reasons did not rebut the ordinary presumption that the decision-maker had considered all relevant information: “It is not necessary for an Applicant to have full access to a decision maker’s entire file to demonstrate that evidence was not considered, particularly in the “reasons first” exercise of post-Vavilov judicial review. The presence or absence of an analysis of relevant evidence in the reasons is sufficient to establish whether a decision maker considered it” (at para. 47).
Second, as far as production of the decision-maker’s record was concerned, there was an insufficient link between the pleadings and the alleged shortcomings in disclosure:
at its essence, the main challenge to the decision remained an alleged mismatch between the evidence that was provided and the reasons for the decision that were rendered. This mismatch is ultimately the basis on which this application for judicial review is being decided. Nothing more is necessary for effective judicial review, in this case, than the Applicants’ evidence and the reasons (at para. 53).
The applicants argued that the decision-maker’s working notes generated by Chinook, the risk indicators and the underlying spreadsheet should have been part of the record. On each count, Battista J disagreed, essentially because these items were not relevant to the judicial review exercise the applicants had asked the Federal Court to undertake. As far as working notes were concerned, “Working notes are properly classified as evidence of an officer’s thought process, but thoughts are only marginally relevant, if relevant at all, to the assessment of reasons, which is the heart of judicial review” (at para. 58). Similarly, the underlying spreadsheet was not relevant as it is merely a “conglomerative” tool that collects information from other sources (at para. 65):
These spreadsheets organize working notes, risk indicators, and other information in an application (Daponte affidavit at paras 28-31, see also Exhibit B). I find no evidence that they independently generate information such as statistics, as alleged by the Applicants. The evidence establishes that all the information organized in Chinook spreadsheets is contained in the GCMS, other than working notes (Daponte affidavit at paras 12, 54; Transcript of Cross-examination at pages 58–61). Mr. Tao stated that “I do not know if historical trend information is still presented to the final Decision-Maker in Chinook as of November 12, 2023” (Tao affidavit at page 21). Therefore, no concern arises regarding this omitted information. Chinook’s spreadsheets simply organize application information that appears in the GCMS, along with risk indicators. Their production is not required (at para. 66).
The claim in respect of the risk indicators was more persuasive but, in the end, unconvincing:
The Applicants did present evidence raising legitimate concerns regarding the potential for the ITAT to mine data, generally speaking (see e.g., Tao affidavit at pages 53–55, 819, 827). If the ITAT generates problematic risk indicator categories, which are then imported into Chinook, concerns regarding bias or improper considerations could arise. However, the Applicants’ evidence also indicated that there is currently a firewall and anti-bias process between the information spotted by the ITAT and risk indicator categories appearing in Chinook (Tao affidavit at pages 691, 694–696).
In the present case, the Officer testified under cross-examination that she did not know what the risk indicators in Chinook were (Transcript of Cross-examination at pages 61–62, 65). The GCMS notes in this matter confirm that no risk indicator was flagged in the Applicants’ application. Therefore, in this matter, the evidence establishes that risk indicators played no part in the Officer’s decision-making. They were not external information that should have been included in the CTR (at paras. 63-64).
Battista J did warn that systematic deletion of material generated by Chinook is not best practice and could get the government into trouble in a future case (at para. 70). Ultimately, though, the central thrust of his judgment is that unless there is some indication on the face of a decision-maker’s reasons that artificial intelligence (or some less sophisticated type of automation, such as Chinook) was (a) used and (b) used in a way liable to call the reasonableness or procedural fairness of the decision into question, the decision-maker is not required to provide further details to the court. I am not aware of any such case having made it to the Federal Court or any other Canadian court. This is consistent with the general rule of deliberative secrecy, pursuant to which a decision-maker’s internal deliberations are shielded from court oversight unless there is some reason to think they were the product of unlawfulness.
In turn, this strongly suggests that (as I have intimated before) judicial review is not the optimal forum for gaining information about internal government uses of technology in decision-making processes. This decision is a signal that advocacy efforts by those sceptical of the use of technology should be redirected elsewhere, such as to access-to-information requests and statutory reform.
This content has been updated on July 3, 2025 at 00:52.