Article body

Introduction

According to one recent study of the use of technology in public administration in Canada, the bots are at the gate (Molnar & Gill, 2018), conjuring up images of a horde of cyborg-barbarians preparing to wreak havoc on the everyday work of government officials in distributing benefits, determining statuses, revoking licenses and much else besides. However, the Canadian government has long believed that the use of technology in public administration is destined to improve life in Canada by allowing cutting-edge thinkers on public policy to analyze vast quantities of data and harness the exponential growth in computing power to serve the public more effectively and more efficiently. In a fascinating recent contribution, Lepage-Richer and McKelvey highlight how two Canadian Prime Ministers—namely, Pierre Elliott Trudeau and, decades later, his son Justin—sought to embrace technology, believing it could be harnessed to further the common good (Lepage-Richer & McKelvey, 2022; Digital Disruption White Paper Series, 2018, p. 3).

On one view, technological advances are pushing Canadians toward a dystopic state, with friendly bureaucrats being replaced by impassive machines. On another, embracing technology will allow us to move confidently toward a utopian low-cost, high-impact decision-making process (Boyd & Crawford, 2012, p. 663). Of course, this is an oversimplification of a vast literature, but the existence of two diametrically opposed poles nonetheless provides a helpful frame for the discussion in this research note.

I will suggest in this research note that the truth—for the moment at least—lies somewhere between the extremes of dystopia and utopia. In the federal public administration, technology is being deployed in a variety of areas, but rarely, if ever, displacing human decision-making. Indeed, technology tends to be leveraged in areas of public policy that don’t involve determining benefits, statuses, licenses and the like. We are still a long way from sophisticated machine learning tools deciding whether marriages are genuine, whether taxpayers are compliant or whether nuclear facilities are safe. The reality is more down to earth.

My goals are modest: I intend only to reveal what is currently being done in this space, based on publicly available information. I will offer some thoughts about whether the current uses are justified, primarily as food for thought rather than as fully formed conclusions. Moreover, the map is partial: it will need to be completed over time. I have based the map on publicly available information about algorithmic impact assessments and web searches of federal government departments. Despite the inherent limitations of such a study, it should inform debates about the use of technology in governmental settings in Canada. Because competing views—utopia and dystopia—are so strongly expressed in the literature, it’s helpful to see the full picture in order to navigate a way forward. In addition, as has been observed, there is a “need to conduct more domain-specific studies, specific to certain areas or countries and at specific government levels in relation to AI” (Zuiderwijk, Chen & Salem, 2021, p. 15). This research note aims to observe the use of AI at the federal government level in Canada, specifically.

1. Methodology

This paper focuses on the general notion of automation and particularly on how computing power and data sets are leveraged to deploy complex algorithms and machine learning to assist in or displace human decision-making. My goal is to be as inclusive as possible to draw as good a map as I can based on available data. Accordingly, I’ll use the UNESCO definition of AI systems, i.e. “information-processing technologies that embody models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in real and virtual environments […] designed to operate with some degree of autonomy by means of knowledge modelling and representation and by exploiting data and calculating correlations” (UNESCO, 2020). This broad definition captures the use cases described below.

In addition, I have focused on use cases where the Government of Canada is interacting with Canadian citizens or other individuals who are seeking benefits, information or statuses. This is not to downplay the potential use (or abuse) of AI internally within the Government of Canada. However, public-facing usage is much easier to identify (in part because the notion of impact on individuals is central in the Treasury Board directive described below) and therefore provides a useful starting point for analysis.

There are two separate data sources for this paper.

The first is the publicly available list of algorithmic impact assessments performed under the Treasury Board’s Directive on Automated Decision-Making (DADM). The DADM (Treasury Board of Canada Secretariat, 2023) is the federal government’s strategy for regulating AI and algorithms, while the Algorithmic Impact Assessment tool (AIA) (Government of Canada, 2021g) serves a complementary function in implementing the DADM. The AIA tool is a questionnaire that seeks to determine an automated decision system’s impact and to assess the acceptability of AI solutions from an ethical and human perspective, based on factors such as the complexity of the system’s design, algorithm, decision type, impact, and data (Brandusescu, 2021, p. 22). The tool determines the impact level—from low to high—of an automated decision system, specifically by measuring how the system affects the rights of individuals or communities, the health or well-being of individuals or communities, the economic interests of individuals, entities, or communities, and the ongoing sustainability of an ecosystem (Treasury Board of Canada Secretariat, 2021, Appendix B). Once the impact level is determined, specific requirements apply under the DADM that reflect the significance of the decision.
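To make the tool’s general logic concrete, the following Python sketch shows, in a purely schematic way, how questionnaire answers might be mapped to an impact level. The question names, weights and thresholds are invented for illustration and are not the Treasury Board’s published scoring scheme.

```python
# Illustrative sketch only: the AIA questionnaire and its scoring are more
# detailed than this; the fields, weights and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class AIAResponse:
    affects_rights: int              # 0 (none) .. 3 (severe)
    affects_wellbeing: int
    affects_economic_interests: int
    affects_ecosystem: int
    decision_reversible: bool

def impact_level(r: AIAResponse) -> str:
    """Map questionnaire answers to an impact level from I (low) to IV (high)."""
    score = (r.affects_rights + r.affects_wellbeing
             + r.affects_economic_interests + r.affects_ecosystem)
    if not r.decision_reversible:
        score += 2  # irreversible effects weigh more heavily
    if score <= 2:
        return "Level I (little to no impact)"
    if score <= 5:
        return "Level II (moderate impact)"
    if score <= 8:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

print(impact_level(AIAResponse(1, 0, 1, 0, decision_reversible=True)))
```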

The second source consists of web searches conducted in 2021 on websites of Canadian federal government departments, either directly or via Google. Evidently, the range of governmental institutions covered could be broadened. For the purposes of mapping an emerging landscape, however, the government departments provide ample topography, as technology can be helpful in their policy-making and policy-implementation functions. A focus on a homogenous group of governmental entities such as federal government departments allows me to produce a relatively accurate map, albeit of limited terrain.

2. Use cases

2.1 Enhancing the accessibility of public-facing resources

In several instances, departments have gathered data about how online resources are used in order to make those resources easier to understand.

The ATIP Online Request Service AIA (Government of Canada, 2020a) relates to a “Simple central website for Canadians to submit [access to information] requests” (Government of Canada, 2021d, p. 1). The service “offers Canadians the ability to submit access to information and personal information requests, and to have those requests automatically redistributed to a Responding Institution among the 240-plus Government of Canada institutions subject to [Part 1 of] the Access to Information Act and to the Privacy Act” (Treasury Board of Canada Secretariat, 2019a, p. 4). As the AIA response makes clear, this is a low-impact use of technology: it involves the implementation of an automated system, which works based on user-inputted data, to forward requests to the appropriate respondent. Crucially, the system itself “does not prevent requester from exercising their right to information” (Government of Canada, 2021d, p. 4).

Most of the use cases under this heading come from Canadian Heritage. For example, the Canada Travelling Exhibitions Indemnification Program tested out an application form with questions focused on adherence to accepted museum principles, practices, and standards rather than on specific details (Canadian Heritage, 2022). The purpose of this initiative is to assist claimants in completing forms quickly while allowing the Program to analyze requests more easily. The Canadian Conservation Institute also tested experimental systems to explore potential ways in which artificial intelligence systems can be used to respond to enquiries on heritage conservation more efficiently. Similarly, Canadian Heritage collaborated with the Translation Bureau in a pilot project exploring personalized linguistic services supported by artificial intelligence (Committee of Assistant Deputy Ministers on Official Languages, 2022). Canadian Heritage continues to provide these AI-assisted services within the department. In addition, the department ran a pilot program to leverage artificial intelligence to monitor official language use by funding recipients (Canadian Heritage, 2019). The pilot project applied AI to learn how to assess all clients’ digital communications and to provide real-time results. The software analyzed websites and social media feeds to determine whether communication was provided in both official languages.

2.2 Using information to create and enhance models of natural and human activity

Modern governments rely to a great extent on models of natural and human activity to determine where to deploy resources, how to develop policy or how to improve the performance of internal systems. For example, Agriculture and Agri-Food (2022) has developed the ISO 19131 Annual Crop Inventory—Data Product Specifications, which involves an operational software system for mapping crop types using satellite observations. The Inventory examines data from multiple sources to create a national digital crop inventory. This data is derived from optical and radar images over a single growing season, in conjunction with ground data. The Inventory then processes all this data through a Decision Tree (DT) algorithm, which maps crop output using image-based segmentation prior to calculating a final, accurate assessment. The DT algorithm uses known crop types in certain locations on the ground to spectrally differentiate each of the crop types being mapped. These relationships are then applied to satellite image data to identify the most likely crop type in each field in the study area.
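As a minimal sketch of this classification step, assuming scikit-learn and synthetic data rather than the Inventory’s actual multi-temporal imagery and training pipeline, a decision tree can be trained on image segments with known crop types and then applied to unlabelled segments:

```python
# Hedged sketch only: features, crop classes and model settings are invented,
# not those of the Annual Crop Inventory.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: spectral/radar features per image segment for
# fields whose crop type is known from ground observations.
X_train = rng.normal(size=(500, 6))     # e.g. optical bands plus radar backscatter
y_train = rng.integers(0, 3, size=500)  # 0 = canola, 1 = wheat, 2 = soybean (illustrative)

clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_train, y_train)

# Apply the learned spectral rules to segments without ground truth in order to
# map the most likely crop type in each field of the study area.
X_unlabelled = rng.normal(size=(4, 6))
print(clf.predict(X_unlabelled))
```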

Within the Employment Insurance system, the Record of Employment Comments (ROEC) plays an important role. Employers use this partially automated form (Government of Canada, 2021b) to record interruptions of earnings when employees stop working. Periods without work affect eligibility for Employment Insurance. The AIA response reveals that the more sophisticated ROEC system “will interpret and assess free text comments captured by employers when records of employment (ROE) are issued” and, based on simple rules, “the AI will assess and predict simple actions (i.e. save or ignore comments, predict a different Reason for Separation [RFS])” (Government of Canada, 2021e, p. 1). The AIA response describes a limited, pilot-style program in the first instance, as the model will initially “only assess comments related to simple decisions, which are based on current procedures used by agents” and require only “minimal” judgment or discretion (Government of Canada, 2021e, p. 4). As such, the likely impact is low and entirely reversible.
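The department’s actual model and training data are not public. As a purely hypothetical illustration of how free-text ROE comments might be mapped to the simple actions described in the AIA response, a basic text classifier could look like this (labels and example comments are invented):

```python
# Illustrative sketch, assuming scikit-learn; not the ROEC system's actual model.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

comments = [
    "employee on seasonal layoff, expected to return in spring",
    "no comment",
    "left to return to school full time",
    "shortage of work, recall date unknown",
]
# Hypothetical actions an agent might otherwise take on each comment.
actions = ["ignore_comment", "ignore_comment", "change_rfs", "ignore_comment"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(comments, actions)

print(model.predict(["laid off due to lack of work"]))
```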

2.3 Performance assessment

There are some examples of government departments using technology to monitor job performance. For example, the Canadian Forces Health Information System (CFHIS) is a Canadian Forces-wide electronic medical information database designed to manage health information in support of efficient decision-making and enhanced operational effectiveness (Ombudsman for the Department of National Defence [DND] and the Canadian Forces [CF], n.d.). The system is intended to deliver integrated, automated health information to every serving member of the Regular and Reserve Forces.

In the aviation sector, automated fatigue audit systems use biomathematical modelling algorithms to predict how much sleep an employee is likely to get in a given schedule. In collaboration with a private sector partner, Transport Canada developed the Fatigue Risk Management System (FRMS) Toolbox for Canadian Aviation (Transport Canada, 2013). The underlying software is able to calculate a fatigue likelihood score for each employee at any given point in a work schedule. The algorithm considers factors such as shift time and length, previous work schedules, and break times to produce fatigue likelihood scores for each shift. The algorithm then estimates fatigue-related risk for groups of workers in a particular schedule, allowing aviation companies to deploy their resources accordingly. In rail, for similar reasons, scheduling algorithms are used to mitigate the risks associated with fatigue of railway workers (Transport Canada, 2018).
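The FRMS Toolbox’s underlying biomathematical model is not reproduced here; the toy sketch below merely illustrates the general idea of a schedule-based fatigue score that rises with long shifts, night starts and short rest periods. The weights and thresholds are invented:

```python
# Toy illustration only; not the FRMS Toolbox's biomathematical model.
from datetime import datetime
from typing import List, Tuple

Shift = Tuple[datetime, datetime]  # (start, end)

def fatigue_score(shifts: List[Shift]) -> float:
    """Rough fatigue likelihood score for the last shift in a schedule."""
    start, end = shifts[-1]
    hours = (end - start).total_seconds() / 3600
    score = max(0.0, hours - 8) * 1.5          # long shifts add fatigue
    if start.hour >= 22 or start.hour < 6:
        score += 3.0                            # night starts add fatigue
    if len(shifts) > 1:
        rest = (start - shifts[-2][1]).total_seconds() / 3600
        score += max(0.0, 12 - rest) * 0.5      # short rest between shifts adds fatigue
    return score

schedule = [
    (datetime(2024, 3, 1, 8), datetime(2024, 3, 1, 18)),   # day shift
    (datetime(2024, 3, 1, 23), datetime(2024, 3, 2, 7)),   # following night shift
]
print(round(fatigue_score(schedule), 1))
```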

2.4 Enforcement resources

Governments regularly have to make difficult choices regarding the distribution of enforcement resources. Technology has been used to target scarce investigative and intelligence resources to identify prohibited behaviours. Of course, the right to appeal or seek review in any investigations or enforcement proceedings is protected.

The ArriveCAN application is arguably an example, targeting border resources to those whose vaccination status cannot easily be verified. The ArriveCAN application was implemented during the COVID-19 pandemic. The goal was to verify the vaccination status of individuals travelling to Canada to see whether they were authorized to enter the country and whether any quarantine requirements might apply. In the AIA response (Government of Canada, 2021a), the application scored low on impact because it was not designed to make decisions as such, but to assist border officers in making decisions about eligibility to enter Canada based on vaccination status: a positive result would confirm eligibility, while a negative result wouldn’t necessarily lead to exclusion but, rather, to a simple manual verification. Negative effects from the use of the application would be brief and reversible, as any errors could be corrected by an officer’s manual review. Of course, at various points in time during the pandemic, the application would generate a negative result for unvaccinated travellers. The practical effect could be to prevent the applicant from boarding a plane or train to enter Canada. However, any automated decisions about eligibility to enter or quarantine requirements arose from the underlying legal framework, not the application. Hence the low, reversible impact of the application itself. Similarly, the Integrity Risk Management Branch of Immigration, Refugees and Citizenship Canada has developed an integrity trends analysis tool to detect fraud patterns in applications for immigration status: this analysis informs investigations by Risk Assessment Units on the validity of documents (a task previously done manually), but does not affect the processing of applications once fraud checks have been completed (Government of Canada, 2022e).

Technology has also been used upstream to target scarce enforcement resources: on the internet, with respect to child sexual exploitation, through Project Arachnid (2022), a tool designed to combat the proliferation of child sexual abuse material; and at the border, where facial recognition has been deployed. Law enforcement has used traditional facial recognition tools for decades. Technological advances in areas such as biometrics, machine learning, and AI have led to the development of more advanced and sophisticated facial recognition tools. These tools can dramatically reduce the amount of time that investigators spend reviewing potential matches (Public Safety Canada, 2020c). But they have been extremely controversial, due to concerns about bias embedded in the tools (Buolamwini & Gebru, 2018) and violations of privacy (Office of the Privacy Commissioner of Canada, 2020a). Facial recognition is an automated biometric system for identification that employs a one-to-many search in a database of images to try to identify an individual. The automated system compares the submitted image against a biometric database containing images of “known” faces previously enrolled in the system. There’s a risk of system bias, because “software trained predominantly on the faces of white and lighter-skinned people may be less capable of accurately identifying individuals with darker skin tones” (Molnar & Gill, 2018, p. 9). In litigation, it has emerged that the Canada Border Services Agency had apparently resorted to Clearview AI’s facial recognition technology despite the risks of misidentification, which drew criticism from the Federal Court (Barre v. Canada, 2022, para. 56). Indeed, Clearview AI no longer offers facial recognition services in Canada (Office of the Privacy Commissioner of Canada, 2020b).
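Schematically, a one-to-many search compares a probe image’s embedding against a gallery of enrolled embeddings and returns candidate matches above a similarity threshold, which a trained human reviewer can then assess. The sketch below uses random stand-in vectors rather than embeddings from a real face-recognition model, and the threshold is arbitrary:

```python
# Schematic one-to-many biometric search; all data here are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 128))                     # 1,000 enrolled "known" faces
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit-normalize embeddings

def search(probe, threshold=0.6, top_k=5):
    """Return (gallery index, similarity) pairs above the threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe                   # cosine similarity against every enrolled face
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), round(float(scores[i]), 3)) for i in order if scores[i] >= threshold]

probe = gallery[42] + 0.05 * rng.normal(size=128)  # noisy capture of an enrolled face
print(search(probe))                               # candidate matches go to a human reviewer
```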

Consider, however, a positive story from Immigration, Refugees and Citizenship Canada about the use of Gender-Based Analysis Plus (GBA+) for facial recognition. The GBA+ methodology involves an appreciation of gender, diversity, and intersectionality. One of IRCC’s highlights of 2021–22 was the Facial Recognition Solution (FRS). IRCC uses photos provided by travel document applicants to conduct facial biometric comparisons using its FRS. The FRS helps screen and validate the applicant’s identity as part of the Passport Program’s identity management framework. To mitigate risks stemming from algorithmic bias, IRCC ensures a human operator is available to review the system’s findings. Only designated employees of IRCC—those formally trained to conduct facial comparison analysis—can determine whether a potential match from the FRS consists of two identities bearing the same photo (Immigration, Refugees and Citizenship Canada, 2021).

2.5 Advising on eligibility

In several areas, government departments have used user-generated information to recommend statuses, benefits or privileges for which an individual might be eligible. These recommendations don’t confirm eligibility, but, rather, indicate to the individual user what they might apply for.

Immigration, Refugees and Citizenship Canada (IRCC) has used technology to enhance its first point of contact with users, where individuals seeking IRCC’s services can acquire relevant information about immigrating to Canada (Immigration, Refugees and Citizenship Canada [IRCC], 2020b). “Quaid” is an artificial intelligence-driven chatbot (IRCC, 2020a), responding to online enquiries through IRCC’s official Facebook Messenger account. Quaid was trained using actual client questions and is continually improved based on client needs, which are determined through question data. Since its launch in 2018, Quaid has provided over 70,000 automated responses to clients. Where Quaid is unable to provide an effective response to a question on social media, it will direct the individual to IRCC’s online form for case-specific questions. Where it is unable to provide responses on IRCC’s online web chat, Quaid will provide a series of questions to help determine the most appropriate service for the requestor and direct them accordingly (IRCC, 2022b).

2.6 Triaging applications

Once an application for a status, benefit or privilege has been received, and before any determination is made on it, the application may be triaged. Technology can be used to sort applications for a status, benefit or privilege into different categories, depending on whether the applications are straightforward or complex. For example, IRCC’s Express Entry program uses automation to identify candidates with appropriate credentials and to sort less complex permanent residence applications. Express Entry establishes a two-step application process. In step one, foreign nationals seeking to obtain permanent resident status under economic programs must submit an online expression of interest to come to Canada. The Express Entry system includes a stand-alone pre-application stage where eligible candidates are entered into a pool. Eligibility is based on self-declared information. These candidates are scored and ranked against others in the pool. In step two, the system provides an Invitation to Apply (ITA) for permanent residence to the highest-ranked candidates. Once the ITA is issued, the application is addressed through IRCC’s normal decision-making processes for permanent residency applications. Express Entry therefore serves as a triaging tool and doesn’t advise on or determine eligibility for status.
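Schematically, the pool-and-rank step can be illustrated as follows. The scores and cutoff are placeholders rather than the actual points grid used by Express Entry:

```python
# Illustrative pool-and-rank sketch; candidate scores and the cutoff are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: int  # points derived from self-declared profile information

pool = [Candidate("A", 478), Candidate("B", 512), Candidate("C", 455), Candidate("D", 501)]

def issue_invitations(pool, cutoff):
    """Step two: invite the highest-ranked candidates at or above the cutoff.
    Invited candidates' applications then go through the normal decision process."""
    ranked = sorted(pool, key=lambda c: c.score, reverse=True)
    return [c.name for c in ranked if c.score >= cutoff]

print(issue_invitations(pool, cutoff=500))  # ['B', 'D']
```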

2.7 Eligibility decisions

Many government decisions involve determining whether an applicant qualifies for a status, benefit or privilege. Technology has been deployed to make or assist in making these determinations.

These determinations can be discretionary, as with Innovation, Science and Economic Development’s use of an algorithm when licensing spectrum use. The bid processing algorithm described in “Consultation on a Policy and Licensing Framework for Spectrum in the 3500 MHz Band” maintains a queue of all bids from the round that have not been applied in their entirety (ISED, 2019). The highest priority bid that hasn’t yet been considered is processed. The algorithm then checks to what extent the bid can be applied using the most recently determined processed demands. In another instance, a software algorithm will be used to determine the set of assignment prices that meet the conditions outlined in the report (ISED, 2020).
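The consultation documents describe the bid-processing algorithm only at a high level. As a rough sketch, a priority-ordered queue can be processed so that each bid is applied only to the extent that remaining supply allows; the priorities, products and quantities below are invented:

```python
# Rough sketch of queue-based bid processing; not ISED's actual algorithm.
import heapq

supply = {"block_A": 3, "block_B": 2}  # hypothetical remaining licence supply

# (priority, bidder, product, quantity requested); a lower number means higher priority.
bid_queue = [
    (1, "Bidder1", "block_A", 2),
    (2, "Bidder2", "block_A", 2),
    (3, "Bidder3", "block_B", 1),
]
heapq.heapify(bid_queue)

while bid_queue:
    priority, bidder, product, qty = heapq.heappop(bid_queue)  # highest-priority unprocessed bid
    granted = min(qty, supply[product])                        # apply the bid to the extent possible
    supply[product] -= granted
    print(f"{bidder}: {granted}/{qty} units of {product} applied")
```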

By far the most sophisticated use of automation in the Government of Canada—about which information is available thanks to several algorithmic impact assessments and a paper published by those involved in creating the systems—is at Immigration, Refugees and Citizenship Canada (IRCC).

IRCC first developed a sophisticated process for temporary residence visas (TRVs) (McEvenue & Mann, 2019; Government of Canada, 2022a). Note that an applicant for a TRV must be both eligible and admissible. Eligibility depends on factors such as whether the applicant is likely to leave Canada on or before the expiry of the TRV; admissibility relates to issues such as the applicant’s criminal record. The process, which has been described as a triaging process, applies only to eligibility.

The system triages incoming applications for TRVs. At an initial stage, the system excludes from automated processing any applications that trigger a key complexity indicator (such as travelling with a minor) and sorts the remaining applications into three “bins”: low complexity applications, which are automatically approved, and medium and higher complexity applications, where officers will determine whether the applicant is eligible for a TRV. The system can’t generate any refusal automatically, only approvals. Figure 1, borrowed from McEvenue & Mann (2019), illustrates the process.

Figure 1

Temporary Residence Visas’ process

Source: McEvenue & Mann (2019)
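As a minimal illustration of the binning logic shown in Figure 1, the triage step might look something like the sketch below. The indicators, model score and thresholds are invented, since the model’s actual rules are not public:

```python
# Hypothetical triage sketch; indicators, scores and thresholds are invented.
def triage(application: dict) -> str:
    # Key complexity indicators remove a file from automated processing entirely.
    if application.get("travelling_with_minor") or application.get("prior_refusal"):
        return "officer review (complexity indicator triggered)"
    score = application.get("model_complexity_score", 1.0)  # hypothetical model output
    if score < 0.3:
        return "bin 1: eligibility approved automatically (officer still decides admissibility)"
    if score < 0.7:
        return "bin 2: medium complexity, officer decides eligibility"
    return "bin 3: higher complexity, officer decides eligibility"

print(triage({"travelling_with_minor": False, "model_complexity_score": 0.2}))
```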

All files are eventually reviewed by an officer. As the AIA response puts it: “Even in cases where the system approves the eligibility, officers continue to make the admissibility determination and the final decision on each application. As a result, there is an officer review of all applications”. As a quality assurance measure during the pilot program, officers were fed a random sample of about 10 % of the automatic approvals, without prior knowledge of the system’s complexity classification. It’s not clear whether this feature has been rolled out for TRVs generally.
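The quality-assurance measure described in the AIA response can be sketched as a blind random sample of automated approvals routed into officers’ ordinary workload; the sampling rate below mirrors the roughly 10% figure, but everything else is illustrative:

```python
# Illustrative blind quality-assurance sampling of automated approvals.
import random

def blind_qa_sample(auto_approved_ids, rate=0.10, seed=None):
    """Select roughly `rate` of automatically approved files for officer review,
    without flagging that the system has already classified them as low complexity."""
    rng = random.Random(seed)
    return [app_id for app_id in auto_approved_ids if rng.random() < rate]

print(blind_qa_sample(range(1, 101), seed=42))
```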

Lastly, it’s worth noting that IRCC emphasized the relatively low impact of the TRV system in its AIA responses: “Visas are temporary and do not entitle the holder to work, study or immigrate to Canada. Impacts may affect travel plans and the ability of clients to personally attend meetings or events in Canada, but this impact is temporary as clients whose visa application is refused can reapply at any time” (Government of Canada, 2022b, p. 4). In other words, the TRV system does not involve life-and-death decision-making. Similarly, an automated system was put in place to expedite the processing of visas and work permits for those fleeing the conflict in Ukraine (Government of Canada, 2023).

Moving beyond TRVs, IRCC has automated decision-making in the area of permanent residence. Canadian citizens who are married or in a common-law relationship can sponsor their partner for permanent residence status in Canada. IRCC has developed a system for automatically approving sponsorship applications based on models gleaned from past positive determinations (Government of Canada, 2021c). This model has now been extended to private sponsorship of refugee applications (Government of Canada, 2022f).

The spouse/common-law partner system is similar to the TRV system. Refusals are determined by human decision makers only, and positive eligibility determinations can be made by the system without human intervention. Interestingly, the AIA response explicitly states that the rules underpinning the analytics aren’t to be shared with the officers: “The impact of the triage performed by the model on decision-making officers is limited because officers will not be aware of the rules used by the model for its triage or automated positive eligibility determinations, nor will they receive any information about the analysis that was performed by the model” (Government of Canada, 2021f, p. 4). The underlying concern here is presumably that if officers know what triggers an approval, they will be more likely to refuse an application that does not contain these triggers.

The uses by IRCC are the most sophisticated of those revealed by the available AIA responses. These uses also involve sensitive areas of decision-making, where there is significant potential for bias. Sponsorship applications are a good example. There are conventional marital relationships, running from courtship to engagement to a wedding ceremony to subsequent cohabitation and maybe to child rearing. These might be thought of as “easy” cases as far as sponsorship is concerned, because there will rarely be any meaningful suggestion that the relationship was not genuine. But such cases are only “easy” because of prevailing social norms about conventional marital relationships. In this sense, a system based on past decisions is likely to be biased towards conventional marital relationships and hostile to relationships which do not fit prevailing norms. Of course, individual officers making decisions aren’t free from such biases themselves. And one can legitimately ask whether the efficiency gains generated by automating approvals of (one assumes) conventional marital relationships outweigh any harm from entrenching the bias in the system.

More serious questions arise about the consequences of deploying AI in decision-making structures. Will the structures, intentionally or unintentionally, favour refusals in some cases? In general, the systems seem to have been designed to prevent decision makers from learning how and on what basis an application has been classified by the automated system. As such, officers can’t form any bias based on the automated treatment of the application. Undoubtedly, officers may, over time, come to recognize the features of low complexity cases and, conceivably, pay greater attention to medium or high complexity cases. However, the ability to distinguish between different degrees of complexity is one that officers can develop over time in any event, based on their own experience and acquired expertise.

The spouse/common-law partner system is delicately poised: if officers are now receiving “non-straightforward” cases for decision, they can apply a higher level of scrutiny to those cases than they did before, with their biases favouring close analysis at the least, and potentially even a refusal. It’s not clear from the AIA whether officers are also provided with a blind, random sample of positive determinations as part of their ordinary workload. There is assurance that ongoing monitoring and quality assurance will be performed to avoid bias, but no details about what this might involve specifically. At a minimum, it would be appropriate to distribute random positive decisions to officers, withholding the knowledge that the system has already provided a positive determination, in order to prevent the development of any biases.

Interestingly, in the most recent discussion about IRCC’s automated decision-making in the context of an algorithmic impact assessment, the department has confirmed the use of additional measures along these lines:

Measures are also in place to mitigate against the potential risk that the triage function could influence officer decision-making. There is deliberate separation of officers from the system: officers are not aware of the rules used by the system, nor do they receive information about the analysis performed by the system. This separation mitigates the risk that officers could be unduly influenced by the system’s outputs (also known as “automation bias”). Additionally, an ongoing quality assurance process has been implemented to monitor whether officers make the same positive eligibility determinations as the system. This process ensures that biases have not been introduced by the system.

Government of Canada, 2022g

These measures are designed to respond to precisely the concerns identified above.

It should be noted that IRCC also uses additional tools, about which less information is in the public domain. For example, the Chinook system is used in the processing of applications for temporary residence visas (TRVs): it has been the subject of popular criticism (Nash, 2022), parliamentary scrutiny (IRCC, 2022) and judicial review.[2]

Discussion

In the Government of Canada, the notion of “impact” is central to the Directive on Automated Decision-Making, which is also the main accountability mechanism for the use of AI systems. Impact is defined as the effect on the rights of individuals or communities; the health or well-being of individuals or communities; the economic interests of individuals, entities, or communities; and the ongoing sustainability of an ecosystem. In critically analyzing the use cases described in this part, I will refer to “impact.” This is also consistent with my framing of this research note in terms of a distinction between utopian and dystopian futures for public administration. The distinction turns on the relationship between the citizen and the state and prompts us to ask whether the impact of AI systems is appropriate.

The first three categories of use cases described above involve uses of AI systems that have generally been considered beneficial (Coglianese & Lehr, 2017; Valle-Cruz et al., 2019). Leveraging technology to create models of natural and human behaviour doesn’t interfere with anyone’s rights or interests and, all things considered, is apt to create a more effective government. Even in terms of performance assessments, technology has been used to make group-level assessments, for instance in relation to work schedules, rather than individualized decisions. It is true, of course, that a world viewed through an AI lens may look quite different from a world viewed through a human lens (Pasquale & Cashwell, 2018), but in terms of impact and the risk of a dystopian future, these use cases pose little to no threat.

With the remaining use cases, caution is required.

Although decisions about enforcement resources engage the state-citizen relationship to some degree, they’re nonetheless upstream decisions, in the sense that the ultimate determination of an individual’s rights and interests will be made by a subsequent decision maker. Before the subsequent decision maker comes to a conclusion, the individual concerned will have the right to participate in an investigative and adjudicative process of some sort. As such, the use of technology in the upstream allocation of scarce resources does not directly impact individuals who are subject to enforcement. It’s difficult to quibble with Project Arachnid, for example. That said, as the facial recognition discussion shows, if the use of technology has the effect of focusing attention on a particular group, then technology imposes costs on an identifiable group: even if there is downstream human intervention and no ultimate effect on rights and interests, individual members of the group pay a price as they are disproportionately subject to particular procedures.[3] Accordingly, there’s a greater need for safeguards in this space.

Moving down the list of use cases, we encounter uses that more directly concern the relationship between the citizen and the state and are more impactful. The first is advising on eligibility. As with the discussion of enforcement resources above, advising on eligibility does not generally raise concerns about the state-citizen relationship. In these use cases, eligibility determinations are ultimately made by human decision makers, on the basis of applicable legal standards. The upstream advice does not have an impact on the final decision. However, if the upstream advice is inaccurate, and dissuades an individual from seeking a status, benefit or privilege, this is problematic, especially if the burden of inaccurate advice falls more heavily on a particular group.

Evidently, the making of determinations is potentially the most far-reaching use of technology in the Government of Canada. Where determinations are based on simple rules, there can be little cause for concern: the rules will be by definition knowable and subject to revision as appropriate. Even where determinations are discretionary, it may be helpful to use technology to bring a greater degree of predictability to the decision-making process; algorithmic auctions for spectrum space are a good example. As we have seen, the middle space between determinations based on rules and discretionary decisions—determinations involving judgment—requires the highest level of care. And even where only positive decisions are automated, the use of AI systems can have an impact on the treatment of other decisions, perhaps subjecting them to a greater level of scrutiny in a way that perpetuates or reinforces existing social biases. Here, safeguards are certainly needed to avoid some citizens finding themselves locked in a dystopic ghetto because AI systems are riddled with prejudice.

Conclusion

In this paper I have mapped out, on the basis of publicly available information through algorithmic impact assessment responses and web searches, the uses of algorithms and machine learning in the Government of Canada.

As explained in Part I, my focus was narrow—federal government departments specifically—but nonetheless allowed me to develop a clear picture of current uses of AI systems. I identified seven different use cases:

  • Enhancing the accessibility of public-facing resources;

  • Using information to create and enhance models of natural and human activity;

  • Assessing performance;

  • Managing enforcement resources;

  • Advising on eligibility;

  • Triaging applications;

  • Assisting with or making eligibility decisions.

With a clear picture in view, it was possible to begin engaging in some critical reflection on the appropriateness of the current uses. Some are of relatively low impact and don’t portend any sort of a dystopian future. However, more sophisticated AI systems, such as those used by Immigration, Refugees and Citizenship Canada, prompt critical reflection on their design and on ensuring good decision-making.

Overall, the uses described here don’t support the proposition that “bot barbarians” are about to storm the gates of the Government of Canada. We are a long way from the nightmarish scenario of machines making life-or-death decisions. Equally, these uses hardly suggest that machines are going to carry Canadians to a utopia of quick, easy and accurate decision-making: most of the uses are so far upstream from decision-making that the benefits promised by some of the more bullish technology boosters are some way off in the distance.