Abstracts
Abstract
On the one hand, technological advances and their enthusiastic uptake by government entities are seen as a push toward a Canadian dystopic state, with friendly bureaucrats being replaced by impassive machines. On the other hand, embracing technology is considered a confident move of the Canadian administrative state toward a utopian low-cost, high-impact decision-making process. I will suggest in this paper that the truth—for the moment, at least—lies somewhere between the extremes of dystopia and utopia. In the federal public administration, technology is being deployed in a variety of areas, but rarely, if ever, displacing human decision-making. Indeed, technology tends to be leveraged in areas of public policy that don’t involve any settling of benefits, statuses, licenses, and so on. We are still a long way from sophisticated machine learning tools deciding whether marriages are genuine, whether taxpayers are compliant or whether nuclear facilities are safe. The reality is more down to earth. In this paper, I map out the uses of algorithms and machine learning in the federal public administration in Canada. I will briefly explain my methodology in Part I; in Part II, I identify seven different use cases, which I describe with the aid of representative examples, and offer some critical reflections.
Keywords:
- automated decision-making,
- public administration,
- artificial intelligence,
- machine learning
Résumé
D’un côté, les progrès technologiques et leur adoption enthousiaste par les entités gouvernementales poussent les Canadiens vers un état dystopique, avec des fonctionnaires sympathiques remplacés par des machines. De l’autre, l’adoption de la technologie permet à l’État administratif canadien de se lancer avec confiance dans une utopie où la prise de décisions est exécutée à coût modique et engendre un fort impact. Je suggérerai dans cet article que la vérité actuelle se situe quelque part entre les extrêmes de la dystopie et de l’utopie. Dans l’administration publique fédérale, la technologie est déployée dans une variété de domaines, mais rarement, voire jamais, dans le but de remplacer la prise de décision humaine. En effet, la tendance est de s’appuyer sur la technologie pour améliorer les domaines de la politique publique sans pour autant décider des avantages, du statut, des licences, etc. Nous sommes loin jusqu’à présent d’outils d’apprentissage automatique sophistiqués qui passent au crible l’authenticité des mariages, les potentielles fraudes fiscales et la sûreté des installations nucléaires dangereuses. La réalité est plus terre à terre. Dans cet article, j’identifierai les utilisations des algorithmes et de l’apprentissage automatique dans l’administration publique fédérale au Canada. Dans la partie I, j’explique brièvement ma méthodologie et dans la partie II, j’identifie sept cas d’utilisation différents, que je décris à l’aide d’exemples représentatifs avant d’offrir quelques réflexions critiques.
Mots-clés :
- prise de décision automatisée,
- administration publique,
- intelligence artificielle,
- apprentissage automatique
Article body
Introduction
According to one recent study of the use of technology in public administration in Canada, the bots are at the gate (Molnar & Gill, 2018), conjuring up images of a horde of cyborg-barbarians preparing to wreak havoc on the everyday work of government officials in distributing benefits, determining statuses, revoking licenses and much else besides. However, the Canadian government has long believed that the use of technology in public administration is destined to improve life in Canada by allowing cutting-edge thinkers on public policy to analyze vast quantities of data and tap the exponential growth in computing power to serve the public more effectively and more efficiently. In a fascinating recent contribution, Lepage-Richer and McKelvey highlight how two Canadian Prime Ministers—namely, Pierre Elliott Trudeau and, several generations later, his son Justin—sought to embrace technology, believing it could be harnessed to further the common good (Lepage-Richer & McKelvey, 2022; Digital Disruption White Paper Series, 2018, p. 3).
On the one hand, technological advances are pushing Canadians toward a dystopic state, with friendly bureaucrats being replaced by impassive machines. On the other hand, embracing technology will allow us to move confidently toward a utopian low-cost, high-impact decision-making process (Boyd & Crawford, 2012, p. 663). Of course, this is an oversimplification of a vast literature, but the existence of two diametrically opposed poles nonetheless provides a helpful frame for the discussion in this research note.
I will suggest in this research note that the truth—for the moment at least—lies somewhere between the extremes of dystopia and utopia. In the federal public administration, technology is being deployed in a variety of areas, but rarely if ever displacing human decision-making. Indeed, technology tends to be leveraged in areas of public policy that don’t involve any settling of benefits, statuses, licenses, etc. We are still a long way from sophisticated machine learning tools deciding whether marriages are genuine, whether taxpayers are compliant or whether nuclear facilities are safe. The reality is more down to earth.
My goals are modest: I intend only to reveal what is currently being done in this space, based on publicly available information. I will offer some thoughts about whether the current uses are justified, but these are offered primarily as food for thought rather than as fully formed conclusions. Moreover, the map is partial: it will need to be completed over time. I have based the map on publicly available information about algorithmic impact assessments and web searches of federal government departments. Despite the inherent limitations of such a study, it should inform debates about the use of technology in governmental settings in Canada. Because competing views—utopia and dystopia—are so strongly expressed in the literature, seeing the full picture helps us navigate a way forward. In addition, as has been observed, there is a “need to conduct more domain-specific studies, specific to certain areas or countries and at specific government levels in relation to AI” (Zuiderwijk, Chen & Salem, 2021, p. 15). This research note observes the use of AI at the federal government level in Canada, specifically.
1. Methodology
This paper focuses on the general notion of automation and particularly on how computing power and data sets are leveraged to deploy complex algorithms and machine learning to assist in or displace human decision-making. My goal is to be as inclusive as possible to draw as good a map as I can based on available data. Accordingly, I’ll use the UNESCO definition of AI systems, i.e. “information-processing technologies that embody models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in real and virtual environments […] designed to operate with some degree of autonomy by means of knowledge modelling and representation and by exploiting data and calculating correlations” (UNESCO, 2020). This broad definition captures the use cases described below.
In addition, I have focused on use cases where the Government of Canada is interacting with Canadian citizens or other individuals who are seeking benefits, information or statuses. This is not to downplay the potential use (or abuse) of AI internally within the Government of Canada. However, public-facing usage is much easier to identify (in part because the notion of impact on individuals is central in the Treasury Board directive described below) and therefore provides a useful starting point for analysis.
There are two separate data sources for this paper.
The first is the publicly available list of algorithmic impact assessments performed under the Treasury Board’s Directive on Automated Decision-Making (DADM). The DADM (Treasury Board of Canada Secretariat, 2023) is the federal government’s strategy for regulating AI and algorithms, while the Algorithmic Impact Assessment (AIA) tool (Government of Canada, 2021g) serves a complementary function in implementing the DADM. The AIA tool is a questionnaire that seeks to assess an automated decision system’s impact and the acceptability of AI solutions from an ethical and human perspective, based on factors such as the complexity of the system’s design, algorithm, decision type, impact, and data (Brandusescu, 2021, p. 22). The tool determines the impact level—from low to high—of an automated decision system, specifically by measuring how the system affects the rights of individuals or communities, the health or well-being of individuals or communities, the economic interests of individuals, entities, or communities, and the ongoing sustainability of an ecosystem (Treasury Board of Canada Secretariat, 2021, Appendix B). Once the impact level is determined, specific requirements apply under the DADM that reflect the significance of the decision.
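To make the mechanics concrete, here is a minimal sketch of a points-based scoring scheme of the kind the AIA questionnaire embodies. The categories, weights, and thresholds below are my own illustrative assumptions, not the actual Treasury Board instrument.

```python
# Minimal sketch of a points-based impact assessment, loosely modelled on the
# AIA questionnaire. Categories, weights, and thresholds are illustrative
# assumptions, not the actual Treasury Board instrument.

def impact_level(answers: dict[str, int]) -> str:
    """Map questionnaire risk points to an impact level (I = low ... IV = very high)."""
    raw_score = sum(answers.values())
    if raw_score <= 15:
        return "Level I (little to no impact)"
    if raw_score <= 30:
        return "Level II (moderate impact)"
    if raw_score <= 45:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

print(impact_level({
    "rights_of_individuals_or_communities": 10,
    "health_or_wellbeing": 3,
    "economic_interests": 5,
    "ecosystem_sustainability": 0,
}))  # -> Level II (moderate impact)
```

Under the DADM, the higher the resulting level, the more demanding the accompanying requirements become.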
The second source consists of web searches conducted in 2021 on the websites of Canadian federal government departments, either directly or via Google. Evidently, the range of governmental institutions covered could be broadened. For the purposes of mapping an emerging landscape, however, the government departments provide ample topography, as technology can assist both their policy-making and their policy-implementation functions. Focusing on a homogeneous group of governmental entities, such as federal government departments, produces a relatively accurate map, albeit of limited terrain.
2. Use cases
2.1 Enhancing the accessibility of public-facing resources
In several instances, departments have gathered data about the usage of online resources in order to make those resources easier to understand and use.
The ATIP Online Request Service AIA (Government of Canada, 2020a) relates to a “Simple central website for Canadians to submit [access to information] requests” (Government of Canada, 2021d, p. 1). The service “offers Canadians the ability to submit access to information and personal information requests, and to have those requests automatically redistributed to a Responding Institution among the 240-plus Government of Canada institutions subject to [Part 1 of] the Access to Information Act and to the Privacy Act” (Treasury Board of Canada Secretariat, 2019a, p. 4). As the AIA response makes clear, this is a low-impact use of technology: it involves the implementation of an automated system, which works based on user-inputted data, to forward requests to the appropriate respondent. Crucially, the system itself “does not prevent requester from exercising their right to information” (Government of Canada, 2021d, p. 4).
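The routing step lends itself to a simple illustration. In the sketch below, the institution table, the matching rule, and the fallback are all illustrative assumptions rather than the service’s actual logic.

```python
# Minimal sketch of routing an ATIP request to a responding institution based
# on user-inputted data. The institution table, matching rule, and fallback
# are illustrative assumptions, not the actual service's logic.
INSTITUTIONS = {
    "immigration records": "Immigration, Refugees and Citizenship Canada",
    "tax records": "Canada Revenue Agency",
    "border records": "Canada Border Services Agency",
    # ...in reality, 240-plus institutions subject to the Acts
}

def route_request(subject: str) -> str:
    """Forward the request to the institution that holds the records."""
    return INSTITUTIONS.get(subject.strip().lower(),
                            "manual triage")  # unmatched requests go to a person

print(route_request("Immigration records"))
```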
Most of the use cases under this heading come from Canadian Heritage. For example, the Canada Travelling Exhibitions Indemnification Program tested an application form with questions focused on adherence to accepted museum principles, practices, and standards rather than on specific details (Canadian Heritage, 2022). The purpose of this initiative is to help claimants complete forms quickly while allowing the Program to analyze requests more easily. The Canadian Conservation Institute also tested experimental systems to explore ways in which artificial intelligence can be used to respond to enquiries on heritage conservation more efficiently. Similarly, Canadian Heritage collaborated with the Translation Bureau on a pilot project exploring personalized linguistic services supported by artificial intelligence (Committee of Assistant Deputy Ministers on Official Languages, 2022). Canadian Heritage continues to provide these AI-supported services within the department. In addition, the department ran a pilot program that leveraged artificial intelligence to monitor official language use by funding recipients (Canadian Heritage, 2019). The pilot project applied AI to learn how to assess all clients’ digital communications and to provide real-time results. The software analyzed websites and social media feeds to determine whether communication was provided in both official languages.
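The official-languages pilot can be pictured with a short sketch: scan a funding recipient’s public posts and flag whether both English and French appear. The stopword heuristic is an illustrative assumption; the pilot’s actual model is not public.

```python
# Minimal sketch of bilingual-communication checking, in the spirit of the
# official-languages monitoring pilot. The stopword heuristic is an
# illustrative assumption, not the pilot's actual model.
EN_MARKERS = {"the", "and", "with", "for"}
FR_MARKERS = {"le", "la", "et", "avec", "pour"}

def languages_present(posts: list[str]) -> set[str]:
    """Return the official languages detected across a recipient's posts."""
    found = set()
    for post in posts:
        words = set(post.lower().split())
        if words & EN_MARKERS:
            found.add("en")
        if words & FR_MARKERS:
            found.add("fr")
    return found

posts = ["Join us for the launch event",
         "Joignez-vous à nous pour le lancement"]
print("compliant" if languages_present(posts) == {"en", "fr"} else "flag for review")
```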
2.2 Using information to create and enhance models of natural and human activity
Modern governments rely to a great extent on models of natural and human activity to determine where to deploy resources, how to develop policy, or how to improve the performance of internal systems. For example, Agriculture and Agri-Food Canada (2022) has developed the ISO 19131 Annual Crop Inventory—Data Product Specifications, which involves an operational software system for mapping crop types using satellite observations. The Inventory examines data from multiple sources to create a national digital crop inventory. This data is derived from optical and radar images over a single growing season, in conjunction with ground data. The Inventory then processes all this data through a Decision Tree (DT) algorithm, which maps crop output using image-based segmentation prior to calculating a final, accurate assessment. The DT algorithm uses known crop types at certain locations on the ground to spectrally differentiate each of the crop types being mapped. These relationships are then applied to satellite image data to identify the most likely crop type in each field in the study area.
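As a rough illustration of the DT approach, the sketch below trains a decision tree on per-pixel spectral features labelled with ground-truth crop types and then applies it to new pixels. The synthetic data, band count, and model settings are illustrative assumptions, not AAFC’s production pipeline.

```python
# Minimal sketch of decision-tree crop classification from spectral features,
# in the spirit of the Annual Crop Inventory's DT algorithm. Synthetic data;
# not AAFC's production pipeline.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Training pixels: spectral measurements (optical bands plus radar
# backscatter, say) at locations where the crop type is known from the ground.
X_train = rng.normal(size=(500, 4))                      # 4 illustrative bands
y_train = rng.choice(["wheat", "canola", "corn"], size=500)

model = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_train, y_train)

# Apply the learned spectral relationships to new satellite pixels to predict
# the most likely crop type in each field.
X_scene = rng.normal(size=(3, 4))
print(model.predict(X_scene))
```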
Within the Employment Insurance system, the Record of Employment Comments (ROEC) system plays an important role. Employers use this partially automated form (Government of Canada, 2021b) to record interruptions of earnings when employees stop working; periods of no work affect eligibility for Employment Insurance. The AIA response reveals that the more sophisticated ROEC system “will interpret and assess free text comments captured by employers when records of employment (ROE) are issued” and, based on simple rules, “the AI will assess and predict simple actions (i.e. save or ignore comments, predict a different Reason for Separation [RFS])” (Government of Canada, 2021e, p. 1). The AIA response describes a limited pilot-style program in the first instance, as the model will initially “only assess comments related to simple decisions, which are based on current procedures used by agents” and require only “minimal” judgment or discretion (Government of Canada, 2021e, p. 4). As such, the likely impact is low and entirely reversible.
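A rule-based sketch conveys the flavour of what the AIA describes: classify each free-text comment as something to save for an agent, something to ignore, or a signal that a different Reason for Separation applies. The specific phrases and rules are illustrative assumptions, not ESDC’s model.

```python
# Minimal sketch of rule-based triage of free-text ROE comments, in the
# spirit of the ROEC pilot. Phrases and rules are illustrative assumptions,
# not ESDC's actual model.
def triage_comment(comment: str) -> str:
    text = comment.lower()
    if "quit" in text or "resigned" in text:
        return "predict different RFS"   # comment suggests another Reason for Separation
    if any(word in text for word in ("dispute", "grievance", "pending")):
        return "save"                    # substantive comment: route to an agent
    return "ignore"                      # boilerplate: no action needed

for c in ("Employee resigned effective May 1",
          "Grievance pending with the union",
          "See attached pay stub"):
    print(f"{c!r} -> {triage_comment(c)}")
```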
2.3 Performance assessment
There are some examples of government departments using technology to monitor job performance. For example, the Canadian Forces Health Information System (CFHIS) is a Canadian Forces-wide electronic medical information database designed to manage health information in support of efficient decision-making and enhanced operational effectiveness (Ombudsman for the Department of National Defence [DND] and the Canadian Forces [CF], n.d.). The system is intended to deliver integrated, automated health information to every serving member of the Regular and Reserve Forces.
In the aviation sector, automated fatigue audit systems use biomathematical modelling algorithms to predict how much sleep an employee is likely to get in a given schedule. In collaboration with a private sector partner, Transport Canada developed the Fatigue Risk Management System (FRMS) Toolbox for Canadian Aviation (Transport Canada, 2013). The underlying software is able to calculate a fatigue likelihood score for each employee at any given point in a work schedule. The algorithm considers factors such as shift time and length, previous work schedules, and break times to produce fatigue likelihood scores for each shift. The algorithm then estimates fatigue-related risk for groups of workers in a particular schedule, allowing aviation companies to deploy their resources accordingly. In rail, for similar reasons, scheduling algorithms are used to mitigate the risks associated with fatigue of railway workers (Transport Canada, 2018).
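A toy version of a biomathematical fatigue score shows how schedule data can be turned into a per-shift number. The weights, sleep baseline, and formula below are illustrative assumptions, not the FRMS Toolbox’s actual model.

```python
# Minimal sketch of a biomathematical fatigue-likelihood score for a shift,
# in the spirit of the FRMS fatigue audit tools. Weights and formula are
# illustrative assumptions, not the toolbox's actual model.
from datetime import datetime

def fatigue_score(shift_start: datetime, shift_hours: float,
                  prior_sleep_hours: float, hours_since_last_break: float) -> float:
    """Higher scores indicate a greater likelihood of fatigue."""
    night_penalty = 2.0 if shift_start.hour < 6 or shift_start.hour >= 22 else 0.0
    sleep_debt = max(0.0, 8.0 - prior_sleep_hours)   # against an 8-hour baseline
    return (night_penalty + 0.5 * shift_hours
            + sleep_debt + 0.25 * hours_since_last_break)

# A 10-hour night shift after 5 hours of sleep scores far above a day shift.
print(fatigue_score(datetime(2024, 3, 1, 23), 10, 5, 4))  # 11.0
print(fatigue_score(datetime(2024, 3, 1, 9), 8, 8, 2))    # 4.5
```

Group-level risk for a schedule can then be estimated by aggregating per-shift scores across a roster, which is how such scores support resource deployment rather than individualized decisions.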
2.4 Enforcement resources
Governments regularly have to make difficult choices regarding the distribution of enforcement resources. Technology has been used to target scarce investigative and intelligence resources to identify prohibited behaviours. Of course, the right to appeal or seek review in any investigations or enforcement proceedings is protected.
The ArriveCAN application, implemented during the COVID-19 pandemic, is arguably an example, targeting border resources at those whose vaccination status cannot easily be verified. The goal was to verify the vaccination status of individuals travelling to Canada to determine whether they were authorized to enter the country and whether any quarantine requirements might apply. In the AIA response (Government of Canada, 2021a), the application scored low on impact: it was not designed to make decisions as such, but to assist border officers in making decisions about eligibility to enter Canada based on vaccination status. A positive result on the application would confirm eligibility, while a negative result wouldn’t necessarily lead to an exclusion but, rather, to a simple manual verification. Negative effects from the use of the application would be brief and reversible, as any errors could be corrected by an officer’s manual review. Of course, at various points during the pandemic, the application would generate a negative result for unvaccinated travellers, and the practical effect could be to prevent the applicant from boarding a plane or train to enter Canada. However, any automated decisions about eligibility to enter or quarantine requirements arose from the underlying legal framework, not the application itself. Hence the low, reversible impact of the application. Similarly, the Integrity Risk Management Branch of Immigration, Refugees and Citizenship Canada has developed an integrity trends analysis tool to detect fraud patterns in applications for immigration status: this analysis informs investigations by Risk Assessment Units into the validity of documents (work previously done manually), but does not affect the processing of applications once fraud checks have been completed (Government of Canada, 2022e).
Technology has also been used upstream to target scarce enforcement resources: on the internet, with respect to child sexual exploitation, through Project Arachnid (2022), a tool designed to combat the proliferation of child sexual abuse material; and at the border, where facial recognition has been deployed. Law enforcement has used traditional facial recognition tools for decades. Technological advances in areas such as biometrics, machine learning, and AI have led to the development of more advanced and sophisticated facial recognition tools. These tools can dramatically reduce the amount of time investigators spend reviewing potential matches (Public Safety Canada, 2020c). But they have been extremely controversial, due to concerns about bias embedded in the tools (Buolamwini & Gebru, 2018) and violations of privacy (Office of the Privacy Commissioner of Canada, 2020a). Facial recognition is an automated biometric identification system that employs a one-to-many search in a database of images to try to identify an individual. The automated system compares the submitted image against a biometric database containing images of “known” faces previously enrolled in the system. There’s a risk of system bias, because “software trained predominantly on the faces of white and lighter-skinned people may be less capable of accurately identifying individuals with darker skin tones” (Molnar & Gill, 2018, p. 9). In litigation, it emerged that the Canada Border Services Agency had apparently resorted to Clearview AI’s facial recognition technology despite the risks of misidentification, which drew criticism from the Federal Court (Barre v. Canada, 2022, para. 56). Indeed, Clearview AI no longer offers facial recognition services in Canada (Office of the Privacy Commissioner of Canada, 2020b).
Consider, however, a positive story from Immigration, Refugees and Citizenship Canada about the use of Gender-Based Analysis Plus for facial recognition. The GBA+ methodology involves an appreciation of gender, diversity, and intersectionality. One of IRCC’s highlights of 2021–22 was the Facial Recognition Solution (FRS). IRCC uses photos provided by travel document applicants to conduct facial biometric comparisons using its FRS. The FRS helps screen and validate the applicant’s identity as part of the Passport Program’s identity management framework. To mitigate risks stemming from algorithmic bias, IRCC ensures a human operator is available to review the system’s findings. Only designated employees of IRCC—those formally trained to conduct facial comparison analysis—can determine whether a potential match from FRS consists of two identities bearing the same photo (Immigration, Refugees and Citizenship Canada, 2021).
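The one-to-many search described above can be sketched as a nearest-neighbour lookup over face embeddings, with candidate matches routed to a trained human reviewer. The embeddings, similarity measure, and gallery below are synthetic illustrations, not a real biometric system.

```python
# Minimal sketch of one-to-many facial matching: each enrolled face is an
# embedding vector, and a probe image is compared against every template.
# Synthetic data; not a real biometric system.
import numpy as np

rng = np.random.default_rng(1)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}  # enrolled faces

def best_matches(probe: np.ndarray, top_k: int = 3) -> list[tuple[str, float]]:
    """Rank enrolled identities by cosine similarity to the probe embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [(name, cos(probe, template)) for name, template in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# The top candidates go to a trained human reviewer, never straight to an
# identification decision.
probe = gallery["person_42"] + rng.normal(scale=0.1, size=128)
print(best_matches(probe))
```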
2.5 Advising on eligibility
In several areas, government departments have used user-generated information to recommend statuses, benefits or privileges for which an individual might be eligible. These recommendations don’t confirm eligibility, but, rather, indicate to the individual user what they might apply for.
Immigration, Refugees and Citizenship Canada (IRCC) has used technology to enhance its first point of contact with users, where individuals seeking IRCC’s services can acquire relevant information about immigrating to Canada (Immigration, Refugees and Citizenship Canada [IRCC], 2020b). “Quaid” is an artificial intelligence-driven chatbot (IRCC, 2020a) that responds to online enquiries through IRCC’s official Facebook Messenger account. Quaid was trained using actual client questions and is continually improved based on client needs, which are determined through question data. Since its launch in 2018, Quaid has provided over 70,000 automated responses to clients. Where Quaid is unable to provide an effective response to a question on social media, it directs the individual to IRCC’s online form for case-specific questions. Where it is unable to provide responses on IRCC’s online web chat, Quaid provides a series of questions to help determine the most appropriate service for the requestor, and directs them accordingly (IRCC, 2022b).
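Quaid’s fallback behaviour can be sketched as simple intent matching: answer when a known intent matches, otherwise route the user to the case-specific channel. The intents and matching rule below are illustrative assumptions; IRCC has not published the bot’s internals.

```python
# Minimal sketch of a chatbot answering known intents and falling back to a
# human channel, in the spirit of Quaid. Intents and matching rule are
# illustrative assumptions, not IRCC's actual implementation.
FAQ = {
    "processing times": "Current processing times are posted on canada.ca.",
    "biometrics": "Most applicants must give fingerprints and a photo.",
}

def respond(question: str) -> str:
    for intent, answer in FAQ.items():
        if intent in question.lower():   # stand-in for real intent classification
            return answer
    return "Please use the online form for case-specific questions."

print(respond("How long are processing times right now?"))
print(respond("Can you check the status of my file?"))
```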
2.6 Triaging applications
Once an application for a status, benefit or privilege has been received, and before any determination is made on it, the application may be triaged. Technology can be used to sort applications for a status, benefit or privilege into different categories, depending on whether the applications are straightforward or complex. For example, IRCC’s Express Entry program uses automation to identify candidates with appropriate credentials and less complex files for permanent residence applications. Express Entry establishes a two-step application process. In step one, foreign nationals seeking to obtain permanent resident status under economic programs must submit an online expression of interest in coming to Canada. The Express Entry system includes a stand-alone pre-application stage where eligible candidates are entered into a pool; eligibility is based on self-declared information. These candidates are scored and ranked against others in the pool. In step two, the system provides an Invitation to Apply (ITA) for permanent residence to the highest-ranked candidates. Once the ITA is issued, the application is addressed through IRCC’s normal decision-making processes for permanent residency applications. Express Entry therefore serves as a triaging tool and doesn’t advise on or determine eligibility for status.
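The pool-and-rank mechanics reduce to a simple sketch: candidates enter a pool with a score, and each draw invites the top-ranked candidates to apply. The scores and invitation count below are illustrative, not IRCC’s actual scoring and draw rules.

```python
# Minimal sketch of the Express Entry pool-and-rank step: score candidates on
# entry, then invite the highest-ranked to apply. Scores and invitation count
# are illustrative, not IRCC's actual rules.
from dataclasses import dataclass

@dataclass
class Candidate:
    profile_id: str
    score: int  # derived from self-declared information at the pre-application stage

pool = [Candidate("A", 488), Candidate("B", 512),
        Candidate("C", 470), Candidate("D", 501)]

def run_draw(pool: list[Candidate], invitations: int) -> list[Candidate]:
    """Issue Invitations to Apply (ITAs) to the highest-ranked candidates."""
    return sorted(pool, key=lambda c: c.score, reverse=True)[:invitations]

for c in run_draw(pool, invitations=2):
    print(f"ITA issued to {c.profile_id} (score {c.score})")
```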
2.7 Eligibility decisions
Many government decisions involve determining whether an applicant qualifies for a status, benefit or privilege. Technology has been deployed to make or assist in making these determinations.
These determinations can be discretionary, as with Innovation, Science and Economic Development Canada’s use of an algorithm in licensing spectrum use. The bid-processing algorithm described in “Consultation on a Policy and Licensing Framework for Spectrum in the 3500 MHz Band” maintains a queue of all bids from the round that have not been applied in their entirety (ISED, 2019). The highest-priority bid that hasn’t yet been considered is processed, and the algorithm then checks to what extent the bid can be applied using the most recently determined processed demands. In another instance, a software algorithm is used to determine the set of assignment prices that meet the conditions outlined in the report (ISED, 2020).
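The queue-based bid processing just described can be sketched as a priority loop: take the highest-priority unconsidered bid and apply it to the extent remaining supply allows. The data structures and numbers are illustrative assumptions, not ISED’s actual auction software.

```python
# Minimal sketch of priority-ordered bid processing, in the spirit of the
# 3500 MHz consultation's algorithm. Data structures and numbers are
# illustrative assumptions, not ISED's auction software.
import heapq

# (priority, bidder, requested_blocks); a lower number means higher priority.
bids = [(1, "bidder_A", 4), (2, "bidder_B", 3), (3, "bidder_C", 2)]
heapq.heapify(bids)

supply = 6  # spectrum blocks available in the service area
while bids and supply > 0:
    priority, bidder, requested = heapq.heappop(bids)
    applied = min(requested, supply)  # apply the bid as far as supply allows
    supply -= applied
    print(f"{bidder}: applied {applied}/{requested} blocks (priority {priority})")
```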
By far, the most sophisticated use of automation in the Government of Canada, about which information is available because of several algorithmic impact assessments and a paper published by those involved in the creation of the systems, is at Immigration, Refugees and Citizenship Canada (IRCC).
IRCC first developed a sophisticated process for temporary residence visas (TRVs) (McEvenue & Mann, 2019; Government of Canada, 2022a). Note that an applicant for a TRV must be both eligible and admissible. Eligibility depends on factors such as whether the applicant is likely to leave Canada on or before the expiry of the TRV; admissibility relates to issues such as the applicant’s criminal record. The process, which has been described as a triaging process, applies only to eligibility.
The system triages incoming applications for TRVs. At an initial stage, the system disqualifies any applications that trigger a key complexity indicator (such as travelling with a minor) and sorts the remaining applications into three “bins”: low complexity applications, which are automatically approved, and medium and higher complexity applications, where officers will determine whether the applicant is eligible for a TRV. The system can’t generate any refusal automatically, only approvals. Figure 1, borrowed from McEvenue & Mann, 2019, illustrates the process.
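In code, the triage flow just described might look like the following sketch. The complexity indicator, bin labels, and routing rule are illustrative assumptions drawn from the description above; note that no path produces an automatic refusal.

```python
# Minimal sketch of the TRV eligibility triage described above: key complexity
# indicators exclude a file from automation, low-complexity files are
# auto-approved on eligibility, and nothing is ever refused automatically.
# Indicators and binning are illustrative assumptions.
from enum import Enum

class Route(Enum):
    AUTO_APPROVE_ELIGIBILITY = "eligibility approved; officer still decides admissibility"
    OFFICER_REVIEW = "officer decides eligibility and admissibility"

def triage_trv(application: dict) -> tuple[str, Route]:
    if application.get("travelling_with_minor"):   # key complexity indicator
        return "excluded from automation", Route.OFFICER_REVIEW
    complexity = application["complexity"]         # "low" | "medium" | "high"
    if complexity == "low":
        return "low bin", Route.AUTO_APPROVE_ELIGIBILITY
    return f"{complexity} bin", Route.OFFICER_REVIEW

print(triage_trv({"travelling_with_minor": False, "complexity": "low"}))
print(triage_trv({"travelling_with_minor": True, "complexity": "low"}))
```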
All files are eventually reviewed by an officer. As the AIA response puts it: “Even in cases where the system approves the eligibility, officers continue to make the admissibility determination and the final decision on each application. As a result, there is an officer review of all applications”. As a quality assurance measure during the pilot program, officers were fed a random sample of about 10% of the automatic approvals, without prior knowledge of the system’s complexity classification. It’s not clear whether this feature has been rolled out for TRVs generally.
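The blind quality-assurance step can be sketched in a few lines: sample roughly 10% of automatic approvals and strip the system’s classification before routing them into officers’ ordinary workload. The data structures are illustrative.

```python
# Minimal sketch of blind QA sampling: ~10% of automatic approvals are routed
# to officers without the system's classification attached. Illustrative only.
import random

random.seed(0)
auto_approved = [f"file_{i:03d}" for i in range(200)]

qa_sample = random.sample(auto_approved, k=len(auto_approved) // 10)
officer_queue = [{"file": f, "system_classification": None}  # metadata withheld
                 for f in qa_sample]

print(len(officer_queue), "blind files added to the ordinary officer workload")
```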
Lastly, it’s worth noting that IRCC emphasized the relatively low impact of the TRV system in its AIA responses: “Visas are temporary and do not entitle the holder to work, study or immigrate to Canada. Impacts may affect travel plans and the ability of clients to personally attend meetings or events in Canada, but this impact is temporary as clients whose visa application is refused can reapply at any time” (Government of Canada, 2022b, p. 4). In other words, the TRV system does not involve life-and-death decision-making. Similarly, an automated system was put in place to expedite the processing of visas and work permits for those fleeing the conflict in Ukraine (Government of Canada, 2023).
Moving beyond TRVs, IRCC has automated decision-making in the area of permanent residence. Canadian citizens who are married or in a common-law relationship can sponsor their partner for permanent residence status in Canada. IRCC has developed a system for automatically approving sponsorship applications based on models gleaned from past positive determinations (Government of Canada, 2021c). This model has now been extended to private sponsorship of refugee applications (Government of Canada, 2022f).
The spouse/common-law partner system is similar to the TRV system. Refusals are determined by human decision makers only, and positive eligibility determinations can be made by the system without human intervention. Interestingly, the AIA response explicitly states that the rules underpinning the analytics aren’t to be shared with the officers: “The impact of the triage performed by the model on decision-making officers is limited because officers will not be aware of the rules used by the model for its triage or automated positive eligibility determinations, nor will they receive any information about the analysis that was performed by the model” (Government of Canada, 2021f, p. 4). The underlying concern here is presumably that if officers know what triggers an approval, they will be more likely to refuse an application that does not contain these triggers.
The uses by IRCC are the most sophisticated of those revealed by the available AIA responses. These uses also involve sensitive areas of decision-making, where there is significant potential for bias. Sponsorship applications are a good example. There are conventional marital relationships, running from courtship to engagement to a wedding ceremony to subsequent cohabitation and maybe to child rearing. These might be thought of as “easy” cases as far as sponsorship is concerned, because there will rarely be any meaningful suggestion that the relationship was not genuine. But such cases are only “easy” because of prevailing social norms about conventional marital relationships. In this sense, a system based on past decisions is likely to be biased towards conventional marital relationships and hostile to relationships which do not fit prevailing norms. Of course, individual officers making decisions aren’t free from such biases themselves. And one can legitimately ask whether the efficiency gains generated by automating approvals of (one assumes) conventional marital relationships outweigh any harm from entrenching the bias in the system.
More serious questions arise about the consequences of deploying AI in decision-making structures. Will the structures, intentionally or unintentionally, favour refusals in some cases? In general, the systems seem to have been designed to prevent decision makers from learning how and on what basis an application has been classified by the automated system. As such, they can’t form any bias based on the automated treatment of the application. Undoubtedly, officers may, over time, come to recognize the features of low complexity cases and, conceivably, pay greater attention to medium or high complexity cases. However, the ability to distinguish between different degrees of complexity is one that officers can develop over time in any event, based on their own experience and acquired expertise.
The spouse/common-law partner system is delicately poised: if officers are now receiving “non-straightforward” cases for decision, they can apply a higher level of scrutiny to those cases than they did before, with their biases favouring close analysis at the least, and potentially even a refusal. It’s not clear from the AIA whether officers are also provided with a blind, random sample of positive determinations as part of their ordinary workload. There is assurance that ongoing monitoring and quality assurance will be performed to avoid bias, but no details about what this might involve specifically. At a minimum, it would be appropriate to distribute random positive decisions to officers, withholding the knowledge that the system has already provided a positive determination, in order to prevent the development of any biases.
Interestingly, in the most recent discussion about IRCC’s automated decision-making in the context of an algorithmic impact assessment, the department has confirmed the use of additional measures along these lines:
Measures are also in place to mitigate against the potential risk that the triage function could influence officer decision-making. There is deliberate separation of officers from the system: officers are not aware of the rules used by the system, nor do they receive information about the analysis performed by the system. This separation mitigates the risk that officers could be unduly influenced by the system’s outputs (also known as “automation bias”). Additionally, an ongoing quality assurance process has been implemented to monitor whether officers make the same positive eligibility determinations as the system. This process ensures that biases have not been introduced by the system.
(Government of Canada, 2022g)
These measures are designed to respond to precisely the concerns identified above.
It should be noted that IRCC also uses additional tools, about which less information is in the public domain. For example, the Chinook system is used in the processing of TRV applications: it has been the subject of popular criticism (Nash, 2022), parliamentary scrutiny (IRCC, 2022c) and judicial review.[2]
Discussion
In the Government of Canada, the notion of “impact” is central to the Directive on Automated Decision-Making, which is also the main accountability mechanism for the use of AI systems. Impact is defined as the effect on the rights of individuals or communities; the health or well-being of individuals or communities; the economic interests of individuals, entities, or communities; and the ongoing sustainability of an ecosystem. In critically analyzing the use cases described in this part, I will refer to “impact.” This is also consistent with my framing of this research note in terms of a distinction between utopian and dystopian futures for public administration. The distinction turns on the relationship between the citizen and the state and prompts us to ask whether the impact of AI systems is appropriate.
The first three use cases (sections 2.1 to 2.3) involve uses of AI systems that have generally been considered beneficial (Coglianese & Lehr, 2017; Valle-Cruz et al., 2019). Leveraging technology to create models of natural and human behaviour doesn’t interfere with anyone’s rights or interests and, all things considered, is apt to create a more effective government. Even in terms of performance assessment, technology has been used to make group-level assessments, for instance in relation to work schedules, rather than individualized decisions. It is true, of course, that a world viewed through an AI lens may look quite different from a world viewed through a human lens (Pasquale & Cashwell, 2018), but in terms of impact and the risk of a dystopian future, these use cases pose little to no threat.
With the remaining use cases, caution is required.
Although decisions about enforcement resources engage the state-citizen relationship to some degree, they’re nonetheless upstream decisions, in the sense that the ultimate determination of an individual’s rights and interests will be made by a subsequent decision maker. Before the subsequent decision maker comes to a conclusion, the individual concerned will have the right to participate in an investigative and adjudicative process of some sort. As such, the use of technology in the upstream allocation of scarce resources does not directly impact individuals who are subject to enforcement. It’s difficult to quibble with Project Arachnid, for example. That said, as the facial recognition discussion shows, if the use of technology has the effect of focusing attention on a particular group, then technology imposes costs on an identifiable group: even if there is downstream human intervention and no ultimate effect on rights and interests, individual members of the group pay a price as they are disproportionately subject to particular procedures.[3] Accordingly, there’s a greater need for safeguards in this space.
Moving down the list of use cases, we encounter uses that more directly concern the relationship between the citizen and the state and are more impactful. The first is advising on eligibility. As with the discussion of enforcement resources above, advising on eligibility does not generally raise concerns about the state-citizen relationship. In these use cases, eligibility determinations are ultimately made by human decision makers on the basis of applicable legal standards; the upstream advice does not have an impact on the final decision. However, if the upstream advice is inaccurate and dissuades an individual from seeking a status, benefit or privilege, this is problematic, especially if the burden of inaccurate advice falls more heavily on a particular group.
Evidently, the making of determinations is potentially the most far-reaching use of technology in the Government of Canada. Where determinations are based on simple rules, there can be little cause for concern: the rules will be by definition knowable and subject to revision as appropriate. Even where determinations are discretionary, it may be helpful to use technology to bring a greater degree of predictability to the decision-making process; algorithmic auctions for spectrum space are a good example. As we have seen, the middle space between determinations based on rules and discretionary decisions—determinations involving judgment—requires the highest level of care. And even where only positive decisions are automated, the use of AI systems can have an impact on the treatment of other decisions, perhaps subjecting them to a greater level of scrutiny in a way that perpetuates or reinforces existing social biases. Here, safeguards are certainly needed to avoid some citizens finding themselves locked in a dystopic ghetto because AI systems are riddled with prejudice.
Conclusion
In this paper I have mapped out, on the basis of publicly available information through algorithmic impact assessment responses and web searches, the uses of algorithms and machine learning in the Government of Canada.
As explained in Part I, my focus was narrow—federal government departments specifically—but nonetheless allowed me to develop a clear picture of current uses of AI systems. I identified seven different use cases:
Enhancing the accessibility of public-facing resources;
Using information to create and enhance models of natural and human activity;
Assessing performance;
Managing enforcement resources;
Advising on eligibility;
Triaging applications;
Assisting with or making eligibility decisions.
With a clear picture in view, it was possible to begin engaging in some critical reflection on the appropriateness of the current uses. Some are of relatively low impact and don’t portend any sort of dystopian future. However, more sophisticated AI systems, such as those used by Immigration, Refugees and Citizenship Canada, prompt critical reflection on their design and on ensuring good decision-making.
Overall, the uses described here don’t support the proposition that “bot barbarians” are about to storm the gates of the Government of Canada. We are a long way from the nightmarish scenario of machines making life-or-death decisions. Equally, these uses hardly suggest that machines are going to carry Canadians to a utopia of quick, easy and accurate decision-making: most of the uses are so far upstream from decision-making that the benefits promised by some of the more bullish technology boosters are some way off in the distance.
Appendices
Acknowledgements
With thanks to, at various points and in chronological order, Adam Strombergson-Denora, Kseniya Kudischeva, Alec Carden and Rachel Freeland for research assistance. The anonymous reviewer also offered very helpful observations on an earlier draft. This research was supported by the Social Sciences and Humanities Research Council.
Notes
[1]
Professor Paul Daly holds the University Research Chair in Administrative Law & Governance at the University of Ottawa. Professor Daly’s practice and research interests span the broad field of public law, with a particular emphasis on judicial review, advice and training for regulatory agencies and administrative tribunals, public authority liability and complex constitutional issues. ORCID: 0000-0002-2901-6765
[2]
Ocran v. Canada (Citizenship and Immigration), 2022 FC 175.
[3]
For example, see Luamba c. Procureur général du Québec, 2022 QCCS 3866.
Bibliography
- Agriculture and Agri-food Canada. (2022). ISO 19131 Annual crop inventory–Data product specifications, revision A. https://agriculture.canada.ca/atlas/data_donnees/annualCropInventory/supportdocument_documentdesupport/en/ISO%2019131_AAFC_Annual_Crop_Inventory_Data_Product_Specifications.pdf
- Barre v. Canada (Citizenship and Immigration), 2022 FC 1078. https://decisions.fct-cf.gc.ca/fc-cf/decisions/en/item/521971/index.do
- boyd, d. and Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878
- Buolamwini, J. and Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research, 81, 1–15. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
- Brandusescu, A. (2021, March). Artificial intelligence policy and funding in Canada: Public investments, private interests. Centre for Interdisciplinary Research on Montreal, McGill University. https://www.mcgill.ca/centre-montreal/files/centre-montreal/aipolicyandfunding_report_updated_mar5.pdf
- Canadian Heritage. (2019). Departmental results report 2018–2019—Canadian Heritage. https://www.canada.ca/content/dam/pch/documents/corporate/publications/plans-reports/departmental-results-report-2018-2019/2018-19-pch-drr-eng.pdf
- Canadian Heritage. (2022). Departmental plan 2020-21—Canadian Heritage. https://www.canada.ca/en/canadian-heritage/corporate/publications/plans-reports/departmental-plan-2020-2021.html
- Cattiau, J. (January 28, 2020). AI’s killer (whale) app. https://blog.google/technology/ai/protecting-orcas/.
- Coglianese, C. and Lehr, D. (2017). Regulating by robot: Administrative decision making in the machine-learning era. Georgetown Law Journal, 105:1147-1223. https://scholarship.law.upenn.edu/faculty_scholarship/1734/
- Committee of Assistant Deputy Ministers on Official Languages (CADMOL). (2022). Dashboard on the status of the language of work recommendations. https://www.noslangues-ourlanguages.gc.ca/en/ressources-resources/tableau-de-bord-dashboard-eng
- Cybertip.ca. (2022). Child sexual abuse: Project Arachnid. https://www.cybertip.ca/en/child-sexual-abuse/project-arachnid/
- Department of National Defense. (2021). Personnel. https://www.canada.ca/en/department-national-defence/corporate/reports-publications/proactive-disclosure/main-estimates-2020-2021/personnel.html
- Digital Disruption White Paper Series. (2018). Responsible artificial intelligence in the Government of Canada.
- Employment and Social Development Canada (ESDC). (2020a). Chapter 3: Impact and effectiveness of Employment Insurance benefits (EBSMs–Part II of the Employment Insurance Act). https://www.canada.ca/en/employment-social-development/programs/ei/ei-list/reports/monitoring2019/chapter3.html
- Employment and Social Development Canada (ESDC). (2020b). Horizontal evaluation of the youth employment strategy – Skills Link stream. https://www.canada.ca/content/dam/canada/employment-social-development/corporate/reports/evaluations/horizontal-skills-link/horizontal-skills-link-EN.pdf
- Employment and Social Development Canada (ESDC). (2022a). Evaluation of Employment Insurance (EI) automation and modernization: Final report. https://www.canada.ca/en/employment-social-development/corporate/reports/evaluations/2016-employment-insurance-automation-and-modernization.html
- Employment and Social Development Canada (ESDC). (2022b). Employment Insurance monitoring and assessment report for the fiscal year beginning April 1, 2016 and ending March 31, 2017, Chapter IV – Program administration. https://www.canada.ca/en/employment-social-development/programs/ei/ei-list/reports/monitoring2017/chapter4.html
- Employment and Social Development Canada (ESDC). (2022c). Evaluation of learning and labour market information as disseminated by Employment and Social Development Canada using a web-based consolidated approach. https://www.canada.ca/en/employment-social-development/corporate/reports/evaluations/learning-labour-information-web-approach.html
- Employment and Social Development Canada (ESDC). (2022d). Evaluation of the Job match service connecting job seekers to Canadian employers. https://www.canada.ca/en/employment-social-development/corporate/reports/evaluations/job-match-connecting-job-seekers-employers.html
- Employment and Social Development Canada (ESDC). (2022e). Evaluation of the Job match service connecting job seekers to Canadian employers. https://www.canada.ca/content/dam/esdc-edsc/documents/corporate/reports/evaluations/job-match-connecting-job-seekers-employers/summary/Infographic_EN.pdf
- Financial Administration Act, RSC 1985, c F-11, Schedule 1.
- Fisheries and Oceans Canada. (2018). Center of Expertise in Marine Mammalogy, Scientific research report 2015–2017. https://www.dfo-mpo.gc.ca/species-especes/publications/mammals-mammiferes/cemam/2015-2017/page01-eng.html
- Fisheries and Oceans Canada. (2019). State of salmon aquaculture technologies. https://www.dfo-mpo.gc.ca/aquaculture/publications/ssat-ets-eng.html
- Fisheries and Oceans Canada. (2022). Fisheries and Oceans Canada in the Pacific region. https://www.pac.dfo-mpo.gc.ca/index-eng.html
- Girard, J. (2004). Defense knowledge management: A passing fad? http://www.journal.forces.gc.ca/vo5/no2/doc/knowledge-connaisanc-eng.pdf
- Governor General in Council: Order Amending the Canadian Passport Order: SI/2019-27 (2019). Canada Gazette Part II, 153 (11). https://gazette.gc.ca/rp-pr/p2/2019/2019-05-29/html/si-tr27-eng.html
- Government of Canada. (2016a). Annual arctic ice atlas winter 2015 to 2016. https://www.canada.ca/en/environment-climate-change/services/ice-forecasts-observations/publications/annual-arctic-atlas-winter-2015-2016.html
- Government of Canada. (2016b). Atlantic whitefish (Coregonus huntsmani): Action plan. https://www.canada.ca/en/environment-climate-change/services/species-risk-public-registry/action-plans/atlantic-whitefish.html
- Government of Canada. (2017). Boreal felt lichen, Atlantic population (Erioderma pedicellatum) recovery strategy: chapter 2. https://www.canada.ca/en/environment-climate-change/services/species-risk-public-registry/recovery-strategies/boreal-felt-lichen-atlantic-population/chapter-2.htm
- Government of Canada. (2018). Nesting periods of migratory birds: technical background. https://www.canada.ca/en/environment-climate-change/services/avoiding-harm-migratory-birds/general-nesting-periods/technical-background.html
- Government of Canada. (2019). Sei whale (Balaenoptera borealis), Atlantic population: COSEWIC assessment and status report 2019. https://www.canada.ca/en/environment-climate-change/services/species-risk-public-registry/cosewic-assessments-status-reports/sei-whale-2019.html.
- Government of Canada. (2020a). Algorithmic impact assessment – ATIP online request service. https://open.canada.ca/data/en/dataset/cea9985f-5e0f-425e-9b7e-e1d122272c56
- Government of Canada. (2020b). EOLakeWatch: Remote sensing of algal blooms. https://www.canada.ca/en/environment-climate-change/services/water-overview/satellite-earth-observations-lake-monitoring/remote-sensing-algal-blooms.html
- Government of Canada. (2021a). Algorithmic impact assessment – ArriveCAN Proof of Vaccination recognition. https://open.canada.ca/data/en/dataset/afc17416-3781-422d-a4a9-cc55e3a053c8
- Government of Canada. (2021b). Algorithmic impact assessment – Record of Employment comments (ROEC). https://open.canada.ca/data/en/dataset/daa9ca66-566f-4c2e-a285-d2e217c2a00f
- Government of Canada. (2021c). Algorithmic impact assessment – Spouse or common-law partner in Canada advanced analytics. https://open.canada.ca/data/en/dataset/d41f9ec2-bf01-4b2a-bd8d-1b3a8424f534
- Government of Canada. (2021d). Algorithmic impact assessment results, ATIP online request service – 2021 update. https://open.canada.ca/data/dataset/cea9985f-5e0f-425e-9b7e-e1d122272c56/resource/57087e3d-26bb-42e6-8e8b-573f65a1fea0/download/atip-digital-services-aia-en.pdf
- Government of Canada. (2021e). Algorithmic impact assessment results, Record of Employment Comments (ROEC).https://opencanada.blob.core.windows.net/opengovprod/resources/c2863d6c-68e9-47d7-b041-48420c0bc4c1/roec_preprod_07-12-2020-english.pdf?sr=b&sp=r&sig=Dsv7I7WM1mEcEjnAZdVUY541GxDQh3H6obJybwuHp3c%3D&sv=2015-07-08&se=2023-05-11T16%3A53%3A17Z
- Government of Canada. (2021f). Algorithmic impact assessment results, Spouse or common-law partner in Canada advanced analytics. https://opencanada.blob.core.windows.net/opengovprod/resources/e523687a-d1d0-46fc-8fbe-d0768e209275/algorithmic-impact-assessment-spouse-or-common-law-partner-in-canada-advanced-analytics-english.pdf?sr=b&sp=r&sig=jGFZSDR89HMXInAvLYheED4f4CM8vS3QqwKd59NOz4o%3D&sv=2015-07-08&se=2022-12-03T21%3A45%3A25Z
- Government of Canada. (2021g). Algorithmic impact assessment tool. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html
- Government of Canada. (2022a). Algorithmic impact assessment – Advanced analytics triage of overseas temporary resident visa applications. https://open.canada.ca/data/en/dataset/6cba99b1-ea2c-4f8a-b954-3843ecd3a7f0
- Government of Canada. (2022b). Algorithmic impact assessment results, advanced analytics triage of overseas temporary resident visa applications. https://open.canada.ca/data/en/dataset/6cba99b1-ea2c-4f8a-b954-3843ecd3a7f0/resource/9f4dea84-e7ca-47ae-8b14-0fe2becfe6db
- Government of Canada. (2022c). Job bank. https://www.jobbank.gc.ca/findajob/match
- Government of Canada. (2022d). Responsible use of artificial intelligence (AI). https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html
- Government of Canada. (2022e). Algorithmic impact assessment – Integrity Trends Analysis Tool. https://search.open.canada.ca/opendata/?collection=aia&page=1&sort=date_modified+desc
- Government of Canada. (2022f). Algorithmic impact assessment – Advanced analytics triage of visitor record applications. https://open.canada.ca/data/en/dataset/01396e33-2c69-47e5-9381-32e717943b96
- Government of Canada. (2022g). Algorithmic impact assessment – Automation tools to help process privately sponsored refugee applications. https://open.canada.ca/data/en/dataset/ad4be3b8-ac97-4dc1-8dd8-231239d018f2
- Government of Canada. (2023). Algorithmic impact assessment – Project to automate the review of non-complex applications for Temporary Resident Visas and Work Permits made under the Canada-Ukraine Authorization for Emergency Travel. https://search.open.canada.ca/opendata/?collection=aia&page=2&sort=date_modified+desc
- Health Canada. (2016). Canada Health Act annual report 2014–2015. https://www.canada.ca/en/health-canada/services/health-care-system/reports-publications/canada-health-act-annual-reports/report-2014-2015.html
- Immigration and Refugee Protection Act: Regulations Amending the Immigration and Refugee Protection Regulations. (2014). Canada Gazette Part II, 148 (24). https://canadagazette.gc.ca/rp-pr/p2/2014/2014-11-19/html/sor-dors256-eng.html
- Immigration, Refugees and Citizenship Canada (IRCC). (2020a). Immigrating to Canada: Client service fundamental brief. https://www.canada.ca/content/dam/ircc/documents/pdf/english/corporate/transparency/transition/min2019/client-service-en.pdf
- Immigration, Refugees and Citizenship Canada (IRCC). (2020b). Immigration, Refugees and Citizenship Canada departmental plan 2020–2021. https://www.canada.ca/en/immigration-refugees-citizenship/corporate/publications-manuals/departmental-plan-2020-2021/departmental-plan.html
- Immigration, Refugees and Citizenship Canada (IRCC). (2020c). IRCC Minister transition binder 2019: Client service. https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/transition-binders/minister-2019/client-service.html/
- Immigration, Refugees and Citizenship Canada (IRCC). (2021). Gender-based analysis plus (GBA+). https://www.canada.ca/en/immigration-refugees-citizenship/corporate/publications-manuals/departmental-plan-2021-2022/gender-based-analysis-plus.html
- Immigration, Refugees and Citizenship Canada (IRCC). (2022a). Immigration, Refugees and Citizenship Canada service standards. https://www.canada.ca/en/immigration-refugees-citizenship/corporate/mandate/service-declaration/service-standards.html
- Immigration, Refugees and Citizenship Canada (IRCC). (2022b). Terms and conditions—Immigration, Refugees and Citizenship Canada. https://www.canada.ca/en/immigration-refugees-citizenship/corporate/terms-conditions.html
- Immigration, Refugees and Citizenship Canada (IRCC). (2022c). Chinook Development and Implementation in Decision-Making–February 15 & 17, 2022. Minister’s Appearance before the Standing Committee on Citizenship and Immigration on Backlogs, Processing and Applications and Francophone International Students. https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-feb-15-17-2022.html
- Innovation, Science and Economic Development Canada (ISED). (2019). Spectrum management and telecommunications, consultation on a policy and licensing framework for spectrum in the 3500 MHz band. https://www.ic.gc.ca/eic/site/smt-gst.nsf/vwapj/SLPB-002-19-2019-09EN.pdf/$file/SLPB-002-19-2019-09EN.pdf
- Innovation, Science and Economic Development Canada (ISED). (2020). Spectrum management and telecommunications policy and licensing framework for spectrum in the 3500 MHz band. https://www.ic.gc.ca/eic/site/smt-gst.nsf/vwapj/SLPB-001-20-a3-2021-04EN.pdf/$file/SLPB-001-20-a3-2021-04EN.pdf
- Innovation, Science and Economic Development Canada (ISED), Canadian Intellectual Property Office. (2021). Processing artificial intelligence: Analysis from a Canadian perspective. https://ised-isde.canada.ca/site/canadian-intellectual-property-office/en/publications/processing-ai/processing-ai/processing-artificial-intelligence-analysis-canadian-perspective
- Jennings, R. (2020, August 6). Government scraps immigration “Streaming Tool” before judicial review. UK Human Rights Blog. https://ukhumanrightsblog.com/2020/08/06/government-scraps-immigration-streaming-tool-before-judicial-review/
- Lepage-Richer, T., McKelvey, F. (2022). States of computing: On government organization and artificial intelligence in Canada. Big Data and Society (2022) (July-December), 1-15. https://doi.org/10.1177/20539517221123304
- Luamba c. Procureur général du Québec, 2022 QCCS 3866.
- McEvenue, P., Mann, M. (2019, November 21). Case study: Developing guidance for the responsible use of artificial intelligence in decision-making at Immigration, Refugees and Citizenship Canada. Law Society of Ontario program Special Lectures 2019: Innovation, Technology, and the Practice of Law.
- Molnar, P., Gill, L. (2018). Bots at the gate: A human rights analysis of automated decision making in Canada’s Immigration and Refugee System. Citizen Lab and International Human Rights Program (Faculty of Law, University of Toronto) Research Report No. 114. https://hdl.handle.net/1807/94802
- Morneau, W. F. (2019). Investing in the middle class, Budget 2019. https://www.budget.canada.ca/2019/docs/plan/budget-2019-en.pdf.
- Nash, C. (2022). “Racism plays a role in immigration decisions,” House Immigration Committee hears. The Hill Times (March 28, 2022). https://www.hilltimes.com/story/2022/03/28/racism-plays-a-role-in-immigration-decisions-house-immigration-committee-hears/230084/
- Natural Resources Canada. (2021). Earthquake early warning. https://earthquakescanada.nrcan.gc.ca/eew-asp/system-en.php
- Office of the Privacy Commissioner of Canada (2020a). “Commissioners launch joint investigation into Clearview AI amid growing concerns over use of facial recognition technology”. February 21, 2020. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2020/an_200221/
- Office of the Privacy Commissioner of Canada (2020b). “Clearview AI ceases offering its facial recognition technology in Canada”. July 6, 2020. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2020/nr-c_200706/
- Ombudsman for the Department of National Defence (DND) and the Canadian Forces (CF). (n.d.). Fortitude under fatigue: Assessing the delivery of care for operational stress injuries that Canadian Forces Members need and deserve. http://www.ombudsman.forces.gc.ca/en/ombudsman-reports-stats-investigations-fortitude/report.page
- Project Arachnid. (2022). What is Project Arachnid? https://projectarachnid.ca/en/
- Public Safety Canada. (2020a). Earthquake preparedness. https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20201119/002/index-en.aspx
- Public Safety Canada. (2020b). Facial recognition. https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20200708/009/index-en.aspx
- Public Safety Canada. (2020c). Facial recognition. https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20200930/027/index-en.aspx
- Public Safety Canada. (2022a). Child sexual exploitation on the Internet. https://www.publicsafety.gc.ca/cnt/cntrng-crm/chld-sxl-xplttn-ntrnt/index-en.aspx
- Public Safety Canada. (2022b). Organized crime–Research highlights 2017-H006. https://www.publicsafety.gc.ca/cnt/rsrcs/pblctns/2017-h006/index-en.aspx
- Rogers Communications Canada Inc. (2017). Consultation on a technical, policy and licensing framework for spectrum in the 600 MHz band SLPB-005-17 reply comments of Rogers Communications Canada Inc. November 3, 2017. https://www.ic.gc.ca/eic/site/smt-gst.nsf/vwapj/SLPB-005-17-reply-comments-Rogers.pdf/$file/SLPB-005-17-reply-comments-Rogers.pdf
- Shared Services Canada, Steering Committee on Big Data. (2014). Diagnostic report.
- Transport Canada. (2013). Fatigue risk management system for the Canadian aviation industry—Introduction to fatigue audit tools—TP 14577. https://tc.canada.ca/en/aviation/publications/fatigue-risk-management-system-canadian-aviation-industry-introduction-fatigue-audit-tools-tp-14577
- Transport Canada. (2018). Enhancing rail safety in Canada: Working together for safer communities. https://tc.canada.ca/en/legislative-reviews/railway-safety-act-review-2017-18/enhancing-rail-safety-canada-working-together-safer-communities
- Treasury Board of Canada Secretariat. (2019a). ATIP Online Request Service Memorandum of Understanding Between Treasury Board of Canada Secretariat (TBS) and the Institution. (29695108). https://www.cnlopb.ca/wp-content/uploads/mous/moutbs.pdf
- Treasury Board of Canada Secretariat. (2019b). Policy on service and digital, Appendix A [Policy on Service and Digital]. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32603
- Treasury Board of Canada Secretariat. (2023). Directive on automated decision-making. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592
- UNESCO. (2020). Outcome document: First draft of the Recommendation on the Ethics of Artificial Intelligence. Ad Hoc Expert Group (AHEG) for the Preparation of a Draft Text of a Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000373434
- Valle-Cruz, D., Ruvalcaba-Gomez, E.A., Sandoval-Almazan, R. & Ignacio Criado, J. (2019). A review of artificial intelligence in government and its potential from a public policy perspective. Proceedings of the 20th Annual International Conference on Digital Government Research, 91-99. https://doi.org/10.1145/3325112.3325242
- Zuiderwijk, A., Chen, Y.-C. & Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Government Information Quarterly, 38(3), 1–19. https://doi.org/10.1016/j.giq.2021.101577