Submission on Ontario’s Trustworthy Artificial Intelligence (AI) Framework

June 14, 2021


Hillary Hartley
Chief Digital and Data Officer, Deputy Minister
Ontario Digital Service
595 Bay Street
Toronto, ON M7A 2C7

Dear Deputy Minister Hartley:

Re: Consultation on Ontario’s Trustworthy Artificial Intelligence (AI) Framework

The Ontario Human Rights Commission (OHRC) welcomes the opportunity to provide a submission to Ontario’s public consultation on the Trustworthy Artificial Intelligence Framework (Framework) and its three commitments:

  • Commitment 1: No AI in secret

The use of AI by the government is always transparent, fair, and equitable.

  • Commitment 2: AI use Ontarians can trust

Risk-based rules are in place to guide the safe, equitable, and secure use of AI by government.

  • Commitment 3: AI that serves all Ontarians

Government use of AI reflects and protects the rights and values of Ontarians.

The OHRC is concerned about the unique implications that artificial intelligence (AI) presents for the human rights of Ontario’s marginalized and vulnerable communities. The federal government has defined AI as any technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours or solving problems.[1] AI systems and automated decision-making have already become an integral part of everyday life. However, some early applications of AI systems – for example, employment screening and facial recognition tools – have been found to unintentionally perpetuate historic patterns of discrimination by incorporating the developers’ biases into the systems, or by relying on biased data.

When AI systems that are flawed in their development are used by government for public services, they can compound existing disparities and/or create new discriminatory conditions. Such conditions could have a profound and ongoing impact on marginalized and vulnerable communities, and erode public trust in institutions. As automated decision-making and AI systems are increasingly relied upon to make public services more efficient and accessible, it is critical that these systems are not biased and do not create or perpetuate systemic discrimination.

Human rights are legally enshrined in international conventions and Canada’s human rights laws, including the Canadian Human Rights Act, the Employment Equity Act, the Charter of Rights and Freedoms, and provincial human rights codes, including Ontario’s Human Rights Code (Code). Ontario must adopt the use of AI in a way that is consistent with its legal obligations under human rights law.


Some specific areas of concern identified by the OHRC


Policing

The OHRC has identified serious human rights concerns about the potential discriminatory impact of police data collection through using facial recognition and predictive policing algorithms that may adversely and disproportionately affect Code-protected groups.

Law enforcement organizations are increasingly using AI to identify individuals, collect and analyze data and help make decisions. Tools and approaches developed to predict whether people will pose a risk to others should be designed and applied in a way that relies on transparent, accurate, valid and reliable information about risk. Organizations using AI are liable for any adverse impacts based on Code grounds, even if the tool, data or analysis is designed, housed or performed by a third party.

There is a danger in using artificial intelligence tools or approaches that are not accurate, or that are based on racially biased data. These tools or approaches may overestimate the risk posed by racialized or Indigenous people, and compound existing disparities in criminal justice outcomes. For example, determining a person’s risk level based on the number of times they have been stopped by police, and have therefore become “known to police,” can have a profound and ongoing impact on groups who are most likely to be stopped due to racial profiling.


Health care

The OHRC has raised ongoing concerns about unequal access to health-care services and the disproportionately negative health outcomes that Code-protected groups continue to experience. Recently, we published questions and answers for employers and service providers to make sure that any requirements for personal health information related to the pandemic, including use of the COVID Alert app, are legitimate and consistent with human rights and privacy laws. The OHRC also called for human rights data collection, combined with health information, to monitor and address the disproportionate impact of COVID-19 and services related to the pandemic on Code-protected groups.



Education

The OHRC has also seen concerning uses of technology and AI in education. For example, the Minister of Education’s 2020 Peel District School Board Review found that the Peel District School Board (PDSB) relied on an algorithm for vetting prospective teacher candidates that appeared to inappropriately screen out otherwise qualified racialized candidates. The algorithm continued historical preferences in hiring by selecting candidates who mirrored previous successful hires. This is an example of how technology and AI can inadvertently facilitate discrimination.


Human rights-based actions to achieve Framework commitment goals

The OHRC recommends several actions, consistent with a human rights-based approach, to strengthen human rights protections, accountability and oversight, and collectively help the government achieve the goals set out under its three Framework commitments. These actions would also engage several of the “points”[2] set out by the ODS in its Digital Service Standard, to help the government deliver simpler, faster, better government services.

The OHRC acknowledges the Law Commission of Ontario’s (LCO) expertise in the area of AI, as seen in its reports, including The Rise and Fall of AI and Algorithms in American Criminal Justice: Lessons for Canada (October 2020), Legal Issues and Government AI Development (March 2020), and Regulating AI: Critical Issues and Choices (April 2020). The actions listed below are informed by the LCO’s continued work in this area.


  1. Enact the Framework in legislation and regulations to govern the development, use and implementation of AI.
    • Any further actions or steps related to AI, especially in high-risk areas such as government services affecting vulnerable communities, should be taken with caution until such legislation and regulations are enacted.

  2. Engage in meaningful consultations with human rights experts, including the OHRC and representatives of Code-protected groups, at every stage of developing any legislative or regulatory framework related to AI.

  3. Set out in the legislation and Framework a recognition of human rights values and principles, and a commitment to address systemic bias in AI that negatively affects, or fails to appropriately account for, the unique needs of Code-protected groups, including vulnerable populations such as people with disabilities, children and older persons, Indigenous and racialized communities, and low-income communities.

  4. Provide for the legislation and Framework to apply to provincial and municipal governments, as well as government agencies and services, including but not limited to education, policing, health care, corrections, transportation and social assistance.

  5. Set out in the legislation and Framework strict limits on measures (including those targeting specific groups) that cause harm or infringe rights, including but not limited to tracking or surveillance, use of biometric technologies and data collection.

  6. Provide in the legislation and Framework clear, plain-language definitions of all technical terms and systems, including but not limited to Artificial Intelligence, Automated Decision-Making and Data.

  7. Provide in the legislation and Framework a requirement to create and maintain a mandatory public catalogue disclosing all AI systems used by the government, with plain-language explanations of each system, its purpose, how it is used, what information is collected and what actions are taken to minimize discriminatory effects and outcomes.

  8. Provide in the legislation and Framework a requirement to collect and make publicly available human rights data on Ontario’s use of AI systems and automated decision-making, disaggregated by sociodemographic variables including Code-protected groups.

  9. Establish in the legislation a mechanism for independent monitoring by an oversight body that ensures publicly reported impact assessments and audits of AI systems for bias and discrimination are conducted on an ongoing basis, with jurisdiction to address systemic issues and hold the government accountable.

  10. Provide in the legislation a disclosure-of-wrongdoing (whistle-blowing) mechanism and an accessible, effective public process for hearing, adjudicating and remedying systemic issues related to AI.

  11. Provide in the legislation and Framework a requirement to create key performance indicators (KPIs) to measure progress toward the goals set out under the three Framework commitments. Given the continuously developing nature of AI technology, the KPIs should integrate regular public re-evaluation processes.

  12. Make sure all steps of the Framework, and the implementation of any AI, occur only after meaningful public consultations, paying particular attention to the communities most likely to be targeted or disproportionately affected by the AI, including vulnerable populations such as people with disabilities, children and older persons, Indigenous and racialized communities, and low-income communities.


The OHRC acknowledges that this stage of the consultation process focuses on a broad view of AI use. Our recommendations reflect this broad view and the preliminary stages of an ongoing process towards creating guidelines for the government’s use of AI. We hope to continue to provide valuable insights throughout the process. The Framework would also benefit from the government engaging in a further robust discussion with stakeholders about the meaning of “trustworthy AI” as a concept, including its elements and components.


Human rights, data and AI

The OHRC has long called on governments to collect and analyze data to monitor the negative impact of programs and services on vulnerable groups protected under the Code. This includes supporting the Anti-Racism Directorate’s development of consistent data standards for certain public sector organizations, and providing guidance to organizations on how to collect human rights-based data. For example, the OHRC has called for human rights data collection to monitor racial profiling in policing, race and mental health disparities in segregation in provincial jails, and the over-representation of Indigenous and racialized children and youth in the child welfare system. More recently, we called for data collection to monitor the disproportionate impact of COVID-19 and related measures on people with disabilities, women, Indigenous people and racialized communities, especially people living in poverty.

In our calls for data collection, the OHRC routinely stresses the importance of the human rights principles of equity, privacy, transparency and accountability. As the government moves forward with developing a framework for an accountable, safe and rights-based approach to AI, it has an opportunity to develop principles for AI systems that advance positive human rights changes, rather than creating or perpetuating systemic discrimination. The OHRC supports a thoughtful examination of the opportunities and risks in implementing AI, and would be pleased to offer assistance in ensuring that these important principles form a part of the government’s human rights-based approach.



Ena Chadha, LL.B., LL.M.
Chief Commissioner

cc:  Amy Bihari, Senior Manager (Acting), Data Access & Analytics, Ontario Digital Service
     Hon. Doug Downey, Attorney General
     OHRC Commissioners



[2] Including but not limited to points 7 (Make it accessible and inclusive), 10 (Embed privacy and security by design) and 11 (Support those who need it).