
IPC-OHRC Principles and the OPS AI Directive

Michelle E. DiEmanuele

Secretary of Cabinet, Head of the Ontario Public Service
Whitney Block, Rm 6420
99 Wellesley Street W
Toronto, Ontario
M7A 1A1

RE: IPC-OHRC Principles and the OPS AI Directive 

Dear Secretary DiEmanuele:

On Wednesday, January 21st, 2026, the Office of the Information and Privacy Commissioner (IPC) and the Ontario Human Rights Commission (OHRC) will jointly publish Principles for the Responsible Use of Artificial Intelligence (AI). These Principles provide guidance for organizations seeking to implement AI systems that comply with human rights and data protection obligations. This work builds upon the joint statement our organizations issued in May 2023 regarding the use of AI technologies.

The IPC is responsible for overseeing compliance with Ontario access and privacy laws, ensuring that personal information is protected, and that the public has the right to access government-held information. In the context of rapidly evolving technologies, particularly artificial intelligence, the IPC plays a critical role in identifying and mitigating privacy risks, promoting privacy-by-design and data minimization principles, and ensuring that the collection, use, and disclosure of personal information remain lawful, necessary, and proportionate. This oversight is essential for maintaining public trust and ensuring that technological innovation respects the rights and interests of all Ontarians.

The OHRC has oversight for human rights in Ontario. The Ontario Human Rights Code (Code) states that it is public policy in Ontario to recognize every person’s dignity and worth and provide equal rights and opportunities without discrimination. Under the Code, the OHRC has a broad mandate to promote, protect and advance respect for human rights and to identify and eliminate discriminatory practices. The OHRC uses various means to fulfill its mandate, including education, policy development, public inquiries and litigation. The OHRC has been actively engaged on issues of AI, providing submissions to government and public sector entities emphasizing a human rights-based approach. In addition, the OHRC has co-developed and published Canada’s first and only Human Rights Impact Assessment for AI technologies.

In December 2024, the Ontario Public Service released the Responsible Use of Artificial Intelligence Directive (OPS AI Directive), which outlines requirements for the transparent, responsible, and accountable use of AI by provincial institutions. Together, the OPS AI Directive and the IPC-OHRC Principles create a foundation for the responsible use of AI by provincial Institutions. The following comparison of the OPS AI Directive and the IPC-OHRC AI Principles demonstrates how they align and complement one another. 

By reconciling these requirements and providing greater specificity, our goal is to facilitate their practical implementation. When used together, the IPC-OHRC Principles and the OPS AI Directive not only support provincial ministries and other relevant government institutions in developing and applying responsible AI governance frameworks, but also enhance their adherence to applicable human rights and privacy rights obligations.  Further, using them together can foster responsible outcomes which benefit and protect Ontarians, upholding public trust and enhancing fairness, transparency, and accountability in the use of AI systems by Ontario’s public sector. 

In line with the IPC’s and OHRC’s commitment to public accountability and our responsibilities to the people of Ontario, this letter will be available on our respective websites.

Sincerely,

                                                                                                       

Patricia Kosseim
Commissioner
Information and Privacy Commissioner of Ontario

Patricia DeGuire
Chief Commissioner
Ontario Human Rights Commission
 



Introduction

This document compares the IPC-OHRC AI Principles (“IPC-OHRC Principles”) with the principles of the Ontario Public Service AI Directive (“OPS AI Directive”).  This comparison demonstrates how these principles align and complement each other. By reconciling these requirements and providing further specificity, our goal is to facilitate their implementation in practice and enhance their intended outcomes without adding burden to users. 

The OPS AI Directive, combined with the IPC-OHRC Principles, can help support provincial ministries and other relevant government institutions in developing and operationalizing coherent AI governance frameworks that adhere to their OPS requirements, as well as their human rights and privacy obligations. Their combined application can help drive responsible outcomes that both benefit and protect Ontarians in a way that further upholds public trust and enhances fairness, transparency, and accountability in the use of AI systems by Ontario’s public sector.

For ease of reference, the OPS AI Directive and the IPC-OHRC Principles are reproduced below side by side, together with a comparative examination of how they align and complement one another.   
 


 

1. Beneficial Purpose

The IPC and OHRC recognize and support the Ontario government’s emphasis on ensuring that any use of AI systems serves the public interest and delivers tangible benefits to all Ontarians, while proactively mitigating and addressing potential risks and harms. While the IPC-OHRC Principles do not include an equivalent, stand-alone principle of Beneficial Purpose, the underlying objective of advancing the public interest forms the foundation of the IPC’s and OHRC’s work and connects all the IPC-OHRC Principles which are set out below.

Ontario’s directive:

AI is used to benefit the people of Ontario.

The people interacting with the AI system, and those affected by its outcomes, are considered when exploring potential AI use. The unique and diverse needs of users of government programs and services that leverage AI, and those affected by the outcomes of AI use, are accounted for in the design, operation and interpretation of outcomes. The tremendous benefits that can be realized by use of AI must be shared with the people of Ontario, while also ensuring that direct and indirect risks to the people of Ontario are mitigated and balanced with the benefits.

IPC – OHRC principle: N/A

 


 

2. Valid and Reliable

This OPS Directive and the IPC-OHRC Principle both recognize that AI systems only serve their purposes if they are valid and reliable. 

The IPC-OHRC Principle adds practical specificity to the OPS requirement that AI systems be valid and reliable. The IPC-OHRC Principle requires institutions to independently verify performance across real operating environments, rather than relying on assumptions or vendor assurances. It also states that “working as intended” includes functioning in a way that promotes substantive equality across Ontario’s diverse communities. Further, this IPC-OHRC Principle ties system reliability directly to data accuracy, completeness, and representativeness, which helps to maintain valid outputs over time.
 

Ontario’s Directive:

AI use is justified and proportionate, and AI systems used are reliable and valid.

AI is only used where it serves a well-defined purpose, and the scope of AI use is proportionate to the problem it is trying to solve. Use follows a problem-first, rather than technology-first, approach. Once deployed, the AI system is reliable and valid – i.e., it works as intended and expected throughout its lifecycle.

IPC – OHRC principle:

AI systems must exhibit valid, reliable and accurate outputs for the purpose(s) for which they are designed, used, or implemented.

To be valid, AI systems must meet independent testing standards and be shown, using objective evidence, to fulfil the intended requirements for a specified use or application. They must be proven to be reliable by performing consistently, as required, over a specified duration, and in the environments in which they are intended to be used. They must also be robust enough to maintain that level of performance across various other operating conditions, particularly in situations in which experiences and outcomes may differ for Ontario’s diverse communities.

Validity and reliability standards contribute to the accuracy of observations, computations or estimates so that results can be reasonably accepted as being true. However, the accuracy of results also depends heavily on the accuracy, completeness, and quality of the input data provided to the AI system. Even a highly valid and reliable tool can yield poor outcomes if it is provided with inaccurate, biased, or incomplete data.

An AI system, therefore, should pass validity and reliability assessments prior to being deployed and be regularly assessed throughout its lifecycle to confirm that it continues to produce accurate results and to operate as expected in a variety of circumstances.

 


 

3. Safe and Privacy Protective

While the OPS AI Directive addresses safety by focusing on data use and collection, the IPC-OHRC Principle takes a more comprehensive approach, defining safety as a broader responsibility to prevent various types of harm, including impacts on human life, physical and mental health, economic security, and the environment. As set out below, these additional considerations are significant because they provide expectations for ongoing monitoring, evaluation, and eventual decommissioning of AI systems. Further, the IPC-OHRC Privacy Protective Principle enhances program design by elaborating on privacy-by-design requirements, and clarifying how lawful authority, data minimization, and privacy-enhancing measures should be incorporated from the outset to mitigate privacy-related risks. 

Ontario’s Directive:

AI is used in a safe, secure and privacy protective way.

Data privacy and security are maintained in a way that protects personal and sensitive information and minimizes potential risks and negative impacts, as per Ontario privacy legislation and internal sensitivity policies. Any use or collection of personal or sensitive data is proportionate and reasonable, accounting for the potential benefit to the people of Ontario.

IPC – OHRC principle:

AI must be developed, acquired, adopted and governed to prevent harm or unintended harmful outcomes that infringe upon human rights, including the right to privacy and non-discrimination.

AI systems should be monitored to support, among other considerations, human life, physical and mental health, economic security, and the environment. They should also be evaluated throughout their lifespan to confirm that they can withstand unexpected events or deliberate efforts that cause harm. This will, in part, require demonstrating that the AI systems have robust cybersecurity protection, and that human rights and privacy safeguards are firmly in place.

 Any new use of a given AI system should undergo a comprehensive assessment process to ensure it will constitute a safe use in the new context. Safe AI systems must also make evident when they are producing unexpected outputs. AI systems should be temporarily or permanently turned off or decommissioned when they become unsafe, and any negative impacts to individuals and groups must be reviewed accordingly.

AI should be developed using a privacy-by-design approach. Developers, providers, or users of AI systems should take proactive measures to protect the privacy and security of personal information and support the right of access to information from the very outset.

AI systems should be developed using a privacy-by-design approach that anticipates and mitigates privacy risks to individuals and groups. This approach ensures that privacy protections are embedded into the system from the outset, proactively safeguarding personal data and respecting the privacy of all individuals, especially those who are vulnerable or unable to provide informed consent. AI systems often interact with, or process, significant volumes of personal information in their development, training, or operation. The privacy protection principle requires clear lawful authority to collect, process, retain, and use these data. Accordingly, developers, providers, or users of AI systems must comply with applicable federal or provincial privacy laws, directives, regulations, or other legal instruments.

Any use of personal information should be limited to what is required to fulfill the intended purpose. Institutions developing, providing, or using AI systems should reduce the need for large volumes of personal information by using privacy-enhancing technologies, including de-identification methods or the use of synthetic data.

Privacy protective AI systems must build in measures to adjust the training data to mitigate any inherent bias and to ensure the accuracy of AI outputs, particularly where consequential decisions or inferences are being made about individuals or groups based on these outputs. 

Individuals should be informed whether and when their personal information is being used in the development, refinement, or operation of an AI system, as well as the purpose and intended use of the AI system. Where appropriate, individuals should be provided with an opportunity to access or correct their personal information, including information about them generated by an AI system. Individuals should be provided with at least a right of review for automated decision processes that do not involve high risk and, for high-risk automated decision processes that can materially impact an individual’s well-being, the choice of opting out in favour of a human decision maker.

AI systems must also be designed to protect the security of personal information from unauthorized access. Strong security safeguards are essential to ensure that personal information is protected from unauthorized access or misuse throughout the AI system’s life cycle.

 


 

4. Human Rights Affirming

The OPS AI Directive addresses not only how AI systems should be implemented to align with broader human rights considerations, but also whether they should be employed at all, acknowledging that AI systems that do not function in a non-discriminatory manner should not be used.

This IPC-OHRC Principle enhances the OPS AI Directive by providing practical instructions on how institutions can actively promote substantive equality in their use of AI systems. This includes proactive measures to identify and mitigate systemic bias within datasets, models, and deployment environments. This Principle clarifies that adopting a human rights-affirming approach to the use of AI systems necessitates non-discriminatory outcomes, supporting OPS program alignment with obligations under the Code. This IPC-OHRC Principle further emphasizes the importance of government and governmental actors upholding Charter-protected rights, ensuring that AI systems do not unduly target individuals involved in public or social movements, or impose disproportionate surveillance on marginalized communities.

In the absence of the specific provisions outlined in the IPC-OHRC Principle, developers, providers, and users of AI systems may inadvertently equate uniform treatment with substantive equality, which could result in discriminatory outcomes.  

Ontario’s Directive:

AI use is human rights affirming and non-discriminatory.

AI is used in ways that respect and protect equity, human rights and fundamental freedoms and ensure fairness consistent with applicable legislation including the Canadian Charter of Rights and Freedoms and the Ontario Human Rights Code. Community-informed context, including an understanding of potential discriminatory outcomes and their mitigations, as well as inclusive design, are the foundations of determining if and how AI is used.

IPC – OHRC principle:

Human rights are inalienable, and protections must be built into the design of AI systems and procedures. Institutions using AI systems must prevent and remedy discrimination effectively and ensure that benefits from the use of AI are universal and free from discrimination. 

Human rights law requires that developers, providers and institutions ensure that they do not infringe substantive equality rights. This can be done by proactively identifying and addressing systemic discrimination in the design and deployment of AI systems on grounds protected under the Ontario Human Rights Code (Code). Institutions should take active measures to mitigate the discriminatory impacts present in AI systems and their associated datasets, such as adjusting training data to resolve any inherent biases detected through ongoing monitoring. In addition, institutions should avoid the uniform use of AI systems with diverse groups. Such a use, though seemingly neutral, may actually result in adverse impact discrimination.

Institutions have both privacy and human rights obligations to ensure that the collection, processing, and sharing of personal information or pseudonymous or anonymous data does not contribute to or reinforce existing inequalities or discrimination. 

Likewise, government and governmental actors must comply with the rights guaranteed under the Canadian Charter of Rights and Freedoms, including the rights to freedom of expression, peaceful assembly, and association. This includes, but is not limited to, ensuring that AI systems do not unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another. 

 


 

5. Transparent

This OPS Directive prioritizes transparency in the use of AI. The IPC-OHRC Principle enhances the OPS Directive by offering a detailed practical framework which outlines key characteristics of transparency, including visibility, understandability, explainability, and traceability. This framework provides institutions with operational guidance beyond disclosing AI use. Such measures are critical for maintaining transparency and fostering public trust. According to this IPC-OHRC Principle, institutions are expected to understand and publicly disclose how their AI systems operate, and their documentation should include an account of the models used, the training and validation data, and how the systems are monitored. Overall, this helps institutions translate high-level transparency commitments into consistent and verifiable practices. The further specificity set out in the IPC-OHRC Principle below also supports institutions in providing information and accessible communication to affected individuals and communities.

Ontario’s Directive:

AI use is transparent and meaningful explanations of decisions are made available.

Information is provided to the public and public servants about how AI is being used in a service or process, in a way that facilitates understanding of outcomes, consequences and benefits.

IPC – OHRC principle:

Institutions that develop, provide and use AI must ensure that these AI systems are visible, understandable, traceable and explainable to others.

Transparency involves providing clear notice about the use of AI systems, and adopting policies and practices that make visible, explainable, and understandable how AI systems work. Institutions developing, providing, or using AI must also ensure that AI systems are traceable and explainable. Transparency fosters public trust by enabling interested parties to understand how an AI system functions, how it produces its outputs, and the measures being taken to ensure that the AI system operates safely and accurately. Transparency consists of the following characteristics.

First, AI systems must be visible. This means that institutions should provide a public account that explains the operation of the system throughout its lifecycle, from design and development to deployment and eventual decommissioning. This documentation may include privacy impact assessments, algorithmic impact assessments, or other relevant materials. Institutions must also be transparent about the sources of any personal data collected and used to train or operate the system, the intended purposes of the system, how it is being used, and the ways in which its outputs may affect individuals or communities. Importantly, this documentation should be written in clear, accessible language that avoids unnecessary jargon and technical complexity. Furthermore, institutions must notify individuals when they are interacting with an AI system and when any information presented to them has been generated by AI systems.

Second, AI systems must be understandable. This means that institutions must be able to explain how the technology operates and why errors may occur. To achieve this, they should document and retain sufficient technical information about the systems they are using so they can provide a full and transparent accounting of the basis on which decisions or actions were taken. 

Vendors of AI systems should design and communicate about their AI systems in a way that allows institutions that deploy and use them to understand how the AI system operates and how and why its outputs are generated as they are.

Third, AI systems must be explainable. This means institutions must be able to describe both the process (how) and the rationale (why) behind the outputs AI systems generate. This information should be communicated in a clear and accessible manner. The level of detail may vary according to the audience: whether it is directed to the public, non-experts, individuals or groups directly impacted by AI systems, or regulators overseeing institutional practices.

Fourth, AI systems must be traceable, meaning it must be possible for institutions to collect a thorough account of how the system operates, which can include:

  1. model details, such as the intended use of an AI system, the type(s) of algorithm or neural network, hyper-parameters, and pre- and post-processing steps;

  2. training and validation data, including details on data gathering processes, data composition, acquisition protocols, and data labelling information; and

  3. AI tool monitoring details, which can include performance metrics, failures, and periodic evaluations.

 


 

6. Accountable

The IPC-OHRC Principle complements the OPS Directive by specifying critical components which help ensure strong AI governance throughout the AI lifecycle. This Principle elaborates on essential and practical governance mechanisms, including clearly defined roles, mandatory documentation, and structured procedures for risk and impact assessments. As the IPC-OHRC Principle notes, institutions should clarify who is responsible at each stage of an AI system’s lifecycle and establish a human-in-the-loop approach that allows for real-time intervention, when necessary. Furthermore, the IPC-OHRC Principle strengthens accountability by requiring privacy, human rights, and algorithmic impact assessments from the outset, thereby proactively identifying potential risks of harm before they occur. The IPC-OHRC Principle also reinforces key aspects of accountability, such as expectations for independent review; mechanisms for receiving and responding to questions, concerns, and challenges; and internal whistleblowing protections. Together, these measures help ensure that institutions create the necessary and trusted conditions to receive timely information and undertake corrective or remedial actions regarding the use of AI systems.
 

Ontario’s Directive:

AI use is accountable and responsible.

There is clear ongoing human oversight, accountability for, and maintenance of AI systems, with a readily available process for the public and public servants to raise concerns about AI use.

 

IPC – OHRC principle:

Institutions should implement a robust internal governance structure with clearly defined roles, responsibilities, and oversight procedures, including a human-in-the-loop approach, to ensure accountability throughout the entire lifecycle of their AI systems.

Incorporating robust internal governance structures, including a human-in-the-loop approach, ensures that human oversight is maintained throughout the lifecycle of the AI and allows for real-time intervention as needed.

Up-front risk assessments should be carried out to identify and assess risks associated with the AI system, and to develop measures necessary to mitigate them. Such assessments should include privacy and human rights impact assessments, algorithmic impact assessments, and others as relevant and appropriate.

Institutions should designate a person or persons responsible for overseeing the development, deployment, and/or use of an AI system, and for pausing or decommissioning an AI system that produces unsafe outputs or begins to operate in ways which are not valid or reliable. 

Institutions should document their decisions about design and application choices in relation to AI systems. Where such a decision impacts specific groups or communities, they should be meaningfully informed and provided an opportunity to challenge that decision and any related outputs or results and seek recourse accordingly. 

Institutions should be prepared to explain and provide plain language documentation on how the AI system works to an independent oversight body, upon request, and undertake any remedial or corrective actions as directed. Institutions must establish a mechanism to receive and respond to privacy, transparency, or human rights questions or concerns, as well as freedom of information requests, or to any challenges concerning how the AI system arrived at a decision or was used during a decision-making process. 

Members of institutions should be empowered through safe whistleblowing protections to report instances where an AI system does not comply with legal, technical, or policy requirements. Whistleblowers should be able to report non-compliance to an independent oversight body responsible for reviewing or overseeing the AI system, without fear of reprisal. Institutions should be subject to review by an independent oversight body with authority to enforce this and the other AI principles and require the organization to undertake remedial or corrective actions associated with the AI system.

 

1. Responsible Use of Artificial Intelligence Directive: https://www.ontario.ca/page/responsible-use-artificial-intelligence-directive