Ontario Human Rights Commission Submission to the Standing Committee on Social Policy Regarding Bill 149, Working for Workers Four Act, 2023

February 13, 2024


The Ontario Human Rights Commission (OHRC) welcomes the opportunity to provide this submission on the proposed Bill 149, Working for Workers Four Act, 2023. This submission concerns Section 8 amendments to the Employment Standards Act relating to Canadian experience and artificial intelligence.

 

“Canadian Experience” requirements in job advertisements and application forms

Section 8.3 (1) of Bill 149 would prohibit any requirements related to Canadian experience in job advertisements and associated application forms, unless they meet prescribed criteria.

The OHRC released the Policy on removing the Canadian experience barrier in 2013 and believes that such requirements in employment and accreditation continue to raise human rights concerns. In the OHRC’s view, a strict requirement for Canadian experience is prima facie discrimination under Sections 23(1) and 23(2) of the Ontario Human Rights Code (Code), which prohibit employers from displaying job advertisements, using application forms, or asking applicants questions that directly or indirectly classify or indicate qualifications under a prohibited ground of discrimination. Under the Code, employers can ask about Canadian experience only if they can show that work experience in Canada is a legitimate requirement and that providing human rights accommodations would cause undue hardship.1

The OHRC’s policy refers to the Supreme Court of Canada’s decision in Meiorin and its legal test for determining whether an employment standard that results in discrimination can be justified as a bona fide occupational requirement. Under the test, the employer must show that the standard:

  1. was adopted for a purpose or goal that is rationally connected to the function being performed;
  2. was adopted in good faith, in the belief that it is needed to fulfill the purpose or goal; and
  3. is reasonably necessary to accomplish its purpose or goal, because it is impossible to accommodate the claimant without undue hardship.2

Section 8.3 (2) leaves to regulation the criteria prescribed for the exemptions to Section 8.3 (1). The OHRC recommends that the government:

  • recognize the test in Meiorin when setting out the criteria under Section 8.3 (2).

 

Disclosure of the use of artificial intelligence (AI) in recruitment and hiring

Section 8.4 (1) of Bill 149 would amend the Employment Standards Act to require employers to disclose the use of AI to screen, assess or select applicants for a publicly advertised job position. This is a positive first step toward providing much-needed transparency and safety measures for the use of AI technologies in recruitment and hiring.

Employment is one of five protected social areas in the Ontario Human Rights Code, alongside accommodation (housing), contracts, goods, services and facilities, and membership in unions and trade or professional associations. Human rights claims involving employment are a significant contributor to the pressures on the human rights system. Since fiscal year 2019-2020 (inclusive), more than 50% of the applications received by the Human Rights Tribunal of Ontario (HRTO)3 and intakes at the Human Rights Legal Support Centre (HRLSC)4 have related to employment. Employment is thus an area experiencing significant human rights challenges, and AI can exacerbate employment-related discrimination if the government does not enforce the protections that it enacts.

It is in this context that the OHRC makes the following recommendations for this provision.

 

1. Transparency and openness upfront

AI technologies for recruitment and hiring serve a broad range of functions, including but not limited to:

  • targeting the places where job ads will be displayed;
  • screening to review, filter and score applications based on job requirements;
  • testing and interviewing candidates, which can include chatbots and technologies used for analysis of facial expressions and speech patterns; and
  • reference checking and contacting successful candidates.

Transparency in the use of such technologies can build trust and confidence with the public and job seekers.  

The introduction of the disclosure provision in Section 8.4 (1) should result in proactive transparency. However, it does not provide for interpretability (the ability to understand how an AI model works and predict a particular outcome).5 To do so, it would need provisions requiring employers to detail:

  • the purpose of using the technologies,
  • types of technologies and how they are used,
  • at which points the technologies are used, and
  • the data on personal characteristics that the technologies use.
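For illustration only, the disclosure elements listed above could be captured in a structured notice attached to a job posting. The field names and sample values below are hypothetical, not a format prescribed by Bill 149:

```python
# Hypothetical machine-readable AI-use disclosure for a job posting.
# Field names and values are illustrative only; Bill 149 prescribes no format.
ai_disclosure = {
    "purpose": "Rank applications against posted job requirements",
    "technologies": ["resume screening model", "scheduling chatbot"],
    "stages_used": ["screening", "initial assessment"],
    "personal_data_used": ["work history", "education", "skills keywords"],
}

def render_notice(disclosure: dict) -> str:
    """Format the disclosure fields as plain text for inclusion in a posting."""
    lines = ["This employer uses AI technologies in recruitment:"]
    for field, value in disclosure.items():
        if isinstance(value, list):
            value = ", ".join(value)
        lines.append(f"  {field.replace('_', ' ')}: {value}")
    return "\n".join(lines)

print(render_notice(ai_disclosure))
```

A structured notice of this kind would let applicants see at a glance the purpose, the stages involved, and the personal data in play, rather than a single catch-all statement.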

As outlined in Dr. Teresa Scassa’s submission on Bill 149, employers could satisfy the notice requirement by including in all job postings a general statement that they might use AI technologies in their process, without any information about when and how the technologies are used. The disclosure requirement as currently written is vague: it does not give applicants meaningful notice or allow them to anticipate when and how AI technologies might affect their application, and it does not adequately enhance transparency regarding the employer’s use of AI technologies.

The scope of Section 8.4 (1) of Bill 149 should expand from screening, assessment and selection to all aspects of the recruitment and hiring process to prevent discrimination. For example, the use of AI technologies to target online advertisements in a discriminatory manner has raised human rights concerns in Canada and other jurisdictions.

In 2019, the OHRC and the Canadian Human Rights Commission jointly urged Facebook Inc. (now known as Meta Platforms Inc.) to create safeguards to prevent discriminatory targeting of advertisements for housing, employment and credit opportunities that excluded people based on protected characteristics, such as age or gender. The company implemented changes to its AI-based advertising platform in December 2020 and worked with experts, academics, researchers, and civil rights and privacy advocates to address algorithmic bias. In the United States, the company settled a similar matter with the Department of Justice in 2022, and agreed to terminate the use of a machine learning-based algorithmic targeting tool that was alleged to rely on protected characteristics, such as race, religion, sex, disability, family status and place of origin, to engage in the discriminatory delivery of housing advertisements.

AI can also replicate unfair requirements and biased language in job advertisements.6 If not prompted with the specific skills and duties to include in an advertisement, AI technologies may insert job requirements that are not essential to the position’s functions, based on patterns in the system’s training data. Job advertisements that specify requirements not essential to the job duties can prevent or discourage people from applying, and may infringe on human rights if they adversely exclude people with characteristics protected under Code grounds.

The OHRC recommends that:

  • Bill 149 requires employers to disclose, in job postings, at which points AI technologies are used in the recruitment and hiring process, and for what purpose.
  • Bill 149 requires employers to disclose, in job postings, how data associated with personal characteristics of the applicant may be used by AI technologies.
  • The requirements relating to the use of AI technologies apply to all stages and activities of recruitment and hiring.

 

2. Safeguards

“Artificial intelligence” encompasses various fields that involve training algorithms to identify patterns in data and make predictions based on those patterns to perform specific tasks.7 Accordingly, AI technologies inherently present significant human rights concerns because they can exponentially replicate and exacerbate existing patterns of systemic bias and discrimination.

Examples:

  • A review of the Peel District School Board found that an algorithmic technology used by the school board to vet prospective teaching candidates had inappropriately screened out racialized candidates who were otherwise qualified for the positions. The technology was designed to mirror previous successful hires and reproduced historical preferences in hiring that perpetuated discrimination.8
  • The United States government reached a settlement with an employer accused of using hiring software to automatically reject female applicants aged 55 or older or male applicants aged 60 or older. Resubmitting the same job applications with modified birth dates advanced the applicants to the interview stage.9

Employers claim that AI technologies can relieve human resource pressures, reduce the cost and time it takes to hire, improve their ability to identify the best candidates, and eliminate bias and discrimination from their process. However, recruitment and hiring are often used as examples of discriminatory bias in AI technologies.

  • Automated screening technologies have been found to use data on individuals’ Code-ground characteristics to reject qualified applicants or assign them lower assessment scores. For example, AI systems have learned to exclude female applicants after receiving training data drawn from the applications of successful hires in a predominantly male industry.10
  • AI technologies have also been found to use personal information, such as names, postal codes and gaps in employment history to make inferences on applicants’ race, disability, age and other Code grounds. Hiring decisions are then made based on proxy data that may be discriminatory.
  • Interview technologies may be less reliable for assessing applicants who have speech impediments, who require a screen reader, or whose first language is different.11 Technologies used to analyze applicants’ emotional expressions are more likely to incorrectly assign negative emotions to Black faces than to White faces.12
  • Chatbots and other user interfaces for applicants might not be able to respond appropriately to requests for human rights-based accommodations during the recruitment and hiring process. For example, a person who is blind might have questions or request accommodation after receiving the instructions for a written assessment.13
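The proxy-data problem described above can be shown with a minimal sketch (all names and numbers are invented): a screening rule tuned to historical hiring patterns reproduces those patterns through a correlated feature such as postal code, even when applicants are equally qualified:

```python
# Illustration (with made-up data) of how a proxy feature can carry
# protected-characteristic information even when that characteristic is
# removed from the inputs. "postal_code" stands in as the proxy.
applicants = [
    {"postal_code": "A", "qualified": True},
    {"postal_code": "A", "qualified": True},
    {"postal_code": "B", "qualified": True},
    {"postal_code": "B", "qualified": True},
]

# Hypothetical past data: an employer that historically favoured area A.
historical_hire_rate = {"A": 0.9, "B": 0.2}

def screen(applicant):
    """Advance applicants whose proxy feature matched past hires."""
    return historical_hire_rate[applicant["postal_code"]] >= 0.5

advanced = [a["postal_code"] for a in applicants if screen(a)]
print(advanced)  # only area-A applicants advance despite equal qualifications
```

Because the rule never looks at a protected characteristic directly, the discrimination is invisible in the model's inputs; it surfaces only when outcomes are compared across groups.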

In its guide on Human Rights at Work, the OHRC identifies examples of discrimination in recruitment and hiring, including in setting job requirements, advertising, designing application forms, and interviewing and making hiring decisions. Ontarians should be protected from known and novel discriminatory practices in recruitment and hiring that can be replicated, exacerbated or created by AI technologies.

In a joint statement, the OHRC and the Information and Privacy Commissioner of Ontario discussed the importance of upholding privacy as part of a broader human rights approach. The concerns raised included the reliance on immense volumes of personal information, gathered from multiple disparate sources, to train and operate AI systems, which can perpetuate biases and lead to disparate impacts on Ontarians. The use of voluntarily disclosed personal information to draw inferences about additional private data also raises concerns about infringements of privacy rights that result in discriminatory practices.14

It is essential for employers to ensure that safeguards are in place before incorporating AI technologies into their processes. At minimum, assessments should be conducted for compliance with legal obligations, including those under human rights laws.

The OHRC recommends that Bill 149 requires employers to:

  • Test their AI technologies prior to deployment for compliance with their obligations under Ontario’s privacy and human rights laws. The testing methodology and results should be publicly available.
  • Re-test their AI technologies regularly to mitigate risks as their models evolve.
  • Suspend the use of AI technologies when they violate legal protections for job applicants, including those under provincial privacy laws and the Code.
  • Establish and publicly disclose their processes to regularly retrieve and analyze their data on the use of AI, to ensure compliance with protections for job applicants (including those under provincial privacy laws and the Code).
  • Notify the public and individuals when information is compromised.
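The first recommendation above calls for pre-deployment testing with a publicly available methodology. As one illustrative methodology (a heuristic from US employment-testing practice; neither Bill 149 nor the Code prescribes a specific metric), a selection-rate audit using the "four-fifths rule" can be sketched as:

```python
# Sketch of a selection-rate audit using the "four-fifths rule" heuristic.
# All outcomes below are hypothetical; this is an illustrative check only.
def selection_rate(outcomes):
    """Fraction of applicants a screening tool advanced (1 = advanced)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are conventionally flagged for review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high > 0 else 0.0

# Hypothetical screening outcomes for two applicant groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% advanced
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% advanced

ratio = adverse_impact_ratio(group_a, group_b)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: selection rates differ enough to warrant review")
```

A check like this does not establish or rule out discrimination under the Code; it is a screening statistic that tells an employer where closer human rights review is needed.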

 

3. Accountability

Interpretability and explainability (the ability to understand how an AI model arrived at a particular decision after it is made) are known challenges in the field of AI for addressing bias and discriminatory outcomes.15 In the context of human rights and AI, interpretability and explainability are necessary for individuals to exercise their human rights and for employers to prevent and address systemic discrimination.

The processing of personal information and the making of decisions may be opaque, creating a barrier for individuals seeking to understand whether discrimination occurred and to explain it if they pursue recourse through the human rights system.16 Employers can mitigate human rights concerns through transparency by (a) indicating upfront how applicants’ personal data will be used, and (b) providing, after a decision, explanations of the sources of information that were a factor in the predictions, recommendations or decisions applied.
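The after-the-fact explanations described above could be supported by a per-decision record of the data sources and factors an AI system relied on. The structure below is a hypothetical sketch, not a format required by Bill 149 or any cited guidance:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningDecisionRecord:
    """Hypothetical per-applicant record supporting later explanation
    of an AI screening decision (structure is illustrative only)."""
    applicant_id: str
    outcome: str                # e.g. "advanced" or "rejected"
    data_sources: list          # where the input data came from
    factors: dict               # factors the model weighed, with scores
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def explain(self) -> str:
        """Plain-language summary an employer could give on request."""
        top = sorted(self.factors.items(), key=lambda kv: -kv[1])
        weighted = "; ".join(f"{name} ({score:.2f})" for name, score in top)
        return (f"Outcome '{self.outcome}' drew on "
                f"{', '.join(self.data_sources)}. Weighted factors: {weighted}.")

record = ScreeningDecisionRecord(
    applicant_id="A-1042",
    outcome="advanced",
    data_sources=["application form", "resume"],
    factors={"relevant experience": 0.62, "skills match": 0.31},
)
print(record.explain())
```

Retaining records of this kind would serve both recommendations below: they give applicants something concrete to request, and they keep the information available long enough for a right of access to be exercised.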

The OHRC recommends that Bill 149 requires employers to:

  • retain information long enough for individuals to exercise their right to access (a practice recommended by the Information and Privacy Commissioner and other privacy authorities across Canada).17
  • disclose, upon request by the job applicant, how their personal information, including data on their personal characteristics, was used by AI technology in the screening and decision-making for their job application.

 

Additional concerns related to Bill 149’s requirement to disclose the use of AI

In discharging its duty to enforce the Employment Standards Act (ESA), the Ministry of Labour, Immigration, Training and Skills Development should ensure that AI technologies used in recruitment and hiring are used safely and in compliance with the Code. Further, ESA regulations must ensure that Ontario’s employment laws on the use of AI are followed, with enhanced responsibilities and functions to receive and investigate complaints relating to the use of AI technologies for recruitment and hiring.

Other legislation complements the Human Rights Code, and other entities complement Ontario’s human rights system.

Enforcing complementary laws helps to prevent infringement of fundamental rights. It also reduces the need for job seekers and employers to undertake potentially costly and time-consuming processes to resolve human rights disputes, which can involve mediation or adjudication that requires the services of the HRLSC and HRTO. As the government modernizes provincial laws for the opportunities and risks presented by AI in every sector, the OHRC urges the government, its agencies, and the broader public sector to be equipped to advance the safe use of AI technologies.

The Commission provides this submission to the Standing Committee on Social Policy for consideration during the review of Bill 149. In keeping with the OHRC’s commitment to public accountability and service to Ontarians, it will make this submission public.

 

Summary of Recommendations

 

“Canadian Experience” requirements in job advertisements and application forms 

  • Recognize the test set out in Meiorin by the Supreme Court of Canada for the criteria required by Section 8.3 (2).

 

Disclosure of the use of artificial intelligence (AI) in recruitment and hiring

 

  1. Transparency and openness upfront

  • Require employers to disclose, in job postings, at which points AI technologies are used in the recruitment and hiring process, and for what purpose.
  • Require employers to disclose, in job postings, what data associated with personal characteristics of the applicant may be used by AI technologies and how the data are used.
  • Apply requirements of Bill 149 relating to the use of AI technologies to all stages and activities for recruitment and hiring.

 

  2. Safeguards

  • Require employers to test their AI technologies prior to deployment for compliance with their obligations under Ontario’s privacy and human rights laws. The testing methodology and results should be publicly available.
  • Require employers to re-test their AI technologies regularly to mitigate risks as their models evolve.
  • Require employers to suspend the use of AI technologies when they violate legal protections for job applicants, including those under provincial privacy laws and the Code.
  • Require employers to establish and publicly disclose their processes to regularly retrieve and analyze their data on the use of AI, to ensure compliance with protections for job applicants (including those under provincial privacy laws and the Code).
  • Require employers to notify the public and individuals when information has been compromised.

 

  3. Accountability

  • Require employers to retain information long enough for individuals to exercise their right to access.
  • Require employers, upon request by the job applicant, to disclose how their personal information, including data on their personal characteristics, was used by AI technology in the screening and decision-making for their job application.

Endnotes

1 Ontario Human Rights Commission (1 February 2013), “Policy on Removing the ‘Canadian experience’ barrier - The Ontario Human Rights Code”, online: https://www.ohrc.on.ca/en/policy-removing-%E2%80%9Ccanadian-experience%E2%80%9D-barrier/2-ontario-human-rights-code

2 OHRC, “Policy on Removing the ‘Canadian experience’ barrier” (2013), online: https://www.ohrc.on.ca/en/policy-removing-%E2%80%9Ccanadian-experience%E2%80%9D-barrier

3 Tribunals Ontario, “Archived Reports, Plans and Standards”, online: https://tribunalsontario.ca/en/archived-reports-plans-standards

4 Human Rights Legal Support Centre, “Annual Reports”, online: https://hrlsc.on.ca/reports-and-statistics/annual-reports

5 IBM, “What is explainable AI?”, online: https://www.ibm.com/topics/explainable-ai

6 Hunkenschroer, A.L. and Luetge, C. (2022), “Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda”, online: https://link.springer.com/article/10.1007/s10551-022-05049-6

7 Canada Centre for Cyber Security (2022), “Artificial Intelligence - ITSAP.00.040”, online: https://www.cyber.gc.ca/en/guidance/artificial-intelligence-itsap00040

8 Ontario Ministry of Education (2020), “Review of the Peel District School Board”, online: https://files.ontario.ca/edu-review-peel-dsb-school-board-report-en-2023-01-12.pdf

9 United States Equal Employment Opportunity Commission (2023), “iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit”, online: https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit

10 Global News (10 October 2018), “Amazon ditches AI recruiting tool that didn’t like women”, online: https://globalnews.ca/news/4532172/amazon-jobs-ai-bias

11 Organisation for Economic Co-operation and Development (18 July 2023), “AI-powered HR technology has a disability problem”, online: https://www.oecd-forum.org/posts/ai-powered-hr-technology-has-a-disability-problem

12 Brookings Institution (20 December 2021), “Why New York City is cracking down on AI in hiring”, online: https://www.brookings.edu/articles/why-new-york-city-is-cracking-down-on-ai-in-hiring

13 United States Equal Employment Opportunity Commission (12 May 2022), “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees”, online: https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence

14 For example, using data from a combination of sources to infer the race or gender of a job applicant. That private identity data that the applicant did not disclose could then be used by AI technologies to make discriminatory recommendations or decisions against the person.

15 IBM, “What is explainable AI?”, online: https://www.ibm.com/topics/explainable-ai

16 The OHRC does not receive individual complaints. In Ontario’s human rights system, individuals can receive legal advice from the Human Rights Legal Support Centre and resolve claims of discrimination by filing an application to the Human Rights Tribunal of Ontario.

17 Office of the Privacy Commissioner of Canada (7 December 2023), “Principles for responsible, trustworthy and privacy-protective generative AI technologies”, online: https://priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/gd_principles_ai