Combatting Bias Ahead of NYC Bias Audit Law

It is not exactly a trade secret that AI is becoming ubiquitous across a variety of sectors. Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows has said that more than 80% of employers now use AI in their hiring processes, with that percentage expected to increase significantly as AI becomes more capable. Technology is transformative, particularly in the staffing industry, allowing employers to streamline their recruitment operations and allocate resources efficiently. But as the prevalence of technology increases, so does its potential to perpetuate bias and discrimination.

Local Law 144: Regulating AI Use in Hiring

Policymakers have reacted by formulating a new wave of legislation to regulate AI’s use. Significantly, New York City has become the first major US city to require employers to conduct annual bias audits of their automated hiring and promotion processes. Local Law 144 (the “NYC Bias Audit Law”), enforcement of which began on July 5, 2023, prohibits companies from using automated tools to hire or promote candidates unless an impartial assessment has first been conducted. The regulations also prohibit algorithms from considering protected characteristics such as age, race, gender and sexuality.

Local Law 144 will affect hundreds of organizations within the city — perhaps even your own. But is it comprehensive enough? Or should HR leaders take it upon themselves to further scrutinize the use of AI in their processes?

It’s important to recognize that regulations like Local Law 144 aren’t just about avoiding penalties; they’re part of a broader effort to ensure AI is used responsibly.

Responsible AI

This brings us to the concept of “Responsible AI.” While the threat of AI perpetuating bias and discrimination may initially seem daunting, the school of thought known as responsible AI treats the advent of AI as an opportunity to seize rather than an obstacle to overcome.

AI has the potential to improve processes and counter biases. With proper use and human oversight, AI and algorithms can identify and eliminate structural biases throughout the organizational life cycle.


Preparing for Compliance: Understanding the Implications and Requirements of NYC’s Bias Audit Law

As AI evolves, so will the regulations aimed at ensuring it is deployed fairly and equitably in the staffing industry. The NYC Bias Audit Law represents an important step in this process.

To ensure compliance with Local Law 144, employers or employment agencies must:

  • Secure an annual independent bias audit of the automated employment decision tools (AEDTs) used to hire or promote candidates or employees residing in New York City.
  • Notify candidates in advance if using AI-powered tools for hiring or promotion processes.
  • Include an analysis of whether the AEDT produces disparate outcomes for different demographic groups, by calculating the selection rate and impact ratio for each category.
  • Indicate the number of individuals assessed by the AEDT who fall within an unknown category; a category that makes up less than 2% of the audit data may be excluded from the impact ratio calculation (a simplified version of these calculations is sketched below).
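To make the audit requirements above more concrete, here is a minimal Python sketch of the kind of calculation involved: selection rates by category, a 2% data threshold, and impact ratios relative to the highest-rated category. The records, category labels and helper names are hypothetical illustrations, not the methodology prescribed by the law or by any particular auditor.

```python
from collections import defaultdict

# Hypothetical audit records for one AEDT: (EEO category, was_selected).
# Real audits would use far more historical data, broken out by sex,
# race/ethnicity and intersectional categories.
records = [
    ("Female", True), ("Female", False), ("Female", True),
    ("Male", True), ("Male", True), ("Male", False), ("Male", True),
    ("Unknown", False),
]

MIN_SHARE = 0.02  # a category under 2% of the data may be excluded from impact ratios


def bias_audit(records):
    totals, selected = defaultdict(int), defaultdict(int)
    for category, was_selected in records:
        totals[category] += 1
        selected[category] += was_selected

    n = len(records)
    # Individuals in an unknown category are counted and reported separately.
    unknown_count = totals.pop("Unknown", 0)

    # Selection rate: share of candidates in each category the tool selected.
    selection_rates = {c: selected[c] / totals[c] for c in totals}

    # Categories below the 2% data threshold can be left out of the ratio step.
    eligible = {c: r for c, r in selection_rates.items() if totals[c] / n >= MIN_SHARE}
    best_rate = max(eligible.values(), default=0.0)

    # Impact ratio: each category's selection rate relative to the highest rate.
    impact_ratios = {c: r / best_rate for c, r in eligible.items()} if best_rate else {}
    return selection_rates, impact_ratios, unknown_count


rates, ratios, unknown = bias_audit(records)
print("Selection rates:", rates)
print("Impact ratios:", ratios)
print("Individuals in unknown category:", unknown)
```

An impact ratio well below 1.0 for any category is exactly the kind of disparity the audit is intended to surface; how to respond to such a finding is a separate legal and policy question.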

The Future of AI in the Staffing Industry

The NYC Bias Audit Law’s implementation is a significant step towards addressing biases in AI-driven hiring processes in the staffing industry. However, HR leaders should look beyond regulatory compliance and proactively evaluate their AI systems for potential biases. Responsible AI requires ongoing scrutiny and collaboration with vendors to ensure fairness and transparency. While the law is a positive development, it should be seen as a starting point, encouraging HR leaders to continuously improve and promote equitable staffing practices in AI use.

Here are some steps that HR leaders can take to ensure that their AI systems are used in a responsible way:

  • Conduct regular bias audits (a minimal recurring check is sketched below).
  • Collaborate with vendors to ensure fairness and transparency.
  • Continuously improve their staffing practices.

By taking these steps, HR leaders can ensure that AI is used in a way that benefits all job seekers, regardless of their background or characteristics.
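As one concrete way to act on the first of these steps, a recurring check can compare each category’s impact ratio against a review threshold. The four-fifths (80%) rule of thumb used below is a common adverse-impact heuristic from US employment practice, not a threshold set by Local Law 144, and the category names and ratios shown are hypothetical.

```python
REVIEW_THRESHOLD = 0.80  # four-fifths rule of thumb; set according to your own review policy


def flag_for_review(impact_ratios, threshold=REVIEW_THRESHOLD):
    """Return the categories whose impact ratio falls below the review threshold."""
    return {category: ratio for category, ratio in impact_ratios.items() if ratio < threshold}


# Hypothetical impact ratios, e.g. as produced by the audit sketch earlier in this article.
latest_ratios = {"Male": 1.00, "Female": 0.89, "Hispanic or Latino": 0.71}

flagged = flag_for_review(latest_ratios)
print("Categories to review with the vendor:", flagged or "none this cycle")
```

Flagged categories are a prompt for investigation with the vendor, not a legal determination on their own.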

Conclusion

AI is revolutionizing staffing processes, but it also holds the risk of embedding bias into these systems. Local Law 144 marks a significant step towards addressing these concerns. However, this law should not be seen as the end goal but rather as a catalyst inspiring a broader move towards responsible AI in HR.

Beyond legal compliance, the HR industry must collectively scrutinize its AI systems. Advocating for and promoting responsible AI is what will truly shape the staffing industry’s future.

 

Ayesha Gulley

Ayesha Gulley is a public policy associate with Holistic AI.
