Brent Shopp, Operations Manager, Integrity Resource Management, a Sanford-Rose Company
Whether you’re ready for it or not, AI is here, and it’s not going away. In recruiting we’ve seen faux AI for years and figured out how to make it work for us. The newer tools are beginning to show the true power of AI and how it can make our lives better. And, if we’re not careful, it could make our lives very much worse. Bias, regulation, ethics, and fairness are just a few of the many issues that must be navigated successfully for AI to work for you. Here are some high-level risks, mitigations, and best practices.
Legal Risks
- Discriminatory Practices
AI technology can be trained to select candidates based on characteristics such as race, gender, or age, which is illegal in many countries and jurisdictions. Companies must ensure that the AI is not making decisions based on protected classes in order to remain compliant with labor laws.
- Privacy Breaches
Companies must ensure that any data collected on potential workers is gathered legally and ethically, and that candidates’ private information is protected from misuse or unauthorized access. Companies should have a detailed privacy policy in place covering all aspects of data collection and usage during the hiring process.
- Unresolved Disputes
As with any hiring process, disputes may arise between employers and candidates, and they can lead to litigation if not handled promptly and properly by both parties. Companies should have a dispute resolution protocol in place before introducing AI technology into their hiring systems so they are prepared for any conflicts that arise down the line.
Mitigating Risk
- Ensure Transparency
To reduce legal risk when using AI technology in hiring, companies should ensure transparency at every step of the process, from job postings through interview feedback. That way, employers can readily explain why one candidate was chosen over another if the question comes up later. An accusation of bias is difficult to come back from, and a lack of communication about how an automated system reached its decisions will not be acceptable. Before applying any automated system, employers should also be upfront with applicants about what candidate information they are collecting (e.g., resumes and profiles) and how it will be used in decision-making; in other words, create an open dialogue between employers and applicants.
- Reduce Bias Through Algorithms
To further reduce the legal risk of discrimination complaints, algorithms used in AI-driven recruitment should be designed to make selection decisions fairly across different groups of candidates, regardless of protected characteristics such as gender, race, or age. That means building fairness criteria into the algorithms themselves and carefully monitoring their output at each stage, right up until a final decision is made about who receives the job offer. In addition, an independent review board with representatives from diverse backgrounds can provide the oversight and due diligence needed to catch legal issues that might otherwise be missed for lack of technical knowledge about how these systems work under the hood. Human input is still required, even when the machine is doing more of the heavy lifting.
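The output monitoring described above can start very simply. One common screening test is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if any group’s selection rate falls below 80% of the highest group’s rate, the process deserves a closer look. A minimal sketch (the group labels and counts below are hypothetical):

```python
def selection_rates(outcomes):
    """Selection rate per group; outcomes maps group -> (selected, screened)."""
    return {group: hired / screened for group, (hired, screened) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 is a common red flag
    that the selection process warrants review."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical monitoring snapshot: (candidates advanced, candidates screened)
snapshot = {
    "group_a": (30, 100),  # 30% selection rate
    "group_b": (20, 100),  # 20% selection rate
}
ratios = adverse_impact_ratios(snapshot)
flagged = sorted(g for g, r in ratios.items() if r < 0.8)
```

A flagged ratio isn’t proof of illegal discrimination, but it tells the review board exactly where to dig in.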
Best Practices for Using AI Technology in Hiring Processes
- Design Algorithms Carefully & Monitor Them Closely
When designing algorithms for recruitment, it’s important to make sure the end result doesn’t favor one group over another. That means monitoring outputs closely and testing results on a frequent, repeatable basis. Employers should also hold regular reviews with team members from diverse backgrounds and skill sets to make sure no biases are unintentionally creeping into the final product.
- Necessary Documentation
Organizations must document all processes and components of their AI program. That documentation supports compliance with regulations that protect individual rights and privacy, and that require employment opportunities to be offered on an equal basis regardless of race, creed, national origin, age, and other protected characteristics.
- Training
Everyone responsible for carrying out AI-related activities must be trained and educated to ensure the technology adheres to all regulatory, privacy, and compliance requirements. Real emphasis must be placed on training staff to use these tools properly.
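The frequent, repeatable monitoring called for under “Design Algorithms Carefully” can be as simple as comparing each scheduled audit against a recorded baseline and routing any drift to the review team. A sketch under stated assumptions: the group labels, rates, and 5-point tolerance below are all hypothetical.

```python
def rate_drift(baseline, current):
    """Change in each group's selection rate since the baseline audit."""
    return {g: current[g] - baseline[g] for g in baseline if g in current}

def flag_drift(baseline, current, tolerance=0.05):
    """Groups whose selection rate moved more than `tolerance` since the
    last audit; the 5-point tolerance is a hypothetical review threshold."""
    return sorted(g for g, d in rate_drift(baseline, current).items()
                  if abs(d) > tolerance)

# Hypothetical audits, run on a fixed schedule and retained for compliance
baseline = {"group_a": 0.30, "group_b": 0.28}
current = {"group_a": 0.31, "group_b": 0.19}
review_queue = flag_drift(baseline, current)
```

Keeping every audit on record also produces exactly the kind of documentation trail the compliance section above calls for.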
How are you using AI in your sourcing and recruiting processes? Drop me a note and let’s discuss. Would love to learn more and share what we’re seeing also.
brent.shopp@integrityresourcemanagement.com