As AI continues to be embedded in recruitment systems across Australian organisations, many employment lawyers and experts are raising alarm bells about ethical issues that could lead to contested workplace disputes.
HR Leader recently spoke to Shae McCartney, workplace relations, employment and safety partner at Clayton Utz, about the issues that may arise from AI usage in recruitment practices.
“AI has the potential to significantly increase efficiencies and, in turn, decrease costs for organisations. But without appropriate risk management, unintended operational impacts, legal costs or even loss and liability may take away from the benefits employers initially sought out,” McCartney said.
“While legislative reforms are being considered for the use of AI in human resources and recruitment, it’s important to understand the implications of these AI systems. There are a number of legal risks that organisations and their HR teams need to carefully and proactively manage.”
One area McCartney touched on specifically was recruitment, which has been a key function that organisations have sought to refine through AI – and one from which various ethical issues have arisen.
“Discrimination is a risk HR teams need to be aware of. AI systems rely on a significant amount of data to operate. If there’s a bias embedded in the AI system’s dataset when the model is trained, this bias could expand as the AI learns, exposing employers to legal challenges if a candidate in a recruitment process or an employee is found to be discriminated against,” she said.
“A recent example of this is a technology company that trained an AI model to screen potential candidates. Due to an embedded bias in the dataset, the AI began to prefer male candidates and those without carer responsibilities, opening the door to discrimination claims.
“Under Australian law, a candidate or employee can claim discrimination at any stage in their recruitment or employment journey from application to termination. If discrimination is found, employers may be liable to pay compensation, ordered to take corrective actions, as well as potentially facing reputational damage.”
McCartney spoke about how recruitment bots, especially, can be a catalyst for legal repercussions – reinforcing the need for HR teams and employers to carefully manage their AI implementation.
“There are other risks that are not as obvious, as recruitment bots appear to have become an almost short form of psychometric testing and give ‘insights’ on candidates. There was much criticism levelled at psychometric testing, and the use to which it should be put, when it was first introduced as a recruitment tool,” McCartney said.
“The constraints that were developed around science-based findings, support around feedback, and transparency around its limitations do not seem to have been applied to this new AI-generated approach.
“There is a risk that without careful management, the recruitment bot may cause unintended psychological harm to the candidate, for which the organisation could be liable. I expect that the use of AI in human resource and recruitment is going to become a significant industrial issue for unions in future enterprise negotiations and the cause of many industrial disputes.”
Kace O'Neill
Kace O'Neill is a Graduate Journalist for HR Leader. Kace studied Media Communications and Māori Studies at the University of Otago and has a passion for sports and storytelling.