HR’s role in addressing concerns over AI surveillance in the workplace
AI monitoring tools are eroding worker privacy and, amid a lack of regulation, may already be in breach of existing employment and human rights legislation, research has found.
In their report, Working Under the Lens, Working Women Queensland (WWQ) and international law firm Wotton Kearney explored the rise of artificial intelligence (AI) surveillance in Australian workplaces and the legal and ethical implications that come with it.
Workplace monitoring has increased since the pandemic, and new AI technologies track workers with little transparency or oversight, “raising fresh concerns about bias, privacy and discrimination”, the report said. Its data revealed that in 2022, 60 per cent of employers used technology to monitor their employees, double the rate at the start of the pandemic.
Who is most at risk?
Based on its findings, AI monitoring tools are “eroding worker privacy, lack proper regulation, and may already be in breach of existing employment and human rights legislation”. It also found that the design and regulation of AI systems excluded women’s perspectives and data, which “compounds gender bias and inequality at work”.
Additionally, its data revealed that non-unionised women in the private sector were 52 per cent more likely than their unionised counterparts to face surveillance. Black workers (52 per cent) and young people aged 16–29 in low-skilled jobs (49 per cent) were also more likely to be surveilled.
Working Women Queensland director Eloise Dalton noted that women were disproportionately affected by AI technologies, which were used to “monitor, manage and even punish them”. She added that if these risks are not managed, the efficiency of the technologies will only deepen inequality.
“We need strong, modern legal protections to protect those who need it most,” she said.
“HR professionals are well placed to champion consent-based approaches to employee data collection and surveillance within their organisations.”
This approach will give workers a choice, “reduc[ing] the potential for unintended discrimination, algorithmic bias and function creep”, she said.
The need for a human rights approach
Working Women Queensland stressed that workers have the right to dignity, autonomy, and freedom from discrimination, which are at risk due to technology being used to “monitor, rank, or control them without consent or oversight”.
Basic Rights Queensland director Sam Tracy said a human rights approach is needed, in conjunction with uniform legislation, or new technologies can become a risk to workers.
The report recommended uniform national legislation, a consent-based framework, the introduction of AI-specific legislation, engagement between employees and their union representatives, the removal of the employee records exemption in the Privacy Act, a stronger culture of human rights in workplaces, the establishment of a federal AI commissioner, and mandatory enforceable agreements governing the introduction of AI in workplaces.
Further, research by global law firm Herbert Smith Freehills found that 90 per cent of employers reported using software to monitor employees working remotely, 82 per cent said they planned to use digital tools for monitoring in the future, and 43 per cent used technology such as sentiment analysis software to detect and address wellbeing issues at work.
Dalton stressed the role of HR professionals in “advocating for and embedding human rights cultures”.
“The success of any regulatory strategy for AI or workplace surveillance relies on institutional awareness and support for gender equality as a vital element of valuing the rights, freedoms and opportunities of all,” she said.
Carlos Tse
Carlos Tse is a graduate journalist writing for Accountants Daily, HR Leader, and Lawyers Weekly.