Regulation failing to ‘sufficiently address’ workplace harm from AI
New research into the risks of AI in the workplace has identified ways that the emerging technology can complement, rather than impede, worker safety and welfare.
In a new article in the Journal of Industrial Relations, “AI and workplace relations: A WHS framework for managing relational risks in workplaces”, future of work expert Associate Professor Andreas Cebulla, an affiliate of the Flinders University Factory of the Future, analysed national and international literature to explore the AI-related “relational risks” that affect workplace dynamics and employee agency.
Missing mechanisms
For Cebulla, the Australian government’s current regulatory apparatus lacks the legally binding and workplace-specific mechanisms necessary to “mitigate emerging risks”. He stressed that AI should not be eliminated from the workplace, but used to “co-produce a workplace that reflects operational accountability”.
While early assessments of AI focused on its impacts on job automation and productivity gains, Cebulla found evidence of further impacts on workplace relationships, worker autonomy and psychosocial wellbeing.
He found that data-entry automation, document processing, fraud detection, and generative AI tools were the most common use cases for AI in Australian workplaces. Across these use cases, Cebulla warned of risks including “algorithmic engagement, the erosion of tacit knowledge, digital incivility and the devaluation of human labour”.
“Current governance frameworks fail to sufficiently address these relational harms,” he said.
Bridging the gap
To address these gaps, Cebulla said there must be a shift in the way AI is conceptualised – “not just as a technical tool or economic input, but as a social actor with the power to shape working relationships, identities and hierarchies”.
In his research, Cebulla proposed a framework to manage these risks, grounding it in job crafting, participatory oversight and expanded WHS definitions. He said this framework could allow workers to be co-designers of workplace transformation rather than passive recipients of AI impacts.
In addition, it could build on existing industrial relations infrastructure, including union representation and safety committees, he added. Cebulla said the framework would address effects that are “deeply social, often subtle, and frequently overlooked in both policy design and organisational strategy”.
Reconfiguration with AI
Cebulla also found that while AI tools are optimised for organisational goals (efficiency, compliance), job crafting optimises for worker values (dignity, purpose, agency).
He stressed that when job crafting is legitimised and supported, it enables workers to “transform potential threats into sources of meaning and resilience”.
“AI tools do not only automate, but also reconfigure,” Cebulla said, changing how decisions are made, “who holds authority, how performance is interpreted, as well as what type of labour is considered legitimate”.
He concluded: “As such, they must be governed not only through audits and algorithms but through social institutions, norms and participatory mechanisms that foreground the human experience of work.”
Carlos Tse
Carlos Tse is a graduate journalist writing for Accountants Daily, HR Leader and Lawyers Weekly.