
A quarter of Aussies believe that the use of AI at work is cheating

By Kace O'Neill | 5 minute read

As the saying goes, “if you aren’t cheating, you aren’t trying,” but in this case, it’s the opposite. A quarter of Aussie workers disapprove of their colleagues using artificial intelligence (AI) as a tool to assist them with their work tasks.


New research released by Veritas Technologies shows that confusion about the use of generative AI tools in the workplace is creating division between colleagues over how they are used, while also increasing the risk of exposing sensitive information.

More than two-thirds (68 per cent) of Australian office workers acknowledged using generative AI tools, such as ChatGPT, to assist them with their tasks. This included some rather risky behaviour, like inputting customer details, employee information, and company financials.

However, one in five workers admitted to not using the tools at all, and added that they believe colleagues who do use tools like ChatGPT should be reprimanded with either a dock in pay or another penalty. A total of 21 per cent of respondents strongly believed this; however, only 32 per cent of employers currently provide any mandatory AI usage policies to their employees.

Pete Murray, managing director of ANZ at Veritas, said: “When employers don’t provide guidance on how to use generative AI appropriately at work, or even whether it should be used at all, it can create a ‘Wild West’ of AI cowboys – where some employees are using generative AI in risky ways, some hesitate to use it at all, and others resent their colleagues for doing so.”

“It’s not an ideal situation for anyone, especially employers, who could face regulatory compliance penalties or miss out on ways to increase efficiency for their people. To resolve this, employers should be proactively issuing effective generative AI guidelines and policies to set expectations and boundaries on what’s acceptable and what isn’t.”

This workplace division could be mitigated if employers implemented clear policies around the use of generative AI tools. Even without clear guidelines, though, employees are becoming aware of the potential detriments of their use.

When asked about the risks of using generative AI tools in the workplace, 47 per cent said they could leak sensitive information, 44 per cent said they could generate incorrect or inaccurate information, and 43 per cent cited the associated compliance risks.

Despite the known risks, roughly a quarter (24 per cent) of office workers still admitted to inputting potentially sensitive information like customer details, employee information, and company financials into generative AI tools.

More than 76 per cent of Australian employees said they want AI guidelines, policies or some form of training. It’s important that employers implement these at a higher rate to ensure that customer details, employee information, and company financials are protected. Doing so will also help end the workplace division that is occurring and lift team morale.

Kace O'Neill

Kace O'Neill is a Graduate Journalist for HR Leader. Kace studied Media Communications and Maori studies at the University of Otago and has a passion for sports and storytelling.