
AI and cyber security: Preparing for a ‘post-trust future’

By Nick Wilson | 6 minute read

The rise of generative AI is putting unprecedented power in the hands of cyber criminals. As chief financial officers increase their security budgets, some are speculating the tech will outstrip business preparedness.

The costs of cyber crime

Cyber crime is on the rise in both cost and frequency. As noted in Eftsure’s recent Cybersecurity Guide 2024, the business costs are growing at an average rate of 15 per cent each year and are estimated to reach US$10.5 trillion per year by 2025. Australian businesses reported a 73 per cent increase in scams last year, losing an estimated AU$224 million to payment redirection schemes alone.
The picture is likely even bleaker: cyber crime is often underreported, and the true losses are often ongoing and therefore underestimated. While cyber security budgets among Australian businesses are growing, cyber criminals are forever devising novel ways to exploit digital weaknesses.

“Cyber criminals constantly evolve their operations against Australian organisations and are fuelled by a global industry of access brokers and extortionists,” said Australian Signals Directorate director-general Rachel Noble.

The new frontier of cyber crime is generative AI. The threat landscape was captured in the opening to the Eftsure report, which read: “In last year’s edition, we asked this question: despite ballooning security budgets and closer scrutiny, why are so many business communities still seeing unprecedented losses to scams and cyber crime?”

“This year, the proliferation of generative artificial intelligence (AI) has created perhaps an even more urgent question: if businesses were already facing unprecedented cyber losses, how will they fare now that AI is equipping cyber criminals with powerful capabilities and rearranging notions of evidence and truth?”

Generative AI: The new threat on the block

OpenAI’s ChatGPT enjoyed the fastest adoption of any consumer application in history, with 100 million users in two months. This was just one of many generative AI tools to grace the world stage last year, and there will be plenty more to come.

“Some leaders have heralded the fast evolution and widespread accessibility as a transformative technological revolution, while others have warned of unpredictable, irrevocable changes to all of society and even the fate of humanity,” said Eftsure.

The potential threats posed by generative AI were on everyone’s mind when, last year, researchers at OpenAI wrote a letter to the company’s board flagging a discovery that could “threaten humanity”. While the warning was vague, it certainly added fuel to existing fears about the technology’s risks from people in the know, such as Elon Musk. Those fears largely concern AI’s potential to develop into an uncontrollable, autonomous entity beyond its creators’ reach – but others despair over its potential deployment by cyber criminals.

“Unlike governments and businesses that are bound by regulations, privacy rules, or security restrictions, threat actors are unfettered in their use of both mainstream and black-market AI tools,” explained Eftsure.

Indeed, the report pointed to the following ways that generative AI might be harnessed and deployed by cyber criminals:

1. Chatbots and large language models
2. Synthetic media and impersonation
3. The unknown

On the third point, Aza Raskin, co-founder of the Center for Humane Technology, wrote: “What [AI researchers] tell us is that there is no way to know. We do not have the technology to know what else is in these models.”

“Teach an AI to fish and it’ll teach itself biology, chemistry, oceanography, evolutionary theory, and then fish all the fish to extinction.”

In other words, the use cases for the technology in cyber crime are, by definition, unknowable – they will emerge as the technology tests existing methods and improves upon them.

The Eftsure report warned organisations of a threefold generative cyber crime risk:

1. Many organisations are not fully prepared for even basic scams or cyber attacks.
2. Generative AI capabilities are increasing the efficiency, reach, and scale of existing scams and cyber attacks.
3. Most of today’s controls and anti-fraud measures, no matter how robust, were not designed with AI-powered scams or synthetic media in mind.

“To stay one step ahead of [cyber criminals], CFOs need to think creatively, stay informed, and implement technology-driven processes,” concluded Eftsure.

Nick Wilson

Nick Wilson is a journalist with HR Leader. With a background in environmental law and communications consultancy, Nick has a passion for language and fact-driven storytelling.