
‘Unacceptable delay’: Academics respond to the federal government’s AI strategy

By Nick Wilson | 5 minute read

Six months after the consultation period concluded, the Australian government has shed light on its approach to AI regulation. Is it too little, too late?

After a lengthy consultation process, the Australian government has announced its response to the growing threats posed by artificial intelligence (AI) technologies. The approach hinted at by the government is one of proportionality, said RMIT Professor Lisa Given.

“The Australian government appears to be taking a proportional approach to potential risks of generative AI by focusing, at least initially, on application of AI technologies to high-risk settings (such as healthcare, employment, and law enforcement),” said Professor Given.


While the government plans to align its policy with international standards, the strategy is, on its face, quite distinctive.

“This approach may be quite different to what other countries are considering; for example, the European Union is planning to ban AI tools that pose ‘unacceptable risk,’ while the United States has issued an executive order to introduce wide-ranging controls, such as requirements for transparency in the use of AI generally,” added Professor Given.

What, exactly, did the government announce, and what are the experts saying?

A proportional approach

Last year, the government published its Safe and Responsible AI Discussion Paper, which received 510 submissions, most of which are public. According to the government, the consultation “made clear” that AI is a double-edged sword. It “has immense potential to improve wellbeing and grow our economy”, but “Australians want stronger protections in place to help manage the risks”.

Towards that aim, the government is now considering “mandatory guardrails” for AI development and employment in “high-risk settings”. This could involve creating new laws or amending existing ones. Though little is known about the guardrails, and further consultation is anticipated to help design them, some immediate actions are being taken, including:

1. Working with industry to develop a voluntary AI safety standard.
2. Working with industry to develop options for voluntary labelling and watermarking of AI-generated materials.
3. Establishing an expert advisory group to support the development of options for mandatory guardrails.

The government has said it will consider designing guardrails around the testing of AI products, the transparency of AI systems, and the accountability of those who develop, deploy, and/or rely on AI.


In response to the announcement, several RMIT academics shared their thoughts. The consensus, it seems, is that the approach rightly appreciates the need to regulate AI technologies. Given their value, finding ways to live with the technology is not optional. That said, some took issue with the way the government went about it.

Dr Nataliya Ilyushina, a research fellow at RMIT, for example, said the announcement was too little, too late: “The consultation process for responsible AI regulation concluded six months ago.”

“Australia’s unacceptable delay in developing AI regulation represents both a missed chance for its domestic market and a lapse in establishing a reputation as an AI-friendly economy with a robust legal, institutional, and technological infrastructure globally.”

Dr Ilyushina also warned that over-regulation might push businesses to relocate to less regulated shores, potentially causing job losses greater than those expected from AI itself.

A proportionate response, one that targets high-risk areas of AI deployment, is “an important place to start”, according to Professor Given.

“The creation of an advisory body to define the concept of ‘high-risk technologies’ and to advise government on where (and what kinds of) regulations may be needed is very welcome. It will complement other initiatives that the Australian government has taken recently to manage the risks of AI.”

“Ultimately, AI is a tool like any other, and needs principles-based legislation to ensure that it is beneficial for all of Australian society, not just those who benefit most from productivity gains, or those who own the technologies,” explained Dr Dana McKay, senior lecturer at RMIT.

Nick Wilson

Nick Wilson is a journalist with HR Leader. With a background in environmental law and communications consultancy, Nick has a passion for language and fact-driven storytelling.