
Why Parliament needs to trust AI more than we do

By Nick Wilson | 6 minute read

Research shows that Australians are distrustful when it comes to artificial intelligence (AI). As the federal government looks to regulate the tech, it’s important we don’t stifle its potential.

Warranted or not, Australians have trust issues with AI technologies. In a Workday survey spanning 15 countries, Australians came out as the least likely to trust AI (at 60 per cent). Our caution with new tech may have something to do with our being late to the table.

“Some of these tools are further along in, for example, the US, Canada, and Europe. We’re a little slower in terms of uptake, even in terms of workforce adoption,” explained RMIT Professor Lisa Given.


“We’re quite far behind in terms of investment in research and innovation compared with other countries, and that’s definitely going to have an impact not just on the creation of new tools but also in understanding how they will be adopted in practice.”

It’s important not to mistake apprehension for ignorance, though: the vast majority of Australian business leaders are eyeing the tech as a potential productivity boon.

The high rates of distrust might be reflective of a divergence in attitudes between employers and employees, with the latter being less optimistic about the tech.

Give us today our daily AI

Despite the caution, AI is already a feature of Australian life in so many ways. We interact with it daily. Dr Given pointed out that “AI” and “ChatGPT” are nearly synonymous for many of us who have only knowingly engaged with the technology through that platform. Yet AI is already embedded in many crucial, if less exciting, business functions. If you have done any of the following today, you’ve engaged with AI:

  1. Used Face ID to unlock a device.
  2. Scrolled a social media feed.
  3. Received an email or used antivirus software.
  4. Searched on Google.
  5. Spoken to Siri or an alternative digital voice assistant.

The same story is unfolding within businesses, where AI, particularly in its generative form, is no longer the sole province of IT departments. So, while we have our doubts about AI, it’s already part of our daily lives.

That said, our concerns are not without basis; the risks of AI are real and compound with every new breakthrough. Generative AI, for instance, has the potential to produce misinformation on an unprecedented scale.

Despite the guise of neutrality, generative AI can perpetuate and even deepen existing biases by generating materials based on bad data. Transparency, said Dr Given, will be central to tackling misinformation. This was among the priority areas identified in Australia’s recent interim response to its Safe and Responsible AI consultation. We asked Dr Given what she made of the government’s plans.

Proportional regulation

The approach proposed by the government is a “proportional” one in the sense that regulation will be prioritised in those deployment areas where the risks are the highest, explained Dr Given.

While this sounds good on paper, risk identification is not as simple as pointing to a potential use for the technology and giving it a number out of 10 that denotes some level of risk. The process will be a lot more complicated.

Drawing the line on which kinds of risk are acceptable will be a challenging process. Dr Given raised the example of self-driving cars: some will crash, but that is already happening with human-driven cars. What is an acceptable threshold?

The government plans to appoint an advisory group to aid in the development of these mandatory guardrails. Determining how to classify and calculate risk levels in this way will be their “great challenge”, said Dr Given.

While Australians might be cautious when it comes to AI, it’s important our regulation is not so cautious that it stifles innovation. This challenge was touched upon by Dr Nataliya Ilyushina, a research fellow at the Blockchain Innovation Hub.

“Over-regulation of AI might incentivise businesses to relocate their operations overseas, potentially causing greater job losses than the implementation of AI itself,” said Dr Ilyushina.

“[At the same time], not having enough regulation can lead to market failure, where cyber crime and other risks (ones that stifle business growth, lead to high costs and even harm individuals) are high.”

Though the government has so far offered only an interim response to its AI regulation consultation, the fact that Australia, unlike certain other countries, has yet to ban any specific technologies or applications of them is one indication that innovation remains front of mind.

Nick Wilson

Nick Wilson is a journalist with HR Leader. With a background in environmental law and communications consultancy, Nick has a passion for language and fact-driven storytelling.